
Three Classical Tests: Wald, LM (Score), and LR Tests




Econ 620: Three Classical Tests; Wald, LM (Score), and LR Tests

Suppose that we have the density $f(y;\theta)$ of a model with a null hypothesis of the form $H_0: g(\theta_0)=0$. Let $L(\theta)$ be the log-likelihood function of the model and $\hat{\theta}_n$ be the MLE of $\theta$. The Wald test is based on the very intuitive idea that we are willing to accept the null hypothesis when $g(\hat{\theta}_n)$ is close to zero; the distance between $g(\hat{\theta}_n)$ and $0$ is the basis for constructing the test statistic. On the other hand, consider the following constrained maximization problem,
$$\max_\theta L(\theta) \quad \text{s.t.} \quad g(\theta)=0.$$
If the constraint is not binding (the null hypothesis is true), the Lagrange multiplier associated with the constraint is zero. We can therefore construct a test measuring how far the Lagrange multiplier is from zero: this is the LM test.

Finally, another way to check the validity of the null hypothesis is to check the distance between two values of the maximized log-likelihood function,
$$L(\hat{\theta}_n) - L(\tilde{\theta}_n) = \log\frac{f(y;\hat{\theta}_n)}{f(y;\tilde{\theta}_n)},$$
where $\tilde{\theta}_n$ is the constrained MLE. If the null hypothesis is true, this statistic should not be far from zero: this is the LR test.

Distributions of the Three Tests

Assume that the observed variables can be partitioned into the endogenous variables $Y$ and the exogenous variables $X$. To simplify the presentation, we assume that the observations $(Y_i, X_i)$ are i.i.d., and we can obtain the conditional distribution of the endogenous variables given the exogenous variables as $f(y_i|x_i;\theta)$, with the conditional density known up to the unknown parameter vector $\theta$. By assumption, we can write down the log-likelihood function of $n$ observations of $(Y_i,X_i)$ as
$$L(\theta) = \sum_{i=1}^n \log f(y_i|x_i;\theta).$$
We assume all the regularity conditions for existence, consistency and asymptotic normality of the MLE, and denote the MLE as $\hat{\theta}_n$. The hypotheses of interest are given as
$$H_0: g(\theta_0)=0 \qquad H_A: g(\theta_0)\neq 0,$$
where $g(\cdot): \mathbb{R}^p \to \mathbb{R}^r$ and the rank of $\partial g(\theta_0)/\partial\theta'$ is $r$.

Wald test

Proposition 1.
$$W_n = n\, g(\hat{\theta}_n)'\left[\frac{\partial g(\hat{\theta}_n)}{\partial\theta'}\, I^{-1}(\hat{\theta}_n)\, \frac{\partial g(\hat{\theta}_n)'}{\partial\theta}\right]^{-1} g(\hat{\theta}_n) \xrightarrow{d} \chi^2(r),$$
where $I(\theta) = -E\left[\partial^2 \log f(Y|X;\theta)/\partial\theta\,\partial\theta'\right]$ and $I^{-1}(\hat{\theta}_n)$ is the inverse of $I$ evaluated at $\theta = \hat{\theta}_n$.
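To ground this setup, here is a minimal numerical sketch. It assumes a hypothetical unconditional exponential density $f(y;\theta)=\theta e^{-\theta y}$ (simpler than the conditional density in the text, and not part of the original note): the log-likelihood is maximized numerically and checked against the closed-form MLE $\hat{\theta}_n = 1/\bar{y}$.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
y = rng.exponential(scale=0.5, size=400)  # simulated data, true rate theta = 2

def negloglik(theta):
    # L(theta) = sum_i log f(y_i; theta) for f(y; theta) = theta * exp(-theta * y)
    return -(y.size * np.log(theta) - theta * y.sum())

res = minimize_scalar(negloglik, bounds=(1e-6, 50.0), method="bounded")
theta_hat = res.x                         # numerical MLE
# for this model the MLE has the closed form 1 / ybar
```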

Proof. From the asymptotic properties of the MLE, we know that
$$\sqrt{n}\,(\hat{\theta}_n - \theta_0) \xrightarrow{d} N\left(0,\, I^{-1}(\theta_0)\right). \quad (1)$$
Taking a first order Taylor series expansion of $g(\hat{\theta}_n)$ around the true value $\theta_0$, we have
$$g(\hat{\theta}_n) = g(\theta_0) + \frac{\partial g(\theta_0)}{\partial\theta'}(\hat{\theta}_n - \theta_0) + o_p(1),$$
so that
$$\sqrt{n}\left[g(\hat{\theta}_n) - g(\theta_0)\right] = \frac{\partial g(\theta_0)}{\partial\theta'}\sqrt{n}\,(\hat{\theta}_n - \theta_0) + o_p(1). \quad (2)$$
Hence, combining (1) and (2) gives
$$\sqrt{n}\left[g(\hat{\theta}_n) - g(\theta_0)\right] \xrightarrow{d} N\left(0,\, \frac{\partial g(\theta_0)}{\partial\theta'}\, I^{-1}(\theta_0)\, \frac{\partial g(\theta_0)'}{\partial\theta}\right). \quad (3)$$
Under the null hypothesis, we have $g(\theta_0)=0$, so
$$\sqrt{n}\, g(\hat{\theta}_n) \xrightarrow{d} N\left(0,\, \frac{\partial g(\theta_0)}{\partial\theta'}\, I^{-1}(\theta_0)\, \frac{\partial g(\theta_0)'}{\partial\theta}\right). \quad (4)$$
By forming the quadratic form of the normal random variables, we can conclude that
$$n\, g(\hat{\theta}_n)'\left[\frac{\partial g(\theta_0)}{\partial\theta'}\, I^{-1}(\theta_0)\, \frac{\partial g(\theta_0)'}{\partial\theta}\right]^{-1} g(\hat{\theta}_n) \xrightarrow{d} \chi^2(r) \quad \text{under } H_0. \quad (5)$$
The statistic in (5) is not usable as it stands since it depends on the unknown parameter $\theta_0$. However, we can consistently approximate the terms in the inverted bracket by evaluating them at the MLE $\hat{\theta}_n$, giving
$$W_n = n\, g(\hat{\theta}_n)'\left[\frac{\partial g(\hat{\theta}_n)}{\partial\theta'}\, I^{-1}(\hat{\theta}_n)\, \frac{\partial g(\hat{\theta}_n)'}{\partial\theta}\right]^{-1} g(\hat{\theta}_n) \xrightarrow{d} \chi^2(r) \quad \text{under } H_0.$$
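A minimal sketch of the feasible Wald statistic, assuming a hypothetical exponential model $f(y;\theta)=\theta e^{-\theta y}$ (so $I(\theta)=1/\theta^2$) with the scalar restriction $g(\theta)=\theta-\theta_0$, for which $\partial g/\partial\theta = 1$; the model and numbers are illustrative, not from the note.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
theta0 = 2.0                                      # hypothesized value under H0: g(theta) = theta - theta0 = 0
y = rng.exponential(scale=1 / theta0, size=500)   # data generated under H0

n = y.size
theta_hat = 1 / y.mean()                          # unconstrained MLE
g_hat = theta_hat - theta0                        # g evaluated at the MLE
I_hat = 1 / theta_hat**2                          # Fisher information I(theta) = 1/theta^2 at the MLE

W = n * g_hat**2 * I_hat                          # W_n = n g' [G I^{-1} G']^{-1} g with G = 1
p_value = stats.chi2.sf(W, df=1)                  # compare with chi^2(1)
```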

An asymptotic test which rejects the null hypothesis with probability one when the alternative hypothesis is true is called a consistent test; that is, a consistent test has asymptotic power of 1. The Wald test we discussed above is a consistent test. A heuristic argument is that if the alternative hypothesis is true instead of the null hypothesis, $g(\hat{\theta}_n) \xrightarrow{p} g(\theta_0) \neq 0$, so
$$g(\hat{\theta}_n)'\left[\frac{\partial g(\hat{\theta}_n)}{\partial\theta'}\, I^{-1}(\hat{\theta}_n)\, \frac{\partial g(\hat{\theta}_n)'}{\partial\theta}\right]^{-1} g(\hat{\theta}_n)$$
converges to a positive constant instead of zero. Multiplying this constant by $n$, $W_n \to \infty$ as $n \to \infty$, which implies that we always reject the null hypothesis when the alternative is true. Another form of the Wald test statistic is given by (caution: this form is easily confused with the previous one)
$$W_n = g(\hat{\theta}_n)'\left[\frac{\partial g(\hat{\theta}_n)}{\partial\theta'}\, I_n^{-1}(\hat{\theta}_n)\, \frac{\partial g(\hat{\theta}_n)'}{\partial\theta}\right]^{-1} g(\hat{\theta}_n) \xrightarrow{d} \chi^2(r),$$
where $I_n(\theta) = -E\left[\frac{\partial^2 L(\theta)}{\partial\theta\,\partial\theta'}\right] = -E\left[\sum_{i=1}^n \frac{\partial^2 \log f(y_i|x_i;\theta)}{\partial\theta\,\partial\theta'}\right]$ and $I_n^{-1}(\hat{\theta}_n)$ is the inverse of $I_n$ evaluated at $\theta = \hat{\theta}_n$. Note that $I_n = nI$.
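The consistency argument can be seen numerically in the same hypothetical exponential model, with data now generated under the alternative (true rate 3 while $H_0$ says 2): $W_n$ grows roughly linearly in $n$, so it exceeds any fixed critical value eventually.

```python
import numpy as np

rng = np.random.default_rng(0)
theta0 = 2.0        # null value
theta_true = 3.0    # the alternative is true

def wald(n):
    # Wald statistic for H0: theta = theta0 in the exponential model, I(theta) = 1/theta^2
    y = rng.exponential(scale=1 / theta_true, size=n)
    theta_hat = 1 / y.mean()
    return n * (theta_hat - theta0) ** 2 / theta_hat**2

W_small, W_large = wald(200), wald(20000)
# under the alternative, W_n grows roughly like n * (theta_true - theta0)^2 / theta_true^2
```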

A quite common form of the null hypothesis is a zero restriction on a subset of parameters, i.e.,
$$H_0: \theta_1 = 0 \qquad H_A: \theta_1 \neq 0,$$
where $\theta_1$ is a $(q\times 1)$ subvector of $\theta$ with $q < p$. Then the Wald statistic is given by
$$W_n = n\, \hat{\theta}_1'\left[I^{11}(\hat{\theta}_n)\right]^{-1}\hat{\theta}_1 \xrightarrow{d} \chi^2(q),$$
where $I^{11}(\theta)$ is the upper left block of the inverse information matrix. Partitioning
$$I(\theta) = \begin{pmatrix} I_{11}(\theta) & I_{12}(\theta) \\ I_{21}(\theta) & I_{22}(\theta) \end{pmatrix},$$
we have $I^{11}(\theta) = \left[I_{11}(\theta) - I_{12}(\theta)\, I_{22}^{-1}(\theta)\, I_{21}(\theta)\right]^{-1}$ by the formula for the partitioned inverse, and $I^{11}(\hat{\theta}_n)$ is $I^{11}(\theta)$ evaluated at $\hat{\theta}_n$.

LM test (score test)

If we have a priori reason or evidence to believe that the parameter vector satisfies some restrictions of the form $g(\theta)=0$, incorporating this information into the maximization of the likelihood function through constrained optimization will improve the efficiency of the estimator compared to the MLE from unconstrained maximization.
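The partitioned inverse formula used above for $I^{11}(\theta)$ is easy to verify directly; in this sketch a generic symmetric positive definite matrix stands in for the information matrix (purely illustrative values).

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
I_full = A @ A.T + 5 * np.eye(5)   # a symmetric positive definite "information matrix"

q = 2                              # dimension of the restricted subvector theta_1
I11, I12 = I_full[:q, :q], I_full[:q, q:]
I21, I22 = I_full[q:, :q], I_full[q:, q:]

# Upper-left block of the inverse, computed two ways:
direct = np.linalg.inv(I_full)[:q, :q]                          # invert, then take the block
schur = np.linalg.inv(I11 - I12 @ np.linalg.inv(I22) @ I21)     # partitioned-inverse formula
# the two agree up to floating point error
```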

We solve the following problem:
$$\max_\theta L(\theta) \quad \text{s.t.} \quad g(\theta)=0.$$
The first order conditions are given by
$$\frac{\partial L(\tilde{\theta}_n)}{\partial\theta} + \frac{\partial g(\tilde{\theta}_n)'}{\partial\theta}\tilde{\lambda} = 0 \quad (6)$$
$$g(\tilde{\theta}_n) = 0, \quad (7)$$
where $\tilde{\theta}_n$, the solution of the constrained maximization problem, is called the constrained MLE, and $\tilde{\lambda}$ is the vector of Lagrange multipliers. The LM test is based on the idea that a properly scaled $\tilde{\lambda}$ has an asymptotic $\chi^2$ distribution.

Proposition 2.
$$S_n = \frac{1}{n}\frac{\partial L(\tilde{\theta}_n)}{\partial\theta'}\, I^{-1}(\tilde{\theta}_n)\, \frac{\partial L(\tilde{\theta}_n)}{\partial\theta} = \frac{1}{n}\tilde{\lambda}'\frac{\partial g(\tilde{\theta}_n)}{\partial\theta'}\, I^{-1}(\tilde{\theta}_n)\, \frac{\partial g(\tilde{\theta}_n)'}{\partial\theta}\tilde{\lambda} \xrightarrow{d} \chi^2(r) \quad \text{under } H_0.$$

Proof. First order Taylor expansions of $g(\hat{\theta}_n)$ and $g(\tilde{\theta}_n)$ around $\theta_0$ give, ignoring $o_p(1)$ terms,
$$\sqrt{n}\, g(\hat{\theta}_n) = \sqrt{n}\, g(\theta_0) + \frac{\partial g(\theta_0)}{\partial\theta'}\sqrt{n}\,(\hat{\theta}_n - \theta_0) \quad (8)$$
$$\sqrt{n}\, g(\tilde{\theta}_n) = \sqrt{n}\, g(\theta_0) + \frac{\partial g(\theta_0)}{\partial\theta'}\sqrt{n}\,(\tilde{\theta}_n - \theta_0). \quad (9)$$
Noting that $g(\tilde{\theta}_n)=0$ from (7) and subtracting (9) from (8), we have
$$\sqrt{n}\, g(\hat{\theta}_n) = \frac{\partial g(\theta_0)}{\partial\theta'}\sqrt{n}\,(\hat{\theta}_n - \tilde{\theta}_n). \quad (10)$$
On the other hand, taking first order Taylor series expansions of $\partial L(\hat{\theta}_n)/\partial\theta$ and $\partial L(\tilde{\theta}_n)/\partial\theta$ around $\theta_0$ gives, ignoring $o_p(1)$ terms,
$$\frac{\partial L(\hat{\theta}_n)}{\partial\theta} = \frac{\partial L(\theta_0)}{\partial\theta} + \frac{\partial^2 L(\theta_0)}{\partial\theta\,\partial\theta'}(\hat{\theta}_n - \theta_0),$$
so that
$$\frac{1}{\sqrt{n}}\frac{\partial L(\hat{\theta}_n)}{\partial\theta} = \frac{1}{\sqrt{n}}\frac{\partial L(\theta_0)}{\partial\theta} + \frac{1}{n}\frac{\partial^2 L(\theta_0)}{\partial\theta\,\partial\theta'}\sqrt{n}\,(\hat{\theta}_n - \theta_0) = \frac{1}{\sqrt{n}}\frac{\partial L(\theta_0)}{\partial\theta} - I(\theta_0)\sqrt{n}\,(\hat{\theta}_n - \theta_0), \quad (11)$$
where we note that $-\frac{1}{n}\frac{\partial^2 L(\theta_0)}{\partial\theta\,\partial\theta'} = -\frac{1}{n}\sum_{i=1}^n \frac{\partial^2 \log f(y_i|x_i;\theta_0)}{\partial\theta\,\partial\theta'} \xrightarrow{p} I(\theta_0)$ by the law of large numbers.

Similarly,
$$\frac{1}{\sqrt{n}}\frac{\partial L(\tilde{\theta}_n)}{\partial\theta} = \frac{1}{\sqrt{n}}\frac{\partial L(\theta_0)}{\partial\theta} - I(\theta_0)\sqrt{n}\,(\tilde{\theta}_n - \theta_0). \quad (12)$$
Considering the fact that $\partial L(\hat{\theta}_n)/\partial\theta = 0$ by the FOC of the unconstrained maximization problem, we take the difference between (11) and (12). Then,
$$\frac{1}{\sqrt{n}}\frac{\partial L(\tilde{\theta}_n)}{\partial\theta} = -I(\theta_0)\sqrt{n}\,(\tilde{\theta}_n - \hat{\theta}_n) = I(\theta_0)\sqrt{n}\,(\hat{\theta}_n - \tilde{\theta}_n). \quad (13)$$
Hence,
$$\sqrt{n}\,(\hat{\theta}_n - \tilde{\theta}_n) = I^{-1}(\theta_0)\frac{1}{\sqrt{n}}\frac{\partial L(\tilde{\theta}_n)}{\partial\theta}. \quad (14)$$
From (10) and (14), we obtain
$$\sqrt{n}\, g(\hat{\theta}_n) = \frac{\partial g(\theta_0)}{\partial\theta'}\, I^{-1}(\theta_0)\, \frac{1}{\sqrt{n}}\frac{\partial L(\tilde{\theta}_n)}{\partial\theta}.$$
Using (6), we deduce
$$\sqrt{n}\, g(\hat{\theta}_n) = -\frac{\partial g(\theta_0)}{\partial\theta'}\, I^{-1}(\theta_0)\, \frac{\partial g(\tilde{\theta}_n)'}{\partial\theta}\frac{\tilde{\lambda}}{\sqrt{n}} \approx -\frac{\partial g(\theta_0)}{\partial\theta'}\, I^{-1}(\theta_0)\, \frac{\partial g(\theta_0)'}{\partial\theta}\frac{\tilde{\lambda}}{\sqrt{n}}, \quad (15)$$
since $\tilde{\theta}_n \xrightarrow{p} \theta_0$ and hence $\partial g(\tilde{\theta}_n)/\partial\theta' \xrightarrow{p} \partial g(\theta_0)/\partial\theta'$. Therefore,
$$\frac{\tilde{\lambda}}{\sqrt{n}} = -\left[\frac{\partial g(\theta_0)}{\partial\theta'}\, I^{-1}(\theta_0)\, \frac{\partial g(\theta_0)'}{\partial\theta}\right]^{-1}\sqrt{n}\, g(\hat{\theta}_n). \quad (16)$$
From (4), under the null hypothesis, $\sqrt{n}\, g(\hat{\theta}_n) \xrightarrow{d} N\left(0,\, \frac{\partial g(\theta_0)}{\partial\theta'}\, I^{-1}(\theta_0)\, \frac{\partial g(\theta_0)'}{\partial\theta}\right)$. Consequently, we have
$$\frac{\tilde{\lambda}}{\sqrt{n}} \xrightarrow{d} N\left(0,\, \left[\frac{\partial g(\theta_0)}{\partial\theta'}\, I^{-1}(\theta_0)\, \frac{\partial g(\theta_0)'}{\partial\theta}\right]^{-1}\right). \quad (17)$$
Again, forming the quadratic form of the normal random variables, we obtain
$$\frac{1}{n}\tilde{\lambda}'\frac{\partial g(\theta_0)}{\partial\theta'}\, I^{-1}(\theta_0)\, \frac{\partial g(\theta_0)'}{\partial\theta}\tilde{\lambda} \xrightarrow{d} \chi^2(r) \quad \text{under } H_0. \quad (18)$$
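Relation (16) can be checked numerically in the hypothetical scalar exponential example: with $g(\theta)=\theta-\theta_0$ the constrained MLE is $\tilde{\theta}_n=\theta_0$, the FOC (6) gives $\tilde{\lambda} = -\partial L(\theta_0)/\partial\theta$, and (16) reduces to $\tilde{\lambda}/\sqrt{n} \approx -I(\theta_0)\sqrt{n}(\hat{\theta}_n-\theta_0)$. The equality holds only up to $o_p(1)$, so the check is approximate.

```python
import numpy as np

rng = np.random.default_rng(0)
theta0 = 2.0
n = 20000
y = rng.exponential(scale=1 / theta0, size=n)    # data under H0

theta_hat = 1 / y.mean()
score_at_theta0 = n / theta0 - y.sum()           # dL/dtheta at theta0 for the exponential model
lam = -score_at_theta0                           # Lagrange multiplier from FOC (6), with dg/dtheta = 1

lhs = lam / np.sqrt(n)                           # left-hand side of (16)
rhs = -(1 / theta0**2) * np.sqrt(n) * (theta_hat - theta0)   # -I(theta0) * sqrt(n)(theta_hat - theta0)
# lhs and rhs agree up to an o_p(1) discrepancy
```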

Alternatively, using (6), another form of the test statistic is given by
$$\frac{1}{n}\frac{\partial L(\tilde{\theta}_n)}{\partial\theta'}\, I^{-1}(\theta_0)\, \frac{\partial L(\tilde{\theta}_n)}{\partial\theta} \xrightarrow{d} \chi^2(r) \quad \text{under } H_0. \quad (19)$$
Note that (18) and (19) are not usable as they stand since they depend on the unknown parameter value $\theta_0$. We can evaluate the terms involving $\theta_0$ at the constrained MLE $\tilde{\theta}_n$ to get a usable statistic. Again, another form of the LM test is
$$S_n = \frac{\partial L(\tilde{\theta}_n)}{\partial\theta'}\, I_n^{-1}(\tilde{\theta}_n)\, \frac{\partial L(\tilde{\theta}_n)}{\partial\theta} = \tilde{\lambda}'\frac{\partial g(\tilde{\theta}_n)}{\partial\theta'}\, I_n^{-1}(\tilde{\theta}_n)\, \frac{\partial g(\tilde{\theta}_n)'}{\partial\theta}\tilde{\lambda}.$$
We can approximate $I(\theta_0)$ with either
$$-\frac{1}{n}\sum_{i=1}^n \frac{\partial^2 \log f(y_i|x_i;\tilde{\theta}_n)}{\partial\theta\,\partial\theta'} \quad \text{or} \quad \frac{1}{n}\sum_{i=1}^n \frac{\partial \log f(y_i|x_i;\tilde{\theta}_n)}{\partial\theta}\frac{\partial \log f(y_i|x_i;\tilde{\theta}_n)}{\partial\theta'}.$$
If we choose the second approximation, the LM test statistic becomes
$$S_n = \frac{1}{n}\frac{\partial L(\tilde{\theta}_n)}{\partial\theta'}\left[\frac{1}{n}\sum_{i=1}^n \frac{\partial \log f(y_i|x_i;\tilde{\theta}_n)}{\partial\theta}\frac{\partial \log f(y_i|x_i;\tilde{\theta}_n)}{\partial\theta'}\right]^{-1}\frac{\partial L(\tilde{\theta}_n)}{\partial\theta}$$
$$= \sum_{i=1}^n \frac{\partial \log f(y_i|x_i;\tilde{\theta}_n)}{\partial\theta'}\left[\sum_{i=1}^n \frac{\partial \log f(y_i|x_i;\tilde{\theta}_n)}{\partial\theta}\frac{\partial \log f(y_i|x_i;\tilde{\theta}_n)}{\partial\theta'}\right]^{-1}\sum_{i=1}^n \frac{\partial \log f(y_i|x_i;\tilde{\theta}_n)}{\partial\theta}.$$
This expression should seem quite familiar: it looks like a projection matrix.
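In the hypothetical exponential example with $H_0: \theta = \theta_0$ (so $\tilde{\theta}_n = \theta_0$ and the per-observation score is $s_i = 1/\theta_0 - y_i$), the two approximations of $I(\theta_0)$ give two versions of $S_n$, both $\chi^2(1)$ under $H_0$; the Hessian of the exponential log-likelihood is nonrandom, so the first approximation is exact here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
theta0 = 2.0
y = rng.exponential(scale=1 / theta0, size=1000)   # data under H0
n = y.size

s = 1 / theta0 - y                  # per-observation scores at the constrained MLE theta0
score = s.sum()                     # dL/dtheta at theta0

I_hess = 1 / theta0**2              # -(1/n) * sum of second derivatives (exact: -d2 log f = 1/theta^2)
I_opg = np.mean(s**2)               # outer-product-of-gradients approximation

S_hess = score**2 / (n * I_hess)    # S_n = (1/n) * score' I^{-1} score, Hessian-based
S_opg = score**2 / (n * I_opg)      # OPG-based version
p_value = stats.chi2.sf(S_hess, df=1)
```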

The intuition: the uncentered $R^2_u$ from the regression of $1$ on $\partial \log f(y_i|x_i;\tilde{\theta}_n)/\partial\theta'$ is given by
$$R^2_u = \frac{\mathbf{1}'X(X'X)^{-1}X'X(X'X)^{-1}X'\mathbf{1}}{\mathbf{1}'\mathbf{1}} = \frac{\mathbf{1}'X(X'X)^{-1}X'\mathbf{1}}{\mathbf{1}'\mathbf{1}},$$
where
$$X_{(n\times p)} = \begin{pmatrix} \partial \log f(y_1|x_1;\tilde{\theta}_n)/\partial\theta' \\ \partial \log f(y_2|x_2;\tilde{\theta}_n)/\partial\theta' \\ \vdots \\ \partial \log f(y_n|x_n;\tilde{\theta}_n)/\partial\theta' \end{pmatrix} \quad \text{and} \quad \mathbf{1}_{(n\times 1)} = \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix}.$$
Then,
$$R^2_u = \frac{1}{n}\sum_{i=1}^n \frac{\partial \log f(y_i|x_i;\tilde{\theta}_n)}{\partial\theta'}\left[\sum_{i=1}^n \frac{\partial \log f(y_i|x_i;\tilde{\theta}_n)}{\partial\theta}\frac{\partial \log f(y_i|x_i;\tilde{\theta}_n)}{\partial\theta'}\right]^{-1}\sum_{i=1}^n \frac{\partial \log f(y_i|x_i;\tilde{\theta}_n)}{\partial\theta}.$$
Hence, $S_n = nR^2_u$. This is quite an interesting result since the computation of the LM statistic is nothing but an OLS regression: we regress $1$ on the scores evaluated at the constrained MLE, compute the uncentered $R^2$, and then multiply it by the number of observations to get the LM statistic. One thing to be cautious about is that most software will automatically print out the centered $R^2$, which is impossible in this case since the denominator of the centered $R^2$ is simply zero (the dependent variable, a constant, has no variation).
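The $S_n = nR^2_u$ identity is easy to verify numerically; in the hypothetical exponential example the score "regressor matrix" $X$ has a single column $s_i = 1/\theta_0 - y_i$.

```python
import numpy as np

rng = np.random.default_rng(0)
theta0 = 2.0
y = rng.exponential(scale=1 / theta0, size=1000)   # data under H0
n = y.size

X = (1 / theta0 - y).reshape(-1, 1)   # scores at the constrained MLE, as an (n x 1) matrix
ones = np.ones(n)

# OPG form of the LM statistic: (sum s_i)' [sum s_i s_i']^{-1} (sum s_i)
S_opg = X.sum(axis=0) @ np.linalg.solve(X.T @ X, X.sum(axis=0))

# Uncentered R^2 from the OLS regression of 1 on the scores
beta = np.linalg.lstsq(X, ones, rcond=None)[0]
fitted = X @ beta
R2u = (ones @ fitted) / (ones @ ones)  # equals 1'X(X'X)^{-1}X'1 / 1'1
# S_opg equals n * R2u exactly (up to floating point error)
```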

The LM test is also an asymptotically consistent test. From (16) and (18),
$$S_n \approx n\, g(\hat{\theta}_n)'\left[\frac{\partial g(\theta_0)}{\partial\theta'}\, I^{-1}(\theta_0)\, \frac{\partial g(\theta_0)'}{\partial\theta}\right]^{-1} g(\hat{\theta}_n) \approx W_n,$$
so the consistency argument for the Wald test carries over.

Likelihood ratio (LR) test

Proposition 3.
$$LR_n = 2\left[L(\hat{\theta}_n) - L(\tilde{\theta}_n)\right] \xrightarrow{d} \chi^2(r) \quad \text{under } H_0.$$

Proof. We consider the second order Taylor expansions of $L(\hat{\theta}_n)$ and $L(\tilde{\theta}_n)$ around $\theta_0$, ignoring stochastically dominated terms:
$$L(\hat{\theta}_n) = L(\theta_0) + \frac{\partial L(\theta_0)}{\partial\theta'}(\hat{\theta}_n - \theta_0) + \frac{1}{2}(\hat{\theta}_n - \theta_0)'\frac{\partial^2 L(\theta_0)}{\partial\theta\,\partial\theta'}(\hat{\theta}_n - \theta_0)$$
$$= L(\theta_0) + \frac{1}{\sqrt{n}}\frac{\partial L(\theta_0)}{\partial\theta'}\sqrt{n}\,(\hat{\theta}_n - \theta_0) + \frac{1}{2}\sqrt{n}\,(\hat{\theta}_n - \theta_0)'\frac{1}{n}\frac{\partial^2 L(\theta_0)}{\partial\theta\,\partial\theta'}\sqrt{n}\,(\hat{\theta}_n - \theta_0),$$
$$L(\tilde{\theta}_n) = L(\theta_0) + \frac{1}{\sqrt{n}}\frac{\partial L(\theta_0)}{\partial\theta'}\sqrt{n}\,(\tilde{\theta}_n - \theta_0) + \frac{1}{2}\sqrt{n}\,(\tilde{\theta}_n - \theta_0)'\frac{1}{n}\frac{\partial^2 L(\theta_0)}{\partial\theta\,\partial\theta'}\sqrt{n}\,(\tilde{\theta}_n - \theta_0).$$
Taking differences and multiplying by 2, we obtain
$$2\left[L(\hat{\theta}_n) - L(\tilde{\theta}_n)\right] = \frac{2}{\sqrt{n}}\frac{\partial L(\theta_0)}{\partial\theta'}\sqrt{n}\,(\hat{\theta}_n - \tilde{\theta}_n) + \sqrt{n}\,(\hat{\theta}_n - \theta_0)'\frac{1}{n}\frac{\partial^2 L(\theta_0)}{\partial\theta\,\partial\theta'}\sqrt{n}\,(\hat{\theta}_n - \theta_0) - \sqrt{n}\,(\tilde{\theta}_n - \theta_0)'\frac{1}{n}\frac{\partial^2 L(\theta_0)}{\partial\theta\,\partial\theta'}\sqrt{n}\,(\tilde{\theta}_n - \theta_0)$$
$$\approx 2\sqrt{n}\,(\hat{\theta}_n - \theta_0)'\, I(\theta_0)\sqrt{n}\,(\hat{\theta}_n - \tilde{\theta}_n) - \sqrt{n}\,(\hat{\theta}_n - \theta_0)'\, I(\theta_0)\sqrt{n}\,(\hat{\theta}_n - \theta_0) + \sqrt{n}\,(\tilde{\theta}_n - \theta_0)'\, I(\theta_0)\sqrt{n}\,(\tilde{\theta}_n - \theta_0),$$
since $\frac{1}{\sqrt{n}}\frac{\partial L(\theta_0)}{\partial\theta} = I(\theta_0)\sqrt{n}\,(\hat{\theta}_n - \theta_0)$ from (11) (using $\partial L(\hat{\theta}_n)/\partial\theta = 0$) and $-\frac{1}{n}\frac{\partial^2 L(\theta_0)}{\partial\theta\,\partial\theta'} \xrightarrow{p} I(\theta_0)$. Collecting terms, the right-hand side equals $\sqrt{n}\,(\hat{\theta}_n - \tilde{\theta}_n)'\, I(\theta_0)\sqrt{n}\,(\hat{\theta}_n - \tilde{\theta}_n)$, which by (14) equals $\frac{1}{n}\frac{\partial L(\tilde{\theta}_n)}{\partial\theta'}\, I^{-1}(\theta_0)\, \frac{\partial L(\tilde{\theta}_n)}{\partial\theta}$, the LM statistic of (19); hence $LR_n \xrightarrow{d} \chi^2(r)$ under $H_0$.
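All three statistics can be computed side by side in the hypothetical exponential example used above (illustrative model and numbers, not from the note); under $H_0$ they are asymptotically equivalent.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
theta0 = 2.0
y = rng.exponential(scale=1 / theta0, size=1000)   # data under H0
n = y.size
theta_hat = 1 / y.mean()                           # unconstrained MLE; constrained MLE is theta0

def loglik(theta):
    return n * np.log(theta) - theta * y.sum()

W = n * (theta_hat - theta0) ** 2 / theta_hat**2   # Wald: information evaluated at the MLE
S = (n / theta0 - y.sum()) ** 2 * theta0**2 / n    # LM (score): information evaluated at theta0
LR = 2 * (loglik(theta_hat) - loglik(theta0))      # likelihood ratio

p_values = stats.chi2.sf(np.array([W, S, LR]), df=1)
```

Note that LR is nonnegative by construction, since $\hat{\theta}_n$ maximizes the log-likelihood.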

