
Asymptotic Relative Efficiency in Estimation


Robert Serfling∗
University of Texas at Dallas
October 2009

Prepared for the forthcoming INTERNATIONAL ENCYCLOPEDIA OF STATISTICAL SCIENCES, to be published by Springer



Asymptotic relative efficiency of two estimators

For statistical estimation problems, it is typical and even desirable that several reasonable estimators can arise for consideration. For example, the mean and median parameters of a symmetric distribution coincide, and so the sample mean and the sample median become competing estimators of the point of symmetry. Which is preferred? By what criteria shall we make a choice? One natural and time-honored approach is simply to compare the sample sizes at which two competing estimators meet a given standard of performance.

This depends upon the chosen measure of performance and upon the particular population distribution F. To make the discussion of sample mean versus sample median more precise, consider a distribution function F with density function f symmetric about an unknown point θ to be estimated. For {X_1, ..., X_n} a sample from F, put X̄_n = n^{-1} Σ_{i=1}^n X_i and Med_n = median{X_1, ..., X_n}. Each of X̄_n and Med_n is a consistent estimator of θ, in the sense of convergence in probability to θ as the sample size n → ∞. To choose between these estimators we need further information about their performance. In this regard, one key aspect is efficiency, which answers: how spread out about θ is the sampling distribution of the estimator?

The smaller the variance in its sampling distribution, the more "efficient" is the estimator. Here we consider large-sample sampling distributions. For X̄_n, the classical central limit theorem tells us: if F has finite variance σ_F^2, then the sampling distribution of X̄_n is approximately N(θ, σ_F^2/n), i.e., Normal with mean θ and variance σ_F^2/n. For Med_n, a similar classical result [11] tells us: if the density f is continuous and positive at θ, then the sampling distribution of Med_n is approximately N(θ, 1/(4[f(θ)]^2 n)).

∗Department of Mathematical Sciences, University of Texas at Dallas, Richardson, Texas 75083-0688, USA. serfling. Support by NSF Grant DMS-0805786 and NSA Grant H98230-08-1-0106 is gratefully acknowledged.
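These two large-sample approximations are easy to check by simulation. The sketch below (standard-library Python; the helper name is ours, not from the article) draws repeated N(0, 1) samples and compares the Monte Carlo variances of X̄_n and Med_n against the theoretical values σ_F^2/n and 1/(4[f(θ)]^2 n):

```python
import random
import statistics

def sampling_variances(n, reps, seed=0):
    """Monte Carlo estimate of Var(sample mean) and Var(sample median)
    over repeated samples of size n from N(0, 1)."""
    rng = random.Random(seed)
    means, medians = [], []
    for _ in range(reps):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
        means.append(sum(xs) / n)
        medians.append(statistics.median(xs))
    return statistics.pvariance(means), statistics.pvariance(medians)

n = 200
v_mean, v_med = sampling_variances(n, reps=4000)
# Theory: Var(mean) ~ sigma^2/n = 1/200 = 0.005
#         Var(median) ~ 1/(4 f(0)^2 n) = (pi/2)/n ~ 0.00785
print(v_mean, v_med)
```

The median's sampling variance comes out roughly π/2 times larger than the mean's, foreshadowing the ARE value 2/π derived next.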

On this basis, we consider X̄_n and Med_n to perform equivalently at respective sample sizes n_1 and n_2 if

σ_F^2 / n_1 = 1 / (4[f(θ)]^2 n_2).

Keeping in mind that these sampling distributions are only approximations assuming that n_1 and n_2 are "large", we define the asymptotic relative efficiency (ARE) of Med to X̄ as the large-sample limit of the ratio n_1/n_2, i.e.,

ARE(Med, X̄, F) = 4[f(θ)]^2 σ_F^2.   (1)

Definition in the general case

For any parameter η of a distribution F, and for estimators η̂^(1) and η̂^(2) which are approximately N(η, V_1(F)/n) and N(η, V_2(F)/n), respectively, the ARE of η̂^(2) to η̂^(1) is given by

ARE(η̂^(2), η̂^(1), F) = V_1(F) / V_2(F).   (2)

Interpretation: if η̂^(2) is used with a sample of size n, the number of observations needed for η̂^(1) to perform equivalently is ARE(η̂^(2), η̂^(1), F) × n.

Extension to the case of a multidimensional parameter

For a parameter η taking values in R^k, and two estimators η̂^(i) which are k-variate Normal with mean η and nonsingular covariance matrices Σ_i(F)/n, i = 1, 2, we use (see [11])

ARE(η̂^(2), η̂^(1), F) = (|Σ_1(F)| / |Σ_2(F)|)^{1/k},   (3)

the ratio of generalized variances (determinants of the covariance matrices), raised to the power 1/k.

Connection with the maximum likelihood estimator

Let F have density f(x | θ) parameterized by θ ∈ R and satisfying some differentiability conditions with respect to θ. Suppose also that I(F) = E_θ{[∂/∂θ log f(x | θ)]^2} (the Fisher information) is positive and finite. Then [5] it follows that (i) the maximum likelihood estimator θ̂^(ML) of θ is approximately N(θ, 1/(I(F)n)), and (ii) for a wide class of estimators θ̂ that are approximately N(θ, V(θ, F)/n), a lower bound to V(θ, F) is 1/I(F).
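As a toy numerical illustration of formula (3) (the function names are ours, and the 2×2 covariance matrices are made up for the example, not taken from the article):

```python
def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def are_generalized(sigma1, sigma2, k):
    """Formula (3): ratio of generalized variances, raised to the power 1/k."""
    return (det2(sigma1) / det2(sigma2)) ** (1.0 / k)

# Estimator 1 has identity covariance; estimator 2 has twice the variance
# in each coordinate, so its generalized variance is 4 times larger.
sigma1 = [[1.0, 0.0], [0.0, 1.0]]
sigma2 = [[2.0, 0.0], [0.0, 2.0]]
print(are_generalized(sigma1, sigma2, 2))  # (1/4)^(1/2) = 0.5
```

Here estimator 2 is half as efficient as estimator 1: it needs twice as many observations to perform equivalently.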

In this situation, (2) yields

ARE(θ̂, θ̂^(ML), F) = 1 / (I(F) V(θ, F)) ≤ 1,   (4)

making θ̂^(ML) (asymptotically) the most efficient among the given class of estimators θ̂. We note, however, as will be discussed later, that (4) does not necessarily make θ̂^(ML) the estimator of choice, when certain other considerations are taken into account.

Detailed discussion of estimation of point of symmetry

Let us now discuss in detail the example treated above, with F a distribution with density f symmetric about an unknown point θ and {X_1, ..., X_n} a sample from F. For estimation of θ, we will consider not only X̄_n and Med_n but also a third important estimator.

Mean versus median

Let us now formally compare X̄_n and Med_n and see how the ARE differs with the choice of F. Applying (1) with F = N(θ, σ_F^2), it is seen that ARE(Med, X̄, N(θ, σ_F^2)) = 2/π ≈ 0.64. That is, for sampling from a Normal distribution, the sample mean performs as efficiently as the sample median using only 64% as many observations.

(Since θ and σ_F are location and scale parameters of F, and since the estimators X̄_n and Med_n are location and scale equivariant, their ARE does not depend upon these parameters.) The superiority of X̄_n here is no surprise, since it is the MLE of θ in the model N(θ, σ_F^2). As noted above, asymptotic relative efficiencies pertain to large-sample comparisons and need not reliably indicate small-sample performance. In particular, for F Normal, the exact relative efficiency of Med to X̄ for sample size n = 5 is a very high 95%, although this decreases quickly, to 80% for n = 10, to 70% for n = 20, and to 64% in the limit. For sampling from a double exponential (or Laplace) distribution with density f(x) = λe^{-λ|x-θ|}/2, -∞ < x < ∞ (and thus variance 2/λ^2), the above result favoring X̄_n over Med_n is reversed: (1) yields ARE(Med, X̄, Laplace) = 2, so that the sample mean requires 200% as many observations to perform equivalently to the sample median.
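Both values follow directly from formula (1). A minimal sketch evaluating it for the Normal and Laplace cases (the function name is ours):

```python
import math

def are_med_vs_mean(f_at_theta, var_f):
    """Formula (1): ARE(Med, X-bar, F) = 4 [f(theta)]^2 sigma_F^2."""
    return 4.0 * f_at_theta ** 2 * var_f

# Normal(theta, sigma^2): f(theta) = 1/(sigma*sqrt(2*pi)), variance sigma^2
sigma = 1.0
normal = are_med_vs_mean(1.0 / (sigma * math.sqrt(2 * math.pi)), sigma ** 2)

# Laplace with density lambda*exp(-lambda|x - theta|)/2:
# f(theta) = lambda/2, variance 2/lambda^2
lam = 1.0
laplace = are_med_vs_mean(lam / 2.0, 2.0 / lam ** 2)

print(round(normal, 3), round(laplace, 3))  # 0.637 2.0
```

Note that each ARE is free of the scale parameter (σ or λ), in line with the equivariance remark above.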

Again, this is no surprise, because for this model the MLE of θ is Med_n.

A compromise: the Hodges-Lehmann location estimator

We see from the above that the ARE depends dramatically upon the shape of the density f and thus must be used cautiously as a benchmark. For Normal versus Laplace, X̄_n is either greatly superior or greatly inferior to Med_n. This is a rather unsatisfactory situation, since in practice we might not be quite sure whether F is Normal or Laplace or some other type. A very interesting solution to this dilemma is given by an estimator that has excellent overall performance, the so-called Hodges-Lehmann location estimator [2]:

HL_n = Median{(X_i + X_j)/2},

the median of all pairwise averages of the sample observations.

(Some authors include the cases i = j, some not.) We have [3] that HL_n is asymptotically N(θ, 1/(12[∫f^2(x)dx]^2 n)), which yields that ARE(HL, X̄, N(θ, σ_F^2)) = 3/π ≈ 0.955 and ARE(HL, X̄, Laplace) = 1.5. Also, for the Logistic distribution with density f(x) = σ^{-1}e^{(x-θ)/σ}/[1 + e^{(x-θ)/σ}]^2, -∞ < x < ∞, for which HL_n is the MLE of θ and thus optimal, we have ARE(HL, X̄, Logistic) = π^2/9 ≈ 1.097 (see [4]). Further, for F the class of all distributions symmetric about θ and having finite variance, we have inf_F ARE(HL, X̄, F) = 108/125 = 0.864 (see [3]). The estimator HL_n is highly competitive with X̄ at Normal distributions, can be infinitely more efficient at some other symmetric distributions F, and is never much less efficient at any distribution in the class F.
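The definition of HL_n translates directly into code. A naive sketch (the function name is ours; this is the O(n^2) enumeration, not the fast algorithm discussed below):

```python
import statistics

def hodges_lehmann(xs, include_i_eq_j=True):
    """HL_n: median of the pairwise averages (Xi + Xj)/2.
    Some authors include the cases i = j, some not; both variants supported."""
    n = len(xs)
    pairs = []
    for i in range(n):
        start = i if include_i_eq_j else i + 1
        for j in range(start, n):
            pairs.append((xs[i] + xs[j]) / 2.0)
    return statistics.median(pairs)

# The lone outlier 100.0 barely moves the estimate:
print(hodges_lehmann([1.0, 2.0, 3.0, 100.0]))  # 2.75
```

The example hints at the robustness that motivates HL_n: a gross outlier shifts the sample mean to 26.5 but leaves HL_n near the bulk of the data.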

The computation of HL_n appears at first glance to require O(n^2) steps, but a much more efficient O(n log n) algorithm is available (see [6]).

Efficiency versus robustness trade-off

Although the asymptotically most efficient estimator is given by the MLE, the particular MLE depends upon the shape of F and can be drastically inefficient when the actual F departs even a little bit from the nominal F. For example, if the assumed F is N(θ, 1) but the actual model differs by a small amount of "contamination" ε, i.e., F = (1 - ε)N(θ, 1) + εN(θ, σ^2), then

ARE(Med, X̄, F) = (2/π)(1 - ε + ε/σ)^2 (1 - ε + εσ^2),

which equals 2/π in the "ideal" case ε = 0 but otherwise tends to ∞ as σ → ∞. A small perturbation of the assumed model thus can destroy the superiority of X̄.

One way around this issue is to take a nonparametric approach and seek an estimator with ARE satisfying a favorable lower bound.
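The contamination formula can be evaluated directly; a short sketch (the function name is ours) showing how quickly the ideal value 2/π is overturned:

```python
import math

def are_contaminated(eps, sigma):
    """ARE(Med, X-bar, F) for the contaminated Normal model
    F = (1 - eps) N(theta, 1) + eps N(theta, sigma^2)."""
    return (2.0 / math.pi) * (1 - eps + eps / sigma) ** 2 * (1 - eps + eps * sigma ** 2)

print(round(are_contaminated(0.0, 10.0), 3))  # 0.637, the ideal case 2/pi
print(are_contaminated(0.05, 10.0))           # already above 1: median now wins
print(are_contaminated(0.05, 100.0))          # grows without bound as sigma grows
```

With only 5% contamination at σ = 10, the median is already the more efficient estimator, illustrating the fragility of the MLE's optimality under model perturbation.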

