
LECTURE NOTES ON DONSKER’S THEOREM

DAVAR KHOSHNEVISAN

ABSTRACT. Some course notes on Donsker's theorem. These are for Math 7880-1 ("Topics in Probability"), taught at the Department of Mathematics at the University of Utah during the Spring semester. They are constantly being updated and corrected. Read them at your own risk. The author thanks the National Science Foundation for their continued support.

1. INTRODUCTION

Let $\{X_n\}_{n=1}^\infty$ denote i.i.d. random variables, all taking values in $\mathbf{R}$. Define $S_n := X_1 + \cdots + X_n$. Recall the classical central limit theorem:

Theorem (CLT). If $\mathrm{E}[X_1] = \mu$ and $\mathrm{Var}(X_1) := \sigma^2 \in (0,\infty)$, then
\[
  \frac{S_n - n\mu}{\sqrt{n}} \Rightarrow N(0, \sigma^2),
\]
where $\Rightarrow$ means weak convergence (or convergence in distribution), and $N(m,v)$ denotes the normal distribution with mean $m \in \mathbf{R}$ and variance $v > 0$.

This is a most rudimentary example of an invariance principle.
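The convergence in the CLT is easy to observe numerically. The following sketch is not part of the original notes; the Uniform(0,1) step distribution and the function name `clt_sample` are illustrative choices. It simulates $(S_n - n\mu)/\sqrt{n}$ and checks that its sample mean and variance match the limiting $N(0, \sigma^2)$ with $\sigma^2 = 1/12$.

```python
import random
import statistics

def clt_sample(n, num_trials, seed=0):
    """Simulate (S_n - n*mu)/sqrt(n) for i.i.d. Uniform(0,1) increments."""
    rng = random.Random(seed)
    mu = 0.5  # E[X_1] for Uniform(0,1)
    out = []
    for _ in range(num_trials):
        s = sum(rng.random() for _ in range(n))
        out.append((s - n * mu) / n ** 0.5)
    return out

vals = clt_sample(n=1000, num_trials=2000)
# For Uniform(0,1), sigma^2 = 1/12, so the limit law is N(0, 1/12).
print(statistics.mean(vals), statistics.variance(vals))
```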




Here we have a limit theorem where the limiting distribution depends on the approximating sequence only through $\mu$ and $\sigma$. Here is an application in classical statistical theory:

Example. Suppose we have a population (e.g., heart rates) whose mean, $\mu$, is unknown to us. In order to learn about this $\mu$, we can take a large independent sample $X_1, \ldots, X_n$ from the said population, and construct the sample average $\bar X_n := S_n/n$. By the strong law of large numbers, $\bar X_n \to \mu$. In order to find a more quantitative estimate we appeal to the CLT; it asserts that
\[
  \sqrt{n}\,(\bar X_n - \mu) \Rightarrow N(0, \sigma^2).
\]
One can then use the preceding to derive approximate confidence bounds for $\mu$.

For instance, if $n$, the sample size, is large, then the CLT implies that
\[
  \mathrm{P}\left\{ |\bar X_n - \mu| \le \frac{2\sigma}{\sqrt{n}} \right\} \approx \mathrm{P}\{ N(0,1) \in [-2,2] \} \approx 0.95.
\]
This relies on the fact that $\mathrm{P}\{N(0,1) \in [-2,2]\} \approx 0.95$, which you can find in a number of statistical tables. We have just found that, when the sample size is large, we are approximately 95% certain that $\mu$ is to within $2\sigma/\sqrt{n}$ of the sample average.

The preceding example is quite elementary in nature. But it is a powerful tool in applied work. The reason for this is the said invariance property: we do not need to know much about the distribution of the sample to say something about the limit.
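The confidence-bound recipe can be checked by simulation. In this sketch (Gaussian samples and the helper name `coverage` are illustrative choices, not from the notes) we estimate how often $\mu$ falls within $2\sigma/\sqrt{n}$ of $\bar X_n$; the answer should be close to $\mathrm{P}\{N(0,1) \in [-2,2]\} \approx 0.954$.

```python
import random

def coverage(mu=0.0, sigma=1.0, n=400, num_trials=2000, seed=1):
    """Fraction of trials in which mu lies within 2*sigma/sqrt(n) of X_bar_n."""
    rng = random.Random(seed)
    half_width = 2 * sigma / n ** 0.5
    hits = 0
    for _ in range(num_trials):
        xbar = sum(rng.gauss(mu, sigma) for _ in range(n)) / n
        if abs(xbar - mu) <= half_width:
            hits += 1
    return hits / num_trials

print(coverage())  # close to P{N(0,1) in [-2,2]} ≈ 0.954
```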

Being in $L^2(\mathrm{P})$ suffices!

Now suppose you are drawing samples as time passes, and you wish to know if the mean of the underlying population has changed over time. Then, it turns out that we need to consider the sample under the assumption that $X_1, \ldots, X_n$ all have the same distribution. In that case, we compute
\[
  M_n := \max_{1 \le j \le n} (S_j - j\mu).
\]
Is there a CLT for $M_n$? It turns out that the answer is a resounding yes, and it involves an invariance principle. The same phenomenon holds for all sorts of other random variables that we can construct by applying nice functionals to $\{S_1, \ldots, S_n\}$.
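As a concrete illustration of the statistic $M_n$ (the helper name `m_n` and the Gaussian steps below are my own choices, not the notes'), one can compute it along a simulated path:

```python
import random

def m_n(steps, mu):
    """M_n = max over 1 <= j <= n of (S_j - j*mu) for one sample path."""
    best, s = float("-inf"), 0.0
    for j, x in enumerate(steps, start=1):
        s += x
        best = max(best, s - j * mu)
    return best

rng = random.Random(2)
steps = [rng.gauss(0.5, 1.0) for _ in range(1000)]  # mean mu = 0.5 here
print(m_n(steps, mu=0.5))
```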

Donsker's theorem asserts that something far deeper holds. First, let us make the usual simplification that, without loss of generality, $\mathrm{E}[X_1]=0$ and $\mathrm{Var}(X_1)=1$. Else, replace the $X_i$'s everywhere by $X_i' := (X_i - \mu)/\sigma$. Keeping this in mind, we can define for all $\omega \in \Omega$, $n \ge 1$, and $t \in (0,1]$,
\[
  S_n(t, \omega) := \frac{1}{\sqrt{n}} \sum_{i=1}^n \left[ S_{i-1}(\omega) + n\left(t - \frac{i-1}{n}\right) X_i(\omega) \right] \mathbf{1}_{\left(\frac{i-1}{n}, \frac{i}{n}\right]}(t).
\]
Also define $S_n(0, \omega) := 0$. As usual, we do not write the dependence on $\omega$. In this way, we see that $S_n := \{S_n(t);\ t \in [0,1]\}$ is a random continuous function. This deserves to be made more precise. But before we do that, we should recognize that $S_n$ is merely the linear interpolation of the normalized random walk $\{S_1/\sqrt{n}, \ldots, S_n/\sqrt{n}\}$, and is parametrized by $[0,1]$.
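The interpolation formula for $S_n(t)$ translates directly into code. This sketch (the function name `s_n` is illustrative) evaluates the display above for a given increment sequence; at a grid point $t = j/n$ it returns $S_j/\sqrt{n}$ exactly.

```python
import math

def s_n(xs, t):
    """Evaluate the interpolated walk S_n(t) for t in [0,1].

    xs holds the increments X_1, ..., X_n; for t in ((i-1)/n, i/n] the value
    is (S_{i-1} + n*(t - (i-1)/n) * X_i) / sqrt(n), matching the display above.
    """
    n = len(xs)
    if t == 0.0:
        return 0.0
    i = math.ceil(t * n)          # the unique i with (i-1)/n < t <= i/n
    s_prev = sum(xs[:i - 1])      # S_{i-1}
    return (s_prev + n * (t - (i - 1) / n) * xs[i - 1]) / math.sqrt(n)

print(s_n([1.0, -1.0], 0.5))  # S_1/sqrt(2) at the grid point t = 1/2
```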

Thus, for instance, by the CLT, $S_n(1) \Rightarrow N(0,1)$.

Let $C[0,1]$ denote the collection of all continuous functions $f: [0,1] \to \mathbf{R}$, and metrize it with
\[
  d(f,g) := \sup_{x \in [0,1]} |f(x) - g(x)| \qquad \text{for } f, g \in C[0,1].
\]
Exercise. Prove that $(C[0,1], d)$ is a complete, separable, metric vector space.

Now consider the measure space $(C[0,1], \mathcal{B})$, where $\mathcal{B} := \mathcal{B}(C[0,1])$ is the Borel $\sigma$-algebra on $C[0,1]$. Then each $S_n$ is now a random variable with values in $(C[0,1], \mathcal{B})$. Let $\mathrm{P}$ denote also the induced probability measure on the said measure space. [This is cheating a little bit, but no great harm will come of it.]

The bulk of these notes is concerned with the following invariance principle.

Theorem (Donsker). Suppose $\{X_i\}_{i=1}^\infty$ is an i.i.d. sequence with $\mathrm{E}[X_1]=0$ and $\mathrm{Var}(X_1)=1$. Then $S_n \Rightarrow W$ as $n \to \infty$, where $W$ denotes Brownian motion. The latter is viewed as a random element of $(C[0,1], \mathcal{B})$.

It may help to recall that this means that for all bounded, continuous functions $\Phi: C[0,1] \to \mathbf{R}$,
\[
  \lim_{n \to \infty} \mathrm{E}[\Phi(S_n)] = \mathrm{E}[\Phi(W)].
\]
Of course, bounded means that there exists a finite number $K$ such that $|\Phi(f)| \le K$ for all $f \in C[0,1]$.

Definition. A measurable function $\Phi: C[0,1] \to \mathbf{R}$ is called a functional.

2. SOME APPLICATIONS

Before we prove Donsker's theorem, we should learn to apply it in a variety of settings.

That is precisely what we do here in this section.

Example. Let $h: \mathbf{R} \to \mathbf{R}$ be a bounded, continuous function. For $f \in C[0,1]$ define $\Phi(f) := h(f(1))$. It is easy to see that $\Phi$ is a continuous functional. That is, $\Phi(f_n) \to \Phi(f)$ whenever $d(f_n, f) \to 0$. For this special $\Phi$, Donsker's theorem says that
\[
  \mathrm{E}\left[h\left(\frac{S_n}{\sqrt{n}}\right)\right] = \mathrm{E}[h(S_n(1))] \to \mathrm{E}[h(W(1))] = \mathrm{E}[h(N(0,1))].
\]
This is the CLT in disguise.

Example. Let $h: \mathbf{R} \to \mathbf{R}$ be a bounded, continuous function. For $f \in C[0,1]$ define $\Phi(f) := h(\sup_{t \in [0,1]} f(t))$. Then $\Phi$ is a bounded, continuous functional. Donsker's theorem says the following about this choice of $\Phi$: as $n \to \infty$,
\[
  \mathrm{E}\left[h\left(\sup_{t \in [0,1]} S_n(t)\right)\right] = \mathrm{E}[\Phi(S_n)] \to \mathrm{E}[\Phi(W)] = \mathrm{E}\left[h\left(\sup_{t \in [0,1]} W(t)\right)\right].
\]
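The first example can be checked numerically. With $h = \cos$ and $\pm 1$ steps (both my illustrative choices), one has exactly $\mathrm{E}[\cos(S_n/\sqrt{n})] = (\cos(1/\sqrt{n}))^n \to e^{-1/2} = \mathrm{E}[\cos(N(0,1))]$. A Monte Carlo sketch:

```python
import math
import random

def mc_cos_limit(n=400, num_trials=4000, seed=3):
    """Monte Carlo estimate of E[cos(S_n/sqrt(n))] for i.i.d. +/-1 steps."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_trials):
        s = sum(rng.choice((-1.0, 1.0)) for _ in range(n))
        total += math.cos(s / math.sqrt(n))
    return total / num_trials

print(mc_cos_limit(), math.exp(-0.5))  # the limit is E[cos(N(0,1))] = e^{-1/2}
```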

By the reflection principle, $\sup_{t \in [0,1]} W(t)$ has the same distribution as $|N(0,1)|$. Therefore, we have proved that $\sup_{t \in [0,1]} S_n(t) \Rightarrow |N(0,1)|$, where now $\Rightarrow$ denotes weak convergence in $\mathbf{R}$ (not $C[0,1]$). By convexity,
\[
  \sup_{t \in [0,1]} S_n(t) = \max_{0 \le j \le n} \frac{S_j}{\sqrt{n}}.
\]
(Hash this out!) Therefore, we have proved that for all $x \ge 0$,
\[
  \lim_{n \to \infty} \mathrm{P}\left\{ \max_{1 \le j \le n} S_j \le x\sqrt{n} \right\} = 2 \int_0^x \frac{e^{-z^2/2}}{\sqrt{2\pi}} \, dz.
\]
(Why?)

Example. Proceed as with the preceding example, and note that for all $x \ge 0$,
\[
  \lim_{n \to \infty} \mathrm{P}\left\{ \min_{1 \le j \le n} S_j \le -x\sqrt{n} \right\} = 2 \int_x^\infty \frac{e^{-z^2/2}}{\sqrt{2\pi}} \, dz.
\]
(Why?) In fact, we can let $\alpha, \beta, \gamma \in \mathbf{R}$ be fixed, and note that
\[
  \alpha \min_{1 \le j \le n} \frac{S_j}{\sqrt{n}} + \beta \max_{1 \le j \le n} \frac{S_j}{\sqrt{n}} + \gamma \frac{S_n}{\sqrt{n}} \Rightarrow \alpha \inf_{t \in [0,1]} W(t) + \beta \sup_{t \in [0,1]} W(t) + \gamma W(1).
\]
(Why?)
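The limit formula for the running maximum can be tested by simulation, using the identity $2\int_0^x e^{-z^2/2}/\sqrt{2\pi}\,dz = \operatorname{erf}(x/\sqrt{2})$. The sketch below uses Gaussian steps and a made-up helper name; for finite $n$ the estimate carries an $O(1/\sqrt{n})$ discretization error, so only rough agreement should be expected.

```python
import math
import random

def p_max_leq(x, n=400, num_trials=2000, seed=4):
    """Monte Carlo estimate of P{max over 1<=j<=n of S_j <= x*sqrt(n)}."""
    rng = random.Random(seed)
    thresh = x * math.sqrt(n)
    hits = 0
    for _ in range(num_trials):
        s, m = 0.0, float("-inf")
        for _ in range(n):
            s += rng.gauss(0.0, 1.0)
            m = max(m, s)
        if m <= thresh:
            hits += 1
    return hits / num_trials

# Limiting value: 2*Integral_0^x of the standard normal density = erf(x/sqrt(2)).
print(p_max_leq(1.0), math.erf(1 / math.sqrt(2)))
```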

This is a statement about characteristic functions: it asserts that the characteristic function of the random vector $(\min_{j \le n} S_j, \max_{j \le n} S_j, S_n)/\sqrt{n}$ converges to that of $(\inf_{[0,1]} W, \sup_{[0,1]} W, W(1))$. Therefore, by the convergence theorem for three-dimensional Fourier transforms,
\[
  \left( \min_{1 \le j \le n} \frac{S_j}{\sqrt{n}}, \ \max_{1 \le j \le n} \frac{S_j}{\sqrt{n}}, \ \frac{S_n}{\sqrt{n}} \right) \Rightarrow \left( \inf_{t \in [0,1]} W(t), \ \sup_{t \in [0,1]} W(t), \ W(1) \right),
\]
where, now, $\Rightarrow$ denotes weak convergence in $\mathbf{R}^3$. Write this as $V_n \Rightarrow V$, where $V_n$ and $V$ are the (hopefully) obvious three-dimensional random variables described above. It follows that for any bounded, continuous $h: \mathbf{R}^3 \to \mathbf{R}$, $\lim_n \mathrm{E}[h(V_n)] = \mathrm{E}[h(V)]$.
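A quick simulation of the vector $V_n$ (Gaussian steps and all names are illustrative, not from the notes) shows its components behaving as the limit predicts; for instance, $\mathrm{E}[\sup_{[0,1]} W] = \mathrm{E}|N(0,1)| = \sqrt{2/\pi}$, so the sample mean of the middle coordinate should be near that value for large $n$.

```python
import math
import random

def v_n(n=500, num_trials=2000, seed=5):
    """Sample the vector V_n = (min_j S_j, max_j S_j, S_n)/sqrt(n)."""
    rng = random.Random(seed)
    out = []
    for _ in range(num_trials):
        s, lo, hi = 0.0, float("inf"), float("-inf")
        for _ in range(n):
            s += rng.gauss(0.0, 1.0)
            lo, hi = min(lo, s), max(hi, s)
        out.append((lo / math.sqrt(n), hi / math.sqrt(n), s / math.sqrt(n)))
    return out

samples = v_n()
mean_max = sum(v[1] for v in samples) / len(samples)
print(mean_max, math.sqrt(2 / math.pi))  # E[sup W] = E|N(0,1)| = sqrt(2/pi)
```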

