
Topic 6: Convergence and Limit Theorems


Transcription of Topic 6: Convergence and Limit Theorems

Topic outline:
• Sum of random variables
• Laws of large numbers
• Central Limit Theorem
• Convergence of sequences of RVs
(ES150 – Harvard SEAS)

Sum of random variables

• Let X_1, X_2, ..., X_n be a sequence of random variables. Define S_n as
    S_n = X_1 + X_2 + ... + X_n
• The mean and variance of S_n become
    E[S_n] = E[X_1] + E[X_2] + ... + E[X_n]
    var(S_n) = \sum_{k=1}^n var(X_k) + \sum_{j=1}^n \sum_{k=1, k \ne j}^n cov(X_j, X_k)
• If X_1, X_2, ..., X_n are independent random variables, then
    var(S_n) = \sum_{k=1}^n var(X_k)
• For independent X_k, the characteristic function can be used to calculate the pdf of S_n:
    \Phi_{S_n}(\omega) = E[e^{j \omega S_n}] = \Phi_{X_1}(\omega) \cdots \Phi_{X_n}(\omega)
    f_{S_n}(x) = F^{-1}\{\Phi_{X_1}(\omega) \cdots \Phi_{X_n}(\omega)\}

Sum of a random number of independent RVs

• Consider the sum of iid RVs X_k with finite mean and variance,
    S_N = \sum_{k=1}^N X_k,
  where N is a random variable independent of the X_k.
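The moment formulas for S_n above are easy to sanity-check by simulation. A minimal sketch in Python (the Uniform(0,1) summands, term count, and trial count are illustrative assumptions, not part of the notes):

```python
import random

random.seed(0)

# S = X1 + X2 + X3 with independent Xk ~ Uniform(0, 1):
# E[S] = 3 * 1/2 and, by independence, var(S) = 3 * 1/12.
n_terms, n_trials = 3, 200_000
sums = [sum(random.random() for _ in range(n_terms)) for _ in range(n_trials)]

mean_S = sum(sums) / n_trials
var_S = sum((s - mean_S) ** 2 for s in sums) / n_trials

print(mean_S)  # ≈ 1.5
print(var_S)   # ≈ 0.25
```

With correlated summands the cross-covariance terms would have to be added back in; independence is what lets the variance simply add.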

• Using conditional expectation, the mean and variance of S_N are
    E[S_N] = E[E[S_N | N]] = E[N E[X]] = E[N] E[X]
    var(S_N) = var(N) (E[X])^2 + E[N] var(X)
• The characteristic function of S_N is
    \Phi_{S_N}(\omega) = E[E[e^{j \omega S_N} | N]] = E[\Phi_X(\omega)^N] = E[z^N] |_{z = \Phi_X(\omega)} = G_N(\Phi_X(\omega)),
  which is the generating function of N evaluated at z = \Phi_X(\omega).
• Example: The number of jobs N submitted to the CPU is a geometric RV with parameter p. The execution time of each job is an exponential RV with a given mean. Find the pdf of the total execution time.

Laws of large numbers

• Let X_1, X_2, ..., X_n be independent, identically distributed (iid) random variables with mean E[X_j] = \mu and variance \sigma^2, both finite.
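The CPU-jobs example above can be checked numerically: with N geometric on {1, 2, ...} and iid exponential execution times, the formulas give E[S_N] = E[N]E[X] and var(S_N) = var(N)(E[X])^2 + E[N]var(X) = (E[S_N])^2, the variance-equals-mean-squared signature of an exponential total time. A sketch (p = 0.25 and mean 2.0 are assumed illustration values):

```python
import random

random.seed(1)

p, mean_x = 0.25, 2.0   # assumed values: N ~ Geometric(p) on {1, 2, ...}
n_trials = 100_000

def geometric(p):
    """Number of Bernoulli(p) trials up to and including the first success."""
    n = 1
    while random.random() > p:
        n += 1
    return n

totals = []
for _ in range(n_trials):
    n_jobs = geometric(p)
    totals.append(sum(random.expovariate(1.0 / mean_x) for _ in range(n_jobs)))

m = sum(totals) / n_trials
v = sum((t - m) ** 2 for t in totals) / n_trials

print(m)  # ≈ mean_x / p = 8.0, matching E[N]E[X]
print(v)  # ≈ 64.0 = m**2, consistent with an exponential S_N
```

Analytically, E[N] = 1/p = 4 and var(N) = (1 − p)/p² = 12, so var(S_N) = 12·4 + 4·4 = 64 = 8², agreeing with the simulation.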

• The sample mean of the sequence is defined as
    M_n = (1/n) \sum_{j=1}^n X_j
• For large n, M_n can be used to estimate \mu, since
    E[M_n] = (1/n) \sum_{j=1}^n E[X_j] = \mu
    var(M_n) = (1/n^2) var(S_n) = n \sigma^2 / n^2 = \sigma^2 / n
• From the Chebyshev inequality,
    P[|M_n - \mu| \ge \epsilon] \le \sigma^2 / (n \epsilon^2), or equivalently P[|M_n - \mu| < \epsilon] \ge 1 - \sigma^2 / (n \epsilon^2)
  As n \to \infty, var(M_n) \to 0 and \sigma^2 / (n \epsilon^2) \to 0.
• The Weak Law of Large Numbers (WLLN):
    lim_{n \to \infty} P[|M_n - \mu| < \epsilon] = 1 for any \epsilon > 0
  The WLLN implies that for a large (fixed) value of n, the sample mean will be within \epsilon of the true mean with high probability.
• The Strong Law of Large Numbers (SLLN):
    P[lim_{n \to \infty} M_n = \mu] = 1
  The SLLN implies that, with probability 1, every sequence of sample means will approach and stay close to the true mean \mu.
• Example: Given an event A, we can estimate p = P[A] by performing a sequence of N Bernoulli trials and observing the relative frequency f_A(N) of A occurring. How large should N be to have P[|f_A(N) - p| \ge \epsilon] \le \delta?
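The Chebyshev bound \sigma^2/(n\epsilon^2) on P[|M_n − \mu| ≥ \epsilon] can be compared against empirical frequencies. A sketch using Bernoulli(1/2) samples (the choices of \epsilon, the sample sizes, and the repetition count are assumptions for illustration):

```python
import random

random.seed(2)

mu, sigma2 = 0.5, 0.25          # Bernoulli(1/2): mean 1/2, variance 1/4
eps, reps = 0.05, 2_000

empirical, bound = {}, {}
for n in (100, 400, 1600):
    misses = 0
    for _ in range(reps):
        m_n = sum(random.random() < 0.5 for _ in range(n)) / n   # sample mean
        if abs(m_n - mu) >= eps:
            misses += 1
    empirical[n] = misses / reps
    bound[n] = sigma2 / (n * eps**2)        # Chebyshev upper bound
    print(n, empirical[n], bound[n])
```

The empirical miss frequency sits well below the Chebyshev bound, which is loose but distribution-free; both shrink as n grows, as the WLLN predicts.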

  That is, we want a (1 - \delta) chance that the relative frequency is within \epsilon of P[A].

The Central Limit Theorem

• Let X_1, X_2, ..., X_n be iid RVs with finite mean and variance:
    E[X_i] = \mu < \infty, var(X_i) = \sigma^2 < \infty
• Let S_n = \sum_{i=1}^n X_i, and define Z_n as
    Z_n = (S_n - n \mu) / (\sigma \sqrt{n});
  Z_n has zero mean and unit variance.
• As n \to \infty, Z_n \to N(0, 1). That is,
    lim_{n \to \infty} P[Z_n \le z] = (1 / \sqrt{2 \pi}) \int_{-\infty}^z e^{-x^2 / 2} dx
• The convergence applies to any distribution of X with finite mean and finite variance. This is the Central Limit Theorem (CLT) and is widely used in EE.
• Examples:
  1. Suppose that cell-phone call durations are iid RVs with \mu = 8 and \sigma = 2 (minutes).
     – Estimate the probability of 100 calls taking over 840 minutes.
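For the cell-phone question, the CLT estimate is P[S_100 > 840] ≈ 1 − \Phi((840 − 800)/20) = 1 − \Phi(2). A sketch using only the standard library; the brute-force scan at the end also answers the follow-up question of how many calls are needed to be 90% sure the total exceeds 1000 minutes (the scan itself is an added illustration, not the notes' method):

```python
import math

def phi(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sigma = 8.0, 2.0                      # minutes, from the example
n = 100
p_over_840 = 1.0 - phi((840 - n * mu) / (sigma * math.sqrt(n)))
print(p_over_840)                         # 1 - phi(2) ≈ 0.0228

# Smallest n with P[S_n > 1000] >= 0.9 under the CLT approximation:
n_calls = 1
while 1.0 - phi((1000 - n_calls * mu) / (sigma * math.sqrt(n_calls))) < 0.9:
    n_calls += 1
print(n_calls)                            # 129
```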

     – After how many calls can we be 90% sure that the total time used is more than 1000 minutes?
  2. Does the CLT apply to Cauchy random variables?

Gaussian approximation for binomial probabilities

• A Binomial random variable is a sum of iid Bernoulli RVs:
    X = \sum_{i=1}^n Z_i, Z_i ~ Bern(p), so X ~ binomial(n, p).
• By the CLT, the Binomial cdf F_X(x) approaches a Gaussian cdf:
    P[X = k] \approx (1 / \sqrt{2 \pi n p (1-p)}) exp\{ -(k - np)^2 / (2 n p (1-p)) \}
  The approximation is best for k near np.
• Example: A digital communication link has bit-error probability p. Estimate the probability that an n-bit received message has at least k bits in error.

Convergence of sequences of RVs

• Given a sequence of RVs {X_n(\zeta)}: {X_n(\zeta)} can be viewed as a sequence of functions of \zeta.
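Returning to the Gaussian approximation for binomial probabilities above, the approximate pmf can be compared with the exact binomial pmf near k = np. A sketch (n = 100 and p = 0.5 are assumed illustration values):

```python
import math

def binom_pmf(n, k, p):
    """Exact Binomial(n, p) pmf."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def gauss_approx(n, k, p):
    """N(np, np(1-p)) density evaluated at k, per the CLT approximation."""
    var = n * p * (1 - p)
    return math.exp(-((k - n * p) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

n, p = 100, 0.5
for k in (45, 50, 55):
    print(k, binom_pmf(n, k, p), gauss_approx(n, k, p))
```

At k = 50 = np the two agree to roughly three decimal places; the match degrades out in the tails, consistent with the remark that the approximation is best for k near np.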

• For each \zeta, {X_n(\zeta)} is a sequence of numbers {x_1, x_2, x_3, ...}.
• A sequence {x_n} is said to converge to x if for any \epsilon > 0, there exists N such that |x_n - x| < \epsilon for all n > N. We write x_n \to x.
• In what sense does {X_n(\zeta)} converge to a random variable X(\zeta) as n \to \infty?

Types of convergence for a sequence of RVs:

• Sure convergence: {X_n(\zeta)} converges surely to X(\zeta) if
    X_n(\zeta) \to X(\zeta) as n \to \infty for all \zeta \in S.
  For every \zeta \in S, the sequence {X_n(\zeta)} converges to X(\zeta) as n \to \infty.
• Almost-sure convergence: {X_n(\zeta)} converges almost surely to X(\zeta) if
    P[\zeta : X_n(\zeta) \to X(\zeta) as n \to \infty] = 1.
  X_n(\zeta) converges to X(\zeta) as n \to \infty for all \zeta in S, except possibly on a set of zero probability. The strong LLN is an example of almost-sure convergence.

• Mean-square convergence: {X_n(\zeta)} converges in the mean-square sense to X(\zeta) if
    E[(X_n(\zeta) - X(\zeta))^2] \to 0 as n \to \infty.
  Here the convergence is of a deterministic sequence, the expected squared error of X_n(\zeta).
  – Cauchy criterion: {X_n(\zeta)} converges in the mean-square sense if and only if
      E[(X_n(\zeta) - X_m(\zeta))^2] \to 0 as n \to \infty and m \to \infty.
• Convergence in probability: {X_n(\zeta)} converges in probability to X(\zeta) if, for any \epsilon > 0,
    P[|X_n(\zeta) - X(\zeta)| > \epsilon] \to 0 as n \to \infty.
  For each \zeta \in S, the sequence X_n(\zeta) is not required to stay within \epsilon of X(\zeta) as n \to \infty, but only to be within \epsilon with high probability. The WLLN is an example of convergence in probability.
• Convergence in distribution: {X_n(\zeta)} with cdfs {F_n(x)} converges in distribution to X with cdf F(x) if
    F_n(x) \to F(x) as n \to \infty
  for all x at which F(x) is continuous.

  The CLT is an example of convergence in distribution.
• Relationship among the different types of convergence:
    sure convergence ⇒ almost-sure convergence ⇒ convergence in probability ⇒ convergence in distribution
    mean-square convergence ⇒ convergence in probability
  Mean-square convergence does not imply almost-sure convergence, and vice versa.
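As a concrete instance of mean-square convergence (which in turn implies convergence in probability), the sample mean satisfies E[(M_n − \mu)^2] = var(M_n) = \sigma^2/n \to 0. A simulation sketch with Uniform(0,1) samples (an assumed choice, as are the sample sizes and repetition count):

```python
import random

random.seed(3)

mu, sigma2 = 0.5, 1.0 / 12.0    # Uniform(0, 1): mean 1/2, variance 1/12
reps = 5_000

mse = {}
for n in (10, 100, 1000):
    total = 0.0
    for _ in range(reps):
        m_n = sum(random.random() for _ in range(n)) / n   # sample mean M_n
        total += (m_n - mu) ** 2
    mse[n] = total / reps
    print(n, mse[n], sigma2 / n)   # empirical E[(M_n - mu)^2] vs sigma^2 / n
```

The empirical mean-square error tracks \sigma^2/n and shrinks by a factor of 10 for each tenfold increase in n.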

