
Lecture 1. Random vectors and multivariate normal distribution


1.1 Moments of a random vector

A random vector $X$ of size $p$ is a column vector consisting of $p$ random variables $X_1, \dots, X_p$, written $X = (X_1, \dots, X_p)'$. The mean (or expectation) of $X$ is defined as the vector of expectations
\[
E(X) = \big( E(X_1), \dots, E(X_p) \big)',
\]
which exists if $E|X_i| < \infty$ for all $i = 1, \dots, p$.

Let $X$ be a random vector of size $p$ and $Y$ a random vector of size $q$. For any non-random matrices $A$ ($m \times p$), $B$ ($m \times q$), $C$ ($1 \times n$), and $D$ ($m \times n$),
\[
E(AX + BY) = A\,E(X) + B\,E(Y), \qquad E(AXC + D) = A\,E(X)\,C + D.
\]

For a random vector $X$ of size $p$ satisfying $E(X_i^2) < \infty$ for all $i = 1, \dots, p$, the variance-covariance matrix (or just covariance matrix) of $X$ is
\[
\mathrm{Cov}(X) = E\big[ (X - EX)(X - EX)' \big].
\]
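As a quick numerical illustration, the linearity rule above can be checked by Monte Carlo; the following is a minimal sketch assuming NumPy, with made-up matrices $A$, $B$ and mean vectors:

```python
import numpy as np

# Sketch (assumed setup): Monte Carlo check of the linearity rule
# E(AX + BY) = A E(X) + B E(Y).  A, B, mu_X, mu_Y are made-up examples.
rng = np.random.default_rng(0)
p, q, n = 3, 2, 100_000           # sizes of X and Y, and the sample size

A = np.array([[1.0, 0.0, 2.0],
              [0.5, 1.0, -1.0]])  # m x p with m = 2
B = np.array([[2.0, 1.0],
              [0.0, 3.0]])        # m x q

mu_X = np.array([1.0, -1.0, 2.0])
mu_Y = np.array([0.5, 0.5])

X = rng.normal(size=(n, p)) + mu_X      # n draws of X with E(X) = mu_X
Y = rng.normal(size=(n, q)) + mu_Y      # n draws of Y with E(Y) = mu_Y

lhs = (X @ A.T + Y @ B.T).mean(axis=0)  # empirical E(AX + BY)
rhs = A @ mu_X + B @ mu_Y               # A E(X) + B E(Y)
print(np.max(np.abs(lhs - rhs)))        # small Monte Carlo error
```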

The covariance matrix of $X$ is a $p \times p$ square, symmetric matrix. In particular, $\sigma_{ij} = \mathrm{Cov}(X_i, X_j) = \mathrm{Cov}(X_j, X_i) = \sigma_{ji}$. Basic properties:

1. $\mathrm{Cov}(X) = E(XX') - E(X)E(X)'$.
2. If $c$ ($p \times 1$) is a constant vector, then $\mathrm{Cov}(X + c) = \mathrm{Cov}(X)$.
3. If $A$ ($m \times p$) is a constant matrix, then $\mathrm{Cov}(AX) = A\,\mathrm{Cov}(X)\,A'$.

Lemma. A $p \times p$ matrix is a covariance matrix if and only if it is non-negative definite.

1.2 Multivariate normal distribution - nonsingular case

Recall that the univariate normal distribution with mean $\mu$ and variance $\sigma^2$ has density
\[
f(x) = (2\pi\sigma^2)^{-1/2} \exp\!\left[ -\tfrac{1}{2}(x - \mu)\,\sigma^{-2}(x - \mu) \right].
\]
Similarly, the multivariate normal distribution can be defined for the special case of a nonsingular covariance matrix; throughout this section, $\mu \in \mathbb{R}^p$ and $\Sigma$ ($p \times p$) $> 0$.
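Stepping back to the covariance properties above: properties 2 and 3 hold exactly for empirical covariance matrices as well, so they can be checked directly on any simulated sample. A minimal sketch, assuming NumPy, with made-up constants $c$ and $A$:

```python
import numpy as np

# Sketch (assumed setup): properties 2 and 3 hold exactly for the
# empirical covariance matrix of any sample, so they can be checked
# directly.  The sample and the constants c, A are made up.
rng = np.random.default_rng(1)
n, p, m = 500, 3, 2

X = rng.normal(size=(p, n))             # columns are n observations of X
S = np.cov(X)                           # empirical Cov(X), p x p

c = np.array([[10.0], [-5.0], [2.0]])   # constant p x 1 shift
A = rng.normal(size=(m, p))             # constant m x p matrix

print(np.allclose(np.cov(X + c), S))            # Cov(X + c) = Cov(X)
print(np.allclose(np.cov(A @ X), A @ S @ A.T))  # Cov(AX) = A Cov(X) A'
```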

A random vector $X \in \mathbb{R}^p$ has the $p$-variate normal distribution with mean $\mu$ and covariance matrix $\Sigma$ if it has probability density function
\[
f(x) = |2\pi\Sigma|^{-1/2} \exp\!\left[ -\tfrac{1}{2}(x - \mu)'\Sigma^{-1}(x - \mu) \right], \qquad (1)
\]
for $x \in \mathbb{R}^p$. We use the notation $X \sim N_p(\mu, \Sigma)$.

Theorem. If $X \sim N_p(\mu, \Sigma)$ for $\Sigma > 0$, then:

1. $\Sigma^{-1/2}(X - \mu) \sim N_p(0, I_p)$,
2. $X = \Sigma^{1/2} Y + \mu$ where $Y \sim N_p(0, I_p)$,
3. $E(X) = \mu$ and $\mathrm{Cov}(X) = \Sigma$,
4. for any fixed $v \in \mathbb{R}^p$, $v'X$ is univariate normal,
5. $(X - \mu)'\Sigma^{-1}(X - \mu) \sim \chi^2(p)$.

Example 1 (Bivariate normal).

Geometry of multivariate normal. The multivariate normal distribution has location parameter $\mu$ and shape parameter $\Sigma > 0$. In particular, consider the contour of equal density
\[
E_c = \{ x \in \mathbb{R}^p : f(x) = c_0 \} = \{ x \in \mathbb{R}^p : (x - \mu)'\Sigma^{-1}(x - \mu) = c^2 \}.
\]
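Both density formula (1) and property 5 of the theorem lend themselves to a numerical check; the following sketch assumes NumPy and SciPy are available, with a made-up $\mu$ and $\Sigma$:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Sketch (assumed setup): evaluate density (1) directly and compare with
# SciPy's implementation, then check property 5: the Mahalanobis distance
# (X - mu)' Sigma^{-1} (X - mu) of MVN draws behaves like chi^2(p).
rng = np.random.default_rng(2)
p = 3
mu = np.array([1.0, 0.0, -1.0])
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])   # made-up positive definite matrix

# Density (1) at an arbitrary point x.
x = np.array([0.5, 0.2, -0.5])
d = x - mu
f = np.linalg.det(2 * np.pi * Sigma) ** -0.5 \
    * np.exp(-0.5 * d @ np.linalg.solve(Sigma, d))
print(np.isclose(f, multivariate_normal(mu, Sigma).pdf(x)))

# Property 5: the mean of the Mahalanobis distances should be near
# E[chi^2(p)] = p.
X = rng.multivariate_normal(mu, Sigma, size=100_000)
D = X - mu
maha = np.einsum('ij,ij->i', D @ np.linalg.inv(Sigma), D)
print(np.isclose(maha.mean(), p, atol=0.1))
```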

Consider the spectral decomposition $\Sigma = U\Lambda U'$, where $U = [u_1, \dots, u_p]$ and $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_p)$ with $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_p > 0$. Then $E_c$, for any $c > 0$, is an ellipsoid centered at $\mu$ with principal axes $u_i$ of length proportional to $\sqrt{\lambda_i}$. If $\Sigma = I_p$, the ellipsoid is the surface of a sphere of radius $c$ centered at $\mu$.

As an example, consider a bivariate normal distribution $N_2(0, \Sigma)$ with
\[
\Sigma = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}
= \begin{bmatrix} \cos(\pi/4) & -\sin(\pi/4) \\ \sin(\pi/4) & \cos(\pi/4) \end{bmatrix}
\begin{bmatrix} 3 & 0 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} \cos(\pi/4) & -\sin(\pi/4) \\ \sin(\pi/4) & \cos(\pi/4) \end{bmatrix}'.
\]
The location of the distribution is the origin ($\mu = 0$), and the shape ($\Sigma$) of the distribution is determined by the ellipse given by the two principal axes (one along the 45-degree line, the other along the $-45$-degree line).
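The spectral decomposition in this example can be verified directly; a minimal sketch assuming NumPy:

```python
import numpy as np

# Sketch: the spectral decomposition of Sigma = [[2, 1], [1, 2]] used in
# the example above; eigenvalues 3 and 1, with principal axes on the
# +45 and -45 degree lines.
Sigma = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
lam, U = np.linalg.eigh(Sigma)      # eigenvalues in ascending order
lam, U = lam[::-1], U[:, ::-1]      # reorder so lambda_1 >= lambda_2

print(np.allclose(lam, [3.0, 1.0]))                # eigenvalues 3 and 1
print(np.allclose(U @ np.diag(lam) @ U.T, Sigma))  # Sigma = U Lam U'
print(np.isclose(abs(U[0, 0]), abs(U[1, 0])))      # first axis on 45-deg line
```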

Figure 1 shows the density function and the corresponding contours $E_c$ for several values of $c$.

Figure 1: Bivariate normal density and its contours.

Notice that an ellipse in the plane can represent a bivariate normal distribution. In higher dimensions $d > 2$, ellipsoids play the similar role.

1.3 General multivariate normal distribution

The characteristic function of a random vector $X$ is defined as
\[
\varphi_X(t) = E\big( e^{it'X} \big), \qquad t \in \mathbb{R}^p.
\]
Note that the characteristic function is $\mathbb{C}$-valued and always exists. We collect some important facts:

1. $\varphi_X(t) = \varphi_Y(t)$ for all $t$ if and only if $X \overset{L}{=} Y$ (equality in distribution).
2. If $X$ and $Y$ are independent, then $\varphi_{X+Y}(t) = \varphi_X(t)\,\varphi_Y(t)$.

3. $X_n \overset{L}{\to} X$ if and only if $\varphi_{X_n}(t) \to \varphi_X(t)$ for all $t$.

An important corollary follows from the uniqueness of the characteristic function.

Proposition 4 (Cramer-Wold device). If $X$ is a $p \times 1$ random vector, then its distribution is uniquely determined by the distributions of the linear functions $t'X$, for every $t \in \mathbb{R}^p$.

Proposition 4 paves the way to the definition of the (general) multivariate normal distribution.

Definition. A random vector $X \in \mathbb{R}^p$ has a multivariate normal distribution if $t'X$ is univariate normal for all $t \in \mathbb{R}^p$.

The definition says that $X$ is MVN if every projection of $X$ onto a 1-dimensional subspace is normal, with the convention that a degenerate distribution at $c$ is normal with variance 0, i.e., $c \sim N(c, 0)$.
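The projection definition can be illustrated by simulation: if $X \sim N_p(\mu, \Sigma)$, then $t'X$ should be univariate normal with mean $t'\mu$ and variance $t'\Sigma t$. A sketch assuming NumPy, with made-up $\mu$, $\Sigma$, and $t$ (only the first two moments are checked here):

```python
import numpy as np

# Sketch (assumed setup): for X ~ N_p(mu, Sigma), every projection t'X
# is univariate normal with mean t'mu and variance t' Sigma t; here we
# check the two moments by simulation.  mu, Sigma, t are made up.
rng = np.random.default_rng(3)
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.3, 0.0],
                  [0.3, 1.0, 0.4],
                  [0.0, 0.4, 1.5]])
X = rng.multivariate_normal(mu, Sigma, size=200_000)

t = np.array([1.0, 2.0, -1.0])
proj = X @ t                              # draws of t'X
print(np.isclose(proj.mean(), t @ mu, atol=0.05))
print(np.isclose(proj.var(), t @ Sigma @ t, atol=0.2))
```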

The definition does not require that $\mathrm{Cov}(X)$ be nonsingular. The characteristic function of a multivariate normal distribution with mean $\mu$ and covariance matrix $\Sigma \ge 0$ is, for $t \in \mathbb{R}^p$,
\[
\varphi(t) = \exp\!\left[ it'\mu - \tfrac{1}{2} t'\Sigma t \right].
\]
If $\Sigma > 0$, then the pdf exists and is the same as (1). In the following, the notation $X \sim N(\mu, \Sigma)$ is valid for a non-negative definite $\Sigma$. However, whenever $\Sigma^{-1}$ appears in a statement, $\Sigma$ is assumed to be positive definite.

Proposition. If $X \sim N_p(\mu, \Sigma)$ and $Y = AX + b$ for $A$ ($q \times p$) and $b$ ($q \times 1$), then $Y \sim N_q(A\mu + b, A\Sigma A')$.

The next two results concern independence and conditional distributions of normal random vectors. Let $X_1$ and $X_2$ be the partition of $X$ whose dimensions are $r$ and $s$, $r + s = p$, and suppose $\mu$ and $\Sigma$ are partitioned accordingly.
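The transformation rule $\mathrm{Cov}(AX) = A\Sigma A'$ is exactly what drives the conditional-distribution argument that follows: with $\Sigma$ partitioned into blocks, the transformed vector $X_1 - \Sigma_{12}\Sigma_{22}^{-1}X_2$ has zero covariance with $X_2$, and its own covariance is $\Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}$. A numerical sketch assuming NumPy, with a made-up partitioned $\Sigma$:

```python
import numpy as np

# Sketch (assumed setup): the construction used in the conditional-
# distribution proof.  With Sigma partitioned into blocks, the vector
# X1* = X1 - S12 S22^{-1} X2 has zero covariance with X2, and Cov(X1*)
# is the conditional covariance S11 - S12 S22^{-1} S21.  Sigma is made up.
Sigma = np.array([[4.0, 1.0, 0.5],
                  [1.0, 3.0, 0.8],
                  [0.5, 0.8, 2.0]])
r, s = 1, 2                                # dim(X1) = 1, dim(X2) = 2
S11, S12 = Sigma[:r, :r], Sigma[:r, r:]
S21, S22 = Sigma[r:, :r], Sigma[r:, r:]

B = S12 @ np.linalg.inv(S22)               # regression coefficient S12 S22^{-1}
A = np.block([[np.eye(r), -B],
              [np.zeros((s, r)), np.eye(s)]])
C = A @ Sigma @ A.T                        # Cov(AX) = A Sigma A'

print(np.allclose(C[:r, r:], 0))           # Cov(X1*, X2) = 0
print(np.allclose(C[:r, :r], S11 - S12 @ np.linalg.inv(S22) @ S21))
```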

That is,
\[
X = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \sim N_p\!\left( \begin{bmatrix} \mu_1 \\ \mu_2 \end{bmatrix}, \begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{bmatrix} \right).
\]

Proposition. Jointly normal random vectors $X_1$ and $X_2$ are independent if and only if $\mathrm{Cov}(X_1, X_2) = \Sigma_{12} = 0$.

Proposition. The conditional distribution of $X_1$ given $X_2 = x_2$ is
\[
N_r\!\left( \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2 - \mu_2),\ \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21} \right).
\]

Proof. Define new random vectors $X_1^* = X_1 - \Sigma_{12}\Sigma_{22}^{-1}X_2$ and $X_2^* = X_2$, that is,
\[
X^* = \begin{bmatrix} X_1^* \\ X_2^* \end{bmatrix} = AX, \qquad A = \begin{bmatrix} I_r & -\Sigma_{12}\Sigma_{22}^{-1} \\ 0_{s \times r} & I_s \end{bmatrix}.
\]
By Proposition 6, $X^*$ is multivariate normal. An inspection of the covariance matrix of $X^*$ shows that $X_1^*$ and $X_2^*$ are independent. The result follows by writing $X_1 = X_1^* + \Sigma_{12}\Sigma_{22}^{-1}X_2$, so that the distribution (law) of $X_1$ given $X_2 = x_2$ is
\[
\mathcal{L}(X_1 \mid X_2 = x_2) = \mathcal{L}(X_1^* + \Sigma_{12}\Sigma_{22}^{-1}X_2 \mid X_2 = x_2) = \mathcal{L}(X_1^* + \Sigma_{12}\Sigma_{22}^{-1}x_2 \mid X_2 = x_2),
\]
which is a MVN.

1.4 Multivariate central limit theorem

Theorem. If $X_1, X_2, \dots \in \mathbb{R}^p$ are i.i.d. with $E(X_i) = \mu$ and $\mathrm{Cov}(X_i) = \Sigma$, then
\[
n^{-1/2} \sum_{j=1}^{n} (X_j - \mu) \overset{L}{\to} N_p(0, \Sigma) \quad \text{as } n \to \infty,
\]
or, equivalently,
\[
n^{1/2}(\bar{X}_n - \mu) \overset{L}{\to} N_p(0, \Sigma) \quad \text{as } n \to \infty, \qquad \text{where } \bar{X}_n = \frac{1}{n}\sum_{j=1}^{n} X_j.
\]

The delta method can be used to establish asymptotic normality of $h(\bar{X}_n)$ for a smooth function $h: \mathbb{R}^p \to \mathbb{R}$. Denote by $\nabla h(x)$ the gradient of $h$ at $x$. Using the first two terms of the Taylor series,
\[
h(\bar{X}_n) = h(\mu) + (\nabla h(\mu))'(\bar{X}_n - \mu) + O_p\big( \|\bar{X}_n - \mu\|_2^2 \big),
\]
so that
\[
\sqrt{n}\,\big( h(\bar{X}_n) - h(\mu) \big) = (\nabla h(\mu))'\,\sqrt{n}(\bar{X}_n - \mu) + \sqrt{n}\,O_p\big( \|\bar{X}_n - \mu\|_2^2 \big).
\]
Since $\sqrt{n}\,\|\bar{X}_n - \mu\|_2^2 = O_p(n^{-1/2})$, Slutsky's theorem gives
\[
\sqrt{n}\,\big( h(\bar{X}_n) - h(\mu) \big) \overset{L}{\to} (\nabla h(\mu))'\, N_p(0, \Sigma) = N\!\big( 0,\ (\nabla h(\mu))'\,\Sigma\,\nabla h(\mu) \big) \quad \text{as } n \to \infty.
\]

1.5 Quadratic forms in normal random vectors

Let $X \sim N_p(\mu, \Sigma)$. A quadratic form in $X$ is a random variable of the form
\[
Y = X'AX = \sum_{i=1}^{p} \sum_{j=1}^{p} X_i\, a_{ij}\, X_j,
\]
where $A$ is a $p \times p$ symmetric matrix and $X_i$ is the $i$th element of $X$. We are interested in the distribution of quadratic forms and in the conditions under which two quadratic forms are independent.

A special case: if $X \sim N_p(0, I_p)$ and $A = I_p$, then
\[
Y = X'AX = X'X = \sum_{i=1}^{p} X_i^2 \sim \chi^2(p).
\]

Recall the following:

1. A $p \times p$ matrix $A$ is idempotent if $A^2 = A$.
2. If $A$ is symmetric, then $A = \Gamma \Lambda \Gamma'$, where $\Lambda = \mathrm{diag}(\lambda_i)$ and $\Gamma$ is orthogonal.
3. If $A$ is symmetric and idempotent, then (a) its eigenvalues are either 0 or 1, and (b) $\mathrm{rank}(A) = \#\{\text{nonzero eigenvalues}\} = \mathrm{trace}(A)$.

Theorem. Let $X \sim N_p(0, \sigma^2 I)$ and let $A$ be a $p \times p$ symmetric matrix.
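The $\chi^2(p)$ special case and the idempotent-matrix facts can both be checked numerically; a sketch assuming NumPy, with a made-up projection matrix standing in for a symmetric idempotent $A$:

```python
import numpy as np

# Sketch (assumed setup): the special case above, X ~ N_p(0, I_p) and
# A = I_p, so Y = X'X ~ chi^2(p); checked via E(Y) = p and Var(Y) = 2p.
# Also the symmetric-idempotent facts, using a made-up projection matrix.
rng = np.random.default_rng(6)
p, n = 5, 200_000

X = rng.normal(size=(n, p))
Y = np.einsum('ij,ij->i', X, X)               # Y = X'X for each draw
print(np.isclose(Y.mean(), p, atol=0.1))      # E[chi^2(p)] = p
print(np.isclose(Y.var(), 2 * p, rtol=0.05))  # Var[chi^2(p)] = 2p

H = rng.normal(size=(p, 2))
A = H @ np.linalg.inv(H.T @ H) @ H.T          # symmetric idempotent, rank 2
print(np.allclose(A @ A, A))                  # A^2 = A
eig = np.sort(np.linalg.eigvalsh(A))
print(np.allclose(eig, [0, 0, 0, 1, 1]))      # eigenvalues are 0 or 1
print(np.isclose(np.trace(A), 2.0))           # trace = rank = 2
```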

