
Gaussian Random Vectors - University of Utah





1. The multivariate normal distribution

Let $X := (X_1, \ldots, X_n)'$ be a random vector. We say that $X$ is a Gaussian random vector if we can write $X = \mu + AZ$, where $\mu \in \mathbb{R}^n$, $A$ is an $n \times m$ matrix, and $Z := (Z_1, \ldots, Z_m)'$ is an $m$-vector of i.i.d. standard normal random variables.

Proposition 1. Let $X$ be a Gaussian random vector, as above. Then
$$\mathrm{E}X = \mu, \qquad \mathrm{Var}(X) := \Sigma = AA', \qquad M_X(t) = e^{t'\mu + \frac{1}{2}\|A't\|^2} = e^{t'\mu + \frac{1}{2}t'\Sigma t}$$
for all $t \in \mathbb{R}^n$. Thanks to the uniqueness theorem for MGFs, it follows that the distribution of $X$ is determined by $\mu$, $\Sigma$, and the fact that it is multivariate normal.
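As a quick numerical sanity check of Proposition 1, we can simulate $X = \mu + AZ$ and compare the sample mean and sample covariance with $\mu$ and $AA'$. This sketch is not part of the original notes; the particular $\mu$, $A$, and sample size are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example parameters: mu in R^2 and a non-square 2x3 matrix A,
# so that Z lives in R^3 while X lives in R^2.
mu = np.array([1.0, -2.0])
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 2.0]])

# Draw many samples of X = mu + A Z with Z i.i.d. standard normal.
Z = rng.standard_normal((3, 200_000))
X = mu[:, None] + A @ Z

# The sample mean and covariance should approximate mu and Sigma = A A'.
Sigma = A @ A.T
print(np.allclose(X.mean(axis=1), mu, atol=0.05))
print(np.allclose(np.cov(X), Sigma, atol=0.1))
```

With a non-square $A$ (here $2 \times 3$), this also illustrates that $Z$ may have a different dimension than $X$.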

From now on, we sometimes write $X \sim N_n(\mu, \Sigma)$ when we mean that $M_X(t) = \exp(t'\mu + \frac{1}{2}t'\Sigma t)$. Interestingly enough, the choice of $A$ and $Z$ is typically not unique; only $(\mu, \Sigma)$ influences the distribution of $X$.

Proof. The expectation of $X$ is $\mu$, since $\mathrm{E}(AZ) = A\,\mathrm{E}(Z) = 0$. Also,
$$\mathrm{E}(XX') = \mathrm{E}\big[(\mu + AZ)(\mu + AZ)'\big] = \mu\mu' + A\,\mathrm{E}(ZZ')A',$$
where the cross terms vanish because $\mathrm{E}(Z) = 0$. Since $\mathrm{E}(ZZ') = I$, the variance-covariance matrix of $X$ is $\mathrm{E}(XX') - (\mathrm{E}X)(\mathrm{E}X)' = \mathrm{E}(XX') - \mu\mu' = AA'$, as desired. Finally, note that $M_X(t) = \exp(t'\mu)\,M_Z(A't)$. This establishes the result on the MGF of $X$, since
$$M_Z(s) = \prod_{j=1}^{m} \exp(s_j^2/2) = \exp\big(\tfrac{1}{2}\|s\|^2\big) \quad \text{for all } s \in \mathbb{R}^m.$$

We say that $X$ has the multivariate normal distribution with parameters $\mu$ and $\Sigma := AA'$, and write this as $X \sim N_n(\mu, AA')$.

Theorem 2. $X := (X_1, \ldots, X_n)'$ has a multivariate normal distribution if and only if $t'X = \sum_{j=1}^{n} t_j X_j$ has a normal distribution on the line for every $t \in \mathbb{R}^n$. That is, $X_1, \ldots, X_n$ are jointly normally distributed if and only if all of their linear combinations are normally distributed.

Note that the distribution of $X$ depends on $A$ only through the positive semidefinite matrix $\Sigma := AA'$. Sometimes we also say that $X_1, \ldots, X_n$ are jointly normal [or Gaussian] when $X := (X_1, \ldots, X_n)'$ has a multivariate normal distribution.
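The remark that the distribution depends on $A$ only through $AA'$ can be made concrete: right-multiplying $A$ by any orthogonal matrix $Q$ changes $A$ but not $AA'$, since $(AQ)(AQ)' = AQQ'A' = AA'$. A small sketch (the particular $A$ and rotation angle are arbitrary assumptions):

```python
import numpy as np

# An assumed 2x2 example matrix A.
A = np.array([[2.0, 0.0],
              [1.0, 1.0]])

# Any orthogonal Q works; here, a rotation by an arbitrary angle.
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

B = A @ Q                              # a genuinely different matrix...
print(np.allclose(A @ A.T, B @ B.T))   # ...with the same Sigma = A A'
```

So $\mu + AZ$ and $\mu + (AQ)Z$ have the same law even though $A \neq AQ$.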

Proof. If $X \sim N_n(\mu, AA')$, then we can write it as $X = \mu + AZ$, as before. In that case, $t'X = t'\mu + t'AZ$ is a linear combination of $Z_1, \ldots, Z_m$, whence it has a normal distribution with mean $t'\mu = t_1\mu_1 + \cdots + t_n\mu_n$ and variance $t'AA't = \|A't\|^2$. For the converse, suppose that $t'X$ has a normal distribution for every $t \in \mathbb{R}^n$. Let $\mu := \mathrm{E}X$ and $\Sigma := \mathrm{Var}(X)$, and observe that $t'X$ has mean $t'\mu$ and variance $t'\Sigma t$. Therefore, the MGF of the univariate normal $t'X$ is $M_{t'X}(s) = \exp(s\,t'\mu + \frac{1}{2}s^2\,t'\Sigma t)$ for all $s \in \mathbb{R}$. Note that $M_{t'X}(s) = \mathrm{E}\exp(s\,t'X)$. Therefore, apply this with $s := 1$ to see that $M_{t'X}(1) = M_X(t)$ is the MGF of a multivariate normal.

The uniqueness theorem for MGFs (Theorem 1, p. 27) implies the result.

2. The nondegenerate case

Suppose $X \sim N_n(\mu, \Sigma)$, and recall that $\Sigma$ is always positive semidefinite. We say that $X$ is nondegenerate when $\Sigma$ is positive definite (equivalently, invertible). Take, in particular, $X \sim N_1(\mu, \sigma^2)$; $\mu$ can be any real number and $\sigma^2$ is a positive semidefinite $1 \times 1$ matrix; that is, $\sigma^2 \ge 0$. The distribution of $X$ is defined via its MGF as $M_X(t) = e^{t\mu + \frac{1}{2}\sigma^2 t^2}$. When $X$ is nondegenerate ($\sigma^2 > 0$), $X \sim N(\mu, \sigma^2)$. If $\sigma^2 = 0$, then $M_X(t) = \exp(t\mu)$; therefore, by the uniqueness theorem for MGFs, $P\{X = \mu\} = 1$.

6 Therefore, N1 ( 2 ) is the generalization of N( 2 ) in order to include the case that = 0. We will not write N1 ( 2 ); instead we always write N( 2 ) as no confusion should arise. 2. The nondegenerate case 33. Theorem 3. X N ( ) has a probability density function if and only if it is nondegenerate. In that case, the pdf of X is . 1 1 1. X ( ) = exp ( ) ( ). (2 ) /2 (det )1/2 2. for all R . Proof. First of all let us consider the case that X is degenerate. In that case has some number < of strictly-positive eigenvalues. The proof of Theorem 2 tells us that we can write X = AZ + , where Z is a -dimensional vector of standard normals and A is an matrix.
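The density formula in Theorem 3 can be checked numerically. The sketch below (with an assumed nondegenerate $2 \times 2$ covariance matrix; none of these numbers come from the notes) evaluates the pdf on a grid and confirms that a Riemann sum of it is approximately 1:

```python
import numpy as np

# Assumed nondegenerate example in R^2.
mu = np.array([0.0, 0.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)
det_Sigma = np.linalg.det(Sigma)

# Evaluate the pdf from Theorem 3 on a wide grid (n = 2, so the
# normalizing constant is 2*pi*sqrt(det Sigma)).
h = 0.05
grid = np.arange(-8.0, 8.0, h)
X1, X2 = np.meshgrid(grid, grid)
D = np.stack([X1 - mu[0], X2 - mu[1]], axis=-1)          # x - mu at each grid point
quad = np.einsum('...i,ij,...j->...', D, Sigma_inv, D)   # (x-mu)' Sigma^-1 (x-mu)
density = np.exp(-0.5 * quad) / (2 * np.pi * det_Sigma ** 0.5)

# A Riemann sum of the density should be very close to 1.
print(round(density.sum() * h * h, 3))
```

The grid extends more than five standard deviations in each direction, so the truncated tail mass is negligible.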

Consider the $r$-dimensional space
$$E := \{x \in \mathbb{R}^n : x = Az + \mu \text{ for some } z \in \mathbb{R}^m\}.$$
Because $P\{Z \in \mathbb{R}^m\} = 1$, it follows that $P\{X \in E\} = 1$. If $X$ had a pdf $f_X$, then
$$1 = P\{X \in E\} = \int_E f_X(x)\,dx.$$
But the $n$-dimensional volume of $E$ is zero, since the dimension of $E$ is $r < n$. This creates a contradiction [unless $X$ did not have a pdf, that is]. If $X$ is nondegenerate, then we can write $X = AZ + \mu$, where $Z$ is an $n$-vector of standard normals and $\Sigma = AA'$ is invertible; see the proof of Theorem 2. Recall that the choice of $A$ is not unique; in this case, we can always choose $A := \Sigma^{1/2}$, because $\Sigma^{1/2}Z + \mu \sim N_n(\mu, \Sigma)$.
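The choice $A := \Sigma^{1/2}$ used in the proof can be computed in practice from the eigendecomposition $\Sigma = V\Lambda V'$, taking $\Sigma^{1/2} = V\Lambda^{1/2}V'$. A short sketch with an assumed $\Sigma$ (the matrix is an arbitrary positive definite example):

```python
import numpy as np

# Assumed positive definite covariance matrix.
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

# Symmetric square root via the eigendecomposition Sigma = V diag(w) V'.
w, V = np.linalg.eigh(Sigma)
A = V @ np.diag(np.sqrt(w)) @ V.T     # Sigma^{1/2}

print(np.allclose(A @ A.T, Sigma))    # A A' recovers Sigma
print(np.allclose(A, A.T))            # and this particular A is symmetric
```

Since this $A$ is symmetric, $AA' = A^2 = \Sigma$, which is exactly the factorization the proof requires.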

In other words,
$$X = AZ + \mu = \Sigma^{1/2}Z + \mu, \qquad Z := (Z_1, \ldots, Z_n)'.$$
If $x = Az + \mu$, then $z = \Sigma^{-1/2}(x - \mu)$. Therefore, the change-of-variables formula of elementary probability implies that
$$f_X(x) = \frac{f_Z\big(\Sigma^{-1/2}(x - \mu)\big)}{|\det J|},$$
as long as $\det J \neq 0$, where $J := A$ is the (constant) Jacobian matrix of the linear map $z \mapsto Az + \mu$. Because $\det \Sigma = \det(AA') = (\det A)^2$, it follows that $|\det A| = (\det \Sigma)^{1/2}$, and hence
$$f_X(x) = \frac{f_Z\big(\Sigma^{-1/2}(x - \mu)\big)}{(\det \Sigma)^{1/2}}.$$
Because of the independence of the $Z_j$'s,
$$f_Z(z) = \prod_{j=1}^{n} \frac{e^{-z_j^2/2}}{\sqrt{2\pi}} = \frac{1}{(2\pi)^{n/2}}\,e^{-\|z\|^2/2}$$
for all $z \in \mathbb{R}^n$.

Therefore,
$$f_X(x) = \frac{1}{(2\pi)^{n/2}(\det \Sigma)^{1/2}} \exp\Big(-\frac{1}{2}(x - \mu)'\Sigma^{-1}(x - \mu)\Big),$$
and the result follows.

3. The bivariate normal distribution

A bivariate normal distribution has the form $N_2(\mu, \Sigma)$, where $\mu_1 = \mathrm{E}X_1$, $\mu_2 = \mathrm{E}X_2$, $\Sigma_{11} = \mathrm{Var}(X_1) := \sigma_1^2 > 0$, $\Sigma_{22} = \mathrm{Var}(X_2) := \sigma_2^2 > 0$, and $\Sigma_{12} = \Sigma_{21} = \mathrm{Cov}(X_1, X_2)$. Let
$$\rho := \mathrm{Corr}(X_1, X_2) := \frac{\mathrm{Cov}(X_1, X_2)}{\sqrt{\mathrm{Var}(X_1)\,\mathrm{Var}(X_2)}}$$
denote the correlation between $X_1$ and $X_2$, and recall that $-1 \le \rho \le 1$. Then $\Sigma_{12} = \Sigma_{21} = \rho\sigma_1\sigma_2$, whence
$$\Sigma = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix}.$$
Since $\det \Sigma = \sigma_1^2\sigma_2^2(1 - \rho^2)$, it follows immediately that our bivariate normal distribution is nondegenerate if and only if $-1 < \rho < 1$, in which case
$$\Sigma^{-1} = \frac{1}{1 - \rho^2} \begin{pmatrix} \dfrac{1}{\sigma_1^2} & -\dfrac{\rho}{\sigma_1\sigma_2} \\ -\dfrac{\rho}{\sigma_1\sigma_2} & \dfrac{1}{\sigma_2^2} \end{pmatrix}.$$
Because
$$a'\Sigma^{-1}a = \frac{1}{1 - \rho^2}\left[\frac{a_1^2}{\sigma_1^2} - \frac{2\rho a_1 a_2}{\sigma_1\sigma_2} + \frac{a_2^2}{\sigma_2^2}\right]$$
for all $a \in \mathbb{R}^2$, the pdf of $X = (X_1, X_2)'$ in the nondegenerate case is
$$f_X(x_1, x_2) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1 - \rho^2}} \exp\left(-\frac{1}{2(1 - \rho^2)}\left[\frac{(x_1 - \mu_1)^2}{\sigma_1^2} - \frac{2\rho(x_1 - \mu_1)(x_2 - \mu_2)}{\sigma_1\sigma_2} + \frac{(x_2 - \mu_2)^2}{\sigma_2^2}\right]\right).$$

But of course degenerate cases are also possible. For instance, suppose $Z \sim N(0, 1)$ and define $X := (Z, Z)'$. Then $X = AZ$, where $A := (1, 1)'$, whence
$$\Sigma = AA' = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$$
is singular. In general, if $X \sim N_n(\mu, \Sigma)$ and the rank of $\Sigma$ is $r < n$, then $X$ depends only on $r$ [and not $n$] $N(0, 1)$'s.
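Both the closed-form bivariate inverse and the degenerate example above can be verified numerically. In the sketch below, the values of $\sigma_1$, $\sigma_2$, $\rho$ are arbitrary assumptions, not values from the notes:

```python
import numpy as np

# Assumed parameters with -1 < rho < 1 (nondegenerate case).
s1, s2, rho = 1.5, 0.8, 0.4

Sigma = np.array([[s1**2,         rho * s1 * s2],
                  [rho * s1 * s2, s2**2        ]])

# Closed-form inverse from the text.
Sigma_inv = (1.0 / (1.0 - rho**2)) * np.array(
    [[ 1.0 / s1**2,     -rho / (s1 * s2)],
     [-rho / (s1 * s2),  1.0 / s2**2    ]])

print(np.allclose(np.linalg.inv(Sigma), Sigma_inv))                    # True
print(np.isclose(np.linalg.det(Sigma), s1**2 * s2**2 * (1 - rho**2)))  # True

# The degenerate example X = (Z, Z)': Sigma = A A' has rank 1 < 2,
# so det Sigma = 0 and no pdf exists.
A = np.array([[1.0], [1.0]])
print(np.linalg.matrix_rank(A @ A.T))                                  # 1
```

The rank-1 covariance corresponds to $X$ depending on a single $N(0,1)$ variable, as the final remark states.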

