
Generalized Inverse and Projectors - Michigan State University


Transcription of Generalized Inverse and Projectors - Michigan State University

(b) From the spectral decomposition $A = \Gamma\Lambda\Gamma'$ we obtain $\operatorname{rank}(A) = \operatorname{rank}(\Lambda) = \operatorname{tr}(\Lambda) = r$, where $r$ is the number of characteristic roots with value 1.

(c) Let $\operatorname{rank}(A) = \operatorname{rank}(\Lambda) = n$; then $\Lambda = I_n$ and $A = \Gamma\Lambda\Gamma' = I_n$. Statements (a)-(c) follow from the definition of an idempotent matrix.

A.12 Generalized Inverse

Definition. Let $A$ be an $m \times n$-matrix. Then a matrix $A^- : n \times m$ is said to be a generalized inverse of $A$ if
$$A A^- A = A$$
holds (see Rao (1973a, p. 24)).

Theorem. A generalized inverse always exists, although it is not unique in general.

Proof: Assume $\operatorname{rank}(A) = r$. According to the singular-value decomposition, we have
$$A_{m,n} = U_{m,r} L_{r,r} V'_{r,n}$$
with $U'U = I_r$, $V'V = I_r$ and $L = \operatorname{diag}(l_1, \ldots, l_r)$, $l_i > 0$. Then
$$A^- = V \begin{pmatrix} L^{-1} & X \\ Y & Z \end{pmatrix} U'$$
($X$, $Y$ and $Z$ arbitrary matrices of suitable dimensions) is a g-inverse of $A$. Using the partitioned representation
$$A = \begin{pmatrix} X & Y \\ Z & W \end{pmatrix}$$
with $X$ nonsingular, we have
$$A^- = \begin{pmatrix} X^{-1} & 0 \\ 0 & 0 \end{pmatrix}$$
as a generalized inverse.

Definition (Moore-Penrose inverse). A matrix $A^+$ satisfying the following conditions is called the Moore-Penrose inverse of $A$:
(i) $AA^+A = A$,
(ii) $A^+AA^+ = A^+$,
(iii) $(A^+A)' = A^+A$,
(iv) $(AA^+)' = AA^+$.
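
The defining property $AA^-A = A$ and the four Moore-Penrose conditions are easy to check numerically. Below is a minimal NumPy sketch (my own illustration, not part of the transcribed text): it verifies the four conditions for a rank-deficient matrix and then builds a second, different g-inverse from the full singular-value decomposition, reading $U$ and $V$ as completed to square orthogonal matrices so that the block formula typechecks; the test matrices and variable names are assumptions of the example.

```python
# Minimal sketch: check the Moore-Penrose conditions and build a non-unique g-inverse.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # 5x4 matrix of rank 3

A_plus = np.linalg.pinv(A)                            # Moore-Penrose inverse
assert np.allclose(A @ A_plus @ A, A)                 # (i)   A A+ A = A
assert np.allclose(A_plus @ A @ A_plus, A_plus)       # (ii)  A+ A A+ = A+
assert np.allclose((A_plus @ A).T, A_plus @ A)        # (iii) (A+ A)' = A+ A
assert np.allclose((A @ A_plus).T, A @ A_plus)        # (iv)  (A A+)' = A A+

# A different g-inverse from the full SVD: A^- = V [[L^-1, X], [Y, Z]] U'
U, s, Vt = np.linalg.svd(A)                           # full SVD, U: 5x5, Vt: 4x4
r = int(np.sum(s > 1e-10))                            # numerical rank
block = np.zeros((A.shape[1], A.shape[0]))            # 4x5 block matrix
block[:r, :r] = np.diag(1.0 / s[:r])                  # L^-1
block[:r, r:] = rng.standard_normal((r, A.shape[0] - r))   # X arbitrary
block[r:, :r] = rng.standard_normal((A.shape[1] - r, r))   # Y arbitrary (Z left 0)
A_minus = Vt.T @ block @ U.T
assert np.allclose(A @ A_minus @ A, A)                # still satisfies A A^- A = A
assert not np.allclose(A_minus, A_plus)               # but differs from A+
```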

The Moore-Penrose inverse $A^+$ is unique.

Theorem. For any matrix $A : m \times n$ and any g-inverse $A^- : n \times m$, we have
(i) $A^-A$ and $AA^-$ are idempotent;
(ii) $\operatorname{rank}(A) = \operatorname{rank}(AA^-) = \operatorname{rank}(A^-A)$;
(iii) $\operatorname{rank}(A) \le \operatorname{rank}(A^-)$.

Proof:
(a) Using the definition of the g-inverse,
$$(A^-A)(A^-A) = A^-(AA^-A) = A^-A.$$
(b) Using the rank rule for matrix products, we get
$$\operatorname{rank}(A) = \operatorname{rank}(AA^-A) \le \operatorname{rank}(A^-A) \le \operatorname{rank}(A),$$
that is, $\operatorname{rank}(A^-A) = \operatorname{rank}(A)$. Analogously, we see that $\operatorname{rank}(A) = \operatorname{rank}(AA^-)$.
(c) $\operatorname{rank}(A) = \operatorname{rank}(AA^-A) \le \operatorname{rank}(AA^-) \le \operatorname{rank}(A^-)$.

Theorem. Let $A$ be an $m \times n$-matrix. Then
(i) $A$ regular $\Rightarrow A^+ = A^{-1}$.
(ii) $(A^+)^+ = A$.
(iii) $(A^+)' = (A')^+$.
(iv) $\operatorname{rank}(A) = \operatorname{rank}(A^+) = \operatorname{rank}(A^+A) = \operatorname{rank}(AA^+)$.
(v) $A$ an orthogonal projector $\Rightarrow A^+ = A$.
(vi) $\operatorname{rank}(A : m \times n) = m \Rightarrow A^+ = A'(AA')^{-1}$ and $AA^+ = I_m$.
(vii) $\operatorname{rank}(A : m \times n) = n \Rightarrow A^+ = (A'A)^{-1}A'$ and $A^+A = I_n$.
(viii) If $P : m \times m$ and $Q : n \times n$ are orthogonal, then $(PAQ)^+ = Q^{-1}A^+P^{-1}$.
(ix) $(A'A)^+ = A^+(A')^+$ and $(AA')^+ = (A')^+A^+$.
(x) $A^+ = (A'A)^+A' = A'(AA')^+$.
For further details see Rao and Mitra (1971).

Theorem (Baksalary, Kala and Klaczynski (1983)). Let $M : n \times n \ge 0$ and $N : m \times n$ be any matrices. Then
$$M - N'(NM^+N')^+N \ge 0$$
if and only if
$$R(N'NM) \subset R(M).$$
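
A few of the listed properties can be illustrated in the same hedged way (own sketch, not from the text; the random test matrices are chosen only so that the rank assumptions hold):

```python
# Sketch: rank identities, idempotence, and properties (iii), (vii), (x) checked with NumPy.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 5))    # 6x5, rank 3
A_plus = np.linalg.pinv(A)

P, Q = A_plus @ A, A @ A_plus                 # A+A and AA+ are idempotent
assert np.allclose(P @ P, P) and np.allclose(Q @ Q, Q)
rank = np.linalg.matrix_rank
assert rank(A) == rank(A_plus) == rank(P) == rank(Q) == 3

assert np.allclose(A_plus.T, np.linalg.pinv(A.T))                # (iii) (A+)' = (A')+
assert np.allclose(A_plus, np.linalg.pinv(A.T @ A) @ A.T)        # (x)   A+ = (A'A)+ A'
assert np.allclose(A_plus, A.T @ np.linalg.pinv(A @ A.T))        # (x)   A+ = A'(AA')+

B = rng.standard_normal((6, 4))                                  # full column rank
B_plus = np.linalg.pinv(B)
assert np.allclose(B_plus, np.linalg.solve(B.T @ B, B.T))        # (vii) A+ = (A'A)^-1 A'
assert np.allclose(B_plus @ B, np.eye(4))                        # (vii) A+A = I_n
```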

Theorem A.68. Let $A$ be any square $n \times n$-matrix and $a$ be an $n$-vector with $a \notin R(A)$. Then a g-inverse of $A + aa'$ is given by
$$(A + aa')^- = A^- - \frac{A^-aa'U'U}{a'U'Ua} - \frac{VV'aa'A^-}{a'VV'a} + \phi\,\frac{VV'aa'U'U}{(a'U'Ua)(a'VV'a)},$$
with $A^-$ any g-inverse of $A$ and
$$\phi = 1 + a'A^-a, \quad U = I - AA^-, \quad V = I - A^-A.$$
Proof: Straightforward, by checking $AA^-A = A$.

Theorem A.69. Let $A$ be a square $n \times n$-matrix. Then we have the following results:
(i) Assume $a, b$ are vectors with $a, b \in R(A)$, and let $A$ be symmetric; then the bilinear form $a'A^-b$ is invariant to the choice of $A^-$.
(ii) $A(A'A)^-A'$ is invariant to the choice of $(A'A)^-$.

Proof:
(a) $a, b \in R(A) \Rightarrow a = Ac$ and $b = Ad$. Using the symmetry of $A$ gives
$$a'A^-b = c'A'A^-Ad = c'Ad.$$
(b) Using the rowwise representation of $A$ as $A = (a_1, \ldots, a_n)'$ gives
$$A(A'A)^-A' = \bigl(a_i'(A'A)^-a_j\bigr).$$
Since $A'A$ is symmetric, we may conclude from (i) that all bilinear forms $a_i'(A'A)^-a_j$ are invariant to the choice of $(A'A)^-$, and hence (ii) is proved.

Theorem A.70. Let $A : n \times n$ be symmetric, $a \in R(A)$, $b \in R(A)$, and assume $1 + b'A^+a \neq 0$. Then
$$(A + ab')^+ = A^+ - \frac{A^+ab'A^+}{1 + b'A^+a}.$$
Proof: Straightforward, using Theorems A.68 and A.69.
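
Theorem A.70 can be checked numerically; the following sketch (my own, with made-up test matrices) constructs a singular symmetric $A$, picks $a, b \in R(A)$, and compares both sides of the update formula.

```python
# Sketch: numerical check of Theorem A.70 for a singular symmetric A.
import numpy as np

rng = np.random.default_rng(2)
G = rng.standard_normal((5, 3))
A = G @ G.T                               # symmetric, 5x5, rank 3
A_plus = np.linalg.pinv(A)

a = A @ rng.standard_normal(5)            # a in R(A)
b = A @ rng.standard_normal(5)            # b in R(A)
denom = 1.0 + b @ A_plus @ a
assert abs(denom) > 1e-8                  # assumption 1 + b'A+a != 0

lhs = np.linalg.pinv(A + np.outer(a, b))                 # (A + ab')+
rhs = A_plus - np.outer(A_plus @ a, b @ A_plus) / denom  # A+ - A+ab'A+/(1 + b'A+a)
assert np.allclose(lhs, rhs)
```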

Theorem A.71. Let $A : n \times n$ be symmetric, $a$ be an $n$-vector, and $\alpha > 0$ be any scalar. Then the following statements are equivalent:
(i) $\alpha A - aa' \ge 0$.
(ii) $A \ge 0$, $a \in R(A)$, and $a'A^-a \le \alpha$, with $A^-$ being any g-inverse of $A$.

Proof:
(i) $\Rightarrow$ (ii): $\alpha A - aa' \ge 0 \Rightarrow \alpha A = (\alpha A - aa') + aa' \ge 0 \Rightarrow A \ge 0$. Writing the nonnegative definite matrix $\alpha A - aa'$ as $\alpha A - aa' = BB'$, we have
$$\alpha A = BB' + aa' = (B, a)(B, a)',$$
so that $R(\alpha A) = R(A) = R\bigl((B, a)\bigr)$, hence $a \in R(A)$, that is, $a = Ac$ with $c \in \mathbb{R}^n$ and $a'A^-a = c'Ac$. Since $\alpha A - aa' \ge 0$ implies $x'(\alpha A - aa')x \ge 0$ for any vector $x$, choosing $x = c$ we have
$$\alpha c'Ac - c'aa'c = \alpha c'Ac - (c'Ac)^2 \ge 0,$$
and hence $a'A^-a = c'Ac \le \alpha$.

(ii) $\Rightarrow$ (i): Let $x \in \mathbb{R}^n$ be any vector. Then, using the Cauchy-Schwarz inequality for the nonnegative definite form $A$,
$$x'(\alpha A - aa')x = \alpha x'Ax - (x'a)^2 = \alpha x'Ax - (x'Ac)^2 \ge \alpha x'Ax - (x'Ax)(c'Ac),$$
so that
$$x'(\alpha A - aa')x \ge (x'Ax)(\alpha - c'Ac).$$
In (ii) we have assumed $A \ge 0$ and $c'Ac = a'A^-a \le \alpha$. Hence, $\alpha A - aa' \ge 0$.

Note: This theorem is due to Baksalary and Kala (1983). The version given here and the proof are formulated by G. Trenkler.

Theorem. For any matrix $A$ we have $A'A = 0$ if and only if $A = 0$.
Proof:
(a) $A = 0 \Rightarrow A'A = 0$.
(b) Let $A'A = 0$, and let $A = (a_{(1)}, \ldots, a_{(n)})$ be the columnwise presentation. Then $A'A = \bigl(a_{(i)}'a_{(j)}\bigr) = 0$, so that all the elements on the diagonal are zero: $a_{(i)}'a_{(i)} = 0 \Rightarrow a_{(i)} = 0$ and $A = 0$.

Theorem. Let $X \neq 0$ be an $m \times n$-matrix and $A$ an $n \times n$-matrix. Then
$$X'XAX'X = X'X \;\Rightarrow\; XAX'X = X \text{ and } X'XAX' = X'.$$
Proof: As $X \neq 0$ and $X'X \neq 0$, we have
$$X'XAX'X - X'X = (X'XA - I)X'X = 0.$$
Setting $Y = XAX'X - X$, we obtain
$$Y'Y = (X'XA'X' - X')(XAX'X - X) = (X'XA' - I)(X'XAX'X - X'X) = 0,$$
so that (by the preceding theorem) $Y = 0$, and hence $XAX'X = X$; the relation $X'XAX' = X'$ follows analogously.

Corollary: Let $X \neq 0$ be an $m \times n$-matrix and $A$ and $B$ be $n \times n$-matrices. Then
$$AX'X = BX'X \;\Rightarrow\; AX' = BX'.$$
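
The equivalence of Theorem A.71 above can be illustrated in the same spirit (own sketch, assuming a randomly generated $A \ge 0$ and $a = Ac$): $\alpha A - aa'$ is nonnegative definite exactly when $\alpha$ is at least $a'A^-a = c'Ac$.

```python
# Sketch: alpha*A - aa' >= 0  <=>  a'A^- a = c'Ac <= alpha (Theorem A.71).
import numpy as np

rng = np.random.default_rng(3)
G = rng.standard_normal((5, 3))
A = G @ G.T                                   # A >= 0, singular
c = rng.standard_normal(5)
a = A @ c                                     # a in R(A), so a'A^- a = c'Ac
threshold = c @ A @ c

def is_nnd(M, tol=1e-9):
    """Nonnegative definiteness via the smallest eigenvalue of a symmetric matrix."""
    return np.min(np.linalg.eigvalsh(M)) >= -tol

for alpha in (0.5 * threshold, threshold, 2.0 * threshold):
    assert is_nnd(alpha * A - np.outer(a, a)) == (threshold <= alpha + 1e-12)
```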

Theorem (Albert's theorem). Let $A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}$ be symmetric. Then

(i) $A \ge 0$ if and only if
(a) $A_{22} \ge 0$,
(b) $A_{21} = A_{22}A_{22}^-A_{21}$,
(c) $A_{11} \ge A_{12}A_{22}^-A_{21}$
((b) and (c) are invariant of the choice of $A_{22}^-$).

(ii) $A > 0$ if and only if
(a) $A_{22} > 0$,
(b) $A_{11} > A_{12}A_{22}^{-1}A_{21}$.

Proof (Bekker and Neudecker (1989)):

(i) Assume $A \ge 0$.
(a) $A \ge 0 \Rightarrow x'Ax \ge 0$ for $x' = (0', x_2')$, so $x'Ax = x_2'A_{22}x_2 \ge 0$ for any $x_2$, hence $A_{22} \ge 0$.
(b) Let $B' = (0, I - A_{22}A_{22}^-)$. Then
$$B'A = \bigl((I - A_{22}A_{22}^-)A_{21},\; A_{22} - A_{22}A_{22}^-A_{22}\bigr) = \bigl((I - A_{22}A_{22}^-)A_{21},\; 0\bigr)$$
and
$$B'AB = B'A^{1/2}A^{1/2}B = 0.$$
Hence, by the theorem $A'A = 0 \Leftrightarrow A = 0$, we get $B'A^{1/2} = 0$, so
$$B'A^{1/2}A^{1/2} = B'A = 0 \;\Rightarrow\; (I - A_{22}A_{22}^-)A_{21} = 0,$$
which proves (b).
(c) Let $C' = (I, -(A_{22}^-A_{21})')$. Then
$$A \ge 0 \;\Rightarrow\; 0 \le C'AC = A_{11} - A_{12}(A_{22}^-)'A_{21} - A_{12}A_{22}^-A_{21} + A_{12}(A_{22}^-)'A_{22}A_{22}^-A_{21} = A_{11} - A_{12}A_{22}^-A_{21}.$$
(Since $A_{22}$ is symmetric, we may choose $(A_{22}^-)' = A_{22}^-$.)

Now assume (a), (b), and (c). Then
$$D = \begin{pmatrix} A_{11} - A_{12}A_{22}^-A_{21} & 0 \\ 0 & A_{22} \end{pmatrix} \ge 0,$$
as the submatrices are $\ge 0$ by (a) and (c). Hence
$$A = \begin{pmatrix} I & A_{12}(A_{22}^-)' \\ 0 & I \end{pmatrix} D \begin{pmatrix} I & 0 \\ A_{22}^-A_{21} & I \end{pmatrix} \ge 0.$$

(ii) Proof as in (i) if $A_{22}^-$ is replaced by $A_{22}^{-1}$.

Theorem. Let $A : n \times n$ and $B : n \times n$ be symmetric. Then
(i) $0 \le B \le A$ if and only if
(a) $A \ge 0$,
(b) $B = AA^-B$,
(c) $B \ge BA^-B$.
(ii) $0 < B < A$ if and only if $0 < A^{-1} < B^{-1}$.
Proof: Apply Albert's theorem to the matrix $\begin{pmatrix} B & B \\ B & A \end{pmatrix}$.
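
Albert's theorem also lends itself to a quick numerical illustration (own sketch; the helper name albert_conditions and the test matrices are mine, and the Moore-Penrose inverse is used as the chosen $A_{22}^-$, which is admissible since (b) and (c) are invariant of that choice):

```python
# Sketch: Albert's conditions (a)-(c) versus a direct eigenvalue check of the block matrix.
import numpy as np

def albert_conditions(A11, A12, A22, tol=1e-9):
    A21 = A12.T
    A22_g = np.linalg.pinv(A22)                         # one admissible g-inverse of A22
    cond_a = np.min(np.linalg.eigvalsh(A22)) >= -tol    # A22 >= 0
    cond_b = np.allclose(A22 @ A22_g @ A21, A21)        # A21 = A22 A22^- A21
    schur = A11 - A12 @ A22_g @ A21
    cond_c = np.min(np.linalg.eigvalsh(schur)) >= -tol  # A11 - A12 A22^- A21 >= 0
    return cond_a and cond_b and cond_c

rng = np.random.default_rng(4)
L = rng.standard_normal((5, 4))
A = L @ L.T                                             # A >= 0 by construction
A11, A12, A22 = A[:2, :2], A[:2, 2:], A[2:, 2:]         # blocks of sizes (2, 3)
assert albert_conditions(A11, A12, A22)
assert np.min(np.linalg.eigvalsh(A)) >= -1e-9           # direct check: A >= 0

assert not albert_conditions(-A11, A12, A22)            # flipping A11 breaks condition (c)
```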

Theorem. Let $A$ be symmetric and $c \in R(A)$. Then the following statements are equivalent:
(i) $\operatorname{rank}(A + cc') = \operatorname{rank}(A)$.
(ii) $R(A + cc') = R(A)$.
(iii) $1 + c'A^-c \neq 0$.

Corollary 1: Assume (i), (ii), or (iii) holds; then
$$(A + cc')^- = A^- - \frac{A^-cc'A^-}{1 + c'A^-c}$$
for any choice of $A^-$.

Corollary 2: Assume (i), (ii), or (iii) holds; then
$$c'(A + cc')^-c = c'A^-c - \frac{(c'A^-c)^2}{1 + c'A^-c} = 1 - \frac{1}{1 + c'A^-c}.$$
Moreover, as $c \in R(A + cc')$, the results are invariant for any special choices of the g-inverses involved.

Proof: $c \in R(A) \Leftrightarrow AA^-c = c$, so that $R(A + cc') = R\bigl(AA^-(A + cc')\bigr) \subset R(A)$. Hence, (i) and (ii) become equivalent.

Proof of (iii): Consider the following product of matrices:
$$\begin{pmatrix} 1 & 0 \\ c & A + cc' \end{pmatrix}\begin{pmatrix} 1 & -c' \\ 0 & I \end{pmatrix}\begin{pmatrix} 1 & 0 \\ -A^-c & I \end{pmatrix} = \begin{pmatrix} 1 + c'A^-c & -c' \\ 0 & A \end{pmatrix}.$$
The left-hand side has the rank $1 + \operatorname{rank}(A + cc') = 1 + \operatorname{rank}(A)$ (see (i) or (ii)). The right-hand side has the rank $1 + \operatorname{rank}(A)$ if and only if $1 + c'A^-c \neq 0$.
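
Corollaries 1 and 2 (the case $c \in R(A)$) can be verified numerically as well; the sketch below is my own illustration with random matrices and np.linalg.pinv as the chosen g-inverse.

```python
# Sketch: rank-one update of a g-inverse for c in R(A) (Corollaries 1 and 2).
import numpy as np

rng = np.random.default_rng(5)
G = rng.standard_normal((6, 4))
A = G @ G.T                              # symmetric, singular (rank 4)
A_g = np.linalg.pinv(A)                  # one admissible g-inverse
c = A @ rng.standard_normal(6)           # c in R(A)

k = 1.0 + c @ A_g @ c                    # here k > 0 since A >= 0, so (iii) holds
M = A + np.outer(c, c)
B = A_g - np.outer(A_g @ c, c @ A_g) / k

assert np.allclose(M @ B @ M, M)                              # Corollary 1: B is a g-inverse
assert np.isclose(c @ np.linalg.pinv(M) @ c, 1.0 - 1.0 / k)   # Corollary 2
assert np.linalg.matrix_rank(M) == np.linalg.matrix_rank(A)   # statement (i)
```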

Theorem. Let $A : n \times n$ be symmetric and singular, and let $c \notin R(A)$. Then we have
(i) $c \in R(A + cc')$;
(ii) $R(A) \subset R(A + cc')$;
(iii) $c'(A + cc')^-c = 1$;
(iv) $A(A + cc')^-A = A$;
(v) $A(A + cc')^-c = 0$.

Proof: As $A$ is singular and $c \notin R(A)$, the equation $Al = 0$ has a nontrivial solution $l \neq 0$ with $c'l \neq 0$ (such an $l$ exists because $c \notin R(A) = N(A)^\perp$ for symmetric $A$), which may be standardized as $(c'l)^{-1}l$ such that $c'l = 1$. Then $(A + cc')l = c \in R(A + cc')$, and hence (i) is proved. Relation (ii) holds since $Ay = (A + cc')y - (c'y)c$ for any $y$, and $c \in R(A + cc')$ by (i).

Relation (i) is seen to be equivalent to
$$(A + cc')(A + cc')^-c = c.$$
Therefore (iii) follows:
$$c'(A + cc')^-c = l'(A + cc')(A + cc')^-c = l'c = 1,$$
which proves (iii). From
$$c = (A + cc')(A + cc')^-c = A(A + cc')^-c + cc'(A + cc')^-c = A(A + cc')^-c + c,$$
we have (v). Relation (iv) is a consequence of the general definition of a g-inverse and of (iii) and (v):
$$A + cc' = (A + cc')(A + cc')^-(A + cc') = A(A + cc')^-A + cc'(A + cc')^-cc' \;[\,= cc' \text{ using (iii)}\,] + A(A + cc')^-cc' \;[\,= 0 \text{ using (v)}\,] + cc'(A + cc')^-A \;[\,= 0 \text{ using (v)}\,],$$
so that $A(A + cc')^-A = A$.

Theorem. We have $A \ge 0$ if and only if
(i) $A + cc' \ge 0$,
(ii) $(A + cc')(A + cc')^-c = c$,
(iii) $c'(A + cc')^-c \le 1$.
If $A \ge 0$, then moreover
(a) $c = 0 \Leftrightarrow c'(A + cc')^-c = 0$;
(b) $c \in R(A) \Leftrightarrow c'(A + cc')^-c < 1$;
(c) $c \notin R(A) \Leftrightarrow c'(A + cc')^-c = 1$.

Proof: $A \ge 0$ is equivalent to $0 \le cc' \le A + cc'$. Straightforward application of the preceding theorem on $0 \le B \le A$ gives (i)-(iii).

Proof of (a): $A \ge 0 \Rightarrow A + cc' \ge 0$. Assume $c'(A + cc')^-c = 0$ and replace $c$ according to (ii):
$$c'(A + cc')^-(A + cc')(A + cc')^-c = 0 \;\Rightarrow\; (A + cc')(A + cc')^-c = 0,$$
as $A + cc' \ge 0$. Hence, by (ii), $c = 0$. Conversely, assuming $c = 0$ gives $c'(A + cc')^-c = 0$.

Proof of (b): Assume $A \ge 0$ and $c \in R(A)$, and use Corollary 2 above (the case $c \in R(A)$):
$$c'(A + cc')^-c = 1 - \frac{1}{1 + c'A^-c} < 1.$$
The opposite direction of (b) is a consequence of (c).

Proof of (c): Assume $A \ge 0$ and $c \notin R(A)$, and use statement (iii) of the theorem above for $c \notin R(A)$: $c'(A + cc')^-c = 1$. The opposite direction of (c) is a consequence of (b).
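
For the complementary case $c \notin R(A)$, statements (iii)-(v) of the first theorem above can be checked the same way (own sketch; $c$ is constructed so that it has a nonzero component outside $R(A)$).

```python
# Sketch: c not in R(A) implies c'(A+cc')^- c = 1, A(A+cc')^- A = A, A(A+cc')^- c = 0.
import numpy as np

rng = np.random.default_rng(6)
G = rng.standard_normal((5, 3))
A = G @ G.T                                    # symmetric, singular (rank 3)
z = rng.standard_normal(5)
c_out = z - A @ np.linalg.pinv(A) @ z          # nonzero part orthogonal to R(A)
c = c_out + A @ rng.standard_normal(5)         # hence c not in R(A)

M_g = np.linalg.pinv(A + np.outer(c, c))       # one g-inverse of A + cc'
assert np.isclose(c @ M_g @ c, 1.0)            # (iii)
assert np.allclose(A @ M_g @ A, A)             # (iv)
assert np.allclose(A @ M_g @ c, 0.0)           # (v)
```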

Note: The proofs of the preceding theorems are given in Bekker and Neudecker (1989).

Theorem. The linear equation $Ax = a$ has a solution if and only if
$$a \in R(A), \quad\text{or equivalently}\quad AA^-a = a$$
for any g-inverse $A^-$. If this condition holds, then all solutions are given by
$$x = A^-a + (I - A^-A)w,$$
where $w$ is an arbitrary $m$-vector. Further, $q'x$ has a unique value for all solutions of $Ax = a$ if and only if $q'A^-A = q'$, or $q \in R(A')$. For a proof, see Rao (1973a, p. 25).

A.13 Projectors

Consider the range space $R(A)$ of the matrix $A : m \times n$ with rank $r$. Then there exists $R(A)^\perp$, the orthogonal complement of $R(A)$ with dimension $m - r$. Any vector $x \in \mathbb{R}^m$ has the unique decomposition
$$x = x_1 + x_2, \quad x_1 \in R(A), \quad x_2 \in R(A)^\perp,$$
of which the component $x_1$ is called the orthogonal projection of $x$ on $R(A)$. The component $x_1$ can be computed as $Px$, where
$$P = A(A'A)^-A'$$
is called the orthogonal projection operator on $R(A)$. Note that $P$ is unique for any choice of the g-inverse $(A'A)^-$.

Theorem. For any $P : n \times n$, the following statements are equivalent:
(i) $P$ is an orthogonal projection operator;
(ii) $P$ is symmetric and idempotent.
For proofs and other details, the reader is referred to Rao (1973a) and Rao and Mitra (1971).
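
Both the solvability condition and the projector $P = A(A'A)^-A'$ are easy to illustrate numerically (own sketch; random matrices, with np.linalg.pinv standing in for the g-inverses).

```python
# Sketch: consistency of Ax = a, the general solution, and the projector on R(A).
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # 5x4, rank 3
A_g = np.linalg.pinv(A)

a = A @ rng.standard_normal(4)            # consistent right-hand side: a in R(A)
assert np.allclose(A @ A_g @ a, a)        # solvability criterion A A^- a = a

for _ in range(3):                        # all x = A^- a + (I - A^- A) w solve Ax = a
    w = rng.standard_normal(4)
    x = A_g @ a + (np.eye(4) - A_g @ A) @ w
    assert np.allclose(A @ x, a)

P = A @ np.linalg.pinv(A.T @ A) @ A.T                     # orthogonal projector on R(A)
assert np.allclose(P, P.T) and np.allclose(P @ P, P)      # symmetric and idempotent
assert np.allclose(P @ a, a)                              # vectors in R(A) are left fixed
y = rng.standard_normal(5)
assert np.allclose(A.T @ (y - P @ y), 0.0)                # residual is orthogonal to R(A)
```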

Theorem. Let $X$ be a matrix of order $T \times K$ with rank $r < K$, and let $U : (K - r) \times K$ be such that $R(X') \cap R(U') = \{0\}$. Then
(i) $X(X'X + U'U)^{-1}U' = 0$;
(ii) $X'X(X'X + U'U)^{-1}X'X = X'X$; that is, $(X'X + U'U)^{-1}$ is a g-inverse of $X'X$;
(iii) $U'U(X'X + U'U)^{-1}U'U = U'U$; that is, $(X'X + U'U)^{-1}$ is also a g-inverse of $U'U$;
(iv) $U(X'X + U'U)^{-1}U'u = u$ if $u \in R(U)$.

Proof: Since $X'X + U'U$ is of full rank, there exists a matrix $A$ such that
$$(X'X + U'U)A = U' \;\Rightarrow\; X'XA = U' - U'UA \;\Rightarrow\; XA = 0 \text{ and } U' = U'UA,$$
since $R(X') \cap R(U') = \{0\}$.

Proof of (i): $X(X'X + U'U)^{-1}U' = X(X'X + U'U)^{-1}(X'X + U'U)A = XA = 0$.

Proof of (ii): $X'X(X'X + U'U)^{-1}X'X = X'X(X'X + U'U)^{-1}(X'X + U'U - U'U) = X'X - X'X(X'X + U'U)^{-1}U'U = X'X$, where the last step uses (i).

Result (iii) follows along the same lines as result (ii).

Proof of (iv): Write $u = Ua \in R(U)$. Transposing $U' = U'UA$ gives $U = A'U'U = U(X'X + U'U)^{-1}U'U$, so that
$$U(X'X + U'U)^{-1}U'u = U(X'X + U'U)^{-1}U'Ua = Ua = u.$$

A.14 Functions of Normally Distributed Variables

Let $x' = (x_1, \ldots, x_p)$ be a $p$-dimensional random vector. Then $x$ is said to have a $p$-dimensional normal distribution with expectation vector $\mu$ and covariance matrix $\Sigma > 0$ if the joint density is
$$f(x; \mu, \Sigma) = \{(2\pi)^p |\Sigma|\}^{-1/2} \exp\Bigl\{-\tfrac{1}{2}(x - \mu)'\Sigma^{-1}(x - \mu)\Bigr\}.$$
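
A numerical illustration of the theorem on $X'X + U'U$ above (own sketch): choosing the rows of $U$ to span the orthogonal complement of $R(X')$ is one convenient way to satisfy $R(X') \cap R(U') = \{0\}$ and make $X'X + U'U$ nonsingular.

```python
# Sketch: (X'X + U'U)^{-1} acts as a g-inverse of both X'X and U'U.
import numpy as np

rng = np.random.default_rng(8)
T, K, r = 8, 5, 3
X = rng.standard_normal((T, r)) @ rng.standard_normal((r, K))    # T x K with rank r < K
_, _, Vt = np.linalg.svd(X)
U = Vt[r:, :]                                  # (K - r) x K, rows span R(X')-perp

S_inv = np.linalg.inv(X.T @ X + U.T @ U)       # X'X + U'U is nonsingular here

assert np.allclose(X @ S_inv @ U.T, 0.0)                          # (i)
assert np.allclose(X.T @ X @ S_inv @ X.T @ X, X.T @ X)            # (ii)
assert np.allclose(U.T @ U @ S_inv @ U.T @ U, U.T @ U)            # (iii)
u = U @ rng.standard_normal(K)                                    # u in R(U)
assert np.allclose(U @ S_inv @ U.T @ u, u)                        # (iv)
```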

