
Some Basic Matrix Theorems

Richard E. Quandt
Princeton University

Definition 1. Let A be a square matrix of order n and let λ be a scalar quantity. Then det(A−λI) is called the characteristic polynomial of A. It is clear that the characteristic polynomial is an nth degree polynomial in λ, and det(A−λI) = 0 will have n (not necessarily distinct) solutions for λ.

Definition 2. The values of λ that satisfy det(A−λI) = 0 are the characteristic roots or eigenvalues of A. It follows immediately that for each λ that is a solution of det(A−λI) = 0 there exists a nontrivial x (i.e., x ≠ 0) such that

    (A − λI)x = 0.    (1)

Definition 3. The vectors x that satisfy Eq. (1) are the characteristic vectors or eigenvectors of A. Now consider a particular eigenvalue λ and its corresponding eigenvector x, for which we have

    λx = Ax.    (2)
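
These definitions are easy to check numerically. The sketch below (not part of Quandt's note; the 2×2 matrix is an arbitrary example) finds the roots of the characteristic polynomial with NumPy and verifies Eq. (1) for each root:

    import numpy as np

    # Arbitrary 2x2 example matrix (illustration only).
    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])

    # Coefficients of det(A - lambda*I), highest degree first.
    coeffs = np.poly(A)

    # The n (not necessarily distinct) solutions of det(A - lambda*I) = 0.
    print(np.roots(coeffs))

    # For each eigenvalue there is a nontrivial x with (A - lambda*I)x = 0.
    lambdas, X = np.linalg.eig(A)
    for lam, x in zip(lambdas, X.T):
        print((A - lam * np.eye(2)) @ x)   # residual ~0, i.e. Eq. (1) holds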

Premultiplying (2) by an arbitrary nonsingular matrix P we obtain

    λPx = PAx = PAP⁻¹Px,    (3)

and, defining Px = y,

    λy = PAP⁻¹y.    (4)

Hence λ is an eigenvalue and y is an eigenvector of the matrix PAP⁻¹.

Definition 4. The matrices A and PAP⁻¹ are called similar matrices. We have shown above that any eigenvalue of A is also an eigenvalue of PAP⁻¹. The same argument, run in reverse, shows the converse, i.e., that any eigenvalue of PAP⁻¹ is also an eigenvalue of A.
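
The invariance of the eigenvalues under similarity is easy to verify numerically. This sketch (arbitrary example matrices, not code from the original) checks Eqs. (3)-(4):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    P = np.array([[1.0, 2.0],
                  [0.0, 1.0]])             # nonsingular: det(P) = 1

    S = P @ A @ np.linalg.inv(P)           # the similar matrix P A P^-1

    print(np.sort(np.linalg.eigvals(A)))   # eigenvalues of A
    print(np.sort(np.linalg.eigvals(S)))   # the same, up to rounding

    # If x is an eigenvector of A, then y = Px is an eigenvector of S.
    lambdas, X = np.linalg.eig(A)
    y = P @ X[:, 0]
    print(S @ y - lambdas[0] * y)          # ~0, confirming Eq. (4)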

Definition 5. A matrix A is symmetric if A = A′.

Theorem 1. The eigenvalues of symmetric matrices are real.

Proof. A polynomial of nth degree may, in general, have complex roots. Assume then, contrary to the assertion of the theorem, that λ is a complex number. The corresponding eigenvector x may have one or more complex elements, and for this λ and this x we have

    Ax = λx.    (5)

Both sides of Eq. (5) are, in general, complex, and since they are equal to one another, their complex conjugates are also equal. Denoting the conjugates of λ and x by λ̄ and x̄ respectively, we have

    Ax̄ = λ̄x̄,    (6)

since (a+bi)(c+di) = ac−bd+(ad+bc)i has as its conjugate ac−bd−(ad+bc)i = (a−bi)(c−di). Premultiply (5) by x̄′ and premultiply (6) by x′ and subtract, which yields

    x̄′Ax − x′Ax̄ = (λ − λ̄)x̄′x.    (7)

Each term on the left hand side is a scalar, and since A is symmetric, the left hand side is equal to zero. But x̄′x is the sum of products of complex numbers times their conjugates, which can never be zero unless all the numbers themselves are zero. Hence λ equals its conjugate, which means that λ is real.
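
As a numerical illustration of Theorem 1 (a random example, not from the original):

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.standard_normal((4, 4))
    A = M + M.T                          # symmetrize, so A = A'

    lambdas = np.linalg.eigvals(A)
    print(lambdas)
    print(np.allclose(lambdas.imag, 0))  # True: the eigenvalues are real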

Theorem 2. The eigenvectors of a symmetric matrix A corresponding to different eigenvalues are orthogonal to each other.

Proof. Let λᵢ ≠ λⱼ. Substitute in Eq. (5) first λᵢ and its corresponding eigenvector xᵢ, and premultiply it by x′ⱼ, where xⱼ is the eigenvector corresponding to λⱼ. Then reverse the procedure, substituting in (5) the jth eigenvalue and eigenvector and premultiplying by x′ᵢ. Subtracting the two results from one another yields (λᵢ − λⱼ)x′ᵢxⱼ = 0, from which it follows that x′ᵢxⱼ = 0.
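
A quick check of Theorem 2 on a random symmetric matrix (illustrative only):

    import numpy as np

    rng = np.random.default_rng(1)
    M = rng.standard_normal((4, 4))
    A = M + M.T                  # symmetric; its eigenvalues are
                                 # distinct with probability 1

    lambdas, X = np.linalg.eig(A)
    print(X[:, 0] @ X[:, 1])     # ~0: eigenvectors belonging to
                                 # different roots are orthogonal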

Theorem 3. If all the eigenvalues of a symmetric matrix A are distinct, the matrix X, which has as its columns the corresponding eigenvectors, has the property that X′X = I, i.e., X is an orthogonal matrix.

Proof. To prove this we need merely observe that (1) since the eigenvectors are nontrivial (i.e., they do not have all zero elements), we can replace each eigenvector by a corresponding vector obtained from the original one by dividing each of its elements by the square root of the sum of squares of its elements, thus insuring that each of these vectors has length 1; and (2) the n vectors are mutually orthogonal and hence form an orthonormal basis.
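
In code, the normalization step of the proof looks as follows (a sketch with a random symmetric example, not from the original):

    import numpy as np

    rng = np.random.default_rng(2)
    M = rng.standard_normal((4, 4))
    A = M + M.T

    lambdas, X = np.linalg.eig(A)
    # Divide each column by the square root of its sum of squares,
    # as in the proof, so every eigenvector has length 1.
    X = X / np.linalg.norm(X, axis=0)
    print(np.allclose(X.T @ X, np.eye(4)))   # True: X'X = I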

Theorem 4. If λᵢ is a repeated root with multiplicity m ≥ 2, then there exist m orthonormal eigenvectors corresponding to λᵢ.

Proof. First, we note that corresponding to λᵢ there will be at least one eigenvector xᵢ. For any arbitrary nonzero vector xᵢ one can always find an additional n−1 vectors yⱼ, j = 2,…,n, so that xᵢ, together with the n−1 y-vectors, forms an orthonormal basis. Collect the y vectors in a matrix Y, i.e., Y = [y₂,…,yₙ], and define

    B = [xᵢ Y].    (8)

Then

    B′AB = [ λᵢx′ᵢxᵢ   x′ᵢAY ]  =  [ λᵢ    0   ]
           [ λᵢY′xᵢ    Y′AY ]     [ 0    Y′AY ]    (9)

since (1) the products in the first column under the (1,1) element are products of orthogonal vectors, and (2) replacing in the first row (other than in the (1,1) element) the terms x′ᵢA by λᵢx′ᵢ also leads to products of orthogonal vectors. B is an orthogonal matrix, hence its transpose is also its inverse. A and B′AB are therefore similar matrices (see Definition 4), and they have the same eigenvalues. From (9), the characteristic polynomial of B′AB can be written as

    det(B′AB − λIₙ) = (λᵢ − λ) det(Y′AY − λIₙ₋₁).    (10)

If a root, say λᵢ, has multiplicity m ≥ 2, then in the factored form of the polynomial the term (λᵢ − λ) occurs m times; hence if m ≥ 2, det(Y′AY − λᵢIₙ₋₁) = 0, and the null space of (B′AB − λᵢIₙ) has dimension greater than or equal to 2. In particular, if m = 2, the null space has dimension 2, and there are two linearly independent and orthogonal eigenvectors in this null space. If the multiplicity is greater, say 3, then there are at least two orthogonal eigenvectors xᵢ₁ and xᵢ₂, and we can find another n−2 vectors yⱼ such that [xᵢ₁, xᵢ₂, y₃,…,yₙ] is an orthonormal basis and repeat the argument. It also follows that if a root has multiplicity m, there cannot be more than m orthogonal eigenvectors corresponding to that eigenvalue, for that would lead to the conclusion that we could find more than n orthogonal eigenvectors, which is not possible. Note that any set of n linearly independent vectors in n-space can be transformed into an orthonormal basis by the Gram-Schmidt orthogonalization process.
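
The construction in Eqs. (8)-(10) and the orthogonalization process just mentioned can both be illustrated numerically. In the sketch below (illustrative code, not Quandt's; the matrices are hypothetical test data and gram_schmidt is a textbook version of the process), a symmetric matrix with a repeated root is built, xᵢ is completed to an orthonormal basis B = [xᵢ Y], and the block structure of Eq. (9) is checked:

    import numpy as np

    def gram_schmidt(V):
        # Orthonormalize the columns of V (assumed linearly independent):
        # subtract projections on earlier vectors, then divide by length.
        Q = np.zeros_like(V)
        for k in range(V.shape[1]):
            v = V[:, k].copy()
            for j in range(k):
                v -= (Q[:, j] @ V[:, k]) * Q[:, j]
            Q[:, k] = v / np.linalg.norm(v)
        return Q

    rng = np.random.default_rng(0)

    # Symmetric A = R D R' with eigenvalues (2, 2, 5): lambda_i = 2 is a
    # repeated root with multiplicity m = 2 (hypothetical test data).
    R = gram_schmidt(rng.standard_normal((3, 3)))
    A = R @ np.diag([2.0, 2.0, 5.0]) @ R.T

    lam_i, x_i = 2.0, R[:, 0]       # one unit eigenvector for lambda_i

    # Complete x_i to an orthonormal basis, B = [x_i  Y] as in Eq. (8).
    B = gram_schmidt(np.column_stack([x_i, rng.standard_normal((3, 2))]))

    # Eq. (9): B'AB is block diagonal with lam_i isolated in the (1,1)
    # cell; by Eq. (10) the repeated root 2 reappears as an eigenvalue
    # of the Y'AY block, which yields the second orthonormal eigenvector.
    print(np.round(B.T @ A @ B, 8))

In practice np.linalg.qr performs the same orthonormalization more stably than classical Gram-Schmidt, but the loop above follows the process as usually stated.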

