Introduction - UCONN

THE MINIMAL POLYNOMIAL AND SOME APPLICATIONS

KEITH CONRAD

1. Introduction

The easiest matrices to compute with are the diagonal ones. The sum and product of diagonal matrices can be computed componentwise along the main diagonal, and taking powers of a diagonal matrix is simple too. All the complications of matrix operations are gone when working only with diagonal matrices. If a matrix $A$ is not diagonal but can be conjugated to a diagonal matrix, say $D := PAP^{-1}$ is diagonal, then $A = P^{-1}DP$, so $A^k = P^{-1}D^kP$ for all integers $k$, which reduces us to computations with a diagonal matrix.
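
As a small illustration of the point above (not part of the original notes), here is a numerical sketch in Python using NumPy. The matrix A below is an arbitrary diagonalizable example; NumPy's eigendecomposition $A = VDV^{-1}$ plays the role of the conjugation in the text (with $P = V^{-1}$).

```python
# Sketch: if A = V D V^{-1} with D diagonal, then A^k = V D^k V^{-1},
# so powering A reduces to powering the diagonal entries of D.
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # an arbitrary diagonalizable 2x2 matrix

eigvals, V = np.linalg.eig(A)       # columns of V are eigenvectors of A
k = 5
Dk = np.diag(eigvals ** k)          # D^k is computed entrywise on the diagonal
Ak_via_D = V @ Dk @ np.linalg.inv(V)

# agrees with repeated matrix multiplication of A
assert np.allclose(Ak_via_D, np.linalg.matrix_power(A, k))
```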

In many applications of linear algebra (e.g., dynamical systems, differential equations, Markov chains, recursive sequences) powers of a matrix are crucial to understanding the situation, so the relevance of knowing when we can conjugate a nondiagonal matrix into a diagonal matrix is clear. We want to look at the coordinate-free formulation of the idea of a diagonal matrix, which will be called a diagonalizable operator. There is a special polynomial, the minimal polynomial (generally not equal to the characteristic polynomial), which will tell us exactly when a linear operator is diagonalizable.

The minimal polynomial will also give us information about nilpotent operators (those having a power equal to $O$). All linear operators under discussion are understood to be acting on nonzero finite-dimensional vector spaces over a given field $F$.

2. Diagonalizable Operators

Definition 2.1. We say the linear operator $A \colon V \to V$ is diagonalizable when it admits a diagonal matrix representation with respect to some basis of $V$: there is a basis $B$ of $V$ such that the matrix $[A]_B$ is diagonal.

Let's translate diagonalizability into the language of eigenvectors rather than matrices.

Theorem 2.2. The linear operator $A \colon V \to V$ is diagonalizable if and only if there is a basis of $V$ consisting of eigenvectors of $A$.

Proof. If there is a basis $B = \{e_1, \ldots, e_n\}$ of $V$ in which $[A]_B$ is diagonal, say

$$[A]_B = \begin{pmatrix} a_1 & 0 & \cdots & 0 \\ 0 & a_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_n \end{pmatrix},$$

then $Ae_i = a_i e_i$ for all $i$, so each $e_i$ is an eigenvector for $A$. Conversely, if $V$ has a basis $\{v_1, \ldots, v_n\}$ of eigenvectors of $A$, with $Av_i = \lambda_i v_i$ for $\lambda_i \in F$, then in this basis the matrix representation of $A$ is $\mathrm{diag}(\lambda_1, \ldots, \lambda_n)$. A basis of eigenvectors for an operator is called an eigenbasis.

An example of a linear operator that is not diagonalizable over any field $F$ is $(\begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix})$ acting on $F^2$. Its only eigenvectors are the vectors $(\begin{smallmatrix} x \\ 0 \end{smallmatrix})$ with $x \neq 0$. There are not enough eigenvectors to form a basis for $F^2$, so $(\begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix})$ on $F^2$ does not diagonalize.
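
The non-diagonalizable example above can be checked by machine. This is a quick sketch with SymPy (not from the notes): the shear matrix has 1 as its only eigenvalue, with a one-dimensional eigenspace, so there is no eigenbasis.

```python
import sympy as sp

A = sp.Matrix([[1, 1],
               [0, 1]])

# eigenvalue 1 has algebraic multiplicity 2 but only one independent eigenvector
print(A.eigenvects())          # [(1, 2, [Matrix([[1], [0]])])]
print(A.is_diagonalizable())   # False
```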

Remember this example! Since $(\begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix})$ and $(\begin{smallmatrix} 1 & 0 \\ 0 & 1 \end{smallmatrix})$ have the same characteristic polynomial, and the second matrix is diagonalizable, the characteristic polynomial doesn't determine (in general) if an operator is diagonalizable.

Here are the main results we will obtain about diagonalizability:

(1) There are ways of determining if an operator is diagonalizable without having to look explicitly for a basis of eigenvectors.

(2) When $F$ is algebraically closed, "most" operators on a finite-dimensional $F$-vector space are diagonalizable.

(3) There is a polynomial, the minimal polynomial of the operator, which can be used to detect diagonalizability.

(4) If two operators are each diagonalizable, they can be simultaneously diagonalized (i.e., there is a common eigenbasis) precisely when they commute.

Let's look at three examples related to diagonalizability.

Example 2.3. Let $R = (\begin{smallmatrix} 0 & -1 \\ 1 & 0 \end{smallmatrix})$, the 90-degree rotation matrix acting on $\mathbf{R}^2$. It is not diagonalizable on $\mathbf{R}^2$ since there are no eigenvectors: a rotation in $\mathbf{R}^2$ sends no nonzero vector to a scalar multiple of itself. This geometric reason is complemented by an algebraic reason: the characteristic polynomial $T^2 + 1$ of $R$ has no roots in $\mathbf{R}$, so there are no real eigenvalues and thus no eigenvectors in $\mathbf{R}^2$.
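
The algebraic reason can be confirmed with a short SymPy computation (an added sketch, not part of the notes): the characteristic polynomial of R is T^2 + 1, and none of its roots are real.

```python
import sympy as sp

T = sp.symbols('T')
R = sp.Matrix([[0, -1],
               [1,  0]])

print(R.charpoly(T).as_expr())   # T**2 + 1
print(sp.roots(T**2 + 1, T))     # {-I: 1, I: 1} -- no real roots
```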

However, there are roots $\pm i$ of $T^2 + 1$ in $\mathbf{C}$, and there are eigenvectors of $R$ as an operator on $\mathbf{C}^2$ rather than $\mathbf{R}^2$. Eigenvectors of $R$ in $\mathbf{C}^2$ for the eigenvalues $i$ and $-i$ are $(\begin{smallmatrix} i \\ 1 \end{smallmatrix})$ and $(\begin{smallmatrix} -i \\ 1 \end{smallmatrix})$, respectively. In the basis $B = \{(\begin{smallmatrix} i \\ 1 \end{smallmatrix}), (\begin{smallmatrix} -i \\ 1 \end{smallmatrix})\}$, the matrix of $R$ is $[R]_B = (\begin{smallmatrix} i & 0 \\ 0 & -i \end{smallmatrix})$, where the first diagonal entry is the eigenvalue of the first basis vector in $B$ and the second diagonal entry is the eigenvalue of the second basis vector in $B$. (Review the proof of Theorem 2.2 to see why this relation between the ordering of vectors in an eigenbasis and the ordering of entries in a diagonal matrix always holds.)
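
Here is a direct check (added, not from the notes) that these really are eigenvectors of R with eigenvalues i and -i, which is exactly why [R]_B is diag(i, -i) in the stated order of B.

```python
import sympy as sp

R = sp.Matrix([[0, -1],
               [1,  0]])
v1 = sp.Matrix([sp.I, 1])     # claimed eigenvector for eigenvalue i
v2 = sp.Matrix([-sp.I, 1])    # claimed eigenvector for eigenvalue -i

print(R * v1 == sp.I * v1)    # True
print(R * v2 == -sp.I * v2)   # True
# Listing the basis vectors in the opposite order would swap the diagonal
# entries, giving diag(-i, i) instead.
```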

Put more concretely, since passing to a new matrix representation of an operator from an old one amounts to conjugating the old matrix representation by the change-of-basis matrix expressing the old basis in terms of the new basis, we must have $(\begin{smallmatrix} i & 0 \\ 0 & -i \end{smallmatrix}) = PRP^{-1}$, where $P = ([(\begin{smallmatrix} 1 \\ 0 \end{smallmatrix})]_B \; [(\begin{smallmatrix} 0 \\ 1 \end{smallmatrix})]_B) = (\begin{smallmatrix} -i/2 & 1/2 \\ i/2 & 1/2 \end{smallmatrix})$. Verify this $P$ really conjugates $R$ to $(\begin{smallmatrix} i & 0 \\ 0 & -i \end{smallmatrix})$. Note $P^{-1} = (\begin{smallmatrix} i & -i \\ 1 & 1 \end{smallmatrix})$ has as its columns the eigenvectors of $R$ in the standard basis $(\begin{smallmatrix} 1 \\ 0 \end{smallmatrix})$, $(\begin{smallmatrix} 0 \\ 1 \end{smallmatrix})$.

Example 2.4. Any $A$ in $\mathrm{M}_n(\mathbf{R})$ satisfying $A = A^{\top}$ can be diagonalized over $\mathbf{R}$. This is a significant result, called the real spectral theorem.
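
The requested verification can be carried out symbolically; this is a sketch with SymPy (not part of the notes), checking both that P R P^{-1} = diag(i, -i) and that the columns of P^{-1} are the eigenvectors (i, 1) and (-i, 1).

```python
import sympy as sp

R = sp.Matrix([[0, -1],
               [1,  0]])
P = sp.Matrix([[-sp.I/2, sp.Rational(1, 2)],
               [ sp.I/2, sp.Rational(1, 2)]])

print(P * R * P.inv())   # Matrix([[I, 0], [0, -I]])
print(P.inv())           # Matrix([[I, -I], [1, 1]]) -- eigenvector columns
```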

(Any theorem that gives sufficient conditions under which an operator can be diagonalized is called a spectral theorem, because the set of eigenvalues of an operator is called its spectrum.) The essential step in the proof of the real spectral theorem is to show that any real symmetric matrix has a real eigenvalue.

Example 2.5. Any $A$ in $\mathrm{M}_n(\mathbf{C})$ satisfying $A\overline{A}^{\top} = \overline{A}^{\top}A$ is diagonalizable in $\mathrm{M}_n(\mathbf{C})$.¹ When $A$ is real, $\overline{A}^{\top} = A^{\top}$, so saying $AA^{\top} = A^{\top}A$ is weaker than saying $A = A^{\top}$. In particular, the real matrix $(\begin{smallmatrix} 0 & -1 \\ 1 & 0 \end{smallmatrix})$ commutes with its transpose and thus is diagonalizable over $\mathbf{C}$, but the real spectral theorem does not apply to this matrix and in fact this matrix isn't diagonalizable over $\mathbf{R}$ (it has no real eigenvalues).
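
A short SymPy check of the last claim (added, not from the notes): the rotation matrix commutes with its transpose, so it is diagonalizable over C, yet it is not diagonalizable over R.

```python
import sympy as sp

R = sp.Matrix([[0, -1],
               [1,  0]])

print(R * R.T == R.T * R)                     # True: R commutes with its transpose
print(R.is_diagonalizable())                  # True  (eigenvalues i and -i in C)
print(R.is_diagonalizable(reals_only=True))   # False (no real eigenvalues)
```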

¹ A complex square matrix $A$ satisfying $A\overline{A}^{\top} = \overline{A}^{\top}A$ is called normal, and normal matrices are unitarily diagonalizable: $A = UDU^{-1}$ where $D$ is diagonal and $U$ is unitary, meaning $U\overline{U}^{\top} = I_n$. While the condition of $A$ being unitarily diagonalizable is equivalent to $A\overline{A}^{\top} = \overline{A}^{\top}A$, the condition of being diagonalizable alone is not equivalent to any algebraic identity on complex matrices.

3. Eigenvalues and Diagonalizability

If a linear operator on a finite-dimensional $F$-vector space is diagonalizable, its eigenvalues all lie in $F$, since a diagonal matrix representation has the eigenvalues along the diagonal. (See the proof of Theorem 2.2.) The converse is false: if all the eigenvalues of an operator are in $F$ this does not necessarily mean the operator is diagonalizable.
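
To illustrate the failure of the converse (an added sketch, not from the notes), here is a matrix whose only eigenvalue, 2, lies in the field, yet which is not diagonalizable.

```python
import sympy as sp

A = sp.Matrix([[2, 1],
               [0, 2]])

print(A.eigenvals())           # {2: 2} -- every eigenvalue lies in Q, hence in R
print(A.is_diagonalizable())   # False
```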

