The Matrix Exponential - University of Massachusetts Lowell

The Matrix Exponential (with exercises), by Dan. Corrections and comments are welcome.

The Matrix Exponential

For each $n \times n$ complex matrix $A$, define the exponential of $A$ to be the matrix

(1)  $e^A = \sum_{k=0}^{\infty} \frac{A^k}{k!} = I + A + \frac{1}{2!}A^2 + \frac{1}{3!}A^3 + \cdots$

It is not difficult to show that this sum converges for all complex matrices $A$ of any finite dimension, but we will not prove this here.

If $A$ is a $1 \times 1$ matrix $[t]$, then $e^A = [e^t]$, by the Maclaurin series formula for the function $y = e^t$. More generally, if $D$ is a diagonal matrix having diagonal entries $d_1, d_2, \ldots, d_n$, then we have
$$e^D = I + D + \frac{1}{2!}D^2 + \cdots
= \begin{pmatrix} 1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 1 \end{pmatrix}
+ \begin{pmatrix} d_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & d_n \end{pmatrix}
+ \begin{pmatrix} \frac{d_1^2}{2!} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \frac{d_n^2}{2!} \end{pmatrix}
+ \cdots
= \begin{pmatrix} e^{d_1} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & e^{d_n} \end{pmatrix}$$
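As a quick numerical sanity check of definition (1), partial sums of the series can be compared against a library matrix exponential. The following is only an illustrative sketch; it assumes numpy and scipy are available, and the diagonal entries and truncation order $N$ are arbitrary choices.

    import numpy as np
    from math import factorial
    from scipy.linalg import expm

    # A diagonal example: e^D should equal diag(e^{d_1}, ..., e^{d_n}).
    d = [1.0, -0.5, 2.0]
    D = np.diag(d)

    # Truncated power series from definition (1): sum_{k=0}^{N} D^k / k!
    N = 20
    series = sum(np.linalg.matrix_power(D, k) / factorial(k) for k in range(N + 1))

    print(np.allclose(series, np.diag(np.exp(d))))  # True
    print(np.allclose(series, expm(D)))             # True

Twenty terms already match to machine precision for this small matrix; for matrices with large entries many more terms (or a scaling-and-squaring method, as library routines use) would be needed.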

The situation is more complicated for matrices that are not diagonal. However, if a matrix $A$ happens to be diagonalizable, there is a simple algorithm for computing $e^A$, a consequence of the following lemma.

Lemma 1. Let $A$ and $P$ be complex $n \times n$ matrices, and suppose that $P$ is invertible. Then
$$e^{P^{-1}AP} = P^{-1}e^{A}P.$$

Proof. Recall that, for all integers $m \ge 0$, we have $(P^{-1}AP)^m = P^{-1}A^m P$. The definition (1) then yields
$$e^{P^{-1}AP} = I + P^{-1}AP + \frac{(P^{-1}AP)^2}{2!} + \cdots
= I + P^{-1}AP + P^{-1}\frac{A^2}{2!}P + \cdots
= P^{-1}\Bigl(I + A + \frac{A^2}{2!} + \cdots\Bigr)P = P^{-1}e^{A}P.$$

If a matrix $A$ is diagonalizable, then there exists an invertible $P$ so that $A = PDP^{-1}$, where $D$ is a diagonal matrix of eigenvalues of $A$, and $P$ is a matrix having eigenvectors of $A$ as its columns. In this case, $e^A = Pe^{D}P^{-1}$.

Example: Let $A$ denote the matrix
$$A = \begin{pmatrix} 5 & 1 \\ -2 & 2 \end{pmatrix}.$$
The reader can easily verify that 4 and 3 are eigenvalues of $A$, with corresponding eigenvectors $w_1 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$ and $w_2 = \begin{pmatrix} 1 \\ -2 \end{pmatrix}$. It follows that
$$A = PDP^{-1} = \begin{pmatrix} 1 & 1 \\ -1 & -2 \end{pmatrix}\begin{pmatrix} 4 & 0 \\ 0 & 3 \end{pmatrix}\begin{pmatrix} 2 & 1 \\ -1 & -1 \end{pmatrix}$$
so that
$$e^A = \begin{pmatrix} 1 & 1 \\ -1 & -2 \end{pmatrix}\begin{pmatrix} e^4 & 0 \\ 0 & e^3 \end{pmatrix}\begin{pmatrix} 2 & 1 \\ -1 & -1 \end{pmatrix} = \begin{pmatrix} 2e^4 - e^3 & e^4 - e^3 \\ 2e^3 - 2e^4 & 2e^3 - e^4 \end{pmatrix}.$$

The definition (1) immediately reveals many other familiar properties. The following proposition is easy to prove from the definition (1) and is left as an exercise.

Proposition 2. Let $A$ be a complex square $n \times n$ matrix.
(1) If $0$ denotes the zero matrix, then $e^0 = I$, the identity matrix.
(2) $A^m e^A = e^A A^m$ for all integers $m$.
(3) $(e^A)^T = e^{(A^T)}$.
(4) If $AB = BA$, then $Ae^B = e^B A$ and $e^A e^B = e^B e^A$.
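The example above, and property (3) of Proposition 2, are easy to check numerically. A minimal sketch, assuming numpy and scipy are available; the matrices $P$ and $D$ are copied from the example:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[5.0, 1.0],
                  [-2.0, 2.0]])

    # P has the eigenvectors (1, -1) and (1, -2) as columns; D holds the eigenvalues 4 and 3.
    P = np.array([[1.0, 1.0],
                  [-1.0, -2.0]])
    eD = np.diag(np.exp([4.0, 3.0]))
    eA = P @ eD @ np.linalg.inv(P)           # e^A = P e^D P^{-1}

    # Closed form computed in the example.
    e4, e3 = np.exp(4.0), np.exp(3.0)
    closed = np.array([[2*e4 - e3,   e4 - e3],
                       [2*e3 - 2*e4, 2*e3 - e4]])

    print(np.allclose(eA, expm(A)))           # True
    print(np.allclose(eA, closed))            # True
    print(np.allclose(expm(A).T, expm(A.T)))  # Proposition 2(3): (e^A)^T = e^{A^T}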

Unfortunately, not all familiar properties of the scalar exponential function $y = e^t$ carry over to the matrix exponential. For example, we know from calculus that $e^{s+t} = e^s e^t$ when $s$ and $t$ are numbers. However, this is often not true for exponentials of matrices. In other words, it is possible to have $n \times n$ matrices $A$ and $B$ such that $e^{A+B} \neq e^A e^B$. See, for example, Exercise 10 at the end of this section. Exactly when we have equality, $e^{A+B} = e^A e^B$, depends on specific properties of the matrices $A$ and $B$ that will be explored later on. Meanwhile, we can at least verify the following limited case.

Proposition 3. Let $A$ be a complex square matrix, and let $s, t \in \mathbb{C}$. Then
$$e^{A(s+t)} = e^{As}e^{At}.$$

Proof. From the definition (1) we have
$$e^{As}e^{At} = \Bigl(I + As + \frac{A^2 s^2}{2!} + \cdots\Bigr)\Bigl(I + At + \frac{A^2 t^2}{2!} + \cdots\Bigr)
= \Bigl(\sum_{j=0}^{\infty} \frac{A^j s^j}{j!}\Bigr)\Bigl(\sum_{k=0}^{\infty} \frac{A^k t^k}{k!}\Bigr)
= \sum_{j=0}^{\infty}\sum_{k=0}^{\infty} \frac{A^{j+k} s^j t^k}{j!\,k!}. \qquad (*)$$
Let $n = j + k$, so that $k = n - j$. It now follows from $(*)$ that
$$e^{As}e^{At} = \sum_{j=0}^{\infty}\sum_{n=j}^{\infty} \frac{A^{n} s^j t^{n-j}}{j!\,(n-j)!}
= \sum_{n=0}^{\infty} \frac{A^{n}}{n!} \sum_{j=0}^{n} \frac{n!}{j!\,(n-j)!}\, s^j t^{n-j}
= \sum_{n=0}^{\infty} \frac{A^{n}(s+t)^{n}}{n!} = e^{A(s+t)}.$$

Setting $s = 1$ and $t = -1$ in Proposition 3, we find that
$$e^{A}e^{-A} = e^{A(1 + (-1))} = e^0 = I.$$
In other words, regardless of the matrix $A$, the exponential matrix $e^A$ is always invertible, and has inverse $e^{-A}$.
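Proposition 3 and its consequence (that $e^A$ is invertible with inverse $e^{-A}$) can likewise be spot-checked numerically. A sketch, assuming numpy and scipy; the matrix and the scalars $s$, $t$ are arbitrary choices:

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))   # any square matrix will do
    s, t = 0.7, -1.3

    # Proposition 3: e^{As} e^{At} = e^{A(s+t)}
    print(np.allclose(expm(A*s) @ expm(A*t), expm(A*(s + t))))  # True

    # Consequence: e^{A} e^{-A} = I, so e^{-A} is the inverse of e^{A}.
    print(np.allclose(expm(A) @ expm(-A), np.eye(3)))           # True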

We can now prove a fundamental theorem about matrix exponentials. Both the statement of this theorem and the method of its proof will be important for the study of differential equations in the next section.

Theorem 4. Let $A$ be a complex square matrix, and let $t$ be a real scalar variable. Let $f(t) = e^{tA}$. Then $f'(t) = Ae^{tA}$.

Proof. Applying Proposition 3 to the limit definition of the derivative yields
$$f'(t) = \lim_{h \to 0}\frac{e^{A(t+h)} - e^{At}}{h} = e^{At}\Bigl(\lim_{h \to 0}\frac{e^{Ah} - I}{h}\Bigr).$$
Applying the definition (1) to $e^{Ah} - I$ then gives us
$$f'(t) = e^{At}\Bigl(\lim_{h \to 0}\frac{1}{h}\Bigl[Ah + \frac{A^2 h^2}{2!} + \cdots\Bigr]\Bigr) = e^{At}A = Ae^{At}.$$

Theorem 4 is the fundamental tool for proving important facts about the matrix exponential and its uses.
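Theorem 4 can be checked with a finite-difference approximation of the derivative. A sketch, assuming numpy and scipy; the matrix, the point $t$, and the step size $h$ are arbitrary choices:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[5.0, 1.0],
                  [-2.0, 2.0]])
    t, h = 0.4, 1e-6

    # Central-difference approximation of d/dt e^{tA} ...
    deriv = (expm((t + h) * A) - expm((t - h) * A)) / (2 * h)

    # ... should agree with A e^{tA} up to discretization and rounding error.
    print(np.allclose(deriv, A @ expm(t * A), atol=1e-4))  # True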

Recall, for example, that there exist $n \times n$ matrices $A$ and $B$ such that $e^A e^B \neq e^{A+B}$. The following theorem provides a condition for when this identity does hold.

Theorem 5. Let $A, B$ be $n \times n$ complex matrices. If $AB = BA$, then $e^{A+B} = e^A e^B$.

Proof. If $AB = BA$, it follows from the formula (1) that $Ae^{Bt} = e^{Bt}A$, and similarly for other combinations of $A$, $B$, $A+B$, and their exponentials.

Let $g(t) = e^{(A+B)t}e^{-Bt}e^{-At}$, where $t$ is a real (scalar) variable. By Theorem 4 and the product rule for derivatives,
$$g'(t) = (A+B)e^{(A+B)t}e^{-Bt}e^{-At} + e^{(A+B)t}(-B)e^{-Bt}e^{-At} + e^{(A+B)t}e^{-Bt}(-A)e^{-At}
= (A+B)g(t) - Bg(t) - Ag(t) = 0,$$
the $n \times n$ zero matrix. Note that it was only possible to factor $(-A)$ and $(-B)$ out of the terms above because we are assuming that $AB = BA$.

Since $g'(t) = 0$ for all $t$, it follows that $g(t)$ is an $n \times n$ matrix of constants, so $g(t) = C$ for some constant matrix $C$. In particular, setting $t = 0$, we have $C = g(0)$. But the definition of $g(t)$ then gives
$$C = g(0) = e^{(A+B)0}e^{-B0}e^{-A0} = e^{0}e^{0}e^{0} = I,$$
the identity matrix. Hence, $I = C = g(t) = e^{(A+B)t}e^{-Bt}e^{-At}$ for all $t$. After multiplying by $e^{At}e^{Bt}$ on both sides, we have $e^{At}e^{Bt} = e^{(A+B)t}$. (A brief numerical illustration of the commuting and non-commuting cases follows the exercises below.)

Exercises.

1. If $A^2 = 0$, the zero matrix, prove that $e^A = I + A$.
2. Use the definition (1) of the matrix exponential to prove the basic properties listed in Proposition 2. (Do not use any of the theorems of the section! Your proofs should use only the definition (1) and elementary matrix algebra.)
3. Show that $e^{cI + A} = e^c e^A$, for all numbers $c$ and all square matrices $A$.
4. Suppose that $A$ is a real $n \times n$ matrix and that $A^T = -A$. Prove that $e^A$ is an orthogonal matrix (i.e. prove that, if $B = e^A$, then $B^T B = I$).
5. If $A^2 = A$, then find a nice simple formula for $e^A$, similar to the formula in the first exercise above.
6. Compute $e^A$ for each of the following examples:
   (a) $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$  (b) $A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$  (c) $A = \begin{pmatrix} a & b \\ 0 & a \end{pmatrix}$
7. Compute $e^A$ for each of the following examples:
   (a) $A = \begin{pmatrix} a & b \\ 0 & 0 \end{pmatrix}$  (b) $A = \begin{pmatrix} a & 0 \\ b & 0 \end{pmatrix}$
8. If $A^2 = I$, show that $2e^A = \bigl(e + \tfrac{1}{e}\bigr)I + \bigl(e - \tfrac{1}{e}\bigr)A$.
9. Suppose $\lambda \in \mathbb{C}$ and $X \in \mathbb{C}^n$ is a non-zero vector such that $AX = \lambda X$. Show that $e^A X = e^{\lambda} X$.
10. Let $A$ and $B$ denote the matrices
    $$A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$$
    Show by direct computation that $e^{A+B} \neq e^A e^B$.
11. The trace of a square $n \times n$ matrix $A$ is defined to be the sum of its diagonal entries: $\mathrm{trace}(A) = a_{11} + a_{22} + \cdots + a_{nn}$. Show that, if $A$ is diagonalizable, then $\det(e^A) = e^{\mathrm{trace}(A)}$.
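As promised above, here is a brief numerical illustration of Theorem 5: for commuting matrices the identity $e^{A+B} = e^A e^B$ holds, while for a typical non-commuting pair it fails (compare Exercise 10). A sketch, assuming numpy and scipy; the particular matrices are arbitrary choices:

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(1)
    A = rng.standard_normal((2, 2))
    B = 2 * A + 3 * np.eye(2)         # a polynomial in A, so AB = BA
    C = rng.standard_normal((2, 2))   # a generic matrix; almost surely AC != CA

    # Commuting case: Theorem 5 applies.
    print(np.allclose(expm(A + B), expm(A) @ expm(B)))  # True

    # Non-commuting case: the identity typically fails.
    print(np.allclose(expm(A + C), expm(A) @ expm(C)))  # False for this C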

Note: Later it will be seen that the identity $\det(e^A) = e^{\mathrm{trace}(A)}$ of Exercise 11 is true for all square matrices.

Answers and hints to selected exercises.

4. Since $(e^A)^T = e^{A^T}$, when $A^T = -A$ we have $(e^A)^T e^A = e^{A^T}e^A = e^{-A}e^A = e^{A-A} = e^0 = I$.
5. $e^A = I + (e-1)A$.
6. (a) $e^A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$  (b) $e^A = \begin{pmatrix} e & e \\ 0 & e \end{pmatrix}$  (c) $e^A = \begin{pmatrix} e^a & e^a b \\ 0 & e^a \end{pmatrix}$
7. (a) $e^A = \begin{pmatrix} e^a & \frac{b}{a}(e^a - 1) \\ 0 & 1 \end{pmatrix}$  (b) $e^A = \begin{pmatrix} e^a & 0 \\ \frac{b}{a}(e^a - 1) & 1 \end{pmatrix}$  (Replace $\frac{b}{a}(e^a - 1)$ by $b$ in each case if $a = 0$.)

Linear Systems of Ordinary Differential Equations

Suppose that $y = f(x)$ is a differentiable function of a real (scalar) variable $x$, and that $y' = ky$, where $k$ is a (scalar) constant. In calculus this differential equation is solved by separation of variables:
$$\frac{y'}{y} = k \quad\Longrightarrow\quad \int \frac{y'}{y}\,dx = \int k\,dx,$$
so that $\ln y = kx + c$, and $y = e^c e^{kx}$, for some constant $c \in \mathbb{R}$. Setting $x = 0$ we find that $y_0 = f(0) = e^c$, and conclude that

(2)  $y = y_0 e^{kx}$.

Instead, let us solve the same differential equation $y' = ky$ in a slightly different way. Set $F(x) = e^{-kx}y$. Differentiating both sides, we have
$$F'(x) = -ke^{-kx}y + e^{-kx}y' = -ke^{-kx}y + e^{-kx}ky = 0,$$
where the second identity uses the assumption that $y' = ky$. Since $F'(x) = 0$ for all $x$, the function $F(x)$ must be a constant, $F(x) = a$, for some $a \in \mathbb{R}$. Setting $x = 0$, we find that $a = F(0) = e^{-k \cdot 0}y(0) = y_0$, where we again let $y_0$ denote $y(0)$.

We conclude that $y = e^{kx}y_0$, as before. Moreover, this method proves that (2) describes all solutions to $y' = ky$.

This second point of view will prove valuable for solving a more complicated linear system of ordinary differential equations (ODEs). For example, suppose $Y(t)$ is a differentiable vector-valued function
$$Y = \begin{pmatrix} y_1(t) \\ y_2(t) \end{pmatrix}$$
satisfying the differential equations
$$y_1' = 5y_1 + y_2, \qquad y_2' = -2y_1 + 2y_2,$$
and initial condition $Y_0 = Y(0) = \begin{pmatrix} -3 \\ 8 \end{pmatrix}$. In other words,
$$Y'(t) = \begin{pmatrix} 5 & 1 \\ -2 & 2 \end{pmatrix} Y = AY,$$
where $A$ denotes the matrix $\begin{pmatrix} 5 & 1 \\ -2 & 2 \end{pmatrix}$.

To solve this system of ODEs, set $F(t) = e^{-At}Y$, where $e^{-At}$ is defined using the matrix exponential formula (1) of the previous section. Differentiating (using the product rule) and applying Theorem 4 then yields
$$F'(t) = -Ae^{-At}Y + e^{-At}Y' = -Ae^{-At}Y + e^{-At}AY = \vec{0},$$
where the second identity uses the assumption that $Y' = AY$. Since $F'(t) = \vec{0}$ (the zero vector) for all $t$, the function $F$ must be equal to a constant vector $\vec{v}$; that is, $F(t) = \vec{v}$ for all $t$. Evaluating at $t = 0$ gives
$$\vec{v} = F(0) = e^{-A \cdot 0}Y(0) = Y_0,$$
where we denote the value $Y(0)$ by the symbol $Y_0$. In other words,
$$Y_0 = \vec{v} = F(t) = e^{-At}Y$$
for all values of $t$. Hence,
$$Y = e^{At}Y_0 = e^{At}\begin{pmatrix} -3 \\ 8 \end{pmatrix},$$
and the differential equation is solved! Assuming, of course, that we have a formula for $e^{At}$.
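Even before a closed form for $e^{At}$ is available, the solution $Y = e^{At}Y_0$ can be evaluated numerically and cross-checked against a general-purpose ODE integrator. A sketch, assuming numpy and scipy; the evaluation time $t = 0.5$ is an arbitrary choice:

    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import solve_ivp

    A  = np.array([[5.0, 1.0],
                   [-2.0, 2.0]])
    Y0 = np.array([-3.0, 8.0])
    t  = 0.5

    # Matrix-exponential solution Y(t) = e^{At} Y0.
    Y_exp = expm(A * t) @ Y0

    # Independent check: integrate Y' = AY numerically from the same initial value.
    sol = solve_ivp(lambda s, y: A @ y, (0.0, t), Y0, rtol=1e-10, atol=1e-12)
    print(np.allclose(Y_exp, sol.y[:, -1], atol=1e-6))  # True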

In the previous section we observed that the eigenvalues of the matrix $A = \begin{pmatrix} 5 & 1 \\ -2 & 2 \end{pmatrix}$ are 4 and 3, with corresponding eigenvectors $w_1 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$ and $w_2 = \begin{pmatrix} 1 \\ -2 \end{pmatrix}$. Therefore, for all scalar values $t$,
$$At = P(Dt)P^{-1} = \begin{pmatrix} 1 & 1 \\ -1 & -2 \end{pmatrix}\begin{pmatrix} 4t & 0 \\ 0 & 3t \end{pmatrix}\begin{pmatrix} 2 & 1 \\ -1 & -1 \end{pmatrix}$$
so that
$$e^{At} = Pe^{Dt}P^{-1} = \begin{pmatrix} 1 & 1 \\ -1 & -2 \end{pmatrix}\begin{pmatrix} e^{4t} & 0 \\ 0 & e^{3t} \end{pmatrix}\begin{pmatrix} 2 & 1 \\ -1 & -1 \end{pmatrix}.$$
It follows that
$$Y(t) = e^{At}Y_0 = e^{At}\begin{pmatrix} -3 \\ 8 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ -1 & -2 \end{pmatrix}\begin{pmatrix} e^{4t} & 0 \\ 0 & e^{3t} \end{pmatrix}\begin{pmatrix} 2 & 1 \\ -1 & -1 \end{pmatrix}\begin{pmatrix} -3 \\ 8 \end{pmatrix},$$
so that
$$Y(t) = \begin{pmatrix} y_1(t) \\ y_2(t) \end{pmatrix} = \begin{pmatrix} 2e^{4t} - 5e^{3t} \\ -2e^{4t} + 10e^{3t} \end{pmatrix} = e^{4t}\begin{pmatrix} 2 \\ -2 \end{pmatrix} + e^{3t}\begin{pmatrix} -5 \\ 10 \end{pmatrix} = 2e^{4t}\begin{pmatrix} 1 \\ -1 \end{pmatrix} - 5e^{3t}\begin{pmatrix} 1 \\ -2 \end{pmatrix}.$$

More generally, if $Y'(t) = AY(t)$ is a linear system of ordinary differential equations, then the arguments above imply that $Y = e^{At}Y_0$. If, in addition, we can diagonalize $A$, so that
$$A = PDP^{-1} = P\begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}P^{-1},$$
then
$$e^{At} = Pe^{Dt}P^{-1} = P\begin{pmatrix} e^{\lambda_1 t} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2 t} & & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n t} \end{pmatrix}P^{-1}$$
and $Y(t) = Pe^{Dt}P^{-1}Y_0$. If the columns of $P$ are the eigenvectors $v_1, \ldots, v_n$ of $A$, where each $Av_i = \lambda_i v_i$, then
$$Y(t) = Pe^{Dt}P^{-1}Y_0 = \begin{pmatrix} v_1 & v_2 & \cdots & v_n \end{pmatrix}\begin{pmatrix} e^{\lambda_1 t} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2 t} & & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n t} \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix},$$
where

(3)  $\begin{pmatrix} c_1 \\ \vdots \\ c_n \end{pmatrix} = P^{-1}Y_0$,

so that
$$Y(t) = \begin{pmatrix} e^{\lambda_1 t}v_1 & e^{\lambda_2 t}v_2 & \cdots & e^{\lambda_n t}v_n \end{pmatrix}\begin{pmatrix} c_1 \\ \vdots \\ c_n \end{pmatrix} = c_1 e^{\lambda_1 t}v_1 + c_2 e^{\lambda_2 t}v_2 + \cdots + c_n e^{\lambda_n t}v_n.$$

These arguments are summarized as follows.

Theorem 6. Suppose that $Y(t) \colon \mathbb{R} \to \mathbb{R}^n$ (or $\mathbb{C}^n$) is a differentiable function of $t$ such that
$$Y'(t) = AY(t)$$
with initial value $Y(0) = Y_0$, where $A$ is a diagonalizable matrix having eigenvalues $\lambda_1, \ldots, \lambda_n$ and corresponding eigenvectors $v_1, \ldots, v_n$. Then

(4)  $Y(t) = c_1 e^{\lambda_1 t}v_1 + c_2 e^{\lambda_2 t}v_2 + \cdots + c_n e^{\lambda_n t}v_n$,

where, if $P$ is the matrix having columns $v_1, \ldots, v_n$, then the constants $c_i$ are given by the identity (3).

If one is given a different initial value of $Y$, say $Y(t_0)$ at time $t_0$, then the equation (4) still holds, where
$$\begin{pmatrix} c_1 \\ \vdots \\ c_n \end{pmatrix} = e^{-Dt_0}P^{-1}Y(t_0).$$
For exercises on differential equations, please consult the textbook.
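Finally, Theorem 6 translates directly into a few lines of code: compute an eigen-decomposition of $A$, obtain the constants $c_i$ from (3), and assemble the sum (4). A closing sketch, assuming numpy and scipy, using the matrix and initial value from the worked example above:

    import numpy as np
    from scipy.linalg import expm

    A  = np.array([[5.0, 1.0],
                   [-2.0, 2.0]])
    Y0 = np.array([-3.0, 8.0])

    # Eigen-decomposition A = P D P^{-1}; the columns of P are eigenvectors v_i.
    lam, P = np.linalg.eig(A)
    c = np.linalg.solve(P, Y0)       # the constants c_i from identity (3)

    def Y(t):
        # Formula (4): Y(t) = sum_i c_i e^{lambda_i t} v_i
        return P @ (c * np.exp(lam * t))

    t = 0.5
    print(np.allclose(Y(t), expm(A * t) @ Y0))  # True

The eigenvectors returned by numpy are scaled differently from $w_1$ and $w_2$ above, but the constants from (3) absorb the scaling, so the assembled solution is the same.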

