
Down with Determinants!

Sheldon Axler

21 December 1994



1. Introduction

Ask anyone why a square matrix of complex numbers has an eigenvalue, and you'll probably get the wrong answer, which goes something like this: The characteristic polynomial of the matrix (which is defined via determinants) has a root (by the fundamental theorem of algebra); this root is an eigenvalue of the matrix. What's wrong with that answer? It depends upon determinants, that's what. Determinants are difficult, non-intuitive, and often defined without motivation. As we'll see, there is a better proof, one that is simpler, clearer, provides more insight, and avoids determinants.

This paper will show how linear algebra can be done better without determinants. Without using determinants, we will define the multiplicity of an eigenvalue and prove that the number of eigenvalues, counting multiplicities, equals the dimension of the underlying space.

Without determinants, we'll define the characteristic and minimal polynomials and then prove that they behave as expected. Next, we will easily prove that every matrix is similar to a nice upper-triangular one. Turning to inner product spaces, and still without mentioning determinants, we'll have a simple proof of the finite-dimensional Spectral Theorem.

Determinants are needed in one place in the undergraduate mathematics curriculum: the change of variables formula for multi-variable integrals. Thus at the end of this paper we'll revive determinants, but not with any of the usual abstruse definitions. We'll define the determinant of a matrix to be the product of its eigenvalues (counting multiplicities). This easy-to-remember definition leads to the usual formulas for computing determinants.
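The definition just mentioned is easy to test on a small example. The sketch below is mine, not from the paper; it uses the standard fact that the eigenvalues of an upper-triangular matrix are its diagonal entries, and checks the eigenvalue-product definition against the familiar 2x2 formula ad - bc.

```python
# For an upper-triangular matrix, the eigenvalues are the diagonal
# entries, so "det = product of eigenvalues" is easy to verify against
# the usual 2x2 formula ad - bc.
A = [[2.0, 1.0],
     [0.0, 3.0]]

eigenvalues = [A[0][0], A[1][1]]  # diagonal entries: 2 and 3

det_from_eigenvalues = eigenvalues[0] * eigenvalues[1]
det_from_formula = A[0][0] * A[1][1] - A[0][1] * A[1][0]

assert det_from_eigenvalues == det_from_formula == 6.0
```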

We'll derive the change of variables formula for multi-variable integrals in a fashion that makes the appearance of the determinant there seem natural.

This work was partially supported by the National Science Foundation. Many people made comments that helped improve this paper. I especially thank Marilyn Brouwer, William Brown, Jonathan Hall, Paul Halmos, Richard Hill, Ben Lotto, and Wade Ramey.

A few friends who use determinants in their research have expressed unease at the title of this paper. I know that determinants play an honorable role in some areas of research, and I do not mean to belittle their importance when they are indispensable. But most mathematicians and most students of mathematics will have a clearer understanding of linear algebra if they use the determinant-free approach to the basic structure theorems of linear algebra.

The theorems in this paper are not new; they will already be familiar to most readers.

Some of the proofs and definitions are new, although many parts of this approach have been around in bits and pieces, but without the attention they deserved. For example, at a recent annual meeting of the AMS and MAA, I looked through every linear algebra text on display. Out of over fifty linear algebra texts offered for sale, only one obscure book gave a determinant-free proof that eigenvalues exist, and that book did not manage to develop other key parts of linear algebra without determinants. The anti-determinant philosophy advocated in this paper is an attempt to counter the undeserved dominance of determinant-dependent proofs.

This paper focuses on showing that determinants should be banished from much of the theoretical part of linear algebra. Determinants are also useless in the computational part of linear algebra.
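Modern numerical practice indeed computes eigenvalues iteratively, with no determinants in sight. As an illustration (my sketch, not part of the paper), here is power iteration, the simplest such method: repeatedly apply the matrix to a vector, renormalize, and read off the dominant eigenvalue from the Rayleigh quotient.

```python
# Power iteration: a determinant-free way to approximate the dominant
# eigenvalue of a matrix. The 2x2 matrix and step count are
# illustrative choices only.
def power_iteration(A, steps=100):
    v = [1.0, 0.0]                        # arbitrary non-zero start vector
    for _ in range(steps):
        w = [A[0][0] * v[0] + A[0][1] * v[1],
             A[1][0] * v[0] + A[1][1] * v[1]]
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = [w[0] / norm, w[1] / norm]    # renormalize each step
    # Rayleigh quotient v . (Av) estimates the dominant eigenvalue.
    Av = [A[0][0] * v[0] + A[0][1] * v[1],
          A[1][0] * v[0] + A[1][1] * v[1]]
    return v[0] * Av[0] + v[1] * Av[1]

A = [[2.0, 1.0], [1.0, 2.0]]              # eigenvalues are 3 and 1
lam = power_iteration(A)                  # close to 3.0
```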

For example, Cramer's rule for solving systems of linear equations is already worthless for 10 x 10 systems, not to mention the much larger systems often encountered in the real world. Many computer programs efficiently calculate eigenvalues numerically; none of them uses determinants. To emphasize the point, let me quote a numerical analyst. Henry Thacher, in a review (SIAM News, September 1988) of the Turbo Pascal Numerical Methods Toolbox, writes,

"I find it hard to conceive of a situation in which the numerical value of a determinant is needed: Cramer's rule, because of its inefficiency, is completely impractical, while the magnitude of the determinant is an indication of neither the condition of the matrix nor the accuracy of the solution."

2. Eigenvalues and Eigenvectors

The basic objects of study in linear algebra can be thought of as either linear transformations or matrices.

Because a basis-free approach seems more natural, this paper will mostly use the language of linear transformations; readers who prefer the language of matrices should have no trouble making the appropriate translation. The term linear operator will mean a linear transformation from a vector space to itself; thus a linear operator corresponds to a square matrix (assuming some choice of basis).

Notation used throughout the paper: n denotes a positive integer, V denotes an n-dimensional complex vector space, T denotes a linear operator on V, and I denotes the identity operator.

A complex number λ is called an eigenvalue of T if T − λI is not injective. Here is the central result about eigenvalues, with a simple proof that avoids determinants.

Theorem 2.1. Every linear operator on a finite-dimensional complex vector space has an eigenvalue.

Proof. To show that T (our linear operator on V) has an eigenvalue, fix any non-zero vector v ∈ V.

The vectors v, Tv, T^2 v, ..., T^n v cannot be linearly independent, because V has dimension n and we have n + 1 vectors. Thus there exist complex numbers a_0, ..., a_n, not all 0, such that

a_0 v + a_1 Tv + ... + a_n T^n v = 0.

Make the a's the coefficients of a polynomial, which can be written in factored form as

a_0 + a_1 z + ... + a_n z^n = c(z − r_1)...(z − r_m),

where c is a non-zero complex number, each r_j is complex, and the equation holds for all complex z. We then have

0 = (a_0 I + a_1 T + ... + a_n T^n)v = c(T − r_1 I)...(T − r_m I)v,

which means that T − r_j I is not injective for at least one j. In other words, T has an eigenvalue. ∎

Recall that a vector v ∈ V is called an eigenvector of T if Tv = λv for some eigenvalue λ. The next proposition, which has a simple, determinant-free proof, obviously implies that the number of distinct eigenvalues of T cannot exceed the dimension of V.

Proposition 2.2. Non-zero eigenvectors corresponding to distinct eigenvalues of T are linearly independent.

Proof. Suppose that v_1, ..., v_m are non-zero eigenvectors of T corresponding to distinct eigenvalues λ_1, ..., λ_m. We need to prove that v_1, ..., v_m are linearly independent. To do this, suppose a_1, ..., a_m are complex numbers such that

a_1 v_1 + ... + a_m v_m = 0.

Apply the linear operator (T − λ_2 I)(T − λ_3 I)...(T − λ_m I) to both sides of the equation above, getting

a_1 (λ_1 − λ_2)(λ_1 − λ_3)...(λ_1 − λ_m) v_1 = 0.

Thus a_1 = 0. In a similar fashion, a_j = 0 for each j, as desired. ∎

3. Generalized Eigenvectors

Unfortunately, the eigenvectors of T need not span V. For example, the linear operator on C^2 whose matrix is

[ 0 1 ]
[ 0 0 ]

has only one eigenvalue, namely 0, and its eigenvectors form a one-dimensional subspace of C^2. We will see, however, that the generalized eigenvectors (defined below) of T always span V.

A vector v ∈ V is called a generalized eigenvector of T if

(T − λI)^k v = 0

for some eigenvalue λ of T and some positive integer k.
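The C^2 example above can be checked directly. In the sketch below (the helper function `apply` is mine), e2 = (0, 1) is not an eigenvector for the eigenvalue 0, since T e2 is non-zero, but it is a generalized eigenvector, since (T − 0I)^2 annihilates it.

```python
# The operator on C^2 with matrix [[0,1],[0,0]] and eigenvalue 0.
# Its eigenvectors are the multiples of (1,0); e2 = (0,1) is a
# generalized eigenvector because T e2 != 0 while T^2 e2 = 0.
def apply(T, v):
    return [T[0][0] * v[0] + T[0][1] * v[1],
            T[1][0] * v[0] + T[1][1] * v[1]]

T = [[0.0, 1.0],
     [0.0, 0.0]]
e2 = [0.0, 1.0]

Te2 = apply(T, e2)     # (1, 0): non-zero, so e2 is not an eigenvector
TTe2 = apply(T, Te2)   # (0, 0): (T - 0I)^2 annihilates e2

assert Te2 == [1.0, 0.0]
assert TTe2 == [0.0, 0.0]
```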

Obviously, the set of generalized eigenvectors of T corresponding to an eigenvalue λ is a subspace of V. The following lemma shows that in the definition of generalized eigenvector, instead of allowing an arbitrary power of T − λI to annihilate v, we could have restricted attention to the nth power, where n equals the dimension of V. As usual, ker is an abbreviation for kernel (the set of vectors that get mapped to 0).

Lemma 3.1. The set of generalized eigenvectors of T corresponding to an eigenvalue λ equals ker (T − λI)^n.

Proof. Obviously, every element of ker (T − λI)^n is a generalized eigenvector of T corresponding to λ. To prove the inclusion in the other direction, let v be a generalized eigenvector of T corresponding to λ. We need to prove that (T − λI)^n v = 0. Clearly, we can assume that v ≠ 0, so there is a smallest non-negative integer k such that (T − λI)^k v = 0.

We will be done if we show that k ≤ n. This will be proved by showing that

v, (T − λI)v, (T − λI)^2 v, ..., (T − λI)^(k−1) v    (∗)

are linearly independent vectors; we will then have k linearly independent elements in an n-dimensional space, which implies that k ≤ n.

To prove the vectors in (∗) are linearly independent, suppose a_0, ..., a_(k−1) are complex numbers such that

a_0 v + a_1 (T − λI)v + ... + a_(k−1) (T − λI)^(k−1) v = 0.    (∗∗)

Apply (T − λI)^(k−1) to both sides of the equation above, getting a_0 (T − λI)^(k−1) v = 0, which implies that a_0 = 0. Now apply (T − λI)^(k−2) to both sides of (∗∗), getting a_1 (T − λI)^(k−1) v = 0, which implies that a_1 = 0. Continuing in this fashion, we see that a_j = 0 for each j, as desired. ∎

The next result is the key tool we'll use to give a description of the structure of a linear operator.

Proposition 3.2. The generalized eigenvectors of T span V.

Proof. The proof will be by induction on n, the dimension of V.
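The lemma above can be checked on a concrete operator. The 3x3 example below is mine, not from the paper: T = 5I + N, where N is the nilpotent shift, so λ = 5 is the only eigenvalue. Here (T − 5I)^2 is non-zero but (T − 5I)^3 = 0, so the nth power (n = 3) is exactly what is needed to capture every generalized eigenvector.

```python
# For T = [[5,1,0],[0,5,1],[0,0,5]], the matrix N = T - 5I is the
# nilpotent shift. (T - 5I)^2 != 0 but (T - 5I)^3 = 0, so
# ker (T - 5I)^3 is all of C^3, as Lemma 3.1 predicts.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

N = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]

N2 = matmul(N, N)
N3 = matmul(N2, N)

assert any(x != 0 for row in N2 for x in row)   # (T - 5I)^2 is non-zero
assert all(x == 0 for row in N3 for x in row)   # (T - 5I)^3 = 0
```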

