Chapter 3 Cartesian Tensors - University of Cambridge

Transcription of Chapter 3 Cartesian Tensors - University of Cambridge

Chapter 3: Cartesian Tensors

Suffix Notation and the Summation Convention

We will consider vectors in 3D, though the notation we shall introduce applies (mostly) just as well to $n$ dimensions. For a general vector $\mathbf{x} = (x_1, x_2, x_3)$ we shall refer to $x_i$, the $i$th component of $\mathbf{x}$. The index $i$ may take any of the values 1, 2 or 3, and we refer to "the vector $x_i$" to mean "the vector whose components are $(x_1, x_2, x_3)$". However, we cannot write $\mathbf{x} = x_i$, since the LHS is a vector and the RHS a scalar. Instead, we can write $[\mathbf{x}]_i = x_i$, and similarly $[\mathbf{x} + \mathbf{y}]_i = x_i + y_i$.

Note that the expression $y_i = x_i$ implies that $\mathbf{y} = \mathbf{x}$; the statement in suffix notation is implicitly true for all three possible values of $i$ (one at a time!).

Einstein introduced a convention whereby if a particular suffix (e.g., $i$) appears twice in a single term of an expression then it is implicitly summed. For example, in traditional notation
$$\mathbf{x} \cdot \mathbf{y} = x_1 y_1 + x_2 y_2 + x_3 y_3 = \sum_{i=1}^{3} x_i y_i;$$
using summation convention we simply write
$$\mathbf{x} \cdot \mathbf{y} = x_i y_i.$$
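As a quick numerical check of the convention (a sketch, with arbitrary sample vectors not taken from the notes), NumPy's `einsum` uses subscript strings that mirror suffix notation directly: a repeated suffix is summed.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# Long-hand: x . y = x1*y1 + x2*y2 + x3*y3
longhand = sum(x[i] * y[i] for i in range(3))

# Summation convention: the repeated suffix i is implicitly summed.
dot = np.einsum('i,i->', x, y)

assert np.isclose(longhand, dot) and np.isclose(dot, 32.0)
```

The subscript string `'i,i->'` says: suffix $i$ appears twice, so sum over it, leaving a scalar.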

All we are doing is not bothering to write down the $\sum$!

The Rules of Summation Convention

Summation convention does not allow any one suffix to appear more than twice within a single term; so $x_i y_i z_i$ is meaningless. We have to take care to avoid this: for example, consider the vector relation $\mathbf{y} = (\mathbf{a} \cdot \mathbf{b})\,\mathbf{x}$. We have $\mathbf{a} \cdot \mathbf{b} = a_i b_i$, but we cannot write $y_i = a_i b_i x_i$ as this would be ambiguous. How can we correct this? Note that $\mathbf{a} \cdot \mathbf{b} = a_i b_i = a_j b_j$: the suffix we use for the summation is immaterial. (Compare with the use of dummy variables in integrations: $\int_0^\infty e^{-x}\,dx = \int_0^\infty e^{-y}\,dy$.) So we can write $y_i = a_j b_j x_i$.

In any given term, then, there are two possible types of suffix: one that appears precisely once, e.g., $i$ in $a_j b_j x_i$, which is known as a free suffix; and one that appears precisely twice, e.g., $j$ in $a_j b_j x_i$, which is known as a dummy suffix. It is an important precept of summation convention that the free suffixes must match precisely in every term (though dummy suffixes can be anything you like so long as they do not clash with the free suffixes). (R. E. Hunt, 2002)
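The free/dummy distinction can be illustrated with `einsum` (a sketch; the vectors here are arbitrary examples): in the subscript string for $y_i = a_j b_j x_i$, the dummy suffix $j$ is summed away while the free suffix $i$ survives into the output.

```python
import numpy as np

a = np.array([1.0, 0.0, 2.0])
b = np.array([3.0, 1.0, 1.0])
x = np.array([1.0, 2.0, 3.0])

# y = (a . b) x in suffix notation: y_i = a_j b_j x_i
# j appears twice (dummy, summed); i appears once (free, kept in the output).
y = np.einsum('j,j,i->i', a, b, x)

assert np.allclose(y, (a @ b) * x)
```

Writing `'i,i,i->i'` instead would correspond to the forbidden $a_i b_i x_i$; `einsum` happens to accept it, but it computes an elementwise product, not $(\mathbf{a} \cdot \mathbf{b})\,\mathbf{x}$, which is exactly the ambiguity the rule guards against.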

So in the equation
$$a_j b_j z_k = x_k + a_i a_i b_j b_j y_k$$
every term has a free suffix $k$, and all other suffixes are dummy ones. In vector notation, this equation reads
$$(\mathbf{a} \cdot \mathbf{b})\,\mathbf{z} = \mathbf{x} + |\mathbf{a}|^2 |\mathbf{b}|^2 \mathbf{y}.$$
(Note that the order of variables in the final term of this equation in suffix notation is unimportant: we could equally well have written $b_j y_k a_i b_j a_i$.)

There need not be any free suffixes at all, as in the equation $a_i z_i = (x_i + y_i) a_i$ (which reads $\mathbf{a} \cdot \mathbf{z} = (\mathbf{x} + \mathbf{y}) \cdot \mathbf{a}$ in vector notation).

Suffix notation can also be used with matrices. For a matrix $A$, we write $a_{ij}$ to denote the entry in the $i$th row and $j$th column of $A$ (for each $i = 1, 2, 3$ and $j = 1, 2, 3$). We write either $A = (a_{ij})$ or $[A]_{ij} = a_{ij}$ to indicate this; these equations are equivalent. (Sometimes the upper-case letter is used instead, in which case the matrix $A$ would have entries $A_{ij}$.)

Examples of Summation Convention

(i) $2\mathbf{x} + \mathbf{y} = \mathbf{z}$ becomes $2x_i + y_i = z_i$. Note that the RHS of this suffix notation equation does not mean $z_1 + z_2 + z_3$: no repeated suffix, no sum!

(ii) $(\mathbf{a} \cdot \mathbf{b})(\mathbf{x} \cdot \mathbf{y}) = 0$ becomes $a_i b_i x_j y_j = 0$.

(iii) In summation convention, $\mathbf{y} = A\mathbf{x}$ is written
$$y_i = [A\mathbf{x}]_i = a_{ij} x_j$$
(check that this is correct by writing it out long-hand for each possible value of the free suffix $i$).

(iv) The matrix multiplication $C = AB$ (where $A$ and $B$ are $3 \times 3$ matrices) is written
$$c_{ij} = [AB]_{ij} = a_{ik} b_{kj}.$$

(v) The trace of a matrix $C$ may be written as $\operatorname{Tr} C = c_{ii}$, i.e., $c_{11} + c_{22} + c_{33}$. Hence $\operatorname{Tr}(AB) = a_{ik} b_{ki}$.

Replacing two free suffixes (e.g., $i$, $j$ in $c_{ij}$) by a single dummy suffix ($c_{ii}$) is known as contraction.

Not all expressions written in suffix notation can be recast in vector or matrix notation. For example, $a_{ijk} = x_i y_j z_k$ is a valid equation in suffix notation (each term has three free suffixes, $i$, $j$ and $k$), but there is no vector notation for it.

The Kronecker Delta and the Alternating Tensor

The Kronecker delta is defined by
$$\delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}$$
and the alternating tensor is defined by
$$\epsilon_{ijk} = \begin{cases} 1 & \text{if } (i, j, k) \text{ is a cyclic permutation of } (1, 2, 3) \\ -1 & \text{if } (i, j, k) \text{ is an anti-cyclic permutation of } (1, 2, 3) \\ 0 & \text{if any of } i, j, k \text{ are equal} \end{cases}$$
(i.e., $\epsilon_{123} = \epsilon_{231} = \epsilon_{312} = 1$; $\epsilon_{213} = \epsilon_{132} = \epsilon_{321} = -1$; all others are zero).
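Examples (iv) and (v) translate directly into `einsum` subscripts (a sketch with arbitrary sample matrices): the dummy suffix $k$ in $c_{ij} = a_{ik} b_{kj}$ is summed, and contracting $i$ with $j$ gives the trace.

```python
import numpy as np

A = np.arange(9.0).reshape(3, 3)
B = np.eye(3) + 1.0

# (iv) c_ij = a_ik b_kj  (dummy suffix k summed)
C = np.einsum('ik,kj->ij', A, B)
assert np.allclose(C, A @ B)

# (v) Tr(AB) = a_ik b_ki  (contraction of both free suffixes)
tr = np.einsum('ik,ki->', A, B)
assert np.isclose(tr, np.trace(A @ B))
```

Note how contraction appears in the subscripts: `'ik,kj->ij'` keeps two free suffixes, while `'ik,ki->'` has every suffix repeated, so the result is a scalar.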

Note that $\delta_{ij} = \delta_{ji}$ and that $\epsilon_{ijk} = \epsilon_{jki} = \epsilon_{kij} = -\epsilon_{jik}$, etc. If $I$ is the identity matrix
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
then $[I]_{ij} = \delta_{ij}$. We see that
$$x_i = \delta_{ij} x_j$$
because (i) this is equivalent to $\mathbf{x} = I\mathbf{x}$; or (ii) we can check for each value of $i$ (e.g., when $i = 1$, RHS $= \delta_{1j} x_j = \delta_{11} x_1 + \delta_{12} x_2 + \delta_{13} x_3 = x_1 =$ LHS). The Kronecker delta just "selects" entries: e.g., $\delta_{ik} a_{jk}$ is equal to $a_{ji}$. What is $\delta_{ii}$? It is $\delta_{11} + \delta_{22} + \delta_{33} = 3$, not 1.

The alternating tensor can be used to write down the vector equation $\mathbf{z} = \mathbf{x} \times \mathbf{y}$ in suffix notation:
$$z_i = [\mathbf{x} \times \mathbf{y}]_i = \epsilon_{ijk} x_j y_k.$$
(Check this: e.g., $z_1 = \epsilon_{123} x_2 y_3 + \epsilon_{132} x_3 y_2 = x_2 y_3 - x_3 y_2$, as required.) There is one very important property of $\epsilon_{ijk}$:
$$\epsilon_{ijk} \epsilon_{klm} = \delta_{il} \delta_{jm} - \delta_{im} \delta_{jl}.$$
This makes many vector identities easy to prove.

(The property may be proved by first proving the generalisation
$$\epsilon_{ijk} \epsilon_{lmn} = \det \begin{pmatrix} \delta_{il} & \delta_{im} & \delta_{in} \\ \delta_{jl} & \delta_{jm} & \delta_{jn} \\ \delta_{kl} & \delta_{km} & \delta_{kn} \end{pmatrix}.$$
Both sides clearly vanish if any of $i$, $j$, $k$ are equal, or if any of $l$, $m$, $n$ are. Now take $i = l = 1$, $j = m = 2$, $k = n = 3$: both sides are clearly 1. Finally consider the effect of swapping, say, $i$ and $j$. Once we have proved this generalisation, contract $k$ and $n$ and simplify, noting that for example $\delta_{jk} \delta_{km} = \delta_{jm}$.)
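Both the cross-product formula and the $\epsilon$-$\delta$ identity can be verified exhaustively by brute force, since each suffix only takes three values (a sketch; the 0-based indices stand in for suffix values 1 to 3):

```python
import numpy as np

# Build the alternating tensor eps_ijk and the Kronecker delta.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # cyclic permutations of (1,2,3)
    eps[i, k, j] = -1.0  # anti-cyclic permutations
delta = np.eye(3)

# Cross product: z_i = eps_ijk x_j y_k
x, y = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
z = np.einsum('ijk,j,k->i', eps, x, y)
assert np.allclose(z, np.cross(x, y))

# Key identity: eps_ijk eps_klm = delta_il delta_jm - delta_im delta_jl
lhs = np.einsum('ijk,klm->ijlm', eps, eps)
rhs = (np.einsum('il,jm->ijlm', delta, delta)
       - np.einsum('im,jl->ijlm', delta, delta))
assert np.allclose(lhs, rhs)
```

The identity check compares all $3^4 = 81$ components at once, which is exactly the "true for each choice of free suffixes" reading of a suffix-notation equation.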

Example: prove that $\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = (\mathbf{a} \cdot \mathbf{c})\,\mathbf{b} - (\mathbf{a} \cdot \mathbf{b})\,\mathbf{c}$.
$$[\mathbf{a} \times (\mathbf{b} \times \mathbf{c})]_i = \epsilon_{ijk} a_j [\mathbf{b} \times \mathbf{c}]_k = \epsilon_{ijk} a_j \epsilon_{klm} b_l c_m = (\delta_{il} \delta_{jm} - \delta_{im} \delta_{jl}) a_j b_l c_m = a_j b_i c_j - a_j b_j c_i = (\mathbf{a} \cdot \mathbf{c}) b_i - (\mathbf{a} \cdot \mathbf{b}) c_i = [(\mathbf{a} \cdot \mathbf{c})\,\mathbf{b} - (\mathbf{a} \cdot \mathbf{b})\,\mathbf{c}]_i,$$
as required.

$\epsilon_{ijk}$ can also be used to calculate determinants. The determinant of a $3 \times 3$ matrix $A = (a_{ij})$ is given by $\det A = \epsilon_{ijk} a_{1i} a_{2j} a_{3k}$ (check this by just expanding the product and sum in full). This can be written in several other ways; for example,
$$\det A = \epsilon_{ijk} a_{1i} a_{2j} a_{3k} = \epsilon_{jik} a_{1j} a_{2i} a_{3k} \quad [\text{swapping } i \text{ and } j] = -\epsilon_{ijk} a_{2i} a_{1j} a_{3k}.$$
This proves that swapping two rows of a matrix changes the sign of the determinant.

What is a Vector?

A vector is more than just 3 real numbers. It is also a physical entity: if we know its 3 components with respect to one set of Cartesian axes then we know its components with respect to any other set of Cartesian axes. (The vector stays the same even if its components do not.)

For example, suppose that $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ is a right-handed orthogonal set of unit vectors, and that a vector $\mathbf{v}$ has components $v_i$ relative to axes along those vectors.
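Both results above, the triple-product identity and the $\epsilon$ formula for the determinant, can be spot-checked numerically (a sketch; the vectors and the matrix are arbitrary test values):

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

# a x (b x c) = (a . c) b - (a . b) c
a, b, c = np.array([1., 2., 0.]), np.array([0., 1., 3.]), np.array([2., 1., 1.])
lhs = np.cross(a, np.cross(b, c))
rhs = (a @ c) * b - (a @ b) * c
assert np.allclose(lhs, rhs)

# det A = eps_ijk a_1i a_2j a_3k  (rows of A contracted against eps)
A = np.array([[1., 2., 3.], [0., 1., 4.], [5., 6., 0.]])
det = np.einsum('ijk,i,j,k->', eps, A[0], A[1], A[2])
assert np.isclose(det, np.linalg.det(A))
```

Swapping `A[0]` and `A[1]` in the `einsum` call flips the sign of `det`, matching the row-swap argument in the text.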

That is to say, $\mathbf{v} = v_1 \mathbf{e}_1 + v_2 \mathbf{e}_2 + v_3 \mathbf{e}_3 = v_j \mathbf{e}_j$. What are the components of $\mathbf{v}$ with respect to axes which have been rotated to align with a different set of unit vectors $\{\mathbf{e}'_1, \mathbf{e}'_2, \mathbf{e}'_3\}$? Let
$$\mathbf{v} = v'_1 \mathbf{e}'_1 + v'_2 \mathbf{e}'_2 + v'_3 \mathbf{e}'_3 = v'_j \mathbf{e}'_j.$$
Now $\mathbf{e}'_i \cdot \mathbf{e}'_j = \delta_{ij}$, so
$$\mathbf{v} \cdot \mathbf{e}'_i = v'_j \mathbf{e}'_j \cdot \mathbf{e}'_i = v'_j \delta_{ij} = v'_i,$$
but also
$$\mathbf{v} \cdot \mathbf{e}'_i = v_j \mathbf{e}_j \cdot \mathbf{e}'_i = v_j l_{ij},$$
where we define the matrix $L = (l_{ij})$ by $l_{ij} = \mathbf{e}'_i \cdot \mathbf{e}_j$. Then
$$v'_i = l_{ij} v_j$$
(or, in matrix notation, $\mathbf{v}' = L\mathbf{v}$, where $\mathbf{v}'$ is the column vector with components $v'_i$). $L$ is called the rotation matrix.

This looks like, but is not quite the same as, rotating the vector $\mathbf{v}$ round to a different vector $\mathbf{v}'$ using a transformation matrix $L$. In the present case, $\mathbf{v}$ and $\mathbf{v}'$ are the same vector, just measured with respect to different axes. The transformation matrix corresponding to the rotation $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\} \mapsto \{\mathbf{e}'_1, \mathbf{e}'_2, \mathbf{e}'_3\}$ is not $L$ (in fact it is $L^{-1}$).

Now consider the reverse of this argument. Exactly the same discussion would lead to
$$v_i = \tilde{l}_{ij} v'_j \quad \text{where} \quad \tilde{l}_{ij} = \mathbf{e}_i \cdot \mathbf{e}'_j$$
(we swap primed and unprimed quantities throughout the argument). We note that $\tilde{l}_{ij} = l_{ji}$ from their definitions; hence $\tilde{L} = L^T$, and so $\mathbf{v} = \tilde{L}\mathbf{v}' = L^T \mathbf{v}'$.
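The construction of $L$ from the two frames can be sketched numerically (the specific rotation, by an arbitrary angle about $\mathbf{e}_3$, is my choice for illustration and not from the notes):

```python
import numpy as np

# First frame: the standard basis. Second frame: rotated about e_3.
t = 0.3
e = np.eye(3)                                  # rows are e_1, e_2, e_3
ep = np.array([[np.cos(t),  np.sin(t), 0.0],   # rows are e'_1, e'_2, e'_3,
               [-np.sin(t), np.cos(t), 0.0],   # expressed in the first frame
               [0.0,        0.0,       1.0]])

# l_ij = e'_i . e_j
L = np.einsum('ik,jk->ij', ep, e)

# The same vector v, measured in each frame: v'_i = l_ij v_j
v = np.array([1.0, 2.0, 3.0])
vp = L @ v

# Reverse direction: v = L^T v'
assert np.allclose(v, L.T @ vp)
```

Note that `vp` is not a rotated vector; it is the list of components of the unchanged $\mathbf{v}$ along the new axes.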

We can deduce that $\mathbf{v} = L^T L \mathbf{v}$, and furthermore, this is true for all vectors $\mathbf{v}$. We conclude that $L^T L = I$, i.e., $L^T = L^{-1}$. (Hence $L L^T = I$ also.) $L$ is therefore an orthogonal matrix. In suffix notation, the equation $L^T L = I$ reads
$$l_{ki} l_{kj} = \delta_{ij},$$
and $L L^T = I$ reads
$$l_{ik} l_{jk} = \delta_{ij};$$
both of these identities will be useful.

Another way of seeing that $L L^T = I$ (or, equivalently, $L^T L = I$) is to consider the components of $L$. Since $\mathbf{e}'_i \cdot \mathbf{e}_j$ is just the $j$th component of $\mathbf{e}'_i$ measured with respect to the first frame, we see that the $i$th row of $L$ just consists of the components of $\mathbf{e}'_i$ measured with respect to the first frame:
$$L = \begin{pmatrix} \mathbf{e}'_1 \cdot \mathbf{e}_1 & \mathbf{e}'_1 \cdot \mathbf{e}_2 & \mathbf{e}'_1 \cdot \mathbf{e}_3 \\ \mathbf{e}'_2 \cdot \mathbf{e}_1 & \mathbf{e}'_2 \cdot \mathbf{e}_2 & \mathbf{e}'_2 \cdot \mathbf{e}_3 \\ \mathbf{e}'_3 \cdot \mathbf{e}_1 & \mathbf{e}'_3 \cdot \mathbf{e}_2 & \mathbf{e}'_3 \cdot \mathbf{e}_3 \end{pmatrix} = \begin{pmatrix} \mathbf{e}'^{\,T}_1 \\ \mathbf{e}'^{\,T}_2 \\ \mathbf{e}'^{\,T}_3 \end{pmatrix} \quad [\text{measured with respect to frame 1}].$$
Alternatively, the $i$th column consists of the components of $\mathbf{e}_i$ with respect to the second frame.

To calculate the top left component of $L L^T$, we find the dot product of the first row of $L$ with the first column of $L^T$. Both are simply $\mathbf{e}'_1$ measured with respect to the first frame, so we obtain $\mathbf{e}'_1 \cdot \mathbf{e}'_1$, which is 1.
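The two suffix-notation forms of orthogonality can be checked directly with `einsum` (a sketch, using an arbitrary rotation about the third axis):

```python
import numpy as np

t = 0.7
L = np.array([[np.cos(t),  np.sin(t), 0.0],
              [-np.sin(t), np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

# L^T L = I reads l_ki l_kj = delta_ij (sum over the row suffix k)
assert np.allclose(np.einsum('ki,kj->ij', L, L), np.eye(3))

# L L^T = I reads l_ik l_jk = delta_ij (sum over the column suffix k)
assert np.allclose(np.einsum('ik,jk->ij', L, L), np.eye(3))
```

The two subscript strings differ only in which suffix of $L$ is the dummy one, mirroring the difference between $L^T L$ and $L L^T$.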

Similarly, the top right component of $L L^T$ is $\mathbf{e}'_1 \cdot \mathbf{e}'_3$, which is zero. So, considering all possible combinations of rows and columns, we see that $L L^T = I$ as required.

Tensors

Tensors are a generalisation of vectors. We think informally of a tensor as something which, like a vector, can be measured component-wise in any Cartesian frame; and which also has a physical significance independent of the frame, like a vector.

Physical Motivation

Recall the conductivity law, $\mathbf{J} = \sigma \mathbf{E}$, where $\mathbf{E}$ is the applied electric field and $\mathbf{J}$ is the resulting electric current. This is suitable for simple isotropic media, where the conductivity is the same in all directions. But a matrix formulation may be more suitable in anisotropic media; for example,
$$\mathbf{J} = \begin{pmatrix} 5 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 0 \end{pmatrix} \mathbf{E}$$
might represent a medium in which the conductivity is high in the $x$-direction but in which no current at all can flow in the $z$-direction. (For instance, a crystalline lattice structure where vertical layers are electrically insulated.)

More generally, in suffix notation we have
$$J_i = \sigma_{ij} E_j$$
where $\sigma$ is the conductivity tensor.

What happens if we measure $\mathbf{J}$ and $\mathbf{E}$ with respect to a different set of axes? We would expect the matrix $\sigma$ to change too: let its new components be $\sigma'_{ij}$. Then
$$J'_i = \sigma'_{ij} E'_j.$$
But $\mathbf{J}$ and $\mathbf{E}$ are vectors, so
$$J'_i = l_{ij} J_j \quad \text{and} \quad E_i = l_{ji} E'_j$$
from the results above regarding the transformation of vectors. Hence
$$\sigma'_{ij} E'_j = J'_i = l_{ip} J_p = l_{ip} \sigma_{pq} E_q = l_{ip} \sigma_{pq} l_{jq} E'_j,$$
so
$$(\sigma'_{ij} - l_{ip} l_{jq} \sigma_{pq}) E'_j = 0.$$
This is true for all vectors $\mathbf{E}'$, and hence the bracket must be identically zero; hence $\sigma'_{ij} = l_{ip} l_{jq} \sigma_{pq}$. This tells us how $\sigma$ transforms.

(Compare this argument with the corresponding argument for the case $A\mathbf{x} = \mathbf{0}$ where $A$ is a matrix; if it is true for all $\mathbf{x}$ then $A$ must be zero, though this is not the case if it is only true for some $\mathbf{x}$'s.)

$\sigma$ is a second rank tensor, because it has two suffixes ($\sigma_{ij}$).

Definition: In general, a tensor of rank $n$ is a mathematical object with $n$ suffixes, $T_{ijk\ldots}$, which obeys the transformation law
$$T'_{ijk\ldots} = l_{ip} l_{jq} l_{kr} \cdots T_{pqr\ldots}$$
where $L$ is the rotation matrix between frames.

Note: for second rank tensors such as $\sigma$, the transformation law $T'_{ij} = l_{ip} l_{jq} T_{pq}$ can be rewritten in matrix notation as $T' = L T L^T$; check this yourself!
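The second-rank transformation law, and the fact that $\mathbf{J} = \sigma \mathbf{E}$ then holds in every frame, can be sketched numerically (the rotation angle and field are arbitrary choices; the conductivity matrix is the anisotropic example above):

```python
import numpy as np

t = 0.5
L = np.array([[np.cos(t),  np.sin(t), 0.0],
              [-np.sin(t), np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

sigma = np.diag([5.0, 4.0, 0.0])   # the anisotropic conductivity above
E = np.array([1.0, 2.0, 3.0])
J = sigma @ E

# Vectors transform as J'_i = l_ij J_j; the tensor as sigma'_ij = l_ip l_jq sigma_pq
Jp, Ep = L @ J, L @ E
sigma_p = np.einsum('ip,jq,pq->ij', L, L, sigma)

# The suffix-notation law agrees with the matrix form sigma' = L sigma L^T
assert np.allclose(sigma_p, L @ sigma @ L.T)

# The physical law holds in the rotated frame too: J' = sigma' E'
assert np.allclose(Jp, sigma_p @ Ep)
```

The second assertion is the frame-independence that makes $\sigma$ a tensor: its components change, but the relation between $\mathbf{J}$ and $\mathbf{E}$ does not.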

