Principles of Symmetry, Dynamics, and Spectroscopy — W. G. Harter — Wiley (1993)

CHAPTER 1. A REVIEW OF MATRIX ALGEBRA AND QUANTUM MECHANICS

1.1 Matrices Used in Quantum Mechanics
    A. The Transformation Matrix
    B. Row and Column Matrices: State Vectors
    C. Hamiltonian Matrices and Operators
1.2 Matrix Diagonalization
    A. Obtaining Eigenvalues: The Secular Equation
    B. Obtaining Eigenvectors: The Hamilton-Cayley Equation
1.3 Some Other Matrices Used in Quantum Mechanics
    A. Scattering Matrix
    B. Density Matrix
1.4 Some Matrices Used in Classical Mechanics
    A. Force, Mass, and Acceleration Matrices
    B. Potential Energy and Hamiltonian Functions
1.5 Wave Functional Analysis and Continuous Vector Spaces
    A. Functional Scalar Products
    B. Orthonormality and Completeness
    C. Differential Operators and Conjugates
    D. Resolvents
Appendix A. Elementary Vector Notations and Theory
Appendix B. Linear Equations, Matrices, Determinants, and Inverses
Appendix C. Proof that Hermitian and Unitary Matrices Are Diagonalizable

CHAPTER 1

A REVIEW OF MATRIX ALGEBRA AND QUANTUM MECHANICS

The study of symmetry and its application to spectroscopy involves the mathematics of operators, vectors, and matrices, and this chapter contains a review of matrix theory and some of its applications. The review includes a few concepts and procedures which may not be well known to physicists, but which are needed in the development of this book. Also, we use the review to establish much of our symbolism, conventions, and definitions. Finally, there are some previews of the contents of the rest of the book.

1.1 MATRICES USED IN QUANTUM MECHANICS

The matrix description of quantum mechanics was first developed by Heisenberg and has since become widely used. This description differs from some others mostly by the organization of its bookkeeping, wherein numbers of physical interest are stored in arrays called MATRICES. Such an array is shown in Eq. (1.1.1):

$$\mathcal{M} = \begin{pmatrix} \mathcal{M}_{11} & \mathcal{M}_{12} & \mathcal{M}_{13} & \cdots \\ \mathcal{M}_{21} & \mathcal{M}_{22} & \mathcal{M}_{23} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} \tag{1.1.1}$$

We will use script notation 𝓜, 𝓝, … for a whole matrix, and let the same letter with subscripts 𝓜ᵢⱼ designate the (complex) number which is the COMPONENT of the matrix, located where the ith row meets the jth column.

Many, but not all, of the matrices which we shall use will be finite-square or (n × n) matrices, that is, matrices with n rows and n columns, where n = 1, 2, 3, … is finite. [A (1 × 1) matrix is just a number.] This is because many physical situations can be described by some relation between a set of n quantum states and some other set of n different states. Consider now some examples of such matrices.

A. The Transformation Matrix

Quite often in quantum mechanics we can describe a particle or a system of particles by stating whether they are in one or another of some set of quantum states 1, 2, 3, …, n. The most famous example of this might be the states (1): spin "up" and (2): spin "down" for an electron.

Now someone else can come along and describe the same system with a different set of states 1', 2', 3', …, and n', which are just as complete in their description as the first set. For example, 1': spin "north" and 2': spin "south" are acceptable states for the electron, too, as we will see shortly. However, you must know the relation between any two equivalent descriptions.
This relation is given entirely by a TRANSFORMATION MATRIX 𝒯(b' ← b), the meaning of which we shall review:

$$\mathcal{T}(b' \leftarrow b) = \begin{pmatrix} \langle 1'|1\rangle & \langle 1'|2\rangle & \cdots & \langle 1'|n\rangle \\ \langle 2'|1\rangle & \langle 2'|2\rangle & \cdots & \langle 2'|n\rangle \\ \vdots & \vdots & & \vdots \\ \langle n'|1\rangle & \langle n'|2\rangle & \cdots & \langle n'|n\rangle \end{pmatrix} \tag{1.1.2}$$

For the time being we shall exhibit the 𝒯 matrix for going between the "up-down" (1-2) and "north-south" (1'-2') directions shown in Figure 1.1.1 for electronic spin. (The figure shows up-down tilted to avoid "geo-chauvinism," or undue favoring of a particular inhabited latitude.) The 𝒯 matrix that follows and generalizations of it are derived in Chapter 5, and in fact most of symmetry theory is devoted to finding some sort of 𝒯 matrix:

$$\mathcal{T}(b' \leftarrow b) = \begin{pmatrix} \langle 1'|1\rangle & \langle 1'|2\rangle \\ \langle 2'|1\rangle & \langle 2'|2\rangle \end{pmatrix} = \begin{pmatrix} \cos\dfrac{\theta}{2} & -i\sin\dfrac{\theta}{2} \\[2mm] -i\sin\dfrac{\theta}{2} & \cos\dfrac{\theta}{2} \end{pmatrix} \tag{1.1.3}$$

[Figure 1.1.1 Alternative axes for spin-state definition. The up-down axis is rotated by angle θ from the north-south axis.]

It is important to understand the meaning of a 𝒯 matrix. It is conventional to say that component ⟨i'|j⟩ gives the "PROBABILITY AMPLITUDE for a system in state j to be found in state i'…". Now one has a perfect right to ask what that means, but many students find it difficult to get a satisfying answer. However, it is not so difficult to learn that ⟨i'|j⟩ squared, i.e., |⟨i'|j⟩|² = |𝒯ᵢ'ⱼ|², is the fraction of, or probability for, systems in state j to end up in state i' when forced to make the choice. The difficulty is that it is apparently impossible to predict the outcome of an individual event. To know this with certainty we must really wait for the individual particle or system to make its choice. The prediction |⟨i'|j⟩|² is only the probability for the individual outcome [(i') from (j)], or the average percentage for choices in a large number of individual experiments.

A device that forces a particle to make a choice is called an analyzer. An example of a spin-state analyzer is the magnetic Stern-Gerlach device which Feynman has described in his Lectures on Physics (Vol. III, p. 5-1). The analyzer accepts a beam of spinning particles and displaces each one up or down according to the projection of the particle's spin on the vertical axis of the analyzer. For electrons only two projections are possible. One possible projection is along the axis, which is up [state (1)] for an up-down analyzer, or north [state (1')] for a north-south pointing analyzer, and so on. The other possibility is opposite to the axis, i.e., down [state (2)], or south [state (2')], etc. Given an incoming beam of particles in state (j), one finds on the average that a fraction |⟨i'|j⟩|² = |𝒯ᵢ'ⱼ|² of them will come out of the (i') exit of the (1'-2') or north-south analyzer. For example, the fraction of spin-up (1) electrons that would actually choose to point south (2') in a north-south analyzer (see Figure 1.1.2) is |⟨2'|1⟩|² = sin²(θ/2), given ⟨2'|1⟩ in Eq. (1.1.3). The remaining fraction 1 − sin²(θ/2) = cos²(θ/2) end up choosing north (1').

[Figure 1.1.2 Spin analyzer experiment. An incident beam of particles in state 1 (up) comes in on the right-hand side. The north-south analyzer forces each particle to choose between state 1' (north) and state 2' (south). On the average a fraction |⟨2'|1⟩|² = sin²(θ/2) choose the latter state 2' and emerge from the bottom of the analyzer. The remaining fraction |⟨1'|1⟩|² = cos²(θ/2) come out at the top in the state 1'.]
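These fractions are easy to experiment with numerically. The following is a minimal numpy sketch (our own illustration, not from the text; the helper name t_matrix and the sample angle are arbitrary choices) that builds Eq. (1.1.3) and checks the analyzer fractions quoted above:

```python
import numpy as np

def t_matrix(theta):
    """Spin-1/2 transformation matrix T(b' <- b) of Eq. (1.1.3)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s],
                     [-1j * s, c]])

theta = 1.0                    # arbitrary tilt angle between the two axes
T = t_matrix(theta)

# Fraction of spin-up (1) electrons that choose south (2'): |<2'|1>|^2
print(abs(T[1, 0])**2, np.sin(theta / 2)**2)   # equal: sin^2(theta/2)

# For each input state the exit probabilities sum to 1:
print((abs(T)**2).sum(axis=0))                 # [1. 1.]
```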
Understanding the amplitude ⟨i'|j⟩, apart from its square, is more difficult. Let us state some axioms which most people like to have the amplitudes obey, and discuss some thought experiments which may help us to make them more plausible. The first axiom has already been introduced.

Axiom 1. ⟨i'|j⟩*⟨i'|j⟩ = |⟨i'|j⟩|² is the probability for occurrence in state i' of a system originally in state j which is forced to choose between 1', 2', …, n'.

One should note that in classical wave and polarization theory the absolute square (|A|²) of an amplitude denotes intensity. In quantum theory it gives probability, which is analogous.

The second axiom deals with complex conjugation (*) and order reversal.

Axiom 2. ⟨i'|j⟩* = ⟨j|i'⟩.

In Axiom 2 one supposes that a reverse process obtained by playing time backwards is accomplished mathematically by complex conjugation. For example, a plane-wave amplitude ⟨x,t|k,ω⟩ = e^{i(kx−ωt)} reversed is ⟨x,t|k,ω⟩* = e^{i(−kx+ωt)}. This axiom is a direct carryover from classical wave theory.

The next axiom tells how states within a given set are related.

Axiom 3. The amplitudes connecting a set of states {1, 2, …, n} with itself are 1 or 0; that is, ⟨i|j⟩ = δᵢⱼ = 1 if i = j, and 0 if i ≠ j.

Axiom 3 is closely related to the following experimental result. If some electrons that originally chose the state of spin north (1') are once again given the chance in another analyzer to choose north or south (Figure 1.1.3), then 100% will choose north. This means |⟨1'|1'⟩|² = 1 and |⟨2'|1'⟩|² = 0. One then assumes that the phases are such that ⟨1'|1'⟩ = 1. The same applies to the (2') state.

[Figure 1.1.3 Analyzer experiment associated with Axiom 3. If the incident beam contains only particles in state 1' (north) or, separately, 2' (south), then they will be unchanged by a north-south analyzer.]

Axiom 3 means that the transformation matrix of states {1, 2, …, n} to themselves, or of any other set {1', 2', …, n'} to itself, must be the IDENTITY or UNIT MATRIX 1, with components 1ᵢⱼ = δᵢⱼ.

Axiom 4. An amplitude may be resolved into amplitudes for passage through any complete set of intermediate states {1, 2, …, n}: ⟨k''|i'⟩ = Σⱼ ⟨k''|j⟩⟨j|i'⟩.

[Figure 1.1.4 Analyzer experiment associated with Axiom 4: the beams separated by an analyzer are recombined by a following inverted analyzer.]

… The probability then splits into two kinds of terms,

$$|\langle k''|i'\rangle|^2 = \Big|\sum_j \langle k''|j\rangle\langle j|i'\rangle\Big|^2 = \Sigma(\text{positive definite}) + \Sigma(\text{phase sensitive}),$$

and if the intermediate states were observed, only the positive-definite part would survive after averaging over many particles, i.e., just the sum of squares or the sum of probabilities for each j:

$$\Sigma(\text{positive definite}) = \sum_j |\langle k''|j\rangle\langle j|i'\rangle|^2.$$

Meanwhile, the phase-sensitive "interference" terms

$$\Sigma(\text{phase sensitive}) = \sum_{j\neq l} \langle k''|j\rangle\langle j|i'\rangle\,\big(\langle k''|l\rangle\langle l|i'\rangle\big)^*$$

would average to zero.

Axiom 4 can also be visualized in terms of basic wave mechanics. It is essentially a restatement of the principle of Huygens (circa 1660). Imagine a light wave in state i' (Figure 1.1.5), going through a film to be (possibly) in some state k''. If each of the points j on the film absorbs the light, but then rebroadcasts it starting with the right amplitude and phase ⟨j|i'⟩ for that point, then Axiom 4 says state k'' will be achieved just as well as it would have been if the film had not been there at all. This principle will be discussed in Chapter 8.

[Figure 1.1.5 Relating Huygens' principle to Axiom 4.]

Axiom 4 is the same as mathematical relations called the COMPLETENESS CONDITIONS for states {1, 2, …, n}, and we shall soon discuss these from several points of view. Such relations are central to the development of symmetry analysis.
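The split into positive-definite and phase-sensitive parts can be made concrete with a short sketch (our own illustration; the two analyzer angles are arbitrary choices). It compares the coherent sum of path amplitudes required by Axiom 4 with the incoherent sum of path probabilities that survives when the intermediate state is observed:

```python
import numpy as np

def t_matrix(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

T1 = t_matrix(0.9)      # amplitudes <j|i'>   (first stage, arbitrary angle)
T2 = t_matrix(0.4)      # amplitudes <k''|j>  (second stage, arbitrary angle)

i, k = 0, 0
paths = T2[k, :] * T1[:, i]          # amplitude <k''|j><j|i'> for each path j

coherent = abs(paths.sum())**2       # Axiom 4: add amplitudes, then square
incoherent = (abs(paths)**2).sum()   # observed paths: add the probabilities
print(coherent - incoherent)         # the phase-sensitive interference part

# Axiom 4 is matrix multiplication: the coherent sum is |(T2 T1)[k, i]|^2
print(np.isclose(coherent, abs((T2 @ T1)[k, i])**2))   # True
```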
Furthermore, Axiom 4 is the real reason for ever studying matrix mathematics in the first place. Let us rewrite Axiom 4 using the script notation introduced in Eq. (1.1.2). Axiom 4 then corresponds to the standard definition of INNER MATRIX MULTIPLICATION (we discuss outer multiplication later):

$$\mathcal{T}_{k''i'}(b'' \leftarrow b') = \sum_j \mathcal{T}_{k''j}(b'' \leftarrow b)\,\mathcal{T}_{ji'}(b \leftarrow b'), \tag{1.1.6a}$$
$$\mathcal{T}(b'' \leftarrow b') = \mathcal{T}(b'' \leftarrow b)\,\mathcal{T}(b \leftarrow b'). \tag{1.1.6b}$$

Just writing two script letters next to each other, as in Eq. (1.1.6b), implies that the special sum in Eq. (1.1.6a) is to be performed.

Continuing the review of matrix mathematics, we define the TRANSPOSE CONJUGATE of a matrix 𝓜 by

$$\mathcal{M} = \begin{pmatrix} \mathcal{M}_{11} & \mathcal{M}_{12} & \cdots & \mathcal{M}_{1n} \\ \vdots & \vdots & & \vdots \\ \mathcal{M}_{m1} & \mathcal{M}_{m2} & \cdots & \mathcal{M}_{mn} \end{pmatrix} \;\Rightarrow\; \mathcal{M}^{\dagger} = \begin{pmatrix} \mathcal{M}^*_{11} & \cdots & \mathcal{M}^*_{m1} \\ \mathcal{M}^*_{12} & \cdots & \mathcal{M}^*_{m2} \\ \vdots & & \vdots \\ \mathcal{M}^*_{1n} & \cdots & \mathcal{M}^*_{mn} \end{pmatrix}. \tag{1.1.7}$$

Observe that the transformation matrix 𝒯 satisfies Eq. (1.1.8), which is written in several different forms, starting with Axiom 4:

$$\delta_{k'i'} = \langle k'|i'\rangle = \sum_j \langle k'|j\rangle\langle j|i'\rangle = \sum_j \langle j|k'\rangle^*\langle j|i'\rangle, \tag{1.1.8a}$$
$$\delta_{k'i'} = \sum_j \mathcal{T}^*_{jk'}(b'\leftarrow b)\,\mathcal{T}_{ji'}(b'\leftarrow b) = \sum_j \mathcal{T}^{\dagger}_{k'j}(b'\leftarrow b)\,\mathcal{T}_{ji'}(b'\leftarrow b), \tag{1.1.8b}$$
$$\mathbf{1} = \mathcal{T}^{\dagger}(b'\leftarrow b)\,\mathcal{T}(b'\leftarrow b). \tag{1.1.8c}$$

Any matrix satisfying an equation of the last form, Eq. (1.1.8c), is called a UNITARY MATRIX. In fact, when the product of any two finite matrices 𝒜 and ℬ is equal to 1, then each is said to be the INVERSE of the other, written 𝒜 = ℬ⁻¹ or ℬ = 𝒜⁻¹. (Appendix B reviews the calculation of inverses.) The unitary matrix has the very convenient property that its inverse is just its transpose conjugate (𝒯⁻¹ = 𝒯†).

For any real matrix ℛ (ℛ has no complex components) the transpose conjugate ℛ† simply equals the TRANSPOSE ℛᵀ defined by

$$(\mathcal{R}^{T})_{ij} = \mathcal{R}_{ji}. \tag{1.1.9}$$

Any real transformation matrix will satisfy Eq. (1.1.10). A matrix satisfying this equation is said to be an ORTHOGONAL matrix:

$$\mathcal{R}^{T}\mathcal{R} = \mathbf{1}, \tag{1.1.10a}$$
$$\sum_j \mathcal{R}_{ji}\,\mathcal{R}_{jk} = \delta_{ik} = \sum_j \mathcal{R}_{ij}\,\mathcal{R}_{kj}. \tag{1.1.10b}$$

B. Row (1 × n) and Column (n × 1) Matrices: State Vectors

One of Dirac's famous ideas was to associate the jth column of 𝒯(b' ← b) with a unit KET vector |j⟩ and the jth row of 𝒯†(b' ← b) with a unit BRA vector ⟨j|, as in Eq. (1.1.11), which follows. (A review of elementary vector properties is in Appendix A.) The arrow (↔) means "is associated with":

$$|j\rangle \leftrightarrow \mathbf{j} = \begin{pmatrix} \langle 1'|j\rangle \\ \langle 2'|j\rangle \\ \vdots \\ \langle n'|j\rangle \end{pmatrix}, \qquad \langle j| \leftrightarrow \mathbf{j}^{\dagger} = \big(\langle j|1'\rangle\;\; \langle j|2'\rangle\;\; \cdots\;\; \langle j|n'\rangle\big). \tag{1.1.11}$$

These columns and rows of numbers represent the state j in terms of the states 1', 2', …, and n'. In the first equation (1.1.11) the numbers j₁ = ⟨1'|j⟩, j₂ = ⟨2'|j⟩, … are the components of vector |j⟩ in the BASIS |1'⟩, |2'⟩, …, and |n'⟩, and constitute an n × 1 column matrix j. Similarly, the j*ₖ = ⟨j|k'⟩ are components of a 1 × n row matrix j† that represents ⟨j| in the basis ⟨1'|, ⟨2'|, …, and ⟨n'|.

We shall see as we go along why it is necessary to define two types of vectors ⟨ | and | ⟩ that do about the same thing. However, the main point to note is that each vector |j⟩ is a linear combination of any complete set of vectors {|1'⟩, |2'⟩, …, |n'⟩} [Eq. (1.1.12a)], or {|1''⟩, |2''⟩, …, |n''⟩}, or even of the set {|1⟩, |2⟩, …, |n⟩} to which it belongs, although the latter is trivially simple. These equations are Dirac's abstraction of Axiom 4 wherein the "bra" ⟨k| is removed from the bracket ⟨k|j⟩ to give KET vector |j⟩:

$$|j\rangle = \sum_{k'} |k'\rangle\langle k'|j\rangle, \tag{1.1.12a}$$
$$|j\rangle = \sum_{k''} |k''\rangle\langle k''|j\rangle, \tag{1.1.12b}$$
$$|j\rangle = \sum_{k} |k\rangle\langle k|j\rangle = \sum_k |k\rangle\,\delta_{kj} = |j\rangle. \tag{1.1.12c}$$

For example, in the vector equation that follows we see that the state vector of spin up (|1⟩) is, for small θ, a lot of spin north (|1'⟩) plus a little of spin south (|2'⟩):

$$|1\rangle = |1'\rangle\langle 1'|1\rangle + |2'\rangle\langle 2'|1\rangle = \cos\frac{\theta}{2}\,|1'\rangle - i\sin\frac{\theta}{2}\,|2'\rangle \;\leftrightarrow\; \begin{pmatrix}\cos\frac{\theta}{2}\\[1mm] -i\sin\frac{\theta}{2}\end{pmatrix}. \tag{1.1.13}$$

But, in its own basis it is just spin up:

$$|1\rangle = |1\rangle\langle 1|1\rangle + |2\rangle\langle 2|1\rangle = 1\cdot|1\rangle + 0\cdot|2\rangle \;\leftrightarrow\; \begin{pmatrix}1\\0\end{pmatrix}.$$

This "mixing" of states to make other states is the embodiment of the quantum SUPERPOSITION PRINCIPLE.
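Both the unitarity condition (1.1.8c) and the representation (1.1.13) are quick to verify numerically; in this sketch (our illustration, with an arbitrary angle) the columns of 𝒯 are read off as the representations of the kets:

```python
import numpy as np

def t_matrix(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

T = t_matrix(1.0)

# Unitarity, Eq. (1.1.8c): T^dagger T = 1
print(np.allclose(T.conj().T @ T, np.eye(2)))     # True

# Eq. (1.1.13): |1> in the primed basis is the first column of T(b' <- b)
print(T[:, 0])       # [cos(theta/2), -i sin(theta/2)]
```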
The coefficients ⟨1'|j⟩, ⟨2'|j⟩, … in the column matrix j, Eq. (1.1.11), are said to be a REPRESENTATION of |j⟩ in the |1'⟩, |2'⟩, … basis. Strictly speaking, it is not correct to replace the arrow j ↔ |j⟩ by an equality sign, although you will see it done in many works. When you do set j = |j⟩, there is no way to tell if the stack of numbers in j is supposed to be {⟨1'|j⟩, ⟨2'|j⟩, …}, or {⟨1''|j⟩, ⟨2''|j⟩, …}, or something else entirely. The notation |j⟩ is reserved for the unique ABSTRACT VECTOR and the physical state it denotes.

One defines the SCALAR PRODUCT of a bra ⟨j| with a ket |k''⟩ by Eq. (1.1.14a) so that it equals the bracket ⟨j|k''⟩. This is represented in any basis as an inner matrix product between a (1 × n) row matrix and an (n × 1) column matrix, as in Eq. (1.1.14b):

$$\langle j|k''\rangle = \sum_{l'} \langle j|l'\rangle\langle l'|k''\rangle, \tag{1.1.14a}$$
$$\mathbf{j}^{\dagger}\cdot\mathbf{k}'' = \big(\langle j|1'\rangle\;\;\langle j|2'\rangle\;\;\cdots\big)\begin{pmatrix}\langle 1'|k''\rangle\\ \langle 2'|k''\rangle\\ \vdots\\ \langle n'|k''\rangle\end{pmatrix}. \tag{1.1.14b}$$

(The dot is a common notation for the scalar or inner product. See Appendix A.)

The following definition for the transpose conjugate of vectors corresponds to the one for square matrices given previously:

$$\mathbf{j} = \begin{pmatrix}\langle 1'|j\rangle\\ \langle 2'|j\rangle\\ \vdots\end{pmatrix} \;\Rightarrow\; \mathbf{j}^{\dagger} = \big(\langle 1'|j\rangle^*\;\;\langle 2'|j\rangle^*\;\;\cdots\big) = \big(\langle j|1'\rangle\;\;\langle j|2'\rangle\;\;\cdots\big). \tag{1.1.15}$$

Either the bra vector ⟨Ψ| or the ket vector |Ψ⟩ defines a physical state Ψ. The scalar product requires one of each. ⟨Ψ|Φ⟩ gives the amplitude for a system in state Φ to choose state Ψ. We shall always obey the axioms by requiring that ⟨Φ|Ψ⟩ = ⟨Ψ|Φ⟩* and ⟨Ψ|Ψ⟩ = 1 = ⟨Φ|Φ⟩.

The connection between transformation matrix mechanics and wave mechanics is made when each state |i⟩ or |Ψ⟩ is associated with a wave function:

$$\langle x|i\rangle = \psi_i(x) \qquad\text{or}\qquad \langle x|\Psi\rangle = \Psi(x). \tag{1.1.16}$$

(See Section 1.5.)

C. Hamiltonian Matrices and Operators

Probably the best-known example of a matrix in quantum mechanics is the HAMILTONIAN matrix ℋⱼₖ. Among other things, this matrix defines the time behavior of a state |Ψ(t)⟩ through the time-dependent Schrödinger equation in Eq. (1.1.17). (Planck's angular constant is ħ = h/2π ≅ 1.05 × 10⁻³⁴ joule-second.)

$$i\hbar\frac{\partial}{\partial t}\langle j|\Psi(t)\rangle = \sum_k \mathcal{H}_{jk}\,\langle k|\Psi(t)\rangle, \tag{1.1.17a}$$
$$i\hbar\frac{\partial}{\partial t}\langle j|\Psi(t)\rangle = \sum_k \langle j|\mathbf{H}|k\rangle\langle k|\Psi(t)\rangle. \tag{1.1.17b}$$

The Dirac notation for matrix components ℋⱼₖ = ⟨j|H|k⟩ is used in the second equation. The boldface symbol denotes an ABSTRACT OPERATOR H. In fact, the abstract Schrödinger equation is obtained [Eq. (1.1.17c)] by removing all reference to basis |1⟩, |2⟩, …, |n⟩ in Eq. (1.1.17b):

$$i\hbar\frac{\partial}{\partial t}|\Psi(t)\rangle = \mathbf{H}\,|\Psi(t)\rangle. \tag{1.1.17c}$$

Finally, Eq. (1.1.17d) gives a matrix representation of the entire equation. It shows the inner product of the (n × n) matrix representing H with the matrix or vector which represents |Ψ(t)⟩, and the resulting new vector on the extreme right of the equation:

$$i\hbar\frac{\partial}{\partial t}\begin{pmatrix}\langle 1|\Psi(t)\rangle\\ \langle 2|\Psi(t)\rangle\\ \vdots\\ \langle n|\Psi(t)\rangle\end{pmatrix} = \begin{pmatrix}\mathcal{H}_{11} & \mathcal{H}_{12} & \cdots & \mathcal{H}_{1n}\\ \mathcal{H}_{21} & \mathcal{H}_{22} & \cdots & \mathcal{H}_{2n}\\ \vdots & & & \vdots\\ \mathcal{H}_{n1} & \mathcal{H}_{n2} & \cdots & \mathcal{H}_{nn}\end{pmatrix}\begin{pmatrix}\langle 1|\Psi(t)\rangle\\ \langle 2|\Psi(t)\rangle\\ \vdots\\ \langle n|\Psi(t)\rangle\end{pmatrix} = \begin{pmatrix}\sum_k \mathcal{H}_{1k}\langle k|\Psi(t)\rangle\\ \vdots\\ \sum_k \mathcal{H}_{nk}\langle k|\Psi(t)\rangle\end{pmatrix}. \tag{1.1.17d}$$

(a) Elementary Operators. Roughly speaking, operators change old vectors into new and different ones, and matrices represent this process from some "viewpoint" or basis. We can see this clearly when we apply the abstract Axiom 4 completeness relations to the abstract operator:

$$\mathbf{H} = \sum_i\sum_j |i\rangle\langle i|\mathbf{H}|j\rangle\langle j| = \sum_i\sum_j \mathcal{H}_{ij}\,|i\rangle\langle j|. \tag{1.1.18}$$

Equation (1.1.18) shows clearly how H, or any quantum operator for that matter, is a linear combination of ELEMENTARY OPERATORS eᵢⱼ = |i⟩⟨j|. (Sometimes these are called UNIT TENSOR OPERATORS.) Each elementary operator can perform the elementary operation of Eq. (1.1.19a), converting |j⟩ into |i⟩, or of Eq. (1.1.19b), converting bra ⟨i| into ⟨j|:

$$\mathbf{e}_{ij}|j\rangle = |i\rangle, \tag{1.1.19a}$$
$$\langle i|\mathbf{e}_{ij} = \langle j|. \tag{1.1.19b}$$

1.2 MATRIX DIAGONALIZATION

… The eigenvalue equations for an operator H and its matrix ℋ read

$$\mathbf{H}|e_j\rangle = e_j|e_j\rangle, \qquad \sum_m \langle l|\mathbf{H}|m\rangle\langle m|e_j\rangle = e_j\,\langle l|e_j\rangle, \tag{1.2.3b}$$

or, written out as arrays,

$$\begin{pmatrix}\mathcal{H}_{11} & \cdots & \mathcal{H}_{1n}\\ \vdots & & \vdots\\ \mathcal{H}_{n1} & \cdots & \mathcal{H}_{nn}\end{pmatrix}\begin{pmatrix}\langle 1|e_j\rangle\\ \vdots\\ \langle n|e_j\rangle\end{pmatrix} = e_j\begin{pmatrix}\langle 1|e_j\rangle\\ \vdots\\ \langle n|e_j\rangle\end{pmatrix}, \tag{1.2.3e}$$

together with the left-handed (bra) form

$$\big(\langle e_j|1\rangle\;\cdots\;\langle e_j|n\rangle\big)\begin{pmatrix}\mathcal{H}_{11} & \cdots & \mathcal{H}_{1n}\\ \vdots & & \vdots\\ \mathcal{H}_{n1} & \cdots & \mathcal{H}_{nn}\end{pmatrix} = e_j\,\big(\langle e_j|1\rangle\;\cdots\;\langle e_j|n\rangle\big). \tag{1.2.3f}$$

The equations contain several unknowns. For the right-handed eigenequations (H|eⱼ⟩ = eⱼ|eⱼ⟩) one needs to find the ket components ⟨i|eⱼ⟩, i.e., all n² components of the 𝒯 matrix, as well as n eigenvalues eⱼ. The solutions to the left-handed bra equation (⟨eⱼ|H = eⱼ⟨eⱼ|) are derived easily from the kets, as we will see.

If an eigenvector's components ⟨1|eⱼ⟩, ⟨2|eⱼ⟩, … were known, we could find the corresponding eigenvalue eⱼ immediately from Eq. (1.2.3b) or Eq. (1.2.3e). This, incidentally, is what symmetry theory can do, as explained in the next chapters. In the right situations it finds eigenvectors and transformations 𝒯(i ← e). However, for now let us suppose that they are still unknowns.

If the eigenvalues are known, there are several ways to get the eigenvectors or 𝒯(i ← e), and a description of these occupies most of the remainder of this review. But first let us review the equation that determines eigenvalues.
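The remark that a known eigenvector yields its eigenvalue immediately is worth a numerical aside. The sketch below (our own illustration) uses the 2 × 2 example matrix developed in the next subsection; the trial vector is one that symmetry arguments could have supplied:

```python
import numpy as np

H = np.array([[4.0, 1.0],
              [3.0, 2.0]])        # the text's example matrix (not Hermitian)

ket = np.array([1.0, 1.0])        # a known eigenvector's components <i|e_j>

# Eq. (1.2.3e): H|e_j> = e_j|e_j>, so e_j can be read off componentwise
print((H @ ket) / ket)            # [5. 5.]  ->  e_j = 5

# Without eigenvectors one must diagonalize instead:
print(np.linalg.eig(H)[0])        # [5. 1.]  (order may vary)
```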
A. Obtaining Eigenvalues: The Secular Equation

Equation (1.2.3) has been rewritten in Eq. (1.2.4a) by putting everything on the left side. The same has been done for the matrix form, Eq. (1.2.4b):

$$\sum_m \big(\langle l|\mathbf{H}|m\rangle - \delta_{lm}\,e_j\big)\langle m|e_j\rangle = 0, \tag{1.2.4a}$$
$$\begin{pmatrix}4 - e_j & 1\\ 3 & 2 - e_j\end{pmatrix}\begin{pmatrix}\langle 1|e_j\rangle\\ \langle 2|e_j\rangle\end{pmatrix} = \begin{pmatrix}0\\ 0\end{pmatrix}. \tag{1.2.4b}$$

Note that the example matrix

$$\mathcal{H} = \begin{pmatrix}4 & 1\\ 3 & 2\end{pmatrix}$$

could never be a Hamiltonian matrix since it is not Hermitian, but it serves as a good example for mathematical purposes.

Now Eqs. (1.2.4) can have a nonzero solution only if the determinant of the matrix vanishes. (See Appendix B.) It is the resulting equation, Eq. (1.2.5), that will determine the eigenvalues, and it is usually called the SECULAR EQUATION. Note that it is the same for bra and ket eigenequations:

$$\det\begin{pmatrix}\mathcal{H}_{11} - e & \mathcal{H}_{12} & \cdots & \mathcal{H}_{1n}\\ \mathcal{H}_{21} & \mathcal{H}_{22} - e & \cdots & \mathcal{H}_{2n}\\ \vdots & & & \vdots\\ \mathcal{H}_{n1} & \mathcal{H}_{n2} & \cdots & \mathcal{H}_{nn} - e\end{pmatrix} = e^n + a_1 e^{n-1} + \cdots + a_{n-1}e + a_n = 0; \tag{1.2.5}$$

for the example,

$$e^2 - 6e + 5 = 0.$$

It can be a tedious job to solve an nth-degree secular equation. It is easy enough for small matrices like our example, for which the roots are (e₁ = 1) and (e₂ = 5). But one should know that no general formula exists for the roots of quintic, sextic, or higher-degree equations. All nth-degree polynomial equations have exactly n roots somewhere in the complex plane, but no formula exists to give them all for n ≥ 5 (Abel's theorem). In general, finding the eigenvalues eⱼ can be the hardest part of the problem if the eigenvectors are not known.

B. Obtaining Eigenvectors: The Hamilton-Cayley Equation

One very helpful fact is that any matrix satisfies its own secular equation. This is the content of the following theorem. The resulting equation, Eq. (1.2.6), is called the HAMILTON-CAYLEY EQUATION (HCEq).

Hamilton-Cayley Theorem: If in the secular equation

$$S(e) = \det|\mathcal{H} - e\mathbf{1}| = e^n + a_1 e^{n-1} + \cdots + a_{n-1}e + a_n = 0$$

one replaces the terms aₘe^{n−m} by the matrices aₘℋ^{n−m}, the constant term aₙ by the matrix aₙ1, and the 0 by the zero matrix 0, then the resulting matrix equation, Eq. (1.2.6), is valid:

$$S(\mathcal{H}) = \mathcal{H}^n + a_1\mathcal{H}^{n-1} + \cdots + a_{n-1}\mathcal{H} + a_n\mathbf{1} = \mathbf{0}; \tag{1.2.6}$$

for the example,

$$\begin{pmatrix}4 & 1\\ 3 & 2\end{pmatrix}^2 - 6\begin{pmatrix}4 & 1\\ 3 & 2\end{pmatrix} + 5\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix} = \begin{pmatrix}19 & 6\\ 18 & 7\end{pmatrix} - \begin{pmatrix}24 & 6\\ 18 & 12\end{pmatrix} + \begin{pmatrix}5 & 0\\ 0 & 5\end{pmatrix} = \begin{pmatrix}0 & 0\\ 0 & 0\end{pmatrix}.$$

We can prove this theorem by using the fact that the product of a matrix 𝓜 with its ADJUNCT matrix 𝓜^adj equals det 𝓜 times a unit matrix. [As explained in Appendix B, 𝓜^adj_ij is (−1)^{i+j} multiplied by the determinant of 𝓜 taken with its jth row and ith column missing.] This gives the following:

$$\sum_k \mathcal{M}_{ik}\,\mathcal{M}^{adj}_{kj} = \sum_k \mathcal{M}^{adj}_{ik}\,\mathcal{M}_{kj} = \delta_{ij}\,(\det\mathcal{M}).$$

Setting 𝓜 = ℋ − e1 we have

$$\mathcal{M}^{adj}(\mathcal{H} - e\mathbf{1}) = \det(\mathcal{H} - e\mathbf{1})\,\mathbf{1} = (\mathcal{H} - e\mathbf{1})\,\mathcal{M}^{adj},$$
$$S(e)\,\mathbf{1} = (\mathcal{H} - e\mathbf{1})(\mathcal{H} - e\mathbf{1})^{adj}. \tag{1.2.7}$$

Now replacement of the number e by the matrix ℋ is well defined, and doing so proves the theorem: S(ℋ) = 0.
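Both results are easy to check in a few lines (our own illustration): the secular roots follow from the characteristic coefficients, and the matrix satisfies its own secular equation:

```python
import numpy as np

H = np.array([[4.0, 1.0],
              [3.0, 2.0]])

# For a 2x2 matrix, S(e) = e^2 - (trace)e + det, so Eq. (1.2.5) reads:
coeffs = [1.0, -np.trace(H), np.linalg.det(H)]   # e^2 - 6e + 5
print(np.roots(coeffs))                          # [5. 1.]

# Hamilton-Cayley, Eq. (1.2.6): S(H) = H^2 - 6H + 5*1 = 0
S_H = H @ H - 6.0 * H + 5.0 * np.eye(2)
print(np.allclose(S_H, 0.0))                     # True
```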
(a) Eigenvector Projectors. If the roots eⱼ are known, we may use the factored form of Eq. (1.2.6) to derive the eigenvectors:

$$S(\mathcal{H}) = (\mathcal{H} - e_1\mathbf{1})(\mathcal{H} - e_2\mathbf{1})\cdots(\mathcal{H} - e_n\mathbf{1}) = \mathbf{0}; \tag{1.2.8}$$

for the example,

$$(\mathcal{H} - \mathbf{1})(\mathcal{H} - 5\cdot\mathbf{1}) = \begin{pmatrix}3 & 1\\ 3 & 1\end{pmatrix}\begin{pmatrix}-1 & 1\\ 3 & -3\end{pmatrix} = \begin{pmatrix}0 & 0\\ 0 & 0\end{pmatrix}.$$

The procedure depends upon whether or not any of the n roots eⱼ are equal. We study first the cases in which all eⱼ are distinct. This includes our example (e₁ = 1, e₂ = 5).

Case 1: All n Roots eⱼ Are Distinct. With all roots having different values one may select n RELATIVELY PRIME factors pⱼ ("relatively prime" means "no common factor shared by all") from the factored HCEq, Eq. (1.2.8), by deleting first one (ℋ − eⱼ1) factor, then the next, and so on:

$$p_1 = (\mathcal{H} - e_2\mathbf{1})(\mathcal{H} - e_3\mathbf{1})\cdots(\mathcal{H} - e_n\mathbf{1}),$$
$$p_2 = (\mathcal{H} - e_1\mathbf{1})(\mathcal{H} - e_3\mathbf{1})\cdots(\mathcal{H} - e_n\mathbf{1}),$$
$$\vdots$$
$$p_n = (\mathcal{H} - e_1\mathbf{1})\cdots(\mathcal{H} - e_{n-1}\mathbf{1}); \tag{1.2.9}$$

for the example,

$$p_1 = \mathcal{H} - 5\cdot\mathbf{1} = \begin{pmatrix}-1 & 1\\ 3 & -3\end{pmatrix}, \qquad p_2 = \mathcal{H} - \mathbf{1} = \begin{pmatrix}3 & 1\\ 3 & 1\end{pmatrix}.$$

Now we see that these pⱼ contain the eigenvectors if we rewrite the HCEq, Eq. (1.2.8), in terms of pⱼ, as is done in the following. Equations (1.2.10b) and (1.2.10c) are essentially the eigenvalue equations (1.2.3e) and (1.2.3f) with the matrix pⱼ in place of the eigenvector:

$$(\mathcal{H} - e_j\mathbf{1})\,p_j = \mathbf{0}, \tag{1.2.10a}$$
$$\mathcal{H}\,p_j = e_j\,p_j, \tag{1.2.10b}$$
$$p_j\,\mathcal{H} = e_j\,p_j. \tag{1.2.10c}$$

According to the last equations, each column of pⱼ must satisfy the eigenvalue equation for a ket vector |eⱼ⟩, and each row must satisfy this equation for a bra vector ⟨eⱼ|. Now if a vector satisfies an eigenvalue equation, so will two times that vector, or any "length" of that vector including zero. It is conventional to normalize all our eigenvectors, giving the idempotent projectors

$$\mathcal{P}_1 = \frac{\mathcal{H} - 5\cdot\mathbf{1}}{1 - 5} = \frac{1}{4}\begin{pmatrix}1 & -1\\ -3 & 3\end{pmatrix}, \qquad \mathcal{P}_2 = \frac{\mathcal{H} - \mathbf{1}}{5 - 1} = \frac{1}{4}\begin{pmatrix}3 & 1\\ 3 & 1\end{pmatrix}.$$

… These have precisely the same form as the completeness relation (1.2.18) and the spectral decomposition relation which is discussed in the following. [See Eq. (1.2.21).]

Finally, we observe that, while Eqs. (1.2.17) and (1.2.18) appear very dissimilar, they are more similar if you represent them with Dirac's notation. The completeness relation for basis |eⱼ⟩ is represented by the following in the basis {|1⟩, |2⟩, …}:

$$\mathbf{1} = \sum_j |e_j\rangle\langle e_j|, \tag{1.2.19a}$$
$$\langle l|\mathbf{1}|m\rangle = \delta_{lm} = \sum_j \langle l|e_j\rangle\langle e_j|m\rangle. \tag{1.2.19b}$$

The orthonormality condition for basis |eⱼ⟩ is similarly represented:

$$\langle e_i|e_j\rangle = \delta_{ij} = \sum_l \langle e_i|l\rangle\langle l|e_j\rangle. \tag{1.2.20}$$

The similarity between Eqs. (1.2.20) and (1.2.19) is now more evident.

(c) Spectral Decomposition of Matrices. Any diagonalizable matrix ℋ will have a set of idempotent projection matrices 𝒫ⱼ satisfying the orthonormality [𝒫ᵢ𝒫ⱼ = δᵢⱼ𝒫ⱼ, Eq. (1.2.17)] and completeness [1 = Σⱼ𝒫ⱼ, Eq. (1.2.18)] relations. Now operating on Eq. (1.2.18) with the matrix ℋ gives Eq. (1.2.21), which is called the SPECTRAL DECOMPOSITION of ℋ [eigenvalue equations (1.2.10) are used]:

$$\mathcal{H} = e_1\mathcal{P}_1 + e_2\mathcal{P}_2 + \cdots + e_k\mathcal{P}_k \quad (\text{sum over distinct } e_j); \tag{1.2.21}$$

for the example,

$$\begin{pmatrix}4 & 1\\ 3 & 2\end{pmatrix} = 1\cdot\frac{1}{4}\begin{pmatrix}1 & -1\\ -3 & 3\end{pmatrix} + 5\cdot\frac{1}{4}\begin{pmatrix}3 & 1\\ 3 & 1\end{pmatrix}.$$

This decomposition provides a very elegant way to manipulate matrices. For example, if you wanted the 50th power of the matrix (4 1; 3 2), there is no need to begin multiplying it by itself 49 times. Instead, the spectral decomposition gives the answer quickly:

$$\mathcal{H}^{50} = (e_1\mathcal{P}_1 + e_2\mathcal{P}_2)^{50} = e_1^{50}\mathcal{P}_1 + e_2^{50}\mathcal{P}_2 = \frac{1}{4}\begin{pmatrix}1 + 3\cdot 5^{50} & 5^{50} - 1\\ 3\cdot 5^{50} - 3 & 3 + 5^{50}\end{pmatrix}. \tag{1.2.22}$$

The property 𝒫ᵢ𝒫ⱼ = δᵢⱼ𝒫ⱼ reduces a complicated expression to a very simple one. It is this type of analysis that is central to the development of symmetry analysis and its applications, which are given in the next chapters.
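All of these projector relations can be checked in a few lines (our own illustration, using the same example matrix):

```python
import numpy as np

H = np.array([[4.0, 1.0],
              [3.0, 2.0]])
e1, e2 = 1.0, 5.0
I = np.eye(2)

# Normalized projectors from the factored Hamilton-Cayley equation
P1 = (H - e2 * I) / (e1 - e2)          # (1/4) [[ 1, -1], [-3, 3]]
P2 = (H - e1 * I) / (e2 - e1)          # (1/4) [[ 3,  1], [ 3, 1]]

print(np.allclose(P1 @ P1, P1))        # idempotent              (Eq. 1.2.17)
print(np.allclose(P1 @ P2, 0.0))       # mutually orthogonal     (Eq. 1.2.17)
print(np.allclose(P1 + P2, I))         # complete                (Eq. 1.2.18)
print(np.allclose(H @ P1, e1 * P1))    # columns are eigenvectors (Eq. 1.2.10)

# Spectral power, Eq. (1.2.22): H^50 without 49 multiplications
H50 = e1**50 * P1 + e2**50 * P2
print(np.allclose(H50, np.linalg.matrix_power(H, 50)))   # True
```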
(d) Simultaneous Diagonalization of Commuting Matrices. When any two Hermitian matrices 𝒜 and ℬ commute with each other [𝒜ℬ = ℬ𝒜], then there must exist a complete set of base vectors that are eigenvectors of both matrices. This is constructed easily using the spectral decompositions and completeness relations for each matrix. These are shown in Eqs. (1.2.23), where it is assumed that the number of distinct eigenvalues of 𝒜 is m, while for ℬ the number is n:

$$\mathbf{1} = P^1(\mathcal{A}) + P^2(\mathcal{A}) + \cdots + P^m(\mathcal{A}), \tag{1.2.23a}$$
$$\mathcal{A} = \mu_1 P^1(\mathcal{A}) + \mu_2 P^2(\mathcal{A}) + \cdots + \mu_m P^m(\mathcal{A}), \tag{1.2.23b}$$
$$\mathbf{1} = P^1(\mathcal{B}) + P^2(\mathcal{B}) + \cdots + P^n(\mathcal{B}), \tag{1.2.23c}$$
$$\mathcal{B} = \nu_1 P^1(\mathcal{B}) + \nu_2 P^2(\mathcal{B}) + \cdots + \nu_n P^n(\mathcal{B}). \tag{1.2.23d}$$

Now if the matrices commute, so will their respective idempotents, since these are just polynomials of the matrices. Because of this, we get immediately a complete set of idempotents, and hence the eigenvectors, for the two of them by simply multiplying the separate completeness relations:

$$\mathbf{1} = \big[P^1(\mathcal{A}) + \cdots + P^m(\mathcal{A})\big]\big[P^1(\mathcal{B}) + \cdots + P^n(\mathcal{B})\big], \tag{1.2.24a}$$
$$\mathbf{1} = P^1(\mathcal{A})P^1(\mathcal{B}) + P^1(\mathcal{A})P^2(\mathcal{B}) + \cdots + P^m(\mathcal{A})P^n(\mathcal{B}), \tag{1.2.24b}$$
$$\mathbf{1} = P^{11} + P^{12} + \cdots + P^{mn}. \tag{1.2.24c}$$

Each of the resulting terms must satisfy eigenvalue equations for 𝒜 and ℬ:

$$\mathcal{A}P^{ab} = \mathcal{A}P^{a}(\mathcal{A})P^{b}(\mathcal{B}) = \mu_a P^{ab}, \qquad \mathcal{B}P^{ab} = \mathcal{B}P^{a}(\mathcal{A})P^{b}(\mathcal{B}) = \nu_b P^{ab}.$$

Each term P^{ab} that is not zero is clearly one of a set of complete and orthogonal idempotents. However, many of the terms are usually zero in practice. In fact, if 𝒜 is an m × m matrix with m distinct eigenvalues, then there will be exactly m nonzero terms on the right of (1.2.24b) or (1.2.24c), and each of these will be identical to one of the Pᵃ(𝒜). Having two or more nonzero terms P^{ab}, P^{ab'}, … amounts to having two orthogonal eigenvectors with eigenvalue μₐ, where we said we only had one. In other words, the reduction of 𝒜 will, in the case of distinct μₐ, serve immediately to reduce all the other matrices 𝒩 which commute with 𝒜. For example, the fact that the matrix

$$\mathcal{N} = \begin{pmatrix}1 & -1\\ -3 & 3\end{pmatrix}$$

commutes with the matrix (4 1; 3 2), which we have reduced, implies that 𝒩 is reduced by the same idempotents: 𝒩 = 4𝒫₁ + 0·𝒫₂. Solving its secular equation is unnecessary.

This idea is extremely important in the development of symmetry analysis, as we shall see early in the next chapter.
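A numerical version of this shortcut (our own illustration; 𝒩 is the commuting matrix quoted above):

```python
import numpy as np

H = np.array([[4.0, 1.0],
              [3.0, 2.0]])
N = np.array([[1.0, -1.0],
              [-3.0, 3.0]])
print(np.allclose(H @ N, N @ H))        # True: the two matrices commute

# The idempotents that reduced H reduce N as well -- no new secular equation
P1 = (H - 5.0 * np.eye(2)) / (1.0 - 5.0)
P2 = (H - 1.0 * np.eye(2)) / (5.0 - 1.0)
print(np.allclose(N @ P1, 4.0 * P1))    # N's eigenvalue is 4 on P1's subspace
print(np.allclose(N @ P2, 0.0 * P2))    # ...and 0 on P2's subspace
```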
++") wma(e,14,) + 6:1) +--+). (1.4.96) Forming the scalar product of this in turn with (4,1 = [4,)', (sl = 143)" ~ results in the following generalized matrix cigenvalue equation in which the force and mass matrices are Hermitian CAIFIA, AFA, | (A,IFIA,) ¢AzIFIA,) : [ = 40). (1.4.19¢) I is easy to see that the latter obey Hamilton’s equations: a oat aH MH Gy ~aa(i) ~ ~BUi), ao 7 PEA (1.4.20) This is a necessary prerequisite for any quantum theory in which each momentum p(j) is replaced by an operator (/1/i)4/aq(J) to make the Hamiltonian operator into the Schrédinger equator However, for classical vibration problems it is often more convenient to deal directly with Newton's equations. The F matrix is usually easy to obtain by inspection. For example, for the system in Figure 1.4.2 we would obtain the component (i|F| j) by simply computing the product of the projections of each spring on coordinate axis x, = (ilx) and y, = . is analogous to the following integral: Cal¥) = fdvdxb rv) = fa 8(x, y)¥(V) = WO). (1.5.30) B. Orthonormality and Completeness ‘The completeness relation (1.2.19a) for a discrete space {|1), 12) --~) has the form Lip ) + 1k) = 12> + (> + IRD) (a9) porenoxe 47 (4) There exists @ null element 0 such that 0+) =l +0" 1). (A2e) (5) There exists an element |i") for every |i) such that ly +) <0 (Asay (6) The associative law of multiplication holds; i.e, a(Bli)) = api) (Ase) (7) The distributive law with respect to the addition of complex number holds; ie., (a + B)li) = ali) + Bl’). (Asn (8) The distributive law with respect to the additions of vectors holds; ie., a(li) + 17) = ali) + aj). (A9s) ‘The mathematical definitions of the inner or scalar product follow @ Ci * (kd) = GLa) + lk). (A.10a) @ Gil(al > = ailj). (A.100) eo) i = Gl. (A100) (4) Aili) 20. (A.t0a) APPENDIX B. LINEAR EQUATIONS, MATRICES, DETERMINANTS, ‘AND INVERSES a The simplest nontrivial example of a matrix equation is the linear equa tion, Eq, (B.1), involving two unknowns , and 2, (we assume the quantities A; andy, are known constants). This is written in three ways to display the ‘matrix notation: Aun thats =p Mit thls = Yor (Bula) ( 2)G)-(h) on Meny. (B.lc) 48 A REVIEW OF MATRIX ALGEBRA AND QUANTUM MECHANICS. ‘This equation is solved if and when the INVERSE .4~' of matrix # is found such that Eq. (B.2) holds, where / is the IDENTITY matrix: (a. ws on (B2) eels 2) 2) \4n tn} \tn Mn This will give a solution because the product (I -) of the identity matrix ‘with any vector 2 is equal to 2, and so Eqs. (B.L) and (B.2) together give Eq, (B3): Meade ly. (Ba) Simple algebra gives, for the inverse of #, ir Mia ln We derive now a general formula for the inverse of an m Xm matrix. Such a formula follows from the properties of the DETERMINANT (det .@) of a matrix .@. The determinant is defined by Eq. (B.4), where, for convenience, we use a different notation for matrix components: by by bs by det-@= det}, C2 cy Cy Jor do ds da = x ~ DPM P(aybyeady +) ass = (ab2¢sdy ***) ~ (aibserdy ***) + (ay ~(arbiesds °°) + (arbsedg ©) + (ay + (asbyeads +) — (asbredg **+) + (ay i (Ba) A permutation operation P in Eq. (B.) is said to have parity = ~1 if it is accomplished by an odd number of “hops” of one number over another aPPENDKe — 49 Gk 1 + E+ For example, in permating 4123--- co 1234--+ we use threc’hops: 4123 -» 1423 + 1243 + 1234, On the other and a pormutation is said to have parity — +i iff done in an even ‘number of hops. From Eq. 
From Eq. (B.4) we can understand the expansion of a determinant in terms of smaller subdeterminants called MINORS, as shown in the following. We abbreviate the sum over permutations by Σ(−1)^P:

$$\det\mathcal{M} = a_1\det\begin{pmatrix}b_2 & b_3 & b_4\\ c_2 & c_3 & c_4\\ d_2 & d_3 & d_4\end{pmatrix} - a_2\det\begin{pmatrix}b_1 & b_3 & b_4\\ c_1 & c_3 & c_4\\ d_1 & d_3 & d_4\end{pmatrix} + a_3\det\begin{pmatrix}b_1 & b_2 & b_4\\ c_1 & c_2 & c_4\\ d_1 & d_2 & d_4\end{pmatrix} - \cdots$$
$$= a_1 m_{11} - a_2 m_{12} + a_3 m_{13} - \cdots. \tag{B.5}$$

A minor mᵢⱼ of a matrix 𝓜 is the determinant obtained from 𝓜 after erasing its ith row and its jth column. In Eq. (B.5) we used the minors of the first row of 𝓜. However, any row or column of 𝓜 could be similarly used to give the general equation (B.6):

$$\det\mathcal{M} = \sum_j (-1)^{i+j}\,\mathcal{M}_{ij}\,m_{ij}, \qquad i = 1, 2, \ldots, \text{or } n, \tag{B.6a}$$
$$\det\mathcal{M} = \sum_i (-1)^{i+j}\,\mathcal{M}_{ij}\,m_{ij}, \qquad j = 1, 2, \ldots, \text{or } n. \tag{B.6b}$$

Now suppose we define a matrix called the ADJUNCT matrix 𝓜^adj of 𝓜 by Eq. (B.7) (note the switching of i and j in m):

$$\mathcal{M}^{adj}_{ij} = (-1)^{i+j}\,m_{ji}. \tag{B.7}$$

𝓜^adj together with the original matrix 𝓜 obeys Eq. (B.8), as can be seen by writing out the components:

$$\mathcal{M}\,\mathcal{M}^{adj} = (\det\mathcal{M})\,\mathbf{1}, \tag{B.8a}$$
$$\mathcal{M}^{adj}\,\mathcal{M} = (\det\mathcal{M})\,\mathbf{1}, \tag{B.8b}$$
$$\big(\mathcal{M}\,\mathcal{M}^{adj}\big)_{ik} = \sum_j \mathcal{M}_{ij}\,(-1)^{j+k}\,m_{kj} = \begin{cases}\det\mathcal{M}, & i = k,\\ 0, & i \neq k.\end{cases} \tag{B.8c}$$

For i = k the sum in Eq. (B.8c) is just the row expansion (B.6a) of det 𝓜 itself, while for i ≠ k it is the expansion of a determinant in which row k has been replaced by a copy of row i. In the last case one uses the fact that any determinant with two identical rows must vanish.

Now the general formula for the inverse 𝓜⁻¹ follows if we can divide Eq. (B.8) by det 𝓜:

$$\mathcal{M}^{-1} = \mathcal{M}^{adj}/(\det\mathcal{M}). \tag{B.9}$$

This is the case when det 𝓜 ≠ 0. Then from Eq. (B.3) we may write the general solution to the linear equation 𝓜x = y, which is called CRAMER'S RULE:

$$\mathbf{x} = \mathcal{M}^{-1}\mathbf{y} = \frac{\mathcal{M}^{adj}\,\mathbf{y}}{\det\mathcal{M}}, \qquad x_1 = \frac{1}{\det\mathcal{M}}\det\begin{pmatrix}y_1 & a_2 & a_3 & \cdots\\ y_2 & b_2 & b_3 & \cdots\\ y_3 & c_2 & c_3 & \cdots\\ \vdots & & & \end{pmatrix}, \quad\text{etc.} \tag{B.10}$$

A linear equation with y = 0 is called a HOMOGENEOUS equation. Equation (B.11) is an example:

$$\mathcal{M}\,\mathbf{x} = \mathbf{0}. \tag{B.11}$$

Now if det 𝓜 ≠ 0, then according to Cramer's rule the only solution is the zero vector x = 0. However, if det 𝓜 = 0 there will exist nonzero solutions, as shown in Section 1.2.B. A matrix 𝓜 is said to be SINGULAR if det 𝓜 is zero and NONSINGULAR if det 𝓜 is nonzero.

In Appendix A, a set of vectors |a⟩, |b⟩, |c⟩, … were defined to be linearly dependent if and only if a relation of the form of Eq. (B.12) could exist for values of the coefficients α, β, γ, … not all zero:

$$\alpha|a\rangle + \beta|b\rangle + \gamma|c\rangle + \cdots = 0. \tag{B.12}$$

By taking the scalar product of this relation with each vector in turn, one derives a simple matrix equation which can be used to test for linear dependence:

$$\alpha\langle a|a\rangle + \beta\langle a|b\rangle + \gamma\langle a|c\rangle + \cdots = 0,$$
$$\alpha\langle b|a\rangle + \beta\langle b|b\rangle + \gamma\langle b|c\rangle + \cdots = 0,$$
$$\vdots$$
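Equations (B.7)-(B.10) can be exercised together in one short sketch (our own illustration; the helper name adjunct is ours):

```python
import numpy as np

def adjunct(M):
    """Adjunct matrix of Eq. (B.7): A_ij = (-1)^(i+j) * m_ji."""
    n = M.shape[0]
    A = np.empty_like(M)
    for i in range(n):
        for j in range(n):
            # minor m_ji: delete row j and column i, then take the determinant
            minor = np.delete(np.delete(M, j, axis=0), i, axis=1)
            A[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return A

M = np.array([[4.0, 1.0],
              [3.0, 2.0]])
A = adjunct(M)

# Eq. (B.8): M A = (det M) 1
print(np.allclose(M @ A, np.linalg.det(M) * np.eye(2)))   # True

# Cramer's rule, Eq. (B.10): x = (A y) / det M solves M x = y
y = np.array([1.0, 2.0])
x = A @ y / np.linalg.det(M)
print(np.allclose(M @ x, y))                               # True
```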

PROBLEMS

Section 1.1

1.1.1 (Beam stoppers) Consider an incoming beam of spin-½ particles approaching the apparatus shown in the following diagram. A "stopper" can block the intermediate (2') beam if inserted as shown in the diagram.

Use the following notation for the states having different analyzer angles θ:

|1⟩ = |up⟩      |1'⟩ = |N⟩      |1''⟩ = |right⟩
|2⟩ = |down⟩    |2'⟩ = |S⟩      |2''⟩ = |left⟩      (0 < θ …