THEORY AND PROBLEMS OF MATRICES

by FRANK AYRES, JR., Ph.D.
Formerly Professor and Head, Department of Mathematics, Dickinson College

Including 340 solved problems, completely solved in detail

SCHAUM'S OUTLINE SERIES
SCHAUM PUBLISHING CO. — McGRAW-HILL BOOK COMPANY
New York, St. Louis, San Francisco, Toronto, Sydney

Copyright 1962 by McGraw-Hill, Inc. All Rights Reserved. Printed in the United States of America. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher.

Preface

Elementary matrix algebra has now become an integral part of the mathematical background necessary for such diverse fields as electrical engineering and education, chemistry and sociology, as well as for statistics and pure mathematics. This book, in presenting the more essential material, is designed primarily to serve as a useful supplement to current texts and as a handy reference book for those working in the several fields which require some knowledge of matrix theory. Moreover, the statements of theory and principle are sufficiently complete that the book could be used as a text by itself.

The material has been divided into twenty-six chapters, since the logical arrangement is thereby not disturbed while the usefulness as a reference book is increased. This also permits a separation of the treatment of real matrices, with which the majority of readers will be concerned, from that of matrices with complex elements. Each chapter contains a statement of pertinent definitions, principles, and theorems, fully illustrated by examples. These, in turn, are followed by a carefully selected set of solved problems and a considerable number of supplementary exercises.
The beginning student in matrix algebra soon finds that the solutions of numerical exercises are disarmingly simple. Difficulties are likely to arise from the constant round of definition, theorem, proof. The trouble here is essentially a matter of lack of mathematical maturity, and normally to be expected, since usually the student's previous work in mathematics has been concerned with the solution of numerical problems while precise statements of principles and proofs of theorems have in large part been deferred for later courses. The aim of the present book is to enable the reader, if he persists through the introductory paragraphs and solved problems in any chapter, to develop a reasonable degree of self-assurance about the material.

The solved problems, in addition to giving more variety to the examples illustrating the theorems, contain most of the proofs of any considerable length together with representative shorter proofs. The supplementary problems call both for the solution of numerical exercises and for proofs. Some of the latter require only proper modifications of proofs given earlier; more important, however, are the many theorems whose proofs require but a few lines. Some are of the type frequently misnamed "obvious" while others will be found to call for considerable ingenuity. None should be treated lightly, however, for it is due precisely to the abundance of such theorems that elementary matrix algebra becomes a natural first course for those seeking to attain a degree of mathematical maturity. While the large number of these problems in any chapter makes it impractical to solve all of them before moving to the next, special attention is directed to the supplementary problems of the first two chapters. A mastery of these will do much to give the reader confidence to stand on his own feet thereafter.
The author wishes to take this opportunity to express his gratitude to the staff of the Schaum Publishing Company for their splendid cooperation.

FRANK AYRES, JR.
Carlisle, Pa.
October, 1962

CONTENTS

Chapter 1. MATRICES
Matrices. Equal matrices. Sums of matrices. Products of matrices. Products by partitioning.

Chapter 2. SOME TYPES OF MATRICES
Triangular matrices. Scalar matrices. Diagonal matrices. The identity matrix. Inverse of a matrix. Transpose of a matrix. Symmetric matrices. Skew-symmetric matrices. Conjugate of a matrix. Hermitian matrices. Skew-Hermitian matrices. Direct sums.

Chapter 3. DETERMINANT OF A SQUARE MATRIX
Determinants of orders 2 and 3. Properties of determinants. Minors and cofactors. Algebraic complements.

Chapter 4. EVALUATION OF DETERMINANTS
Expansion along a row or column. The Laplace expansion. Expansion along the first row and column. Determinant of a product. Derivative of a determinant.

Chapter 5. EQUIVALENCE
Rank of a matrix. Non-singular and singular matrices. Elementary transformations. Inverse of an elementary transformation. Equivalent matrices. Row canonical form. Normal form. Elementary matrices. Canonical sets under equivalence. Rank of a product.

Chapter 6. THE ADJOINT OF A SQUARE MATRIX
The adjoint. The adjoint of a product. Minor of an adjoint.

Chapter 7. THE INVERSE OF A MATRIX
Inverse of a diagonal matrix. Inverse from the adjoint. Inverse from elementary matrices. Inverse by partitioning. Inverse of symmetric matrices. Right and left inverses of m×n matrices.

Chapter 8. FIELDS
Number fields. General fields. Sub-fields. Matrices over a field.

Chapter 9. LINEAR DEPENDENCE OF VECTORS AND FORMS
Vectors. Linear dependence of vectors, linear forms, polynomials, and matrices.

Chapter 10. LINEAR EQUATIONS
System of non-homogeneous equations. Solution using matrices. Cramer's rule. Systems of homogeneous equations.
Chapter 11. VECTOR SPACES
Vector spaces. Sub-spaces. Basis and dimension. Sum space. Intersection space. Null space of a matrix. Sylvester's laws of nullity. Bases and coordinates.

Chapter 12. LINEAR TRANSFORMATIONS
Singular and non-singular transformations. Change of basis. Invariant space. Permutation matrix.

Chapter 13. VECTORS OVER THE REAL FIELD
Inner product. Length. Schwarz inequality. Triangle inequality. Orthogonal vectors and spaces. Orthonormal basis. Gram-Schmidt orthogonalization process. The Gramian. Orthogonal matrices. Orthogonal transformations. Vector product.

Chapter 14. VECTORS OVER THE COMPLEX FIELD
Complex numbers. Inner product. Length. Schwarz inequality. Triangle inequality. Orthogonal vectors and spaces. Orthonormal basis. Gram-Schmidt orthogonalization process. The Gramian. Unitary matrices. Unitary transformations.

Chapter 15. CONGRUENCE
Congruent matrices. Congruent symmetric matrices. Canonical forms of real symmetric, skew-symmetric, Hermitian, skew-Hermitian matrices under congruence.

Chapter 16. BILINEAR FORMS
Matrix form. Transformations. Canonical forms. Cogredient transformations. Contragredient transformations. Factorable forms.

Chapter 17. QUADRATIC FORMS
Matrix form. Transformations. Canonical forms. Lagrange reduction. Sylvester's law of inertia. Definite and semi-definite forms. Principal minors. Regular form. Kronecker's reduction. Factorable forms.

Chapter 18. HERMITIAN FORMS
Matrix form. Transformations. Canonical forms. Definite and semi-definite forms.

Chapter 19. THE CHARACTERISTIC EQUATION OF A MATRIX
Characteristic equation and roots. Invariant vectors and spaces.

Chapter 20. SIMILARITY
Similar matrices. Reduction to triangular form. Diagonable matrices.

Chapter 21. SIMILARITY TO A DIAGONAL MATRIX
Real symmetric matrices. Orthogonal similarity. Pairs of real quadratic forms. Hermitian matrices. Unitary similarity.
Normal matrices. Spectral decomposition. Field of values.

Chapter 22. POLYNOMIALS OVER A FIELD
Sum, product, quotient of polynomials. Remainder theorem. Greatest common divisor. Least common multiple. Relatively prime polynomials. Unique factorization.

Chapter 23. LAMBDA MATRICES
The λ-matrix or matrix polynomial. Sums, products, and quotients. Remainder theorem. Cayley-Hamilton theorem. Derivative of a matrix.

Chapter 24. SMITH NORMAL FORM
Smith normal form. Invariant factors. Elementary divisors.

Chapter 25. THE MINIMUM POLYNOMIAL OF A MATRIX
Similarity invariants. Minimum polynomial. Derogatory and non-derogatory matrices. Companion matrix.

Chapter 26. CANONICAL FORMS UNDER SIMILARITY
Rational canonical form. A second canonical form. Hypercompanion matrix. Jacobson canonical form. Classical canonical form. A reduction to rational canonical form.

INDEX

INDEX OF SYMBOLS

Chapter 1

Matrices

A RECTANGULAR ARRAY OF NUMBERS enclosed by a pair of brackets, such as

   (a)  [ 2  3  7 ]        and        (b)  [ 1  3  1 ]
        [ 1 -1  5 ]                        [ 2  1  4 ]
                                           [ 4  7  6 ]

and subject to certain rules of operations given below, is called a matrix. The matrix (a) could be considered as the coefficient matrix of the system of homogeneous linear equations

   2x + 3y + 7z = 0
    x -  y + 5z = 0

or as the augmented matrix of the system of non-homogeneous linear equations

   2x + 3y = 7
    x -  y = 5

Later, we shall see how the matrix may be used to obtain solutions of these systems. The matrix (b) could be given a similar interpretation, or we might consider its rows as simply the coordinates of the points (1,3,1), (2,1,4), and (4,7,6) in ordinary space.
The matrix (b) will be used later to settle such questions as whether or not the three points lie in the same plane with the origin or on the same line through the origin.

In the matrix

   (1.1)     [ a11  a12  a13  ...  a1n ]
             [ a21  a22  a23  ...  a2n ]
             [ ....................... ]
             [ am1  am2  am3  ...  amn ]

the numbers or functions aij are called its elements. In the double subscript notation, the first subscript indicates the row and the second subscript indicates the column in which the element stands. Thus, all elements in the second row have 2 as first subscript and all the elements in the fifth column have 5 as second subscript. A matrix of m rows and n columns is said to be of order "m by n" or m×n.

(In indicating a matrix, pairs of parentheses, ( ), and double bars, || ||, are sometimes used. We shall use the bracket notation throughout.)

At times the matrix (1.1) will be called "the m×n matrix [aij]" or "the m×n matrix A = [aij]". When the order has been established, we shall write simply "the matrix A".

SQUARE MATRICES. When m = n, (1.1) is square and will be called a square matrix of order n or an n-square matrix. In a square matrix, the elements a11, a22, ..., ann are called its diagonal elements. The sum of the diagonal elements of a square matrix A is called the trace of A.

EQUAL MATRICES. Two matrices A = [aij] and B = [bij] are said to be equal (A = B) if and only if they have the same order and each element of one is equal to the corresponding element of the other, that is, if and only if

   aij = bij      (i = 1, 2, ..., m; j = 1, 2, ..., n)

Thus, two matrices are equal if and only if one is a duplicate of the other.

ZERO MATRIX. A matrix, every element of which is zero, is called a zero matrix. When A is a zero matrix and there can be no confusion as to its order, we shall write A = 0 instead of the m×n array of zero elements.
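The definitions above (order, equality, trace) can be illustrated with a short sketch in Python; the helper names and the sample matrix are our own illustration, not part of the original text:

```python
# A matrix is stored as a list of rows; these helpers mirror the
# definitions of order, equality, and trace given above.
A = [[1, 2, 5],
     [0, 1, 4]]

def order(M):
    """Order 'm by n': m rows and n columns."""
    return (len(M), len(M[0]))

def equal(M, N):
    """Equal iff same order and every corresponding pair of elements agrees."""
    return order(M) == order(N) and all(
        M[i][j] == N[i][j] for i in range(len(M)) for j in range(len(M[0])))

def trace(M):
    """Sum of the diagonal elements of a square matrix."""
    return sum(M[i][i] for i in range(len(M)))

print(order(A))                          # (2, 3)
print(equal(A, [row[:] for row in A]))   # True: a duplicate of A equals A
print(trace([[1, 2], [3, 4]]))           # 1 + 4 = 5
```

The equality test deliberately checks the order first, exactly as the definition requires: matrices of different orders are never equal.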
SUMS OF MATRICES. If A = [aij] and B = [bij] are two m×n matrices, their sum (difference), A ± B, is defined as the m×n matrix C = [cij], where each element of C is the sum (difference) of the corresponding elements of A and B. Thus, A ± B = [aij ± bij].

Example 1. If A = [ 1 2 5; 0 1 4 ] and B = [ 2 3 0; -1 2 8 ], then

   A + B  =  [ 1+2     2+3  5+0 ]   =   [  3  5  5 ]
             [ 0+(-1)  1+2  4+8 ]       [ -1  3 12 ]
and
   A - B  =  [ 1-2     2-3  5-0 ]   =   [ -1 -1  5 ]
             [ 0-(-1)  1-2  4-8 ]       [  1 -1 -4 ]

Two matrices of the same order are said to be conformable for addition or subtraction. Two matrices of different orders cannot be added or subtracted. For example, the matrices (a) and (b) above are non-conformable for addition and subtraction.

The sum of k matrices A is a matrix of the same order as A, each of whose elements is k times the corresponding element of A. We define: if k is any scalar (we call k a scalar to distinguish it from [k], which is a 1×1 matrix), then by kA = Ak is meant the matrix obtained from A by multiplying each of its elements by k.

Example 2. If A = [ 2 -3; 1 6 ], then

   3A  =  A + A + A  =  [ 6 -9 ]        and        5A  =  [ 5(2)  5(-3) ]  =  [ 10 -15 ]
                        [ 3 18 ]                          [ 5(1)  5(6)  ]     [  5  30 ]

In particular, by -A, called the negative of A, is meant the matrix obtained from A by multiplying each of its elements by -1, or by simply changing the sign of all of its elements. For every A, we have A + (-A) = 0, where 0 indicates the zero matrix of the same order as A.

Assuming that the matrices A, B, C are conformable for addition, we state:
   (a) A + B = B + A                    (commutative law)
   (b) A + (B + C) = (A + B) + C        (associative law)
   (c) k(A + B) = kA + kB = (A + B)k,   k a scalar
   (d) There exists a matrix D such that A + D = B.
These laws are a result of the laws of elementary algebra governing the addition of numbers and polynomials. They show, moreover:
   I. Conformable matrices obey the same laws of addition as the elements of these matrices.
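The element-by-element definitions of sum, difference, and scalar multiple can be sketched in Python; the helper names and matrices below are our own, not from the text:

```python
# Sum, difference, and scalar multiple of conformable matrices,
# computed element by element as in the definitions above.
def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def msub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scal(k, A):
    return [[k * a for a in row] for row in A]

A = [[1, 2, 5], [0, 1, 4]]
B = [[2, 3, 0], [-1, 2, 8]]

print(madd(A, B))              # [[3, 5, 5], [-1, 3, 12]]
print(msub(A, B))              # [[-1, -1, 5], [1, -1, -4]]
print(scal(3, A))              # equals A + A + A
print(madd(A, scal(-1, A)))    # A + (-A) = the zero matrix
```

Note that `scal(3, A)` and `madd(A, madd(A, A))` produce the same matrix, which is exactly the statement that the sum of k matrices A is kA.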
MULTIPLICATION. By the product AB in that order of the 1×m matrix A = [a11 a12 ... a1m] and the m×1 matrix B = [b11; b21; ...; bm1] is meant the 1×1 matrix C = [a11 b11 + a12 b21 + ... + a1m bm1]. That is,

   [ a11 a12 ... a1m ] [ b11 ]
                       [ b21 ]   =   [ a11 b11 + a12 b21 + ... + a1m bm1 ]   =   [ Σk a1k bk1 ],   k = 1, 2, ..., m
                       [  .  ]
                       [ bm1 ]

Note that the operation is row by column; each element of the row is multiplied into the corresponding element of the column and then the products are summed.

Example 3.
   [ 2 3 4 ] [  1 ]
             [ -1 ]   =   [ 2(1) + 3(-1) + 4(2) ]   =   [ 7 ]
             [  2 ]

By the product AB in that order of the m×p matrix A = [aij] and the p×n matrix B = [bij] is meant the m×n matrix C = [cij], where

   cij  =  ai1 b1j + ai2 b2j + ... + aip bpj  =  Σk aik bkj      (i = 1, 2, ..., m; j = 1, 2, ..., n)

Think of A as consisting of m rows and B as consisting of n columns. In forming C = AB, each row of A is multiplied once and only once into each column of B. The element cij of C is then the product of the ith row of A and the jth column of B.

Example 4.

   AB  =  [ a11 a12 ] [ b11 b12 ]   =   [ a11 b11 + a12 b21    a11 b12 + a12 b22 ]
          [ a21 a22 ] [ b21 b22 ]       [ a21 b11 + a22 b21    a21 b12 + a22 b22 ]
          [ a31 a32 ]                   [ a31 b11 + a32 b21    a31 b12 + a32 b22 ]

The product AB is defined, or A is conformable to B for multiplication, only when the number of columns of A is equal to the number of rows of B. If A is conformable to B for multiplication (AB is defined), B is not necessarily conformable to A for multiplication (BA may or may not be defined).
See Problems 3-4.

Assuming that A, B, C are conformable for the indicated sums and products, we have
   (e) A(B + C) = AB + AC       (first distributive law)
   (f) (A + B)C = AC + BC       (second distributive law)
   (g) A(BC) = (AB)C            (associative law)
However,
   (h) AB ≠ BA, generally,
   (i) AB = 0 does not necessarily imply A = 0 or B = 0,
   (j) AB = AC does not necessarily imply B = C.
See Problems 3-8.
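The row-by-column rule cij = Σk aik bkj translates directly into code; this Python sketch (matrices chosen by us for illustration) also exhibits property (h), that AB and BA need not agree:

```python
def matmul(A, B):
    # c_ij = sum_k a_ik * b_kj; defined only when columns of A == rows of B.
    m, p, n = len(A), len(B), len(B[0])
    assert len(A[0]) == p, "A is not conformable to B for multiplication"
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(n)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]

print(matmul(A, B))   # [[2, 1], [4, 3]]
print(matmul(B, A))   # [[3, 4], [1, 2]]  -- AB != BA, generally
```

Multiplying by B on the right swaps the columns of A, while multiplying on the left swaps its rows; the two results differ, illustrating (h).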
PRODUCTS BY PARTITIONING. Let A = [aij] be of order m×p and B = [bij] be of order p×n. In forming the product AB, the matrix A is in effect partitioned into m matrices of order 1×p and B into n matrices of order p×1. Other partitions may be used. For example, let A and B be partitioned into matrices of indicated orders by drawing in the dotted lines as

   A  =  [ (m1×p1) | (m1×p2) | (m1×p3) ]   =   [ A11  A12  A13 ]
         [ (m2×p1) | (m2×p2) | (m2×p3) ]       [ A21  A22  A23 ]

   B  =  [ (p1×n1) | (p1×n2) ]   =   [ B11  B12 ]
         [ (p2×n1) | (p2×n2) ]       [ B21  B22 ]
         [ (p3×n1) | (p3×n2) ]       [ B31  B32 ]

In any such partitioning, it is necessary that the columns of A and the rows of B be partitioned in exactly the same way; however m1, m2, n1, n2 may be any non-negative (including 0) integers such that m1 + m2 = m and n1 + n2 = n. Then

   AB  =  [ A11 B11 + A12 B21 + A13 B31    A11 B12 + A12 B22 + A13 B32 ]   =   [ C11  C12 ]
          [ A21 B11 + A22 B21 + A23 B31    A21 B12 + A22 B22 + A23 B32 ]       [ C21  C22 ]

Example 5. Compute AB, given

   A  =  [ 2  1 | 0 ]        and        B  =  [ 2  1 ]
         [ 3  2 | 0 ]                         [ 2  1 ]
         [ 1  0 | 4 ]                         [ 2  3 ]

partitioned so that A = [ A11 A12; A21 A22 ] and B = [ B11; B21 ], with A11 = [ 2 1; 3 2 ], A12 = [ 0; 0 ], A21 = [ 1 0 ], A22 = [ 4 ], B11 = [ 2 1; 2 1 ], B21 = [ 2 3 ]. Then

   A11 B11 + A12 B21  =  [  6  3 ]  +  [ 0 0 ]  =  [  6  3 ]
                         [ 10  5 ]     [ 0 0 ]     [ 10  5 ]

   A21 B11 + A22 B21  =  [ 2 1 ]  +  [ 8 12 ]  =  [ 10 13 ]

so that

   AB  =  [  6   3 ]
          [ 10   5 ]
          [ 10  13 ]

See also Problem 9.

Let A, B, C, ... be n-square matrices. Let A be partitioned into matrices of the indicated orders

   A  =  [ (p1×p1)  (p1×p2)  (p1×p3) ]   =   [ A11  A12  A13 ]
         [ (p2×p1)  (p2×p2)  (p2×p3) ]       [ A21  A22  A23 ]
         [ (p3×p1)  (p3×p2)  (p3×p3) ]       [ A31  A32  A33 ]

and let B, C, ... be partitioned in exactly the same manner. Then sums, differences, and products may be formed using the matrices A11, A12, ...; B11, B12, ...; C11, C12, ....

SOLVED PROBLEMS

1. The sum and difference of conformable matrices are formed element by element: each element of A + B is aij + bij and each element of A - B is aij - bij.

2. Given A = [ 1 2; 3 4; 5 6 ] and B = [ 3 -2; 1 -5; 4 3 ], find D = [ p q; r s; t u ] such that A + B - D = 0.

   Here
      A + B - D  =  [ 4-p   0-q ]   =   [ 0 0 ]
                    [ 4-r  -1-s ]       [ 0 0 ]
                    [ 9-t   9-u ]       [ 0 0 ]
   so that p = 4, q = 0, r = 4, s = -1, t = 9, u = 9. Then D = [ 4 0; 4 -1; 9 9 ] = A + B.

3. (a)  [ 4 5 6 ] [  2 ]
                  [  1 ]   =   [ 4(2) + 5(1) + 6(-1) ]   =   [ 7 ]
                  [ -1 ]

   (b)  [ 1 ]
        [ 2 ] [ 4 5 6 ]   =   [  4  5  6 ]
        [ 3 ]                 [  8 10 12 ]
                              [ 12 15 18 ]
   (c)  [ 1 2 3 ] [ 4 -6   9  6 ]
                  [ 0 -7  10  7 ]   =   [ 1(4)+2(0)+3(5)   1(-6)+2(-7)+3(8)   1(9)+2(10)+3(-11)   1(6)+2(7)+3(-8) ]   =   [ 19  4  -4  -4 ]
                  [ 5  8 -11 -8 ]

4. For a square matrix A, the powers are formed by repeated multiplication: A² = A·A and A³ = A·A². By the associative law (g), A³ may equally be computed as A²·A, and in the same way A²·A³ = A³·A² = A⁵.

5. Show that:
   (a)  Σk aik(bkj + ckj)  =  Σk aik bkj + Σk aik ckj
   (b)  Σi Σj aij  =  Σj Σi aij
   (c)  Σk aik ( Σh bkh chj )  =  Σh ( Σk aik bkh ) chj

   (a)  Σk aik(bkj + ckj)  =  ai1(b1j + c1j) + ai2(b2j + c2j) + ... + ain(bnj + cnj)
           =  (ai1 b1j + ai2 b2j + ... + ain bnj) + (ai1 c1j + ai2 c2j + ... + ain cnj)
           =  Σk aik bkj + Σk aik ckj

   (b)  For a 3×3 array, for instance,
           (a11 + a12 + a13) + (a21 + a22 + a23) + (a31 + a32 + a33)
              =  (a11 + a21 + a31) + (a12 + a22 + a32) + (a13 + a23 + a33)
        This is simply the statement that in summing all of the elements of a matrix, one may sum first the elements of each row or the elements of each column.

   (c)  Σk aik ( Σh bkh chj )  =  ai1(b11 c1j + b12 c2j + ... + b1p cpj) + ai2(b21 c1j + ... + b2p cpj) + ... + ain(bn1 c1j + ... + bnp cpj)
           =  (Σk aik bk1)c1j + (Σk aik bk2)c2j + ... + (Σk aik bkp)cpj  =  Σh ( Σk aik bkh ) chj

6. Prove: If A = [aij] is of order m×n and if B = [bij] and C = [cij] are of order n×p, then A(B + C) = AB + AC.

   The elements of the ith row of A are ai1, ai2, ..., ain and the elements of the jth column of B + C are b1j + c1j, b2j + c2j, ..., bnj + cnj. Then the element standing in the ith row and jth column of A(B + C) is
      ai1(b1j + c1j) + ai2(b2j + c2j) + ... + ain(bnj + cnj)  =  Σk aik bkj + Σk aik ckj
   the sum of the elements standing in the ith row and jth column of AB and AC.

7. Prove: If A = [aij] is of order m×n, if B = [bij] is of order n×p, and if C = [cij] is of order p×q, then A(BC) = (AB)C.

   The elements of the ith row of A are ai1, ai2, ..., ain and the elements of the jth column of BC are Σh b1h chj, Σh b2h chj, ..., Σh bnh chj;
hence the element standing in the ith row and jth column of A(BC) is
      ai1 Σh b1h chj + ai2 Σh b2h chj + ... + ain Σh bnh chj  =  Σk aik ( Σh bkh chj )
         =  (Σk aik bk1)c1j + (Σk aik bk2)c2j + ... + (Σk aik bkp)cpj  =  Σh ( Σk aik bkh ) chj
This is the element standing in the ith row and jth column of (AB)C; hence, A(BC) = (AB)C.

8. Assuming A, B, C, D conformable, show in two ways that (A + B)(C + D) = AC + AD + BC + BD.

   Using (e) and then (f):  (A + B)(C + D)  =  (A + B)C + (A + B)D  =  AC + BC + AD + BD.
   Using (f) and then (e):  (A + B)(C + D)  =  A(C + D) + B(C + D)  =  AC + AD + BC + BD.

9. (a), (b) Form the indicated products by partitioning the factors and carrying out the block computations; in each case the result agrees with the product formed without partitioning. In (b) the factors are partitioned into diagonal blocks.

10. Let
      x1 = a11 y1 + a12 y2
      x2 = a21 y1 + a22 y2
      x3 = a31 y1 + a32 y2
    be three linear forms in y1 and y2 and let
      y1 = b11 z1 + b12 z2
      y2 = b21 z1 + b22 z2
    be a linear transformation of the coordinates (y1, y2) into new coordinates (z1, z2). The result of applying the transformation to the given forms is the set of forms
      x1 = (a11 b11 + a12 b21) z1 + (a11 b12 + a12 b22) z2
      x2 = (a21 b11 + a22 b21) z1 + (a21 b12 + a22 b22) z2
      x3 = (a31 b11 + a32 b21) z1 + (a31 b12 + a32 b22) z2

    Using matrix notation, we have the three forms

      [ x1 ]     [ a11 a12 ]
      [ x2 ]  =  [ a21 a22 ] [ y1 ]
      [ x3 ]     [ a31 a32 ] [ y2 ]

    and the transformation

      [ y1 ]  =  [ b11 b12 ] [ z1 ]
      [ y2 ]     [ b21 b22 ] [ z2 ]

    The result of applying the transformation is the set of three forms

      [ x1 ]     [ a11 a12 ]
      [ x2 ]  =  [ a21 a22 ] [ b11 b12 ] [ z1 ]
      [ x3 ]     [ a31 a32 ] [ b21 b22 ] [ z2 ]

    Thus, when a set of m linear forms in n variables with matrix A is subjected to a linear transformation of the variables with matrix B, there results a set of m linear forms with matrix C = AB.
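The partitioned products of Problem 9 can be imitated in code; in this Python sketch the matrices and the 2×2 partition are our own illustration, and AB is formed both directly and block by block:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def block(M, rows, cols):
    """Extract the submatrix with the given row and column indices."""
    return [[M[i][j] for j in cols] for i in rows]

A = [[2, 1, 0], [3, 2, 0], [1, 0, 4]]
B = [[2, 1], [2, 1], [2, 3]]

# Partition A's columns and B's rows the same way: 2 + 1.
A11, A12 = block(A, [0, 1], [0, 1]), block(A, [0, 1], [2])
A21, A22 = block(A, [2], [0, 1]),    block(A, [2], [2])
B11, B21 = block(B, [0, 1], [0, 1]), block(B, [2], [0, 1])

C_top    = madd(matmul(A11, B11), matmul(A12, B21))  # A11 B11 + A12 B21
C_bottom = madd(matmul(A21, B11), matmul(A22, B21))  # A21 B11 + A22 B21

print(C_top + C_bottom)     # same rows as matmul(A, B)
```

Stacking the block rows reproduces the direct product exactly, because the columns of A and the rows of B were partitioned in the same way.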
SUPPLEMENTARY PROBLEMS

11. Given A = [ 1 2 -3; 5 0 2; 1 -1 1 ], B = [ 3 -1 2; 4 2 5; 2 0 3 ], and C = [ 4 1 2; 0 3 2; 1 -2 3 ]:
    (a) compute A + B and A - C;
    (b) compute -2A = [ -2 -4 6; -10 0 -4; -2 2 -2 ] and 0·B = 0;
    (c) verify A + (B - C) = (A + B) - C;
    (d) find the matrix D such that A + D = B, and verify that D = B - A = -(A - B).

12. For the given pair of non-zero matrices A and B, compute AB = 0 and BA ≠ 0. Hence, AB ≠ BA, generally.

13. Given A = [ 1 -3 2; 2 1 -3; 4 -3 -1 ], B = [ 1 4 1 0; 2 1 1 1; 1 -2 1 2 ], and C = [ 2 1 -1 -2; 3 -2 -1 -1; 2 -5 -1 0 ], show that AB = AC. Thus, AB = AC does not necessarily imply B = C.

14. For the given matrix A, verify that A²·A = A·A².

15. Using the matrices of Problem 11, show that A(B + C) = AB + AC and (A + B)C = AC + BC.

16. Explain why, in general, (A + B)² ≠ A² + 2AB + B² and A² - B² ≠ (A - B)(A + B).

17. Given A = [ 2 -3 -5; -1 4 5; 1 -3 -4 ], B = [ -1 3 5; 1 -3 -5; -1 3 5 ], and C = [ 2 -2 -4; -1 3 4; 1 -2 -3 ]:
    (a) show that AB = BA = 0, AC = A, CA = C;
    (b) use the results of (a) to show that ACB = CBA, A² - B² = (A - B)(A + B), (A ± B)² = A² + B².

18. Given A = [ i 0; 0 -i ], where i² = -1, derive a formula for the positive integral powers of A.
    Ans. Aⁿ = I, A, -I, -A according as n = 4p, 4p+1, 4p+2, 4p+3.

19. Show that the product of any two or more matrices of the set {I, A, -I, -A}, with A as in Problem 18, is again a matrix of the set.

20. Given the matrices A of order m×n, B of order n×p, and C of order r×q, under what conditions on p, q, and r would the matrices be conformable for finding the products, and what is the order of each: (a) ABC, (b) ACB, (c) A(B + C)?
    Ans. (a) p = r; m×q.  (b) r = n, q = n; m×p.  (c) r = n, q = p; m×p.

21. Compute AB for the given partitioned matrices (see Problem 9).

22. Prove: (a) trace(A + B) = trace A + trace B; (b) trace(kA) = k·trace A.

23. Express the given set of linear forms in matrix notation X = AY and apply the given transformation Y = BZ of the variables to obtain the forms X = (AB)Z (see Problem 10).

24. If A = [aij] and B = [bij] are of order m×n and if C = [cij] is of order n×p, show that (A + B)C = AC + BC.

25. Let A = [aij] and B = [bjk], where (i = 1, 2, ..., m; j = 1, 2, ..., n; k = 1, 2, ..., p). Denote by βj the sum of the elements of the jth row of B, that is, let βj = Σk bjk. Show that the element in the ith row of the column A·[β1; β2; ...; βn] is the sum of the elements lying in the ith row of AB. Use this procedure to check the products formed in Problems 12 and 13.

26. A relation (such as parallelism, congruency) between mathematical entities possessing the following properties:
    (i) Determinative: either a is in the relation to b or a is not in the relation to b;
    (ii) Reflexive: a is in the relation to a, for all a;
    (iii) Symmetric: if a is in the relation to b, then b is in the relation to a;
    (iv) Transitive: if a is in the relation to b and b is in the relation to c, then a is in the relation to c;
    is called an equivalence relation. Show that the parallelism of lines, similarity of triangles, and equality of matrices are equivalence relations. Show that perpendicularity of lines is not an equivalence relation.

27. Show that conformability for addition of matrices is an equivalence relation while conformability for multiplication is not.

28. Prove: If A, B, C are matrices such that AC = CA and BC = CB, then (AB ± BA)C = C(AB ± BA).

Chapter 2

Some Types of Matrices
Show that the element in the ith row of A> Pe 4s the sum of the elements lying in the ith row of AB. Use this procedure to check the products formed in Problems 12 end 13, 26, A elation (such as parallelism, congruency) between mathematical ontities possessing the following popertios: (4) Determinative Either « is in the relation to 6 or a is not in the relation to b. Gi) Reflexive «is in the relation to a, for all a (it) Symmetric Ifa is in tho relation to b then & is in the relation to a. (iv) Transitive Ifa 4s in the relation to & and bis in the relation to ¢ then a is in the relation ta e. 4s called an equivalence relation. Show that the parallelism of lines, similarity of trlangles, and equality of matrices are equivalence relations. Show that perpendicularity of lines is not an equlvalence relation 21. Show that conformability for addition of matrices 1s an equivalence relation while conformability for multi- plication is not, 28. Prove: If A,B,C are matrices such that AC = C4 and BC =CB, then (AB 4 BA)C = C(AB 4 BA), Chapter 2 Some Types of Matrices ‘THE IDENTITY MATRIX, A square matrix 4 whose cloments aj; 0 for i> is called upper triangu- Jar; a square matrix A whose elements aj;~0 for i 3] is 4'-]2 5). note that the element o,, in the ith row 458 ete i and jth column of A stands in the jth row and ith column of A If A’and Mare transposes respectively df 4 and B, and if k is a scalar, we have immediately @ y= 4 land (b) (kay = ka In Problems 10 and 11, we prove: | HL. The transpose of the sum of two matrices is the sum of their transposes, ive (Asby| = ao 12 SOME TYPES OF MATRICES (CHAP. 2 and MIL The transpose of the product of two matrices is the product in reverse onder of their transposes, i.e., ABy = Ba See Problems 10-12. SYMMETRIC MATRICES. 
A square matrix A such that A' = A is called symmetric. Thus, a square matrix A = [aij] is symmetric provided aij = aji for all values of i and j. For example,

   A  =  [ 1  2  3 ]
         [ 2  4 -5 ]
         [ 3 -5  6 ]

is symmetric, and so also is kA for any scalar k.

In Problem 13, we prove:
   IV. If A is an n-square matrix, then A + A' is symmetric.

A square matrix A such that A' = -A is called skew-symmetric. Thus, a square matrix A is skew-symmetric provided aij = -aji for all values of i and j. Clearly, the diagonal elements are zeroes. For example,

   A  =  [ 0 -2  3 ]
         [ 2  0  4 ]
         [-3 -4  0 ]

is skew-symmetric, and so also is kA for any scalar k.

With only minor changes in Problem 13, we can prove:
   V. If A is any n-square matrix, then A - A' is skew-symmetric.

From Theorems IV and V follows:
   VI. Every square matrix A can be written as the sum of a symmetric matrix B = ½(A + A') and a skew-symmetric matrix C = ½(A - A').
See Problems 14-15.

THE CONJUGATE OF A MATRIX. Let a and b be real numbers and let i = √(-1); then z = a + bi is called a complex number. The complex numbers a + bi and a - bi are called conjugates, each being the conjugate of the other. If z = a + bi, its conjugate is denoted by z̄ = a - bi, and the conjugate of the conjugate of z is z itself.

If z1 = a + bi and z2 = c + di, then
   (i) z1 + z2 = (a + c) + (b + d)i, and its conjugate is (a + c) - (b + d)i = (a - bi) + (c - di) = z̄1 + z̄2;
that is, the conjugate of the sum of two complex numbers is the sum of their conjugates.
   (ii) z1·z2 = (ac - bd) + (ad + bc)i, and its conjugate is (ac - bd) - (ad + bc)i = (a - bi)(c - di) = z̄1·z̄2;
that is, the conjugate of the product of two complex numbers is the product of their conjugates.

When A is a matrix having complex numbers as elements, the matrix obtained from A by replacing each element by its conjugate is called the conjugate of A and is denoted by Ā (A conjugate).

Example 2.
When A  =  [ 2-3i   3  ]        then        Ā  =  [ 2+3i   3  ]
           [ 3+8i  1+i ]                          [ 3-8i  1-i ]

If Ā and B̄ are the conjugates of the matrices A and B and if k is any scalar, we have readily
   (c) the conjugate of Ā is A        and        (d) the conjugate of kA is k̄Ā
Using (i) and (ii) above, we may prove:
   VII. The conjugate of the sum of two matrices is the sum of their conjugates, i.e., the conjugate of A + B is Ā + B̄.
   VIII. The conjugate of the product of two matrices is the product, in the same order, of their conjugates, i.e., the conjugate of AB is Ā·B̄.

The transpose of Ā is denoted by Ā' (A conjugate transpose). It is sometimes written as A*. We have:
   IX. The transpose of the conjugate of A is equal to the conjugate of the transpose of A, i.e., (Ā)' = (A')‾.

Example 3. From Example 2,

   Ā'  =  [ 2+3i  3-8i ]        while        A'  =  [ 2-3i  3+8i ]        whose conjugate is        [ 2+3i  3-8i ]  =  Ā'
          [ 3     1-i  ]                            [ 3     1+i  ]                                  [ 3     1-i  ]

HERMITIAN MATRICES. A square matrix A = [aij] such that Ā' = A is called Hermitian. Thus, A is Hermitian provided aij = āji for all values of i and j. Clearly, the diagonal elements of an Hermitian matrix are real numbers.

Example 4. The matrix A  =  [ 1    1-i  2 ]
                            [ 1+i  3    i ]
                            [ 2   -i    0 ]
is Hermitian. Is kA Hermitian if k is any real number? any complex number?

A square matrix A = [aij] such that Ā' = -A is called skew-Hermitian. Thus, A is skew-Hermitian provided aij = -āji for all values of i and j. Clearly, the diagonal elements of a skew-Hermitian matrix are either zeroes or pure imaginaries.

Example 5. The matrix A  =  [  i    1+i ]
                            [ -1+i  2i  ]
is skew-Hermitian. Is kA skew-Hermitian if k is any real number? any complex number? any pure imaginary?

By making minor changes in Problem 13, we may prove:
   X. If A is an n-square matrix, then A + Ā' is Hermitian and A - Ā' is skew-Hermitian.
From Theorem X follows:
   XI. Every square matrix A with complex elements can be written as the sum of an Hermitian matrix B = ½(A + Ā') and a skew-Hermitian matrix C = ½(A - Ā').
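Theorem XI can be checked numerically; in this Python sketch the matrix and the helper name `ct` (conjugate transpose) are our own illustration, not from the text:

```python
def ct(M):
    """Conjugate transpose: transpose M and conjugate each element."""
    return [[M[i][j].conjugate() for i in range(len(M))]
            for j in range(len(M[0]))]

A = [[1 + 2j, 3 - 1j],
     [4j,     2 + 0j]]

# Hermitian part B = (1/2)(A + ct(A)); skew-Hermitian part C = (1/2)(A - ct(A)).
B = [[(A[i][j] + ct(A)[i][j]) / 2 for j in range(2)] for i in range(2)]
C = [[(A[i][j] - ct(A)[i][j]) / 2 for j in range(2)] for i in range(2)]

print(ct(B) == B)                              # True: B is Hermitian
print(ct(C) == [[-x for x in r] for r in C])   # True: C is skew-Hermitian
print([[B[i][j] + C[i][j] for j in range(2)] for i in range(2)] == A)  # True: B + C = A
```

For a real matrix the same split has no imaginary parts and reduces to the symmetric/skew-symmetric decomposition of Theorem VI.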
DIRECT SUM. Let A1, A2, ..., As be square matrices of respective orders m1, m2, ..., ms. The generalization

   diag(A1, A2, ..., As)  =  [ A1  0   ...  0  ]
                             [ 0   A2  ...  0  ]
                             [ ............... ]
                             [ 0   0   ...  As ]

of the diagonal matrix is called the direct sum of the Ai.

Example 6. Let A1 = [ 2 ], A2 = [ 1 2; 2 4 ], and A3 = [ 2 0; 3 1 ]. The direct sum of A1, A2, A3 is

   diag(A1, A2, A3)  =  [ 2  0  0  0  0 ]
                        [ 0  1  2  0  0 ]
                        [ 0  2  4  0  0 ]
                        [ 0  0  0  2  0 ]
                        [ 0  0  0  3  1 ]

Problem 9(b), Chapter 1, illustrates:
   XII. If A = diag(A1, A2, ..., As) and B = diag(B1, B2, ..., Bs), where Ai and Bi have the same order for i = 1, 2, ..., s, then AB = diag(A1B1, A2B2, ..., AsBs).

SOLVED PROBLEMS

1. Since

   [ a11  0   ...  0  ] [ b11 b12 ... b1n ]       [ a11 b11  a11 b12  ...  a11 b1n ]
   [ 0   a22  ...  0  ] [ b21 b22 ... b2n ]   =   [ a22 b21  a22 b22  ...  a22 b2n ]
   [ ................ ] [ ............... ]       [ .............................. ]
   [ 0    0   ... amm ] [ bm1 bm2 ... bmn ]       [ amm bm1  amm bm2  ...  amm bmn ]

the product AB of an m-square diagonal matrix A = diag(a11, a22, ..., amm) and any m×n matrix B is obtained by multiplying the first row of B by a11, the second row of B by a22, and so on.

2. Show that the matrices [ a b; b a ] and [ c d; d c ] commute for all values of a, b, c, d.

   This follows from
      [ a b ][ c d ]   =   [ ac+bd  ad+bc ]   =   [ c d ][ a b ]
      [ b a ][ d c ]       [ bc+ad  bd+ac ]       [ d c ][ b a ]

3. Show that A = [ 2 -2 -4; -1 3 4; 1 -2 -3 ] is idempotent.

      A²  =  [ 2 -2 -4 ][ 2 -2 -4 ]   =   [ 2 -2 -4 ]   =   A
             [-1  3  4 ][-1  3  4 ]       [-1  3  4 ]
             [ 1 -2 -3 ][ 1 -2 -3 ]       [ 1 -2 -3 ]

4. Show that if AB = A and BA = B, then A and B are idempotent.

   ABA = (AB)A = A·A = A² and ABA = A(BA) = AB = A; then A² = A and A is idempotent. Use BAB to show that B is idempotent.

5. Show that A = [ 1 1 3; 5 2 6; -2 -1 -3 ] is nilpotent of order 3.

      A²  =  [ 1  1  3 ][ 1  1  3 ]   =   [ 0  0  0 ]        and        A³  =  A²·A  =  0
             [ 5  2  6 ][ 5  2  6 ]       [ 3  3  9 ]
             [-2 -1 -3 ][-2 -1 -3 ]       [-1 -1 -3 ]

6. If A is nilpotent of index 2, show that A(I ± A)ⁿ = A for n any positive integer.

   Since A² = 0, A(I ± A) = A ± A² = A. Then A(I ± A)ⁿ = [A(I ± A)](I ± A)ⁿ⁻¹ = A(I ± A)ⁿ⁻¹ = ... = A.

7. Let A, B, C be square matrices such that AB = I and CA = I. Then B = I·B = (CA)B = C(AB) = C·I = C, so that B = C. Thus, B = C = A⁻¹ is the unique inverse of A. (What is B⁻¹?)

8. Prove: (AB)⁻¹ = B⁻¹A⁻¹.

   By definition (AB)⁻¹(AB) = (AB)(AB)⁻¹ = I. Now
      (B⁻¹A⁻¹)(AB)  =  B⁻¹(A⁻¹A)B  =  B⁻¹·I·B  =  I        and        (AB)(B⁻¹A⁻¹)  =  A(BB⁻¹)A⁻¹  =  A·I·A⁻¹  =  I
   By Problem 7, (AB)⁻¹ is unique; hence, (AB)⁻¹ = B⁻¹A⁻¹.

9. Prove: A matrix A is involutory if and only if (I - A)(I + A) = 0.

   Suppose (I - A)(I + A) = I - A² = 0; then A² = I and A is involutory.
   Suppose A is involutory; then A² = I and (I - A)(I + A) = I - A² = I - I = 0.

10. Prove: (A + B)' = A' + B'.
We need only check that the element in the ith row and jth column of 4B. and (A+ RY are respectively og, by, and aj: + bj UL. Prove: (ABY = BA’ Let A (4) be of onermen, B = [Bij] be of order mp; then C= AB cap] 18 of oner mp. The stoment standing in the throw and th otunn of 48 $8 6; = oe by and this isso the element sand. ‘ing in the jth row and ith column of (4B). ‘The elements of the jth row of are by, bnj nd the elements of the ith column of A’ate aig 2%, .-jq: Then the element in the jth row and ith column of B/A’is Eien = Zewdy = ay Thus, (ABy = BA 12. Prove: (ABCY = CBA Write ABC = (4B)C. ‘Then, by Problem 11, (ABCY liaBycl = ciaBy = CB. 16 SOME TYPES OF MATRICES [cHAP. 2 13. Show that if A=(aj;] is n-square, then B First Proof, byl= 44a is symmetric, ‘The element inthe /h row and jth column of 4 is a4; and the corresponding element of A's aj; hence big = 045+ 0:4. Tho element in the jth row and th column of 4s ay; andthe corresponding element of A's gyi hence. By = aj, +a;;. Thus, by; « bss and B is symmetric Second Proof, By Problem 10, (44 = A'¢ (4 = ded = A 4A and (4444) Is symm 14. Prove: If A and B are n-square symmetric matrices then AB is symmetric if and only if 4 and B commute, Suppose A and B commute so that AB 1A, ‘Then (ABY= BA"= BA = AB and AB is symmetric. Suppose AB is symmetric so that (48Y tices 4 and B commute. B. Now (ABY = BA’ = BA; hence, AB = BA and the ma- 15. Prove: If the m-square matrix 4 is symmetric (skew-symmetric) and if P is of ordet mxn then B = PAP ts symmetric (skew-symmetric), IfA is symmetric then (seo Problem 12) B= (PAPY = PA‘P'Y = PA'P » PAP and R is symmetric If Ie skew-symmetric then B’ = (PAPY = -PAP and B is skew-aymmettic. 16. Prove: If 4 and B are n-square matrices then A and B commute if and only if 441 and B~kI ‘commute for every scalar k. 
Suppose A and B commute; then AB = BA and
   (A - kI)(B - kI)  =  AB - k(A + B) + k²I  =  BA - k(A + B) + k²I  =  (B - kI)(A - kI)
Thus, A - kI and B - kI commute.

Suppose A - kI and B - kI commute; then
   (A - kI)(B - kI)  =  AB - k(A + B) + k²I  =  (B - kI)(A - kI)  =  BA - k(A + B) + k²I
so that AB = BA and A and B commute.

SUPPLEMENTARY PROBLEMS

17. Show that the product of two upper (lower) triangular matrices is upper (lower) triangular.

18. Derive a rule for forming the product BA of an m×n matrix B and A = diag(a11, a22, ..., ann). Hint. See Problem 1.

19. Show that the scalar matrix with diagonal element k can be written as kI and that kA = (kI)A, where the order of I is the row order of A.

20. If A is n-square, show that Aᵖ·Aᵠ = Aᵠ·Aᵖ, where p and q are positive integers.

21. (a) Show that A = [ 2 -3 -5; -1 4 5; 1 -3 -4 ] and B = [ -1 3 5; 1 -3 -5; -1 3 5 ] are idempotent.
    (b) Using A and B, show that the converse of Problem 4 does not hold.

22. If A is idempotent, show that B = I - A is idempotent and that AB = BA = 0.

23.-24. For each of the two given 3-square matrices, compute the successive powers and identify the period.

25. Show that A = [ 1 -2 -6; -3 2 9; 2 0 -3 ] is periodic, of period 2.

26. Show that A = [ 1 -3 -4; -1 3 4; 1 -3 -4 ] is nilpotent.

27. Show that each of the two given pairs of 3-square matrices commute.

28.-29. Show that the given pair of matrices anti-commute, i.e., AB = -BA, and hence that (A + B)² = A² + B².

30. Prove: The only matrices which commute with every n-square matrix are the n-square scalar matrices.

31. (a) Find all matrices which commute with diag(1, 2, 3).
    (b) Find all matrices which commute with diag(a11, a22, ..., ann), the aii being distinct.
    Ans. (a) diag(a, b, c), where a, b, c are arbitrary.
2 a 7] is the inverse of “5 As the inverse of wm JL J-[b Jremmimeerls am [5k te] ‘Show that the inverse of a diagonal matrix 4, all of whose diagonal elements ate different from zero, 1s & diagonal matrix whose diagonal elements are the inverses of those of A and in the same order. ‘Thus, the inverse of 48 Jy. sag =1 0-1] ate invotutory =4 4-3. sport [ ® Prove: (ABCY? = C™B"TAS, Hint, Write ABC = (ABYC. 6) (kay? = Eat, ce) dP? Prove: (a) (d"ty# FF 4°)? tor p a positive integer Show that every real symmetric matrix is Hermitian, Prove: (a) y= 4, (6) A¥B)= A+B, (0) GA)= EA, @) GB)= 4B. 1 ase 2eai Show: (a) A= |i-i 2 i | is Hermitian, a-3i G0 base geal (B= Jars: 211 | ts skew-Hermitian, -2-a8 -1 0 (©) iB ts Hermitian, (@) A ts Hermitian and B ts skew-Hermitian, If A is n-sauate, show that (a) 44” and A’A are symmetric, (b) A+’, Ad’, and 4A are Hermitian Prove: It is Hermitian and 4 is any conformable matrix then (AY HA is Hermitian. Prove: Every Hermitian matrix 4 can be written as B+iC where B is real and symmetric and C is real and ‘skew-symmetric Prove: (a) Every skew-Hermitian matrix A can be written as A= B+iC where B is real and skew-symmetric fand C is real and symmetric. (b) A°A is real if and only if B and C anti-commute, Prove: if A and B commute so also do * and B°*, 4’ and &’, and A’ and 8 Show that for m and n positive integers, A” and B” commute if A and B commute. CHAP. 2] SOME TYPES OF MATRICES 19 aa fet a aot of x" na” saan) 49. stow ca) | tf [A (Om | OPN er |e) (Operate oN fe 8 oo al jo o x 50. Prove: If 4 is symmetric or skew-symmetric then 4d’ 4 and # are symmetric. 51, Prove: If is symmetric s0 also is a4? +64?+...+4f where .5,....g ate scalars and p is a positive integer ‘52, Prove: Every square matrix 4 can be written as 4 = 8+C where B is Hemitian and C is skew-Hemnitian, 59, Prove: If Is real and skew-symmetiic or If 4 1s complex and skew Hermitian then #14 are Temmitian ‘54. 
Show that the theorem of Problem 52 can be stated: Every square matrix 4 can be written as 4=B+iC where B and C are Hermitian 35, Prove: If A and B are such that AB=A and BA=B then (a) BA’ 4’ and A'B’< A’, (b) A’and B’are Idempotent, (c) 4 = B= if A has an inverse. ‘36. If 4 ts involutory, show that 3(/+4) and 3(/-A) are idempotent and j(/+4)- 3(I-A) = 0. 51. If the a-square matrix 4 has an inverse 4*. show: cay VST 0) (ATS AM fey (Yt Hint. (2) From the transpose of 447 =, obtain (AY as the inverse of 4”. ‘58. Find all matrices which commute with (a) diag(1, 1.2.2), (8) dlag(1.1.2.2, Ans, (a) ding(A.b. ©), (6) diag(4.2) where 4 and 8 are 2-square matrices with arbitrary elements and 6, ¢ are scalats, 39.16 Ay dp. .u.dy ate Scalar matrices of respective orders m;,mo,....my. find all matrices which commute with diag(Ay, A... A3) Ans. diag(B,.B,...Bs) where By, By, Ry are of respective orders my,m. ....me with arbitrary elements, 60. If 4H = 0, where 4 and B are non-zero n-square matrices. then 4 and & are called divisors of zero. Show that the matrices 4 and B of Problem 21 are divisors of zero. GLI A= dag (Ay. Ao. nrg) and B = ding(B,,Bo.....B) where A, and By ate of the same order, (i= 1.2 8), show that (0) AB = GlaK(Ay+By. Ag $B, Ag+ Be) (8) AB = diag (ds Bs. AeB. AQ) (©) ttace AB = trace A,B; + trace AoBy + ... + thace Ay By 62, Prove: If 4 and # are n-squate skew-symmetric matrices then 4B 1s symmetric if and only if A and # commute. 63. Prove: If 4 is m-square and B rA+sl, where rand s fe sealers, then 4 and B commute, 64. Let A and B be n-square matrices and let 1,72. ss. 69 De Scalars such that nse 4 ros:. Prove that C: = nAts.8, Co=rg4+seB commute if and only if A and & commute, 465. Show that the n-square matrix A will not have an Inverse when (a) A has a row (column) of zero elements oF (8) A has two identical rows (colunns)or (c) 4 basa tow (column) whieh isthe sum of two other roves (columns). 66. 
If A and B are n-square matrices and A has an inverse, show that

(A + B)A^{-1}(A - B) = (A - B)A^{-1}(A + B)

Chapter 3

Determinant of a Square Matrix

PERMUTATIONS. Consider the 3! = 6 permutations of the integers 1, 2, 3 taken together

(3.1)    123   132   213   231   312   321

and eight of the 4! = 24 permutations of the integers 1, 2, 3, 4 taken together

(3.2)    1234   2134   3124   4123
         1324   2314   3214   4213

If in a given permutation a larger integer precedes a smaller one, we say that there is an inversion. If in a given permutation the number of inversions is even (odd), the permutation is called even (odd). For example, in (3.1) the permutation 123 is even since there is no inversion, the permutation 132 is odd since in it 3 precedes 2, and the permutation 312 is even since in it 3 precedes 1 and 3 precedes 2. In (3.2) the permutation 4213 is even since in it 4 precedes 2, 4 precedes 1, 4 precedes 3, and 2 precedes 1.

DETERMINANT OF A SQUARE MATRIX. Consider the n-square matrix

(3.3)    A = [a11 a12 ... a1n; a21 a22 ... a2n; ... ; an1 an2 ... ann]

and a product

(3.4)    a_{1 j1} a_{2 j2} a_{3 j3} ... a_{n jn}

of n of its elements, selected so that one and only one element comes from any row and one and only one element comes from any column. In (3.4), as a matter of convenience, the factors have been arranged so that the sequence of first subscripts is the natural order 1, 2, ..., n; the sequence j1, j2, ..., jn of second subscripts is then some one of the n! permutations of the integers 1, 2, ..., n. (Facility will be gained if the reader will parallel the work of this section beginning with a product arranged so that the sequence of second subscripts is in natural order.) For a given permutation j1, j2, ..., jn
of the second subscripts, define e_{j1 j2 ... jn} = +1 or -1 according as the permutation is even or odd, and form the signed product

(3.5)    e_{j1 j2 ... jn} a_{1 j1} a_{2 j2} ... a_{n jn}

By the determinant of A, denoted by |A|, is meant the sum of all the different signed products of the form (3.5), called terms of |A|, which can be formed from the elements of A; thus

(3.6)    |A| = SUM e_{j1 j2 ... jn} a_{1 j1} a_{2 j2} ... a_{n jn}

where the summation extends over the n! permutations j1, j2, ..., jn of the integers 1, 2, ..., n.

The determinant of a square matrix of order n is called a determinant of order n.

DETERMINANTS OF ORDER TWO AND THREE. From (3.6) we have for n = 2 and n = 3

(3.7)    |a11 a12; a21 a22| = e_{12} a11 a22 + e_{21} a12 a21 = a11 a22 - a12 a21

(3.8)    |a11 a12 a13; a21 a22 a23; a31 a32 a33|
         = a11 a22 a33 - a11 a23 a32 - a12 a21 a33 + a12 a23 a31 + a13 a21 a32 - a13 a22 a31
         = a11 (a22 a33 - a23 a32) - a12 (a21 a33 - a23 a31) + a13 (a21 a32 - a22 a31)
         = a11 |a22 a23; a32 a33| - a12 |a21 a23; a31 a33| + a13 |a21 a22; a31 a32|

If the row and the column of an element a_ij are deleted from A, the determinant of the remaining (n-1)-square matrix, denoted by |M_ij|, is called the minor of a_ij, and the signed minor alpha_ij = (-1)^{i+j} |M_ij| is called the cofactor of a_ij. Then

(3.9)    |A| = a_i1 alpha_i1 + a_i2 alpha_i2 + ... + a_in alpha_in        (i = 1, 2, ..., n)

(3.10)   |A| = a_1j alpha_1j + a_2j alpha_2j + ... + a_nj alpha_nj        (j = 1, 2, ..., n)

the expansion of |A| along its ith row and along its jth column respectively.

Let i1, i2, ..., im, arranged in order of magnitude, be m of the row indices 1, 2, ..., n, and let j1, j2, ..., jm, arranged in order of magnitude, be m of the column indices. Let the remaining row and column indices, arranged in order of magnitude, be respectively i_{m+1}, ..., i_n and j_{m+1}, ..., j_n. Such a separation of the row and column indices determines uniquely two matrices

(3.11)   A^{i1, i2, ..., im}_{j1, j2, ..., jm}, whose rows are the rows numbered i1, ..., im and whose columns are the columns numbered j1, ..., jm of A,

and

(3.12)   A^{i_{m+1}, ..., i_n}_{j_{m+1}, ..., j_n}, formed of the remaining rows and columns,

called sub-matrices of A. The determinant of each of these sub-matrices is called a minor of A, and the pair of minors

|A^{i1, i2, ..., im}_{j1, j2, ..., jm}|   and   |A^{i_{m+1}, ..., i_n}_{j_{m+1}, ..., j_n}|

are called complementary minors of A, each being the complement of the other.

Example 3. For a 5-square matrix A = [a_ij], |A^{1,2}_{2,3}| and |A^{3,4,5}_{1,4,5}| are a pair of complementary minors.

Let

(3.13)   p = i1 + i2 + ... + im + j1 + j2 + ... + jm

and

(3.14)   q = i_{m+1} + ... + i_n + j_{m+1} + ... + j_n

The signed minor (-1)^p |A^{i1, ..., im}_{j1, ..., jm}| is called the algebraic complement of |A^{i_{m+1}, ..., i_n}_{j_{m+1}, ..., j_n}|, and (-1)^q |A^{i_{m+1}, ..., i_n}_{j_{m+1}, ..., j_n}| is called the algebraic complement of |A^{i1, ..., im}_{j1, ..., jm}|.

Example 4. For the minors of Example 3, p = 1 + 2 + 2 + 3 = 8, so that (-1)^8 |A^{1,2}_{2,3}| = |A^{1,2}_{2,3}| is the algebraic complement of |A^{3,4,5}_{1,4,5}|; and q = (3 + 4 + 5) + (1 + 4 + 5) = 22, so that (-1)^22 |A^{3,4,5}_{1,4,5}| = |A^{3,4,5}_{1,4,5}| is the algebraic complement of |A^{1,2}_{2,3}|.
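Definition (3.6) can be implemented verbatim: sum the signed products over all n! permutations of the second subscripts. A minimal Python sketch (practical only for small n, since the sum has n! terms; the helper names are ours):

```python
from itertools import permutations

def inversions(p):
    """Number of pairs standing in inverted order in the permutation p."""
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
               if p[i] > p[j])

def det(A):
    """Equation (3.6): the sum of all signed products, one element
    from each row and each column."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        sign = -1 if inversions(p) % 2 else 1
        term = sign
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

# parities quoted in the text (written here as tuples)
assert inversions((1, 3, 2)) % 2 == 1          # 132 is odd
assert inversions((3, 1, 2)) % 2 == 0          # 312 is even
assert inversions((4, 2, 1, 3)) % 2 == 0       # 4213 is even

# agreement with the order-two and order-three expansions
assert det([[1, 2], [3, 4]]) == 1 * 4 - 2 * 3
assert det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]) == -3
```

The same routine reproduces (3.7) and (3.8) when n = 2 and n = 3, since those are just (3.6) written out.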
Note tut the sign elven to the two complementary minors is the same. Is this alnars true? When m=t1, (8.11) becomes Ai. = [as,] and | Ai, | = oj,5.. am element of A. The fordoronbn complementary minor | Az. 3... 3, algebraic complement is the cofactor as, 5, ss [s,s | im the notation ofthe section above, andthe ‘A minor of 4, whose diagonal elements are also diagonal elements of 4, is called a principal minor of 4. The complement of @ principal minor of 4 is also a principal minor of 4; the alge- braic complement of @ principal minor is its complement. CHAP. 3] DETERMINANT OF A SQUARE MATRIX 25 xamples. Porto ssaunre satis 4 = for] BY} ana [6:05 ) = Joss ce tas fare a pair of complementary principal minors of 4. What is the algebraic complement of each ? ‘The terms minor. complementary minor. algebraic complement, and principal minor as de- fined above for a square matrix 4 will also be used without change in connection with [4] See Problems 12-12, SOLVED PROBLEMS os Ms 6) ib al (ya7 — 56) - 0 + 2196-45) (d |23 5) = 3-3-5) 4 4 radia oririg bara os rad fia oO 1-t ad ° raat Cueieaenl padi O11 a6 by Theorem 1 3. Adding the second column to the thitd, removing the common factor from this third column, and using Theorem Vit La bte Le esbee red Abeta] = |i d asbee (arbse)f1 ba} = 0 Le ass Le asbee ted 4, Adding to the third row the first and second rows, then removing the common factor 2; subtracting the second row from the third; subtracting the third row from the first: subtracting the first row from the second; finally carrying the third row over the other rows 26 DETERMINANT OF A SQUARE MATRIX (CHAP, 3 tytby ast be ato] ath, mathe agtby Jab; apt be a+b bytes betes bgtea] = 2| biter Baten bat ea 2]brver betes bared ext) cota est asl Jay byse, apt bytey aatbot ca ty bh by Be 4 as oy = afbyser bates bated afer co eo a}bs be ba ma Jar to a ee oa ea 5. 
Without expanding, show that [4] = 2 a, 1) = ~(a,~ a2) (a2 ~ aq) (ao ~ a ay Yt ‘Subtract the second row from the first; then aad qa 0 aytae 1 ol lat a = 0; hence, | is a real number. 123 8. For the matrix 4 = |2 3 2], 22, aa = cote |8 fea ae = coef? f= ns cur? fe oy = care]? tan = | 3] = tg = |! 3] = 0 am = cael? 3 = tae evel fen = ene! 3 cua. 3 DETERMINANT OF A SQUARE MATRIX 27 Note that the signs given to the minors of the elements in forming the cofactors follow the pattem ‘where each sien occupies the same position in the display as the element, whose cofactor is required, oc- cupies in 4, Write the display of signs for a S-square matrix. 9. Prove: The value ofthe determinant || of an n-square matrix A is the sum of the products obtained by multiplying each element of a row (column) of A by Its cofactor. We shall prove this for a row. ‘The toms of (3.6) having a:3 as @ factor are @ 212 idan MO On NOW Sf i..0jq = Sedseedy Since In a permutation 1. j.dy.ody. the 1 49 in natural order. ‘Then (a) may be written as o 8: BE fan fy Meho ey Anjy where the summation extends over the = (#1)! permutations of the integer 2.8,.00m, and hence, as 20 Ag. on © ayy] 98 ee on | Me Bee Bea Consider the matrix B obteined from 4 by moving its sth column over the first s—1 columns. By Theorem Mi IB| = (-1)" “lA, Moreover, the element standing in the first row and first column of B is ays and the minor of a5 in B is precisely the minor |Mys| of azs in 4. By the argument leading to (c), the tems of 13 lMfs5| are all the terms of |2/ having a15 88 a factor and, thus, all the terms of (—1)°*!4| having ais as factor, ‘Then the terms of axs{(—1)* ‘lifysiI are al the terme of || having a.5 as a factor. ‘Thus. (3.15) Al ana taal + ayel 19"? |My ol tet agalen isl) +o + ox" anll Martha + aoe +o + aynlin since (-1)° "= 1)". 
We have (3.9) with ¢=1, We shall call (3.18) the expansion of [A along its first ‘The expansion of [4] along sts rth row (that {s, (9.9) for ¢=r) is obtained by repeating the above argu- ments. Let 8 be the matrtx obtained from 4 by moving its rth row over the first rt rows and then ite, sth cols lumn over the frst s—1 columns. Then B| ey cay la] aya ‘The element standing in the first row and the first column of B is a,, and the minor of ay, in B is precisely the minor of a, in A. Thus, the terms of arty” “lM elt are atl te toms of || having ory a8 factor. “Then Val = Boater lial! = 2 opaaey and we have (3.9) for 28 DETERMINANT OF A SQUARE MATRIX (car. 3 10, When ajj is the cofactor of aj; in the n-square matrix 4 = [a;;], show that tag thee sen hy fay 59 0 00,5-3 Be 9,543 0 Bon @ Kyatys + Rattas + By ego Bn Ojeda ‘This relation follows from (3.10) by replacing a,j with ke, ap; with kp, .dqj with ky. Th making these replacements none of the cofactors Mj oj... hj appearing is affected since none contains an element from the jth column of 4 By Theorem VI, the determinant in (3) {8 0 when ky= ary, (r= 1.2....9 and s 4). By Theorems VII fand VIL, the determinant in (2) 1s |4| when ky = a7j+ hays. (P= 1,2). and s4)), rite the equality similar to (2) obtained from (3.9) when the elements of the ith row of 4 are replaced Dy a igs oly 1 02 345 28 25 38 41. 
Evaluate: (a) |A| = ]3 04 @ |Al =] 12.3 we [Al = ]42 38 65 2-51 -25-4 36 47 83 148 2 3-4 (b) (Al = |-215 @ |Al = [5-6 3 324 42-3 (a) Expanding along the second column (see Theorem X) 1 02 304 2-81 lal = (@) Subtracting twice the second column from the third (see ‘Theorem IX) (@) Subtracting three times the second row from the first and adding twice the second row to the third 04 5 3-40) 4-20) $-9@)— Jo-2-4 4 lal = Jars rer v2a} + -[2-4| asa] [-2eaa seam -e2@] do 0 2 =~ (4436) = 92 (@) Subtracting the firet column ftom the second and then proceeding as in (¢) Dat 2 1-4 amy 1 ~4e4ay lal = |s-e a) = serra) = |sqaeiy -11 aeaeap sana] [a 2-3 aaa) 2-242) o 1 0 = faa a} = -|% hl Sea 8-2 it CHAP. 3} DETERMINANT OF A SQUARE MATRIX 29 (e) Pactoring 14 from the first column, then using ‘Theorem IX to reduce the elements in the remaining colunns 28 25 38; 2 25 28 2 25~12(2) 38-202) la 42 28 63! 14] 38 65 14] 38~12¢3) 65-2043) 56 47 83, 447 83 4 47-124) 83 -2044)| 2 1-2] 0 10) = uals 2s} - a4af-1 29 = -14-1-59 = 10 4-13 B11 12, Show that p and q, given by (3.13) and (3.14), are either both even or both odd. Since each row (column) index 1s found in either p or 4 but never in both, P 4g = (F2+Hom) + LEBER) = Bedaett) = meet) Now pg 1s even (either n or n+1 is even); hence, p and q are either both even ot both odd. ‘Thus, (1)? = (1) and onty one need be computed 123 678 13. For the matrix A = [a,J = }11 12 13 16 17 18 19 20 21 22 23 24 25 the algebraic complement of | 45'3| is prsezea) 93.0.5) 28 8 cayeertLastsl| = -|16 18 20] (see Problem 12) am sod te alsa component ot [42%8lie [42 = =],7 8 SUPPLEMENTARY PROBLEMS 14. Show that the permutation 12594 of the integers 1,2,3.4.5 is even, 24135 is odd, 41532 1s even, 53142 18 odd, and 52914 1s even 45. List the complete set of permutations of 1.2.3.4, taken together; show that half are even and half are odd, 16. Let the elements of the diagonal of a S-square matrix 4 be o.5,c,d,e. Show. using (3.6). 
that when A is diagonal, upper triangular, of lower triangular then |4| = abede Eg] show that 42 84 2 di 252 Ad's fA ut tat the determinant of 11. Given BA each product is 4 48. Rvaluste, as in Problem 1, 2-11 22-2 0 22 @ | 3 24) = 27 @& |r2 s)-4 «ey |-2 04] -1 03 234i -3 40] 30 DETERMINANT OF A SQUARE MATRIX [cHAP. 3 19, @) Evaluate [4] - ]23 9) 45n 33 () Denote by |8 | the determinant obtained from | 4 | by multiplying the elements of Its second column by 5. Evaluate |8 | to verify Theorem If (©) Denote by |C| the determinant obtained trom |.4| by interchanging its fist and third columns, Evaluate 1c} to verity Theotem ¥. Psa (é) snow tnat [Al » fs e284 ana vetetng Tver vam eral Mlies ner (ott tom [|e deterisane [0] «23. 3] ny sasecing tee tines te elements ofthe ft coe column from the corresponding elements of the third column. Evaluate || to verity Theorem IX. (1) In |4| subtract twice the first row from the second and four times the first row from the third. Evaluate the resulting determinant, (g) in | multiply the first column by three and from i subtract the thitd column. Evaluate to show that || has been tripled. Compare with (c). Do not confuse (e) and (g). 20. If A Is an n-square matrix and k is a sealar, use (3.6) to show that |e| = 4"|4 | a1. Prove: (a) it |4] =. then |4| =e =| 2 ()) IEA Is skew-Hemitian, then | 4 | is either real or is a pure imaginaty number, 22. (a) Count the number of interchanges of adjacent rows (columns) necessary to obtain B from 4 in Theorem V ‘and thus prove the theorem (®) Same, for Theorem VI 23, Prove Theorem VIE. Hint: Interchange the Identical rows and use Theorem V, 24, Prove: If any two rows (columns) of « square matrix 4 are proportional. then || = 0, 25. Use Theorems VII, HI, and VI to prove Theorem IX. 26, Rvaluate the determinants of Problem 18 as in Problem 11 b00 oe py tten eneck that [4] = [2 8] ¢ £], hus. tt 4 = dhagits on, where 00gh 21. 
Use (8.6) to evaluate Ay, do are 2-square matrices, |4| = |4y}+| Aol ays -2/3 ~2/9 28. Show that the cofactor of each element of | 2/3 1/2 ~2/2| is that element 2/3 =2/8 1/4, M4 a3 a] 29. Show that the cofactor of an element of any cow of | { 0 1 Js the corresponding element of the seme 443 ‘numbered column. 30, Prove: (a) If is symmetric then aijs= ay when t 4) (6) 164 is n-squate and skew-symmetsic then agg = (-1Y" "ayy when ¢/ HAP. 5] DETERMINANT OF A SQUARE MATRIX 31 BA. For the matrix 4 of Problem 8; (a show that [4] <1 ay tty (0 orm ve mates ¢ = [tsp ay Map| and show tat AC Sha Mee Ase (©) explain why the testi) is An as soon as (0 known be oP a 32, mutiny he cotamns of |4| = |8? co 42] respectively by 2.6.0; remove the common factor trom each of 22 ab be a8 ee tre rows toanow tat [1] + [ob en be te be ab eat bed) Ja? oat 161 ood] for oe ot ou evaaaing show : = 9840 = 40 aXb eK —dKe wit vag anon tag [22 8 1 208] | ED eae one sy ox aye Petal Iowa ona oatat Lott lott 24 show hat tensa determinant [4] = [E292 gig], ym yy iio} dl iat aa ad 38. Prove: |” fe | > Ham aak 09) ay —0nKag~ a4) (ae ay yea ~ al nay, mast by nat bo ay ay a] 36. Without expanding, show that |nb:¢c1 bates mbgt col = (ntiyin?—n4+1)] bx by by acta, negtas nes+as| ee eal 0 xa xb 31. Wishout expanding, show that the equation | x+a 0 x-c| =0 nas 0 as a root, xth xte 0 = Fema by 2 a a ws ase Chapter 4 Evaluation of Determinants PROCEDURES FOR EVALUATING determinants of orders two and three are found in Chapters, In Problem 11 of that chapter, two uses of Theorem IX were illustrated: (a) to obtain an element 1 of =1 if the given determinant contains no such element, (4) to replace an element of a given determinant with 0. For determinants of higher orders, the general procedure is to replace, by repeated use of Theorem IX, Chapter3, the given determinant |4| vy another |B| =|b;;| having the property that all elements, excent one, in some row (column) are zero. 
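This reduction strategy is mechanical enough to code. The sketch below uses the same Theorem IX toolkit in a slightly different order: clear each column below a pivot, track the sign changes caused by row interchanges, and multiply the resulting diagonal. Exact fractions avoid rounding; the 4-square test matrix is our own choice:

```python
from fractions import Fraction

def det(A):
    """|A| by row reduction: interchanges (Theorem V, sign change)
    and additions of multiples of rows (Theorem IX, no change)."""
    M = [[Fraction(x) for x in row] for row in A]
    n = len(M)
    sign = 1
    for c in range(n):
        p = next((i for i in range(c, n) if M[i][c] != 0), None)
        if p is None:
            return Fraction(0)          # a column of zeros: |A| = 0
        if p != c:
            M[c], M[p] = M[p], M[c]
            sign = -sign                # interchange flips the sign
        for i in range(c + 1, n):
            f = M[i][c] / M[c][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[c])]
    prod = Fraction(sign)
    for i in range(n):
        prod *= M[i][i]                 # product of diagonal elements
    return prod

assert det([[2, 3, -2, 4], [3, -2, 1, 2],
            [3, 2, 3, 4], [-2, 4, 0, 5]]) == -286
```

The routine returns 0 as soon as a column cannot be given a pivot, which is Theorem VII in disguise.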
If b_pq is this non-zero element and beta_pq is its cofactor, then

|A| = |B| = b_pq * beta_pq = (-1)^{p+q} * b_pq * (minor of b_pq)

Then the minor of b_pq is treated in similar fashion and the process is continued until a determinant of order two or three is obtained.

Example 1. Evaluate

|A| = |2 3 -2 4; 3 -2 1 2; 3 2 3 4; -2 4 0 5|

Using the element 1 standing in the second row, third column, add twice the second row to the first and subtract three times the second row from the third; then expand along the third column:

|A| = |8 -1 0 8; 3 -2 1 2; -6 8 0 -2; -2 4 0 5| = (-1)^{2+3} * 1 * |8 -1 8; -6 8 -2; -2 4 5|

Next, using the element -1 in the first row, add eight times the second column to each of the first and third columns; then expand along the first row:

|A| = -|0 -1 0; 58 8 62; 30 4 37| = -{(-1)(-1)^{1+2} |58 62; 30 37|} = -(58 * 37 - 62 * 30) = -286

See Problems 1-3.

For determinants having elements of the type in Example 2 below, the following variation may be used: divide the first row by one of its non-zero elements and proceed to obtain zero elements in a row or column.

Example 2. ... 0.492 0.157 0.240 ... = 0.921(-0.384)(0.108) = -0.037

THE LAPLACE EXPANSION. The expansion of a determinant |A| of order n along a row (column) is a special case of the Laplace expansion. Instead of selecting one row of |A|, let m rows numbered i1, i2, ..., im, when arranged in order of magnitude, be selected. From these m rows,

p = n(n-1)...(n-m+1) / m!

minors can be formed by making all possible selections of m columns from the n columns. Using these minors and their algebraic complements, we have the Laplace expansion

(4.1)    |A| = SUM (-1)^s |A^{i1, ..., im}_{j1, ..., jm}| * |A^{i_{m+1}, ..., i_n}_{j_{m+1}, ..., j_n}|

where s = i1 + i2 + ... + im + j1 + j2 + ... + jm and the summation extends over the p selections of the column indices taken m at a time.

Example 3. Evaluate

|A| = |2 3 -2 4; 3 -2 1 2; 3 2 3 4; -2 4 0 5|

using minors of the first two rows. From (4.1),

|A| = (-1)^{1+2+1+2} |2 3; 3 -2| * |3 4; 0 5| + (-1)^{1+2+1+3} |2 -2; 3 1| * |2 4; 4 5|
    + (-1)^{1+2+1+4} |2 4; 3 2| * |2 3; 4 0| + (-1)^{1+2+2+3} |3 -2; -2 1| * |3 4; -2 5|
    + (-1)^{1+2+2+4} |3 4; -2 2| * |3 3; -2 0| + (-1)^{1+2+3+4} |-2 4; 1 2| * |3 2; -2 4|

    = (-13)(15) - (8)(-6) + (-8)(-12) + (-1)(23) - (14)(6) + (-8)(16)

    = -195 + 48 + 96 - 23 - 84 - 128 = -286

See Problems 4-6.

DETERMINANT OF A PRODUCT. If A and B are n-square matrices, then

(4.2)    |AB| = |A| * |B|

See Problem 7.

EXPANSION ALONG THE FIRST ROW AND COLUMN.
If 4 = [ais] is n-square, then (4.3) |4 crnE where css is the cofactor of a,, and a;is the algebraic complement of the minor \& od) ot A DERIVATIVE OF A DETERMINANT. Let the n-square matrix A = [;;] have as elements differen- tiable functions of a variable x. ‘Then 34 EVALUATION OF DET! MINANTS (CHAP. 4 1 The derivative, [4], of |4| with respect to x 1s the sum of n determinants obtained by replacing in all possible ways the elements of one row (column) of || ay their derivatives with respect to x Exanple 4 me 1 oy [eet gs Monta Het ont aoa] sa ant 0 z o 10 See Problem 8 SOLVED PROBLEMS 2a-24 oles? ‘ 23-24 venga] [rae aac -2-aem ao} | 2-2 12] 2 ase ean examples) 3234 22 3 4 3234 jH-24005 24 ° 5 2408 ‘There ate, of course, many other ways of obtaining an element +1 of —1; for example, subtract the first column from the second, the fourth column from the second, the first row fom the second, ete 1 ott 2-20) 100 9 2 |? 3 242-2-20)] _ |2s4-6 2 4242 1-20) Daas 8 1 548 -3-2) 318 -a 4-6 3-208) 4-204) 6-2-0) 5 -4 0 a-s| = 4 4 -3 8-9 1-318) 8-304) -9-3-2) “5-4 0 tei deat Evaluate [4] - | 1-i 0 2-38 1-2) 2438 0 Multiply the second row by 14i and the third row by 1+2F; then oe Lea ote Lea o ite Lear carparzla| = Cteaold| = f2 0 snr 200 5-a 0 8-148 25-51 Sart 0 1-447 -10+ai 1 4+ ~10+24 Lei ea = 6+ Br [ois as al CHAP. 4] EVALUATION OF DETERMINANTS 35 4. Derive the Laplace expansion of [4| = lal of order», using minors of order m ~2 interchanges of edjacent roms the row munberd i ca be bowet into the second Tow, ... bY iy 72 11a4 a -4 5 6 nate 1-2 3-2-2 f41e8 2-113 2 Oletae @ fra 2a af = us 1-4-3 -2 -5 2427 3-2 2 2-2 10. 1f 4 is n-square, show that || is real and non-negative 11, Evaluate the determinant of Problem 9(a) using minors ftom the first two rows; also using minors from the frst two columns wow [near [ed Use |4B| = |4|:|B| to show that (aj+a3)(b5+85) = (ayb—aybe) + (agby + 045)". ay tat 4 =f 28 2264) yg. 
| bitis botiba nap ieg ata abytiby by iby Use [AB| = [4|-[B] to express (a;+as+aq+a,)(b:+ be+be+ba) as a sum of four squares. 13. Evaluate using minors from the first three rows, Ans, 120 38 EVALUATION OF DETERMINANTS (CHAP. 4 Ihaiad Dorit 14, Evaluate {1 1 0 0 0] using minors trom the first two columns. Ans. 2 core 122.1 1B. If Ay. Ap...dg are square matrices, use the Laplace expansion to prove [dine 4s. Aee ne ADd| = |All = Lgl by bo by ba 16. Expand using minors of te first tmo rows and show that ba ba by be . | = o by bof [a be bs dsl [be bal by bal [be bal o 4 17, Use the Laplace expansion to show thatthe n-sqare determinant |° 4, wnere ois k-square, is zero when b> in 18. [Al = osstir + eyotae + agatha + mata expand exch of the cofactors de ts, dz4 along ite fest cole ain to show lat ants - 2, 2 atsassaa} as 3 wher ait ts the algebraic complement of te minor 19. If 14; denotes the cofactor of aj; In the n-square matrix 4 = [a;;], show that the bordered determinant es 20. For each of the determinants ||, find the derivative. (a) “* (by fx? oxen (© ax+5| ee) O Bx—2 x°+1 tL o x Ans. (a) 28+ 9x7— Bx, (b) 1 = Gx + 21x74 128" ~ 15x", (0) 6x9 — 5x" — 28x74 974 208-2 21. Prove: If A and B are real n-square matrices with 4 non-singular and it H = 4448 is Hermitian, then aP = lah. |rectay| Chapter 5 Equivalence THE RANK OF A MATRIX. A non-zero matrix 4 1s said to have rank r if at least one of its r-square minors is different from zero while every (r41)-square minor, if any, Is zero. A zero matrix is said to have rank 0. Example 1. ‘The rank of A a isre2 since See Problem 1 ‘An n-square matrix A is called non-singular if its rank r=n, that is, if |4] 40. Otherwise, 4 is called singular, ‘The matrix of Example 1 is singular, From |A8| = | 4|-|B| follows I. The product of two or more non-singular n-square matrices is non-singular; the prod- uct of two or more n-square matrices is singular if at least one of the matrices is singular. ELEMENTARY TRANSFORMATIONS. 
The following operations, called elementary transformations, on a matrix do not change either its order or its rank:

(1) The interchange of the ith and jth rows, denoted by H_ij;
    the interchange of the ith and jth columns, denoted by K_ij.

(2) The multiplication of every element of the ith row by a non-zero scalar k, denoted by H_i(k);
    the multiplication of every element of the ith column by a non-zero scalar k, denoted by K_i(k).

(3) The addition to the elements of the ith row of k, a scalar, times the corresponding elements of the jth row, denoted by H_ij(k);
    the addition to the elements of the ith column of k, a scalar, times the corresponding elements of the jth column, denoted by K_ij(k).

The transformations H are called elementary row transformations; the transformations K are called elementary column transformations.

The elementary transformations, being precisely those performed on the rows (columns) of a determinant, need no elaboration. It is clear that an elementary transformation cannot alter the order of a matrix. In Problem 2, it is shown that an elementary transformation does not alter its rank.

THE INVERSE OF AN ELEMENTARY TRANSFORMATION. The inverse of an elementary transformation is an operation which undoes the effect of the elementary transformation; that is, after A has been subjected to one of the elementary transformations and then the resulting matrix has been subjected to the inverse of that elementary transformation, the final result is the matrix A.
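Since these transformations leave the rank unchanged, the rank can be computed by reducing with transformations of types (1) and (3) and counting the non-zero rows that remain. A Python sketch using exact fractions, tried on the 3 x 4 matrix of Example 3 below, whose rank is 2 (the helper name `rank` is ours):

```python
from fractions import Fraction

def rank(A):
    """Rank via elementary row transformations of types (1) and (3)."""
    M = [[Fraction(x) for x in row] for row in A]
    nrows, ncols = len(M), len(M[0])
    r = 0
    for c in range(ncols):
        pivot = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]                       # H_ij
        for i in range(nrows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]  # H_ij(k)
        r += 1
    return r

assert rank([[1, 2, -1, 4], [2, 4, 3, 5], [-1, -2, 6, -7]]) == 2
```

Because only row interchanges and additions of multiples of rows are used, every intermediate matrix is equivalent to the original, so the count of pivot rows is exactly the rank.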
EQUIVALENT MATRICES. Two matrices 4 and are called equivalent, 4~B, from the other by a sequence of elementary transformations, Equivalent matrices have the same order and the same rank. if one can be obtained Example 3. Applying in tun the elementary transformations Hi(—2), Ha(2). Hao(-1, ra2-a af fr 2-1 4] froa al fi2a gl 4 = |2 4 3 sl~1o 0 5-s)~Jo0 s-2!~joo s-sf = B -1-2 6-1] J-1-2 6-7] Joo s-a} Joo o 9, sine a sneer ot ae zero wtte [-E_$] #0) te an ois 2: nae the rank of A is 2. This procedure of obtaining from 4 an equivalent matrix B from which the rank is evident by inspection is to be compared with that of computing the various minors of A. See Problem 3. ROW EQUIVALENCE. If a matrix 4 is reduced to B by the use of elementary row transformations a- lone, B is said to be row equivalent to A and conversely. The matrices 4 and B of Example 3 are row equivalent, ‘Any non-zero matrix 4 of rank r is row equivalent to a canonical matrix C in whieh (@) one or more elements of each of the first r rows are non-zero while all other rows have only zero elements. (b) in the ith row, (/=1,2,...,9, the first non-zero element is 1; let the column in which this element stands be numbered j, (AS So Si (d) the only non-zero element in the column numbered jj, (i =1, 2,....7) 18 the element 1 of the ith row. CHAP. 5] EQUIVALENCE 41 To reduce 4 to C, suppose fr is the number of the first non-zero column of A. (is) It ays, £0, use He(1/ay;,) to reduce it to 1, when necessary. (le) If ayj,=0 but ayj, #0, use Hx» and proceed as in (i). (li) Use row transformations of type (3) with appropriate multiples of the first row to obtain zeroes elsewhere in the j,st column If non-zero elements of the resulting matrix B occur only in the first row, B= C. Other wise, suppose jo 1s the number of the first column in which this does not occur. If baj, #0, use H41/byj.) 88 in (Ly); If baj,=0 but b,j, #0, use Hog and proceed as in (i,). 
‘Then, as {in (I), clear the jznd column of ali other non-zero elements If non-zero elements of the resulting matrix occur only in the first two rows, we have C, Otherwise, the procedure is repeated until C is reached. Example 4. The sequence of tow transformations Hox(~2), Has(1); Ws(1/5); Hs to A of Example 3 yields (1), Hao(-5) applied 12-1 a) fia -1 a] ft 2-1 4] fa 2 0 ans 4+ |2 4 3 sl~]o 0 5 -3[~lo 0 1 -s/5/~lo 0 1 35 -1-2 6-1] Loo 5-3} loo 5 -2} Looo o = 6 hhaving the properties (a)-(d). See Problem 4 ‘THE NORMAL FORM OF A MATRIX. By means of elementary transformations any matrix A of rank r>0 can be reduced to one of the forms i, 6 12 on w [sh wa. [i] called its nommal form. A zero matrix is its own normal forn Since both row and column transformations may be used here, the element 1 of the first row obtained in the section above can be moved into the first column, ‘Then both the first row and first column can be cleared of other non-zero elements. Similarly, the element 1 of the second row can be brought into the second column, and so on. For example, the sequence Hox(—2), Hys(1), Kox(-2) Kex(1), Kax(-4), Koo, Ka(1/8), Hoo(~1), Kao(3) applied to A of Example 3 ylelds la ‘I the normal fora, See Problem 5. ELEMENTARY MATRICES. The matrix which results when an elementary row (column) transforma- tion is applied to the identity matrix /, is called an elementary row (column) matrix. Here, an elementary matrix will be denoted by the symbol introduced to denote the elementary transforma- tion which produces the matrix. 42 LET EQUIVALENCE, (CHAP. 5 Every elementary matrix is non-singular. (Why?) 
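An elementary matrix is easily built by applying the transformation to the identity matrix, and multiplying by it reproduces the transformation, as Example 6 below illustrates. A Python sketch using the same 3-square A as that example (helper names are ours):

```python
def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

H13 = identity(3)
H13[0], H13[2] = H13[2], H13[0]     # elementary row matrix for H_13
assert mat_mul(H13, A) == [[7, 8, 9], [4, 5, 6], [1, 2, 3]]

K13_2 = identity(3)
K13_2[2][0] = 2                     # K_13(2): add 2 x (column 3) to column 1
assert mat_mul(A, K13_2) == [[7, 2, 3], [16, 5, 6], [25, 8, 9]]
```

Multiplying on the left performs the row transformation; multiplying on the right performs the column transformation.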
‘The effect of applying an elementary transformation to an mxn matrix A can be produced by multiplying A by an elementary matrix To effect a given elementary row transformation on 4 of order mxn, apply the transformation to /, to form the corresponding elementary matrix Hf and multiply 4 on the left by I To effect a given elementary column transformation on A, apply the transformation to J, to forin the corresponding elementary matrix K and multiply 4 on the right by K 124 oo ifi2s] frag Example6. When 4 = }45 5]. Ho-4 - Jot alfa 5a] - ]4 5 6) intorcnanges the first and third 789 noolrso) Lrg 2s] fio Cras rows of 4; AK.9(2) = ]4 5 6]-]0 10] ~ ]16 5 6| adds to the tirst column of A two tines ne} boy bsoo the thitd column 4A AND B BE EQUIVALENT MATRICES. Let the elementary row end column matrices corre- sponding to the elementary row and column transformations which reduce 4 to B be designated as My, My: KyyKo.u-.Ky where Hl, is the first row transformation, H, is the second, ...; Ky is the fitst column transformation, K, is the second,.... Then (5.2) My oo Hy HysA+KyKy Ky = PAQ = B where 5.3) eee een eae tec der kee We have IML, Two matrices 4 and # are equivalent if and only if there exist non-singular matrices P and Q defined in (5.3) such that PAQ = B ro TOT Ly Tool foro) fore of foros [oreo fait ator 0 o1o0ffooro} joor of fooro0ljoozo req aes ‘ ao oO oo 1 ® Since any matrix is equivalent to its normal form, we have IV. If 4 is an n-squate non-singular matrix, there exist non-singular matrices P and Q as defined in (5.3) such that PAQ = [, See Problem 6 CHAP. 5} EQUIVALENCE, 43 INVERSE OF A PRODUCT OF ELEMENTARY MATRICES, Let Pom Myo Ha-Hy and Q = Kye Ka us Ke as in (5.3). Since each H and K has an inverse and since the Inverse of a product is the product in reverse order of the inverses of the factors (3-4) Pre WW Hand Qe Ke Let 4 be an n-square non-singular matrix and let P and Q defined above be such that PAQ Jy. Then 5) A = P°(PAQQ = PT te @? 
= P*.Q" We have proved V. Every non-singular matrix can be expressed as a product of elementary matrices. See Problem 7 From this follow Mi. If A is non-singular, the rank of AB (also of BA) is that of B ‘Vil. If P and Q are non-singular, the rank of PAQ is that of A. CANONICAL SETS UNDER EQUIVALENC! E. In Problem 8, we prove VII. Two mxn matrices A and B are equivalent if and only if they have the same rank. A set of mxn matrices is called a canonical set under equivalence if every mxn matrix is equivalent to one and only one matrix of the set. Such a canonical set is given by (5.1) a8 r ranges over the values 1,2,...,m of 1,2,....m whichever is the smaller See Problem 9, RANK OF A PRODUCT. Let 4 be anmxp mattix of rank r. By Theorem MI there exist non-singular matrices P and Q such that pig - y= |? 0 0, Then A= P"NQ™. Let B be apxn matrix and consider the rank of (5.6) 4B = P*NOTR By Theorem VI, the rank of AB is that of NQ“B. Now the rows of NQ”'B consist of the firstr rows of Q"B and m-r rows of zeroes. Honce, the rank of AB cannot exceed r, the rank of Similarly, the rank of AB cannot exceed that of B. We have proved IX. The rank of the product of two matrices cannot exceed the rank of either factor Suppose AB =0; then from (5.6). NO“R = 0. This requires that the first r rows of Q™B be zeroes while the remaining rows may be arbitrary. ‘Thus, the rank of @7B and, hence, the tank of B cannot exceed p-r. We have proved X. If the mxp matrix A is of rank r and if the pxn matrix B is such that AB =o, the rank of ff cannot exceed p-r 44 EQUIVALENCE, (CHAP. 5 SOLVED PROBLEMS 12 1. (a) The rank of asince | 1 2| 40 and em Inors of order t 2 (6) The rank of 25] ts 2sinco [Al ~0 ane |? [0 28 i 2g Ge) Tre rank of A={0 4 6{ is 1 since [4] =, each of the nine 2-square minors is 0, but not 068 every element is 0. 2. 
Show that the elementary transformations do not alter the rank of a matrix.

We shall consider only row transformations here and leave consideration of the column transformations as an exercise. Let the rank of the m×n matrix A be r, so that every (r+1)-square minor of A, if any, is zero. Let B be the matrix obtained from A by a row transformation. Denote by |R| any (r+1)-square minor of A and by |S| the (r+1)-square minor of B having the same position as |R|.

Let the row transformation be H_ij. Its effect on |R| is either (i) to leave it unchanged, (ii) to interchange two of its rows, or (iii) to interchange one of its rows with a row not of |R|. In the case (i), |S| = |R| = 0; in the case (ii), |S| = −|R| = 0; in the case (iii), |S| is, except possibly for sign, another (r+1)-square minor of A and, hence, is 0.

Let the row transformation be H_i(k). Its effect on |R| is either (i) to leave it unchanged or (ii) to multiply one of its rows by k. Then, respectively, |S| = |R| = 0 or |S| = k·|R| = 0.

Let the row transformation be H_ij(k). Its effect on |R| is either (i) to leave it unchanged, (ii) to increase one of its rows by k times another of its rows, or (iii) to increase one of its rows by k times a row not of |R|. In the cases (i) and (ii), |S| = |R| = 0; in the case (iii), |S| = |R| ± k·(another (r+1)-square minor of A) = 0 ± k·0 = 0.

Thus, an elementary row transformation cannot raise the rank of a matrix. On the other hand, it cannot lower the rank for, if it did, the inverse transformation would have to raise it. Hence, an elementary row transformation does not alter the rank of a matrix.

3. For each of the matrices A obtain an equivalent matrix B and from it, by inspection, determine the rank of A.

   (a)  A = [1 2 3; 2 1 3; 3 2 1] ~ [1 2 3; 0 -3 -3; 0 -4 -8] ~ [1 2 3; 0 1 1; 0 1 2] ~ [1 2 3; 0 1 1; 0 0 1] = B

The transformations used were H_21(−2), H_31(−3); H_2(−1/3), H_3(−1/4); H_32(−1). The rank is 3.
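The reduction in Problem 3(a) can be replayed mechanically. Below is a minimal sketch in plain Python (exact arithmetic via fractions; the helper names H_ij_k and H_i_k are our own, mirroring the book's H_ij(k) and H_i(k) notation):

```python
from fractions import Fraction

A = [[Fraction(x) for x in row] for row in
     [[1, 2, 3], [2, 1, 3], [3, 2, 1]]]

def H_ij_k(M, i, j, k):   # H_ij(k): add k times row j to row i
    M[i] = [a + k * b for a, b in zip(M[i], M[j])]

def H_i_k(M, i, k):       # H_i(k): multiply row i by k
    M[i] = [k * a for a in M[i]]

# the sequence used in Problem 3(a); rows are numbered from 0 here
H_ij_k(A, 1, 0, -2)            # H_21(-2)
H_ij_k(A, 2, 0, -3)            # H_31(-3)
H_i_k(A, 1, Fraction(-1, 3))   # H_2(-1/3)
H_i_k(A, 2, Fraction(-1, 4))   # H_3(-1/4)
H_ij_k(A, 2, 1, -1)            # H_32(-1)
# A is now [[1,2,3],[0,1,1],[0,0,1]]: three non-zero rows, so the rank is 3
```

Because only row transformations of the three elementary types are applied, the final echelon matrix has the same rank as the original, which is the point of Problem 2.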
1236) fi 2 sq fi 2 sq] fi 2 sq fi 2 ao oy a= [248 2ffo 0-32] Jo-« -2 3} [o-¢-e af fo -+-8 31s anesank ies 2213} |o- -ss] Jo o -32] Jo o-a2] Jo 0-32] ~ se75} lo-s-1 5) lo-<-1 5} fo o-sa} lo 0 00 1 oui -i] fio oF fio o A= fo i rerf~lo s re2]~fo trea] = 8, Te rank is 2 rasa aid Le asa} loo 0 Note, The equivalent matrices # obtained here are not unlaue. In particular, since in (a) and (b) only tow transformations were used, the reader may obtain others by using only column transfomations When the elements are rational numbers, there generally is no gain in mixing tow and column transformations. CHAP. 5] EQUIVALENCE 45 4. Obtain the canonical matrix C row equivalent to each of the given matrices A. 013-H forrs 2] foris 4] foi0e & (4249126 Of forse of foors-2| Joors—2| “Jo2s9 2) lo2s9 2! Joo1a—2| Joaoa o lori3 af lo 0013-2] [p00 a i2-23i] fi 2-23 i] fio-23 §] fioos #7 fioo0 gy a= f83-230/J0 1 00-1] Jor oo-r| for oo-1| for00-1| 24-364} lo 0 10 2!"Joo 10 2} Joo10 2] Joor0 2 r-1s6) fo-1 11 5} loo 11 4} fooon af lpoor 2, 5. Reduce each of the following to normal form, 120-1) [i 20-1) fi 006) fio od] fic o | fio o | finod (4 | 241 2}-fo-21 5|~fo-215]~Jo1-25]|~fo1-2 sl~lo1 0 of~for00 -2a2 5} jo 72 3) lo 723) loz 7a} loom) loou-z) loora = Us 0] ‘The elementary transformations are: Hox(~B), Hay(2¥; Kox(~2). Kas(1): Keg Maek—2); Kag(2). Kaol—8); Ko(1/11), Keo) b23 4) fos 5 J] fia 5 4 fissq food rood) fioog fioog] (4 = 23 5 4[~fo2 3 al~lo2 3 af~lo2sai~lo234|~Jo134|~Jo1 00|~/o100 seisi2} [ss1s12} [zaisi2} lors} Looe} forza} lorool foooo, i, 0 oo ‘The elementary transformations are: Hai KyCBY: Hax(—2): Kog(~9). Box(—5), Kay(—8)i Kol 2); KaolB). Kaol~4); Haol=1) 6. 
Reduce A to normal form N and compute the matrices P, and Q, such that P,AQ, 2 -2 ° Seven elements and each column transformation Is performed on a column of seven elements, 1000 1000 1-2-3 1-2-3 0100 0100 o 100 o 100 D010 oo1.9 eo 10 oo 10 coon soo on so 001 20001 123-2100 12 3-2 100 1000100 1000100 2-21 9010 0-6-5 7-210 0-6-57-210 0-6-8 7-2 10 304 1001 0-6-5 T-301 0-G-~S7-301 0 0 00-1-11 1 13-92 1 1/3 ~4/3 -1/3 0-1/8 00 0-1/6 -5/6 1/8 © 010 2 0 1 0 es +0 001 -o 0 o 1 or 1 000100 1 0 © o100 WH D 1-57-2110 0 1 0 0210 0 0 Oo-1-r1 0 0 © O-t-td 46 EQUIVALENCE, (omar. 5 Vat y 9 0 0 1 134 ‘The elementary transformations Hoy(-1), lax(—1%; Kox(—3), Kaa(~8) reduce A to ly, that is, [see (5.2)] T= HyHty AK, Ky From (8.9.4 = Hua = 8. Prove: Two mxn matrices A and B are equivalent if and only if they have the same rank 1f 4 and B have the same rank, both are equivalent to the same matrix (5.1) and are equivalent to each other. Conversely. if A and B are equivalent, there exist non-singular matsices P and Q such that B = PAQ. By Theorem VIl, 4 and 8 have the same rank. 9. A canonical set for non-zero matrices of order 3 1s, » fe BE fg- iss i008 L008 L008 tots foro i ooo i coe pore p00 0 po 0e 10. If from a square matrix 4 of order n and rank ry, a submatrix B consisting of s rows (columns) of A 4s selected, the rank r, of B is equal to or greater than 1 +s ~ n ‘The normal form of A has n= tows whose elements are zeroes and the normal form of & has s—rg rows whose elements are zeroes. Clearly o-y 2 6 from which follows 1, > 1,4 5 ~n as required. CHAP. 5] EQUIVALENCE 47 SUPPLEMENTARY PROBLEMS vg fan) [sag fisess eneas Ans. (a) 2. (6). (6) 4. (22 12. Show by considering minors that 4.4.1, and T have the same rank. 18. Show that the canonical matrix C, row equivalent to a given matrix 4. 
is uniquely determined by A 14, Pind the canonteal matrix row equivalent to each of the following: ~ 4312} [oor t1/9] 3-3 1 2) Loo12| wfaaa shfoor a] ow [it 2 e3ffere on pee eee a-2 1 oalfoor 20 1 3-1-3] [ooo o reed oY 15. Write the normal form of each of the matrices of Problem 14. ans.) ool once Usd [ee fd 16. Let 4 2 3 4 q 1 2 (6) Prom Ie form Myo. He(9) Hya(—A) and cheek that etch J offocts the corresponding row transformation (6) From I, form Koa. Ka(~1), Kao(3) and show that each 4K effects the corresponding column transformation. (6) Wrtethe inverses 13, 152), Hra(~A) of the elementary matrices of (a). Check that foreach If. Hl" (2) wite ue verses Ky Ke'CI), Kya) of the elementary matricesof(b). Cheek that foreach K. KK“! = og Paid (0) -H6'@) Me = [1/3 00) 0 04 03 0 (©) Compute B = thy -Hot3+Haa(—s) = ]1 0-4) and C = po () Show that BC = CR = 11. (a) Show that Ky jth) = High. and Ks) = Hs (b) Show that if R Is a product of elementary column mattices. Ris the product in reverse order of the same elementary row matrices, 18, Prove: (a) AR and AA are non-singular if 4 and B ate non-singular n-square matzices, (®) AB and BA ate singular if at least one of the n-square matrices 4 and B is singular 19 IfP and @ are non-singular, show that 4, PA, 4Q, and PAQ have the same rank. Hint, express P and Q as products of elementary matrices a 20. Reduce B= to normal form Nand compute the matrices P, and Qp such that P,BQy = N 4 5 48 EQUIVALENCE [CHAP. 5 a 2 a, 2, 25, n 30, Ee (a) Show that the number of matrices in a canonical sot of n-square matrices under equivalence is » +1 (2) Show that the number of matrices in a canonical set of mxn matrices under equivalence is the stmaller of met and n+ 1249 given 4=|13 2 6] of rank 2. Find a d-squate matrix B40 such that 4B = 0 2 5 6 10 Hint. Follow the proof of Theorem X and take d008 3 oo00 ae) abed sfak where a,b,....h are arbitrary ‘The matrix A of Problem 6 and the matrix B of Problem 20 are equivalent. 
Find P and @ such that B = PAQ. If the men mateices 4 and Hare of rank 4 and rp respectively, show that the rank of AB cannot exceed ate Let A be an arbitrary n-square mattix and f be an n-squate elementary matrix, By considering each of the sx different Opes of matnx B, show that [4B] = [4|-[] Let A and 8 be n-square matrices, (a) If at least one is singular show that |48] = |4|+|B|; (B) If both are non-singular, use (5.5) and Problem 25 to show that |42) = |4|-[B\ Show that equivalence of matrices is an equivalence relation . Prove: The tow equivalent canonical form of a non-singular matrix A is J and conversely. Prove: Not every matrix 4 can be reduced to normal fom by row transformations alone. Hint. Exhibit a matrix which cannot be so reduced. Show how to effect on any matrix A the transfommation H,; by using a succession of row transformations of types (2) and (3) Prove: If 4 is an mxn matrix, (man), of rank m then A4” is a non-singular symmetric matrix. State the theorem when the rank of 4 is < m. Chapter 6 The Adjoint of a Square Matrix THE ADJOINT. Let 4 = [a;,] be an n-squate matrix and a4; be the cofactor of a,,; then by definition, sy den One (6.1) adjoint A = adj = [2 S92 Mao an ton an Note carefully that the cofactors of the elements of the ith tow (column) of 4 are the elements of the ith column (row) of adj 4 124 Example 1. For the matrix 4=]2 3 2 33.4 Mys= 6, Oke =—2. Cea 5. dog= 3, Sox 61-8 and ee “3 3-4 Using Theorems X and XI of Chapter 3, we find 8, eg =A, Ogg = = See Problems 1-2, (6.2) AfadjAy = [92 M2 oon) | ae log one wdingdAl [Ald = Ldledy = (adh Ay Example 2, Por the matrix 4 of Example 1, [4 «—1 and 12afe 1-9 a) adja) = [2s a[{-2-s af - | o— ssajis 3-1 ° By taking determinants in (6.2), we have (6.3) [4|-|aaj a} la’ = laajal-tal ‘There follow I. 
If A fs n-square and non-singular, then (6.4) ladjal = 50 THE ADJOINT OF A SQUARE MATRIX (CHAP, 6 IL, If 4 is n-square and singular, then Aadjd) = (adjdyd = 0 If Ads of rank adj H* D wnere Ais 2, Use the method of Problem 19 to compute the adjoint of 1116 2302 (0) A of Problem’, Chanter o © acmaoters |? 332 4674 tai Ria 2-2 fl Ansa) |-1 10}. oo oe 104 aia ai. Let A=[ais] and B=[k—a;;] bo s-square matrices. If S(C) = sum of elements of matrix C, show that Stadj A) = S(adjBy and [Bl & + Scaajay - |] 22, Prove: If 4 48 n-squaro then | adjcaa) a) | = [al 23, Let Ay = Logg] Gf = 4.2...) be the lower trangular matrix whose triangle 1s the Pascal triangle; for example, 1006 1100 mw Vado 1334 Detine bij = ("logy and verity for n = 2.2.4 that “ addy = Leygl = 24. Let B be obtained trom A by deleting its ith and pth rows and th and gth columns. Show that my i] cyte eseatay tay Sq eq where gy 1s the cofactor of ayy in || Chapter 7 The Inverse of a Matrix IF 4 AND B are n-square matrices such that 48 ~ RA ~1, B is called the inverse of A, (B =A 4 is called the inverse of B, (A= 8"). ) and In Problem 1, we prove 1. An n-square matrix 4 has an inverse if and only if it is non-singular. ‘The inverse of a non-singular n-square matrix is unique, (See Problem 7, Chapter 2.) ML If A is non-singular, then 4B = AC implies B THE INVERSE of a non-singular diagonal matrix diag (fy, bo, ..4,) is the diagonal matrix ing (1/ by, V/s oo, W/E) Ao, 4 ds ate non-singular matrices, then the inverse of the direct sum diag(A,, Ay, Ay) is ding (4). 43", -.., AS) Procedutes for computing the inverse of e general non-singitlar matrix are given below. INVERSE FROM THE ADJOINT. From (6.2) 4 adj4 ~ {4-1 If 4 is non-singular 31/L4 aa.) tr: / LAT 0/14) doo/ Al tno! La aay at. A 2 Example 1, From Problem2. Chapter6, the agjoint of 4 «|i 9 aff ont rea} Lana Wag since [alana at atid fT E lal EP 1-4 See Problem 2. 55 56 ‘THE INVERSE OF A MATRIX [cHAP. 7 INVERSE FROM ELEMENTARY MATRICES. 
Let the non-singular n-square matrix A be reduced to I by elementary transformations so that

     H_s···H_2·H_1·A·K_1·K_2···K_t  =  PAQ  =  I

Then A = P^-1·Q^-1 by (5.5) and, since (B^-1)^-1 = B,

(7.2)     A^-1 = (P^-1·Q^-1)^-1 = Q·P = K_1·K_2···K_t·H_s···H_2·H_1

Example 2.  From Problem 7, Chapter 5,

     H_2·H_1·A·K_1·K_2 = [1 0 0; 0 1 0; -1 0 1]·[1 0 0; -1 1 0; 0 0 1]·A·[1 -3 0; 0 1 0; 0 0 1]·[1 0 -3; 0 1 0; 0 0 1] = I

Then

     A^-1 = K_1·K_2·H_2·H_1
          = [1 -3 0; 0 1 0; 0 0 1]·[1 0 -3; 0 1 0; 0 0 1]·[1 0 0; 0 1 0; -1 0 1]·[1 0 0; -1 1 0; 0 0 1]
          = [7 -3 -3; -1 1 0; -1 0 1]

In Chapter 5 it was shown that a non-singular matrix can be reduced to normal form by row transformations alone. Then, from (7.2) with Q = I, we have

(7.3)     A^-1 = P = H_s···H_2·H_1

That is,

     III. If A is reduced to I by a sequence of row transformations alone, then A^-1 is equal to the product in reverse order of the corresponding elementary matrices.

Example 3.  Find the inverse of A = [1 3 3; 1 4 3; 1 3 4] of Example 2 using only row transformations to reduce A to I.

Write the matrix [A I_3] and perform on its rows of six elements the sequence of row transformations which carries A into I_3. We have

     [A I_3] = [1 3 3 1 0 0; 1 4 3 0 1 0; 1 3 4 0 0 1]
             ~ [1 3 3 1 0 0; 0 1 0 -1 1 0; 0 0 1 -1 0 1]
             ~ [1 0 0 7 -3 -3; 0 1 0 -1 1 0; 0 0 1 -1 0 1]  =  [I_3 A^-1]

Thus, as A is reduced to I_3, I_3 is carried into A^-1 by (7.3).
                                                                See Problem 3.

INVERSE BY PARTITIONING. Let the matrix A = [a_ij] of order n and its inverse B = [b_ij] be partitioned into submatrices of indicated orders:

     A = [A_11 A_12; A_21 A_22]   and   B = [B_11 B_12; B_21 B_22]

where A_11 and B_11 are p×p; A_12 and B_12 are p×q; A_21 and B_21 are q×p; A_22 and B_22 are q×q, with p + q = n.
Take taf} 4 [1a} ee LBD EL oe cfs ]-on 33 43]. using partitioning, 34 El: 4or=(13], and doe = [4]. Now Eten Anltte = U—Craif§] = amt € = 1 Then fy = Ae dine Und « [A 4] fJort os [Ee 134 and “10 “01 See Problems 5-6. 58 ‘THE INVERSE OF A MATRIX [onar. 7 ‘THE INVERSE OF A SYMMETRIC MATRIX, When A is symmetric, aj;~ «j; and only $uins1) cotac- tors need be computed instead of the usual w? in obtaining 4° ftom adj A If there is to be any gain in computing 4 as the product of elementary matrices, the ele mentary transformations must be performed so that the property of being symmetric is preserved ‘This requites that the transformations Occur in pairs, a row transfornation followed Immediately by the same column transformation. For example, Db ab ba boe The Ky = abe 206 5 ° Haye By) Koy-% = |e However, when the element a in the diagonal is replaced by 1, the pair of transformations are H/va) and Ky(1/Va). In general, Va is either irrationalor imaginary; hence, this procedure is not recommended. ‘The maximum gain occurs when the method of partitioning is used since then (7.5) reduces to Buy is + (AA EArt AY, Bos 8) ASAE where & = dyp = doi AG Ay) See Problem 7. When 4 is not symmetric, the above procedure may be used to find the inverse of 44, which is symmetric, and then the inverse of 4 is found by an AY = (AA SOLVED PROBLEMS 1. Prove: An n-square matrix A has an inverse if and only if it {s non-singular Suppose 4 is non-singular. By Theorem IV, Chapter 5. there exist non-singular matrices P and Q such that PAQ=1, Then 4 =P*.Q* and 47=Q-P extsts * would be of rank [1 E = tey—towdinded = (DE afi] = Ld Now vis 0) Fy wis -v8 peers = Le a Pon [8 V9 tu FE} on = Ba : on aba ssa Conse mati atone 0 ht ai : Znc|| POA) cao | a) eae fa = td) ann a neers gee =a] 8 nt 5}. Att as}. &= [18/8], €° = (5/18). “5 5-5 w) 62 THE INVERSE OF A MATRIX rst i ‘Then | 3 = pI- Ltt = (5/1 is 18]. Bip 2 qglt -2 10). 8, [518] SUPPLEMENTARY PROBLEMS 8. Pind he eon and iver of ech of he aon rial feed. 
wfeaap wfeees ol ® A 21d pad ps aoa oa) foe J fray fags ns. taverses (a) f| 5 2 -1|. hf 15-4 asl. co foo aah om fo V8 9, 10. otsin the aves ofthe matrices of Probe ung the method of Problem ee 3427 ae 12 3-4 23932 tas Same, for the matrices (a) a () e) (a) 11. sane.torttemarces |! 2 S78} a [2 2221 wf ae 20-6 4 “ue 98 60 4 Ans ag (98) ag 4-12-13 0-2 a 20-20 -18 25 -J 17-26 30-11 -18 7 ~38| @ ATs 6 (@ J]-30 12 21-9 6 ae cis 2 “e5 6 12. Use the rosult of Example 4 to obtain the inverse of the matrix of Problem 11(d) by partitioning, 13, Obtain by partitioning the inverses of the matrices of Problems 8(«), 8(b), 11(a) —14¢e) (CHAP. 7 ° ° 1/6 val 20 3-1 aa bat 22 cnar. 7 THE INVERSE OF A MATRIX 6 12-7 foi 22 22411 1123 tain hy partitioning the invorses of the symettic matrices (a » 34. Obtain hy patitioning th the symmetric matrices (a) | TT) ih lo og 21-1 2 2333 p-a-r-t “3 3-3 2 faa ana 42 Masa) |e |) caer ato-t-t 2-2 3-2 15, Proves If A ts non-singular, then AB = AC implies B= C. 16. Show that if tie non-singular matrices A and B commute, so also do (oy #7 and B. (6) A and BY, (e) A? and BY, Hine, (a) A*CAB)A* = A QA A® 17, Show that if the non-singular matrix 4 is symmetric, so also is 4 Hint, A*A = = AAS = tA 18. Show that if the non-singular symmetric matrices A and B commute, then (a) 4B. (by AB, and (ey A*B* fare symmetric. Hint: (a) (4°BY = (BAY = (1°YB" = 4° 18, An mxn matrix d is said to have a right inverse B if AB = and a left inverse Cif CA a. Show that 4 has 4 right inverse if and only Lf is of rank m and has a left inverse sf and only Af the rank of 4 isn ane a 0 SJ-1 o 1 135 cor Tle 5 t 3 -3 Oo Le 4 24, Prove: It [Asa] #0, then Aes) *|4ae ~ Aor ATE Ane] BB.IF |L+ A] 40, then U+4y* and UA) commute. 26. Provex{i) of Problem 23, Chapter 6 : Chapter 8 Fields NUMBER FIELDS. 
A collection or set S of real or complex numbers, consisting of more than the element 0, is called a number field provided the operations of addition, subtraction, multiplication, and division (except by 0) on any two of the numbers yield a number of S.

Examples of number fields are:
   (a) the set of all rational numbers,
   (b) the set of all real numbers,
   (c) the set of all numbers of the form a + b√3, where a and b are rational numbers,
   (d) the set of all complex numbers a + bi, where a and b are real numbers.

The set of all integers and the set of all numbers of the form b√3, where b is a rational number, are not number fields.

GENERAL FIELDS. A collection or set S of two or more elements, together with two operations called addition (+) and multiplication (·), is called a field F provided that, a, b, c, ... being elements of F, i.e. scalars:

   A_1: a + b is a unique element of F
   A_2: a + b = b + a
   A_3: a + (b + c) = (a + b) + c
   A_4: There exists an element 0 in F such that a + 0 = 0 + a = a for every element a in F.
   A_5: For each element a in F there exists a unique element −a in F such that a + (−a) = 0.

   M_1: a·b is a unique element of F
   M_2: a·b = b·a
   M_3: (a·b)·c = a·(b·c)
   M_4: There exists an element 1 ≠ 0 in F such that 1·a = a·1 = a for every element a in F.
   M_5: For each element a ≠ 0 in F there exists a unique element a^-1 in F such that a·a^-1 = a^-1·a = 1.

   D_1: a·(b + c) = a·b + a·c
   D_2: (a + b)·c = a·c + b·c

In addition to the number fields listed above, other examples of fields are:
   (e) the set of all quotients P(x)/Q(x) of polynomials in x with real coefficients,
   (f) the set of all 2×2 matrices of the form [a b; -b a], where a and b are real numbers,
   (g) the set {0, 1} in which 1 + 1 = 0. This field, called of characteristic 2, will be excluded hereafter. In this field, for example, the customary proof that a determinant having two rows identical is 0 is not valid. By interchanging the two identical rows, we are led to D = −D or 2D = 0; but D is not necessarily 0.

SUBFIELDS.
If S and T are two sets and if every member of S is also a member of T, then S is called a subset of T.

If S and T are fields and if S is a subset of T, then S is called a subfield of T. For example, the field of all real numbers is a subfield of the field of all complex numbers; the field of all rational numbers is a subfield of the field of all real numbers and of the field of all complex numbers.

MATRICES OVER A FIELD. When all of the elements of a matrix A are in a field F, we say that "A is over F". For example, a matrix whose elements are all rational numbers is over the rational field, while a matrix having i among its elements is over the complex field. The first matrix is also over the real field and over the complex field; the second is not over the real field.

Let A, B, C, ... be matrices over the same field F and let F be the smallest field which contains all the elements; that is, if all the elements are rational numbers, the field F is the rational field and not the real or complex field. An examination of the various operations defined on these matrices, individually or collectively, in the previous chapters shows that no elements other than those in F are ever required. For example:

   The sum, difference, and product are matrices over F.
   If A is non-singular, its inverse is over F.
   If A ~ I, then there exist matrices P and Q over F such that PAQ = I, and I is over F.
   If A is over the rational field and is of rank r, its rank is unchanged when A is considered over the real or the complex field.

Hereafter when A is said to be over F it will be assumed that F is the smallest field containing all of its elements. In later chapters it will at times be necessary to restrict the field, say, to the real field. At other times, the field of the elements will be extended, say, from the rational field to the real field. Otherwise, the statement "A over F" implies no restriction on the field, except for the excluded field of characteristic two.

SOLVED PROBLEM

1.
Verity that the set of all complex numbers constitutes a field, To do this we simply check the properties ay ~ the unit element @f,) is 1 M,—My, and D, Dp. ‘The zero element (Aa) is 0 and If a+bi and c+di are two elements. the negative (4g) of a+bi is —a~ bi, the Droduet (M;) is (a+ bip(e4diy = (ac~bd) + (ad be)é; the inverse (Mg) of o+bi #0 is a ee bs Te ea PLR atv Verification of the remaining properties is left as an exercise for the reader 66 FIELDS [onar. 8 SUPPLEMENTARY PROBLEM: 2, Verify (a) the set of all real numbers of the form @+5V5 where a and # are ravsonal numbers and (6) the set of al quotients 54> of polynomials In x wth real coetietents constitute fies 3. Verlty (a) the set of all rational numbers, (®) the set of all numbers a+4\/3, where a and b are rational numbers, and (©) the set of all numbers a+ bi, where a end b ate rational numbers are subfields of the complex field 4. Verity thatthe set of all 22 mattices of the form [: “| winere and bare stional mmbors, forested Stow hat hi a ut othe Bl oa 22 macs ot om “wn nae extn ‘5. Why does not the set of all 2x2 mattices with real elements form a field? 6. A set R of elements a.b.c.... satisfying the conditions (4s. A>. do, 44, Ag: My, Mo; Ds, D;) of Page 64 és called f ring, To emphasize the fact that multiplication is not commutative, K may be called & non-commutative ring, When a ring R satistlos M>, it 1s called commutative, When a ting & satisties Ma, itis spoken of as ‘ring with unit element. Verity (a) the set of even integers 0,22,4,... 1s an example of a commutative ting without unit element, (b) the set of ail integers 0,1,2, 43, .. 1s an example of a commutative ring with unit element, (©) the set of all n-square matrices over F is an example of a non-conmutative ting with unit element, (2) the set ot a 2x2 matrces ot the tom ff ~'], wher « an» are et nusbers, 8a oxanpte of connate rng with wt sleet 7. 
Can the set (a) of Problem 6 be turned into a commutative ring with unit element hy simply adjoining the ele- ments #1 to the set? 8 By Proem 4, the sit (2) of Problem 6s 8 Held. s every tld ring? fs every connote ving wth ani flenent a Nei? \ 0 0 9. Describe the ring of all 2x2 matrices |? §], whe @ and 6 are tn F. If 4 18 any matex of the ting and Le i ‘}: show that LA =A. Call L eLeft unit element. 1s there aright unit element? 10, Let be the ld ofall complex numbers pol aid K be the dad of a 232 matoos [* ~L] wher no co mete nee of) [odds 9s (®) Show that the image of the sum (product) of two elements of K is the sum (praduet) vf their images in C. (©) Show that the image of the identity eloment of K is the identity element of C. (2) What is the image of the conjugate of a+ i? ‘This fs an example of an Lsomomphism between tio sets. Chapter 9 Linear Dependence of Vectors and Forms ‘THE ORDERED PAIR of real numbers (x;, x) 1s used to denote a point X in a plane. The same pair of numbers, written as (x, t,], Will be used here to denote the two-dimensional vector or 2-vector OX (see Fig. 9-1) Xolmaat sanetae +222) Xone) a za wate Fig. 91 Fig. 9-2 It Xy=Cmsio] and Xp= (toe, %20] are distinct 2-vectors, the parallelogram law for their sum (see Pig. 9-2) yields Xo +My = Cette ot tel ‘Treating X, and X, as 1x2 matrices, we see that this is merely the rule for adding matrices giv- en in Chapter 1, Moreover, if & is any scalar, AX, Urs, kxzo] is the familiar multiplication of a vector by a real number of physics. VECTORS, By an n-dimensional vector of n-veetor X over F is meant an ordered sot of n elements x; (9.1) x [eget onal ‘The elements x, %, .., x, afe called respectively the first, second, ..., nth components of X. Later we shall find it more convenient to write the components of a vector in a column, as (ony x Taste ml = Now (9.1) and (9.1) denote the same vector; however, we shall speak of (9.1) as a row vector and (9.1) as a column vector. 
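The parallelogram law and scalar multiple described above are exactly the matrix rules of Chapter 1 applied to 1×2 matrices. A minimal sketch (plain Python; the sample components are our own):

```python
def vadd(X, Y):
    # component-wise sum: the parallelogram law for vectors
    return [x + y for x, y in zip(X, Y)]

def smul(k, X):
    # scalar multiple k*X
    return [k * x for k_unused, x in zip([k] * len(X), X)] if False else [k * x for x in X]

X1, X2 = [3, 1], [1, 2]
print(vadd(X1, X2))   # [4, 3], i.e. [x11 + x21, x12 + x22]
print(smul(2, X1))    # [6, 2]
```

The same two helpers work unchanged for n-vectors of any length, since the definitions are component-wise.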
We may, then, consider the p×q matrix A as defining p row vectors (the elements of a row being the components of a q-vector) or as defining q column vectors.

The vector, all of whose components are zero, is called the zero vector and is denoted by 0.

The sum and difference of two row (column) vectors and the product of a scalar and a vector are formed by the rules governing matrices.

Example 1.  Consider the 3-vectors X_1 = [3, 1, −4], X_2 = [2, 2, −3], X_3 = [0, −4, 1], and X_4 = [−4, −4, 6].

   (a) 2X_1 − 5X_2 = 2[3, 1, −4] − 5[2, 2, −3] = [6, 2, −8] − [10, 10, −15] = [−4, −8, 7]
   (b) 2X_2 + X_4 = 2[2, 2, −3] + [−4, −4, 6] = [0, 0, 0] = 0
   (c) 2X_1 − 3X_2 − X_3 = 0
   (d) 2X_1 − 5X_2 − X_3 − X_4 = 0

The vectors used here are row vectors. Note that if each bracket is primed to denote column vectors, the results remain correct.

LINEAR DEPENDENCE OF VECTORS. The m n-vectors over F

(9.2)     X_1 = [x_11, x_12, ..., x_1n]
          X_2 = [x_21, x_22, ..., x_2n]
          ..............................
          X_m = [x_m1, x_m2, ..., x_mn]

are said to be linearly dependent over F provided there exist m elements k_1, k_2, ..., k_m of F, not all zero, such that

(9.3)     k_1·X_1 + k_2·X_2 + ··· + k_m·X_m = 0

Otherwise, the m vectors are said to be linearly independent.

Example 2.  Consider the four vectors of Example 1. By (b) the vectors X_2 and X_4 are linearly dependent; so also are X_1, X_2, X_3 by (c) and the entire set by (d).

The vectors X_1 and X_2, however, are linearly independent. For, assume the contrary; then

     k_1·X_1 + k_2·X_2 = [3k_1 + 2k_2, k_1 + 2k_2, −4k_1 − 3k_2] = [0, 0, 0]

Then 3k_1 + 2k_2 = 0, k_1 + 2k_2 = 0, and −4k_1 − 3k_2 = 0. From the first two relations k_1 = 0 and then k_2 = 0.

Any n-vector X and the n-zero vector 0 are linearly dependent.

A vector X_{m+1} is said to be expressible as a linear combination of the vectors X_1, X_2, ..., X_m if there exist elements k_1, k_2, ..., k_m of F such that X_{m+1} = k_1·X_1 + k_2·X_2 + ··· + k_m·X_m.

BASIC THEOREMS. If in (9.3) some k_i ≠ 0, we may solve for X_i:

(9.4)     X_i = s_1·X_1 + ··· + s_{i-1}·X_{i-1} + s_{i+1}·X_{i+1} + ··· + s_m·X_m,   where s_j = −k_j/k_i

Thus,

     I. If m vectors are linearly dependent, some one of them may always be expressed as a linear combination of the others.
9] LINEAR DEPENDENCE OF VECTORS AND FORMS, 69 Uh. fm vootors Xy, Xo... Xy are Lineatly independent while the set obtained by add- ing another vector Xe. 1s linearly dependent, then Xqq, ean be expressed as a linear con- bination of Xs, Xp. Ay Example 2, Prom Example 2, the vectors X;, and Xz are Unearly independent while Xs, Xz, and Xy are nearly dependent, satisfying the relations 2X,—3X,—Xq= 0. Clearly, Xq = 2X,— 3X, ML. If among the m vectors X,, Xz, ....Xq there 1s a subset of rn. If the set of vectors (9.2) is Lineatly independent so also Is every subset of them, A LINEAR FORM over F in n variables x, % (9.6) Easy = ats + Ooty + + ant ‘x 18 a polynomial of the type where the cooffictents are in F Consider @ system of m linear forms in n variables = tam + aot tm + Oe aa re faueinee treme ert and the associated matrix hy iy ta ge | teh Oe Ben If there exist elements Jy, ks,...,by , not all zero, in F such that lah + bef + + ah o 70 LINEAR DE ENDENCE OF VECTORS AND FORMS (CHAP. 9 the forms (9.7) are said to be linearly dependent; otherwise the forms are said to be linearly independent. ‘Thus, the linear dependence or independence of the forms of (9.7) is equivalent to the linear dependence or independence of the row vectors of A. Example 5. The forms f, = 26; — 5+ Sts fy = Xi 4 Bet dea, fo = ey — Tao xe ate Hnesrly depend. 2-as ent since 4= 1 2 4| Is of sank 2. Here, 3f;—2f,- 4-01, ‘The system (9.7) is necessarily dependent if m>n. Why? SOLVED PROBLEMS 1. Prove: If among the m vectors Xs, Xp,....%_ there is a subset, say, Xs, Xo) Xp rm, whieh is Vinearly dependent, so also are the m vectors. Since, by hypothesis. ix; + kyXy + +4,Xp = 0 with not all of the k's equal to zero. then BX, + bala + om + lg + O-Xpas tot Ody = 0 with not all of the #s equal to zero and the entire set of vectors is linearly dependent. 
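The dependence relation found for the first triple of Problem 3 can be checked directly by forming the linear combination component by component. A minimal sketch in plain Python (the helper name comb is our own):

```python
X1 = [1, 2, -3, 4]
X2 = [3, -1, 2, 1]
X3 = [1, -5, 8, -7]

def comb(coeffs, vectors):
    # k1*X1 + k2*X2 + ... , formed component by component
    return [sum(k * v[i] for k, v in zip(coeffs, vectors))
            for i in range(len(vectors[0]))]

# the relation of Problem 3(a): 2*X1 - X2 + X3 = 0, i.e. X3 = X2 - 2*X1
print(comb([2, -1, 1], [X1, X2, X3]))   # [0, 0, 0, 0]
```

X_1 and X_2 alone are independent: k_1·X_1 + k_2·X_2 = 0 gives k_1 + 3k_2 = 0 and 2k_1 − k_2 = 0 from the first two components, whence k_1 = k_2 = 0; so these two form a maximum linearly independent subset.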
2 Prove: If the rank of the matrix associated with a set of m n-vectors is rn, consider the matrix when to each of the given m vectors m-n additional zero compo- nents are added, This matrix is [40]. Clearly the linear dependence or independence of the vectors and also the rank of 4 have not been changed ‘Tus, in elther case, the vectors Xy,,.....q are linear combinations of the linearly independent vee tors Xy.Xq,..Xp a5 was to be proved, 3. Show, using a matrix, that each triple of vectors X,=[1,2,-3,4] (2.3.4,-1] (a) Xp=[3.-1,2.17 and (b) Xp=(2.3.1.-2] Xg=[1.-5,8,-7] Xy=[4.6,2.-3] Js linearly dependent. In each determine a maximum subset of linearly independent vectors and express the others as Linear combinations of these. 2K, 4 2Ky—BYy 2 0 and My = Xp Xe 2 LINEAR DEPENDENCE OF VECTORS AND FORMS, [cHAP. 9 4. Let P(1,1,1), BC 2,3), (8.1.2), and P¢2,3,4) be points in ordinary space. The points P, P, and the origin of coordinates determine plane 7 of equation eyed yiad ‘i zp dyte w Riersnl ey o ooo1 Substituting the coordinates of P, into the left member of (#). we have 2341 2840) 234 aaad 1110 = = frat ° 12491 1230 123 0001 ooo 234 ‘Thus. P Hes in 77. The significant fact here is that [P,.%.?J'=]1 1 1| is of rank 2, 123 We have verified: Any three points of ordinary space ite in a plane through the origin provided the matrix fof their coordinates is of rank 2. Show that , does not Ite in 7 SUPPLEMENTARY PROBLEMS 8. Prove: If m vectors X;,>,...,q are linearly Independent while the set obtained by adding another vector Aus 18 nearly dependent, then Nea, ean be expressed as a linear combination of 3. Xs, {6 Show that the representation of Yq,3 in Problem § is uniaue. Hints Suppose Kaas © ZX; = Eaux, and eosider ¥ chy) % 2. Prove: A necessary and sufficient condition that the vectors (9.2) be linearly dependent is that the matrix (8.5) of the vectors be of rank r ~=0, ... 
have solutions, each being « column of SUPPLEMENTARY PROBLEMS Find all solutions of: nt mt % @) 4- Met 3m = 1 Co) } any + 545 — 20 ay t Tap Ti mtmtmtne = 0 nt ata Mt tm (tee lan la w RIE ISTS mom tmtm = 2 Ans. (@) xy = 1+ 2a~b4Se, mya, Hyde Mere (@) m= tal +1913, 20 = 0/9 5/3, (d) xy = m2 a Find al nontseal stations mt tetin so note ° oe t= nett = 0 eS @) Jax tax t ag © 0 a eee anes ° xy > 4% t Sap = 0 rena deo las tte tte =o Ans. (a) %1= 3a, 19 = 0, & @) ae oep= maga 3 1,5 (4) xy = 8a 8b, ape, xy sta 80, ay @ 7 fof CHAP. 10) LINEAR EQUATIONS 83 R rig 18, Given 4 = |2 2 4], find a matrix B of rank 2 such that 4B =0. Hint. Select the columns of B from 33 the solutions of AX = 0, 14. Show that a square matrix is singular if and only if its rows (columns) are linearly dependent, 15, Let AX =0 be a system of n homogeneous equations in» unknowns and suppose A of rank r ‘hat any non-zero vector of cofactors [042, ALf0, ..., ttgnl' of a row of A Is a solution of AX = 0, +1. Show 16. Use Problem 15 to solve ate tIm = 0 tay + 3%- me = 0 ay tix tte = 0 TRE wf of = o 2x; + 5x + 6x = 0" 3x; — 4x2 + 2x = 0" Bx xy + Os Hint, ‘To the equations of (o) adjoin Ox + 0x» +02 = 0 and find the cofactors of the elements of the 1-29 third row of |2 5 6 0 00, Ans, (a) 4 =—21a, x9 =0, x9=90 ot [80,0,-]', ) [2¢,-T0.-174l', (c) [110,-20,—4a] 11. Let the coefficient and the augmented matrix of the system of 3 non-homogeneous equations in § unknowns AX =H be of rank 2 and assume the canonical form of the augmented matrix to be 10 bro bre bas ee 0 1 bes boa bos eo 000 0 0 Oo with not both of es.cy equal to 0. First choose 2 = 24 === 0 and obtain X; =[es.ez,0,0,0)' as @ solu- Hon of AX =H. Thon choose a= 1, ra= 260, alS0 xy =%520, y= 1 and ag =2y 0, 2571 to ob- fain other solutions Xs, Xs, and Ny. Show that these 5—2+1~4 solutions are linearly independent. 18. 
18. Consider the linear combination Y = s_1 X_1 + s_2 X_2 + s_3 X_3 + s_4 X_4 of the solutions of Problem 17. Show that Y is a solution of AX = H if and only if (i) s_1 + s_2 + s_3 + s_4 = 1. Thus, with s_1, s_2, s_3, s_4 arbitrary except for (i), Y is a complete solution of AX = H.

19. Prove Theorem VI. Hint: Follow Problem 17 with c_1 = c_2 = 0.

20. Prove: If A is an m x p matrix of rank r_A and B is a p x n matrix of rank r_B such that AB = 0, then r_A + r_B <= p. Hint: Use Theorem VI.

21. Using a 4 x 5 matrix A = [a_ij] of rank 2, verify: In an m x n matrix A of rank r, the r-square determinants formed from the columns of a submatrix consisting of any r rows of A are proportional to the r-square determinants formed from the corresponding columns of any other submatrix consisting of r rows of A.
    Hint: Suppose the first two rows are linearly independent, so that a_{3j} = p a_{1j} + q a_{2j}, (j = 1, 2, ..., 5). Evaluate the 2-square determinants

        | a_11  a_12 |     | a_11                a_12              |         | a_11  a_12 |
        | a_31  a_32 |  =  | p a_11 + q a_21     p a_12 + q a_22   |  =  q   | a_21  a_22 |

    and similarly for the other 2-square determinants.

22. Write a proof of the theorem of Problem 21.

23. From Problem 7 obtain: If the n-square matrix A is of rank n - 1, then the following relations among its cofactors hold:

        α_{ih} α_{jk} = α_{ik} α_{jh},        (h, i, j, k = 1, 2, ..., n)

24. Show that an n-square matrix of rank n is row equivalent to I_n.

25. Show that a system of 6 linear equations in 4 unknowns can have at most 5 linearly independent equations; more generally, that a system of m > n linear equations in n unknowns can have at most n + 1 linearly independent equations, and that when there are n + 1 the system is inconsistent.

26. If AX = H is consistent and of rank r, for what set of r variables can one solve?
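The "coefficients sum to 1" criterion of Problem 18 can be illustrated on any small consistent system; the system below is invented for the check:

```python
from fractions import Fraction as F

A = [[1, 1, 0],
     [0, 1, 1]]
H = [2, 2]

def image(M, X):
    """Compute the product M X for a matrix M and column vector X."""
    return [sum(a * x for a, x in zip(row, X)) for row in M]

# Two particular solutions of AX = H, verifiable by inspection.
X1 = [1, 1, 1]
X2 = [2, 0, 2]
assert image(A, X1) == H and image(A, X2) == H

# s1*X1 + s2*X2 with s1 + s2 = 1 is again a solution ...
s1, s2 = F(1, 4), F(3, 4)
Y = [s1 * a + s2 * b for a, b in zip(X1, X2)]
print(image(A, Y) == H)        # True

# ... while coefficients summing to anything else fail.
Z = [a + b for a, b in zip(X1, X2)]   # s1 = s2 = 1
print(image(A, Z) == H)        # False
```

The failure for s_1 + s_2 = 2 reflects the fact that A(s_1 X_1 + s_2 X_2) = (s_1 + s_2)H, which equals H only when the coefficients sum to 1.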
27. Generalize the results of Problems 17 and 18 to m non-homogeneous equations in n unknowns with coefficient and augmented matrix of the same rank r to prove: If the coefficient matrix and the augmented matrix of the system AX = H of m non-homogeneous equations in n unknowns have rank r and if X_1, X_2, ..., X_{n-r+1} are n - r + 1 linearly independent solutions of the system, then

        X = s_1 X_1 + s_2 X_2 + ... + s_{n-r+1} X_{n-r+1},        where s_1 + s_2 + ... + s_{n-r+1} = 1,

is a complete solution.

28. In a four-pole electrical network, the input quantities E_1 and I_1 are given in terms of the output quantities E_2 and I_2 by

        E_1 = a E_2 + b I_2
        I_1 = c E_2 + d I_2,        ad - bc != 0

Solve for E_2 and I_2; solve also for E_1 and I_2, and for I_1 and E_2.

29. Let the system of n linear equations in n unknowns AX = H, H != 0, have a unique solution. Show that the system AX = K has a unique solution for any n-vector K != 0.

30. Given a non-singular 3-square matrix A, solve the set of linear forms AX = Y = [y_1, y_2, y_3]' for the x_i as linear forms in the y's. Now write down the solution of A'X = Y.

31. Let A be n-square and non-singular, and let S_i be the solution of AX = E_i, (i = 1, 2, ..., n), where E_i is the n-vector whose ith component is 1 and whose other components are 0. Identify the matrix [S_1, S_2, ..., S_n].

32. Let A be an m x n matrix with m < n and let S_i be a solution of AX = E_i, (i = 1, 2, ..., m), where E_i is the m-vector whose ith component is 1 and whose other components are 0. If K = [k_1, k_2, ..., k_m]', show that k_1 S_1 + k_2 S_2 + ... + k_m S_m is a solution of AX = K.

Chapter 11

Vector Spaces

UNLESS STATED OTHERWISE, all vectors will now be column vectors. When components are displayed, we shall write [x_1, x_2, ..., x_n]'. The transpose mark (') indicates that the elements are to be written in a column.

A set of such n-vectors over F is said to be closed under addition if the sum of any two of them is a vector of the set. Similarly, the set is said to be closed under scalar multiplication if every scalar multiple of a vector of the set is a vector of the set.
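Closure, as just defined, can be spot-checked mechanically; a small sketch (random integer samples, written for this edition) for the "equal components" set of Example 1(a) that follows:

```python
import random

def equal_components(v):
    """Membership test for the set of vectors with all components equal."""
    return all(x == v[0] for x in v)

random.seed(0)
for _ in range(100):
    x, y, k = (random.randint(-9, 9) for _ in range(3))
    X, Y = [x, x, x], [y, y, y]            # two members of the set
    S = [a + b for a, b in zip(X, Y)]      # their sum
    K = [k * a for a in X]                 # a scalar multiple
    assert equal_components(S) and equal_components(K)
print("closed under addition and scalar multiplication (100 samples)")
```

Sampling is only evidence, of course; the one-line algebraic argument of Example 1(a) is the proof.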
Example 1.
(a) The set of all vectors [x_1, x_2, x_3]' of ordinary space having equal components (x_1 = x_2 = x_3) is closed under both addition and scalar multiplication. For, the sum of any two of the vectors and k times any vector (k real) are again vectors having equal components.
(b) The set of all vectors [x_1, x_2, x_3]' of ordinary space is closed under addition and scalar multiplication.

VECTOR SPACES. Any set of n-vectors over F which is closed under both addition and scalar multiplication is called a vector space. Thus, if X_1, X_2, ..., X_m are n-vectors over F, the set of all linear combinations

(11.1)        k_1 X_1 + k_2 X_2 + ... + k_m X_m        (k_i in F)

is a vector space over F. For example, both of the sets of vectors (a) and (b) of Example 1 are vector spaces. Clearly, every vector space (11.1) contains the zero n-vector, while the zero n-vector alone is a vector space. (The space (11.1) is also called a linear vector space.)

The totality V_n(F) of all n-vectors over F is called the n-dimensional vector space over F.

SUBSPACES. A set V' of the vectors of V_n(F) is called a subspace of V_n(F) provided V' is closed under addition and scalar multiplication. Thus, the zero n-vector is a subspace of V_n(F); so also is V_n(F) itself. The set (a) of Example 1 is a subspace (a line) of ordinary space. In general, if X_1, X_2, ..., X_m belong to V_n(F), the space of all linear combinations (11.1) is a subspace of V_n(F).

A vector space V is said to be spanned or generated by the n-vectors X_1, X_2, ..., X_m provided (a) the X_i lie in V and (b) every vector of V is a linear combination (11.1). Note that the vectors X_1, ..., X_m are not restricted to be linearly independent.

Example 2. Let F be the field R of real numbers so that the 3-vectors X_1 = [1,1,1]', X_2 = [1,2,3]', X_3 = [1,3,2]', and X_4 = [3,2,1]' lie in ordinary space S = V_3(R). Any vector [a,b,c]' of S can be expressed as

        X = y_1 X_1 + y_2 X_2 + y_3 X_3 + y_4 X_4
since the resulting system of equations

(1)        y_1 +  y_2 +  y_3 + 3y_4 = a
           y_1 + 2y_2 + 3y_3 + 2y_4 = b
           y_1 + 3y_2 + 2y_3 +  y_4 = c

is consistent. Thus, the vectors X_1, X_2, X_3, X_4 span S.
The vectors X_1 and X_2 are linearly independent. They span a subspace (the plane pi) of S which contains every vector h X_1 + k X_2, where h and k are real numbers. The vector X_1 spans a subspace (the line L) of S which contains every vector k X_1, where k is a real number.
See Problem 1.

BASIS AND DIMENSION. By the dimension of a vector space V is meant the maximum number of linearly independent vectors in V or, what is the same thing, the minimum number of linearly independent vectors required to span V. In elementary geometry, ordinary space is considered as a 3-space (space of dimension three) of points (a,b,c). Here we have been considering it as a 3-space of vectors [a,b,c]'. The plane pi of Example 2 is of dimension 2 and the line L is of dimension 1.

A vector space of dimension r consisting of n-vectors will be denoted by V_n^r(F). When r = n, we shall agree to write V_n(F) for V_n^n(F).

A set of r linearly independent vectors of V_n^r(F) is called a basis of the space. Each vector of the space is then a unique linear combination of the vectors of this basis. All bases of V_n^r(F) have exactly the same number of vectors, but any r linearly independent vectors of the space will serve as a basis.

Example 3. The vectors X_1, X_2, X_3 of Example 2 span S since any vector [a,b,c]' of S can be expressed as X = y_1 X_1 + y_2 X_2 + y_3 X_3; the resulting system of equations, unlike the system (1), has a unique solution. The vectors X_1, X_2, X_3 are a basis of S.
The vectors X_1, X_2, X_4 are not a basis of S. (Show this.) They span the subspace pi of Example 2, whose basis is the set X_1, X_2.

Theorems I-V of Chapter 9 apply here, of course.
In particular, Theorem IV may be restated as: If X_1, X_2, ..., X_m is a set of n-vectors over F and if r is the rank of the m x n matrix of their components, then from the set r linearly independent vectors may be selected. These vectors span a V_n^r(F) in which the remaining m - r vectors lie.
See Problems 2-4.

THE NULL SPACE of an m x n matrix A is the vector space of all solutions of AX = 0. Of considerable importance is the fact that the null space of a product AB contains the null space of B.
See Problem 10.

BASES AND COORDINATES. The n-vectors

        E_1 = [1,0,0,...,0]',   E_2 = [0,1,0,...,0]',   ...,   E_n = [0,0,0,...,1]'

are called elementary or unit vectors over F. The elementary vector E_j, whose jth component is 1, is called the jth elementary vector.

The elementary vectors E_1, E_2, ..., E_n constitute an important basis for V_n(F). Every vector X = [x_1, x_2, ..., x_n]' of V_n(F) can be expressed uniquely as the sum

        X = x_1 E_1 + x_2 E_2 + ... + x_n E_n

of the elementary vectors. The components x_1, x_2, ..., x_n of X are now called the coordinates of X relative to the E-basis. Hereafter, unless otherwise specified, we shall assume that a vector X is given relative to this basis.

Let Z_1, Z_2, ..., Z_n be another basis of V_n(F). Then there exist unique scalars a_1, a_2, ..., a_n in F such that

        X = a_1 Z_1 + a_2 Z_2 + ... + a_n Z_n

These scalars a_1, a_2, ..., a_n are called the coordinates of X relative to the Z-basis. Writing X_Z = [a_1, a_2, ..., a_n]', we have

(11.4)        X = [Z_1, Z_2, ..., Z_n] X_Z = Z X_Z

where Z is the matrix whose columns are the basis vectors Z_1, Z_2, ..., Z_n.

Example 5. If Z_1 = [2,-1,3]', Z_2 = [1,2,1]', Z_3 = [1,-1,-1]' is a basis of V_3(F) and X_Z = [1,2,3]' is a vector of V_3(F) relative to that basis, then

        X = [Z_1, Z_2, Z_3] X_Z = |  2  1   1 | | 1 |
                                  | -1  2  -1 | | 2 |   =   [7, 0, 2]'
                                  |  3  1  -1 | | 3 |

relative to the E-basis.

Let W_1, W_2, ..., W_n be yet another basis of V_n(F). Suppose X_W = [b_1, b_2, ..., b_n]', so that

(11.5)        X = [W_1, W_2, ..., W_n] X_W = W X_W

From (11.4) and (11.5), X = Z X_Z = W X_W and

(11.6)        X_W = W^{-1} Z X_Z = P X_Z,        where P = W^{-1} Z
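The conversion (11.4) from Z-coordinates to E-coordinates is a single matrix-vector product; the computation of the example above, in Python:

```python
# Columns of Z are the basis vectors Z1 = [2,-1,3]', Z2 = [1,2,1]', Z3 = [1,-1,-1]'.
Z = [[2, 1, 1],
     [-1, 2, -1],
     [3, 1, -1]]
X_Z = [1, 2, 3]                  # coordinates relative to the Z-basis
X = [sum(Z[r][c] * X_Z[c] for c in range(3)) for r in range(3)]
print(X)                         # [7, 0, 2] -- coordinates relative to the E-basis
```

Reading the product column-wise, this is exactly X = 1*Z_1 + 2*Z_2 + 3*Z_3.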
VIII. If a vector of V_n(F) has coordinates X_Z and X_W respectively relative to two bases of V_n(F), then there exists a non-singular matrix P, determined solely by the two bases and given by (11.6), such that X_W = P X_Z.
See Problem 12.

SOLVED PROBLEMS

1. The set of all vectors X = [x_1, x_2, x_3, x_4]', where x_1 + x_2 + x_3 + x_4 = 0, is a subspace V of V_4(F) since the sum of any two vectors of the set and any scalar multiple of a vector of the set have components whose sum is zero, that is, are vectors of the set.

2. Since
        | 1  3  1 |
        | 2  4  0 |
        | 2  4  0 |
        | 1  3  1 |
is of rank 2, the vectors X_1 = [1,2,2,1]', X_2 = [3,4,4,3]', and X_3 = [1,0,0,1]' are linearly dependent and span a vector space V_4^2(F). Now any two of these vectors are linearly independent; hence, we may take X_1 and X_2, X_1 and X_3, or X_2 and X_3 as a basis of the V_4^2(F).

3. Since
        | 1  4  2  4 |
        | 1  3  1  2 |
        | 1  2  0  0 |
        | 0 -1 -1 -2 |
is of rank 2, the vectors X_1 = [1,1,1,0]', X_2 = [4,3,2,-1]', X_3 = [2,1,0,-1]', and X_4 = [4,2,0,-2]' are linearly dependent and span a V_4^2(F). For a basis, we may take any two of the vectors except the pair X_3, X_4.

4. The vectors X_1, X_2, X_3 of Problem 2 lie in V_4(F). Find a basis.
For a basis of this space we may take X_1, X_2, E_1 = [1,0,0,0]', and E_2 = [0,1,0,0]', or X_1, X_2, [1,2,3,4]', and [1,3,6,8]', ..., since the matrices [X_1, X_2, E_1, E_2] and [X_1, X_2, [1,2,3,4]', [1,3,6,8]'] are of rank 4.

5. Let X_1 = [1,2,1]', X_2 = [1,2,3]', X_3 = [3,6,5]', Y_1 = [0,0,1]', Y_2 = [1,2,5]' be vectors of V_3(R). Show that the space spanned by X_1, X_2, X_3 and the space spanned by Y_1, Y_2 are identical.
First, we note that X_1 and X_2 are linearly independent while X_3 = 2X_1 + X_2. Thus, the X_i span a space of dimension two, say XV_3^2(R). Also, the Y_i, being linearly independent, span a space of dimension two, say YV_3^2(R).
Next, Y_1 = (X_2 - X_1)/2 and Y_2 = 2X_2 - X_1; X_1 = Y_2 - 4Y_1 and X_2 = Y_2 - 2Y_1. Thus, any vector a Y_1 + b Y_2 of YV_3^2(R) is a vector (a/2 + 2b) X_2 - (a/2 + b) X_1 of XV_3^2(R), and any vector c X_1 + d X_2 of XV_3^2(R) is a vector (c + d) Y_2 - (4c + 2d) Y_1 of YV_3^2(R). Hence, the two spaces are identical.
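Problem 5 can also be settled by rank alone: two spans coincide exactly when each set's rank equals the rank of the combined set. A sketch with exact arithmetic:

```python
from fractions import Fraction

def rank(rows):
    """Count pivots after Gauss-Jordan elimination on a copy of `rows`."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

Xs = [[1, 2, 1], [1, 2, 3], [3, 6, 5]]
Ys = [[0, 0, 1], [1, 2, 5]]
print(rank(Xs), rank(Ys), rank(Xs + Ys))   # 2 2 2 -> the spans are identical
```

Equal ranks mean neither set contributes a vector outside the other's span, which is exactly the conclusion reached algebraically in Problem 5.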
6. (a) If X = [x_1, x_2, x_3]' lies in the V_3^2(F) spanned by X_1 = [1,-1,1]' and X_2 = [3,4,-2]', then

        | x_1  x_2  x_3 |
        |  1   -1    1  |   =   -2x_1 + 5x_2 + 7x_3   =   0
        |  3    4   -2  |

(b) If X = [x_1, x_2, x_3, x_4]' lies in the V_4^2(F) spanned by X_1 = [1,1,2,3]' and X_2 = [1,0,-2,1]', then the matrix

        | x_1  1   1 |
        | x_2  1   0 |
        | x_3  2  -2 |
        | x_4  3   1 |

is of rank 2. This requires

        -2x_1 + 4x_2 - x_3 = 0        and        x_1 + 2x_2 - x_4 = 0

These problems verify: Every V_n^k(F) may be defined as the totality of solutions over F of a system of n - k linearly independent homogeneous linear equations over F in n unknowns.

7. Prove: If two vector spaces V_n^h(F) and V_n^k(F) have V_n^s(F) as sum space and V_n^t(F) as intersection space, then h + k = s + t.
Suppose t = h; then V_n^h(F) is a subspace of V_n^k(F) and their sum space is V_n^k(F) itself. Thus, s = k, t = h, and h + k = s + t. The reader will show that the same is true if t = k.
Suppose next that t < h and t < k. Take a basis of the intersection space and extend it to a basis of each of the two given spaces; the combined set of h + k - t vectors spans the sum space and is linearly independent, so that s = h + k - t, as was to be proved.

11. Let X = [1,2,1]' relative to the E-basis. Find its coordinates relative to a new basis Z_1 = [1,1,0]', Z_2 = [1,0,1]', Z_3 = [1,1,1]'.

Solution (a). Write X = a Z_1 + b Z_2 + c Z_3, that is,

        | 1 |       | 1 |       | 1 |       | 1 |
        | 2 |  =  a | 1 |  +  b | 0 |  +  c | 1 |
        | 1 |       | 0 |       | 1 |       | 1 |

Then a + b + c = 1, a + c = 2, b + c = 1, whence a = 0, b = -1, c = 2. Thus, relative to the Z-basis, we have X_Z = [0,-1,2]'.

Solution (b). Rewriting (11.4) as X_Z = Z^{-1} X = [Z_1, Z_2, Z_3]^{-1} X, we have

        X_Z = | 1  1  1 |^{-1} | 1 |
              | 1  0  1 |      | 2 |   =   [0,-1,2]'
              | 0  1  1 |      | 1 |

12. Let X_Z and X_W be the coordinates of a vector X with respect to the two bases Z_1 = [1,1,0]', Z_2 = [1,0,1]', Z_3 = [1,1,1]' and W_1 = [1,1,2]', W_2 = [2,2,1]', W_3 = [1,2,2]'. Determine the matrix P such that X_W = P X_Z.

        Z = [Z_1, Z_2, Z_3] = | 1  1  1 |        W = [W_1, W_2, W_3] = | 1  2  1 |
                              | 1  0  1 |                              | 1  2  2 |
                              | 0  1  1 |                              | 2  1  2 |

Then

        P  =  W^{-1} Z  =  1/3 |  2  -3   2 | | 1  1  1 |        | -1   4   1 |
                               |  2   0  -1 | | 1  0  1 |  = 1/3 |  2   1   1 |
                               | -3   3   0 | | 0  1  1 |        |  0  -3   0 |

SUPPLEMENTARY PROBLEMS

14. Let [x_1, x_2, x_3, x_4]' be an arbitrary vector of V_4(R), where R denotes the field of real numbers. Which of the following sets are subspaces of V_4(R)?
    (a) all vectors with x_1 = x_2 = x_3 = x_4
    (b) all vectors with x_1 = x_2
    (c) all vectors with x_4 = 0
    (d) all vectors with ...
    (e) all vectors with x_1, x_2, x_3, x_4 integral
    Ans. All except (d) and (e).

15. Show that [1,1,1,1]' and [2,3,3,2]' are a basis of the V_4^2(F) of Problem 2.

16. Determine the dimension of the vector space spanned by each given set of vectors and select a basis for each. In particular:
    (e) show that [1,-1,1]' and [3,4,-2]' span the same space as [0,7,-5]' and [-17,-11,3]';
    (f) show that [1,-1,1]' and [3,4,-2]' do not span the same space as [-2,2,-2]' and [4,3,1]'.

17. Show that if the set X_1, X_2, ..., X_r is a basis for V_n^r(F), then any other vector Y of the space can be represented uniquely as a linear combination of X_1, X_2, ..., X_r.
    Hint: Assume Y = Σ a_i X_i = Σ b_i X_i and consider Σ (a_i - b_i) X_i = 0.

18. Consider the 4x4 matrix whose columns are the vectors of a basis of the V_4^2(R) of Problem 2 and a basis of the V_4^2(R) of Problem 3. Show that the rank of this matrix is 4; hence, V_4(R) is the sum space and V_4^0(R), the zero space, is the intersection space of the two given spaces.

19. Follow the proof given in Problem 8, Chapter 10, to prove Theorem VII.

20. Show that the space spanned by [1,0,0,0,0]', [0,0,0,0,1]', [1,0,1,0,0]', [0,0,1,0,0]', [1,0,0,1,1]' and the space spanned by [1,0,0,0,1]', [0,1,0,1,0]', [0,1,-2,1,0]', [1,0,-1,0,1]', [0,1,1,1,0]' are of dimensions 4 and 3, respectively. Show that [1,0,1,0,1]' and [1,0,2,0,1]' are a basis for the intersection space.

21. Find, relative to the basis Z_1 = [1,1,2]', Z_2 = [2,2,1]', Z_3 = [1,2,2]', the coordinates of the vectors (a) [1,1,0]', (b) [1,0,1]', (c) [1,1,1]'.
    Ans. (a) [-1/3, 2/3, 0]'   (b) [4/3, 1/3, -1]'   (c) [1/3, 1/3, 0]'

22. Find, relative to the basis Z_1 = [0,1,0]', Z_2 = [1,1,1]', Z_3 = [3,2,1]', the coordinates of the vectors (a) [2,-1,0]', (b) [1,-3,5]', (c) [0,0,1]'.
    Ans. (a) [-2,-1,1]'   (b) [-6,7,-2]'   (c) [-1/2, 3/2, -1/2]'
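A change-of-basis matrix P = W^{-1}Z of the kind asked for in these coordinate problems can be computed with a short exact Gauss-Jordan routine; a sketch, using the bases Z_i, W_i of the solved problem above:

```python
from fractions import Fraction as F

def solve_matrix(A, B):
    """Return A^{-1} B by Gauss-Jordan elimination on [A | B] (A non-singular)."""
    n = len(A)
    M = [[F(A[i][j]) for j in range(n)] + [F(B[i][j]) for j in range(n)]
         for i in range(n)]
    for c in range(n):
        p = next(i for i in range(c, n) if M[i][c] != 0)
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for i in range(n):
            if i != c and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[c])]
    return [row[n:] for row in M]

Z = [[1, 1, 1], [1, 0, 1], [0, 1, 1]]   # columns are Z1, Z2, Z3
W = [[1, 2, 1], [1, 2, 2], [2, 1, 2]]   # columns are W1, W2, W3
P = solve_matrix(W, Z)                  # P = W^{-1} Z, so that X_W = P X_Z

# Check: W P = Z, i.e. the two coordinate systems convert consistently.
WP = [[sum(W[i][k] * P[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]
print(WP == Z)   # True
```

Working over `Fraction` avoids any floating-point doubt about entries such as -1/3 and 4/3.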
23. Let X_Z and X_W be the coordinates of a vector X with respect to each of two given pairs of bases. Determine in each case the matrix P such that X_W = P X_Z.

24. Prove: If P_j is a solution of AX = E_j, (j = 1, 2, ..., m), then Σ h_j P_j is a solution of AX = H, where H = [h_1, h_2, ..., h_m]'.
    Hint: H = h_1 E_1 + h_2 E_2 + ... + h_m E_m.

25. The vector space defined by all linear combinations of the columns of a matrix A is called the column space of A. The vector space defined by all linear combinations of the rows of A is called the row space of A. Show that the columns of AB are in the column space of A and the rows of AB are in the row space of B.

26. Show that AX = H, a system of m non-homogeneous equations in n unknowns, is consistent if and only if the vector H belongs to the column space of A.

27. Determine a basis for the null space of

        A = | 1  1  0 |
            | 0  1  1 |

    Ans. [1,-1,1]'

28. Prove: (a) the null space of AB contains the null space of B; (b) the null space of BA contains the null space of A.

[CHAP. 12] LINEAR TRANSFORMATIONS

2. Prove: A linear transformation (12.1) is non-singular if and only if A is non-singular.
Suppose A is non-singular and the transforms of X_1 != X_2 are Y = A X_1 = A X_2. Then A(X_1 - X_2) = 0 and the system of homogeneous linear equations AX = 0 has the non-trivial solution X = X_1 - X_2. This is possible if and only if |A| = 0, a contradiction of the hypothesis that A is non-singular.

3. Prove: A non-singular linear transformation carries linearly independent vectors into linearly independent vectors.
Assume the contrary, that is, suppose that the images Y_i = A X_i, (i = 1, 2, ..., p), of the linearly independent vectors X_1, X_2, ..., X_p are linearly dependent. Then there exist scalars s_1, s_2, ..., s_p, not all zero, such that

        s_1 Y_1 + s_2 Y_2 + ... + s_p Y_p = 0

or

        s_1 A X_1 + s_2 A X_2 + ... + s_p A X_p = A(s_1 X_1 + s_2 X_2 + ... + s_p X_p) = 0

Since A is non-singular, s_1 X_1 + s_2 X_2 + ... + s_p X_p = 0.
But this is contrary to the hypothesis that the X_i are linearly independent. Hence, the Y_i are linearly independent.

4. A certain linear transformation Y = AX carries X_1 = [1,0,1]' into [2,3,-1]', X_2 = [1,-1,1]' into [3,0,-2]', and X_3 = [1,2,-1]' into [-2,7,-1]'. Find the images of E_1, E_2, E_3 and write the equation of the transformation.
Let E_1 = a X_1 + b X_2 + c X_3. Then a + b + c = 1, -b + 2c = 0, a + b - c = 0, so that a = -1/2, b = 1, c = 1/2, and the image of E_1 is

        Y_1 = -1/2 [2,3,-1]' + [3,0,-2]' + 1/2 [-2,7,-1]' = [1,2,-2]'

Similarly, the image of E_2 is Y_2 = [-1,3,1]' and the image of E_3 is Y_3 = [1,1,1]'. The equation of the transformation is

        Y = [Y_1, Y_2, Y_3] X = |  1  -1  1 |
                                |  2   3  1 | X
                                | -2   1  1 |

5. If Y_Z = A X_Z, where

        A = | 1  1  2 |
            | 2  2  1 |
            | 3  1  2 |

is a linear transformation relative to the Z-basis of Problem 12, Chapter 11, find the same transformation Y_W = B X_W relative to the W-basis of that problem.
From Problem 12, Chapter 11, X_W = P X_Z with

        P = 1/3 | -1   4   1 |
                |  2   1   1 |
                |  0  -3   0 |

Then Y_W = P Y_Z = P A X_Z = P A P^{-1} X_W = B X_W, where

        B = P A P^{-1} = 1/3 | -2  14  -6 |
                             |  7  14   9 |
                             |  0  -9   3 |

SUPPLEMENTARY PROBLEMS

6. In Problem 1 show: (a) the transformation is non-singular; (b) X = A^{-1} Y carries the column vectors of A into the elementary vectors.

7. Using the transformation of Problem 1, find (a) the image of X = [1,1,2]' and (b) the vector X whose image is [-2,-5,-5]'.

8. Study the effect of the transformation Y = IX; also Y = kIX.

9. Set up the linear transformation which carries E_1 into [1,2,3]', E_2 into [3,1,2]', and E_3 into [2,-1,-1]'. Show that the transformation is singular and carries the linearly independent vectors [1,1,1]' and [2,0,2]' into the same image vector.

10. Suppose (12.1) is non-singular and show that if X_1, X_2, ..., X_p are linearly dependent so also are their images Y_1, Y_2, ..., Y_p.

11. Use Theorem III to show that under a non-singular transformation the dimension of a vector space is unchanged. Hint: Consider the images of a basis of V_n^r(F).
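Problem 4's technique in general form: if the images of a basis X_1, ..., X_n are known, then A = [images][X_1, ..., X_n]^{-1}. A 2x2 sketch with made-up data (not the text's vectors):

```python
Xmat = [[1, 1],
        [0, 1]]     # columns: X1 = [1,0]', X2 = [1,1]'
Ymat = [[2, 3],
        [3, 5]]     # columns: the prescribed images of X1 and X2
Xinv = [[1, -1],
        [0, 1]]     # inverse of Xmat, found by hand

def mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = mul(Ymat, Xinv)
print(A)                      # [[2, 1], [3, 2]]
print(mul(A, Xmat) == Ymat)   # True: A reproduces both prescribed images
```

Solving for the images of E_1, E_2, ..., E_n one at a time, as in Problem 4, amounts to computing the columns of this same product.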
12. Given the linear transformation

        Y = | 1  1  0 |
            | 2  3  1 | X
            | 3  4  1 |

show (a) it is singular, (b) the images of three linearly independent vectors are linearly dependent, and (c) the image of V_3(R) is a V_3^2(R).

13. Given the linear transformation

        Y = | 1  1  3 |
            | 1  2  4 | X
            | 1  1  3 |

show (a) it is singular, and (b) the image of every vector of the V_3^2(R) spanned by [1,1,1]' and [3,2,0]' lies in the V_3^1(R) spanned by [5,7,5]'.

14. Prove Theorem VII. Hint: Let X_i and Y_i, (i = 1, 2, ..., n), be the given sets of vectors. Let Z = AX carry the set X_i into E_i and Y = BZ carry the E_i into Y_i.

15. Prove: Similar matrices have equal determinants.

16. Let

        Y = AX = | 1  2  3 |
                 | 3  2  1 | X
                 | 1  1  1 |

be a linear transformation relative to the E-basis and let a new basis, say Z_1 = [1,1,0]', Z_2 = [1,0,1]', Z_3 = [1,1,1]', be chosen. Let X = [1,2,3]' relative to the E-basis. Show that
(a) Y = [14,10,6]' is the image of X under the transformation;
(b) X, when referred to the new basis, has coordinates X_Z = [-2,-1,4]' and Y has coordinates Y_Z = [8,4,2]';
(c) X_Z = PX and Y_Z = PY, where

        P = |  1   0  -1 |
            |  1  -1   0 |
            | -1   1   1 |

(d) Y_Z = P A Q X_Z, where Q = P^{-1}.

17. Given a linear transformation Y_E = A X_E relative to the E-basis
the &* spanned by (1,-1.0), the 1 spanned by [2-1.-2]', and the Kt spanned by 228 ft -2]' are invariant vector spaces. aad (Y= ]1 91}. the 14 spanned by (1.1.1]' and the 12 spanned by (1.0.~11 and (2,-1.0) are invariant 122 spaces, (Vote that every vector of the #? is cattied into itself.) o1od 0010 1 5 (e) Y= ofan X, the 1 spanned by [1,1,1,1]’ is an invariant vector space. 14-6 4 ‘Consider the Linear transformation ¥ ton of 1, 2,...0m (@) Deseribe the permutation matrix ? (8) Prove: Thete ate n! permutation matrices of order n (c) Prove: If P, and Po are pesutation matrices go also are Py=P,P, and P4=P2Ps fm) in whleh fifo sei 18 ® permuta- (a) Prove: IF is e permutation matrix so also are PY and. PP’ = (©) Show that each permitation matrix P can be expressed as a product of a number ofthe elementary col- umn matsiees Kia, Reo. oe Ryan Wate P= (Bj. Big on Beg) Mere bbe aig mentary n-vectors, Find a tule (otter than P™* = P’) for writing P™*. For example, when n= 4 and P= [Bo Ea Boh, then Po! = [Ba Ba. Eli when P (Eu, Bo Ess Bah, then P= (Eq a Ba Ex] permutation of 1,2,...m and Ey, are the ele- Chapter 13 Vectors Over the Real Field 1¢ of all real n-veetors. You %qY’ ate two vectors of KR), their inner product is INNER PRODUCT. In this chapter all vectors are real and K(R) is the spa X= [xp terete)’ and Y =O defined to be the scalar (13.1) XY = am + mae + + yn Example 1. For the veotors Xs aa)’, Kee E2naT's My [2a () XpXo = EZ E ELE LZ = 5 ) XpXy = FL +12) + FT = 0 ) XeXy ebb es bt (@) Xp2Xp = ee 2 ee = 10 = 20H) Note. The inner product is frequently defined as 3.1) XY = NY = ¥X The use of X’Y and Y’X is helpful; however, X°Y and ¥°X are 1x1 matrices while X-Y 1s the element of the matrix. With this understanding, (13.4) will be used here. Some authors write X|Y for X-Y. In vector analysis, the inner product is call- ed the dot product. 
The following rules for inner products are immediate:

(13.2)   (a)  X_1·X_2 = X_2·X_1,        k X_1·X_2 = X_1·k X_2 = k(X_1·X_2)
         (b)  X_1·(X_2 + X_3) = X_1·X_2 + X_1·X_3
         (c)  (X_1 + X_2)·(X_3 + X_4) = X_1·X_3 + X_1·X_4 + X_2·X_3 + X_2·X_4

ORTHOGONAL VECTORS. Two vectors X and Y of V_n(R) are said to be orthogonal if their inner product is 0. The vectors X_1 and X_3 of Example 1 are orthogonal.

THE LENGTH OF A VECTOR X of V_n(R), denoted by ||X||, is defined as the square root of the inner product of X and X; thus,

(13.3)        ||X|| = sqrt(X·X) = sqrt(x_1^2 + x_2^2 + ... + x_n^2)

Example 2. From Example 1(c), ||X_1|| = sqrt(3).

See Problems 1-2.
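The definitions (13.1) and (13.3) in executable form; the sample vectors are chosen to reproduce Example 1's values (X_1·X_2 = 5, X_1·X_3 = 0, ||X_1|| = sqrt 3):

```python
from math import sqrt

def dot(X, Y):
    """Inner product (13.1) of two real n-vectors."""
    return sum(x * y for x, y in zip(X, Y))

X1, X2, X3 = [1, 1, 1], [2, 1, 2], [1, -2, 1]
print(dot(X1, X2))          # 5
print(dot(X1, X3))          # 0  -> X1 and X3 are orthogonal
print(sqrt(dot(X1, X1)))    # length ||X1|| = sqrt(3)
```

The zero inner product is the orthogonality test defined above, and the length is simply the square root of the self inner product.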
