
CORRECTION OF HOMEWORK 8

Date: Thursday, April 16th, 2008.

1. Page 149, Exercises 12 and 13

In these exercises, one considers a function D : M_{n×n}(F) → F, defined on the n × n matrices with entries in a field F, with values in F. (You can consider F = R for concreteness; this is unimportant here.) We also assume that D has the same multiplicative property as the determinant, namely

    D(AB) = D(A)D(B)    (1.1)

whenever A and B are n × n matrices. The goal of these questions is to see that D is (under some nondegeneracy conditions) the determinant function. This gives another way to define the determinant.

Show that either D(A) = 0 for all A, or D(I_n) = 1.

We want to prove that one of two possibilities occurs. To prove this, we suppose that one does not occur, and try to see that the second one has to occur. (This has the advantage of giving us another piece of information, besides (1.1), namely that the first possibility does not occur.) Hence we suppose that the first possibility is not true. So we have two pieces of information on D, namely (1.1) and the existence of a matrix M_0 such that D(M_0) ≠ 0, and we want to prove that

    D(I_n) = 1.    (1.2)

For the moment, the second piece of information seems mysterious, so we focus on (1.1) and on what we want to prove. We have information relating D to the multiplication of matrices, and we want to prove something about I_n. But I_n also has some relation to multiplication: namely, for any matrix M, there holds M = I_n M = M I_n. Now, we want to use our information, so we take D of the identity above and, using (1.1), we get

    D(M) = D(I_n M) = D(I_n)D(M).

This is almost what we want. We just want to make sure that we do not divide by 0. But this is precisely how we can use our second piece of information: namely, we use the equality above (which is true for all matrices M) with M = M_0. In this case, D(M_0) ≠ 0, and dividing by D(M_0), we get that D(I_n) = 1.
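As a quick sanity check (an illustration only, not part of the exercise), one can verify that the usual determinant indeed satisfies (1.1) and sends the identity to 1. The sketch below uses sympy, and the two matrices are arbitrarily chosen examples:

    from sympy import Matrix, eye

    A = Matrix([[1, 2], [3, 4]])
    B = Matrix([[0, 1], [5, 7]])

    # The multiplicative property (1.1): D(AB) = D(A)D(B).
    assert (A * B).det() == A.det() * B.det()

    # The conclusion of the first question: D(I_n) = 1.
    assert eye(2).det() == 1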

If D(I_n) = 1, prove that if M is invertible, then D(M) ≠ 0.

We want to prove something for all invertible matrices. So we let M be such a matrix, and we try to prove the result for this particular M. If we can do it without making any additional assumption on M, then the proof works for all matrices. (If we needed additional assumptions, we would at least have a partial result, case 1; to finish the proof, we could then try to prove the result with the additional information that case 1 does not hold.) What do we know about M? A priori, we know exactly that there exists another matrix N = M^{-1} such that I_n = MN = NM. This looks encouraging, since we have a relation between a multiplication (so we can use our first hypothesis (1.1)) and I_n (so we can use our second hypothesis D(I_n) = 1). Applying D to the equality above, we get that

    1 = D(I_n) = D(MN) = D(M)D(N),

and since the product D(M)D(N) equals 1, it is nonzero, so both of its factors must be nonzero. (We even get the more precise information that D(M^{-1}) = 1/D(M).) Thus D(M) ≠ 0, and since we have made no assumption on M, this works for all invertible matrices. This ends the proof.
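The parenthetical remark D(M^{-1}) = 1/D(M) is easy to confirm for the usual determinant. A minimal sympy sketch, with an invertible matrix chosen arbitrarily for the illustration:

    from sympy import Matrix

    M = Matrix([[3, 1], [1, 1]])            # invertible: det M = 2
    assert M.det() != 0                     # invertible => D(M) != 0
    assert M.inv().det() == 1 / M.det()     # D(M^{-1}) = 1/D(M)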

Now, we make the additional hypothesis that n = 2 and that

    D(I_2) ≠ D(J), where J = [0 1; 1 0].    (1.3)

In particular, the analysis above shows that D(I_2) = 1: indeed, (1.3) prevents D from vanishing identically, so the first question applies. (I think that all the results here would hold for any n if we replaced (1.3) by D(I_n) ≠ D(E_σ) for one transposition σ ∈ S_n, where E_σ is the matrix with (i, j)-entry δ_{iσ(j)}. For simplicity, we stick to the n = 2 case.) At this point, we have introduced one new object, namely J. Since we only have information about the multiplicative properties of D, it is interesting to see what the multiplicative properties of J are. A simple computation shows that

    J [a b; c d] = [c d; a b]   and   [a b; c d] J = [b a; d c].    (1.4)

Hence, multiplication on the left by J exchanges the rows of a matrix, while multiplication on the right exchanges the columns.
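The identities (1.4) can be confirmed symbolically; the following sympy sketch (an illustration, with symbolic entries) checks both products:

    from sympy import Matrix, symbols

    a, b, c, d = symbols('a b c d')
    J = Matrix([[0, 1], [1, 0]])
    A = Matrix([[a, b], [c, d]])

    # Left multiplication by J exchanges the rows...
    assert J * A == Matrix([[c, d], [a, b]])
    # ...and right multiplication by J exchanges the columns.
    assert A * J == Matrix([[b, a], [d, c]])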

Besides, since we are considering products, we can compute J^2 = I_2, and hence, taking D on both sides of this equality, we get that 1 = D(I_2) = D(J)^2. Using (1.3), we conclude that

    D(J) = -1.    (1.5)

These facts will be useful later on.
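Again, one may check this against the usual determinant (illustration only, with sympy):

    from sympy import Matrix, eye

    J = Matrix([[0, 1], [1, 0]])
    assert J * J == eye(2)    # J^2 = I_2
    assert J.det() == -1      # consistent with (1.5)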

Prove that

    D(0) = 0.    (1.6)

Again, we ask ourselves whether 0 has some special multiplicative property. And indeed, we know that for every matrix M, there holds that 0 = 0·M, and applying D to both sides of this equality and using (1.1), we get that, for all matrices M,

    D(0) = D(M)D(0).

And from this we conclude that either D(0) = 0, in which case we are done, or D(0) ≠ 0, in which case we can divide by D(0) in the equality above to get D(M) = 1 for all matrices M. But this is not true for M = J, since D(J) = -1 by (1.5). (Another way to conclude, without using that D(I_2) = 1, is to say that if D(0) ≠ 0, then all matrices have the same image 1 under D, which again is ruled out by (1.3).) Consequently D(0) = 0, and the proof of (1.6) is complete.

Prove that if A^2 = 0, then

    D(A) = 0.    (1.7)

We still only know three things, namely (1.1), (1.3) and (1.6). But the multiplicative property combines nicely with the latter to give, in our case when A^2 = 0,

    0 = D(0) = D(A^2) = D(A)D(A) = D(A)^2,

hence D(A) = 0, and (1.7) is proved.

Prove that

    D(B) = -D(A)    (1.8)

whenever B is obtained from A by exchanging the rows or the columns.

Now, this question is not, apparently, linked to any multiplicative property, so we must examine more precisely what we know, which now consists of (1.1), (1.2), (1.3), (1.6) and (1.7). And all of these are just consequences of our hypotheses, i.e. (1.1) and (1.3). Now, there is one thing that we haven't used so far: the precise statement of (1.3), which in particular involves a special matrix J. Then (in case we did not do so before) we start to examine J and its multiplicative properties, and we rapidly see that (1.4) and (1.5) hold. But this is exactly what we need to answer the question, since if B is obtained from A by interchanging the rows, we see by (1.4) that B = JA and, using (1.1) and (1.5),

    D(B) = D(JA) = D(J)D(A) = -D(A).

In the case when B is obtained by interchanging the columns, we get that B = AJ and we conclude similarly.
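Both (1.7) and (1.8) are easy to confirm for the usual 2 × 2 determinant. A small sympy sketch (the nilpotent matrix below is one hand-picked example):

    from sympy import Matrix, symbols

    a, b, c, d = symbols('a b c d')

    # (1.7): a square-zero matrix has determinant 0.
    N = Matrix([[0, 0], [1, 0]])
    assert N * N == Matrix([[0, 0], [0, 0]]) and N.det() == 0

    # (1.8): exchanging the rows flips the sign of the determinant.
    J = Matrix([[0, 1], [1, 0]])
    A = Matrix([[a, b], [c, d]])
    assert (J * A).det() == -A.det()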

Prove that

    D(A) = 0    (1.9)

if one row or one column of A is 0.

Now, we have accumulated some results, (1.1)-(1.8), and it is not apparent what to use. So, to get some ideas, let us deal with the rows, and try to simplify the question. Using the action of J (1.4), we see that

    J [a b; 0 0] = [0 0; a b],

and taking D of both sides, we see that it suffices to prove the result when the second row is 0. (Multiplying on the right by J, we also see that we can assume that a < b, although we won't use this fact.) But, actually, examining the equality above, we see that, since the second row is 0, it was not important what the second column of J was, and the equality above holds with many other matrices. Indeed, for all c, d, we get that

    [0 c; 1 d] [a b; 0 0] = [0 0; a b].


And then, we can try some special choices for c or d. A first obvious choice leads to the result, since taking c = d = 0 we get a matrix J' = [0 0; 1 0] such that

    (J')^2 = [0 0; 1 0]^2 = 0.

Hence, using (1.7), we see that D(J') = 0, and consequently, using (1.1),

    0 = D(J' [a b; 0 0]) = D([0 0; a b]).

This proves the result in the case when a row is 0, and the case when a column is 0 is treated similarly.

Another special choice for J' could be c = 0 and d = 1, that is, J' = [0 0; 1 1]. Then

    J'J = [0 0; 1 1] J = [0 0; 1 1] = J',

and taking D of this equality and using (1.5), we get that -D(J') = D(J'). Hence D(J') = 0, and we conclude similarly.
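For the usual determinant, (1.9) is immediate to confirm symbolically (illustration with sympy):

    from sympy import Matrix, symbols

    a, b = symbols('a b')
    # A 2 x 2 matrix whose second row is 0 has determinant 0, as in (1.9).
    assert Matrix([[a, b], [0, 0]]).det() == 0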

Prove that D(A) = 0 whenever A is singular.

To prove this fact, we need to express the information "A is singular" by something more amenable to analysis. Let

    A = [a b; c d]

be a singular matrix. Since A is a square matrix, this is equivalent to saying that the system AX = 0 has a nontrivial solution X = (α, β)^T, with α ≠ 0 or β ≠ 0, i.e. that

    [a b; c d] [α; β] = [0; 0].    (1.10)

But this looks encouraging: we have a product of two matrices that gives a new matrix with a column which is 0. If we were dealing with square matrices, we could apply (1.1) and (1.9) and conclude. So let us try to make our column matrices into square matrices. A first trial gives

    AB = [a b; c d] [α γ; β δ] = [0 aγ+bδ; 0 cγ+dδ] = C,

and taking D of this equality gives D(A)D(B) = D(C) = 0, after using (1.1) and (1.9). To conclude, we just need to be able to say that D(B) ≠ 0. And using the result of the second question, we see that this is the case if B is invertible. Hence, we just need to choose γ, δ such that B is invertible. But det B = αδ - βγ, so we see that, choosing γ = -β and δ = α, we get an invertible matrix, since (α, β) was not trivial. To sum up, using (1.10), we get that

    AB = [a b; c d] [α -β; β α] = [0 bα-aβ; 0 dα-cβ],

and evaluating D on both sides and using (1.1) and (1.9), we get that

    D(AB) = D(A)D(B) = 0,

and since det B = α^2 + β^2 > 0 (recall that we may take F = R), B is invertible, so D(B) ≠ 0, and consequently D(A) = 0. Since we have made no assumption on A (except that it is noninvertible), we get that D(A) = 0 for all noninvertible matrices A, and the proof is complete.
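The construction in this proof is concrete enough to run numerically. Below is a minimal sympy sketch (the singular matrix and its null vector are hand-picked for the illustration) building B = [α -β; β α] and checking each step:

    from sympy import Matrix

    A = Matrix([[1, 2], [2, 4]])               # singular: rows are proportional
    alpha, beta = 2, -1                        # A * (alpha, beta)^T = 0
    assert A * Matrix([alpha, beta]) == Matrix([0, 0])

    B = Matrix([[alpha, -beta], [beta, alpha]])
    assert B.det() == alpha**2 + beta**2       # = 5, so B is invertible
    assert (A * B).col(0) == Matrix([0, 0])    # first column of AB is 0
    assert A.det() == 0                        # consistent with D(A) = 0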

2. Page 155, Exercises 1, 2, 3, 7

This section deals with the computation of some determinants. Remember that, to compute determinants, one can:

(1) Use the multilinearity with respect to rows or columns.
(2) Add a multiple of a row to another row, or a multiple of a column to another column, without changing the value of the determinant.
(3) Switch two rows or two columns. This multiplies the determinant by -1.
(4) Expand with respect to a row or a column.
(5) Use the fact that if two rows or columns are linearly dependent (in particular if they are the same, or one is 0!), the determinant is 0.
(6) Use the formula for a 2 × 2 determinant: det [a b; c d] = ad - bc.

Note that this last formula follows from item (4) and the fact that, for a 1 × 1 determinant, det(a) = a. We refer to the book for more precise formulas.

2.1. Problem 1. Compute the following determinant:

    det A = det [0 a b; -a 0 c; -b -c 0].

Here we have three parameters on which we have no information (except that they belong to a field). To get some intuition, let us first examine some particular cases. Looking at the matrix, it is easy to see that if a = 0, then the determinant is 0 (the first two rows are then linearly dependent). Similarly if b = 0 or c = 0. So we can assume that abc ≠ 0, and hence that all three scalars are invertible. But in this case, let us try to come back to the case when (say) a = 0. Using item (2) in the list of properties of the determinant above, we decide to delete the entry a by adding -a/b times the third column to the second. This gives

    det A = det [0 0 b; -a -ac/b c; -b -c 0].

To get back to a matrix of the original form, but with a = 0, we also need to cancel the first entry in the second row. To use the symmetry of the matrix, we proceed as before, but with rows; that is, we add -a/b times the third row to the second row. This gives

    det A = det [0 0 b; 0 0 c; -b -c 0],

and the first two columns of this matrix are obviously linearly dependent (and this is indeed a matrix of the same shape as A, but with a = 0). Thus, in all cases, det A = 0.
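A direct symbolic computation confirms the answer (illustration with sympy):

    from sympy import Matrix, symbols

    a, b, c = symbols('a b c')
    A = Matrix([[ 0,  a, b],
                [-a,  0, c],
                [-b, -c, 0]])
    assert A.det() == 0   # a 3 x 3 antisymmetric matrix has determinant 0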

2.2. Problem 2. Compute the determinant

    V(a, b, c) = det [1 a a^2; 1 b b^2; 1 c c^2]

of the Vandermonde matrix.

Using item (2) in the list of properties of the determinant, we use the first row to delete the other entries in the first column. This gives

    V(a, b, c) = det [1 a a^2; 0 b-a b^2-a^2; 0 c-a c^2-a^2].

Now, we use the fact that b^2 - a^2 = (b - a)(b + a), and similarly for c^2 - a^2, and item (1) (the multilinearity) above, to factor (b - a) out of the second row and (c - a) out of the third one. This gives

    V(a, b, c) = (b - a)(c - a) det [1 a a^2; 0 1 b+a; 0 1 c+a].

Expanding along the first column (item (4)) and using the formula for a 2 × 2 determinant (item (6)), we finally get that

    V(a, b, c) = (c - b)(c - a)(b - a).
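The factorization can be double-checked symbolically (illustration with sympy):

    from sympy import Matrix, symbols, expand

    a, b, c = symbols('a b c')
    V = Matrix([[1, a, a**2],
                [1, b, b**2],
                [1, c, c**2]])
    # det V agrees with the factored form (c - b)(c - a)(b - a).
    assert expand(V.det() - (c - b)*(c - a)*(b - a)) == 0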

2.3. Problem 3. Find all the elements of S_3, the symmetric group on 3 elements.

We need to find all the permutations of the set X = {1, 2, 3}; below, a permutation σ is written in one-line notation as (σ(1), σ(2), σ(3)). First, we can find the permutations that send 1 to 1 (thus, we consider such a permutation σ, and we assume that σ(1) = 1). For such a permutation, either the image of 2 is 2, in which case the image of 3 is 3 and we get the identity transformation (no shuffling), (1, 2, 3); or the image of 2 is 3, and we get the transposition which exchanges 2 and 3, τ_{23} = (1, 3, 2). Now, we can find the permutations that send 1 to 2, and we look at the image of 2. It is either 1 or 3. In the first case, we get (2, 1, 3) = τ_{12}, and in the latter, we get (2, 3, 1) = τ_{13}τ_{12}. Finally, the permutations that send 1 to 3 either send 2 to 2, in which case we get τ_{13} = (3, 2, 1), or send 2 to 1, and we obtain (3, 1, 2) = τ_{12}τ_{13}. In the end, we get all the permutations of S_3:

    Permutation:  Id   τ_{23}   τ_{12}   τ_{13}   (2, 3, 1)   (3, 1, 2)
    Sign:          1     -1       -1       -1         1           1

In particular, one can check that S_3 has cardinality 6 = 3!, and that for any two permutations σ_1, σ_2, there holds that Sign(σ_1 σ_2) = Sign(σ_1)Sign(σ_2).
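The enumeration and the multiplicativity of the sign can be checked programmatically. The sketch below (illustration only) represents permutations in one-line notation as tuples and computes the sign by counting inversions:

    from itertools import permutations

    def sign(p):
        """Sign of a permutation in one-line notation, via its inversion count."""
        n = len(p)
        inversions = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        return (-1) ** inversions

    def compose(p, q):
        """One-line notation for the composition p o q, i.e. x -> p(q(x))."""
        return tuple(p[q[i] - 1] for i in range(len(q)))

    S3 = list(permutations((1, 2, 3)))
    assert len(S3) == 6   # |S_3| = 3!

    # Sign is multiplicative: Sign(p o q) = Sign(p) Sign(q) for all pairs.
    assert all(sign(compose(p, q)) == sign(p) * sign(q) for p in S3 for q in S3)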

2.4. Problem 7. An n × n matrix M = (m_{ij})_{1≤i,j≤n} is called triangular if m_{ij} = 0 when i < j. Prove that if M is triangular, then det M = m_{11} m_{22} ... m_{nn}; that is to say, the determinant of M is the product of the diagonal entries of M.

To prove this, we first observe that this is true for a 2 × 2 matrix, using item (6) in the list above:

    det [a 0; b c] = ac.

And for a general triangular matrix, we see that, to compute the determinant, it is natural to expand with respect to the first row (item (4)). But this gives

    det A = det [a_{11} 0; X A_2] = a_{11} det A_2,    (2.1)

where the matrix is written in block form (0 stands for a row of zeros and X for a column),

and if A is a triangular matrix, A_2 is again a triangular matrix, but of smaller size, (n - 1) × (n - 1). So, suppose that the formula is true for all matrices of size n, and let A = (a_{ij})_{1≤i,j≤n+1} be a triangular matrix of size n + 1. Since A is triangular, we can define a new n × n matrix A_2 = (m_{ij})_{1≤i,j≤n} such that A has the shape in (2.1), that is,

    m_{ij} = a_{i+1,j+1}.

With this formula, we see that if i < j, then i + 1 < j + 1, and hence m_{ij} = a_{i+1,j+1} = 0, so that A_2 is also triangular. Using the equality in (2.1), and the fact that the formula is true for n × n matrices (hence for A_2), we get

    det A = a_{11} det A_2 = a_{11} a_{22} a_{33} ... a_{n+1,n+1}.

This is the formula for (n + 1) × (n + 1) matrices. Consequently, the formula is true for triangular matrices of all sizes (it is true for matrices of size 2 × 2, hence for matrices of size 3 × 3, hence for matrices of size 4 × 4, ...).
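The statement is easy to test symbolically for a particular size, say 3 × 3 (illustration with sympy):

    from sympy import Matrix, symbols

    m11, m21, m22, m31, m32, m33 = symbols('m11 m21 m22 m31 m32 m33')
    M = Matrix([[m11,   0,   0],
                [m21, m22,   0],
                [m31, m32, m33]])
    # The determinant is the product of the diagonal entries.
    assert M.det() == m11 * m22 * m33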
Brown University
E-mail address: Benoit.Pausader@math.brown.edu
