Math 115B, Winter 2007: Homework 3

Exercise 3.1. (FIS, 6.5 Exercise 10) Let $A$ be an $n \times n$ real symmetric [complex normal] matrix. Then
\[
\operatorname{tr}(A) = \sum_{i=1}^{n} \lambda_i, \qquad
\operatorname{tr}(A^*A) = \sum_{i=1}^{n} |\lambda_i|^2,
\]
where the $\lambda_i$'s are the (not necessarily distinct) eigenvalues of $A$.

Proof. By Theorems 6.19 and 6.20 from FIS, $A$ is orthogonally [unitarily] equivalent to a diagonal matrix $D$ with the eigenvalues of $A$ on the diagonal, i.e. there exists an orthogonal [unitary] matrix $P$ such that $A = P^*DP$. Since similar matrices have the same trace, $\operatorname{tr}(A) = \operatorname{tr}(P^*DP)$. Moreover, for any square matrices $M$ and $N$, $\operatorname{tr}(MN) = \operatorname{tr}(NM)$, so in particular,
\[
\operatorname{tr}(P^*DP) = \operatorname{tr}((P^*D)P) = \operatorname{tr}(P(P^*D)) = \operatorname{tr}((PP^*)D) = \operatorname{tr}(ID) = \operatorname{tr}(D) = \sum_{i=1}^{n} \lambda_i.
\]
For the second claim, note that $A^* = (P^*DP)^* = P^*D^*(P^*)^* = P^*D^*P$, and that the diagonal entries of $D^*$ are the $\overline{\lambda_i}$'s. This implies that $DD^*$ is a diagonal matrix with entries $|\lambda_i|^2$, so that we have
\[
\operatorname{tr}(A^*A) = \operatorname{tr}(P^*D^*PP^*DP) = \operatorname{tr}(P^*D^*DP) = \operatorname{tr}(DPP^*D^*) = \operatorname{tr}(DD^*) = \sum_{i=1}^{n} |\lambda_i|^2.
\]
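As a quick sanity check (an illustrative example of my own, not part of the exercise), consider a real symmetric $2 \times 2$ matrix with eigenvalues $1$ and $-1$:

```latex
% Concrete check of Exercise 3.1: A is real symmetric, eigenvalues 1 and -1.
\[
A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad
\operatorname{tr}(A) = 0 = 1 + (-1), \qquad
\operatorname{tr}(A^*A) = \operatorname{tr}(A^2) = \operatorname{tr}(I_2) = 2 = |1|^2 + |{-1}|^2.
\]
```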

Exercise 3.2. (FIS, 6.5 Exercise 31) Let $V$ be a finite-dimensional complex inner product space, and let $u$ be a unit vector in $V$. Define the Householder operator $H_u : V \to V$ by $H_u(x) = x - 2\langle x, u\rangle u$ for all $x \in V$.

(a) $H_u$ is linear.
(b) $H_u(x) = x$ if and only if $\langle x, u\rangle = 0$.
(c) $H_u(u) = -u$.
(d) $H_u^* = H_u$ and $H_u^2 = I_V$, and hence $H_u$ is a unitary operator on $V$.

Note that if $V$ is a real inner product space, $H_u$ is a reflection.

Proof. (a) For any $a \in \mathbb{C}$ and $x, y \in V$,
\[
H_u(ax + y) = ax + y - 2\langle ax + y, u\rangle u = ax + y - 2a\langle x, u\rangle u - 2\langle y, u\rangle u = a(x - 2\langle x, u\rangle u) + (y - 2\langle y, u\rangle u) = aH_u(x) + H_u(y).
\]

(b) For any $x \in V$,
\[
H_u(x) = x \iff x - 2\langle x, u\rangle u = x \iff 2\langle x, u\rangle u = 0 \iff \langle x, u\rangle = 0.
\]

(c) $H_u(u) = u - 2\langle u, u\rangle u = u - 2u = -u$.

(d) For any $x$ and $y \in V$, we have
\begin{align*}
\langle x, H_u(y)\rangle &= \langle x, y - 2\langle y, u\rangle u\rangle = \langle x, y\rangle - 2\overline{\langle y, u\rangle}\langle x, u\rangle \\
&= \langle x, y\rangle - 2\langle u, y\rangle\langle x, u\rangle = \langle x - 2\langle x, u\rangle u, y\rangle = \langle H_u(x), y\rangle,
\end{align*}
so the uniqueness of the adjoint guarantees that $H_u^* = H_u$. To see that $H_u^2 = I_V$, simply compute
\[
H_u^2(x) = H_u(x - 2\langle x, u\rangle u) = H_u(x) - 2\langle x, u\rangle H_u(u) = (x - 2\langle x, u\rangle u) + 2\langle x, u\rangle u = x = I_V(x).
\]
Hence $H_u^*H_u = H_u^2 = I_V$, i.e. $H_u$ is unitary.

Exercise 3.3. (FIS, 6.6 Exercise 6) Let $T$ be a normal operator on a finite-dimensional inner product space. If $T$ is a projection, then $T$ is also an orthogonal projection.

Proof. An orthogonal projection is simply a self-adjoint projection, so it suffices to prove that $T$ is self-adjoint. As a corollary to the spectral theorem, we know that a normal operator is self-adjoint if and only if its eigenvalues are real. The eigenvalues of a projection are real. Indeed, if $\lambda$ is an eigenvalue and $x$ an eigenvector for $\lambda$, then $\lambda x = T(x) = T^2(x) = \lambda^2 x$, therefore $\lambda(\lambda - 1) = 0$, i.e. $\lambda = 0$ or $1$. Hence $T$ is an orthogonal projection.
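To see that normality cannot be dropped, here is a non-example (my own, not from FIS): a projection that is not orthogonal because it is not normal.

```latex
% T(x, y) = (x + y, 0) projects onto span{e_1} along span{(1, -1)};
% T^2 = T, but T is not normal, so Exercise 3.3 does not apply:
\[
T = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}, \qquad
TT^* = \begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix} \neq
\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} = T^*T.
\]
```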

Exercise 3.4. (FIS, 6.6 Exercise 7) Let $T$ be a normal operator on a finite-dimensional complex inner product space $V$. Using the spectral decomposition $T = \lambda_1 T_1 + \dots + \lambda_k T_k$, we prove that

(a) If $g$ is a polynomial, then
\[
g(T) = \sum_{i=1}^{k} g(\lambda_i) T_i.
\]
(b) If $T^n = 0$ for some $n$, then $T = 0$.
(c) A linear operator $U : V \to V$ commutes with $T$ if and only if $U$ commutes with each $T_i$.
(d) There exists a normal operator $U : V \to V$ such that $U^2 = T$.
(e) $T$ is invertible if and only if $\lambda_i \neq 0$ for $1 \le i \le k$.
(f) $T$ is a projection if and only if every eigenvalue of $T$ is $1$ or $0$.

Proof. (a) We proceed by induction on $\deg(g)$. For constant polynomials, the result is trivial. Let $g = a_nX^n + \dots + a_0$ and suppose that the result holds for any polynomial of degree at most $n - 1$. Recall that the $T_i$'s are orthogonal projections onto the distinct eigenspaces, so $T_rT_s = \delta_{rs}T_r$. Then
\begin{align*}
g(T) &= a_nT^n + (a_{n-1}T^{n-1} + \dots + a_0) \\
&= a_nT\,T^{n-1} + \bigl(a_{n-1}(\lambda_1^{n-1}T_1 + \dots + \lambda_k^{n-1}T_k) + \dots + a_0\bigr) \\
&= a_n\Bigl(\sum_{r=1}^{k}\lambda_rT_r\Bigr)\Bigl(\sum_{s=1}^{k}\lambda_s^{n-1}T_s\Bigr) + \sum_{i=0}^{n-1}a_i\sum_{j=1}^{k}\lambda_j^iT_j \\
&= a_n\sum_{r,s=1}^{k}\lambda_r\lambda_s^{n-1}T_rT_s + \sum_{i=0}^{n-1}\sum_{j=1}^{k}a_i\lambda_j^iT_j \\
&= \sum_{r=1}^{k}(a_n\lambda_r^n)T_r + \sum_{i=0}^{n-1}\sum_{j=1}^{k}a_i\lambda_j^iT_j \\
&= \sum_{i=0}^{n}\sum_{j=1}^{k}a_i\lambda_j^iT_j = \sum_{j=1}^{k}g(\lambda_j)T_j.
\end{align*}
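For instance (an illustration with hypothetical eigenvalues, not from FIS), take $k = 2$, $\lambda_1 = 2$, $\lambda_2 = 3$, and $g(X) = X^2$:

```latex
% The cross terms vanish because T_1T_2 = T_2T_1 = 0 and T_i^2 = T_i:
\[
T^2 = (2T_1 + 3T_2)^2 = 4T_1^2 + 6T_1T_2 + 6T_2T_1 + 9T_2^2
    = 4T_1 + 9T_2 = g(2)\,T_1 + g(3)\,T_2.
\]
```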

(b) This follows from the fact that $T_1, \dots, T_k$ are linearly independent in $\hom_F(V, V)$. Indeed, since $T_i$ maps $V$ to $W_i$, every $T_i(v)$ can be written uniquely as a linear combination $\sum_j a_{i,j}v_{i,j}$ of the (finitely many) basis vectors $\{v_{i,j}\}$ for $W_i$. Then we have
\[
0 = T^n(v) = \lambda_1^nT_1(v) + \dots + \lambda_k^nT_k(v) = \sum_{i=1}^{k}\lambda_i^n\sum_j a_{i,j}v_{i,j}
\;\Longrightarrow\; \lambda_i^n a_{i,j} = 0 \quad\text{for } i = 1, 2, \dots, k.
\]
Since this holds for every $v \in V$, we may select $v$ so that for each $i$, at least one $a_{i,j} \neq 0$. Then we must have $\lambda_i^n = 0$ for all $i = 1, \dots, k$. In particular, $\lambda_i = 0$ for each $i$, so that the eigenvalues of $T$ are all zero. Hence $T = 0$.

(c) If $U$ commutes with each $T_i$, then
\[
UT = \sum_{i=1}^{k}\lambda_i\,UT_i = \sum_{i=1}^{k}\lambda_i\,T_iU = TU.
\]
Conversely, suppose $UT = TU$. Choosing a polynomial $g_i$ with $g_i(\lambda_j) = \delta_{ij}$ (Lagrange interpolation), part (a) gives $g_i(T) = \sum_j \delta_{ij}T_j = T_i$; since $U$ commutes with $T$, it commutes with every polynomial in $T$, so $UT_i = T_iU$ for $i = 1, 2, \dots, k$.
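Normality is essential in (b); a standard non-example (mine, not from FIS) is the nonzero nilpotent operator below.

```latex
% N^2 = 0 although N != 0; N is not normal, so (b) does not apply:
\[
N = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad N^2 = 0, \qquad
NN^* = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \neq
\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = N^*N.
\]
```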

(d) Every complex number has a square root, so setting $U = \sum_{i=1}^{k}\sqrt{\lambda_i}\,T_i$, one sees immediately, using the relations $T_rT_s = \delta_{rs}T_r$ from (a), that
\[
U^2 = \sum_{i=1}^{k}(\sqrt{\lambda_i})^2\,T_i = \sum_{i=1}^{k}\lambda_iT_i = T.
\]
Moreover, since each $T_i$ is self-adjoint, $U^* = \sum_{i=1}^{k}\overline{\sqrt{\lambda_i}}\,T_i$, which obviously commutes with each $T_i$ and hence with $U$; therefore $UU^* = U^*U$, and $U$ is normal.

(e) This one is almost tautological, since an eigenvector is by definition nonzero:
\[
T \text{ is invertible} \iff \ker(T) = \{0\} \iff 0 \text{ is not an eigenvalue of } T \iff \lambda_i \neq 0 \text{ for } 1 \le i \le k.
\]
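A one-line illustration of (d) (my example, not from FIS), showing why working over $\mathbb{C}$ matters:

```latex
% On V = C, the normal operator T(z) = -z has the normal square root U(z) = iz:
\[
U^2(z) = i^2z = -z = T(z),
\]
% a square root that exists because every complex number, here -1, has one.
```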

(f) As we saw in (b), the $T_i$'s are linearly independent, so that (using $T^2 = \sum_i \lambda_i^2T_i$, by (a))
\[
T^2 - T = \sum_{i=1}^{k}(\lambda_i^2 - \lambda_i)T_i = 0 \iff \lambda_i^2 - \lambda_i = 0 \quad\text{for } i = 1, 2, \dots, k.
\]
In other words, $T^2 = T$ if and only if every $\lambda_i$ is a root of $X(X - 1)$, i.e. $T^2 = T$ iff $\lambda_i = 0$ or $1$.

Exercise 3.5. (FIS, 6.6 Exercise 8) If $T$ is a normal operator on a complex finite-dimensional inner product space and $U$ is a linear operator that commutes with $T$, then $U$ commutes with $T^*$.

Proof. As a corollary to the Spectral Theorem, we know that $T^* = g(T)$ for some polynomial $g$. If $g(X) = a_nX^n + \dots + a_0$, then, repeatedly using $UT = TU$,
\[
UT^* = Ug(T) = U(a_nT^n + \dots + a_0I) = a_nUT^n + \dots + a_0U = a_n(TU)T^{n-1} + \dots + a_0U = a_nT^nU + \dots + a_0U = g(T)U = T^*U.
\]
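A concrete instance of the corollary $T^* = g(T)$ (my example, not from FIS):

```latex
% T = diag(i, -i) is unitary, hence normal, and T^4 = I, so T^* = T^{-1} = T^3:
\[
T = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix}, \qquad
T^* = \begin{pmatrix} -i & 0 \\ 0 & i \end{pmatrix} = T^3, \qquad g(X) = X^3,
\]
% so any U with UT = TU automatically satisfies UT^3 = T^3U, i.e. UT^* = T^*U.
```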

Exercise 3.6. (FIS, 6.6 Exercise 10) Simultaneous Diagonalization. Let $U$ and $T$ be normal operators on a finite-dimensional complex inner product space $V$ such that $TU = UT$. Prove that there exists an orthonormal basis for $V$ consisting of vectors that are eigenvectors of both $T$ and $U$.

Proof. Let $\lambda_1, \dots, \lambda_k$ be the eigenvalues of $T$, and set $W_i = \ker(T - \lambda_iI)$. Each $W_i$ is trivially $T$- and $T^*$-invariant. That $W_i$ is $U$-invariant follows directly from the fact that $U$ and $T - \lambda_iI_V$ commute. Indeed,
\[
(T - \lambda_iI_V)U = TU - \lambda_iU = UT - \lambda_iU = U(T - \lambda_iI_V).
\]
Whenever $x \in W_i$, $(T - \lambda_iI_V)(U(x)) = U(T - \lambda_iI_V)(x) = U(0) = 0$, i.e. $U(x) \in W_i$. By Exercise 3.5, $U$ commutes with $T^*$, and taking adjoints in $UT^* = T^*U$ gives $TU^* = U^*T$; so, replacing $U$ by $U^*$ in the above two lines, one sees that $W_i$ is also $U^*$-invariant. Since $W_i$ is both $T^*$- and $U^*$-invariant, $W_i^\perp$ is both $T$- and $U$-invariant. This allows us to consider $T$ and $U$ as operators on each $W_i$ and $W_i^\perp$.

From this point, the proof proceeds by induction on $\dim(V)$. If $\dim(V) = 1$ there is nothing to prove: every operator is diagonalizable. Suppose that the result holds for any space of dimension strictly less than $\dim(V)$. If $k = 1$, then $T = \lambda_1I_V$ and any orthonormal basis of eigenvectors of the normal operator $U$ works; otherwise $\dim(W_1) < \dim(V)$ and $\dim(W_1^\perp) < \dim(V)$. By the induction hypothesis, $W_1$ and $W_1^\perp$ have orthonormal bases $\{u_i\}_{i \in I}$ and $\{w_j\}_{j \in J}$ consisting of simultaneous eigenvectors for $T$ and $U$. It suffices to prove that the union of these two bases is a basis for $V$ (it is orthonormal, since $W_1 \perp W_1^\perp$). Every vector $v \in V$ has a unique expression $v = u + w$ for some $u \in W_1$ and $w \in W_1^\perp$ (see Theorem 6.6, FIS), so that the expression $v = \sum_i a_iu_i + \sum_j b_jw_j$ is unique. In particular, the zero vector is a unique linear combination, so that $\beta = \{u_i\}_{i \in I} \cup \{w_j\}_{j \in J}$ is linearly independent and spans $V$. Hence $\beta$ is an orthonormal basis for $V$ consisting of simultaneous eigenvectors for $T$ and $U$.
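As a small illustration (my example, not from FIS), two commuting normal operators on $\mathbb{C}^2$ sharing an orthonormal eigenbasis:

```latex
% U and T = U + I commute; (1,1)/sqrt(2) and (1,-1)/sqrt(2) are
% simultaneous eigenvectors of both.
\[
U = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad
T = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}: \qquad
U\begin{pmatrix}1\\1\end{pmatrix} = \begin{pmatrix}1\\1\end{pmatrix}, \;
T\begin{pmatrix}1\\1\end{pmatrix} = 2\begin{pmatrix}1\\1\end{pmatrix}, \;
U\begin{pmatrix}1\\-1\end{pmatrix} = -\begin{pmatrix}1\\-1\end{pmatrix}, \;
T\begin{pmatrix}1\\-1\end{pmatrix} = 0.
\]
```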
