
MAT 224 Linear Algebra | Tutorial 5 | 21.10.13

Eigenproblems, Diagonalisation,
and Inner Product Spaces
Nikita Nikolaev

Problem 5.1
Every matrix $A \in \mathrm{Mat}_{3\times 3}(\mathbb{R})$ has at least one real eigenvalue.
Proof.
In short, the reason for this is the fact that the characteristic polynomial of any real $3 \times 3$ matrix is a real polynomial of degree exactly three.
To expand on this, if $A \in \mathrm{Mat}_{3\times 3}(\mathbb{R})$, then the eigenvalues of $A$ are solutions of the characteristic polynomial equation in $\lambda$; namely,
\[ \det(A - \lambda\,\mathrm{Id}) = 0, \]
where $\mathrm{Id} \in \mathrm{Mat}_{3\times 3}(\mathbb{R})$ is the identity $3 \times 3$ matrix. This polynomial equation is of degree exactly three, hence it has exactly three (not necessarily distinct) roots (by the Fundamental Theorem of Algebra). Since complex roots of polynomials with real coefficients occur in conjugate pairs (this is the statement of the Complex Conjugate Root Theorem), if all three roots $\lambda_1, \lambda_2, \lambda_3$ were complex, then their complex conjugates $\bar{\lambda}_1, \bar{\lambda}_2, \bar{\lambda}_3$ would also be roots of the same polynomial. Since the total number of roots is three, we must have the equality of (unordered) sets $\{\lambda_1, \lambda_2, \lambda_3\} = \{\bar{\lambda}_1, \bar{\lambda}_2, \bar{\lambda}_3\}$, whence it follows that at least one of these roots must equal its complex conjugate; i.e., at least one root must be real. Q.E.D.
Note: the reason the above argument worked is that the integer 3 is odd; you can write down the same proof with any other odd integer in place of 3.
Note: but what if I, say, take the matrix
\[ \begin{pmatrix} i & & \\ & 2i & \\ & & 3i \end{pmatrix}; \]
surely it has three complex eigenvalues $i, 2i, 3i$, so what's wrong? This is true, the eigenvalues are all complex, but this matrix is an element of $\mathrm{Mat}_{3\times 3}(\mathbb{C})$, not $\mathrm{Mat}_{3\times 3}(\mathbb{R})$, hence the degree-three characteristic polynomial has complex coefficients, in which case all bets are off.
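The claim is easy to check numerically. Below is a minimal sketch (the matrix is a hypothetical example, chosen so that a rotation block produces a conjugate pair of eigenvalues while the odd dimension forces a real one):

```python
import numpy as np

# Hypothetical example: a 2x2 rotation block paired with a stretch by 2.
A = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 2.0]])

eigenvalues = np.linalg.eigvals(A)

# The rotation block contributes the conjugate pair +/- i,
# but a real 3x3 matrix must keep at least one real eigenvalue.
real_ones = [ev for ev in eigenvalues if abs(ev.imag) < 1e-10]
print(real_ones)  # contains the real eigenvalue 2
```

Here the spectrum is $\{i, -i, 2\}$: the non-real eigenvalues pair off, and the leftover root is real, exactly as the parity argument predicts.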

MAT 224 Linear Algebra | Tutorial 5 | 21.10.13

page 2

Problem 5.2
Let: $A \in \mathrm{Mat}_{n\times n}(\mathbb{F})$, let $\lambda$ be an eigenvalue of $A$, and let $p(x) \in P_m(\mathbb{F})$ be any polynomial.
Then: $p(\lambda)$ is an eigenvalue of $p(A)$.
Proof.
Any polynomial $p(x) \in P_m(\mathbb{F})$ is an $\mathbb{F}$-linear combination of monomials. Thus, to establish our claim, it is sufficient to prove the following statements:
(1) If $m \in \mathbb{Z}_{>0}$, then $\lambda^m$ is an eigenvalue of $A^m$.
(2) If $a \in \mathbb{F}$, then $a\lambda$ is an eigenvalue of $aA$.
(3) If $a, b \in \mathbb{F}$ and $m, k \in \mathbb{Z}_{>0}$, then $a\lambda^m + b\lambda^k$ is an eigenvalue of $aA^m + bA^k$.
Proof of (1): Let $v \in \mathbb{F}^n$ be an eigenvector of $A$ with eigenvalue $\lambda$; that is,
\[ Av = \lambda v. \]
Applying $A$ to both sides of this equation one more time gives
\[ A^2 v = A(\lambda v) = \lambda A v = \lambda^2 v. \]
Applying $A$ repeatedly, we therefore get
\[ A^m v = \lambda^m v, \]
so $\lambda^m$ is an eigenvalue of $A^m$.
Proof of (2): Again, let $v \in \mathbb{F}^n$ be an eigenvector of $A$ with eigenvalue $\lambda$, so that $Av = \lambda v$. Then
\[ (aA)v = aAv = a\lambda v, \]
so $a\lambda$ is an eigenvalue of $aA$.
Proof of (3): If $v \in \mathbb{F}^n$ is an eigenvector of $A$ with eigenvalue $\lambda$, then by parts (1) and (2), $a\lambda^m$ is an eigenvalue of $aA^m$, and $b\lambda^k$ is an eigenvalue of $bA^k$, both corresponding to the eigenvector $v$. Then
\[ \big(aA^m + bA^k\big)v = (aA^m)v + (bA^k)v = a\lambda^m v + b\lambda^k v = \big(a\lambda^m + b\lambda^k\big)v, \]
which completes the proof.

Q.E.D.
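As a sanity check, the conclusion can be verified numerically. The polynomial $p(x) = 2x^2 + 3x + 1$ and the matrix below are hypothetical choices made for illustration; the matrix is upper triangular so its eigenvalues $2$ and $3$ can be read off the diagonal:

```python
import numpy as np

# Hypothetical choices: p(x) = 2x^2 + 3x + 1 and an upper-triangular A
# with eigenvalues 2 and 3 on the diagonal.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
p = lambda x: 2 * x**2 + 3 * x + 1

# p(A) = 2A^2 + 3A + Id, substituting A into each monomial of p
pA = 2 * np.linalg.matrix_power(A, 2) + 3 * A + np.eye(2)

eigs_A = np.linalg.eigvals(A)    # {2, 3}
eigs_pA = np.linalg.eigvals(pA)  # {p(2), p(3)} = {15, 28}
print(sorted(eigs_pA.real))
```

The eigenvalues of $p(A)$ come out as exactly $p(2) = 15$ and $p(3) = 28$, as the problem asserts.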

Observe: if a matrix $A \in \mathrm{Mat}_{n\times n}(\mathbb{F})$ is nilpotent, then all of its eigenvalues are zero.
Indeed: since $A$ is nilpotent, it follows by definition that $A^k = 0$ for some integer power $k > 0$. Take the polynomial $p(x) := x^k$. By what we've just established, if $\lambda$ is an eigenvalue of $A$, then $p(\lambda)$ is an eigenvalue of $p(A)$; that is, $\lambda^k$ is an eigenvalue of $A^k = 0$, the zero matrix. This means that $\lambda^k = 0$, so $\lambda = 0$.
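This observation can also be seen concretely on a small example (the matrix below is a hypothetical one; any strictly upper-triangular matrix is nilpotent):

```python
import numpy as np

# A strictly upper-triangular matrix is nilpotent: here N^3 = 0.
N = np.array([[0.0, 1.0, 5.0],
              [0.0, 0.0, 2.0],
              [0.0, 0.0, 0.0]])

assert not np.any(np.linalg.matrix_power(N, 3))  # N^3 is the zero matrix
print(np.linalg.eigvals(N))  # every eigenvalue is 0
```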


Problem 5.3
If $A \in \mathrm{Mat}_{n\times n}(\mathbb{C})$ is diagonalisable, then $A^k$ is diagonalisable for every $k \in \mathbb{Z}_{>0}$.
Proof.
By definition, a matrix $A$ is diagonalisable if there exists an invertible matrix $P \in \mathrm{Mat}_{n\times n}(\mathbb{C})$ and a diagonal matrix $D \in \mathrm{Mat}_{n\times n}(\mathbb{C})$ such that $A = P^{-1} D P$. Then we have the following computation:
\[ A^k = A \cdot A \cdot A^{k-2} = (P^{-1}DP)(P^{-1}DP)A^{k-2} = P^{-1}D^2 P A^{k-2} = \dots = P^{-1} D^k P, \]
so $A^k$ is diagonalisable.

Q.E.D.

Note: Since the diagonal matrix $D$ is the matrix of eigenvalues of $A$, this computation is another way to see claim (1) of the previous problem's proof.
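A quick numerical sketch of the computation, with hypothetical choices of $P$ and $D$:

```python
import numpy as np

# Illustrative diagonalisation A = P^{-1} D P with hypothetical P and D.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])
D = np.diag([2.0, 5.0])
A = np.linalg.inv(P) @ D @ P

k = 4
lhs = np.linalg.matrix_power(A, k)                       # A^k directly
rhs = np.linalg.inv(P) @ np.diag([2.0**k, 5.0**k]) @ P   # P^{-1} D^k P
print(np.allclose(lhs, rhs))  # True
```

So $A^k$ is conjugate, via the same $P$, to the diagonal matrix $D^k$ whose entries are the $k$-th powers of the eigenvalues of $A$.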
Problem 5.4 (The Polarisation Identity)
Let: $(V, \langle \cdot, \cdot \rangle)$ be an inner product space over $\mathbb{C}$.
Then: for all vectors $x, y \in V$, the following identity holds:
\[ \langle x, y \rangle = \frac{1}{4}\Big( \|x+y\|^2 - \|x-y\|^2 + i\,\|x+iy\|^2 - i\,\|x-iy\|^2 \Big). \]
Proof.
Just expand the right-hand side like so:
\[ \|x+y\|^2 - \|x-y\|^2 = \langle x+y, x+y \rangle - \langle x-y, x-y \rangle = 2\langle x, y \rangle + 2\langle y, x \rangle. \]
Similarly, using that the inner product is conjugate-linear in its second argument, so that $\langle x, iy \rangle = \bar{i}\,\langle x, y \rangle = -i\langle x, y \rangle$,
\[ \|x+iy\|^2 - \|x-iy\|^2 = 2\langle x, iy \rangle + 2\langle iy, x \rangle = -2i\langle x, y \rangle + 2i\langle y, x \rangle. \]
Thus, the right-hand side is
\[ \frac{1}{4}\Big( 2\langle x, y \rangle + 2\langle y, x \rangle + i\big( {-2i}\langle x, y \rangle + 2i\langle y, x \rangle \big) \Big) = \frac{1}{4}\big( 4\langle x, y \rangle \big) = \langle x, y \rangle. \]

Q.E.D.
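The identity is easy to test numerically on random complex vectors. This is a minimal sketch; the inner product is implemented linear in the first slot and conjugate-linear in the second, to match the convention used in the proof above:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# <u, v>: linear in the FIRST slot, conjugate-linear in the second.
inner = lambda u, v: np.sum(u * np.conj(v))
norm2 = lambda u: inner(u, u).real  # ||u||^2

rhs = 0.25 * (norm2(x + y) - norm2(x - y)
              + 1j * norm2(x + 1j * y) - 1j * norm2(x - 1j * y))
print(np.isclose(inner(x, y), rhs))  # True
```

Note that with the opposite convention (conjugate-linear in the first slot), the $+i$ and $-i$ terms in the identity swap places, so the convention matters.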


Problem 5.5 (The Cauchy-Schwarz Inequality)
Let: $(V, \langle \cdot, \cdot \rangle)$ be an inner product space over $\mathbb{C}$.
Then: for all $x, y \in V$,
\[ \big| \langle x, y \rangle \big| \leqslant \|x\|\,\|y\|, \]
with equality if and only if $x, y$ are linearly dependent.
Proof.
If $\langle x, y \rangle = 0$, then the statement is obviously true, so we may assume that $\langle x, y \rangle \neq 0$. In that case, at least one of $x, y$ must be non-zero; say, $y \neq 0$.
Let: $s := \langle x, y \rangle \big/ \big|\langle x, y \rangle\big|$, a complex number of modulus $1$ (when $\langle x, y \rangle$ happens to be real, $s$ is just its sign: $+1$ if $\langle x, y \rangle > 0$, and $-1$ if $\langle x, y \rangle < 0$). Then: notice that $\bar{s}\,\langle x, y \rangle = \big|\langle x, y \rangle\big|$.
Let: $z := sy$. Then: $\|z\| = \|y\|$, and $\big|\langle x, y \rangle\big| = \bar{s}\,\langle x, y \rangle = \langle x, z \rangle$, using conjugate-linearity in the second argument.
For any $t \in \mathbb{R}$, we have the following:
\[ 0 \leqslant \|x - tz\|^2 = \langle x - tz, x - tz \rangle = \|x\|^2 - 2t\langle x, z \rangle + t^2\|z\|^2 = \|x\|^2 - 2\big|\langle x, y \rangle\big|\, t + \|y\|^2 t^2. \tag{$*$} \]
(In the middle step we used that $\langle x, z \rangle = \big|\langle x, y \rangle\big|$ is real, so $\langle z, x \rangle = \langle x, z \rangle$.)
The right-hand side is a quadratic polynomial in $t$, and you can calculate (using ordinary methods of calculus in one variable) that its absolute minimum is achieved at
\[ t = \frac{\big|\langle x, y \rangle\big|}{\|y\|^2}. \]


Substituting this value of $t$ into the inequality $0 \leqslant \|x\|^2 - 2\big|\langle x, y \rangle\big| t + \|y\|^2 t^2$ gives
\[ 0 \leqslant \|x\|^2 - 2\big|\langle x, y \rangle\big| \frac{\big|\langle x, y \rangle\big|}{\|y\|^2} + \|y\|^2 \frac{\big|\langle x, y \rangle\big|^2}{\|y\|^4} = \|x\|^2 - \frac{\big|\langle x, y \rangle\big|^2}{\|y\|^2}. \]
Multiplying through by $\|y\|^2$ and rearranging the terms gives $\big|\langle x, y \rangle\big|^2 \leqslant \|x\|^2 \|y\|^2$, whence the Cauchy-Schwarz inequality follows immediately.
For the equality, observe that the inequality $(*)$ is actually an equality if and only if $x = tz = tsy$; i.e., if and only if $x, y$ are linearly dependent (since $ts$ is a scalar).
Q.E.D.
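Both parts of the statement, the inequality and the equality case, can be checked numerically. A minimal sketch on random complex vectors (the dependent pair $z = (2-3i)y$ is a hypothetical choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
y = rng.standard_normal(5) + 1j * rng.standard_normal(5)

inner = lambda u, v: np.sum(u * np.conj(v))  # linear in the first slot
norm = lambda u: np.sqrt(inner(u, u).real)

# The inequality holds for any pair of vectors ...
assert abs(inner(x, y)) <= norm(x) * norm(y)

# ... and becomes an equality for linearly dependent ones, e.g. z = (2 - 3i)y.
z = (2 - 3j) * y
print(np.isclose(abs(inner(z, y)), norm(z) * norm(y)))  # True
```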
