
MA 2030 - Linear Algebra and Numerical Analysis

Arindama Singh & A. V. Jayanthan

A Question

Let A = [0 1; 1 0]. Here, A : R² → R².

It may transform straight lines to straight lines or points.


Get me a straight line which is transformed to itself.

A [x; y] = [0 1; 1 0] [x; y] = [y; x].

Thus, the line {(x, x) : x ∈ R} never moves.


So also the line {(x, −x) : x ∈ R}.
Observe:

A [x; x] = 1 [x; x]   and   A [x; −x] = (−1) [x; −x].
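The two invariant lines can be checked numerically; here is a minimal sketch using NumPy (the library choice is ours, not the slides'):

```python
import numpy as np

# The matrix A from the slide; it swaps the two coordinates.
A = np.array([[0, 1],
              [1, 0]])

v1 = np.array([2, 2])    # a point on the line y = x
v2 = np.array([3, -3])   # a point on the line y = -x

print(A @ v1)   # unchanged: eigenvalue 1
print(A @ v2)   # negated: eigenvalue -1
```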

Eigenvalues & Eigenvectors


Definition 3: Let V be a vector space over F, and T : V → V be
a linear operator. By an eigenvector associated with the eigenvalue
λ ∈ F we mean a nonzero vector v ∈ V with the property that
Tv = λv.
Example 3: Consider the map T : R³ → R³ defined by
T(a, b, c) = (a, a + b, a + b + c). It has an eigenvector (0, 0, 1)
associated with the eigenvalue 1. Is (0, 0, c) also an eigenvector
associated with the same eigenvalue 1?
Example 4: Let T : P → P be defined by T(p(t)) = t p(t). Then
T has no eigenvector and no eigenvalue.
Example 5: Let T : P[0, 1] → P[0, 1] be defined by
T(p(t)) = p′(t). Then, since the derivative of a constant polynomial is
0, which equals 0 times the polynomial, the eigenvectors are the
nonzero constant polynomials and the associated eigenvalue is 0.

Some Facts
Proposition 7: A vector v ≠ 0 is an eigenvector of T : V → V for
the eigenvalue λ ∈ F iff v ∈ N(T − λI). Thus λ is an eigenvalue of
T iff the operator T − λI is not one-one.
Proof. Tv = λv iff (T − λI)v = 0 iff v ∈ N(T − λI).
Now, λ is an eigenvalue of T iff there exists a nonzero
v ∈ N(T − λI) iff T − λI is not one-one.

Proposition 8: Let V over F be finite dimensional. Let A be the
matrix of an operator T : V → V with respect to some basis.
Then λ ∈ F is an eigenvalue of T iff det(A − λI) = 0.
Proof. Let dim(V) = n. In the same basis, the matrix of T − λI is
A − λI. Now, T − λI is not one-one iff A − λI is not one-one iff
rank(A − λI) < n iff det(A − λI) = 0.


Characteristic Polynomial

Definition 4: The polynomial det(A − λI) in the variable λ is
called the characteristic polynomial of the matrix A.
Example 6: For T : R³ → R³ with
T(a, b, c) = (a, a + b, a + b + c), the matrix in the standard basis is
A = [1 0 0; 1 1 0; 1 1 1], so

det(A − λI) = (1 − λ)³ = 0  ⇒  λ = 1, 1, 1.

Solving T(a, b, c) = 1 · (a, b, c), we then get the eigenvectors (0, 0, c).
Of course, c ≠ 0.
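A quick NumPy check of Example 6 (a sketch, not part of the slides):

```python
import numpy as np

# Matrix of T(a, b, c) = (a, a+b, a+b+c) in the standard basis
A = np.array([[1., 0., 0.],
              [1., 1., 0.],
              [1., 1., 1.]])

print(np.linalg.eigvals(A))          # all three eigenvalues equal 1

# (0, 0, c) with c != 0 is an eigenvector for eigenvalue 1
v = np.array([0., 0., 4.])
print(np.allclose(A @ v, 1.0 * v))   # True
```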

In R or in C?
Thus, eigenvalues of a matrix A are simply the zeros of the
characteristic polynomial of A.
Therefore, if A is a matrix with complex entries, then eigenvalues of
A exist as complex numbers.
When a matrix A has real entries and the underlying field is R,
it may not have any eigenvalue at all.
It is thus advantageous to consider the real entries of a matrix as
complex entries.

Examples
Example 7: For T : R² → R² with T(a, b) = (b, −a), the matrix is
A = [0 1; −1 0], so det(A − λI) = λ² + 1 = 0. It has no real roots.
Thus the operator T has no eigenvalues.
Example 8: For T : C² → C² with T(a, b) = (b, −a), we have
λ² + 1 = 0 ⇒ λ = i, −i, the eigenvalues of T.
The corresponding eigenvectors are obtained by solving
T(a, b) = i(a, b) and T(a, b) = −i(a, b).
For λ = i, we have b = ia, −a = ib.
Thus, (a, ia) is an eigenvector for a ≠ 0.
For the eigenvalue −i, the eigenvectors are (a, −ia) for a ≠ 0.
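Example 8 can be verified numerically over C; a small NumPy sketch (not from the slides):

```python
import numpy as np

# Matrix of T(a, b) = (b, -a), viewed over C^2
A = np.array([[0., 1.],
              [-1., 0.]], dtype=complex)

vals = np.linalg.eigvals(A)
print(sorted(vals, key=lambda z: z.imag))   # -i and i

# (1, i) is an eigenvector for lambda = i; (1, -i) for lambda = -i
print(np.allclose(A @ np.array([1, 1j]), 1j * np.array([1, 1j])))     # True
print(np.allclose(A @ np.array([1, -1j]), -1j * np.array([1, -1j])))  # True
```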

Some Corollaries
Recollect: Eigenvalues of A are the zeros of det(A − λI).
Proposition 9: A and Aᵗ have the same eigenvalues.
Proof. det(Aᵗ − λI) = det((A − λI)ᵗ) = det(A − λI).
Proposition 10: Similar matrices have the same eigenvalues.
Proof. det(P⁻¹AP − λI) = det(P⁻¹(A − λI)P)
= det(P⁻¹) det(A − λI) det(P) = det(A − λI).

Proposition 11: If A is diagonal or upper triangular or lower
triangular, then its diagonal elements are precisely its eigenvalues.
Proof. det(A − λI) = (a₁₁ − λ) ⋯ (aₙₙ − λ).

Proposition 12: det(A) equals the product of all eigenvalues.
tr(A) equals the sum of all eigenvalues.
Proof. Let λ₁, …, λₙ be the eigenvalues of A. Now,
det(A − λI) = (λ₁ − λ) ⋯ (λₙ − λ). Put λ = 0 to get det(A).
Expand det(A − λI) and equate the coefficients of λⁿ⁻¹ to get
tr(A).
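Proposition 12 is easy to check numerically on a random matrix; a sketch using NumPy (our choice of tool, not the slides'):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

vals = np.linalg.eigvals(A)   # complex in general, in conjugate pairs

# det(A) = product of eigenvalues, tr(A) = sum of eigenvalues
print(np.isclose(np.linalg.det(A), np.prod(vals).real))  # True
print(np.isclose(np.trace(A), np.sum(vals).real))        # True
```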


Cayley-Hamilton Theorem
Proposition 13: Any square matrix satisfies its characteristic
polynomial.

Example 9: Let A = [1 1; 1 0]. det(A − λI) = λ² − λ − 1. Now,

A² − A − I = [2 1; 1 1] − [1 1; 1 0] − [1 0; 0 1] = [0 0; 0 0].

Since 0 is not an eigenvalue of A, A⁻¹ exists. In fact,

I = A² − A = A(A − I)  ⇒  A⁻¹ = A − I = [0 1; 1 −1].

Also, A² = A + I, A³ = 2A + I, A⁴ = 3A + 2I, A⁵ = 5A + 3I, …
In general, Aⁿ⁺¹ = fₙ₊₁ A + fₙ I, where f₁ = 1 = f₂ and fₙ₊₁ = fₙ₋₁ + fₙ.
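Both the Cayley-Hamilton identity and the Fibonacci pattern for powers of A can be verified directly; a NumPy sketch (not part of the slides):

```python
import numpy as np

A = np.array([[1, 1],
              [1, 0]])
I = np.eye(2, dtype=int)

# Cayley-Hamilton: A satisfies its characteristic polynomial lambda^2 - lambda - 1
print(A @ A - A - I)            # zero matrix

# Fibonacci numbers f_1 = f_2 = 1, f_(n+1) = f_(n-1) + f_n
f = [1, 1]
for n in range(6):
    f.append(f[-1] + f[-2])

# Check A^5 = f_5 A + f_4 I = 5A + 3I
print(np.array_equal(np.linalg.matrix_power(A, 5), 5 * A + 3 * I))  # True
```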

Hermitian Matrices

Proposition 14: Eigenvalues of a real symmetric matrix or of a
Hermitian matrix are real. Eigenvalues of a skew-Hermitian matrix
are purely imaginary or zero.
Proof. Let λ ∈ C be an eigenvalue of A with an eigenvector v ∈ Cⁿ.
Now, Av = λv ⇒ v*Av = λ v*v.
Then, λ̄ v*v = (v*Av)* = v*A*v = ± v*Av = ± λ v*v,
+ when A is Hermitian, since here A* = A,
− when A is skew-Hermitian, since here A* = −A.
Since v ≠ 0, v*v > 0, and we may cancel it.
Thus, if A is Hermitian, λ̄ = λ, i.e., λ ∈ R.
If A is skew-Hermitian, λ̄ = −λ, i.e.,
λ is purely imaginary or zero.
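Proposition 14 can be illustrated numerically; a sketch with NumPy (the construction of H and S below is ours):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

H = B + B.conj().T     # Hermitian: H* = H
S = B - B.conj().T     # skew-Hermitian: S* = -S

print(np.allclose(np.linalg.eigvals(H).imag, 0))  # True: eigenvalues real
print(np.allclose(np.linalg.eigvals(S).real, 0))  # True: purely imaginary
```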


Distinct Eigenvalues
Proposition 15: Eigenvectors associated with distinct eigenvalues
of an n × n matrix are linearly independent.
Proof. Let vᵢ be an eigenvector associated with the eigenvalue λᵢ,
for i = 1, 2, …, m, where the λᵢ are the distinct eigenvalues of A.
For m = 1, since v₁ ≠ 0, {v₁} is linearly independent.
Induction Hypothesis: for m = k the statement is true.
Now, for m = k + 1, suppose

α₁v₁ + α₂v₂ + ⋯ + αₖvₖ + αₖ₊₁vₖ₊₁ = 0.   (★)

Then, applying A,
λ₁α₁v₁ + λ₂α₂v₂ + ⋯ + λₖαₖvₖ + λₖ₊₁αₖ₊₁vₖ₊₁ = 0.
Multiply (★) with λₖ₊₁. Subtract from the last:

α₁(λ₁ − λₖ₊₁)v₁ + ⋯ + αₖ(λₖ − λₖ₊₁)vₖ = 0.

By the Induction Hypothesis, αᵢ(λᵢ − λₖ₊₁) = 0.
Since the λᵢ are distinct, this implies each αᵢ = 0, for i = 1, 2, …, k.
Then, from (★), αₖ₊₁vₖ₊₁ = 0, so αₖ₊₁ = 0.
Finally, all αᵢ = 0 for i = 1, 2, …, k, k + 1.
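A concrete instance of Proposition 15, sketched with NumPy (the example matrix is ours): distinct eigenvalues force the matrix of eigenvectors to have full rank.

```python
import numpy as np

# An upper-triangular matrix with distinct eigenvalues 2, 3, 5
A = np.array([[2., 1., 0.],
              [0., 3., 1.],
              [0., 0., 5.]])

vals, V = np.linalg.eig(A)    # columns of V are eigenvectors

# Distinct eigenvalues => eigenvectors are linearly independent,
# i.e. V has full rank.
print(np.linalg.matrix_rank(V) == 3)   # True
```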


Matrix with enough distinct eigenvalues

Proposition 16: If an n × n matrix A has n distinct eigenvalues, then
A is similar to a diagonal matrix.
Proof. Let A be an n × n matrix with n distinct eigenvalues λ₁, …, λₙ.
The corresponding eigenvectors form a basis for Fⁿ.
In this basis of eigenvectors, the linear transformation A has a
matrix representation, say B. Now, B is a diagonal matrix.

Alternatively, form the matrix P by taking the eigenvectors
v₁, …, vₙ as its columns.
Then, Pe₁ = v₁, Pe₂ = v₂, …, Peₙ = vₙ.
Also, P is invertible and P⁻¹AP = B.
Now, Be₁ = P⁻¹APe₁ = P⁻¹Av₁ = λ₁P⁻¹v₁ = λ₁e₁.
Similarly, Be₂ = λ₂e₂, …, Beₙ = λₙeₙ.
This means B is a diagonal matrix with diagonal entries
λ₁, …, λₙ.
Note that the proofs also establish diagonalizability when we have n
linearly independent eigenvectors.

An Example



Example 10: A = [1 3; 4 2].
det(A − λI) = (1 − λ)(2 − λ) − 12, whose zeros are −2 and 5.
The corresponding eigenvectors are [1; −1] and [3; 4].

Take P = [1 3; −1 4]. Then P⁻¹ = [4/7 −3/7; 1/7 1/7].
We see that

P⁻¹AP = [4/7 −3/7; 1/7 1/7] [1 3; 4 2] [1 3; −1 4] = [−2 0; 0 5].
A matrix that is similar to a diagonal matrix is called a
diagonalizable matrix.
We will discuss diagonalizable matrices later.
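The diagonalization in Example 10 can be reproduced numerically; a minimal NumPy sketch (not part of the slides):

```python
import numpy as np

A = np.array([[1., 3.],
              [4., 2.]])

# P has the eigenvectors (1, -1) and (3, 4) as columns
P = np.array([[1., 3.],
              [-1., 4.]])

D = np.linalg.inv(P) @ A @ P
print(np.allclose(D, np.diag([-2., 5.])))   # True
```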
