

Eigenvalues, eigenvectors, and eigenspaces of linear operators
Math 130 Linear Algebra
D Joyce, Fall 2012

Eigenvalues and eigenvectors. We're looking at linear operators on a vector space V, that is, linear transformations x ↦ T(x) from the vector space V to itself. When V has finite dimension n with a specified basis β, then T is described by a square n×n matrix A = [T]_β. We're particularly interested in studying the geometry of these transformations in a way that we can't when the transformation goes from one vector space to a different vector space, namely, we'll compare the original vector x to its image T(x). Some of these vectors will be sent to other vectors on the same line, that is, a vector x will be sent to a scalar multiple λx of itself.
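For a concrete illustration, here is a tiny numpy sketch (the matrix is an arbitrary choice made for this example, not one discussed in these notes): the vector (1, 0) is sent to 2(1, 0), a scalar multiple of itself, while (0, 1) is not sent to a multiple of itself.

    import numpy as np

    # An arbitrary 2x2 matrix, chosen only to illustrate the idea.
    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])

    x = np.array([1.0, 0.0])
    y = np.array([0.0, 1.0])

    print(A @ x)   # [2. 0.] = 2x, so x is sent to a scalar multiple of itself
    print(A @ y)   # [1. 3.], not a scalar multiple of y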

One reason these eigenvalues and eigenspaces are important is that you can determine many of the properties of the transformation from them, and that those properties are the most important properties of the transformation.

Definition 1. For a given linear operator T : V → V, a nonzero vector x and a constant scalar λ are called an eigenvector and its eigenvalue, respectively, when T(x) = λx. For a given eigenvalue λ, the set of all x such that T(x) = λx is called the λ-eigenspace. The set of all eigenvalues for a transformation is called its spectrum.

When the operator T is described by a matrix A, then we'll associate the eigenvectors, eigenvalues, eigenspaces, and spectrum to A as well. As A directly describes a linear operator on Fⁿ, we'll take its eigenspaces to be subsets of Fⁿ.

Theorem 2. Each λ-eigenspace is a subspace of V.

Proof. Suppose that x and y are λ-eigenvectors and c is a scalar. Then

T(x + cy) = T(x) + cT(y) = λx + cλy = λ(x + cy).

Therefore x + cy is also a λ-eigenvector. Thus, the set of λ-eigenvectors forms a subspace of Fⁿ. q.e.d.

These are matrix invariants. Note that the eigenvalues, eigenvectors, and eigenspaces of a linear transformation were defined in terms of the transformation, not in terms of a matrix that describes the transformation relative to a particular basis. That means that they are invariants of square matrices under change of basis. Recall that if A and B represent the transformation with respect to two different bases, then A and B are conjugate matrices, that is, B = P⁻¹AP where P is the transition matrix between the two bases. The eigenvalues are numbers, and they'll be the same for A and B. The corresponding eigenspaces will be isomorphic as subspaces of Fⁿ under the linear operator of conjugation by P. Thus we have the following theorem.

Theorem 3. The eigenvalues of a square matrix A are the same as those of any conjugate matrix B = P⁻¹AP of A. Furthermore, each λ-eigenspace for A is isomorphic to the λ-eigenspace for B. In particular, the dimensions of each λ-eigenspace are the same for A and B.

When 0 is an eigenvalue. It's a special situation when a transformation has 0 as an eigenvalue. That means Ax = 0 for some nontrivial vector x. In other words, A is a singular matrix, that is, a matrix without an inverse. Thus,

Theorem 4. A square matrix is singular if and only if 0 is one of its eigenvalues. Put another way, a square matrix is invertible if and only if 0 is not one of its eigenvalues.
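Theorems 3 and 4 are easy to check numerically. The following is a minimal numpy sketch (the matrices A, P, and S are arbitrary choices made for illustration): a conjugate B = P⁻¹AP has the same eigenvalues as A, and a singular matrix has 0 among its eigenvalues.

    import numpy as np

    # Arbitrary matrices chosen for illustration.
    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    P = np.array([[1.0, 1.0],
                  [0.0, 1.0]])              # an invertible transition matrix

    B = np.linalg.inv(P) @ A @ P            # conjugate matrix B = P^-1 A P
    print(np.sort(np.linalg.eigvals(A)))    # [2. 5.]
    print(np.sort(np.linalg.eigvals(B)))    # [2. 5.] (up to rounding): same spectrum

    S = np.array([[1.0, 2.0],
                  [2.0, 4.0]])              # singular: the rows are proportional
    print(np.linalg.eigvals(S))             # includes 0, as Theorem 4 predicts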

An example transformation that has 0 as an eigenvalue is a projection, like (x, y, z) ↦ (x, y, 0), that maps space to the xy-plane. For this projection, the 0-eigenspace is the z-axis. In general, the 0-eigenspace is the solution space of the homogeneous equation Ax = 0, which is what we've been calling the null space of A, and its dimension is what we've been calling the nullity of A.
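For the projection above, the 0-eigenspace can be computed directly as a null space. Here is a minimal sympy sketch (the nullspace and rank calls are just one convenient way to carry out the computation):

    from sympy import Matrix

    # Matrix of the projection (x, y, z) |-> (x, y, 0).
    Proj = Matrix([[1, 0, 0],
                   [0, 1, 0],
                   [0, 0, 0]])

    # The 0-eigenspace is the null space of Proj - 0I = Proj.
    print(Proj.nullspace())   # [Matrix([[0], [0], [1]])], i.e. the z-axis
    print(Proj.rank())        # 2, so the nullity is 3 - 2 = 1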

When 1 is an eigenvalue. This is another important situation. It means the transformation has a subspace of fixed points. That's because a vector x is in the 1-eigenspace if and only if Ax = x.

An example transformation that has 1 as an eigenvalue is a reflection, like (x, y, z) ↦ (x, y, −z), that reflects space across the xy-plane. Its 1-eigenspace, that is, its subspace of fixed points, is the xy-plane. We'll look at reflections in R² in detail in a moment.

Another transformation with 1 as an eigenvalue is the shear transformation (x, y) ↦ (x + y, y). Its 1-eigenspace is the x-axis.

Eigenvalues of reflections in R². We've looked at reflections across some lines in the plane. There's a general form for a reflection across the line of slope tan θ, that is, across the line that makes an angle of θ with the x-axis. Namely, the matrix transformation x ↦ Ax, where

A = [ cos 2θ    sin 2θ ]
    [ sin 2θ   −cos 2θ ]

describes such a reflection.

A reflection has fixed points, namely, the points on the line being reflected across. Therefore, 1 is an eigenvalue of a reflection, and the 1-eigenspace is that line.

Orthogonal to that line is a line passing through the origin, and its points are reflected across the origin, that is to say, they're negated. Therefore, −1 is an eigenvalue, and the orthogonal line is its eigenspace.

The characteristic polynomial, the main tool for finding eigenvalues. How do you find what values the eigenvalues can be? In the finite dimensional case, it comes down to finding the roots of a particular polynomial, called the characteristic polynomial. Suppose that λ is an eigenvalue of A. That means there is a nontrivial vector x such that Ax = λx. Equivalently, Ax − λx = 0, and we can rewrite that as (A − λI)x = 0, where I is the identity matrix. But a homogeneous equation like (A − λI)x = 0 has a nontrivial solution x if and only if the determinant of A − λI is 0. We've shown the following theorem.

Theorem 5. A scalar λ is an eigenvalue of A if and only if det(A − λI) = 0. In other words, λ is a root of the polynomial det(A − λI), which we call the characteristic polynomial or eigenpolynomial. The equation det(A − λI) = 0 is called the characteristic equation of A.

Note that the characteristic polynomial has degree n. That means that there are at most n eigenvalues. Since some eigenvalues may be repeated roots of the characteristic polynomial, there may be fewer than n eigenvalues.

We can use characteristic polynomials to give an alternate proof that conjugate matrices have the same eigenvalues. Suppose that B = P⁻¹AP. We'll show that the characteristic polynomials of A and B are the same, that is, det(A − λI) = det(B − λI). That will imply that they have the same eigenvalues.

det(B − λI) = det(P⁻¹AP − λI)
            = det(P⁻¹AP − P⁻¹(λI)P)
            = det(P⁻¹(A − λI)P)
            = det(P⁻¹) det(A − λI) det(P)
            = det(A − λI)

How to find eigenvalues and eigenspaces. Now we know the eigenvalues are the roots of the characteristic polynomial. We'll illustrate this with an example. Here's the process to find all the eigenvalues and their associated eigenspaces.

1). Form the characteristic polynomial det(A − λI).
2). Find all the roots of it. Since it is an nth degree polynomial, that can be hard to do by hand if n is very large. Its roots are the eigenvalues λ₁, λ₂, . . . .
3). For each eigenvalue λᵢ, solve the matrix equation (A − λᵢI)x = 0 to find the λᵢ-eigenspace.
A short computational sketch of these three steps follows.
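Here is one way the three steps might look in sympy (a sketch, assuming symbolic computation is acceptable), applied to the reflection matrix from the earlier section; step 3 is carried out for the concrete angle θ = 0, the reflection across the x-axis.

    from sympy import symbols, cos, sin, eye, Matrix, simplify, solve

    lam, theta = symbols('lambda theta', real=True)

    # The reflection across the line making angle theta with the x-axis.
    A = Matrix([[cos(2*theta),  sin(2*theta)],
                [sin(2*theta), -cos(2*theta)]])

    # Step 1: form the characteristic polynomial det(A - lambda I).
    p = simplify((A - lam * eye(2)).det())
    print(p)                    # lambda**2 - 1

    # Step 2: its roots are the eigenvalues.
    print(solve(p, lam))        # [-1, 1]

    # Step 3: for each eigenvalue solve (A - lambda_i I)x = 0.
    # Done here for theta = 0, the reflection across the x-axis.
    A0 = A.subs(theta, 0)
    print((A0 - eye(2)).nullspace())   # the x-axis, the line of fixed points
    print((A0 + eye(2)).nullspace())   # the y-axis, whose vectors are negated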

Example 6. We'll find the characteristic polynomial, the eigenvalues and their associated eigenvectors for this matrix:

A = [  1   0   0 ]
    [ −3   3   0 ]
    [  3   2   2 ]

The characteristic polynomial is

|A − λI| = | 1−λ    0     0  |
           | −3    3−λ    0  |  = (1 − λ)(3 − λ)(2 − λ).
           |  3     2    2−λ |

Fortunately, this polynomial is already in factored form, so we can read off the three eigenvalues: λ₁ = 1, λ₂ = 3, and λ₃ = 2. (It doesn't matter in what order you name them.) Thus, the spectrum of this matrix is the set {1, 2, 3}.

Let's find the λ₁-eigenspace. We need to solve Ax = λ₁x. That's the same as solving (A − λ₁I)x = 0. The matrix A − λ₁I is

[  0   0   0 ]
[ −3   2   0 ]
[  3   2   1 ]

which row reduces to

[ 1   0   1/6 ]
[ 0   1   1/4 ]
[ 0   0    0  ]

and from that we can read off the general solution

(x, y, z) = (−(1/6)z, −(1/4)z, z)

where z is arbitrary. That's the one-dimensional 1-eigenspace (which consists of the fixed points of the transformation).

Next, find the λ₂-eigenspace. The matrix A − λ₂I is

[ −2   0    0 ]
[ −3   0    0 ]
[  3   2   −1 ]

which row reduces to

[ 1   0     0  ]
[ 0   1   −1/2 ]
[ 0   0     0  ]

and from that we can read off the general solution

(x, y, z) = (0, (1/2)z, z)

where z is arbitrary. That's the one-dimensional 3-eigenspace.

Finally, find the λ₃-eigenspace. The matrix A − λ₃I is

[ −1   0   0 ]
[ −3   1   0 ]
[  3   2   0 ]

which row reduces to

[ 1   0   0 ]
[ 0   1   0 ]
[ 0   0   0 ]

and from that we can read off the general solution

(x, y, z) = (0, 0, z)

where z is arbitrary. That's the one-dimensional 2-eigenspace.

Math 130 Home Page at http://math.clarku.edu/~djoyce/ma130/
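The computations in Example 6 can also be checked by machine. Here is a minimal sympy sketch that recomputes the characteristic polynomial and the three eigenspaces for the same matrix A:

    from sympy import Matrix, symbols, factor, eye

    lam = symbols('lambda')

    A = Matrix([[ 1, 0, 0],
                [-3, 3, 0],
                [ 3, 2, 2]])

    # The characteristic polynomial, factored: its roots are 1, 3, 2.
    print(factor((A - lam * eye(3)).det()))

    # Each eigenspace as the null space of A - lambda_i I.
    for ev in (1, 3, 2):
        print(ev, (A - ev * eye(3)).nullspace())
    # Expected bases: (-1/6, -1/4, 1), (0, 1/2, 1), and (0, 0, 1),
    # matching the three general solutions found by hand above.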
