
18.03 Class 33, Apr 28: Linear systems


Characteristic polynomial, eigenvalues, eigenvectors; determinant, trace;
ray trajectories and solutions; normal modes.

Prologue on Linear Algebra.


Recall [a b ; c d] [x ; y] = x[a ; c] + y[b ; d]
Write A = [a b; c d] and alpha = [x ; y] .
I sketched the two vectors and this linear combination.
I asked: when is this product zero?
One way is for x = 0 = y. If [a ; c] and [b ; d] point in
different directions, this is the ONLY way. But if they lie along a
single line, we can find x and y so that the sum cancels. We get a
nonzero solution [x ; y] exactly when the slopes of the vectors
[a ; c] and [b ; d] coincide: c/a = d/b , or ad - bc = 0. This
combination of the entries of A is so important it's called the
"determinant" of the matrix:
det(A) = ad - bc
(It "determines" whether the matrix is invertible.) We have found:
A alpha = 0 has a nontrivial solution alpha exactly when det(A) = 0.
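
As a quick numerical illustration (not from the lecture), here is a
small Python/numpy sketch of this criterion; the matrix is just a
made-up example whose columns lie along a single line.

    import numpy as np

    # Columns [1 ; 2] and [2 ; 4] lie along a single line, so det = 0.
    A = np.array([[1.0, 2.0],
                  [2.0, 4.0]])
    print(np.linalg.det(A))   # 0.0 (up to roundoff)

    # A nontrivial combination that cancels: 2*[1 ; 2] - 1*[2 ; 4] = 0.
    alpha = np.array([2.0, -1.0])
    print(A @ alpha)          # [0. 0.]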

I then described a method for finding solutions to systems of
differential equations.


As an example I took x' = x + 2y, y' = 2x + y. The "coefficient matrix"
is A = [1 2 ; 2 1] . With u = [x ; y] the equation is
u' = Au.
I showed a Matlab/dfield plot of the vector field v = Au. I suggested
that it showed several solutions along straight rays from (or to) the origin.
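
If you don't have dfield handy, a rough Python/matplotlib stand-in for
that picture might look like the sketch below (an approximation, not
the classroom demo itself).

    import numpy as np
    import matplotlib.pyplot as plt

    A = np.array([[1.0, 2.0],
                  [2.0, 1.0]])

    # Sample the vector field v = Au on a grid and draw the arrows.
    x, y = np.meshgrid(np.linspace(-2, 2, 15), np.linspace(-2, 2, 15))
    vx = A[0, 0] * x + A[0, 1] * y
    vy = A[1, 0] * x + A[1, 1] * y
    plt.quiver(x, y, vx, vy)
    plt.title("v = Au for A = [1 2 ; 2 1]")
    plt.show()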

For the vector alpha to be on such a trajectory you must have v(alpha)
pointing in the same (or opposite) direction as alpha. This can be
expressed by saying that v(alpha) = lambda alpha for some number lambda;
that is,
A alpha = lambda alpha , alpha not 0.
(The symbol lambda is often used for this kind of thing; you may recall
it from Lagrange multipliers.)
Surprisingly enough, the first thing to do is to find the numbers lambda
which make this possible. We want to do this using matrix algebra.
There is no matrix on the right-hand side, but we can fix that using
the identity matrix I = [1 0 ; 0 1]. This matrix has the property
that I alpha = alpha for any vector alpha.
So we can rewrite our equation as
A alpha = lambda I alpha
or
(A - lambda I) alpha = 0.
Now lambda I = [lambda 0 ; 0 lambda] , and A - lambda I is A with
lambda subtracted from the diagonal entries. I also want alpha not 0,
and in the Prologue we found a criterion for the existence of such alpha:
det(A - lambda I) = 0.
Let's work out what this is:
det[a-lambda b ; c d-lambda] = (a-lambda)(d-lambda) - bc
= lambda^2 - (a+d) lambda + (ad-bc)
This is a degree 2 polynomial, the "characteristic polynomial"
p_A(lambda) of A. The number a+d , the sum of the diagonal entries
of A , is called the "trace" of A, tr(A) = a + d. We already have a
notation for the constant term, det(A): so
p_A(lambda) = lambda^2 - (tr A) lambda + (det A)

In our example, p_A(lambda) = lambda^2 - 2 lambda - 3 .
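
Here is a quick check of this formula in Python/numpy (an
illustration, assuming numpy is available):

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [2.0, 1.0]])
    tr, det = np.trace(A), np.linalg.det(A)
    print(tr, det)                    # 2.0 -3.0 (det up to roundoff)

    # p_A(lambda) = lambda^2 - (tr A) lambda + (det A);
    # its roots will turn out to be the eigenvalues.
    print(np.roots([1.0, -tr, det]))  # 3 and -1 (order may vary)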


We are interested in the roots of the characteristic polynomial.
There are two; call them lambda1 and lambda2. (The order doesn't matter.)
These are the "eigenvalues" of the matrix.
In our example, p_A(lambda) = (lambda + 1)(lambda - 3) so the roots are
lambda1 = -1, lambda2 = 3.
Let's next find the corresponding vectors:
For lambda1 seek alpha1 such that A alpha1 = lambda1 alpha1
or equivalently (A - lambda1 I) alpha1 = 0 . Similarly for lambda2.
These are the "eigenvectors" for the corresponding eigenvalues.
In our example,
A - lambda1 I = [2 2 ; 2 2]
and we are trying to find alpha1 = [? ; ?] not 0 such that
[2 2 ; 2 2] [? ; ?] = [0 ; 0].
Clearly alpha1 = [1 ; -1] or any nonzero multiple will do.
Similarly,
A - lambda2 I = [-2 2 ; 2 -2] , and [-2 2 ; 2 -2] [? ; ?] = [0 ; 0]
has solution alpha2 = [1 ; 1] or any nonzero multiple.
I checked these on the Matlab picture.
So there are a few lines along which the matrix acts simply by scalar
multiplication, without changing direction.
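
In practice numpy will find the eigenvalues and eigenvectors in one
call. A sketch (numpy returns unit-length eigenvectors, so expect
multiples of [1 ; -1] and [1 ; 1]):

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [2.0, 1.0]])
    lam, V = np.linalg.eig(A)
    print(lam)   # eigenvalues, 3 and -1 (order may vary)
    print(V)     # columns are unit eigenvectors

    # Check A alpha = lambda alpha for each eigenvalue/eigenvector pair.
    for i in range(2):
        print(np.allclose(A @ V[:, i], lam[i] * V[:, i]))   # True, True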

We still have to solve the ODE. If u(t) is a "ray solution," a solution
lying along a ray from (or to) the origin, then u(t) = f(t) alpha for
some scalar-valued function f(t).
We found the eigenvectors precisely because for this to work alpha must
be an eigenvector. Say alpha = alpha1. Plug this into the equation:
u'(t) = f'(t) alpha1 versus
Au(t) = A f(t) alpha1 = f(t) A alpha1 = f(t) lambda1 alpha1.
Since alpha1 is not zero, this forces
f'(t) = lambda1 f(t).
By focusing attention on ray solutions, we have gotten ourselves a single
ODE, and a very familiar one at that!
f(t) = c e^{lambda1 t}
For the moment just one solution will do, so take c = 1:
u(t) = e^{lambda1 t} alpha1
is a ray solution, and
u(t) = e^{lambda2 t} alpha2
is another.
In our example, these are
u(t) = e^{-t} [1 ; -1] , u(t) = e^{3t} [1 ; 1]
We have found solutions whose trajectories are rays from (or to) the origin.
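
As a sanity check (not part of the lecture), one can compare a
finite-difference derivative of u(t) = e^{lambda t} alpha against
A u(t) numerically:

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [2.0, 1.0]])

    def u(t, lam, alpha):
        # the ray solution e^{lambda t} alpha
        return np.exp(lam * t) * alpha

    t, h = 0.7, 1e-6
    for lam, alpha in [(-1.0, np.array([1.0, -1.0])),
                       ( 3.0, np.array([1.0,  1.0]))]:
        du = (u(t + h, lam, alpha) - u(t - h, lam, alpha)) / (2 * h)
        print(np.allclose(du, A @ u(t, lam, alpha)))   # True: u' = Au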
As we saw, we can multiply these by constants to get other solutions.
More generally, because u' = Au is LINEAR and HOMOGENEOUS, any linear
combination of solutions is again a solution; and in fact
u = c1 e^{lambda1 t} alpha1 + c2 e^{lambda2 t} alpha2
is the GENERAL solution to u' = Au. (There is a minor warning here: if
lambda1 = lambda2 you may not be able to find two eigenvectors pointing
in different directions, and then something else is called for. We'll
come back to this.) In our example, the general solution is
u = c1 e^{-t} [1 ; -1] + c2 e^{3t} [1 ; 1]
We can solve for c1 and c2 using an initial condition: say for example
u(0) = [1 ; 0]. Well,
u(0) = c1 [1 ; -1] + c2 [1 ; 1] = [c1+c2 ; -c1+c2]
and for this to be [1 ; 0] we must have c1 = c2 = 1/2:
u(t) = (1/2)e^{-t} [1 ; -1] + (1/2)e^{3t} [1 ; 1] .
When t is very negative, -10, say, the first term is very big and the
second tiny: the solution is very near the line through [1 ; -1]. As t
gets near zero, the two terms become comparable and the solution curves
around. As t gets large, 10, say, the second term is very big and the
first is tiny: the solution becomes asymptotic to the line through
[1 ; 1]. The two ray solutions e^{-t} [1 ; -1] and e^{3t} [1 ; 1] are
the "normal modes" of this system, and the general solution is a
combination of the two normal modes.
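
Solving for c1 and c2 is itself a 2x2 linear system, with the
eigenvectors as columns and u(0) as the right-hand side. A numpy
sketch of this step and of the asymptotic behavior just described:

    import numpy as np

    alpha1, alpha2 = np.array([1.0, -1.0]), np.array([1.0, 1.0])
    lam1, lam2 = -1.0, 3.0

    # u(0) = c1 alpha1 + c2 alpha2: solve [alpha1 alpha2][c1 ; c2] = [1 ; 0].
    c = np.linalg.solve(np.column_stack([alpha1, alpha2]),
                        np.array([1.0, 0.0]))
    print(c)   # [0.5 0.5]

    def u(t):
        return c[0] * np.exp(lam1 * t) * alpha1 + c[1] * np.exp(lam2 * t) * alpha2

    # Direction of u(t): near [1 ; -1] for very negative t,
    # near [1 ; 1] for large positive t.
    for t in (-10.0, 0.0, 10.0):
        v = u(t)
        print(t, v / np.linalg.norm(v))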
