2.1 Matrices

    [ 1  2  3  4 ]
    [ 0  1  5  7 ]
    [ 3  2  1  5 ]

is a matrix. It has 3 rows and 4 columns and is called a 3 × 4 matrix.
A matrix with m rows and n columns is called an m × n matrix. It has mn entries.
Sometimes round brackets are used in place of [ ] for matrices. For example, some textbooks would use

    ( 1  2  3  4 )
    ( 0  1  5  7 )
    ( 3  2  1  5 )

for the above matrix.
Important Notes:
1. Be careful to use [ ] or ( ) for matrices. Vertical lines such as | | are used for determinants (see later).
2. Do not confuse matrices and determinants.
3. Matrices are rectangular arrays of numbers; they do NOT have a numerical value.
2.1.1 Addition of Matrices
Two matrices can be added if and only if they are the same size, i.e. if A is an m × n matrix then A + B exists if and only if B is also an m × n matrix. In this case A + B is obtained by adding corresponding elements.
Example: Find A + B, if it exists, for the following matrices.

    [ -1  2  3  4 ]        [ 1  3  10  11 ]
A = [  0  1  5  1 ] ,  B = [ 0  5  -3  -1 ] .
    [ -1  2  7  6 ]        [ 3  7  -6   7 ]

Since A and B are both 3 × 4,

        [ -1+1  2+3  3+10  4+11 ]   [ 0  5  13  15 ]
A + B = [  0+0  1+5  5-3   1-1  ] = [ 0  6   2   0 ] .
        [ -1+3  2+7  7-6   6+7  ]   [ 2  9   1  13 ]

For any 3 × 3 matrix C, A + C does not exist, because A is a 3 × 4 matrix and C is a 3 × 3 matrix, so A and C are not the same size.
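The addition rule can be checked numerically. This is a minimal sketch using NumPy, which is an assumption on my part; the notes themselves use no software:

```python
import numpy as np

# Two matrices of the same size (2 x 3) can be added element-wise.
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[10, 20, 30],
              [40, 50, 60]])

S = A + B          # corresponding elements are added

# Matrices of different sizes cannot be added.
C = np.array([[1, 2], [3, 4]])   # 2 x 2
try:
    A + C
except ValueError:
    print("A + C does not exist")
```

One caveat of the NumPy sketch: NumPy "broadcasts" some shape combinations (e.g. a 2 × 3 plus a 1 × 3), which goes beyond the strict same-size rule given in the notes.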
2.1.2 Equality of Matrices
Two matrices are equal if and only if they are the same size and all corresponding elements are equal.

Example: Let

    [ 0  1  6  5 ]        [ a  b  c  d ]
A = [ 1  2  3  4 ] ,  B = [ f  g  h  l ] .
    [ 1  7  1  0 ]        [ m  n  p  q ]

Then A = B if and only if a = 0, b = 1, c = 6, d = 5, f = 1, g = 2, h = 3, l = 4, m = 1, n = 7, p = 1, q = 0.

2.1.3 Multiplication by a Scalar
Let A be a matrix and λ be a scalar; then λA is obtained from A by multiplying every element of A by λ.
Example: Let

    [ -1  2  3 ]        [ 1  0  4 ]
A = [  4  5  6 ] ,  B = [ 2  1  7 ] .

Then

     [ 2  0   8 ]            [ 1  2  11 ]            [ 3  -2  5 ]
2B = [ 4  2  14 ] ,  A + 2B = [ 8  7  20 ] ,  2B - A = [ 0  -3  8 ] .
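Scalar multiples and linear combinations like the ones above can be checked quickly. A NumPy sketch (NumPy being an assumption, not part of the notes), with the same entries as the example:

```python
import numpy as np

A = np.array([[-1, 2, 3],
              [ 4, 5, 6]])
B = np.array([[1, 0, 4],
              [2, 1, 7]])

# A scalar multiplies every element of the matrix.
print(2 * B)        # [[2 0 8] [4 2 14]]
print(A + 2 * B)    # [[1 2 11] [8 7 20]]
print(2 * B - A)    # [[3 -2 5] [0 -3 8]]
```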
Notation: Given an m × n matrix A we often use a_rs for the element in row r and column s, and write A = [a_rs] or A = [a_rs]_{m×n}. With this notation the elements of row 2 are a21, a22, a23, ..., a2n, and

    [ a11  a12  ...  a1n ]
A = [ a21  a22  ...  a2n ]
    [  :    :          :  ]
    [ am1  am2  ...  amn ]

2.1.4 Transpose of a Matrix
Given a matrix A, its transpose (denoted by A^T) is obtained by interchanging the rows and columns of A. If A is m × n then A^T is n × m, and the (r, s) entry of A^T is a_sr.
If

    [ b11  b12  ...  b1n ]
B = [  :    :          :  ]
    [ bm1  bm2  ...  bmn ]

then

      [ b11  b21  ...  bm1 ]
B^T = [ b12  b22  ...  bm2 ]
      [  :    :          :  ]
      [ b1n  b2n  ...  bmn ]
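The transpose rule (rows become columns) can be sketched with NumPy's `.T` attribute, again assuming NumPy:

```python
import numpy as np

# Transposing swaps rows and columns: (B^T)[r, s] == B[s, r].
B = np.array([[1, 2, 3],
              [4, 5, 6]])      # 2 x 3

BT = B.T                       # 3 x 2
print(BT)                      # [[1 4] [2 5] [3 6]]
assert BT.shape == (3, 2)
assert (BT.T == B).all()       # transposing twice gives B back
```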
2.1.5 Symmetric Matrix
A square matrix B is symmetric if B^T = B, i.e. b_rs = b_sr for all r and s. For example

    [ 1  2  7 ]
B = [ 2  3  1 ]
    [ 7  1  8 ]

is symmetric, since it is unchanged when rows and columns are interchanged.
2.1.6 Skew-Symmetric Matrix
A square matrix B is skew-symmetric if B^T = -B, i.e. b_sr = -b_rs for all r and s. In particular, taking r = s gives b_rr = -b_rr, i.e. b_rr = 0, and the diagonal elements b_rr are all zero.
For example

    [  0   1  3 ]
B = [ -1   0  5 ]
    [ -3  -5  0 ]

is skew-symmetric.

2.1.7 Identity Matrix
The n × n matrix with 1 at each place on the leading diagonal and zeros everywhere else is called the Identity Matrix (or Unit Matrix) I_n. Thus

      [ 1  0  ...  0 ]
I_n = [ 0  1  ...  0 ]
      [ :  :       : ]
      [ 0  0  ...  1 ]

If we don't need to emphasise the size of the matrix, we can just write I in place of I_n.
Later, we will see that this identity matrix plays the same role in matrix multiplication as the number 1 does in ordinary multiplication of real numbers.
2.1.8 Diagonal Matrix
An n × n matrix with at least one non-zero number on the leading diagonal and zeros everywhere else is called a diagonal matrix.
For example

    [ 4  0  0 ]
A = [ 0  2  0 ]
    [ 0  0  0 ]

is a diagonal matrix.
2.1.9 Matrix Multiplication
We do not define matrix multiplication in what might appear to be the obvious way (i.e. multiplying corresponding elements), as this would not be at all useful.
To find a way which would make matrices useful, let us consider three matrices:

    [ a11  ...  a1n ]        [ x1 ]        [ b1 ]
A = [  :          :  ] ,  X = [ x2 ] ,  B = [ b2 ] .
    [ am1  ...  amn ]        [ :  ]        [ :  ]
                             [ xn ]        [ bm ]
Note: Row and column vectors are just special examples of matrices.
Now let us look at a system of linear equations involving the elements of these matrices:

a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + ... + a2n xn = b2
  :
am1 x1 + am2 x2 + am3 x3 + ... + amn xn = bm

We would like to be able to write this system compactly as AX = B. This will be true if we use a row by column definition for matrix multiplication.
Let A be an m × n matrix (m rows, n columns) and let B be an n × p matrix (n rows, p columns). Then the matrix product AB exists precisely when the number of columns in A equals the number of rows in B. In this case, the matrix AB is an m × p matrix.
Important Note: The order is important. Do not swap the order. As you will see later AB
and BA are, in general, not equal. In other words, matrix multiplication does not commute.
In general, we multiply matrices in the following way:
Let C = AB, where A is m × n and B is n × p. The element c_rs in row r and column s of C is obtained by taking row r of A and column s of B. We then multiply corresponding elements and add (i.e. we calculate the scalar product of the two vectors):

c_rs = Σ_{k=1}^{n} a_rk b_ks .

Because the number of elements in a row of A equals the number of elements in a column of B, we can pair them together to make this formula work. This is why we talk about row into column multiplication.
"
#
b11 b12
a11 a12 a13
Example: Let A =
, B = b21 b22 .
a21 a22 a23
b31 b32
Then A is 2 3 and B is 3 2, and we can calculate the 2 2 matrix AB:
"
"
a11 b11 + a12 b21 + a13 b31 a11 b12 + a12 b22 + a13 b32
a21 b11 + a22 b21 + a23 b31 a21 b12 + a22 b22 + a23 b32
AB =
AB =
#
.
Note that we can also form the product BA, but this is a 3 3 matrix and so AB and BA
are different sizes in this case and so are clearly not equal.
"
#
1
2 0
1 1 2
Example: If A =
and B = 0 1 0 , find AB and BA if they exist.
3
0 1
1
2 1
Solution: A is 2 3 and B is 3 3 so AB exists and is 2 3.
"
1 1 + (1) 0 + 2 1 1 2 + (1) (1) + 2 2
AB =
31+00+11
3 2 + 0 (1) + 2 1
2
1
"
=
3 7 2
4 8 1
The product BA does not exist, since the number of columns of B (3) does not equal the
number of rows of A (2).
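The shape rule and the row-into-column product can be checked with NumPy's `@` operator (a sketch under the assumption that NumPy is available; the entries below are those of the worked example as reconstructed):

```python
import numpy as np

A = np.array([[1, -1, 2],
              [3,  0, 1]])          # 2 x 3
B = np.array([[1,  2, 0],
              [0, -1, 0],
              [1,  2, 1]])          # 3 x 3

AB = A @ B                          # (2 x 3)(3 x 3) -> 2 x 3
print(AB)                           # [[3 7 2] [4 8 1]]

# BA does not exist: B has 3 columns but A has only 2 rows.
try:
    B @ A
except ValueError:
    print("BA does not exist")
```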
Exercise: For the matrices

    [ 3   1  1 ]        [ 2   1 ]        [ 1  2 ]
A = [ 2  -1  0 ] ,  B = [ 5  -1 ] ,  C = [ 2  3 ] ,
                        [ 3   1 ]

decide whether the following exist: (a) AB, (b) AC, (c) CB. Find those which do exist.
Solution:
A is 2 × 3, B is 3 × 2 and C is 2 × 2. So AB is (2 × 3).(3 × 2), a 2 × 2 matrix, and AB exists.
AC is (2 × 3).(2 × 2) and 3 ≠ 2, so AC does not exist.
CB is (2 × 2).(3 × 2) and 2 ≠ 3, so CB does not exist.

     [ 3(2) + 1(5) + 1(3)       3(1) + 1(-1) + 1(1)     ]   [ 14  3 ]
AB = [ 2(2) + (-1)(5) + 0(3)    2(1) + (-1)(-1) + 0(1)  ] = [ -1  3 ] .
Notes:
1. Even when AB and BA both exist, in general AB ≠ BA.
e.g. let

    [ 4  1 ]            [  2  1 ]
A = [ 0  2 ]   and  B = [ -1  1 ] .

Then

     [ 7   5 ]            [  8  4 ]
AB = [ -2  2 ]   and BA = [ -4  1 ] .

2. AB = AC does not imply that B = C.
e.g. let

    [ 1  0 ]        [ 1  0 ]        [ 1  0 ]
A = [ 0  0 ] ,  B = [ 0  1 ] ,  C = [ 1  1 ] .

Then

     [ 1  0 ]            [ 1  0 ]
AB = [ 0  0 ]   and AC = [ 0  0 ] ,

and so AB = AC, but here B ≠ C.
(The matrices B and C in this second example are representative choices; any B and C agreeing in their first rows would do.)
2.2 Determinants
A determinant is a single number associated with a square matrix (i.e. a matrix that has an equal number of rows and columns). The determinant of a square matrix A is denoted by |A| or det(A). If the matrix A is n × n, then its determinant is said to be of order n.
Determinants are intimately connected with the solution of systems of linear equations and
linear transformation of vectors. We will see some applications later in this section.
Determinants are not defined for non-square matrices.
Examples:
The determinant of a 2 × 2 matrix is

| a  b |
| c  d | = ad - bc.

For example

|  3  4 |
| -1  2 | = 3(2) - 4(-1) = 6 + 4 = 10.
The determinant of a 3 × 3 matrix can be evaluated as follows:

| a1  a2  a3 |
| b1  b2  b3 | = a1 | b2  b3 | - a2 | b1  b3 | + a3 | b1  b2 |
| c1  c2  c3 |      | c2  c3 |      | c1  c3 |      | c1  c2 |

             = a1 (b2 c3 - b3 c2) - a2 (b1 c3 - b3 c1) + a3 (b1 c2 - b2 c1).
For example

| 1  2  -1 |
| 3  4   5 | = 1 | 4  5 | - 2 | 3  5 | + (-1) | 3  4 |            (2.1)
| 5  2   2 |     | 2  2 |     | 5  2 |        | 5  2 |

             = 1(8 - 10) - 2(6 - 25) - (6 - 20) = -2 + 38 + 14 = 50.
This expansion generalises to determinants of any order. For example, for a 4 × 4 determinant:

| a1  a2  a3  a4 |
| b1  b2  b3  b4 |      | b2  b3  b4 |      | b1  b3  b4 |
| c1  c2  c3  c4 | = a1 | c2  c3  c4 | - a2 | c1  c3  c4 |
| d1  d2  d3  d4 |      | d2  d3  d4 |      | d1  d3  d4 |

                        | b1  b2  b4 |      | b1  b2  b3 |
                   + a3 | c1  c2  c4 | - a4 | c1  c2  c3 | .
                        | d1  d2  d4 |      | d1  d2  d3 |
2.2.1 Minors and Cofactors
The minor Δ_ij associated with the element a_ij in the matrix A is defined as the determinant of the matrix obtained from A by deleting the row and column that contain the element a_ij (i.e. row i and column j).
For a 3 × 3 matrix A, for example,

Δ11 = | a22  a23 | = a22 a33 - a23 a32 ,
      | a32  a33 |

Δ32 = | a11  a13 | = a11 a23 - a13 a21 ,  etc.
      | a21  a23 |

The cofactor of the element a_ij is denoted by A_ij and its value is

A_ij = (-1)^{i+j} Δ_ij .

Thus A11 = (-1)^{1+1} Δ11 = a22 a33 - a23 a32, and similarly for the other cofactors.
2.2.2 Expansion by Cofactors
From the formula for a 3 × 3 determinant,

|A| = a11 | a22  a23 | - a12 | a21  a23 | + a13 | a21  a22 | .
          | a32  a33 |       | a31  a33 |       | a31  a32 |

Noting that

A11 = (-1)^{1+1} Δ11 =  Δ11
A12 = (-1)^{1+2} Δ12 = -Δ12
A13 = (-1)^{1+3} Δ13 =  Δ13        (*)

we can write

|A| = a11 A11 + a12 A12 + a13 A13 ,

using (*). Note that the alternating signs that appear in the expansion by minors are absorbed into the cofactors A_ij. This expression provides an expansion of |A| by row 1.
We can equally well expand by row 2 or by row 3:

|A| = a21 A21 + a22 A22 + a23 A23        (row 2)

or

|A| = a31 A31 + a32 A32 + a33 A33        (row 3).

Similarly we can expand a determinant by any given column. For example, expanding by column 2 gives

|A| = a12 A12 + a22 A22 + a32 A32 .
Exercise: Show that all the expressions for |A| given above give the same answer for the matrix

    [ 2  1  2 ]
A = [ 4  3  1 ] .
    [ 1  2  2 ]
General Rule: Generally, the determinant of a square matrix A can be evaluated by taking any row (or column) of A, multiplying each element of the row (or column) by its own cofactor, and summing the results.
Thus, if A is an n × n matrix, then for any i with 1 ≤ i ≤ n:

|A| = Σ_{j=1}^{n} a_ij A_ij        (expansion by row i)

or

|A| = Σ_{j=1}^{n} a_ji A_ji        (expansion by column i).
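The general rule above translates directly into a recursive procedure: expand along a row, with each term weighted by its cofactor. A Python sketch (pure Python lists, no library assumed beyond the standard language):

```python
def det_cofactor(M):
    """Determinant by cofactor expansion along row 0 (the general rule)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        # Cofactor sign (-1)^(0+j) absorbs the alternating signs.
        total += (-1) ** j * M[0][j] * det_cofactor(minor)
    return total

A = [[3, 1,  2],
     [2, 4, -1],
     [1, 1,  2]]
print(det_cofactor(A))     # 18
```

This is the textbook definition, not an efficient algorithm: it does O(n!) work, whereas the row-reduction methods later in these notes take O(n^3).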
2.2.3 Expansion by Alien Cofactors
If the elements of one row (or column) are multiplied by the cofactors of a different row (or column) and the products are summed, the result is zero, i.e.

Σ_{k=1}^{n} a_ik A_jk = 0 if i ≠ j,   and   Σ_{k=1}^{n} a_ki A_kj = 0 if i ≠ j.

Example: Expand

      | 3  1   2 |
|A| = | 2  4  -1 |
      | 1  1   2 |

by column 3 and verify that

a12 A13 + a22 A23 + a32 A33 = 0,
a11 A13 + a21 A23 + a31 A33 = 0.

Solution:

A13 = (-1)^{1+3} | 2  4 | = 2 - 4 = -2
                 | 1  1 |

A23 = (-1)^{2+3} | 3  1 | = -(3 - 1) = -2
                 | 1  1 |

A33 = (-1)^{3+3} | 3  1 | = 12 - 2 = 10
                 | 2  4 |

Therefore

|A| = a13 A13 + a23 A23 + a33 A33 = 2(-2) + (-1)(-2) + 2(10) = 18.

Also

a12 A13 + a22 A23 + a32 A33 = 1(-2) + 4(-2) + 1(10) = 0

and

a11 A13 + a21 A23 + a31 A33 = 3(-2) + 2(-2) + 1(10) = 0.
2.2.4 Properties of Determinants
The following properties hold for determinants of any order, but here we shall give examples
for determinants of order 3.
1. The value of a determinant does not change if we take the transpose. So if A is a
square matrix
|A| = |AT |
      | a11  a12  a13 |   | a11  a21  a31 |
|A| = | a21  a22  a23 | = | a12  a22  a32 | = |A^T|.
      | a31  a32  a33 |   | a13  a23  a33 |
2. If two rows (or two columns) are interchanged, the sign of the determinant changes (i.e. the value is multiplied by -1). Thus if we exchange rows 1 and 3:

| 0  3  0 |     | 3  4  1 |
| 2  4  7 | = - | 2  4  7 | = (-3)(2 - 21) = 57.
| 3  4  1 |     | 0  3  0 |
3. If two rows (or columns) are identical, the value of the determinant is zero. This is a
consequence of the previous property: Interchange the two identical rows. This
multiplies the value of the determinant by -1. But the new determinant is the same as
the old one, so |A| = -|A|, which forces |A| = 0.
4. If A and B are both n n matrices, then
|AB| = |A||B|.
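Properties 1, 2 and 4 are easy to spot-check numerically. A NumPy sketch (NumPy and the random test matrices are my assumptions, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(3, 3)).astype(float)
B = rng.integers(-5, 5, size=(3, 3)).astype(float)

# Property 4: det(AB) = det(A) det(B).
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# Property 1: det(A) = det(A^T).
assert np.isclose(np.linalg.det(A), np.linalg.det(A.T))

# Property 2: swapping two rows multiplies the determinant by -1.
A_swapped = A[[1, 0, 2], :]
assert np.isclose(np.linalg.det(A_swapped), -np.linalg.det(A))
```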
5. To multiply a determinant by the constant λ, we multiply all the elements of one row (or one column) by λ.
For example:

  | a  b  c |   | λa  λb  λc |
λ | d  e  f | = |  d   e   f | .
  | g  h  i |   |  g   h   i |

Contrast this with a scalar multiple of a matrix:

  [ a  b  c ]   [ λa  λb  λc ]
λ [ d  e  f ] = [ λd  λe  λf ] .
  [ g  h  i ]   [ λg  λh  λi ]
6. If we add a multiple of the elements of any one row (or column) to the corresponding elements of another row (or column), the value of the determinant does not change.
For example

| 6  2  4 |   | 4  0  0 |
| 4  4  1 | = | 4  4  1 |        ([row 1] - 2 × [row 3])
| 1  1  2 |   | 1  1  2 |

Example: Evaluate

      | 1   2   2   1 |
|A| = | 3   7  10   6 |
      | 2   1   3  -2 |
      | 5  12   6   3 |

Solution: We can evaluate this directly by expansion, or we can be smarter and first carry out some row/column operations using property 6 to simplify the determinant as we go along.

      | 1   0   0   0 |
|A| = | 3   1   4   3 |        (c2 - 2c1, c3 - 2c1, c4 - c1)
      | 2  -3  -1  -4 |
      | 5   2  -4  -2 |

      |  1   4   3 |
    = | -3  -1  -4 |           (expanding by row 1)
      |  2  -4  -2 |

      |  1    0    0 |
    = | -3   11    5 |         (c2 - 4c1, c3 - 3c1)
      |  2  -12   -8 |

    = |  11   5 |              (expanding by row 1)
      | -12  -8 |

    = -88 + 60 = -28.
Example: Evaluate

      | 1+a   1    1    1  |
|A| = |  1    1    1    1  |
      |  1    1   1+b   1  |
      |  1    1    1   1+c |

Solution: Subtracting column 2 from column 1:

      | a   1    1    1  |
|A| = | 0   1    1    1  |
      | 0   1   1+b   1  |
      | 0   1    1   1+c |

        | 1    1    1  |
    = a | 1   1+b   1  |       (expanding by column 1)
        | 1    1   1+c |

        | 1  0  0 |
    = a | 1  b  0 |            (c2 - c1, c3 - c1)
        | 1  0  c |

    = a | b  0 |               (expanding by row 1)
        | 0  c |

    = abc.
Example: Simplify and evaluate

      |  x      y      z   |
|A| = | 1-2x   2-2y   3-2z |
      |  2      3      4   |

Solution:

      | x  y  z |
|A| = | 1  2  3 |             (new row 2 = row 2 + 2 × row 1)
      | 2  3  4 |

      | x  y  z |
    = | 1  2  3 |             (new row 3 = row 3 - row 2)
      | 1  1  1 |

    = x | 2  3 | - y | 1  3 | + z | 1  2 |      (expanding by row 1)
        | 1  1 |     | 1  1 |     | 1  1 |

    = -x + 2y - z.
2.3 The Inverse of a Matrix
The inverse of a square matrix A (when it exists) is the matrix A^{-1} satisfying A A^{-1} = A^{-1} A = I.

2.3.2 The Inverse of a 2 × 2 Matrix
Let

    [ a  b ]
A = [ c  d ]

with |A| = ad - bc ≠ 0, and let

        1   [  d  -b ]
B = --------[ -c   a ] .
       |A|

Then

       1  [ a  b ] [  d  -b ]     1  [ ad - bc   -ab + ba ]
AB = -----[ c  d ] [ -c   a ] = -----[ cd - dc   -cb + ad ]
      |A|                        |A|

       1  [ ad - bc      0     ]   [ 1  0 ]
   = -----[    0      -cb + ad ] = [ 0  1 ] = I        (since |A| = ad - bc).
      |A|

Similarly BA = I, so B = A^{-1}.
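The 2 × 2 adjugate formula above is short enough to code directly. A hedged sketch (the function name is mine, and NumPy is assumed only for the matrix product check):

```python
import numpy as np

def inverse_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via (1/|A|) [[d, -b], [-c, a]]."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular: no inverse")
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[2.0, -3.0],
              [1.0,  4.0]])
Ainv = inverse_2x2(2, -3, 1, 4)
print(A @ Ainv)        # the 2 x 2 identity (up to rounding)
```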
"
AB =
1
adj A.
|A|
As for the case of a 2 2 matrix above, we can demonstrate the truth of this statement by
direct calculation.
Let
B=
1
adj A.
|A|
Then
16
AB =
a21 a22 a23 A12 A22 A32
|A|
a31 a32 a33
A13 A23 A33
3
P
a A
j=1 1j 1j
3
P
1
a2j A1j
=
|A| j=1
3
P
a3j A1j
j=1
3
P
a1j A2j
j=1
3
P
a1j A3j
j=1
a2j A2j
j=1
3
P
3
P
3
P
a2j A3j
j=1
a3j A2j
j=1
3
P
a3j A3j
j=1
|A|
0
0
1
=
0
0 |A|
|A|
0
0 |A|
1 0 0
= 0 1 0 = I
0 0 1
Example: Find the inverse of

    [  2  -1  1 ]
A = [  0   1  3 ] .
    [ -1   4  1 ]

Solution: First find the cofactors:

A11 = (-1)^{1+1} |  1  3 | = 1 - 12 = -11
                 |  4  1 |

A12 = (-1)^{1+2} |  0  3 | = -(0 + 3) = -3
                 | -1  1 |

A13 = (-1)^{1+3} |  0  1 | = 0 + 1 = 1
                 | -1  4 |

A21 = (-1)^{2+1} | -1  1 | = -(-1 - 4) = 5
                 |  4  1 |

A22 = (-1)^{2+2} |  2  1 | = 2 + 1 = 3
                 | -1  1 |

A23 = (-1)^{2+3} |  2  -1 | = -(8 - 1) = -7
                 | -1   4 |

A31 = (-1)^{3+1} | -1  1 | = -3 - 1 = -4
                 |  1  3 |

A32 = (-1)^{3+2} |  2  1 | = -(6 - 0) = -6
                 |  0  3 |

A33 = (-1)^{3+3} |  2  -1 | = 2 - 0 = 2
                 |  0   1 |

Now evaluate the determinant by expanding along the first row (for example):

|A| = a11 A11 + a12 A12 + a13 A13 = 2(-11) + (-1)(-3) + 1(1) = -18.

Then the inverse is given by

           1              1  [ A11  A12  A13 ]T
A^{-1} = ----- adj A = -----[ A21  A22  A23 ]
          |A|           |A| [ A31  A32  A33 ]

           1  [ -11  -3   1 ]T      1  [ -11   5  -4 ]      1  [ 11  -5   4 ]
       = -----[   5   3  -7 ]  = -----[  -3   3  -6 ] = ----[  3  -3   6 ] .
         -18  [  -4  -6   2 ]    -18  [   1  -7   2 ]    18 [ -1   7  -2 ]
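The adjugate construction generalises to any size and is easy to automate. A NumPy sketch (function name mine; `np.linalg.det` is used for the minors), checked against the example above:

```python
import numpy as np

def adjugate_inverse(A):
    """A^{-1} = adj(A) / |A|, with adj(A) the transposed cofactor matrix."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # Minor: delete row i and column j, then take its determinant.
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T / np.linalg.det(A)     # adj(A) = C^T

A = np.array([[ 2.0, -1.0, 1.0],
              [ 0.0,  1.0, 3.0],
              [-1.0,  4.0, 1.0]])
Ainv = adjugate_inverse(A)
print(np.round(Ainv * 18))     # 18 A^{-1} = [[11 -5 4] [3 -3 6] [-1 7 -2]]
```

As with cofactor expansion, this is the definitional method; for large matrices, elimination-based routines such as `np.linalg.inv` are far cheaper.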
2.3.4 Cancellation Laws
If A is non-singular (so that A^{-1} exists), then:
1. AB = 0 implies B = 0.
   Proof: AB = 0 implies A^{-1}(AB) = A^{-1}0, i.e. (A^{-1}A)B = 0, i.e. B = 0.
2. AB = AC implies B = C.
   Proof: AB = AC implies A^{-1}AB = A^{-1}AC, i.e. B = C.
Neither implication need hold when A is singular (see Note 2 of section 2.1.9).

2.4 Systems of Linear Equations
A=
a11 a12
a21 a22
..
..
.
.
an1 an2
a1n
a2n
..
.
ann
, X =
x1
x2
..
.
xn
, B =
b1
b2
..
.
bn
If all the elements of B are zero, the system of equations is called homogeneous.
If at least one of the elements of B is not zero, the system is called non-homogeneous.
2.4.1 Solution of Equations by Matrix Inversion
If |A| ≠ 0, the system AX = B has the unique solution X = A^{-1}B.

Example: Solve AX = B, where

    [ 2  -3 ]        [ 3 ]        [ x ]
A = [ 1   4 ] ,  B = [ 7 ] ,  X = [ y ] .

Solution:

|A| = 8 + 3 = 11

and

           1  [ A11  A12 ]T     1  [  4  3 ]
A^{-1} = -----[ A21  A22 ]  = ----[ -1  2 ] .
          |A|                  11

Therefore

               1  [  4  3 ] [ 3 ]     1  [ 12 + 21 ]     1  [ 33 ]   [ 3 ]
X = A^{-1}B = ----[ -1  2 ] [ 7 ] = ----[ -3 + 14 ] = ----[ 11 ] = [ 1 ] ,
               11                    11               11

i.e. x = 3, y = 1.

Example: Solve AX = B, where

    [  2  -1  1 ]        [ 2 ]        [ x1 ]
A = [  0   1  3 ] ,  B = [ 8 ] ,  X = [ x2 ] .
    [ -1   4  1 ]        [ 9 ]        [ x3 ]

Solution: From the earlier example,

           1  [ 11  -5   4 ]
A^{-1} = ----[  3  -3   6 ] ,
          18 [ -1   7  -2 ]

and therefore

    [ x1 ]     1  [ 11  -5   4 ] [ 2 ]
X = [ x2 ] = ----[  3  -3   6 ] [ 8 ]
    [ x3 ]    18 [ -1   7  -2 ] [ 9 ]

       1  [ 22 - 40 + 36 ]     1  [ 18 ]   [ 1 ]
   = ----[  6 - 24 + 54 ] = ----[ 36 ] = [ 2 ] .
      18 [ -2 + 56 - 18 ]    18 [ 36 ]   [ 2 ]
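The inverse-matrix method can be sketched in NumPy (an assumption; the entries below are those of the 3 × 3 example as reconstructed):

```python
import numpy as np

# Solve AX = B by forming the inverse explicitly.
A = np.array([[ 2.0, -1.0, 1.0],
              [ 0.0,  1.0, 3.0],
              [-1.0,  4.0, 1.0]])
B = np.array([2.0, 8.0, 9.0])

X = np.linalg.inv(A) @ B
print(X)                       # [1. 2. 2.]

# In numerical practice np.linalg.solve is preferred: it uses
# elimination directly and never forms the inverse.
assert np.allclose(np.linalg.solve(A, B), X)
```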
Example: Solve

2x + y = 1                               (2.2)
4x + 2y = λ                              (2.3)

Solution: Let

    [ 2  1 ]
A = [ 4  2 ] .

Then |A| = 0, so the system is singular.
It should be obvious by inspection that the given system has no solutions if λ ≠ 2 (subtract twice the 1st equation from the 2nd equation).
If λ = 2 we have infinitely many solutions, since then both equations give exactly the same information.
To prove these results and find the solutions, we first specify one of the unknowns.
Let x = α, where α is an arbitrary constant. Then from (2.2)

y = 1 - 2x = 1 - 2α.                     (2.4)

From (2.3):

2y = λ - 4x = λ - 4α,  so  y = λ/2 - 2α. (2.5)

Now (2.4) and (2.5) must give the same value for y and, therefore,

λ/2 - 2α = 1 - 2α,  so  λ = 2.           (2.6)
Example: Consider the system

x + y + z = 3
x - y + 3z = 5                           (2.7)
3x - y + 7z = λ                          (2.8)

Find values of λ for which these have solutions and find these solutions.
Solution: We might just spot that the lhs of row 3 = lhs of (row 1 + 2 × row 2) and conclude that we must have λ = 3 + 2(5) = 13. This gives no solutions if λ ≠ 13 and an infinite number of solutions if λ = 13.
However, if we fail to spot this relationship, we proceed using matrix methods as follows. Let

    [ 1   1  1 ]
A = [ 1  -1  3 ] .
    [ 3  -1  7 ]

Then

|A| = | -1  3 | - | 1  3 | + | 1  -1 | = (-7 + 3) - (7 - 9) + (-1 + 3) = -4 + 2 + 2 = 0.
      | -1  7 |   | 3  7 |   | 3  -1 |

Since |A| = 0 we have either no solutions or an infinite number of solutions.
Let x = α. Adding the first equation to (2.7) gives

2x + 4z = 8,  so  z = 2 - α/2.

From the first equation,

y = 3 - x - z = 3 - α - (2 - α/2) = 1 - α/2.

Therefore x = α, y = 1 - α/2, z = 2 - α/2.
Substitute into (2.8) to get

3α - (1 - α/2) + 7(2 - α/2) = λ,  and hence  λ = 13.
Now consider the homogeneous system AX = 0, where

    [ a11  a12  ...  a1n ]        [ x1 ]        [ 0 ]
A = [ a21  a22  ...  a2n ] ,  X = [ x2 ] ,  0 = [ 0 ] .
    [  :    :          :  ]        [ :  ]        [ : ]
    [ an1  an2  ...  ann ]        [ xn ]        [ 0 ]

Clearly the system has the trivial solution X = 0, i.e.

x1 = x2 = ... = xn = 0.

We now ask under what conditions there are any solutions apart from the trivial one.
If A is nonsingular, then |A| ≠ 0, A^{-1} exists and the only solution is

X = A^{-1} 0 = 0.

Thus if |A| ≠ 0, the trivial solution is the only solution.
If A is singular (|A| = 0), then there are infinitely many solutions.
Note that inconsistency is not a possibility in this case, as we know we have at least one solution, X = 0.
Example: Solve

3x + 4y = 0
x - y = 0

Solution: Let

    [ 3   4 ]
A = [ 1  -1 ] .

Then |A| = -3 - 4 = -7 ≠ 0, so the only solution is the trivial one, x = y = 0.

Example: Solve

x + 2y = 0
2x + 4y = 0

Solution: Let

    [ 1  2 ]
A = [ 2  4 ] .

Then |A| = 4 - 4 = 0, and so A is singular and the system has infinitely many solutions.
Therefore x = α, y = -α/2 for any α.
Example: Solve

x + y + z = 0                            (2.9)
x - y + 3z = 0                           (2.10)
3x - y + 7z = 0                          (2.11)

Solution: Let

    [ 1   1  1 ]
A = [ 1  -1  3 ] .
    [ 3  -1  7 ]

Then |A| = (-7 + 3) - (7 - 9) + (-1 + 3) = -4 + 2 + 2 = 0.
Hence A is singular, and the system has infinitely many solutions. We proceed as before:
Let x = α. Add (2.9) and (2.10) to obtain

2x + 4z = 0,  so  z = -x/2 = -α/2.

From (2.9), y = -x - z = -α + α/2 = -α/2.
Check in (2.11): 3α + α/2 - 7α/2 = 0.
Therefore x = α, y = -α/2, z = -α/2.
2.4.3 Summary
For the system AX = B:
If |A| ≠ 0, there is a unique solution X = A^{-1}B; for a homogeneous system (B = 0) that unique solution is the trivial one, X = 0.
If |A| = 0, there are either no solutions or infinitely many; a homogeneous system always has infinitely many.
2.5 Gaussian Elimination
Example: Solve the system

2x1 - 2x2 + 3x3 = 3                      (2.12)
4x1 - x2 + 5x3 = 11                      (2.13)
-2x1 - 4x2 + 5x3 = -7                    (2.14)

Eliminating x1 from the last two equations:

2x1 - 2x2 + 3x3 = 3                      (2.12)
3x2 - x3 = 5                             (2.15)   [(2.13) - 2(2.12)]
-6x2 + 8x3 = -4                          (2.16)   [(2.14) + (2.12)]

Then eliminating x2:

2x1 - 2x2 + 3x3 = 3                      (2.12)
3x2 - x3 = 5                             (2.15)
6x3 = 6                                  (2.17)   [(2.16) + 2(2.15)]

The original system has now been reduced to a triangular system of equations. This can be solved by back substitution:

(2.17)  gives  x3 = 1
(2.15)  gives  3x2 - 1 = 5,  so  x2 = 2
(2.12)  gives  2x1 - 4 + 3 = 3,  so  x1 = 2.
More generally, the system AX = B can be reduced to a triangular set of equations which are
easily solved by back substitution. In hand calculations this can be done in tabular form
incorporating some simple but effective checks on the accuracy as in the following example.
Example: Use Gaussian Elimination to solve

x1 + x2 + 2x3 - x4 = 5
x1 + 3x2 + 2x3 + x4 = 17
3x1 + x2 + 3x3 + x4 = 18
x1 + 3x2 + 4x3 + 2x4 = 27
Solution: To solve in tabular form, we set up a table in the following way:

Row    x1  x2  x3  x4 | RHS | Sum Check | Operation
(1)     1   1   2  -1 |   5 |           |
(2)     1   3   2   1 |  17 |           |
(3)     3   1   3   1 |  18 |           |
(4)     1   3   4   2 |  27 |           |
The rows correspond to the original system. The Sum Check column gives the sum of the numbers to its left, and helps to prevent arithmetic errors. Adding this in, we get

Row    x1  x2  x3  x4 | RHS | Sum Check | Operation
* (1)   1   1   2  -1 |   5 |     8     |
  (2)   1   3   2   1 |  17 |    24     |
  (3)   3   1   3   1 |  18 |    26     |
  (4)   1   3   4   2 |  27 |    37     |
Operation
The rows in subsequent tables correspond to the equations which are formed in the elimination
process. The Operation column describes how the current row is formed from previous rows.
The operation should also be applied to the sum check column and the result checked against
the sum of the numbers to its left. This is equivalent to the algebraic operations we did before.
In each block of rows one row is indicated by *: this is called the pivotal row. In this example, the first row in each block has been chosen to be the pivotal row, but this is not necessary.
At each stage, multiples of the pivotal row are subtracted from/added to the other rows to
eliminate x1 , then x2 etc.
Row    x1  x2  x3  x4 | RHS | Sum Check | Operation
* (5)   0   2   0   2 |  12 |    16     | (2) - (1)
  (6)   0  -2  -3   4 |   3 |     2     | (3) - 3(1)
  (7)   0   2   2   3 |  22 |    29     | (4) - (1)

* (8)   0   0  -3   6 |  15 |    18     | (6) + (5)
  (9)   0   0   2   1 |  10 |    13     | (7) - (5)

* (10)  0   0   0   5 |  20 |    25     | (9) + (2/3)(8)

The pivotal rows can be used to get the solution using back substitution:

(10)  5x4 = 20                  so  x4 = 4
(8)   -3x3 + 6x4 = 15           so  x3 = 3
(5)   2x2 + 2x4 = 12            so  x2 = 2
(1)   x1 + x2 + 2x3 - x4 = 5    so  x1 = 1
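The forward-elimination and back-substitution steps can be sketched as a short function. This is a bare-bones illustration of the hand method (it assumes NumPy and does no pivoting or zero-pivot checks, unlike robust library routines):

```python
import numpy as np

def gauss_solve(A, b):
    """Gaussian elimination with back substitution (sketch, no pivoting)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination: subtract multiples of the pivotal row.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution on the resulting triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[1, 1, 2, -1],
              [1, 3, 2,  1],
              [3, 1, 3,  1],
              [1, 3, 4,  2]])
b = np.array([5, 17, 18, 27])
print(gauss_solve(A, b))       # [1. 2. 3. 4.]
```

The sum-check column of the tabular method plays the role that unit tests play here: an independent confirmation that each elimination step was carried out correctly.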
2.5.2 Finding the Inverse by Gaussian Elimination
Gaussian elimination can also be used to find A^{-1}: write the identity matrix alongside A and apply the same row operations to both until A is reduced to I; the identity matrix is then transformed into A^{-1}. For example, take

    [ 1  2  1 ]
A = [ 2  2  4 ] .
    [ 3  2  8 ]

Row  |     A      |     I      | Sum Check | Operation
(1)  |  1   2   1 |  1   0   0 |     5     |
(2)  |  2   2   4 |  0   1   0 |     9     |
(3)  |  3   2   8 |  0   0   1 |    14     |

(4)  |  1   2   1 |  1   0   0 |     5     | (1)
(5)  |  0  -2   2 | -2   1   0 |    -1     | (2) - 2(1)
(6)  |  0  -4   5 | -3   0   1 |    -1     | (3) - 3(1)

(7)  |  1   0   3 | -1   1   0 |     4     | (4) + (5)
(8)  |  0  -2   2 | -2   1   0 |    -1     | (5)
(9)  |  0   0   1 |  1  -2   1 |     1     | (6) - 2(5)

(10) |  1   0   0 | -4   7  -3 |     1     | (7) - 3(9)
(11) |  0  -2   0 | -4   5  -2 |    -3     | (8) - 2(9)
(12) |  0   0   1 |  1  -2   1 |     1     | (9)

(13) |  1   0   0 | -4    7   -3 |    1    | (10)
(14) |  0   1   0 |  2  -5/2   1 |  3/2    | (-1/2)(11)
(15) |  0   0   1 |  1   -2    1 |    1    | (12)

The left-hand block is now the identity, so the right-hand block is the inverse:

         [ -4    7   -3 ]
A^{-1} = [  2  -5/2   1 ] .
         [  1   -2    1 ]

You should check by multiplication that this does indeed yield the inverse of A.
Now we solve the equations AX = B, with B = [1; 8; 17]:

X = A^{-1}B:

[ x ]   [ -4    7   -3 ] [  1 ]   [ -4(1) + 7(8) - 3(17)    ]   [  1 ]
[ y ] = [  2  -5/2   1 ] [  8 ] = [ 2(1) - (5/2)(8) + 1(17) ] = [ -1 ] .
[ z ]   [  1   -2    1 ] [ 17 ]   [ 1(1) - 2(8) + 1(17)     ]   [  2 ]
2.6 Eigenvalues and Eigenvectors
Suppose A is a square matrix and X is a column matrix (i.e. vector) with at least one non-zero entry such that

AX = λX,                                 (2.18)

where λ is a scalar.
The scalar λ is called an eigenvalue or (much less commonly) eigenroot or latent root. The vector X is called an eigenvector corresponding to the eigenvalue λ.
Clearly kX is also an eigenvector for any non-zero scalar k.
(2.18) can be written as:

AX = λIX,  i.e.  (A - λI)X = 0.          (2.19)

Now, (A - λI)X = 0 has a non-trivial solution for the column vector X if and only if

|A - λI| = 0.                            (2.20)

Equation (2.20) is called the characteristic equation of A.
For any eigenvalue (i.e. a value of λ satisfying (2.20)), we can find a corresponding column vector X, called the eigenvector, satisfying (2.18).
If A is an n × n matrix, so that |A - λI| is an nth-order determinant, then |A - λI| is a polynomial of degree n in λ. We therefore solve the characteristic equation by finding the roots of this polynomial. There will therefore be at most n distinct solutions of (2.20).
Example: Find the eigenvalues and corresponding eigenvectors associated with the matrix

    [  4   3 ]
A = [ -2  -1 ] .

Solution: Here

         [ λ  0 ]
λI =     [ 0  λ ] .

Therefore

            | 4-λ    3   |
|A - λI| =  | -2   -1-λ  |

         = (4 - λ)(-1 - λ) - 3(-2)
         = -4 - 3λ + λ² + 6
         = λ² - 3λ + 2
         = (λ - 2)(λ - 1).

Therefore |A - λI| = 0 if and only if λ = 2 or λ = 1.
Case λ = 1: (A - I)X = 0 gives

[  3   3 ] [ x1 ]   [ 0 ]
[ -2  -2 ] [ x2 ] = [ 0 ] ,

so x2 = -x1 and

    [ x1 ]     [  1 ]
X = [ x2 ] = α [ -1 ]

is an eigenvector for any α ≠ 0.
Case λ = 2: (A - 2I)X = 0 gives 2x1 + 3x2 = 0, so

x2 = -(2/3) x1 .

Taking x1 = 3 gives the eigenvector X = [3; -2].
If a unit eigenvector is required, we use X/|X|, where

|X| = √(x1² + x2²) = √(3² + (-2)²) = √13.
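The characteristic-equation calculation above can be cross-checked numerically. A NumPy sketch (an assumption; `np.linalg.eig` returns eigenvalues and unit eigenvectors in one call):

```python
import numpy as np

A = np.array([[ 4.0,  3.0],
              [-2.0, -1.0]])

# Eigenvalues are the roots of |A - lambda*I| = 0.
evals, evecs = np.linalg.eig(A)
print(sorted(evals))           # approximately [1, 2]

# Each column of evecs is a (unit) eigenvector: A v = lambda v.
for lam, v in zip(evals, evecs.T):
    assert np.allclose(A @ v, lam * v)
```

Note that the library normalises its eigenvectors to unit length, whereas the hand calculation leaves an arbitrary non-zero scalar α free; both describe the same eigenvector directions.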
Example: Find the eigenvalues and corresponding eigenvectors of

    [  1  2  2 ]
A = [ -2  2  3 ] .
    [  2  3  2 ]

Solution:
First find the eigenvalues by solving the characteristic equation:

           | 1-λ   2    2  |
|A - λI| = | -2   2-λ   3  |
           |  2    3   2-λ |

         = (1-λ) | 2-λ   3  | - 2 | -2   3  | + 2 | -2  2-λ |
                 |  3   2-λ |     |  2  2-λ |     |  2   3  |

         = (1-λ)((2-λ)² - 9) - 2(-2(2-λ) - 6) + 2(-6 - 2(2-λ))
         = (1-λ)(λ² - 4λ - 5) + (20 - 4λ) + (4λ - 20)
         = (1-λ)(λ + 1)(λ - 5).

Therefore |A - λI| = 0 if and only if λ = -1, λ = 1 or λ = 5.
Case λ = -1: In this case

             [  2  2  2 ] [ x1 ]   [ 0 ]
(A + I)X  =  [ -2  3  3 ] [ x2 ] = [ 0 ] ,
             [  2  3  3 ] [ x3 ]   [ 0 ]

and so

2x1 + 2x2 + 2x3 = 0                      (2.21)
-2x1 + 3x2 + 3x3 = 0                     (2.22)
2x1 + 3x2 + 3x3 = 0                      (2.23)

Adding (2.22) and (2.23) gives 6x2 + 6x3 = 0, so x3 = -x2; then (2.21) gives x1 = 0. Writing

    [ x1 ]
X = [ x2 ] ,
    [ x3 ]

we see that X = γ[0; 1; -1] is an eigenvector corresponding to λ = -1, for any γ ≠ 0.
Case λ = 1: In this case

             [  0  2  2 ] [ x1 ]   [ 0 ]
(A - I)X  =  [ -2  1  3 ] [ x2 ] = [ 0 ]
             [  2  3  1 ] [ x3 ]   [ 0 ]

and so

2x2 + 2x3 = 0                            (2.24)
-2x1 + x2 + 3x3 = 0                      (2.25)
2x1 + 3x2 + x3 = 0                       (2.26)

From (2.24), x2 + x3 = 0, so x3 = -x2.
Putting x1 = β for any β ≠ 0, (2.25) gives

-2β + x2 + 3(-x2) = 0,  so  x2 = -β.

Therefore

    [ x1 ]     [  1 ]
X = [ x2 ] = β [ -1 ]
    [ x3 ]     [  1 ]

is an eigenvector corresponding to λ = 1.
Case λ = 5: In this case

              [ -4   2   2 ] [ x1 ]   [ 0 ]
(A - 5I)X  =  [ -2  -3   3 ] [ x2 ] = [ 0 ]
              [  2   3  -3 ] [ x3 ]   [ 0 ]

and so

-4x1 + 2x2 + 2x3 = 0                     (2.27)
-2x1 - 3x2 + 3x3 = 0                     (2.28)
2x1 + 3x2 - 3x3 = 0                      (2.29)

i.e.  -2x1 + x2 + x3 = 0                 (2.30)   [(1/2) × (2.27)]
and   -2x1 - 3x2 + 3x3 = 0               (2.31)   [(-1) × (2.29)]

Subtracting (2.31) from (2.30) gives 4x2 - 2x3 = 0, so x3 = 2x2; then (2.30) gives 2x1 = x2 + x3 = 3x2.
Putting x2 = 2δ, we get x1 = 3δ and x3 = 4δ. Writing

    [ x1 ]
X = [ x2 ] ,
    [ x3 ]

we see that X = δ[3; 2; 4]
is an eigenvector corresponding to λ = 5.
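The 3 × 3 example can be verified end to end. This NumPy sketch assumes the matrix entries of the example as reconstructed here, and checks the three hand-computed eigenpairs directly:

```python
import numpy as np

A = np.array([[ 1.0, 2.0, 2.0],
              [-2.0, 2.0, 3.0],
              [ 2.0, 3.0, 2.0]])

evals = np.linalg.eigvals(A)
print(np.sort(evals))          # approximately [-1, 1, 5]

# Check the hand-computed eigenvectors: A v = lambda v.
for lam, v in [(-1, [0, 1, -1]), (1, [1, -1, 1]), (5, [3, 2, 4])]:
    v = np.array(v, dtype=float)
    assert np.allclose(A @ v, lam * v)
```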