Linear Algebra

Karabuk University
Faculty of Engineering
Electrical and Electronics Engineering
Prof. dr. eng. Mihai Cernat

First part: Linear Algebra

Literature
Frank Ayres Jr., Theory and Problems of Matrices, Schaum's Outline Series, Metric Edition. McGraw-Hill Inc., 1974, ISBN 0-07-099035-2 (512.5/AYR/1974)
Contents
1 Matrices
2 Some types of matrices
3 Determinant of a square matrix
4 Evaluation of determinants
5 Equivalence
6 The adjoint of a square matrix
7 The inverse of a matrix
8 Fields
9 Linear dependence of vectors and forms
10 Linear equations
11 Vector spaces
12 Linear transformations
13 Vectors over the real field
14 Vectors over the complex field
15 Congruence
16 Bilinear forms
17 Quadratic forms
18 Hermitian forms
19 The characteristic equation of a matrix
20 Similarity
21 Similarity to a diagonal matrix
22 Polynomials over a field
23 Lambda matrices
24 Smith normal form
25 The minimum polynomial of a matrix
26 Canonical forms under similarity
Chapter 1 - Matrices (1)
A rectangular array of numbers enclosed by a pair of brackets, such as

  (a)  ( 2  3  7 )
       ( 1 -1  5 )

and subject to certain rules of operation given below, is called a matrix.
The matrix (a) could be considered as the coefficient matrix of the homogeneous linear equation system (a1) or as the augmented matrix of the non-homogeneous linear equation system (a2):

  (a1)  2x + 3y + 7z = 0        (a2)  2x + 3y = 7
         x -  y + 5z = 0               x -  y = 5

Later we shall show how the matrix can be used to obtain solutions of these systems.
Chapter 1 - Matrices (2)
The matrix (b):

  (b)  ( 1  2  1 )
       ( 2  1  4 )
       ( 4  7  6 )

could be given another interpretation: we might consider its rows as simply the coordinates of the points A(1, 2, 1), B(2, 1, 4) and C(4, 7, 6) in ordinary space.
The matrix will be used later to settle such questions as whether or not the three points lie in the same plane with the origin or on the same line through the origin.
Chapter 1 - Matrices (3)
In the matrix (1.1):

  (1.1)  ( a11  a12  a13  ...  a1n )
         ( a21  a22  a23  ...  a2n )
         ( ...  ...  ...  ...  ... )
         ( am1  am2  am3  ...  amn )

the numbers or functions aij are called its elements. In the double subscript notation, the first subscript indicates the row and the second subscript indicates the column in which the element stands.
Thus, all elements in the second row have 2 as first subscript and all the elements in the fifth column have 5 as second subscript.
Chapter 1 - Matrices (4)
A matrix of m rows and n columns is said to be of order "m by n" or m×n.
In indicating a matrix, pairs of parentheses, ( ), and double bars, || ||, are sometimes used.

SQUARE MATRICES
When m = n, the matrix (1.1) above is square and will be called a square matrix of order n or an n-square matrix. In a square matrix, the elements a11, a22, ..., ann are called its diagonal elements. The sum of the diagonal elements of a square matrix A is called the trace of the matrix A.

EQUAL MATRICES
Two matrices A = [aij] and B = [bij] are said to be equal (A = B) if and only if they have the same order and each element of one is equal to the corresponding element of the other, that is, only if

  aij = bij   (i = 1, 2, ..., m; j = 1, 2, ..., n)

Thus, two matrices are equal if and only if one is a duplicate of the other.
Chapter 1 - Matrices (5)
ZERO MATRIX
A matrix, every element of which is zero, is called a zero matrix. When A is a zero matrix and there can be no confusion as to its order, we shall write A = 0 instead of the m×n array of zero elements.

ADDITION OF MATRICES
If A = [aij] and B = [bij] are two m×n matrices, their sum (difference), A ± B, is defined as the m×n matrix C = [cij], where each element of C is the sum (difference) of the corresponding elements of A and B. Thus C = A ± B = [aij ± bij].
Example: if

  A = ( 1  2  3 )        B = (  2  3  0 )
      ( 0  1  4 )            ( -1  2  5 )

then

  A + B = (  3  5  3 )        A - B = ( -1 -1  3 )
          ( -1  3  9 )                (  1 -1 -1 )
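The example can be checked with a short sketch in plain Python (the list-of-rows representation and the helper name mat_add are ours, not from the text):

```python
# Element-wise sum (sign=+1) or difference (sign=-1) of two m x n matrices,
# stored as lists of rows.
def mat_add(A, B, sign=1):
    return [[a + sign * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2, 3], [0, 1, 4]]
B = [[2, 3, 0], [-1, 2, 5]]

print(mat_add(A, B))       # [[3, 5, 3], [-1, 3, 9]]
print(mat_add(A, B, -1))   # [[-1, -1, 3], [1, -1, -1]]
```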
Chapter 1 - Matrices (6)
Two matrices of the same order are said to be conformable for addition or subtraction. Two matrices of different orders cannot be added or subtracted.
The sum of k matrices A is a matrix of the same order as matrix A and each of its elements is equal to k times the corresponding element of matrix A.
We define: if k is any scalar (we call k a scalar to distinguish it from [k], which is a 1×1 matrix), then by kA is meant the matrix obtained from matrix A by multiplying each of its elements by k.
Example: if

  A = ( 1  2 )
      ( 2  3 )

then

  A + A + A = ( 1  2 ) + ( 1  2 ) + ( 1  2 ) = ( 3  6 ) = 3A        -5A = (  -5 -10 )
              ( 2  3 )   ( 2  3 )   ( 2  3 )   ( 6  9 )                   ( -10 -15 )
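A matching sketch for scalar multiples (the helper name is ours):

```python
# Scalar multiple kA: multiply every element of A by k.
def scal_mul(k, A):
    return [[k * a for a in row] for row in A]

A = [[1, 2], [2, 3]]
print(scal_mul(3, A))    # [[3, 6], [6, 9]]
print(scal_mul(-5, A))   # [[-5, -10], [-10, -15]]
```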
Chapter 1 - Matrices (7)
Assuming that the matrices A, B, and C are conformable for addition, we state:
(a) A + B = B + A (commutativity axiom)
(b) A + (B + C) = (A + B) + C (associativity axiom)
(c) k(A + B) = (A + B)k = kA + kB (distributivity axiom)
(d) There exists a matrix 0 such that A + 0 = A (zero element axiom)
(e) There exists a matrix -A such that A + (-A) = 0 (negative element axiom)
In particular, by -A, called the negative matrix of matrix A, is meant the matrix obtained from A by multiplying each element by -1 or by simply changing the sign of all its elements. For every matrix A, we have A + (-A) = 0, where 0 indicates the zero matrix of the same order as matrix A.
These are a result of the axioms of elementary algebra governing the addition of numbers and polynomials. They show, moreover, that conformable matrices obey the same axioms of addition as the elements of these matrices.
Chapter 1 - Matrices (8)
MULTIPLICATION OF MATRICES
By the product AB, in that order, of the 1×m matrix A and the m×1 matrix B is meant the 1×1 matrix C where:

  A = [ a11  a12  a13  ...  a1m ]        B = ( b11 )
                                             ( b21 )
                                             ( b31 )
                                             ( ... )
                                             ( bm1 )

  C = AB = [ a11 b11 + a12 b21 + a13 b31 + ... + a1m bm1 ] = [ Σ (k=1..m) a1k bk1 ]

Note that the operation is row by column: each element of the row is multiplied into the corresponding element of the column and the products are summed.
Chapter 1 - Matrices (9)
By the product AB, in that order, of the m×p matrix A = [aij] and the p×n matrix B = [bij] is meant the m×n matrix C = [cij] where:

  cij = ai1 b1j + ai2 b2j + ai3 b3j + ... + aip bpj = Σ (k=1..p) aik bkj    (i = 1, 2, ..., m; j = 1, 2, ..., n)

Think of matrix A as consisting of m rows and of matrix B as consisting of n columns. In forming matrix C = AB each row of matrix A is multiplied once by each column of matrix B. The element cij of C is then the product of the ith row of matrix A and the jth column of matrix B.
The product AB is defined, or matrix A is conformable to matrix B for multiplication, only when the number of columns of matrix A is equal to the number of rows of matrix B.
If matrix A is conformable to matrix B for multiplication (AB is defined), matrix B is not necessarily conformable to matrix A for multiplication (BA may or may not be defined).
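The row-by-column rule translates directly into code (a minimal sketch; mat_mul is our own helper name):

```python
# Row-by-column product of an m x p matrix A and a p x n matrix B.
def mat_mul(A, B):
    p, n = len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(n)]
            for i in range(len(A))]

A = [[1, 2, 3],
     [0, 1, 4]]        # 2 x 3
B = [[1, 0],
     [0, 1],
     [1, 1]]           # 3 x 2
print(mat_mul(A, B))   # [[4, 5], [4, 5]]
```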
Chapter 1 - Matrices (10)
Assuming that matrices A, B, C are conformable for addition and multiplication as defined above, we state:
(a) A(B + C) = AB + AC (first distributivity axiom)
(b) (A + B)C = AC + BC (second distributivity axiom)
(c) A(BC) = (AB)C (associativity axiom)
However:
(d) AB ≠ BA, generally
(e) AB = 0 does not necessarily imply A = 0 or B = 0
(f) AB = AC does not necessarily imply B = C
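Points (d) and (e) are easy to see on small matrices; the 2×2 pair below is our own illustration, not one from the text:

```python
# AB differs from BA, and AB can be the zero matrix with A and B non-zero.
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 1], [0, 0]]
B = [[1, 0], [-1, 0]]
print(mat_mul(A, B))   # [[0, 0], [0, 0]]   -> AB = 0, yet A != 0 and B != 0
print(mat_mul(B, A))   # [[1, 1], [-1, -1]] -> BA != AB
```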
Chapter 1 - Matrices (11)
PRODUCTS BY PARTITIONING
Let matrix A = [aij] be of order m×p and matrix B = [bij] of order p×n. In forming the product AB, the matrix A is in effect partitioned into m matrices of order 1×p and matrix B into n matrices of order p×1. Other partitions may be used. For example, let matrices A and B be partitioned into matrices (blocks) of indicated orders by drawing dividing lines, as

  A = ( A11 (m1×p1)  A12 (m1×p2)  A13 (m1×p3) )        B = ( B11 (p1×n1)  B12 (p1×n2) )
      ( A21 (m2×p1)  A22 (m2×p2)  A23 (m2×p3) )            ( B21 (p2×n1)  B22 (p2×n2) )
                                                           ( B31 (p3×n1)  B32 (p3×n2) )
Chapter 1 - Matrices (12)
In any such partitioning, it is necessary that the columns of matrix A and the rows of matrix B be partitioned in exactly the same way; however m1, m2, p1, p2, p3, n1, n2 may be any non-negative (including 0) integers such that m1 + m2 = m, p1 + p2 + p3 = p and n1 + n2 = n. Then:

  AB = ( A11 B11 + A12 B21 + A13 B31    A11 B12 + A12 B22 + A13 B32 )  =  ( C11  C12 )
       ( A21 B11 + A22 B21 + A23 B31    A21 B12 + A22 B22 + A23 B32 )     ( C21  C22 )
Example: Compute AB, given the partitioned matrices

  A = ( 2  1 | 0 )                     B = ( 1  1  1 | 0 )
      ( 3  2 | 0 )  = ( A11  A12 )         ( 2  1  1 | 0 )  = ( B11  B12 )
      (------+---)    ( A21  A22 )         (---------+---)    ( B21  B22 )
      ( 1  0 | 1 )                         ( 2  3  1 | 2 )
Chapter 1 - Matrices (13)
Then

  AB = ( A11 B11 + A12 B21    A11 B12 + A12 B22 )  =  ( C11  C12 )
       ( A21 B11 + A22 B21    A21 B12 + A22 B22 )     ( C21  C22 )

where

  C11 = A11 B11 + A12 B21 = ( 4  3  3 ) + ( 0  0  0 ) = ( 4  3  3 )
                            ( 7  5  5 )   ( 0  0  0 )   ( 7  5  5 )

  C12 = A11 B12 + A12 B22 = ( 0 ) + ( 0 ) = ( 0 )
                            ( 0 )   ( 0 )   ( 0 )

  C21 = A21 B11 + A22 B21 = [ 1  1  1 ] + [ 2  3  1 ] = [ 3  4  2 ]

  C22 = A21 B12 + A22 B22 = [ 0 ] + [ 2 ] = [ 2 ]

so that

  AB = ( 4  3  3  0 )
       ( 7  5  5  0 )
       ( 3  4  2  2 )
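The block computation can be replayed in code; the helpers and names below are ours:

```python
# Block (partitioned) product: each Cij is a sum of block products, and the
# assembled result must equal the ordinary product AB.
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A11, A12, A21, A22 = [[2, 1], [3, 2]], [[0], [0]], [[1, 0]], [[1]]
B11, B12 = [[1, 1, 1], [2, 1, 1]], [[0], [0]]
B21, B22 = [[2, 3, 1]], [[2]]

C11 = mat_add(mat_mul(A11, B11), mat_mul(A12, B21))
C12 = mat_add(mat_mul(A11, B12), mat_mul(A12, B22))
C21 = mat_add(mat_mul(A21, B11), mat_mul(A22, B21))
C22 = mat_add(mat_mul(A21, B12), mat_mul(A22, B22))

AB_blocks = ([r1 + r2 for r1, r2 in zip(C11, C12)] +
             [r1 + r2 for r1, r2 in zip(C21, C22)])

A = [[2, 1, 0], [3, 2, 0], [1, 0, 1]]
B = [[1, 1, 1, 0], [2, 1, 1, 0], [2, 3, 1, 2]]
assert AB_blocks == mat_mul(A, B) == [[4, 3, 3, 0], [7, 5, 5, 0], [3, 4, 2, 2]]
```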
Chapter 2 - Some Types of Matrices (1)
THE UPPER TRIANGULAR MATRIX, THE LOWER TRIANGULAR MATRIX
The square matrix whose elements aij = 0 for i > j is called an upper triangular matrix; a square matrix whose elements aij = 0 for i < j is called a lower triangular matrix. Thus:

  ( a11  a12  a13  ...  a1n )            ( a11   0    0   ...   0  )
  (  0   a22  a23  ...  a2n )            ( a21  a22   0   ...   0  )
  (  0    0   a33  ...  a3n )    and     ( a31  a32  a33  ...   0  )
  ( ...  ...  ...  ...  ... )            ( ...  ...  ...  ...  ... )
  (  0    0    0   ...  ann )            ( an1  an2  an3  ...  ann )

are upper and lower triangular, respectively.
Chapter 2 - Some Types of Matrices (2)
THE DIAGONAL MATRIX
The square matrix which is both upper triangular and lower triangular is called a diagonal matrix. Thus:

  D = ( a11   0    0   ...   0  )
      (  0   a22   0   ...   0  )
      (  0    0   a33  ...   0  )
      ( ...  ...  ...  ...  ... )
      (  0    0    0   ...  ann )

It will frequently be written as D = diag(a11, a22, a33, ..., ann).
Chapter 2 - Some Types of Matrices (3)
THE SCALAR MATRIX, THE IDENTITY MATRIX
If in the diagonal matrix a11 = a22 = ... = ann = k, the matrix is called a scalar matrix. If, in addition, k = 1, the matrix is called the identity matrix and is denoted In. Thus:

  ( k  0  0  ...  0 )
  ( 0  k  0  ...  0 )
  ( 0  0  k  ...  0 )        I2 = ( 1  0 )        I3 = ( 1  0  0 )
  ( ... ... ... ... )             ( 0  1 )             ( 0  1  0 )
  ( 0  0  0  ...  k )                                  ( 0  0  1 )

When the order is evident or immaterial, an identity matrix will be denoted I. Clearly, In + In + ... to p terms = p·In = diag(p, p, ..., p), and In·In·... to p factors = In.
For example, if

  A = ( 1  2  3 )
      ( 4  5  6 )

then I2·A = A·I3 = A.
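A quick check of the identity behaviour (helper names are ours):

```python
# I2 * A = A * I3 = A: the identity matrix leaves a product unchanged.
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

A = [[1, 2, 3], [4, 5, 6]]
assert mat_mul(identity(2), A) == A
assert mat_mul(A, identity(3)) == A
```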
Chapter 2 - Some Types of Matrices (4)
SPECIAL SQUARE MATRICES
If matrices A and B are square matrices such that AB = BA, the matrices A and B are called commutative matrices or are said to commute. It is a simple matter to show that any n-square matrix A commutes with itself and also with In (see Problem 2).
If matrices A and B are square matrices such that AB = -BA, the matrices A and B are said to anti-commute.
A matrix A for which A^(k+1) = A, where k is a positive integer, is called a periodic matrix. If k is the least positive integer for which A^(k+1) = A, then matrix A is said to be of period k.
If k = 1, so that A^2 = A, then matrix A is called an idempotent matrix (see Problems 3 and 4).
A matrix A for which A^p = 0, where p is a positive integer, is called a nilpotent matrix. If p is the least positive integer for which A^p = 0, then matrix A is said to be nilpotent of index p (see Problems 5 and 6).
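A quick numerical check of the two definitions; the idempotent matrix below is a standard textbook example, chosen by us:

```python
# A is idempotent (A^2 = A); N is nilpotent of index 2 (N^2 = 0, N != 0).
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[ 2, -2, -4],
     [-1,  3,  4],
     [ 1, -2, -3]]
assert mat_mul(A, A) == A                    # idempotent

N = [[0, 1],
     [0, 0]]
assert mat_mul(N, N) == [[0, 0], [0, 0]]     # nilpotent of index 2
```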
Chapter 2 - Some Types of Matrices (5)
THE INVERSE OF A MATRIX
If the matrices A and B are square matrices such that AB = BA = I, then matrix B is called the inverse of the matrix A and we write B = A^(-1) (matrix B equals matrix A inverse). The matrix B also has matrix A as its inverse and we may write A = B^(-1).
Example:

  ( 1  2  3 ) (  6 -2 -3 )   ( 1  0  0 )
  ( 1  3  3 ) ( -1  1  0 ) = ( 0  1  0 ) = I
  ( 1  2  4 ) ( -1  0  1 )   ( 0  0  1 )

Each matrix in the product is the inverse of the other.
We shall find later that not every matrix has an inverse. We can show here, however, that if matrix A has an inverse then that inverse is unique (see Problem 7).
If the matrices A and B are square matrices of the same order with inverses A^(-1) and B^(-1) respectively, then (AB)^(-1) = B^(-1) A^(-1), that is, the product in reverse order of these inverses (see Problem 8).
A matrix A such that A^2 = I is called an involutory matrix. An identity matrix is involutory. An involutory matrix is its own inverse (see Problem 9).
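The pair of mutually inverse matrices above can be verified directly (a sketch; the helper name is ours):

```python
# AB = BA = I for the example pair, so each matrix is the other's inverse.
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
A = [[1, 2, 3], [1, 3, 3], [1, 2, 4]]
B = [[6, -2, -3], [-1, 1, 0], [-1, 0, 1]]
assert mat_mul(A, B) == I3
assert mat_mul(B, A) == I3
```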
Chapter 2 - Some Types of Matrices (6)
THE TRANSPOSE OF A MATRIX
The matrix of order n×m obtained by interchanging the rows and columns of an m×n matrix A is called the transpose of matrix A and is denoted by A^T (A transpose). For example:

  A = ( 1  2  3 )        A^T = ( 1  4 )
      ( 4  5  6 )              ( 2  5 )
                               ( 3  6 )

Note that the element aij in the ith row and jth column of matrix A stands in the jth row and ith column of matrix A^T.
If A^T and B^T are the transposes respectively of the matrices A and B and k is a scalar, we have immediately (A^T)^T = A and (kA)^T = k·A^T.
The transpose of the sum of two matrices is the sum of their transposes: (A + B)^T = A^T + B^T.
The transpose of the product of two matrices is the product in reverse order of their transposes: (AB)^T = B^T A^T.
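A sketch of the transpose and of the reversal rule (AB)^T = B^T A^T (helper names are ours):

```python
# Transpose via zip, plus a numeric check of (AB)^T = B^T A^T.
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 2, 3], [4, 5, 6]]
assert transpose(A) == [[1, 4], [2, 5], [3, 6]]

B = [[1, 0], [0, 1], [1, 1]]
assert transpose(mat_mul(A, B)) == mat_mul(transpose(B), transpose(A))
```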
Chapter 2 - Some Types of Matrices (7)
SYMMETRIC MATRICES
A square matrix A such that A^T = A is called a symmetric matrix. Thus a square matrix A = [aij] is symmetric provided aij = aji for all values of i and j. For example, the matrix

  A = ( 1  2  3 )
      ( 2  4  5 )
      ( 3  5  6 )

is symmetric, and so also is the matrix kA for any scalar k.
If the matrix A is a square matrix, then A + A^T is a symmetric matrix.
Chapter 2 - Some Types of Matrices (8)
SKEW-SYMMETRIC MATRICES
A square matrix A such that A^T = -A is called a skew-symmetric matrix. Thus, a square matrix A is skew-symmetric provided aij = -aji for all values of i and j. Clearly, the diagonal elements are zeros.
For example, the matrix

  A = (  0  2  3 )
      ( -2  0  4 )
      ( -3 -4  0 )

is skew-symmetric, and so also is the matrix kA for any scalar k.
Every square matrix A can be written as the sum A = B + C of a symmetric matrix B = (A + A^T) / 2 and a skew-symmetric matrix C = (A - A^T) / 2.
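The decomposition can be verified on an arbitrary square matrix; the sample matrix and helper names below are ours, and exact fractions keep the halves exact:

```python
# Split A into symmetric B = (A + A^T)/2 and skew-symmetric C = (A - A^T)/2.
from fractions import Fraction

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 7, 3],
     [5, 2, 8],
     [1, 4, 6]]
AT = transpose(A)
n = len(A)
B = [[Fraction(A[i][j] + AT[i][j], 2) for j in range(n)] for i in range(n)]
C = [[Fraction(A[i][j] - AT[i][j], 2) for j in range(n)] for i in range(n)]

assert B == transpose(B)                                    # symmetric
assert C == [[-x for x in row] for row in transpose(C)]     # skew-symmetric
assert [[b + c for b, c in zip(rb, rc)] for rb, rc in zip(B, C)] == A
```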
Chapter 2 - Some Types of Matrices (9)
THE CONJUGATE OF A MATRIX (1)
Let a and b be two real numbers and let j = (-1)^(1/2); then z = a + bj is called a complex number. The complex numbers a + bj and a - bj are called conjugates, each being the conjugate of the other.
If z = a + bj, its conjugate is denoted by z* = (a + bj)* = a - bj.
If z1 = a + bj and z2 = z1* = a - bj, then z2* = (z1*)* = (a - bj)* = a + bj; that is, the conjugate of the conjugate of a complex number z is z itself.
If z1 = a + bj and z2 = c + dj, then:
z1 + z2 = (a + c) + (b + d)j and (z1 + z2)* = (a + c) - (b + d)j = (a - bj) + (c - dj) = z1* + z2*; that is, the conjugate of the sum of two complex numbers is the sum of their conjugates.
z1 z2 = (ac - bd) + (ad + bc)j and (z1 z2)* = (ac - bd) - (ad + bc)j = (a - bj)(c - dj) = z1* z2*; that is, the conjugate of the product of two complex numbers is the product of their conjugates.
Chapter 2 - Some Types of Matrices (10)
THE CONJUGATE OF A MATRIX (2)
When a matrix A has complex numbers as elements, the matrix obtained from A by replacing each element by its conjugate is called the conjugate of matrix A and is denoted by A* (A conjugate).
The transpose of the matrix A* is denoted (A*)^T and is called the conjugate transpose of A.
The transpose of the conjugate is equal to the conjugate of the transpose: (A*)^T = (A^T)*.
Chapter 2 - Some Types of Matrices (11)
HERMITIAN MATRICES
A square matrix A = [aij] such that (A^T)* = A is called Hermitian. Thus, matrix A is Hermitian provided aij = (aji)* for all values of i and j. Clearly, the diagonal elements of a Hermitian matrix are real numbers.
A square matrix A = [aij] such that (A^T)* = -A is called skew-Hermitian. Thus, matrix A is skew-Hermitian provided aij = -(aji)* for all values of i and j. Clearly, the diagonal elements of a skew-Hermitian matrix are either zero or pure imaginaries.
If matrix A is a square matrix, then A + (A^T)* is a Hermitian matrix and A - (A^T)* is a skew-Hermitian matrix.
Every square matrix A with complex elements can be written as the sum A = B + C of a Hermitian matrix B = (A + (A^T)*) / 2 and a skew-Hermitian matrix C = (A - (A^T)*) / 2.
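The analogous check for the Hermitian decomposition, using Python's built-in complex numbers (the sample matrix and helper name are ours):

```python
# B = (A + (A^T)*)/2 is Hermitian, C = (A - (A^T)*)/2 is skew-Hermitian.
def conj_transpose(A):
    return [[x.conjugate() for x in col] for col in zip(*A)]

A = [[1 + 1j, 2 - 1j],
     [3j,     2 + 0j]]
H = conj_transpose(A)
n = len(A)
B = [[(A[i][k] + H[i][k]) / 2 for k in range(n)] for i in range(n)]
C = [[(A[i][k] - H[i][k]) / 2 for k in range(n)] for i in range(n)]

assert conj_transpose(B) == B                                   # Hermitian
assert conj_transpose(C) == [[-x for x in row] for row in C]    # skew-Hermitian
assert [[b + c for b, c in zip(rb, rc)] for rb, rc in zip(B, C)] == A
```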
Chapter 2 - Some Types of Matrices (12)
DIRECT SUM
Let A1, A2, ..., As be square matrices of respective orders m1, m2, ..., ms. The generalization

  A = ( A1   0   ...   0  )
      (  0   A2  ...   0  )  = diag(A1, A2, ..., As)
      ( ...  ...  ...  ... )
      (  0   0   ...   As )

of the diagonal matrix is called the direct sum of the matrices Ai.
If A = diag(A1, A2, ..., As) and B = diag(B1, B2, ..., Bs), where Ai and Bi have the same order (i = 1, 2, ..., s), then AB = diag(A1 B1, A2 B2, ..., As Bs).
Chapter 2 - Some Types of Matrices (13)
DIRECT SUM (example)
Let:

  A1 = [ 2 ]        A2 = ( 1  2 )        A3 = ( 1  2  1 )
                         ( 3  4 )             ( 2  0  3 )
                                              ( 4  1  2 )

The direct sum of the matrices A1, A2, and A3 is

  diag(A1, A2, A3) = ( 2  0  0  0  0  0 )
                     ( 0  1  2  0  0  0 )
                     ( 0  3  4  0  0  0 )
                     ( 0  0  0  1  2  1 )
                     ( 0  0  0  2  0  3 )
                     ( 0  0  0  4  1  2 )
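A direct-sum builder reproducing the example (the helper name is ours):

```python
# Build diag(A1, A2, ..., As) by placing each square block on the diagonal.
def direct_sum(*blocks):
    n = sum(len(b) for b in blocks)
    out = [[0] * n for _ in range(n)]
    offset = 0
    for b in blocks:
        for i, row in enumerate(b):
            for j, x in enumerate(row):
                out[offset + i][offset + j] = x
        offset += len(b)
    return out

A1 = [[2]]
A2 = [[1, 2], [3, 4]]
A3 = [[1, 2, 1], [2, 0, 3], [4, 1, 2]]
assert direct_sum(A1, A2, A3) == [
    [2, 0, 0, 0, 0, 0],
    [0, 1, 2, 0, 0, 0],
    [0, 3, 4, 0, 0, 0],
    [0, 0, 0, 1, 2, 1],
    [0, 0, 0, 2, 0, 3],
    [0, 0, 0, 4, 1, 2],
]
```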
Chapter 3 - Determinant of a square matrix (1)
PERMUTATIONS
Consider the 3! = 6 permutations of the integers 1, 2, 3 taken together
(3.1) 123 132 213 231 312 321
and eight of the 4! = 24 permutations of the integers 1, 2, 3, 4 taken together
(3.2) 1234 2134 3124 4123 1324 2314 3214 4213
If in a given permutation a larger integer precedes a smaller one, we say that there is an inversion. If in a given permutation the number of inversions is even (odd), the permutation is called even (odd).
For example, in (3.1) the permutation 123 is even since there is no inversion, the permutation 132 is odd since in it 3 precedes 2, and the permutation 312 is even since in it 3 precedes 1 and 3 precedes 2.
In (3.2) the permutation 4213 is even since in it 4 precedes 2, 4 precedes 1, 4 precedes 3, and 2 precedes 1.
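Counting inversions mechanically reproduces the parities quoted above (the helper name is ours):

```python
# Count inversions: pairs standing in decreasing order.
# Even count -> even permutation; odd count -> odd permutation.
def inversions(p):
    return sum(1 for i in range(len(p)) for k in range(i + 1, len(p))
               if p[i] > p[k])

assert inversions((1, 2, 3)) == 0      # 123 is even
assert inversions((1, 3, 2)) == 1      # 132 is odd
assert inversions((3, 1, 2)) == 2      # 312 is even
assert inversions((4, 2, 1, 3)) == 4   # 4213 is even
```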
Chapter 3 - Determinant of a square matrix (2)
THE DETERMINANT OF A SQUARE MATRIX
Consider the n-square matrix

  (3.3)  ( a11  a12  a13  ...  a1n )
         ( a21  a22  a23  ...  a2n )
         ( ...  ...  ...  ...  ... )
         ( an1  an2  an3  ...  ann )

and a product

  (3.4)  a1j1 a2j2 a3j3 ... anjn

of n of its elements, selected so that one and only one element comes from any row and one and only one element comes from any column. In (3.4), as a matter of convenience, the factors have been arranged so that the sequence of the first subscripts is the natural order 1, 2, ..., n; the sequence j1, j2, ..., jn of second subscripts is then some one of the n! permutations of the integers 1, 2, ..., n.
Chapter 3 - Determinant of a square matrix (3)
For a given permutation j1, j2, ..., jn of the second subscripts, define ε(j1 j2 ... jn) = +1 or -1 according as the permutation is even or odd, and form the signed product

  (3.5)  ε(j1 j2 ... jn) a1j1 a2j2 a3j3 ... anjn

By the determinant of matrix A, denoted by |A|, is meant the sum of all the different signed products of the form (3.5), called terms of |A|, which can be formed from the elements of matrix A; thus:

  (3.6)  |A| = Σ ε(j1 j2 ... jn) a1j1 a2j2 a3j3 ... anjn

where the summation extends over the p = n! permutations j1, j2, ..., jn of the integers 1, 2, ..., n.
The determinant of a square matrix of order n is called a determinant of order n.
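Definition (3.6) can be coded verbatim, summing signed products over all n! permutations (a brute-force sketch, only practical for small n; names are ours):

```python
# Determinant straight from definition (3.6).
from itertools import permutations
from math import prod

def inversions(p):
    return sum(1 for i in range(len(p)) for k in range(i + 1, len(p))
               if p[i] > p[k])

def det(A):
    n = len(A)
    return sum((-1) ** inversions(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

assert det([[1, 2], [3, 4]]) == 1 * 4 - 2 * 3 == -2
assert det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]) == -3
```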
Chapter 3 - Determinant of a square matrix (4)
DETERMINANTS OF ORDER TWO AND THREE
From (3.6) we have for n = 2 and n = 3:

  (3.7)  | a11  a12 |  =  ε(12) a11 a22 + ε(21) a12 a21  =  a11 a22 - a12 a21
         | a21  a22 |

  (3.8)  | a11  a12  a13 |
         | a21  a22  a23 |  =  ε(123) a11 a22 a33 + ε(132) a11 a23 a32 + ε(213) a12 a21 a33 +
         | a31  a32  a33 |     + ε(231) a12 a23 a31 + ε(312) a13 a21 a32 + ε(321) a13 a22 a31 =
                            =  a11 a22 a33 - a11 a23 a32 - a12 a21 a33 +
                               + a12 a23 a31 + a13 a21 a32 - a13 a22 a31 =
                            =  a11 (a22 a33 - a23 a32) - a12 (a21 a33 - a23 a31) +
                               + a13 (a21 a32 - a22 a31)
Chapter 3 - Determinant of a square matrix (5)
  a11 (a22 a33 - a23 a32) - a12 (a21 a33 - a23 a31) + a13 (a21 a32 - a22 a31) =

    = a11 | a22  a23 |  -  a12 | a21  a23 |  +  a13 | a21  a22 |
          | a32  a33 |         | a31  a33 |         | a31  a32 |
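Formulas (3.7) and (3.8) written out as functions (names are ours):

```python
# Order-two and order-three determinant formulas, written out literally.
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def det3(M):
    (a11, a12, a13), (a21, a22, a23), (a31, a32, a33) = M
    return (a11 * (a22 * a33 - a23 * a32)
            - a12 * (a21 * a33 - a23 * a31)
            + a13 * (a21 * a32 - a22 * a31))

assert det2([[1, 2], [3, 4]]) == -2
assert det3([[1, 2, 3], [4, 5, 6], [7, 8, 10]]) == -3
```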
Chapter 3 - Determinant of a square matrix (6)
PROPERTIES OF DETERMINANTS (1)
Throughout this section, matrix A is the square matrix whose determinant |A| is given by (3.6).
Suppose that every element of the ith row (every element of the jth column) is zero. Since every term of (3.6) contains one element of this row (column), every term in the sum is zero and we have:
I. If every element of a row (column) of a square matrix A is zero, then |A| = 0.
Consider the transpose A^T of A. It can be seen readily that every term of (3.6) can be obtained from A^T by choosing properly the factors in order from the first, second, ..., column. Thus:
II. If matrix A is a square matrix, then |A^T| = |A|; that is, for every theorem concerning the rows of a determinant there is a corresponding theorem concerning the columns and vice versa.
Chapter 3 - Determinant of a square matrix (7)
PROPERTIES OF DETERMINANTS (2)
Denote by B the matrix obtained by multiplying each of the elements of the ith row of matrix A by the scalar k. Since each term in the expansion of |B| contains one and only one element from its ith row, that is, one and only one element having k as a factor,

  |B| = k Σ ε(j1 j2 ... jn) a1j1 a2j2 a3j3 ... anjn = k |A|

Thus,
III. If every element of a row (column) of a determinant |A| is multiplied by a scalar k, the determinant is multiplied by k; if every element of a row (column) of a determinant |A| has k as a factor, then k may be factored from |A|.
For example:

  | k a11  a12  a13 |        | a11  a12  a13 |     | a11    a12    a13   |
  | k a21  a22  a23 |  =  k  | a21  a22  a23 |  =  | a21    a22    a23   |
  | k a31  a32  a33 |        | a31  a32  a33 |     | k a31  k a32  k a33 |
Chapter 3 - Determinant of a square matrix (8)
PROPERTIES OF DETERMINANTS (3)
Let matrix B denote the matrix obtained from matrix A by interchanging its ith and its (i+1)st rows. Each product in (3.6) of |A| is a product of |B| and vice versa; hence, except possibly for signs, (3.6) is the expansion of |B|. In counting the inversions in subscripts of any term of (3.6) as a term of |B|, i before i+1 in the row subscripts is an inversion; thus, each product of (3.6) with its sign changed is a term of |B| and |B| = -|A|. Hence:
IV. If matrix B is obtained from matrix A by interchanging any two adjacent rows (columns), then |B| = -|A|.
As a consequence of Theorem IV, we have:
V. If matrix B is obtained from matrix A by interchanging any two of its rows (columns), then |B| = -|A|.
VI. If matrix B is obtained from matrix A by carrying its ith row (column) over p rows (columns), then |B| = (-1)^p |A|.
VII. If two rows (columns) of matrix A are identical, then |A| = 0.
Chapter 3 - Determinant of a square matrix (9)
PROPERTIES OF DETERMINANTS (4)
Suppose that each element of the first row of matrix A is expressed as a binomial a1j = b1j + c1j (j = 1, 2, ..., n). Then:

  |A| = Σ ε(j1 j2 ... jn) (b1j1 + c1j1) a2j2 a3j3 ... anjn =
      = Σ ε(j1 j2 ... jn) b1j1 a2j2 a3j3 ... anjn + Σ ε(j1 j2 ... jn) c1j1 a2j2 a3j3 ... anjn =

        | b11  b12  b13  ...  b1n |     | c11  c12  c13  ...  c1n |
      = | a21  a22  a23  ...  a2n |  +  | a21  a22  a23  ...  a2n |
        | ...  ...  ...  ...  ... |     | ...  ...  ...  ...  ... |
        | an1  an2  an3  ...  ann |     | an1  an2  an3  ...  ann |

VIII. If every element of the ith row (column) of matrix A is the sum of p terms, then |A| can be expressed as the sum of p determinants. The elements in the ith rows (columns) of these p determinants are respectively the first, second, ..., pth terms of the sums and all other rows (columns) are those of matrix A.
Chapter 3 - Determinant of a square matrix (10)
PROPERTIES OF DETERMINANTS (5)
The most useful theorem is
IX. If matrix B is obtained from matrix A by adding to the elements of its ith row (column) a scalar multiple of the corresponding elements of another row (column), then |B| = |A|.
For example:

  | a11 + k a13  a12  a13 |     | a11  a12  a13 |        | a13  a12  a13 |     | a11  a12  a13 |
  | a21 + k a23  a22  a23 |  =  | a21  a22  a23 |  +  k  | a23  a22  a23 |  =  | a21  a22  a23 |
  | a31 + k a33  a32  a33 |     | a31  a32  a33 |        | a33  a32  a33 |     | a31  a32  a33 |

since the middle determinant has two identical columns and so, by Theorem VII, is zero.
Chapter 3 - Determinant of a square matrix (11)
FIRST MINORS AND COFACTORS (1)
Let matrix A be the n-square matrix (3.3), whose determinant |A| is given by (3.6). When from matrix A the elements of its ith row and its jth column are removed, the determinant of the remaining (n-1)-square matrix is called a first minor of matrix A or of |A| and is denoted by |Mij|. More frequently, it is called the minor of aij. The signed minor (-1)^(i+j) |Mij| is called the cofactor of aij and is denoted by αij.
For example, for

  |A| = | a11  a12  a13 |
        | a21  a22  a23 |
        | a31  a32  a33 |

we have

  |M11| = | a22  a23 |     |M12| = | a21  a23 |     |M13| = | a21  a22 |
          | a32  a33 |             | a31  a33 |             | a31  a32 |

  α11 = (-1)^(1+1) |M11| = |M11|     α12 = (-1)^(1+2) |M12| = -|M12|     α13 = (-1)^(1+3) |M13| = |M13|

and, by (3.8),

  |A| = a11 |M11| - a12 |M12| + a13 |M13| = a11 α11 + a12 α12 + a13 α13
Chapter 3 - Determinant of a square matrix (12)
FIRST MINORS AND COFACTORS (2)
X. The value of a determinant |A|, where A is the matrix of (3.3), is the sum of the products obtained by multiplying each element of a row (column) by its cofactor, i.e.,

  (3.9)   |A| = ai1 αi1 + ai2 αi2 + ... + ain αin = Σ (k=1..n) aik αik    (i = 1, 2, ..., n)

  (3.10)  |A| = a1j α1j + a2j α2j + ... + anj αnj = Σ (k=1..n) akj αkj    (j = 1, 2, ..., n)
Chapter 3 - Determinant of a square matrix (13)
FIRST MINORS AND COFACTORS (3)
XI. The sum of the products formed by multiplying the elements of a row (column) of an n-square matrix A by the corresponding cofactors of another row (column) of A is zero.
For example, with

  |A| = | a11  a12  a13 |
        | a21  a22  a23 |
        | a31  a32  a33 |

we have, by Theorem X,

  |A| = a11 α11 + a12 α12 + a13 α13 = a21 α21 + a22 α22 + a23 α23 = a31 α31 + a32 α32 + a33 α33 =
      = a11 α11 + a21 α21 + a31 α31 = a12 α12 + a22 α22 + a32 α32 = a13 α13 + a23 α23 + a33 α33

while, by Theorem XI, for example,

  a11 α21 + a12 α22 + a13 α23 = 0        a11 α12 + a21 α22 + a31 α32 = 0
  a11 α31 + a12 α32 + a13 α33 = 0        a11 α13 + a21 α23 + a31 α33 = 0
  a21 α31 + a22 α32 + a23 α33 = 0        a12 α13 + a22 α23 + a32 α33 = 0
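Theorems X and XI can be checked numerically on a sample matrix (the example and helper names are our own):

```python
# Expansion along any row gives |A|; the elements of one row multiplied by
# the cofactors of another row sum to zero.
def det(M):  # recursive first-row expansion
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def cofactor(M, i, j):
    minor = [r[:j] + r[j + 1:] for k, r in enumerate(M) if k != i]
    return (-1) ** (i + j) * det(minor)

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
for i in range(3):
    assert sum(A[i][k] * cofactor(A, i, k) for k in range(3)) == det(A)   # Theorem X
assert sum(A[0][k] * cofactor(A, 1, k) for k in range(3)) == 0            # Theorem XI
```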
Chapter 3 - Determinant of a square matrix (14)
MINORS AND ALGEBRAIC COMPLEMENTS (1)
Consider the matrix (3.3). Let i1, i2, ..., im, arranged in order of magnitude, be m (1 ≤ m < n) of the row indices 1, 2, ..., n, and let j1, j2, ..., jm, arranged in order of magnitude, be m of the column indices. Let the remaining row and column indices, arranged in order of magnitude, be respectively i(m+1), i(m+2), ..., in and j(m+1), j(m+2), ..., jn.
Such a separation of the row and column indices determines uniquely two matrices, called sub-matrices of matrix A:

  (3.11)  A(i1, i2, ..., im | j1, j2, ..., jm) =
          ( a_i1,j1  a_i1,j2  ...  a_i1,jm )
          ( a_i2,j1  a_i2,j2  ...  a_i2,jm )
          (   ...      ...    ...    ...   )
          ( a_im,j1  a_im,j2  ...  a_im,jm )

  (3.12)  A(i(m+1), ..., in | j(m+1), ..., jn), formed in the same way from the remaining rows and columns.
Chapter 3 - Determinant of a square matrix (15)
MINORS AND ALGEBRAIC COMPLEMENTS (2)
The determinant of each of these sub-matrices is called a minor of matrix A, and the pair of minors

  |A(i1, i2, ..., im | j1, j2, ..., jm)|   and   |A(i(m+1), ..., in | j(m+1), ..., jn)|

are called complementary minors of matrix A, each being the complement of the other.
For the 5-square matrix A = [aij],

  |A(2,5 | 1,3)| = | a21  a23 |        |A(1,3,4 | 2,4,5)| = | a12  a14  a15 |
                   | a51  a53 |                             | a32  a34  a35 |
                                                            | a42  a44  a45 |

are a pair of complementary minors.
Chapter 3 - Determinant of a square matrix (16)
MINORS AND ALGEBRAIC COMPLEMENTS (3)
Let

  (3.13)  p = i1 + i2 + ... + im + j1 + j2 + ... + jm

and

  (3.14)  q = i(m+1) + ... + in + j(m+1) + ... + jn

The signed minor (-1)^p |A(i1, ..., im | j1, ..., jm)| is called the algebraic complement of |A(i(m+1), ..., in | j(m+1), ..., jn)|, and the signed minor (-1)^q |A(i(m+1), ..., in | j(m+1), ..., jn)| is called the algebraic complement of |A(i1, ..., im | j1, ..., jm)|.
Note that p + q = n(n + 1) is always even, so (-1)^p = (-1)^q.
Chapter 3 - Determinant of a square matrix (17)
MINORS AND ALGEBRAIC COMPLEMENTS (4)
Example: for the minors of the previous example,

  (-1)^(1+3+4+2+4+5) |A(1,3,4 | 2,4,5)| = -|A(1,3,4 | 2,4,5)|

is the algebraic complement of |A(2,5 | 1,3)|, and

  (-1)^(2+5+1+3) |A(2,5 | 1,3)| = -|A(2,5 | 1,3)|

is the algebraic complement of |A(1,3,4 | 2,4,5)|.
Chapter 3 - Determinant of a square matrix (18)
MINORS AND ALGEBRAIC COMPLEMENTS (5)
When m = 1, the sub-matrix (3.11) becomes A(i1 | j1) = [a_i1,j1], a single element of A. Its complementary minor is

  |A(i2, i3, ..., in | j2, j3, ..., jn)| = |M_i1,j1|

the minor of the element a_i1,j1, and its algebraic complement is the cofactor α_i1,j1.
Chapter 3 - Determinant of a square matrix (19)
MINORS AND ALGEBRAIC COMPLEMENTS (6)
A minor of matrix A whose diagonal elements are also diagonal elements of matrix A is called a principal minor of matrix A. The complement of a principal minor of matrix A is also a principal minor of A; the algebraic complement of a principal minor is its complement, since for a principal minor the row indices equal the column indices and p is therefore even.
Chapter 4 - Evaluation of Determinants (1)
PROCEDURES FOR EVALUATING
In Chapter 3, procedures for evaluating determinants of orders two and three were presented, together with theorems which allow easier evaluation of determinants.
For determinants of higher orders, the general procedure is to replace, by repeated use of Theorem IX of Chapter 3, the given determinant |A| by another, |B|, having the property that all elements, except one, in some row (column) are zero. If b_pq is this non-zero element and α_pq is its cofactor, then

  |A| = |B| = b_pq α_pq = (-1)^(p+q) b_pq |M_pq|

Then the minor of b_pq is treated in similar fashion and the procedure is continued until a determinant of order two or three is obtained.
Chapter 4 - Evaluation of Determinants (2)
  |A| = | 2  3 -2  4 |
        | 3 -2  1  2 |
        | 3  2  3  4 |
        |-2  4  0  5 |

Add 2 times the second row to the first row and -3 times the second row to the third row (Theorem IX); the third column then has a single non-zero element:

  |A| = |  8 -1  0  8 |
        |  3 -2  1  2 |  =  (-1)^(2+3) · 1 · |  8 -1  8 |
        | -6  8  0 -2 |                      | -6  8 -2 |
        | -2  4  0  5 |                      | -2  4  5 |

Add 4 times the third row to the first row and -3 times the third row to the second row, then expand along the first column:

  |A| = - |  0  15  28 |  =  - (-2) | 15  28 |  =  2 (15·(-17) - 28·(-4))  =  2 (-255 + 112)  =  -286
          |  0  -4 -17 |           | -4 -17 |
          | -2   4   5 |
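The same strategy, using Theorem IX to create zeros and then expanding, is what Gaussian elimination automates. A sketch with exact arithmetic (the helper name is ours):

```python
# Determinant by elimination: Theorem IX leaves |A| unchanged under row
# combinations; a row swap (Theorem V) flips the sign.
from fractions import Fraction

def det_elimination(M):
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for c in range(n):
        pivot = next((r for r in range(c, n) if M[r][c] != 0), None)
        if pivot is None:
            return 0
        if pivot != c:
            M[c], M[pivot] = M[pivot], M[c]
            sign = -sign
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return sign * d

A = [[2, 3, -2, 4], [3, -2, 1, 2], [3, 2, 3, 4], [-2, 4, 0, 5]]
assert det_elimination(A) == -286
```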
Chapter 4 - Evaluation of Determinants (3)
The same procedure applies when the elements are decimal numbers; carrying three decimal places throughout, the result is approximate:

  | 0.921  0.185  0.476  0.614 |
  | 0.782  0.157  0.527  0.138 |
  | 0.872  0.484  0.637  0.799 |  ≈  -0.037
  | 0.312  0.555  0.841  0.448 |

Factor 0.921 from the first row, clear the remainder of the first column by Theorem IX, expand along that column, and repeat the process on the resulting determinants of order three and then of order two.
Chapter 4 - Evaluation of Determinants (4)
THE LAPLACE EXPANSION
The expansion of a determinant |A| of order n along a row (column) is a special case of the Laplace expansion. Instead of selecting one row of |A|, let m rows, numbered i1, i2, ..., im when arranged in order of magnitude, be selected. From these m rows,

p = n(n−1)(n−2)···(n−m+1) / m!

minors |A^(i1,i2,...,im)_(j1,j2,...,jm)| can be formed by making all possible selections of m columns from the n columns. Using these minors and their algebraic complements, we have the Laplace expansion

(4.1)  |A| = Σ (−1)^s |A^(i1,i2,...,im)_(j1,j2,...,jm)| · |A^(i(m+1),...,in)_(j(m+1),...,jn)|

where s = i1 + i2 + ... + im + j1 + j2 + ... + jm and the summation extends over the p selections of the column indices taken m at a time.
Chapter 4 - Evaluation of Determinants (4)
Example: evaluate |A| using minors of the first two rows, where

|A| = |  2   3  −2   4 |
      |  3  −2   1   2 |
      |  3   2   3   4 |
      | −2   4   0   5 |

By (4.1),

|A| = + |A^(1,2)_(1,2)| |A^(3,4)_(3,4)| − |A^(1,2)_(1,3)| |A^(3,4)_(2,4)| + |A^(1,2)_(1,4)| |A^(3,4)_(2,3)|
      + |A^(1,2)_(2,3)| |A^(3,4)_(1,4)| − |A^(1,2)_(2,4)| |A^(3,4)_(1,3)| + |A^(1,2)_(3,4)| |A^(3,4)_(1,2)|

    = |2 3; 3 −2|·|3 4; 0 5| − |2 −2; 3 1|·|2 4; 4 5| + |2 4; 3 2|·|2 3; 4 0|
      + |3 −2; −2 1|·|3 4; −2 5| − |3 4; −2 2|·|3 3; −2 0| + |−2 4; 1 2|·|3 2; −2 4|

    = (−13)(15) − (8)(−6) + (−8)(−12) + (−1)(23) − (14)(6) + (−8)(16)

    = −195 + 48 + 96 − 23 − 84 − 128 = −286
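The Laplace expansion (4.1) along the first two rows can be sketched directly from the formula. This is a minimal illustration (numpy and the standard library `itertools` assumed; matrix entries as reconstructed in the example above):

```python
import numpy as np
from itertools import combinations

A = np.array([[2, 3, -2, 4],
              [3, -2, 1, 2],
              [3, 2, 3, 4],
              [-2, 4, 0, 5]], dtype=float)

def laplace_two_rows(M):
    """Laplace expansion (4.1) with m = 2, expanding along the first two rows."""
    n = M.shape[0]
    rows = (0, 1)
    comp_rows = [i for i in range(n) if i not in rows]
    total = 0.0
    for cols in combinations(range(n), 2):
        comp_cols = [j for j in range(n) if j not in cols]
        minor = np.linalg.det(M[np.ix_(rows, cols)])
        complement = np.linalg.det(M[np.ix_(comp_rows, comp_cols)])
        # s = sum of the 1-based row and column indices of the minor
        s = sum(r + 1 for r in rows) + sum(c + 1 for c in cols)
        total += (-1) ** s * minor * complement
    return total

print(round(laplace_two_rows(A)))   # -286
```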
Chapter 4 - Evaluation of Determinants (5)
DETERMINANT OF A PRODUCT
If A and B are two n-square matrices, then

(4.2)  |AB| = |A| · |B|

EXPANSION ALONG THE FIRST ROW AND COLUMN
If A is an n-square matrix, then

(4.3)  |A| = a11 α11 − Σ(i=2..n) Σ(j=2..n) a_(i1) a_(1j) α_(ij)

where α11 is the cofactor of a11 and α_(ij) is the algebraic complement of the minor

| a11   a1j |
| ai1   aij |

of |A|.
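Property (4.2) is easy to confirm numerically. A quick check (numpy assumed available; the random matrices are only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 4)).astype(float)
B = rng.integers(-3, 4, size=(4, 4)).astype(float)

# (4.2): the determinant of a product is the product of the determinants.
lhs = np.linalg.det(A @ B)
rhs = np.linalg.det(A) * np.linalg.det(B)
print(np.isclose(lhs, rhs))   # True
```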
Chapter 4 - Evaluation of Determinants (5)
DERIVATIVE OF A DETERMINANT
Let the elements of the n-square matrix A be differentiable functions of a variable x. Then the derivative

d|A| / dx

of |A| with respect to x is the sum of the n determinants obtained by replacing, in all possible ways, the elements of one row (column) of |A| by their derivatives with respect to x.

Example: for

|A| = | x   x² |  = 2x² − x² = x²,
      | 1   2x |

d|A|/dx = | 1  2x |  +  | x  x² |  = (2x − 2x) + (2x − 0) = 2x
          | 1  2x |     | 0   2 |

which agrees with d(x²)/dx = 2x.
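The row-replacement rule above can be checked against a finite-difference approximation. A minimal sketch (numpy assumed; the matrix function A(x) below is an arbitrary illustration, not from the original text):

```python
import numpy as np

def A_of(x):
    return np.array([[x, x**2, 1.0],
                     [2.0, x, x**3],
                     [x**2, 1.0, x]])

def dA_of(x):  # element-wise derivative of A(x)
    return np.array([[1.0, 2*x, 0.0],
                     [0.0, 1.0, 3*x**2],
                     [2*x, 0.0, 1.0]])

x = 0.7
# Sum of n determinants, each with one row replaced by its derivatives.
rule = 0.0
for i in range(3):
    M = A_of(x).copy()
    M[i] = dA_of(x)[i]
    rule += np.linalg.det(M)

# Central finite difference of det(A(x)) for comparison.
h = 1e-6
numeric = (np.linalg.det(A_of(x + h)) - np.linalg.det(A_of(x - h))) / (2 * h)
print(np.isclose(rule, numeric, atol=1e-4))   # True
```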
Chapter 5 - Equivalence (1)
THE RANK OF A MATRIX
A non-zero matrix A is said to have rank r if at least one of its r-square minors is different from zero while every (r+1)-square minor, if any, is zero. A zero matrix is said to have rank 0.

Example 1: The rank of

A = | 1  2  3 |   is 2, since |A| = 0 while the minor  B = | 1  2 |  has |B| = −1 ≠ 0.
    | 2  3  4 |                                            | 2  3 |
    | 3  5  7 |

An n-square matrix A is called non-singular if its rank is r = n, that is, if |A| ≠ 0. Otherwise, A is called singular. The matrix of the previous example is singular.
From |AB| = |A|·|B| it follows:
The product of two or more non-singular n-square matrices is non-singular; the product of two or more n-square matrices is singular if and only if at least one of the matrices is singular.
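The definition of rank by minors can be applied literally, if inefficiently, for small matrices. A sketch using the matrix of Example 1 (numpy and `itertools` assumed available):

```python
import numpy as np
from itertools import combinations

A = np.array([[1, 2, 3],
              [2, 3, 4],
              [3, 5, 7]], dtype=float)

def rank_by_minors(M):
    """Rank as defined above: the largest r with a non-zero r-square minor."""
    m, n = M.shape
    for r in range(min(m, n), 0, -1):
        for rows in combinations(range(m), r):
            for cols in combinations(range(n), r):
                if abs(np.linalg.det(M[np.ix_(rows, cols)])) > 1e-9:
                    return r
    return 0

print(rank_by_minors(A))          # 2
print(np.linalg.matrix_rank(A))   # 2
```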
Chapter 5 - Equivalence (2)
ELEMENTARY TRANSFORMATIONS
The following operations, called elementary transformations, on a matrix do not change either its order or its rank:
(1) The interchange of the i-th and j-th rows, denoted H_ij;
    the interchange of the i-th and j-th columns, denoted K_ij.
(2) The multiplication of every element of the i-th row by a non-zero scalar k, denoted H_i(k);
    the multiplication of every element of the i-th column by a non-zero scalar k, denoted K_i(k).
(3) The addition to the elements of the i-th row of k, a scalar, times the corresponding elements of the j-th row, denoted H_ij(k);
    the addition to the elements of the i-th column of k, a scalar, times the corresponding elements of the j-th column, denoted K_ij(k).
The transformations H are called elementary row transformations; the transformations K are called elementary column transformations.
Chapter 5 - Equivalence (3)
THE INVERSE OF AN ELEMENTARY TRANSFORMATION
The inverse of an elementary transformation is an operation which undoes the effect of the elementary transformation; that is, after matrix A has been subjected to one of the elementary transformations and then the resulting matrix has been subjected to the inverse of that elementary transformation, the final result is the initial matrix A.

Example 2:

A = | 1  2  3 |        B = | 1  2  3 |
    | 4  5  6 |            | 2  1  0 |
    | 7  8  9 |            | 7  8  9 |

The effect of the elementary row transformation H21(−2) on A is to produce B; the effect of H21(+2) on B is to restore A. Thus H21(−2) and H21(+2) are inverse elementary row transformations.
Chapter 5 - Equivalence (5)
The inverse elementary transformation is an elementary transformation of the same type.
THE INVERSE OF AN ELEMENTARY TRANSFORMATION (2)
The inverse elementary transformations are

(1')  H_ij⁻¹ = H_ij                 K_ij⁻¹ = K_ij
(2')  H_i⁻¹(k) = H_i(1/k)           K_i⁻¹(k) = K_i(1/k)
(3')  H_ij⁻¹(k) = H_ij(−k)          K_ij⁻¹(k) = K_ij(−k)
Chapter 5 - Equivalence (6)
EQUIVALENT MATRICES
Two matrices are called equivalent, A ~ B, if one can be obtained from the other by a sequence of elementary transformations. Equivalent matrices have the same order and the same rank.

Example 3: Applying in turn the elementary transformations H21(−2), H31(1), H32(−1):

A = |  1   2  −1   4 | ~ |  1   2  −1   4 | ~ | 1  2  −1   4 | ~ | 1  2  −1   4 | = B
    |  2   4   3   5 |   |  0   0   5  −3 |   | 0  0   5  −3 |   | 0  0   5  −3 |
    | −1  −2   6  −7 |   | −1  −2   6  −7 |   | 0  0   5  −3 |   | 0  0   0   0 |

Since all 3-square minors of B are zero while the minor

C = | −1   4 |   has |C| = 3 − 20 = −17 ≠ 0,
    |  5  −3 |

the rank of B is 2; hence the rank of A is 2. This procedure of obtaining from A an equivalent matrix B, from which the rank is evident by inspection, is to be compared with that of computing the various minors of A.
Chapter 5 - Equivalence (7)
ROW EQUIVALENCE (1)
If a matrix A is reduced to matrix B by use of elementary row transformations alone, B is said to be row equivalent to A, and conversely. The matrices A and B of Example 3 are row equivalent.
Any non-zero matrix A of rank r is row equivalent to a canonical matrix C in which:
a) one or more elements of each of the first r rows are non-zero, while the other rows have only zero elements;
b) in the i-th row, (i = 1, 2, ..., r), the first non-zero element is 1; let the column in which this element stands be numbered j_i;
c) j_1 < j_2 < ... < j_r;
d) the only non-zero element in the column numbered j_i, (i = 1, 2, ..., r), is the element 1 of the i-th row.
Chapter 5 - Equivalence (8)
ROW EQUIVALENCE (2)
To reduce matrix A to matrix C, suppose j_1 is the number of the first non-zero column of A.
(i_1) If a_(1,j1) ≠ 0, use H_1(1/a_(1,j1)) to reduce it to 1, when necessary.
(i_2) If a_(1,j1) = 0 but a_(p,j1) ≠ 0, use H_1p and proceed as in (i_1).
(ii) Use row transformations of type (3) with appropriate multiples of the first row to obtain zeros elsewhere in the column numbered j_1.
If non-zero elements of the resulting matrix B occur only in the first row, B = C. Otherwise, suppose j_2 is the number of the first column in which this does not occur. If b_(q,j2) ≠ 0, use H_2(1/b_(q,j2)) as in (i_1); then, as in (ii), clear the column numbered j_2 of all other non-zero elements.
If non-zero elements of the resulting matrix occur only in the first two rows, we have reached C. Otherwise, the procedure is repeated until C is reached.
Chapter 5 - Equivalence (9)
ROW EQUIVALENCE (3)
Example 4: The sequence of row transformations H21(−2), H31(1), H2(1/5), H12(1), H32(−5) applied to the matrix A of Example 3 yields

A = |  1   2  −1   4 | ~ | 1  2  −1    4 | ~ | 1  2  −1     4 | ~ | 1  2  0  17/5 | = C
    |  2   4   3   5 |   | 0  0   5   −3 |   | 0  0   1  −3/5 |   | 0  0  1  −3/5 |
    | −1  −2   6  −7 |   | 0  0   5   −3 |   | 0  0   5    −3 |   | 0  0  0     0 |
Chapter 5 - Equivalence (10)
THE NORMAL FORM OF A MATRIX
By means of elementary transformations, any matrix A of rank r > 0 can be reduced to one of the forms

(5.1)   I_r,   [ I_r  0 ],   | I_r |,   | I_r  0 |
                             |  0  |    |  0   0 |

called its normal form. A zero matrix is its own normal form.
Since both row and column transformations may be used here, the element 1 of the first row obtained in the section above can be moved into the first column. Then both the first row and the first column can be cleared of other non-zero elements. Similarly, the element 1 of the second row can be brought into the second column, and so on.
For example, the sequence H21(−2), H31(1), K21(−2), K31(1), K41(−4), K23, K2(1/5), H32(−1), K42(3) applied to the matrix A of Example 3 yields the normal form

| 1  0  0  0 |  =  | I_2  0 |
| 0  1  0  0 |     |  0   0 |
| 0  0  0  0 |
Chapter 5 - Equivalence (12)
ELEMENTARY MATRICES (1)
The matrix which results when an elementary row (column) transformation is applied to the identity matrix I_n is called an elementary row (column) matrix. Here, an elementary row (column) matrix will be denoted by the symbol introduced to denote the elementary row (column) transformation which produced the matrix.
Examples of elementary row (column) matrices obtained from

I_3 = | 1  0  0 |
      | 0  1  0 |
      | 0  0  1 |

are:

H12 = K12 = | 0  1  0 |        H3(k) = K3(k) = | 1  0  0 |
            | 1  0  0 |                        | 0  1  0 |
            | 0  0  1 |                        | 0  0  k |

H23(k) = K32(k) = | 1  0  0 |        H13 = K13 = | 0  0  1 |
                  | 0  1  k |                    | 0  1  0 |
                  | 0  0  1 |                    | 1  0  0 |
Chapter 5 - Equivalence (13)
ELEMENTARY MATRICES (2)
The effect of applying an elementary transformation to an m×n matrix A can be produced by multiplying A by the corresponding elementary matrix.
To effect a given elementary row transformation of a matrix A of order m×n, apply the transformation to I_m to form the corresponding elementary matrix H and multiply A on the left by H.

Example: H13 interchanges the first and third rows:

H13 · A = | 0  0  1 | | 1  2  3 |  =  | 7  8  9 |
          | 0  1  0 | | 4  5  6 |     | 4  5  6 |
          | 1  0  0 | | 7  8  9 |     | 1  2  3 |
Chapter 5 - Equivalence (14)
ELEMENTARY MATRICES (3)
To effect a given elementary column transformation of a matrix A of order m×n, apply the transformation to I_n to form the corresponding elementary matrix K and multiply A on the right by K.

Example: K13(2) adds to the first column two times the third column:

A · K13(2) = | 1  2  3 | | 1  0  0 |  =  |  7  2  3 |
             | 4  5  6 | | 0  1  0 |     | 16  5  6 |
             | 7  8  9 | | 2  0  1 |     | 25  8  9 |
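Both multiplications above can be reproduced directly. A minimal numpy check of the two examples:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

# H13: interchange rows 1 and 3 of I3, then multiply on the left.
H13 = np.array([[0, 0, 1],
                [0, 1, 0],
                [1, 0, 0]])
print(H13 @ A)    # rows 1 and 3 of A interchanged

# K13(2): add 2 times column 3 to column 1 of I3, then multiply on the right.
K13_2 = np.array([[1, 0, 0],
                  [0, 1, 0],
                  [2, 0, 1]])
print(A @ K13_2)  # first column becomes (7, 16, 25)
```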
Chapter 5 - Equivalence (15)
EQUIVALENT MATRICES (1)
Let A and B be equivalent matrices. Let the elementary row and column matrices corresponding to the elementary row and column transformations which reduce A to B be designated H1, H2, ..., Hs; K1, K2, ..., Kt, where H1 is the first row transformation, H2 is the second, ..., K1 is the first column transformation, K2 is the second, .... Then

(5.2)  Hs ··· H2 H1 · A · K1 K2 ··· Kt = PAQ = B

where

(5.3)  P = Hs ··· H2 H1   and   Q = K1 K2 ··· Kt

Two matrices A and B are equivalent if and only if there exist non-singular matrices P and Q, as defined in (5.3), such that PAQ = B.
Chapter 5 - Equivalence (16)
EQUIVALENT MATRICES (2)
Example: The row transformations H21(−2), H31(−1) and the column transformations K21(−2), K31(−1), K41(−2), K42(1), K3(1/2) reduce

A = | 1  2  1  2 |   to its normal form   B = | 1  0  0  0 |
    | 2  5  2  3 |                            | 0  1  0  0 |
    | 1  2  1  2 |                            | 0  0  0  0 |

Here

P = H31(−1)H21(−2) = |  1  0  0 |      Q = K21(−2)K31(−1)K41(−2)K42(1)K3(1/2) = | 1  −2  −1/2  −4 |
                     | −2  1  0 |                                               | 0   1    0    1 |
                     | −1  0  1 |                                               | 0   0   1/2   0 |
                                                                                | 0   0    0    1 |

and PAQ = B.
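The factorization PAQ = B can be verified by direct multiplication. A quick numpy check, using the reconstructed P and Q of the example above (these values are the reconstruction on this slide, so treat them as assumptions):

```python
import numpy as np

A = np.array([[1, 2, 1, 2],
              [2, 5, 2, 3],
              [1, 2, 1, 2]], dtype=float)

P = np.array([[1, 0, 0],
              [-2, 1, 0],
              [-1, 0, 1]], dtype=float)

Q = np.array([[1, -2, -0.5, -4],
              [0, 1, 0, 1],
              [0, 0, 0.5, 0],
              [0, 0, 0, 1]], dtype=float)

# PAQ should equal the normal form [I2 0; 0 0].
print(P @ A @ Q)
```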
Chapter 5 - Equivalence (17)
EQUIVALENT MATRICES (3)
Any matrix is equivalent to its normal form.
If A is a square non-singular matrix, there exist non-singular matrices P and Q, as defined in (5.3), such that PAQ = I_n.
Chapter 5 - Equivalence (18)
INVERSE OF A PRODUCT OF ELEMENTARY MATRICES
Let P = Hs ··· H2 H1 and Q = K1 K2 ··· Kt, as in (5.3). Since each matrix H and each matrix K has an inverse, and since the inverse of a product is the product of the inverses of the factors in reverse order,

(5.4)  P⁻¹ = H1⁻¹ H2⁻¹ ··· Hs⁻¹   and   Q⁻¹ = Kt⁻¹ ··· K2⁻¹ K1⁻¹

Let A be an n-square non-singular matrix and let P and Q, defined as above, be such that PAQ = I_n. Then

(5.5)  A = P⁻¹(PAQ)Q⁻¹ = P⁻¹ I_n Q⁻¹ = P⁻¹ Q⁻¹

Every non-singular matrix can be expressed as a product of elementary matrices.
If A is non-singular, the rank of the product AB (also of the product BA) is that of B.
If P and Q are non-singular, the rank of PAQ is that of A.
Chapter 5 - Equivalence (19)
CANONICAL SETS UNDER EQUIVALENCE
Two m×n matrices A and B are equivalent if and only if they have the same rank.
A set of m×n matrices is called a canonical set under equivalence if every m×n matrix is equivalent to one matrix of the set. Such a canonical set is given by the normal forms (5.1) as r ranges over the values 1, 2, ..., m or 1, 2, ..., n, whichever is the smaller, together with the zero matrix.
Chapter 5 - Equivalence (19)
RANK OF A PRODUCT
Let A be an m×p matrix of rank r. There exist non-singular matrices P and Q such that PAQ = N, where N is the normal form of A; then A = P⁻¹NQ⁻¹. Let B be a p×n matrix and consider the rank of

(5.6)  AB = P⁻¹NQ⁻¹B

The rank of AB is that of NQ⁻¹B. Now the rows of NQ⁻¹B consist of the first r rows of Q⁻¹B and m−r rows of zeros. Hence, the rank of AB cannot exceed r, the rank of A. Similarly, the rank of AB cannot exceed that of B. We have proved:
The rank of the product of two matrices cannot exceed the rank of either factor.
If the m×p matrix A is of rank r and if the p×n matrix B is such that AB = 0, the rank of B cannot exceed p − r.
Chapter 6 The adjoint of a square matrix (1)
THE ADJOINT (1)
Let A = [a_ij] be an n-square matrix and let α_ij be the cofactor of a_ij; then by definition

(6.1)  adj A = | α11  α21  ...  αn1 |
               | α12  α22  ...  αn2 |
               | ...  ...  ...  ... |
               | α1n  α2n  ...  αnn |

Note carefully that the cofactors of the elements of the i-th row (column) of A are the elements of the i-th column (row) of adj A.
Chapter 6 The adjoint of a square matrix (2)
THE ADJOINT (2) - Example 1: For

A = | 1  2  3 |
    | 2  3  2 |
    | 3  3  4 |

the cofactors are

α11 = +| 3  2 | = 6      α12 = −| 2  2 | = −2     α13 = +| 2  3 | = −3
       | 3  4 |                 | 3  4 |                 | 3  3 |

α21 = −| 2  3 | = 1      α22 = +| 1  3 | = −5     α23 = −| 1  2 | = 3
       | 3  4 |                 | 3  4 |                 | 3  3 |

α31 = +| 2  3 | = −5     α32 = −| 1  3 | = 4      α33 = +| 1  2 | = −1
       | 3  2 |                 | 2  2 |                 | 2  3 |

and

adj A = | α11  α21  α31 |  =  |  6   1  −5 |
        | α12  α22  α32 |     | −2  −5   4 |
        | α13  α23  α33 |     | −3   3  −1 |
Chapter 6 The adjoint of a square matrix (3)
THE ADJOINT (3)
Using Theorems X and XI of Chapter 3, we find

(6.2)  A (adj A) = | a11  a12  ...  a1n | | α11  α21  ...  αn1 |  =  diag(|A|, |A|, ..., |A|)  =  |A| · I_n  =  (adj A) A
                   | a21  a22  ...  a2n | | α12  α22  ...  αn2 |
                   | ...  ...  ...  ... | | ...  ...  ...  ... |
                   | an1  an2  ...  ann | | α1n  α2n  ...  αnn |
Chapter 6 The adjoint of a square matrix (4)
THE ADJOINT (4) Example 2: With A and adj A as in Example 1,

|A| = a11 α11 + a12 α12 + a13 α13 = 1(6) + 2(−2) + 3(−3) = −7

and

A (adj A) = | 1  2  3 | |  6   1  −5 |  =  | −7   0   0 |  =  diag(−7, −7, −7)  =  |A| · I_3
            | 2  3  2 | | −2  −5   4 |     |  0  −7   0 |
            | 3  3  4 | | −3   3  −1 |     |  0   0  −7 |
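The adjoint of Example 1 and the identity (6.2) can be reproduced numerically. A minimal sketch (numpy assumed available):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 3, 2],
              [3, 3, 4]], dtype=float)

def adjoint(M):
    """adj M: the transposed matrix of cofactors, as in (6.1)."""
    n = M.shape[0]
    adj = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            # Store the cofactor of m_ij at position (j, i): the transpose.
            adj[j, i] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

print(np.round(adjoint(A)))       # the adjoint computed in Example 1
print(np.round(A @ adjoint(A)))   # |A| * I, with |A| = -7
```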
Chapter 6 The adjoint of a square matrix (5)
THE ADJOINT (5)
By taking determinants in (6.2),

(6.3)  |A| · |adj A| = |A|^n

There follow:
I. If A is an n-square non-singular matrix:

(6.4)  |adj A| = |A|^(n−1)

II. If A is an n-square singular matrix:

A (adj A) = (adj A) A = 0

If A is of rank < n−1, then adj A = 0. If A is of rank n−1, then adj A is of rank 1.
Chapter 6 The adjoint of a square matrix (6)
THE ADJOINT OF THE PRODUCT
III. If A and B are n-square matrices,

(6.5)  adj(AB) = (adj B)(adj A)
Chapter 6 The adjoint of a square matrix (7)
MINOR OF AN ADJOINT (1)
IV. Let |A^(i1,i2,...,im)_(j1,j2,...,jm)| be an m-square minor of the n-square matrix A = [a_ij], let |A^(i(m+1),...,in)_(j(m+1),...,jn)| be its complement in A, and let |adjA^(i1,...,im)_(j1,...,jm)| denote the m-square minor of the determinant |adj A| whose elements occupy the same positions in adj A as those of |A^(i1,...,im)_(j1,...,jm)| occupy in A. Then

(6.6)  |A| · |adjA^(i1,...,im)_(j1,...,jm)| = (−1)^s |A|^m |A^(j(m+1),...,jn)_(i(m+1),...,in)|

where s = i1 + i2 + ... + im + j1 + j2 + ... + jm.
Chapter 6 The adjoint of a square matrix (8)
MINOR OF AN ADJOINT (2)
If in (6.6) the matrix A is non-singular, then

(6.7)  |adjA^(i1,...,im)_(j1,...,jm)| = (−1)^s |A|^(m−1) |A^(j(m+1),...,jn)_(i(m+1),...,in)|

When m = 2, (6.7) becomes

(6.8)  | α_(i1,j1)  α_(i2,j1) |  =  (−1)^s |A| · |A^(i3,i4,...,in)_(j3,j4,...,jn)|  =  |A| · (algebraic complement of |A^(i1,i2)_(j1,j2)|)
       | α_(i1,j2)  α_(i2,j2) |

When m = n, (6.7) becomes (6.4).
Chapter 7 The inverse of a matrix (1)
THE INVERSE OF A MATRIX
If A and B are n-square matrices such that AB = BA = I_n, A is called the inverse of B (A = B⁻¹) and B the inverse of A.
I. An n-square matrix A has an inverse if and only if it is non-singular. The inverse of a non-singular matrix is unique.
II. If A is non-singular, then AB = AC implies B = C.
The inverse of a non-singular diagonal matrix diag(k1, k2, ..., kn) is the diagonal matrix diag(1/k1, 1/k2, ..., 1/kn).
If A1, A2, ..., As are non-singular matrices, then the inverse of the direct sum diag(A1, A2, ..., As) is diag(A1⁻¹, A2⁻¹, ..., As⁻¹).
Procedures for computing the inverse of a general non-singular n-square matrix are given below.
Chapter 7 The inverse of a matrix (2)
THE INVERSE FROM THE ADJOINT MATRIX
From (6.2), A (adj A) = |A| · I_n. If A is non-singular,

(7.1)  A⁻¹ = adj A / |A| = | α11/|A|  α21/|A|  ...  αn1/|A| |
                           | α12/|A|  α22/|A|  ...  αn2/|A| |
                           | ...      ...      ...  ...     |
                           | α1n/|A|  α2n/|A|  ...  αnn/|A| |

Example 1: For the matrix A of Chapter 6, Example 1, |A| = −7 and

A⁻¹ = adj A / (−7) = | −6/7  −1/7   5/7 |
                     |  2/7   5/7  −4/7 |
                     |  3/7  −3/7   1/7 |
Chapter 7 The inverse of a matrix (3)
THE INVERSE FROM THE ADJOINT MATRIX (2)
Check:

A A⁻¹ = | 1  2  3 | | −6/7  −1/7   5/7 |  =  | 1  0  0 |
        | 2  3  2 | |  2/7   5/7  −4/7 |     | 0  1  0 |
        | 3  3  4 | |  3/7  −3/7   1/7 |     | 0  0  1 |
Chapter 7 The inverse of a matrix (4)
INVERSE FROM ELEMENTARY MATRICES (1)
Let the non-singular n-square matrix A be reduced to I by elementary transformations, so that

Hs ··· H2 H1 · A · K1 K2 ··· Kt = PAQ = I

Then A = P⁻¹Q⁻¹ by (5.5) and, since (B⁻¹)⁻¹ = B,

(7.2)  A⁻¹ = (P⁻¹Q⁻¹)⁻¹ = QP = K1 K2 ··· Kt · Hs ··· H2 H1
Chapter 7 The inverse of a matrix (5)
INVERSE FROM ELEMENTARY MATRICES (2) Example 2 (1)
The elementary transformations H21(−1), H31(−1), K21(−3), K31(−3) reduce

A = | 1  3  3 |
    | 1  4  3 |
    | 1  3  4 |

to I_3:

H31(−1) H21(−1) · A · K21(−3) K31(−3) = |  1  0  0 | |  1  0  0 | | 1  3  3 | | 1 −3  0 | | 1  0 −3 |  =  I_3
                                        |  0  1  0 | | −1  1  0 | | 1  4  3 | | 0  1  0 | | 0  1  0 |
                                        | −1  0  1 | |  0  0  1 | | 1  3  4 | | 0  0  1 | | 0  0  1 |
Chapter 7 The inverse of a matrix (6)
INVERSE FROM ELEMENTARY MATRICES (2) Example 2 (2)
By (7.2),

A⁻¹ = K21(−3) K31(−3) H31(−1) H21(−1) = | 1  −3  −3 | |  1  0  0 |  =  |  7  −3  −3 |
                                        | 0   1   0 | | −1  1  0 |     | −1   1   0 |
                                        | 0   0   1 | | −1  0  1 |     | −1   0   1 |

Check: A A⁻¹ = I_3.
Chapter 7 The inverse of a matrix (6)
INVERSE FROM ELEMENTARY MATRICES (3)
In Chapter 5 it was shown that a non-singular matrix can be reduced to normal form by row transformations alone. Then, from (7.2) with Q = I, we have

(7.3)  A⁻¹ = P = Hs ··· H2 H1

III. If A is reduced to I by a sequence of row transformations alone, then A⁻¹ is equal to the product, in reverse order, of the corresponding elementary matrices.
Chapter 7 The inverse of a matrix (6)
INVERSE FROM ELEMENTARY MATRICES (4)
Example 3: Find the inverse of the matrix A of Example 2 using only row transformations to reduce A to I. Write the matrix [A I3] and perform, on its rows of six elements, the sequence of row transformations H21(−1), H31(−1), H12(−3), H13(−3), which carries A into I3. We have

[A I3] = | 1  3  3  1  0  0 | ~ | 1  3  3   1  0  0 | ~ | 1  0  3   4 −3  0 | ~ | 1  0  0   7 −3 −3 | = [I3 A⁻¹]
         | 1  4  3  0  1  0 |   | 0  1  0  −1  1  0 |   | 0  1  0  −1  1  0 |   | 0  1  0  −1  1  0 |
         | 1  3  4  0  0  1 |   | 0  0  1  −1  0  1 |   | 0  0  1  −1  0  1 |   | 0  0  1  −1  0  1 |
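The [A | I] procedure of Example 3 is Gauss-Jordan elimination. A self-contained sketch (numpy assumed available; partial pivoting added for numerical safety, which the hand computation above does not need):

```python
import numpy as np

A = np.array([[1, 3, 3],
              [1, 4, 3],
              [1, 3, 4]], dtype=float)

def inverse_by_row_reduction(M):
    """Row-reduce [M | I]; the transformations carrying M to I carry I to M^-1."""
    n = M.shape[0]
    aug = np.hstack([M.astype(float), np.eye(n)])
    for k in range(n):
        p = k + int(np.argmax(np.abs(aug[k:, k])))   # row interchange H_ij
        aug[[k, p]] = aug[[p, k]]
        aug[k] /= aug[k, k]                          # scaling H_k(1/a)
        for i in range(n):
            if i != k:
                aug[i] -= aug[i, k] * aug[k]         # type-(3) transformations H_ij(c)
    return aug[:, n:]

print(np.round(inverse_by_row_reduction(A)))   # A^-1 from Example 3
```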
Chapter 7 The inverse of a matrix (7)
INVERSE BY PARTITIONING (1)
Let the matrix A = [a_ij] of order n and its inverse B = [b_ij] be partitioned into submatrices of the indicated orders:

A = | A11 (p×p)  A12 (p×q) |      B = | B11 (p×p)  B12 (p×q) |      p + q = n
    | A21 (q×p)  A22 (q×q) |          | B21 (q×p)  B22 (q×q) |

Since AB = BA = I_n, we have

(7.4)  (i)   A11 B11 + A12 B21 = I_p      (ii)  A11 B12 + A12 B22 = 0
       (iii) A21 B11 + A22 B21 = 0        (iv)  A21 B12 + A22 B22 = I_q
Chapter 7 The inverse of a matrix (8)
INVERSE BY PARTITIONING (2)
Then, provided that A11 is non-singular, we can solve for B11, B12, B21 and B22:

(7.5)  B22 = ( A22 − A21 A11⁻¹ A12 )⁻¹
       B12 = −(A11⁻¹ A12) B22
       B21 = −B22 (A21 A11⁻¹)
       B11 = A11⁻¹ + (A11⁻¹ A12) B22 (A21 A11⁻¹)
Chapter 7 The inverse of a matrix (9)
INVERSE BY PARTITIONING (3)
In practice, A11 is usually taken of order n−1. To obtain A11⁻¹, the following procedure is used. Let

G2 = | a11  a12 |      G3 = | a11  a12  a13 |      G4 = | a11  a12  a13  a14 |
     | a21  a22 |           | a21  a22  a23 |           | a21  a22  a23  a24 |
                            | a31  a32  a33 |           | a31  a32  a33  a34 |
                                                        | a41  a42  a43  a44 |

First,

G2⁻¹ = |  a22  −a12 | / (a11 a22 − a12 a21)
       | −a21   a11 |

After computing G2⁻¹, partition G3 so that A11 = G2 and A22 = [a33], that is,

G3 = | A11  A12 |   with   A12 = | a13 |,   A21 = [ a31  a32 ],
     | A21  A22 |                | a23 |

and use (7.5) to obtain G3⁻¹. Repeat the process on G4, after partitioning it so that A11 = G3 and A22 = [a44], and so on.
Chapter 7 The inverse of a matrix (10)
INVERSE BY PARTITIONING (4) Example 4: Find the inverse of

A = | 1  3  3 |   using partitioning, with   A11 = | 1  3 |,  A12 = | 3 |,  A21 = [ 1  3 ],  A22 = [ 4 ].
    | 1  4  3 |                                    | 1  4 |         | 3 |
    | 1  3  4 |

Then

A11⁻¹ = |  4  −3 |,    A11⁻¹A12 = | 3 |,    A21 A11⁻¹ = [ 1  0 ],
        | −1   1 |                | 0 |

A22 − A21(A11⁻¹A12) = [4] − [3] = [1],   so   B22 = [1],

B12 = −(A11⁻¹A12) B22 = | −3 |,      B21 = −B22 (A21 A11⁻¹) = [ −1  0 ],
                        |  0 |

B11 = A11⁻¹ + (A11⁻¹A12) B22 (A21 A11⁻¹) = |  4  −3 | + | 3  0 |  =  |  7  −3 |
                                           | −1   1 |   | 0  0 |     | −1   1 |

and

A⁻¹ = | B11  B12 |  =  |  7  −3  −3 |
      | B21  B22 |     | −1   1   0 |
                       | −1   0   1 |
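The block computation of Example 4 can be carried out step by step in code. A minimal sketch following (7.5) with p = 2, q = 1 (numpy assumed available):

```python
import numpy as np

A = np.array([[1, 3, 3],
              [1, 4, 3],
              [1, 3, 4]], dtype=float)

# Partition as in Example 4: A11 is 2x2, A22 is 1x1.
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]

A11_inv = np.linalg.inv(A11)
B22 = np.linalg.inv(A22 - A21 @ A11_inv @ A12)   # (7.5), first line
B12 = -A11_inv @ A12 @ B22
B21 = -B22 @ A21 @ A11_inv
B11 = A11_inv + A11_inv @ A12 @ B22 @ A21 @ A11_inv

B = np.block([[B11, B12], [B21, B22]])
print(np.round(B))   # A^-1 as obtained in Example 4
```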
Chapter 7 The inverse of a matrix (11)
THE INVERSE OF A SYMMETRIC MATRIX (1)
When A is symmetric, α_ij = α_ji, and only n(n+1)/2 cofactors need be computed, instead of the usual n², in obtaining adj A.
If there is to be any gain in computing A⁻¹ as a product of elementary matrices, the elementary transformations must be performed so that the property of being symmetric is preserved. This requires that the transformations occur in pairs, a row transformation followed immediately by the same column transformation. For example, with a ≠ 0,

| a  b |  →  H21(−b/a)  →  | a       b      |  →  K21(−b/a)  →  | a       0      |
| b  c |                   | 0   c − b²/a   |                   | 0   c − b²/a   |

However, when the element a in the diagonal is to be replaced by 1, the pair of transformations is H1(1/a^(1/2)) and K1(1/a^(1/2)). In general, a^(1/2) is either irrational or imaginary; hence, this procedure is not recommended.
Chapter 7 The inverse of a matrix (12)
THE INVERSE OF A SYMMETRIC MATRIX (2)
The maximum gain occurs when the method of partitioning is used, since then A21 = A12ᵀ and (7.5) reduces to

(7.6)  B22 = ( A22 − A12ᵀ (A11⁻¹ A12) )⁻¹
       B12 = −(A11⁻¹ A12) B22
       B21 = B12ᵀ
       B11 = A11⁻¹ + (A11⁻¹ A12) B22 (A11⁻¹ A12)ᵀ

so that only A11⁻¹A12, and not A21A11⁻¹, need be formed.
When A is not symmetric, the above procedure may be used to find the inverse of AᵀA, which is symmetric, and then the inverse of A is found by

(7.7)  A⁻¹ = (AᵀA)⁻¹ Aᵀ