
the first m - 1 rows having the largest pivot element) should be placed in the first, or pivot, position if it is not already there. This would be accomplished by checking the absolute value of a11 against the absolute values of a21, a31, a41, and so on, making the appropriate row interchange if one of these latter values were larger than |a11|. This procedure of partial pivoting should always be incorporated in a computer program for solving a fairly large number of simultaneous equations.
It should be noted that, if it were desired to solve a second set of simultaneous equations which differed from a first set only in the constant terms, both sets could be solved at the same time by representing each set of constants as a separate column when augmenting the coefficient matrix. In fact, two or more sets of simultaneous equations, differing only in their constant terms, can be solved in a single elimination procedure by placing the constants of each set in a separate column to the right of the coefficient columns in the augmented matrix, and applying equations (??) and (??) until a reduced matrix is obtained which has as many columns of constants as there are sets of simultaneous equations being solved.
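The multiple-right-hand-side idea can be sketched in Python; the function name gauss_jordan_multi and the list-of-lists layout are our own illustration, not from the text. It reduces an augmented matrix carrying several constant columns at once, with partial pivoting:

```python
def gauss_jordan_multi(A, B):
    """Solve A x = b for several right-hand sides at once.

    A is an n x n list of lists; B is an n x m list of lists whose
    columns are the constant terms of the separate equation sets.
    Returns an n x m result whose columns are the solutions.
    """
    n = len(A)
    # Build the augmented matrix [A : B].
    aug = [list(A[i]) + list(B[i]) for i in range(n)]
    for col in range(n):
        # Partial pivoting: bring the row with the largest pivot
        # candidate (in absolute value) up to the pivot position.
        p = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[p] = aug[p], aug[col]
        # Normalize the pivot row.
        piv = aug[col][col]
        aug[col] = [v / piv for v in aug[col]]
        # Eliminate the pivot column from every other row (Gauss-Jordan).
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    # The last m columns now hold the solutions, one per constant set.
    return [row[n:] for row in aug]
```
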

5.6 Cholesky's Method

Cholesky's method, also known as Crout's method, the method of matrix decomposition, or the method of matrix factorization, is more economical of computer time than the other elimination methods. As a result it has been used extensively in some of the larger structural analysis programs.
Crout's method transforms the coefficient matrix, A, into the product of two matrices, L (a lower-triangular matrix) and U (an upper-triangular matrix), where U has ones on its main diagonal. (The variant in which L has the ones on its diagonal is known as Doolittle's method.)
Any matrix whose leading principal minors are all nonzero can be written as the product of a lower-triangular and an upper-triangular matrix in infinitely many ways, for example

[ 2 1 1 ]   [ 2 0 0 ] [ 1 1/2 1/2 ]
[ 0 4 2 ] = [ 0 4 0 ] [ 0  1  1/2 ]
[ 6 3 7 ]   [ 6 0 4 ] [ 0  0   1  ]

            [ 1 0 0 ] [ 2 1 1 ]
          = [ 0 1 0 ] [ 0 4 2 ]
            [ 3 0 1 ] [ 0 0 4 ]

            [ 1 0 0 ] [ 2 1 1 ]
          = [ 0 2 0 ] [ 0 2 1 ]
            [ 3 0 1 ] [ 0 0 4 ]

and so on.

Of the entire set of L, U pairs whose product equals the matrix A, Crout's method chooses the pair in which U has only ones on its diagonal, as in the first pair above.
Suppose we want the solution of four simultaneous equations in four unknowns. The set is represented by the matrix equation

A x = b

where A represents the coefficient matrix, x the column matrix of the unknowns, and b the column matrix of the constants. If we can reduce the system of equations to an equivalent system of the form

L U x = b

then

L^-1 L U x = L^-1 b = y ,    say

or

U x = y ,    L y = b ,

which can be readily solved: L y = b by forward substitution, then U x = y by backward substitution.
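The two triangular solves can be sketched as short Python routines; the function names forward_sub and backward_sub are ours, and backward_sub assumes U has a unit diagonal, as in Crout's method:

```python
def forward_sub(L, b):
    """Solve L y = b for lower-triangular L by forward substitution."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        s = sum(L[i][k] * y[k] for k in range(i))
        y[i] = (b[i] - s) / L[i][i]
    return y

def backward_sub(U, y):
    """Solve U x = y for upper-triangular U with ones on its diagonal."""
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][k] * x[k] for k in range(i + 1, n))
        x[i] = y[i] - s  # U[i][i] = 1, so no division is needed
    return x
```
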
In the case of a 4 x 4 A matrix, from the relationship A = LU we can write the augmented matrix

[ a11 a12 a13 a14 : b1 ]   [ l11  0   0   0  ] [ 1 u12 u13 u14 : y1 ]
[ a21 a22 a23 a24 : b2 ] = [ l21 l22  0   0  ] [ 0  1  u23 u24 : y2 ]
[ a31 a32 a33 a34 : b3 ]   [ l31 l32 l33  0  ] [ 0  0   1  u34 : y3 ]
[ a41 a42 a43 a44 : b4 ]   [ l41 l42 l43 l44 ] [ 0  0   0   1  : y4 ]
                                                               (5.34)
For the sake of convenience, let the b's be represented by ai5 and the y's by ui5 to obtain the above system as

[ a11 a12 a13 a14 : a15 ]   [ l11  0   0   0  ] [ 1 u12 u13 u14 : u15 ]
[ a21 a22 a23 a24 : a25 ] = [ l21 l22  0   0  ] [ 0  1  u23 u24 : u25 ]
[ a31 a32 a33 a34 : a35 ]   [ l31 l32 l33  0  ] [ 0  0   1  u34 : u35 ]
[ a41 a42 a43 a44 : a45 ]   [ l41 l42 l43 l44 ] [ 0  0   0   1  : u45 ]
                                                               (5.35)
Multiplying the rows of L by the first column of U, we get

l11 = a11 ,   l21 = a21 ,   l31 = a31 ,   l41 = a41 ;          (5.36)

the first column of L is the same as the first column of A. Now multiply the first row of L by the columns of U:

l11 u12 = a12 ,   l11 u13 = a13 ,   l11 u14 = a14 ,   l11 u15 = a15 ,

from which

u12 = a12 / l11 ,   u13 = a13 / l11 ,   u14 = a14 / l11 ,   u15 = a15 / l11 .   (5.37)

Thus the first row of U is determined. Next the equations for the second column of L are obtained by multiplying the rows of L by the second column of U:

l21 u12 + l22 = a22 ,   l31 u12 + l32 = a32 ,   l41 u12 + l42 = a42 ,

which give

l22 = a22 - l21 u12 ,   l32 = a32 - l31 u12 ,   l42 = a42 - l41 u12 .   (5.38)

Proceeding in the same fashion, the remaining equations are

u23 = (a23 - l21 u13) / l22 ,              u24 = (a24 - l21 u14) / l22 ,
u25 = (a25 - l21 u15) / l22 ,
l33 = a33 - l31 u13 - l32 u23 ,            l43 = a43 - l41 u13 - l42 u23 ,
u34 = (a34 - l31 u14 - l32 u24) / l33 ,    u35 = (a35 - l31 u15 - l32 u25) / l33 ,
l44 = a44 - l41 u14 - l42 u24 - l43 u34 ,
u45 = (a45 - l41 u15 - l42 u25 - l43 u35) / l44 .
                                                               (5.39)

The general formulas for the elements of L and U corresponding to the coefficient matrix for n simultaneous equations can be written as

lij = aij - Σ (k = 1 to j-1) lik ukj ,               j ≤ i ;  i = 1, 2, 3, ..., n        (5.40)

uij = ( aij - Σ (k = 1 to i-1) lik ukj ) / lii ,     i < j ;  j = 2, 3, ..., n + 1       (5.41)
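Equations (5.40) and (5.41) translate almost directly into code. A minimal Python sketch, assuming the constants are carried as column n + 1 of the input array (0-based indices replace the 1-based subscripts above; the function name is ours):

```python
def crout(A):
    """Crout decomposition of an n x (n+1) augmented matrix.

    Returns (L, U) with U having ones on its main diagonal; the
    last column of U is the intermediate vector y of the text.
    Implements equations (5.40) and (5.41) with 0-based indices.
    """
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * (n + 1) for _ in range(n)]
    for j in range(n):
        # Equation (5.40): column j of L, rows j .. n-1.
        for i in range(j, n):
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        U[j][j] = 1.0
        # Equation (5.41): row j of U, columns j+1 .. n (incl. constants).
        for c in range(j + 1, n + 1):
            U[j][c] = (A[j][c] - sum(L[j][k] * U[k][c] for k in range(j))) / L[j][j]
    return L, U
```
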

If we make sure that a11 in the original matrix is nonzero, then the divisions of equations (??) will always be defined, since the lii values will be nonzero. This may be seen by noting that

L U = A

and therefore the determinant of L times the determinant of U equals the determinant of A; that is,

|L| |U| = |A| .

We are assuming independent equations, so the determinant of A is nonzero. Therefore the determinant of L must be nonzero. Since the determinant of a triangular matrix is the product of its main-diagonal elements, the lii elements are all nonzero.
For n > 2, Crout's method requires fewer arithmetic operations than either the Gaussian or the Gauss-Jordan method, making it the fastest of the basic elimination methods. It can also be made economical of core storage in the computer by overlaying the U and L matrices on the A matrix (in the same storage locations). Examination of equations (??) through (??) shows that, once any element aij of A has been used, it never again appears in the equations; moreover, there is no need to store the zeros in either U or L, and the ones on the diagonal of U can also be omitted. (Since these values are always the same and are always known, it is redundant to record them.) In other words, the A array can be transformed by the above equations to become

[ a11 a12 a13 a14 a15 ]      [ l11 u12 u13 u14 u15 ]
[ a21 a22 a23 a24 a25 ]  →   [ l21 l22 u23 u24 u25 ]
[ a31 a32 a33 a34 a35 ]      [ l31 l32 l33 u34 u35 ]
[ a41 a42 a43 a44 a45 ]      [ l41 l42 l43 l44 u45 ]
                                                     (5.42)
Because we can condense the L and U matrices into one array and store their
elements in the space of A, this method is often called a compact scheme.
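The compact scheme can be sketched by overwriting the A array in place, since each aij is used only once before its storage is reused; the function below is our illustration of equation (5.42), not code from the text:

```python
def crout_in_place(A):
    """Crout decomposition overwriting the n x (n+1) augmented array A.

    Afterwards A holds the compact form of equation (5.42): the l's on
    and below the main diagonal, the u's (and the y column) above it.
    The zeros of L and U and the unit diagonal of U are never stored.
    """
    n = len(A)
    for j in range(n):
        for i in range(j, n):            # column j of L, eq. (5.40)
            A[i][j] -= sum(A[i][k] * A[k][j] for k in range(j))
        for c in range(j + 1, n + 1):    # row j of U (y in column n), eq. (5.41)
            A[j][c] = (A[j][c] - sum(A[j][k] * A[k][c] for k in range(j))) / A[j][j]
    return A
```

The overwrite is safe because, at step j, every stored value the formulas read (l's from earlier columns, u's from earlier rows) was finalized in an earlier pass, exactly as the text observes.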
Example #
Method # 1.

3x1 - x2 + 2x3 = 12
x1 + 2x2 + 3x3 = 11
2x1 - 2x2 - x3 = 2

Its augmented matrix

[ 3 -1  2 : 12 ]
[ 1  2  3 : 11 ]                                   (5.43)
[ 2 -2 -1 :  2 ]

can be written as

[ l11  0    0  ] [ 1 u12 u13 u14 ]
[ l21 l22   0  ] [ 0  1  u23 u24 ]
[ l31 l32  l33 ] [ 0  0   1  u34 ]

where

l11 = a11 = 3 ,   l21 = a21 = 1 ,   l31 = a31 = 2 ,

u12 = a12 / l11 = -1/3 ,   u13 = a13 / l11 = 2/3 ,   u14 = a14 / l11 = 12/3 = 4 ,

l22 = a22 - l21 u12 = 2 - (1)(-1/3) = 7/3 ,

l32 = a32 - l31 u12 = -2 - (2)(-1/3) = -4/3 ,

u23 = (a23 - l21 u13) / l22 = (3 - (1)(2/3)) / (7/3) = 1 ,

u24 = (a24 - l21 u14) / l22 = (11 - (1)(4)) / (7/3) = 3 ,

l33 = a33 - l31 u13 - l32 u23 = -1 - (2)(2/3) - (-4/3)(1) = -1 ,

u34 = (a34 - l31 u14 - l32 u24) / l33 = (2 - (2)(4) - (-4/3)(3)) / (-1) = 2 .

Thus

    [ 3   0    0 ]              [ 1 -1/3 2/3 : 4 ]
L = [ 1  7/3   0 ]    and   U = [ 0   1   1  : 3 ]
    [ 2 -4/3  -1 ]              [ 0   0   1  : 2 ]

Since U here is an augmented matrix, backward substitution gives

x3 = 2 ,   x2 = 1 ,   x1 = 3 ,

which is the solution set.
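As a quick check on the arithmetic, multiplying the L and U just found reproduces the augmented matrix (5.43); for instance, in Python:

```python
# L and U from Method # 1 above; U carries the constants' column.
L = [[3, 0, 0],
     [1, 7/3, 0],
     [2, -4/3, -1]]
U = [[1, -1/3, 2/3, 4],
     [0, 1, 1, 3],
     [0, 0, 1, 2]]

# Form the product L * U row by row.
product = [[sum(L[i][k] * U[k][j] for k in range(3)) for j in range(4)]
           for i in range(3)]

# The augmented matrix of equation (5.43).
augmented = [[3, -1, 2, 12],
             [1, 2, 3, 11],
             [2, -2, -1, 2]]
```
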
Method # 2.

3x1 - x2 + 2x3 = 12
x1 + 2x2 + 3x3 = 11                                (5.44)
2x1 - 2x2 - x3 = 2

Its matrix A is

[ 3 -1  2 ]
[ 1  2  3 ]
[ 2 -2 -1 ]

We get

    [ 3   0    0 ]             [ 1 -1/3 2/3 ]
L = [ 1  7/3   0 ]    and  U = [ 0   1   1  ]
    [ 2 -4/3  -1 ]             [ 0   0   1  ]

Since

A x = L U x = b ,

L^-1 L U x = L^-1 b ,    provided |L| ≠ 0,

or

U x = L^-1 b = y ,    say,

U x = y ,    L y = b .

Now L y = b gives

[ 3   0    0 ] [ y1 ]   [ 12 ]
[ 1  7/3   0 ] [ y2 ] = [ 11 ]
[ 2 -4/3  -1 ] [ y3 ]   [  2 ]

where y = [y1, y2, y3]^T. Forward substitution gives

[ y1 ]   [ 4 ]
[ y2 ] = [ 3 ]
[ y3 ]   [ 2 ]
And U x = y gives

[ 1 -1/3 2/3 ] [ x1 ]   [ 4 ]
[ 0   1   1  ] [ x2 ] = [ 3 ]
[ 0   0   1  ] [ x3 ]   [ 2 ]

where x = [x1, x2, x3]^T. Backward substitution gives

[ x1 ]   [ 3 ]
[ x2 ] = [ 1 ]
[ x3 ]   [ 2 ]

which is the solution set.
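The solution can be confirmed by substituting x = (3, 1, 2) back into the original equations; a short check in Python:

```python
# Coefficient matrix and claimed solution from the example above.
A = [[3, -1, 2], [1, 2, 3], [2, -2, -1]]
x = [3, 1, 2]

# Compute A x; it should reproduce the constant vector b = (12, 11, 2).
b = [sum(A[i][k] * x[k] for k in range(3)) for i in range(3)]
# b == [12, 11, 2]
```
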

5.7 Norm

When we discuss multicomponent entities like matrices and vectors, we frequently need a way to express their magnitude: some measure of "bigness" or "smallness". For ordinary numbers, the absolute value tells us how large the number is, but a matrix has many components, each of which may be large or small in magnitude. (We are not talking about the size of a matrix, meaning the number of elements it contains.)
Any good measure of the magnitude of a matrix (the technical term is norm) must have four properties that are intuitively essential:
1. The norm must always have a value greater than or equal to zero, and must be zero only when the matrix is the zero matrix; i.e.,

||A|| ≥ 0 , and ||A|| = 0 if and only if A = 0.

2. The norm must be multiplied by |k| if the matrix is multiplied by the scalar k; i.e.,

||kA|| = |k| ||A||.

3. The norm of the sum of two matrices must not exceed the sum of the norms; i.e.,

||A + B|| ≤ ||A|| + ||B||.

4. The norm of the product of two matrices must not exceed the product of the norms; i.e.,

||AB|| ≤ ||A|| ||B||.
The third relationship is called the triangle inequality. The fourth is important when we deal with products of matrices.
For vectors in two- or three-space, the length satisfies all four requirements and is a good value to use for the norm of a vector. This norm is called the Euclidean norm, and is computed by

||x|| = sqrt( x1^2 + x2^2 + x3^2 ) .
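For example, a direct computation of the Euclidean norm (a minimal sketch; the helper name euclidean_norm is ours) also lets the first three properties be checked on concrete vectors:

```python
import math

def euclidean_norm(x):
    """Euclidean norm: square root of the sum of squared components."""
    return math.sqrt(sum(xi * xi for xi in x))

a = [3.0, 4.0, 0.0]
b = [1.0, 2.0, 2.0]

# Property 1: non-negative, and zero only for the zero vector.
assert euclidean_norm(a) >= 0 and euclidean_norm([0, 0, 0]) == 0
# Property 2: scaling by k multiplies the norm by |k|.
assert abs(euclidean_norm([-2 * v for v in a]) - 2 * euclidean_norm(a)) < 1e-12
# Property 3: the triangle inequality.
s = [ai + bi for ai, bi in zip(a, b)]
assert euclidean_norm(s) <= euclidean_norm(a) + euclidean_norm(b)
```
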