SCILAB
By
Gilberto E. Urroz, Ph.D., P.E.
Distributed by
infoClearinghouse.com
A "zip" file containing all of the programs in this document (and other
SCILAB documents at InfoClearinghouse.com) can be downloaded at the
following site:
http://www.engineering.usu.edu/cee/faculty/gurro/Software_Calculators/Scilab_Docs/ScilabBookFunctions.zip
Definitions
Matrices as tensors and the Kronecker delta function
Matrix operations
Einstein's summation convention for tensor algebra
Addition and subtraction
Multiplication by a scalar
Matrix multiplication
Inverse matrices
Verifying properties of inverse matrices
Creating identity matrices in SCILAB
The Vandermonde matrix
The Hilbert matrix
Magic squares
A matrix with one row and three columns (a 1x3 matrix) is basically the same as a
three-dimensional row vector, e.g.,

[5. 6. 2.]

A matrix with one column and five rows (a 5x1 matrix) is the same as a column vector
of five elements, e.g.,

[5; 3; 2.5; 0; 4]

The following is a matrix of two rows and four columns (a 2x4 matrix):

[5 2 3 0; 11 7 0 1]
A matrix can have variables and algebraic expressions as its elements, for example
entries such as a11^2, a22^2, 1 + 5i, 2i, 5i, and 2e^(i*pi/2).
Definitions
The elements of a matrix are referred to by using two sub-indices: the first one
representing the row, and the second one the column where the element is located. A
matrix A with n rows and m columns can be represented by

A = [ a1,1     a1,2     ...  a1,m-1    a1,m
      a2,1     a2,2     ...  a2,m-1    a2,m
      ...      ...      ...  ...       ...
      an-1,1   an-1,2   ...  an-1,m-1  an-1,m
      an,1     an,2     ...  an,m-1    an,m  ]

Thus, a generic element of matrix A belonging in row i and column j will be written as ai,j
or aij. The matrix A, itself, can be written in a simplified form as

Anxm = [aij].
A matrix having the same number of rows and columns is called a square matrix. The
elements of a square matrix with equal sub-indices, i.e., a11, a22, ..., ann, belong to
the matrix's main diagonal.

A diagonal matrix is a square matrix having non-zero elements only in the main
diagonal. An example of a 3x3 diagonal matrix is:

[12.5  0    0
 0     9.2  0
 0     0    0.75]

A diagonal matrix whose main diagonal elements are all equal to 1.0 is known as an
identity matrix, because multiplying I by any matrix results in the same matrix, i.e.,
IA = AI = A. The identity matrix is typically given the symbol I, e.g.,

I3x3 = [1 0 0
        0 1 0
        0 0 1]

The following 5x5 matrix, whose non-zero elements are confined to the main diagonal
and the diagonals immediately adjacent to it, is an example of a banded (tridiagonal)
matrix:

[4    2.5  0    0   0
 2    3.5  2    0   0
 0    2    6.5  2   0
 0    0    3    4   3
 0    0    0    0   5]
Matrix operations
The transpose of a matrix results from exchanging rows for columns and columns for
rows. Therefore, given the matrix Anxm = [aij], of n rows and m columns, its transpose
matrix is (A')mxn = [a'ij], of m rows and n columns, such that a'ij = aji, (i = 1, 2, ..., m;
j = 1, 2, ..., n).
Consider the matrices Anxm = [aij], Bnxm = [bij], and Cnxm = [cij]. The operations of
addition, subtraction, and multiplication by a scalar are defined as:

Addition:       Cnxm = Anxm + Bnxm   implies   cij = aij + bij.

Subtraction:    Cnxm = Anxm - Bnxm   implies   cij = aij - bij.

Multiplication by a scalar, k:   Cnxm = k*Anxm   implies   cij = k*aij.
Matrix multiplication requires that the number of columns of the first matrix be equal to
the number of rows of the second matrix. In other words, the only matrix
multiplication allowed is of the form

Anxm * Bmxp = Cnxp,

with the elements of the matrix product given by

cij = sum(aik*bkj, k = 1, 2, ..., m).

Schematically, the element cij of the matrix product Cnxp = Anxm*Bmxp results from
multiplying, term by term, the elements of row i of A with the elements of column j of B,
and adding the products.
__________________________________________________________________________________
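The element-by-element definition of the matrix product above can be cross-checked outside SCILAB. The following Python sketch implements the triple-loop formula directly (the matrices used here are illustrative choices, not taken from the text):

```python
# Element-wise definition of the matrix product C(n x p) = A(n x m) * B(m x p):
# c[i][j] = sum over k of a[i][k] * b[k][j]

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "columns of A must equal rows of B"
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            C[i][j] = sum(A[i][k] * B[k][j] for k in range(m))
    return C

A = [[1, 2, 3],
     [4, 5, 6]]          # 2 x 3
B = [[7, 8],
     [9, 10],
     [11, 12]]           # 3 x 2
print(matmul(A, B))      # [[58, 64], [139, 154]]
```

Note that the dimension check mirrors the rule stated above: the inner dimensions (columns of A, rows of B) must agree.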
Enter the following matrices and store them under the names suggested. Notice that the names of
the variables consist of an upper-case letter of the alphabet followed by two numbers. The
numbers represent the number of rows and columns that the matrix has. This way we can
purposely select some particular matrices to illustrate matrix operations that are, and that are
not, allowed. Thus, proceed to store the following variables:

-->A11 = 3; B11 = -2; C11 = 5;

-->A12 = [-5, 6]; B12 = [3, -2]; C12 = [-10, 20];

-->A21 = [-7; 3]; B21 = [3; 5]; C21 = [-2; 2];

-->A13 = [1, -2, 6]; B13 = [0, 3, -4]; C13 = [5, 3, -10];

-->A31 = [-10; 2; 5]; B31 = [3; -7; -2]; C31 = [0; 2; 6];

-->A22 = [-3, 0; 4, -6]; B22 = [5, -2; 5, 4]; C22 = [-1, 4; 8, 2];

-->A32 = [1, 0; 1, 2; 5, 2]; B32 = [9, 2; 3, 0; 6, -5]; C32 = [5, 8; 6, -7; 3, -2];

-->A23 = [8, 0, -1; 5, -2, 3]; B23 = [1, 0, 1; 0, 1, -1]; C23 = [2, -3, -5; 6, 4, -2];

-->A33 = [2, -1, 5; 0, 2, 1; -7, 2, -5]; B33 = [3, 1, 2; 0, 5, 2; -4, 2, 1]; C33 = [2, 1, 2; 3, -7, 0; 2, 1, 4];
Note: the use of a semi-colon instead of a comma or blank space suppresses the output in
SCILAB.
Addition and subtraction

-->A12 + C12
 ans  =
! - 15.    26. !

-->A12 - C12
 ans  =
!   5.  - 14. !

-->B12 - C12
 ans  =
!   13.  - 22. !

-->A21 - B21
 ans  =
! - 10. !
! - 2.  !

-->A21 - C21
 ans  =
! - 5. !
!   1. !

-->B21 - C21
 ans  =
!   5. !
!   3. !

-->A21 - (B21 - C21)
 ans  =
! - 12. !
!   0.  !

-->A22 - C22
 ans  =
! - 2.  - 4. !
! - 4.  - 8. !

-->B22 - C22
 ans  =
!   6.  - 6. !
! - 3.    2. !

-->A13 + C13
 ans  =
!   6.    1.  - 4. !

-->A13 - B13
 ans  =
!   1.  - 5.    10. !

-->A13 - C13
 ans  =
! - 4.  - 5.    16. !

-->B13 - C13
 ans  =
! - 5.    0.    6. !

-->A23 + C23
 ans  =
!   10.  - 3.  - 6. !
!   11.    2.    1. !

-->A23 - B23
 ans  =
!   7.    0.  - 2. !
!   5.  - 3.    4. !

-->A23 - C23
 ans  =
!   6.    3.    4. !
! - 1.  - 6.    5. !

-->B23 - C23
 ans  =
! - 1.    3.    6. !
! - 6.  - 3.    1. !

-->A31 - B31
 ans  =
! - 13. !
!   9.  !
!   7.  !

-->A31 - C31
 ans  =
! - 10. !
!   0.  !
! - 1.  !

-->B31 - C31
 ans  =
!   3.  !
! - 9.  !
! - 8.  !

-->A31 - (B31 - C31)
 ans  =
! - 13. !
!   11. !
!   13. !

-->A32 - C32
 ans  =
! - 4.  - 8. !
! - 5.    9. !
!   2.    4. !

-->B32 - C32
 ans  =
!   4.  - 6. !
! - 3.    7. !
!   3.  - 3. !

-->A33 + C33
 ans  =
!   4.    0.    7. !
!   3.  - 5.    1. !
! - 5.    3.  - 1. !

-->A33 - B33
 ans  =
! - 1.  - 2.    3. !
!   0.  - 3.  - 1. !
! - 3.    0.  - 6. !

-->A33 - C33
 ans  =
!   0.  - 2.    3. !
! - 3.    9.    1. !
! - 9.    1.  - 9. !

-->B33 - C33
 ans  =
!   1.    0.    0. !
! - 3.    12.   2. !
! - 6.    1.  - 3. !
Notes:
The subtraction A - B can be interpreted as A + (-B).
Addition is commutative, i.e., A + B = B + A, and associative, i.e., A+B+C = A+(B+C) = (A+B)+C.
Addition and subtraction can only be performed between matrices of the same dimensions.
Verify this by trying A23 + A21. You will get an error message indicating inconsistent
addition (the dimensions do not match).
Multiplication by a scalar
-->2*A11
 ans  =
    6.

-->-3*A12
 ans  =
!   15.  - 18. !

-->-1*A21
 ans  =
!   7. !
! - 3. !

-->5*A22
 ans  =
! - 15.    0.  !
!   20.  - 30. !

-->-2*A13
 ans  =
! - 2.    4.  - 12. !

-->10*A31
 ans  =
! - 100. !
!   20.  !
!   50.  !

-->-5*A23
 ans  =
! - 40.    0.    5.  !
! - 25.    10. - 15. !

-->2*A32
 ans  =
!   2.    0. !
!   2.    4. !
!   10.   4. !

-->1.5*A33
 ans  =
!   3.    - 1.5    7.5 !
!   0.      3.     1.5 !
! - 10.5    3.   - 7.5 !
Matrix multiplication
-->A11*B11
 ans  =
  - 6.

-->B11*A11
 ans  =
  - 6.

-->A12*B21
 ans  =
    15.

-->B21*A12
 ans  =
! - 15.    18. !
! - 25.    30. !

-->A12*B22
 ans  =
!   5.    34. !

-->A21*B12
 ans  =
! - 21.    14. !
!   9.   - 6.  !

-->B12*A21
 ans  =
  - 27.

-->A22*B21
 ans  =
! - 9.  !
! - 18. !

-->A22*B22
 ans  =
! - 15.    6.  !
! - 10.  - 32. !

-->B22*A22
 ans  =
! - 23.    12. !
!   1.   - 24. !

-->A13*B31
 ans  =
    5.

-->B31*A13
 ans  =
!   3.  - 6.    18.  !
! - 7.    14. - 42.  !
! - 2.    4.  - 12.  !

-->A13*B32
 ans  =
!   39.  - 28. !

-->A23*B31
 ans  =
!   26. !
!   23. !

-->A23*B32
 ans  =
!   66.    21. !
!   57.  - 5.  !

-->A32*B23
 ans  =
!   1.    0.    1. !
!   1.    2.  - 1. !
!   5.    2.    3. !

-->A23*B33
 ans  =
!   28.    6.    15. !
!   3.     1.    9.  !

-->A33*B31
 ans  =
!   3.   !
! - 16.  !
! - 25.  !

-->B33*A33
 ans  =
! - 8.     3.     6.  !
! - 14.    14.  - 5.  !
! - 15.    10.  - 23. !

Note: Some of the examples above illustrate the fact that Amxn*Bnxm is not equal to
Bnxm*Amxn. That is, multiplication, in general, is not commutative.
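The non-commutativity observed above is easy to reproduce in a few lines of Python (the 2x2 matrices here are illustrative choices, not the ones from the session):

```python
# Matrix multiplication is not commutative: A*B generally differs from B*A.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]    # swaps columns when applied on the right
AB = matmul(A, B)       # [[2, 1], [4, 3]]
BA = matmul(B, A)       # [[3, 4], [1, 2]]
print(AB != BA)         # True: the two products differ
```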
-->A12*(B21*C12)
 ans  =
! - 150.    300. !

-->(A12*B21)*C12
 ans  =
! - 150.    300. !

-->A22*(B21*C12)
 ans  =
!   90.   - 180. !
!   180.  - 360. !

-->(A22*B21)*C12
 ans  =
!   90.   - 180. !
!   180.  - 360. !

-->A32*(B23*C32)
 ans  =
!   8.     6.  !
!   14.  - 4.  !
!   46.    20. !

-->(A32*B23)*C32
 ans  =
!   8.     6.  !
!   14.  - 4.  !
!   46.    20. !

Note: The examples above illustrate the fact that multiplication is associative:
Amxn*Bnxp*Cpxq = Amxn*(Bnxp*Cpxq) = (Amxn*Bnxp)*Cpxq.
Inverse matrices
Inverse matrices exist only for square matrices. In SCILAB, inverse matrices are calculated by
using the function inv. The standard notation for the inverse matrix of A is A^(-1). Try the
following exercises:
-->inv(A11)
 ans  =
    .3333333

-->inv(B11)
 ans  =
  - .5

-->inv(C11)
 ans  =
    .2

-->inv(A22)
 ans  =
! - .3333333    0.         !
! - .2222222  - .1666667   !

-->inv(B22)
 ans  =
!   .1333333    .0666667   !
! - .1666667    .1666667   !

-->inv(C22)
 ans  =
! - .0588235    .1176471   !
!   .2352941    .0294118   !

-->inv(A33)
 ans  =
! - .2264151    .0943396  - .2075472 !
! - .1320755    .4716981  - .0377358 !
!   .2641509    .0566038    .0754717 !

-->inv(B33)
 ans  =
!   .0285714    .0857143  - .2285714 !
! - .2285714    .3142857  - .1714286 !
!   .5714286  - .2857143    .4285714 !

-->inv(C33)
 ans  =
!   .8235294    .0588235  - .4117647 !
!   .3529412  - .1176471  - .1764706 !
! - .5          0.          .5       !
Note: Some matrices have no inverse; for example, matrices with a row or column whose
elements are all zeros, or matrices with a row or column proportional to another row or
column, respectively. These are called singular matrices.
Try the following examples:

-->A = [ 1 2 3; 0 0 0; -2 5 2], inv(A)
 A  =
!   1.    2.    3. !
!   0.    0.    0. !
! - 2.    5.    2. !
!--error 19
singular matrix

-->B = [ 1 2 3; 2 4 6; 5 2 -1], inv(B)
 B  =
!   1.    2.    3. !
!   2.    4.    6. !
!   5.    2.  - 1. !
!--error 19
singular matrix
-->inv(A22)*A22
 ans  =
!   1.    0. !
!   0.    1. !

-->A22*inv(A22)
 ans  =
!   1.    0. !
!   0.    1. !

-->inv(B22)*B22
 ans  =
!   1.    0. !
!   0.    1. !

-->B22*inv(B22)
 ans  =
!   1.    0. !
!   0.    1. !

-->inv(C22)*C22
 ans  =
!   1.    0. !
!   0.    1. !

-->C22*inv(C22)
 ans  =
!   1.    0. !
!   0.    1. !

-->inv(A33)*A33
 ans  =
!   1.    0.    0. !
!   0.    1.    0. !
!   0.    0.    1. !

-->A33*inv(A33)
 ans  =
!   1.           0.    0. !
!   0.           1.    0. !
! - 2.220E-16    0.    1. !

-->inv(B33)*B33
 ans  =
!   1.    0.    0. !
!   0.    1.    0. !
!   0.    0.    1. !

-->B33*inv(B33)
 ans  =
!   1.    0.    0. !
!   0.    1.    0. !
!   0.    0.    1. !

-->inv(C33)*C33
 ans  =
!   1.    0.    0. !
!   0.    1.    0. !
!   0.    0.    1. !

-->C33*inv(C33)
 ans  =
!   1.    0.    0. !
!   0.    1.    0. !
!   0.    0.    1. !
The last set of exercises verifies the properties of inverse matrices, i.e.,
A*A^(-1) = A^(-1)*A = I, where I is the identity matrix.
Creating identity matrices in SCILAB

An identity matrix can be created with the function eye:

-->I33 = eye(3,3)
 I33  =
!   1.    0.    0. !
!   0.    1.    0. !
!   0.    0.    1. !
If a square matrix A is already defined, you can obtain an identity matrix of the same
dimensions by using eye(A), for example:

-->A = [2, 3, 4, 2; -1, 2, 3, 1; 5, 4, 2, -2; 1, 1, 0, 1]
 A  =
!   2.    3.    4.    2. !
! - 1.    2.    3.    1. !
!   5.    4.    2.  - 2. !
!   1.    1.    0.    1. !

-->eye(A)
 ans  =
!   1.    0.    0.    0. !
!   0.    1.    0.    0. !
!   0.    0.    1.    0. !
!   0.    0.    0.    1. !
The function call eye(n,m) produces an nxm matrix whose terms aii = 1, all other terms being
zero. For example,

-->A = eye(3,2)
 A  =
!   1.    0. !
!   0.    1. !
!   0.    0. !

-->A = eye(2,3)
 A  =
!   1.    0.    0. !
!   0.    1.    0. !
Note: the name of the function eye comes from the name of the letter I typically used to refer
to the identity matrix.
__________________________________________________________________________________
The Vandermonde matrix

A Vandermonde matrix is built from the components of a vector [x1 x2 ... xn] by raising
them to increasing powers:

V = [ 1   x1   x1^2   ...   x1^(n-1)
      1   x2   x2^2   ...   x2^(n-1)
      .   .    .      ...   .
      1   xn   xn^2   ...   xn^(n-1) ]
To produce a Vandermonde matrix given an input vector x, we can use the following function
vandermonde:
function [v] = vandermonde(x)
//Produces the Vandermonde matrix based on vector x.
//If vector x has the components [x(1) x(2) ... x(n)],
//the corresponding Vandermonde matrix is written as:
//
//  [ 1   x(1)   x(1)^2   x(1)^3   ...   x(1)^(n-1) ]
//  [ 1   x(2)   x(2)^2   x(2)^3   ...   x(2)^(n-1) ]
//  [ .    .       .        .      ...       .      ]
//  [ .    .       .        .      ...       .      ]
//  [ 1   x(n)   x(n)^2   x(n)^3   ...   x(n)^(n-1) ]
//
[nx,mx] = size(x);
if nx<>1 & mx<>1 then
  error('Function vandermonde - input must be a vector');
  abort;
end;
if nx == 1 then
  u = x';    // row vector: transpose into a column
  n = mx;
else
  u = x;
  n = nx;
end;
// u = column vector = [x(1) x(2) ... x(n)]'
// n = number of elements in u
// v = first column of Vandermonde matrix (all ones)
v = ones(n,1);
for k = 1:(n-1)
  v = [v, u.^k];   // append the column of k-th element-wise powers
end;
//end function
-->x = [2, -3, 5, -2]
 x  =
!   2.  - 3.    5.  - 2. !

-->Vx = vandermonde(x)
 Vx  =
!   1.    2.     4.     8.   !
!   1.  - 3.     9.   - 27.  !
!   1.    5.     25.    125. !
!   1.  - 2.     4.   - 8.   !
So, you wonder, what is the utility of the Vandermonde matrix? One practical application,
which we will be utilizing in a future chapter, is the fact that the Vandermonde matrix
corresponding to a vector [x1 x2 ... xn], which is in turn associated with a vector [y1 y2 ... yn],
can be used to determine a polynomial fitting of the form

y = b0 + b1*x + b2*x^2 + ... + b(n-1)*x^(n-1)

to the data sets (x,y).
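This polynomial-fitting use of the Vandermonde matrix can be sketched in Python with NumPy (assuming NumPy is available; the data points are an illustrative choice sampled from y = 1 + x^2):

```python
import numpy as np

# Fit y = b0 + b1*x + b2*x^2 through three points by solving V*b = y,
# where V is the Vandermonde matrix of the x-values (increasing powers).
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 5.0, 10.0])          # samples of y = 1 + x^2
V = np.vander(x, increasing=True)       # columns: 1, x, x^2
b = np.linalg.solve(V, y)
print(b)                                # approximately [1. 0. 1.]
```

With n distinct x-values the Vandermonde matrix is invertible, so the interpolating polynomial of degree n-1 is unique.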
The Hilbert matrix

The Hilbert matrix of order n is the nxn matrix Hn = [hjk] whose elements are

hjk = 1/(j + k - 1).

The Hilbert matrix has application in numerical curve fitting by the method of least squares.
In SCILAB, a Hilbert matrix can be generated by first using the function testmatrix with the
option 'hilb' and the size n. For example, for n = 5, we have:
-->H_inv = testmatrix('hilb',5)
 H_inv  =
!   25.     - 300.      1050.     - 1400.      630.    !
! - 300.      4800.   - 18900.      26880.   - 12600.  !
!   1050.   - 18900.    79380.    - 117600.    56700.  !
! - 1400.     26880.  - 117600.     179200.  - 88200.  !
!   630.    - 12600.    56700.    - 88200.     44100.  !
This function call produces the inverse of the Hilbert matrix. Use the function inv to obtain
the actual Hilbert matrix, i.e.,
-->H = inv(H_inv)
 H  =
!   1.          .5          .3333333    .25         .2        !
!   .5          .3333333    .25         .2          .1666667  !
!   .3333333    .25         .2          .1666667    .1428571  !
!   .25         .2          .1666667    .1428571    .125      !
!   .2          .1666667    .1428571    .125        .1111111  !
Magic squares
The function testmatrix can also produce matrices corresponding to magic squares, i.e.,
matrices such that the sums of its rows and columns, as well as its diagonals, produce the same
result. For example, a 3x3 magic square is obtained from:
-->A = testmatrix('magic',3)
 A  =
!   8.    1.    6. !
!   3.    5.    7. !
!   4.    9.    2. !
The sum of elements in each column of the magic square is calculated as follows:
-->sum(A(:,1))
ans =
15.
-->sum(A(:,2))
ans =
15.
-->sum(A(:,3))
ans =
15.
Download at InfoClearinghouse.com
22
The sum of the elements in the main diagonal is calculated using the function trace:
-->trace(A)
ans =
15.
Finally, the sum of the elements in the second main diagonal of the matrix is:
-->A(3,1)+A(2,2)+A(1,3)
ans =
15.
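The defining property of the magic square can be verified all at once with a short Python sketch, using the same 3x3 magic square produced above:

```python
# Verify the magic-square property: every row, every column, and both
# diagonals of the 3x3 magic square sum to the same value (15).
A = [[8, 1, 6],
     [3, 5, 7],
     [4, 9, 2]]

row_sums = [sum(row) for row in A]
col_sums = [sum(col) for col in zip(*A)]
diag = sum(A[i][i] for i in range(3))         # main diagonal (the trace)
anti = sum(A[2 - i][i] for i in range(3))     # second (anti-) diagonal
print(row_sums, col_sums, diag, anti)         # [15, 15, 15] [15, 15, 15] 15 15
```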
A square matrix is symmetric if it is equal to its transpose, i.e., if aij = aji. The following
matrices, for example, are symmetric:

A = [2 3 5; 3 6 4; 5 4 2],   C = [1.2 1.3 1.5; 1.3 2 2.4; 1.5 2.4 5]

Any square matrix Bnxn = [bij] can be written as the sum of a symmetric and an
anti-symmetric matrix. Because

B = (1/2)*(B + B') + (1/2)*(B - B'),

we can write B = S + AS, where S = (B + B')/2 is symmetric and AS = (B - B')/2 is
anti-symmetric. Therefore, consider the matrix

B = [2 -1 3; 5 4 -2; 6 3 -10]

defined earlier, and use the following SCILAB commands to find its symmetric and
anti-symmetric components:
-->B = [2, -1, 3; 5, 4, -2; 6, 3, -10];

-->S = (B + B')/2
 S  =
!   2.      2.      4.5  !
!   2.      4.      .5   !
!   4.5     .5    - 10.  !

-->AS = (B - B')/2
 AS  =
!   0.    - 3.    - 1.5  !
!   3.      0.    - 2.5  !
!   1.5     2.5     0.   !

-->S + AS
 ans  =
!   2.    - 1.      3.   !
!   5.      4.    - 2.   !
!   6.      3.    - 10.  !
This decomposition of a square matrix into its symmetric and anti-symmetric parts is commonly
used in continuum mechanics to determine such effects as normal and shear strains or strain
rates and rotational components.
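The decomposition above can be reproduced in plain Python with the same matrix B, checking each defining property along the way:

```python
# Decompose B into its symmetric part S = (B + B')/2 and its
# anti-symmetric part AS = (B - B')/2, and verify that B = S + AS.
B = [[2, -1, 3],
     [5, 4, -2],
     [6, 3, -10]]
n = len(B)

S  = [[(B[i][j] + B[j][i]) / 2 for j in range(n)] for i in range(n)]
AS = [[(B[i][j] - B[j][i]) / 2 for j in range(n)] for i in range(n)]

assert all(S[i][j] == S[j][i] for i in range(n) for j in range(n))     # symmetric
assert all(AS[i][j] == -AS[j][i] for i in range(n) for j in range(n))  # anti-symmetric
assert all(S[i][j] + AS[i][j] == B[i][j] for i in range(n) for j in range(n))
print(S)   # [[2.0, 2.0, 4.5], [2.0, 4.0, 0.5], [4.5, 0.5, -10.0]]
```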
Row vectors, column vectors, and matrices can be distinguished with the function size,
which returns the number of rows and columns of its argument:

-->rv = [1, 3, 5, 7, 9, 11]
 rv  =
!   1.    3.    5.    7.    9.    11. !

-->cv = [2; 4; 6; 8; 10; 12]
 cv  =
!   2.  !
!   4.  !
!   6.  !
!   8.  !
!   10. !
!   12. !

-->size(rv)
 ans  =
!   1.    6. !

-->size(cv)
 ans  =
!   6.    1. !

-->M = [2,3;-2,1;5,4;1,3]    // A matrix
 M  =
!   2.    3. !
! - 2.    1. !
!   5.    4. !
!   1.    3. !

-->size(M)
 ans  =
!   4.    2. !
The function rand produces matrices of uniformly distributed random numbers between 0
and 1, for example:

-->rand(3,5)
 ans  =
!   .2113249    .3303271    .8497452    .0683740    .7263507  !
!   .7560439    .6653811    .6857310    .5608486    .1985144  !
!   .0002211    .6283918    .8782165    .6623569    .5442573  !
If you want to generate a matrix of random integer numbers between 0 and 10, combine
function rand with function int as follows:

-->A = int(10*rand(4,2))
 A  =
!   2.    6. !
!   2.    3. !
!   2.    9. !
!   8.    2. !
Numerical operations involving elements of matrix A and of a vector v (defined previously)
are shown next:

-->A(2,1) + v(2), A(3,2)
 ans  =
    10.
 ans  =
    9.

With the colon operator as the first sub-index, entire columns are extracted from matrix A:

-->A(:,1), A(:,2)
 ans  =
!   2. !
!   2. !
!   2. !
!   8. !
 ans  =
!   6. !
!   3. !
!   9. !
!   2. !
The next two statements, with the colon operator as the second sub-index, extract the first
and second rows from matrix A:
-->A(1,:), A(2,:)
 ans  =
!   2.    6. !
 ans  =
!   2.    3. !
With the row vectors u1 = [2, 3, -5], u2 = [7, -2, 1], and u3 = [2, 2, -1], a 1x9 row
vector composed of the elements of u1, u2, and u3 is put together by using:

-->[u1, u2, u3]
 ans  =
!   2.    3.  - 5.    7.  - 2.    1.    2.    2.  - 1. !

If the vector names are separated by semi-colons, the vectors are added as rows:

-->[u1; u2; u3]
 ans  =
!   2.    3.  - 5. !
!   7.  - 2.    1. !
!   2.    2.  - 1. !

The transposed vectors u1', u2', and u3' are column vectors. Therefore, the matrix that
results from listing them side by side is a 3x3 matrix whose columns are those vectors:

-->[u1' u2' u3']
 ans  =
!   2.    7.    2. !
!   3.  - 2.    2. !
! - 5.    1.  - 1. !

If semi-colons are placed between the column vectors the result is a 9x1 column vector:

-->[u1'; u2'; u3']
 ans  =
!   2. !
!   3. !
! - 5. !
!   7. !
! - 2. !
!   1. !
!   2. !
!   2. !
! - 1. !
A matrix can also be built by appending row vectors one at a time:

-->B = [u1]
 B  =
!   2.    3.  - 5. !

-->B = [B; u2]
 B  =
!   2.    3.  - 5. !
!   7.  - 2.    1. !

-->B = [B; u3]
 B  =
!   2.    3.  - 5. !
!   7.  - 2.    1. !
!   2.    2.  - 1. !
In the next example, the row vectors are added one at a time to form a single row vector:

-->C = []; C = [C u1]
 C  =
!   2.    3.  - 5. !

-->C = [C u2]
 C  =
!   2.    3.  - 5.    7.  - 2.    1. !

-->C = [C u3]
 C  =
!   2.    3.  - 5.    7.  - 2.    1.    2.    2.  - 1. !
A matrix can be built column by column in the same way:

-->D = []; D = [D u1']
 D  =
!   2. !
!   3. !
! - 5. !

-->D = [D u2']
 D  =
!   2.    7. !
!   3.  - 2. !
! - 5.    1. !

-->D = [D u3']
 D  =
!   2.    7.    2. !
!   3.  - 2.    2. !
! - 5.    1.  - 1. !
Finally, to construct a single column vector out of three column vectors, the composing column
vectors are appended one at a time:

-->E = []; E = [E; u1']; E = [E; u2']; E = [E; u3']
 E  =
!   2. !
!   3. !
! - 5. !
!   7. !
! - 2. !
!   1. !
!   2. !
!   2. !
! - 1. !
-->B
 B  =
!   2.    3.  - 5. !
!   7.  - 2.    1. !
!   2.    2.  - 1. !

-->B(2,1) = 0
 B  =
!   2.    3.  - 5. !
!   0.  - 2.    1. !
!   2.    2.  - 1. !

-->B(3,2) = -10
 B  =
!   2.    3.   - 5. !
!   0.  - 2.     1. !
!   2.  - 10.  - 1. !
Notice that after each assignment statement the entire matrix is printed.

If you need to replace entire rows or columns of a matrix by a vector of the proper length, you
can build the new matrix as illustrated in the examples below. In the first example, vector u1
replaces the second row of matrix B:
-->u1
 u1  =
!   2.    3.  - 5. !

-->[B(1,:); u1; B(3,:)]
 ans  =
!   2.    3.   - 5. !
!   2.    3.   - 5. !
!   2.  - 10.  - 1. !
In this second example we reconstruct the original matrix B and then replace its first row with
vector u2:

-->B = [u1; u2; u3]
 B  =
!   2.    3.  - 5. !
!   7.  - 2.    1. !
!   2.    2.  - 1. !

-->[u2; B(2,:); B(3,:)]
 ans  =
!   7.  - 2.    1. !
!   7.  - 2.    1. !
!   2.    2.  - 1. !
To calculate the sum or product of rows of a matrix use the qualifier 'r' in functions sum and
prod (i.e., adding or multiplying by rows):

-->sum(B,'r')
 ans  =
!   11.    3.  - 5. !

-->prod(B,'r')
 ans  =
!   28.  - 12.    5. !

To obtain the sum or product of columns of a matrix use the qualifier 'c' in sum and prod (i.e.,
adding or multiplying by columns):

-->sum(B,'c')
 ans  =
!   0. !
!   6. !
!   3. !

-->prod(B,'c')
 ans  =
! - 30. !
! - 14. !
! - 4.  !
Systems of linear equations

A system of n linear equations in m unknowns can be written as

a1,1*x1 + a1,2*x2 + a1,3*x3 + ... + a1,m-1*x(m-1) + a1,m*xm = b1,
a2,1*x1 + a2,2*x2 + a2,3*x3 + ... + a2,m-1*x(m-1) + a2,m*xm = b2,
.
.
.
a(n-1),1*x1 + a(n-1),2*x2 + a(n-1),3*x3 + ... + a(n-1),m-1*x(m-1) + a(n-1),m*xm = b(n-1),
an,1*x1 + an,2*x2 + an,3*x3 + ... + an,m-1*x(m-1) + an,m*xm = bn.

This system of linear equations can be written as a matrix equation,

Anxm * xmx1 = bnx1,

if we define the following matrices:

A = [ a11  a12  ...  a1m
      a21  a22  ...  a2m
      ...  ...  ...  ...
      an1  an2  ...  anm ],   x = [x1; x2; ...; xm],   b = [b1; b2; ...; bn].

Using the matrix equation Ax = b, we can solve for x by using the function linsolve. Note that
linsolve expects the system in the homogeneous form Ax + c = 0, so it requires as arguments
the matrix A and the vector c = -b. Some examples are shown below:
Case 1. A system with the same number of equations and unknowns - unique solution exists:

If the number of unknowns is the same as the number of equations, the solution may be
unique, in which case the null space returned by linsolve is empty. Consider the following
case involving three equations with three unknowns:

2*x1 + 3*x2 - 5*x3 - 13 = 0,
x1 - 3*x2 + 8*x3 + 13 = 0,
2*x1 - 2*x2 + 4*x3 + 6 = 0.

This system can be written as the matrix equation Ax + c = 0, if

A = [2 3 -5; 1 -3 8; 2 -2 4],   x = [x1; x2; x3],   and   c = [-13; 13; 6].

If a unique solution exists it represents the point of intersection of the three planes in the
coordinate system (x1, x2, x3). Using function linsolve to obtain a solution we write:

-->A = [2, 3, -5; 1, -3, 8; 2, -2, 4], c = [-13; 13; 6]
 A  =
!   2.    3.  - 5. !
!   1.  - 3.    8. !
!   2.  - 2.    4. !
 c  =
! - 13. !
!   13. !
!   6.  !

-->[x0,nsA] = linsolve(A,c)
 nsA  =
    []
 x0  =
!   1. !
!   2. !
! - 1. !

The null space returned for A is empty, indicating a unique solution. To verify that the
solution satisfies the system of equations we can use:

-->A*x0+c
 ans  =
    1.0E-14 *
!   .1776357 !
!   0.       !
!   0.       !

Although the resulting vector, A*x0+c, is not exactly zero, it is small enough (in the order of
10^-14) to indicate that the solution satisfies the linear system.
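The solution x0 = [1; 2; -1] can also be checked by hand, or with a few lines of Python substituting it into the three equations of Case 1:

```python
# Check that x0 = [1, 2, -1] satisfies the system A*x + c = 0 of Case 1.
A = [[2, 3, -5],
     [1, -3, 8],
     [2, -2, 4]]
c = [-13, 13, 6]
x0 = [1, 2, -1]

# Residual of each equation: row . x0 + c_i (all should vanish)
residual = [sum(a * x for a, x in zip(row, x0)) + ci for row, ci in zip(A, c)]
print(residual)   # [0, 0, 0]
```

With exact integers there is no 10^-14 round-off: the residual is exactly zero.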
Case 2. A system with the same number of equations and unknowns - no unique solution exists:

A system of linear equations with the same number of equations and unknowns does not
always have a unique solution, as illustrated by the following system:

2*x1 + 3*x2 - 5*x3 - 13 = 0,
2*x1 + 3*x2 - 5*x3 + 13 = 0,
2*x1 - 2*x2 + 4*x3 + 6 = 0.

The system of linear equations can be written as the matrix equation Ax + c = 0, if

A = [2 3 -5; 2 3 -5; 2 -2 4],   x = [x1; x2; x3],   and   c = [-13; 13; 6].

-->A = [2, 3, -5; 2, 3, -5; 2, -2, 4], c = [-13; 13; 6]
 A  =
!   2.    3.  - 5. !
!   2.    3.  - 5. !
!   2.  - 2.    4. !
 c  =
! - 13. !
!   13. !
!   6.  !

-->[x0,nsA] = linsolve(A,c)
WARNING:Conflicting linear constraints!
 nsA  =
    []
 x0  =
    []

The function linsolve fails to produce a solution because the first two equations represent
parallel planes in Cartesian coordinates. This failure is reported as a warning indicating
conflicting linear constraints.
Case 3 - A system with more unknowns than equations

The system of linear equations

2*x1 + 3*x2 - 5*x3 + 10 = 0,
x1 - 3*x2 + 8*x3 - 85 = 0,

can be written as the matrix equation Ax + c = 0, if

A = [2 3 -5; 1 -3 8],   x = [x1; x2; x3],   and   c = [10; -85].

This system has more unknowns than equations; therefore, it is not uniquely determined.
Such a system is referred to as an under-determined system.

We can visualize the linear system by realizing that each of the linear equations represents a
plane in the three-dimensional Cartesian coordinate system (x1, x2, x3). The solution to the
system of equations shown above will be the intersection of two planes in space. We know,
however, that the intersection of two (non-parallel) planes is a straight line, and not a single
point. Therefore, there is more than one point that satisfies the system. In that sense, the
system is not uniquely determined.

Applying the function linsolve to this system results in:

-->A = [2, 3, -5; 1, -3, 8], c = [10; -85]
 A  =
!   2.    3.  - 5. !
!   1.  - 3.    8. !
 c  =
!   10. !
! - 85. !

-->[x0,nsA] = linsolve(A,c)
 nsA  =
! - .3665083 !
!   .8551861 !
!   .3665083 !
 x0  =
!   15.373134  !
!   2.4626866  !
!   9.6268657  !

In this case, linsolve returns values for both x0, the particular solution, and nsA, the null
space of A. There is no unique solution to the equation system; instead, a multiplicity of
solutions exists, given by x = x0 + nsA*t, where t can take any real value. For example, for
t = 10, we calculate a solution that we call x1:

-->x1 = x0 + nsA*10
 x1  =
!   11.708051 !
!   11.014548 !
!   13.291949 !
Case 4 - A system with more equations than unknowns

The system of linear equations

x1 + 3*x2 - 15 = 0,
2*x1 - 5*x2 - 5 = 0,
-x1 + x2 - 22 = 0,

can be written as the matrix equation Ax + c = 0, if

A = [1 3; 2 -5; -1 1],   x = [x1; x2],   and   c = [-15; -5; -22].

This system has more equations than unknowns, and is referred to as an over-determined
system. The system does not have a single solution. Each of the linear equations in the
system presented above represents a straight line in a two-dimensional Cartesian coordinate
system (x1, x2). Unless two of the three equations in the system represent the same equation,
the three lines will have three different intersection points. For that reason, the solution is
not unique.

An attempt to solve this system of linear equations with function linsolve produces no result:

-->A = [1, 3; 2, -5; -1, 1], c = [-15; -5; -22]
 A  =
!   1.    3.  !
!   2.  - 5.  !
! - 1.    1.  !
 c  =
! - 15. !
! - 5.  !
! - 22. !

-->[x0,nsA] = linsolve(A,c)
WARNING:Conflicting linear constraints!
 nsA  =
    []
 x0  =
    []
Each equation of the over-determined system above represents a straight line in the (x1, x2)
plane. Thus, the solution point can be visualized as that point such that the sum of the
squares of the distances from the point perpendicular to each line is a minimum.

SCILAB provides the function leastsq to obtain the parameters that minimize the sum of
squares of the residuals of a function given a set of reference data. The residuals of a
least-square problem are the differences between the reference data and the data reproduced
by the function under consideration.

For the case of a system of linear equations the function to be minimized by the least-square
method is

f(x) = Ax + c,

where A and c describe the system Ax + c = 0. The parameters of the function sought in
this case are the elements of the least-square solution x. As will be explained in more
detail in a subsequent chapter, the method of least squares requires taking the derivatives of
the function with respect to the parameters sought. For this case, the derivative of interest is

g(x) = df/dx = A.

This derivative is referred to as the gradient of the function.

A general call to function leastsq in SCILAB is:

[SSE,xsol] = leastsq([imp,] fun [,Dfun], x0)

where SSE is the minimum value of the sum of the squared residuals (SSE stands for the Sum
of Squared Errors; error is equivalent to residual in this context), xsol is the point that
minimizes the sum of squared residuals, imp is an optional value that determines the type of
output provided by the function, fun is the name of the function under consideration, Dfun is
the gradient of the function under consideration, and x0 is an initial guess of the least-square
solution.

We will apply function leastsq to the over-determined system of linear equations presented
earlier, namely,

A = [1 3; 2 -5; -1 1],   x = [x1; x2],   and   c = [-15; -5; -22].
Using SCILAB, we first enter the matrix A and vector c, as well as an initial guess for the
solution, vector x0, as follows:

-->A = [1,3;2,-5;-1,1], c = [-15;-5;-22]
 A  =
!   1.    3.  !
!   2.  - 5.  !
! - 1.    1.  !
 c  =
! - 15. !
! - 5.  !
! - 22. !

-->x0 = [1;1]
 x0  =
!   1. !
!   1. !

In order to use leastsq we need to define the functions f(x) = Ax + c and g(x) = A, as follows:

-->deff('y = f(x,A,c)','y = A*x+c')

-->deff('yp = g(x,A,c)','yp = A')

Now, we invoke function leastsq with the appropriate arguments to obtain our least-square
solution as:

-->[SSE,xsol] = leastsq(f,g,x0)
 xsol  =
!   3.0205479 !
!   1.890411  !
 SSE  =
    645.5411

The least-square solution to the system of linear equations is, therefore, the vector
xsol = [3.02, 1.89]', and the sum of squared errors is SSE = 645.5411. You can check that,
by definition, SSE = |A*xsol+c|^2, by using:

-->norm(A*xsol+c)^2
 ans  =
    645.5411
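The same least-square solution can be cross-checked in Python by solving the normal equations A'A x = A'(-c) directly (a minimal sketch of the standard derivation, not SCILAB's internal algorithm):

```python
# Least-square solution of the over-determined system A*x + c = 0
# via the normal equations A'A x = A'(-c).
A = [[1, 3], [2, -5], [-1, 1]]
c = [-15, -5, -22]

# Build the 2x2 normal system
AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)] for i in range(2)]
Atb = [sum(A[k][i] * (-c[k]) for k in range(3)) for i in range(2)]

# Solve the 2x2 system by Cramer's rule
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
x1 = (Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1]) / det
x2 = (AtA[0][0] * Atb[1] - Atb[0] * AtA[1][0]) / det

# Sum of squared residuals
SSE = sum((sum(a * x for a, x in zip(row, (x1, x2))) + ci) ** 2
          for row, ci in zip(A, c))
print(x1, x2, SSE)   # approximately 3.0205479, 1.890411, 645.5411
```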
The procedure can be collected into a user-defined function, LSlinsolve (the opening lines of
the function, lost in the original listing, are restored here to match the variables used in its
body):

function [SSE,xsol] = LSlinsolve(A,c,x0)
//Least-square solution of the linear system A*x + c = 0,
//starting from the initial guess x0.
[nA,mA] = size(A);
[nx,mx] = size(x0);
[nc,mc] = size(c);
if nA <> nc | mA <> nx then
  error('LSlinsolve - incompatible dimensions');
  abort;
end;
deff('y00 = ff(xx,A,c)', 'y00 = A*xx+c');
deff('yp0 = gg(xx,A,c)', 'yp0 = A');
[SSE,xsol] = leastsq(ff,gg,x0);
//end of function

Using the same matrix A and vectors c and x0 from the previous example, we can use function
LSlinsolve to obtain the least-square solution to the linear system Ax + c = 0 as follows:

-->getf('LSlinsolve')

-->[SSE,xsol] = LSlinsolve(A,c,x0)
 xsol  =
!   3.0205479 !
!   1.890411  !
 SSE  =
    645.5411
Consider now the system of linear equations Ax = b with

A = [2 -5 1; 2 2 -3; 0 5 -2],   x = [x1; x2; x3],   and   b = [18; 23; -51].

The solution using the backward slash operator with SCILAB is obtained as follows:

-->A = [2, -5, 1; 2, 2, -3; 0, 5, -2], b = [18; 23; -51]
 A  =
!   2.  - 5.    1. !
!   2.    2.  - 3. !
!   0.    5.  - 2. !
 b  =
!   18. !
!   23. !
! - 51. !

-->x = A\b
 x  =
! - 48.333333 !
! - 35.666667 !
! - 63.666667 !

The inverse of A is

-->inv(A)
 ans  =
!   .9166667  - .4166667    1.0833333 !
!   .3333333  - .3333333    .6666667  !
!   .8333333  - .8333333    1.1666667 !

and the same solution can also be calculated by using the inverse explicitly:

-->x = inv(A)*b
 x  =
! - 48.333333 !
! - 35.666667 !
! - 63.666667 !
Characterizing a matrix
In the previous section we mentioned that, under certain conditions, a system of n linear
equations with n unknowns may not have a unique solution. In order to determine when such
situations occur, we can use certain measures or norms that characterize a matrix. Some of
those measures are the rank, norm, and determinant of the matrix. These and other ways of
characterizing a matrix are presented in this section.
An LU decomposition expresses a square matrix A as the product of a lower-triangular matrix
L and an upper-triangular matrix U, for example:

A = [ 2    4  - 2      [ 1    0    0     [ 2    4  - 2
      4    9  - 3    =   2    1    0       0    1    1    =  L * U
    - 2  - 3    7 ]    - 1    1    1 ]     0    0    4 ]

Such a decomposition can be obtained with SCILAB by using the function lu. A general call to
function lu is

[L,U] = lu(A)

where A is a square matrix, and L, U are as defined above. The function can also be called by
using

[L,U,P] = lu(A)

where P is a permutation matrix. When the permutation matrix P is included in the analysis,
the relationship between the various matrices is given by P*A = L*U.
To illustrate the use of the function lu we use the matrix A defined above:
-->A = [2,4,-2;4,9,-3;-2,-3,7]
 A  =
!   2.    4.  - 2. !
!   4.    9.  - 3. !
! - 2.  - 3.    7. !

-->[L,U] = lu(A)
 U  =
!   4.    9.          - 3.        !
!   0.    1.5           5.5       !
!   0.    0.            1.3333333 !
 L  =
!   .5  - .3333333      1.        !
!   1.    0.            0.        !
! - .5    1.            0.        !

Notice that, because lu performs row pivoting, the matrix L returned by the two-output call is
only a permuted lower-triangular matrix. Including the permutation matrix in the call makes
the triangular structure explicit:

-->[L1,U1,P1] = lu(A)
 P1  =
!   0.    1.    0. !
!   0.    0.    1. !
!   1.    0.    0. !
 U1  =
!   4.    9.          - 3.        !
!   0.    1.5           5.5       !
!   0.    0.            1.3333333 !
 L1  =
!   1.    0.            0.        !
! - .5    1.            0.        !
!   .5  - .3333333      1.        !

These matrices satisfy P1*A = L1*U1.
Consider the solution of the linear system Ax = b, with

A = [2 4 6; 3 -2 1; 4 2 -1],   x = [x1; x2; x3],   and   b = [14; -3; -4].

To solve the system of equations under consideration using function lu and the procedure
outlined above, use:

-->A = [2,4,6;3,-2,1;4,2,-1], b = [14;-3;-4]
 A  =
!   2.    4.    6. !
!   3.  - 2.    1. !
!   4.    2.  - 1. !
 b  =
!   14. !
! - 3.  !
! - 4.  !

-->[L,U,P] = lu(A)
 P  =
!   0.    0.    1. !
!   0.    1.    0. !
!   1.    0.    0. !
 U  =
!   4.     2.          - 1.    !
!   0.   - 3.5           1.75  !
!   0.     0.            8.    !
 L  =
!   1.     0.            0.    !
!   .75    1.            0.    !
!   .5   - .8571429      1.    !

-->c = P*b
 c  =
! - 4. !
! - 3. !
!   14. !

First we solve the lower-triangular system

L*y = c,  i.e.,  [1 0 0; 0.75 1 0; 0.5 -0.8571429 1]*[y1; y2; y3] = [-4; -3; 14].

This system is equivalent to the three equations:

y1 = -4
0.75*y1 + y2 = -3
0.5*y1 - 0.8571429*y2 + y3 = 14

This system of equations can be solved one equation at a time starting from the first one,
y1 = -4. The second equation produces y2 = -3 - 0.75*(-4) = 0. The third equation is used to
solve for y3 = 14 - 0.5*(-4) + 0.8571429*(0) = 16. Thus, the solution vector, y, is written as:

y = [-4; 0; 16].

Next, we proceed to solve the upper-triangular system

U*x = y,  i.e.,  [4 2 -1; 0 -3.5 1.75; 0 0 8]*[x1; x2; x3] = [-4; 0; 16],

which is equivalent to the three equations:

4*x1 + 2*x2 - x3 = -4
-3.5*x2 + 1.75*x3 = 0
8*x3 = 16

These equations can be solved one at a time, starting with the third equation, x3 = 16/8 = 2.
The second equation produces x2 = (-1.75)*(2)/(-3.5) = 1. Finally, the first equation produces
x1 = (-4 - 2*(1) + 2)/4 = -1. Thus, the solution is

x = [-1; 1; 2].
In summary, the LU decomposition of a square matrix A helps in the solution of the linear
system Ax = b, by generating two systems of equations that can be easily solved one at a time.
With SCILAB, however, we need not be concerned with solving linear systems through LU
decomposition. Instead we can use functions linsolve or the backward slash operator as
illustrated earlier.
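The two triangular solves described above can be sketched in Python, using the L, U, and P*b values from the example (forward substitution for L*y = c, then backward substitution for U*x = y):

```python
# Solve A x = b via the LU factors of the example: P*A = L*U, so
# A x = b becomes L y = P*b (forward) followed by U x = y (backward).
L = [[1.0,  0.0,       0.0],
     [0.75, 1.0,       0.0],
     [0.5, -3.0 / 3.5, 1.0]]    # -0.8571429 written exactly as -3/3.5
U = [[4.0,  2.0, -1.0],
     [0.0, -3.5,  1.75],
     [0.0,  0.0,  8.0]]
c = [-4.0, -3.0, 14.0]          # c = P*b

# Forward substitution: L y = c
y = [0.0] * 3
for i in range(3):
    y[i] = c[i] - sum(L[i][k] * y[k] for k in range(i))

# Backward substitution: U x = y
x = [0.0] * 3
for i in reversed(range(3)):
    x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, 3))) / U[i][i]

print(y, x)   # y = [-4.0, 0.0, 16.0], x = [-1.0, 1.0, 2.0]
```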
The singular value decomposition (SVD) of a matrix Amxn consists in writing

Amxn = Umxm * Smxn * V'nxn,

where U and V are orthogonal matrices, and S is a diagonal matrix. The diagonal elements of S
are called the singular values of A and are usually ordered so that si >= si+1, for i = 1, 2, ...,
n-1. The columns [uj] of U and [vj] of V are the corresponding singular vectors.
Example: determine the singular value decomposition of the matrix

      [  2  -1 ]
A  =  [ -3   2 ] .
      [  5   4 ]

Using SCILAB:

-->A = [2,-1;-3,2;5,4]
 A  =
!   2.  - 1. !
! - 3.    2. !
!   5.    4. !
-->[U,S,V] = svd(A)
 V  =
!   .8882616    .4593379 !
!   .4593379  - .8882616 !
 S  =
!   6.6487173    0.        !
!   0.           3.8463696 !
!   0.           0.        !
 U  =
!   .1981112    .4697774    .8602681 !
! - .2626234  - .8201336    .5083402 !
!   .9443415  - .3266344  - .0391031 !

If a fourth output argument is requested in the call, SCILAB also returns the rank of the matrix:

-->[U,S,V,rank] = svd(A)
Warning :redefining function: rank
 rank  =
    2.
 V  =
!   .8882616    .4593379 !
!   .4593379  - .8882616 !
 S  =
!   6.6487173    0.        !
!   0.           3.8463696 !
!   0.           0.        !
 U  =
!   .1981112    .4697774    .8602681 !
! - .2626234  - .8201336    .5083402 !
!   .9443415  - .3266344  - .0391031 !
In the previous example, when the rank is included in the call to the function svd, you get a
message indicating that the function rank is being redefined (Warning :redefining
function: rank.) This occurs because SCILAB already provides a function rank and a new
version of the function gets loaded through svd. If we attempt to use the function rank with a
matrix or vector after using svd with the option rank included, we get an error message, i.e.,
--> rank(A)
         !--error    21
invalid index
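As an aside, the largest singular value can be estimated without a full SVD by power iteration on A'A. The following plain-Python sketch (our own helper function, not a SCILAB routine) reproduces the leading singular value of the example matrix:

```python
# Power iteration on A'A: the iterates converge to the dominant right
# singular vector, and ||A v|| then gives the largest singular value.
def largest_singular_value(A, iters=200):
    m, n = len(A), len(A[0])
    v = [1.0] * n
    for _ in range(iters):
        # w = A' (A v), i.e., one multiplication by A'A
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        w = [sum(A[i][j] * Av[i] for i in range(m)) for j in range(n)]
        nrm = sum(x * x for x in w) ** 0.5
        v = [x / nrm for x in w]
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
    return sum(x * x for x in Av) ** 0.5

s1 = largest_singular_value([[2, -1], [-3, 2], [5, 4]])
print(s1)   # approximately 6.6487173, matching the SVD output above
```

A real application would of course call a library SVD routine, as SCILAB does internally.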
Example: determine the rank of the matrix

      [ 3  -2   1 ]
A  =  [ 1   1  -2 ] .
      [ 3   5  -1 ]
The rank of this matrix calculated with SCILAB is:
-->A = [3,-2,1;1,1,-2;3,5,-1]
A =
! 3. - 2. 1. !
! 1. 1. - 2. !
! 3. 5. - 1. !
--> rank(A)
ans =
3.
The rank of the matrix is 3, i.e., the same as the number of rows or columns of matrix A.
When the rank of a square matrix is the same as the number of rows or columns of the matrix,
the corresponding system of linear equations, Ax = b or Ax+c = 0, has a unique solution. If
the rank of the matrix is smaller than the number of rows or columns of the square matrix, the
matrix is singular (i.e., it does not have an inverse), and the corresponding system does not
have a unique solution.
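A rank computation can be sketched as Gaussian elimination with partial pivoting, counting the usable pivots (a plain-Python illustration with an assumed tolerance; SCILAB's rank function is based on the singular values instead):

```python
# Numerical rank via Gaussian elimination with partial pivoting.
def matrix_rank(A, tol=1e-10):
    M = [row[:] for row in A]          # work on a copy
    n, m = len(M), len(M[0])
    rank = 0
    for col in range(m):
        if rank == n:
            break
        # choose the largest pivot in this column (partial pivoting)
        piv = max(range(rank, n), key=lambda r: abs(M[r][col]))
        if abs(M[piv][col]) <= tol:
            continue                   # no usable pivot in this column
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(rank + 1, n):
            f = M[r][col] / M[rank][col]
            for c in range(col, m):
                M[r][c] -= f * M[rank][c]
        rank += 1
    return rank

print(matrix_rank([[3, -2, 1], [1, 1, -2], [3, 5, -1]]))    # full rank: 3
print(matrix_rank([[1, 2, -5], [2, 4, -10], [-2, -2, 5]]))  # singular: 2
```

The second matrix is the singular example discussed next: its second row is twice the first, so only two pivots survive.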
Example: consider the system of linear equations Ax = b, with

      [  1   2   - 5 ]        [ x1 ]             [ 10 ]
A  =  [  2   4  - 10 ] , x =  [ x2 ] ,  and b =  [  5 ] .
      [ -2  -2     5 ]        [ x3 ]             [ 20 ]

-->A = [1,2,-5;2,4,-10;-2,-2,5], b = [10;5;20]
 A  =
!   1.    2.  -  5.  !
!   2.    4.  - 10.  !
! - 2.  - 2.     5.  !
 b  =
!   10. !
!    5. !
!   20. !
--> rank(A)
ans =
2.
The rank of matrix A is 2. Therefore, the matrix is singular. We can check that by using:
-->A\b
         !--error    19
singular matrix
The L1 norm or column norm of a matrix Amn = [aij] is defined as the largest column
sum of the matrix. The column sum is understood as the sum of the absolute values of
the elements in a given column. Thus,

|| A ||1 = max_j Σ (i=1..m) | aij | .

The infinity norm or row norm of a matrix Amn = [aij] is defined as the maximum row
sum of the matrix. Once more, this sum involves the absolute values of the elements.
Thus,

|| A ||inf = max_i Σ (j=1..n) | aij | .

The Frobenius norm of a matrix Amn = [aij] is the square root of the sum of the squares
of the absolute values of its elements, i.e.,

|| A ||F = sqrt( Σ (i=1..m) Σ (j=1..n) | aij |^2 ) .

The Euclidean norm of a matrix Amn = [aij] is defined as the largest singular value of
the matrix.
The p-norm of a vector vnx1 = [vi], or v1xn = [vi], is defined as

|| v ||p = ( Σ (i=1..n) | vi |^p )^(1/p) .

For p = 2 we obtain the Euclidean norm, or magnitude, of the vector,

| v | = || v ||2 = sqrt( Σ (i=1..n) vi^2 ) .

The infinity norm of vnx1 = [vi], or v1xn = [vi], is the maximum absolute value of the
elements of the vector, i.e.,

|| v ||inf = max_i | vi | .
SCILAB provides the function norm to calculate these quantities. The different calls are
interpreted as follows:

norm(A), or norm(A,2):          Euclidean norm of matrix A (largest singular value)
norm(A,1):                      L1 norm (column norm) of matrix A
norm(A,'inf'), norm(A,%inf):    infinity norm (row norm) of matrix A
norm(A,'fro'):                  Frobenius norm of matrix A
norm(v,p):                      p-norm of vector v
norm(v) or norm(v,2):           Euclidean norm of vector v
norm(v,'inf'):                  infinity norm of vector v
In this description of the function norm we use the concept of the singular values of a matrix.
Some applications of the function norm for matrices and vectors are presented next:
-->A = [2,3;-2,1;0,5]
 A  =
!   2.    3. !
! - 2.    1. !
!   0.    5. !

-->norm(A)
 ans  =
    5.964908

-->norm(A,1)        //L1 norm
 ans  =
    9.

-->norm(A,'inf')    //Infinite norm
 ans  =
    5.

-->norm(A,'fro')    //Frobenius norm
 ans  =
    6.5574385

-->v = [3, 3, -5, 2]
 v  =
!   3.    3.  - 5.    2. !

-->norm(v,3)        //L3 norm
 ans  =
    5.7184791

-->norm(v,4)        //L4 norm
 ans  =
    5.3232748

-->norm(v)
 ans  =
    6.8556546

-->norm(v,'inf')    //Infinite norm
 ans  =
    5.
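The norm definitions can be checked directly in a few lines of plain Python (an illustration of the formulas above; SCILAB's norm does equivalent work internally):

```python
# Reproducing the matrix and vector norms above from their definitions.
A = [[2, 3], [-2, 1], [0, 5]]
v = [3, 3, -5, 2]

col_norm = max(sum(abs(row[j]) for row in A) for j in range(2))  # L1 (column) norm
row_norm = max(sum(abs(a) for a in row) for row in A)            # infinity (row) norm
fro_norm = sum(a * a for row in A for a in row) ** 0.5           # Frobenius norm

def p_norm(v, p):
    # p-norm of a vector
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

print(col_norm, row_norm, fro_norm, p_norm(v, 2))
```

These reproduce 9, 5, 6.5574385, and 6.8556546, matching the SCILAB session.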
The function cond calculates the condition number of a matrix. Some examples follow.

Non-singular matrix:

-->A = [2,3,-1;5,5,2;1,1,-1]
 A  =
!   2.    3.  - 1. !
!   5.    5.    2. !
!   1.    1.  - 1. !

-->cond(A)
 ans  =
    20.765455
Singular matrix:
-->B = [2,3,-1;1,1,1;4,6,-2]
 B  =
!   2.    3.  - 1. !
!   1.    1.    1. !
!   4.    6.  - 2. !
-->cond(B)
 ans  =
    1.391E+16

Near-singular matrix:

-->C = [2,2,-1;2.01,2.01,-.99;1,5,-2]
 C  =
!   2.      2.    - 1.   !
!   2.01    2.01  - .99  !
!   1.      5.    - 2.   !

-->cond(C)
 ans  =
    679.64701
The condition number for the singular matrix is practically infinite, while that for matrix C,
which is nearly singular, is relatively large compared to that of the non-singular matrix A.
The ranks of these matrices are rank(A) = rank(C) = 3, and rank(B) = 2.
Both the rank and the condition number of a square matrix can be used to determine whether
the matrix is singular or nearly singular. This information will be useful in the analysis of the
solution of a linear system.
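To make the idea concrete, the following plain-Python sketch computes a 1-norm condition number, cond1(A) = ||A||1 * ||inv(A)||1, for a 3x3 matrix using the adjugate formula for the inverse (an illustration only; note that cond in SCILAB uses the 2-norm, so the values differ from the session above, but the non-singular vs. near-singular pattern is the same):

```python
# 1-norm condition number of a 3x3 matrix via the adjugate inverse.
def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def inv3(M):
    dt = det3(M)
    (a, b, c), (d, e, f), (g, h, i) = M
    # adjugate (transposed cofactor) matrix divided by the determinant
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[adj[r][s] / dt for s in range(3)] for r in range(3)]

def norm1(M):
    # largest column sum of absolute values
    return max(sum(abs(M[r][s]) for r in range(3)) for s in range(3))

def cond1(M):
    return norm1(M) * norm1(inv3(M))

A = [[2, 3, -1], [5, 5, 2], [1, 1, -1]]            # non-singular (det = 7)
C = [[2, 2, -1], [2.01, 2.01, -0.99], [1, 5, -2]]  # nearly singular (det = -0.12)
print(cond1(A))
print(cond1(C))
```

The near-singular matrix C yields a condition number more than an order of magnitude larger than that of A, mirroring the SCILAB results.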
For example, if

      [ 2  -1 ]
A  =  [ 5   7 ] ,    then    det(A) = (2)(7) - (-1)(5) = 19.
Determinants can be calculated in SCILAB by using the function det. For example, for the
matrices used earlier to illustrate the use of cond we find:

-->det(A)
 ans  =
    7.

-->det(B)
 ans  =
    0.

-->det(C)
 ans  =
  - .12

Notice that the determinant of a singular matrix, such as B in this example, is zero. The
determinant of matrix C, which is nearly singular, is very close to zero.
The determinant of a matrix, therefore, can also be used as a way to characterize a matrix.
Properties of determinants
The calculation of determinants by hand typically requires simplifying the determinant to
reduce the number of operations in its calculation. Manipulation of the determinant includes
operations such as multiplying or dividing a row or column by a constant, exchanging rows or
columns, or replacing a row or column by a linear combination of other rows or columns.
Whenever one of these operations takes place, it is necessary to modify the expression for the
determinant to ensure that its value does not change. Some of the rules of determinant
manipulation are the following:
(1) Multiplying or dividing a row or column in a determinant by a constant is equivalent to
multiplying or dividing the determinant by that constant. For example, consider the
determinant used in the previous example
         | 2  -1 |
det(A) = | 5   7 | = 19.

If we multiply any single row or column by 2, the determinant gets multiplied by 2. For
example (check these with SCILAB):

| 4  -2 |                  |  2  -1 |
| 5   7 | = 2(19) = 38,    | 10  14 | = 38,

|  4  -1 |          | 2  -2 |
| 10   7 | = 38,    | 5  14 | = 38,

while multiplying two rows by 2 multiplies the determinant by 2 twice:

|  4  -2 |
| 10  14 | = (2)(2)(19) = 76.
(2) Switching any two rows or columns produces a change of sign in the determinant. For
example, for the same case presented earlier we have:
| 5   7 |              | -1   2 |
| 2  -1 | = -19,       |  7   5 | = -19.
(3) A row (or column) in a determinant can be replaced by that row plus a multiple of another
row (or column) without changing the value of the determinant. (If the replaced row enters
the combination with a coefficient other than one, the determinant is multiplied by that
coefficient.) For example, starting from the row-swapped determinant of rule (2), we replace
the second row by the linear combination 2[5 7] + (-1)[2 -1] = [8 15]; the coefficient -1 on the
replaced row flips the sign once more, recovering det(A):

| 5   7 |
| 8  15 | = (5)(15) - (7)(8) = 19.
Of course, with SCILAB you no longer need to calculate determinants by hand. Still, to
understand their calculation and some of the matrix elimination methods to be presented
later, you need to keep these rules in mind.
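The three rules are easy to verify numerically for 2x2 determinants (a plain-Python sketch):

```python
# Numerical check of the determinant manipulation rules, using det(A) = 19.
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[2, -1], [5, 7]]
assert det2(A) == 19
# Rule 1: doubling a row doubles the determinant.
assert det2([[4, -2], [5, 7]]) == 2 * 19
# Rule 2: swapping the rows changes the sign.
assert det2([[5, 7], [2, -1]]) == -19
# Rule 3: adding a multiple of row 1 to row 2 leaves the value unchanged.
assert det2([[2, -1], [5 + 3 * 2, 7 + 3 * (-1)]]) == 19
print("all three rules verified")
```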
Example: In the following example, we use determinant operations to simplify its calculation.
First, we divide the first row by 5, the second by 3, and the third by 2:

|  5  10  20 |             |  1   2   4 |
| -3  12   9 | = (5)(3)(2) | -1   4   3 |
|  8   6   2 |             |  4   3   1 |

Next, we replace row 2 with (row 1 + row 2), and row 3 with (4·row 1 - row 3):

          |  1   2   4 |             | 1   2   4 |
(5)(3)(2) | -1   4   3 | = (5)(3)(2) | 0   6   7 |
          |  4   3   1 |             | 0   5  15 |

The next step in the simplification is to divide the second row by 6 and the third row by 5, to
get

          | 1   2   4 |                   | 1   2   4   |
(5)(3)(2) | 0   6   7 | = (5)(3)(2)(6)(5) | 0   1   7/6 |
          | 0   5  15 |                   | 0   1   3   |

Finally, we replace the third row with (row 2 - row 3), i.e.,

                  | 1   2   4   |                   | 1   2     4   |
(5)(3)(2)(6)(5)   | 0   1   7/6 | = (5)(3)(2)(6)(5) | 0   1     7/6 |
                  | 0   1   3   |                   | 0   0  - 11/6 |

(Each of the two row replacements entered the replaced row with coefficient -1, and the two
sign changes cancel.) Now we use the method for calculating 3x3 determinants presented in
Chapter 9, i.e.,

(5)(3)(2)(6)(5) · ( (1)(1)(-11/6) + 0 + 0 - (0 + 0 + 0) ) = -1650.

You can check, using SCILAB's function det, that indeed det(A) = -1650.
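The result can also be confirmed with a direct cofactor expansion (a plain-Python sketch):

```python
# Cofactor expansion along the first row of a 3x3 matrix.
def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

print(det3([[5, 10, 20], [-3, 12, 9], [8, 6, 2]]))   # -1650
```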
Example: Use Cramer's rule to solve the linear system Ax = b, where

      [  1   2    3   4 ]        [ X ]            [  4 ]
A  =  [ -1   5    2   7 ] , x =  [ Y ] ,  and b = [ 11 ] .
      [  4   2  - 1   6 ]        [ Z ]            [  2 ]
      [  2   1  - 4   7 ]        [ R ]            [  9 ]

-->A = [1,2,3,4;-1,5,2,7;4,2,-1,6;2,1,-4,7]
 A  =
!   1.    2.    3.    4. !
! - 1.    5.    2.    7. !
!   4.    2.  - 1.    6. !
!   2.    1.  - 4.    7. !

-->b=[4;11;2;9]
 b  =
!    4. !
!   11. !
!    2. !
!    9. !
-->D = det(A)
D =
- 258.
-->DX = det([b,A(:,2),A(:,3),A(:,4)])
DX =
516.
-->DY = det([A(:,1),b,A(:,3),A(:,4)])
DY =
258.
-->DZ = det([A(:,1),A(:,2),b,A(:,4)])
DZ =
3.908E-14
-->DR = det([A(:,1),A(:,2),A(:,3),b])
DR =
- 516.
The solution, according to Cramer's rule, is X = DX/D = -2, Y = DY/D = -1, Z = DZ/D ≈ 0, and
R = DR/D = 2. Notice that Cramer's rule gives a value for Z of -1.515x10^-16 rather than
exactly zero, indicating that a negligible numerical error was introduced.
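Cramer's rule itself is only a few lines of code for small systems (a plain-Python sketch with a recursive cofactor determinant; fine for a 4x4 system, far too slow for large ones):

```python
# Cramer's rule: x_k = det(A with column k replaced by b) / det(A).
def det(M):
    # recursive cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def cramer(A, b):
    D = det(A)
    cols = range(len(A))
    return [det([[b[i] if j == k else A[i][j] for j in cols]
                 for i in cols]) / D
            for k in cols]

A = [[1, 2, 3, 4], [-1, 5, 2, 7], [4, 2, -1, 6], [2, 1, -4, 7]]
b = [4, 11, 2, 9]
x = cramer(A, b)   # X = -2, Y = -1, Z = 0, R = 2
print(x)
```

Because this sketch works in exact integer arithmetic until the final divisions, it gives Z as exactly zero, unlike the floating-point determinant above.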
The trace of a square matrix Ann = [aij] is the sum of its diagonal elements, i.e.,

tr(Ann) = Σ (i=1..n) aii .

Using Einstein's repeated-index convention we can write simply tr(Ann) = aii.
SCILAB provides the function trace to calculate the trace of a matrix. For example, for the
matrix A used in the previous example:
--> trace(A)
ans =
12.
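In plain Python, the trace is a one-line sum over the diagonal:

```python
# Trace of the 4x4 matrix from the Cramer's rule example: 1 + 5 + (-1) + 7.
A = [[1, 2, 3, 4], [-1, 5, 2, 7], [4, 2, -1, 6], [2, 1, -4, 7]]
tr = sum(A[i][i] for i in range(len(A)))
print(tr)   # 12
```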