Matrix Theory and Applications for Scientists and Engineers
Ebook, 351 pages

About this ebook

A comprehensive text on matrix theory and its applications, this book is intended for a broad range of students in mathematics, engineering, and other areas of science at the university level. Author Alexander Graham avoids a simple catalogue of techniques by exploring the concepts' underlying principles as well as their numerous applications. Many problems elucidate the text, which includes a substantial answer section at the end. 
The treatment explores matrices, vector spaces, linear transformations, and the rank and determinant of a matrix. Additional topics include linear equations, eigenvectors and eigenvalues, canonical forms and matrix functions, and inverting a matrix. A Solution to Problems Section, References and a Bibliography conclude the treatment.
Language: English
Release date: July 18, 2018
ISBN: 9780486832654

    Matrix Theory and Applications for Scientists and Engineers - Alexander Graham

    Preface

    I am aware that there are in existence many excellent books on matrix algebra, but it seems to me that very few of them are written with the average student in mind. Some of them are excellent exercises in mathematical rigour, but hardly suitable for a scientist who wishes to understand a particular technique of linear algebra. Others are a summary of various techniques, with little or no attempt at a justification of the underlying principles.

    This book steers a middle course between the two extremes. With the rapid use of ever more sophisticated techniques in science, economics and engineering, and especially in control theory, it has become essential to have a real understanding of the many methods being used. To achieve simplicity combined with as deep an understanding as possible, I have tried to use very simple notation, even at the possible expense of rigour. I know that a student who, for example, is not used to Gothic lettering will actually find a passage of mathematics using such symbols (to denote sets) much more difficult to absorb than when Roman lettering is used.

    Engineers generally denote vectors by x, y, … whereas mathematicians will use these letters, but will also use α, β, … and other symbols. There is good reason for this; after all, vectors are elements of a set which satisfy the axioms of a vector space. Nevertheless I have followed the engineers’ usage for the notation, but have been careful to stress the general nature of a vector.

    A few brief remarks about the organization of this book. In Chapter 1 matrices and matrix operations are defined. Chapter 2 deals with vector spaces and the transition matrix. In Chapter 3, various concepts introduced in the first two chapters are combined in the discussion of linear transformations. Chapters 4 and 5 deal with various important concepts in matrix theory, such as the rank of a matrix, and introduce determinants and the solution of sets of simultaneous equations.

    Chapters 6 and 7 are, from the engineer’s point of view, the most important in the book. Various aspects of eigenvalue and eigenvector concepts are discussed. These concepts are applied to obtain the decomposition of the vector space and to choose a basis relative to which a linear transformation has a block-diagonal matrix representation. Various matrix block-diagonal forms are discussed.

    Finally in Chapter 8 techniques for inverting a nonsingular matrix are discussed. I have found some of these techniques very useful especially for various aspects of control theory.

    Limitations of length have prevented the inclusion of all aspects of modern matrix theory, but the most important have been covered.

    Thus, there is enough material to make this book a useful one for college and university students of mathematics, engineering, and science who are anxious not only to learn but to understand.

    A. GRAHAM

    CHAPTER 1

    Matrices

    1.1 Matrices and Matrix Operations

    Definition 1.1

    A matrix is a rectangular array of numbers of the form

        A = [ a11  a12  …  a1n ]
            [ a21  a22  …  a2n ]
            [  ⋮    ⋮         ⋮ ]
            [ am1  am2  …  amn ]

    In the above form the matrix has m rows and n columns; we say that it is of order m × n.

    The mn numbers a11, a12, …, amn are known as the elements of the matrix A. If the elements belong to a field F, we say that A is a matrix over the field F. Since the concept of a field plays an important role in the development of vector spaces, we shall state the axioms for a field at this early stage.

    The notation we use is the following

    a є S means that a belongs to the set S

    a ∉ S means that a does not belong to the set S

    ϕ denotes the empty set

    ∃ denotes ‘there exists’.

    A field F consists of a non-empty set of elements and two laws called addition and multiplication for combining elements satisfying the following axioms:

    Let a, b, c є F

    A1 a + b is a unique element є F

    A2 (a+b) + c = a + (b+c)

    A3 ∃0 є F such that 0 + a = a, all a є F.

    A4 For each a є F, ∃ (–a) є F such that a + (–a) = 0

    A5 a + b = b + a

    M1 ab is a unique element є F

    M2 (ab)c = a(bc)

    M3 ∃1 є F such that 1a = a for all a є F

    M4 If a ≠ 0, ∃ a⁻¹ є F such that a⁻¹a = 1.

    M5 ab = ba

    M6 a(b+c) = ab + ac.

    Typical examples of fields are: real numbers, rational numbers, and complex numbers.
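    As a small computational aside (not part of the original text), Python's fractions module models the field of rational numbers exactly, and the axioms above can be spot-checked on sample elements:

```python
from fractions import Fraction

# Two sample rationals; Fraction arithmetic is exact, with no rounding.
a, b = Fraction(2, 3), Fraction(-5, 7)

assert a + b == b + a                                       # A5: a + b = b + a
assert (a + b) + Fraction(1, 2) == a + (b + Fraction(1, 2)) # A2: associativity
assert a * (1 / a) == 1                                     # M4: nonzero a has an inverse
assert a * (b + Fraction(1, 2)) == a * b + a * Fraction(1, 2)  # M6: distributivity
```

    A spot check on particular elements does not prove the axioms, of course, but it illustrates what each one asserts.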

    If we consider the element aij of a matrix the first suffix, i, indicates that the element stands in the ith row, whereas the second suffix, j, indicates that it stands in the jth column.

    For example, the element which stands in the second row and the first column is a21.

    We usually denote a matrix by a capital letter, say A, or by its (i, j)th element in the form [aij].
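    As an illustrative aside (not part of the original text), the row/column conventions above can be seen in NumPy; note that NumPy numbers rows and columns from 0, whereas the text numbers them from 1:

```python
import numpy as np

# A matrix with 2 rows and 3 columns, i.e. of order 2 x 3
A = np.array([[1, 2, 3],
              [4, 5, 6]])

m, n = A.shape    # (2, 3): number of rows, then number of columns

# The text's element a21 (second row, first column) is A[1, 0]
# because NumPy counts from 0.
a21 = A[1, 0]
```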

    Definition 1.2

    A square matrix is a matrix of order n × n of the form

        A = [ a11  a12  …  a1n ]
            [ a21  a22  …  a2n ]
            [  ⋮    ⋮         ⋮ ]
            [ an1  an2  …  ann ]

    The elements a11, a22, …, ann of A are called diagonal elements.

    Example 1.1

    The following are matrices:

    A is a square matrix of order 3 × 3.

    B is a rectangular matrix of order 2 × 3.

    C and D are matrices of order 3 × 1 and 1 × 2 respectively.

    Note

    C is also known as a column matrix or column vector. D is known as a row matrix or a row vector.

    The diagonal elements of A are 1, –1, 1.

    The (2, 3) element of B is b23 = –2.

    Definitions 1.3

    (1)The zero matrix of order m × n is the matrix having all its mn elements equal to zero.

    (2)The unit or identity matrix I is the square matrix of order n × n whose diagonal elements are all equal to 1 and all remaining elements are 0.

    (3)The diagonal matrix A is a square matrix for which aij = 0 whenever i ≠ j. Thus all the off-diagonal elements of a diagonal matrix are zero.

    (4)The diagonal matrix for which all the diagonal elements are equal to each other is called a scalar matrix.

    Example 1.2

    Consider the following matrices:

    A is a scalar matrix of order 3 × 3

    B is a diagonal matrix of order 3 × 3

    I is the unit matrix of order 3 × 3

    O is the zero matrix of order 2 × 3.
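    The special matrices of Definitions 1.3 have direct NumPy constructors; the following sketch (an aside, not part of the original text) builds one of each:

```python
import numpy as np

O = np.zeros((2, 3))     # zero matrix of order 2 x 3
I = np.eye(3)            # unit (identity) matrix of order 3 x 3
B = np.diag([1, 2, 3])   # diagonal matrix: all off-diagonal elements are 0
A = 5 * np.eye(3)        # scalar matrix: a diagonal matrix with equal diagonal elements
```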

    Note

    We shall use the accepted convention of denoting the zero and unit matrices by O and I respectively. It is of course necessary to state the order of matrices considered unless this is obvious from the text.

    Definition 1.4

    (1)An upper triangular matrix A is a square matrix whose elements aij = 0 for i > j.

    (2)A lower triangular matrix A is a square matrix whose elements aij = 0 for i < j.

    Example 1.3

    Consider the two matrices

    A is an upper triangular matrix

    B is a lower triangular matrix.
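    In NumPy (an aside, not part of the original text) the functions np.triu and np.tril produce the triangular matrices of Definition 1.4 by zeroing the elements below and above the diagonal respectively:

```python
import numpy as np

M = np.arange(1, 10).reshape(3, 3)   # [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

U = np.triu(M)   # upper triangular: elements with i > j set to 0
L = np.tril(M)   # lower triangular: elements with i < j set to 0
```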

    Definition 1.5

    Two matrices A = [aij] and B = [bij] are said to be equal if

    (1)A and B are of the same order, and

    (2)the corresponding elements are equal, that is if aij = bij (all i and j).

    Operations on Matrices

    Definition 1.6

    The sum (or difference) of two matrices A = [aij] and B = [bij] of the same order, say m × n, is a matrix C = [cij] also of order m × n such that cij = aij + bij (all i and j).

    (Or cij = aij – bij if we are considering the difference of A and B.)

    Example 1.4

    Given

    find A+B and A–B.

    Solution

    Definition 1.7

    The multiplication of a matrix A = [aij] of order m × n by a scalar r is the matrix rA = [raij], that is, the matrix obtained by multiplying every element of A by r.

    Example 1.5

    Given

    Solution

    Note that 3A = A + A + A, and we can evaluate the right-hand side by Definition 1.6.
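    The elementwise operations of Definitions 1.6 and 1.7 are exactly what NumPy's +, – and scalar * do on arrays of the same order (an illustrative aside, with made-up matrices, not part of the original text):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

C = A + B     # sum: cij = aij + bij
D = A - B     # difference: cij = aij - bij
E = 3 * A     # scalar multiple: every element multiplied by 3
```

    As the text notes, 3A agrees with A + A + A computed by repeated addition.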

    Definition 1.8

    The product of the matrix A = [aij] of order m × l and the matrix B = [bij] of order l × n is the matrix C = [cij] of order m × n defined by

        cij = ai1b1j + ai2b2j + … + ailblj  (i = 1, 2, …, m; j = 1, 2, …, n).

    We illustrate this definition by picking out the elements of A and B making up the (i, j)th element of C.

    Note the following:

    (i)The product AB is defined only if the number of columns of A is the same as the number of rows of B. If this is the case, we say that A is conformable to B.

    (ii)The (i, j)th element of C is evaluated by using the ith row of A and jth column of B.

    (iii)If A is conformable to B the product AB is defined, but it does not follow that the product BA is defined. Indeed, if A and B are of order m × l and l × n respectively, the product AB is defined but the product BA is not, unless n = m.

    (iv)When both AB and BA are defined, AB ≠ BA in general, that is, matrix multiplication is NOT commutative.

    (v)

    (a)If AX = 0, it does not necessarily follow that A = 0 or X = 0.

    (b)If AB = AC, it does not necessarily follow that B = C. (See Sec. 1.2 for further discussion of (iv) and (v) and examples).
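    Notes (iv) and (v) are easy to witness with small concrete matrices. The following sketch (made-up matrices, not from the original text) exhibits nonzero A and X with AX = 0, and a pair P, Q with PQ ≠ QP:

```python
import numpy as np

# (v)(a): AX = 0 with neither A nor X the zero matrix
A = np.array([[1, 2],
              [2, 4]])
X = np.array([[ 2],
              [-1]])
Z = A @ X          # the 2 x 1 zero matrix

# (iv): matrix multiplication is not commutative
P = np.array([[0, 1], [0, 0]])
Q = np.array([[1, 0], [0, 0]])
# P @ Q is the zero matrix, while Q @ P is not, so PQ != QP
```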

    Example 1.6

    Find (if possible)

    (i) AB, (ii) BA, (iii) AC, (iv) CA, (v) BC.

    Solution

    (i)Since A has 3 columns and B has 3 rows, the product AB is defined.

    (ii)Since B has 3 columns and A has 2 rows the product BA is not defined.

    (iii)Since A has 3 columns and C has 3 rows, AC is defined

    (iv)The product CA is not defined.

    (v)

    Example 1.7

    Given the matrices:

    write the equations AX = B in full.

    Solution

    AX = B is the equation:

    By Def. 1.8, we can write the above as

    By Def. 1.5 the above equation is equivalent to the following system of simultaneous equations:
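    The particular matrices of Example 1.7 are not reproduced in this preview, but the correspondence between AX = B and a system of simultaneous equations can be sketched with made-up numbers (an aside, not part of the original text):

```python
import numpy as np

A = np.array([[2, 1],
              [1, 3]])
b = np.array([5, 10])

# By Def. 1.8, A @ x = b is shorthand for the simultaneous equations
#   2*x1 + 1*x2 = 5
#   1*x1 + 3*x2 = 10
x = np.linalg.solve(A, b)   # here x1 = 1, x2 = 3
```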

    Definition 1.9

    The transpose of a matrix A = [aij] of order m × n is the matrix A′ = [bij] of order n × m obtained from A by interchanging the rows and columns of A, so that bij = aji (i = 1, 2, …, n; j = 1, 2, …, m), for example if

    The transpose of a column matrix is a row matrix and vice versa, thus if

    Note that (A′)′ = A.

    Notation

    To denote vectors (see Def. 2.1) we shall make use of several notations.

    When considering a one-column matrix or a one-row matrix, we use

    Since a one-row matrix is less space-consuming to write than a one-column matrix, we frequently write the column matrix in the transposed form as [1, 2, 3]′.

    Finally, if it is immaterial whether the vectors under discussion are row or column vectors, we use the notation (1, 2, 3).
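    In NumPy (an illustrative aside, not part of the original text) the transpose is the .T attribute, and the book's [1, 2, 3]′ convention corresponds to transposing a one-row array into a one-column array:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])        # order 2 x 3

At = A.T                         # order 3 x 2, with At[i, j] == A[j, i]

# The column vector the book writes as [1, 2, 3]'
col = np.array([[1, 2, 3]]).T
```

    Note that transposing twice returns the original matrix, matching (A′)′ = A.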

    1.2 Some Properties of Matrix Operations

    In this section we shall state without proof (in general) a number of properties of matrix operations. The interested reader will find the proofs in most of the books mentioned in the Bibliography.

    Although there are a number of analogies between the algebraic properties of matrices and real numbers, there are also striking differences, some of which will be pointed out.

    Let A, B, and C be matrices, each of order m × n

    Addition laws

    (1)Matrix addition is commutative; that is, A + B = B + A.

    (2)Matrix addition is associative; that is, (A + B) + C = A + (B + C).

    (3)There exists a unique zero matrix O of order m × n such that A + O = A.

    (4)Given the matrix A, there exists a unique matrix B such that A + B = O.

    The above property serves to introduce the subtraction of matrices. Indeed, the unique matrix B in the above equation is found to be equal to (–1)A, which we write as –A. We then find that A – A = O and that –(–A) = A.

    (5)Multiplication by scalars.

    If r and s are scalars, then

    (a) r(A+B) = rA + rB

    (b) (r+s)A = rA + sA

    (c) r(A–B) = rA – rB

    (d) (r–s)A = rA – sA.
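    The addition and scalar-multiplication laws above can be spot-checked numerically (an aside with made-up matrices, not part of the original text); integer arrays make the comparisons exact:

```python
import numpy as np

A = np.array([[1, -2, 3], [0, 4, -1]])
B = np.array([[2, 2, -5], [1, -3, 0]])
r, s = 2, 3

assert np.array_equal(A + B, B + A)                 # law (1)
assert np.array_equal(r * (A + B), r*A + r*B)       # law (5a)
assert np.array_equal((r + s) * A, r*A + s*A)       # law (5b)
assert np.array_equal((r - s) * A, r*A - s*A)       # law (5d)
```

    Again, checking particular matrices does not prove the laws, but each assertion is a direct transcription of one of them.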

    Example 1.8

    Given

    and

    (a)show that (i) A + B = B + A

    (ii) A + (B + C) = (A + B) + C.

    (b)find the matrix
