Advanced Mathematics for Engineers and Scientists

Ebook · 767 pages · 7 hours


About this ebook

This book can be used as either a primary text or a supplemental reference for courses in applied mathematics. Its core chapters are devoted to linear algebra, calculus, and ordinary differential equations. Additional topics include partial differential equations and approximation methods.
Each chapter features an ample selection of solved problems. These problems were chosen to illustrate not only how to solve various algebraic and differential equations but also how to interpret the solutions in order to gain insight into the behavior of the system modeled by the equation. In addition to the worked-out problems, numerous examples and exercises appear throughout the text.
Language: English
Release date: Jan 17, 2013
ISBN: 9780486141596

    Book preview

    Advanced Mathematics for Engineers and Scientists - Paul DuChateau


    Preface

    The old saying that you can’t tell a book by its cover is especially true for a title like Advanced Mathematics for Engineers and Scientists. A title like that leaves an author considerable freedom with respect to level and content, and I therefore feel obligated to use this preface to explain what you will find in this book.

    The book is written in such a way that it may be used either as a primary text or as a supplemental reference for a course in applied mathematics at the junior/senior or beginning graduate level. The core chapters are devoted to linear algebra and ordinary and partial differential equations. These are the topics required for the analysis of discrete, semidiscrete, and continuous mathematical models for physical systems. The solved problems in each chapter have been selected to illustrate not only how to construct solutions to various algebraic and differential equations but also how to interpret those solutions in order to gain insight into the behavior of the system modeled by the equation.

    The book separates naturally into the following sections:

    Linear Algebra and Calculus: Chapters 1, 2, and 3

    Ordinary Differential Equations: Chapters 4 to 6

    Partial Differential Equations: Chapters 7 to 11

    Approximation Methods: Chapters 12 and 13

    Chapters 1 and 2 are devoted to the two motivating problems of linear algebra: systems of linear equations and algebraic eigenvalue problems. Along with more traditional material, these chapters include topics such as least squares solutions of overdetermined systems, with applications to regression analysis, and eigenvalue calculations for large, banded matrices.

    Chapter 3 collects a number of results from multivariable calculus and shows how some of them can be used to derive the equations of mathematical physics. The models derived include the Navier-Stokes equations of fluid dynamics and Maxwell’s equations for electromagnetic fields. Other solved problems show how certain simplifying assumptions reduce the full Navier-Stokes equations to the equations for the propagation of acoustic waves, while other assumptions lead to the equations for potential flow.

    Chapters 4, 5, and 6 are devoted to linear and nonlinear ordinary differential equations. The emphasis in these chapters is on the application of the results to the analysis of physical systems. In particular, chapter 5 provides an introduction to dynamical systems.

    Chapters 7 and 8 provide the basic solution techniques used in chapter 9 for solving linear problems in partial differential equations. Chapter 10 treats nonlinear problems in partial differential equations including discussions of shock waves and expansion fans, and a brief introduction to solitary wave solutions for nonlinear differential equations.

    Chapter 11 introduces functional optimization with applications to the variational formulation of boundary value problems in differential equations. This material provides the theoretical foundation for the finite element method, which is briefly described in chapter 13. Chapter 12 discusses finite difference methods, the other commonly used method for approximating solutions to differential equations.

    Paul DuChateau

    1

    Systems of Linear Algebraic Equations

    This chapter provides an introduction to the solution of systems of linear algebraic equations. After a brief discussion of matrix notation we present the Gaussian elimination algorithm for solving linear systems. We also show how the algorithm can be extended slightly to provide the so-called LU factorization of the coefficient matrix. This factorization is nearly equivalent to computing the matrix inverse and is an extremely effective solution approach for certain kinds of problems. The solved problems provide simple BASIC computer programs for both the Gaussian elimination algorithm and the LU decomposition. These are applied to example problems to illustrate their use. The solved problems also include examples of physical problems for which the mathematical models lead to systems of linear equations.

    The presentation of the solution algorithms is rather formal, particularly with respect to explaining what happens when the algorithms fail in the case of a singular system of equations. To provide a clearer understanding of these and other matters we include a brief development of some abstract ideas from linear algebra. We introduce the four fundamental subspaces associated with a matrix A: the row and column spaces, the null space and the range of A. The relationships that exist between these subspaces and the corresponding subspaces for the transpose matrix AT provide the key to understanding the solution of systems of linear equations in the singular as well as the nonsingular case. The solved problems expand on the ideas set forth in the text. For example, Problem 1.15 gives a physical interpretation for the abstract solvability condition that must be imposed on the data vector in a singular system. Problems 1.16 through 1.20 discuss the notion of a least squares solution for an overdetermined system and apply this to least squares fits for experimental data.

    We should perhaps conclude this introduction with the disclaimer that this chapter is meant to be only an introduction to the numerical and abstract aspects of systems of linear algebraic equations. While the Gaussian elimination algorithm does form the core of many of the more sophisticated solution algorithms for linear systems, the version provided here contains none of the enhancements that exist to take advantage of special matrix structure, nor does it contain provisions to compensate for numerical instabilities in the system. Such considerations are properly the subject of a more advanced course on numerical linear algebra. This chapter seeks only to provide the foundation on which more advanced treatments can build.

    SYSTEMS OF SIMULTANEOUS LINEAR EQUATIONS

    Terminology

    Consider the following system of m equations in the n unknowns x1, . . . , xn:

    a11x1 + a12x2 + ⋅ ⋅ ⋅ + a1nxn = b1
    a21x1 + a22x2 + ⋅ ⋅ ⋅ + a2nxn = b2
    . . . . . . . . . . . . . . . . . . .
    am1x1 + am2x2 + ⋅ ⋅ ⋅ + amnxn = bm     (1.1)

    Here a11, . . . , amn denote the coefficients in the equations and the numbers b1, . . . , bm are referred to as the data. In a specific example these quantities would be given numerical values and we would then be concerned with finding values for x1, . . . , xn such that these equations were satisfied. An n-tuple of real values {x1, . . . , xn} which satisfies each of the m equations in (1.1) is said to be a solution for the system. The collection of all n-tuples which are solutions is called the solution set for the system. For any given system, one of the following three possibilities must occur:

    The solution set contains a single n-tuple; then the system is said to be nonsingular.

    The solution set contains infinitely many n-tuples; then the system is said to be singular, or, more precisely, singular dependent or underdetermined.

    The solution set is empty; in this case we say the system is singular inconsistent or overdetermined.

    EXAMPLE 1.1

    Consider the system

    Each equation in this simple system defines a separate function whose graph is a straight line in the x1x2 plane. The lines corresponding to these equations are seen to intersect at the unique point (2, 2). Thus, the solution set for this system consists of the single 2-tuple [2, 2]. The system is nonsingular.

    The system

    produces just a single line in the x1x2 plane (alternatively, there are two lines that coincide). The solution set for the system contains infinitely many 2-tuples corresponding to all of the points on the line. That is, every 2-tuple that is of the form [x1, 2 − x1] for any choice of x1 is in the solution set. The system is singular. More precisely, the system is singular dependent (underdetermined). The equations of the system are not independent equations.

    The system

    produces a pair of parallel lines in the x1x2 plane. There are no points that lie on both lines and consequently the solution set for the system is empty. The system is singular. More precisely, the system is singular inconsistent (overdetermined). The equations of the system are contradictory.

    Matrix Notation

    VECTORS

    In order to discuss systems of equations efficiently it will be convenient to have the notion of an n-vector. We shall use the notation X to denote an array of n numbers x1, . . . , xn arranged in a column, and XT = [x1, . . . , xn] will denote the same array arranged in a row. We refer to these respectively as column vectors and row vectors. The purpose of this distinction will become apparent.

    INNER (DOT) PRODUCT

    For n-vectors X and Y having entries x1, . . . , xn and y1, . . . , yn, respectively, we define the inner product of X and Y as the number

    (X, Y) = x1y1 + x2y2 + ⋅ ⋅ ⋅ + xnyn     (1.2)

    We use any of the notations X · Y, XTY, or (X, Y) to indicate the inner product of X and Y. The inner product is also called the dot product.

    MATRICES

    An m × n matrix is a rectangular array of numbers containing m rows and n columns. For example, A = [aij] and B = [bij] denote, respectively, an m × n matrix and an n × p matrix. Here aij denotes the entry in the ith row and jth column of the matrix A. Note that a column vector is an n × 1 matrix and a row vector is a 1 × n matrix.

    MATRIX PRODUCT

    For an m × n matrix A and an n × p matrix B, the product AB is defined to be the m × p matrix C whose entries cij are given by

    cij = ai1b1j + ai2b2j + ⋅ ⋅ ⋅ + ainbnj,  i = 1, . . . , m,  j = 1, . . . , p     (1.3)

    Note that the product AB is not defined if the number of columns of A is not equal to the number of rows of B. Note also that if A is m × n we can think of A as composed of m rows, each of which is a (row) n-vector; i.e., A is composed of rows R1T, . . . , RmT. Similarly, we can think of B as consisting of p columns, C1, . . . , Cp, each of which is a (column) n-vector. Then for i = 1, . . . , m and j = 1, . . . , p, the (i, j) entry of the product matrix AB is equal to the dot product RiTCj; i.e.,

    (AB)ij = RiTCj = Ri ⋅ Cj     (1.4)

    If A is m × n and B is n × m, then the products AB and BA are both defined but produce results which are m × m and n × n, respectively. Thus, if m is not equal to n, AB cannot equal BA. Even if m equals n, AB is not necessarily equal to BA. The matrix product is not commutative.
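    The entrywise formula (1.3), equivalently the row-times-column description (1.4), translates directly into code. The following is a minimal Python sketch (illustrative only, not from the text) that forms the product this way and confirms with a small pair of matrices that AB need not equal BA.

```python
# Minimal sketch (not from the text): matrix product via (1.3),
# C[i][j] = sum over k of A[i][k] * B[k][j], i.e., the dot product of
# row i of A with column j of B as in (1.4).

def mat_mult(A, B):
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A)   # columns of A must equal rows of B
    C = [[0.0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            C[i][j] = sum(A[i][k] * B[k][j] for k in range(n))
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[0.0, 1.0], [1.0, 0.0]]
print(mat_mult(A, B))   # [[2.0, 1.0], [4.0, 3.0]]
print(mat_mult(B, A))   # [[3.0, 4.0], [1.0, 2.0]] -- not equal to AB
```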

    IDENTITY MATRIX

    Let I denote the n × n matrix whose entries Ijk are equal to 1 if j equals k and are equal to 0 if j is different from k. Then I is called the n × n identity matrix since for any n × n matrix A, we have IA = AI = A. Thus, I plays the role of the identity for matrix multiplication.

    MATRIX NOTATION FOR SYSTEMS

    The system of equations (1.1) can be expressed in matrix notation by writing

    AX = B     (1.5)

    where A = [aij] denotes the m × n matrix of coefficients, X denotes the column vector of unknowns x1, . . . , xn, and B denotes the column vector of data b1, . . . , bm. We refer to A as the coefficient matrix for the system and to X and B as the unknown vector and the data vector, respectively.

    Gaussian Elimination

    AUGMENTED MATRIX

    We shall now introduce the Gaussian elimination algorithm for solving the system (1.5) in the case m = n. We begin by forming the augmented matrix for the system (1.5). This is the n × (n + 1) matrix, denoted [A|B], composed of the coefficient matrix A with the data vector B adjoined as an additional column.

    EXAMPLE 1.2 AUGMENTED MATRIX

    Consider the system of equations

    The augmented matrix for this system is

    Note that the augmented matrix contains all the information needed to form the original system of equations. Only extraneous symbols have been stripped away.

    ROW OPERATIONS

    Gaussian elimination consists of operating on the rows of the augmented matrix in a systematic way in order to bring the matrix into a form where the unknowns can be easily determined. The allowable row operations are

    Eij(a)—add to row i of the augmented matrix (a times row j) to form a new ith row (for a ≠ 0 and i ≠ j),

    Ej(a)—multiply the jth row of the augmented matrix by the nonzero constant a to form a new jth row (for a ≠ 0),

    Pij—exchange rows i and j of the augmented matrix.

    These operations are referred to as elementary row operations. They do not alter the solution set of the system (1.5).
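    Each of these operations is easily expressed as a function acting on the rows of an augmented matrix. A minimal Python sketch (illustrative only, not from the text; row indices are 1-based to match the notation):

```python
# Sketch (not from the text): the three elementary row operations acting on
# an augmented matrix M stored as a list of row lists.

def E_ij(M, i, j, a):
    """Add a times row j to row i (a nonzero, i different from j)."""
    M[i - 1] = [x + a * y for x, y in zip(M[i - 1], M[j - 1])]

def E_j(M, j, a):
    """Multiply row j by the nonzero constant a."""
    M[j - 1] = [a * x for x in M[j - 1]]

def P_ij(M, i, j):
    """Exchange rows i and j."""
    M[i - 1], M[j - 1] = M[j - 1], M[i - 1]
```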

    Theorem 1.1

    Theorem 1.1. The solution set for a system of linear equations remains invariant under elementary row operations.

    EXAMPLE 1.3 ROW REDUCTION TO UPPER TRIANGULAR FORM

    Consider the system of equations from Example 1.2. The row operation E21(−2) has the following effect on the augmented matrix:

    We follow this with E31(1)

    and E32(−2)

    The augmented matrix has been reduced to upper triangular form. A matrix is said to be in upper triangular form when all entries below the diagonal are zero. That is, A is in upper triangular form if aij = 0 for i > j.

    BACK SUBSTITUTION

    The final form of the augmented matrix in Example 1.3 is associated with the following system of equations UX = B*; i.e.,

    This system has the same solution set as the original system and since the coefficient matrix of the reduced system is upper triangular, we can solve for the unknowns by back substitution. That is,

    Substituting this result in the second equation of the reduced system leads to

    Finally, substituting the values for x3 and x2 into the first equation yields

    GAUSSIAN ELIMINATION

    We can summarize the algorithm for solving the system AX = B by Gaussian elimination as follows:

    Form the augmented matrix [A|B] for the system.

    Reduce [A|B] to upper triangular form [U|B*] by elementary row operations.

    Solve the upper triangular system UX = B* by back substitution, finding xn first, then xn−1, and so on up to x1.

    The solved problems provide examples of systems requiring row exchanges for the reduction.
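    The solved problems implement this algorithm as simple BASIC programs; the following Python sketch of the same procedure (illustrative only, with no pivoting or row exchanges and an assumed nonzero pivot at every step) reduces the augmented matrix and then back substitutes.

```python
# Illustrative sketch (the text's solved problems use BASIC): Gaussian
# elimination without row exchanges, followed by back substitution.

def gauss_solve(A, B):
    n = len(A)
    M = [row[:] + [b] for row, b in zip(A, B)]      # augmented matrix [A|B]
    # Forward elimination: zero out the entries below each pivot.
    for k in range(n):
        for i in range(k + 1, n):
            mult = M[i][k] / M[k][k]                # assumes a nonzero pivot
            M[i] = [mij - mult * mkj for mij, mkj in zip(M[i], M[k])]
    # Back substitution on the upper triangular system UX = B*.
    X = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * X[j] for j in range(i + 1, n))
        X[i] = (M[i][n] - s) / M[i][i]
    return X

print(gauss_solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))   # [1.0, 3.0]
```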

    ECHELON FORM

    The notion of an upper triangular matrix is not sufficiently precise for purposes of describing the possible outcomes of the Gaussian elimination algorithm. We say an upper triangular matrix is in echelon form if all the entries in a column that are below the first nonzero entry in each row are zero.

    EXAMPLE 1.4 ECHELON FORM

    is upper triangular but is not in echelon form.

    is in echelon form.

    ROW EQUIVALENCE AND RANK

    If matrix A can be transformed into matrix B by a finite number of elementary row operations, we say that A and B are row equivalent. We define the rank of a matrix A to be the number of nontrivial rows in any echelon form upper triangular matrix that is row equivalent to A. A trivial row in a matrix is one in which every entry is a zero. Now we can state

    Theorem 1.2

    Theorem 1.2. Let A denote an n × n matrix and consider the associated linear system AX = B.

    If rank A = n, then the system is nonsingular and has a unique solution X for every data n-tuple B.

    If rank A = rank[A|B] < n, then the system is singular dependent; i.e., there are infinitely many solutions.

    If rank A < rank[A|B], then the system is singular inconsistent; there is no solution.

    EXAMPLE 1.5

    We apply Theorem 1.2 to the 2 × 2 system of Example 1.1(a)

    We have rank A = n = 2. Then the theorem implies that AX = B has a unique solution for every 2-tuple B.

    For the system of Example 1.1(b), we have

    Then rank A = rank [A|B] = 1 < 2 = n and the theorem indicates that this system is singular dependent as we found previously by other means.

    For the third system in Example 1.1, we have rank A = 1 and

    Then the theorem states that this system is singular inconsistent, and there is no solution. The lines which are the graphs of the two equations of this system are parallel.
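    The rank comparisons in Theorem 1.2 are easy to carry out numerically. The following Python/NumPy sketch (not from the text) classifies three hypothetical 2 × 2 systems chosen to behave like the three cases of Example 1.1; the coefficients are stand-ins, since the example's own displays are not reproduced here.

```python
# Sketch (not from the text): classify a square system AX = B by comparing
# rank A with rank [A|B], following Theorem 1.2.
import numpy as np

def classify(A, B):
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float).reshape(-1, 1)
    n = A.shape[0]
    rank_A = np.linalg.matrix_rank(A)
    rank_AB = np.linalg.matrix_rank(np.hstack([A, B]))
    if rank_A == n:
        return "nonsingular: unique solution"
    if rank_A == rank_AB:
        return "singular dependent: infinitely many solutions"
    return "singular inconsistent: no solution"

# Hypothetical systems mirroring the three cases described in Example 1.1:
print(classify([[1, 1], [1, -1]], [4, 0]))   # unique solution (2, 2)
print(classify([[1, 1], [2, 2]], [2, 4]))    # solutions [x1, 2 - x1]
print(classify([[1, 1], [1, 1]], [2, 4]))    # parallel lines, no solution
```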

    LU Decomposition

    ELEMENTARY MATRICES AND ROW OPERATIONS

    Each of the elementary row operations is equivalent to multiplication by an associated matrix called an elementary matrix.

    Theorem 1.3

    Theorem 1.3. Let In denote the n × n identity matrix and let E, F, G denote, respectively, the matrices obtained by applying to In the row operations Ej(a), Ejk(a), and Pjk, for a ≠ 0 and j ≠ k. Then, for n × n matrices A and B,

    A —Ej(a)→ B if and only if EA = B

    A —Ejk(a)→ B if and only if FA = B

    A —Pjk→ B if and only if GA = B

    EXAMPLE 1.6 ELEMENTARY MATRICES

    Let I3 denote the 3 × 3 identity. Then applying the row operations E1(a), E32(a), P23 to I3 leads to

    Multiplying the matrix A by E, F, and P, respectively, we find

    Note that the matrices E and F are lower triangular but P is not.

    MATRIX INVERSE

    For n × n matrices A and B, both products AB and BA are defined and produce an n × n result. In the special case that AB = BA = In, we say that B is the matrix inverse for A and we write B = A−1. Note that only a square matrix may have an inverse but not every square matrix has an inverse. A square matrix with no inverse is said to be singular and a matrix that does have an inverse is said to be invertible or nonsingular.

    Theorem 1.4 Inverse of the Product of Two Matrices

    Theorem 1.4. Suppose A and B are invertible n × n matrices. Then the product AB is invertible and (AB)−1 = B−1A−1. In view of Theorem 1.3, every elementary matrix is invertible.

    Theorem 1.5 Inverse of an Elementary Matrix

    Theorem 1.5. Let In denote the n × n identity matrix and let E, F, P denote, respectively, the matrices obtained by applying to In the row operations Ej(a), Ejk(a), and Pjk, for a ≠ 0 and j ≠ k. Then E, F, and P are invertible: E−1 and F−1 are the matrices obtained by applying to In the row operations Ej(1/a) and Ejk(−a), respectively, and P−1 = P.

    EXAMPLE 1.7 INVERSES OF ELEMENTARY MATRICES

    Let E, F, and P be the elementary matrices of Example 1.6.

    Then

    Note that the inverses for E and F are lower triangular and that P is its own inverse. We now illustrate the role played by elementary matrices in Gaussian elimination.

    EXAMPLE 1.8 LU DECOMPOSITION

    The following 3 × 3 matrix can be reduced to upper triangular form by applying (in order) the indicated row operations:

    If we let E, F, and G denote the elementary matrices associated with the respective row operations, E21(−2), E31(1), and E32(−2), then G(F(EA)) = U. Thus, it follows that A = E−1(F−1(G−1U)). Each of the matrices E, F, G is lower triangular, as are their inverses. It is easy to check that the product of two lower triangular matrices is again lower triangular. Thus, the product E−1F−1G−1 is a lower triangular matrix we shall denote by L. Then A = LU, where L is lower triangular and U is upper triangular. We refer to this as the LU factorization or the LU decomposition of A. Note further that, according to Theorem 1.5,

    and, thus,

    Observe that A is reduced to upper triangular form by the row operations

    E21(−2), E31(1), E32(−2)

    and that L has off-diagonal entries

    L21 = 2, L31 = −1, L32 = 2

    Thus, in order to find the matrix L in the LU factorization of A, it is not necessary to compute the elementary matrices or their inverses. Instead, it is sufficient to keep track of the multipliers in the row operations used to reduce A to U. More generally, we have the next theorem.

    Theorem 1.6

    Theorem 1.6. Suppose n × n matrix A is reduced to upper triangular matrix U by row operations of the form Ejk(a) and Ej(a). Then A = LU where the entries Ljk of the lower triangular matrix L are prescribed by:

    Each row operation of the form Ejk(a) leads to an off-diagonal entry Ljk = −a.

    Each row operation of the form Ej(a) leads to a diagonal entry Ljj = 1/a.

    For simplicity we will avoid discussion of the effect of row exchanges on the factorization of A.
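    Theorem 1.6 says that the entries of L are nothing more than the multipliers used during the elimination. The following Python sketch (illustrative only, not from the text, and ignoring row exchanges as in the discussion above) records those multipliers while reducing A to U.

```python
# Sketch (not from the text): LU factorization without row exchanges.
# The multiplier that eliminates entry (i, k) is stored as L[i][k], as
# prescribed by Theorem 1.6 (the row operation E_ik(-mult) gives L_ik = mult).

def lu_factor(A):
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            mult = U[i][k] / U[k][k]          # assumes a nonzero pivot
            L[i][k] = mult
            U[i] = [uij - mult * ukj for uij, ukj in zip(U[i], U[k])]
    return L, U
```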

    SOLUTION OF LINEAR SYSTEMS BY LU FACTORIZATION

    Application of Gaussian elimination to the n × n system AX = B reduces the system to an upper triangular system UX = B*. By keeping track of the row operations employed, we obtain at the same time a lower triangular matrix L such that LU = A. Then LUX = B or, since UX = B*, we can write LB* = B. Thus, the system AX = B is seen to be equivalent to the two triangular systems

    Forward Sweep: LB* = B

    Backward Sweep: UX = B*

    Solving LB* = B is accomplished by executing a forward sweep; that is, since the matrix L is lower triangular, we solve the first equation first, use the result in the second equation, etc. Then we solve the upper triangular system UX = B* by a backward sweep; i.e., by back substitution, as previously described. This strategy is particularly efficient if we are required to solve the system AX = B for several different data vectors B since we do not have to repeat the time-consuming step of factorization but simply repeat the forward and backward sweeps with the same L and U matrices.

    EXAMPLE 1.9 SOLVING AX = B BY LU FACTORIZATION

    To solve

    by LU factorization, recall from Example 1.8 that A = LU, where L and U are given in Example 1.8. We first execute a forward sweep, solving LB* = B; i.e.,

    This produces the following result:

    Then we execute the backward sweep to find X:

    The solved problems illustrate some situations in which it is natural to solve AX = B for a sequence of data vectors B. We summarize the LU decomposition algorithm for AX = B in the following steps:

    Reduce [A|B] to upper triangular form [U|B*] by row operations.

    Store multipliers from row operations to form the matrix L.

    Execute a backward sweep to solve UX = B* for X.

    To solve AX = B for a new data vector B′, execute a forward sweep to find B* from LB* = B′. Then repeat step 3.
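    Continuing the earlier sketch, the factors L and U returned by lu_factor can be reused for as many data vectors as needed; each new right-hand side costs only one forward and one backward sweep (an illustration under the same assumptions, not from the text).

```python
# Sketch (not from the text): solve AX = B with the factors from lu_factor.
# One forward sweep (LB* = B) and one backward sweep (UX = B*).

def lu_solve(L, U, B):
    n = len(B)
    # Forward sweep: L is lower triangular with unit diagonal.
    Bstar = [0.0] * n
    for i in range(n):
        Bstar[i] = B[i] - sum(L[i][j] * Bstar[j] for j in range(i))
    # Backward sweep: back substitution on UX = B*.
    X = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * X[j] for j in range(i + 1, n))
        X[i] = (Bstar[i] - s) / U[i][i]
    return X

# Factor once, then reuse L and U for several data vectors:
L, U = lu_factor([[2.0, 1.0], [1.0, 3.0]])
print(lu_solve(L, U, [5.0, 10.0]))   # [1.0, 3.0]
print(lu_solve(L, U, [3.0, 4.0]))    # [1.0, 1.0]
```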

    LINEAR ALGEBRA

    Vector Spaces

    In order to understand the solution of systems of linear equations at more than an algorithmic level, it is necessary to introduce a number of abstract concepts. The proper context for considering questions relating to systems of linear equations is the setting of a vector space. In general, a vector space consists of:

    a collection of objects called vectors

    a second set of objects called scalars

    two operations, vector addition and scalar multiplication for combining scalars and vectors

    It is customary to list a set of axioms describing how these objects and operations interact. For our purposes, it will be sufficient to proceed with less generality.

    VECTORS AND SCALARS

    We will take as our vectors n-tuples of real numbers, so-called n-vectors for which we have already introduced the notation X or XT according to whether the vector is written as a column or a row. In this chapter we will use the real numbers as our scalars. In the next chapter it will be necessary to be slightly more general and to use the complex numbers for scalars.

    VECTOR ADDITION, SCALAR MULTIPLICATION

    We define vector addition by

    X + Y = [x1 + y1, x2 + y2, . . . , xn + yn]T

    for arbitrary vectors X and Y. Scalar multiplication is defined as follows:

    aX = [ax1, ax2, . . . , axn]T

    for arbitrary scalar a and vector X.

    EUCLIDEAN n-SPACE

    By defining the vectors, scalars and operations as we have, we have presented a particular example of a vector space. This vector space is usually referred to as Euclidean n-space and is denoted by ℜn. This is certainly not the only example of a vector space which is of importance in applied mathematics, but it is the only one we will consider in this chapter.

    LINEAR COMBINATIONS, SUBSPACES

    For scalars a and b and vectors X and Y, the combination Z = aX + bY is also a vector. We refer to Z as a linear combination of X and Y. Certain sets of vectors have the property that they are closed under the operation of forming linear combinations. Such sets of vectors are called subspaces. That is, a set M of vectors is a subspace if and only if, for all scalars a and b and all vectors X and Y in M, the linear combination aX + bY is also in M.

    EXAMPLE 1.10 SUBSPACES

    Null Space of a Matrix. Let A denote an n × n matrix. Then the set of all vectors X in ℜn with the property that AX = 0 forms a subspace called the null space of A. We denote this subspace by N[A] and write the definition more briefly as N[A] = {X in ℜn: AX = 0}.

    To see that N[A] is a subspace, suppose that X and Y belong to N[A]; i.e., AX = 0 and AY = 0. For arbitrary scalars a and b, it follows from the definition of the product of a matrix times a vector that A(aX + bY) = aAX + bAY = 0. Since Z = aX + bY satisfies AZ = 0, Z belongs to N[A] and N[A] is closed under the operation of forming linear combinations.

    Range of a Matrix. Note that for X in ℜn, the product AX is also in ℜn. The set of all vectors Y in ℜn that are of this form, Y = AX for some X, form a subspace called the range of A. We denote this subspace by Rng[A] and write

    Rng[A] = {Y in ℜn: Y = AX for some X in ℜn}

    The proof that Rng[A] is a subspace is found in the solved problems. The two subspaces N[A] and Rng[A] are subspaces defined by membership rule. The rule for membership in N[A] is that AX must be the zero vector and the rule for membership in Rng[A] is that Y must be of the form AX for some vector X. Thus, in order to decide if a given vector is in the subspace, it is sufficient to simply apply the membership rule to the vector. We consider now two examples of subspaces defined by another means.

    Row Space of a Matrix. For n × n matrix A, we define the row space of A to be the collection of all possible linear combinations of the rows of A. The rows of A are (row) n-vectors and, hence, this is just the set of all vectors in ℜn that can be written as some linear combination of the rows of A. We denote this subspace by RS[A] and note that since RS[A] consists of all possible linear combinations of the rows of A, it must necessarily be closed under the operation of forming linear combinations. Thus, it is obvious that this is a subspace.

    Column Space of a Matrix. The set of all possible linear combinations of the columns of the matrix A is, like the row space of A, a subspace of ℜn. We denote this subspace by CS[A] and refer to this as the column space of A.

    Spanning Set. The row space and column space of a matrix A are both special cases of the more general concept of a subspace defined by a spanning set. Given any set V1, . . . , Vp of vectors in ℜn, we denote the collection of all the possible linear combinations of these vectors by span[V1, . . . , Vp] and we refer to this subspace as the subspace spanned by the vectors V1, . . . , Vp. Thus, the row space of A is the subspace spanned by the rows of A and the column space of A is the subspace spanned by the columns of A. For M = span[V1, . . . , Vp], it is obvious that M is indeed a subspace but it is not easy to determine whether an arbitrary vector in ℜn belongs to M. On the other hand, for a subspace defined by membership rule, it is necessary to prove that the set is, in fact, a subspace, but it is then easy to tell whether an arbitrary vector belongs to the subspace.
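    To illustrate how easily a membership rule can be applied, here is a NumPy sketch (not from the text, using a hypothetical matrix) that tests whether a vector belongs to N[A] or to Rng[A]; the range test uses the rank criterion of Theorem 1.2, since Y belongs to Rng[A] exactly when the system AX = Y is consistent.

```python
# Sketch (not from the text): membership tests for the null space and range.
import numpy as np

def in_null_space(A, X, tol=1e-10):
    """X is in N[A] when AX is (numerically) the zero vector."""
    return np.linalg.norm(np.asarray(A) @ np.asarray(X)) < tol

def in_range(A, Y):
    """Y is in Rng[A] when rank [A|Y] equals rank A, i.e., AX = Y is consistent."""
    A = np.asarray(A, dtype=float)
    Y = np.asarray(Y, dtype=float).reshape(-1, 1)
    return np.linalg.matrix_rank(np.hstack([A, Y])) == np.linalg.matrix_rank(A)

A = [[1.0, 2.0], [2.0, 4.0]]             # hypothetical singular matrix
print(in_null_space(A, [2.0, -1.0]))     # True: A[2, -1]^T = 0
print(in_range(A, [1.0, 2.0]))           # True: [1, 2]^T = A[1, 0]^T
print(in_range(A, [1.0, 0.0]))           # False
```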

    REPRESENTATIONS FOR AX

    It will be convenient to take note of two particular ways of expressing the product of the n × n matrix A times the n-vector X. Let Ci denote the ith column of A and Ri denote the ith row of A. First, AX can be written as a linear combination of the columns of A; i.e.,

    AX = x1C1 + x2C2 + ⋅ ⋅ ⋅ + xnCn     (1.6)

    where Ck denotes the kth column of the matrix A. Note also that the ith entry of AX is the dot product of the ith row of A with X; i.e.,

    (AX)i = RiTX = Ri ⋅ X for i = 1, . . . , n     (1.7)

    The representations (1.6) and (1.7) will be useful in establishing certain properties of the four subspaces N[A], Rng[A], RS[A], and CS[A] related to A.

    Theorem 1.7

    Theorem 1.7. For any m × n matrix A, CS[A] = Rng[A].

    LINEAR DEPENDENCE

    The vectors V1, . . . , Vp are said to be linearly dependent if and only if there exist scalars a1, . . . , ap, not all zero, such that

    a1V1 + a2V2 + ⋅ ⋅ ⋅ + apVp = 0

    If the vectors V1, . . . , Vp are not linearly dependent, they are said to be linearly independent. Clearly, the vectors V1, . . . , Vp are linearly independent if and only if a1V1 + ⋅ ⋅ ⋅ + apVp = 0 implies that a1 = ⋅ ⋅ ⋅ = ap = 0.

    Theorem 1.8 Test for Linear Dependence

    Theorem 1.8. Let A denote the p × n matrix whose rows are the row vectors V1T, . . . , VpT. Then the vectors V1, . . . , Vp are linearly dependent if and only if rank A < p.

    EXAMPLE 1.11 LINEAR DEPENDENCE

    Let Ek denote the n-vector whose kth component is a 1 and all other components are zero; i.e., EkT = [0, . . . , 0, 1, 0, . . . , 0], with the 1 in the kth position.

    The n × n matrix whose kth row is EkT for k = 1 to n is just In and, thus, has rank equal to n. Then, by Theorem 1.8, these vectors are linearly independent. Note that the vectors E1, . . . , En span all of ℜn since for an arbitrary vector XT = [x1, . . . , xn] we have X = x1E1 + ⋅ ⋅ ⋅ + xnEn; i.e., X can be written as a linear combination of the Ek’s.
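    Theorem 1.8 also gives an immediate computational test for dependence: stack the vectors as the rows of a matrix and compare its rank with p. A NumPy sketch (not from the text):

```python
# Sketch (not from the text): test vectors for linear dependence via Theorem 1.8.
import numpy as np

def linearly_dependent(vectors):
    """Stack the p vectors as the rows of A; they are dependent iff rank A < p."""
    A = np.vstack([np.asarray(v, dtype=float) for v in vectors])
    return np.linalg.matrix_rank(A) < A.shape[0]

print(linearly_dependent([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))   # False: E1, E2, E3
print(linearly_dependent([[1, 2, 3], [2, 4, 6]]))              # True: second is twice the first
```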

    Suppose the n × n matrix A has rank p < n. That is, A is row equivalent to an upper triangular matrix U having p nontrivial rows. Then, by Theorem 1.8, the
