
PEARSON

Daniel Norman
Dan Wolczuk

Introduction to Linear Algebra for Science and Engineering
Student Cheap-ass Edition

Taken from:
Introduction to Linear Algebra for Science and Engineering, Second Edition
by Daniel Norman and Dan Wolczuk

Cover Art: Courtesy of Pearson Learning Solutions.


Copyright 2012, 1995 by Pearson Education, Inc.
Published by Pearson
Upper Saddle River, New Jersey 07458
All rights reserved. No part of this book may be reproduced, in any form or by any means, without
permission in writing from the publisher.
This special edition published in cooperation with Pearson Learning Solutions.
All trademarks, service marks, registered trademarks, and registered service marks are the
property of their respective owners and are used herein for identification purposes only.

Pearson Learning Solutions, 501 Boylston Street, Suite 900, Boston, MA 02116
A Pearson Education Company
www.pearsoned.com


Contents

A Note to Students
A Note to Instructors

Chapter 1  Euclidean Vector Spaces
1.1  Vectors in ℝ² and ℝ³
     The Vector Equation of a Line in ℝ²
     Vectors and Lines in ℝ³
1.2  Vectors in ℝⁿ
     Addition and Scalar Multiplication of Vectors in ℝⁿ
     Subspaces
     Spanning Sets and Linear Independence
1.3  Length and Dot Products
     Length and Dot Products in ℝ² and ℝ³
     Length and Dot Product in ℝⁿ
     The Scalar Equation of Planes and Hyperplanes
     Surfaces in Higher Dimensions
1.4  Projections and Minimum Distance
     Projections
     Some Properties of Projections
     The Perpendicular Part
     Minimum Distance
1.5  Cross-Products and Volumes
     Cross-Products
     The Length of the Cross-Product
     Some Problems on Lines, Planes, and Distances

Chapter 2  Systems of Linear Equations
2.1  Systems of Linear Equations and Elimination
     Systems of Linear Equations
     Row Echelon Form
     Consistent Systems and Unique Solutions
     Some Shortcuts and Some Bad Moves
     A Word Problem
     A Remark on Computer Calculations
2.2  Reduced Row Echelon Form, Rank, and Homogeneous Systems
     Rank of a Matrix
     Homogeneous Linear Equations
2.3  Application to Spanning and Linear Independence
     Spanning Problems
     Linear Independence Problems
     Bases of Subspaces
2.4  Applications of Systems of Linear Equations
     Resistor Circuits in Electricity
     Planar Trusses
     Linear Programming

Chapter 3  Matrices, Linear Mappings, and Inverses
3.1  Operations on Matrices
     Equality, Addition, and Scalar Multiplication of Matrices
     The Transpose of a Matrix
     An Introduction to Matrix Multiplication
     Identity Matrix
     Block Multiplication
3.2  Matrix Mappings and Linear Mappings
     Matrix Mappings
     Linear Mappings
     Is Every Linear Mapping a Matrix Mapping?
     Compositions and Linear Combinations of Linear Mappings
3.3  Geometrical Transformations
     Rotations in the Plane
     Rotation Through Angle θ About the x₃-axis in ℝ³
3.4  Special Subspaces for Systems and Mappings: Rank Theorem
     The Matrix Representation of a System
     Solution Space and Nullspace
     Solution Set of Ax = b
     Range of L and Columnspace of A
     Rowspace of A
     Bases for Row(A), Col(A), and Null(A)
     A Summary of Facts About Rank
3.5  Inverse Matrices and Inverse Mappings
     A Procedure for Finding the Inverse of a Matrix
     Some Facts About Square Matrices and Solutions of Linear Systems
     Inverse Linear Mappings
3.6  Elementary Matrices
3.7  LU-Decomposition
     Solving Systems with the LU-Decomposition
     A Comment About Swapping Rows

Chapter 4  Vector Spaces
4.1  Spaces of Polynomials
     Addition and Scalar Multiplication of Polynomials
4.2  Vector Spaces
     Vector Spaces
     Subspaces
4.3  Bases and Dimensions
     Bases
     Obtaining a Basis from an Arbitrary Finite Spanning Set
     Extending a Linearly Independent Subset to a Basis
     Dimension
4.4  Coordinates with Respect to a Basis
4.5  General Linear Mappings
4.6  Matrix of a Linear Mapping
     The Matrix of L with Respect to the Basis B
     Change of Coordinates and Linear Mappings
4.7  Isomorphisms of Vector Spaces

Chapter 5  Determinants
5.1  Determinants in Terms of Cofactors
     The 3 × 3 Case
5.2  Elementary Row Operations and the Determinant
     The Determinant and Invertibility
     Determinant of a Product
5.3  Matrix Inverse by Cofactors and Cramer's Rule
     Cramer's Rule
5.4  Area, Volume, and the Determinant
     Area and the Determinant
     The Determinant and Volume

Chapter 6  Eigenvectors and Diagonalization
6.1  Eigenvalues and Eigenvectors
     Eigenvalues and Eigenvectors of a Mapping
     Eigenvalues and Eigenvectors of a Matrix
     Finding Eigenvectors and Eigenvalues
6.2  Diagonalization
     Some Applications of Diagonalization
     General Discussion
6.3  Powers of Matrices and the Markov Process
     Systems of Linear Difference Equations
     The Power Method of Determining Eigenvalues
6.4  Diagonalization and Differential Equations
     A Practical Solution Procedure

Chapter 7  Orthonormal Bases
7.1  Orthonormal Bases and Orthogonal Matrices
     Orthonormal Bases
     Coordinates with Respect to an Orthonormal Basis
     Change of Coordinates and Orthogonal Matrices
     Rotation of Axes in ℝ²
     A Note on Rotation Transformations and Orthonormal Bases
7.2  Projections and the Gram-Schmidt Procedure
     Projections onto a Subspace
     The Gram-Schmidt Procedure
7.3  Method of Least Squares
     Overdetermined Systems
7.4  Inner Product Spaces
     Inner Product Spaces
     The Inner Product ∫ f(x)g(x) dx
7.5  Fourier Series
     Fourier Series

Chapter 8  Symmetric Matrices and Quadratic Forms
8.1  Diagonalization of Symmetric Matrices
     The Principal Axis Theorem
8.2  Quadratic Forms
     Quadratic Forms
     Classifications of Quadratic Forms
8.3  Graphs of Quadratic Forms
     Graphs of Q(x) = k in ℝ³
8.4  Applications of Quadratic Forms
     Small Deformations
     The Inertia Tensor

Chapter 9  Complex Vector Spaces
9.1  Complex Numbers
     The Arithmetic of Complex Numbers
     The Complex Plane
     The Complex Conjugate and Division
     Polar Form
     Powers and the Complex Exponential
     n-th Roots
     Roots of Polynomial Equations
9.2  Systems with Complex Numbers
     Complex Numbers in Electrical Circuit Equations
9.3  Vector Spaces over ℂ
     Linear Mappings and Subspaces
9.4  Eigenvectors in Complex Vector Spaces
     Complex Characteristic Roots of a Real Matrix and a Real Canonical Form
     The Case of a 2 × 2 Matrix
     The Case of a 3 × 3 Matrix
     Complex Multiplication as a Matrix Mapping
9.5  Inner Products in Complex Vector Spaces
     Properties of Complex Inner Products
     The Cauchy-Schwarz and Triangle Inequalities
     Orthogonality in ℂⁿ and Unitary Matrices
9.6  Hermitian Matrices and Unitary Diagonalization

Appendix A  Answers to Mid-Section Exercises
Appendix B  Answers to Practice Problems and Chapter Quizzes

Index

A Note to Students
Linear Algebra – What Is It?
Linear algebra is essentially the study of vectors, matrices, and linear mappings. Although many pieces of linear algebra have been studied for many centuries, it did not take its current form until the mid-twentieth century. It is now an extremely important topic in mathematics because of its application to many different areas.

Most people who have learned linear algebra and calculus believe that the ideas of elementary calculus (such as limit and integral) are more difficult than those of introductory linear algebra and that most problems in calculus courses are harder than those in linear algebra courses. So, at least by this comparison, linear algebra is not hard. Still, some students find learning linear algebra difficult. I think two factors contribute to the difficulty students have.

First, students do not see what linear algebra is good for. This is why it is important to read the applications in the text; even if you do not understand them completely, they will give you some sense of where linear algebra fits into the broader picture.

Second, some students mistakenly see mathematics as a collection of recipes for solving standard problems and are uncomfortable with the fact that linear algebra is "abstract" and includes a lot of "theory." There will be no long-term payoff in simply memorizing these recipes, however; computers carry them out far faster and more accurately than any human can. That being said, practising the procedures on specific examples is often an important step toward much more important goals: understanding the concepts used in linear algebra to formulate and solve problems and learning to interpret the results of calculations. Such understanding requires us to come to terms with some theory. In this text, many of our examples will be small. However, as you work through these examples, keep in mind that when you apply these ideas later, you may very well have a million variables and a million equations. For instance, Google's PageRank system uses a matrix that has 25 billion columns and 25 billion rows; you don't want to do that by hand! When you are solving computational problems, always try to observe how your work relates to the theory you have learned.

Mathematics is useful in so many areas because it is abstract: the same good idea can unlock the problems of control engineers, civil engineers, physicists, social scientists, and mathematicians only because the idea has been abstracted from a particular setting. One technique solves many problems only because someone has established a theory of how to deal with these kinds of problems. We use definitions to try to capture important ideas, and we use theorems to summarize useful general facts about the kind of problems we are studying. Proofs not only show us that a statement is true; they can help us understand the statement, give us practice using important ideas, and make it easier to learn a given subject. In particular, proofs show us how ideas are tied together so we do not have to memorize too many disconnected facts.

Many of the concepts introduced in linear algebra are natural and easy, but some may seem unnatural and "technical" to beginners. Do not avoid these apparently more difficult ideas; use examples and theorems to see how these ideas are an essential part of the story of linear algebra. By learning the "vocabulary" and "grammar" of linear algebra, you will be equipping yourself with concepts and techniques that mathematicians, engineers, and scientists find invaluable for tackling an extraordinarily rich variety of problems.


Linear Algebra – Who Needs It?


Mathematicians
Linear algebra and its applications are a subject of continuing research. Linear algebra
is vital to mathematics because it provides essential ideas and tools in areas as diverse
as abstract algebra, differential equations, calculus of functions of several variables,
differential geometry, functional analysis, and numerical analysis.

Engineers
Suppose you become a control engineer and have to design or upgrade an automatic control system. The system may be controlling a manufacturing process or perhaps an airplane landing system. You will probably start with a linear model of the system, requiring linear algebra for its solution. To include feedback control, your system must take account of many measurements (for the example of the airplane, position, velocity, pitch, etc.), and it will have to assess this information very rapidly in order to determine the correct control responses. A standard part of such a control system is a Kalman-Bucy filter, which is not so much a piece of hardware as a piece of mathematical machinery for doing the required calculations. Linear algebra is an essential part of the Kalman-Bucy filter.

If you become a structural engineer or a mechanical engineer, you may be concerned with the problem of vibrations in structures or machinery. To understand the problem, you will have to know about eigenvalues and eigenvectors and how they determine the normal modes of oscillation. Eigenvalues and eigenvectors are some of the central topics in linear algebra.

An electrical engineer will need linear algebra to analyze circuits and systems; a civil engineer will need linear algebra to determine internal forces in static structures and to understand principal axes of strain.

In addition to these fairly specific uses, engineers will also find that they need to know linear algebra to understand systems of differential equations and some aspects of the calculus of functions of two or more variables. Moreover, the ideas and techniques of linear algebra are central to numerical techniques for solving problems of heat and fluid flow, which are major concerns in mechanical engineering. And the ideas of linear algebra underlie advanced techniques such as Laplace transforms and Fourier analysis.

Physicists
Linear algebra is important in physics, partly for the reasons described above. In addition, it is essential in applications such as the inertia tensor in general rotating motion. Linear algebra is an absolutely essential tool in quantum physics (where, for example, energy levels may be determined as eigenvalues of linear operators) and relativity (where understanding change of coordinates is one of the central issues).

Life and Social Scientists


Input/output models, described by matrices, are often used in economics, and similar ideas can be used in modelling populations where one needs to keep track of subpopulations (generations, for example, or genotypes). In all sciences, statistical analysis of data is of great importance, and much of this analysis uses linear algebra; for example, the method of least squares (for regression) can be understood in terms of projections in linear algebra.


Managers
A manager in industry will have to make decisions about the best allocation of resources: enormous amounts of computer time around the world are devoted to linear programming algorithms that solve such allocation problems. The same sorts of techniques used in these algorithms play a role in some areas of mine management. Linear algebra is essential here as well.

So who needs linear algebra? Almost every mathematician, engineer, or scientist will find linear algebra an important and useful tool.

Will these applications be explained in this book?


Unfortunately, most of these applications require too much specialized background to be included in a first-year linear algebra book. To give you an idea of how some of these concepts are applied, a few interesting applications are briefly covered in sections 1.4, 1.5, 2.4, 5.4, 6.3, 6.4, 7.3, 7.5, 8.3, 8.4, and 9.2. You will get to see many more applications of linear algebra in your future courses.

A Note to Instructors
Welcome to the second edition of Introduction to Linear Algebra for Science and
Engineering. It has been a pleasure to revise Daniel Norman's first edition for a new
generation of students and teachers. Over the past several years, I have read many articles and spoken to many colleagues and students about the difficulties faced by teachers and learners of linear algebra. In particular, it is well known that students typically find the computational problems easy but have great difficulty in understanding the abstract concepts and the theory. Inspired by this research, I developed a pedagogical approach that addresses the most common problems encountered when teaching and learning linear algebra. I hope that you will find this approach to teaching linear algebra as successful as I have.

Changes to the Second Edition

- Several worked-out examples have been added, as well as a variety of mid-section exercises (discussed below).

- Vectors in ℝⁿ are now always represented as column vectors and are denoted with the normal vector symbol (an arrow over the letter). Vectors in general vector spaces are still denoted in boldface.

- Some material has been reorganized to allow students to see important concepts early and often, while also giving greater flexibility to instructors. For example, the concepts of linear independence, spanning, and bases are now introduced in Chapter 1 in ℝⁿ, and students use these concepts in Chapters 2 and 3 so that they are very comfortable with them before being taught general vector spaces.


- The material on complex numbers has been collected and placed in Chapter 9, at the end of the text. However, if one desires, it can be distributed throughout the text appropriately.

- There is a greater emphasis on teaching the mathematical language and using mathematical notation.

- All-new figures clearly illustrate important concepts, examples, and applications.

- The text has been redesigned to improve readability.

Approach and Organization


Students typically have little trouble with computational questions, but they often struggle with abstract concepts and proofs. This is problematic because computers perform the computations in the vast majority of real-world applications of linear algebra. Human users, meanwhile, must apply the theory to transform a given problem into a linear algebra context, input the data properly, and interpret the result correctly.

The main goal of this book is to mix theory and computations throughout the course. The benefits of this approach are as follows:

- It prevents students from mistaking linear algebra as very easy and very computational early in the course and then becoming overwhelmed by abstract concepts and theories later.

- It allows important linear algebra concepts to be developed and extended more slowly.

- It encourages students to use computational problems to help understand the theory of linear algebra rather than blindly memorize algorithms.

One example of this approach is our treatment of the concepts of spanning and linear independence. They are both introduced in Section 1.2 in ℝⁿ, where they can be motivated in a geometrical context. They are then used again for matrices in Section 3.1 and polynomials in Section 4.1, before they are finally extended to general vector spaces in Section 4.2.
The following are some other features of the text's organization:

- The idea of linear mappings is introduced early in a geometrical context and is used to explain aspects of matrix multiplication, matrix inversion, and features of systems of linear equations. Geometrical transformations provide intuitively satisfying illustrations of important concepts.

- Topics are ordered to give students a chance to work with concepts in a simpler setting before using them in a much more involved or abstract setting. For example, before reaching the definition of a vector space in Section 4.2, students will have seen the 10 vector space axioms and the concepts of linear independence and spanning for three different vector spaces, and they will have had some experience in working with bases and dimensions. Thus, instead of being bombarded with new concepts at the introduction of general vector spaces, students will just be generalizing concepts with which they are already familiar.


Pedagogical Features
Since mathematics is best learned by doing, the following pedagogical elements are
included in the book.

- A selection of routine mid-section exercises is provided, with solutions included in the back of the text. These allow students to use and test their understanding of one concept before moving on to other concepts in the section.

- Practice problems are provided for students at the end of each section. See "A Note on the Exercises and Problems" below.

- Examples, theorems, and definitions are called out in the margins for easy reference.

Applications
One of the difficulties in any linear algebra course is that the applications of linear algebra are not so immediate or so intuitively appealing as those of elementary calculus. Most convincing applications of linear algebra require a fairly lengthy buildup of background that would be inappropriate in a linear algebra text. However, without some of these applications, many students would find it difficult to remain motivated to learn linear algebra. An additional difficulty is that the applications of linear algebra are so varied that there is very little agreement on which applications should be covered.

In this text we briefly discuss a few applications to give students some easy samples. Additional applications are provided on the Companion Website so that instructors who wish to cover some of them can pick and choose at their leisure without increasing the size (and hence the cost) of the book.

List of Applications

- Minimum distance from a point to a plane (Section 1.4)
- Area and volume (Section 1.5, Section 5.4)
- Electrical circuits (Section 2.4, Section 9.2)
- Planar trusses (Section 2.4)
- Linear programming (Section 2.4)
- Magic squares (Chapter 4 Review)
- Markov processes (Section 6.3)
- Differential equations (Section 6.4)
- Curve of best fit (Section 7.3)
- Overdetermined systems (Section 7.3)
- Graphing quadratic forms (Section 8.3)
- Small deformations (Section 8.4)
- The inertia tensor (Section 8.4)


Computers
As explained in "A Note on the Exercises and Problems," which follows, some problems in the book require access to appropriate computer software. Students should realize that the theory of linear algebra does not apply only to matrices of small size with integer entries. However, since there are many ideas to be learned in linear algebra, numerical methods are not discussed. Some numerical issues, such as accuracy and efficiency, are addressed in notes and problems.

A Note on the Exercises and Problems


Most sections contain mid-section exercises. These mid-section exercises have been created to allow students to check their understanding of key concepts before continuing on to new concepts in the section. Thus, when reading through a chapter, a student should always complete each exercise before continuing to read the rest of the chapter.
At the end of each section, problems are divided into A, B, C, and D problems.
The A Problems are practice problems and are intended to provide a sufficient
variety and number of standard computational problems, as well as the odd theoretical
problem for students to master the techniques of the course; answers are provided at
the back of the text. Full solutions are available in the Student Solutions Manual (sold
separately).
The B Problems are homework problems and essentially duplicates of the A problems with no answers provided, for instructors who want such exercises for homework. In a few cases, the B problems are not exactly parallel to the A problems.
The C Problems require the use of a suitable computer program. These problems are designed not only to help students familiarize themselves with using computer software to solve linear algebra problems, but also to remind students that linear algebra uses real numbers, not only integers or simple fractions.
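A C-style problem might be attacked with a few lines of NumPy, one suitable choice of software (the small system below is our own illustration, not taken from the text):

```python
import numpy as np

# A small linear system Ax = b whose entries are real numbers,
# not just integers or simple fractions.
A = np.array([[1.3, -0.7],
              [2.1,  0.4]])
b = np.array([0.5, 3.2])

x = np.linalg.solve(A, b)     # numerical solution of Ax = b

# Substitute the computed solution back into the system to check it.
print(np.allclose(A @ x, b))  # prints True
```

The point of the check in the last line is the numerical one raised above: the computed solution is accurate only up to floating-point rounding, so it is compared to b with a tolerance rather than for exact equality.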
The D Problems usually require students to work with general cases, to write simple arguments, or to invent examples. These are important aspects of mastering mathematical ideas, and all students should attempt at least some of these, and not get discouraged if they make slow progress. With effort, most students will be able to solve many of these problems and will benefit greatly in the understanding of the concepts and connections in doing so.
In addition to the mid-section exercises and end-of-section problems, there is a sample Chapter Quiz in the Chapter Review at the end of each chapter. Students should be aware that their instructors may have a different idea of what constitutes an appropriate test on this material.
At the end of each chapter, there are some Further Problems; these are similar to
the D Problems and provide an extended investigation of certain ideas or applications
of linear algebra. Further Problems are intended for advanced students who wish to
challenge themselves and explore additional concepts.

Using This Text to Teach Linear Algebra


There are many different approaches to teaching linear algebra. Although we suggest
covering the chapters in order, the text has been written to try to accommodate two
main strategies.


Early Vector Spaces


We believe that it is very beneficial to introduce general vector spaces immediately after students have gained some experience in working with a few specific examples of vector spaces. Students find it easier to generalize the concepts of spanning, linear independence, bases, dimension, and linear mappings while the earlier specific cases are still fresh in their minds. In addition, we feel that it can be unhelpful to students to have determinants available too soon. Some students are far too eager to latch onto mindless algorithms involving determinants (for example, to check linear independence of three vectors in three-dimensional space) rather than actually come to terms with the defining ideas. Finally, this allows eigenvalues, eigenvectors, and diagonalization to be highlighted near the end of the first course. If diagonalization is taught too soon, its importance can be lost on students.

Early Determinants and Diagonalization


Some reviewers have commented that they want to be able to cover determinants and diagonalization before abstract vector spaces and that in some introductory courses, abstract vector spaces may not be covered at all. Thus, this text has been written so that Chapters 5 and 6 may be taught prior to Chapter 4. (Note that all required information about subspaces, bases, and dimension for diagonalization of matrices over ℝ is covered in Chapters 1, 2, and 3.) Moreover, there is a natural flow from matrix inverses and elementary matrices at the end of Chapter 3 to determinants in Chapter 5.

A Course Outline
The following table indicates the sections in each chapter that we consider to be "central material":

Chapter | Central Material     | Optional Material
1       | 1, 2, 3, 4, 5        |
2       | 1, 2, 3              |
3       | 1, 2, 3, 4, 5, 6     |
4       | 1, 2, 3, 4, 5, 6, 7  |
5       | 1, 2, 3              |
6       | 1, 2                 | 3, 4
7       | 1, 2                 | 3, 4, 5
8       | 1, 2                 | 3, 4
9       | 1, 2, 3, 4, 5, 6     |

Supplements
We are pleased to offer a variety of excellent supplements to students and instructors
using the Second Edition.
The new Student Solutions Manual (ISBN: 978-0-321-80762-5), prepared by the author of the second edition, contains full solutions to the Practice Problems and Chapter Quizzes. It is available to students at low cost.

MyMathLab Online Course (access code required) delivers proven results in helping individual students succeed. It provides engaging experiences that personalize, stimulate, and measure learning for each student. And, it comes from a trusted partner with educational expertise and an eye on the future. To learn more about how


MyMathLab combines proven learning applications with powerful assessment, visit


www.mymathlab.com or contact your Pearson representative.
The new Instructor's Resource CD-ROM (ISBN: 978-0-321-80759-5) includes the following valuable teaching tools:

- An Instructor's Solutions Manual for all exercises in the text: Practice Problems, Homework Problems, Computer Problems, Conceptual Problems, Chapter Quizzes, and Further Problems.

- A Test Bank with a large selection of questions for every chapter of the text.

- Customizable Beamer Presentations for each chapter.

- An Image Library that includes high-quality versions of the Figures, Theorems, Corollaries, Lemmas, and Algorithms in the text.

Finally, the second edition is available as a CourseSmart eTextbook (ISBN: 978-0-321-75005-1). CourseSmart goes beyond traditional expectations, providing instant, online access to the textbook and course materials at a lower cost for students (average savings of 60%). With instant access from any computer and the ability to search the text, students will find the content they need quickly, no matter where they are. And with online tools like highlighting and note taking, students can save time and study efficiently.
Instructors can save time and hassle with a digital eTextbook that allows them
to search for the most relevant content at the very moment they need it. Whether it's
evaluating textbooks or creating lecture notes to help students with difficult concepts,
CourseSmart can make life a little easier. See all the benefits at www.coursesmart.com/
instructors or www.coursesmart.com/students.
Pearson's technology specialists work with faculty and campus course designers to ensure that Pearson technology products, assessment tools, and online course materials are tailored to meet your specific needs. This highly qualified team is dedicated to helping schools take full advantage of a wide range of educational resources by assisting in the integration of a variety of instructional materials and media formats. Your local Pearson Canada sales representative can provide you with more details about this service program.

Acknowledgments
Thanks are expressed to:

Agnieszka Wolczuk: for her support, encouragement, help with editing, and tasty snacks.

Mike La Croix: for all of the amazing figures in the text and for his assistance on editing, formatting, and LaTeX'ing.

Stephen New, Martin Pei, Barbara Csima, Emilio Paredes: for proofreading and their many valuable comments and suggestions.

Conrad Hewitt, Robert Andre, Uldis Celmins, C. T. Ng, and many other colleagues who have taught me things about linear algebra and how to teach it, as well as providing many helpful suggestions for the text.

To all of the reviewers of the text, whose comments, corrections, and recommendations have resulted in many positive improvements:


Robert Andre, University of Waterloo
Luigi Bilotto, Vanier College
Dietrich Burbulla, University of Toronto
Dr. Alistair Carr, Monash University
Gerald Cliff, University of Alberta
Antoine Khalil, CEGEP Vanier
Hadi Kharaghani, University of Lethbridge
Gregory Lewis, University of Ontario Institute of Technology
Eduardo Martinez-Pedroza, McMaster University
Dorette Pronk, Dalhousie University
Dr. Alyssa Sankey, University of New Brunswick
Manuele Santoprete, Wilfrid Laurier University
Alistair Savage, University of Ottawa
Denis Sevee, John Abbott College
Mark Solomonovich, Grant MacEwan University
Dr. Pamini Thangarajah, Mount Royal University
Dr. Chris Tisdell, The University of New South Wales
Murat Tuncali, Nipissing University
Brian Wetton, University of British Columbia

Thanks also to the many anonymous reviewers of the manuscript.

Cathleen Sullivan, John Lewis, Patricia Ciardullo, and Sarah Lukaweski: For all of their hard work in making the second edition of this text possible and for their suggestions and editing.

In addition, I thank the team at Pearson Canada for their support during the writing and production of this text.

Finally, a very special thank you to Daniel Norman and all those who contributed to the first edition.

Dan Wolczuk
University of Waterloo

CHAPTER 1

Euclidean Vector Spaces


CHAPTER OUTLINE
1.1  Vectors in ℝ² and ℝ³
1.2  Vectors in ℝⁿ
1.3  Length and Dot Products
1.4  Projections and Minimum Distance
1.5  Cross-Products and Volumes

Some of the material in this chapter will be familiar to many students, but some ideas
that are introduced here will be new to most. In this chapter we will look at operations
on and important concepts related to vectors. We will also look at some applications
of vectors in the familiar setting of Euclidean space. Most of these concepts will later
be extended to more general settings. A firm understanding of the material from this
chapter will help greatly in understanding the topics in the rest of this book.

1.1 Vectors in ℝ² and ℝ³


We begin by considering the two-dimensional plane in Cartesian coordinates. Choose
an origin O and two mutually perpendicular axes, called the x1-axis and the x2-axis, as
shown in Figure 1.1.1. Then a point P in the plane is identified by the 2-tuple (p1, p2),
called the coordinates of P, where p1 is the distance from P to the x2-axis, with p1 positive
if P is to the right of this axis and negative if P is to the left. Similarly, p2 is the distance
from P to the x1-axis, with p2 positive if P is above this axis and negative if P is below.
You have already learned how to plot graphs of equations in this plane.

Figure 1.1.1    Coordinates in the plane

For applications in many areas of mathematics, and in many subjects such as


physics and economics, it is useful to view points more abstractly. In particular, we
will view them as vectors and provide rules for adding them and multiplying them by
constants.

Definition
ℝ² is the set of all vectors of the form [x1; x2], where x1 and x2 are real numbers called
the components of the vector. Mathematically, we write

    ℝ² = { [x1; x2] | x1, x2 ∈ ℝ }

Remark
We shall use the notation x = [x1; x2] to denote vectors in ℝ².

Although we are viewing the elements of ℝ² as vectors, we can still interpret
these geometrically as points. That is, the vector p = [p1; p2] can be interpreted as the
point P(p1, p2). Graphically, this is often represented by drawing an arrow from (0, 0)
to (p1, p2), as shown in Figure 1.1.2. Note, however, that the points between (0, 0)
and (p1, p2) should not be thought of as points "on the vector." The representation of a
vector as an arrow is particularly common in physics; force and acceleration are vector
quantities that can conveniently be represented by an arrow of suitable magnitude and
direction.

Figure 1.1.2    Graphical representation of a vector.

Definition (Addition and Scalar Multiplication in ℝ²)
If x = [x1; x2], y = [y1; y2], and t ∈ ℝ, then we define addition of vectors by

    x + y = [x1; x2] + [y1; y2] = [x1 + y1; x2 + y2]

and the scalar multiplication of a vector by a factor of t, called a scalar, is defined by

    t x = t [x1; x2] = [t x1; t x2]

The addition of two vectors is illustrated in Figure 1.1.3: construct a parallelogram
with vectors x and y as adjacent sides; then x + y is the vector corresponding to the
vertex of the parallelogram opposite to the origin. Observe that the components really
are added according to the definition. This is often called the "parallelogram rule for
addition."

Figure 1.1.3    Addition of vectors p and q.
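Componentwise addition and scalar multiplication are easy to express in code. Here is a minimal Python sketch (not from the text; the list-based vectors and function names are our own illustration):

```python
def add(x, y):
    # componentwise addition: (x + y)_i = x_i + y_i
    return [xi + yi for xi, yi in zip(x, y)]

def scalar_mult(t, x):
    # scalar multiplication: (t x)_i = t * x_i
    return [t * xi for xi in x]

# parallelogram rule: the components really are added
print(add([-2, 3], [3, 4]))      # [1, 7]
print(scalar_mult(3, [1, -2]))   # [3, -6]
```

The same two functions work unchanged for vectors in ℝ³ or ℝⁿ, since the definitions are componentwise.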

EXAMPLE 1
Let x = [-2; 3] and y = [3; 4]. Then

    x + y = [-2; 3] + [3; 4] = [1; 7]

Similarly, scalar multiplication is illustrated in Figure 1.1.4. Observe that multiplication
by a negative scalar reverses the direction of the vector. It is important to note
that x − y is to be interpreted as x + (−1)y.

Figure 1.1.4    Scalar multiplication of the vector d.

EXAMPLE 2
Let a = [2; 1], v = [-1; 3], and w = [1; -2]. Calculate a + v, 3w, and 2v − w.

Solution: We get

    a + v = [2; 1] + [-1; 3] = [1; 4]

    3w = 3 [1; -2] = [3; -6]

    2v − w = 2 [-1; 3] + (−1) [1; -2] = [-2; 6] + [-1; 2] = [-3; 8]

EXERCISE 1
Let a = [3; -2], v = [0; 1], and w = [-2; 4]. Calculate each of the following and illustrate with a sketch.
(a) a + w
(b) −v
(c) (a + w) − v

The vectors e1 = [1; 0] and e2 = [0; 1] play a special role in our discussion of ℝ². We
will call the set {e1, e2} the standard basis for ℝ². (We shall discuss the concept of
a basis further in Section 1.2.) The basis vectors e1 and e2 are important because any
vector v = [v1; v2] can be written as a sum of scalar multiples of e1 and e2 in exactly one
way:

    v = v1 e1 + v2 e2

Remark
In physics and engineering, it is common to use the notation i = [1; 0] and j = [0; 1]
instead.

We will use the phrase linear combination to mean "sum of scalar multiples."
So, we have shown above that any vector x ∈ ℝ² can be written as a unique linear
combination of the standard basis vectors.

One other vector in ℝ² deserves special mention: the zero vector, 0 = [0; 0]. Some
important properties of the zero vector, which are easy to verify, are that for any
x ∈ ℝ²:
(1) 0 + x = x
(2) x + (−1)x = 0
(3) 0x = 0


The Vector Equation of a Line in ℝ²

In Figure 1.1.4, it is apparent that the set of all multiples of a vector d creates a line
through the origin. We make this our definition of a line in ℝ²: a line through the
origin in ℝ² is a set of the form

    { t d | t ∈ ℝ }

Often we do not use formal set notation but simply write the vector equation of the
line:

    x = t d,  t ∈ ℝ

The vector d is called the direction vector of the line.
Similarly, we define the line through p with direction vector d to be the set

    { p + t d | t ∈ ℝ }

which has the vector equation

    x = p + t d,  t ∈ ℝ

This line is parallel to the line with equation x = t d, t ∈ ℝ, because of the parallelogram
rule for addition. As shown in Figure 1.1.5, each point on the line through p can be
obtained from a corresponding point on the line x = t d by adding the vector p. We
say that the line has been translated by p. More generally, two lines are parallel if the
direction vector of one line is a non-zero multiple of the direction vector of the other
line.

Figure 1.1.5    The line with vector equation x = p + t d.

EXAMPLE 3

A vector equation of the line through the point P(2, −3) with direction vector d = [-4; 5] is

    x = [2; -3] + t [-4; 5],  t ∈ ℝ
EXAMPLE 4
Write the vector equation of a line through P(1, 2) parallel to the line with vector
equation

    x = t [3; 1],  t ∈ ℝ

Solution: Since they are parallel, we can choose the same direction vector. Hence, the
vector equation of the line is

    x = [1; 2] + t [3; 1],  t ∈ ℝ

EXERCISE 2
Write the vector equation of a line through P(0, 0) parallel to the line with vector
equation

    x = [2; 5] + t [1; -3],  t ∈ ℝ
Sometimes the components of a vector equation are written separately:

    x = p + t d  becomes  x1 = p1 + t d1
                          x2 = p2 + t d2,  t ∈ ℝ

This is referred to as the parametric equation of the line. The familiar scalar form
of the equation of the line is obtained by eliminating the parameter t. Provided that
d1 ≠ 0 and d2 ≠ 0,

    t = (x1 − p1)/d1 = (x2 − p2)/d2

or

    x2 = p2 + (d2/d1)(x1 − p1)

What can you say about the line if d1 = 0 or d2 = 0?

EXAMPLE 5
Write the vector, parametric, and scalar equations of the line passing through the point
P(3, 4) with direction vector [-5; 1].

Solution: The vector equation is

    x = [3; 4] + t [-5; 1],  t ∈ ℝ

So, the parametric equations are

    x1 = 3 − 5t
    x2 = 4 + t,  t ∈ ℝ

The scalar equation is x2 = 4 − (1/5)(x1 − 3).
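The conversion from a point and direction vector to points on the line, and to the scalar form, can be sketched in code. A small Python illustration (not from the text; the function names are our own):

```python
def parametric(p, d, t):
    # a point on the line x = p + t*d, computed componentwise
    return [pi + t * di for pi, di in zip(p, d)]

def scalar_form(p, d, x1):
    # eliminate t: x2 = p2 + (d2/d1)*(x1 - p1), valid when d1 != 0
    p1, p2 = p
    d1, d2 = d
    return p2 + (d2 / d1) * (x1 - p1)

# the line through P(3, 4) with direction [-5; 1]
print(parametric([3, 4], [-5, 1], 2))   # [-7, 6]
print(scalar_form([3, 4], [-5, 1], 3))  # 4.0
```

Note that `scalar_form` raises a division-by-zero error when d1 = 0, matching the restriction d1 ≠ 0 in the text.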

Directed Line Segments  For dealing with certain geometrical problems, it is
useful to introduce directed line segments. We denote the directed line segment from
point P to point Q by PQ. We think of it as an "arrow" starting at P and pointing
towards Q. We shall identify directed line segments from the origin O with the
corresponding vectors; we write OP = p, OQ = q, and so on. A directed line segment that
starts at the origin is called the position vector of the point.

For many problems, we are interested only in the direction and length of the
directed line segment; we are not interested in the point where it is located. For
example, in Figure 1.1.3, we may wish to treat the line segment QR as if it were the
same as OP. Taking our cue from this example, for arbitrary points P, Q, R in ℝ², we
define QR to be equivalent to OP if r − q = p. In this case, we have used one directed
line segment OP starting from the origin in our definition.

Figure 1.1.6    The directed line segment from P to Q.

More generally, for arbitrary points Q, R, S, and T in ℝ², we define QR to be
equivalent to ST if they are both equivalent to the same OP for some P. That is, if

    r − q = p  and  t − s = p  for the same p

We can abbreviate this by simply requiring that

    r − q = t − s
EXAMPLE 6
For points Q(1, 3), R(6, −1), S(−2, 4), and T(3, 0), we have that QR is equivalent to
ST because

    r − q = [6; -1] − [1; 3] = [5; -4] = [3; 0] − [-2; 4] = t − s
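The equivalence test r − q = t − s is a one-line componentwise check. A minimal Python sketch (the helper name is our own):

```python
def equivalent(q, r, s, t):
    # QR is equivalent to ST exactly when r - q = t - s componentwise
    return all(ri - qi == ti - si for qi, ri, si, ti in zip(q, r, s, t))

# Example 6: Q(1, 3), R(6, -1), S(-2, 4), T(3, 0)
print(equivalent([1, 3], [6, -1], [-2, 4], [3, 0]))  # True
```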

In some problems, where it is not necessary to distinguish between equivalent
directed line segments, we "identify" them (that is, we treat them as the same object)
and write PQ = RS. Indeed, we identify them with the corresponding line segment
starting at the origin, so in Example 6 we write

    QR = ST = [5; -4]

Remark
Writing QR = ST is a bit sloppy (an abuse of notation) because QR is not really
the same object as ST. However, introducing the precise language of "equivalence
classes" and more careful notation with directed line segments is not helpful at this
stage. By introducing directed line segments, we are encouraged to think about vectors
that are located at arbitrary points in space. This is helpful in solving some geometrical
problems, as we shall see below.

EXAMPLE 7
Find a vector equation of the line through P(1, 2) and Q(3, −1).

Solution: The direction of the line is

    PQ = q − p = [3; -1] − [1; 2] = [2; -3]

Hence, a vector equation of the line with direction PQ that passes through P(1, 2) is

    x = p + t PQ = [1; 2] + t [2; -3],  t ∈ ℝ

Observe in the example above that we would have the same line if we started at the
second point and "moved" toward the first point, or even if we took a direction vector
in the opposite direction. Thus, the same line is described by the vector equations

    x = [3; -1] + r [-2; 3],  r ∈ ℝ
    x = [3; -1] + s [2; -3],  s ∈ ℝ
    x = [1; 2] + t [-2; 3],  t ∈ ℝ

In fact, there are infinitely many descriptions of a line: we may choose any point on
the line, and we may use any non-zero multiple of the direction vector.

EXERCISE 3
Find a vector equation of the line through P(1, 1) and Q(−2, 2).

Vectors and Lines in ℝ³

Everything we have done so far works perfectly well in three dimensions. We choose
an origin O and three mutually perpendicular axes, as shown in Figure 1.1.7. The
x1-axis is usually pictured coming out of the page (or blackboard), the x2-axis to
the right, and the x3-axis towards the top of the picture.

Figure 1.1.7    The positive coordinate axes in ℝ³.

It should be noted that we are adopting the convention that the coordinate axes
form a right-handed system. One way to visualize a right-handed system is to spread
out the thumb, index finger, and middle finger of your right hand. The thumb is
the x1-axis, the index finger is the x2-axis, and the middle finger is the x3-axis. See
Figure 1.1.8.

Figure 1.1.8    Identifying a right-handed system.

We now define ℝ³ to be the three-dimensional analogue of ℝ².

Definition
ℝ³ is the set of all vectors of the form [x1; x2; x3], where x1, x2, and x3 are real numbers.
Mathematically, we write

    ℝ³ = { [x1; x2; x3] | x1, x2, x3 ∈ ℝ }

Definition (Addition and Scalar Multiplication in ℝ³)
If x = [x1; x2; x3], y = [y1; y2; y3], and t ∈ ℝ, then we define addition of vectors by

    x + y = [x1 + y1; x2 + y2; x3 + y3]

and the scalar multiplication of a vector by a factor of t by

    t x = [t x1; t x2; t x3]

Addition still follows the parallelogram rule. It may help you to visualize this
if you realize that two vectors in ℝ³ must lie within a plane in ℝ³ so that the two-dimensional
picture is still valid. See Figure 1.1.9.

Figure 1.1.9    Two-dimensional parallelogram rule in ℝ³.

EXAMPLE 8
Let u = [2; -1; 3], v = [-1; 0; 4], and w = [1; 5; -2]. Calculate v + u, −w, and
−v + 2w − u.

Solution: We have

    v + u = [-1; 0; 4] + [2; -1; 3] = [1; -1; 7]

    −w = (−1) [1; 5; -2] = [-1; -5; 2]

    −v + 2w − u = [1; 0; -4] + [2; 10; -4] − [2; -1; 3] = [1; 11; -11]
It is useful to introduce the standard basis for ℝ³ just as we did for ℝ². Define

    e1 = [1; 0; 0],  e2 = [0; 1; 0],  e3 = [0; 0; 1]

Then any vector v = [v1; v2; v3] can be written as the linear combination

    v = v1 e1 + v2 e2 + v3 e3

Remark
In physics and engineering, it is common to use the notation i = e1, j = e2, and k = e3
instead.

The zero vector 0 = [0; 0; 0] in ℝ³ has the same properties as the zero vector in ℝ².

Directed line segments are the same in three-dimensional space as in the two-dimensional
case.

A line through the point P in ℝ³ (corresponding to a vector p) with direction
vector d ≠ 0 can be described by a vector equation:

    x = p + t d,  t ∈ ℝ

It is important to realize that a line in ℝ³ cannot be described by a single scalar linear
equation, as in ℝ². We shall see in Section 1.3 that such an equation describes a plane
in ℝ³.

EXAMPLE 9
Find a vector equation of the line that passes through the points P(1, 5, −2) and
Q(4, −1, 3).

Solution: A direction vector is

    q − p = [4; -1; 3] − [1; 5; -2] = [3; -6; 5]

Hence, a vector equation of the line is

    x = [1; 5; -2] + t [3; -6; 5],  t ∈ ℝ

Note that the corresponding parametric equations are x1 = 1 + 3t, x2 = 5 − 6t, and
x3 = −2 + 5t.

EXERCISE 4
Find a vector equation of the line that passes through the points P(1, 2, 2) and
Q(1, −2, 3).
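Finding a direction vector from two points and generating points on the line, as in Example 9, works the same way in any dimension. A short Python sketch (the function names are our own):

```python
def direction(p, q):
    # direction vector PQ = q - p
    return [qi - pi for pi, qi in zip(p, q)]

def point_on_line(p, d, t):
    # a point x = p + t*d on the line through p with direction d
    return [pi + t * di for pi, di in zip(p, d)]

# Example 9: P(1, 5, -2) and Q(4, -1, 3)
d = direction([1, 5, -2], [4, -1, 3])
print(d)                                # [3, -6, 5]
print(point_on_line([1, 5, -2], d, 1))  # [4, -1, 3]  (t = 1 recovers Q)
```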


PROBLEMS 1.1
Practice Problems

A1 Compute each of the following linear combinations and illustrate with a sketch.
(a) [1; 2] + [2; 3]
(b) [3; 1] − [1; 4]
(c) 3 [-2; 1]
(d) −2 [1; 3]

A2 Compute each of the following linear combinations.
(a) [-3; 1] + [-2; -1]
(b) [2; -1] − [4; 2]
(c) −2 [-1; 3]
(d) (1/2) [4; -6]
(e) (2/3) [3; 1] − 2 [1/4; 1/3]

A3 Compute each of the following linear combinations.
(a) [1; 4; 2] − [3; 1; 1]
(b) [2; 0; 5] + [1; 3; -2]
(c) −3 [1; -2; 2]

A4 Let v = [2; 1; -1] and w = [-1; 3; 2]. Determine
(a) 2v − 3w
(b) −3(v + 2w) + 5v
(c) a such that w − 2a = 3v
(d) a such that a − 3v = 2a

A5 Let v = [1; -2; 4] and w = [2; 0; -3]. Determine
(a) v + (1/2)w
(b) 2(v + w) − (2v − 3w)
(c) a such that w − a = 2v
(d) a such that (1/2)a + v = w

A6 Consider the points P(2, 3, 1), Q(3, 1, −2), R(1, 4, 0), and S(−5, 1, 5). Determine PQ, PR, PS, QR,
and SR, and verify that PQ + QR = PR = PS + SR.

A7 Write a vector equation of the line passing through the given point with the given direction vector.
(a) P(3, 4), d = [-5; 1]
(b) P(2, 3), d = [-4; -6]
(c) P(2, 0, 5), d = [4; -2; -11]
(d) P(4, 1, 5), d = [-2; 0; 1]

A8 Write a vector equation for the line that passes through the given points.
(a) P(−1, 2), Q(2, −3)
(b) P(4, 1), Q(−2, −1)
(c) P(1, 3, −5), Q(−2, 1, 0)
(d) P(−2, 1, 1), Q(4, 2, 2)
(e) P(1/2, 1/4, 1), Q(−1, 1, 1/3)

A9 For each of the following lines in ℝ², determine a vector equation and parametric equations.
(a) x2 = 3x1 + 2
(b) 2x1 + 3x2 = 5

A10 (a) A set of points in ℝⁿ is collinear if all the points lie on the same line. By considering directed
line segments, give a general method for determining whether a given set of three points is collinear.
(b) Determine whether the points P(1, 2), Q(4, 1), and R(−5, 4) are collinear. Show how you decide.
(c) Determine whether the points S(1, 0, 1), T(3, −2, 3), and U(−3, 4, −1) are collinear. Show how you decide.


Homework Problems

B1 Compute each of the following linear combinations and illustrate with a sketch.
(a) [-1; 2] + [-2; 1]
(b) [-3; 1] − [2; 4]
(c) −3 [-1; 2]
(d) −3 [1; 1] − [2; 3]

B2 Compute each of the following linear combinations.
(a) [2; 3] + [-4; 1]
(b) [5; 2] − [3; 3]
(c) 2 [-2; 1]
(d) (1/4) [8; -4] − (2/3) [3; 6]

B3 Compute each of the following linear combinations.
(a) [1; 2; 3] − [-1; 1; 2]
(b) [2; 1; 4] + [3; 0; 1]
(c) −2 [1; 1; -1]
(d) (1/2) [4; -2; 6]

B4 Let v = [1; -2; 0] and w = [2; 1; -1]. Determine
(a) 2v − 3w
(b) −2(v − w) − 3w
(c) a such that w − 2a = 3v
(d) a such that 2a + 3w = v

B5 Let v = [3; 1; -2] and w = [-1; 2; 4]. Determine
(a) 3v − 2w
(b) −(1/2)v + w
(c) a such that v + a = v
(d) a such that 2a − w = 2v

B6 (a) Consider the points P(1, 4, 1), Q(4, 3, −1), R(−1, 4, 2), and S(8, 6, −5). Determine PQ,
PR, PS, QR, and SR, and verify that PQ + QR = PR = PS + SR.
(b) Consider the points P(3, −2, 1), Q(2, 7, −3), R(3, 1, 5), and S(−2, 4, −1). Determine PQ,
PR, PS, QR, and SR, and verify that PQ + QR = PR = PS + SR.

B7 Write a vector equation of the line passing through the given point with the given direction vector.
(a) P(−3, 4), d = [1; -2]
(b) P(0, 0), d = [3; 1]
(c) P(2, 3, −1), d = [0; -1; 2]
(d) P(3, 1, 2), d = [-1; 4; 1]

B8 Write a vector equation for the line that passes through the given points.
(a) P(3, 1), Q(1, 2)
(b) P(1, −2, 1), Q(0, 0, 0)
(c) P(2, −6, 3), Q(−1, 5, 2)
(d) P(1, −1, 1/2), Q(1/4, 1/2, 1)

B9 For each of the following lines in ℝ², determine a vector equation and parametric equations.
(a) x2 = −2x1 + 3
(b) x1 + 2x2 = 3

B10 (You will need the solution from Problem A10 to answer this.)
(a) Determine whether the points P(2, 1, 1), Q(1, 2, 3), and R(4, −1, −3) are collinear. Show how you decide.
(b) Determine whether the points S(1, 1, 0), T(6, 2, 1), and U(−4, 0, −1) are collinear. Show how you decide.


Computer Problems

C1 Let v1 = [1; 2; -1; 3], v2 = [0; -2; 5; 1], v3 = [3; -3; 1; -2], and v4 = [-1; 4; 2; -36].
Use computer software to evaluate each of the following.
(a) 17v1 + 5v2 − 3v3 + 42v4
(b) −1440v1 − 2341v2 − 919v3 + 669v4

Conceptual Problems

D1 Let v = [1; 2] and w = [3; 1].
(a) Find real numbers t1 and t2 such that t1v + t2w = [0; 5]. Illustrate with a sketch.
(b) Find real numbers t1 and t2 such that t1v + t2w = [x1; x2] for any x1, x2 ∈ ℝ.
(c) Use your result in part (b) to find real numbers t1 and t2 such that t1v + t2w = [-2; 3].

D2 Let P, Q, and R be points in ℝ² corresponding to vectors p, q, and r respectively.
(a) Explain in terms of directed line segments why

    PQ + QR + RP = 0

(b) Verify the equation of part (a) by expressing PQ, QR, and RP in terms of p, q, and r.

D3 Let p and d ≠ 0 be vectors in ℝ². Prove that x = p + t d, t ∈ ℝ, is a line in ℝ² passing
through the origin if and only if p is a scalar multiple of d.

D4 Let x and y be vectors in ℝ³ and t ∈ ℝ be a scalar. Prove that

    t(x + y) = t x + t y

1.2 Vectors in ℝⁿ

We now extend the ideas from the previous section to n-dimensional Euclidean
space ℝⁿ.
Students sometimes do not see the point in discussing n-dimensional space because
it does not seem to correspond to any physically realistic geometry. But, in a number
of instances, more than three dimensions are important. For example, to discuss
the motion of a particle, an engineer needs to specify its position (3 variables) and its
velocity (3 more variables); the engineer therefore has 6 variables. A scientist working
in string theory works with 11-dimensional space-time variables. An economist seeking
to model the Canadian economy uses many variables: one standard model has
more than 1500 variables. Of course, calculations in such huge models are carried
out by computer. Even so, understanding the ideas of geometry and linear algebra is
necessary to decide which calculations are required and what the results mean.


Addition and Scalar Multiplication of Vectors in ℝⁿ

Definition
ℝⁿ is the set of all vectors of the form x = [x1; …; xn], where xi ∈ ℝ. Mathematically,
we write

    ℝⁿ = { [x1; …; xn] | x1, …, xn ∈ ℝ }

Definition (Addition and Scalar Multiplication in ℝⁿ)
If x = [x1; …; xn], y = [y1; …; yn], and t ∈ ℝ, then we define addition of vectors by

    x + y = [x1 + y1; …; xn + yn]

and the scalar multiplication of a vector by a factor of t by

    t x = [t x1; …; t xn]

Theorem 1

For all w, x, y ∈ ℝⁿ and s, t ∈ ℝ we have
(1) x + y ∈ ℝⁿ (closed under addition)
(2) x + y = y + x (addition is commutative)
(3) (x + y) + w = x + (y + w) (addition is associative)
(4) There exists a vector 0 ∈ ℝⁿ such that z + 0 = z for all z ∈ ℝⁿ (zero vector)
(5) For each x ∈ ℝⁿ there exists a vector −x ∈ ℝⁿ such that x + (−x) = 0 (additive inverses)
(6) t x ∈ ℝⁿ (closed under scalar multiplication)
(7) s(t x) = (st) x (scalar multiplication is associative)
(8) (s + t) x = s x + t x (a distributive law)
(9) t(x + y) = t x + t y (another distributive law)
(10) 1 x = x (scalar multiplicative identity)

Proof: We will prove properties (1) and (2) from Theorem 1 and leave the other proofs
to the reader.

For (1), by definition,

    x + y = [x1 + y1; …; xn + yn] ∈ ℝⁿ

since xi + yi ∈ ℝ for 1 ≤ i ≤ n.

For (2),

    x + y = [x1 + y1; …; xn + yn] = [y1 + x1; …; yn + xn] = y + x

EXERCISE 1
Prove properties (5), (6), and (7) from Theorem 1.

Observe that properties (2), (3), (7), (8), (9), and (10) from Theorem 1 refer only to
the operations of addition and scalar multiplication, while the other properties, (1), (4),
(5), and (6), are about the relationship between the operations and the set ℝⁿ. These
facts should be clear in the proof of Theorem 1. Moreover, we see that the zero vector
0 of ℝⁿ is the vector [0; …; 0], and the additive inverse of x is −x = (−1)x. Note that the
zero vector satisfies the same properties as the zero vector in ℝ² and ℝ³.

Students often find properties (1) and (6) a little strange. At first glance, it seems
obvious that the sum of two vectors in ℝⁿ or the scalar multiple of a vector in ℝⁿ is
another vector in ℝⁿ. However, these properties are in fact extremely important. We
now look at subsets of ℝⁿ that have both of these properties.
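The componentwise definitions extend to any n, and properties from Theorem 1 such as commutativity and distributivity can be spot-checked numerically. A small Python sketch (not from the text; the names are our own):

```python
def add(x, y):
    # componentwise addition in R^n
    return [xi + yi for xi, yi in zip(x, y)]

def scale(t, x):
    # scalar multiplication in R^n
    return [t * xi for xi in x]

x = [1, -2, 0, 3]
y = [4, 1, -1, 2]
# property (2): addition is commutative
print(add(x, y) == add(y, x))                        # True
# property (8): (s + t)x = sx + tx, with s = 2, t = 3
print(scale(5, x) == add(scale(2, x), scale(3, x)))  # True
```

Of course, checking a property on a few sample vectors is not a proof; the proofs go through the components symbolically, as in the proof of Theorem 1.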

Subspaces

Definition (Subspace)
A non-empty subset S of ℝⁿ is called a subspace of ℝⁿ if for all vectors x, y ∈ S and
t ∈ ℝ:
(1) x + y ∈ S (closed under addition)
(2) t x ∈ S (closed under scalar multiplication)

The definition requires that a subspace be non-empty. A subspace always contains
at least one vector. In particular, it follows from (2) that if we let t = 0, then every
subspace of ℝⁿ contains the zero vector. This fact provides an easy method for
disqualifying any subsets that do not contain the zero vector as subspaces. For example,
a line in ℝ³ cannot be a subspace if it does not pass through the origin. Thus, when
checking to determine if a set S is non-empty, it makes sense to first check if 0 ∈ S.

It is easy to see that the set {0} consisting of only the zero vector in ℝⁿ is a subspace
of ℝⁿ; this is called the trivial subspace. Additionally, ℝⁿ is a subspace of itself. We
will see throughout the text that other subspaces arise naturally in linear algebra.

EXAMPLE 1
Show that S = { [x1; x2; x3] | x1 − x2 + x3 = 0 } is a subspace of ℝ³.

Solution: We observe that, by definition, S is a subset of ℝ³ and that 0 = [0; 0; 0] ∈ S
since taking x1 = x2 = x3 = 0 satisfies x1 − x2 + x3 = 0.
Let x = [x1; x2; x3], y = [y1; y2; y3] ∈ S. Then they must satisfy the condition of the set, so
x1 − x2 + x3 = 0 and y1 − y2 + y3 = 0.
To show that S is closed under addition, we must show that x + y satisfies the condition
of S. We have

    x + y = [x1 + y1; x2 + y2; x3 + y3]

and

    (x1 + y1) − (x2 + y2) + (x3 + y3) = (x1 − x2 + x3) + (y1 − y2 + y3) = 0 + 0 = 0

Hence, x + y ∈ S.
Similarly, for any t ∈ ℝ, we have t x = [t x1; t x2; t x3] and

    t x1 − t x2 + t x3 = t(x1 − x2 + x3) = t(0) = 0

So, S is closed under scalar multiplication. Therefore, S is a subspace of ℝ³.
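The closure argument in Example 1 can be illustrated numerically: membership in S is a single linear condition, and sums and scalar multiples of members satisfy it again. A Python sketch (the predicate name is our own):

```python
def in_S(x):
    # S = { [x1; x2; x3] : x1 - x2 + x3 = 0 }
    x1, x2, x3 = x
    return x1 - x2 + x3 == 0

x = [1, 3, 2]   # 1 - 3 + 2 = 0, so x is in S
y = [5, 5, 0]   # 5 - 5 + 0 = 0, so y is in S
print(in_S([xi + yi for xi, yi in zip(x, y)]))  # True: closed under addition
print(in_S([-4 * xi for xi in x]))              # True: closed under scalar multiplication
```

Again, sample checks only illustrate closure; the proof in Example 1 covers all members of S at once.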
EXAMPLE 2
Show that T = { [x1; x2] | x1 x2 = 0 } is not a subspace of ℝ².

Solution: To show that T is not a subspace, we just need to give one example showing
that T does not satisfy the definition of a subspace. We will show that T is not closed
under addition.
Observe that x = [1; 0] and y = [0; 1] are both in T, but x + y = [1; 1] ∉ T, since
1(1) ≠ 0. Thus, T is not a subspace of ℝ².

EXERCISE 2
Show that S = { [x1; x2] | 2x1 = x2 } is a subspace of ℝ² and that T = { [x1; x2] | x1 + x2 = 2 } is
not a subspace of ℝ².


Spanning Sets and Linear Independence

One of the main ways that subspaces arise is as the set of all linear combinations of
some spanning set. We next present an easy theorem and a bit more vocabulary.

Theorem 2
If {v1, …, vk} is a set of vectors in ℝⁿ and S is the set of all possible linear combinations
of these vectors,

    S = { t1 v1 + ⋯ + tk vk | t1, …, tk ∈ ℝ }

then S is a subspace of ℝⁿ.

Proof: By properties (1) and (6) of Theorem 1, t1 v1 + ⋯ + tk vk ∈ ℝⁿ, so S is a subset
of ℝⁿ. Taking ti = 0 for 1 ≤ i ≤ k, we get 0 = 0v1 + ⋯ + 0vk ∈ S, so S is non-empty.
Let x, y ∈ S. Then, for some real numbers si and ti, 1 ≤ i ≤ k, x = s1 v1 + ⋯ + sk vk
and y = t1 v1 + ⋯ + tk vk. It follows that

    x + y = (s1 + t1)v1 + ⋯ + (sk + tk)vk

so x + y ∈ S since (si + ti) ∈ ℝ. Hence, S is closed under addition.
Similarly, for all t ∈ ℝ,

    t x = (t s1)v1 + ⋯ + (t sk)vk

So, S is closed under scalar multiplication. Therefore, S is a subspace of ℝⁿ.

Definition (Span, Spanning Set)
If S is the subspace of ℝⁿ consisting of all linear combinations of the vectors v1, …, vk
∈ ℝⁿ, then S is called the subspace spanned by the set of vectors B = {v1, …, vk}, and
we say that the set B spans S. The set B is called a spanning set for the subspace S.
We denote S by

    S = Span{v1, …, vk} = Span B

EXAMPLE 3
Let v ∈ ℝ² with v ≠ 0 and consider the line L with vector equation x = t v, t ∈ ℝ. Then
L is the subspace spanned by {v}, and {v} is a spanning set for L. We write L = Span{v}.
Similarly, for v1, v2 ∈ ℝ², the set M with vector equation x = t1 v1 + t2 v2 is a
subspace of ℝ² with spanning set {v1, v2}. That is, M = Span{v1, v2}.

If v ∈ ℝ² with v ≠ 0, then we can guarantee that Span{v} represents a line in
ℝ² that passes through the origin. However, we see that the geometrical interpretation
of Span{v1, v2} depends on the choices of v1 and v2. We demonstrate this with some
examples.

EXAMPLE 4
The set S1 = Span{ [1; 1], [2; 2] } has vector equation

    x = t1 [1; 1] + t2 [2; 2] = (t1 + 2t2) [1; 1],  t1, t2 ∈ ℝ

Hence, S1 is a line in ℝ² that passes through the origin.

The set S2 = Span{ [1; 1], [-2; -2] } has vector equation

    x = t1 [1; 1] + t2 [-2; -2] = (t1 − 2t2) [1; 1],  t1, t2 ∈ ℝ

Hence, S2 represents the same line as S1. That is,

    Span{ [1; 1], [2; 2] } = Span{ [1; 1], [-2; -2] }

The set S3 = Span{ [1; 1], [0; 1] } has vector equation x = t1 [1; 1] + t2 [0; 1],
t1, t2 ∈ ℝ. That is, S3 spans the entire two-dimensional plane.

From these examples, we observe that {v1, v2} is a spanning set for ℝ² if and only
if neither v1 nor v2 is a scalar multiple of the other. This also means that neither vector
can be the zero vector. We now look at this in ℝ³.

EXAMPLE 5
The set S = Span{ [1; -1; 2], [2; 1; 1], [3; 0; 3] } has vector equation

    x = t1 [1; -1; 2] + t2 [2; 1; 1] + t3 [3; 0; 3],  t1, t2, t3 ∈ ℝ

Since [3; 0; 3] = [1; -1; 2] + [2; 1; 1], this can be written as

    x = (t1 + t3) [1; -1; 2] + (t2 + t3) [2; 1; 1],  t1, t2, t3 ∈ ℝ

So, S = Span{ [1; -1; 2], [2; 1; 1] }.
We extend this to the general case in ℝⁿ.

Theorem 3
Let v1, …, vk be vectors in ℝⁿ. If vk can be written as a linear combination of
v1, …, vk−1, then

    Span{v1, …, vk} = Span{v1, …, vk−1}

Proof: We are assuming that there exist t1, …, tk−1 ∈ ℝ such that

    vk = t1 v1 + ⋯ + tk−1 vk−1

Let x ∈ Span{v1, …, vk}. Then, there exist s1, …, sk ∈ ℝ such that

    x = s1 v1 + ⋯ + sk−1 vk−1 + sk vk
      = s1 v1 + ⋯ + sk−1 vk−1 + sk(t1 v1 + ⋯ + tk−1 vk−1)
      = (s1 + sk t1)v1 + ⋯ + (sk−1 + sk tk−1)vk−1

Thus, x ∈ Span{v1, …, vk−1}. Hence, Span{v1, …, vk} ⊆ Span{v1, …, vk−1}. Clearly,
we have Span{v1, …, vk−1} ⊆ Span{v1, …, vk} and so

    Span{v1, …, vk} = Span{v1, …, vk−1}

as required.

In fact, any vector which can be written as a linear combination of the other vectors
in the set can be removed without changing the spanned set. It is important in linear
algebra to identify when a spanning set can be simplified by removing a vector that can
be written as a linear combination of the other vectors. We will call such sets linearly
dependent. If a spanning set is as simple as possible, then we will call the set linearly
independent. To identify whether a set is linearly dependent or linearly independent,
we require a mathematical definition.

Assume that the set {v1, …, vk} is linearly dependent. Then one of the vectors, say
vi, is equal to a linear combination of some (or all) of the other vectors. Hence, we can
find scalars t1, …, tk ∈ ℝ such that

    t1 v1 + ⋯ + tk vk = 0

where ti ≠ 0. Thus, a set is linearly dependent if the equation t1 v1 + ⋯ + tk vk = 0

has a solution where at least one of the coefficients is non-zero. On the other hand, if
the set is linearly independent, then the only solution to this equation must be when all
the coefficients are 0. For example, if any coefficient is non-zero, say ti ≠ 0, then we
can write

    vi = −(t1/ti)v1 − ⋯ − (ti−1/ti)vi−1 − (ti+1/ti)vi+1 − ⋯ − (tk/ti)vk

Thus, vi ∈ Span{v1, …, vi−1, vi+1, …, vk}, and so the set can be simplified by using
Theorem 3.
We make this our mathematical definition.

Definition (Linearly Dependent, Linearly Independent)
A set of vectors {v1, …, vk} is said to be linearly dependent if there exist coefficients
t1, …, tk not all zero such that

    t1 v1 + ⋯ + tk vk = 0

A set of vectors {v1, …, vk} is said to be linearly independent if the only solution to

    t1 v1 + ⋯ + tk vk = 0

is t1 = t2 = ⋯ = tk = 0. This is called the trivial solution.

Theorem 4
If a set of vectors {v1, …, vk} contains the zero vector, then it is linearly dependent.

Proof: Assume vi = 0. Then we have

    0v1 + ⋯ + 0vi−1 + 1vi + 0vi+1 + ⋯ + 0vk = 0

Hence, the equation t1 v1 + ⋯ + tk vk = 0 has a solution with one coefficient, ti, that is
non-zero. So, by definition, the set is linearly dependent.

EXAMPLE 6
Show that the set { [1; -2; 3], [2; 1; -1], [4; -3; 5] } is linearly dependent.

Solution: We consider

    t1 [1; -2; 3] + t2 [2; 1; -1] + t3 [4; -3; 5] = [0; 0; 0]

Using operations on vectors, we get

    [t1 + 2t2 + 4t3; -2t1 + t2 - 3t3; 3t1 - t2 + 5t3] = [0; 0; 0]

Since vectors are equal only if their corresponding entries are equal, this gives us three
equations in three unknowns:

    t1 + 2t2 + 4t3 = 0
    -2t1 + t2 - 3t3 = 0
    3t1 - t2 + 5t3 = 0

Solving using substitution and elimination, we find that there are in fact infinitely many
possible solutions. One is t1 = 2, t2 = 1, t3 = −1. Hence, the set is linearly dependent.

EXERCISE 3
Determine whether { [1; 0; 1], [1; 1; 0], [0; 1; 1] } is linearly dependent or linearly independent.
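A claimed dependence relation is easy to verify: substitute the coefficients into t1 v1 + ⋯ + tk vk and check that every component is zero. A Python sketch (our own illustration, using a dependent set in which v3 = 2v1 + v2):

```python
def linear_combination(coeffs, vectors):
    # t1*v1 + ... + tk*vk, computed componentwise
    n = len(vectors[0])
    return [sum(t * v[i] for t, v in zip(coeffs, vectors)) for i in range(n)]

# a dependent set: v3 = 2*v1 + v2
v1, v2, v3 = [1, -2, 3], [2, 1, -1], [4, -3, 5]
# the non-trivial solution t1 = 2, t2 = 1, t3 = -1 gives the zero vector
print(linear_combination([2, 1, -1], [v1, v2, v3]))  # [0, 0, 0]
```

Finding the coefficients in the first place requires solving the system of n equations in k unknowns, which is the subject of the next chapter.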

Remark
Observe that determining whether a set {v1, …, vk} in ℝⁿ is linearly dependent or
linearly independent requires determining the solutions of the equation

    t1 v1 + ⋯ + tk vk = 0

However, this equation actually represents n equations (one for each entry of the
vectors) in k unknowns t1, …, tk. In the next chapter, we will look at how to efficiently
solve such systems of equations.

What we have derived above is that the simplest spanning set for a subspace S is
one that is linearly independent. Hence, we make the following definition.

Definition (Basis)
If {v1, …, vk} is a spanning set for a subspace S of ℝⁿ and {v1, …, vk} is linearly
independent, then {v1, …, vk} is called a basis for S.

EXAMPLE 7
Let v1 = [1; -1; 2], v2 = [2; 1; 1], and v3 = [3; 0; 3], and let S be the subspace of ℝ³
given by S = Span{v1, v2, v3}. Find a basis for S.

Solution: Observe that {v1, v2, v3} is linearly dependent, since

    v1 + v2 − v3 = 0

In particular, we can write v3 as a linear combination of v1 and v2. Hence, by
Theorem 3,

    Span{v1, v2, v3} = Span{v1, v2}

Moreover, observe that the only solution to

    t1 v1 + t2 v2 = 0

is t1 = t2 = 0 since neither v1 nor v2 is a scalar multiple of the other. Hence, {v1, v2} is
linearly independent.
Therefore, {v1, v2} is linearly independent and spans S and so it is a basis for S.

Bases (the plural of basis) will be extremely important throughout the remainder
of the book. At this point, however, we just define the following very important basis.

Definition (Standard Basis for ℝⁿ)
In ℝⁿ, let ei represent the vector whose i-th component is 1 and all other components
are 0. The set {e1, …, en} is called the standard basis for ℝⁿ.

Observe that this definition matches that of the standard basis for ℝ² and ℝ³ given
in Section 1.1.

EXAMPLE 8
The standard basis for ℝ³ is {e1, e2, e3} = { [1; 0; 0], [0; 1; 0], [0; 0; 1] }.
It is linearly independent since the only solution to

    t1 e1 + t2 e2 + t3 e3 = 0

is t1 = t2 = t3 = 0. Moreover, it is a spanning set for ℝ³ since every vector x = [x1; x2; x3] ∈
ℝ³ can be written as a linear combination of the basis vectors. In particular,

    [x1; x2; x3] = x1 [1; 0; 0] + x2 [0; 1; 0] + x3 [0; 0; 1]

Remark
Compare the result of Example 8 with the meaning of point notation P(a, b, c). When
we say P(a, b, c) we mean the point P having amount a in the x-direction, amount b
in the y-direction, and amount c in the z-direction. So, observe that the standard basis
vectors represent our usual coordinate axes.
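The decomposition in Example 8 is just reading off components: the coefficient of ei is the i-th entry of the vector. A Python sketch (the function names are our own):

```python
def standard_basis(n):
    # e_i has a 1 in position i and 0 elsewhere
    return [[1 if j == i else 0 for j in range(n)] for i in range(n)]

def decompose(x, basis):
    # rebuild x as x1*e1 + ... + xn*en
    n = len(x)
    return [sum(x[i] * basis[i][j] for i in range(n)) for j in range(n)]

e = standard_basis(3)
print(e)                         # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(decompose([7, -2, 5], e))  # [7, -2, 5]
```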

EXERCISE 4
State the standard basis for ℝ⁵. Prove that it is linearly independent and show that it is
a spanning set for ℝ⁵.

Surfaces in Higher Dimensions

We can now extend our geometrical concepts of lines and planes to ℝⁿ for n > 3. To
match what we did in ℝ² and ℝ³, we make the following definitions.

Definition (Line in ℝⁿ)
Let p, v ∈ ℝⁿ with v ≠ 0. Then we call the set with vector equation x = p + t1 v,
t1 ∈ ℝ, a line in ℝⁿ that passes through p.

Definition (Plane in ℝⁿ)
Let v1, v2, p ∈ ℝⁿ, with {v1, v2} being a linearly independent set. Then the set with
vector equation x = p + t1 v1 + t2 v2, t1, t2 ∈ ℝ, is called a plane in ℝⁿ that passes
through p.

Definition (Hyperplane in ℝⁿ)
Let v1, …, vn−1, p ∈ ℝⁿ, with {v1, …, vn−1} being linearly independent. Then the set
with vector equation x = p + t1 v1 + ⋯ + tn−1 vn−1, ti ∈ ℝ, is called a hyperplane in ℝⁿ
that passes through p.
EXAMPLE 9

The span of the given set of three vectors in ℝ⁴ is a hyperplane in ℝ⁴, since the set is linearly independent.

EXAMPLE 10

Show that the span of the given set of three vectors in ℝ³ defines a plane in ℝ³.

Solution: Observe that the set is linearly dependent, since one of the three vectors is a linear combination of the other two. Hence, the simplified vector equation of the set spanned by these vectors uses only the remaining two vectors. Since that pair of vectors is linearly independent, this is the vector equation of a plane passing through the origin in ℝ³.

PROBLEMS 1.2
Practice Problems

A1 Compute each of the given linear combinations.

A2 For each of the given sets, show that the set is or is not a subspace of the appropriate ℝⁿ.

A3 Determine whether the given sets are subspaces of ℝ⁴. Explain.

A4 Show that each of the given sets is linearly dependent. Do so by writing a non-trivial linear combination of the vectors that equals the zero vector.

A5 Determine if the given sets represent lines, planes, or hyperplanes in ℝ⁴. Give a basis for each.

A6 Let p, d ∈ ℝⁿ. Prove that the set with vector equation x = p + td, t ∈ ℝ, is a subspace of ℝⁿ if and only if p is a scalar multiple of d.

A7 Suppose that {v1, ..., vk} is a linearly independent set in ℝⁿ. Prove that any non-empty subset of {v1, ..., vk} is linearly independent.

Homework Problems

B1 Compute each of the given linear combinations.

B2 For each of the given sets, show that the set is or is not a subspace of the appropriate ℝⁿ.

B3 Determine whether the given sets are subspaces of ℝ⁴. Explain.

B4 Show that each of the given sets is linearly dependent by writing a non-trivial linear combination of the vectors that equals the zero vector.

B5 Determine if the given sets represent lines, planes, or hyperplanes in ℝ⁴. Give a basis for each.

Computer Problems

C1 Let v1, v2, v3, v4 be the given vectors in ℝ⁴. Use computer software to evaluate each of the following.
(a) 3v1 − 2v2 + 5v3 − 3v4
(b) 2.4v1 − 1.3v2 + √2 v3 − √3 v4

Conceptual Problems

D1 Prove property (8) from Theorem 1.

D2 Prove property (9) from Theorem 1.

D3 Let U and V be subspaces of ℝⁿ.
(a) Prove that the intersection of U and V is a subspace of ℝⁿ.
(b) Give an example to show that the union of two subspaces of ℝⁿ does not have to be a subspace of ℝⁿ.
(c) Define U + V = {u + v | u ∈ U, v ∈ V}. Prove that U + V is a subspace of ℝⁿ.

D4 Pick vectors p, v1, v2, and v3 in ℝ⁴ such that the vector equation x = p + t1v1 + t2v2 + t3v3
(a) Is a hyperplane not passing through the origin
(b) Is a plane passing through the origin
(c) Is the point (1, 3, 1, 1)
(d) Is a line passing through the origin

D5 Let B = {v1, ..., vk} be a linearly independent set of vectors in ℝⁿ and let x ∉ Span B. Prove that {v1, ..., vk, x} is linearly independent.

D6 Let B = {v1, ..., vk} be a linearly independent set of vectors in ℝⁿ. Prove that every vector in Span B can be written as a unique linear combination of the vectors in B.

D7 Let v1, v2 ∈ ℝⁿ and let s and t be fixed real numbers with t ≠ 0. Prove that

Span{v1, v2} = Span{v1, sv1 + tv2}

D8 Let v1, v2, v3 ∈ ℝⁿ. State whether each of the following statements is true or false. If the statement is true, explain briefly. If the statement is false, give a counterexample.
(a) If v2 = tv1 for some real number t, then {v1, v2} is linearly dependent.
(b) If v1 is not a scalar multiple of v2, then {v1, v2} is linearly independent.
(c) If {v1, v2, v3} is linearly dependent, then v1 can be written as a linear combination of v2 and v3.
(d) If v1 can be written as a linear combination of v2 and v3, then {v1, v2, v3} is linearly dependent.
(e) {v1} is not a subspace of ℝⁿ.
(f) Span{v1} is a subspace of ℝⁿ.


1.3 Length and Dot Products


In many physical applications, we are given measurements in terms of angles and
magnitudes. We must convert this data into vectors so that we can apply the tools of
linear algebra to solve problems. For example, we may need to find a vector represent
ing the path (and speed) of a plane flying northwest at 1300 km/h. To do this, we need
to identify the length of a vector and the angle between two vectors. In this section, we
see how we can calculate both of these quantities with the dot product operator.

Length and Dot Products in ℝ² and ℝ³

The length of a vector in ℝ² is defined by the usual distance formula (that is, Pythagoras' Theorem), as in Figure 1.3.10.

Definition
Length in ℝ²

If x = [x1, x2] ∈ ℝ², its length is defined to be

‖x‖ = √(x1² + x2²)

Figure 1.3.10   Length in ℝ².

For vectors in ℝ³, the formula for the length can be obtained from a two-step calculation using the formula for ℝ², as shown in Figure 1.3.11. Consider the point X(x1, x2, x3) and let P be the point P(x1, x2, 0). Observe that OPX is a right triangle, so that

‖OX‖² = ‖OP‖² + ‖PX‖² = x1² + x2² + x3²

Definition
Length in ℝ³

If x = [x1, x2, x3] ∈ ℝ³, its length is defined to be

‖x‖ = √(x1² + x2² + x3²)

One immediate application of this formula is to calculate the distance between two points. In particular, if we have points P and Q, then the distance between them is the length of the directed line segment PQ.

Figure 1.3.11   Length in ℝ³.

EXAMPLE 1

Find the distance between the points P(−1, 3, 4) and Q(2, −5, 1) in ℝ³.

Solution: We have

PQ = [2 − (−1), −5 − 3, 1 − 4] = [3, −8, −3]

Hence, the distance between the two points is

‖PQ‖ = √(3² + (−8)² + (−3)²) = √82
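The distance computation in Example 1 can be checked numerically. The following is a sketch in Python (the helper name `distance` is ours, not the text's):

```python
import math

def distance(P, Q):
    """Length of the directed line segment PQ between points P and Q."""
    return math.sqrt(sum((q - p) ** 2 for p, q in zip(P, Q)))

# Example 1: P(-1, 3, 4) and Q(2, -5, 1)
d = distance((-1, 3, 4), (2, -5, 1))
assert abs(d - math.sqrt(82)) < 1e-12
```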

Angles and the Dot Product

Determining the angle between two vectors in ℝ² leads to the important idea of the dot product of two vectors. Consider Figure 1.3.12. The Law of Cosines gives

‖PQ‖² = ‖OP‖² + ‖OQ‖² − 2‖OP‖ ‖OQ‖ cos θ     (1.1)

Substituting OP = p = [p1, p2], OQ = q = [q1, q2], and PQ = q − p = [q1 − p1, q2 − p2] into (1.1) and simplifying gives

p1q1 + p2q2 = ‖p‖ ‖q‖ cos θ

For vectors in ℝ³, a similar calculation gives

p1q1 + p2q2 + p3q3 = ‖p‖ ‖q‖ cos θ

Observe that if p = q, then θ = 0 radians, and we get

p1² + p2² + p3² = ‖p‖²

This matches our definition of length in ℝ³ above. Thus, we see that the formula on the left-hand side of these equations defines the angle between vectors and also the length of a vector.

Figure 1.3.12   ‖PQ‖² = ‖OP‖² + ‖OQ‖² − 2‖OP‖ ‖OQ‖ cos θ.

Thus, we define the dot product of two vectors x = [x1, x2] and y = [y1, y2] in ℝ² by

x · y = x1y1 + x2y2

Similarly, the dot product of vectors x = [x1, x2, x3] and y = [y1, y2, y3] in ℝ³ is defined by

x · y = x1y1 + x2y2 + x3y3

Thus, in ℝ² and ℝ³, the cosine of the angle between vectors x and y can be calculated by means of the formula

cos θ = (x · y) / (‖x‖ ‖y‖)     (1.2)

where θ is always chosen to satisfy 0 ≤ θ ≤ π.

EXAMPLE 2

Find the angle in ℝ³ between v = [1, 4, −2] and w = [3, −1, 4].

Solution: We have

v · w = 1(3) + 4(−1) + (−2)(4) = −9
‖v‖ = √(1 + 16 + 4) = √21
‖w‖ = √(9 + 1 + 16) = √26

Hence,

cos θ = (v · w) / (‖v‖ ‖w‖) = −9 / (√21 √26) ≈ −0.38516

So θ ≈ 1.966 radians. (Note that since cos θ is negative, θ is between π/2 and π.)
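Formula (1.2) translates directly into code. A minimal Python sketch (helper names ours) reproducing the numbers in Example 2:

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

def angle(x, y):
    """Angle between x and y in radians, from cos(theta) = x.y / (||x|| ||y||)."""
    return math.acos(dot(x, y) / (norm(x) * norm(y)))

# Example 2: v = [1, 4, -2], w = [3, -1, 4]
theta = angle([1, 4, -2], [3, -1, 4])
print(round(theta, 3))   # 1.966
```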

EXAMPLE 3

Find the angle in ℝ² between v = [1, −2] and w = [2, 1].

Solution: We have v · w = 1(2) + (−2)(1) = 0. Hence, cos θ = (v · w) / (‖v‖ ‖w‖) = 0. Thus, θ = π/2 radians. That is, v and w are perpendicular to each other.

EXERCISE 1

Find the angle in ℝ³ between the given vectors u and w.

Length and Dot Product in ℝⁿ

We now extend everything we did in ℝ² and ℝ³ to ℝⁿ. We begin by defining the dot product.

Definition
Dot Product

Let x = [x1, ..., xn] and y = [y1, ..., yn] be vectors in ℝⁿ. Then the dot product of x and y is

x · y = x1y1 + x2y2 + ··· + xnyn

Remark

The dot product is also sometimes called the scalar product or the standard inner product.

From this definition, some important properties follow.

Theorem 1

Let x, y, z ∈ ℝⁿ and t ∈ ℝ. Then,

(1) x · x ≥ 0, and x · x = 0 if and only if x = 0
(2) x · y = y · x
(3) x · (y + z) = x · y + x · z
(4) (tx) · y = t(x · y) = x · (ty)

Proof: We leave the proof of these properties to the reader.

Because of property (1), we can now define the length of a vector in ℝⁿ. The word norm is often used as a synonym for length when we are speaking of vectors.

Definition
Norm

Let x = [x1, ..., xn] ∈ ℝⁿ. We define the norm or length of x by

‖x‖ = √(x · x) = √(x1² + ··· + xn²)

EXAMPLE 4

Let x = [2, 1, 3, −1] and y = [1/3, −2/3, 0, −2/3]. Find ‖x‖ and ‖y‖.

Solution: We have

‖x‖ = √(2² + 1² + 3² + (−1)²) = √15
‖y‖ = √((1/3)² + (−2/3)² + 0² + (−2/3)²) = √(1/9 + 4/9 + 0 + 4/9) = 1

EXERCISE 2

For the given vectors x and y, determine ‖x‖ and ‖y‖.

We now give some important properties of the norm in ℝⁿ.

Theorem 2

Let x, y ∈ ℝⁿ and t ∈ ℝ. Then

(1) ‖x‖ ≥ 0, and ‖x‖ = 0 if and only if x = 0
(2) ‖tx‖ = |t| ‖x‖
(3) |x · y| ≤ ‖x‖ ‖y‖, with equality if and only if {x, y} is linearly dependent
(4) ‖x + y‖ ≤ ‖x‖ + ‖y‖

Proof: Property (1) of Theorem 2 follows immediately from property (1) of Theorem 1.

(2) ‖tx‖ = √((tx1)² + ··· + (txn)²) = √(t²(x1² + ··· + xn²)) = |t| ‖x‖

(3) Suppose that {x, y} is linearly dependent. Then either y = 0 or x = ty. If y is the zero vector, then both sides are equal to 0. If x = ty, then

|x · y| = |(ty) · y| = |t| ‖y‖² = ‖ty‖ ‖y‖ = ‖x‖ ‖y‖

Suppose that {x, y} is linearly independent. Then tx + y ≠ 0 for any t ∈ ℝ. Therefore, by property (1), we have (tx + y) · (tx + y) > 0 for any t ∈ ℝ. Use property (3) of Theorem 1 to expand, and we obtain

(x · x)t² + (2x · y)t + (y · y) > 0,  for all t ∈ ℝ     (1.3)

Note that x · x > 0 since x ≠ 0. Now a quadratic expression At² + Bt + C with A > 0 is always positive if and only if the corresponding equation has no real roots. From the quadratic formula, this is true if and only if B² − 4AC < 0. Thus, inequality (1.3) implies that

4(x · y)² − 4(x · x)(y · y) < 0

which can be simplified to the required inequality.

(4) Observe that the required statement is equivalent to

‖x + y‖² ≤ (‖x‖ + ‖y‖)²

The squared form will allow us to use the dot product conveniently. Thus, we consider

‖x + y‖² − (‖x‖ + ‖y‖)² = (x + y) · (x + y) − (‖x‖² + 2‖x‖ ‖y‖ + ‖y‖²)
  = x · x + x · y + y · x + y · y − (x · x + 2‖x‖ ‖y‖ + y · y)
  = 2x · y − 2‖x‖ ‖y‖
  ≤ 0   by (3)

Remark
Property (3) is called the Cauchy-Schwarz Inequality (or Cauchy-Schwarz-Buniakowski Inequality). Property (4) is the Triangle Inequality.
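Both inequalities are easy to spot-check numerically. A Python sketch (sample vectors and helper names are ours):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

# Spot-check the two inequalities on arbitrary sample vectors
x = [2, 1, 3, -1]
y = [1, -2, 0, 4]

# Cauchy-Schwarz: |x . y| <= ||x|| ||y||
assert abs(dot(x, y)) <= norm(x) * norm(y)

# Triangle inequality: ||x + y|| <= ||x|| + ||y||
s = [a + b for a, b in zip(x, y)]
assert norm(s) <= norm(x) + norm(y)
```

Of course, a few sample checks are not a proof; the theorem's argument above covers all vectors at once.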

EXERCISE 3

Prove that the vector x̂ = (1/‖x‖)x is parallel to x and satisfies ‖x̂‖ = 1.

Definition
Unit Vector

A vector x ∈ ℝⁿ such that ‖x‖ = 1 is called a unit vector.

We will see that unit vectors can be very useful. We often want to find a unit vector that has the same direction as a given vector x. Using the result in Exercise 3, we see that this is the vector x̂ = (1/‖x‖)x.

We could now define the angle between vectors x and y in ℝⁿ by matching equation (1.2). However, in linear algebra we are generally interested only in whether two vectors are perpendicular. To agree with the result of Example 3, we make the following definition.

Definition
Orthogonal

Two vectors x and y in ℝⁿ are orthogonal to each other if and only if x · y = 0.

Notice that this definition implies that 0 is orthogonal to every vector in ℝⁿ.

EXAMPLE 5

Let v = [1, 0, 3, −2], w = [2, 3, 0, 1], and z = [−1, −1, 1, 2]. Show that v is orthogonal to w but v is not orthogonal to z.

Solution: We have v · w = 1(2) + 0(3) + 3(0) + (−2)(1) = 0, so they are orthogonal. On the other hand, v · z = 1(−1) + 0(−1) + 3(1) + (−2)(2) = −2, so they are not orthogonal.
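A small Python sketch (helper names ours) that checks the orthogonality claims of Example 5 and also normalizes a vector to unit length, as in Exercise 3:

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

def normalize(x):
    """Unit vector in the direction of a non-zero x."""
    n = norm(x)
    return [a / n for a in x]

# The vectors of Example 5
v, w, z = [1, 0, 3, -2], [2, 3, 0, 1], [-1, -1, 1, 2]
assert dot(v, w) == 0        # orthogonal
assert dot(v, z) == -2       # not orthogonal

# Normalizing v gives a unit vector in the same direction
assert abs(norm(normalize(v)) - 1) < 1e-12
```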

The Scalar Equation of Planes and Hyperplanes

We saw in Section 1.2 that a plane can be described by the vector equation x = p + t1v1 + t2v2, t1, t2 ∈ ℝ, where {v1, v2} is linearly independent. In many problems, it is more useful to have a scalar equation that represents the plane. We now look at how to use the dot product to find such an equation.

Suppose that we want to find the equation of a plane that passes through the point P(p1, p2, p3). Moreover, suppose that we can find a vector n = [n1, n2, n3], called the normal vector of the plane, that is orthogonal to any directed line segment PQ lying in the plane. (That is, n is orthogonal to PQ for any point Q in the plane; see Figure 1.3.13.) To find the equation of this plane, let X(x1, x2, x3) be any point on the plane. Then n is orthogonal to PX, so

0 = n · PX = n · (x − p)

This equation, which must be satisfied by the coordinates of a point X in the plane, can be written in the form

n1x1 + n2x2 + n3x3 = n1p1 + n2p2 + n3p3

This is the standard equation of this plane. For computational purposes, the form n · (x − p) = 0 is often easiest to use.

Figure 1.3.13   The normal n is orthogonal to every directed line segment lying in the plane.

EXAMPLE 6

Find the scalar equation of the plane that passes through the point P(2, 3, −1) and has normal vector n = [1, −4, 1].

Solution: The equation is

0 = n · (x − p) = 1(x1 − 2) + (−4)(x2 − 3) + 1(x3 + 1)

or

x1 − 4x2 + x3 = 1(2) + (−4)(3) + 1(−1) = −11
It is important to note that we can reverse the reasoning that leads to the equation of the plane in order to identify the set of points that satisfies an equation of the form

n1x1 + n2x2 + n3x3 = d

where n = [n1, n2, n3] ≠ 0. If n1 ≠ 0, this can be written as n · (x − p) = 0 with p = [d/n1, 0, 0]. This equation describes a plane through the point P(d/n1, 0, 0) with normal vector n. If n2 ≠ 0, we could combine the d with the x2 term and find that the plane passes through the point P(0, d/n2, 0). In fact, we could find a point in the plane in many ways, but the normal vector will always be a non-zero scalar multiple of n.

EXAMPLE 7

Describe the set of points in ℝ³ that satisfies 5x1 − 6x2 + 7x3 = 11.

Solution: We wish to rewrite the equation in the form n · (x − p) = 0. Using our work above, we get

5(x1 − 11/5) − 6(x2 − 0) + 7(x3 − 0) = 0

Thus, we identify the set as a plane with normal vector n = [5, −6, 7] passing through the point (11/5, 0, 0). Alternatively, if x1 = x3 = 0, we find that x2 = −11/6, so the plane passes through (0, −11/6, 0).

Two planes are defined to be parallel if the normal vector to one plane is a non-zero scalar multiple of the normal vector of the other plane. Thus, for example, the plane x1 + 2x2 − x3 = 1 is parallel to the plane 2x1 + 4x2 − 2x3 = 7.

Two planes are orthogonal to each other if their normal vectors are orthogonal. For example, the plane x1 + x2 + x3 = 0 is orthogonal to the plane x1 + x2 − 2x3 = 0 since

[1, 1, 1] · [1, 1, −2] = 1 + 1 − 2 = 0

EXAMPLE 8

Find a scalar equation of the plane that contains the point P(2, 4, −1) and is parallel to the plane 2x1 + 3x2 − 5x3 = 6.

Solution: An equation of the plane must be of the form 2x1 + 3x2 − 5x3 = d since the planes are parallel. The plane must pass through P, so we find that a scalar equation of the plane is

2x1 + 3x2 − 5x3 = 2(2) + 3(4) + (−5)(−1) = 21

EXAMPLE 9

Find a scalar equation of a plane that contains the point P(3, −1, 3) and is orthogonal to the plane x1 − 2x2 + 4x3 = 2.

Solution: The normal vector must be orthogonal to [1, −2, 4], so we pick n = [0, 2, 1]. Thus, an equation of this plane must be of the form 2x2 + x3 = d. Since the plane must pass through P, we find that a scalar equation of this plane is

2x2 + x3 = 0(3) + 2(−1) + 1(3) = 1

EXERCISE 4

Find a scalar equation of the plane that contains the point P(1, 2, 3) and is parallel to the plane x1 − 3x2 − 2x3 = −5.

The Scalar Equation of a Hyperplane   Repeating what we did above, we can find the scalar equation of a hyperplane in ℝⁿ. In particular, if we have a vector n that is orthogonal to any directed line segment PQ lying in the hyperplane, then for any point X(x1, ..., xn) in the hyperplane, we have

0 = n · PX = n · (x − p)

As before, we can rearrange this as

0 = n · x − n · p
n · x = n · p
n1x1 + ··· + nnxn = n · p

Thus, we see that a single scalar equation in ℝⁿ represents a hyperplane in ℝⁿ.

EXAMPLE 10

Find the scalar equation of the hyperplane in ℝ⁴ that has normal vector n = [2, 3, −2, 1] and passes through the point P(1, 0, 2, −1).

Solution: The equation is

2x1 + 3x2 − 2x3 + x4 = 2(1) + 3(0) + (−2)(2) + 1(−1) = −3

PROBLEMS 1.3
Practice Problems

A1 Calculate the lengths of the given vectors.

A2 Find a unit vector in the direction of each of the given vectors.

A3 Determine the distance from P to Q for
(a) P(2, 3) and Q(−4, 1)
(b) P(1, 1, −2) and Q(−3, 1, 1)
(c) P(4, −6, 1) and Q(−3, 5, 1)
(d) P(2, 1, 1, 5) and Q(4, 6, −2, 1)

A4 Verify the triangle inequality and the Cauchy-Schwarz inequality for the given vectors x and y.

A5 Determine whether each given pair of vectors is orthogonal.

A6 Determine all values of k for which each given pair of vectors is orthogonal.

A7 Find a scalar equation of the plane that contains the given point with the given normal vector.
(a) P(−1, 2, −3)   (b) P(2, 5, 4)   (c) P(1, −1, 1)   (d) P(2, 1, 1)

A8 Determine a scalar equation of the hyperplane that passes through the given point with the given normal vector.
(a) P(1, 1, −1)   (b) P(2, −2, 0, 1)   (c) P(0, 0, 0, 0)   (d) P(1, 0, 1, 2, 1)

A9 Determine a normal vector for the hyperplane.
(a) 2x1 + x2 = 3 in ℝ²
(b) 3x1 − 2x2 + 3x3 = 7 in ℝ³
(c) −4x1 + 3x2 − 5x3 − 6 = 0 in ℝ³
(d) x1 − x2 + 2x3 − 3x4 = 5 in ℝ⁴
(e) x1 + x2 − x3 + 2x4 − x5 = 0 in ℝ⁵

A10 Find an equation for the plane through the given point and parallel to the given plane.
(a) P(1, −3, −1), 2x1 − 3x2 + 5x3 = 17
(b) P(0, −2, 4), x2 = 0
(c) P(1, 2, 1), x1 − x2 + 3x3 = 5

A11 Consider the statement "If n · v = n · w, then v = w."
(a) If the statement is true, prove it. If it is false, provide a counterexample.
(b) If we specify n ≠ 0, does this change the result?

Homework Problems

B1 Calculate the lengths of the given vectors.

B2 Find a unit vector in the direction of each of the given vectors.

B3 Determine the distance from P to Q for
(a) P(1, −3) and Q(2, 3)
(b) P(−2, −2, 5) and Q(−4, 1, 4)
(c) P(3, 1, −3) and Q(−1, 4, 5)
(d) P(5, −2, −3, 6) and Q(2, 5, −4, 3)

B4 Verify the triangle inequality and the Cauchy-Schwarz inequality for the given vectors x and y.

B5 Determine whether each given pair of vectors is orthogonal.

B6 Determine all values of k for which each given pair of vectors is orthogonal.

B7 Find a scalar equation of the plane that contains the given point with the given normal vector.
(a) P(−3, −3, 1)   (b) P(6, −2, 5)   (c) P(0, 0, 0)   (d) P(1, 1, 1)

B8 Determine a scalar equation of the hyperplane that passes through the given point with the given normal vector.
(a) P(2, 1, 1, 5)   (b) P(3, 1, 0, 7)   (c) P(0, 0, 0, 0)   (d) P(1, 2, 0, 1, 1)

B9 Determine a normal vector for the given hyperplane.
(a) 2x1 + 3x2 = 0 in ℝ²
(b) −x1 − 2x2 + 5x3 = 7 in ℝ³
(c) x1 + 4x2 − x4 = 2 in ℝ⁴
(d) x1 + x2 + x3 − 2x4 = 5 in ℝ⁴
(e) x1 − x5 = 0 in ℝ⁵
(f) x1 + x2 − 2x3 + x4 − x5 = 1 in ℝ⁵

B10 Find an equation for the plane through the given point and parallel to the given plane.
(a) P(3, −1, 7), 5x1 − x2 − 2x3 = 6
(b) P(−1, 2, −5), 2x2 + 3x3 = 7
(c) P(2, 1, 1), 3x1 − 2x2 + 3x3 = 7
(d) P(1, 0, 1), x1 − 5x2 + 3x3 = 0

Computer Problems

C1 Let v1, v2, v3, v4 be the given vectors in ℝ⁴. Use computer software to evaluate each of the following.
(a) v1 · v2
(b) ‖v3‖
(c) v2 · v4
(d) ‖v2 + v4‖²


Conceptual Problems

D1 (a) Using geometrical arguments, what can you say about the vectors p, n, and d if the line with vector equation x = p + td and the plane with scalar equation n · x = k have no point of intersection?
(b) Confirm your answer in part (a) by determining when it is possible to find a value of the parameter t that gives a point of intersection.

D2 Prove that, as a consequence of the triangle inequality, |‖x‖ − ‖y‖| ≤ ‖x − y‖. (Hint: ‖x‖ = ‖x − y + y‖.)

D3 Let v1 and v2 be orthogonal vectors in ℝⁿ. Prove that ‖v1 + v2‖² = ‖v1‖² + ‖v2‖².

D4 Determine the equation of the set of points in ℝ³ that are equidistant from points P and Q. Explain why the set is a plane and determine its normal vector.

D5 Find the scalar equation of the plane such that each point of the plane is equidistant from the points P(2, 2, 5) and Q(−3, 4, 1) in two ways.
(a) Write and simplify the equation ‖PX‖ = ‖QX‖.
(b) Determine a point on the plane and the normal vector by geometrical arguments.

D6 Let n ≠ 0 be a vector in ℝⁿ. Prove that the set of all vectors orthogonal to n is a subspace of ℝⁿ.

D7 Let S be any set of vectors in ℝⁿ. Let S⊥ be the set of all vectors that are orthogonal to every vector in S. That is,

S⊥ = {w ∈ ℝⁿ | v · w = 0 for all v ∈ S}

Show that S⊥ is a subspace of ℝⁿ.

D8 Let {v1, ..., vk} be a set of non-zero vectors in ℝⁿ such that all of the vectors are mutually orthogonal. That is, vi · vj = 0 for all i ≠ j. Prove that {v1, ..., vk} is linearly independent.

D9 (a) Let n be a unit vector in ℝ³. Let α be the angle between n and the x1-axis, let β be the angle between n and the x2-axis, and let γ be the angle between n and the x3-axis. Explain why

n = [cos α, cos β, cos γ]

(Hint: Take the dot product of n with the standard basis vectors.) Because of this equation, the components n1, n2, n3 are sometimes called the direction cosines.
(b) Explain why cos²α + cos²β + cos²γ = 1.
(c) Give a two-dimensional version of the direction cosines and explain the connection to the identity cos²θ + sin²θ = 1.

1.4 Projections and Minimum Distance


The idea of a projection is one of the most important applications of the dot product.
Suppose that we want to know how much of a given vector y is in the direction of some
other given vector 1 (see Figure 1.4.14). In elementary physics, this is exactly what is
required when a force is "resolved" into its components along certain directions (for
example, into its vertical and horizontal components). When we define projections, it
is helpful to think of examples in two or three dimensions, but the ideas do not really
depend on whether the vectors are in ℝ², ℝ³, or ℝⁿ.

Projections
First, let us consider the case where x = e1 = [1, 0] in ℝ². How much of an arbitrary vector y = [y1, y2] points along x? Clearly, the part of y that is in the direction of x is [y1, 0] = y1 e1 = (y · x)x. This will be called the projection of y onto x and is denoted proj_x y.

Figure 1.4.14   proj_x y is a vector in the direction of x.

Next, consider the case where x ∈ ℝ² has arbitrary direction and is a unit vector. First, draw the line through the origin with direction vector x. Now, draw the line perpendicular to this line that passes through the point (y1, y2). This forms a right triangle, as in Figure 1.4.14. The projection of y onto x is the portion of the triangle that lies on the line with direction x. Thus, the resulting projection is a scalar multiple of x; that is, proj_x y = kx. We need to determine the value of k. To do this, let z denote the vector from proj_x y to y. Then, by definition, z is orthogonal to x, and we can write

y = z + proj_x y = z + kx

We now employ a very useful and common trick: take the dot product of y with x:

y · x = (z + kx) · x = z · x + (kx) · x = 0 + k(x · x) = k‖x‖² = k

since x is a unit vector. Thus,

proj_x y = (y · x)x

EXAMPLE 1

Find the projection of v = [−3, 1] onto the unit vector u = [1/√2, 1/√2].

Solution: We have

proj_u v = (v · u)u = (−3/√2 + 1/√2)[1/√2, 1/√2] = −(2/√2)[1/√2, 1/√2] = [−1, −1]
If x ∈ ℝⁿ is an arbitrary non-zero vector, then the unit vector in the direction of x is x̂ = x/‖x‖. Hence, we find that the projection of y onto x is

proj_x y = proj_x̂ y = (y · x̂)x̂ = (y · (x/‖x‖))(x/‖x‖) = (y · x / ‖x‖²) x

To match this result, we make the following definition for vectors in ℝⁿ.

Definition
Projection

For any vectors y, x in ℝⁿ, with x ≠ 0, we define the projection of y onto x by

proj_x y = (y · x / ‖x‖²) x

EXAMPLE 2

Let v = [4, 3, −1] and u = [−2, 5, 3]. Determine proj_v u and proj_u v.

Solution: We have

proj_v u = (u · v / ‖v‖²) v = ((−2)(4) + 5(3) + 3(−1)) / (4² + 3² + (−1)²) [4, 3, −1]
  = (4/26)[4, 3, −1] = [8/13, 6/13, −2/13]

proj_u v = (v · u / ‖u‖²) u = (4(−2) + 3(5) + (−1)(3)) / ((−2)² + 5² + 3²) [−2, 5, 3]
  = (4/38)[−2, 5, 3] = [−4/19, 10/19, 6/19]
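The projection formula is one line of code. A Python sketch (function name ours) reproducing the first computation of Example 2:

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def proj(x, y):
    """Projection of y onto x: (y . x / ||x||^2) x, for x != 0."""
    c = dot(y, x) / dot(x, x)
    return [c * a for a in x]

# Example 2: proj_v u with v = [4, 3, -1], u = [-2, 5, 3]
p = proj([4, 3, -1], [-2, 5, 3])
expected = [8 / 13, 6 / 13, -2 / 13]
assert all(abs(a - b) < 1e-12 for a, b in zip(p, expected))
```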

Remarks

1. This example illustrates that, in general, proj_x y ≠ proj_y x. Of course, we should not expect equality because proj_x y is in the direction of x, whereas proj_y x is in the direction of y.

2. Observe that for any x ∈ ℝⁿ, we can consider proj_x a function whose domain and codomain are ℝⁿ. To indicate this, we can write proj_x : ℝⁿ → ℝⁿ. Since the output of this function is a vector, we call it a vector-valued function.

EXAMPLE 3

Let v be the given unit vector in ℝ³ and let u be given. Find proj_v u.

Solution: Since ‖v‖ = 1, we get

proj_v u = (u · v / ‖v‖²) v = (u · v)v

EXERCISE 1

For the given vectors v and u, determine proj_v u and proj_u v.

The Perpendicular Part

When you resolve a force in physics, you often want not only the component of the force in the direction of a given vector x, but also the component of the force perpendicular to x.

We begin by restating the problem. In ℝⁿ, given a fixed vector x and any other y, express y as the sum of a vector parallel to x and a vector orthogonal to x. That is, write y = w + z, where w = cx for some c ∈ ℝ and z · x = 0.

We use the same trick we did in ℝ². Taking the dot product of x and y gives

y · x = (z + w) · x = z · x + (cx) · x = 0 + c(x · x) = c‖x‖²

Therefore, c = (y · x)/‖x‖², so in fact, w = cx = proj_x y, as we might have expected. One bonus of approaching the problem this way is that it is now clear that this is the only way to choose w to satisfy the problem.

Next, since y = proj_x y + z, it follows that z = y − proj_x y. Is this z really orthogonal to x? To check, calculate

z · x = (y − proj_x y) · x = y · x − (y · x / ‖x‖²)(x · x) = y · x − y · x = 0

So, z is orthogonal to x, as required. Since it is often useful to construct a vector z in this way, we introduce a name for it.

Definition
Perpendicular of a Projection

For any vectors x, y ∈ ℝⁿ, with x ≠ 0, define the projection of y perpendicular to x to be

perp_x y = y − proj_x y

Notice that perp_x is again a vector-valued function on ℝⁿ. Also observe that

y = proj_x y + perp_x y

See Figure 1.4.15.

Figure 1.4.15   perp_x(y) is perpendicular to x, and proj_x(y) + perp_x(y) = y.

EXAMPLE 4

For the given vectors v and u, determine proj_v u and perp_v u.

Solution: First compute proj_v u = (u · v / ‖v‖²) v; then

perp_v u = u − proj_v u

EXERCISE 2

For the given vectors v and u, determine proj_v u and perp_v u.
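The decomposition y = proj_x y + perp_x y is easy to verify in code. A Python sketch (the sample vectors below are hypothetical, not the ones in the exercise):

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def proj(x, y):
    """Projection of y onto x."""
    c = dot(y, x) / dot(x, x)
    return [c * a for a in x]

def perp(x, y):
    """Component of y perpendicular to x: y - proj_x(y)."""
    return [a - b for a, b in zip(y, proj(x, y))]

# Hypothetical sample vectors
v = [1, 0, 1]
u = [2, 3, -1]
w = perp(v, u)
assert abs(dot(w, v)) < 1e-12                       # perp_v u is orthogonal to v
s = [a + b for a, b in zip(proj(v, u), w)]
assert all(abs(a - b) < 1e-12 for a, b in zip(s, u))  # proj + perp recovers u
```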

Some Properties of Projections

Projections will appear several times in this book, and some of their special properties are important. Two of them are called the linearity properties:

(L1) proj_x(y + z) = proj_x y + proj_x z for all y, z ∈ ℝⁿ
(L2) proj_x(ty) = t proj_x y for all y ∈ ℝⁿ and all t ∈ ℝ

EXERCISE 3

Verify that properties (L1) and (L2) are true.

It follows that perp_x also satisfies the corresponding equations. We shall see that proj_x and perp_x are just two cases amongst the many functions that satisfy the linearity properties.

proj_x and perp_x also have a special property called the projection property. We write it here for proj_x, but it is also true for perp_x:

proj_x(proj_x y) = proj_x y,  for all y in ℝⁿ

Minimum Distance
What is the distance from a point Q(q1, q2) to the line with vector equation x = p + td? In this and similar problems, distance always means the minimum distance. Geometrically, we see that the minimum distance is found along a line segment perpendicular to the given line, through a point P on the line. A formal proof that minimum distance requires perpendicularity can be given by using Pythagoras' Theorem. (See Problem D3.)

To answer the question, take any point on the line x = p + td. The obvious choice is P(p1, p2), corresponding to p. From Figure 1.4.16, we see that the required distance is the length of perp_d PQ.

Figure 1.4.16   The distance from Q to the line x = p + td is ‖perp_d PQ‖.

EXAMPLE 5

Find the distance from Q(4, 3) to the line x = [1, 2] + t[−1, 1], t ∈ ℝ.

Solution: We pick the point P(1, 2) on the line. Then,

PQ = [4 − 1, 3 − 2] = [3, 1]

so, the distance is

‖perp_d PQ‖ = ‖PQ − proj_d PQ‖ = ‖[3, 1] − (−2/2)[−1, 1]‖ = ‖[3, 1] − [1, −1]‖ = ‖[2, 2]‖ = 2√2
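The steps of Example 5 can be sketched as a small Python helper (function names ours):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def perp(x, y):
    """Component of y perpendicular to x: y - proj_x(y)."""
    c = dot(y, x) / dot(x, x)
    return [b - c * a for a, b in zip(x, y)]

def distance_to_line(Q, P, d):
    """Distance from point Q to the line x = P + t*d, as ||perp_d PQ||."""
    PQ = [q - p for p, q in zip(P, Q)]
    w = perp(d, PQ)
    return math.sqrt(dot(w, w))

# Example 5: Q(4, 3), line x = [1, 2] + t[-1, 1]
dist = distance_to_line([4, 3], [1, 2], [-1, 1])
assert abs(dist - 2 * math.sqrt(2)) < 1e-12
```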

Notice that in this problem and similar problems, we take advantage of the fact that the direction vector d can be thought of as "starting" at any point. When perp_d PQ is calculated, both vectors are "located" at point P. When projections were originally defined, it was implicitly assumed that all vectors were located at the origin. Now, it is apparent that the definitions make sense as long as all vectors in the calculation are located at the same point.

We now want to look at the similar problem of finding the distance from a point Q(q1, q2, q3) to a plane in ℝ³ with normal vector n. If P is any point in the plane, then proj_n PQ is the directed line segment from the plane to the point Q that is perpendicular to the plane. Hence, ‖proj_n PQ‖ is the distance from Q to the plane. Moreover, perp_n PQ is a directed line segment lying in the plane. In particular, it is the projection of PQ onto the plane. See Figure 1.4.17.

Figure 1.4.17   proj_n PQ and perp_n PQ, where n is normal to the plane.

EXAMPLE 6

What is the distance from Q(q1, q2, q3) to a plane in R3 with equation n1x1 + n2x2 + n3x3 = d?

Solution: Assuming that n1 ≠ 0, we pick P(d/n1, 0, 0) to be our point in the plane. Then

PQ = (q1 - d/n1, q2, q3)

Thus, the distance is

||proj_n(PQ)|| = |PQ · n| / ||n|| = |(q1 - d/n1)n1 + q2n2 + q3n3| / √(n1^2 + n2^2 + n3^2)
              = |q1n1 + q2n2 + q3n3 - d| / √(n1^2 + n2^2 + n3^2)

This is a standard formula for this distance problem. However, the lengths of projections along or perpendicular to a suitable vector can be used for all of these problems. It is better to learn to use this powerful and versatile idea, as illustrated in the problems above, than to memorize complicated formulas.
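As a quick check of the formula, here is a small pure-Python sketch (our own illustrative code, not from the text) that evaluates |n · q - d| / ||n|| for the data of Problem A6(a) below:

```python
# Distance from a point q to the plane n.x = d in R^3,
# using the standard formula derived in Example 6.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def distance_to_plane(q, n, d):
    """|n.q - d| / ||n||: the minimum distance from q to the plane n.x = d."""
    return abs(dot(n, q) - d) / math.sqrt(dot(n, n))

# Q(2, 3, 1) and the plane 3x1 - x2 + 4x3 = 5 (Problem A6(a)):
print(distance_to_plane([2, 3, 1], [3, -1, 4], 5))
```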

Section 1.4 Exercises

47

Finding the Nearest Point  In some applications, we need to determine the point in the plane that is nearest to the point Q. Let us call this point R, as in Figure 1.4.17. Then we can determine R by observing that

OR = OP + PR = OP + perp_n(PQ)

However, we get an easier calculation if we observe from the figure that

OR = OQ + QR = OQ + proj_n(QP)

Notice that we need QP here instead of PQ. Problem D4 asks you to check that these two calculations of OR are consistent.

If the plane in this problem passes through the origin, then we may take P = O, and the point in the plane that is closest to Q is given by perp_n(q).

EXAMPLE 7

Find the point on the plane x1 - 2x2 + 2x3 = 5 that is closest to the point Q(2, 1, 1).

Solution: We pick P(1, -1, 1) to be the point on the plane. Then

QP = (1 - 2, -1 - 1, 1 - 1) = (-1, -2, 0)

and, with n = (1, -2, 2), we find that the point on the plane closest to Q is

OR = OQ + proj_n(QP) = (2, 1, 1) + ((QP · n)/||n||^2) n = (2, 1, 1) + (3/9)(1, -2, 2) = (7/3, 1/3, 5/3)
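The nearest-point calculation for the plane x1 - 2x2 + 2x3 = 5 and point Q(2, 1, 1) can be sketched in pure Python (our own helper names, not from the text), using R = Q + proj_n(QP):

```python
# Nearest point on a plane (given by a point p on it and a normal n)
# to an external point q, via R = q + proj_n(QP).
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def nearest_point_on_plane(q, p, n):
    """Point on the plane through p with normal n that is closest to q."""
    qp = [pi - qi for pi, qi in zip(p, q)]
    t = dot(qp, n) / dot(n, n)
    return [qi + t * ni for qi, ni in zip(q, n)]   # q + proj_n(QP)

# Plane x1 - 2x2 + 2x3 = 5 through P(1, -1, 1), point Q(2, 1, 1):
# the closest point is (7/3, 1/3, 5/3).
r = nearest_point_on_plane([2, 1, 1], [1, -1, 1], [1, -2, 2])
print(r)
```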

PROBLEMS 1.4
Practice Problems
A1 For each given pair of vectors v and u, determine proj_v(u) and perp_v(u). Check your results by verifying that proj_v(u) + perp_v(u) = u and v · perp_v(u) = 0. [The vector pairs (a)-(f) are illegible in this copy.]

A2 Determine proj_v(u) and perp_v(u) for each given pair of vectors v and u. [The vector pairs (a)-(e) are illegible in this copy.]


A3 Consider the force represented by the vector F and the direction vector u. [The entries of F and u are illegible in this copy.]
(a) Determine a unit vector in the direction of u.
(b) Find the projection of F onto u.
(c) Determine the projection of F perpendicular to u.

A4 Follow the same instructions as in Problem A3 but with the given F and u. [The entries are illegible in this copy.]
and

A6 Use a projection to find the distance from the point to the plane.
(a) Q(2, 3, 1), plane 3x1 - x2 + 4x3 = 5
(b) Q(-2, 3, -1), plane 2x1 - 3x2 - 5x3 = 5
(c) Q(0, 2, -1), plane 2x1 - x3 = 5
(d) Q(-1, -1, 1), plane 2x1 - x2 - x3 = 4

A5 For the given point and line, find by projection the point on the line that is closest to the given point, and use perp to find the distance from the point to the line.
(a) Q(0, 0), line x = p + td, t ∈ R
(b) Q(2, 5), line x = p + td, t ∈ R
(c) Q(1, 0, 1), line x = p + td, t ∈ R
(d) Q(2, 3, 2), line x = p + td, t ∈ R
[The vectors p and d for each line are illegible in this copy.]

A7 For the given point and hyperplane in R4, determine by a projection the point in the hyperplane that is closest to the given point.
(a) Q(1, 0, 0, 1), hyperplane 2x1 - x2 + x3 - x4 = 4 [signs partly illegible]
(b) Q(1, 2, 1, 3), hyperplane x1 - 2x2 + 3x3 = 1
(c) Q(2, 4, 3, 4), hyperplane 3x1 - x2 + 4x3 + x4 = 0
(d) Q(-1, 3, 2, -2), hyperplane x1 + 2x2 + ... [partly illegible]

Homework Problems

B1 For each given pair of vectors v and u, determine proj_v(u) and perp_v(u). Check your results by verifying that proj_v(u) + perp_v(u) = u and v · perp_v(u) = 0. [The vector pairs are illegible in this copy.]

B2 Determine proj_v(u) and perp_v(u) for each given pair of vectors v and u. [The vector pairs are illegible in this copy.]

B3 Consider the force represented by the vector F and the direction vector u. [The entries of F and u are illegible in this copy.]
(a) Determine a unit vector in the direction of u.
(b) Find the projection of F onto u.
(c) Determine the projection of F perpendicular to u.

B4 Follow the same instructions as in Problem B3 but with the given F and u. [The entries are illegible in this copy.]

B5 For the given point and line, find by projection the point on the line that is closest to the given point, and use perp to find the distance from the point to the line.
(a) Q(3, -5), line x = p + td, t ∈ R
(b) Q(0, 0, 2), line x = p + td, t ∈ R
(c) Q(1, -1, 0), line x = p + td, t ∈ R
(d) Q(-1, 2, 3), line x = p + td, t ∈ R
[The vectors p and d for each line are illegible in this copy.]

B6 Use a projection to find the distance from the point to the plane. [The points and plane equations for (a)-(d) are only partly legible in this copy.]

B7 For the given point and hyperplane in R4, determine by a projection the point in the hyperplane that is closest to the given point. [The points and hyperplane equations for (a)-(d) are only partly legible in this copy.]

Computer Problems

C1 Let v1, v2, v3, and v4 be the given vectors in R4. [The entries are illegible in this copy.] Use computer software to evaluate each of the following.
(a) proj_{v1}(v3)
(b) perp_{v1} of the given vector [argument illegible]
(c) proj_{v3}(v1)
(d) proj_{v4}(v2)


Conceptual Problems

D1 (a) Given u and v in R3 with u ≠ 0 and v ≠ 0, verify that the composite map C : R3 → R3 defined by C(x) = proj_u(proj_v(x)) also has the linearity properties (L1) and (L2).
(b) Suppose that C(x) = 0 for all x ∈ R3, where C is defined as in part (a). What can you say about u and v? Explain.

D2 By linearity property (L2), we know that proj_u(-x) = -proj_u(x). Check, and explain geometrically, that proj_{-u}(x) = proj_u(x).

D3 (a) (Pythagoras' Theorem) Use the fact that ||x||^2 = x · x to prove that ||x + y||^2 = ||x||^2 + ||y||^2 if and only if x · y = 0.
(b) Let L be the line in Rn with vector equation x = td, and let P be any point that is not on L. Prove that for any point Q on the line, the smallest value of ||p - q||^2 is obtained when q = proj_d(p) (that is, when p - q is perpendicular to d). (Hint: Consider ||p - proj_d(p) + proj_d(p) - q||.)

D4 By using the definition of perp_n and the fact that PQ = -QP, show that

OP + perp_n(PQ) = OQ + proj_n(QP)

D5 (a) Let u and x be the given vectors in R3. [The entries are illegible in this copy.] Show that proj_u(perp_u(x)) = 0.
(b) For any u ∈ R3, prove algebraically that for any x ∈ R3, proj_u(perp_u(x)) = 0.
(c) Explain geometrically why proj_u(perp_u(x)) = 0 for every x.

1.5 Cross-Products and Volumes


Given a pair of vectors u and v in R3, how can we find a third vector w that is orthogonal to both u and v? This problem arises naturally in many ways. For example, to find the scalar equation of a plane whose vector equation is x = ru + sv, we must find the normal vector n orthogonal to u and v. In physics, it is observed that the force on an electrically charged particle moving in a magnetic field is in the direction orthogonal to the velocity of the particle and to the vector describing the magnetic field.

Cross-Products

Let u, v ∈ R3. If w is orthogonal to both u and v, it must satisfy the equations

u · w = u1w1 + u2w2 + u3w3 = 0
v · w = v1w1 + v2w2 + v3w3 = 0

In Chapter 2, we shall develop systematic methods for solving such equations for w1, w2, w3. For the present, we simply give a solution:

w = (u2v3 - u3v2, u3v1 - u1v3, u1v2 - u2v1)

EXERCISE 1

Verify that w · u = 0 and w · v = 0.

Section 1.5 Cross-Products and Volumes

51

Also notice from the form of the equations that any multiple of w would also be orthogonal to both u and v.

Definition
Cross-Product

The cross-product of vectors u = (u1, u2, u3) and v = (v1, v2, v3) is defined by

u × v = (u2v3 - u3v2, u3v1 - u1v3, u1v2 - u2v1)

EXAMPLE 1

Calculate the cross-product of the two given vectors. [The vectors and the computation are illegible in this copy.]

Remarks
1. Unlike the dot product of two vectors, which is a scalar, the cross-product of two vectors is itself a new vector.
2. The cross-product is a construction that is defined only in R3. (There is a generalization to higher dimensions, but it is considerably more complicated, and it will not be considered in this book.)

The formula for the cross-product is a little awkward to remember, but there are many tricks for remembering it. One way is to write the components of u in a row above the components of v:

u1  u2  u3
v1  v2  v3

Then, for the first entry in u × v, we cover the first column and calculate the difference of the products of the cross-terms: u2v3 - u3v2.

For the second entry in u × v, we cover the second column and take the negative of the difference of the products of the cross-terms:

u1  u3
v1  v3    gives    -(u1v3 - u3v1)

Similarly, for the third entry, we cover the third column and calculate the difference of the products of the cross-terms: u1v2 - u2v1.

52

Chapter 1

Euclidean Vector Spaces

Note carefully that the second term must be given a minus sign in order for this procedure to provide the correct answer. Since the formula can be difficult to remember, we recommend checking the answer by verifying that it is orthogonal to both u and v.
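This check is easy to automate. Below is a minimal pure-Python sketch (our own helper, not from the text) that computes u × v from the component formula and verifies orthogonality for a sample pair:

```python
# Cross-product from the component formula, with the recommended
# orthogonality check: (u x v).u = 0 and (u x v).v = 0.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

u, v = [1, 2, 3], [4, 5, 6]        # sample vectors (our own choice)
w = cross(u, v)
print(w)                            # [-3, 6, -3]
print(dot(w, u), dot(w, v))         # 0 0
```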

EXERCISE 2

Calculate the cross-product of the two given vectors. [The vectors are illegible in this copy.]

By construction, u × v is orthogonal to u and v, so the direction of u × v is known except for sign: does it point "up" or "down"? The general rule is as follows: the three vectors u, v, and u × v, taken in this order, form a right-handed system. Let us see how this works for simple cases.

EXERCISE 3

Let e1, e2, and e3 be the standard basis vectors in R3. Verify that

e1 × e2 = e3,  e2 × e3 = e1,  e3 × e1 = e2

but

e2 × e1 = -e3,  e3 × e2 = -e1,  e1 × e3 = -e2

Check that in every case, the three vectors taken in order form a right-handed system.

These simple examples also suggest some of the general properties of the cross-product.

Theorem 1

For x, y, z ∈ R3 and t ∈ R, we have
(1) x × y = -y × x
(2) x × x = 0
(3) x × (y + z) = x × y + x × z
(4) (tx) × y = t(x × y)

Proof: These properties follow easily from the definition of the cross-product and are left to the reader.

One rule we might expect does not in fact hold. In general,

x × (y × z) ≠ (x × y) × z

This means that the parentheses cannot be omitted in a cross-product. (There are formulas available for these triple-vector products, but we shall not need them. See Problem F3 in Further Problems at the end of this chapter.)
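A quick numeric check makes the failure of associativity concrete; the vectors below are our own sample choices:

```python
# Demonstrate that the cross-product is not associative:
# x x (y x z) generally differs from (x x y) x z.
def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

x, y = [1, 0, 0], [0, 1, 0]         # e1 and e2
left = cross(x, cross(y, y))        # y x y = 0, so this is the zero vector
right = cross(cross(x, y), y)       # (e1 x e2) x e2 = e3 x e2 = -e1
print(left, right)                  # [0, 0, 0] [-1, 0, 0]
```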

The Length of the Cross-Product

Given u and v, the direction of their cross-product is known. What is the length of the cross-product of u and v? We give the answer in the following theorem.

Theorem 2

Let u, v ∈ R3 and let θ be the angle between u and v. Then ||u × v|| = ||u|| ||v|| sin θ.


Proof: We give an outline of the proof. We have

||u × v||^2 = (u2v3 - u3v2)^2 + (u3v1 - u1v3)^2 + (u1v2 - u2v1)^2

Expand by the binomial theorem and then add and subtract the term u1^2v1^2 + u2^2v2^2 + u3^2v3^2. The resulting terms can be arranged so as to be seen to be equal to

(u1^2 + u2^2 + u3^2)(v1^2 + v2^2 + v3^2) - (u1v1 + u2v2 + u3v3)^2

Thus,

||u × v||^2 = ||u||^2 ||v||^2 - (u · v)^2 = ||u||^2 ||v||^2 - ||u||^2 ||v||^2 cos^2 θ
            = ||u||^2 ||v||^2 (1 - cos^2 θ) = ||u||^2 ||v||^2 sin^2 θ

and the result follows.

To interpret this formula, consider Figure 1.5.18. Assuming that u × v ≠ 0, the vectors u and v determine a parallelogram. Take the length of u to be the base of the parallelogram; the altitude is the length of perp_u(v). From trigonometry, we know that this length is ||v|| sin θ, so that the area of the parallelogram is

(base)(altitude) = ||u|| ||v|| sin θ = ||u × v||

Figure 1.5.18  The area of the parallelogram is ||u|| ||v|| sin θ.

EXAMPLE 2

Find the area of the parallelogram determined by the given vectors u and v. [The vectors are illegible in this copy.]

Solution: By Theorem 2, the area is ||u × v||.
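In code, the area computation is one line once a cross-product helper is available. A pure-Python sketch with our own sample vectors (base 3 and altitude 2, so the area should be 6):

```python
# Area of the parallelogram determined by u and v as ||u x v||.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def parallelogram_area(u, v):
    w = cross(u, v)
    return math.sqrt(dot(w, w))

# u along the x1-axis with length 3; v has perpendicular component 2:
print(parallelogram_area([3, 0, 0], [1, 2, 0]))   # 6.0
```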

EXAMPLE 3

Find the area of the parallelogram determined by the given vectors u and v. [The vectors are illegible in this copy.]

Solution: By Theorem 2, the area is ||u × v||.

EXERCISE 4

Find the area of the parallelogram determined by the given vectors u and v. [The vectors are illegible in this copy.]

Some Problems on Lines, Planes, and Distances

The cross-product allows us to answer many questions about lines, planes, and distances in R3.

Finding the Normal to a Plane  In Section 1.2, the vector equation of a plane was given in the form x = p + su + tv, where {u, v} is linearly independent. By definition, the normal vector n must be perpendicular to u and v. Therefore, it will be given by n = u × v.

EXAMPLE 4

The lines x = (1, 3, 2) + s u and x = (1, 3, 2) + t v must lie in a common plane since they have the point (1, 3, 2) in common. Find a scalar equation of the plane that contains these lines. [The direction vectors u and v are illegible in this copy.]

Solution: The normal to the plane is n = u × v = (-4, -3, 2). Therefore, since the plane passes through P(1, 3, 2), we find that an equation of the plane is

-4x1 - 3x2 + 2x3 = (-4)(1) + (-3)(3) + (2)(2) = -9


EXAMPLE 5

Find a scalar equation of the plane that contains the three points P(1, -2, 1), Q(2, -2, -1), and R(4, 1, 1).

Solution: Since P, Q, and R lie in the plane, then so do the directed line segments PQ and PR. Hence, the normal to the plane is given by

n = PQ × PR = (1, 0, -2) × (3, 3, 0) = (6, -6, 3)

Since the plane passes through P, we find that an equation of the plane is

6x1 - 6x2 + 3x3 = (6)(1) + (-6)(-2) + (3)(1) = 21,  or  2x1 - 2x2 + x3 = 7
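Example 5's computation can be sketched in pure Python (the helper names are our own): compute n = PQ × PR, then write the plane as n · x = n · p.

```python
# Scalar equation of the plane through three points p, q, r:
# normal n = PQ x PR, constant d = n.p, giving n.x = d.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def plane_through(p, q, r):
    """Return (n, d) so that the plane is n.x = d."""
    pq = [qi - pi for qi, pi in zip(q, p)]
    pr = [ri - pi for ri, pi in zip(r, p)]
    n = cross(pq, pr)
    return n, dot(n, p)

# The points of Example 5:
n, d = plane_through([1, -2, 1], [2, -2, -1], [4, 1, 1])
print(n, d)   # [6, -6, 3] 21, i.e. 6x1 - 6x2 + 3x3 = 21
```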

Finding the Line of Intersection of Two Planes  Unless two planes in R3 are parallel, their intersection will be a line. The direction vector of this line lies in both planes, so it is perpendicular to both of the normals. It can therefore be obtained as the cross-product of the two normals. Once we find a point that lies on this line, we can write the vector equation of the line.

EXAMPLE 6

Find a vector equation of the line of intersection of the two planes x1 + x2 - 2x3 = 3 and 2x1 - x2 + 3x3 = 6.

Solution: The normal vectors of the planes are n1 = (1, 1, -2) and n2 = (2, -1, 3). Hence, the direction vector of the line of intersection is

d = n1 × n2 = (1, -7, -3)

One easy way to find a point on the line is to let x3 = 0 and then solve the remaining equations x1 + x2 = 3 and 2x1 - x2 = 6. The solution is x1 = 3 and x2 = 0. Hence, a vector equation of the line of intersection is

x = (3, 0, 0) + t(1, -7, -3),  t ∈ R
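The two steps of this method (direction from the cross-product of the normals, then a point found by setting x3 = 0) can be sketched in pure Python, using two sample planes:

```python
# Line of intersection of the planes x1 + x2 - 2x3 = 3 and
# 2x1 - x2 + 3x3 = 6: direction d = n1 x n2, point from x3 = 0.
def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

n1, n2 = [1, 1, -2], [2, -1, 3]
d = cross(n1, n2)

# With x3 = 0 the planes give x1 + x2 = 3 and 2x1 - x2 = 6;
# adding the equations yields 3*x1 = 9.
x1 = 9 / 3
point = [x1, 3 - x1, 0]
print(d, point)   # direction [1, -7, -3], point [3.0, 0.0, 0]
```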

EXERCISE 5

Find a vector equation of the line of intersection of the two planes -x1 - 2x2 + x3 = -2 and 2x1 + x2 - 2x3 = 1.


The Scalar Triple Product and Volumes in R3  The three vectors u, v, and w in R3 may be taken to be the three adjacent edges of a parallelepiped (see Figure 1.5.19). Is there an expression for the volume of the parallelepiped in terms of the three vectors? To obtain such a formula, observe that the parallelogram determined by u and v can be regarded as the base of the solid. This base has area ||u × v||. With respect to this base, the altitude of the solid is the length of the amount of w in the direction of the normal vector n = u × v to the base. That is,

altitude = ||proj_n(w)|| = |w · n| / ||n|| = |w · (u × v)| / ||u × v||

Thus, to get the volume of the parallelepiped, multiply this altitude by the area of the base to get

volume of the parallelepiped = (|w · (u × v)| / ||u × v||) × ||u × v|| = |w · (u × v)|

The product w · (u × v) is called the scalar triple product of w, u, and v. Notice that the result is a real number (a scalar).

Figure 1.5.19  The parallelepiped with adjacent edges u, v, w has volume |w · (u × v)|; the altitude is ||proj_n(w)|| and the base area is ||u × v||.

The sign of the scalar triple product also has an interpretation. Recall that the ordered triple of vectors {u, v, u × v} is right-handed; we can think of u × v as the "upwards" normal vector to the plane with vector equation x = su + tv. The vector w then points "upwards," and {u, v, w} (in that order) is right-handed, if and only if the scalar triple product is positive. If the scalar triple product is negative, then {u, v, w} is a left-handed system.
It is often useful to note that

w · (u × v) = u · (v × w) = v · (w × u)

This is straightforward but tedious to verify.

EXAMPLE 7

Find the volume of the parallelepiped determined by the given vectors u, v, and w. [The vectors are illegible in this copy.]

Solution: The volume is V = |w · (u × v)| = 2.
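The volume computation is a one-liner once dot and cross products are available; a pure-Python sketch with our own sample vectors:

```python
# Volume of a parallelepiped as |w.(u x v)|; the sign of the
# scalar triple product indicates right- vs left-handedness.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def triple(w, u, v):
    return dot(w, cross(u, v))

u, v, w = [1, 0, 0], [0, 1, 0], [1, 1, 2]     # unit square base, height 2
print(abs(triple(w, u, v)))                   # volume 2
print(triple(w, u, v) > 0)                    # True: {u, v, w} right-handed
```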

Section 1.5 Exercises

57

PROBLEMS 1.5
Practice Problems

A1 Calculate the following cross-products. [The vector pairs (a)-(f) are illegible in this copy.]

A2 Let u, v, and w be the given vectors. [The entries are illegible in this copy.] Check by calculation that the following general properties hold.
(a) u × u = 0
(b) u × v = -v × u
(c) u × 3w = 3(u × w)
(d) u × (v + w) = u × v + u × w
(e) u · (v × w) = w · (u × v)
(f) u · (v × w) = -v · (u × w)

A3 Calculate the area of the parallelogram determined by each pair of vectors. [The vectors for (a)-(d) are illegible in this copy.] (Hint: For (d), think of these vectors as vectors in R3.)

A4 Determine the scalar equation of the plane with the given vector equation. [The vector equations for (a)-(d) are illegible in this copy.]

A5 Determine the scalar equation of the plane that contains each set of points.
(a) P(2, 1, 5), Q(4, -3, 2), R(2, 6, -1)
(b) P(3, 1, 4), Q(-2, 0, 2), R(1, 4, -1)
(c) P(-1, 4, 2), Q(3, 1, -1), R(2, -3, -1)
(d) P(1, 0, 1), Q(-1, 0, 1), R(0, 0, 0)

A6 Determine a vector equation of the line of intersection of the given planes.
(a) x1 + 3x2 - x3 = 5 and 2x1 - 5x2 + x3 = 7
(b) 2x1 - 3x3 = 7 and x2 + 2x3 = 4

A7 Find the volume of the parallelepiped determined by each set of vectors. [The vectors for (a)-(e) are illegible in this copy.]

A8 What does it mean, geometrically, if u · (v × w) = 0?

A9 Show that (u - v) × (u + v) = 2(u × v).


Homework Problems

B1 Calculate the following cross-products. [The vector pairs (a)-(f) are illegible in this copy.]

B2 Let u, v, and w be the given vectors. [The entries are illegible in this copy.] Check by calculation that the following general properties hold.
(a) u × u = 0
(b) u × v = -v × u
(c) u × 2w = 2(u × w)
(d) u × (v + w) = u × v + u × w
(e) u · (v × w) = w · (u × v)
(f) u · (v × w) = -v · (u × w)

B3 Calculate the area of the parallelogram determined by each pair of vectors. [The vectors are illegible in this copy.] (Hint: For (d), think of these vectors as vectors in R3.)

B4 Determine the scalar equation of the plane with the given vector equation. [The vector equations for (a)-(d) are illegible in this copy.]

B5 Determine the scalar equation of the plane that contains each set of points.
(a) P(5, 2, 1), Q(-3, 2, 4), R(8, 1, 6)
(b) P(5, -1, 2), Q(-1, 3, 4), R(3, 1, 1)
(c) P(0, 2, 1), Q(3, -1, 1), R(1, 3, 0)
(d) P(1, 5, -3), Q(2, 6, -1), R(1, 0, 1)

B6 Determine a vector equation of the line of intersection of the given planes.
(a) x1 + 4x2 - x3 = 5 and 3x1 + 7x2 + x3 = 6 [signs partly illegible in this copy]
(b) x1 - 3x2 - 2x3 = 4 and 3x1 + 2x2 + x3 = 2

B7 Find the volume of the parallelepiped determined by each set of vectors. [The vectors for (a)-(d) are illegible in this copy.]

Conceptual Problems

D1 Show that if X is a point on the line through P and Q, then x × (q - p) = p × q, where x = OX, p = OP, and q = OQ.

D2 Consider the following statement: "If u ≠ 0 and u × v = u × w, then v = w." If the statement is true, prove it. If it is false, give a counterexample.

D3 Explain why u × (v × w) must be a vector that satisfies the vector equation x = sv + tw.

D4 Give an example of distinct vectors u, v, and w in R3 such that
(a) u × (v × w) = (u × v) × w
(b) u × (v × w) ≠ (u × v) × w

CHAPTER REVIEW
Suggestions for Student Review
Organizing your own review is an important step towards mastering new material. It is much more valuable than memorizing someone else's list of key ideas. To retain new concepts as useful tools, you must be able to state definitions and make connections between various ideas and techniques. You should also be able to give (or, even better, create) instructive examples. The suggestions below are not intended to be an exhaustive checklist; instead, they suggest the kinds of activities and questioning that will help you gain a confident grasp of the material.

1 Find some person or persons to talk with about mathematics. There's lots of evidence that this is the best way to learn. Be sure you do your share of asking and answering. Note that a little bit of embarrassment is a small price for learning. Also, be sure to get lots of practice in writing answers independently.

2 Draw pictures to illustrate addition of vectors, subtraction of vectors, and multiplication of a vector by a scalar (general case). (Section 1.1)

3 Explain how you find a vector equation for a line and make up examples to show why the vector equation of a line is not unique. (Albert Einstein once said, "If you can't explain it simply, you don't understand it well enough.") (Section 1.1)

4 State the definition of a subspace of Rn. Give examples of subspaces in R3 that are lines, planes, and all of R3. Show that there is only one subspace in R3 that does not have infinitely many vectors in it. (Section 1.2)

5 Show that the subspace spanned by three vectors in R3 can either be a point, a line, a plane, or all of R3, by giving examples. Explain how this relates with the concept of linear independence. (Section 1.2)

6 Let {v1, v2} be a linearly independent spanning set for a subspace S of R3. Explain how you could construct other spanning sets and other linearly independent spanning sets for S. (Section 1.2)

7 State the formal definition of linear independence. Explain the connection between the formal definition of linear dependence and an intuitive geometric understanding of linear dependence. Why is linear independence important when looking at spanning sets? (Section 1.2)

8 State the relation (or relations) between the length in R3 and the dot product in R3. Use examples to illustrate. (Section 1.3)

9 Explain how projection onto a vector v is defined in terms of the dot product. Illustrate with a picture. Define the part of a vector x perpendicular to v and verify (in the general case) that it is perpendicular to v. (Section 1.4)

10 Explain with a picture how projections help us to solve the minimum distance problem. (Section 1.4)

11 Discuss the role of the normal vector to a plane in determining the scalar equation of the plane. Explain how you can get from a scalar equation of a plane to a vector equation for the plane and from a vector equation of the plane to the scalar equation. (Sections 1.3 and 1.5)

12 State the important algebraic and geometric properties of the cross-product. (Section 1.5)


Chapter Quiz
Note: Your instructor may have different ideas of an appropriate level of difficulty for a test on this material.

E1 Determine a vector equation of the line passing through points P(-2, 1, -4) and Q(5, -2, 1).

E2 Determine the scalar equation of the plane that contains the points P(1, -1, 0), Q(3, 1, -2), and R(-4, 1, 6).

E3 Show that the given set {v1, v2} is a basis for R2 if and only if d ≠ 0. [The vectors, and the quantity d, are illegible in this copy.]

E4 Prove that S = {(x1, x2, x3) ∈ R3 | a1x1 + a2x2 + a3x3 = 0} is a subspace of R3 for any real numbers a1, a2, a3.

E5 Determine the cosine of the angle between the given vector and each of the coordinate axes. [The vector is illegible in this copy.]

E6 Find the point on the line x = p + td, t ∈ R, that is closest to the point P(2, 3, 4). Illustrate your method of calculation with a sketch. [The vectors p and d are illegible in this copy.]

E7 Find the point on the hyperplane x1 + x2 + x3 + x4 = 1 that is closest to the point P(3, -2, 0, 2) and determine the distance from the point to the plane.

E8 Determine a non-zero vector that is orthogonal to both of the given vectors. [The vectors are illegible in this copy.]

E9 Prove that the parallelepiped determined by u, v, and w has the same volume as the parallelepiped determined by (u + kv), v, and w.

E10 Each of the following statements is to be interpreted in R3. Determine whether each statement is true, and if so, explain briefly. If false, give a counterexample.
(i) Any three distinct points lie in exactly one plane.
(ii) The subspace spanned by a single non-zero vector is a line passing through the origin.
(iii) The set Span{v1, ..., vk} is linearly dependent.
(iv) The dot product of a vector with itself cannot be zero.
(v) For any vectors x and y, proj_x(y) = proj_y(x).
(vi) For any vectors x and y, the set {proj_x(y), perp_x(y)} is linearly independent.
(vii) The area of the parallelogram determined by u and v is the same as the area of the parallelogram determined by u and (v + 3u).

Further Problems
These problems are intended to be a little more challenging than the problems at the end of each section. Some explore topics beyond the material discussed in the text.

F1 Consider the statement "If u ≠ 0, and both u · v = u · w and u × v = u × w, then v = w." Either prove the statement or give a counterexample.

F2 Suppose that u and v are orthogonal unit vectors in R3. Prove that for every x ∈ R3, [the conclusion is illegible in this copy].

F3 In Problem 1.5.D3, you were asked to show that u × (v × w) = sv + tw for some s, t ∈ R.
(a) By direct calculation, prove that u × (v × w) = (u · w)v - (u · v)w.
(b) Prove that u × (v × w) + v × (w × u) + w × (u × v) = 0.

F4 Prove that
(a) u · v = (1/4)(||u + v||^2 - ||u - v||^2)
(b) ||u + v||^2 + ||u - v||^2 = 2||u||^2 + 2||v||^2
(c) Interpret (a) and (b) in terms of a parallelogram determined by vectors u and v.

F5 Show that if P, Q, and R are collinear points and OP = p, OQ = q, and OR = r, then

(p × q) + (q × r) + (r × p) = 0

F6 In R2, two lines fail to have a point of intersection only if they are parallel. However, in R3, a pair of lines can fail to have a point of intersection even if they are not parallel. Two such lines in R3 are called skew.
(a) Observe that if two lines are skew, then they do not lie in a common plane. Show that two skew lines do lie in parallel planes.
(b) Find the distance between the given skew lines x = p1 + s d1, s ∈ R, and x = p2 + t d2, t ∈ R. [The vectors are illegible in this copy.]

MyMathLab

Go to MyMathLab at www.mymathlab.com. You can practise many of this chapter's exercises as often as you
want. The guided solutions help you find an answer step by step. You'll find a personalized study plan available to
you, too!

CHAPTER 2

Systems of Linear
Equations
CHAPTER OUTLINE

2.1

Systems of Linear Equations and Elimination

2.2

Reduced Row Echelon Form, Rank, and Homogeneous Systems

2.3

Application to Spanning and Linear Independence

2.4

Applications of Systems of Linear Equations

In a few places in Chapter 1, we needed to find a vector x in Rn that simultaneously satisfied several linear equations. In such cases, we used a system of linear equations. Such systems arise frequently in almost every conceivable area where mathematics is applied: in analyzing stresses in complicated structures; in allocating resources or managing inventory; in determining appropriate controls to guide aircraft or robots; and as a fundamental tool in the numerical analysis of the flow of fluids or heat.
The standard method of solving systems of linear equations is elimination. Elimination can be represented by row reduction of a matrix to its reduced row echelon form. This is a fundamental procedure in linear algebra. Obtaining and interpreting the reduced row echelon form of a matrix will play an important role in almost everything we do in the rest of this book.

2.1 Systems of Linear Equations


and Elimination
Definition
Linear Equation

A linear equation in n variables x1, ..., xn is an equation that can be written in the form

a1x1 + a2x2 + ... + anxn = b    (2.1)

The numbers a1, ..., an are called the coefficients of the equation, and b is usually referred to as "the right-hand side," or "the constant term." The xi are the unknowns or variables to be solved for.

64

Chapter 2

Systems of Linear Equations

EXAMPLE 1

The equations

x1 + 2x2 = 4    (2.2)

x1 - 3x2 + √3 x3 = π x4    (2.3)

are both linear equations since they both can be written in the form of equation (2.1). The equation x1x2 = 1 is not a linear equation.

Definition
Solution

A vector s in Rn is called a solution of equation (2.1) if the equation is satisfied when we make the substitution x1 = s1, x2 = s2, ..., xn = sn.

EXAMPLE 2

A few solutions of equation (2.2) are

(2, 1),  (3, 0.5),  and  (6, -1)

since

2 + 2(1) = 4,  3 + 2(0.5) = 4,  and  6 + 2(-1) = 4

The vector (0, 0, 0, 0) is clearly a solution of x1 - 3x2 + √3 x3 = π x4.

A general system of m linear equations in n variables is written in the form

a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
        ...
am1x1 + am2x2 + ... + amnxn = bm

Note that for each coefficient, the first index indicates in which equation the coefficient appears. The second index indicates which variable the coefficient multiplies. That is, aij is the coefficient of xj in the i-th equation. The indices on the right-hand side indicate which equation the constant appears in.
We want to establish a standard procedure for determining all the solutions of such a system, if there are any solutions! It will be convenient to speak of the solution set of a system to mean the set of all solutions of the system.
The standard procedure for solving a system of linear equations is elimination. By multiplying and adding some of the original equations, we can eliminate some of the

Section 2.1

Systems of Linear Equations and Elimination

65

variables from some of the equations. The result will be a simpler system of equations
that has the same solution set as the original system, but is easier to solve.
We say that two systems of equations are equivalent if they have the same solution
set. In elimination, each elimination step must be reversible and must leave the solution

set unchanged. Every system produced during an elimination procedure is equivalent


to the original system. We begin with an example and explain the general rules as we
proceed.

EXAMPLE 3

Find all solutions of the system of linear equations

x1 + x2 - 2x3 = 4
x1 + 3x2 - x3 = 7
2x1 + x2 - 5x3 = 7

Solution: To solve this system by elimination, we begin by eliminating x1 from all equations except the first one.

Add (-1) times the first equation to the second equation. The first and third equations are unchanged, so the system is now

x1 + x2 - 2x3 = 4
     2x2 + x3 = 3
2x1 + x2 - 5x3 = 7

Note two important things about this step. First, if x1, x2, x3 satisfy the original system, then they certainly satisfy the revised system after the step. This follows from the rule of arithmetic that if P = Q and R = S, then P + R = Q + S. So, when we add two equations and both are satisfied, the resulting sum equation is satisfied. Thus, the revised system is equivalent to the original system.
Second, the step is reversible: to get back to the original system, we just add (1) times the first equation to the revised second equation.

Add (-2) times the first equation to the third equation.

x1 + x2 - 2x3 = 4
     2x2 + x3 = 3
    -x2 - x3 = -1

Again, note that this step is reversible and does not change the solution set. Also note that x1 has been eliminated from all equations except the first one, so now we leave the first equation and turn our attention to x2.
Although we will not modify or use the first equation in the next several steps, we keep writing the entire system after each step. This is important because it leads to a good general procedure for dealing with large systems.
It is convenient, but not necessary, to work with an equation in which x2 has the coefficient 1. We could multiply the second equation by 1/2, but to avoid fractions, follow the steps on the next page.


Interchange the second and third equations. This is another reversible step that

EXAMPLE3
(continued)

does not change the solution set:


X1

X2 - 2X3

-X2 - X3

2X2

X3

4
-1
3

Multiply the second equation by (-1). This step is reversible and does not

change the solution set:


X1

X2 - 2X3

X2

X3

2X2

X3

Add (-2) times the second equation to the third equation.


X1

X2 - 2x3
X2

X3

-X3

}
1

In the third equation, all variables except x3 have been eliminated; by elimination, we have solved for x3. Using similar steps, we could continue and eliminate x3 from the second and first equations and x2 from the first equation. However, it is often a much simpler task to complete the solution process by back-substitution.
First, observe that x3 = -1. Substitute this value into the second equation and find that

    x2 = 1 - x3 = 1 - (-1) = 2
Next, substitute these values back into the first equation to obtain

    x1 = 4 - x2 + 2x3 = 4 - 2 + 2(-1) = 0

Thus, the only solution of this system is

    [x1]   [ 0]
    [x2] = [ 2]
    [x3]   [-1]

Since the final system is equivalent to the original system, this solution is also the unique solution of the problem. Observe that we can easily check that this solution satisfies the original system of equations:
      0 +    2 - 2(-1) = 4
      0 + 3(2) -  (-1) = 7
    2(0) +    2 - 5(-1) = 7

It is important to observe the form of the equations in our final system. The first
variable with a non-zero coefficient in each equation, called a leading variable, does
not appear in any equation below it. Also, the leading variable in the second equation

is to the right of the leading variable in the first, and the leading variable in the third is to the right of the leading variable in the second.
The system solved in Example 3 is a particularly simple one. However, the solution
procedure introduces all the steps that are needed in the process of elimination. They
are worth reviewing.

Types of Steps in Elimination

(1) Multiply one equation by a non-zero constant.


(2) Interchange two equations.
(3) Add a multiple of one equation to another equation.
Warning! Do not combine steps of type (1) and type (3) into one step of the form "Add a multiple of one equation to a multiple of another equation." Although such a combination would not lead to errors in this chapter, it would lead to errors when we apply these ideas in Chapter 5.
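The three step types can be carried out mechanically. The following is a minimal sketch (not from the text; the helper names scale, swap, and add_multiple are my own) that applies them to the augmented matrix of Example 3, using exact fractions:

```python
# Sketch of the three elimination step types, applied to Example 3.
# Fractions keep the arithmetic exact, as in hand computation.
from fractions import Fraction

def scale(A, i, c):
    """Type (1): multiply row i by a non-zero constant c."""
    assert c != 0
    A[i] = [c * a for a in A[i]]

def swap(A, i, j):
    """Type (2): interchange rows i and j."""
    A[i], A[j] = A[j], A[i]

def add_multiple(A, i, c, j):
    """Type (3): add c times row j to row i."""
    A[i] = [a + c * b for a, b in zip(A[i], A[j])]

# Augmented matrix of Example 3 (rows are equations, last column is the RHS).
A = [[Fraction(x) for x in row] for row in
     [[1, 1, -2, 4], [1, 3, -1, 7], [2, 1, -5, 7]]]

add_multiple(A, 1, -1, 0)   # add (-1) times eq. 1 to eq. 2
add_multiple(A, 2, -2, 0)   # add (-2) times eq. 1 to eq. 3
swap(A, 1, 2)               # interchange eqs. 2 and 3
scale(A, 1, -1)             # multiply eq. 2 by (-1)
add_multiple(A, 2, -2, 1)   # add (-2) times eq. 2 to eq. 3
print([[int(a) for a in row] for row in A])
# [[1, 1, -2, 4], [0, 1, 1, 1], [0, 0, -1, 1]]
```

Each helper mirrors one step type, and each is reversible, which is why the solution set never changes.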

EXAMPLE 4

Determine the solution set of the system of linear equations

    x1 + 2x3 +  x4 = 14
    x1 + 3x3 + 3x4 = 19

Remark
Notice that neither equation contains x2. This may seem peculiar, but it happens in
some applications that one of the variables of interest does not appear in the linear
equations. If it truly is one of the variables of the problem, ignoring it is incorrect.
Rewrite the equations to make it explicit:

    x1 + 0x2 + 2x3 +  x4 = 14
    x1 + 0x2 + 3x3 + 3x4 = 19

Solution: As in Example 3, we want our leading variable in the first equation to be to


the left of the leading variable in the second equation, and we want the leading variable
to be eliminated from the second equation. Thus, we use a type (3) step to eliminate

x1 from the second equation.


Add (-1) times the first equation to the second equation:

    x1 + 0x2 + 2x3 +  x4 = 14
                x3 + 2x4 =  5

Observe that x2 is not shown in the second equation because the leading variable must have a non-zero coefficient. Moreover, we have already finished our elimination procedure, as we have our desired form. The solution can now be completed by back-substitution.
Note that the equations do not completely determine both x3 and x4: one of them can be chosen arbitrarily, and the equations can still be satisfied. For consistency, we always choose the variables that do not appear as a leading variable in any equation to be the ones that will be chosen arbitrarily. We will call these free variables.

EXAMPLE 4
(continued)

Thus, in the revised system, we see that neither x2 nor x4 appears as a leading variable in any equation. Therefore, x2 and x4 are the free variables and may be chosen arbitrarily (for example, x4 = t ∈ ℝ and x2 = s ∈ ℝ). Then the second equation can be solved for the leading variable x3:

    x3 = 5 - 2x4 = 5 - 2t

Now, solve the first equation for its leading variable x1:

    x1 = 14 - 2x3 - x4 = 14 - 2(5 - 2t) - t = 4 + 3t

Thus, the solution set of the system is

    [x1]   [4 + 3t]
    [x2] = [  s   ],    s, t ∈ ℝ
    [x3]   [5 - 2t]
    [x4]   [  t   ]

In this case, there are infinitely many solutions because for each value of s and each value of t that we choose, we get a different solution. We say that this equation is the general solution of the system, and we call s and t the parameters of the general solution. For many purposes, it is useful to recognize that this solution can be split into a constant part, a part in s, and a part in t:

    [x1]   [4]     [0]     [ 3]
    [x2] = [0] + s [1] + t [ 0],    s, t ∈ ℝ
    [x3]   [5]     [0]     [-2]
    [x4]   [0]     [0]     [ 1]
This will be the standard format for displaying general solutions. It is acceptable to leave x2 in the place of s and x4 in the place of t, but then you must say x2, x4 ∈ ℝ. Observe that one immediate advantage of this form is that we can instantly see the geometric interpretation of the solution. The intersection of the two hyperplanes x1 + 2x3 + x4 = 14 and x1 + 3x3 + 3x4 = 19 in ℝ⁴ is a plane in ℝ⁴ that passes through P(4, 0, 5, 0).

The solution procedure we have introduced is known as Gaussian elimination with back-substitution. A slight variation of this procedure is introduced in the next section.
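Since every choice of the parameters must satisfy the original equations, the general solution of Example 4 can be checked mechanically. A short sketch in plain Python (my own check, not part of the text):

```python
# Check that x = (4, 0, 5, 0) + s(0, 1, 0, 0) + t(3, 0, -2, 1) satisfies
# both equations of Example 4 for every choice of s and t tried.
def solution(s, t):
    return (4 + 3*t, s, 5 - 2*t, t)

for s in range(-2, 3):
    for t in range(-2, 3):
        x1, x2, x3, x4 = solution(s, t)
        assert x1 + 2*x3 + x4 == 14        # first equation
        assert x1 + 3*x3 + 3*x4 == 19      # second equation
print("general solution verified")
```

The parameters cancel in each equation, which is exactly why every (s, t) gives a solution.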

EXERCISE 1

Find the general solution to the system of linear equations

    2x1 + 4x2 + 0x3 = 12
     x1 + 2x2 -  x3 =  4

Use the general solution to find three different solutions of the system.


The Matrix Representation of a System of Linear Equations

After you have solved a few systems of equations using elimination, you may realize that you could write the solution faster if you could omit the letters x1, x2, and so on, as long as you could keep the coefficients lined up properly. To do this, we write out the coefficients in a rectangular array called a matrix. A general linear system of m equations in n unknowns will be represented by the matrix


    [ a11  a12  ...  a1j  ...  a1n   b1 ]
    [ a21  a22  ...  a2j  ...  a2n   b2 ]
    [  :    :         :         :     : ]
    [ ai1  ai2  ...  aij  ...  ain   bi ]
    [  :    :         :         :     : ]
    [ am1  am2  ...  amj  ...  amn   bm ]

where the coefficient aij appears in the i-th row and j-th column of the coefficient matrix. This is called the augmented matrix of the system; it is augmented because it includes as its last column the right-hand side of the equations. The matrix without this last column is called the coefficient matrix of the system:
    [ a11  a12  ...  a1j  ...  a1n ]
    [ a21  a22  ...  a2j  ...  a2n ]
    [  :    :         :         :  ]
    [ ai1  ai2  ...  aij  ...  ain ]
    [  :    :         :         :  ]
    [ am1  am2  ...  amj  ...  amn ]

For convenience, we sometimes denote the augmented matrix of a system with coefficient matrix A and right-hand side b by [A | b]. In Chapter 3, we will develop another way of representing a system of linear equations.
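For instance, for the system of Example 3, the coefficient matrix, right-hand side, and augmented matrix [A | b] can be stored as follows (a sketch in plain Python; the variable names are my own):

```python
# Coefficient matrix A and right-hand side b for the system of Example 3,
# with the augmented matrix [A | b] formed by appending b as a last column.
A = [[1, 1, -2],
     [1, 3, -1],
     [2, 1, -5]]
b = [4, 7, 7]

augmented = [row + [rhs] for row, rhs in zip(A, b)]
print(augmented)   # [[1, 1, -2, 4], [1, 3, -1, 7], [2, 1, -5, 7]]

# The j-th column of A collects the coefficients of x_j across all equations:
col_2 = [row[1] for row in A]
print(col_2)       # [1, 3, 1] -- the coefficients of x2
```

The column view shown last is the one the Remark below Example 6 points to, and it becomes central in Chapter 3.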

EXAMPLE 5

Write the coefficient matrix and augmented matrix for the following system:

    3x1 + 8x2 - 18x3 + x4 = 35
     x1 + 2x2 -  4x3      = 11
     x1 + 3x2 -  7x3 + x4 = 10

Solution: The coefficient matrix is formed by writing the coefficients of each equation as the rows of the matrix. Thus, we get the matrix

        [ 3  8  -18  1 ]
    A = [ 1  2   -4  0 ]
        [ 1  3   -7  1 ]

EXAMPLE 5
(continued)

For the augmented matrix, we just add the right-hand side as the last column. We get

    [ 3  8  -18  1 | 35 ]
    [ 1  2   -4  0 | 11 ]
    [ 1  3   -7  1 | 10 ]

EXAMPLE 6

Write the system of linear equations that has the augmented matrix

    [ 1   0  2 | b1 ]
    [ 0  -1  1 | b2 ]
    [ 0   0  1 | -2 ]

Solution: The rows of the matrix tell us the coefficients and right-hand side of each equation. We get the system

    x1      + 2x3 = b1
        -x2 +  x3 = b2
                x3 = -2

Remark
Another way to view the coefficient matrix is to see that the j-th column of the matrix is the vector containing all the coefficients of xj. This view will become very important in Chapter 3 and beyond.
Since each row in the augmented matrix corresponds to an equation in the system of linear equations, performing operations on the equations of the system corresponds to performing the same operations on the rows of the matrix. Thus, the steps in elimination correspond to the following elementary row operations.

Types of Elementary Row Operations

(1) Multiply one row by a non-zero constant.
(2) Interchange two rows.
(3) Add a multiple of one row to another.

As with the steps in elimination, we do not combine operations of type (1) and type (3) into one operation.
The process of performing elementary row operations on a matrix to bring it into some simpler form is called row reduction.
Recall that if a system of equations is obtained from another system by one or more of the elimination steps, the systems are said to be equivalent. For matrices, if the matrix M is row reduced into a matrix N by a sequence of elementary row operations, then we say that M is row equivalent to N. Just as elimination steps are reversible, so are elementary row operations. It follows that if M is row equivalent to N, then N is row equivalent to M, so we may say that M and N are row equivalent. It also follows that if A is row equivalent to B and B is row equivalent to C, then A is row equivalent to C.
Let us see how the elimination in Example 3 appears in matrix notation. To do this, we introduce notation to indicate each elementary row operation. We write Ri + cRj to indicate adding c times row j to row i. We write Ri ↔ Rj to indicate interchanging row i and row j. We write cRi to indicate multiplying row i by a non-zero scalar c. Additionally, at each step we will use ~ to indicate that the matrices are row equivalent. Note that it would be incorrect to use = or ⇒ instead of ~. As one becomes confident with elementary row operations, one may omit these indicators of which elementary row operations were used. However, including them can make checking the steps easier, and instructors may require them in work submitted for grading.

EXAMPLE 7

The augmented matrix for the system in Example 3 is

    [ 1  1  -2 | 4 ]
    [ 1  3  -1 | 7 ]
    [ 2  1  -5 | 7 ]

The first step in the elimination was to add (-1) times the first equation to the second. Here we add (-1) multiplied by the first row to the second. We write

    [ 1  1  -2 | 4 ]                  [ 1  1  -2 | 4 ]
    [ 1  3  -1 | 7 ]  R2 + (-1)R1  ~  [ 0  2   1 | 3 ]
    [ 2  1  -5 | 7 ]                  [ 2  1  -5 | 7 ]

The remaining steps are

                 [ 1   1  -2 |  4 ]              [ 1   1  -2 |  4 ]
    R3 - 2R1  ~  [ 0   2   1 |  3 ]  R2 ↔ R3  ~  [ 0  -1  -1 | -1 ]
                 [ 0  -1  -1 | -1 ]              [ 0   2   1 |  3 ]

                 [ 1  1  -2 | 4 ]                [ 1  1  -2 | 4 ]
    (-1)R2    ~  [ 0  1   1 | 1 ]  R3 - 2R2  ~   [ 0  1   1 | 1 ]
                 [ 0  2   1 | 3 ]                [ 0  0  -1 | 1 ]

All the elementary row operations corresponding to the elimination in Example 3 have been performed. Observe that the final matrix is the augmented matrix for the final system of linear equations that we obtained in Example 3.

EXAMPLE 8

The matrix representation of the elimination in Example 4 is

    [ 1  0  2  1 | 14 ]                  [ 1  0  2  1 | 14 ]
    [ 1  0  3  3 | 19 ]  R2 + (-1)R1  ~  [ 0  0  1  2 |  5 ]

EXERCISE 2

Write out the matrix representation of the elimination used in Exercise 1.

In the next example, we will solve a system of linear equations using Gaussian
elimination with back-substitution entirely in matrix form.

EXAMPLE 9

Find the general solution of the system

    3x1 + 8x2 - 18x3 + x4 = 35
     x1 + 2x2 -  4x3      = 11
     x1 + 3x2 -  7x3 + x4 = 10

Solution: Write the augmented matrix of the system and row reduce:

    [ 3  8  -18  1 | 35 ]             [ 1  2   -4  0 | 11 ]
    [ 1  2   -4  0 | 11 ]  R1 ↔ R2 ~  [ 3  8  -18  1 | 35 ]
    [ 1  3   -7  1 | 10 ]             [ 1  3   -7  1 | 10 ]

    R2 - 3R1     [ 1  2  -4  0 | 11 ]             [ 1  2  -4   0 | 11 ]
    R3 - R1   ~  [ 0  2  -6  1 |  2 ]  R2 ↔ R3 ~  [ 0  1  -3   1 | -1 ]
                 [ 0  1  -3  1 | -1 ]             [ 0  2  -6   1 |  2 ]

                 [ 1  2  -4   0 | 11 ]
    R3 - 2R2  ~  [ 0  1  -3   1 | -1 ]
                 [ 0  0   0  -1 |  4 ]

To find the general solution, we now interpret the final matrix as the augmented matrix of the equivalent system. We get the system

    x1 + 2x2 - 4x3      = 11
          x2 - 3x3 + x4 = -1
                    -x4 =  4

We see that x3 is a free variable, so we let x3 = t ∈ ℝ. Then we use back-substitution to get

    x4 = -4
    x2 = -1 + 3x3 - x4 = 3 + 3t
    x1 = 11 - 2x2 + 4x3 = 5 - 2t

Thus, the general solution is

    [x1]   [5 - 2t]   [ 5]     [-2]
    [x2] = [3 + 3t] = [ 3] + t [ 3],    t ∈ ℝ
    [x3]   [  t   ]   [ 0]     [ 1]
    [x4]   [ -4   ]   [-4]     [ 0]

Check this solution by substituting these values for x1, x2, x3, x4 into the original equations.
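The suggested substitution check can be automated for several values of the parameter at once. A quick sketch (my own check, not part of the text):

```python
# Verify that x = (5 - 2t, 3 + 3t, t, -4) satisfies all three original
# equations of Example 9 for a range of values of t.
def x(t):
    return (5 - 2*t, 3 + 3*t, t, -4)

for t in range(-3, 4):
    x1, x2, x3, x4 = x(t)
    assert 3*x1 + 8*x2 - 18*x3 + x4 == 35
    assert x1 + 2*x2 - 4*x3 == 11
    assert x1 + 3*x2 - 7*x3 + x4 == 10
print("ok")
```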

Observe that there are many different ways that we could choose to row reduce the augmented matrix in any of these examples. For instance, in Example 9 we could interchange row 1 and row 3 instead of interchanging row 1 and row 2. Alternatively, we could use the elementary row operations R2 - (1/3)R1 and R3 - (1/3)R1 to eliminate the non-zero entries beneath the first leading variable. It is natural to ask if there is a way of determining which elementary row operations will work the best. Unfortunately, there is no such algorithm for doing these by hand. However, we will give a basic algorithm for row reducing a matrix into the "proper" form. We start by defining this form.

Row Echelon Form

Based on how we used elimination to solve the system of equations, we define the following form of a matrix.

Definition
Row Echelon Form (REF)

A matrix is in row echelon form (REF) if

(1) When all entries in a row are zeros, this row appears below all rows that contain a non-zero entry.
(2) When two non-zero rows are compared, the first non-zero entry, called the leading entry, in the upper row is to the left of the leading entry in the lower row.

Remark
It follows from these properties that all entries in a column beneath a leading entry must be 0, for otherwise (1) or (2) would be violated.
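The two REF conditions can be tested mechanically. A sketch (the helpers is_ref and leading_index are my own, not the book's):

```python
# Test the two row echelon form conditions:
# (1) zero rows appear below all non-zero rows;
# (2) leading entries move strictly to the right going down.
def leading_index(row):
    """Column index of the first non-zero entry, or None for a zero row."""
    for j, entry in enumerate(row):
        if entry != 0:
            return j
    return None

def is_ref(M):
    last = -1              # column of the previous leading entry
    seen_zero_row = False
    for row in M:
        lead = leading_index(row)
        if lead is None:
            seen_zero_row = True
        else:
            if seen_zero_row:   # non-zero row below a zero row: (1) fails
                return False
            if lead <= last:    # leading entry not strictly right: (2) fails
                return False
            last = lead
    return True

print(is_ref([[1, 1, -2], [0, 1, 1], [0, 0, -1]]))   # True
print(is_ref([[1, 1, -2], [2, 1, -5], [0, 0, -1]]))  # False
```

Checking `lead <= last` also enforces the Remark above: an entry beneath a leading entry would make two leading entries share a column.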

EXAMPLE 10

Determine which of the following matrices are in row echelon form. For each matrix that is not in row echelon form, explain why it is not in row echelon form.

[The four matrices (a)-(d) are not reproduced in this copy.]

Solution: The matrices in (a) and (b) are both in REF. The matrix in (c) is not in REF since the leading entry in the second row is to the right of the leading entry in the third row. The matrix in (d) is not in REF since the leading entry in the second row is beneath the leading entry in the first row.

Any matrix can be row reduced to row echelon form by using the following steps. First, consider the first column of the matrix; if it consists entirely of zero entries, move to the next column. If it contains some non-zero entry, interchange rows (if necessary) so that the top entry in the column is non-zero. Of course, the column may contain multiple non-zero entries. You can use any of these non-zero entries, but some choices will make your calculations considerably easier than others; see the Remarks below. We will call this entry a pivot. Use elementary row operations of type (3) to make all entries beneath the pivot into zeros. Next, consider the submatrix consisting of all columns to the right of the column we have just worked on and all rows below the row with the most recently obtained leading entry. Repeat the procedure described for this submatrix to obtain the next pivot with zeros below it. Keep repeating the procedure until we have "used up" all rows and columns of the original matrix. The resulting matrix is in row echelon form.
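The procedure can be translated into code almost line by line. The following sketch assumes exact fraction arithmetic and simply takes the first non-zero entry as the pivot (a real implementation would choose pivots more carefully, as discussed in the Remarks later in this section):

```python
# Row reduce a matrix to row echelon form, following the algorithm:
# find a pivot in the current column, clear the entries beneath it,
# then recurse on the submatrix below and to the right.
from fractions import Fraction

def row_echelon(M):
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    pivot_row = 0
    for col in range(cols):
        # find a non-zero entry at or below pivot_row in this column
        for r in range(pivot_row, rows):
            if A[r][col] != 0:
                A[pivot_row], A[r] = A[r], A[pivot_row]   # interchange
                break
        else:
            continue   # column is all zeros here; move to the next column
        # type (3) operations: make all entries beneath the pivot zero
        for r in range(pivot_row + 1, rows):
            c = A[r][col] / A[pivot_row][col]
            A[r] = [a - c * p for a, p in zip(A[r], A[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return A

R = row_echelon([[1, 1, 0, 1], [0, 1, 1, 2], [1, 2, 1, -2]])
print([[int(a) for a in row] for row in R])
# [[1, 1, 0, 1], [0, 1, 1, 2], [0, 0, 0, -5]]
```

For this sample augmented matrix, the last row [0 0 0 | -5] signals an inconsistent system.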

EXAMPLE 11

Row reduce the augmented matrix of the following system to row echelon form and use it to determine all solutions of the system:

    x1 +  x2      =  1
          x2 + x3 =  2
    x1 + 2x2 + x3 = -2

Solution: Write the augmented matrix of the system and row reduce:

    [ 1  1  0 |  1 ]            [ 1  1  0 |  1 ]            [ 1  1  0 |  1 ]
    [ 0  1  1 |  2 ]  R3 - R1 ~ [ 0  1  1 |  2 ]  R3 - R2 ~ [ 0  1  1 |  2 ]
    [ 1  2  1 | -2 ]            [ 0  1  1 | -3 ]            [ 0  0  0 | -5 ]

Observe that when we write the system of linear equations represented by the augmented matrix, we get

    x1 + x2      =  1
         x2 + x3 =  2
              0  = -5

Clearly, the last equation is impossible. This means we cannot find values of x1, x2, x3 that satisfy all three equations. Hence, this system has no solution.

Remarks
1. Although the previous algorithm will always work, it is not necessarily the fastest or easiest method to use for any particular matrix. In principle, it does not matter which non-zero entry is chosen as the pivot in the procedure just described. In practice, it can have considerable impact on the amount of work required and on the accuracy of the result. The ability to row reduce a general matrix to REF by hand quickly and efficiently comes only with a lot of practice. Note that for hand calculations on simple integer examples, it is sensible to go to some trouble to postpone fractions because avoiding fractions may reduce both the effort required and the chance of making errors.
2. Observe that the row echelon form for a given matrix A is not unique. In particular, every non-zero matrix has infinitely many row echelon forms that are all row equivalent. However, it can be shown that any two row echelon forms for the same matrix A must agree on the position of the leading entries. (This fact may seem obvious, but it is not easy to prove. It follows from Problem F6 in the Chapter 4 Further Problems.)

We have now seen that a system of linear equations may have exactly one solution, infinitely many solutions, or no solution. We will now discuss this in greater detail.

Consistent Systems and Unique Solutions


We shall see that it is often important to be able to recognize whether a given system
has a solution and, if so, how many solutions. A system that has at least one solution is
called consistent, and a system that does not have any solutions is called inconsistent.
To illustrate the possibilities, consider a system of three linear equations in three
unknowns. Each equation can be considered as the equation of a plane in IR.3. A solution
of the system determines a point of intersection of the three planes that we call P1, P2,

P3. Figure 2.1.1 illustrates an inconsistent system: there is no point common to all three
planes. Figure 2.1.2 illustrates a unique solution: all three planes intersect in exactly
one point. Figure 2.1.3 demonstrates a case where there are infinitely many solutions.

Figure 2.1.1

Two cases where three planes have no common point of intersection: the
corresponding system is inconsistent.

point of intersection
Figure 2.1.2

Three planes with one intersection point: the corresponding system of


equations has a unique solution.

Figure 2.1.3

Three planes that meet in a common line: the corresponding system has
infinitely many solutions.

Row echelon form allows us to answer questions about consistency and uniqueness.
In particular, we have the following theorem.

Theorem 1

Suppose that the augmented matrix [A | b] of a system of linear equations is row equivalent to [S | c], which is in row echelon form.

(1) The given system is inconsistent if and only if some row of [S | c] is of the form [0 0 ... 0 | c], with c ≠ 0.

(2) If the system is consistent, there are two possibilities. Either the number of pivots in S is equal to the number of variables in the system and the system has a unique solution, or the number of pivots is less than the number of variables and the system has infinitely many solutions.

Proof: (1) If [S | c] contains a row of the form [0 0 ... 0 | c], where c ≠ 0, then this corresponds to the equation 0 = c, which clearly has no solution. Hence, the system is inconsistent. On the other hand, if it contains no such row, then each row must either be of the form [0 0 ... 0 | 0], which corresponds to an equation satisfied by any values of x1, ..., xn, or else contains a pivot. We may ignore the rows that consist entirely of zeros, leaving only rows with pivots. In the latter case, the corresponding system can be solved by assigning arbitrary values to the free variables and then determining the remaining variables by back-substitution. Thus, if there is no row of the form [0 0 ... 0 | c] with c ≠ 0, the system cannot be inconsistent.
(2) Now consider the case of a consistent system. The number of leading variables cannot be greater than the number of columns in the coefficient matrix; if it is equal, then each variable is a leading variable and thus is determined uniquely by the system corresponding to [S | c]. If some variables are not leading variables, then they are free variables, and they may be chosen arbitrarily. Hence, there are infinitely many solutions.

Remark
As we will see later in the text, sometimes we are only interested in whether a system is consistent or inconsistent, or in how many solutions a system has. We may not necessarily be interested in finding a particular solution. In these cases, Theorem 1 or the related theorem in Section 2.2 is very useful.
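Theorem 1 turns the consistency question into counting: look for a bad row, then compare the number of pivots to the number of variables. A sketch of that test (my own helper, not from the text):

```python
# Classify a system from its augmented matrix [S | c], assumed to be
# already in row echelon form, following Theorem 1.
def classify(S_aug, num_vars):
    pivots = 0
    for row in S_aug:
        coeffs, rhs = row[:-1], row[-1]
        if any(a != 0 for a in coeffs):
            pivots += 1                 # this row contains a pivot
        elif rhs != 0:
            return "inconsistent"       # row [0 ... 0 | c] with c != 0
    if pivots == num_vars:
        return "unique solution"
    return "infinitely many solutions"

print(classify([[1, 1, -2, 4], [0, 1, 1, 1], [0, 0, -1, 1]], 3))
print(classify([[1, 0, 2, 1, 14], [0, 0, 1, 2, 5]], 4))
print(classify([[1, 1, 0, 1], [0, 1, 1, 2], [0, 0, 0, -5]], 3))
```

The three sample matrices are the final augmented matrices of Examples 3, 4, and 11, which have a unique solution, infinitely many solutions, and no solution, respectively.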

Some Shortcuts and Some Bad Moves

When carrying out elementary row operations, you may get weary of rewriting the matrix every time. Fortunately, we can combine some elementary row operations in one rewriting. For example,

    [ 1  1  -2 ]  R2 - R1      [ 1   1  -2 ]
    [ 1  3  -1 ]  R3 - 2R1  ~  [ 0   2   1 ]
    [ 2  1  -5 ]               [ 0  -1  -1 ]

Choosing one particular row (in this case, the first row) and adding multiples of it to several other rows is perfectly acceptable. There are other elementary row operations that can be combined, but these should not be used until one is extremely comfortable

with row reducing. This is because some combinations of steps do cause errors. For example, applying R1 - R2 and R2 - R1 in a single step gives

    [ 1  1  0 ]  R1 - R2     [ 0  -1  -1 ]
    [ 1  2  1 ]  R2 - R1  ~  [ 0   1   1 ]    (WRONG!)

This is nonsense because the final matrix should have a leading 1 in the first column. By performing one elementary row operation, we change one row; thereafter we must use that row in its new changed form. Thus, when performing multiple elementary row operations in one step, make sure that you are not modifying a row that you are using in another elementary row operation.
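The bad move can be made concrete. In this sketch (my own matrix, chosen for illustration), computing both operations from the same starting rows loses the leading entries in the first column, while performing them one at a time does not:

```python
# Demonstrate why R1 - R2 and R2 - R1 must not be done "simultaneously":
# the second operation would use a stale copy of row 1.
A = [[1, 2], [1, 3]]

# WRONG: both operations computed from the original rows.
wrong = [[a - b for a, b in zip(A[0], A[1])],   # R1 - R2
         [b - a for a, b in zip(A[0], A[1])]]   # R2 - R1, using the OLD R1
print(wrong)   # [[0, -1], [0, 1]]: column 1 is all zeros, information lost

# RIGHT: perform one operation, then use the row in its new, changed form.
B = [row[:] for row in A]
B[1] = [b - a for a, b in zip(B[0], B[1])]   # R2 - R1
B[0] = [a - b for a, b in zip(B[0], B[1])]   # R1 - (new) R2
print(B)       # [[1, 1], [0, 1]]: still row equivalent to A
```

In the wrong version the two result rows are multiples of each other, so the matrix is no longer row equivalent to the original.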

A Word Problem
To illustrate the application of systems of equations to word problems, we give a simple
example. More interesting applications usually require a fair bit of background. In
Section 2.4 we discuss two applications from physics/engineering.

EXAMPLE 12

A boy has a jar full of coins. Altogether there are 180 nickels, dimes, and quarters. The number of dimes is one-half of the total number of nickels and quarters. The value of the coins is $16.00. How many of each kind of coin does he have?

Solution: Let n be the number of nickels, d the number of dimes, and q the number of quarters. Then

    n + d + q = 180

The second piece of information we are given is that

    d = (1/2)(n + q)

We rewrite this into standard form for a linear equation:

    n - 2d + q = 0

Finally, we have the value of the coins, in cents:

    5n + 10d + 25q = 1600

Thus, n, d, and q satisfy the system of linear equations:

     n +   d +   q =  180
     n -  2d +   q =    0
    5n + 10d + 25q = 1600


W rite the augmented matrix and row reduce:

-2

1
1

10

2S

180
160
1
0
4

180
60
140

R1 -R1
R3 -SR1

-[[

-3
s

R3 -R1

0
0

1
0
20
1
0
4

180
-180
700

180
60
80

(-1/3)R2

(1 /S)R3

number

EXAMPLE 12
(continued)

According to Theorem 1, the system is consistent with a unique solution. In particular, writing the final augmented matrix as a system of equations, we get

    n + d + q = 180
        d     =  60
           4q =  80

So, by back-substitution, we get q = 20, d = 60, and n = 180 - 60 - 20 = 100. Hence, the boy has 100 nickels, 60 dimes, and 20 quarters.
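The answer is easy to verify directly (a quick plain-Python check, not part of the text):

```python
# Check that n = 100 nickels, d = 60 dimes, q = 20 quarters satisfies
# all three equations of Example 12.
n, d, q = 100, 60, 20
assert n + d + q == 180             # 180 coins in total
assert n - 2*d + q == 0             # dimes are half of nickels plus quarters
assert 5*n + 10*d + 25*q == 1600    # value in cents is $16.00
print("coin counts check out")
```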

A Remark on Computer Calculations

In computer calculations, the choice of the pivot may affect accuracy. The problem is that real numbers are represented to only a finite number of digits in a computer, so inevitably some round-off or truncation errors occur. When you are doing a large number of arithmetic operations, these errors can accumulate, and they can be particularly serious if at some stage you subtract two nearly equal numbers. The following example gives some idea of the difficulties that might be encountered.
The system

    0.1000x1 + 0.9990x2 = 1.000
    0.1000x1 + 1.000x2  = 1.006

is easily found to have solution x2 = 6.000, x1 = -49.94. Notice that the coefficients were given to four digits. Suppose all entries are rounded to three digits. The system becomes

    0.100x1 + 0.999x2 = 1.00
    0.100x1 +  1.00x2 = 1.01

The solution is now x2 = 10, x1 = -89.9. Notice that despite the fact that there was

only a small change in one term on the right-hand side, the resulting solution is not close to the solution of the original problem. Geometrically, this can be understood by observing that the solution is the intersection point of two nearly parallel lines; therefore, a small displacement of one line causes a major shift of the intersection point. Difficulties of this kind may arise in higher-dimensional systems of equations in real applications.
Carefully choosing pivots in computer programs can reduce the error caused by these sorts of problems. However, some matrices are ill conditioned; even with high-precision calculations, the solutions produced by computers with such matrices may be unreliable. In applications, the entries in the matrices may be experimentally determined, and small errors in the entries may result in large errors in calculated solutions, no matter how much precision is used in computation. To understand this problem better, you need to know something about sources of error in numerical computation and more linear algebra. We shall not discuss it further in this book, but you should be aware of the difficulty if you use computers to solve systems of linear equations.
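The two systems above can be solved by the same elimination formula to see the sensitivity numerically (a sketch; solve2 is my own helper):

```python
# Solve a 2x2 system a1*x1 + b1*x2 = c1, a2*x1 + b2*x2 = c2 by elimination,
# then compare the exact-data and rounded-data versions of the example.
def solve2(a1, b1, c1, a2, b2, c2):
    m = a2 / a1                         # R2 - (a2/a1) R1 eliminates x1
    x2 = (c2 - m * c1) / (b2 - m * b1)  # note: b2 - m*b1 is tiny here
    x1 = (c1 - b1 * x2) / a1
    return x1, x2

x1, x2 = solve2(0.1000, 0.9990, 1.000, 0.1000, 1.000, 1.006)
print(round(x1, 2), round(x2, 3))   # -49.94 6.0

y1, y2 = solve2(0.100, 0.999, 1.00, 0.100, 1.00, 1.01)
print(round(y1, 1), round(y2, 3))   # -89.9 10.0
```

The divisor b2 - m*b1 is 0.001 in both cases, so small perturbations of the data are magnified by a factor of roughly a thousand, which is the nearly-parallel-lines picture in numerical form.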

Section 2.1 Exercises

PROBLEMS 2.1
Practice Problems
A1 Solve each of the following systems by back-substitution and write the general solution in standard form. [The systems (a)-(f) are not reproduced in this copy.]

A2 Which of the following matrices are in row echelon form? For each matrix not in row echelon form, explain why it is not. [The matrices (a)-(d) are not reproduced in this copy.]

A3 Row reduce the following matrices to obtain a row equivalent matrix in row echelon form. Show your steps. [The matrices (a)-(c) are not reproduced in this copy.]

A4 Each of the following is an augmented matrix of a system of linear equations. Determine whether the system is consistent. If it is, determine the general solution. [The matrices (a)-(e) are not reproduced in this copy.]

A5 For each of the following systems of linear equations:
(i) Write the augmented matrix.
(ii) Obtain a row equivalent matrix in row echelon form.
(iii) Determine whether the system is consistent or inconsistent. If it is consistent, determine the number of parameters in the general solution.
(iv) If the system is consistent, write its general solution in standard form.
[The systems (a)-(g) are not reproduced in this copy.]

A6 Each of the following matrices is an augmented matrix of a system of linear equations. Determine the values of a, b, c, d for which the systems are consistent. If a system is consistent, determine whether the system has a unique solution. [The matrices are not reproduced in this copy.]

A7 A fruit seller has apples, bananas, and oranges. Altogether he has 1500 pieces of fruit. On average, each apple weighs 120 grams, each banana weighs 140 grams, and each orange weighs 160 grams. He can sell apples for 25 cents each, bananas for 20 cents each, and oranges for 30 cents each. If the fruit weighs 208 kilograms, and the total selling price is $380, how many of each kind of fruit does the fruit seller have?

A8 A student is taking courses in algebra, calculus, and physics at a college where grades are given in percentages. To determine her standing for a physics prize, a weighted average is calculated based on 50% of the student's physics grade, 30% of her calculus grade, and 20% of her algebra grade; the weighted average is 84. For an applied mathematics prize, a weighted average based on one-third of each of the three grades is calculated to be 83. For a pure mathematics prize, her average based on 50% of her calculus grade and 50% of her algebra grade is 82.5. What are her grades in the individual courses?

Homework Problems

B1 Solve each of the following systems by back-substitution and write the general solution in standard form. [The systems (a)-(d) are not reproduced in this copy.]

B2 Which of the following matrices are in row echelon form? For each matrix not in row echelon form, explain why it is not. [The matrices (a)-(d) are not reproduced in this copy.]

B3 Row reduce the following matrices to obtain a row equivalent matrix in row echelon form. Show your steps. [The matrices are not reproduced in this copy.]

B4 Each of the following is an augmented matrix of a system of linear equations. Determine whether the system is consistent. If it is, determine the general solution. [The matrices are not reproduced in this copy.]

B5 For each of the following systems of linear equations:
(i) Write the augmented matrix.
(ii) Obtain a row equivalent matrix in row echelon form.
(iii) Determine whether the system is consistent or inconsistent. If it is consistent, then determine the number of parameters in the general solution.
(iv) If the system is consistent, write its general solution in standard form.
[The systems (a)-(f) are not reproduced in this copy.]

B6 Each of the following matrices is an augmented matrix of a system of linear equations. Determine the values of a, b, c, d for which the systems are consistent. In cases where the system is consistent, determine whether the system has a unique solution. [The matrices are not reproduced in this copy.]

B7 A bookkeeper is trying to determine the prices that a manufacturer was charging. He examines old sales slips that show the number of various items shipped and the total price. He finds that 20 armchairs, 10 sofa beds, and 8 double beds cost $15,200; 15 armchairs, 12 sofa beds, and 10 double beds cost $15,700; and 12 armchairs, 20 sofa beds, and 10 double beds cost $19,600. Determine the cost for each item or explain why the sales slips must be in error.

B8 (Requires knowledge of forces and moments.) A rod 10 m long is pivoted at its centre; it swings in the horizontal plane. Forces of magnitude F1, F2, F3 are applied perpendicular to the rod in the directions indicated by the arrows in the diagram; F1 is applied to the left end of the rod, F2 is applied at a point 2 m to the right of centre, and F3 at a point 4 m to the right. The total force on the pivot is zero, the moment about the centre is zero, and the sum of the magnitudes of the forces is 80 newtons. Write a system of three equations for F1, F2, and F3; write the corresponding augmented matrix; and use the standard procedure to find F1, F2, and F3.

B9 Students at Linear University write a linear algebra examination. An average mark is computed for 100 students in business, an average is computed for 300 students in liberal arts, and an average is computed for 200 students in science. The average of these three averages is 85%. However, the overall average for the 600 students is 86%. Also, the average for the 300 students in business and science is 4 marks higher than the average for the students in liberal arts. Determine the average for each group of students by solving a system of linear equations.

Computer Problems

C1 Use computer software to determine a matrix in row echelon form that is row equivalent to each of the following matrices. [The matrices A and B are not reproduced in this copy.]

C2 Redo Problems A3, A5, B3, and B5 using a computer.

C3 Suppose that a system of linear equations has the augmented matrix

    [ 1.121  -2.015    2.131 | -1.827 ]
    [ 2.501   3.214    4.130 |  8.430 ]
    [ 3.115  -1.639  -12.473 |  4.612 ]

(a) Determine the solution of this system.
(b) Change the entry in the second row and third column from 4.130 to 4.080 and find the solution.

Section 2.2 Reduced Row Echelon Form, Rank, and Homogeneous Systems

83

Conceptual Problems
D1 Consider the linear system in x, y, z, and w:

x + y + w = b
2x + 3y + z + 5w = ...
z + w = ...
2y + 2z + aw = 1

For what values of the constants a and b is the system
(a) Inconsistent?
(b) Consistent with a unique solution?
(c) Consistent with infinitely many solutions?

D2 Recall that in R^3, two planes n · x = c and m · x = d are parallel
if and only if the normal vector m is a non-zero multiple of the normal
vector n. Row reduce a suitable augmented matrix to explain why two
parallel planes must either coincide or else have no points in common.

2.2 Reduced Row Echelon Form, Rank, and Homogeneous Systems
To determine the solution of a system of linear equations, elimination with
back-substitution, as described in Section 2.1, is the standard basic
procedure. In some situations and applications, however, it is advantageous
to carry the elimination steps (elementary row operations) as far as
possible to avoid the need for back-substitution. To see what further
elementary row operations might be worthwhile, recall that the Gaussian
elimination procedure proceeds by selecting a pivot and using elementary
row operations to create zeros beneath the pivot. The only further
elimination steps that simplify the system are steps that create zeros
above the pivot.

EXAMPLE 1

In Example 2.1.7, we row reduced the augmented matrix for the original
system to a row equivalent matrix in row echelon form. Instead of using
back-substitution to solve the system as we did in Example 2.1.7, we
instead perform further elementary row operations, such as (-1)R3 and
R1 + 2R3, to create zeros above each pivot. The end result is

[ 1 0 0 |  0 ]
[ 0 1 0 |  2 ]
[ 0 0 1 | -1 ]

This is the augmented matrix for the system x1 = 0, x2 = 2, and x3 = -1,
which gives us the solution we found in Example 2.1.7.

Chapter 2   Systems of Linear Equations

This system has been solved by complete elimination. The leading variable
in the j-th equation has been eliminated from every other equation. This
procedure is often called Gauss-Jordan elimination to distinguish it from
Gaussian elimination with back-substitution. Observe that the elementary
row operations used in Example 1 are exactly the same as the operations
performed in the back-substitution in Example 2.1.7.
A matrix corresponding to a system on which Gauss-Jordan elimination has been
carried out is in a special kind of row echelon form.
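For readers who want to experiment, the complete-elimination procedure can be sketched in a few lines of Python. This illustration is ours, not the text's: the helper name `rref` is invented here, and exact fractions stand in for the by-hand arithmetic.

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan elimination: return the reduced row echelon form."""
    m = [[Fraction(x) for x in row] for row in rows]
    lead = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(lead, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue                       # no pivot in this column
        m[lead], m[piv] = m[piv], m[lead]  # move the pivot row up
        m[lead] = [a / m[lead][c] for a in m[lead]]  # make the leading entry 1
        for i in range(len(m)):
            if i != lead and m[i][c] != 0:  # zeros above AND below the leading 1
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[lead])]
        lead += 1
        if lead == len(m):
            break
    return m
```

Unlike plain Gaussian elimination, the inner loop clears entries above the pivot as well, so no back-substitution step is needed afterward.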

Definition
Reduced Row Echelon Form (RREF)

A matrix R is said to be in reduced row echelon form (RREF) if

(1) It is in row echelon form.
(2) All leading entries are 1; such an entry is called a leading 1.
(3) In a column with a leading 1, all the other entries are zeros.
As in the case of row echelon form, it is easy to see that every matrix is row equivalent
to a matrix in reduced row echelon form via Gauss-Jordan elimination. However, in
this case we get a stronger result.

Theorem 1

For any given matrix A there is a unique matrix in reduced row echelon form that is
row equivalent to A.

Proof: You are asked to prove that there is only one matrix in reduced row
echelon form that is row equivalent to A in Problem F6 of the Chapter 4
Further Problems.

EXAMPLE 2

Obtain the matrix in reduced row echelon form that is row equivalent to
the matrix A.

Solution: Row reducing the matrix with the elementary row operations
R2 - 3R1, then (-1)R2, and finally R1 + 2R2 creates zeros below and then
above each leading 1. This final matrix is in reduced row echelon form.

EXERCISE 1

Row reduce the matrices in Examples 2.1.9 and 2.1.12 into reduced row echelon form.

Because of the uniqueness of the reduced row echelon form, we often speak of
the reduced row echelon form of a matrix or of reducing a matrix to its reduced row
echelon form.


Remarks
1. In general, reducing an augmented matrix to reduced row echelon form to solve
a system is not more efficient than the method used in Section 2.1. As previously
mentioned, both methods are essentially equivalent for solving small systems
by hand.
2. When row reducing to reduced row echelon form by hand, it seems more
natural not to obtain a row echelon form first. Instead, you might obtain
zeros below and above any leading 1 before moving on to the next leading 1.
However, for programming a computer to row reduce a matrix, this is a poor
strategy because it requires more multiplications and additions than the
previous strategy. See Problem F2 at the end of the chapter.

Rank of a Matrix
We saw in Theorem 2.1.1 that the number of leading entries in a row echelon form
of the augmented matrix of a system determines whether the system is consistent or
inconsistent. It also determines how many solutions (one or infinitely many) the system
has if it is consistent. Thus, we make the following definition.

Definition
Rank

The rank of a matrix A is the number of leading 1s in its reduced row
echelon form and is denoted by rank(A).


The rank of A is also equal to the number of leading entries in any row echelon
form of A. However, since the row echelon form is not unique, it is more tiresome to
give clear arguments in terms of row echelon form. In Section 3.4 we shall see a more
conceptual way of describing rank.
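In code, this count can be made with forward elimination alone, since any row echelon form of A has the same number of leading entries. A sketch (the helper name `rank` is ours, not the text's):

```python
from fractions import Fraction

def rank(rows):
    """Number of leading entries in a row echelon form of the matrix."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0  # next pivot row
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue  # no pivot in this column
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):  # create zeros below the pivot
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r
```

Exact fractions avoid the floating-point tolerance questions that a numerical rank computation would raise.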

EXAMPLE 3

The rank of the matrix in Example 1 is 3 since the RREF of the matrix has
three leading 1s. The rank of the matrix in Example 2 is 2 as the RREF of
the matrix has two leading 1s.

EXERCISE 2

Determine the rank of each of the following matrices:
(a) A = ...
(b) B = ...

Theorem 2

Let [A | b] be a system of linear equations in n variables.

(1) The system is consistent if and only if the rank of the coefficient
matrix A is equal to the rank of the augmented matrix [A | b].

(2) If the system is consistent, then the number of parameters in the
general solution is the number of variables minus the rank of the matrix:

# of parameters = n - rank(A)

Proof: Notice that the first n columns of the reduced row echelon form of
[A | b] consist of the reduced row echelon form of A. By Theorem 2.1.1,
the system is inconsistent if and only if the reduced row echelon form of
[A | b] contains a row of the form [0 ... 0 | 1]. But this is true if and
only if the rank of [A | b] is greater than the rank of A.

If the system is consistent, then the free variables are the variables
that are not leading variables of any equation in a row echelon form of
the matrix. Thus, by definition, there are n - rank(A) free variables and
hence n - rank(A) parameters in the general solution.

Corollary 3

Let [A | b] be a system of m linear equations in n variables. Then
[A | b] is consistent for all b ∈ R^m if and only if rank(A) = m.

Proof: This follows immediately from Theorem 2.
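Theorem 2 and Corollary 3 translate directly into a consistency test: compare rank(A) with the rank of the augmented matrix, and count parameters as n - rank(A). A hedged sketch, with function names of our own choosing:

```python
from fractions import Fraction

def rank(rows):
    """Number of leading entries after forward elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def system_info(A, b):
    """Return (consistent?, number of parameters) for the system [A | b]."""
    n = len(A[0])
    rank_A = rank(A)
    rank_aug = rank([row + [rhs] for row, rhs in zip(A, b)])
    if rank_A != rank_aug:
        return False, None  # a row [0 ... 0 | c] with c nonzero appears
    return True, n - rank_A
```

When the ranks differ the system is inconsistent, exactly as in the proof of Theorem 2.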

Homogeneous Linear Equations


Frequently systems of linear equations appear where all of the terms on the right-hand
side are zero.

Definition
Homogeneous

A linear equation is homogeneous if the right-hand side is zero. A system
of linear equations is homogeneous if all of the equations of the system
are homogeneous.

Since a homogeneous system is a special case of the systems already discussed,


no new tools or techniques are needed to solve them. However, we normally work
only with the coefficient matrix of a homogeneous system since the last column of the
augmented matrix consists entirely of zeros.

EXAMPLE 4

Find the general solution of the homogeneous system

2x1 + x2 = 0
x1 + x2 - x3 = 0
-x2 + 2x3 = 0

Solution: We row reduce the coefficient matrix of the system to RREF:

[ 2  1  0 ]             [ 1  1 -1 ]            [ 1  1 -1 ]
[ 1  1 -1 ]  R1 <-> R2  [ 2  1  0 ]  R2 - 2R1  [ 0 -1  2 ]
[ 0 -1  2 ]             [ 0 -1  2 ]            [ 0 -1  2 ]

             R1 + R2    [ 1  0  1 ]            [ 1  0  1 ]
             R3 - R2    [ 0 -1  2 ]  (-1)R2    [ 0  1 -2 ]
                        [ 0  0  0 ]            [ 0  0  0 ]

This corresponds to the homogeneous system

x1 + x3 = 0
x2 - 2x3 = 0

Hence, x3 is a free variable, so we let x3 = t ∈ R. Then x1 = -x3 = -t,
x2 = 2x3 = 2t, and the general solution is

x = t(-1, 2, 1), t ∈ R

Observe that every homogeneous system is consistent as the zero vector
will certainly be a solution. We call 0 the trivial solution. Thus, as we
will see frequently throughout the text, when dealing with homogeneous
systems, we are often mostly interested in how many parameters are in the
general solution. Of course, for this we can apply Theorem 2.

EXAMPLE 5

Determine the number of parameters in the general solution of the
homogeneous system

x1 + 2x2 + 2x3 + x4 + 4x5 = 0
3x1 + 7x2 + 7x3 + 3x4 + 13x5 = 0
2x1 + 5x2 + 5x3 + 2x4 + 9x5 = 0

Solution: We row reduce the coefficient matrix:

[ 1 2 2 1  4 ]             [ 1 2 2 1 4 ]             [ 1 0 0 1 2 ]
[ 3 7 7 3 13 ]  R2 - 3R1   [ 0 1 1 0 1 ]  R1 - 2R2   [ 0 1 1 0 1 ]
[ 2 5 5 2  9 ]  R3 - 2R1   [ 0 1 1 0 1 ]  R3 - R2    [ 0 0 0 0 0 ]

The rank of the coefficient matrix is 2 and the number of variables is 5.
Thus, by Theorem 2, there are 5 - 2 = 3 parameters in the general solution.

EXERCISE 3

Find the general solution of the system in Example 5.
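One way to organize the bookkeeping for exercises like this: after forward elimination, the free variables are exactly the non-pivot columns. A sketch of that classification (the helper name is ours):

```python
from fractions import Fraction

def pivot_and_free_columns(rows):
    """Forward-eliminate and classify columns as pivot (leading) or free."""
    m = [[Fraction(x) for x in row] for row in rows]
    ncols = len(m[0]) if m else 0
    pivots = []
    r = 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue  # this column holds a free variable
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(ncols) if c not in pivots]
    return pivots, free
```

Applied to the coefficient matrix of Example 5, the pivot columns are 0 and 1 (variables x1, x2) and the free columns are 2, 3, 4 (variables x3, x4, x5), matching the three parameters found in that example.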


PROBLEMS 2.2
Practice Problems

A1 Determine the RREF and rank of the following matrices.
(a)-(i) ...

A2 Suppose that each of the following is the coefficient matrix of a
homogeneous system already in RREF. For each matrix, determine the number
of parameters in the general solution and write out the general solution
in standard form.
(a)-(f) ...

A3 For each homogeneous system, write the coefficient matrix and determine
the rank and number of parameters in the general solution. Then determine
the general solution.
(a)-(d) ...



A4 Solve the following systems of linear equations by row reducing the
augmented matrix to RREF. Compare your steps with your solutions from
Problem 2.1.A5.
(a)-(g) ...

A5 Solve the system [A | b] by row reducing the augmented matrix to RREF.
Then, without any further operations, find the general solution to the
homogeneous system [A | 0].
(a)-(d) ...

Homework Problems

B1 Determine the RREF and rank of the following matrices.
(a)-(g) ...

B2 Suppose that each of the following is the coefficient matrix of a
homogeneous system already in RREF. For each matrix, determine the number
of parameters in the general solution and write out the general solution
in standard form.
(a)-(c) ...

B3 For each homogeneous system, write the coefficient matrix and determine
the rank and number of parameters in the general solution. Then determine
the general solution.
(a) x1 + x2 - 3x3 = 0; x1 + 4x2 - 2x3 = 0; 2x1 - 3x3 = 0
(b) x1 + 5x2 - 3x3 = 0; 3x1 + 5x2 - 9x3 = 0; 4x1 + 8x2 - 7x3 = 0
(c) x1 + x2 + x3 - 2x4 = 0; x1 + 4x2 + ... - 14x4 = 0;
    2x1 + 7x2 + ... = 0; x1 + 3x2 + ... = 0
(d) x1 + 3x2 + x3 + x4 + 2x5 = 0; x1 + 2x2 + x3 + x4 + x5 = 0;
    x1 + 2x2 + 2x3 + x4 = 0; 2x2 + x3 - x5 = 0

B4 Solve the system [A | b] by row reducing the augmented matrix to RREF.
Then, without any further operations, find the general solution to the
homogeneous system [A | 0].
(a)-(d) ...

Computer Problems
C1 Determine the RREF of the augmented matrix of the system of linear
equations

2.01x + 3.45y + 2.23z = 4.13
1.57x + 2.03y - 3.11z = 6.11
2.23x + 7.10y - 4.28z = 0.47

If the system is consistent, determine the general solution.

C2 Determine the RREF of the matrices in Problem 2.1.C1.

Section 2.3 Application to Spanning and Linear Independence

91

Conceptual Problems
D1 We want to find a vector x ≠ 0 in R^3 that is simultaneously orthogonal
to given vectors a, b, c ∈ R^3.
(a) Write equations that must be satisfied by x.
(b) What condition must be satisfied by the rank of the matrix

A = [ a1 a2 a3
      b1 b2 b3
      c1 c2 c3 ]

if there are to be non-trivial solutions? Explain.

D2 (a) Suppose that the matrix ... is the coefficient matrix of a
homogeneous system of linear equations. Find the general solution of the
system and indicate why it describes a line through the origin.
(b) Suppose that a matrix A with two rows and three columns is the
coefficient matrix of a homogeneous system. If A has rank 2, then explain
why the solution set of the homogeneous system is a line through the
origin. What could you say if rank(A) = 1?
(c) Let u, v, and w be three vectors in R^4. Write conditions on a vector
x ∈ R^4 such that x is orthogonal to u, v, and w. (This should lead to a
homogeneous system with coefficient matrix C, whose rows are u, v, and w.)
What does the rank of C tell us about the set of vectors x that are
orthogonal to u, v, and w?

D3 What can you say about the consistency of a system of m linear
equations in n variables and the number of parameters in the general
solution if:
(a) m = 5, n = 7, and the rank of the coefficient matrix is 4?
(b) m = 3, n = 6, and the rank of the coefficient matrix is 3?
(c) m = 5, n = 4, and the rank of the augmented matrix is 4?

D4 A system of linear equations has augmented matrix ... For which values
of a and b is the system consistent? Are there values for which there is a
unique solution? Determine the general solution.

2.3 Application to Spanning and Linear Independence
As discussed at the beginning of this chapter, solving systems of linear equations will
play an important role in much of what we do in the rest of the text. Here we will show
how to use the methods described in this chapter to solve some of the problems we
encountered in Chapter 1.

Spanning Problems
Recall that a vector v ∈ R^n is in the span of a set {v1, ..., vk} of
vectors in R^n if and only if there exist scalars t1, ..., tk ∈ R such that

v = t1 v1 + ... + tk vk

Observe that this vector equation actually represents n equations (one for
each component of the vectors) in the k unknowns t1, ..., tk. Thus, it is
easy to establish whether a vector is in the span of a set; we just need
to determine whether the corresponding system of linear equations is
consistent or not.
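This membership test is mechanical enough to sketch in code: place v1, ..., vk as the columns of a coefficient matrix, append v as the right-hand side, and compare ranks as in Theorem 2.2.2. The helper names below are our own, not the text's.

```python
from fractions import Fraction

def rank(rows):
    """Number of leading entries after forward elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def in_span(vectors, v):
    """Is v a linear combination of the given vectors?

    The vectors become the columns of the coefficient matrix; consistency
    is checked by comparing the ranks of [A] and [A | v]."""
    n = len(v)
    coeff = [[vec[i] for vec in vectors] for i in range(n)]
    aug = [coeff[i] + [v[i]] for i in range(n)]
    return rank(aug) == rank(coeff)
```

For instance, (1, 2, 3) = 1(1, 0, 1) + 2(0, 1, 1) lies in the span of those two vectors, while (1, 0, 0) does not.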


EXAMPLE 1

Determine whether the vector v = ... is in the set Span{...}.

Solution: Consider the vector equation

t1 v1 + t2 v2 + t3 v3 + t4 v4 = v

Simplifying the left-hand side using vector operations and comparing
corresponding entries gives a system of three linear equations in the four
unknowns t1, t2, t3, and t4:

t1 - t2 + 2t3 - t4 = -2
t1 + t2 + t3 - 3t4 = -3
...

We row reduce the augmented matrix; after the operations R2 - R1, R3 - R1,
and R3 - 2R2, the last row has the form [0 0 0 0 | c] with c nonzero. By
Theorem 2.1.1, the system is inconsistent. Hence, there do not exist
values of t1, t2, t3, and t4 that satisfy the system of equations, so v is
not in the span of the vectors.

EXAMPLE 2

Let v1 = ..., v2 = ..., and v3 = .... Write v = ... as a linear
combination of v1, v2, and v3.

Solution: We need to find scalars t1, t2, and t3 such that

t1 v1 + t2 v2 + t3 v3 = v

Simplifying the left-hand side using vector operations and comparing
corresponding entries gives a system of linear equations in t1, t2, and
t3. Row reducing the augmented matrix to RREF gives the solution t1 = 2,
t2 = 0, and t3 = -3. Hence, we get 2v1 + 0v2 - 3v3 = v.

EXERCISE 1

Determine whether v = ... is in the set Span{...}.

EXAMPLE 3

Consider Span{...}, where the three given vectors lie in R^4. Find a
homogeneous system of linear equations that defines this set.

Solution: A vector x = (x1, x2, x3, x4) is in this set if and only if, for
some t1, t2, and t3,

t1 v1 + t2 v2 + t3 v3 = x

Simplifying the left-hand side gives us a system of equations whose
augmented matrix has right-hand column (x1, x2, x3, x4). Row reducing this
matrix to RREF turns the right-hand column into
(x1, 2x1 - x2, -5x1 + 2x2 + x3, -x1 + 2x2 + x4), with the last two rows of
the coefficient part zero. The system is therefore consistent if and only
if -5x1 + 2x2 + x3 = 0 and -x1 + 2x2 + x4 = 0. Thus, this system of linear
equations defines the set.

EXAMPLE 4

Show that every vector v ∈ R^3 can be written as a linear combination of
the vectors v1 = ..., v2 = ..., and v3 = ....

Solution: To show that every vector v ∈ R^3 can be written as a linear
combination of the vectors v1, v2, and v3, we need to show that the system

t1 v1 + t2 v2 + t3 v3 = v

is consistent for all v ∈ R^3. Simplifying the left-hand side gives us a
system of equations whose coefficient matrix has v1, v2, and v3 as its
columns. Row reducing the coefficient matrix to RREF gives

[ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ]

Hence, the rank of the matrix is 3, which equals the number of rows
(equations). Hence, by Theorem 2.2.2, the system is consistent for all
v ∈ R^3, as required.

We generalize the method used in Example 4 to get the following important
results.

Lemma 1

A set of k vectors {v1, ..., vk} in R^n spans R^n if and only if the rank
of the coefficient matrix of the system t1 v1 + ... + tk vk = v is n.

Proof: If Span{v1, ..., vk} = R^n, then every vector v ∈ R^n can be
written as a linear combination of the vectors {v1, ..., vk}. That is, the
system of linear equations t1 v1 + ... + tk vk = v has a solution for
every v ∈ R^n. By Theorem 2.2.2, this means that the rank of the
coefficient matrix of the system equals n (the number of equations). On
the other hand, if the rank of the coefficient matrix of the system
t1 v1 + ... + tk vk = v is n, then the system is consistent for all
v ∈ R^n by Theorem 2.2.2. Hence, Span{v1, ..., vk} = R^n.

Theorem 2

Let {v1, ..., vk} be a set of k vectors in R^n. If Span{v1, ..., vk} = R^n,
then k ≥ n.

Proof: By Lemma 1, if Span{v1, ..., vk} = R^n, then the rank of the
coefficient matrix is n. But, if we have n leading 1s, then there must be
at least n columns in the matrix to contain the leading 1s. Hence, the
number of columns, k, must be greater than or equal to n.
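Lemma 1 gives an immediate computational test for spanning: form the n x k matrix whose columns are the vectors and check whether its rank is n. A sketch under those assumptions (helper names ours):

```python
from fractions import Fraction

def rank(rows):
    """Number of leading entries after forward elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def spans_Rn(vectors):
    """Lemma 1: the set spans R^n iff the n x k coefficient matrix has rank n."""
    n = len(vectors[0])
    coeff = [[vec[i] for vec in vectors] for i in range(n)]
    return rank(coeff) == n
```

Note that two vectors can never span R^3, in line with Theorem 2's requirement that k >= n.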

Linear Independence Problems

Recall that a set of vectors {v1, ..., vk} in R^n is said to be linearly
independent if and only if the only solution to the vector equation

t1 v1 + ... + tk vk = 0

is the solution ti = 0 for 1 ≤ i ≤ k. From our work above, we see that
this is true when the corresponding homogeneous system of n equations in
k unknowns has a unique solution (the trivial solution).

EXAMPLE 5

Determine whether the set {v1, v2, v3, v4} is linearly independent or
dependent.

Solution: Consider

t1 v1 + t2 v2 + t3 v3 + t4 v4 = 0

Simplifying as above, this gives a homogeneous system of three equations
in the four unknowns t1, t2, t3, and t4. Notice that we do not even need
to row reduce the coefficient matrix. By Theorem 2.2.2, the number of
parameters in the general solution equals the number of variables minus
the rank of the matrix. There are four variables, but the maximum the rank
can be is 3 since there are only three rows. Hence, the number of
parameters is at least one, so the system has infinitely many solutions.
Therefore, the set is linearly dependent.

EXAMPLE 6

Let v1 = ..., v2 = ..., and v3 = .... Determine whether the set
{v1, v2, v3} is linearly independent or dependent.

Solution: Consider t1 v1 + t2 v2 + t3 v3 = 0. As above, we find the
coefficient matrix of the corresponding system. Using the same elementary
row operations as in Example 2, we get

[ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ]

Therefore, the set is linearly independent since the system has a unique
solution (the trivial solution).

EXERCISE 2

Determine whether the set {...} is linearly independent or dependent.

Again, we can generalize the method used in Examples 5 and 6 to prove some
important results.

Lemma 3

A set of vectors {v1, ..., vk} in R^n is linearly independent if and only
if the rank of the coefficient matrix of the homogeneous system
t1 v1 + ... + tk vk = 0 is k.

Proof: If {v1, ..., vk} is linearly independent, then the system of linear
equations t1 v1 + ... + tk vk = 0 has a unique solution. Thus, the rank of
the coefficient matrix equals the number of unknowns k by Theorem 2.2.2.

On the other hand, if the rank of the coefficient matrix equals k, then
the homogeneous system has k - k = 0 parameters. Therefore, it has the
unique solution t1 = ... = tk = 0, and so the set is linearly independent.

Theorem 4

If {v1, ..., vk} is a linearly independent set of vectors in R^n, then
k ≤ n.

Proof: By Lemma 3, if {v1, ..., vk} is linearly independent, then the rank
of the coefficient matrix is k. Hence, there must be at least k rows in
the matrix to contain the leading 1s. Therefore, the number of rows n must
be greater than or equal to k.
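Lemma 3 likewise turns independence into a rank computation: the set is linearly independent exactly when the rank of the coefficient matrix equals the number of vectors k. A sketch (helper names ours):

```python
from fractions import Fraction

def rank(rows):
    """Number of leading entries after forward elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def is_independent(vectors):
    """Lemma 3: independent iff the rank of the coefficient matrix equals k."""
    k = len(vectors)
    n = len(vectors[0])
    coeff = [[vec[i] for vec in vectors] for i in range(n)]
    return rank(coeff) == k
```

The third test case below illustrates Theorem 4: three vectors in R^2 can never be independent, since the rank of a 2 x 3 matrix is at most 2.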

Bases of Subspaces

Recall from Section 1.2 that we defined a basis of a subspace S of R^n to
be a linearly independent set that spans S. Thus, with our previous tools,
we can now easily identify a basis for a subspace. In particular, to show
that a set B of vectors in R^n is a basis for a subspace S, we just need
to show that Span B = S and B is linearly independent. We demonstrate this
with a couple of examples.

EXAMPLE 7

Let B = {...} be the given set of three vectors in R^3. Prove that B is a
basis for R^3.

Solution: To show that every vector v ∈ R^3 can be written as a linear
combination of the vectors in B, we just need to show that the system

t1 v1 + t2 v2 + t3 v3 = v

is consistent for all v ∈ R^3. Row reducing the coefficient matrix that
corresponds to this system to RREF gives

[ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ]

The rank of the matrix is 3, which equals the number of rows (equations).
Hence, by Theorem 2.2.2, the system is consistent for all v ∈ R^3, as
required.

Moreover, to determine whether B is linearly independent, we would perform
the same elementary row operations on the same coefficient matrix. So, we
see that the rank of the matrix also equals the number of columns
(variables). By Theorem 2.2.2, the system has a unique solution, and hence
B is also linearly independent. Thus, B is a basis for R^3.

EXAMPLE 8

Show that B = {...} is a basis for the plane -3x1 + 2x2 + x3 = 0.

Solution: We first observe that B is clearly linearly independent since
neither vector is a scalar multiple of the other. Thus, we need to show
that every vector in the plane can be written as a linear combination of
the vectors in B. To do this, observe that any vector x in the plane must
satisfy the condition of the plane. Hence, every vector in the plane has
the form

x = (x1, x2, 3x1 - 2x2)

since x3 = 3x1 - 2x2. Therefore, we now just need to show that the
equation

t1 v1 + t2 v2 = (x1, x2, 3x1 - 2x2)

is always consistent, where v1 and v2 are the two vectors in B. Row
reducing the corresponding augmented matrix shows that the system is
consistent for all x1 and x2. So, the system is consistent and hence B is
a basis for the plane.

Theorem 5

A set of vectors {v1, ..., vn} is a basis for R^n if and only if the rank
of the coefficient matrix of t1 v1 + ... + tn vn = v is n.

Proof: If {v1, ..., vn} is a basis for R^n, then it is linearly
independent. Hence, by Lemma 3, the rank of the coefficient matrix is n.
If the rank of the coefficient matrix is n, then the set is linearly
independent and spans R^n by Lemma 1 and Lemma 3.
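Theorem 5, combined with the counting results above, means a quick basis check for R^n: there must be exactly n vectors, and the n x n coefficient matrix must have rank n. A sketch (helper names ours):

```python
from fractions import Fraction

def rank(rows):
    """Number of leading entries after forward elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def is_basis_of_Rn(vectors):
    """Theorem 5: n vectors form a basis of R^n iff the coefficient matrix has rank n."""
    n = len(vectors[0])
    if len(vectors) != n:
        return False  # a basis of R^n must have exactly n vectors
    coeff = [[vec[i] for vec in vectors] for i in range(n)]
    return rank(coeff) == n
```

In the second test below, the third vector is the sum of the first two, so the rank drops to 2 and the set fails to be a basis.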

Theorem 5 gives us a condition to test whether a set of n vectors in R^n
is a basis for R^n. Moreover, Lemma 1 and Lemma 3 tell us that a basis of
R^n must contain n vectors. We now want to prove that every basis of a
subspace S of R^n must contain the same number of vectors.

Lemma 6

Suppose that S is a non-trivial subspace of R^n and Span{v1, ..., vℓ} = S.
If {u1, ..., uk} is a linearly independent set of vectors in S, then
k ≤ ℓ.

Proof: Since each ui, 1 ≤ i ≤ k, is a vector in S, by definition of
spanning, it can be written as a linear combination of the vectors
v1, ..., vℓ. We get

u1 = a11 v1 + a21 v2 + ... + aℓ1 vℓ
u2 = a12 v1 + a22 v2 + ... + aℓ2 vℓ
...
uk = a1k v1 + a2k v2 + ... + aℓk vℓ

Consider the equation

0 = t1 u1 + ... + tk uk
  = t1(a11 v1 + a21 v2 + ... + aℓ1 vℓ) + ... + tk(a1k v1 + a2k v2 + ... + aℓk vℓ)
  = (a11 t1 + ... + a1k tk)v1 + ... + (aℓ1 t1 + ... + aℓk tk)vℓ

Since {v1, ..., vℓ} is linearly independent, the only solution to this
equation is

a11 t1 + ... + a1k tk = 0, ..., aℓ1 t1 + ... + aℓk tk = 0

This gives a homogeneous system of ℓ equations in the k unknowns
t1, ..., tk. If k > ℓ, then this system would have a non-trivial solution,
which would imply that {u1, ..., uk} is linearly dependent. But we assumed
that {u1, ..., uk} is linearly independent, so we must have k ≤ ℓ.

Theorem 7

If {v1, ..., vℓ} and {u1, ..., uk} are both bases of a non-trivial
subspace S of R^n, then k = ℓ.

Proof: We know that {v1, ..., vℓ} is a basis for S, so it is linearly
independent. Also, {u1, ..., uk} is a basis for S, so
Span{u1, ..., uk} = S. Thus, by Lemma 6, we get ℓ ≤ k. Similarly,
{u1, ..., uk} is linearly independent as it is a basis for S, and
Span{v1, ..., vℓ} = S, so k ≤ ℓ. Therefore, k = ℓ, as required.

This theorem justifies the following definition.

Definition
Dimension

If S is a non-trivial subspace of R^n with a basis containing k vectors,
then we say that the dimension of S is k and write

dim S = k

So that we can talk about the basis of any subspace of R^n, we define the
empty set to be a basis for the trivial subspace {0} of R^n and thus say
that the dimension of the trivial vector space is 0.

EXAMPLE 9

By definition, a plane in R^n that passes through the origin is spanned by
a set of two linearly independent vectors. Thus, such a plane is
2-dimensional since every basis of the plane will have two vectors. This
result fits with our geometric understanding of a plane.
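Computationally, the dimension of a spanned subspace can be read off as a rank, anticipating the row-space discussion of Section 3.4: dim Span{v1, ..., vk} equals the rank of the matrix whose rows are the vectors. A sketch under that assumption (helper names ours):

```python
from fractions import Fraction

def rank(rows):
    """Number of leading entries after forward elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def dim_span(vectors):
    """Dimension of Span{v1, ..., vk}: rank of the matrix with the vectors as rows."""
    return rank([list(v) for v in vectors])
```

The first test below spans a plane through the origin in R^3, so its dimension is 2, matching Example 9.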


PROBLEMS 2.3
Practice Problems

A1 Let B = {...}. For each of the following vectors, either express it as
a linear combination of the vectors of B or show that it is not a vector
in Span B.
(a)-(c) ...

A2 Let B = {...}. For each of the following vectors, either express it as
a linear combination of the vectors of B or show that it is not a vector
in Span B.
(a)-(c) ...

A3 Using the procedure in Example 3, find a homogeneous system that
defines the given set.
(a)-(d) ...

A4 Using the procedure in Example 8, determine whether the given set is a
basis for the given plane or hyperplane.
(a)-(d) ...

A5 Determine whether the following sets are linearly independent. If the
set is linearly dependent, find all linear combinations of the vectors
that equal the zero vector.
(a)-(c) ...

A6 Determine all values of k such that the given set is linearly
independent.
(a)-(b) ...

A7 Determine whether the given set is a basis for R^3.
(a)-(d) ...

Homework Problems

B1 Let B = {...}. For each of the following vectors, either express it as
a linear combination of the vectors of B or show that it is not a vector
in Span B.
(a)-(c) ...

B2 Let B = {...}. For each of the following vectors, either express it as
a linear combination of the vectors of B or show that it is not a vector
in Span B.
(a)-(c) ...

B3 Using the procedure in Example 3, find a homogeneous system that
defines the given set.
(a)-(d) ...

B4 Using the procedure in Example 8, determine whether the given set is a
basis for the given plane or hyperplane.
(a)-(d) ...

B5 Determine whether the following sets are linearly independent. If the
set is linearly dependent, find all linear combinations of the vectors
that equal the zero vector.
(a)-(b) ...

B6 Determine all values of k such that the given set is linearly
independent.
(a)-(b) ...

B7 Determine whether the given set is a basis for R^3.
(a)-(d) ...

Computer Problems

C1 Let B = {...} and let v = ....
(a) Determine whether B is linearly independent or dependent.
(b) Determine whether v is in Span B.

Conceptual Problems

D1 Let B = {e1, ..., en} be the standard basis for R^n. Prove that
Span B = R^n and that B is linearly independent.

D2 Let B = {v1, ..., vk} be a set of vectors in R^n.
(a) Prove that if k < n, then there exists a vector v ∈ R^n such that v is
not in Span B.
(b) Prove that if k > n, then B must be linearly dependent.
(c) Prove that if k = n, then Span B = R^n if and only if B is linearly
independent.

2.4 Applications of Systems of Linear Equations
Resistor Circuits in Electricity
The flow of electrical current in simple electrical circuits is described
by simple linear laws. In an electrical circuit, the current has a
direction and therefore has a sign attached to it; voltage is also a
signed quantity; resistance is a positive scalar. The laws for electrical
circuits are discussed next.

Ohm's Law: If an electrical current of magnitude I amperes is flowing
through a resistor with resistance R ohms, then the drop in the voltage
across the resistor is V = IR, measured in volts. The filament in a light
bulb and the heating element of an electrical heater are familiar examples
of electrical resistors. (See Figure 2.4.4.)

Figure 2.4.4   Ohm's law: the voltage across the resistor is V = IR.

Kirchhoff's Laws: (1) At a node or junction where several currents enter,
the signed sum of the currents entering the node is zero. (See
Figure 2.4.5.)

Figure 2.4.5   One of Kirchhoff's laws: I1 - I2 + I3 - I4 = 0.

(2) In a closed loop consisting of only resistors and an electromotive
force E (for example, E might be due to a battery), the sum of the voltage
drops across resistors is equal to E. (See Figure 2.4.6.)

Figure 2.4.6   Kirchhoff's other law: E = R1 I + R2 I.

Note that we adopt the convention of drawing an arrow to show the direction of I or
of E. These arrows can be assigned arbitrarily, and then the circuit laws will determine
whether the quantity has a positive or negative sign. It is important to be consistent in
using these assigned directions when you write down Kirchhoff's law for loops.
Sometimes it is necessary to determine the current flowing in each of the loops
of a network of loops, as shown in Figure 2.4.7. (If the sources of electromotive force
are distributed in various places, it will not be sufficient to deal with the problems as a
collection of resistors "in parallel and/or in series.") In such problems, it is convenient
to introduce the idea of the "current in the loop," which will be denoted i. The true
current across any circuit element is given as the algebraic (signed) sum of the "loop
currents" flowing through that circuit element. For example, in Figure 2.4.7, the circuit
consists of four loops, and a loop current has been indicated in each loop. Across the
resistor R1 in the figure, the true current is simply the loop current i1; however, across
the resistor R2, the true current (directed from top to bottom) is i1 -i2. Similarly, across

R4, the true current (from right to left) is i1 - i3.

Figure 2.4.7  A resistor circuit.

The reason for introducing these loop currents for our present problem is that there
are fewer loop currents than there are currents through individual elements. Moreover,
Kirchhoff's law at the nodes is automatically satisfied, so we do not have to write
nearly so many equations.
To determine the currents in the loops, it is necessary to use Kirchhoff's second
law with Ohm's law describing the voltage drops across the resistors. For Figure 2.4.7,
the resulting equations for each loop are:

The top-left loop:      R1 i1 + R2(i1 - i2) + R4(i1 - i3) = E1
The top-right loop:     R3 i2 + R5(i2 - i4) + R2(i2 - i1) = E2
The bottom-left loop:   R6 i3 + R4(i3 - i1) + R7(i3 - i4) = 0
The bottom-right loop:  R8 i4 + R7(i4 - i3) + R5(i4 - i2) = -E2

Multiply out and collect terms to display the equations as a system in the variables
i1, i2, i3, and i4. The augmented matrix of the system is

[ R1 + R2 + R4      -R2             -R4              0        |  E1 ]
[     -R2        R2 + R3 + R5        0             -R5        |  E2 ]
[     -R4            0          R4 + R6 + R7       -R7        |   0 ]
[      0            -R5            -R7         R5 + R7 + R8   | -E2 ]

To determine the loop currents, this augmented matrix must be reduced to row echelon
form. There is no particular purpose in finding an explicit solution for this general
problem, and in a linear algebra course, there is no particular value in plugging in
particular values for E1, E2, and the seven resistors. Instead, the point of this example
is to show that even for a fairly simple electrical circuit with the most basic elements
(resistors), the analysis requires you to be competent in dealing with large systems of
linear equations. Systematic, efficient methods of solution are essential.
Obviously, as the number of loops in the network grows, so does the number of
variables and so does the number of equations. For larger systems, it is important to
know whether you have the correct number of equations to determine the unknowns.
Thus, the theorems in Sections 2.1 and 2.2, the idea of rank, and the idea of linear
independence are all important.
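Once numerical values are chosen, solving for the loop currents is routine. The sketch below uses made-up resistance and source values (nothing here is from the text) to build the four-loop system and solve it with NumPy:

```python
import numpy as np

# Hypothetical values for a circuit like Figure 2.4.7: resistances in ohms,
# electromotive forces in volts. These numbers are illustrative only.
R1, R2, R3, R4, R5, R6, R7, R8 = 1.0, 2.0, 1.0, 2.0, 1.0, 2.0, 1.0, 2.0
E1, E2 = 10.0, 5.0

# Coefficient matrix obtained from the four loop equations after
# multiplying out and collecting terms.
A = np.array([
    [R1 + R2 + R4, -R2,           -R4,            0.0          ],
    [-R2,           R2 + R3 + R5,  0.0,          -R5           ],
    [-R4,           0.0,           R4 + R6 + R7, -R7           ],
    [0.0,          -R5,           -R7,            R5 + R7 + R8 ],
])
b = np.array([E1, E2, 0.0, -E2])

i = np.linalg.solve(A, b)   # loop currents i1..i4, in amperes
print(i)

# Check: the computed currents satisfy every loop equation.
assert np.allclose(A @ i, b)
```

Changing a resistor or source value only changes matrix entries; the structure of the computation is the same for any number of loops.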

The Moral of This Example  Linear algebra is an essential tool for dealing
with large systems of linear equations that may arise in dealing with circuits; really
interesting examples cannot be given without assuming greater knowledge of electrical
circuits and their components.


Planar Trusses
It is common to use trusses, such as the one shown in Figure 2.4.8, in construction.
For example, many bridges employ some variation of this design. When designing
such structures, it is necessary to determine the axial forces in each member of the
structure (that is, the force along the long axis of the member). To keep this simple,
only two-dimensional trusses with hinged joints will be considered; it will be assumed
that any displacements of the joints under loading are small enough to be negligible.

Fv
Figure 2.4.8

A planar truss. All triangles are equilateral, with sides of length

s.

The external loads (such as vehicles on a bridge, or wind or waves) are assumed
to be given. The reaction forces at the supports (shown as R1, R2, and RH in the figure)
are also external forces; these forces must have values such that the total external force
on the structure is zero. To get enough information to design a truss for a particular
application, we must determine the forces in the members under various loadings. To
illustrate the kinds of equations that arise, we shall consider only the very simple case
of a vertical force Fv acting at C and a horizontal force FH acting at E. Notice that
in this figure, the right-hand end of the truss is allowed to undergo small horizontal
displacements; it turns out that if a reaction force were applied here as well, the
equations would not uniquely determine all the unknown forces (the structure would be
"statically indeterminate"), and other considerations would have to be introduced.
The geometry of the truss is assumed given: here it will be assumed that the
triangles are equilateral, with all sides equal to s metres.
First consider the equations that indicate that the total force on the structure is zero
and that the total moment about some convenient point due to those forces is zero. Note
that the axial force along the members does not appear in this first set of equations.

Total horizontal force: RH + FH = 0
Total vertical force: R1 + R2 - Fv = 0
Moment about A: -Fv(s) + R2(2s) = 0, so R2 = Fv/2 and R1 = Fv - R2 = Fv/2

Next, we consider the system of equations obtained from the fact that the sum of
the forces at each joint must be zero. The moments are automatically zero because the
forces along the members act through the joints.
At a joint, each member at that joint exerts a force in the direction of the axis of the
member. It will be assumed that each member is in tension, so it is "pulling" away from
the joint; if it were compressed, it would be "pushing" at the joint. As indicated in the
figure, the force exerted on this joint A by the upper-left-hand member has magnitude
N1; with the conventions that forces to the right are positive and forces up are positive,
the force vector exerted by this member on the joint A is

N1 [ 1/2, √3/2 ]^T

On the joint B, the same member will exert a force

N1 [ -1/2, -√3/2 ]^T

If N1 is positive, the force is a tension force; if N1 is negative, there is compression.

For each of the joints A, B, C, D, and E, there are two equations, the first for the
sum of horizontal forces and the second for the sum of the vertical forces:

A1:   N1/2 + N2 + RH = 0
A2:   √3 N1/2 + R1 = 0
B1:  -N1/2 + N3/2 + N4 = 0
B2:  -√3 N1/2 - √3 N3/2 = 0
C1:  -N2 - N3/2 + N5/2 + N6 = 0
C2:   √3 N3/2 + √3 N5/2 = Fv
D1:  -N4 - N5/2 + N7/2 = 0
D2:  -√3 N5/2 - √3 N7/2 = 0
E1:  -N6 - N7/2 = -FH
E2:   √3 N7/2 + R2 = 0

Notice that if the reaction forces are treated as unknowns, this is a system of 10
equations in 10 unknowns. The geometry of the truss and its supports determines the
coefficient matrix of this system, and it could be shown that the system is necessarily
consistent with a unique solution. Notice also that if the horizontal force equations (A1,
B1, C1, D1, and E1) are added together, the sum is the total horizontal force equation,
and similarly the sum of the vertical force equations is the total vertical force equation.

A suitable combination of the equations would also produce the moment equation, so
if those three equations are solved as above, then the 10 joint equations will still be a
consistent system for the remaining 7 axial force variables.
For this particular truss, the system of equations is quite easy to solve, since some
of the variables are already leading variables. For example, if FH = 0, from A2 and
E2 it follows that N1 = N7 = -(1/√3)Fv; then B2, C2, and D2 give N3 = N5 = (1/√3)Fv;
then A1 and E1 imply that N2 = N6 = (1/(2√3))Fv, and B1 implies that N4 = -(1/√3)Fv.
Note that the members AC, BC, CD, and CE are under tension, and AB, BD, and DE
experience compression, which makes intuitive sense.
This is a particularly simple truss. In the real world, trusses often involve many
more members and use more complicated geometry; trusses may also be
three-dimensional. Therefore, the systems of equations that arise may be considerably larger
and more complicated. It is also sometimes essential to introduce considerations other
than the equations of equilibrium of forces in statics. To study these questions, you
need to know the basic facts of linear algebra.
It is worth noting that in the system of equations above, each of the quantities N1,
N2, ..., N7 appears with a non-zero coefficient in only some of the equations. Since
each member touches only two joints, this sort of special structure will often occur
in the equations that arise in the analysis of trusses. A deeper knowledge of linear
algebra is important in understanding how such special features of linear equations
may be exploited to produce efficient solution methods.


Linear Programming
Linear programming is a procedure for deciding the best way to allocate resources.
"Best" may mean fastest, most profitable, least expensive, or best by whatever criterion
is appropriate. For linear programming to be applicable, the problem must have some
special features. These will be illustrated by an example.
In a primitive economy, a man decides to earn a living by making hinges and gate
latches. He is able to obtain a supply of 25 kilograms a week of suitable metal at a
price of 2 cowrie shells per kilogram. His design requires 500 grams to make a hinge
and 250 grams to make a gate latch. With his primitive tools, he finds that he can make
a hinge in 1 hour, and it takes 3/4 hour to make a gate latch. He is willing to work

60 hours a week. The going price is 3 cowrie shells for a hinge and 2 cowrie shells
for a gate latch. How many hinges and how many gate latches should he produce each
week in order to maximize his net income?
To analyze the problem, let x be the number of hinges produced per week and
let y be the number of gate latches. Then the amount of metal used is (0.5x + 0.25y)
kilograms. Clearly, this must be less than or equal to 25 kilograms:

0.5x + 0.25y ≤ 25   or   2x + y ≤ 100

Such an inequality is called a constraint on x and y; it is a linear constraint because


the corresponding equation is linear.
Our producer also has a time constraint: the time taken making hinges plus the
time taken making gate latches cannot exceed 60 hours. Therefore,

1x + 0.75y ≤ 60   or   4x + 3y ≤ 240

Obviously, we also require x ≥ 0 and y ≥ 0.


The producer's net revenue for selling x hinges and y gate latches is
R(x, y) = 3x + 2y - 2(25) = 3x + 2y - 50 cowrie shells. This is called the objective
function for the problem. The mathematical problem can now be stated as follows:
find the point (x, y) that maximizes the objective function R(x, y) = 3x + 2y - 50,
subject to the linear constraints x ≥ 0, y ≥ 0, 2x + y ≤ 100, and 4x + 3y ≤ 240.

This is a linear programming problem because it asks for the maximum (or
minimum) of a linear objective function, subject to linear constraints. It is useful to
introduce one piece of special vocabulary: the feasible set for the problem is the set
of (x, y) satisfying all of the constraints. The solution procedure relies on the fact that
the feasible set for a linear programming problem has a special kind of shape. (See
Figure 2.4.9 for the feasible set for this particular problem.) Any line that meets the
feasible set either meets the set in a single line segment or only touches the set on its
boundary. In particular, because of the way the feasible set is defined in terms of linear
inequalities, it turns out that it is impossible for one line to meet the feasible set in two
separate pieces.
For example, the shaded region in Figure 2.4.10 cannot possibly be the feasible
set for a linear programming problem, because some lines meet the region in two line
segments. (This property of feasible sets is not difficult to prove, but since this is only
a brief illustration, the proof is omitted.)

Figure 2.4.9  The feasible region for the linear programming example. The grey lines
are level sets of the objective function R.

Now consider sets of the form R(x, y) = k, where k is a constant; these are called
the level sets of R. These sets obviously form a family of parallel lines, and some
of them are shown in Figure 2.4.9. Choose some point in the feasible set: check that
(20, 20) is such a point. Then the line

R(x, y) = R(20, 20) = 50

Figure 2.4.10  The shaded region cannot be the feasible region for a linear
programming problem because it meets a line in two segments.

meets the feasible set in a line segment. Note that (30, 30) is also a feasible point
(check), and the line

R(x, y) = R(30, 30) = 100

also meets the feasible set in a line segment. You can tell that (30,30) is not a boundary
point of the feasible set because it satisfies all the constraints with strict inequality;
boundary points must satisfy one of the constraints with equality.
As we move further from the origin into the first quadrant, R(x, y) increases. The
biggest possible value for R(x, y) will occur at a point where the set R(x, y) = k (for
some constant k to be determined) just touches the feasible set. For larger values of
R(x, y), the set R(x, y) = k does not meet the feasible set at all, so there are no feasible
points that give such bigger values of R. The touching must occur at a vertex, that is,
at an intersection point of two of the boundary lines. (In general, the line R(x, y) = k
for the largest possible constant could touch the feasible set along a line segment that
makes up part of the boundary. But such a line segment has two vertices as endpoints,
so it is correct to say that the touching occurs at a vertex.)
For this particular problem, three of the vertices of the feasible set are easily found
to be (0, 0), (50, 0), and (0, 80); the fourth vertex is the solution of the system of
equations

2x + y = 100
4x + 3y = 240,

which is (30, 40). Now compare the values of R(x, y) at all of these vertices:
R(0, 0) = -50, R(50, 0) = 100, R(0, 80) = 110, and R(30, 40) = 120. The vertex (30, 40)
gives the best net revenue, so the producer should make 30 hinges and 40 gate latches
each week.
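The vertex argument translates directly into a small brute-force program: intersect each pair of boundary lines, discard infeasible intersection points, and evaluate R at the surviving vertices. This is only a sketch for two variables; it does not scale to many variables, which is exactly why organized methods are needed:

```python
from itertools import combinations

# Constraints written as a*x + b*y <= c (including x >= 0 and y >= 0).
cons = [
    (2, 1, 100),    # 2x + y <= 100   (metal)
    (4, 3, 240),    # 4x + 3y <= 240  (time)
    (-1, 0, 0),     # -x <= 0, i.e. x >= 0
    (0, -1, 0),     # -y <= 0, i.e. y >= 0
]

def feasible(x, y, tol=1e-9):
    return all(a * x + b * y <= c + tol for a, b, c in cons)

# Candidate vertices: intersections of each pair of boundary lines.
vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue                      # parallel boundary lines
    x = (c1 * b2 - c2 * b1) / det     # Cramer's rule
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        vertices.append((x, y))

# The optimum of R(x, y) = 3x + 2y - 50 occurs at a vertex.
best = max(vertices, key=lambda v: 3 * v[0] + 2 * v[1] - 50)
print(best, 3 * best[0] + 2 * best[1] - 50)   # optimal vertex (30, 40), revenue 120
```

Note that the intersection (60, 0) of the lines y = 0 and 4x + 3y = 240 is computed but rejected by the feasibility test, matching the remark in the text.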

General Remarks  Problems involving allocation of resources are common in
business and government. Problems such as scheduling ship transits through a canal
can be analyzed this way. Oil companies must make choices about the grades of crude
oil to use in their refineries, and about the amounts of various refined products to
produce. Such problems often involve tens or even hundreds of variables, and similar
numbers of constraints. The boundaries of the feasible set are hyperplanes in some
ℝⁿ, where n is large. Although the basic principles of the solution method remain the
same as in this example (look for the best vertex), the problem is much more
complicated because there are so many vertices. In fact, it is a challenge to find vertices;
simply solving all possible combinations of systems of boundary equations is not good
enough. Note in the simple two-dimensional example that the point (60, 0) is the
intersection point of two of the lines (y = 0 and 4x + 3y = 240) that make up the boundary,
but it is not a vertex of the feasible region because it fails to satisfy the constraint
2x + y ≤ 100. For higher-dimension problems, drawing pictures is not good enough,
and an organized approach is called for.
The standard method for solving linear programming problems has been the
simplex method, which finds an initial vertex and then prescribes a method (very similar
to row reduction) for moving to another vertex, improving the value of the objective
function with each step.
Again, it has been possible to hint at major application areas for linear algebra,
but to pursue one of these would require the development of specialized mathematical
tools and information from specialized disciplines.


PROBLEMS 2.4
Practice Problems

A1 Determine the system of equations for the reaction forces and axial forces in
members for the truss shown in the diagram.

A2 Determine the augmented matrix of the system of linear equations, and determine
the loop currents indicated in the diagram.

A3 Find the maximum value of the objective function x + y subject to the constraints
0 ≤ x ≤ 100, 0 ≤ y ≤ 80, and 4x + 5y ≤ 600. Sketch the feasible region.
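After you have sketched the region for A3 and identified its vertices by hand, the same vertex-enumeration idea used earlier can serve as a numerical check of your answer (a sketch; do the problem on paper first):

```python
from itertools import combinations

# Constraints of Problem A3 in the form a*x + b*y <= c.
cons = [
    (1, 0, 100), (-1, 0, 0),     # 0 <= x <= 100
    (0, 1, 80),  (0, -1, 0),     # 0 <= y <= 80
    (4, 5, 600),                 # 4x + 5y <= 600
]

def feasible(x, y, tol=1e-9):
    return all(a * x + b * y <= c + tol for a, b, c in cons)

verts = []
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if det:
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if feasible(x, y):
            verts.append((x, y))

# Maximum of the objective x + y over the vertices of the feasible region.
print(max(x + y for x, y in verts))
```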

CHAPTER REVIEW
Suggestions for Student Review
1 Explain why elimination works as a method for solving systems of linear
equations. (Section 2.1)

2 When you row reduce an augmented matrix [A | b] to solve a system of linear
equations, why can you stop when the matrix is in row echelon form? How do you
use this form to decide if the system is consistent and if it has a unique solution?
(Section 2.1)

3 How is reduced row echelon form different from row echelon form? (Section 2.2)

4 (a) Write the augmented matrix of a consistent non-homogeneous system of three
linear equations in four variables, such that the coefficient matrix is in row echelon
form (but not reduced row echelon form) and of rank 3.
(b) Determine the general solution of your system.
(c) Perform the following sequence of elementary row operations on your
augmented matrix:
(i) Interchange the first and second rows.
(ii) Add the (new) second row to the first row.
(iii) Add twice the second row to the third row.
(iv) Add the third row to the second.
(d) Regard the result of (c) as the augmented matrix of a system and solve that
system directly. (Don't just use the reverse operations in (c).) Check that your
general solution agrees with (b).

5 For homogeneous systems, how can you use the row echelon form to determine
whether there are non-trivial solutions and, if there are, how many parameters
there are in the general solution? Is there any case where we know (by inspection)
that a homogeneous system has non-trivial solutions? (Section 2.2)

6 Write a short explanation of how you use information about consistency of
systems and uniqueness of solutions in testing for linear independence and in
determining whether a vector x belongs to a given subspace of ℝⁿ. (Section 2.3)

7 Explain how to determine whether a set of vectors {v1, ..., vk} in ℝⁿ is both
linearly independent and a spanning set for a subspace S of ℝⁿ. What form must
the reduced row echelon form of the coefficient matrix of the vector equation
t1 v1 + ... + tk vk = b have if the set is a linearly independent spanning set?
(Section 2.3)

Chapter Quiz

E1 Determine whether the following system is consistent by row reducing its
augmented matrix:

      x2 - 2x3 + x4 = 2
2x1 - 2x2 + 4x3 - x4 = 10
 x1 -  x2 +  x3      = 2

If it is consistent, determine the general solution.

E2 Find a matrix in reduced row echelon form that is row equivalent to the given
matrix A. Show your steps clearly.

E3 The matrix A, whose entries involve unspecified constants a, b, and c, is the
augmented matrix of a system of linear equations.
(a) Determine all values of (a, b, c) such that the system is consistent and all values
of (a, b, c) such that the system is inconsistent.
(b) Determine all values of (a, b, c) such that the system has a unique solution.

E4 (a) Determine all vectors in ℝ⁵ that are orthogonal to each of the three given
vectors.
(b) Let u, v, and w be three vectors in ℝ⁵. Explain why there must be non-zero
vectors orthogonal to all of u, v, and w.

E5 Determine whether the given set of three vectors is a basis for ℝ³. Show your
steps clearly.

E6 Indicate whether the following statements are true or false. In each case, justify
your answer with a brief explanation or counterexample.
(a) A consistent system must have a unique solution.
(b) If there are more equations than variables in a non-homogeneous system of
linear equations, then the system must be inconsistent.
(c) Some homogeneous systems of linear equations have unique solutions.
(d) If there are more variables than equations in a system of linear equations, then
the system cannot have a unique solution.


Further Problems
These problems are intended to be challenging. They may not be of interest to all
students.

F1 The purpose of this exercise is to explore the relationship between the general
solution of the system [A | b] and the general solution of the corresponding
homogeneous system [A | 0]. This relation will be studied with different tools in
Section 3.4. We begin by considering some examples where the coefficient matrix is
in reduced row echelon form.
(a) Let

R = [ 1  0  r13
      0  1  r23 ]

Show that the general solution of the homogeneous system [R | 0] is x = t v,
where v is expressed in terms of r13 and r23. Then show that the general solution
of the non-homogeneous system [R | c] is x = p + t v, where p is expressed in
terms of the components of c, and xH = t v is the general solution of the
corresponding homogeneous system.
(b) Let

R = [ 1  r12  0  0  r15
      0   0   1  0  r25
      0   0   0  1  r35 ]

Show that the general solution of the homogeneous system [R | 0] is
x = t1 v1 + t2 v2, where each of v1 and v2 can be expressed in terms of the entries
rij. Express each vi explicitly. Then show that the general solution of [R | c] can
be written as x = p + t1 v1 + t2 v2, where p is expressed in terms of the
components of c, and xH is the general solution of the corresponding homogeneous
system.
The pattern should now be apparent; if it is not, try again with another special
case of R. In the next part of this exercise, create an effective labelling system so
that you can clearly indicate what you want to say.
(c) Let R be a matrix in reduced row echelon form, with m rows, n columns, and
rank k. Show that the general solution of the homogeneous system [R | 0] is
x = t1 v1 + ... + t(n-k) v(n-k), where each vi is expressed in terms of the entries
in R. Suppose that the system [R | c] is consistent and show that the general
solution is x = p + t1 v1 + ... + t(n-k) v(n-k), where p is expressed in terms of
the components of c, and xH is the general solution of the corresponding
homogeneous system.
(d) Use the result of (c) to discuss the relationship between the general solution of
the consistent system [A | b] and the corresponding homogeneous system [A | 0].

F2 This problem involves comparing the efficiency of row reduction procedures.
When we use a computer to solve large systems of linear equations, we want to
keep the number of arithmetic operations as small as possible. This reduces the time
taken for calculations, which is important in many industrial and commercial
applications. It also tends to improve accuracy: every arithmetic operation is an
opportunity to lose accuracy through truncation or round-off, subtraction of two nearly equal
numbers, and so on.
We want to count the number of multiplications and/or divisions in solving a
system by elimination. We focus on these operations because they are more
time-consuming than addition or subtraction, and the number of additions is approximately
the same as the number of multiplications. We make certain assumptions: the system
[A | b] has n equations and n variables, and it is consistent with a unique solution.
(Equivalently, A has n rows, n columns, and rank n.) We assume for simplicity that no
row interchanges are required. (If row interchanges are required, they can be handled
by renaming "addresses" in the computer.)
(a) How many multiplications and divisions are required to reduce [A | b] to a form
[C | d] such that C is in row echelon form?
(1) To carry out the obvious first elementary row operation, compute a21/a11
(one division). Since we know what will happen in the first column, we do
not multiply a11 by this factor, but we must multiply every other element of
the first row of [A | b] by this factor and subtract the product from the
corresponding element of the second row (n multiplications).
(2) Obtain zeros in the remaining entries in the first column, then move to the
(n - 1) by n block consisting of the reduced version of [A | b] with the first
row and first column deleted.
(3) Note that Σ i = n(n + 1)/2 and Σ i² = n(n + 1)(2n + 1)/6, where each sum
runs from i = 1 to n.
(4) The biggest term in your answer should be n³/3. Note that n³ is much
greater than n² when n is large.
(b) Determine how many multiplications and divisions are required to solve the
system with the augmented matrix [C | d] of part (a) by back-substitution.
(c) Show that the number of multiplications and divisions required to row reduce
[R | c] to reduced row echelon form is the same as the number used in solving
the system by back-substitution. Conclude that the Gauss-Jordan procedure is
as efficient as Gaussian elimination with back-substitution. For large n, the
number of multiplications and divisions is roughly n³/3.
(d) Suppose that we do a "clumsy" Gauss-Jordan procedure. We do not first obtain
row echelon form; instead we obtain zeros in all entries above and below a pivot
before moving on to the next column. Show that the number of multiplications
and divisions required in this procedure is roughly n³/2, so that this procedure
requires approximately 50% more operations than the more efficient procedures
when n is large.
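The counting argument in F2 can be checked empirically. The sketch below counts multiplications and divisions for the elimination scheme described in steps (1) and (2), plus back-substitution, and compares the total against the leading term n³/3:

```python
def forward_elimination_ops(n):
    """Multiplications/divisions to reduce an n x (n+1) augmented matrix
    to row echelon form, with no row interchanges."""
    ops = 0
    for p in range(n):                 # pivot column p
        rows_below = n - p - 1
        # One division per row below the pivot (the factor a_ip / a_pp) ...
        ops += rows_below
        # ... then, for each such row, multiply the factor into the remaining
        # n - p - 1 coefficient entries plus the right-hand-side entry.
        ops += rows_below * (n - p)
    return ops

def back_substitution_ops(n):
    """Multiplications/divisions for back-substitution on [C | d]."""
    # Solving for x_i uses (n - 1 - i) multiplications and 1 division.
    return sum((n - 1 - i) + 1 for i in range(n))

for n in (10, 100, 1000):
    total = forward_elimination_ops(n) + back_substitution_ops(n)
    print(n, total, total / (n**3 / 3))   # the ratio approaches 1 as n grows
```

The printed ratios drop toward 1, confirming that the n³/3 term dominates for large n.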

MyMathLab

Go to MyMathLab at www.mymathlab.com. You can practise many of this chapter's exercises as often as you
want. The guided solutions help you find an answer step by step. You'll find a personalized study plan available to
you, too!

CHAPTER 3

Matrices, Linear
Mappings, and Inverses
CHAPTER OUTLINE
3.1

Operations on Matrices

3.2

Matrix Mappings and Linear Mappings

3.3

Geometrical Transformations

3.4

Special Subspaces for Systems and Mappings: Rank Theorem

3.5

Inverse Matrices and Inverse Mappings

3.6

Elementary Matrices

3.7

LU-Decomposition

In many applications of linear algebra, we use vectors in ℝⁿ to represent quantities,
such as forces, and then use the tools of Chapters 1 and 2 to solve various problems.
However, there are many times when it is useful to translate a problem into other linear
algebra objects. In this chapter, we look at two of these fundamental objects: matrices
and linear mappings. We now explore the properties of these objects and show how
they are tied together with the material from Chapters 1 and 2.

3.1 Operations on Matrices


We used matrices essentially as bookkeeping devices in Chapter 2. Matrices also
possess interesting algebraic properties, so they have wider and more powerful
applications than is suggested by their use in solving systems of equations. We now look at
some of these algebraic properties.

Equality, Addition, and Scalar Multiplication of Matrices

We have seen that matrices are useful in solving systems of linear equations. However,
we shall see that matrices show up in different kinds of problems, and it is important
to be able to think of matrices as "things" that are worth studying and playing with,
and these things may have no connection with a system of equations. A matrix is a
rectangular array of numbers. A typical matrix has the form

    [ a11  a12  ...  a1j  ...  a1n ]
    [ a21  a22  ...  a2j  ...  a2n ]
    [  .    .         .         .  ]
A = [ ai1  ai2  ...  aij  ...  ain ]
    [  .    .         .         .  ]
    [ am1  am2  ...  amj  ...  amn ]


We say that A is an m x n matrix when A has m rows and n columns. Two matrices A
and B are equal if and only if they have the same size and their corresponding entries
are equal. That is, aij = bij for 1 ≤ i ≤ m, 1 ≤ j ≤ n.

For now, we will consider only matrices whose entries aiJ are real numbers. We
will look at matrices whose entries are complex numbers in Chapter 9.

Remark
We sometimes denote the ij-th entry of a matrix A by (A)ij.

This may seem pointless for a single matrix, but it is useful when dealing with multiple
matrices.
Several special types of matrices arise frequently in linear algebra.

Definition  Square Matrix
An n x n matrix (where the number of rows of the matrix is equal to the number of
columns) is called a square matrix.

Definition  Upper Triangular / Lower Triangular
A square matrix U is said to be upper triangular if the entries beneath the main
diagonal are all zero; that is, uij = 0 whenever i > j. A square matrix L is said to
be lower triangular if the entries above the main diagonal are all zero; that is,
lij = 0 whenever i < j.

EXAMPLE 1
The matrices

[ 3  1 ]        [ 1  -2  4 ]
[ 0  2 ]  and   [ 0   3  5 ]
                [ 0   0  6 ]

are upper triangular, while

[ 2  0 ]        [ 1   0  0 ]
[ 5  1 ]  and   [ -2  3  0 ]
                [ 4  -1  6 ]

are lower triangular. The matrix

[ 7  0 ]
[ 0  2 ]

is both upper and lower triangular.

Definition  Diagonal Matrix
A matrix D that is both upper and lower triangular is called a diagonal matrix; that
is, dij = 0 for all i ≠ j. We denote an n x n diagonal matrix by diag(d11, d22, ..., dnn).

EXAMPLE 2
We denote the diagonal matrix

[ √3   0 ]
[  0  -2 ]

by diag(√3, -2), while diag(0, 3, 1) is the diagonal matrix

[ 0  0  0 ]
[ 0  3  0 ]
[ 0  0  1 ]

Also, we can think of a vector in ℝⁿ as an n x 1 matrix, called a column matrix.
For this reason, it makes sense to define operations on matrices to match those with
vectors in ℝⁿ.

Definition  Addition and Scalar Multiplication of Matrices
Let A and B be m x n matrices and t ∈ ℝ a scalar. We define addition of matrices by

(A + B)ij = (A)ij + (B)ij

and the scalar multiplication of matrices by

(tA)ij = t(A)ij

EXAMPLE 3
Perform the following operation.

5 [ 2  4 ]   [ 5(2)  5(4) ]   [ 10  20 ]
  [ 3  1 ] = [ 5(3)  5(1) ] = [ 15   5 ]

Note that matrix addition is defined only if the matrices are the same size.
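These entrywise definitions are easy to confirm numerically. A short sketch using NumPy (the matrix B here is illustrative, not from the examples above):

```python
import numpy as np

# Addition and scalar multiplication act entry by entry,
# exactly as in the definitions above.
A = np.array([[2, 4], [3, 1]])
B = np.array([[1, 0], [-2, 5]])

assert np.array_equal(A + B, np.array([[3, 4], [1, 6]]))      # (A + B)ij = Aij + Bij
assert np.array_equal(5 * A, np.array([[10, 20], [15, 5]]))   # (tA)ij = t * Aij

# Adding matrices of different sizes is undefined; NumPy raises an error
# (unless broadcasting applies, which is a NumPy extension, not matrix algebra).
try:
    A + np.zeros((3, 3))
except ValueError:
    print("sizes must match")
```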

Properties of Matrix Addition and Multiplication by Scalars  We
now look at the properties of addition and scalar multiplication of matrices. It is very
important to notice that these are the exact same ten properties discussed in Section
1.2 for addition and scalar multiplication of vectors in ℝⁿ.

Theorem 1
Let A, B, C be m x n matrices and let s, t be real scalars. Then

(1) A + B is an m x n matrix (closed under addition)
(2) A + B = B + A (addition is commutative)
(3) (A + B) + C = A + (B + C) (addition is associative)
(4) There exists a matrix, denoted by Om,n, such that A + Om,n = A; in particular,
Om,n is the m x n matrix with all entries zero and is called the zero matrix
(zero vector)
(5) For each matrix A, there exists an m x n matrix (-A), with the property that
A + (-A) = Om,n; in particular, (-A) is defined by (-A)ij = -(A)ij (additive
inverses)
(6) sA is an m x n matrix (closed under scalar multiplication)
(7) s(tA) = (st)A (scalar multiplication is associative)
(8) (s + t)A = sA + tA (a distributive law)
(9) s(A + B) = sA + sB (another distributive law)
(10) 1A = A (scalar multiplicative identity)

These properties follow easily from the definitions of addition and multiplication
by scalars. The proofs are left to the reader.
Since we can now compute linear combinations of matrices, it makes sense to look
at the set of all possible linear combinations of a set of matrices. And, as with vectors
in ℝⁿ, this goes hand-in-hand with the concept of linear independence. We mimic the
definitions we had for vectors in ℝⁿ.

Definition

Let :B = {A 1,

, Ak} be a set of

m x n matrices. Then the

span of :B is defined as

Span

Definition
Linearly Independent

Let :B = {A1, ... , Ae} be a set of m x n matrices. Then :B is said to be linearly inde
pendent if the only solution to the equation

Linearly Dependent

t1A1 +
is t1 =

EXAMPLE4

+ teAe = Om,n

=te =0; otherwise, :Bis said to be linearly dependent.

Determine if

[ ]

is in the span of

Solution: We want to find if there are t1, t2, t3, and t4 such that

Section 3.1

Operations on Matrices

119

EXAMPLE4

Since two matrices are equal if and only if their corresponding entries are equal, this

(continued)

gives the system of linear equations


ti + t1

ti + t3 + t4

t3

t1 + t4

Row reducing the augmented matrix of this system, we see that the system is consistent. Therefore, [ ] is in the span of B. In particular, we have t1 = -2, t2 = 3, t3 = 3, and t4 = 1.

EXAMPLE 5

Determine if the set B = {[ ], [ ], [ ]} is linearly dependent or linearly independent.

Solution: We consider the equation t1A1 + t2A2 + t3A3 = Om,n, where A1, A2, A3 denote the three matrices in B. Row reducing the coefficient matrix of the corresponding homogeneous system shows that the only solution is t1 = t2 = t3 = 0, so B is linearly independent.

EXERCISE 1

Determine if B = {[ ], [ ], [ ], [ ]} is linearly dependent or linearly independent. Is X = [ ] in the span of B?

120 Chapter 3 Matrices, Linear Mappings, and Inverses

EXERCISE 2

Consider B = {[ ], [ ], [ ], [ ]}. Prove that B is linearly independent and show that Span B is the set of all 2 x 2 matrices. Compare B with the standard basis for R^4.

The Transpose of a Matrix

Definition
Transpose

Let A be an m x n matrix. Then the transpose of A is the n x m matrix, denoted A^T, whose ij-th entry is the ji-th entry of A. That is,

    (A^T)ij = (A)ji

EXAMPLE 6

Determine the transpose of A = [ ] and B = [ ].

Solution:

    A^T = [ ]    and    B^T = [ ]

Observe that taking the transpose of a matrix turns its rows into columns and its columns into rows.
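Operationally, transposing just swaps the row and column indices. A minimal pure-Python sketch (the matrix entries below are illustrative, not taken from Example 6):

```python
def transpose(A):
    """Return A^T for a matrix stored as a list of rows: (A^T)[j][i] = A[i][j]."""
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[1, 6, -1],
     [3, 5, 4]]          # a 2 x 3 matrix
print(transpose(A))      # [[1, 3], [6, 5], [-1, 4]] -- rows became columns
```

Note that applying `transpose` twice returns the original matrix, which is property (1) of Theorem 2 below.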

Some Properties of the Transpose

How does the operation of transposition combine with addition and scalar multiplication?

Theorem 2

For any m x n matrices A and B and scalar s ∈ R, we have

(1) (A^T)^T = A
(2) (A + B)^T = A^T + B^T
(3) (sA)^T = sA^T

Proof:
2. ((A + B)^T)ij = (A + B)ji = (A)ji + (B)ji = (A^T)ij + (B^T)ij = (A^T + B^T)ij.
3. ((sA)^T)ij = (sA)ji = s(A)ji = s(A^T)ij = (sA^T)ij.

EXERCISE 3

Let A = [ ]. Verify that (A^T)^T = A and (3A)^T = 3A^T.

Remark
Since we always represent a vector in R^n as a column matrix, to represent a row of a matrix as a vector, we will write x^T. For now, this will be our main use of the transpose; however, it will become much more important later in the book.

An Introduction to Matrix Multiplication

The operations so far are very natural and easy. Multiplication of two matrices is more complicated, but a simple example illustrates that there is one useful natural definition.

Suppose that we wish to change variables; that is, we wish to work with new variables y1 and y2 that are defined in terms of the original variables x1 and x2 by the equations

    y1 = a11 x1 + a12 x2
    y2 = a21 x1 + a22 x2

This is a system of linear equations like those in Chapter 2. It is convenient to write these equations in matrix form. Let

    y = [y1],    x = [x1],
        [y2]         [x2]

and let A = [a11 a12; a21 a22] be the coefficient matrix. Then the change of variables equations can be written in the form y = Ax, provided that we define the product of A and x according to the following rule:

    Ax = [a11 x1 + a12 x2]     (3.1)
         [a21 x1 + a22 x2]
It is instructive to rewrite the entries on the right-hand side as dot products. Let

    a1 = [a11]    and    a2 = [a21]
         [a12]                [a22]

so that a1^T = [a11 a12] and a2^T = [a21 a22] are the rows of A. Then the equation becomes

    Ax = [a1 · x]
         [a2 · x]

Thus, in order for the right-hand side of the original equations to be represented correctly by the matrix product Ax, the entry in the first row of Ax must be the dot product of the first row of A (as a column vector) with the vector x; the entry in the second row must be the dot product of the second row of A (as a column vector) with x.

Suppose there is a second change of variables from y to z:

    z1 = b11 y1 + b12 y2
    z2 = b21 y1 + b22 y2

In matrix form, this is written z = By. Now suppose that these changes are performed one after the other. The values for y1 and y2 from the first change of variables are substituted into the second pair of equations:

    z1 = b11(a11 x1 + a12 x2) + b12(a21 x1 + a22 x2)
    z2 = b21(a11 x1 + a12 x2) + b22(a21 x1 + a22 x2)

After simplification, this can be written as

    z = [b11 a11 + b12 a21    b11 a12 + b12 a22] x
        [b21 a11 + b22 a21    b21 a12 + b22 a22]

We want this to be equivalent to z = By = B(Ax) = (BA)x. Therefore, the product BA must be

    BA = [b11 a11 + b12 a21    b11 a12 + b12 a22]     (3.2)
         [b21 a11 + b22 a21    b21 a12 + b22 a22]

Thus, the product BA must be defined by the following rules:

(BA)11 is the dot product of the first row of B and the first column of A.
(BA)12 is the dot product of the first row of B and the second column of A.
(BA)21 is the dot product of the second row of B and the first column of A.
(BA)22 is the dot product of the second row of B and the second column of A.

EXAMPLE 7

    [2 3][ 5 1]   [2(5) + 3(-2)   2(1) + 3(7)]   [ 4  23]
    [4 1][-2 7] = [4(5) + 1(-2)   4(1) + 1(7)] = [18  11]
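The four rules above are the 2 x 2 case of a general row-times-column recipe, which translates directly into code. A short pure-Python sketch, checked against the product computed in Example 7:

```python
def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(ui * vi for ui, vi in zip(u, v))

def matmul(B, A):
    """(BA)ij is the dot product of the i-th row of B with the j-th column of A."""
    cols_of_A = list(zip(*A))                  # columns of A, as tuples
    return [[dot(row, col) for col in cols_of_A] for row in B]

B = [[2, 3],
     [4, 1]]
A = [[5, 1],
     [-2, 7]]
print(matmul(B, A))   # [[4, 23], [18, 11]], matching Example 7
```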

We now want to generalize matrix multiplication. It will be convenient to use b_i^T to represent the i-th row of B and a_j to represent the j-th column of A. Observe from our work above that we want the ij-th entry of BA to be the dot product of the i-th row of B and the j-th column of A. However, for this to be defined, b_i must have the same number of entries as a_j. Hence, the number of entries in the rows of the matrix B (that is, the number of columns of B) must be equal to the number of entries in the columns of A (that is, the number of rows of A). We can now make a precise definition.

Definition
Matrix Multiplication

Let B be an m x n matrix with rows b_1^T, ..., b_m^T and A be an n x p matrix with columns a_1, ..., a_p. Then, we define BA to be the matrix whose ij-th entry is

    (BA)ij = b_i · a_j

Remark
If B is an m x n matrix and A is a p x q matrix, then BA is defined only if n = p. Moreover, if n = p, then the resulting matrix is m x q.

More simply stated, multiplication of two matrices can be performed only if the number of columns in the first matrix is equal to the number of rows in the second.

EXAMPLE 8

Perform the following operations.

(a) [ ][ ]        (b) [ ][ ]

Solution: (a) [ ]        (b) [ ]

EXAMPLE 9

The product [ ][ ] is not defined because the first matrix has two columns and the second matrix has three rows.

EXERCISE 4

Let A = [ ] and B = [ ]. Calculate the following or explain why they are not defined.

(a) AB        (b) BA        (c) A^T A        (d) BB^T

EXAMPLE 10

Let A = [1 2 3]^T and B = [6 5 4]^T. Compute A^T B.

Solution: A^T B = [1 2 3][6 5 4]^T = [1(6) + 2(5) + 3(4)] = [28]

Observe that if we let x = (1, 2, 3) and y = (6, 5, 4) be vectors in R^3, then x · y = 28.

This matches the result in Example 10. This result should not be surprising since we have defined matrix multiplication in terms of the dot product. More generally, for any x, y ∈ R^n, we have

    x · y = x^T y

where we interpret the 1 x 1 matrix on the right-hand side as a scalar. This formula will be used frequently later in the book.

Defining matrix multiplication with the dot product fits our view that the rows of the coefficient matrix of a system of linear equations are the coefficients from each equation. We now look at how we could define matrix multiplication by using our alternate view of the coefficient matrix; in that case, the columns of the coefficient matrix are the coefficients of each variable.
Observe that we can write equations (3.1) as

    Ax = [a11 x1 + a12 x2]  =  x1 [a11]  +  x2 [a12]
         [a21 x1 + a22 x2]        [a21]        [a22]

That is, we can view Ax as giving a linear combination of the columns of A. So, for an m x n matrix A with columns a1, ..., an and vector x = [x1; ...; xn] ∈ R^n, we have

    Ax = x1 a1 + ... + xn an

Using this, observe that (3.2) can be written as

    BA = [Ba1  Ba2]

Hence, in general, if A is an m x n matrix and B is a p x m matrix, then BA is the p x n matrix given by

    BA = [Ba1  Ba2  ...  Ban]     (3.3)

Both interpretations of matrix multiplication will be very useful, so it is important to know and understand both of them.
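The column interpretation can also be sketched in a few lines of Python; here Ax is accumulated one column of A at a time (the entries are illustrative):

```python
def matvec_by_columns(A, x):
    """Compute Ax as x1*a1 + ... + xn*an, a linear combination of A's columns."""
    m, n = len(A), len(A[0])
    result = [0] * m
    for j in range(n):          # walk the columns of A
        for i in range(m):
            result[i] += x[j] * A[i][j]
    return result

A = [[1, 2],
     [3, 4],
     [5, 6]]                    # 3 x 2
x = [10, -1]
print(matvec_by_columns(A, x))  # [8, 26, 44]
```

This computes exactly the same vector as the row-times-column rule; only the order of the arithmetic differs.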

Remark
We now see that linear combinations of vectors (and hence concepts such as spanning and linear independence), solving systems of linear equations, and matrix multiplication are all closely tied together. We will continue to see these connections later in this chapter and throughout the book.

Summation Notation and Matrix Multiplication

Some calculations with matrix products are better described in terms of summation notation:

    Σ_{k=1}^{n} a_k = a1 + a2 + ... + an

Let A be an m x n matrix with i-th row a_i^T = [a_i1 ... a_in] and let B be an n x p matrix with j-th column b_j = [b_1j; ...; b_nj]. Then the ij-th entry of AB is

    (AB)ij = a_i · b_j = a_i1 b_1j + a_i2 b_2j + ... + a_in b_nj = Σ_{k=1}^{n} (A)ik (B)kj

We can use this notation to help prove some properties of matrix multiplication.

Theorem 3

If A, B, and C are matrices of the correct sizes so that the required products are defined, and t ∈ R, then

(1) A(B + C) = AB + AC
(2) (A + B)C = AC + BC
(3) t(AB) = (tA)B = A(tB)
(4) A(BC) = (AB)C
(5) (AB)^T = B^T A^T

Each of these properties follows easily from the definition of matrix multiplication and properties of summation notation. However, the proofs are not particularly illuminating and so are omitted.

Important Facts

The matrix product is not commutative: that is, in general, AB ≠ BA. In fact, if BA is defined, it is not necessarily true that AB is even defined. For example, if B is 4 x 2 and A is 2 x 3, then BA is defined, but AB is not. However, even if both AB and BA are defined, they are usually not equal. AB = BA is true only in very special circumstances.

EXAMPLE 11

Show that if A = [ ] and B = [ ], then AB ≠ BA.

Solution: Computing the two products gives AB = [ ] and BA = [ ]. Therefore, AB ≠ BA.
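This failure of commutativity is easy to see numerically. A small sketch with illustrative 2 x 2 matrices (not the ones from Example 11):

```python
def matmul(B, A):
    """Standard matrix product via the row-times-column rule."""
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

# Two matrices whose products are both defined yet differ.
A = [[1, 2], [0, 1]]
B = [[1, 0], [3, 1]]
print(matmul(A, B))   # [[7, 2], [3, 1]]
print(matmul(B, A))   # [[1, 2], [3, 7]]
```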

The cancellation law is almost never valid for matrix multiplication: thus, if AB = AC, then we cannot guarantee that B = C.

EXAMPLE 12

Let A = [ ], B = [ ], and C = [ ]. Then,

    AB = [ ] = AC

so AB = AC but B ≠ C.

Remark
The fact that we do not have the cancellation law for matrix multiplication comes from the fact that we do not have division for matrices.

We must distinguish carefully between a general cancellation law and the following theorem, which we will use many times.

Theorem 4

If A and B are m x n matrices such that Ax = Bx for every x ∈ R^n, then A = B.

Note that it is the assumption that equality holds for every x that distinguishes this from a cancellation law.

Proof: You are asked to prove Theorem 4, with hints, in Problem D1.

Identity Matrix

We have seen that the zero matrix Om,n is the additive identity for addition of m x n matrices. However, since we also have multiplication of matrices, it is important to determine if we have a multiplicative identity. If we do, we need to determine what the multiplicative identity is. First, we observe that for there to exist a matrix A and a matrix I such that AI = A = IA, both A and I must be n x n matrices. The multiplicative identity I is the n x n matrix that has this property for all n x n matrices A.

To find how to define I, we begin with a simple case. Let A = [a b; c d]. We want to find a matrix I = [e f; g h] such that AI = A. Thus, we must have

    [a b]   [a b][e f]   [ae + bg   af + bh]
    [c d] = [c d][g h] = [ce + dg   cf + dh]

By matrix multiplication, we get a = ae + bg, b = af + bh, c = ce + dg, and d = cf + dh. Although this system of equations is not linear, it is still easy to solve. We find that we must have e = 1 = h and f = g = 0. Thus,

    I = [1 0] = diag(1, 1)
        [0 1]

It is easy to verify that I also satisfies IA = A. Hence, I is the multiplicative identity for 2 x 2 matrices. We now extend this definition to the n x n case.

Definition
Identity Matrix

The n x n matrix I = diag(1, 1, ..., 1) is called the identity matrix.

EXAMPLE 13

The 3 x 3 identity matrix is

    I = diag(1, 1, 1) = [1 0 0]
                        [0 1 0]
                        [0 0 1]

The 4 x 4 identity matrix is

    I = diag(1, 1, 1, 1) = [1 0 0 0]
                           [0 1 0 0]
                           [0 0 1 0]
                           [0 0 0 1]

Remarks
1. In general, the size of I (the value of n) is clear from context. However, in some cases, we stress the size of the identity matrix by denoting it by In. For example, I2 is the 2 x 2 identity matrix, and Im is the m x m identity matrix.

2. The columns of the identity matrix should seem familiar. If {e1, ..., en} is the standard basis for R^n, then I = [e1 e2 ... en].

Theorem 5

If A is any m x n matrix, then Im A = A = A In.

You are asked to prove this theorem in Problem D2. Note that it immediately implies that In is the multiplicative identity for the set of n x n matrices.
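Theorem 5 is straightforward to spot-check in code; a sketch with an arbitrary 2 x 3 matrix:

```python
def identity(n):
    """The n x n identity matrix diag(1, 1, ..., 1)."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(B, A):
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

A = [[2, -1, 3],
     [0, 4, 5]]                       # an arbitrary 2 x 3 matrix
print(matmul(identity(2), A) == A)    # True: Im A = A
print(matmul(A, identity(3)) == A)    # True: A In = A
```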

Block Multiplication

Observe that in our second interpretation of matrix multiplication, equation (3.3), we calculated the product BA in blocks. That is, we computed the smaller matrix products Ba1, Ba2, ..., Ban and put these in the appropriate positions to create BA. This is a very simple example of block multiplication. Observe that we could also regard the rows of B as blocks and write

    BA = [b1^T A]
         [  :   ]
         [bp^T A]

There are more general statements about the products of two matrices, each of which have been partitioned into blocks. In addition to clarifying the meaning of some calculations, block multiplication is used in organizing calculations with very large matrices. Roughly speaking, as long as the sizes of the blocks are chosen so that the products of the blocks are defined and fit together as required, block multiplication is defined by an extension of the usual rules of matrix multiplication. We will demonstrate this with an example.

EXAMPLE 14

Suppose that A is an m x n matrix, B is an n x p matrix, and A and B are partitioned into blocks as indicated:

    A = [A1]        B = [B1  B2]
        [A2]

Say that A1 is r x n so that A2 is (m - r) x n, while B1 is n x q and B2 is n x (p - q). Now, the product of a 2 x 1 block matrix and a 1 x 2 block matrix is given by the usual rule, so for the partitioned block matrices we have

    AB = [A1][B1  B2] = [A1 B1   A1 B2]
         [A2]           [A2 B1   A2 B2]

Observe that all the products are defined and the size of the resulting matrix is m x p, as desired.
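The partitioned product above can also be sketched numerically; here an illustrative 3 x 2 matrix is split into top and bottom row blocks and an illustrative 2 x 3 matrix into left and right column blocks:

```python
def matmul(X, Y):
    """Standard matrix product of X (m x n) and Y (n x p)."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 2], [3, 4], [5, 6]]     # 3 x 2, split into A1 (top row) and A2
B = [[1, 0, 2], [0, 1, 1]]       # 2 x 3, split into B1 (cols 0-1) and B2 (col 2)
A1, A2 = A[:1], A[1:]
B1 = [row[:2] for row in B]
B2 = [row[2:] for row in B]

# Assemble [A1; A2][B1 B2] = [A1B1 A1B2; A2B1 A2B2] block by block.
top = [r1 + r2 for r1, r2 in zip(matmul(A1, B1), matmul(A1, B2))]
bottom = [r1 + r2 for r1, r2 in zip(matmul(A2, B1), matmul(A2, B2))]
print(top + bottom == matmul(A, B))   # True
```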

PROBLEMS 3.1
Practice Problems

A1 Perform the indicated operation.
    (a) [ ] + [ ]
    (b) (-3)[ ]
    (c) [ ] - 4[ ]

A2 Calculate each of the following products or explain why a product is not defined.
    (a) [ ][ ]    (b) [ ][ ]    (c) [ ][ ]

A3 Check that A(B + C) = AB + AC and that A(3B) = 3(AB) for the given matrices.
    (a) A = [ ], B = [ ], C = [ ]
    (b) A = [ ], B = [ ], C = [ ]

A4 For the given matrices A and B, check whether A + B and AB are defined. If so, check that (A + B)^T = A^T + B^T and (AB)^T = B^T A^T.
    (a) A = [ ], B = [ ]
    (b) A = [ ], B = [ ]

A5 For the given matrices A, B, C, and D, determine the following products or state why they do not exist.
    (a) AB    (b) BA    (c) AC    (d) DC    (e) CD    (f) C^T D
    (g) Verify that A(BC) = (AB)C
    (h) Verify that (AB)^T = B^T A^T
    (i) Without doing any arithmetic, determine D^T C

A6 Let A = [ ], x = [ ], y = [ ], and z = [ ].
    (a) Determine Ax, Ay, and Az.
    (b) Use the result of (a) to determine A[ ].

A7 Let A = [ ] and consider Ax as a linear combination of the columns of A to determine x if Ax = b, where
    (a) b = [ ]    (b) b = [ ]    (c) b = [ ]    (d) b = [ ]

A8 Calculate the following products.
    (a) [ ][2 6]    (b) [2 6][ ]    (c) [ ][5 4 -3]    (d) [5 4 -3][ ]

A9 Verify the following case of block multiplication by calculating both sides of the equation and comparing.
    [ ]

A10 Determine if [ ] is in the span of [ ].

A11 Determine if the set [ ] is linearly dependent or linearly independent.

A12 Let A = [a1 ... an] and let {e1, ..., en} denote the standard basis vectors for R^n. Prove that Aei = ai.

Homework Problems

B1 Perform the indicated operation.
    (a) [ ] + [ ]
    (b) (-5)[ ]
    (c) [ ] - 4[ ]

B2 Calculate each of the following products or explain why a product is not defined.
    (a) [ ][ ]    (b) [ ][ ]    (c) [ ][ ]    (d) [ ][ ]

B3 Check that A(B + C) = AB + AC and that A(3B) = 3(AB) for the given matrices.
    (a) A = [ ], B = [ ], C = [ ]
    (b) A = [ ], B = [ ], C = [ ]

B4 For the given matrices A and B, check whether A + B and AB are defined. If so, check that (A + B)^T = A^T + B^T and (AB)^T = B^T A^T.
    (a) A = [ ], B = [ ]
    (b) A = [ ], B = [ ]

B5 Let A = [ ], B = [ ], C = [ ], and D = [ ]. Determine the following products or state why they do not exist.
    (a) AB    (b) BA    (c) DA    (d) AC    (e) CA    (f) CD^T
    (g) Verify that B(CA) = (BC)A
    (h) Verify that (AB)^T = B^T A^T
    (i) Without doing any arithmetic, determine [ ]

B6 Let A = [ ], x = [ ], y = [ ], and z = [ ].
    (a) Determine Ax, Ay, and Az.
    (b) Use the result of (a) to determine A[ ].

B7 Let A = [ ] and consider Ax as a linear combination of the columns of A to determine x if Ax = b, where
    (a) b = [ ]    (b) b = [ ]    (c) b = [ ]

B8 Calculate the following products.
    (a) [ ][-2 4]    (b) [-2 4][ ]    (c) [ ][-3 2]    (d) [-3 2][ ]

B9 Verify the following case of block multiplication by calculating both sides of the equation and comparing.
    [ ]

B10 Determine if [ ] is in the span of [ ].

B11 Determine if the set {[ ], [ ], [ ], [ ]} is linearly dependent or linearly independent.

Computer Problems

C1 Use a computer to check your results for Problems A2 and A5.

C2 Use a computer to determine the following matrix products and the transpose of the products. Note that one of the matrices appears twice. You should not have to enter it twice.
    (a) [ ][ ]    (b) [ ][ ]


Conceptual Problems

D1 Prove Theorem 4, using the following hints. To prove A = B, prove that A - B = Om,n; note that Ax = Bx for every x ∈ R^n if and only if (A - B)x = 0 for every x ∈ R^n. Now, suppose that Cx = 0 for every x ∈ R^n. Consider the case where x = ei and conclude that each column of C must be the zero vector.

D2 Let A be an m x n matrix. Show that Im A = A and A In = A.

D3 Find a formula to calculate the ij-th entry of AA^T and of A^T A. Explain why it follows that if AA^T or A^T A is the zero matrix, then A is the zero matrix.

D4 (a) Construct a 2 x 2 matrix A that is not the zero matrix yet satisfies A^2 = O2,2.
   (b) Find 2 x 2 matrices A and B with A ≠ B and neither A = O2,2 nor B = O2,2, such that A^2 - AB - BA + B^2 = O2,2.

D5 Find as many 2 x 2 matrices as you can that satisfy A^2 = I.

3.2 Matrix Mappings and Linear Mappings

Functions are a fundamental concept in mathematics. Recall that a function f is a rule that assigns to every element x of an initial set, called the domain of the function, a unique value y in another set, called the codomain of f. We say that f maps x to y or that y is the image of x under f, and we write f(x) = y. If f is a function with domain U and codomain V, then we say that f maps U to V and denote this by f : U → V. In your earlier mathematics, you met functions f : R → R such as f(x) = x^2 and looked at their various properties. In this section, we will start by looking at more general functions f : R^n → R^m, commonly called mappings or transformations. We will also look at a class of functions called linear mappings that are very important in linear algebra.

Matrix Mappings

Using the rule for matrix multiplication introduced in the preceding section, observe that for any m x n matrix A and vector x ∈ R^n, the product Ax is a vector in R^m. We see that this is behaving like a function whose domain is R^n and whose codomain is R^m.

Definition
Matrix Mapping

For any m x n matrix A, we define a function fA : R^n → R^m, called the matrix mapping corresponding to A, by

    fA(x) = Ax

for any x ∈ R^n.

Remark
Although a matrix mapping sends vectors to vectors, it is much more common to view functions as mapping points to points. Thus, when dealing with mappings in this text, we will often write fA(x1, ..., xn) when, technically, it would be more correct to write fA(x) with x the corresponding column vector.

EXAMPLE 1

Let A = [ ]. Find fA(1, 2) and fA(-1, 4).

Solution: We have fA(1, 2) = [ ] and fA(-1, 4) = [ ].

EXERCISE 1

Let A = [ ]. Find fA(1, 0), fA(0, 1), and fA(2, 3). What is the relationship between the value of fA(2, 3) and the values of fA(1, 0) and fA(0, 1)?

EXERCISE 2

Let A = [ ]. Find fA(-1, 1, 1, 0) and fA(-3, 1, 0, 1).

Based on our results in Exercise 1, is such a relationship always true? A good way to explore this is to look at a more general example.

EXAMPLE 2

Let A = [a11 a12; a21 a22] and find the values of fA(1, 0), fA(0, 1), and fA(x1, x2).

Solution: We have

    fA(1, 0) = [a11 a12][1] = [a11]        fA(0, 1) = [a11 a12][0] = [a12]
               [a21 a22][0]   [a21],                  [a21 a22][1]   [a22]

Then we get

    fA(x1, x2) = [a11 x1 + a12 x2] = x1 [a11] + x2 [a12] = x1 fA(1, 0) + x2 fA(0, 1)
                 [a21 x1 + a22 x2]      [a21]       [a22]

We can now clearly see the relationship between the image of the standard basis vectors in R^2 and the image of any vector x. We suspect that this works for any m x n matrix A.

Theorem 1

Let e1, e2, ..., en be the standard basis vectors of R^n, let A be an m x n matrix, and let fA : R^n → R^m be the corresponding matrix mapping. Then, for any vector x = x1 e1 + ... + xn en ∈ R^n, we have

    fA(x) = x1 fA(e1) + x2 fA(e2) + ... + xn fA(en)

Proof: Let A = [a1 a2 ... an]. Since ei has 1 as its i-th entry and 0s elsewhere, we get fA(ei) = Aei = ai. Thus, we have

    fA(x) = Ax = x1 a1 + ... + xn an = x1 fA(e1) + ... + xn fA(en)

as required.

Since the images of the standard basis vectors are just the columns of A, we see that the image of any vector x ∈ R^n is a linear combination of the columns of A. This should not be surprising as this is one of our interpretations of matrix multiplication. However, it does make us think about how a matrix mapping will affect a linear combination of vectors in R^n. For simplicity, we look at how it affects these separately.
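Theorem 1 can be spot-checked numerically; in this sketch (with an arbitrary 2 x 2 matrix), the image of x = 2e1 + 3e2 is compared against the same linear combination of the images of e1 and e2:

```python
def matvec(A, x):
    """Compute Ax for A stored as a list of rows."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

A = [[3, 0],
     [5, -2]]                 # an arbitrary 2 x 2 matrix
e1, e2 = [1, 0], [0, 1]
x = [2, 3]                    # so x = 2 e1 + 3 e2
lhs = matvec(A, x)
rhs = [2 * a + 3 * b for a, b in zip(matvec(A, e1), matvec(A, e2))]
print(lhs, rhs)               # [6, 4] [6, 4]
```

Note that `matvec(A, e1)` and `matvec(A, e2)` are just the columns of A, as the proof of Theorem 1 observes.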

Theorem 2

Let A be an m x n matrix with corresponding matrix mapping fA : R^n → R^m. Then, for any x, y ∈ R^n and any t ∈ R,

    fA(x + y) = fA(x) + fA(y)     (L1)
    fA(tx) = t fA(x)              (L2)

Proof: Let x, y ∈ R^n and t ∈ R. Using properties of matrix multiplication, we get

    fA(x + y) = A(x + y) = Ax + Ay = fA(x) + fA(y)

and

    fA(tx) = A(tx) = tAx = t fA(x)

A function that satisfies property (L1) of Theorem 2 is said to preserve addition. Similarly, a function satisfying property (L2) of Theorem 2 is said to preserve scalar multiplication. Notice that a function that satisfies both properties will in fact preserve linear combinations; that is,

    fA(t1 x + t2 y) = t1 fA(x) + t2 fA(y)

We call such functions linear mappings.

Linear Mappings

Definition
Linear Mapping

A function L : R^n → R^m is called a linear mapping (or linear transformation) if for every x, y ∈ R^n and t ∈ R it satisfies the following properties:

    (L1) L(x + y) = L(x) + L(y)
    (L2) L(tx) = tL(x)

Definition
Linear Operator

A linear operator is a linear mapping whose domain and codomain are the same.

Theorem 2 shows that every matrix mapping is a linear mapping. In Section 1.4, we showed that proj_v and perp_v are linear operators from R^n to R^n.

Remarks
1. Linear transformation and linear mapping mean exactly the same thing. Some people prefer one or the other, but we shall use both.

2. Since a linear operator L has the same domain and codomain, we often speak of a linear operator L on R^n to indicate that L is a linear mapping from R^n to R^n.

3. For the time being, we have defined only linear mappings whose domain is R^n and whose codomain is R^m. In Chapter 4, we will look at other sets that can be the domain and/or codomain of linear mappings.

EXAMPLE 3

Show that the mapping f : R^2 → R^2 defined by f(x1, x2) = (2x1 + x2, -3x1 + 5x2) is linear.

Solution: To prove that it is linear, we must show that it preserves both addition and scalar multiplication. For any y, z ∈ R^2, we have

    f(y + z) = f(y1 + z1, y2 + z2)

             = [ 2(y1 + z1) + (y2 + z2)]
               [-3(y1 + z1) + 5(y2 + z2)]

             = [ 2y1 + y2]   [ 2z1 + z2]
               [-3y1 + 5y2] + [-3z1 + 5z2]

             = f(y) + f(z)

EXAMPLE 3 (continued)

Thus, f preserves addition. Let y ∈ R^2 and let t ∈ R be a scalar. Then we have

    f(ty) = f(ty1, ty2) = [ 2(ty1) + (ty2)]   = t [ 2y1 + y2]   = t f(y)
                          [-3(ty1) + 5(ty2)]      [-3y1 + 5y2]

Hence, f also preserves scalar multiplication, and therefore f is a linear operator.

In the previous solution, notice that the proofs for addition and scalar multiplication are very similar. A natural question to ask is whether we can combine these into one step. The answer is yes! We can combine the two linearity properties into one statement: L : R^n → R^m is a linear mapping if for every x and y in the domain and for any scalar t ∈ R,

    L(tx + y) = tL(x) + L(y)

The proof that the definitions are equivalent is left for Problem D1.
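The combined condition lends itself to a quick numerical spot-check; the sketch below uses the mapping from Example 3. (Checking sample inputs does not prove linearity; only the algebra above does.)

```python
def f(x):
    """The mapping f(x1, x2) = (2x1 + x2, -3x1 + 5x2) from Example 3."""
    x1, x2 = x
    return (2 * x1 + x2, -3 * x1 + 5 * x2)

# Check L(t*x + y) == t*L(x) + L(y) for one choice of t, x, y.
t, x, y = 4, (1, 2), (-3, 5)
lhs = f((t * x[0] + y[0], t * x[1] + y[1]))
rhs = tuple(t * a + b for a, b in zip(f(x), f(y)))
print(lhs == rhs)   # True
```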

EXAMPLE 4

Determine if the mapping f : R^3 → R defined by f(x) = ||x|| is linear.

Solution: We must test whether f preserves addition and scalar multiplication. Let x, y ∈ R^3 and consider

    f(x + y) = ||x + y||    and    f(x) + f(y) = ||x|| + ||y||

Are these equal? By the triangle inequality, we get

    ||x + y|| ≤ ||x|| + ||y||

and we expect equality only when one of x, y is a multiple of the other. Therefore, we believe that these are not always equal, and consequently f does not preserve addition. To demonstrate this, we give a counterexample: if x = [1; 0; 0] and y = [0; 1; 0], then

    f(x + y) = f(1, 1, 0) = √2

EXAMPLE 4 (continued)

but

    f(x) + f(y) = 1 + 1 = 2

Thus, f(x + y) ≠ f(x) + f(y) for this pair of vectors x, y in R^3, hence f is not linear.

Note that one counterexample is enough to disqualify a mapping from being linear.

EXERCISE 3

Determine whether the following mappings are linear.
(a) f : R^2 → R^2 defined by f(x1, x2) = (x1^2, x1 + x2)
(b) g : R^2 → R^2 defined by g(x1, x2) = (x2, x1 - x2)

Is Every Linear Mapping a Matrix Mapping?

We saw that every matrix determines a corresponding linear mapping. It is natural to ask whether every linear mapping can be represented as a matrix mapping. To see if this might be true, let's look at the linear mapping f from Example 3, defined by

    f(x1, x2) = (2x1 + x2, -3x1 + 5x2)

If this is to be a matrix mapping, then from our work with matrix mappings, we know that the columns of the matrix must be the images of the standard basis vectors. We have

    f(1, 0) = [ 2]        f(0, 1) = [1]
              [-3],                 [5]

So, taking these as columns, we get the matrix A = [2 1; -3 5]. Now observe that

    A [x1] = [ 2x1 + x2]
      [x2]   [-3x1 + 5x2]

Hence, f can be represented as a matrix mapping. This example not only gives us a good reason to believe it is always true but indicates how we can find the matrix for a given linear mapping L.

Theorem 3

If L : R^n → R^m is a linear mapping, then L can be represented as a matrix mapping, with the corresponding m x n matrix [L] given by

    [L] = [L(e1)  L(e2)  ...  L(en)]

Proof: Since L is linear, for any x ∈ R^n we have

    L(x) = L(x1 e1 + x2 e2 + ... + xn en)
         = x1 L(e1) + x2 L(e2) + ... + xn L(en)
         = [L(e1)  L(e2)  ...  L(en)] [x1; ...; xn]
         = [L]x

as required.

Remarks

1. Combining this with Theorem 2, we have proven that a mapping is linear if and only if it is a matrix mapping.

2. The matrix [L] determined in this theorem depends on the standard basis. Consequently, the resulting matrix is often called the standard matrix of the linear mapping. Later, we shall see that we can use other bases and get different matrices corresponding to the same linear mapping.
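Theorem 3 suggests a direct recipe: feed the standard basis vectors through L and use the images as columns. A sketch using f from Example 3:

```python
def f(x):
    """The linear mapping from Example 3."""
    x1, x2 = x
    return (2 * x1 + x2, -3 * x1 + 5 * x2)

def standard_matrix(L, n):
    """Columns of [L] are the images L(e1), ..., L(en); valid only when L is linear."""
    cols = [L(tuple(1 if i == j else 0 for i in range(n))) for j in range(n)]
    m = len(cols[0])
    return [[cols[j][i] for j in range(n)] for i in range(m)]

print(standard_matrix(f, 2))   # [[2, 1], [-3, 5]]
```

As the discussion following Example 6 warns, this recipe is only meaningful for mappings already known to be linear; applied to a nonlinear mapping it produces a matrix that does not represent the mapping.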

EXAMPLE 5

Let v = [3; 4] and u = [1; 2]. Find the standard matrix of the mapping proj_v : R^2 → R^2 and use it to find proj_v u.

Solution: Since proj_v is linear, we can apply Theorem 3 to find [proj_v]. The first column of the matrix is the image of the first standard basis vector under proj_v. Using the methods of Section 1.4, we get

    proj_v e1 = (e1 · v / ||v||^2) v = (1(3) + 0(4)) / (3^2 + 4^2) [3] = [ 9/25]
                                                                  [4]   [12/25]

Similarly, the second column is the image of the second basis vector:

    proj_v e2 = (e2 · v / ||v||^2) v = (0(3) + 1(4)) / 25 [3] = [12/25]
                                                          [4]   [16/25]

Hence, the standard matrix of the linear mapping is

    [proj_v] = [proj_v e1   proj_v e2] = [ 9/25  12/25]
                                         [12/25  16/25]

Therefore, we have

    proj_v u = [proj_v] u = [ 9/25  12/25][1] = [33/25]
                            [12/25  16/25][2]   [44/25]

EXAMPLE 6

Let G : R^3 → R^2 be defined by G(x1, x2, x3) = (x1, x2). Prove that G is linear and find the standard matrix of G.

Solution: We first prove that G is linear. For any y, z ∈ R^3 and t ∈ R, we have

    G(ty + z) = G(ty1 + z1, ty2 + z2, ty3 + z3) = [ty1 + z1] = t [y1] + [z1] = tG(y) + G(z)
                                                  [ty2 + z2]     [y2]   [z2]

Hence, G is linear. Thus, we can apply Theorem 3 to find its standard matrix. The images of the standard basis vectors are

    G(1, 0, 0) = [1],    G(0, 1, 0) = [0],    G(0, 0, 1) = [0]
                 [0]                  [1]                  [0]

Thus, we have

    [G] = [G(1, 0, 0)  G(0, 1, 0)  G(0, 0, 1)] = [1 0 0]
                                                 [0 1 0]

Did we really need to prove that G was linear first? Couldn't we have just constructed [G] using the images of the standard basis vectors and then said that because G is a matrix mapping, it is linear? No! You must always check the conditions of the theorem before using it. Theorem 3 says that if f is linear, then [f] can be constructed from the images of the standard basis vectors. The converse is not true!

For example, consider the mapping f(x1, x2) = (x1 x2, 0). The images of the standard basis vectors are

    f(1, 0) = [0]    and    f(0, 1) = [0]
              [0]                     [0]

so we can construct the matrix A = [0 0; 0 0]. But this matrix does not represent the mapping! In particular, observe that

    f(1, 1) = [1]    but    A [1] = [0]
              [0]             [1]   [0]

Hence, even though we can create a matrix using the images of the standard basis vectors, it does not imply that the matrix will represent that mapping, unless we already know the mapping is linear.

EXERCISE 4

Let H : R^4 → R^2 be defined by H(x1, x2, x3, x4) = (x3 + x4, x1). Prove that H is linear and find the standard matrix of H.

Compositions and Linear Combinations of Linear Mappings

We now look at the usual operations on functions and how these affect linear mappings.

Definition
Operations on Linear Mappings

Let L and M be linear mappings from R^n to R^m and let t ∈ R be a scalar. We define (L + M) to be the mapping from R^n to R^m such that

    (L + M)(x) = L(x) + M(x)

We define (tL) to be the mapping from R^n to R^m such that

    (tL)(x) = tL(x)

Theorem 4

If L and M are linear mappings from R^n to R^m and t ∈ R, then (L + M) and (tL) are linear mappings.

Proof: Let x, y ∈ R^n and s ∈ R. Then,

    (tL)(sx + y) = tL(sx + y) = t[sL(x) + L(y)] = stL(x) + tL(y) = s(tL)(x) + (tL)(y)

Hence, (tL) is linear. Determining that L + M is linear is left for Problem D2.

The properties we proved in Theorem 4 should seem familiar. They are properties (1) and (6) of Theorem 1.2.1 and Theorem 3.1.1. That is, the set of all possible linear mappings from R^n to R^m is closed under scalar multiplication and addition. It can be shown that the other eight properties in these theorems also hold.

Definition
Composition of Linear Mappings

Let L : R^n → R^m and M : R^m → R^p be linear mappings. The composition M ∘ L : R^n → R^p is defined by

    (M ∘ L)(x) = M(L(x))

for all x ∈ R^n.

Note that the definition makes sense only if the domain of the second map M contains the codomain of the first map L, as we are evaluating M at L(x). Moreover, observe that the order of the mappings in the definition is important.

EXERCISE 5

Prove that if L : R^n → R^m and M : R^m → R^p are linear mappings, then M ∘ L is a linear mapping.

Since compositions and linear combinations of linear mappings are linear mappings, it is natural to ask about the standard matrix of these new linear mappings.

Theorem 5

Let L : R^n → R^m, M : R^n → R^m, and N : R^m → R^p be linear mappings and t ∈ R. Then,

    [tL] = t[L],    [L + M] = [L] + [M],    [N ∘ L] = [N][L]

Proof: We will prove [tL] = t[L] and [N ∘ L] = [N][L]. You are asked to prove that [L + M] = [L] + [M] in Problem D2.

Observe that for any x ∈ R^n, we have

    [tL]x = (tL)(x) = tL(x) = t[L]x

Hence, [tL] = t[L] by Theorem 3.1.4.

We first observe that since [N] is p x m and [L] is m x n, the matrix product [N][L] is defined. For any x ∈ R^n,

    [N ∘ L]x = (N ∘ L)(x) = N(L(x)) = N([L]x) = [N][L]x

Thus, [N ∘ L] = [N][L] by Theorem 3.1.4.

EXAMPLE 7

Let Land

M be the linear operators on JR2 defined by

Determine [MoL].
Solution: We find that

L(l,0)=

[]

L(0,1)=

Thus,

[ ]

M(l,0)=

[ ]
-

M(0,1)=

[ ]
-

[ -][ ] [ ]

[MoL]= -

In Example 7, M ∘ L is the mapping such that (M ∘ L)(x) = x for all x ∈ R^2. Moreover, the standard matrix of the mapping is the identity matrix. We thus make the following definition.

Definition
Identity Mapping

The linear mapping Id: R^n → R^n defined by

Id(x) = x

is called the identity mapping.


Remark

Since the mappings L and M in Example 7 satisfy M ∘ L = Id, they are called inverses, as are the matrices [M] and [L]. We will look at inverse mappings and matrices in Section 3.5.
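The relationship [M ∘ L] = [M][L] from Theorem 5, and the idea that an inverse pair composes to the identity, are easy to check numerically. The sketch below uses a hypothetical pair of shears (not the elided matrices of Example 7) whose composition is the identity; numpy is assumed to be available.

```python
import numpy as np

# Hypothetical operators on R^2: L shears by 1 in the x1-direction,
# M is the shear by -1 that undoes it.
L = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # [L]
M = np.array([[1.0, -1.0],
              [0.0, 1.0]])   # [M]

# Theorem 5: the standard matrix of a composition is the matrix product,
# so applying L then M agrees with multiplying by [M][L].
x = np.array([3.0, -2.0])
assert np.allclose(M @ (L @ x), (M @ L) @ x)

# Here M o L is the identity mapping, so [M][L] = I and M, L are inverses.
print(M @ L)   # prints the 2x2 identity matrix
```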

PROBLEMS 3.2
Practice Problems

A1 Let A = [...] and let fA be the corresponding matrix mapping.
(a) Determine the domain and codomain of fA.
(b) Determine fA(2, -5) and fA(-3, 4).
(c) Find the images of the standard basis vectors for the domain under fA.
(d) Determine fA(x).
(e) Check your answers in (c) and (d) by calculating [fA(x)] using Theorem 3.
A2 Let A = [...] and let fA be the corresponding matrix mapping.
(a) Determine the domain and codomain of fA.
(b) Determine fA(2, -2, 3, 1) and fA(-3, 1, 4, 2).
(c) Find the images of the standard basis vectors for the domain under fA.
(d) Determine fA(x).
(e) Check your answers in (c) and (d) by calculating [fA(x)] using Theorem 3.

A3 For each of the following mappings, state the domain and codomain. Determine whether the mapping is linear and either prove that it is linear or give a counterexample to show why it cannot be linear.
(a) f(x1, x2) = (sin x1, e^x2)
(b) g(x1, x2) = (2x1 + 3x2, x1 - x2)
(c) h(x1, x2) = (2x1 + 3x2, x1 - x2, x1x2)
(d) k(x1, x2, x3) = (x1 + x2, 0, x2 - x3)
(e) l(x1, x2, x3) = (x2, |x1|)
(f) m(x1) = (x1, 1, 0)

A4 For each of the following linear mappings, determine the domain, the codomain, and the standard matrix of the mapping.
(a) L(x1, x2, x3) = (2x1 - 3x2 + x3, x2 - 5x3)
(b) K(x1, x2, x3, x4) = (5x1 + 3x3 - x4, x2 - ?x3 + 3x4)
(c) M(x1, x2, x3, x4) = (x1 - x3 + x4, x1 + 2x2 - x3 - 3x4, x2 + x3, x1 - x2 + x3 - x4)

A5 Suppose that S and T are linear mappings with matrices [S] = [...] and [T] = [...].
(a) Determine the domain and codomain of each mapping.
(b) Determine the matrices that represent (S + T) and (2S - 3T).

A6 Suppose that S and T are linear mappings with matrices [S] = [...] and [T] = [...].
(a) Determine the domain and codomain of each mapping.
(b) Determine the matrices that represent S ∘ T and T ∘ S.

A7 Let L, M, and N be linear mappings with matrices [L] = [...], [M] = [...], and [N] = [...]. Determine which of the following compositions are defined and determine the domain and codomain of those that are defined.
(a) L ∘ M  (b) M ∘ L
(c) L ∘ N  (d) N ∘ L
(e) M ∘ N  (f) N ∘ M

A8 Let v = [...]. Determine the matrix [perp_v].

A9 Let v = [...]. Determine the matrix [proj_v].

A10 Let v = [...]. Determine the matrix [proj_v].

Homework Problems

B1 Let A = [...] and let fA be the corresponding matrix mapping.
(a) Determine the domain and codomain of fA.
(b) Determine fA(3, 4, -5) and fA(-2, 1, -4).
(c) Find the images of the standard basis vectors for the domain under fA.
(d) Determine fA(x).
(e) Check your answers in (c) and (d) by calculating [fA(x)] using Theorem 3.

B2 Let A = [...] and let fA be the corresponding matrix mapping.
(a) Determine the domain and codomain of fA.
(b) Determine fA(-4, 2, 1) and fA(3, -3, 2).
(c) Find the images of the standard basis vectors for the domain under fA.
(d) Determine fA(x).
(e) Check your answers in (c) and (d) by calculating [fA(x)] using Theorem 3.

B3 For each of the following mappings, state the domain and codomain. Determine whether the mapping is linear and either prove that it is linear or give a counterexample to show why it cannot be linear.
(a) f(x1, x2) = (2x2, x1 - x2)
(b) g(x1, x2) = (cos x2, x1x2)
(c) h(x1, x2, x3) = (0, 0, x1 + x2 + x3)
(d) k(x1, x2, x3) = (0, 0, 0)
(e) f(x1, x2, x3, x4) = (x1, 1, x4)
(f) m(x1, x2, x3, x4) = (x1 + x2 - x3)

B4 For each of the following linear mappings, determine the domain, the codomain, and the standard matrix of the mapping.
(a) L(x1, x2, x3) = (2x1 - 3x2, x2, 4x1 - 5x2)
(b) K(x1, x2, x3, x4) = (2x1 - x3 + 3x4, -x1 - 2x2 + 2x3 - x4, 3x2 + x3)
(c) M(x1, x2, x3) = (x3 - x1, 0, 5x1 + x2)

B5 Suppose that S and T are linear mappings with matrices [S] = [...] and [T] = [...].
(a) Determine the domain and codomain of each mapping.
(b) Determine the matrices that represent (T + S) and (-S + 2T).

B6 Suppose that S and T are linear mappings with matrices [S] = [...] and [T] = [...].
(a) Determine the domain and codomain of each mapping.
(b) Determine the matrices that represent S ∘ T and T ∘ S.

B7 Let L, M, N be linear mappings with matrices [L] = [...], [M] = [...], and [N] = [...]. Determine which of the following compositions are defined and determine the domain and codomain of those that are defined.
(a) L ∘ M  (b) M ∘ L
(c) L ∘ N  (d) N ∘ L
(e) M ∘ N  (f) N ∘ M

B8 Let v = [...]. Determine the matrix [perp_v].

B9 Let v = [...]. Determine the matrix [proj_v].

B10 Let u = [...]. Determine the matrix [perp_u].

B11 Let u = [...]. Determine the matrix [proj_u].

Conceptual Problems

D1 Let L: R^n → R^m. Show that for any x, y ∈ R^n and t ∈ R, L satisfies L(x + y) = L(x) + L(y) and L(tx) = tL(x) if and only if L(tx + y) = tL(x) + L(y).

D2 Let L: R^n → R^m and M: R^n → R^m be linear mappings. Prove that (L + M) is linear and that [L + M] = [L] + [M].

D3 Let v ∈ R^n be a fixed vector and define a mapping DOT_v by DOT_v(x) = v · x. Verify that DOT_v is a linear mapping. What is its codomain? Verify that the matrix of this linear mapping can be written as v^T.

D4 If u is a unit vector, show that [proj_u] = u u^T.

D5 Let v ∈ R^3 be a fixed vector and define a mapping CROSS_v by CROSS_v(x) = v × x. Verify that CROSS_v is a linear mapping and determine its codomain and standard matrix.

3.3 Geometrical Transformations

Geometrical transformations have long been of great interest to mathematicians. They also have many important applications. Physicists and engineers often rely on simple geometrical transformations to gain understanding of the properties of materials or structures they wish to examine. For example, structural engineers use stretches, shears, and rotations to understand the deformation of materials. Material scientists use rotations and reflections to analyse crystals and other fine structures. Many of these simple geometrical transformations in R^2 and R^3 are linear. The following is a brief partial catalogue of some of these transformations and their matrix representations. (proj_v and perp_v belong to the list of geometrical transformations, too, but they were discussed in Chapter 1 and so are not included here.)

Rotations in the Plane

Rθ: R^2 → R^2 is defined to be the transformation that rotates x counterclockwise through angle θ to the image Rθ(x). See Figure 3.3.1. Is Rθ linear? Since a rotation does not change lengths, we get the same result if we first multiply a vector x by a scalar t and then rotate through angle θ, or if we first rotate through θ and then multiply by t. Thus, Rθ(tx) = tRθ(x) for any t ∈ R and any x ∈ R^2. Since the shape of a parallelogram is not altered by a rotation, the picture for the parallelogram rule of addition should be unchanged under a rotation, so Rθ(x + y) = Rθ(x) + Rθ(y). Thus, Rθ is linear.

Figure 3.3.1  Counterclockwise rotation through angle θ in the plane.

A more formal proof is obtained by showing that Rθ can be represented as a matrix mapping. Assuming that Rθ is linear, we can use the definition of the standard matrix to calculate [Rθ]. From Figure 3.3.2, we see that

Rθ(1, 0) = [ cos θ ]     and    Rθ(0, 1) = [ -sin θ ]
           [ sin θ ]                       [  cos θ ]

Hence,

[Rθ] = [ cos θ  -sin θ ]
       [ sin θ   cos θ ]

It follows from the calculation of the product Rθ(x) = [Rθ]x that

Rθ(x) = [ x1 cos θ - x2 sin θ ]
        [ x1 sin θ + x2 cos θ ]

Figure 3.3.2  Image of the standard basis vectors under Rθ: (1, 0) is carried to (cos θ, sin θ) and (0, 1) to (-sin θ, cos θ).

EXAMPLE 1  What is the matrix of the rotation of R^2 through angle 2π/3?

Solution: Since cos(2π/3) = -1/2 and sin(2π/3) = √3/2,

[R_{2π/3}] = [ -1/2   -√3/2 ]
             [ √3/2   -1/2  ]
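The rotation matrix is easy to sketch numerically. The helper below builds [Rθ] from the formula above and checks it against Example 1; numpy is assumed to be available.

```python
import numpy as np

def rotation_matrix(theta):
    """Standard matrix [R_theta] of the counterclockwise rotation of R^2."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Example 1: theta = 2*pi/3 gives [[-1/2, -sqrt(3)/2], [sqrt(3)/2, -1/2]].
R = rotation_matrix(2 * np.pi / 3)
expected = np.array([[-0.5, -np.sqrt(3) / 2],
                     [np.sqrt(3) / 2, -0.5]])
assert np.allclose(R, expected)

# A rotation preserves lengths, as used in the linearity argument above.
x = np.array([3.0, 4.0])
assert np.isclose(np.linalg.norm(R @ x), np.linalg.norm(x))
```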

EXERCISE 1  Determine [R_{π/4}] and use it to calculate R_{π/4}(1, 1). Illustrate with a sketch.

Rotation Through Angle θ About the x3-axis in R^3

Figure 3.3.3 demonstrates a counterclockwise rotation with respect to the right-handed standard basis. This rotation leaves x3 unchanged, so that if the transformation is denoted R, then R(0, 0, 1) = (0, 0, 1). Together with the previous case, this tells us that the matrix of this rotation is

[R] = [ cos θ  -sin θ  0 ]
      [ sin θ   cos θ  0 ]
      [  0       0     1 ]

These ideas can be adapted to give rotations about the other coordinate axes. Is it possible to determine the matrix of a rotation about an arbitrary axis in R^3? We shall see how to do this in Chapter 7.

Figure 3.3.3  A right-handed counterclockwise rotation about the x3-axis in R^3.
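A quick numerical check of this matrix (numpy assumed): the x3-axis is fixed, while vectors in the x1x2-plane rotate as in the plane case.

```python
import numpy as np

def rotation_about_x3(theta):
    """Matrix of the rotation through angle theta about the x3-axis in R^3."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,  -s,  0.0],
                     [s,   c,  0.0],
                     [0.0, 0.0, 1.0]])

R = rotation_about_x3(np.pi / 2)
# The x3-axis is left unchanged: R(0, 0, 1) = (0, 0, 1).
assert np.allclose(R @ np.array([0.0, 0.0, 1.0]), [0.0, 0.0, 1.0])
# In the x1x2-plane the map acts as the 2x2 rotation: e1 is carried to e2.
assert np.allclose(R @ np.array([1.0, 0.0, 0.0]), [0.0, 1.0, 0.0])
```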

Stretches  Imagine that all lengths in the x1-direction in the plane are stretched by a scalar factor t > 0, while lengths in the x2-direction are left unchanged (Figure 3.3.4). This linear transformation, called a "stretch by factor t in the x1-direction," has matrix

[ t  0 ]
[ 0  1 ]

(If t < 1, you might prefer to call this a shrink.) It should be obvious that stretches can also be defined in the x2-direction and in higher dimensions. Stretches are important in understanding the deformation of solids.

Contractions and Dilations  If a linear operator T: R^2 → R^2 has matrix

[ t  0 ]
[ 0  t ]

with t > 0, then for any x, T(x) = tx, so that this transformation stretches vectors in all directions by the same factor. Thus, for example, a circle of radius 1 centred at the origin is mapped to a circle of radius t centred at the origin. If 0 < t < 1, such a transformation is called a contraction; if t > 1, it is a dilation.

Figure 3.3.4  A stretch by a factor t in the x1-direction.

Shears

Sometimes a force applied to a rectangle will cause it to deform into a parallelogram, as shown in Figure 3.3.5. The change can be described by the transformation S: R^2 → R^2 such that S(2, 0) = (2, 0) and S(0, 1) = (s, 1). Although the deformation of a real solid may be more complicated, it is usual to assume that the transformation S is linear. Such a linear transformation is called a shear in the direction of x1 by amount s. Since the action of S on the standard basis vectors is known, we find that its matrix is

[ 1  s ]
[ 0  1 ]

Figure 3.3.5  A shear in the direction of x1 by amount s.
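A sketch of the shear matrix in code, assuming numpy; the amount s = 1.5 below is an arbitrary illustrative choice, matching the behaviour shown in Figure 3.3.5.

```python
import numpy as np

def shear_x1(s):
    """Matrix of a shear in the direction of x1 by amount s."""
    return np.array([[1.0, s],
                     [0.0, 1.0]])

S = shear_x1(1.5)   # hypothetical amount s = 1.5
# Points on the x1-axis are fixed: S(2, 0) = (2, 0)...
assert np.allclose(S @ np.array([2.0, 0.0]), [2.0, 0.0])
# ...while (0, 1) is carried to (s, 1).
assert np.allclose(S @ np.array([0.0, 1.0]), [1.5, 1.0])
```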

Reflections in Coordinate Axes in R^2 or Coordinate Planes in R^3

Let R: R^2 → R^2 be a reflection in the x1-axis (see Figure 3.3.6). Then each vector corresponding to a point above the axis is mapped by R to the mirror image vector below. Hence, R(x1, x2) = (x1, -x2), and it follows that this transformation is linear with matrix

[ 1   0 ]
[ 0  -1 ]

Similarly, a reflection in the x2-axis has the matrix

[ -1  0 ]
[  0  1 ]

Next, consider the reflection T: R^3 → R^3 that reflects in the x1x2-plane (that is, the plane x3 = 0). Points above the plane are reflected to points below the plane. The matrix of this reflection is

[ 1  0   0 ]
[ 0  1   0 ]
[ 0  0  -1 ]

Figure 3.3.6  A reflection in R^2 over the x1-axis.

EXERCISE 2  Write the matrices for the reflections in the other two coordinate planes in R^3.

General Reflections

We consider only reflections in (or "across") lines in R^2 or planes in R^3 that pass through the origin. Reflections in lines or planes not containing the origin involve translations (which are not linear) as well as linear mappings.

Consider the plane in R^3 with equation n · x = 0. Since a reflection is related to proj_n, a reflection in the plane with normal vector n will be denoted refl_n. If a vector corresponds to a point P that does not lie in the plane, its image under refl_n is the vector that corresponds to the point on the opposite side of the plane, lying on a line through P perpendicular to the plane of reflection, at the same distance from the plane as P. Figure 3.3.7 shows reflection in a line. From the figure, we see that

refl_n(x) = x - 2 proj_n(x)

Since proj_n is linear, it is easy to see that refl_n is also linear.

Figure 3.3.7  A reflection in R^2 over the line with normal vector n.

It is important to note that refl_n is a reflection with normal vector n. The calculations for reflection in a line in R^2 are similar to those for a plane, provided that the equation of the line is given in scalar form n · x = 0. If the vector equation of the line is given as x = td, then either we must find a normal vector n and proceed as above, or, in terms of the direction vector d, the reflection will map p to p - 2 perp_d(p).

EXAMPLE 2  Consider a reflection refl_n: R^3 → R^3 over the plane with normal vector n = [...]. Determine the matrix [refl_n].

Solution: We have refl_n(p) = p - 2 proj_n(p). Hence, [refl_n] = [...].

Note that we could have also computed [refl_n] in the following way. The equation for refl_n(p) can be written as

refl_n(p) = Id(p) - 2 proj_n(p) = (Id + (-2) proj_n)(p)

Thus,

[refl_n] = [Id] - 2[proj_n] = I - 2[proj_n]
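The formula [refl_n] = I - 2[proj_n] can be turned into a small numerical sketch. The normal vector below is hypothetical (the one in Example 2 is not legible here), and numpy is assumed.

```python
import numpy as np

def reflection_matrix(n):
    """[refl_n] = I - 2[proj_n] for a nonzero normal vector n."""
    n = np.asarray(n, dtype=float)
    proj = np.outer(n, n) / (n @ n)      # [proj_n] = n n^T / (n . n)
    return np.eye(len(n)) - 2 * proj

# Hypothetical normal vector for illustration.
F = reflection_matrix([1.0, -1.0, 2.0])
# The normal is reversed, while vectors in the plane n . x = 0 are fixed.
assert np.allclose(F @ np.array([1.0, -1.0, 2.0]), [-1.0, 1.0, -2.0])
assert np.allclose(F @ np.array([1.0, 1.0, 0.0]), [1.0, 1.0, 0.0])
# Reflecting twice gives the identity (compare Problem D5 below).
assert np.allclose(F @ F, np.eye(3))
```

With n = (0, 0, 1) this recovers the x1x2-plane reflection matrix diag(1, 1, -1) given earlier.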

PROBLEMS 3.3
Practice Problems

A1 Determine the matrices of the rotations in the plane through the following angles.
(a) [...]  (b) π  (c) [...]  (d) [...]

A2 (a) In the plane, what is the matrix of a stretch S by a factor 5 in the x2-direction?
(b) Calculate the composition of S followed by a rotation through angle θ.
(c) Calculate the composition of S following a rotation through angle θ.

A3 Determine the matrices of the following reflections in R^2.
(a) R is a reflection in the line x1 = [...] x2.
(b) S is a reflection in the line 2x1 [...] 3x2 = 0.

A4 Determine the matrix of the reflections in the following planes in R^3.
(a) x1 [...] x2 [...] x3 = 0
(b) 2x1 - 2x2 - x3 = 0

A5 (a) Let D: R^3 → R^3 be the dilation with factor t = 5 and let inj: R^3 → R^4 be defined by inj(x1, x2, x3) = (x1, x2, 0, x3). Determine the matrix of inj ∘ D.
(b) Let P: R^3 → R^2 be defined by P(x1, x2, x3) = (x2, x3) and let S be the shear in R^3 such that S(x1, x2, x3) = (x1, x2, x3 + 2x1). Determine the matrix of P ∘ S.
(c) Can you define a shear T: R^2 → R^2 such that T ∘ P = P ∘ S, where P and S are as in part (b)?
(d) Let Q: R^3 → R^2 be defined by Q(x1, x2, x3) = (x1, x2). Determine the matrix of Q ∘ S, where S is the mapping in part (b).

Homework Problems

B1 Determine the matrices of the rotations in the plane through the following angles.
(a) [...]  (b) [...]  (c) [...]  (d) [...]

B2 (a) In the plane, what is the matrix of a stretch S by a factor 0.6 in the x2-direction?
(b) Calculate the composition of S followed by a rotation through angle θ.
(c) Calculate the composition of S following a rotation through angle θ.

B3 Determine the matrices of the following reflections in R^2.
(a) R is a reflection in the line x1 - 5x2 = 0.
(b) S is a reflection in the line 3x1 [...] 4x2 = 0.

B4 Determine the matrix of the reflections in the following planes in R^3.
(a) x1 - 3x2 [...] = 0
(b) 2x1 [...] x2 - x3 = 0

B5 (a) Let C: R^3 → R^3 be the contraction with factor 1/3 and let inj: R^3 → R^5 be defined by inj(x1, x2, x3) = (0, x1, 0, x2, x3). Determine the matrix of inj ∘ C.
(b) Let S: R^3 → R^3 be the shear defined by S(x1, x2, x3) = (x1, x2 - 2x3, x3). Determine the matrices C ∘ S and S ∘ C, where C is the contraction in part (a).
(c) Let T: R^3 → R^3 be the shear defined by T(x1, x2, x3) = (x1 [...] 3x2, x2, x3). Determine the matrix of S ∘ T and T ∘ S, where S is the mapping in part (b).

Conceptual Problems
D1 Verify that for rotations in the plane [Rα ∘ Rθ] = [Rα][Rθ] = [Rα+θ].

D2 In Problem A3,

[R] = [  4/5  -3/5 ]    and    [S] = [ -3/5  4/5 ]
      [ -3/5  -4/5 ]                 [  4/5  3/5 ]

are reflection matrices. Calculate [R ∘ S] and verify that it can be identified as the matrix of a rotation. Determine the angle of the rotation. Draw a picture illustrating how the composition of these reflections is a rotation.

D3 In R^3, calculate the matrix of the composition of a reflection in the x2x3-plane followed by a reflection in the x1x2-plane and identify it as a rotation about some coordinate axis. What is the angle of the rotation?

D4 (a) Construct a 2 × 2 matrix A ≠ I such that A^3 = I. (Hint: Think geometrically.)
(b) Construct a 2 × 2 matrix A ≠ I such that A^5 = I.

D5 From geometrical considerations, we know that refl_n ∘ refl_n = Id. Verify the corresponding matrix equation. (Hint: [refl_n] = I - 2[proj_n], and proj_n satisfies the projection property from Section 1.4.)

3.4 Special Subspaces for Systems and Mappings: Rank Theorem

In the preceding two sections, we have seen how to represent any linear mapping as a matrix mapping. We now use subspaces of R^n to explore this further by examining the connection between the properties of L and its standard matrix [L] = A. This will also allow us to show how the solutions of the systems of equations Ax = b and Ax = 0 are related.

Recall from Section 1.2 that a subspace of R^n is a non-empty subset of R^n that is closed under addition and closed under scalar multiplication. Moreover, we proved that Span{v1, ..., vk} is a subspace of R^n. Throughout this section, L will always denote a linear mapping from R^n to R^m, and A will denote the standard matrix of L.

Solution Space and Nullspace

Theorem 1  Let A be an m × n matrix. The set S = {x ∈ R^n | Ax = 0} of all solutions to a homogeneous system Ax = 0 is a subspace of R^n.

Proof: We have 0 ∈ S since A0 = 0.
Let x, y ∈ S. Then A(x + y) = Ax + Ay = 0 + 0 = 0, so we have x + y ∈ S.
Let x ∈ S and t ∈ R. Then A(tx) = tAx = t0 = 0. Therefore, tx ∈ S.
So, by definition, S is a subspace of R^n.

From this result, we make the following definition.

Definition
Solution Space

The set S = {x ∈ R^n | Ax = 0} of all solutions to a homogeneous system Ax = 0 is called the solution space of the system Ax = 0.

EXAMPLE 1  Find the solution space of the homogeneous system x1 + 2x2 - 3x3 = 0.

Solution: We can solve this very easily by using the methods of Chapter 2. In particular, we find that the general solution is

x = s(-2, 1, 0) + t(3, 0, 1),  s, t ∈ R

or the solution space is Span{(-2, 1, 0), (3, 0, 1)}.
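Theorem 1 can be checked numerically for this equation. The two vectors below are one valid choice of spanning vectors for its solution space; numpy is assumed.

```python
import numpy as np

# Coefficient matrix of x1 + 2x2 - 3x3 = 0.
A = np.array([[1.0, 2.0, -3.0]])

# One choice of spanning vectors for the solution space.
v1 = np.array([-2.0, 1.0, 0.0])
v2 = np.array([3.0, 0.0, 1.0])
assert np.allclose(A @ v1, 0) and np.allclose(A @ v2, 0)

# Theorem 1: the solution space is closed under linear combinations.
s, t = -4.0, 2.5   # arbitrary scalars
assert np.allclose(A @ (s * v1 + t * v2), 0)
```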

EXERCISE 1  Let A = [...]. Find the solution space of Ax = 0.

Notice that in both of these problems, the solution space is displayed automatically as the span of a set of linearly independent vectors.

For the linear mapping L with standard matrix A = [L], we see that L(x) = Ax, by definition of the standard matrix. Hence, the vectors x such that L(x) = 0 are exactly the same as the vectors satisfying Ax = 0. Thus, the set of all vectors x such that L(x) = 0 also forms a subspace of R^n.

Definition
Nullspace

The nullspace of a linear mapping L is the set of all vectors whose image under L is the zero vector 0. We write

Null(L) = {x ∈ R^n | L(x) = 0}
Remark

The word kernel, with the notation ker(L) = {x ∈ R^n | L(x) = 0}, is often used in place of nullspace.

EXAMPLE 2  Let v = (2, -1, 3). Find the nullspace of proj_v: R^3 → R^3.

Solution: Since vectors orthogonal to v will be mapped to 0 by proj_v, the nullspace of proj_v is the set of all vectors orthogonal to v. That is, it is the plane passing through the origin with normal vector v. Hence, we get

Null(proj_v) = { x ∈ R^3 | 2x1 - x2 + 3x3 = 0 }

EXAMPLE 3  Let L: R^2 → R^3 be defined by L(x1, x2) = (2x1 - x2, 0, x1 + x2). Find Null(L).

Solution: We have (x1, x2) ∈ Null(L) if L(x1, x2) = (0, 0, 0). For this to be true, we must have

2x1 - x2 = 0
x1 + x2 = 0

We see that the only solution to this homogeneous system is x = 0. Thus, Null(L) = {0}.

EXERCISE 2

To match our work with linear mappings, we make the following definition for matrices.

Definition
Nullspace

Let A be an m × n matrix. Then the nullspace of A is

Null(A) = {x ∈ R^n | Ax = 0}

It should be clear that for any linear mapping L: R^n → R^m, Null(L) = Null([L]).

Solution Set of Ax = b

Next, we want to consider solutions for a non-homogeneous system Ax = b, b ≠ 0, and compare this solution set with the solution space for the corresponding homogeneous system Ax = 0 (that is, the system with the same coefficient matrix A).

EXAMPLE 4  Find the general solution of the system x1 + 2x2 - 3x3 = 5.

Solution: The general solution of x1 + 2x2 - 3x3 = 5 is

x = (5, 0, 0) + s(-2, 1, 0) + t(3, 0, 1),  s, t ∈ R

EXERCISE 3  Let A = [...] and b = [...]. Find the general solution of Ax = b.

Observe that in Example 4 and Exercise 3, the general solution is obtained from the solution space of the corresponding homogeneous problem (Example 1 and Exercise 1, respectively) by a translation. We prove this result in the following theorem.

Theorem 2  Let p be a solution of the system of linear equations Ax = b, b ≠ 0.
(1) If v is any other solution of the same system, then A(p - v) = 0, so that p - v is a solution of the corresponding homogeneous system Ax = 0.
(2) If h is any solution of the corresponding system Ax = 0, then p + h is a solution of the system Ax = b.

Proof:
(1) Suppose that Av = b. Then A(p - v) = Ap - Av = b - b = 0.
(2) Suppose that Ah = 0. Then A(p + h) = Ap + Ah = b + 0 = b.

The solution p of the non-homogeneous system is sometimes called a particular solution of the system. Theorem 2 can thus be restated as follows: any solution of the non-homogeneous system can be obtained by adding a solution of the corresponding homogeneous system to a particular solution.
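Theorem 2 can be illustrated with the system of Example 4; the particular solution chosen below is one of many, and numpy is assumed.

```python
import numpy as np

A = np.array([[1.0, 2.0, -3.0]])   # coefficient matrix of x1 + 2x2 - 3x3 = 5
b = np.array([5.0])

p = np.array([5.0, 0.0, 0.0])      # a particular solution: A p = b
h = np.array([-2.0, 1.0, 0.0])     # a solution of the homogeneous system
assert np.allclose(A @ p, b) and np.allclose(A @ h, 0)

# Theorem 2 (2): p + h solves Ax = b for every homogeneous solution h.
assert np.allclose(A @ (p + 3.7 * h), b)

# Theorem 2 (1): the difference of two solutions is a homogeneous solution.
q = np.array([1.0, 2.0, 0.0])      # another solution: 1 + 4 - 0 = 5
assert np.allclose(A @ (p - q), 0)
```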

Range of L and Columnspace of A

Definition
Range

The range of a linear mapping L: R^n → R^m is defined to be the set

Range(L) = {L(x) ∈ R^m | x ∈ R^n}

EXAMPLE 5  Let v = [...] and consider the linear mapping proj_v: R^3 → R^3. By definition, every image of this mapping is a multiple of v, so the range of the mapping is the set of all multiples of v. On the other hand, the range of perp_v is the set of all vectors orthogonal to v. Note that in each of these cases, the range is a subset of the codomain.

EXAMPLE 6  If L is a rotation, reflection, contraction, or dilation in R^3, then, because of the geometry of the mapping, it is easy to see that the range of L is all of R^3.

EXAMPLE 7  Let L: R^2 → R^3 be defined by L(x1, x2) = (2x1 - x2, 0, x1 + x2). Find Range(L).

Solution: By definition of the range, if L(x) is any vector in the range, then

L(x) = [ 2x1 - x2 ]
       [    0     ]
       [ x1 + x2  ]

Using vector operations, we can write this as

L(x) = x1(2, 0, 1) + x2(-1, 0, 1)

This is so for any x1, x2 ∈ R, and so Range(L) = Span{(2, 0, 1), (-1, 0, 1)}.

EXERCISE 4  Let L: R^3 → R^2 be defined by L(x1, x2, x3) = (x1 - x2, -2x1 + 2x2 + x3). Find Range(L).

It is natural to ask whether the range of L can easily be described in terms of the matrix A of L. Observe that

L(x) = Ax = [ a1  ···  an ] [ x1 ]  = x1 a1 + ··· + xn an
                            [ ⋮  ]
                            [ xn ]

Thus, the image of x under L is a linear combination of the columns of the matrix A.

Definition
Columnspace

The columnspace of an m × n matrix A is the set Col(A) defined by

Col(A) = {Ax ∈ R^m | x ∈ R^n}

Notice that our second interpretation of matrix-vector multiplication tells us that Ax is a linear combination of the columns of A. Thus, the columnspace of A is the set of all possible linear combinations of the columns of A. In particular, it is the subspace of R^m spanned by the columns of A. Moreover, if L: R^n → R^m is a linear mapping, then Range(L) = Col(A).
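This column interpretation is easy to check numerically. The matrix below is the standard matrix of the mapping L from Example 7, and numpy is assumed.

```python
import numpy as np

# Standard matrix of L(x1, x2) = (2x1 - x2, 0, x1 + x2) from Example 7.
A = np.array([[2.0, -1.0],
              [0.0,  0.0],
              [1.0,  1.0]])
a1, a2 = A[:, 0], A[:, 1]   # the columns of A

x = np.array([4.0, -3.0])
# Ax = x1*a1 + x2*a2, so every image L(x) lies in Col(A) = Span{a1, a2}.
assert np.allclose(A @ x, x[0] * a1 + x[1] * a2)
```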

EXAMPLE 8  Let A = [...] and B = [...]. Then

Col(A) = Span{ [...], [...], [...] }    and    Col(B) = Span{ [...], [...] }

EXAMPLE 9  If L is the mapping with standard matrix A = [...], then

Range(L) = Col(A) = Span{ [...], [...] }

EXERCISE 5  Find the standard matrix A of L(x1, x2, x3) = (x1 - x2, -2x1 + 2x2 + x3) and show that Range(L) = Col(A).

The range of a linear mapping L with standard matrix A is also related to the system of equations Ax = b.

Theorem 3  The system of equations Ax = b is consistent if and only if b is in the range of the linear mapping L with standard matrix A (or, equivalently, if and only if b is in the columnspace of A).

Proof: If there exists a vector x such that Ax = b, then b = Ax = L(x) and hence b is in the range of L. Similarly, if b is in the range of L, then there exists a vector x such that b = L(x) = Ax.

EXAMPLE 10  Suppose that L is a linear mapping with matrix A = [...]. Determine whether c = [...] and d = [...] are in the range of L.

Solution: c is in the range of L if and only if the system Ax = c is consistent. Similarly, d is in the range of L if and only if Ax = d is consistent. Since the coefficient matrix is the same for the two systems, we can answer both questions simultaneously by row reducing the doubly augmented matrix [A | c | d]:

[...]

By considering the reduced matrix corresponding to [A | c] (ignore the last column), we see that the system Ax = c is consistent, so c is in the range of L. The reduced matrix corresponding to [A | d] shows that Ax = d is inconsistent and hence d is not in the range of L.
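The test in Theorem 3 can also be phrased in terms of rank: Ax = b is consistent exactly when appending b to A does not increase the rank. The matrix and right-hand sides below are hypothetical stand-ins (Example 10's are not legible here); numpy is assumed.

```python
import numpy as np

# Hypothetical matrix and right-hand sides.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # Col(A) = Span{(1, 2)}
c = np.array([3.0, 6.0])            # a multiple of (1, 2): in Col(A)
d = np.array([1.0, 0.0])            # not in Col(A)

def consistent(A, b):
    """Ax = b is consistent iff b is in Col(A), i.e. iff appending
    b as a column does not increase the rank."""
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

assert consistent(A, c)
assert not consistent(A, d)
```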

Rowspace of A

The idea of the rowspace of a matrix A is similar to the idea of the columnspace.

Definition
Rowspace

Given an m × n matrix A, the rowspace of A is the subspace spanned by the rows of A (regarded as vectors) and is denoted Row(A).

EXAMPLE 11  Let A = [...] and B = [...]. Then

Row(A) = Span{ [...], [...] }    and    Row(B) = Span{ [...], [...], [...] }

To write a mathematical definition of the rowspace of A, we require linear combinations of the rows of A. But matrix-vector multiplication only gives us a linear combination of the columns of a matrix. However, we recall that the transpose of a matrix turns rows into columns. Thus, we can precisely define the rowspace of A by

Row(A) = {A^T x ∈ R^n | x ∈ R^m}
We now prove an important result about the rowspaces of row equivalent matrices.

Theorem 4  If the m × n matrix A is row equivalent to the matrix B, then Row(A) = Row(B).

Proof: We will show that applying each of the three elementary row operations does not change the rowspace. Let the rows of A be denoted a1^T, ..., am^T and the rows of B be denoted b1^T, ..., bm^T.

Suppose that B is obtained from A by interchanging two rows of A. Then, except for the order, the rows of A are the same as the rows of B; hence Row(A) = Row(B).

Suppose that B is obtained from A by multiplying the i-th row of A by a non-zero constant t. Then,

Row(B) = Span{b1, ..., bm} = Span{a1, ..., t ai, ..., am}
       = {c1 a1 + ··· + ci(t ai) + ··· + cm am | each cj ∈ R}
       = {c1 a1 + ··· + (ci t) ai + ··· + cm am | each cj ∈ R}
       = Span{a1, ..., am} = Row(A)

Now, suppose that B is obtained from A by adding t times the i-th row to the j-th row. Then,

Row(B) = Span{b1, ..., bm} = Span{a1, ..., aj + t ai, ..., am}
       = {c1 a1 + ··· + ci ai + ··· + cj(aj + t ai) + ··· + cm am | each ck ∈ R}
       = {c1 a1 + ··· + (ci + cj t) ai + ··· + cj aj + ··· + cm am | each ck ∈ R}
       = Span{a1, ..., am} = Row(A)

By considering a sequence of elementary row operations, we see that row equivalent matrices must have the same rowspace.

Bases for Row(A), Col(A), and Null(A)

Recall from Section 1.2 that we always want to write a spanning set with as few vectors in the set as possible. We saw in Section 1.2 that this is achieved when the set is linearly independent. Thus, we defined a basis for a subspace S to be a linearly independent spanning set. Moreover, in Section 2.3, we defined the dimension of the subspace to be the number of vectors in any basis.

Basis of the Rowspace of a Matrix  We now determine how to find bases for the rowspace, columnspace, and nullspace of a matrix.

Theorem 5  Let B be the reduced row echelon form of an m × n matrix A. Then the non-zero rows of B form a basis for Row(A), and hence the dimension of Row(A) equals the rank of A.

Proof: By Theorem 4, we have Row(B) = Row(A). Hence, the non-zero rows of B form a spanning set for the rowspace of A. Thus, we just need to prove that these non-zero rows are linearly independent. Suppose that B has r non-zero rows b1^T, ..., br^T and consider

t1 b1 + ··· + tr br = 0    (3.4)

Observe that if B has a leading 1 in the j-th row, then by definition of reduced row echelon form, all other rows must have a zero in the column containing that leading 1. Hence, we must have tj = 0 in (3.4). It follows that the rows are linearly independent and thus form a basis for Row(B) = Row(A). Moreover, since there are r non-zero rows in B, the rank of A is r, and so the dimension of the rowspace equals the rank of A.

EXAMPLE 12  Let A = [...]. Find a basis for Row(A).

Solution: Row reducing A gives

[...]

Hence, by Theorem 5, a basis for Row(A) is { [...], [...] }.

EXERCISE 6

Let A = [...]. Find a basis for Row(A).

Basis of the Columnspace of a Matrix  What about a basis for the columnspace of a matrix A? It is remarkable that the same row reduction that gives the basis for the rowspace of A also indicates how to find a basis for the columnspace of A. However, the method is more subtle and requires a little more attention.

Again, let B be the reduced row echelon form of A. Recall that the whole point of the method of row reduction is that a vector x satisfies Ax = 0 if and only if it satisfies Bx = 0. That is, if we let a1, ..., an denote the columns of A and b1, ..., bn denote the columns of B, then

x1 a1 + ··· + xn an = 0    if and only if    x1 b1 + ··· + xn bn = 0

So, any statement about the linear dependence of the columns of A is true if and only if the same statement is true for the corresponding columns of B.

Theorem 6  Suppose that B is the reduced row echelon form of A. Then the columns of A that correspond to the columns of B with leading 1s form a basis of the columnspace of A. Hence, the dimension of the columnspace equals the rank of A.

Proof: For any m × n matrix B = [b1 ··· bn] in reduced row echelon form, the set of columns containing leading 1s is linearly independent as they are standard basis vectors from R^m. Moreover, if bi is a column of B that does not contain a leading 1, then, by definition of the reduced row echelon form, it can be written as a linear combination of the columns containing leading 1s. Therefore, from our argument above, the corresponding column ai in A can be written as a linear combination of the columns in A that correspond to the columns containing leading 1s in B. Thus, we can remove this column from the spanning set without changing the set it spans by Theorem 1.2.3. We continue to do this until we have removed all the columns of A that correspond to columns of B that do not have leading 1s, and we get a basis for the columnspace of A.

EXAMPLE 13  Let A = [...]. Find a basis for Col(A).

Solution: By row reducing A, we get

[...]

The first, third, and fourth columns of the reduced row echelon form of A are linearly independent. Therefore, by Theorem 6, the first, third, and fourth columns of the matrix A form a basis for Col(A). Thus, a basis for Col(A) is { [...], [...], [...] }.

Notice in Example 13 that every vector in the columnspace of the reduced row echelon form of A has a last coordinate 0, which is not true in the columnspace of A, so the two columnspaces are not equal. Thus, the first, third, and fourth columns of the reduced row echelon form of A do not form a basis for Col(A).
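Theorem 6 can be sketched in code: row reduce, record which columns receive a leading 1 (the pivot columns), and take the corresponding columns of the original A. The small row-reduction routine and the matrix below are illustrative choices, with numpy assumed.

```python
import numpy as np

def rref_pivots(A, tol=1e-10):
    """Row reduce a copy of A and return (R, pivot_column_indices)."""
    R = A.astype(float).copy()
    m, n = R.shape
    pivots, row = [], 0
    for col in range(n):
        if row == m:
            break
        # choose the largest entry in this column as pivot (partial pivoting)
        p = row + np.argmax(np.abs(R[row:, col]))
        if abs(R[p, col]) < tol:
            continue                  # no leading 1 in this column
        R[[row, p]] = R[[p, row]]     # swap rows
        R[row] /= R[row, col]         # scale to make the leading entry 1
        for r in range(m):
            if r != row:
                R[r] -= R[r, col] * R[row]   # clear the rest of the column
        pivots.append(col)
        row += 1
    return R, pivots

# Hypothetical matrix: the second column is twice the first, so only
# columns 1 and 3 (indices 0 and 2) carry leading 1s.
A = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 1.0],
              [3.0, 6.0, 1.0]])
R, pivots = rref_pivots(A)
basis = [A[:, j] for j in pivots]   # Theorem 6: pivot columns of A itself
print(pivots)   # -> [0, 2]
```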

EXERCISE 7
LetA=

[
-1

2
-1

-3

-2

Find a basis for Col(A).

There is an alternative procedure for finding a basis for the columnspace of a matrix
A, which uses the fact that the columnspace of a matrix A is equal to the rowspace of
Aᵀ. However, the basis obtained in this manner is sometimes not as useful, as it may
not consist of columns of A.

EXERCISE 8

Find a basis for the columnspace of the matrix A from Example 13 by finding a basis
for the rowspace of Aᵀ.
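Both procedures can be sketched in a few lines of SymPy. The matrix below is a sample chosen for illustration (the matrices of Example 13 and Exercise 8 are not reproduced here).

```python
# Two ways to get a basis for the columnspace, sketched with SymPy
# on a sample matrix (not the matrix of Example 13).
from sympy import Matrix

A = Matrix([[1, 2, 0],
            [2, 4, 1],
            [3, 6, 5]])

# Method 1: the columns of A corresponding to the pivot columns of the RREF.
R, pivots = A.rref()
basis_from_A = [A.col(j) for j in pivots]    # actual columns of A

# Method 2: a basis for the rowspace of A^T. This also spans Col(A),
# but its vectors are generally not columns of A.
basis_from_AT = [v.T for v in A.T.rowspace()]

print(pivots)               # pivot column indices: (0, 2)
print(len(basis_from_A))    # dim Col(A) = rank(A) = 2
```

Both methods return two vectors here, since the second column of this sample matrix is twice the first.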

Basis of the Nullspace of a Matrix

In Section 2.2 we saw that if the rank of A was r, then the general solution of the
homogeneous system Ax = 0 was automatically expressed as a spanning set of n - r
vectors. We can now show that these spanning vectors are linearly independent, so that
the dimension of the nullspace of A is n - r. Since this quantity is important, we make
the following definition.

Definition
Nullity

Let A be an m × n matrix. We call the dimension of the nullspace of A the nullity of A
and denote it by nullity(A).

Chapter 3   Matrices, Linear Mappings, and Inverses

EXAMPLE 14

Consider the homogeneous system Ax = 0, where the coefficient matrix

A = [1 2 0 4 3]
    [0 0 1 6 5]

is already in reduced row echelon form. By finding a basis for Null(A), determine the
nullity of A and relate it to rank(A).

Solution: The general solution is

x = t1(-2, 1, 0, 0, 0) + t2(-4, 0, -6, 1, 0) + t3(-3, 0, -5, 0, 1)

Thus, {(-2, 1, 0, 0, 0), (-4, 0, -6, 1, 0), (-3, 0, -5, 0, 1)} is a spanning set for the
nullspace of A. We now check for linear independence.

Let us look closely at the coordinates of the general solution x corresponding to
the free variables (x2, x4, and x5 in this example): each parameter appears with
coefficient 1 in exactly one spanning vector and 0 in the others, so the x2-coordinate
of x is t1, the x4-coordinate is t2, and the x5-coordinate is t3. Clearly, this linear
combination is the zero vector only if t1 = t2 = t3 = 0, so the
spanning vectors are linearly independent and hence form a basis for the nullspace of
A. It follows that nullity(A) = (# of columns) - rank(A). In particular, it is the
number of free variables in the system.
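The count nullity(A) = (# of columns) - rank(A) is easy to check by machine. Here is a SymPy sketch, using a matrix in reduced row echelon form with free variables x2, x4, and x5; the entries are illustrative, matching the shape of Example 14.

```python
# Checking nullity(A) = (number of columns) - rank(A) with SymPy,
# on a 2 x 5 matrix in RREF with free variables x2, x4, x5
# (entries chosen for illustration).
from sympy import Matrix

A = Matrix([[1, 2, 0, 4, 3],
            [0, 0, 1, 6, 5]])

null_basis = A.nullspace()     # one basis vector per free variable
print(len(null_basis))         # nullity(A) = 3
print(A.rank())                # rank(A) = 2
assert len(null_basis) + A.rank() == A.cols

# each basis vector really solves Ax = 0
for v in null_basis:
    assert A * v == Matrix.zeros(2, 1)
```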

Following the method in Example 14, we prove the following theorem.

Theorem 7

Let A be an m × n matrix with rank(A) = r. Then the spanning set for the general
solution of the homogeneous system Ax = 0 obtained by the method in Chapter 2
is a basis for Null(A), and the nullity of A is n - r.

Proof: Let {v1, ..., v_{n-r}} be a spanning set for the general solution of Ax = 0 obtained
in the usual manner, and consider

t1 v1 + ··· + t_{n-r} v_{n-r} = 0    (3.5)

Then the coefficients ti are just the parameters of the free variables. Thus, the coordinate
associated with the i-th parameter is non-zero only in the i-th vector. Hence, the
only possible solution of (3.5) is t1 = ··· = t_{n-r} = 0. Therefore, the set is a linearly
independent spanning set for Null(A) and thus forms a basis for Null(A). Thus, the
nullity of A is n - r, as required.

Putting Theorem 6 and Theorem 7 together gives the following important result.

Theorem 8

[Rank Theorem]
If A is any m × n matrix, then

rank(A) + nullity(A) = n
EXAMPLE 15

Find a basis for the rowspace, columnspace, and nullspace of A = [ ], and
verify the Rank Theorem in this case.

Solution: Row reducing A gives its reduced row echelon form [ ].

Thus, a basis for the rowspace of A is the set of non-zero rows of the reduced row
echelon form. Also, the first and second columns of the reduced row echelon form of
A have leading 1s, so the corresponding columns from A form a basis for the
columnspace of A. Thus, since the rank of A is equal to the dimension of the columnspace
(or rowspace), the rank of A is 2.

By back-substitution, we find that the general solution is x = t[ ], t ∈ ℝ.
Hence, a basis for the nullspace of A is [ ]. Thus, we have nullity(A) = 1, and

rank(A) + nullity(A) = 2 + 1 = 3 = n

as predicted by the Rank Theorem.

EXERCISE 9

Find a basis for the rowspace, columnspace, and nullspace of A = [ ], and
verify the Rank Theorem in this case.

A Summary of Facts About Rank

For an m × n matrix A:
the rank of A = the number of leading 1s in the reduced row echelon form of A
              = the number of non-zero rows in any row echelon form of A
              = dim Row(A)
              = dim Col(A)
              = n - dim Null(A)
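These equalities can be illustrated numerically. Here is a NumPy sketch on an arbitrary sample matrix; counting the non-negligible singular values is a standard numerical stand-in for counting the non-zero rows of a row echelon form.

```python
# Numerically illustrating the equivalent descriptions of rank
# (a sketch using NumPy on an arbitrary sample matrix).
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])

r = np.linalg.matrix_rank(A)
# dim Row(A) = dim Col(A): the rank of A equals the rank of A^T
assert r == np.linalg.matrix_rank(A.T)
# exactly r singular values are (numerically) non-zero
s = np.linalg.svd(A, compute_uv=False)
assert np.sum(s > 1e-10) == r

n = A.shape[1]
print(r, n - r)    # rank and nullity: 2 1
```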

PROBLEMS 3.4
Practice Problems

A1 Let L be the linear mapping with matrix [ ], y1 = [ ], and y2 = [ ].
(a) Is y1 in the range of L? If so, find x such that L(x) = y1.
(b) Is y2 in the range of L? If so, find x such that L(x) = y2.

A2 Find a basis for the range and a basis for the nullspace of each of the following
linear mappings.
(a) L(x1, x2, x3) = (2x1, -x2 + 2x3)
(b) M(x1, x2, x3, x4) = (x4, x3, 0, x2, x1 + x2 - x3)

A3 Determine a matrix of a linear mapping L : ℝ³ → ℝ³ whose nullspace is
Span{[ ]} and whose range is Span{[ ]}.

A4 Determine a matrix of a linear mapping L : ℝ² → ℝ³ whose nullspace is
Span{[ ]} and whose range is Span{[ ]}.

A5 Suppose that each of the following matrices is the coefficient matrix of a
homogeneous system of equations. State the following.
(i) The number of variables in the system
(ii) The rank of the matrix
(iii) The dimension of the solution space
(a) A = [ ]  (b) B = [ ]  (c) C = [ ]  (d) D = [ ]

A6 For each of the following matrices, determine a basis for the rowspace, a subset
of the columns that form a basis for the columnspace, and a basis for the
nullspace. Verify the Rank Theorem.
(a) [ ]  (b) [ ]  (c) [ ]

A7 By geometrical arguments, give a basis for the nullspace and a basis for the
range of the following linear mappings.
(a) proj_v : ℝ³ → ℝ³, where v = [ ]
(b) perp_v : ℝ³ → ℝ³, where v = [ ]
(c) refl_v : ℝ³ → ℝ³, where v = [ ]

A8 The matrix A = [ ] has reduced row echelon form R = [ ].
(a) If the rowspace of A is a subspace of some ℝⁿ, what is n?
(b) Without calculation, give a basis for Row(A); outline the theory that
explains why this is a basis.
(c) If the columnspace of A is a subspace of some ℝᵐ, what is m?
(d) Without calculation, give a basis for the columnspace of A. Why is this the
required basis?
(e) Determine the general solution of the system Ax = 0 and, hence, obtain a
spanning set for the solution space.
(f) Explain why the spanning set in (e) is in fact a basis for the solution space.
(g) Verify that the rank of A plus the dimension of the solution space of the
system Ax = 0 is equal to the number of variables in the system.

Homework Problems

B1 Let L be the linear mapping with matrix [ ], y1 = [ ], and y2 = [ ].
(a) Is y1 in the range of L? If so, find x such that L(x) = y1.
(b) Is y2 in the range of L? If so, find x such that L(x) = y2.

B2 Find a basis for the range and a basis for the nullspace of each of the following
linear mappings.
(a) L(x1, x2) = (x1, x1 + 2x2, 3x2)
(b) M(x1, x2, x3, x4) = (x1 + x4, x2 - 2x3, x1 - 2x2 + 3x3)

B3 Determine a matrix of a linear mapping L : ℝ² → ℝ² whose nullspace is
Span{[ ]} and whose range is Span{[ ]}.

B4 Determine a matrix of a linear mapping L : ℝ² → ℝ³ whose nullspace is
Span{[ ]} and whose range is Span{[ ]}.

B5 Suppose that each of the following matrices is the coefficient matrix of a
homogeneous system of equations. State the following.
(i) The number of variables in the system
(ii) The rank of the matrix
(iii) The dimension of the solution space
(a) A = [ ]  (b) B = [ ]  (c) C = [ ]  (d) D = [ ]

B6 For each of the following matrices, determine a basis for the rowspace, a subset
of the columns that form a basis for the columnspace, and a basis for the
nullspace. Verify the Rank Theorem.
(a) [ ]  (b) [ ]  (c) [ ]

B7 By geometrical arguments, give a basis for the nullspace and a basis for the
range of the following linear mappings.
(a) proj_v : ℝ³ → ℝ³, where v = [ ]
(b) perp_v : ℝ³ → ℝ³, where v = [ ]
(c) refl_v : ℝ³ → ℝ³, where v = [ ]

B8 The matrix A = [ ] has reduced row echelon form R = [ ].
(a) If the rowspace of A is a subspace of some ℝⁿ, what is n?
(b) Without calculation, give a basis for Row(A); outline the theory that
explains why this is a basis.
(c) If the columnspace of A is a subspace of some ℝᵐ, what is m?
(d) Without calculation, give a basis for the columnspace of A. Why is this the
required basis?
(e) Determine the general solution of the system Ax = 0 and, hence, obtain a
spanning set for the solution space.
(f) Explain why the spanning set in (e) is in fact a basis for the solution space.
(g) Verify that the rank of A plus the dimension of the solution space of the
system Ax = 0 is equal to the number of variables in the system.

Conceptual Problems

D1 Let L : ℝⁿ → ℝᵐ be a linear mapping. Prove that
dim(Range(L)) + dim(Null(L)) = n.

D2 Suppose that L : ℝⁿ → ℝᵐ and M : ℝᵐ → ℝᵖ are linear mappings.
(a) Show that the range of M ∘ L is a subspace of the range of M.
(b) Give an example such that the range of M ∘ L is not equal to the range of M.
(c) Show that the nullspace of L is a subspace of the nullspace of M ∘ L.

D3 (a) If A is a 5 × 7 matrix and rank(A) = 4, then what is the nullity of A, and
what is the dimension of the columnspace of A?
(b) If A is a 5 × 4 matrix, then what is the largest possible dimension of the
nullspace of A? What is the largest possible rank of A?
(c) If A is a 4 × 5 matrix and nullity(A) = 3, then what is the dimension of the
rowspace of A?

D4 Let A be an n × n matrix such that A² = O_{n,n}. Prove that the columnspace of A
is a subset of the nullspace of A.

3.5 Inverse Matrices and Inverse Mappings


In this section we will look at inverses of matrices and linear mappings. We will make
many connections with the material we have covered so far and provide useful tools
for the material contained in the rest of the book.
Definition
Inverse

Let A be an n × n matrix. If there exists an n × n matrix B such that AB = I = BA, then
A is said to be invertible, and B is called the inverse of A (and A is the inverse of B).
The inverse of A is denoted A⁻¹.

EXAMPLE 1

The matrix [ ] is the inverse of the matrix [ ] because

[ ][ ] = [ ] = I   and   [ ][ ] = [ ] = I

Notice that in the definition, B is the inverse of A. This depends on the easily
proven fact that the inverse is unique.

Theorem 1

Let A be a square matrix and suppose that BA = AB = I and CA = AC = I. Then
B = C.

Proof: We have B = BI = B(AC) = (BA)C = IC = C.

Remark
Note that the proof uses less than the full assumptions of the theorem: we have proven
that if BA = I = AC, then B = C. Sometimes we say that if BA = I, then B is a
"left inverse" of A. Similarly, if AC = I, then C is a "right inverse" of A. The proof
shows that for a square matrix, any left inverse must be equal to any right inverse.
However, non-square matrices may have only a right inverse or a left inverse, but not
both (see Problem D4). We will now show that for square matrices, a right inverse is
automatically a left inverse.

Theorem 2

Suppose that A and B are n × n matrices such that AB = I. Then BA = I, so that
B = A⁻¹. Moreover, B and A have rank n.

Proof: We first show, by contradiction, that rank(B) = n. Suppose that B has rank
less than n. Then, by Theorem 2.2.2, the homogeneous system Bx = 0 has non-trivial
solutions. But this means that for some non-zero x, ABx = A(Bx) = A0 = 0. So,
AB is certainly not equal to I, which contradicts our assumption. Hence, B must have
rank n.

Since B has rank n, the non-homogeneous system Bx = y is consistent for every
y ∈ ℝⁿ by Theorem 2.2.2. Now consider

BAy = BA(Bx) = B(AB)x = BIx = Bx = y

Thus, BAy = y for every y ∈ ℝⁿ, so BA = I by Theorem 3.1.4. Therefore, AB = I and
BA = I, so that B = A⁻¹.

Since we have BA = I, we see that rank(A) = n, by the same argument we used to
prove rank(B) = n.

Theorem 2 makes it very easy to prove some useful properties of the matrix
inverse. In particular, to show that A⁻¹ = B, we only need to show that AB = I.

Theorem 3

Suppose that A and B are invertible matrices and that t is a non-zero real number.
(1) (tA)⁻¹ = (1/t)A⁻¹
(2) (AB)⁻¹ = B⁻¹A⁻¹
(3) (Aᵀ)⁻¹ = (A⁻¹)ᵀ

Proof: We have

(tA)((1/t)A⁻¹) = t(1/t)AA⁻¹ = 1·I = I
(AB)(B⁻¹A⁻¹) = A(BB⁻¹)A⁻¹ = AIA⁻¹ = AA⁻¹ = I
(Aᵀ)(A⁻¹)ᵀ = (A⁻¹A)ᵀ = Iᵀ = I
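The three identities of Theorem 3 are easy to spot-check numerically. A NumPy sketch on sample matrices chosen to be safely invertible:

```python
# Numerical check of the inverse identities (tA)^-1 = (1/t)A^-1,
# (AB)^-1 = B^-1 A^-1, and (A^T)^-1 = (A^-1)^T on random matrices.
import numpy as np

rng = np.random.default_rng(0)
# adding 3I makes the random matrices comfortably invertible
A = rng.standard_normal((3, 3)) + 3 * np.eye(3)
B = rng.standard_normal((3, 3)) + 3 * np.eye(3)
t = 2.5

inv = np.linalg.inv
assert np.allclose(inv(t * A), (1 / t) * inv(A))
assert np.allclose(inv(A @ B), inv(B) @ inv(A))   # note the reversed order
assert np.allclose(inv(A.T), inv(A).T)
print("all three identities hold")
```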

A Procedure for Finding the Inverse of a Matrix

For any given square matrix A, we would like to determine whether it has an inverse
and, if so, what the inverse is. Fortunately, one procedure answers both questions. We
begin by trying to solve the matrix equation AX = I for the unknown square matrix
X. If a solution X can be found, then X = A⁻¹ by Theorem 2. On the other hand, if no
such matrix X can be found, then A is not invertible.

To keep it simple, the procedure is examined in the case where A is 3 × 3, but it
should be clear that it can be applied to any square matrix. Write the matrix equation
AX = I in the form

A[x1 x2 x3] = [e1 e2 e3]

Hence, we have Ax1 = e1, Ax2 = e2, and Ax3 = e3.

So, it is necessary to solve three systems of equations, one for each column of X. Note
that each system has a different standard basis vector as its right-hand side, but all
have the same coefficient matrix. Since the solution procedure for systems of equations
requires that we row reduce the coefficient matrix, we might as well write out a "triple
augmented matrix" and solve all three systems at once. Therefore, write

[A | e1 e2 e3] = [A | I]

and row reduce to reduced row echelon form to solve.

Suppose that A is row equivalent to I, and call the resulting block on the right B,
so that the reduction gives

[A | I] → [I | B]

Now, we must interpret the final matrix by breaking it up into three systems:
[I | b1], [I | b2], and [I | b3]. In particular, Ab1 = e1, Ab2 = e2, and Ab3 = e3. It
follows that the first column of the desired matrix X is b1, the second column of X is
b2, and so on. Thus, A is invertible and B = A⁻¹.

If the reduced row echelon form of A is not I, then rank(A) < n. Hence, A is not
invertible, since Theorem 2 tells us that if A is invertible, then rank(A) = n.

First, we summarize the procedure and give an example.

Algorithm 1
Finding A⁻¹

To find the inverse of a square matrix A,
(1) Row reduce the multi-augmented matrix [A | I] so that the left block is in
reduced row echelon form.
(2) If the reduced row echelon form is [I | B], then A⁻¹ = B.
(3) If the reduced row echelon form of A is not I, then A is not invertible.
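Algorithm 1 translates almost line for line into code. The sketch below adds partial pivoting for numerical stability; the function name and tolerance are choices of this sketch, not part of the text.

```python
# A minimal implementation of Algorithm 1: row reduce [A | I] and read
# off A^-1 from the right block. Assumes a square matrix of floats.
import numpy as np

def inverse_by_row_reduction(A, tol=1e-12):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])            # the multi-augmented matrix [A | I]
    for j in range(n):
        p = j + np.argmax(np.abs(M[j:, j]))  # pivot row (partial pivoting)
        if abs(M[p, j]) < tol:
            return None                      # RREF of A is not I: not invertible
        M[[j, p]] = M[[p, j]]                # swap rows
        M[j] /= M[j, j]                      # scale pivot row to get a leading 1
        for i in range(n):
            if i != j:
                M[i] -= M[i, j] * M[j]       # eliminate the rest of column j
    return M[:, n:]                          # the right block is A^-1

A = np.array([[1., 1., 1.],
              [1., 2., 2.],
              [1., 2., 3.]])
Ainv = inverse_by_row_reduction(A)
print(np.allclose(A @ Ainv, np.eye(3)))                          # True
print(inverse_by_row_reduction(np.array([[1., 2.], [2., 4.]])))  # None
```

The second call returns None because the 2 × 2 matrix has rank 1, so its reduced row echelon form is not I.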

EXAMPLE 2

Determine whether A = [ ] is invertible, and if it is, determine its inverse.

Solution: Write the multi-augmented matrix [A | I] and row reduce: [ ].

Hence, A is invertible and A⁻¹ = [ ].

You should check that the inverse has been correctly calculated by verifying that
AA⁻¹ = I.

EXAMPLE 3

Determine whether A = [ ] is invertible, and if it is, determine its inverse.

Solution: Write the matrix [A | I] and row reduce: [ ].

Hence, A is not invertible.

EXERCISE 1

Determine whether A = [ ] is invertible, and if it is, determine its inverse.

Some Facts About Square Matrices and Solutions of Linear Systems

In Theorem 2 and in the description of the procedure for finding the inverse matrix, we
used some facts about systems of equations with square matrices. It is worth stating
them clearly as a theorem. Most of the conclusions are simply special cases of previous
results.

Theorem 4

[Invertible Matrix Theorem]
Suppose that A is an n × n matrix. Then the following statements are equivalent (that
is, one is true if and only if each of the others is true).
(1) A is invertible.
(2) rank(A) = n.
(3) The reduced row echelon form of A is I.
(4) For all b ∈ ℝⁿ, the system Ax = b is consistent and has a unique solution.
(5) The columns of A are linearly independent.
(6) The columnspace of A is ℝⁿ.

Proof: (We use the "implication arrow": P ⇒ Q means "if P, then Q." It is common
in proving a theorem such as this to prove (1) ⇒ (2) ⇒ (3) ⇒ (4) ⇒ (5) ⇒ (6) ⇒ (1),
so that any statement implies any other.)

(1) ⇒ (2): This is the second part of Theorem 2.
(2) ⇒ (3): This follows immediately from the definition of rank and the fact that A is
n × n.
(3) ⇒ (4): This follows immediately from Theorem 2.2.2.
(4) ⇒ (5): Assume that the only solution to Ax = 0 is the trivial solution. Hence, if
A = [a1 ··· an], then 0 = Ax = x1a1 + ··· + xnan has only the solution x1 = ··· =
xn = 0. Thus, the columns a1, ..., an of A are linearly independent.
(5) ⇒ (6): If the columns of A are linearly independent, then Ax = 0 has a unique
solution, so the rank of A is n. Thus, Ax = b is consistent for all b ∈ ℝⁿ, and so
Col(A) = ℝⁿ.
(6) ⇒ (1): If Col(A) = ℝⁿ, then Axi = ei is consistent for 1 ≤ i ≤ n. Thus, A is
invertible.

Amongst other things, this theorem tells us that if a matrix A is invertible, then
the system Ax = b is consistent with a unique solution. However, the way we proved
the theorem does not immediately tell us how to find the unique solution. We now
demonstrate this.

Let A be an invertible square matrix and consider the system Ax = b. Multiplying
both sides of the equation by A⁻¹ gives A⁻¹Ax = A⁻¹b and hence x = A⁻¹b.

EXAMPLE 4

Let A = [ ]. Find the solution of Ax = [ ].

Solution: By Example 1, A⁻¹ = [ ]. Thus, the solution of Ax = [ ] is x = A⁻¹[ ] = [ ].

EXERCISE 2

Let A = [ ] and b = (14, 7). Determine A⁻¹ and use it to find the solution of Ax = b.

It likely seems very inefficient to solve Exercise 2 by the method described. One
would think that simply row reducing the augmented matrix of the system would make
more sense. However, observe that if we wanted to solve many systems of equations
with the same coefficient matrix A, we would need to compute A⁻¹ only once, and then
each system can be solved by a simple matrix-vector multiplication.
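The point about reuse can be made concrete: invert once, then solve each system with one multiplication. A NumPy sketch with sample data (in serious floating-point work, an LU factorization is usually preferred over an explicit inverse):

```python
# Solving many systems that share one coefficient matrix:
# compute A^-1 once, then each solution is one matrix-vector product.
import numpy as np

A = np.array([[2., 1.],
              [5., 3.]])
Ainv = np.linalg.inv(A)        # computed once

rhs_list = [np.array([1., 0.]),
            np.array([0., 1.]),
            np.array([4., 11.])]
for b in rhs_list:
    x = Ainv @ b               # one multiplication per system
    assert np.allclose(A @ x, b)

print(Ainv @ np.array([4., 11.]))   # -> [1. 2.]
```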

Remark
It might seem surprising at first that we can solve a system of linear equations without
performing any elementary row operations and instead just using matrix multiplication.
Of course, with some thought, one realizes that the elementary row operations
are "contained" inside the inverse of the matrix (which we obtained by row reducing).
In the next section, we will see more of the connection between matrix multiplication
and row reducing.

Inverse Linear Mappings

It is useful to introduce the inverse of a linear mapping here because many geometrical
transformations provide nice examples of inverses. Note that just as the inverse matrix
is defined only for square matrices, the inverse of a linear mapping is defined only
for linear operators. Recall that the identity transformation Id is the linear mapping
defined by Id(x) = x for all x.

Definition
Inverse Mapping

If L : ℝⁿ → ℝⁿ is a linear mapping and there exists another linear mapping M : ℝⁿ →
ℝⁿ such that M ∘ L = Id = L ∘ M, then L is said to be invertible, and M is called the
inverse of L, usually denoted L⁻¹.

Observe that if M is the inverse of L and L(v) = w, then

M(w) = M(L(v)) = (M ∘ L)(v) = Id(v) = v

Similarly, if M(w) = v, then

L(v) = L(M(w)) = (L ∘ M)(w) = Id(w) = w

So we have L(v) = w if and only if M(w) = v.

Theorem 5

Suppose that L : ℝⁿ → ℝⁿ is a linear mapping with standard matrix [L] = A and
that M : ℝⁿ → ℝⁿ is a linear mapping with standard matrix [M] = B. Then M is the
inverse of L if and only if B is the inverse of A.

Proof: By Theorem 3.2.5, [M ∘ L] = [M][L]. Hence, L ∘ M = Id = M ∘ L if and only
if AB = I = BA.

For many of the geometrical transformations of Section 3.3, an inverse transformation
is easily found by geometrical arguments, and these provide many examples of
inverse matrices.

EXAMPLE 5

For each of the following geometrical transformations, determine the inverse
transformation. Verify that the product of the standard matrix of the transformation and its
inverse is the identity matrix.
(a) The rotation R_θ of the plane
(b) In the plane, a stretch T by a factor of t in the x1-direction

Solution: (a) The inverse transformation is to just rotate by -θ. That is, (R_θ)⁻¹ = R_{-θ}.
We have

[R_{-θ}] = [cos(-θ)  -sin(-θ)]  =  [ cos θ  sin θ]
           [sin(-θ)   cos(-θ)]     [-sin θ  cos θ]

since sin(-θ) = -sin θ and cos(-θ) = cos θ. Hence,

[R_θ][R_{-θ}] = [cos θ  -sin θ][ cos θ  sin θ]
                [sin θ   cos θ][-sin θ  cos θ]

              = [cos²θ + sin²θ               cos θ sin θ - cos θ sin θ]
                [sin θ cos θ - sin θ cos θ   sin²θ + cos²θ            ]

              = [1  0]  =  I
                [0  1]

(b) The inverse transformation T⁻¹ is a stretch by a factor of 1/t in the x1-direction:

[T] = [t  0],   [T⁻¹] = [1/t  0],   and   [T][T⁻¹] = [1  0]  =  I
      [0  1]            [ 0   1]                     [0  1]

EXERCISE 3

For each of the following geometrical transformations, determine the inverse
transformation. Verify that the product of the standard matrix of the transformation and its
inverse is the identity matrix.
(a) A reflection over the line x2 = x1 in the plane
(b) A shear in the plane by a factor of t in the x1-direction

Observe that if y ∈ ℝⁿ is in the domain of the inverse M, then it must be in the
range of the original L. Therefore, it follows that if L has an inverse, the range of L
must be all of the codomain ℝⁿ. Moreover, if L(x1) = L(x2), then by applying M
to both sides, we have x1 = x2. Hence, we have shown that for any y ∈ ℝⁿ, there
exists a unique x ∈ ℝⁿ such that L(x) = y. This property is the linear mapping version
of statement (4) of Theorem 4 about square matrices.
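The inverses found by the geometrical arguments of Example 5 can be checked numerically. A NumPy sketch for the rotation of part (a) and the stretch of part (b), at sample values of θ and t:

```python
# Verifying [R_theta][R_{-theta}] = I and [T][T^-1] = I numerically.
import numpy as np

def rotation(theta):
    """Standard matrix of the rotation of the plane through angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

theta = 0.7
assert np.allclose(rotation(theta) @ rotation(-theta), np.eye(2))

# the stretch by a factor t in the x1-direction and its inverse
t = 4.0
S = np.array([[t, 0.], [0., 1.]])
S_inv = np.array([[1. / t, 0.], [0., 1.]])
assert np.allclose(S @ S_inv, np.eye(2))

print("inverses verified")
```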

Theorem 6

[Invertible Matrix Theorem, continued]
Suppose that L : ℝⁿ → ℝⁿ is a linear mapping with standard matrix A = [L].
Then, the following statements are equivalent to each other and to the statements of
Theorem 4.
(7) L is invertible.
(8) Range(L) = ℝⁿ.
(9) Null(L) = {0}.

Proof: (We use P ⇔ Q to mean "P if and only if Q.")

(1) ⇔ (7): This is Theorem 5.
(4) ⇔ (8): Since L(x) = Ax, we know that for every b ∈ ℝⁿ there exists an x ∈ ℝⁿ such
that L(x) = b if and only if Ax = b is consistent for every b ∈ ℝⁿ.
(7) ⇒ (9): Assume that L is invertible. Hence, there is a linear mapping L⁻¹ such that
L⁻¹(L(x)) = x. If x ∈ Null(L), then L(x) = 0 and then L⁻¹(L(x)) = L⁻¹(0). But this
gives

x = L⁻¹(0) = 0

since L⁻¹ is linear. Thus, Null(L) = {0}.
(9) ⇒ (3): Assume that Null(L) = {0}. Then L(x) = Ax = 0 has a unique solution.
Thus, rank(A) = n, by Theorem 2.2.2.

Remark
It is possible to give many alternate proofs of the equivalences in Theorem 4 and
Theorem 6. You should be able to start with any one of the nine statements and show
that it implies any of the other eight.

EXAMPLE 6

Prove that the linear mapping proj_v is not invertible for any v ∈ ℝⁿ, n ≥ 2.

Solution: By definition, proj_v(x) = tv for some t ∈ ℝ. Hence, any vector y ∈ ℝⁿ
that is not a scalar multiple of v is not in the range of proj_v. Thus, Range(proj_v) ≠ ℝⁿ,
and hence proj_v is not invertible, by Theorem 6.

EXAMPLE 7

Prove that the linear mapping L : ℝ³ → ℝ³ defined by L(x1, x2, x3) = (2x1 + x2, x3, x2 - 2x3)
is invertible.

Solution: Assume that x is in the nullspace of L. Then L(x) = 0, so by definition of
L, we have

2x1 + x2 = 0
      x3 = 0
x2 - 2x3 = 0

The only solution to this system is x1 = x2 = x3 = 0. Thus, Null(L) = {0} and hence L
is invertible, by Theorem 6.

Finally, recall that the matrix condition AB = BA = I implies that the matrix
inverse can be defined only for square matrices. Here is an example that illustrates for
linear mappings that the domain and codomain of L must be the same if it is to have
an inverse.

EXAMPLE 8

Consider the linear mappings P : ℝ⁴ → ℝ³ defined by P(x1, x2, x3, x4) = (x1, x2, x3)
and inj : ℝ³ → ℝ⁴ defined by inj(x1, x2, x3) = (x1, x2, x3, 0).

It is easy to see that P ∘ inj = Id but that inj ∘ P ≠ Id. Thus, P is not an inverse for
inj. Notice that P satisfies the condition that its range is all of its codomain, but it fails
the condition that its nullspace is trivial. On the other hand, inj satisfies the condition
that its nullspace is trivial but fails the condition that its range is all of its codomain.
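Example 8 can be replayed with the standard matrices of P and inj; a NumPy sketch:

```python
# One-sided inverses for non-square matrices: P drops the last
# coordinate, inj appends a zero. P o inj = Id on R^3, but
# inj o P is not the identity on R^4.
import numpy as np

P = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 1., 0.]])           # 3 x 4: P(x1,x2,x3,x4) = (x1,x2,x3)
inj = P.T                                  # 4 x 3: inj(x1,x2,x3) = (x1,x2,x3,0)

assert np.allclose(P @ inj, np.eye(3))     # P o inj = Id on R^3
assert not np.allclose(inj @ P, np.eye(4)) # inj o P zeroes the 4th coordinate

print((inj @ P) @ np.array([1., 2., 3., 4.]))   # -> [1. 2. 3. 0.]
```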

PROBLEMS 3.5
Practice Problems

A1 For each of the following matrices, either show that the matrix is not invertible
or find its inverse. Check by multiplication.
(a) [ ]  (b) [ ]  (c) [ ]  (d) [ ]  (e) [ ]  (f) [ ]

A2 Let B = [ ]. Use B⁻¹ to find the solutions of Bx = b for each of the following.
(a) b = [ ]  (b) b = [ ]  (c) b = [ ]

A3 Let A = [ ] and B = [ ].
(a) Find A⁻¹ and B⁻¹.
(b) Calculate AB and (AB)⁻¹, and check that (AB)⁻¹ = B⁻¹A⁻¹.
(c) Calculate (3A)⁻¹ and check that it equals (1/3)A⁻¹.
(d) Calculate (Aᵀ)⁻¹ and check that Aᵀ(Aᵀ)⁻¹ = I.

A4 By geometrical arguments, determine the inverse of each of the following matrices.
(a) The matrix of the rotation R_{π/6} in the plane.
(b) [ ]  (c) [ ]  (d) [ ]

A5 The mappings in this problem are from ℝ² to ℝ².
(a) Determine the matrix of the shear S by a factor of 2 in the x2-direction and
the matrix of S⁻¹.
(b) Determine the matrix of the reflection R in the line x1 - x2 = 0 and the
matrix of R⁻¹.
(c) Determine the matrix of (R ∘ S)⁻¹ and the matrix of (S ∘ R)⁻¹ (without
determining the matrices of R ∘ S and S ∘ R).

A6 Suppose that L : ℝⁿ → ℝⁿ is a linear mapping and M : ℝⁿ → ℝⁿ is a function
(not assumed to be linear) such that x = M(y) if and only if y = L(x). Show that
M is also linear.

Homework Problems

B1 For each of the following matrices, either show that the matrix is not invertible
or find its inverse. Check by multiplication.
(a) [ ]  (b) [ ]  (c) [ ]  (d) [ ]  (e) [ ]  (f) [ ]  (g) [ ]  (h) [ ]  (i) [ ]  (j) [ ]

B2 Let A = [ ].
(a) Find A⁻¹.
(b) Use (a) to solve Ax = b if
(i) b = [ ]  (ii) b = [ ]  (iii) b = [ ]

B3 Let A = [ ] and B = [ ].
(a) Find A⁻¹ and B⁻¹.
(b) Calculate AB and (AB)⁻¹, and check that (AB)⁻¹ = B⁻¹A⁻¹.
(c) Calculate (5A)⁻¹ and check that it equals (1/5)A⁻¹.
(d) Calculate (Aᵀ)⁻¹ and check that Aᵀ(Aᵀ)⁻¹ = I.

B4 By geometrical arguments, determine the inverse of each of the following matrices.
(a) The matrix of the rotation R_{π/4} in the plane.
(b) [ ]  (c) [ ]  (d) [ ]

B5 The mappings in this problem are from ℝ² to ℝ².
(a) Determine the matrix of the stretch S by a factor of 3 in the x2-direction and
the matrix of S⁻¹.
(b) Determine the matrix of the reflection R in the line x1 + x2 = 0 and the
matrix of R⁻¹.
(c) Determine the matrix of (R ∘ S)⁻¹ and the matrix of (S ∘ R)⁻¹ (without
determining the matrices of R ∘ S and S ∘ R).

B6 For each of the following pairs of linear mappings from ℝ³ to ℝ³, determine the
matrices [R⁻¹], [S⁻¹], and [(R ∘ S)⁻¹].
(a) R is the rotation about the x1-axis through angle π/2, and S is the stretch by
a factor of 0.5 in the x3-direction.
(b) R is the reflection in the plane x1 - x3 = 0, and S is a shear by a factor of 0.4
in the x3-direction in the x1x3-plane.

Computer Problems

C1 Let A = [ ].
(a) Use computer software to calculate A⁻¹.
(b) Use computer software to calculate the inverse of A⁻¹. Explain your answer.

Conceptual Problems

D1 Determine an expression in terms of A⁻¹ and B⁻¹ for ((AB)ᵀ)⁻¹.

D2 (a) Suppose that A is an n × n matrix such that A³ = I. Find an expression for
A⁻¹ in terms of A. (Hint: Find X such that AX = I.)
(b) Suppose that B satisfies B⁵ + B³ + B = I. Find an expression for B⁻¹ in
terms of B.

D3 Prove that if A and B are square matrices such that AB is invertible, then A and
B are invertible.

D4 Let A = [ ].
(a) Show that A has a right inverse by finding a matrix B such that AB = I.
(b) Show that there cannot exist a matrix C such that CA = I. Hence, A cannot
have a left inverse.

D5 Prove that the following are equivalent for an n × n matrix A.
(1) A is invertible.
(2) Null(A) = {0}.
(3) The rows of A are linearly independent.
(4) Aᵀ is invertible.


3.6 Elementary Matrices

In Sections 3.1 and 3.5, we saw some connections between matrix-vector multiplication
and systems of linear equations. In Section 3.2, we observed the connection between
linear mappings and matrix-vector multiplication. Since matrix multiplication
is a direct extension of matrix-vector multiplication, it should not be surprising that
there is a connection between matrix multiplication, systems of linear equations, and
linear mappings. We examine this connection through the use of elementary matrices.

Definition
Elementary Matrix

A matrix that can be obtained from the identity matrix by a single elementary row
operation is called an elementary matrix.

Note that it follows from the definition that an elementary matrix must be square.

EXAMPLE 1

E1 = [1 t; 0 1] is the elementary matrix obtained from I2 by adding the product of t times
the second row to the first (a single elementary row operation). Observe that E1 is the
matrix of a shear in the x1-direction by a factor of t.

E2 = [1 0; 0 t] is the elementary matrix obtained from I2 by multiplying row 2 by the
non-zero scalar t. E2 is the matrix of a stretch in the x2-direction by a factor of t.

E3 = [0 1; 1 0] is the elementary matrix obtained from I2 by swapping row 1 and row 2,
and it is the matrix of a reflection over the line x2 = x1 in the plane.

As in Example 1, it can be shown that every n × n elementary matrix is the standard
matrix of a shear, a stretch, or a reflection. The following theorem tells us that
elementary matrices also represent elementary row operations. Hence, performing shears,
stretches, and reflections; multiplying by elementary matrices; and using elementary
row operations all accomplish the same thing.

Theorem 1

If A is an n × n matrix and E is the elementary matrix obtained from In by a certain
elementary row operation, then the product EA is the matrix obtained from A by
performing the same elementary row operation.

It would be tedious to write the proof in the general n × n case. Instead, we illustrate
why this works by verifying the conclusion for some simple cases for 3 × 3 matrices.

Case 1. Consider the elementary row operation of adding k times row 3 to row 1. The
corresponding elementary matrix is

E = [1 0 k]
    [0 1 0]
    [0 0 1]

Then,

[a11 a12 a13]              [a11 + k·a31  a12 + k·a32  a13 + k·a33]
[a21 a22 a23]  R1 + kR3 →  [a21          a22          a23        ]
[a31 a32 a33]              [a31          a32          a33        ]

while

     [1 0 k][a11 a12 a13]   [a11 + k·a31  a12 + k·a32  a13 + k·a33]
EA = [0 1 0][a21 a22 a23] = [a21          a22          a23        ]
     [0 0 1][a31 a32 a33]   [a31          a32          a33        ]

Case 2. Consider the elementary row operation of swapping row 2 with row 3, which
has the corresponding elementary matrix

E = [1 0 0]
    [0 0 1]
    [0 1 0]

Then,

[a11 a12 a13]            [a11 a12 a13]
[a21 a22 a23]  R2 ↔ R3 → [a31 a32 a33]
[a31 a32 a33]            [a21 a22 a23]

while

     [1 0 0][a11 a12 a13]   [a11 a12 a13]
EA = [0 0 1][a21 a22 a23] = [a31 a32 a33]
     [0 1 0][a31 a32 a33]   [a21 a22 a23]

Again, the conclusion of Theorem 1 is verified.
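Both cases (and the scaling operation of Exercise 1 below) can also be verified by machine. A NumPy sketch on a random 3 × 3 matrix:

```python
# Multiplying on the left by an elementary matrix performs the
# corresponding row operation (Theorem 1), checked numerically.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
k = 2.0

# E for "add k times row 3 to row 1"
E_add = np.eye(3)
E_add[0, 2] = k
B = A.copy()
B[0] += k * B[2]
assert np.allclose(E_add @ A, B)

# E for "swap row 2 and row 3" (rows of the identity, permuted)
E_swap = np.eye(3)[[0, 2, 1]]
assert np.allclose(E_swap @ A, A[[0, 2, 1]])

# E for "multiply row 2 by a non-zero constant"
E_scale = np.diag([1., 5., 1.])
C = A.copy()
C[1] *= 5.
assert np.allclose(E_scale @ A, C)

print("Theorem 1 verified on this example")
```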

EXERCISE 1

Verify that Theorem 1 also holds for the elementary row operation of multiplying the
second row by a non-zero constant for 3 × 3 matrices.

Theorem 2

For any m × n matrix A, there exists a sequence of elementary matrices
E1, E2, ..., Ek such that Ek ··· E2E1A is equal to the reduced row echelon form
of A.

Proof: From our work in Chapter 2, we know that there is a sequence of elementary
row operations to bring A into its reduced row echelon form. Call the elementary
matrix corresponding to the first operation E1, the elementary matrix corresponding
to the second operation E2, and so on, until the final elementary row operation
corresponds to Ek. Then, by Theorem 1, E1A is the matrix obtained by performing the
first elementary row operation on A, E2E1A is the matrix obtained by performing the
second elementary row operation on E1A (that is, performing the first two elementary
row operations on A), and Ek ··· E2E1A is the matrix obtained after performing all of
the elementary row operations on A, in the specified order.

EXAMPLE 2

Let A = [1 1]. Find a sequence of elementary matrices E1, ..., Ek such that
        [2 4]
Ek ... E1A is the reduced row echelon form of A.

Solution: We row reduce A, keeping track of our elementary row operations:

    [1 1]  R2 - 2R1  [1 1]  (1/2)R2  [1 1]  R1 - R2  [1 0]
    [2 4]  ------->  [0 2]  ------>  [0 1]  ------>  [0 1]

The first elementary row operation is R2 - 2R1, so E1 = [ 1 0].
                                                        [-2 1]

The second elementary row operation is (1/2)R2, so E2 = [1  0 ].
                                                        [0 1/2]

Chapter 3   Matrices, Linear Mappings, and Inverses

EXAMPLE 2
(continued)

The third elementary row operation is R1 - R2, so E3 = [1 -1].
                                                       [0  1]
Thus,

    E3E2E1A = [1 -1][1  0 ][ 1 0][1 1] = [1 0]
              [0  1][0 1/2][-2 1][2 4]   [0 1]

Remark
We know that the elementary matrices in Example 2 must be 2 x 2 for two reasons.
First, we had only two rows in A to perform elementary row operations on, so this
must be the same with the corresponding elementary matrices. Second, for the matrix
multiplication E1A to be defined, we know that the number of columns in E1 must be
equal to the number of rows in A. Also, E1 is square since it is elementary.

EXERCISE 2

Let A = [ ]. Find a sequence of elementary matrices E1, ..., Ek such that
Ek ... E1A is the reduced row echelon form of A.

In the special case where A is an invertible square matrix, the reduced row echelon
form of A is I. Hence, by Theorem 2, there exists a sequence of elementary row
operations such that Ek ... E1A = I. Thus, the matrix B = Ek ... E1 satisfies BA = I,
so B is the inverse of A. Observe that this result corresponds exactly to two facts we
observed in Section 3.5. First, it demonstrates our procedure for finding the inverse of
a matrix by row reducing [A | I]. Second, it shows us that solving a system Ax = b
by row reducing or by computing x = A^(-1) b yields the same result.
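As a small concrete illustration of BA = I, the sketch below (our own example matrix, with exact fractions; the three elementary matrices were found by reducing A by hand) multiplies the elementary matrices together and checks that the product is the inverse of A.

```python
from fractions import Fraction

def matmul(X, Y):
    # Standard matrix product for list-of-lists matrices
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

F = Fraction
I2 = [[F(1), F(0)], [F(0), F(1)]]
A  = [[F(1), F(2)], [F(3), F(4)]]

# Row operations reducing A to I, found by hand for this A:
E1 = [[F(1), F(0)], [F(-3), F(1)]]      # R2 - 3R1
E2 = [[F(1), F(0)], [F(0), F(-1, 2)]]   # (-1/2)R2
E3 = [[F(1), F(-2)], [F(0), F(1)]]      # R1 - 2R2

B = matmul(E3, matmul(E2, E1))          # B = E3 E2 E1
assert matmul(B, A) == I2               # so B = A^(-1)
```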

Finally, observe that elementary row operations are invertible since they are
reversible, and thus reflections, shears, and stretches are invertible. Moreover, since the
reverse operation of an elementary row operation is another elementary row operation,
the inverse of an elementary matrix is another elementary matrix. We use this to prove
the following important result.

Theorem 3

If an n x n matrix A has rank n, then it may be represented as a product of elementary
matrices.

Proof: By Theorem 2, there exists a sequence of elementary row operations such that
Ek ... E1A = I. Since Ek is invertible, we can multiply both sides on the left by (Ek)^(-1)
to get

    (Ek)^(-1) Ek Ek-1 ... E1A = (Ek)^(-1) I

or

    Ek-1 ... E1A = (Ek)^(-1)

Next, we multiply both sides by (Ek-1)^(-1) to get

    Ek-2 ... E1A = (Ek-1)^(-1) (Ek)^(-1)

We continue to multiply by the inverse of the elementary matrix on the left until the
equation becomes

    A = (E1)^(-1) (E2)^(-1) ... (Ek)^(-1)

Thus, since the inverse of an elementary matrix is elementary, we have written A as a
product of elementary matrices.

Remark
Observe that writing A as a product of simpler matrices is somewhat like factoring a
polynomial (although it is definitely not the same). This is an example of a matrix
decomposition. There are many very important matrix decompositions in linear algebra.
We will look at a useful matrix decomposition in the next section and a couple more
later in the book.

EXAMPLE 3

Let A = [0 2]. Write A and A^(-1) as a product of elementary matrices.
        [1 1]

Solution: We row reduce A to I, keeping track of the elementary row operations used:

    [0 2]  R1 <-> R2  [1 1]  (1/2)R2  [1 1]  R1 - R2  [1 0]
    [1 1]  -------->  [0 2]  ------>  [0 1]  ------>  [0 1]

Hence, we have

    E1 = [0 1],   E2 = [1  0 ],   E3 = [1 -1]
         [1 0]         [0 1/2]         [0  1]

Thus,

    A^(-1) = E3E2E1 = [1 -1][1  0 ][0 1]
                      [0  1][0 1/2][1 0]

and

    A = E1^(-1) E2^(-1) E3^(-1) = [0 1][1 0][1 1]
                                  [1 0][0 2][0 1]
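The factorization in Example 3 is easy to check directly. The sketch below (helper names are our own) multiplies the three inverse elementary matrices and confirms that the product is A.

```python
from fractions import Fraction

def matmul(X, Y):
    # Standard matrix product for list-of-lists matrices
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

F = Fraction
E1_inv = [[F(0), F(1)], [F(1), F(0)]]   # undoes R1 <-> R2 (its own inverse)
E2_inv = [[F(1), F(0)], [F(0), F(2)]]   # undoes (1/2)R2
E3_inv = [[F(1), F(1)], [F(0), F(1)]]   # undoes R1 - R2

A = matmul(E1_inv, matmul(E2_inv, E3_inv))
assert A == [[F(0), F(2)], [F(1), F(1)]]   # recovers the original A
```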

PROBLEMS 3.6
Practice Problems

A1 Write a 3 x 3 elementary matrix that corresponds to each of the following
elementary row operations. Multiply each of the elementary matrices by A = [ ]
and verify that the product EA is the matrix obtained from A by the elementary row
operation.
(a) Add (-5) times the second row to the first row.
(b) Swap the second and third rows.
(c) Multiply the third row by (-1).
(d) Multiply the second row by 6.
(e) Add 4 times the first row to the third.

A2 Write a 4 x 4 elementary matrix that corresponds to each of the following
elementary row operations.
(a) Add (-3) times the third row to the fourth row.
(b) Swap the second and fourth rows.
(c) Multiply the third row by (-3).
(d) Add ... times the first row to the third row.
(e) Multiply the first row by 3.
(f) Swap the first and third rows.

A3 For each of the following matrices, either state that it is an elementary matrix and
state the corresponding elementary row operation, or explain why it is not elementary.
(a) [ ]   (b) [ ]   (c) [ ]   (d) [ ]   (e) [ ]   (f) [ ]

A4 For each of the following matrices:
(i) Find a sequence of elementary matrices E1, ..., Ek such that Ek ... E1A = I.
(ii) Determine A^(-1) by computing Ek ... E1.
(iii) Write A as a product of elementary matrices.
(a) A = [ ]   (b) A = [ ]   (c) A = [ ]   (d) A = [ ]

Homework Problems

B1 Write a 3 x 3 elementary matrix that corresponds to each of the following
elementary row operations. Multiply each of the elementary matrices by A = [ ]
and verify that the product EA is the matrix obtained from A by the elementary row
operation.
(a) Add 4 times the second row to the first row.
(b) Swap the first and third rows.
(c) Multiply the second row by (-3).
(d) Multiply the first row by 2.
(e) Add (-2) times the first row to the third.
(f) Swap the first and second rows.

B2 Write a 4 x 4 elementary matrix that corresponds to each of the following
elementary row operations.
(a) Add 6 times the fourth row to the second row.
(b) Multiply the second row by 5.
(c) Swap the first and fourth rows.
(d) Swap the third and fourth rows.
(e) Add (-2) times the third row to the first row.
(f) Multiply the fourth row by (-2).

B3 For each of the following matrices, either state that it is an elementary matrix and
state the corresponding elementary row operation, or explain why it is not elementary.
(a) [ ]   (b) [ ]   (c) [ ]   (d) [ ]   (e) [ ]   (f) [ ]

B4 For each of the following matrices:
(i) Find a sequence of elementary matrices E1, ..., Ek such that Ek ... E1A = I.
(ii) Determine A^(-1) by computing Ek ... E1.
(iii) Write A as a product of elementary matrices.
(a) A = [ ]   (b) A = [ ]   (c) A = [ ]   (d) A = [ ]

Conceptual Problems

D1 (a) Let L : R^2 -> R^2 be the invertible linear operator with standard matrix
A = [ ]. By writing A as a product of elementary matrices, show that L can be
written as a composition of shears, stretches, and reflections.
(b) Explain how we know that every invertible linear operator L : R^n -> R^n can
be written as a composition of shears, stretches, and reflections.

D2 For 2 x 2 matrices, verify that Theorem 1 holds for the elementary row operations
add t times row 1 to row 2 and multiply row 2 by a factor of t, t not equal to 0.

D3 Let A = [ ] and b = [ ]. Since A is invertible, the system Ax = b has the unique
solution x = A^(-1)b = E2E1b. Instead of using matrix multiplication, calculate the
solution x in the following way.
(a) Determine elementary matrices E1 and E2 such that E2E1A = I.
(b) First, compute E1b by performing the elementary row operation associated with
E1 on b. Then compute x = E2E1b by performing the elementary row operation
associated with E2 on the result for E1b.
(c) Solve the system Ax = b by row reducing [ A | b ]. Observe the operations that
you use on the augmented part of the system and compare with part (b).

3.7 LU-Decomposition

One of the most basic and useful ideas in mathematics is the concept of a factorization
of an object. You have already seen that it can be very useful to factor a number into
primes or to factor a polynomial. Similarly, in many applications of linear algebra, we
may want to decompose a matrix into factors that have certain properties.

In applied linear algebra, we often need to quickly solve multiple systems Ax = b,
where the coefficient matrix A remains the same but the vector b changes. The goal of
this section is to derive a matrix factorization called the LU-decomposition, which is
commonly used in computer algorithms to solve such problems.

We now start our look at the LU-decomposition by recalling the definition of
upper-triangular and lower-triangular matrices.

Definition
Upper Triangular
Lower Triangular

An n x n matrix U is said to be upper triangular if the entries beneath the main
diagonal are all zero; that is, (U)ij = 0 whenever i > j. An n x n matrix L is said to
be lower triangular if the entries above the main diagonal are all zero; in particular,
(L)ij = 0 whenever i < j.


Remark
By definition, a matrix in row echelon form is upper triangular. This fact is very
important for the LU-decomposition.

Observe that for each such system Ax = b, we can use the same row operations
to row reduce [ A | b ] to row echelon form and then solve the system using back-
substitution. The only difference between the systems will then be the effect of the row
operations on b. In particular, we see that the two important pieces of information we
require are the row echelon form of A and the elementary row operations used.

For our purposes, we will assume that our n x n coefficient matrix A can be brought
into row echelon form using only elementary row operations of the form add a multiple
of one row to another. Since we can row reduce a matrix to a row echelon form
without multiplying a row by a non-zero constant, omitting this row operation is not
a problem. However, omitting row interchanges may seem rather serious: without row
interchanges, we cannot bring a matrix such as [ ] into row echelon form. However,
we only omit row interchanges because it is difficult to keep track of them by
hand. A computer can keep track of row interchanges without physically moving
entries from one location to another. At the end of the section, we will comment on the
case where swapping rows is required.

Thus, for such a matrix A, to row reduce A to a row echelon form, we will only use
row operations of the form Ri + sRj where i > j. Each such row operation will have
a corresponding elementary matrix that is lower triangular and has 1s along the main
diagonal. So, under our assumption, there are elementary matrices E1, ..., Ek that are
all lower triangular such that

    Ek ... E1A = U

where U is a row echelon form of A. Since Ek ... E1 is invertible, we can write
A = (Ek ... E1)^(-1) U and define

    L = (Ek ... E1)^(-1)

Since the inverse of a lower-triangular elementary matrix is lower triangular, and a
product of lower-triangular matrices is lower triangular, L is lower triangular. (You are
asked to prove this in Problem D1.) Therefore, this gives us the matrix decomposition
A = LU, where U is upper triangular and L is lower triangular. Moreover, L contains
the information about the row operations used to bring A to U.

Theorem 1

If A is an n x n matrix that can be row reduced to row echelon form without swapping
rows, then there exist an upper triangular matrix U and a lower triangular matrix L
such that A = LU.

Definition
LU-Decomposition

Writing an n x n matrix A as a product LU, where L is lower triangular and U is upper
triangular, is called an LU-decomposition of A.

Our derivation has given an algorithm for finding the LU-decomposition of such
a matrix.

EXAMPLE 1

Find an LU-decomposition of A = [ ].

Solution: Row reducing and keeping track of our row operations gives

    A -> ... -> U

where U is the resulting row echelon form and E1, E2, E3 are the elementary matrices
corresponding to the row operations used. Hence, we let

    L = (E1)^(-1) (E2)^(-1) (E3)^(-1)

and we get A = LU.

Observe from this example that the entries in L are just the negatives of the multipliers
used to put a zero in the corresponding entry. To see why this is the case, observe
that if Ek ... E1A = U, then

    A = (E1)^(-1) ... (Ek)^(-1) U = LU

Hence, the same row operations that reduce A to U will reduce L to I.
This makes the LU-decomposition extremely easy to find.
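The procedure just described can be sketched in code. The function below (our own illustration, assuming no row swaps are ever needed, with exact Fraction arithmetic) eliminates below each pivot and records the multipliers themselves in L; it is checked on the matrix [[2, 1, -1], [-4, 3, 3], [6, 8, -3]].

```python
from fractions import Fraction

def matmul(X, Y):
    # Standard matrix product for list-of-lists matrices
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lu(A):
    # LU-decomposition without row swaps: assumes every pivot is non-zero.
    n = len(A)
    U = [[Fraction(x) for x in row] for row in A]
    L = [[Fraction(1) if i == j else Fraction(0) for j in range(n)]
         for i in range(n)]
    for j in range(n):
        for i in range(j + 1, n):
            m = U[i][j] / U[j][j]                    # multiplier for Ri - m*Rj
            L[i][j] = m                              # L records the multiplier
            U[i] = [U[i][k] - m * U[j][k] for k in range(n)]
    return L, U

F = Fraction
B = [[2, 1, -1], [-4, 3, 3], [6, 8, -3]]
L, U = lu(B)
assert matmul(L, U) == [[F(x) for x in row] for row in B]   # LU = B
```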

EXAMPLE 2

Find an LU-decomposition of B = [ 2 1 -1]
                                [-4 3  3]
                                [ 6 8 -3]

Solution: By row reducing, we get

    [ 2 1 -1]  R2 + 2R1  [2 1 -1]  R3 - R2  [2 1 -1]
    [-4 3  3]  R3 - 3R1  [0 5  1]  ------>  [0 5  1] = U
    [ 6 8 -3]  ------->  [0 5  0]           [0 0 -1]

Therefore, we have

    L = [ 1 0 0]
        [-2 1 0]
        [ 3 1 1]

and

    B = LU = [ 1 0 0][2 1 -1]
             [-2 1 0][0 5  1]
             [ 3 1 1][0 0 -1]

EXAMPLE 3

Find an LU-decomposition of C = [ ].

Solution: By row reducing with operations of the form Ri + sRj, we obtain an upper
triangular matrix U, and the negatives of the multipliers give the lower triangular
matrix L. Therefore, we have C = LU.

EXERCISE 1

Find an LU-decomposition of A = [ ].

Solving Systems with the LU-Decomposition

We now look at how to use the LU-decomposition to solve the system Ax = b. If
A = LU, the system can be written as

    LUx = b

Letting y = Ux, we can write LUx = b as two systems:

    Ly = b    and    Ux = y

which both have triangular coefficient matrices. This allows us to solve both systems
immediately, using substitution. In particular, since L is lower triangular, we use
forward-substitution to solve Ly = b for y and then solve Ux = y for x using
back-substitution.

Remark
Observe that the first system is really calculating how performing the row operations
on A would have affected b.

EXAMPLE 4

Let B = [ 2 1 -1]          [  3]
        [-4 3  3]  and b = [-13]. Use an LU-decomposition of B to solve Bx = b.
        [ 6 8 -3]          [  4]

Solution: In Example 2 we found an LU-decomposition of B. We write Bx = b as
LUx = b and take y = Ux. Writing out the system Ly = b, we get

      y1           =   3
    -2y1 + y2      = -13
     3y1 + y2 + y3 =   4

Using forward-substitution, we find that y1 = 3, so y2 = -13 + 2(3) = -7 and
y3 = 4 - 3(3) - (-7) = 2. Hence, y = [3, -7, 2]^T.

Thus, our system Ux = y is

    2x1 + x2 - x3 =  3
          5x2 + x3 = -7
               -x3 =  2

Using back-substitution, we get x3 = -2, then 5x2 = -7 - (-2) so x2 = -1, and
2x1 = 3 - (-1) + (-2) so x1 = 1. Thus, the solution is x = [1, -1, -2]^T.
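The two substitution passes in Example 4 can be sketched as follows (our own helper names; L is assumed to have 1s on its diagonal, as the decomposition above guarantees).

```python
from fractions import Fraction

def forward_sub(L, b):
    # Solve Ly = b for y, top to bottom (L unit lower triangular)
    y = []
    for i in range(len(b)):
        y.append(b[i] - sum(L[i][j] * y[j] for j in range(i)))
    return y

def back_sub(U, y):
    # Solve Ux = y for x, bottom to top (U upper triangular, non-zero diagonal)
    n = len(y)
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x

F = Fraction
L = [[F(1), F(0), F(0)], [F(-2), F(1), F(0)], [F(3), F(1), F(1)]]
U = [[F(2), F(1), F(-1)], [F(0), F(5), F(1)], [F(0), F(0), F(-1)]]
b = [F(3), F(-13), F(4)]

y = forward_sub(L, b)     # the intermediate vector y of Example 4
x = back_sub(U, y)        # the solution x of Example 4
assert y == [F(3), F(-7), F(2)]
assert x == [F(1), F(-1), F(-2)]
```

Once L and U are known, each new right-hand side b costs only these two cheap triangular solves, which is the point of the decomposition.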


EXAMPLE 5

Use an LU-decomposition to solve Ax = b, where

    A = [ 1 -1  4]          [ 1]
        [-1  2 -3]  and b = [ 6]
        [-2  4 -6]          [12]

Solution: We first find an LU-decomposition of A. Row reducing gives

    [ 1 -1  4]  R2 + R1   [1 -1 4]  R3 - 2R2  [1 -1 4]
    [-1  2 -3]  R3 + 2R1  [0  1 1]  ------->  [0  1 1] = U
    [-2  4 -6]  ------->  [0  2 2]            [0  0 0]

Thus,

    L = [ 1 0 0]
        [-1 1 0]
        [-2 2 1]

We let y = Ux and solve Ly = b. This gives

      y1             =  1
     -y1 +  y2       =  6
    -2y1 + 2y2 + y3  = 12

So, y1 = 1, y2 = 6 + 1 = 7, and y3 = 12 + 2(1) - 2(7) = 0. Then we solve Ux = y,
which is

    x1 - x2 + 4x3 = 1
         x2 +  x3 = 7
              0x3 = 0

This gives x3 = t for any t in R, so x2 = 7 - t and x1 = 1 + (7 - t) - 4t = 8 - 5t.
Hence,

    x = [8]     [-5]
        [7] + t [-1],   t in R
        [0]     [ 1]

EXERCISE 2

Let A = [ ]. Use the LU-decomposition of A that you found in Exercise 1
to solve the system Ax = b, where:
(a) b = [ ]    (b) b = [ ]


A Comment About Swapping Rows

It can be shown that for any n x n matrix A, we can first rearrange the rows of A to
get a matrix that has an LU-factorization. In particular, for every matrix A, there exists
a matrix P, called a permutation matrix, that can be obtained by only performing row
swaps on the identity matrix, such that

    PA = LU

Then, to solve Ax = b, we use the LU-decomposition as before to solve the equivalent
system

    PAx = Pb
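A minimal sketch of the idea (the permutation here is an illustrative choice, not a pivoting algorithm): P is a row-swapped identity matrix, and PA then row reduces without any swaps.

```python
# A = [[0, 2], [1, 1]] cannot be reduced without a row interchange,
# since its top-left pivot is 0. Swapping rows 1 and 2 via P fixes this.
P = [[0, 1], [1, 0]]          # identity with rows 1 and 2 swapped
A = [[0, 2], [1, 1]]

PA = [[sum(P[i][k] * A[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]

# PA is already upper triangular, so its LU-decomposition is L = I, U = PA.
assert PA == [[1, 1], [0, 2]]
```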

PROBLEMS 3.7
Practice Problems

A1 Find an LU-decomposition of each of the following matrices.
(a) [ ]   (b) [ ]   (c) [ ]   (d) [ ]   (e) [ ]   (f) [ ]

A2 For each matrix, find an LU-decomposition and use it to solve Ax = bi, for i = 1, 2.
(a) A = [ ], b1 = [ ], b2 = [ ]
(b) A = [ ], b1 = [ ], b2 = [ ]
(c) A = [ ], b1 = [ ], b2 = [ ]
(d) A = [ ], b1 = [ ], b2 = [ ]

Homework Problems

B1 Find an LU-decomposition of each of the following matrices.
(a) [ ]   (b) [ ]   (c) [ ]   (d) [ ]   (e) [ ]   (f) [ ]

B2 For each matrix, find an LU-decomposition and use it to solve Ax = bi, for i = 1, 2.
(a) A = [ ], b1 = [ ], b2 = [ ]
(b) A = [ ], b1 = [ ], b2 = [ ]
(c) A = [ ], b1 = [ ], b2 = [ ]
(d) A = [ ], b1 = [ ], b2 = [ ]

Conceptual Problems

D1 (a) Prove that the inverse of a lower-triangular elementary matrix is lower
triangular.
(b) Prove that a product of lower-triangular matrices is lower triangular.

CHAPTER REVIEW
Suggestions for Student Review

Try to answer all of these questions before checking answers at the suggested
locations. In particular, try to invent your own examples. These review suggestions are
intended to help you carry out your review. They may not cover every idea you need
to master. Working in small groups may improve your efficiency.

1 State the rules for determining the product of two matrices A and B. What
condition(s) must be satisfied by the sizes of A and B for the product to be defined?
How do each of these rules correspond to writing a system of linear equations?
(Section 3.1)

2 Explain clearly the relationship between a matrix and the corresponding matrix
mapping. Explain how you determine the matrix of a given linear mapping. Pick
some vector v in R^2 and determine the standard matrices of the linear mappings
proj_v, perp_v, and refl_v. Check that your answers are correct by using each matrix
to determine the image of v and a vector orthogonal to v under the mapping.
(Section 3.2)

3 Determine the image of the vector [ ] under the rotation of the plane through a
given angle. Check that the image has the same length as the original vector.
(Section 3.3)

4 Outline the relationships between the solution set for the homogeneous system
Ax = 0 and solutions for the corresponding non-homogeneous system Ax = b.
Illustrate by giving some specific examples where A is in reduced row echelon form.
Also discuss connections between these sets and special subspaces for linear
mappings. (Section 3.4)

5 (a) How many ways can you recognize the rank of a matrix? State them all.
(Section 3.4)
(b) State the connection between the rank of a matrix A and the dimension of the
solution space of Ax = 0. (Section 3.4)
(c) Illustrate your answers to (a) and (b) by constructing examples of 4 x 5 matrices
in row echelon form of (i) rank 4, (ii) rank 3, and (iii) rank 2. In each case, actually
determine the general solution of the system Ax = 0 and check that the solution
space has the correct dimension. (Section 3.4)

6 (a) Outline the procedure for determining the inverse of a matrix. Indicate why it
might not produce an inverse for some matrix A. Use the matrices of some geometric
linear mappings to give two or three examples of matrices that have inverses and two
examples of square matrices that do not have inverses. (Section 3.5)
(b) Pick a fairly simple matrix (that has not too many zeros) and try to find its
inverse. If it is not invertible, try another. When you have an inverse, check its
correctness by multiplication. (Section 3.5)

7 For 3 x 3 matrices, choose one elementary row operation of each of the three types,
and call these E1, E2, E3. Choose an arbitrary 3 x 3 matrix A and check that EiA is
the matrix obtained from A by the appropriate elementary row operation.
(Section 3.6)

Chapter Quiz

E1 Let A = [ ] and B = [ ]. Either determine the following products or explain why
they are not defined.
(a) AB   (b) BA   (c) BA^T

E2 (a) Let A = [ ], u = [ ], and v = [ ]. Determine fA(u) and fA(v).
(b) Use the result of part (a) to calculate fA([ ]).

E3 Let R be the rotation through a given angle about the x3-axis in R^3 and let M be
the reflection in R^3 in the plane with equation -x1 - x2 + 2x3 = 0. Determine:
(a) The matrix of R
(b) The matrix of M
(c) The matrix of [R o M]

E4 Let A = [ ] and b = [ ]. Determine the solution set of Ax = b and the solution
space of Ax = 0, and discuss the relationship between these two sets.

E5 Let B = [ ] be the matrix of the linear mapping fB, and let u = [ ] and v = [ ].
(a) Using only one sequence of elementary row operations, determine whether u is
in the columnspace of B and whether v is in the range of the linear mapping fB.
(b) Determine from your calculation in part (a) a vector x such that fB(x) = v.
(c) Determine a vector y such that fB(y) is the second column of B.

E6 You are given the matrix A below and a row echelon form of A. Determine a basis
for the rowspace, columnspace, and nullspace of A.

    A = [ ]

E7 Determine the inverse of the matrix

    A = [1 0 0 -1]
        [0 0 1  0]
        [1 0 2  0]
        [2 1 0  0]

E8 Determine all values of p such that the matrix [ ] is invertible and determine its
inverse.


E9 Prove that the range of a linear mapping L : R^n -> R^m is a subspace of the
codomain.

E10 Let {v1, ..., vk} be a linearly independent set in R^n and let L : R^n -> R^m be
a linear mapping. Prove that if Null(L) = {0}, then {L(v1), ..., L(vk)} is a linearly
independent set in R^m.

E11 Let A = [ ].
(a) Determine a sequence of elementary matrices E1, ..., Ek such that
Ek ... E1A = I.
(b) By inverting the elementary matrices in part (a), write A as a product of
elementary matrices.

E12 For each of the following, either give an example or explain (in terms of
theorems or definitions) why no such example can exist.
(a) A matrix K such that KM = MK for all 3 x 3 matrices M.
(b) A matrix K such that KM = MK for all 3 x 4 matrices M.
(c) The matrix of a linear mapping L : R^2 -> R^3 whose range is Span{[ ]} and
whose nullspace is Span{[ ]}.
(d) The matrix of a linear mapping L : R^2 -> R^3 whose range is Span{[ ]} and
whose nullspace is Span{[ ]}.
(e) A linear mapping L : R^3 -> R^3 such that the range of L is all of R^3 and the
nullspace of L is Span{[ ]}.
(f) An invertible 4 x 4 matrix of rank 3.

Further Problems
These problems are intended to be challenging. They may not be of interest to all
students.

F1 We say that matrix C commutes with matrix D if CD = DC. Show that the set of
matrices that commute with A = [ ] is the set of matrices of the form pI + qA, where
p and q are arbitrary scalars.

F2 Let A be some fixed n x n matrix. Show that the set C(A) of matrices that
commute with A is closed under addition, scalar multiplication, and matrix
multiplication.

F3 A square matrix A is said to be nilpotent if some power of A is equal to the zero
matrix. Show that the matrix [ ] is nilpotent. Generalize.

F4 (a) Suppose that l is a line in R^2 passing through the origin and making an angle
theta with the positive x1-axis. Let refl_theta denote a reflection in this line.
Determine the matrix [refl_theta] in terms of functions of theta.
(b) Let refl_alpha denote a reflection in a second line, and by considering the matrix
[refl_alpha o refl_theta], show that the composition of two reflections in the plane is
a rotation. Express the angle of the rotation in terms of alpha and theta.

F5 (Isometries of R^2) A linear transformation L : R^2 -> R^2 is an isometry of R^2
if L preserves lengths (that is, if ||L(x)|| = ||x|| for every x in R^2).
(a) Show that an isometry preserves the dot product (that is, L(x) . L(y) = x . y for
every x, y in R^2). (Hint: Consider L(x + y).)
(b) Show that the columns of the matrix [L] must be orthogonal to each other and of
length 1. Deduce that any isometry of R^2 must be the composition of a reflection
and a rotation. (Hint: You may find it helpful to use the result of Problem F4 (a).)

F6 (a) Suppose that A and B are n x n matrices such that A + B and A - B are
invertible and that C and D are arbitrary n x n matrices. Show that there are n x n
matrices X and Y satisfying the system

    AX + BY = C
    BX + AY = D

(b) With the same assumptions as in part (a), give a careful explanation of why the
matrix

    [A B]
    [B A]

must be invertible, and obtain an expression for its inverse in terms of (A + B)^(-1)
and (A - B)^(-1).

MyMathLab

Go to MyMathLab at www.mymathlab.com. You can practise many of this chapter's exercises as often as you
want. The guided solutions help you find an answer step by step. You'll find a personalized study plan available to
you, too!

CHAPTER 4

Vector Spaces

CHAPTER OUTLINE
4.1 Spaces of Polynomials
4.2 Vector Spaces
4.3 Bases and Dimensions
4.4 Coordinates with Respect to a Basis
4.5 General Linear Mappings
4.6 Matrix of a Linear Mapping
4.7 Isomorphisms of Vector Spaces

This chapter explores some of the most important ideas in linear algebra. Some of
these ideas have appeared as special cases before, but here we give definitions and
examine them in more general settings.

4.1 Spaces of Polynomials

We now compare sets of polynomials under standard addition and scalar multiplication
to sets of vectors in R^n and sets of matrices.

Addition and Scalar Multiplication of Polynomials

Recall that if p(x) = a0 + a1x + ... + an x^n, q(x) = b0 + b1x + ... + bn x^n, and
t in R, then

    (p + q)(x) = (a0 + b0) + (a1 + b1)x + ... + (an + bn)x^n

and

    (tp)(x) = (t a0) + (t a1)x + ... + (t an)x^n

Moreover, two polynomials p and q are equal if and only if ai = bi for 0 <= i <= n.

EXAMPLE 1

Perform the following operations.

(a) (2 + 3x + 4x^2 + x^3) + (5 + x - 2x^2 + 7x^3)
Solution:
(2 + 3x + 4x^2 + x^3) + (5 + x - 2x^2 + 7x^3) = 2 + 5 + (3 + 1)x + (4 - 2)x^2 + (1 + 7)x^3
                                              = 7 + 4x + 2x^2 + 8x^3

(b) (3 + x^2 - 5x^3) - (1 - x - 2x^2)
Solution:
(3 + x^2 - 5x^3) - (1 - x - 2x^2) = 3 - 1 + [0 - (-1)]x + [1 - (-2)]x^2 + [-5 - 0]x^3
                                  = 2 + x + 3x^2 - 5x^3

(c) 5(2 + 3x + 4x^2 + x^3)
Solution: 5(2 + 3x + 4x^2 + x^3) = 5(2) + 5(3)x + 5(4)x^2 + 5(1)x^3
                                 = 10 + 15x + 20x^2 + 5x^3

(d) 2(1 + 3x - x^3) + 3(4 + x^2 + 2x^3)
Solution: 2(1 + 3x - x^3) + 3(4 + x^2 + 2x^3) = 2(1) + 2(3)x + 2(0)x^2 + 2(-1)x^3
                                                + 3(4) + 3(0)x + 3(1)x^2 + 3(2)x^3
                                              = 2 + 12 + (6 + 0)x + (0 + 3)x^2 + (-2 + 6)x^3
                                              = 14 + 6x + 3x^2 + 4x^3
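Since these operations act coefficient-wise, a polynomial of degree at most n behaves exactly like its coefficient vector in R^(n+1). The sketch below (list-of-coefficients representation [a0, a1, ..., an]; helper names are our own) redoes Example 1(d).

```python
def poly_add(p, q):
    # Coefficient-wise addition of two same-length coefficient lists
    return [a + b for a, b in zip(p, q)]

def poly_scale(t, p):
    # Coefficient-wise scalar multiplication
    return [t * a for a in p]

p = [1, 3, 0, -1]          # 1 + 3x - x^3
q = [4, 0, 1, 2]           # 4 + x^2 + 2x^3

# 2(1 + 3x - x^3) + 3(4 + x^2 + 2x^3), as in Example 1(d)
r = poly_add(poly_scale(2, p), poly_scale(3, q))
assert r == [14, 6, 3, 4]  # 14 + 6x + 3x^2 + 4x^3
```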

Properties of Polynomial Addition and Scalar Multiplication

Theorem 1

Let p(x), q(x), and r(x) be polynomials of degree at most n and let s, t in R. Then
(1) p(x) + q(x) is a polynomial of degree at most n
(2) p(x) + q(x) = q(x) + p(x)
(3) (p(x) + q(x)) + r(x) = p(x) + (q(x) + r(x))
(4) The polynomial 0 = 0 + 0x + ... + 0x^n, called the zero polynomial, satisfies
p(x) + 0 = p(x) = 0 + p(x) for any polynomial p(x)
(5) For each polynomial p(x), there exists an additive inverse, denoted (-p)(x),
with the property that p(x) + (-p)(x) = 0; in particular, (-p)(x) = -p(x)
(6) tp(x) is a polynomial of degree at most n
(7) s(tp(x)) = (st)p(x)
(8) (s + t)p(x) = sp(x) + tp(x)
(9) t(p(x) + q(x)) = tp(x) + tq(x)
(10) 1p(x) = p(x)

Remarks

1. These properties follow easily from the definitions of addition and scalar
multiplication and are very similar to those for vectors in R^n. Thus, the proofs are
left to the reader.

2. Observe that these are the same 10 properties we had for addition and
scalar multiplication of vectors in R^n (Theorem 1.2.1) and of matrices
(Theorem 3.1.1).

3. When we look at polynomials in this way, it is the coefficients of the polynomials
that are important.

As with vectors in R^n and matrices, we can also consider linear combinations of
polynomials. We make the following definition.

Definition
Span

Let B = {p1(x), ..., pk(x)} be a set of polynomials of degree at most n. Then the span
of B is defined as

    Span B = {t1 p1(x) + ... + tk pk(x) | t1, ..., tk in R}

Definition
Linearly Independent

The set B = {p1(x), ..., pk(x)} is said to be linearly independent if the only solution
to the equation

    t1 p1(x) + ... + tk pk(x) = 0

is t1 = ... = tk = 0; otherwise, B is said to be linearly dependent.

EXAMPLE 2

Determine if 1 + 2x + 3x^2 + 4x^3 is in the span of B = {1 + x, 1 + x^3, x + x^2, x + x^3}.

Solution: We want to determine if there are t1, t2, t3, t4 such that

    1 + 2x + 3x^2 + 4x^3 = t1(1 + x) + t2(1 + x^3) + t3(x + x^2) + t4(x + x^3)
                         = (t1 + t2) + (t1 + t3 + t4)x + t3 x^2 + (t2 + t4)x^3

By comparing the coefficients of the different powers of x on both sides of the equation,
we get the system of linear equations

    t1 + t2           = 1
    t1      + t3 + t4 = 2
              t3      = 3
         t2      + t4 = 4

Row reducing the augmented matrix gives

    [1 1 0 0 | 1]    [1 0 0 0 | -2]
    [1 0 1 1 | 2] -> [0 1 0 0 |  3]
    [0 0 1 0 | 3]    [0 0 1 0 |  3]
    [0 1 0 1 | 4]    [0 0 0 1 |  1]

We see that the system is consistent; therefore, 1 + 2x + 3x^2 + 4x^3 is in the span of B.
We see that the system is consistent; therefore, 1 +2x+3x2+4x3 is in the span of 13.
In particular, we have t1 = -2, t2 = 3, t3 = 3, and t4 = 1.
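The coefficients found in Example 2 are easy to double-check by assembling the linear combination coefficient-wise (a small sketch storing each polynomial as its list of coefficients [a0, a1, a2, a3]; helper names are our own).

```python
def poly_add(p, q):
    # Coefficient-wise addition
    return [a + b for a, b in zip(p, q)]

def poly_scale(t, p):
    # Coefficient-wise scalar multiplication
    return [t * a for a in p]

basis = [[1, 1, 0, 0],   # 1 + x
         [1, 0, 0, 1],   # 1 + x^3
         [0, 1, 1, 0],   # x + x^2
         [0, 1, 0, 1]]   # x + x^3
t = [-2, 3, 3, 1]        # the coefficients found in Example 2

total = [0, 0, 0, 0]
for ti, p in zip(t, basis):
    total = poly_add(total, poly_scale(ti, p))

assert total == [1, 2, 3, 4]    # 1 + 2x + 3x^2 + 4x^3, as required
```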

EXAMPLE 3

Determine if the set B = {1 + 2x + 2x^2 - x^3, 3 + 2x + x^2 + x^3, 2x^2 + 2x^3} is
linearly dependent or linearly independent.

Solution: Consider

    0 = t1(1 + 2x + 2x^2 - x^3) + t2(3 + 2x + x^2 + x^3) + t3(2x^2 + 2x^3)
      = (t1 + 3t2) + (2t1 + 2t2)x + (2t1 + t2 + 2t3)x^2 + (-t1 + t2 + 2t3)x^3

Comparing coefficients of the powers of x, we get a homogeneous system of linear
equations. Row reducing the associated coefficient matrix gives

    [ 1 3 0]    [1 0 0]
    [ 2 2 0] -> [0 1 0]
    [ 2 1 2]    [0 0 1]
    [-1 1 2]    [0 0 0]

The only solution is t1 = t2 = t3 = 0. Hence B is linearly independent.
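The test used in Example 3 amounts to a rank computation: the set is linearly independent exactly when the coefficient matrix (one column per polynomial) has rank equal to the number of polynomials. A sketch with exact arithmetic (function names are our own):

```python
from fractions import Fraction

def rank(A):
    # Rank via Gauss-Jordan elimination with exact Fraction arithmetic
    M = [[Fraction(x) for x in row] for row in A]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [x - M[i][c] * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

# Columns are the coefficient vectors of the three polynomials in Example 3:
# 1 + 2x + 2x^2 - x^3,  3 + 2x + x^2 + x^3,  2x^2 + 2x^3
A = [[1, 3, 0],
     [2, 2, 0],
     [2, 1, 2],
     [-1, 1, 2]]
assert rank(A) == 3    # full column rank: the set is linearly independent
```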

EXERCISE 1

Determine if B = {1 + 2x + x^2 + x^3, 1 + x + 3x^2 + x^3, 3 + 5x + 5x^2 - 3x^3, -x - 2x^2}
is linearly dependent or linearly independent. Is p(x) = 1 + 5x - 5x^2 + x^3 in the span
of B?

EXERCISE 2

Consider B = {1, x, x^2, x^3}. Prove that B is linearly independent and show that
Span B is the set of all polynomials of degree less than or equal to 3.

PROBLEMS 4.1
Practice Problems

A1 Calculate the following.
(a) (2 - 2x + 3x^2 + 4x^3) + (-3 - 4x + x^2 + 2x^3)
(b) (-3)(1 - 2x + 2x^2 + x^3 + 4x^4)
(c) (2 + 3x + x^2 - 2x^3) - 3(1 - 2x + 4x^2 + 5x^3)
(d) (2 + 3x + 4x^2) - (5 + x - 2x^2)
(e) -2(-5 + x + x^2) + 3(-1 - x^2)
(f) 2(-(1/2)x + 2x^2) + (1/2)(3 - 2x + x^2)
(g) sqrt(2)(1 + x + x^2) + pi(-1 + x^2)

A2 Let B = {1 + x^2 + x^3, 2 + x + x^3, -1 + x + 2x^2 + x^3}. For each of the
following polynomials, either express it as a linear combination of the polynomials
in B or show that it is not in Span B.
(a) 0
(b) 2 + 4x + 3x^2 + 4x^3
(c) -x + 2x^2 + x^3
(d) -4 - x + 3x^2
(e) -1 + 7x + 5x^2 + 4x^3
(f) 2 + x + 5x^3

A3 Determine which of the following sets are linearly independent. If a set is linearly
dependent, find all linear combinations of the polynomials that equal the zero
polynomial.
(a) {1 + 2x + x^2 - x^3, 5x + x^2, 1 - 3x + 2x^2 + x^3}
(b) {1 + x + x^2, x, x^2 + x^3, 3 + 2x + 2x^2 - x^3}
(c) {3 + x + x^2, 4 + x - x^2, 1 + 2x + x^2 + 2x^3, -1 + 5x^2 + x^3}
(d) {1 + x + x^3 + x^4, 2 + x - x^2 + x^3 + x^4, x + x^2 + x^3 + x^4}

A4 Prove that the set B = {1, x - 1, (x - 1)^2} is linearly independent and show that
Span B is the set of all polynomials of degree less than or equal to 2.

Homework Problems

B1 Calculate the following.
(a) (3 + 4x - 2x^2 + 5x^3) - (1 - 2x + 5x^3)
(b) (-2)(2 + x + x^2 + 3x^3 - x^4)
(c) (-1)(2 + x + 4x^2 + 2x^3) - 2(-1 - 2x - 2x^2 - x^3)
(d) 3(1 + x + x^3) + 2(x - x^2 + x^3)
(e) 0(1 + 3x^3 - 4x^4)
(f) (1/2)(3 - x + x^2) + (1/2)(2 + 4x + x^2)
(g) (1 + sqrt(2))(1 - sqrt(2) + (sqrt(2) - 1)x^2) - (1/2)(-2 + 2x^2)

B2 Let B = {1 + x, x + x^2, 1 - x^3}. For each of the following polynomials, either
express it as a linear combination of the polynomials in B or show that it is not in
Span B.
(a) p(x) = 1
(b) p(x) = 5x + 2x^2 + 3x^3
(c) q(x) = 3 + x^2 - 4x^3
(d) q(x) = 1 + x^3

B3 Determine which of the following sets are linearly independent. If a set is linearly
dependent, find all linear combinations of the polynomials that equal the zero
polynomial.
(a) {x^2, x^3, x^2 + x^3 + x^4}
(b) { ... }
(c) {1 + x + x^3, x + x^3 + x^5, 1 - x^5}
(d) {1 - 2x + x^4, x - 2x^2 + x^5, 1 - 3x + x^3}
(e) {1 + 2x + x^2 - x^3, 2 + 3x - x^2 + x^3 + x^4, 1 + x - 2x^2 + 2x^3 + x^4,
1 + 2x + x^2 + x^3 - 3x^4, 4 + 6x - 2x^2 + 5x^4}

B4 Prove that the set B = {1, x - 2, (x - 2)^2, (x - 2)^3} is linearly independent and
show that Span B is the set of all polynomials of degree less than or equal to 3.


Conceptual Problems
D1 Let B = {p_1(x), ..., p_k(x)} be a set of polynomials of degree at most n.
(a) Prove that if k < n + 1, then there exists a polynomial q(x) of degree at most n such that q(x) ∉ Span B.
(b) Prove that if k > n + 1, then B must be linearly dependent.

4.2 Vector Spaces

We have now seen that addition and scalar multiplication of matrices and polynomials satisfy the same 10 properties as vectors in ℝ^n. Moreover, we commented in Section 3.2 that addition and scalar multiplication of linear mappings also satisfy these same properties. In fact, many other mathematical objects also have these important properties. Instead of analysing each of these objects separately, it is useful to define one abstract concept that encompasses them all.

Vector Spaces

Definition (Vector Space over ℝ)
A vector space over ℝ is a set V together with an operation of addition, usually denoted x + y for any x, y ∈ V, and an operation of scalar multiplication, usually denoted sx for any x ∈ V and s ∈ ℝ, such that for any x, y, z ∈ V and s, t ∈ ℝ we have all of the following properties:

V1 x + y ∈ V (closed under addition)
V2 (x + y) + z = x + (y + z) (addition is associative)
V3 There is an element 0 ∈ V, called the zero vector, such that x + 0 = x = 0 + x (additive identity)
V4 For each x ∈ V there exists an element -x ∈ V such that x + (-x) = 0 (additive inverse)
V5 x + y = y + x (addition is commutative)
V6 tx ∈ V (closed under scalar multiplication)
V7 s(tx) = (st)x (scalar multiplication is associative)
V8 (s + t)x = sx + tx (scalar addition is distributive)
V9 t(x + y) = tx + ty (scalar multiplication is distributive)
V10 1x = x (1 is the scalar multiplicative identity)

Remarks
1. We will call the elements of a vector space vectors. Note that these can be very different objects than vectors in ℝ^n. Thus, we will always denote, as in the definition above, a vector in a general vector space in boldface (for example, x). However, in vector spaces such as ℝ^n, matrix spaces, or polynomial spaces, we will often use the notation we introduced earlier.
2. Some people prefer to denote the operations of addition and scalar multiplication in general vector spaces by ⊕ and ⊙, respectively, to stress the fact that these do not need to be "standard" addition and scalar multiplication.
3. Since every vector space contains a zero vector by V3, the empty set cannot be a vector space.
4. When working with multiple vector spaces, we sometimes use a subscript to denote the vector space to which the zero vector belongs. For example, 0_V would represent the zero vector in the vector space V.
5. Vector spaces can be defined using other number systems as the scalars. For example, note that the definition makes perfect sense if rational numbers are used instead of the real numbers. Vector spaces over the complex numbers are discussed in Chapter 9. Until Chapter 9, "vector space" means "vector space over ℝ."
6. We define vector spaces to have the same structure as ℝ^n. The study of vector spaces is the study of this common structure. However, it is possible that vectors in individual vector spaces have other aspects not common to all vector spaces, such as matrix multiplication or factorization of polynomials.

EXAMPLE 1
ℝ^n is a vector space with addition and scalar multiplication defined in the usual way. We call these standard addition and scalar multiplication of vectors in ℝ^n.

EXAMPLE 2
P_n, the set of all polynomials of degree at most n, is a vector space with standard addition and scalar multiplication of polynomials.

EXAMPLE 3
M(m, n), the set of all m × n matrices, is a vector space with standard addition and scalar multiplication of matrices.

EXAMPLE 4
Consider the set of polynomials of degree n. Is this a vector space with standard addition and scalar multiplication? No, since it does not contain the zero polynomial. Note also that the sum of two polynomials of degree n may be of degree lower than n. For example, (1 + x^n) + (1 - x^n) = 2, which is of degree 0. Thus, the set is also not closed under addition.
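The degree-drop phenomenon in Example 4 is easy to check numerically. The following sketch (our own illustration, not from the text) represents a polynomial by its coefficient list [a0, a1, ..., an] and shows that the sum of two degree-3 polynomials can have degree 0:

```python
# Represent a0 + a1*x + ... + an*x^n by its coefficient list.
p = [1, 0, 0, 1]   # 1 + x^3
q = [1, 0, 0, -1]  # 1 - x^3

s = [a + b for a, b in zip(p, q)]  # coefficient-wise sum: (1 + x^3) + (1 - x^3)

def degree(coeffs):
    # Index of the last non-zero coefficient; -1 for the zero polynomial.
    for i in reversed(range(len(coeffs))):
        if coeffs[i] != 0:
            return i
    return -1

print(degree(p), degree(q), degree(s))  # 3 3 0
```

So the set of polynomials of degree exactly 3 fails the closure axiom V1, while P_3 (degree at most 3) does not.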

EXAMPLE 5
Let F(a, b) denote the set of all functions f : (a, b) → ℝ. If f, g ∈ F(a, b), then the sum is defined by (f + g)(x) = f(x) + g(x), and multiplication by a scalar t ∈ ℝ is defined by (tf)(x) = tf(x). With these definitions, F(a, b) is a vector space.

EXAMPLE 6
Let C(a, b) denote the set of all functions that are continuous on the interval (a, b). Since the sum of continuous functions is continuous and a scalar multiple of a continuous function is continuous, C(a, b) is a vector space. See Figure 4.2.1.

Figure 4.2.1 The sum of continuous functions f and g is a continuous function.

EXAMPLE 7

Let T be the set of all solutions to
x_1 + 2x_2 = 1
2x_1 + 3x_2 = 0
Is T a vector space with standard addition and scalar multiplication? No. This set with these operations does not satisfy many of the vector space axioms. For example, V1 does not hold since [-3; 2] is a solution of this system of linear equations. But
[-3; 2] + [-3; 2] = [-6; 4]
is not a solution of the system and hence not in T.
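The failure of V1 in Example 7 can be confirmed by direct substitution; this small check is our own illustration, not part of the text:

```python
# T is the solution set of x1 + 2x2 = 1, 2x1 + 3x2 = 0.
def in_T(x1, x2):
    return x1 + 2 * x2 == 1 and 2 * x1 + 3 * x2 == 0

x = (-3, 2)
double = (2 * x[0], 2 * x[1])  # x + x = (-6, 4)

print(in_T(*x))       # True: x is a solution
print(in_T(*double))  # False: x + x is not, so T is not closed under addition
```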

EXAMPLE 8
Consider V = {(x, y) | x, y ∈ ℝ} with addition defined by (x_1, y_1) + (x_2, y_2) = (x_1 + x_2, y_1 + y_2) and scalar multiplication defined by k(x, y) = (ky, kx). Is V a vector space? No, since 1(2, 3) = (3, 2) ≠ (2, 3), it does not satisfy V10. Note that it also does not satisfy V7.

EXAMPLE 9
Let S = {[x_1; x_2; x_1 + x_2] | x_1, x_2 ∈ ℝ}. Is S a vector space with standard addition and scalar multiplication in ℝ^3? Yes. Let's verify the axioms.
First, observe that axioms V2, V5, V7, V8, V9, and V10 refer only to the operations of addition and scalar multiplication. Thus, we know that these operations must satisfy all these axioms as they are the operations of the vector space ℝ^3.
Let x = [x_1; x_2; x_1 + x_2] and y = [y_1; y_2; y_1 + y_2] be vectors in S.
V1 We have
x + y = [x_1 + y_1; x_2 + y_2; x_1 + x_2 + y_1 + y_2]
Observe that if we let z_1 = x_1 + y_1 and z_2 = x_2 + y_2, then z_1 + z_2 = x_1 + x_2 + y_1 + y_2, and hence
x + y = [z_1; z_2; z_1 + z_2] ∈ S
since it satisfies the conditions of the set. Therefore, S is closed under addition.

V3 The vector 0 = [0; 0; 0] satisfies x + 0 = x = 0 + x and is in S since it satisfies the conditions of S.
V4 The additive inverse of x = [x_1; x_2; x_1 + x_2] is (-x) = [-x_1; -x_2; -x_1 - x_2], which is in S since it satisfies the conditions of S.
V6 tx = [tx_1; tx_2; tx_1 + tx_2] ∈ S. Therefore, S is closed under scalar multiplication.
Thus, S with these operations is a vector space as it satisfies all 10 axioms.
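The closure verifications in Example 9 can be spot-checked numerically; this randomized sketch (our own, not part of the text) confirms V1 and V6 on sample vectors:

```python
import random

# S = {(x1, x2, x1 + x2)}: membership means the third entry equals the
# sum of the first two.
def in_S(v):
    return v[2] == v[0] + v[1]

random.seed(0)
for _ in range(100):
    x1, x2, y1, y2, t = (random.randint(-10, 10) for _ in range(5))
    x = (x1, x2, x1 + x2)
    y = (y1, y2, y1 + y2)
    assert in_S(tuple(a + b for a, b in zip(x, y)))  # V1: x + y is in S
    assert in_S(tuple(t * a for a in x))             # V6: tx is in S

print("closure checks passed")
```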

EXERCISE 1
Prove that the set S = {[x_1; x_2] | x_1, x_2 ∈ ℤ} is not a vector space using standard addition and scalar multiplication of vectors in ℝ^2.

EXERCISE 2
Let S = {[a_1 0; 0 a_2] | a_1, a_2 ∈ ℝ}. Prove that S is a vector space using standard addition and scalar multiplication of matrices. This is the vector space of 2 × 2 diagonal matrices.

Again, one advantage of having the abstract concept of a vector space is that when
we prove a result about a general vector space, it instantly applies to all of the examples
of vector spaces. To demonstrate this, we give three additional properties that follow
easily from the vector space axioms.

Theorem 1
Let V be a vector space. Then
(1) 0x = 0 for all x ∈ V
(2) (-1)x = -x for all x ∈ V
(3) t0 = 0 for all t ∈ ℝ

Proof: We will prove (1). You are asked to prove (2) and (3) in Problem D1. For any x ∈ V we have
0x = 0x + 0                  by V3
   = 0x + [x + (-x)]         by V4
   = 0x + [1x + (-x)]        by V10
   = [0x + 1x] + (-x)        by V2
   = (0 + 1)x + (-x)         by V8
   = 1x + (-x)               operation of numbers in ℝ
   = x + (-x)                by V10
   = 0                       by V4


Thus, if we know that V is a vector space, we can determine the zero vector of V by finding 0x for any x ∈ V. Similarly, we can determine the additive inverse of any vector x ∈ V by computing (-1)x.

EXAMPLE 10
Let V = {(a, b) | a, b ∈ ℝ, b > 0} and define addition by (a, b) ⊕ (c, d) = (ad + bc, bd) and define scalar multiplication by t ⊙ (a, b) = (tab^(t-1), b^t). Use Theorem 1 to show that axioms V3 and V4 hold for V with these operations. (Note that we are using ⊕ and ⊙ to represent the operations of addition and scalar multiplication in the vector space to help distinguish the difference between these and the operations of addition and multiplication of real numbers.)

Solution: We do not know if V is a vector space. If it is, then by Theorem 1 we must have
0 = 0 ⊙ (a, b) = (0 · a · b^(-1), b^0) = (0, 1)
Observe that (0, 1) ∈ V and for any (a, b) ∈ V we have
(a, b) ⊕ (0, 1) = (a(1) + b(0), b(1)) = (a, b) = (0(b) + 1(a), 1(b)) = (0, 1) ⊕ (a, b)
So, V satisfies V3 using 0 = (0, 1).
Similarly, if V is a vector space, then by Theorem 1 for any x = (a, b) ∈ V we must have
(-x) = (-1) ⊙ (a, b) = (-ab^(-2), b^(-1))
Observe that for any (a, b) ∈ V we have (-ab^(-2), b^(-1)) ∈ V since b^(-1) > 0 whenever b > 0. Also,
(a, b) ⊕ (-ab^(-2), b^(-1)) = (a(b^(-1)) + b(-ab^(-2)), b(b^(-1))) = (ab^(-1) - ab^(-1), 1) = (0, 1)
So, V satisfies V4 using -(a, b) = (-ab^(-2), b^(-1)).

You are asked to complete the proof that V is indeed a vector space in Problem D2.
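The computations of Example 10 can be mirrored numerically. In this sketch (our own), add and smul implement ⊕ and ⊙, and Theorem 1's predictions for the zero vector and additive inverse are checked at the sample point (5, 2):

```python
# V = {(a, b) : b > 0} with (a,b) ⊕ (c,d) = (ad + bc, bd)
# and t ⊙ (a,b) = (t*a*b**(t-1), b**t).
def add(x, y):
    a, b = x
    c, d = y
    return (a * d + b * c, b * d)

def smul(t, x):
    a, b = x
    return (t * a * b ** (t - 1), b ** t)

x = (5.0, 2.0)
zero = smul(0, x)     # Theorem 1: 0x should be the zero vector (0, 1)
neg = smul(-1, x)     # Theorem 1: (-1)x should be the additive inverse

print(zero)           # (0.0, 1.0)
print(add(x, neg))    # (0.0, 1.0), i.e. x ⊕ (-x) = 0
```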

Subspaces
In Example 9 we showed that S is a vector space that is contained inside the vector space ℝ^3. Observe that, by the definition of a subspace of ℝ^n in Section 1.2, S is actually a subspace of ℝ^3. We now generalize these ideas to general vector spaces.

Definition (Subspace)
Suppose that V is a vector space. A non-empty subset U of V is a subspace of V if it satisfies the following two properties:
S1 x + y ∈ U for all x, y ∈ U (U is closed under addition)
S2 tx ∈ U for all x ∈ U and t ∈ ℝ (U is closed under scalar multiplication)

Equivalent Definition. If U is a subset of a vector space V and U is also a vector space using the same operations as V, then U is a subspace of V.
To prove that both of these definitions are equivalent, we first observe that if U is a vector space, then it satisfies properties S1 and S2 as these are vector space axioms V1 and V6. On the other hand, as in Example 4.2.9, we know the operations must satisfy axioms V2, V5, V7, V8, V9, and V10 since V is a vector space. For the remaining axioms we have

V1 Follows from property S1
V3 Follows from Theorem 1 and property S2 because for any u ∈ U we have 0u = 0 ∈ U
V4 Follows from Theorem 1 and property S2 because for any u ∈ U, the additive inverse of u is (-u) = (-1)u ∈ U
V6 Follows from property S2

Hence, all 10 axioms are satisfied. Therefore, U is also a vector space under the operations of V.

Remarks
1. When proving that a set U is a subspace of a vector space V, it is important not to forget to show that U is actually a subset of V.
2. As with subspaces of ℝ^n in Section 1.2, we typically show that the subset is non-empty by showing that it contains the zero vector of V.

EXAMPLE 11
In Exercise 2 you proved that S = {[a_1 0; 0 a_2] | a_1, a_2 ∈ ℝ} is a vector space. Thus, since S is a subset of M(2, 2), it is a subspace of M(2, 2).

EXAMPLE 12
Let U = {p(x) ∈ P_3 | p(3) = 0}. Show that U is a subspace of P_3.
Solution: By definition, U is a subset of P_3. The zero vector in P_3 maps x to 0 for all x; hence it maps 3 to 0. Therefore, the zero vector of P_3 is in U, and hence U is non-empty.
Let p(x), q(x) ∈ U and s ∈ ℝ. Then p(3) = 0 and q(3) = 0.
S1 (p + q)(3) = p(3) + q(3) = 0 + 0 = 0, so p(x) + q(x) ∈ U
S2 (sp)(3) = sp(3) = s(0) = 0, so sp(x) ∈ U
Hence, U is a subspace of P_3. Note that this also implies that U is itself a vector space.

EXAMPLE 13
Define the trace of a 2 × 2 matrix by tr([a_11 a_12; a_21 a_22]) = a_11 + a_22. Prove that S = {A ∈ M(2, 2) | tr(A) = 0} is a subspace of M(2, 2).
Solution: By definition, S is a subset of M(2, 2). The zero vector of M(2, 2) is O_{2,2} = [0 0; 0 0]. Clearly, tr(O_{2,2}) = 0, so O_{2,2} ∈ S.
Let A, B ∈ S and s ∈ ℝ. Then a_11 + a_22 = tr(A) = 0 and b_11 + b_22 = tr(B) = 0.
S1 tr(A + B) = tr([a_11 + b_11  a_12 + b_12; a_21 + b_21  a_22 + b_22]) = a_11 + a_22 + b_11 + b_22 = 0, so A + B ∈ S
S2 tr(sA) = tr([sa_11 sa_12; sa_21 sa_22]) = sa_11 + sa_22 = s(a_11 + a_22) = s(0) = 0, so sA ∈ S
Hence, S is a subspace of M(2, 2).
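Because the trace is additive and homogeneous, the closure properties S1 and S2 of Example 13 are easy to confirm with numpy; the specific matrices below are our own sample data:

```python
import numpy as np

A = np.array([[3.0, 7.0], [2.0, -3.0]])   # tr(A) = 3 + (-3) = 0
B = np.array([[-1.0, 4.0], [5.0, 1.0]])   # tr(B) = -1 + 1 = 0
s = 6.0

print(np.trace(A + B))   # 0.0, so A + B is again trace-zero (S1)
print(np.trace(s * A))   # 0.0, so sA is again trace-zero (S2)
```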

EXAMPLE 14
The vector space ℝ^2 is not a subspace of ℝ^3, since ℝ^2 is not a subset of ℝ^3. That is, if we take any vector x = [x_1; x_2] ∈ ℝ^2, this is not a vector in ℝ^3, since a vector in ℝ^3 has three components.

EXERCISE 3
Prove that U = {a + bx + cx^2 ∈ P_2 | b + c = a} is a subspace of P_2.

EXERCISE 4
Let V be a vector space. Prove that {0} is also a vector space, called the trivial vector space, under the same operations as V by proving it is a subspace of V.

In the previous exercise you proved that {0} is a subspace of any vector space V. Furthermore, by definition, V is a subspace of itself. We now prove that the set of all possible linear combinations of a set of vectors in a vector space V is also a subspace.

Theorem 2
If {v_1, ..., v_k} is a set of vectors in a vector space V and S is the set of all possible linear combinations of these vectors,
S = {t_1 v_1 + ... + t_k v_k | t_1, ..., t_k ∈ ℝ}
then S is a subspace of V.

Proof: By V1 and V6, t_1 v_1 + ... + t_k v_k ∈ V. Hence, S is a subset of V. Also, by taking t_i = 0 for 1 ≤ i ≤ k, we get
0v_1 + ... + 0v_k = 0 + ... + 0 = 0
by V9 and Theorem 1, so 0_V ∈ S.
Let x, y ∈ S. Then for some real numbers s_i and t_i, 1 ≤ i ≤ k, x = s_1 v_1 + ... + s_k v_k and y = t_1 v_1 + ... + t_k v_k. It follows that
x + y = (s_1 + t_1)v_1 + ... + (s_k + t_k)v_k
using V8. So, x + y ∈ S since (s_i + t_i) ∈ ℝ. Hence, S is closed under addition.
Similarly, for all t ∈ ℝ,
tx = (ts_1)v_1 + ... + (ts_k)v_k
by V7. Thus, S is also closed under scalar multiplication. Therefore, S is a subspace of V.

To match what we did in Sections 1.2, 3.1, and 4.1, we make the following definition.

Definition (Span, Spanning Set)
If S is the subspace of the vector space V consisting of all linear combinations of the vectors v_1, ..., v_k ∈ V, then S is called the subspace spanned by B = {v_1, ..., v_k}, and we say that the set B spans S. The set B is called a spanning set for the subspace S. We denote S by
S = Span{v_1, ..., v_k} = Span B

In Sections 1.2, 3.1, and 4.1, we saw that the concept of spanning is closely related to the concept of linear independence.

Definition (Linearly Independent, Linearly Dependent)
If B = {v_1, ..., v_k} is a set of vectors in a vector space V, then B is said to be linearly independent if the only solution to the equation
t_1 v_1 + ... + t_k v_k = 0
is t_1 = ... = t_k = 0; otherwise, B is said to be linearly dependent.

Remark
The procedure for determining if a vector is in a span or if a set is linearly independent in a general vector space is exactly the same as we saw for ℝ^n, M(m, n), and P_n in Sections 2.3, 3.1, and 4.1. We will see further examples of this in the next section.

PROBLEMS 4.2
Practice Problems
A1 Determine, with proof, which of the following sets are subspaces of the given vector space.
(a) {[x_1; x_2; x_3; x_4] | x_1 + 2x_2 = 0, x_1, x_2, x_3, x_4 ∈ ℝ} of ℝ^4
(b) {[a_1 a_2; a_3 a_4] | a_1 + 2a_2 = 0, a_1, a_2, a_3, a_4 ∈ ℝ} of M(2, 2)
(c) {a_0 + a_1 x + a_2 x^2 + a_3 x^3 | a_0 + 2a_1 = 0, a_0, a_1, a_2, a_3 ∈ ℝ} of P_3
(d) {[a_1 a_2; a_3 a_4] | a_1, a_2, a_3, a_4 ∈ ℤ} of M(2, 2)
(e) {[a_1 a_2; a_3 a_4] | a_1 a_4 - a_2 a_3 = 0, a_1, a_2, a_3, a_4 ∈ ℝ} of M(2, 2)
(f) {[a_1 a_2; a_2 a_1] | a_1, a_2 ∈ ℝ} of M(2, 2)

A2 Determine, with proof, whether the following subsets of M(n, n) are subspaces.
(a) The subset of diagonal matrices
(b) The subset of matrices that are in row echelon form
(c) The subset of symmetric matrices (A matrix A is symmetric if A^T = A or, equivalently, if a_ij = a_ji for all i and j.)
(d) The subset of upper triangular matrices

A3 Determine, with proof, whether the following subsets of P_5 are subspaces.
(a) {p(x) ∈ P_5 | p(-x) = p(x) for all x ∈ ℝ} (the subset of even polynomials)
(b) {(1 + x^2)p(x) | p(x) ∈ P_3}
(c) {a_0 + a_1 x + ... + a_4 x^4 | a_0 = a_4, a_1 = a_3, a_i ∈ ℝ}
(d) {p(x) ∈ P_5 | (p(0))^2 = 1}
(e) {a_0 + a_1 x + a_2 x^2 | a_0, a_1, a_2 ∈ ℝ}

A4 Let F be the vector space of all real-valued functions of a real variable. Determine, with proof, which of the following subsets of F are subspaces.
(a) {f ∈ F | f(3) = 0}
(b) {f ∈ F | f(3) = 1}
(c) {f ∈ F | f(-x) = f(x) for all x ∈ ℝ}
(d) {f ∈ F | f(x) ≥ 0 for all x ∈ ℝ}

A5 Show that any set of vectors in a vector space V that contains the zero vector is linearly dependent.

Homework Problems
B1 Determine, with proof, which of the following sets are subspaces of the given vector space.
(a) {[x_1; x_2; x_3; x_4] | x_1 + x_3 = x_2, x_1, x_2, x_3, x_4 ∈ ℝ} of ℝ^4
(b) {[a_1 a_2; a_3 a_4] | a_1 = a_3 = a_4, a_1, a_2, a_3, a_4 ∈ ℝ} of M(2, 2)
(c) {a_0 + a_1 x + a_2 x^2 + a_3 x^3 | a_0 = a_2 = a_3, a_0, a_1, a_2, a_3 ∈ ℝ} of P_3
(d) {a_0 + a_1 x | a_0, a_1 ∈ ℝ} of P_4
(e) {[a_1 a_2; a_3 a_4] | a_1, a_2, a_3, a_4 ∈ ℝ} of M(2, 2)
(f) {[a_1 a_2; a_3 0] | a_1 - a_3 = 1, a_1, a_2, a_3 ∈ ℝ} of M(2, 2)

B2 Determine, with proof, whether the following subsets of M(3, 3) are subspaces.
(a) {A ∈ M(3, 3) | tr(A) = 0}
(b) The subset of invertible 3 × 3 matrices
(c) The subset of 3 × 3 matrices A such that A[1; 1; 1] = [0; 0; 0]
(d) The subset of 3 × 3 matrices A such that A[1; 1; 1] = [1; 1; 1]
(e) The subset of skew-symmetric matrices. (A matrix A is skew-symmetric if A^T = -A; that is, a_ij = -a_ji for all i and j.)

B3 Determine, with proof, whether the following subsets of P_5 are subspaces.
(a) {p(x) ∈ P_5 | p(-x) = -p(x) for all x ∈ ℝ} (the subset of odd polynomials)
(b) {(p(x))^2 | p(x) ∈ P_2}
(c) {a_0 + a_1 x + a_4 x^4 | a_0 + a_1 + a_4 = 1, a_0, a_1, a_4 ∈ ℝ}
(d) {x^3 p(x) | p(x) ∈ P_2}
(e) {p(x) ∈ P_5 | p(1) = 0}

B4 Let F be the vector space of all real-valued functions of a real variable. Determine, with proof, which of the following subsets of F are subspaces.
(a) {f ∈ F | f(3) + f(5) = 0}
(b) {f ∈ F | f(1) + f(2) = 1}
(c) {f ∈ F | |f(x)| ≤ 1}
(d) {f ∈ F | f is increasing on ℝ}

Conceptual Problems
D1 Let V be a vector space.
(a) Prove that -x = (-1)x for every x ∈ V.
(b) Prove that the zero vector in V is unique.
(c) Prove that t0 = 0 for every t ∈ ℝ.

D2 Let V = {(a, b) | a, b ∈ ℝ, b > 0} and define addition by (a, b) ⊕ (c, d) = (ad + bc, bd) and scalar multiplication by t ⊙ (a, b) = (tab^(t-1), b^t) for any t ∈ ℝ. Prove that V is a vector space with these operations.

D3 Let V = {x ∈ ℝ | x > 0} and define addition by x ⊕ y = xy and scalar multiplication by t ⊙ x = x^t for any t ∈ ℝ. Prove that V is a vector space with these operations.

D4 Let L denote the set of all linear operators L : ℝ^n → ℝ^n with standard addition and scalar multiplication of linear mappings. Prove that L is a vector space under these operations.

D5 Suppose that U and V are vector spaces over ℝ. The Cartesian product of U and V is defined to be
U × V = {(u, v) | u ∈ U, v ∈ V}
(a) In U × V define addition and scalar multiplication by
(u_1, v_1) ⊕ (u_2, v_2) = (u_1 + u_2, v_1 + v_2),  t ⊙ (u_1, v_1) = (tu_1, tv_1)
Verify that with these operations U × V is a vector space.
(b) Verify that U × {0_V} is a subspace of U × V.
(c) Suppose instead that scalar multiplication is defined by t ⊙ (u, v) = (tu, v), while addition is defined as in part (a). Is U × V a vector space with these operations?

4.3 Bases and Dimensions

In Chapters 1 and 3, much of the discussion was dependent on the use of the standard basis in ℝ^n. For example, the dot product of two vectors a and b was defined in terms of the standard components of the vectors. As another example, the standard matrix [L] of a linear mapping was determined by calculating the images of the standard basis vectors. Therefore, it would be useful to define the same concept for any vector space.

Bases
Recall from Section 1.2 that the two important properties of the standard basis in ℝ^n were that it spanned ℝ^n and it was linearly independent. It is clear that we should want a basis B for a vector space V to be a spanning set so that every vector in V can be written as a linear combination of the vectors in B. Why would it be important that the set B be linearly independent? The following theorem answers this question.

Theorem 1 (Unique Representation Theorem)
Let B = {v_1, ..., v_n} be a spanning set for a vector space V. Then every vector in V can be expressed in a unique way as a linear combination of the vectors of B if and only if the set B is linearly independent.

Proof: Let x be any vector in V. Since Span B = V, we have that x can be written as a linear combination of the vectors in B. Assume that there are linear combinations
x = a_1 v_1 + ... + a_n v_n  and  x = b_1 v_1 + ... + b_n v_n
This gives
a_1 v_1 + ... + a_n v_n = b_1 v_1 + ... + b_n v_n
which implies
(a_1 - b_1)v_1 + ... + (a_n - b_n)v_n = 0
If B is linearly independent, then we must have a_i - b_i = 0, so a_i = b_i for 1 ≤ i ≤ n. Hence, x has a unique representation.
On the other hand, if B is linearly dependent, then
t_1 v_1 + ... + t_n v_n = 0
has a solution where at least one of the coefficients is non-zero. But also
0 = 0v_1 + ... + 0v_n
Hence, 0 can be expressed as a linear combination of the vectors in B in multiple ways.

Thus, if B is a linearly independent spanning set for a vector space V, then every vector in V can be written as a unique linear combination of the vectors in B.

Definition (Basis)
A set B of vectors in a vector space V is a basis if it is a linearly independent spanning set for V.

Remark
According to this definition, the trivial vector space {0} does not have a basis since any set of vectors containing the zero vector in a vector space V is linearly dependent. However, we would like every vector space to have a basis, so we define the empty set to be a basis for the trivial vector space.

EXAMPLE 1
The set of vectors {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]} in Exercise 3.1.2 is a basis for M(2, 2). It is called the standard basis for M(2, 2). The set of vectors {1, x, x^2, x^3} in Exercise 4.1.2 is a basis for P_3. In particular, the set {1, x, x^2, ..., x^n} is called the standard basis for P_n.

EXAMPLE 2
Prove that the set C = {[1; 1; 1], [1; 1; 0], [1; 0; 0]} is a basis for ℝ^3.
Solution: We need to show that Span C = ℝ^3 and that C is linearly independent. To prove that Span C = ℝ^3, we need to show that every vector x ∈ ℝ^3 can be written as a linear combination of the vectors in C. Consider
[x_1; x_2; x_3] = t_1[1; 1; 1] + t_2[1; 1; 0] + t_3[1; 0; 0] = [t_1 + t_2 + t_3; t_1 + t_2; t_1]
Row reducing the corresponding coefficient matrix gives
[1 1 1; 1 1 0; 1 0 0] ~ [1 0 0; 0 1 0; 0 0 1]
Observe that the rank of the coefficient matrix equals the number of rows, so by Theorem 2.2.2, the system is consistent for every x ∈ ℝ^3. Hence, Span C = ℝ^3. Moreover, since the rank of the coefficient matrix equals the number of columns, there are no parameters in the general solution. Therefore, we have a unique solution when we take x = 0, so C is also linearly independent. Hence, it is a basis for ℝ^3.
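A rank computation gives a quick machine check of this kind of argument. The sketch below (our own, using the rank characterization rather than an explicit row reduction) tests the set C from Example 2:

```python
import numpy as np

# Columns are the vectors of C.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0]])
rank = np.linalg.matrix_rank(A)
print(rank)  # 3: rank equals the number of rows (C spans R^3) and the
             # number of columns (C is linearly independent)
```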

EXAMPLE 3
Is the set B = {[1 2; -1 1], [0 1; 3 1], [2 5; 1 3]} a basis for the subspace Span B of M(2, 2)?
Solution: Since B is a spanning set for Span B, we just need to check if the vectors in B are linearly independent. Consider the equation
[0 0; 0 0] = t_1[1 2; -1 1] + t_2[0 1; 3 1] + t_3[2 5; 1 3] = [t_1 + 2t_3  2t_1 + t_2 + 5t_3; -t_1 + 3t_2 + t_3  t_1 + t_2 + 3t_3]
Row reducing the coefficient matrix of the corresponding system gives
[1 0 2; 2 1 5; -1 3 1; 1 1 3] ~ [1 0 2; 0 1 1; 0 0 0; 0 0 0]
Observe that this implies that there are non-trivial solutions to the system. For example, one non-trivial solution is given by t_1 = -2, t_2 = -1, and t_3 = 1, and you can verify that
(-2)[1 2; -1 1] + (-1)[0 1; 3 1] + (1)[2 5; 1 3] = [0 0; 0 0]
Therefore, the given vectors are linearly dependent and do not form a basis for Span B.
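The dependency found in Example 3 can be verified with numpy (our own illustration of the check, not part of the text):

```python
import numpy as np

A1 = np.array([[1, 2], [-1, 1]])
A2 = np.array([[0, 1], [3, 1]])
A3 = np.array([[2, 5], [1, 3]])

combo = -2 * A1 - A2 + A3   # the non-trivial combination t = (-2, -1, 1)
print(combo)                # the zero matrix

# Equivalently, flatten each matrix into R^4 and compute a rank:
M = np.column_stack([X.flatten() for X in (A1, A2, A3)])
print(np.linalg.matrix_rank(M))  # 2 < 3, so the set is linearly dependent
```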

EXAMPLE 4
Is the set C = {3 + 2x + 2x^2, 1 + x^2, 1 + x + x^2} a basis for P_2?
Solution: Consider the equation
a_0 + a_1 x + a_2 x^2 = t_1(3 + 2x + 2x^2) + t_2(1 + x^2) + t_3(1 + x + x^2)
= (3t_1 + t_2 + t_3) + (2t_1 + t_3)x + (2t_1 + t_2 + t_3)x^2
Row reducing the coefficient matrix of the corresponding system gives
[3 1 1; 2 0 1; 2 1 1] ~ [1 0 0; 0 1 0; 0 0 1]
Observe that this implies that the system is consistent and has a unique solution for all a_0 + a_1 x + a_2 x^2 ∈ P_2. Thus, C is a basis for P_2.

EXERCISE 1
Prove that the set B = {1 + 2x + x^2, 1 + x^2, 1 + x} is a basis for P_2.
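Since the coefficient matrix in Example 4 is invertible, every polynomial in P_2 has unique coordinates with respect to C. The following sketch solves for them; the sample polynomial 4 + 3x + 5x^2 is our own choice, not from the text:

```python
import numpy as np

# Columns hold the coefficient vectors (a0, a1, a2) of the basis polynomials
# 3 + 2x + 2x^2, 1 + x^2, 1 + x + x^2.
A = np.array([[3.0, 1.0, 1.0],
              [2.0, 0.0, 1.0],
              [2.0, 1.0, 1.0]])
target = np.array([4.0, 3.0, 5.0])   # 4 + 3x + 5x^2

t = np.linalg.solve(A, target)       # unique solution since A is invertible
print(t)                             # [-1.  2.  5.]
```

So 4 + 3x + 5x^2 = -(3 + 2x + 2x^2) + 2(1 + x^2) + 5(1 + x + x^2).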

EXAMPLE 5
Determine a basis for the subspace S = {p(x) ∈ P_2 | p(1) = 0} of P_2.
Solution: We first find a spanning set for S and then show that it is linearly independent. By the Factor Theorem, if p(1) = 0, then (x - 1) is a factor of p(x). That is, every polynomial p(x) ∈ S can be written in the form
p(x) = (x - 1)(ax + b) = a(x^2 - x) + b(x - 1)
Thus, we see that S = Span{x^2 - x, x - 1}. Consider
t_1(x^2 - x) + t_2(x - 1) = 0
The only solution is t_1 = t_2 = 0. Hence, {x^2 - x, x - 1} is linearly independent. Thus, {x^2 - x, x - 1} is a linearly independent spanning set of S and hence a basis.

Obtaining a Basis from an Arbitrary Finite Spanning Set
Many times throughout the rest of this book, we will need to determine a basis for a vector space. One standard way of doing this is to first determine a spanning set for the vector space and then to remove vectors from the spanning set until we have a basis. We now outline this procedure.
Suppose that T = {v_1, ..., v_k} is a spanning set for a non-trivial vector space V. We want to choose a subset of T that is a basis for V. If T is linearly independent, then T is a basis for V, and we are done. If T is linearly dependent, then t_1 v_1 + ... + t_k v_k = 0 has a solution where at least one of the coefficients is non-zero, say t_i ≠ 0. Then, we can solve the equation for v_i to get
v_i = -(1/t_i)(t_1 v_1 + ... + t_{i-1} v_{i-1} + t_{i+1} v_{i+1} + ... + t_k v_k)
So, for any x ∈ V we have
x = a_1 v_1 + ... + a_{i-1} v_{i-1} + a_i v_i + a_{i+1} v_{i+1} + ... + a_k v_k
= a_1 v_1 + ... + a_{i-1} v_{i-1} - (a_i/t_i)(t_1 v_1 + ... + t_{i-1} v_{i-1} + t_{i+1} v_{i+1} + ... + t_k v_k) + a_{i+1} v_{i+1} + ... + a_k v_k
Thus, any x ∈ V can be expressed as a linear combination of the set T\{v_i}. This shows that T\{v_i} is a spanning set for V. If T\{v_i} is linearly independent, it is a basis for V, and the procedure is finished. Otherwise, we repeat the procedure to omit a second vector, say v_j, and get T\{v_i, v_j}, which still spans V. In this fashion, we must eventually get a linearly independent set. (Certainly, if there is only one non-zero vector left, it forms a linearly independent set.) Thus, we obtain a subset of T that is a basis for V.

EXAMPLE 6
If T = {[1; 1; -2], [2; -1; 1], [1; -2; 3], [1; 5; 3]}, determine a subset of T that is a basis for Span T.
Solution: Consider
[0; 0; 0] = t_1[1; 1; -2] + t_2[2; -1; 1] + t_3[1; -2; 3] + t_4[1; 5; 3] = [t_1 + 2t_2 + t_3 + t_4; t_1 - t_2 - 2t_3 + 5t_4; -2t_1 + t_2 + 3t_3 + 3t_4]
We row reduce the corresponding coefficient matrix:
[1 2 1 1; 1 -1 -2 5; -2 1 3 3] ~ [1 0 -1 0; 0 1 1 0; 0 0 0 1]
The general solution is [t_1; t_2; t_3; t_4] = s[1; -1; 1; 0], s ∈ ℝ. Taking s = 1, we get t_1 = 1, t_2 = -1, t_3 = 1, t_4 = 0, which gives
[1; 1; -2] - [2; -1; 1] + [1; -2; 3] = [0; 0; 0]  or  [1; -2; 3] = -[1; 1; -2] + [2; -1; 1]
Thus, we can omit [1; -2; 3] from T and consider
T\{[1; -2; 3]} = {[1; 1; -2], [2; -1; 1], [1; 5; 3]}
Now consider
[0; 0; 0] = t_1[1; 1; -2] + t_2[2; -1; 1] + t_3[1; 5; 3]
The matrix is the same as above except that the third column is omitted, so the same row operations give
[1 2 1; 1 -1 5; -2 1 3] ~ [1 0 0; 0 1 0; 0 0 1]
Hence, the only solution is t_1 = t_2 = t_3 = 0, and we conclude that {[1; 1; -2], [2; -1; 1], [1; 5; 3]} is linearly independent and thus a basis for Span T.

EXERCISE 2
Let B = {1 + x, 2 + 2x + x^2, x + x^2, 1 + x^2}. Determine a subset of B that is a basis for Span B.
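The span-shrinking procedure of this subsection can be automated: place the vectors as columns, row reduce, and keep the vectors sitting in pivot columns. This generic sketch (our own) uses sympy's exact row reduction, shown on the vectors of Example 6:

```python
from sympy import Matrix

# The spanning set from Example 6.
vectors = [Matrix([1, 1, -2]), Matrix([2, -1, 1]),
           Matrix([1, -2, 3]), Matrix([1, 5, 3])]

A = Matrix.hstack(*vectors)
_, pivots = A.rref()                  # indices of the pivot columns
basis = [vectors[i] for i in pivots]  # vectors in pivot columns form a basis

print(pivots)  # (0, 1, 3): the third vector depends on the first two
```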

Dimension
We saw in Section 2.3 that every basis of a subspace S of ℝ^n contains the same number of vectors. We now prove that this result holds for general vector spaces. Observe that the proof of this result is essentially identical to that in Section 2.3.

Lemma 2
Suppose that V is a vector space and Span{v_1, ..., v_n} = V. If {u_1, ..., u_k} is a linearly independent set in V, then k ≤ n.

Proof: Since each u_i, 1 ≤ i ≤ k, is a vector in V, it can be written as a linear combination of the v_j's. We get
u_1 = a_11 v_1 + a_21 v_2 + ... + a_n1 v_n
u_2 = a_12 v_1 + a_22 v_2 + ... + a_n2 v_n
⋮
u_k = a_1k v_1 + a_2k v_2 + ... + a_nk v_n
Consider the equation
0 = t_1 u_1 + ... + t_k u_k
= t_1(a_11 v_1 + a_21 v_2 + ... + a_n1 v_n) + ... + t_k(a_1k v_1 + a_2k v_2 + ... + a_nk v_n)
= (a_11 t_1 + ... + a_1k t_k)v_1 + ... + (a_n1 t_1 + ... + a_nk t_k)v_n
Since 0 = 0v_1 + ... + 0v_n, comparing coefficients of the v_i we get the homogeneous system
a_11 t_1 + ... + a_1k t_k = 0
⋮
a_n1 t_1 + ... + a_nk t_k = 0
of n equations in k unknowns t_1, ..., t_k. If k > n, then this system would have a non-trivial solution, which would imply that {u_1, ..., u_k} is linearly dependent. But we assumed that {u_1, ..., u_k} is linearly independent, so we must have k ≤ n.

Theorem 3
If B = {v_1, ..., v_n} and C = {u_1, ..., u_k} are both bases of a vector space V, then k = n.

Proof: On one hand, B is a basis for V, so it is linearly independent. Also, C is a basis for V, so Span C = V. Thus, by Lemma 2, we get that n ≤ k. Similarly, C is linearly independent as it is a basis for V, and Span B = V, since B is a basis for V. So Lemma 2 gives k ≤ n. Therefore, k = n, as required.

As in Section 2.3, this theorem justifies the following definition of the dimension of a vector space.

Definition (Dimension)
If a vector space V has a basis with n vectors, then we say that the dimension of V is n and write
dim V = n
If a vector space V does not have a basis with finitely many elements, then V is called infinite-dimensional. The dimension of the trivial vector space is defined to be 0.

Remark
Properties of infinite-dimensional spaces are beyond the scope of this book.

EXAMPLE 7
(a) ℝ^n is n-dimensional because the standard basis contains n vectors.
(b) The vector space M(m, n) is (m × n)-dimensional since the standard basis has m × n vectors.
(c) The vector space P_n is (n + 1)-dimensional as it has the standard basis {1, x, x^2, ..., x^n}.
(d) The vector space C(a, b) is infinite-dimensional as it contains all polynomials (along with many other types of functions). Most function spaces are infinite-dimensional.

EXAMPLE 8
Let S = Span{[1; 3; 2], [-1; 2; 3], [-1; -3; -2], [4; 2; -2]}. Show that dim S = 2.
Solution: Consider
[0; 0; 0] = t_1[1; 3; 2] + t_2[-1; 2; 3] + t_3[-1; -3; -2] + t_4[4; 2; -2]
We row reduce the corresponding coefficient matrix:
[1 -1 -1 4; 3 2 -3 2; 2 3 -2 -2] ~ [1 0 -1 2; 0 1 0 -2; 0 0 0 0]
Observe that this implies that [-1; -3; -2] and [4; 2; -2] can be written as linear combinations of the first two vectors. Thus, S = Span{[1; 3; 2], [-1; 2; 3]}. Moreover, B = {[1; 3; 2], [-1; 2; 3]} is clearly linearly independent since neither vector is a scalar multiple of the other; hence B is a basis for S. Thus, dim S = 2.

EXAMPLE 9
Let S = {[a b; c d] ∈ M(2, 2) | a + b = d}. Determine the dimension of S.
Solution: Since d = a + b, observe that every matrix in S has the form
[a b; c a + b] = a[1 0; 0 1] + b[0 1; 0 1] + c[0 0; 1 0]
Thus,
S = Span{[1 0; 0 1], [0 1; 0 1], [0 0; 1 0]}
It is easy to show that this spanning set for S is also linearly independent and hence is a basis for S. Thus, dim S = 3.

EXERCISE 3
Find the dimension of S = {a + bx + cx^2 + dx^3 ∈ P_3 | a + b + c + d = 0}.
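The dimension count in Example 9 can be double-checked by flattening the three spanning matrices into ℝ^4 and computing a rank (our own illustration):

```python
import numpy as np

# The spanning matrices found in Example 9 for S = {A : a + b = d}.
B1 = np.array([[1, 0], [0, 1]])
B2 = np.array([[0, 1], [0, 1]])
B3 = np.array([[0, 0], [1, 0]])

M = np.column_stack([X.flatten() for X in (B1, B2, B3)])
print(np.linalg.matrix_rank(M))  # 3: the spanning set is independent, so dim S = 3
```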

Extending a Linearly Independent Subset to a Basis
Sometimes a linearly independent subset T = {v_1, ..., v_k} is given in an n-dimensional vector space V, and it is necessary to include these vectors in a basis for V. If Span T ≠ V, then there exists some vector w_{k+1} that is in V but not in Span T. Now consider
t_1 v_1 + ... + t_k v_k + t_{k+1} w_{k+1} = 0    (4.1)
If t_{k+1} ≠ 0, then we have
w_{k+1} = -(t_1/t_{k+1})v_1 - ... - (t_k/t_{k+1})v_k
and so w_{k+1} can be written as a linear combination of the vectors in T, which cannot be since w_{k+1} ∉ Span T. Therefore, we must have t_{k+1} = 0. In this case, (4.1) becomes
t_1 v_1 + ... + t_k v_k = 0
But T is linearly independent, which implies that t_1 = ... = t_k = 0. Thus, t_1 v_1 + ... + t_k v_k + t_{k+1} w_{k+1} = 0 implies that t_1 = ... = t_{k+1} = 0, and hence {v_1, ..., v_k, w_{k+1}} is linearly independent. Now, if Span{v_1, ..., v_k, w_{k+1}} = V, then it is a basis for V. If not, we repeat the procedure to add another vector w_{k+2} to get {v_1, ..., v_k, w_{k+1}, w_{k+2}}, which is linearly independent. In this fashion, we must eventually get a basis, since according to Lemma 2, there cannot be more than n linearly independent vectors in an n-dimensional vector space.

214

Chapter 4

EXAMPLE 10

Vector Spaces

Let B = { [1 1; 0 1], [-2 -1; 1 1] }. Extend B to a basis for M(2, 2).

Solution: We first want to determine whether B is a spanning set for M(2, 2). Consider

t_1 [1 1; 0 1] + t_2 [-2 -1; 1 1] = [b1 b2; b3 b4]

Row reducing the augmented matrix of the associated system gives

[ 1 -2 | b1 ]      [ 1 -2 | b1             ]
[ 1 -1 | b2 ]  ->  [ 0  1 | b2 - b1        ]
[ 0  1 | b3 ]      [ 0  0 | b1 - b2 + b3   ]
[ 1  1 | b4 ]      [ 0  0 | 2b1 - 3b2 + b4 ]

Hence, B is not a spanning set of M(2, 2), since any matrix [b1 b2; b3 b4] with b1 - b2 + b3 ≠ 0 (or 2b1 - 3b2 + b4 ≠ 0) is not in Span B. In particular, [0 0; 1 0] is not in the span of B.

Hence, by the procedure above, we should add this matrix to the set. We let

B1 = { [1 1; 0 1], [-2 -1; 1 1], [0 0; 1 0] }

and repeat the procedure. Consider

t_1 [1 1; 0 1] + t_2 [-2 -1; 1 1] + t_3 [0 0; 1 0] = [b1 b2; b3 b4]

Row reducing the augmented matrix of the associated system gives

[ 1 -2 0 | b1 ]      [ 1 -2 0 | b1             ]
[ 1 -1 0 | b2 ]  ->  [ 0  1 0 | b2 - b1        ]
[ 0  1 1 | b3 ]      [ 0  0 1 | b1 - b2 + b3   ]
[ 1  1 0 | b4 ]      [ 0  0 0 | 2b1 - 3b2 + b4 ]

So, any matrix [b1 b2; b3 b4] with 2b1 - 3b2 + b4 ≠ 0 is not in Span B1. For example, [0 0; 0 1] is not in the span of B1, and thus B1 is not a basis for M(2, 2). Adding [0 0; 0 1] to B1, we get

B2 = { [1 1; 0 1], [-2 -1; 1 1], [0 0; 1 0], [0 0; 0 1] }

By construction, B2 is linearly independent. Moreover, we can show that it spans M(2, 2). Thus, it is a basis for M(2, 2).
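The membership tests in an extension argument like this one reduce to rank computations once each 2 x 2 matrix is flattened to a vector in R^4. A sketch assuming numpy, with illustrative starting matrices:

```python
import numpy as np

# 2x2 matrices flattened to (b1, b2, b3, b4) = (top-left, top-right,
# bottom-left, bottom-right).  Illustrative starting set:
v1 = [1, 1, 0, 1]      # the matrix [1 1; 0 1]
v2 = [-2, -1, 1, 1]    # the matrix [-2 -1; 1 1]

def in_span(vectors, b):
    """b is in Span(vectors) iff adjoining b does not raise the rank."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.column_stack([A, b]))

print(in_span([v1, v2], [0, 0, 1, 0]))   # False: [0 0; 1 0] is outside the span
# Adding [0 0; 1 0] and then [0 0; 0 1] gives four independent vectors,
# hence a basis for M(2,2):
B2 = [v1, v2, [0, 0, 1, 0], [0, 0, 0, 1]]
print(np.linalg.matrix_rank(np.column_stack(B2)))   # 4
```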

Section 4.3   Bases and Dimension

EXERCISE 4
Extend the given set T to a basis for R^3.

Knowing the dimension of a finite dimensional vector space V is very useful when trying to invent a basis for V, as the next theorem demonstrates.

Theorem 4

Let V be an n-dimensional vector space. Then

(1) A set of more than n vectors in V must be linearly dependent.
(2) A set of fewer than n vectors cannot span V.
(3) A set with n elements of V is a spanning set for V if and only if it is linearly independent.

Proof:

(1) This is Lemma 2 above.

(2) Suppose that V can be spanned by a set of k < n vectors. If V is n-dimensional, then it has a basis containing n vectors. This means we have a set of n linearly independent vectors in a set spanned by k < n vectors, which contradicts (1).

(3) If B is a linearly independent set of n vectors that does not span V, then it can be extended to a basis for V by the procedure above. But this would give a linearly independent set of more than n vectors in V, which contradicts (1). Similarly, if B is a spanning set for V that is not linearly independent, then by the procedure above, there is a linearly independent proper subset of B that spans V. But then V would be spanned by a set of fewer than n vectors, which contradicts (2).

EXAMPLE 11

1. Produce a basis B for the plane P in R^3 with equation x1 + 2x2 = 0.
2. Extend the basis B to obtain a basis C for R^3.

Solution: (a) We know that a plane in R^3 has dimension 2. By Theorem 4, we just need to pick two linearly independent vectors that lie in the plane. Observe that v1 = [0; 0; 1] and v2 = [-2; 1; 0] both satisfy the equation of the plane and neither is a scalar multiple of the other. Thus they are linearly independent, and B = {v1, v2} is a basis for the plane P.

(b) From the procedure above, we need to add a vector that is not in the span of {v1, v2}. But Span{v1, v2} is the plane, so we need to pick any vector not in the plane. Observe that v3 = [1; 0; 0] does not satisfy the equation of the plane and hence is not in the plane. Thus, {v1, v2, v3} is a linearly independent set of three vectors in R^3 and therefore is a basis for R^3, according to Theorem 4.
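Both steps of this kind of construction are easy to verify numerically. A sketch assuming numpy; v1, v2, v3 are illustrative choices for the plane x1 + 2x2 = 0 (any choices satisfying the same conditions work):

```python
import numpy as np

n = np.array([1.0, 2.0, 0.0])      # normal vector of the plane x1 + 2x2 = 0
v1 = np.array([0.0, 0.0, 1.0])     # in the plane
v2 = np.array([-2.0, 1.0, 0.0])    # in the plane, not a multiple of v1
v3 = np.array([1.0, 0.0, 0.0])     # n . v3 = 1 != 0, so not in the plane

assert n @ v1 == 0 and n @ v2 == 0   # v1, v2 satisfy the plane equation
rank = np.linalg.matrix_rank(np.column_stack([v1, v2, v3]))
print(rank)   # 3: three independent vectors in R^3, hence a basis by Theorem 4
```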


EXERCISE 5

Produce a basis for the hyperplane in R^4 with equation x1 - x2 + x3 - 2x4 = 0 and extend the basis to obtain a basis for R^4.

PROBLEMS 4.3
Practice Problems

A1 Determine whether each of the given sets (a)-(e) is a basis for R^3.

A2 Let B be the given set of four vectors in R^4. Prove that B is a basis for R^4.

A3 Select a basis for Span B from each of the following sets B and determine the dimension of Span B.

A4 Select a basis for Span B from each of the following sets B and determine the dimension of Span B.

A5 Determine the dimension of the vector space of polynomials spanned by B.
(a) B = {1 + x, 1 + x + x^2, 1 + x^3}
(b) B = {1 + x, 1 - x, 1 + x^3, 1 - x^3}
(c) B = {1 + x + x^2, 1 - x^3, 1 - 2x + 2x^2 - x^3, 1 - x^2 + 2x^3, x^2 + x^3}

A6 (a) Using the method in Example 11, determine a basis for the plane 2x1 - x2 - x3 = 0 in R^3.
(b) Extend the basis of part (a) to obtain a basis for R^3.

A7 (a) Using the method in Example 11, determine a basis for the hyperplane x1 - x2 + x3 - x4 = 0 in R^4.
(b) Extend the basis of part (a) to obtain a basis for R^4.

A8 Obtain a basis for each of the following vector spaces and determine the dimension.
(a) S = {a + bx + cx^2 ∈ P2 | a = -c}
(b) S = the given subspace of M(2,2)
(c) S = the given subspace of R^3
(d) S = {p(x) ∈ P2 | p(2) = 0}
(e) S = {[a b; c d] ∈ M(2,2) | a = -c}


Homework Problems

B1 Determine whether each of the given sets (a)-(e) is a basis for R^3.

B2 Let B be the given set of matrices. Prove that B is a basis for M(2, 2).

B3 Determine whether each of the given sets is a basis for P2.

B4 Select a basis for Span B from each of the following sets B and determine the dimension of Span B.

B5 Select a basis for Span B from each of the following sets B and determine the dimension of Span B.

B6 Determine the dimension of the vector space of polynomials spanned by B.
(a) B = {1 + x + x^2, x + x^2 + x^3, 1 - x^3}

B7 (a) Using the method in Example 11, determine a basis for the plane x1 + 3x2 + 4x3 = 0 in R^3.
(b) Extend the basis of part (a) to obtain a basis for R^3.

B8 (a) Using the method in Example 11, determine a basis for the hyperplane x1 + x2 + 2x3 + x4 = 0 in R^4.
(b) Extend the basis of part (a) to obtain a basis for R^4.

B9 Obtain a basis for each of the following vector spaces and determine the dimension.
(a) S = the given subspace of M(2,2)
(b) S = {(1 + x^2)p(x) | p(x) ∈ P2}
(c) S = {x ∈ R^3 | x · v = 0} for the given vector v
(d) S = {[a b; c d] ∈ M(2,2) | a - b = 0, c = 0}
(e) S = {p(x) ∈ P5 | p(-x) = -p(x) for all x}

Conceptual Problems

D1 (a) It may be said that "a basis for a finite dimensional vector space V is a maximal (largest possible) linearly independent set in V." Explain why this makes sense in terms of statements in this section.
(b) It may be said that "a basis for a finite dimensional vector space V is a minimal spanning set for V." Explain why this makes sense in terms of statements in this section.

D2 Let V be an n-dimensional vector space. Prove that if U is a subspace of V and dim U = n, then U = V.


D3 (a) Show that if {v1, v2} is a basis for a vector space V, then for any real number t, {v1, v2 + tv1} is also a basis for V.
(b) Show that if {v1, v2, v3} is a basis for a vector space V, then for any s, t ∈ R, {v1, v2, v3 + tv1 + sv2} is also a basis for V.

D4 In Problem 4.2.D2 you proved that V = {(a, b) | a, b ∈ R, b > 0}, with addition defined by (a, b) ⊕ (c, d) = (ad + bc, bd) and scalar multiplication defined by t ⊙ (a, b) = (t a b^(t-1), b^t), is a vector space over R. Find, with justification, a basis for V and hence determine the dimension of V.

4.4 Coordinates with Respect to a Basis

In Section 4.3, we saw that a vector space may have many different bases. Why might we want a basis other than the standard basis? In some problems, it is much more convenient to use a different basis than the standard basis. For example, a stretch by a factor of 3 in R^2 in the direction of a given vector is geometrically easy to understand. However, it would be awkward to determine the standard matrix of this stretch and then determine its effect on any other vector. It would be much better to have a basis that takes advantage of the direction of the stretch and of the perpendicular direction, which remain unchanged under the stretch.

Alternatively, consider the reflection in the plane x1 + 2x2 - 3x3 = 0 in R^3. It is easy to describe this reflection by saying that it reverses the normal vector [1; 2; -3] and leaves unchanged any vectors lying in the plane (such as the vectors [-2; 1; 0] and [3; 0; 1]). See Figure 4.4.2. Describing this reflection in terms of these vectors gives more geometrical information than describing it in terms of the standard basis vectors.

Figure 4.4.2   Reflection in the plane x1 + 2x2 - 3x3 = 0.

Notice that in these examples, the geometry itself provides us with a preferred basis for the appropriate space. However, to make use of these preferred bases, we need to know how to represent an arbitrary vector in a vector space V in terms of a basis B for V.

Definition
Coordinate Vector

Suppose that B = {v_1, . . . , v_n} is a basis for the vector space V. If x ∈ V with x = x_1 v_1 + x_2 v_2 + · · · + x_n v_n, then the coordinate vector of x with respect to the basis B is

[x]_B = [x_1; . . . ; x_n]

Remarks
1. This definition makes sense because of the Unique Representation Theorem (Theorem 4.3.1).

2. Observe that the coordinate vector [x]_B depends on the order in which the basis vectors appear. In this book, "basis" always means ordered basis; that is, it is always assumed that a basis is specified in the order in which the basis vectors are listed.

3. We often say "the coordinates of x with respect to B" or "the B-coordinates of x" instead of the coordinate vector.

EXAMPLE 1

The given set B = {v1, v2} is a basis for R^2. Find the coordinates of the given vectors a and b with respect to the basis B.

Solution: For a, we must find a1 and a2 such that a = a1 v1 + a2 v2. By inspection, we see that a solution is a1 = 1 and a2 = 2. Thus,

[a]_B = [1; 2]

Similarly, we see that a solution of b = b1 v1 + b2 v2 is b1 = 3, b2 = -2. Hence

[b]_B = [3; -2]

Figure 4.4.3 shows R^2 with this basis and a. Notice that the use of the basis B means that the space is covered with two families of parallel coordinate lines, one with direction vector v1 and the other with direction vector v2. Coordinates are established relative to these two families. The axes of this new coordinate system are obviously not orthogonal to each other. Such non-orthogonal coordinate systems arise naturally in the study of some crystalline structures in materials science.
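Finding B-coordinates amounts to solving the linear system whose coefficient columns are the basis vectors. An illustrative sketch assuming numpy (the basis vectors here are stand-ins, not the ones pictured in the example):

```python
import numpy as np

def coords(basis, x):
    """B-coordinates of x: solve [v1 ... vn] c = x for c."""
    return np.linalg.solve(np.column_stack(basis), x)

B = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]   # illustrative basis of R^2
a = np.array([3.0, 1.0])
print(coords(B, a))   # a = 2*(1,1) + 1*(1,-1), so [a]_B = (2, 1)
```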


Figure 4.4.3   The basis B in R^2; the B-coordinate vector of a.

EXAMPLE 2

The given set B = {B1, B2, B3} of three 2 x 2 matrices is a basis for the subspace Span B. Determine whether the given matrices A and B are in Span B. If they are, determine their B-coordinate vectors.

Solution: We are required to determine whether there are numbers u1, u2, u3 and/or v1, v2, v3 such that

u1 B1 + u2 B2 + u3 B3 = A   and   v1 B1 + v2 B2 + v3 B3 = B

Since the two systems have the same coefficient matrix, augment the coefficient matrix twice by adjoining both right-hand sides and row reduce. It is then clear that the system with the second matrix B is not consistent, so B is not in the subspace spanned by B. On the other hand, we see that the system with the first matrix A has the unique solution u1 = 1, u2 = 1, and u3 = -3. Thus,

[A]_B = [1; 1; -3]

Note that there are only three B-coordinates because the basis B has only three vectors. Also note that there is an immediate check available, as we can verify that A = B1 + B2 - 3B3.

EXERCISE 1
Determine the coordinate vector of the given matrix with respect to the given basis B of Span B.

EXAMPLE 3

Suppose that you have written a computer program to perform certain operations with polynomials in P2. You need to include some method of inputting the polynomial you are considering. If you use the standard basis {1, x, x^2} for P2, to input the polynomial 3 - 5x + 2x^2, you would surely write your program in such a way that you would type "3, -5, 2", the standard coordinate vector, as the input.

On the other hand, for some problems in differential equations, you might prefer the basis B = {1 - x^2, x, 1 + x^2}. To find the B-coordinates of 3 - 5x + 2x^2, we must find t1, t2, and t3 such that

3 - 5x + 2x^2 = t1(1 - x^2) + t2(x) + t3(1 + x^2)

Row reducing the corresponding augmented matrix gives

[ 1 0 1 |  3 ]      [ 1 0 0 | 1/2 ]
[ 0 1 0 | -5 ]  ->  [ 0 1 0 | -5  ]
[-1 0 1 |  2 ]      [ 0 0 1 | 5/2 ]

It follows that the coordinates of 3 - 5x + 2x^2 with respect to B are

[3 - 5x + 2x^2]_B = [1/2; -5; 5/2]

Thus, if your computer program is written to work in the basis B, then you would input "0.5, -5, 2.5".

We might need to input several polynomials into the computer program written to work in the basis B. In this case, we would want a much faster way of converting standard coordinates to coordinates with respect to the basis B. We now develop a method for doing this.
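The row reduction in this example is equivalent to solving one linear system; a quick check assuming numpy:

```python
import numpy as np

# Columns hold the standard coordinates of the basis B = {1 - x^2, x, 1 + x^2}:
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [-1.0, 0.0, 1.0]])
p = np.array([3.0, -5.0, 2.0])   # the polynomial 3 - 5x + 2x^2
c = np.linalg.solve(A, p)
print(c)   # the B-coordinates (0.5, -5, 2.5)
```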

Theorem 1

Let B be a basis for a finite dimensional vector space V. Then, for any x, y ∈ V and t ∈ R we have

[tx + y]_B = t[x]_B + [y]_B


Proof: Let B = {v_1, . . . , v_n}. Then, for any vectors x = x_1 v_1 + · · · + x_n v_n and y = y_1 v_1 + · · · + y_n v_n in V and any t ∈ R, we have tx + y = (t x_1 + y_1) v_1 + · · · + (t x_n + y_n) v_n, so

[tx + y]_B = [t x_1 + y_1; . . . ; t x_n + y_n] = t [x_1; . . . ; x_n] + [y_1; . . . ; y_n] = t[x]_B + [y]_B

Let B be a basis for an n-dimensional vector space V and let C = {w_1, . . . , w_n} be another basis for V. Consider x ∈ V. Writing x as a linear combination of the vectors in C gives

x = x_1 w_1 + · · · + x_n w_n

Taking B-coordinates gives

[x]_B = [x_1 w_1 + · · · + x_n w_n]_B = x_1 [w_1]_B + · · · + x_n [w_n]_B = [ [w_1]_B · · · [w_n]_B ] [x_1; . . . ; x_n]

Since [x_1; . . . ; x_n] = [x]_C, we see that this equation gives a formula for calculating the B-coordinates of x from the C-coordinates of x using simple matrix-vector multiplication. We call this equation the change of coordinates equation and make the following definition.

Definition
Change of Coordinates Matrix

Let B and C = {w_1, . . . , w_n} both be bases for a vector space V. The matrix P = [ [w_1]_B · · · [w_n]_B ] is called the change of coordinates matrix from C-coordinates to B-coordinates and satisfies

[x]_B = P[x]_C

Of course, we could exchange the roles of B and C to find the change of coordinates matrix Q from B-coordinates to C-coordinates.
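The change of coordinates matrix can be sketched directly from the definition, assuming numpy and bases of R^n represented by their vectors (the bases below are illustrative):

```python
import numpy as np

def coords(basis, x):
    return np.linalg.solve(np.column_stack(basis), x)

def change_of_coords(B, C):
    """The matrix P whose j-th column is [w_j]_B, so that [x]_B = P [x]_C."""
    return np.column_stack([coords(B, w) for w in C])

B = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
C = [np.array([0.0, 1.0]), np.array([2.0, 1.0])]
P = change_of_coords(B, C)

x = np.array([3.0, 2.0])
# The change of coordinates equation [x]_B = P [x]_C:
assert np.allclose(coords(B, x), P @ coords(C, x))
```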

Theorem 2

Let B and C both be bases for a finite-dimensional vector space V. Let P be the change of coordinates matrix from C-coordinates to B-coordinates. Then P is invertible and P^(-1) is the change of coordinates matrix from B-coordinates to C-coordinates.

The proof is left to Problem D4.

EXAMPLE 4

Let S = {e1, e2, e3} be the standard basis for R^3 and let B = {v1, v2, v3} be the given basis. Find the change of coordinates matrix Q from B-coordinates to S-coordinates. Find the change of coordinates matrix P from S-coordinates to B-coordinates. Verify that PQ = I.

Solution: To find the change of coordinates matrix Q, we need to find the coordinates of the vectors in B with respect to the standard basis S. Since [v_j]_S = v_j, the columns of Q are just v1, v2, v3, so Q = [v1 v2 v3].

To find the change of coordinates matrix P, we need to find the coordinates of the standard basis vectors with respect to the basis B. To do this, we solve the three augmented systems [v1 v2 v3 | e_j] for j = 1, 2, 3. To make this easier, we row reduce the triple-augmented matrix [v1 v2 v3 | e1 e2 e3]; the right-hand block of the reduced matrix is P. Multiplying the two matrices together, we then verify that PQ = I.
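Because Q has the basis vectors as its columns and P is its inverse, the verification PQ = I is immediate numerically. A sketch assuming numpy, with an illustrative basis B:

```python
import numpy as np

# Illustrative basis of R^3, stored as the columns of Q:
Q = np.column_stack([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [1.0, 1.0, 0.0]])
P = np.linalg.inv(Q)   # change of coordinates from S- to B-coordinates
print(np.allclose(P @ Q, np.eye(3)))   # True
```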

EXAMPLE 5

Let B = {1 - x^2, x, 1 + x^2}. Find [a + bx + cx^2]_B.

Solution: If we find the change of coordinates matrix P from the standard basis S = {1, x, x^2} of P2 to B, then we will have

[a + bx + cx^2]_B = P[a + bx + cx^2]_S = P[a; b; c]

Hence, we just need to find the coordinates of the standard basis vectors with respect to the basis B. To do this, we solve the three systems of linear equations given by

t11(1 - x^2) + t21(x) + t31(1 + x^2) = 1
t12(1 - x^2) + t22(x) + t32(1 + x^2) = x
t13(1 - x^2) + t23(x) + t33(1 + x^2) = x^2

We row reduce the triple-augmented matrix to get

[ 1 0 1 | 1 0 0 ]      [ 1 0 0 | 1/2 0 -1/2 ]
[ 0 1 0 | 0 1 0 ]  ->  [ 0 1 0 |  0  1   0  ]
[-1 0 1 | 0 0 1 ]      [ 0 0 1 | 1/2 0  1/2 ]

Hence,

P = [1/2 0 -1/2; 0 1 0; 1/2 0 1/2]

and

[a + bx + cx^2]_B = P[a; b; c] = [ (1/2)a - (1/2)c; b; (1/2)a + (1/2)c ]
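A quick numerical check of this coordinate formula (assuming numpy): applying the basis matrix to the computed coordinates must reproduce the standard coordinates (a, b, c).

```python
import numpy as np

def coords_P2(a, b, c):
    """[a + b x + c x^2]_B for B = {1 - x^2, x, 1 + x^2}."""
    return np.array([(a - c) / 2, b, (a + c) / 2])

# Columns are the standard coordinates of 1 - x^2, x, 1 + x^2:
A = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 1.0]])
for (a, b, c) in [(3.0, -5.0, 2.0), (1.0, 0.0, 1.0)]:
    # Reconstructing a + b x + c x^2 from its B-coordinates must succeed:
    assert np.allclose(A @ coords_P2(a, b, c), [a, b, c])
```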

EXERCISE 2
Let S = {e1, e2, e3} be the standard basis for R^3 and let B be the given basis. Find the change of coordinates matrix Q from B-coordinates to S-coordinates. Find the change of coordinates matrix P from S-coordinates to B-coordinates. Verify that PQ = I.

PROBLEMS 4.4
Practice Problems

A1 In each case, check that the given vectors in B are linearly independent (and therefore form a basis for the subspace they span). Then determine the coordinates of x and y with respect to B.

A2 (a) Verify that the given set B is a basis for the plane 2x1 - x2 - 2x3 = 0.
(b) For each of the given vectors (i)-(iii), determine whether it lies in the plane of part (a). If it does, find the vector's B-coordinates.

A3 (a) Verify that B = {1 + x^2, x + 2x^2, -1 + x + x^2} is a basis for P2.
(b) Determine the coordinates relative to B of the following polynomials.
(i) p(x) = 1
(ii) q(x) = 4 - 2x + 7x^2
(iii) r(x) = -2 - 2x + 3x^2
(c) Determine [2 - 4x + 10x^2]_B and use your answers to part (b) to check that
[4 - 2x + 7x^2]_B + [-2 - 2x + 3x^2]_B = [(4 - 2) + (-2 - 2)x + (7 + 3)x^2]_B

A4 In each case, do the following.
(i) Determine whether the given matrix A belongs to Span B.
(ii) Use your row reduction from (i) to explain whether B is a basis for Span B.
(iii) If B is a basis for Span B and A ∈ Span B, determine [A]_B.

A5 For the given basis B, find the change of coordinates matrix to and from the standard basis of each vector space.
(a) the given basis for R^3
(b) B = {1 - 2x + 5x^2, 1 - 2x^2, x + x^2} for P2
(c) the given basis for the vector space of 2 x 2 upper-triangular matrices

Homework Problems

B1 In each case, check that the given vectors in B are linearly independent (and therefore form a basis for the subspace they span). Then determine the coordinates of x and y with respect to B.
(a), (b) the given sets of vectors with the given x and y
(c) B = {x + x^2, -x + 3x^2, 1 + x - x^2}, x = -1 + 3x - x^2, y = 3 + 2x^2
(d) B = {1 + 2x + 2x^2, -3x - 3x^2, -3 - 3x}, x = 3 - 3x^2, y = 1 + x^2
(e) B = {1 + x + x^3, 1 + 2x^2 + x^3 + x^4, x^2 + x^3 + 3x^4}, x = 2 - 2x + 5x^2 - x^3 - 5x^4, y = -1 - 3x + 3x^2 - 2x^3 - x^4

B2 (a) Verify that the given set B is a basis for the plane 2x1 - 3x2 + 2x3 = 0.
(b) For each of the given vectors (i)-(iii), determine whether it lies in the plane of part (a). If it does, find the vector's B-coordinates.

B3 (a) Verify that the given set B is a basis for R^3.
(b) Determine the coordinates relative to B of the given vectors.
(c) Determine the B-coordinates of the given vector and use your answers to part (b) to check your result.

B4 (a) Verify that B = {1 + x^2, 1 + x, x + x^2} is a basis for P2.
(b) Determine the coordinates relative to B of the following polynomials.
(i) p(x) = 3 + 4x + 5x^2
(ii) q(x) = 4 + 5x - 7x^2
(iii) r(x) = 1 + x + x^2
(c) Determine [4 + 5x + 6x^2]_B and use your answers to part (b) to check that
[3 + 4x + 5x^2]_B + [1 + x + x^2]_B = [(3 + 1) + (4 + 1)x + (5 + 1)x^2]_B

B5 In each case, do the following.
(i) Determine whether the given matrix A belongs to Span B.
(ii) Use your row reduction from (i) to explain whether B is a basis for Span B.
(iii) If B is a basis for Span B and A ∈ Span B, determine [A]_B.

B6 For the given basis B, find the change of coordinates matrix to and from the standard basis of each vector space.
(a) the given basis for R^3
(b) B = {-1 + 2x^2, 1 + x + x^2, 1 - x - 3x^2} for P2
(c) the given basis for the vector space of 2 x 2 diagonal matrices

Conceptual Problems

D1 Suppose that B = {v_1, . . . , v_k} is a basis for a vector space V, that C = {w_1, . . . , w_k} is another basis for V, and that [x]_B = [x]_C for every x ∈ V. Must it be true that v_i = w_i for each 1 ≤ i ≤ k? Explain or prove your conclusion.

D2 Suppose V is a vector space with basis B = {v1, v2, v3, v4}. Then C = {v3, v2, v4, v1} is also a basis of V. Find a matrix P such that P[x]_B = [x]_C.

D3 Let B and C be the given bases for R^2, and let L : R^2 -> R^2 be the linear mapping such that [L(x)]_C = [x]_B.
(a), (b) Find the images under L of the two given vectors.

D4 Prove Theorem 2.


4.5 General Linear Mappings

In Chapter 3 we looked at linear mappings L : R^n -> R^m and found that they can be useful in solving some problems. Since vector spaces encompass the essential properties of R^n, it makes sense that we can also define linear mappings whose domain and codomain are other vector spaces. This also turns out to be extremely useful and important in many real-world applications.

Definition
Linear Mapping

If V and W are vector spaces over R, a function L : V -> W is a linear mapping if it satisfies the linearity properties

L1   L(x + y) = L(x) + L(y)
L2   L(tx) = tL(x)

for all x, y ∈ V and t ∈ R. If W = V, then L may be called a linear operator.

As before, the two properties can be combined into one statement:

L(tx + y) = tL(x) + L(y)   for all x, y ∈ V, t ∈ R

Moreover, we have that L(t_1 v_1 + · · · + t_k v_k) = t_1 L(v_1) + · · · + t_k L(v_k) for any v_1, . . . , v_k ∈ V and t_1, . . . , t_k ∈ R.

EXAMPLE 1

Let L : M(2,2) -> P2 be defined by L([a b; c d]) = c + (b + d)x + ax^2. Prove that L is a linear mapping.

Solution: For any [a1 b1; c1 d1], [a2 b2; c2 d2] ∈ M(2,2) and t ∈ R, we have

L(t[a1 b1; c1 d1] + [a2 b2; c2 d2]) = L([ta1 + a2  tb1 + b2; tc1 + c2  td1 + d2])
= (tc1 + c2) + (tb1 + b2 + td1 + d2)x + (ta1 + a2)x^2
= t(c1 + (b1 + d1)x + a1 x^2) + (c2 + (b2 + d2)x + a2 x^2)
= tL([a1 b1; c1 d1]) + L([a2 b2; c2 d2])

So, L is linear.
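Linearity arguments like this one can be spot-checked numerically by representing polynomials as coefficient vectors. An illustrative sketch assuming numpy and the mapping L([a b; c d]) = c + (b + d)x + a x^2:

```python
import numpy as np

def L(A):
    """L([[a, b], [c, d]]) = c + (b + d)x + a x^2, returned as the coefficient
    vector (constant term, x term, x^2 term)."""
    (a, b), (c, d) = A
    return np.array([c, b + d, a])

rng = np.random.default_rng(0)
A1, A2 = rng.random((2, 2)), rng.random((2, 2))
t = 3.0
# Combined linearity property L(t A1 + A2) = t L(A1) + L(A2):
assert np.allclose(L(t * A1 + A2), t * L(A1) + L(A2))
```

A check like this cannot prove linearity, but a single failure would disprove it.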

EXAMPLE 2

Let M : P2 -> P2 be defined by M(a0 + a1 x + a2 x^2) = a1 + 2 a2 x. Prove that M is a linear operator.

Solution: Let p(x) = a0 + a1 x + a2 x^2, q(x) = b0 + b1 x + b2 x^2, and t ∈ R. Then,

M(tp(x) + q(x)) = M((t a0 + b0) + (t a1 + b1)x + (t a2 + b2)x^2)
= (t a1 + b1) + 2(t a2 + b2)x
= t(a1 + 2 a2 x) + (b1 + 2 b2 x)
= tM(p(x)) + M(q(x))

Hence, M is linear.


EXERCISE 1
Let L : R^3 -> M(2,2) be defined by L([x1; x2; x3]) = [x1 + x2  x2 + x3; 0  x1 + x3]. Prove that L is linear.

Remark
Since a linear mapping L : V -> W is just a function, we define operations (addition, scalar multiplication, and composition) and the concept of invertibility on these more general linear mappings in the same way as we did for linear mappings R^n -> R^m.

As we did with linear mappings in Chapter 3, it is important to consider the range and nullspace of general linear mappings.

Definition
Range
Nullspace

Let V and W be vector spaces over R. The range of a linear mapping L : V -> W is defined to be the set

Range(L) = {L(x) ∈ W | x ∈ V}

The nullspace of L is the set of all vectors in V whose image under L is the zero vector 0_W. We write

Null(L) = {x ∈ V | L(x) = 0_W}

EXAMPLE 3

Let L : M(2,2) -> P2 be the linear mapping defined by L([a b; c d]) = c + (b + d)x + ax^2. Determine whether 1 + x + x^2 is in the range of L, and if it is, determine a matrix A such that L(A) = 1 + x + x^2.

Solution: We want to find a, b, c, d such that

1 + x + x^2 = L([a b; c d]) = c + (b + d)x + ax^2

Hence, we must have c = 1, b + d = 1, and a = 1. Observe that we get a system of linear equations that is consistent with infinitely many solutions. Thus, 1 + x + x^2 ∈ Range(L), and one choice of A such that L(A) = 1 + x + x^2 is A = [1 1; 1 0].

EXAMPLE 4

Let L : P2 -> R^3 be the linear mapping defined by

L(a + bx + cx^2) = [a - b; b - c; c - a]

Determine whether [1; 1; 1] is in the range of L.

Solution: We want to find a, b, and c such that L(a + bx + cx^2) = [1; 1; 1]. This gives us the system of linear equations a - b = 1, b - c = 1, and c - a = 1. Row reducing the corresponding augmented matrix gives

[ 1 -1  0 | 1 ]      [ 1 -1  0 | 1 ]
[ 0  1 -1 | 1 ]  ->  [ 0  1 -1 | 1 ]
[-1  0  1 | 1 ]      [ 0  0  0 | 3 ]

Hence, the system is inconsistent, so [1; 1; 1] is not in the range of L.

Theorem 1

Let V and W be vector spaces and let L : V -> W be a linear mapping. Then

(1) L(0_V) = 0_W
(2) Null(L) is a subspace of V
(3) Range(L) is a subspace of W

The proof is left as Problem D1.

As with any other subspace, it is often useful to find a basis for the nullspace and range of a linear mapping.

EXAMPLE 5

Determine a basis for the range and nullspace of the linear mapping L : P1 -> R^3 defined by

L(a + bx) = [a; a; a - 2b]

Solution: If a + bx ∈ Null(L), then we have

[0; 0; 0] = L(a + bx) = [a; a; a - 2b]

Hence, a = 0 and a - 2b = 0, which implies that b = 0. Thus, the only polynomial in the nullspace of L is the zero polynomial. That is, Null(L) = {0}, and so a basis for Null(L) is the empty set.

Any vector y in the range of L has the form

y = [a; a; a - 2b] = a[1; 1; 1] + b[0; 0; -2]

Thus, Range(L) = Span C, where C = {[1; 1; 1], [0; 0; -2]}. Moreover, C is clearly linearly independent. Hence C is a basis for the range of L.

EXAMPLE 6

Determine a basis for the range and nullspace of the linear mapping L : M(2,2) -> P2 defined by L([a b; c d]) = (b + c) + (c - d)x^2.

Solution: If [a b; c d] ∈ Null(L), then 0 = L([a b; c d]) = (b + c) + (c - d)x^2, so b + c = 0 and c - d = 0. Thus, b = -c and d = c, so every matrix in the nullspace of L has the form

[a -c; c c] = a[1 0; 0 0] + c[0 -1; 1 1]

Thus, B = {[1 0; 0 0], [0 -1; 1 1]} is a linearly independent spanning set for Null(L) and hence is a basis for Null(L).

Any polynomial in the range of L has the form (b + c) + (c - d)x^2. Hence Range(L) = Span{1, x^2}, since we can get any polynomial of the form a0 + a2 x^2 by taking b = a0, c = 0, and d = -a2. Also, {1, x^2} is clearly linearly independent and hence a basis for Range(L).
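Mappings of this kind can be represented by a matrix acting on the flattened entries (a, b, c, d), after which rank and nullspace computations become routine. A sketch assuming numpy and the mapping L([a b; c d]) = (b + c) + (c - d)x^2:

```python
import numpy as np

# Matrix of L : M(2,2) -> P2 acting on (a, b, c, d);
# rows give the coefficients of 1, x, x^2:
M = np.array([[0.0, 1.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, -1.0]])

rank = np.linalg.matrix_rank(M)
print(rank, 4 - rank)   # rank 2 and nullity 2

# The nullspace basis matrices, flattened to (a, b, c, d):
n1 = np.array([1.0, 0.0, 0.0, 0.0])    # [1 0; 0 0]
n2 = np.array([0.0, -1.0, 1.0, 1.0])   # [0 -1; 1 1]
assert np.allclose(M @ n1, 0) and np.allclose(M @ n2, 0)
```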

EXERCISE 2

Determine a basis for the range and nullspace of the linear mapping L : R^3 -> M(2,2) defined by

L([x1; x2; x3]) = [x1 + x3  x2; x2  x1 + x3]

Observe that in each of these examples, the dimension of the range of L plus the dimension of the nullspace of L equals the dimension of the domain of L. This result reminds us of the Rank Theorem (Theorem 3.4.8). Before we prove the similar result for general linear mappings, we make some definitions to make this look even more like the Rank Theorem.

Definition
Rank of a Linear Mapping

Let V and W be vector spaces over R. The rank of a linear mapping L : V -> W is the dimension of the range of L:

rank(L) = dim(Range(L))

Definition
Nullity of a Linear Mapping

Let V and W be vector spaces over R. The nullity of a linear mapping L : V -> W is the dimension of the nullspace of L:

nullity(L) = dim(Null(L))

Theorem 2   [Rank-Nullity Theorem]

Let V and W be vector spaces over R with dim V = n, and let L : V -> W be a linear mapping. Then,

rank(L) + nullity(L) = n

Proof: The idea of the proof is to assume that a basis for the nullspace of L contains k vectors and show that we can then construct a basis for the range of L that contains n - k vectors.

Let B = {v_1, . . . , v_k} be a basis for Null(L), so that nullity(L) = k. Since V is n-dimensional, we can use the procedure in Section 4.3: there exist vectors u_{k+1}, . . . , u_n such that {v_1, . . . , v_k, u_{k+1}, . . . , u_n} is a basis for V.

Now consider any vector w in the range of L. Then w = L(x) for some x ∈ V. But any x ∈ V can be written as a linear combination of the vectors in the basis {v_1, . . . , v_k, u_{k+1}, . . . , u_n}, so there exist t_1, . . . , t_n such that x = t_1 v_1 + · · · + t_k v_k + t_{k+1} u_{k+1} + · · · + t_n u_n. Then,

w = L(t_1 v_1 + · · · + t_k v_k + t_{k+1} u_{k+1} + · · · + t_n u_n)
  = t_1 L(v_1) + · · · + t_k L(v_k) + t_{k+1} L(u_{k+1}) + · · · + t_n L(u_n)

But each v_i is in the nullspace of L, so L(v_i) = 0_W, and so we have

w = t_{k+1} L(u_{k+1}) + · · · + t_n L(u_n)

Therefore, any w ∈ Range(L) can be expressed as a linear combination of the vectors in the set C = {L(u_{k+1}), . . . , L(u_n)}. Thus, C is a spanning set for Range(L). Is it linearly independent? We consider

t_{k+1} L(u_{k+1}) + · · · + t_n L(u_n) = 0_W

By the linearity of L, this is equivalent to

L(t_{k+1} u_{k+1} + · · · + t_n u_n) = 0_W

If this is true, then t_{k+1} u_{k+1} + · · · + t_n u_n is a vector in the nullspace of L. Hence, for some d_1, . . . , d_k, we have

t_{k+1} u_{k+1} + · · · + t_n u_n = d_1 v_1 + · · · + d_k v_k

But this is impossible unless all t_i and d_i are zero, because {v_1, . . . , v_k, u_{k+1}, . . . , u_n} is a basis for V and hence linearly independent.

It follows that C is a linearly independent spanning set for Range(L). Hence it is a basis for Range(L) containing n - k vectors. Thus, rank(L) = n - k and rank(L) + nullity(L) = (n - k) + k = n, as required.

EXAMPLE 7

Determine the rank and nullity of L : M(2,2) -> P3 defined by

L([a b; c d]) = cx + (a + b)x^3

and verify the Rank-Nullity Theorem.

Solution: If [a b; c d] ∈ Null(L), then

0 = L([a b; c d]) = cx + (a + b)x^3

Hence, we have a + b = 0 and c = 0, and so every matrix in Null(L) has the form

[a -a; 0 d] = a[1 -1; 0 0] + d[0 0; 0 1]

Therefore, Null(L) = Span{[1 -1; 0 0], [0 0; 0 1]}. Moreover, we see that the set B = {[1 -1; 0 0], [0 0; 0 1]} is linearly independent, so B is a basis for Null(L) and hence nullity(L) = 2.

Clearly a basis for the range of L is {x, x^3}, since this set is linearly independent and spans the range. Thus rank(L) = 2.

Then, as predicted by the Rank-Nullity Theorem, we have

rank(L) + nullity(L) = 2 + 2 = 4 = dim M(2,2)
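The verification can be repeated numerically by writing L as a matrix on the flattened entries. A sketch assuming numpy and the mapping L([a b; c d]) = cx + (a + b)x^3:

```python
import numpy as np

# Matrix of L : M(2,2) -> P3 acting on (a, b, c, d);
# rows give the coefficients of 1, x, x^2, x^3:
M = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 0.0]])

rank = np.linalg.matrix_rank(M)
nullity = 4 - rank
print(rank, nullity, rank + nullity)   # 2 2 4 = dim M(2,2)
```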

PROBLEMS 4.5
Practice Problems

A1 Prove that the following mappings are linear.
(a) L : R^3 -> R^2 defined by L(x1, x2, x3) = (x1 + x2, x1 + x2 + x3)
(b) L : R^3 -> P1 defined by L([a; b; c]) = (a + b) + (a + b + c)x
(c) tr : M(2,2) -> R defined by tr([a b; c d]) = a + d (Taking the trace of a matrix is a linear operation.)
(d) T : P3 -> M(2,2) defined by T(a + bx + cx^2 + dx^3) = [a b; c d]

A2 Determine which of the following mappings are linear.
(a) det : M(2,2) -> R defined by det([a b; c d]) = ad - bc
(b) L : P2 -> P2 defined by L(a + bx + cx^2) = (a - b) + (b + c)x
(c) T : R^2 -> R^2, with the given formula
(d) M : M(2,2) -> M(2,2), with the given formula

A3 For each of the following, determine whether the given vector y is in the range of the given linear mapping L : V -> W. If it is, find a vector x ∈ V such that L(x) = y.
(a) L : R^4 -> R^4, with the given formula and y
(b) L : R^3 -> P1, with the given formula and y
(c) L : P2 -> P1 defined by L(a + bx + cx^2) = (b + c) + (-b - c)x, y = 1 + x
(d) L : R^4 -> M(2,2), with the given formula and y

A4 Find a basis for the range and nullspace of the following linear mappings and verify the Rank-Nullity Theorem.
(a) L : R^3 -> R^2 defined by L(x1, x2, x3) = (x1 + x2, x1 + x2 + x3)
(b) L : R^3 -> P1 defined by L([a; b; c]) = (a + b + c)x
(c) tr : M(2,2) -> R defined by tr([a b; c d]) = a + d
(d) T : P3 -> M(2,2) defined by T(a + bx + cx^2 + dx^3) = [a b; c d]

Homework Problems

B1 Prove that the following mappings are linear.
(a) L : P2 -> R^3, with the given formula
(b) L : P2 -> P1 defined by L(a + bx + cx^2) = b + 2cx
(c) L : M(2,2) -> R, with the given formula
(d) Let D be the subspace of M(2,2) of diagonal matrices; L : D -> P2 defined by L([a 0; 0 b]) = a + (a + b)x + bx^2
(e) L : M(2,2) -> M(2,2), with the given formula
(f) L : M(2,2) -> P2 defined by L([a b; c d]) = (a + b + c + d)x^2
(g) T : M(2,2) -> M(2,2) defined by T(A) = A^T

B2 Determine which of the following mappings are linear.
(a) L : P3 -> R, with the given formula
(b) M : P2 -> R defined by M(a + bx + cx^2) = b^2 - 4ac
(c) N : R^3 -> R^3, with the given formula
(d) L : M(2,2) -> M(2,2), with the given formula
(e) T : M(2,2) -> P2 defined by T([a b; c d]) = a + (ab)x + (abc)x^2

B3 For each of the following (a)-(h), determine whether the given vector y is in the range of the given linear mapping L : V -> W. If it is, find a vector x ∈ V such that L(x) = y. The mappings include L : P2 -> R^3, L : M(2,2) -> R, T(A) = A^T on M(2,2), the subspaces of diagonal and upper-triangular matrices, and L : P2 -> M(2,2) defined by L(a + bx + cx^2) = [-a - 2c  2b - c; -2a + 2c  -2b - c] with y = [2 2; 0 -2].

B4 Find a basis for the range and nullspace of the following linear mappings and verify the Rank-Nullity Theorem.
(a) L : P1 -> R^3, with the given formula
(b) L : P2 -> P2 defined by L(a + bx + cx^2) = (-a + 2b) + (2a + c)x + (-2a + b - 2c)x^2
(c) L : P2 -> P1 defined by L(a + bx + cx^2) = b + 2cx
(d) L : M(2,2) -> M(2,2), with the given formula

Conceptual Problems

D1 Prove Theorem 1.

D2 For the given vector spaces V and W, invent a linear mapping L : V → W that
satisfies the given properties.
(a) V = R³, W = P2; L([...]) = x², L([...]) = 2x, rank(L) = 2, and L([...]) = ...
(b) V = P2, W = M(2,2); Null(L) = span{1 + x + x²}
(c) V = M(2,2), W = R⁴; nullity(L) = 0,
Range(L) = span{ [...], [...], [...] }

D3 (a) Let V and W be vector spaces and L : V → W be a linear mapping. Prove that
if {L(v1), ..., L(vk)} is a linearly independent set in W, then {v1, ..., vk} is a linearly
independent set in V.
(b) Give an example of a linear mapping L : V → W, where {v1, ..., vk} is linearly
independent in V but {L(v1), ..., L(vk)} is linearly dependent in W.

D4 Let V and W be n-dimensional vector spaces and let L : V → W be a linear
mapping. Prove that Range(L) = W if and only if Null(L) = {0}.

D5 Let U, V, and W be finite-dimensional vector spaces over R and let L : V → U and
M : U → W be linear mappings.
(a) Prove that rank(M ∘ L) ≤ rank(M).
(b) Prove that rank(M ∘ L) ≤ rank(L).
(c) Construct an example such that the rank of the composition is strictly less than
the maximum of the ranks.
Section 4.6 Matrix of a Linear Mapping   235

D6 Let U and V be finite-dimensional vector spaces and let L : V → U be a linear
mapping and M : U → U be a linear operator such that Null(M) = {0_U}. Prove that
rank(M ∘ L) = rank(L).

D7 Let S denote the set of all infinite sequences of real numbers. A typical element of
S is x = (x1, x2, ..., xn, ...). Define addition x + y and scalar multiplication tx in the
obvious way. Then S is a vector space. Define the left shift L : S → S by
L(x1, x2, x3, ...) = (x2, x3, x4, ...) and the right shift R : S → S by
R(x1, x2, x3, ...) = (0, x1, x2, x3, ...). Then it is easy to verify that L and R are linear.
Check that (L ∘ R)(x) = x but that (R ∘ L)(x) ≠ x. L has a right inverse, but it does not
have a left inverse. It is important in this example that S is infinite-dimensional.

4.6 Matrix of a Linear Mapping

The Matrix of L with Respect to the Basis B

In Section 3.2, we defined the standard matrix of a linear transformation L : Rⁿ → Rⁿ
to be the matrix whose columns are the images of the standard basis vectors
S = {e1, ..., en} of Rⁿ under L. We now look at the matrix of L with respect to other
bases. We shall use the subscript S to distinguish the standard matrix of L:

[L]_S = [ L(e1)  ···  L(en) ]

The standard coordinates of the image under L of a vector x ∈ Rⁿ are given by the
equation

[L(x)]_S = [L]_S [x]_S      (4.2)

These equations are exactly the same as those in Theorem 3.2.3 except that the
notation is fancier so that we can compare this standard description with a description
with respect to some other basis. We follow the same method as in the proof of
Theorem 3.2.3 in defining the matrix of L with respect to another basis B of Rⁿ.

Let B = {v1, ..., vn} be a basis for Rⁿ and let L : Rⁿ → Rⁿ be a linear operator.
Then, for any x ∈ Rⁿ, we can write x = b1 v1 + ··· + bn vn. Therefore,

L(x) = b1 L(v1) + ··· + bn L(vn)

Taking B-coordinates of both sides gives

[L(x)]_B = [b1 L(v1) + ··· + bn L(vn)]_B
         = b1 [L(v1)]_B + ··· + bn [L(vn)]_B
         = [ [L(v1)]_B  ···  [L(vn)]_B ] [b1; ...; bn]

Observe that [b1; ...; bn] = [x]_B, so this equation is the B-coordinates version of
equation (4.2), where the matrix [ [L(v1)]_B  ···  [L(vn)]_B ] is taking the place of the
standard matrix of L.

Definition
Matrix of a Linear Operator

Suppose that B = {v1, ..., vn} is any basis for Rⁿ and that L : Rⁿ → Rⁿ is a linear
operator. Define the matrix of the linear operator L with respect to the basis B to be
the matrix

[L]_B = [ [L(v1)]_B  ···  [L(vn)]_B ]

We then have that for any x ∈ Rⁿ,

[L(x)]_B = [L]_B [x]_B

Note that the columns of [L]_B are the B-coordinate vectors of the images of the
B-basis vectors under L. The pattern is exactly the same as before, except that
everything is done in terms of the basis B. It is important to emphasize again that by
"basis," we always mean ordered basis; the order of the basis elements determines
the order of the columns of the matrix [L]_B.

EXAMPLE 1

Let L : R³ → R³ be defined by L(x1, x2, x3) = (x1 + 2x2 − 2x3, −x2 + 2x3, x1 + 2x2)
and let B = { [2; −1; −1], [1; 1; 1], [0; 0; −1] }. Find the B-matrix of L and use it to
determine [L(x)]_B, where [x]_B = [1; 2; 3].

Solution: By definition, the columns of [L]_B are the B-coordinates of the images of
the vectors in B under L. So, we find these images and write them as a linear
combination of the vectors in B:

L(2, −1, −1) = [2; −1; 0] = (1)[2; −1; −1] + (0)[1; 1; 1] + (−1)[0; 0; −1]
L(1, 1, 1) = [1; 1; 3] = (0)[2; −1; −1] + (1)[1; 1; 1] + (−2)[0; 0; −1]
L(0, 0, −1) = [2; −2; 0] = (4/3)[2; −1; −1] + (−2/3)[1; 1; 1] + (−2)[0; 0; −1]

Thus,

[L]_B = [ [L(2,−1,−1)]_B  [L(1,1,1)]_B  [L(0,0,−1)]_B ] = [1 0 4/3; 0 1 −2/3; −1 −2 −2]

Hence,

[L(x)]_B = [L]_B [x]_B = [1 0 4/3; 0 1 −2/3; −1 −2 −2] [1; 2; 3] = [5; 0; −11]

We can verify that this answer is correct by calculating L(x) in two ways. First, if
[x]_B = [1; 2; 3], then

x = 1 [2; −1; −1] + 2 [1; 1; 1] + 3 [0; 0; −1] = [4; 1; −2]

and by definition of the mapping, we have L(x) = L(4, 1, −2) = (10, −5, 6). Second,
[L(x)]_B = [5; 0; −11] gives

L(x) = 5 [2; −1; −1] + 0 [1; 1; 1] − 11 [0; 0; −1] = [10; −5; 6]
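The arithmetic in Example 1 lends itself to a quick numerical check. The following sketch (using NumPy, which is an illustration aid and not part of the text) builds [L]_B by solving P c = A v_j for each basis vector, which amounts to computing P⁻¹AP:

```python
import numpy as np

# Standard matrix of L(x1, x2, x3) = (x1 + 2x2 - 2x3, -x2 + 2x3, x1 + 2x2).
A = np.array([[1.0, 2.0, -2.0],
              [0.0, -1.0, 2.0],
              [1.0, 2.0, 0.0]])

# Columns of P are the basis vectors of B; P is the change of coordinates
# matrix from B-coordinates to standard coordinates.
P = np.array([[2.0, 1.0, 0.0],
              [-1.0, 1.0, 0.0],
              [-1.0, 1.0, -1.0]])

# Column j of [L]_B solves P c = A v_j; solving for all columns at once
# gives [L]_B = P^{-1} A P.
L_B = np.linalg.solve(P, A @ P)

x_B = np.array([1.0, 2.0, 3.0])   # B-coordinates of x = (4, 1, -2)
print(L_B @ x_B)                  # [L(x)]_B = (5, 0, -11)
```

Using `np.linalg.solve` rather than forming P⁻¹ explicitly is the standard numerical practice; the result agrees with the hand computation above.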

EXAMPLE 2

Let v = [3; 4]. In Example 3.2.5, the standard matrix of the linear operator
projv : R² → R² was found to be

[projv]_S = [9/25 12/25; 12/25 16/25]

Find the matrix of projv with respect to a basis that shows the geometry of the
transformation more clearly.

Solution: For this linear transformation, it is natural to use a basis for R² consisting
of the vector v, which is the direction vector for the projection, and a second vector
orthogonal to v, say w = [−4; 3]. Then, with B = {v, w}, by geometry,

projv v = v = 1v + 0w    and    projv w = 0 = 0v + 0w

Hence, [projv v]_B = [1; 0] and [projv w]_B = [0; 0]. Thus,

[projv]_B = [1 0; 0 0]

We now consider projv x for any x ∈ R². We have

[projv x]_B = [projv]_B [x]_B = [1 0; 0 0] [b1; b2] = [b1; 0]

In terms of B-coordinates, projv is described as the linear mapping that sends
[b1; b2] to [b1; 0]. This simple geometrical description is obtained when we use a
basis B that is adapted to the geometry of the transformation (see Figure 4.6.4).
This example will be discussed further below.

238   Chapter 4   Vector Spaces

Figure 4.6.4   The geometry of projv.

Of course, we want to generalize this to any linear operator L : V → V on a vector
space V with respect to any basis B for V. To do this, we repeat our argument above
exactly.
Let B = {v1, ..., vn} be a basis for a vector space V and let L : V → V be a linear
operator. Then, for any x ∈ V, we can write x = b1 v1 + ··· + bn vn. Therefore,

L(x) = b1 L(v1) + ··· + bn L(vn)

Taking B-coordinates of both sides gives

[L(x)]_B = [b1 L(v1) + ··· + bn L(vn)]_B
         = b1 [L(v1)]_B + ··· + bn [L(vn)]_B
         = [ [L(v1)]_B  ···  [L(vn)]_B ] [b1; ...; bn]

We make the following definition.

Definition
Matrix of a Linear Operator

Suppose that B = {v1, ..., vn} is any basis for a vector space V and that L : V → V is
a linear operator. Define the matrix of the linear operator L with respect to the basis
B to be the matrix

[L]_B = [ [L(v1)]_B  ···  [L(vn)]_B ]

Then for any x ∈ V,

[L(x)]_B = [L]_B [x]_B

EXAMPLE 3

Let L : P2 → P2 be defined by L(a + bx + cx²) = (a + b) + bx + (a + b + c)x². Find
the matrix of L with respect to the basis B = {1, x, x²}.

Solution: We have

L(1) = 1 + x²
L(x) = 1 + x + x²
L(x²) = x²

Hence,

[L]_B = [1 1 0; 0 1 0; 1 1 1]

We can check our answer by observing that [L(a + bx + cx²)]_B = [a + b; b; a + b + c]
and

[L]_B [a + bx + cx²]_B = [1 1 0; 0 1 0; 1 1 1] [a; b; c] = [a + b; b; a + b + c]

EXAMPLE 4

Let L : P2 → P2 be defined by L(a + bx + cx²) = (a + b) + bx + (a + b + c)x². Find
the matrix of L with respect to the basis B = {1 + x + x², 1 − x, 2}.

Solution: We have

L(1 + x + x²) = 2 + x + 3x² = (3)(1 + x + x²) + (2)(1 − x) + (−3/2)(2)
L(1 − x) = 0 + (−1)x + 0x² = (0)(1 + x + x²) + (1)(1 − x) + (−1/2)(2)
L(2) = 2 + 2x² = (2)(1 + x + x²) + (2)(1 − x) + (−1)(2)

Hence,

[L]_B = [3 0 2; 2 1 2; −3/2 −1/2 −1]
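Because coordinates with respect to {1, x, x²} identify P2 with R³, the computation in Example 4 reduces to ordinary matrix arithmetic. A hedged NumPy sketch (our own illustration, not part of the text):

```python
import numpy as np

# Identify a + bx + cx^2 with the column (a, b, c); the columns of M are then
# L(1), L(x), L(x^2) for the mapping L of Examples 3 and 4.
M = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])

# Coefficient vectors of the basis B = {1 + x + x^2, 1 - x, 2}, as columns.
P = np.array([[1.0, 1.0, 2.0],
              [1.0, -1.0, 0.0],
              [1.0, 0.0, 0.0]])

L_B = np.linalg.solve(P, M @ P)   # [L]_B = P^{-1} M P
print(L_B)
```

The printed matrix matches the one found by expressing each image as a combination of the basis vectors by hand.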

EXERCISE 1

Let L : M(2,2) → M(2,2) be defined by L([a b; c d]) = [a + b, a − b; c, a + b + d].
Find the matrix of L with respect to the basis B.

Observe that in each of the cases above, we have used the same basis for the
domain and codomain of the linear mapping L. To make this as general as possible,
we would like to define the matrix C[L]_B of a linear mapping L : V → W, where B
is a basis for the vector space V and C is a basis for the vector space W. This is left
as Problem D3.


Change of Coordinates and Linear Mappings


In Example 2, we used special geometrical properties of the linear transformation
and of the chosen basis B to determine the B-coordinate vectors that make up the
B-matrix [L]_B of the linear transformation L : R² → R². In some applications of
these ideas, the geometry does not provide such a simple way of determining [L]_B.
We need a general method for determining [L]_B, given the standard matrix [L]_S of
a linear operator L : Rⁿ → Rⁿ and a new basis B.

Let L : Rⁿ → Rⁿ be a linear operator. Let S denote the standard basis for Rⁿ and
let B be any other basis for Rⁿ. Denote the change of coordinates matrix from B to S
by P, so that we have

[x]_S = P[x]_B    and    [x]_B = P⁻¹[x]_S

If we apply the change of coordinates equation to the vector L(x) (which is in Rⁿ),
we get

[L(x)]_B = P⁻¹[L(x)]_S

Substitute for [L(x)]_B and [L(x)]_S to get

[L]_B [x]_B = P⁻¹[L]_S [x]_S

But, P[x]_B = [x]_S, so we have

[L]_B [x]_B = P⁻¹[L]_S P [x]_B

Since this is true for every [x]_B, we get, by Theorem 3.1.4, that

[L]_B = P⁻¹[L]_S P

Thus, we now have a method of determining the B-matrix of L, given the standard
matrix of L and a new basis B.
We shall first apply this change of basis method to Example 2 to make sure that
things work out as we expect.

EXAMPLE 5

Let v = [3; 4]. In Example 2, we determined the matrix of projv with respect to a
geometrically adapted basis B. Let us verify that the change of basis method just
described does transform the standard matrix [projv]_S to the B-matrix [projv]_B.
The matrix [projv]_S = [9/25 12/25; 12/25 16/25]. The basis B = { [3; 4], [−4; 3] },
so the change of coordinates matrix from B to S is P = [3 −4; 4 3]. The inverse is
found to be P⁻¹ = [3/25 4/25; −4/25 3/25]. Hence, the B-matrix of projv is given by

P⁻¹[projv]_S P = [3/25 4/25; −4/25 3/25] [9/25 12/25; 12/25 16/25] [3 −4; 4 3]
             = [1 0; 0 0]

Thus, we obtain exactly the same B-matrix [projv]_B as we obtained by the earlier
geometric argument.

To make sure we understand exactly what this means, let us calculate the
B-coordinates of the image of the vector x = [5; 2] under projv. We can do this in
two ways.

Method 1. Use the fact that [projv x]_B = [projv]_B [x]_B. We need the B-coordinates
of x:

[x]_B = P⁻¹[x]_S = [3/25 4/25; −4/25 3/25] [5; 2] = [23/25; −14/25]

Hence,

[projv x]_B = [projv]_B [x]_B = [1 0; 0 0] [23/25; −14/25] = [23/25; 0]

Method 2. Use the fact that [projv x]_B = P⁻¹[projv x]_S. Therefore,

[projv x]_S = [projv]_S [x]_S = [9/25 12/25; 12/25 16/25] [5; 2] = [69/25; 92/25]

[projv x]_B = P⁻¹[69/25; 92/25] = [3/25 4/25; −4/25 3/25] [69/25; 92/25] = [23/25; 0]

The calculation is probably slightly easier if we use the first method, but that really
is not the point. What is really important is that it is easy to get a geometrical
understanding of what happens to vectors if you multiply by [1 0; 0 0] (the B-matrix);
it is much more difficult to understand what happens if you multiply by
[9/25 12/25; 12/25 16/25] (the standard matrix). Using a non-standard basis may
make it much easier to understand the geometry of a linear transformation.
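The whole of Example 5 can be reproduced in a few lines of NumPy; this sketch (our own illustration — the variable names are not the text's) checks both the change of basis formula and the two methods for computing [projv x]_B:

```python
import numpy as np

v = np.array([3.0, 4.0])
w = np.array([-4.0, 3.0])               # orthogonal to v

proj_S = np.outer(v, v) / (v @ v)       # standard matrix [9/25 12/25; 12/25 16/25]
P = np.column_stack([v, w])             # change of coordinates matrix from B to S

proj_B = np.linalg.inv(P) @ proj_S @ P  # [proj_v]_B = P^{-1} [proj_v]_S P
x = np.array([5.0, 2.0])
x_B = np.linalg.solve(P, x)             # B-coordinates of x: (23/25, -14/25)

print(np.round(proj_B, 12))             # [[1, 0], [0, 0]]
print(proj_B @ x_B)                     # Method 1: (23/25, 0)
print(np.linalg.solve(P, proj_S @ x))   # Method 2 gives the same vector
```

The formula `np.outer(v, v) / (v @ v)` is just the usual standard matrix of projection onto v, so everything here is grounded in the example's data.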

EXAMPLE 6

Let L be the linear mapping with standard matrix A = [4 3; 2 5]. Let
B = { [3; 1], [−1; 1] } be a basis for R². Find the matrix of L with respect to the
basis B.

Solution: The change of coordinates matrix P is P = [3 −1; 1 1], and we have
P⁻¹ = (1/4)[1 1; −1 3]. It follows that the B-matrix of L is

[L]_B = P⁻¹AP = (1/4)[1 1; −1 3] [4 3; 2 5] [3 −1; 1 1] = [13/2 1/2; 9/2 5/2]

EXAMPLE 7

Let L be the linear mapping with standard matrix

A = [−3 5 −5; −7 9 −5; −7 7 −3]

Let B be the basis { [1; 1; 0], [1; 1; 1], [0; 1; 1] }. Determine the matrix of L with
respect to the basis B.

Solution: The change of coordinates matrix P is P = [1 1 0; 1 1 1; 0 1 1], and we
have

P⁻¹ = [0 1 −1; 1 −1 1; −1 1 0]

Thus, the B-matrix of the mapping L is

[L]_B = P⁻¹AP = [2 0 0; 0 −3 0; 0 0 4]

Observe that the resulting matrix is diagonal. What does this mean in terms of the
geometry of the linear transformation? From the definition of [L]_B and the definition
of B-coordinates of a vector, we see that the linear transformation stretches the first
basis vector [1; 1; 0] by a factor of 2; it reflects (because of the minus sign) the second
basis vector [1; 1; 1] in the origin and stretches it by a factor of 3; and it stretches the
third basis vector [0; 1; 1] by a factor of 4. This gives a very clear geometrical picture
of how the linear transformation maps vectors. This picture is not obvious from
looking at the standard matrix A.

At this point, it is natural to ask whether for any linear mapping L : Rⁿ → Rⁿ there
exists a basis B of Rⁿ such that the B-matrix of L is in diagonal form, and how we
can find such a basis if it exists.
The answers to these questions are found in Chapter 6. However, in order to deal
with these questions, one more computational tool is needed: the determinant, which
is discussed in Chapter 5.
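A diagonal B-matrix of the kind found in Example 7 is easy to confirm numerically. In this sketch (NumPy is an illustration aid; the matrices are the ones read off from the example's data) each basis vector is checked to be scaled by its diagonal entry:

```python
import numpy as np

A = np.array([[-3.0, 5.0, -5.0],
              [-7.0, 9.0, -5.0],
              [-7.0, 7.0, -3.0]])
P = np.array([[1.0, 1.0, 0.0],   # columns are the basis vectors of B
              [1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])

L_B = np.linalg.solve(P, A @ P)  # P^{-1} A P
print(np.round(L_B))             # diagonal with entries 2, -3, 4

# Each basis vector is simply scaled: A v_2 = -3 v_2, for instance.
print(A @ P[:, 1])
```

In the language of Chapter 6, the columns of P are eigenvectors of A and the diagonal entries 2, −3, 4 are the corresponding eigenvalues.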

PROBLEMS 4.6
Practice Problems

A1 Determine the matrix of the linear mapping L with respect to the basis B in the
following cases. Determine [L(x)]_B for the given [x]_B.
(a) In R², B = {v1, v2} and L(v1) = v2, L(v2) = 2v1 − v2; [x]_B = ...
(b) In R³, B = {v1, v2, v3} and L(v1) = 2v1 − v3, L(v2) = 2v1 − v3,
L(v3) = 4v2 + 5v3; [x]_B = ...

A2 Consider the basis B = { [...], [...] } of R². In each of the following cases, assume
that L is a linear mapping and determine [L]_B.
(a) L(1, 1) = (−3, −3) and L(−1, 2) = (−4, 8)
(b) L(1, 1) = (−1, 2) and L(−1, 2) = (2, 2)

A3 Consider the basis B = {v1, v2, v3} of R³. In each of the following cases, assume
that L is a linear mapping.

Section 4.6 Exercises


Determine [L(v1)]_B, [L(v2)]_B, and [L(v3)]_B and hence determine [L]_B.

([ :JJ nl ([ ]] = [:]. ([ !]]= m


ll :11 = l n wrn = m +rn =
ni
=
=
=
l

ll: 11 l n l 11 m ll-rn ui
2

(a) L

(b) L

(c) L

A4 For each of the following linear transformations, determine a geometrically
natural basis B (as in Examples 2 and 3) and determine the B-matrix of the
transformation.
(a) refl(1,−2)
(b) proj(2,1,−1)
(c) refl(−1,−1,1)
A5 (a) Find the coordinates of the given vector with respect to the basis
B = { [...], [...], [...] } in R³.
(b) Suppose that L : R³ → R³ is a linear transformation such that
L(1, 0, −1) = (0, 1, 1), L(1, −1, 0) = (0, ..., ...), and L(1, 0, 1) = (2, ..., ...).
Determine the B-matrix of L.
(c) Use parts (a) and (b) to determine L(...).

A6 (a) Find the coordinates of the given vector with respect to the basis B in R³.
(b) Suppose that L : R³ → R³ is a linear transformation such that
L(1, 1, 1) = (..., 2, 4), L(1, 2, ...) = (−2, 2, ...), and L(0, ..., ...) = (5, 3, −5).
Determine the B-matrix of L.
(c) Use parts (a) and (b) to determine L(5, 3, −5).

A7 Assume that each of the following matrices is the standard matrix of a linear
mapping L : Rⁿ → Rⁿ. Determine the matrix of L with respect to the given basis B.
You may find it helpful to use a computer to find inverses and to multiply matrices.
(a) ...   (b) ...   (c) ...   (d) ...   (e) ...   (f) ...

A8 Find the B-matrix of each of the following linear mappings.
(a) L : R³ → R³ defined by L(x1, x2, x3) = (x1 + ..., ..., ... + x3),
B = { [...], [...], [...] }
(b) L : P2 → P2 defined by L(a + bx + cx²) = a + (b + c)x²,
B = {1 + x², 1 + x, 1 − x + x²}
(c) D : P2 → P2 defined by D(a + bx + cx²) = b + 2cx, B = {1, x, x²}
(d) T : U → U, where U is the subspace of upper-triangular matrices in M(2,2),
defined by T([a b; 0 c]) = [...], B = { [...], [...], [...] }

Homework Problems

B1 Determine the matrix of the linear mapping L with respect to the basis B in the
following cases. Determine [L(x)]_B for the given [x]_B.
(a) In R², B = {v1, v2} and L(v1) = v1 + 3v2, L(v2) = ...; [x]_B = ...
(b) In R³, B = {v1, v2, v3} and L(v1) = v1 − 3v2, L(v2) = 3v1 + 4v2 − v3,
L(v3) = −v1 + 2v2 + 6v3; [x]_B = ...

244   Chapter 4   Vector Spaces

B2 Consider the basis B = { [...], [...] } of R². In each of the following cases, assume
that L is a linear mapping and determine [L]_B.
(a) L(1, 2) = (1, −2) and L(1, −2) = (4, 8)
(b) L(1, 2) = (5, 10) and L(1, −2) = (−3, 6)
(c) L(1, 2) = (0, 0) and L(1, −2) = (1, 2)
B3 Consider the basis B = {v1, v2, v3} of R³. In each of the following cases, assume
that L is a linear mapping. Determine [L(v1)]_B, [L(v2)]_B, and [L(v3)]_B and hence
determine [L]_B.
(a) ...   (b) ...   (c) ...   (d) ...

B4 For each of the following linear transformations, determine a geometrically
natural basis B (as in Examples 2 and 3) and determine the B-matrix of the
transformation.
(a) perp(..., 2)
(b) perp(2, 1, −2)
(c) refl(1, 2, 3)

B5 (a) Find the coordinates of the given vector with respect to the basis
B = { [...], [...], [...] } in R³.
(b) Suppose that L : R³ → R³ is a linear transformation such that
L(1, 1, 0) = (0, 5, 5), L(0, 1, 1) = (2, 0, 2), and L(1, 0, 1) = (5, 2, 1). Determine the
B-matrix of L.
(c) Use parts (a) and (b) to determine L(5, 2, 1).

B6 (a) Find the coordinates of the given vector with respect to the basis
B = { [...], [...], [...] } in R³.
(b) Suppose that L : R³ → R³ is a linear transformation such that
L(2, 1, 0) = (3, 3, 0), L(1, 1, 0) = (1, 4, 4), and L(−1, 0, 1) = (−2, 0, 2). Determine
the B-matrix of L.
(c) Use parts (a) and (b) to determine L(1, 4, 4).

B7 Assume that each of the following matrices is the standard matrix of a linear
mapping L : Rⁿ → Rⁿ. Determine the matrix of L with respect to the given basis B.
You may find it helpful to use a computer to find inverses and to multiply matrices.
(a) ...   (b) ...   (c) ...   (d) ...

B8 Assume that each of the following matrices is the standard matrix of a linear
mapping L : Rⁿ → Rⁿ. Determine the matrix of L with respect to the given basis B
and use it to determine [L(x)]_B for the given vector x. You may find it helpful to use
a computer to find inverses and to multiply matrices.
(a) ...   (b) ...   (c) ...   (d) ...

B9 Find the B-matrix of each of the following linear mappings.
(a) L : R³ → R³ defined by L(x1, x2, x3) = (x1 + x2 + x3, x1 + 2x2, x1 + x3),
B = { [...], [...], [...] }
(b) L : P2 → P2 defined by L(a + bx + cx²) = (a + b + c) + (a + 2b)x + (a + c)x²,
B = {1, x, x²}
(c) L : M(2,2) → M(2,2) defined by L([a b; c d]) = [...],
B = { [...], [...], [...], [...] }
(d) D : P2 → P2 defined by D(a + bx + cx²) = b + 2ax,
B = {1 + 2x + 3x², −2x + x², 1 + x + x²}
(e) T : D → D, where D is the subspace of diagonal matrices in M(2,2), defined by
T([a 0; 0 b]) = [...], B = { [...], [...] }

Conceptual Problems

D1 Suppose that B and C are bases for Rⁿ and S is the standard basis of Rⁿ. Suppose
that P is the change of coordinates matrix from B to S and that Q is the change of
coordinates matrix from C to S. Let L : Rⁿ → Rⁿ be a linear mapping. Express the
matrix [L]_C in terms of [L]_B, P, and Q.

D2 Suppose that a 2 × 2 matrix A is the standard matrix of a linear mapping
L : R² → R². Let B = {v1, v2} be a basis for R² and let P denote the change of
coordinates matrix from B to the standard basis. What conditions will have to be
satisfied by the vectors v1 and v2 in order for P⁻¹AP = [d1 0; 0 d2] for some
d1, d2 ∈ R? (Hint: Consider the equation AP = PD, or A[v1 v2] = [v1 v2]D.)

D3 Let V be a vector space with basis B = {v1, ..., vn}, let W be a vector space with
basis C, and let L : V → W be a linear mapping. Prove that the matrix C[L]_B
defined by

C[L]_B = [ [L(v1)]_C  ···  [L(vn)]_C ]

satisfies [L(x)]_C = C[L]_B [x]_B and hence is the matrix of L with respect to the
bases B and C.

D4 Determine the matrix of the following linear mappings with respect to the given
bases B and C.
(a) D : P2 → P1 defined by D(a + bx + cx²) = b + 2cx, B = {1, x, x²}, C = {1, x}
(b) L : R² → P2 defined by L(a1, a2) = (a1 + a2) + a1x², B = { [...], [...] },
C = {1 + x², 1 + x, −1 − x + x²}
(c) T : R² → M(2,2) defined by T([a; b]) = [...], B = { [...], [...] },
C = { [...], [...], [...], [...] }
(d) L : P2 → R² defined by L(a + bx + cx²) = ...,
B = {1 + x², 1 + x, −1 + x + x²}, C = { [...], [...] }

4.7 Isomorphisms of Vector Spaces

Some of the ideas discussed in Chapters 3 and 4 lead to generalizations that are
important in the further development of linear algebra (and also in abstract algebra).
Some of these generalizations are outlined in this section. Most of the proofs are easy
or simple variations on proofs given earlier, so they will be left as exercises.
Throughout this section, it is assumed that U, V, and W are vector spaces over R and
that L : U → V and M : V → W are linear mappings.

Definition
One-to-One

L is said to be one-to-one if L(u1) = L(u2) implies that u1 = u2.

Lemma 1

L is one-to-one if and only if Null(L) = {0}. (Compare this to Theorem 3.5.6.)

You are asked to prove Lemma 1 as Problem D1.

EXAMPLE 1

Every invertible linear transformation L : Rⁿ → Rⁿ is one-to-one. The fact that such
a transformation is one-to-one allows the definition of the inverse. The mapping
inj : R³ → R⁴ of Example 3.5.8 is a one-to-one mapping that is not invertible. The
mapping P : R⁴ → R³ of Example 3.5.8 is not one-to-one. For any n, projn : R³ → R³
is not one-to-one, since many elements in the domain are mapped to the same vector
in the range.

EXAMPLE 2

Prove that L : R² → R³ defined by L(x1, x2) = (x1, x1 + x2, x2) is one-to-one.

Solution: Assume that L(x1, x2) = L(y1, y2). Then we have (x1, x1 + x2, x2) =
(y1, y1 + y2, y2), and so x1 = y1 and x2 = y2. Thus, L is one-to-one.

EXAMPLE 3

Determine if M : P2 → M(2,2) defined by M(a + bx + cx²) = [...] is one-to-one.

Solution: Let p(x) = 1 + x and q(x) = 1 + x + x². Observe that M(p) = M(q), but
p(x) ≠ q(x), so M is not one-to-one.

EXERCISE 1

Suppose that {u1, ..., uk} is a linearly independent set in U and L is one-to-one.
Prove that {L(u1), ..., L(uk)} is linearly independent.

Definition
Onto

L : U → V is said to be onto if for every v ∈ V there exists some u ∈ U such that
L(u) = v. That is, Range(L) = V.

EXAMPLE 4

Invertible linear transformations of Rⁿ are all onto mappings. The mapping
P : R⁴ → R³ of Example 3.5.8 is onto, but the mapping inj : R³ → R⁴ of
Example 3.5.8 is not onto.

Section 4.7 Isomorphisms of Vector Spaces   247

EXAMPLE 5

Prove that L : R² → P1 defined by L(y1, y2) = y1 + (y1 + y2)x is onto.

Solution: Let a + bx be any polynomial in P1. We need to find a vector
y = [y1; y2] ∈ R² such that L(y) = a + bx. For this to be true, we require that
y1 + (y1 + y2)x = a + bx, so y1 = a and b = y1 + y2, which gives
y2 = b − y1 = b − a. Therefore, we have L(a, b − a) = a + bx, and so L is onto.
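In coordinates with respect to {1, x}, Example 5 is just an invertible 2 × 2 linear system, so a preimage exists for every target polynomial. A small NumPy sketch (our own illustration, not part of the text):

```python
import numpy as np

# L(y1, y2) = y1 + (y1 + y2)x corresponds to multiplication by this matrix
# when P1 is identified with R^2 via the basis {1, x}.
A = np.array([[1.0, 0.0],
              [1.0, 1.0]])

a, b = 3.0, 5.0                        # an arbitrary target polynomial a + bx
y = np.linalg.solve(A, np.array([a, b]))
print(y)                               # [a, b - a] = [3, 2]
```

Because A is invertible, `solve` succeeds for every right-hand side, which is exactly the statement that L is onto.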

EXAMPLE 6

Determine if M : R² → R³ defined by M(x1, x2) = (x1, x1 + x2, x2) is onto.

Solution: If M is onto, then for every vector y ∈ R³ there exists x ∈ R² such that
M(x) = y. But, if y = [1; 1; 1], then we require (x1, x1 + x2, x2) = (1, 1, 1), which
implies that x1 = 1, x2 = 1, and x1 + x2 = 1, which is clearly inconsistent.
Hence, M is not onto.
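Examples 2 and 6 concern the same mapping, and both conclusions can be read off from a rank computation. A hedged NumPy sketch (our own illustration):

```python
import numpy as np

# M(x1, x2) = (x1, x1 + x2, x2) is multiplication by this 3 x 2 matrix.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])

rank = np.linalg.matrix_rank(A)
print(rank == A.shape[1])   # True: nullspace is {0}, so M is one-to-one
print(rank == A.shape[0])   # False: rank 2 < 3, so M is not onto

# A concrete witness: y = (1, 1, 1) is not in the range, since appending it
# as a column raises the rank.
y = np.array([1.0, 1.0, 1.0])
print(np.linalg.matrix_rank(np.column_stack([A, y])))   # 3
```

Rank equal to the number of columns gives one-to-one; rank equal to the number of rows gives onto — the two conditions coincide only for square matrices, foreshadowing Theorem 4 below.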

EXERCISE 2

Suppose that {u1, ..., uk} is a spanning set for U and L is onto. Prove that
{L(u1), ..., L(uk)} is a spanning set for V.

Theorem 2

The linear mapping L : U → V has an inverse linear mapping L⁻¹ : V → U if and
only if L is one-to-one and onto.

You are asked to prove Theorem 2 as Problem D4.

Definition
Isomorphism
Isomorphic

If U and V are vector spaces over JR, and if L : U -t V is a linear, one-to-one, and
onto mapping, then Lis called an isomorphism (or a vector space isomorphism), and
U and Vare said to be isomorphic.
The word isomorphism comes from Greek words meaning "same form." The con
cept of an isomorphism is a very powe1ful and important one. It implies that the es
sential structure of the isomorphic vector spaces is the same, so that a vector space
statement that is true in one space is immediately true in any isomorphic space. Of
course, some vector spaces such as M(m, n) or P11 have some features that are not
purely vector space properties (such as matrix decomposition and polynomial factor
ization), and these particular features cannot automatically be transferred from these
spaces to spaces that are isomorphic as vector spaces.

248   Chapter 4   Vector Spaces

EXAMPLE 7

Prove that P2 and R³ are isomorphic by constructing an explicit isomorphism L.

Solution: We define L : P2 → R³ by L(a0 + a1x + a2x²) = [a0; a1; a2].
To prove that it is an isomorphism, we must prove that it is linear, one-to-one, and
onto.

Linear: Let any two elements of P2 be p(x) = a0 + a1x + a2x² and
q(x) = b0 + b1x + b2x², and let t ∈ R. Then,

L(tp + q) = L(t(a0 + a1x + a2x²) + (b0 + b1x + b2x²))
          = L((ta0 + b0) + (ta1 + b1)x + (ta2 + b2)x²)
          = [ta0 + b0; ta1 + b1; ta2 + b2]
          = t[a0; a1; a2] + [b0; b1; b2]
          = tL(p) + L(q)

Therefore, L is linear.

One-to-one: Let a0 + a1x + a2x² ∈ Null(L). Then,

[0; 0; 0] = L(a0 + a1x + a2x²) = [a0; a1; a2]

Hence, a0 = a1 = a2 = 0, so Null(L) = {0} and thus L is one-to-one by Lemma 1.

Onto: For any [a0; a1; a2] ∈ R³, we have L(a0 + a1x + a2x²) = [a0; a1; a2]. Hence,
L is onto.

Thus, L is an isomorphism from P2 to R³.

EXERCISE 3

Use Exercise 1 and Exercise 2 to prove that if L : U → V is an isomorphism and
{u1, ..., un} is a basis for U, then {L(u1), ..., L(un)} is a basis for V.

Theorem 3

Suppose that U and V are finite-dimensional vector spaces over R. Then U and V
are isomorphic if and only if they are of the same dimension.

You are asked to prove Theorem 3 as Problem D5.

EXAMPLE 8

1. The vector space M(m, n) is isomorphic to R^(mn).
2. The vector space Pn is isomorphic to R^(n+1).
3. Every k-dimensional subspace of Rⁿ is isomorphic to every k-dimensional
subspace of M(m, n).
Section 4.7 Exercises

249

If we know that two vector spaces over R have the same dimension, then Theorem 3
says that they are isomorphic. However, even if we already know that two vector
spaces are isomorphic, we may need to construct an explicit isomorphism between
the two vector spaces. The following theorem shows that if we have two isomorphic
vector spaces U and V, then we only have to check that a linear mapping L : U → V
is one-to-one or onto to prove that it is an isomorphism between these two spaces.

Theorem 4

If U and V are n-dimensional vector spaces over R, then a linear mapping L : U → V
is one-to-one if and only if it is onto.

You are asked to prove Theorem 4 as Problem D6.

PROBLEMS 4.7
Practice Problems

A1 For each of the following pairs of vector spaces, define an explicit isomorphism
to establish that the spaces are isomorphic. Prove that your map is an isomorphism.
(a) P3 and R⁴
(b) M(2, 2) and R⁴
(c) P3 and M(2, 2)
(d) P = {p(x) ∈ P2 | p(2) = 0} and the vector space
U = { [a1 0; 0 a2] | a1, a2 ∈ R } of 2 × 2 diagonal matrices

Homework Problems

B1 For each of the following pairs of vector spaces, define an explicit isomorphism
to establish that the spaces are isomorphic. Prove that your map is an isomorphism.
(a) P4 and R⁵
(b) M(2, 3) and R⁶
(c) R² and the vector space S = Span{ [...], [...] }
(d) P = {p(x) ∈ P3 | p(1) = 0} and the vector space
T = { [a1 a2; 0 a3] | a1, a2, a3 ∈ R } of 2 × 2 upper-triangular matrices

Conceptual Problems

D1 Prove Lemma 1. (Hint: Suppose that L is one-to-one. What is the unique u ∈ U
such that L(u) = 0? Conversely, suppose that Null(L) = {0}. If L(u1) = L(u2), then
what is L(u1 − u2)?)

D2 (a) Prove that if L and M are one-to-one, then M ∘ L is one-to-one.
(b) Give an example where M is not one-to-one but M ∘ L is one-to-one.
(c) Is it possible to give an example where L is not one-to-one but M ∘ L is
one-to-one? Explain.

D3 Prove that if L and M are onto, then M ∘ L is onto.

D4 Prove Theorem 2.

D5 Prove Theorem 3. To prove "isomorphic ⇒ same dimension," use Exercise 3. To
prove "same dimension ⇒ isomorphic," take a basis {u1, ..., un} for U and a basis
{v1, ..., vn} for V. Define an isomorphism by taking L(ui) = vi for 1 ≤ i ≤ n,
requiring that L be linear. (You must prove that this is an isomorphism.)

D6 Prove Theorem 4.

D7 Prove that any plane through the origin in R³ is isomorphic to R².

250   Chapter 4   Vector Spaces

D8 Recall the definition of the Cartesian product from Problem 4.1.D4. Prove that
U × {0_V} is a subspace of U × V that is isomorphic to U.

D9 (a) Prove that R² × R is isomorphic to R³.
(b) Prove that Rⁿ × R^m is isomorphic to R^(n+m).

D10 Suppose that L : U → V is a vector space isomorphism and that M : V → V is a
linear mapping. Prove that L⁻¹ ∘ M ∘ L is a linear mapping from U to U. Describe
the nullspace and range of L⁻¹ ∘ M ∘ L in terms of the nullspace and range of M.

CHAPTER REVIEW
Suggestions for Student Review

Remember that if you have understood the ideas of Chapter 4, you should be able to
give answers to these questions without looking them up. Try hard to answer them
from your own understanding.

1 State the essential properties of a vector space over R. Why is the empty set not a
vector space? Describe two or three examples of vector spaces that are not subspaces
of Rⁿ. (Section 4.2)

2 What is a basis? What are the important properties of a basis? (Section 4.3)

3 (a) Explain the concept of dimension. What theorem is required to ensure that the
concept of dimension is well defined? (Section 4.3)
(b) Explain how knowing the dimension of a vector space is helpful when you have
to find a basis for the vector space. (Section 4.3)

4 Why is linear independence of a spanning set important when we define
coordinates with respect to the spanning set? (Section 4.4)

5 Invent and analyze an example as follows.
(a) Give a basis B for a three-dimensional subspace in R⁵. (Don't make it too easy
by choosing any standard basis vectors, but don't make it too hard by choosing
completely random components.) (Section 4.3)
(b) Determine the standard coordinates in R⁵ of the vector that has coordinate
vector [...] with respect to your basis B. (Section 4.4)
(c) Take the vector you found in (b) and carry out the standard procedure to
determine its coordinates with respect to B. Did you get the right answer?
(Section 4.4)
(d) Pick any two vectors in R⁵ and determine whether they lie in your subspace.
Determine the coordinates of any vector that is in the subspace. (Section 4.4)

6 Write a short explanation of how you use information about consistency of
systems and uniqueness of solutions in testing for linear independence and in
determining whether a vector belongs to a given subspace. (Sections 4.3 and 4.4)

7 Give the definition of a linear mapping L : V → W and show how this implies that
L preserves linear combinations. Explain the procedure for determining if a vector y
is in the range of L. Describe how to find a basis for the nullspace and a basis for the
range of L. (Section 4.5)

8 State how to determine the standard matrix and the B-matrix of a linear mapping
L : V → V. Explain how [L(x)]_B is determined in terms of [L]_B. (Section 4.6)

9 State the definition of an isomorphism of vector spaces and give some examples.
Explain why a finite-dimensional vector space cannot be isomorphic to a proper
subspace of itself. (Section 4.7)

Chapter Review

251

Chapter Quiz

E1 Determine whether the following sets are vector spaces; explain briefly.
(a) The set of 4 × 3 matrices such that the sum of the entries in the first row is zero
(a11 + a12 + a13 = 0), under standard addition and scalar multiplication of matrices.
(b) The set of polynomials p(x) such that p(1) = 0 and p(2) = 0, under standard
addition and scalar multiplication of polynomials.
(c) The set of 2 × 2 matrices such that all entries are integers, under standard
addition and scalar multiplication of matrices.
(d) The set of all vectors [x1; x2; x3] such that x1 + x2 + x3 = 0, under standard
addition and scalar multiplication of vectors.
E2 In each of the following cases, determine whether the given set of vectors is a basis for M(2, 2). [The 2 × 2 matrices in parts (a), (b), and (c) are not recoverable from this copy.]

E3 (a) Let S be the subspace spanned by the vectors v1, v2, v3, and v4 given in the text. Pick a subset of the given vectors that forms a basis B for S. Determine the dimension of S.
(b) Determine the coordinates of x with respect to B. [The components of v1, ..., v4 and x are not recoverable from this copy.]

E4 (a) Find a basis for the plane in ℝ³ with equation x1 - x3 = 0.
(b) Extend the basis you found in (a) to a basis B for ℝ³.
(c) Let L : ℝ³ → ℝ³ be a reflection in the plane from part (a). Determine [L]_B.
(d) Using your result from part (c), determine the standard matrix [L]_S of the reflection.
E5 Let L : ℝ³ → ℝ³ be a linear mapping with a given standard matrix, and let B be a given basis of ℝ³. Determine the matrix [L]_B. [The standard matrix and the basis vectors are not recoverable from this copy.]

E6 Suppose that L : V → W is a linear mapping with Null(L) = {0}. Suppose that {v1, ..., vk} is a linearly independent set in V. Prove that {L(v1), ..., L(vk)} is a linearly independent set in W.

E7 Decide whether each of the following statements is true or false. If it is true, explain briefly; if it is false, give an example to show that it is false.
(a) A subspace of ℝⁿ must have dimension less than n.
(b) A set of four polynomials in P2 cannot be a basis for P2.
(c) If B is a basis for a subspace of ℝ⁵, then the B-coordinate vector of some vector x ∈ ℝ⁵ has five components.
(d) For any linear mapping L : ℝⁿ → ℝⁿ and any basis B of ℝⁿ, the rank of the standard matrix [L] is the same as the rank of the matrix [L]_B.
(e) For any linear mapping L : V → V and any basis B of V, the column space of [L]_B equals the range of L.
(f) If L : V → W is one-to-one, then dim V = dim W.

Chapter 4 Vector Spaces

Further Problems
These problems are intended to be challenging. They may not be of interest to all students.

F1 Let S be a subspace of an n-dimensional vector space V. Prove that there exists a linear operator L : V → V such that Null(L) = S.

F2 Use the ideas of this chapter to prove the uniqueness of the reduced row echelon form for a given matrix A. (Hint: Begin by assuming that there are two reduced row echelon forms R and S for A. What can you say about the columns with leading 1s in the two matrices?)

F3 Magic Squares: An Exploration of Their Vector Space Properties
We say that a matrix A ∈ M(3, 3) is a 3 × 3 magic square if the three row sums (where each row sum is the sum of the entries in one row of A), the three column sums of A, and the two diagonal sums of A (a11 + a22 + a33 and a13 + a22 + a31) all have the same value k. The common sum k is called the weight of the magic square A and is denoted by wt(A) = k. [The worked example of a magic square with wt(A) = 3 is not recoverable from this copy.] The aim of this exploration is to find all 3 × 3 magic squares. The subset of M(3, 3) consisting of magic squares is denoted MS_3.
(a) Show that MS_3 is a subspace of M(3, 3).
(b) Observe that weight determines a map wt : MS_3 → ℝ. Show that wt is linear.
(c) Compute the nullspace of wt. Suppose that K1 and K2 are magic squares in the nullspace whose matrices contain unknown entries (denoted by letters in the text). Determine these unknown entries and prove that {K1, K2} is a basis for Null(wt). (Hint: If A ∈ Null(wt), consider a11K1 - a12K2.)
(d) Let 𝟙 denote the 3 × 3 matrix with every entry equal to 1. Observe that 𝟙 is a magic square with wt(𝟙) = 3. Show that all A in MS_3 that have weight k are of the form (k/3)𝟙 + pK1 + qK2 for some p, q ∈ ℝ.
(e) Show that B = {𝟙, K1, K2} is a basis for MS_3.
(f) As an example, find all 3 × 3 magic squares of weight 1.
(g) Find the coordinates of a given magic square with respect to the basis B.
Conclusion: MS_3 is a three-dimensional subspace of M(3, 3).

Exercises F4-F7 require the following definitions. If S and T are subspaces of the vector space V, we define

S + T = { ps + qt | p, q ∈ ℝ, s ∈ S, t ∈ T }

If S and T are subspaces of V such that S + T = V and S ∩ T = {0}, we say that S is the complement of T (and T is the complement of S). In general, given a subspace S of V, one can choose a complement in many ways; the complement of S is not unique. For example, in ℝ², we may take a complement of the subspace spanned by one nonzero vector to be the subspace spanned by any vector not in it.

F4 In the vector space of continuous real-valued functions of a real variable, show that the even functions and the odd functions form subspaces such that each is the complement of the other.

F5 (a) If S is a k-dimensional subspace of ℝⁿ, show that any complement of S must be of dimension n - k.
(b) Suppose that S is a subspace of ℝⁿ that has a unique complement. Must it be true that S is either {0} or ℝⁿ?

F6 Suppose that v and w are vectors in a vector space V. Suppose also that S is a subspace of V. Let T be the subspace spanned by v and S. Let U be the subspace spanned by w and S. Prove that if w is in T but not in S, then v is in U.

F7 Show that if S and T are finite-dimensional subspaces of V, then

dim S + dim T = dim(S + T) + dim(S ∩ T)
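The defining condition in F3 is easy to check numerically. This sketch (not part of the text) verifies that the all-ones matrix 𝟙 is a magic square of weight 3; the helper `weight` is a hypothetical name introduced here:

```python
def weight(A):
    """Return the common row/column/diagonal sum of a 3x3 matrix A
    (a list of three rows) if A is a magic square, else None."""
    sums = [sum(row) for row in A]                               # row sums
    sums += [sum(A[i][j] for i in range(3)) for j in range(3)]   # column sums
    sums += [A[0][0] + A[1][1] + A[2][2],                        # main diagonal
             A[0][2] + A[1][1] + A[2][0]]                        # anti-diagonal
    return sums[0] if all(s == sums[0] for s in sums) else None

ones = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
print(weight(ones))  # 3
```

The same helper can be used to check any candidate basis element K1 or K2, which must have weight 0.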

MyMathlab

Go to MyMathLab at www.mymathlab.com. You can practise many of this chapter's exercises as often as you
want. The guided solutions help you find an answer step by step. You'll find a personalized study plan available to
you, too!

CHAPTER 5

Determinants
CHAPTER OUTLINE
5.1 Determinants in Terms of Cofactors
5.2 Elementary Row Operations and the Determinant
5.3 Matrix Inverse by Cofactors and Cramer's Rule
5.4 Area, Volume, and the Determinant

For each square matrix A, we define a number called the determinant of A. Originally, the determinant was used to "determine" whether a system of n linear equations in n variables was consistent. Now the determinant also provides a second method for finding the inverse of a matrix. It also plays an important role in the discussion of volume. Finally, it is an important tool for finding eigenvalues in Chapter 6.

5.1 Determinants in Terms of Cofactors


Consider the system of two linear equations in two variables:

a11x1 + a12x2 = b1
a21x1 + a22x2 = b2

By the standard procedure of elimination, the system is found to be consistent if and only if a11a22 - a12a21 ≠ 0. If it is consistent, then the solution is found to be

x1 = (b1a22 - b2a12) / (a11a22 - a12a21),  x2 = (b2a11 - b1a21) / (a11a22 - a12a21)

This fact prompts the following definition.

Definition
Determinant of a 2 × 2 Matrix

The determinant of a 2 × 2 matrix A = [a11 a12; a21 a22] is defined by

det A = det [a11 a12; a21 a22] = a11a22 - a12a21

EXAMPLE 1

Find the determinants of [1 3; 2 4], [2 7; 8 -5], and [2 2; 4 4].

Solution: We have

det [1 3; 2 4] = 1(4) - 3(2) = -2

det [2 7; 8 -5] = 2(-5) - 7(8) = -10 - 56 = -66

det [2 2; 4 4] = 2(4) - 2(4) = 0
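A one-line function reproduces these calculations. This quick check (not part of the text) mirrors Example 1:

```python
def det2(a11, a12, a21, a22):
    """Determinant of the 2x2 matrix [a11 a12; a21 a22]."""
    return a11 * a22 - a12 * a21

# The three matrices of Example 1:
print(det2(1, 3, 2, 4))    # -2
print(det2(2, 7, 8, -5))   # -66
print(det2(2, 2, 4, 4))    # 0
```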

An Alternate Notation: The determinant is often denoted by vertical straight lines:

One risk with this notation is that one may fail to distinguish between a matrix and the
determinant of the matrix. This is a rather gross error.

EXERCISE 1
Calculate the following determinants. [The entries of the 2 × 2 determinants in (a), (b), and (c) are not recoverable from this copy.]

The 3 × 3 Case

Let A be a 3 × 3 matrix. We can show through elimination (with some effort) that the corresponding system is consistent with a unique solution if and only if

D = a11a22a33 - a11a23a32 - a12a21a33 + a12a23a31 + a13a21a32 - a13a22a31 ≠ 0

We would like to simplify or reorganize this expression so that we can remember it more easily and so that we can determine how to generalize it to the n × n case.
Notice that a11 is a common factor in the first pair of terms in D, a12 is a common factor in the second pair, and a13 is a common factor in the third pair. Thus, D can be rewritten as

D = a11 det [a22 a23; a32 a33] - a12 det [a21 a23; a31 a33] + a13 det [a21 a22; a31 a32]

Observe that the determinant being multiplied by a11 is the determinant of the 2 × 2 matrix formed by removing the first row and first column of A. Similarly, a12 is being multiplied by (-1) times the determinant of the matrix formed by removing the first row and second column of A, and a13 is being multiplied by the determinant of the matrix formed by removing the first row and third column of A. Hence, we make the following definitions.

Definition
Cofactors of a 3 × 3 Matrix

Let A be a 3 × 3 matrix. Let A(i, j) denote the 2 × 2 submatrix obtained from A by deleting the i-th row and j-th column. Define the cofactor Cij of aij to be

Cij = (-1)^(i+j) det A(i, j)

Definition
Determinant of a 3 × 3 Matrix

The determinant of a 3 × 3 matrix A is defined by

det A = a11C11 + a12C12 + a13C13

Remarks
1. This definition of the determinant of a 3 × 3 matrix is called the expansion of the determinant along the first row. As we shall see below, a determinant can be expanded along any row or column.

2. The signs attached to cofactors can cause trouble if you are not careful. One helpful way to remember which sign to attach to which cofactor is to take a blank matrix, put a + in the top-left corner, and then alternate - and + both across and down:

[+ - +; - + -; + - +]

This is shown for a 3 × 3 matrix, but it works for a square matrix of any size.

3. Cij is called the cofactor of aij because it is the "factor with aij" in the expansion of the determinant. Note that each Cij is a number, not a matrix.

EXAMPLE 2
Let A be the 3 × 3 matrix whose second row is (2, 3, 5) and whose third row is (1, 0, 6). [The first row of A is not recoverable from this copy.] Calculate the cofactors of the first row of A and use them to find the determinant of A.

Solution: By definition, the cofactor C11 is (-1)^(1+1) times the determinant of the matrix obtained from A by deleting the first row and first column. Thus,

C11 = (-1)^(1+1) det [3 5; 0 6] = 3(6) - 5(0) = 18

The cofactor C12 is (-1)^(1+2) times the determinant of the matrix obtained from A by deleting the first row and second column. So,

C12 = (-1)^(1+2) det [2 5; 1 6] = -[2(6) - 5(1)] = -7

Finally, the cofactor C13 is

C13 = (-1)^(1+3) det [2 3; 1 0] = 2(0) - 3(1) = -3

Hence, det A = a11C11 + a12C12 + a13C13.

EXERCISE 2
Let A be the 3 × 3 matrix given in the text. Calculate the cofactors of the first row of A and use them to find the determinant of A. [The entries of A are not fully recoverable from this copy.]

Generally, when expanding a determinant, we write the steps more compactly, as in the next example.

EXAMPLE 3
Calculate det [1 2 3; -2 2 1; 5 0 -1].

Solution: By definition, we have

det [1 2 3; -2 2 1; 5 0 -1] = a11C11 + a12C12 + a13C13
= 1(-1)^(1+1) det [2 1; 0 -1] + 2(-1)^(1+2) det [-2 1; 5 -1] + 3(-1)^(1+3) det [-2 2; 5 0]
= 1[2(-1) - 1(0)] - 2[(-2)(-1) - 1(5)] + 3[(-2)(0) - 2(5)]
= -2 + 6 - 30 = -26
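The same expansion is easy to script. This sketch (not from the text) expands a 3 × 3 determinant along the first row, reproducing Example 3:

```python
def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def minor(A, i, j):
    """Submatrix A(i, j): delete row i and column j (0-indexed)."""
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det3(A):
    """Expand a 3x3 determinant along the first row."""
    return sum((-1) ** j * A[0][j] * det2(minor(A, 0, j)) for j in range(3))

A = [[1, 2, 3], [-2, 2, 1], [5, 0, -1]]
print(det3(A))  # -26
```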

We now define the determinant of an n × n matrix by following the pattern of the definition for the 3 × 3 case.

Definition
Cofactors of an n × n Matrix

Let A be an n × n matrix. Let A(i, j) denote the (n - 1) × (n - 1) submatrix obtained from A by deleting the i-th row and j-th column. The cofactor of aij is defined to be

Cij = (-1)^(i+j) det A(i, j)

Definition
Determinant of an n × n Matrix

The determinant of an n × n matrix A is defined by

det A = a11C11 + a12C12 + ... + a1nC1n

Remark
This is a recursive definition. The result for the n × n case is defined in terms of the (n - 1) × (n - 1) case, which in turn must be calculated in terms of the (n - 2) × (n - 2) case, and so on, until we get back to the 2 × 2 case, for which the result is given explicitly.

EXAMPLE 4

We calculate the following determinant by using the definition of the determinant. Note that * and ** represent cofactors whose values are irrelevant because they are multiplied by 0.

det [0 2 3 0; 1 5 6 7; -2 3 0 4; -5 1 2 3]
= a11C11 + a12C12 + a13C13 + a14C14
= 0(*) + 2(-1)^(1+2) det [1 6 7; -2 0 4; -5 2 3] + 3(-1)^(1+3) det [1 5 7; -2 3 4; -5 1 3] + 0(**)
= -2(1(-1)^(1+1)|0 4; 2 3| + 6(-1)^(1+2)|-2 4; -5 3| + 7(-1)^(1+3)|-2 0; -5 2|)
  + 3(1(-1)^(1+1)|3 4; 1 3| + 5(-1)^(1+2)|-2 4; -5 3| + 7(-1)^(1+3)|-2 3; -5 1|)
= -2((0 - 8) - 6(-6 + 20) + 7(-4 - 0)) + 3((9 - 4) - 5(-6 + 20) + 7(-2 + 15))
= -2(-8 - 84 - 28) + 3(5 - 70 + 91)
= -2(-120) + 3(26) = 318
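The recursive definition translates directly into code. This sketch (not from the text) implements cofactor expansion along the first row for a square matrix of any size and checks it against Example 4:

```python
def det(A):
    """Determinant by cofactor expansion along the first row.

    A is a square matrix given as a list of row lists. The recursion
    bottoms out at the 1x1 case, whose determinant is the single entry.
    """
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        if A[0][j] == 0:
            continue  # zero entries contribute nothing (as in Example 4)
        minor = [row[:j] + row[j+1:] for row in A[1:]]  # delete row 1, column j+1
        total += (-1) ** j * A[0][j] * det(minor)
    return total

A = [[0, 2, 3, 0], [1, 5, 6, 7], [-2, 3, 0, 4], [-5, 1, 2, 3]]
print(det(A))  # 318
```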

It is apparent that evaluating the determinant of a 4 × 4 matrix is a fairly lengthy calculation, and things will get worse for larger matrices. In applications it is not uncommon to have a matrix with thousands (or even millions) of columns, so it is very important to have results that simplify the evaluation of the determinant. We look at a few useful theorems here and some very helpful theorems in the next section.

Theorem 1
Suppose that A is an n × n matrix. Then the determinant of A may be obtained by a cofactor expansion along any row or any column. In particular, the expansion of the determinant along the i-th row of A is

det A = ai1Ci1 + ai2Ci2 + ... + ainCin

The expansion of the determinant along the j-th column of A is

det A = a1jC1j + a2jC2j + ... + anjCnj

We omit a proof here since there is no conceptually helpful proof, and it would be a bit grim to verify the result in the general case.
Theorem 1 is a very practical result. It allows us to choose from A the row or column along which we are going to expand. If one row or column has many zeros, it is sensible to expand along it since we shall then not have to evaluate the cofactors of the zero entries. This was demonstrated in Example 4, where we had to compute only two cofactors.

EXAMPLE 5
Calculate the determinants of the 3 × 3 matrix A and the 4 × 4 matrices B and C given in the text. [The entries of A, B, and C are not fully recoverable from this copy, but the expansions below survive.]

Solution: For A, we expand along the third column. The only non-zero entry in that column is a13 = -1, so

det A = a13C13 + a23C23 + a33C33 = (-1)(-1)^(1+3) det A(1, 3) = -1(15 - (-1)) = -16

For B, we expand along the second row to get

det B = a21C21 + a22C22 + a23C23 + a24C24

Two of these terms vanish because the corresponding entries of B are zero, and evaluating the remaining cofactors gives

det B = 12(3 - (-3)) = 72

We expanded the 3 × 3 submatrix along the second column. For C, whose bottom row is (0, 0, 0, 4), we continually expand along the bottom row to get

det C = 0 + 0 + 0 + 4(-1)^(4+4) det [4 2 1; 0 2 2; 0 0 -1]
      = 4((-1)(-1)^(3+3) det [4 2; 0 2])
      = 4(-1)(4(2) - 0) = 4(-1)(4)(2) = -32

EXERCISE 3
Calculate the determinant of the 4 × 4 matrix A given in the text by
1. Expanding along the first column
2. Expanding along the second row
3. Expanding along the fourth column
[The entries of A are not fully recoverable from this copy.]

Exercise 3 demonstrates the usefulness of expanding along the row or column with the most zeros. Of course, if one row or column contains only zeros, then this is even easier.

Theorem 2
If one row (or column) of an n × n matrix A contains only zeros, then det A = 0.

Proof: If the i-th row of A contains only zeros, then expanding the determinant along the i-th row of A gives

det A = ai1Ci1 + ai2Ci2 + ... + ainCin = 0Ci1 + 0Ci2 + ... + 0Cin = 0

As we saw with the matrix C in Example 5, another useful special case is when
the matrix is upper or lower triangular.

Theorem 3
If A is an n × n upper- or lower-triangular matrix, then the determinant of A is the product of the diagonal entries of A. That is,

det A = a11a22...ann

The proof is left as Problem D1.


Finally, recall that taking the transpose of a matrix A turns rows into columns and vice versa. That is, the columns of A^T are identical to the rows of A. Thus, if we expand the determinant of A^T along its first column, we will get the same cofactors and coefficients we would get by expanding the determinant of A along its first row. So, we get the following theorem.

Theorem 4
If A is an n × n matrix, then det A = det A^T.
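Theorems 3 and 4 are easy to confirm numerically. This check (not from the text) repeats the recursive cofactor-expansion det so the example is self-contained, then tests the transpose property on the matrix of Example 4 and the triangular-product rule:

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] *
               det([row[:j] + row[j+1:] for row in A[1:]])
               for j in range(len(A)))

def transpose(A):
    """Rows of the result are the columns of A."""
    return [list(col) for col in zip(*A)]

A = [[0, 2, 3, 0], [1, 5, 6, 7], [-2, 3, 0, 4], [-5, 1, 2, 3]]
print(det(A), det(transpose(A)))  # 318 318

# Theorem 3: a triangular determinant is the product of the diagonal.
U = [[4, 2, 1], [0, 2, 2], [0, 0, -1]]
print(det(U))  # 4 * 2 * (-1) = -8
```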

With the tools we have so far, evaluation of determinants is still a very tedious
business. Properties of the determinant with respect to elementary row operations make
the evaluation much easier. These properties are discussed in the next section.


PROBLEMS 5.1
Practice Problems
[The matrix entries for these problems are not recoverable from this copy; the problem statements are reproduced below.]

A1 Evaluate the following determinants.
A2 Evaluate the determinants of the following matrices by expanding along the first row.
A3 Evaluate the determinants of the following matrices by expanding along the row or column of your choice.
A4 Show that the determinant of each matrix is equal to the determinant of its transpose by expanding along the second column of A and the second row of A^T.
A5 Calculate the determinant of each of the following elementary matrices.
Homework Problems
[The matrix entries for these problems are not recoverable from this copy; the problem statements are reproduced below.]

B1 Evaluate the following determinants.
B2 Evaluate the determinants of the following matrices by expanding along the first row.
B3 Evaluate the determinants of the following matrices by expanding along the row or column of your choice.
B4 Show that the determinant of each matrix is equal to the determinant of its transpose by expanding along the third row of A and the third column of A^T.
B5 Calculate the determinant of each of the following elementary matrices.

Computer Problems
C1 Use a computer to evaluate the determinants of the following matrices.
(a) A 4 × 4 matrix with every entry equal to 0.5 or -0.5. [The exact layout is not recoverable from this copy.]
(b) The 3 × 3 matrix with entries, as grouped in the source: 1.09, 2.13, 1.72; 4.83, -3.25, 2.15; 2.95, 1.57, -0.89.
(c) A 4 × 4 matrix with entries 1.23, -2.09, 2.35, 0.17, 4.19, 3.89, -1.28, 22.1, 0.78, 1.58, 2.15, -3.55, 4.15, -2.59, 1.01, 0.00. [The row layout is not recoverable from this copy.]

Conceptual Problems
D1 Prove Theorem 3.

D2 (a) Consider the points (a1, a2) and (b1, b2) in ℝ². Show that the determinantal equation

det [x1 x2 1; a1 a2 1; b1 b2 1] = 0

is the equation of the line containing the two points.
(b) Write a determinantal equation for the plane in ℝ³ that contains the points (a1, a2, a3), (b1, b2, b3), and (c1, c2, c3).

5.2 Elementary Row Operations and the Determinant

Calculating the determinant of a matrix can be lengthy. This calculation can often be simplified by applying elementary row operations to the matrix, provided that suitable adjustments are made to the determinant.
To see what happens to the determinant of a matrix A when we multiply a row of A by a constant, we first consider a 3 × 3 example. Following the example, we state and prove the general result.

EXAMPLE 1
Let A = [a11 a12 a13; a21 a22 a23; a31 a32 a33] and let B be the matrix obtained from A by multiplying the third row of A by the real number r. Show that det B = r det A.

Solution: We have B = [a11 a12 a13; a21 a22 a23; ra31 ra32 ra33]. We wish to expand the determinant of B along its third row. The cofactors for this row involve only the first two rows of B. Observe that these are also the cofactors C31, C32, and C33 for the third row of A. Hence,

det B = ra31C31 + ra32C32 + ra33C33 = r(a31C31 + a32C32 + a33C33) = r det A

Theorem 1
Let A be an n × n matrix and let B be the matrix obtained from A by multiplying the i-th row of A by the real number r. Then det B = r det A.

Proof: As in the example, we expand the determinant of B along the i-th row. Notice that the cofactors of the elements in this row are exactly the cofactors of the i-th row of A since all the other rows of B are identical to the corresponding rows in A. Therefore,

det B = rai1Ci1 + rai2Ci2 + ... + rainCin = r det A

Remark
It is important to be careful when using this theorem as it is not uncommon to incorrectly use the reciprocal (1/r) of the factor. One way to counter this error is to think of "factoring out" the value of r from a row of the matrix. Keep this in mind when reading the following example.

EXAMPLE 2
By Theorem 1, a common factor may be taken out of any single row before expanding. [The two worked 3 × 3 determinants of this example, in which factors such as 5 and (-2) are factored out of one row, are not fully recoverable from this copy.]

EXERCISE 1
Let A be a 3 × 3 matrix and let r ∈ ℝ. Use Theorem 1 to show that det(rA) = r³ det A.

Next, we consider the effect of swapping two rows.
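Exercise 1's identity generalizes to det(rA) = rⁿ det A for an n × n matrix, since by Theorem 1 the factor r can be taken out of each of the n rows in turn. A quick numerical check (not from the text), repeating the recursive cofactor det from Section 5.1 so it stands alone:

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] *
               det([row[:j] + row[j+1:] for row in A[1:]])
               for j in range(len(A)))

A = [[1, 2, 3], [-2, 2, 1], [5, 0, -1]]
r = 3
rA = [[r * entry for entry in row] for row in A]
print(det(A), det(rA))  # -26 and 27 * (-26) = -702
```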

EXAMPLE 3
The following calculations illustrate that swapping rows causes a change of sign of the determinant. We have

det [c d; a b] = cb - da = -(ad - bc) = -det [a b; c d]

Let A = [a11 a12 a13; a21 a22 a23; a31 a32 a33] and let B be the matrix obtained from A by swapping the first and third rows of A. We expand the determinant of B along the second row (the row that has not been swapped):

det B = det [a31 a32 a33; a21 a22 a23; a11 a12 a13]
      = a21(-1)^(2+1) det [a32 a33; a12 a13] + a22(-1)^(2+2) det [a31 a33; a11 a13] + a23(-1)^(2+3) det [a31 a32; a11 a12]

Each of these 2 × 2 determinants is, by the 2 × 2 calculation above, the negative of the corresponding 2 × 2 determinant in the expansion of det A along its second row. Hence

det B = -(a21(-1)^(2+1) det [a12 a13; a32 a33] + a22(-1)^(2+2) det [a11 a13; a31 a33] + a23(-1)^(2+3) det [a11 a12; a31 a32]) = -det A

Theorem 2
Suppose that A is an n × n matrix and that B is the matrix obtained from A by swapping two rows. Then det B = -det A.

Proof: We use induction. We verified the case where n = 2 in Example 3.
Assume that the result holds for any (n - 1) × (n - 1) matrix and suppose that B is an n × n matrix obtained from A by swapping two rows. If the i-th row of A was not swapped, then the cofactors of the i-th row of B are determinants of (n - 1) × (n - 1) matrices. We can obtain these matrices from those in the cofactors of the i-th row of A by swapping the same two rows. Hence, by the inductive hypothesis, the cofactors Cij* of B and Cij of A satisfy Cij* = -Cij. Hence,

det B = ai1Ci1* + ... + ainCin* = ai1(-Ci1) + ... + ain(-Cin) = -(ai1Ci1 + ... + ainCin) = -det A
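A numerical spot-check of Theorem 2 (not from the text), again repeating the recursive cofactor det from Section 5.1:

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] *
               det([row[:j] + row[j+1:] for row in A[1:]])
               for j in range(len(A)))

A = [[1, 2, 3], [-2, 2, 1], [5, 0, -1]]
B = [A[2], A[1], A[0]]  # swap the first and third rows
print(det(A), det(B))   # -26 and 26
```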

Corollary 3

If two rows of A are equal, then detA = 0.

Proof: Let B be the matrix obtained from A by interchanging the two equal rows.
Obviously B =A, so detB = detA. But, by Theorem 2, detB= -detA, so detA =
-detA. This implies that detA = 0.

Finally, we show that the third type of elementary row operation is particularly
useful as it does not change the determinant.

EXAMPLE 4
For any r ∈ ℝ we have

det [a b; c + ra d + rb] = a(d + rb) - b(c + ra) = ad + arb - bc - arb = ad - bc = det [a b; c d]

Let A = [a11 a12 a13; a21 a22 a23; a31 a32 a33] and let B be the matrix obtained from A by adding r times row 2 to row 3. Expanding the determinant of B along the first row and using the result above gives

det B = det [a11 a12 a13; a21 a22 a23; a31 + ra21 a32 + ra22 a33 + ra23]
      = a11(-1)^(1+1) det [a22 a23; a32 + ra22 a33 + ra23] + a12(-1)^(1+2) det [a21 a23; a31 + ra21 a33 + ra23] + a13(-1)^(1+3) det [a21 a22; a31 + ra21 a32 + ra22]
      = a11(-1)^(1+1) det [a22 a23; a32 a33] + a12(-1)^(1+2) det [a21 a23; a31 a33] + a13(-1)^(1+3) det [a21 a22; a31 a32]
      = det A

Theorem 4
Suppose that A is an n × n matrix and that B is obtained from A by adding r times the i-th row of A to the k-th row. Then det B = det A.

Proof: We use induction. We verified the case where n = 2 in Example 4.
Assume that the result holds for any (n - 1) × (n - 1) matrix and suppose that B is the n × n matrix obtained from A by adding r times the i-th row of A to the k-th row. Then the cofactors of any other row, say the l-th row, of B are determinants of (n - 1) × (n - 1) matrices. We can obtain these matrices from those in the cofactors of the l-th row of A by adding r times the i-th row to the k-th row. Hence, by the inductive hypothesis, the cofactors Clj* of B and Clj of A are equal. Hence,

det B = al1Cl1* + ... + alnCln* = al1Cl1 + ... + alnCln = det A

Theorem 1, Theorem 2, Theorem 4, and Theorem 5.1.3 suggest that an effective


strategy for evaluating the determinant of a matrix is to row reduce the matrix to upper
triangular form. For n > 3, it can be shown that in general, this strategy will require
fewer arithmetic operations than straight cofactor expansion. The following example
illustrates this strategy.

EXAMPLE 5
Let A be the 4 × 4 matrix given in the text. Find det A. [The entries of A are not fully recoverable from this copy, but the sequence of row operations survives.]

Solution: By Theorem 4, performing the row operations R2 - R1 and R4 - R1 does not change the determinant. To get the matrix into upper-triangular form, we now swap row 2 and row 3; by Theorem 2, this multiplies the determinant by (-1). By Theorem 1, factoring out the common factor of (-4) from the third row gives another factor of (-4). Finally, by Theorem 4, we perform the row operation R4 - R2 to reach upper-triangular form with diagonal entries 1, 3, 1, 6, so that

det A = (-1)(-4)(1)(3)(1)(6) = 4(1)(3)(1)(6) = 72
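The strategy of Example 5 - track a sign for each swap and a factor for each scaling while reducing to triangular form - can be written as a general routine. This sketch (not from the text) uses floating-point elimination with pivoting:

```python
def det_by_elimination(A):
    """Determinant via row reduction to upper-triangular form.

    Swapping rows flips the sign (Theorem 2); adding a multiple of one
    row to another changes nothing (Theorem 4); the result is then the
    product of the diagonal entries (Theorem 5.1.3).
    """
    A = [row[:] for row in A]  # work on a copy
    n = len(A)
    sign = 1
    for j in range(n):
        # find a non-zero pivot in column j at or below row j
        p = next((i for i in range(j, n) if A[i][j] != 0), None)
        if p is None:
            return 0  # no pivot: the determinant is 0
        if p != j:
            A[j], A[p] = A[p], A[j]
            sign = -sign
        for i in range(j + 1, n):
            m = A[i][j] / A[j][j]
            A[i] = [a - m * b for a, b in zip(A[i], A[j])]
    result = sign
    for j in range(n):
        result *= A[j][j]
    return result

A = [[0, 2, 3, 0], [1, 5, 6, 7], [-2, 3, 0, 4], [-5, 1, 2, 3]]
print(det_by_elimination(A))  # 318, up to floating-point rounding
```

For n > 3 this needs far fewer arithmetic operations than straight cofactor expansion, which is why the section recommends it.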


EXERCISE 2
Let A be the 4 × 4 matrix given in the text. Find det A. [The entries of A are not fully recoverable from this copy.]

In some cases, it may be appropriate to use some combination of row operations


and cofactor expansion. We demonstrate this in the following example.

EXAMPLE 6
Find the determinant of A = [1 5 6 7; 1 8 7 9; 1 5 6 10; 0 1 4 -2].

Solution: By Theorem 4, performing the row operations R2 - R1 and R3 - R1 does not change the determinant:

det A = det [1 5 6 7; 0 3 1 2; 0 0 0 3; 0 1 4 -2]

Expanding along the first column gives

det A = 1(-1)^(1+1) det [3 1 2; 0 0 3; 1 4 -2]

Expanding along the second row gives

det A = (1)(3)(-1)^(2+3) det [3 1; 1 4] = (-3)(12 - 1) = -33

EXERCISE 3
Find the determinant of the 4 × 4 matrix A given in the text. [The entries of A are not fully recoverable from this copy.]
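Example 6's mix of row reduction followed by cofactor expansion can be cross-checked against a direct expansion. A sketch (not from the text), repeating the recursive cofactor det from Section 5.1:

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] *
               det([row[:j] + row[j+1:] for row in A[1:]])
               for j in range(len(A)))

A = [[1, 5, 6, 7], [1, 8, 7, 9], [1, 5, 6, 10], [0, 1, 4, -2]]
print(det(A))  # -33

# The row operations R2 - R1 and R3 - R1 leave the value unchanged:
R = [[1, 5, 6, 7], [0, 3, 1, 2], [0, 0, 0, 3], [0, 1, 4, -2]]
print(det(R))  # -33
```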


The Determinant and Invertibility


It follows from Theorem 1, Theorem 2, and Theorem 4 that there is a connection
between the determinant of a square matrix, its rank, and whether it is invertible.

Theorem 5
If A is an n × n matrix, then the following are equivalent:
(1) det A ≠ 0
(2) rank(A) = n
(3) A is invertible

Proof: In Theorem 3.5.4, we proved that (2) if and only if (3), so it is only necessary to prove that (1) if and only if (2).
Notice that Theorem 1, Theorem 2, and Theorem 4 indicate that if det A ≠ 0, then the matrices obtained from A by elementary row operations will also have non-zero determinants. Every matrix is row equivalent to a matrix in reduced row echelon form; this reduced row echelon form has a leading 1 in every entry on the main diagonal if and only if the rank of the matrix is n. Hence, a given matrix A is of rank n if and only if det A ≠ 0.

Remark
Theorem 5 shows that det A ≠ 0 is equivalent to all of the statements in Theorem 3.5.4 and Theorem 3.5.6.
We shall see how to use the determinant in calculating the inverse in the next section. It is worth noting that Theorem 5 implies that "almost all" square matrices are invertible; a square matrix fails to be invertible only if it satisfies the special condition det A = 0.

Determinant of a Product
Often it is necessary to calculate the determinant of the product of two square matrices A and B. When you remember that each entry of AB is the dot product of a row from A and a column from B, and that the rule for calculating determinants is quite complicated, you might expect a very complicated rule. The following theorem should be a welcome surprise. But first we prove a useful lemma.

Lemma 6
If E is an n × n elementary matrix and C is any n × n matrix, then

det(EC) = (det E)(det C)

Proof: Observe that if E is an elementary matrix, then since E is obtained by performing a single row operation on the identity matrix, we get by Theorem 1, Theorem 2, and Theorem 4 that det E = a, where a is 1, -1, or r, depending on the elementary row operation used. Moreover, since EC is the matrix obtained by performing that row operation on C, we get

det(EC) = a det C = (det E)(det C)

Theorem 7
If A and B are n × n matrices, then det(AB) = (det A)(det B).

Proof: If det A = 0, then Ay = 0 has infinitely many solutions by Theorem 5 and Theorem 3.5.4. If B is invertible, then y = Bx has a solution x for every y ∈ ℝⁿ, and hence there are infinitely many x such that ABx = 0, and so AB is not invertible. If B is not invertible, then there are infinitely many x such that Bx = 0. Then for all such x we get ABx = A0 = 0. So, AB is not invertible. Thus, if det A = 0, then AB is not invertible and hence det(AB) = 0. Therefore, if det A = 0, then det(AB) = (det A)(det B).
On the other hand, if det A ≠ 0, then by Theorem 3.5.4 the RREF of A is I. Thus, by Theorem 3.6.3, there exists a sequence of elementary matrices E1, ..., Ek such that A = E1···Ek. Hence AB = E1···EkB and, by repeated use of Lemma 6, we get

det(AB) = det(E1···EkB) = (det E1)(det E2)···(det Ek)(det B) = (det A)(det B)
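Theorem 7 is easy to sanity-check numerically. This check is not from the text; it repeats the recursive cofactor det from Section 5.1 and a hand-rolled matrix product:

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] *
               det([row[:j] + row[j+1:] for row in A[1:]])
               for j in range(len(A)))

def matmul(A, B):
    """Product of two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2, 3], [-2, 2, 1], [5, 0, -1]]
B = [[2, 0, 1], [1, 3, 0], [0, 1, 4]]
print(det(matmul(A, B)), det(A) * det(B))  # the two values agree
```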

EXAMPLE 7
Verify Theorem 7 for the matrices A and B given in the text. [The entries of A and B are not fully recoverable from this copy.]

Solution: By Theorem 4 we get det A = -15 and det B = -105, so (det A)(det B) = (-15)(-105) = 1575. On the other hand, using Theorem 1 and Theorem 4 gives

det AB = (5)(315) = 1575 = (det A)(det B)


PROBLEMS 5.2
Practice Problems
[The matrix entries for these problems are not recoverable from this copy; the problem statements are reproduced below.]

A1 Use row operations and triangular form to compute the determinants of each of the following matrices. Show your work clearly. Decide whether each matrix is invertible.
A2 Use a combination of row operations and cofactor expansion to evaluate the following determinants.
A3 Use row operations to compute the determinant of each of the following matrices. In each case, determine all values of p such that the matrix is invertible.
A4 Verify Theorem 7 for the given pairs of matrices A and B.
A5 Suppose that A is an n × n matrix.
(a) Determine det(rA).
(b) If A is invertible, show that det A⁻¹ = 1 / det A.
(c) If A³ = I, prove that A is invertible.

Homework Problems
[The matrix entries for these problems are not recoverable from this copy; the problem statements are reproduced below.]

B1 Use row operations and triangular form to compute the determinants of each of the following matrices. Show your work clearly. Decide whether each matrix is invertible.
B2 Use a combination of row operations and cofactor expansion to evaluate the following determinants.
B3 Use row operations to compute the determinant of each of the following matrices. In each case, determine all values of p such that the matrix is invertible.
Conceptual Problems
Dl A square matrix A is skew-symmetric if AT = -A.
If A is an n x n skew-symmetric matrix, with n odd,
prove that detA= 0.

D2 Assume that A and B are

n x n matrices. If

detAB= 0, is it necessarily true that detA= 0


or det B = O? Prove it or give a counterexample.

D3 Suppose that A is a 3

L(x)= Ax cannot be all of IR.3 and that its nullspace


cannot be trivial.

D4 Let A be an

detA = 0. What can you say about the rows of A?

[
l
[: {l [ : a

n x n invertible matrix, then det p-lAP= detA.

DS (a) Prove that det

x 3 matrix and that

Argue that the range of the matrix mapping

n x n matrix. Prove that if P is any

= det

a+p

b+q

+ det

c1+kr

274

Chapter 5

Determinants

a +p

b+q

c+r

(b) Use part (a) to express det d+x

e +y

f +z

DS (a) Prove that


det

a31

as the sum of determinants of matrices whose

vl [

p+q

u+

det b+c

q +r

v +w

c+a

r+p

w+u

a+b

07 Pmve that det

[:

: :
a32

(b) Let A be an n

entries are not sums.

D6 Prove that

[::

r det

a31

a32

a33

n matrix and B be the matrix

obtained from A by multiplying the i-th column


of A by a factor of r. Prove that det B

ra33

i [:: : :i .
=

r det A. (Hint: How are the matrices AT and BT

2 det b

related?)

(c) Let A be an n

n matrix and B be the matrix

obtained from A by adding a multiple of one

1 +a
2
(1 +a)

column to another. Prove that det A

det B.

5.3 Matrix Inverse by Cofactors and Cramer's Rule

We will now see that the determinant provides an alternative method of calculating the inverse of a square matrix. Determinant calculations are generally much longer than row reduction. Therefore, this method of finding the inverse is not as efficient as the method based on row reduction. However, it is useful in some theoretical applications because it provides a formula for A⁻¹ in terms of the entries of A.
This method is based on a simple calculation that makes use of the following theorem.

Theorem 1 [False Expansion Theorem]
If A is an n × n matrix and i ≠ k, then

ai1Ck1 + ai2Ck2 + ... + ainCkn = 0

Proof: Let B be the matrix obtained from A by replacing (not swapping) the k-th row of A by the i-th row of A. Then the i-th row of B is identical to the k-th row of B; hence det B = 0 by Corollary 5.2.3. Since the cofactors Ckj* of B are equal to the cofactors Ckj of A, and the coefficients bkj of the k-th row of B are equal to the coefficients aij of the i-th row of A, we get

0 = det B = bk1Ck1* + ... + bknCkn* = ai1Ck1 + ... + ainCkn

as required.

Definition
Cofactor Matrix

Let A be an n x n matrix. We define the cofactor matrix of A, denoted cof A, by

(cof A)_ij = C_ij

EXAMPLE 1

Let A be a 3 x 3 matrix. Determine cof A.

Solution: The nine cofactors of A are

C_ij = (-1)^{i+j} det A(i, j),   i, j = 1, 2, 3

where A(i, j) is the 2 x 2 matrix obtained from A by deleting the i-th row and the j-th column. Evaluating all nine of these 2 x 2 determinants and placing C_ij in the (i, j) entry gives cof A.

EXERCISE 1

Calculate the cofactor matrix of A

Observe that the cofactors of the i-th row of A form the i-th row of cof A, so they form the i-th column of (cof A)^T. Thus, the dot product of the i-th row of A and the i-th column of (cof A)^T equals the determinant of A. Moreover, by the False Expansion Theorem, the dot product of the i-th row of A and the j-th column of (cof A)^T equals 0 if i ≠ j. Hence, if a_i^T represents the i-th row of A and c_j represents the j-th column of (cof A)^T, then

A(cof A)^T = [a_1^T; ...; a_n^T][c_1 ... c_n] = [a_1 . c_1 ... a_1 . c_n; ...; a_n . c_1 ... a_n . c_n]
           = [det A 0 ... 0; 0 det A ... 0; ...; 0 0 ... det A] = (det A) I

where I is the identity matrix.

If det A ≠ 0, it follows that A ((1/det A)(cof A)^T) = I, and, therefore,

A^{-1} = (1/det A)(cof A)^T


If det A = 0, then, by Theorem 5.2.4, A is not invertible. We shall refer to this method of finding the inverse as the cofactor method. (Some people refer to the transpose of the cofactor matrix as the adjugate matrix and therefore call this the adjugate method.)
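The cofactor method translates directly into a few lines of code. The following is a minimal sketch (the function names are our own, numpy's `det` is used for the 2 x 2 minors, and the sample matrix is our choice); it is not an efficient way to invert a matrix, but it makes the formula A^{-1} = (1/det A)(cof A)^T concrete:

```python
import numpy as np

def cofactor_matrix(A):
    """Return cof A: the (i, j) entry is C_ij = (-1)^(i+j) det A(i, j),
    where A(i, j) is A with row i and column j deleted."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

def inverse_by_cofactors(A):
    """A^{-1} = (1/det A) (cof A)^T, valid when det A != 0."""
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("A is not invertible")
    return cofactor_matrix(A).T / d

# A sample 3 x 3 matrix (det A = 6):
A = np.array([[1.0, 1.0, -1.0],
              [2.0, 4.0,  5.0],
              [1.0, 1.0,  2.0]])
Ainv = inverse_by_cofactors(A)
print(np.allclose(A @ Ainv, np.eye(3)))   # True
```

Note that `A @ cofactor_matrix(A).T` reproduces (det A)I, which is exactly the identity established above.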
EXAMPLE 2

Find the inverse of the matrix A of Example 1 by the cofactor method.

Solution: We first find that

det A = 2(24 + 14) = 76

Thus, A is invertible. Using the cofactor matrix computed in Example 1 gives

A^{-1} = (1/det A)(cof A)^T = (1/76)(cof A)^T
EXERCISE 2
Use the cofactor method to find the inverse of A.

For 3 x 3 matrices, the cofactor method requires the evaluation of nine 2 x 2


determinants. This is manageable, but it is more work than would be required by the
row reduction method. Finding the inverse of a 4 x 4 matrix by using the cofactor
method would require the evaluation of sixteen 3 x 3 determinants; this method
becomes extremely unattractive.

Cramer's Rule

Consider the system of n linear equations in n variables, Ax = b. If det A ≠ 0 so that A is invertible, then the solution may be written in the form

x = [x1; ...; xi; ...; xn] = A^{-1} b = (1/det A)(cof A)^T b = (1/det A) [C11 C21 ... Cn1; C12 C22 ... Cn2; ...; C1n C2n ... Cnn] [b1; b2; ...; bn]

Consider the value of the i-th component of x:

xi = (1/det A)(b1 C1i + b2 C2i + ... + bn Cni)

This is 1/det A multiplied by the dot product of the vector b with the i-th row of (cof A)^T. But the i-th row of (cof A)^T is the i-th column of the original cofactor matrix cof A. So xi is the dot product of the vector b with the i-th column of cof A divided by det A.

Now, let Ni be the matrix obtained from A by replacing the i-th column of A by b. Then the cofactors of the i-th column of Ni will equal the cofactors of the i-th column of A, and hence we get

det Ni = b1 C1i + b2 C2i + ... + bn Cni

Therefore, the i-th component of x in the solution of Ax = b is

xi = det Ni / det A

This is called Cramer's Rule (or Method). We now demonstrate Cramer's Rule with a couple of examples.
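The rule xi = det Ni / det A can be sketched in code as follows (the function name is ours, and the sample system is a 3 x 3 system with det A = 6; this is an illustration, not how one would solve large systems):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b via Cramer's Rule: x_i = det(N_i) / det(A),
    where N_i is A with its i-th column replaced by b."""
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("det A = 0: Cramer's Rule does not apply")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ni = A.copy()
        Ni[:, i] = b          # replace the i-th column of A by b
        x[i] = np.linalg.det(Ni) / d
    return x

A = np.array([[1.0, 1.0, -1.0],
              [2.0, 4.0,  5.0],
              [1.0, 1.0,  2.0]])
b = np.array([1.0, 2.0, 3.0])
print(cramer_solve(A, b))   # matches np.linalg.solve(A, b)
```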

EXAMPLE 3

Use Cramer's Rule to solve the system of equations

x1 + x2 - x3 = b1
2x1 + 4x2 + 5x3 = b2
x1 + x2 + 2x3 = b3

Solution: The coefficient matrix is A = [1 1 -1; 2 4 5; 1 1 2], so det A = 6. Hence,

x1 = det N1 / det A = (1/6) det [b1 1 -1; b2 4 5; b3 1 2] = (3b1 - 3b2 + 9b3)/6

x2 = det N2 / det A = (1/6) det [1 b1 -1; 2 b2 5; 1 b3 2] = (b1 + 3b2 - 7b3)/6

x3 = det N3 / det A = (1/6) det [1 1 b1; 2 4 b2; 1 1 b3] = (-2b1 + 2b3)/6

EXAMPLE 4

Use Cramer's Rule to solve a system of equations whose coefficient matrix is

A = [-2/3  2/5; 3/5  -1/3]

Solution: We have

det A = (-2/3)(-1/3) - (2/5)(3/5) = 2/9 - 6/25 = -4/225

so A is invertible, and exactly as in Example 3 each component of the solution is xi = det Ni / det A, where Ni is obtained from A by replacing the i-th column with the right-hand side.

To solve a system of n equations in n variables by using Cramer's Rule would require the evaluation of the determinants of n + 1 matrices, each of which is n x n.

Thus, solving a system by using Cramer's Rule requires far more calculation than
elimination, so Cramer's Rule is not considered a computationally useful solution
method. However, like the cofactor method above, Cramer's Rule is sometimes used
to write a formula for the solution of a problem.
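This use of Cramer's Rule to produce a formula can be illustrated with a computer algebra system. The following sketch uses sympy (our choice of tool) on a sample 3 x 3 system with a symbolic right-hand side:

```python
import sympy as sp

b1, b2, b3 = sp.symbols("b1 b2 b3")
A = sp.Matrix([[1, 1, -1],
               [2, 4,  5],
               [1, 1,  2]])
b = sp.Matrix([b1, b2, b3])

x = []
for i in range(3):
    Ni = A.copy()
    Ni[:, i] = b          # N_i: replace column i of A by b
    x.append(sp.simplify(Ni.det() / A.det()))

print(x)   # closed-form formulas for x1, x2, x3 in terms of b1, b2, b3
```

Each entry of `x` is a formula, not a number, which is exactly the kind of output Cramer's Rule is useful for.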

PROBLEMS 5.3
Practice Problems

A1 Determine the inverse of each of the following matrices by using the cofactor method. Verify your answer by using multiplication.

A2 (a) Determine the cofactor matrix cof A.
(b) Calculate A(cof A)^T and determine det A and A^{-1}.

A3 Use Cramer's Rule to solve the following systems.
(a) 2x1 - 3x2 = 6
    3x1 + 5x2 = 7
(b) 3x1 + 3x2 = 2
    2x1 - 3x2 = 5
(c) 7x1 + x2 - 4x3 = 3
    -6x1 - 4x2 + x3 = 0
    4x1 - x2 - 2x3 = 6
(d) 2x1 + 3x2 - 5x3 = 2
    3x1 - x2 + 2x3 = 1
    5x1 + 4x2 - 6x3 = 3

Homework Problems

B1 Determine the inverse of each of the following matrices by using the cofactor method. Verify your answer by using multiplication.

B2 (a) Determine the cofactor matrix cof A.
(b) Calculate A(cof A)^T and determine det A and A^{-1}.

B3 Use Cramer's Rule to solve the following systems.
(a) 2x1 - 7x2 = 3
    x1 + 3x2 = -3
(b) 3x1 + 5x2 = -2
    5x1 + 4x2 = -17
(c) x1 - 5x2 - 2x3 = -2
    2x1 + 3x3 = 3
    4x1 + x2 = -4
(d) x1 - x3 = 1
    ... + 2x3 = -2
    3x1 - x2 + 3x3 = 5

Conceptual Problems

D1 Suppose that A = [a1 ... an] is an invertible n x n matrix.
(a) Verify by Cramer's Rule that the system of equations Ax = aj has the unique solution x = ej (the j-th standard basis vector).
(b) Explain the result of part (a) in terms of linear transformations and/or matrix multiplication.

D2 For an invertible 4 x 4 matrix A, use the cofactor method to calculate (A^{-1})_23 and (A^{-1})_42. (If you calculate more than these two entries of A^{-1}, you have missed the point.)

5.4 Area, Volume, and the Determinant

So far, we have been using the determinant of a matrix only to determine whether an n x n matrix is invertible. In particular, we have only been interested in whether the determinant of a matrix is zero or non-zero. We will now see that we are sometimes interested in the specific value of the determinant, as it can have an important geometric interpretation.

Area and the Determinant

Let u = [u1; u2] and v = [v1; v2]. In Chapter 1, we saw that we could construct a parallelogram from these two vectors by making the vectors u and v adjacent sides and having u + v as the vertex of the parallelogram opposite the origin, as in Figure 5.4.1. This is called the parallelogram induced by u and v.

Figure 5.4.1   Parallelogram induced by u and v.

We will calculate the area of this parallelogram by calculating the area of the rectangle with sides of length u1 + v1 and u2 + v2 and subtracting the area inside the rectangle and outside the parallelogram, as indicated in Figure 5.4.2. This gives

Area(u, v) = Area of Rectangle - Area 1 - Area 2 - Area 3 - Area 4 - Area 5 - Area 6
           = (u1 + v1)(u2 + v2) - (1/2)v1v2 - u2v1 - (1/2)u1u2 - (1/2)v1v2 - u2v1 - (1/2)u1u2
           = u1u2 + u1v2 + u2v1 + v1v2 - v1v2 - 2u2v1 - u1u2
           = u1v2 - u2v1

We immediately recognize this as the determinant of the matrix [u1 v1; u2 v2] = [u v].

Remark
At this time, you might be tempted to say that the area of the parallelogram induced by any two vectors u and v equals the determinant of [u v]. However, this would be slightly incorrect, as we have made a hidden assumption in our calculation above. In particular, in our diagram we have drawn u and v as a right-handed system.

Figure 5.4.2   Area of the parallelogram induced by u and v.

EXERCISE 1
Show that if u = [u1; u2] and v = [v1; v2] form a left-handed system, then the area of the parallelogram induced by u and v equals -det [u1 v1; u2 v2].

We have now shown that the area of the parallelogram induced by u and v is

Area(u, v) = |det [u1 v1; u2 v2]|

EXAMPLE 1
Draw the parallelogram induced by the following vectors and determine its area.
(a) u = [2; 3], v = [-2; 2]
(b) u = [1; 1], v = [1; -2]

Solution: For (a), we have

Area(u, v) = |det [2 -2; 3 2]| = |2(2) - 3(-2)| = 10

For (b), we have

Area(u, v) = |det [1 1; 1 -2]| = |1(-2) - 1(1)| = 3

Now suppose that the 2 x 2 matrix A is the standard matrix of a linear transformation L : R^2 -> R^2. Then the images of u and v under L are L(u) = Au and L(v) = Av. Moreover, the area of the image parallelogram is

Area(Au, Av) = |det [Au Av]| = |det (A [u v])|

Hence, we get

Area(Au, Av) = |det (A [u v])| = |det A| |det [u v]| = |det A| Area(u, v)     (5.1)

In words: the absolute value of the determinant of the standard matrix A of a linear transformation L is the factor by which area is changed under the linear transformation L. The result is illustrated in Figure 5.4.3.

Figure 5.4.3   Under a linear transformation with matrix A, the area of a figure is changed by factor |det A|.
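Equation (5.1) is easy to check numerically. In this sketch, u and v are the vectors of Example 1(a), while the matrix A is our own choice:

```python
import numpy as np

def parallelogram_area(u, v):
    """Area of the parallelogram induced by u and v: |det [u v]|."""
    return abs(np.linalg.det(np.column_stack([u, v])))

u = np.array([2.0, 3.0])
v = np.array([-2.0, 2.0])
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])   # det A = 3; chosen for illustration

print(parallelogram_area(u, v))           # 10, the area from Example 1(a)
# Equation (5.1): the image area is |det A| times the original area.
print(parallelogram_area(A @ u, A @ v))   # 30 = |det A| * 10
```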

EXAMPLE 2
Let A be a 2 x 2 matrix and let L be the linear mapping L(x) = Ax. Determine the image of u and v under L and compute the area determined by the image vectors in two ways.

Solution: The image of each vector under L is L(u) = Au and L(v) = Av. Hence, the area determined by the image vectors is

Area(L(u), L(v)) = |det [Au Av]|

Or, using (5.1),

Area(L(u), L(v)) = |det A| Area(u, v)

EXERCISE 2
Let A = [t 0; 0 1] be the standard matrix of the stretch S : R^2 -> R^2 by a factor of t in the x1 direction. Determine the image of the standard basis vectors e1 and e2 under S and compute the area determined by the image vectors in two ways. Illustrate with a sketch.

The Determinant and Volume

Recall from Chapter 1 that if u, v, and w are vectors in R^3, then the volume of the parallelepiped induced by u, v, and w is

Volume(u, v, w) = |w . (u x v)|

Now observe that if u = [u1; u2; u3], v = [v1; v2; v3], and w = [w1; w2; w3], then

|w . (u x v)| = |w1(u2 v3 - u3 v2) - w2(u1 v3 - u3 v1) + w3(u1 v2 - u2 v1)| = |det [u v w]|

Hence, the volume of the parallelepiped induced by u, v, and w is

Volume(u, v, w) = |det [u v w]|

EXAMPLE 3
For the given 3 x 3 matrix A and vectors u, v, and w, calculate the volume of the parallelepiped induced by u, v, and w and the volume of the parallelepiped induced by Au, Av, and Aw.

Solution: The volume determined by u, v, and w is

Volume(u, v, w) = |det [u v w]| = |-7| = 7

The volume determined by Au, Av, and Aw is

Volume(Au, Av, Aw) = |det [Au Av Aw]| = |385| = 385

Moreover, det A = -55, so

Volume(Au, Av, Aw) = |det A| Volume(u, v, w)

which coincides with the result for the 2 x 2 case.

In general, if v1, ..., vn are n vectors in R^n, then we say that they induce an n-dimensional parallelotope (the n-dimensional version of a parallelogram or parallelepiped). The n-volume of the parallelotope is

n-Volume(v1, ..., vn) = |det [v1 ... vn]|

and if A is the standard matrix of a linear mapping L : R^n -> R^n, then

n-Volume(Av1, ..., Avn) = |det A| n-Volume(v1, ..., vn)
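The same check works in any dimension; here is a 3-dimensional sketch (the vectors and the matrix A below are our own sample data):

```python
import numpy as np

def n_volume(*vectors):
    """n-volume of the parallelotope induced by n vectors in R^n:
    |det [v1 ... vn]|."""
    return abs(np.linalg.det(np.column_stack(vectors)))

u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 1.0, 1.0])
w = np.array([1.0, 1.0, 0.0])
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 3.0]])   # det A = 6

vol = n_volume(u, v, w)                    # 3
vol_image = n_volume(A @ u, A @ v, A @ w)  # 18 = |det A| * 3
print(vol, vol_image)
```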

PROBLEMS 5.4
Practice Problems

A1 (a) Calculate the area of the parallelogram induced by u and v in R^2.
(b) Determine the image of u and v under the linear mapping with standard matrix A.
(c) Compute the determinant of A.
(d) Compute the area of the parallelogram induced by the image vectors in two ways.

A2 Let A = [0 1; 1 0] be the standard matrix of the reflection R : R^2 -> R^2 over the line x2 = x1. Determine the image of u and v under R and compute the area determined by the image vectors.

A3 (a) Compute the volume of the parallelepiped induced by u, v, and w.
(b) Compute the determinant of A.
(c) What is the volume of the image of the parallelepiped of part (a) under the linear mapping with standard matrix A?

A4 Repeat Problem A3 with the given vectors u, v, w and matrix A.

A5 (a) Calculate the 4-volume of the 4-dimensional parallelotope determined by v1, v2, v3, and v4.
(b) Calculate the 4-volume of the image of this parallelotope under the linear mapping with standard matrix A.

A6 Let {v1, ..., vn} be vectors in R^n. Prove that the n-volume of the parallelotope induced by v1, ..., vn is the same as the volume of the parallelotope induced by v1, ..., v_{n-1}, vn + t v1.

Homework Problems

B1 (a) Calculate the area of the parallelogram induced by u and v in R^2.
(b) Determine the image of u and v under the linear mapping with standard matrix A.
(c) Compute the determinant of A.
(d) Compute the area of the parallelogram induced by the image vectors in two ways.

B2 Let A = [1 t; 0 1] be the standard matrix of the shear H : R^2 -> R^2 in the x1 direction by a factor of t. Determine the image of the standard basis vectors e1 and e2 under H and compute the area determined by the image vectors.

B3 (a) Compute the volume of the parallelepiped induced by u, v, and w.
(b) Compute the determinant of A.
(c) What is the volume of the image of the parallelepiped of part (a) under the linear mapping with standard matrix A?

B4 Repeat Problem B3 with the given vectors u, v, w and matrix A.

B5 (a) Calculate the 4-volume of the 4-dimensional parallelotope determined by v1, v2, v3, and v4.
(b) Calculate the 4-volume of the image of this parallelotope under the linear mapping with standard matrix A.

Conceptual Problems

D1 Let {v1, ..., vn} be vectors in R^n. Prove that the n-volume of the parallelotope induced by v1, ..., vn is half the volume of the parallelotope induced by 2v1, v2, ..., vn.

D2 Suppose that L, M : R^3 -> R^3 are linear mappings with standard matrices A and B, respectively. Prove that the factor by which a volume is multiplied under the composite map M o L is |det BA|.

CHAPTER REVIEW
Suggestions for Student Review

Try to answer all of these questions before checking answers at the suggested locations. In particular, try to invent your own examples. These review suggestions are intended to help you carry out your review. They may not cover every idea you need to master. Working in small groups may improve your efficiency.

1 Define cofactor and explain cofactor expansion. Be especially careful about signs. (Section 5.1)

2 State as many facts as you can that simplify the evaluation of determinants. For each fact, explain why it is true. (Sections 5.1 and 5.2)

3 Explain and justify the cofactor method for finding a matrix inverse. Write down a 3 x 3 matrix A and calculate A(cof A)^T. (Section 5.3)

4 How and why are determinants connected to volumes? (Section 5.4)

Chapter Quiz

E1 By cofactor expansion along some column, evaluate the determinant of the given 4 x 4 matrix.

E2 By row reducing to upper-triangular form, evaluate the determinant of the given 4 x 4 matrix.

E3 Evaluate the determinant of the given 3 x 3 matrix.

E4 Determine all values of k such that the given matrix is invertible.

E5 Suppose that A is a 5 x 5 matrix and det A = 7.
(a) If B is obtained from A by multiplying the fourth row of A by 3, what is det B?
(b) If C is obtained from A by moving the first row to the bottom and moving all other rows up, what is det C? Justify.
(c) What is det(2A)?
(d) What is det(A^{-1})?
(e) What is det(A^T A)?

E6 Determine (A^{-1})_31 for the given matrix A by using the cofactor method.

E7 Determine x2 by using Cramer's Rule if

2x1 + 3x2 + x3 = 1
x1 + x2 - x3 = -1
-2x1 + 2x3 = -4

E8 (a) What is the volume of the parallelepiped induced by u, v, and w?
(b) For the given matrix A, what is the volume of the parallelepiped induced by Au, Av, and Aw?

Further Problems
These exercises are intended to be challenging. They may not be of interest to all students.

F1 Suppose that A is an n x n matrix with all row sums equal to zero. (That is, sum_{j=1}^{n} a_ij = 0 for 1 <= i <= n.) Prove that det A = 0.

F2 Suppose that A and A^{-1} both have all integer entries. Prove that det A = ±1.

F3 Consider a triangle in the plane with side lengths a, b, and c. Let the angles opposite the sides with lengths a, b, and c be denoted by A, B, and C, respectively. By using trigonometry, show that

c = b cos A + a cos B

Write similar equations for the other two sides. Use Cramer's Rule to show that

cos A = (b^2 + c^2 - a^2) / (2bc)

F4 (a) Let V3(a, b, c) = det [1 1 1; a b c; a^2 b^2 c^2]. Without expanding, argue that (a - b), (b - c), and (c - a) are all factors of V3(a, b, c). By considering the cofactor of c^2, argue that

V3(a, b, c) = (c - a)(c - b)(b - a)

(b) Let V4(a, b, c, d) = det [1 1 1 1; a b c d; a^2 b^2 c^2 d^2; a^3 b^3 c^3 d^3]. By using arguments similar to those in part (a) (and without expanding the determinant), argue that

V4(a, b, c, d) = (d - a)(d - b)(d - c) V3(a, b, c)

F5 Suppose that A is a 4 x 4 matrix partitioned into 2 x 2 blocks:

A = [A1 A2; A3 A4]

(a) If A3 = O_{2,2} (the 2 x 2 zero matrix), show that det A = det A1 det A4.
(b) Give an example to show that, in general, det A ≠ det A1 det A4 - det A2 det A3.

F6 Suppose that A is a 3 x 3 matrix and B is a 2 x 2 matrix.
(a) Show that det [A O_{3,2}; O_{2,3} B] = det A det B.
(b) What is det [O_{3,2} A; B O_{2,3}]?

MyMathLab

Go to MyMathLab at www.mymathlab.com. You can practise many of this chapter's exercises as often as you want. The guided solutions help you find an answer step by step. You'll find a personalized study plan available to you, too!

CHAPTER 6

Eigenvectors and Diagonalization

CHAPTER OUTLINE
6.1 Eigenvalues and Eigenvectors
6.2 Diagonalization
6.3 Powers of Matrices and the Markov Process
6.4 Diagonalization and Differential Equations

An eigenvector is a special or preferred vector of a linear transformation that is mapped by the linear transformation to a multiple of itself. Eigenvectors play an important role in many applications in the natural and physical sciences.

6.1 Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors of a Mapping

Definition
Eigenvector
Eigenvalue

Suppose that L : R^n -> R^n is a linear transformation. A non-zero vector v in R^n such that L(v) = λv is called an eigenvector of L; the scalar λ is called an eigenvalue of L. The pair λ, v is called an eigenpair.

Remarks
1. The pairing of eigenvalues and eigenvectors is not one-to-one. In particular, we will see that each eigenvector of L corresponds to exactly one eigenvalue, while each eigenvalue will correspond to infinitely many eigenvectors.

2. We have restricted our definition of eigenvectors (and hence eigenvalues) to be


real. In Chapter 9 we will consider the case where we allow eigenvalues and
eigenvectors to be complex.

The restriction that an eigenvector v be non-zero is natural and important. It is natural because L(0) = 0 for every linear transformation, so it is completely uninteresting to consider v = 0 as an eigenvector. It is important because many of the applications involving eigenvectors make sense only for non-zero vectors. In particular, we will see that we often want to look for a basis of R^n that contains eigenvectors of a linear transformation.

EXAMPLE 1

Let A = [17 -15; 20 -18] and let L : R^2 -> R^2 be the linear transformation defined by L(x) = Ax. Determine which of the following vectors are eigenvectors of L and give the corresponding eigenvalues:

[1; 1],  [-2; -2],  [1; 3],  and  [3; 4]

Solution: To test whether [1; 1] is an eigenvector, we calculate

L(1, 1) = [17 -15; 20 -18][1; 1] = [2; 2] = 2 [1; 1]

So, [1; 1] is an eigenvector of L with eigenvalue 2.

Since [-2; -2] = (-2)[1; 1] and L is linear,

L(-2, -2) = (-2)L(1, 1) = (-2)(2)[1; 1] = 2 [-2; -2]

So, [-2; -2] is also an eigenvector of L with eigenvalue 2. In fact, by a similar argument, any non-zero multiple of [1; 1] is an eigenvector of L with eigenvalue 2.

Next,

L(1, 3) = [17 -15; 20 -18][1; 3] = [-28; -34] ≠ λ [1; 3]

for any real number λ, so [1; 3] is not an eigenvector of L.

Finally,

L(3, 4) = [17 -15; 20 -18][3; 4] = [-9; -12] = -3 [3; 4]

So, [3; 4] (or any non-zero multiple of it) is an eigenvector of L with eigenvalue -3.

EXAMPLE 2

Eigenvectors and Eigenvalues of Projections and Reflections in R^3

1. Since proj_n(n) = 1n, n is an eigenvector of proj_n with corresponding eigenvalue 1. If v is orthogonal to n, then proj_n(v) = 0 = 0v, so v is an eigenvector of proj_n with corresponding eigenvalue 0. Observe that this means there is a whole plane of eigenvectors corresponding to the eigenvalue 0, as the set of vectors orthogonal to n is a plane in R^3. For an arbitrary vector x in R^3, proj_n(x) is a multiple of n, so x is definitely not an eigenvector of proj_n unless it is a multiple of n or orthogonal to n.

2. On the other hand, perp_n(n) = 0 = 0n, so n is an eigenvector of perp_n with eigenvalue 0. Since perp_n(v) = 1v for any v orthogonal to n, such a v is an eigenvector of perp_n with eigenvalue 1.

3. We have refl_n(n) = -1n, so n is an eigenvector of refl_n with eigenvalue -1. For v orthogonal to n, refl_n(v) = 1v. Hence, such a v is an eigenvector of refl_n with eigenvalue 1.

EXAMPLE 3

Eigenvectors and Eigenvalues of Rotations in R^2

Consider the rotation R_θ : R^2 -> R^2 with matrix [cos θ  -sin θ; sin θ  cos θ], where θ is not an integer multiple of π. By geometry, it is clear that there is no non-zero vector v in R^2 such that R_θ(v) = λv for some real number λ. This linear transformation has no real eigenvalues or real eigenvectors. In Chapter 9 we will see that it does have complex eigenvalues and complex eigenvectors.

EXERCISE 1
Let R_θ : R^3 -> R^3 denote the rotation in R^3 with matrix [cos θ  -sin θ  0; sin θ  cos θ  0; 0  0  1]. Determine any real eigenvectors of R_θ and the corresponding eigenvalues.

Eigenvalues and Eigenvectors of a Matrix

The geometric meaning of eigenvectors is much clearer when we think of them as belonging to linear transformations. However, in many applications of these ideas, it is a matrix A that is given. Thus, we also speak of the eigenvalues and eigenvectors of the matrix A.

Definition
Eigenvector
Eigenvalue

Suppose that A is an n x n matrix. A non-zero vector v in R^n such that Av = λv is called an eigenvector of A; the scalar λ is called an eigenvalue of A. The pair λ, v is called an eigenpair.

In Example 1, we saw that [1; 1] and [3; 4] are eigenvectors of the matrix A = [17 -15; 20 -18] with eigenvalues 2 and -3, respectively.

Finding Eigenvectors and Eigenvalues

If eigenvectors and eigenvalues are going to be of any use, we need a systematic method for finding them. Suppose that a square matrix A is given; then a non-zero vector v in R^n is an eigenvector if and only if Av = λv. This condition can be rewritten as

Av - λv = 0

It is tempting to write this as (A - λ)v = 0, but this would be incorrect because A is a matrix and λ is a number, so their difference is not defined. To get around this, we use the fact that v = Iv, where I is the appropriately sized identity matrix. Then the eigenvector condition can be rewritten

(A - λI)v = 0

The eigenvector v is thus any non-trivial solution (since it cannot be the zero vector) of the homogeneous system of linear equations with coefficient matrix A - λI. By the Invertible Matrix Theorem, we know that a homogeneous system of n equations in n variables has non-trivial solutions if and only if its coefficient matrix has determinant equal to 0. Hence, for λ to be an eigenvalue, we must have det(A - λI) = 0. This is the key result in the procedure for finding the eigenvalues and eigenvectors, so it is worth summarizing as a theorem.

Theorem 1

Suppose that A is an n x n matrix. A real number λ is an eigenvalue of A if and only if λ satisfies the equation

det(A - λI) = 0

If λ is an eigenvalue of A, then all non-trivial solutions of the homogeneous system

(A - λI)v = 0

are eigenvectors of A that correspond to λ.

Observe that the set of all eigenvectors corresponding to an eigenvalue λ is just the nullspace of A - λI, excluding the zero vector. In particular, the set containing all eigenvectors corresponding to λ and the zero vector is a subspace of R^n. We make the following definition.

Definition
Eigenspace

Let λ be an eigenvalue of an n x n matrix A. Then the set containing the zero vector and all eigenvectors of A corresponding to λ is called the eigenspace of λ.

Remark
From our work preceding the theorem, we see that the eigenspace of any eigenvalue λ must contain at least one non-zero vector. Hence, the dimension of the eigenspace must be at least 1.
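The procedure of Theorem 1 is what a library routine carries out for us. As a sketch, numpy's `eig` finds the roots of det(A - λI) together with one eigenvector per eigenvalue; here it is applied to the matrix of Example 1:

```python
import numpy as np

A = np.array([[17.0, -15.0],
              [20.0, -18.0]])

# numpy returns the eigenvalues and, as columns of `vecs`,
# one eigenvector for each eigenvalue (scaled to unit length).
vals, vecs = np.linalg.eig(A)
print(sorted(vals))   # approximately [-3.0, 2.0]

# Check the defining property Av = λv for each eigenpair.
for lam, v in zip(vals, vecs.T):
    print(np.allclose(A @ v, lam * v))   # True
```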

EXAMPLE 4

Find the eigenvalues and eigenvectors of the matrix A = [17 -15; 20 -18] of Example 1.

Solution: We have

A - λI = [17 -15; 20 -18] - λ [1 0; 0 1] = [17-λ  -15; 20  -18-λ]

(You should set up your calculations like this: you will need A - λI later when you find the eigenvectors.) Then

det(A - λI) = det [17-λ  -15; 20  -18-λ] = (17 - λ)(-18 - λ) - (-15)(20) = λ^2 + λ - 6 = (λ + 3)(λ - 2)

so det(A - λI) = 0 when λ = -3 or λ = 2. These are all of the eigenvalues of A.

To find all the eigenvectors of λ = -3, we solve the homogeneous system (A - λI)v = 0. Writing A - (-3)I and row reducing gives

A - (-3)I = [20  -15; 20  -15] ~ [1  -3/4; 0  0]

so that the general solution of (A - λI)v = 0 is v = t [3/4; 1], t in R. Thus, all eigenvectors of A corresponding to λ = -3 are v = t [3/4; 1] for any non-zero value of t, and the eigenspace for λ = -3 is Span{[3/4; 1]}.

We repeat the process for the eigenvalue λ = 2:

A - 2I = [15  -15; 20  -20] ~ [1  -1; 0  0]

The general solution of (A - 2I)v = 0 is v = t [1; 1], t in R, so the eigenspace for λ = 2 is Span{[1; 1]}. In particular, all eigenvectors of A corresponding to λ = 2 are all non-zero multiples of [1; 1].

Observe in Example 4 that det(A - λI) gave us a degree 2 polynomial. This motivates the following definition.

Definition
Characteristic Polynomial

Let A be an n x n matrix. Then C(λ) = det(A - λI) is called the characteristic polynomial of A.

For an n x n matrix A, the characteristic polynomial C(λ) is of degree n, and the roots of C(λ) are the eigenvalues of A. Note that the term of highest degree, λ^n, has coefficient (-1)^n; some other books prefer to work with the polynomial det(λI - A) so that the coefficient of λ^n is always 1. In our notation, the constant term in the characteristic polynomial is det A (see Problem 6.2.D7).

It is relevant here to recall some facts about the roots of an n-th degree polynomial:

(1) λ1 is a root of C(λ) if and only if (λ - λ1) is a factor of C(λ).
(2) The total number of roots (real and complex, counting repetitions) is n.
(3) Complex roots of the equation occur in "conjugate pairs," so that the total number of complex roots must be even.
(4) If n is odd, there must be at least one real root.
(5) If the entries of A are integers, since the leading coefficient of the characteristic polynomial is ±1, any rational root must in fact be an integer.
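These facts are easy to explore numerically. numpy's `poly` returns the coefficients of det(λI - A) (the monic convention mentioned above), and `roots` then recovers the eigenvalues; the 3 x 3 matrix below is a sample with integer eigenvalues:

```python
import numpy as np

A = np.array([[-3.0, 5.0, -5.0],
              [-7.0, 9.0, -5.0],
              [-7.0, 7.0, -3.0]])

# Coefficients of det(λI - A) = λ^3 - 3λ^2 - 10λ + 24, highest degree first.
coeffs = np.poly(A)
print(np.round(coeffs))   # 1, -3, -10, 24

# The roots of the characteristic polynomial are the eigenvalues.
print(np.round(np.sort(np.roots(coeffs))))   # -3, 2, 4
```

Consistent with fact (5), the integer-entry matrix has only integer rational roots.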

EXAMPLE 5

Find the eigenvalues and eigenvectors of A = [1 1; 0 1].

Solution: The characteristic polynomial is

C(λ) = det [1-λ  1; 0  1-λ] = (1 - λ)^2

So, λ = 1 is a double root (that is, (λ - 1) appears as a factor of C(λ) twice), so λ = 1 is the only eigenvalue of A.

For λ = 1, we have

A - λI = [0  1; 0  0]

which has the general solution v = t [1; 0], t in R. Thus, the eigenspace for λ = 1 is Span{[1; 0]}.

EXERCISE 2
Find the eigenvalues and eigenvectors of A.

EXAMPLE 6

Find the eigenvalues and eigenvectors of A = [-3  5  -5; -7  9  -5; -7  7  -3].

Solution: We have

C(λ) = det(A - λI) = det [-3-λ  5  -5; -7  9-λ  -5; -7  7  -3-λ]

Expanding this determinant along some row or column will involve a fair number of calculations. Also, we will end up with a degree 3 polynomial, which may not be easy to factor. But this is just a determinant, so we can use properties of determinants to make it easier. Since adding a multiple of one row to another does not change the determinant, we get, by subtracting row 2 from row 3,

C(λ) = det [-3-λ  5  -5; -7  9-λ  -5; 0  -2+λ  2-λ]

Expanding this along the bottom row gives

C(λ) = (-2 + λ)(-1)((-3 - λ)(-5) - (-5)(-7)) + (2 - λ)((-3 - λ)(9 - λ) - 5(-7))
     = (2 - λ)((5λ + 15 - 35) + (λ^2 - 6λ - 27 + 35))
     = -(λ - 2)(λ^2 - λ - 12) = -(λ - 2)(λ - 4)(λ + 3)

Hence, the eigenvalues of A are λ1 = 2, λ2 = 4, and λ3 = -3.

For λ1 = 2,

A - λ1 I = [-5  5  -5; -7  7  -5; -7  7  -5] ~ [1  -1  0; 0  0  1; 0  0  0]

Hence, the general solution of (A - λ1 I)v = 0 is v = t [1; 1; 0]. Thus, a basis for the eigenspace of λ1 is {[1; 1; 0]}.

For λ2 = 4,

A - λ2 I = [-7  5  -5; -7  5  -5; -7  7  -7] ~ [1  0  0; 0  1  -1; 0  0  0]

Hence, the general solution of (A - λ2 I)v = 0 is v = t [0; 1; 1]. Thus, a basis for the eigenspace of λ2 is {[0; 1; 1]}.

For λ3 = -3,

A - λ3 I = [0  5  -5; -7  12  -5; -7  7  0] ~ [1  0  -1; 0  1  -1; 0  0  0]

Hence, the general solution of (A - λ3 I)v = 0 is v = t [1; 1; 1]. Thus, a basis for the eigenspace of λ3 is {[1; 1; 1]}.
EXAMPLE 7

Find the eigenvalues and eigenvectors of A = [1 1 1; 1 1 1; 1 1 1].

Solution: We have

C(λ) = det [1-λ  1  1; 1  1-λ  1; 1  1  1-λ]

Subtracting the third row from each of the first two rows gives

C(λ) = det [-λ  0  λ; 0  -λ  λ; 1  1  1-λ]

Expanding along the first row:

C(λ) = -λ((-λ)(1 - λ) - λ(1)) + λ(0(1) - (-λ)(1)) = -λ(λ^2 - 2λ) + λ^2 = -λ^2(λ - 3)

Therefore, the eigenvalues of A are λ1 = 0 (which occurs twice) and λ2 = 3.

For λ1 = 0,

A - λ1 I = [1 1 1; 1 1 1; 1 1 1] ~ [1 1 1; 0 0 0; 0 0 0]

Hence, a basis for the eigenspace of λ1 = 0 is {[-1; 1; 0], [-1; 0; 1]}.

For λ2 = 3,

A - λ2 I = [-2  1  1; 1  -2  1; 1  1  -2] ~ [1  0  -1; 0  1  -1; 0  0  0]

Thus, a basis for the eigenspace of λ2 = 3 is {[1; 1; 1]}.
These examples motivate the following definitions.

Definition
Algebraic Multiplicity
Geometric Multiplicity

Let A be an n x n matrix with λ an eigenvalue of A. The algebraic multiplicity of λ is the number of times λ is repeated as a root of the characteristic polynomial. The geometric multiplicity of λ is the dimension of the eigenspace of λ.

EXAMPLE 8

In Example 5, the eigenvalue λ = 1 has algebraic multiplicity 2 since the characteristic polynomial is (λ - 1)(λ - 1), and λ = 1 has geometric multiplicity 1 since a basis for its eigenspace is {[1; 0]}.

In Example 6, each eigenvalue has algebraic multiplicity 1 and geometric multiplicity 1. In Example 7, the eigenvalue 0 has algebraic and geometric multiplicity 2, and the eigenvalue 3 has algebraic and geometric multiplicity 1.
EXERCISE 3
Show that -2 is an eigenvalue of the given matrix A, and determine the algebraic and geometric multiplicity of each of the eigenvalues of A.

These definitions lead to some theorems that will be very important in the next section.

Theorem 2
Let λ be an eigenvalue of an n x n matrix A. Then

1 <= geometric multiplicity of λ <= algebraic multiplicity of λ

If the geometric multiplicity of an eigenvalue is less than its algebraic multiplicity, then we say that the eigenvalue is deficient. However, if A is an n x n matrix with distinct eigenvalues λ1, ..., λk which all have the property that their geometric multiplicity equals their algebraic multiplicity, then the sum of the geometric multiplicities of all eigenvalues equals the sum of the algebraic multiplicities, which equals n (since an n-th degree polynomial has exactly n roots). Hence, if we collect the basis vectors from the eigenspaces of all k eigenvalues, we will end up with n vectors in R^n. We now prove that eigenvectors from eigenspaces of different eigenvalues are necessarily linearly independent, and hence this collection of n eigenvectors will form a basis for R^n.

Theorem 3

Suppose that λ₁, ..., λ_k are distinct (λ_i ≠ λ_j) eigenvalues of an n × n matrix A, with corresponding eigenvectors v₁, ..., v_k, respectively. Then {v₁, ..., v_k} is linearly independent.

Proof: We will prove this theorem by induction. If k = 1, then the result is trivial, since by definition of an eigenvector, v₁ ≠ 0. Assume that the result is true for some k ≥ 1. To show that {v₁, ..., v_k, v_{k+1}} is linearly independent, we consider

c₁v₁ + c₂v₂ + ··· + c_{k+1}v_{k+1} = 0    (6.1)

Observe that since Av_i = λ_i v_i, we have (A - λ_{k+1}I)v_{k+1} = 0 and (A - λ_{k+1}I)v_i = (λ_i - λ_{k+1})v_i. Thus, multiplying both sides of (6.1) by A - λ_{k+1}I gives

c₁(λ₁ - λ_{k+1})v₁ + ··· + c_k(λ_k - λ_{k+1})v_k = 0

By our induction hypothesis, {v₁, ..., v_k} is linearly independent; thus, all the coefficients must be 0. But λ_i ≠ λ_{k+1}; hence, we must have c₁ = ··· = c_k = 0. Thus (6.1) becomes

c_{k+1}v_{k+1} = 0

But v_{k+1} ≠ 0 since it is an eigenvector; hence, c_{k+1} = 0, and the set is linearly independent.

Remark
In this book, most eigenvalues turn out to be integers. This is somewhat unrealistic; in real-world applications, eigenvalues are often not rational numbers. Effective computer methods for finding eigenvalues depend on the theory of eigenvectors and eigenvalues.


PROBLEMS 6.1
Practice Problems

A1 Let A = [2 3; -1 ]. Determine whether the following vectors are eigenvectors of A. If they are, determine the corresponding eigenvalues. Answer without calculating the characteristic polynomial.

(e)


(a)
(c)

[ ]
[ ]

(b)

(d)

[ ; ]
[-26 29]
-75

(f)

--63]

[36

A3 For each of the following matrices, determine the algebraic multiplicity of each eigenvalue and determine the geometric multiplicity of each eigenvalue by writing a basis for its eigenspace.


(a)

A2 Find the eigenvalues and corresponding eigenspaces of the following matrices.

[! ]

(e)

[ -J

(b)

[222 222 2221

(d)

(f)

[ -J
[-2- = 63 ]
-7

[; 3i
1

10

Homework Problems

B1 Let A = [ -1.6 8 ; -1 ]. Determine whether the following vectors are eigenvectors of A. If they are, determine the corresponding eigenvalues. Answer without calculating the characteristic polynomial.

23

[ 53 1
1

(e) 0

[-2 236 61
-

(f)

-1

B3 For each of the following matrices, determine the algebraic multiplicity of each eigenvalue and determine the geometric multiplicity of each eigenvalue by writing a basis for its eigenspace.

[[-; =J


(a)

B2 Find the eigenvalues and corresponding eigenspaces of the following matrices.

-]
;]


(a)
(c)

[ 2
[=

!]
_;]

(c)

(e)

[; ]
[ ]

[ ]

(b)

(d)

(f)

[= =; j]


Computer Problems
C1 Use a computer to determine the eigenvalues and corresponding eigenspaces of the following matrices.
matrices.
(a)

(b)

(c)

[
[l -i]
-

-5
-2

1
3
2
2

2
2

-1.28185
0.76414
-0.83198

2.42918
-0.67401
2.34270

[[ ] [ ] [ ]

C2 Let A

1.21

Verify that

-5
-2
-2

2
2
4
4

2.89316
-0.70562
1.67682

-0.34 ,
0.87

1.31

2.15
-0.21

-1.85

and

are

0.67
2.10

(approximately) eigenvectors of A. Determine the


corresponding eigenvalues.

3
-4
4
4

Conceptual Problems

D1 Suppose that v is an eigenvector of both the matrix A and the matrix B, with corresponding eigenvalue λ for A and corresponding eigenvalue μ for B. Show that v is an eigenvector of (A + B) and of AB. Determine the corresponding eigenvalues.

D2 (a) Show that if λ is an eigenvalue of a matrix A, then λⁿ is an eigenvalue of Aⁿ. How are the corresponding eigenvectors related?
(b) Give an example of a 2 × 2 matrix A such that A has no real eigenvalues, but A³ does have real eigenvalues. (Hint: See Problem 3.3.D4.)

D3 Show that if A is invertible and v is an eigenvector of A, then v is also an eigenvector of A⁻¹. How are the corresponding eigenvalues related?

D4 (a) Let A be an n × n matrix with rank(A) = r < n. Prove that 0 is an eigenvalue of A and determine its geometric multiplicity.
(b) Give an example of a 3 × 3 matrix with rank(A) = r < n such that the algebraic multiplicity of the eigenvalue 0 is greater than its geometric multiplicity.

D5 Suppose that A is an n × n matrix such that the sum of the entries in each row is the same. That is, a_i1 + a_i2 + ··· + a_in = c for all 1 ≤ i ≤ n. Show that v = [1; 1; ...; 1] is an eigenvector of A. (Such matrices arise in probability theory.)
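The claim in Problem D5 is easy to check numerically: if every row of A sums to the same constant c, then multiplying A by the all-ones vector gives c in every coordinate. A quick sketch with an illustrative matrix of our own choosing:

```python
# If each row of A sums to the same constant c, then A times the
# all-ones vector produces c in every coordinate, so [1, ..., 1]
# is an eigenvector of A with eigenvalue c.
A = [[2, 3, 1],
     [4, 1, 1],
     [0, 5, 1]]          # every row sums to c = 6
ones = [1, 1, 1]
Av = [sum(A[i][k] * ones[k] for k in range(3)) for i in range(3)]
print(Av)  # [6, 6, 6], which is 6 times the all-ones vector
```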

6.2 Diagonalization
At the end of the last section, we showed that if the k distinct eigenvalues λ₁, ..., λ_k of an n × n matrix A all had the property that their geometric multiplicity equalled their algebraic multiplicity, then we could find a basis for ℝⁿ of eigenvectors of A by collecting the basis vectors from the eigenspaces of each of the k eigenvalues. We now see that this basis of eigenvectors is extremely useful.

Suppose that A is an n × n matrix for which there is a basis {v₁, ..., v_n} of eigenvectors of A. Let the corresponding eigenvalues be denoted λ₁, ..., λ_n, respectively. If


we let P = [v₁ ··· v_n], then we get

AP = A[v₁ ··· v_n] = [Av₁ ··· Av_n] = [λ₁v₁ ··· λ_n v_n] = [v₁ ··· v_n] [λ₁ 0 ··· 0; 0 λ₂ ··· 0; ...; 0 0 ··· λ_n] = PD

Recall that a square matrix D such that d_ij = 0 for i ≠ j is said to be diagonal and can be denoted by diag(d₁₁, ..., d_nn). Thus, using the fact that P is invertible since the columns of P form a basis for ℝⁿ, we can write AP = PD as

P⁻¹AP = D

Definition
Diagonalizable

If there exists an invertible matrix P and diagonal matrix D such that P⁻¹AP = D, then we say that A is diagonalizable (some people prefer "diagonable") and that the matrix P diagonalizes A to its diagonal form D.

It may be tempting to think that P⁻¹AP = D implies that A = D since P and P⁻¹ are inverses. However, this is not true in general since matrix multiplication is not commutative. Not surprisingly, though, if A and B are matrices such that P⁻¹AP = B for some invertible matrix P, then A and B have many similarities.

Theorem 1

If A and B are n × n matrices such that P⁻¹AP = B for some invertible matrix P, then A and B have

(1) The same determinant
(2) The same eigenvalues
(3) The same rank
(4) The same trace, where the trace of a matrix A is defined by tr A = a₁₁ + a₂₂ + ··· + a_nn

(1) was proved as Problem D4 in Section 5.2. The proofs of (2), (3), and (4) are left as Problems D1, D2, and D3, respectively.
This theorem motivates the following definition.
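Parts (1) and (4) of Theorem 1 are easy to illustrate numerically: forming B = P⁻¹AP leaves both the trace and the determinant unchanged. A sketch with small matrices of our own choosing:

```python
# Verify that B = P^{-1} A P has the same trace and determinant as A.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[4.0, 1.0], [2.0, 3.0]]
P = [[1.0, 1.0], [1.0, -1.0]]
detP = P[0][0] * P[1][1] - P[0][1] * P[1][0]          # = -2
Pinv = [[P[1][1] / detP, -P[0][1] / detP],
        [-P[1][0] / detP, P[0][0] / detP]]             # 2x2 inverse formula
B = matmul(Pinv, matmul(A, P))

tr = lambda M: M[0][0] + M[1][1]
det = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(tr(A), tr(B))    # both 7.0
print(det(A), det(B))  # both 10.0
```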

Definition
Similar Matrices

If A and B are n × n matrices such that P⁻¹AP = B for some invertible matrix P, then A and B are said to be similar.


Thus, from our work above, if there is a basis of ℝⁿ consisting of eigenvectors of A, then A is similar to a diagonal matrix D and so A is diagonalizable. On the other hand, if at least one of the eigenvalues of A is deficient, then A will not have n linearly independent eigenvectors. Hence we will not be able to construct an invertible matrix P whose columns are eigenvectors of A. In this case, we say that A is not diagonalizable. We get the following theorem.

Theorem 2 [Diagonalization Theorem]

An n × n matrix A can be diagonalized if and only if there exists a basis for ℝⁿ of eigenvectors of A. If such a basis {v₁, ..., v_n} exists, the matrix P = [v₁ ··· v_n] diagonalizes A to a diagonal matrix D = diag(λ₁, ..., λ_n), where λ_i is an eigenvalue of A corresponding to v_i for 1 ≤ i ≤ n.

From the Diagonalization Theorem and our work above, we immediately get the following two useful corollaries.

Corollary 3

A matrix A is diagonalizable if and only if every eigenvalue of A has its geometric multiplicity equal to its algebraic multiplicity.

Corollary 4

If an n × n matrix A has n distinct eigenvalues, then A is diagonalizable.

Remark
Observe that it is possible for a matrix A with real entries to have non-real eigenvalues, which will lead to non-real eigenvectors. In this case, there cannot exist a basis for ℝⁿ of eigenvectors of A, and so we will say that A is not diagonalizable over ℝ. In Chapter 9, we will examine the case where complex eigenvalues and eigenvectors are allowed.
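For a 2 × 2 matrix the characteristic polynomial is λ² - (tr A)λ + det A, so real eigenvalues exist exactly when the discriminant is nonnegative; a negative discriminant is the "not diagonalizable over ℝ" situation of the Remark. A minimal sketch (the matrices used are our own illustrations):

```python
import math

def real_eigenvalues_2x2(M):
    """Roots of L^2 - (tr M) L + det M, or None if they are non-real."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = tr * tr - 4 * det
    if disc < 0:
        return None          # complex eigenvalues: not diagonalizable over R
    r = math.sqrt(disc)
    return ((tr + r) / 2, (tr - r) / 2)

print(real_eigenvalues_2x2([[2, 3], [3, 2]]))    # (5.0, -1.0)
print(real_eigenvalues_2x2([[0, 1], [-1, 0]]))   # None, since L^2 + 1 = 0
```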

EXAMPLE 1

Find an invertible matrix P and a diagonal matrix D such that P⁻¹AP = D, where A = [2 3; 3 2].

Solution: We need to find a basis for ℝ² of eigenvectors of A. Hence, we need to find a basis for the eigenspace of each eigenvalue of A. The characteristic polynomial of A is

C(λ) = det(A - λI) = λ² - 4λ - 5 = (λ - 5)(λ + 1)

Hence, the eigenvalues of A are λ₁ = 5 and λ₂ = -1.

For λ₁ = 5, we get A - λ₁I = [-3 3; 3 -3] → [1 -1; 0 0]

So, v₁ = [1; 1] is an eigenvector for λ₁ = 5 and {v₁} is a basis for its eigenspace.

For λ₂ = -1, we get A - λ₂I = [3 3; 3 3] → [1 1; 0 0]

So, v₂ = [1; -1] is an eigenvector for λ₂ = -1 and {v₂} is a basis for its eigenspace.

Thus, {v₁, v₂} is a basis for ℝ², and so if we let P = [v₁ v₂] = [1 1; 1 -1], we get P⁻¹AP = [5 0; 0 -1] = D

EXAMPLE 1
(continued)

Note that we could have instead taken P = [v₂ v₁] = [1 1; -1 1], which would have given P⁻¹AP = [-1 0; 0 5]

EXAMPLE 2

Determine whether A = [0 3 -2; -2 5 -2; -2 3 0] is diagonalizable. If it is, find an invertible matrix P and a diagonal matrix D such that P⁻¹AP = D.

Solution: The characteristic polynomial of A is

C(λ) = det(A - λI) = det [-λ 3 -2; -2 5-λ -2; -2 3 -λ]
     = (-λ)(λ² - 5λ + 6) - 3(2λ - 4) + (-2)(4 - 2λ)
     = -(λ - 2)(λ² - 3λ + 2) = -(λ - 2)(λ - 2)(λ - 1)

Hence, λ₁ = 2 is an eigenvalue with algebraic multiplicity 2 and λ₂ = 1 is an eigenvalue with algebraic multiplicity 1. By Theorem 6.1.2, the geometric multiplicity of λ₂ = 1 must equal 1. Thus, A is diagonalizable if and only if the geometric multiplicity of λ₁ = 2 is 2.

For λ₁ = 2, we get A - λ₁I = [-2 3 -2; -2 3 -2; -2 3 -2] → [-2 3 -2; 0 0 0; 0 0 0]. Thus, a basis for the eigenspace is {[3; 2; 0], [-1; 0; 1]}. Hence, the geometric multiplicity of λ₁ = 2 equals its algebraic multiplicity. By Corollary 3, we see that A is diagonalizable. So, we also need to find a basis for the eigenspace of λ₂ = 1.

For λ₂ = 1, we get A - λ₂I = [-1 3 -2; -2 4 -2; -2 3 -1] → [1 0 -1; 0 1 -1; 0 0 0]. Therefore, {[1; 1; 1]} is a basis for the eigenspace.

So, we can take P = [3 -1 1; 2 0 1; 0 1 1] and get P⁻¹AP = [2 0 0; 0 2 0; 0 0 1] = D
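A diagonalization claim P⁻¹AP = D can be verified without inverting P by checking AP = PD entry by entry. A sketch using a 3 × 3 matrix with a repeated eigenvalue, in the spirit of Example 2 (the specific matrices below are stated explicitly so the check is self-contained):

```python
# Check AP = PD, which is equivalent to P^{-1} A P = D when P is invertible.
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0, 3, -2], [-2, 5, -2], [-2, 3, 0]]   # eigenvalues 2, 2, 1
P = [[3, -1, 1], [2, 0, 1], [0, 1, 1]]      # columns are eigenvectors
D = [[2, 0, 0], [0, 2, 0], [0, 0, 1]]

print(matmul(A, P) == matmul(P, D))  # True
```

Because the check uses integer arithmetic only, the equality is exact.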

EXERCISE 1

Diagonalize A=U - =il

EXAMPLE 3

Is the matrix A = [-1 7 -5; -4 11 -6; -4 8 -3] diagonalizable?

Solution: The characteristic polynomial is

C(λ) = det(A - λI) = det [-1-λ 7 -5; -4 11-λ -6; -4 8 -3-λ]
     = det [-1-λ 7 -5; -4 11-λ -6; 0 -3+λ 3-λ]
     = (-3 + λ)(-1)(6λ + 6 - 20) + (3 - λ)(λ² - 10λ - 11 + 28)
     = -(λ - 3)(λ² - 4λ + 3) = -(λ - 3)(λ - 3)(λ - 1)

Thus, λ₁ = 3 is an eigenvalue with algebraic multiplicity 2, and λ₂ = 1 is an eigenvalue with algebraic multiplicity 1.

For λ₁ = 3, we get A - λ₁I = [-4 7 -5; -4 8 -6; -4 8 -6] → [1 0 -1/2; 0 1 -1; 0 0 0]. Thus, a basis for the eigenspace is {[1; 2; 2]}. Hence, the geometric multiplicity of λ₁ = 3 is 1, which is less than its algebraic multiplicity, and so A is not diagonalizable by Corollary 3.

EXERCISE 2

Show that A = [ ] is not diagonalizable.

EXAMPLE 4

Show that the matrix A = [0 1; -1 0] is not diagonalizable over ℝ.

Solution: The characteristic polynomial is

C(λ) = det(A - λI) = det [-λ 1; -1 -λ] = λ² + 1

Since λ² + 1 = 0 has no real solutions, the matrix A has no real eigenvalues and hence is not diagonalizable over ℝ.

Some Applications of Diagonalization

A geometrical application of diagonalization occurs when we try to picture the graph of a quadratic equation in two variables, such as ax² + 2bxy + cy² = d. It turns out that we should consider the associated matrix [a b; b c]. By diagonalizing this matrix, we can easily recognize the graph as an ellipse, a hyperbola, or perhaps some degenerate case. This problem will be discussed in Section 8.3.



A physical application related to these geometrical applications is the analysis of
the deformation of a solid. Imagine, for example, a small steel block that experiences
a small deformation when some forces are applied. The change of shape in the block
can be described in terms of a 3 x 3 strain matrix. This matrix can always be diago
nalized, so it turns out that we can identify the change of shape as the composition of
three stretches along mutually orthogonal directions. This application is discussed in
Section 8.4.
Diagonalization is also an important tool for studying systems of linear difference
equations, which arise in many settings. Consider, for example, a population that is
divided into two groups; we count these two groups at regular intervals (say, once a

month) so that at every time n, we have a vector jJ =

[]

that tells us how many are

in each group. For some situations, the change from month to month can be described
by saying that the vector jJ changes according to the rule
jJ(n + 1) =AjJ(n)

whereA is some known 2 x 2 matrix. It follows that p(n)= N p(O). We are often inter
ested in understanding what happens to the population "in the long run." This requires
us to calculateA11 for n large. This problem is easy to deal with if we can diagonal
izeA. Particular examples of this kind are Markov processes, which are discussed in
Section 6.3.
One very important application of diagonalization and the related idea of eigen
vectors is the solution of systems of linear differential equations. This application is
discussed in Section 6.4.
In Section 4.6 we saw that if L : JR.11 JR.11 is a linear transformation, then its matrix
with respect to the basis '13 is determined from its standard matrix by the equation
[L21
] = P-1[L]sP

where P = v1
v11 is the change of basis matrix. Examples 5 and 7 in
Section 4.6 show that we can more easily give a geometrical interpretation of a lin
ear mapping L if there is a basis '13 such that [L]21 is in diagonal form. Hence, our
diagonalization process is a method for finding such a geometrically natural basis. In
particular, if the standard matrix of Lis diagonalizable, then the basis for JR.11 of eigen
vectors forms the geometrically natural basis.

PROBLEMS 6.2
Practice Problems

A1 By checking whether columns of P are eigenvectors of A, determine whether P diagonalizes A. If it does, determine P⁻¹ and check that P⁻¹AP is diagonal.

(a)A=
(b)A=

[ ]
[ -l
11
9

6
4

'

[ ]
[ n

P=
p=

(c) A=
(d) A=


-87]' [ ]
- ]
[: 2 2 n - :J

4 4
4

p=

4,

(f) A=

P=

A2 For the following matrices, determine the eigenvalues and corresponding eigenvectors and determine whether each matrix is diagonalizable over ℝ. If it is diagonalizable, give a matrix P and diagonal matrix D such that P⁻¹AP = D.

(a) A=

(b) A=
(c) A=
(d) A=
(e) A=

[ ]
[-2 3]
[- -]
4

(g) A=

(a) A=

(b) A=

(d) A=

[1 1 11
[- -17- -8-1

(e) A=

(f) A=

-6

6
-4

-3

A3 Follow the same instructions as for Problem A2.

(c) A=

-3

-2- 72 1]
[ 21
-[ 1
12 81

(g) A=

305

[ ; ]
[: :]
[-; -]
_

A2.

[-2- 2: -=!]2
[2 2 1
[-1- -2 i
[ -:1
0

Homework Problems

B1 By checking whether columns of P are eigenvectors of A, determine whether P diagonalizes A. If it does, determine P⁻¹ and check that P⁻¹AP is diagonal.

[- n [-1 n
[ n [ - ]
[ n [ - ]
7 2

(a) A=

(b) A=
(c) A=
(d) A

p=

[-188 -1 111 [-! -12 -2: l


-6

P=

B2 For the following matrices, determine the eigenvalues and corresponding eigenvectors and determine whether each matrix is diagonalizable over ℝ. If it is diagonalizable, give a matrix P and diagonal matrix D such that P⁻¹AP = D.

(a) A = [-2 ;]
[ ]
[ ;]

(b) A=
(c) A=
(d) A=
(e) A=

[ 3 -2 ]
[- -22 -]
6
6

4
-9


<DA=

(g) A=

H -i
H -;1
-1
2
-3

-3

-4

-2
3

(e) A=

B3 Follow the same instructions as for Problem B2.


(a) A=
(b) A=
(c) A=

[= ]
[ =1
[ ; 1
[ - -]
2

(d) A=

[ 1
[ -1
[ ! -1

A=

(f)

A=

-1

-2

Conceptual Problems

D1 Prove that if A and B are similar, then A and B have the same eigenvalues.

D2 Prove that if A and B are similar, then A and B have the same rank.

D3 (a) Let A and B be n × n matrices. Prove that tr AB = tr BA.
(b) Use the result of part (a) to prove that if A and B are similar, then tr A = tr B.

D4 (a) Suppose that P diagonalizes A and that the diagonal form is D. Show that A = PDP⁻¹.
(b) Use the result of part (a) and properties of eigenvectors to calculate a matrix that has eigenvalues 2 and 3 with corresponding eigenvectors [ ] and [ ], respectively.
(c) Determine a matrix that has eigenvalues 2, -2, and 3, with corresponding eigenvectors [ ], [ ], and [ ], respectively.

D5 (a) Suppose that P diagonalizes A and that the diagonal form is D. Show that Aᵏ = PDᵏP⁻¹.
(b) Use the result of part (a) to calculate A⁵, where A = [ ] is the matrix from Problem A2 (g).

D6 (a) Suppose that A is diagonalizable. Prove that tr A is equal to the sum of the eigenvalues of A (including repeated eigenvalues) by using Theorem 1.
(b) Use the result of part (a) to determine, by inspection, the algebraic and geometric multiplicities of all of the eigenvalues of A = [a b; b a].

D7 (a) Suppose that A is diagonalizable. Prove that det A is equal to the product of the eigenvalues of A (repeated according to their multiplicity) by considering P⁻¹AP.
(b) Show that the constant term in the characteristic polynomial is det A. (Hint: How do you find the constant term in any polynomial p(λ)?)
(c) Without assuming that A is diagonalizable, show that det A is equal to the product of the roots of the characteristic equation of A (including any repeated roots and complex roots). (Hint: Consider the constant term in the characteristic equation and the factored version of that equation.)

D8 Let A be an n × n matrix. Prove that A is invertible if and only if A does not have 0 as an eigenvalue. (Hint: See Problem D7.)

D9 Suppose that A is diagonalized by the matrix P and that the eigenvalues of A are λ₁, ..., λ_n. Show that the eigenvalues of (A - λ₁I) are 0, λ₂ - λ₁, λ₃ - λ₁, ..., λ_n - λ₁. (Hint: A - λ₁I is diagonalized by P.)

6.3 Powers of Matrices and the Markov Process
and the Markov Process
In some applications of linear algebra, it is necessary to calculate powers of a matrix. If the matrix A is diagonalized by P to the diagonal matrix D, it follows from D = P⁻¹AP that A = PDP⁻¹, and then for any positive integer m we get

Aᵐ = (PDP⁻¹)ᵐ = (PDP⁻¹)(PDP⁻¹)···(PDP⁻¹) = PD(P⁻¹P)D(P⁻¹P)···(P⁻¹P)DP⁻¹ = PDᵐP⁻¹

Thus, knowledge of the eigenvalues of A and the theory of diagonalization should be valuable tools in these applications. One such application is the study of Markov processes. After discussing Markov processes, we turn the question around and show how the "power method" uses powers of a matrix A to determine an eigenvalue of A. (This is an important step in the Google PageRank algorithm.) We begin with an example of a Markov process.
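The identity Aᵐ = PDᵐP⁻¹ reduces a matrix power to n scalar powers. A sketch comparing it against direct repeated multiplication (the 2 × 2 matrix is our own illustration, with eigenvalues 5 and -1):

```python
# Compute A^5 two ways: by repeated multiplication and as P D^5 P^{-1}.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2.0, 3.0], [3.0, 2.0]]                 # eigenvalues 5 and -1
P = [[1.0, 1.0], [1.0, -1.0]]                # columns are eigenvectors
Pinv = [[0.5, 0.5], [0.5, -0.5]]             # inverse of P
D5 = [[5.0 ** 5, 0.0], [0.0, (-1.0) ** 5]]   # D^5 = diag(5^5, (-1)^5)

power = A
for _ in range(4):                           # A^5 by repeated multiplication
    power = matmul(power, A)

via_diag = matmul(P, matmul(D5, Pinv))
print(power)     # [[1562.0, 1563.0], [1563.0, 1562.0]]
print(via_diag)  # the same matrix
```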

EXAMPLE 1

Smith and Jones are the only competing suppliers of communication services in their community. At present, they each have a 50% share of the market. However, Smith has recently upgraded his service, and a survey indicates that from one month to the next, 90% of Smith's customers remain loyal, while 10% switch to Jones. On the other hand, 70% of Jones's customers remain loyal and 30% switch to Smith. If this goes on for six months, how large are their market shares? If this goes on for a long time, how big will Smith's share become?

Solution: Let S_m be Smith's market share (as a decimal) at the end of the m-th month and let J_m be Jones's share. Then S_m + J_m = 1, since between them they have 100% of the market. At the end of the (m + 1)-st month, Smith has 90% of his previous customers and 30% of Jones's previous customers, so

S_{m+1} = 0.9 S_m + 0.3 J_m

Similarly,

J_{m+1} = 0.1 S_m + 0.7 J_m

We can rewrite these equations in matrix-vector form:

[S_{m+1}; J_{m+1}] = [0.9 0.3; 0.1 0.7] [S_m; J_m]

The matrix T = [0.9 0.3; 0.1 0.7] is called the transition matrix for this problem: it describes the transition (change) from the state [S_m; J_m] at time m to the state [S_{m+1}; J_{m+1}] at time m + 1. Then we have answers to the questions if we can determine Tᵐ [0.5; 0.5] for m = 6 and for m large.

To answer the first question, we might compute T⁶ directly. For the second question, this approach is not reasonable, and instead we diagonalize. We find that λ₁ = 1

EXAMPLE 1
(continued)

is an eigenvalue of T with eigenvector v₁ = [3; 1], and λ₂ = 0.6 is the other eigenvalue, with eigenvector v₂ = [1; -1]. Thus,

Tᵐ = PDᵐP⁻¹ = [3 1; 1 -1] [1 0; 0 (0.6)ᵐ] (1/4)[1 1; 1 -3]

We could now answer our question directly, but we get a simpler calculation if we observe that the eigenvectors form a basis, so we can write

[S₀; J₀] = c₁v₁ + c₂v₂

Then,

[c₁; c₂] = P⁻¹[S₀; J₀] = (1/4)[S₀ + J₀; S₀ - 3J₀]

Then, by linearity,

[S_m; J_m] = c₁Tᵐv₁ + c₂Tᵐv₂ = c₁λ₁ᵐv₁ + c₂λ₂ᵐv₂ = (1/4)(S₀ + J₀)[3; 1] + (1/4)(S₀ - 3J₀)(0.6)ᵐ[1; -1]

Now S₀ = J₀ = 0.5. When m = 6, (1/4)(0.6)⁶ ≈ 0.0117 and

[S₆; J₆] = (1/4)[3; 1] - (0.0117)[1; -1] ≈ [0.738; 0.262]

Thus, after six months, Smith has approximately 73.8% of the market.

When m is very large, (0.6)ᵐ is nearly zero, so for m large enough (m → ∞), we have

[S_m; J_m] ≈ (1/4)(S₀ + J₀)[3; 1] = [0.75; 0.25]

since S₀ + J₀ = 1. Thus S = 0.75 and J = 0.25. Thus, in this problem, Smith's share approaches 75% as m gets large, but it never gets larger than 75%. Now look carefully: we get the same answer in the long run, no matter what the initial values of S₀ and J₀ are, because (0.6)ᵐ → 0 and S₀ + J₀ = 1.
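Example 1's answer can be confirmed by simply iterating the transition matrix on the initial state; a minimal sketch:

```python
# Iterate s(m+1) = T s(m) for the Smith/Jones transition matrix,
# starting from a 50/50 market split.
T = [[0.9, 0.3],
     [0.1, 0.7]]
s = [0.5, 0.5]

history = []
for _ in range(100):
    s = [T[0][0] * s[0] + T[0][1] * s[1],
         T[1][0] * s[0] + T[1][1] * s[1]]
    history.append(s)

print(history[5])   # state after 6 months: about [0.7383, 0.2617]
print(history[-1])  # long-run state: about [0.75, 0.25]
```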


By emphasizing some features of Example 1, we will be led to an important definition and several general properties:

(1) Each column of T has sum 1. This means that all of Smith's customers show up a month later as customers of Smith or Jones; the same is true for Jones's customers. No customers are lost from the system and none are added after the process begins.

(2) It is natural to interpret the entries t_ij as probabilities. For example, t₁₁ = 0.9 is the probability that a Smith customer remains a Smith customer, with t₂₁ = 0.1 as the probability that a Smith customer becomes a Jones customer. If we consider "Smith customer" as "state 1" and "Jones customer" as "state 2," then t_ij is the probability of transition from state j to state i between time m and time m + 1.

(3) The "initial state vector" is [S₀; J₀]; Tᵐ[S₀; J₀] is the state vector at time m.

(4) Note that

[S₁; J₁] = T[S₀; J₀] = S₀[0.9; 0.1] + J₀[0.3; 0.7]

Since t₁₁ + t₂₁ = 1 and t₁₂ + t₂₂ = 1, it follows that S₁ + J₁ = S₀ + J₀. Thus, it follows from (1) that each state vector has the same column sum. In our example, S₀ and J₀ are decimal fractions, so S₀ + J₀ = 1, but we could consider a process whose states have some other constant column sum.

(5) Note that 1 is an eigenvalue of T with eigenvector [3; 1]. To get a state vector with the appropriate sum, we take the eigenvector to be [3/4; 1/4]. Thus,

T[3/4; 1/4] = [3/4; 1/4]

and the state vector [3/4; 1/4] is fixed or invariant under the transformation with matrix T. Moreover, this fixed vector is the limiting state approached by Tᵐ[S₀; J₀] for any [S₀; J₀].

The following definition captures the essential properties of this example.

Definition
Markov Matrix
Markov Process

An n × n matrix T is the Markov matrix (or transition matrix) of an n-state Markov process if

(1) t_ij ≥ 0, for each i and j.
(2) Each column sum is 1: t_1j + t_2j + ··· + t_nj = 1 for each j.

We take possible states of the process to be the vectors s such that s_i ≥ 0 for each i, and s₁ + ··· + s_n = 1.

Remark
With minor changes, we could develop the theory with s₁ + ··· + s_n = constant.

EXAMPLE 2

The matrix [0.9 0.3; 0.1 0.5] is not a Markov matrix since the sum of the entries in the second column does not equal 1.

EXAMPLE 3

Find the fixed-state vector for the Markov matrix A = [0.1 0.3; 0.9 0.7].

Solution: We know the fixed-state vector is an eigenvector for the eigenvalue λ = 1. We have

A - I = [-0.9 0.3; 0.9 -0.3] → [1 -1/3; 0 0]

Therefore, an eigenvector corresponding to λ = 1 is [1; 3]. The components in the state vector must sum to 1, so the invariant state is [1/4; 3/4]. It is easy to verify that

[0.1 0.3; 0.9 0.7] [1/4; 3/4] = [1/4; 3/4]
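For a 2 × 2 Markov matrix the fixed state can be read off in closed form: (T - I)s = 0 forces s proportional to [t₁₂, t₂₁], which is then rescaled to sum to 1. A sketch, with the helper function being our own convenience (not from the text):

```python
def fixed_state_2x2(T):
    """Invariant state of a 2x2 Markov matrix (columns sum to 1)."""
    # (T - I)s = 0 gives s proportional to [t12, t21]; rescale to sum 1.
    t12, t21 = T[0][1], T[1][0]
    total = t12 + t21
    return [t12 / total, t21 / total]

A = [[0.1, 0.3],
     [0.9, 0.7]]
s = fixed_state_2x2(A)
print(s)  # approximately [0.25, 0.75], the invariant state of Example 3
```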

EXERCISE 1

Determine which of the following matrices is a Markov matrix. Find the fixed-state vector of the Markov matrix.

(a) A = [0.5 0.6; 0.4 0.5]
(b) B = [0.4 0.6; 0.6 0.4]

The goal with the Markov process is to establish the behaviour of a sequence with states s, Ts, T²s, ..., Tᵐs. If possible, we want to say something about the limit of Tᵐs as m → ∞. As we saw in Example 1, diagonalization of T is a key to solving the problem. It is beyond the scope of this book to establish all the properties of the Markov process, but some of the properties are easy to prove, and others are easy to illustrate if we make extra assumptions.

PROPERTY 1. One eigenvalue of a Markov matrix is λ₁ = 1.

Proof: Since each column of T has sum 1, each column of (T - I) has sum 0. Hence, the sum of the rows of (T - I) is the zero vector. Thus the rows are linearly dependent, and (T - I) has rank less than n, so det(T - I) = 0. Therefore, 1 is an eigenvalue of T.

PROPERTY 2. The eigenvector s* for λ₁ = 1 has s*_j ≥ 0 for 1 ≤ j ≤ n.

This property is important because it means that the eigenvector s* is a real state of the process. In fact, it is a fixed or invariant state: Ts* = s*.

PROPERTY 3. All other eigenvalues satisfy |λ_i| ≤ 1.

To see why we expect this, let us assume that T is diagonalizable, with distinct eigenvalues 1, λ₂, ..., λ_n and corresponding eigenvectors s*, s₂, ..., s_n. Then any initial state s can be written

s = c₁s* + c₂s₂ + ··· + c_n s_n

It follows that

Tᵐs = c₁s* + c₂λ₂ᵐs₂ + ··· + c_nλ_nᵐs_n

If any |λ_i| > 1, the term |λ_iᵐ| would become much larger than the other terms when m is large; it would follow that Tᵐs has some coordinates with magnitude greater than 1. This is impossible because state coordinates satisfy 0 ≤ s_i ≤ 1, so we must have |λ_i| ≤ 1.

PROPERTY 4. Suppose that for some m, all the entries in Tᵐ are not zero. Then all the eigenvalues of T except for λ₁ = 1 satisfy |λ_i| < 1. In this case, for any initial state s, Tᵐs → s* as m → ∞: all states tend to the invariant state s* under the process.

Notice that in the diagonalizable case, the fact that Tᵐs → s* follows from the expression for Tᵐs given under Property 3.

EXERCISE 2

The Markov matrix T = [0 1; 1 0] has eigenvalues 1 and -1; it does not satisfy the conclusion of Property 4. However, it also does not satisfy the extra assumption of Property 4. It is worthwhile to explore this "bad" case.

Let s = [s₁; s₂]. Determine the behaviour of the sequence s, Ts, T²s, .... What is the fixed-state vector for T?

Systems of Linear Difference Equations

If A is an n × n matrix and s(m) is a vector for each positive integer m, then the matrix-vector equation

s(m + 1) = As(m)

may be regarded as a system of n linear first-order difference equations, describing the coordinates s₁, s₂, ..., s_n at time m + 1 in terms of those at time m. They are "first-order difference" equations because they involve only one time difference from m to m + 1; the Fibonacci equation s(m + 1) = s(m) + s(m - 1) is a second-order difference equation.
Markov processes form a special class of this large class of systems of linear
difference equations, but there are applications that do not fit the Markov assumptions.
For example, in population models, we might wish to consider deaths (so that some
column sums of A would be less than 1) or births, or even multiple births (so that some
entries in A would be greater than 1). Similar considerations apply to some economic
models, which are represented by matrix models. A proper discussion of such models
requires more theory than is developed in this book.
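The Fibonacci equation just mentioned becomes a first-order system by stacking two consecutive terms into a state vector: [s(m+1); s(m)] = [1 1; 1 0] [s(m); s(m-1)]. A minimal sketch:

```python
# Rewrite s(m+1) = s(m) + s(m-1) as the first-order system x(m+1) = A x(m)
# with state x(m) = [s(m+1), s(m)].
A = [[1, 1],
     [1, 0]]
x = [1, 0]          # [s(1), s(0)] for the Fibonacci numbers 0, 1, 1, 2, ...

for _ in range(9):  # advance the state nine times
    x = [A[0][0] * x[0] + A[0][1] * x[1],
         A[1][0] * x[0] + A[1][1] * x[1]]

print(x)  # [55, 34], i.e. [s(10), s(9)]
```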

The Power Method of Determining Eigenvalues

Practical applications of eigenvalues often involve larger matrices with non-integer entries. Such problems often require efficient computer methods for determining eigenvalues. A thorough discussion of such methods is beyond the scope of this book, but we can indicate how powers of matrices provide one tool for finding eigenvalues.

Let A be an n × n matrix. To simplify the discussion, we suppose that A has n distinct real eigenvalues λ₁, ..., λ_n, with corresponding eigenvectors v₁, ..., v_n. We suppose that |λ₁| > |λ_i| for 2 ≤ i ≤ n. We call λ₁ the dominant eigenvalue. Since {v₁, ..., v_n} will form a basis for ℝⁿ, any vector x ∈ ℝⁿ can be written

x = c₁v₁ + ··· + c_n v_n

Then

Ax = c₁λ₁v₁ + ··· + c_nλ_n v_n

and

Aᵐx = c₁λ₁ᵐv₁ + ··· + c_nλ_nᵐv_n

For m large, |λ₁ᵐ| is much greater than all other terms. If we divide by c₁λ₁ᵐ, then all terms on the right-hand side will be negligibly small except for v₁, so we will be able to identify v₁. By calculating Av₁, we determine λ₁.

To make this into an effective procedure, we must control the size of the vectors: if λ₁ > 1, then λ₁ᵐ → ∞ as m gets large, and the procedure would break down. Similarly, if all eigenvalues are between 0 and 1, then Aᵐx → 0, and the procedure would fail. To avoid these problems, we normalize the vector at each step (that is, convert it to a vector of length 1).

The procedure is as follows.

Algorithm 1

Guess x₀; normalize y₀ = x₀/‖x₀‖
x₁ = Ay₀; normalize y₁ = x₁/‖x₁‖
x₂ = Ay₁; normalize y₂ = x₂/‖x₂‖
and so on.

We seek convergence of y_m to some limiting vector; if such a vector exists, it must be v₁, the eigenvector for the largest eigenvalue λ₁. This procedure is illustrated in the following example, which is simple enough that you can check the calculations.

EXAMPLE 4

Determine the eigenvalue of largest absolute value for the matrix A = [13 6; -12 -5] by using the power method.

Solution: Choose any starting vector and let x₀ = [1; 1]. Then

y₀ = x₀/‖x₀‖ = (1/√2)[1; 1] = [0.707; 0.707]
x₁ = Ay₀ = [13 6; -12 -5] [0.707; 0.707] = [13.44; -12.02],  y₁ = [0.745; -0.667]
x₂ = Ay₁ = [5.683; -5.605],  y₂ = [0.712; -0.702]
x₃ = Ay₂ = [5.044; -5.034],  y₃ = [0.7078; -0.7063]
x₄ = Ay₃ = [4.9636; -4.9621],  y₄ = [0.7072; -0.7070]

At this point, we judge that y_m → v₁ = [0.707; -0.707], so v₁ is an eigenvector of A, and Av₁ = λ₁v₁ gives the corresponding dominant eigenvalue λ₁ = 7. (The answer is easy to check by using standard methods.)

Many questions arise with the power method. What if we poorly choose the initial vector? If we choose x₀ in the subspace spanned by all eigenvectors of A except v₁, the method will fail to give v₁. How do we decide when to stop repeating the steps of the procedure? For a computer version of the algorithm, it would be important to have tests to decide that the procedure has converged (or that it will never converge). Once we have determined the dominant eigenvalue of A, how can we determine other eigenvalues? If A is invertible, the dominant eigenvalue of A⁻¹ would give the reciprocal of the eigenvalue of A with the smallest absolute value. Another approach is to observe that if one eigenvalue λ_i is known, then eigenvalues of A - λ_iI will give us information about eigenvalues of A. (See Problem 6.2.D9.)
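Algorithm 1 translates directly into code; applied to the matrix of Example 4 it recovers the dominant eigenvalue 7. A sketch (the stopping rule, a fixed iteration count, is our own simplification):

```python
import math

def power_method(A, x0, steps=50):
    """Estimate the dominant eigenvalue of a 2x2 matrix A (Algorithm 1)."""
    y = x0
    for _ in range(steps):
        norm = math.sqrt(y[0] ** 2 + y[1] ** 2)
        y = [y[0] / norm, y[1] / norm]                  # normalize
        y = [A[0][0] * y[0] + A[0][1] * y[1],           # multiply by A
             A[1][0] * y[0] + A[1][1] * y[1]]
    # y is now (approximately) lambda_1 times a unit eigenvector,
    # so its length estimates |lambda_1|.
    return math.sqrt(y[0] ** 2 + y[1] ** 2)

A = [[13, 6],
     [-12, -5]]
print(power_method(A, [1, 1]))  # close to 7, the dominant eigenvalue
```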


PROBLEMS 6.3
Practice Problems

A1 Determine which of the following matrices are Markov matrices. For each Markov matrix, determine the invariant or fixed state (corresponding to the eigenvalue λ = 1).
(a)–(c) [the matrix entries are not legible in the source]
(d) [0.9  0.0  0.1; 0.1  0.9  0.0; 0.0  0.1  0.9]

A2 Suppose that census data show that every decade, 15% of people dwelling in rural areas move into towns and cities, while 5% of urban dwellers move into rural areas.
(a) What would be the eventual steady-state population distribution?
(b) If the population were 50% urban, 50% rural at some census, what would be the distribution after 50 years?

A3 A car rental company serving one city has three locations: the airport, the train station, and the city centre. Of the cars rented at the airport, 8/10 are returned to the airport, 1/10 are left at the train station, and 1/10 are left at the city centre. Of cars rented at the train station, 3/10 are left at the airport, 6/10 are returned to the train station, and 1/10 are left at the city centre. Of cars rented at the city centre, 3/10 go to the airport, 1/10 go to the train station, and 6/10 are returned to the city centre. Model this as a Markov process and determine the steady-state distribution for the cars.

A4 To see how the power method works, use it to determine the largest eigenvalue of the given matrix, starting with the given initial vector. (You will need a calculator or computer.)
(a), (b) [the matrices and initial vectors are not legible in the source]
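For problems like A3, the steady state of a Markov matrix can be checked numerically by repeated multiplication. A sketch in Python; the transition matrix below encodes A3's car-rental data, with column j giving where cars rented at location j end up (our chosen ordering: airport, train station, city centre).

```python
import numpy as np

# Transition matrix for the car-rental process in Problem A3.
# Columns sum to 1: every rented car ends up somewhere.
T = np.array([[0.8, 0.3, 0.3],
              [0.1, 0.6, 0.1],
              [0.1, 0.1, 0.6]])

state = np.array([1/3, 1/3, 1/3])   # any starting distribution works
for _ in range(200):                 # iterate until the state stops changing
    state = T @ state
# 'state' is now (very nearly) the fixed state satisfying T @ state == state
```

Because the other eigenvalues of a regular Markov matrix have absolute value less than 1, the iterates converge to the invariant state regardless of the starting distribution.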

Homework Problems

B1 Determine which of the following matrices are Markov matrices. For each Markov matrix, determine the invariant or fixed state (corresponding to the eigenvalue λ = 1).
(a)–(d) [the matrix entries are not legible in the source]

B2 The town of Markov Centre has only two suppliers of widgets, Johnson and Thomson. All inhabitants buy their supply on the first day of each month. Neither supplier is very successful at keeping customers. 70% of the customers who deal with Johnson decide that they will "try the other guy" next time. Thomson does even worse: only 20% of his customers come back the next month, and the rest go to Johnson.
(a) Model this as a Markov process and determine the steady-state distribution of customers.
(b) Determine a general expression for Johnson and Thomson's shares of the customers, given an initial state where Johnson has 25% and Thomson has 75%.

B3 A student society at a large university campus decides to create a pool of bicycles that can be used by the members of the society. Bicycles can be borrowed or returned at the residence, the library, or the athletic centre. The first day, 200 marked bicycles are left at each location. At the end of the day, at the residence, there are 160 bicycles that started at the residence, 40 that started at the library, and 60 that started at the athletic centre. At the library, there are 20 that started at the residence, 140 that started at the library, and 40 that started at the athletic centre. At the athletic centre, there are 20 that started at the residence, 20 that started at the library, and 100 that started at the athletic centre. If this pattern is repeated every day, what is the steady-state distribution of bicycles?

B4 Use the power method with the given initial vector [not legible in the source] to determine the dominant eigenvalue of

[3.5  4.5; 4.5  3.5]

Show your calculations clearly.

Computer Problems

C1 Use the power method with the given initial vector [not legible in the source] to determine the dominant eigenvalue of the matrix

[2.89316  -1.28185  -0.70562; 0.76414  1.67682  -0.83198; 2.42918  -0.67401  2.34270]

You may do this by using software that includes matrix operations or by writing a program to carry out the procedure.

Conceptual Problems

D1 (a) Let T be the transition matrix for a two-state Markov process. Show that the eigenvalue that is not 1 is λ2 = t11 + t22 − 1.
(b) For a two-state Markov process with t21 = a and t12 = b, show that the fixed state (eigenvector for λ = 1) is (1/(a + b))[b; a].

D2 Suppose that T is a Markov matrix.
(a) Show that for any state x, Σ(k=1 to n) (Tx)_k = Σ(k=1 to n) x_k.
(b) Show that if v is an eigenvector of T with eigenvalue λ ≠ 1, then Σ(k=1 to n) v_k = 0.

6.4 Diagonalization and Differential Equations

This section requires knowledge of the exponential function, its derivative, and first-order linear differential equations. The ideas are not used elsewhere in this book.

Consider two tanks, Y and Z, each containing 1000 litres of a salt solution. At the initial time, t = 0 (in hours), the concentration of salt in tank Y is different from the concentration in tank Z. In each tank the solution is well stirred, so that the concentration is constant throughout the tank. The two tanks are joined by pipes; through one pipe, solution is pumped from Y to Z at a rate of 20 L/h; through the other, solution is


pumped from Z to Y at the same rate. The problem is to determine the amount of salt in each tank at time t.

Let y(t) be the amount of salt (in kilograms) in tank Y at time t, and let z(t) be the amount in tank Z at time t. Then the concentration in Y at time t is (y/1000) kg/L. Similarly, (z/1000) kg/L is the concentration in Z. Then for tank Y, salt is flowing out through one pipe at a rate of (20)(y/1000) kg/h and in through the other pipe at a rate of (20)(z/1000) kg/h. Since the rate of change is measured by the derivative, we have

dy/dt = -(20)(y/1000) + (20)(z/1000) = -0.02y + 0.02z

By consideration of Z, we get a second differential equation, so y and z are the solutions of the system of linear ordinary differential equations:

dy/dt = -0.02y + 0.02z
dz/dt = 0.02y - 0.02z

It is convenient to rewrite this system in the form

d/dt [y; z] = [-0.02  0.02; 0.02  -0.02] [y; z]

How can we solve this system? Well, it might be easier if we could change variables so that the matrix is diagonalized. By standard methods, one eigenvalue of

A = [-0.02  0.02; 0.02  -0.02]

is λ1 = 0, with corresponding eigenvector v1 = [1; 1]. The other eigenvalue is λ2 = -0.04, with corresponding eigenvector v2 = [-1; 1]. Hence, A is diagonalized by

P = [1  -1; 1  1],  with  P⁻¹ = (1/2)[1  1; -1  1]

Introduce new coordinates [y*; z*] by the change of coordinates equation

[y; z] = P [y*; z*]

as in Section 4.6. Substitute this on both sides of the system to obtain

d/dt ( P [y*; z*] ) = A P [y*; z*]

Since the entries in P are constants, it is easy to check that

d/dt ( P [y*; z*] ) = P d/dt [y*; z*]

Multiply both sides of the system of equations (on the left) by P⁻¹. Since P diagonalizes A, we get

d/dt [y*; z*] = P⁻¹AP [y*; z*] = [0  0; 0  -0.04] [y*; z*]

Now write the pair of equations:

dy*/dt = 0  and  dz*/dt = -0.04z*

These equations are "decoupled," and we can easily solve each of them by using simple one-variable calculus.

The only functions satisfying dy*/dt = 0 are constants: we write y*(t) = a, where a is a constant. The only functions satisfying an equation of the form dx/dt = kx are exponentials of the form x(t) = ce^(kt) for a constant c. So, from dz*/dt = -0.04z*, we obtain z*(t) = be^(-0.04t), where b is a constant.

Now we need to express the solution in terms of the original variables y and z:

[y; z] = P [y*; z*] = [1  -1; 1  1] [a; be^(-0.04t)] = [a - be^(-0.04t); a + be^(-0.04t)]

For later use, it is helpful to rewrite this as

[y; z] = a[1; 1] + be^(-0.04t)[-1; 1]

This is the general solution of the problem. To determine the constants a and b, we would need to know the amounts y(0) and z(0) at the initial time t = 0. Then we would know y and z for all t. Note that as t → ∞, y and z tend to a common value a, as we might expect.

A Practical Solution Procedure


The usual solution procedure takes advantage of the understanding obtained from this diagonalization argument, but it takes a major shortcut. Now that the expected form of the solution is known, we simply look for a solution of the form

[y; z] = ce^(λt) v

Substitute this into the original system and use the fact that

d/dt ( ce^(λt) v ) = λce^(λt) v

After the common factor ce^(λt) is cancelled, this tells us that Av = λv, so v is an eigenvector of A, with eigenvalue λ. We find the two eigenvalues λ1 and λ2 and the corresponding eigenvectors v1 and v2, as above. Observe that since our problem is a linear homogeneous problem, the general solution will be an arbitrary linear combination of the two solutions e^(λ1 t)v1 and e^(λ2 t)v2. This matches the general solution we found above.
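This shortcut is easy to mechanize. A sketch in Python, using NumPy's eigenvalue routine on the tank matrix from this section (the coefficient values are the section's; the helper function and test point are our own):

```python
import numpy as np

# Coefficient matrix of the tank system: dy/dt = -0.02y + 0.02z, dz/dt = 0.02y - 0.02z
A = np.array([[-0.02,  0.02],
              [ 0.02, -0.02]])

# Each eigenpair (lam_i, v_i) contributes a solution e^(lam_i t) v_i;
# the general solution is a linear combination of these.
eigvals, eigvecs = np.linalg.eig(A)

def solution(t, coeffs):
    """General solution x(t) = sum_i c_i e^(lam_i t) v_i for chosen constants c_i."""
    return sum(c * np.exp(lam * t) * eigvecs[:, i]
               for i, (c, lam) in enumerate(zip(coeffs, eigvals)))

# Check that x(t) built this way satisfies x'(t) = A x(t),
# comparing a centred finite difference against A applied to the solution.
t, h, c = 1.0, 1e-6, (2.0, 0.5)
deriv = (solution(t + h, c) - solution(t - h, c)) / (2 * h)
```

The eigenvalues returned should be 0 and −0.04, matching the diagonalization above.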

General Discussion
There are many other problems that give rise to systems of linear homogeneous ordinary differential equations (for example, electrical circuits or a mechanical system consisting of springs). Many of these systems are much larger than the example we considered. Methods for solving these systems make extensive use of eigenvectors and eigenvalues, and they require methods for dealing with cases where the characteristic equation has complex roots.


PROBLEMS 6.4
Practice Problems

A1 Find the general solution of each of the following systems of linear differential equations.
(a), (b) [the coefficient matrices are not legible in the source]

Homework Problems

B1 Find the general solution of each of the following systems of linear differential equations.
(a)–(c) [the coefficient matrices are not legible in the source; the 3 × 3 matrix in (c) includes the entries 11, -5, and -8]

CHAPTER REVIEW
Suggestions for Student Review

1 Define eigenvectors and eigenvalues of a matrix A. Explain the connection between the statement that λ is an eigenvalue of A with eigenvector v and the condition det(A − λI) = 0. (Section 6.1)

2 What does it mean to say that matrices A and B are similar? Explain why this is an important question. (Section 6.2)

3 Suppose you are told that the n × n matrix A has eigenvalues λ1, ..., λn (repeated according to multiplicity).
(a) What conditions on these eigenvalues guarantee that A is diagonalizable over ℝ? (Section 6.2)
(b) Is there any case where you can tell from the eigenvalues that A is not diagonalizable over ℝ? (Section 6.2)

4 Use the idea suggested in Problem 6.2.D4 to create matrices for your classmates to diagonalize. (Section 6.2)

5 Suppose that P⁻¹AP = D, where D is a diagonal matrix with distinct diagonal entries λ1, ..., λn. How can we use this information to solve the system of linear differential equations d/dt x = Ax? (Section 6.4)

Chapter Quiz

E1 Let A = [matrix not legible in the source]. Determine whether the following vectors are eigenvectors of A. If any is an eigenvector of A, state the corresponding eigenvalue.
(a)–(c) [the vectors are not legible in the source]

E2 Determine whether the matrix A = [not legible in the source] is diagonalizable. If it is, give an invertible matrix P and a diagonal matrix D such that P⁻¹AP = D.

E3 Determine the algebraic and geometric multiplicity of each eigenvalue of A = [not legible in the source]. Is A diagonalizable?

E4 If λ is an eigenvalue of the invertible matrix A, prove that λ⁻¹ is an eigenvalue of A⁻¹.

E5 Suppose that A is a 3 × 3 matrix such that det A = 0, det(A + 2I) = 0, and det(A − 3I) = 0. Answer the following questions and give a brief explanation in each case.
(a) What is the dimension of the solution space of Ax = 0?
(b) What is the dimension of the nullspace of the matrix B = A − 2I?
(c) What is the rank of A?

E6 Let A = [0.9  0.8  0.1; 0.0  0.1  0.0; 0.1  0.1  0.9]. Verify that A is a Markov matrix and determine its invariant state x such that Σ(i=1 to 3) xi = 1.

E7 Find the general solution of the system of differential equations d/dt [y; z] = [matrix not legible in the source] [y; z].

Further Problems

F1 (a) Suppose that A and B are square matrices such that AB = BA. Suppose that the eigenvalues of A all have algebraic multiplicity 1. Prove that any eigenvector of A is also an eigenvector of B.
(b) Give an example to illustrate that the result in part (a) may not be true if A has eigenvalues with algebraic multiplicity greater than 1.

F2 If det B ≠ 0, prove that AB and BA have the same eigenvalues.

F3 Suppose that A is an n × n matrix with n distinct eigenvalues λ1, ..., λn with corresponding eigenvectors v1, ..., vn, respectively. By representing x with respect to the basis of eigenvectors, show that

(A − λ1 I)(A − λ2 I) ··· (A − λn I)x = 0

for every x ∈ ℝⁿ, and hence conclude that "A is a root of its characteristic polynomial"; that is, if the characteristic polynomial is C(λ), then C(A) is the zero matrix. (Hint: Write the characteristic polynomial in factored form.) This result is called the Cayley-Hamilton Theorem and is true for any square matrix A.

F4 For an invertible n × n matrix, use the Cayley-Hamilton Theorem to show that A⁻¹ can be written as a polynomial of degree less than or equal to n − 1 in A (that is, a linear combination of {Aⁿ⁻¹, ..., A², A, I}).

MyMathLab
Go to MyMathLab at www.mymathlab.com. You can practise many of this chapter's exercises as often as you want. The guided solutions help you find an answer step by step. You'll find a personalized study plan available to you, too!

CHAPTER 7

Orthonormal Bases

CHAPTER OUTLINE
7.1 Orthonormal Bases and Orthogonal Matrices
7.2 Projections and the Gram-Schmidt Procedure
7.3 Method of Least Squares
7.4 Inner Product Spaces
7.5 Fourier Series

In Section 1.4 we saw that we can use a projection to find a point P in a plane that is closest to some other point Q that is not in the plane. We can view this as finding the point in the plane that best approximates Q. In many applications, we want to find a best approximation. Thus, it is very useful to generalize our work with projections from Section 1.4 not only to general subspaces of ℝⁿ but also to general vector spaces. You may find it helpful to review Sections 1.3 and 1.4 carefully before proceeding with this chapter.

7.1 Orthonormal Bases and Orthogonal Matrices

Most of our intuition about coordinate geometry is based on experience with the standard basis for ℝⁿ. It is therefore a little uncomfortable for many beginners to deal with the arbitrary bases that arise in Chapter 4. Fortunately, for many problems, it is possible to work with bases that have the most essential properties of the standard basis: the basis vectors are mutually orthogonal (that is, the dot product of any two vectors is 0), and each basis vector is a unit vector (a vector with length 1).

Orthonormal Bases
Definition
Orthogonal

A set of vectors {v1, ..., vk} in ℝⁿ is orthogonal if vi · vj = 0 whenever i ≠ j.

EXAMPLE 1
The set { ... } [the vectors are not legible in the source] is an orthogonal set of vectors in ℝ⁴. (Check the dot products yourself.) The set { ... } [also not legible] is also an orthogonal set.

If the zero vector is excluded, orthogonal sets have one very nice property.

Theorem 1

If {v1, ..., vk} is an orthogonal set of non-zero vectors in ℝⁿ, then it is linearly independent.

Proof: Consider the equation c1v1 + ··· + ckvk = 0. Take the dot product of vi with each side to get

(c1v1 + ··· + ckvk) · vi = 0 · vi
c1(v1 · vi) + ··· + ci(vi · vi) + ··· + ck(vk · vi) = 0
0 + ··· + 0 + ci||vi||² + 0 + ··· + 0 = 0

since vj · vi = 0 unless j = i. Moreover, vi ≠ 0, so ||vi|| ≠ 0 and hence ci = 0. Since this is true for all 1 ≤ i ≤ k, it follows that {v1, ..., vk} is linearly independent.

Remark
The trick used in this proof of taking the dot product of each side with one of the vectors vi is an amazingly useful trick. Many of the things we do with orthogonal sets depend on it.
In addition to being mutually orthogonal, we want the vectors to be unit vectors.

Definition
Orthonormal

A set {v1, ..., vk} of vectors in ℝⁿ is orthonormal if it is orthogonal and each vector vi is a unit vector (that is, each vector is normalized).

Notice that an orthonormal set of vectors does not contain the zero vector, since all vectors have length 1. It follows from Theorem 1 that orthonormal sets are necessarily linearly independent.
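For a quick numerical check, all pairwise dot products of a set can be computed at once: if the vectors are placed as the columns of a matrix Q, the entries of QᵀQ are exactly the dot products vi · vj, so the set is orthonormal precisely when QᵀQ is the identity. A sketch in Python, with example vectors of our own choosing (not from the text):

```python
import numpy as np

# Columns: an orthonormal set in R^3 (our own example, each vector normalized)
Q = np.column_stack([
    np.array([1.0,  1.0, 0.0]) / np.sqrt(2),
    np.array([1.0, -1.0, 0.0]) / np.sqrt(2),
    np.array([0.0,  0.0, 1.0]),
])

# Gram matrix of the columns: entry (i, j) is v_i . v_j.
# Orthonormal columns  <=>  Gram matrix is the identity.
gram = Q.T @ Q
```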

EXAMPLE 2
Any subset of the standard basis vectors in ℝⁿ is an orthonormal set. For example, in ℝ⁶, {e1, e2, e5, e6} is an orthonormal set of four vectors (where, as usual, ei is the i-th standard basis vector).

EXAMPLE 3
The set { ... } [the vectors are not legible in the source] is an orthonormal set in ℝ⁴. The vectors are multiples of the vectors in Example 1, so they are certainly mutually orthogonal. They have been normalized so that each vector has length 1.

EXERCISE 1
Verify that the set { ... } [the vectors are not legible in the source] is orthogonal and then normalize the vectors to produce the corresponding orthonormal set.

Many arguments based on orthonormal sets could be given for orthogonal sets of non-zero vectors. However, the general arguments are slightly simpler for orthonormal sets since ||vi|| = 1 in this case. In specific examples, it may be simpler to use orthogonal sets and postpone the normalization that often introduces square roots until the end. (Compare Examples 1 and 3.) The arguments here will usually be given for orthonormal sets.

Coordinates with Respect to an Orthonormal Basis

An orthonormal set of n vectors in ℝⁿ is necessarily a basis for ℝⁿ, since it is automatically linearly independent and a set of n linearly independent vectors in ℝⁿ is a basis for ℝⁿ by Theorem 4.3.4. We will now see that there are several advantages to an orthonormal basis over an arbitrary basis.

The first of these advantages is that it is very easy to find the coordinates of a vector with respect to an orthonormal basis. Suppose that B = {v1, ..., vn} is an orthonormal basis for ℝⁿ and that x is any vector in ℝⁿ. To find the B-coordinates of x, we must find b1, ..., bn such that

x = b1v1 + ··· + bnvn     (7.1)

If B were an arbitrary basis, the procedure would be to solve the resulting system of n equations in n variables. However, since B is an orthonormal basis, we can use our amazingly useful trick: take the dot product of (7.1) with vi to get

x · vi = 0 + ··· + 0 + bi(vi · vi) + 0 + ··· + 0 = bi

because vj · vi = 0 for j ≠ i and vi · vi = 1. The result of this argument is important enough to summarize as a theorem.

Theorem 2

If B = {v1, ..., vn} is an orthonormal basis for ℝⁿ, then the i-th coordinate of a vector x ∈ ℝⁿ with respect to B is

bi = x · vi

It follows that x can be written as

x = (x · v1)v1 + ··· + (x · vn)vn


EXAMPLE 4
Find the coordinates of x with respect to the orthonormal basis B = {v1, v2, v3, v4} [the vector x and the basis vectors, with entries involving ±1/2 and ±1/√2, are not legible in the source].

Solution: By Theorem 2, the coordinates b1, b2, b3, and b4 of x are given by

b1 = x · v1,  b2 = x · v2,  b3 = x · v3,  b4 = x · v4

[The numerical evaluations are not legible in the source.] Thus, the B-coordinate vector of x is [x]_B = [b1; b2; b3; b4]. (It is easy to check that x = b1v1 + b2v2 + b3v3 + b4v4.)

EXERCISE 2
Let B = { ... } [the basis vectors are not legible in the source]. Verify that B is an orthonormal basis for ℝ³ and then find the coordinates of x [not legible in the source] with respect to B.

Another technical advantage of using orthonormal bases is related to the first one. Often it is necessary to calculate the lengths and dot products of vectors whose coordinates are given with respect to some basis other than the standard basis. If the basis is not orthonormal, the calculations are a little ugly, but they are quite simple when the basis is orthonormal. Let {v1, ..., vn} be an orthonormal basis for ℝⁿ and let x = x1v1 + ··· + xnvn and y = y1v1 + ··· + ynvn be any vectors in ℝⁿ. Using the fact that vi · vj = 0 for i ≠ j and vi · vi = 1 gives

x · y = (x1v1 + ··· + xnvn) · (y1v1 + ··· + ynvn)
      = x1y1(v1 · v1) + x1y2(v1 · v2) + ··· + x1yn(v1 · vn) + x2y1(v2 · v1) + ··· + x2yn(v2 · vn) + ··· + xnyn(vn · vn)
      = x1y1 + x2y2 + ··· + xnyn

and

||x||² = x · x = x1² + ··· + xn²

Thus, the formulas in the new coordinates look exactly like the formulas in standard coordinates. This fact will be used in Section 7.2.
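A numerical illustration of this fact, with an orthonormal basis and vectors of our own choosing:

```python
import numpy as np

# An orthonormal basis of R^3 (our own choice), stored as the columns of Q
Q = np.column_stack([
    np.array([1.0,  1.0,  1.0]) / np.sqrt(3),
    np.array([1.0, -1.0,  0.0]) / np.sqrt(2),
    np.array([1.0,  1.0, -2.0]) / np.sqrt(6),
])

x = np.array([3.0, -1.0, 2.0])
y = np.array([0.0,  4.0, 1.0])

# Coordinates with respect to the orthonormal basis: b_i = x . v_i (Theorem 2)
x_coords = Q.T @ x
y_coords = Q.T @ y

# The dot product and length computed from the new coordinates
# agree with the standard computations.
```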

EXAMPLE 5
Let B = { ... } be an orthonormal basis for ℝ³ [the vectors are not legible in the source], and let [x]_B and [y]_B be the given coordinate vectors [not legible in the source]. Determine ||x||² and x · y.

Solution: Using our work above, we get ||x||² and x · y directly from the B-coordinates. [The numerical evaluations are not legible in the source.]

EXERCISE 3
Verify the result of Example 5 by finding x and y in ℝ³ explicitly and computing ||x||² and x · y directly. This will demonstrate the usefulness of coordinates with respect to an orthonormal basis.

Dot products and orthonormal bases in ℝⁿ have important generalizations to inner products and orthonormal bases in general vector spaces. These will be considered in Section 7.4.

Change of Coordinates and Orthogonal Matrices

A third technical advantage of using orthonormal bases is that it is very easy to invert a change of coordinates matrix between the standard basis and an orthonormal basis. To keep the writing short, we give the argument in ℝ³, but the corresponding argument works in any dimension.


Let B = {v1, v2, v3} be an orthonormal basis for ℝ³ and let P = [v1 v2 v3]. From Section 4.4, P is the change of coordinates matrix from B-coordinates to coordinates with respect to the standard basis S. Now, consider the product of the transpose of P with P: the rows of Pᵀ are v1ᵀ, v2ᵀ, v3ᵀ, so

PᵀP = [v1 · v1  v1 · v2  v1 · v3; v2 · v1  v2 · v2  v2 · v3; v3 · v1  v3 · v2  v3 · v3] = I

since the vi are orthonormal.

Remark
We have used the fact that the matrix multiplication xᵀy equals the dot product x · y. We will use this very important fact many times throughout the rest of the book.

It follows that if P is a square matrix whose columns are orthonormal, then P is invertible and P⁻¹ = Pᵀ. Matrices with this property are given a special name.

Definition
Orthogonal Matrix

An n × n matrix P such that P⁻¹ = Pᵀ is called an orthogonal matrix. It follows that PᵀP = I = PPᵀ.

It is important to observe that the definition of an orthogonal matrix is equivalent to the orthonormality of either the columns or the rows of the matrix.

Theorem 3

The following are equivalent for an n × n matrix P:
(1) P is orthogonal.
(2) The columns of P form an orthonormal set.
(3) The rows of P form an orthonormal set.

Proof: Let P = [v1 ··· vn]. By the usual rule for matrix multiplication,

(PᵀP)ij = viᵀvj = vi · vj

Hence, PᵀP = I if and only if vi · vi = 1 and vi · vj = 0 for all i ≠ j. But this is true if and only if the columns of P form an orthonormal set.

The result for the rows of P follows from consideration of the product PPᵀ = I. You are asked to show this in Problem D4.


Remark
Observe that such matrices should probably be called orthonormal matrices, but the
name orthogonal matrix is the name everybody uses. Be sure that you remember that
an orthogonal matrix has orthonormal columns and rows.
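The column characterization gives an immediate computational test for orthogonality. A sketch in Python; the helper function and the example matrices (a 2 × 2 rotation matrix and a deliberately non-orthogonal matrix) are our own choices.

```python
import numpy as np

def is_orthogonal(P, tol=1e-12):
    """A square matrix is orthogonal iff P^T P = I, i.e. its columns are orthonormal."""
    n = P.shape[0]
    return np.allclose(P.T @ P, np.eye(n), atol=tol)

theta = 0.7  # any angle works
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
# For an orthogonal matrix, the inverse is just the transpose.
```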

EXAMPLE 6
The set {[cos θ; sin θ], [-sin θ; cos θ]} is orthonormal for any θ (verify). So, the matrix

P = [cos θ  -sin θ; sin θ  cos θ]

is an orthogonal matrix. Hence, P⁻¹ = Pᵀ.

EXAMPLE 7
The set {[1; 1; 0], [1; -1; -1], [1; -1; 2]} is orthogonal. (Verify this.) If the vectors are normalized, the resulting set is orthonormal, so the following matrix P is orthogonal:

P = [1/√2  1/√3  1/√6; 1/√2  -1/√3  -1/√6; 0  -1/√3  2/√6]

Thus, P is invertible and

P⁻¹ = Pᵀ = [1/√2  1/√2  0; 1/√3  -1/√3  -1/√3; 1/√6  -1/√6  2/√6]

Moreover, observe that P⁻¹ is also orthogonal.

EXAMPLE 8
The vectors of Example 4 are orthonormal, so the matrix whose columns are those vectors is orthogonal. [The matrix entries, which involve ±1/2 and ±1/√2, are not legible in the source.]

EXERCISE 4
Verify that

P = [0  1/√2  1/√2; 1/√3  -1/√3  1/√3; 2/√6  1/√6  -1/√6]

is orthogonal by showing that PPᵀ = I.

The most important application of orthogonal matrices considered in this book is the diagonalization of symmetric matrices in Chapter 8, but there are many other geometrical applications as well. In the next example, an orthogonal change of coordinates matrix is used to find the standard matrix of a rotation transformation about an axis that is not a coordinate axis. (This was one question we could not answer in Chapter 3.)

EXAMPLE 9
Find the standard matrix of the linear transformation L : ℝ³ → ℝ³ that rotates vectors counterclockwise through an angle π/3 about the axis defined by the vector v = [1; 1; 1].

Solution: If the rotation were about the standard x1-axis (that is, the axis defined by e1), the matrix of the rotation would be

R1 = [1  0  0; 0  cos π/3  -sin π/3; 0  sin π/3  cos π/3] = [1  0  0; 0  1/2  -√3/2; 0  √3/2  1/2]

This will also be the B-matrix of the rotation in this problem if there exists a basis B = {f1, f2, f3} such that

(1) f1 is a unit vector in the direction of the axis v.
(2) B is orthonormal.
(3) B is right-handed (so that we can correctly include the counterclockwise sense of the rotation with respect to a right-handed basis).

Let us find such a basis. To start, let

f1 = v/||v|| = (1/√3)[1; 1; 1]

We must find two vectors that are orthogonal to f1 and to each other. Solving the equation

0 = f1 · x = (1/√3)(x1 + x2 + x3)

by inspection, we find that the vector [1; -1; 0] is orthogonal to f1. (There are infinitely many other choices for this vector; this is just one simple choice.) To form a right-handed system, we can now take the third vector to be the cross-product [1; 1; 1] × [1; -1; 0] = [1; 1; -2]. Normalizing these vectors, we get

f2 = (1/√2)[1; -1; 0]  and  f3 = (1/√6)[1; 1; -2]

The required right-handed orthonormal basis is thus B = {f1, f2, f3}, and the orthogonal change of coordinates matrix from this basis to the standard basis is

P = [f1 f2 f3] = [1/√3  1/√2  1/√6; 1/√3  -1/√2  1/√6; 1/√3  0  -2/√6]

EXAMPLE 9
(continued)

Since [L]_B = R1 (above), the standard matrix is given by

[L]_S = P[L]_B P⁻¹ = [2/3  -1/3  2/3; 2/3  2/3  -1/3; -1/3  2/3  2/3]

It is easy to check that [1; 1; 1] is an eigenvector of this matrix with eigenvalue 1, and it should be, since it defines the axis of the rotation represented by the matrix. Notice also that the matrix [L]_S is itself an orthogonal matrix. Since a rotation transformation always maps the standard basis to a new orthonormal basis, its matrix can always be taken as a change of coordinates matrix, and it must be orthogonal.
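The calculation in Example 9 can be reproduced numerically. A sketch in Python, following the section's recipe (change coordinates with P, rotate about the first basis vector, change back; since P is orthogonal, P⁻¹ = Pᵀ):

```python
import numpy as np

s2, s3, s6 = np.sqrt(2), np.sqrt(3), np.sqrt(6)

# Right-handed orthonormal basis with f1 along the rotation axis (1, 1, 1)
P = np.array([[1/s3,  1/s2,  1/s6],
              [1/s3, -1/s2,  1/s6],
              [1/s3,  0.0,  -2/s6]])

# Rotation through pi/3 about the first basis vector, written in B-coordinates
t = np.pi / 3
R1 = np.array([[1.0, 0.0,        0.0       ],
               [0.0, np.cos(t), -np.sin(t)],
               [0.0, np.sin(t),  np.cos(t)]])

# Standard matrix of the rotation: [L]_S = P R1 P^{-1} = P R1 P^T
L = P @ R1 @ P.T
```

The result matches the matrix with entries ±1/3 and 2/3 given in the example, it fixes the axis vector, and it is itself orthogonal.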

A Note on Rotation Transformations and Rotation of Axes in R²

The matrix P = [cos θ  -sin θ; sin θ  cos θ] is the change of coordinates matrix from the basis B = {[cos θ; sin θ], [-sin θ; cos θ]} to the standard basis S of ℝ². This change of basis is often described as a "rotation of axes through angle θ" because each of the basis vectors in B is obtained from the corresponding standard basis vector by a rotation through angle θ.

Treatments of rotation of axes often emphasize the change of coordinates equation. Recall Theorem 4.4.2, which tells us that if P is the change of coordinates matrix from B to S, then P⁻¹ is the change of coordinates matrix from S to B. Then, for any x ∈ ℝ² with [x]_B = [b1; b2], the change of coordinates equation can be written in the form [x]_B = P⁻¹[x]_S. If the change of basis is a rotation of axes, then P is an orthogonal matrix, so P⁻¹ = Pᵀ. Thus, the change of coordinates equation for this rotation of axes can be written as

[b1; b2] = [cos θ  sin θ; -sin θ  cos θ] [x1; x2]

Or it can be written as two equations for the "new" coordinates in terms of the "old":

b1 = x1 cos θ + x2 sin θ
b2 = -x1 sin θ + x2 cos θ

These equations could also be derived using a fairly simple trigonometric argument.

The matrix [cos θ  -sin θ; sin θ  cos θ] also appeared in Section 3.3 as the standard matrix [R_θ] of the linear transformation of ℝ² that rotates vectors counterclockwise through angle θ. Conceptually, this is quite different from a rotation of axes.

It can be confusing that the matrix for a rotation through θ as a linear transformation is the transpose of the change of coordinates matrix for a rotation of axes

through θ. In fact, what may seem even more confusing is the fact that if you replace θ with (−θ), one matrix turns into the other (because cos(−θ) = cos θ and sin(−θ) = −sin θ). One way to understand this is to imagine what happens to the vector e1 under the two different scenarios. First, consider R to be the transformation that rotates each vector by angle θ, with 0 < θ < π/2; then R(e1) is a vector in the first quadrant that makes an angle θ with the x1-axis. Next, consider a rotation of axes through (−θ); denote the new axes by y1 and y2, respectively. Then e1 has not moved but the axes have, and with respect to the new axes, e1 is in the new first quadrant and makes an angle of θ with respect to the y1-axis. Therefore, the new coordinates of e1 relative to the rotated axes are exactly the same as the standard coordinates of R(e1). Compare Figures 7.1.1 and 7.1.2.
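The relationship described here is easy to confirm numerically. A sketch in Python (the angle is an arbitrary choice of ours):

```python
import numpy as np

def rotation(theta):
    """Standard matrix of the linear transformation rotating vectors by theta."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

theta = 0.9                     # arbitrary angle with 0 < theta < pi/2
e1 = np.array([1.0, 0.0])

# Coordinates of e1 with respect to axes rotated through -theta:
# the change of coordinates matrix for that rotation of axes is P = rotation(-theta),
# and [e1]_B = P^{-1} e1 = P^T e1 since P is orthogonal.
coords_in_rotated_axes = rotation(-theta).T @ e1

# The rotation transformation applied to e1:
rotated_e1 = rotation(theta) @ e1
```

The two results agree, and the transpose relationship between the two matrices is exactly the θ ↔ −θ swap described in the text.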

Figure 7.1.1 The transformation L : ℝ² → ℝ² rotates vectors by angle θ.

Figure 7.1.2 The standard basis vector e1 is shown relative to axes obtained from the standard axes by rotation through -θ.


PROBLEMS 7.1
Practice Problems

A1 Determine which of the following sets are orthogonal. For each orthogonal set, produce the corresponding orthonormal set and the orthogonal change of coordinates matrix P.
(a)–(d) [the vectors are not legible in the source]

A2 Let B = { ... } [the basis vectors are not legible in the source]. Find the coordinates of each of the following vectors with respect to the orthonormal basis B.
(a) w  (b) x  (c) y  (d) z  [the vectors are not legible in the source]

A3 Let B = { ... } [the basis vectors are not legible in the source]. Find the coordinates of each of the following vectors with respect to the orthonormal basis B.
(a) x  (b) y  (c) w  (d) z  [the vectors are not legible in the source]

A4 For each of the following matrices, decide whether A is orthogonal by calculating AᵀA. If A is not orthogonal, indicate how the columns of A fail to form an orthonormal set (for example, "the second and third columns are not orthogonal").
(a)–(e) [the matrices are not legible in the source; the entries include fractions such as 12/13, 5/13, 3/5, 4/5, 1/5, 2/5, 1/3, and 2/3]

A5 Let L : ℝ³ → ℝ³ be the rotation through the given angle about the axis determined by the given vector v [the angle and vector are not legible in the source].
(a) Verify that the given vector g2 is orthogonal to g1 and calculate the components of g3 = g1 × g2 [the vectors g1 and g2 are not legible in the source].
(b) Define fi = gi/||gi|| for i = 1, 2, 3, so that B = {f1, f2, f3} is an orthonormal basis. Write the change of coordinates matrix P for the change from B to the standard basis S in the form P = [f1 f2 f3].
(c) Write the B-matrix of L, [L]_B. For part (d), it is probably easiest to write this in factored form with a common scalar taken out front.
(d) Determine the standard matrix of L, [L]_S.

A6 Given that B = { ... } is an orthonormal basis for ℝ³ [the vectors, with entries such as ±1/√6, ±1/√3, and ±1/√2, are not legible in the source], determine another orthonormal basis for ℝ³ that includes the given vector [not legible in the source], and briefly explain why your basis is orthonormal.

Homework Problems

B1 Determine which of the following sets are orthogonal. For each orthogonal set, produce the corresponding orthonormal set and the orthogonal change of coordinates matrix P.
(a)–(c) [the vectors are not legible in the source]

B2 Let B = { ... } [the basis vectors are not legible in the source]. Find the coordinates of each of the following vectors with respect to the orthonormal basis B.
(a) w  (b) x  (c) y  (d) z  [the vectors are not legible in the source]

B3 Let B = { ... } [the basis vectors are not legible in the source]. Find the coordinates of each of the following vectors with respect to the orthonormal basis B.
(a)–(d) [the vectors are not legible in the source]

B4 For each of the following matrices, decide whether A is orthogonal by calculating AᵀA. If A is not orthogonal, indicate how the columns of A fail to form an orthonormal set (for example, "the second and third columns are not orthogonal").
(a)–(e) [the matrices are not legible in the source; the entries include fractions such as ±1/√5, ±2/√5, 1/2, ±1/√3, ±1/√2, and ±1/√6]

B5 (a) Let w1 and w2 be the given vectors [not legible in the source]. Determine a third vector w3 such that {w1, w2, w3} forms a right-handed orthogonal set.
(b) Let vi = wi/||wi|| so that B = {v1, v2, v3} is an orthonormal basis. Find [x]_B and [y]_B for the given vectors x and y [not legible in the source].


Conceptual Problems
D1 Verify that the product of two orthogonal matrices is an orthogonal matrix.

D2 (a) Prove that if P is an orthogonal matrix, then det P = ±1.
(b) Give an example of a 2 × 2 matrix A such that det A = 1, but A is not orthogonal.

D3 (a) Use the fact that x · y = xᵀy to show that if an n × n matrix P is orthogonal, then ||Px|| = ||x|| for every x ∈ ℝⁿ.
(b) Show that any real eigenvalue of an orthogonal matrix must be either 1 or −1.

D4 Prove that an n × n matrix P is orthogonal if and only if the rows of P form an orthonormal set.
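The claims in D1 and D2(a) are easy to check numerically. The sketch below (an illustration, not a proof) builds two 2 × 2 rotation matrices, which are the classic examples of orthogonal matrices, and confirms that their product Q satisfies QᵀQ = I and det Q = ±1. All values here are my own test choices.

```python
import math

def matmul(A, B):
    # product of two 2x2 matrices stored as nested lists
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def rotation(theta):
    # a rotation matrix is orthogonal for any angle theta
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

P1, P2 = rotation(0.7), rotation(-1.9)
Q = matmul(P1, P2)

# Q^T Q should be the identity, so the product is again orthogonal (D1)
QtQ = matmul(transpose(Q), Q)
assert all(abs(QtQ[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))

# det Q must be +1 or -1 for any orthogonal Q (D2(a)); here it is +1
detQ = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]
assert abs(abs(detQ) - 1) < 1e-12
```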

7.2 Projections and the Gram-Schmidt Procedure

Projections onto a Subspace
The projection of a vector y onto another vector x was defined in Chapter 1 by finding a scalar multiple of x, denoted proj_x y, and a vector perpendicular to x, denoted perp_x y, such that

y = proj_x y + perp_x y

Since we were just trying to find a scalar multiple of x, the projection of y onto x could be viewed as projecting y onto the subspace spanned by x. Similarly, we saw how to find the projection of y onto a plane, which is just a 2-dimensional subspace. It is natural and useful to define the projection of vectors onto more general subspaces.

Let y ∈ ℝⁿ and let S be a subspace of ℝⁿ. To match what we did in Chapter 1, we want to write y as

y = proj_S y + perp_S y

where proj_S y is a vector in S and perp_S y is a vector orthogonal to S. To do this, we first observe that we need to define precisely what we mean by a vector orthogonal to a subspace.

Definition
Orthogonal
Orthogonal Complement

Let S be a subspace of ℝⁿ. We shall say that a vector x is orthogonal to S if

x · s = 0  for all s ∈ S

We call the set of all vectors orthogonal to S the orthogonal complement of S and denote it S⊥. That is,

S⊥ = {x ∈ ℝⁿ | x · s = 0 for all s ∈ S}

Remark
Note that if B = {v1, ..., vk} is a basis for S, then

S⊥ = {x ∈ ℝⁿ | x · vi = 0 for 1 ≤ i ≤ k}

Chapter 7   Orthonormal Bases

EXAMPLE 1

If S is a plane in ℝ³ with normal vector n, then by definition n is orthogonal to every vector in the plane, so we say that n is orthogonal to the plane. On the other hand, we saw in Chapter 1 that the plane is the set of all vectors orthogonal to n (or any scalar multiple of n), so the orthogonal complement of the subspace Span{n} is the plane.

EXAMPLE 2
Let W = Span{[1, 0, 0, 1]ᵀ, [1, 0, 1, 0]ᵀ}. Find W⊥ in ℝ⁴.

Solution: We want to find all v = [v1, v2, v3, v4]ᵀ ∈ ℝ⁴ such that v · [1, 0, 0, 1]ᵀ = 0 and v · [1, 0, 1, 0]ᵀ = 0. This gives the system of equations v1 + v4 = 0 and v1 + v3 = 0, which has solution space

Span{[0, 1, 0, 0]ᵀ, [1, 0, −1, −1]ᵀ}

Hence, W⊥ = Span{[0, 1, 0, 0]ᵀ, [1, 0, −1, −1]ᵀ}.

EXERCISE 1
Let S = Span{…}. Find S⊥.
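A computation like Example 2 is easy to spot-check: taking W = Span{[1, 0, 0, 1]ᵀ, [1, 0, 1, 0]ᵀ} in ℝ⁴ (the spanning vectors that produce the equations v1 + v4 = 0 and v1 + v3 = 0), every vector in the proposed basis for W⊥ must have zero dot product with every spanning vector of W.

```python
def dot(u, v):
    # standard dot product on R^n
    return sum(a * b for a, b in zip(u, v))

# spanning vectors of W (they give the system v1 + v4 = 0, v1 + v3 = 0)
w_basis = [[1, 0, 0, 1], [1, 0, 1, 0]]

# proposed basis for the orthogonal complement W-perp
wperp_basis = [[0, 1, 0, 0], [1, 0, -1, -1]]

# every pairing must have dot product zero
checks = [dot(w, v) for w in w_basis for v in wperp_basis]
print(checks)  # -> [0, 0, 0, 0]
```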

We get the following important facts about S and S⊥.

Theorem 1
Let S be a k-dimensional subspace of ℝⁿ. Then S⊥ is a subspace of ℝⁿ and:

(1) S ∩ S⊥ = {0}

(2) dim(S⊥) = n − k

(3) If {v1, ..., vk} is an orthonormal basis for S and {vk+1, ..., vn} is an orthonormal basis for S⊥, then {v1, ..., vk, vk+1, ..., vn} is an orthonormal basis for ℝⁿ.

You are asked to prove these facts in Problems D1, D2, and D4.

We are now able to return to our goal of defining the projection of a vector x ∈ ℝⁿ onto a subspace S of ℝⁿ. Assume that we have an orthonormal basis {v1, ..., vk} for S and an orthonormal basis {vk+1, ..., vn} for S⊥. Then, by (3) of Theorem 1, we know that {v1, ..., vn} is an orthonormal basis for ℝⁿ. Therefore, from our work in Section 7.1, we can find the coordinates of x with respect to this orthonormal basis. We get

x = (x · v1)v1 + ··· + (x · vk)vk + (x · vk+1)vk+1 + ··· + (x · vn)vn

Observe that this is exactly what we have been looking for. In particular, we have written x as a sum of (x · v1)v1 + ··· + (x · vk)vk, which is a vector in S, and (x · vk+1)vk+1 + ··· + (x · vn)vn, which is a vector in S⊥. Thus, we make the following definition.

Definition
Projection onto a Subspace

Let S be a k-dimensional subspace of ℝⁿ and let B = {v1, ..., vk} be an orthonormal basis of S. If x is any vector in ℝⁿ, the projection of x onto S is defined to be

proj_S x = (x · v1)v1 + ··· + (x · vk)vk

The projection of x perpendicular to S is defined to be

perp_S x = x − proj_S x

Remark
Observe that a key component of this definition is that we have an orthonormal basis for the subspace. We could, of course, make a similar definition for the projection if we have only an orthogonal basis. See Problem D6.

We have defined perp_S x so that we do not require an orthonormal basis for S⊥. However, we have to ensure that this is a valid equation by verifying that perp_S x ∈ S⊥. For any 1 ≤ i ≤ k, we have

vi · perp_S x = vi · [x − ((x · v1)v1 + ··· + (x · vk)vk)]
             = vi · x − vi · ((x · v1)v1 + ··· + (x · vk)vk)
             = vi · x − (0 + ··· + 0 + (x · vi)(vi · vi) + 0 + ··· + 0)
             = vi · x − vi · x
             = 0

since B is an orthonormal basis and the dot product is symmetric. Hence, perp_S x is orthogonal to every vector in the orthonormal basis {v1, ..., vk} of S. Hence, it is orthogonal to every vector in S.
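The definition translates directly into a computation. The sketch below uses a hypothetical orthonormal basis of my own choosing (two unit vectors spanning the plane z = 0 in ℝ³, not one of the book's examples) and checks that perp_S x really comes out orthogonal to both basis vectors.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def proj_subspace(x, onb):
    # proj_S x = (x . v1) v1 + ... + (x . vk) vk for an orthonormal basis
    p = [0.0] * len(x)
    for v in onb:
        c = dot(x, v)
        p = [pi + c * vi for pi, vi in zip(p, v)]
    return p

# hypothetical orthonormal basis for the plane z = 0 in R^3
s = 1 / math.sqrt(2)
onb = [[s, s, 0.0], [s, -s, 0.0]]

x = [3.0, 4.0, 5.0]
p = proj_subspace(x, onb)
perp = [xi - pi for xi, pi in zip(x, p)]

# the projection lands in the plane, and perp is orthogonal to every
# basis vector of S, as the verification above requires
assert all(abs(a - b) < 1e-9 for a, b in zip(p, [3.0, 4.0, 0.0]))
assert all(abs(dot(perp, v)) < 1e-9 for v in onb)
```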

EXAMPLE 3
Let S = Span{…} and let x = …. Determine proj_S x and perp_S x.

Solution: An orthonormal basis for S is B = {v1, v2} = {…}. Thus, we get

proj_S x = (x · v1)v1 + (x · v2)v2 = …

perp_S x = x − proj_S x = …

EXERCISE 2
Let B = {…} be an orthogonal basis for S and let x = …. Determine proj_S x and perp_S x.

Recall that we showed in Chapter 1 that the projection of a vector x ∈ ℝ³ onto a plane in ℝ³ is the vector in the plane that is closest to x. We now prove that the projection of x ∈ ℝⁿ onto a subspace S of ℝⁿ is the vector in S that is closest to x.

Theorem 2
(Approximation Theorem)
Let S be a subspace of ℝⁿ. Then, for any x ∈ ℝⁿ, the unique vector s ∈ S that minimizes the distance ||x − s|| is s = proj_S x.

Proof: Let {v1, ..., vk} be an orthonormal basis for S and let {vk+1, ..., vn} be an orthonormal basis for S⊥. Then, for any x ∈ ℝⁿ, we can write x = x1v1 + ··· + xnvn. Any vector s ∈ S can be expressed as s = s1v1 + ··· + skvk, so that the square of the distance from x to s is given by

||x − s||² = (x1 − s1)² + ··· + (xk − sk)² + x_{k+1}² + ··· + x_n²

To minimize the distance, we must choose si = xi for 1 ≤ i ≤ k. But this means that s = proj_S x.


The Gram-Schmidt Procedure

For many of the calculations in this chapter, we need an orthonormal (or orthogonal) basis for a subspace of ℝⁿ. If S is a k-dimensional subspace of ℝⁿ, it is certainly possible to use the methods of Section 4.3 to produce some basis {w1, ..., wk} for S. We will now show that we can convert any such basis for S into an orthonormal basis {v1, ..., vk} for S. To simplify the description (and calculations), we first produce an orthogonal basis and then normalize each of the vectors to get an orthonormal basis.

The construction is inductive. That is, we first take v1 = w1 so that {v1} is an orthogonal basis for Span{w1}. Then, given an orthogonal basis {v1, ..., vi−1} for Span{w1, ..., wi−1}, we want to find a vector vi such that {v1, ..., vi−1, vi} is an orthogonal basis for Span{w1, ..., wi}. We will repeat this procedure, called the Gram-Schmidt Procedure, until we have the desired orthogonal basis {v1, ..., vk} for Span{w1, ..., wk}. To do this, we will use the following theorem.

Theorem 3
Suppose that v1, ..., vk ∈ ℝⁿ. Then

Span{v1, ..., vk} = Span{v1, ..., vk−1, vk + t1v1 + ··· + tk−1vk−1}

for any t1, ..., tk−1 ∈ ℝ.

You are asked to prove Theorem 3 in Problem D5.

Algorithm 1
The Gram-Schmidt Procedure is as follows.

First step: Let v1 = w1. Then the one-dimensional subspace spanned by v1 is obviously the same as the subspace spanned by w1. We will denote this subspace by S1 = Span{v1}.

Second step: We want to find a vector v2 such that it is orthogonal to v1 and Span{v1, v2} = Span{w1, w2}. We know that the perpendicular part of a projection onto a subspace is orthogonal to the subspace, so we take

v2 = perp_S1 w2 = w2 − proj_S1 w2

Then v2 is orthogonal to v1 and Span{v1, v2} = Span{w1, w2} by Theorem 3. We denote the two-dimensional subspace by S2 = Span{v1, v2}.

i-th step: Suppose that i − 1 steps have been carried out so that {v1, ..., vi−1} is orthogonal, and Si−1 = Span{v1, ..., vi−1} = Span{w1, ..., wi−1}. Let

vi = perp_{Si−1} wi = wi − proj_{Si−1} wi

By Theorem 3, {v1, ..., vi} is orthogonal and Span{v1, ..., vi} = Span{w1, ..., wi}. We continue this procedure until i = k, so that

Sk = Span{v1, ..., vk} = Span{w1, ..., wk} = S

and an orthogonal basis has been produced for the original subspace.


Remarks
1. It is an important feature of the construction that Si−1 is a subspace of the next Si.

2. Since it is really only the direction of vi that is important in this procedure, we can rescale each vi in any convenient fashion to simplify the calculations.

3. Notice that the order of the vectors in the original basis has an effect on the calculations because each step takes the perpendicular part of the next vector. If the original vectors were given in a different order, the procedure might produce a different orthonormal basis.

4. Observe that the procedure does not actually require that we start with a basis for S; only a spanning set is required. The procedure will actually detect a linearly dependent vector by returning the zero vector when we take the perpendicular part. This is demonstrated in Example 5.
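The procedure, including the dependence check from Remark 4, can be sketched in a few lines of plain Python (my own illustration, not the book's notation; a vector whose perpendicular part is the zero vector is simply skipped).

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(spanning_set, tol=1e-12):
    """Return an orthogonal basis for the span of the input vectors."""
    basis = []
    for w in spanning_set:
        # subtract the projection onto the span of the vectors found so far
        v = list(w)
        for b in basis:
            c = dot(w, b) / dot(b, b)
            v = [vi - c * bi for vi, bi in zip(v, b)]
        # Remark 4: a (near-)zero perpendicular part flags a dependent vector
        if dot(v, v) > tol:
            basis.append(v)
    return basis

# the third vector is the sum of the first two, so it should be discarded
basis = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [2.0, 1.0, 1.0]])
assert len(basis) == 2
assert abs(dot(basis[0], basis[1])) < 1e-9
```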

EXAMPLE 4
Use the Gram-Schmidt Procedure to find an orthonormal basis for the subspace of ℝ⁵ defined by S = Span{w1, w2, w3} = Span{…}.

Solution: Call the vectors in the basis w1, w2, and w3, respectively.

First step: Let v1 = w1 and S1 = Span{v1}.

Second step: Determine perp_S1 w2:

perp_S1 w2 = w2 − proj_S1 w2 = [−3/2, 3/2, 1, −1/2, 1/2]ᵀ

(It is wise to check your arithmetic by verifying that v1 · perp_S1 w2 = 0.)

As mentioned above, we can take any scalar multiple of perp_S1 w2, so we take

v2 = [−3, 3, 2, −1, 1]ᵀ  and  S2 = Span{v1, v2}

EXAMPLE 4 (continued)
Third step: Determine perp_S2 w3:

perp_S2 w3 = w3 − proj_S2 w3 = [−1/4, −3/4, 1/2, 1/4, 3/4]ᵀ

(Again, it is wise to check that perp_S2 w3 is orthogonal to v1 and v2.)

We take v3 = [−1, −3, 2, 1, 3]ᵀ. We now see that {v1, v2, v3} is an orthogonal basis for S. To obtain an orthonormal basis for S, we divide each vector in this basis by its length. Thus, we find that an orthonormal basis for S is

{ v1/||v1||,  (1/(2√6))[−3, 3, 2, −1, 1]ᵀ,  (1/(2√6))[−1, −3, 2, 1, 3]ᵀ }

EXAMPLE 5
Use the Gram-Schmidt Procedure to find an orthogonal basis for the subspace S = Span{…} of ℝ³.

Solution: Call the vectors in the spanning set w1, w2, and w3, respectively.

First step: Let v1 = w1 and S1 = Span{v1}.

Second step: Determine perp_S1 w2. We take v2 = … and S2 = Span{v1, v2}.

Third step: Determine perp_S2 w3:

perp_S2 w3 = 0

Hence, w3 was in S2, and so w3 ∈ Span{v1, v2} = Span{w1, w2}. Therefore, {v1, v2} is an orthogonal basis for S.


PROBLEMS 7.2
Practice Problems
A1 Each of the following sets is orthogonal but not orthonormal. Determine the projection of x = … onto the subspace spanned by each set.
(a) A = {…}
(b) B = {…}
(c) C = {…}

A2 Find a basis for the orthogonal complement of each of the following subspaces.
(a) S = Span{…}
(b) S = Span{…}
(c) S = Span{…}

A3 Use the Gram-Schmidt Procedure to produce an orthogonal basis for the subspace spanned by each set.
(a) …   (b) …   (c) …   (d) …

A4 Use the Gram-Schmidt Procedure to produce an orthonormal basis for the subspace spanned by each set.
(a) …   (b) …   (c) …   (d) …

A5 Let S be a subspace of ℝⁿ. Prove that perp_S x = proj_{S⊥} x for any x ∈ ℝⁿ.

Homework Problems
B1 Each of the following sets is orthogonal but not orthonormal. Determine the projection of x = … onto the subspace spanned by each set.
(a) A = {…}
(b) B = {…}
(c) C = {…}

B2 Find a basis for the orthogonal complement of each of the following subspaces.
(a) S = Span{…}
(b) S = Span{…}
(c) S = Span{…}
(d) S = Span{…}

B3 Use the Gram-Schmidt Procedure to produce an orthogonal basis for the subspace spanned by each set.
(a) …   (b) …   (c) …   (d) …

B4 Use the Gram-Schmidt Procedure to produce an orthonormal basis for the subspace spanned by each set.
(a) …   (b) …   (c) …   (d) …

B5 Let S = Span{…} and let x = ….
(a) Find an orthonormal basis B for S.
(b) Calculate perp_S x.
(c) Find an orthonormal basis C for S⊥.
(d) Calculate proj_{S⊥} x.

Conceptual Problems
D1 Prove that if S is a k-dimensional subspace of ℝⁿ, then S ∩ S⊥ = {0}.

D2 Prove that if S is a k-dimensional subspace of ℝⁿ, then S⊥ is an (n − k)-dimensional subspace.

D3 Prove that if S is a k-dimensional subspace of ℝⁿ, then (S⊥)⊥ = S.

D4 Prove that if {v1, ..., vk} is an orthonormal basis for S and {vk+1, ..., vn} is an orthonormal basis for S⊥, then {v1, ..., vk, vk+1, ..., vn} is an orthonormal basis for ℝⁿ.


D5 Suppose that v1, ..., vk ∈ ℝⁿ and t1, ..., tk−1 ∈ ℝ. Prove that

Span{v1, ..., vk} = Span{v1, ..., vk−1, vk + t1v1 + ··· + tk−1vk−1}

D6 Suppose that S is a k-dimensional subspace of ℝⁿ and let B = {v1, ..., vk} be an orthogonal basis of S. For any x ∈ S, find the coordinates of proj_S x with respect to B.

D7 If {v1, ..., vk} is an orthonormal basis for a subspace S, verify that the standard matrix of proj_S can be written

[proj_S] = v1v1ᵀ + ··· + vkvkᵀ

7.3 Method of Least Squares

Suppose that an experimenter measures a variable y at times t1, t2, ..., tn and obtains the values y1, y2, ..., yn. For example, y might be the position of a particle at time t or the temperature of some body fluid. Suppose that the experimenter believes that the data fit (more or less) a curve of the form y = a + bt + ct². How should she choose a, b, and c to get the best-fitting curve of this form?

Let us consider some particular a, b, and c. If the data fit the curve y = a + bt + ct² perfectly, then for each i, yi = a + bti + cti². However, for arbitrary a, b, and c, we expect that at each ti, there will be an error, denoted by ei and measured by the vertical distance

ei = yi − (a + bti + cti²)

as shown in Figure 7.3.3.

Figure 7.3.3  Some data points and a curve y = a + bt + ct². Vertical line segments measure the error in the fit at each ti.


One approach to finding the best-fitting curve might be to try to minimize the total error Σ ei (summed from i = 1 to n). This would be unsatisfactory, however, because we might get a small total error by having large positive errors cancelled by large negative errors. Thus, we instead choose to minimize the sum of the squares of the errors

Σ ei² = Σ (yi − (a + bti + cti²))²

where both sums run from i = 1 to n. This method is called the method of least squares.

To find the parameters a, b, and c that minimize this expression for given values t1, ..., tn and y1, ..., yn, one could use calculus, but we will proceed by using a projection. This requires us to set up the problem as follows.

Let y = [y1, ..., yn]ᵀ, 1 = [1, ..., 1]ᵀ, t = [t1, ..., tn]ᵀ, and t² = [t1², ..., tn²]ᵀ be vectors in ℝⁿ. Now consider the distance from y to (a1 + bt + ct²). Observe that the square of this distance is exactly the sum of the squares of the errors:

||y − (a1 + bt + ct²)||² = Σ (yi − (a + bti + cti²))²

Next, observe that a1 + bt + ct² is a vector in the subspace S of ℝⁿ spanned by B = {1, t, t²}. If at least three of the ti are distinct, then B is linearly independent (see Problem D2), so it is a basis for S. Thus, the problem of finding the curve of best fit is reduced to finding a, b, and c such that a1 + bt + ct² is the vector in S that is closest to y. By the Approximation Theorem, this vector is proj_S y, and the required a, b, and c are the B-coordinates of proj_S y.

Given what we know so far, it might seem that we should proceed by transforming B into an orthonormal basis for S so that we can find the projection. However, we can use the theory of orthogonality and projections to simplify the problem. If a, b, and c have been chosen correctly, the error vector e = y − a1 − bt − ct² is equal to perp_S y. In particular, it must be orthogonal to every vector in S, so it is orthogonal to 1, t, and t². Therefore,

1 · e = 1 · (y − a1 − bt − ct²) = 0
t · e = t · (y − a1 − bt − ct²) = 0
t² · e = t² · (y − a1 − bt − ct²) = 0

The required a, b, and c are determined as the solutions of this system of three equations in three variables.

It is helpful to rewrite these equations by introducing the matrix

X = [1  t  t²]

whose columns are the vectors 1, t, and t², and the vector a = [a, b, c]ᵀ of parameters. Then the error vector can be written as e = y − Xa. Since the three equations are obtained by taking dot products of e with the columns of X, the system of equations can be written in the form

Xᵀ(y − Xa) = 0


The equations in this form are called the normal equations for the least squares fit. Since the columns of X are linearly independent, the matrix XᵀX is a 3 × 3 invertible matrix (see Problem D2), and the normal equations can be rewritten as

a = (XᵀX)⁻¹Xᵀy

This is consistent with a unique solution.

For a more general situation, we use a similar construction. The matrix X, called the design matrix, depends on the desired model curve and the way the data are collected. This will be demonstrated in Example 2 below.
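Using the data of Example 1, the normal equations XᵀXa = Xᵀy can be formed and solved directly. The sketch below is plain Python with a small Gaussian-elimination solver (my own helper, not the book's notation); the assertion at the end checks the defining property of the solution, namely that the residual y − Xa is orthogonal to every column of X.

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def solve3(A, b):
    # Gauss-Jordan elimination with partial pivoting for a small system
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# data from Example 1, fitting y = a + b t + c t^2
t = [1.0, 2.1, 3.1, 4.0, 4.9, 6.0]
y = [6.1, 12.6, 21.1, 30.2, 40.9, 55.5]
X = [[1.0, ti, ti * ti] for ti in t]

XtX = [[sum(X[i][r] * X[i][c] for i in range(len(t))) for c in range(3)]
       for r in range(3)]
Xty = [sum(X[i][r] * y[i] for i in range(len(t))) for r in range(3)]
a = solve3(XtX, Xty)

# the residual must be orthogonal to each column of X (normal equations)
resid = [yi - fi for yi, fi in zip(y, matvec(X, a))]
for c in range(3):
    assert abs(sum(X[i][c] * resid[i] for i in range(len(t)))) < 1e-6
```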

EXAMPLE 1
Suppose that the experimenter's data are as follows:

t:  1.0   2.1   3.1   4.0   4.9   6.0
y:  6.1   12.6  21.1  30.2  40.9  55.5

Solution: As in the earlier discussion, the experimenter wants to find the curve y = a + bt + ct² that best fits the data, in the sense of minimizing the sum of the squares of the errors. We let

Xᵀ = [ 1    1     1     1     1      1
       1.0  2.1   3.1   4.0   4.9    6.0
       1.0  4.41  9.61  16.0  24.01  36.0 ]

and y = [6.1, 12.6, 21.1, 30.2, 40.9, 55.5]ᵀ.

Using a computer, we can find that the solution for the system a = (XᵀX)⁻¹Xᵀy is

a = [1.63175, 3.38382, 0.93608]ᵀ

The data do not justify retaining so many decimal places, so we take the best-fitting quadratic curve to be y = 1.63 + 3.38t + 0.94t². The results are shown in Figure 7.3.4.

Figure 7.3.4  The data points and the best-fitting curve from Example 1.

EXAMPLE 2
Find a and b to obtain the best-fitting equation of the form y = at² + bt for the following data:

t:  …
y:  …

Solution: Using the method above, we observe that we want the error vector e = y − at² − bt to be equal to perp_S y. In particular, e must be orthogonal to t² and t. Therefore,

t² · e = t² · (y − at² − bt) = 0
t · e = t · (y − at² − bt) = 0

In this case, we want to pick X to be of the form X = [t²  t]. Taking a = [a, b]ᵀ and solving the normal equations then gives a = 1 and b = −1. So, y = t² − t is the equation of best fit for the given data.

Overdetermined Systems
The problem of finding the best-fitting curve can be viewed as a special case of the problem of "solving" an overdetermined system. Suppose that Ax = b is a system of p equations in q variables, where p is greater than q. With more equations than variables, we expect the system to be inconsistent unless b has some special properties. If the system is inconsistent, we say that the system is overdetermined; that is, there are too many equations to be satisfied.

Note that the problem in Example 1 of finding the best-fitting quadratic curve was of this form: we needed to solve Xa = y for the three variables a, b, and c, where there were n equations. Thus, for n > 3, this is an overdetermined system.

If there is no x such that Ax = b, the next best "solution" is to find a vector x that minimizes the "error" ||Ax − b||. However, Ax = x1a1 + ··· + xqaq, which is a vector in the columnspace of A. Therefore, our challenge is to find x such that Ax is the point in the columnspace of A that is closest to b. By the Approximation Theorem, we know that this vector is proj_{Col(A)} b. Thus, to find a vector x that minimizes the "error"


||Ax − b||, we want to solve the consistent system Ax = proj_{Col(A)} b. Using an argument analogous to that in the special case above, it can be shown that this vector x must also satisfy the normal system

AᵀAx = Aᵀb

EXAMPLE 3
Verify that the following system Ax = b is inconsistent and then determine the vector x that minimizes ||Ax − b||:

3x1 − x2 = 4
x1 + 2x2 = 0
2x1 + x2 = 1

Solution: Write the augmented matrix [A | b] and row reduce:

[ 3  −1 | 4 ]      [ 1  0 | 0 ]
[ 1   2 | 0 ]  ~   [ 0  1 | 0 ]
[ 2   1 | 1 ]      [ 0  0 | 1 ]

The last row indicates that the system is inconsistent. The x that minimizes ||Ax − b|| must satisfy AᵀAx = Aᵀb. Solving for x in this system gives

x = (AᵀA)⁻¹Aᵀb = [ 14  1 ]⁻¹ [ 14 ]  =  (1/83) [ 6   −1 ] [ 14 ]  =  [ 87/83  ]
                 [ 1   6 ]   [ −3 ]           [ −1   14 ] [ −3 ]     [ −56/83 ]

So, x = [87/83, −56/83]ᵀ is the vector that minimizes ||Ax − b||.
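Assuming the data of Example 3 (A with rows (3, −1), (1, 2), (2, 1) and b = [4, 0, 1]ᵀ), the least-squares solution can be checked exactly with rational arithmetic and the 2 × 2 adjugate formula:

```python
from fractions import Fraction as F

A = [[F(3), F(-1)], [F(1), F(2)], [F(2), F(1)]]
b = [F(4), F(0), F(1)]

# form the normal system A^T A x = A^T b
AtA = [[sum(A[i][r] * A[i][c] for i in range(3)) for c in range(2)]
       for r in range(2)]
Atb = [sum(A[i][r] * b[i] for i in range(3)) for r in range(2)]

# invert the 2x2 matrix A^T A via the adjugate formula
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
x = [(AtA[1][1] * Atb[0] - AtA[0][1] * Atb[1]) / det,
     (AtA[0][0] * Atb[1] - AtA[1][0] * Atb[0]) / det]

# matches the values in Example 3
assert AtA == [[14, 1], [1, 6]] and Atb == [14, -3] and det == 83
assert x == [F(87, 83), F(-56, 83)]
```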


PROBLEMS 7.3
Practice Problems
A1 Find a and b to obtain the best-fitting equation of the form y = a + bt for the given data. Make a graph showing the data and the best-fitting line.
(a) …   (b) …

A2 Find the best-fitting equation of the given form for each set of data.
(a) y = at² + bt for the data …
(b) y = a + bt for the data …

A3 Verify that the following systems Ax = b are inconsistent and then determine for each system the vector x that minimizes ||Ax − b||.
(a) x1 + 2x2 = …,  2x1 − 3x2 = …,  x1 − 12x2 = …
(b) 2x1 + 3x2 = …,  3x1 − 2x2 = …,  x1 − 6x2 = …

Homework Problems
B1 Find a and b to obtain the best-fitting equation of the form y = a + bt for the given data. Make a graph showing the data and the best-fitting line.
(a) …   (b) …

B2 Find a, b, and c to obtain the best-fitting equation of the form y = a + bt + ct² for the given data. Make a graph showing the data and the best-fitting curve.

B3 Find the best-fitting equation of the given form for each set of data.
(a) y = at² + bt for the data …
(b) y = a + bt + ct² for the data …

B4 Verify that the following systems Ax = b are inconsistent and then determine for each system the vector x that minimizes ||Ax − b||.
(a) x1 − x2 = …,  3x1 + 2x2 = …,  x1 − 6x2 = …
(b) x1 + x2 = …,  x1 − x2 = …,  x1 + 3x2 = …

Computer Problems
C1 Find a, b, and c to obtain the best-fitting equation of the form y = a + bt + ct² for the following data. Make a graph showing the data and the curve.

t:  0.0  1.1  1.9  3.0  4.1  5.2
y:  4.0  3.6  4.1  5.6  7.9  11.8


Conceptual Problems
D1 Let X = [1  t  t²], where t = [t1, ..., tn]ᵀ and t² = [t1², ..., tn²]ᵀ. Show that

XᵀX = [ n      Σti    Σti²
        Σti    Σti²   Σti³
        Σti²   Σti³   Σti⁴ ]

where each sum runs from i = 1 to n.

D2 Let X = [1  t  t²  ···  t^m], where t^j = [t1^j, ..., tn^j]ᵀ. Assume that at least m + 1 of the numbers t1, ..., tn are distinct.
(a) Prove that the columns of X are linearly independent by showing that the only solution to c0·1 + c1·t + ··· + cm·t^m = 0 is c0 = ··· = cm = 0. (Hint: Let p(t) = c0 + c1t + ··· + cmt^m and show that if c0·1 + c1·t + ··· + cm·t^m = 0, then p(t) must be the zero polynomial.)
(b) Use the result from part (a) to prove that XᵀX is invertible. (Hint: Show that the only solution to XᵀXv = 0 is v = 0 by considering ||Xv||².)

7.4 Inner Product Spaces

In Sections 1.3, 1.4, and 7.2, we saw that the dot product plays an essential role in the discussion of lengths, distances, and projections in ℝⁿ. In Chapter 4, we saw that the ideas of vector spaces and linear mappings apply to more general sets, including some function spaces. If ideas such as projections are going to be used in these more general spaces, it will be necessary to have a generalization of the dot product to general vector spaces.

Inner Product Spaces

Consideration of the most essential properties of the dot product in Section 1.3 leads to the following definition.

Definition
Inner Product
Inner Product Space

Let V be a vector space over ℝ. An inner product on V is a function ⟨ , ⟩ : V × V → ℝ such that

(1) ⟨v, v⟩ ≥ 0 for all v ∈ V, and ⟨v, v⟩ = 0 if and only if v = 0.  (positive definite)
(2) ⟨v, w⟩ = ⟨w, v⟩ for all v, w ∈ V.  (symmetric)
(3) ⟨v, sw + tz⟩ = s⟨v, w⟩ + t⟨v, z⟩ for all s, t ∈ ℝ and v, w, z ∈ V.  (bilinear)

A vector space V with an inner product is called an inner product space.

Remark
Every non-trivial finite-dimensional vector space V has in fact infinitely many different
inner products. When we talk about an inner product space, we mean the vector space
and one particular inner product.

EXAMPLE 1
The dot product on ℝⁿ is an inner product on ℝⁿ.

EXAMPLE 2

Show that the function ⟨ , ⟩ defined by

⟨x, y⟩ = 2x1y1 + 3x2y2

is an inner product on ℝ².

Solution: We verify that ⟨ , ⟩ satisfies the three properties of an inner product:

1. ⟨x, x⟩ = 2x1² + 3x2² ≥ 0, and ⟨x, x⟩ = 0 if and only if x = 0. Thus, it is positive definite.

2. ⟨x, y⟩ = 2x1y1 + 3x2y2 = 2y1x1 + 3y2x2 = ⟨y, x⟩. Thus, it is symmetric.

3. For any x, w, z ∈ ℝ² and s, t ∈ ℝ,

⟨x, sw + tz⟩ = 2x1(sw1 + tz1) + 3x2(sw2 + tz2)
             = s(2x1w1 + 3x2w2) + t(2x1z1 + 3x2z2)
             = s⟨x, w⟩ + t⟨x, z⟩

So, ⟨ , ⟩ is bilinear. Thus, ⟨ , ⟩ is an inner product on ℝ².
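A quick numerical spot-check of the three properties for this particular inner product, using a few fixed vectors of my own choosing (a sanity check, not a proof):

```python
def ip(x, y):
    # the weighted inner product <x, y> = 2*x1*y1 + 3*x2*y2 on R^2
    return 2 * x[0] * y[0] + 3 * x[1] * y[1]

x, w, z = [1, -2], [3, 4], [-1, 5]
s, t = 2, -3

assert ip(x, x) > 0                          # positive definite (x != 0)
assert ip(x, w) == ip(w, x)                  # symmetric
lhs = ip(x, [s * w[0] + t * z[0], s * w[1] + t * z[1]])
assert lhs == s * ip(x, w) + t * ip(x, z)    # bilinear in the second slot
```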

Remark
Although there are infinitely many inner products on ℝⁿ, it can be proven that for any inner product on ℝⁿ there exists an orthonormal basis such that the inner product is just the dot product on ℝⁿ with respect to this basis. See Problem D1.

EXAMPLE 3
Verify that ⟨p, q⟩ = p(0)q(0) + p(1)q(1) + p(2)q(2) defines an inner product on the vector space P2 and determine ⟨1 + x + x², 2 − 3x²⟩.

Solution: We first verify that ⟨ , ⟩ satisfies the three properties of an inner product:

(1) ⟨p, p⟩ = (p(0))² + (p(1))² + (p(2))² ≥ 0 for all p ∈ P2. Moreover, ⟨p, p⟩ = 0 if and only if p(0) = p(1) = p(2) = 0, and the only p ∈ P2 that is zero for three values of x is the zero polynomial, p(x) = 0. Thus ⟨ , ⟩ is positive definite.

(2) ⟨p, q⟩ = p(0)q(0) + p(1)q(1) + p(2)q(2) = q(0)p(0) + q(1)p(1) + q(2)p(2) = ⟨q, p⟩. So, ⟨ , ⟩ is symmetric.

(3) For any p, q, r ∈ P2 and s, t ∈ ℝ,

⟨p, sq + tr⟩ = p(0)(sq(0) + tr(0)) + p(1)(sq(1) + tr(1)) + p(2)(sq(2) + tr(2))
             = s(p(0)q(0) + p(1)q(1) + p(2)q(2)) + t(p(0)r(0) + p(1)r(1) + p(2)r(2))
             = s⟨p, q⟩ + t⟨p, r⟩

So, ⟨ , ⟩ is bilinear. Thus, ⟨ , ⟩ is an inner product on P2. That is, P2 is an inner product space under the inner product ⟨ , ⟩.

In this inner product space, we have

⟨1 + x + x², 2 − 3x²⟩ = (1 + 0 + 0)(2 − 0) + (1 + 1 + 1)(2 − 3) + (1 + 2 + 4)(2 − 12)
                      = 2 − 3 − 70 = −71
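The evaluation inner product on P2 is easy to mirror in code, representing a polynomial by its coefficient list; this reproduces the value −71 computed above.

```python
def poly_eval(coeffs, x):
    # coeffs[k] is the coefficient of x**k
    return sum(c * x**k for k, c in enumerate(coeffs))

def ip(p, q):
    # <p, q> = p(0)q(0) + p(1)q(1) + p(2)q(2)
    return sum(poly_eval(p, x) * poly_eval(q, x) for x in (0, 1, 2))

p = [1, 1, 1]    # 1 + x + x^2
q = [2, 0, -3]   # 2 - 3x^2

print(ip(p, q))  # -> -71
```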

EXAMPLE 4
Let tr(C) represent the trace of a matrix (the sum of the diagonal entries). Then, M(2, 2) is an inner product space under the inner product defined by ⟨A, B⟩ = tr(BᵀA). If A = … and B = …, then under this inner product, we have

⟨A, B⟩ = tr(BᵀA) = 4 + 4 = 8

EXERCISE 1
Verify that ⟨A, B⟩ = tr(BᵀA) is an inner product for M(2, 2). Do you notice a relationship between this inner product and the dot product on ℝ⁴?
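For Exercise 1, the key observation can be checked directly: tr(BᵀA) equals the sum of the entrywise products of A and B, which is exactly the dot product of the two matrices read as vectors in ℝ⁴. The matrices below are my own test values.

```python
def trace_ip(A, B):
    # <A, B> = tr(B^T A) for 2x2 matrices stored as nested lists;
    # (B^T A)[i][i] = sum over k of B[k][i] * A[k][i]
    return sum(B[k][i] * A[k][i] for i in range(2) for k in range(2))

def flatten_dot(A, B):
    # dot product of the matrices flattened to vectors in R^4
    return sum(a * b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

A = [[1, 2], [3, -1]]
B = [[0, 4], [-2, 5]]
assert trace_ip(A, B) == flatten_dot(A, B) == -3
```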

Since these properties of the inner product mimic the properties of the dot product, it makes sense to define the norm or length of a vector and the distance between vectors in terms of the inner product.

Definition
Norm
Distance

Let V be an inner product space. Then, for any v ∈ V, we define the norm (or length) of v to be

||v|| = √⟨v, v⟩

For any vectors v, w ∈ V, the distance between v and w is ||v − w||.

Definition
Unit Vector

A vector v in an inner product space V is called a unit vector if ||v|| = 1.

EXAMPLE 5
Find the norm of A = … in M(2, 2) under the inner product ⟨A, B⟩ = tr(BᵀA).

Solution: We have

||A|| = √⟨A, A⟩ = √tr(AᵀA) = √(5 + 1) = √6

EXAMPLE 6
Find the norm of p(x) = 1 − 2x − x² in P2 under the inner product ⟨p, q⟩ = p(0)q(0) + p(1)q(1) + p(2)q(2).

Solution: We have

||1 − 2x − x²|| = √⟨p, p⟩ = √((p(0))² + (p(1))² + (p(2))²)
               = √(1² + (1 − 2 − 1)² + (1 − 4 − 4)²)
               = √54

EXERCISE 2
Find the norm of p(x) = 1 and of q(x) = x in P2 under the inner product ⟨p, q⟩ = p(0)q(0) + p(1)q(1) + p(2)q(2).

EXERCISE 3
Find the norm of q(x) = x in P2 under the inner product ⟨p, q⟩ = p(−1)q(−1) + p(0)q(0) + p(1)q(1).

In Sections 7.1 and 7.2 we saw that the concept of orthogonality is very useful. Hence, we extend this concept to general inner product spaces.

Definition
Orthogonal
Orthonormal

Let V be an inner product space with inner product ⟨ , ⟩. Then two vectors v, w ∈ V are said to be orthogonal if ⟨v, w⟩ = 0. The set of vectors {v1, ..., vk} in V is said to be orthogonal if ⟨vi, vj⟩ = 0 for all i ≠ j. The set is said to be orthonormal if we also have ⟨vi, vi⟩ = 1 for all i.

With this definition, we can now repeat our arguments from Sections 7.1 and 7.2 for coordinates with respect to an orthonormal basis and projections. In particular, we get that if B = {v1, ..., vk} is an orthonormal basis for a subspace S of an inner product space V with inner product ⟨ , ⟩, then for any x ∈ V we have

proj_S x = ⟨x, v1⟩v1 + ··· + ⟨x, vk⟩vk

Additionally, the Gram-Schmidt Procedure is also identical. If we have a basis {w1, ..., wn} for an inner product space V with inner product ⟨ , ⟩, then the set {v1, ..., vn} defined by

v1 = w1,   vi = wi − (⟨wi, v1⟩/⟨v1, v1⟩)v1 − ··· − (⟨wi, vi−1⟩/⟨vi−1, vi−1⟩)vi−1   for i = 2, ..., n

is an orthogonal basis for V.


EXAMPLE 7
Use the Gram-Schmidt Procedure to determine an orthogonal basis for S = Span{1, x} in P2 under the inner product ⟨p, q⟩ = p(0)q(0) + p(1)q(1) + p(2)q(2). Use this basis to determine proj_S x².

Solution: Denote the basis vectors of S by p1(x) = 1 and p2(x) = x. We want to find an orthogonal basis {q1(x), q2(x)} for S. By using the Gram-Schmidt Procedure, we take q1(x) = p1(x) = 1 and then let

q2(x) = p2(x) − (⟨p2, q1⟩/||q1||²)q1(x) = x − (0 + 1 + 2)/3 = x − 1

Therefore, our orthogonal basis is {q1, q2} = {1, x − 1}.

Hence, we have

proj_S x² = (⟨x², 1⟩/||1||²)·1 + (⟨x², x − 1⟩/||x − 1||²)(x − 1)
          = ((0·1 + 1·1 + 4·1)/(1² + 1² + 1²))·1 + ((0·(−1) + 1·0 + 4·1)/((−1)² + 0² + 1²))(x − 1)
          = 5/3 + 2(x − 1) = 2x − 1/3
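Example 7's projection can be double-checked by verifying that the residual x² − proj_S x² is orthogonal, in this evaluation inner product, to both 1 and x − 1 (exact arithmetic via `fractions`):

```python
from fractions import Fraction as F

def ip(p, q):
    # <p, q> = p(0)q(0) + p(1)q(1) + p(2)q(2), with polynomials as callables
    return sum(p(x) * q(x) for x in (0, 1, 2))

proj = lambda x: 2 * x - F(1, 3)      # the projection found in Example 7
resid = lambda x: x * x - proj(x)     # x^2 - proj_S x^2

assert ip(resid, lambda x: 1) == 0        # orthogonal to q1 = 1
assert ip(resid, lambda x: x - 1) == 0    # orthogonal to q2 = x - 1
```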

PROBLEMS 7.4
Practice Problems
A1 On P2, define the inner product ⟨p, q⟩ = p(0)q(0) + p(1)q(1) + p(2)q(2). Calculate the following.
(a) ⟨x − 2x², 1 + 3x⟩
(b) ⟨2 − x + 3x², 4 − 3x²⟩
(c) ||3 − 2x + x²||
(d) ||9 + 9x + 9x²||

A2 In each of the following cases, determine whether ⟨ , ⟩ defines an inner product on P2.
(a) ⟨p, q⟩ = p(0)q(0) + p(1)q(1)
(b) ⟨p, q⟩ = |p(0)q(0)| + |p(1)q(1)| + |p(2)q(2)|
(c) ⟨p, q⟩ = p(−1)q(−1) + 2p(0)q(0) + p(1)q(1)
(d) ⟨p, q⟩ = p(−1)q(1) + 2p(0)q(0) + p(1)q(−1)

A3 On M(2, 2) define the inner product ⟨A, B⟩ = tr(BᵀA).
(i) Use the Gram-Schmidt Procedure to determine an orthonormal basis for the following subspaces of M(2, 2).
(a) S = Span{…}
(b) S = Span{…}
(ii) Use the orthonormal basis you found in part (i) to determine proj_S A, where A = ….

A4 Define the inner product ⟨x, y⟩ = 2x1y1 + x2y2 + 3x3y3 on ℝ³.
(a) Use the Gram-Schmidt Procedure to determine an orthogonal basis for Span{…}.
(b) Determine the coordinates of x = … with respect to the orthogonal basis you found in (a).

A5 Let {v1, ..., vk} be an orthogonal set in an inner product space V. Prove that …

Homework Problems
B1 On P2, define the inner product ⟨p, q⟩ = p(0)q(0) + p(1)q(1) + p(2)q(2). Calculate the following.
(a) ⟨1 − 3x², 1 + x + 2x²⟩
(b) ⟨3 − x, −2 − x − x²⟩
(c) ||1 − 5x + 2x²||
(d) ||73x + 73x²||

B2 In each of the following cases, determine whether ⟨ , ⟩ defines an inner product on P3.
(a) ⟨p, q⟩ = p(−1)q(−1) + p(0)q(0) + p(1)q(1)
(b) ⟨p, q⟩ = p(0)q(0) + p(1)q(1) + p(3)q(3) + p(4)q(4)
(c) ⟨p, q⟩ = p(−1)q(0) + p(1)q(1) + p(2)q(2) + p(3)q(3)

B3 On P2 define the inner product ⟨p, q⟩ = p(−1)q(−1) + p(0)q(0) + p(1)q(1).
(i) Use the Gram-Schmidt Procedure to determine an orthogonal basis for the following subspaces of P2.
(a) S = Span{1, x − x²}
(b) S = Span{1 + x², x − x²}
(ii) Use the orthogonal basis you found in part (i) to determine proj_S(1 + x + x²).

B4 Define the inner product ⟨x, y⟩ = x1y1 + 3x2y2 + 2x3y3 on ℝ³.
(a) Use the Gram-Schmidt Procedure to determine an orthogonal basis for S = Span{…}.
(b) Determine the coordinates of x = … with respect to the orthogonal basis you found in (a).

Conceptual Problems
D1 (a) Let {e1, e2} be the standard basis for ℝ² and suppose that ⟨ , ⟩ is an inner product on ℝ². Show that if x, y ∈ ℝ²,

⟨x, y⟩ = x1y1⟨e1, e1⟩ + x1y2⟨e1, e2⟩ + x2y1⟨e2, e1⟩ + x2y2⟨e2, e2⟩

(b) For the inner product in part (a), define a matrix G, called the standard matrix of the inner product ⟨ , ⟩, by gij = ⟨ei, ej⟩ for i, j = 1, 2. Show that G is symmetric and that

⟨x, y⟩ = Σ gij xi yj = xᵀGy

(with the sum over i, j = 1, 2).

(c) Apply the Gram-Schmidt Procedure, using the inner product ⟨ , ⟩ and the corresponding norm, to produce an orthonormal basis B = {v1, v2} for ℝ².

(d) Define G̃, the B-matrix of the inner product ⟨ , ⟩, by g̃ij = ⟨vi, vj⟩ for i, j = 1, 2. Show that G̃ = I and that for x = x̃1v1 + x̃2v2 and y = ỹ1v1 + ỹ2v2,

⟨x, y⟩ = x̃1ỹ1 + x̃2ỹ2

Conclusion: For an arbitrary inner product ⟨ , ⟩ on ℝ², there exists a basis for ℝ² that is orthonormal with respect to this inner product. Moreover, when x and y are expressed in terms of this basis, ⟨x, y⟩ looks just like the standard inner product in ℝ².

Chapter 7   Orthonormal Bases

This argument generalizes in a straightforward way to ℝⁿ; see Problem 8.2.D6.

D2 Suppose that {v1, v2, v3} is a basis for an inner product space V with inner product ⟨ , ⟩. Define a matrix G by gij = ⟨vi, vj⟩.
(a) Prove that G is symmetric (Gᵀ = G).
(b) Show that if [x]_B = [x̃1; x̃2; x̃3] and [y]_B = [ỹ1; ỹ2; ỹ3], then ⟨x, y⟩ = [x]_Bᵀ G [y]_B.
(c) Determine the matrix G of the inner product ⟨p, q⟩ = p(0)q(0) + p(1)q(1) + p(2)q(2) for P2 with respect to the basis {1, x, x²}.

7.5 Fourier Series

The Inner Product ∫_a^b f(x)g(x) dx

Let C[a, b] be the space of functions f : ℝ → ℝ that are continuous on the interval [a, b]. Then, for any f, g ∈ C[a, b], we have that the product fg is also continuous on [a, b] and hence integrable on [a, b]. Therefore, it makes sense to define an inner product as follows.

The inner product ⟨ , ⟩ is defined on C[a, b] by

⟨f, g⟩ = ∫_a^b f(x)g(x) dx
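Because this inner product is an integral, it can be approximated on a computer with any standard quadrature rule. The sketch below (an illustration, not part of the text; the function names are invented here) uses the composite trapezoid rule:

```python
import math

def inner(f, g, a, b, n=2000):
    """Approximate <f, g> = integral from a to b of f(x)g(x) dx
    by the composite trapezoid rule with n subintervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) * g(a) + f(b) * g(b))
    for i in range(1, n):
        x = a + i * h
        total += f(x) * g(x)
    return total * h

def norm(f, a, b):
    """||f|| = <f, f>^(1/2), the norm induced by the inner product."""
    return math.sqrt(inner(f, f, a, b))
```

For example, on [-π, π] this gives ⟨sin, cos⟩ ≈ 0 and ‖sin‖² ≈ π, matching the exact values computed later in this section.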

The three properties of an inner product are satisfied because

(1) ⟨f, f⟩ = ∫_a^b f(x)f(x) dx ≥ 0 for all f ∈ C[a, b], and ⟨f, f⟩ = ∫_a^b f(x)f(x) dx = 0 if and only if f(x) = 0 for all x ∈ [a, b].

(2) ⟨f, g⟩ = ∫_a^b f(x)g(x) dx = ∫_a^b g(x)f(x) dx = ⟨g, f⟩

(3) ⟨f, sg + th⟩ = ∫_a^b f(x)(sg(x) + th(x)) dx = s ∫_a^b f(x)g(x) dx + t ∫_a^b f(x)h(x) dx = s⟨f, g⟩ + t⟨f, h⟩ for any s, t ∈ ℝ

Since an integral is the limit of sums, this inner product, defined as the integral of the product of the values of f and g at each x, is a fairly natural generalization of the dot product in ℝⁿ, defined as a sum of the product of the i-th components of x and y for each i.
One interesting consequence is that the norm of a function f with respect to this inner product is

‖f‖ = ( ∫_a^b f²(x) dx )^(1/2)


Intuitively, this is quite satisfactory as a measure of how far the function is from the zero function.
One of the most interesting and important applications of this inner product involves Fourier series.

Fourier Series
Let CP2π denote the space of continuous real-valued functions of a real variable that are periodic with period 2π. Such functions satisfy f(x + 2π) = f(x) for all x. Examples of such functions are f(x) = c for any constant c, cos x, sin x, cos 2x, sin 3x, etc. (Note that the function cos 2x is periodic with period 2π because cos(2(x + 2π)) = cos 2x. However, its "fundamental (smallest) period" is π.) In some electrical engineering applications, it is of interest to consider a signal described by functions such as the function

f(x) = { -π - x   if -π ≤ x ≤ -π/2
       { x        if -π/2 < x ≤ π/2
       { π - x    if π/2 < x ≤ π

This function is shown in Figure 7.5.5.

Figure 7.5.5   A continuous periodic function.

In the early nineteenth century, while studying the problem of the conduction of heat, Fourier had the brilliant idea of trying to represent an arbitrary function in CP2π as a linear combination of the set of functions

{1, cos x, sin x, cos 2x, sin 2x, ..., cos nx, sin nx, ...}

This idea developed into Fourier analysis, which is now one of the essential tools in quantum physics, communication engineering, and many other areas.
We formulate the questions and ideas as follows. (The proofs of the statements are discussed below.)
(i) For any n, the set of functions {1, cos x, sin x, cos 2x, sin 2x, ..., cos nx, sin nx} is an orthogonal set with respect to the inner product

⟨f, g⟩ = ∫_{-π}^{π} f(x)g(x) dx

The set is therefore an orthogonal basis for the subspace of CP2π that it spans. This subspace will be denoted CP2π,n.

(ii) Given an arbitrary function f in CP2π, how well can it be approximated by a function in CP2π,n? We expect from our experience with distance and subspaces that the closest approximation to f in CP2π,n is proj_{CP2π,n} f. The coefficients for Fourier's representation of f by a linear combination of {1, cos x, sin x, ..., cos nx, sin nx, ...}, called Fourier coefficients, are found by considering this projection.
(iii) We hope that the approximation improves as n gets larger. Since the distance from f to the n-th approximation proj_{CP2π,n} f is ‖perp_{CP2π,n} f‖, to test if the approximation improves, we must examine whether ‖perp_{CP2π,n} f‖ → 0 as n → ∞.
Let us consider these statements in more detail.
(i) The orthogonality of constants, sines, and cosines with respect to the inner product ⟨f, g⟩ = ∫_{-π}^{π} f(x)g(x) dx
These results follow by standard trigonometric integrals and trigonometric identities:

∫_{-π}^{π} sin nx dx = -(1/n) cos nx |_{-π}^{π} = 0

∫_{-π}^{π} cos nx dx = (1/n) sin nx |_{-π}^{π} = 0

∫_{-π}^{π} cos mx sin nx dx = ∫_{-π}^{π} (1/2)(sin(m + n)x - sin(m - n)x) dx = 0

and for m ≠ n,

∫_{-π}^{π} cos mx cos nx dx = ∫_{-π}^{π} (1/2)(cos(m + n)x + cos(m - n)x) dx = 0

∫_{-π}^{π} sin mx sin nx dx = ∫_{-π}^{π} (1/2)(cos(m - n)x - cos(m + n)x) dx = 0

Hence, the set {1, cos x, sin x, ..., cos nx, sin nx} is orthogonal. To use this as a basis for projection arguments, it is necessary to calculate ‖1‖², ‖cos mx‖², and ‖sin mx‖²:

‖1‖² = ∫_{-π}^{π} 1 dx = 2π

‖cos mx‖² = ∫_{-π}^{π} cos² mx dx = ∫_{-π}^{π} (1/2)(1 + cos 2mx) dx = π

‖sin mx‖² = ∫_{-π}^{π} sin² mx dx = ∫_{-π}^{π} (1/2)(1 - cos 2mx) dx = π
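These orthogonality relations and norms are easy to spot-check numerically. A minimal sketch (trapezoid rule over one full period; the helper name is invented here):

```python
import math

def integrate(h, a, b, n=4000):
    """Composite trapezoid approximation of the integral of h over [a, b]."""
    w = (b - a) / n
    s = 0.5 * (h(a) + h(b)) + sum(h(a + i * w) for i in range(1, n))
    return s * w

pi = math.pi
# Orthogonality: <cos 2x, sin 3x> = 0 and <cos x, cos 2x> = 0 on [-pi, pi].
o1 = integrate(lambda x: math.cos(2 * x) * math.sin(3 * x), -pi, pi)
o2 = integrate(lambda x: math.cos(x) * math.cos(2 * x), -pi, pi)
# Norms: ||1||^2 = 2*pi, ||cos 2x||^2 = pi, ||sin 2x||^2 = pi.
n1 = integrate(lambda x: 1.0, -pi, pi)
n2 = integrate(lambda x: math.cos(2 * x) ** 2, -pi, pi)
n3 = integrate(lambda x: math.sin(2 * x) ** 2, -pi, pi)
```

For periodic integrands the trapezoid rule over a full period is extremely accurate, so these values land on 0, 2π, π, and π to many digits.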

(ii) The Fourier coefficients of f as coordinates of a projection with respect to the orthogonal basis for CP2π,n
The procedure for finding the closest approximation proj_{CP2π,n} f in CP2π,n to an arbitrary function f in CP2π is parallel to the procedure in Sections 7.2 and 7.4. That is, we use the projection formula, given an orthogonal basis {v1, ..., vk} for a subspace S:

proj_S x = (⟨x, v1⟩ / ‖v1‖²) v1 + ⋯ + (⟨x, vk⟩ / ‖vk‖²) vk


There is a standard way to label the coefficients of this linear combination:

proj_{CP2π,n} f = (a0/2)·1 + a1 cos x + a2 cos 2x + ⋯ + an cos nx
               + b1 sin x + b2 sin 2x + ⋯ + bn sin nx

The factor 1/2 in the coefficient of 1 appears here because ‖1‖² is equal to 2π, while the other basis vectors have length squared equal to π. Thus, we have

a0/2 = ⟨f, 1⟩ / ‖1‖², so a0 = (1/π) ∫_{-π}^{π} f(x) dx

am = ⟨f, cos mx⟩ / ‖cos mx‖² = (1/π) ∫_{-π}^{π} f(x) cos mx dx

bm = ⟨f, sin mx⟩ / ‖sin mx‖² = (1/π) ∫_{-π}^{π} f(x) sin mx dx

(iii) Is proj_{CP2π,n} f equal to f in the limit as n → ∞?

As n → ∞, the sum becomes an infinite series called the Fourier series for f. The question being asked is a question about the convergence of series, and in fact about series of functions. Such questions are raised in calculus (or analysis) and are beyond the scope of this book. (The short answer is "yes, the series converges to f provided that f is continuous." The problem becomes more complicated if f is allowed to be piecewise continuous.) Questions about convergence are important in physical and engineering applications.

EXAMPLE 1
Determine proj_{CP2π,3} f for the function f(x) defined by f(x) = |x| if -π ≤ x ≤ π and f(x + 2π) = f(x) for all x.

Solution: We have

a0 = (1/π) ∫_{-π}^{π} |x| dx = π

a1 = (1/π) ∫_{-π}^{π} |x| cos x dx = -4/π

a2 = (1/π) ∫_{-π}^{π} |x| cos 2x dx = 0

a3 = (1/π) ∫_{-π}^{π} |x| cos 3x dx = -4/(9π)

b1 = (1/π) ∫_{-π}^{π} |x| sin x dx = 0

b2 = (1/π) ∫_{-π}^{π} |x| sin 2x dx = 0

b3 = (1/π) ∫_{-π}^{π} |x| sin 3x dx = 0

Hence, proj_{CP2π,3} f = π/2 - (4/π) cos x - (4/(9π)) cos 3x. The results are shown in Figure 7.5.6.
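The coefficients of Example 1 can be checked numerically. The sketch below (an illustration; the function name is invented here) approximates am and bm with a Riemann sum over one full period, which is very accurate for periodic integrands:

```python
import math

def fourier_coeffs(f, n, N=4000):
    """Approximate the Fourier coefficients of a 2π-periodic f on [-π, π]:
    a_m = (1/π) ∫ f(x) cos(mx) dx,  b_m = (1/π) ∫ f(x) sin(mx) dx,
    using a left-endpoint Riemann sum over one full period."""
    h = 2 * math.pi / N
    xs = [-math.pi + i * h for i in range(N)]
    a = [h / math.pi * sum(f(x) * math.cos(m * x) for x in xs)
         for m in range(n + 1)]
    b = [0.0] + [h / math.pi * sum(f(x) * math.sin(m * x) for x in xs)
                 for m in range(1, n + 1)]
    return a, b

# For f(x) = |x|: a0 = π, a1 = -4/π, a2 = 0, a3 = -4/(9π), all b_m = 0.
```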

Figure 7.5.6   Graphs of proj_{CP2π,1} f and proj_{CP2π,3} f compared to the graph of f(x).

EXAMPLE 2
Determine proj_{CP2π,3} f for the function f(x) defined by

f(x) = { -π - x   if -π ≤ x ≤ -π/2
       { x        if -π/2 < x ≤ π/2
       { π - x    if π/2 < x ≤ π

Solution: We have

a0 = (1/π) ∫_{-π}^{π} f dx = 0

a1 = (1/π) ∫_{-π}^{π} f cos x dx = 0

a2 = (1/π) ∫_{-π}^{π} f cos 2x dx = 0

a3 = (1/π) ∫_{-π}^{π} f cos 3x dx = 0

b1 = (1/π) ∫_{-π}^{π} f sin x dx = 4/π

b2 = (1/π) ∫_{-π}^{π} f sin 2x dx = 0

b3 = (1/π) ∫_{-π}^{π} f sin 3x dx = -4/(9π)

Hence, proj_{CP2π,3} f = (4/π) sin x - (4/(9π)) sin 3x. The results are shown in Figure 7.5.7.

Figure 7.5.7   Graphs of proj_{CP2π,1} f and proj_{CP2π,3} f compared to the graph of f(x).

PROBLEMS 7.5
Computer Problems
C1 Use a computer to calculate proj_{CP2π,n} f for n = 3, 7, and 11 for each of the following functions. Graph the function f and each of the projections on the same plot.
(a) f(x) = x², -π ≤ x ≤ π
(b) f(x) = eˣ, -π ≤ x ≤ π
(c) f(x) = { 0 if -π ≤ x < 0
           { 1 if 0 ≤ x ≤ π
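For the step function in C1 (c) the coefficients can be computed by hand (a0 = 1, am = 0 for m ≥ 1, bm = (1 - cos mπ)/(mπ), which is 2/(mπ) for odd m and 0 for even m). A minimal sketch of its partial sums, useful for producing the requested plots:

```python
import math

def step_partial_sum(x, n):
    """n-th Fourier partial sum of the 2π-periodic step function:
    f = 0 on [-π, 0), f = 1 on [0, π].  Uses the hand-computed coefficients
    a0 = 1, a_m = 0 (m >= 1), b_m = (1 - cos mπ)/(mπ)."""
    s = 0.5  # a0 / 2
    for m in range(1, n + 1):
        b_m = (1 - math.cos(m * math.pi)) / (m * math.pi)
        s += b_m * math.sin(m * x)
    return s
```

At the jump x = 0 every partial sum equals the average value 1/2, and away from the jump the sums approach the function values as n grows.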

Suggestions for Student Review

1 What is meant by an orthogonal set of vectors in ℝⁿ? What is the difference between an orthogonal basis and an orthonormal basis? (Section 7.1)

2 Why is it easier to determine coordinates with respect to an orthonormal basis than with respect to an arbitrary basis? What are some special features of the change of coordinates matrix from an orthonormal basis to the standard basis? What is an orthogonal matrix? (Section 7.1)

3 Does every subspace of ℝⁿ have an orthonormal basis? What about the zero subspace? How do you find an orthonormal basis? Describe the Gram-Schmidt Procedure. (Section 7.2)

4 What are the essential properties of a projection onto a subspace of ℝⁿ? How do you calculate a projection onto a subspace? (Section 7.2)

5 Outline how to use the ideas of orthogonality to find the best-fitting line for a given set of data points {(ti, yi) | i = 1, ..., n}. (Section 7.3)

6 What are the essential properties of an inner product? Give an example of an inner product on P2. Give an example of an inner product on M(2, 3). (Section 7.4)


Chapter Quiz
E1 Determine whether the following sets are orthogonal, and which are orthonormal. Show how you decide.
(a) { [ ], [ ] }
(b) { [ ], [ ] }
(c) { [ ], [ ] }

E2 Consider the orthonormal set B = {(1/√3)[ ], (1/√3)[ ], (1/√3)[ ]}. Let S be the subspace of ℝⁿ spanned by B. Given that x = [ ] is a vector in S, use the orthonormality of B to determine the coordinates of x with respect to B.

E3 (a) Prove that if P is an orthogonal matrix, then det P = ±1.
(b) Prove that if P and R are n × n orthogonal matrices, then so is PR.

E4 Let S be the subspace of ℝ⁴ defined by S = Span{[ ], [ ], [ ]}.
(a) Apply the Gram-Schmidt Procedure to the given spanning set to produce an orthonormal basis for S.
(b) Determine the point in S closest to x = [ ].

E5 Determine whether each of the following functions ⟨ , ⟩ defines an inner product on M(2, 2). Explain how you decide in each case.
(a) ⟨A, B⟩ = det(AB)
(b) ⟨A, B⟩ = a11b11 + 2a12b12 + 2a21b21 + a22b22
Further Problems

F1 (Isometries of ℝ³)
(a) A linear mapping L is an isometry of ℝ³ if ‖L(x)‖ = ‖x‖ for every x ∈ ℝ³. Prove that an isometry preserves dot products and angles as well as lengths.
(b) Show that L is an isometry if and only if the standard matrix of L is orthogonal. (Hint: See Problem 3.F5 and Problem 7.1.D3.)
(c) Explain why an isometry of ℝ³ must have one or three real characteristic roots, counting multiplicity. Based on Problem 7.1.D3 (b), these must be ±1.
(d) Let A be the standard matrix of L. Suppose that 1 is an eigenvalue of A with eigenvector u. Let v and w be vectors such that {u, v, w} is an orthonormal basis for ℝ³ and let P = [u v w]. Show that

PᵀAP = [1  O12; O21  A*]

where the right-hand side is a partitioned matrix, with Oij being the i × j zero matrix, and with A* being a 2 × 2 orthogonal matrix. Moreover, show that the characteristic roots of A are 1 and the characteristic roots of A*. Note that an analogous form can be obtained for PᵀAP in the case where one eigenvalue is -1.
(e) Use Problem 3.F5 to analyze the A* of part (d) and explain why every isometry of ℝ³ is the identity mapping, a reflection, a composition of reflections, a rotation, or a composition of a reflection and a rotation.

F2 A linear mapping L : ℝⁿ → ℝⁿ is called an involution if L ∘ L = Id. In terms of its standard matrix, this means that A² = I. Prove that any two of the following imply the third.
(a) A is the matrix of an involution.
(b) A is symmetric.
(c) A is an isometry.

F3 The sum S + T of subspaces of a finite dimensional vector space V is defined in the Chapter 4 Further Problems. Prove that (S + T)⊥ = S⊥ ∩ T⊥.

F4 A problem of finding a sequence of approximations to some vector (or function) v in a possibly infinite dimensional inner product space V can often be described by requiring the i-th approximation to be the closest vector to v in some finite-dimensional subspace Si of V, where the subspaces are required to satisfy

S1 ⊆ S2 ⊆ ⋯ ⊆ Si ⊆ ⋯

The i-th approximation is then proj_{Si} v. Prove that the approximations improve as i increases in the sense that

‖v - proj_{S(i+1)} v‖ ≤ ‖v - proj_{Si} v‖

F5 QR-factorization. Suppose that A is an invertible n × n matrix. Prove that A can be written as the product of an orthogonal matrix Q and an upper triangular matrix R: A = QR. (Hint: Apply the Gram-Schmidt Procedure to the columns of A, starting at the first column.)
Note that this QR-factorization is important in a numerical procedure for determining eigenvalues of symmetric matrices.

MyMathLab

Go to MyMathLab at www.mymathlab.com. You can practise many of this chapter's exercises as often as you
want. The guided solutions help you find an answer step by step. You'll find a personalized study plan available to
you, too!
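The construction hinted at in Problem F5 can be carried out numerically. A minimal sketch (classical Gram-Schmidt applied to the columns; assumes A is invertible so no division by zero occurs; the function name is invented here):

```python
def qr_gram_schmidt(A):
    """Factor an invertible n x n matrix A (list of rows) as A = QR by applying
    the Gram-Schmidt Procedure to the columns of A.  Returns Q as a list of
    orthonormal columns and R as an upper-triangular list of rows, so that
    column j of A equals sum over i of R[i][j] * Q[i]."""
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]
    Q, R = [], [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for i in range(len(Q)):
            # Coefficient of column j along the i-th orthonormal direction.
            R[i][j] = sum(Q[i][k] * cols[j][k] for k in range(n))
            v = [v[k] - R[i][j] * Q[i][k] for k in range(n)]
        R[j][j] = sum(x * x for x in v) ** 0.5
        Q.append([x / R[j][j] for x in v])
    return Q, R
```

The diagonal entries of R are the lengths of the perpendicular parts produced by Gram-Schmidt, which is why invertibility of A keeps them nonzero.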

CHAPTER 8

Symmetric Matrices and Quadratic Forms

CHAPTER OUTLINE
8.1 Diagonalization of Symmetric Matrices
8.2 Quadratic Forms
8.3 Graphs of Quadratic Forms
8.4 Applications of Quadratic Forms

Symmetric matrices and quadratic forms arise naturally in many physical applications. For example, the strain matrix describing the deformation of a solid and the inertia tensor of a rotating body are symmetric (Section 8.4). We have also seen that the matrix of a projection is symmetric since a real inner product is symmetric. We now use our work with diagonalization and inner products to explore the theory of symmetric matrices and quadratic forms.

8.1 Diagonalization of Symmetric Matrices

Definition
Symmetric Matrix
A matrix A is symmetric if Aᵀ = A or, equivalently, if aij = aji for all i and j.


In Chapter 6, we saw that diagonalization of a square matrix may not be possible
if some of the roots of its characteristic polynomial are complex or if the geometric
multiplicity of an eigenvalue is less than the algebraic multiplicity of that eigenvalue.
As we will see later in this section, a symmetric matrix can always be diagonalized:
all the roots of its characteristic polynomial are real, and we can always find a basis of
eigenvectors. Before considering why this works, we give three examples.

EXAMPLE 1
Determine the eigenvalues and corresponding eigenvectors of the symmetric matrix A = [0  1; 1  -2]. What is the diagonal matrix corresponding to A, and what is the matrix P that diagonalizes A?

Solution: We have

C(λ) = det(A - λI) = |0-λ  1; 1  -2-λ| = λ² + 2λ - 1

Using the quadratic formula, we find that the roots of the characteristic polynomial are λ1 = -1 + √2 and λ2 = -1 - √2. Thus, the resulting diagonal matrix is

D = [-1+√2  0; 0  -1-√2]

For λ1 = -1 + √2, we have

A - λ1I = [1-√2  1; 1  -1-√2]

Thus, a basis for the eigenspace is {[1+√2; 1]}.
Similarly, for λ2 = -1 - √2, we have

A - λ2I = [1+√2  1; 1  -1+√2]

Thus, a basis for the eigenspace is {[1-√2; 1]}.
Hence, A is diagonalized by P = [1+√2  1-√2; 1  1].

Observe in Example 1 that the columns of P are orthogonal. That is,

[1+√2; 1] · [1-√2; 1] = (1 + √2)(1 - √2) + 1(1) = 1 - 2 + 1 = 0

Hence, if we normalized the columns of P, we would find that A is diagonalized by an orthogonal matrix. (It is important to remember that an orthogonal matrix has orthonormal columns.)
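Normalizing the columns and forming PᵀAP can be checked directly in code. The sketch below assumes the matrix and eigenvectors as read off above (A = [0 1; 1 -2], eigenvalues -1 ± √2); the check itself works for any 2 × 2 symmetric matrix with known eigenvectors:

```python
import math

A = [[0.0, 1.0], [1.0, -2.0]]
l1, l2 = -1 + math.sqrt(2), -1 - math.sqrt(2)

def normalize(v):
    n = math.hypot(v[0], v[1])
    return [v[0] / n, v[1] / n]

# Orthogonal eigenvectors from Example 1, normalized to unit length:
v1 = normalize([1 + math.sqrt(2), 1.0])
v2 = normalize([1 - math.sqrt(2), 1.0])
P = [[v1[0], v2[0]], [v1[1], v2[1]]]   # eigenvectors as columns
Pt = [[P[j][i] for j in range(2)] for i in range(2)]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

D = mul(Pt, mul(A, P))   # P^T A P should be diag(l1, l2)
```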

EXAMPLE 2
Diagonalize the symmetric matrix A = [1  -2  0; -2  1  0; 0  0  4]. Show that A can be diagonalized by an orthogonal matrix.

Solution: We have

C(λ) = |1-λ  -2  0; -2  1-λ  0; 0  0  4-λ| = -(λ - 4)(λ - 3)(λ + 1)

The eigenvalues are λ1 = 4, λ2 = 3, and λ3 = -1, each with algebraic multiplicity 1.
For λ1 = 4, we get

A - λ1I = [-3  -2  0; -2  -3  0; 0  0  0] ~ [1  0  0; 0  1  0; 0  0  0]

Thus, a basis for the eigenspace is {[0; 0; 1]}.
For λ2 = 3, we get

A - λ2I = [-2  -2  0; -2  -2  0; 0  0  1] ~ [1  1  0; 0  0  1; 0  0  0]

Thus, a basis for the eigenspace is {[-1; 1; 0]}.
For λ3 = -1, we get

A - λ3I = [2  -2  0; -2  2  0; 0  0  5] ~ [1  -1  0; 0  0  1; 0  0  0]

Thus, a basis for the eigenspace is {[1; 1; 0]}.
Observe that the vectors v1 = [0; 0; 1], v2 = [-1; 1; 0], and v3 = [1; 1; 0] form an orthogonal set. Hence, if we normalize them, we find that A is diagonalized by the orthogonal matrix

P = [0  -1/√2  1/√2; 0  1/√2  1/√2; 1  0  0]   to   D = [4  0  0; 0  3  0; 0  0  -1]

EXAMPLE 3
Diagonalize the symmetric matrix A = [5  -4  -2; -4  5  -2; -2  -2  8]. Show that A can be diagonalized by an orthogonal matrix.

Solution: We have

C(λ) = |5-λ  -4  -2; -4  5-λ  -2; -2  -2  8-λ| = -λ(λ - 9)²

The eigenvalues are λ1 = 9 with algebraic multiplicity 2 and λ2 = 0 with algebraic multiplicity 1.
For λ1 = 9, we get

A - λ1I = [-4  -4  -2; -4  -4  -2; -2  -2  -1] ~ [1  1  1/2; 0  0  0; 0  0  0]

Thus, a basis for the eigenspace of λ1 is {w1, w2} = {[-1; 1; 0], [-1; 0; 2]}. However, observe that these vectors are not orthogonal to each other. Since we require an orthonormal basis of eigenvectors of A, we need to find an orthonormal basis for the eigenspace of λ1. We can do this by applying the Gram-Schmidt Procedure to this set.
Pick v1 = w1. Then S1 = Span{v1} and

v2 = w2 - (⟨w2, v1⟩ / ‖v1‖²) v1 = [-1; 0; 2] - (1/2)[-1; 1; 0] = [-1/2; -1/2; 2]

Then, {v1, v2} is an orthogonal basis for the eigenspace of λ1.
For λ2 = 0, we get

A - λ2I = A ~ [1  0  -2; 0  1  -2; 0  0  0]

Thus, a basis for the eigenspace of λ2 is {v3} = {[2; 2; 1]}.
Observe that v3 is orthogonal to v1 and v2. Hence, normalizing v1, v2, and v3, we find that A is diagonalized by the orthogonal matrix

P = [2/3  -1/√2  -1/√18; 2/3  1/√2  -1/√18; 1/3  0  4/√18]   to   D = [0  0  0; 0  9  0; 0  0  9]
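The Gram-Schmidt step inside the repeated eigenspace can be verified directly. The sketch below assumes the matrix as reconstructed in Example 3 (eigenvalues 9, 9, 0); the key point it checks is that the Gram-Schmidt combination v2 stays inside the eigenspace and is orthogonal to v1:

```python
# Assumed reading of Example 3's matrix (eigenvalue 9 twice, eigenvalue 0 once):
A = [[5, -4, -2], [-4, 5, -2], [-2, -2, 8]]

def mat_vec(M, v):
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# Two independent but non-orthogonal eigenvectors for eigenvalue 9:
w1 = [-1, 1, 0]
w2 = [-1, 0, 2]

# One Gram-Schmidt step; linear combinations of eigenvectors for the same
# eigenvalue are still eigenvectors, so v2 remains in the eigenspace.
c = dot(w2, w1) / dot(w1, w1)                # = 1/2
v2 = [w2[k] - c * w1[k] for k in range(3)]   # = [-1/2, -1/2, 2]

v3 = [2, 2, 1]                               # eigenvector for eigenvalue 0
```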

The Principal Axis Theorem

We will now proceed to show that not only can every symmetric matrix be diagonalized, but that it can be diagonalized by an orthogonal matrix.

Definition
Orthogonally Diagonalizable
A matrix A is said to be orthogonally diagonalizable if there exists an orthogonal matrix P and a diagonal matrix D such that

PᵀAP = D

Remark
Two matrices A and B are said to be orthogonally similar if there exists an orthogonal matrix P such that PᵀAP = B. Observe that orthogonally similar matrices are similar since an orthogonal matrix P satisfies Pᵀ = P⁻¹. In particular, PᵀAP = D is equivalent to P⁻¹AP = D for an orthogonal matrix P. Hence, all of our theory of similar matrices and diagonalization applies to orthogonal diagonalization.

Lemma 1
An n × n matrix A is symmetric if and only if

x · (Ay) = (Ax) · y   for all x, y ∈ ℝⁿ

Proof: Suppose that A is symmetric. For any x, y ∈ ℝⁿ,

x · (Ay) = xᵀAy = xᵀAᵀy = (Ax)ᵀy = (Ax) · y

Conversely, if x · (Ay) = (Ax) · y for all x, y ∈ ℝⁿ, then

xᵀAy = x · (Ay) = (Ax) · y = (Ax)ᵀy

Since xᵀAy = (Ax)ᵀy for all y ∈ ℝⁿ, based on Theorem 3.1.4, xᵀA = (Ax)ᵀ. Hence, (xᵀA)ᵀ = Ax, or Aᵀx = Ax for all x ∈ ℝⁿ. Applying Theorem 3.1.4 again gives Aᵀ = A, as required.

In Examples 1-3, we saw that the basis vectors of distinct eigenspaces are orthogonal. This implies that every eigenvector in the eigenspace of one eigenvalue is orthogonal to every eigenvector in the eigenspace of a different eigenvalue. We now use Lemma 1 to prove that this is always true.

Theorem 2
If v1 and v2 are eigenvectors of a symmetric matrix A corresponding to distinct eigenvalues λ1 and λ2, then v1 is orthogonal to v2.

Proof: By definition of eigenvalues and eigenvectors, Av1 = λ1v1 and Av2 = λ2v2. Hence,

λ1(v1 · v2) = (λ1v1) · v2 = (Av1) · v2

Using Lemma 1, we get

(Av1) · v2 = v1 · (Av2) = v1 · (λ2v2) = λ2(v1 · v2)

Hence,

λ1(v1 · v2) = λ2(v1 · v2)

It follows that

(λ1 - λ2)(v1 · v2) = 0

It was assumed that λ1 ≠ λ2, so v1 · v2 must be zero, and the eigenvectors corresponding to distinct eigenvalues are mutually orthogonal, as claimed.

Note that this theorem applies only to eigenvectors that correspond to different eigenvalues. As we saw in Example 3, eigenvectors that correspond to the same eigenvalue do not need to be orthogonal. Thus, as in Example 3, if an eigenvalue has algebraic multiplicity greater than 1, it may be necessary to apply the Gram-Schmidt Procedure to find an orthogonal basis for its eigenspace.

Theorem 3
If A is a symmetric matrix, then all eigenvalues of A are real.

The proof of this theorem requires properties of complex numbers and hence is postponed until Chapter 9. See Problem 9.5.D6.
We can now prove that every symmetric matrix is orthogonally diagonalizable. We begin with a lemma that will be the main step in the proof by induction in the theorem.

Lemma 4
Suppose that λ1 is an eigenvalue of the n × n symmetric matrix A, with corresponding unit eigenvector v1. Then there is an orthogonal matrix P whose first column is v1, such that

PᵀAP = [λ1  O(1,n-1); O(n-1,1)  A1]

where A1 is an (n - 1) × (n - 1) symmetric matrix and O(m,n) is the m × n zero matrix.

Proof: By extending the set {v1} to a basis for ℝⁿ, applying the Gram-Schmidt Procedure, and normalizing the vectors, we can produce an orthonormal basis B = {v1, w2, ..., wn} for ℝⁿ. Let

P = [v1  w2  ⋯  wn]

Then

PᵀAP = [v1ᵀ; w2ᵀ; ⋮; wnᵀ] A [v1  w2  ⋯  wn]
     = [v1ᵀ; w2ᵀ; ⋮; wnᵀ] [Av1  Aw2  ⋯  Awn]
     = [v1·Av1  v1·Aw2  ⋯  v1·Awn;
        w2·Av1  w2·Aw2  ⋯  w2·Awn;
        ⋮
        wn·Av1  wn·Aw2  ⋯  wn·Awn]

First observe that PᵀAP is symmetric since (PᵀAP)ᵀ = PᵀAᵀP = PᵀAP.
Also, v1 is a unit eigenvector of A corresponding to λ1, so we have

v1 · Av1 = v1 · (λ1v1) = λ1‖v1‖² = λ1

Since B is orthonormal, we get

wi · Av1 = wi · (λ1v1) = λ1(wi · v1) = 0

So, all other entries in the first column are 0. Hence, all other entries in the first row are also 0 since PᵀAP is symmetric. Moreover, the (n - 1) × (n - 1) block A1 is also symmetric since PᵀAP is symmetric.

Theorem 5   Principal Axis Theorem
Suppose A is an n × n symmetric matrix. Then there exists an orthogonal matrix P and a diagonal matrix D such that PᵀAP = D. That is, every symmetric matrix is orthogonally diagonalizable.

Proof: The proof is by induction on n. If n = 1, then A is a diagonal matrix and hence is orthogonally diagonalizable with P = [1]. Now suppose the result is true for (n - 1) × (n - 1) symmetric matrices, and consider an n × n symmetric matrix A. Pick an eigenvalue λ1 of A (note that λ1 is real by Theorem 3) and find a corresponding unit eigenvector v1. Then, by Lemma 4, there exists an orthogonal matrix R = [v1  w2  ⋯  wn] such that

RᵀAR = [λ1  O(1,n-1); O(n-1,1)  A1]

where A1 is an (n - 1) × (n - 1) symmetric matrix. Then, by our hypothesis, there is an (n - 1) × (n - 1) orthogonal matrix P1 such that

P1ᵀA1P1 = D1

where D1 is an (n - 1) × (n - 1) diagonal matrix. Define

P2 = [1  O(1,n-1); O(n-1,1)  P1]

The columns of P2 form an orthonormal basis for ℝⁿ. Hence, P2 is orthogonal. Since a product of orthogonal matrices is orthogonal, we get that P = RP2 is an n × n orthogonal matrix and, by block multiplication,

PᵀAP = (RP2)ᵀA(RP2) = P2ᵀ(RᵀAR)P2 = P2ᵀ [λ1  O(1,n-1); O(n-1,1)  A1] P2 = [λ1  O(1,n-1); O(n-1,1)  D1] = D

This is diagonal, as required.
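For a 2 × 2 symmetric matrix the Principal Axis Theorem can be realized explicitly with a single rotation. This is an illustration (one step of the Jacobi eigenvalue method, not the book's induction proof): the rotation angle is chosen so that the off-diagonal entry of PᵀAP vanishes.

```python
import math

def diagonalize_2x2(a, b, c):
    """Orthogonally diagonalize the symmetric matrix [[a, b], [b, c]] with a
    rotation P = [[cos t, -sin t], [sin t, cos t]].  The angle t solving
    tan 2t = 2b/(a - c) zeroes the off-diagonal entry of P^T A P."""
    t = 0.5 * math.atan2(2 * b, a - c)
    co, si = math.cos(t), math.sin(t)
    P = [[co, -si], [si, co]]
    A = [[a, b], [b, c]]
    Pt = [[P[j][i] for j in range(2)] for i in range(2)]

    def mul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    return P, mul(Pt, mul(A, P))   # (orthogonal P, diagonal D)
```

For instance, [[1, 2], [2, 1]] has eigenvalues 3 and -1, and the rotation produced here puts exactly those values on the diagonal.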

EXERCISE 1
Orthogonally diagonalize A = [ ].

EXERCISE 2
Orthogonally diagonalize A = [ ].


Remarks
1. The eigenvectors in an orthogonal matrix that diagonalizes a symmetric matrix A are called the principal axes for A. We will see why this definition makes sense in Section 8.3.
2. The converse of the Principal Axis Theorem is also true. That is, every orthogonally diagonalizable matrix is symmetric. You are asked to prove this in Problem D2. Hence, we can say that a matrix is orthogonally diagonalizable if and only if it is symmetric.

PROBLEMS 8.1
Practice Problems
A1 Determine which of the following matrices are symmetric.
(a) A = [ ]   (b) B = [ ]   (c) C = [ ]   (d) D = [ ]

A2 For each of the following symmetric matrices, find an orthogonal matrix P and a diagonal matrix D such that PᵀAP = D.
(a) A = [ ]   (b) A = [ ]   (c) A = [ ]   (d) A = [ ]   (e) A = [ ]

Homework Problems
B1 Determine which of the following matrices are symmetric.
(a) A = [ ]   (b) B = [ ]   (c) C = [ ]   (d) D = [ ]   (e) E = [ ]

B2 For each of the following symmetric matrices, find an orthogonal matrix P and a diagonal matrix D such that PᵀAP = D.
(a) A = [ ]   (b) A = [ ]   (c) A = [ ]
(d) A = [ ]   (e) A = [ ]   (f) A = [ ]
(g) A = [ ]   (h) A = [ ]   (i) A = [ ]

Computer Problems
C1 Use a computer to determine the eigenvalues and a basis of orthonormal eigenvectors of the following symmetric matrices.
(a) The matrix in Problem A2 (e).
(b) [ ]
(c) [ ]

C2 Let S(t) denote the symmetric matrix
S(t) = [ ]
By calculating the eigenvalues of S(-0.1), S(-0.05), S(0), S(0.05), and S(0.1), explore how the eigenvalues of S(t) change as t varies.

Conceptual Problems
D1 Let A and B be n × n symmetric matrices. Determine which of the following is symmetric.
(a) AᵀA   (b) ABA   (c) A²   (d) A + B

D2 Show that if A is orthogonally diagonalizable, then A is symmetric.

D3 Prove that if A is an invertible symmetric matrix, then A⁻¹ is orthogonally diagonalizable.

8.2 Quadratic Forms

We saw earlier in the book the relationship between matrix mappings and linear mappings. We now explore the relationship between symmetric matrices and an important class of functions called quadratic forms, which are not linear. Quadratic forms appear in geometry, statistics, calculus, topology, and many other areas. We shall see in Section 8.3 how quadratic forms and our special theory of diagonalization of symmetric matrices can be used to graph conic sections and quadric surfaces.

Quadratic Forms
Consider the symmetric matrix A = [a  b/2; b/2  c]. If x = [x1; x2], then

xᵀAx = [x1  x2] [a  b/2; b/2  c] [x1; x2] = [x1  x2] [ax1 + (b/2)x2; (b/2)x1 + cx2] = ax1² + bx1x2 + cx2²

We call the expression ax1² + bx1x2 + cx2² a quadratic form on ℝ² (or in the variables x1 and x2). Thus, corresponding to every symmetric matrix A, there is a quadratic form.
On the other hand, given a quadratic form Q(x) = ax1² + bx1x2 + cx2², we can reconstruct the symmetric matrix A = [a  b/2; b/2  c] by choosing (A)11 to be the coefficient of x1², (A)12 = (A)21 to be half of the coefficient of x1x2, and (A)22 to be the coefficient of x2². We deal with the coefficient of x1x2 in this way to ensure that A is symmetric.

EXAMPLE 1
Determine the symmetric matrix corresponding to the quadratic form Q(x) = 2x1² - 4x1x2 - x2².

Solution: The corresponding symmetric matrix is

A = [2  -2; -2  -1]

Notice that we could have written ax1² + bx1x2 + cx2² in terms of other, non-symmetric matrices. Many choices are possible. However, we agree always to choose the symmetric matrix for two reasons. First, it gives us a unique (symmetric) matrix corresponding to a given quadratic form. Second, the choice of the symmetric matrix A allows us to apply the special theory available for symmetric matrices. We now use this to extend the definition of quadratic form to n variables.

Definition
Quadratic Form
A quadratic form on ℝⁿ, with corresponding symmetric matrix A, is a function Q : ℝⁿ → ℝ defined by

Q(x) = xᵀAx = Σ_{i,j=1}^{n} aij xi xj

As above, given a quadratic form Q(x) on ℝⁿ, we can easily construct the corresponding symmetric matrix A by taking (A)ii to be the coefficient of xi² and (A)ij to be half of the coefficient of xixj for i ≠ j.
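The formula Q(x) = xᵀAx is easy to evaluate in code, and doing so checks that the "half the cross-coefficient" rule really reproduces the polynomial. A minimal sketch (the sample form here, 3x1² + 5x1x2 + 2x2², is a hypothetical illustration):

```python
def quad_form(A, x):
    """Evaluate Q(x) = x^T A x for a symmetric matrix A given as a list of rows."""
    n = len(x)
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Q(x) = 3x1^2 + 5x1x2 + 2x2^2: the x1x2 coefficient 5 is split into two
# off-diagonal entries of 5/2 so that A is symmetric.
A = [[3.0, 2.5], [2.5, 2.0]]
```

At x = (1, 1) the polynomial gives 3 + 5 + 2 = 10, and the matrix evaluation agrees.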

EXAMPLE 2
Q(x) = 3x1² + 5x1x2 + 2x2² is a quadratic form on ℝ² with corresponding symmetric matrix A = [3  5/2; 5/2  2].

Q(x) = 2x1² + 4x1x3 - x2² + 2x2x3 is a quadratic form on ℝ³ with corresponding symmetric matrix A = [2  0  2; 0  -1  1; 2  1  0].

Q(x) = x1² + 6x2x4 + x3² is a quadratic form on ℝ⁴ with corresponding symmetric matrix A = [1  0  0  0; 0  0  0  3; 0  0  1  0; 0  3  0  0].

EXERCISE 1
Find the quadratic form corresponding to each of the following symmetric matrices.
(a) A = [1  1/2; 1/2  2]
(b) A = [ ]

EXERCISE 2
Find the corresponding symmetric matrix for each of the following quadratic forms.
1. Q(x) = x1² - 2x1x2 - 3x2²
2. Q(x) = 2x1² + 3x1x2 - x1x3 + 4x2² + x3²
3. Q(x) = x1² + 2x2² + 3x3² + 4x4²

Observe that the symmetric matrix corresponding to Q(x) = x1² + 2x2² + 3x3² + 4x4² is in fact diagonal. This motivates the following definition.

Definition
Diagonal Form
A quadratic form Q(x) is in diagonal form if all the coefficients ajk with j ≠ k are equal to 0. Equivalently, Q(x) is in diagonal form if its corresponding symmetric matrix is diagonal.

EXAMPLE 3
The quadratic form Q(x) = 3x1² - 2x2² + 4x3² is in diagonal form. The quadratic form Q(x) = 2x1² - 4x1x2 + 3x2² is not in diagonal form.



Since each quadratic form has an associated symmetric matrix, we should expect
that diagonalizing the symmetric matrix should also diagonalize the quadratic form.
We first demonstrate this with an example and then prove the result in general.

EXAMPLE 4
Let A = [2  1; 1  2] and let Q(x) = xᵀAx = 2x1² + 2x1x2 + 2x2². Let x = Py, where P = [1/√2  1/√2; -1/√2  1/√2]. Express Q(x) in terms of y = [y1; y2].

Solution: We first observe that P diagonalizes A. In particular, we have

PᵀAP = [1/√2  -1/√2; 1/√2  1/√2] [2  1; 1  2] [1/√2  1/√2; -1/√2  1/√2] = [1  0; 0  3]

We have

Q(x) = xᵀAx = (Py)ᵀA(Py) = yᵀPᵀAPy = yᵀ [1  0; 0  3] y = y1² + 3y2²

Recall that if P = [v1  v2] is an orthogonal matrix, it is a change of coordinates matrix from standard coordinates to coordinates with respect to the basis B = {v1, v2}. In particular, in Example 4, we put Q(x) into diagonal form by writing it with respect to the orthonormal basis B = {[1/√2; -1/√2], [1/√2; 1/√2]}. The vector y is just the B-coordinate vector of x. That is, [x]_B = y. We now prove this in general.

Theorem 1
Let Q(x) = xᵀAx be a quadratic form on ℝⁿ. Then there is an orthonormal basis B of ℝⁿ such that when Q(x) is expressed in terms of B-coordinates, it is in diagonal form.

Proof: Since A is symmetric, we can apply the Principal Axis Theorem to get an orthogonal matrix P such that PᵀAP = D = diag(λ1, ..., λn), where λ1, ..., λn are the eigenvalues of A. Recall from Section 4.4 that the change of coordinates matrix P from B-coordinates to standard coordinates satisfies x = P[x]_B. Hence,

Q(x) = xᵀAx = (P[x]_B)ᵀA(P[x]_B) = [x]_Bᵀ PᵀAP [x]_B = [x]_Bᵀ D [x]_B

Let [x]_B = [y1; ⋮; yn]. Then we have

Q(x) = [y1  ⋯  yn] diag(λ1, ..., λn) [y1; ⋮; yn] = λ1y1² + λ2y2² + ⋯ + λnyn²

as required.

EXAMPLE 5
Let Q(x) = x1² + 4x1x2 + x2². Find a diagonal form of Q(x) and an orthogonal matrix P that brings it into this form.

Solution: The corresponding symmetric matrix is A = [1  2; 2  1]. We have

C(λ) = |1-λ  2; 2  1-λ| = (λ - 3)(λ + 1)

The eigenvalues are λ1 = 3 with algebraic multiplicity 1 and λ2 = -1 with algebraic multiplicity 1.
For λ1 = 3, we get

A - λ1I = [-2  2; 2  -2] ~ [1  -1; 0  0]

An eigenvector for λ1 is v1 = [1; 1] and a basis for the eigenspace is {v1}.
For λ2 = -1, we get

A - λ2I = [2  2; 2  2] ~ [1  1; 0  0]

An eigenvector for λ2 is v2 = [-1; 1] and a basis for the eigenspace is {v2}.
Therefore, we see that A is orthogonally diagonalized by P = [1/√2  -1/√2; 1/√2  1/√2] to D = [3  0; 0  -1].
Let x = Py. Then we get

Q(x) = yᵀDy = 3y1² - y2²

Hence, Q(x) is brought into diagonal form by P.

EXERCISE 3
Let Q(x) = 4x1x2 - 3x2². Find a diagonal form of Q(x) and an orthogonal matrix P that brings it into this form.


Chapter 8

Symmetric Matrices and Quadratic Forms

Classifications of Quadratic Forms


Definition
Positive Definite
Negative Definite
Indefinite
Semidefinite

A quadratic form Q(x) on JR" is


1. Positive definite if Q(x)>0 for all x * 0
2. Negative definite if Q(x) < 0 for all x * 0
3. Indefinite if Q(x)>0 for some x and Q(x) < 0 for some x
4. Positive semidefinite if Q(x) 0 for all x
5. Negative semidefinite if Q(x) 0 for all x
These concepts are useful in applications. For example, we shall see in Section 8.3 that the graph of Q(x) = 1 in R² is an ellipse if and only if Q(x) is positive definite.

EXAMPLE 6

Classify the quadratic forms Q1(x) = 3x1² + 4x2², Q2(x) = x1x2, and Q3(x) = 2x1² + 4x1x2 + x2².

Solution: Q1(x) is positive definite since Q1(x) = 3x1² + 4x2² > 0 for all x ≠ 0.
Q2(x) is indefinite since Q2(1, 1) = 1 > 0 and Q2(-1, 1) = -1 < 0.
Q3(x) is indefinite since Q3(1, 1) = 7 > 0 and Q3(1, -2) = -2 < 0.

Observe that classifying Q3(x) was a little more difficult than classifying Q1(x) or Q2(x). A general quadratic form Q(x) on R^n would be difficult to classify by inspection. The following theorem gives us an easier way to classify a quadratic form.

Theorem 2

Let Q(x) = x^T A x, where A is a symmetric matrix. Then

1. Q(x) is positive definite if and only if all eigenvalues of A are positive.
2. Q(x) is negative definite if and only if all eigenvalues of A are negative.
3. Q(x) is indefinite if and only if some of the eigenvalues of A are positive and some are negative.

Proof: We prove (1) and leave (2) and (3) as Problems D1 and D2.
By Theorem 1, there exists an orthogonal matrix P such that

Q(x) = λ1y1² + ... + λnyn²

where x = Py and λ1, ..., λn are the eigenvalues of A. Clearly, Q(x) > 0 for all y ≠ 0 if and only if the eigenvalues are all positive. Moreover, since P is orthogonal, it is invertible. Hence, x ≠ 0 if and only if y ≠ 0 since x = Py. Thus we have shown that Q(x) is positive definite if and only if all eigenvalues of A are positive.
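Theorem 2 gives a mechanical classification procedure: compute the eigenvalues of the symmetric matrix and inspect their signs. A sketch of such a routine (the function name and the tolerance are our own, not from the text):

```python
import numpy as np

def classify(A, tol=1e-12):
    """Classify the quadratic form Q(x) = x^T A x by the signs of A's eigenvalues."""
    w = np.linalg.eigvalsh(A)              # eigenvalues of the symmetric matrix A
    if np.all(w > tol):
        return "positive definite"
    if np.all(w < -tol):
        return "negative definite"
    if np.any(w > tol) and np.any(w < -tol):
        return "indefinite"
    # at least one eigenvalue is (numerically) zero
    return "positive semidefinite" if np.all(w >= -tol) else "negative semidefinite"

# Q2(x) = x1x2 from Example 6 corresponds to A = [[0, 1/2], [1/2, 0]]
result = classify(np.array([[0.0, 0.5],
                            [0.5, 0.0]]))
```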

EXAMPLE 7

Classify the following quadratic forms.

(a) Q(x) = 4x1² + 8x1x2 + 3x2²

Solution: The symmetric matrix corresponding to Q(x) is A = [4 4; 4 3]. The characteristic polynomial of A is C(λ) = λ² - 7λ - 4. Using the quadratic formula, we find that the eigenvalues of A are λ = (7 ± √65)/2. One of these is positive and the other is negative. Hence, Q(x) is indefinite.


Section 8.2 Exercises

EXAMPLE 7
(continued)

(b) Q(x) = -2x1² - 2x1x2 + 2x1x3 - 2x2² + 2x2x3 - 2x3²

Solution: The symmetric matrix corresponding to Q(x) is

A = [-2 -1 1; -1 -2 1; 1 1 -2]

The characteristic polynomial of A is C(λ) = -(λ + 1)²(λ + 4). Thus, the eigenvalues of A are -1, -1, and -4. Therefore, Q(x) is negative definite.

EXERCISE 4

Classify the following quadratic forms.
(a) Q(x) = 4x1² + 16x1x2 + 4x2²
(b) Q(x) = 2x1² - 6x1x2 - 6x1x3 + 3x2² + 4x2x3 + 3x3²
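The eigenvalue computation in Example 7(b) can be verified the same way; a sketch (we take A to be the symmetric matrix of Example 7(b)'s quadratic form):

```python
import numpy as np

# Symmetric matrix of Q(x) = -2x1^2 - 2x1x2 + 2x1x3 - 2x2^2 + 2x2x3 - 2x3^2
A = np.array([[-2.0, -1.0,  1.0],
              [-1.0, -2.0,  1.0],
              [ 1.0,  1.0, -2.0]])

eigvals = np.linalg.eigvalsh(A)            # ascending order: [-4, -1, -1]
negative_definite = bool(np.all(eigvals < 0))
```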

Since every symmetric matrix corresponds uniquely to a quadratic form, it makes sense to classify a symmetric matrix by classifying its corresponding quadratic form. That is, for example, we will say a symmetric matrix A is positive definite if and only if the quadratic form Q(x) = x^T A x is positive definite. Observe that this implies that we can use Theorem 2 to classify symmetric matrices as well.

EXAMPLE 8

Classify the following symmetric matrices.


(a) A = [3 2; 2 3]

Solution: We have C(λ) = λ² - 6λ + 5. Thus, the eigenvalues of A are 5 and 1, so A is positive definite.

(b) For the second matrix A, we have C(λ) = -(λ + 4)(λ² - 28). Thus, the eigenvalues of A are -4, 2√7, and -2√7. Since A has both positive and negative eigenvalues, A is indefinite.

PROBLEMS 8.2
Practice Problems

A1 Determine the quadratic form corresponding to the given symmetric matrix.

A2 For each of the following quadratic forms Q(x),
(i) Determine the corresponding symmetric matrix A.
(ii) Express Q(x) in diagonal form and give the orthogonal matrix that brings it into this form.
(iii) Classify Q(x).
(a) Q(x) = x1² - 3x1x2 + x2²
(b) Q(x) = 5x1² - 4x1x2 + 2x2²
(c) Q(x) = -2x1² + 12x1x2 + 7x2²
(d) Q(x) = x1² + 2x1x2 + 6x1x3 + x2² + 6x2x3 - 3x3²
(e) Q(x) = -4x1² + 2x1x2 - 5x2² - 2x2x3 - 4x3²

A3 Classify each of the following symmetric matrices.

Homework Problems

B1 Determine the quadratic form corresponding to the given symmetric matrix.

B2 For each of the following quadratic forms Q(x),
(i) Determine the corresponding symmetric matrix A.
(ii) Express Q(x) in diagonal form and give the orthogonal matrix that brings it into this form.
(iii) Classify Q(x).
(a) Q(x) = 7x1² + 4x1x2 + 4x2²
(b) Q(x) = 2x1² + 6x1x2 + 2x2²
(c) Q(x) = x1² + 3x1x2 + x2²
(d) Q(x) = 3x1² - 2x1x2 - 2x1x3 + 5x2² + 2x2x3 + 3x3²
(e) Q(x) = 2x1² - 4x1x2 + 6x1x3 + 2x2² + 6x2x3 - 3x3²

B3 Classify each of the following symmetric matrices.

Computer Problems
C1 Classify each of the following quadratic forms with the help of a computer.
(a) Q(x) = -9x1² + 8x1x2 + 8x1x3 - 5x2² - 5x3²
(b) Q(x) = -0.1x1² - 0.8x1x2 + 1.2x1x4 + 2.1x2² + 1.6x2x3 + 1.3x3² + 4.2x3x4 + 1.1x4²
(c) Q(x) = 0.85(x1² + x2² + x3² + x4²) - 0.1x1x2 + 0.6x1x3 + 0.2x1x4 + 0.2x2x3 + 0.6x2x4 - 0.1x3x4

Conceptual Problems

D1 Let Q(x) = x^T A x, where A is a symmetric matrix. Prove that Q(x) is negative definite if and only if all eigenvalues of A are negative.

D2 Let Q(x) = x^T A x, where A is a symmetric matrix. Prove that Q(x) is indefinite if and only if some of the eigenvalues of A are positive and some are negative.

D3 (a) Let Q(x) = x^T A x, where A is a symmetric matrix. Prove that Q(x) is positive semidefinite if and only if all of the eigenvalues of A are non-negative.
(b) Let A be an m × n matrix. Prove that A^T A is positive semidefinite.

D4 Let A be a positive definite symmetric matrix. Prove that
(a) The diagonal entries of A are all positive.
(b) A is invertible.
(c) A⁻¹ is positive definite.
(d) P^T A P is positive definite for any orthogonal matrix P.

D5 A matrix B is called skew-symmetric if B^T = -B. Given a square matrix A, define the symmetric part of A to be

A⁺ = (A + A^T)/2

and the skew-symmetric part of A to be

A⁻ = (A - A^T)/2

(a) Verify that A⁺ is symmetric, A⁻ is skew-symmetric, and A = A⁺ + A⁻.
(b) Prove that the diagonal entries of A⁻ are 0.
(c) Determine expressions for typical entries (A⁺)ij and (A⁻)ij in terms of the entries of A.
(d) Prove that for every x ∈ R^n, x^T A x = x^T A⁺ x. (Hint: Use the fact that A = A⁺ + A⁻ and prove that x^T A⁻ x = 0.)

D6 In this problem, we show that general inner products on R^n are not different in interesting ways from the standard inner product. Let ⟨ , ⟩ be an inner product on R^n and let S = {e1, ..., en} be the standard basis.
(a) Verify that for any x, y ∈ R^n,

⟨x, y⟩ = Σ(i=1 to n) Σ(j=1 to n) xi yj ⟨ei, ej⟩

(b) Let G be the n × n matrix defined by gij = ⟨ei, ej⟩. Verify that ⟨x, y⟩ = x^T G y.
(c) Use the properties of an inner product to verify that G is symmetric and positive definite.
(d) By adapting the proof of Theorem 1, show that there is a basis B = {v1, ..., vn} such that in B-coordinates,

⟨x, y⟩ = λ1 x̄1 ȳ1 + ... + λn x̄n ȳn

where λ1, ..., λn are the eigenvalues of G. In particular,

⟨x, x⟩ = ‖x‖² = Σ(i=1 to n) λi x̄i²

(e) Introduce a new basis C = {w1, ..., wn} by defining wi = vi/√λi. Use an asterisk to denote C-coordinates, so that x = x1*w1 + ... + xn*wn. Verify that

⟨wi, wk⟩ = 1 if i = k, and ⟨wi, wk⟩ = 0 if i ≠ k

and that

⟨x, y⟩ = x1*y1* + ... + xn*yn*

Thus, with respect to the inner product ⟨ , ⟩, C is an orthonormal basis, and in C-coordinates, the inner product of two vectors looks just like the standard dot product.
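The identities in Problem D5 are easy to experiment with numerically; a sketch with an arbitrary 4 × 4 matrix (the random seed is just for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))            # a generic (non-symmetric) square matrix

A_plus = (A + A.T) / 2                     # symmetric part A+
A_minus = (A - A.T) / 2                    # skew-symmetric part A-

x = rng.standard_normal(4)
q_full = x @ A @ x                         # x^T A x
q_sym = x @ A_plus @ x                     # x^T A+ x; equal to q_full by D5(d)
```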


8.3 Graphs of Quadratic Forms

In R², it is often of interest to know the graph of an equation of the form Q(x) = k, where Q(x) is a quadratic form on R² and k is a constant. If we were interested in only one or two particular graphs, it might be sensible to simply use a computer to produce these graphs. However, by applying diagonalization to the problem of determining these graphs, we see a very clear interpretation of eigenvectors. We also consider a concrete useful application of a change of coordinates. Moreover, this approach to these graphs leads to a classification of the various possibilities; all of the graphs of the form Q(x) = k in R² can be divided into a few standard cases. Classification is a useful process because it allows us to say "I really only need to understand these few standard cases." A classification of these graphs is given later in this section.

Observe that in general it is difficult to identify the shape of the graph of ax1² + bx1x2 + cx2² = k. It is even more difficult to try to sketch the graph. However, it is easy to sketch the graph of ax1² + cx2² = k. Thus, our strategy to sketch the graph of a quadratic form Q(x) = k is to first bring it into diagonal form. Of course, we first need to determine how diagonalizing the quadratic form will affect the graph.

Theorem 1

Let Q(x) = ax1² + bx1x2 + cx2², where a, b, and c are not all zero. Then an orthogonal matrix P with det P = 1, which diagonalizes Q(x), corresponds to a rotation in R².

Proof: Let A = [a b/2; b/2 c]. Since A is symmetric, by the Principal Axis Theorem, there exists an orthonormal basis {v, w} of R² of eigenvectors of A. Let v = [v1; v2] and w = [w1; w2]. Since v is a unit vector, we must have

1 = ‖v‖² = v1² + v2²

Hence, the entries v1 and v2 lie on the unit circle. Therefore, there exists an angle θ such that v1 = cos θ and v2 = sin θ. Moreover, since w is a unit vector orthogonal to v, we must have

w = ±[-sin θ; cos θ]

We choose w = [-sin θ; cos θ] so that det P = 1. Hence we have

P = [cos θ -sin θ; sin θ cos θ]

This corresponds to a rotation by θ. Finally, from our work in Section 8.2, we know that this change of coordinates matrix brings Q into diagonal form.

Remark
If we picked w = [sin θ; -cos θ], we would find that P corresponds to a rotation and a reflection.


In practice, we do not need to calculate the angle of rotation. When we orthogonally diagonalize Q(x) with P = [v1 v2], the change of coordinates x = Py causes a rotation of the new y1- and y2-axes. In particular, taking y = [1; 0] we get x = v1, and taking y = [0; 1] gives x = v2. That is, the new y1-axis corresponds to the vector v1 in the x1x2-plane, and the y2-axis corresponds to the vector v2 in the x1x2-plane.

We demonstrate this with two examples.

EXAMPLE 1

Sketch the graph of the equation 3x1² + 4x1x2 = 16.

Solution: The quadratic form Q(x) = 3x1² + 4x1x2 corresponds to the symmetric matrix A = [3 2; 2 0], so the characteristic polynomial is

C(λ) = det(A - λI) = λ² - 3λ - 4 = (λ - 4)(λ + 1)

Thus, the eigenvalues of A are λ1 = 4 and λ2 = -1. Thus, by an orthogonal change of coordinates, the equation can be brought into the diagonal form

4y1² - y2² = 16

This is an equation of a hyperbola, and we can sketch the graph in the y1y2-plane. We observe that the y1-intercepts are (2, 0) and (-2, 0), and there are no intercepts on the y2-axis. The asymptotes of the hyperbola are determined by the equation 4y1² - y2² = 0. By factoring, we determine that the asymptotes are lines with equations 2y1 - y2 = 0 and 2y1 + y2 = 0. With this information, we obtain the graph in Figure 8.3.1.
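The eigenvalues found in Example 1, and the rotation angle quoted later in the example, can be checked numerically; a sketch (the column reordering and sign conventions are ours, chosen to match the right-handed basis {v1, v2} used in the text):

```python
import numpy as np

# Symmetric matrix of Q(x) = 3x1^2 + 4x1x2 from Example 1
A = np.array([[3.0, 2.0],
              [2.0, 0.0]])

eigvals, P = np.linalg.eigh(A)             # ascending order: [-1, 4]
P, eigvals = P[:, ::-1], eigvals[::-1]     # put the lambda = 4 eigenvector first
if P[0, 0] < 0:                            # fix signs so v1 points like [2, 1]
    P[:, 0] = -P[:, 0]
if np.linalg.det(P) < 0:                   # force det P = +1: a pure rotation
    P[:, 1] = -P[:, 1]

theta = np.arctan2(P[1, 0], P[0, 0])       # rotation angle, about 0.46 radians
```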
Figure 8.3.1  The graph of 4y1² - y2² = 16, shown with horizontal y1-axis and asymptotes.

However, we want a picture of the graph 3x1² + 4x1x2 = 16 relative to the original x1-axis and x2-axis, that is, in the x1x2-plane. Hence, we need to find the eigenvectors of A.

For λ1 = 4, a basis for the eigenspace is {v1}, where v1 = [2; 1].

For λ2 = -1,

EXAMPLE 1
(continued)

Thus, a basis for the eigenspace is {v2}, where v2 = [-1; 2]. (We could have chosen [1; -2], but [-1; 2] is better because {v1, v2} is right-handed.)

Now we sketch the graph of 3x1² + 4x1x2 = 16. In the x1x2-plane, we draw the new y1-axis in the direction of v1. (For clarity, in Figure 8.3.2 we have shown the vector (1/√5)[2; 1] instead of [2; 1].) We also draw the new y2-axis in the direction of v2. Then, relative to these new axes, we sketch the graph of the hyperbola 4y1² - y2² = 16. The graph in Figure 8.3.2 is also the graph of the original equation 3x1² + 4x1x2 = 16.

In order to include the asymptotes in the sketch, we rewrite their equations in standard coordinates. The orthogonal change of coordinates matrix in this case is given by P = (1/√5)[2 -1; 1 2]. (This is a rotation of the axes through angle θ ≈ 0.46 radians.) Thus, the change of coordinates equation can be written

[y1; y2] = (1/√5)[2 1; -1 2][x1; x2]

since P⁻¹ = P^T as P is orthogonal. This gives

y1 = (1/√5)(2x1 + x2)  and  y2 = (1/√5)(-x1 + 2x2)

Then one asymptote is

0 = 2y1 - y2 = (1/√5)(2(2x1 + x2) - (-x1 + 2x2)) = √5 x1

The other asymptote is

0 = 2y1 + y2 = (1/√5)(2(2x1 + x2) + (-x1 + 2x2)) = (1/√5)(3x1 + 4x2)

Thus, in standard coordinates, the asymptotes are 3x1 + 4x2 = 0 and x1 = 0.

Figure 8.3.2  The graph of 3x1² + 4x1x2 = 16 ⇔ 4y1² - y2² = 16.


EXAMPLE 2

Sketch the graph of the equation 6x1² + 4x1x2 + 3x2² = 14.

Solution: The corresponding symmetric matrix is A = [6 2; 2 3]. The eigenvalues are λ1 = 2 and λ2 = 7. A basis for the eigenspace of λ1 is {v1}, where v1 = [-1; 2]. A basis for the eigenspace of λ2 is {v2}, where v2 = [2; 1]. If v1 is taken to define the y1-axis and v2 is taken to define the y2-axis, then the original equation is equivalent to

2y1² + 7y2² = 14

This is the equation of an ellipse with y1-intercepts (√7, 0) and (-√7, 0) and y2-intercepts (0, √2) and (0, -√2).

In Figure 8.3.3, the ellipse is shown relative to the y1- and y2-axes. In Figure 8.3.4, the new y1- and y2-axes determined by the eigenvectors are shown relative to the standard axes, and the ellipse from Figure 8.3.3 is rotated into place. The resulting ellipse is the graph of 6x1² + 4x1x2 + 3x2² = 14.

Figure 8.3.3  The graph of 2y1² + 7y2² = 14 in the y1y2-plane.
Figure 8.3.4  The graph of 6x1² + 4x1x2 + 3x2² = 14 ⇔ 2y1² + 7y2² = 14.


Since diagonalizing a quadratic form corresponds to a rotation, to classify all the graphs of equations of the form Q(x) = k, we diagonalize and rewrite the equation in the form λ1y1² + λ2y2² = k. Here, λ1 and λ2 are the eigenvalues of the corresponding symmetric matrix. The distinct possibilities are displayed in Table 8.3.1.


Table 8.3.1  Graphs of λ1x1² + λ2x2² = k

                     k > 0            k = 0                k < 0
λ1 > 0, λ2 > 0       ellipse          point (0,0)          empty set
λ1 > 0, λ2 = 0       parallel lines   line x1 = 0          empty set
λ1 > 0, λ2 < 0       hyperbola        intersecting lines   hyperbola
λ1 = 0, λ2 < 0       empty set        line x2 = 0          parallel lines
λ1 < 0, λ2 < 0       empty set        point (0,0)          ellipse
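Table 8.3.1 can be read off mechanically from the eigenvalues; a sketch of such a decision procedure (the function name and tolerance are ours, and A is assumed to be a nonzero 2 × 2 symmetric matrix):

```python
import numpy as np

def conic_type(A, k, tol=1e-12):
    """Name the graph of x^T A x = k for a 2x2 symmetric A, per Table 8.3.1."""
    l1, l2 = np.linalg.eigvalsh(A)
    pos = int(l1 > tol) + int(l2 > tol)
    neg = int(l1 < -tol) + int(l2 < -tol)
    if pos == 2:
        return "ellipse" if k > 0 else ("point" if k == 0 else "empty set")
    if neg == 2:
        return "ellipse" if k < 0 else ("point" if k == 0 else "empty set")
    if pos == 1 and neg == 1:
        return "intersecting lines" if k == 0 else "hyperbola"
    # exactly one eigenvalue is zero
    if k == 0:
        return "line"
    return "parallel lines" if (pos == 1) == (k > 0) else "empty set"

# Example 1: Q(x) = 3x1^2 + 4x1x2, k = 16 is a hyperbola
shape = conic_type(np.array([[3.0, 2.0],
                             [2.0, 0.0]]), 16)
```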

The cases where k = 0 or one eigenvalue is zero may be regarded as degenerate cases (not general cases). The nondegenerate cases are the ellipses and hyperbolas, which are conic sections. (A conic section is a curve obtained in R³ as the intersection of a cone and a plane.) Notice that the cases of a single point, a single line, and intersecting lines can also be obtained as the intersection of a cone and a plane passing through the vertex of the cone. However, the cases of parallel lines (in Table 8.3.1) are not obtained as the intersection of a cone and a plane.

It is also important to realize that one class of conic sections, parabolas, does not appear in Table 8.3.1. In R², the equation of a parabola is a quadratic equation, but it contains first-degree terms. Since a quadratic form contains only second-degree terms, an equation of the form Q(x) = k cannot be a parabola.

The classification provided by Table 8.3.1 suggests that it might be interesting to consider how degenerate cases arise as limiting cases of nondegenerate cases. For example, Figure 8.3.5 shows that the case of parallel lines (x2 = constant) arises from the family of ellipses λx1² + x2² = 1 as λ tends to 0.

Figure 8.3.5  A family of ellipses λx1² + x2² = 1. The circle occurs for λ = 1; as λ decreases, the ellipses get "fatter"; for λ = 0, the graph is a pair of lines.

Table 8.3.1 could have been constructed using only the cases k > 0 and k = 0. The graphs obtained for k < 0 are all obtained for k > 0, although they may be oriented differently. For example, the graph of x1² - x2² = -1 is the same as the graph of -x1² + x2² = 1, and this hyperbola may be obtained from the hyperbola x1² - x2² = 1 by reflecting over the line x1 = x2. However, for purposes of illustration, it is convenient


to include both k > 0 and k < 0. Figure 8.3.6 shows that the case of intersecting lines (k = 0) separates the hyperbolas with intercepts on the x1-axis (x1² - 2x2² = k > 0) from the hyperbolas with intercepts on the x2-axis (x1² - 2x2² = k < 0).

Figure 8.3.6  Graphs of x1² - 2x2² = k for k ∈ {-1, 0, 1}.

EXERCISE 1

Diagonalize the quadratic form and sketch the graph of the equation x1² + 2x1x2 + x2² = 2. Show both the original axes and the new axes.

Graphs of Q(x) = k in R³

For a quadratic equation of the form Q(x) = k in R³, there are similar results to what

we did above. However, because there are three variables instead of two, there are more
possibilities. The nondegenerate cases give ellipsoids, hyperboloids of one sheet, and
hyperboloids of two sheets. These graphs are called quadric surfaces.
The usual standard form for the equation of an ellipsoid is

x1²/a² + x2²/b² + x3²/c² = 1

This is the case obtained by diagonalizing Q(x) = k if the eigenvalues and k are all non-zero and have the same sign. In particular, if the diagonal form is λ1y1² + λ2y2² + λ3y3² = k, we can write a² = k/λ1, b² = k/λ2, and c² = k/λ3. An ellipsoid is shown in Figure 8.3.7. The positive intercepts on the coordinate axes are (a, 0, 0), (0, b, 0), and (0, 0, c).

Figure 8.3.7  An ellipsoid in standard position.


Figure 8.3.8  Graphs of 4x1² + 4x2² - x3² = k. (a) k = 1; a hyperboloid of one sheet. (b) k = 0; a cone. (c) k = -1; a hyperboloid of two sheets.

The standard form of the equation for a hyperboloid of one sheet is

x1²/a² + x2²/b² - x3²/c² = 1

This form is obtained when k and two eigenvalues of the matrix of Q are positive and the third eigenvalue is negative. It is also obtained when k and two eigenvalues are negative and the other eigenvalue is positive. Notice that if this is rewritten

x1²/a² + x2²/b² = 1 + x3²/c²

it is clear that for every x3 there are values of x1 and x2 that satisfy the equation, so that the surface is all one piece (or one sheet). A hyperboloid of one sheet is shown in Figure 8.3.8 (a).

The standard form of the equation for a hyperboloid of two sheets is

x1²/a² + x2²/b² - x3²/c² = -1

This form is obtained when k and one eigenvalue is negative and the other eigenvalues are positive, or when k and one eigenvalue are positive and the other eigenvalues are negative. Notice that if this is rewritten

x1²/a² + x2²/b² = -1 + x3²/c²

it is clear that for every |x3| < c, there are no values of x1 and x2 that satisfy the equation. Therefore, the graph consists of two pieces (or two sheets), one with x3 ≥ c and the other with x3 ≤ -c. A hyperboloid of two sheets is shown in Figure 8.3.8 (c).

It is interesting to consider the family of surfaces obtained by varying k in the equation x1²/a² + x2²/b² - x3²/c² = k, as in Figure 8.3.8. When k = 1, the surface is a hyperboloid of one sheet; as k decreases toward 0, the "waist" of the hyperboloid shrinks until at k = 0 it has "pinched in" to a single point and the hyperboloid of one sheet becomes a cone. As k decreases towards -1, the waist has disappeared, and the graph is now a hyperboloid of two sheets.
Table 8.3.2  Graphs of λ1x1² + λ2x2² + λ3x3² = k

                           k > 0                      k = 0
λ1, λ2, λ3 > 0             ellipsoid                  point (0, 0, 0)
λ1, λ2 > 0, λ3 = 0         elliptic cylinder          x3-axis
λ1, λ2 > 0, λ3 < 0         hyperboloid of one sheet   cone
λ1 > 0, λ2 = 0, λ3 < 0     hyperbolic cylinder        intersecting planes
λ1 > 0, λ2, λ3 < 0         hyperboloid of two sheets  cone
λ1 = 0, λ2, λ3 < 0         empty set                  x1-axis
λ1, λ2, λ3 < 0             empty set                  point (0, 0, 0)

Section 8.3 Exercises

Table 8.3.2 displays the possible cases for Q(x) = k in R³. The nondegenerate cases are the ellipsoids and hyperboloids. Note that the hyperboloid of two sheets appears in the form

x1²/a² - x2²/b² - x3²/c² = k,  k > 0

Figure 8.3.9  Some degenerate quadric surfaces. (a) An elliptic cylinder x1² + λx2² = 1, parallel to the x3-axis. (b) A hyperbolic cylinder λx1² - x2² = 1, parallel to the x2-axis. (c) Intersecting planes λx1² - x2² = 0.

Figure 8.3.9 shows some degenerate quadric surfaces. Note that paraboloidal surfaces do not appear as graphs of the form Q(x) = k in R³ for the same reason that parabolas do not appear in Table 8.3.1 for R²: their equations contain first-degree terms.
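The nondegenerate rows of Table 8.3.2 can be identified with the same eigenvalue bookkeeping as in the plane; a sketch (the function handles only k ≠ 0 and the nondegenerate sign patterns, and its name is ours):

```python
import numpy as np

def quadric_type(A, k, tol=1e-12):
    """Name the nondegenerate graph of x^T A x = k (k != 0), per Table 8.3.2."""
    w = np.linalg.eigvalsh(A)
    if k < 0:                      # multiply both sides by -1 so that k > 0
        w, k = -w, -k
    pos = int(np.sum(w > tol))
    neg = int(np.sum(w < -tol))
    if pos == 3:
        return "ellipsoid"
    if pos == 2 and neg == 1:
        return "hyperboloid of one sheet"
    if pos == 1 and neg == 2:
        return "hyperboloid of two sheets"
    if neg == 3:
        return "empty set"
    return "degenerate"

# A one-sheet hyperboloid family like the one in Figure 8.3.8, with k = 1
surface = quadric_type(np.diag([4.0, 4.0, -1.0]), 1)
```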

PROBLEMS 8.3
Practice Problems

A1 Sketch the graph of the equation 2x1² + 4x1x2 - x2² = 6. Show both the original axes and the new axes.

A2 Sketch the graph of the equation 2x1² + 6x1x2 + 10x2² = 11. Show both the original axes and the new axes.

A3 Sketch the graph of the equation 4x1² - 6x1x2 + 4x2² = 12. Show both the original axes and the new axes.

A4 Sketch the graph of the equation 5x1² + 6x1x2 - 3x2² = 15. Show both the original axes and the new axes.

A5 For each of the following symmetric matrices, identify the shape of the graph x^T A x = 1 and the shape of the graph x^T A x = -1.


Homework Problems

B1 Sketch the graph of the equation 9x1² + 4x1x2 + 6x2² = 90. Show both the original axes and the new axes.

B2 Sketch the graph of the equation x1² + 6x1x2 - 7x2² = 32. Show both the original axes and the new axes.

B3 Sketch the graph of the equation x1² - 4x1x2 + x2² = 8. Show both the original axes and the new axes.

B4 Sketch the graph of the equation x1² + 4x1x2 + x2² = 8. Show both the original axes and the new axes.

B5 Sketch the graph of the equation 3x1² - 4x1x2 + 3x2² = 32. Show both the original axes and the new axes.

B6 In each of the following cases, diagonalize the quadratic form. Then determine the shape of the surface Q(x) = k for k = 1, 0, -1. Note that two of the quadratic forms are degenerate.
(a) Q(x) = x1² + 4x1x2 + 4x1x3 + 5x2² + 6x2x3 + 5x3²
(b) Q(x) = -x1² + 2x1x2 - 6x1x3 + x2² - 2x2x3 - x3²
(c) Q(x) = 4x1² + 2x1x2 + 5x2² - 2x2x3 + 4x3²
(d) Q(x) = x1² + 4x1x2 + x2²
(e) Q(x) = x1² + 6x1x2 + 2x1x3 + x2² + 2x2x3 + 5x3²

Computer Problems
C1 Identify the following surfaces by using a computer to find the eigenvalues of the corresponding symmetric matrix. Plot graphs of the original system and of the diagonalized system.
(a) x1² - 14x1x2 + 6x1x3 - x2² + 8x2x3 = 10
(b) 3x1² + 10x1x2 + 4x1x3 + 16x2x3 - 6x3² = 37

8.4 Applications of Quadratic Forms


Applying quadratic forms requires some knowledge of calculus and physics. This section may be omitted with no loss of continuity.
Some may think of mathematics as only a set of rules for doing calculations.
However, a theorem such as the Principal Axis Theorem is often important because
it provides a simple way of thinking about complicated situations. The Principal Axis
Theorem plays an important role in the two applications described here.

Small Deformations
A small deformation of a solid body may be understood as the composition of three
stretches along the principal axes of a symmetric matrix together with a rigid rotation
of the body.
Consider a body of material that can be deformed when it is subjected to some external forces. This might be, for example, a piece of steel under some load. Fix an origin of coordinates in the body; to simplify the story, suppose that this origin is left unchanged by the deformation. Suppose that a material point in the body, which is at x before the forces are applied, is moved by the forces to the point f(x) = (f1(x), f2(x), f3(x)); we have assumed that f(0) = 0. The problem is to understand this deformation f so that it can be related to the properties of the body. (Note that f represents the displacement of the point initially at x, not the force at x.)

For many materials under reasonable forces, the deformation is small; this means that the point f(x) is not far from x. It is convenient to introduce a parameter β to

describe how small the deformation is and a function h(x), and write

f(x) = x + βh(x)

This equation is really the definition of h(x) in terms of the point x, the given function f(x), and the parameter β.

For many materials, an arbitrary small deformation is well approximated by its "best linear approximation," the derivative. In this case, the map f: R³ → R³ is approximated near the origin by the linear transformation with matrix [∂fi/∂xj(0)], so that in this approximation, a point originally at v is moved (approximately) to [∂fi/∂xj(0)]v. (This is a standard calculus approximation.)

In terms of the parameter β and the function h, this matrix can be written as

[∂fi/∂xj(0)] = I + βG

where G = [∂hi/∂xj(0)]. In this situation, it is useful to write G as G = E + W, where

E = (G + G^T)/2

is its symmetric part, and

W = (G - G^T)/2

is its skew-symmetric part, as in Problem 8.2.D5.


The next step is to observe that we can write

I + βG = I + β(E + W) = (I + βE)(I + βW) - β²EW

Since β is assumed to be small, β² is very small and may be ignored. (Such treatment of terms like β² can be justified by careful discussion of the limit as β → 0.)

The small deformation we started with is now described as the composition of two linear transformations, one with matrix I + βE and the other with matrix I + βW. It can be shown that I + βW describes a small rigid rotation of the body; a rigid rotation does not alter the distance between any two points in the body. (The matrix I + βW is called an infinitesimal rotation.)
Finally, we have the linear transformation with matrix I + βE. This matrix is symmetric, so there exist principal axes such that the symmetric matrix is diagonalized to

[1 + ε1, 0, 0; 0, 1 + ε2, 0; 0, 0, 1 + ε3]

(It is equivalent to diagonalize βE and add the result to I, because I is transformed to itself under any orthonormal change of coordinates.) Since β is small, it follows that the numbers εj are small in magnitude, and therefore 1 + εj > 0. This diagonalized matrix can be written as the product of the three matrices:

[1 + ε1, 0, 0; 0, 1, 0; 0, 0, 1] [1, 0, 0; 0, 1 + ε2, 0; 0, 0, 1] [1, 0, 0; 0, 1, 0; 0, 0, 1 + ε3]

It is now apparent that, excluding rotation, the small deformation can be represented as the composition of three stretches along the principal axes of the matrix βE. The quantities ε1, ε2, ε3, and β are related to the external and internal forces in the material by elastic properties of the material. (β is called the infinitesimal strain; this notation is not quite the standard notation. This will be important if you read further about this topic in a book on continuum mechanics.)
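The decomposition of a small deformation described above is easy to carry out numerically for a given derivative matrix G; a sketch (β and the entries of G are made-up illustrative values):

```python
import numpy as np

beta = 0.01                                # small deformation parameter
rng = np.random.default_rng(1)
G = rng.standard_normal((3, 3))            # stand-in for the matrix of partials of h

E = (G + G.T) / 2                          # symmetric part: pure strain
W = (G - G.T) / 2                          # skew part: infinitesimal rotation

# Principal stretches 1 + eps_j along the principal axes of beta*E
eps, P = np.linalg.eigh(beta * E)
stretches = 1 + eps
```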


The Inertia Tensor

For the purpose of discussing the rotational motion of a rigid body, information about the mass distribution within the body is summarized in a symmetric matrix N called the inertia tensor. (1) The tensor is easiest to understand if principal axes are used so that the matrix is diagonal; in this case, the diagonal entries are simply the moments of inertia about the principal axes, and the moment of inertia about any other axis can be calculated in terms of these principal moments of inertia. (2) In general, the angular momentum vector J of the rotating body is equal to Nω, where ω is the instantaneous angular velocity vector. The vector J is a scalar multiple of ω if and only if ω is an eigenvector of N, that is, if and only if the axis of rotation is one of the principal axes of the body. This is a beginning to an explanation of how the body wobbles during rotation (ω need not be constant) even though J is a conserved quantity (that is, J is constant if no external force is applied).


Suppose that a rigid body is rotating about some point in the body that remains
fixed in space throughout the rotation. Make this fixed point the origin (0, 0, 0). Sup
pose that there are coordinate axes fixed in space and also three reference axes that are

t, these body axes


t + 6-t, the body axes

fixed in the body (so that they rotate with the body). At any time
make certain angles with respect to the space axes; at a later time

have moved to a new position. Since (0, 0, 0) is fixed and the body is rigid, the body
axes have moved only by a rotation, and it is a fact that any rotation in

JR.3

is deter

u(t + 6-t) and denote


the angle by 6-8. Now let 6-t
0; the unit vector u(t + M) must tend to a limit u(t),
and this determines the instantaneous axis of rotation at time t. Also, as M
0,

'f*, the instantaneous rate of rotation about the axis. The instantaneous
angular velocity is defined to be the vector w = ( 'f*) u(t).

mined by its axis and an angle. Call the unit vector along this axis
-t

-t

-t

(It is a standard exercise to show that the instantaneous linear velocity

some point in the body whose space coordinates are given by

1(t)

v(t)

at

is determined by

v=wx1.)

To use concepts such as energy and momentum in the discussion of rotating motion, it is necessary to introduce moments of inertia.

For a single mass m at the point (x1, x2, x3), the moment of inertia about the x3-axis is defined to be m(x1² + x2²); this will be denoted by n33. The factor (x1² + x2²) is simply the square of the distance of the mass from the x3-axis. There are similar definitions of the moments of inertia about the x1-axis (denoted by n11) and about the x2-axis (denoted by n22).

For a general axis ℓ through the origin with unit direction vector u, the moment of inertia of the mass about ℓ is defined to be m multiplied by the square of the distance of m from ℓ. Thus, if we let x = [x1; x2; x3], the moment of inertia in this case is

m‖perp_u(x)‖² = m[x - (x · u)u]^T [x - (x · u)u]

With some manipulation, using u · u = 1 and x · u = x^T u, we can verify that this is equal to the expression

m(‖x‖² - (x · u)²)

Because of this, for the single point mass m at x, we define the inertia tensor N to be the 3 × 3 matrix

N = m(‖x‖² I - x x^T)


(Vectors and matrices are special kinds of "tensors"; for our present purposes, we simply treat N as a matrix.) With this definition, the moment of inertia about an axis with unit direction u is

u^T N u

It is easy to check that N is the matrix with components n11, n22, and n33 as given above, and for i ≠ j, nij = -m xi xj. It is clear that this matrix N is symmetric because x x^T is a symmetric 3 × 3 matrix. (The term m xi xj is called a product of inertia. This name has no special meaning; the term is simply a product that appears as an entry in the inertia tensor.)
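For a single point mass, the formulas above can be checked directly: with N = m(‖x‖²I - xx^T), the quadratic form u^T N u reproduces m(‖x‖² - (x·u)²), and the off-diagonal entries are -m xi xj. A sketch with made-up values for m and x:

```python
import numpy as np

m = 2.0                                    # made-up mass
x = np.array([1.0, 2.0, 3.0])              # made-up position of the point mass

# Inertia tensor of a single point mass m at x
N = m * (np.dot(x, x) * np.eye(3) - np.outer(x, x))

u = np.array([0.0, 0.0, 1.0])              # unit vector along the x3-axis
moment = u @ N @ u                         # equals m(x1^2 + x2^2) = 2*(1+4) = 10
```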
It is easy to extend the definition of moments of inertia and the inertia tensor
to bodies that are more complicated than a single point mass. Consider a rigid body
that can be thought of as k masses joined to each other by weightless rigid rods. The
moment of inertia of the body about the x3-axis is determined by taking the moment of
inertia about the x3-axis of each mass and simply adding these moments; the moments
about the x1 - and x2-axes, and the products of the inertia are defined similarly. The
inertia tensor of this body is just the sum of the inertia tensors of the k masses; since
it is the sum of symmetric matrices, it is also symmetric. If the mass is distributed
continuously, the various moments and products of inertia are determined by definite
integrals. In any case, the inertia tensor N is still defined, and is still a symmetric
matrix.
Since N is a symmetric matrix, it can be brought into diagonal form by the Principal Axis Theorem. The diagonal entries are then the moments of inertia with respect to the principal axes, and these are called the principal moments of inertia. Denote these by N1, N2, and N3. Let P denote the orthonormal basis consisting of eigenvectors of N (which means these vectors are unit vectors along the principal axes). Suppose an arbitrary axis ℓ is determined by the unit vector u such that [u]_P = (u1, u2, u3). Then, from the discussion of quadratic forms in Section 8.2, the moment of inertia about this axis ℓ is simply

N1 u1² + N2 u2² + N3 u3²

This formula is greatly simplified because of the use of the principal axes.
It is important to get equations for rotating motion that correspond to Newton's equation:

The rate of change of momentum equals the applied force.

The appropriate equation is

The rate of change of angular momentum is the applied torque.

It turns out that the right way to define the angular momentum vector J for a general body is

J(t) = N(t)ω(t)

Note that in general N is a function of t since it depends on the positions at time t of
each of the masses making up the solid body. Understanding the possible motions of
each of the masses making up the solid body. Understanding the possible motions of

392

Chapter 8

Symmetric Matrices and Quadratic Forms

a rotating body depends on determining ω(t), or at least saying something about it. In
general, this is a very difficult problem, but there will often be important simplifications
if N is diagonalized by the Principal Axis Theorem. Note that J(t) is parallel to ω(t) if
and only if ω(t) is an eigenvector of N(t).

PROBLEMS 8.4
Conceptual Problems
D1 Show that if P is an orthogonal matrix that diagonalizes the symmetric matrix βE to a matrix with diagonal entries ε1, ε2, and ε3, then P also diagonalizes (I + βE) to a matrix with diagonal entries 1 + ε1, 1 + ε2, and 1 + ε3.

CHAPTER REVIEW
Suggestions for Student Review
1 How does the theory of diagonalization of symmetric matrices differ from the theory for general square matrices? (Section 8.1)
2 What is a quadratic form? Explain the connection between quadratic forms and symmetric matrices. How do you find the symmetric matrix corresponding to a quadratic form? How does diagonalization of the symmetric matrix enable us to diagonalize the quadratic form? (Section 8.2)
3 List the classifications of a quadratic form. How does diagonalizing the corresponding symmetric matrix help us classify a quadratic form? (Section 8.2)
4 What role do eigenvectors play in helping us understand the graphs of equations Q(x) = k, where Q(x) = xᵀAx? (Section 8.3)
5 Define the principal axes of a symmetric matrix A. How do the principal axes of A relate to the graph of Q(x) = xᵀAx = k? (Section 8.3)
6 When diagonalizing a symmetric matrix A, we know that we can choose the eigenvalues in any order. How would changing the order in which we pick the eigenvalues change the graph of Q(x) = xᵀAx = k? Explain. (Section 8.3)

Chapter Quiz
E1 Let A = … . Find an orthogonal matrix P such that PᵀAP = D is diagonal.
E2 For each of the following quadratic forms Q(x):
(i) Determine the corresponding symmetric matrix A.
(ii) Express Q(x) in diagonal form and give the orthogonal matrix that brings it into this form.
(iii) Classify Q(x).
(iv) Describe the shape of Q(x) = 1 and Q(x) = 0.
(a) Q(x) = 5x1² + 4x1x2 + 5x2²
(b) Q(x) = 2x1² − 6x1x2 − 6x1x3 − 3x2² + 4x2x3 − 3x3²
E3 By diagonalizing the quadratic form, make a sketch of the graph of … in the x1x2-plane. Show the new and old coordinate axes.
E4 Prove that if A is a positive definite symmetric matrix, then ⟨x, y⟩ = xᵀAy is an inner product on ℝⁿ.
E5 Prove that if A is a 4 × 4 symmetric matrix with characteristic polynomial C(λ) = (λ − 3)⁴, then
A = 3I.


Further Problems
F1 In Problem 7.F5, we saw the QR-factorization: an invertible n × n matrix A can be expressed in the form A = QR, where Q is orthogonal and R is upper triangular. Let A1 = RQ, and prove that A1 is orthogonally similar to A and hence has the same eigenvalues as A. (By repeating this process, A = Q1R1, A1 = R1Q1, A1 = Q2R2, A2 = R2Q2, ..., one obtains an effective numerical procedure for determining eigenvalues of a symmetric matrix.)
F2 Suppose that A is an n × n positive semidefinite symmetric matrix. Prove that A has a square root. That is, show that there is a positive semidefinite symmetric matrix B such that B² = A. (Hint: Suppose that Q diagonalizes A to D so that QᵀAQ = D. Define C to be a positive square root for D and let B = QCQᵀ.)
F3 (a) If A is any n × n matrix, prove that AᵀA is symmetric and positive semidefinite. (Hint: Consider Ax · Ax.)
(b) If A is invertible, prove that AᵀA is positive definite.
F4 (a) Suppose that A is an invertible n × n matrix. Prove that A can be expressed as a product of an orthogonal matrix Q and a positive definite symmetric matrix U, A = QU. This is known as a polar decomposition of A. (Hint: Use Problems F2 and F3, let U be the square root of AᵀA, and let Q = AU⁻¹.)
(b) Let V = QUQᵀ. Show that V is symmetric and that A = VQ. Moreover, show that V² = AAᵀ, so that V is a positive definite symmetric square root of AAᵀ.
(c) Suppose that the 3 × 3 matrix A is the matrix of an orientation-preserving linear mapping L. Show that L is the composition of a rotation following three stretches along mutually orthogonal axes. (This follows from part (a), facts about isometries of ℝ³, and ideas in Section 8.4. In fact, this is a finite version of the result for infinitesimal strain in Section 8.4.)
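The eigenvalue procedure sketched in F1 can be tried numerically; this is an illustrative pure-Python version of the QR step for a 2 × 2 matrix (it assumes generic input and uses no shifts or pivoting):

```python
import math

def qr_step(a):
    """One step A -> RQ of the QR iteration, for a 2 x 2 matrix A = QR."""
    (a11, a12), (a21, a22) = a
    # Gram-Schmidt: q1 is the normalized first column of A
    n1 = math.hypot(a11, a21)
    q1 = (a11 / n1, a21 / n1)
    # remove the q1-component from column 2, then normalize to get q2
    r12 = q1[0] * a12 + q1[1] * a22
    v = (a12 - r12 * q1[0], a22 - r12 * q1[1])
    n2 = math.hypot(v[0], v[1])
    q2 = (v[0] / n2, v[1] / n2)
    # with R = [[n1, r12], [0, n2]], return the product R Q
    return [[n1 * q1[0] + r12 * q1[1], n1 * q2[0] + r12 * q2[1]],
            [n2 * q1[1], n2 * q2[1]]]

A = [[2.0, 1.0], [1.0, 2.0]]      # symmetric, with eigenvalues 3 and 1
for _ in range(30):
    A = qr_step(A)
print(A[0][0], A[1][1])           # diagonal entries approach 3 and 1
```

Each step produces a matrix orthogonally similar to the previous one, so the eigenvalues are preserved while the off-diagonal entries shrink.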


CHAPTER 9

Complex Vector Spaces

CHAPTER OUTLINE
9.1 Complex Numbers
9.2 Systems with Complex Numbers
9.3 Vector Spaces over ℂ
9.4 Eigenvectors in Complex Vector Spaces
9.5 Inner Products in Complex Vector Spaces
9.6 Hermitian Matrices and Unitary Diagonalization

When they first encounter imaginary numbers, many students wonder why we look at
numbers that are not real. Complex numbers were not taken seriously until Rafael Bombelli showed in 1572 that
numbers involving square roots of negative numbers can be used to solve real-world
problems. Currently, complex numbers are used to solve problems in a wide variety of
areas. Some examples are electronics, control theory, quantum mechanics, and fluid
dynamics. Our goal in this chapter is to extend everything we did in Chapters 1-8 to
allow for the use of complex numbers instead of just real numbers.

9.1 Complex Numbers


The first numbers we encounter as children are the natural numbers 1, 2, 3, and so on.
In school, we soon find that in order to perform certain subtractions, we must extend
our concept of number to the integers, which include the natural numbers. Then, so
that division can always be carried out, we extend the concept of number to the rational
numbers, which include the integers. Next we have to extend our understanding to the
real numbers, which include all the rationals and also include irrational numbers.
To solve the equation x² + 1 = 0, we have to extend our concept of number one
more time. We define the number i to be a number such that i² = −1. The system of
numbers of the form x + yi, where x, y ∈ ℝ, is called the complex numbers. Note that the
real numbers are included as those complex numbers with y = 0. As in the case with
all the previous extensions of our understanding of number, some people are initially
uncertain about the meaning of the "new" numbers. However, the complex numbers
have a consistent set of rules of arithmetic, and the extension to complex numbers is
justified by the fact that they allow us to solve important mathematical and physical
problems that we could not solve using only real numbers.

The Arithmetic of Complex Numbers


Definition (Complex Number)
A complex number is a number of the form z = x + yi, where x, y ∈ ℝ, and i is an element such that i² = −1. The set of all complex numbers is denoted by ℂ.


Addition of complex numbers z1 = x1 + y1 i and z2 = x2 + y2 i is defined by

z1 + z2 = (x1 + x2) + (y1 + y2)i

Multiplication of complex numbers z1 = x1 + y1 i and z2 = x2 + y2 i is defined by

z1 z2 = (x1 + y1 i)(x2 + y2 i)
      = x1 x2 + x1 y2 i + x2 y1 i + y1 y2 i²
      = (x1 x2 − y1 y2) + (x1 y2 + x2 y1)i

EXAMPLE 1
Perform the following operations.

(a) (2 + 3i) + (5 − 4i)
Solution: (2 + 3i) + (5 − 4i) = 2 + 5 + (3 − 4)i = 7 − i

(b) (2 + 3i) − (5 − 4i)
Solution: (2 + 3i) − (5 − 4i) = 2 − 5 + (3 − (−4))i = −3 + 7i

(c) (3 − 2i)(−2 + 5i)
Solution: (3 − 2i)(−2 + 5i) = [3(−2) − (−2)(5)] + [3(5) + (−2)(−2)]i = 4 + 19i

EXERCISE 1
Calculate the following:

(a) (1 − 4i) + (2 + 5i)
(b) (2 + 2i)i
(c) (1 − 3i)(2 + i)
(d) (3 − 2i)(3 + 2i)
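These definitions agree with the arithmetic of Python's built-in complex type, which can be used to check the computations in Example 1 (Python writes the imaginary unit as j):

```python
# Python's complex type follows the same addition and multiplication rules.
a = 2 + 3j          # 2 + 3i
b = 5 - 4j          # 5 - 4i

print(a + b)                 # (7-1j)
print(a - b)                 # (-3+7j)
print((3 - 2j) * (-2 + 5j))  # (4+19j)
```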

Remarks
1. Notice that for a complex number z = x + yi, we have that z = 0 if and only if
x = 0 and y = 0.
2. If z = x + yi, we say that the real part of z is x and write Re(z) = x. We say that
the imaginary part of z is y (not yi), and we write Im(z) = y. It is important to
remember that the imaginary part of z is a real number.
3. If x = 0, z = yi is said to be "purely imaginary." If y = 0, z = x is "purely
real." Notice that the real numbers are the subset of ℂ of purely real complex
numbers.
4. It is best not to use √−1 as a notation for the number i; doing so can lead to
confusion in some cases. In physics and engineering, it is common to use j in
place of i since the letter i is often used to denote electric current.
5. It is sometimes convenient to write x + iy instead of x + yi. This is particularly
common with the polar form for complex numbers, which is discussed below.


The Complex Conjugate and Division


We have not yet discussed division of complex numbers. From the multiplication in
Example 1, we can say that

(4 + 19i)/(3 − 2i) = −2 + 5i

In order to give a systematic method for expressing the quotient of two complex numbers as a complex number in standard form, it is useful to introduce the complex conjugate.

Definition (Complex Conjugate)
The complex conjugate of the complex number z = x + yi is x − yi and is denoted z̄ = x − yi.

EXAMPLE 2
The conjugate of 2 + 5i is 2 − 5i.
The conjugate of −3 − 2i is −3 + 2i.
The conjugate of x is x, for any x ∈ ℝ.

Theorem 1 [Properties of the Complex Conjugate]
For complex numbers z1 = x1 + y1 i and z2, with x1, y1 ∈ ℝ, we have
(1) conj(z̄1) = z1
(2) z1 is purely real if and only if z̄1 = z1
(3) z1 is purely imaginary if and only if z̄1 = −z1
(4) conj(z1 + z2) = z̄1 + z̄2
(5) conj(z1 z2) = z̄1 z̄2
(6) conj(z1/z2) = z̄1/z̄2, provided z2 ≠ 0
(7) z1 + z̄1 = 2 Re(z1) = 2x1
(8) z1 − z̄1 = 2i Im(z1) = 2i y1
(9) z1 z̄1 = x1² + y1²

EXERCISE 2
Prove properties (1), (2), and (4) in Theorem 1.

The proofs of the remaining properties are left as Problem D1.


The quotient of two complex numbers can now be displayed as a complex number in standard form by multiplying both the numerator and the denominator by the
complex conjugate of the denominator and simplifying. If z1 = x1 + y1 i and z2 = x2 + y2 i ≠ 0, then

z1/z2 = (z1 z̄2)/(z2 z̄2) = ((x1x2 + y1y2) + (x2y1 − x1y2)i)/(x2² + y2²)

Notice that the quotient is defined for every pair of complex numbers z1, z2, provided
that the denominator is not zero.

EXAMPLE 3

(2 + 5i)/(3 − 4i) = ((2 + 5i)(3 + 4i))/((3 − 4i)(3 + 4i)) = ((6 − 20) + (8 + 15)i)/(9 + 16) = −14/25 + (23/25)i

EXERCISE 3
Calculate the following quotients.

(a) (1 + i)/(1 − i)
(b) 1/(1 + i)
(c) (4 − i)/(1 + 5i)
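The conjugate method is easy to check with Python's built-in complex numbers; a small sketch reproducing Example 3:

```python
# Standard form of a quotient: multiply top and bottom by the conjugate
# of the denominator.
z1 = 2 + 5j
z2 = 3 - 4j

denom = (z2 * z2.conjugate()).real     # |z2|^2 = 9 + 16 = 25
q = z1 * z2.conjugate() / denom
print(q)            # (-0.56+0.92j), i.e. -14/25 + (23/25)i
print(z1 / z2)      # built-in division gives the same value
```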

Roots of Polynomial Equations

Complex conjugates are not only used to determine quotients of complex numbers, but
occur naturally as roots of polynomials with real coefficients.

Theorem 2
Let p(x) = a_n xⁿ + ⋯ + a_1 x + a_0, where a_i ∈ ℝ for 0 ≤ i ≤ n. If z is a root of p(x),
then z̄ is also a root of p(x).

Proof: Suppose that z is a root, so that

a_n zⁿ + ⋯ + a_1 z + a_0 = 0

Using Theorem 1 and the fact that the coefficients a_i are real, taking the complex conjugate of both sides gives

a_n z̄ⁿ + ⋯ + a_1 z̄ + a_0 = 0

Thus, z̄ is a root of p(x).

EXAMPLE 4
Find the roots of p(x) = x³ + 1.

Solution: By the Rational Roots Theorem (or by observation), we see that x = −1 is
a root of p(x). Therefore, by the Factor Theorem, (x + 1) is a factor of p(x). Thus,

x³ + 1 = (x + 1)(x² − x + 1)

Using the quadratic formula, we find that the other roots are

z = (1 + √3 i)/2  and  z̄ = (1 − √3 i)/2
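The conjugate pair in Example 4 can be confirmed numerically; here cmath.sqrt(-3) produces √3 i:

```python
import cmath

# Roots of p(x) = x^3 + 1: the real root -1 and a conjugate pair.
roots = [-1, (1 + cmath.sqrt(-3)) / 2, (1 - cmath.sqrt(-3)) / 2]
for z in roots:
    print(z, abs(z**3 + 1))   # each |p(z)| should be (numerically) 0
```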


The Complex Plane

For some purposes, it is convenient to represent the complex numbers as ordered pairs
of real numbers: instead of z = x + yi, we write z = (x, y). Then addition and multiplication appear as follows:

z1 + z2 = (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2)
z1 z2 = (x1, y1)(x2, y2) = (x1x2 − y1y2, x1y2 + x2y1)

In terms of this ordered pair notation, it is natural to represent complex numbers as
points in the plane, with the real part of z being the x-coordinate and the imaginary
part being the y-coordinate. We will speak of the real axis and the imaginary axis in
the complex plane. See Figure 9.1.1. A picture of this kind is sometimes called an
Argand diagram.

Figure 9.1.1  The complex plane, with z = (x, y) and its conjugate (x, −y) shown against the real and imaginary axes.

If c is purely real, then cz = c(x, y) = (cx, cy). Thus, with respect to addition and
scalar multiplication by real scalars, the complex plane is just like the usual plane ℝ².
However, in the complex plane, we can also multiply by complex scalars. This has a
natural geometrical interpretation, which we shall see in the next section.

EXERCISE 4
Plot the following complex numbers in the complex plane.

(a) 2 + i
(b) conj(2 + i)
(c) (2 + i)i
(d) (2 + i)(1 + i)

Polar Form

Definition (Modulus, Argument, Polar Form)
Given a complex number z = x + yi, the real number

|z| = √(x² + y²)

is called the modulus of z. If |z| ≠ 0, let θ be the angle measured counterclockwise
from the positive x-axis such that

x = r cos θ  and  y = r sin θ

where r = |z|.


The angle θ is unique up to a multiple of 2π, and it is called an argument of z. As
shown in Figure 9.1.2, a polar form of z is

z = r(cos θ + i sin θ)

Figure 9.1.2  Polar representation of z = x + iy = r cos θ + i r sin θ.

EXAMPLE 5

Determine the modulus, an argument, and a polar form of z1 = 2 − 2i and z2 = −1 + √3 i.

Solution: We have

|z1| = |2 − 2i| = √(2² + 2²) = 2√2

Any argument θ satisfies

2 = 2√2 cos θ  and  −2 = 2√2 sin θ

so cos θ = 1/√2 and sin θ = −1/√2, which gives θ = −π/4 + 2πk, k ∈ ℤ. Hence, a polar form
of z1 is

z1 = 2√2 (cos(−π/4) + i sin(−π/4))

For z2, we have

|z2| = |−1 + √3 i| = √((−1)² + (√3)²) = 2

Since −1 = 2 cos θ and √3 = 2 sin θ, we get θ = 2π/3 + 2πk, k ∈ ℤ. Thus, a polar form
of z2 is

z2 = 2 (cos(2π/3) + i sin(2π/3))
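The modulus and an argument can be computed with cmath.polar, which returns the pair (r, θ) with θ in [−π, π]; checking Example 5:

```python
import cmath, math

r1, t1 = cmath.polar(2 - 2j)
print(r1, t1)     # 2*sqrt(2) and -pi/4

r2, t2 = cmath.polar(-1 + math.sqrt(3) * 1j)
print(r2, t2)     # 2 and 2*pi/3
```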

Remarks
1. An important consequence of the definition is

|z|² = x² + y² = z z̄

2. The angles may be measured in radians or degrees. We will always use radians.
3. Notice that every complex number z has infinitely many arguments and hence
infinitely many polar forms.
4. It is tempting but incorrect to write θ = arctan(y/x). Remember that you need
two trigonometric functions to locate the correct quadrant for z. Also note that
y/x is not defined if x = 0.

EXERCISE 5
Determine the modulus, an argument, and a polar form of z1 = −√3 + i and z2 = −1 − i.

EXERCISE 6
Let z = r(cos θ + i sin θ). Prove that the modulus of z̄ equals the modulus of z and that an
argument of z̄ is −θ.

The polar form is particularly convenient for multiplication and division because
of the trigonometric identities

cos(θ1 + θ2) = cos θ1 cos θ2 − sin θ1 sin θ2
sin(θ1 + θ2) = sin θ1 cos θ2 + cos θ1 sin θ2

It follows that

z1 z2 = r1(cos θ1 + i sin θ1) r2(cos θ2 + i sin θ2)
      = r1 r2 ((cos θ1 cos θ2 − sin θ1 sin θ2) + i(cos θ1 sin θ2 + sin θ1 cos θ2))
      = r1 r2 (cos(θ1 + θ2) + i sin(θ1 + θ2))

In words, the modulus of a product is the product of the moduli of the factors, while
an argument of a product is the sum of the arguments.

Theorem 3
For any complex numbers z1 = r1(cos θ1 + i sin θ1) and z2 = r2(cos θ2 + i sin θ2), with
z2 ≠ 0, we have

z1/z2 = (r1/r2)(cos(θ1 − θ2) + i sin(θ1 − θ2))

The proof is left for you to complete in Problem D4.

Corollary 4
Let z = r(cos θ + i sin θ) with r ≠ 0. Then z⁻¹ = (1/r)(cos(−θ) + i sin(−θ)).

EXERCISE 7
Describe Theorem 3 in words and use it to prove Corollary 4.

EXAMPLE 6

Calculate (1 − i)(−√3 + i) and (2 + 2i)/(1 + √3 i) using polar form.

Solution: We have

(1 − i)(−√3 + i) = √2(cos(−π/4) + i sin(−π/4)) · 2(cos(5π/6) + i sin(5π/6))
                 = 2√2(cos(7π/12) + i sin(7π/12))
                 ≈ −0.732 + i(2.732)

and

(2 + 2i)/(1 + √3 i) = [2√2(cos(π/4) + i sin(π/4))] / [2(cos(π/3) + i sin(π/3))]
                    = √2(cos(−π/12) + i sin(−π/12))
                    ≈ 1.366 + i(−0.366)

EXERCISE 8
Calculate (2 − 2i)(−1 + √3 i) and (2 − 2i)/(−1 + √3 i) using polar form.

Powers and the Complex Exponential

From the rule for products, we find that

z² = r²(cos 2θ + i sin 2θ)

Then

z³ = z²z = r²r(cos(2θ + θ) + i sin(2θ + θ)) = r³(cos 3θ + i sin 3θ)

Theorem 5 [de Moivre's Formula]
Let z = r(cos θ + i sin θ) with r ≠ 0. Then, for any integer n, we have

zⁿ = rⁿ(cos nθ + i sin nθ)

Proof: For n = 0, we have z⁰ = 1 = r⁰(cos 0 + i sin 0). To prove that the theorem holds
for positive integers, we proceed by induction. Assume that the result is true for some
integer k ≥ 0. Then

z^(k+1) = z^k z = r^k r[cos(kθ + θ) + i sin(kθ + θ)] = r^(k+1)[cos((k + 1)θ) + i sin((k + 1)θ)]

Therefore, the result is true for all non-negative integers n. Then, by Corollary 4, for
any positive integer m, we have

z^(−m) = (z^m)^(−1) = (r^m(cos mθ + i sin mθ))^(−1) = r^(−m)(cos(−mθ) + i sin(−mθ))

Hence, the result also holds for all negative integers n = −m.

EXAMPLE 7
Calculate (2 + 2i)³.

Solution:

(2 + 2i)³ = [2√2(cos(π/4) + i sin(π/4))]³
          = (2√2)³(cos(3π/4) + i sin(3π/4))
          = 16√2(−1/√2 + i(1/√2))
          = −16 + 16i
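De Moivre's Formula can be tested against direct multiplication; a quick check of Example 7:

```python
import cmath, math

r, theta = cmath.polar(2 + 2j)
z3 = r**3 * (math.cos(3 * theta) + 1j * math.sin(3 * theta))
print(z3)              # approximately (-16+16j)
print((2 + 2j)**3)     # (-16+16j) by repeated multiplication
```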

In the case where r = 1, de Moivre's Formula reduces to

(cos θ + i sin θ)ⁿ = cos nθ + i sin nθ

This is formally just like one of the exponential laws, (eᵃ)ⁿ = eᵃⁿ. We use this idea
to define e^z for any z ∈ ℂ, where e is the usual natural base for exponentials
(e ≈ 2.71828). We begin with a useful formula of Euler.

Definition (Euler's Formula)

e^{iθ} = cos θ + i sin θ

Definition
For any complex number z = x + iy, we define

e^z = e^x e^{iy} = e^x(cos y + i sin y)

Remarks
1. One interesting consequence of Euler's Formula is

e^{iπ} + 1 = 0

In one formula, we have five of the most important numbers in mathematics: 0,
1, e, i, and π.

2. One area where Euler's Formula has important applications is ordinary differential equations. There, one often uses the fact that

e^{(a+bi)t} = e^{at} e^{ibt} = e^{at}(cos bt + i sin bt)


Observe that Euler's Formula allows us to write every complex number z in the
form

z = re^{iθ}

where r = |z| and θ is any argument of z. Hence, in this form, de Moivre's Formula
becomes

zⁿ = rⁿ e^{inθ}
EXAMPLE 8
Calculate the following using the polar form.

(a) (2 + 2i)³
Solution: (2 + 2i)³ = (2√2 e^{iπ/4})³ = (2√2)³ e^{i(3π/4)} = −16 + 16i

(b) (2i)³
Solution: (2i)³ = (2e^{iπ/2})³ = 2³ e^{i(3π/2)} = −8i

(c) (√3 + i)⁵
Solution: (√3 + i)⁵ = (2e^{iπ/6})⁵ = 2⁵ e^{i5π/6} = −16√3 + 16i

EXERCISE 9
Use polar form to calculate (1 − i)⁵ and (−1 − √3 i)⁵.

n-th Roots
Using de Moivre's Formula for n-th powers is the key to finding n-th roots. Suppose
that we need to find the n-th root of the non-zero complex number z = re^{iθ}. That is, we
need a number w such that wⁿ = z. Suppose that w = Re^{iφ}. Then wⁿ = z implies that

Rⁿ e^{inφ} = re^{iθ}

Then R is the real n-th root of the positive real number r. However, because arguments
of complex numbers are determined only up to the addition of 2πk, all we can say
about φ is that

nφ = θ + 2πk,  k ∈ ℤ

or

φ = (θ + 2πk)/n,  k ∈ ℤ

EXAMPLE 9

Find all the cube roots of 8.

Solution: We have 8 = 8e^{i(0+2πk)}, k ∈ ℤ. Thus, for any k ∈ ℤ,

w = 8^{1/3} e^{i(2πk)/3} = 2e^{i(2πk)/3}

If k = 0, we have the root w0 = 2e⁰ = 2.
If k = 1, we have the root w1 = 2e^{i2π/3} = −1 + √3 i.
If k = 2, we have the root w2 = 2e^{i4π/3} = −1 − √3 i.
If k = 3, we have the root 2e^{i2π} = 2 = w0.

By increasing k further, we simply repeat the roots we have already found. Similarly, consideration of negative k gives us no further roots. The number 8 has three third
roots, w0, w1, and w2. In particular, these are the roots of the equation w³ − 8 = 0.
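The three cube roots of 8 can be generated straight from the formula w_k = 2e^{i2πk/3}:

```python
import cmath, math

roots = [2 * cmath.exp(1j * 2 * math.pi * k / 3) for k in range(3)]
for w in roots:
    print(w, abs(w**3 - 8))   # each w is a cube root: |w^3 - 8| is ~0
```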

Theorem 6
Let z = re^{iθ} be a non-zero complex number. Then the n distinct n-th roots of z are

w_k = r^{1/n} e^{i(θ+2πk)/n},  k = 0, 1, ..., n − 1

EXAMPLE 10
Find the fourth roots of −81.

Solution: We have −81 = 81e^{i(π+2πk)}. Thus, the fourth roots are

(81)^{1/4} e^{i(π+2πk)/4} = 3e^{i(π+2πk)/4},  k = 0, 1, 2, 3

In Examples 9 and 10, we took roots of numbers that were purely real: we were
really solving xⁿ − a = 0, where a ∈ ℝ. By our earlier theorem, when the coefficients
of the equation are real, the roots that are not real occur in complex conjugate pairs.
As a contrast, let us consider roots of a number that is not real.

EXAMPLE 11
Find the third roots of 5i and illustrate in an Argand diagram.

Solution: 5i = 5e^{i(π/2+2kπ)}, so the cube roots are

w0 = 5^{1/3} e^{iπ/6} = 5^{1/3}(√3/2 + i(1/2))
w1 = 5^{1/3} e^{i5π/6} = 5^{1/3}(−√3/2 + i(1/2))
w2 = 5^{1/3} e^{i3π/2} = 5^{1/3}(−i)

Plotting these roots shows that all three are points on the circle of radius 5^{1/3} centred at the origin and that they are separated by equal angles of 2π/3.

Examples 9, 10, and 11 all illustrate a general rule: the n-th roots of a complex
number z = re^{iθ} all lie on the circle of radius r^{1/n}, and they are separated by equal
angles of 2π/n.


PROBLEMS 9.1
Practice Problems
A1 Determine the following sums or differences.
(a) (2 + 5i) + (3 + 2i)
(b) (2 − 7i) + (−5 + 3i)
(c) (−3 + 5i) − (4 + 3i)
(d) (−5 − 6i) − (9 − 11i)
A2 Express the following products in standard form.
(a) (1 + 3i)(3 − 2i)
(b) (−2 − 4i)(3 − i)
(c) (1 − 6i)(−4 + i)
(d) (−1 − i)(1 − i)
A3 Determine the complex conjugates of the following numbers.
(a) 3 − 5i
(b) 2 + 7i
(c) 3
(d) −4i
A4 Determine the real and imaginary parts of the following.
(a) z = 3 − 6i
(b) z = (2 + 5i)(1 − 3i)
(c) z = 4/(6 − i)
(d) z = −1/(2 − 5i)
A5 Express the following quotients in standard form.
(a) 3/(2 + 3i)
(b) 1/(2 − 7i)
(c) …/(3 + 2i)
(d) (1 + 6i)/(4 − i)
A6 Use polar form to determine z1z2 and z1/z2 if
(a) z1 = 1 + i, z2 = 1 + √3 i
(b) z1 = −√3 − i, z2 = 1 − i
(c) z1 = 1 + 2i, z2 = −2 − 3i
(d) z1 = −3 + i, z2 = 6 − i
A7 Use polar form to determine the following.
(a) (1 + i)⁴
(b) (3 − 3i)³
(c) (−1 − √3 i)⁴
(d) (−2√3 + 2i)⁵
A8 Use polar form to determine all the indicated roots.
(a) (−1)^{1/5}
(b) (−16i)^{1/4}
(c) (−√3 − i)^{1/3}
(d) (1 + 4i)^{1/3}

Homework Problems
B1 Determine the following sums or differences.
(a) (3 + 4i) + (1 + 5i)
(b) (3 − 2i) + (−7 + 6i)
(c) (−5 + 7i) − (2 + 6i)
(d) (−7 − 2i) − (−8 − 9i)
B2 Express the following products in standard form.
(a) (2 + i)(5 − 3i)
(b) (−3 − 2i)(5 − 2i)
(c) (3 − 5i)(−1 + 6i)
(d) (−3 − i)(3 − i)
B3 Determine the complex conjugates of the following numbers.
(a) 2i
(b) 17
(c) 4 − 8i
(d) 5 + 11i
B4 Determine the real and imaginary parts of the following.
(a) 4 − 7i
(b) (3 + 2i)(2 − 3i)
(c) 5/(4 − i)
(d) (1 − 2i)/(1 + i)
B5 Express the following quotients in standard form.
(a) 1/(3 + 4i)
(b) 2/(3 − 5i)
(c) (1 − 4i)/(3 + 5i)
(d) (1 + 4i)/(4 − 5i)
B6 Use polar form to determine z1z2 and z1/z2 if
(a) z1 = 1 − √3 i, z2 = −1 + i
(b) z1 = −√3 + i, z2 = −3 − 3i
(c) z1 = 1 + 3i, z2 = −1 − 2i
(d) z1 = −2 + i, z2 = 4 − i

B7 Use polar form to determine the following.
(a) (1 − √3 i)⁴
(b) (−2 − 2i)³
(c) (−√3 − i)⁴
(d) (−2 + 2√3 i)⁵
B8 Use polar form to determine all the indicated roots.
(a) (32)^{1/5}
(b) (81i)^{1/5}
(c) (−√3 + i)^{1/3}
(d) (4 + i)^{1/3}

Conceptual Problems
D1 Prove properties (3), (5), (6), (7), (8), and (9) of Theorem 1.
D2 If z = r(cos θ + i sin θ), what is |z̄|? What is an argument of z̄?
D3 Use Euler's Formula to show that
(a) conj(e^{iθ}) = e^{−iθ}
(b) cos θ = (e^{iθ} + e^{−iθ})/2
(c) sin θ = (e^{iθ} − e^{−iθ})/(2i)
D4 Prove Theorem 3.

9.2 Systems with Complex Numbers

In some applications, it is necessary to consider systems of linear equations with complex coefficients and complex right-hand sides. One physical application, discussed
later in this section, is the problem of determining currents in electrical circuits with
capacitors and inductive coils as well as resistance. We can solve systems with complex coefficients by using exactly the same elimination/row reduction procedures as
for systems with real coefficients. Of course, our solutions will be complex, and any
free variables will be allowed to take any complex value.

EXAMPLE 1
Solve the system of linear equations

z1 + z2 + z3 = 0
(1 − i)z1 + z2 = i
(3 − i)z1 + 2z2 + z3 = 1 + 2i

Solution: The solution procedure is, as usual, to write the augmented matrix for the
system and row reduce the coefficient matrix to row echelon form:

[ 1     1     1    |  0    ]
[ 1−i   1     0    |  i    ]   R2 − (1 − i)R1
[ 3−i   2     1    |  1+2i ]   R3 − (3 − i)R1

[ 1     1     1    |  0    ]
[ 0     i    −1+i  |  i    ]   −iR2
[ 0    −1+i  −2+i  |  1+2i ]

[ 1     1     1    |  0    ]
[ 0     1     1+i  |  1    ]   R3 + (1 − i)R2
[ 0    −1+i  −2+i  |  1+2i ]

[ 1     1     1    |  0    ]
[ 0     1     1+i  |  1    ]   −iR3
[ 0     0     i    |  2+i  ]

[ 1     1     1    |  0    ]   R1 − R2
[ 0     1     1+i  |  1    ]
[ 0     0     1    |  1−2i ]

[ 1     0    −i    | −1    ]   R1 + iR3
[ 0     1     1+i  |  1    ]   R2 − (1 + i)R3
[ 0     0     1    |  1−2i ]

[ 1     0     0    |  1+i  ]
[ 0     1     0    | −2+i  ]
[ 0     0     1    |  1−2i ]

Hence, the solution is z = (1 + i, −2 + i, 1 − 2i).
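A quick numerical check of a complex system is back-substitution of the solution; a sketch for the system in Example 1 (Python writes i as j):

```python
# Check that z = (1+i, -2+i, 1-2i) satisfies
#   z1 + z2 + z3 = 0
#   (1 - i)z1 + z2 = i
#   (3 - i)z1 + 2 z2 + z3 = 1 + 2i
z1, z2, z3 = 1 + 1j, -2 + 1j, 1 - 2j

print(z1 + z2 + z3)                    # 0j
print((1 - 1j) * z1 + z2)              # 1j
print((3 - 1j) * z1 + 2 * z2 + z3)     # (1+2j)
```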

EXAMPLE 2

Solve the system

(1 − i)z1 + 2iz2 + … = 1
… + z3 = −(1/2) − (1/2)i
z1 − z3 = 0

Solution: Row reducing the augmented matrix gives

[ 1   0   −1       |  0  ]
[ 0   1   (1−i)/2  | −i  ]
[ 0   0    0       |  0  ]

Hence, z3 is a free variable, so we let z3 = t ∈ ℂ. Then z1 = z3 = t, z2 = −i − ((1 − i)/2)t, and
the general solution is

z = (0, −i, 0) + t (1, −(1 − i)/2, 1),  t ∈ ℂ

EXERCISE 1
Solve the system

iz1 + z2 + 3z3 = −1 − 2i
iz1 + iz2 + (1 + 2i)z3 = 2 + i
2z1 + (1 + i)z2 + 2z3 = 5 − i

Complex Numbers in Electrical Circuit Equations

This application requires some knowledge of calculus and physics. It can be omitted
with no loss of continuity.

For purposes of the following discussion only, we switch to a notation commonly
used by engineers and physicists and denote by j the complex number such that
j² = −1, so that we can use i to denote current.

In Section 2.4, we discussed electrical circuits with resistors. We now also consider capacitors and inductors, as well as alternating current. A simple capacitor can be
thought of as two conducting plates separated by a vacuum or some dielectric. Charge
can be stored on these plates, and it is found that the voltage across a capacitor at
time t is proportional to the charge stored at that time:

V(t) = (1/C) Q(t)

where Q is the charge and the constant C is called the capacitance of the capacitor.


The usual model of an inductor is a coil; because of magnetic effects, it is found
that with time-varying current i(t), the voltage across an inductor is proportional to the
rate of change of current:

V(t) = L (di(t)/dt)

where the constant of proportionality L is called the inductance.

As in the case of the resistor circuits, Kirchhoff's Laws apply: the sum of the
voltage drops across the circuit elements must be equal to the applied electromotive
force (voltage). Thus, for a simple loop with inductance L, capacitance C, resistance R,
and applied electromotive force E(t) (Figure 9.2.3), the circuit equation is

L (di(t)/dt) + R i(t) + (1/C) Q(t) = E(t)

Figure 9.2.3  Kirchhoff's voltage law applied to an alternating current circuit.

For our purposes, it is easier to work with the derivative of this equation and use
the fact that dQ/dt = i:

L (d²i(t)/dt²) + R (di(t)/dt) + (1/C) i(t) = dE(t)/dt

In general, the solution to such an equation will involve the superposition (sum)
of a steady-state solution and a transient solution. Here we will be looking only for the
steady-state solution, in the special case where the applied electromotive force, and
hence any current, is a single-frequency sinusoidal function. Thus, we can assume that

E(t) = Be^{jωt}  and  i(t) = Ae^{jωt}

where A and B are complex numbers that determine the amplitudes and phases of
voltage and current, and ω is 2π multiplied by the frequency. Then

di/dt = jωAe^{jωt} = jωi
d²i/dt² = (jω)²i = −ω²i

and the circuit equation can be rewritten

−ω²Li + jωRi + (1/C)i = dE/dt
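Substituting these forms reduces the differential equation to one algebraic equation for the amplitude, (−ω²L + jωR + 1/C)A = jωB. A numerical sketch, with hypothetical component values chosen only for illustration:

```python
import math

# Hypothetical component values (not from the text).
L, R, C = 0.5, 10.0, 1e-3     # inductance (H), resistance (ohm), capacitance (F)
B = 120.0                     # electromotive force amplitude
w = 2 * math.pi * 60          # angular frequency for a 60 Hz source

# (-w^2 L + j w R + 1/C) A = j w B  =>  solve for the complex amplitude A
A = 1j * w * B / (-w**2 * L + 1j * w * R + 1 / C)
print(abs(A))                 # amplitude of the steady-state current i(t)
```

The modulus and argument of A give the amplitude and phase shift of the steady-state current relative to the source.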


Now consider a network of circuits with resistors, capacitors, inductors, electromotive
force, and currents, as shown in Figure 9.2.4. As in Section 2.4, the currents are loops,
so that the actual current across some circuit elements is the difference of two loop
currents. (For example, across R1, the actual current is i1 − i2.) From our assumption
that we have only one single-frequency source, we may conclude that the steady-state
loop currents must be of the form

i_k(t) = A_k e^{jωt},  k = 1, 2, 3

Figure 9.2.4  An alternating current network.

By applying Kirchhoff's laws to the top-left loop, we find that

[−ω²L1 + 1/C1 + jωR1]A1 − jωR1A2 + ω²L1A3 = −jωB

If we write the corresponding equations for the other two loops, reorganize each equation, and divide out the non-zero common factor e^{jωt}, we obtain the following system
of linear equations for the three variables A1, A2, A3:

[−ω²L1 + 1/C1 + jωR1]A1 − jωR1A2 + ω²L1A3 = −jωB
−jωR1A1 + [−ω²L2 + 1/C2 + jω(R1 + R2 + R3)]A2 − jωR3A3 = jωB
ω²L1A1 − jωR3A2 + [−ω²(L1 + L3) + jωR3]A3 = 0

Thus, we have a system of three linear equations with complex coefficients for the
three variables A1, A2, and A3. We can solve this system by standard elimination. We
emphasize that this example is for illustrative purposes only: we have constructed a
completely arbitrary network and provided the solution method for only part of the
problem, in a special case. A much more extensive discussion is required before a
reader will be ready to start examining realistic circuits to discover what they can do.
But even this limited example illustrates the general point that to analyze some electrical networks, we need to solve systems of linear equations with complex coefficients.
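Standard elimination works over ℂ exactly as over ℝ; a minimal Gauss-Jordan sketch in pure Python (it assumes a unique solution exists; the sample system is illustrative, not taken from the network above):

```python
# Gauss-Jordan elimination over the complex numbers (illustrative sketch).

def solve(aug):
    """Solve a linear system given as an augmented matrix of complex numbers."""
    n = len(aug)
    for col in range(n):
        # find a row with a non-zero entry in this column and swap it up
        pivot = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # scale the pivot row, then eliminate the column from the other rows
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[-1] for row in aug]

# i z1 + z2 = 1 + i,  2 z1 - i z2 = 3   (solution: z1 = 2 + i, z2 = 2 - i)
sol = solve([[1j, 1, 1 + 1j],
             [2, -1j, 3]])
print(sol)
```

The only change from the real case is that the pivots, multipliers, and right-hand sides are complex numbers.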


PROBLEMS 9.2
Practice Problems
A1 Determine whether each system is consistent, and if it is, determine the general solution.
(a) z1 + iz2 + (1 + i)z3 = −i
−2z1 + (1 − 2i)z2 − 2z3 = 2i
2iz1 − 2z2 − (2 + 3i)z3 = 3i
(b) z1 + (1 + i)z2 + 2z3 + z4 = −i
2z1 + (2 + i)z2 + 5z3 + (2 + i)z4 = 4 − i
iz1 + (−1 + i)z2 + (1 + 2i)z3 + 2iz4 = −1

Homework Problems
B1 Determine whether each system is consistent, and if it is, determine the general solution.
(a) z2 − iz3 = 3i
iz1 − z2 + (−1 + i)z3 = 2i
2z1 + 2iz2 + (1 + 2i)z3 = 4
(b) z1 + (2 + i)z2 + iz3 = i
iz1 + (−1 + 2i)z2 + 2iz4 = −i
z1 + (2 + i)z2 + (3 + i)z3 + 2iz4 = 2 − i
(c) iz1 + 2z2 − (3 + i)z3 = …
(1 + i)z1 + (2 − 2i)z2 − 4z3 = i
iz1 + 2z2 − (3 + 3i)z3 = 2i
(d) z1 + z2 + iz3 + (1 + i)z4 = …
iz1 + iz2 + (1 − i)z3 + (2 + i)z4 = …
2z1 + 2z2 + (2 + 2i)z3 + 2z4 = …

9.3 Vector Spaces over C


The definition of a vector space in Section 4.2 is given in the case where the scalars are
real numbers. In fact, the definition makes sense when the scalars are taken from any
one system of numbers such that addition, subtraction, multiplication, and division
are defined for any pairs of numbers (excluding division by 0) and satisfy the usual
commutative, associative, and distributive rules for doing arithmetic. Thus, the vector
space axioms make sense if we allow the scalars to be the set of complex numbers. In
such cases, we say that we have a vector space over ℂ, or a complex vector space.

EXERCISE 1
Let ℂ² = { (z1, z2) : z1, z2 ∈ ℂ }, with addition defined by

(z1, z2) + (w1, w2) = (z1 + w1, z2 + w2)

and scalar multiplication by α ∈ ℂ defined by

α(z1, z2) = (αz1, αz2)

Show that ℂ² is a vector space over ℂ.


It is instructive to look carefully at the ideas of basis and dimension for complex vector spaces. We begin by considering the set of complex numbers C itself as a vector space.
As a vector space over the complex numbers, C has a basis consisting of a single element, {1}. That is, every complex number can be written in the form α1, where α is a complex number. Thus, with respect to this basis, the coordinate of the complex number z is z itself. Alternatively, we could choose to use the basis {i}. Then the coordinate of z would be −iz, since

z = (−iz)i

In either case, we see that C has a basis consisting of one element, so C is a one-dimensional complex vector space.
Another way of looking at this is to observe that when we use complex scalars, any two non-zero elements of the space C are linearly dependent. That is, given non-zero z1, z2 ∈ C, there exist complex scalars α1, α2, not both zero, such that

α1z1 + α2z2 = 0

For example, we may take α1 = 1 and α2 = −z1/z2, since we have assumed that z2 ≠ 0. It follows that with respect to complex scalars, a basis for C must contain fewer than two elements.
However, we could also view C as a vector space over ℝ. Addition of complex numbers is defined as usual, and multiplication of z = x + iy by a real scalar k gives

kz = kx + kyi

Observe that if we use real scalars, then the elements 1 and i in C are linearly independent. Hence, viewed as a vector space over ℝ, the set of complex numbers is two-dimensional, with "standard" basis {1, i}.
As we saw in Section 9.2, we sometimes write complex numbers in a way that exhibits the property that C is a two-dimensional real vector space: we write a complex number z in the form

z = x + iy = (x, y) = x(1, 0) + y(0, 1)

Note that (1, 0) denotes the complex number 1 and that (0, 1) denotes the complex number i. With this notation, we see that the set of complex numbers is isomorphic to ℝ², which justifies our work with the complex plane in Section 9.1. However, notice that this representation of the complex numbers as a real vector space does not include multiplication by complex scalars.
Using arguments similar to those above, we see that C² is a two-dimensional complex vector space, but it can be viewed as a real vector space of dimension four.

Definition
The vector space Cⁿ is defined to be the set

Cⁿ = { [z1; …; zn] : z1, …, zn ∈ C }

with addition of vectors and scalar multiplication defined as above.


Remark
These vector spaces play an important role in much of modern mathematics.

Since the complex conjugate is so useful in C, we extend the definition of the complex conjugate to vectors in Cⁿ.

Definition
Complex Conjugate
The complex conjugate of z = [z1; …; zn] ∈ Cⁿ is defined to be z̄ = [z̄1; …; z̄n].

EXAMPLE 1
Let z = [1 + i; −2i; 3; 1 − 2i]. Then z̄ = [1 − i; 2i; 3; 1 + 2i].

Linear Mappings and Subspaces

We can now extend the definition of a linear mapping L : V → W to the case where V and W are both vector spaces over the complex numbers. We say that L is linear over the complex numbers if for any α ∈ C and v1, v2 ∈ V we have

L(αv1 + v2) = αL(v1) + L(v2)

We can also define subspaces just as we did for real vector spaces, and the range and nullspace of a linear mapping will be subspaces of the appropriate vector spaces, as before.

EXAMPLE 2
Let L : C³ → C² be the linear mapping such that

L(1, 0, 0) = [1 + i; 2],   L(0, 1, 0) = [−2i; 1 − i],   L(0, 0, 1) = [1 + 2i; 3 + i]

Then the standard matrix of L is

[L] = [ 1 + i   −2i     1 + 2i ]
      [ 2       1 − i   3 + i  ]

The image of a vector z ∈ C³ under L is calculated by L(z) = [L]z.
The range of L is the subspace of the codomain C² spanned by the columns of [L], and the nullspace is the solution space of the system of linear equations [L]z = 0. (Remember that z is a vector in C³.)


Similarly, the rowspace, columnspace, and nullspace of a matrix are also defined
as in the real case.

EXAMPLE 3
Let A = [ 1 + i   −1 + i    −i   −2 − i ]
        [ 2 + i   −1 + 2i   −i   −3 − i ]
Find a basis for the rowspace, columnspace, and nullspace of A.

Solution: We row reduce and find that the reduced row echelon form of A is

R = [ 1   i   0   −1 ]
    [ 0   0   1   −i ]

As in the real case, a basis for Row(A) is given by the non-zero rows of the reduced row echelon form of A. That is, a basis for Row(A) is

{ [1; i; 0; −1], [0; 0; 1; −i] }

We next recall that the columns of A corresponding to the columns of R that contain leading 1s form a basis for Col(A). So, a basis for Col(A) is

{ [1 + i; 2 + i], [−i; −i] }

To find the nullspace, we solve the homogeneous system Az = 0. Using the reduced row echelon form of A, we find that the homogeneous system is equivalent to

z1 + iz2 − z4 = 0
z3 − iz4 = 0

The free variables are z2 and z4, so we let z2 = s ∈ C and z4 = t ∈ C. Then we get z1 = −is + t and z3 = it. Hence, the general solution to the homogeneous system is

[z1; z2; z3; z4] = s[−i; 1; 0; 0] + t[1; 0; i; 1]

Thus, a basis for Null(A) is

{ [−i; 1; 0; 0], [1; 0; i; 1] }

EXERCISE 2
Let A = [ 1 + i   1 − 2i    1 − i ]
        [ −1      −2 − 2i   1     ]
Find a basis for the rowspace, columnspace, and nullspace of A.
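The row reduction in Example 3 can be checked with SymPy, which does exact complex arithmetic. The entries of A below follow the reconstruction given here and should be treated as an assumption rather than the book's exact data; the point is the workflow: `rref` gives the Row(A) basis and the pivot columns for Col(A), and `nullspace` gives the Null(A) basis.

```python
# Verifying a rowspace/columnspace/nullspace computation with SymPy.
# The matrix A is an assumption, chosen consistent with the RREF
# R = [[1, i, 0, -1], [0, 0, 1, -i]] used in the worked solution.
from sympy import Matrix, I

A = Matrix([[1 + I, -1 + I, -I, -2 - I],
            [2 + I, -1 + 2*I, -I, -3 - I]])

R, pivot_cols = A.rref()          # reduced row echelon form over C
# Non-zero rows of R give a basis for Row(A);
# pivot_cols tells us which columns of A give a basis for Col(A).

# Every nullspace basis vector must satisfy A z = 0.
for z in A.nullspace():
    assert (A * z).expand() == Matrix([0, 0])
```

Running this confirms the pivot columns are the first and third, matching the Col(A) basis chosen in the example.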


Complex Multiplication as a Matrix Mapping

We have seen that C can be regarded as a real two-dimensional vector space. We now want to represent multiplication by a complex number as a matrix mapping. We first consider a special case.
As before, to regard C as a real vector space we write

z = x + iy = (x, y)

Let us consider multiplication by i:

iz = i(x + iy) = ix − y = (−y, x)

It is easy to see that this corresponds to a linear mapping M_i : ℝ² → ℝ² defined by

M_i(x, y) = (−y, x)

The standard matrix is

[M_i] = [ 0   −1 ]
        [ 1    0 ]

Observe that [M_i] = [R_{π/2}]. That is, multiplication by i corresponds to a rotation by an angle of π/2.
Figure 9.3.5  Multiplication by i corresponds to rotation by angle π/2.

More generally, we can consider multiplication of complex numbers by a fixed complex number α = a + bi. In Problem D1 you are asked to prove that multiplication by any complex number α = a + bi can be represented as a linear mapping M_α of ℝ² with standard matrix

[ a   −b ]
[ b    a ]
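The correspondence between multiplication by α = a + bi and the matrix above is easy to check numerically. The helper name `mult_matrix` below is our own, not from the text:

```python
# Multiplication by alpha = a + bi viewed as the linear map M_alpha on R^2
# with standard matrix [[a, -b], [b, a]] (see Problem D1).
import numpy as np

def mult_matrix(alpha: complex) -> np.ndarray:
    """Standard matrix of M_alpha with respect to the basis {1, i}."""
    a, b = alpha.real, alpha.imag
    return np.array([[a, -b], [b, a]])

alpha, z = 2 - 3j, 1 + 4j
w = alpha * z                                          # complex multiplication
v = mult_matrix(alpha) @ np.array([z.real, z.imag])    # matrix mapping on (x, y)
assert np.allclose(v, [w.real, w.imag])                # same answer either way
```

In particular, `mult_matrix(1j)` is the rotation matrix [M_i] shown above.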


PROBLEMS 9.3
Practice Problems
(b) Determine L(2 + 3i,1 - 4i).

Al Calculate the following.


(a)

(b)

[- ] [ ]
[ : I [ ! : I
[ ;]
+i

(c) Find a basis for the range and nullspace of L.

A3 Find a basis for the rowspace, columnspace, and


nullspace of the following matrices.

(c) 2i

(d) (-1

(a) A=

-3 - 4i

2 - 5i

2i)

[ :I

(c) C =

A2 (a) Write the standard matrix of the linear mapping


L : C² → C² such that

[ ]
2i

;/

and

L(O, 1) =

(b) B= 1+i

2 - 5i

L(l,0)=

[[

2i

-l+i
.

-1

1+ 2i

-2 +i

1+i

-1

-l+i

-2

-2 + 3i

-1 -i

[ ]

Homework Problems

[ ] [- ]

Bl Calculate the following.


(a)

2 - i
1- l

-l

3 + 2;

(b)

B3 Determine which of the following sets is a basis


for C3

l ;I
[ [
[ ;]
[I
4 - 3i
.

2 +

- 3 +

1+ 3t

1-

-2

(a)

(b)

(c) -3i

(d) (-1 _i)

B4 Find a basis for the rowspace, columnspace, and

3i

nullspace of the following matrices.

-2 - 7i

B2 (a) Write the standard matrix of the linear mapping


L : C² → C² such that
L(l,O)=

_; i

mrnl' [J}
mrnHm
l
[
[ l

and

L(0,1)=

[ ;/]
1

(b) Determine L(2 -i,-4 +i).

(a) A=

l+i

l+i

-1-i

1 i

2 -i
.
-l

-l+i

2i

(b) B =

(c) C =

(c) Find a basis for the range and nullspace of L.


(d) D

-1

[I Tl
i

4;

-4i


Conceptual Problems
D1 (a) Prove that multiplication by any complex number α = a + bi can be represented as a linear mapping M_α of ℝ² with standard matrix
[ a   −b ]
[ b    a ]
(b) Interpret multiplication by an arbitrary complex number as a composition of a contraction or dilation, and a rotation in the plane ℝ².
(c) Verify the result by calculating M_α for α = 4i and interpreting it as in part (b).
D2 Let C(2, 2) denote the set of all 2 × 2 matrices with complex entries, with standard addition and scalar multiplication of matrices.
(a) Prove that C(2, 2) is a complex vector space.
(b) Write a basis for C(2, 2) and determine its dimension.
D3 Define isomorphisms for complex vector spaces and check that the arguments and results of Section 4.7 are correct, provided that the scalars are always taken to be complex numbers.
D4 Let {v1, v2, v3} be a basis for ℝ³. Prove that {v1, v2, v3} is also a basis for C³ (taken as a complex vector space).

9.4 Eigenvectors in Complex Vector Spaces

We now look at eigenvectors and diagonalization for linear mappings L : Cⁿ → Cⁿ or, equivalently, in the case of n × n matrices with complex entries.
Eigenvalues and eigenvectors are defined in the same way as before, except that the scalars and the coordinates of the vectors are complex numbers.

Definition
Eigenvalue
Eigenvector
Let L : Cⁿ → Cⁿ be a linear mapping. If for some λ ∈ C there exists a non-zero vector z ∈ Cⁿ such that L(z) = λz, then λ is an eigenvalue of L and z is called an eigenvector of L that corresponds to λ. Similarly, a complex number λ is an eigenvalue of an n × n matrix A with complex entries, with corresponding eigenvector z ∈ Cⁿ, z ≠ 0, if Az = λz.

Since the theory of solving systems of equations, inverting matrices, and finding coordinates with respect to a basis is exactly the same for complex vector spaces as the theory for real vector spaces, the basic results on diagonalization are unchanged, except that the vector space is now Cⁿ. A complex n × n matrix A is diagonalized by a matrix P if and only if the columns of P form a basis for Cⁿ consisting of eigenvectors of A. Since the Fundamental Theorem of Algebra guarantees that every n-th degree polynomial has exactly n roots over C, the only way a matrix can fail to be diagonalizable over C is if it has an eigenvalue with geometric multiplicity less than its algebraic multiplicity.
We do not often have to carry out the diagonalization procedure for complex matrices. However, a simple example is given to illustrate the theory.

EXAMPLE 1
Determine whether the matrix

A = [ 3 − 8i    −11 + 7i ]
    [ −1 − 4i   −2 + 6i  ]

is diagonalizable. If it is, determine an invertible matrix P that diagonalizes A.

Solution: The characteristic equation is

det(A − λI) = λ² − (1 − 2i)λ + (3 − 3i) = 0

Using the quadratic formula, we find that

λ = [ (1 − 2i) ± ((1 − 2i)² − 4(1)(3 − 3i))^{1/2} ] / 2

Using the methods from Section 9.1, we find that the eigenvalues are λ1 = 1 + i and λ2 = −3i.
For λ1 = 1 + i, we get

A − λ1I = [ 2 − 9i    −11 + 7i ]   which row reduces to   [ 1   −1 − i ]
          [ −1 − 4i   −3 + 5i  ]                          [ 0   0      ]

Hence, the general solution of (A − λ1I)z = 0 is α[1 + i; 1], α ∈ C. Thus, an eigenvector corresponding to λ1 = 1 + i is z1 = [1 + i; 1].
For λ2 = −3i, we get

A − λ2I = [ 3 − 5i    −11 + 7i ]   which row reduces to   [ 1   −2 − i ]
          [ −1 − 4i   −2 + 9i  ]                          [ 0   0      ]

Hence, the general solution is α[2 + i; 1], α ∈ C. Thus, an eigenvector corresponding to λ2 = −3i is z2 = [2 + i; 1].
Hence,

P = [ 1 + i   2 + i ]
    [ 1       1     ]

Using the formula for the inverse of a 2 × 2 matrix, we get

P⁻¹ = [ −1   2 + i  ]
      [ 1    −1 − i ]

so that

P⁻¹AP = [ 1 + i   0   ]
        [ 0       −3i ]
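A diagonalization like the one in Example 1 is easy to verify numerically. The entries of A and P below follow the reconstruction given here, so treat them as an assumption about the example rather than the book's exact data:

```python
# Numerical check that P^(-1) A P is the expected diagonal matrix.
import numpy as np

A = np.array([[3 - 8j, -11 + 7j],
              [-1 - 4j, -2 + 6j]])
P = np.array([[1 + 1j, 2 + 1j],
              [1, 1]])

D = np.linalg.inv(P) @ A @ P          # should equal diag(1 + i, -3i)
assert np.allclose(D, np.diag([1 + 1j, -3j]))
```

The eigenvalues can also be read off directly with `np.linalg.eigvals(A)`, up to ordering and floating-point error.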

Complex Characteristic Roots of a Real Matrix and a Real Canonical Form

Does diagonalizing a complex matrix tell us anything useful about diagonalizing a real matrix? First, note that since the real numbers form a subset of the complex numbers, we may regard a matrix A with real entries as being a matrix with complex entries; all of the entries just happen to have zero imaginary part. In this context, if the real matrix A has a complex characteristic root λ, we speak of λ as a complex eigenvalue of A, with a corresponding complex eigenvector. We can then proceed to diagonalize A over C.

EXAMPLE 2
Let A = [ 5   −6 ]
        [ 3   −1 ]
Find its eigenvectors and diagonalize over C.

Solution: We have

A − λI = [ 5 − λ   −6     ]
         [ 3       −1 − λ ]

so det(A − λI) = λ² − 4λ + 13, and the roots of the characteristic equation are λ1 = 2 + 3i and λ2 = 2 − 3i = λ̄1.
For λ1 = 2 + 3i,

A − λ1I = [ 3 − 3i   −6      ]   which row reduces to   [ 1   −(1 + i) ]
          [ 3        −3 − 3i ]                          [ 0   0        ]

Hence, a complex eigenvector corresponding to λ1 = 2 + 3i is z1 = [1 + i; 1].
For λ2 = 2 − 3i,

A − λ2I = [ 3 + 3i   −6      ]   which row reduces to   [ 1   −(1 − i) ]
          [ 3        −3 + 3i ]                          [ 0   0        ]

Thus, an eigenvector corresponding to λ2 = 2 − 3i is z2 = [1 − i; 1].
It follows that A is diagonalized to

[ 2 + 3i   0      ]
[ 0        2 − 3i ]

by P = [ 1 + i   1 − i ]
       [ 1       1     ]

Observe in Example 2 that the eigenvalues of A were complex conjugates. This makes sense, since we know by Theorem 9.1.2 that complex roots of real polynomials come in pairs of complex conjugates. Before proving this, we first extend the definition of the complex conjugate to matrices.

Definition
Complex Conjugate
Let A be an m × n matrix. We define the complex conjugate of A, denoted Ā, to be the matrix obtained from A by taking the complex conjugate of each entry.

Theorem 1
Suppose that A is an n × n matrix with real entries and that λ = a + bi, b ≠ 0, is an eigenvalue of A, with corresponding eigenvector z. Then λ̄ is also an eigenvalue, with corresponding eigenvector z̄.

Proof: Suppose that Az = λz. Taking complex conjugates of both sides gives Āz̄ = λ̄z̄. Since A has real entries, Ā = A, so Az̄ = λ̄z̄. Hence, λ̄ is an eigenvalue of A with corresponding eigenvector z̄, as required.


Now we note that the solution to Example 2 was not completely satisfying. The point of diagonalization is to simplify the matrix of the linear mapping. However, in Example 2, we have changed from a real matrix to a complex matrix. Given a square matrix A with real entries, we would like to determine a similar matrix P⁻¹AP that also has real entries and that reveals information about the eigenvalues of A, even if the eigenvalues are complex. The rest of this section is concerned with the problem of finding such a real matrix.
If the eigenvalues of A are all real, then by diagonalizing A we obtain our desired similar matrix. Thus, we will consider only the case where A is an n × n real matrix with at least one eigenvalue λ = a + bi, where a, b ∈ ℝ and b ≠ 0.

Since we are splitting the eigenvalue into real and imaginary parts, it makes sense to also split the corresponding eigenvector z into real and imaginary parts:

z = [x1; …; xn] + i[y1; …; yn] = x + iy,    x, y ∈ ℝⁿ

Thus, we have Az = λz, or

A(x + iy) = (a + bi)(x + iy) = (ax − by) + i(bx + ay)

Upon considering real and imaginary parts, we get

Ax = ax − by   and   Ay = bx + ay        (9.1)

Observe that equation (9.1) shows that the image of any linear combination of x and y under A will again be a linear combination of x and y. That is, if v ∈ Span{x, y}, then Av ∈ Span{x, y}.

Definition
Invariant Subspace
If T : V → V is a linear operator and U is a subspace of V such that T(u) ∈ U for all u ∈ U, then U is called an invariant subspace of T.

Note that x and y must be linearly independent, for if x = ky, then equation (9.1) would say that they are real eigenvectors of A with corresponding real eigenvalues, and this is impossible with our assumption that b ≠ 0. (See Problem D2(b).) Moreover, it can be shown that no vector in Span{x, y} is a real eigenvector of A. (See Problem D2(c).) This discussion, together with Problem D2, is summarized in Theorem 2.

Theorem 2
Suppose that λ = a + bi, b ≠ 0, is an eigenvalue of an n × n real matrix A with corresponding eigenvector z = x + iy. Then Span{x, y} is a two-dimensional subspace of ℝⁿ that is invariant under A and contains no real eigenvector of A.

The Case of a 2 × 2 Matrix

If A is a 2 × 2 real matrix, then it follows from Theorem 2 that B = {x, y} is a basis for ℝ². From (9.1) we get that the B-matrix of the linear mapping L associated with A is

[L]_B = [ a    b ]
        [ −b   a ]

Moreover, from our work in Section 4.6, we know that [L]_B = P⁻¹AP, where P = [x  y] is the change of coordinates matrix from B-coordinates to standard coordinates.


Thus, we have a real matrix that is similar to A and gives information about the
eigenvalues of A, as desired.

Definition
Real Canonical Form
Let A be a 2 × 2 real matrix with eigenvalue λ = a + ib, b ≠ 0. The matrix

[ a    b ]
[ −b   a ]

is called a real canonical form for A.

EXAMPLE 3
Find a real canonical form C of the matrix

A = [ −2   −5 ]
    [ 1     2 ]

and find a change of coordinates matrix P such that P⁻¹AP = C.

Solution: We have

det(A − λI) = det [ −2 − λ   −5    ] = λ² + 1
                  [ 1        2 − λ ]

So the eigenvalues of A are λ1 = 0 + i and λ2 = 0 − i = λ̄1. Thus, we have a = 0 and b = 1, and hence a real canonical form of A is

C = [ 0    1 ]
    [ −1   0 ]

For λ1 = i, we have

A − λ1I = [ −2 − i   −5    ]
          [ 1        2 − i ]

so an eigenvector corresponding to λ1 is z = [−2 + i; 1] = [−2; 1] + i[1; 0]. Hence, a change of coordinates matrix P is

P = [ −2   1 ]
    [ 1    0 ]

EXAMPLE 4
Find a real canonical form of the matrix

A = [ 5   −6 ]
    [ 3   −1 ]

Solution: In Example 2, we saw that A has eigenvalues λ = 2 + 3i and λ̄ = 2 − 3i. Thus, a real canonical form of A is

[ 2    3 ]
[ −3   2 ]

EXERCISE 1
Find a change of coordinates matrix P for Example 4 and verify that

P⁻¹AP = [ 2    3 ]
        [ −3   2 ]
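Exercise 1 can be checked numerically. Taking the eigenvector z = x + iy = (1 + i, 1) found in Example 2, one valid change of coordinates matrix is P = [x  y]; this particular choice is ours, since z is only determined up to a non-zero complex multiple:

```python
# Verifying a real canonical form: P^(-1) A P = [[a, b], [-b, a]]
# with a = 2, b = 3 for the matrix of Example 4.
import numpy as np

A = np.array([[5., -6.], [3., -1.]])
P = np.array([[1., 1.], [1., 0.]])        # columns x = (1, 1), y = (1, 0)

C = np.linalg.inv(P) @ A @ P
assert np.allclose(C, [[2., 3.], [-3., 2.]])
```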


Remarks

1. A matrix of the form

[ a    b ]
[ −b   a ]

can be rewritten as

√(a² + b²) [ cos θ   −sin θ ]
           [ sin θ    cos θ ]

where cos θ = a/√(a² + b²) and sin θ = −b/√(a² + b²). But since the new basis vectors x and y are not necessarily orthogonal, the matrix does not represent a true rotation of Span{x, y}.

2. Observe in Example 3 that we could have taken λ1 = −i, with corresponding eigenvector z̄. Thus

[ 0   −1 ]
[ 1    0 ]

is also a real canonical form of A.

3. The complex eigenvector z is determined only up to multiplication by an arbitrary non-zero complex number. This means that the vectors x and y are not uniquely determined. For example, in Example 3, (1 + i)z = [−3 − i; 1 + i] is also an eigenvector corresponding to λ1. Thus, taking

P = [ −3   −1 ]
    [ 1     1 ]

would also give

P⁻¹AP = [ 0    1 ]
        [ −1   0 ]

EXERCISE 2
Find a real canonical form C of the matrix A and a change of coordinates matrix P such that P⁻¹AP = C.

The Case of a 3 × 3 Matrix

If A is a 3 × 3 real matrix with one real eigenvalue μ with corresponding eigenvector v, and complex eigenvalues λ = a + bi, b ≠ 0, and λ̄ with corresponding eigenvectors z = x + iy and z̄, then we have the equations

Av = μv,   Ax = ax − by,   Ay = bx + ay

Then the matrix of A with respect to the basis B = {v, x, y} is

[ μ   0    0 ]
[ 0   a    b ]
[ 0   −b   a ]

The matrix A can be brought into this real canonical form by a change of coordinates with matrix P = [v  x  y].

EXAMPLE 5
Find a real canonical form C of a 3 × 3 matrix A with one real and two complex eigenvalues, and a change of coordinates matrix P such that P⁻¹AP = C.

Solution: Expanding and factoring the characteristic polynomial gives

det(A − λI) = −(λ − 3)(λ² − 2λ + 5)

Thus, the eigenvalues of A are μ = 3, λ1 = 1 + 2i, and λ2 = 1 − 2i = λ̄1, so a = 1 and b = 2, and a real canonical form of A is

C = [ 3   0    0 ]
    [ 0   1    2 ]
    [ 0   −2   1 ]

Solving (A − 3I)v = 0 gives an eigenvector v corresponding to μ = 3, and solving (A − λ1I)z = 0 gives a complex eigenvector z = x + iy corresponding to λ1 = 1 + 2i. Hence, a change of coordinates matrix is P = [v  x  y].

If A is an n × n real matrix with complex eigenvalue λ = a + bi, b ≠ 0, and corresponding eigenvector z = x + iy, then Span{x, y} is a two-dimensional invariant subspace of ℝⁿ. If we use x and y as consecutive vectors in a basis for ℝⁿ, with the other vectors being eigenvectors for other eigenvalues, then in a matrix similar to A there is a block

[ a    b ]
[ −b   a ]

with the a's occurring on the diagonal in positions determined by the position of x and y in the basis.
For repeated complex eigenvalues, the situation is the same as for repeated real eigenvalues. In some cases, it is possible to find a basis of eigenvectors and diagonalize the matrix over C. In other cases, further theory is required, leading to the Jordan normal form.


PROBLEMS 9.4
Practice Problems
Al For each of the following matrices, determine a
diagonal matrix D similar to the given matrix
over C. Also determine a real canonical form and
give a change of coordinates matrix P that brings

(b)

(c)

the matrix into this form.


(a)

[- ]

(b)

(c)

2
0

(d)

-1
-1

1
-1

[= - =1
[ -1
[ -1
4

(e)

-5
-3

-2
2
-2

2
1

(d)

Bl For each of the following matrices, determine a


diagonal matrix D similar to the given matrix
over C. Also determine a real canonical form and
give a change of coordinates matrix P that brings
the matrix into this form.

[l ]
[o -;J

2
-3

H
[
1
-1

Homework Problems

(a)

[-1 ]

(f)

-2

-11

-2

-1

-1

-5

Conceptual Problems
D1 Verify that if z is an eigenvector of a matrix A with complex entries, then z̄ is an eigenvector of Ā (the matrix obtained from A by taking complex conjugates of each entry of A).
D2 Suppose that A is an n × n real matrix and that λ = a + bi is a complex eigenvalue of A with b ≠ 0. Let the corresponding eigenvector be z = x + iy.
(a) Prove that x ≠ 0 and y ≠ 0.
(b) Show that x ≠ ky for any real number k. (Hint: suppose x = ky for some k, and use equation (9.1) to show that this requires b = 0.)
(c) Prove that Span{x, y} does not contain an eigenvector of A corresponding to a real eigenvalue of A.


9.5 Inner Products in Complex Vector Spaces

We would like to have an inner product defined for complex vector spaces, because the concepts of length, orthogonality, and projection are powerful tools for solving certain problems.
Our first thought would be to determine whether we can extend the dot product to Cⁿ. Does this define an inner product on Cⁿ? Let z = x + iy; then we have

z · z = z1² + ⋯ + zn² = (x1² + ⋯ + xn² − y1² − ⋯ − yn²) + 2i(x1y1 + ⋯ + xnyn)

Observe that z · z does not even need to be a real number, so the condition z · z ≥ 0 does not even make sense. Thus, we cannot use the dot product as the rule for defining an inner product on Cⁿ.
As in the real case, we want ⟨z, z⟩ to be a non-negative real number so that we can define the length of a vector by ‖z‖ = √⟨z, z⟩. We recall that if z ∈ C, then zz̄ = |z|² ≥ 0. Hence, it makes sense to choose

⟨z, w⟩ = z · w̄

as this gives us

⟨z, z⟩ = z · z̄ = z1z̄1 + ⋯ + znz̄n = |z1|² + ⋯ + |zn|² ≥ 0

Definition
Standard Inner Product on Cⁿ
On Cⁿ the standard inner product ⟨ , ⟩ is defined by

⟨z, w⟩ = z · w̄ = z1w̄1 + ⋯ + znw̄n,   for w, z ∈ Cⁿ

EXAMPLE 1
Let u = [−i; 1 − 2i] and v = [3 − 2i; 3 + i]. Determine ⟨v, u⟩, ⟨u, v⟩, and ⟨v, (2 − i)u⟩.

Solution:

⟨v, u⟩ = v · ū = (3 − 2i)(i) + (3 + i)(1 + 2i) = (2 + 3i) + (1 + 7i) = 3 + 10i

⟨u, v⟩ = u · v̄ = (−i)(3 + 2i) + (1 − 2i)(3 − i) = (2 − 3i) + (1 − 7i) = 3 − 10i

⟨v, (2 − i)u⟩ = v · conj((2 − i)u) = (2 + i)(v · ū) = (2 + i)(3 + 10i) = −4 + 23i


Observe that this does not satisfy the properties of the real inner product. In particular, ⟨u, v⟩ ≠ ⟨v, u⟩ and ⟨v, αu⟩ ≠ α⟨v, u⟩.

EXERCISE 1
Let u = [i; 2i] and v = [i; 1]. Determine ⟨u, v⟩, ⟨2iu, v⟩, and ⟨u, 2iv⟩.
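The two observations above can be illustrated numerically. The function name `inner` is our own, and the sample vectors are arbitrary:

```python
# The standard inner product <z, w> = z . conj(w) on C^n, and the failures
# of symmetry and bilinearity observed above.
import numpy as np

def inner(z, w):
    """Standard inner product on C^n: sum of z_k * conj(w_k)."""
    return np.sum(z * np.conj(w))

u = np.array([-1j, 1 - 2j])
v = np.array([3 - 2j, 3 + 1j])
alpha = 2 - 1j

assert np.isclose(inner(u, v), np.conj(inner(v, u)))               # Hermitian property
assert np.isclose(inner(v, alpha * u), np.conj(alpha) * inner(v, u))  # conjugate in 2nd slot
assert inner(u, u).imag == 0 and inner(u, u).real > 0              # positivity
```

Note that the scalar comes out conjugated when it is pulled from the second argument, which is exactly why ⟨v, αu⟩ ≠ α⟨v, u⟩ in general.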

Properties of Complex Inner Products

Example 1 warns us that for complex vector spaces, we must modify the requirements of symmetry and bilinearity stated for real inner products.

Definition
Complex Inner Product
Let V be a vector space over C. A complex inner product on V is a function ⟨ , ⟩ : V × V → C such that
(1) ⟨z, z⟩ ≥ 0 for all z ∈ V, and ⟨z, z⟩ = 0 if and only if z = 0
(2) ⟨z, w⟩ = conj(⟨w, z⟩) for all w, z ∈ V
(3) For all u, v, w, z ∈ V and α ∈ C,
  (i) ⟨v + z, w⟩ = ⟨v, w⟩ + ⟨z, w⟩
  (ii) ⟨z, w + u⟩ = ⟨z, w⟩ + ⟨z, u⟩
  (iii) ⟨αz, w⟩ = α⟨z, w⟩
  (iv) ⟨z, αw⟩ = ᾱ⟨z, w⟩

EXERCISE 2
Verify that the standard inner product on Cⁿ is a complex inner product.

Note that property (1) allows us to define the (standard) length by ‖z‖ = ⟨z, z⟩^{1/2}, as desired.
Property (2) is the Hermitian property of the inner product. Notice that if all the vectors are real, the Hermitian property simplifies to symmetry.
Property (3) says that the complex inner product is not quite bilinear. However, this property reduces to bilinearity when the scalars are all real.

The Cauchy-Schwarz and Triangle Inequalities

Complex inner products satisfy the Cauchy-Schwarz and triangle inequalities. However, new proofs are required.

Theorem 1
Let V be a complex inner product space with inner product ⟨ , ⟩. Then, for all w, z ∈ V,
(4) |⟨z, w⟩| ≤ ‖z‖ ‖w‖
(5) ‖z + w‖ ≤ ‖z‖ + ‖w‖

Proof: We prove (4) and leave the proof of (5) as Problem D1.
If w = 0, then (4) is immediate, so assume that w ≠ 0, and let

α = ⟨z, w⟩ / ⟨w, w⟩

Then, we get

0 ≤ ⟨z − αw, z − αw⟩
  = ⟨z, z − αw⟩ − α⟨w, z − αw⟩
  = ⟨z, z⟩ − ᾱ⟨z, w⟩ − α⟨w, z⟩ + αᾱ⟨w, w⟩
  = ‖z‖² − |⟨z, w⟩|²/⟨w, w⟩ − |⟨z, w⟩|²/⟨w, w⟩ + |⟨z, w⟩|²/⟨w, w⟩
  = ‖z‖² − |⟨z, w⟩|²/‖w‖²

and (4) follows.

Let C(m, n) denote the complex vector space of m × n matrices with complex entries. How should we define the standard complex inner product on this vector space? For real vector spaces we defined ⟨A, B⟩ = tr(BᵀA) and found that this inner product was equivalent to the dot product on ℝ^{mn}. Of course, we want to define the standard inner product on C(m, n) in a similar way. However, because of the way the complex inner product is defined on Cⁿ, we see that we also need to take a complex conjugate. Thus, we define the inner product ⟨ , ⟩ on C(m, n) by

⟨A, B⟩ = tr(B̄ᵀA)

In particular, let A = [a_jk] and B = [b_jk] be m × n matrices. Since we want the trace of B̄ᵀA, we need only consider the diagonal entries in the product, and we find that

tr(B̄ᵀA) = Σ_{j=1}^{m} a_{j1}b̄_{j1} + Σ_{j=1}^{m} a_{j2}b̄_{j2} + ⋯ + Σ_{j=1}^{m} a_{jn}b̄_{jn}

which corresponds to the standard inner product of the corresponding vectors under the obvious isomorphism with C^{mn}.

EXAMPLE 2
Let A = [ 2 + i   1     ]   and   B = [ 3     2 − 3i ]
        [ i       1 − i ]            [ −2i   1 + 2i ]
Find ⟨A, B⟩ and show that this corresponds to the standard inner product of

a = [2 + i; 1; i; 1 − i]   and   b = [3; 2 − 3i; −2i; 1 + 2i]

Solution: We have

⟨A, B⟩ = tr(B̄ᵀA) = tr( [ 3        2i     ] [ 2 + i   1     ] )
                        [ 2 + 3i   1 − 2i ] [ i       1 − i ]

       = tr [ 3(2 + i) + 2i(i)              3(1) + 2i(1 − i)              ]
            [ (2 + 3i)(2 + i) + (1 − 2i)(i)   (2 + 3i)(1) + (1 − 2i)(1 − i) ]

       = tr [ 4 + 3i   5 + 2i ]
            [ 3 + 9i   1      ]

       = 5 + 3i

Also,

⟨a, b⟩ = a · b̄ = (2 + i)(3) + 1(2 + 3i) + i(2i) + (1 − i)(1 − 2i) = 5 + 3i = ⟨A, B⟩
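The identity ⟨A, B⟩ = tr(B̄ᵀA) = ⟨a, b⟩ is easy to check numerically. The matrices below are the ones used in Example 2 as reconstructed here, so treat the entries as an assumption:

```python
# The inner product <A, B> = tr(B* A) on C(m, n), checked against the
# standard inner product of the flattened (row-by-row) vectors a and b.
import numpy as np

A = np.array([[2 + 1j, 1], [1j, 1 - 1j]])
B = np.array([[3, 2 - 3j], [-2j, 1 + 2j]])

lhs = np.trace(B.conj().T @ A)                      # tr(B* A)
rhs = np.sum(A.flatten() * np.conj(B.flatten()))    # <a, b> on C^4
assert np.isclose(lhs, rhs)
```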

The matrix B̄ᵀ appearing here is very important, so we make the following definition.

Definition
Conjugate Transpose
Let A be an m × n matrix with complex entries. We define the conjugate transpose A* of A to be

A* = Āᵀ

EXAMPLE 3
Observe that if A is a real matrix, then A* = Aᵀ.

Theorem 2
Let A and B be complex matrices and let α ∈ C. Then
(1) ⟨Az, w⟩ = ⟨z, A*w⟩ for all z, w ∈ Cⁿ
(2) (A*)* = A
(3) (A + B)* = A* + B*
(4) (αA)* = ᾱA*
(5) (AB)* = B*A*

The proof is left as Problem D2.


Orthogonality in Cⁿ and Unitary Matrices

With a complex inner product defined, we can proceed to introduce orthogonality, projections, and distance, as we did in the real case. No new approaches or ideas are required, so we omit a detailed discussion. However, we must be very careful to have the vectors in the correct order when calculating an inner product, since the complex inner product is not symmetric. For example, if B = {v1, …, vk} is an orthonormal basis for a subspace S of Cⁿ, then the projection of z ∈ Cⁿ onto S is given by

proj_S z = ⟨z, v1⟩v1 + ⋯ + ⟨z, vk⟩vk

If we were to calculate ⟨v1, z⟩ instead of ⟨z, v1⟩, we would likely get the wrong answer.
Notice that since ⟨z, w⟩ may be complex, there is no obvious way to define the angle between vectors.

EXAMPLE 4
Let v1, v2, v3 ∈ C⁴ and consider the subspace S = Span{v1, v2, v3} of C⁴.
(a) Use the Gram-Schmidt Procedure to find an orthogonal basis for S.

Solution: Let w1 = v1. Then

w2 = v2 − (⟨v2, w1⟩ / ‖w1‖²) w1

and

w3 = v3 − (⟨v3, w1⟩ / ‖w1‖²) w1 − (⟨v3, w2⟩ / ‖w2‖²) w2 = [0; i/3; 1/3; 2/3]

Since we can take any non-zero scalar multiple of this, we instead take w3 = [0; i; 1; 2]. Then {w1, w2, w3} is an orthogonal basis for S.

(b) Let z ∈ C⁴ be given. Find proj_S z.

Solution: Using the orthogonal basis we found in (a), we get

proj_S z = (⟨z, w1⟩ / ‖w1‖²) w1 + (⟨z, w2⟩ / ‖w2‖²) w2 + (⟨z, w3⟩ / ‖w3‖²) w3
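A sketch of the Gram-Schmidt Procedure over Cⁿ with the standard inner product; the sample vectors are arbitrary, not the ones from Example 4. Note the argument order in each coefficient ⟨v, w⟩, which matters because the inner product is not symmetric:

```python
# Gram-Schmidt over C^n with <z, w> = z . conj(w).
import numpy as np

def inner(z, w):
    return np.sum(z * np.conj(w))

def gram_schmidt(vectors):
    """Return an orthogonal (not normalized) basis for Span(vectors)."""
    basis = []
    for v in vectors:
        w = v.astype(complex)
        for b in basis:
            w = w - (inner(v, b) / inner(b, b)) * b   # subtract projection onto b
        if not np.allclose(w, 0):                     # skip dependent vectors
            basis.append(w)
    return basis

vs = [np.array([1, 1j, 0, 0]),
      np.array([1j, 0, 1, 1]),
      np.array([0, 1, 1j, 2])]
ws = gram_schmidt(vs)
for i in range(len(ws)):
    for j in range(i):
        assert np.isclose(inner(ws[i], ws[j]), 0)     # pairwise orthogonal
```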

When working with real inner products, we saw that orthogonal matrices are very important. So, we now consider a complex version of these matrices.

Definition
Unitary Matrix
An n × n matrix with complex entries is said to be unitary if its columns form an orthonormal basis for Cⁿ.

For an orthogonal matrix P, we saw that the defining property is equivalent to the matrix condition P⁻¹ = Pᵀ. We get the associated result for unitary matrices.

Theorem 3
If U is an n × n matrix, then the following are equivalent.
(1) The columns of U form an orthonormal basis for Cⁿ
(2) The rows of U form an orthonormal basis for Cⁿ
(3) U⁻¹ = U*

Proof: Let U = [z1 ⋯ zn]. By definition, {z1, …, zn} is orthonormal if and only if ⟨zi, zi⟩ = 1 and ⟨zi, zj⟩ = 0 for i ≠ j. Since

(U*U)_ij = z̄iᵀ zj = ⟨zj, zi⟩

we get that U*U = I if and only if {z1, …, zn} is orthonormal. Thus (1) is equivalent to (3). The proof that (2) is equivalent to (3) is similar.

Observe that if the entries of A are all real, then A is unitary if and only if it is orthogonal.
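Theorem 3's condition U⁻¹ = U* gives a quick numerical test for unitarity. The matrix V below is a sample with orthonormal columns, not necessarily the book's:

```python
# Checking the unitary condition U* U = I numerically.
import numpy as np

V = np.array([[(1 + 1j) / np.sqrt(3), (1 + 1j) / np.sqrt(6)],
              [1j / np.sqrt(3), -2j / np.sqrt(6)]])

assert np.allclose(V.conj().T @ V, np.eye(2))      # columns orthonormal
assert np.allclose(np.linalg.inv(V), V.conj().T)   # U^(-1) = U*
```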

EXAMPLES

Are the matrices U

[ ]
-i

- [ (1

1 and V -

Solution: Observe that

[]

thonormal.

+ i)
_Li

_l_(l + i)
Y6 2 .
unitary?

ygl

Y3

\[J. [D rn. [J [J. [ J


=

Hence,

1(1) + i(-i)

is not a unit vector. Thus, U is not unitary since its columns are not or

Section 9.5 Exercises

EXAMPLES

For v, we have v

(continued)

i) _Li
l
'1
= [_L(l
'{3 (1 - i) - i
-

so

_ [ !(2 + I) 3(2-2)] [1
(2-2) 1(2+4) - 0
3 Vi

V VV

'

Y6

Y6

Thus,

431

OJ
1

is unitary.

Determine if either of the following matrices is unitary.

EXERCISE 3

A= [

2+i

Y6
1

- Y6

V6.
2+1

Y6

[ lY}+i

Y6

PROBLEMS 9.5
Practice Problems
A1 Use the standard inner product on Cⁿ to calculate ⟨u, v⟩, ⟨v, u⟩, ‖u‖, and ‖v‖ for the following.


it=[ ;] . = [2-_ i]
i
it=[- ].v=[13:;i]
it= [1 il [ ]
it=[ _\+_ 2J v= [--i]
A = Is [1 i 1 i]
= [ ]
= Jd- ]
and

(a)

i1

(d) D

+ i)/-f3
= [(-11;V3

(1

-i)/../6]
2/../6

A3 (a) Verify that u is orthogonal to v if

(b)

(c)

(d)

A2 Are the following matrices unitary?


(a)

(b) B

(c) C

Homework Problems
Bl Use the standard inner product in C" to calculate

(it, v), (v,-it), l itl , l vl .


it=[ :lv= [::]
it= [13:Ll v= [ : ]
and

(a)

(b)

ii= [i
(b) Determine the projection of w

it = []

= [ : 1
+
it 3

and

onto

the subspace of C3 spanned by

and v.

A4 (a) Prove that any unitary matrix U satisfies |det U| = 1. (Hint: See Problem D3.)
(b) Give a 2 × 2 unitary matrix U such that det U ≠ 1.


B2

Are the following matrices unitary?


1 2
(a) A= vs -2
1

(b) B
(c)
Cd)
B3

Complex Vector Spaces

C=
D=

[ ]
l (1+
(1+
l(1+

(b) Determine the projection of w =

- 2i

B4

i)/--./7
5 /Y35
2i)/--./7 (3+ i)/Y35
i) I Y6 (1+ i) IY3
2i/.../6
i/Y3
-

(a) Verify that a is orthogonal to v if i1

the subspace of C3 spanned by a and v.


Consider C3 with its standard inner product. Let
v1

[--1i].
] and
[-l+i
.
v2 =

(a) Find an orthonormal basis for

[ i

[1- l .

Span

{v1, v2}.

1+ i

(b) Determine prnj, i1 where it

and v

[;1 ] onto

Conceptual Problems
D1 Prove property (5) of Theorem 1.
D2 Prove Theorem 2.
D3 Prove that for any n × n matrix A,

det Ā = conj(det A)

D4 (a) Show that if U is unitary, then ‖Uz‖ = ‖z‖ for all z ∈ Cⁿ.
(b) Show that if U is unitary, then all of its eigenvalues satisfy |λ| = 1.
(c) Give a 2 × 2 unitary matrix such that none of its eigenvalues are real.
D5 Prove that if A and B are unitary, then AB is unitary.
D6 Prove that all eigenvalues of a real symmetric matrix are real.

9.6 Hermitian Matrices and Unitary Diagonalization

In Section 8.1, we saw that every real symmetric matrix is orthogonally diagonalizable. It is natural to ask whether there is a comparable result in the case of matrices with complex entries.
First, we observe that if A is a real symmetric matrix, then Ā = A, so the condition Aᵀ = A is equivalent to A* = A. Hence, the condition A* = A should take the place of the condition Aᵀ = A.

Definition
Hermitian Matrix
An n × n matrix A with complex entries is called Hermitian if A* = A or, equivalently, if Ā = Aᵀ.

EXAMPLE 1
Which of the following matrices are Hermitian?

A = [ 2       3 − i ]    B = [ 3    2i ]    C = [ 0   i  ]
    [ 3 + i   1     ]        [ 2i   1  ]        [ i   −1 ]

Solution: We have

Ā = [ 2       3 + i ] = Aᵀ, so A is Hermitian.
    [ 3 − i   1     ]

B̄ = [ 3     −2i ] ≠ Bᵀ, so B is not Hermitian.
    [ −2i    1  ]

C̄ = [ 0    −i ] ≠ Cᵀ, so C is not Hermitian.
    [ −i   −1 ]

Observe that if A is Hermitian, then (A)ij = conj((A)ji), so the diagonal entries of A must be real, and for i ≠ j the ij-th entry must be the complex conjugate of the ji-th entry.

Theorem 1

An n x n matrix A is Hermitian if and only if for all z, w in C^n, we have

⟨z, Aw⟩ = ⟨Az, w⟩

Proof: If A is Hermitian, then we get

⟨z, Aw⟩ = z^T conj(Aw) = z^T (conj A)(conj w) = z^T A^T (conj w) = (Az)^T (conj w) = ⟨Az, w⟩

Conversely, if ⟨z, Aw⟩ = ⟨Az, w⟩ for all z, w in C^n, then we have

z^T (conj A)(conj w) = z^T A^T (conj w)

Since this is valid for all z, w in C^n, we have that conj A = A^T. Thus, A is Hermitian.

Remark

A linear operator L : V → V is called Hermitian if ⟨x, L(y)⟩ = ⟨L(x), y⟩ for all x, y in V. A linear operator is Hermitian if and only if its matrix with respect to any orthonormal basis of V is a Hermitian matrix. Hermitian linear operators play an important role in quantum mechanics.

Chapter 9   Complex Vector Spaces

Theorem 2

Suppose that A is an n x n Hermitian matrix. Then

(1) All eigenvalues of A are real.


(2) Eigenvectors corresponding to distinct eigenvalues are orthogonal to each
other.

Proof: To prove (1), suppose that lambda is an eigenvalue of A with corresponding unit eigenvector z. Then

⟨z, Az⟩ = ⟨z, lambda z⟩ = (conj lambda)⟨z, z⟩ = conj lambda

But, since A is Hermitian, we have ⟨z, Az⟩ = ⟨Az, z⟩ and

⟨Az, z⟩ = ⟨lambda z, z⟩ = lambda⟨z, z⟩ = lambda

Thus, conj lambda = lambda, so lambda must be real.

To prove (2), suppose that lambda1 and lambda2 are distinct eigenvalues of A with corresponding eigenvectors z1, z2. Then, since lambda2 is real by (1),

⟨z1, A z2⟩ = ⟨z1, lambda2 z2⟩ = (conj lambda2)⟨z1, z2⟩ = lambda2⟨z1, z2⟩

Since A is Hermitian, we get

lambda2⟨z1, z2⟩ = ⟨z1, A z2⟩ = ⟨A z1, z2⟩ = lambda1⟨z1, z2⟩

Thus, since lambda1 is not equal to lambda2, we must have ⟨z1, z2⟩ = 0, as required.

From this result, we expect to get something very similar to the principal axis
theorem for Hermitian matrices. Let's consider an example.

EXAMPLE 2

Let A = [2, 1-i; 1+i, 3] (rows separated by semicolons). Verify that A is Hermitian and diagonalize A.

Solution: We have A* = A, so A is Hermitian. Consider

A - lambda I = [2-lambda, 1-i; 1+i, 3-lambda]

Then the characteristic polynomial is

C(lambda) = lambda^2 - 5 lambda + 4 = (lambda - 4)(lambda - 1)

so lambda = 4 or lambda = 1.
For lambda1 = 4,

A - 4I = [-2, 1-i; 1+i, -1] → [1, -(1-i)/2; 0, 0]

Thus, a corresponding eigenvector is z1 = [1-i; 2].
For lambda2 = 1,

A - I = [1, 1-i; 1+i, 2] → [1, 1-i; 0, 0]

Thus, a corresponding eigenvector is z2 = [-1+i; 1]. Hence, A is diagonalized to D = [4, 0; 0, 1] by Q = [1-i, -1+i; 2, 1].

Observe in Example 2 that since the columns of Q are orthogonal, we can make Q unitary by normalizing the columns. Hence, we have that the Hermitian matrix A is diagonalized by a unitary matrix. We can prove that we can do this for any Hermitian matrix.
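Numerically, `numpy.linalg.eigh` is the routine built for Hermitian matrices: it returns real eigenvalues (in ascending order) and orthonormal eigenvectors, i.e. a unitary diagonalizing matrix. A sketch with an illustrative 2 x 2 Hermitian matrix whose eigenvalues are 1 and 4:

```python
import numpy as np

A = np.array([[2, 1 - 1j],
              [1 + 1j, 3]])

eigvals, U = np.linalg.eigh(A)   # eigh assumes A is Hermitian
print(eigvals)                   # [1. 4.] -- real, ascending

# U is unitary (U* U = I) and U* A U = D:
D = U.conj().T @ A @ U
print(np.allclose(U.conj().T @ U, np.eye(2)))    # True
print(np.allclose(D, np.diag(eigvals)))          # True
```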

Theorem 3   Spectral Theorem for Hermitian Matrices

Suppose that A is an n x n Hermitian matrix. Then there exist a unitary matrix U and a diagonal matrix D such that U*AU = D.

The proof is essentially the same as the proof of the Principal Axis Theorem, with appropriate changes to allow for complex numbers. You are asked to prove the theorem as Problem D5.

Remarks
1. If A and B are matrices such that B = U*AU for some unitary matrix U, we say that A and B are unitarily similar. If B is diagonal, then we say that A is unitarily diagonalizable.

2. Unlike the Principal Axis Theorem, the converse of the Spectral Theorem for Hermitian Matrices is not true. That is, there exist matrices that are unitarily diagonalizable but not Hermitian.
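A concrete witness for Remark 2 (a classic example, not taken from the text's exercises): the 90-degree rotation matrix is real and skew-symmetric, so it is not Hermitian, yet it commutes with its conjugate transpose and is therefore unitarily diagonalizable, with eigenvalues +-i:

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0, 0.0]])     # rotation by 90 degrees

not_hermitian = not np.allclose(A, A.conj().T)
normal = np.allclose(A.conj().T @ A, A @ A.conj().T)   # A*A = AA*
print(not_hermitian, normal)    # True True

# Its eigenvalues are purely imaginary: -i and i.
eigvals = np.linalg.eigvals(A.astype(complex))
print(sorted(np.round(eigvals.imag, 6).tolist()))      # [-1.0, 1.0]
```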

EXAMPLE 3

The matrix A = [2, 1-i; 1+i, 3] from Example 2 is Hermitian and hence unitarily diagonalizable. In particular, by normalizing the columns of Q, we find that A is unitarily diagonalized by

U = [(1-i)/sqrt(6), (-1+i)/sqrt(3); 2/sqrt(6), 1/sqrt(3)]

PROBLEMS 9.6
Practice Problems
A1 For each of the following matrices,
(i) Determine whether it is Hermitian.
(ii) If it is Hermitian, unitarily diagonalize it.
(a) A   (b) B   (c) C   (d) F


Homework Problems
B1 For each of the following matrices,
(i) Determine whether it is Hermitian.
(ii) If it is Hermitian, unitarily diagonalize it.
(a) A   (b) B   (c) C   (d) F

Conceptual Problems
D1 Suppose that A and B are n x n Hermitian matrices and that A is invertible. Determine which of the following are Hermitian.
(a) AB   (b) A^2   (c) A^{-1}
D2 Prove (without appealing to diagonalization) that if A is Hermitian, then det A is real.
D3 A general 2 x 2 Hermitian matrix can be written as A = [a, b+ci; b-ci, d], with a, b, c, d in R.
(a) What can you say about a, b, c, and d if A is unitary as well as Hermitian?
(b) What can you say about a, b, c, and d if A is Hermitian, unitary, and diagonal?
(c) What can you say about the form of a matrix that is Hermitian, unitary, and diagonal?
D4 Let V be a complex inner product space. Prove that a linear operator L : V → V is Hermitian (⟨x, L(y)⟩ = ⟨L(x), y⟩) if and only if its matrix with respect to any orthonormal basis of V is a Hermitian matrix.
D5 Prove the Spectral Theorem for Hermitian Matrices.

CHAPTER REVIEW
Suggestions for Student Review
1 What is the complex conjugate of a complex number? List some properties of the complex conjugate. How does the complex conjugate relate to division of complex numbers? How does it relate to the length of a complex number? (Section 9.1)
2 Define the polar form of a complex number. Explain how to convert a complex number from standard form to polar form. Is the polar form unique? How does the polar form relate to Euler's Formula? (Section 9.1)
3 List some of the similarities and some of the differences between complex vector spaces and real vector spaces. Discuss the differences between viewing C as a complex vector space and as a real vector space. (Section 9.3)
4 Explain how diagonalization of matrices over C differs from diagonalization over R. (Section 9.4)
5 Define the real canonical form of a real matrix A. In what situations would we find the real canonical form of A instead of diagonalizing A? (Section 9.4)
6 Discuss the standard inner product in C^n. How are the essential properties of an inner product modified in generalizing from the real case to the complex case? (Section 9.5)
7 Define the conjugate transpose of a matrix. List some similarities between the conjugate transpose of a complex matrix and the transpose of a real matrix. (Section 9.5)
8 What is a Hermitian matrix? State what you can say about diagonalizing a Hermitian matrix. (Section 9.6)

Chapter Quiz
E1 Let z1 = 1 - sqrt(3) i and z2 = 2 + 2i. Use polar form to determine z1 z2 and z1/z2.
E2 Use polar form to determine all values of (i)^{1/2}.
E3 Let u and v be the given vectors in C^2. Calculate the following.
(a) 2u + (1 + i)v   (b) conj u   (c) ⟨u, v⟩   (d) ⟨v, u⟩   (e) ||v||   (f) proj_u v
E4 Let A be the given 2 x 2 real matrix.
(a) Determine a diagonal matrix similar to A over C and give a diagonalizing matrix P.
(b) Determine a real canonical form of A and the corresponding change of coordinates matrix P.
E5 Prove that the given matrix U is a unitary matrix.
E6 Let A be the given matrix with parameter k.
(a) Determine k such that A is Hermitian.
(b) With the value of k as determined in part (a), find the eigenvalues of A and corresponding eigenvectors. Verify that the eigenvectors are orthogonal.

Further Problems
F1 Suppose that A is a Hermitian matrix with all non-negative eigenvalues. Prove that A has a square root. That is, show that there is a Hermitian matrix B such that B^2 = A. (Hint: Suppose that U diagonalizes A to D so that U*AU = D. Define C to be a square root for D and let B = UCU*.)
F2 (a) If A is any n x n matrix, prove that A*A is Hermitian and has all non-negative real eigenvalues.
(b) If A is invertible, prove that A*A has all positive real eigenvalues.
F3 A matrix A is said to be unitarily triangularizable if there exists a unitary matrix U and an upper triangular matrix T such that U*AU = T. By adapting the proof of the Principal Axis Theorem (or the Spectral Theorem for Hermitian Matrices), prove that every n x n matrix A is unitarily triangularizable. (This is called Schur's Theorem.)
F4 A matrix is said to be normal if A*A = AA*.
(a) Show that every unitarily diagonalizable matrix is normal.
(b) Use (a) to show that if A is normal, then A is unitarily similar to an upper triangular matrix T and that T is normal.
(c) Prove that every upper triangular normal matrix is diagonal and hence conclude that every normal matrix is unitarily diagonalizable. (This is often called the Spectral Theorem for Normal Matrices.)
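The normality condition of F4 is a one-line numerical check. A minimal sketch (the three matrices are illustrative): Hermitian and unitary matrices are normal, while a shear is not:

```python
import numpy as np

def is_normal(A, tol=1e-12):
    """A is normal when A*A = AA* (A commutes with its conjugate transpose)."""
    A = np.asarray(A, dtype=complex)
    return np.allclose(A.conj().T @ A, A @ A.conj().T, atol=tol)

hermitian = np.array([[2, 1 - 1j], [1 + 1j, 3]])
rotation  = np.array([[0, -1], [1, 0]])    # unitary
shear     = np.array([[1, 1], [0, 1]])     # neither Hermitian nor unitary

print(is_normal(hermitian), is_normal(rotation), is_normal(shear))  # True True False
```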

MyMathLab

Go to MyMathLab at www.mymathlab.com. You can practise many of this chapter's exercises as often as you
want. The guided solutions help you find an answer step by step. You'll find a personalized study plan available to
you, too!

APPENDIX A

Answers to Mid-Section
Exercises
CHAPTER 1
Section 1.1
1.(a

[]

(b)

_,

2. 1 =
3. 1 =

4. 1=

[=J

(c) [ =i J

X2

Xt

t [-n t E IR
[ ] +t [-n t E IR
[]+1[-H IER

X2

ca+

w)

i1 + w

Xt

Section 1.2

1. (5) Let x = [x1; ...; xn]. Then x + (-x) = [x1 + (-x1); ...; xn + (-xn)] = [0; ...; 0] = 0.
(6) tx = [tx1; ...; txn] is in R^n since each txi is in R for 1 <= i <= n.
(7) s(tx) = [s(tx1); ...; s(txn)] = [(st)x1; ...; (st)xn] = (st)x.


[]

2. By definition, S is a subset of JR. . We have that


x=

[]

that x + y =
t1 =

[ ]
[ ]

and y =

[ ]

E S since 2(0) = 0. Let

be vectors in S. Then 2x1 = x2 and 2y1 = Y2 We find

xi +Yi
E S since 2(x1 + Y1)
X2 +Y2

txi
ESsince 2(tx1)
tx2

Tis not a subspace since

2x, + 2y, = x2 + Y2 Similarly,

t2x1 = tx2. Hence, Sis a subspace.

[ ]

is not in T.

This gives c1 +c3 0, c2 +c3 = 0, and c1 +c2 0. The first two equations imply
that c1 = c2. Putting this into the third equation gives c1 = c2 = 0. Then, from
the first equation, we get c3
0. Hence, the only solution is c1 = c2 = c3 = 0,
so the set is linearly independent.
=

4. The standard basis for R^5 is {e1, e2, e3, e4, e5}. Consider

[x1; x2; x3; x4; x5] = t1 e1 + t2 e2 + t3 e3 + t4 e4 + t5 e5 = [t1; t2; t3; t4; t5]

Thus, for every x in R^5 we have a solution ti = xi for 1 <= i <= 5. Hence, the set is a spanning set for R^5. Moreover, if we take x = 0, we get that the only solution is ti = 0 for 1 <= i <= 5, so the set is also linearly independent.

Section 1.3
1. cos theta = (v . w)/(||v|| ||w||) = 0, so theta = pi/2 rads.
2. ||x|| = sqrt(1 + 4 + 1) = sqrt(6), and ||y|| = 1.
3. By definition, x is a scalar multiple of y, so it is parallel. Using Theorem 1.3.2 (2), we get
||x|| = ||(1/||y||) y|| = (1/||y||) ||y|| = 1
4. x1 - 3x2 - 2x3 = 1(1) + (-3)(2) + (-2)(3) = -11
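The computation in item 1 is the usual cos theta = (v . w)/(||v|| ||w||) formula. A short sketch with illustrative vectors (not those of the exercise):

```python
import math

def angle(v, w):
    """Angle in radians between nonzero vectors v and w."""
    dot = sum(a * b for a, b in zip(v, w))
    norm_v = math.sqrt(sum(a * a for a in v))
    norm_w = math.sqrt(sum(b * b for b in w))
    return math.acos(dot / (norm_v * norm_w))

# These two vectors have dot product 1*1 + 2*0 + (-1)*1 = 0,
# so the angle is pi/2, as in item 1.
theta = angle([1, 2, -1], [1, 0, 1])
print(theta, math.pi / 2)
```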

Section 1.4
1. proj_v u = ((u . v)/||v||^2) v and proj_u v = ((v . u)/||u||^2) u
2. proj_v u = ((u . v)/||v||^2) v and perp_v u = u - proj_v u
3. proj_x(y + z) = (((y + z) . x)/||x||^2) x = ((y . x)/||x||^2) x + ((z . x)/||x||^2) x = proj_x y + proj_x z
proj_x(t y) = (((t y) . x)/||x||^2) x = t ((y . x)/||x||^2) x = t proj_x y
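The identities in item 3 say that projection onto a fixed vector x is linear. They are easy to spot-check numerically; the vectors below are illustrative:

```python
import numpy as np

def proj(v, x):
    """Projection of v onto x: ((v . x)/||x||^2) x."""
    x = np.asarray(x, dtype=float)
    return (np.dot(v, x) / np.dot(x, x)) * x

x = np.array([1.0, -2.0, 2.0])
y = np.array([3.0, 0.0, -1.0])
z = np.array([0.5, 4.0, 1.0])
t = 2.5

additive = np.allclose(proj(y + z, x), proj(y, x) + proj(z, x))
homogeneous = np.allclose(proj(t * y, x), t * proj(y, x))
print(additive, homogeneous)   # True True
```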

Section 1.5
1. w . u = (u2 v3 - u3 v2)(u1) + (u3 v1 - u1 v3)(u2) + (u1 v2 - u2 v1)(u3) = 0
w . v = (u2 v3 - u3 v2)(v1) + (u3 v1 - u1 v3)(v2) + (u1 v2 - u2 v1)(v3) = 0
2. The cross-product is computed entry by entry from the definition.
3. The six cross-products are easily checked.
4. Area = ||u x v|| = sqrt(9) = 3
5. The direction vector d is the cross-product of the normal vectors. Put x3 = 0 and then solve -x1 - 2x2 = -2 and 2x1 + x2 = 1. This gives x1 = 0 and x2 = 1. So, a vector equation of the line of intersection is x = [0; 1; 0] + t d, t in R.
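Item 5's recipe generalizes: the line of intersection of two planes has direction n1 x n2 and passes through any common point. A sketch with illustrative plane equations chosen so that setting x3 = 0 gives the 2 x 2 system -x1 - 2x2 = -2, 2x1 + x2 = 1 and the point (0, 1, 0):

```python
import numpy as np

# Illustrative planes n1 . x = d1 and n2 . x = d2 (not necessarily the exercise's):
n1, d1 = np.array([-1.0, -2.0, 1.0]), -2.0
n2, d2 = np.array([2.0, 1.0, 1.0]), 1.0

# Direction of the line of intersection: cross product of the normals.
direction = np.cross(n1, n2)

# A point on the line: set x3 = 0 and solve the remaining 2x2 system.
A = np.array([[n1[0], n1[1]], [n2[0], n2[1]]])
b = np.array([d1, d2])
x1, x2 = np.linalg.solve(A, b)
point = np.array([x1, x2, 0.0])

# The point lies on both planes, and the direction is orthogonal to both normals:
print(np.isclose(n1 @ point, d1), np.isclose(n2 @ point, d2))
print(np.isclose(n1 @ direction, 0.0), np.isclose(n2 @ direction, 0.0))
```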

CHAPTER 2

Section 2.1
1. Add (-1/2) times the first equation to the second equation:
2x1 + 4x2 + 0x3 = 12
-x3 = -2
Since x2 does not appear as a leading 1, it is a free variable. Thus, we let x2 = t in R. Then we rewrite the first equation as
x1 = (1/2)(12 - 4x2) = 6 - 2t
Thus, the general solution is
[x1; x2; x3] = [6; 0; 2] + t[-2; 1; 0], t in R


Section 2.2

1. For Example 2.1.9, we get


1

11

-3

-1

-1

-4

11

-3

-4

l
l

(-l)R3

[
[

2. (a) rank(A)

180

60

80

160

60

1
0
=

20

R1 - Ri

-3

60

20

100

60

20

(b) rank(B)

R2 - R3

-4

180

-[
l

-1

R3

l
j]
11

-3

-4

R1 - 2R2

For Example 2.1.12, we get

-[
[-

l
l

R1 - R,

3. We write the reduced row echelon form of the coefficient matrix as a homogeneous system. We get
x1 + x4 + 2x5 = 0
x2 + x3 + x5 = 0
We see that x3, x4, and x5 are free variables, so we let x3 = t1, x4 = t2, and x5 = t3. Then, we rewrite the equations as
x1 = -t2 - 2t3
x2 = -t1 - t3
Hence, the general solution is
[x1; x2; x3; x4; x5] = t1[0; -1; 1; 0; 0] + t2[-1; 0; 0; 1; 0] + t3[-2; -1; 0; 0; 1], t1, t2, t3 in R
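A quick way to check a general solution like the one in item 3 is to substitute the spanning vectors back into the reduced equations x1 + x4 + 2x5 = 0 and x2 + x3 + x5 = 0; one consistent choice of spanning vectors is used below:

```python
import numpy as np

# Coefficient rows of the reduced homogeneous system.
R = np.array([[1, 0, 0, 1, 2],
              [0, 1, 1, 0, 1]])

# Spanning vectors for the solution space (parameters t1, t2, t3).
v1 = np.array([0, -1, 1, 0, 0])
v2 = np.array([-1, 0, 0, 1, 0])
v3 = np.array([-2, -1, 0, 0, 1])

# Each spanning vector satisfies both equations, hence so does every combination.
print([(R @ v).tolist() for v in (v1, v2, v3)])   # [[0, 0], [0, 0], [0, 0]]
```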

Section 2.3

1. Consider t1 v1 + t2 v2 + t3 v3 = v, t1, t2, t3 in R. Simplifying and comparing entries gives a system of three equations in t1, t2, and t3. Solving the system by row reducing the augmented matrix, we find that it is consistent, so v is in the span; the row reduction also yields an explicit linear combination of v1, v2, and v3 that equals v.

2. Consider t1 v1 + t2 v2 + t3 v3 = 0. Simplifying and comparing entries gives the homogeneous system
-t1 - 2t2 + t3 = 0
t1 - 3t2 - t3 = 0
-3t1 - 3t2 + 3t3 = 0
Row reducing the coefficient matrix, we find that the rank of the coefficient matrix is less than the number of variables. Thus, there is at least one parameter. Hence, there are infinitely many solutions. Therefore, the set is linearly dependent.

CHAPTER 3

Section 3.1
1. Consider a linear combination of the four matrices in B with coefficients t1, t2, t3, t4 equal to the zero matrix. Simplifying the right-hand side, we get the homogeneous system
t1 + t2 + 3t3 = 0
2t1 + t2 + 5t3 - t4 = 0
t1 + 3t2 + 5t3 - 2t4 = 0
t1 + t2 + 3t3 = 0
Using the methods of Chapter 2, we find that this system has infinitely many solutions, and hence B is linearly dependent.
Similar to our work above, to determine whether X is in Span B, we solve the corresponding system with the entries of X on the right-hand side. Using the methods of Chapter 2, we find that this system is consistent and hence X is in the span of B.

2. For any [x1, x2; x3, x4] in M(2, 2), we have
x1[1, 0; 0, 0] + x2[0, 1; 0, 0] + x3[0, 0; 1, 0] + x4[0, 0; 0, 1] = [x1, x2; x3, x4]
Hence, B spans all 2 x 2 matrices. Moreover, if we take the matrix to be the zero matrix, then the only solution is x1 = x2 = x3 = x4 = 0, the trivial solution, so the set is linearly independent.

3. Computing A^T and 3A^T directly, we find that (3A)^T = 3A^T.

4. (a) AB is not defined since A has three columns and B has two rows.
(b)-(d) BA, A^T A, and BB^T are computed entrywise.
Section 3.2
1. Computing f_A at the standard basis vectors, we find that f_A(2, 3) = 2 f_A(1, 0) + 3 f_A(0, 1).

2. The images f(-1, 1, 1, 0) and f(-3, 1, 0, 1) are computed entrywise.

3. (a) f is not linear: f(1, 0) + f(2, 0) is not equal to f(3, 0).
(b) g is linear since for all x, y in R^2 and t in R, we have
g(tx + y) = g(tx1 + y1, tx2 + y2) = [tx2 + y2; (tx1 + y1) - (tx2 + y2)] = t[x2; x1 - x2] + [y2; y1 - y2] = t g(x) + g(y)

4. H is linear since for all x, y in R^4 and t in R, we have
H(tx + y) = H(tx1 + y1, tx2 + y2, tx3 + y3, tx4 + y4) = [tx3 + y3 + tx4 + y4; tx1 + y1] = t[x3 + x4; x1] + [y3 + y4; y1] = t H(x) + H(y)
By definition of the standard matrix, it is then read off from the images of the standard basis vectors.

5. Let x, y in R^n and t in R. Then, since L and M are linear, we get
(M o L)(tx + y) = M(L(tx + y)) = M(tL(x) + L(y)) = tM(L(x)) + M(L(y)) = t(M o L)(x) + (M o L)(y)
Hence (M o L) is linear.


Section 3.3

2. The matrix of the reflection over the x1x3-plane is [1, 0, 0; 0, -1, 0; 0, 0, 1]. The matrix of the reflection over the x2x3-plane is [-1, 0, 0; 0, 1, 0; 0, 0, 1].
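The two reflection matrices are diagonal sign flips, and each is its own inverse; a quick sketch:

```python
import numpy as np

R_x1x3 = np.diag([1.0, -1.0, 1.0])   # reflection over the x1x3-plane (negates x2)
R_x2x3 = np.diag([-1.0, 1.0, 1.0])   # reflection over the x2x3-plane (negates x1)

x = np.array([2.0, 3.0, -1.0])
print((R_x1x3 @ x).tolist())   # [2.0, -3.0, -1.0]
print((R_x2x3 @ x).tolist())   # [-2.0, 3.0, -1.0]

# Applying a reflection twice gives the identity:
print(np.allclose(R_x1x3 @ R_x1x3, np.eye(3)), np.allclose(R_x2x3 @ R_x2x3, np.eye(3)))
```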

Section 3.4
-4

-2

1. The solution space is Span

2. If 1

[:l

-3

-5

-6

E Null(L), then we must havex1 -x, = 0 and -2x1 + 2xi +x3 = 0.

Solving this homogeneous system, we find that Null(L)

-3
-5
8
1 +t 0,
3. The general solution is x = 0 +s
9
0
-6
0

4. We have

span

-4

-2

{[i]}.

s,t E R

Range(L) = X1

[-J [-J [J
+X

+ X3

Thus,
Range(L) = Span

5. [L] =

[-

-1
2

{[ J, [-J, []} {[ J, []}


{[ ] [-], []}
=Span

So Col(L) =Span

6. Row reducing A to RREF gives

Thus, a basis for Row(A) is

{[l [m

1 o

0
0

= Range(L).


[-1 -12 1 -3 -2_;]- [ 1 -1 1 ]


{[_ :J. [-i] [=m
1
-1
21 -12 -321 - 1 1
3 -3 -2
{ [l [!] [m

7. Row reducing A to RREF gives

Thus, a basis for coI(Al is

8. Row reducing AT to RREF gives

Thus, a basis for Row(Arl "coI(Al is

9. Row reducing A to RREF gives

[21 113 -2-3 13] [1 1 -2-1 1l


_; H {[l [l [m. U }
1,
31 3
o

-8

Thus,

{ -i

and

are bases for Row(A), Col(A),

and Null(A), respectively. Thus, rank(A) = dim Row(A) =


dim Null(A) =

and so rank(A)

nullity(A) =

and nullity(A) =

= 4, as required.

Section 3.5

1. We can row reduce A to the identity matrix so it is invertible. In particular, we have
A^{-1} = [-5/2, 3/2; 2, -1]
2. We have A^{-1} = [1/7, -3/7; 2/7, 1/7]. Thus, the solution to Ax = b is x = A^{-1}b.
3. (a) Let R denote the reflection over the line x2 = x1. We have [R] = [0, 1; 1, 0], and we find that
[R][R] = [0, 1; 1, 0][0, 1; 1, 0] = [1, 0; 0, 1]
as required. Then R is its own inverse.
(b) Let S denote the shear in the x1-direction by a factor of t. Then S^{-1} is a shear in the x1-direction by a factor of -t. We get
[S][S^{-1}] = [1, t; 0, 1][1, -t; 0, 1] = [1, 0; 0, 1]
as required.
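Inverses like the ones in items 1 and 2 are easy to confirm with `numpy.linalg.inv`. The 2 x 2 matrix below is illustrative:

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [4.0, 5.0]])

A_inv = np.linalg.inv(A)
print(A_inv)                                # approximately [[-2.5, 1.5], [2.0, -1.0]]
print(np.allclose(A @ A_inv, np.eye(2)))    # True

# Using the inverse to solve Ax = b (np.linalg.solve is preferable numerically):
b = np.array([1.0, 1.0])
x = A_inv @ b
print(np.allclose(A @ x, b))                # True
```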

Appendix A

447

Answers to Mid-Section Exercises

Section 3.6

1. Let E be the given elementary matrix. Computing EA entrywise shows that multiplying A on the left by E performs the corresponding elementary row operation on the rows of A.
2. For the elementary matrix E corresponding to kR2,
EA = [a11, a12, a13; k a21, k a22, k a23; a31, a32, a33]
as required.
3. We row reduce A to its reduced row echelon form R and find that E5 E4 E3 E2 E1 A = R, where E1, ..., E5 are the elementary matrices corresponding to the row operations used. (Alternative solutions are possible.)

Section 3.7

1. We row reduce and get

A = [-1, 1, 2; 4, -1, -3; -3, -3, 1] → (R2 + 4R1, R3 - 3R1) → [-1, 1, 2; 0, 3, 5; 0, -6, -5] → (R3 + 2R2) → [-1, 1, 2; 0, 3, 5; 0, 0, 5] = U

Therefore, we have

L = [1, 0, 0; -4, 1, 0; 3, -2, 1]

and A = LU.

2. (a) Writing Ax = b as LUx = b, we take y = Ux. Writing out the system Ly = b, we get
y1 = 3
-4y1 + y2 = 2
3y1 - 2y2 + y3 = 6
Using forward substitution, we find that y1 = 3, y2 = 2 + 4(3) = 14, and y3 = 6 - 3(3) + 2(14) = 25, so y = [3; 14; 25]. Thus, our system Ux = y is
-x1 + x2 + 2x3 = 3
3x2 + 5x3 = 14
5x3 = 25
Using back-substitution, we get x3 = 5, then 3x2 = 14 - 5(5), so x2 = -11/3, and -x1 = 3 - x2 - 2x3, so x1 = 10/3. Thus, the solution is x = [10/3; -11/3; 5].

(b) Writing Ax = b as LUx = b, we take y = Ux. Writing out the system Ly = b, we get
y1 = 8
-4y1 + y2 = 2
3y1 - 2y2 + y3 = -9
Using forward substitution, we find that y1 = 8, y2 = 2 + 4(8) = 34, and y3 = -9 - 3(8) + 2(34) = 35, so y = [8; 34; 35]. Thus, our system Ux = y is
-x1 + x2 + 2x3 = 8
3x2 + 5x3 = 34
5x3 = 35
Using back-substitution, we get x3 = 7, then 3x2 = 34 - 5(7), so x2 = -1/3, and -x1 = 8 - x2 - 2x3, so x1 = 17/3. Thus, the solution is x = [17/3; -1/3; 7].
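The two-stage solve used in Section 3.7 — forward substitution on Ly = b, then back-substitution on Ux = y — can be written generically. The L, U, and b below follow the systems worked in this section (a sketch, not library-grade code):

```python
import numpy as np

def solve_lu(L, U, b):
    """Solve L(Ux) = b: forward substitution for y, then back-substitution for x."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                      # Ly = b, top to bottom
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):          # Ux = y, bottom to top
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

L = np.array([[1.0, 0.0, 0.0],
              [-4.0, 1.0, 0.0],
              [3.0, -2.0, 1.0]])
U = np.array([[-1.0, 1.0, 2.0],
              [0.0, 3.0, 5.0],
              [0.0, 0.0, 5.0]])
b = np.array([3.0, 2.0, 6.0])

x = solve_lu(L, U, b)
print(np.allclose(x, [10/3, -11/3, 5]))   # True
print(np.allclose(L @ U @ x, b))          # True
```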

CHAPTER 4
Section 4.1
1. Consider
t1(1 + 2x + x^2 + x^3) + t2(1 + x + 3x^2 + 2x^3) + t3(3 + 5x + 5x^2 + 3x^3) + t4(-x - 2x^2) = 0
Row reducing the coefficient matrix of the associated homogeneous system of linear equations, we find that the only solution is the trivial solution t1 = t2 = t3 = t4 = 0, so B is linearly independent.
To determine if p(x) is in Span B, we need to find if there exist t1, t2, t3, t4 such that p(x) equals the corresponding linear combination of the vectors in B. Row reducing the corresponding augmented matrix, we find that the system is consistent, so p(x) is in Span B.

2. Clearly, if we pick any polynomial p(x) = a + bx + cx^2 + dx^3, then it is a linear combination of the vectors in B. Moreover, if we set a linear combination of the vectors in B equal to 0, we get t1 = t2 = t3 = t4 = 0. Thus, B is also linearly independent.

Section 4.2
1. The set S is not closed under scalar multiplication, so S is not a vector space.
2. Observe that axioms V2, V5, V7, V8, V9, and V10 must hold since we are using the operations of the vector space M(2, 2). Let A and B be vectors in S.
V1 A + B is in S. Therefore, S is closed under addition.
V3 The matrix O_{2,2} is in S (take a1 = a2 = 0) and satisfies A + O_{2,2} = A = O_{2,2} + A. Hence, it is the zero vector of S.
V4 The additive inverse of A is (-A), which is clearly in S.
V6 tA is in S. Therefore, S is closed under scalar multiplication.
Thus, S with these operations is a vector space as it satisfies all 10 axioms.


3. By definition, U is a subset of P2. Taking a = b = c = 0, we get 0 in U, so U is non-empty. Let p = a1 + b1 x + c1 x^2, q = a2 + b2 x + c2 x^2 in U, and s in R. Then b1 + c1 = a1 and b2 + c2 = a2.
S1 p + q = (a1 + a2) + (b1 + b2)x + (c1 + c2)x^2 and (b1 + b2) + (c1 + c2) = b1 + c1 + b2 + c2 = a1 + a2, so p + q is in U.
S2 sp = s a1 + s b1 x + s c1 x^2 and s b1 + s c1 = s(b1 + c1) = s a1, so sp is in U.
Hence, U is a subspace of P2.

4. By definition, {0} is a non-empty subset of V. Let x, y in {0} and s in R. Then x = 0 and y = 0. Thus,
S1 x + y = 0 + 0 = 0 by V4. Hence, x + y is in {0}.
S2 sx = s0 = 0 by Theorem 4.2.1. Hence, sx is in {0}.
Hence, {0} is a subspace of V and therefore is a vector space under the same operations as V.

Section 4.3
1. Consider the equation setting a linear combination of the vectors in B equal to a0 + a1 x + a2 x^2. Row reducing the coefficient matrix of the corresponding system, we observe that the system is consistent and has a unique solution for all a0 + a1 x + a2 x^2 in P2. Thus, B is a basis for P2.

2. Consider the equation setting a linear combination of the vectors in B equal to 0. Row reducing the coefficient matrix of the corresponding system shows that 1 + x can be written as a linear combination of the other vectors. Moreover, {1 - x, 2 + 2x + x^2, x + x^2} is a linearly independent set and hence is a basis for Span B.

3. Observe that every polynomial in S can be written in the form
a + bx + cx^2 + (-a - b - c)x^3 = a(1 - x^3) + b(x - x^3) + c(x^2 - x^3)
Hence, B = {1 - x^3, x - x^3, x^2 - x^3} spans S. Verify that B is also linearly independent and hence a basis for S. Thus, dim S = 3.

4. By the procedure, we first add a vector not in the span of the given vectors to obtain a linearly independent set. Row reducing the corresponding augmented matrix with a general right-hand side [x1; x2; x3] shows which vectors are not in the span of the first two; we pick such a vector, so B is linearly independent. Moreover, repeating the row reduction, we find that the augmented matrix has a leading 1 in each row, and hence B also spans R^3. Thus, B is a basis for R^3. Note that there are many possible correct answers.

5. A hyperplane in R^4 is three-dimensional; therefore, we need to pick three linearly independent vectors that satisfy the equation of the hyperplane. It is easy to verify that such a set B is linearly independent and hence forms a basis for the hyperplane. To extend this to a basis for R^4, we just need to add one vector that does not satisfy the equation of the hyperplane. Adding such a vector gives a basis for R^4.


Section 4.4

{] [=!]=[H
[
= l [ l
m. [1]
+ t2

J. Considert

Row reducing the corresponding augmented matrix gives

-1

Thus,

2. To find the change of coordinates matrix Q we need to find the coordinates of


=

the vectors in '13 with respect to the standard basis S. We get

To find the change of coordinates matrix P, we need to find the coordinates of

the standard basis vectors with respect to the basis '13. To do this, we solve the

triple augmented systems

Thus,

We now verify that

[ -4
-1

p
1

-1

[I

-4 I

l
[= --4 j]
0
1

-1

-1

-1

-1

j][i ]=[ ]
1

Section 4.5

tx1 + Y1 + tx2 + Y2 + tx3 + y3


tx2 + Y2

Y1 + Y2 + Y3
Y2

Hence, L is linear.

]=

tL(x) + L(J)

[]
" [ !]

2. If 1

XJ

Hence,
X

x1


Null(L), then
0 and

x2

[Q ]
0

x3

L(x)

Since

X2

x3

{[ ] [ ]}

X2 +

X1
x2

x3

x3

xi

{[ :J}

Hence, a basis fo; Null(L) is


_
Every vector in the range of L has the form
x2

0. Thus, every vector in Null(L) has the form

x,


XJ

] [ ]
=

Xt

+ (X2 + X )
3

[ ]

is also clearly linearly independent, it is a basis for the

range of L.

Section 4.6
1. We have

J + 1 [

] +o[ ]
] [ ] [ ]
+ 1

+0

Hence,
[L]23

-2

-2

Section 4.7
1. If 0 = t1 L(u1) + ... + tk L(uk) = L(t1 u1 + ... + tk uk), then t1 u1 + ... + tk uk is in Null(L). But, L is one-to-one, so Null(L) = {0}. Thus, t1 u1 + ... + tk uk = 0 and hence t1 = ... = tk = 0 since {u1, ..., uk} is linearly independent. Thus, {L(u1), ..., L(uk)} is linearly independent.

2. Let v be any vector in V. Since L is onto, there exists a vector x in U such that L(x) = v. Since x is in U, we can write it as a linear combination of the vectors {u1, ..., uk}. Hence, we have
v = L(x) = L(t1 u1 + ... + tk uk) = t1 L(u1) + ... + tk L(uk)
Therefore, Span{L(u1), ..., L(uk)} = V.

3. If L is an isomorphism, then it is one-to-one and onto. Since {u1, ..., un} is a basis, it is a linearly independent spanning set. Thus, {L(u1), ..., L(un)} is also linearly independent by Exercise 1, and it is a spanning set for V by Exercise 2. Thus, it is a basis for V.


CHAPTER 5

Section 5.1
1. (a)
(b)

(c)

I I
1; 1
I I

=3(1) - 2(2)=-1

-2

=1(-2) - 3(0) = -2

=2(2)

4(1)=o

1
2. We have C11 = (-1) +
1 3
(-1) +

1 ll

-1

2
= 3, C12 = (-1)1+

I =I

= -8, a nd C13

4. So, detA = lC11 +2C12+3C13 =1(3)+2(-8)+3(4)=-1.

3. (a) detA =lC11 +OC21 +3C31 +(-2)C41=1(-36)+3(32)+(-2)(-26)=112


(b) detA =OC21 + OC22 + (-l)C23 + 2C24 =(-1)(0) + 2(56)=112
(c) detA =OC14 + 2C24 + OC34 + OC44 =2(56)=112

Section 5.2
1. rA is ob tained by mul tiplying ea ch row of A by

2.

det (rA)= (r)(r)(r)detA =r3 detA.

-8

detA = l

16

-14

-1 =
2 0
(-1)
-1
0

-8

-14

16

-]
9

2 0
=
(-1)
0

-14

16

-1

-2

3.
3 1
detA =(-6)(-1) +

=-6

Thus, by Theorem 5.2.1,

r.

-2

=(-6)(-1)

-1

1
=(-1)

-4
-3

-1

5/3

-4
-3

-7

-1

-5
-4
2
-9 - 4(3)(-1) +1
-7

=6(4) + 12(11)=156

-4

-2

-1

-14

-3

-4

-9

2 0

-4

-1

-6

3 2
3 + 4(-1) +

-2 -4 3

-5

-5

-2

-1

1 -I

=20

-5

-4


Section 5.3
1. cofA =

[=; =;]
[-
2

-1

2. A-1 = dA(cofA)7 =

i i
-

-2

Section 5.4
1. We draw i1 and v to form a left-handed system and repeat the calculations for
the right-handed system:
Area(il, v) =Area of Square - Area 1 - Area 2

Area 3

Area 4 - Area 5

Area 6

1
1
1
1
= (v1 +U1)(v2 +u2) - -U1U2 - V2U1 - -V1V2 - -U1U2 - V2U1 - -V1V2
2
2
2
2
=Vt V2 +Vt U2 +V2U2 + V2U2 - U1 u2 - 2v2u1 - V2V1
=

V1U2 - V2U1

l [ JI
[]

-(u1v2 - U2V1)

So, Area= det

2. We have L(e1) =
vectors is

Alternatively,

and L(e2) =

[].

Hence, the area determined by the image


CHAPTER 6
Section 6.1
1. The only eigenvectors are multiples of e3 with eigenvalue 1.
2
2. C(..l)
det(A
,.1
so the eigenvalues are ..l1
..l2
For tl1
0, we have

=
=S.

-,.l/) = S..l = ,.l(,.l - S),


=
OJ=[ ]- [ ]
=
{[-71} =S,
-

Thus, the eigenspace of tl1

0 and

0 is Span

For ,.12

we have

A-Sf=[- -]-[ -l/]


=
{[]}.

Thus, the eigenspace of tl1

0 is Span

3. We have
C(..l)

det(A - ..l/)

=(S - I=
tl)

So tl1

=S
=S,

2. For ..l1

S
=

-,.l
0
0

-3

-tl
-2

Al =-(,.l -S)(,.l
=

2
-4 _

has algebraic multiplicity 1, and ..l2


we have

0
0

-2

Thus, a bas;s fm tbe dgenspace of .<1

2
2
-9

5 ;s

-2, we have

A bas;s foe the dgenspace of .<2


multiplicity 1.

2
2)

-2 has algebraic multiplicity

[
o
]
[
o
ol
A -SI = -S -

plicity 1. For ..l2

2
-4
- tl

2 ;s

0
0

{[]}

1
0
0

1
0

so ;t bas geometric multi

{ [-q},

so ;t also bas geometric


Section 6.2

-1.

1. We have C(,i) = det(A - ,if) = -,i(,i 2)(,1 + 1). Hence, the eigenvalues are
,11 = 0, -i2 = 2, and ,t3 =
For ,11 = 0, we have

- Of =

[ 111 -1] [1 11
{[:]}
[-111 -11 [1
]
{[=il}
0

-7

-4
3

-7
4

0
0

0
1
0

-1
0

Thus, a bosis for tbe eigenspace of ,! r is


For -i2 = 2, we have

A - 2f =

-7

-1,

0
-6
3

-7

0
0

0
1
0

Thus, a basis for the eigenspace of ,! r is


For -i3 =

we have

[ - 1 [ ;]
{[il}
r: =; il
[ ]
[ l
{[]}

- (-1)/ = 1
-7

Thus, a basis for tbe eigenspace of ,i, is

and getP-1AP=D=

So, we can takeP =

-1

2. We have C(;l) = det(A - ,ii) = (,i - 2) . Thus, ,i = 2 has algebraic multiplicity


2. We have A - 21 =

so a basis for its eigenspace is

. Hence,

the geometric multiplicity is less than the algebraic multiplicity. Thus, A is not
diagonalizable.

Section 6.3
1. (a) A is not a Markov matrix.
(b) Bis a Markov matrix. We have B- I =

[ ;l
[ l [l

fixed-state vector is

2. We find that 'f =

'f2 =

3
'f =

[-:

[].

0.6
-Q.6

1,

[1 -1]
Q

[n

. ThUS,

ltS

etc. On the other hand, the fixed

state vector is the eigenvector corresponding to

,i =

which is
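Section 6.3's fixed-state vector can also be found by iterating the Markov matrix; the small column-stochastic matrix below is hypothetical, not the one from the exercise:

```python
import numpy as np

B = np.array([[0.7, 0.4],
              [0.3, 0.6]])      # columns sum to 1 (Markov / column-stochastic)

# Powers of B drive any probability vector toward the fixed-state vector...
s = np.array([1.0, 0.0])
for _ in range(200):
    s = B @ s

# ...which is the eigenvector for eigenvalue 1, scaled so its entries sum to 1.
eigvals, V = np.linalg.eig(B)
k = int(np.argmin(np.abs(eigvals - 1.0)))
fixed = np.real(V[:, k])
fixed = fixed / fixed.sum()

print(np.allclose(s, fixed), np.round(fixed, 6).tolist())
```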


CHAPTER 7

Section 7.1

[i] nJ [i] Hl
1 [ f] [ 6rl: [!!h]}
{

1. We have
s

go a

21-16

0,

di

1/Ys

0 and
,

nJ Hl

0. Thus, the set

vector by its length gives the orthononnal set

2 -../36

2. It is easy to verify that <J3 is orthonormal. We have

1 i12

-1 /{i, and b3

1 v3

5/-./6.

Hence,

b1

[1]11

1 i11

8/-.../3, b2

[!{]
5/-./6

c-v'lS - V2 + 1);-./6
c V2 + 2 Y3 - 1);-./6
y
.
and
( -v'18 - 2)/-./6
( .../2 + 2)/-./6
c V2 - 2 Y3 - 1);-./6
c-v'lS + m + 1);-./6
2
little effort, we find that 11111
6 and 1 y -2.

3. We have 1

4. The result is easily verified.

With a

Section 7.2
Vt

1. We want to find all

i1

v2
V3
V4

JR4

such that

i1

1
1
1

-1
=

0,

i1

i1

0. This gives the homogeneous system of equations


Vt +

-V1

V2

V3

V3

V4
V2 - V3 + V4

which has solution space Span

{ }
-

Hence, S'

Span

2. We have

perps

projs

[] [ ] [ ]
2
5/2
1 - 1
3
5/2

-1/2

1/2

{ -f } .

0, and

Appendix A

Answers to Mid-Section Exercises

459

Section 7.4

1. We verify that ⟨ , ⟩ satisfies the three properties of an inner product. Let A = [a1, a2; a3, a4], B = [b1, b2; b3, b4], and C = [c1, c2; c3, c4].
(a) ⟨A, A⟩ = tr(A^T A) = a1^2 + a2^2 + a3^2 + a4^2 >= 0, and ⟨A, A⟩ = 0 if and only if a1 = a2 = a3 = a4 = 0, that is A = O_{2,2}. Thus, it is positive definite.
(b) ⟨A, B⟩ = tr(B^T A) = b1 a1 + b3 a3 + b2 a2 + b4 a4 = a1 b1 + a3 b3 + a2 b2 + a4 b4 = ⟨B, A⟩, so it is symmetric.
(c) For any A, B, C in M(2, 2) and s, t in R,
⟨A, sB + tC⟩ = tr((sB + tC)^T A) = tr((sB^T + tC^T)A) = tr(sB^T A + tC^T A) = s tr(B^T A) + t tr(C^T A) = s⟨A, B⟩ + t⟨A, C⟩
So, ⟨ , ⟩ is bilinear. Thus, ⟨ , ⟩ is an inner product on M(2, 2). We observe that ⟨A, B⟩ = a1 b1 + a2 b2 + a3 b3 + a4 b4, so it matches the dot product on the isomorphic vectors in R^4.

2. ||1|| = sqrt(⟨1, 1⟩) = sqrt(1^2 + 1^2 + 1^2) = sqrt(3), and ||x|| = sqrt(⟨x, x⟩) = sqrt(0^2 + 1^2 + 2^2) = sqrt(5).

3. ||x|| = sqrt(⟨x, x⟩) = sqrt((-1)^2 + 0^2 + 1^2) = sqrt(2)
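The closing observation of item 1 — that ⟨A, B⟩ = tr(B^T A) agrees with the dot product of the entries — is quick to verify numerically with illustrative matrices:

```python
import numpy as np

def trace_inner(A, B):
    """<A, B> = tr(B^T A) for real matrices."""
    return float(np.trace(B.T @ A))

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, -1.0], [0.5, 2.0]])

print(trace_inner(A, B))                         # 12.5
print(float(np.dot(A.ravel(), B.ravel())))       # 12.5, the same value

# Positive definiteness: <A, A> is the sum of squared entries.
print(trace_inner(A, A) == float(np.sum(A**2)))  # True
```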

CHAPTER 8
Section 8.1
1. We have C(,1. ) = (,1. - 8)(,1. - 2). Thus, the eigenvalues are ,1.1 = 8 and A2 = 2.
For A1 = 8, we get
-3 -3 - l l
A_ 81 =
-3 -3
0 0

[ ] [ ]
{[ ]}
A-8!= [ ] [ ]
{[ ]}
[ ] [-11{ ;1i].

Thus, a basis for its eigenspace is

3
-3

Thus, a basis for its eigenspace is

A is orthogonally diagonalized to D=

.For ,1.2 = 2, we get

-3 - l
3
0

-1
0

.We normalize the vectors and find that


by P =

2. We have C(,1.) = -,1.(,1. - 3)2. Thus, the eigenvalues are ,1.1= 0 and ,1.2 = 3. For
A1 = 0, we get
2 -1 -1
0
A Of = -1
2 -1 - 0 l -1
-1 -1
2
0 0
0
-

1 [1 -11


m ]}For = we get
-1 -1 1 [1 1 l
= [ -1 -

Thus, a basis for its eigenspace is


A-

81

3,

,1,

-1
-1

-1

-1 -1 -1
Thus, a basis for its eigenspace is
But we need an orthonormal
basis for each eigenspace, 7pp t;Gram-Schmidt Procedure and norl
1
malize the vectors to get 1 / , - .
-:- J 2!ft
- 1 /.../6
Therefore,AisdiagonalizedtoD= byP=
1 - 1 /.../6 .

{[-il [-]}.
]}
{[
[
[ i [;

1;-Y3

2/.../6

Section 8.2
1.

2.

3.

(a) Q(1) 4 + x1x2 + -Y2x


(b) Q(x) = xi - 2x1x2 + 2x + 6x2x3 - x
1
2 3/2 - 1/2
1
-1]
(b) 3/2 4
(a) [-1 -3
(c) 3
-1 /2
1
4
3. The corresponding symmetric matrix is A = [ 0 2; 2 -3 ]. We have
C(λ) = (λ + 4)(λ - 1). Thus, the eigenvalues are λ1 = -4 and λ2 = 1. For λ1 = -4, we get

    A + 4I = [ 4 2; 2 1 ] → [ 2 1; 0 0 ]

Thus, a basis for its eigenspace is { [-1, 2] }. For λ2 = 1, we get

    A - I = [ -1 2; 2 -4 ] → [ 1 -2; 0 0 ]

Thus, a basis for its eigenspace is { [2, 1] }. Thus, taking
P = [ -1/√5  2/√5;  2/√5  1/√5 ] and x = Py, we get that the diagonal form of Q(x) is
Q(x) = -4y1² + y2².

4. (a) The corresponding symmetric matrix A has C(λ) = (λ - 12)(λ + 4). Thus, the
eigenvalues are λ1 = 12 and λ2 = -4. Thus, Q(x) is indefinite.

(b) The corresponding symmetric matrix A has C(λ) = -(λ - 1)(λ + 1)(λ - 8). Thus,
the eigenvalues are λ1 = 1, λ2 = -1, and λ3 = 8. Since Q(x) has both positive and
negative eigenvalues, it is indefinite.
=

_I

4.

[-

8.


Section 8.3

1. The corresponding symmetric matrix is A = [ 1 -1; -1 1 ]. We have C(λ) = λ(λ - 2).
Thus, the eigenvalues are λ1 = 0 and λ2 = 2. For λ1 = 0, we get

    A - 0I = [ 1 -1; -1 1 ] → [ 1 -1; 0 0 ]

Thus, a basis for its eigenspace is { [1, 1] }. For λ2 = 2, we get

    A - 2I = [ -1 -1; -1 -1 ] → [ 1 1; 0 0 ]

Thus, a basis for its eigenspace is { [1, -1] }. We take (1/√2)[1, 1] to define the
y1-axis and (1/√2)[1, -1] to define the y2-axis. Then the original equation is
equivalent to 0y1² + 2y2² = 2. We get the pair of parallel lines depicted below.
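The eigenvector claims above can be checked directly (using the reconstructed matrix A = [[1, -1], [-1, 1]] of the quadratic form, an assumption consistent with C(λ) = λ(λ - 2)):

```python
# Assumption: A = [[1, -1], [-1, 1]], consistent with C(lambda) = lambda(lambda - 2).
A = [[1, -1], [-1, 1]]

def matvec(M, v):
    # Matrix-vector product over plain lists.
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# [1, 1] is an eigenvector for lambda = 0; [1, -1] for lambda = 2.
print(matvec(A, [1, 1]), matvec(A, [1, -1]))
```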

CHAPTER 9
Section 9.1
1. (a) 3 + i    (b) -2 + 2i    (c) 5    (d) 5i/13

2. (a) Conjugating twice returns the original number: conj(z̄1) = conj(x - yi) = x + yi = z1.

(b) x + iy = z̄1 = x - iy if and only if y = 0.

(c) Let z2 = a + ib. Then

    conj(z1 + z2) = conj(x + iy + a + ib) = conj(x + a + i(y + b)) = x + a - i(y + b)
                  = x - iy + a - ib = z̄1 + z̄2


3. (a) (1 + i)/(1 - i) = ((1 + i)(1 + i))/((1 - i)(1 + i)) = 2i/2 = i

(b) 2i/(1 + i) = (2i(1 - i))/((1 + i)(1 - i)) = (2 + 2i)/2 = 1 + i

(c) (4 - i)/(1 + 5i) = ((4 - i)(1 - 5i))/((1 + 5i)(1 - 5i)) = (-1 - 21i)/26 = -1/26 - (21/26)i
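Python's built-in complex type confirms these three quotients:

```python
# Each quotient from the answer above, computed directly.
a = (1 + 1j) / (1 - 1j)        # expect i
b = 2j / (1 + 1j)              # expect 1 + i
c = (4 - 1j) / (1 + 5j)        # expect -1/26 - (21/26)i
print(a, b, c)
```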

4.

(2 +i (1+i)
--+-i}-i

x
0

=-

5. We have |z1| = |√3 + i| = √(3 + 1) = 2. Any argument θ satisfies √3 = 2 cos θ
and 1 = 2 sin θ. Thus, θ = π/6 + 2πk, k ∈ Z. Hence,

    z1 = 2( cos(π/6) + i sin(π/6) )

We have |z2| = |-1 - i| = √(1 + 1) = √2. Any argument θ satisfies -1 = √2 cos θ
and -1 = √2 sin θ. Thus, θ = 5π/4 + 2πk, k ∈ Z. Hence,

    z2 = √2( cos(5π/4) + i sin(5π/4) )
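The moduli and arguments can be confirmed with the cmath module (note that cmath.phase returns the principal argument in (-π, π], so 5π/4 shows up as -3π/4):

```python
import cmath
import math

z1 = math.sqrt(3) + 1j
z2 = -1 - 1j

print(abs(z1), cmath.phase(z1))   # modulus 2, argument pi/6
print(abs(z2), cmath.phase(z2))   # modulus sqrt(2), argument -3*pi/4 (= 5*pi/4 - 2*pi)
```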

6. We have

    |z̄| = |r cos θ - ir sin θ| = √(r² cos² θ + r² sin² θ) = √(r²) = r = |z|

Using the trigonometric identities cos θ = cos(-θ) and -sin θ = sin(-θ) gives

    z̄ = r cos θ - ir sin θ = r cos(-θ) + ir sin(-θ) = r( cos(-θ) + i sin(-θ) )

Hence, an argument of z̄ is -θ.
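The same fact, checked numerically for one sample r and θ (an arbitrary illustrative choice):

```python
import cmath

# Sample polar data: r = 3, theta = 0.7 (arbitrary).
z = cmath.rect(3, 0.7)            # z = r(cos theta + i sin theta)
w = z.conjugate()
print(abs(w), cmath.phase(w))     # modulus r, argument -theta
```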

7. Theorem 3 says the modulus of a quotient is the quotient of the moduli of the
factors, while the argument of the quotient is the difference of the arguments.
Taking z1 = 1 = 1(cos 0 + i sin 0) and z2 = z in Theorem 3 gives

    1/z = (1/r)( cos(0 - θ) + i sin(0 - θ) ) = (1/r)( cos(-θ) + i sin(-θ) )

8. In Example 5, we found that 2 - 2i = 2√2( cos(-π/4) + i sin(-π/4) ) and
-1 + √3i = 2( cos(2π/3) + i sin(2π/3) ). Hence,

    (2 - 2i)(-1 + √3i) = 4√2( cos(-π/4 + 2π/3) + i sin(-π/4 + 2π/3) )
                       = 4√2( cos(5π/12) + i sin(5π/12) ) ≈ 1.464 + 5.464i

    (2 - 2i)/(-1 + √3i) = (2√2/2)( cos(-π/4 - 2π/3) + i sin(-π/4 - 2π/3) )
                        = √2( cos(-11π/12) + i sin(-11π/12) ) ≈ -1.366 - 0.366i

9. We have 1 - i = √2( cos(-π/4) + i sin(-π/4) ), so

    (1 - i)⁵ = 2^(5/2)( cos(-5π/4) + i sin(-5π/4) ) = -4 + 4i

We have -1 - √3i = 2( cos(-2π/3) + i sin(-2π/3) ), so

    (-1 - √3i)⁵ = 2⁵( cos(-10π/3) + i sin(-10π/3) ) = -16 + 16√3i
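Both powers can be verified directly:

```python
import math

w1 = (1 - 1j) ** 5                       # expect -4 + 4i
w2 = (-1 - math.sqrt(3) * 1j) ** 5       # expect -16 + 16*sqrt(3) i
print(w1, w2)
```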

Section 9.2

1. We row reduce the corresponding augmented matrix to get

i
1
1
i
2 l+i

3 -1-2i
2+i
2
S- i

1 +2i

Thus, the only solution is z1 = 1 + 2i, z2 = 0, and z3 = -i.

Section 9.3
1. All 10 vector space axioms are easily verified.
2. The reduced row echelon form of A is

for Row(A) is

ml [!J}

1
0
0
0

0
1 -i
0 0
0 0

Hence, a basis


Section 9.4

= 2 + 3i,

1. For ..:l1

=[ 3 3i _3- 3i]- [1 -20 ].


[1 i]=[ ] +i [ ] P=[ n
11
;
3
+
-fi.i.
+
=[ .
[ -J[ ]
[ --fi.1 i]= [10] + [ -Y2l0 j.
i

we haveA - ;J.1/

An eigenvector corresponding to ;i is

2. det(A-;J./) = ;J.2 - 6;J.

So, ..:l1 =

canonical form ofA is C


We haveA-;J.1/=

Hence, a real

An eigenvector correspond'mg to

.So,

)
/l

1s

So,

P=[ -.
Section 9.5
1. We have

    ⟨u, v⟩ = i(2 - 2i) + (1 + 2i)(1 + 3i) = -3 + 7i

    ⟨2iu, v⟩ = 2i⟨u, v⟩ = 2i(-3 + 7i) = -14 - 6i

    ⟨u, 2iv⟩ = conj(2i)⟨u, v⟩ = -2i(-3 + 7i) = 14 + 6i

The inner product axioms are verified as follows:

    ⟨z, z⟩ = z1 z̄1 + ··· + zn z̄n = |z1|² + ··· + |zn|² ≥ 0, with equality if and only if z = 0

    conj(⟨z, w⟩) = conj(z1 w̄1 + ··· + zn w̄n) = z̄1 w1 + ··· + z̄n wn = ⟨w, z⟩

    ⟨v + z, w⟩ = ⟨v, w⟩ + ⟨z, w⟩ and ⟨z, w + u⟩ = ⟨z, w⟩ + ⟨z, u⟩

    ⟨αz, w⟩ = α⟨z, w⟩ and ⟨z, αw⟩ = conj(α)⟨z, w⟩

2. We have

    ⟨z, z⟩ = z1 z̄1 + ··· + zn z̄n = |z1|² + ··· + |zn|²

Thus, ⟨z, z⟩ ≥ 0 for all z, and ⟨z, z⟩ = 0 if and only if z = 0.
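A numerical check of these values (the vectors u = (i, 1 + 2i) and v = (2 + 2i, 1 - 3i) are inferred from the expansion of ⟨u, v⟩ above, so treat them as assumptions; the convention used is ⟨z, w⟩ = Σ z_j conj(w_j)):

```python
# Assumed data, inferred from the expansion of <u, v> above.
u = [1j, 1 + 2j]
v = [2 + 2j, 1 - 3j]

def ip(z, w):
    # <z, w> = sum z_j * conj(w_j): linear in the first slot,
    # conjugate-linear in the second.
    return sum(a * b.conjugate() for a, b in zip(z, w))

print(ip(u, v), ip([2j * a for a in u], v), ip(u, [2j * b for b in v]))
```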

3. The columns of A are not orthogonal under the standard complex inner product,
so A is not unitary. We have B*B = I, so B is unitary.

APPENDIX B

Answers to Practice Problems and Chapter Quizzes
CHAPTER 1

Section 1.1
A Problems
Al

X2

(a)

(b)

[;]

3[-!]

(c)

(d)

-2

Xi

A2 (
a)
A3 (

[ ]

{[ 1
--131

A4 (a)

(b)

(b

[=] [-] 4
2
[
r-10;[-1 ! 1 ;1
0
-22
(c)

10

(c) u

2[]-2[-]
[ ]

X2

[_]

Xi

[3[62] 3 [ 4 vl]
74//3] [ -rrl
c{
[-1/2]1 13/3
[=]
9/2

(d)

(c)

(b)

Xi

-[]

Xi
X2

[]

[72]

(e)

(f)

(e)

-7/2

(f)

(d) u

-Y2
-Y2 + 7f


A5 (a)

[l

(b)

A6

[ :;]

(d)

-10

-1/2

- /3
-14/3

Ul-m [ il
m m [tl
ni-m [=]
[]-[_!] nJ
m-ni Ul
[=il nl [t] [=!] [j]

pQ = oQ-oP =

PR = oR -oh

PS = oS -oh

QR=

oR -oQ =

sR=

o R -oS =

pQ +QR=

[:l

PS + sR

A8 Note that alternative correct answers are possible.

[ ] [ J

tE

(b)x=

IER

(d}X=

(a)x= -

+t

[j] Hl
[;] [-;;],
-

(c}X=

(e)X=

+t

A9 (a) XI =

-1

-2/3

+t, X2 =

(b) X1 = l + 3t, X2 = l
AlO (a) Three points

(b) Since

-2PQ =

[J [ =J
+r

nl [H
+l

tE

IER

t E

-1 + 3t,

- 2t,

[= ] [].
UJ [ ;J

t E; X =

t E; x=

+t

+t

t E

t E

P, Q, and R are collinear if PQ = tPR for some t E R


= PR, the points P, Q, and R must be collinear.

[-]

(c) The points S, T, and U are not collinear because SU ≠ tST for any t.


Section 1.2
A Problems
0
0
(c) 1
1
3

10
-7
(b)
10

5
9
Al (a)
0
1

-5

A2 (a) The set is not a subspace of IR.3.


(b) The set is a subspace of IR.3.

(c) The set is a subspace of IR.2

(d) The set is not a subspace of IR.3.


(e) The set is a subspace of IR.3.

(f) The set is a subspace of IR.4.

A3 (a) The set is a subspace of IR.4 .


(b) The set is not a subspace of IR.4.

(c) The set is not a subspace of IR.4

(d) The set is not a subspace of IR.4.


(e) The set is a subspace of IR.4.

(f) The set is not a subspace of IR.4.

A4 Alternative correct answers are possible.

[J o[l o [_:Hl
+!l- [] [Hl
lil [tl m ll

(a) l

(b)

(c) 1

(d) 1

1
+

l
-

[n [J [J [J
2

+1

A5 Alternative correct answers are possible.


(a) The plane in R4 with basis

{! i}
U }
,

(b) The hyperplane in R4 with basis


(c) The line in R4 with basis

{ -I }
{l }
.

(d) The plane in R4 with basis

-1

A6 If the set with vector equation x = p + td is a subspace of Rn, then it contains
the zero vector. Hence, there exists t1 such that 0 = p + t1 d. Thus, p = -t1 d, and
so p is a scalar multiple of d. On the other hand, if p is a scalar multiple of d,
say p = t1 d, then we have p + td = t1 d + td = (t1 + t)d. Hence, the set is Span{d}
and thus is a subspace.

A7 Assume that there is a non-empty subset B1 = {v1, ..., vℓ} of B that is linearly
dependent. Then there exist ci not all zero such that

    0 = c1 v1 + ··· + cℓ vℓ = c1 v1 + ··· + cℓ vℓ + 0 vℓ+1 + ··· + 0 vn

This contradicts the fact that B is linearly independent. Hence, B1 must be linearly
independent.

Section 1.3
A Problems
A1 (a) √29   (b) 1   (e) √51/5   (f) 1

[ ;0]

[ ]

1
(b)
1/.../2

3/5

A2 (a) 4/5

c+!J

-2/3
-2/3
(e)
1/3
0

A3 (a) 2 VlO
A4 (a) 11111

(b) 5

(c) .../2
(g) -%

(d) Y17
(h) 1

(c)

2/Ys
1/.../2
0
(f)
0
-1/.../2

(d) 3-%

(c) vT75

-v26; 11.Yll -f35; 111 + .Yll 2 .../22; 11 .YI 16; the triangle inequality:
2 -v'22 ::::: 9.38 $ -v26 + -f35 ::::: 10.58; the Cauchy-Schwarz inequality: 1 6 $
-../26(30);:::: 27.93.
=

(b) 11111 = -%; 11.Yll = Y29; 111 + .Yll


ity: -../41 ::::: 6.40 $ -% + Y29

3
AS (a)

(b)

:::::

-../41; 11 .YI = 3; the triangle inequal


7.83; the Cauchy-Schwarz inequality:

-../ 6(29);:::: 13.19.
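These numerical comparisons are easy to confirm (using the values from A4: part (a) has ‖x‖ = √26, ‖y‖ = √30, ‖x + y‖ = 2√22, |x · y| = 16; part (b) has ‖x‖ = √6, ‖y‖ = √29, ‖x + y‖ = √41, |x · y| = 3):

```python
import math

# Part (a): triangle inequality and Cauchy-Schwarz.
assert 2 * math.sqrt(22) <= math.sqrt(26) + math.sqrt(30)
assert 16 <= math.sqrt(26 * 30)

# Part (b).
assert math.sqrt(41) <= math.sqrt(6) + math.sqrt(29)
assert 3 <= math.sqrt(6 * 29)

print(round(2 * math.sqrt(22), 2), round(math.sqrt(26) + math.sqrt(30), 2))
```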

m [- l
[-!] [-:]

O; these vectors are orthogonal.

0; these vectors are orthogonal.


(c)

(d)

(e)

(f)

rn [-l


O; these vectors are not orthogonal.

4 -1
0 34
-2 0
0 X1
0 X2
0 X3
0 X4
1/3 3/2
2/3 0
-1/3 -3/2
3
6
0
2x1 4x2 -X3
3x1 -4x2 x3 8
3x1
4x3 0
X1 -4x2x2 5x3
-2x4 0
m
=

O; these vectors are orthogonal.

= O; these vectors are orthogonal.

= 4; these vectors are not orthogonal.

A6 (a) k =
A7 (a)
(c)

AS (a)
(c)

(b) k =

or k =

(b)
(d)

(b)
(d)

(b) ii=

Hl1

(d) -12
-1
2
-3
-1
2x1 -3x2 5x3 6
X2 -2
xi - x2 3x3 2
it=

-3
3x1-4x1 -2x2
5x3 -2x3
26 -12
x2 3x3 3x4 1
X2 2X3
-X4 X5
(c) k

A9 (a) ii=

(c) ii

(d) any k E IR

{]
+

= 1

(e)n=

AlO (a)
(b)


(c)

All (a) False. One possible counterexample is

[] [] 2 [] [ ]

(b) Our counterexample in part (a) has i1 *

A Problems

(b) proJvu =
.

_,

[-J.
r3648/2/25J
5

perpil i1 =

[]

,perpvu =
_,

_9

0, so the result does not change.

Section 1.4

Al (a) projil i1 =

r-136
102//25J
25


(d) projil a=

(e) projil a=

(f) projil a=

A3 (a) u =

-[ 4/98/9] [ 40/91/9 l
-8/0 9 -1 -19/9
00
-12
0 1/2 -1 5/2
-0
3
-5/22
-1/20

=
1111

(b) proja F =

(c) p erpa F =

A4 (a) u =

11 11

(b) proja F =

(c) p erpa F =

'perpil a=

'perpil a=

'perpil a=

2/76/7
[3/7220/49]
660/49]
[330/49
[ 270/49
222/49]
-624/49
3/1/M
Ml
[-224/71Yf4.
[-16/78/7]
-[ 693/7/7]
30/7


AS (a) R(5/2,5/2), 5/-../2

(b) R(58/17,91/17), 6/Yfl


(c) R(17/6, 1/3,-1/6), y29/6
(d) R(5/3,11/3,-1/3), -../6
A6 (a) 2/../26

(b) 13/../38
(c) 4/Ys
(d) Y6
A7 (a) R(l/7,3/7,-3/7,4/7)

(b) R(l5/14,13/7,17/14,3)
(c) R(O,14/3,1/3,10/3)
(d) R(-12/7,11/7,9/7,-9/7)

Section 1.5
A Problems

[= ]
[]
-27

Al (a)

(b)

(c)

(d)

(e)

(0

31
- 4

Ul
Ul
[]
[]

A2 (a) ii x ii

(b) i1 xv

[]
[ -1
-13

-v x i1


(c) u × 3w = 3(u × w)

(d) u × (v + w) = u × v + u × w

(e) u · (v × w) = -14 = w · (u × v)

(f) u · (v × w) = -14 = -v · (u × w)

A3 (a) Y35
(b) -vTl
(c) 9

(d) 13

A4 (a) x1 - 4x2 - l0x3 = -85


(b) 2x1 - 2x2

(c) -5x1 - 2x2

3x3 = -5

6x3 = 15

(d) -17x1 - x2 +lOx3 = 0

AS (a) 39x1 + 12x2 + lOx3 = 140

(b) llx1 - 21x2 - 17x3 = -56

(c) -12x1 + 3x2 - 19x3 = -14

(d) X2 = 0

A6 (a) 1"

(b)

[6\} t;J

x{il+H

tER

tER

A 7 (a) 1

(b) 126
(c) 5

(d) 35
(e) 16

A8 u · (v × w) = 0 means that u is orthogonal to v × w. Therefore, u lies in the plane
through the origin that contains v and w. We can also see this by observing that
u · (v × w) = 0 means that the parallelepiped determined by u, v, and w has volume
zero; this can happen only if the three vectors lie in a common plane.

A9
    (u - v) × (u + v) = u × (u + v) - v × (u + v)
                      = u × u + u × v - v × u - v × v
                      = 0 + u × v + u × v - 0
                      = 2(u × v)
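The identity in A9 can be spot-checked with a small cross-product helper (the sample vectors are arbitrary):

```python
def cross(u, v):
    # Standard cross product in R^3.
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

# Arbitrary sample vectors.
u = [1, 2, 3]
v = [-2, 0, 5]
lhs = cross([a - b for a, b in zip(u, v)], [a + b for a, b in zip(u, v)])
rhs = [2 * c for c in cross(u, v)]
print(lhs, rhs)
```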

Chapter 1 Quiz

E Problems

Et x

{] HJ
+

E2 8xi - x2+7 X3
E3 To show that

ieR

{[], [ ]}
-

is a basis, we need to show that it spans R² and that it is

linearly independent.
Consider

[] [] [-] [ : : i ]
=

This gives xi
we get ti

ti-t2 and x2

F2xi + x2) and t1

+t1

ti

t2

2ti+2t2. Solving using substitution and elimination,


=

(- 2xi + x2). Hence, every vector

written as

So, it spans R². Moreover, if x1

that ti

for IR.

t2

x2

[]

can be

0, then our calculations above show

0, so the set is also linearly independent. Therefore, it is a basis

.
E4 If d ≠ 0, then a1(0) + a2(0) + a3(0) = 0 ≠ d, so 0 ∉ S. Thus, S is not a subspace
of R³.
On the other hand, assume d = 0. Observe that, by definition, S is a subset of R³
and that 0 ∈ S since taking x1 = 0, x2 = 0, and x3 = 0 satisfies
a1x1 + a2x2 + a3x3 = 0.
Let x, y ∈ S. Then they must satisfy the condition of the set, so
a1x1 + a2x2 + a3x3 = 0 and a1y1 + a2y2 + a3y3 = 0.
To show that S is closed under addition, we must show that x + y satisfies the
condition of S. We have x + y = [x1 + y1, x2 + y2, x3 + y3] and

    a1(x1 + y1) + a2(x2 + y2) + a3(x3 + y3) = a1x1 + a2x2 + a3x3 + a1y1 + a2y2 + a3y3
                                            = 0 + 0 = 0

Hence, x + y ∈ S. Similarly, for any t ∈ R, we have tx = [tx1, tx2, tx3] and

    a1(tx1) + a2(tx2) + a3(tx3) = t(a1x1 + a2x2 + a3x3) = t(0) = 0

So, S is closed under scalar multiplication. Therefore, S is a subspace of R³.



E5 The coordinate axes have direction vectors given by the standard basis vectors.
The cosine of the angle between v and e1 is

    cos α = (v · e1)/(‖v‖ ‖e1‖) = 2/√14

The cosine of the angle between v and e2 is cos β = (v · e2)/(‖v‖ ‖e2‖). The cosine
of the angle between v and e3 is

    cos γ = (v · e3)/(‖v‖ ‖e3‖) = 1/√14

[-il

E6 Since the origin O(0, 0, 0) is on the line, the point Q on the line closest to P
is given by OQ = proj_d OP, where d is a direction vector of the line. Hence,

    OQ = ((OP · d)/‖d‖²) d = [18/11, -12/11, 18/11]

and the closest point is Q(18/11, -12/11, 18/11).


z

E7 Let Q(0, 0, 0, 1) be a point in the hyperplane. A normal vector to the hyperplane
is n = [1, 1, 1, 1]. Then the point R in the hyperplane closest to P satisfies
PR = proj_n PQ. Hence,

    OR = OP + proj_n PQ = [3, -2, 0, 2] + [-1/2, -1/2, -1/2, -1/2]
       = [5/2, -5/2, -1/2, 3/2]

Then the distance from the point to the hyperplane is the length of PR:

    ‖PR‖ = ‖[-1/2, -1/2, -1/2, -1/2]‖ = 1
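A check of E7 in Python (the data P(3, -2, 0, 2) and n = [1, 1, 1, 1] are inferred from the numbers in the computation above, so treat them as assumptions):

```python
import math

# Assumed data: hyperplane through Q = (0, 0, 0, 1) with normal n = (1, 1, 1, 1),
# and point P = (3, -2, 0, 2).
n = [1, 1, 1, 1]
P = [3, -2, 0, 2]
Q = [0, 0, 0, 1]

PQ = [q - p for p, q in zip(P, Q)]
c = sum(a * b for a, b in zip(PQ, n)) / sum(a * a for a in n)
PR = [c * a for a in n]                     # proj_n PQ
R = [p + d for p, d in zip(P, PR)]          # closest point in the hyperplane
dist = math.sqrt(sum(d * d for d in PR))    # distance from P to the hyperplane
print(R, dist)
```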


E8 A vector orthogonal to both vectors is

[] r-:1 Hl
x

E9 The volume of the parallelepiped determined by u + kv, v, and w is

    |(u + kv) · (v × w)| = |u · (v × w) + k(v · (v × w))|
                         = |u · (v × w) + k(0)|
                         = |u · (v × w)|

This equals the volume of the parallelepiped determined by u, v, and w.
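E9's shear-invariance of the volume can be spot-checked numerically (sample vectors and the factor k are arbitrary):

```python
def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Arbitrary sample data.
u, v, w, k = [1, 4, -2], [0, 3, 1], [2, -1, 5], 7
vol_sheared = abs(dot([a + k * b for a, b in zip(u, v)], cross(v, w)))
vol = abs(dot(u, cross(v, w)))
print(vol_sheared, vol)
```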


E10 (i) False. The points P(0, 0, 0), Q(0, 0, 1), and R(0, 0, 2) lie in every plane of
the form t1x1 + t2x2 = 0 with t1 and t2 not both zero.

(ii) True. This is the definition of a line, reworded in terms of a spanning set.

(iii) True. The set contains the zero vector and hence is linearly dependent.

(iv) False. The dot product of the zero vector with itself is 0.

(v) False. For a suitable choice of x and y, proj_x y ≠ proj_y x.

(vi) False. If y = 0, then proj_x y = 0. Thus, {proj_x y, perp_x y} contains the zero
vector, so it is linearly dependent.

(vii) True. We have

    ‖u × (v + 3u)‖ = ‖u × v + 3(u × u)‖ = ‖u × v + 0‖ = ‖u × v‖

so the parallelograms have the same area.

CHAPTER 2

Section 2.1
A Problems

Al (a) x =

(b) X=

(c) X =

[1;]

[] fn
[=2]
+

-1

(d) X=

-1

- +t -,
0

t ER

tEJR


A2 (a)

A is in row echelon form.

(b) Bis in row echelon form.


(c) C is not in row echelon form because the leading 1 in the third row is not
further to the right than the leading 1 in the second row.
(d) D is not in row echelon form because the leading 1 in the third row is to left
of the leading 1 in the second row.

A3 Alternate correct answers are possible.


-3
13

-1
0
0

2
1
0

(c)

5
0
0
0

0
-1
0
0

0
-2
1
0

(d)

2
0
0
0

0
2
0
0

2
2
4
0

0
4

(e)

1
0
0
0

2
1
0
0

2
7
0

1
5
2

(f)

1
0
0
0

0
1
0
0

3
-1
24
0

(a)

(b)

[i

8
0

0
2
1
0

1
11

A4 (a) Inconsistent.
(b) Consistent The solution is ii

(c) Consistent. The solution is x

(d) Consistent. The solution is x

(e) Consistent. The solution is x

m [!]
+t

-1
0
3

+t

19/2
0
5/2
-2

-1
0
1
0

ER

-1
-1
1'
0

E JR

-1
+t

+t

O'
0
1
0
O'

ER

s,t

ER


AS (a)

(b)

(c)

-5

24
/11
.
.
.
:;t
=
.
. cons1stent with so1ut10n
-t
l0
/1 l

_11

_10

consistent with solutionX =

f[ H

R
IE

-3

-5

11

-8

19

[ = i l

[ l f H
l [
[ -r
Hl

l
l[
[
1 [
[

x=

(d)

[ ]

1 ]
[ 1 ] [
[ - I H - 1 ;J
3

-17

-8

-3

-2

-11

-5

Consistent

IE R.

3 6

16

-2


-5

2
1

-11
5

with

[fl

solution

Consistent with solution

X=

(e)

-1

-1

13

19

-5

-17

-21

-6

(g)

-2
-3

0+

-5
-7

3
3

The system is inconsistent.

-3
1

0
4

-5

10

-1

-8

-7

solution x =

-3

(f)

10

Consistent

with

tER

'

0
2

0
-4

sistent with solution x =

-3

-2

0
0

2
1

3/2

. The system is con-

-1
t 3/2 '
+
2
-3/2

t ER

A6 (a) If a ≠ 0, b ≠ 0, this system is consistent and the solution is unique. If
a = 0, b ≠ 0, the system is consistent, but the solution is not unique. If a ≠ 0,
b = 0, the system is inconsistent. If a = 0, b = 0, this system is consistent, but
the solution is not unique.

(b) If c ≠ 0, d ≠ 0, the system is consistent, and the solution is unique. If d = 0,
the system is consistent only if c = 0. If c = 0, the system is consistent for all
values of d, but the solution is not unique.

A7 600 apples, 400 bananas, and 500 oranges.


A8 75% in algebra, 90% in calculus, and 84% in physics.


Section 2.2
A Problems
Al (a)

[i n
[ H
H
[
[i n

the rank is 2

(b)

the rank is 3.

0
0

(c)

the rank is 3.

0
0

(d)

the rank is 3.

(e)

(f)

(g)

(h)

(i)

[
[

3/2

1/2

17

23

; the rank is 2.

n
-H

the rank is 3.

the rank is 3.

-1/2
; the rank is 3.

-56
; the rank is 4.

-6
-2

A2 (a) There is one parameter. The general solution is x

1 ,

ER

(b) There are two parameters. The general solution is x

(c) There are two parameters. The general solution is x

0
+ t

-2
1 ,

-2

1
0
0

+ t

0
1 ,
0

s,tEJR.

s,tEJR.


-2

1 -1

(d) There are two parameters. The general solution is 1

+t

s, t

ER

0
0
0

-4
0

(e) There are two parameters. The general solution is 1

+ t

5,

s, t E IR

0
0

-1

(f) There is one parameter. The general solution is 1

t E JR.

0
0

A3 (a)

[1 -i [ 1];
[ 1 =;] [ =];
[H
-1
1 -1 1
4

-3

solution is 1
(b)

the rank is 3; there are zero parameters. The only

0.

the rank is 2; there is one parameter. The

-7

general solution is 1

(c)

t ER.

-7

-3

-3

-5

-2

-4

-3

-7

meters. The general solution is 1

; the rank is 2; there are two para-

+t

s, t

ER

0
0
(d)

5
4

11 1 1
-11

-1

-3

-2

-2

; the rank is 3; there are two pa-

rameters. The general solution is 1

-2

+t

s, t

ER

A4 (a)

[ - I ]- [ I l

Consistent with solution 1

[ l


(b)

2-3 21 156 ] [ 1 10 1274//7 ] .


7
2[ 4b7/71 [-1'1
7
[ 2 5 =; 119 I [ I -i ]
[] fl
16
36

-11
[ -2 -3 5 -1 l - [ ! - l
7
Hl[ i 9 -1 191 ]- [ 0 0 -0 1 ]
[ 1324 -1-6-3 014 -21 i [ 1 00 -510 00 -7 i .
7-7 5
10 -11 '
2
0
01 22 -2-3 0 4 21 10 10 00 00 -10 -40
22 54 --5 33 10 53 00 00 10 0 -3/23/2 -12
7
-40 01
-12 -3/2
3/2 '
1
0
01 00 -31 .
-13 04
[: 1 2 I [ 0 1 2i
[n
r-n
1 0 5 -2 i
- l - [ 0 1 0 1
2 -5 4 0 0 0 0
nl fH
fH

[ 12

(c)

+t

Consistent

with

solution

Consistent

with

solution

ER

-8

X=

(d)

IE R

Consisrent with solution

-8

X=

(e)

(D

The system is inconsistent.

-5
-8

solution x

(g)

+t

ER

sistent with solution x

AS (a)

Consistent with

. The system is con-

+t

ER

The

solution

The solution to the homogeneous sysrem is

to

X=

IER

is

[A I b]

is

. The solution to

[A I b]

E R The solution to the homogeneous system is


(c)

-1 5 2 1 -1 ] [ 10 01 -59 2 -51
1 ]
[ - -5 -9 1
010 50 -2O'
-95 -2
0 O'1
01 00 -20 ]
0 -12
u 2 50 lH 0 1 1 -1
-11 21
0 1
02

is

-1

-4

+ s

_,

-1

-1

(d)

is x =

X=t

+t

-1

'

+t

-1
-4

3
-

-4

s,t

+t

homogeneous system is x = s

A Problems

(b)

(c)

2
-1
2 01 (-1) 01 12
5
+3

01

1 +
(-1 )

-1
-1

8
4

-1
1

is not in the span.

(-2) -10
I

(b)

2
2
-2
0 (-1) 2

3
A2 (a)

4
is not in the span.
6
7

3
+

-1
1 +

The solution to

[A I b_,]

JR. The solution to the

s,tER

The solution to

[A I b]

t E R The solution to the homogeneous system is

tER

Section 2.3

Al (a)


02 (-2) -1-1
1

-7
3

08


1
(c)

is not in the span.

A3 (a) x3 = 0

(b) x1 - 2x2 = 0, x3 = 0

(c) x1 + 3x2 - 2x3 = 0

(d) x1 + 3x2 + 5x3 = 0

(e) -x1 - x2 + x3 = 0, x2 + x4 = 0

(f) -4x1 + 5x2 + x3 + 4x4 = 0

A4 (a) It is a basis for the plane.


(b) It is not a basis for the plane.
(c) It is a basis for the hyperplane.

A5 (a) Linearly independent.

(b) Linearly dependent. -3t

0
-

2t

-t

0
0

3
+t

6
3

0
0
(c) Linearly dependent. 2t 0 -t 1 +t
= 0 ,
1
1
3
0
1
1
3
0
2

0
0
O'
0

t E IR

A6 (a) Linearly independent for all k ≠ -3.

(b) Linearly independent for all k ≠ -5/2.

A7 (a) It is a basis.

(b) Only two vectors, so it cannot span R³. Therefore, it is not a basis.

(c) It has four vectors in R³, so it is linearly dependent. Therefore, it is not a basis.

(d) It is linearly dependent, so it is not a basis.

Section 2.4
A Problems
Al To simplify writing, let a=

Total horizontal force: R1 + R2 = 0.


Total vertical force: Rv - Fv = 0.
Total moment about A: R1s + Fv(2s) = 0.
The horizontal and vertical equations at the joints A, B, C, D, and E are
αN2 + R2 = 0 and N1 + αN2 + Rv = 0;
N3 + αN4 + R1 = 0 and -N1 + αN4 = 0;

-N3 + αN6 = 0 and -αN2 + N5 + αN6 = 0;
-αN4 + N7 = 0 and -αN4 - N5 = 0;
-N7 - αN6 = 0 and -αN6 - Fv = 0

R1 +R2
-R2

-R2
R2 +R3

A2


E1

-R3

-R3
R3 +R4 +Rs

-Rs

Rs+R6
-R6

-Rs
-R6
R6 +R1 +Rs

E2

A3

x =

100

Chapter 2 Quiz

E Problems

El

E2

-1

-2

-5

-1/3

. Inconsistent.

1/3

E3 (a) The system is inconsistent for all (a, b, c) of the form (a, b, 1) or (a, -2, c)
and is consistent for all (a, b, c) where b ≠ -2 and c ≠ 1.

(b) The system has a unique solution if and only if b ≠ -2, c ≠ 1, and c ≠ -1.

E4 (a)

-2

11/4

-1

-11/2

1 +t

-5/4

(b) Requiring x · u = 0, x · v = 0, and x · w = 0 yields a homogeneous system of
three linear equations in five variables. Hence, the rank of the matrix is at most
three, and thus there are at least (# of variables) - rank = 5 - 3 = 2 parameters.
So, there are non-trivial solutions.


=t1

ES Consider 51
matrix gives

m [1] [1].
[ ][
+t2

+ti
3

Row reducing the corresponding coefficient

1
1
6

1
1 0

o
1

o
0

Thus, B is linearly independent and spans R³. Hence, it is a basis for R³.

E6 (a) False. The system may have infinitely many solutions.

(b) False. The system x1 = 1, 2x1 = 2 has more equations than variables but is
consistent.

(c) True. The system x1 = 0 has a unique solution.

(d) True. If there are more variables than equations and the system is consistent,
then there must be parameters, and hence the system cannot have a unique solution.
Of course, the system may be inconsistent.

CHAPTER 3

Section 3.1
A Problems
Al (a)

(b)

(c)

A2 (a)

(b)

[-

[
[

-6

1
6

-4

-3
-6

-12
-1
-11
-1

[I

-3

-1

-1

27

f l
n

12
-1

15

(c)

-3

15

l3 -4

-5

19

-1

(d) The product is not defined since the number of columns of the first matrix does
not equal the number of rows of the second matrix.

A3 (a)
(b)

A(B+C)=

[- ! l ] =AB+AC

r6
A(B+C)= l4

-16

14 =AB+AC

A(3B)= ; ; - =3(AB)
A(3B)= _2; =3(AB)

]
n


A4 (a)

(b)

AS (a)

A+
A+


Bis defined. ABis not defined.

12 2lO] .

Bis not defined. ABis defined.

AB= [ lO
13

31

(A+

Bl = [= ; ;]
= [- =]=Br

(ABl


=Ar+ Br.
Ar.

(b) Does not exist because the matrices are not of the correct size for this product
to be defined.

(c) Does not exist because the matrices are not of the correct size for this product
to be defined.
(d) Does not exist because the matrices are not of the correct size for this product
to be defined.
(e) Does not exist because the matrices are not of the correct size for this product
to be defined.

(f)
(g)

(h)

[l l]
5[ 2 l]
139
46

]
[ 21 12
62

13
3

10

7 7
15
A
AX=
=
Z
nl
{;].
n
[12 l:
17[
ll 1
10

(i)

A6 (a)

(b)

Dre= (CTDl =

AY

A7 (a)

(b)

(c)

(d)

11

[]
[]
[i l

8
4
-4

9
11


AS (a)

[-

-1

(b) [OJ
(c)

10

-6

-5

-4
12

3
-9

15

(d) [-3]

-l 3
.
_
A9 Both sides give
27

16]
o

AlO It is in the span since 3

U ]

+( -2)

[- ]

+(-1)

[ -] [ -H
=

All It is linearly independent.


A12 Using the second view of matrix-vector multiplication and the fact that the i-th
component of e_i is 1 and all other components are 0, we get

    Ae_i = 0a1 + ··· + 0a_{i-1} + 1a_i + 0a_{i+1} + ··· + 0a_n = a_i
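A12's identity Ae_i = a_i is easy to see concretely:

```python
def matvec(A, x):
    # Matrix-vector product over plain lists.
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

A = [[1, 2, 3],
     [4, 5, 6]]
e2 = [0, 1, 0]            # second standard basis vector of R^3
print(matvec(A, e2))      # picks out the second column of A
```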

Section 3.2
A Problems
A1 (a) Domain R², codomain R⁴
18

-19

(b) fA(2, -5) =

(c) fA(l, Q) =

-9
6
,fA(-3, 4) =
17
-23
38
-36

-2
3
l

,fA(O, 1 ) =

3
0
5
-6

-2x1 + 3x2

(d) fA(x)

3x1 + Ox2
=

x1 +5x2
4x1 - 6x2

(e) The standard matrix offA is

A2 (a) Domain JR.4, codomain JR.3

[-111

, fA(-3, 1, 4, 2)

fA (,) =

fA(t3) =

(b) fA(2, -2, 3, 1) =

(c) fA(ti) =

[H

-2
3

3
0

1
4

5
-6

[ ]
l-n Ul
-13

[-H

fA (,) =


(d) fA(x)


ll+::: ;::]
X1

2X - X4
3
(e) The standard matrix of fA is
+

A3 (a) f is not linear.

(b)

is linear.

(c) h is not linear.


(d) k is linear.
(e) e is not linear.
(f)

is not linear.

A4 (a) Domain JR.3, codomain IR.2, [L ]

(b) Domain IR.4, codomain IR.2, [K]

(c) Domain IR.4, codomain JR.4, [M]

[
[

-3
1

0
0

-1

-7

-3

-1

-1

-1

AS (a) The domain is IR.3 and codomain is IR.2 for both mappings.

(b) [S+ T]

3
1

[2S- 3T]

1
-8

-4

-6

9
-5

A6 (a) The domain of S is IR.4, and the codomain of Sis IR.2. The domain of Tis IR.2,

and the codomain of Tis IR.4 .

(b) [So T]

10

-19

_10

-3

, [To S]

-6

-9

5
8

16

-1 7

-16

-8

-4

0
-5

A7 (a) The domain and codomain of Lo Mare both IR.3 .

(b) The domain and codomain of Mo Lare both IR.2.


(c) Not defined
(d) Not defined
(e) Not defined
(f) The domain of N o Mis IR.3, and the codomain is IR.4.
AS [projv]
A9 [perpv]

AlO [projv]

i l- -7]
[ ]
1

16

17 -4

-4

[ : : =]
-2

-2


Section 3.3
A Problems

[ -] [ -] [ ] rn:i -:;]
[S] [ ]
[R9oS] [; ;:]
[
( ) [s R
]
[R] [
] [S] [
]

Al ca)

(b)

A2 (a)

cos e

5 sin e
4/5 -3/5
=
-3/5 -4/5

A3 (a)

-2

-2

00 0 0
00 0

AS (a)

-2

-2

(b)

(c)There is no shear

-3/5
4/5

1
1
(b) - 8

(b)

-sin e
5 cos e

f0 0 1

A4 (a) - -2

(d)

(b)

cc)

4
-4
7

-4

[0 0 0]

4/5
3/5

T such that To P

PoS.

(d)

[ ]
0

Section 3.4
A Problems
A1 (a) [3, 6] is not in the range of L.

(b) L(1, 1, -2) = [3, -5, 5]

A2 (a) A bas;s fm Range(L)is

{[6].[m
0 0 0
0 0
00 ' 0 ' 0 ' 00
0 0
0

A basis for Null(L)is

{[]}

(b) A basis for Range(M) is

-1

empty set since Null(M)

{0}.

[i 1
[ :1
-1

A3 The matrix of L is any multiple of

A4 The matrix of L is any multiple of

-2.
-3

. A basis for Null(M) is the


AS (a) The number of variables is 4. The rank of A is 2. The dimension of the solution
space is 2.
(b) The number of variables is 5. The rank of A is 3. The dimension of the solution
space is 2.
(c) The number of variables is 5. The rank of A is 2. The dimension of the solution
space is 3.
(d) The number of variables is 6. The rank of A is 3. The dimension of the solution
space is 3.

{[] m, rm

A6 (a) A basis for the rowspace is

[
J}.
{[: l m ,

A basis for the columnspace is

A basis for the nullspace is the empty set.

Then, rank(A) + nullity(A) = 3 + 0 = 3, the number of columns of A, as predicted
by the Rank Theorem.

{ j , _! }
{i}

(b) A basis for the rowspace is

{[l m, rm

A basis for the columnspace is

A basis for the nullspace is

Then, rank(A) + nullity(A) = 3 + 1 = 4, the number of columns of A, as predicted
by the Rank Theorem.

2
(c) A basis for the rowspace is

r}

0 '

' 0

. A basis for the columnspace is

0
-2

A basis for the nullspace is

Then, rank(A) + nullity(A) = 3 + 2 = 5, the number of columns of A, as predicted
by the Rank Theorem.

A7 (a) A basis for the nullspace is

vectors orthogonal to

[-])

-4

{[ l [ ]}

-3

(or any pair of linearly independent

{ [- ] } .
{[-l l-m

; a basis for the range is

(b) For the nullspace, a basis is

[!]{ };

for the range,

(c) A basis for the nullspace is the empty set; the range is R³, so take any basis
for R³.


AS (a) n = 5

1
0
2
0
3

(b)

-2
(e) x

1
0
0

+ t

-3
-1
0
-1
1

-2
1
(f)
0
0

-3
-1
0
-1
1

0
0
1
0
-1 ' 0
0
1
1

s, t E

(c) m = 4

(d)

{t ' , }
1
3
1
2

-1

-2

-3
-1
0

0
0

-1

R So, a spanning set is

It is also linearly independent, so it is a basis.

(g) The rank of A is 3 and a basis for the solution space has two vectors in it, so
the dimension of the solution space is 2. We have 3 + 2 = 5, the number of variables
in the system.

Section 3.5
A Problems
Al (a)

(b)

23 -2
2
-1
-1

0
1
0

-1
-1
1

(c) It is not invertible.


(d)

-1
1
0

H ]
10
2
-3
-3

(e)

(f)

A2 (a)

(b)

(c)

-2
0
0
0
0
0

0
]
0
0
0

Hl
ni
[=]

-1
0
1
0
0

-5/2
-1/2
1
0
-]
-1
1
0

-7/2
-1/2
1
-2
2
1
-2
1


A3 (a)

Answers to Practice Problems and Chapter Quizzes

A-1 =[-32 -]. s-1 [- n


[ {6]. [-l _;]
1/3]
(3At1 =[213 -2/3
(ATtl [ 2 J
= [ -YJ/2
l 12 YJ/2/2]
[ ]
[165 ]
[ - i
[ n
[- ]
= [ ]
[ l
[ ]
=
=
=


(AB)-1 =

(b) (AB) =
(c)
(d)

-1

-1

I
A4 (a) [Rrr;6] - = [R_rr/6]

(b)
(c)

(d)

0 1

[S-1] =

AS (a) [SJ=

=[WI)

(b) [R]

(c) [(RoSt1]=

[(SoR)-1)=

A6 Let y, v ∈ Rⁿ and t ∈ R. Then there exist x, u ∈ Rⁿ such that x = M(y) and
u = M(v). Then L(x) = y and L(u) = v. Since L is linear, L(tx + u) = tL(x) + L(u) =
ty + v. It follows that

    M(ty + v) = tx + u = tM(y) + M(v)

So, M is linear.

Section 3.6

[ 1 ol EA= [6-1 23 Tl
Ol
[
2
A
E
=
2
[
-1 3 ]
2
].EA
-:
=
[
[
-23 I

A Problems

(c)

Al (a)

-u

-5

0,
0 1

0
0 1
0

I
4

0
1

0 -1

-4

tMCY)

M(v)


=E [ 00 ].EA= H 1822 2]
01 A= 2
E = [ 0 ].E H 10 ]
00 001 001 000
00
00 000 00 00
0 00
10 01 00 00
00 00 0 0
021 00 001 000
000
00 001 00 000
000
001 001 00 000
000
1
1.


(d)

(e)

A2 (a)

-3

(b)

(c)

(d)

-3

(e)

(f)

-1.

A3 (a) It is elementary. The corresponding elementary row operation is R3 + (-4)R2.

(b) It is not elementary. Both row 2 and row 3 have been multiplied by 3.

(c) It is not elementary. We have multiplied row 1 by 3 and then added row 3 to row 2.

(d) It is elementary. The corresponding elementary row operation is R2 ↔ R3.

(e) It is not elementary. All three rows have been swapped.

0
0
l
[
0
[
1
0
Ol
E1 = 00 01 01 =] [00 01 1/20 = 00 0 ] = l 01 ]
-A 1 = [00 1-02 01
[
l
0
0
1
0
1
0
1
0
0
A= 0 1 0][0 0 rn 01 m 01 ]

(f) It is elementary. A corresponding elementary row operation is (1/3)R1.


3
I
I
l -4
'4
A4 (a)
E2
'3
l
,

-3

/2


E1 = [ 0I 00 n E2 = [ 00 1 J =[00I 001 ] = [ 0 ]
A-1 = [-7 0 ]
A= [0 00 m 00 rn 00 m 0 ]
[I
0
Ei = 01 HE2= [ 001 HE,=[ 00 H
0
0
E4 = [ 01 n E, = [ 01 I
A-1 = [ 0 !]
0
A=H 0 JU 001 rn 00 rn 00 001 ][100 00 10i
0 00 00 01 0 00 00 01 1 00 00
E i = 0 0 E2 = 0 0 1 0 = 0 0
0 0 0 0 0 1 0 0 0 0 01 0 0 0 00 0
= 00 0 00 'Es= 00 01 0 00 'E6 = 00 0 0 0
0 00 00 0 0 0 1 0 0 0
= 00 0 01 00
000
A-1 = 1 1 0 00
0
0
0 00 00 0 0 00 00 0 1 00 00 0 0 00 00
-1
A= 0 0 0 0 0 1 0 0 0 0 0 0
01 0 0 0 0 0 1 0 0 00 00 0 0 0 0 0 0
00 01 0 00 00 01 0 00 00 01 01 00
000100010001
O

(b)

-2

-3

-2

-2
1

'4

(c)

-2

-3 '3

-2

- 1 /4

- 1 /4

1 /2
4

-1/2

- 1 /4

-I

-4

(d)

,
O

'

O'

-4

-1

1 /2

O'

-1

-1

-2

- /2

- 1 /2

1 /2

-2

-2



Section 3.7
A Problems

-:]
-1
- ]
-1201/210001
114
01 0000504 -7/2
53 -13/2
0 001 00 - 0 0
10 10 000001 -32-2 1
30-4/3-1 17/91 01 00 00-30-37
0-20-11 2220
--3/221 3/210000
1
0
0
0
4
0
-1 -221 0 000
A2(a)LU=[= ! rn _;];11=[=H12=Ul
0
H
LU= rn - i 11 = m 1, =r=i
LU=[= ! rn H 11= [] 1, {]
1 01 0000-10 -12 -33 01 -11 --53
0
LU= -3-1 2001 0 00 00-1201 -31 , 12= -20
(d)

(e)

(f)

(b)

(c )

O;xi=

(d)

Chapter 3 Quiz
E Problems
El (a)

1 =3179]
[-14-1 10


(b) Not defined


( c)

1- =1
-8

-42

[]
[-16 ]
- 1

E2 (a) fA(a) =
(b)

.1ACv) =

-11
0

17

E3 (a) [R] = [R,13] =

(c) [Ro M] =

,-

2+ Y3
2 \(3 - 1
6

ol

2 -1

-Y312
1/ 2
0

(b) [M] = [re ftc 1 1 2) ] =

[-]
1

[-7 - i
-l - 2Y3

2- 2
2 '\f3+ 2
-2

- '\f3 + 2
4

-2
E4 The solution space of Ax = 0 is x =

-1
5
-2
1
6
0
1 +t 0
of Ax= bis x = 0 + s
1
0
0
7
0
0

1
1
0
0

s, t E

-1
0
+t 0
1
0

s, t E

R The solution set

R In particular, the solution set is

5
6
obtained from the solution space of Ax = 0 by translating by the vector 0 .
0
7
E5 (a) a is not in the columnspace of B. v is in the columnspace of B.
(b) x =

(cJY =

l=il
m

E6 A basis for the rowspace of A is

{ !}
1

of A is

1
0
0
0
0
1 , -1 , 0
0
0
-1
2
3

A basis for the nullspace is

. A basis for the columnspace

-1
1
0
0

1
-3
0

-2


2/3
1/6

0
0

1
0
-1/3 0

1/2

1/3
-1/6

0
0

1/3

E8 The matrix is invertible only for p ≠ 1. The inverse is

[-

1 p

p
1 - 2p
-1
-1

-;;

-p
p .
1

E9 By definition, the range of L is a subset of R^m. We have L(0) = 0, so 0 ∈ Range(L).
If x, y ∈ Range(L), then there exist u, v ∈ R^n such that L(u) = x and L(v) = y.
Hence, L(u + v) = L(u) + L(v) = x + y, so x + y ∈ Range(L). Similarly, L(tu) =
tL(u) = tx, so tx ∈ Range(L). Thus, Range(L) is a subspace of R^m.

E10 Consider c1 L(v1) + · · · + ck L(vk) = 0. Since L is linear, we get
L(c1 v1 + · · · + ck vk) = 0. Thus, c1 v1 + · · · + ck vk ∈ Null(L) and so
c1 v1 + · · · + ck vk = 0. This implies that c1 = · · · = ck = 0 since {v1, ..., vk}
is linearly independent. Therefore, {L(v1), ..., L(vk)} is linearly independent.


Ell (a) E1

[ ], [ l [ 1. [
[ rn ! m ! -i [ ! +1
1 2

(b) A=

E12 (a)

E =

1/4

3 =

4 =

1 3/2
1
0 0

= /3

(b) There is no matrix K.


(c) The range cannot be spanned by the given vector because this vector is not in R^3.

(d) The matrix of L is any multiple of [-1, 2, -4/3]^T.

(e) This contradicts the Rank Theorem, so there can be no such mapping L.

(f) This contradicts Theorem 3.5.2, so there can be no such matrix.

CHAPTER 4
Section 4.1
A Problems
Al (a) -1 - 6x

4x2

6x3

(b) -3 + 6x - 6x2 - 3x3 - 12x4


(c) - 1 + 9x - 1 l x2 - 17 x3
(d) -3 + 2x + 6x2
(e) 7

(f) l3

2x - 5x2
x + llx2
3
3

(g) √2 - π + √2 x + (-√2 + π)x^2


A2 (a) 0 = 0(1 + x^2 + x^3) + 0(2 + x + x^3) + 0(-1 + x + 2x^2 + x^3)

(b) 2 + 4x + 3x^2 + 4x^3 is not in the span.

(c) -x + 2x^2 + x^3 = 2(1 + x^2 + x^3) + (-1)(2 + x + x^3) + 0(-1 + x + 2x^2 + x^3)

(d) -4 - x + 3x^2 = 1(1 + x^2 + x^3) + (-2)(2 + x + x^3) + 1(-1 + x + 2x^2 + x^3)

(e) -1 + 7x + 5x^2 + 4x^3 = (-3)(1 + x^2 + x^3) + 3(2 + x + x^3) + 4(-1 + x + 2x^2 + x^3)

(f) 2 + x + 5x^3 is not in the span.


A3 (a) The set is linearly independent.
(b) The set is linearly dependent. We have

0 = (-3t)(1 + x + x^2) + tx + t(x^2 + x^3) + t(3 + 2x + 2x^2 - x^3), t ∈ R

(c) The set is linearly independent.

(d) The set is linearly dependent. We have

0 = (-2t)(1 + x + x^3 + x^4) + t(2 + x - x^2 + x^3 + x^4) + t(x + x^2 + x^3 + x^4), t ∈ R

A4 Consider t1(1) + t2(-1 + x) + t3(1 - 2x + x^2) = a1 + a2x + a3x^2.
The corresponding augmented matrix is

[1 -1  1 | a1]
[0  1 -2 | a2]
[0  0  1 | a3]

Since there is a leading 1 in each row, the system is consistent for all polynomials
a1 + a2x + a3x^2. Thus, B spans P2. Moreover, since there is a leading 1 in each
column, there is a unique solution and so B is also linearly independent. Therefore,
it is a basis for P2.
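The back-substitution behind this answer can be sketched in code. Assuming the basis read off from the columns of the augmented matrix above is B = {1, -1 + x, 1 - 2x + x^2} (an inference from the matrix, not stated explicitly in the answer), the B-coordinates of any a1 + a2x + a3x^2 solve the upper-triangular system:

```python
from fractions import Fraction

# Columns of the augmented matrix, read as coefficient vectors (in 1, x, x^2)
# of the assumed basis polynomials: 1, -1 + x, 1 - 2x + x^2.
B = [(1, 0, 0), (-1, 1, 0), (1, -2, 1)]

def coords(a):
    """Solve the upper-triangular system B t = a by back-substitution."""
    a = [Fraction(x) for x in a]
    t3 = a[2]                  # row 3: t3 = a3
    t2 = a[1] + 2 * t3         # row 2: t2 - 2 t3 = a2
    t1 = a[0] + t2 - t3        # row 1: t1 - t2 + t3 = a1
    return [t1, t2, t3]

def recombine(t):
    """Rebuild the coefficient vector of t1*1 + t2*(-1 + x) + t3*(1 - 2x + x^2)."""
    return [sum(t[j] * B[j][i] for j in range(3)) for i in range(3)]

p = [5, -3, 2]                 # the polynomial 5 - 3x + 2x^2 (an arbitrary example)
assert recombine(coords(p)) == [Fraction(c) for c in p]
```

Because the matrix has a leading 1 in every row and column, `coords` succeeds for every input and the solution is unique, which is exactly the spanning and independence argument above.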

Section 4.2
A Problems
Al (a) It is a subspace.
(b) It is a subspace.
(c) It is a subspace.
(d) It is not a subspace.
(e) It is not a subspace.

(f) It is a subspace.
A2 (a) It is a subspace.
(b) It is not a subspace.
,-

(c) It is a subspace.
(d) It is a subspace.


A3 (a) It is a subspace.
(b) It is a subspace.
(c) It is a subspace.
(d) It is not a subspace.
(e) It is a subspace.

A4 (a) It is a subspace.
(b) It is not a subspace.
(c) It is a subspace.
(d) It is not a subspace.

A5 Let the set be {v1, ..., vk} and assume that vi is the zero vector. Then we have

0 = 0v1 + · · · + 0v(i-1) + 1vi + 0v(i+1) + · · · + 0vk

with a non-zero coefficient on vi. Hence, by definition, {v1, ..., vk} is linearly dependent.

Section 4.3
A Problems
Al (a) It is a basis.
(b) Since it only has two vectors in R^3, it cannot span R^3 and hence cannot be a basis.
(c) Since it has four vectors in R^3, it is linearly dependent and hence cannot be a basis.
(d) It is not a basis.
(e) It is a basis.

A2 Show that it is a linearly independent spanning set.


A3 (a) One possible basis is

(b) One possible basis is

A4 (a) One possible basis is


3.

{[-l [!]}
ml [ il [!]}

{[

(b) One possible basis is


dimension is 4.

Hence, the dimension is 2.

{[

[ -n, [ = ]}

=J

(c) 'l3 is a basis. Thus, the dimension is 4.

AS (a) The dimension is 3.


(b) The dimension is 3.
(c) The dimension is 4.

Hence, the dimension is 3.

.Hence, the dimension is

U ] [ ] [ ]}
-

. Hence, the


A6 Alternate correct answers are possible.


(a)

(b)

mrnl}
mrnJHJ}

A 7 Alternate correct answers are possible.

(a)

{ -! )
,

1
1
1
0
0 ' 0
0
0

0
-1

'

AS (a) One possible basis is


(b) One possible basis is

(c) One possible basis is

(d) One possible basis is

{x, 1 - x2}. Hence, the dimension is 2.

{[ ], [ ], rn ]}

{[ :l [ ]}

is 3.

{[ J.rn ][ ] }

Section 4.4
A Problems

) [xJ.

(c) [x].=

(d) [x]3 =
(e) [x] =

[_;J.

[y]3 =

[;]

UJ. Ul
nl {]
[y].=

[yJ.

[n

[y]3 =

[-]

Hl {]
l

. Hence the dimension is 2.


,

{x - 2, x2 - 4}. Hence, the dimension is 2.

(e) One possible basis is

Al (a) [x]3 =

.Hence, the dimension is 3.

. Hence, the dimension


A2 (a) Show that it is linearly independent and spans the plane.

(b)

m m
and

are not in the plane. We have

m.

[ ;J
_

A3 (a) Show that it is linearly independent and spans P2.


2

(b) [p(x)]"

(c) [2- 4x

=[: l

[q(x)]" =

[H

!Ox']=

[4- 2x

[H

['(x)].=

We have

7x'J. + [-2 - 2x

Hl

= [2- 4x

3x'J.=

m +!l []

l0x2]13 = [(4 - 2)

(-2 - 2)x

(7

3)x2]!B

A4 (a) i. A is in the span of B.

ii. B is linearly independent, so it forms a basis for Span B.


iii. [A]!B

[-i j;l
3/4

(b) i. A is not in the span of B.

ii. B is linearly independent, so it forms a basis for Span B.
A5 (a) The change of coordinates matrix Q from B-coordinates to S-coordinates is
AS (a) The change of coordinates matrix Q from '13-coordinates to S-coordinates is

[! =]
l

The change of coordinates matrix P from S-coordinates to B-coordinates is

P = [  3/11  0  2/11]
    [-15/11  1  1/11]
    [ -1/11  0  3/11]

(b) The change of coordinates matrix Q from '13-coordinates to S-coordinates is

[-

.The change of coordinates matrixP from S-coordinates to '13-

-2

2/9 -1/9
coordinates isP= 7/9
1/9
4/9
7/9

1/9
-1/9 .
2/9

(c) The change of coordinates matrix Q from '13-coordinates to S-coordinates is

[- - l
-1 -1

The change of coordinates matrix P from S-coordinates to '13-

1/3
2/9
coordinates isP= 0 -1/3
1/3 -1/9

-8/9
1/3 .
4/9


Section 4.5
A Problems
Al Show that each mapping preserves addition and scalar multiplication.
A2 (a) det is not linear. (b) L is linear.
(c) T is not linear. (d) M is linear.


A3 (a) y is in the range of L. We have L
(b)

([\]

y.

is in the range of L. We have L(l + 2x + x2)

y.

(c) y is not in the range of L.


(d) y is not in the range of L.
A4 Alternate correct answers are possible.
(a) A basis for Range(L) is
rank(L)+Null(L)

{[:J.lm
1

(b) A basis foe Range(L) is (1


rank(L) + Null(L)

{[-i]}

We have

{[-l]}

We have

dim JR3.
+

A basis foe Null(L) is

x, x\. A basis foe Null(L) is

dim JR3.

{[ -][ ][ ]}.
{[ ][ ][ ][ ]}

(c) A basis for Range(L) is {1}. A basis for Null(L) is


We have rank(L) + Null(L) = 1
(d) A basis for Range(L) is

3 = dim M(2, 2).

. A basis for Null(L) is the empty set. We have rank(L) + Null(L) = 4 + 0 = dim P3.

Section 4.6
A Problems
Al (a) [L]23 =

(b) [L]13 =

A2 (a) [L]13 =
(b) [L]13 =

[ -n
[
-1

-1

r- ]
[ ]

[L(x)]23

].
5

[]

[L(x)]23

[ l
-11

0 = dim P3



A3 (a) [L(v,)]

(b) [L(v,)J.

(c) [4v,)]

A4 (a) 23

(b)

(b) [LJ.

[L(v 2)],

[4v2ll

[L(v2)J.

{[-], []}.

H I
4 m
)

Ul. oUl ]
[10 00
[1]
=

(b) [L]B

-2

(c) 45,3,-5)
A7 (a) [L]B

(b) [L]B

(c) [L]B

(d) [L]B

[H
[]
[ll

[reflo,-2J]B

{[J. [].[:]}

(c) L(l, 2

A6 (a)

[!].
[H
[]

3
-3

[ ]
[ -]
[-1408 -11528]
[ ]

[L(vi)l

[4v1)]

[4vi)l

[- ]

[proj"' -ol

[H [! ]
[!] [ !]
l l [ l
=

[LJ.

[L],

[L]


[ ]
[ ]
[ _!]
[ - - ]
[ 1
[- =]
2

(e) [L], =

(0 [L]=

4
0

0
-2

AS (a) [L]1l =

2
0

-2

(b) [L],

1
-3

-1

(c) [DJ=

(d) [T]1l =

Section 4.7
A Problems
Al In each case, verify that the given mapping is linear, one-to-one, and onto.

(a) DefineL(a+bx+cx2 + dx3)

a
b
=

c
d

(b) Define L

([; ])

d
(c) DefineL(a +bx+cx2 +dx3)=

[; l
[ O]

(d) DefineL(a1(x - 2) + a2 (x2 - 2x)) = a i


o

a2

Chapter 4 Quiz
E Problems
El (a) The given set is a subset of M(4, 3) and is non-empty since it clearly contains
the zero matrix. Let A and B be any two vectors in the set. Then a11 + a12 + a13 = 0
and b11 + b12 + b13 = 0. Then the first row of A + B satisfies

(a11 + b11) + (a12 + b12) + (a13 + b13) = (a11 + a12 + a13) + (b11 + b12 + b13) = 0

so the subset is closed under addition. Similarly, for any t ∈ R, the first row of
tA satisfies

ta11 + ta12 + ta13 = t(a11 + a12 + a13) = 0

so the subset is also closed under scalar multiplication. Thus, it is a subspace
of M(4, 3) and hence a vector space.
(b) The given set is a subset of the vector space of all polynomials, and it clearly
contains the zero polynomial, so it is non-empty. Let p(x) and q(x) be in the
set. Then p(1) = 0, p(2) = 0, q(1) = 0, and q(2) = 0. Hence, p + q satisfies

(p + q)(1) = p(1) + q(1) = 0 and (p + q)(2) = p(2) + q(2) = 0

so the subset is closed under addition. Similarly, for any t ∈ R, tp satisfies

(tp)(1) = tp(1) = 0 and (tp)(2) = tp(2) = 0

so the subset is also closed under scalar multiplication. Thus, it is a subspace
and hence a vector space.

(c) The set is not a vector space since it is not closed under scalar multiplication.
For example, multiplying a matrix in the set by 1/2 can produce a matrix that is not in
the set since it contains rational entries.

(d) The given set is a subset of R^3 and is non-empty since it clearly contains 0.
Let x and y be in the set. Then x1 + x2 + x3 = 0 and y1 + y2 + y3 = 0. Then
x + y satisfies

(x1 + y1) + (x2 + y2) + (x3 + y3) = (x1 + x2 + x3) + (y1 + y2 + y3) = 0

so the subset is closed under addition. Similarly, for any t ∈ R, tx satisfies

tx1 + tx2 + tx3 = t(x1 + x2 + x3) = 0

so the subset is also closed under scalar multiplication. Thus, it is a subspace
of R^3 and hence a vector space.

E2 (a) A set of five vectors in M(2, 2) is linearly dependent, so the set cannot be a
basis.
(b) Consider

Row reducing the coefficient matrix of the corresponding system gives

1
2
-

2
2
4
-2

1
0
0
0

0
1
0
0

0
0
0

2
-1
0

Thus, the system has infinitely many solutions, so the set is linearly dependent,
and hence it is not a basis.
(c) A set of three vectors in M(2, 2) cannot span M(2, 2), so the set cannot be a
basis.

E3 (a) Consider

t1


0
1
3
0 10 31 11 00
13
02 -20 00
10 33 11 01 01 00 -20
l
1
0
13 l 02 -20 000 000 001 001
3.
10 1 33 02
-1
1
0
13
02 -3
11 33 02 01 0 00 -2-1
01 01 -3-1 - 00 00 01 01
3 2 000 0
+

t2

t3

t4

Row reducing the coefficient matrix of the corresponding system gives

Thus, B = {v1, v2, v3} is a linearly independent set. Moreover, v4 can be written
as a linear combination of v1, v2, and v3, so B also spans. Hence, it is a basis,
and so the dimension is 3.

(b) We need to find constants t1, t2, and t3 such that

t1

t2

t3

-5

-5

Thus, [1]

E4 (a) Let il1

[!]

[=:]

and il2

[H

Then {u1, u2} is a basis for the plane since it is a set of two linearly independent
vectors in the plane.

(b) Since u3 does not lie in the plane, the set B = {u1, u2, u3} is linearly
independent and hence a basis for R^3.


(c) We have L(v1) = v1, L(v2) = v2, and L(v3) = -v3, so

[L]_B = [1 0  0]
        [0 1  0]
        [0 0 -1]

(d) The change of coordinates matrix from B-coordinates to S-coordinates (standard
coordinates) is P. It follows that

[L]_S = P [L]_B P^(-1)

ES The change of coordinates matrix from S to 13 is

[ ]
0
1
0

[i -\]
[-: ] [
0

Hence,
[L]

p -1

-1
0

-2

E6 If t1 L(v1) + · · · + tk L(vk) = 0, then t1 v1 + · · · + tk vk ∈ Null(L). Thus,
t1 v1 + · · · + tk vk = 0, and hence t1 = · · · = tk = 0 since {v1, ..., vk} is
linearly independent. Thus, {L(v1), ..., L(vk)} is linearly independent.

E7 (a) False. R^n is an n-dimensional subspace of R^n.

(b) True. The dimension of P2 is 3, so a set of four polynomials in P2 must be
linearly dependent.
(c) False. The number of components in a coordinate vector is the number of
vectors in the basis. So, if B is a basis for a 4-dimensional subspace, the
B-coordinate vector would have only four components.
(d) True. Both ranks are equal to the dimension of the range of L.

(e) False. Consider a linear mapping L : P2 → P2. Then the range of L is a
subspace of P2, while for any basis B, the columnspace of [L]_B is a subspace
of R^3. Hence, Range(L) cannot equal the columnspace of [L]_B.
(f) False. The mapping L : R → R^2 given by L(x1) = (x1, 0) is one-to-one, but
dim R ≠ dim R^2.

CHAPTER 5
Section 5.1
A Problems
Al (a) 38 (b) -5 (c) 0
(d) 0 (e) 0 (f) 48
A2 (a) 3 (b) 0 (c) 196 (d) -136

A3 (a) 0 (b) 20 (c) 18
(d) -90 (e) 76 (f) 420

A4 (a) -26 (b) 98

A5 (a) -1 (b) 1 (c) -3

Section 5.2
A Problems
Al (a) det A = 30, so A is invertible.
(b) det A = 1, so A is invertible.
(c) det A = 8, so A is invertible.
(d) det A = 0, so A is not invertible.
(e) det A = -1120, so A is invertible.

A2 (a) 14

(b) -12

(c)-5

(d) 716

A3 (a) det A = 3p - 14, so A is invertible for all p ≠ 14/3.
(b) det A = -5p - 20, so A is invertible for all p ≠ -4.
(c) det A = 2p - 116, so A is invertible for all p ≠ 58.

A4 (a) det A = 13, det B = 14, det AB = 182 = (det A)(det B)

(b) det A = -2, det B = 56, det AB = -112 = (det A)(det B)

A5 (a) Since rA is the matrix where each of the n rows of A has been multiplied by
r, we can use Theorem 5.2.1 n times to get det(rA) = r^n det A.
(b) We have AA^(-1) = I, so

1 = det I = det(AA^(-1)) = (det A)(det A^(-1))

by Theorem 5.2.7. Since det A ≠ 0, we get det A^(-1) = 1/det A.

(c) By Theorem 5.2.7, we have 1 = det I = det A^3 = (det A)^3. Taking cube roots
of both sides gives det A = 1.
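The identity det(rA) = r^n det A can be checked numerically. A minimal sketch (the 3×3 matrix below is a hypothetical example, not one from the exercises):

```python
from fractions import Fraction
import itertools

def det(A):
    """Determinant via the Leibniz permutation formula (fine for small n)."""
    n = len(A)
    total = Fraction(0)
    for perm in itertools.permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign          # count inversions to get the sign
        prod = Fraction(1)
        for i in range(n):
            prod *= A[i][perm[i]]
        total += sign * prod
    return total

A = [[1, 2, 3], [0, 1, 4], [5, 6, 0]]     # arbitrary example matrix
r, n = Fraction(3), 3
rA = [[r * x for x in row] for row in A]
assert det(rA) == r**n * det(A)           # det(rA) = r^n det(A)
```

Exact `Fraction` arithmetic avoids any floating-point tolerance in the comparison.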

Section 5.3
A Problems
Al (a)
(c)

[
2

-]

-3

21

11

-1

-13

-7

(b)

[= ]

(d)

[=

-2

=:1

-8 -4



A2

(a) cofA= \3t 23t ;1]


-3 -2 -2t -6
[
-2t-17
(b) A(cofAl= -2tl7 ]So detA= -2t-17 and =
-2t-17
[
-1 3 t -3
-2tl7 5 2 -3t -2 - 2t , provided -2t-17
-2 -11 -6 l
]
51/19] (b) [ 7/5] (c) [-221/1111] (d) [-13/5
(a) [-4/19
2/5

-11/15
-8/5
0

A-

t- 0.

3
A

Section 5.4
A Problems
Al (a) 11
(c) -26
(d) |det [Au Av]| = 286 = |-26| |11|

A2 Area = |det [Au Av]| = |-8| = 8

A3 (a) 63 (b) 42 (c) 2646

A4 (a) 41 (b) 78 (c) 3198

A5 (a) 5 (b) 245

A6 The n-volume of the parallelotope induced by v1, ..., vn is |det [v1 · · · vn]|.
Since adding a multiple of one column to another does not change the determinant
(see Problem 5.2.D8), this determinant is unchanged when vn is replaced by vn + tv1.
This is the volume of the parallelotope induced by v1, ..., v(n-1), vn + tv1.
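The column-operation argument above can be checked directly; this sketch uses a hypothetical 3×3 matrix of edge vectors (columns), not one from the exercises:

```python
def det3(A):
    """Cofactor expansion of a 3x3 determinant along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = A
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def add_col(A, src, dst, t):
    """Return a copy of A with t * (column src) added to column dst."""
    B = [row[:] for row in A]
    for row in B:
        row[dst] += t * row[src]
    return B

V = [[1, 2, 0], [0, 1, 3], [4, 0, 1]]    # columns are the edge vectors
# Replacing v3 by v3 + 5 v1 leaves the volume |det| unchanged.
assert det3(add_col(V, 0, 2, 5)) == det3(V)
```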


Chapter 5 Quiz
E Problems

El

E2

-2

-2

-3

-1

-2
9
=2(-1)2+3 -3
3
1
0

3 =(-2)(3)(-1)2+3

-1

-8

-6

-1

20

-9
21

-17

12

-17

1 = 5(2)(4)(3)(1)

(b) (-1)4(7)

E6 (A-1) 1
3

E7 Xz

-12

180

120

-* .

224

49

de A

ES (a) det

(e) 7(7)

-1

21

(d) de A

-8

E4 The matrix is invertible for all k *

(c) 25(7)

-2

E3 0

ES (a) 3(7)

-1

-1

-2

[ i
-

-2

=33

(b) 1-241(33) =792

CHAPTER 6
Section 6.1
A Problems

A1

m [-: l
U]
and

are not eigen,ectors of A.

[ l

is an eigenvcctocwith eigenvalue ,!

is an eigen,ectoc with eigenvalue A

eigenvalue ;i =4.

-6; and

[:]

O;

is an eigenvectoc with


A2 (a) The eigenvalues are ..11

2 and ..12

The eigenspace of ..12 is Span

(b) The only eigenvalue is ..1

(c) The eigenvalues are ..11

2 and ..12

(e) The eigenvalues are ..11

(f) The eigenvalues are ..11

A3 (a) ..11

{[- ]}.

{[ ]}.

-3. The eigenspace of ..11 is Span

{[ ]}.

{[ ]}.

so it

3 has algebraic multiplicity 1. A basis for

so it has geometric multiplicity 1.

2 has algebraic multiplicity 2; a basis for its eigenspace is

has geometric multiplicity 1.


(c) ..11

{[!]}

-2. The eigenspace of ..11 is Span

2 has algebraic multiplicity 1. A basis for its eigenspace is

its eigenspace is

{[]}.

4. The eigenspace of ..11 is Span

{[- ]}.

has geometric multiplicity 1. ..12

(b) ..11

{[]}.

0 and ..12

The eigenspace of ..12 is Span

{[]}

{[]}

5 and ..12

The eigenspace of ..12 is Span

{[]}.

3. The eigenspace of ..11 is Span

-1 and ..12

The eigenspace of ..12 is Span

1. The eigenspace of ..1 is Span

The eigenspace of ..12 is Span

(d) The eigenvalues are ..11

{[]}

{[;]}

3. The eigenspace of ..11 is Span

2 has algebraic multiplicity 2; a basis for its eigenspace is

{[]}

. so it

{[ ]}.

so it

has geometric multiplicity 1.

(d) A,

2 has algebraic multiplicity I; a basis for its eigenspace is

it has geometric multiplicity 1. ..12


for its eigenspace is

{[]}

so

1 has algebraic multiplicity 1; a basis

so it has geometric multiplicity I. A3

algebraic multiplicity I; a basis for its eigenspace is


multiplicity 1.

{r-m,

{[; ]},

-2 has

so it has geometric

(e) ,! 1

has algebraic multiplicity 2; a basis for its eigenspace i

so it has geometric multiplicity 2. tl2

{[:]}

basis for its eigenspace is

( 0 ,! 1


6 has algebraic multiplicity

so it has geometric multiplicity

2 has algebraic multiplicity 2; a basis for its eigenspace is

so it has geometric multiplicity 2. tl2

{[;]},

basis for its eigenspace is

{[-il r1;l}

1.

{ril r1;l}

S has algebraic multiplicity

so it has geometric multiplicity

1.

Section 6.2
A Problems
Al (a)

rn

1
with eigenvalue -7. P-

(b) P does not diagonalize A .


(c)

rn

[-il [-l
and

1
p- AP

p-

1 -

11 -1 ]
[
2

[- - 1
0 0 10

(a) The eigenvalues are tl1

1 0.

and tl2

A basis for the eigenspace of tl1 is

1.

1, [ ]
[1 O]
and

1
p- A p -

p-I

A2 Alternate correct answers are possible.

is

is an eigenvector of A

is an eigenvector of A

_3 .

are both eigenvectors of A with eigenvalue -2, and

an eigenvector of A with eigenvalue

1.

and

is an eigenvector of A with eigenvalue

.
1ue
.
with eigenva

(d)

14, r- ]
[104 -l
[- n

is an eigenvector of A with eigenvalue

A basis for the eigenspace of tl2 is

[ l

. so the geometric multiplicity


so the geometric multiplicity

is 1. Therefore, by Corollary 6.2.3, A is diagonalizable with P

p- I AP

is

8, each with algebraic multiplicity

{[ ]}
{[]}.

[=: - -n

rn

r- ]

and


(b) The eigenvalues are '11

-6 and '12

A basis for the eigenspace of '11 is

n-;n.
{[ ]}
=

1, each with algebraic multiplicity 1.

1. A basis for the eigenspace of '12 is

so the geometric multiplicity is


. so the geometric multiplicity is

1. Therefore, by Corollary 6.2.3, A is diagonalizable with P


D

[- l

(c) The eigenvalues of A are


(d) The eigenvalues are '11

fili.

[-; ]

and

Hence, A is not diagonalizable overR

-1, '12

2, and '13

multiplicity 1. A basis for the eigenspace of ,!1 is

0, each with algebraic

{[-l]}
{[i]}
{[-]},

so the geometric

multiplicity is 1. A basis for the eigenspace of ,i, is

so the geometric

multiplicity is 1. A basis for the eigenspace of A3 is

so the geomet

ric multiplicity is 1. Therefore, by Corollary 6.2.3, A is diagonalizable with


P

[- -1
0

and D

[- l
0

(e) The eigenvalues are '11

1 with algebraic multiplicity 2 and '12

{[]}
{ [-l ] }

5 with alge

braic multiplicity 1. A basis fonhe eigenspace of ,!1 is

so the geometric

mu1tiplicity is 1. A basis for the eigenspace of ,!2 is

, so the geometric

multiplicity is 1. Therefore, by Corollary 6.2.3, A is not diagonalizable since


the geometric multiplicity of !l1 does not equal its algebraic multiplicity.
(f) The eigenvalues of A are 1, i, and

-i.

Hence, A is not diagonalizable

overR
(g) The eigenvalues are ;l1

2 with algebraic multiplicity 2 and !l2

algebraic multiplicity 1. A basis for the eigenspace of A1 is

-1 with

{[] []}
{1-i]}

geometric multiplicity is 2. A basis for the eigenspace of ,!2 is

so the

so the

geometric multiplicity is 1. Therefore, by Corollary 6.2.3, A is diagonalizable


with P

1 - i
0

and D

1 ]
0

-1


A3 Alternate correct answers are possible.


(a) The only eigenvalue is -11
eigenspace of -11 is

{[]}.

3 with algebraic multiplicity 2. A basis for the

so the geometric multiplicity is 1. Therefore, by

Corollary 6.2.3, A is not diagonalizable since the geometric multiplicity of A,1


does not equal its algebraic multiplicity.
(b) The eigenvalues are A,1

0 and A,2

A basis for the eigenspace of A,1 is

8, each with algebraic multiplicity 1.

{[-]}.
{[ ]}.
=

so the geometric multiplicity is

1. A basis for the eigenspace of -12 is

so the geometric multiplicity is

1. Therefore, by Corollary 6.2.3, A is diagonalizable with P


D

rn ].

(c) The eigenvalues are -11

3 and A2

1. A basis for the eigenspace of -11 is


is 1. A basis for the eigenspace of A,2 is

{[]}.
{[- ]}.

and

so the geometric multiplicity


so the geometric multiplicity

[ -l

(d) The eigenvalues are A,1

[- ]

-7, each with algebraic multiplicity

is 1. Therefore, by Corollary 6.2.3, A is diagonalizable with P


D

[ -]

{I-:]}
{I:]},

2 with algebraic multiplicity 2 and A,2

algebraic multiplicity 1. A basis for the eigenspace of A1 is

ometric multiplicity is 1. A basis for the eigenspace of 12 is

and

-2 with

so the ge-

so the geo

metric multiplicity is 1. Therefore, by Corollary 6.2.3, A is not diagonalizable


since the geometric multiplicity of A,1 does not equal its algebraic multiplicity.
(e) The eigenvalues are A,1

{[-l ri]}
{I:]},

-2 with algebraic multiplicity 2 and A,2

algebraic multiplicity 1. A basis for the eigenspace of A1 is

the geometric multiplicity is 2. A basis for the eigenspace of 12 is

1- - i

1- - ]

4 with
, so

so the

geometric multiplicity is 1. Therefore, by Corollary 6.2.3, A is diagonalizable


with P

and D


(t) The eigenvalues are ..11

with algebraic multiplicity

algebraic multipLicity I. A basis for the eigenspace of A, is

geometric multiplicity is

2.

[-0 1 1i

and D

(g) The eigenvalues of are


overR

2, 2

[0 0 1].
i and

62. 3. ,

1 with

{[-l []}
{[1]}

A basis foe the eigenspace of A2 is

geometric multiplicity is 1. Therefore, by Corollary


with P

and ..12

so the

so the

A is diagonalizable

i. Hence, A is not diagonalizable

Section 6.3
A Problems
Al (a) A is not a Markov matrix.

(b) B is a Markov matrix. The invariant state is

(c) C is not a Markov matrix.

(d) D is a Markov matrix. The invariant state is

A2 (a) In the long run, 25% of the population will be rural dwellers and 75% will
be urban dwellers.
(b) After five decades, approximately 33% of the population will be rural dwellers
and 67% will be urban dwellers.

A3 In the long run, 60% of the cars will be at the airport, 20% of the cars will be
at the train station, and 20% of the cars will be at the city centre.

A4 (a) The dominant eigenvalue is λ = 5.
(b) The dominant eigenvalue is λ = 6.

Section 6.4
A Problems

Al (a) ae^(-5t) v1 + be^(4t) v2, a, b ∈ R, where v1 and v2 are the corresponding
eigenvectors
(b) ae^(-0.5t) v1 + be^(0.3t) v2, a, b ∈ R
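Long-run percentages like these come from the invariant state of the Markov matrix. A minimal sketch of the idea (the 2×2 transition matrix here is hypothetical, not the one from the exercise): repeated multiplication drives any initial state toward the eigenvector for eigenvalue 1.

```python
# Hypothetical Markov matrix: each column sums to 1.
M = [[0.8, 0.3],
     [0.2, 0.7]]

def step(M, x):
    """One step of the chain: x -> Mx."""
    return [M[0][0] * x[0] + M[0][1] * x[1],
            M[1][0] * x[0] + M[1][1] * x[1]]

x = [1.0, 0.0]              # start with everyone in state 1
for _ in range(100):
    x = step(M, x)

# The invariant state solves Mx = x with x1 + x2 = 1; for this M it is (0.6, 0.4).
assert abs(x[0] - 0.6) < 1e-9 and abs(x[1] - 0.4) < 1e-9
```

The second eigenvalue of this M is 0.5, so the iteration converges geometrically; that is why a fixed long-run distribution exists at all.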


Chapter 6 Quiz
E Problems

El (a)

(b)

(c)

(d)

m
m
m
[ f]

is not an eigenvector of A.

is an eigenvector with eigenvalue 1.

is an eigenvector with eigenvalue 1.

is an eigenvector with eigenvalue -1.

E2 The matrix is diagonalizable. P=

[ i
-

-1

and D

[ - i
0

-2

E3 λ1 = 2 has algebraic and geometric multiplicity 2; λ2 = 4 has algebraic and
geometric multiplicity 1. Thus, A is diagonalizable.

E4 Since A is invertible, 0 is not an eigenvalue of A (see Problem 6.2.D8). Then, if
Ax = λx, we get x = λA^(-1)x, so A^(-1)x = (1/λ)x. Thus, x is an eigenvector of
A^(-1) with eigenvalue 1/λ.
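A quick numerical check of this fact, using a hypothetical 2×2 matrix (not one from the exercises) whose eigenvector (1, 1) has eigenvalue 3:

```python
from fractions import Fraction as F

A = [[F(2), F(1)], [F(1), F(2)]]                     # symmetric example matrix
detA = A[0][0] * A[1][1] - A[0][1] * A[1][0]         # = 3, so A is invertible
Ainv = [[ A[1][1] / detA, -A[0][1] / detA],          # 2x2 adjugate formula
        [-A[1][0] / detA,  A[0][0] / detA]]

def mul(M, x):
    return [M[0][0] * x[0] + M[0][1] * x[1],
            M[1][0] * x[0] + M[1][1] * x[1]]

x = [F(1), F(1)]
assert mul(A, x) == [3 * c for c in x]               # Ax = 3x
assert mul(Ainv, x) == [c / 3 for c in x]            # A^(-1)x = (1/3)x
```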

ES (a) One-dimensional
(b) Zero-dimensional
(c) Rank(A ) =

E6 The invariant state isx=

[j:].
1/2

E7 ae-0.11

r-J

beo4t

[]

CHAPTER 7
Section 7.1
A Problems

Al (a) The set is orthogonal. P =

(b) The set is not orthogonal.

[11-VS 21-VS ]
2/YS

l /YS


(c) The set is orthogonal.P=

1[ 85/9//333l
4
-5/-1/...../2./2
1/3/5/..22 ./2
-0

(d) The set is not orthogonal.


A2 (a) [w]23 =

(c) [jt]23 =

-5

A3 (a) [x)23 =

(c) [w ]23 =

1
l
3/
YlO
;
1/1/Yl
\!TiO
Yl
l
0
-10/
\!TiOI .
3/W -1;Y15 3/\!TiO
[22/2/4/333]
-5/3/22
6/7/1/....2 ./2./2
3/6/1/....2 ./2./2

(b) [.Y]s =

(d) [ZJ23 =

A4 (a) It is orthogonal.

(b) It is not orthogonal. The columns of the matrix are not orthogonal.
(c) It is not orthogonal. The columns are not unit vectors.
(d) It is not orthogonal. The third column is not orthogonal to the first or second
column.
(e) It is orthogonal.
AS (a)

fr 81

(b) p =

0. Hl
[-2 -2 -1i
82 =

[l -I ]
[-75+4.../2
-1+4
.../2.../2 -4-8-2.../2
1
5+4
+4
2..
.../2
-4 8-2 8+ ./2
[-2/1/../6../6 1;1;Y3Y3 1/..0 1./2
1/1../6 1;Y3 -1/.../2
.
:;1

Jfi
;
;

[
I

;
:
[ 1}
:::: ::: :J [

(c) [L] =

},

(d) [L]s=

-2

Y2

Y2

Y2

A6 Since 'B is orthonormal, the matrix P =

no

.s

11.../2

o{J J

-1;.../2

.""us

, w

is ortho-

e can pick anoth'


Section 7.2
A Problems

5/2
5
Al (a)
512
5

AZ (
)

2
3
(c)
5

2
/2
9
(b)
5
9/2

{ }

{[-m
mi rm
mrnrnl}
{[:J lil HJ}
{ -: , -: } { i = }
{ll ;-fi.:-l [OJ , [ 6 ]}
{[ ;;], ['5]. [-:!rs]}
(c )

( b)

-1
-1

'

AJ (a)

(c)

A4 ( a )

Cc)

(d)

1/

'

1;Yi
-1 Yi

i;Y3
1;Y3

-2;Y15
2;Y16
1;Y15
-1;Y16
o
' 3/YIS ' 2;Y16
1;Y3
i;Y15
-1;YIO

'

(b)

-2/3

1;Y3

5/Y45
1;Y3

-1;Y15
3/YIS
1;Y3 , -1;Y3 , 1;Y15
o
1;Y3
2;Y15
0
1;Y3
0

Cct)

A5 Let {v1, ..., vk} be an orthonormal basis for S and let {v(k+1), ..., vn} be an
orthonormal basis for S⊥. Then {v1, ..., vn} is an orthonormal basis for R^n. Thus,
any x in R^n can be written

x = (x · v1)v1 + · · · + (x · vn)vn

Then

perp_S(x) = x - proj_S(x) = x - [(x · v1)v1 + · · · + (x · vk)vk]
          = (x · v(k+1))v(k+1) + · · · + (x · vn)vn = proj_S⊥(x)
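This decomposition is easy to verify numerically. A sketch with a hypothetical orthonormal basis of R^3 (v1 and v2 spanning S, v3 spanning S⊥):

```python
import math

s = 1 / math.sqrt(2)
v1, v2, v3 = (s, s, 0.0), (s, -s, 0.0), (0.0, 0.0, 1.0)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def proj(x, basis):
    """Projection of x onto the span of an orthonormal basis."""
    out = [0.0, 0.0, 0.0]
    for v in basis:
        c = dot(x, v)                     # coordinate (x . v)
        out = [o + c * vi for o, vi in zip(out, v)]
    return out

x = (1.0, 2.0, 3.0)
p = proj(x, [v1, v2])                     # proj_S(x)
perp = [xi - pi for xi, pi in zip(x, p)]  # perp_S(x) = x - proj_S(x)
q = proj(x, [v3])                         # proj onto S-perp
assert all(abs(a - b) < 1e-12 for a, b in zip(perp, q))
```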


Section 7.3
A Problems
Al (a) y = 10.5 - 1.9t
(b) y = 3.4 + 0.8t

A2 (a) y = -t + t^2

A3 (a) x = [383/98; 32/49]
(b) x = [167/650; -401/325]

Section 7.4
A Problems
Al (a) (x - 2x^2, 1 + 3x) = -46
(b) (2 - x + 3x^2, 4 - 3x^2) = -84
(c) ||3 - 2x + x^2||
(d) ||9 + 9x + 9x^2|| = 9√59

A2 (a) It does not define an inner product since (x - x^2, x - x^2) = 0.
(b) It does not define an inner product since (-p, q) ≠ -(p, q).
(c) It does define an inner product.
(d) It does not define an inner product since (x, x) = -2.
A3 (a)

{ [ ].[ ].[ =]}.


{ [ ]. [ -n. [- ]}

A4 (a)=

{[i] nrnl}

[ ] [-21 ;;]
[- 13] [ -7/3/3 4/33]

pro
j8 _

projs

(b)[X] =

[;l

= 11


A5 We have

(v1 + · · · + vk, v1 + · · · + vk) = (v1, v1) + · · · + (v1, vk) + (v2, v1) + (v2, v2)
+ · · · + (v2, vk) + · · · + (vk, v1) + · · · + (vk, vk)

Since {v1, ..., vk} is orthogonal, we have (vi, vj) = 0 if i ≠ j. Hence,

||v1 + · · · + vk||^2 = (v1, v1) + · · · + (vk, vk) = ||v1||^2 + · · · + ||vk||^2

Chapter 7 Quiz

E Problems
El (a) Neither. The vectors are not of unit length, and the first and third vectors are
not orthogonal.
(b) Neither. The vectors are not orthogonal.
(c) The set is orthonormal.

E2 Let v1, v2, and v3 denote the vectors in B. We have

v3x = -

v1x=2,

Y3

Hence, [x]B =

2
9/Y3 .
6/Y3

E3 (a) We have

1 = det I = det(P^T P) = (det P^T)(det P) = (det P)^2

Thus, det P = ±1.

(b) We have P^T P = I and R^T R = I. Hence,

(PR)^T (PR) = R^T P^T P R = R^T R = I

Thus, PR is orthogonal.
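Both facts can be checked numerically; rotation matrices are a standard example of orthogonal matrices (the angles below are arbitrary):

```python
import math

def rotation(theta):
    """2x2 rotation matrix, a standard orthogonal matrix."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

P, R = rotation(0.7), rotation(1.9)
I = matmul(transpose(P), P)                    # P^T P = I
assert all(abs(I[i][j] - (i == j)) < 1e-12 for i in range(2) for j in range(2))
assert abs(abs(det2(P)) - 1) < 1e-12           # det P = +/- 1
PR = matmul(P, R)                              # product of orthogonal matrices
J = matmul(transpose(PR), PR)                  # is again orthogonal
assert all(abs(J[i][j] - (i == j)) < 1e-12 for i in range(2) for j in range(2))
```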

E4 (a) Denote the vectors in the spanning set for S by z1, z2, and z3. Let W1 = Zt.
Then

-1

-1

{ [i ,
1;V2

Normalizing {w1, w2, w3} gives us the orthonormal basis

1I

1;V2
0

1;V2

-1/2
1/2
1/2
,-1/2

}
.



(b) The closest point in S to x is proj_S x. We find that proj_S x = x.
Hence, x is already in S.
E5 (a) Let A be a non-zero 2 x 2 matrix with det A = 0. Then

(A, A) = det(AA) = (det A)(det A) = 0

even though A is not the zero matrix. Hence, it is not an inner product.

(b) We verify that ( , ) satisfies the three properties of the inner product:

(A, A) = a11^2 + 2a12^2 + 2a21^2 + a22^2 >= 0

and equals zero if and only if A = O(2,2);

(A, B) = a11 b11 + 2a12 b12 + 2a21 b21 + a22 b22
       = b11 a11 + 2b12 a12 + 2b21 a21 + b22 a22 = (B, A)

(A, sB + tC) = a11(s b11 + t c11) + 2a12(s b12 + t c12)
             + 2a21(s b21 + t c21) + a22(s b22 + t c22)
             = s(a11 b11 + 2a12 b12 + 2a21 b21 + a22 b22)
             + t(a11 c11 + 2a12 c12 + 2a21 c21 + a22 c22)
             = s(A, B) + t(A, C)

Thus, it is an inner product.
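The three properties can also be spot-checked on random matrices; this sketch just evaluates the weighted formula from (b) (the random sampling is illustrative, not a proof):

```python
import random

def ip(A, B):
    """The weighted inner product from E5(b):
    <A, B> = a11 b11 + 2 a12 b12 + 2 a21 b21 + a22 b22."""
    return (A[0][0] * B[0][0] + 2 * A[0][1] * B[0][1]
            + 2 * A[1][0] * B[1][0] + A[1][1] * B[1][1])

def rand_mat(rng):
    return [[rng.uniform(-5, 5) for _ in range(2)] for _ in range(2)]

rng = random.Random(0)
A, B, C = rand_mat(rng), rand_mat(rng), rand_mat(rng)
s, t = 1.5, -2.0
assert ip(A, B) == ip(B, A)          # symmetry
assert ip(A, A) >= 0                 # positivity
lhs = ip(A, [[s * B[i][j] + t * C[i][j] for j in range(2)] for i in range(2)])
assert abs(lhs - (s * ip(A, B) + t * ip(A, C))) < 1e-9   # bilinearity
```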

CHAPTER 8
Section 8.1
A Problems
Al (a) A is symmetric. (b) B is symmetric.
(c) C is not symmetric since c12 ≠ c21. (d) D is symmetric.

A2 Alternate correct answers are possible.


(a) p =
(b) P =

l /Y2l
[30 -1J
[-1/1/.../2.../2 1;Y2J'
1/YTO 3/YTOl
[-4o 6J
[-3/YTO
l /YTOJ'
D=

D=

[0 o ol0
0 0
0
[o0 o
[
0
0
j]
]
] [09 o9 ol0
[0
0 0 -9

2
1 ;-.../3 -1 ;-../2 -1;%
(c) P = 1/.../3
1/-../2 -1/../6, D =
1/.../3
2/../6
(d) p =

(e) P =

2/3
-2/3
1/3

2;3 1/3
1/3 2/3 , D =
-2/3 2/3

4/../4s -2/3
2/3 , D =
s;V4s
2/Vs -2/../4s 1/3

1/Vs

-1

-1


Section 8.2
A Problems

==
=
1

-
+

x7 + 6x1 x2 x
Q(x1, x2, x3)
X T - 2x + 6x2x3 x
Q(x1, X2, X3)
-2xi + 2X1X2
2x1X3 +

Al (a) Q(x1, x2)


(b)


A - [-3/2 1 ] Q(x) _
-_ [-1/.../2
1/.../2 ! j1i]; Q(x)
A = [_2 -22] ; Q(x) = P = [1/21-YsYs -l2//YsYs]; Q(x) .
( ) A [- ] Q(:t) - 10 - [21//YsYs -l2//YsYSJ.J ' Q(:t)
l
A=[-3 3 -31;Q(x)=2yT+3y-6y,P= 11/0 1/../3
j -j
;
l
2
/.../6
Q(x)[-4 1 -11../3 -21;Y6 1;0.../2] ,
A= 10 -1 4ol i 1-1/../3
1/../3 1;Y6
1/.../2
;Y6
)

(c)

A2 (a)

3/2

I 2
5 2
- 2Y1 - 2Y2'

'

2x2X3

is indefi-

nite.

(b)

. .
1s pos1t1ve

2
2
y1 + 6y2,

definite.

,,1,

2
2
y1 - 5y2, p -

,,1,

(d)

is indefinite.

(e)

Q(x

-5

Q(1)

=-

3 y - 6y

4y

is negative definite.

A3 (a) Positive definite.

(b) Positive definite.

(c) Indefinite.

(d) Negative definite.

(e) Positive definite.

(f) Indefinite.

Section 8.3
A Problems

Al

A2

.
. .
ismdefimte.


A4

A3

Y2

AS (a) The graph of xT Ax =

xT Ax =

is a set of two parallel lines. The graph of

-1 is the empty set.

(b) The graph of xT Ax = 1 is a hyperbola. The graph of xT Ax =

-1

is a hyper

bola.
(c) The graph of xT Ax= 1 is a hyperboloid of two sheets. The graph of xT Ax=
-1 is a hyperboloid of one sheet.

(d) The graph of

XT Ax=

1 is a hyperbolic cylinder. The graph of xT Ax = -1 is

a hyperbolic cylinder.
(e) The graph of

xT Ax =

1 is a hyperboloid of one sheet. The graph of

xT Ax= -1 is a hyperboloid

of two sheets.

Chapter 8 Quiz
E Problems

El P

1/../3 1;.../2
0
1/../3
l /v6 -1/../3 1/.../2

2/v6

[; ]. Q(x)

E2 (a) A =

definite.

(b)

l /v6
-

A=

Q(x) =

7yi + 3y,

-3

Q(x) =

Q(x) is indefinite. Q(x) =


cone.

= 0

-3

and P

1 is an ellipse, and

1- = -i.
-3

16 l
o

and D

o
0 .
4

[; ;.Ji2] Q(x)
.

is positive

Q(x) = 0 is the origin.

-5y+5y-4y, and P=

...,12 1

-1
1/.../2

1 is a hyperboloid of two sheets, and

1;v6
Q(x)

;]

1/../3
0 is a


E3

Y2

E4 Since A is positive definite, we have (x, x) = x^T Ax >= 0, and (x, x) = 0 if and
only if x = 0. Since A is symmetric, we have

(x, y) = x^T Ay = (x^T Ay)^T = y^T A^T x = y^T Ax = (y, x)

For any x, y, z ∈ R^n and s, t ∈ R, we have

(x, sy + tz) = x^T A(sy + tz) = x^T A(sy) + x^T A(tz)
             = s x^T Ay + t x^T Az = s(x, y) + t(x, z)

Thus, (x, y) is an inner product on R^n.
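A numerical sanity check of this construction, using a hypothetical positive-definite symmetric matrix (eigenvalues 1 and 3), not one from the exercises:

```python
from fractions import Fraction as F

A = [[F(2), F(1)], [F(1), F(2)]]        # symmetric, positive definite

def ip(x, y):
    """<x, y> = x^T A y."""
    Ay = [A[0][0] * y[0] + A[0][1] * y[1],
          A[1][0] * y[0] + A[1][1] * y[1]]
    return x[0] * Ay[0] + x[1] * Ay[1]

x, y = [F(1), F(-2)], [F(3), F(1)]
assert ip(x, y) == ip(y, x)                        # symmetry, since A is symmetric
assert ip(x, x) > 0                                # positivity for x != 0
assert ip(x, [2 * c for c in y]) == 2 * ip(x, y)   # bilinearity in the second slot
```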

E5 Since A is a 4 x 4 symmetric matrix, there exists an orthogonal matrix P that
diagonalizes A. Since the only eigenvalue of A is 3, we must have P^T AP = 3I.
Multiplying on the left by P and on the right by P^T gives

A = P(3I)P^T = 3PP^T = 3I

CHAPTER 9

Section 9.1
A Problems
Al (a) 5 + 7i (b) -3 - 4i (c) -7 + 2i (d) -14 + 5i

A2 (a) 9 + 7i (b) -10 - 10i (c) 2 + 25i (d) -2

A3 (a) 3 + 5i (b) 2 - 7i (c) 3 (d) 4i

A4 (a) Re(z) = 3, Im(z) = 6
(c) Re(z) = 24/37, Im(z) = 4/37

A5 (b) 6/53 + (21/53)i

A4 (b) Re(z) = 17, Im(z) = -1
(d) Re(z) = 0, Im(z) = 1

A6 (a) z1z2 = 2√2(cos(11π/12) + i sin(11π/12))


(b) z1z2 = 4 - 7i, z1/z2 = …
(c) z1z2 = -17 + 9i, z1/z2 = … (This answer can be checked using Cartesian form.)
(d) … (This answer can be checked using Cartesian form.)

A7 (a) -4  (b) -54 - 54i  (c) -8 - 8√3i  (d) …

A8 (a) The roots are cos(…) + i sin(…), k = 0, 1, 2, 3.
(b) The roots are 2[cos(…) + i sin(…)], k = 0, 1, 2.
(c) The roots are 2^(1/3)[cos(…) + i sin(…)], k = 0, 1.
(d) The roots are …, where θ = arctan(…).
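The polar-form manipulations behind A6-A8 can be checked with Python's complex numbers. A sketch using made-up sample values (not the exercise data): the product multiplies moduli and adds arguments, and the n-th roots are r^(1/n)[cos((θ + 2kπ)/n) + i sin((θ + 2kπ)/n)]:

```python
import cmath

z1 = 1 + 1j                 # sample values, not the textbook's z1, z2
z2 = -1 + 1j * 3 ** 0.5

r1, t1 = cmath.polar(z1)
r2, t2 = cmath.polar(z2)
# product in polar form: moduli multiply, arguments add
prod = cmath.rect(r1 * r2, t1 + t2)
assert abs(prod - z1 * z2) < 1e-12

def nth_roots(z, n):
    # the n n-th roots of z: r^(1/n) e^{i(theta + 2 pi k)/n}, k = 0, ..., n-1
    r, theta = cmath.polar(z)
    return [cmath.rect(r ** (1 / n), (theta + 2 * cmath.pi * k) / n) for k in range(n)]

# e.g. the square roots of i are e^{i pi/4} and e^{i 5 pi/4}
for w in nth_roots(1j, 2):
    assert abs(w * w - 1j) < 1e-12
```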

Section 9.2
A Problems

A1 (a) The general solution is z = …
(b) The general solution is z = … + t(…), t ∈ C.

Section 9.3
A Problems
A1 (a) …  (b) …

A2 (a) [1]  (b) …  (c) A basis for Range(L) is {…} and a basis for Null(L) is {…}.  (d) …

A3 (a) A basis for Row(A) is {…}, a basis for Col(A) is {…}, and a basis for Null(A) is {…}.
(b) A basis for Row(B) is {…}, a basis for Col(B) is {…}, and a basis for Null(B) is the empty set.
(c) A basis for Row(C) is {…}, a basis for Col(C) is {…}, and a basis for Null(C) is {…}.
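Bases for the column space and null space of a complex matrix, as in A3, can be extracted numerically. A sketch using the SVD and an arbitrary rank-1 example (not the exercise's matrices):

```python
import numpy as np

A = np.array([[1, 2j, 1 + 2j],
              [1, 2j, 1 + 2j]])          # arbitrary 2 x 3 complex matrix of rank 1

U, s, Vh = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))               # numerical rank

col_basis = U[:, :r]                     # orthonormal basis for Col(A)
null_basis = Vh[r:].conj().T             # orthonormal basis for Null(A)

assert np.allclose(A @ null_basis, 0)    # null-space vectors are sent to 0
assert r + null_basis.shape[1] == A.shape[1]   # rank + nullity = number of columns
```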

Section 9.4
A Problems
A1 (a) D = [-2 0; 0 2], P = …
(b) D = …, P = …, P^(-1)AP = …
(c) D = …, P = … (the eigenvalues include 1 - 2i)
(d) D = …, P = …

Section 9.5
A Problems

A1 (a) (u, v) = 2 - 5i, (v, u) = 2 + 5i, ‖u‖ = …, ‖v‖ = √26
(b) (u, v) = -6i, (v, u) = 6i, ‖u‖ = √11, ‖v‖ = 2
(c) (u, v) = i, (v, u) = -i, ‖u‖ = √15, ‖v‖ = √5
(d) (u, v) = 4 + …, (v, u) = …

A2 (a) A is not unitary.  (b) B is unitary.  (c) C is unitary.  (d) D is unitary.

A3 (a) (u, v) = 0, ‖u‖ = √18, ‖v‖ = √33  (b) …

A4 (a) We have

    1 = det I = det(U*U) = det(U*) det U = (det U)* det U = |det U|^2

(b) The matrix U = […] is unitary and det U = i.
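The A4(a) identity |det U| = 1 for unitary U is easy to confirm numerically. A sketch with an assumed sample unitary matrix (not the one from the exercise):

```python
import numpy as np

# a sample unitary matrix: its columns are orthonormal in the complex dot product
U = np.array([[1, 1j],
              [1j, 1]]) / np.sqrt(2)

assert np.allclose(U.conj().T @ U, np.eye(2))   # U*U = I, so U is unitary

# 1 = det(U*U) = conj(det U) * det U = |det U|^2, hence |det U| = 1
d = np.linalg.det(U)
assert np.isclose(abs(d), 1.0)
```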


Section 9.6
A Problems
A1 (a) A is Hermitian. U = [(√2 + i)/√12  …; …], D = …
(b) B is not Hermitian.
(c) C is Hermitian. U = [(√3 - i)/√5  …; …], D = …
(d) F is Hermitian, with D = … and

    U = [ …   (3 + √5)/√a        (3 - √5)/√b
          …   2/√a               2/√b
          …   (1 - i)(1 + √5)/√a (1 - i)(1 - √5)/√b ],

where a ≈ 52.361 and b ≈ 7.639.
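The Section 9.6 pattern, unitarily diagonalizing a Hermitian matrix, can be sketched with numpy.linalg.eigh. The matrix below is an assumed sample, not one of the exercise's matrices:

```python
import numpy as np

H = np.array([[2, 1 - 1j],
              [1 + 1j, 3]])               # H equals its conjugate transpose: Hermitian

assert np.allclose(H, H.conj().T)
w, U = np.linalg.eigh(H)                  # real eigenvalues w, unitary U

assert np.allclose(np.imag(w), 0)                     # Hermitian => real eigenvalues
assert np.allclose(U.conj().T @ U, np.eye(2))         # U is unitary
assert np.allclose(U.conj().T @ H @ U, np.diag(w))    # U*HU = D
```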

Chapter 9 Quiz
E Problems

E1 z1z2 = 4√2 e^(-iπ/12), z1/z2 = (1/√2) e^(-7iπ/12)

E2 The square roots of i are e^(iπ/4) and e^(5iπ/4).

E3 (a) 3 + 5i  (b) 9 + 3i  (c) 11 + 4i  (d) 11 - 4i  (e) …  (f) …

E4 (a) P = […] and P^(-1)AP = D = diag(2 + 3i, 2 - 3i)  (b) …

E5 (a) …  (b) …

E6 (a) P = […] and C = […]
E7 (a) A is Hermitian if A = A*. Thus, we must have 3 + ki = 3 - i and 3 - ki = 3 + i, which is only true when k = -1. Thus, A is Hermitian if and only if k = -1.
(b) If k = -1, then A = [3  3 - i; 3 + i  0] and

    det(A - λI) = λ^2 - 3λ - 10 = (λ + 2)(λ - 5)

Thus, the eigenvalues are λ1 = -2 and λ2 = 5. For λ1 = -2,

    A - λ1 I = [5  3 - i; 3 + i  2]

so a basis for the eigenspace of λ1 is {[3 - i; -5]}. For λ2 = 5,

    A - λ2 I = [-2  3 - i; 3 + i  -5]

so a basis for the eigenspace of λ2 is {[3 - i; 2]}. We can easily verify that these eigenvectors are orthogonal.
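The eigenvalue computation in this last quiz answer can be checked numerically, assuming the matrix in question is A = [3, 3 - i; 3 + i, 0] (the k = -1 case):

```python
import numpy as np

A = np.array([[3, 3 - 1j],
              [3 + 1j, 0]])       # assumed reconstruction of the k = -1 matrix

assert np.allclose(A, A.conj().T)        # Hermitian
w = np.linalg.eigvalsh(A)                # eigenvalues of a Hermitian matrix, ascending
assert np.allclose(w, [-2, 5])           # matches (lambda + 2)(lambda - 5) = 0
```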


Index
A

standard, for R", 321

C",413

addition

subspaces, 97-99

eigenvalues, 419

complex numbers, 396

taking advantage of direction, 218

matrices, 419--423

general vector spaces, 198

vector space, 206--209

properties, 397

matrices, 117-120

best-fitting curve, finding, 342-346

parallelogram rule for, 2

block multiplication, 127-128

polynomials, 193-196

Bombelli, Rafael, 396

real scalars, 399


scalars, 117-120
vector space, 199-200
vector space over R, 197-201
vectors, 2,10, 15-16

repeated, 423
complex eigenvectors, 419

C vector space
basis, 412

complex exponential, 402--404

complex multiplication as matrix

complex inner products


Cauchy-Schwarz and triangle inequalities,

mapping, 415

adjugate method, 276

non-zero elements linearly dependent, 412

algebraic multiplicity, 302,303

one-dimensional complex vector

eigenvalues, 296--297

capacitor, capacitance of, 408

Approximation Theorem, 336,343,345

Cauchy-Schwarz and triangle inequalities,

change of coordinates equation,

parallelogram, 280-282
Argand diagram, 399,405
arguments, 400--401
augmented matrix, 69-72
axial forces, 105

complex multiplication as matrix


mappings, 415
complex numbers, 417

426--428
Cauchy-Schwarz Inequality, 33

determinants, 280-283

complex matrices and diagonalization,


417--418

125-126

angular momentum vectors, 390


arbitrary bases, 321

426--428
Hermitian property, 426
length, 426

space, 412
cancellation law and matrix multiplication,

angles, 29-31

area

roots of polynomial equations, 398


complex eigenvalues
of A, 418--419

adjugate matrix, 276

allocating resources, 107-109

quotient of complex numbers, 397-398

addition, 396
arguments, 400--401
arithmetic of, 396--397

222,240,329
change of coordinates matrix, 222-224

complex conjugates, 397-398

characteristic polynomial, 293

complex exponential, 402--404

classification and graphs, 380

complex plane, 399

en complex vector space

division, 397-398

complex conjugate, 413

electrical circuit equations, 408--410

inner products, 425

imaginary part, 396

back-substitution, 66

orthogonality, 429--431

modulus, 399--401

bases (basis)

standard inner product, 425

multiplication, 396

arbitrary, 321
C vector space, 412
complex vector spaces, 412

vector space, 412

multiplication as matrix mapping, 415


n-th roots, 404--405

codomain, 131
linear mappings, 134

polar form, 399--402

coordinate vector, 219

coefficient matrix, 69-70, 71

powers, 402--404

coordinates with respect to, 218-224

coefficients, 63

quotient, 397-398

definition, 22

cofactor expansion, 259-261

real part, 396

eigenvectors, 299-300,308

cofactor matrix, 274-275

scalars, 411

extending linearly independent subset

cofactor method, 276

two-dimensional real vector space, 412

to, 213-216
linearly dependent, 206--208

vector spaces over, 413--414

cofactors
3 × 3 matrix, 257

complex plane, 399

linearly independent, 206--207,211

determinants in terms of, 255-261

mutually orthogonal vectors, 321

irrelevant values, 259

basis, 412

nullspace, 229-230

matrix inverse, 274-278

complex numbers, 396--40 5

obtaining from arbitrary finite spanning
set, 209-211
ordered, 219, 236
preferred, 218
of n × n matrix, 258

complex vector spaces

dimension, 412

column matrix, 117

eigenvectors, 417--423

columnspace, 154-155,414

inner products, 425--431

basis of, 158-159

one-dimensional, 412

range, 229-230

complete elimination, 84

scalars and coordinates of, 417

standard basis properties, 321

complex conjugates

subspaces, 413--414


forming basis,308

symmetric matrices,327, 363-370,

complex vector spaces (continued)

linear transformation,289-290

373-375

systems with complex numbers,407-410


two-dimensional, 412

Diagonalization Theorem,30 I,307

linearly independent, 300

vector spaces over C, 4 I1-4 I5

differential equations and diagonalization,

mappings, 289-291

413-414
components of vectors, 2

matrices,291

315-317

vector spaces over complex numbers,

dilations and planes, 145

non-real, 301

dimensions,99

non-trivial solution, 292

compositions, I 39-141

finite dimensional vector space,215

non-zero vectors,289

computer calculations choice of pivot

trivial vector space, 212

orthogonality,367

vector spaces,211-213

principal axes,370

affecting accuracy,78
cone,386
conic sections, 384

three-dimensional space, 11

conjugate transpose,428

two-dimensional case, 11

consistent solutions,75-76

direction vector and parallel lines,5-6

consistent systems,75-76

distance,44

constants, orthogonality of, 356

division

constraint, I07

complex numbers,397-398

contractions and planes, 145

matrices,125

convergence, 357

polar forms,401

coordinate vector,2 I9

domain, 131
linear mappings,134

coordinates with respect to basis,218-224


orthonormal bases,323-329
corresponding homogeneous system solution

dominant eigenvalues,312

perpendicular part, 43

Cramer's Rule,276-278

projections, 40-44

cross-products,51-54

properties,348

formula for, 51-52

R2,28-31

length,52-54

R",31-34,323,354

properties,52

symmetric, 335

current,102

E
eigenpair,289

de Moivre's Formula, 402-404

eigenspaces,292-293

algebraic multiplicity, 296-297, 302,303

2 × 2 matrix, 255-256, 259

complex eigenvalue of A, 418-419

3 × 3 matrix, 256-258

deficient, 297

area, 280-283

dominant,312

cofactors,255-261

eigenspaces and,292-293

Cramer's Rule, 276-278

finding,291-297

elementary row operations, 264-271

geometric multiplicity,296-297

expansion along first row, 257-258

integers,297

invertibility, 269-270

mappings, 289-291

matrix inverse by cofactors, 274--278

matrices, 291

non-zero,270

n x n

products, 270-271

non-rational numbers,297

row operations and cofactor

non-real,301

elementary matrices, 175-179


reflection, I76
shears, 176
stretch, 176
elementary row operations, 70, 72-73
bad moves and shortcuts, 76-78
determinant,264--271
elimination,matrix representation of,71
ellipsoid,385
elliptic cylinder,387
equations
first-order difference, 311-312
free variables, 67-68
leading variables, 66-67
normal, 344
rotating motion, 391
second-order difference, 312
solution, 64
equivalent
directed line segments,7-8
systems of equations, 65
systems of linear equations, 70
error vector, 343
Euclidean spaces,1
cross-products,50-54
length and dot products, 28-37
minimum distance,44-47

matrices,420

projections, 40-44
vectors and lines in R3,9-11
vectors in R2 and R3, 1-11

power method of determining,312-3 I 3


of projections and reflections in R3,

swapping rows,265

290-291
real and imaginary parts,420

diagonal form, 300, 373

rotations in R2, 291

diagonal matrices,114

symmetric matrices, 363-367

diagonalization

electromotive force, 103

3 × 3 matrix, 422-423

complex conjugates, 419

volume, 283-284

steady-state solution,409-410
electricity, resistor circuits in, 102-104

eigenvalues, 307-310,417

design matrix,344

expansion,269

complex numbers, 408-410

orthogonality, 367

determinants

square matrices,269-270

symmetric matrices, 363-367


electrical circuit equations

equality and matrices,113-117

degenerate quadric surfaces,387

restricted definition of,289


rotations in R2, 291

defining, 354

cosines,orthogonality of, 356

degenerate cases, 384

290-291
real and imaginary parts, 420

dot products, 29-31,323, 326

space,152-153

deficient, 297

of projections and reflections in R3,

directed Iine segments,7-8

eigenvectors, 308-310

vectors in R",14--25
Euler's Formula,403-404
explicit isomorphism,248
exponential function,315

F
Factor Theorem,398

applications of,303

complex matrices,4 I7-4 I 8

basis of,299-300

False Expansion Theorem,275

differential equations,3 I 5-317

complex,419

feasible set linear programming problem,

3 matrix,422-423

eigenvectors,299-304

complex vector spaces, 417-423

quadratic form, 373-375

diagonalization,299-304

square matrices,300, 363

finding,291-297

factors,reciprocal,265

107-109
finite dimensional vector space
dimension, 215


finite spanning set, basis from arbitrary,
209-211
first-order difference equations, 311-312
first-order linear differential equations, 315
fixed-state vectors, 309
Markov matrix, 310
Fourier coefficients, 356-357

Hermitian property, 426

homogeneous linear equations, 86-87

kernel, 151

hyperbolic cylinder, 387

Kirchhoff's Laws, 103-104, 409, 410

hyperboloid of one sheet, 386


hyperboloid of two sheets, 386-387

hyperplanes

Law of Cosines, 29

Fourier Series, 354-359


free variables, 67-68

in R",24

leading variables, 66-67

scalar equation, 36-37

left inverse, 166


length

functions, 131
addition, 134
domain of, 131
linear combinations, 134
linear mappings, 134-136
mapping points to points, 131-132
matrix mapping, 131-134
scalar multiplication, 134
Fundamental Theorem of Algebra, 417

complex inner products, 426


identity mapping, 140
identity matrix, 126-127
ill conditioned matrices, 78
image parallelogram volume, 282
imaginary axis, 399
imaginary part, 396
inconsistent solutions, 75-76
indefinite quadratic forms, 376-377
inertia tensor, 390,391

G
Gauss-Jordan elimination, 84
Gaussian elimination with
back-substitution, 68
general linear mappings, 226-232
general reflections, 147-148

infinite-dimensional vector space, 212


infinitesimal rotation, 389
inner product spaces, 348-352
unit vector, 350-351
inner products, 323,348-352,354-355

general solutions, 68

C" complex vector space, 425

general systems of linear equations, 64

complex vector spaces, 425-431

general vector spaces, 198

correct order of vectors, 429

geometric multiplicity and eigenvalues,

defining, 354

296-297

Fourier Series, 355-359

geometrical transformations

orthogonality of constants, sines, and

contractions, 145

cosines, 356

dilations, 145

properties, 349-350,354

general reflections, 147-148


inverse transformation, 171
reflections in coordinate axes in R2 or
coordinates planes in R3, 146
rotation through angle θ about x3-axis in
R3, 145-148
rotations in plane, 143-145
shears, 146
stretches, 145
Google PageRank algorithm, 307
Gram-Schmidt Procedure, 337-339,
351-352,365,367-368,429
graphs
classification, 380
cone,386
conic sections, 384
degenerate cases, 384

R",348-349
instantaneous angular velocity vectors, 390
instantaneous axis of rotation at time, 390
instantaneous rate of rotation about the
axis, 390
integers, 396
eigenvalues, 297
integral, 354
intersecting planes, 387
invariant subspace, 420
invariant vectors, 309
inverse linear mappings, 170-173
inverse mappings, 165-173
procedure for finding, 167-168
inverse matrices, 165-173
invertibility
determinant, 269-270

degenerate quadric surfaces, 387

general linear mappings, 228

diagonalizing quadratic form, 380


ellipsoid, 385
elliptic cylinder, 387
hyperbolic cylinder, 387
hyperboloid of one sheet, 386
hyperboloid of two sheets, 386-387
intersecting planes, 387
nondegenerate cases, 384
quadratic forms, 379-387

square matrices, 269-270


invertible linear transformation onto
mappings, 246-247
Invertible Matrix Theorem, 168,172,292
invertible square matrix and reduced row
echelon form (RREF), 178
isomorphic, 247
isomorphisms, 247
explicit isomorphism, 248

quadric surfaces, 385-387

vector space, 245-249

H
Hermitian linear operators, 433
Hermitian matrix, 432-435


cross-products, 52-54
R3,28-31
R",31-34
level sets, 108
line of intersection of two planes, 55
linear combinations, 4

linear mappings, 139-141


of matrices, 118
polynomials, 194-196
linear constraint, 107
linear difference equations, 304
linear equations
coefficients, 63
definition, 63
homogeneous, 86-87
matrix multiplication, 124
linear explicit isomorphism, 248
linear independence, 18-24
problems, 95-96
linear mappings, 134-136,175-176,
227,247
2

2 matrix, 420

change of coordinates, 240-242


codomain, 134
compositions, 139-141
domain, 134
general, 226-232
identity mapping, 140
inverse, 170-173
linear combinations, 139-141
linear operator, 134
matrices, 235-242
matrix mapping, 136-138
nullity, 230-231
nullspace, 151,228
one-to-one, 247
range, 153-156,228-230
rank, 230-231
standard matrix, 137-140,241
vector spaces over complex numbers,
413-414
linear operators, 134,227
Hermitian, 433
matrix, 235-239
standard matrix, 237
linear programming, 107-109
feasible set, 107-109
simplex method, 109
linear sets, linearly dependent or linearly
independent, 20-24
linear systems, solutions of, 168-170
linear transformation

Jordan normal form, 423

eigenvectors, 289-290
standard matrix, 235-239,328


linear transformation (continued)

inverse cofactors, 274-278

linearity properties, 44

linear combinations of, 118

moment of inertia about an axis, 390, 391

linearly dependent, 20-24

linear mapping, 235-242

multiple vector spaces, 198


multiplication

modulus, 399-401

basis, 206-208

linearly dependent, 118-120

matrices, 118-120

linearly independent, 118-120

complex numbers, 396, 415

polynomials, 195-196

lower triangular, 181-182

matrices, 121-126. See also matrix

subspaces, 204

LU-decomposition, 181-187

linearly independent, 20-24

polar forms, 401


properties of matrices and scalars,

basis, 206-207, 211

non-real eigenvalues, 301

eigenvectors, 300

nullspace, 414

matrices, 118-120

operations on, 113-128

polynomials, 195-196

order of elements, 236

procedure for extending to basis, 213-216

orthogonal, 325-329

subspaces, 204

orthogonally diagonalizable, 366

vectors, 323

orthogonally similar, 366


partitioned, 127-128

lines
direction vector, 5
parametric equation, 6

in R3, 9

in R", 24
translated, 5

vector equation in R2, 5-8

lower triangular, 114, 181-182


LU-decomposition, 181-187
solving systems, 185-186

multiplication

multiplication, 121-126

powers of, 307-313


rank of, 85-86
real eigenvalues, 420

117-120
mutually orthogonal vectors, 321

N
n-dimensional parallelotope, 284
n-dimensional space, 14
n-th roots, 404-405
n-volume, 284

reduced row echelon form (RREF), 83-85

cofactor of, 258

reducing to upper-triangular form, 268

complex entries, 428

representation of elimination, 71

conjugate transpose, 428

representation of systems of linear

determinant of, 259

equations, 69-73

eigenvalues, 420

with respect to basis, 235

Hermitian, 432-435

row echelon form

similar, 300

eigenvalues, 289-291
eigenvectors, 289-291
inverse, 165-173
nullspace, 150-152
special subspaces for, 150-162
Markov matrix, 309-313
fixed-state vector, 310
Markov process, 307-313
properties, 310-311
matrices, 69
addition, 117
addition and multiplication by scalars
properties, 117-120
basis of columnspace, 158-159
basis of nullspace, 159-161
basis of rowspace, 157
calculating determinant, 264-265
cofactor expansion, 259-261
column, 117
columnspace, 154-155, 414
complex conjugate, 419-423
decompositions, 179
design, 344
determinant, 268, 280
diagonal, 114, 300
division, 125

swapping rows, 266-267

row equivalent, 70,71


row reduced to row echelon form (RREF),

73-74
row reduction, 70
rowspace, 414
scalar multiplication, 117
shortcuts and bad moves in elementary
row operations, 76-78
skew-symmetric, 379
special types of, 114
square, 114
standard, 137-140
swapping rows, 187
symmetric, 363-370
trace of, 202-203
transition, 307
transpose of, 120-121
unitarily diagonalizable, 435
unitarily similar, 435
unitary, 430-431
upper triangular, 181-182
matrix mappings, 131-134
affecting linear combination of vectors in

R", 133

n × n matrices, 296

characteristic polynomial, 293

(REF), 73-74

mappings

unitary, 430-431
natural numbers, 396
nearest point, finding, 47
negative definite quadratic forms,

376-377
negative semidefinite quadratic forms,

376-377
Newton's equation, 391
non-homogeneous system solution, 153
non-real eigenvalues, 301
non-real eigenvectors, 301
non-square matrices, 166
non-zero determinants, 270
non-zero vectors, 41
nondegenerate cases, 384
norm, 31-32
normal equations, 344
normal to planes, 54-55
normal vector, 34-35
normalizing vectors, 312
nullity, 159-161
linear mapping, 230-231
nullspace, 150-152,414
basis, 159-161, 229-230

complex multiplication as, 415

linear mappings, 228

linear mappings, 134, 136-138


matrix multiplication, 126, 175-176,

300,326

eigenvalues, 291

block multiplication, 127-128

objective function, 107

eigenvectors, 291

cancellation law, 125-126

Ohm's law, 102-103,104

elementary, 175-179

linear equations, 124

one-dimensional complex vector

elementary row operations, 70

product not commutative, 125

equality, 113-117

properties, 133-134

finding inverse procedure, 167-168

summation notation, 124

Hermitian, 432-435

matrix of the linear operator, 235-239

identity, 126-127

matrix-vector multiplication, 175-176

ill conditioned, 78

method of least squares, 342-346

inverse, 165-173

minimum distance, 44-47

spaces, 412
one-to-one, 246
explicit isomorphism, 248
invertible linear transformation, 246
onto, 246
explicit isomorphism, 248
ordered basis, 219,236


orthogonal, 33

points
calculating distance between,

Cn complex vector space, 429--431

28-29

planes, 36

position vector, 7

orthogonal basis, 337-339


subspace S, 335

polynomials
addition, 193-196

real inner products, 430

properties, 194

orthogonally diagonalizable, 366

linear combinations, 194-196

orthogonally similar, 366

linearly dependent, 195-196

orthonormal bases (basis), 321-325,

linearly independent, 195-196

change of coordinates, 325-329

scalar multiplication, 193-196

coordinates with respect to,

span, 194-196

323-325

vector spaces, 193-196

error vector, 343

position vector, 7

Fourier Series, 354-359

positive definite quadratic forms,

general vector spaces, 323

376-377
positive semidefinite quadratic forms,

inner product spaces, 348-352

376-377
power method of determining eigenvalues,

overdetermined systems, 345-346

312-313
powers and complex numbers,
402--404

R",323,334
subspace S, 335

powers of matrices, 307-313

technical advantage of using, 323-325

preferred basis, 218

orthonormal columns, 364

preserve addition, 134

orthonormal sets, arguments based on, 323

preserve scalar multiplication, 134

orthonormal vectors, 322,351

principal axes, 370

overdetermined systems, 345-346

Principal Axis Theorem, 369-370,374,380,


391-392,435

principal moments of inertia, 390,391

parabola, 384

probabilities, 309

parallel lines and direction vector, 5-6

problems
linear independence, 95-96

parallel planes, 35
parallelepiped, 55-56
volumes, 56,283-284

spanning, 91-95
products
determinants, 270-271

parallelogram

of inertia, 391

area, 53, 280-282


induced by, 280
rule for addition, 2
parallelotope and n-volume, 284

rotations, 291
length, 28-31
line through origin in, 5
reflections in coordinate axes, 146
rotation of axes, 329-330
standard basis for, 4

roots of, 293

323,336

projections onto subspace, 333-336

dot products, 28-31


eigenvectors and eigenvalues of

addition and scalar multiplication

orthogonal vectors, 321, 333, 351

method of least squares, 342-346

quotient, 397-398

multiplication, 401
polynomial equations, roots of, 398

orthogonal sets, 322

Gram-Schmidt Procedure, 337-339

symmetric matrices, 371-377


quadric surfaces, 385-387

division, 401

orthogonal matrices, 325-329


327,364

small deformations and, 388-389

polar forms, 399--402

orthogonal complement, 333


orthonormal columns and rows,

vector equation of line in, 5-8


vectors in, 1-11
R3

zero vector, 11
defining, 9
eigenvectors and eigenvalues of
projections and reflections, 290-29 l
lines in, 9-11
reflections in coordinate planes, 146
rotation through angle θ about x3-axis,
145-148
scalar triple product and volumes in,
55-56
vectors in, 1-11
zero vector, 11

range
basis, 229-230
linear mappings, 153-156, 228-230
rank
linear mapping, 230-231
matrices, 85-86
square matrices, 269-270
summary of, 162
Rank-Nullity Theorem, 231, 232
Rank Theorem, 150-162
rational numbers, 396
Rational Roots Theorem, 398

projection of vectors, 335

real axis, 399

projection property, 44

real canonical form

projections, 40--44
eigenvectors and eigenvalues in R3,

parametric equation, 6

290-291

particular solution, 153

3 × 3 matrix, 422-423

2 × 2 real matrix, 421-422

complex characteristic roots of,


418--420

permutation matrix, 187

onto subspaces, 333-336

perpendicular of projection, 43

properties of, 44

real eigenvalues, 420

scalar multiple, 41

real eigenvectors, 423

planar trusses, 105-106


planes

Pythagoras' Theorem, 44

real inner products and orthogonal

real matrix, 420

matrices, 430

contractions, 145
dilations, 145
finding nearest point, 47


quadratic forms

complex characteristic roots of, 418--420

applications of, 388-392

real numbers, 396,418

normal to, 54--55

diagonalizing, 373-375

real part, 396

normal vector, 34--35

graphs of, 379-387

real polynomials and complex roots, 419

orthogonal, 36

indefinite, 376-377

real scalars

parallel, 35

inertia tensor, 390

addition, 399

in R", 24

negative definite, 376-377

scalar multiplication, 399

line of intersection of two, 55

rotations in, 143-145

negative semidefinite, 376-377

real vector space, 412

scalar equation, 34-36

positive definite, 376-377

reciprocal factor, 265

stretches, 145

positive semidefinite, 376-377

recursive definition, 259


reduced row echelon form (RREF),


83-85,178
reflections

linearly dependent, 204

second-order difference equations,312


shears,146

linear independence, 18-24

elementary matrix,176

linearly independent,204

describing terms of vectors,218

similar,300

projections onto,333-336

eigenvectors and eigenvalues in R3,

simplex method, 109

R", 16-17

sines,orthogonality of,356

spanning sets, 18-24, 204

290-291
elementary matrix,176

skew-symmetric matrices,379

in plane with normal vector,147-148

small deformations,388-389

repeated complex eigenvalues,423

solid body and small deformations,388-389

repeated real eigenvectors,423

solids

resistance,102

analysis of deformation of,304

resistor circuits in electricity, 102-104

deformation of,145

resistors,103
resources,allocating,107-109

spans,204
summation notation and matrix
multiplication,124
surfaces in higher dimensions,24-25
symmetric matrices
classifying,377

solution,64

diagonalization,327,363-370,373-375

back-substitution,66

eigenvalues,363-367

right-handed system,9

solution set,64

eigenvectors,363-367

right inverse,166

solution space,150-152

orthogonally diagonalizable,367

rigid body,rotation motion of,390-392

corresponding homogeneous system,


152-153

R"
addition and scalar multiplication of
vectors in,15
dot product,3 l-34,323,354
inner products,348-349
length,31-34
line in plane in hyperplane in,24
orthonormal basis,323,334
standard basis for,321
subset of standard basis vectors, 322
subspace, l 6--17
vectors in,14-25
zero vector,16
roots of polynomial equations,398
rotations

of axes in R2, 329-330

eigenvectors and eigenvalues in R2, 291

equations for,391
in plane,143-145
through angle θ about x3-axis in R3,
145-148
row echelon form (REF), 73-74
consistency and uniqueness,75-76
number of leading entries in,85
row equivalent, 70,71
row reduction,70
rowspace,156--157, 414

quadratic forms,371-377
systems

systems,152-153

solution space,150-153

spanned,18

solving with LU-decomposition,185-186

spanning problems,91-95
spanning sets,18-24
definition,18

special subspaces for,150-162


systems of equations
equivalent,65

subspaces,204
vector spaces,209
spans
polynomials,194-196
subspaces,204

word problems,77-78
systems of linear difference equations,
311-312
systems of linear equations,63-68,175-176
applications of,102-109

special subspaces for mappings,150-162

augmented matrix,69-70,71

special subspaces for systems,150-162

coefficient matrix,69-70,71

Spectral Theorem for Hermitian Matrices,

complete elimination,84

435

complex coefficients, 407-410

square matrices,114

complex right-hand sides, 407-410

calculating inverse, 274-278

consistent solutions,75-76

determinant,269-271

consistent systems,75-76

diagonalization,300,363

elimination,64-66

facts about,168-170

elimination with back-substitution,83

invertible,269-270

equivalent,70

left inverse, 166

Gauss-Jordan elimination,84

lower triangular, 114

Gaussian elimination with

rank, 269-270

back-substitution,71-72

right inverse,166

general,64

upper triangular,114

general solution,68

standard basis for R2, 4

homogeneous,86-87

standard inner product,31,425

inconsistent solutions,75-76

standard matrix,137-140

linear programming,107-109

linear mapping, 241

scalar equation
hyperplanes,36-37
planes,34-36

linear transformation, 235-239,328


state vector,309

scalar multiplication

stretches

general vector spaces,198

elementary matrix,176

matrices,117

planes,145

real scalars,399
vector space,199-200
vector space over R, 197-201
vectors,2,3,10,15-16
scalar product,31

scalar triple product in R3, 55-56

planar trusses,105-106
resistor circuits in electricity,102-104

state,307

scalar form,6

polynomials,193-196

matrix representation of, 69-73

linear operator,237

subspace S

spanning problems,91-95
unique solutions,75-76
systems of linear homogeneous ordinary
differential equations,317
systems with complex numbers,407-410

orthogonal basis,335
orthonormal basis, 335
subspace S of R", orthonormal or orthogonal
basis for,337-339
subspaces,201-204

T
three-dimensional space and directed line
segments,11
trace of matrices,202-203

bases of,97-99

transition,probability of,309

complex vector spaces, 413-414

transition matrix,307,309

complex numbers, 411

definition,16

transposition of matrices,120-121

properties of matrix addition and

dimensions,99

Triangle Inequality,33

invariant, 420

trivial solution,21, 87

scalars,2

multiplication by,117-120


trivial subspace,16-17
trivial vector space,203
dimension,212
two-dimensional case and directed line
segments,11

linearly dependent,21,95-96

extending linearly independent subset to

linearly independent,21,323

basis procedure,213-216
general linear mappings,226-232

mutually orthogonal,321

infinite-dimensional,212

norm,31-32,350-351

inner product,348-352

normalizing,312,322

two-dimensional complex vector spaces,412

inner product spaces,348-352

orthogonal,33,321,333,351

two-dimensional real vector space and

isomorphisms of,245-249

orthonormal,322,351

matrix of linear mapping,235-242

projection of,335

complex numbers,412
U

multiple,198

projection onto subspaces, 333-336

over complex numbers, 413-414

in R2 and R3, 1-11

Unique Representation Theorem,206,219

polynomials,193-196

unique solutions in systems of linear

scalar multiplication, 199-200

equations,75-76

spanning set,209

unitarily diagonalizable matrices,435

subspaces, 201-204

unitarily similar matrices,435

trivial,203

unitary matrices,430-431

vector space over R, 197

upper triangular,114,181-182

vectors,197
zero vector, 198

vector spaces over C, 411-415

vector equation of line in R2, 5-8

vector-valued function,42

vector space over R, 197

vectors,1

addition,197-201

addition of,10,15-16

scalar multiplication,197-201

angle between, 29-31

zero vector,197

angular momentum,390

abstract concept of, 200-201

representation as arrow,2
reversing direction,3

scalars, 411

unit vector,33,41,321-322,350-351

vector spaces

components of,2
controlling size of,312

in R", 14-25
scalar multiplication,2,3, 10,15-16
span of a set,91-95
standard basis for R2, 4
zero vector,4
vertices,109
voltage,102
volumes
determinants,283-284
image parallelogram,282
parallelepiped,56, 283-284
in R3, 55-56
W
word problems and systems of equations,

addition,199-200

cross-products, 51-54

basis,206-209

distance between,350-351

complex,396-435

dot products,29-31,323

coordinates with respect to basis,218-224

fixed,309

defined using number systems as

formula for length,28

zero vector,4,197-198

77-78

instantaneous angular velocity,390

R2, 11

dimension,211-213

invariant,309

R3, 11

explicit isomorphism,249

length,323,350-351

R", 16-17

scalars,198

