
is called the matrix of cofactors from A. The transpose of this matrix is called the adjoint of A and is denoted by adj(A).

EXAMPLE 6 Adjoint of a Matrix


The cofactors of A are

so the matrix of cofactors is

and the adjoint of A is
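The construction in this example is mechanical enough to sketch in code. A minimal sketch for the 3 × 3 case follows; the matrix A below is an arbitrary stand-in, not the matrix of Example 6.

```python
# Sketch of the cofactor-matrix / adjoint construction for a 3x3 matrix.

def minor(A, i, j):
    """Submatrix obtained by deleting row i and column j of A."""
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def cofactor_matrix(A):
    """Entry (i, j) is the cofactor C_ij = (-1)^(i+j) * M_ij."""
    n = len(A)
    return [[(-1) ** (i + j) * det2(minor(A, i, j)) for j in range(n)]
            for i in range(n)]

def adjoint(A):
    """adj(A) is the transpose of the matrix of cofactors."""
    return [list(row) for row in zip(*cofactor_matrix(A))]

A = [[1, 2, 3],
     [0, 1, 4],
     [5, 6, 0]]
print(adjoint(A))  # -> [[-24, 18, 5], [20, -15, -4], [-5, 4, 1]]
```

This particular A happens to have det(A) = 1, so its adjoint is also its inverse, which makes it convenient for checking work by hand.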

We are now in a position to derive a formula for the inverse of an invertible matrix. We need to use an important fact that
will be proved in Section 2.3: The square matrix A is invertible if and only if det(A) is not zero.


THEOREM 2.1.2

Inverse of a Matrix Using Its Adjoint

If A is an invertible matrix, then

    A⁻¹ = (1/det(A)) adj(A)
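The formula A⁻¹ = (1/det(A)) adj(A) is easy to illustrate numerically. The sketch below (an illustration, not the book's derivation) computes the inverse of a 3 × 3 matrix this way; the example matrix is arbitrary.

```python
# Inverse via the adjoint formula  A^{-1} = (1/det(A)) adj(A),
# specialized to 3x3 matrices.

def det3(A):
    """Cofactor expansion along the first row of a 3x3 matrix."""
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

def inverse_via_adjoint(A):
    d = det3(A)
    if d == 0:
        raise ValueError("A is not invertible (det(A) = 0)")
    def cof(i, j):
        # cofactor C_ij: signed determinant of the 2x2 minor
        M = [[A[r][c] for c in range(3) if c != j]
             for r in range(3) if r != i]
        return (-1) ** (i + j) * (M[0][0] * M[1][1] - M[0][1] * M[1][0])
    # entry (i, j) of A^{-1} is C_ji / det(A): the transpose gives the adjoint
    return [[cof(j, i) / d for j in range(3)] for i in range(3)]

A = [[2, 1, 0], [1, 2, 1], [0, 1, 2]]   # det(A) = 4
print(inverse_via_adjoint(A))
# -> [[0.75, -0.5, 0.25], [-0.5, 1.0, -0.5], [0.25, -0.5, 0.75]]
```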

Proof We show first that

    A adj(A) = det(A) I
Historical Note (Gabriel Cramer) His widely read 1750 treatise helped popularize the technique. Overwork combined with a fall from a carriage led to his death at the age of 48. Cramer was apparently a good-natured
and pleasant person with broad interests. He wrote on philosophy of law and government and the history of mathematics.
He served in public office, participated in artillery and fortifications activities for the government, instructed workers on
techniques of cathedral repair, and undertook excavations of cathedral archives. Cramer received numerous honors for his activities.

Remark To solve a system of n equations in n unknowns by Cramer's rule, it is necessary to evaluate n + 1 determinants of n × n matrices. For systems with more than three equations, Gaussian elimination is far more efficient. However, Cramer's rule
does give a formula for the solution if the determinant of the coefficient matrix is nonzero.
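The rule translates directly into code: x_i = det(A_i)/det(A), where A_i is A with its ith column replaced by the constants b. A minimal 3 × 3 sketch (the system shown is an arbitrary example, not one of the exercises):

```python
# Cramer's rule for a 3x3 system Ax = b.

def det3(A):
    """Cofactor expansion along the first row of a 3x3 matrix."""
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

def cramer_3x3(A, b):
    """Solve Ax = b via x_i = det(A_i) / det(A)."""
    d = det3(A)
    if d == 0:
        raise ValueError("Cramer's rule does not apply: det(A) = 0")
    x = []
    for i in range(3):
        Ai = [row[:] for row in A]      # copy A ...
        for r in range(3):
            Ai[r][i] = b[r]             # ... and replace column i by b
        x.append(det3(Ai) / d)
    return x

print(cramer_3x3([[2, 1, 0], [1, 2, 1], [0, 1, 2]], [1, 2, 3]))
# -> [0.5, 0.0, 1.5]
```

Note that this evaluates four determinants for a 3 × 3 system, exactly the n + 1 count mentioned in the remark.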

Exercise Set 2.1



(a) Find all the minors of A.

(b) Find all the cofactors.



(a) and

(b) and

(c) and

(d) and
Evaluate the determinant of the matrix in Exercise 1 by a cofactor expansion along

(a) the first row

(b) the first column

(c) the second row

(d) the second column

(e) the third row

(f) the third column

For the matrix in Exercise 1, find

(a) adj(A)

(b) A⁻¹ using Theorem 2.1.2

In Exercises 5–10 evaluate det(A) by a cofactor expansion along a row or column of your choice.






In Exercises 11–14 find A⁻¹ using Theorem 2.1.2.






(a) Evaluate A⁻¹ using Theorem 2.1.2.

(b) Evaluate A⁻¹ using the method of Example 4 in Section 1.5.

(c) Which method involves less computation?

In Exercises 16–21 solve by Cramer's rule, where it applies.






Show that the matrix


is invertible for all values of θ; then find A⁻¹ using Theorem 2.1.2.

Use Cramer's rule to solve for y without solving for the unknowns x, z, and w.


Let Ax = b be the system in Exercise 23.


(a) Solve by Cramer's rule.

(b) Solve by Gauss–Jordan elimination.

(c) Which method involves fewer computations?

Prove that if det(A) = 1 and all the entries in A are integers, then all the entries in A⁻¹ are integers.

26. Let Ax = b be a system of n linear equations in n unknowns with integer coefficients and integer constants. Prove that if det(A) = 1, the solution x has integer entries.

Prove that if A is an invertible lower triangular matrix, then A⁻¹ is lower triangular.


Derive the last cofactor expansion listed in Formula 4.

Prove: The equation of the line through the distinct points (x₁, y₁) and (x₂, y₂) can be written as

    | x   y   1 |
    | x₁  y₁  1 | = 0
    | x₂  y₂  1 |
Prove: (x₁, y₁), (x₂, y₂), and (x₃, y₃) are collinear points if and only if

    | x₁  y₁  1 |
    | x₂  y₂  1 | = 0
    | x₃  y₃  1 |

If M is an “upper triangular” block matrix

    M = [ A  C ]
        [ 0  B ]

where A and B are square matrices, then det(M) = det(A) det(B).

(a) Use this result to evaluate det(M) for

(b) Verify your answer in part (a) by using a cofactor expansion to evaluate det(M).

32. Prove that if A is upper triangular, then the matrix that results when the ith row and jth column of A are deleted is itself upper triangular if i < j.

33. What is the maximum number of zeros that a matrix can have without having a zero determinant? Explain your reasoning.

Let A be a matrix of the form


How many different values can you obtain for det(A) by substituting numerical values (not
necessarily all the same) for the *'s? Explain your reasoning.

35. Indicate whether the statement is always true or sometimes false. Justify your answer by giving a logical argument or a counterexample.

(a) A adj(A) is a diagonal matrix for every square matrix A.

(b) In theory, Cramer's rule can be used to solve any system of linear equations, although
the amount of computation may be enormous.

(c) If A is invertible, then adj(A) must also be invertible.

(d) If A has a row of zeros, then so does adj(A).

Copyright © 2005 John Wiley & Sons, Inc. All rights reserved.
2.2 EVALUATING DETERMINANTS BY ROW REDUCTION

In this section we shall show that the determinant of a square matrix can be evaluated by reducing the matrix to row-echelon form. This method is important since it is the most computationally efficient way to find the determinant of a general matrix.

A Basic Theorem

We begin with a fundamental theorem that will lead us to an efficient procedure for evaluating the determinant of a matrix of any order n.
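The row-reduction idea previewed above can be sketched in code, anticipating the elementary-row-operation rules proved later in this section: a row interchange flips the sign of the determinant, and adding a multiple of one row to another leaves it unchanged, so after reduction the determinant is the signed product of the diagonal entries. This is an illustrative sketch, not the book's worked procedure.

```python
# Determinant by reduction to upper triangular form (a rough sketch
# of Gaussian elimination with partial pivoting by nonzero search).

def det_by_row_reduction(A):
    A = [row[:] for row in A]           # work on a copy
    n = len(A)
    sign = 1
    for col in range(n):
        # find a row at or below the diagonal with a nonzero entry
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return 0                    # no pivot: determinant is zero
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            sign = -sign                # interchange flips the sign
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            # adding -m times the pivot row does not change the determinant
            A[r] = [x - m * y for x, y in zip(A[r], A[col])]
    det = sign
    for i in range(n):
        det *= A[i][i]                  # product of the diagonal entries
    return det

print(det_by_row_reduction([[2, 1, 0], [1, 2, 1], [0, 1, 2]]))  # ≈ 4.0
```

Unlike cofactor expansion, whose cost grows factorially with n, this uses on the order of n³ arithmetic operations, which is why the section calls it the most computationally efficient general method.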


THEOREM 2.2.1

Let A be a square matrix. If A has a row of zeros or a column of zeros, then det(A) = 0.

Proof By Theorem 2.1.1, the determinant of A found by cofactor expansion along the row or column of all zeros is

    det(A) = 0·C₁ + 0·C₂ + ⋯ + 0·Cₙ

where C₁, C₂, …, Cₙ are the cofactors for that row or column. Hence det(A) is zero.

Here is another useful theorem:


THEOREM 2.2.2

Let A be a square matrix. Then det(A) = det(Aᵀ).

Proof By Theorem 2.1.1, the determinant of A found by cofactor expansion along its first row is the same as the determinant of Aᵀ found by cofactor expansion along its first column.

Remark Because of Theorem 2.2.2, nearly every theorem about determinants that contains the word row in its statement is also true
when the word column is substituted for row. To prove a column statement, one need only transpose the matrix in question, to convert
the column statement to a row statement, and then apply the corresponding known result for rows.
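Theorem 2.2.2 is also easy to confirm numerically. A minimal check, using a plain first-row cofactor expansion on an arbitrary 3 × 3 example:

```python
# Numerical check of det(A) = det(A^T) on a 3x3 example.

def det3(A):
    """Cofactor expansion along the first row of a 3x3 matrix."""
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

A  = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
At = [list(col) for col in zip(*A)]     # transpose of A
print(det3(A), det3(At))                # both are -3
```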

Elementary Row Operations

The next theorem shows how an elementary row operation on a matrix affects the value of its determinant.

THEOREM 2.2.3

Let A be an n × n matrix.

(a) If B is the matrix that results when a single row or single column of A is multiplied by a scalar k, then det(B) = k det(A).

(b) If B is the matrix that results when two rows or two columns of A are interchanged, then det(B) = −det(A).

(c) If B is the matrix that results when a multiple of one row of A is added to another row or when a multiple of one column is
added to another column, then det(B) = det(A).

We omit the proof but give the following example that illustrates the theorem for 3 × 3 determinants.

EXAMPLE 1 Theorem 2.2.3 Applied to 3 × 3 Determinants

We will verify the equation in the first row of Table 1 and leave the last two for the reader. By Theorem 2.1.1, the determinant of B
may be found by cofactor expansion along the first row:

    det(B) = ka₁₁C₁₁ + ka₁₂C₁₂ + ka₁₃C₁₃ = k(a₁₁C₁₁ + a₁₂C₁₂ + a₁₃C₁₃) = k det(A)

since C₁₁, C₁₂, and C₁₃ do not depend on the first row of the matrix, and A and B differ only in their first rows.

Table 1

Relationship           Operation
det(B) = k det(A)      The first row of A is multiplied by k.
det(B) = −det(A)       The first and second rows of A are interchanged.
det(B) = det(A)        A multiple of the second row of A is added to the first row.
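All three relationships in Table 1 can be checked numerically. The sketch below applies each operation to an arbitrary 3 × 3 matrix (the matrix and the scalar k are illustrative choices, not from the text):

```python
# Numerical check of the three Table 1 relationships.

def det3(A):
    """Cofactor expansion along the first row of a 3x3 matrix."""
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]              # det(A) = -3
k = 5
B1 = [[k * x for x in A[0]], A[1], A[2]]            # first row times k
B2 = [A[1], A[0], A[2]]                             # rows 1 and 2 swapped
B3 = [[x + 2 * y for x, y in zip(A[0], A[1])],      # row 1 += 2 * row 2
      A[1], A[2]]

print(det3(B1), det3(B2), det3(B3))   # -15  3  -3, i.e. k*det(A), -det(A), det(A)
```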