10 Numerical Methods
10.1 Gaussian Elimination with Partial Pivoting
10.2 Iterative Methods for Solving Linear Systems
10.3 Power Method for Approximating Eigenvalues
10.4 Applications of Numerical Methods

Car Crash Simulations
Laboratory Experiment Probabilities
Analyzing Hyperlinks
Graphic Design
Pharmacokinetics



10.1 Gaussian Elimination with Partial Pivoting


Express a real number in floating point form, and determine the
stored value of a real number rounded to a specified number of
significant digits.
Compare exact solutions with rounded solutions obtained by
Gaussian elimination with partial pivoting, and observe rounding
error with ill-conditioned systems.

STORED VALUE AND ROUNDING ERROR


In Chapter 1, two methods of solving a system of n linear equations in n variables
were discussed. When either of these methods (Gaussian elimination and Gauss-Jordan
elimination) is used with a digital computer, the computer introduces a problem that has
not yet been discussed: rounding error.
Digital computers store real numbers in floating point form,

±M × 10^k

where k is an integer and the mantissa M satisfies the inequality 0.1 ≤ M < 1.
For instance, the floating point forms of some real numbers are as follows.

Real Number    Floating Point Form
527            0.527 × 10³
3.81623        0.381623 × 10¹
−0.00045       −0.45 × 10⁻³

The number of decimal places that can be stored in the mantissa depends on the
computer. If n places are stored, then it is said that the computer stores n significant
digits. Additional digits are either truncated or rounded off. When a number is truncated
to n significant digits, all digits after the first n significant digits are simply omitted. For
instance, truncated to two significant digits, the number 0.1251 becomes 0.12.
When a number is rounded to n significant digits, the last retained digit is
increased by 1 if the discarded portion is greater than half a digit, and the last retained
digit is not changed if the discarded portion is less than half a digit. For the
special case in which the discarded portion is precisely half a digit, round so that the
last retained digit is even. For instance, the following numbers have been rounded to
two significant digits.
Number Rounded Number
0.1249 0.12
0.125 0.12
0.1251 0.13
0.1349 0.13
0.135 0.14
0.1351 0.14

Most computers store numbers in binary form (base 2) rather than decimal form
(base 10). Although rounding occurs in both systems, this discussion will be restricted
to the more familiar base 10. Whenever a computer truncates or rounds a number,
a rounding error that can affect subsequent calculations is introduced. The result after
rounding or truncating is called the stored value.
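As a concrete illustration of the rounding rule just described, here is a minimal Python sketch (not from the original text; the helper name round_sig is ours) that rounds a number to n significant digits with ties going to the even digit.

```python
# A small sketch of rounding to n significant digits with the
# round-half-to-even rule described above. (Illustrative only.)
from decimal import Decimal, ROUND_HALF_EVEN

def round_sig(x, n):
    """Round x to n significant digits, ties going to the even digit."""
    d = Decimal(str(x))
    if d == 0:
        return 0.0
    exponent = d.adjusted()                     # position of leading digit
    quantum = Decimal(1).scaleb(exponent - n + 1)
    return float(d.quantize(quantum, rounding=ROUND_HALF_EVEN))

# The two-significant-digit examples from the table above:
for x in [0.1249, 0.125, 0.1251, 0.1349, 0.135, 0.1351]:
    print(x, "->", round_sig(x, 2))
# 0.1249 -> 0.12, 0.125 -> 0.12, 0.1251 -> 0.13,
# 0.1349 -> 0.13, 0.135 -> 0.14, 0.1351 -> 0.14
```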

Finding the Stored Values of Numbers

Determine the stored value of each of the real numbers listed below in a computer that
rounds to three significant digits.
a. 54.7 b. 0.1134 c. 8.2256 d. 0.08335 e. 0.08345
SOLUTION
Number       Floating Point Form    Stored Value
a. 54.7      0.547 × 10²            0.547 × 10²
b. 0.1134    0.1134 × 10⁰           0.113 × 10⁰
c. 8.2256    0.82256 × 10¹          0.823 × 10¹
d. 0.08335   0.8335 × 10⁻¹          0.834 × 10⁻¹
e. 0.08345   0.8345 × 10⁻¹          0.834 × 10⁻¹
Note in parts (d) and (e) that when the discarded portion of a decimal is precisely half
a digit, the number is rounded so that the stored value ends in an even digit.

Rounding error tends to propagate as the number of arithmetic operations increases.


This phenomenon is illustrated in the next example.

Propagation of Rounding Error

Evaluate the determinant of the matrix

A = [ 0.12  0.23 ]
    [ 0.12  0.12 ]

rounding each intermediate calculation to two significant digits. Then find the exact
value and compare the two results.
SOLUTION
Rounding each intermediate calculation to two significant digits produces

|A| = (0.12)(0.12) − (0.12)(0.23)
    = 0.0144 − 0.0276
    ≈ 0.014 − 0.028        Round to two significant digits.
    = −0.014.

The exact value is |A| = 0.0144 − 0.0276 = −0.0132. So, to two significant
digits, the correct value is −0.013. Note that the rounded value is not correct to
two significant digits, even though each arithmetic operation was performed with two
significant digits of accuracy. This is what is meant when it is said that arithmetic
operations tend to propagate rounding error.

In Example 2, rounding at the intermediate steps introduced a rounding error of

|−0.0132 − (−0.014)| = 0.0008.        Rounding error

Although this error may seem slight, it represents a percentage error of

0.0008/0.0132 ≈ 0.061 = 6.1%.        Percentage error
In most practical applications, a percentage error of this magnitude would be intolerable.
Keep in mind that this particular percentage error arose with only a few arithmetic steps.
When the number of arithmetic steps increases, the likelihood of a large percentage error
also increases.

GAUSSIAN ELIMINATION WITH PARTIAL PIVOTING


For large systems of linear equations, Gaussian elimination can involve hundreds
of arithmetic computations, each of which can produce rounding error. The following
straightforward example illustrates the potential magnitude of this problem.

Gaussian Elimination and Rounding Error

Use Gaussian elimination to solve the following system.

0.143x1 + 0.357x2 + 2.01x3 = −5.173
−1.31x1 + 0.911x2 + 1.99x3 = −5.458
11.2x1 − 4.30x2 − 0.605x3 = 4.415

After each intermediate calculation, round the result to three significant digits.
SOLUTION
Applying Gaussian elimination to the augmented matrix for this system produces

[ 0.143   0.357    2.01  | −5.17 ]
[ −1.31   0.911    1.99  | −5.46 ]
[ 11.2   −4.30   −0.605  |  4.42 ]

[ 1.00    2.50    14.1   | −36.2 ]    Dividing the first row by 0.143 produces a new first row.
[ −1.31   0.911   1.99   | −5.46 ]
[ 11.2   −4.30   −0.605  |  4.42 ]

[ 1.00    2.50    14.1   | −36.2 ]    Adding 1.31 times the first row to the second row produces a new second row.
[ 0.00    4.19    20.5   | −52.9 ]
[ 11.2   −4.30   −0.605  |  4.42 ]

[ 1.00    2.50    14.1   | −36.2 ]    Adding −11.2 times the first row to the third row produces a new third row.
[ 0.00    4.19    20.5   | −52.9 ]
[ 0.00  −32.3   −159     |  409  ]

[ 1.00    2.50    14.1   | −36.2 ]    Dividing the second row by 4.19 produces a new second row.
[ 0.00    1.00    4.89   | −12.6 ]
[ 0.00  −32.3   −159     |  409  ]

[ 1.00    2.50    14.1   | −36.2 ]    Adding 32.3 times the second row to the third row produces a new third row.
[ 0.00    1.00    4.89   | −12.6 ]
[ 0.00    0.00   −1.00   |  2.00 ]

[ 1.00    2.50    14.1   | −36.2 ]    Multiplying the third row by −1 produces a new third row.
[ 0.00    1.00    4.89   | −12.6 ]
[ 0.00    0.00    1.00   | −2.00 ]

So x3 = −2.00, and using back-substitution, you can obtain x2 = −2.82 and
x1 = −0.900. Try checking this solution in the original system of equations to see
that it is not correct. The correct solution is x1 = 1, x2 = 2, and x3 = −3.

What went wrong with the Gaussian elimination procedure used in Example 3?
Clearly, rounding error propagated to such an extent that the final solution became
hopelessly inaccurate.
Part of the problem is that the original augmented matrix contains entries that
differ in orders of magnitude. For instance, the first column of the matrix

[ 0.143   0.357    2.01  | −5.17 ]
[ −1.31   0.911    1.99  | −5.46 ]
[ 11.2   −4.30   −0.605  |  4.42 ]

has entries that increase roughly by powers of 10 as one moves down the column. In
subsequent elementary row operations, the first row was multiplied by 1.31 and −11.2
and the second row was multiplied by 32.3. When floating point arithmetic is used, such
large row multipliers tend to propagate rounding error. For instance, notice what
happened during Gaussian elimination when the first row was multiplied by −11.2 and
added to the third row:

[ 1.00    2.50    14.1   | −36.2 ]
[ 0.00    4.19    20.5   | −52.9 ]
[ 11.2   −4.30   −0.605  |  4.42 ]

[ 1.00    2.50    14.1   | −36.2 ]    Adding −11.2 times the first row to the third row produces a new third row.
[ 0.00    4.19    20.5   | −52.9 ]
[ 0.00  −32.3   −159     |  409  ]

The third entry in the third row changed from −0.605 to −159; it lost three decimal
places of accuracy.
This type of error propagation can be lessened by appropriate row interchanges that
produce smaller multipliers. One method of restricting the size of the multipliers is
called Gaussian elimination with partial pivoting.

Gaussian Elimination with Partial Pivoting


1. Find the entry in the first column with the largest absolute value. This entry
is called the pivot.
2. Perform a row interchange, if necessary, so that the pivot is in the first row.
3. Divide the first row by the pivot. (This step is unnecessary if the pivot is 1.)
4. Use elementary row operations to reduce the remaining entries in the first
column to 0.
The completion of these four steps is called a pass. After performing the first
pass, ignore the first row and first column and repeat the four steps on the
remaining submatrix. Continue this process until the matrix is in row-echelon
form.

Example 4 shows what happens when this partial pivoting technique is used on the
system of linear equations from Example 3.
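The four-step pass above translates directly into a short program. The following Python sketch (ours, not the text's; the function name is illustrative, and the significant-digit rounding used in the examples is omitted for clarity) performs Gaussian elimination with partial pivoting followed by back-substitution.

```python
# A minimal sketch of Gaussian elimination with partial pivoting
# on an augmented matrix, followed by back-substitution.
def solve_partial_pivoting(aug):
    """aug is an n x (n+1) augmented matrix (list of lists of floats)."""
    n = len(aug)
    for k in range(n):
        # Steps 1-2: find the pivot (largest absolute value in column k) and swap.
        p = max(range(k, n), key=lambda i: abs(aug[i][k]))
        aug[k], aug[p] = aug[p], aug[k]
        # Step 3: divide the pivot row by the pivot.
        pivot = aug[k][k]
        aug[k] = [v / pivot for v in aug[k]]
        # Step 4: reduce the remaining entries in column k to 0.
        for i in range(k + 1, n):
            m = aug[i][k]                       # row multiplier (small by design)
            aug[i] = [a - m * b for a, b in zip(aug[i], aug[k])]
    # Back-substitution on the row-echelon form (diagonal entries are 1).
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = aug[i][n] - sum(aug[i][j] * x[j] for j in range(i + 1, n))
    return x

# The system of Examples 3 and 4:
aug = [[0.143, 0.357, 2.01, -5.173],
       [-1.31, 0.911, 1.99, -5.458],
       [11.2, -4.30, -0.605, 4.415]]
print(solve_partial_pivoting(aug))   # approximately [1.0, 2.0, -3.0]
```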

Gaussian Elimination with Partial Pivoting

Use Gaussian elimination with partial pivoting to solve the system of linear equations
from Example 3. After each intermediate calculation, round the result to three
significant digits.
SOLUTION
As in Example 3, the augmented matrix for this system is

[ 0.143   0.357    2.01  | −5.17 ]
[ −1.31   0.911    1.99  | −5.46 ]
[ 11.2   −4.30   −0.605  |  4.42 ]    ← Pivot

In the first column, 11.2 is the pivot because it has the largest absolute value. So,
interchange the first and third rows and apply elementary row operations, as follows.

[ 11.2   −4.30   −0.605  |  4.42 ]    Interchange the first and third rows.
[ −1.31   0.911    1.99  | −5.46 ]
[ 0.143   0.357    2.01  | −5.17 ]

[ 1.00   −0.384  −0.0540 | 0.395 ]    Dividing the first row by 11.2 produces a new first row.
[ −1.31   0.911    1.99  | −5.46 ]
[ 0.143   0.357    2.01  | −5.17 ]

[ 1.00   −0.384  −0.0540 | 0.395 ]    Adding 1.31 times the first row to the second row produces a new second row.
[ 0.00    0.408    1.92  | −4.94 ]
[ 0.143   0.357    2.01  | −5.17 ]

[ 1.00   −0.384  −0.0540 | 0.395 ]    Adding −0.143 times the first row to the third row produces a new third row.
[ 0.00    0.408    1.92  | −4.94 ]
[ 0.00    0.412    2.02  | −5.23 ]    ← Pivot

This completes the first pass. For the second pass, consider the submatrix formed by
deleting the first row and first column. In this matrix the pivot is 0.412, so interchange
the second and third rows and proceed with Gaussian elimination, as follows.

[ 1.00   −0.384  −0.0540 | 0.395 ]    Interchange the second and third rows.
[ 0.00    0.412    2.02  | −5.23 ]
[ 0.00    0.408    1.92  | −4.94 ]

[ 1.00   −0.384  −0.0540 | 0.395 ]    Dividing the second row by 0.412 produces a new second row.
[ 0.00    1.00     4.90  | −12.7 ]
[ 0.00    0.408    1.92  | −4.94 ]

[ 1.00   −0.384  −0.0540 | 0.395 ]    Adding −0.408 times the second row to the third row produces a new third row.
[ 0.00    1.00     4.90  | −12.7 ]
[ 0.00    0.00  −0.0800  | 0.240 ]

This completes the second pass, and the entire procedure can be completed by dividing
the third row by −0.0800, as follows.

[ 1.00   −0.384  −0.0540 | 0.395 ]
[ 0.00    1.00     4.90  | −12.7 ]
[ 0.00    0.00     1.00  | −3.00 ]

So x3 = −3.00, and back-substitution produces x2 = 2.00 and x1 = 1.00, which
agrees with the exact solution of x1 = 1, x2 = 2, and x3 = −3.

REMARK: Note that the row multipliers used in Example 4 are 1.31, −0.143, and
−0.408, as contrasted with the multipliers of 1.31, −11.2, and 32.3 encountered in
Example 3.

The term partial in partial pivoting refers to the fact that in each pivot search, only
entries in the first column of the matrix or submatrix are considered. This search can be
extended to include every entry in the coefficient matrix or submatrix; the resulting
technique is called Gaussian elimination with complete pivoting. Unfortunately,
neither complete pivoting nor partial pivoting solves all problems of rounding error.
Some systems of linear equations, called ill-conditioned systems, are extremely
sensitive to numerical errors. For such systems, pivoting is not much help. A common
type of system of linear equations that tends to be ill-conditioned is one for which the
determinant of the coefficient matrix is nearly zero. The next example illustrates
this problem.

An Ill-Conditioned System of Linear Equations

Use Gaussian elimination to solve the system of linear equations.

x + y = 0
x + (401/400)y = 20

Round each intermediate calculation to four significant digits.
SOLUTION
Gaussian elimination with rational arithmetic produces the exact solution y = 8000 and
x = −8000. But rounding 401/400 = 1.0025 to four significant digits introduces a
large rounding error, as follows.

[ 1   1      | 0  ]
[ 1   1.002  | 20 ]

[ 1   1      | 0  ]
[ 0   0.002  | 20 ]

[ 1   1      | 0      ]
[ 0   1.00   | 10,000 ]

So y ≈ 10,000 and back-substitution produces

x = −y ≈ −10,000.

This solution represents a percentage error of 25% for both the x-value and the
y-value. Note that this error was caused by a rounding error of only 0.0005 (when
1.0025 was rounded to 1.002).
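A quick numerical experiment (ours, using numpy) makes the sensitivity visible: perturbing the one coefficient by 0.0005 moves the solution from (−8000, 8000) to (−10,000, 10,000).

```python
# Illustrative check of the ill-conditioned system above.
import numpy as np

A_exact = np.array([[1.0, 1.0], [1.0, 401.0 / 400.0]])
A_rounded = np.array([[1.0, 1.0], [1.0, 1.002]])   # 1.0025 rounded to 4 digits
b = np.array([0.0, 20.0])

print(np.linalg.solve(A_exact, b))    # [-8000.  8000.]
print(np.linalg.solve(A_rounded, b))  # [-10000.  10000.]
print(np.linalg.cond(A_exact))        # about 1.6e3: small data errors can be
                                      # amplified by roughly this factor
```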

LINEAR ALGEBRA APPLIED

Pharmacokinetics researchers study the absorption, distribution, and elimination of
a drug from the body. They use mathematical formulas to model data collected,
analyze the credibility of the data, account for environmental error factors, and
detect subpopulations of subjects who share certain characteristics.
The mathematical models are used to develop dosage regimens for a patient to
achieve a target goal at a target time, so it is vital that the computations are not
skewed by rounding errors. Researchers rely heavily on pharmacokinetic and
statistical software packages that use numerical methods like the one shown in
this chapter.

10.1 Exercises

Floating Point Form In Exercises 1–8, express the real number in floating point form.
1. 4281    2. 321.61    3. 2.62    4. 21.001
5. 0.00121    6. 0.00026    7. 1/8    8. 16 1/2

Finding the Stored Value In Exercises 9–16, determine the stored value of the real
number in a computer that rounds to (a) three significant digits and (b) four
significant digits.
9. 331    10. 21.4    11. 92.646    12. 216.964
13. 7/16    14. 5/32    15. 1/7    16. 1/6

Rounding Error In Exercises 17 and 18, evaluate the determinant of the matrix,
rounding each intermediate calculation to three significant digits. Then compare the
rounded value with the exact solution.

17. [ 1.24  66.00 ]    18. [ 2.12  4.22 ]
    [ 1.02  56.00 ]        [ 1.07  2.12 ]

Gaussian Elimination In Exercises 19 and 20, use Gaussian elimination to solve the
system of linear equations. After each intermediate calculation, round the result to
three significant digits. Then compare this solution with the exact solution.
19. 1.21x + 16.7y = 28.8
    4.66x + 64.4y = 111.0
20. 14.4x + 17.1y = 31.5
    81.6x + 97.4y = 179.0

Partial Pivoting In Exercises 21–24, (a) use Gaussian elimination without partial
pivoting to solve the system, rounding to three significant digits after each
intermediate calculation, (b) use partial pivoting to solve the same system, again
rounding to three significant digits after each intermediate calculation, and
(c) compare both solutions with the exact solution provided.
21. 6x + 6.04y = 12.04
    6x + 6.20y = 12.20
    Exact: x = 1, y = 1
22. 0.51x + 992.6y = 997.7
    99.00x − 449.0y = 541.0
    Exact: x = 10, y = 1
23. x + 4.01y + 0.00445z = 0.00
    x + 4.00y − 0.00600z = −0.21
    2x + 4.05y + 0.05000z = 0.385
    Exact: x = −0.49, y = 0.1, z = 20
24. 0.007x + 61.20y + 0.093z = 61.3
    4.810x − 5.92y + 1.110z = 0.0
    21.400x + 61.12y + 1.180z = 83.7
    Exact: x = 1, y = 1, z = 1

Solving an Ill-Conditioned System In Exercises 25 and 26, use Gaussian elimination
to solve the ill-conditioned system of linear equations, rounding each intermediate
calculation to three significant digits. Then compare this solution with the exact
solution provided.
25. x + y = 2
    x + (600/601)y = 20
    Exact: x = 10,820, y = −10,818
26. x + (800/801)y = −10
    x + y = 50
    Exact: x = −48,010, y = 48,060

27. Comparing Ill-Conditioned Systems Solve the following ill-conditioned systems
and compare their solutions.
(a) x + y = 2            (b) x + y = 2
    x + 1.0001y = 2          x + 1.0001y = 2.0001

28. By inspection, determine whether the following system is more likely to be
solvable by Gaussian elimination with partial pivoting, or is more likely to be
ill-conditioned. Explain your reasoning.
0.216x1 + 0.044x2 + 8.15x3 = 7.762
2.05x1 + 1.02x2 + 0.937x3 = 4.183
23.1x1 + 6.05x2 + 0.021x3 = 52.271

29. The Hilbert matrix of size n × n is the n × n symmetric matrix Hn = [aij], where
aij = 1/(i + j − 1). As n increases, the Hilbert matrix becomes more and more
ill-conditioned. Use Gaussian elimination to solve the system of linear equations
shown below, rounding to two significant digits after each intermediate calculation.
Compare this solution with the exact solution x1 = 3, x2 = −24, and x3 = 30.
x1 + (1/2)x2 + (1/3)x3 = 1
(1/2)x1 + (1/3)x2 + (1/4)x3 = 1
(1/3)x1 + (1/4)x2 + (1/5)x3 = 1

30. Repeat Exercise 29 for H4x = b, where b = [1 1 1 1]ᵀ, rounding to four
significant digits. Compare this solution with the exact solution x1 = −4, x2 = 60,
x3 = −180, and x4 = 140.

31. The inverse of the n × n Hilbert matrix Hn has integer entries. Use a software
program or graphing utility to calculate the inverses of the Hilbert matrices Hn for
n = 4, 5, 6, and 7. For what values of n do the inverses appear to be accurate?

10.2 Iterative Methods for Solving Linear Systems


Use the Jacobi iterative method to solve a system of linear equations.
Use the Gauss-Seidel iterative method to solve a system of linear
equations.
Determine whether an iterative method converges or diverges,
and use strictly diagonally dominant coefficient matrices to obtain
convergence.

THE JACOBI METHOD


As a numerical technique, Gaussian elimination is rather unusual because it is direct.
That is, a solution is obtained after a single application of Gaussian elimination. Once
a solution has been obtained, Gaussian elimination offers no method of refinement.
The lack of refinement can be a problem because, as the preceding section shows,
Gaussian elimination is sensitive to rounding error.
Numerical techniques more commonly involve an iterative method. For instance,
in calculus, Newton's iterative method is used to approximate the zeros of a
differentiable function. This section introduces two iterative methods for approximating
the solution of a system of n linear equations in n variables.
The first iterative technique is called the Jacobi method, after Carl Gustav Jacob
Jacobi (1804–1851). This method makes two assumptions: (1) that the system

a11x1 + a12x2 + . . . + a1nxn = b1
a21x1 + a22x2 + . . . + a2nxn = b2
. . .
an1x1 + an2x2 + . . . + annxn = bn

has a unique solution and (2) that the coefficient matrix A has no zeros on its main
diagonal. If any of the diagonal entries a11, a22, . . . , ann are zero, then rows or columns
must be interchanged to obtain a coefficient matrix that has all nonzero entries on the
main diagonal.
To begin the Jacobi method, solve the first equation for x1, the second equation
for x2, and so on, as follows.

x1 = (1/a11)(b1 − a12x2 − a13x3 − . . . − a1nxn)
x2 = (1/a22)(b2 − a21x1 − a23x3 − . . . − a2nxn)
. . .
xn = (1/ann)(bn − an1x1 − an2x2 − . . . − an,n−1xn−1)
Then make an initial approximation of the solution
(x1, x2, x3, . . . , xn) Initial approximation

and substitute these values of xi on the right-hand sides of the rewritten equations
to obtain the first approximation. After this procedure has been completed, one
iteration has been performed. In the same way, the second approximation is formed by
substituting the first approximation's x-values on the right-hand sides of the rewritten
equations. By repeated iterations, you will form a sequence of approximations that
often converges to the actual solution. This procedure is illustrated in Example 1.

Applying the Jacobi Method

Use the Jacobi method to approximate the solution of the following system of linear
equations.

5x1 − 2x2 + 3x3 = −1
−3x1 + 9x2 + x3 = 2
2x1 − x2 − 7x3 = 3

Continue the iterations until two successive approximations are identical when rounded
to three significant digits.
SOLUTION
To begin, write the system in the form

x1 = −1/5 + (2/5)x2 − (3/5)x3
x2 = 2/9 + (3/9)x1 − (1/9)x3
x3 = −3/7 + (2/7)x1 − (1/7)x2.

Because you do not know the actual solution, choose

x1 = 0, x2 = 0, x3 = 0        Initial approximation

as a convenient initial approximation. So, the first approximation is

x1 = −1/5 + (2/5)(0) − (3/5)(0) = −0.200
x2 = 2/9 + (3/9)(0) − (1/9)(0) ≈ 0.222
x3 = −3/7 + (2/7)(0) − (1/7)(0) ≈ −0.429.

Now substitute these x-values into the system to obtain the second approximation.

x1 = −1/5 + (2/5)(0.222) − (3/5)(−0.429) ≈ 0.146
x2 = 2/9 + (3/9)(−0.200) − (1/9)(−0.429) ≈ 0.203
x3 = −3/7 + (2/7)(−0.200) − (1/7)(0.222) ≈ −0.517.

Continuing this procedure, you obtain the sequence of approximations shown in
the table.

n     0       1       2       3       4       5       6       7
x1    0.000  −0.200   0.146   0.191   0.181   0.185   0.186   0.186
x2    0.000   0.222   0.203   0.328   0.332   0.329   0.331   0.331
x3    0.000  −0.429  −0.517  −0.416  −0.421  −0.424  −0.423  −0.423

Because the last two columns in the table are identical, you can conclude that to three
significant digits the solution is

x1 = 0.186, x2 = 0.331, x3 = −0.423.

For the system of linear equations in Example 1, the Jacobi method is said to
converge. That is, repeated iterations succeed in producing an approximation that is
correct to three significant digits. As is generally true for iterative methods, greater
accuracy would require more iterations.
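The iteration just described is only a few lines of code. The sketch below (ours, without the significant-digit rounding used in the text) applies the Jacobi method to the system of Example 1.

```python
# A compact sketch of the Jacobi method: every new component is
# computed only from the previous iterate x_old.
def jacobi(A, b, x0, iterations):
    n = len(b)
    x = list(x0)
    for _ in range(iterations):
        x_old = x[:]
        x = [(b[i] - sum(A[i][j] * x_old[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# The system of Example 1:
A = [[5.0, -2.0, 3.0], [-3.0, 9.0, 1.0], [2.0, -1.0, -7.0]]
b = [-1.0, 2.0, 3.0]
print(jacobi(A, b, [0.0, 0.0, 0.0], 7))
# approximately [0.186, 0.331, -0.423]
```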

THE GAUSS-SEIDEL METHOD


You will now look at a modification of the Jacobi method called the Gauss-Seidel
method, named after Carl Friedrich Gauss (1777–1855) and Philipp L. Seidel
(1821–1896). This modification is no more difficult to use than the Jacobi method, and
it often requires fewer iterations to produce the same degree of accuracy.
With the Jacobi method, the values of xi obtained in the nth approximation remain
unchanged until the entire (n + 1)th approximation has been calculated. On the other
hand, with the Gauss-Seidel method you use the new values of each xi as soon as they
are known. That is, once you have determined x1 from the first equation, its value is then
used in the second equation to obtain the new x2. Similarly, the new x1 and x2 are used
in the third equation to obtain the new x3, and so on. This procedure is demonstrated in
Example 2.

Applying the Gauss-Seidel Method

Use the Gauss-Seidel iteration method to approximate the solution to the system of
equations in Example 1.
SOLUTION
The first computation is identical to that in Example 1. That is, using
(x1, x2, x3) = (0, 0, 0) as the initial approximation, you obtain the new value of x1.

x1 = −1/5 + (2/5)(0) − (3/5)(0) = −0.200

Now that you have a new value of x1, use it to compute a new value of x2. That is,

x2 = 2/9 + (3/9)(−0.200) − (1/9)(0) ≈ 0.156.

Similarly, use x1 = −0.200 and x2 = 0.156 to compute a new value of x3. That is,

x3 = −3/7 + (2/7)(−0.200) − (1/7)(0.156) ≈ −0.508.

So, the first approximation is x1 = −0.200, x2 = 0.156, and x3 = −0.508. Now
performing the next iteration produces

x1 = −1/5 + (2/5)(0.156) − (3/5)(−0.508) ≈ 0.167
x2 = 2/9 + (3/9)(0.167) − (1/9)(−0.508) ≈ 0.334
x3 = −3/7 + (2/7)(0.167) − (1/7)(0.334) ≈ −0.429.

Continued iterations produce the sequence of approximations shown in the table.

n     0       1       2       3       4       5       6
x1    0.000  −0.200   0.167   0.191   0.187   0.186   0.186
x2    0.000   0.156   0.334   0.334   0.331   0.331   0.331
x3    0.000  −0.508  −0.429  −0.422  −0.422  −0.423  −0.423

Note that after only six iterations of the Gauss-Seidel method, you achieved
the same accuracy as was obtained with seven iterations of the Jacobi method in
Example 1.
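In code, the only change from the Jacobi sketch above is that each new value is written back immediately, so the update works in place. A minimal version (again ours):

```python
# A compact sketch of the Gauss-Seidel method: new values of x[i]
# are used as soon as they are known.
def gauss_seidel(A, b, x0, iterations):
    n = len(b)
    x = list(x0)
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

A = [[5.0, -2.0, 3.0], [-3.0, 9.0, 1.0], [2.0, -1.0, -7.0]]
b = [-1.0, 2.0, 3.0]
print(gauss_seidel(A, b, [0.0, 0.0, 0.0], 6))
# approximately [0.186, 0.331, -0.423], matching Example 2
```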

CONVERGENCE AND DIVERGENCE


Neither of the iterative methods presented in this section always converges. That is, it
is possible to apply the Jacobi method or the Gauss-Seidel method to a system of
linear equations and obtain a divergent sequence of approximations. In such cases, it is
said that the method diverges.

An Example of Divergence

Apply the Jacobi method to the system

x1 − 5x2 = −4
7x1 − x2 = 6

using the initial approximation (x1, x2) = (0, 0), and show that the method diverges.
SOLUTION
As usual, begin by rewriting the system in the form

x1 = −4 + 5x2
x2 = −6 + 7x1.

Then the initial approximation (0, 0) produces the first approximation

x1 = −4 + 5(0) = −4
x2 = −6 + 7(0) = −6.

Repeated iterations produce the sequence of approximations shown in the table.

n     0    1     2     3      4       5       6         7
x1    0   −4    −34   −174   −1224   −6124   −42,874   −214,374
x2    0   −6    −34   −244   −1224   −8574   −42,874   −300,124

For this particular system of linear equations you can determine that the actual
solution is x1 = 1 and x2 = 1. So you can see from the table that the approximations
provided by the Jacobi method become progressively worse instead of better, and you
can conclude that the method diverges.

The problem of divergence in Example 3 is not resolved by using the Gauss-Seidel
method rather than the Jacobi method. In fact, for this particular system the
Gauss-Seidel method diverges even more rapidly, as shown in the table.

n     0    1      2       3         4            5
x1    0   −4     −174    −6124     −214,374     −7,503,124
x2    0   −34    −1224   −42,874   −1,500,624   −52,521,874

You will now look at a special type of coefficient matrix A, called a strictly
diagonally dominant matrix, for which it is guaranteed that both methods will
converge.

Definition of Strictly Diagonally Dominant Matrix


An n × n matrix A is strictly diagonally dominant when the absolute value of
each entry on the main diagonal is greater than the sum of the absolute values
of the other entries in the same row. That is,

|a11| > |a12| + |a13| + . . . + |a1n|
|a22| > |a21| + |a23| + . . . + |a2n|
. . .
|ann| > |an1| + |an2| + . . . + |an,n−1|.

Strictly Diagonally Dominant Matrices

Which of the systems of linear equations shown below has a strictly diagonally
dominant coefficient matrix?

a. 3x1 + x2 = 4        b. 4x1 + 2x2 + x3 = 1
   2x1 + 5x2 = 2          x1 + 2x3 = 4
                          3x1 + 5x2 + x3 = 3

SOLUTION
a. The coefficient matrix

A = [ 3  1 ]
    [ 2  5 ]

is strictly diagonally dominant because |3| > |1| and |5| > |2|.
b. The coefficient matrix

A = [ 4  2  1 ]
    [ 1  0  2 ]
    [ 3  5  1 ]

is not strictly diagonally dominant because the entries in the second and third rows
do not conform to the definition. For instance, in the second row, a21 = 1, a22 = 0,
and a23 = 2, and it is not true that |a22| > |a21| + |a23|. Interchanging the second
and third rows in the original system of linear equations, however, produces the
coefficient matrix

[ 4  2  1 ]
[ 3  5  1 ]
[ 1  0  2 ]

which is strictly diagonally dominant.
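Strict diagonal dominance is easy to test mechanically. A one-function Python check (ours), applied to the matrices of Example 4:

```python
# Returns True when every diagonal entry dominates its row, per the
# definition above.
def is_strictly_diagonally_dominant(A):
    n = len(A)
    return all(abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
               for i in range(n))

print(is_strictly_diagonally_dominant([[3, 1], [2, 5]]))                 # True
print(is_strictly_diagonally_dominant([[4, 2, 1], [1, 0, 2], [3, 5, 1]]))  # False
print(is_strictly_diagonally_dominant([[4, 2, 1], [3, 5, 1], [1, 0, 2]]))  # True
```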

The next theorem, which is listed without proof, states that strict diagonal
dominance is sufficient for the convergence of either the Jacobi method or the
Gauss-Seidel method.

THEOREM 10.1 Convergence of the Jacobi and Gauss-Seidel Methods

If A is strictly diagonally dominant, then the system of linear equations
given by Ax = b has a unique solution to which the Jacobi method and the
Gauss-Seidel method will converge for any initial approximation.

In Example 3, you looked at a system of linear equations for which the Jacobi and
Gauss-Seidel methods diverged. In the next example, you can see that by interchanging
the rows of the system in Example 3, you can obtain a coefficient matrix that is strictly
diagonally dominant. After this interchange, convergence is assured.

Interchanging Rows to Obtain Convergence

Interchange the rows of the system in Example 3 to obtain one with a strictly diagonally
dominant coefficient matrix. Then apply the Gauss-Seidel method to approximate the
solution to four significant digits.
SOLUTION
Begin by interchanging the two rows of the system to obtain

7x1 − x2 = 6
x1 − 5x2 = −4.

Note that the coefficient matrix of this system is strictly diagonally dominant. Then
solve for x1 and x2, as shown below.

x1 = 6/7 + (1/7)x2
x2 = 4/5 + (1/5)x1

Using the initial approximation (x1, x2) = (0, 0), obtain the sequence of approximations
shown in the table.

n     0        1        2        3        4       5
x1    0.0000   0.8571   0.9959   0.9999   1.000   1.000
x2    0.0000   0.9714   0.9992   1.000    1.000   1.000

So, the solution is x1 = 1 and x2 = 1.

REMARK: Do not conclude from Theorem 10.1 that strict diagonal dominance is a
necessary condition for convergence of the Jacobi or Gauss-Seidel methods (see
Exercise 21).

LINEAR ALGEBRA APPLIED

The Jacobi method can be applied to boundary value problems to analyze a region
having a quantity distributed evenly across it. A grid, or mesh, is drawn over the
region and values are estimated at the grid points.
Some illustration software programs have a gradient mesh feature that evenly
distributes color in a region between and around user-defined grid points.
[Figure: a triangular region overlaid with a grid; the exterior grid points carry
user-specified percentages of blue ranging from 20 to 100, and c1, c2, c3 are the
interior grid points.] In the diagram, a grid has been superimposed over a
triangular region, and the exterior points of the grid have been specified by
the user as percentages of blue. The value of color at each interior grid point is
equal to the average of the colors at adjacent grid points:

c1 = 0.25(80 + 80 + c3 + 60)
c2 = 0.25(60 + c3 + 40 + 40)
c3 = 0.25(c1 + 70 + 50 + c2)

As the mesh spacing decreases, the linear system of equations becomes larger.
Numerical methods allow the illustration software to determine colors at each
individual point along the mesh, resulting in a smooth distribution of color.

10.2 Exercises

The Jacobi Method In Exercises 1–4, apply the Jacobi method to the system of
linear equations, using the initial approximation (x1, x2, . . . , xn) = (0, 0, . . . , 0).
Continue performing iterations until two successive approximations are identical
when rounded to three significant digits.
1. 3x1 + x2 = 2
   x1 + 4x2 = 5
2. 4x1 + 2x2 = 6
   3x1 + 5x2 = 1
3. 2x1 + x2 = 2
   x1 + 3x2 + x3 = 2
   x1 + x2 + 3x3 = 6
4. 4x1 + x2 + x3 = 7
   x1 + 7x2 + 2x3 = 2
   3x1 + 4x3 = 11

The Gauss-Seidel Method In Exercises 5–8, apply the Gauss-Seidel method to the
system of linear equations in the given exercise.
5. Exercise 1    6. Exercise 2    7. Exercise 3    8. Exercise 4

Showing Divergence In Exercises 9–12, show that the Gauss-Seidel method diverges
for the system using the initial approximation (x1, x2, . . . , xn) = (0, 0, . . . , 0).
9. x1 + 2x2 = 1
   2x1 + x2 = 3
10. x1 + 4x2 = 1
    3x1 + 2x2 = 2
11. 2x1 + 3x2 = 7
    x1 + 3x2 + 10x3 = 9
    3x1 + x3 = 13
12. x1 + 3x2 + x3 = 5
    3x1 + x2 = 5
    x2 + 2x3 = 1

Strictly Diagonally Dominant Matrices In Exercises 13–16, determine whether the
matrix is strictly diagonally dominant.

13. [ 2  1 ]    14. [ 1  2 ]
    [ 3  5 ]        [ 0  1 ]

15. [ 12  6   0 ]    16. [ 7  5  1 ]
    [ 2   3   2 ]        [ 1  4  1 ]
    [ 0   6  13 ]        [ 0  2  3 ]

Interchanging Rows In Exercises 17–20, interchange the rows of the system of
linear equations in the given exercise to obtain a system with a strictly
diagonally dominant coefficient matrix. Then apply the Gauss-Seidel method to
approximate the solution to two significant digits.
17. Exercise 9    18. Exercise 10    19. Exercise 11    20. Exercise 12

Showing Convergence In Exercises 21 and 22, the coefficient matrix of the system
of linear equations is not strictly diagonally dominant. Show that the Jacobi
and Gauss-Seidel methods converge using an initial approximation of
(x1, x2, . . . , xn) = (0, 0, . . . , 0).
21. 4x1 + 5x2 = 1
    x1 + 2x2 = 3
22. 4x1 + 2x2 + 2x3 = 0
    x1 + 3x2 + x3 = 7
    3x1 + x2 + 4x3 = 5

True or False? In Exercises 23–25, determine whether each statement is true or
false. If a statement is true, give a reason or cite an appropriate statement from
the text. If a statement is false, provide an example that shows the statement is
not true in all cases or cite an appropriate statement from the text.
23. The Jacobi method is said to converge when it produces a sequence of repeated
iterations accurate to within a specific number of decimal places.
24. If the Jacobi method or the Gauss-Seidel method diverges, then the system has
no solution.
25. If a matrix A is strictly diagonally dominant, then the system of linear
equations represented by Ax = b has no unique solution.

26. Consider the system of linear equations
ax1 + 5x2 + 2x3 = 19
3x1 + bx2 + x3 = 1
2x1 + x2 + cx3 = 9
(a) Describe all the values of a, b, and c that will allow you to use the Jacobi
method to approximate the solution of this system.
(b) Describe all the values of a, b, and c that will guarantee that the Jacobi and
Gauss-Seidel methods converge.
(c) Let a = 8, b = 1, and c = 4. What, if anything, can you determine about the
convergence or divergence of the Jacobi and Gauss-Seidel methods? Explain.

10.3 Power Method for Approximating Eigenvalues


Determine the existence of a dominant eigenvalue, and find the
corresponding dominant eigenvector.
Use the power method with scaling to approximate a dominant
eigenvector of a matrix, and use the Rayleigh quotient to compute
its dominant eigenvalue.

DOMINANT EIGENVALUES AND DOMINANT EIGENVECTORS


In Chapter 7 the eigenvalues of an n × n matrix A were obtained by solving its
characteristic equation

λ^n + c_{n−1}λ^{n−1} + c_{n−2}λ^{n−2} + . . . + c_0 = 0.

For large values of n, polynomial equations such as this one are difficult and time
consuming to solve. Moreover, numerical techniques for approximating roots of
polynomial equations of high degree are sensitive to rounding errors. This section
introduces an alternative method for approximating eigenvalues. As presented here, the
method can be used only to find the eigenvalue of A that is largest in absolute value;
this eigenvalue is called the dominant eigenvalue of A. Although this restriction may
seem severe, dominant eigenvalues are of primary interest in many physical applications.

Definitions of Dominant Eigenvalue and Dominant Eigenvector


Let λ1, λ2, . . . , and λn be the eigenvalues of an n × n matrix A. λi is called the
dominant eigenvalue of A when

|λi| > |λj|,  i ≠ j.

The eigenvectors corresponding to λi are called dominant eigenvectors of A.

Not every matrix has a dominant eigenvalue. For instance, the matrices

A = [ 1   0 ]        B = [ 2  0  0 ]
    [ 0  −1 ]            [ 0  2  0 ]
                         [ 0  0  1 ]

have no dominant eigenvalue.

REMARK: Note that the eigenvalues of A are λ1 = 1 and λ2 = −1, and the
eigenvalues of B are λ1 = 2, λ2 = 2, and λ3 = 1.

Finding a Dominant Eigenvalue

Find the dominant eigenvalue and corresponding eigenvectors of the matrix

A = [ 2  −12 ]
    [ 1   −5 ].

SOLUTION
From Example 4 in Section 7.1, the characteristic polynomial of A is
λ² + 3λ + 2 = (λ + 1)(λ + 2). So, the eigenvalues of A are λ1 = −1 and λ2 = −2,
of which the dominant one is λ2 = −2. From the same example, the dominant
eigenvectors of A (those corresponding to λ2 = −2) are of the form

x = t(3, 1)ᵀ,  t ≠ 0.

THE POWER METHOD


Like the Jacobi and Gauss-Seidel methods, the power method for approximating
eigenvalues is iterative. First assume that the matrix A has a dominant eigenvalue with
corresponding dominant eigenvectors. Then choose an initial approximation x0 of one
of the dominant eigenvectors of A. This initial approximation must be a nonzero vector
in Rn. Finally, form the sequence
x1 = Ax0
x2 = Ax1 = A(Ax0) = A²x0
x3 = Ax2 = A(A²x0) = A³x0
. . .
xk = Axk−1 = A(A^(k−1)x0) = A^k x0.

For large values of k, and by properly scaling this sequence, you obtain a good
approximation of the dominant eigenvector of A. This is illustrated in Example 2.

Approximating a Dominant Eigenvector by the Power Method

Complete six iterations of the power method to approximate a dominant eigenvector of

A = [ 2  −12 ]
    [ 1   −5 ].

SOLUTION
Begin with an initial nonzero approximation of

x0 = (1, 1)ᵀ.

Then obtain the approximations shown below.

Iteration                              Scaled Approximation
x1 = Ax0 = (−10, −4)ᵀ                  −4(2.50, 1.00)ᵀ
x2 = Ax1 = (28, 10)ᵀ                   10(2.80, 1.00)ᵀ
x3 = Ax2 = (−64, −22)ᵀ                 −22(2.91, 1.00)ᵀ
x4 = Ax3 = (136, 46)ᵀ                  46(2.96, 1.00)ᵀ
x5 = Ax4 = (−280, −94)ᵀ                −94(2.98, 1.00)ᵀ
x6 = Ax5 = (568, 190)ᵀ                 190(2.99, 1.00)ᵀ

Note that the approximations in Example 2 are approaching scalar multiples of

(3, 1)ᵀ

which is a dominant eigenvector of the matrix A from Example 1. In this case the
dominant eigenvalue of the matrix A is known to be λ = −2. For the sake of
demonstration, however, assume that the dominant eigenvalue of A is unknown. The
next theorem provides a formula for determining the eigenvalue corresponding to
a given eigenvector. This theorem is credited to the English physicist John William
Rayleigh (1842–1919).

THEOREM 10.2 Determining an Eigenvalue from an Eigenvector

If x is an eigenvector of a matrix A, then its corresponding eigenvalue is given by

λ = (Ax · x)/(x · x).

This quotient is called the Rayleigh quotient.

PROOF
Because x is an eigenvector of A, it follows that Ax = λx and you can write

(Ax · x)/(x · x) = (λx · x)/(x · x) = λ(x · x)/(x · x) = λ.

In cases for which the power method generates a good approximation of a dominant
eigenvector, the Rayleigh quotient provides a correspondingly good approximation of the
dominant eigenvalue. The use of the Rayleigh quotient is demonstrated in Example 3.

Approximating a Dominant Eigenvalue

Use the result of Example 2 to approximate the dominant eigenvalue of the matrix

A = [ 2  −12 ]
    [ 1   −5 ].

SOLUTION
In Example 2, the sixth iteration of the power method produced

x6 = (568, 190)ᵀ = 190(2.99, 1.00)ᵀ.

With x = (2.99, 1) as the approximation of a dominant eigenvector of A, use the
Rayleigh quotient to obtain an approximation of the dominant eigenvalue of A. First
compute the product Ax.

Ax = [ 2  −12 ][ 2.99 ] = [ −6.02 ]
     [ 1   −5 ][ 1.00 ]   [ −2.01 ]

Then, because

Ax · x = (−6.02)(2.99) + (−2.01)(1) ≈ −20.0

and

x · x = (2.99)(2.99) + (1)(1) ≈ 9.94

you can compute the Rayleigh quotient to be

λ = (Ax · x)/(x · x) ≈ −20.0/9.94 ≈ −2.01

which is a good approximation of the dominant eigenvalue λ = −2.

As shown in Example 2, the power method tends to produce approximations with
large entries. In practice it is best to scale down each approximation before proceeding
to the next iteration. One way to accomplish this scaling is to determine the component
of Axi that has the largest absolute value and multiply the vector Axi by the reciprocal of
this component. The resulting vector will then have components whose absolute
values are less than or equal to 1.

REMARK: Other scaling techniques are possible. For examples, see Exercises 23
and 24.
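The power method with this scaling, together with the Rayleigh quotient of Theorem 10.2, can be sketched in a few lines (ours, using numpy):

```python
# Power method with scaling by the largest-magnitude component,
# followed by a Rayleigh-quotient eigenvalue estimate.
import numpy as np

def power_method(A, x0, iterations):
    x = np.array(x0, dtype=float)
    for _ in range(iterations):
        x = A @ x
        x = x / np.max(np.abs(x))       # scale down each approximation
    rayleigh = (A @ x) @ x / (x @ x)    # eigenvalue estimate (Theorem 10.2)
    return x, rayleigh

A = np.array([[2.0, -12.0], [1.0, -5.0]])
x, lam = power_method(A, [1.0, 1.0], 6)
print(x, lam)   # x near a multiple of (3, 1); lam near -2
```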

The Power Method with Scaling

Calculate six iterations of the power method with scaling to approximate a dominant
eigenvector of the matrix

A = [ 1   2  0 ]
    [ −2  1  2 ]
    [ 1   3  1 ].

Use x0 = (1, 1, 1)ᵀ as the initial approximation.
SOLUTION
One iteration of the power method produces

Ax0 = (3, 1, 5)ᵀ

and by scaling you obtain the approximation

x1 = (1/5)(3, 1, 5)ᵀ = (0.60, 0.20, 1.00)ᵀ.

A second iteration yields

Ax1 = (1.00, 1.00, 2.20)ᵀ

and

x2 = (1/2.20)(1.00, 1.00, 2.20)ᵀ = (0.45, 0.45, 1.00)ᵀ.

Continuing this process produces the sequence of approximations shown in the table.

      x0     x1     x2     x3     x4     x5     x6
      1.00   0.60   0.45   0.48   0.50   0.50   0.50
      1.00   0.20   0.45   0.55   0.51   0.50   0.50
      1.00   1.00   1.00   1.00   1.00   1.00   1.00

REMARK: Note that the scaling factors used to obtain the vectors in the table,

x1: 5.00,  x2: 2.20,  x3: 2.80,  x4: 3.13,  x5: 3.03,  x6: 3.00,

are approaching the dominant eigenvalue λ = 3.

From the table, the approximate dominant eigenvector of A is

x = (0.50, 0.50, 1.00)ᵀ.

Then use the Rayleigh quotient to approximate the dominant eigenvalue of A to be
λ = 3. (For this example, you can check that the approximations of x and λ are exact.)

In Example 4, the power method with scaling converges to a dominant eigenvector.


The next theorem states that a sufficient condition for convergence of the power method
is that the matrix A be diagonalizable (and have a dominant eigenvalue).

THEOREM 10.3 Convergence of the Power Method

If A is an n × n diagonalizable matrix with a dominant eigenvalue, then there
exists a nonzero vector x0 such that the sequence of vectors

Ax0, A²x0, A³x0, . . . , A^k x0, . . .

approaches a multiple of the dominant eigenvector of A.

PROOF
Because A is diagonalizable, from Theorem 7.5 it has n linearly independent eigenvectors
x1, x2, . . . , xn with corresponding eigenvalues λ1, λ2, . . . , λn. Assume that these
eigenvalues are ordered so that λ1 is the dominant eigenvalue (with a corresponding
eigenvector of x1). Because the n eigenvectors x1, x2, . . . , xn are linearly independent,
they must form a basis for Rⁿ. For the initial approximation x0, choose a nonzero
vector such that the linear combination

x0 = c1x1 + c2x2 + . . . + cnxn

has a nonzero leading coefficient. (If c1 = 0, the power method may not converge, and
a different x0 must be used as the initial approximation. See Exercises 19 and 20.) Now,
multiplying both sides of this equation by A produces

Ax0 = c1Ax1 + c2Ax2 + . . . + cnAxn
    = c1λ1x1 + c2λ2x2 + . . . + cnλnxn.

Repeated multiplication of both sides of this equation by A produces

A^k x0 = c1λ1^k x1 + c2λ2^k x2 + . . . + cnλn^k xn

which implies that

A^k x0 = λ1^k [c1x1 + c2(λ2/λ1)^k x2 + . . . + cn(λn/λ1)^k xn].

Now, from the original assumption that λ1 is larger in absolute value than the other
eigenvalues, it follows that each of the fractions

λ2/λ1, λ3/λ1, . . . , λn/λ1

is less than 1 in absolute value. So, as k approaches infinity, each of the factors

(λ2/λ1)^k, (λ3/λ1)^k, . . . , (λn/λ1)^k

must approach 0. This implies that the approximation A^k x0 ≈ λ1^k c1x1, c1 ≠ 0, improves
as k increases. Because x1 is a dominant eigenvector, it follows that any scalar multiple
of x1 is also a dominant eigenvector, which shows that A^k x0 approaches a multiple of
the dominant eigenvector of A.

The proof of Theorem 10.3 provides some insight into the rate of convergence of
the power method. That is, if the eigenvalues of A are ordered so that

|λ1| > |λ2| ≥ |λ3| ≥ . . . ≥ |λn|

then the power method will converge quickly if |λ2|/|λ1| is small, and slowly if
|λ2|/|λ1| is close to 1. This principle is illustrated in Example 5.

The Rate of Convergence of the Power Method

a. The matrix

A = [ 4  5 ]
    [ 6  5 ]

has eigenvalues of λ1 = 10 and λ2 = −1. So the ratio |λ2|/|λ1| is 0.1. For this
matrix, only four iterations are required to obtain successive approximations that
agree when rounded to three significant digits.

      x0      x1      x2      x3      x4
      1.000   0.818   0.835   0.833   0.833
      1.000   1.000   1.000   1.000   1.000

REMARK: The approximations in this example are rounded to three significant
digits. The intermediate calculations, however, are performed with a higher degree
of accuracy to minimize rounding error. You may get slightly different results,
depending on your rounding method or the method used by your technology.

b. The matrix

A = [ −4  10 ]
    [  7   5 ]

has eigenvalues of λ1 = 10 and λ2 = −9. For this matrix, the ratio |λ2|/|λ1| is 0.9,
and the power method does not produce successive approximations that agree to
three significant digits until 68 iterations have been performed.

      x0      x1      x2      ...   x66     x67     x68
      1.000   0.500   0.941   ...   0.715   0.714   0.714
      1.000   1.000   1.000   ...   1.000   1.000   1.000
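The iteration counts in Example 5 can be checked experimentally. The following sketch (ours; the tolerance 5 × 10⁻⁴ is our stand-in for three-digit agreement) counts iterations until successive scaled approximations agree.

```python
# Counting power-method iterations until successive scaled iterates agree.
import numpy as np

def iterations_to_converge(A, x0, tol=5e-4, max_iter=200):
    x = np.array(x0, dtype=float)
    for k in range(1, max_iter + 1):
        y = A @ x
        y = y / np.max(np.abs(y))
        if np.max(np.abs(y - x)) < tol:
            return k
        x = y
    return max_iter

A = np.array([[4.0, 5.0], [6.0, 5.0]])     # eigenvalues 10 and -1, ratio 0.1
B = np.array([[-4.0, 10.0], [7.0, 5.0]])   # eigenvalues 10 and -9, ratio 0.9
print(iterations_to_converge(A, [1.0, 1.0]))  # a handful of iterations
print(iterations_to_converge(B, [1.0, 1.0]))  # dozens of iterations
```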

In this section the power method was used to approximate the dominant eigenvalue
of a matrix. This method can be modified to approximate other eigenvalues through use
of a procedure called deflation. Moreover, the power method is only one of several
techniques that can be used to approximate the eigenvalues of a matrix. Another
popular method is called the QR algorithm.
This is the method used in most programs and calculators for finding eigenvalues and
eigenvectors. The QR algorithm uses the QR-factorization of the matrix, as presented in
Chapter 5. Discussions of the deflation method and the QR algorithm can be found in
most texts on numerical methods.

LINEAR ALGEBRA APPLIED

Internet search engines use web crawlers to continually traverse the Internet,
maintaining a master index of content and hyperlinks on billions of web pages.
By analyzing how much each web page is referenced by other sites, search
results can be ordered based on relevance. Some algorithms do this using
matrices and eigenvectors.
Suppose a search set contains n web sites. Define an n × n matrix A such that
aij = 1 if site i references site j, and aij = 0 if site i does not reference site j
or if i = j. Then scale each row vector of A to sum to 1, and define the n × n
matrix M as the collection of n scaled row vectors. Using the power method on M
produces a dominant eigenvector whose entries are used to determine the page
ranks of the sites in order of importance.

10.3 Exercises

Finding a Dominant Eigenvector In Exercises 1–4, use the techniques presented in
Chapter 7 to find the eigenvalues of the matrix A. If A has a dominant eigenvalue,
find a corresponding dominant eigenvector.

1. A = [ 1  0 ]    2. A = [ 5  2 ]
       [ 3  1 ]           [ 3  6 ]

3. A = [ 1  3  0 ]    4. A = [ 5  0  0 ]
       [ 2  1  2 ]           [ 3  5  0 ]
       [ 1  1  0 ]           [ 4  2  3 ]

Using the Rayleigh Quotient In Exercises 5 and 6, use the Rayleigh quotient to
compute the eigenvalue λ of A corresponding to the eigenvector x.

5. A = [ 4  −5 ],  x = [ 5 ]    6. A = [ −1  2 ],  x = [ 2 ]
       [ 2  −3 ]       [ 2 ]           [  0  8 ]       [ 9 ]

Eigenvectors and Eigenvalues In Exercises 7–10, use the power method with
scaling to approximate a dominant eigenvector of the matrix A. Start with
x0 = [1 1]ᵀ and calculate five iterations. Then use x5 to approximate the dominant
eigenvalue of A.

7. A = [ 2  1 ]    8. A = [ 1   6 ]
       [ 0  7 ]           [ 1  −4 ]

9. A = [ 1  4 ]    10. A = [ 6  3 ]
       [ 2  8 ]            [ 2  1 ]

Eigenvectors and Eigenvalues In Exercises 11–14, use the power method with
scaling to approximate a dominant eigenvector of the matrix A. Start with
x0 = [1 1 1]ᵀ and calculate four iterations. Then use x4 to approximate the
dominant eigenvalue of A.

11. A = [ 3  0  0 ]    12. A = [ 1  2  0 ]
        [ 1  1  0 ]            [ 0  7  1 ]
        [ 0  2  8 ]            [ 0  0  0 ]

13. A = [ 1  0  0 ]    14. A = [ 6  6  0 ]
        [ 2  7  0 ]            [ 0  4  0 ]
        [ 1  2  1 ]            [ 2  1  1 ]

Power Method with Scaling In Exercises 15 and 16, the matrix A does not have a
dominant eigenvalue. Apply the power method with scaling, starting with
x0 = [1 1 1]ᵀ, and observe the results of the first four iterations.

15. A = [ 1   1  0 ]    16. A = [ 1   2   0 ]
        [ 3  −1  0 ]            [ −2  5  −2 ]
        [ 0   0 −2 ]            [ 6   6   3 ]

17. Rates of Convergence
(a) Compute the eigenvalues of

A = [ 1  2 ]  and  B = [ 2  3 ].
    [ 2  1 ]           [ 1  4 ]

(b) Apply four iterations of the power method with scaling to each matrix in
part (a), starting with x0 = [1 2]ᵀ.
(c) Compute the ratios |λ2|/|λ1| for A and B. For which matrix do you expect
faster convergence?

18. Rework Example 2 using the power method with scaling. Compare your answer
with the one found in Example 2.

Writing In Exercises 19 and 20, (a) find the eigenvalues and corresponding
eigenvectors of A, (b) use the initial approximation x0 to calculate two iterations
of the power method with scaling, and (c) explain why the method does not seem to
converge to a dominant eigenvector.

19. A = [ 1  2 ],  x0 = [  1 ]
        [ 4  3 ]        [ −1 ]

20. A = [ 0  2  1 ],  x0 = [  1 ]
        [ 0  1  0 ]        [  1 ]
        [ 0  1  2 ]        [ −1 ]

Other Eigenvalues In Exercises 21 and 22, observe that Ax = λx implies that
A⁻¹x = (1/λ)x. Apply five iterations of the power method (with scaling) on A⁻¹ to
compute the eigenvalue of A with the smallest magnitude.

21. A = [ 2  −12 ]    22. A = [ 2  3  1 ]
        [ 1   −5 ]            [ 0  1  2 ]
                              [ 0  0  3 ]

Another Scaling Technique In Exercises 23 and 24, apply four iterations of the
power method (with scaling) to approximate the dominant eigenvalue of the matrix.
After each iteration, scale the approximation by dividing by the length so that the
resulting approximation will be a unit vector.

23. A = [ 5  6 ]    24. A = [ 7   2  4 ]
        [ 4  3 ]            [ 16  9  6 ]
                            [ 8   4  5 ]

25. Use the proof of Theorem 10.3 to show that

A(A^k x0) ≈ λ1(A^k x0)

for large values of k. That is, show that the scale factors obtained by the power
method approach the dominant eigenvalue.

10.4 Applications of Numerical Methods


Find a second-degree least squares regression polynomial or a
third-degree least squares regression polynomial for a set of data.
Use finite element analysis to determine the probability of an event.
Model population growth using an age transition matrix and use
the power method to find a stable age distribution vector.

APPLICATIONS OF GAUSSIAN ELIMINATION WITH PIVOTING


In Section 2.5, least squares regression analysis was used to find linear mathematical
models that best fit a set of n points in the plane. This procedure can be extended to
cover polynomial models of any degree, as follows.

Regression Analysis for Polynomials

The least squares regression polynomial of degree m for the points
(x1, y1), (x2, y2), . . . , (xn, yn)
is given by

y = am x^m + am−1 x^(m−1) + . . . + a2x² + a1x + a0

where the coefficients are determined by the following system of m + 1 linear
equations.

na0 + (Σxi)a1 + (Σxi²)a2 + . . . + (Σxi^m)am = Σyi
(Σxi)a0 + (Σxi²)a1 + (Σxi³)a2 + . . . + (Σxi^(m+1))am = Σxiyi
(Σxi²)a0 + (Σxi³)a1 + (Σxi⁴)a2 + . . . + (Σxi^(m+2))am = Σxi²yi
. . .
(Σxi^m)a0 + (Σxi^(m+1))a1 + (Σxi^(m+2))a2 + . . . + (Σxi^(2m))am = Σxi^m yi

Note that if m = 1, this system of equations reduces to

na0 + (Σxi)a1 = Σyi
(Σxi)a0 + (Σxi²)a1 = Σxiyi

which has a solution of

a1 = [nΣxiyi − (Σxi)(Σyi)] / [nΣxi² − (Σxi)²]  and  a0 = (Σyi)/n − a1(Σxi)/n.

Exercise 18 asks you to show that this formula is equivalent to the matrix formula for
linear regression that was presented in Section 2.5.
Example 1 illustrates the use of regression analysis to find a second-degree
polynomial model.
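The boxed system of normal equations can be assembled mechanically from the data. A short numpy sketch (ours; the name lsq_poly is illustrative) builds the (m + 1) × (m + 1) system and solves it.

```python
# Build and solve the normal equations for a degree-m least squares polynomial.
import numpy as np

def lsq_poly(xs, ys, m):
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    # S[k] = sum of x_i^k for k = 0..2m; the coefficient matrix is S[i + j].
    S = np.array([np.sum(xs**k) for k in range(2 * m + 1)])
    M = np.array([[S[i + j] for j in range(m + 1)] for i in range(m + 1)])
    t = np.array([np.sum(xs**i * ys) for i in range(m + 1)])
    return np.linalg.solve(M, t)        # returns [a0, a1, ..., am]

# The data of Example 1 below (x = 0 for 1980, ..., x = 6 for 2010):
xs = range(7)
ys = [4.45, 4.84, 5.27, 5.68, 6.07, 6.45, 6.87]
print(lsq_poly(xs, ys, 2))   # approximately [4.4445, 0.4129, -0.0017]
```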

Least Squares Regression Analysis

The world populations in billions for selected years from 1980 to 2010 are shown
below. (Source: U.S. Census Bureau)

Year          1980   1985   1990   1995   2000   2005   2010
Population    4.45   4.84   5.27   5.68   6.07   6.45   6.87

Find the second-degree least squares regression polynomial for these data and use the
resulting model to predict the world populations in 2015 and 2020.
SOLUTION
Begin by letting x = 0 represent 1980, x = 1 represent 1985, and so on. So, the
collection of points is represented by (0, 4.45), (1, 4.84), (2, 5.27), (3, 5.68), (4, 6.07),
(5, 6.45), (6, 6.87), which yields

n = 7,  Σxi = 21,  Σxi² = 91,  Σxi³ = 441,  Σxi⁴ = 2275,
Σyi = 39.63,  Σxiyi = 130.17,  Σxi²yi = 582.73

where each sum runs from i = 1 to 7. So, the system of linear equations giving the
coefficients of the quadratic model y = a2x² + a1x + a0 is

7a0 + 21a1 + 91a2 = 39.63
21a0 + 91a1 + 441a2 = 130.17
91a0 + 441a1 + 2275a2 = 582.73.

Gaussian elimination with pivoting on the matrix

[  7    21    91  |  39.63 ]
[ 21    91   441  | 130.17 ]
[ 91   441  2275  | 582.73 ]

produces

[ 1  4.8462  25    |  6.4036 ]
[ 0  1        6.5  |  0.4020 ]
[ 0  0        1    | −0.0017 ].

So, back-substitution produces the solution

a2 = −0.0017,  a1 = 0.4129,  a0 = 4.4445

and the regression quadratic is

y = −0.0017x² + 0.4129x + 4.4445.

[Figure 10.1: the seven data points, the graph of the regression quadratic, and the
predicted points for 2015 and 2020.]

Figure 10.1 compares this model with the given points. To predict the world population
in 2015, let x = 7, and obtain

y = −0.0017(7²) + 0.4129(7) + 4.4445 ≈ 7.25 billion.

Similarly, the prediction for 2020 (x = 8) is

y = −0.0017(8²) + 0.4129(8) + 4.4445 ≈ 7.64 billion.

Least Squares Regression Analysis

Find the third-degree least squares regression polynomial

y = a3x³ + a2x² + a1x + a0

for the points

(0, 0), (1, 2), (2, 3), (3, 2), (4, 1), (5, 2), (6, 4).

SOLUTION
For this set of points, the linear system

na0 + (Σxi)a1 + (Σxi²)a2 + (Σxi³)a3 = Σyi
(Σxi)a0 + (Σxi²)a1 + (Σxi³)a2 + (Σxi⁴)a3 = Σxiyi
(Σxi²)a0 + (Σxi³)a1 + (Σxi⁴)a2 + (Σxi⁵)a3 = Σxi²yi
(Σxi³)a0 + (Σxi⁴)a1 + (Σxi⁵)a2 + (Σxi⁶)a3 = Σxi³yi

becomes

7a0 + 21a1 + 91a2 + 441a3 = 14
21a0 + 91a1 + 441a2 + 2275a3 = 52
91a0 + 441a1 + 2275a2 + 12,201a3 = 242
441a0 + 2275a1 + 12,201a2 + 67,171a3 = 1258.

Using Gaussian elimination with pivoting on the matrix

[  7     21      91      441    |   14 ]
[ 21     91     441     2275    |   52 ]
[ 91    441    2275   12,201    |  242 ]
[ 441  2275  12,201   67,171    | 1258 ]

produces

[ 1.0000  5.1587  27.6667  152.3152 | 2.8526 ]
[ 0.0000  1.0000   8.5313   58.3482 | 0.6183 ]
[ 0.0000  0.0000   1.0000    9.7714 | 0.1286 ]
[ 0.0000  0.0000   0.0000    1.0000 | 0.1667 ]

which implies

a3 = 0.1667,  a2 = −1.5003,  a1 = 3.6912,  a0 = −0.0718.

So, the cubic model is

y = 0.1667x³ − 1.5003x² + 3.6912x − 0.0718.

[Figure 10.2: the graph of the cubic model together with the seven data points.]

Figure 10.2 compares this model with the given points.

APPLICATIONS OF THE GAUSS-SEIDEL METHOD

An Application to Probability

Figure 10.3 is a diagram of a maze used in a laboratory experiment. [Figure 10.3: a
maze with ten interior intersections, numbered 1 through 10, surrounded by an outer
corridor; the corridor along the bottom of the maze contains food.] The experiment
begins by placing a mouse at one of the ten interior intersections of the maze. Once the
mouse emerges in the outer corridor, it cannot return to the maze. When the mouse is at an
interior intersection, its choice of paths is assumed to be random. What is the probability
that the mouse will emerge in the food corridor when it begins at the ith intersection?

SOLUTION
Let the probability of winning (getting food) when starting at the ith intersection be
represented by pi. Then form a linear equation involving pi and the probabilities
associated with the intersections bordering the ith intersection. For instance, at the first
intersection the mouse has a probability of 1/4 of choosing the upper right path and
losing, a probability of 1/4 of choosing the upper left path and losing, a probability of 1/4
of choosing the lower right path (at which point it has a probability of p3 of winning),
and a probability of 1/4 of choosing the lower left path (at which point it has a probability
of p2 of winning). So

p1 = (1/4)(0) + (1/4)(0) + (1/4)p3 + (1/4)p2.

Using similar reasoning, the other nine probabilities can be represented by the
following equations.

p2 = (1/5)(0) + (1/5)p1 + (1/5)p3 + (1/5)p4 + (1/5)p5
p3 = (1/5)(0) + (1/5)p1 + (1/5)p2 + (1/5)p5 + (1/5)p6
p4 = (1/5)(0) + (1/5)p2 + (1/5)p5 + (1/5)p7 + (1/5)p8
p5 = (1/6)p2 + (1/6)p3 + (1/6)p4 + (1/6)p6 + (1/6)p8 + (1/6)p9
p6 = (1/5)(0) + (1/5)p3 + (1/5)p5 + (1/5)p9 + (1/5)p10
p7 = (1/4)(0) + (1/4)(1) + (1/4)p4 + (1/4)p8
p8 = (1/5)(1) + (1/5)p4 + (1/5)p5 + (1/5)p7 + (1/5)p9
p9 = (1/5)(1) + (1/5)p5 + (1/5)p6 + (1/5)p8 + (1/5)p10
p10 = (1/4)(0) + (1/4)(1) + (1/4)p6 + (1/4)p9

Rewriting these equations in standard form produces the following system of ten linear
equations in ten variables.

4p1 − p2 − p3 = 0
−p1 + 5p2 − p3 − p4 − p5 = 0
−p1 − p2 + 5p3 − p5 − p6 = 0
−p2 + 5p4 − p5 − p7 − p8 = 0
−p2 − p3 − p4 + 6p5 − p6 − p8 − p9 = 0
−p3 − p5 + 5p6 − p9 − p10 = 0
−p4 + 4p7 − p8 = 1
−p4 − p5 − p7 + 5p8 − p9 = 1
−p5 − p6 − p8 + 5p9 − p10 = 1
−p6 − p9 + 4p10 = 1

The augmented matrix for this system is

[  4  −1  −1   0   0   0   0   0   0   0 | 0 ]
[ −1   5  −1  −1  −1   0   0   0   0   0 | 0 ]
[ −1  −1   5   0  −1  −1   0   0   0   0 | 0 ]
[  0  −1   0   5  −1   0  −1  −1   0   0 | 0 ]
[  0  −1  −1  −1   6  −1   0  −1  −1   0 | 0 ]
[  0   0  −1   0  −1   5   0   0  −1  −1 | 0 ]
[  0   0   0  −1   0   0   4  −1   0   0 | 1 ]
[  0   0   0  −1  −1   0  −1   5  −1   0 | 1 ]
[  0   0   0   0  −1  −1   0  −1   5  −1 | 1 ]
[  0   0   0   0   0  −1   0   0  −1   4 | 1 ].

Using the Gauss-Seidel method with an initial approximation of p1 = p2 = . . . = p10 = 0
produces (after 18 iterations) an approximation of

p1 = 0.090,  p2 = 0.180,  p3 = 0.180,  p4 = 0.298,  p5 = 0.333,
p6 = 0.298,  p7 = 0.455,  p8 = 0.522,  p9 = 0.522,  p10 = 0.455.

The structure of the probability problem described in Example 3 is related to a


technique called finite element analysis, which is used in many engineering problems.
Note that the matrix developed in Example 3 has mostly zero entries. Such matrices
are called sparse. For solving systems of equations with sparse coefficient matrices, the
Jacobi and Gauss-Seidel methods are much more efficient than Gaussian elimination.
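As a check (ours, using numpy), solving the 10 × 10 system above directly reproduces the probabilities found by the Gauss-Seidel method.

```python
# Direct verification of the maze probabilities of Example 3.
import numpy as np

M = np.array([
    [ 4,-1,-1, 0, 0, 0, 0, 0, 0, 0],
    [-1, 5,-1,-1,-1, 0, 0, 0, 0, 0],
    [-1,-1, 5, 0,-1,-1, 0, 0, 0, 0],
    [ 0,-1, 0, 5,-1, 0,-1,-1, 0, 0],
    [ 0,-1,-1,-1, 6,-1, 0,-1,-1, 0],
    [ 0, 0,-1, 0,-1, 5, 0, 0,-1,-1],
    [ 0, 0, 0,-1, 0, 0, 4,-1, 0, 0],
    [ 0, 0, 0,-1,-1, 0,-1, 5,-1, 0],
    [ 0, 0, 0, 0,-1,-1, 0,-1, 5,-1],
    [ 0, 0, 0, 0, 0,-1, 0, 0,-1, 4]], dtype=float)
b = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
print(np.linalg.solve(M, b).round(3))
# [0.09  0.18  0.18  0.298 0.333 0.298 0.455 0.522 0.522 0.455]
```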

LINEAR ALGEBRA APPLIED

Finite element analysis is used in engineering to visualize how a car will deform
in a crash. It allows engineers to identify potential problem areas in a car design
prior to manufacturing. The car is modeled by a complex mesh of nodes in order to
analyze the distribution of stresses and displacements. Consider a simple bar with
two nodes. [Figure: a horizontal bar with a node at each end; the left node has
displacements u1 and u2, and the right node has displacements u3 and u4, in the
x- and y-directions.] The nodes defined at the ends of the bar can have
displacements in the x- and y-directions, denoted by u1, u2, u3, and u4, and the
stresses imposed on the bar during displacement result in internal forces F1, F2,
F3, and F4. Each Fi is related to uj by a stiffness coefficient kij. These
coefficients form a 4 × 4 stiffness matrix for the bar. The matrix is used to
determine the displacements of the nodes given a force P.
For complex three-dimensional structures, the mesh includes the material and
structural properties that define how the structure will react to certain conditions.

APPLICATIONS OF THE POWER METHOD


Section 7.4 introduced the idea of an age transition matrix as a model for population
growth. Recall that this model was developed by grouping the population into n age
classes of equal duration. So, for a maximum life span of L years, the age classes are
represented by the intervals listed below.
First age Second age nth age
class class class


0, Ln,
Ln, 2Ln, . . . . ,
n n lL, L
The number of population members in each age class is then represented by the age
distribution vector



x1 Number in first age class
x Number in second age class
x  .2 .
..
xn Number in nth age class
Over a period of L n years, the probability that a member of the ith age class will
survive to become a member of the i  1th age class is given by pi , where
0  pi  1, i  1, 2, . . . , n  1.
The average number of offspring produced by a member of the ith age class is given
by bi , where
0  bi , i  1, 2, . . . , n.
These numbers can be written in matrix form as follows.



A =
b1    b2    b3    . . .   bn−1   bn
p1    0     0     . . .   0      0
0     p2    0     . . .   0      0
.     .     .             .      .
0     0     0     . . .   pn−1   0
Multiplying this age transition matrix by the age distribution vector for a specific time
period produces the age distribution vector for the next time period. That is,
Axi = xi+1.
In Section 7.4 you saw that the growth pattern for a population is stable when the same
percentage of the total population is in each age class each year. That is,
Axi = xi+1 = λxi.
For populations with many age classes, the solution to this eigenvalue problem can be
found with the power method, as illustrated in Example 4.
A Population Growth Model

Assume that a population of human females has the characteristics listed in the
table below. The table shows the age class (in years), the average number of female
children during a 10-year period, and the probability of surviving to the next age
class. Find a stable age distribution vector for this population.

Age Class     Female      Probability
(in years)    Children
[0, 10)       0.000       0.985
[10, 20)      0.174       0.996
[20, 30)      0.782       0.994
[30, 40)      0.263       0.990
[40, 50)      0.022       0.975
[50, 60)      0.000       0.940
[60, 70)      0.000       0.866
[70, 80)      0.000       0.680
[80, 90)      0.000       0.361
[90, 100]     0.000       0.000

SOLUTION
The age transition matrix for this population is

    0.000   0.174   0.782   0.263   0.022   0.000   0.000   0.000   0.000   0.000
    0.985   0       0       0       0       0       0       0       0       0
    0       0.996   0       0       0       0       0       0       0       0
    0       0       0.994   0       0       0       0       0       0       0
A = 0       0       0       0.990   0       0       0       0       0       0
    0       0       0       0       0.975   0       0       0       0       0
    0       0       0       0       0       0.940   0       0       0       0
    0       0       0       0       0       0       0.866   0       0       0
    0       0       0       0       0       0       0       0.680   0       0
    0       0       0       0       0       0       0       0       0.361   0

To apply the power method with scaling to find an eigenvector for this matrix, use
an initial approximation of x0 = [1  1  1  1  1  1  1  1  1  1]^T. An approximation
for an eigenvector of A, with the percentage of each age class in the total
population, is shown below.
Eigenvector x    Age Class     Percentage in Age Class
1.000            [0, 10)       15.27
0.925            [10, 20)      14.13
0.864            [20, 30)      13.20
0.806            [30, 40)      12.31
0.749            [40, 50)      11.44
0.686            [50, 60)      10.48
0.605            [60, 70)       9.24
0.492            [70, 80)       7.51
0.314            [80, 90)       4.80
0.106            [90, 100]      1.62

REMARK
Try duplicating the results of Example 4. Notice that the convergence of the power
method for this problem is very slow. The reason is that the dominant eigenvalue of
λ ≈ 1.065 is only slightly larger in absolute value than the next largest eigenvalue.

The eigenvalue corresponding to the eigenvector x in Example 4 is λ ≈ 1.065. That is,

Ax = A[1.000  0.925  0.864  0.806  0.749  0.686  0.605  0.492  0.314  0.106]^T
   ≈ [1.065  0.985  0.921  0.859  0.798  0.730  0.645  0.524  0.335  0.113]^T
   ≈ 1.065[1.000  0.925  0.864  0.806  0.749  0.686  0.605  0.492  0.314  0.106]^T
   = 1.065x.
This means that the population in Example 4 increases by 6.5% every 10 years.
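Following the REMARK, here is one way to duplicate Example 4 (a minimal Python/NumPy
sketch, not part of the text; the cap of 200 iterations is an arbitrary choice made
generous because convergence is slow):

```python
import numpy as np

# Age transition matrix A of Example 4: birth rates in row 1,
# survival probabilities on the subdiagonal.
births   = [0.000, 0.174, 0.782, 0.263, 0.022, 0, 0, 0, 0, 0]
survival = [0.985, 0.996, 0.994, 0.990, 0.975, 0.940, 0.866, 0.680, 0.361]

A = np.zeros((10, 10))
A[0, :] = births
A[np.arange(1, 10), np.arange(9)] = survival

x = np.ones(10)                     # x0 = [1 1 ... 1]^T
for _ in range(200):                # generous cap; convergence here is slow
    y = A @ x
    x = y / np.max(np.abs(y))       # scaling: make the largest entry 1

eigenvalue = (A @ x) @ x / (x @ x)  # Rayleigh quotient estimate
print(np.round(x, 3))               # ~ [1.000 0.925 0.864 ... 0.314 0.106]
print(round(eigenvalue, 3))         # ~ 1.065
```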
10.4 Exercises
Least Squares Regression Analysis In Exercises 1–4, find the second-degree least
squares regression polynomial for the given data. Then graphically compare the model
with the given points.
1. (−2, 1), (−1, 0), (0, 0), (1, 1), (3, 2)
2. (0, 4), (1, 2), (2, 1), (3, 0), (4, 1), (5, 4)
3. (−2, 1), (−1, 2), (0, 6), (1, 3), (2, 0), (3, 1)
4. (1, 1), (2, 1), (3, 0), (4, 1), (5, 4)

Least Squares Regression Analysis In Exercises 5–10, find the third-degree least
squares regression polynomial for the given data. Then graphically compare the model
with the given points.
5. (0, 0), (1, 2), (2, 4), (3, 1), (4, 0), (5, 1)
6. (1, 1), (2, 4), (3, 4), (5, 1), (6, 2)
7. (−3, 4), (−1, 1), (0, 0), (1, 2), (2, 5)
8. (−7, 2), (−3, 0), (−1, 1), (2, 3), (4, 6)
9. (−1, 0), (0, 3), (1, 2), (2, 0), (3, 2), (4, 3)
10. (0, 0), (2, 10), (4, 12), (6, 0), (8, 8)

11. Decompression Sickness The numbers of minutes a scuba diver can stay at
particular depths without acquiring decompression sickness are shown in the table.
(Source: United States Navy's Standard Air Decompression Tables)

Depth (in feet)      35    40    50    60   70   80   90   100   110
Time (in minutes)    310   200   100   60   50   40   30   25    20

(a) Find the least squares regression line for these data.
(b) Find the second-degree least squares regression polynomial for these data.
(c) Sketch the graphs of the models found in parts (a) and (b).
(d) Use the models found in parts (a) and (b) to approximate the maximum number of
    minutes a diver should stay at a depth of 120 feet. (The value from the Navy's
    tables is 15 minutes.)

12. Health Expenditures The total national health expenditures (in trillions of
dollars) in the United States from 2002 to 2009 are shown in the table. (Source:
U.S. Census Bureau)

Year                   2002    2003    2004    2005    2006    2007    2008    2009
Health Expenditures    1.637   1.772   1.895   2.021   2.152   2.284   2.391   2.486

(a) Find the second-degree least squares regression polynomial for these data.
    Let x = 2 correspond to 2002.
(b) Use the result of part (a) to predict the expenditures for the years 2010
    through 2012.

13. Population The total numbers of people (in millions) in the United States
65 years of age or older in selected years are shown in the table. (Source: U.S.
Census Bureau)

Year               1990   1995   2000   2005   2010
Number of People   29.6   31.7   32.6   35.2   38.6

(a) Find the second-degree least squares regression polynomial for the data.
    Let x = 0 correspond to 1990, x = 1 correspond to 1995, and so on.
(b) Use the result of part (a) to predict the total numbers of people in the United
    States 65 years of age or older in 2015, 2020, and 2025.

14. Mail-Order Prescriptions The total amounts of money (in billions of dollars)
spent on mail-order prescription sales in the United States from 2005 to 2010 are
shown in the table. (Source: U.S. Census Bureau)

Year           2005   2006   2007   2008   2009   2010
Amount Spent   45.5   50.9   52.5   55.4   61.3   62.6

(a) Find the second-degree least squares regression polynomial for the data.
    Let x = 5 correspond to 2005.
(b) Use the result of part (a) to predict the total mail-order sales in 2015
    and 2020.
15. Trigonometry Find the second-degree least squares regression polynomial for
the points
(−π/2, 0), (−π/3, 1/2), (0, 1), (π/3, 1/2), (π/2, 0).
Then use the result to approximate cos(π/4). Compare the approximation with the
exact value.

16. Trigonometry Find the third-degree least squares regression polynomial for
the points
(−π/4, −1), (−π/3, −√3), (0, 0), (π/3, √3), (π/4, 1).
Then use the result to approximate tan(π/6). Compare the approximation with the
exact value.

17. Least Squares Regression Line Find the least squares regression line for the
population data from Example 1. Then use the model to predict the world populations
in 2015 and 2020, and compare the results with the predictions obtained in Example 1.

18. Least Squares Regression Line Show that the formula for the least squares
regression line presented in Section 2.5 is equivalent to the formula presented in
this section. That is, if

    [y1]        [1  x1]
Y = [y2],   X = [1  x2],   A = [a0]
    [ ⋮]        [⋮   ⋮ ]       [a1]
    [yn]        [1  xn]

then the matrix equation A = (X^T X)^(−1) X^T Y is equivalent to

a1 = (n Σxiyi − Σxi Σyi) / (n Σxi² − (Σxi)²)   and   a0 = (Σyi)/n − a1(Σxi)/n.

19. Probability Suppose that the experiment in Example 3 is performed with the
maze shown in the figure. Find the probability that the mouse will emerge in the
food corridor when it begins at the ith intersection.

[Figure: a maze with nine intersections numbered 1–9 in three rows of three, with
the food corridor along the bottom.]

20. Probability Suppose that the experiment in Example 3 is performed with the
maze shown in the figure. Find the probability that the mouse will emerge in the
food corridor when it begins at the ith intersection.

[Figure: a maze with six intersections numbered 1–6 in two rows of three, with the
food corridor along the bottom.]

21. Temperature A square metal plate has a constant temperature on each of its
four boundaries, as shown in the figure. Use a 4 × 4 grid to approximate the
temperature distribution in the interior of the plate. Assume that the temperature
at each interior point is the average of the temperatures at the four closest
neighboring points.

[Figure: a square plate with boundary temperatures 100 on the top and left edges
and 0 on the right and bottom edges; the nine interior grid points are numbered 1–9.]

22. Temperature A rectangular metal plate has a constant temperature on each of
its four boundaries, as shown in the figure. Use a 4 × 5 grid to approximate the
temperature distribution in the interior of the plate. Assume that the temperature
at each interior point is the average of the temperatures at the four closest
neighboring points.

[Figure: a rectangular plate with boundary temperatures 120 on the top, 80 on the
left, 40 on the right, and 0 on the bottom; the twelve interior grid points are
numbered 1–12 in three rows of four.]
Stable Age Distribution In Exercises 23–28, the matrix represents the age
transition matrix for a population. Use the power method with scaling to find a
stable age distribution vector.

23. A = [1    4]
        [1/2  0]

24. A = [1    2]
        [1/4  0]

25. A = [0    1    2]
        [1/2  0    0]
        [0    1/4  0]

26. A = [1    2    2]
        [1/3  0    0]
        [0    1/3  0]

27. A = [1    4    2]
        [3/4  0    0]
        [0    1/4  0]

28. A = [0    2    2]
        [1/2  0    0]
        [0    1    0]

29. Population Growth Model The age transition matrix for a laboratory population
of rabbits is

A = [0    7    10]
    [0.5  0    0 ]
    [0    0.5  0 ].

Find a stable age distribution vector for this population.

30. Population Growth Model A population has the following characteristics.
(a) A total of 75% of the population survives its first year. Of that 75%, 25%
    survives its second year. The maximum life span is 3 years.
(b) The average numbers of offspring for each member of the population are 2 the
    first year, 4 the second year, and 2 the third year.
Find a stable age distribution vector for this population. (See Exercise 13,
Section 7.4.)

31. Population Growth Model A population has the following characteristics.
(a) A total of 80% of the population survives the first year. Of that 80%, 25%
    survives the second year. The maximum life span is 3 years.
(b) The average number of offspring for each member of the population is 3 the
    first year, 6 the second year, and 3 the third year.
Find a stable age distribution vector for this population. (See Exercise 14,
Section 7.4.)

32. Fibonacci Sequence Apply the power method to the matrix

A = [1  1]
    [1  0]

discussed in Chapter 7 (Fibonacci sequence). Use the power method to approximate
the dominant eigenvalue of A.

33. Writing In Example 2 in Section 2.5, the stochastic matrix

P = [0.70  0.15  0.15]
    [0.20  0.80  0.15]
    [0.10  0.05  0.70]

represents the transition probabilities for a consumer preference model. Use the
power method to approximate a dominant eigenvector for this matrix. How does the
approximation relate to the steady state matrix described in the discussion
following Example 3 in Section 2.5?

34. Smokers and Nonsmokers In Exercise 7 in Section 2.5, a population of 10,000
is divided into nonsmokers, smokers of one pack or less per day, and smokers of
more than one pack per day. Use the power method to approximate a dominant
eigenvector for this matrix.

35. Television Watching In Exercise 8 in Section 2.5, a college dormitory of
200 students is divided into two groups according to how much television they
watch on a given day. Students in the first group watch an hour or more, and
students in the second group watch less than an hour. Use the power method to
approximate a dominant eigenvector for this matrix.

36. Explain each of the following.
(a) How to use Gaussian elimination with pivoting to find a least squares
    regression polynomial
(b) How to use finite element analysis to determine the probability of an event
(c) How to model population growth with an age distribution matrix and use the
    power method to find a stable age distribution vector
10 Review Exercises
Floating Point Form In Exercises 1–6, express the real number in floating
point form.
1. 528.6        2. 475.2
3. 4.85         4. 22.5
5. 1/32         6. 478

Finding the Stored Value In Exercises 7–12, determine the stored value of the
real number in a computer that (a) rounds to three significant digits, and
(b) rounds to four significant digits.
7. 25.2         8. 41.2
9. 250.231      10. 628.742
11. 5/12        12. 3/16

Rounding Error In Exercises 13 and 14, evaluate the determinant of the matrix,
rounding each intermediate step to three significant digits. Compare the rounded
solution with the exact solution.

13. [12.5    2.5 ]      14. [8.5     3.2]
    [20.24   6.25]          [10.25   8.1]

Gaussian Elimination In Exercises 15 and 16, use Gaussian elimination to solve
the system of linear equations. After each intermediate step, round the result to
three significant digits. Compare the rounded solution with the exact solution.
15. 2.53x + 8.5y = 29.65
    2.33x + 16.1y = 43.85
16. 12.5x + 18.2y = 56.8
    3.2x + 15.1y = 4.1

Partial Pivoting In Exercises 17 and 18, use Gaussian elimination without partial
pivoting to solve the system of linear equations, rounding each intermediate step
to three significant digits. Then use Gaussian elimination with partial pivoting to
solve the same system, again rounding each intermediate step to three significant
digits. Finally, compare both solutions with the exact solution provided.
17. 2.15x + 7.25y = 13.7
    3.12x + 6.3y = 15.66
    Exact solution: x = 3, y = 1
18. 4.25x + 6.3y = 16.85
    6.32x + 2.14y = 10.6
    Exact solution: x = 1, y = 2

Ill-Conditioned Systems In Exercises 19 and 20, use Gaussian elimination to solve
the ill-conditioned system of linear equations, rounding each intermediate
calculation to three significant digits. Compare the solution with the exact
solution provided.
19. x − y = −1
    x − (999/1000)y = 4001/1000
    Exact solution: x = 5000, y = 5001
20. x − y = −1
    (99/100)x − y = −20,101/100
    Exact solution: x = 20,001, y = 20,002

The Jacobi Method In Exercises 21 and 22, apply the Jacobi method to the system
of linear equations, using the initial approximation
(x1, x2, x3, . . . , xn) = (0, 0, 0, . . . , 0).
Continue performing iterations until two successive approximations are identical
when rounded to three significant digits.
21. 2x1 − x2 = 1        22. −x1 + 4x2 = 3
    −x1 + 2x2 = 7           2x1 − x2 = 1

The Gauss-Seidel Method In Exercises 23 and 24, apply the Gauss-Seidel method to
the system from the indicated exercise.
23. Exercise 21         24. Exercise 22

Strictly Diagonally Dominant Matrices In Exercises 25–28, determine whether the
matrix is strictly diagonally dominant.

25. [4  2]      26. [2  0]
    [1  3]          [1  3]

27. [4    0    2]       28. [4  2  1]
    [10   12   2]           [0  2  1]
    [1    2    0]           [1  1  1]

Interchanging Rows In Exercises 29–32, interchange the rows of the system of
linear equations to obtain a system with a strictly diagonally dominant matrix.
Then apply the Gauss-Seidel method to approximate the solution to four
significant digits.
29. x1 + 2x2 = 5        30. x1 + 4x2 = 4
    5x1 + x2 = 8            2x1 + x2 = 6
31. 2x1 + 4x2 + x3 = 2      32. x1 + 3x2 + x3 = 2
    4x1 + x2 + x3 = 1           x1 + x2 + 3x3 = 1
    x1 + x2 + 4x3 = 2           3x1 + x2 + x3 = 1
Finding Eigenvalues In Exercises 33–36, use the techniques presented in Chapter 7
to find the eigenvalues of the matrix A. If A has a dominant eigenvalue, find a
corresponding dominant eigenvector.

33. A = [1  0]      34. A = [1  1]
        [1  4]              [2  1]

35. A = [2  2  3]   36. A = [1  2  3]
        [2  1  6]           [0  5  1]
        [1  2  0]           [0  0  4]

Using the Rayleigh Quotient In Exercises 37–42, use the Rayleigh quotient to
compute the eigenvalue λ of the matrix A corresponding to the eigenvector x.

37. A = [2   1],   x = [1]
        [5  −2]        [1]

38. A = [6  3],    x = [3]
        [2  1]         [1]

39. A = [2  0  1],   x = [0]
        [0  3  4]        [1]
        [0  0  1]        [0]

40. A = [ 1  2  −2],   x = [1]
        [−2  5  −2]        [1]
        [−6  6  −3]        [3]

41. A = [ 0  1  −1],   x = [1]
        [ 2  4  −2]        [5]
        [−1  1   0]        [1]

42. A = [ 3  2  −3],   x = [3]
        [ 3  4  −9]        [0]
        [−1  2   5]        [1]

Approximating Eigenvectors and Eigenvalues In Exercises 43–46, use the power
method with scaling to approximate a dominant eigenvector of the matrix A. Start
with x0 = [1  1]^T and calculate four iterations. Then use x4 to approximate the
dominant eigenvalue of A.

43. A = [7  2]      44. A = [10  3]
        [2  4]              [5   2]

45. A = [2  0]      46. A = [1  0]
        [4  3]              [6  3]

Least Squares Regression Analysis In Exercises 47 and 48, find the second-degree
least squares regression polynomial for the data. Then use a software program or a
graphing utility with regression features to find a second-degree regression
polynomial. Compare the results.
47. (−2, 0), (−1, 2), (0, 3), (1, 2), and (3, 0)
48. (−2, 2), (−1, 1), (0, 1), (1, 1), and (3, 0)

49. Hospital Care The table shows the amounts spent for hospital care (in billions
of dollars) in the United States from 2005 through 2009. (Source: U.S. Centers for
Medicare and Medicaid Services)

Year           2005    2006    2007    2008    2009
Amount Spent   606.5   648.3   686.8   722.1   759.1

Find the second-degree least squares regression polynomial for the data. Let x = 5
correspond to 2005. Then use a software program or a graphing utility with
regression features to find a second-degree regression polynomial. Compare
the results.

50. Revenue The table shows the total yearly revenues (in billions of dollars)
for golf courses and country clubs in the United States from 2005 through 2009.
(Source: U.S. Census Bureau)

Year      2005   2006   2007   2008   2009
Revenue   19.4   20.5   21.2   21.0   20.36

Find the second-degree least squares regression polynomial for the data. Let x = 5
correspond to 2005. Then use a software program or a graphing utility with
regression features to find a second-degree regression polynomial. Compare
the results.

51. Baseball Salaries The table shows the average salaries (in millions of
dollars) of Major League Baseball players on opening day of baseball season from
2006 through 2011. (Source: Major League Baseball)

Year             2006    2007    2008    2009    2010    2011
Average Salary   2.867   2.945   3.155   3.240   3.298   3.305

Find the third-degree least squares regression polynomial for the data. Let x = 6
correspond to 2006. Then use a software program or a graphing utility with
regression features to find a third-degree regression polynomial. Compare
the results.

52. Dormitory Costs The table shows the average costs (in dollars) of a college
dormitory room from 2006 through 2010. (Source: Digest of Education Statistics)

Year   2006   2007   2008   2009   2010
Cost   3804   4019   4214   4446   4657

Find the third-degree least squares regression polynomial for the data. Let x = 6
correspond to 2006. Then use a software program or a graphing utility with
regression features to find a third-degree regression polynomial. Compare
the results.
10 Project
The populations (in millions) of the United States by decade from 1900 to 2010 are
shown in the table. (Source: U.S. Census Bureau)

Year 1900 1910 1920 1930 1940 1950


Population 76.2 92.2 106.0 123.2 132.2 151.3

Year 1960 1970 1980 1990 2000 2010


Population 179.3 203.3 226.5 248.7 281.4 308.7

Use a software program or a graphing utility to create a scatter plot of the data.
Let x = 0 correspond to 1900, x = 1 correspond to 1910, and so on.
Using the scatter plot, describe any patterns in the data. Do the data appear to be
linear, quadratic, or cubic in nature? Explain.
Use the techniques presented in this chapter to find
(a) a linear least squares regression equation.
(b) a second-degree least squares regression equation.
(c) a cubic least squares regression equation.
Graph each equation with the data. Briefly describe which of the regression
equations best fits the data.
Use each model to predict the populations of the United States for the years 2020,
2030, 2040, and 2050. Which, if any, of the regression equations appears to be the
best model for predicting future populations? Explain your reasoning.
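One way to carry out the fitting and prediction steps above is sketched below.
Python with NumPy is assumed here as the "software program"; the project does not
prescribe any particular tool, and any utility with regression features works
equally well.

```python
import numpy as np

# U.S. population (in millions) by decade; x = 0 corresponds to 1900.
x = np.arange(12)
y = np.array([76.2, 92.2, 106.0, 123.2, 132.2, 151.3,
              179.3, 203.3, 226.5, 248.7, 281.4, 308.7])

for degree in (1, 2, 3):                # parts (a), (b), and (c)
    poly = np.poly1d(np.polyfit(x, y, degree))
    future = poly(np.arange(12, 16))    # predictions for 2020, 2030, 2040, 2050
    print(degree, np.round(future, 1))
```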
The 2012 Statistical Abstract projected the populations of the United States for the
same years, as shown in the table below. Do any of your models produce the same
projections? Explain any possible differences between your projections and the
Statistical Abstract projections.

Year 2020 2030 2040 2050


Population 341.4 373.5 405.7 439.0