10 Numerical Methods
10.1 Gaussian Elimination with Partial Pivoting
10.2 Iterative Methods for Solving Linear Systems
10.3 Power Method for Approximating Eigenvalues
10.4 Applications of Numerical Methods
Laboratory Experiment
Probabilities
where k is an integer and the mantissa M satisfies the inequality 0.1 ≤ M < 1.
For instance, the floating point forms of some real numbers are as follows.
Real Number      Floating Point Form
527              0.527 × 10^3
3.81623          0.381623 × 10^1
0.00045          0.45 × 10^-3
The number of decimal places that can be stored in the mantissa depends on the
computer. If n places are stored, then it is said that the computer stores n significant
digits. Additional digits are either truncated or rounded off. When a number is truncated
to n significant digits, all digits after the first n significant digits are simply omitted. For
instance, truncated to two significant digits, the number 0.1251 becomes 0.12.
When a number is rounded to n significant digits, the last retained digit is
increased by 1 if the discarded portion is greater than half a digit, and the last retained
digit is not changed if the discarded portion is less than half a digit. For the
special case in which the discarded portion is precisely half a digit, round so that the
last retained digit is even. For instance, the following numbers have been rounded to
two significant digits.
Number Rounded Number
0.1249 0.12
0.125 0.12
0.1251 0.13
0.1349 0.13
0.135 0.14
0.1351 0.14
Most computers store numbers in binary form (base 2) rather than decimal form
(base 10). Although rounding occurs in both systems, this discussion will be restricted
to the more familiar base 10. Whenever a computer truncates or rounds a number,
a rounding error that can affect subsequent calculations is introduced. The result after
rounding or truncating is called the stored value.
9781133110873_1001.qxp 3/10/12 7:04 AM Page 489
Determine the stored value of each of the real numbers listed below in a computer that
rounds to three significant digits.
a. 54.7 b. 0.1134 c. 8.2256 d. 0.08335 e. 0.08345
SOLUTION
Number        Floating Point Form    Stored Value
a. 54.7       0.547 × 10^2           0.547 × 10^2
b. 0.1134     0.1134 × 10^0          0.113 × 10^0
c. 8.2256     0.82256 × 10^1         0.823 × 10^1
d. 0.08335    0.8335 × 10^-1         0.834 × 10^-1
e. 0.08345    0.8345 × 10^-1         0.834 × 10^-1
Note in parts (d) and (e) that when the discarded portion of a decimal is precisely half
a digit, the number is rounded so that the stored value ends in an even digit.
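The storage rule just illustrated (round to n significant digits, with ties going to an even final digit) can be modeled in Python. This is an illustrative sketch rather than anything from the text; the function name is mine, and the decimal module is used so that the tie-breaking is not disturbed by binary-float artifacts:

```python
from decimal import Decimal, ROUND_HALF_EVEN

def stored_value(x, n):
    """Round x to n significant digits, rounding ties to an even last digit."""
    d = Decimal(str(x))
    if d == 0:
        return 0.0
    # exponent of the last digit that is kept
    last_kept = d.adjusted() - (n - 1)
    return float(d.quantize(Decimal(1).scaleb(last_kept),
                            rounding=ROUND_HALF_EVEN))

# reproduces parts (d) and (e) of the example above
print(stored_value(0.08335, 3))  # 0.0834
print(stored_value(0.08345, 3))  # 0.0834
```

Both calls return 0.0834: the discarded portion is exactly half a digit, so the retained digit is forced to be even in each case.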
Evaluate the determinant of the matrix

A = [ 0.12   0.23 ]
    [ 0.12   0.12 ]

rounding each intermediate calculation to two significant digits. Then find the exact
solution and compare the two results.
SOLUTION
Rounding each intermediate calculation to two significant digits produces
|A| = (0.12)(0.12) - (0.23)(0.12)
    = 0.0144 - 0.0276
    ≈ 0.014 - 0.028      Round to two significant digits.
    = -0.014.

The exact solution is |A| = 0.0144 - 0.0276 = -0.0132. So, to two significant
digits, the correct solution is -0.013. Note that the rounded solution is not correct to
two significant digits, even though each arithmetic operation was performed with two
significant digits of accuracy. This is what is meant when it is said that arithmetic
operations tend to propagate rounding error.
[ 0.143    0.357    2.01   |  -5.17 ]
[ -1.31    0.911    1.99   |  -5.46 ]
[ 11.2    -4.30    -0.605  |   4.42 ]
Dividing the first row by 0.143 produces a new first row.

[ 1.00     2.50     14.1   |  -36.2 ]
[ -1.31    0.911    1.99   |  -5.46 ]
[ 11.2    -4.30    -0.605  |   4.42 ]

Adding 1.31 times the first row to the second row produces a new second row.

[ 1.00     2.50     14.1   |  -36.2 ]
[ 0.00     4.19     20.5   |  -52.9 ]
[ 11.2    -4.30    -0.605  |   4.42 ]

Adding -11.2 times the first row to the third row produces a new third row.

[ 1.00     2.50     14.1   |  -36.2 ]
[ 0.00     4.19     20.5   |  -52.9 ]
[ 0.00    -32.3    -159    |   409  ]

Dividing the second row by 4.19 produces a new second row.

[ 1.00     2.50     14.1   |  -36.2 ]
[ 0.00     1.00     4.89   |  -12.6 ]
[ 0.00    -32.3    -159    |   409  ]

Adding 32.3 times the second row to the third row produces a new third row.

[ 1.00     2.50     14.1   |  -36.2 ]
[ 0.00     1.00     4.89   |  -12.6 ]
[ 0.00     0.00    -1.00   |   2.00 ]

Multiplying the third row by -1 produces a new third row.

[ 1.00     2.50     14.1   |  -36.2 ]
[ 0.00     1.00     4.89   |  -12.6 ]
[ 0.00     0.00     1.00   |  -2.00 ]
So, x3 = -2.00, and using back-substitution, you can obtain x2 = -2.82 and
x1 = -0.900. Try checking this solution in the original system of equations to see
that it is not correct. The correct solution is x1 = 1, x2 = 2, and x3 = -3.
What went wrong with the Gaussian elimination procedure used in Example 3?
Clearly, rounding error propagated to such an extent that the final solution became
hopelessly inaccurate.
Part of the problem is that the original augmented matrix contains entries that
differ in orders of magnitude. For instance, the first column of the matrix
[ 0.143    0.357    2.01   |  -5.17 ]
[ -1.31    0.911    1.99   |  -5.46 ]
[ 11.2    -4.30    -0.605  |   4.42 ]
has entries that increase roughly by powers of 10 as one moves down the column. In
subsequent elementary row operations, the first row was multiplied by 1.31 and -11.2
and the second row was multiplied by 32.3. When floating point arithmetic is used, such
large row multipliers tend to propagate rounding error. For instance, notice what
happened during Gaussian elimination when -11.2 times the first row was
added to the third row:

[ 1.00     2.50     14.1   |  -36.2 ]
[ 0.00     4.19     20.5   |  -52.9 ]
[ 11.2    -4.30    -0.605  |   4.42 ]

Adding -11.2 times the first row to the third row produces a new third row.

[ 1.00     2.50     14.1   |  -36.2 ]
[ 0.00     4.19     20.5   |  -52.9 ]
[ 0.00    -32.3    -159    |   409  ]

The third entry in the third row changed from -0.605 to -159; it lost three decimal
places of accuracy.
This type of error propagation can be lessened by appropriate row interchanges that
produce smaller multipliers. One method of restricting the size of the multipliers is
called Gaussian elimination with partial pivoting.
Example 4 on the next page shows what happens when this partial pivoting
technique is used on the system of linear equations from Example 3.
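The strategy can be sketched as a short routine: at each elimination step, search the working column for the entry of largest absolute value, swap that row into the pivot position, eliminate below it, and finish with back-substitution. This sketch (names mine) uses ordinary floating point arithmetic rather than the three-significant-digit rounding of the examples:

```python
def solve_partial_pivoting(A, b):
    """Gaussian elimination with partial pivoting, then back-substitution."""
    n = len(A)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]   # augmented matrix
    for k in range(n):
        # pivot: the row with the largest absolute entry in column k
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]        # row multiplier; |m| <= 1 by pivoting
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# coefficient matrix of Example 3; right-hand sides chosen so the
# exact solution is (1, 2, -3)
x = solve_partial_pivoting(
    [[0.143, 0.357, 2.01], [-1.31, 0.911, 1.99], [11.2, -4.30, -0.605]],
    [-5.173, -5.458, 4.415])
```

Because the pivot search keeps every row multiplier at most 1 in absolute value, the computed solution agrees with (1, 2, -3) up to roundoff.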
Use Gaussian elimination with partial pivoting to solve the system of linear equations
from Example 3. After each intermediate calculation, round the result to three
significant digits.
SOLUTION
As in Example 3, the augmented matrix for this system is
[ 0.143    0.357    2.01   |  -5.17 ]
[ -1.31    0.911    1.99   |  -5.46 ].
[ 11.2    -4.30    -0.605  |   4.42 ]
In the first column, 11.2 is the pivot because it has the largest absolute value. So,
interchange the first and third rows and apply elementary row operations, as follows.

Interchange the first and third rows.

[ 11.2    -4.30    -0.605  |   4.42 ]
[ -1.31    0.911    1.99   |  -5.46 ]
[ 0.143    0.357    2.01   |  -5.17 ]

Dividing the first row by 11.2 produces a new first row.

[ 1.00    -0.384   -0.0540 |   0.395 ]
[ -1.31    0.911    1.99   |  -5.46  ]
[ 0.143    0.357    2.01   |  -5.17  ]

Adding 1.31 times the first row to the second row produces a new second row.

[ 1.00    -0.384   -0.0540 |   0.395 ]
[ 0.00     0.408    1.92   |  -4.94  ]
[ 0.143    0.357    2.01   |  -5.17  ]

Adding -0.143 times the first row to the third row produces a new third row.

[ 1.00    -0.384   -0.0540 |   0.395 ]
[ 0.00     0.408    1.92   |  -4.94  ]
[ 0.00     0.412    2.02   |  -5.23  ]
This completes the first pass. For the second pass, consider the submatrix formed by
deleting the first row and first column. In this matrix the pivot is 0.412, so interchange
the second and third rows and proceed with Gaussian elimination, as follows.

Interchange the second and third rows.

[ 1.00    -0.384   -0.0540 |   0.395 ]
[ 0.00     0.412    2.02   |  -5.23  ]
[ 0.00     0.408    1.92   |  -4.94  ]

Dividing the second row by 0.412 produces a new second row.

[ 1.00    -0.384   -0.0540 |   0.395 ]
[ 0.00     1.00     4.90   |  -12.7  ]
[ 0.00     0.408    1.92   |  -4.94  ]

Adding -0.408 times the second row to the third row produces a new third row.

[ 1.00    -0.384   -0.0540 |   0.395 ]
[ 0.00     1.00     4.90   |  -12.7  ]
[ 0.00     0.00    -0.0800 |   0.240 ]

REMARK: Note that the row multipliers used in Example 4 are 1.31, -0.143, and
-0.408, as contrasted with the multipliers of 1.31, -11.2, and 32.3 encountered in
Example 3.

This completes the second pass, and the entire procedure can be completed by dividing
the third row by -0.0800, as follows.

[ 1.00    -0.384   -0.0540 |   0.395 ]
[ 0.00     1.00     4.90   |  -12.7  ]
[ 0.00     0.00     1.00   |  -3.00  ]

So, x3 = -3.00, and back-substitution produces x2 = 2.00 and x1 = 1.00, which
agrees with the exact solution of x1 = 1, x2 = 2, and x3 = -3.
The term partial in partial pivoting refers to the fact that in each pivot search, only
entries in the first column of the matrix or submatrix are considered. This search can be
extended to include every entry in the coefficient matrix or submatrix; the resulting
technique is called Gaussian elimination with complete pivoting. Unfortunately,
neither complete pivoting nor partial pivoting solves all problems of rounding error.
Some systems of linear equations, called ill-conditioned systems, are extremely
sensitive to numerical errors. For such systems, pivoting is not much help. A common
type of system of linear equations that tends to be ill-conditioned is one for which the
determinant of the coefficient matrix is nearly zero. The next example illustrates
this problem.
Rounding 1.0025 to 1.002, the augmented matrix for the system is

[ 1   1      |  0  ]
[ 1   1.002  |  20 ]

Adding -1 times the first row to the second row produces

[ 1   1      |  0  ]
[ 0   0.002  |  20 ]

Dividing the second row by 0.002 produces

[ 1   1      |  0      ]
[ 0   1.00   |  10,000 ]

So, y = 10,000 and back-substitution produces

x = -y = -10,000.
This solution represents a percentage error of 25% for both the x-value and the
y-value. Note that this error was caused by a rounding error of only 0.0005 (when
1.0025 was rounded to 1.002).
10.1 Exercises
Floating Point Form  In Exercises 1-8, express the real number in floating point form.

1. 4281        2. 321.61      3. 2.62        4. 21.001
5. 0.00121     6. 0.00026     7. 1/8         8. 16 1/2

Finding the Stored Value  In Exercises 9-16, determine the stored value of the real
number in a computer that rounds to (a) three significant digits and (b) four
significant digits.

9. 331         10. 21.4       11. 92.646     12. 216.964
13. 7/16       14. 5/32       15. 1/7        16. 1/6

Rounding Error  In Exercises 17 and 18, evaluate the determinant of the matrix,
rounding each intermediate calculation to three significant digits. Then compare the
rounded value with the exact solution.

17. [ 1.24    56.00 ]       18. [ 2.12   4.22 ]
    [ 66.00    1.02 ]           [ 1.07   2.12 ]

Gaussian Elimination  In Exercises 19 and 20, use Gaussian elimination to solve the
system of linear equations. After each intermediate calculation, round the result to
three significant digits. Then compare this solution with the exact solution.

19. 1.21x + 16.7y = 28.8        20. 14.4x + 17.1y = 31.5
    4.66x + 64.4y = 111.0           81.6x + 97.4y = 179.0

Partial Pivoting  In Exercises 21-24, (a) use Gaussian elimination without partial
pivoting to solve the system, rounding to three significant digits after each
intermediate calculation, (b) use partial pivoting to solve the same system, again
rounding to three significant digits after each intermediate calculation, and
(c) compare both solutions with the exact solution provided.

21. 6x + 6.04y = 12.04
    6x + 6.20y = 12.20
    Exact: x = 1, y = 1

22. 0.51x + 992.6y = 997.7
    99.00x - 449.0y = 541.0
    Exact: x = 10, y = 1

23. x + 4.01y + 0.00445z = 0.00

Solving an Ill-Conditioned System  In Exercises 25 and 26, use Gaussian elimination
to solve the ill-conditioned system of linear equations, rounding each intermediate
calculation to three significant digits. Then compare this solution with the exact
solution provided.

25. x - y = 2
    x - (600/601)y = 20
    Exact: x = 10,820, y = 10,818

26. x - (800/801)y = 10
    x - y = -50
    Exact: x = 48,010, y = 48,060

27. Comparing Ill-Conditioned Systems  Solve the following ill-conditioned systems
and compare their solutions.
(a) x + y = 2               (b) x + y = 2
    x + 1.0001y = 2             x + 1.0001y = 2.0001

28. By inspection, determine whether the following system is more likely to be
solvable by Gaussian elimination with partial pivoting, or is more likely to be
ill-conditioned. Explain your reasoning.
    0.216x1 + 0.044x2 + 8.15x3 = 7.762
    2.05x1 + 1.02x2 + 0.937x3 = 4.183
    23.1x1 + 6.05x2 + 0.021x3 = 52.271

29. The Hilbert matrix of size n × n is the n × n symmetric matrix Hn = [aij], where
aij = 1/(i + j - 1). As n increases, the Hilbert matrix becomes more and more
ill-conditioned. Use Gaussian elimination to solve the system of linear equations
shown below, rounding to two significant digits after each intermediate calculation.
Compare this solution with the exact solution x1 = 3, x2 = -24, and x3 = 30.
    x1 + (1/2)x2 + (1/3)x3 = 1
    (1/2)x1 + (1/3)x2 + (1/4)x3 = 1
    (1/3)x1 + (1/4)x2 + (1/5)x3 = 1
and substitute these values of xi on the right-hand sides of the rewritten equations
to obtain the first approximation. After this procedure has been completed, one
iteration has been performed. In the same way, the second approximation is formed by
substituting the first approximation's x-values on the right-hand sides of the rewritten
equations. By repeated iterations, you will form a sequence of approximations that
often converges to the actual solution. This procedure is illustrated in Example 1.
Use the Jacobi method to approximate the solution of the following system of linear
equations.
5x1 - 2x2 + 3x3 = -1
-3x1 + 9x2 + x3 = 2
2x1 - x2 - 7x3 = 3
Continue the iterations until two successive approximations are identical when rounded
to three significant digits.
SOLUTION
To begin, write the system in the form
x1 = -1/5 + (2/5)x2 - (3/5)x3
x2 = 2/9 + (3/9)x1 - (1/9)x3
x3 = -3/7 + (2/7)x1 - (1/7)x2.

Because you do not know the actual solution, choose

x1 = 0,  x2 = 0,  x3 = 0     Initial approximation
n    0       1        2        3        4        5        6        7
x1   0.000   -0.200   0.146    0.191    0.181    0.185    0.186    0.186
x2   0.000   0.222    0.203    0.328    0.332    0.329    0.331    0.331
x3   0.000   -0.429   -0.517   -0.416   -0.421   -0.424   -0.423   -0.423
Because the last two columns in the table are identical, you can conclude that to three
significant digits the solution is
x1 = 0.186,  x2 = 0.331,  x3 = -0.423.
For the system of linear equations in Example 1, the Jacobi method is said to
converge. That is, repeated iterations succeed in producing an approximation that is
correct to three significant digits. As is generally true for iterative methods, greater
accuracy would require more iterations.
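The iteration just described can be sketched compactly in Python (the function name is mine): solve the ith equation for xi, then evaluate all n right-hand sides at the previous approximation.

```python
def jacobi(A, b, x0, tol=1e-10, max_iter=200):
    """Jacobi method: every component of the new approximation is
    computed from the *previous* approximation."""
    n = len(A)
    x = list(x0)
    for _ in range(max_iter):
        x_new = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
        if max(abs(u - v) for u, v in zip(x_new, x)) < tol:
            return x_new
        x = x_new
    return x

# the system of Example 1
x = jacobi([[5, -2, 3], [-3, 9, 1], [2, -1, -7]], [-1, 2, 3], [0, 0, 0])
```

Rounded to three significant digits, the result is (0.186, 0.331, -0.423), matching the table above.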
Use the Gauss-Seidel iteration method to approximate the solution to the system of
equations in Example 1.
SOLUTION
The first computation is identical to that in Example 1. That is, using
(x1, x2, x3) = (0, 0, 0) as the initial approximation, you obtain the new value of x1.

x1 = -1/5 + (2/5)(0) - (3/5)(0) = -0.200

Now that you have a new value of x1, use it to compute a new value of x2. That is,

x2 = 2/9 + (3/9)(-0.200) - (1/9)(0) ≈ 0.156.

Similarly, use x1 = -0.200 and x2 = 0.156 to compute a new value of x3. That is,

x3 = -3/7 + (2/7)(-0.200) - (1/7)(0.156) ≈ -0.508.

So, the first approximation is x1 = -0.200, x2 = 0.156, and x3 = -0.508. Now
performing the next iteration produces

x1 = -1/5 + (2/5)(0.156) - (3/5)(-0.508) ≈ 0.167
x2 = 2/9 + (3/9)(0.167) - (1/9)(-0.508) ≈ 0.334.
n    0       1        2        3        4        5        6
x1   0.000   -0.200   0.167    0.191    0.187    0.186    0.186
x2   0.000   0.156    0.334    0.334    0.331    0.331    0.331
x3   0.000   -0.508   -0.429   -0.422   -0.422   -0.423   -0.423
Note that after only six iterations of the Gauss-Seidel method, you achieved
the same accuracy as was obtained with seven iterations of the Jacobi method in
Example 1.
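The only change from the Jacobi sketch above is that each newly computed component is used immediately within the same sweep. A hedged sketch (names mine):

```python
def gauss_seidel(A, b, x0, tol=1e-10, max_iter=200):
    """Gauss-Seidel method: new values are used as soon as they exist."""
    n = len(A)
    x = list(x0)
    for _ in range(max_iter):
        largest_change = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (b[i] - s) / A[i][i]
            largest_change = max(largest_change, abs(new - x[i]))
            x[i] = new   # used immediately by the remaining equations
        if largest_change < tol:
            break
    return x
```

Applied to the system of Example 1, this converges to the same three-digit solution (0.186, 0.331, -0.423), typically in fewer sweeps than the Jacobi version.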
An Example of Divergence

Apply the Jacobi method to the system

x1 - 5x2 = -4
7x1 - x2 = 6

solving the first equation for x1 and the second for x2:

x1 = -4 + 5x2
x2 = -6 + 7x1.

Using the initial approximation (x1, x2) = (0, 0) produces the sequence shown in
the table.

n    0    1    2     3     4      5      6        7
x1   0    -4   -34   -174  -1224  -6124  -42,874  -214,374
x2   0    -6   -34   -244  -1224  -8574  -42,874  -300,124
For this particular system of linear equations you can determine that the actual
solution is x1 = 1 and x2 = 1. So you can see from the table that the approximations
provided by the Jacobi method become progressively worse instead of better, and you
can conclude that the method diverges.
Applying the Gauss-Seidel method to the same system produces approximations
that diverge even more rapidly.

n    0    1     2      3        4          5
x1   0    -4    -174   -6124    -214,374   -7,503,124
x2   0    -34   -1224  -42,874  -1,500,624 -52,521,874
You will now look at a special type of coefficient matrix A, called a strictly
diagonally dominant matrix, for which it is guaranteed that both methods will
converge.
|a11| > |a12| + |a13| + . . . + |a1n|
|a22| > |a21| + |a23| + . . . + |a2n|
  .
  .
  .
|ann| > |an1| + |an2| + . . . + |an,n-1|.
Which of the systems of linear equations shown below has a strictly diagonally
dominant coefficient matrix?
a. 3x1 + x2 = 4          b. 4x1 + 2x2 + x3 = 1
   2x1 + 5x2 = 2            x1 + 2x3 = 4
                            3x1 + 5x2 + x3 = 3
SOLUTION
a. The coefficient matrix

A = [ 3   1 ]
    [ 2   5 ]

is strictly diagonally dominant because |3| > |1| and |5| > |2|.
b. The coefficient matrix

A = [ 4   2   1 ]
    [ 1   0   2 ]
    [ 3   5   1 ]

is not strictly diagonally dominant because the entries in the second and third rows
do not conform to the definition. For instance, in the second row, a21 = 1, a22 = 0,
and a23 = 2, and it is not true that |a22| > |a21| + |a23|. Interchanging the second
and third rows in the original system of linear equations, however, produces the
coefficient matrix

[ 4   2   1 ]
[ 3   5   1 ],
[ 1   0   2 ]

which is strictly diagonally dominant.
The next theorem, which is listed without proof, states that strict diagonal
dominance is sufficient for the convergence of either the Jacobi method or the
Gauss-Seidel method.
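The row condition in the definition above is easy to test mechanically; a small sketch (name mine):

```python
def is_strictly_diagonally_dominant(A):
    """Each diagonal entry must exceed, in absolute value, the sum of the
    absolute values of the other entries in its row."""
    return all(
        abs(row[i]) > sum(abs(v) for j, v in enumerate(row) if j != i)
        for i, row in enumerate(A)
    )
```

For the matrices of the example above, the original ordering of system (b) fails the test, while the row-interchanged version passes it.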
In Example 3, you looked at a system of linear equations for which the Jacobi and
Gauss-Seidel methods diverged. In the next example, you can see that by interchanging
the rows of the system in Example 3, you can obtain a coefficient matrix that is strictly
diagonally dominant. After this interchange, convergence is assured.
Interchange the rows of the system in Example 3 to obtain one with a strictly diagonally
dominant coefficient matrix. Then apply the Gauss-Seidel method to approximate the
solution to four significant digits.
SOLUTION
Begin by interchanging the two rows of the system to obtain
7x1 - x2 = 6
x1 - 5x2 = -4.
Note that the coefficient matrix of this system is strictly diagonally dominant. Then
solve for x1 and x2 , as shown below.
x1 = 6/7 + (1/7)x2
x2 = 4/5 + (1/5)x1
Using the initial approximation (x1, x2) = (0, 0), obtain the sequence of
approximations shown in the table.

n    0        1        2        3        4       5
x1   0.0000   0.8571   0.9959   0.9999   1.000   1.000
x2   0.0000   0.9714   0.9992   1.000    1.000   1.000

REMARK: Do not conclude from Theorem 10.1 that strict diagonal dominance is a
necessary condition for convergence of the Jacobi or Gauss-Seidel methods (see
Exercise 21).
So, the solution is x1 = 1 and x2 = 1.
10.2 Exercises
The Jacobi Method  In Exercises 1-4, apply the Jacobi method to the system of linear
equations, using the initial approximation (x1, x2, . . . , xn) = (0, 0, . . . , 0). Continue
performing iterations until two successive approximations are identical when rounded
to three significant digits.

1. 3x1 - x2 = 2
   x1 + 4x2 = 5

2. 4x1 + 2x2 = 6
   3x1 + 5x2 = 1

3. 2x1 + x2 = 2
   x1 + 3x2 + x3 = 2
   x1 + x2 + 3x3 = 6

4. 4x1 + x2 + x3 = 7
   x1 - 7x2 + 2x3 = -2
   3x1 + 4x3 = 11

The Gauss-Seidel Method  In Exercises 5-8, apply the Gauss-Seidel method to the
system of linear equations in the given exercise.

5. Exercise 1    6. Exercise 2    7. Exercise 3    8. Exercise 4

Showing Divergence  In Exercises 9-12, show that the Gauss-Seidel method diverges
for the system using the initial approximation (x1, x2, . . . , xn) = (0, 0, . . . , 0).

9. x1 - 2x2 = -1
   2x1 + x2 = 3

10. x1 + 4x2 = 1
    3x1 + 2x2 = 2

11. 2x1 + 3x2 = 7
    x1 + 3x2 + 10x3 = 9
    3x1 + x3 = 13

12. x1 + 3x2 + x3 = 5
    3x1 + x2 = 5
    x2 + 2x3 = 1

Strictly Diagonally Dominant Matrices  In Exercises 13-16, determine whether the
matrix is strictly diagonally dominant.

13. [ 2   1 ]       14. [ 1   2 ]
    [ 3   5 ]           [ 0   1 ]

15. [ 12   6    0 ]     16. [ 7   5   1 ]
    [  2   3    2 ]         [ 1   4   1 ]
    [  0   6   13 ]         [ 0   2   3 ]

Interchanging Rows  In Exercises 17-20, interchange the rows of the system of linear
equations in the given exercise to obtain a system with a strictly diagonally dominant
coefficient matrix. Then apply the Gauss-Seidel method to approximate the solution to
two significant digits.

17. Exercise 9    18. Exercise 10    19. Exercise 11    20. Exercise 12

Showing Convergence  In Exercises 21 and 22, the coefficient matrix of the system of
linear equations is not strictly diagonally dominant. Show that the Jacobi and
Gauss-Seidel methods converge using an initial approximation of
(x1, x2, . . . , xn) = (0, 0, . . . , 0).

21. 4x1 + 5x2 = 1
    x1 + 2x2 = 3

22. 4x1 + 2x2 + 2x3 = 0
    x1 + 3x2 + x3 = 7
    3x1 + x2 + 4x3 = 5

True or False?  In Exercises 23-25, determine whether each statement is true or false.
If a statement is true, give a reason or cite an appropriate statement from the text.
If a statement is false, provide an example that shows the statement is not true in all
cases or cite an appropriate statement from the text.

23. The Jacobi method is said to converge when it produces a sequence of repeated
iterations accurate to within a specific number of decimal places.

24. If the Jacobi method or the Gauss-Seidel method diverges, then the system has
no solution.

25. If a matrix A is strictly diagonally dominant, then the system of linear equations
represented by Ax = b has no unique solution.

26. Consider the system of linear equations

    ax1 + 5x2 + 2x3 = 19
    3x1 + bx2 + x3 = 1
    2x1 + x2 + cx3 = 9

(a) Describe all the values of a, b, and c that will allow you to use the Jacobi
method to approximate the solution of this system.
(b) Describe all the values of a, b, and c that will guarantee that the Jacobi and
Gauss-Seidel methods converge.
(c) Let a = 8, b = 1, and c = 4. What, if anything, can you determine about the
convergence or divergence of the Jacobi and Gauss-Seidel methods? Explain.
For large values of n, polynomial equations such as this one are difficult and time
consuming to solve. Moreover, numerical techniques for approximating roots of
polynomial equations of high degree are sensitive to rounding errors. This section
introduces an alternative method for approximating eigenvalues. As presented here, the
method can be used only to find the eigenvalue of A that is largest in absolute value; this
eigenvalue is called the dominant eigenvalue of A. Although this restriction may seem
severe, dominant eigenvalues are of primary interest in many physical applications.
|λ1| > |λi|,  i = 2, . . . , n.

The eigenvectors corresponding to λ1 are called dominant eigenvectors of A.
REMARK: Not every matrix has a dominant eigenvalue. For instance, the matrices

A = [ 1   0 ]        B = [ 2   0   0 ]
    [ 0  -1 ]            [ 0   2   0 ]
                         [ 0   0   1 ]

have no dominant eigenvalues. Note that the eigenvalues of A are λ1 = 1 and
λ2 = -1, and the eigenvalues of B are λ1 = 2, λ2 = 2, and λ3 = 1.
The dominant eigenvectors of A are of the form

x = t(3, 1),  t ≠ 0.
In Example 2, the power method with initial approximation x0 = (1, 1) produced
iterates approaching scalar multiples of (3, 1),
which is a dominant eigenvector of the matrix A from Example 1. In this case the
dominant eigenvalue of A is known to be λ = -2. For the sake of
demonstration, however, assume that the dominant eigenvalue of A is unknown. The
next theorem provides a formula for determining the eigenvalue corresponding to
a given eigenvector. This theorem is credited to the English physicist John William
Rayleigh (1842-1919).
PROOF
Because x is an eigenvector of A, it follows that Ax = λx and you can write

(Ax · x) / (x · x) = (λx · x) / (x · x) = λ(x · x) / (x · x) = λ.
In cases for which the power method generates a good approximation of a dominant
eigenvector, the Rayleigh quotient provides a correspondingly good approximation of the
dominant eigenvalue. The use of the Rayleigh quotient is demonstrated in Example 3.
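The quotient in the theorem is a single line of code; a sketch (name mine):

```python
def rayleigh_quotient(A, x):
    """(Ax . x) / (x . x): the eigenvalue belonging to eigenvector x."""
    Ax = [sum(a * v for a, v in zip(row, x)) for row in A]
    return sum(p * q for p, q in zip(Ax, x)) / sum(v * v for v in x)
```

For instance, with the matrix A of Example 1 and the dominant eigenvector (3, 1), the quotient is ((-6)(3) + (-2)(1)) / (9 + 1) = -2, the dominant eigenvalue.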
Use the result of Example 2 to approximate the dominant eigenvalue of the matrix

A = [ 2  -12 ]
    [ 1   -5 ].

SOLUTION
In Example 2, the sixth iteration of the power method produced

x6 = [ 568 ] = 190 [ 2.99 ]
     [ 190 ]       [ 1.00 ].
As shown in Example 2, the power method tends to produce approximations with
large entries. In practice it is best to scale down each approximation before proceeding
to the next iteration. One way to accomplish this scaling is to determine the component
of Axi that has the largest absolute value and multiply the vector Axi by the reciprocal of
this component. The resulting vector will then have components whose absolute
values are less than or equal to 1.

REMARK: Other scaling techniques are possible. For examples, see Exercises 23
and 24.
Calculate six iterations of the power method with scaling to approximate a dominant
eigenvector of the matrix
A = [  1   2   0 ]
    [ -2   1   2 ].
    [  1   3   1 ]
Use x0 = [1  1  1]T as the initial approximation.
SOLUTION
One iteration of the power method produces

Ax0 = [  1   2   0 ] [ 1 ]   [ 3 ]
      [ -2   1   2 ] [ 1 ] = [ 1 ]
      [  1   3   1 ] [ 1 ]   [ 5 ]

and by scaling you obtain the approximation

x1 = (1/5) [ 3 ]   [ 0.60 ]
           [ 1 ] = [ 0.20 ].
           [ 5 ]   [ 1.00 ]

A second iteration yields

Ax1 = [  1   2   0 ] [ 0.60 ]   [ 1.00 ]
      [ -2   1   2 ] [ 0.20 ] = [ 1.00 ]
      [  1   3   1 ] [ 1.00 ]   [ 2.20 ]

and

x2 = (1/2.20) [ 1.00 ]   [ 0.45 ]
              [ 1.00 ] = [ 0.45 ].
              [ 2.20 ]   [ 1.00 ]

REMARK: Note that the scaling factors used to obtain the vectors in the table,

x1: 5.00,  x2: 2.20,  x3: 2.80,  x4: 3.13,  x5: 3.03,  x6: 3.00,

are approaching the dominant eigenvalue λ = 3.

Continuing this process produces the sequence of approximations shown in the table.
x0 x1 x2 x3 x4 x5 x6
1.00 0.60 0.45 0.48 0.50 0.50 0.50
1.00 0.20 0.45 0.55 0.51 0.50 0.50
1.00 1.00 1.00 1.00 1.00 1.00 1.00
From the table, approximate a dominant eigenvector of A to be

x = [ 0.50 ]
    [ 0.50 ].
    [ 1.00 ]

Then use the Rayleigh quotient to approximate the dominant eigenvalue of A to be
λ = 3. (For this example, you can check that the approximations of x and λ are exact.)
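The whole procedure of Example 4 — multiply, scale by the component of largest absolute value, repeat, then finish with the Rayleigh quotient — can be sketched as follows (names mine):

```python
def power_method(A, x0, iterations=6):
    """Power method with scaling: after each multiplication, divide by the
    component of largest absolute value; finish with the Rayleigh quotient."""
    n = len(A)
    x = list(x0)
    for _ in range(iterations):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        m = max(y, key=abs)      # scaling factor; approaches the eigenvalue
        x = [v / m for v in y]
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    lam = sum(p * q for p, q in zip(Ax, x)) / sum(v * v for v in x)
    return x, lam
```

For the matrix of Example 4 and x0 = (1, 1, 1), this returns an eigenvector estimate near (0.50, 0.50, 1.00) and an eigenvalue estimate near 3.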
PROOF
Because A is diagonalizable, from Theorem 7.5 it has n linearly independent eigenvectors
x1, x2, . . . , xn with corresponding eigenvalues of 1, 2, . . . , n. Assume that these
eigenvalues are ordered so that 1 is the dominant eigenvalue (with a corresponding
eigenvector of x1). Because the n eigenvectors x1, x2, . . . , xn are linearly independent,
they must form a basis for Rn. For the initial approximation x0, choose a nonzero
vector such that the linear combination x0 = c1x1 + c2x2 + . . . + cnxn has a nonzero
leading coefficient. (If c1 = 0, the power method may not converge, and a different
x0 must be used as the initial approximation. See Exercises 19 and 20.) Now,
multiplying both sides of this equation by A produces
Ax0 = A(c1x1 + c2x2 + . . . + cnxn)
    = c1Ax1 + c2Ax2 + . . . + cnAxn
    = c1(λ1x1) + c2(λ2x2) + . . . + cn(λnxn).
Repeated multiplication of both sides of this equation by A produces
A^k x0 = c1 λ1^k x1 + c2 λ2^k x2 + . . . + cn λn^k xn

which implies that

A^k x0 = λ1^k [ c1x1 + c2 (λ2/λ1)^k x2 + . . . + cn (λn/λ1)^k xn ].
Now, from the original assumption that λ1 is larger in absolute value than the other
eigenvalues, it follows that each of the fractions

λ2/λ1,  λ3/λ1,  . . . ,  λn/λ1

is less than 1 in absolute value. So, as k approaches infinity, each of the factors

(λ2/λ1)^k,  (λ3/λ1)^k,  . . . ,  (λn/λ1)^k
must approach 0. This implies that the approximation A^k x0 ≈ λ1^k c1x1, c1 ≠ 0, improves
as k increases. Because x1 is a dominant eigenvector, it follows that any scalar multiple
of x1 is also a dominant eigenvector, which shows that Akx0 approaches a multiple of
the dominant eigenvector of A.
The proof of Theorem 10.3 provides some insight into the rate of convergence of
the power method. That is, if the eigenvalues of A are ordered so that
|λ1| > |λ2| ≥ |λ3| ≥ . . . ≥ |λn|

then the power method will converge quickly if |λ2|/|λ1| is small, and slowly if
|λ2|/|λ1| is close to 1. This principle is illustrated in Example 5.
a. The matrix

A = [ 4   5 ]
    [ 6   5 ]

has eigenvalues of λ1 = 10 and λ2 = -1. So the ratio |λ2|/|λ1| is 0.1. For this
matrix, only four iterations are required to obtain successive approximations that
agree when rounded to three significant digits.
In this section the power method was used to approximate the dominant eigenvalue
of a matrix. This method can be modified to approximate other eigenvalues through use
of a procedure called deflation. Moreover, the power method is only one of several
techniques that can be used to approximate the eigenvalues of a matrix. Another
popular method is called the QR algorithm.
This is the method used in most programs and calculators for finding eigenvalues and
eigenvectors. The QR algorithm uses the QR-factorization of the matrix, as presented in
Chapter 5. Discussions of the deflation method and the QR algorithm can be found in
most texts on numerical methods.
10.3 Exercises
Finding a Dominant Eigenvector  In Exercises 1-4, use the techniques presented in
Chapter 7 to find the eigenvalues of the matrix A. If A has a dominant eigenvalue,
find a corresponding dominant eigenvector.

1. A = [ 1   0 ]        2. A = [ 5   2 ]
       [ 3   1 ]               [ 3   6 ]

3. A = [ 1   3   0 ]    4. A = [ 5   0   0 ]
       [ 2   1   2 ]           [ 3   5   0 ]
       [ 1   1   0 ]           [ 4   2   3 ]

7. A = [ 2   1 ]        8. A = [ 1   6 ]
       [ 0   7 ]               [ 0   1 ]

9. A = [ 1   4 ]        10. A = [ 6   3 ]
       [ 2   8 ]                [ 2   1 ]

Eigenvectors and Eigenvalues  In Exercises 11-14, use the power method with
scaling to approximate a dominant eigenvector of the matrix A. Start with
x0 = [1  1  1]T and calculate four iterations. Then use x4 to approximate the
dominant eigenvalue of A.

11. A = [ 3   0   0 ]    12. A = [ 1   2   0 ]
        [ 1   1   0 ]            [ 0   7   1 ]
        [ 0   2   8 ]            [ 0   0   0 ]

13. A = [ 1   6   0 ]    14. A = [ 0   6   0 ]
        [ 2   7   0 ]            [ 0   4   0 ]
        [ 1   2   1 ]            [ 2   1   1 ]

Power Method with Scaling  In Exercises 15 and 16, the matrix A does not have a
dominant eigenvalue. Apply the power method with scaling, starting with
x0 = [1  1  1]T, and observe the results of the first four iterations.

15. A = [ 1   1   0 ]    16. A = [ 1   2   2 ]
        [ 3   1   0 ]            [ 2   5   2 ]
        [ 0   0   2 ]            [ 6   6   3 ]

17. Rates of Convergence
(a) Compute the eigenvalues of

A = [ 1   2 ]    and    B = [ 2   3 ]
    [ 2   1 ]               [ 4   1 ].

(b) Apply four iterations of the power method with scaling to each matrix in
part (a), starting with x0 = [1  2]T.
(c) Compute the ratios |λ2|/|λ1| for A and B. For which matrix do you expect
faster convergence?

In Exercises 19 and 20, apply the power method with the given initial approximation
x0 and observe that the method does not converge to a dominant eigenvector (see the
remark on c1 = 0 in the proof of Theorem 10.3).

20. A = [ 0   2   1 ]        x0 = [ 1 ]
        [ 0   1   0 ]             [ 2 ]
        [ 0   1   2 ]             [ 1 ]

Other Eigenvalues  In Exercises 21 and 22, observe that Ax = λx implies that
A^-1 x = (1/λ)x. Apply five iterations of the power method (with scaling) on A^-1 to
compute the eigenvalue of A with the smallest magnitude.

21. A = [ 2   12 ]       22. A = [ 2   3   1 ]
        [ 1    5 ]               [ 0   1   2 ]
                                 [ 0   0   3 ]

Another Scaling Technique  In Exercises 23 and 24, apply four iterations of the
power method (with scaling) to approximate the dominant eigenvalue of the matrix.
After each iteration, scale the approximation by dividing by the length so that the
resulting approximation will be a unit vector.

23. A = [ 5   6 ]        24. A = [  4   7   2 ]
        [ 4   3 ]                [ 16   9   6 ]
                                 [  8   4   5 ]

25. Use the proof of Theorem 10.3 to show that

A(A^k x0) ≈ λ1(A^k x0)

for large values of k. That is, show that the scale factors obtained by the power
method approach the dominant eigenvalue.
n a0 + (Σ xi) a1 + (Σ xi^2) a2 = Σ yi
(Σ xi) a0 + (Σ xi^2) a1 + (Σ xi^3) a2 = Σ xi yi
(Σ xi^2) a0 + (Σ xi^3) a1 + (Σ xi^4) a2 = Σ xi^2 yi
The world populations in billions for selected years from 1980 to 2010 are shown
below. (Source: U.S. Census Bureau)
Find the second-degree least squares regression polynomial for these data and use the
resulting model to predict the world populations in 2015 and 2020.
SOLUTION
Begin by letting x = 0 represent 1980, x = 1 represent 1985, and so on. So, the
collection of points is represented by (0, 4.45), (1, 4.84), (2, 5.27), (3, 5.68),
(4, 6.07), (5, 6.45), (6, 6.87), which yields

n = 7,  Σ xi = 21,  Σ xi^2 = 91,  Σ xi^3 = 441,  Σ xi^4 = 2275,

Σ yi = 39.63,  Σ xi yi = 130.17,  Σ xi^2 yi = 582.73.
So, the system of linear equations giving the coefficients of the quadratic model
y = a2 x^2 + a1 x + a0 is

7a0 + 21a1 + 91a2 = 39.63
21a0 + 91a1 + 441a2 = 130.17
91a0 + 441a1 + 2275a2 = 582.73.
Gaussian elimination with pivoting on the matrix

[ 7     21     91    |  39.63  ]
[ 21    91     441   |  130.17 ]
[ 91    441    2275  |  582.73 ]

produces

[ 1   4.8462   25    |  6.4036  ]
[ 0   1        6.5   |  0.4020  ].
[ 0   0        1     |  -0.0017 ]

So, back-substitution produces the solution

a2 = -0.0017,  a1 = 0.4129,  a0 = 4.4445

and the regression quadratic is

y = -0.0017x^2 + 0.4129x + 4.4445.

Figure 10.1 compares this model with the given points. To predict the world population
in 2015, let x = 7, and obtain

y = -0.0017(7)^2 + 0.4129(7) + 4.4445 ≈ 7.25 billion.

Similarly, the prediction for 2020 (x = 8) is

y = -0.0017(8)^2 + 0.4129(8) + 4.4445 ≈ 7.64 billion.
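The normal-equation setup of this example can be sketched in Python (the function name and helper names are mine; the small elimination loop is the same partial-pivoting procedure used above):

```python
def quadratic_least_squares(xs, ys):
    """Build the normal equations for y = a0 + a1*x + a2*x^2 and solve
    the 3x3 system by Gaussian elimination with partial pivoting."""
    n = len(xs)
    S = lambda p: sum(x ** p for x in xs)                    # sum of x^p
    Sy = lambda p: sum(y * x ** p for x, y in zip(xs, ys))   # sum of y*x^p
    M = [[n,    S(1), S(2), Sy(0)],
         [S(1), S(2), S(3), Sy(1)],
         [S(2), S(3), S(4), Sy(2)]]
    for k in range(3):                       # elimination with pivoting
        p = max(range(k, 3), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, 3):
            m = M[i][k] / M[k][k]
            for j in range(k, 4):
                M[i][j] -= m * M[k][j]
    a = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                      # back-substitution
        a[i] = (M[i][3] - sum(M[i][j] * a[j]
                              for j in (1, 2) if j > i)) / M[i][i]
    return a  # (a0, a1, a2)

# the world-population data of the example (x = 0 represents 1980)
coeffs = quadratic_least_squares(list(range(7)),
                                 [4.45, 4.84, 5.27, 5.68, 6.07, 6.45, 6.87])
```

The returned coefficients match the example: a0 ≈ 4.4445, a1 ≈ 0.4129, and a slightly negative a2 ≈ -0.0017.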
SOLUTION
For this set of points, the linear system

n a0 + (Σ xi) a1 + (Σ xi^2) a2 + (Σ xi^3) a3 = Σ yi
(Σ xi) a0 + (Σ xi^2) a1 + (Σ xi^3) a2 + (Σ xi^4) a3 = Σ xi yi
(Σ xi^2) a0 + (Σ xi^3) a1 + (Σ xi^4) a2 + (Σ xi^5) a3 = Σ xi^2 yi
(Σ xi^3) a0 + (Σ xi^4) a1 + (Σ xi^5) a2 + (Σ xi^6) a3 = Σ xi^3 yi

becomes

7a0 + 21a1 + 91a2 + 441a3 = 14
21a0 + 91a1 + 441a2 + 2275a3 = 52
91a0 + 441a1 + 2275a2 + 12,201a3 = 242
441a0 + 2275a1 + 12,201a2 + 67,171a3 = 1258.
Using Gaussian elimination with pivoting on the matrix

[ 7      21      91       441     |   14  ]
[ 21     91      441      2275    |   52  ]
[ 91     441     2275     12,201  |  242  ]
[ 441    2275    12,201   67,171  | 1258  ]

produces

[ 1.0000   5.1587   27.6667   152.3152  |  2.8526 ]
[ 0.0000   1.0000    8.5313    58.3482  |  0.6183 ]
[ 0.0000   0.0000    1.0000     9.7714  |  0.1286 ]
[ 0.0000   0.0000    0.0000     1.0000  |  0.1667 ]

which implies

a3 = 0.1667,  a2 = -1.5003,  a1 = 3.6912,  a0 = 0.0718.

So, the cubic model is

y = 0.1667x^3 - 1.5003x^2 + 3.6912x + 0.0718.
Figure 10.2 compares this model with the given points (0, 0), (1, 2), (2, 3), (3, 2),
(4, 1), (5, 2), and (6, 4).
An Application to Probability
SOLUTION
Figure 10.3 shows the maze, with the intersections numbered 1 through 10 and the
food at the bottom.

Let the probability of winning (getting food) when starting at the ith intersection be
represented by pi. Then form a linear equation involving pi and the probabilities
associated with the intersections bordering the ith intersection. For instance, at the first
intersection the mouse has a probability of 1/4 of choosing the upper right path and
losing, a probability of 1/4 of choosing the upper left path and losing, a probability of
1/4 of choosing the lower right path (at which point it has a probability of p3 of
winning), and a probability of 1/4 of choosing the lower left path (at which point it has
a probability of p2 of winning). So

p1 = (1/4)(0) + (1/4)(0) + (1/4)p3 + (1/4)p2.
Using similar reasoning, the other nine probabilities can be represented by the
following equations.

p2 = (1/5)(0) + (1/5)p1 + (1/5)p3 + (1/5)p4 + (1/5)p5
p3 = (1/5)(0) + (1/5)p1 + (1/5)p2 + (1/5)p5 + (1/5)p6
p4 = (1/5)(0) + (1/5)p2 + (1/5)p5 + (1/5)p7 + (1/5)p8
p5 = (1/6)p2 + (1/6)p3 + (1/6)p4 + (1/6)p6 + (1/6)p8 + (1/6)p9
p6 = (1/5)(0) + (1/5)p3 + (1/5)p5 + (1/5)p9 + (1/5)p10
p7 = (1/4)(0) + (1/4)(1) + (1/4)p4 + (1/4)p8
p8 = (1/5)(1) + (1/5)p4 + (1/5)p5 + (1/5)p7 + (1/5)p9
p9 = (1/5)(1) + (1/5)p5 + (1/5)p6 + (1/5)p8 + (1/5)p10
p10 = (1/4)(0) + (1/4)(1) + (1/4)p6 + (1/4)p9
Rewriting these equations in standard form produces the following system of ten linear
equations in ten variables.
4p₁ − p₂ − p₃ = 0
−p₁ + 5p₂ − p₃ − p₄ − p₅ = 0
−p₁ − p₂ + 5p₃ − p₅ − p₆ = 0
−p₂ + 5p₄ − p₅ − p₇ − p₈ = 0
−p₂ − p₃ − p₄ + 6p₅ − p₆ − p₈ − p₉ = 0
−p₃ − p₅ + 5p₆ − p₉ − p₁₀ = 0
−p₄ + 4p₇ − p₈ = 1
−p₄ − p₅ − p₇ + 5p₈ − p₉ = 1
−p₅ − p₆ − p₈ + 5p₉ − p₁₀ = 1
−p₆ − p₉ + 4p₁₀ = 1
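Because the coefficient matrix of this system is strictly diagonally dominant, the iterative methods of Section 10.2 are guaranteed to converge for it. The following sketch (plain Python; the iteration count is an arbitrary choice) applies Gauss-Seidel iteration to the ten equations.

```python
# Gauss-Seidel iteration (Section 10.2) on the ten-equation maze system.
# Row i holds the coefficients of p1..p10 in the ith equation above.
A = [
    [ 4, -1, -1,  0,  0,  0,  0,  0,  0,  0],
    [-1,  5, -1, -1, -1,  0,  0,  0,  0,  0],
    [-1, -1,  5,  0, -1, -1,  0,  0,  0,  0],
    [ 0, -1,  0,  5, -1,  0, -1, -1,  0,  0],
    [ 0, -1, -1, -1,  6, -1,  0, -1, -1,  0],
    [ 0,  0, -1,  0, -1,  5,  0,  0, -1, -1],
    [ 0,  0,  0, -1,  0,  0,  4, -1,  0,  0],
    [ 0,  0,  0, -1, -1,  0, -1,  5, -1,  0],
    [ 0,  0,  0,  0, -1, -1,  0, -1,  5, -1],
    [ 0,  0,  0,  0,  0, -1,  0,  0, -1,  4],
]
b = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]

p = [0.0] * 10                      # initial approximation
for _ in range(100):                # strict diagonal dominance => convergence
    for i in range(10):
        s = sum(A[i][j] * p[j] for j in range(10) if j != i)
        p[i] = (b[i] - s) / A[i][i]
print([round(v, 3) for v in p])     # winning probability at each intersection
```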
Suppose a population has a maximum life span of L years and is divided into n age
classes given by the intervals
[0, L/n), [L/n, 2L/n), . . . , [(n − 1)L/n, L].
The number of population members in each age class is then represented by the age
distribution vector
    [ x₁ ]   Number in first age class
x = [ x₂ ]   Number in second age class
    [  ⋮ ]
    [ xₙ ]   Number in nth age class
Over a period of L/n years, the probability that a member of the ith age class will
survive to become a member of the (i + 1)th age class is given by pᵢ, where
0 ≤ pᵢ ≤ 1,  i = 1, 2, . . . , n − 1.
The average number of offspring produced by a member of the ith age class is given
by bᵢ, where
0 ≤ bᵢ,  i = 1, 2, . . . , n.
These numbers can be written in matrix form as follows.
    [ b₁   b₂   b₃   . . .   bₙ₋₁   bₙ ]
    [ p₁   0    0    . . .   0      0  ]
A = [ 0    p₂   0    . . .   0      0  ]
    [  ⋮    ⋮    ⋮             ⋮      ⋮  ]
    [ 0    0    0    . . .   pₙ₋₁   0  ]
Multiplying this age transition matrix by the age distribution vector for a specific time
period produces the age distribution vector for the next time period. That is,
Axᵢ = xᵢ₊₁.
In Section 7.4 you saw that the growth pattern for a population is stable when the same
percentage of the total population is in each age class each year. That is,
Axᵢ = xᵢ₊₁ = λxᵢ.
For populations with many age classes, the solution to this eigenvalue problem can be
found with the power method, as illustrated in Example 4.
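As a concrete illustration of the iteration itself, the sketch below applies the power method with scaling to a small hypothetical age transition matrix with three age classes. The birth and survival numbers are invented for the example; they are not taken from Example 4.

```python
# Power method with scaling (Section 10.3) on a hypothetical 3-class
# age transition matrix: births b = (0, 3, 2), survival p = (0.6, 0.4).
A = [[0.0, 3.0, 2.0],
     [0.6, 0.0, 0.0],
     [0.0, 0.4, 0.0]]

def mat_vec(A, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

x = [1.0, 1.0, 1.0]                  # initial approximation
lam = 1.0
for _ in range(200):
    y = mat_vec(A, x)
    lam = max(abs(v) for v in y)     # scaling factor ~ dominant eigenvalue
    x = [v / lam for v in y]

# lam now approximates the dominant eigenvalue and x (scaled so its
# largest entry is 1) a stable age distribution vector.
print(round(lam, 4), [round(v, 4) for v in x])
```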
Assume that a population of human females has the characteristics shown in the table:
the age class (in years), the average number of female children born to a member during
a 10-year period, and the probability of surviving to the next age class. Find a stable
age distribution vector for this population.

Age Class (in years)   Female Children   Survival Probability
[0, 10)                0.000             0.985
[10, 20)               0.174             0.996
[20, 30)               0.782             0.994
  ⋮                      ⋮                 ⋮

SOLUTION
The age transition matrix for this population is the 10 × 10 matrix A built from these
birth and survival rates. To apply the power method with scaling to find an eigenvector
for this matrix, use an initial approximation of
x₀ = [1 1 1 1 1 1 1 1 1 1]ᵀ.
An approximation for an eigenvector of A, with the percentage of each age class in the
total population, is shown below.
Age Class    Eigenvector x   Percentage in Age Class
[0, 10)      1.000           15.27
[10, 20)     0.925           14.13
[20, 30)     0.864           13.20
[30, 40)     0.806           12.31
[40, 50)     0.749           11.44
[50, 60)     0.686           10.48
[60, 70)     0.605            9.24
[70, 80)     0.492            7.51
[80, 90)     0.314            4.80
[90, 100)    0.106            1.62
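The percentage column is simply each eigenvector entry expressed as a share of the sum of all entries, as the following two-line check (plain Python) confirms.

```python
# Each age-class percentage is the eigenvector entry divided by the
# total of all entries, times 100.
x = [1.000, 0.925, 0.864, 0.806, 0.749, 0.686, 0.605, 0.492, 0.314, 0.106]
total = sum(x)                                   # ~6.547
pct = [round(100 * v / total, 2) for v in x]
print(pct)
# -> [15.27, 14.13, 13.2, 12.31, 11.44, 10.48, 9.24, 7.51, 4.8, 1.62]
```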
10.4 Exercises
Least Squares Regression Analysis In Exercises 1–4, find the second-degree least
squares regression polynomial for the given data. Then graphically compare the model
with the given points.
1. (−2, 1), (−1, 0), (0, 0), (1, 1), (3, 2)
2. (0, 4), (1, 2), (2, 1), (3, 0), (4, 1), (5, 4)
3. (−2, 1), (−1, 2), (0, 6), (1, 3), (2, 0), (3, 1)
4. (1, 1), (2, 1), (3, 0), (4, 1), (5, 4)

12. Health Expenditures The total national health expenditures (in trillions of
dollars) in the United States from 2002 to 2009 are shown in the table. (Source:
U.S. Census Bureau)

Year                   2002    2003    2004    2005
Health Expenditures    1.637   1.772   1.895   2.021
15. Trigonometry Find the second-degree least squares regression polynomial for
the points
(−π/2, 0), (−π/3, 1/2), (0, 1), (π/3, 1/2), (π/2, 0).

If
    [ y₁ ]       [ 1  x₁ ]
Y = [ y₂ ],  X = [ 1  x₂ ],  A = [ a₀ ],
    [  ⋮ ]       [ ⋮   ⋮  ]       [ a₁ ]
    [ yₙ ]       [ 1  xₙ ]
then the matrix equation A = (XᵀX)⁻¹XᵀY is equivalent to . . .

20. Probability Suppose that the experiment in Example 3 is performed with the
maze shown in the figure. Find the probability that the mouse will emerge in the
food corridor when it begins at the ith intersection.

[Figure: a maze with intersections numbered 1–12 in rows (1 2 3 4, 5 6 7 8,
9 10 11 12) and a food corridor.]
Stable Age Distribution In Exercises 23–28, the matrix represents the age transition
matrix for a population. Use the power method with scaling to find a stable age
distribution vector.

23. A = [  1   4 ]        24. A = [  1   2 ]
        [ 1/2  0 ]                [ 1/4  0 ]

25. A = [  0    1   2 ]   26. A = [  1    2   2 ]
        [ 1/2   0   0 ]           [ 1/3   0   0 ]
        [  0   1/4  0 ]           [  0   1/3  0 ]

27. A = [  1    4   2 ]   28. A = [  0   2   2 ]
        [ 3/4   0   0 ]           [ 1/2  0   0 ]
        [  0   1/4  0 ]           [  0   1   0 ]

29. Population Growth Model The age transition matrix for a laboratory population
of rabbits is
        [ 0    7    10 ]
    A = [ 0.5  0    0  ].
        [ 0    0.5  0  ]
Find a stable age distribution vector for this population.

30. Population Growth Model A population has the following characteristics.
(a) A total of 75% of the population survives its first year. Of that 75%, 25% survives
its second year. The maximum life span is 3 years.
(b) The average numbers of offspring for each member of the population are 2 the
first year, 4 the second year, and 2 the third year.
Find a stable age distribution vector for this population. (See Exercise 13, Section 7.4.)

31. Population Growth Model A population has the following characteristics.
(a) A total of 80% of the population survives the first year. Of that 80%, 25% survives
the second year. The maximum life span is 3 years.
(b) The average number of offspring for each member of the population is 3 the first
year, 6 the second year, and 3 the third year.
Find a stable age distribution vector for this population. (See Exercise 14, Section 7.4.)

32. Fibonacci Sequence Apply the power method to the matrix
    A = [ 1  1 ]
        [ 1  0 ]
discussed in Chapter 7 (Fibonacci sequence). Use the power method to approximate
the dominant eigenvalue of A.

33. Writing In Example 2 in Section 2.5, the stochastic matrix
    P = [ 0.70  0.15  0.15 ]
        [ 0.20  0.80  0.15 ]
        [ 0.10  0.05  0.70 ]
represents the transition probabilities for a consumer preference model. Use the
power method to approximate a dominant eigenvector for this matrix. How does the
approximation relate to the steady state matrix described in the discussion following
Example 3 in Section 2.5?

34. Smokers and Nonsmokers In Exercise 7 in Section 2.5, a population of 10,000
is divided into nonsmokers, smokers of one pack or less per day, and smokers of more
than one pack per day. Use the power method to approximate a dominant eigenvector
for this matrix.

35. Television Watching In Exercise 8 in Section 2.5, a college dormitory of 200
students is divided into two groups according to how much television they watch on
a given day. Students in the first group watch an hour or more, and students in the
second group watch less than an hour. Use the power method to approximate a
dominant eigenvector for this matrix.

36. Explain each of the following.
(a) How to use Gaussian elimination with partial pivoting to find a least squares
regression polynomial
(b) How to use finite element analysis to determine the probability of an event
(c) How to model population growth with an age transition matrix and use the power
method to find a stable age distribution vector
10 Review Exercises
Floating Point Form In Exercises 1–6, express the real number in floating point form.
1. 528.6        2. 475.2
3. 4.85         4. 22.5
5. 1/32         6. 478

Finding the Stored Value In Exercises 7–12, determine the stored value of the real
number in a computer that (a) rounds to three significant digits, and (b) rounds to
four significant digits.
7. 25.2         8. 41.2
9. 250.231      10. 628.742
11. 5/12        12. 3/16

Rounding Error In Exercises 13 and 14, evaluate the determinant of the matrix,
rounding each intermediate step to three significant digits. Compare the rounded
solution with the exact solution.
13. [ 12.5    2.5  ]      14. [  8.5    3.2 ]
    [ 20.24   6.25 ]          [ 10.25   8.1 ]

Gaussian Elimination In Exercises 15 and 16, use Gaussian elimination to solve the
system of linear equations. After each intermediate step, round the result to three
significant digits. Compare the rounded solution with the exact solution.
15. 2.53x + 8.5y = 29.65
    2.33x + 16.1y = 43.85
16. 12.5x + 18.2y = 56.8
    3.2x + 15.1y = 4.1

Partial Pivoting In Exercises 17 and 18, use Gaussian elimination without partial
pivoting to solve the system of linear equations, rounding each intermediate step to
three significant digits. Then use Gaussian elimination with partial pivoting to solve
the same system, again rounding each intermediate step to three significant digits.
Finally, compare both solutions with the exact solution provided.
17. 2.15x + 7.25y = 13.7
    3.12x + 6.3y = 15.66
    Exact solution: x = 3, y = 1
18. 4.25x + 6.3y = 16.85
    6.32x + 2.14y = 10.6
    Exact solution: x = 1, y = 2

Ill-Conditioned Systems In Exercises 19 and 20, use Gaussian elimination to solve
the ill-conditioned system of linear equations, rounding each intermediate calculation
to three significant digits. Compare the solution with the exact solution provided.
19. x − y = −1
    x − (999/1000)y = 4001/1000
    Exact solution: x = 5000, y = 5001
20. x − y = −1
    (99/100)x − y = −20,101/100
    Exact solution: x = 20,001, y = 20,002

The Jacobi Method In Exercises 21 and 22, apply the Jacobi method to the system
of linear equations, using the initial approximation
(x₁, x₂, x₃, . . . , xₙ) = (0, 0, 0, . . . , 0).
Continue performing iterations until two successive approximations are identical
when rounded to three significant digits.
21. 2x₁ + x₂ = 1        22. x₁ + 4x₂ = 3
    x₁ + 2x₂ = 7            2x₁ + x₂ = 1

The Gauss-Seidel Method In Exercises 23 and 24, apply the Gauss-Seidel method
to the system from the indicated exercise.
23. Exercise 21         24. Exercise 22

Strictly Diagonally Dominant Matrices In Exercises 25–28, determine whether the
matrix is strictly diagonally dominant.
25. [ 2  0 ]      26. [ 4  2 ]
    [ 1  3 ]          [ 1  3 ]
27. [  4   0  2 ]     28. [ 4  2  1 ]
    [ 10  12  2 ]         [ 0  2  1 ]
    [  1   2  0 ]         [ 1  1  1 ]

Interchanging Rows In Exercises 29–32, interchange the rows of the system of
linear equations to obtain a system with a strictly diagonally dominant matrix. Then
apply the Gauss-Seidel method to approximate the solution to four significant digits.
29. x₁ + 2x₂ = 5        30. x₁ + 4x₂ = 4
    5x₁ + x₂ = 8            2x₁ + x₂ = 6
31. 2x₁ + 4x₂ + x₃ = 2      32. x₁ + 3x₂ + x₃ = 2
    4x₁ + x₂ + x₃ = 1           x₁ + x₂ + 3x₃ = 1
    x₁ + x₂ + 4x₃ = 2           3x₁ + x₂ + x₃ = 1
Finding Eigenvalues In Exercises 33–36, use the techniques presented in Chapter 7
to find the eigenvalues of the matrix A. If A has a dominant eigenvalue, find a
corresponding dominant eigenvector.
33. A = [ 1  0 ]      34. A = [ 2  1 ]
        [ 1  1 ]              [ 1  4 ]
35. A = [ −2   2  −3 ]    36. A = [ 1  2  3 ]
        [  2   1  −6 ]            [ 0  5  1 ]
        [ −1  −2   0 ]            [ 0  0  4 ]

Using the Rayleigh Quotient In Exercises 37–42, use the Rayleigh quotient to
compute the eigenvalue of the matrix A corresponding to the eigenvector x.
37. A = [ 2   5 ],  x = [ 5 ]
        [ 1  −2 ]       [ 1 ]
38. A = [ 6  3 ],  x = [ 3 ]
        [ 2  1 ]       [ 1 ]
39. A = [ 2  0  1 ]        [ 0 ]
        [ 0  3  4 ],  x =  [ 1 ]
        [ 0  0  1 ]        [ 0 ]
40. A = [  1  2  −2 ]        [ 1 ]
        [ −2  5  −2 ],  x =  [ 1 ]
        [ −6  6  −3 ]        [ 3 ]
41. A = [  0  1  −1 ]        [ 1 ]
        [  2  4  −2 ],  x =  [ 5 ]
        [ −1  1   0 ]        [ 1 ]
42. A = [ 3  2  3 ]        [ −3 ]
        [ 3  4  9 ],  x =  [  0 ]
        [ 1  2  5 ]        [  1 ]

Approximating Eigenvectors and Eigenvalues In Exercises 43–46, use the power
method with scaling to approximate a dominant eigenvector of the matrix A. Start
with x₀ = [1 1]ᵀ and calculate four iterations. Then use x₄ to approximate the
dominant eigenvalue of A.
43. A = [ 7  2 ]      44. A = [ 10  3 ]
        [ 2  4 ]              [  5  2 ]
45. A = [ 2  0 ]      46. A = [ 6  3 ]
        [ 1  4 ]              [ 0  3 ]

Least Squares Regression Analysis In Exercises 47 and 48, find the second-degree
least squares regression polynomial for the data. Then use a software program or a
graphing utility with regression features to find a second-degree regression
polynomial. Compare the results.
47. (−2, 0), (−1, 2), (0, 3), (1, 2), and (3, 0)
48. (−2, 2), (−1, 1), (0, 1), (1, 1), and (3, 0)

49. Hospital Care The table shows the amounts spent for hospital care (in billions
of dollars) in the United States from 2005 through 2009. (Source: U.S. Centers for
Medicare and Medicaid Services)

Year            2005    2006    2007    2008    2009
Amount Spent    606.5   648.3   686.8   722.1   759.1

Find the second-degree least squares regression polynomial for the data. Let x = 5
correspond to 2005. Then use a software program or a graphing utility with regression
features to find a second-degree regression polynomial. Compare the results.

50. Revenue The table shows the total yearly revenues (in billions of dollars) for
golf courses and country clubs in the United States from 2005 through 2009. (Source:
U.S. Census Bureau)

Year       2005   2006   2007   2008   2009
Revenue    19.4   20.5   21.2   21.0   20.36

Find the second-degree least squares regression polynomial for the data. Let x = 5
correspond to 2005. Then use a software program or a graphing utility with regression
features to find a second-degree regression polynomial. Compare the results.

51. Baseball Salaries The table shows the average salaries (in millions of dollars)
of Major League Baseball players on opening day of baseball season from 2006
through 2011. (Source: Major League Baseball)

Year              2006    2007    2008    2009    2010    2011
Average Salary    2.867   2.945   3.155   3.240   3.298   3.305

Find the third-degree least squares regression polynomial for the data. Let x = 6
correspond to 2006. Then use a software program or a graphing utility with regression
features to find a third-degree regression polynomial. Compare the results.

52. Dormitory Costs The table shows the average costs (in dollars) of a college
dormitory room from 2006 through 2010. (Source: Digest of Education Statistics)

Year    2006   2007   2008   2009   2010
Cost    3804   4019   4214   4446   4657

Find the third-degree least squares regression polynomial for the data. Let x = 6
correspond to 2006. Then use a software program or a graphing utility with regression
features to find a third-degree regression polynomial. Compare the results.
10 Project
The populations (in millions) of the United States by decade from 1900 to 2010 are
shown in the table. (Source: U.S. Census Bureau)
Use a software program or a graphing utility to create a scatter plot of the data. Let
x = 0 correspond to 1900, x = 1 correspond to 1910, and so on.
Using the scatter plot, describe any patterns in the data. Do the data appear to be
linear, quadratic, or cubic in nature? Explain.
Use the techniques presented in this chapter to find
(a) a linear least squares regression equation.
(b) a second-degree least squares regression equation.
(c) a cubic least squares regression equation.
Graph each equation with the data. Briefly describe which of the regression
equations best fits the data.
Use each model to predict the populations of the United States for the years 2020,
2030, 2040, and 2050. Which, if any, of the regression equations appears to be the
best model for predicting future populations? Explain your reasoning.
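The three fits in parts (a)–(c) can all be produced by one routine that builds and solves the normal equations for an arbitrary degree, in the spirit of this chapter. The sketch below uses placeholder data, since the census table itself is not reproduced here; substitute the actual populations before drawing conclusions.

```python
# General least squares polynomial fit of degree d via the normal
# equations, solved by Gaussian elimination with partial pivoting.
# The data list is a PLACEHOLDER, not the census populations.

def fit(data, d):
    n = d + 1
    # Normal equations: sum_k (sum_i x_i^(j+k)) a_k = sum_i x_i^j y_i
    aug = [[sum(x ** (j + k) for x, _ in data) for k in range(n)]
           + [sum(x ** j * y for x, y in data)] for j in range(n)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(aug[r][k]))  # partial pivot
        aug[k], aug[p] = aug[p], aug[k]
        for r in range(k + 1, n):
            m = aug[r][k] / aug[k][k]
            for c in range(k, n + 1):
                aug[r][c] -= m * aug[k][c]
    a = [0.0] * n
    for k in range(n - 1, -1, -1):                          # back-substitute
        s = sum(aug[k][c] * a[c] for c in range(k + 1, n))
        a[k] = (aug[k][n] - s) / aug[k][k]
    return a            # coefficients of a[0] + a[1]x + ... + a[d]x^d

data = [(x, 75.0 + 20.5 * x) for x in range(12)]   # placeholder, x = 0..11
linear = fit(data, 1)       # (a) linear model
quadratic = fit(data, 2)    # (b) second-degree model
cubic = fit(data, 3)        # (c) cubic model
```

Each model can then be evaluated at x = 12, 13, 14, 15 to produce the 2020–2050 predictions asked for above.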
The 2012 Statistical Abstract projected the populations of the United States for the
same years, as shown in the table below. Do any of your models produce the same
projections? Explain any possible differences between your projections and the
Statistical Abstract projections.