
Numerical Analysis

Math135A (001) Fall 2010

General philosophy
In this class we will derive algorithms (this is the easy part), we will prove rigorously that these algorithms
converge to the right answer (this is harder), and we will study rigorously how fast they converge to
the right answer (this is also hard). A good background in calculus is needed: we will use tools such as
Taylor's theorem again and again in order to derive error estimates.

There will be lots of coding too. The homework will be computationally intensive. You will also be asked to
write code during the midterms and the final exam (with pen and paper, not with a computer).

Main topics covered (not exhaustive)

1. Solving nonlinear equations of one variable


We will derive algorithms to solve f (x) = 0. The most important one is Newton's method. A big
emphasis will be placed on deriving rigorous error estimates.

2. IEEE Floating Point Arithmetic.


We will learn how numbers are stored in the computer and what problems arise from the finite
precision of the computer.
3. Polynomial interpolation
How to approximate a function by a polynomial, and related error estimates. This topic is needed in
order to do numerical integration (the next topic).
4. Numerical integration
We will derive algorithms to compute $\int_a^b f(x)\,dx$. A big emphasis will be placed on deriving rigorous error
estimates.
5. Solving linear systems of n equations
We will write code to solve Ax = b with Gaussian elimination.

MATLAB and SCILAB


MATLAB is one of the most popular software packages in science and engineering. It is very robust, fast and easy to
use. You can buy a student version for about $100. Fortunately, there is also a "free version of MATLAB"
called SCILAB which does basically the same job as MATLAB but is a little less user friendly. You can
download it for free at www.scilab.org/ and it will be enough for everything we need to do.

You will be taught how to use MATLAB and SCILAB (they are basically the same language). Previous
familiarity with these packages is NOT REQUIRED for enrollment. The first discussion will be a
crash course in MATLAB/SCILAB. Don't miss it!

Your homework must be written in MATLAB or SCILAB. During exams you will be asked to write
some MATLAB/SCILAB codes.

Grading Scheme:
Homework: 8%
2 Midterms: 46% (23% each)
Final: 46%

Tentative dates of exams:


Midterm 1 : Wednesday, October 20
Midterm 2: Wednesday, November 17
Final : Friday, December 10

Homework:
There will be about 8 homework assignments (≈ 1% each).
All numerical computation should be done in MATLAB/SCILAB.
No late homework will be accepted.

Office hours:
To be determined

Homework # 1

Due Monday, 10/04

NOTE: Each time you are asked to write a code which gives the solution with accuracy $\epsilon$, it means that you
have to use the stopping criterion
$|p_n - p_{n-1}| \le \epsilon$
Note that this does not guarantee that $|p_n - p| \le \epsilon$ !!! So you are actually not sure to have found the solution
with accuracy $\epsilon$ !!!
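For concreteness, here is a minimal MATLAB sketch of what this stopping criterion looks like in practice. The iteration map g below is a hypothetical fixed point iteration (it is not part of any assignment; its fixed point happens to solve x = 2^(-x)):

g = @(x) 2.^(-x);              % hypothetical iteration map
epsilon = 1e-5;                % requested accuracy
p_old = 0.5;
p = g(p_old);
while abs(p - p_old) > epsilon % the stopping criterion |p_n - p_(n-1)| <= epsilon
    p_old = p;
    p = g(p_old);
end
p                              % last iterate p_n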

1. Let f (x) = x − 2−x .


(a) Plot the function f on [−5, 5] with MATLAB. Use the MATLAB command "grid on" so that
there is a grid on the plot. If you are using SCILAB, type
a=get("current_axes");
a.grid=[1 1];
Provide this plot. Looking at this plot, is there a zero in [0, 1]? Zoom in several times and give an
estimate for this zero.
(b) Use the bisection method to find the zero of f with accuracy $10^{-5}$ (provide the MATLAB code).
How many iterations were needed?
(c) Note that in the bisection method, if $p_0$ stands for the initial iterate $(b+a)/2$, then we have that
$|p_n - p_{n-1}| = (b-a)/2^{n+1}$. Use this to find analytically at what iteration your code will stop.
Compare with the previous question.
2. (a) Sketch the graph of y = x and y = tan x.
(b) Use the bisection method to find an approximation to within $10^{-5}$ of the first positive x satisfying
x = tan x (provide the MATLAB code).
3. Suppose f is a continuous function on [a, b] with f (a) > a and f (b) < b. Show that f has a fixed point
in [a, b].
4. Let $f(x) = \ln(x+2)$
(a) Sketch the graph of $f(x)$.
(b) Find $f([0,2])$.
(c) Is it true that $f([0,2]) \subset [0,2]$? What can you conclude about the existence of a fixed point?
(d) Find $\max_{x\in[0,2]} |f'(x)|$.
(e) Find a number $k \in (0,1)$ such that
$|f'(x)| \le k$ for all $x \in [0,2]$
(f) What can you conclude about using a fixed point iteration?
(g) Find the fixed point with accuracy $10^{-3}$. How many iterations were needed? (provide the MATLAB code)
(h) Use the theoretical result given in class ($|p - p_n| \le k^n(b-a)$) to estimate the number of iterations
required to achieve $10^{-3}$ accuracy. Why is this number different from the one obtained in (g)?
5. Let $f(x) = e^{-x}$
(a) Sketch the graph of $f(x)$.
(b) Find $f([\frac{1}{3},1])$.
(c) Is it true that $f([\frac{1}{3},1]) \subset [\frac{1}{3},1]$? What can you conclude about the existence of a fixed point?
(d) Find $\max_{x\in[\frac{1}{3},1]} |f'(x)|$.
(e) Find a number $k \in (0,1)$ such that
$|f'(x)| \le k$ for all $x \in [\frac{1}{3},1]$
(f) What can you conclude about using a fixed point iteration?
(g) Find the fixed point with accuracy $10^{-3}$. How many iterations were needed? (provide the MATLAB code)
(h) Use the theoretical result given in class ($|p - p_n| \le k^n(b-a)$) to estimate the number of iterations
required to achieve $10^{-3}$ accuracy.

Homework # 2

Due Monday, 10/11

1. Find the solution of
$x^3 = x^2 + x + 1$
using:
(a) the bisection method (What initial interval did you choose? How did you choose it?)
(b) Newton's method (What initial iterate did you choose? How did you choose it?)
(c) the secant method (What initial iterates did you choose? How did you choose them?)
In each case, use the stopping criterion $|p_n - p_{n-1}| \le 10^{-6}$. How many iterates were needed in each
case to reach this accuracy? Which method is the fastest to converge? The slowest?

(Hint: Before doing any coding, plot the function with MATLAB so that you can see roughly where
the zero is.)
2. Repeat problem 1, but this time with the equation
$e^x = \dfrac{1}{0.1 + x^2}$

3. Newton’s method is the commonly used method for calculating square roots on a computer.

(a) What equation would you solve in order to find a?
(b) Show that in this case, Newton’s method reduces to the following fixed point iteration:
 
1 a
pn+1 = pn + (1)
2 pn

(c) Explain with a picture why, for any initial iterate p0 > 0, Newton’s method will converge (I am
not looking for a precise answer: just draw a picture and write something short).

(d) Use iteration (1) to find 57 with accuracy 10−10 . Start with the initial iterate p0 = 57. How
many iterates were needed?

4. In class we have proven that Newton's method converges by viewing it as a fixed point
iteration. In this problem, you will prove it in a different way, without using a fixed point
iteration.

Throughout the problem, we will assume that
$f(x), f'(x), f''(x)$ are continuous,
$f(p) = 0$, $f'(p) \ne 0$.
The sequence $p_n$ is defined by
$p_{n+1} = p_n - \dfrac{f(p_n)}{f'(p_n)}$
We will prove the following statement: If $p_0$ is chosen sufficiently close to $p$, then $\lim_{n\to\infty} p_n = p$
and the iterates have order of convergence 2.

(a) Use Taylor’s theorem to show that there exist a numbers ξn between p and pn such that
f 00 (ξn )
p − pn+1 = −(p − pn )2 (2)
2f 0 (pn )
(b) Explain why it is possible to pick a (small) interval I = [p − δ, p + δ] on which f 0 (x) 6= 0.
(c) Since f 0 (x) 6= 0 on I, then minx∈I |f 0 (x)| =
6 0 and therefore the number
maxx∈I |f 00 (x)|
M= .
2 minx∈I |f 0 (x)|
is well defined. Show that if |pn − p| ≤ δ, then
2
|p − pn+1 | ≤ M |p − pn | (3)
and therefore, multiplying both side by M :
2
M |p − pn+1 | ≤ (M |p − pn |) (4)

(d) Pick the initial iterate p0 such that


|p − p0 | ≤ δ (5)
1
|p − p0 | < (6)
M
1
i. Use inequalities (3) and (6) to prove that |p − p1 | ≤ M .
ii. Use inequalities (3), (5) and (6) to prove that |p − p1 | ≤ δ. ( There is a hint at the end of the
problem. )
(e) We have just proven that:
1
If |p − p0 | ≤ δ and |p − p0 | < (7)
M
1
then |p − p1 | ≤ δ and |p − p1 | < (8)
M
We can do it again to get
1
|p − p2 | ≤ δ and |p − p2 | <
M
and of course, iterating the argument:
1
|p − pn | ≤ δ and |p − pn | < (9)
M
Use (4) and part of (9) to prove that
2n
M |p − pn | ≤ (M |p − p0 |) (10)
Make sure to explain why (9) is needed.
(f) Use all the above to prove the following statement: “If p0 is chosen sufficiently close to p,
then limn→∞ pn = p and the iterates have order of convergence 2 ”

Hint for (d)ii): Note that, because of (6), M |p − p0 | < 1 and therefore, using (3), we get
|p − p1 | ≤ (M |p − p0 |) |p − p0 | ≤ |p − p0 |
Then you are almost done...

Homework #3

Due Monday, 10/18

1. We have seen in class that, if $f(x)$ has a root of multiplicity $m \ge 2$, then Newton's method converges
only with order 1. Here is a modified version of Newton's method:
$p_{n+1} = p_n - m\,\dfrac{f(p_n)}{f'(p_n)}$
Show that, if it converges, then it has to converge with order 2.

2. Let $f(x) = e^x - x - 1$
(a) Plot $f(x)$ with MATLAB. Where is the zero (just guess from the picture)?
(b) Show that the zero is actually where you thought it was, and show that it has multiplicity 2.
(c) How many iterations are needed with Newton's method to find this zero with accuracy $10^{-10}$?
(start with $p_0 = 1$)
(d) How many iterations are needed with the modified version of Newton's method that we have seen
in problem 1?
3. Prove by induction that $(111\ldots1)_2 = 2^n - 1$ (the binary number contains n digits, all equal to
one).

4. Below are the IEEE floating point representations of three numbers

0 10000001010 10010011000000000000000000 00000000000000000000000000 (1)


1 10000001010 10010011000000000000000000 00000000000000000000000000 (2)
0 01111111111 01010011000000000000000000 00000000000000000000000000 (3)

(a) What are these numbers?


(b) Complete the following sentences:
The "spacing" between number (1) and the closest next machine number is $2^{-m}$ where m =
The "spacing" between number (3) and the closest next machine number is $2^{-m}$ where m =
Show your reasoning (no hand-waving argument, be clear).
5. (a) What is the largest possible machine number?
(b) What is the smallest possible machine number?
(c) What is the smallest possible “spacing” between two machine numbers?
(d) What is the IEEE floating point representation of one billion? What is the “spacing” between
one billion and the next machine number?

Homework # 4

Due Friday, 10/29

1. (a) Use Lagrange's formula to find the unique polynomial of degree ≤ 2 which passes through the three
points (0,1), (−1,2) and (1,3).
(b) Do some algebra to write the polynomial as follows:
$p(x) = a_2 x^2 + a_1 x + a_0$

2. Consider the function $f(x) = e^{-x}$. We are going to approximate it with a polynomial of degree 2 and
we are going to study how good this approximation is.

(a) Find a polynomial $p_2(x) = a_2x^2 + a_1x + a_0$ which approximates $f(x)$ on the interval $[-1,1]$. Choose
your polynomial so that $p_2(-1) = f(-1)$, $p_2(0) = f(0)$ and $p_2(1) = f(1)$. Use Lagrange's formula
and then do some algebra to find $a_0$, $a_1$ and $a_2$.
(b) Plot both $f$ and $p_2$ with MATLAB on, say, the interval $[-2,2]$.
(c) Write a MATLAB code to find the maximum distance between the curve representing the polynomial
and the curve representing the function $f$ on the interval $[-1,1]$. In other words, define the error
by $E(x) = |f(x) - p_2(x)|$ and find the maximum value of $E$ on the interval $[-1,1]$ using your
MATLAB code:
$\max_{x\in[-1,1]} E(x) = $

(d) Use the error formula that we derived in class to get an upper bound on the error on the interval
$[-1,1]$ of the type
$E(x) \le C\,|(x-a)(x-b)(x-c)|$ for all $x \in [-1,1]$
I want numerical values for $C$, $a$, $b$ and $c$. I want the best $C$ possible.
(e) Make a sketch of the function $(x-a)(x-b)(x-c)$ from the previous question. Find analytically
where its max on $[-1,1]$ is and what this max is. Then write
$E(x) \le M$ for all $x \in [-1,1]$
I want a numerical value for $M$.
(f) Do (c) and (e) agree with one another?
(g) Use Taylor's theorem to find the best polynomial of degree 2 which approximates $f(x)$ around
$x = 0$. This means that the polynomial and $f$ must have the same value at $x = 0$ and their
derivatives must match up to order 2 at $x = 0$. Estimate the error on the interval $[-1,1]$ using
the error formula coming from Taylor's theorem:
$E(x) \le C(x-a)^3$ for all $x \in [-1,1]$
I want numerical values for $C$ and $a$. I want the best possible $C$. Then use this to find a bound
for the error on the interval $[-1,1]$:
$E(x) \le M$ for all $x \in [-1,1]$
I want a numerical value for $M$.
(h) Which of the two polynomials gives the best approximation? The one with one node or the one
with three nodes?

Homework # 5

Due Friday, 11/05

1. (a) Write down the statement of the Intermediate Value Theorem.

(b) Use the Intermediate Value Theorem to prove
Theorem 1. Suppose $g(x)$ and $w(x)$ are continuous on $[a,b]$. Suppose also that $w(x) \le 0$ for all
$x \in [a,b]$. Then there exists $\theta \in [a,b]$ such that
$\int_a^b g(x)w(x)\,dx = g(\theta)\int_a^b w(x)\,dx$

Your proof has to be more detailed than the one I did in class. Be more precise on how to use
the Intermediate Value Theorem in order to conclude the proof.
2. In order to derive Simpson’s rule, we proved that, if p(x) is the unique polynomial of degree ≤ 2 which
interpolates the function f (x) at x0 , x1 and x2 , then
Z x2
h
p(x)dx = (f (x0 ) + 4f (x1 ) + f (x2 )) where h = x1 − x0
x0 3

In this problem we will do the same for the trapezoidal rule. Let p(x) be the unique polynomial of
degree ≤ 1 which interpolate the function f (x) at x0 and x1 . Prove that:
Z x1
h
p(x)dx = (f (x0 ) + f (x1 )) where h = x1 − x0
x0 2

Remark: In class we derived this result by drawing a picture and calculating the area of a trapezoid.
Here I am asking you to derive this result in a more rigourous way: First use Lagrange’s formula and
then integrate the polynomial that you obtained.
3. While proving that the error in the trapezoidal rule is $O(h^2)$, we used the fact that:
$\int_{x_3}^{x_4}(x - x_3)(x - x_4)\,dx = -\dfrac{1}{6}\left(x_4^3 - 3x_4^2x_3 + 3x_4x_3^2 - x_3^3\right) = -\dfrac{(x_4 - x_3)^3}{6}$

Check that this identity is indeed true.


4. (a) Compute $\int_0^1 e^x\,dx$ by hand.
(b) Let $f(x) = e^x$. What are
$\max_{x\in[0,1]} |f''(x)|$ and $\max_{x\in[0,1]} |f^{(4)}(x)|$?
(c) Suppose you want to use the trapezoidal rule to compute $\int_0^1 e^x\,dx$. How many node points will
be necessary in order to be sure that the error is less than $10^{-8}$? To do this you will have to use
the following corollary that was proven in class:
Corollary 1. Suppose $f(x)$ is twice continuously differentiable and let $M = \max_{x\in[a,b]} |f''(x)|$,
then
$\left|\int_a^b f(x)\,dx - I_n^{[a,b]}(f)\right| \le \dfrac{(b-a)M}{12}\,h^2$

(d) Use the trapezoidal rule to compute $\int_0^1 e^x\,dx$ with the number of node points from the previous
question. What is the actual error? Smaller or bigger than $10^{-8}$? How can you explain this?
Provide your MATLAB code.
(e) Repeat (c) and (d) but with Simpson's rule. Compare the number of node points needed for these
two methods in order to reach a $10^{-8}$ precision.

Homework #6

Due Monday, 11/15

R1
1. Let InT (ex ) be the approximation of 0 ex dx that you get by using the trapezoidal rule with n + 1 node
R1
points (x0 , x1 , . . . , xn ). Similarly, let InS (ex ) be the approximation of 0 ex dx that you get by using
Simpsons rule with n + 1 node points. Define the errors:
Z 1 Z 1
T x T x S x S x

En = e dx − In (e ) En =
e dx − In (e )
0 0

(a) Write a program which computes $E_n^T$ and $E_n^S$ for $n = 10, 110, 210, 310, 410, \ldots, 2010$ and then plots
$\log E_n^T$ versus $\log n$ and $\log E_n^S$ versus $\log n$. Use the command plot( , ,'x') so that you just see the
data points. You should obtain lines.
(b) What are the equations of these lines? (You can use the MATLAB command "basic fitting" and do a
linear fit. You will find this command in "Tools", just above your plot.)
(c) Why are the slopes −2 and −4? How can you explain this with the error formula we have derived
in class?
R1
2. In the previous problem we computed 0 ex dx with the trapezoid rule and Simpsons rule. It was not
very usefull because this integral could be computed by hand. But it R 1was 2good because it allowed
us to study numerically the error. In this problem you will compute 0 e−x dx with Simpson’s rule.
Your answer should be accurate up to 7 digits after the point (i.e: the 7 first digits after the point
should be exact). Note that this integral can not be computed analitically. In this case we really need
a computer.

(a) How many node points are necessary in order to be sure that the first seven digits after the decimal point
are correct? (Hint: using MAPLE, I found that $\max_{x\in[0,1]} |f^{(4)}(x)| = 12$, where $f(x) = e^{-x^2}$.)
(b) Do the computation with this number of node points.
3. Since Simpson's rule has degree of precision 3, if you compute $\int_0^1 x^3\,dx$ with Simpson's rule, then the
error should be exactly zero. Use Simpson's rule to compute $\int_0^1 x^3\,dx$ with $n = 100$. What is
$\left|\int_0^1 x^3\,dx - I_n^{[0,1]}(x^3)\right|$?

How can you explain this difference between the theory and the actual computation?
4. (a) What system of nonlinear equations would you solve in order to find $w_1, w_2, w_3, x_1, x_2$ and $x_3$
such that the integration formula
$I_3(f) = w_1f(x_1) + w_2f(x_2) + w_3f(x_3)$
is exactly equal to $\int_{-1}^1 f(x)\,dx$ for every polynomial $f(x)$ of degree $\le 5$.
(b) Check that the nodes and weights from the table I gave you in class do satisfy this system.
5. Gaussian quadrature vs. Simpson's rule
Let
$f(x) = \dfrac{1}{x + 1.5}$

(a) Sketch or plot $f(x)$ on $[-1,1]$ with MATLAB.
(b) Compute $\int_{-1}^1 f(x)\,dx$ by hand.
(c) Write a code to compute $\int_{-1}^1 f(x)\,dx$ using Gaussian quadrature with 7 node points $x_1, x_2, \ldots, x_7$
(hint: store the nodes and weights of the table in two vectors "node(i)" and "w(i)"). What is the
error?
(d) Write a code to compute $\int_{-1}^1 f(x)\,dx$ using Simpson's rule with node points $x_0, x_1, \ldots, x_6$ (so that
you have a total of 7 node points, as in question (c)). What is the error?
(e) Which method works best?

Homework #7

Due Monday, 11/29

1. Log-log plot / Numerical error

Let $f(x) = e^x$. Define the errors made by approximating $f'(2)$ with the forward difference and the
symmetric difference:
$E_F(h) = \left|f'(2) - \dfrac{f(2+h) - f(2)}{h}\right|$   $E_S(h) = \left|f'(2) - \dfrac{f(2+h) - f(2-h)}{2h}\right|$

(a) Write a program which computes $E_F(h)$ for $h = 10^{-1}, 10^{-2}, \ldots, 10^{-5}$ and then plots $\log_{10}(E_F(h))$
versus $\log_{10}(h)$. Use the command plot( , ,'x') so that you just see the data points. Do the same
thing with $E_S(h)$.
Remark: This time we use $\log_{10}$ (log base 10) instead of log. The reason for doing this is
simply that the plot you obtain is more readable. For example, with the symmetric difference,
you should obtain a point whose coordinates are something like (−3,−6). This means that, when
$h = 10^{-3}$, the error is roughly $10^{-6}$ (which of course is what you expect since the symmetric
difference is $O(h^2)$).
(b) What are the equations of these lines? (You can use the MATLAB command "basic fitting" and do
a linear fit. You will find this command in "Tools", just above your plot.)
(c) Why are the slopes 1 and 2? (We know from the theory that $E_F(h) \le \frac{M_2}{2}h$ and $E_S(h) \le \frac{M_3}{6}h^2$.
In this question, you can assume that there exist constants $C_1$ and $C_2$ such that $E_F(h) = C_1h$
and $E_S(h) = C_2h^2$.)
(d) Repeat question (a), but this time with $h = 10^{-1}, 10^{-2}, \ldots, 10^{-10}$. In class we studied the
effect of the finite precision of the computer when doing a symmetric difference. Does the log-log
plot that you obtain for the symmetric difference roughly match this very rough analysis?
(e) Study the effect of the finite precision of the computer when doing a forward difference (just mimic
what we did in class with the symmetric difference). From this analysis, what is the optimal $h$
that one should use? What is the best accuracy that one can get? Does this match what you
observe on your log-log plot?

2. Prove the following theorem and corollary:

Theorem 1. Suppose $f$ is three times continuously differentiable. Then there exist $\xi_1 \in [x_n - h, x_n]$ and
$\xi_2 \in [x_n - 2h, x_n]$ such that
$f'(x_n) - \dfrac{3f(x_n) - 4f(x_n - h) + f(x_n - 2h)}{2h} = \dfrac{h^2}{3}\left(2f'''(\xi_2) - f'''(\xi_1)\right)$
Corollary 1. Let $M = \max_{x\in[a,b]} |f'''(x)|$ and suppose $x_n = b$ is the end-point. Then
$\text{error} = \left|f'(x_n) - \dfrac{3f(x_n) - 4f(x_n - h) + f(x_n - 2h)}{2h}\right| \le Mh^2$

Homework #8

Due Friday 12/03

Define the $n \times n$ matrix $A_n$ and the $n \times 1$ vector $b_n$ by

$A_n = \begin{pmatrix} -2 & 1 & 0 & \cdots & 0 & 0 \\ 1 & -2 & 1 & \cdots & 0 & 0 \\ 0 & 1 & -2 & \ddots & & \vdots \\ \vdots & & \ddots & \ddots & \ddots & 0 \\ 0 & 0 & & 1 & -2 & 1 \\ 0 & 0 & \cdots & 0 & 1 & -2 \end{pmatrix}, \qquad b_n = \begin{pmatrix} 3 \\ 3 \\ \vdots \\ 3 \\ 3 \end{pmatrix}$

that is, $A_n$ is tridiagonal with $-2$ on the diagonal and $1$ on the sub- and super-diagonals, and every entry of $b_n$ equals 3. For example,

$A_6 = \begin{pmatrix} -2 & 1 & 0 & 0 & 0 & 0 \\ 1 & -2 & 1 & 0 & 0 & 0 \\ 0 & 1 & -2 & 1 & 0 & 0 \\ 0 & 0 & 1 & -2 & 1 & 0 \\ 0 & 0 & 0 & 1 & -2 & 1 \\ 0 & 0 & 0 & 0 & 1 & -2 \end{pmatrix}, \qquad b_6 = \begin{pmatrix} 3 \\ 3 \\ 3 \\ 3 \\ 3 \\ 3 \end{pmatrix}$

The goal of this homework is to solve $A_n x = b_n$ for large $n$ and to study how
long it takes for the computer to do so.
Remark: The matrix $A_n$ occurs, for example, when one wants to solve the heat equation on a rod. The
solution of $Ax = b$ will be the heat distribution on the rod: x(i) will be the temperature at the node
point $x_i = ih$. Typically, when dealing with this type of problem, $n$ is very large; that is why we need good
algorithms. And, as I said in class, Gaussian elimination is a very bad one. For this specific problem, in
practice, people use algorithms that exploit the fact that "most" of the matrix $A_n$ is filled with zeros.

1. Write a MATLAB code init.m ("initialize") which creates $A_n$ and $b_n$.

2. Write a code fact.m which does the LU-factorization of a given matrix without pivoting. Given a
matrix $A$, your code should return a matrix in which the row multipliers $m_{i,j}$ are stored in the
lower triangular part of the matrix, and the $u_{i,j}$ are stored in the upper triangular part of the matrix.
The code I wrote is 8 lines long. Use your code to find the LU factorization of the matrix $A_6$. Use
the lu command of MATLAB to check that your code is correct (MATLAB has a built-in algorithm which
computes the LU factorization of a matrix: just type "lu(A)").
3. Write a code rhs.m ("right-hand side") which uses the $m_{i,j}$ computed by fact.m to do the modification
from $b$ to $g$ (I am using the same notation as in class). The code I wrote is 5 lines long. Write a
last code backsub.m which uses the $u_{i,j}$ computed by fact.m and the $g_i$ computed by rhs.m to find
the solution $x$ of $Ax = b$. The code I wrote is 8 lines long. Use your four codes to solve $A_6x = b_6$.

If you did everything correctly, you just have to type in the MATLAB window:

init
fact
rhs
backsub

Check your answer using the "backslash" MATLAB command: if you want MATLAB to solve $Ax = b$,
you need to type:

A\b' if you have entered b as a row vector (b' means transpose of b)

A\b if you have entered b as a column vector.
4. Solve $A_{500}x = b_{500}$. What are $x_1$ and $x_{100}$? Use the "backslash" command of MATLAB to check your
answer.
5. We will now see how long it takes for each of the three parts of your code to solve $A_{500}x = b_{500}$. As
you know:
• number of operations performed in fact.m ≈ $n^3/3$
• number of operations performed in rhs.m ≈ $n^2/2$
• number of operations performed in backsub.m ≈ $n^2/2$

(Here we count only the multiplications and divisions; the number of additions and subtractions is about
the same.) So of course we expect fact.m to take much more time than rhs.m and backsub.m. Use
the commands "tic" and "toc" of MATLAB to find the computing time for each of the three parts of your
code (write tic at the beginning of each code, and toc at the end).
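For instance, a minimal timing sketch might look like the following (this assumes the scripts init.m, fact.m, rhs.m and backsub.m from questions 1-3 are on the MATLAB path; alternatively, put tic/toc inside each file as suggested above):

init                        % builds A and b (set n = 500 inside init.m for this question)
tic; fact;    t_fact = toc
tic; rhs;     t_rhs  = toc
tic; backsub; t_back = toc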

6. We will now solve $A_{1000}x = b_{1000}$. Note that $n$ is twice as large as in the previous question. The
number of operations performed by fact.m should be how many times larger? Use the tic and toc
commands of MATLAB to see if you get a computing time which matches the theory.
7. Solve $A_{1000}x = b_{1000}$. Use the commands tic and toc to find the computing time for each of the three
parts of your code. Use the backslash command of MATLAB to check that x(1) and x(100) are correct.

8. Solve $A_{2000}x = b_{2000}$. How long does it take? Then solve $A_{2000}x = c_{2000}$ where
$c = \begin{pmatrix} -3 \\ 3 \\ -3 \\ 3 \\ \vdots \\ -3 \\ 3 \end{pmatrix}$
Since you have already solved $A_{2000}x = b_{2000}$, it should be very quick to solve $A_{2000}x = c_{2000}$: you just
need to use rhs.m and backsub.m. How long does it take?

Math 135A
October 4, 2010

Homework 1
1.
(a) (Plot made in Wolframalpha.com; zoomed in several times.)
Estimate for the zero: x = 0.6412
(b) function [ a, b, k ] = hw1b(f, a, b, er)
f = inline(f);   % f is a string of characters from the input on the command line
if (f(a)*f(b) > 0) || (a >= b)
    error('Does not satisfy conditions for bisection method');
end
% k is the iteration count, a is the left endpoint, b is the right endpoint,
% x(k) is the midpoint
k = 1; x(k) = (b+a)/2;
while (abs(b-a)/2 > er && f(a)*f(b) < 0)
    if (f(x(k))*f(a) < 0)
        b = x(k);
    else
        a = x(k);
    end
    k = k + 1;
    x(k) = (b+a)/2;
end
After running the code: [a, b, k]=hw1b('x-2.^-x', -5, 5, 10.^-5), we get 0.6412 after 20 iterations
(c) Solving analytically we have
$10^{-5} = \dfrac{b-a}{2^{n+1}} = \dfrac{10}{2^{n+1}}$
Solving for $n$ yields $n \approx 18.93$, which is close to 20.
2.
(a) (Plot of y = x and y = tan x.)
(b) 4.4934
MATLAB code: the same bisection function hw1b from problem 1(b) was used.
3. Define g(x) = x − f(x). Then g(a) = a − f(a) < 0 and g(b) = b − f(b) > 0, so by applying the Intermediate
Value Theorem there exists a number p between a and b such that g(p) = 0, thus f(p) = p, which is the
fixed point. Q.E.D.
4.
(a) (Sketch of f(x) = ln(x + 2).)
(b) f(0) = ln(2) > 0, f(2) = ln(4) < 2
(c) Yes, it is true, which implies that there exists a fixed point on [0, 2].
(d) The maximum is 1/2.
(e) k = 1/2
(f) We can use fixed point iteration to find the fixed point (the iteration converges).
(g) Between 1.1445 and 1.1465, 11 iterations (used same code as above).
(h) 10.9658
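For reference, the estimate in (h) comes from the bound given in the problem with $k = \frac{1}{2}$ and $b - a = 2$ (the values found in (d)-(e) above):
$k^n(b-a) \le 10^{-3} \iff \left(\tfrac{1}{2}\right)^n \cdot 2 \le 10^{-3} \iff n \ge \log_2(2000) \approx 10.97$
so about 11 iterations are predicted, which matches the 11 iterations observed in (g).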
5.
(a) (Sketch of f(x) = e^{-x}.)
(b) $f([\frac{1}{3},1]) \approx [0.36787, 0.71653]$
(c) It is true; there exists a fixed point on $[\frac{1}{3},1]$.
(d) The max is 1/3.
(e) k = 1/3
(f) Fixed point iteration allows us to see whether the iteration with $e^{-x}$ diverges or converges.
(g) Between 0.5664 and 0.5684, 11 iterations
[a, b, k]=hw1b('exp(-x)-x',0,2,10^-3)
(h) Number of iterations: 5.918

Math 135A
October 11, 2010

Homework 2
1.
(a) $x^3 = x^2 + x + 1$
Bisection method: 1.8393, 21 iterations for $10^{-5}$ accuracy; the interval was chosen via the plot, between 1 and 2.

(b) Initial iterate: 2, chosen by looking at the plot; 1.83928676, 4 iterations.

(c) Initial iterates: between 1 and 2, chosen by looking at the plot; 1.8393, 7 iterations.

Fastest convergence: Newton's method; slowest: bisection.


2.
(a) Bisection: 0.6498, 18 iterations
(b) Newton's: 0.6498, 5 iterations
(c) Secant: 0.6498, 7 iterations

Bisection is the slowest, Newton's is the fastest.


3. Roots
(a) Solve $f(x) = x^2 - a = 0$ to find the root.
(b) Newton's method reduction. We have
$g(x) = x - \dfrac{f(x)}{f'(x)}$
then
$g(x) = x - \dfrac{x^2 - a}{2x}$
which gives
$g(x) = x - \dfrac{x^2}{2x} + \dfrac{a}{2x}$
therefore
$g(x) = x - \dfrac{x}{2} + \dfrac{a}{2x}$
which is
$g(x) = \dfrac{1}{2}\left(x + \dfrac{a}{x}\right)$
thus
$p_{n+1} = \dfrac{1}{2}\left(p_n + \dfrac{a}{p_n}\right)$
(c) As long as $p_0 \ne 0$ ($p_n$ appears in the denominator), the iterates keep getting smaller and smaller after the
first step and eventually converge to the answer.
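A minimal MATLAB sketch of iteration (1) applied to a = 57 with p0 = 57 and the stopping criterion |pn − pn−1| ≤ 10^(-10) (the variable names are illustrative, not taken from the original solution):

a = 57; p = a;                  % initial iterate p0 = 57
tol = 1e-10; k = 0;
pold = Inf;
while abs(p - pold) > tol
    pold = p;
    p = (pold + a/pold)/2;      % iteration (1)
    k = k + 1;
end
[p, sqrt(57), k]                % approximate root, MATLAB's sqrt, iteration count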
4. Proof of the convergence of Newton's method
(a) Since
$p_{n+1} = p_n - \dfrac{f(p_n)}{f'(p_n)}$
we can expand $f(p)$ in a Taylor series around $p_n$ (first two terms plus remainder with $\xi$):
$f(p) = f(p_n) + (p - p_n)f'(p_n) + \dfrac{1}{2}(p - p_n)^2 f''(\xi)$
Since $f(p) = 0$, dividing by $f'(p_n)$ gives
$\dfrac{f(p_n)}{f'(p_n)} + (p - p_n) + \dfrac{1}{2}(p - p_n)^2\dfrac{f''(\xi)}{f'(p_n)} = 0$
Rearranging to isolate $p_n$:
$p_n = \dfrac{f(p_n)}{f'(p_n)} + \dfrac{1}{2}(p - p_n)^2\dfrac{f''(\xi)}{f'(p_n)} + p$
Plugging this back into the Newton's method definition of $p_{n+1}$,
$p_{n+1} = \dfrac{f(p_n)}{f'(p_n)} + \dfrac{1}{2}(p - p_n)^2\dfrac{f''(\xi)}{f'(p_n)} + p - \dfrac{f(p_n)}{f'(p_n)}$
which simplifies to
$p_{n+1} = \dfrac{1}{2}(p - p_n)^2\dfrac{f''(\xi)}{f'(p_n)} + p$
thus
$p - p_{n+1} = -(p - p_n)^2\dfrac{f''(\xi)}{2f'(p_n)}$

(b) Since $f'$ is continuous and $f'(p) \ne 0$, we can pick a $\delta$-neighborhood of $p$ on which $f'$ does not change
sign; thus $f'(x) \ne 0$ holds on that neighborhood.
(c) We know that $|p_n - p| > |p_{n+1} - p|$: after every iteration the sequence is converging and the distance
from $p$ gets smaller and smaller. Since $|p_n - p| \le \delta$, using $p - p_{n+1} = -(p - p_n)^2\dfrac{f''(\xi)}{2f'(p_n)}$ this implies
$|p - p_{n+1}| \le M\,|p - p_n|^2$
and therefore
$M|p - p_{n+1}| \le (M|p - p_n|)^2$

(d) Two parts.

i. Substituting $n = 0$ into equation (3) on the homework we have
$|p - p_1| \le M|p - p_0|^2$
and using equation (6),
$|p - p_1| \le M|p - p_0|^2 < \dfrac{M}{M^2}$
thus
$\dfrac{|p - p_1|}{M} \le |p - p_0|^2 < \dfrac{1}{M^2}$
and multiplying by $M$ we have
$|p - p_1| \le \dfrac{1}{M}$
ii. Since $M|p - p_0| < 1$ and
$|p - p_1| \le (M|p - p_0|)\,|p - p_0| \le |p - p_0|$
and we have $|p - p_0| \le \delta$, then
$|p - p_1| \le \delta$
(e) We use induction to prove
$M|p - p_n| \le (M|p - p_0|)^{2^n}$
So
$M|p - p_1| \le (M|p - p_0|)^2$
furthermore
$M|p - p_2| \le (M|p - p_1|)^2 \le (M|p - p_0|)^{2^2}$
Since, by (9), $|p - p_n| \le \delta$ at every step, inequality (4) can be applied at every step, and iterating the
argument we deduce
$M|p - p_n| \le (M|p - p_0|)^{2^n}$

(f) Using the above equation,
$\lim_{n\to\infty}(M|p - p_0|)^{2^n} = 0$
since $0 \le M|p - p_0| < 1$, so $M|p - p_n| \to 0$ and $\lim_{n\to\infty} p_n = p$.

Because of
$|p - p_{n+1}| \le M|p - p_n|^2$
the iterates have order of convergence 2.

Math 135A
October 18, 2010

Homework 3
1. Let
$g(x) = x - m\dfrac{f(x)}{f'(x)}$
By proving that $g'(p) = 0$, we prove that $p_{n+1} = g(p_n)$ converges with order 2. Since the root has
multiplicity $m$, we plug in
$f(x) = (x - p)^m h(x)$ where $h(p) \ne 0$
and its derivative
$f'(x) = m(x - p)^{m-1}h(x) + (x - p)^m h'(x)$
into the first equation; thus we have
$g(x) = x - m\dfrac{(x - p)^m h(x)}{m(x - p)^{m-1}h(x) + (x - p)^m h'(x)}$
which simplifies to
$g(x) = x - \dfrac{m\,h(x)(x - p)}{m\,h(x) + h'(x)(x - p)}$
Then the derivative is
$g'(x) = 1 - \left[\dfrac{m\,h(x)}{m\,h(x) + h'(x)(x - p)} + (x - p)\dfrac{d}{dx}\left(\dfrac{m\,h(x)}{m\,h(x) + h'(x)(x - p)}\right)\right]$
We can then find $g'(p)$:
$g'(p) = 1 - \left[\dfrac{m\,h(p)}{m\,h(p) + h'(p)(p - p)} + (p - p)\dfrac{d}{dx}\left(\dfrac{m\,h(x)}{m\,h(x) + h'(x)(x - p)}\right)\Big|_{x=p}\right]$
which simplifies to
$g'(p) = 1 - \dfrac{m\,h(p)}{m\,h(p)} = 0$
Thus it converges with order 2. Q.E.D.
2.
(a) (Graph of f(x) = e^x − x − 1.) The zero seems to be at x = 0.
(b) $f(0) = e^0 - 0 - 1 = 1 - 0 - 1 = 0$, thus $f(0) = 0$.
$\dfrac{f}{f'} = \dfrac{e^x - x - 1}{e^x - 1} = 1 - \dfrac{x}{e^x - 1}$
$f'(0) = f(0) = 0$ and $f''(0) = 1 \ne 0$, thus the zero has multiplicity 2.
(c) 28 iterations
(d) 5 iterations
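A minimal MATLAB sketch of how the iteration counts in (c) and (d) might be produced (standard Newton versus the modified iteration with m = 2 from problem 1; the function handles and the tolerance 1e-10 are illustrative assumptions, not part of the original solution):

f  = @(x) exp(x) - x - 1;
df = @(x) exp(x) - 1;
tol = 1e-10;
for m = [1 2]                       % m = 1: Newton, m = 2: modified Newton
    p = 1; pold = Inf; k = 0;
    while abs(p - pold) > tol
        pold = p;
        p = pold - m*f(pold)/df(pold);
        k = k + 1;
    end
    fprintf('m = %d: p = %g after %d iterations\n', m, p, k);
end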
3. So
$(1)_2 = 1\times 2^0 = 2^1 - 1$
$(11)_2 = 1\times 2^1 + 1\times 2^0 = 2^2 - 1$
$(111)_2 = 1\times 2^2 + 1\times 2^1 + 1\times 2^0 = 2^3 - 1$
Therefore
$(111\ldots1)_2 = 2^n - 1$
For the induction step, assume the above is true for $n$ digits; appending one more leading 1 gives
$(1\,\underbrace{11\ldots1}_{n})_2 = 2^n + (2^n - 1) = 2^{n+1} - 1$

4. IEEE floating point

(a) 3224, −3224, 339/256 ≈ 1.32421875
(b) We have
$\dfrac{2^{11}}{2^{52}} = \dfrac{1}{2^{41}}$
thus $m = 41$.
For number (3) the spacing is
$\dfrac{2^0}{2^{52}} = \dfrac{1}{2^{52}}$
thus $m = 52$.
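As a quick sanity check (not part of the original solution), MATLAB's eps function returns the spacing from its argument to the next larger machine number, so the two answers above can be verified directly:

log2(eps(3224))          % spacing around number (1): returns -41
log2(eps(339/256))       % spacing around number (3): returns -52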
5. -
(a) $(-1)^0\,2^{2^{11}-1-1023}\left(1 - \dfrac{1}{2^{52}}\right) \approx 3.59\times 10^{308}$
(b) $-3.59\times 10^{308}$
(c) $2^{-1075}$
(d) 10000011100

Math 135A
10/29/2010

Homework 4
1. Lagrange formula
(a) For (0,1), (−1,2) and (1,3): $P(x) = y_0\ell_0(x) + y_1\ell_1(x) + y_2\ell_2(x)$. Plugging in,
$\ell_0(x) = \dfrac{(x-(-1))(x-1)}{(0-(-1))(0-1)} = -(x+1)(x-1)$
$\ell_1(x) = \dfrac{(x-0)(x-1)}{(-1-0)(-1-1)} = \dfrac{x(x-1)}{2}$
$\ell_2(x) = \dfrac{(x-0)(x-(-1))}{(1-0)(1-(-1))} = \dfrac{x(x+1)}{2}$
thus
$P(x) = -(x+1)(x-1) + x(x-1) + \dfrac{3}{2}x(x+1)$
(b) Rewriting,
$P(x) = \dfrac{1}{2}\left(3x^2 + x + 2\right)$
2. $f(x) = e^{-x}$
(a) The interpolation points are $(-1, e)$, $(0, 1)$, $(1, e^{-1})$.
Using Lagrange's formula,
$\ell_0(x) = \dfrac{(x-0)(x-1)}{(-1-0)(-1-1)} = \dfrac{x(x-1)}{2}$
$\ell_1(x) = \dfrac{(x-(-1))(x-1)}{(0-(-1))(0-1)} = -(x-1)(x+1)$
$\ell_2(x) = \dfrac{(x-(-1))(x-0)}{(1-(-1))(1-0)} = \dfrac{x(x+1)}{2}$
So
$P(x) = \dfrac{e}{2}x(x-1) - (x-1)(x+1) + \dfrac{x(x+1)}{2e}$
which simplifies to
$P(x) = \dfrac{(e-1)^2}{2e}x^2 - \dfrac{e^2-1}{2e}x + 1$
(b) (Plot of $f$ and $p_2$ on $[-2,2]$.)

(c) 0.55169254
MATLAB code used:
f = @(x)-1*(exp(-x)-((exp(1)-1)^2/(2*exp(1))*x.^2-(exp(2)-1)/(2*exp(1))*x+1))
x=fminbnd(f,-1,1)
The output was 0.5517, thus
$\max_{x\in[-1,1]} E(x) = 0.5517$

(d) Error formula:
$\text{Error} = \dfrac{f^{(n+1)}(\xi(x))}{(n+1)!}(x - x_0)\cdots(x - x_n) = \dfrac{f'''(\xi(x))}{3!}(x - x_0)(x - x_1)(x - x_2) = \dfrac{f'''(\xi(x))}{3!}(x-0)(x-(-1))(x-1) = \dfrac{f'''(\xi(x))}{3!}\,x(x+1)(x-1)$
We know that $f''' = -e^{-x}$, whose maximum absolute value on $[-1,1]$ occurs at $-1$ and equals $e^{+1} = 2.71828$, so
$\text{error} \le \dfrac{2.71828}{6}\,\left|x(x+1)(x-1)\right|$
(e) (Sketch of $x(x+1)(x-1)$.) Using the $x$ value where the maximum occurs,
$E(x) \le \left|\dfrac{-1}{\sqrt{3}}\right| = 0.577350$
(f) The errors are close, but not quite the same; they agree with one another.
(g) Taylor series approximation of $f(x)$ around $x = 0$:
$1 - x + \dfrac{x^2}{2!} - \dfrac{\xi^3}{6}$
For the error formula from Taylor's theorem, we use the Lagrange form of the remainder term,
$\dfrac{f^{(n+1)}(\xi)}{(n+1)!}(x - a)^{n+1}$
Since $a = 0$ and $n = 2$,
$\dfrac{f^{(3)}(\xi)}{3!}(x - a)^3$
Thus
$C = \dfrac{1}{6}f^{(3)}(\xi)$
Using $\xi = -1$, we have
$C = e/6 = 0.45304$
The error bound is $0.45304\,x^3$ since $a = 0$.
(h) The best approximation is the one with three nodes

Math 135A
11/5/2010

Homework 5
1. -
(a) Intermediate Value Theorem: If $f$ is a continuous real-valued function on the interval $[a,b]$ and $h$ is a
number between $f(a)$ and $f(b)$, then there exists a $c \in [a,b]$ such that $f(c) = h$.
(b) $g$ is continuous on the interval of integration $[a,b]$, so it attains a minimum $m$ and a maximum $M$ there:
$m \le g(x) \le M$
Multiplying by $w(x) \le 0$ (which reverses the inequalities),
$m\,w(x) \ge g(x)w(x) \ge M\,w(x)$
Since $g(x)w(x)$ is continuous, integrating and dividing by $\int_a^b w(x)\,dx$ (which is $\le 0$, so the inequalities flip back) gives
$m \le \dfrac{\int_a^b g(x)w(x)\,dx}{\int_a^b w(x)\,dx} \le M$
Since $g$ attains every value in $[m, M]$ by the Intermediate Value Theorem, there exists $\theta \in [a,b]$ such that
$g(\theta)\int_a^b w(x)\,dx = \int_a^b g(x)w(x)\,dx$

2. So we have
$P(x) = ax + b$
$\int_{x_0}^{x_1}(ax + b)\,dx = \left[\dfrac{1}{2}ax^2 + bx\right]_{x_0}^{x_1} = \dfrac{x_1 - x_0}{2}\left(a(x_1 + x_0) + 2b\right)$
Since
$f(x_0) = ax_0 + b$ and $f(x_1) = ax_1 + b$,
we have
$f(x_1) + f(x_0) = a(x_0 + x_1) + 2b$
and therefore
$\int_{x_0}^{x_1} P(x)\,dx = \dfrac{x_1 - x_0}{2}\left(f(x_0) + f(x_1)\right) = \dfrac{h}{2}\left(f(x_0) + f(x_1)\right)$

3. The identity is indeed true:
$\int_{x_3}^{x_4}(x - x_3)(x - x_4)\,dx = \left[x_3x_4x - \dfrac{x_4x^2}{2} - \dfrac{x_3x^2}{2} + \dfrac{x^3}{3}\right]_{x_3}^{x_4} = \left[\dfrac{1}{6}x\left(6x_4x_3 - 3x_4x - 3x_3x + 2x^2\right)\right]_{x_3}^{x_4} = -\dfrac{(x_4 - x_3)^3}{6}$
QED
4. -

(a) Computing,
$\int_0^1 e^x\,dx = e^x\Big|_0^1 = e^1 - e^0 = e - 1$
(b) $\max_{x\in[0,1]}|f''(x)| = \max_{x\in[0,1]}|f^{(4)}(x)| = e^1 = e$
(c) Using
$\dfrac{(b-a)M}{12}h^2 = \dfrac{(1-0)^3 e}{12n^2} = 10^{-8}$
and solving for $n$ we have
$n \approx 4759$
For Simpson's rule the error bound gives
$10^{-8} = \dfrac{(b-a)}{180}Mh^4$, which yields $n \approx 35$.
(d) Trapezoidal rule: 1.718281834781444, error: $6.23\times 10^{-9}$
Simpson's rule: 1.718281834141971, error: $5.6\times 10^{-9}$

MATLAB code for Simpson's rule:

function [F] = simpson(f,a,b,n)
% f = name of function, a = initial value, b = end value,
% n = number of double intervals of size 2h
f = inline(f);
n = 2 * n;
h = (b - a) / n;
S = f(a);
for i = 1 : 2 : n-1
    x = a + h .* i;
    S = S + 4 * f(x);
end
for i = 2 : 2 : n-2
    x = a + h .* i;
    S = S + 2 * f(x);
end
S = S + f(b);
F = h * S / 3;

MATLAB code for the trapezoidal rule:

function [F] = trapezoid(f,a,b,n)
% f = name of function, a = initial value, b = end value, n = number of strips
f = inline(f);
h = (b - a) / n;
T = 0.5*f(a);
for i = 1 : 1 : n-1
    x = a + h .* i;
    T = T + f(x);
end
T = T + 0.5*f(b);
F = h*T;
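A usage sketch (assuming the two functions above are saved as simpson.m and trapezoid.m on the MATLAB path, and using the node counts n ≈ 4759 and n ≈ 35 found in (c) as the n arguments; the exact value of the integral is e − 1):

err_trap = abs(exp(1) - 1 - trapezoid('exp(x)', 0, 1, 4759))
err_simp = abs(exp(1) - 1 - simpson('exp(x)', 0, 1, 35))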

Math 135A
11/15/2010

Homework 6
1. Program code
(a) (See the previous homework for the simpson and trapezoid functions.)
% Simpson's rule
k = 10:100:2010;
y = [];
for i = 1:21
    y = [y abs(exp(1)-1-simpson('exp(x)',0,1,k(i)))];
end
ny = log10(y); nx = log10(k); plot(nx, ny, 'o')
% Trapezoidal rule
y = [];
for i = 1:1:21
    y = [y abs(exp(1)-1-trapezoid('exp(x)',0,1,k(i)))];
end
ny = log10(y);
plot(nx, ny, 'o')
(b) Here is the chart

(c) The error formula for the trapezoidal rule is $O(h^2)$, whereas for Simpson's rule it is $O(h^4)$; since $h = 1/n$,
the slopes are −2 and −4. Simpson's rule is better.
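A short worked version of this argument (assuming, as in the error formulas from class, that $E_n \approx C\,h^p$ with $h = 1/n$):
$E_n^T \approx C_T h^2 = C_T n^{-2} \;\Rightarrow\; \log E_n^T \approx \log C_T - 2\log n$
$E_n^S \approx C_S h^4 = C_S n^{-4} \;\Rightarrow\; \log E_n^S \approx \log C_S - 4\log n$
so the data points fall on lines of slope −2 and −4 in the log-log plot.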
2. -
(a) The error has to be on the order of $10^{-7}$. Using the error formula derived in class,
$10^{-7} = \dfrac{(b-a)\cdot 12\cdot h^4}{180}$
where $h = (b-a)/n$; we substitute this in and solve, obtaining $n \approx 29$.
(b) MATLAB code:
simpson('exp(-1*x*x)',0,1,29)
thus we have the answer
$7.333997511728708\times 10^{-1}$

3. Using Simpson's rule we compute with $n = 100$; the MATLAB code is

abs(0.25-simpson('x*x*x',0,1,100))
The error is
$5.551115123125783\times 10^{-17}$
The error is not the algorithm's fault but the computer's: there is a loss of precision due to the finite
precision of the computer.
4. We require exactness for every polynomial of degree ≤ 5,
$I_3(f) = I_3(\alpha x^5 + \beta x^4 + \gamma x^3 + \delta x^2 + \mu x + \nu)$

(a) So we have the system
$I_3(x^5) = \int_{-1}^1 x^5\,dx = 0 = w_1x_1^5 + w_2x_2^5 + w_3x_3^5$
$I_3(x^4) = \int_{-1}^1 x^4\,dx = \dfrac{2}{5} = w_1x_1^4 + w_2x_2^4 + w_3x_3^4$
$I_3(x^3) = \int_{-1}^1 x^3\,dx = 0 = w_1x_1^3 + w_2x_2^3 + w_3x_3^3$
$I_3(x^2) = \int_{-1}^1 x^2\,dx = \dfrac{2}{3} = w_1x_1^2 + w_2x_2^2 + w_3x_3^2$
$I_3(x) = \int_{-1}^1 x\,dx = 0 = w_1x_1 + w_2x_2 + w_3x_3$
$I_3(1) = \int_{-1}^1 1\,dx = 2 = w_1 + w_2 + w_3$

(b) (From page 276:)
$I_3^{[-1,1]} = 0.5555f(-0.7745) + 0.8888f(0) + 0.5555f(0.77459) = 3.06693$
The exact integral is
$\int_{-1}^{1}(x^5 + x^4 + x^3 + x^2 + x + 1)\,dx = 3.06667$
which agrees.
5. Gaussian quadrature vs. Simpson's rule
(a) (Graph of f(x) = 1/(x + 1.5).)
(b) Computing the integral analytically we have
$\int_{-1}^1 \dfrac{1}{x + 1.5}\,dx = \ln(2x+3)\Big|_{-1}^{1} = \ln(5) \approx 1.60943791243410037$
(c) Gaussian quadrature error:
The node points are ±0.9491079123427584, ±0.7415311855993945, ±0.4058451513773971 and 0.
(d) Simpson's rule error is calculated in SCILAB as
log(5)-simpson(-1,1,7,)
The error comes out to be 0.0001889. (log is the natural log here, as opposed to question 1 where
log is base 10.)
(e) The method that works best is Gaussian quadrature, which gives us an error 11 orders of magnitude
better.
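A minimal MATLAB sketch of the Gaussian quadrature computation in (c), storing the 7 nodes and weights in vectors as the problem suggests (the weights below are the standard 7-point Gauss-Legendre values as typically tabulated; they do not appear in the original solution):

node = [-0.9491079123427585 -0.7415311855993945 -0.4058451513773972 0 ...
         0.4058451513773972  0.7415311855993945  0.9491079123427585];
w    = [ 0.1294849661688697  0.2797053914892766  0.3818300505051189 0.4179591836734694 ...
         0.3818300505051189  0.2797053914892766  0.1294849661688697];
f = @(x) 1./(x + 1.5);
G = sum(w .* f(node));            % 7-point Gaussian quadrature on [-1,1]
err = abs(log(5) - G)             % compare with the exact value ln(5)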

Math 135A
11/29/2010

Homework 7
1. The MATLAB code is below.
(a)
h = [];
for i = 1:5
    h = [h 10^(-i)];
end
ef = []; es = [];
for i = 1:5
    ef = [ef abs(exp(2)-(exp(2+h(i))-exp(2))/h(i))];
    es = [es abs(exp(2)-(exp(2+h(i))-exp(2-h(i)))/(2*h(i)))];
end
nh = log10(h);
nef = log10(ef);
nes = log10(es);
plot(nh, nef, 'x');
plot(nh, nes, 'x');
(Plot of the data points.)

(b) $y_{EF} = 1.0031x + 0.57999$, $y_{ES} = 1.9644x + 0.019253$
(c) The slopes are 1 and 2 because $E_F \approx C_1 h$ and $E_S \approx C_2 h^2$, and $\log(h^2) = 2\log(h)$.
(d) So we have
h = [];
for i = 1:10
    h = [h 10^(-i)];
end
ef = []; es = [];
for i = 1:10
    ef = [ef abs(exp(2)-(exp(2+h(i))-exp(2))/h(i))];
    es = [es abs(exp(2)-(exp(2+h(i))-exp(2-h(i)))/(2*h(i)))];
end
nh = log10(h);
nef = log10(ef);
nes = log10(es);
plot(nh, nef, 'x');
plot(nh, nes, 'x');

Yes, it roughly matches.


(e) The optimal h is $10^{-6}$, which agrees with what is observed computationally.
2. Proof

(a) Theorem. We do a Taylor series expansion around $x_n$, evaluated at $x_n - h$:
$f(x_n - h) = f(x_n) - f'(x_n)h + f''(x_n)\dfrac{h^2}{2} - \dfrac{f'''(\xi_1)h^3}{3!}$
and at $x_n - 2h$:
$f(x_n - 2h) = f(x_n) - f'(x_n)2h + f''(x_n)\,2h^2 - \dfrac{f'''(\xi_2)\,8h^3}{3!}$
where $\xi_1 \in [x_n - h, x_n]$ and $\xi_2 \in [x_n - 2h, x_n]$. Combining the two expansions,
$-3f(x_n) + 4f(x_n - h) - f(x_n - 2h) = (-3 + 4 - 1)f(x_n) + (-4h + 2h)f'(x_n) + (2h^2 - 2h^2)f''(x_n) + \dfrac{4h^3}{3!}\left(2f'''(\xi_2) - f'''(\xi_1)\right)$
$= -2hf'(x_n) + \dfrac{4h^3}{3!}\left(2f'''(\xi_2) - f'''(\xi_1)\right)$
Thus, dividing by $-2h$,
$f'(x_n) = \dfrac{3f(x_n) - 4f(x_n - h) + f(x_n - 2h)}{2h} + \dfrac{h^2}{3}\left(2f'''(\xi_2) - f'''(\xi_1)\right)$
so the remainder term is
$\dfrac{h^2}{3}\left(2f'''(\xi_2) - f'''(\xi_1)\right)$
(b) Corollary. Using the triangle inequality,
$\text{error} \le \dfrac{h^2}{3}\left|2f'''(\xi_2) - f'''(\xi_1)\right| \le \dfrac{h^2}{3}\left(2\max_{x\in[a,b]}|f'''(x)| + \max_{x\in[a,b]}|f'''(x)|\right) = \dfrac{h^2}{3}\,3M = Mh^2$

Math 135A
December 3, 2010

Homework 8
1. init.m code
n = 6;
A = toeplitz([-2, 1, zeros(1, n-2)]);
b = 3*ones(n, 1);
2. fact.m code
n = max(size(A));   % n is the size of the matrix given
Ao = A;             % Ao is the matrix which will undergo the operations
m = A - A;          % m is a zero matrix with the same dimensions as A
for i = 2:n
    for j = i:n
        m(i,j-1) = Ao(j,i-1)/Ao(i-1,i-1);
        Ao(j,:) = Ao(j,:) - Ao(i-1,:)*m(i,j-1);
    end
end
U = Ao + m;
% we can add these matrices because one is lower triangular
% and the other is an upper triangular matrix
3. rhs.m code
bo = b;
for i = 2:n
    for j = i:n
        bo(j) = bo(j) - m(i,j-1)*bo(i-1);
    end
end
g = bo;
backsub.m code
x = zeros(n,1);
x(n) = g(n)/Ao(n,n);
for i = n-1:-1:1
    x(i) = (g(i) - Ao(i,i+1:n)*x(i+1:n))/Ao(i,i);
end
4. Table of x1 to x100, read column by column (values truncated at the decimal point):
-750 -8085 -15120 -21855 -28290 -34425 -40260 -45795 -51030 -55965
-1497 -8802 -15807 -22512 -28917 -35022 -40827 -46332 -51537 -56442
-2241 -9516 -16491 -23166 -29541 -35616 -41391 -46866 -52041 -56916
-2982 -10227 -17172 -23817 -30162 -36207 -41952 -47397 -52542 -57387
-3720 -10935 -17850 -24465 -30780 -36795 -42510 -47925 -53040 -57855
-4455 -11640 -18525 -25110 -31395 -37380 -43065 -48450 -53535 -58320
-5187 -12342 -19197 -25752 -32007 -37962 -43617 -48972 -54027 -58782
-5916 -13041 -19866 -26391 -32616 -38541 -44166 -49491 -54516 -59241
-6642 -13737 -20532 -27027 -33222 -39117 -44712 -50007 -55002 -59697
-7365 -14430 -21195 -27660 -33825 -39690 -45255 -50520 -55485 -60150
5. For A500 :
fact.m: Elapsed time is 1.369596 seconds.
rhs.m: Elapsed time is 0.008380 seconds.
backsub.m: Elapsed time is 0.011052 seconds.

6. The number of operations performed by fact.m should be $2^3 = 8$ times larger. The elapsed time is 11.404127
seconds, which is 8.3266 times larger than the time it took for $A_{500}$. The computing time matches the theory.
7. For A1000 :
fact.m: Elapsed time is 11.404127 seconds (from above)
rhs.m: Elapsed time is 0.031975 seconds.
backsub: Elapsed time is 0.033450 seconds.
x(1) = $-1.5\times 10^{3}$ and x(100) = $-1.3515\times 10^{5}$
8. For A2000
fact.m: Elapsed time is 97.856150 seconds.
rhs.m: Elapsed time is 0.124420 seconds.
backsub:Elapsed time is 0.089146 seconds.

MATLAB code to change from b to c: for i = 1:2:2000, b(i) = -b(i); end


For A2000 and c2000
rhs.m: Elapsed time is 0.125618 seconds.
backsub: Elapsed time is 0.084331 seconds.

Midterm 1
Math 135A: Numerical Analysis
Professor: Thomas Laurent
University of California, Riverside
1. [2 Points] In this problem, you don't need to justify your answers. (−0.5 point per mistake)
2. What is the order of convergence of the sequence you obtain if you apply Newton's method to the function
f (x) graphed below?

3. What is the order of convergence of the sequence you obtain if you apply the bisection method to the function
f (x) graphed below?

4. What is the order of convergence of the sequence you obtain if you apply Newton's method to the function
f (x) graphed below?

5. What is the order of convergence of the sequence you obtain if you apply a fixed point iteration to the function
f (x) graphed below?

6. What is the order of convergence of the sequence you obtain if you apply a fixed point iteration to the function
f (x) graphed below?

7. [4 Points] Suppose $p$ is a fixed point of $g$ and suppose that the sequence defined by $p_{n+1} = g(p_n)$ converges
to $p$. Suppose also that $g$ is three times continuously differentiable and $g'(p) = g''(p) = 0$. Show that the
sequence $\{p_n\}_{n\ge 0}$ converges to $p$ with order 3 (at least).

8. Let $f(x)$ be a continuous function which maps $[a,b]$ into itself (i.e., $f([a,b]) \subset [a,b]$). Use the intermediate
value theorem to prove that $f$ has a fixed point $p \in [a,b]$.
9. In this problem, we will study a variant of Newton's method. Let $f(x)$ be a smooth function (all the derivatives
exist and are continuous, so you don't need to worry about that). Let $p$ be the zero of $f(x)$. The sequence
$\{p_n\}_{n\ge 0}$ is defined as shown in the picture below.

In the picture, the line $L$ is tangent to the graph of $f(x)$ at the point $(p, 0)$ and the lines $L_0, L_1, L_2$
are parallel to the line $L$.
10. [2 points] Express $p_{n+1}$ as a function of $p_n$. (If this were Newton's method, the formula would be $p_{n+1} =
p_n - f(p_n)/f'(p_n)$. What is the formula for this variant of Newton's method?)
11. [3 points] Assuming that $p_n$ converges to $p$, what is the order of convergence? Circle the right answer:
$p_n$ converges linearly to $p$
$p_n$ converges at least quadratically to $p$.
Justify your answer (you have to use a theorem we proved in class).
12. [1 point] Explain in no more than three sentences why this variant of Newton's method is of no practical
interest for finding the zero of a function. (There is a very simple reason for that! It is just common sense, no
math needed here!)
13. [2 points]
14. What is the distance (or spacing, as I called it in class) between two adjacent machine numbers around 10,000?
Give an exact answer in the format $1/2^n$.
15. [1 point] Suppose you want to use Newton's method to find the zero of a function. You know that this zero
is somewhere around 10,000. Does it make sense to use the stopping criterion $|p_{n+1} - p_n| \le 10^{-10}$?
16. Write down the simplest possible MATLAB code to solve $x = 1 + 0.3\cos(x)$ with Newton's method. Use the
stopping criterion
$\dfrac{|p_n - p_{n-1}|}{|p_n|} \le 10^{-8}$
Choose your initial iterate to be 2. If your code compiles and gives the correct answer, then you get full credit.

Midterm 1 Solutions
1. Solutions
2. 2
3. 1

4. 1
5. 2
6. 1
7. Let $g(p) = p$. Use a Taylor series expansion of $g$ around $p$; since $g'(p) = g''(p) = 0$, the only terms that are left give
$g(p_n) = g(p) + \dfrac{g'''(\xi_n)(p_n - p)^3}{3!}$
where $\xi_n$ lies between $p_n$ and $p$. Thus
$\dfrac{|p_{n+1} - p|}{|p_n - p|^3} = \dfrac{|g'''(\xi_n)|}{3!}$
Because $\xi_n$ lies between $p_n$ and $p$, we have $\xi_n \to p$, so the ratio converges and the sequence converges with order (at least) 3.
8. Let $g(x) = f(x) - x$. Since $f$ maps $[a,b]$ into itself, $g(a) = f(a) - a \ge 0$ and $g(b) = f(b) - b \le 0$, so by the intermediate value theorem there is a point $p \in [a,b]$ where $g(p) = 0$, i.e., $f(p) = p$.
9. -
10. $p_{n+1} = p_n - f(p_n)/f'(p)$
11. Let $g(x) = x - f(x)/f'(p)$; then $g'(x) = 1 - f'(x)/f'(p)$ and we have $g'(p) = 1 - 1 = 0$, which implies that $p_n$
converges at least quadratically.
12. To find $p$ with this variant we would already need to know $f'(p)$, i.e., the derivative at the zero we are trying to find.
13. -
14. $2^{13}/2^{52} = 1/2^{39}$
15. Yes, it makes sense to use the stopping criterion $|p_{n+1} - p_n| \le 10^{-10}$ (the spacing between machine numbers around 10,000 is $1/2^{39} \approx 1.8\times 10^{-12}$, which is smaller than $10^{-10}$).
16. MATLAB code
p  = @(x) 1 + 0.3*cos(x) - x;     % we solve p(x) = 0
dp = @(x) -0.3*sin(x) - 1;
n = Inf;                          % previous iterate
x = 2;                            % initial iterate
while abs(x - n)/abs(x) > 10^(-8)
    n = x;
    x = n - p(n)/dp(n);
end

x will return the answer.

Midterm 2
1. [2 points] This problem is here to help you do the next one.
Let $f(x)$ be a smooth function (i.e., $f$ has as many continuous derivatives as you want).
2. On the picture below, draw the unique polynomial of degree zero which interpolates $f(x)$ at $x_0$.

3. Let $p_0(x)$ be this unique polynomial of degree zero which interpolates $f(x)$ at $x_0$. What is the error formula?
Complete the following:
For each $x$ there exists a $\xi$ in the interval ________ such that
$f(x) - p_0(x) = $
Since $\xi$ depends on $x$ we often write $\xi(x)$.
4. In this problem, we will study the numerical integration method described in the picture below:

The integral $\int_a^b f(x)\,dx$ is approximated by the sum of the areas of the rectangles.
We will assume throughout the problem that $f(x)$ has one continuous derivative. Also we will use the notation
$h = \dfrac{b-a}{n}$.

5. [3 points] As you know, the formula for Simpson's rule is
$I_n(f) = \dfrac{h}{3}\left(f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + \ldots + 4f(x_{n-1}) + f(x_n)\right)$
What is the formula for the integration method we are studying here?
$I_n(f) = $

Comments: No explanations needed, just give the formula.


6. [5 points] Let $E_0$ be the error made by our integration method on the first interval $[x_0, x_1]$:

$E_0$ is the area of the dark region.

Prove that there exists $\xi_0 \in [x_0, x_1]$ such that $E_0 = f'(\xi_0)h^2/2$.
Comments: Make sure to name the theorem you are using and to say why it is OK to use it.
7. As you know, for Simpson's rule, the error formula is
$\int_a^b f(x)\,dx - I_n(f) = -\dfrac{(b-a)f^{(4)}(\xi)}{180}\,h^4$

where $\xi$ is some number between $a$ and $b$. What is the error formula for our numerical integration method?
Comments: There are two steps in this proof. Provide a picture to make your explanations clearer (okay, one
picture: I am really looking for one specific picture). State which theorem you are using and say why it is
OK to use it. Make sure to say in which interval the $\xi_i$ lie. Your proof does not need to be more detailed than
the one I gave in class.
8. [5 points] Find $w_1$, $w_2$, and $x_2$ so that the numerical integration rule
$I_2(f) = w_1f(0) + w_2f(x_2)$
approximates
$\int_0^1 f(x)\,dx$
with degree of precision 2. In other words, you have to choose $w_1$, $w_2$, and $x_2$ so that $I_2(p) = \int_0^1 p(x)\,dx$ for every polynomial $p(x)$ of
degree $\le 2$.
Comment: Be careful, here we integrate from 0 to 1, and not from −1 to 1 as we have done in class with
Gaussian quadrature. Also the node $x_1$ is fixed to be equal to 0.

Midterm 2 Solutions

1. $[x_0, x]$; applying the mean value theorem, $f(x) - p_0(x) = f'(\xi(x))(x - x_0)$.
2. -
3. $I_n(f) = h\left(f(x_0) + f(x_1) + \ldots + f(x_{n-1})\right)$
4. We have
$E_0 = \int_{x_0}^{x_1} f(x) - p_0(x)\,dx = \int_{x_0}^{x_1} f'(\xi(x))(x - x_0)\,dx$
Using the weighted mean value theorem: if $w(x)$ does not change sign on $[a,b]$, there exists $\theta \in [a,b]$ such that
$\int_a^b w(x)f(x)\,dx = f(\theta)\int_a^b w(x)\,dx$
It is OK to apply it here with $w(x) = x - x_0 \ge 0$ on $[x_0, x_1]$, so there exists $\theta \in [x_0, x_1]$ such that
$\int_{x_0}^{x_1} f'(\xi(x))(x - x_0)\,dx = f'(\xi(\theta))\int_{x_0}^{x_1}(x - x_0)\,dx = f'(\xi(\theta))\,\dfrac{h^2}{2}$
thus
$E_0 = \dfrac{f'(\xi(\theta))h^2}{2}$
5. Error for the general case:
$\sum_{i=0}^{n-1} f'(\xi_i)\,\dfrac{h^2}{2}$
where $\xi_i \in [x_i, x_{i+1}]$; that is, the total error is
$\dfrac{h^2}{2}\sum_{i=0}^{n-1} f'(\xi_i)$
Let $M = \max_{x\in[a,b]} f'(x)$ and $m = \min_{x\in[a,b]} f'(x)$; then
$m \le f'(\xi_i) \le M$
which implies
$nm \le \sum_{i=0}^{n-1} f'(\xi_i) \le nM$
Divide the above by $n$ and invoke the intermediate value theorem applied to $f'$: there exists $\xi$ such that
$f'(\xi) = \dfrac{1}{n}\sum_{i=0}^{n-1} f'(\xi_i)$
Thus the error is
$\dfrac{h^2}{2}\,n\,f'(\xi) = \dfrac{(b-a)^2}{2n}\,f'(\xi)$
where $\xi \in [x_0, x_n]$.
6. Let $P(x) = \alpha x^2 + \beta x + \gamma$ be an arbitrary polynomial of degree $\le 2$. By linearity it is enough to require exactness on $x^2$, $x$ and $1$:
$\int_0^1 x^2\,dx = w_1\cdot 0 + w_2x_2^2 = \dfrac{1}{3}$
$\int_0^1 x\,dx = w_1\cdot 0 + w_2x_2 = \dfrac{1}{2}$
$\int_0^1 1\,dx = w_1 + w_2 = 1$

Solving, we have $x_2 = \dfrac{2}{3}$, $w_1 = \dfrac{1}{4}$ and $w_2 = \dfrac{3}{4}$.
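A quick numerical check of these values (not part of the original solution; the anonymous functions below are just for verification):

w1 = 1/4; w2 = 3/4; x2 = 2/3;
I2 = @(f) w1*f(0) + w2*f(x2);
% each difference below should be (numerically) zero
[I2(@(x) x.^2) - 1/3, I2(@(x) x) - 1/2, I2(@(x) 1 + 0*x) - 1]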
