
Numerical analysis: Root finding methods

Caroline Colijn Email: C.Colijn@bristol.ac.uk


Re-cap of last time


Most real-world problems do not have an explicit solution but do have a numerical answer.
How do we find those numerical answers (using computers)?
Problems / algorithms here are infinite: the exact answer cannot be achieved with finite work.
How to analyse: truncation error, efficiency, robustness.


Rough plan of course


Main sections:
Root finding:
  Interval bisection
  Fixed point iteration techniques
  Newton-type techniques (special fixed point methods)
Later:
  Finite differences and polynomial interpolation
  Numerical integration
  Ordinary differential equations


Root-finding techniques
[Figure: graphs of y = x and y = cos x, and of y = cos x − x with the line y = 0; the intersection/root is near x ≈ 0.74.]

Alternative (equivalent) questions:
fixed point form: what x solves x = g(x)?
zero-finding form: what x solves f(x) = 0?
For example, x = cos x is the fixed point form with g(x) = cos x; the equivalent zero-finding form is f(x) = x − cos x = 0.


Intermediate Value Theorem (IMVT)


(for robustness) How can we be sure that an equation has a root to find? How can we be sure to find it?
Intermediate value theorem: Suppose f : R → R is a continuous function, and a < b are such that f(a) and f(b) are nonzero and of opposite signs. Then there exists x* with a < x* < b such that f(x*) = 0.
(Sufficient criterion for existence of roots x* of f(x) = 0.) Obvious graphical interpretation!



Lesson number one


f(x) = x^2 − 1, a = −2, b = +2
f continuous
f(a) = f(b) = 3, same sign
IMVT hypotheses are not met
[Figure: y = x^2 − 1 on [a, b] = [−2, 2], with roots at x = ±1.]
Even though the hypotheses are not met, there are roots.

IMVT is not necessary for roots, only sufficient


Lesson number two


f(x) = x^3 − x, a = −2, b = +2
f continuous
f(a) = −6, f(b) = +6, opposite sign
IMVT holds
[Figure: y = x^3 − x on [a, b] = [−2, 2], with roots at x = −1, 0, +1.]

Three roots! The IMVT is no good for counting roots; it gives only existence.


Lesson number three


Take a = −1, b = +1
f(x) = 1/x for x ≠ 0, undefined at x = 0
f(a) = −1, f(b) = +1, opposite sign
but f is discontinuous, so the IMVT does not hold
[Figure: y = 1/x on [a, b] = [−1, 1], blowing up at x = 0.]
...and in fact there are no roots. Conclusion: continuity is essential.


Interval Bisection Method (I)


Robust root-finding method. Basic graphical idea: use the IMVT repeatedly.
[Figure: repeated bisection of an interval bracketing a sign change (axes roughly x in [0, 1.2], y in [−1, 1]).]


Interval Bisection Method (II)


(the algorithm) Let f(x) be continuous with f(a_1) f(b_1) < 0.
For n = 1, 2, ...
  Let c_n := (a_n + b_n)/2
  If f(c_n) = 0, stop the algorithm
  If f(c_n) f(a_n) < 0, then let a_{n+1} = a_n, b_{n+1} = c_n
  If f(c_n) f(b_n) < 0, then let a_{n+1} = c_n, b_{n+1} = b_n
This keeps halving the interval. Use the midpoint of the interval as the guess of the root. Stop when half the interval length is less than the tolerance.
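A minimal Python sketch of this bisection loop (illustrative; the example function, interval and tolerance are my own choices, not from the slides):

import math

def bisect(f, a, b, tol=1e-8, max_steps=100):
    # Interval bisection: assumes f continuous with f(a)*f(b) < 0
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_steps):
        c = (a + b) / 2                     # c_n := (a_n + b_n)/2
        fc = f(c)
        if fc == 0 or (b - a) / 2 < tol:    # exact root, or half-interval below tolerance
            return c
        if f(a) * fc < 0:                   # sign change in [a_n, c_n]
            b = c
        else:                               # sign change in [c_n, b_n]
            a = c
    return (a + b) / 2

# Example: the root of x - cos x = 0 in [0, 1]
print(bisect(lambda x: x - math.cos(x), 0.0, 1.0))   # about 0.7390851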


Interval Bisection Method (III)


(the error analysis) After n steps, the length of the interval is (b_1 − a_1) 2^{-n}.
Using the midpoint as the guess of the root:

worst case error = (1/2)(b_1 − a_1) 2^{-n}

NB: E_{n+1} ≤ (1/2) E_n

To get the error less than a tolerance ε, we need

(b_1 − a_1) 2^{-(n+1)} < ε,

so we need

n + 1 > (log(b_1 − a_1) − log ε) / log 2


Next method: Fixed point iteration


We seek a solution (i.e. a root) x* of x = g(x).
Procedure:
Choose an initial guess x_0 of the root
Let x_1 = g(x_0)
Let x_2 = g(x_1)
Let x_3 = g(x_2)
And so on, and so on, etc.
This should give better and better approximations to the root: graphically this is a cobwebbing procedure.
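A minimal Python sketch of this procedure (the stopping rule, tolerance and iteration cap are illustrative assumptions):

import math

def fixed_point(g, x0, tol=1e-12, max_iter=200):
    # Iterate x_{n+1} = g(x_n) until successive iterates agree to within tol
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x   # may not have converged

print(fixed_point(math.cos, 0.0))   # about 0.7390851, the root of x = cos x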


Examples
Does the fixed point iteration always work?
Take g(x) = cos x, x_0 = 0: converges OK.
g(x) = x^2, x_0 = 2: oh dear, diverges to infinity; does not find the root.
g(x) = 4x(1 − x), with e.g. x_0 = 1/3: first example of chaotic behaviour; does not find the root.


Starting with x_0 = 1/3 or x_0 = 1/3 + 0.001:

  x_0 = 1/3    x_0 = 1/3 + 0.001
  0.3333       0.3343
  0.8889       0.8902
  0.3951       0.3909
  0.9560       0.9524
  0.1684       0.1813
  0.5602       0.5938
  0.9855       0.9648
  0.0572       0.1357
  0.2158       0.4692
  0.6770       0.9962
  0.8747       0.0151

We need an analysis which predicts convergence.
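A short Python sketch (illustrative) that reproduces this sensitivity to the starting value:

def logistic_iterates(x0, steps=11):
    # Iterate g(x) = 4x(1 - x) and return the successive values x_0, x_1, ...
    xs = [x0]
    for _ in range(steps - 1):
        xs.append(4 * xs[-1] * (1 - xs[-1]))
    return xs

for xa, xb in zip(logistic_iterates(1 / 3), logistic_iterates(1 / 3 + 0.001)):
    print(f"{xa:.4f}   {xb:.4f}")   # the two columns above: they soon drift far apart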


Newton (-Raphson) method


(in fact, a type of fixed point method, but solves f(x) = 0)

[Figure: tangent line to y = f(x) at (x_0, f(x_0)), meeting y = 0 at x_1.]
The slope of the tangent gives f'(x_0) = f(x_0) / (x_0 − x_1), so

x_1 = x_0 − f(x_0) / f'(x_0)


Newton method continued


We seek a solution (i.e. a root) x* of f(x) = 0.
Procedure:
Choose an initial guess x_0 of the root
Let x_1 = x_0 − f(x_0)/f'(x_0)
Let x_2 = x_1 − f(x_1)/f'(x_1)
Let x_3 = x_2 − f(x_2)/f'(x_2)
And so on, and so on, etc.
This should give better and better approximations to the root.
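A minimal Python sketch of this procedure (the derivative is supplied by the caller; the stopping rule, tolerance and the square-root example are my own illustrative choices):

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    # Iterate x_{n+1} = x_n - f(x_n)/f'(x_n)
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x   # may not have converged

# Example: the positive root of f(x) = x^2 - 2
print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))   # about 1.41421356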


Example of Newton method (I)


Apply the Newton method to our favourite equation
x = cos x. Here f(x) = x − cos x, f'(x) = 1 + sin x.

General iteration:

x_{n+1} = x_n − (x_n − cos x_n) / (1 + sin x_n)

Take e.g. x_0 = 0. Then
x_1 = 0 − (0 − cos 0) / (1 + sin 0) = 1,
x_2 = 1 − (1 − cos 1) / (1 + sin 1) = 0.750363867840244.
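The same iteration as a self-contained Python check (illustrative); it should reproduce the Newton column in the table on the next slide:

import math

x = 0.0                                              # x_0
for n in range(8):
    print(n, f"{x:.15f}")
    x = x - (x - math.cos(x)) / (1 + math.sin(x))    # Newton step for f(x) = x - cos x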


Example of Newton method (II)


Newton method vs. standard fixed point method:

  n    Newton method x_n        Fixed point method x_n
  0    0.000000000000000        0.00000000000000
  1    1.000000000000000        1.00000000000000
  2    0.750363867840244        0.54030230586814
  3    0.739112890911362        0.85755321584639
  4    0.739085133385284        0.65428979049778
  5    0.739085133215161        0.79348035874257
  6    0.739085133215161        0.70136877362276
  7    0.739085133215161        0.76395968290065

Upshot: the Newton method is much more efficient. Why? How to analyse fixed point methods in general?


Recap: Taylor series


(this is the informal version) If F is a nice smooth function and h is small, then
F(x + h) = F(x) + F'(x)h + (1/2!)F''(x)h^2 + (1/3!)F'''(x)h^3 + ...

Gives F at points near to x in terms of local information at x alone.
Truncation of the series gives useful approximations:

F(x + h) ≈ F(x)
F(x + h) ≈ F(x) + F'(x)h                        (better approx)
F(x + h) ≈ F(x) + F'(x)h + (1/2!)F''(x)h^2      (even better approx)
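A quick numerical illustration (my own example, not from the slides): take F(x) = e^x, x = 0, h = 0.1, so F = F' = F'' at the expansion point:

import math

x, h = 0.0, 0.1
exact = math.exp(x + h)                                               # 1.10517...
approx0 = math.exp(x)                                                 # 1.0
approx1 = math.exp(x) + math.exp(x) * h                               # 1.1
approx2 = math.exp(x) + math.exp(x) * h + 0.5 * math.exp(x) * h**2    # 1.105
print(exact, approx0, approx1, approx2)   # each truncation is closer than the last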


Theory of fixed point schemes (I)


Consider the fixed point iteration x_{n+1} = g(x_n).
Let the root be x*, i.e. x* = g(x*).
Let E_n := x_n − x* denote the error of the nth approximation. So

x_{n+1} = x* + E_{n+1} = g(x_n) = g(x* + E_n)
        = g(x*) + g'(x*) E_n + (1/2!) g''(x*) E_n^2 + (1/3!) g'''(x*) E_n^3 + ...

Rearrangement gives (using x* = g(x*)):

E_{n+1} = g'(x*) E_n + (1/2!) g''(x*) E_n^2 + (1/3!) g'''(x*) E_n^3 + ...

Error at the (n + 1)th step in terms of the error at the nth step. How fast does the error get smaller?


Theory of fixed point schemes (II)


First order methods: k := g'(x*) ≠ 0

E_{n+1} ≈ k E_n

k is called the rate of linear convergence: need |k| < 1.

Second order methods: g'(x*) = 0, g''(x*) ≠ 0

E_{n+1} ≈ (1/2!) g''(x*) E_n^2      (fast)

Third order methods: g'(x*) = 0, g''(x*) = 0, g'''(x*) ≠ 0

E_{n+1} ≈ (1/3!) g'''(x*) E_n^3     (even faster)


Examples
Newton-Raphson is second order.

The scheme
x_{n+1} = A + x_n − x_n^2
is proposed for finding √A. Analyse its order and rate of convergence.

Show that the scheme
x_{n+1} = x_n (x_n^2 + 3A) / (3 x_n^2 + A)
has third order convergence to √A.

(A numerical check of these two schemes is sketched below.)
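A hedged Python sketch of such a check (the choices A = 0.5 and A = 2, the starting values and step counts are illustrative assumptions, not from the slides); printing the errors lets you read off how quickly correct digits accumulate:

import math

def iterate(g, x0, steps):
    # Return the fixed point iterates x_0, x_1, ..., x_steps
    xs = [x0]
    for _ in range(steps):
        xs.append(g(xs[-1]))
    return xs

A = 0.5                                      # scheme 1: x_{n+1} = A + x_n - x_n^2
for n, x in enumerate(iterate(lambda x: A + x - x * x, 1.0, 10)):
    print(n, abs(x - math.sqrt(A)))          # error shrinks by a roughly constant factor

A = 2.0                                      # scheme 2: x_{n+1} = x_n (x_n^2 + 3A)/(3 x_n^2 + A)
for n, x in enumerate(iterate(lambda x: x * (x * x + 3 * A) / (3 * x * x + A), 1.0, 6)):
    print(n, abs(x - math.sqrt(A)))          # correct digits roughly triple each step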


Spot order of a method from data?


For each iteration step:
Second order methods: the number of correct digits approximately doubles at each step.
Third order methods: the number of correct digits approximately triples at each step.
For first order methods, the formula

k := (x_{n+1} − x_n) / (x_n − x_{n−1})

should give answers more or less independent of n (NB this gives the rate of convergence).
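A small Python sketch of this diagnostic (illustrative), applied to the standard fixed point iterates for x = cos x:

import math

xs = [0.0]                                   # first order iterates x_{n+1} = cos(x_n)
for _ in range(12):
    xs.append(math.cos(xs[-1]))

for n in range(1, len(xs) - 1):              # k := (x_{n+1} - x_n)/(x_n - x_{n-1})
    print(n, (xs[n + 1] - xs[n]) / (xs[n] - xs[n - 1]))
# the printed values settle near g'(x*) = -sin(0.7390...) ≈ -0.67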


Summary (I)
Three main methods discussed:
Interval bisection
Fixed point method
Newton method

Interval bisection uses the Intermediate Value Theorem repeatedly and is robust: a first order method with k = 1/2.
Fixed point and Newton methods can be generalised to solve systems of equations.


Summary (II)
Fixed point method (solves g(x) = x):

x_{n+1} = g(x_n)

Newton method (solves f(x) = 0):

x_{n+1} = x_n − f(x_n) / f'(x_n)

converges much faster than the usual fixed point method, but is a special case of a fixed point method
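One way to see both points (a standard calculation, not spelled out on the slides): Newton is the fixed point iteration with g(x) = x − f(x)/f'(x). Differentiating gives g'(x) = f(x) f''(x) / f'(x)^2, so at a simple root x* (where f(x*) = 0 and f'(x*) ≠ 0) we get g'(x*) = 0; by the error analysis above this makes Newton (at least) second order, which explains the faster convergence.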


Summary (III)
Analysis of convergence of fixed point methods is based on Taylor series expansions.

First order methods: k := g'(x*) ≠ 0

E_{n+1} ≈ k E_n

Need |k| < 1 for convergence.

Second order methods: g'(x*) = 0, g''(x*) ≠ 0

E_{n+1} ≈ (1/2!) g''(x*) E_n^2      (faster convergence)

Recognition of order from data
