
Chapter

Root finding of an equation &
Ordinary differential equations (ODE)

Aman W
Department of Applied Physics
University of Gondar
Contents

Introduction
Root finding of an equation
Guess and check
Iterative algorithms
Bisection method
Newton-Raphson
Secant method
Ordinary differential equations (ODE)
Numerical differentiation
Forward & backward difference
Central difference
Introduction

What is computational problem solving?


Knowledge can be divided into two categories:
Declarative knowledge
Statements of fact
The square root of a number x is a number y such that
y*y = x
Imperative knowledge
How to do something (e.g., how to find y, the square root of x): methods or recipes
Can you use this to find the square root of a particular
instance of x?
Introduction

How do we transform a physics problem into something that


can be solved on a computer?
Analyze the problem to find a suitable numerical method of
solution.
Derive an algorithm (a recipe)
to generate the result.
Write a computer program to embody the algorithm.
Get the program debugged and running.
Solve the problem!
Root finding of an equation
An equation of the form
y = f (x)
contains three elements:
an input value x,
an output value y, and the rule f for computing y.
Given a function f (x), we want to determine the values of x for
which f (x) = 0.
The solutions (values of x) are known as the roots of the equation
f (x) = 0, or the zeros of the function f (x).
These roots can be found numerically using different
techniques (algorithms):
guess & check algorithm
iterative algorithms
Root finding of an equation
Here is a recipe for deducing the square root of a number x,
attributed to Heron of Alexandria (first century):

guess & check algorithm

Start with a guess g.
If g*g is close enough to x, stop and say that g is the answer.
Otherwise make a new guess by averaging g and x/g.
Using this new guess, repeat the process until we get close
enough (a sketch of this recipe follows below).
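A minimal sketch of this recipe in Python (the value x = 25 and the tolerance epsilon are assumptions chosen for illustration):

x = 25.0
epsilon = 0.01
g = x/2.0                       # start with a guess g
while abs(g*g - x) >= epsilon:  # check whether g*g is close enough to x
    g = (g + x/g)/2.0           # new guess: the average of g and x/g
print(str(g) + ' is close to the square root of ' + str(x))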
Guess & check code

We need a good way to generate guesses.
If we could guess possible values for the square root, g,
we should use a method (a check) to tell whether our guess is good.
Since we cannot guarantee an exact answer, we look for
something close enough.
Start with exhaustive enumeration:
Take small steps to generate guesses.
Check to see if each guess is close enough.

x = 25
epsilon = 0.01
step = epsilon**2
Numguess = 0
ans = 0.0
while abs(ans**2 - x) >= epsilon and ans <= x:
    ans = ans + step
    Numguess = Numguess + 1
print('Numguess = ' + str(Numguess))
if abs(ans**2 - x) >= epsilon:
    print('failed')
else:
    print(str(ans) + ' is close to the square root of ' + str(x))
The step could be any small number:
If it is too small, the loop takes a long time to find the square root.
If it is too large, it may skip/jump over the answer without
getting close enough.
Take another number x which is large, for instance x = 1250,
with step = epsilon**2.
In general, the loop makes about x/step passes through the code to find the
solution.
So we need a more efficient way to deal with the square root
problem.
Iterative Algorithms
All methods of finding roots are iterative procedures that
require a starting point, i.e., an estimate of the root.
This estimate can be crucial; a bad starting value may fail to
converge, or it may converge to the wrong root (a root
different from the one sought).
There is no universal recipe for estimating the value of a root.
If the equation is associated with a physical problem, then the
context of the problem (physical insight) might suggest the
approximate location of the root.
Otherwise, a systematic numerical search for the roots can be
carried out. One such search method was described above.
The roots can also be located visually by plotting the function.
The method of bisection uses the same principle as incremental
search: guess and check.
If there is a root in the interval (x1, x2), then
f (x1) f (x2) < 0.
The method of bisection accomplishes this by successively
halving the interval until it becomes sufficiently small.
This technique is also known as the interval halving method.
Bisection is not the fastest method available for computing
roots, but it is the most reliable.
Once a root has been bracketed, bisection will always close in
on it.
Bisection method
The bisection algorithm: if there is a root in the interval (x1,
x2), then f (x1) f (x2) < 0.
In other words, it relies on the fact that for a continuous function f,
if f (x1) < 0 and f (x2) > 0 (or vice versa), then
f (x) = 0 for some x in the interval (x1, x2).
Bisection method

In order to halve the interval, we compute f (x3), where

x3 = (x1 + x2)/2

is the midpoint of the interval. If f (x2) f (x3) < 0, then the root must
be in (x2, x3) and we record this by replacing the original bound x1 by
x3.
Otherwise, the root lies in (x1, x3), in which case x2 is replaced by x3.
In either case, the new interval (x1, x2) is half the size of the original
interval.
The bisection is repeated until the interval has been reduced to a small
value ε, so that |x2 - x1| <= ε.
Example
Finding the root of the polynomial f(x) = x^3 - x - 2.

First, two numbers a and b have to be found such that f(a) and f(b) have
opposite signs.
For the above function, a = 1 and b = 2 satisfy this criterion:
f(1) = -2 and f(2) = 4.
Because the function is continuous, there must be a root within the interval [1,
2].
The midpoint is
c1 = (a + b)/2 = 1.5

The function value at the midpoint is f(c1) = (1.5)^3 - 1.5 - 2 = -0.125.

Because f(c1) is negative, a = 1 is replaced with a = 1.5 for the next iteration to
ensure that f(a) and f(b) have opposite signs.
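A short Python sketch of this bisection iteration for f(x) = x^3 - x - 2 on [1, 2] (the tolerance tol is an assumption; the slide does not specify one):

def f(x):
    return x**3 - x - 2

a, b = 1.0, 2.0          # bracketing interval: f(a) and f(b) have opposite signs
tol = 1e-6               # assumed tolerance on the interval width
while (b - a) > tol:
    c = (a + b)/2.0      # midpoint of the current interval
    if f(a)*f(c) < 0:    # the root lies in (a, c)
        b = c
    else:                # the root lies in (c, b)
        a = c
print('root is approximately ' + str((a + b)/2.0))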
Bisection
This technique simply says that rather than exhaustively
trying values starting at 0, we should pick a number in the
middle of the range.
The basic idea of the algorithm is that we know that the square
root of x lies between 0 and x.

[Figure: a number line from 0 to x, with the guess g in between.]

We select the middle of the range, g = (0 + x)/2.
If by chance we got the answer, we succeed (most probably
not).
If g is not close enough, it may be
too big, or
too small.
If g**2 > x, we know g is too big, so we search again in the lower half.
If the new g is too small, g**2 < x, we search again in the upper half.

At each stage we reduce the range of values to search by half.

Iterate these steps until the requested
precision is reached.
Code
epsilon: how close we need to be
numGuesses: number of steps
Range: low, high
ans: the middle of the range

Some important points about this algorithm:
It reduces computation time.
It works well on problems that are ordered.

x = 25
epsilon = 0.01
numGuesses = 0
low = 0.0
high = x
ans = (high + low)/2.0
while abs(ans**2 - x) >= epsilon:
    print('low = ' + str(low) + ' high = ' + str(high) + ' ans = ' + str(ans))
    numGuesses += 1
    if ans**2 < x:
        low = ans
    else:
        high = ans
    ans = (high + low)/2.0
print('numGuesses = ' + str(numGuesses))
print(str(ans) + ' is close to square root of ' + str(x))
Exercise

Use bisection to find the root of f (x) = x^3 - 10x^2 + 5 = 0 that
lies in the interval (0.6, 0.8).
Newton-Raphson

Consider a polynomial in one variable.

Isaac Newton and Joseph Raphson developed a method to
approximate the roots of a polynomial; it is a general
approximation algorithm for finding roots and can be used,
for instance, to compute square roots.
For example, to find the square root of 24, find the root of
p(x) = x^2 - 24.
Newton-Raphson showed that if g is an approximation/guess
to the root, then

g - p(g)/p'(g)

is a better approximation, where p' is the derivative of p.

For the square-root problem, let the polynomial be x^2 - k;
its derivative is 2x.
According to Newton-Raphson, for a given guess g for a root,
the better guess is

g - (g^2 - k)/(2g)
Let us look at the code


Newton-Raphson is an iterative root finding algorithm which
uses the derivative of the function to provide more information
about where to search for the root.
This method is based on a Taylor expansion using the first two
terms of the Taylor series for a function f starting at our initial
guess x1:

f(x1 + h) = f(x1) + h f'(x1) + O(h^2)

If x1 + h is a root of f(x), then f(x1 + h) = 0. Assuming h is very
small, we can drop the h^2 term.
So we impose this condition and solve for h:

f(x1 + h) ≈ f(x1) + h f'(x1) = 0   =>   h = -f(x1)/f'(x1)
Newton-Raphson Method

Based on forming the tangent line to the f(x) curve at some


guess x, then following the tangent line to where it crosses the
x-axis.
f'(x_i) = (f(x_i) - 0) / (x_i - x_{i+1})

x_{i+1} = x_i - f(x_i)/f'(x_i)

Newton's method finds the root by
approximating f (x) by a straight line that is
tangent to the curve at x_i at each iteration.
Thus x_{i+1} is at the intersection of the x-axis
and the tangent line.
The polynomial here is p(guess) = guess^2 - y, with y = 24.
In the while loop, we check/test how close we are.
If we are not close enough, we use the Newton-Raphson
update to compute a new guess.

# Newton-Raphson for square root
epsilon = 0.01
y = 24.0
guess = y/2.0
while abs(guess*guess - y) >= epsilon:
    guess = guess - (((guess**2) - y)/(2*guess))
    print(guess)
print('Square root of ' + str(y) + ' is about ' + str(guess))
Consider a root of f (x) = x^3 - 10x^2 + 5. Compute this root with the
Newton-Raphson method.
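A minimal sketch of how this exercise could be set up in Python (the starting guess x = 0.7 and the tolerance are assumptions, not given on the slide):

def f(x):
    return x**3 - 10*x**2 + 5

def df(x):                  # analytical derivative f'(x)
    return 3*x**2 - 20*x

epsilon = 1e-6
x = 0.7                     # assumed starting guess near the root in (0.6, 0.8)
while abs(f(x)) >= epsilon:
    x = x - f(x)/df(x)      # Newton-Raphson update
print('root is approximately ' + str(x))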
In general, there are more iterative algorithms:
Secant method (based on the same principles
as the Newton method, but it approximates the derivative
numerically)
Brent's method
etc.
Both iterative and guess-and-check algorithms reuse the
same code over and over again:
use a looping construct to generate guesses and check them.
Secant Methods

A potential problem in implementing the Newton-Raphson


method is the evaluation of the derivative - there are certain
functions whose derivatives may be difficult or inconvenient
to evaluate.
For these cases, the derivative can be approximated by a
backward finite divided difference:

f'(x_i) ≈ (f(x_{i-1}) - f(x_i)) / (x_{i-1} - x_i)

Secant Methods (cont)

Substituting this approximation for the derivative into the

Newton-Raphson equation gives us

x_{i+1} = x_i - f(x_i) (x_{i-1} - x_i) / (f(x_{i-1}) - f(x_i))

Note - this method requires two initial estimates of x but does


not require an analytical expression of the derivative.
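As an illustration (not taken from the slides), a minimal secant-method sketch in Python applied to the same f(x) = x^3 - 10x^2 + 5 used earlier; the two starting estimates are assumptions:

def f(x):
    return x**3 - 10*x**2 + 5

x_prev, x_curr = 0.6, 0.8   # two assumed initial estimates; no derivative is needed
epsilon = 1e-6
while abs(f(x_curr)) >= epsilon:
    # approximate the derivative by the backward finite divided difference
    slope = (f(x_prev) - f(x_curr)) / (x_prev - x_curr)
    x_prev, x_curr = x_curr, x_curr - f(x_curr)/slope
print('root is approximately ' + str(x_curr))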

Ordinary differential equations(ODE)
In your undergraduate study you have taken classical
mechanics courses; here we look at how to solve such problems numerically.
A general form for a first-order differential equation can
be written as

dy/dt = f(t, y)

These equations rarely have analytical solutions, and
generally one must apply numerical methods in order to
solve them.
The equations of classical mechanics can be written as
differential equations; for example, Newton's second law gives
m d^2x/dt^2 = F.
ODE algorithm
So let us take the first-order differential equation

y'(t) = f(t, y(t)),   y(t0) = y0

The classic way to solve a differential equation is to start with
the known initial value of the dependent variable, y0 = y(t = 0).

Problem
What is the value of y at time t?
Ordinary differential equations
In order to solve these equations, we need
1. A numerical method that permits an approximate solution
using only arithmetic.
2. Some Initial conditions.
The first, an approximation to the derivative, is suggested by
the definition from basic calculus:
Remember the elementary definition of the derivative of a
function f at a point x:
Ordinary differential equations
As you see in the figure, for the function f at x,

df(x)/dx = lim (h -> 0) [f(x + h) - f(x)] / h

In our first equation this suggests, when Δt is very small,

y'(t) ≈ [y(t + Δt) - y(t)] / Δt
How good is this as an approximation?
Let us use the Taylor expansion to derive it:

y(t + Δt) = y(t) + Δt y'(t) + (Δt)^2/2 y''(t) + ...

So the above equation gives

[y(t + Δt) - y(t)] / Δt = y'(t) + (Δt/2) y''(t) + ...

The leading error term vanishes as Δt -> 0, i.e. we say that the
approximation is of order Δt, i.e. O(Δt). Take this to be the error
term, which we write as E(Δt). In this case the lowest power that
appears is Δt, so E(Δt) ~ Δt, or in Big-O notation E(Δt) is O(Δt).
So the forward difference method is first order: the error is proportional
to Δt.
Remember Δt is small, so that (Δt)^2 < Δt; hence O((Δt)^2) is more
accurate than O(Δt).
Discretizing the derivatives

The differential equations for 1-d motion of position and
velocity can now be written as

dx/dt = v,   dv/dt = a

with an approximate solution (the Euler method)

x_{n+1} = x_n + v_n Δt,   v_{n+1} = v_n + a_n Δt

Given an initial x and v, and knowing a, these can be solved
recursively.
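A minimal sketch of this recursion in Python for constant acceleration (the acceleration, the initial conditions and the time step are assumptions chosen for illustration):

a = -9.8           # assumed constant acceleration (free fall, m/s^2)
x, v = 0.0, 20.0   # assumed initial position and velocity
dt = 0.01          # assumed time step
t = 0.0
while t < 2.0:     # integrate for two seconds
    x = x + v*dt   # Euler update for position
    v = v + a*dt   # Euler update for velocity
    t = t + dt
print('x = ' + str(x) + ', v = ' + str(v) + ' at t = ' + str(t))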
Euler's Rule

Use the derivative function f(t, y) to find an approximate value
for y a small step h forward in time; that is,
y(t + h) = y1.

Once you can do that, you can solve the ODE for all t values
by just continuing to step to larger times, one small h at a
time.
Simply put, we use integration to solve differential equations.
Euler's rule is a simple algorithm for integrating the
differential equation by one step and is just the forward-
difference algorithm for the derivative:

y(t + h) ≈ y(t) + h f(t, y(t))
Velocity from position
We can turn this into an approximation rule for y'(t) = f(t, y)
by replacing the limit as h approaches 0 with a small but finite
h:

y'(t) = dy/dt ≈ [y(t + h) - y(t)] / h

[Figure: the trajectory is that of a projectile with air resistance;
the slanted dashed line shows the forward-difference
approximation for the numerical first derivative at time t.]
Euler Method
Explicit Euler Method
Consider the forward difference

y'(t) ≈ [y(t + Δt) - y(t)] / Δt

which implies

y(t + Δt) ≈ y(t) + Δt y'(t) = y(t) + Δt f(t, y(t))
Euler Method

Split time t into n slices of equal length Δt:

t_0 = 0
t_i = i Δt
Δt = t/n

The explicit Euler method formula:

y(t_{i+1}) = y(t_i) + Δt y'(t_i)
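As a brief sketch, this formula can be wrapped in a small Python function (the test problem y' = -y with y(0) = 1 is an assumption used only to demonstrate the call):

def euler(f, y0, t0, t_end, n):
    # explicit Euler: advance y' = f(t, y) from t0 to t_end in n equal steps
    dt = (t_end - t0)/n
    t, y = t0, y0
    for i in range(n):
        y = y + dt*f(t, y)   # y(t_{i+1}) = y(t_i) + dt * y'(t_i)
        t = t + dt
    return y

# example: y' = -y, y(0) = 1; the exact value at t = 1 is about 0.3679
print(euler(lambda t, y: -y, 1.0, 0.0, 1.0, 100))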
Take, for example, a function related to a parabola and compare

its exact derivative

with the forward difference approximation.

The approximation needs h to be very small.

It is not a very good algorithm; however, it is useful.
The approximate gradient is calculated using values of the function
at t and at t + h (times no earlier than t); this algorithm is known as
the forward difference method,
where h is the separation between the points.
Note that this calculates a series of derivatives at the points where f is evaluated.
Just how good is this estimate?

Taylor Series expansion uses an infinite sum of terms to


represent a function at some point accurately.
Example
A simple differential equation is the one that gives
rise to the exponential decay law,

dy/dt = -λy

where λ is the decay constant.

Numerical analysis: approximation.
The change in a time interval is proportional to y.

Recipe to solve the exponential decay law numerically:
Define the initial condition at t = 0.
Repeat the following until some final value of t is reached:
use the above equation to determine the solution at t + Δt.

# initial conditions
y = 100.0
lamb = 1.0
deltaT = 0.02
# use the forward-difference update for some steps
for i in range(int(3*lamb/deltaT)):
    print(i*deltaT, y)
    y = y - lamb*y*deltaT
The explicit (forward difference) solution can be unstable for
large step sizes.
Compare, for example (a short sketch follows the list below),
dt = 0.01
dt = 0.05
dt = 0.15
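A small sketch of this comparison (the same explicit update as above, repeated for each step size; integrating up to t = 3 is an assumption):

import math

lamb = 1.0
for dt in (0.01, 0.05, 0.15):
    y = 100.0
    nsteps = int(3.0/dt)      # integrate up to t = 3 in each case
    for i in range(nsteps):
        y = y - lamb*y*dt     # explicit (forward-difference) update
    print('dt = ' + str(dt) + '  y(3) = ' + str(y) + '  exact = ' + str(100.0*math.exp(-3.0)))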
Euler Method
Implicit Euler Method
Consider the backward difference

y'(t) ≈ [y(t) - y(t - Δt)] / Δt

which implies

y(t) ≈ y(t - Δt) + Δt y'(t)
Euler Method

Split the time into slices of equal length Δt:

y(t_{i+1}) = y(t_i) + Δt y'(t_{i+1})

This equation must be solved to obtain the
value of y(t_{i+1}).
This requires extra computation,
but it is sometimes worth it because the implicit method is more accurate.
Example
Consider y'(t) = -15 y(t), t ≥ 0, y(0) = 1.
Exact solution: y(t) = e^(-15t), so y(t) -> 0 as t -> ∞.
If we examine the forward Euler method, strong
oscillatory behaviour forces us to take very small
steps even though the function looks quite
smooth.
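A brief sketch that shows this behaviour (the step sizes are chosen for illustration: the explicit update oscillates once 15*dt > 1 and diverges once 15*dt > 2):

for dt in (0.001, 0.1, 0.2):
    y = 1.0
    for i in range(int(1.0/dt)):     # integrate y' = -15y up to t = 1
        y = y + dt*(-15.0*y)         # forward Euler step
    print('dt = ' + str(dt) + '  y(1) = ' + str(y))   # exact value is e^(-15), about 3.1e-7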
Forwards vs Backwards difference

The difference defined above is called a forward difference:
the derivative is defined in the direction of increasing t,
using t and t + h.
We can equivalently define a backwards difference:
the derivative is defined in the direction of decreasing t,
using t and t - h.
f(x + h) = f(x) + h f'(x) + (h^2/2!) f''(x) + (h^3/3!) f'''(x) + ...

f'_b(x) = [f(x) - f(x - h)] / h = f'(x) - (h/2) f''(x) + (h^2/6) f'''(x) - ...
Improving the error
It should be clear that both the forward and backward
differences are O(h).
However, by averaging them we can eliminate the term that is
O(h), thereby defining the centered difference:

f'_c(x) = (1/2) [f'_f(x) + f'_b(x)]
        = [f(x + h) - f(x)]/(2h) + [f(x) - f(x - h)]/(2h)

The (h/2) f''(x) terms from the forward and backward expansions cancel, leaving

f'_c(x) = [f(x + h) - f(x - h)]/(2h) = f'(x) + (h^2/6) f'''(x) + ...

so that

f'_c(x) = [f(x + h) - f(x - h)]/(2h) + O(h^2)

Note that all odd-ordered terms cancel too. Further, the
central difference is symmetric about x; the forward and backward
differences are clearly not.
Central difference

Now, rather than making a single step of h forward, we form a
central difference by stepping forward half a step and backward
half a step:

y'(t) ≈ [y(t + h/2) - y(t - h/2)] / h

Simply, we can also use the following:

y'(t) ≈ [y(t + h) - y(t - h)] / (2h)

We estimate the error in the central-difference algorithm by
substituting the Taylor series for y(t ± h/2) into the formula above.
Central difference

For our first equation, the central difference gives the exact
derivative, independent of h.

It is more accurate than the forward difference and the
backward difference.
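As an illustration (not from the slides), a short Python comparison of the three difference formulas on a known function; sin(x) at x = 1 with h = 0.1 is an assumed test case:

import math

f = math.sin
x, h = 1.0, 0.1
exact = math.cos(x)                       # exact derivative of sin(x)

forward = (f(x + h) - f(x))/h             # O(h) error
backward = (f(x) - f(x - h))/h            # O(h) error
central = (f(x + h) - f(x - h))/(2*h)     # O(h^2) error

print('forward  error: ' + str(abs(forward - exact)))
print('backward error: ' + str(abs(backward - exact)))
print('central  error: ' + str(abs(central - exact)))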
Error Analysis (Assessment)
The approximation errors in numerical differentiation decrease with
decreasing step size h, while round-off errors increase with decreasing
step size (you have to take more steps and do more calculations).
To obtain a rough estimate of the round-off error, we observe that
differentiation essentially subtracts the value of a function at argument x
from that of the same function at argument x + h and then divides by h.

As h is made continually smaller, we eventually reach the round-off error
limit, where y(t + h) and y(t) differ by just the machine precision ε_m.

The best h value is the one that balances the approximation error
against this round-off error.
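As an illustration (not from the slides), a small scan over h for the forward difference of a known function shows both error regimes; the test function sin(x) at x = 1 is an assumption:

import math

x = 1.0
exact = math.cos(x)                                # exact derivative of sin(x) at x = 1
for k in range(1, 16):
    h = 10.0**(-k)
    approx = (math.sin(x + h) - math.sin(x))/h     # forward difference
    print('h = 1e-' + str(k) + '   error = ' + str(abs(approx - exact)))
    # the error first shrinks (truncation, ~h/2) and then grows again (round-off, ~eps/h)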
Consider again the simple differential equation that gives rise to the
exponential decay law, now solved with the implicit Euler method
(backward difference).
Note: this implicit solution is stable for large step sizes.
Compare for
dt = 0.01
dt = 0.05
dt = 0.15

y0 = 1.0      # initial y
dt = 0.1      # time step
lamb = 1.0    # decay constant
Nmax = 101    # no. of steps
y = y0
for i in range(Nmax):
    print(i*dt, y)
    y = y/(1.0 + dt*lamb)   # implicit (backward Euler) update for dy/dt = -lamb*y
