

Republic of the Philippines


Cagayan State University
COLLEGE OF ENGINEERING
Carig Sur, Tuguegarao City

DEPARTMENT OF CHEMICAL ENGINEERING

Chemical Engineering Thermodynamics I


(ChE 60)
Second Semester 2016 – 2017

Course Topic: FINDING ROOTS OF NONLINEAR EQUATIONS WITH THE USE OF MATLAB®
Course Activity: RESEARCH

Name of Student: Lim, Van Janssen R.


Liquigan, Jaige Mary Leonila V.
Lozano, John Harvey S.
Mora, Roxan B.

Program: BSChE

Year Level: 3rd Year

Date Submitted: May 12, 2017

Instructor: Engr. CAESAR P. LLAPITAN Rating: ________

Date Checked: ________


Table of Contents

I. INTRODUCTION ................................................................................................... 1

II. THEORETICAL BACKGROUND ......................................................................... 3

III. NUMERICAL ANALYSIS ..................................................................................... 5

A. BISECTION METHOD ........................................................................................... 5

B. FALSE POSITION METHOD ................................................................................... 9

C. SIMPLE FIXED-POINT ITERATION ...................................................................... 12

D. NEWTON-RAPHSON METHOD ........................................................................... 15

E. SECANT METHOD .............................................................................................. 18

IV. PROBLEM SOLVING .......................................................................................... 20

V. GENERALIZATION & CONCLUSION .............................................................. 31

VI. DEFINITION OF TERMS .................................................................................... 32

VII. REFERENCES ...................................................................................................... 33

I. Introduction

For solving roots of nonlinear equations, the quadratic formula

x = (−b ± √(b² − 4ac)) / (2a)                                              Eq. 1-1

is the most commonly used formula to solve for

f x   ax2  bx  c  0 Eq. 1-2

The values calculated with the quadratic formula are called the “roots” of the latter formula.
They represent the values of x that make f(x) equal to zero. For this reason, roots are
sometimes called the “zeroes of the equation”.

Although the quadratic formula is handy for solving roots of a nonlinear equation, there
are many other functions for which the root cannot be determined so easily. Before the
advent of digital computers, there were a number of ways to solve for the roots of such
equations. For some cases, the roots could be obtained by direct methods. Although there
were equations like this that could be solved directly, there were many more that could not.
In such instances, the only alternative is an approximate solution technique.

One method to obtain an approximate solution is to plot the function and determine
where it crosses the x axis. This point, which represents the x value for which f(x) = 0, is
the root. Although graphical methods are useful for obtaining rough estimates of roots, they
are limited because of their lack of precision.

An alternative approach is to use trial and error. This “technique” consists of guessing
a value of x and evaluating whether f(x) is zero. If not (as is almost always the case), another
guess is made, and f(x) is again evaluated to determine whether the new value provides a
better estimate of the root. The process is repeated until a guess results in an f(x) that is
close to zero. Such haphazard methods are obviously inefficient and inadequate for the
requirements of engineering and science practice. Numerical methods represent
alternatives that are also approximate but employ systematic strategies to home in on the
true root. As elaborated in the following pages, the combination of these systematic
methods and computers makes the solution of most applied roots-of-equations problems a
simple and efficient task. Besides roots, another feature of interest to engineers and
scientists is a function’s minimum and maximum values. The determination of such
optimal values is referred to as optimization. As you learned in calculus, such solutions can
be obtained analytically by determining the value at which the function is flat; that is, where
its derivative is zero. Although such analytical solutions are sometimes feasible, most
practical optimization problems require numerical, computer solutions. From a numerical
standpoint, such optimization methods are similar in spirit to the root-location methods we
just discussed. That is, both involve guessing and searching for a location on a function.
The fundamental difference between the two types of problems is illustrated in the figure below.

Fig. 1-1

Root location involves searching for the location where the function equals zero. In
contrast, optimization involves searching for the function’s extreme points.

II. Theoretical Background

The two major methods of solving roots of nonlinear equations are the bracketing
methods and open methods. The bracketing methods start with guesses that bracket, or
contain the root and then systematically reduce the width of the bracket. Two specific
methods are covered: bisection and false position. The open methods also involve
systematic trial-and-error iterations but do not require that the initial guesses bracket the
root. Under the open methods are the fixed-point iteration, Newton-Raphson, and Secant
method.
The bisection method is a variation of the incremental search method in which the
interval is always divided in half. If a function changes sign over an interval, the function
value at the midpoint is evaluated. The location of the root is then determined as lying
within the subinterval where the sign change occurs. The subinterval then becomes the
interval for the next iteration. The process is repeated until the root is known to the required
precision.
False position (also called the linear interpolation method) is another well-known
bracketing method. It is very similar to bisection with the exception that it uses a different
strategy to come up with its new root estimate. Rather than bisecting the interval, it locates
the root by joining f (xl) and f (xu) with a straight line. The intersection of this line with the
x axis represents an improved estimate of the root. Thus, the shape of the function
influences the new root estimate. Using similar triangles, the intersection of the straight
line with the x axis can be estimated.
Open methods employ a formula to predict the root. Such a formula can be developed
for simple fixed-point iteration (or, as it is also called, one-point iteration or successive
substitution) by rearranging the function f (x) =0 so that x is on the left-hand side of the
equation.
x = g(x)                                                                   Eq. 2-1
This transformation can be accomplished either by algebraic manipulation or by simply
adding x to both sides of the original equation. The utility of the above equation is that it
provides a formula to predict a new value of x as a function of an old value of x. Thus,
given an initial guess at the root xi, it can be used to compute a new estimate xi+1, as
expressed by the iterative formula

xi+1 = g(xi)                                                               Eq. 2-2

The most widely used of all root-locating formulas is the Newton-Raphson. If the initial
guess at the root is xi, a tangent can be extended from the point [xi, f (xi)]. The point where
this tangent crosses the x axis usually represents an improved estimate of the root.
In a secant method, the approach requires two initial estimates of x. However, because
f(x) is not required to change signs between the estimates, it is not classified as a bracketing
method. Rather than using two arbitrary values to estimate the derivative, an alternative
approach (the modified secant method) involves a fractional perturbation of the independent
variable to estimate f′(x).

III. Numerical Analysis
A. Bisection Method

1. General Formula
In finding roots of nonlinear equations using the bisection method, the interval is
always divided in half, as shown in Eq. 3-1.

x1  x2
xm  Eq. 3-1
2

The root is determined within the subinterval. In the next iteration, the subinterval
becomes the next interval. The iteration then continues until the value of the
calculated root is near the value of the true root.

2. General Algorithm

Step 1: Start

Step 2: Choose two initial guesses x1 and x2 such that f(x1) f(x2) < 0, i.e., f(x) changes
sign between x1 and x2.

Step 3: Estimate the root of f(x) as the midpoint xm of x1 and x2 (Eq. 3-1) and evaluate f(xm).

Step 4: Check whether:

a. If f(x1) f(xm) < 0, the root lies between x1 and xm; keep x1 and set x2 = xm.
b. If f(x1) f(xm) > 0, the root lies between xm and x2; set x1 = xm and keep x2.
c. If f(x1) f(xm) = 0, the root is xm and the algorithm ends.

Step 5: Find the new estimate of the root by repeating Step 3 with the updated interval.

Step 6: Find the absolute relative approximate error εa.

Step 7: Compare the absolute relative approximate error εa with the pre-specified relative
error tolerance εs. If εa > εs, go to Step 3; otherwise stop the algorithm.

Step 8: End

3. General Flowchart

Fig. 3-1

Fig. 3-1 shows the general flowchart for finding roots using the bisection method. The
initial values x1 and x2 are the inputs. Using Eq. 3-1, the value of xm is calculated. If f(x1) f(xm)
is greater than zero, the root lies between xm and x2, so x1 is replaced by xm and a new midpoint
is computed. If f(x1) f(xm) is less than zero, the root lies between x1 and xm, so x2 is replaced by
xm and a new midpoint is computed. If f(x1) f(xm) is equal to zero, xm is the root; its value is
displayed and the process ends.

4. General Pseudo Code

Fig. 3-2
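
As a concrete counterpart to the pseudo code of Fig. 3-2, a minimal MATLAB sketch of the
bisection loop described above could look as follows (the function name bisect_sketch, the
percent tolerance es, and the iteration cap maxit are illustrative assumptions, not part of the
original listing; a fuller implementation, bisectnew, appears in Section IV):

function xm = bisect_sketch(f, x1, x2, es, maxit)
% Minimal bisection loop; assumes f(x1)*f(x2) < 0 so the root is bracketed
if f(x1)*f(x2) > 0, error('initial guesses do not bracket a root'), end
xm = x1; ea = 100;
for i = 1:maxit
    xmold = xm;
    xm = (x1 + x2)/2;                       % Eq. 3-1: midpoint of the interval
    if xm ~= 0, ea = abs((xm - xmold)/xm)*100; end
    if f(x1)*f(xm) < 0                      % root lies between x1 and xm
        x2 = xm;
    elseif f(x1)*f(xm) > 0                  % root lies between xm and x2
        x1 = xm;
    else
        ea = 0;                             % f(xm) = 0: xm is the exact root
    end
    if ea <= es, break, end                 % stop when the error tolerance is met
end
end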

B. False Position Method
1. General Formula
To solve for the root of a nonlinear equation using false position method, Eq.
3-2 is used.
f xu xl  xu 
x r  xu  Eq. 3-2
f  x l   f  xu 
The first two initial guesses are used to solve for the first root estimate. This estimate then
replaces one of the first two guesses to solve for the next estimate. The percentage error
is calculated, and when the stopping criterion is attained, the iteration stops.

2. General Algorithm

Step 1: Start

Step 2: Set i = 1

Step 3: Compute

xr = xu − f(xu)(xl − xu) / (f(xl) − f(xu))

If |(xr − xu)/xu| < ε or i ≥ N, stop.

Step 4: If f(xl) f(xr) < 0, the root lies between xl and xr; set xu = xr.
Otherwise, set xl = xr. Set i = i + 1 and return to Step 3.

Step 5: End

3. General Flowchart

Fig. 3-3

Fig. 3-3 shows the general flowchart for finding roots using the false position method. The
first step is to initialize the values needed to solve the problem and then solve for the value of xr
using Eq. 3-2. The next step is to decide whether the approximate error is greater or less than
the stopping criterion. If the error is greater than the stopping criterion, the process goes back to
updating the bracket and solving for a new value of xr. If the error is less than the stopping
criterion, the computed value is displayed and the process stops.

4. General Pseudo Code

Fig. 3-4
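
As a concrete counterpart to the pseudo code of Fig. 3-4, a minimal MATLAB sketch of a
single false-position update (Eq. 3-2) could look as follows (the function name falsepos_step is
an illustrative assumption; the complete routine falsepos used for the problem solving appears
in Section IV):

function [xr, xl, xu] = falsepos_step(f, xl, xu)
% One false-position update; assumes f(xl)*f(xu) < 0 so the root is bracketed
xr = xu - f(xu)*(xl - xu)/(f(xl) - f(xu));  % Eq. 3-2: chord intersection with the x axis
if f(xl)*f(xr) < 0
    xu = xr;                                % root lies between xl and xr
else
    xl = xr;                                % root lies between xr and xu
end
end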

C. Simple Fixed-Point Iteration
1. General Formula
In a simple fixed-point iteration, Eq. 3-3 is used to predict a new value of x as a
function of the old value of x.
x = g(x)                                                                   Eq. 3-3

To solve for the root, an iterative formula, described by Eq. 3-4, is used.

xi+1 = g(xi)                                                               Eq. 3-4
Then, the percent error will be calculated. The iteration continues until the desired
value is attained.
2. General Algorithm

Step 1: Start
Step 2: Write the equation into the form of xi+1 = g (xi)
Step 3: Choose an initial value for x0
Step 4: Define x0 as xi with xi+1 = g (xi) to estimate x1
Step 5: Calculate the approximate error using the error estimator

εa = |(xi+1 − xi) / xi+1| × 100%

Step 6: If the error is larger than desired, set the last estimate xi+1 to be the new xi, then return
to Step 4
Step 7: If the error acquired is equal or really close to the desired error, stop the
process.
Step 8: End

3. General Flowchart


Fig. 3-5

The figure above shows the general flowchart for finding roots using simple fixed-point
iteration. The first step is to initialize all values. Next, the value of x(i+1) is solved for and the
approximate error is computed. If the error is equal or almost equal to the desired value, the
calculated value of x(i+1) is displayed and the process ends. If the error is still far from the
desired value, the process computes a new value of x(i+1) until the error reaches the desired
value.

4. General Pseudo Code

Fig. 3-6
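
As a concrete counterpart to the pseudo code of Fig. 3-6, a minimal MATLAB sketch of simple
fixed-point iteration (Eq. 3-4) could look as follows (the function name fixpt_sketch, the
percent tolerance es, and the iteration cap maxit are illustrative assumptions):

function [x, ea] = fixpt_sketch(g, x0, es, maxit)
% Simple fixed-point iteration: x(i+1) = g(x(i))  (Eq. 3-4)
x = x0; ea = 100;
for i = 1:maxit
    xold = x;
    x = g(xold);                            % new estimate from the old one
    if x ~= 0, ea = abs((x - xold)/x)*100; end
    if ea <= es, break, end                 % stop when the error tolerance is met
end
end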

D. Newton-Raphson Method

1. General Formula
In most root-locating problems, the Newton-Raphson method is perhaps the
most widely used formula. It is defined by Eq. 3-5

xi+1 = xi − f(xi) / f′(xi)                                                 Eq. 3-5
Like the previous methods, in a Newton-Raphson method, the iteration continues
until the stopping criterion or desired value is met.
2. General Algorithm

Step 1: START

Step 2: Set an initial guess value x0

Step 3: Substitute the initial guess into f(xi) and f′(xi) in Eq. 3-5 to obtain the new estimate xi+1

Step 4: Continue the calculation until the last two root estimates approximate each
other.

Step 5: END

3. General Flowchart


Fig. 3-7

Figure 3-7 shows the general flowchart for the Newton-Raphson method. The first step
is to initialize the values, then solve for the new estimate xn using Eq. 3-5. If the calculated
value is (approximately) equal to the previous estimate xn−1, then xn is displayed; if not, the
process repeats with the new estimate until two successive values are almost the same.

4. General Pseudo Code

Fig. 3-8
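
As a concrete counterpart to the pseudo code of Fig. 3-8, a minimal MATLAB sketch of the
Newton-Raphson iteration (Eq. 3-5) could look as follows (the function name newtraph_sketch
and the arguments df, es, and maxit are illustrative assumptions; df is the derivative f′ supplied
by the user):

function [x, ea] = newtraph_sketch(f, df, x0, es, maxit)
% Newton-Raphson iteration: x(i+1) = x(i) - f(x(i))/f'(x(i))  (Eq. 3-5)
x = x0; ea = 100;
for i = 1:maxit
    xold = x;
    x = xold - f(xold)/df(xold);            % tangent-line update
    if x ~= 0, ea = abs((x - xold)/x)*100; end
    if ea <= es, break, end
end
end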

E. Secant Method

1. General Formula
Eq. 3-6, which is the formula for the secant method, is derived from the Newton-
Raphson Method.
f x1   xi 1  xi 
xi 1  xi  Eq. 3-6
xi 1   xi 
In this method, two initial estimates of x are needed, but the value of the function
f(x) is not required.
2. General Algorithm
Step 1: Choose i=1
Step 2: Start with guesses Xi-1, Xi
Step 3: Use the general formula

xi+1 = xi − f(xi)(xi−1 − xi) / (f(xi−1) − f(xi))

Step 4: Find the approximate error using εa = |(xi+1 − xi) / xi+1| × 100%

Step 5: If the error is less than the tolerance, stop; else set xi−1 = xi and xi = xi+1, then go
back to Step 2


Step 6: End

3. General Flowchart


Fig. 3-9

Figure 3-9 shows the general flowchart for solving roots using the secant method. The first step
is to initialize all values that are needed to solve the problem. After initializing all values, the
value of xi+1 is calculated using Eq. 3-6. If the approximate error is less than the tolerance, the
value of xi+1 is displayed; if not, the process updates the estimates and repeats the preceding
steps.
4. General Pseudo Code

Fig. 3-10
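
As a concrete counterpart to the pseudo code of Fig. 3-10, a minimal MATLAB sketch of the
secant iteration (Eq. 3-6) could look as follows (the function name secant_sketch and the
arguments es and maxit are illustrative assumptions):

function [x, ea] = secant_sketch(f, x0, x1, es, maxit)
% Secant iteration (Eq. 3-6): the derivative is replaced by a finite difference
ea = 100;
for i = 1:maxit
    x2 = x1 - f(x1)*(x0 - x1)/(f(x0) - f(x1));   % Eq. 3-6
    if x2 ~= 0, ea = abs((x2 - x1)/x2)*100; end
    x0 = x1; x1 = x2;                            % shift the two estimates forward
    if ea <= es, break, end
end
x = x1;
end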

IV. Problem Solving
1. Problem Statement

Use bisection method and false position method to determine the drag coefficient
needed so that an 80-kg bungee jumper has a velocity of 36 m/s after 4 s of free fall.
Note: The acceleration of gravity is 9.81 m/s2. Start with initial guesses of xl = 0.1
and xu = 0.2 and iterate until the approximate relative error falls below 2%.

Solution:

a. Using Bisection Method


i. Manual Computation

f(cd) = √(g·m/cd) · tanh(√(g·cd/m) · t) − v(t)

f(cd) = √((9.81)(80)/cd) · tanh(√((9.81)cd/80) · 4) − 36

First iteration:

xm = (x1 + x2) / 2

xm1 = (0.1 + 0.2) / 2 = 0.15

f(0.1) f(0.15) = (0.860291)(−0.2040516) = −0.175545

Second iteration:

xm = (x1 + x2) / 2

xm2 = (0.1 + 0.15) / 2 = 0.125

εa = |(xm2 − xm1) / xm2| × 100 = |(0.125 − 0.15) / 0.125| × 100 = 20%

f(0.1) f(0.125) = (0.860291)(0.318407) = 0.273923

Values in Tabular Form


i   xl       f(xl)     xu        f(xu)      xr         f(xr)      |εa|
1   0.1      0.86029   0.2       −1.19738   0.15       −0.20452   —
2   0.1      0.86029   0.15      −0.20452   0.125      0.31841    20.00%
3   0.125    0.31841   0.15      −0.20452   0.1375     0.05464    9.09%
4   0.1375   0.05464   0.15      −0.20452   0.14375    −0.07551   4.35%
5   0.1375   0.05464   0.14375   −0.07551   0.140625   −0.01058   2.22%
6   0.1375   0.05464   0.140625  −0.01058   0.1390625  0.02199    1.12%

ii. Pseudo Code


function root = bisectnew(func,xl,xu,Ead)
% bisectnew(func,xl,xu,Ead):
%   uses the bisection method to find the root of a function
%   with a fixed number of iterations to attain
%   a prespecified tolerance
% input:
%   func = name of function
%   xl, xu = lower and upper guesses
%   Ead = (optional) desired tolerance (default = 0.000001)
% output:
%   root = real root
if func(xl)*func(xu)>0      % if guesses do not bracket a sign change
    error('no bracket')     % display an error message and terminate
end
% if necessary, assign default values
if nargin<4, Ead = 0.000001; end   % if Ead blank, set to 0.000001
% bisection
xr = xl;
% compute n and round up to the next highest integer
n = round(1 + log2((xu - xl)/Ead) + 0.5);
for i = 1:n
    xrold = xr;
    xr = (xl + xu)/2;
    if xr ~= 0, ea = abs((xr - xrold)/xr) * 100; end
    test = func(xl)*func(xr);
    if test < 0
        xu = xr;
    elseif test > 0
        xl = xr;
    else
        ea = 0;
    end
end
root = xr;
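
As a usage illustration (a sketch, assuming the listing above is saved as bisectnew.m), the
drag-coefficient problem can be solved by defining f(cd) as an anonymous function and calling
the routine with the given bracket:

fp = @(cd) sqrt(9.81*80./cd).*tanh(sqrt(9.81*cd/80)*4) - 36;  % f(cd) for the bungee jumper
root = bisectnew(fp, 0.1, 0.2)   % converges to a value near 0.140, consistent with the table above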

b. Using False Position Method
i. Manual Computation

f(cd) = √(g·m/cd) · tanh(√(g·cd/m) · t) − v(t)

f(cd) = √((9.81)(80)/cd) · tanh(√((9.81)cd/80) · 4) − 36

First iteration

xr = xu − f(xu)(xl − xu) / (f(xl) − f(xu))

xr1 = 0.2 − (−1.19738)(0.1 − 0.2) / (0.86029 − (−1.19738)) = 0.1418089392

f(xl) f(xr1) = (0.860291)(−0.03521) = −0.03029084611

Second Iteration

xu = xr1 = 0.1418089392

xr = xu − f(xu)(xl − xu) / (f(xl) − f(xu))

xr2 = 0.1418089392 − (−0.03521)(0.1 − 0.1418089392) / (0.86029 − (−0.03521)) = 0.1401650612

εa = |(xr2 − xr1) / xr2| × 100 = |(0.1401650612 − 0.1418089392) / 0.1401650612| × 100 = 1.17%

Since 1.17% is below 2%, after two iterations the obtained root estimate is 0.1401650612.

Values in Tabular Form


i   xl    f(xl)     xu             f(xu)      xr             f(xr)      εa
1   0.1   0.86029   0.2            −1.19738   0.1418089392   −0.03521   —
2   0.1   0.86029   0.1418089392   −0.03521   0.1401650612   —          1.17%
ii. Pseudo Code

function root = falsepos(func,xl,xu,es,maxit)
% falsepos(func,xl,xu,es,maxit):
%   uses the false position method to find the root
%   of the function func
% input:
%   func = name of function
%   xl, xu = lower and upper guesses
%   es = (optional) stopping criterion (%) (default = 0.001)
%   maxit = (optional) maximum allowable iterations (default = 50)
% output:
%   root = real root
if func(xl)*func(xu)>0      % if guesses do not bracket a sign change
    error('no bracket')     % display an error message and terminate
end
% default values
if nargin<5, maxit=50; end
if nargin<4, es=0.001; end
% false position
iter = 0;
xr = xl;
while (1)
    xrold = xr;
    xr = xu - func(xu)*(xl - xu)/(func(xl) - func(xu));
    iter = iter + 1;
    if xr ~= 0, ea = abs((xr - xrold)/xr) * 100; end
    test = func(xl)*func(xr);
    if test < 0
        xu = xr;
    elseif test > 0
        xl = xr;
    else
        ea = 0;
    end
    if ea <= es || iter >= maxit, break, end
end
root = xr;
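
Similarly (a sketch, assuming the listing above is saved as falsepos.m), the same bungee-jumper
function can be passed to this routine with the 2% stopping criterion of the problem statement:

fp = @(cd) sqrt(9.81*80./cd).*tanh(sqrt(9.81*cd/80)*4) - 36;  % f(cd) for the bungee jumper
root = falsepos(fp, 0.1, 0.2, 2)   % es = 2% reproduces the two-iteration result above (about 0.1402)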
2. Problem Statement
Use fixed-point iteration and the Newton-Raphson method to determine a root of
f(x) = −0.9x² + 1.7x + 2.5 using x0 = 5. Perform the computation until εa is less than
εs = 0.01%. Also check your final answer.
Solution:
a. Using Fixed-Point Iteration
i. Manual Computation

xi+1 = (0.9xi² − 2.5) / 1.7

Let: xi = 5

First Iteration

x1 = (0.9(5)² − 2.5) / 1.7 = 11.76

εa = |(11.76 − 5) / 11.76| × 100% = 57.5%

Second Iteration

x2 = (0.9(11.76)² − 2.5) / 1.7 = 71.8

εa = |(71.8 − 11.76) / 71.8| × 100% = 83.6%

Based on the calculated errors, it can be observed that the solution is diverging. Therefore, the
equation is instead rearranged by solving for x from the second-order term:

xi+1 = √((1.7xi + 2.5) / 0.9)
Let: xi = 5
First Iteration

xi+1 = √((1.7(5) + 2.5) / 0.9) = 3.496

εa = |(3.496 − 5) / 3.496| × 100% = 43.0%

Second Iteration

xi+1 = √((1.7(3.496) + 2.5) / 0.9) = 3.0629

εa = |(3.0629 − 3.496) / 3.0629| × 100% = 14.14%

Third Iteration

xi+1 = √((1.7(3.0629) + 2.5) / 0.9) = 2.9263

εa = |(2.9263 − 3.0629) / 2.9263| × 100% = 4.67%

Fourth Iteration

xi+1 = √((1.7(2.9263) + 2.5) / 0.9) = 2.88188

εa = |(2.88188 − 2.9263) / 2.88188| × 100% = 1.54%

Fifth Iteration

xi+1 = √((1.7(2.88188) + 2.5) / 0.9) = 2.86729

εa = |(2.86729 − 2.88188) / 2.86729| × 100% = 0.51%

Sixth Iteration

xi+1 = √((1.7(2.86729) + 2.5) / 0.9) = 2.862476

εa = |(2.862476 − 2.86729) / 2.862476| × 100% = 0.17%

Seventh Iteration

xi+1 = √((1.7(2.862476) + 2.5) / 0.9) = 2.86089

εa = |(2.86089 − 2.862476) / 2.86089| × 100% = 0.06%

Eighth Iteration

xi+1 = √((1.7(2.86089) + 2.5) / 0.9) = 2.86036

εa = |(2.86036 − 2.86089) / 2.86036| × 100% = 0.02%

Ninth Iteration

xi+1 = √((1.7(2.86036) + 2.5) / 0.9) = 2.86019

εa = |(2.86019 − 2.86036) / 2.86019| × 100% = 0.006%

Values in Tabular Form


Iteration   xi         εa (%)
0           5          —
1           3.496      43.0
2           3.0629     14.14
3           2.9263     4.67
4           2.88188    1.54
5           2.86729    0.51
6           2.862476   0.17
7           2.86089    0.06
8           2.86036    0.02
9           2.86019    0.006

ii. Pseudo Code
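
A minimal MATLAB sketch of the convergent rearrangement used above (script form; the
variable names and the while-loop structure are illustrative assumptions):

% Fixed-point iteration for -0.9x^2 + 1.7x + 2.5 = 0 using x = sqrt((1.7x + 2.5)/0.9)
g = @(x) sqrt((1.7*x + 2.5)/0.9);
x = 5; ea = 100; es = 0.01;             % initial guess and stopping criterion (%)
while ea > es
    xold = x;
    x = g(xold);                        % Eq. 3-4
    ea = abs((x - xold)/x)*100;
end
x                                       % converges to about 2.8602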

b. Using Newton-Raphson Method


i. Manual Computation

xi+1 = xi − (−0.9xi² + 1.7xi + 2.5) / (−1.8xi + 1.7)

Let: xi = 5

First Iteration

xi+1 = 5 − (−0.9(25) + 1.7(5) + 2.5) / (−1.8(5) + 1.7) = 3.424

|εa| = |(3.424 − 5) / 3.424| × 100 = 46.0%

Second Iteration

xi+1 = 3.424 − (−0.9(3.424)² + 1.7(3.424) + 2.5) / (−1.8(3.424) + 1.7) = 2.924

|εa| = |(2.924 − 3.424) / 2.924| × 100 = 17.1%

Third Iteration

xi+1 = 2.924 − (−0.9(2.924)² + 1.7(2.924) + 2.5) / (−1.8(2.924) + 1.7) = 2.861

|εa| = |(2.861 − 2.924) / 2.861| × 100 = 2.20%

Fourth Iteration

xi+1 = 2.861 − (−0.9(2.861)² + 1.7(2.861) + 2.5) / (−1.8(2.861) + 1.7) = 2.8601

|εa| = |(2.8601 − 2.861) / 2.8601| × 100 = 0.0364%

Fifth Iteration

xi+1 = 2.8601 − (−0.9(2.8601)² + 1.7(2.8601) + 2.5) / (−1.8(2.8601) + 1.7) = 2.8601

|εa| = |(2.8601 − 2.8601) / 2.8601| × 100 = 0.0000%

Values in Tabular Form

Iteration   xi       f(xi)     f′(xi)    εa (%)
0           5        −11.5     −7.3      —
1           3.42     −2.233    −4.46     46
2           2.92     −0.22     −3.56     17.1
3           2.86     −0.36     −3.43     2.20
4           2.8601   9.8E-7    −3.448    0.0364
5           2.8601   7.2E-14   −3.448    0.0000
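
For completeness, a corresponding MATLAB sketch of the Newton-Raphson iteration above
(script form; names are illustrative assumptions):

f  = @(x) -0.9*x.^2 + 1.7*x + 2.5;      % the given function
df = @(x) -1.8*x + 1.7;                 % its derivative
x = 5; ea = 100; es = 0.01;             % initial guess and stopping criterion (%)
while ea > es
    xold = x;
    x = xold - f(xold)/df(xold);        % Eq. 3-5
    ea = abs((x - xold)/x)*100;
end
x                                       % converges to about 2.8601 in about five iterations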

3. Problem Statement

Determine the highest root of the function f(x) = x³ + 6x² + 11x − 6.1 using the secant method
with initial guesses of 2.5 and 3.5 and a maximum tolerance of 0.01. Determine all roots
using MATLAB.

i. Pseudo Code
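
A minimal MATLAB sketch consistent with the statement above (the polynomial is taken
exactly as written, the 0.01 tolerance is interpreted as a percent approximate error, and the
variable names are illustrative assumptions). Note that if the intended function were
x³ − 6x² + 11x − 6.1, a form consistent with the guesses 2.5 and 3.5 bracketing a highest root
near 3, the same code would apply with the sign of the x² coefficient changed.

f = @(x) x.^3 + 6*x.^2 + 11*x - 6.1;    % polynomial as given in the statement
x0 = 2.5; x1 = 3.5; ea = 100; tol = 0.01;
while ea > tol
    x2 = x1 - f(x1)*(x0 - x1)/(f(x0) - f(x1));   % secant update, Eq. 3-6
    if x2 ~= 0, ea = abs((x2 - x1)/x2)*100; end
    x0 = x1; x1 = x2;                            % shift estimates forward
end
x1                                      % secant estimate of the cubic's single real root
roots([1 6 11 -6.1])                    % all roots of the cubic, as requested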

V. Generalization and conclusion

Bracketing methods, as the name implies, are based on two initial guesses that bracket the
root, that is, lie on either side of it. Open methods can involve one or more initial guesses, but
there is no need for them to bracket the root.

Using these methods to solve the given problems showed that the bracketing methods,
bisection and false position, did not diverge but took many iterations to home in on the answer.
The open methods, simple fixed-point iteration and Newton-Raphson, may cause the iteration
to diverge and fail, but when they do converge they usually take only a few iterations to obtain
the answer.

In conclusion, based on the given results, the bracketing methods always converge, but
slowly, while the open methods may diverge, yet converge rapidly when they succeed.
Comparing the bracketing methods with each other, the false position method converges faster
than the bisection method; among the open methods, the Newton-Raphson method converges
faster than the simple fixed-point iteration method.

VI. Definition of terms
 Pseudo code - a notation resembling a simplified programming language, used in
program design.
 Iteration - repetition of a mathematical or computational procedure applied to the result
of a previous application, typically as a means of obtaining successively closer
approximations to the solution of a problem.
 True value - the exact value of the quantity (here, the root) being sought; numerical
methods produce approximations whose error is measured against this value.
 Non-linear equation - A nonlinear system of equations is a set of equations where one
or more terms have a variable of degree two or higher and/or there is a product of
variables in one of the equations.
 Algorithm - a process or set of rules to be followed in calculations or other problem-
solving operations, especially by a computer.

VII. References

(Chapra)

https://books.google.com.ph/books?id=jBkGxMkI4DMC&printsec=frontcover#v=onepage&
q&f=false


https://sites.google.com/site/knowyourrootsmaxima/introduction/bisectionmethod

http://campus.murraystate.edu/academic/faculty/wlyle/420/Bisection.html

https://en.wikipedia.org/wiki/Nonlinear_system
