
Numerical Analysis
Lecture 1

Introduction

Numerical analysis is a branch of mathematics, but it deals with
questions that are closely tied to the use of computers and to
applications in science and engineering.

Using numerical analysis we will be able, for instance, to handle
large systems of equations, non-linearities and complicated
geometries, and to solve engineering problems that have no
analytical solution.

APPROXIMATION AND ERRORS


Numerical Methods
Instead of solving for the exact solution, we solve math problems
with a series of arithmetic operations.

Example:
    ∫_a^b (1/x) dx
Analytical solution: ln(b) − ln(a)
Numerical solution: e.g., the trapezoidal rule
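
As an illustration (not from the lecture itself), here is a minimal Python
sketch of the composite trapezoidal rule applied to this integral; the
interval [1, 2] and the number of subintervals n are arbitrary choices:

    import math

    def trapezoid(f, a, b, n=100):
        # Composite trapezoidal rule: approximate the integral of f on [a, b]
        # using n trapezoids of width h.
        h = (b - a) / n
        total = 0.5 * (f(a) + f(b))
        for i in range(1, n):
            total += f(a + i * h)
        return h * total

    a, b = 1.0, 2.0                              # illustrative interval
    approx = trapezoid(lambda x: 1.0 / x, a, b)  # numerical solution
    exact = math.log(b) - math.log(a)            # analytical solution
    print(approx, exact, abs(exact - approx))    # the last value is the error

The error shrinks as n grows, which is exactly the trade-off the error
analysis below is concerned with.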
Error Analysis
(a) identify the possible sources of error
(b) estimate the magnitude of the error
(c) determine how to minimize and control error

Mathematical Models
Comparing solutions:
[Figure: velocity V (m/s) versus time T (s), comparing the exact solution
with numerical solutions obtained with step sizes of 2 s and 1 s.]

Error Types
In general, errors can be classified according to their sources as
non-numerical and numerical errors.
Non-numerical errors:
(1) modeling errors: caused by the assumptions and limitations of
the mathematical model
(2) gross errors or blunders: human mistakes
(3) uncertainty in the information and data

Numerical errors:
(1) round-off errors: due to the limited number of significant
digits a computer can store
(2) truncation errors: due to truncated terms, e.g. cutting off an
infinite Taylor series
(3) propagation errors: due to a sequence of operations; they can
be reduced by choosing a good computational order. For example,
when summing several values, we can sort them in ascending order
before performing the summation (see the sketch below).
(4) mathematical-approximation errors: e.g. using a linear model
to represent a nonlinear expression.
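
To make item (3) concrete, here is a small Python sketch (the values are
hypothetical, chosen only so the effect is visible in double precision):
the same data summed in two different orders.

    big = 1.0e16
    small = [1.0] * 1000

    # Descending order: each 1.0 is absorbed by the large partial sum and lost.
    descending = big
    for v in small:
        descending += v

    # Ascending order: the small values accumulate first, then join the big one.
    ascending = sum(small) + big

    print(descending)  # 1e16, the thousand ones have vanished
    print(ascending)   # about 1.0000000000001e16, they were kept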


Approximations and Rounding Errors

Unfortunately, computers introduce errors into the calculations.
However, since many engineering problems have no analytical
solution, we are forced to use numerical methods (approximations).
The only option we have is to accept the error and try to reduce it
to a tolerable level.
The only way to minimize the errors is to know and understand why
they occur and how we can diminish them.
The most frequent errors are:
Rounding errors, due to the fact that computers can work only
with a finite representation of numbers.
Truncation errors, due to differences between the exact and the
approximate (numerical) formulations of the mathematical problem
being dealt with.
Before analyzing each of them, we will look at two important
concepts in the computer representation of numbers.
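
As a quick illustration of a rounding error (not an example from the
lecture): the decimal 0.1 has no exact binary representation, so even a
trivial sum in Python is slightly off.

    a = 0.1 + 0.2
    print(a)             # 0.30000000000000004
    print(a == 0.3)      # False
    print(abs(a - 0.3))  # about 5.6e-17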

Approximations and Rounding Errors

Significant figures of a number:
The significant figures of a number are those that can be used
with confidence.
This concept has two important implications:
1. An approximation is acceptable when it is exact for a given
number of significant figures.
2. There are magnitudes or constants that cannot be represented
exactly:
    π = 3.14159265...
    √17 = 4.123105...

Approximations and Rounding Errors

Accuracy: closeness of measured/computed values to the "true"
value (vs. inaccuracy or bias).
Bias: systematic deviation from the truth, a "general trend".
Precision: closeness of measured/computed values to each other
(spread or scatter); it relates to the number of significant
figures (vs. imprecision or uncertainty).

Approximations and Rounding Errors


Accuracy and precision:
The errors associated with measurements can be characterized by
observing their accuracy and precision.
Accuracy refers to how close a value is to the true value.
Precision refers to how close different values measured with the
same method are to each other.

Numerical methods must be sufficiently exact (without bias) and
precise to satisfy the requirements of engineering problems. From
now on we will use the term "error" to refer to both the
inaccuracy and the imprecision of our predictions.

Approximations and Rounding Errors

[Figure: target diagrams illustrating the four combinations:
(a) inaccurate and imprecise, (b) accurate but imprecise,
(c) inaccurate but precise, (d) accurate and precise.]

Approximations and Rounding Errors

Error definitions:
True value = approximation + absolute error.
Absolute error = true value − approximation.
Relative error = absolute error / true value.

    ε_t = (absolute error / true value) × 100%

In real cases one cannot always know the true value, thus:

    ε_a = (approximate error / approximate value) × 100%

On many occasions, the error is estimated as the difference
between the previous and the current approximations:

    ε_a = ((current approximation − previous approximation)
           / current approximation) × 100%
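
A minimal Python sketch of these definitions, using two successive
(hypothetical) estimates of √2 as the example:

    import math

    true_value = math.sqrt(2)
    previous, current = 1.4, 1.41   # illustrative successive approximations

    # True relative error: requires knowledge of the true value.
    eps_t = abs(true_value - current) / abs(true_value) * 100

    # Approximate relative error: uses only the two latest approximations.
    eps_a = abs(current - previous) / abs(current) * 100

    print(f"eps_t = {eps_t:.3f}%   eps_a = {eps_a:.3f}%")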

Approximations and Rounding Errors


Thus, the stopping criterion of a numerical method can be:

    |ε_a| < ε_s,   where ε_s is a prefixed percent tolerance.

It is convenient to relate the errors to the number of significant
figures. If the following relation holds, one can be sure that at
least n significant figures are correct:

    ε_s = (0.5 × 10^(2−n)) %
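
As a sketch of how this criterion is used in practice (the series for e^x
and the point x = 0.5 are only an illustration, not taken from the
lecture): keep adding Maclaurin-series terms until |ε_a| drops below ε_s
for the desired number of significant figures.

    import math

    def exp_series(x, n_sig):
        # Percent tolerance guaranteeing at least n_sig correct significant figures.
        eps_s = 0.5 * 10 ** (2 - n_sig)
        total, term, k = 1.0, 1.0, 0
        while True:
            k += 1
            term *= x / k                 # next Taylor term x**k / k!
            previous, total = total, total + term
            eps_a = abs((total - previous) / total) * 100
            if eps_a < eps_s:             # stopping criterion |eps_a| < eps_s
                return total, k

    approx, terms = exp_series(0.5, n_sig=6)
    print(approx, math.exp(0.5), terms)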

Approximations and Rounding Errors

Numerical systems:
A numerical system is a convention for representing quantities.
Since we have 10 fingers on our hands, the most popular numerical
system has base 10; it uses 10 different digits.

a) Base 10 example: the digits 8 6 4 0 9 occupy the positions
10^4 10^3 10^2 10^1 10^0, so

    9 × 1     =     9
    0 × 10    =     0
    4 × 100   =   400
    6 × 1000  =  6000
    8 × 10000 = 80000
                -----
                86409

However, due to their memory structure, computers can only store
two digits, 0 and 1. Thus, they use the binary system of numeric
representation.

b) Base 2 example: the bits 1 0 1 0 1 1 0 1 occupy the positions
2^7 2^6 2^5 2^4 2^3 2^2 2^1 2^0, so

    1 × 1   =   1
    0 × 2   =   0
    1 × 4   =   4
    1 × 8   =   8
    0 × 16  =   0
    1 × 32  =  32
    0 × 64  =   0
    1 × 128 = 128
              ---
              173
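
These positional expansions are easy to verify with a couple of lines of
Python (a sketch, not lecture code):

    bits = "10101101"
    value = sum(int(b) * 2 ** i for i, b in enumerate(reversed(bits)))
    print(value)         # 173, from the positional expansion above
    print(int(bits, 2))  # 173, using Python's built-in base conversion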

Round-off Errors
Background:

How are numbers stored in a computer?

The fundamental unit, a "word", consists of a string of "bits"
(binary digits). Because computers are built from gates or
switches that are either closed or open, we work in the binary,
or base-2, system.

A number in base q will be denoted by

    (a_n a_{n-1} ... a_1 a_0 . b_1 b_2 ... b_k ...)_q

The conversion to base 10 is, by definition,

    (a_n a_{n-1} ... a_1 a_0 . b_1 b_2 ... b_k ...)_q
        = a_n q^n + a_{n-1} q^{n-1} + ... + a_1 q + a_0
          + b_1 q^{-1} + b_2 q^{-2} + ...

Example:
    (1011.01)_2 = 1×2^3 + 0×2^2 + 1×2^1 + 1×2^0 + 0×2^{-1} + 1×2^{-2} = 11.25
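
A small Python sketch of this definition, handling both the integer and
the fractional digits (the function name and interface are my own, not
from the lecture):

    def to_base10(digits, q):
        # digits is a string such as "1011.01"; q is the base.
        if "." in digits:
            int_part, frac_part = digits.split(".")
        else:
            int_part, frac_part = digits, ""
        value = sum(int(d) * q ** i for i, d in enumerate(reversed(int_part)))
        value += sum(int(d) * q ** -(i + 1) for i, d in enumerate(frac_part))
        return value

    print(to_base10("1011.01", 2))   # 11.25, as in the example above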

Round-off Errors
Conversion from base 10 to base q.
This is the recipe for the conversion:

Integer part: divide the integer part successively by q (2 in the
binary case) and retain the remainder at each step; the remainders,
read in reverse order, give the digits.

Fractional part: multiply the fractional part by q and retain the
integer part at each step.

Example:
    (26.1)_10 = (11010.00011...)_2
(the binary expansion of 0.1 is infinite and repeating, so it must
be truncated).
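
The recipe can be sketched in Python as follows (the function and the
choice of five fractional bits are illustrative only):

    def to_binary(x, frac_bits=5):
        n = int(x)                     # integer part
        frac = x - n                   # fractional part
        int_digits = ""
        while n > 0:
            n, r = divmod(n, 2)        # divide by 2, retain the remainder
            int_digits = str(r) + int_digits
        frac_digits = ""
        for _ in range(frac_bits):     # truncate the (possibly infinite) expansion
            frac *= 2                  # multiply by 2 ...
            bit = int(frac)            # ... and retain the integer part
            frac -= bit
            frac_digits += str(bit)
        return (int_digits or "0") + "." + frac_digits

    print(to_binary(26.1))             # 11010.00011 (truncated)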

Round-off Errors
Example: an 8-bit signed-magnitude representation of the integer 35 is

    0 0 1 0 0 0 1 1

where the leading bit is the sign (+) and the remaining bits have the
place values 2^6 2^5 2^4 2^3 2^2 2^1 2^0:

    0×2^6 + 1×2^5 + 0×2^4 + 0×2^3 + 0×2^2 + 1×2^1 + 1×2^0
        = 32 + 2 + 1 = 35

Note: we can only represent a finite number of integers; in this case
from −127 to +127 (127 = 2^7 − 1), a total of 255 numbers (including 0).
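
Python's built-in formatting reproduces the same bit pattern (a quick
check, not part of the lecture):

    print(format(35, "08b"))   # 00100011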

Approximations and Rounding Errors

Representation of integer numbers:
To represent base-10 integers in binary form, the signed-magnitude
method is used: the first bit stores the sign (0 for positive, 1
for negative) and the remaining bits store the number.

A computer working with 16-bit words can store integer numbers in
the range −32768 to 32767 (in practice two's complement is used,
which provides the extra negative value −32768).
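
Assuming NumPy is available, its fixed-width integer types report the
same 16-bit range (illustrative sketch):

    import numpy as np

    info = np.iinfo(np.int16)
    print(info.min, info.max)   # -32768 32767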

Approximations and Rounding Errors

Floating point representation:
This representation is used for fractional quantities. It has a
fractional part, called the mantissa, and an integer part, called
the exponent or characteristic:

    m × b^e

The mantissa is usually normalized, so that the value of m is
limited to (b = 2 in binary):

    1/b ≤ m < 1
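
In Python, math.frexp exposes exactly this normalization for base b = 2:
it returns a mantissa m with 0.5 ≤ |m| < 1 and an exponent e such that
x = m · 2^e (the value 156.78 is just an illustration):

    import math

    m, e = math.frexp(156.78)
    print(m, e)          # 0.612421875 8
    print(m * 2 ** e)    # 156.78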

Approximations and Rounding Errors


IEEE floating-point formats: there are two precisions, single and
double. They differ in the number of bits available for storing
the numbers:
Single precision (32 bits): 1 bit for the sign, 8 bits for the
exponent, 23 bits for the mantissa.
Double precision (64 bits, two 32-bit words): 1 bit for the sign,
11 bits for the exponent, 52 bits for the mantissa.
The number of bits for the exponent and the mantissa determine the
underflow and overflow limits.
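
Assuming NumPy is available, np.finfo reports the limits implied by these
bit layouts: machine epsilon from the mantissa bits, and the underflow
and overflow thresholds from the exponent bits (illustrative sketch):

    import numpy as np

    for dtype in (np.float32, np.float64):
        info = np.finfo(dtype)
        # eps: spacing at 1.0; tiny: smallest normalized value (underflow);
        # max: largest finite value (overflow threshold).
        print(info.bits, info.eps, info.tiny, info.max)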
