
Interpolation MACM 316 1/13

Approximating Functions
The usual case:
We have a table of values representing some function f(x),
arising from an experiment, historical measurements, etc.
We want to interpolate the data, or fill in the blanks.
Example: Consider the following data measured from a function f(x) on the interval 0 ≤ x ≤ 10:
[Figure: two plots of the data points (x, y) for 0 ≤ x ≤ 10, 0 ≤ y ≤ 6.]
x y = f(x)
0.0 0.0000
0.5 1.5163
1.0 3.6788
2.0 5.4134
2.5 5.1303
3.0 4.4808
4.0 2.9305
5.0 1.6845
6.0 0.8924
7.0 0.4468
8.0 0.2147
9.0 0.1000
10.0 0.0454
Suppose we want to approximate f(x) on the subinterval [1, 2] by a linear function ⟹ this is interpolation.
October 13, 2008 © Steven Rauch and John Stockie
Linear Interpolation
Equation of the line joining (1.0, 3.6788) and (2.0, 5.4134) is
p(x) = 1.9442 + 1.7346x
Use p(x) to approximate f(x) at an unknown point (interpolate):

f(1.75) ≈ p(1.75) = 4.9798

Given that the exact value is f(1.75) = 5.3218, the interpolation error is

|4.9798 − 5.3218| / |5.3218| = 0.0643   (relative error)
We could extrapolate outside the interval [1, 2] to obtain
p(2.25) = 5.8471
. . . but this is usually inaccurate.
It's typically safer to derive the linear interpolant on [2, 2.5]:
q(x) = 6.5458 − 0.5662x  ⟹  q(2.25) = 5.2719
Interpolation vs. extrapolation
[Figure: the linear interpolant on [1, 2] and its extrapolation to x = 2.25, plotted for 1 ≤ x ≤ 2.5.]
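The computations on this slide can be sketched in a few lines of Python (values taken from the table above; last digits may differ by rounding):

```python
# Linear interpolant through two points (x0, y0) and (x1, y1).
def linear_interp(x0, y0, x1, y1, x):
    return y0 + (y1 - y0) / (x1 - x0) * (x - x0)

# Interpolate on [1, 2] using (1.0, 3.6788) and (2.0, 5.4134).
p_175 = linear_interp(1.0, 3.6788, 2.0, 5.4134, 1.75)    # about 4.9798
rel_err = abs(p_175 - 5.3218) / abs(5.3218)              # about 0.0643

# Safer: interpolate on [2, 2.5] using (2.0, 5.4134) and (2.5, 5.1303).
q_225 = linear_interp(2.0, 5.4134, 2.5, 5.1303, 2.25)    # about 5.2719
```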
Higher Order Interpolation
Note: We assume the original data comes from a smooth
function, but linear interpolation on more than two points is
not smooth.
How do we get a smooth approximation for f(x)???
⟹ use higher order polynomials!
Suppose we have a list of n + 1 points

{(x_0, y_0), (x_1, y_1), . . . , (x_n, y_n)}

where y_i = f(x_i) for i = 0, 1, 2, . . . , n.
Theorem 3.3 (p. 108): Provided the points x_i are all distinct, there exists a UNIQUE polynomial of degree at most n,

P_n(x) = a_0 + a_1 x + a_2 x^2 + · · · + a_n x^n,

which satisfies P_n(x_i) = y_i for i = 0, 1, 2, . . . , n.

Furthermore,

f(x) = P_n(x) + E_n(x)

where the error term is

E_n(x) = [f^(n+1)(c) / (n + 1)!] · (x − x_0)(x − x_1) · · · (x − x_n)

for some c ∈ [x_0, x_n].

P_n(x) is called an interpolating polynomial for f(x).
Previous Example: The interpolating polynomial of degree 2 for f(x) on [1, 2.5] is

P_2(x) = −1.1235 + 6.3362x − 1.5339x^2,

since P_2(1) = 3.6788, P_2(2) = 5.4134 and P_2(2.5) = 5.1303.

We can now approximate f(1.75) by P_2(1.75) = 5.26735, which has a relative error of 0.0102, much smaller than we saw for the linear approximation!
Quadratic vs. Linear Interpolation
[Figure: f(x) with the linear interpolant P_1(x) and the quadratic interpolant P_2(x) on 1 ≤ x ≤ 2.5.]
Note: Increasing the degree of the polynomial does not necessarily guarantee an improved approximation.
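A small numerical illustration of this caution (our own sketch, not from the slides), using f(x) = 1/(1 + 25x^2) on [−1, 1], a classic case where raising the degree of the interpolant on equally spaced points makes the worst-case fit worse:

```python
def f(x):
    return 1.0 / (1.0 + 25.0 * x * x)

def lagrange_eval(xs, ys, x):
    """Evaluate the interpolating polynomial through (xs, ys) in Lagrange form."""
    total = 0.0
    for j in range(len(xs)):
        Lj = 1.0
        for i in range(len(xs)):
            if i != j:
                Lj *= (x - xs[i]) / (xs[j] - xs[i])
        total += ys[j] * Lj
    return total

def max_error(n):
    """Worst-case error of degree-n equispaced interpolation, on a fine grid."""
    xs = [-1.0 + 2.0 * i / n for i in range(n + 1)]
    ys = [f(x) for x in xs]
    return max(abs(lagrange_eval(xs, ys, -1.0 + k / 500.0) - f(-1.0 + k / 500.0))
               for k in range(1001))

errors = [max_error(n) for n in (4, 8, 12)]
# The worst-case error grows as the degree increases.
```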
Interpolating Polynomials: The Mechanics
Suppose we have 3 points: (x_0, y_0), (x_1, y_1), (x_2, y_2).
We want to fit a quadratic (n = 2): y = c_0 + c_1 x + c_2 x^2.
Substitute:

y_0 = c_0 + c_1 x_0 + c_2 x_0^2
y_1 = c_0 + c_1 x_1 + c_2 x_1^2
y_2 = c_0 + c_1 x_2 + c_2 x_2^2
This can be rewritten as a linear system:

[ 1  x_0  x_0^2 ] [ c_0 ]   [ y_0 ]
[ 1  x_1  x_1^2 ] [ c_1 ] = [ y_1 ]
[ 1  x_2  x_2^2 ] [ c_2 ]   [ y_2 ]

Generalize for n + 1 points (x_0, y_0), (x_1, y_1), . . . , (x_n, y_n):

[ 1  x_0  x_0^2  · · ·  x_0^n ] [ c_0 ]   [ y_0 ]
[ 1  x_1  x_1^2  · · ·  x_1^n ] [ c_1 ]   [ y_1 ]
[ :   :    :            :     ] [  :  ] = [  :  ]
[ 1  x_n  x_n^2  · · ·  x_n^n ] [ c_n ]   [ y_n ]

or A c = y.

. . . this is a linear system for the coefficients c.

A is an (n + 1) × (n + 1) matrix, and is called a Vandermonde matrix.
Vandermonde Example
Given the following gasoline price data (obviously out of date!):
Year 1986 1988 1990 1992 1994 1996
Price (cents) 66.7 66.1 69.3 70.7 68.8 72.1
Use polynomial interpolation to approximate the gas price in 1991.
Problems:
The Vandermonde matrix is usually ill-conditioned, which can cause large errors in the polynomial coefficients.
Solving a dense matrix system requires O(n^3) floating point operations, which is expensive.
We want a cheaper and more accurate method!
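As a sketch of how this computation might look in practice (numpy-based; the shift to t = year − 1991 is our own choice to keep the solve well-behaved, since the raw years, with entries like 1996^5, make the matrix hopelessly ill-conditioned in double precision):

```python
import numpy as np

# Gas price data from the table.
years  = np.array([1986., 1988., 1990., 1992., 1994., 1996.])
prices = np.array([66.7, 66.1, 69.3, 70.7, 68.8, 72.1])

# Shift the years so the target 1991 sits at t = 0.
t = years - 1991.0

# A[i, j] = t[i]**j, so  A c = prices  is exactly the system above.
A = np.vander(t, increasing=True)
c = np.linalg.solve(A, prices)

# The interpolant at 1991 is its value at t = 0: the constant term.
price_1991 = c[0]          # about 70.5 cents
```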
Lagrange Interpolation
Again, start with n + 1 points satisfying y_i = f(x_i):

(x_0, y_0), (x_1, y_1), . . . , (x_n, y_n)
Idea: Construct a simple polynomial that is equal to 1 at x_0 and equal to 0 at every other point:

L_0(x):    1     0     0    · · ·    0
           |     |     |             |
          x_0   x_1   x_2   · · ·   x_n
The following does the trick:

L_0(x) = [(x − x_1)/(x_0 − x_1)] · [(x − x_2)/(x_0 − x_2)] · · · [(x − x_n)/(x_0 − x_n)] = ∏_{i=1}^{n} (x − x_i)/(x_0 − x_i)
Construct similar polynomials for the other points,

L_j(x) = ∏_{i=0, i≠j}^{n} (x − x_i)/(x_j − x_i)

which satisfy

L_j(x_i) = 1 if i = j,  0 if i ≠ j.

L_j(x) is called the j-th Lagrange polynomial.
We can now write the interpolating polynomial for f(x) as

P_n(x) = y_0 L_0(x) + y_1 L_1(x) + · · · + y_n L_n(x) = ∑_{j=0}^{n} y_j L_j(x)

so that

P_n(x_k) = y_0 L_0(x_k) + y_1 L_1(x_k) + · · · + y_n L_n(x_k) = y_k L_k(x_k) = y_k
Lagrange Interpolation Example
Back to the data from our first example:
x y = f(x)
0.0 0.0000
0.5 1.5163
1.0 3.6788
2.0 5.4134
2.5 5.1303
3.0 4.4808
4.0 2.9305
x y = f(x)
5.0 1.6845
6.0 0.8924
7.0 0.4468
8.0 0.2147
9.0 0.1000
10.0 0.0454
Find the quadratic interpolating polynomial P_2(x) on [1, 2.5]:
First set up the Lagrange polynomials:

L_0(x) = [(x − 2)/(1 − 2)] · [(x − 2.5)/(1 − 2.5)] = (x − 2)(x − 2.5)/1.5   (leave unexpanded!!!)
L_1(x) = [(x − 1)/(2 − 1)] · [(x − 2.5)/(2 − 2.5)] = (x − 1)(x − 2.5)/(−0.5)
L_2(x) = [(x − 1)/(2.5 − 1)] · [(x − 2)/(2.5 − 2)] = (x − 1)(x − 2)/0.75
Then construct the interpolating polynomial:

P_2(x) = (3.6788/1.5)(x − 2)(x − 2.5) + (5.4134/(−0.5))(x − 1)(x − 2.5) + (5.1303/0.75)(x − 1)(x − 2)
       = 2.4525(x − 2)(x − 2.5) − 10.8268(x − 1)(x − 2.5) + 6.8404(x − 1)(x − 2)

and P_2(1.75) = 5.26735.
Never expand the Lagrange form . . . except to compare:

P_2(x) = −1.1235 + 6.3362x − 1.5339x^2   (same as before . . . UNIQUE!!)
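This construction is easy to code directly; a minimal sketch (the function name is our own):

```python
def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange form of the interpolating polynomial at x."""
    total = 0.0
    for j in range(len(xs)):
        # j-th Lagrange basis: L_j(x) = prod over i != j of (x - x_i)/(x_j - x_i)
        Lj = 1.0
        for i in range(len(xs)):
            if i != j:
                Lj *= (x - xs[i]) / (xs[j] - xs[i])
        total += ys[j] * Lj
    return total

# Quadratic interpolant on [1, 2.5] from the example above.
p175 = lagrange_interp([1.0, 2.0, 2.5], [3.6788, 5.4134, 5.1303], 1.75)
# p175 is about 5.26735, matching P_2(1.75)
```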
Pros and Cons of the Lagrange Form
Advantages of the Lagrange form of the interpolating polynomial:
Polynomial can be written down easily: great for manual computation!
Cost of O(n^2) is much better: the Vandermonde approach was O(n^3).
Disadvantages:
If we add another point, (x_{n+1}, y_{n+1}), then we have to start over from scratch: none of the L_k(x) can be reused.
No simple way to estimate error: we're stuck using the error term E_n(x) from Theorem 3.3,

E_n(x) = [f^(n+1)(c) / (n + 1)!] · (x − x_0)(x − x_1) · · · (x − x_n),

for which f^(n+1)(x) is typically not known!
Newton Divided Differences
Another way to construct the interpolating polynomial P_n(x).
Idea: Build up one point and one degree at a time.
Start at degree 0, where P_0(x) interpolates a single point (x_0, y_0):

P_0(x) = y_0

Move to degree 1 by adding a term a_1(x − x_0):

P_1(x) = y_0 + [(y_1 − y_0)/(x_1 − x_0)] (x − x_0)
This pattern can be continued to higher degree:

P_0(x) = a_0
P_1(x) = a_0 + a_1(x − x_0)
P_2(x) = a_0 + a_1(x − x_0) + a_2(x − x_0)(x − x_1)
. . .
P_n(x) = a_0 + a_1(x − x_0) + a_2(x − x_0)(x − x_1) + · · · + a_n(x − x_0)(x − x_1) · · · (x − x_{n−1})
       = ∑_{j=0}^{n} a_j ∏_{i=0}^{j−1} (x − x_i)
We now substitute x = x_k into P_n:

y_0 = P_n(x_0) = a_0  ⟹  a_0 = y_0
y_1 = P_n(x_1) = a_0 + a_1(x_1 − x_0)  ⟹  a_1 = (y_1 − y_0)/(x_1 − x_0)

which is a divided difference.
The next divided difference comes from

y_2 = P_n(x_2) = a_0 + a_1(x_2 − x_0) + a_2(x_2 − x_0)(x_2 − x_1)
⟹  a_2 = [ (y_2 − y_1)/(x_2 − x_1) − (y_1 − y_0)/(x_1 − x_0) ] / (x_2 − x_0)
It helps to introduce some notation:

f[x_i] = y_i
f[x_i, x_{i+1}] = (f[x_{i+1}] − f[x_i]) / (x_{i+1} − x_i)
f[x_i, x_{i+1}, x_{i+2}] = (f[x_{i+1}, x_{i+2}] − f[x_i, x_{i+1}]) / (x_{i+2} − x_i)
. . .
f[x_i, . . . , x_{i+k}] = (f[x_{i+1}, . . . , x_{i+k}] − f[x_i, . . . , x_{i+k−1}]) / (x_{i+k} − x_i)
Then the coefficient a_k is given by

a_k = f[x_0, x_1, . . . , x_k]

and is called the k-th divided difference.
The calculations are much simpler when organized as a table:

x      y         a_1            a_2                  a_3                       a_4
x_0    f[x_0]
                 f[x_0, x_1]
x_1    f[x_1]                   f[x_0, x_1, x_2]
                 f[x_1, x_2]                         f[x_0, x_1, x_2, x_3]
x_2    f[x_2]                   f[x_1, x_2, x_3]                               . . .
                 f[x_2, x_3]                         f[x_1, x_2, x_3, x_4]
x_3    f[x_3]                   f[x_2, x_3, x_4]                               . . .
                 f[x_3, x_4]                         f[x_2, x_3, x_4, x_5]
x_4    f[x_4]                   f[x_3, x_4, x_5]
                 f[x_4, x_5]
x_5    f[x_5]
Newton Divided Differences Example
Use data from our previous example to construct the table.
Take the points x = 0.0, 0.5, 1.0, 2.0, 2.5, 3.0.
x     y         a_1        a_2        a_3        a_4
0.0   0.0000
                3.0327
0.5   1.5163               1.2923
                4.3249                −1.5096
1.0   3.6788              −1.7269                 0.6424
                1.7346                 0.0965
2.0   5.4134              −1.5339                 0.1216
               −0.5662                 0.4006
2.5   5.1303              −0.7328
               −1.2990
3.0   4.4808
The coefficients a_k can be read directly off the diagonals.
Starting at the point x = 0.0:

P_0(x) = 0.0000
P_1(x) = 0.0000 + 3.0327(x − 0.0)
P_2(x) = 3.0327(x − 0.0) + 1.2923(x − 0.0)(x − 0.5)
P_3(x) = P_2(x) − 1.5096(x − 0.0)(x − 0.5)(x − 1.0)
etc.
Starting at the point x = 1.0 (as in the earlier examples):

P_0(x) = 3.6788
P_1(x) = 3.6788 + 1.7346(x − 1.0)
P_2(x) = 3.6788 + 1.7346(x − 1.0) − 1.5339(x − 1.0)(x − 2.0)
etc.
. . . exactly as before!
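The table-building and evaluation can be sketched in a few lines (our own code, not from the slides; it updates one coefficient array in place):

```python
def divided_differences(xs, ys):
    """Return the Newton coefficients a_k = f[x_0, ..., x_k]."""
    n = len(xs)
    coef = list(ys)
    for k in range(1, n):
        # After this pass, coef[j] = f[x_{j-k}, ..., x_j] for j >= k.
        for j in range(n - 1, k - 1, -1):
            coef[j] = (coef[j] - coef[j - 1]) / (xs[j] - xs[j - k])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton form with nested (Horner-like) multiplication."""
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[k]) + coef[k]
    return result

xs = [0.0, 0.5, 1.0, 2.0, 2.5, 3.0]
ys = [0.0000, 1.5163, 3.6788, 5.4134, 5.1303, 4.4808]
a = divided_differences(xs, ys)
# a[1] is about 3.033, a[2] about 1.292, a[3] about -1.510: the table's
# top diagonal, up to rounding of the tabulated data.
```

Since coef[k] = f[x_0, . . . , x_k], truncating the array gives lower-degree interpolants: newton_eval(xs, a[:m+1], x) is the degree-m polynomial built from the first m + 1 points.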
Advantages of Newton Divided Differences:
Can extract the interpolating polynomial for any interval and any degree desired.
Cost is half that of the Lagrange polynomial.
Easy access to an error estimate: the error term can be written as

E_n(x) = f[x_0, x_1, . . . , x_{n+1}] ∏_{i=0}^{n} (x − x_i)

(see Exercise 20, page 129). In practice, the error can be estimated by adding one more point and computing one extra row in the table.
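For the quadratic example on [1, 2.5], this one-extra-point estimate can be sketched directly (our own illustration; the extra point (3.0, 4.4808) and the exact value f(1.75) = 5.3218 are taken from the earlier slides):

```python
xs = [1.0, 2.0, 2.5, 3.0]
ys = [3.6788, 5.4134, 5.1303, 4.4808]

# Build the first, second, and third divided differences by hand.
f01   = (ys[1] - ys[0]) / (xs[1] - xs[0])
f12   = (ys[2] - ys[1]) / (xs[2] - xs[1])
f23   = (ys[3] - ys[2]) / (xs[3] - xs[2])
f012  = (f12 - f01) / (xs[2] - xs[0])
f123  = (f23 - f12) / (xs[3] - xs[1])
f0123 = (f123 - f012) / (xs[3] - xs[0])   # f[x_0, x_1, x_2, x_3], about 0.4006

# Error estimate for P_2 at x = 1.75: new coefficient times the node product.
x = 1.75
err_est = f0123 * (x - xs[0]) * (x - xs[1]) * (x - xs[2])   # about 0.056
# Compare the actual error: f(1.75) - P_2(1.75) = 5.3218 - 5.26735, about 0.054.
```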