METHOD
David Carothers
William Ingham
James Liu
Carter Lyons
John Marafino
G. Edgar Parker
David Pruett
Joseph Rudmin
James S. Sochacki
Debra Warne
Paul Warne
Glen Albert
Jeb Collins
Kelly Dickinson
Paul Dostert
Nick Giffen
Henry Herr
Jennifer Jonker
Aren Knutsen
Justin Lacy
Maria Lavar
Laura Marafino
Danielle Miller
James Money
Steve Penney
Daniel Robinson
David Wilk
ABSTRACT:
This paper discusses how to pose an initial value problem (IVP) ordinary differential equation (ODE)

y' = F(t, y) ; y(t_0) = y_0,

where F : R^n → R^n, in such a way that a modification of Picard's method will generate the Taylor series solution. We extend the result to the IVP partial differential equation (PDE)

u_t(t, x) = F(u, partial derivatives of u in x) ; u(t_0, x) = P(x),

where x ∈ R^m, F : R^{n+k} → R^n and P : R^m → R^n. We discuss how to use the method to determine roots of equations using equilibrium solutions of ODEs through inverse functions and Newton's method. We show that the Maclaurin polynomial can be computed completely algebraically using Picard iteration. Finally, some theorems are presented and some questions are posed about the modified Picard method.
I. INTRODUCTION
For continuous F the IVP ODE

y'(t) = F(t, y(t)) ; y(t_0) = y_0

is equivalent to the integral equation

y(t) = y_0 + ∫_{t_0}^t F(s, y(s)) ds.
That is, a solution to the IVP ODE is a solution to the integral equation and vice versa.
One can generate a Taylor series solution to the IVP ODE through

y_1(t) = y_0
y_2(t) = y_1(t) + y'(t_0)(t − t_0)
y_3(t) = y_2(t) + (y''(t_0)/2!)(t − t_0)²
⋮
y_{k+1}(t) = y_k(t) + (y^{(k)}(t_0)/k!)(t − t_0)^k.

Similarly, one can generate a solution to the integral equation using the iterative Picard process

p_1(t) = y_0
p_2(t) = y_0 + ∫_{t_0}^t F(s, p_1(s)) ds = y_0 + ∫_{t_0}^t F(s, y_0) ds
p_3(t) = y_0 + ∫_{t_0}^t F(s, p_2(s)) ds
⋮
p_{k+1}(t) = y_0 + ∫_{t_0}^t F(s, p_k(s)) ds.
Let's apply these two processes to a few ODEs and see what the similarities and differences are.
Example 1.
y' = y ; y(0) = 1

We see that y'' = y', y''' = y'', .... Therefore, we have y(0) = 1, y'(0) = y(0) = 1, y''(0) = y'(0) = 1, y'''(0) = y''(0) = 1, .... Therefore, the Maclaurin expansion gives

y(t) = 1 + t + t²/2 + t³/3! + ... = e^t.

The Picard method for the integral equation gives

p_1(t) = 1
p_2(t) = 1 + ∫_0^t p_1(s) ds = 1 + ∫_0^t 1 ds = 1 + t
p_3(t) = 1 + ∫_0^t p_2(s) ds = 1 + ∫_0^t (1 + s) ds = 1 + t + t²/2
p_4(t) = 1 + ∫_0^t p_3(s) ds = 1 + ∫_0^t (1 + s + s²/2) ds = 1 + t + t²/2 + t³/3!.
It is easily seen that the Picard process produces the Maclaurin series for et .
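The Picard process is purely mechanical, so it is easy to carry out by machine. As an illustration (the helper functions and names below are ours), the following sketch runs the iteration for y' = y, y(0) = 1 with exact rational arithmetic, a polynomial being stored as the list of its coefficients:

```python
# Picard iteration for y' = y, y(0) = 1 with exact rational arithmetic.
# A polynomial is the list of its coefficients: c[n] multiplies t^n.
from fractions import Fraction

def integrate(c):
    """Antiderivative from 0 to t of the polynomial c."""
    return [Fraction(0)] + [cn / (n + 1) for n, cn in enumerate(c)]

def picard_exp(iterations):
    p = [Fraction(1)]                  # p_1(t) = 1
    for _ in range(iterations - 1):
        q = integrate(p)               # integral of F(s, p) = p
        q[0] += 1                      # plus y_0 = 1
        p = q
    return p

p5 = picard_exp(5)   # Maclaurin coefficients of e^t through degree 4
```

The fifth iterate is exactly the degree 4 Maclaurin polynomial of e^t, as seen above.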
Example 2.

y' = ty ; y(−1) = 1

We see that y'' = y + ty', y''' = y' + y' + ty'', .... Therefore, we have y(−1) = 1, y'(−1) = (−1)y(−1) = −1, y''(−1) = y(−1) + (−1)y'(−1) = 2, y'''(−1) = 2y'(−1) + (−1)y''(−1) = −4, .... Therefore, the Taylor expansion gives

y(t) = 1 − (t + 1) + (t + 1)² − (2/3)(t + 1)³ + ... = e^{(t²−1)/2}.

The Picard iterates are

p_1(t) = 1
p_2(t) = 1 + ∫_{−1}^t s p_1(s) ds = 1 + ∫_{−1}^t s ds = 1/2 + t²/2
p_3(t) = 1 + ∫_{−1}^t s p_2(s) ds = 1 + ∫_{−1}^t s(1/2 + s²/2) ds = 5/8 + t²/4 + t⁴/8.
Example 3.

y_1' = ty_2 ; y_1(0) = 1
y_2' = y_1² − y_1y_2 ; y_2(0) = 0

In this example we have to calculate Maclaurin series for y_1 and y_2. We determine that y_1'' = y_2 + ty_2', y_1''' = y_2' + y_2' + ty_2'', ... and that y_2'' = 2y_1y_1' − y_1'y_2 − y_1y_2', y_2''' = 2(y_1')² + 2y_1y_1'' − y_1''y_2 − y_1'y_2' − y_1'y_2' − y_1y_2'', .... We see that in order to get y_1'''(0) we need to first calculate y_2''(0), and similarly for higher derivatives. We determine that y_1(0) = 1, y_2(0) = 0, y_1'(0) = 0, y_2'(0) = 1, y_1''(0) = 0, y_2''(0) = −1, y_1'''(0) = 2, y_2'''(0) = 1, .... This gives

y_1(t) = 1 + t³/3 + ...
y_2(t) = t − t²/2 + t³/6 + ....

For the Picard iterates we integrate each equation during each iteration. This gives

p_{1,1}(t) = 1 ; p_{2,1}(t) = 0
p_{1,2}(t) = 1 + ∫_0^t s p_{2,1}(s) ds = 1
p_{2,2}(t) = ∫_0^t (p_{1,1}(s)² − p_{1,1}(s)p_{2,1}(s)) ds = ∫_0^t 1 ds = t
p_{1,3}(t) = 1 + ∫_0^t s p_{2,2}(s) ds = 1 + ∫_0^t s² ds = 1 + t³/3
p_{2,3}(t) = ∫_0^t (p_{1,2}(s)² − p_{1,2}(s)p_{2,2}(s)) ds = ∫_0^t (1 − s) ds = t − t²/2.
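The same mechanical process applies to systems, integrating each equation during each iteration. A sketch for Example 3 (the helper names are ours), again with exact rational coefficient lists:

```python
# Picard iterates for the system of Example 3:
#   y1' = t*y2, y1(0) = 1 ;  y2' = y1^2 - y1*y2, y2(0) = 0.
# Polynomials are coefficient lists with exact rational entries.
from fractions import Fraction

def integ(c):
    return [Fraction(0)] + [cn / (n + 1) for n, cn in enumerate(c)]

def mul(a, b):
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def add(a, b):
    n = max(len(a), len(b))
    a = a + [Fraction(0)] * (n - len(a))
    b = b + [Fraction(0)] * (n - len(b))
    return [s + t for s, t in zip(a, b)]

t_poly = [Fraction(0), Fraction(1)]          # the polynomial t
p1, p2 = [Fraction(1)], [Fraction(0)]        # p_{1,1}, p_{2,1}
for _ in range(4):
    new1 = add([Fraction(1)], integ(mul(t_poly, p2)))
    new2 = integ(add(mul(p1, p1), [-c for c in mul(p1, p2)]))
    p1, p2 = new1, new2
```

After four iterations the iterates agree with the Maclaurin expansions above through degree 3, as Theorem 1 below predicts.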
Example 4.

y'' + y = 0 ; y(0) = 1 ; y'(0) = 1

We make the substitution y_2 = y'. Then y_2(0) = y'(0) = 1 and we get the following IVP ODE:

y' = y_2 ; y(0) = 1
y_2' = −y ; y_2(0) = 1.

We solve this system in the same manner as in Example 3.
Example 5.
y' = cos(y) + sin(t) ; y(1) = 0

We find that y'' = −sin(y)y' + cos(t), y''' = −cos(y)(y')² − sin(y)y'' − sin(t), .... This gives

y(t) = (1 + sin(1))(t − 1) + (cos(1)/2)(t − 1)² − (((1 + sin(1))² + sin(1))/6)(t − 1)³ + ...

for the Taylor series expansion. For the Picard iterates we find that
p_1(t) = 0
p_2(t) = ∫_1^t (cos(p_1(s)) + sin(s)) ds = ∫_1^t (1 + sin(s)) ds = (t − 1) + cos(1) − cos(t)
p_3(t) = ∫_1^t (cos(p_2(s)) + sin(s)) ds,

and the integral defining p_3 can no longer be evaluated in closed form. Before treating this difficulty, we revisit Example 2. Letting τ = t + 1, q_1(τ) = τ − 1 = t and q_2(τ) = y(τ − 1) turns Example 2 into the autonomous polynomial IVP ODE

q_1' = 1 ; q_1(0) = −1
q_2' = q_1q_2 ; q_2(0) = 1

with Picard iterates

q_{1,2}(τ) = −1 + ∫_0^τ 1 ds = −1 + τ
q_{2,2}(τ) = 1 + ∫_0^τ q_{1,1}(s)q_{2,1}(s) ds = 1 + ∫_0^τ (−1) ds = 1 − τ
q_{2,3}(τ) = 1 + ∫_0^τ q_{1,2}(s)q_{2,2}(s) ds = 1 + ∫_0^τ (−1 + s)(1 − s) ds = 1 − τ + τ² − (1/3)τ³.

Replacing τ by t + 1, we see that q_{2,k} is a polynomial that is similar to the Taylor polynomials for Example 2.
We now reconsider Example 5.

Example 5. (Revisited)

y' = cos(y) + sin(t) ; y(1) = 0

We first let τ = t − 1, u_1(τ) = τ + 1 and u_2(τ) = y(τ + 1). We then let u_3(τ) = cos(y(τ + 1)), u_4(τ) = sin(y(τ + 1)), u_5(τ) = sin(τ + 1) and u_6(τ) = cos(τ + 1). Substituting these into Example 5 gives the equivalent IVP ODE

u_1' = 1 ; u_1(0) = 1
u_2' = u_3 + u_5 ; u_2(0) = y(1) = 0
u_3' = −sin(y(τ + 1))u_2' = −u_4(u_3 + u_5) ; u_3(0) = cos(y(1)) = 1
u_4' = cos(y(τ + 1))u_2' = u_3(u_3 + u_5) ; u_4(0) = sin(y(1)) = 0
u_5' = cos(τ + 1) = u_6 ; u_5(0) = sin(1)
u_6' = −sin(τ + 1) = −u_5 ; u_6(0) = cos(1).

The initial conditions are at 0, and the right-hand side is autonomous and polynomial in the dependent variables. We leave it as an exercise to generate the six Picard iterates p_{j,k}(τ), j = 1, ..., 6, and to show that these polynomials are similar to the Taylor polynomials for Example 5 if we replace τ by t − 1. The polynomials approximating the solution are then given by p_{2,k}(t − 1).
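A quick numerical sanity check of this reformulation (our own illustration, using a classical fourth order Runge-Kutta step for both formulations): integrating the six-equation polynomial system in τ and the original equation in t should give the same value of y.

```python
# Cross-check of the Example 5 reformulation: integrate the six-equation
# polynomial system in tau and the original equation y' = cos(y) + sin(t)
# in t, both with classical RK4, and compare y(2) with u2(1).
import math

def rk4(f, y, h, steps):
    for _ in range(steps):
        k1 = f(y)
        k2 = f([yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f([yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f([yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
    return y

def f_orig(y):                 # state (t, y), t carried along
    t, yy = y
    return [1.0, math.cos(yy) + math.sin(t)]

def f_poly(u):                 # state (u1, ..., u6)
    u1, u2, u3, u4, u5, u6 = u
    du2 = u3 + u5
    return [1.0, du2, -u4 * du2, u3 * du2, u6, -u5]

y_end = rk4(f_orig, [1.0, 0.0], 1e-3, 1000)   # t: 1 -> 2
u_end = rk4(f_poly, [1.0, 0.0, 1.0, 0.0, math.sin(1.0), math.cos(1.0)],
            1e-3, 1000)                        # tau: 0 -> 1
```

The two answers agree to within the discretization error of the two integrations.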
Examples 1-5 and the revisits to them lead one to conjecture and prove the following theorem.

Theorem 1
Let F = (F_1, ..., F_n) : R^n → R^n be a polynomial and Y = (y_1, ..., y_n) : R → R^n. Consider the IVP ODE

y_j' = F_j(Y) ; y_j(0) = α_j ; j = 1, ..., n

and the Picard iterates

p_{j,1}(t) = α_j
p_{j,k+1}(t) = α_j + ∫_0^t F_j(P_k(s)) ds;

then p_{j,k+1} is the kth Maclaurin polynomial for y_j plus a polynomial all of whose terms have degree greater than k. (Here P_k(s) = (p_{1,k}(s), ..., p_{n,k}(s)).)
The autonomous restriction in Theorem 1, as Example 2 shows, is easily handled in applications.
We now consider an important IVP ODE to which Theorem 1 can be applied. Isaac Newton posed the following system of differential equations to model the interactive motion of N heavenly bodies:
x_i''(t) = Σ_{j≠i} m_j(x_j − x_i) / r_{i,j}^{3/2}

y_i''(t) = Σ_{j≠i} m_j(y_j − y_i) / r_{i,j}^{3/2}

z_i''(t) = Σ_{j≠i} m_j(z_j − z_i) / r_{i,j}^{3/2},

where r_{i,j} = (x_j − x_i)² + (y_j − y_i)² + (z_j − z_i)². Letting s_{i,j} = r_{i,j}^{−1/2}, this system for the N-body problem can be posed as

x_i' = u_i
y_i' = v_i
z_i' = w_i
u_i' = Σ_{j≠i} m_j(x_j − x_i)s_{i,j}³
v_i' = Σ_{j≠i} m_j(y_j − y_i)s_{i,j}³
w_i' = Σ_{j≠i} m_j(z_j − z_i)s_{i,j}³
s_{i,j}' = −(1/2)s_{i,j}³[2(x_i − x_j)(u_i − u_j) + 2(y_i − y_j)(v_i − v_j) + 2(z_i − z_j)(w_i − w_j)], i = 1, ..., N.
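As an illustrative check of this reformulation (the setup below, two bodies in the plane with G = 1 and made-up masses, is ours), one can integrate the augmented first order system and verify that the tracked variable s_{1,2} remains equal to the reciprocal distance:

```python
# Two bodies in the plane, with s12 = r^{-1/2} (r the squared distance)
# carried as an extra dependent variable of the first order system.
# Units with G = 1 and illustrative masses; RK4 integration.
import math

m1, m2 = 1.0, 0.5

def deriv(st):
    x1, y1, x2, y2, u1, v1, u2, v2, s = st
    dx, dy = x2 - x1, y2 - y1
    du, dv = u2 - u1, v2 - v1
    s3 = s ** 3
    return [u1, v1, u2, v2,
            m2 * dx * s3, m2 * dy * s3,      # body 1 acceleration
            -m1 * dx * s3, -m1 * dy * s3,    # body 2 acceleration
            -s3 * (dx * du + dy * dv)]       # s' from the chain rule

def rk4_step(f, y, h):
    k1 = f(y)
    k2 = f([yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f([yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# bodies 1 apart, transverse velocities, total momentum zero
state = [0.0, 0.0, 1.0, 0.0, 0.0, -0.3, 0.0, 0.6, 1.0]
for _ in range(2000):
    state = rk4_step(deriv, state, 1e-3)

dist = math.hypot(state[2] - state[0], state[3] - state[1])
```

Along the computed orbit s_{1,2} tracks 1/dist to within the integration error, so no square roots ever need to be evaluated inside the right-hand side.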
We now consider an example for PDEs:

u_tt = u_xx + sin(u) ; u(0, x) = e^{−(x−x_0)²} ; u_t(0, x) = 0.

Letting v = u_t, w = sin(u) and z = cos(u) gives the system

u_t = v ; u(0, x) = e^{−(x−x_0)²}
v_t = u_xx + w ; v(0, x) = 0
w_t = zv ; w(0, x) = sin(e^{−(x−x_0)²})
z_t = −wv ; z(0, x) = cos(e^{−(x−x_0)²}).

The first Picard iterates are

p_{1,1}(t) = e^{−(x−x_0)²} ; p_{2,1}(t) = 0 ; p_{3,1}(t) = sin(e^{−(x−x_0)²}) ; p_{4,1}(t) = cos(e^{−(x−x_0)²})

p_{1,2}(t) = e^{−(x−x_0)²} + ∫_0^t p_{2,1}(s) ds = e^{−(x−x_0)²} + ∫_0^t 0 ds = e^{−(x−x_0)²}

p_{2,2}(t) = 0 + ∫_0^t ((∂²/∂x²)p_{1,1}(s) + p_{3,1}(s)) ds = ((∂²/∂x²)e^{−(x−x_0)²} + sin(e^{−(x−x_0)²}))t

p_{3,2}(t) = sin(e^{−(x−x_0)²}) + ∫_0^t p_{4,1}(s)p_{2,1}(s) ds = sin(e^{−(x−x_0)²})

p_{4,2}(t) = cos(e^{−(x−x_0)²}) − ∫_0^t p_{3,1}(s)p_{2,1}(s) ds = cos(e^{−(x−x_0)²}).

We see the process is very similar to the IVP ODE case, and we can treat x as a parameter. We can also make the substitution y = u_x, giving y(0, x) = (∂/∂x)e^{−(x−x_0)²}. This leads to the system

u_t = v ; u(0, x) = e^{−(x−x_0)²}
v_t = y_x + w ; v(0, x) = 0
y_t = v_x ; y(0, x) = (∂/∂x)e^{−(x−x_0)²}
w_t = zv ; w(0, x) = sin(e^{−(x−x_0)²})
z_t = −wv ; z(0, x) = cos(e^{−(x−x_0)²}).

The Picard iterates for this system can be obtained in a manner similar to the one above.
This example leads to a theorem similar to Theorem 1.
Theorem 2
Let F = (F_1, ..., F_n) : R^m → R^n be a polynomial, U = (u_1, ..., u_n) : R → R^n and A = (A_1, ..., A_n) : R^m → R^n. Consider the IVP PDE

∂u_j/∂t (t, x) = F_j(x, U, x-partial derivatives of U) ; u_j(0, x) = A_j(x)

and the Picard iterates

v_{j,1}(t) = A_j(x), j = 1, ..., n
v_{j,k+1}(t) = A_j(x) + ∫_0^t F_j(x, V_k(s), x-partial derivatives of V_k(s)) ds;

then v_{j,k+1} is the kth Maclaurin polynomial for u_j(t, x) in t with coefficients depending on x plus a polynomial in t with coefficients depending on x all of whose terms have degree greater than k in t. (Here V_k(s) = (v_{1,k}(s), ..., v_{n,k}(s)).)
We now show how to generate the Maclaurin polynomials guaranteed by the above theorems algebraically. Consider the IVP ODE

Y' = AY + B ; Y(0) = α_0,

where A is an n × n matrix and B ∈ R^n. If

p_{k+1}(t) = Σ_{j=0}^k α_j t^j + Σ_{j=k+1}^m α_j t^j,

where the first sum is the kth Maclaurin polynomial for the solution Y, then

p_{k+2}(t) = α_0 + ∫_0^t (A(Σ_{j=0}^k α_j s^j + Σ_{j=k+1}^m α_j s^j) + B) ds
= α_0 + (B + Aα_0)t + (1/2)Aα_1 t² + ... + (1/(k+1))Aα_k t^{k+1} + ....

From this we see that

β_0 = α_0
β_1 = B + Aα_0
β_2 = (1/2)Aα_1
⋮
β_k = (1/k)Aα_{k−1}

are the coefficients for the Maclaurin polynomial of degree k for the solution Y. You should apply this technique to Example 4 to get the Maclaurin polynomials for cosine and sine.
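In the scalar case the recurrence β_1 = B + Aβ_0, β_k = (1/k)Aβ_{k−1} can be coded directly; the sketch below (the function name is ours) reproduces the coefficients 1/n! of e^t when A = 1, B = 0, α_0 = 1:

```python
# The coefficient recurrence read off above, in the scalar case
# y' = A*y + B, y(0) = alpha: beta_0 = alpha, beta_1 = B + A*beta_0,
# and beta_k = A*beta_{k-1}/k for k >= 2.
from fractions import Fraction

def maclaurin_linear(A, B, alpha, kmax):
    beta = [Fraction(alpha)]
    if kmax >= 1:
        beta.append(Fraction(B) + Fraction(A) * beta[0])
    for k in range(2, kmax + 1):
        beta.append(Fraction(A) * beta[k - 1] / k)
    return beta

coeffs = maclaurin_linear(1, 0, 1, 5)   # y' = y, y(0) = 1 -> e^t
```

For A = −1, B = 1, α_0 = 0 the same recurrence produces the coefficients of 1 − e^{−t}.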
Now let's generate the Maclaurin polynomial for the solution of the initial value problem

y'(t) = A + By + Cy² = Q(y) ; y(0) = α.

We start the Picard process with

P_1(t) = α = M_1(t)

and then define

P_{k+1}(t) = α + ∫_0^t Q(P_k(s)) ds = M_{k+1}(t) + Σ_{n=k+1}^{2^k−1} b_n t^n,

where

M_{k+1}(t) = Σ_{n=0}^k a_n t^n

is the degree k Maclaurin polynomial for the solution y to the IVODE. One can use the integral equation or the differential equation for P_{k+1}(t) to solve for a_k and b_k. Since a_0 = α and a_1 = Q(α), this only needs to be done for k ≥ 2. Using Cauchy products this leads to
P'_{k+1}(t) = Σ_{n=1}^{k} n a_n t^{n−1} + Σ_{n=k+1}^{2^k−1} n b_n t^{n−1} = Q(P_k(t)) = A + B P_k(t) + C P_k(t)².

Writing P_k(t) = Σ_{n=0}^{2^{k−1}−1} c_n t^n (so that c_n = a_n for n ≤ k − 1), the Cauchy product gives

P'_{k+1}(t) = A + B Σ_{n=0}^{2^{k−1}−1} c_n t^n + C Σ_{j=0}^{2^{k−1}−1} Σ_{n=0}^{2^{k−1}−1} c_j c_n t^{n+j}
= A + B Σ_{n=0}^{2^{k−1}−1} c_n t^n + C Σ_{n=0}^{2(2^{k−1}−1)} d_n t^n,

where d_n = Σ_{j=0}^{n} c_j c_{n−j} for 0 ≤ n ≤ 2^{k−1} − 1 and d_n = Σ_{j=n−2^{k−1}+1}^{2^{k−1}−1} c_j c_{n−j} for 2^{k−1} ≤ n ≤ 2(2^{k−1} − 1). By equating like powers of t it is straightforward to show that

k a_k = B a_{k−1} + C Σ_{j=0}^{k−1} a_j a_{k−1−j}.

That is,

M_{k+1}(t) = M_k(t) + ((B a_{k−1} + C Σ_{j=0}^{k−1} a_j a_{k−1−j}) / k) t^k.
That is, we can obtain the Maclaurin polynomial algebraically. This is easily extended to any polynomial ODE or system of polynomial ODEs. Therefore, for ODEs with polynomial right-hand side the Maclaurin coefficients of the solution can be obtained algebraically with the same algorithm, which we call the Algebraic-Maclaurin algorithm. The Cauchy result shows that this algorithm gives the analytic solution to the IVODE. One can also generate a formula for the b_n's for n > k using the above results. In a future paper, we exploit this to give convergence rates and error estimates for polynomial differential equations.
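The recurrence k a_k = B a_{k−1} + C Σ a_j a_{k−1−j} is a few lines of code; the following sketch (the function name is ours) recovers the Maclaurin coefficients of tan t from y' = 1 + y², y(0) = 0:

```python
# The Algebraic-Maclaurin recurrence for y' = A + B*y + C*y^2 = Q(y),
# y(0) = alpha: a_0 = alpha, a_1 = Q(alpha), and for k >= 2
#   k*a_k = B*a_{k-1} + C * sum_{j=0}^{k-1} a_j * a_{k-1-j}.
from fractions import Fraction

def maclaurin_riccati(A, B, C, alpha, kmax):
    A, B, C = Fraction(A), Fraction(B), Fraction(C)
    a = [Fraction(alpha)]
    a.append(A + B * a[0] + C * a[0] ** 2)
    for k in range(2, kmax + 1):
        cauchy = sum(a[j] * a[k - 1 - j] for j in range(k))
        a.append((B * a[k - 1] + C * cauchy) / k)
    return a

a = maclaurin_riccati(1, 0, 1, 0, 5)    # y' = 1 + y^2, y(0) = 0 -> tan t
```

The coefficients come out as 0, 1, 0, 1/3, 0, 2/15, which is the familiar series tan t = t + t³/3 + 2t⁵/15 + ....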
The above examples and theorems lead to many interesting questions about the relationship between Taylor polynomials and Picard polynomials, analytic functions and polynomial IVP ODEs, the symbolic and numerical calculations using Picard's method, and the best way to pose an IVP ODE or IVP PDE.
Consider, for example, the IVP ODE

x' = sin(x) ; x(0) = α. (I)

Letting x_2 = sin(x) and x_3 = cos(x) turns (I) into the polynomial system

x' = x_2 ; x(0) = α
x_2' = x_2x_3 ; x_2(0) = sin(α)
x_3' = −x_2² ; x_3(0) = cos(α). (II)
Notice that in this system one does not have to use the Maclaurin polynomial for sin x: the above algorithm will generate the coefficients of the Maclaurin polynomial for x without knowing the one for sin. We give numerical results for these two systems using fourth order Runge-Kutta and fourth, fifth and sixth order Maclaurin polynomials, but first we show that turning (I) into the polynomial system (II) also makes it easy to solve in closed form.
Using the equations for x_2' and x_3' from (II) it is seen that x_2² + x_3² = 1, so that x_3' = x_3² − 1, whose solution is

x_3 = (1 − e^{2t+2B}) / (1 + e^{2t+2B}),

and then

x_2² = 1 − x_3² = 4e^{2t+2B} / (1 + e^{2t+2B})².
Order | R-K on I    | R-K on II   | Maclaurin on II
4     | 1.44366E-09 | 1.35448E-09 | 1.21177E-09
5     |             |             | 7.6911E-12
6     |             |             | 6.14E-14
4     | 3.55865E-09 | 3.46205E-09 | 4.43075E-09
5     |             |             | 6.71039E-11
6     |             |             | 1.1714E-12
In these results it is seen that the polynomial system gives the best results and that increasing
the degree improves the results significantly.
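The Algebraic-Maclaurin computation for system (II) is again a termwise recurrence with one Cauchy product per quadratic term; the sketch below (our own illustration, in floating point since sin(α) and cos(α) are irrational) also checks the invariant x_2² + x_3² = 1 at the level of series coefficients:

```python
# Termwise Maclaurin coefficients for system (II):
#   x' = x2, x2' = x2*x3, x3' = -x2^2, x(0) = alpha.
# Coefficient n+1 of each component is (coefficient n of its right
# hand side)/(n+1), with products expanded by Cauchy products.
import math

def cauchy(a, b, n):
    return sum(a[j] * b[n - j] for j in range(n + 1))

def maclaurin_II(alpha, nmax):
    x, x2, x3 = [alpha], [math.sin(alpha)], [math.cos(alpha)]
    for n in range(nmax):
        xn = x2[n]
        p = cauchy(x2, x3, n)     # coefficient n of x2*x3
        q = cauchy(x2, x2, n)     # coefficient n of x2^2
        x.append(xn / (n + 1))
        x2.append(p / (n + 1))
        x3.append(-q / (n + 1))
    return x, x2, x3

x, x2, x3 = maclaurin_II(1.0, 8)
```

Since x_2 and x_3 are the sine and cosine of the same function, the series of x_2² + x_3² must be identically 1; the computed coefficients satisfy this to rounding error.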
Other interesting examples are

y' = y^r ; y(0) = α

(the bifurcation at r = 1 arising in the polynomially equivalent systems is particularly interesting),

y' = 1/y³ + t ; y(0) = 1/8,

and

u_t = (u_xx)^{1/3} ; u(0, x) = f(x).
As mentioned earlier, the above methods and concepts can be used to look at inverses and roots
of functions.
III. INVERSES AND ROOTS
We now consider determining the roots of a function f : R → R. That is, we want to solve f(x) = 0 or f(x) = t. First, consider

f(x(t)) = t;

then by the chain rule

f'(x)x'(t) = 1.

Letting u = [f'(x)]^{−1} gives

x' = u
u' = −u³f''(x).

Choosing x(0) = a gives the IVP ODE

x' = u ; x(0) = a
u' = −u³f''(x) ; u(0) = 1/f'(a)

for determining f^{−1}, since x = f^{−1}. If f is a polynomial we can use Theorem 1.
Example 7. Determine f^{−1} for

f(x) = ln(x).

Let v = ln(x) and w = 1/x; then v' = f'(x)x' = wx' and w' = −w²x', which leads to the polynomial IVP ODE

x' = u ; x(0) = 1
u' = u³w² ; u(0) = 1/f'(1) = 1
v' = wu ; v(0) = 0
w' = −w²u ; w(0) = 1

for determining e^x. This example hints at a method for determining a Taylor series expansion for the local inverse of a polynomial.
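Running the termwise power-series recurrence on this four-equation system recovers the coefficients 1/n! of e^t, confirming that x is the inverse of ln; a sketch (helper names are ours):

```python
# Power-series solution of the Example 7 system
#   x' = u, u' = u^3 w^2, v' = w*u, w' = -w^2 u,
# x(0) = u(0) = w(0) = 1, v(0) = 0. Coefficient n+1 of each component is
# (coefficient n of its right hand side)/(n+1); exact rational series
# truncated at degree N, with mul a truncated Cauchy product.
from fractions import Fraction

N = 6

def mul(a, b):
    return [sum(a[j] * b[n - j] for j in range(n + 1)) for n in range(N + 1)]

def series(c0):
    return [Fraction(c0)] + [Fraction(0)] * N

x, u, v, w = series(1), series(1), series(0), series(1)
for n in range(N):
    u3w2 = mul(mul(mul(u, u), u), mul(w, w))   # u^3 w^2
    wu = mul(w, u)
    w2u = mul(mul(w, w), u)
    x[n + 1] = u[n] / (n + 1)
    u[n + 1] = u3w2[n] / (n + 1)
    v[n + 1] = wu[n] / (n + 1)
    w[n + 1] = -w2u[n] / (n + 1)
```

Here x(t) comes out as 1 + t + t²/2 + ..., the series of e^t, and v(t) = t (that is, v = ln x = t, as it must be).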
Now consider Newton's method for finding a root of f,

x_{k+1} = x_k − f(x_k)/f'(x_k),

and its continuous analogue

x'(t) = −f(x(t))/f'(x(t)).

Let

f(x)/f'(x) = g(x).

Then

g'(x) = 1 − f(x)f''(x)/[f'(x)]².
Example 8.

f(x) = x cos(x) + e^x

Let v = cos(x), w = sin(x) and z = e^x; then, with u = [f'(x)]^{−1} and y = u(xv + z) = f(x)/f'(x), the preceding discussion gives

x' = −u(xv + z) = −y
u' = u³(xv + z)(z − 2w − xv) = u²y(z − 2w − xv)
v' = −wx' = wy
w' = vx' = −vy
z' = zx' = −zy
y' = u²yf''(x)f(x) − uf'(x)y = uy²(z − 2w − xv) − u(yv − xwy + zy).
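The continuous Newton flow x' = −f(x)/f'(x) for this f is easy to integrate directly; the sketch below (step size and starting point are our choices, with x(0) = −2 picked so that f' does not vanish along the path) converges to a root of f:

```python
# Continuous Newton's method x' = -f(x)/f'(x) for f(x) = x cos(x) + e^x,
# integrated with forward Euler from x(0) = -2. Along the exact flow
# f(x(t)) = f(x(0)) e^{-t}, so x(t) converges to a root of f.
import math

def f(x):
    return x * math.cos(x) + math.exp(x)

def fp(x):
    return math.cos(x) - x * math.sin(x) + math.exp(x)

x = -2.0
for _ in range(2000):
    x -= 0.01 * f(x) / fp(x)   # Euler step of size 0.01
```

After 2000 steps the residual f(x) has decayed by roughly e^{−20}, and x sits at the root of f near −1.39.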
Now consider the ODE

x'(t) = φ(x)f(x).

One can determine φ(x) so that the roots of f are stable equilibria for this ODE. For example, try φ(x) = −D(f(x)) = −f'(x) or φ(x) = −[D(f(x))]^{−1} = −[f'(x)]^{−1}.
For another ODE, consider

(d/dt) f(x(t)) = λ t^k f(x),

where λ is a constant. By adjusting λ and k during the calculation of the solution to the ODE we can speed up the convergence to the root of f.
This application of Newton's method is easily extendable to F : R^n → R^n by using the last equation, where one now must solve the linear system

DF h = g, or h = [DF]^{−1}g.
Now consider the ODE

x' = −[DF(x)]^T F(x).

The equilibrium solutions of this ODE are the zeroes of F. Note that

(d/dt)‖F(x)‖² = (d/dt)⟨F(x), F(x)⟩ = −2‖[DF(x)]^T F(x)‖².

If G = −[DF]^T F then

DG(x) = D(−[DF]^T F) = −[DF]^T DF − [D_1([DF]^T)F ... D_n([DF]^T)F],

so that

DG(z) = −[DF(z)]^T DF(z)

at a zero z of F. Note that DG(z) is a negative definite matrix whenever DF(z) is nonsingular.
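As an illustration of this flow (the particular F below is our own example), take F(x, y) = (x² + y² − 4, x − y), whose zeros are ±(√2, √2); forward Euler on x' = −[DF]^T F drives ‖F‖² to zero:

```python
# The flow x' = -[DF(x)]^T F(x) for F(x, y) = (x^2 + y^2 - 4, x - y),
# integrated with forward Euler; ||F||^2 decreases along trajectories
# and the state converges to the zero (sqrt(2), sqrt(2)) of F.
import math

def F(x, y):
    return (x * x + y * y - 4.0, x - y)

def step(x, y, h):
    f1, f2 = F(x, y)
    # DF = [[2x, 2y], [1, -1]]; g = [DF]^T F
    gx = 2.0 * x * f1 + f2
    gy = 2.0 * y * f1 - f2
    return x - h * gx, y - h * gy

x, y = 1.0, 0.5
norm0 = sum(c * c for c in F(x, y))
for _ in range(4000):
    x, y = step(x, y, 0.01)
norm_end = sum(c * c for c in F(x, y))
```

Starting from (1, 0.5) the iterates settle at (√2, √2); no Jacobian inversion is ever required, only the transpose.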
Now consider the generalization of this last ODE,

x'(t) = L F(x),

and

(d/dt)⟨F(x(t)), F(x(t))⟩ = ⟨DF(x)x'(t), F(x)⟩ + ⟨F(x), DF(x)x'(t)⟩ = ⟨DF L F, F⟩ + ⟨F, DF L F⟩.

Choose L = −g(x)[DF]^T, where g : R^n → R is positive.
The method of steepest descent

x_{k+1} = x_k − λ_k ∇f(x_k)

can be studied by looking at the ODE

x' = −∇f(x).
Consider the polynomial IVP ODE

Y' = Q(Y) ; Y(0) = (x(0), α_2, ..., α_n)

with solution Y = (x, y_2, ..., y_n), so that x is the first component of the solution. If the degree of Q is k then we write x ∈ P_{n,k}. We let P_k = ∪_n P_{n,k} and P = ∪_k P_k.
We also define the set of analytic functions by

A = {f | f : R → R analytic}.
NOTES
[1] P_m ⊆ P_k for k > m.
[2] If f ∈ P then f' ∈ P.
[3] If f, g ∈ P then f + g, f − g and fg ∈ P.
[4] If f ∈ P and f(0) ≠ 0 then 1/f ∈ P.
[11] u ∈ P if and only if there is a polynomial Q so that Q(u, u', ..., u^{(n)}) = 0. (Gröbner Bases)
[12] tan t ∈ P_2 (y' = 1 + y²) but tan t ∉ P_1. P_{1,2} = {y | y' = Ay² + By + C} (containing 1/t, tan t and tanh t).
[13] What is P_{2,2}? Consider z' = p(z), p : C → C a polynomial of degree 2.
[14] (Padé)

x(t) = (Σ_{i=0}^N a_i t^i) / (1 + Σ_{i=1}^J b_i t^i) ≈ Σ_{i=0}^K M_i t^i.
REFERENCES
[1] Parker, G. Edgar, and Sochacki, James S., Implementing the Picard Iteration, Neural, Parallel, and Scientific Computation, 4 (1996), 97-112.
[2] Parker, G. Edgar, and Sochacki, James S., A Picard-McLaurin Theorem for Initial Value PDEs, Abstract and Applied Analysis, Vol. 5, No. 1, (2000), 47-63.
[3] Pruett, C. D., Rudmin, J. W., and Lacy, J. M., An adaptive N-body algorithm of optimal order, Journal of Computational Physics, 187 (2003), 298-317.
[4] Liu, James, Sochacki, James, and Dostert, Paul, Singular perturbations and approximations for integrodifferential equations, Proceedings of the International Conference on Control and Differential Equations, Lecture Notes in Pure and Applied Mathematics, Differential Equations and Control Theory, Vol. 225, (2002).
[5] Liu, James, Parker, Edgar, Sochacki, James, and Knutsen, Aren, Approximation methods for integrodifferential equations, Proceedings of the International Conference on Dynamical Systems and Applications, III, 383-390, (2001).