
Chapter 5

Finite difference methods for 2D elliptic PDE

There are many important applications of elliptic PDE, and we now give some examples of
linear and then nonlinear equations.

The 2-D Laplace equation,

$u_{xx} + u_{yy} = 0.$   (5.1)

The solution $u$ is sometimes called a potential function, because a conservative vector
field $\mathbf{v}$ (i.e., one with $\nabla \times \mathbf{v} = 0$) is given by $\mathbf{v} = \nabla u$ (or alternatively $\mathbf{v} = -\nabla u$),
and we often also have the incompressibility constraint $\nabla \cdot \mathbf{v} = 0$, so that $\nabla^2 u = 0$.
(The gradient operator in 2-D is $\nabla = [\partial/\partial x, \partial/\partial y]^T$.)

The 2-D Poisson equation,

$u_{xx} + u_{yy} = f.$   (5.2)

The modified Helmholtz equation,

$u_{xx} + u_{yy} - \lambda^2 u = f.$   (5.3)

Many incompressible flow solvers are based on solving one or several Poisson or
Helmholtz equations, e.g., the projection method for solving the incompressible Navier-
Stokes equations for flow problems [?, ?, ?] at low or modest Reynolds number, or the
stream-vorticity formulation method for large Reynolds number [?, ?]. In particular,
there are some fast Poisson solvers available for regular domains, e.g., in Fishpack [?].

The Helmholtz equation,

$u_{xx} + u_{yy} + \lambda^2 u = f.$   (5.4)


The Helmholtz equation arises in scattering problems, where $\lambda$ is the wave number,
and the corresponding problem may not have a solution if $\lambda^2$ is an eigenvalue. Further,
the problem is hard to solve if $\lambda$ is large.

General self-adjoint elliptic PDE,

$\nabla \cdot \big(a(x,y)\nabla u(x,y)\big) + \kappa(x,y)\,u = f(x,y)$   (5.5)

or $(a u_x)_x + (a u_y)_y + \kappa(x,y)\,u = f(x,y),$   (5.6)

where we should assume that $a(x,y)$ does not change sign in the solution domain,
e.g., $a(x,y) \ge a_0 > 0$, where $a_0$ is a constant.

General elliptic PDE (diffusion and advection equations),

$a(x,y)u_{xx} + 2b(x,y)u_{xy} + c(x,y)u_{yy} + d(x,y)u_x + e(x,y)u_y + g(x,y)u(x,y) = f(x,y), \quad (x,y) \in \Omega,$

if $b^2 - ac < 0$ for all $(x,y) \in \Omega$. This equation can be re-written as

$\nabla \cdot \big(a(x,y)\nabla u(x,y)\big) + \mathbf{w}(x,y) \cdot \nabla u + c(x,y)\,u = f(x,y)$   (5.7)

after a transformation, where $\mathbf{w}(x,y)$ is a vector.

Diffusion and reaction equation,

$\nabla \cdot \big(a(x,y)\nabla u(x,y)\big) = f(u).$   (5.8)

Here $\nabla \cdot \big(a(x,y)\nabla u(x,y)\big)$ is the diffusion term, the typically nonlinear $f(u)$ is the
reaction term, and if $a(x,y) \equiv 1$ the PDE is called a nonlinear Poisson equation.

p-Laplacian equation,

$\nabla \cdot \big( |Du|^{p-2}\, Du \big) = 0, \quad p \ge 2.$   (5.9)

Minimal surface equation,

$\nabla \cdot \left( \frac{Du}{\sqrt{1 + |Du|^2}} \right) = 0.$   (5.10)


We note that the solution of an elliptic PDE ($P(\partial_x, \partial_y)u = 0$) can be regarded as the steady state solution
of a corresponding parabolic PDE ($u_t = P(\partial_x, \partial_y)u$). Furthermore, if a linear PDE is
defined on a rectangular domain, then a finite difference approximation (in each dimension)
can be used for both the equation and the boundary conditions, but the most difficult part
is to solve the resulting linear system of algebraic equations efficiently.

5.1 Boundary and compatibility conditions

Let us consider a 2-D second-order elliptic PDE on a domain $\Omega$, with boundary $\partial\Omega$ and $\mathbf{n}$
the unit normal directed according to the right-hand rule, cf. Figure 5.1. Some common
boundary conditions are as follows.
Figure 5.1: A diagram of a two-dimensional domain $\Omega$, its boundary $\partial\Omega$ and its unit normal
direction.

Dirichlet boundary condition: the solution is known on the boundary,

$u(x,y)\big|_{\partial\Omega} = u_0(x,y).$

Neumann or flux boundary condition: the normal derivative is given along the boundary,

$\frac{\partial u}{\partial n} \equiv \mathbf{n} \cdot \nabla u = u_n = u_x n_x + u_y n_y = g(x,y),$

where $\mathbf{n} = (n_x, n_y)$ is the unit normal.

Mixed boundary condition:

$\alpha(x,y)\,u(x,y) + \beta(x,y)\,\frac{\partial u}{\partial n} = \gamma(x,y)$

is given along the boundary $\partial\Omega$.

In some cases, a boundary condition is periodic, e.g., for $\Omega = [a,b] \times [c,d]$, $u(a,y) = u(b,y)$
(periodic in the x-direction) and $u(x,c) = u(x,d)$ (periodic in the y-direction).

There can be different boundary conditions on different parts of the boundary, e.g., for a
channel flow in a domain $[a,b] \times [c,d]$, the flux boundary condition may apply at $x = a$ and
$x = b$, and a no-slip boundary condition $u = 0$ at the boundaries $y = c$ and $y = d$.

For a Poisson equation with a purely Neumann boundary condition, there is no solution
unless a compatibility condition is satisfied. Consider the following problem:

$\Delta u = f(x,y), \quad (x,y) \in \Omega, \qquad \frac{\partial u}{\partial n}\Big|_{\partial\Omega} = g(x,y).$

On integrating over the domain $\Omega$,

$\iint_{\Omega} \Delta u \, dx\,dy = \iint_{\Omega} f(x,y)\, dx\,dy,$

and applying Green's theorem gives

$\iint_{\Omega} \Delta u \, dx\,dy = \oint_{\partial\Omega} \frac{\partial u}{\partial n}\, ds,$

so we have the compatibility condition

$\oint_{\partial\Omega} g\, ds = \iint_{\Omega} f(x,y)\, dx\,dy$   (5.1)
for the solution to exist. If the compatibility condition is satisfied and $\partial\Omega$ is smooth, then
the solution does exist but it is not unique. Indeed, $u(x,y) + C$ is a solution for an arbitrary
constant $C$ if $u(x,y)$ is a solution, but we can specify the solution at a particular point (e.g.
$u(x_0, y_0) = 0$) to render it well defined.
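For example, on the unit square $\Omega = (0,1) \times (0,1)$ with $f \equiv 0$, the Neumann data $g \equiv 1$ violate the compatibility condition, since $\oint_{\partial\Omega} g\, ds = 4$ while $\iint_{\Omega} f\, dx\,dy = 0$, so no solution exists; with $g \equiv 0$ the data are compatible and any constant $u$ solves the problem.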

5.2 The central finite difference method for Poisson equations

Let us now consider the following problem, involving a Poisson equation and a Dirichlet BC:

$u_{xx} + u_{yy} = f(x,y), \quad (x,y) \in \Omega = (a,b) \times (c,d),$   (5.1)

$u(x,y)\big|_{\partial\Omega} = u_0(x,y).$   (5.2)

If $f \in L^2(\Omega)$, then the solution $u(x,y) \in C^2(\Omega)$ exists and it is unique. An analytic solution
is often unavailable, and the FD procedure is explained below.

Step 1: Generate a grid. For example, a uniform Cartesian grid can be generated:

$x_i = a + i h_x, \quad i = 0, 1, 2, \ldots, m, \quad h_x = \frac{b-a}{m},$   (5.3)

$y_j = c + j h_y, \quad j = 0, 1, 2, \ldots, n, \quad h_y = \frac{d-c}{n}.$   (5.4)

In seeking an approximate solution $U_{ij}$ at the grid points $(x_i, y_j)$ where $u(x,y)$ is
unknown, there are $(m-1)(n-1)$ unknowns.

Step 2: Represent the partial derivatives with FD formulas involving the function
values at the grid points. For example, if we adopt the three-point central FD formula
for second-order partial derivatives in the x- and y-directions respectively, then

$\frac{u(x_{i-1},y_j) - 2u(x_i,y_j) + u(x_{i+1},y_j)}{(h_x)^2} + \frac{u(x_i,y_{j-1}) - 2u(x_i,y_j) + u(x_i,y_{j+1})}{(h_y)^2} = f_{ij} + T_{ij},$
$i = 1, \ldots, m-1, \quad j = 1, \ldots, n-1,$   (5.5)

where $f_{ij} = f(x_i, y_j)$. The local truncation error satisfies

$T_{ij} \approx \frac{(h_x)^2}{12}\,\frac{\partial^4 u}{\partial x^4}(x_i, y_j) + \frac{(h_y)^2}{12}\,\frac{\partial^4 u}{\partial y^4}(x_i, y_j) + O(h^4),$   (5.6)

where

$h = \max\{h_x, h_y\},$   (5.7)

and the finite difference discretization is consistent if

$\lim_{h \to 0} \|T\| = 0,$   (5.8)

so this FD discretization is consistent and second-order accurate.


We ignore the error term in Eq. (5.5) and replace the exact solution values $u(x_i, y_j)$
at the grid points with the approximate solution values $U_{ij}$ obtained from solving the
linear system of algebraic equations, i.e.,

$\frac{U_{i-1,j} + U_{i+1,j}}{(h_x)^2} + \frac{U_{i,j-1} + U_{i,j+1}}{(h_y)^2} - \left(\frac{2}{(h_x)^2} + \frac{2}{(h_y)^2}\right) U_{ij} = f_{ij},$
$i = 1, 2, \ldots, m-1, \quad j = 1, 2, \ldots, n-1.$   (5.9)

The FD equation at the grid point $(x_i, y_j)$ involves five grid points in a five-point
stencil, $(x_{i-1}, y_j)$, $(x_{i+1}, y_j)$, $(x_i, y_{j-1})$, $(x_i, y_{j+1})$, and $(x_i, y_j)$. The grid points in
the FD stencil are sometimes labeled east, north, west, south, and the center in the
literature. The center $(x_i, y_j)$ is called the master grid point, where the FD equation
is used to approximate the partial differential equation.

Solve the linear system of algebraic equations (5.9) to get the approximate values for
the solution at all of the grid points; a minimal implementation sketch is given below.

Error analysis, implementation, visualization, etc.
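To make the procedure concrete, here is a minimal MATLAB sketch, assuming $f$ and the boundary data $u_0$ are supplied as vectorized function handles (all other names and parameter values are illustrative choices); it assembles the five-point matrix in the natural row ordering discussed in the next subsection and solves the system directly.

% Minimal sketch: five-point FD solution of u_xx + u_yy = f on (a,b) x (c,d)
% with Dirichlet data u0 (f and u0 are assumed vectorized function handles).
a = 0; b = 1; c = 0; d = 1; m = 40; n = 40;
hx = (b-a)/m;  hy = (d-c)/n;
x = a + (0:m)*hx;  y = c + (0:n)*hy;                      % grid lines, boundary included
Tx = spdiags(ones(m-1,1)*[1 -2 1], -1:1, m-1, m-1)/hx^2;  % 1-D second difference in x
Ty = spdiags(ones(n-1,1)*[1 -2 1], -1:1, n-1, n-1)/hy^2;  % 1-D second difference in y
A  = kron(speye(n-1), Tx) + kron(Ty, speye(m-1));         % five-point Laplacian, natural ordering
[X, Y] = ndgrid(x(2:m), y(2:n));                          % interior nodes, i varying fastest
F = f(X, Y);
% move the known boundary values to the right-hand side
F(:,1)   = F(:,1)   - u0(x(2:m).', c*ones(m-1,1))/hy^2;   % y = c
F(:,end) = F(:,end) - u0(x(2:m).', d*ones(m-1,1))/hy^2;   % y = d
F(1,:)   = F(1,:)   - u0(a*ones(1,n-1), y(2:n))/hx^2;     % x = a
F(end,:) = F(end,:) - u0(b*ones(1,n-1), y(2:n))/hx^2;     % x = b
U = reshape(A\F(:), m-1, n-1);                            % U(i,j) approximates u(x_i, y_j)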

5.2.1 The matrix-vector form of the FD equations

In solving the algebraic system of equations by a direct method such as Gaussian elimination
or some sparse matrix technique, knowledge of the matrix-vector structure is important,
although less so for an iterative solver such as the Jacobi, Gauss-Seidel or SOR($\omega$) methods.
In the matrix-vector form $AU = F$, the unknown $U$ is a 1-D array. For the 2-D Poisson
equation the unknowns $U_{ij}$ form a 2-D array, but we can order them to obtain a 1-D array. We may
also need to re-order the FD equations, and it is common practice to use the same ordering
for the equations as for the unknown array. There are two commonly used orderings, namely
the natural ordering, a natural choice for sequential computing, and the red-black ordering,
considered to be a good choice for parallel computing.

7 8 9        4 9 5
4 5 6        7 3 8
1 2 3        1 6 2

Figure 5.2: The natural ordering (left) and the red-black ordering (right).

The natural row ordering

In the natural row ordering, we order the unknowns and equations row by row. Thus the
k-th FD equation corresponding to $(i,j)$ has the following relation:

$k = i + (m-1)(j-1), \quad i = 1, 2, \ldots, m-1, \quad j = 1, 2, \ldots, n-1,$   (5.10)

see the left diagram in Figure 5.2.


Referring to Figure 5.2, suppose that $h_x = h_y = h$ and $m = n = 4$. Then there are nine
equations and nine unknowns, so the coefficient matrix is 9 by 9. To write down the matrix-
vector form, we use a 1-D array x to express the unknowns $U_{ij}$ according to the ordering; we
should have

$x_1 = U_{11}, \quad x_2 = U_{21}, \quad x_3 = U_{31}, \quad x_4 = U_{12}, \quad x_5 = U_{22},$
$x_6 = U_{32}, \quad x_7 = U_{13}, \quad x_8 = U_{23}, \quad x_9 = U_{33}.$   (5.11)

Now if the algebraic equations are ordered in the same way as the unknowns, the nine
equations from the standard central FD scheme using the five-point stencil are

Eqn. 1: $\frac{1}{h^2}\,(-4x_1 + x_2 + x_4) = f_{11} - \frac{u_{01} + u_{10}}{h^2}$
Eqn. 2: $\frac{1}{h^2}\,(x_1 - 4x_2 + x_3 + x_5) = f_{21} - \frac{u_{20}}{h^2}$
Eqn. 3: $\frac{1}{h^2}\,(x_2 - 4x_3 + x_6) = f_{31} - \frac{u_{30} + u_{41}}{h^2}$
Eqn. 4: $\frac{1}{h^2}\,(x_1 - 4x_4 + x_5 + x_7) = f_{12} - \frac{u_{02}}{h^2}$
Eqn. 5: $\frac{1}{h^2}\,(x_2 + x_4 - 4x_5 + x_6 + x_8) = f_{22}$
Eqn. 6: $\frac{1}{h^2}\,(x_3 + x_5 - 4x_6 + x_9) = f_{32} - \frac{u_{42}}{h^2}$
Eqn. 7: $\frac{1}{h^2}\,(x_4 - 4x_7 + x_8) = f_{13} - \frac{u_{03} + u_{14}}{h^2}$
Eqn. 8: $\frac{1}{h^2}\,(x_5 + x_7 - 4x_8 + x_9) = f_{23} - \frac{u_{24}}{h^2}$
Eqn. 9: $\frac{1}{h^2}\,(x_6 + x_8 - 4x_9) = f_{33} - \frac{u_{34} + u_{43}}{h^2}$
The corresponding coefficient matrix is block tridiagonal,

$A = \frac{1}{h^2}\begin{bmatrix} B & I & 0 \\ I & B & I \\ 0 & I & B \end{bmatrix},$   (5.12)

where $I$ is the $3 \times 3$ identity matrix and

$B = \begin{bmatrix} -4 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & -4 \end{bmatrix}.$

In general, for an $n+1$ by $n+1$ grid we obtain

$A = \frac{1}{h^2}\begin{bmatrix} B & I & & \\ I & B & I & \\ & \ddots & \ddots & \ddots \\ & & I & B \end{bmatrix}, \qquad B = \begin{bmatrix} -4 & 1 & & \\ 1 & -4 & 1 & \\ & \ddots & \ddots & \ddots \\ & & 1 & -4 \end{bmatrix},$

where $A$ is an $n^2 \times n^2$ block tridiagonal matrix built from $n \times n$ blocks $B$ and $I$.

Since $A$ is symmetric negative definite (so $-A$ is symmetric positive definite) and weakly diagonally dominant, the coefficient
matrix $A$ is non-singular, and hence the solution of the system of the FD equations is unique.

The matrix-vector form is useful for understanding the structure of the linear system of
algebraic equations, and as mentioned it is required when a direct method (such as Gaussian
elimination or a sparse matrix technique) is used to solve the system. However, it can
sometimes be more convenient to use a two-parameter system, especially when an iterative
method is preferred, but also because it is more intuitive and easier to visualize the data. The eigenvalues
and eigenvectors of $A$ can also be indexed by two parameters $p$ and $k$, corresponding to
wave numbers in the x and y directions. The $(p,k)$-th eigenvector $\mathbf{u}^{p,k}$ has $n^2$ components,

$u^{p,k}_{ij} = \sin(p \pi i h)\, \sin(k \pi j h), \quad i, j = 1, 2, \ldots, n,$   (5.13)

for $p, k = 1, 2, \ldots, n$; and the corresponding $(p,k)$-th eigenvalue is

$\lambda_{p,k} = \frac{2}{h^2}\Big( \big(\cos(p \pi h) - 1\big) + \big(\cos(k \pi h) - 1\big) \Big).$   (5.14)
The least dominant (smallest magnitude) eigenvalue is

$\lambda_{1,1} = -2\pi^2 + O(h^2),$   (5.15)

obtained from the Taylor expansion of (5.14) in terms of $h \sim 1/n$; and the dominant (largest
magnitude) eigenvalue is

$\lambda_{\mathrm{int}(n/2),\,\mathrm{int}(n/2)} \approx -\frac{8}{h^2}.$   (5.16)
It is notable that the dominant and least dominant eigenvalues are twice the magnitude of
those in the 1-D representation, so we have the following estimates:

$\|A\|_2 \approx \max|\lambda_{p,k}| = \frac{8}{h^2}, \qquad \|A^{-1}\|_2 \approx \frac{1}{\min|\lambda_{p,k}|} = \frac{1}{2\pi^2},$

$\mathrm{cond}_2(A) = \|A\|_2\, \|A^{-1}\|_2 \approx \frac{4}{\pi^2 h^2} = O(n^2).$   (5.17)

Note that the condition number is about the same as that in the 1-D case; and since it can
be large, double precision is recommended to reduce the effect of round-off errors.
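As an illustrative check of these estimates (the parameters below are arbitrary choices), one can assemble $A$ with Kronecker products, using $n-1$ interior points per direction and $h = 1/n$, and compare its extreme eigenvalues and condition number with (5.14)-(5.17); the agreement is up to $O(h)$ corrections.

% Build the 2-D five-point matrix A of (5.12) via Kronecker products and compare
% with the eigenvalue and condition-number estimates (illustrative check).
n = 32;  h = 1/n;
e = ones(n-1,1);
B = spdiags([e -4*e e], -1:1, n-1, n-1);
S = spdiags([e e], [-1 1], n-1, n-1);          % couples neighboring grid rows
I = speye(n-1);
A = (kron(I, B) + kron(S, I))/h^2;
lam = eig(full(A));
[min(abs(lam))  2*pi^2]                        % least dominant magnitude vs. 2*pi^2
[max(abs(lam))  8/h^2]                         % dominant magnitude vs. 8/h^2
[cond(full(A))  4/(pi^2*h^2)]                  % condition number vs. (5.17)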

5.3 Finite difference methods for general second-order elliptic PDE

If the domain of interest is a rectangle $[a,b] \times [c,d]$ and there is no mixed derivative
term $u_{xy}$ in the PDE, then the PDE can be discretized dimension by dimension. Consider
the following example:

$\nabla \cdot (p(x,y)\nabla u) - q(x,y)\,u = f(x,y), \quad \text{or} \quad (p u_x)_x + (p u_y)_y - q u = f,$

with a Dirichlet boundary condition at $x = b$, $y = c$, and $y = d$, but a Neumann boundary
condition $u_x = g(y)$ at $x = a$.

For simplicity, let us adopt a uniform Cartesian grid again,

$x_i = a + i h_x, \quad i = 0, 1, \ldots, m, \quad h_x = \frac{b-a}{m},$

$y_j = c + j h_y, \quad j = 0, 1, \ldots, n, \quad h_y = \frac{d-c}{n}.$
If we discretize the PDE dimension by dimension, at a typical grid point $(x_i, y_j)$ the FD
equation is

$\frac{p_{i+\frac12,j} U_{i+1,j} - (p_{i+\frac12,j} + p_{i-\frac12,j}) U_{ij} + p_{i-\frac12,j} U_{i-1,j}}{(h_x)^2} + \frac{p_{i,j+\frac12} U_{i,j+1} - (p_{i,j+\frac12} + p_{i,j-\frac12}) U_{ij} + p_{i,j-\frac12} U_{i,j-1}}{(h_y)^2} - q_{ij} U_{ij} = f_{ij}$   (5.1)

for $i = 1, 2, \ldots, m-1$ and $j = 1, 2, \ldots, n-1$, where $p_{i-\frac12,j} = p(x_i - h_x/2,\, y_j)$ and so on.

For the indices $i = 0$, $j = 1, 2, \ldots, n-1$, we can use the ghost point method to deal
with the Neumann boundary condition. Using the central FD scheme for the flux boundary
condition,

$\frac{U_{1,j} - U_{-1,j}}{2h_x} = g(y_j), \quad \text{or} \quad U_{-1,j} = U_{1,j} - 2h_x\, g(y_j), \quad j = 1, 2, \ldots, n-1,$

on substituting into the FD equation at $(0,j)$, we obtain

$\frac{(p_{\frac12,j} + p_{-\frac12,j}) U_{1,j} - (p_{\frac12,j} + p_{-\frac12,j}) U_{0j}}{(h_x)^2} + \frac{p_{0,j+\frac12} U_{0,j+1} - (p_{0,j+\frac12} + p_{0,j-\frac12}) U_{0j} + p_{0,j-\frac12} U_{0,j-1}}{(h_y)^2} - q_{0j} U_{0j} = f_{0j} + \frac{2\, p_{-\frac12,j}\, g(y_j)}{h_x}.$   (5.2)

For a general second-order elliptic PDE with no mixed derivative term $u_{xy}$, i.e.,

$\nabla \cdot (p(x,y)\nabla u) + \mathbf{w} \cdot \nabla u - q(x,y)\,u = f(x,y),$

the central FD scheme can be used when $|\mathbf{w}|$ is not larger than about $1/h$, but an upwinding scheme may be
preferred to deal with the advection term $\mathbf{w} \cdot \nabla u$.

A finite difference formula for approximating the mixed derivative $u_{xy}$

If there is a mixed derivative term $u_{xy}$, we cannot proceed dimension by dimension, but a
centered FD scheme (3.5) for $u_{xy}$ can be used, i.e.,

$u_{xy}(x_i, y_j) \approx \frac{u(x_{i-1}, y_{j-1}) + u(x_{i+1}, y_{j+1}) - u(x_{i+1}, y_{j-1}) - u(x_{i-1}, y_{j+1})}{4 h_x h_y}.$   (5.3)

From a Taylor expansion at $(x,y)$, this FD formula can be shown to be consistent and
second-order accurate, and the consequent central FD formula for a
second-order linear PDE involves nine grid points. The resulting linear system of algebraic
equations for the PDE is more difficult to solve, because it is no longer symmetric nor diagonally
dominant. Furthermore, there is no known upwinding scheme to deal with a PDE with
mixed derivatives.
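As a quick sanity check (an illustrative sketch; the test function is an arbitrary choice), the formula (5.3) can be verified in MATLAB on a smooth function, and the error decreases like $O(h^2)$:

% Check the centered approximation (5.3) of u_xy on u = sin(x)*cos(2y),
% for which u_xy = -2*cos(x)*sin(2y)  (illustrative sketch).
u   = @(x,y) sin(x).*cos(2*y);
uxy = @(x,y) -2*cos(x).*sin(2*y);
xi = 0.3;  yj = 0.7;
for h = [0.1 0.05 0.025]
    approx = ( u(xi-h,yj-h) + u(xi+h,yj+h) - u(xi+h,yj-h) - u(xi-h,yj+h) )/(4*h^2);
    disp([h, abs(approx - uxy(xi,yj))])       % error ~ O(h^2)
end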

5.4 Solving the resulting linear system of algebraic equations

The linear systems of algebraic equations resulting from FD discretizations of two- or higher-
dimensional problems are often very large, e.g., the linear system from an $n \times n$ grid for an
elliptic PDE has $O(n^2)$ equations, so the coefficient matrix is $O(n^2 \times n^2)$. Even for $n = 100$,
a modest number, the $O(10^4 \times 10^4)$ matrix cannot be stored in dense form in most modern computers
if the desirable double precision is used. However, the matrix from a self-adjoint elliptic
PDE is sparse, since the non-zero entries number only about $5n^2$, so an iterative method or sparse
matrix technique may be used. For an elliptic PDE defined on a rectangular domain or a disk,
frequently used methods are listed below.

Fast Poisson solvers such as the fast Fourier transform (FFT) or cyclic reduction [?].
Usually the implementation is not so easy, and the use of existing software packages
is recommended, e.g., Fishpack, written in Fortran and freely available on Netlib.

Multigrid solvers, either structured multigrid, e.g., MGD9V [?], which uses a nine-point
stencil, or AMG (algebraic multigrid) solvers.

Sparse matrix techniques.

Simple iterative methods such as Jacobi, Gauss-Seidel, and SOR($\omega$). They are easy to
implement, but often slow to converge.

Other iterative methods such as the conjugate gradient (CG) or preconditioned conjugate
gradient (PCG) method, and the generalized minimal residual (GMRES)
or biconjugate gradient (BiCG) methods for non-symmetric systems of equations.
We refer the reader to [?] for more information and references.

An important advantage of an iterative method is that zero entries play no role in the
matrix-vector multiplications involved and there is no need to manipulate the matrix and
vector forms, as the algebraic equations in the system are used directly. Consider a linear
system of equations $A\mathbf{x} = \mathbf{b}$ where $A$ is non-singular ($\det(A) \ne 0$). If $A$ can be written as
$A = M - N$ where $M$ is an invertible matrix, then $(M - N)\mathbf{x} = \mathbf{b}$, or $M\mathbf{x} = N\mathbf{x} + \mathbf{b}$, or
$\mathbf{x} = M^{-1}N\mathbf{x} + M^{-1}\mathbf{b}$. We may iterate starting from an initial guess $\mathbf{x}^0$, via

$\mathbf{x}^{k+1} = M^{-1}N\mathbf{x}^k + M^{-1}\mathbf{b}, \quad k = 0, 1, 2, \ldots,$   (5.1)

and the iteration converges or diverges depending on the spectral radius $\rho(M^{-1}N) =
\max_i |\lambda_i(M^{-1}N)|$. (Incidentally, if $T = M^{-1}N$ is a constant matrix, the iterative method is
called stationary.)
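For concreteness, a generic MATLAB sketch of the stationary iteration (5.1) might look as follows (a hypothetical helper, assuming the caller supplies a splitting $A = M - N$; Jacobi corresponds to $M = D$ and Gauss-Seidel to $M = D - L$, as discussed below).

% Generic stationary iteration x^{k+1} = M^{-1}(N x^k + b)  (illustrative sketch).
function [x, k] = stationary_iter(M, N, b, x, tol, kmax)
  A = M - N;                                  % the original matrix
  for k = 1:kmax
      x = M \ (N*x + b);                      % one iteration
      if norm(b - A*x) <= tol*norm(b)         % stop on the relative residual
          return
      end
  end
end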

5.4.1 The Jacobi iterative method (solving the diagonals)

Solving for $x_1$ from the first equation in the algebraic system, $x_2$ from the second, and so
forth, we have

$x_1 = \frac{1}{a_{11}}\big(b_1 - a_{12}x_2 - a_{13}x_3 - \cdots - a_{1n}x_n\big)$
$x_2 = \frac{1}{a_{22}}\big(b_2 - a_{21}x_1 - a_{23}x_3 - \cdots - a_{2n}x_n\big)$
$\vdots$
$x_i = \frac{1}{a_{ii}}\big(b_i - a_{i1}x_1 - a_{i2}x_2 - \cdots - a_{i,i-1}x_{i-1} - a_{i,i+1}x_{i+1} - \cdots - a_{in}x_n\big)$
$\vdots$
$x_n = \frac{1}{a_{nn}}\big(b_n - a_{n1}x_1 - a_{n2}x_2 - \cdots - a_{n,n-1}x_{n-1}\big).$

Given some initial guess $\mathbf{x}^0$, the corresponding Jacobi iterative method is

$x_1^{k+1} = \frac{1}{a_{11}}\big(b_1 - a_{12}x_2^k - a_{13}x_3^k - \cdots - a_{1n}x_n^k\big)$
$x_2^{k+1} = \frac{1}{a_{22}}\big(b_2 - a_{21}x_1^k - a_{23}x_3^k - \cdots - a_{2n}x_n^k\big)$
$\vdots$
$x_i^{k+1} = \frac{1}{a_{ii}}\big(b_i - a_{i1}x_1^k - \cdots - a_{i,i-1}x_{i-1}^k - a_{i,i+1}x_{i+1}^k - \cdots - a_{in}x_n^k\big)$
$\vdots$
$x_n^{k+1} = \frac{1}{a_{nn}}\big(b_n - a_{n1}x_1^k - a_{n2}x_2^k - \cdots - a_{n,n-1}x_{n-1}^k\big).$

It can be written compactly as

$x_i^{k+1} = \frac{1}{a_{ii}}\Big(b_i - \sum_{j=1,\, j \ne i}^{n} a_{ij} x_j^k\Big), \quad i = 1, 2, \ldots, n,$   (5.2)

which is the basis for easy programming. Thus for the FD equations

$\frac{U_{i-1} - 2U_i + U_{i+1}}{h^2} = f_i$

with Dirichlet boundary conditions $U_0 = ua$ and $U_n = ub$, we have

$U_1^{k+1} = \frac{ua + U_2^k}{2} - \frac{h^2 f_1}{2}$
$U_i^{k+1} = \frac{U_{i-1}^k + U_{i+1}^k}{2} - \frac{h^2 f_i}{2}, \quad i = 2, 3, \ldots, n-1$
$U_{n-1}^{k+1} = \frac{U_{n-2}^k + ub}{2} - \frac{h^2 f_{n-1}}{2};$

and for a two dimensional Poisson equation,

$U_{ij}^{k+1} = \frac{U_{i-1,j}^k + U_{i+1,j}^k + U_{i,j-1}^k + U_{i,j+1}^k}{4} - \frac{h^2 f_{ij}}{4}, \quad i, j = 1, 2, \ldots, n-1.$
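A vectorized MATLAB sketch of this 2-D Jacobi sweep might read as follows (the array layout is an assumption: U is an (n+1) x (n+1) array whose first and last rows and columns hold the Dirichlet boundary values, F holds f at the same grid points, h = hx = hy, and tol and kmax are user-chosen).

% Jacobi iteration for the 2-D Poisson FD equations (illustrative sketch).
for k = 1:kmax
    Unew = U;
    Unew(2:n,2:n) = ( U(1:n-1,2:n) + U(3:n+1,2:n) ...
                    + U(2:n,1:n-1) + U(2:n,3:n+1) - h^2*F(2:n,2:n) )/4;
    if max(max(abs(Unew - U))) < tol
        U = Unew;  break
    end
    U = Unew;
end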

5.4.2 The Gauss-Seidel iterative method (solving the diagonals and using
the most updated information)

In the Jacobi iterative method, all components $x_i^{k+1}$ are updated based on $\mathbf{x}^k$, whereas in
the Gauss-Seidel iterative method the most updated information is used as follows:

$x_1^{k+1} = \frac{1}{a_{11}}\big(b_1 - a_{12}x_2^k - a_{13}x_3^k - \cdots - a_{1n}x_n^k\big)$
$x_2^{k+1} = \frac{1}{a_{22}}\big(b_2 - a_{21}x_1^{k+1} - a_{23}x_3^k - \cdots - a_{2n}x_n^k\big)$
$\vdots$
$x_i^{k+1} = \frac{1}{a_{ii}}\big(b_i - a_{i1}x_1^{k+1} - a_{i2}x_2^{k+1} - \cdots - a_{i,i-1}x_{i-1}^{k+1} - a_{i,i+1}x_{i+1}^k - \cdots - a_{in}x_n^k\big)$
$\vdots$
$x_n^{k+1} = \frac{1}{a_{nn}}\big(b_n - a_{n1}x_1^{k+1} - a_{n2}x_2^{k+1} - \cdots - a_{n,n-1}x_{n-1}^{k+1}\big),$
or in a compact form,

$x_i^{k+1} = \frac{1}{a_{ii}}\Big(b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{k+1} - \sum_{j=i+1}^{n} a_{ij} x_j^k\Big), \quad i = 1, 2, \ldots, n.$   (5.3)

Below is a pseudo-code in which the Gauss-Seidel iterative method is used to solve the
FD equations for the Poisson equation, assuming u0 is an initial guess. Here u and u0 are
(n+1) x (n+1) arrays whose first and last rows and columns hold the Dirichlet boundary
values, so the interior nodes carry the indices 2 to n:

% Give u0(i,j) and a tolerance tol, say 1e-6.
err = 1000; k = 0; u = u0;
while err > tol
  for i=2:n
    for j=2:n
      u(i,j) = ( u(i-1,j)+u(i+1,j)+u(i,j-1)+u(i,j+1) - h^2*f(i,j) )/4;
    end
  end
  err = max(max(abs(u-u0)));
  u0 = u; k = k + 1;      % next iteration if err > tol
end

Note that this pseudo-code has a framework generally suitable for iterative methods.

5.4.3 The successive over-relaxation (SOR($\omega$)) method: an extrapolation technique

Suppose $\mathbf{x}_{GS}^{k+1}$ denotes the update from $\mathbf{x}^k$ in the Gauss-Seidel method. Intuitively one may
anticipate that the update

$\mathbf{x}^{k+1} = (1 - \omega)\mathbf{x}^k + \omega\, \mathbf{x}_{GS}^{k+1},$   (5.4)

a linear combination of $\mathbf{x}^k$ and $\mathbf{x}_{GS}^{k+1}$, may give a better approximation for a suitable choice
of the relaxation parameter $\omega$. If the parameter $\omega < 1$, the combination above is an
interpolation, and if $\omega > 1$ it is an extrapolation or over-relaxation. For $\omega = 1$, we recover
the Gauss-Seidel method. For elliptic problems, we usually choose $1 \le \omega < 2$. In component
form, the SOR($\omega$) method can be represented as

$x_i^{k+1} = (1 - \omega)x_i^k + \frac{\omega}{a_{ii}}\Big(b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{k+1} - \sum_{j=i+1}^{n} a_{ij} x_j^k\Big),$   (5.5)

for $i = 1, 2, \ldots, n$. Therefore, only one line in the pseudo-code of the Gauss-Seidel method
above needs to be changed, namely,

u(i,j) = (1-omega)*u0(i,j) + omega*( u(i-1,j) + u(i+1,j) ...
         + u(i,j-1) + u(i,j+1) - h^2*f(i,j) )/4;

The convergence of the SOR($\omega$) method depends on the choice of $\omega$. For the linear
system of algebraic equations obtained from the standard five-point stencil applied to a
Poisson equation with $h = h_x = h_y = 1/n$, it can be shown that the optimal $\omega$ is

$\omega_{opt} = \frac{2}{1 + \sin(\pi/n)} \approx \frac{2}{1 + \pi/n},$   (5.6)

which approaches two as $n$ approaches infinity. Although the optimal $\omega$ is unknown for a
general elliptic PDE, we can use the optimal $\omega$ for the Poisson equation as a trial value,
and in fact larger rather than smaller values are recommended. If $\omega$ is so large that
the iterative method diverges, this is soon evident because the solution will blow up.
Incidentally, the optimal choice of $\omega$ is independent of the right-hand side.
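A minimal SOR($\omega$) sweep then looks as follows (an illustrative sketch, with the same array conventions as in the Gauss-Seidel pseudo-code above, and assuming $h = 1/n$ so that (5.6) applies).

% SOR(omega) iteration for the five-point Poisson FD equations (illustrative sketch).
omega = 2/(1 + sin(pi/n));                 % trial value from (5.6)
err = 1000; k = 0; u = u0;
while err > tol
    for i = 2:n
        for j = 2:n
            gs = ( u(i-1,j) + u(i+1,j) + u(i,j-1) + u(i,j+1) - h^2*f(i,j) )/4;
            u(i,j) = (1-omega)*u0(i,j) + omega*gs;   % relaxed Gauss-Seidel update
        end
    end
    err = max(max(abs(u - u0)));
    u0 = u;  k = k + 1;
end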

5.4.4 Convergence of stationary iterative methods

For a stationary iterative method, the following theorem provides a necessary and sufficient
condition for convergence.

Theorem 5.1 Given a stationary iteration

$\mathbf{x}^{k+1} = T\mathbf{x}^k + \mathbf{c},$   (5.7)

where $T$ is a constant matrix and $\mathbf{c}$ is a constant vector, the vector sequence $\{\mathbf{x}^k\}$ converges
for arbitrary $\mathbf{x}^0$ if and only if $\rho(T) < 1$, where $\rho(T)$ is the spectral radius of $T$ defined as

$\rho(T) = \max_i |\lambda_i(T)|,$   (5.8)

i.e., the largest magnitude of all the eigenvalues of $T$.

Another commonly used sufficient condition to check the convergence of a stationary
iterative method is given in the following theorem.

Theorem 5.2 If there is a matrix norm $\|\cdot\|$ such that $\|T\| < 1$, then the stationary iterative
method converges for an arbitrary initial guess $\mathbf{x}^0$.

We often check whether $\|T\|_p < 1$ for $p = 1, 2, \infty$, and if there is just one norm such that
$\|T\| < 1$, then the iterative method is convergent. However, if $\|T\| \ge 1$ there is no conclusion
about the convergence.

Let us now briefly discuss the convergence of the Jacobi, Gauss-Seidel, and SOR($\omega$)
methods. Given a linear system $A\mathbf{x} = \mathbf{b}$, let $D$ denote the diagonal matrix formed from the
diagonal elements of $A$, $-L$ the strictly lower triangular part of $A$, and $-U$ the strictly upper triangular
part of $A$, so that $A = D - L - U$. The iteration matrices for the three basic iterative methods are thus

Jacobi method: $T = D^{-1}(L + U)$, $\mathbf{c} = D^{-1}\mathbf{b}$.

Gauss-Seidel method: $T = (D - L)^{-1}U$, $\mathbf{c} = (D - L)^{-1}\mathbf{b}$.

SOR($\omega$) method: $T = (I - \omega D^{-1}L)^{-1}\big((1 - \omega)I + \omega D^{-1}U\big)$, $\mathbf{c} = \omega (I - \omega D^{-1}L)^{-1} D^{-1}\mathbf{b}$.

Theorem 5.3 If $A$ is strictly row diagonally dominant, i.e.,

$|a_{ii}| > \sum_{j=1,\, j \ne i}^{n} |a_{ij}|,$   (5.9)

then both the Jacobi and Gauss-Seidel iterative methods converge. The conclusion is also
true when (1) $A$ is weakly row diagonally dominant,

$|a_{ii}| \ge \sum_{j=1,\, j \ne i}^{n} |a_{ij}|;$   (5.10)

(2) the strict inequality holds for at least one row; and (3) $A$ is irreducible.

We refer the reader to [?] for the definition of irreducibility. From this theorem, both the
Jacobi and Gauss-Seidel iterative methods converge when they are applied to the linear
system of algebraic equations obtained from the standard central FD method for Poisson
equations.
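As a concrete check (an illustrative MATLAB sketch; the size n is an arbitrary choice), one can compare the spectral radii of the three iteration matrices for the 1-D Poisson matrix: the Jacobi radius is close to one, the Gauss-Seidel radius is its square, and SOR with $\omega$ from (5.6) is markedly smaller.

% Spectral radii of the Jacobi, Gauss-Seidel and SOR iteration matrices for the
% 1-D Poisson matrix tridiag(1,-2,1)/h^2 (illustrative check).
n = 31;  h = 1/(n+1);
e = ones(n,1);
A = spdiags([e -2*e e], -1:1, n, n)/h^2;
D = diag(diag(A));  L = -tril(A,-1);  U = -triu(A,1);    % A = D - L - U
Tj   = D\(L + U);                                        % Jacobi
Tgs  = (D - L)\U;                                        % Gauss-Seidel
w    = 2/(1 + sin(pi*h));                                % near-optimal omega, cf. (5.6)
Tsor = (D - w*L)\((1-w)*D + w*U);                        % SOR(omega) iteration matrix
rhos = [max(abs(eig(full(Tj)))), max(abs(eig(full(Tgs)))), max(abs(eig(full(Tsor))))]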

5.5 A finite difference method for Poisson equations in polar coordinates

If the domain of interest is a circle, an annulus, or a fan, etc. (cf. Figure 5.3 for some
illustrations), it is natural to use plane polar coordinates

$x = r\cos\theta, \quad y = r\sin\theta.$   (5.1)

The Poisson equation in these coordinates is

$\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial u}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 u}{\partial \theta^2} = f(r,\theta) \qquad \text{(conservative form)},$

or

$\frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r} + \frac{1}{r^2}\frac{\partial^2 u}{\partial \theta^2} = f(r,\theta).$

Figure 5.3: Diagrams of domains and boundary conditions that may be better solved in
polar coordinates.

For $0 < R_1 \le r \le R_2$ and $\theta_l \le \theta \le \theta_r$, where the origin is not in the domain of interest, on
a uniform grid in plane polar coordinates,

$r_i = R_1 + i\,\Delta r, \quad i = 0, 1, \ldots, m, \quad \Delta r = \frac{R_2 - R_1}{m},$

$\theta_j = \theta_l + j\,\Delta\theta, \quad j = 0, 1, \ldots, N, \quad \Delta\theta = \frac{\theta_r - \theta_l}{N},$

the discretized finite difference equation (in conservative form) is

$\frac{1}{r_i}\,\frac{r_{i-\frac12} U_{i-1,j} - (r_{i-\frac12} + r_{i+\frac12}) U_{ij} + r_{i+\frac12} U_{i+1,j}}{(\Delta r)^2} + \frac{1}{r_i^2}\,\frac{U_{i,j-1} - 2U_{ij} + U_{i,j+1}}{(\Delta\theta)^2} = f(r_i, \theta_j),$   (5.2)

where again $U_{ij}$ is an approximation to the solution $u(r_i, \theta_j)$.

5.5.1 Treating polar boundary conditions

If the origin is within the domain and $0 \le \theta < 2\pi$, we have a periodic boundary condition in
the $\theta$ direction (i.e., $u(r, \theta) = u(r, \theta + 2\pi)$), but in the radial ($r$) direction the origin $R_1 = 0$
needs special attention. There are different ways of dealing with a singularity at the origin,
but some methods lead to an undesirable structure in the coefficient matrix from the FD
equations. One approach discussed here is to use a staggered grid:

$r_i = \Big(i - \frac12\Big)\Delta r, \quad \Delta r = \frac{R_2}{m - \frac12}, \quad i = 1, 2, \ldots, m,$   (5.3)

where $r_1 = \Delta r/2$ and $r_m = R_2$. Except for $i = 1$ (i.e., at $i = 2, \ldots, m-1$), the conservative
form of the FD discretization can be used. At $i = 1$, the following non-conservative form is
used to deal with the pole singularity at $r = 0$:

$\frac{U_{0j} - 2U_{1j} + U_{2j}}{(\Delta r)^2} + \frac{1}{r_1}\,\frac{U_{2j} - U_{0j}}{2\Delta r} + \frac{1}{r_1^2}\,\frac{U_{1,j-1} - 2U_{1j} + U_{1,j+1}}{(\Delta\theta)^2} = f(r_1, \theta_j).$

Note that $r_0 = -\Delta r/2$ and $r_1 = \Delta r/2$. The coefficient of $U_{0j}$ in the above FD equation,
the approximation at the ghost point $r_0$, is $\frac{1}{(\Delta r)^2} - \frac{1}{2 r_1 \Delta r} = 0$ since $r_1 = \Delta r/2$, so it vanishes! The above FD equation simplifies to

$\frac{-2U_{1j} + U_{2j}}{(\Delta r)^2} + \frac{1}{r_1}\,\frac{U_{2j}}{2\Delta r} + \frac{1}{r_1^2}\,\frac{U_{1,j-1} - 2U_{1j} + U_{1,j+1}}{(\Delta\theta)^2} = f(r_1, \theta_j),$

and we still have a diagonally dominant system of linear algebraic equations.

5.5.2 Using the FFT to solve Poisson equations in polar coordinates

When the solution $u(r,\theta)$ is periodic in $\theta$, we can approximate $u(r,\theta)$ by the truncated
Fourier series

$u(r,\theta) = \sum_{n=-N/2}^{N/2-1} u_n(r)\, e^{in\theta},$   (5.4)


where i = 1 and un (r) is the complex Fourier coecient given by
N 1
1 
un (r) = u(r, )eink . (5.5)
N
k=0

Substituting (5.4) into the Poisson equation gives

$\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial u_n}{\partial r}\right) - \frac{n^2}{r^2} u_n = f_n(r), \quad n = -N/2, \ldots, N/2 - 1,$   (5.6)

where $f_n(r)$ is the $n$-th coefficient of the Fourier series of $f(r,\theta)$ defined as in (5.5). For each
$n$, we can discretize the above ODE in the $r$ direction using the staggered grid, to get a
tridiagonal system of equations that can be solved easily.
With a Dirichlet boundary condition $u(r_{\max}, \theta) = u^{BC}(\theta)$ at $r = r_{\max}$, the Fourier
transform

$u_n^{BC}(r_{\max}) = \frac{1}{N}\sum_{k=0}^{N-1} u^{BC}(\theta_k)\, e^{-in\theta_k}$   (5.7)

can be invoked to find $u_n^{BC}(r_{\max})$, the corresponding boundary condition for the ODE. After
the Fourier coefficients $u_n$ are obtained, the inverse Fourier transform (5.4) can be applied to
get an approximate solution to the problem.
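A structural MATLAB sketch of this FFT approach might look as follows (illustrative only; F, uBC, m, N and R2 are assumed inputs: F(i,k) = f(r_i, theta_k) for i = 1, ..., m-1, uBC(k) = u(R2, theta_k), with theta_k = 2*pi*(k-1)/N, and the staggered radial grid (5.3) is used so that r_m = R2 lies on the boundary).

% FFT in theta + tridiagonal solves in r for the Poisson equation on a disk
% 0 < r <= R2 (illustrative structural sketch).
dr = R2/(m - 1/2);   r = ((1:m)' - 1/2)*dr;      % staggered radial grid, r(m) = R2
Fhat   = fft(F, [], 2)/N;                        % Fourier coefficients, cf. (5.5)
uBChat = fft(uBC(:).')/N;
Uhat = zeros(m-1, N);
for nn = 1:N
    n = nn - 1;  if n >= N/2, n = n - N; end     % signed wave number
    i = (2:m-1)';
    lower = (r(i) - dr/2)./(r(i)*dr^2);          % conservative coefficients, cf. (5.2)
    upper = [1/dr^2 + 1/(2*r(1)*dr); (r(i) + dr/2)./(r(i)*dr^2)];
    main  = -2/dr^2 - n^2./r(1:m-1).^2;          % same form for pole row and interior rows
    T = diag(main) + diag(lower,-1) + diag(upper(1:end-1),1);
    rhs = Fhat(:,nn);
    rhs(end) = rhs(end) - upper(end)*uBChat(nn); % known Dirichlet value u_n(R2)
    Uhat(:,nn) = T \ rhs;
end
U = real( ifft(Uhat, [], 2) * N );               % u at (r_i, theta_k), i = 1, ..., m-1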