
Introductory Mathematics

Lecture Notes

Siti Rosminah Shamsuddin & Pooya Sareh


Department of Aeronautics
Imperial College London

5th October 2016


Basic Information
Course: Introductory Mathematics
Degree: MSc in Advanced Computational Methods for Aeronautics, Flow
Management and Fluid-Structure Interaction
Lecturer: Siti Rosminah Shamsuddin & Pooya Sareh
E-mail: s.shamsuddin07@imperial.ac.uk
E-mail: p.sareh@imperial.ac.uk
Textbook: Mathematical Tools for Physics
Author: James Nearing

Course Content

Lecture   Topic                                       Lecturer
1         Function Expansions & Transforms            S.-R. Shamsuddin
2-3       Vector Spaces, Vector Fields & Operators    S.-R. Shamsuddin
4-5       Linear Algebra, Matrices, Eigenvectors      S.-R. Shamsuddin
6-7       Vector Calculus Integral Theorems           P. Sareh
8-9-10    Partial Differential Equations              P. Sareh

Introductory Mathematics

0-What Is Mathematics?
Different schools of thought, particularly in philosophy, have put forth radically different
definitions of mathematics. All are controversial and there is no consensus.

Survey of leading definitions


(1) Aristotle defined mathematics as: the science of quantity. In Aristotle's classification of the sciences, discrete quantities were studied by arithmetic, continuous quantities by geometry.
(2) Auguste Comte's definition tried to explain the role of mathematics in coordinating phenomena in all other fields: the science of indirect measurement, 1851. The indirectness in Comte's definition refers to determining quantities that cannot be measured directly, such as the distances to planets or the sizes of atoms, by means of their relations to quantities that can be measured directly.
(3) Benjamin Peirce: Mathematics is the science that draws necessary conclusions,
1870.
(4) Bertrand Russell: All Mathematics is Symbolic Logic, 1903.
(5) Walter Warwick Sawyer: Mathematics is the classification and study of all possible patterns, 1955.
Most contemporary reference works define mathematics mainly by summarizing its
main topics and methods:
(6) Oxford English Dictionary: The abstract science which investigates deductively
the conclusions implicit in the elementary conceptions of spatial and numerical relations,
and which includes as its main divisions geometry, arithmetic, and algebra, 1933.
(7) American Heritage Dictionary: The study of the measurement, properties, and
relationships of quantities and sets, using numbers and symbols, 2000.

Playful, metaphorical, and poetic definitions


(1) Bertrand Russell: The subject in which we never know what we are talking about,
nor whether what we are saying is true, 1901.
(2) Charles Darwin: A mathematician is a blind man in a dark room looking for a black cat which isn't there.
(3) G. H. Hardy: A mathematician, like a painter or poet, is a maker of patterns.
If his patterns are more permanent than theirs, it is because they are made with ideas,
1940.

1-Field of Mathematics
Mathematics can, broadly speaking, be subdivided into the study of quantity, structure,
space, and change (i.e. arithmetic, algebra, geometry, and analysis). In addition to these
main concerns, there are also subdivisions dedicated to exploring links from the heart
of mathematics to other fields: to logic, to set theory (foundations), to the empirical
mathematics of the various sciences (applied mathematics), and more recently to the
rigorous study of uncertainty.
When I was an undergraduate, I learned that the majors in our mathematics department included pure mathematics, applied mathematics, statistics, and computational mathematics.
When I was a master's student, I learned that the directions within pure mathematics include topology, algebra, number theory, differential equations and dynamical systems, differential geometry, and functional analysis.

2-Mathematical awards
Arguably the most prestigious award in mathematics is the Fields Medal, established
in 1936 and now awarded every four years. The Fields Medal is often considered a
mathematical equivalent to the Nobel Prize.
The Wolf Prize in Mathematics, instituted in 1978, recognizes lifetime achievement,
and another major international award, the Abel Prize, was introduced in 2003. The
Chern Medal was introduced in 2010 to recognize lifetime achievement. These accolades
are awarded in recognition of a particular body of work, which may be innovational, or
provide a solution to an outstanding problem in an established field.
A famous list of 23 open problems, called Hilbert's problems, was compiled in 1900 by the German mathematician David Hilbert. This list achieved great celebrity among mathematicians, and at least nine of the problems have now been solved. A new list of seven important problems, titled the Millennium Prize Problems, was published in 2000. A solution to each of these problems carries a $1 million reward, and only one of them (the Riemann hypothesis) also appears among Hilbert's problems.

3-Mathematics in aeronautics
Mathematics in aeronautics includes calculus, differential equations, linear algebra, and related topics.

4-Calculus [1]
Calculus has been an integral part of man's intellectual training and heritage for the last
twenty-five hundred years. Calculus is the mathematical study of change, in the same
way that geometry is the study of shape and algebra is the study of operations and their
application to solving equations. It has two major branches, differential calculus (concerning rates of change and slopes of curves) and integral calculus (concerning accumulation of quantities and the areas under and between curves); these two branches are
related to each other by the fundamental theorem of calculus. Both branches make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit. Modern calculus is generally considered to have been developed in the 17th century by Isaac Newton and Gottfried Leibniz; today calculus has widespread uses in science, engineering, and economics, and can solve many problems that algebra alone cannot.

[1] Extracted from: Boyer, Carl Benjamin, The History of the Calculus and Its Conceptual Development, Courier Dover Publications, 1949.
Differential and integral calculus is one of the great achievements of the human mind. The fundamental definitions of the calculus, those of the derivative and the integral, are now so clearly stated in textbooks on the subject, and the operations involving them are so readily mastered, that it is easy to forget the difficulty with which these basic concepts have been developed. Frequently a clear and adequate understanding of the fundamental notions underlying a branch of knowledge has been achieved comparatively late in its development. This has never been more aptly demonstrated than in the rise of the calculus. The precision of statement and the facility of application which the rules of the calculus early afforded were in a measure responsible for the fact that mathematicians were insensible to the delicate subtleties required in the logical development of the discipline. They sought to establish the calculus in terms of the conceptions found in the traditional geometry and algebra which had been developed from spatial intuition. During the eighteenth century, however, the inherent difficulty of formulating the underlying concepts became increasingly evident, and it then became customary to speak of the "metaphysics of the calculus", thus implying the inadequacy of mathematics to give a satisfactory exposition of the bases. With the clarification of the basic notions, which, in the nineteenth century, was given in terms of precise mathematical terminology, a safe course was steered between the intuition of the concrete in nature (which may lurk in geometry and algebra) and the mysticism of imaginative speculation (which may thrive on transcendental metaphysics). The derivative has throughout its development been thus precariously situated between the scientific phenomenon of velocity and the philosophical noumenon of motion.
The history of the integral is similar. On the one hand, it has offered ample opportunity for interpretations by positivistic thought in terms either of approximations or of the compensation of errors, views based on the admitted approximative nature of scientific measurements and on the accepted doctrine of superimposed effects. On the other hand, it has at the same time been regarded by idealistic metaphysics as a manifestation that beyond the finitism of sensory percipiency there is a transcendent infinite which can be but asymptotically approached by human experience and reason. Only the precision of their mathematical definition, the work of the nineteenth century, enables the derivative and the integral to maintain their autonomous position as abstract concepts, perhaps derived from, but nevertheless independent of, both physical description and metaphysical explanation.

Function Expansions & Transforms

Many important differential equations appearing in practice cannot be solved in terms of elementary functions. Solutions can often only be expressed as infinite series, that is, infinite sums of simpler functions such as polynomials or trig functions.
We must therefore give meaning to an infinite sum of constants, using this to give meaning to an infinite sum of functions. When the functions being added are the simple powers $(x - x_0)^k$, the sum is called a Taylor (power) series, and if $x_0 = 0$ a Maclaurin series. When the functions are trig terms such as $\sin(kx)$ or $\cos(kx)$, the series might be a Fourier series: certain infinite sums of trig functions that can be made to represent arbitrary functions, even functions with discontinuities. This type of infinite series is also generalized to sums of other functions such as Legendre polynomials. Eventually, solutions of differential equations will be given in terms of infinite sums of Bessel functions, themselves infinite series.

0-Infinite Series

If $\{a_k\}$, $k = 0, 1, \dots$, is a sequence of numbers, the ordered sum of all its terms, namely,
$$\sum_{k=0}^{\infty} a_k = a_0 + a_1 + \cdots,$$
is called an infinite series.

(1) Partial sums: $s_n = \sum_{k=0}^{n} a_k$.

(2) Sum of the infinite series: the limit of the sequence of its partial sums.

(3) The geometric series $\sum_{k=0}^{\infty} a r^k$; the $p$-series $\sum_{k=1}^{\infty} 1/k^p$.

(4) Series with positive terms; series with both positive and negative terms, etc.

Methods to prove convergence:

(1) The $n$th-term rule: if $\sum_{n=0}^{\infty} a_n < \infty$, then $\lim_{n\to\infty} a_n = 0$.

(2) The ratio test: for $\sum_{k=0}^{\infty} a_k$, a series with positive terms, let $\rho = \lim_{k\to\infty} (a_{k+1}/a_k)$. If $\rho < 1$, the series converges; if $\rho > 1$, the series diverges; if $\rho = 1$, the test is inconclusive.

(3) The $n$th root test: for $\sum_{n=0}^{\infty} a_n$, a series with nonnegative terms, let $r = \lim_{n\to\infty} (a_n)^{1/n}$. If $r < 1$, the series converges; if $r > 1$, the series diverges; if $r = 1$, the test is inconclusive.

(4) An absolutely convergent series is convergent.

(5) Leibniz' theorem: for an alternating series $\sum_{k=0}^{\infty} (-1)^k a_k$, $a_k > 0$, if $a_k \geq a_{k+1}$ and $a_k \to 0$, then the series converges.
It can be shown that if $\rho = \lim_{n\to\infty}(a_{n+1}/a_n)$ exists, then $r = \lim_{n\to\infty}(a_n)^{1/n}$ also exists, and $\rho = r$. However, $r$ may exist when $\rho$ does not, so the $n$th root test is more powerful.
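As a quick numerical illustration of these ideas (a Python sketch; the helper `partial_sums` and the chosen example series are ours, not from the notes), the geometric series with $r = 1/2$ converges to $a/(1-r)$, and the ratio test applied to $a_k = 1/k!$ gives $\rho_k = 1/(k+1) \to 0 < 1$:

```python
import math

def partial_sums(terms):
    """Running partial sums s_n = a_0 + ... + a_n of a sequence of terms."""
    total, out = 0.0, []
    for a in terms:
        total += a
        out.append(total)
    return out

# Geometric series: sum_{k=0}^inf a*r^k = a/(1-r) for |r| < 1; here a = 1, r = 0.5.
geom = partial_sums(1.0 * 0.5 ** k for k in range(60))

# Ratio test for a_k = 1/k!: rho_k = a_{k+1}/a_k = 1/(k+1), which tends to 0 < 1.
ratios = [math.factorial(k) / math.factorial(k + 1) for k in range(5, 10)]
```

The partial sums of $\sum 1/k!$ indeed settle on $e$, consistent with the test's verdict.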

1-Taylor Series
Taylor polynomial approximation:
$$f(x) = p_n(x) + \frac{1}{n!} \int_a^x (x - t)^n f^{(n+1)}(t)\,dt \qquad (1)$$
where the $n$th-degree Taylor polynomial $p_n(x)$ is given by
$$p_n(x) = f(a) + \frac{f'(a)}{1!}(x - a) + \cdots + \frac{f^{(n)}(a)}{n!}(x - a)^n.$$
When $a = 0$, the series is also called a Maclaurin series.
Conditions: 1. $f(x), f^{(1)}(x), \dots, f^{(n+1)}(x)$ are continuous in a closed interval containing $x = a$. 2. $x$ is any point in the interval.
A Taylor series represents a function near a given value as an infinite sum of terms that are calculated from the values of the function's derivatives. The Taylor series of a function $f(x)$ about a value $a$ is the power series, when it is meaningful,
$$f(x) = \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!} (x - a)^k.$$
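A short Python sketch (the helper `taylor_exp` is our illustration, not from the notes) shows the Maclaurin polynomial of $e^x$ approaching the true value as the degree grows, i.e. the remainder term in (1) shrinking:

```python
import math

def taylor_exp(x, n):
    """n-th degree Maclaurin polynomial of exp: sum_{k=0}^{n} x^k / k!."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

# The error |p_n(1) - e| decreases rapidly with the degree n.
errors = [abs(taylor_exp(1.0, n) - math.e) for n in (2, 5, 10)]
```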

2-Fourier Series
A Fourier series decomposes periodic functions into a sum of sines and cosines (trigonometric terms or complex exponentials). For a function $f(x)$, periodic on $[-L, L]$, its Fourier series representation is
$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left\{ a_n \cos\left(\frac{n\pi x}{L}\right) + b_n \sin\left(\frac{n\pi x}{L}\right) \right\} \qquad (2)$$
and the Fourier coefficients $a_n$ and $b_n$ are given by
$$a_n = \frac{1}{L} \int_{-L}^{L} f(x) \cos\left(\frac{n\pi x}{L}\right) dx, \qquad b_n = \frac{1}{L} \int_{-L}^{L} f(x) \sin\left(\frac{n\pi x}{L}\right) dx. \qquad (3)$$
If $f(x)$ is an odd function then $a_n = 0$, and if $f(x)$ is an even function then $b_n = 0$.

Condition: $f(x)$ is piecewise continuous on the closed interval $[-L, L]$. A function is said to be piecewise continuous on the closed interval $[a, b]$ provided that it is continuous there, with at most a finite number of exceptions where, at worst, we would find a removable or jump discontinuity. At both a removable and a jump discontinuity, the one-sided limits $f(t^+) = \lim_{x \to t^+} f(x)$ and $f(t^-) = \lim_{x \to t^-} f(x)$ exist and are finite.

A sum of continuous and periodic functions can converge pointwise to a possibly discontinuous and nonperiodic function. This was a startling realization for the mathematicians of the early nineteenth century.
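As an illustration of a series with a discontinuous limit (a sketch assuming the standard result that the odd square wave $f(x) = \mathrm{sign}(x)$ on $[-\pi, \pi]$ has $b_n = 4/(n\pi)$ for odd $n$ and zero otherwise), the partial sums converge pointwise away from the jumps:

```python
import math

def square_wave_partial_sum(x, n_terms):
    """Partial Fourier sum of the odd square wave f(x) = sign(x) on [-pi, pi]:
    a_n = 0 (f is odd) and b_n = 4/(n*pi) for odd n, 0 for even n."""
    return sum(4 / (n * math.pi) * math.sin(n * x)
               for n in range(1, 2 * n_terms, 2))

# Away from the jumps at 0 and +-pi, the sum tends to f(x); here f(1.0) = 1.
approx = square_wave_partial_sum(1.0, 200)
```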

$f(x)$ can also be expressed as a complex Fourier series
$$f(x) = \sum_{n=-\infty}^{+\infty} c_n e^{i n \pi x / L} \qquad (4)$$
with $c_n = \frac{1}{2}(a_n - i b_n)$ for $n > 0$, $c_n = \frac{1}{2}(a_{|n|} + i b_{|n|})$ for $n < 0$, and $c_0 = \frac{1}{2}a_0$.
The associated complex Fourier coefficients are given by
$$c_m = \frac{1}{2L} \int_{-L}^{L} f(x)\, e^{-i m \pi x / L}\,dx, \qquad m = 0, \pm 1, \pm 2, \dots \qquad (5)$$

Termwise Integration and Differentiation

$$\int_{-L}^{x} f(t)\,dt = \frac{a_0}{2}(x + L) + \frac{L}{\pi} \sum_{n=1}^{\infty} \frac{1}{n} \left[ a_n \sin\left(\frac{n\pi x}{L}\right) - b_n \left( \cos\left(\frac{n\pi x}{L}\right) - \cos n\pi \right) \right]$$

$$\text{Parseval's equality:} \qquad \frac{1}{L} \int_{-L}^{L} |f(x)|^2\,dx = \frac{a_0^2}{2} + \sum_{n=1}^{\infty} \left( a_n^2 + b_n^2 \right) \qquad (6)$$

If (1) $f(x)$ is continuous and $f'(x)$ is piecewise continuous on $[-L, L]$, (2) $f(-L) = f(L)$, and (3) $f''(x)$ exists at $x$ in $(-L, L)$, then
$$f'(x) = \sum_{n=1}^{\infty} \frac{n\pi}{L} \left( -a_n \sin\frac{n\pi x}{L} + b_n \cos\frac{n\pi x}{L} \right).$$

Fourier Series of Odd and Even Functions

A function $f(x)$ defined on $[0, L]$ can be extended as an even periodic function ($b_n = 0$) with the following Fourier series representation
$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\left(\frac{n\pi x}{L}\right), \qquad a_n = \frac{2}{L} \int_0^L f(x) \cos\left(\frac{n\pi x}{L}\right) dx, \qquad (7)$$
whereas if it is extended as an odd periodic function ($a_n = 0$), its Fourier series representation is
$$f(x) = \sum_{n=1}^{\infty} b_n \sin\left(\frac{n\pi x}{L}\right), \qquad b_n = \frac{2}{L} \int_0^L f(x) \sin\left(\frac{n\pi x}{L}\right) dx. \qquad (8)$$
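A small numerical check of (8), using a hypothetical quadrature helper `sine_coeff` (trapezoid rule; our construction, not part of the notes): for the odd extension of $f(x) = x$ on $[0, \pi]$, the known coefficients are $b_n = 2(-1)^{n+1}/n$.

```python
import math

def sine_coeff(f, n, L, samples=20000):
    """b_n = (2/L) * integral_0^L f(x) sin(n*pi*x/L) dx, via the trapezoid rule."""
    h = L / samples
    xs = [i * h for i in range(samples + 1)]
    ys = [f(x) * math.sin(n * math.pi * x / L) for x in xs]
    integral = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
    return 2 / L * integral

# Odd extension of f(x) = x on [0, pi]: b_n = 2*(-1)^(n+1)/n, i.e. 2, -1, 2/3, ...
b = [sine_coeff(lambda x: x, n, math.pi) for n in (1, 2, 3)]
```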

3-Integral transform
An integral transform is any transform of the following form
$$F(m) = \int_{x_1}^{x_2} K(m, x)\, f(x)\,dx \qquad (9)$$
with the following inverse transform
$$f(x) = \int_{m_1}^{m_2} K^{-1}(m, x)\, F(m)\,dm \qquad (10)$$
Example 1: The Fourier Transform
The Fourier transform of a function $f(x)$ is defined as
$$F(m) = \int_{-\infty}^{+\infty} f(x)\, e^{-imx}\,dx \qquad (11)$$
and its inverse formula is
$$f(x) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} F(m)\, e^{imx}\,dm \qquad (12)$$
In this example, $x_1 = m_1 = -\infty$, $x_2 = m_2 = +\infty$, and $K(m, x) = e^{-imx}$.

Note: The Fourier transform is an extension of the Fourier series obtained when the period of the represented function is increased and approaches infinity. It works for non-periodic functions.
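A hedged numerical sketch of (11): we truncate the infinite integral at $\pm 10$ and use the trapezoid rule (the helper name `fourier_transform` is ours). For the Gaussian $f(x) = e^{-x^2}$, the known transform under this sign convention is $F(m) = \sqrt{\pi}\, e^{-m^2/4}$:

```python
import math, cmath

def fourier_transform(f, m, x_max=10.0, samples=4000):
    """Approximate F(m) = integral f(x) e^{-imx} dx over [-x_max, x_max]
    with the trapezoid rule (kernel convention of Eq. (11))."""
    h = 2 * x_max / samples
    xs = [-x_max + i * h for i in range(samples + 1)]
    ys = [f(x) * cmath.exp(-1j * m * x) for x in xs]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

# Gaussian test case: f(x) = exp(-x^2)  ->  F(m) = sqrt(pi) * exp(-m^2 / 4).
F1 = fourier_transform(lambda x: math.exp(-x * x), 1.0)
exact = math.sqrt(math.pi) * math.exp(-0.25)
```

Since the Gaussian is real and even, the computed transform is real up to rounding.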

Example 2: The Laplace Transform


The Laplace transform is an example of an integral transform that will convert a differential equation into an algebraic equation. There are four operational rules that relate the transform of derivatives and integrals with multiplication and division.
$$F(s) = \mathcal{L}[f(t)] = \int_0^{\infty} f(t)\, e^{-st}\,dt$$

Conditions: if $f(t)$ is piecewise continuous on $[0, \infty)$ and of exponential order ($|f(t)| \leq K e^{\alpha t}$ for some $K$ and $\alpha > 0$), then $F(s)$ exists for $s > \alpha$.
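A minimal numerical sketch (the helper `laplace` truncates the infinite integral, which is valid once the integrand has decayed; it is our construction, not a library routine), checking the standard pair $\mathcal{L}[e^{-at}] = 1/(s+a)$:

```python
import math

def laplace(f, s, t_max=60.0, samples=60000):
    """Approximate F(s) = integral_0^inf f(t) e^{-st} dt by truncating at t_max
    and applying the trapezoid rule."""
    h = t_max / samples
    ts = [i * h for i in range(samples + 1)]
    ys = [f(t) * math.exp(-s * t) for t in ts]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

# Known pair: L[e^{-a t}](s) = 1 / (s + a); here a = 1 and s = 2, so F = 1/3.
F = laplace(lambda t: math.exp(-t), 2.0)
```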

4-Galerkin Expansion
For an evolution equation
$$\phi_t = L\phi + N(\phi), \qquad L\phi_j = \lambda_j \phi_j,$$
assume that
$$\phi = \sum_j a_j(t)\,\phi_j(x);$$
the system can then be converted to
$$\frac{da_j}{dt} = \lambda_j a_j + F_j(a_l).$$

Application in rotating Couette flow: double periodic boundary conditions

The particular flow we select is a version of the famous rotating Couette flow between two co-axial cylinders. The gap between the cylinders is assumed to be much smaller than the cylinder radius. A local Cartesian coordinate system $x = (x, y, z)^T$ is oriented such that the axis of rotation is parallel to the $z$ axis, while the circumferential direction corresponds to the $x$ axis. Only flows independent of $x$ are considered.

Figure 1: The rotating Couette flow

The flow velocity is represented as $(y + u, v, w)^T$, so that $u \triangleq (u, v, w)^T$ is the velocity perturbation and $\bar u = (y, 0, 0)^T$ is the equilibrium flow. Under these assumptions, the governing equations are
$$\frac{\partial u}{\partial t} + (u \cdot \nabla) u = -\nabla p + \frac{1}{Re} \nabla^2 u + A u, \qquad \nabla \cdot u = 0, \qquad (13)$$
where $\nabla = (\partial/\partial x, \partial/\partial y, \partial/\partial z)$, and
$$A = \begin{pmatrix} 0 & 1-\Omega & 0 \\ \Omega & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$

For simplicity, the flow is assumed to be $2\pi$-periodic in $y$ and $z$; $u$ and $v$ are assumed to be odd in $y$ and even in $z$, while $w$ is assumed odd in $z$ and even in $y$:
$$u(y, z) = u(y + 2\pi, z) = u(y, z + 2\pi), \qquad p(y, z) = p(y + 2\pi, z) = p(y, z + 2\pi),$$
$$u(-y, z) = -u(y, z) = -u(y, -z), \qquad v(-y, z) = -v(y, z) = -v(y, -z), \qquad (14)$$
$$w(-y, z) = w(y, z) = -w(y, -z).$$
Preliminary analysis of the stability properties of the flow
Our interest lies in the global stability of this flow and its asymptotic convergence.
We first apply the well-known energy stability approach. Setting the Lyapunov functional as the perturbation energy $E = \|u\|^2/2$ leads to a linear eigenvalue problem. For (13)-(14), the resulting eigenfunctions $e_{n,m}(x)$ can easily be found:
$$e_{n,m}(x) = \left( \frac{\cos(mz)\sin(ny)}{\sqrt{2}\,\pi}, \;\; \frac{m\cos(mz)\sin(ny)}{\sqrt{2}\,\pi\sqrt{m^2+n^2}}, \;\; -\frac{n\sin(mz)\cos(ny)}{\sqrt{2}\,\pi\sqrt{m^2+n^2}} \right)^T \qquad (15)$$
where $n = 1, 2, \dots$ and $m = 0, 1, 2, \dots$. The corresponding eigenvalues are
$$\lambda_{n,m}(Re) = \frac{m}{2\sqrt{m^2+n^2}} - \frac{m^2+n^2}{Re}. \qquad (16)$$

Note that for this flow, conveniently, neither the eigenfunctions nor the eigenvalues depend on $\Omega$. The eigenvalue $\lambda_{1,1}$ is positive for $Re > 4\sqrt{2}$, with all other eigenvalues less than $\lambda_{1,1}$. Hence, the energy stability limit is $Re = Re_E = 4\sqrt{2}$. One can show that the flow becomes linearly unstable for $0 < \Omega < 1$ and
$$Re > Re_L = \frac{4\sqrt{2}}{2\sqrt{\Omega(1-\Omega)}}. \qquad (17)$$

Note that $Re_L = Re_E$ for $\Omega = 1/2$. Moreover, for $\Omega = 0$ and $\Omega = 1$, it can be proven that this flow is globally stable for any $Re$. For other values of $\Omega$, we solved (13)-(14) numerically for a variety of initial conditions and observed convergence to the base flow for all $Re < Re_L$. This, of course, does not eliminate all possible initial conditions, and it does not eliminate the existence of unstable solutions not tending to the base flow with time. Hence, rigorously establishing global stability in the range $Re_E < Re < Re_L$ is of interest.
Next, with the aid of SOS (sum-of-squares) optimization, we analyze the global stability of the periodic rotating Couette flow (13)-(14). To this end, we first reduce (13)-(14) to an uncertain dynamical system.

Finite-dimensional uncertain system


We represent the perturbation velocity as
$$u(x, t) = \sum_{i=1}^{k} a_i(t)\, u_i(x) + u_s(x, t), \qquad (18)$$
where the finite Galerkin basis fields $u_i$, $i = 1, \dots, k$, are an orthonormal set of solenoidal vector fields with the appropriate inner product, the residual perturbation velocity $u_s$ is solenoidal and orthogonal to all the $u_i$, and both $u_i$ and $u_s$ satisfy the boundary conditions (14) of the Couette flow. We substitute (18) into the flow equation (13) and take the inner product with each of the Galerkin basis fields $u_i$ for $i = 1, \dots, k$. After some straightforward manipulation this yields
$$\frac{da}{dt} = f(a) + \phi_a(u_s) + \phi_b(u_s, a) + \phi_c(u_s) \qquad (19)$$
where $a = (a_1, \dots, a_k)^T$, and the components of $f$, $\phi_a$, $\phi_b$, and $\phi_c$ are
$$f_i(a) = L_{ij} a_j + N_{ijk} a_j a_k, \qquad (20)$$
$$\phi_{a,i}(u_s) = \langle u_s, g_i \rangle, \qquad (21)$$
$$\phi_{b,i}(u_s, a) = \langle u_s, h_{ij} \rangle a_j, \qquad (22)$$
$$\phi_{c,i}(u_s) = \langle u_s, (u_s \cdot \nabla) u_i \rangle. \qquad (23)$$
Einstein summation notation (summation over repeated indices) is used in the above equations. The inner product $\langle w_1, w_2 \rangle$ is the integral of $w_1 \cdot w_2$ over the flow domain $V = \{(y, z) \mid 0 \leq y \leq 2\pi,\; 0 \leq z \leq 2\pi\}$. The second-order tensor $L$ and the third-order tensor $N$ are defined component-wise as
$$L_{ij} = \frac{1}{Re}\langle u_i, \nabla^2 u_j \rangle + \langle u_i, A u_j \rangle, \qquad (24)$$
$$N_{ijk} = -\langle u_i, (u_j \cdot \nabla) u_k \rangle. \qquad (25)$$
The vector fields $g_i$ and $h_{ij}$ are defined as
$$g_i = \frac{1}{Re}\nabla^2 u_i + (\bar u \cdot \nabla) u_i - (\nabla \bar u)^T u_i + A^T u_i, \qquad (26)$$
$$h_{ij} = (u_j \cdot \nabla) u_i - (\nabla u_j)^T u_i, \qquad (27)$$

where $\bar u$ is the steady flow whose stability is studied; for the periodic Couette flow, $\bar u = (y, 0, 0)^T$. The notation used can be clarified by the Einstein equivalent of (26):
$$g_i^m = \frac{1}{Re}\nabla^2 u_i^m + \bar u_k \frac{\partial u_i^m}{\partial x_k} - u_i^k \frac{\partial \bar u_k}{\partial x_m} + A_{km} u_i^k,$$
where $g_i^m$, $u_i^m$, $x^m$ are the $m$-th components of the vectors $g_i$, $u_i$ and $x$, respectively.


Equation (19) represents the evolution of the parameters $a$ of the Galerkin expansion, in which the residual $u_s$ appears but is unknown. Instead of considering in full the dynamics of the remaining unmodelled modes $u_s$, which is itself described by a system of partial differential equations, we will find bounds on the effect of $u_s$ on $a$. The evolution of $q^2 = \|u_s\|^2/2$ satisfies the following differential equation:
$$\frac{d(q^2)}{dt} = a \cdot f(a) - a \cdot \frac{da}{dt} + \psi(u_s) + \xi(u_s, a) = -a \cdot \phi(u_s, a) + \psi(u_s) + \xi(u_s, a), \qquad (28)$$
where
$$\phi(u_s, a) = \phi_a(u_s) + \phi_b(u_s, a) + \phi_c(u_s), \qquad (29)$$
$$\psi(u_s) = \frac{1}{Re}\langle u_s, \nabla^2 u_s \rangle - \langle u_s, (u_s \cdot \nabla)\bar u + (\bar u \cdot \nabla) u_s \rangle, \qquad (30)$$
$$\xi(u_s, a) = 2\langle u_s, d_j \rangle a_j, \qquad d_j = \frac{1}{Re}\nabla^2 u_j - \left( (u_j \cdot \nabla)\bar u + (\bar u \cdot \nabla) u_j \right). \qquad (31)$$
Note that the terms $a \cdot f(a)$ and $\psi(u_s)$ in (28) represent the self-contained dissipation or generation of energy depending on $a_i u_i$ and $u_s$, while the term $\xi(u_s, a)$ denotes the generation or dissipation of energy arising from the interaction of these velocity fields.
Overall, the periodic rotating Couette flow under consideration can be described by the system
$$\frac{da}{dt} = f(a) + \phi(u_s, a), \qquad (32)$$
$$\frac{d(q^2)}{dt} = -a \cdot \phi(u_s, a) + \psi(u_s) + \xi(u_s, a). \qquad (33)$$
In (32) and (33), an important fact is that the evolution of the dynamical system depends on $u_s$ via the perturbation terms $\phi(u_s, a)$, $\psi(u_s)$, and $\xi(u_s, a)$. This means that for a given $q^2 > 0$, there exist many $u_s$ satisfying $\|u_s\|^2/2 = q^2$, producing different right-hand sides of (32) and (33). In this sense, (32)-(33) is an uncertain dynamical system. The solution of this system is therefore not unique. However, if all the solutions of (32)-(33) tend to zero as time tends to infinity, then the solution of the Navier-Stokes system also tends to zero.

Application in rotating Couette flow: periodic boundary condition in z and no-slip boundary condition in y

The flow is assumed to evolve inside the domain $V := \{(y, z) \mid -\pi \leq y \leq \pi\}$, satisfying no-slip boundary conditions along $\partial V$, namely, $u(\pm\pi, z) = 0$. Further, the flow is $2\pi$-periodic in $z$ to achieve maximum simplification.
First consider the energy stability of the flow. The linear stability of the flow can be
analyzed similarly.

Energy stability of the flow


Setting the Lyapunov functional as the perturbation energy function $E = \|u\|^2/2$ leads to a linear eigenvalue problem, i.e.,
$$\lambda u = \begin{pmatrix} \frac{1}{Re}\nabla^2 & \frac{1}{2} & 0 \\[4pt] \frac{1}{2} & \frac{1}{Re}\nabla^2 & 0 \\[4pt] 0 & 0 & \frac{1}{Re}\nabla^2 \end{pmatrix} u + \begin{pmatrix} 0 \\[4pt] \partial\eta/\partial y \\[4pt] \partial\eta/\partial z \end{pmatrix} \qquad (34)$$
$$\nabla \cdot u = 0, \qquad u(\pm\pi, z) = 0, \qquad (35)$$
where $\eta$ is the Lagrange multiplier for the incompressibility condition, and $\lambda$ is the Lagrange multiplier for the unit-norm condition $\|u\| = 1$.
Considering the $2\pi$-periodic property of the flow in $z$, without loss of generality, we assume that the energy eigenfunctions $u$ and the Lagrange multiplier $\eta$ take the following form
$$u = \sum_{m=-\infty}^{\infty} \bar u_m(y)\cos(mz), \qquad v = \sum_{m=-\infty}^{\infty} \bar v_m(y)\cos(mz),$$
$$w = \sum_{m=-\infty}^{\infty} \bar w_m(y)\sin(mz), \qquad \eta = \sum_{m=-\infty}^{\infty} \bar\eta_m(y)\cos(mz). \qquad (36)$$
Substituting (36) into (34) and (35) leads to:
(i) $m \neq 0$:
$$(D^2 - m^2 - \lambda Re)^2 (D^2 - m^2)\,\bar v_m = -\frac{m^2 Re^2}{4}\,\bar v_m, \qquad (37)$$
$$\bar v_m = (D^2 - m^2 - \lambda Re)(D^2 - m^2)\,\bar v_m = D\bar v_m = 0, \qquad y = \pm\pi, \qquad (38)$$
and
$$\bar w_m = -\frac{1}{m} D\bar v_m, \qquad \bar u_m = \frac{2}{m^2 Re}(D^2 - m^2)(D^2 - m^2 - \lambda Re)\,\bar v_m,$$
where $D$ is the differential operator $d/dy$.
(ii) $m = 0$:
$$(D^2 - \lambda Re)\,\bar u_0 = 0, \qquad \bar u_0(\pm\pi) = 0, \qquad \bar v_0 = \bar w_0 = 0. \qquad (39)$$
For the latter case, it is easy to derive that
$$\bar u_0 = e_{n,0} = \left( \frac{1}{\sqrt{2}\,\pi}\cos\left(\frac{2n-1}{2}\,y\right), \; 0, \; 0 \right)^T, \qquad \lambda = -\frac{(2n-1)^2}{4Re},$$
where $n = 1, 2, \dots$ is the mode number. For the former case, solving the energy eigenvalue problem is equivalent to solving the 6th-order ODE (37) subject to the boundary conditions (38), which however is hard to solve analytically.
Notice that the onset of energy instability in Reynolds number, denoted by $Re_E$, is determined by (37)-(38) with $\lambda = 0$, i.e.,
$$(D^2 - m^2)^3\,\bar v_m = -\frac{m^2 Re_E^2}{4}\,\bar v_m, \qquad (40)$$
$$\bar v_m = (D^2 - m^2)^2\,\bar v_m = D\bar v_m = 0, \qquad y = \pm\pi. \qquad (41)$$

In the following, the exact solution of the problem (40)-(41) is exploited. The even and odd solutions of (40)-(41) can be written in the forms
$$\bar v_{m,e} = \sum_{i=1}^{3} A_i \cosh(q_i y) \qquad (42)$$
and
$$\bar v_{m,o} = \sum_{i=1}^{3} B_i \sinh(q_i y), \qquad (43)$$
respectively, where $q_i$ are the roots of the equation
$$(q^2 - m^2)^3 = -\frac{m^2 Re_E^2}{4}. \qquad (44)$$
The higher modes can of course be obtained from these solutions, but here our interest lies only in the first even and odd modes of system instability. If, in place of $Re_E$, we introduce the quantity $r$ that satisfies the relationship
$$\frac{m^2 Re_E^2}{4} = m^6 r^3, \qquad (45)$$
then the roots of (44) can be written down explicitly in the form
$$q_1 = i m (r-1)^{\frac{1}{2}}, \qquad q_2 = m(A - iB), \qquad q_3 = m(A + iB), \qquad (46)$$
where the quantities $A$ and $B$ satisfy
$$2A^2 = (1 + r + r^2)^{\frac{1}{2}} + \left(1 + \tfrac{1}{2}r\right), \qquad 2B^2 = (1 + r + r^2)^{\frac{1}{2}} - \left(1 + \tfrac{1}{2}r\right).$$
Taking these relations into account, and setting $A_1 = 1$, $A_2 = (C_1 + iC_2)/2$, $A_3 = (C_1 - iC_2)/2$, $B_1 = -i$, $B_2 = (S_1 + iS_2)/2$, $B_3 = (S_1 - iS_2)/2$ with uncertain real constants $C_i$, $S_i$, $i = 1, 2$, the solutions (42) and (43) can be written more explicitly as
$$\bar v_{m,e} = \cos(m(r-1)^{\frac{1}{2}} y) + C_1 \cosh(mAy)\cos(mBy) + C_2 \sinh(mAy)\sin(mBy) \qquad (47)$$
and
$$\bar v_{m,o} = \sin(m(r-1)^{\frac{1}{2}} y) + S_1 \sinh(mAy)\cos(mBy) + S_2 \cosh(mAy)\sin(mBy). \qquad (48)$$

Since the boundary conditions (41) are homogeneous, $Re_E$ can be obtained by solving a characteristic value problem, regarding it as a function of the mode number $m$ in the $z$-direction. More precisely, applying the boundary conditions (41) to (47) or (48) will give three linear homogeneous equations for the constants $A_i$ or $B_i$. If those constants are not to vanish identically, then the determinant of the system must vanish, yielding
$$(r-1)^{\frac{1}{2}}\tan\!\left(\pi m (r-1)^{\frac{1}{2}}\right) = \frac{(A + \sqrt{3}B)\sinh(2\pi mA) + (\sqrt{3}A - B)\sin(2\pi mB)}{\cosh(2\pi mA) + \cos(2\pi mB)} \qquad (49)$$
for the even mode, and
$$(r-1)^{\frac{1}{2}}\cot\!\left(\pi m (r-1)^{\frac{1}{2}}\right) = \frac{(A + \sqrt{3}B)\sinh(2\pi mA) - (\sqrt{3}A - B)\sin(2\pi mB)}{\cosh(2\pi mA) - \cos(2\pi mB)} \qquad (50)$$
for the odd mode. Equations (49) and (50) are transcendental equations relating $m$ and $r$, and thus can only be solved numerically. One then finds that the onset of energy instability of the flow is determined by the first even mode at $m = 1$, which corresponds to $r = 1.3441$. Considering the relationship (45), the energy stability limit is $Re = Re_E$, where
$$Re_E := 2 m^2 r^{\frac{3}{2}} = 2 \times 1.3441^{\frac{3}{2}} = 3.1166.$$
Note that, for this flow, conveniently, the energy stability limit does not depend on $\Omega$.
Linear stability of the flow
The linear stability of the flow (13) is the stability of the linearized system of (13), i.e.,
$$\frac{\partial u}{\partial t} = -\nabla p + \frac{1}{Re}\nabla^2 u + A u, \qquad \nabla \cdot u = 0, \qquad (51)$$
which can be obtained by solving the following linear eigenvalue problem
$$\lambda u = \begin{pmatrix} \frac{1}{Re}\nabla^2 & 1-\Omega & 0 \\[4pt] \Omega & \frac{1}{Re}\nabla^2 & 0 \\[4pt] 0 & 0 & \frac{1}{Re}\nabla^2 \end{pmatrix} u + \begin{pmatrix} 0 \\[4pt] \partial\eta/\partial y \\[4pt] \partial\eta/\partial z \end{pmatrix} \qquad (52)$$
$$\nabla \cdot u = 0, \qquad u(\pm\pi, z) = 0, \qquad (53)$$
where $\eta$ and $\lambda$ are given as in (34). Similarly to the preceding part, the problem is equivalent to solving
$$(D^2 - m^2 - \lambda Re)^2 (D^2 - m^2)\,\bar v_m = -m^2 Re^2\,\Omega(1-\Omega)\,\bar v_m, \qquad (54)$$
$$\bar v_m = (D^2 - m^2 - \lambda Re)(D^2 - m^2)\,\bar v_m = D\bar v_m = 0, \qquad y = \pm\pi. \qquad (55)$$
Immediately, one can show that the Couette flow becomes linearly unstable for $0 < \Omega < 1$ and
$$Re > Re_L := \frac{Re_E}{2\sqrt{\Omega(1-\Omega)}} = \frac{3.1166}{2\sqrt{\Omega(1-\Omega)}}. \qquad (56)$$
Note that $Re_L = Re_E$ for $\Omega = 1/2$.

Vector Spaces, Vector Fields &
Operators

In the context of physics we are often interested in a quantity or property which varies in a smooth and continuous way over some one-, two-, or three-dimensional region of space. This constitutes either a scalar field or a vector field, depending on the nature of the property. In this chapter we consider the relationship between a scalar field involving a variable potential and a vector field involving a field, where this means force per unit mass or charge. The properties of scalar and vector fields are described, together with how they lead to important concepts, such as that of a conservative field, and the important and useful Gauss and Stokes theorems (which will actually be presented separately). Finally, examples are given to demonstrate the ideas of vector analysis.
There are four types of functions involving scalars and vectors:

Scalar functions of a scalar, f (x)

Vector functions of a scalar, r(t)

Scalar functions of a vector, (r)

Vector functions of a vector, A(r)

1- The vector $x$ is normalised if $x^T x = 1$.
2- The vectors $x$ and $y$ are orthogonal if $x^T y = 0$.
3- The vectors $x_1, x_2, \dots, x_n$ are linearly independent if the only numbers which satisfy the equation $a_1 x_1 + a_2 x_2 + \dots + a_n x_n = 0$ are $a_1 = a_2 = \dots = a_n = 0$.
4- The vectors $x_1, x_2, \dots, x_n$ form a basis for an $n$-dimensional vector space if any vector $x$ in the vector space can be written as a linear combination of vectors in the basis, thus $x = a_1 x_1 + a_2 x_2 + \dots + a_n x_n$, where $a_1, a_2, \dots, a_n$ are scalars.

0-Scalar (inner) product of vector fields


$$\langle A, B \rangle = A \cdot B = A^T B = A_1 B_1 + A_2 B_2 + A_3 B_3 \qquad (57)$$
where $A = (A_1, A_2, A_3)$ and $B = (B_1, B_2, B_3)$.
$$A \cdot B = \|A\| \|B\| \cos\theta,$$
where $\theta$ is the angle between $A$ and $B$ satisfying $0 \leq \theta \leq \pi$.


Product laws:
(1) Commutative: $A \cdot B = B \cdot A$
(2) Associative: $mA \cdot nB = mn\,(A \cdot B)$
(3) Distributive: $A \cdot (B + C) = A \cdot B + A \cdot C$
(4) Cauchy-Schwarz inequality: $A \cdot B \leq (A \cdot A)^{\frac{1}{2}} (B \cdot B)^{\frac{1}{2}}$
Lp norms

There are many norms that could be defined for vectors. One type of norm is called an $L_p$ norm, often denoted as $\|\cdot\|_p$. For $p \geq 1$, it is defined as
$$\|x\|_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{\frac{1}{p}}, \qquad x = [x_1, \dots, x_n]^T. \qquad (58)$$
For instance,
(1) $\|x\|_1 = \sum_i |x_i|$, also called the Manhattan norm because it corresponds to sums of distances along coordinate axes, as one would travel along the rectangular street plan of Manhattan.
(2) $\|x\|_2 = \sqrt{\sum_i x_i^2}$, also called the Euclidean norm, the Euclidean length, or just the length of the vector.
(3) $\|x\|_\infty = \max_i |x_i|$, also called the max norm or the Chebyshev norm.
Some relationships between norms:
$$\|x\|_\infty \leq \|x\|_2 \leq \|x\|_1,$$
$$\|x\|_\infty \leq \|x\|_2 \leq \sqrt{n}\,\|x\|_\infty,$$
$$\|x\|_2 \leq \|x\|_1 \leq \sqrt{n}\,\|x\|_2.$$
Define the inner-product-induced norm $\|x\| = \sqrt{\langle x, x \rangle}$. Then
$$(\|x\| + \|y\|)^2 \geq \|x + y\|^2, \qquad \|x + y\|^2 = \|x\|^2 + \|y\|^2 + 2\langle x, y \rangle.$$
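A small Python sketch of (58) (the helper `norm_p` is our illustration, not from the notes), which also exercises the ordering $\|x\|_\infty \leq \|x\|_2 \leq \|x\|_1$:

```python
import math

def norm_p(x, p):
    """L_p norm of Eq. (58); p = float('inf') gives the max (Chebyshev) norm."""
    if math.isinf(p):
        return max(abs(v) for v in x)
    return sum(abs(v) ** p for v in x) ** (1 / p)

x = [3.0, -4.0, 12.0]
n1, n2, ninf = norm_p(x, 1), norm_p(x, 2), norm_p(x, float("inf"))
```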

1-Vector product of vector fields

$$A \times B = (A_2 B_3 - A_3 B_2, \; A_3 B_1 - A_1 B_3, \; A_1 B_2 - A_2 B_1) \qquad (59)$$
The cross-product of the vectors $A$ and $B$ is orthogonal to both $A$ and $B$, forms a right-handed system with $A$ and $B$, and has length given by
$$\|A \times B\| = \|A\| \|B\| \sin\theta,$$
where $\theta$ is the angle between $A$ and $B$ satisfying $0 \leq \theta \leq \pi$.


Additional properties of the cross-product:
(1) Scalar multiplication: $(aA) \times (bB) = ab\,(A \times B)$
(2) Distributive law: $A \times (B + C) = A \times B + A \times C$
(3) Anticommutation: $B \times A = -A \times B$
(4) Nonassociativity: $A \times (B \times C) = (A \cdot C)B - (A \cdot B)C$
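These properties are easy to verify numerically; a sketch with hypothetical helpers `cross` and `dot` (ours, not from the notes) checks orthogonality and the BAC-CAB identity (4):

```python
def cross(a, b):
    """Cross product of Eq. (59) for 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

A, B, C = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0), (-1.0, 0.0, 2.0)
n = cross(A, B)  # orthogonal to both A and B

# BAC-CAB rule: A x (B x C) = (A.C) B - (A.B) C
lhs = cross(A, cross(B, C))
rhs = tuple(dot(A, C) * b - dot(A, B) * c for b, c in zip(B, C))
```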

2-Gradient of a scalar field

$$\mathrm{grad}\,\phi = \nabla\phi = \left( \frac{\partial\phi}{\partial x}, \frac{\partial\phi}{\partial y}, \frac{\partial\phi}{\partial z} \right) \qquad (60)$$
If we consider a surface in 3D space with $\phi(r) = \mathrm{const}$, then the direction normal (i.e. perpendicular) to the surface at the point $r$ is the direction of $\mathrm{grad}\,\phi$. The greatest rate of change of $\phi(r)$ is the magnitude of $\mathrm{grad}\,\phi$.
In physical situations we may have a potential, $\phi$, which varies over a particular region and thus constitutes a field $E$, satisfying
$$E = -\nabla\phi = -\left( \frac{\partial\phi}{\partial x}, \frac{\partial\phi}{\partial y}, \frac{\partial\phi}{\partial z} \right)$$

Example: As an example we calculate the electric field at the point $(x, y, z)$ due to a charge $q_1$ at $(2, 0, 0)$ and a charge $q_2$ at $(-2, 0, 0)$, where charges are in coulombs and distances in metres. The potential at the point $(x, y, z)$ is
$$\phi(x, y, z) = \frac{q_1}{4\pi\varepsilon_0 \{(2 - x)^2 + y^2 + z^2\}^{1/2}} + \frac{q_2}{4\pi\varepsilon_0 \{(2 + x)^2 + y^2 + z^2\}^{1/2}}.$$
As a result, the components of the field are
$$E_x = -\frac{q_1 (2 - x)}{4\pi\varepsilon_0 \{(2 - x)^2 + y^2 + z^2\}^{3/2}} + \frac{q_2 (2 + x)}{4\pi\varepsilon_0 \{(2 + x)^2 + y^2 + z^2\}^{3/2}}$$
$$E_y = \frac{q_1 y}{4\pi\varepsilon_0 \{(2 - x)^2 + y^2 + z^2\}^{3/2}} + \frac{q_2 y}{4\pi\varepsilon_0 \{(2 + x)^2 + y^2 + z^2\}^{3/2}}$$
$$E_z = \frac{q_1 z}{4\pi\varepsilon_0 \{(2 - x)^2 + y^2 + z^2\}^{3/2}} + \frac{q_2 z}{4\pi\varepsilon_0 \{(2 + x)^2 + y^2 + z^2\}^{3/2}}.$$
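A numerical cross-check of this example (a sketch; the constant `k` stands in for $1/(4\pi\varepsilon_0)$ and the charge values are our arbitrary choices): central finite differences of the potential reproduce $E = -\nabla\phi$:

```python
def phi(x, y, z, q1=1.0, q2=-1.0, k=1.0):
    """Two-charge potential; q1 at (2,0,0), q2 at (-2,0,0), k in place of 1/(4*pi*eps0)."""
    r1 = ((2 - x) ** 2 + y ** 2 + z ** 2) ** 0.5
    r2 = ((2 + x) ** 2 + y ** 2 + z ** 2) ** 0.5
    return k * q1 / r1 + k * q2 / r2

def E_exact(x, y, z, q1=1.0, q2=-1.0, k=1.0):
    """Field components from the closed-form expressions above (E = -grad phi)."""
    d1 = ((2 - x) ** 2 + y ** 2 + z ** 2) ** 1.5
    d2 = ((2 + x) ** 2 + y ** 2 + z ** 2) ** 1.5
    return (-k * q1 * (2 - x) / d1 + k * q2 * (2 + x) / d2,
            k * q1 * y / d1 + k * q2 * y / d2,
            k * q1 * z / d1 + k * q2 * z / d2)

def E_numeric(x, y, z, h=1e-5):
    """Central-difference approximation of -grad phi."""
    return (-(phi(x + h, y, z) - phi(x - h, y, z)) / (2 * h),
            -(phi(x, y + h, z) - phi(x, y - h, z)) / (2 * h),
            -(phi(x, y, z + h) - phi(x, y, z - h)) / (2 * h))

p = (0.5, 1.0, -0.7)
```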

3-Divergence of a vector field

$$\mathrm{div}\,A = \nabla \cdot A = \frac{\partial A_1}{\partial x} + \frac{\partial A_2}{\partial y} + \frac{\partial A_3}{\partial z} \qquad (61)$$
The value of the scalar $\mathrm{div}\,A$ at point $r$ gives the rate at which the material is expanding or flowing away from the point $r$ (outward flux per unit volume).
Theorem involving divergence
The divergence theorem relates a volume integral and a surface integral within a vector field. It states that
$$\oint_{\partial\Omega} F \cdot dA = \int_{\Omega} \nabla \cdot F\,dV, \qquad (62)$$
where $\Omega$ represents the overall volume domain and $\partial\Omega$ denotes its total surface boundary.

4-Curl of a vector field

$$\mathrm{curl}\,A \equiv \nabla \times A = \left( \frac{\partial A_3}{\partial y} - \frac{\partial A_2}{\partial z}, \; \frac{\partial A_1}{\partial z} - \frac{\partial A_3}{\partial x}, \; \frac{\partial A_2}{\partial x} - \frac{\partial A_1}{\partial y} \right) \qquad (63)$$
where $A = (A_1, A_2, A_3)$. The vector $\mathrm{curl}\,A$ at point $r$ gives the local rotation (or vorticity) of the material at point $r$. The direction of $\mathrm{curl}\,A$ is the axis of rotation, and half the magnitude of $\mathrm{curl}\,A$ is the rate of rotation or angular frequency of the rotation.
Theorem involving curl of vectors
Stokes's theorem: we consider a surface $S$ that has a closed non-intersecting boundary, $C$, with the topology of, say, one half of a tennis ball. Stokes's theorem states that for a vector field $F$ within which the surface is situated,
$$\oint_C F \cdot dr = \int_S (\nabla \times F) \cdot dA. \qquad (64)$$

6-Repeated operations
Note that grad must operate on a scalar field and gives a vector field in return, div
operates on a vector field and gives a scalar field in return, and curl operates on a vector
field and gives a vector field in return.

$$\operatorname{div} \operatorname{grad} \phi = \nabla \cdot \nabla \phi = \nabla^2 \phi = \frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi}{\partial y^2} + \frac{\partial^2 \phi}{\partial z^2} \tag{65}$$
$$\operatorname{curl} \operatorname{grad} \phi = \nabla \times \nabla \phi = 0 \tag{66}$$
$$\operatorname{div} \operatorname{curl} \mathbf{A} = \nabla \cdot (\nabla \times \mathbf{A}) = 0 \tag{67}$$
$$\operatorname{curl} \operatorname{curl} \mathbf{A} = \operatorname{grad} \operatorname{div} \mathbf{A} - \nabla^2 \mathbf{A} \tag{68}$$
$$\nabla^2 \mathbf{A} = \frac{\partial^2 \mathbf{A}}{\partial x^2} + \frac{\partial^2 \mathbf{A}}{\partial y^2} + \frac{\partial^2 \mathbf{A}}{\partial z^2} \tag{69}$$
where ∇² = ∇ · ∇ is the very important Laplacian operator.
Other forms of ∇² in other coordinate systems are as follows.
(1) Spherical polar coordinates:
$$\nabla^2 = \frac{1}{r^2} \frac{\partial}{\partial r} \left(r^2 \frac{\partial}{\partial r}\right) + \frac{1}{r^2 \sin\theta} \frac{\partial}{\partial \theta} \left(\sin\theta \frac{\partial}{\partial \theta}\right) + \frac{1}{r^2 \sin^2\theta} \frac{\partial^2}{\partial \varphi^2}$$
(2) Two-dimensional polar coordinates:
$$\nabla^2 = \frac{\partial^2}{\partial r^2} + \frac{1}{r} \frac{\partial}{\partial r} + \frac{1}{r^2} \frac{\partial^2}{\partial \theta^2}$$
(3) Cylindrical coordinates:
$$\nabla^2 = \frac{\partial^2}{\partial r^2} + \frac{1}{r} \frac{\partial}{\partial r} + \frac{1}{r^2} \frac{\partial^2}{\partial \theta^2} + \frac{\partial^2}{\partial z^2}$$

7-Product rules
$$\operatorname{grad}(\phi\psi) = \phi \operatorname{grad} \psi + \psi \operatorname{grad} \phi \tag{70}$$
$$\operatorname{div}(\phi \mathbf{A}) = \phi \operatorname{div} \mathbf{A} + \mathbf{A} \cdot \operatorname{grad} \phi \tag{71}$$
$$\operatorname{curl}(\phi \mathbf{A}) = \phi \operatorname{curl} \mathbf{A} + (\operatorname{grad} \phi) \times \mathbf{A} \tag{72}$$
$$\operatorname{div}(\mathbf{A} \times \mathbf{B}) = \mathbf{B} \cdot \operatorname{curl} \mathbf{A} - \mathbf{A} \cdot \operatorname{curl} \mathbf{B} \tag{73}$$

Linear Algebra, Matrices,
Eigenvectors

In many practical systems there naturally arises a set of quantities that can conveniently
be represented as a two-dimensional array, referred to as a matrix. If matrices were
simply a way of representing arrays of numbers then they would have only a marginal
utility as a means of visualizing data. However, a whole branch of mathematics has
evolved, involving the manipulation of matrices, which has become a powerful tool for
the solution of many problems.
For instance, consider the set of n linear equations with n unknowns

a11 Y1 + a12 Y2 + ... + a1n Yn = 0
a21 Y1 + a22 Y2 + ... + a2n Yn = 0

(74)
..........................

an1 Y1 + an2 Y2 + ... + ann Yn = 0

The necessary and sufficient condition for the set to have a non-trivial solution (other
than Y1 = Y2 = ... = Yn = 0) is that the determinant of the array of coefficients is zero:
det(A) = 0.
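This condition is easy to explore numerically. The sketch below, assuming NumPy is available, builds a deliberately singular matrix (its choice and the tolerance are illustrative, not part of the notes) and extracts a non-trivial solution of the homogeneous system from the SVD: right-singular vectors whose singular values vanish span the null space of A.

```python
import numpy as np

# Singular 3x3 matrix (row 3 = row 1 + row 2), so det(A) = 0 and a
# non-trivial solution of A @ y = 0 exists.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])

print(np.linalg.det(A))  # ~ 0 up to round-off

# Right-singular vectors with (numerically) zero singular value span
# the null space {y : A @ y = 0}.
_, s, Vt = np.linalg.svd(A)
y = Vt[s < 1e-10][0]          # one non-trivial solution
print(np.allclose(A @ y, 0))  # True
```

For a non-singular matrix the mask `s < 1e-10` would be empty and only the trivial solution would remain.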

0-Basic definitions and notation


(1) Transpose: A^T = (a_{ji}); (A + B)^T = A^T + B^T. A symmetric matrix is equal to
its transpose, A = A^T.
(2) Diagonal matrices:
$$\operatorname{diag}(d_1, d_2, \dots, d_n) = \begin{pmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & & \vdots \\ \vdots & & \ddots & \\ 0 & \cdots & & d_n \end{pmatrix}$$
$$A_1 \oplus \dots \oplus A_k = \operatorname{diag}(A_1, \dots, A_k).$$
(3) Trace: $\operatorname{tr}(A) = \sum_i a_{ii}$.
$$\operatorname{tr}(A) = \operatorname{tr}(A^T), \quad \operatorname{tr}(cA) = c\,\operatorname{tr}(A), \quad \operatorname{tr}(A + B) = \operatorname{tr}(A) + \operatorname{tr}(B).$$


(4) Determinant: $|A| = \sum_{j=1}^{n} a_{ij}\, a_{(ij)}$, where $a_{(ij)} = (-1)^{i+j} |A_{-(i)(j)}|$, with $A_{-(i)(j)}$
denoting the submatrix that is formed from A by removing the ith row and the jth
column.
|A| = |AT |, |cA| = cn |A|

(5) Adjugate: $\operatorname{adj}(A) = (a_{(ji)}) = (a_{(ij)})^T$.
$$A \operatorname{adj}(A) = \operatorname{adj}(A) A = |A| I.$$

1-Multiplication of matrices and multiplication of vectors and matrices
(1) matrix multiplication

associative : A(BC) = (AB)C,

distributive over addition : A(B + C) = AB + AC, (B + C)A = BA + CA.

For any positive integer k,
$$I - A^k = (I - A)(I + A + \dots + A^{k-1}).$$
For an odd positive integer k,
$$I + A^k = (I + A)(I - A + \dots + A^{k-1}).$$

(2) Traces and determinants of square Cayley products
$$\operatorname{tr}(AB) = \operatorname{tr}(BA), \qquad \operatorname{tr}(ABC) = \operatorname{tr}(BCA) = \operatorname{tr}(CAB),$$
$$x^T A x = \operatorname{tr}(x^T A x) = \operatorname{tr}(A x x^T).$$
$$|AB| = |A||B|, \qquad \begin{vmatrix} A & 0 \\ I & B \end{vmatrix} = |A||B|$$

(3) The Kronecker product
$$A \otimes B = \begin{pmatrix} a_{11} B & \cdots & a_{1m} B \\ \vdots & & \vdots \\ a_{n1} B & \cdots & a_{nm} B \end{pmatrix}$$
$$|A \otimes B| = |A|^m |B|^n, \quad A \in \mathbb{R}^{n \times n},\; B \in \mathbb{R}^{m \times m}$$

$$(aA) \otimes (bB) = ab(A \otimes B) = (abA) \otimes B = A \otimes (abB)$$
$$(A + B) \otimes C = A \otimes C + B \otimes C, \qquad (A \otimes B) \otimes C = A \otimes (B \otimes C),$$
$$(A \otimes B)^T = A^T \otimes B^T, \qquad (A \otimes B)(C \otimes D) = AC \otimes BD.$$
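The Kronecker identities above lend themselves to a quick numerical spot-check; the random matrices in this sketch are arbitrary choices, and `np.kron` is assumed to implement the block definition given above.

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.normal(size=(3, 3)), rng.normal(size=(2, 2))
C, D = rng.normal(size=(3, 3)), rng.normal(size=(2, 2))

# Mixed-product rule: (A (x) B)(C (x) D) = AC (x) BD
lhs = np.kron(A, B) @ np.kron(C, D)
rhs = np.kron(A @ C, B @ D)
print(np.allclose(lhs, rhs))  # True

# Determinant rule: |A (x) B| = |A|^m |B|^n for A in R^{n x n}, B in R^{m x m}
n, m = 3, 2
det_kron = np.linalg.det(np.kron(A, B))
print(np.isclose(det_kron, np.linalg.det(A)**m * np.linalg.det(B)**n))  # True
```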

2-Matrix rank and the inverse of a full rank matrix


The linear dependence or independence of the vectors forming the rows or columns of
a matrix is an important characteristic of the matrix. The maximum number of linearly
independent vectors is called the rank of the matrix, rank(A).

$$\operatorname{rank}(aA) = \operatorname{rank}(A),\; a \neq 0, \qquad \operatorname{rank}(A) \leq \min(n, m),\; A \in \mathbb{R}^{n \times m}.$$

If the rank of a matrix is the same as its smaller dimension, we say the matrix is of full
rank. In the case of a nonsquare matrix, we may say the matrix is of full row rank or full
column rank just to emphasize which is the smaller number.

$$\operatorname{rank}(AB) \leq \min(\operatorname{rank}(A), \operatorname{rank}(B)),$$
$$\operatorname{rank}(A + B) \leq \operatorname{rank}(A) + \operatorname{rank}(B),$$
$$|\operatorname{rank}(A) - \operatorname{rank}(B)| \leq \operatorname{rank}(A + B).$$

Linear systems
A linear system Anm x = b, for which a solution exists, is said to be consistent;
otherwise, it is inconsistent. The system is consistent if and only if

rank([A|b]) = rank(A), (75)

namely, the space spanned by the columns of A is the same as that spanned by the
columns of A and the vector b; therefore, b must be a linear combination of the columns
of A. A special case that yields (75) for any b is

rank(Anm ) = n,

and so if A is of full row rank, the system is consistent regardless of the value of b. In this
case, of course, the number of rows of A must be no greater than the number of columns.
A square system in which A is nonsingular is clearly consistent, and the solution is

x = A1 b.
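The consistency criterion (75) can be applied directly with a rank routine. A minimal sketch, assuming NumPy; the helper name `is_consistent` and the example system are hypothetical.

```python
import numpy as np

def is_consistent(A, b):
    """Condition (75): the system Ax = b is consistent iff
    rank([A|b]) == rank(A)."""
    aug = np.column_stack([A, b])
    return np.linalg.matrix_rank(aug) == np.linalg.matrix_rank(A)

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])        # rank 1
b_good = np.array([1.0, 2.0])     # lies in the column space of A
b_bad  = np.array([1.0, 3.0])     # does not

print(is_consistent(A, b_good))  # True
print(is_consistent(A, b_bad))   # False
```

Since A here is rank-deficient, b must be a multiple of the single independent column for a solution to exist, exactly as the span argument above states.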

Preservation of positive definiteness

(1) If C is positive definite and A is of full column rank, then A^T C A is positive definite.

(2) If A^T C A is positive definite, then A is of full column rank.

A lower bound on the rank of a matrix product


If A is n × n and B is a matrix with n rows, then
$$\operatorname{rank}(AB) \geq \operatorname{rank}(A) + \operatorname{rank}(B) - n.$$

Inverse of products and sums of matrices


The inverse of the Cayley product of two nonsingular matrices of the same size is
particularly easy to form. If A and B are square full rank matrices of the same size, then
$$(AB)^{-1} = B^{-1} A^{-1}.$$

$$A(I + A)^{-1} = (I + A^{-1})^{-1},$$
$$(A + B B^T)^{-1} B = A^{-1} B (I + B^T A^{-1} B)^{-1},$$
$$(A^{-1} + B^{-1})^{-1} = A (A + B)^{-1} B,$$
$$A - A(A + B)^{-1} A = B - B(A + B)^{-1} B,$$
$$A^{-1} + B^{-1} = A^{-1} (A + B) B^{-1},$$
$$(I + AB)^{-1} = I - A(I + BA)^{-1} B, \qquad (I + AB)^{-1} A = A (I + BA)^{-1},$$
$$(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}.$$

3-Eigensystems
Definitions
1- The eigenvalues of a symmetric matrix A are the numbers λ that satisfy |A − λI| = 0.
2- The eigenvectors of a symmetric matrix are the vectors x that satisfy (A − λI)x = 0.

Theorems
1- The eigenvalues of any real symmetric matrix are real.
2- The eigenvectors of any real symmetric matrix corresponding to different eigenvalues
are orthogonal.
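Both theorems can be observed numerically. The sketch below uses NumPy's symmetric eigensolver on a hypothetical example matrix; `eigh` is specialised for symmetric/Hermitian input and returns real eigenvalues in ascending order with orthonormal eigenvectors.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # real symmetric

eigvals, eigvecs = np.linalg.eigh(A)

print(np.all(np.isreal(eigvals)))                   # True: real eigenvalues
print(np.allclose(eigvecs.T @ eigvecs, np.eye(3)))  # True: orthonormal eigenvectors
print(np.allclose(A @ eigvecs, eigvecs * eigvals))  # columns satisfy A x = lambda x
```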

3-Diagonalisation of symmetric matrices
Definitions
1- An orthogonal matrix U is a real square matrix such that U^T U = U U^T = I.
2- If U is a real orthogonal matrix of order n × n and A is a real matrix of the same order,
then U^T A U is called the orthogonal transform of A.

Theorems
1- If A is a real symmetric matrix of order n × n then it is possible to find an orthogonal
matrix U of the same order such that the orthogonal transform of A with respect to U is
diagonal, and the diagonal elements of the transform are the eigenvalues of A.
2- (U^T A U)^m = U^T A^m U.
-Cayley-Hamilton Theorem: A real square symmetric matrix satisfies its own characteristic
equation (i.e. its own eigenvalue equation)

$$A^n + a_{n-1} A^{n-1} + a_{n-2} A^{n-2} + \dots + a_1 A + a_0 I = 0$$

where
$$a_0 = (-1)^n |A|, \qquad a_{n-1} = -\operatorname{tr}(A).$$

-Trace Theorem: The sum of the eigenvalues of a matrix A is equal to the sum of the
diagonal elements of A, which is defined as tr(A).
-Determinant Theorem: The product of the eigenvalues of A is equal to the determinant
of A.
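All three results can be checked at once for a small example; the 2 × 2 symmetric matrix below is a hypothetical choice, for which the characteristic equation reduces to A² − tr(A)A + |A|I = 0.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # n = 2, symmetric

eigvals = np.linalg.eigvalsh(A)

# Trace and Determinant Theorems
print(np.isclose(eigvals.sum(), np.trace(A)))        # True
print(np.isclose(eigvals.prod(), np.linalg.det(A)))  # True

# Cayley-Hamilton for n = 2:  A^2 - tr(A) A + |A| I = 0
residual = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
print(np.allclose(residual, 0))                      # True
```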

4-Matrix Factorizations
Matrices can be factored in a variety of ways as a product of matrices with different
properties. These different factorizations, or decompositions, reveal different aspects of
matrix algebra and are useful in different computational arenas.

Similarity transform
Two square matrices A and B are said to be similar if an invertible matrix P can be found
for which A = P BP 1 .
Similarity to a diagonal matrix
Systems of differential equations sometimes can be uncoupled by diagonalizing a
matrix, obtaining the similarity transformation A = P DP 1 , where the n columns of
P are the n eigenvectors of A, and D is a diagonal matrix and its entries are the corres-
ponding eigenvalues of A.
Similarity to a Jordan canonical form
However, the most general form is A = P JP 1 , where J is a Jordan matrix rather
than a diagonal matrix D. The Jordan matrix is a diagonal matrix with some additional
1s on the superdiagonal, the one above the main diagonal. For some matrices, the Jordan
matrix is as close to diagonalization as can be achieved.

A square matrix is similar to either a diagonal matrix or to a Jordan matrix. In either
event, the eigenvalues of A appear on the diagonal of D or J. A square symmetric matrix
is orthogonally similar to a diagonal matrix.

LU decomposition
LU decomposition can be obtained as a by-product of Gaussian elimination. The row
reductions that yield the upper triangular factor U also yield the lower triangular factor
L. This decomposition is an efficient way to solve systems of the form AX = Y , where
the vector Y could be one of a number of right-hand sides. In fact, the Doolittle, Crout,
and Cholesky variations of the decomposition are important algorithms for the numerical
solutions of systems of linear equations.
There are at least five different versions of LU decomposition:
1. Doolittle, L₁U, 1s on the main diagonal of L.
2. Crout, LU₁, 1s on the main diagonal of U.
3. LDU, L₁DU₁, 1s on the main diagonals of L and U, and D a diagonal matrix.
4. Gauss, L₁DL₁^T, A symmetric, 1s on the main diagonal of L, D a diagonal matrix.
5. Cholesky, RR^T, A symmetric positive definite, R = L₁D^{1/2}, with D a diagonal
matrix.
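A minimal sketch of the Doolittle variant (1s on the diagonal of L) is shown below. It deliberately omits pivoting, so it assumes all leading principal minors of A are non-zero; production codes use a pivoted factorisation instead.

```python
import numpy as np

def doolittle_lu(A):
    """Doolittle LU factorisation without pivoting (illustrative sketch):
    assumes every leading principal minor of A is non-zero."""
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for k in range(n):
        # Row k of U, then the sub-diagonal part of column k of L
        U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]
        L[k+1:, k] = (A[k+1:, k] - L[k+1:, :k] @ U[:k, k]) / U[k, k]
    return L, U

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
L, U = doolittle_lu(A)
print(np.allclose(L @ U, A))  # True
```

Once L and U are in hand, AX = Y is solved by one forward and one backward triangular substitution per right-hand side.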

QR decomposition
The QR decomposition factors a matrix into a product of an orthogonal matrix Q and an
upper triangular matrix R. It is an important ingredient of powerful numeric methods for
finding eigenvalues and for solving the least-squares problem.
The factors Q and R for every real matrix are unique once the otherwise arbitrary
signs on the diagonal of R are fixed. Modern computational algorithms for finding
eigenvalues numerically use some version of the QR algorithm. Starting from A₀ = A,
apply the QR decomposition iteratively:
$$A_0 = Q_0 R_0, \quad A_1 = R_0 Q_0 = Q_1 R_1, \quad A_2 = R_1 Q_1 = Q_2 R_2, \quad A_3 = R_2 Q_2 = Q_3 R_3, \;\dots$$

If A is real and no two eigenvalues have equal magnitude, that is,
$$0 < |\lambda_n| < \dots < |\lambda_2| < |\lambda_1|,$$

then the sequence of matrices A_k converges to an upper triangular matrix with the eigenvalues
of A₀ on the main diagonal. If, in addition, A is symmetric, then A_k converges to
a diagonal matrix.
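The unshifted iteration described above fits in a few lines; the symmetric test matrix and iteration count are illustrative choices (practical implementations add shifts and deflation for speed).

```python
import numpy as np

def qr_algorithm(A, iters=200):
    """Unshifted QR iteration: A_{k+1} = R_k Q_k."""
    Ak = A.copy()
    for _ in range(iters):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    return Ak

A = np.array([[5.0, 2.0],
              [2.0, 1.0]])    # symmetric, distinct |eigenvalues|
Ak = qr_algorithm(A)

# Off-diagonal entries vanish and the diagonal holds the eigenvalues
print(np.allclose(Ak, np.diag(np.diag(Ak)), atol=1e-8))                  # True
print(np.allclose(np.sort(np.diag(Ak)), np.sort(np.linalg.eigvalsh(A))))  # True
```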

Singular value decomposition
$$A = U \Sigma V^T.$$

The singular value decomposition factors a matrix as a product of three factors, two
being orthogonal matrices and one being diagonal. The columns in one orthogonal factor
are left singular vectors, and the columns in the other orthogonal factor are the right
singular vectors. The matrix itself can be represented in outer product form in terms
of the left and right singular vectors. One use of this representation is in digital image
processing.
5-Solution of linear systems
These are methods for finding solutions of a set of linear equations which may be written
in the matrix form Ax = b, where x is the vector of n unknowns.

Direct methods
Direct methods of solving linear systems all use some form of matrix factorization. The
LU factorization is the most commonly used method to solve a linear system.
For certain patterned matrices, other direct methods may be more efficient. If a given
matrix initially has a large number of zeros, it is important to preserve the zeros in the
same positions in the matrices that result from operations on the given matrix. This helps
to avoid unnecessary computations. The iterative methods discussed in the next section
are often more useful for sparse matrices.
Another important consideration is how easily an algorithm lends itself to implement-
ation on advanced computer architectures. Many of the algorithms for linear algebra can
be vectorized easily. It is now becoming more important to be able to parallelize the
algorithms.

Iterative methods
The Jacobi method
Let's start with Ax = b. A can be decomposed into a diagonal component D and the
remainder R. The solution is then obtained iteratively by
$$x^{k+1} = D^{-1}(b - R x^k).$$
Each element is given by
$$x_i^{k+1} = \frac{1}{a_{ii}} \Big( b_i - \sum_{j \neq i} a_{ij} x_j^k \Big), \quad i = 1, 2, \dots, n$$

Comments
1- The method works well if the matrix A is diagonally dominant.
2- The diagonal entries must satisfy a_{ii} ≠ 0.
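The update above is only a few lines of code. A minimal sketch, assuming NumPy and a diagonally dominant example system (both illustrative choices):

```python
import numpy as np

def jacobi(A, b, iters=50):
    """Jacobi iteration x_{k+1} = D^{-1} (b - R x_k)."""
    D = np.diag(A)        # diagonal entries, assumed non-zero
    R = A - np.diag(D)    # remainder
    x = np.zeros_like(b)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

# Diagonally dominant system, so the iteration converges
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([1.0, 2.0])

x = jacobi(A, b)
print(np.allclose(A @ x, b))  # True
```

Note that every component of x^{k+1} uses only the previous iterate x^k, which makes Jacobi trivially parallelisable.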
The Gauss-Seidel method
In this method, we identify three matrices: a diagonal matrix D, a lower triangular L
with 0s on the diagonal, and an upper triangular U with 0s on the diagonal:

$$(D + L)x = b - Ux.$$

We can write this entire sequence of Gauss-Seidel iterations in terms of these three fixed
matrices:

$$x^{(k+1)} = (D + L)^{-1}(b - U x^{(k)}).$$
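In practice the triangular solve is performed row by row, reusing components already updated within the same sweep. A minimal sketch with a hypothetical diagonally dominant example:

```python
import numpy as np

def gauss_seidel(A, b, iters=50):
    """Gauss-Seidel sweep: solve (D + L) x_{k+1} = b - U x_k row by row,
    using already-updated components within the same sweep."""
    n = len(b)
    x = np.zeros_like(b)
    for _ in range(iters):
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([1.0, 2.0])
x = gauss_seidel(A, b)
print(np.allclose(A @ x, b))  # True
```

The in-place update is what distinguishes this from Jacobi; it typically converges in fewer sweeps but is inherently sequential within a sweep.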

The conjugate gradient method
$$x^{(k+1)} = x^{(k)} + \alpha^{(k)} p^{(k)},$$
where p^{(k)} is a vector giving the direction of the movement and α^{(k)} is the step length.
Multigrid methods
Iterative methods have important applications in solving differential equations. The
solution of differential equations by a finite difference discretization involves the form-
ation of a grid. The solution process may begin with a fairly coarse grid on which a
solution is obtained. Then a finer grid is formed, and the solution is interpolated from
the coarser grid to the finer grid to be used as a starting point for a solution over the finer
grid. The process is then continued through finer and finer grids. If all of the coarser grids
are used throughout the process, the technique is a multigrid method. There are many
variations of exactly how to do this. Multigrid methods are useful solution techniques for
differential equations.

Generalised Vector Calculus
Integral Theorems

Motivation & Objectives


The objectives of the remaining course units are to:

i. Generalise and formalise integral mathematics

ii. Give incentives for developing mathematical diligence

iii. Provide physical intuition, insight and feeling for the mathematics in CFD

iv. Provide awareness of mathematical properties, characteristics & assumptions

0-Definitions & Notations


The following definitions and notations are used throughout the remainder of the course:

Variable          Description     Dimensions or Mapping        Example
x, φ              Scalars         R^{1×1}                      Speed, Temperature
x, ~x             Vectors         R^{n×1}                      Velocity, Position
n̂, ê_n            Unit Vectors    R^{n×1}                      Boundary Normals
A, X              Matrices        R^{a×b}                      Rotational Operators
φ(x), ψ(x)        Scalar Fields   R^{n×1} → R^{1×1}            Temperature Fields
F(x), ~F(x)       Vector Fields   R^{n×1} → R^{m×1}            Velocity Fields
S(x)              Surfaces        R^{(n−1)×1} → R^{n×1}        Potential Surfaces
All vectors are assumed to be of column nature, and all vector derivatives are assumed
to obey the numerator layout convention. The dimensional specification 1×1 for the real
scalars, or the additional ×1 for the vectors, is in fact slightly redundant notation; however,
it helps to appreciate the shapes of the vector equations and operators. Additionally,
this careful notation may simplify coding by making array allocation and/or operations
clearly identifiable.
In general, this course assumes a Cartesian coordinate system, and that n = m = 3.
However, all the presented concepts can be expressed in any coordinate system of choice
and most of the concepts are readily expanded to higher dimensions.

1-Required Vector Operators & Operations
The operator ∇ is, unless explicitly stated otherwise, assumed to be of the shape:
$$\nabla = \frac{\partial}{\partial \mathbf{x}} = \begin{bmatrix} \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z} \end{bmatrix}^T \quad \text{where:} \quad \mathbf{x} = \begin{bmatrix} x \\ y \\ z \end{bmatrix}, \tag{76}$$
where the superscript T denotes the transpose operation.

Gradient of a Scalar Field



$$\operatorname{grad}(\phi(\mathbf{x})) = \nabla(\phi(\mathbf{x})) = \begin{bmatrix} \dfrac{\partial \phi(\mathbf{x})}{\partial x} \\[4pt] \dfrac{\partial \phi(\mathbf{x})}{\partial y} \\[4pt] \dfrac{\partial \phi(\mathbf{x})}{\partial z} \end{bmatrix} \in \mathbb{R}^{3 \times 1}. \tag{77}$$
The gradient is normal to level surfaces: let f : R³ → R be a C¹ map and let
(x₀, y₀, z₀) lie on the level surface S defined by f(x, y, z) = k, for k a constant. Then
∇f(x₀, y₀, z₀) is normal to the level surface in the following sense: if v is the tangent
vector at t = 0 of a path c(t) in S with c(0) = (x₀, y₀, z₀), then (∇f) · v = 0.

Laplacian of a Scalar Field




$$\nabla \cdot \nabla (\phi(\mathbf{x})) = \nabla^2 (\phi(\mathbf{x})) = \left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}\right) \phi(\mathbf{x}) = \frac{\partial^2 \phi(\mathbf{x})}{\partial x^2} + \frac{\partial^2 \phi(\mathbf{x})}{\partial y^2} + \frac{\partial^2 \phi(\mathbf{x})}{\partial z^2} \in \mathbb{R}^{1 \times 1}. \tag{78}$$

The Laplacian is the Divergence of the Gradient of a scalar field. It has important
applications in Potential Flow Theory.

Laplacian of a Vector Field or Vector Laplacian


Not to be confused with a Laplacian Vector Field. From equation (68) the definition of
the Vector Laplacian follows as:
$$\nabla^2 \mathbf{F} = \nabla(\nabla \cdot \mathbf{F}) - \nabla \times (\nabla \times \mathbf{F})\,, \tag{79}$$
which in Cartesian coordinates reduces to:
$$\nabla^2 \mathbf{F}(\mathbf{x}) = \left[\, \nabla^2 F_1(\mathbf{x}) \;\; \nabla^2 F_2(\mathbf{x}) \;\; \nabla^2 F_3(\mathbf{x}) \,\right]^T \in \mathbb{R}^{3 \times 1}\,, \tag{80}$$
where the vector field F(x) is composed of three scalar components F₁(x), F₂(x) and
F₃(x), i.e. F(x) = [F₁(x) F₂(x) F₃(x)]^T. For a scalar field, the Vector Laplacian
reverts to the familiar Laplacian.

Divergence of a Vector Field





$$\operatorname{div}(\mathbf{F}(\mathbf{x})) = \nabla \cdot \mathbf{F}(\mathbf{x}) = \frac{\partial F_1(\mathbf{x})}{\partial x} + \frac{\partial F_2(\mathbf{x})}{\partial y} + \frac{\partial F_3(\mathbf{x})}{\partial z} \in \mathbb{R}^{1 \times 1}. \tag{81}$$

Curl of a Vector Field




$$\operatorname{curl}(\mathbf{F}(\mathbf{x})) = \nabla \times \mathbf{F}(\mathbf{x}) = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z} \\ F_1(\mathbf{x}) & F_2(\mathbf{x}) & F_3(\mathbf{x}) \end{vmatrix} \in \mathbb{R}^{3 \times 1}, \tag{82}$$
where |·| denotes the determinant.

Primary Theorems of Calculus


The following theorems of calculus will be presented and discussed:

i. Fundamental Theorems of Calculus

ii. Gradient Theorem

iii. Green's Theorem

iv. Divergence Theorem

v. Stokes' Theorem

First & Second Fundamental Theorem of Calculus
The Riemann Integral is a common and rigorous definition of an integral:
$$\int_a^b f(x)\, dx = \lim_{\Delta x_k \to 0} \sum_k f(\xi_k)\, \Delta x_k\,, \tag{83}$$
which has the geometrical interpretation of the area under the curve, as shown in figure 2.

Figure 2: Geometrical interpretation of a Riemann Integral (the area under f(x), approximated by rectangles of height f(ξ_k) and width Δx_k, in the limit Δx_k → 0)

Another formal, but more general, definition regards integration as the process that
reverses differentiation, which is why it is sometimes referred to as the Anti-Derivative.

1st Fundamental Theorem


The First Fundamental Theorem defines the Anti-Derivative as:
$$f(x) = \int_a^x g(t)\, dt\,, \tag{84}$$

given a continuous real-valued function g(t) over the closed interval domain [a, b]. It
follows from this theorem that f (x) is continuous over the closed interval domain [a, b],
differentiable over the open domain (a, b), and by definition:

$$g(x) = \frac{df(x)}{dx}\,. \tag{85}$$
The First Fundamental Theorem relates the Derivative to the Integral and, most import-
antly, guarantees existence of integrals for continuous functions.

2nd Fundamental Theorem


The Second Fundamental Theorem defines definite integrals as:
$$\int_a^b g(x)\, dx = f(b) - f(a)\,, \tag{86}$$

for real-valued functions g(x) and f(x) on [a, b] related by equation (85). This theorem,
unlike the First Fundamental Theorem, does not require g(x) to be continuous.

Generalised Line Integrals & Gradient Theorem
The Gradient Theorem is also referred to as the Fundamental Theorem of Calculus for
Line Integrals. It represents the generalisation of integration along an axis, e.g. dx
or dy (i.e. the 2nd Fundamental Theorem of Calculus), to the integration of vector fields
along arbitrary curves, C, in their base space.

Scalar Field Line Integral


Assume a surface z = f (x, y), see figure 3, which is to be integrated from A to B along
the curve C. Geometrically, this corresponds to the curtain surface, Ss , bound by the
surface f (x, y) and C, which is expressed as:

$$S_s = \int_C f(x, y)\, ds \quad \text{where} \quad ds = \left(dx^2 + dy^2\right)^{1/2}. \tag{87}$$

Assume C is parametrised such that x = x(t) and y = y(t), with t = a at A and
t = b at B; then equation (87) can be expressed as:
$$\int_C f(x(t), y(t)) \left(dx^2 + dy^2\right)^{1/2} = \int_a^b f(x(t), y(t)) \left[\left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2\right]^{1/2} dt\,, \tag{88}$$

or alternatively, if y can be expressed as y = y(x), then it follows:
$$\int_C f(x, y)\, ds = \int_{x_A}^{x_B} f(x, y(x)) \left[\left(\frac{dy}{dx}\right)^2 + 1\right]^{1/2} dx\,. \tag{89}$$

If the integral is only evaluated either along dx or dy, then only the axis projection
surfaces are obtained:
$$S_x = \int_C f(x, y)\, dx \quad \text{or} \quad S_y = \int_C f(x, y)\, dy\,. \tag{90}$$

Figure 3: Generalised line integral on a scalar field

Example: Scalar Field Line Integral

Assume ∫_C f(x, y) ds, where f(x, y) = h is constant, and C is a circle of radius R.
Parametrise C as x(θ) = R cos(θ) and y(θ) = R sin(θ) for θ = 0 → 2π. It follows that:
$$S_s = \oint_C h\, ds = \oint_C h \left(dx^2 + dy^2\right)^{1/2} = \int_0^{2\pi} h \left[\left(\frac{dx}{d\theta}\right)^2 + \left(\frac{dy}{d\theta}\right)^2\right]^{1/2} d\theta = \int_0^{2\pi} h R \left[(-\sin\theta)^2 + (\cos\theta)^2\right]^{1/2} d\theta = \int_0^{2\pi} h R\, d\theta = 2\pi h R\,. \tag{91}$$

Figure 4: Cylinder surface integral

Note that due to the closed-loop integration path, the axis projections S_x and S_y are
nil in this case:
$$S_x = \oint_C h\, dx = \int_{x_A}^{x_B} h\, dx + \int_{x_B}^{x_A} h\, dx = 0\,. \tag{92}$$

Generalised Gradient Theorem
Postulate the integral of a 3D vector field, F(x) = [F₁(x) F₂(x) F₃(x)]^T, along an
arbitrary 3D curve, C, which is parametrised as x = x(t), y = y(t), z = z(t), i.e. ds =
[dx dy dz]^T, and goes from point p to point q. The corresponding generalised line
integral becomes:
$$\int_C \mathbf{F} \cdot d\mathbf{s} = \int_C \left(F_1(\mathbf{x})\, dx + F_2(\mathbf{x})\, dy + F_3(\mathbf{x})\, dz\right) = \int_{t=a}^{t=b} \left(F_1(\mathbf{x}(t)) \frac{dx}{dt} + F_2(\mathbf{x}(t)) \frac{dy}{dt} + F_3(\mathbf{x}(t)) \frac{dz}{dt}\right) dt\,. \tag{93}$$

Equation (93) may for instance represent the work performed on a particle in 3D as
it travels through an external force field, F(x). Postulating that this vector field, F(x),
can be obtained as the gradient of a scalar field, φ(x), i.e. F(x) = ∇φ(x), it follows,
together with the 2nd Fundamental Theorem of Calculus, that:
$$\int_C \mathbf{F} \cdot d\mathbf{s} = \int_C \nabla\phi(\mathbf{x}) \cdot d\mathbf{s} = \phi(q) - \phi(p)\,. \tag{94}$$
Equation (94) is known as the Gradient Theorem and implies path independence of
the integral if and only if F(x) = ∇φ(x). It immediately follows from equation (66)
that such a vector field F(x) is irrotational, i.e. curl(F) = ∇ × F = 0, because:
$$\nabla \times \mathbf{F} = \nabla \times (\nabla\phi(\mathbf{x})) = 0\,, \tag{95}$$
for any scalar field φ(x). The scalar field is referred to as a conservative or potential
field, with the corresponding vector field F denoted a conservative vector field.
Conversely, it is always possible to express a conservative vector field F in terms of a
scalar potential field. This theorem is at the basis of much of Potential Flow and
irrotational fluid dynamics.
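Path independence is easy to observe numerically. The sketch below uses the hypothetical potential φ(x, y, z) = x² + yz, so F = ∇φ = (2x, z, y), and integrates F·ds along a straight path and a wiggly detour with the same endpoints; both reproduce φ(q) − φ(p).

```python
import numpy as np

# Hypothetical potential phi(x, y, z) = x^2 + y*z, gradient F = (2x, z, y)
def F(pts):
    x, y, z = pts.T
    return np.stack([2.0 * x, z, y], axis=1)

def line_integral(pts):
    """Midpoint-rule accumulation of F . dx along a sampled path."""
    mid = 0.5 * (pts[1:] + pts[:-1])
    return np.sum(F(mid) * np.diff(pts, axis=0))

p, q = np.array([0.0, 0.0, 0.0]), np.array([1.0, 2.0, 1.0])
t = np.linspace(0.0, 1.0, 5_001)[:, None]

straight = p + t * (q - p)                 # straight path p -> q
wiggly = straight.copy()
wiggly[:, 1] += np.sin(np.pi * t[:, 0])    # detour with the same endpoints

# Both match phi(q) - phi(p) = 1 + 2*1 - 0 = 3
print(line_integral(straight), line_integral(wiggly))  # both ~ 3.0
```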

Green's Theorem
For a 2D convex region Ω, i.e. x = [x y]^T, with boundary ∂Ω, Green's Theorem states:
$$\int_\Omega \left(\frac{\partial F_2(\mathbf{x})}{\partial x} - \frac{\partial F_1(\mathbf{x})}{\partial y}\right) dx\, dy = \oint_{\partial\Omega} \left(F_1(\mathbf{x})\, dx + F_2(\mathbf{x})\, dy\right) \tag{96}$$

[Figure 5 (a): a convex domain Ω in the x-y plane, a ≤ x ≤ b, c ≤ y ≤ d, bounded below by C1: u(x), above by C2: v(x), and at the sides by C3: p(y) and C4: q(y). Figure 5 (b): a non-convex domain subdivided into cells Ω_i, Ω_j whose shared internal boundaries satisfy ∂Ω_ij = −∂Ω_ji.]

Figure 5: Green's Theorem on convex & non-convex domains

Considering figure 5 (a), the F₁(x) integrand part of equation (96) can be resolved
as:
$$-\int_\Omega \frac{\partial F_1(\mathbf{x})}{\partial y}\, dx\, dy = -\int_{x=a}^{x=b} \left[\int_{y=u(x)}^{y=v(x)} \frac{\partial F_1(\mathbf{x})}{\partial y}\, dy\right] dx = -\int_{x=a}^{x=b} F_1(x, v(x))\, dx + \int_{x=a}^{x=b} F_1(x, u(x))\, dx = \oint_{\partial\Omega} F_1(x, y)\, dx\,, \tag{97}$$
where the integration direction always follows a right-hand rotation about the domain
normal, i.e. k̂. Similarly, the F₂(x) integrand part of equation (96) can be resolved. If
the region is not convex, it can always be subdivided into sub-domains, Ω_i, where the
line integrals at the internal boundary between sub-domains Ω_i and Ω_j, ∂Ω_ij, cancel due to
opposite directions of integration, see figure 5 (b), i.e. ∂Ω_ji = −∂Ω_ij.

Green's Theorem gives the necessary and sufficient condition for a line integral
∮ (F₁(x) dx + F₂(x) dy) to be path independent, in a simply connected region, as:
$$\frac{\partial F_2(\mathbf{x})}{\partial x} - \frac{\partial F_1(\mathbf{x})}{\partial y} = 0 \quad \text{or} \quad \frac{\partial F_2(\mathbf{x})}{\partial x} = \frac{\partial F_1(\mathbf{x})}{\partial y}\,. \tag{98}$$
This is equivalent to a nil curl of the vector field F(x) = [F₁(x, y) F₂(x, y) 0]^T,
implying that the vector field F(x) is an irrotational field in the x-y plane, i.e.:
$$\nabla \times \mathbf{F}(\mathbf{x}) = \left[\, 0 \;\; 0 \;\; \frac{\partial F_2(\mathbf{x})}{\partial x} - \frac{\partial F_1(\mathbf{x})}{\partial y} \,\right]^T = \mathbf{0}\,. \tag{99}$$

Coupled with complex analysis, keyhole integration for domains with singularities,
and complex integrals, Green's Theorem can be used to develop Laurent Series,
Cauchy Residues or Laplace Transforms. These methods have important applications
in system dynamics and stability analyses.
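Equation (96) can be verified directly on a simple domain. The sketch below takes the hypothetical field F = (F₁, F₂) = (−y, x) on the unit square: the area integrand ∂F₂/∂x − ∂F₁/∂y equals 2, so the area integral is 2; the circulation around the anti-clockwise boundary gives the same value.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1_001)

def edge(p0, p1):
    """Line integral of F1 dx + F2 dy along the straight edge p0 -> p1
    (trapezoidal rule, exact here since the integrands are linear)."""
    x = p0[0] + t * (p1[0] - p0[0])
    y = p0[1] + t * (p1[1] - p0[1])
    F1, F2 = -y, x
    Fm1 = 0.5 * (F1[1:] + F1[:-1])
    Fm2 = 0.5 * (F2[1:] + F2[:-1])
    return np.sum(Fm1 * np.diff(x) + Fm2 * np.diff(y))

# Anti-clockwise (right-hand) traversal of the unit-square boundary
corners = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
circulation = sum(edge(corners[i], corners[i + 1]) for i in range(4))

print(circulation)  # ~ 2.0 = area integral of (dF2/dx - dF1/dy) = 2 * 1
```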

Divergence Theorem
The Divergence Theorem, also referred to as Gauss' or Ostrogradsky's Theorem, relates
the vector flux through a domain boundary, ∂Ω, to the vector field within the domain, Ω.

Theory & Derivation


This theorem can be obtained in several ways, though only two are presented below.

Bottom-Up or Surface to Volume Derivation


A definition of the divergence of a vector field can be physically interpreted as the time
evolution of an infinitesimal volume, V, of substance in a velocity field, F; see pg. 217
of the recommended textbook:
$$\operatorname{div}(\mathbf{F}) = \nabla \cdot \mathbf{F} = \lim_{V \to 0} \frac{1}{V} \frac{dV}{dt}\,. \tag{100}$$

Consider one of the infinitesimal cells Ω_i or Ω_j in figure 6. The variational change in
volume of either cell, during the time increment Δt, is given by the volume swept out by
its displaced surface boundary, i.e.:
$$\Delta V_i = \Delta t \oint_{\partial\Omega_i} (\mathbf{F} \cdot \hat{\mathbf{n}})\, dA\,, \tag{101}$$

which in the limit Δt → 0 results in:
$$\frac{dV_i}{dt} = \oint_{\partial\Omega_i} \mathbf{F} \cdot d\mathbf{A}\,. \tag{102}$$

When combining both cells Ω_i and Ω_j, the integrals at the shared boundary surface cancel,
due to opposing signs of the outward normal vector dA, resulting in:
$$\oint_{\partial\Omega} \mathbf{F} \cdot d\mathbf{A} = \sum_i \left(\oint_{\partial\Omega_i} \mathbf{F} \cdot d\mathbf{A}\right) = \sum_i \left(\frac{1}{V_i} \oint_{\partial\Omega_i} \mathbf{F} \cdot d\mathbf{A}\right) V_i\,. \tag{103}$$

Substituting equation (102), taking the limit V_i → 0, and using equation (100), gives the
Divergence Theorem as:
$$\oint_{\partial\Omega} \mathbf{F} \cdot d\mathbf{A} = \int_\Omega (\nabla \cdot \mathbf{F})\, dV\,, \tag{104}$$
where Ω represents the overall volume domain and ∂Ω denotes the total surface boundary.

Figure 6: Surface boundary normals for adjacent infinitesimal cells

Top-Down or Volume to Surface Derivation
Figure 7: Surface boundary for a convex domain

For the convex domain, Ω, in figure 7, consider just the F₃(x) part of the volume integral of the
Divergence Theorem, i.e.:
$$\int_\Omega \frac{\partial F_3(\mathbf{x})}{\partial z}\, dV = \int_R \left[\int_{z=u(x,y)}^{z=v(x,y)} \frac{\partial F_3(x, y, z)}{\partial z}\, dz\right] dx\, dy = \int_R \left[F_3(x, y, v(x, y)) - F_3(x, y, u(x, y))\right] dx\, dy = \oint_{\partial\Omega} F_3(\mathbf{x})\, \hat{\mathbf{k}} \cdot d\mathbf{A}\,, \tag{105}$$
because dx dy = k̂ · dA on v(x, y) while dx dy = −k̂ · dA on u(x, y). The remaining
two parts of the volume integrand can be equally resolved, obtaining the theorem as:
$$\int_\Omega (\nabla \cdot \mathbf{F})\, dV = \oint_{\partial\Omega} \mathbf{F} \cdot d\mathbf{A}\,. \tag{106}$$
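Equation (106) can also be checked numerically on the unit cube. The sketch below uses the hypothetical field F = (xy, yz, zx), whose divergence is y + z + x: the volume integral evaluates to 3/2, and F · n̂ vanishes on the three faces through the origin while equalling y, z and x on the faces x = 1, y = 1 and z = 1 respectively.

```python
import numpy as np

n = 100
c = (np.arange(n) + 0.5) / n             # midpoint samples in [0, 1]
X, Y, Z = np.meshgrid(c, c, c, indexing="ij")

# Left-hand side: volume integral of div F = x + y + z (midpoint rule)
volume_integral = np.sum(X + Y + Z) / n**3

# Right-hand side: outward flux. On x = 1, F . n = y; on y = 1 it is z;
# on z = 1 it is x; the faces through the origin contribute nothing.
U, _ = np.meshgrid(c, c, indexing="ij")
face = np.sum(U) / n**2                  # integral of one coordinate over a face
flux = 3.0 * face

print(volume_integral, flux)  # both ~ 1.5
```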

Obtaining Green's Theorem from the Divergence Theorem

Assuming only 2D vector fields, the Divergence Theorem results in Green's Theorem.
For the reduced dimensionality, equation (106) reduces to:
$$\int_\Omega (\nabla \cdot \mathbf{F})\, dA = \oint_{\partial\Omega} \mathbf{F} \cdot d\mathbf{s}_n\,, \tag{107}$$
where Ω is now a surface domain, ∂Ω is a line boundary and ds_n is an outward boundary
vector, i.e. ds_n = ds n̂. The latter is related to the tangential vector ds, used in
Green's Theorem, see equation (96), by a negative π/2 rotation as:
$$d\mathbf{s}_n = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} d\mathbf{s}\,. \tag{108}$$
Substituting equation (108) into equation (107) leads to:
$$\int_\Omega \left(\frac{\partial F_1(\mathbf{x})}{\partial x} + \frac{\partial F_2(\mathbf{x})}{\partial y}\right) dx\, dy = \oint_{\partial\Omega} \left((-F_2(\mathbf{x}))\, dx + F_1(\mathbf{x})\, dy\right)\,, \tag{109}$$
which is equivalent to equation (96), for the vector field G = [G₁ G₂]^T = [(−F₂) F₁]^T.
Stokes' Theorem
Stokes' Theorem relates surface integrals to line integrals. However, in its more general
form, the theorem relates integrals in dimension R^n to integrals in R^{n−1}.

Theory & Derivation


A strictly mathematical derivation can be found on pg. 331 of the recommended textbook.
The Graphical Derivation
Figure 8 depicts a generic surface, ∂Ω, in a global frame of reference x = [x y z]^T, as
well as an infinitesimal surface element, ∂Ω_i, in a locally rotated coordinate system x′ =
[x′ y′ z′]^T. Firstly, the surface integral of the curl of the vector field F(x) is a scalar
quantity, and hence coordinate-frame invariant, i.e.:
$$\int_{\partial\Omega_i} \left(\nabla \times \mathbf{F}(\mathbf{x})\right) \cdot d\mathbf{A} = \int_{\partial\Omega_i} \left(\nabla' \times \mathbf{G}(\mathbf{x}')\right) \cdot d\mathbf{A}'\,, \tag{110}$$
where G(x′) is the equivalent vector field in the local frame, with dA′ = dx′ dy′ k̂′:
$$\int_{\partial\Omega_i} \left(\nabla \times \mathbf{F}(\mathbf{x})\right) \cdot d\mathbf{A} = \int_{\partial\Omega_i} \left(\frac{\partial G_2(\mathbf{x}')}{\partial x'} - \frac{\partial G_1(\mathbf{x}')}{\partial y'}\right) dx'\, dy'\,, \tag{111}$$
to which Green's Theorem can be directly applied, giving:
$$\int_{\partial\Omega_i} \left(\nabla \times \mathbf{F}(\mathbf{x})\right) \cdot d\mathbf{A} = \oint_{\partial\partial\Omega_i} \mathbf{G}(\mathbf{x}') \cdot d\mathbf{s}'\,. \tag{112}$$

Figure 8: Stokes' Theorem on a surface

Secondly, summing over all infinitesimal domains, ∂Ω_i, noting that integrals along
internal boundaries cancel due to opposite directions of integration, and defining the
overall boundary ∂∂Ω in the global reference frame, Stokes' Theorem follows as:
$$\int_{\partial\Omega} \left(\nabla \times \mathbf{F}(\mathbf{x})\right) \cdot d\mathbf{A} = \oint_{\partial\partial\Omega} \mathbf{F}(\mathbf{x}) \cdot d\mathbf{s}\,. \tag{113}$$
The domain must be a simply connected region and F(x) must not include singularities
along ∂Ω. Stokes' Theorem is the most generalised theorem and includes the Divergence,
Green's and the 2nd Fundamental Theorem as special cases. Chapters 13.4-13.6
of the recommended textbook are highly suggested for further discussions and examples.
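A flat surface makes the theorem easy to check. The sketch below uses the hypothetical field F = (−y, x, 0), whose curl is (0, 0, 2), over the unit disk in the x-y plane (normal k̂): the curl flux is 2 × π, and the circulation around the bounding circle matches it.

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 100_001)
x, y = np.cos(theta), np.sin(theta)

# Circulation around the boundary circle: F . ds = -y dx + x dy
xm, ym = 0.5 * (x[1:] + x[:-1]), 0.5 * (y[1:] + y[:-1])
circulation = np.sum(-ym * np.diff(x) + xm * np.diff(y))

# Surface side: (curl F) . k = 2 is constant, integrated over area pi
flux = 2.0 * np.pi

print(circulation, flux)  # both ~ 6.2832
```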
Partial Differential Equations
NOTE: All figures for this section will be covered during lectures

Motivation & Objectives


PDE appear throughout physics and engineering. Below is a selection of important
PDE which govern different physical phenomena:
$$\begin{array}{ll}
\phi_t + u\,\phi_x = 0 & \text{1D Advection Equation} \\
u_t + u\,u_x - \nu\,u_{xx} = 0 & \text{Viscous Burgers Equation} \\
u_t + u\,u_x = 0 & \text{Inviscid Burgers Equation} \\
u_t - u_{xx} = 0 & \text{Heat Equation} \\
u_{tt} - u_{xx} = 0 & \text{Wave Equation} \\
\phi_{xx} + \phi_{yy} = 0 & \text{2D Laplace's Equation} \\
\phi_t + \phi\,\phi_x + \phi_{xxx} = 0 & \text{Korteweg-de Vries Equation}
\end{array}$$

Solution Strategies
The field of PDE is vast and many solution strategies are available; a few of the
most popular analytical approaches include:

Separation of Variables
Transform Methods
Method of Characteristics
Similarity Solutions
h-Principle

Definitions & Notations


In general, any PDE can be expressed in the following form:
$$F\left(\mathbf{x},\, u(\mathbf{x}),\, \frac{\partial u(\mathbf{x})}{\partial x_1},\, \dots,\, \frac{\partial u(\mathbf{x})}{\partial x_n},\, \dots,\, \frac{\partial^2 u(\mathbf{x})}{\partial x_i\, \partial x_j},\, \dots\right) = 0\,, \tag{114}$$
where the vector x comprises all n problem variables and the operator F must
not be confused with F from the previous vector calculus chapter. Adopting the
notation $\frac{\partial u}{\partial x} = u_x$, $\frac{\partial^2 u}{\partial x^2} = u_{xx}$, and assuming in this chapter that the variable
vector takes the form x = [x y]^T ∈ R², equation (114) can be restated as:
$$F(x, y, u, u_x, u_y, u_{xx}, u_{xy}, u_{yy}, \dots, u_{xxxy}, \dots) = 0\,. \tag{115}$$


Order & Linearity
Two properties are primarily used to differentiate between different PDE. The
first is the Order which refers to the highest derivative in the given PDE. The
second is Linearity which states that linear combinations of existing solutions,
u(x, y) and v(x, y), lead to new admissible solutions.
Define L to be the operator which represents equation (114), e.g. $L = \frac{\partial}{\partial t} - \frac{\partial^2}{\partial x^2}$
for the heat equation. The PDE for the heat equation can then be rewritten as:
$$L(u) = 0\,. \tag{116}$$

A PDE is linear if and only if (iff):

L(u + v) = L(u) + L(v) and L(ku) = k L(u) , (117)

for any solution functions u, v and constant k. Hence, as examples, the heat, wave
and Laplace equations are linear while the advection and Burgers equations are
generally non-linear. Non-linearity can occur, for instance, if the solution, u, appears
in a derivative's coefficient or if a derivative carries a power exponent. Familiarity
with identifying linearity is paramount for subsequent studies and modules.
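The linearity test (117) can be illustrated numerically: sample two fields u(x, t) and v(x, t) on a grid (arbitrary choices below), apply the operators with finite differences via `np.gradient`, and compare L(u + v) against L(u) + L(v). The heat operator passes; the u u_x term of the inviscid Burgers operator breaks superposition.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 201)
t = np.linspace(0.0, 1.0, 201)
X, T = np.meshgrid(x, t, indexing="ij")

def heat_L(u):
    """Heat operator L(u) = u_t - u_xx (finite differences)."""
    u_t = np.gradient(u, t, axis=1)
    u_xx = np.gradient(np.gradient(u, x, axis=0), x, axis=0)
    return u_t - u_xx

def burgers_L(u):
    """Inviscid Burgers operator L(u) = u_t + u u_x."""
    u_t = np.gradient(u, t, axis=1)
    u_x = np.gradient(u, x, axis=0)
    return u_t + u * u_x

u = np.sin(2 * np.pi * X) * np.exp(-T)   # two arbitrary sample fields
v = X**2 + T

# Heat operator satisfies superposition to round-off ...
print(np.allclose(heat_L(u + v), heat_L(u) + heat_L(v)))            # True
# ... while the nonlinear term breaks it for Burgers
print(np.allclose(burgers_L(u + v), burgers_L(u) + burgers_L(v)))   # False
```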

Existence, Uniqueness & Boundary Conditions


Existence and uniqueness of solutions for PDE is beyond the scope of this in-
troductory course. However, it is noted here that not all PDE problems are well
posed. Whether a problem is well posed or ill posed depends not only on the
PDE structure alone, but on the combination of the PDE and the given Boundary
Conditions (BC). Formally, the combination of a PDE with its BC is referred to
as a Cauchy Problem. Please refer to the Cauchy-Kowalevski theorem for details
of existence and uniqueness.
Facing any PDE, it is recommended to be careful by default, as even linear
PDE such as Laplaces equation can become ill posed with some BC. Even if a
solution to a PDE with given BC exists and is unique, it may feature undesirable
physical properties (weak vs. strong solutions).
Moreover it is important to be aware of the different types of Boundary Con-
ditions, e.g. Neumann, Dirichlet, Robin, Cauchy and Mixed BC, to which PDE
can be subjected. Again, familiarity with the different types of BC is paramount.

Degrees of Non-Linearity in 1st Order PDE
Let the reference 1st order PDE for most of this section be of the form:
$$a(x, y) \frac{\partial u(x, y)}{\partial x} + b(x, y) \frac{\partial u(x, y)}{\partial y} = c(x, y, u(x, y))\,. \tag{118}$$
The potential degree of non-linearity embedded in PDE of first order leads to
the following differentiations:
The potential degree of non-linearity embedded in PDE of first order leads to
the following differentiations:

PDE Type Description


Linear Constant Coefficient a, b, c are constant functions
Linear a, b, c are functions of x and y only
Semi-Linear a, b functions of x and y, c may depend on u
Quasi-Linear a, b, c are functions of x, y and u
Non-Linear The derivatives carry exponents, e.g. (ux )2 ,
or derivatives cross-terms exist, e.g. ux uy

Hence, equation (118) represents a semi-linear PDE, because it permits mild
non-linearities in the source term, c(x, y, u(x, y)).

Homogeneous & Inhomogeneous Linear PDE


A linear PDE is called homogeneous iff it can be written in the form:

L (u) = 0 , (119)

which implies that the source term is nil, c(x, y) = 0, while inhomogeneous linear
PDE can be written in the form:

L (u) = c(x, y) . (120)

The inhomogeneous solution can sometimes be obtained directly; often it
can be obtained by first finding the solution for a point source and then performing
a convolution with the given BC. The overall solution to an inhomogeneous
linear PDE can then be obtained by superposition of the homogeneous, u_h, and the
inhomogeneous or particular solution, u_p, as u = u_h + u_p. Homogeneous PDE
are often related to conservation laws without sources or forcing, e.g. conservation
of mass with no source.

Method of Characteristics - 1st Order PDE
Theory & Derivation
Assume the following linear first order PDE:
    a(x, y) ∂u(x, y)/∂x + b(x, y) ∂u(x, y)/∂y = c(x, y) .    (121)
The solution, u(x, y), is a surface such that z = u(x, y), which can be
rewritten in implicit form as 0 = u(x, y) − z. The vector gradient operator,
∇, applied to this implicit surface gives the normal vector, n, at every
point as:

    n = [ ∂u(x, y)/∂x   ∂u(x, y)/∂y   −1 ]^T .    (122)

Hence equation (121) can be written in the form:

    [ ∂u(x, y)/∂x   ∂u(x, y)/∂y   −1 ] · [ a(x, y)   b(x, y)   c(x, y) ]^T = n · G = 0 ,    (123)

where G = [ a(x, y)   b(x, y)   c(x, y) ]^T.

It follows that the vector field G always lies in the plane tangential to
u(x, y). Assume we can parametrise a curve, C, which lies in the solution
surface and which at every point satisfies the following system of ODE:

    [ dx/ds   dy/ds   dz/ds ]^T = [ a(x, y)   b(x, y)   c(x, y) ]^T .    (124)

A curve of this type is called an integral curve of the vector field G, which
in the context of a PDE is known as a characteristic curve. The solution
surface can then be reconstructed (traced) from the union of all
characteristic curves. The PDE has been reduced to a system of ODE, equations
(124), which can be solved. The parametrisation variable can be eliminated
from this system, and by setting z = u, the Lagrange-Charpit equations can be
obtained as:
    dx / a(x, y) = dy / b(x, y) = du / c(x, y) ,    (125)
which can easily be extended to include non-linear cases. In case that c(x, y) = 0,
from the third line in equations (124), it follows that u is constant and hence
du = 0. In any case it is possible to integrate:
    dy/dx = b(x, y) / a(x, y) ,    (126)
which, when drawn in the x-y base plane, results in the projected characteristic curves.
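The reduction of the PDE to the system of ODE in equations (124) can be integrated numerically. The following is a minimal sketch, assuming the illustrative PDE u_x + y u_y = 0 (i.e. a = 1, b = y, c = 0, a hypothetical choice not taken from the notes) and a hand-rolled fixed-step RK4 integrator; along each characteristic, y e^(−x) and u remain constant.

```python
import math

def rk4_step(f, state, ds):
    """One classical Runge-Kutta step for the system d(state)/ds = f(state)."""
    k1 = f(state)
    k2 = f([s + 0.5*ds*k for s, k in zip(state, k1)])
    k3 = f([s + 0.5*ds*k for s, k in zip(state, k2)])
    k4 = f([s + ds*k for s, k in zip(state, k3)])
    return [s + ds/6.0*(a + 2*b + 2*c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def field(state):
    # (dx/ds, dy/ds, du/ds) = (a, b, c) for the sample PDE u_x + y u_y = 0
    x, y, u = state
    return [1.0, y, 0.0]

state = [0.0, 1.0, 5.0]      # starting point (x0, y0) with datum u0 on it
for _ in range(1000):
    state = rk4_step(field, state, 0.01)

x, y, u = state
# u is transported unchanged (c = 0) and y*exp(-x) is invariant on the curve
print(x, u, y*math.exp(-x))
```

Since c = 0 here, the third component simply carries the boundary datum along the curve, exactly as equation (125) predicts with du = 0.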
Example: Advection Equation
Imagine a scalar quantity being propagated in 1D, in a constant velocity field
of speed a, where the initial distribution of the scalar is prescribed along
the boundary {(x, t = 0)}, so that the complete problem is stated as:

    u_t + a u_x = 0 ,
    u(x, 0) = ψ(x) .    (127)

Integration of equations (125) results in:

    x = a t + k1 ,
    u = k2 ,    (128)

where k1 and k2 are constants, such that the general solution can be expressed as:

    x − a t = k1 ,
    u = k2 = f(k1) = f(x − a t) ,    (129)

where f is an arbitrary function depending on the BC. The BC MUST NOT be
imposed on a curve tangential to the projected characteristic curves.
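The travelling-wave solution u(x, t) = ψ(x − a t) can be checked directly against the PDE. A short sketch, assuming an illustrative Gaussian initial profile ψ (not from the notes) and central finite differences:

```python
import math

a = 2.0                                  # constant advection speed
psi = lambda s: math.exp(-s*s)           # assumed initial profile psi(x)
u = lambda x, t: psi(x - a*t)            # general solution (129)

# Central differences approximate u_t and u_x at an arbitrary point.
h = 1e-5
x, t = 0.7, 1.3
u_t = (u(x, t+h) - u(x, t-h)) / (2*h)
u_x = (u(x+h, t) - u(x-h, t)) / (2*h)
residual = u_t + a*u_x                   # should vanish for a true solution
print(residual)
```

The profile is simply transported along the projected characteristics x − a t = const, so the residual of equation (127) is zero up to finite-difference error.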

Example: Inhomogeneous PDE
Find the general solution to the following problem statement:

    y u_x + x u_y = x² + y² ,
    u(x, 0) = 1 + x²   and   u(0, y) = 1 + y² ,    (130)

by drawing the projected characteristic curves and then showing that the
particular solution is different in different regions of the domain. The
projected characteristic curves are obtained as:

    y dy = x dx ,
    y² − x² = k1 ,    (131)

while the source term leads to:

    du = (x² + y²) y^(−1) dx = x dy + y dx = d(x y) ,
    u − x y = k2 ,    (132)

such that the general solution is u = x y + f(y² − x²). The BC result in:

    1 + x² = f(−x²)  on  y = 0 ,
    1 + y² = f(y²)   on  x = 0 ,    (133)

so that the arbitrary function is f(t) = |t| + 1. Hence the full solution is:

    u(x, y) = x y + 1 + |y² − x²| =
        { x y + 1 + y² − x²   if y² − x² ≥ 0 ,
        { x y + 1 + x² − y²   if y² − x² ≤ 0 .    (134)
This example illustrates the concept of regions of influence, which will be
revisited later on.
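The solution u = x y + 1 + |y² − x²| can be verified numerically against equation (130). A sketch using central finite differences, evaluated away from the lines y = ±x where the absolute value is not differentiable:

```python
def u(x, y):
    # Full solution (134) of y u_x + x u_y = x^2 + y^2 with the given BC
    return x*y + 1.0 + abs(y*y - x*x)

h = 1e-5
x, y = 2.0, 0.5                        # a point in the region y^2 - x^2 < 0
u_x = (u(x+h, y) - u(x-h, y)) / (2*h)
u_y = (u(x, y+h) - u(x, y-h)) / (2*h)
residual = y*u_x + x*u_y - (x*x + y*y)  # should vanish
bc_check = u(3.0, 0.0) - (1.0 + 3.0**2) # BC on y = 0: u = 1 + x^2
print(residual, bc_check)
```

Trying a point in the other region, e.g. (x, y) = (0.5, 2.0), gives the same vanishing residual with the other branch of the absolute value active.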
Quasi-Linear PDE with Shocks and Expansion Fans
Consider the following inviscid Burgers problem statement:

    u_t + u u_x = 0 ,

                       { 0        if x < −1 ,
    u(x, 0) = ψ(x) =   { 1        if −1 ≤ x < 0 ,    (135)
                       { (1 − x)  if 0 ≤ x < 1 ,
                       { 0        if x ≥ 1 .

The general solution for this problem statement can be obtained as
u(x, t) = ψ(x − u t). Upon drawing the projected characteristic curves, a
conflict of the latter can be observed which results in the formation of a
shock. Both a region of an under-determined characteristic network (expansion
fan) and a region of an over-determined characteristic network (shock) can be
observed based on the given BC.
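For this quasi-linear problem the projected characteristics are the straight lines x(t) = x0 + ψ(x0) t. A short sketch locating where two characteristics launched from the decreasing ramp 0 ≤ x0 < 1 intersect, which marks the shock formation:

```python
def psi(x):
    # Initial profile of the Burgers problem (135)
    if x < -1.0:
        return 0.0
    if x < 0.0:
        return 1.0
    if x < 1.0:
        return 1.0 - x
    return 0.0

def crossing_time(x0, x1):
    """Time at which the straight characteristics from x0 and x1 intersect:
    x0 + psi(x0) t = x1 + psi(x1) t."""
    return (x1 - x0) / (psi(x0) - psi(x1))

t_shock = crossing_time(0.2, 0.8)      # any two points on the ramp work
x_shock = 0.2 + psi(0.2)*t_shock
print(t_shock, x_shock)                # all ramp characteristics meet at (1, 1)
```

Because the speeds on the ramp decrease linearly with x0, every pair of ramp characteristics meets at the same point (x, t) = (1, 1); conversely, the upward jump in speed at x0 = −1 leaves a wedge covered by no characteristic, the expansion fan.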

Further Reading & Examples
A lot of practice is recommended for this subject. Examples are readily
available online and in any standard PDE textbook. Additionally, the following
online resources are recommended for further reading:

Title: First-Order Equations: Method of Characteristics
Author: Stanford University
URL: http://www.stanford.edu/class/math220a/handouts/firstorder.pdf

Title: Solving linear and nonlinear partial differential equations by the method of characteristics
Author: Laboratoire Sols Solides Structures Risques
URL: http://geo.hmg.inpg.fr/loret/enseee/maths/enseee-maths-IBVPs-4.pdf

Title: Partial Differential Equations
Author: Viktor Grigoryan
URL: http://www.math.ucsb.edu/grigoryan/124A.pdf

Method of Characteristics - 2nd Order PDE
Assume the following second order PDE:

    r(x, y) ∂²u/∂x² + 2 s(x, y) ∂²u/∂x∂y + t(x, y) ∂²u/∂y² = q(x, y, u(x, y)) .    (136)
Equation (136) omits the first order derivatives but these are not vital and may
easily be included if required. Noting the following two additional equations:

    d(u_x) = u_xx dx + u_xy dy ,
    d(u_y) = u_yx dx + u_yy dy ,    (137)

equations (136) and (137) can be written as a system of the form:



    [ r(x, y)   2 s(x, y)   t(x, y) ] [ u_xx ]   [ q(x, y) ]
    [   dx         dy          0    ] [ u_xy ] = [  du_x   ] .    (138)
    [   0          dx          dy   ] [ u_yy ]   [  du_y   ]

Finally, noting that there MUST NOT be a unique solution for u_xx, u_xy and
u_yy, it follows that the coefficient matrix must be singular, which occurs iff:

    r(x, y) (dy/dx)² − 2 s(x, y) (dy/dx) + t(x, y) = 0 .    (139)
The two roots of equation (139) are:
    dy/dx = ( s(x, y) ± √( s(x, y)² − r(x, y) t(x, y) ) ) / r(x, y) ,    (140)
and these constitute a pair of differential equations which lead to the
projected characteristic curves. Three fundamentally different behaviours of
the PDE result depending on the discriminant in equation (140).

1. Hyperbolic PDE:   s(x, y)² > r(x, y) t(x, y)   Real Distinct Roots
2. Parabolic PDE:    s(x, y)² = r(x, y) t(x, y)   Real Repeated Roots
3. Elliptical PDE:   s(x, y)² < r(x, y) t(x, y)   Complex Conjugate Roots
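The classification above is purely algebraic, so it is easy to encode. A small sketch evaluating the discriminant of equation (140) pointwise, with r, s, t as in the notes:

```python
def classify(r, s, t):
    """Classify the second order PDE (136) from the discriminant s^2 - r t."""
    disc = s*s - r*t
    if disc > 0:
        return "hyperbolic"
    if disc == 0:
        return "parabolic"
    return "elliptical"

# Wave equation u_tt - c^2 u_xx = 0 (c = 2):  r = 1, s = 0, t = -c^2
print(classify(1.0, 0.0, -4.0))
# Heat equation u_t - u_xx = 0 has no u_tt term:  r = 1, s = 0, t = 0
print(classify(1.0, 0.0, 0.0))
# Laplace's equation u_xx + u_yy = 0:  r = 1, s = 0, t = 1
print(classify(1.0, 0.0, 1.0))
```

For variable coefficient problems such as equation (141), the same function can be called at each point of the domain, since the type may change locally.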
Variable Type PDE: Regions of Dependence & Influence
Elliptical PDE, such as Laplace's equation, constitute Boundary Value
Problems, where the solution at one specific point in the domain depends on
the current solution everywhere in the domain. Conversely, in hyperbolic PDE,
e.g. the wave equation, information travels (for time dependent problems) at
a finite speed through the domain, so that the solution at a point has a
finite part of the domain upon which it is dependent, the Region of
Dependence. Similarly, the solution at this point will only affect parts of
the domain, the Region of Influence. Parabolic PDE, e.g. the heat equation,
are similar to hyperbolic except that the information propagates at infinite
speed. Parabolic and hyperbolic PDE are suited for time-marching schemes and
are referred to as Initial Value Problems. For complex PDE, the classification
into hyperbolic, parabolic or elliptical may not be trivial.
A second order PDE, which is not a constant coefficient PDE, can change its
type throughout the simulation history or spatial domain. For example, the
steady Euler equation with irrotational flow, ∇φ = u, can be expressed as:

    (1 − M²) ∂²φ/∂s² + ∂²φ/∂n² = 0 ,    (141)

where M is the Mach Number, and s and n are coordinates along and normal to a
streamline respectively. This PDE is elliptical in the sub-sonic, parabolic
at the sonic and hyperbolic in the super-sonic regime.
Canonical Forms & Representative PDE
Each type of PDE can, upon a transformation into characteristic variables ξ
and η, be expressed in its canonical form. It also follows that each PDE can
be expressed in a form similar to either the heat, wave or Laplace's equation.

PDE Type     Characteristic Families                 Representative PDE
Hyperbolic   2 distinct real ξ, η                    Wave:    u_ξη = 0
Parabolic    1 real ξ (with η of choice)             Heat:    u_ηη = 0
Elliptical   2 complex conjugates ξ = v + i w,       Laplace: u_vv + u_ww = 0
             η = ξ* (complex conjugate)

Wave Equation
The wave equation is of particular interest as a representative PDE. It features
two distinct families of real characteristic curves which upon projection into the
base space form a projected characteristic grid or network. In general the wave
equation PDE (without any BC/IC), with a propagation speed c, can be stated as:

    u_tt − c² u_xx = 0 ,    (142)

where r = 1, s = 0, t = −c², q = 0, and hence the gradients of the projected
characteristic curves are:
    dx/dt = ±c ,    (143)

from where it follows that we have two families of straight lines:

    x = c t + k1   and   x = −c t + k2 ,    (144)

which can be used to define new variables:

    ξ = x − c t = k1   and   η = x + c t = k2 .    (145)

The equivalent differential operators can be deduced via the chain rule as:

    u_x(ξ(x, t), η(x, t)) = ∂u/∂x = ∂u/∂ξ ∂ξ/∂x + ∂u/∂η ∂η/∂x ,    (146)

    ∂/∂x = ∂/∂ξ + ∂/∂η ,    (147)
and similarly for the time variable as:

    u_t(ξ(x, t), η(x, t)) = ∂u/∂t = ∂u/∂ξ ∂ξ/∂t + ∂u/∂η ∂η/∂t ,    (148)

    ∂/∂t = −c ∂/∂ξ + c ∂/∂η .    (149)
The higher order operators can then be obtained by recursion as:

    ∂²/∂x² = ∂²/∂ξ² + 2 ∂²/∂ξ∂η + ∂²/∂η² ,    (150)

    ∂²/∂t² = c² ∂²/∂ξ² − 2c² ∂²/∂ξ∂η + c² ∂²/∂η² ,    (151)
such that finally, the initial wave equation, equation (142), in canonical form is:

    (∂²/∂t² − c² ∂²/∂x²)(u) = c² u_ξξ − 2c² u_ξη + c² u_ηη − c² u_ξξ − 2c² u_ξη − c² u_ηη = −4c² u_ξη = 0 ,    (152)

which simplifies to:

    ∂²u/∂ξ∂η = u_ξη = 0 .    (153)

Equation (153) can easily be integrated to the general solution:

    u(ξ, η) = f(ξ) + g(η) ,    (154)

which may also be expressed in the original variables as:

    u(x, t) = f(x − c t) + g(x + c t) ,    (155)

where f and g are arbitrary functions which depend on the given BC/IC.
Given initial conditions at time t = 0, in the form of an initial wave profile
u(x, 0) = h(x) and an initial velocity profile u_t(x, 0) = v(x), equation
(155) has a final solution, referred to as d'Alembert's Solution, which takes
the form:

    u(x, t) = (1/2) h(x + c t) + (1/2) h(x − c t) + (1/(2c)) ∫_{x−ct}^{x+ct} v(s) ds .    (156)
Equation (156) shares strong similarities with the advection equation. The
first two terms are two half-amplitude initial wave profiles travelling in
opposite directions. This interpretation is further visualised by noting that
the wave equation, equation (142), can be factorised into two advection
equations:

    (∂/∂t − c ∂/∂x)(∂/∂t + c ∂/∂x) u(x, t) = 0 .    (157)
An inhomogeneous wave equation solution (i.e. non-zero source q) exists:

    u(x, t) = (1/2) h(x + c t) + (1/2) h(x − c t) + (1/(2c)) ∫_{x−ct}^{x+ct} v(s) ds
              + (1/(2c)) ∫_0^t ∫_{x−c(t−t_i)}^{x+c(t−t_i)} q(x_i, t_i) dx_i dt_i .    (158)
Laplace's Equation in 2D Incompressible & Irrotational Flow
Laplace's equation is fundamental for Inviscid Potential Flow Theory. Assume
a 2D irrotational flow such that the velocity field, u = [u v]^T, can be
expressed as the gradient of a scalar field, i.e. u = ∇φ. The irrotational
characteristic of the flow is inherently guaranteed, as ∇×(∇φ) = 0 is a
vector identity which always holds. The incompressibility condition results in:

    ∇·u = 0 = ∂u/∂x + ∂v/∂y ,    (159)

where the velocity vector components u and v are in turn given by:

    u = ∂φ/∂x   and   v = ∂φ/∂y ,    (160)

such that the field, φ, is a harmonic function, as it satisfies Laplace's equation:

    ∂²φ/∂x² + ∂²φ/∂y² = 0 .    (161)

Additionally, introduce a second scalar field, ψ, such that:

    u = ∂ψ/∂y   and   v = −∂ψ/∂x .    (162)
This new scalar field must obey both the incompressible and irrotational
nature of the flow. It can be seen immediately that the incompressibility
condition holds, while for the irrotational aspect it is required that:

    ∇×u = 0 = ∂v/∂x − ∂u/∂y ,    (163)

which upon substitution with the new scalar field, ψ, becomes:

    ∂v/∂x − ∂u/∂y = −(∂²ψ/∂x² + ∂²ψ/∂y²) = 0 .    (164)

Both fields must satisfy Laplace's equation, but furthermore it can be stated:

    ∂φ/∂x = ∂ψ/∂y (= u)   and   ∂φ/∂y = −∂ψ/∂x (= v) .    (165)

Equations (165) are referred to as the Cauchy-Riemann conditions for any
complex function of the type:

    Φ = φ + i ψ ,    (166)

to be analytic and hence differentiable. This also implies that φ and ψ are
not only both harmonic functions, but also conjugates of each other. This
complex potential, Φ, has an imaginary part, ψ, referred to as the stream
function, which can be used to draw streamlines in a flow field, i.e. the
trajectories of particles in the velocity field. Because the streamlines are
always tangent to the velocity vector field, no mass flow passes through them.
The stream function is constant along a streamline, and the difference between
two adjacent streamlines gives the volumetric flow rate through a line which
joins these two streamlines. The real part, φ, referred to as the velocity
potential, can be used to draw the equipotential lines in the flow, which are
perpendicular to the streamlines everywhere in the domain. Equation (160)
shows how the velocity field is simply the gradient of the velocity potential.
A conformal mapping is a function of the form w = f(z), where z is a complex
variable and where the function f is both analytic and has a non-vanishing
derivative, i.e. f′(z) ≠ 0. Analytic functions are also referred to as
holomorphic functions. Depending on the choice of the conformal map, geometric
objects such as lines or circles in one domain can be mapped to different
shapes in another domain (e.g. circles to lines). However, the key aspect of a
conformal map is that it is angle-preserving, which implies that after mapping
through a conformal map, the streamlines and equipotential lines will remain
perpendicular relative to each other.
The potentials φ and ψ are harmonic functions and remain harmonic under a
conformal mapping. Hence Laplace's equation can be solved in one domain with
boundary conditions applied along simpler geometries, before being mapped to
a different domain with more difficult shapes. It is considerably simpler to
solve the fluid flow past a circular cylinder in one domain, and then map this
cylinder to an airfoil-like shape using the Joukowsky Transformation, which is
characterised by the conformal map w = f(z) = z + z^(−1).

Further Reading & Examples
Further practice is recommended, with examples being readily available online
and in any standard PDE textbook. Additionally, the following online resources
are recommended for further reading:

Title: Classification of PDEs and Related Properties
Author: Antonius Otto
URL: http://how.gi.alaska.edu/ao/sim/chapters/chap3.pdf

Title: Chapter 16 Partial differential equations
Author: Professor C. D. Cantrell
URL: http://www.utdallas.edu/cantrell/ee6481/lectures/pde.pdf

Title: Partial Differential Equations
Author: Viktor Grigoryan
URL: http://www.math.ucsb.edu/grigoryan/124A.pdf

Separation of Variables
This method assumes the solution to a PDE, u(x, y), to be decomposed as:

    u(x, y) = f(x) g(y) .    (167)

Consider the 1D diffusive heat equation in a finite bar of length L, stated as:

    ∂T/∂t = ∂²T/∂x² ,
    BC: T(0, t) = 0 and T(L, t) = 0 ,   IC: T(x, 0) = φ(x) .    (168)

It is assumed that the solution takes the form in equation (167) (with y = t),
such that upon substitution into the PDE it results that:

    f(x) g_t(t) = f_xx(x) g(t) ,    (169)

which may be rearranged and expressed as:

    g_t(t)/g(t) = f_xx(x)/f(x) = −c₁² .    (170)

Two separate equations are hence to be solved:

    g_t(t)/g(t) = −c₁²   and   f_xx(x)/f(x) + c₁² = 0 ,    (171)
which, upon using the BC only, results in the following solutions:

    g_n(t) = b_n e^(−n²π²t/L²) ,   f_n(x) = a_n sin(nπx/L)   where   c_n = nπ/L ,  n ∈ ℕ .    (172)
Hence the final result can be stated as:

    T(x, t) = Σ_{n=1}^∞ d_n sin(nπx/L) e^(−n²π²t/L²) ,   with d_n = a_n b_n ,    (173)

where the coefficients d_n can be found by a Fourier series integral on the IC,

    T(x, 0) = φ(x) = Σ_{n=1}^∞ d_n sin(nπx/L) ,    (174)

which can be expressed as:

    d_n = (2/L) ∫_0^L φ(x) sin(nπx/L) dx .    (175)

Note the high decay rate, ∝ n², of high frequencies: the heat equation quickly
dampens high frequency signals, while low frequency components decay
considerably slower. Both the wave and Laplace's equation can be solved in an
analogous manner, but only for selected geometries and boundary conditions.
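The separated solution above can be sketched numerically: compute the sine coefficients d_n of equation (175) by quadrature for an assumed initial profile φ(x) (here x(L − x), a hypothetical choice), then sum the truncated series (173). The n² decay rate makes the profile smooth out rapidly.

```python
import math

L = 1.0
phi = lambda x: x*(L - x)          # assumed IC, zero at both ends of the bar

def d(n, m=2000):
    """Fourier sine coefficient d_n of (175) via a simple Riemann sum
    (the integrand vanishes at both endpoints)."""
    dx = L/m
    s = sum(phi(i*dx)*math.sin(n*math.pi*i*dx/L) for i in range(1, m))
    return 2.0/L * s * dx

def T(x, t, nmax=50):
    """Truncated series solution (173)."""
    return sum(d(n)*math.sin(n*math.pi*x/L)*math.exp(-n*n*math.pi**2*t/L**2)
               for n in range(1, nmax + 1))

ic_err = abs(T(0.5, 0.0) - phi(0.5))   # series reproduces the IC at t = 0
decay = T(0.5, 0.2) / T(0.5, 0.0)      # amplitude ratio after a short time
print(ic_err, decay)
```

By t = 0.2 the solution is essentially the n = 1 mode scaled by e^(−π²·0.2) ≈ 0.14; all higher modes have been damped orders of magnitude more strongly, exactly the frequency-selective decay noted above.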