Lecture Notes
Course Content
Lecture Topic Lecturer
1 Function Expansions & Transforms S.-R. Shamsuddin
2-3 Vector Spaces, Vector Fields & Operators S.-R. Shamsuddin
4-5 Linear Algebra, Matrices, Eigenvectors S.-R. Shamsuddin
6-7 Vector Calculus Integral Theorems P. Sareh
8-9-10 Partial Differential Equations P. Sareh
Introductory Mathematics
0-What Is Mathematics?
Different schools of thought, particularly in philosophy, have put forth radically different
definitions of mathematics. All are controversial and there is no consensus.
1-Field of Mathematics
Mathematics can, broadly speaking, be subdivided into the study of quantity, structure,
space, and change (i.e. arithmetic, algebra, geometry, and analysis). In addition to these
main concerns, there are also subdivisions dedicated to exploring links from the heart
of mathematics to other fields: to logic, to set theory (foundations), to the empirical
mathematics of the various sciences (applied mathematics), and more recently to the
rigorous study of uncertainty.
When I was an undergraduate, the majors in our mathematics department included pure mathematics, applied mathematics, statistics, and computational mathematics.
When I was a master's student, I learned that the directions in pure mathematics include topology, algebra, number theory, differential equations and dynamical systems, differential geometry, and functional analysis.
2-Mathematical awards
Arguably the most prestigious award in mathematics is the Fields Medal, established
in 1936 and now awarded every four years. The Fields Medal is often considered a
mathematical equivalent to the Nobel Prize.
The Wolf Prize in Mathematics, instituted in 1978, recognizes lifetime achievement,
and another major international award, the Abel Prize, was introduced in 2003. The
Chern Medal was introduced in 2010 to recognize lifetime achievement. These accolades
are awarded in recognition of a particular body of work, which may be innovational, or
provide a solution to an outstanding problem in an established field.
A famous list of 23 open problems, called Hilbert's problems, was compiled in 1900 by the German mathematician David Hilbert. This list achieved great celebrity among mathematicians, and at least nine of the problems have now been solved. A new list of seven
important problems, titled the Millennium Prize Problems, was published in 2000. A
solution to each of these problems carries a $1 million reward, and only one (the Riemann
hypothesis) is duplicated in Hilbert's problems.
3-Mathematics in aeronautics
Mathematics in aeronautics includes calculus, differential equations, and linear algebra,
etc.
4-Calculus [1]
Calculus has been an integral part of man's intellectual training and heritage for the last twenty-five hundred years. Calculus is the mathematical study of change, in the same way that geometry is the study of shape and algebra is the study of operations and their application to solving equations. It has two major branches, differential calculus (concerning rates of change and slopes of curves) and integral calculus (concerning accumulation of quantities and the areas under and between curves); these two branches are related to each other by the fundamental theorem of calculus. Both branches make use
[1] Extracted from: Boyer, Carl Benjamin. The History of the Calculus and Its Conceptual Development. Courier Dover Publications, 1949.
of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit. Modern calculus is generally considered to have been developed in the 17th century by Isaac Newton and Gottfried Leibniz; today calculus has widespread uses in science, engineering and economics, and can solve many problems that algebra alone cannot.
Differential and integral calculus is one of the great achievements of the human mind. The fundamental definitions of the calculus, those of the derivative and the integral, are now so clearly stated in textbooks on the subject, and the operations involving them are so readily mastered, that it is easy to forget the difficulty with which these basic concepts have been developed. Frequently a clear and adequate understanding of the fundamental notions underlying a branch of knowledge has been achieved comparatively late in its development. This has never been more aptly demonstrated than in the rise of the calculus. The precision of statement and the facility of application which the rules of the calculus early afforded were in a measure responsible for the fact that mathematicians were insensible to the delicate subtleties required in the logical development of the discipline. They sought to establish the calculus in terms of the conceptions found in the traditional geometry and algebra which had been developed from spatial intuition. During the eighteenth century, however, the inherent difficulty of formulating the underlying concepts became increasingly evident, and it then became customary to speak of the metaphysics of the calculus, thus implying the inadequacy of mathematics to give a satisfactory exposition of the bases. With the clarification of the basic notions which, in the nineteenth century, was given in terms of precise mathematical terminology, a safe course was steered between the intuition of the concrete in nature (which may lurk in geometry and algebra) and the mysticism of imaginative speculation (which may thrive on transcendental metaphysics). The derivative has throughout its development been thus precariously situated between the scientific phenomenon of velocity and the philosophical noumenon of motion.
The history of the integral is similar. On the one hand, it has offered ample opportunity for interpretations by positivistic thought in terms either of approximations or of the compensation of errors, views based on the admitted approximative nature of scientific measurements and on the accepted doctrine of superimposed effects. On the other hand, it has at the same time been regarded by idealistic metaphysics as a manifestation that beyond the finitism of sensory percipiency there is a transcendent infinite which can be but asymptotically approached by human experience and reason. Only the precision of their mathematical definition, the work of the nineteenth century, enables the derivative and the integral to maintain their autonomous position as abstract concepts, perhaps derived from, but nevertheless independent of, both physical description and metaphysical explanation.
Function Expansions & Transforms
0-Infinite Series
If \{a_k\}, k = 0, 1, \ldots, is a sequence of numbers, the ordered sum of all its terms, namely

\sum_{k=0}^{\infty} a_k = a_0 + a_1 + \cdots,

is called an infinite series.
(1) Partial sums: s_n = \sum_{k=0}^{n} a_k.
(2) Sum of the infinite series: the limit of the sequence of its partial sums.
(4) Series with positive terms; series with both positive and negative terms, etc.
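The definitions above can be illustrated with a minimal sketch, using the geometric series \sum_{k=0}^{\infty} (1/2)^k (an example chosen here, not from the notes), whose partial sums converge to the sum 2:

```python
# Sketch: partial sums s_n of the geometric series sum_{k=0}^inf (1/2)^k.
# The sum of the infinite series is the limit of the partial sums, here 2.
def partial_sum(ratio, n):
    """s_n = sum_{k=0}^{n} ratio**k, the n-th partial sum."""
    return sum(ratio**k for k in range(n + 1))

s = [partial_sum(0.5, n) for n in (0, 1, 2, 20)]
# s_0 = 1.0, s_1 = 1.5, s_2 = 1.75; s_20 is already very close to the limit 2.
```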
1-Taylor Series
Taylor polynomial approximation:

f(x) = p_n(x) + \frac{1}{n!} \int_a^x (x - t)^n f^{(n+1)}(t)\,dt    (1)
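As a numerical sanity check of the integral-remainder form (1), the sketch below uses f(x) = exp(x) with a = 0 and n = 3 (an illustrative choice, not from the notes) and evaluates the remainder integral by the midpoint rule:

```python
import math

# Check of (1) for f = exp, a = 0, n = 3: p_n(x) plus the integral
# remainder should reproduce f(x) = e at x = 1.
def taylor_p(x, a, n):
    """Taylor polynomial of exp about a, degree n."""
    return sum(math.exp(a) * (x - a)**k / math.factorial(k) for k in range(n + 1))

def remainder(x, a, n, steps=10000):
    """(1/n!) * integral_a^x (x - t)**n * f^(n+1)(t) dt via the midpoint rule."""
    h = (x - a) / steps
    total = sum((x - (a + (i + 0.5) * h))**n * math.exp(a + (i + 0.5) * h)
                for i in range(steps))
    return total * h / math.factorial(n)

x, a, n = 1.0, 0.0, 3
approx = taylor_p(x, a, n) + remainder(x, a, n)   # should equal e = f(1)
```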
2-Fourier Series
A Fourier series decomposes a periodic function into a sum of sines and cosines (trigonometric terms or complex exponentials). For a function f(x), periodic on [-L, L], its Fourier series representation is

f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left\{ a_n \cos\frac{n\pi x}{L} + b_n \sin\frac{n\pi x}{L} \right\}    (2)
f(x) can also be expressed as a complex Fourier series

f(x) = \sum_{n=-\infty}^{+\infty} c_n e^{in\pi x/L}    (4)

with c_n = 0.5(a_n - i b_n) for n > 0, c_n = 0.5(a_n + i b_n) for n < 0, and c_0 = 0.5 a_0.
The associated complex Fourier coefficients are given by

c_m = \frac{1}{2L} \int_{-L}^{L} f(x)\, e^{-im\pi x/L}\,dx, \qquad m = 0, \pm 1, \pm 2, \ldots    (5)
Term-by-term integration gives

\int_{-L}^{x} f(t)\,dt = \frac{a_0}{2}(x + L) + \frac{L}{\pi} \sum_{n=1}^{\infty} \frac{1}{n} \left[ a_n \sin\frac{n\pi x}{L} - b_n \left( \cos\frac{n\pi x}{L} - \cos n\pi \right) \right]

Parseval's equality:

\frac{1}{L} \int_{-L}^{L} |f(x)|^2\,dx = 0.5\, a_0^2 + \sum_{n=1}^{\infty} \left( a_n^2 + b_n^2 \right)    (6)
If (1) f(x) is continuous and f'(x) is piecewise continuous on [-L, L], (2) f(-L) = f(L), and (3) f''(x) exists at x in (-L, L), then

f'(x) = \sum_{n=1}^{\infty} \frac{n\pi}{L} \left[ -a_n \sin\frac{n\pi x}{L} + b_n \cos\frac{n\pi x}{L} \right].
whereas if it is extended as an odd periodic function (a_n = 0), its Fourier series representation is

f(x) = \sum_{n=1}^{\infty} b_n \sin\frac{n\pi x}{L}, \qquad b_n = \frac{2}{L} \int_{0}^{L} f(x) \sin\frac{n\pi x}{L}\,dx    (8)
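The sine-series coefficients of (8) can be checked numerically. The sketch below uses f(x) = x on [0, L] (an illustrative choice, not from the notes), for which the closed form is b_n = 2L(-1)^{n+1}/(n\pi):

```python
import math

# Sketch of (8) for f(x) = x on [0, L]: compare midpoint-rule integration
# against the closed-form coefficients, then evaluate the truncated series.
L = 1.0

def b_numeric(n, steps=20000):
    """b_n = (2/L) * integral_0^L x*sin(n*pi*x/L) dx, by the midpoint rule."""
    h = L / steps
    total = sum((i + 0.5) * h * math.sin(n * math.pi * (i + 0.5) * h / L)
                for i in range(steps))
    return (2.0 / L) * total * h

def b_exact(n):
    return 2.0 * L * (-1) ** (n + 1) / (n * math.pi)

def f_series(x, terms=200):
    """Truncated sine (odd-extension) Fourier series of f(x) = x."""
    return sum(b_exact(n) * math.sin(n * math.pi * x / L)
               for n in range(1, terms + 1))
```

At an interior point such as x = 0.5 the truncated series approaches f(x) = x as more terms are kept.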
3-Integral transform
An integral transform is any transform of the following form:

F(m) = \int_{x_1}^{x_2} K(m, x)\, f(x)\,dx    (9)
4-Galerkin Expansion
For a system

\frac{\partial \phi}{\partial t} = L\phi + N(\phi), \qquad L\phi_j = \lambda_j \phi_j,

assume that

\phi = \sum_j a_j(t)\, \phi_j(x),

so that

\frac{d a_j}{dt} = \lambda_j a_j + F_j(a_l).
The particular flow we select is a version of the famous rotating Couette flow between
two co-axial cylinders. The gap between the cylinders is assumed to be much smaller
4
than the cylinder radius. A local Cartesian coordinate system x = (x, y, z)T is oriented
such that the axis of rotation is parallel to the z axis, while the circumferential direction
corresponds to the x axis. Only flows independent of x are considered. The flow velocity
Figure 1: The rotating Couette flow
For simplicity, the flow is assumed to be 2\pi-periodic in y and z; u and v are assumed to be odd in y and even in z, while w is assumed odd in z and even in y:
Preliminary analysis of the stability properties of the flow
Our interest lies in the global stability of this flow and its asymptotic convergence.
We first apply the well-known energy stability approach. Setting the Lyapunov functional as the perturbation energy E = \|u\|^2/2 leads to a linear eigenvalue problem. For (13)-(14), the resulting eigenfunctions e_{n,m}(x) can easily be found:
e_{n,m}(x) = \left( \frac{\cos(mz)\sin(ny)}{\sqrt{2}\,\pi},\;\; \frac{m\cos(mz)\sin(ny)}{\sqrt{2}\,\pi\sqrt{m^2+n^2}},\;\; -\frac{n\sin(mz)\cos(ny)}{\sqrt{2}\,\pi\sqrt{m^2+n^2}} \right)^T    (15)

where n = 1, 2, \ldots, m = 0, 1, 2, \ldots. The corresponding eigenvalues are

\lambda_{n,m}(Re) = \frac{m}{2\sqrt{m^2+n^2}} - \frac{m^2+n^2}{Re}.    (16)
Note that Re_L = Re_E for \Omega = 1/2. Moreover, for \Omega = 0 and \Omega = 1, it can be proven that this flow is globally stable for any Re. For other values of \Omega, we solved (13)-(14) numerically for a variety of initial conditions and observed convergence to the base flow for all Re < Re_L. This, of course, does not exhaust all possible initial conditions, and it does not eliminate the existence of unstable solutions not tending to the base flow with time. Hence, rigorously establishing global stability in the range Re_E < Re < Re_L is of interest.
Next, with the aid of SOS optimization, we analyze the global stability of the periodic
rotating Couette flow (13-14). To this end, we first reduce (13)-(14) to an uncertain
dynamical system.
where the finite Galerkin basis fields u_i, i = 1, \ldots, k, are an orthonormal set of solenoidal vector fields with an appropriate inner product, the residual perturbation velocity u_s is solenoidal and orthogonal to all the u_i, and both u_i and u_s satisfy the boundary condition (14) of the Couette flow. We substitute (18) into the flow equation (13), and take the inner product with each of the Galerkin basis fields u_i for i = 1, \ldots, k. After some straightforward manipulation this yields
\frac{da}{dt} = f(a) + \Gamma_a(u_s) + \Gamma_b(u_s, a) + \Gamma_c(u_s)    (19)
where a = (a_1, \ldots, a_k)^T, and the components of f, \Gamma_a, \Gamma_b, and \Gamma_c are

f_i(a) \triangleq L_{ij} a_j + N_{ijk} a_j a_k,    (20)

\Gamma_{a,i}(u_s) \triangleq \langle u_s, g_i \rangle,    (21)

\Gamma_{b,i}(u_s, a) \triangleq \langle u_s, h_{ij} \rangle\, a_j,    (22)

\Gamma_{c,i}(u_s) \triangleq \langle u_s, u_s \cdot \nabla u_i \rangle.    (23)
Einstein summation notation (summation over repeated indices) is used in the above equations. The inner product \langle w_1, w_2 \rangle is the integral of w_1 \cdot w_2 over the flow domain V = \{(y, z) \mid 0 \le y \le 2\pi,\; 0 \le z \le 2\pi\}. The second-order tensor L and the third-order tensor N are defined component-wise as
L_{ij} \triangleq \frac{1}{Re} \langle u_i, \nabla^2 u_j \rangle + \langle u_i, A u_j \rangle,    (24)

N_{ijk} \triangleq \langle u_i, u_j \cdot \nabla u_k \rangle,    (25)

g_i \triangleq \frac{1}{Re} \nabla^2 u_i - \bar{u} \cdot \nabla u_i - u_i \cdot \nabla \bar{u},    (26)

h_{ij} \triangleq -u_j \cdot \nabla u_i - u_i \cdot \nabla u_j.    (27)
where \bar{u} is the steady flow whose stability is studied. For the periodic Couette flow, \bar{u} = (y, 0, 0)^T. The notation used can be clarified by the Einstein equivalent of (26): g_i^m = \frac{1}{Re} \nabla^2 u_i^m - u_i^k \frac{\partial \bar{u}^m}{\partial x^k} - \bar{u}^k \frac{\partial u_i^m}{\partial x^k}, where g_i^m, u_i^m, x^m are the m-th components,
where

\theta(u_s, a) \triangleq \Gamma_a(u_s) + \Gamma_b(u_s, a) + \Gamma_c(u_s),    (29)

\Phi(u_s) \triangleq \frac{1}{Re} \langle u_s, \nabla^2 u_s \rangle - \langle u_s,\; u_s \cdot \nabla\bar{u} + \bar{u} \cdot \nabla u_s \rangle,    (30)

\Psi(u_s, a) \triangleq 2 \langle u_s, d_j \rangle\, a_j, \qquad d_j \triangleq \frac{1}{Re} \nabla^2 u_j - \left( u_j \cdot \nabla\bar{u} + \bar{u} \cdot \nabla u_j \right).    (31)
Note that the terms a \cdot f(a) and \Phi(u_s) in (28) represent the self-contained dissipation or generation of energy depending on a_i u_i and u_s, while the term \Psi(u_s, a) denotes the generation or dissipation of energy arising from the interaction of these velocity fields.
Overall, the periodic rotating Couette flow under consideration can be described by
the system
In (32) and (33), an important fact is that the evolution of the dynamical system depends on u_s via the perturbation terms \theta(u_s, a), \Phi(u_s), and \Psi(u_s, a). This means that for a given q^2 > 0, there exist many u_s satisfying \|u_s\|^2/2 = q^2, producing different right-hand sides of (32) and (33). In this sense, (32)-(33) is an uncertain dynamical system. The solution of this system is therefore not unique. However, if all the solutions of (32)-(33) tend to zero as time tends to infinity, then the solution of the Navier-Stokes system also tends to zero.
\nabla \cdot u = 0, \qquad u(\pm\pi, z) = 0,    (35)

where p is the Lagrange multiplier for the incompressibility condition, and \lambda is the Lagrange multiplier for the unit-norm condition \|u\| = 1.
Considering the 2\pi-periodic property of the flow in z, without loss of generality, we assume that the energy eigenfunctions u and the Lagrange multiplier take the following form:

u = \sum_{m=-\infty}^{\infty} \bar{u}_m(y) \cos(mz),

v = \sum_{m=-\infty}^{\infty} \bar{v}_m(y) \cos(mz),    (36)

w = \sum_{m=-\infty}^{\infty} \bar{w}_m(y) \sin(mz),

p = \sum_{m=-\infty}^{\infty} \bar{p}_m(y) \cos(mz).
Substituting (36) into (34) and (35) gives:

(i) m \neq 0:

(D^2 - m^2 - \lambda Re)^2 (D^2 - m^2)\, \bar{v}_m = -\frac{m^2 Re^2}{4}\, \bar{v}_m,    (37)

\bar{v}_m = D\bar{v}_m = (D^2 - m^2 - \lambda Re)(D^2 - m^2)\, \bar{v}_m = 0, \qquad y = \pm\pi,    (38)

and

\bar{w}_m = -\frac{1}{m} D\bar{v}_m, \qquad \bar{u}_m = \frac{2}{m^2 Re} (D^2 - m^2)(D^2 - m^2 - \lambda Re)\, \bar{v}_m,

where D is the differential operator d/dy.

(ii) m = 0:

(D^2 - \lambda Re)\, \bar{u}_0 = 0, \qquad \bar{u}_0(\pm\pi) = 0, \qquad \bar{v}_0 = \bar{w}_0 = 0.    (39)
where n = 1, 2, \ldots is the mode number. For the former case, solving the energy eigenvalue problem is equivalent to solving the 6th-order ODE (37) subject to the boundary conditions (38), which, however, is hard to solve analytically.
Notice that the onset of energy instability in Reynolds number, denoted by Re_E, is determined by (37)-(38) with \lambda = 0, i.e.,

(D^2 - m^2)^3\, \bar{v}_m = -\frac{m^2 Re_E^2}{4}\, \bar{v}_m,    (40)

\bar{v}_m = D\bar{v}_m = (D^2 - m^2)^2\, \bar{v}_m = 0, \qquad y = \pm\pi.    (41)
In the following, the exact solution of the problem (40)-(41) is exploited. The even and odd solutions of (40)-(41) can be written in the forms

\bar{v}_{m,e} = \sum_{i=1}^{3} A_i \cosh(q_i y)    (42)

and

\bar{v}_{m,o} = \sum_{i=1}^{3} B_i \sinh(q_i y),    (43)

where the q_i are the roots of

(q^2 - m^2)^3 = -\frac{m^2 Re_E^2}{4}.    (44)
The higher modes can of course be obtained from these solutions, but here our interests
only lie in the first even and odd modes of system instability. If, in place of ReE , we
introduce the quantity r that satisfies the relationship

\frac{m^2 Re_E^2}{4} = m^6 r^3,    (45)

then the roots of (44) can be written down explicitly in the form

q_1 = i m (r - 1)^{1/2}, \qquad q_2 = m(A - iB), \qquad q_3 = m(A + iB),    (46)

and

\bar{v}_{m,o} = \sin\!\left( m (r-1)^{1/2}\, y \right) + S_1 \sinh(mAy)\cos(mBy) + S_2 \cosh(mAy)\sin(mBy).    (48)
Since the boundary conditions (41) are homogeneous, Re_E can be obtained by solving a characteristic value problem, regarding it as a function of the mode number m in the z-direction. More precisely, applying the boundary conditions (41) to (47) or (48) gives three linear homogeneous equations for the constants A_i or B_i. If those constants are not to vanish identically, then the determinant of the system must vanish, yielding

(r - 1)^{1/2} \tan\!\left( m\pi (r-1)^{1/2} \right) = \frac{(A + \sqrt{3}B)\sinh(2\pi m A) + (\sqrt{3}A - B)\sin(2\pi m B)}{\cosh(2\pi m A) + \cos(2\pi m B)}    (49)
for the odd mode. Equations (49) and (50) are transcendental equations relating m and r, and thus can only be solved numerically. We then find that the onset of energy instability of the flow is determined by the first even mode at m = 1, which corresponds to r = 1.3441.
Considering the relationship (45), the energy stability limit is Re = Re_E, where

Re_E := 2 m^2 r^{3/2} = 2 \times 1.3441^{3/2} = 3.1166.
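The arithmetic above, and its consistency with relationship (45), can be checked directly (a minimal sketch using the values m = 1 and r = 1.3441 quoted in the text):

```python
# Check of the energy stability limit Re_E = 2*m**2*r**(3/2) at the first
# even mode m = 1, r = 1.3441 (values from the text), and of the
# relationship (45): m**2 * Re_E**2 / 4 = m**6 * r**3.
m, r = 1, 1.3441
Re_E = 2 * m**2 * r**1.5      # comes out near the quoted 3.1166
lhs_45 = m**2 * Re_E**2 / 4
rhs_45 = m**6 * r**3
```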
Note that, for this flow, conveniently, the energy stability limit is independent of \Omega.
Linear stability of the flow
The linear stability of the flow (13) is the stability of the linearized system of (13), i.e.,

\frac{\partial u}{\partial t} = -\nabla p + \frac{1}{Re} \nabla^2 u + A u, \qquad \nabla \cdot u = 0,    (51)

which can be obtained by solving the following linear eigenvalue problem:

\lambda u = \begin{pmatrix} \frac{1}{Re}\nabla^2 & \Omega - 1 & 0 \\ -\Omega & \frac{1}{Re}\nabla^2 & 0 \\ 0 & 0 & \frac{1}{Re}\nabla^2 \end{pmatrix} u + \begin{pmatrix} 0 \\ -\partial p/\partial y \\ -\partial p/\partial z \end{pmatrix},    (52)

\nabla \cdot u = 0, \qquad u(\pm\pi, z) = 0,    (53)
where \lambda and p are defined as in (34). As in the preceding part, the problem is equivalent to solving
Immediately, one can show that the Couette flow becomes linearly unstable for 0 < \Omega < 1 and

Re > Re_L := \frac{Re_E}{2\sqrt{\Omega(1-\Omega)}} = \frac{3.1166}{2\sqrt{\Omega(1-\Omega)}}.    (56)
Vector Spaces, Vector Fields &
Operators
In the context of physics we are often interested in a quantity or property which varies in a smooth and continuous way over some one-, two-, or three-dimensional region of space. This constitutes either a scalar field or a vector field, depending on the nature of the property. In this chapter we consider the relationship between a scalar field involving a variable potential and a vector field involving a field strength, meaning force per unit mass or charge. The properties of scalar and vector fields are described, together with how they lead to important concepts such as that of a conservative field, and to the important and useful Gauss and Stokes theorems (which will be presented separately). Finally, examples are given to demonstrate the ideas of vector analysis.
There are four types of products involving scalars and vectors. The scalar (dot) product is

A \cdot B = \|A\| \|B\| \cos\theta,
The p-norm of a vector is defined as

\|x\|_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}.

For instance:
(1) \|x\|_1 = \sum_i |x_i|, also called the Manhattan norm because it corresponds to sums of distances along coordinate axes, as one would travel along the rectangular street plan of Manhattan.
(2) \|x\|_2 = \sqrt{\sum_i x_i^2}, also called the Euclidean norm, the Euclidean length, or just the length of the vector.
(3) \|x\|_\infty = \max_i |x_i|, also called the max norm or the Chebyshev norm.
Some relationships between norms:

\|x\|_\infty \le \|x\|_2 \le \sqrt{n}\, \|x\|_\infty, \qquad \|x\|_2 \le \|x\|_1 \le \sqrt{n}\, \|x\|_2.

Define the inner-product-induced norm \|x\| = \sqrt{\langle x, x \rangle}. Then,
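The three norms and the inequalities above can be checked on a sample vector (a minimal sketch; the vector [3, -4, 1] is an illustrative choice, not from the notes):

```python
import math

# The 1-, 2-, and infinity-norms, and a spot-check of the inequalities
# ||x||_inf <= ||x||_2 <= sqrt(n)||x||_inf and ||x||_2 <= ||x||_1 <= sqrt(n)||x||_2.
def norm1(x):
    return sum(abs(v) for v in x)

def norm2(x):
    return math.sqrt(sum(v * v for v in x))

def norminf(x):
    return max(abs(v) for v in x)

x = [3.0, -4.0, 1.0]
n = len(x)
ok = (norminf(x) <= norm2(x) <= math.sqrt(n) * norminf(x)
      and norm2(x) <= norm1(x) <= math.sqrt(n) * norm2(x))
```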
The magnitude of the vector (cross) product is \|A \times B\| = \|A\| \|B\| \sin\theta.
E_x = \frac{q_1 (2 - x)}{4\pi\varepsilon_0 \{(2-x)^2 + y^2 + z^2\}^{3/2}} + \frac{q_2 (2 + x)}{4\pi\varepsilon_0 \{(2+x)^2 + y^2 + z^2\}^{3/2}}

E_y = \frac{q_1 y}{4\pi\varepsilon_0 \{(2-x)^2 + y^2 + z^2\}^{3/2}} + \frac{q_2 y}{4\pi\varepsilon_0 \{(2+x)^2 + y^2 + z^2\}^{3/2}}

E_z = \frac{q_1 z}{4\pi\varepsilon_0 \{(2-x)^2 + y^2 + z^2\}^{3/2}} + \frac{q_2 z}{4\pi\varepsilon_0 \{(2+x)^2 + y^2 + z^2\}^{3/2}}.
where \Omega represents the overall volume domain and \partial\Omega denotes the total surface boundary.
vector field F within which the surface is situated:

\oint_C F \cdot dr = \iint_S (\nabla \times F) \cdot dA.    (64)
6-Repeated operations
Note that grad must operate on a scalar field and gives a vector field in return, div
operates on a vector field and gives a scalar field in return, and curl operates on a vector
field and gives a vector field in return.
div grad \phi = \nabla^2 \phi = \frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi}{\partial y^2} + \frac{\partial^2 \phi}{\partial z^2}    (65)

(1) Spherical coordinates:

\nabla^2 \phi = \frac{1}{r^2} \frac{\partial}{\partial r}\!\left( r^2 \frac{\partial \phi}{\partial r} \right) + \frac{1}{r^2 \sin\theta} \frac{\partial}{\partial \theta}\!\left( \sin\theta \frac{\partial \phi}{\partial \theta} \right) + \frac{1}{r^2 \sin^2\theta} \frac{\partial^2 \phi}{\partial \varphi^2}

(2) Two-dimensional polar coordinates:

\nabla^2 \phi = \frac{\partial^2 \phi}{\partial r^2} + \frac{1}{r} \frac{\partial \phi}{\partial r} + \frac{1}{r^2} \frac{\partial^2 \phi}{\partial \theta^2}

(3) Cylindrical coordinates:

\nabla^2 \phi = \frac{\partial^2 \phi}{\partial r^2} + \frac{1}{r} \frac{\partial \phi}{\partial r} + \frac{1}{r^2} \frac{\partial^2 \phi}{\partial \theta^2} + \frac{\partial^2 \phi}{\partial z^2}
7-Product rules

grad(\phi\psi) = \phi\, grad\, \psi + \psi\, grad\, \phi    (70)

div(\phi A) = \phi\, div\, A + A \cdot grad\, \phi    (71)

curl(\phi A) = \phi\, curl\, A + (grad\, \phi) \times A    (72)

div(A \times B) = B \cdot curl\, A - A \cdot curl\, B    (73)
Linear Algebra, Matrices,
Eigenvectors
In many practical systems there naturally arises a set of quantities that can conveniently be represented as a rectangular array, referred to as a matrix. If matrices were simply a way of representing arrays of numbers, they would have only marginal utility as a means of visualizing data. However, a whole branch of mathematics has evolved, involving the manipulation of matrices, which has become a powerful tool for the solution of many problems.
For instance, consider the set of n linear equations with n unknowns:

a_{11} Y_1 + a_{12} Y_2 + \cdots + a_{1n} Y_n = 0
a_{21} Y_1 + a_{22} Y_2 + \cdots + a_{2n} Y_n = 0    (74)
\cdots
a_{n1} Y_1 + a_{n2} Y_2 + \cdots + a_{nn} Y_n = 0
The necessary and sufficient condition for the set to have a non-trivial solution (other
than Y1 = Y2 = ... = Yn = 0) is that the determinant of the array of coefficients is zero:
det(A) = 0.
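The condition can be illustrated with a minimal 2x2 sketch (the particular matrix below is an illustrative choice, not from the notes):

```python
# For the 2x2 homogeneous system A @ Y = 0, a nontrivial solution exists
# exactly when det(A) = a11*a22 - a12*a21 = 0 (here the rows are proportional).
def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

A = [[1.0, 2.0], [2.0, 4.0]]       # det = 1*4 - 2*2 = 0
Y = [2.0, -1.0]                    # a nontrivial solution
residual = [sum(A[i][j] * Y[j] for j in range(2)) for i in range(2)]
# residual is the zero vector, confirming A @ Y = 0 with Y != 0.
```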
A_1 \oplus \cdots \oplus A_k = diag(A_1, \ldots, A_k).

(3) Trace: tr(A) = \sum_i a_{ii}.

I - A^k = (I - A)(I + A + \cdots + A^{k-1}).
I + A^k = (I + A)(I - A + \cdots + A^{k-1})  (for odd k).

|AB| = |A||B|, \qquad \begin{vmatrix} A & 0 \\ I & B \end{vmatrix} = |A||B|.

Kronecker product:

A \otimes B = \begin{pmatrix} a_{11}B & \cdots & a_{1m}B \\ \vdots & \ddots & \vdots \\ a_{n1}B & \cdots & a_{nm}B \end{pmatrix}
(aA) \otimes (bB) = ab(A \otimes B) = (abA) \otimes B = A \otimes (abB)

(A + B) \otimes C = A \otimes C + B \otimes C, \qquad (A \otimes B) \otimes C = A \otimes (B \otimes C).
If the rank of a matrix is the same as its smaller dimension, we say the matrix is of full
rank. In the case of a nonsquare matrix, we may say the matrix is of full row rank or full
column rank just to emphasize which is the smaller number.
Linear systems
A linear system A_{n \times m}\, x = b, for which a solution exists, is said to be consistent; otherwise, it is inconsistent. The system is consistent if and only if

rank([A \mid b]) = rank(A),    (75)

namely, the space spanned by the columns of A is the same as that spanned by the columns of A and the vector b; therefore, b must be a linear combination of the columns of A. A special case that yields (75) for any b is

rank(A_{n \times m}) = n,

and so if A is of full row rank, the system is consistent regardless of the value of b. In this case, of course, the number of rows of A must be no greater than the number of columns. A square system in which A is nonsingular is clearly consistent, and the solution is x = A^{-1} b.
(1) If C is positive definite and A is of full column rank, then A^T C A is positive definite.

(AB)^{-1} = B^{-1} A^{-1}.
A(I + A)^{-1} = (I + A^{-1})^{-1}.
(A + BB^T)^{-1} B = A^{-1} B (I + B^T A^{-1} B)^{-1}.
A^{-1} + B^{-1} = A^{-1}(A + B)B^{-1}.
(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}.
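The first identity, (AB)^{-1} = B^{-1}A^{-1}, can be spot-checked on a pair of 2x2 matrices (a minimal sketch; the particular matrices are illustrative choices, not from the notes):

```python
# Spot-check of (AB)^(-1) = B^(-1) A^(-1) for 2x2 matrices, using the
# closed-form 2x2 inverse.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(A):
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # determinant (assumed nonzero)
    return [[A[1][1] / d, -A[0][1] / d],
            [-A[1][0] / d, A[0][0] / d]]

A = [[2.0, 1.0], [1.0, 3.0]]
B = [[1.0, 4.0], [0.0, 2.0]]
lhs = inv2(matmul(A, B))           # (AB)^(-1)
rhs = matmul(inv2(B), inv2(A))     # B^(-1) A^(-1); should match lhs
```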
3-Eigensystems
Definitions
2- The eigenvalues of a symmetric matrix A are the numbers \lambda that satisfy |A - \lambda I| = 0.
3- The eigenvectors of a symmetric matrix are the vectors x that satisfy (A - \lambda I)x = 0.
Theorems
1-The eigenvalues of any real symmetric matrix are real.
2-The eigenvectors of any real symmetric matrix corresponding to different eigenvalues
are orthogonal.
3-Diagonalisation of symmetric matrices
Definitions
1- An orthogonal matrix U is a real square matrix such that U^T U = U U^T = I.
2- If U is a real orthogonal matrix of order n \times n and A is a real matrix of the same order, then U^T A U is called the orthogonal transform of A.
3-
Theorem
1- If A is a real symmetric matrix of order n \times n then it is possible to find an orthogonal matrix U of the same order such that the orthogonal transform of A with respect to U is diagonal, and the diagonal elements of the transform are the eigenvalues of A.
2- (U^T A U)^m = U^T A^m U
-Cayley-Hamilton Theorem: A real square symmetric matrix satisfies its own characteristic equation (i.e. its own eigenvalue equation),
where
-Trace Theorem: The sum of the eigenvalues of a matrix A is equal to the sum of the diagonal elements of A and is denoted Tr(A).
-Determinant Theorem: The product of the eigenvalues of A is equal to the determinant
of A.
4-Matrix Factorizations
Matrices can be factored in a variety of ways as a product of matrices with different
properties. These different factorizations, or decompositions, reveal different aspects of
matrix algebra and are useful in different computational arenas.
Similarity transform
Two square matrices A and B are said to be similar if an invertible matrix P can be found for which A = PBP^{-1}.
Similarity to a diagonal matrix
Systems of differential equations sometimes can be uncoupled by diagonalizing a matrix, obtaining the similarity transformation A = PDP^{-1}, where the n columns of P are the n eigenvectors of A, and D is a diagonal matrix whose entries are the corresponding eigenvalues of A.
Similarity to a Jordan canonical form
However, the most general form is A = PJP^{-1}, where J is a Jordan matrix rather than a diagonal matrix D. The Jordan matrix is a diagonal matrix with some additional 1s on the superdiagonal, the one above the main diagonal. For some matrices, the Jordan matrix is as close to diagonalization as can be achieved.
A square matrix is similar to either a diagonal matrix or to a Jordan matrix. In either
event, the eigenvalues of A appear on the diagonal of D or J. A square symmetric matrix
is orthogonally similar to a diagonal matrix.
LU decomposition
LU decomposition can be obtained as a by-product of Gaussian elimination. The row
reductions that yield the upper triangular factor U also yield the lower triangular factor
L. This decomposition is an efficient way to solve systems of the form AX = Y , where
the vector Y could be one of a number of right-hand sides. In fact, the Doolittle, Crout,
and Cholesky variations of the decomposition are important algorithms for the numerical
solutions of systems of linear equations.
There are at least five different versions of LU decompositions.
1. Doolittle, L1 U , 1s on main diagonal of L.
2. Crout, LU1 , 1s on main diagonal of U .
3. LDU, L1 DU1 , 1s on main diagonals of L and U and D is a diagonal matrix.
4. Gauss, L1 DLT1 , A is symmetric, 1s on main diagonal of L, D is a diagonal matrix.
5. Cholesky, RR^T, A is symmetric positive definite, R = L_1 D^{1/2}, with D a diagonal matrix.
QR decomposition
The QR decomposition factors a matrix into a product of an orthogonal matrix Q and an
upper triangular matrix R. It is an important ingredient of powerful numeric methods for
finding eigenvalues and for solving the least-squares problem.
The factors Q and R for every real matrix are unique once the otherwise arbitrary signs on the diagonal of R are fixed. Modern computational algorithms for finding eigenvalues numerically use some version of the QR algorithm. Starting from A = A_0, perform the QR decomposition iteratively:
A_0 = Q_0 R_0, \quad A_1 = R_0 Q_0 = Q_1 R_1, \quad A_2 = R_1 Q_1 = Q_2 R_2, \quad A_3 = R_2 Q_2 = Q_3 R_3, \ldots

Then the sequence of matrices A_k converges to an upper triangular matrix with the eigenvalues of A_0 on the main diagonal. If, in addition, A is symmetric, then A_k converges to a diagonal matrix.
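The iteration A_{k+1} = R_k Q_k can be sketched for a symmetric 2x2 matrix, using a Givens rotation to compute each QR step (one possible way of computing the factorization; the matrix below is an illustrative choice, not from the notes):

```python
import math

# QR algorithm sketch for a symmetric 2x2 matrix: each step factors
# A = QR via a Givens rotation and forms the next iterate RQ = Q^T A Q.
# The iterates converge to a diagonal matrix holding the eigenvalues.
def qr_step(A):
    a, c = A[0][0], A[1][0]
    r = math.hypot(a, c)
    cs, sn = a / r, c / r
    QT = [[cs, sn], [-sn, cs]]          # Givens rotation zeroing A[1][0]
    R = [[sum(QT[i][k] * A[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    Q = [[cs, -sn], [sn, cs]]
    return [[sum(R[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2.0, 1.0], [1.0, 3.0]]            # eigenvalues (5 +/- sqrt(5)) / 2
for _ in range(30):
    A = qr_step(A)
# A is now essentially diagonal, with the eigenvalues on the main diagonal.
```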
A = U \Sigma V^T.
The singular value decomposition factors a matrix as a product of three factors, two
being orthogonal matrices and one being diagonal. The columns in one orthogonal factor
are left singular vectors, and the columns in the other orthogonal factor are the right
singular vectors. The matrix itself can be represented in outer product form in terms
of the left and right singular vectors. One use of this representation is in digital image
processing.
5-Solution of linear systems
These are methods for finding solutions of a set of linear equations which may be written in the matrix form Ax = b, where x is the vector of n unknowns.
Direct methods
Direct methods of solving linear systems all use some form of matrix factorization. The
LU factorization is the most commonly used method to solve a linear system.
For certain patterned matrices, other direct methods may be more efficient. If a given
matrix initially has a large number of zeros, it is important to preserve the zeros in the
same positions in the matrices that result from operations on the given matrix. This helps
to avoid unnecessary computations. The iterative methods discussed in the next section
are often more useful for sparse matrices.
Another important consideration is how easily an algorithm lends itself to implementation on advanced computer architectures. Many of the algorithms for linear algebra can be vectorized easily. It is now becoming more important to be able to parallelize the algorithms.
Iterative methods
The Jacobi method
Let's start with Ax = b. A can be decomposed into a diagonal component D and the remainder R. The solution is then obtained iteratively by

x_{k+1} = D^{-1}(b - R x_k)

Comments:
1- The method works well if the matrix A is diagonally dominant.
2- The matrix must satisfy a_{ii} \neq 0.
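The update x_{k+1} = D^{-1}(b - Rx_k) can be sketched on a small diagonally dominant system (the particular A and b below are illustrative choices, not from the notes):

```python
# Jacobi iteration x_{k+1} = D^(-1)(b - R x_k): every component of the new
# iterate is computed from the previous iterate only.
A = [[4.0, 1.0], [2.0, 5.0]]   # diagonally dominant, a_ii != 0
b = [9.0, 9.0]                 # exact solution is x = (2, 1)

def jacobi(A, b, iters=50):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

x = jacobi(A, b)   # converges to the exact solution (2, 1)
```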
The Gauss-Seidel method
In this method, we identify three matrices: a diagonal matrix D, a lower triangular L with 0s on the diagonal, and an upper triangular U with 0s on the diagonal:

(D + L)x_{k+1} = b - U x_k.
We can write this entire sequence of Gauss-Seidel iterations in terms of these three fixed
matrices:
Generalised Vector Calculus
Integral Theorems
iii. Provide physical intuition, insight and feeling for the mathematics in CFD
1-Required Vector Operators & Operations
The operator is, unless explicitly stated otherwise, assumed to be of the shape:
T
x
x
= = = where: x = y , (76)
xT x
y
z
z
where the superscript T denotes the transpose operation.
\nabla \cdot (\nabla \phi(x)) = \left[ \frac{\partial}{\partial x} \;\; \frac{\partial}{\partial y} \;\; \frac{\partial}{\partial z} \right] \begin{pmatrix} \partial \phi(x)/\partial x \\ \partial \phi(x)/\partial y \\ \partial \phi(x)/\partial z \end{pmatrix} = \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} \right) \phi(x) = \frac{\partial^2 \phi(x)}{\partial x^2} + \frac{\partial^2 \phi(x)}{\partial y^2} + \frac{\partial^2 \phi(x)}{\partial z^2} \in \mathbb{R}^{1 \times 1}.    (78)
The Laplacian is the Divergence of the Gradient of a scalar field. It has important
applications in Potential Flow Theory.
\nabla^2 F = \nabla(\nabla \cdot F) - \nabla \times (\nabla \times F),    (79)
which in Cartesian coordinates reduces to:
\nabla^2 F(x) = \left[ \nabla^2 F_1(x) \;\; \nabla^2 F_2(x) \;\; \nabla^2 F_3(x) \right]^T \in \mathbb{R}^{3 \times 1},    (80)
where the vector field F(x) is composed of three scalar components F1 (x), F2 (x) and
F3 (x), i.e. F(x) = [F1 (x) F2 (x) F3 (x)]T . For a scalar field, the Vector Laplacian
reverts to the familiar Laplacian.
v. Stokes Theorem
First & Second Fundamental Theorem of Calculus
The Riemann Integral is a common and rigorous definition of an integral:

\int_a^b f(x)\,dx = \lim_{\Delta x_k \to 0} \sum_k f(\xi_k)\, \Delta x_k,    (83)
which has the geometrical interpretation of the area under the curve, as shown in figure 2.
Figure 2: The area under the curve f(x) as the limit of Riemann sums f(\xi_k)\,\Delta x_k as \Delta x_k \to 0.
Another formal, but more general, definition regards integration as the process which reverses differentiation, hence why it is sometimes referred to as the anti-derivative. The First Fundamental Theorem of Calculus considers the function

f(x) = \int_a^x g(t)\,dt,    (84)

given a continuous real-valued function g(t) over the closed interval domain [a, b]. It follows from this theorem that f(x) is continuous over the closed interval domain [a, b], differentiable over the open domain (a, b), and by definition:

g(x) = \frac{df(x)}{dx}.    (85)
The First Fundamental Theorem relates the Derivative to the Integral and, most importantly, guarantees the existence of integrals for continuous functions. The Second Fundamental Theorem holds for real-valued functions g(x) and f(x) on [a, b] related by equation (85). This theorem, unlike the First Fundamental Theorem, does not require f(x) to be continuous.
Generalised Line Integrals & Gradient Theorem
The Gradient Theorem is also referred to as the Fundamental Theorem of Calculus for Line Integrals. It generalises integration along an axis, e.g. dx or dy, as in the 2nd Fundamental Theorem of Calculus, to the integration of vector fields along arbitrary curves, C, in their base space.
\int_C f(x(t), y(t)) \left( dx^2 + dy^2 \right)^{1/2} = \int_a^b f(x(t), y(t)) \left[ \left( \frac{dx}{dt} \right)^2 + \left( \frac{dy}{dt} \right)^2 \right]^{1/2} dt,    (88)

\int_C f(x, y)\, ds = \int_{x_A}^{x_B} f(x, y(x)) \left[ \left( \frac{dy}{dx} \right)^2 + 1 \right]^{1/2} dx.    (89)
If the integral is only evaluated either along dx or dy, then only the axis projection
surfaces are obtained:
S_x = \int_C f(x, y)\, dx \quad \text{or} \quad S_y = \int_C f(x, y)\, dy.    (90)
Figure 3: Line integral of f(x, y) along the curve C from A = (x_A, y_A) to B = (x_B, y_B), with arc length element ds, the swept surface S_s, and the axis projections S_x and S_y.
Example: Scalar Field Line Integral

Assume \oint_C f(x, y)\, ds, where f(x, y) = h is constant, and C is a circle of radius R. Parametrise C as x(\theta) = R\cos(\theta) and y(\theta) = R\sin(\theta) for \theta = 0 \ldots 2\pi. It follows that:

S_s = \oint_C h\, ds
    = \oint_C h \left( dx^2 + dy^2 \right)^{1/2}
    = \int_0^{2\pi} h \left[ \left( \frac{dx}{d\theta} \right)^2 + \left( \frac{dy}{d\theta} \right)^2 \right]^{1/2} d\theta
    = \int_0^{2\pi} h R \left[ (-\sin\theta)^2 + (\cos\theta)^2 \right]^{1/2} d\theta
    = h R \int_0^{2\pi} d\theta = 2\pi h R.    (91)
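The result (91) can be reproduced numerically from the same parametrisation (a minimal sketch; the values h = 2 and R = 3 are illustrative choices, not from the notes):

```python
import math

# Midpoint-rule evaluation of the line integral of f = h along the circle
# x = R cos(theta), y = R sin(theta); the analytic result (91) is 2*pi*h*R.
def line_integral_const(h, R, steps=10000):
    d = 2 * math.pi / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * d
        dxdt = -R * math.sin(t)
        dydt = R * math.cos(t)
        total += h * math.sqrt(dxdt**2 + dydt**2) * d
    return total

S = line_integral_const(h=2.0, R=3.0)   # analytic value: 2*pi*2*3
```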
Figure 4: Cylinder surface integral, showing the swept surface S_s at height z = h and the axis projections S_x and S_y.
Note that due to the closed-loop integration path, the axis projections S_x and S_y are nil in this case:

S_x = \oint_C h\, dx = \int_{x_A}^{x_B} h\, dx + \int_{x_B}^{x_A} h\, dx = 0.    (92)
Generalised Gradient Theorem
Postulate the integral of a 3D vector field, F(x) = [F_1(x)\; F_2(x)\; F_3(x)]^T, along an arbitrary 3D curve, C, which is parametrised as x = x(t), y = y(t), z = z(t), i.e. ds = [dx\; dy\; dz]^T, and goes from point p to point q. The corresponding generalised line integral becomes:

\int_C F \cdot ds = \int_C \left( F_1(x)\,dx + F_2(x)\,dy + F_3(x)\,dz \right)
= \int_{t=a}^{t=b} \left[ F_1(x(t)) \frac{dx}{dt} + F_2(x(t)) \frac{dy}{dt} + F_3(x(t)) \frac{dz}{dt} \right] dt.    (93)
Equation (93) may for instance represent the work performed on a particle in 3D as it travels through an external force field, F(x). Postulating that this vector field, F(x), can be obtained as the gradient of a scalar field, \phi(x), i.e. F(x) = \nabla\phi(x), it follows, together with the 2nd Fundamental Theorem of Calculus, that:

\int_C F \cdot ds = \int_C \nabla\phi(x) \cdot ds = \phi(q) - \phi(p).    (94)
Equation (94) is known as the Gradient Theorem and implies path independence of
the integral if and only if F(x) = (x). It immediately follows from equation (66),
that such a vector field F(x) is irrotational, i.e. curl(F) = F = 0, because:
F = ((x)) = 0 , (95)
for any scalar field (x). The scalar field is referred to as a conservative or potential
field with the corresponding vector field F being denoted as a conservative vector field.
Conversely, it is always possible to express a conservative vector field F in terms of a
scalar potential field. This theorem is at the basis of a lot of the Potential Flow and
Irrotaional fluid dynamics.
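The path independence stated by the Gradient Theorem can be illustrated numerically. The sketch below uses an illustrative potential φ = x²y + z (not from the notes) and integrates F = ∇φ along two different routes between the same endpoints:

```python
import math

# Sketch of the Gradient Theorem: for F = grad(phi), the line integral
# from p to q is path independent and equals phi(q) - phi(p).
# The potential phi and the two paths are illustrative choices.
def phi(x, y, z):
    return x**2 * y + z

def F(x, y, z):                       # F = grad(phi) = (2xy, x^2, 1)
    return (2 * x * y, x**2, 1.0)

def line_integral(path, N=20_000):
    """Integrate F . ds along a parametrised path(t), t in [0, 1]."""
    total = 0.0
    for k in range(N):
        t0, t1 = k / N, (k + 1) / N
        x0, y0, z0 = path(t0)
        x1, y1, z1 = path(t1)
        fx, fy, fz = F(*path((t0 + t1) / 2))   # midpoint rule
        total += fx * (x1 - x0) + fy * (y1 - y0) + fz * (z1 - z0)
    return total

p, q = (0.0, 0.0, 0.0), (1.0, 1.0, 1.0)
straight = lambda t: (t, t, t)                          # straight line p -> q
curved = lambda t: (t, t**3, math.sin(math.pi * t / 2)) # a different route
print(line_integral(straight), line_integral(curved), phi(*q) - phi(*p))
# all three values agree: the integral depends only on the endpoints
```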
Green's Theorem

For a 2D convex region Ω, i.e. x = [x y]ᵀ, with boundary Γ, Green's Theorem states:

\[
\iint_\Omega \left( \frac{\partial F_2(\mathbf{x})}{\partial x} - \frac{\partial F_1(\mathbf{x})}{\partial y} \right) dx \, dy
= \oint_\Gamma \left( F_1(\mathbf{x}) \, dx + F_2(\mathbf{x}) \, dy \right) . \tag{96}
\]
[Figure 5: (a) Convex domain, bounded by the curves C₁: u(x) and C₂: v(x) between x = a and x = b, and by C₃: p(y) and C₄: q(y) between y = c and y = d. (b) Non-convex domain, subdivided into sub-domains Ω_i and Ω_j with internal boundary Γ_ij = −Γ_ji.]
Considering figure 5 (a), the F₁(x) integrand part of equation (96) can be resolved
as:

\[
\begin{aligned}
-\iint_\Omega \frac{\partial F_1(\mathbf{x})}{\partial y} \, dx \, dy
&= -\int_{x=a}^{x=b} \left[ \int_{y=u(x)}^{y=v(x)} \frac{\partial F_1(\mathbf{x})}{\partial y} \, dy \right] dx \\
&= -\int_{x=a}^{x=b} F_1(x, v(x)) \, dx + \int_{x=a}^{x=b} F_1(x, u(x)) \, dx \\
&= \oint_\Gamma F_1(x, y) \, dx ,
\end{aligned} \tag{97}
\]
where the integration direction always follows a right-hand rotation about the domain
normal, i.e. k̂. Similarly, the F₂(x) integrand part of equation (96) can be resolved. If
the region is not convex, it can always be subdivided into sub-domains, Ω_i, where the
line integrals at the internal boundary between sub-domains i and j, Γ_ij, cancel due to
opposite directions of integration, see figure 5 (b), i.e. Γ_ji = −Γ_ij.
Green's Theorem gives the necessary and sufficient condition for a line integral
\( \oint (F_1(\mathbf{x}) \, dx + F_2(\mathbf{x}) \, dy) \) to be path independent, in a simply connected region, as:

\[
\nabla \times \mathbf{F}(\mathbf{x})
= \left[ 0 \;\; 0 \;\; \frac{\partial F_2(\mathbf{x})}{\partial x} - \frac{\partial F_1(\mathbf{x})}{\partial y} \right]^T
= \mathbf{0}^T . \tag{99}
\]
Coupled with complex analysis, such as keyhole integration for domains with singularities
and complex integrals, Green's Theorem can be used to develop Laurent Series,
Cauchy Residues or Laplace Transforms. These methods have important applications
in system dynamics and stability analyses.
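As a quick numerical illustration of equation (96), the sketch below checks Green's Theorem on the unit square for the illustrative field F = (−y², x²), for which ∂F₂/∂x − ∂F₁/∂y = 2x + 2y:

```python
# Numerical sketch of Green's Theorem on the unit square with the
# illustrative field F = (F1, F2) = (-y^2, x^2), giving the integrand
# dF2/dx - dF1/dy = 2x + 2y; both sides should approximate 2.
N = 400
h = 1.0 / N

# area integral of (2x + 2y) over [0,1] x [0,1] (midpoint rule)
area = sum((2 * (i + 0.5) * h + 2 * (j + 0.5) * h) * h * h
           for i in range(N) for j in range(N))

def F1(x, y): return -y**2
def F2(x, y): return x**2

# boundary integral, traversed counter-clockwise (right-hand rotation
# about the outward normal k)
line = 0.0
for k in range(N):
    s = (k + 0.5) * h
    line += F1(s, 0.0) * h          # bottom edge, dx > 0
    line += F2(1.0, s) * h          # right edge,  dy > 0
    line += F1(1.0 - s, 1.0) * -h   # top edge,    dx < 0
    line += F2(0.0, 1.0 - s) * -h   # left edge,   dy < 0
print(area, line)  # both approximate 2
```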
Divergence Theorem

The Divergence Theorem, also referred to as Gauss' or Ostrogradsky's Theorem, relates
the vector flux through a domain boundary, Γ, to the vector field within the domain, Ω.
When combining both cells i and j, the integrals at the shared boundary surface cancel,
due to opposing signs in the outward normal vector dA, resulting in:

\[
\oint_\Gamma \mathbf{F} \cdot d\mathbf{A}
= \sum_i \oint_{\Gamma_i} \mathbf{F} \cdot d\mathbf{A}
= \sum_i \left[ \frac{1}{V_i} \oint_{\Gamma_i} \mathbf{F} \cdot d\mathbf{A} \right] V_i . \tag{103}
\]
Substituting equation (102), taking \( \lim_{V_i \to 0} \), and using equation (100), gives the Divergence
Theorem as:

\[
\oint_\Gamma \mathbf{F} \cdot d\mathbf{A} = \int_\Omega \left( \nabla \cdot \mathbf{F} \right) dV , \tag{104}
\]

where Ω represents the overall volume domain and Γ denotes the total surface boundary.
[Figure: two adjacent cells Ω_i and Ω_j with opposing outward normal vectors dA_i and dA_j at their shared boundary.]
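Equation (104) can likewise be checked numerically. The sketch below uses the unit cube and the illustrative field F = (x², y², z²), so that ∇·F = 2x + 2y + 2z:

```python
# Numerical sketch of the Divergence Theorem on the unit cube with the
# illustrative field F = (x^2, y^2, z^2), div(F) = 2x + 2y + 2z; both
# the volume integral and the outward flux should approximate 3.
N = 50
h = 1.0 / N
pts = [(i + 0.5) * h for i in range(N)]

# volume integral of div(F) over the cube (midpoint rule)
vol = sum((2*x + 2*y + 2*z) * h**3 for x in pts for y in pts for z in pts)

# outward flux through the six faces: on x = 1 the outward normal is +x
# and F . n = 1^2; on x = 0 it is -x and F . n = -0^2 = 0; y and z alike
flux = 0.0
for a in pts:
    for b in pts:
        flux += (1.0**2 - 0.0**2) * h * h   # x = 1 and x = 0 faces
        flux += (1.0**2 - 0.0**2) * h * h   # y = 1 and y = 0 faces
        flux += (1.0**2 - 0.0**2) * h * h   # z = 1 and z = 0 faces
print(vol, flux)  # both approximate 3
```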
Top-Down or Volume to Surface Derivation

[Figure 7: Convex domain Ω, bounded above by the surface z = v(x, y) and below by z = u(x, y), with projection R in the x-y plane.]
For the convex domain, Ω, in figure 7, consider just the F₃(x) volume integral of the
Divergence Theorem, i.e.:

\[
\begin{aligned}
\int_\Omega \frac{\partial F_3(\mathbf{x})}{\partial z} \, dV
&= \iint_R \left[ \int_{z=u(x,y)}^{z=v(x,y)} \frac{\partial F_3(x, y, z)}{\partial z} \, dz \right] dx \, dy \\
&= \iint_R \left[ F_3(x, y, v(x, y)) - F_3(x, y, u(x, y)) \right] dx \, dy \\
&= \oint_\Gamma F_3(\mathbf{x}) \, \hat{\mathbf{k}} \cdot d\mathbf{A} ,
\end{aligned} \tag{105}
\]
where Ω is now a surface domain, Γ is a line boundary and ds_n is an outward boundary
vector, i.e. \( d\mathbf{s}_n = ds \, \hat{\mathbf{n}} \). The latter is related to the tangential vector ds, used in
Green's Theorem, see equation (96), by a negative π/2 rotation as:

\[
d\mathbf{s}_n = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} d\mathbf{s} , \tag{108}
\]

which is equivalent to equation (96), for the vector field G = [G₁ G₂]ᵀ = [−F₂ F₁]ᵀ.
Stokes' Theorem

Stokes' Theorem relates surface integrals to line integrals. However, in its more general
form, the theorem relates integrals in dimension ℝⁿ to integrals in ℝⁿ⁻¹.
where G(x′) is the equivalent vector field in the local frame, with \( d\mathbf{A}' = dx' \, dy' \, \hat{\mathbf{k}}' \):

\[
\int_{\Omega_i} \left( \nabla \times \mathbf{F}(\mathbf{x}) \right) \cdot d\mathbf{A}
= \int_{\Omega_i} \left( \frac{\partial G_2(\mathbf{x}')}{\partial x'} - \frac{\partial G_1(\mathbf{x}')}{\partial y'} \right) dx' \, dy' , \tag{111}
\]

[Figure 8: Stokes' Theorem on a surface, showing the global frame (x, y, z), the local frame (x′, y′, z′) on a surface element Ω_i, and the fields F(x) and G(x′).]
Secondly, summing over all infinitesimal domains, Ω_i, noting that integrals along
internal boundaries, Γ_i, cancel due to opposite directions of integration, and defining the
overall boundary Γ in the global reference frame, Stokes' Theorem follows as:

\[
\int_\Omega \left( \nabla \times \mathbf{F}(\mathbf{x}) \right) \cdot d\mathbf{A}
= \oint_\Gamma \mathbf{F}(\mathbf{x}) \cdot d\mathbf{s} . \tag{113}
\]
The domain Ω must be a simply connected region and F(x) must not include singularities
along Γ. Stokes' Theorem is the most general of these theorems and includes the Divergence,
Green's and the 2nd Fundamental Theorem as special cases. Chapters 13.4-13.6
of the recommended textbook are highly suggested for further discussion and examples.
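A minimal numerical illustration of equation (113), taking the surface as the flat unit disk in the x-y plane and the illustrative field F = (−y, x, 0), for which curl(F) = (0, 0, 2):

```python
import math

# Sketch of Stokes' Theorem on the flat unit disk in the x-y plane with
# the illustrative field F = (-y, x, 0): curl(F) = (0, 0, 2), so the
# surface integral is 2 * (disk area) = 2*pi, matching the line integral.
N = 100_000
line = 0.0
for k in range(N):
    t0 = 2 * math.pi * k / N
    t1 = 2 * math.pi * (k + 1) / N
    tm = (t0 + t1) / 2
    dx = math.cos(t1) - math.cos(t0)
    dy = math.sin(t1) - math.sin(t0)
    # F . ds with F = (-y, x, 0) evaluated on the unit circle
    line += -math.sin(tm) * dx + math.cos(tm) * dy

surface = 2 * math.pi * 1.0**2   # (curl F . k) * disk area = 2*pi
print(line, surface)             # both equal 2*pi
```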
Partial Differential Equations
NOTE: All figures for this section will be covered during lectures
Solution Strategies
The field of PDE is vast and many solution strategies are available. A few of the
most popular analytical approaches include:
Separation of Variables
Transform Methods
Method of Characteristics
Similarity Solutions
h-Principle
In general, a PDE for the unknown u(x) can be stated as:

\[
F\!\left( \mathbf{x}, \, u(\mathbf{x}), \, \frac{\partial u(\mathbf{x})}{\partial x_1}, \ldots, \frac{\partial u(\mathbf{x})}{\partial x_n}, \ldots, \frac{\partial^2 u(\mathbf{x})}{\partial x_i \, \partial x_j}, \ldots \right) = 0 , \tag{114}
\]

where the vector x comprises all n problem variables and the operator F must
not be confused with F from the previous vector calculus chapter. Adopting the
notation \( \frac{\partial u}{\partial x} = u_x \), \( \frac{\partial^2 u}{\partial x^2} = u_{xx} \), and assuming in this chapter that the variable
vector takes the form x = [x y]ᵀ ∈ ℝ², equation (114) can be restated as:

\[
L(u) = 0 . \tag{116}
\]
for any solution functions u, v and constant k. Hence, as examples, the heat, wave
and Laplace equations are linear, while the advection and Burgers equations are
generally non-linear. Non-linearity can occur, for instance, if the solution, u, appears
in a derivative's coefficient or if a derivative carries a power exponent. Familiarity
with identifying linearity is paramount for subsequent studies and modules.
Degrees of Non-Linearity in 1st Order PDE

Let the reference 1st order PDE for most of this section be of the form:

\[
a(x, y) \frac{\partial u(x, y)}{\partial x} + b(x, y) \frac{\partial u(x, y)}{\partial y} = c(x, y, u(x, y)) . \tag{118}
\]

The potential degree of non-linearity embedded in PDE of first order leads to
the following differentiations. Homogeneous linear PDE satisfy:

\[
L(u) = 0 , \tag{119}
\]

which implies that the source term is nil, c(x, y) = 0, while inhomogeneous linear
PDE can be written in the form:

\[
L(u) = c(x, y) \neq 0 . \tag{120}
\]
Method of Characteristics - 1st Order PDE

Theory & Derivation

Assume the following linear first order PDE:

\[
a(x, y) \frac{\partial u(x, y)}{\partial x} + b(x, y) \frac{\partial u(x, y)}{\partial y} = c(x, y) . \tag{121}
\]

The solution, u(x, y), is a surface such that z = u(x, y), which can be rewritten
in implicit form as 0 = u(x, y) − z. The vector gradient operator, ∇, applied to this
surface gives the normal vector, n, at every point as:

\[
\mathbf{n} = \left[ \frac{\partial u(x, y)}{\partial x} \;\; \frac{\partial u(x, y)}{\partial y} \;\; -1 \right]^T . \tag{122}
\]
Defining the vector field G = [a(x, y) b(x, y) c(x, y)]ᵀ, equation (121) states that
G · n = 0. It follows that the vector field G is always in the tangential plane to u(x, y).
Assume we can parametrise a curve, C, which lies in the solution surface and which
at every point satisfies the following system of ODE:

\[
\left[ \frac{dx}{ds} \;\; \frac{dy}{ds} \;\; \frac{dz}{ds} \right]^T
= \left[ a(x, y) \;\; b(x, y) \;\; c(x, y) \right]^T . \tag{124}
\]
A curve of this type is called an integral curve of the vector field G, which in
the context of a PDE is known as a characteristic curve. The solution surface
can then be reconstructed (traced) from the union of all characteristic curves.
The PDE has been reduced to a system of ODE, equations (124), which can be
solved. The parametrisation variable can be eliminated from this system, and by
setting z = u, the Lagrange-Charpit equations can be obtained as:

\[
\frac{dx}{a(x, y)} = \frac{dy}{b(x, y)} = \frac{du}{c(x, y)} , \tag{125}
\]

which can easily be extended to include non-linear cases. In case c(x, y) = 0,
from the third line in equations (124), it follows that u is constant and hence
du = 0. In any case it is possible to integrate:

\[
\frac{dy}{dx} = \frac{b(x, y)}{a(x, y)} , \tag{126}
\]

which, if drawn in the x-y base, results in the projected characteristic curves.
Example: Advection Equation

Imagine a scalar quantity φ being propagated in 1D, in a constant velocity field
of speed a, where the initial distribution of the scalar is prescribed along the
boundary, Γ = {(x, t = 0)}, so that the complete problem is stated as:

\[
\begin{aligned}
\varphi_t + a \, \varphi_x &= 0 , \\
\varphi(x, 0) &= \Phi(x) .
\end{aligned} \tag{127}
\]

The characteristic curves follow as:

\[
\begin{aligned}
x &= a t + k_1 , \\
u &= k_2 ,
\end{aligned} \tag{128}
\]

where k₁ and k₂ are constants, such that the general solution can be expressed as:

\[
\begin{aligned}
x - a t &= k_1 , \\
u &= k_2 = f(k_1) = f(x - a t) ,
\end{aligned} \tag{129}
\]
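The general solution (129) can be spot-checked numerically: for any differentiable profile f, u = f(x − at) should satisfy the advection equation. The Gaussian profile and wave speed below are arbitrary choices:

```python
import math

# Check of the general advection solution: u(x, t) = f(x - a t) satisfies
# u_t + a u_x = 0 for any differentiable profile f. The profile and the
# wave speed a are illustrative.
a = 1.5
f = lambda s: math.exp(-s**2)          # hypothetical initial profile
u = lambda x, t: f(x - a * t)

eps = 1e-5
x, t = 0.7, 0.3                        # an arbitrary sample point
u_t = (u(x, t + eps) - u(x, t - eps)) / (2 * eps)   # central difference
u_x = (u(x + eps, t) - u(x - eps, t)) / (2 * eps)
print(u_t + a * u_x)  # approximately zero
```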
by drawing the projected characteristic curves and then showing that the particular
solution is different in different regions of the domain. The projected
characteristic curves are obtained as:

\[
\begin{aligned}
y \, dy &= x \, dx , \\
y^2 - x^2 &= k_1 ,
\end{aligned} \tag{131}
\]

\[
\begin{aligned}
1 + x^2 &= f(-x^2) \quad \text{on } y = 0 , \\
1 + y^2 &= f(y^2) \quad \text{on } x = 0 ,
\end{aligned} \tag{133}
\]

so that the arbitrary function is f(t) = |t| + 1. Hence the full solution is:

\[
u(x, y) = 1 + \left| y^2 - x^2 \right| =
\begin{cases}
1 + y^2 - x^2 & \text{if } y^2 - x^2 \geq 0 , \\
1 + x^2 - y^2 & \text{if } y^2 - x^2 < 0 .
\end{cases} \tag{134}
\]

This example shows the concept of regions of influence, which will be revisited
further later on.
Quasi-Linear PDE with Shocks and Expansion Fans

Consider the following inviscid Burgers problem statement:

\[
u_t + u \, u_x = 0 , \qquad
u(x, 0) = \Phi(x) =
\begin{cases}
0 & \text{if } x < -1 , \\
1 & \text{if } -1 \leq x < 0 , \\
(1 - x) & \text{if } 0 \leq x < 1 , \\
0 & \text{if } x \geq 1 .
\end{cases} \tag{135}
\]
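For this initial condition the characteristics x(t) = x₀ + Φ(x₀) t emanating from the ramp 0 ≤ x₀ < 1 all intersect at (x, t) = (1, 1), where a shock forms. A quick sketch:

```python
# For the inviscid Burgers problem, the characteristics are straight lines
# x(t) = x0 + u(x0, 0) * t. On the ramp 0 <= x0 < 1 the initial data is
# (1 - x0), so every ramp characteristic reaches x = 1 at t = 1: the
# characteristics cross there and a shock forms.
def phi(x0):
    if x0 < -1:  return 0.0
    if x0 < 0:   return 1.0
    if x0 < 1:   return 1.0 - x0
    return 0.0

t = 1.0
ends = [x0 + phi(x0) * t for x0 in [0.0, 0.25, 0.5, 0.75, 0.99]]
print(ends)  # every ramp characteristic arrives at x = 1.0
```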
Method of Characteristics - 2nd Order PDE

Assume the following second order PDE:

\[
r(x, y) \, u_{xx} + 2 \, s(x, y) \, u_{xy} + t(x, y) \, u_{yy} = d(x, y, u, u_x, u_y) . \tag{136}
\]
Finally, noting that there MUST NOT be a unique solution for u_xx, u_xy and
u_yy, it follows that the coefficient matrix must be singular, which occurs iff:

\[
r(x, y) \left( \frac{dy}{dx} \right)^2 - 2 \, s(x, y) \, \frac{dy}{dx} + t(x, y) = 0 . \tag{139}
\]
The two roots of equation (139) are:

\[
\frac{dy}{dx} = \frac{s(x, y) \pm \sqrt{s(x, y)^2 - r(x, y) \, t(x, y)}}{r(x, y)} , \tag{140}
\]

and these constitute a pair of differential equations which lead to the projected
characteristic curves. Three fundamentally different behaviours of the PDE result,
depending on the discriminant in equation (140):

1. Hyperbolic PDE: s(x, y)² > r(x, y) t(x, y) (Real Distinct Roots)
2. Parabolic PDE: s(x, y)² = r(x, y) t(x, y) (Real Repeated Roots)
3. Elliptical PDE: s(x, y)² < r(x, y) t(x, y) (Complex Conjugate Roots)
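The classification above can be expressed as a short routine. The coefficient convention (r on u_xx, 2s on u_xy, t on u_yy) matches equations (139)-(140), and the sample coefficients (with c = α = 1) are illustrative:

```python
# Classification of a 2nd order PDE from the sign of the discriminant
# s^2 - r*t at a point, following the convention r u_xx + 2s u_xy + t u_yy.
def classify(r, s, t):
    disc = s**2 - r * t
    if disc > 0:  return "hyperbolic"
    if disc == 0: return "parabolic"
    return "elliptical"

# wave equation u_tt - c^2 u_xx = 0 (c = 1): r = -1, s = 0, t = 1
print(classify(-1, 0, 1))   # hyperbolic
# heat equation T_t = alpha T_xx (alpha = 1): only u_xx, so r = 1, s = t = 0
print(classify(1, 0, 0))    # parabolic
# Laplace's equation u_xx + u_yy = 0: r = 1, s = 0, t = 1
print(classify(1, 0, 1))    # elliptical
```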
Regions of Dependence & Influence
Elliptical PDE, such as Laplace's equation, constitute Boundary Value Problems,
where the solution at one specific point in the domain depends on the current solution
everywhere in the domain. Conversely, in hyperbolic PDE, e.g. the wave equation,
information travels (for time dependent problems) at a finite speed through
the domain, so that the solution at a point has a finite part of the domain upon
which it is dependent, the Region of Dependence. Similarly, the solution at this
point will only affect parts of the domain, the Region of Influence. Parabolic
PDE, e.g. the heat equation, are similar to hyperbolic except that the information
propagates at infinite speed. Parabolic and hyperbolic PDE are suited for time-
marching schemes and are referred to as Initial Value Problems. For complex
PDE, the classification into hyperbolic, parabolic or elliptical may not be trivial.
Variable Type PDE

A second order PDE, which is not a constant coefficient PDE, can change
its type throughout the simulation history or spatial domain. For example, the
steady Euler equation with irrotational flow, ∇φ = u, can be expressed as:

\[
\left( 1 - M^2 \right) \frac{\partial^2 \varphi}{\partial s^2} + \frac{\partial^2 \varphi}{\partial n^2} = 0 , \tag{141}
\]

where M is the Mach Number, and s and n are coordinates along and normal to a
streamline respectively. This PDE is elliptical in the sub-sonic regime, parabolic at the
sonic point and hyperbolic in the super-sonic regime.
Canonical Forms & Representative PDE

Each type of PDE can, upon a transformation into characteristic variables
ξ and η, be expressed in its canonical form. It also follows that each PDE can be
expressed in a form similar to either the heat, wave or Laplace's equation.

Wave Equation

The wave equation is of particular interest as a representative PDE. It features
two distinct families of real characteristic curves which, upon projection into the
base space, form a projected characteristic grid or network. In general the wave
equation PDE (without any BC/IC), with a propagation speed c, can be stated as:

\[
\frac{\partial^2 u}{\partial t^2} - c^2 \frac{\partial^2 u}{\partial x^2} = 0 . \tag{142}
\]

The projected characteristic curves follow from equation (140) as:

\[
x = c t + k_1 \quad \text{and} \quad x = k_2 - c t , \tag{144}
\]

such that the characteristic variables are defined as:

\[
\xi = x - c t = k_1 \quad \text{and} \quad \eta = x + c t = k_2 . \tag{145}
\]
Transforming the second derivatives into the characteristic variables gives:

\[
\frac{\partial^2}{\partial x^2}
= \frac{\partial^2}{\partial \xi^2} + 2 \frac{\partial^2}{\partial \xi \, \partial \eta} + \frac{\partial^2}{\partial \eta^2} , \tag{150}
\]

\[
\frac{\partial^2}{\partial t^2}
= c^2 \frac{\partial^2}{\partial \xi^2} - 2 c^2 \frac{\partial^2}{\partial \xi \, \partial \eta} + c^2 \frac{\partial^2}{\partial \eta^2} , \tag{151}
\]

such that finally, the initial wave equation, equation (142), in canonical form is:

\[
\frac{\partial^2 u}{\partial t^2} - c^2 \frac{\partial^2 u}{\partial x^2}
= c^2 \frac{\partial^2 u}{\partial \xi^2} - 2 c^2 \frac{\partial^2 u}{\partial \xi \, \partial \eta} + c^2 \frac{\partial^2 u}{\partial \eta^2}
- c^2 \frac{\partial^2 u}{\partial \xi^2} - 2 c^2 \frac{\partial^2 u}{\partial \xi \, \partial \eta} - c^2 \frac{\partial^2 u}{\partial \eta^2}
= 0 , \tag{152}
\]

which simplifies to:

\[
\frac{\partial^2 u}{\partial \xi \, \partial \eta} = u_{\xi\eta} = 0 . \tag{153}
\]
Equation (153) can easily be integrated to the general solution:

\[
u(x, t) = f(\xi) + g(\eta) = f(x - c t) + g(x + c t) , \tag{155}
\]

where f and g are arbitrary functions which depend on the given BC/IC.
Given initial conditions at time t = 0, in the form of an initial wave profile
u(x, 0) = h(x) and an initial velocity profile u_t(x, 0) = v(x), equation (155) has
a final solution, referred to as d'Alembert's Solution, which takes the form:

\[
u(x, t) = \frac{1}{2} h(x + c t) + \frac{1}{2} h(x - c t)
+ \frac{1}{2c} \int_{x - c t}^{x + c t} v(s) \, ds . \tag{156}
\]
Equation (156) shares strong similarities with the advection equation. The
first two terms are two half-amplitude initial wave profiles travelling in opposite
directions. This interpretation is further visualised by noting that the wave
equation, equation (142), can be factorised into two advection equations:

\[
\left( \frac{\partial}{\partial t} - c \frac{\partial}{\partial x} \right)
\left( \frac{\partial}{\partial t} + c \frac{\partial}{\partial x} \right) u(x, t) = 0 . \tag{157}
\]
An inhomogeneous wave equation solution (i.e. non-zero source q) exists:

\[
u(x, t) = \frac{1}{2} h(x + c t) + \frac{1}{2} h(x - c t)
+ \frac{1}{2c} \int_{x - c t}^{x + c t} v(s) \, ds
+ \frac{1}{2c} \int_0^t \int_{x - c(t - t_i)}^{x + c(t - t_i)} q(x_i, t_i) \, dx_i \, dt_i . \tag{158}
\]
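d'Alembert's Solution, equation (156), translates directly into code. The sketch below uses an illustrative Gaussian initial profile h and zero initial velocity v, so the initial bump splits into two half-amplitude travelling waves:

```python
import math

# Sketch of d'Alembert's solution with an illustrative initial profile
# h(x) = exp(-x^2) and zero initial velocity v(x) = 0.
c = 1.0
h = lambda x: math.exp(-x**2)
v = lambda x: 0.0

def dalembert(x, t, n=1000):
    # trapezoidal quadrature of the velocity term over [x - c t, x + c t]
    a, b = x - c * t, x + c * t
    step = (b - a) / n
    integral = (0.5 * (v(a) + v(b)) * step
                + sum(v(a + k * step) for k in range(1, n)) * step)
    return 0.5 * h(x + c * t) + 0.5 * h(x - c * t) + integral / (2 * c)

# with v = 0 the initial bump splits into two half-amplitude waves
print(dalembert(0.0, 0.0))   # 1.0 (the initial peak)
print(dalembert(5.0, 5.0))   # ~0.5 (right-travelling half wave at x = ct)
```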
Laplace's Equation in 2D Incompressible & Irrotational Flow

Laplace's equation is fundamental for Inviscid Potential Flow Theory. Assume
a 2D irrotational flow such that the velocity field, u = [u v]ᵀ, can be
expressed as the gradient of a scalar field, i.e. u = ∇φ. The irrotational characteristic
of the flow is inherently guaranteed, as ∇ × (∇φ) = 0 is a vector identity
which always holds. The incompressibility condition results in:

\[
\nabla \cdot \mathbf{u} = 0 = \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} , \tag{159}
\]

where the velocity vector components u and v are in turn given by:

\[
u = \frac{\partial \varphi}{\partial x}
\quad \text{and} \quad
v = \frac{\partial \varphi}{\partial y} , \tag{160}
\]

such that the field, φ, is a harmonic function, as it satisfies Laplace's equation:

\[
\frac{\partial^2 \varphi}{\partial x^2} + \frac{\partial^2 \varphi}{\partial y^2} = 0 . \tag{161}
\]
Additionally, introduce a second scalar field, ψ, such that:

\[
u = \frac{\partial \psi}{\partial y}
\quad \text{and} \quad
v = -\frac{\partial \psi}{\partial x} . \tag{162}
\]

This new scalar field must obey both the incompressible and irrotational nature
of the flow. It can be seen immediately that the incompressibility condition
holds, while for the irrotational aspect it is required that:

\[
\nabla \times \mathbf{u} = 0 = \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} , \tag{163}
\]

which upon substitution with the new scalar field, ψ, becomes:

\[
\frac{\partial v}{\partial x} - \frac{\partial u}{\partial y}
= -\left( \frac{\partial^2 \psi}{\partial x^2} + \frac{\partial^2 \psi}{\partial y^2} \right) = 0 . \tag{164}
\]

Both fields must satisfy Laplace's equation, but furthermore it can be stated:

\[
\frac{\partial \varphi}{\partial x} = \frac{\partial \psi}{\partial y} \; (= u)
\quad \text{and} \quad
\frac{\partial \varphi}{\partial y} = -\frac{\partial \psi}{\partial x} \; (= v) . \tag{165}
\]
Equations (165) are referred to as the Cauchy-Riemann conditions for any complex
function of the type:

\[
\chi = \varphi + i \psi , \tag{166}
\]

to be analytic and hence differentiable. This also implies that φ and ψ are not
only both harmonic functions, but also conjugates of each other. This complex
potential, χ, has an imaginary part, ψ, referred to as the stream function, which can
be used to draw streamlines in a flow field, i.e. the trajectories of particles in the
velocity field. Because the streamlines are always tangent to the velocity vector
field, no mass flow passes through them.
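The Cauchy-Riemann conditions (165) can be verified numerically for any analytic potential. The sketch below uses the illustrative choice χ = z², i.e. φ = x² − y² and ψ = 2xy:

```python
# Check of the Cauchy-Riemann conditions for the illustrative complex
# potential chi = z^2 = (x^2 - y^2) + i(2xy), i.e. phi = x^2 - y^2 and
# psi = 2xy.
phi = lambda x, y: x**2 - y**2
psi = lambda x, y: 2 * x * y

eps = 1e-5
def d(f, x, y, wrt):
    """Central-difference partial derivative of f with respect to x or y."""
    return ((f(x + eps, y) - f(x - eps, y)) / (2 * eps) if wrt == "x"
            else (f(x, y + eps) - f(x, y - eps)) / (2 * eps))

x, y = 0.8, -0.4                # an arbitrary sample point
u = d(phi, x, y, "x")           # velocity component u
v = d(phi, x, y, "y")           # velocity component v
print(u, d(psi, x, y, "y"))     # phi_x = psi_y  (= u)
print(v, -d(psi, x, y, "x"))    # phi_y = -psi_x (= v)
```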
The stream function is constant along a streamline, and the difference between
two adjacent streamlines gives the volumetric flow rate through a line which joins
these two streamlines. The real part, φ, referred to as the velocity potential, can be
used to draw the equipotential lines in the flow, which are perpendicular to the
streamlines everywhere in the domain. Equation (160) shows how the velocity
field is simply the gradient of the velocity potential.

A conformal mapping is a function of the form w = f(z), where z is a complex
variable and where the function f is both analytic itself and must not have
a vanishing derivative, i.e. f′(z) ≠ 0. Functions with these properties are also
referred to as holomorphic functions. Depending on the choice of the conformal
map, geometric objects such as lines or circles in one domain can be mapped
to different shapes in another domain (e.g. circles to lines). However, the key
aspect of a conformal map is that it is angle-preserving, which implies that after
mapping through a conformal map, the streamlines and equipotential lines will
remain perpendicular relative to each other.

The potentials φ and ψ are harmonic functions and remain harmonic
under a conformal mapping. Hence Laplace's equation can be solved in
one domain with boundary conditions applied along simpler geometries, before
being mapped to a different domain with more difficult shapes. It is considerably
simpler to solve fluid flow past a circular cylinder in one domain, and then map
this cylinder to an airfoil-like shape using the Joukowsky Transformation, which
is characterised by the conformal map w = f(z) = z + z⁻¹.
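A quick sketch of the Joukowsky map w = z + z⁻¹: points on the unit circle map onto the real segment [−2, 2] (a degenerate flat "airfoil"; circles offset from the origin map to airfoil-like shapes):

```python
import cmath

# Sketch of the Joukowsky Transformation w = z + 1/z: for z = e^{i theta}
# on the unit circle, w = 2 cos(theta), i.e. the real segment [-2, 2].
f = lambda z: z + 1 / z

ws = [f(cmath.exp(2j * cmath.pi * k / 8)) for k in range(8)]
for w in ws:
    print(w)   # all values are (numerically) real and lie in [-2, 2]
```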
Separation of Variables

This method assumes the solution to a PDE, u(x, y), to be decomposed as:

\[
u(x, y) = f(x) \, g(y) . \tag{167}
\]

Consider the 1D diffusive heat equation in a finite bar of length L, stated as:

\[
\frac{\partial T}{\partial t} = \alpha \frac{\partial^2 T}{\partial x^2} ,
\qquad \text{BC: } T(0, t) = 0 \text{ and } T(L, t) = 0 ,
\qquad \text{IC: } T(x, 0) = \Phi(x) . \tag{168}
\]

It is assumed that the solution takes the form in equation (167), such that upon
substitution into the PDE it results that:

\[
g_n(t) = b_n \, e^{-\alpha \frac{n^2 \pi^2}{L^2} t} , \quad
f_n(x) = a_n \sin\!\left( \frac{n \pi x}{L} \right) ,
\quad \text{where } c_n = \frac{n \pi}{L} , \; n \in \mathbb{N} . \tag{172}
\]
Hence the final result can be stated as:

\[
T(x, t) = \sum_{n=1}^{\infty} d_n \sin\!\left( \frac{n \pi x}{L} \right) e^{-\alpha \frac{n^2 \pi^2}{L^2} t} ,
\quad \text{with } d_n = a_n b_n , \tag{173}
\]

where the coefficients d_n can be found by a Fourier series integral on the IC,

\[
T(x, 0) = \Phi(x) = \sum_{n=1}^{\infty} d_n \sin\!\left( \frac{n \pi x}{L} \right) . \tag{174}
\]

Note the high decay rate (∝ n²) of high frequencies: the heat equation quickly
dampens high frequency signals, while low frequency components decay considerably
slower. Both the wave and Laplace's equation can be solved in an analogous
manner, but only for selected geometries and boundary conditions.
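The series solution (173) can be evaluated numerically. The sketch below computes the coefficients dₙ by quadrature for the illustrative IC Φ(x) = sin(πx/L), for which d₁ = 1 and all other coefficients vanish (α and L set to 1 for simplicity):

```python
import math

# Sketch of the separation-of-variables series for the 1D heat equation
# with the illustrative IC Phi(x) = sin(pi x / L): d_1 = 1, all other
# d_n vanish, so T(x, t) = sin(pi x / L) exp(-alpha pi^2 t / L^2).
alpha, L = 1.0, 1.0
Phi = lambda x: math.sin(math.pi * x / L)

def d_coeff(n, m=2000):
    # Fourier sine coefficient d_n = (2/L) * int_0^L Phi(x) sin(n pi x/L) dx,
    # approximated by the midpoint rule with m panels
    s = sum(Phi((k + 0.5) * L / m) * math.sin(n * math.pi * (k + 0.5) / m)
            for k in range(m))
    return 2.0 / m * s

def T(x, t, nmax=10):
    return sum(d_coeff(n) * math.sin(n * math.pi * x / L)
               * math.exp(-alpha * n**2 * math.pi**2 * t / L**2)
               for n in range(1, nmax + 1))

print(d_coeff(1), d_coeff(2))   # ~1 and ~0
print(T(0.5, 0.1), math.exp(-alpha * math.pi**2 * 0.1))  # both agree
```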