

Mathematics Review
Applied Computational Fluid Dynamics
Instructor: André Bakker
http://www.bakker.org
André Bakker (2002-2006)
Vector Notation - 1

Scalar ("inner") product: $\mathbf{X} \cdot \mathbf{Y} = x_i y_i$

Vector ("outer") product: $\mathbf{X} \times \mathbf{Y} = \mathbf{Z}$

Dyadic ("tensor") product: $\mathbf{X}\mathbf{Y} = \mathbf{A}$, with $a_{ij} = x_i y_j$

Multiplication: $f\,\mathbf{X} = \mathbf{Y}$, with $y_i = f\,x_i$

Double dot ("inner") product: $\mathbf{A} : \mathbf{B} = a_{ij} b_{ji}$
Vector Notation - 2

Scalar product: $\mathbf{A} \cdot \mathbf{X} = \mathbf{Y}$, with $y_i = a_{ik} x_k$

Scalar product: $\mathbf{A} \cdot \mathbf{B} = \mathbf{C}$, with $c_{ij} = a_{ik} b_{kj}$

Kronecker product: $\mathbf{A} \otimes \mathbf{B} = \mathbf{F}$, with $F_{ij} = a_{ij}\,\mathbf{B}$ (each element of F is the block $a_{ij}\mathbf{B}$)
Gradients and divergence

The gradient of a scalar f is a vector:

$\operatorname{grad} f = \nabla f = \frac{\partial f}{\partial x}\mathbf{i} + \frac{\partial f}{\partial y}\mathbf{j} + \frac{\partial f}{\partial z}\mathbf{k}$

Similarly, the gradient of a vector is a second-order tensor, and the gradient of a second-order tensor is a third-order tensor.

The divergence of a vector is a scalar:

$\operatorname{div} \mathbf{A} = \nabla \cdot \mathbf{A} = \frac{\partial A_x}{\partial x} + \frac{\partial A_y}{\partial y} + \frac{\partial A_z}{\partial z}$
Curl

The curl (rotation) of a vector $\mathbf{v} = (u, v, w)$ is another vector:

$\operatorname{curl}\,\mathbf{v} = \operatorname{rot}\,\mathbf{v} = \nabla \times \mathbf{v} = \left( \frac{\partial w}{\partial y} - \frac{\partial v}{\partial z},\;\; \frac{\partial u}{\partial z} - \frac{\partial w}{\partial x},\;\; \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \right)$
Definitions - rectangular coordinates

$\nabla \times \mathbf{A} = \left(\frac{\partial A_z}{\partial y} - \frac{\partial A_y}{\partial z}\right)\mathbf{i} + \left(\frac{\partial A_x}{\partial z} - \frac{\partial A_z}{\partial x}\right)\mathbf{j} + \left(\frac{\partial A_y}{\partial x} - \frac{\partial A_x}{\partial y}\right)\mathbf{k}$

$\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2}$

$\nabla^2 \mathbf{A} = \nabla^2 A_x\,\mathbf{i} + \nabla^2 A_y\,\mathbf{j} + \nabla^2 A_z\,\mathbf{k}$

$\nabla f = \frac{\partial f}{\partial x}\mathbf{i} + \frac{\partial f}{\partial y}\mathbf{j} + \frac{\partial f}{\partial z}\mathbf{k}$

$\nabla \cdot \mathbf{A} = \frac{\partial A_x}{\partial x} + \frac{\partial A_y}{\partial y} + \frac{\partial A_z}{\partial z}$
Identities

$\nabla(fg) = f\,\nabla g + g\,\nabla f$

$\nabla(\mathbf{A} \cdot \mathbf{B}) = (\mathbf{A} \cdot \nabla)\mathbf{B} + (\mathbf{B} \cdot \nabla)\mathbf{A} + \mathbf{A} \times (\nabla \times \mathbf{B}) + \mathbf{B} \times (\nabla \times \mathbf{A})$

$\nabla \cdot (f\mathbf{A}) = f(\nabla \cdot \mathbf{A}) + \mathbf{A} \cdot \nabla f$

$\nabla \cdot (\mathbf{A} \times \mathbf{B}) = \mathbf{B} \cdot (\nabla \times \mathbf{A}) - \mathbf{A} \cdot (\nabla \times \mathbf{B})$

$\nabla \times (f\mathbf{A}) = f(\nabla \times \mathbf{A}) + \nabla f \times \mathbf{A}$

$\nabla \times (\mathbf{A} \times \mathbf{B}) = (\mathbf{B} \cdot \nabla)\mathbf{A} - (\mathbf{A} \cdot \nabla)\mathbf{B} + \mathbf{A}(\nabla \cdot \mathbf{B}) - \mathbf{B}(\nabla \cdot \mathbf{A})$
Identities

$\nabla \times (\nabla \times \mathbf{A}) = \nabla(\nabla \cdot \mathbf{A}) - \nabla^2 \mathbf{A}$

$(\mathbf{A} \cdot \nabla)\mathbf{B} = \left(A_x \frac{\partial B_x}{\partial x} + A_y \frac{\partial B_x}{\partial y} + A_z \frac{\partial B_x}{\partial z}\right)\mathbf{i} + \left(A_x \frac{\partial B_y}{\partial x} + A_y \frac{\partial B_y}{\partial y} + A_z \frac{\partial B_y}{\partial z}\right)\mathbf{j} + \left(A_x \frac{\partial B_z}{\partial x} + A_y \frac{\partial B_z}{\partial y} + A_z \frac{\partial B_z}{\partial z}\right)\mathbf{k}$
$\int_V \nabla f \, d\tau = \oint_S f \, d\mathbf{a} \qquad \int_V (\nabla \times \mathbf{A}) \, d\tau = \oint_S d\mathbf{a} \times \mathbf{A}$

where S is the surface which bounds the volume V.
Differentiation rules
Integral theorems

Gauss' divergence theorem:

$\int_V (\nabla \cdot \mathbf{A}) \, d\tau = \oint_S \mathbf{A} \cdot d\mathbf{s}$

where S is the surface which bounds the volume V.

Stokes' theorem:

$\oint_C \mathbf{A} \cdot d\mathbf{l} = \int_S (\nabla \times \mathbf{A}) \cdot d\mathbf{s}$

where C is the closed curve which bounds the open surface S (i.e. S may be a 3-D surface but does not bound a volume).
Euclidean Norm

Various definitions exist for the norm of vectors and matrices. The best known is the Euclidean norm.

The Euclidean norm ||V|| of a vector V is:

$\|\mathbf{V}\| = \sqrt{\sum_i v_i^2}$

The Euclidean norm ||A|| of a matrix A is:

$\|\mathbf{A}\| = \sqrt{\sum_{i,j} a_{ij}^2}$

Other norms are the maximum norm, which is the maximum of the absolute values of the individual elements, and the Hölder norm, which is similar to the Euclidean norm but uses exponents p and 1/p instead of 2 and 1/2, with p a number larger than or equal to 1.
Matrices - Miscellaneous

The determinant of a matrix A with elements $a_{ij}$ and i=3 rows and j=3 columns:

$\det \mathbf{A} = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} - a_{12}a_{21}a_{33} - a_{11}a_{23}a_{32}$

A diagonal matrix is a matrix where all elements are zero except $a_{11}$, $a_{22}$, and $a_{33}$. For a tri-diagonal matrix the diagonals immediately above and below the main diagonal are also non-zero, while all other elements are zero.

Triangular decomposition expresses A as the product LU, with L a lower-triangular matrix (elements above the diagonal are 0) and U an upper-triangular matrix.

The transpose $\mathbf{A}^T$ has elements $(A^T)_{ij} = a_{ji}$. A matrix is symmetric if $\mathbf{A}^T = \mathbf{A}$.

A sparse matrix is a matrix in which the vast majority of elements are zero and only a few elements are non-zero.
A
13
Matrix invariants - 1
An invariant is a scalar property of a matrix that is independent of the coordinate
system in which the matrix is written.
The first invariant I
1
of a matrix A is the trace tr A. This is simply the sum of the
diagonal components: I
1
= tr A = a
11
+ a
22
+ a
33
The second invariant is:
The third invariant is the determinant of the matrix: I
3
= det A.
The three invariants are the simplest possible linear, quadratic, and cubic
combinations of the eigenvalues that do not depend on their ordering.
33 13
31 11
22 12
21 11
33 23
32 22
2
a a
a a
a a
a a
a a
a a
I + + =
Matrix invariants - 2

Using the previous definitions we can form infinitely many other invariants, e.g.:

$I_1 = a_{ii} \qquad I_2 = \tfrac{1}{2}\left(I_1^2 - a_{ik}a_{ik}\right)$

In CFD literature, one will sometimes encounter the following alternative definitions of the second invariant (which are derived from the previous definitions). For symmetric matrices:

$I_2 = \tfrac{1}{2}\left[(\operatorname{tr}\mathbf{A})^2 - \operatorname{tr}\mathbf{A}^2\right] = a_{11}a_{22} + a_{22}a_{33} + a_{33}a_{11} - a_{12}^2 - a_{23}^2 - a_{31}^2$

or (for traceless matrices, such as the deviatoric rate-of-strain tensor):

$-I_2 = \tfrac{1}{6}\left[(a_{11}-a_{22})^2 + (a_{22}-a_{33})^2 + (a_{33}-a_{11})^2\right] + a_{12}^2 + a_{23}^2 + a_{31}^2$

The Euclidean norm:

$I = \|\mathbf{A}\|^2 = \sum_{i,j} a_{ij}^2$
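As a quick numerical cross-check of these invariants (a sketch, not from the original slides; the example matrix is made up), the invariants can be compared against the elementary symmetric functions of the eigenvalues:

```python
import numpy as np

# Check the three matrix invariants against the eigenvalues (illustrative sketch).
A = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.5],
              [2.0, 0.5, 5.0]])  # made-up symmetric example matrix

I1 = np.trace(A)
I2 = 0.5 * (np.trace(A)**2 - np.trace(A @ A))
I3 = np.linalg.det(A)

# The invariants are the elementary symmetric functions of the eigenvalues,
# so they must match these combinations regardless of eigenvalue ordering:
lam = np.linalg.eigvals(A)
assert np.isclose(I1, lam.sum())
assert np.isclose(I2, lam[0]*lam[1] + lam[1]*lam[2] + lam[2]*lam[0])
assert np.isclose(I3, lam.prod())
```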
Gauss-Seidel method

The Gauss-Seidel method is a numerical method to solve the following set of linear equations:

$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \dots + a_{1N}x_N = C_1$
$\vdots$
$a_{N1}x_1 + a_{N2}x_2 + a_{N3}x_3 + \dots + a_{NN}x_N = C_N$

We first make an initial guess for $x_1$:

$x_1^1 = \frac{C_1}{a_{11}}$

The superscript 1 denotes the 1st iteration. Next, using $x_1^1$:

$x_2^1 = \frac{C_2}{a_{22}} - \frac{1}{a_{22}}\left(a_{21}x_1^1\right)$
Gauss-Seidel method - continued

Next, using $x_1^1$ and $x_2^1$:

$x_3^1 = \frac{C_3}{a_{33}} - \frac{1}{a_{33}}\left(a_{31}x_1^1 + a_{32}x_2^1\right)$

And continue, until:

$x_N^1 = \frac{C_N}{a_{NN}} - \frac{1}{a_{NN}} \sum_{i=1}^{N-1} a_{Ni}\,x_i^1$

For all consecutive iterations we solve for $x_1^2$, using $x_2^1 \dots x_N^1$, and next for $x_2^2$ using $x_1^2, x_3^1 \dots x_N^1$, etc.

We repeat this process until convergence, i.e. until:

$\left|x_i^k - x_i^{k-1}\right| < \varepsilon$

with $\varepsilon$ a specified small value.
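A minimal Python sketch of this iteration may make the loop structure concrete (illustrative only; the matrix, right-hand side, and tolerance are made-up examples, and convergence is only guaranteed for suitable matrices, e.g. diagonally dominant ones):

```python
import numpy as np

def gauss_seidel(A, C, eps=1e-10, max_iter=1000):
    """Solve A x = C with Gauss-Seidel iteration (illustrative sketch)."""
    N = len(C)
    x = np.zeros(N)  # initial guess x_i = 0
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(N):
            # Use the newest values x[0..i-1] and the previous values x[i+1..N-1].
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
            x[i] = (C[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < eps:  # |x_i^k - x_i^{k-1}| < eps for all i
            break
    return x

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])  # diagonally dominant, so the iteration converges
C = np.array([2.0, 4.0, 10.0])
print(gauss_seidel(A, C))  # should agree with np.linalg.solve(A, C)
```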
Gauss-Seidel method - continued

It is possible to improve the speed at which this system of equations is solved by applying overrelaxation, or to improve the stability if the system does not converge by applying underrelaxation.

Say at iteration k the value of $x_i$ equals $x_i^k$. If applying the Gauss-Seidel method, the value for iteration k+1 would be $x_i^{k+1}$; then, instead of using $x_i^{k+1}$, we consider this to be a predictor.

We then calculate a corrector as follows:

$\text{corrector} = R\left(x_i^{k+1} - x_i^k\right)$

Here R is the relaxation factor (R > 0). If R < 1 we use underrelaxation and if R > 1 we use overrelaxation.

Next we recalculate $x_i^{k+1}$ as follows:

$x_i^{k+1} = x_i^k + \text{corrector}$
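In code, the relaxed update is a small change to the Gauss-Seidel sketch above (a hypothetical helper; the values in the usage line are made up):

```python
def relaxed_update(x_old_i, x_pred_i, R):
    """Combine the Gauss-Seidel predictor with relaxation factor R.
    R < 1: underrelaxation (more stable); R > 1: overrelaxation (faster)."""
    corrector = R * (x_pred_i - x_old_i)
    return x_old_i + corrector

print(relaxed_update(1.0, 1.5, 0.7))  # moves 70% of the way to the predictor
```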
Gauss elimination

Consider the same set of algebraic equations shown in the Gauss-Seidel discussion. Consider the matrix A:

$\mathbf{A} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}$

The heart of the algorithm is the technique for eliminating all the elements below the diagonal, i.e. replacing them with zeros, to create an upper triangular matrix:

$\mathbf{U} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{pmatrix}$
Gauss elimination - continued

This is done by multiplying the first row by $a_{21}/a_{11}$ and subtracting it from the second row. Note that $C_2$ then becomes $C_2 - C_1 a_{21}/a_{11}$. The other elements $a_{31}$ through $a_{n1}$ are treated similarly. Now all elements in the first column below $a_{11}$ are 0.

This process is then repeated for all columns. This process is called forward elimination.

Once the upper triangular matrix has been created, the last equation contains only one variable $x_n$, which is readily calculated as $x_n = C_n/a_{nn}$.

This value can then be substituted into equation n-1 to calculate $x_{n-1}$, and the process can be repeated to calculate all variables $x_i$. This is called backsubstitution.

The number of operations required for this method increases in proportion to $n^3$. For large matrices this can be a computationally expensive method.
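The two phases translate directly into code. A minimal sketch (no pivoting, so it assumes the diagonal never becomes zero; the example system is made up):

```python
import numpy as np

def gauss_solve(A, C):
    """Solve A x = C by forward elimination + backsubstitution (sketch)."""
    A = A.astype(float).copy()
    C = C.astype(float).copy()
    n = len(C)
    # Forward elimination: zero out the elements below the diagonal.
    for j in range(n - 1):            # for each column j
        for i in range(j + 1, n):     # for each row below the diagonal
            m = A[i, j] / A[j, j]
            A[i, j:] -= m * A[j, j:]
            C[i] -= m * C[j]          # C_i becomes C_i - C_j * a_ij / a_jj
    # Backsubstitution: solve the upper triangular system from the bottom up.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (C[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0],
              [3.0, 3.0,  9.0],
              [3.0, 3.0,  5.0]])
C = np.array([8.0, 0.0, 4.0])
print(gauss_solve(A, C))  # compare with np.linalg.solve(A, C)
```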
Tridiagonal matrix algorithm (TDMA)

TDMA is an algorithm similar to Gauss elimination for tridiagonal matrices, i.e. matrices for which only the main diagonal and the diagonals immediately above and below it are non-zero.

This system can be written as:

$a_{i,i-1}\,x_{i-1} + a_{i,i}\,x_i + a_{i,i+1}\,x_{i+1} = C_i$

Only one element needs to be replaced by a zero on each row to create an upper triangular matrix. When the algorithm reaches the i-th row, only $a_{i,i}$ and $C_i$ need to be modified:

$a_{i,i} \leftarrow a_{i,i} - \frac{a_{i,i-1}\,a_{i-1,i}}{a_{i-1,i-1}} \qquad C_i \leftarrow C_i - \frac{a_{i,i-1}\,C_{i-1}}{a_{i-1,i-1}}$

Backsubstitution is then used to calculate all $x_i$.

The computational effort scales with n, and this is an efficient method to solve this set of equations.
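A sketch of this algorithm (often called the Thomas algorithm) in Python; the example system is a made-up 1-D Poisson-type problem:

```python
import numpy as np

def tdma(lower, diag, upper, C):
    """Tridiagonal solve (illustrative sketch).
    lower[i] = a_{i,i-1}, diag[i] = a_{i,i}, upper[i] = a_{i,i+1}."""
    n = len(diag)
    d = diag.astype(float).copy()
    c = C.astype(float).copy()
    # Forward sweep: eliminate the sub-diagonal, modifying only d[i] and c[i].
    for i in range(1, n):
        m = lower[i] / d[i - 1]
        d[i] -= m * upper[i - 1]
        c[i] -= m * c[i - 1]
    # Backsubstitution.
    x = np.zeros(n)
    x[-1] = c[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (c[i] - upper[i] * x[i + 1]) / d[i]
    return x

# Example: -x_{i-1} + 2 x_i - x_{i+1} = 1 (a 1-D Poisson-type system).
n = 5
lower = np.full(n, -1.0); lower[0] = 0.0   # a_{0,-1} does not exist
upper = np.full(n, -1.0); upper[-1] = 0.0  # a_{n-1,n} does not exist
diag = np.full(n, 2.0)
print(tdma(lower, diag, upper, np.ones(n)))
```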
Differential equations

Ordinary differential equation (ODE): an equation which, other than the one independent variable x and the dependent variable y, also contains derivatives of y with respect to x. General form:

$F(x, y, y', y'', \dots, y^{(n)}) = 0$

The order of the equation is determined by the order n of the highest-order derivative.

A partial differential equation (PDE) has two or more independent variables. A PDE with two independent variables has the following form:

$F\left(x,\, y,\, z,\, \frac{\partial z}{\partial x},\, \frac{\partial z}{\partial y},\, \frac{\partial^2 z}{\partial x^2},\, \frac{\partial^2 z}{\partial x \partial y},\, \frac{\partial^2 z}{\partial y^2},\, \dots\right) = 0$

with z = z(x,y). The order is again determined by the order of the highest-order partial derivative in the equation. Methods such as Laplace transformations or separation of variables can sometimes be used to express PDEs as sets of ODEs. These will not be discussed here.
Classification of partial differential equations

A general partial differential equation in coordinates x and y:

$a\frac{\partial^2 \phi}{\partial x^2} + b\frac{\partial^2 \phi}{\partial x \partial y} + c\frac{\partial^2 \phi}{\partial y^2} + d\frac{\partial \phi}{\partial x} + e\frac{\partial \phi}{\partial y} + f\phi + g = 0$

Characterization depends on the roots of the highest order (here second order) terms:

$(b^2 - 4ac) > 0$: the equation is called hyperbolic.
$(b^2 - 4ac) = 0$: the equation is called parabolic.
$(b^2 - 4ac) < 0$: the equation is called elliptic.

Note: if a, b, and c themselves depend on x and y, the equations may be of different type at different locations in x-y space. In that case the equations are of mixed type.
Origin of the terms

The origin of the terms "elliptic," "parabolic," and "hyperbolic" used to label these equations is simply a direct analogy with the case of conic sections. The general equation for a conic section from analytic geometry is:

$a x^2 + b x y + c y^2 + d x + e y + f = 0$

where if
$(b^2 - 4ac) > 0$ the conic is a hyperbola,
$(b^2 - 4ac) = 0$ the conic is a parabola,
$(b^2 - 4ac) < 0$ the conic is an ellipse.
Numerical integration methods

Ordinary differential equation:

$\frac{d\phi(t)}{dt} = f(t, \phi(t)); \qquad \phi(t_0) = \phi_0$

Here f is a known function and $\phi_0$ is the initial point. The basic problem is how to calculate the solution a short time $\Delta t$ after the initial point, at time $t_1 = t_0 + \Delta t$. This process can then be repeated to calculate the solution at $t_2$, etc.

The simplest method is to calculate the solution at $t_1$ by adding $f(t_0, \phi_0)\,\Delta t$ to $\phi_0$. This is called the explicit or forward Euler method, generally expressed as:

$\phi^{n+1} = \phi^n + f(t^n, \phi^n)\,\Delta t \qquad t^{n+1} = t^n + \Delta t$
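A minimal forward Euler sketch (the test equation, step size, and step count are made-up examples):

```python
import numpy as np

def forward_euler(f, t0, phi0, dt, n_steps):
    """Explicit (forward) Euler integration of d(phi)/dt = f(t, phi)."""
    t, phi = t0, phi0
    history = [(t, phi)]
    for _ in range(n_steps):
        phi = phi + f(t, phi) * dt   # phi^{n+1} = phi^n + f(t^n, phi^n) * dt
        t = t + dt
        history.append((t, phi))
    return history

# Example: d(phi)/dt = -phi, phi(0) = 1; the exact solution is exp(-t).
sol = forward_euler(lambda t, phi: -phi, 0.0, 1.0, 0.1, 10)
print(sol[-1], np.exp(-1.0))  # numerical vs. exact value at t = 1
```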
Numerical integration methods

Another method is the trapezoid rule, which forms the basis of a popular method to solve differential equations called the Crank-Nicolson method:

$\phi^{n+1} = \phi^n + \tfrac{1}{2}\left[f(t^n, \phi^n) + f(t^{n+1}, \phi^{n+1})\right]\Delta t$

Methods using points between $t^n$ and $t^{n+1}$ are called Runge-Kutta methods, which come in various forms. The simplest one is second-order Runge-Kutta:

$\phi^*_{n+1/2} = \phi^n + \tfrac{1}{2}\Delta t\, f(t^n, \phi^n)$
$\phi^{n+1} = \phi^n + \Delta t\, f(t^{n+1/2}, \phi^*_{n+1/2})$
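For comparison with the Euler sketch above, a second-order Runge-Kutta (midpoint) step might look like this (same made-up test equation):

```python
def rk2_step(f, t, phi, dt):
    """One second-order Runge-Kutta (midpoint) step for d(phi)/dt = f(t, phi)."""
    phi_half = phi + 0.5 * dt * f(t, phi)        # predictor at t + dt/2
    return phi + dt * f(t + 0.5 * dt, phi_half)  # full step using the midpoint slope

phi, t, dt = 1.0, 0.0, 0.1
for _ in range(10):
    phi = rk2_step(lambda t, phi: -phi, t, phi, dt)
    t += dt
print(phi)  # closer to exp(-1) than the forward Euler result
```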
Numerically estimating zero-crossings (use recursively):

Linear interpolation (regula falsi)
Method of Newton-Raphson

(The original slide illustrates both methods graphically.)
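As a concrete illustration of the second method, a minimal Newton-Raphson sketch (the test function, its derivative, the starting point, and the tolerance are made-up examples):

```python
def newton_raphson(f, dfdx, x0, eps=1e-12, max_iter=50):
    """Find a zero of f near x0 using Newton-Raphson (illustrative sketch)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / dfdx(x)   # next estimate is x - f(x)/f'(x)
        x -= step
        if abs(step) < eps:
            break
    return x

# Example: the positive zero-crossing of f(x) = x^2 - 2 is sqrt(2).
print(newton_raphson(lambda x: x*x - 2.0, lambda x: 2.0*x, 1.0))
```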
Jacobian

The general definition of the Jacobian for n functions of n variables is the following set of partial derivatives:

$\frac{\partial(f_1, f_2, \dots, f_n)}{\partial(x_1, x_2, \dots, x_n)} = \begin{pmatrix} \dfrac{\partial f_1}{\partial x_1} & \dfrac{\partial f_1}{\partial x_2} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\ \dfrac{\partial f_2}{\partial x_1} & \dfrac{\partial f_2}{\partial x_2} & \cdots & \dfrac{\partial f_2}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial f_n}{\partial x_1} & \dfrac{\partial f_n}{\partial x_2} & \cdots & \dfrac{\partial f_n}{\partial x_n} \end{pmatrix}$

In CFD, the shear stress tensor $S_{ij} = \partial U_i / \partial x_j$ is also called the Jacobian.
Jacobian - continued

The Jacobian can be used to calculate the derivatives of a function in one coordinate system from the derivatives of that same function in another coordinate system.

Given u = f(x,y) and v = g(x,y), with $f_x = \partial f/\partial x$, $f_y = \partial f/\partial y$, $g_x = \partial g/\partial x$, and $g_y = \partial g/\partial y$, x and y can be determined as functions of u and v (possessing first partial derivatives) as follows:

$\frac{\partial x}{\partial u} = g_y \bigg/ \begin{vmatrix} f_x & f_y \\ g_x & g_y \end{vmatrix} \qquad \frac{\partial y}{\partial u} = -g_x \bigg/ \begin{vmatrix} f_x & f_y \\ g_x & g_y \end{vmatrix}$

with similar expressions for $x_v$ and $y_v$. The determinants in the denominators are examples of the use of Jacobians.
Eigenvalues

If an equation with an adjustable parameter has non-trivial solutions only for specific values of that parameter, those values are called the eigenvalues and the corresponding functions the eigenfunctions. Similarly, if a differential equation with an adjustable parameter only has a solution for certain values of that parameter, those values are called the eigenvalues of the differential equation.

For an n×n matrix A, if $\mathbf{A}\mathbf{z} = \lambda\mathbf{z}$, then z is an eigenvector and $\lambda$ is an eigenvalue. The eigenvalues are the n roots of the characteristic equation:

$\det(\lambda\mathbf{I} - \mathbf{A}) = \lambda^n + p_1\lambda^{n-1} + \dots + p_n = 0$

$(\lambda\mathbf{I} - \mathbf{A})$ is the characteristic matrix of A, and the polynomial is called the characteristic polynomial of A.

The product of all the eigenvalues of A is equal to det A. The sum of all the eigenvalues is equal to tr A. The matrix is singular if at least one eigenvalue is zero.
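The last two properties are easy to verify numerically (a sketch with a made-up matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])  # made-up example matrix
lam = np.linalg.eigvals(A)

assert np.isclose(lam.prod(), np.linalg.det(A))  # product of eigenvalues = det A
assert np.isclose(lam.sum(), np.trace(A))        # sum of eigenvalues = tr A
```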
Taylor series

Let f(x) have continuous derivatives up to the (n+1)-st order in some interval containing the point a. Then:

$f(x) = f(a) + \frac{f'(a)}{1!}(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \dots + \frac{f^{(n)}(a)}{n!}(x-a)^n + \dots$
Error function

The error function is defined as:

$\operatorname{erf}(z) = \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2}\,dt$

Complementary error function: $\operatorname{erfc}(z) = 1 - \operatorname{erf}(z)$

It is the integral of the Gaussian (normal) distribution. It is usually calculated from series expansions.

Properties are:

$\operatorname{erf}(0) = 0$
$\operatorname{erf}(\infty) = 1$
$\operatorname{erf}(-z) = -\operatorname{erf}(z)$
$\dfrac{d\,\operatorname{erf}(z)}{dz} = \dfrac{2}{\sqrt{\pi}} \exp\left(-z^2\right)$
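A quick numerical cross-check of the definition, comparing direct quadrature of the Gaussian against scipy.special.erf (a sketch; the evaluation point is arbitrary):

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import quad

z = 1.0
integral, _ = quad(lambda t: np.exp(-t**2), 0.0, z)  # integral of exp(-t^2) from 0 to z
print(2.0 / np.sqrt(np.pi) * integral, erf(z))       # both should be ~0.8427
```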
Permutation symbol

The permutation symbol $e_{kmn}$ resembles a third-order tensor.

If the number of transpositions required to bring k, m, and n into the form 1, 2, 3 is even, then $e_{kmn} = 1$. If the number of transpositions required is odd, then $e_{kmn} = -1$. Otherwise $e_{kmn} = 0$.

Thus:

$e_{123} = e_{231} = e_{312} = 1$
$e_{132} = e_{321} = e_{213} = -1$
all other elements are zero

Instead of $e_{kmn}$, the permutation symbol is also often written as $\varepsilon_{kmn}$.
Correlation functions

Continuous signals. Let x(t) and y(t) be two signals. Then the correlation function $\phi_{xy}(t)$ is defined as:

$\phi_{xy}(t) = \int_{-\infty}^{+\infty} x(\tau)\, y(t+\tau)\, d\tau$

The function $\phi_{xx}(t)$ is usually referred to as the autocorrelation function of the signal, while the function $\phi_{xy}(t)$ is usually called the cross-correlation function.

Discrete time series. Let x[n] and y[n] be two real-valued discrete-time signals. The autocorrelation function $\phi_{xx}[n]$ of x[n] is defined as:

$\phi_{xx}[n] = \sum_{m=-\infty}^{+\infty} x[m+n]\, x[m]$

The cross-correlation function is defined as:

$\phi_{xy}[n] = \sum_{m=-\infty}^{+\infty} x[m+n]\, y[m]$
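For finite discrete series, numpy.correlate evaluates exactly this sum over the available overlap (a small sketch with made-up signals):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 1.0, 0.5])

# np.correlate(x, y, 'full')[k] corresponds to sum_m x[m+n] y[m] over the
# finite overlap, i.e. the cross-correlation phi_xy[n] at all possible lags.
phi_xy = np.correlate(x, y, mode='full')
phi_xx = np.correlate(x, x, mode='full')  # autocorrelation
print(phi_xy, phi_xx)
```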
Fourier transforms

Fourier transforms are used to decompose a signal into a sum of complex exponentials (i.e. sinusoidal signals) at different frequencies. The general form is:

$x(t) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} X(\omega)\, e^{i\omega t}\, d\omega$
$X(\omega) = \int_{-\infty}^{+\infty} x(t)\, e^{-i\omega t}\, dt$

$X(\omega)$ is the Fourier transform of x(t). It is also called the spectrum, because it shows the amplitude (energy) associated with each frequency present in the signal. $X(\omega)$ is a complex function with real and imaginary parts. The squared magnitude $|X(\omega)|^2$ is called the power spectrum.

Slightly different forms exist for continuous, discrete, periodic, and aperiodic signals.
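A discrete analogue using numpy's FFT (a sketch; the signal and sample rate are made-up examples):

```python
import numpy as np

fs = 100.0                        # sample rate in Hz (made up)
t = np.arange(0, 1.0, 1.0 / fs)   # 1 second of samples
x = np.sin(2 * np.pi * 5.0 * t)   # 5 Hz sinusoid

X = np.fft.rfft(x)                          # spectrum (complex-valued)
freqs = np.fft.rfftfreq(len(x), 1.0 / fs)   # frequency axis
power = np.abs(X) ** 2                      # power spectrum
print(freqs[np.argmax(power)])              # peak at ~5 Hz
```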