$$\arctan(x) = \arcsin\left(\frac{x}{\sqrt{x^2+1}}\right) = \arccos\left(\frac{1}{\sqrt{x^2+1}}\right)\;,\qquad \sin(\arccos(x)) = \sqrt{1-x^2}$$
1.2 Hyperbolic functions
The hyperbolic functions are defined by:
$$\sinh(x) = \frac{e^x - e^{-x}}{2}\;,\qquad \cosh(x) = \frac{e^x + e^{-x}}{2}\;,\qquad \tanh(x) = \frac{\sinh(x)}{\cosh(x)}$$
From this follows that $\cosh^2(x) - \sinh^2(x) = 1$. Further holds:
$$\operatorname{arsinh}(x) = \ln\left|x + \sqrt{x^2+1}\right|\;,\qquad \operatorname{arcosh}(x) = \operatorname{arsinh}\left(\sqrt{x^2-1}\right)$$
Mathematics Formulary by ir. J.C.A. Wevers
1.3 Calculus
The derivative of a function is defined as:
$$\frac{df}{dx} = \lim_{h\to0}\frac{f(x+h) - f(x)}{h}$$
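The limit above can be approximated numerically by taking a small but finite h; a minimal sketch (the helper name `derivative` is ours, not from the text):

```python
import math

def derivative(f, x, h=1e-6):
    # Symmetric difference quotient; it tends to the same limit as the
    # one-sided (f(x+h) - f(x))/h from the definition, but converges faster.
    return (f(x + h) - f(x - h)) / (2 * h)

print(derivative(math.sin, 0.0))   # close to cos(0) = 1
```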
Derivatives obey the following algebraic rules:
$$d(x \pm y) = dx \pm dy\;,\qquad d(xy) = x\,dy + y\,dx\;,\qquad d\left(\frac{x}{y}\right) = \frac{y\,dx - x\,dy}{y^2}$$
For the derivative of the inverse function $f^{\rm inv}(y)$, defined by $f^{\rm inv}(f(x)) = x$, holds at point $P = (x, f(x))$:
$$\left(\frac{df^{\rm inv}(y)}{dy}\right)_P\cdot\left(\frac{df(x)}{dx}\right)_P = 1$$
Chain rule: if $f = f(g(x))$, then holds:
$$\frac{df}{dx} = \frac{df}{dg}\frac{dg}{dx}$$
Further, for the derivatives of products of functions holds:
$$(f\cdot g)^{(n)} = \sum_{k=0}^{n}\binom{n}{k}f^{(n-k)}g^{(k)}$$
For the primitive function $F(x)$ holds: $F'(x) = f(x)$. An overview of derivatives and primitives:

| $y = f(x)$ | $dy/dx = f'(x)$ | $\int f(x)\,dx$ |
|---|---|---|
| $ax^n$ | $anx^{n-1}$ | $a(n+1)^{-1}x^{n+1}$ |
| $1/x$ | $-x^{-2}$ | $\ln\|x\|$ |
| $a$ | $0$ | $ax$ |
| $a^x$ | $a^x\ln(a)$ | $a^x/\ln(a)$ |
| $e^x$ | $e^x$ | $e^x$ |
| $^a\log(x)$ | $(x\ln(a))^{-1}$ | $(x\ln(x) - x)/\ln(a)$ |
| $\ln(x)$ | $1/x$ | $x\ln(x) - x$ |
| $\sin(x)$ | $\cos(x)$ | $-\cos(x)$ |
| $\cos(x)$ | $-\sin(x)$ | $\sin(x)$ |
| $\tan(x)$ | $\cos^{-2}(x)$ | $-\ln\|\cos(x)\|$ |
| $\sin^{-1}(x)$ | $-\sin^{-2}(x)\cos(x)$ | $\ln\|\tan(\tfrac12x)\|$ |
| $\sinh(x)$ | $\cosh(x)$ | $\cosh(x)$ |
| $\cosh(x)$ | $\sinh(x)$ | $\sinh(x)$ |
| $\arcsin(x)$ | $1/\sqrt{1-x^2}$ | $x\arcsin(x) + \sqrt{1-x^2}$ |
| $\arccos(x)$ | $-1/\sqrt{1-x^2}$ | $x\arccos(x) - \sqrt{1-x^2}$ |
| $\arctan(x)$ | $(1+x^2)^{-1}$ | $x\arctan(x) - \tfrac12\ln(1+x^2)$ |
| $(a+x^2)^{-1/2}$ | $-x(a+x^2)^{-3/2}$ | $\ln\|x + \sqrt{a+x^2}\|$ |
| $(a^2-x^2)^{-1}$ | $2x(a^2-x^2)^{-2}$ | $\tfrac{1}{2a}\ln\|(a+x)/(a-x)\|$ |
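A table row can be spot-checked by differentiating the listed primitive numerically; here the $\arctan(x)$ row (helper names are ours):

```python
import math

# Primitive of arctan(x) from the table: x*arctan(x) - (1/2)ln(1 + x^2).
def F(x):
    return x * math.atan(x) - 0.5 * math.log(1 + x * x)

def num_deriv(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (0.5, 1.0, 2.0):
    print(num_deriv(F, x), math.atan(x))   # the two columns should agree
```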
The curvature $\rho$ of a curve is given by:
$$\rho = \frac{\left(1 + (y')^2\right)^{3/2}}{|y''|}$$
The theorem of De l'Hôpital: if $f(a) = 0$ and $g(a) = 0$, then
$$\lim_{x\to a}\frac{f(x)}{g(x)} = \lim_{x\to a}\frac{f'(x)}{g'(x)}$$
1.4 Limits
$$\lim_{x\to0}\frac{\sin(x)}{x} = 1\;,\quad \lim_{x\to0}\frac{e^x - 1}{x} = 1\;,\quad \lim_{x\to0}\frac{\tan(x)}{x} = 1\;,\quad \lim_{k\to0}(1+k)^{1/k} = e\;,\quad \lim_{x\to\infty}\left(1 + \frac{n}{x}\right)^x = e^n$$
$$\lim_{x\downarrow0} x^a\ln(x) = 0\;,\quad \lim_{x\to\infty}\frac{\ln^p(x)}{x^a} = 0\;,\quad \lim_{x\to0}\frac{\ln(1+ax)}{x} = a\;,\quad \lim_{x\to\infty}\frac{x^p}{a^x} = 0 \text{ if } |a| > 1.$$
$$\lim_{x\to0}\frac{a^x - 1}{x} = \ln(a)\;,\quad \lim_{x\to0}\frac{\arcsin(x)}{x} = 1\;,\quad \lim_{x\to\infty}\sqrt[x]{x} = 1$$
1.5 Complex numbers and quaternions
1.5.1 Complex numbers
The complex number $z = a + bi$ with $a$ and $b \in \mathbb{R}$. $a$ is the real part, $b$ the imaginary part of $z$. $|z| = \sqrt{a^2 + b^2}$.
By definition holds: $i^2 = -1$. Every complex number can be written as $z = |z|\exp(i\varphi)$, with $\tan(\varphi) = b/a$. The complex conjugate of $z$ is defined as $\bar z = z^* = a - bi$. Further holds:
$$\sqrt{z} = \sqrt{|z|}\left(\cos\left(\tfrac12\varphi\right) + i\sin\left(\tfrac12\varphi\right)\right)$$
The following can be derived:
$$|z_1 + z_2| \leq |z_1| + |z_2|\;,\qquad |z_1 - z_2| \geq \big|\,|z_1| - |z_2|\,\big|$$
And from $z = r\exp(i\varphi)$ follows: $\ln(z) = \ln(r) + i\varphi$, where $\ln(z)$ is determined up to a multiple of $2\pi i$: $\ln(z) = \ln(z) \pm 2n\pi i$.
1.5.2 Quaternions
Quaternions are defined as: $z = a + bi + cj + dk$, with $a, b, c, d \in \mathbb{R}$ and $i^2 = j^2 = k^2 = -1$. The products of $i$, $j$, $k$ with each other are given by $ij = -ji = k$, $jk = -kj = i$ and $ki = -ik = j$.
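The product rules above determine the full quaternion product; a minimal sketch storing $a + bi + cj + dk$ as a tuple `(a, b, c, d)` (representation and names are ours):

```python
# Hamilton product, expanded from ij = -ji = k, jk = -kj = i, ki = -ik = j
# and i^2 = j^2 = k^2 = -1.
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(qmul(i, j))   # (0, 0, 0, 1), i.e. k
print(qmul(j, i))   # (0, 0, 0, -1), i.e. -k: the product is not commutative
```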
1.6 Geometry
1.6.1 Triangles
The sine rule is:
$$\frac{a}{\sin(\alpha)} = \frac{b}{\sin(\beta)} = \frac{c}{\sin(\gamma)}$$
Here, $\alpha$ is the angle opposite to $a$, $\beta$ is opposite to $b$ and $\gamma$ opposite to $c$. The cosine rule is: $a^2 = b^2 + c^2 - 2bc\cos(\alpha)$. For each triangle holds: $\alpha + \beta + \gamma = 180^\circ$.
Further holds:
$$\frac{\tan\left(\tfrac12(\alpha+\beta)\right)}{\tan\left(\tfrac12(\alpha-\beta)\right)} = \frac{a+b}{a-b}$$
The surface of a triangle is given by $\tfrac12 ab\sin(\gamma) = \tfrac12 ah_a = \sqrt{s(s-a)(s-b)(s-c)}$ with $h_a$ the perpendicular on $a$ and $s = \tfrac12(a + b + c)$.
1.6.2 Curves
Cycloid: if a circle with radius $a$ rolls along a straight line, the trajectory of a point on this circle has the following parameter equation:
$$x = a(t + \sin(t))\;,\qquad y = a(1 + \cos(t))$$
Epicycloid: if a small circle with radius $a$ rolls along a big circle with radius $R$, the trajectory of a point on the small circle has the following parameter equation:
$$x = a\sin\left(\frac{R+a}{a}t\right) + (R+a)\sin(t)\;,\qquad y = a\cos\left(\frac{R+a}{a}t\right) + (R+a)\cos(t)$$
Hypocycloid: if a small circle with radius $a$ rolls inside a big circle with radius $R$, the trajectory of a point on the small circle has the following parameter equation:
$$x = a\sin\left(\frac{R-a}{a}t\right) + (R-a)\sin(t)\;,\qquad y = a\cos\left(\frac{R-a}{a}t\right) + (R-a)\cos(t)$$
An epicycloid with $a = R$ is called a cardioid. It has the following parameter equation in polar coordinates: $r = 2a[1 - \cos(\varphi)]$.
1.7 Vectors
The inner product is defined by:
$$\vec a\cdot\vec b = \sum_i a_ib_i = |\vec a\,|\,|\vec b\,|\cos(\varphi)$$
where $\varphi$ is the angle between $\vec a$ and $\vec b$. The external product is given by:
$$\vec a\times\vec b = \begin{pmatrix} a_yb_z - a_zb_y\\ a_zb_x - a_xb_z\\ a_xb_y - a_yb_x \end{pmatrix} = \begin{vmatrix} \vec e_x & \vec e_y & \vec e_z\\ a_x & a_y & a_z\\ b_x & b_y & b_z \end{vmatrix}$$
Further holds: $|\vec a\times\vec b\,| = |\vec a\,|\,|\vec b\,|\sin(\varphi)$, and $\vec a\times(\vec b\times\vec c\,) = (\vec a\cdot\vec c\,)\vec b - (\vec a\cdot\vec b\,)\vec c$.
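The triple-product identity can be verified numerically on arbitrary sample vectors; a plain-Python sketch (helper names are ours):

```python
# Check a x (b x c) = (a.c) b - (a.b) c componentwise.
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

a, b, c = (1.0, 2.0, 3.0), (-1.0, 0.5, 4.0), (2.0, -2.0, 1.0)
lhs = cross(a, cross(b, c))
rhs = tuple(dot(a, c) * bi - dot(a, b) * ci for bi, ci in zip(b, c))
print(lhs, rhs)   # the two triples should match
```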
1.8 Series
1.8.1 Expansion
The Binomium of Newton is:
$$(a+b)^n = \sum_{k=0}^{n}\binom{n}{k}a^{n-k}b^k \qquad\text{where}\qquad \binom{n}{k} := \frac{n!}{k!(n-k)!}.$$
By subtracting the series $\sum\limits_{k=0}^{n} r^k$ and $r\sum\limits_{k=0}^{n} r^k$ one finds:
$$\sum_{k=0}^{n} r^k = \frac{1 - r^{n+1}}{1 - r}$$
and for $|r| < 1$ this gives the geometric series: $\sum\limits_{k=0}^{\infty} r^k = \dfrac{1}{1-r}$.
The arithmetic series is given by: $\sum\limits_{n=0}^{N}(a + nV) = a(N+1) + \tfrac12N(N+1)V$.
The expansion of a function around the point $a$ is given by the Taylor series:
$$f(x) = f(a) + (x-a)f'(a) + \frac{(x-a)^2}{2}f''(a) + \cdots + \frac{(x-a)^n}{n!}f^{(n)}(a) + R$$
where the remainder is given by:
$$R_n(h) = (1-\theta)^n\frac{h^n}{n!}f^{(n+1)}(\theta h)$$
and is subject to:
$$\frac{mh^{n+1}}{(n+1)!} \leq R_n(h) \leq \frac{Mh^{n+1}}{(n+1)!}$$
From this one can deduce that:
$$(1-x)^{\alpha} = \sum_{n=0}^{\infty}\binom{\alpha}{n}(-x)^n$$
One can derive that:
$$\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}\;,\qquad \sum_{n=1}^{\infty}\frac{1}{n^4} = \frac{\pi^4}{90}\;,\qquad \sum_{n=1}^{\infty}\frac{1}{n^6} = \frac{\pi^6}{945}$$
$$\sum_{k=1}^{n}k^2 = \frac16n(n+1)(2n+1)\;,\qquad \sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n^2} = \frac{\pi^2}{12}\;,\qquad \sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n} = \ln(2)$$
$$\sum_{n=1}^{\infty}\frac{1}{4n^2 - 1} = \frac12\;,\qquad \sum_{n=1}^{\infty}\frac{1}{(2n-1)^2} = \frac{\pi^2}{8}\;,\qquad \sum_{n=1}^{\infty}\frac{1}{(2n-1)^4} = \frac{\pi^4}{96}\;,\qquad \sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{(2n-1)^3} = \frac{\pi^3}{32}$$
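Any of these sums is easy to probe with partial sums; a quick sketch for the first one:

```python
import math

# Partial sum of 1/n^2; the tail beyond N is of order 1/N, so 10^5 terms
# give roughly 4-5 correct digits of pi^2/6.
s = sum(1.0 / n**2 for n in range(1, 100001))
print(s, math.pi**2 / 6)
```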
1.8.2 Convergence and divergence of series
If $\sum_n|u_n|$ converges, $\sum_n u_n$ also converges.
If $\lim\limits_{n\to\infty} u_n \neq 0$ then $\sum_n u_n$ is divergent.
An alternating series of which the absolute values of the terms drop monotonously to 0 is convergent (Leibniz).
If $\int_p^{\infty} f(x)dx < \infty$, then $\sum_n f_n$ is convergent.
If $u_n > 0$ for all $n$, then $\sum_n u_n$ is convergent if $\sum_n\ln(u_n + 1)$ is convergent.
If $u_n = c_nx^n$ the radius of convergence $\rho$ of $\sum_n u_n$ is given by:
$$\frac{1}{\rho} = \lim_{n\to\infty}\sqrt[n]{|c_n|} = \lim_{n\to\infty}\left|\frac{c_{n+1}}{c_n}\right|.$$
The series $\sum\limits_{n=1}^{\infty}\dfrac{1}{n^p}$ is convergent if $p > 1$ and divergent if $p \leq 1$.
If $\lim\limits_{n\to\infty}\dfrac{u_n}{v_n} = p$, then the following is true: if $p > 0$ then $\sum_n u_n$ and $\sum_n v_n$ are both divergent or both convergent; if $p = 0$ holds: if $\sum_n v_n$ is convergent, then $\sum_n u_n$ is also convergent.
If $L$ is defined by: $L = \lim\limits_{n\to\infty}\sqrt[n]{|u_n|}$, or by: $L = \lim\limits_{n\to\infty}\left|\dfrac{u_{n+1}}{u_n}\right|$, then $\sum_n u_n$ is divergent if $L > 1$ and convergent if $L < 1$.
1.8.3 Convergence and divergence of functions
$f(x)$ is continuous in $x = a$ only if the upper and lower limit are equal: $\lim\limits_{x\uparrow a} f(x) = \lim\limits_{x\downarrow a} f(x)$. This is written as: $f(a^-) = f(a^+)$.
If $f(x)$ is continuous in $a$ and $\lim\limits_{x\uparrow a} f'(x) = \lim\limits_{x\downarrow a} f'(x)$, then $f(x)$ is differentiable in $x = a$.
The Weierstrass test: if $\sum\sup|u_n|$ is convergent on $W$, then $\sum u_n$ is uniformly convergent on $W$.
We define $S(x) = \sum\limits_{n=N}^{\infty} u_n(x)$ and $F(y) = \int_a^b f(x, y)dx := F$. Then it can be proved that:
| Theorem | For | Demands on W | Then holds on W |
|---|---|---|---|
| C | rows | $f_n$ continuous, $f_n$ uniformly convergent | $f$ is continuous |
| C | series | $u_n$ continuous, $S(x)$ uniformly convergent | $S$ is continuous |
| C | integral | $f$ is continuous | $F$ is continuous |
| I | rows | $f_n$ can be integrated, $f_n$ uniformly convergent | $f$ can be integrated, $\int f(x)dx = \lim\limits_n\int f_ndx$ |
| I | series | $u_n$ can be integrated, $S(x)$ uniformly convergent | $S$ can be integrated, $\int S\,dx = \sum\int u_ndx$ |
| I | integral | $f$ is continuous | $\int F\,dy = \iint f(x, y)dxdy$ |
| D | rows | $f_n \in C^1$; $f_n$ unif. conv.; $f_n'$ unif. conv. to $\varphi$ | $f' = \varphi(x)$ |
| D | series | $u_n \in C^1$; $\sum u_n$ conv.; $\sum u_n'$ u.c. | $S'(x) = \sum u_n'(x)$ |
| D | integral | $\partial f/\partial y$ continuous | $F_y = \int f_y(x, y)dx$ |
1.9 Products and quotients
For $a, b, c, d \in \mathbb{R}$ holds:
The distributive property: $(a+b)(c+d) = ac + ad + bc + bd$
The associative property: $a(bc) = b(ac) = c(ab)$ and $a(b+c) = ab + ac$
The commutative property: $a + b = b + a$, $ab = ba$.
Further holds:
$$\frac{a^{2n} - b^{2n}}{a - b} = a^{2n-1} + a^{2n-2}b + a^{2n-3}b^2 + \cdots + b^{2n-1}\;,\qquad \frac{a^{2n+1} + b^{2n+1}}{a + b} = \sum_{k=0}^{2n}(-1)^ka^{2n-k}b^k$$
$$(a \pm b)(a^2 \mp ab + b^2) = a^3 \pm b^3\;,\qquad (a+b)(a-b) = a^2 - b^2\;,\qquad \frac{a^3 + b^3}{a + b} = a^2 - ba + b^2$$
1.10 Logarithms
Definition: $^a\log(x) = b \Leftrightarrow a^b = x$. For logarithms with base $e$ one writes $\ln(x)$.
Rules: $\log(x^n) = n\log(x)$, $\log(a) + \log(b) = \log(ab)$, $\log(a) - \log(b) = \log(a/b)$.
1.11 Polynomials
Equations of the type
$$\sum_{k=0}^{n}a_kx^k = 0$$
have $n$ roots which may be equal to each other. Each polynomial $p(z)$ of order $n \geq 1$ has at least one root in $\mathbb{C}$. If all $a_k \in \mathbb{R}$ holds: when $x = p$ with $p \in \mathbb{C}$ a root, then its complex conjugate $p^*$ is also a root. The roots of the 2nd order equation $ax^2 + bx + c = 0$ are given by:
$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$
For $a, b, c, d \in \mathbb{R}$ and $a \neq 0$ holds: the 3rd order equation $ax^3 + bx^2 + cx + d = 0$ has the general analytical solution:
$$x_1 = K - \frac{3ac - b^2}{9a^2K} - \frac{b}{3a}$$
$$x_2 = x_3^* = -\frac{K}{2} + \frac{3ac - b^2}{18a^2K} - \frac{b}{3a} + \frac{i\sqrt3}{2}\left(K + \frac{3ac - b^2}{9a^2K}\right)$$
with
$$K = \left(\frac{9abc - 27da^2 - 2b^3}{54a^3} + \frac{\sqrt3}{18a^2}\sqrt{4ac^3 - c^2b^2 - 18abcd + 27a^2d^2 + 4db^3}\right)^{1/3}$$
1.12 Primes
A prime is a number $\in \mathbb{N}$ that can only be divided by itself and 1. There are an infinite number of primes. Proof: suppose that the collection of primes $P$ would be finite, then construct the number $q = 1 + \prod_{p\in P}p$. Then holds $q \equiv 1\ (\mathrm{mod}\ p)$ for all $p \in P$, so $q$ cannot be written as a product of primes from $P$. This is a contradiction.
If $\pi(x)$ is the number of primes $\leq x$, then holds:
$$\lim_{x\to\infty}\frac{\pi(x)}{x/\ln(x)} = 1 \qquad\text{and}\qquad \lim_{x\to\infty}\frac{\pi(x)}{\displaystyle\int_2^x\frac{dt}{\ln(t)}} = 1$$
For each $N \geq 2$ there is a prime between $N$ and $2N$.
The numbers $F_k := 2^{2^k} + 1$ with $k \in \mathbb{N}$ are called Fermat numbers. Many Fermat numbers are prime.
The numbers $M_k := 2^k - 1$ are called Mersenne numbers. They occur when one searches for perfect numbers, which are numbers $n \in \mathbb{N}$ which are the sum of their different dividers, for example $6 = 1 + 2 + 3$. There are 23 Mersenne numbers for $k < 12000$ which are prime: for $k \in \{2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607, 1279, 2203, 2281, 3217, 4253, 4423, 9689, 9941, 11213\}$.
To check if a given number $n$ is prime one can use a sieve method. The first known sieve method was developed by Eratosthenes. A faster method for large numbers is the Fermat test with 4 bases, which does not prove that a number is prime but gives a large probability:
1. Take the first 4 primes: $b = 2, 3, 5, 7$,
2. Take $w(b) = b^{n-1}\ \mathrm{mod}\ n$, for each $b$,
3. If $w = 1$ for each $b$, then $n$ is probably prime. For each other value of $w$, $n$ is certainly not prime.
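The three steps above can be sketched directly; note the verdict is only probabilistic (Carmichael numbers coprime to all four bases still slip through), while any $w \neq 1$ is a proof of compositeness. Function name is ours:

```python
def fermat_probably_prime(n):
    if n < 2:
        return False
    for b in (2, 3, 5, 7):
        if n % b == 0:
            return n == b          # the small bases themselves are prime
        if pow(b, n - 1, n) != 1:  # w(b) = b^(n-1) mod n, step 2
            return False           # certainly composite
    return True                    # probably prime

print([n for n in range(2, 60) if fermat_probably_prime(n)])  # primes below 60
```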
Chapter 2
Probability and statistics
2.1 Combinations
The number of possible combinations of $k$ elements from $n$ elements is given by
$$\binom{n}{k} = \frac{n!}{k!(n-k)!}$$
The number of permutations of $p$ from $n$ is given by
$$\frac{n!}{(n-p)!} = p!\binom{n}{p}$$
The number of different ways to classify $n_i$ elements in $i$ groups, when the total number of elements is $N$, is
$$\frac{N!}{\prod\limits_i n_i!}$$
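The standard library covers both counting formulas; a quick cross-check:

```python
import math

n, k = 10, 4
# Binomial coefficient two ways: math.comb vs the factorial formula.
print(math.comb(n, k), math.factorial(n) // (math.factorial(k) * math.factorial(n - k)))
# Permutations of k from n, and the relation n!/(n-k)! = k! * C(n, k).
print(math.perm(n, k), math.factorial(k) * math.comb(n, k))
```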
2.2 Probability theory
The probability $P(A)$ that an event $A$ occurs is defined by:
$$P(A) = \frac{n(A)}{n(U)}$$
where $n(A)$ is the number of events when $A$ occurs and $n(U)$ the total number of events.
The probability $P(\lnot A)$ that $A$ does not occur is: $P(\lnot A) = 1 - P(A)$. The probability $P(A\cup B)$ that $A$ or $B$ occurs is given by: $P(A\cup B) = P(A) + P(B) - P(A\cap B)$. If $A$ and $B$ are independent, then holds: $P(A\cap B) = P(A)\cdot P(B)$.
The probability $P(A|B)$ that $A$ occurs, given the fact that $B$ occurs, is:
$$P(A|B) = \frac{P(A\cap B)}{P(B)}$$
2.3 Statistics
2.3.1 General
The average or mean value $\langle x\rangle$ of a collection of values is: $\langle x\rangle = \sum_i x_i/n$. The standard deviation $\sigma_x$ in the distribution of $x$ is given by:
$$\sigma_x = \sqrt{\frac{\sum\limits_{i=1}^{n}(x_i - \langle x\rangle)^2}{n}}$$
When samples are being used the sample variance $s$ is given by $s^2 = \dfrac{n}{n-1}\sigma^2$.
The covariance $\sigma_{xy}$ of $x$ and $y$ is given by:
$$\sigma_{xy} = \frac{\sum\limits_{i=1}^{n}(x_i - \langle x\rangle)(y_i - \langle y\rangle)}{n-1}$$
The correlation coefficient $r_{xy}$ of $x$ and $y$ then becomes: $r_{xy} = \sigma_{xy}/\sigma_x\sigma_y$.
The standard deviation in a variable $f(x, y)$ resulting from errors in $x$ and $y$ is:
$$\sigma^2_{f(x,y)} = \left(\frac{\partial f}{\partial x}\sigma_x\right)^2 + \left(\frac{\partial f}{\partial y}\sigma_y\right)^2 + \frac{\partial f}{\partial x}\frac{\partial f}{\partial y}\sigma_{xy}$$
2.3.2 Distributions
1. The Binomial distribution is the distribution describing a sampling with replacement. The probability for success is $p$. The probability $P$ for $k$ successes in $n$ trials is then given by:
$$P(x = k) = \binom{n}{k}p^k(1-p)^{n-k}$$
The standard deviation is given by $\sigma_x = \sqrt{np(1-p)}$ and the expectation value is $\varepsilon = np$.
2. The Hypergeometric distribution is the distribution describing a sampling without replacement in which the order is irrelevant. The probability for $k$ successes in a trial with $A$ possible successes and $B$ possible failures is then given by:
$$P(x = k) = \frac{\dbinom{A}{k}\dbinom{B}{n-k}}{\dbinom{A+B}{n}}$$
The expectation value is given by $\varepsilon = nA/(A+B)$.
3. The Poisson distribution is a limiting case of the binomial distribution when $p \to 0$, $n \to \infty$ while $np = \lambda$ remains constant.
$$P(x) = \frac{\lambda^xe^{-\lambda}}{x!}$$
This distribution is normalized to $\sum\limits_{x=0}^{\infty}P(x) = 1$.
4. The Normal distribution is a limiting case of the binomial distribution for continuous variables:
$$P(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac12\left(\frac{x - \langle x\rangle}{\sigma}\right)^2\right)$$
5. The Uniform distribution occurs when a random number $x$ is taken from the set $a \leq x \leq b$ and is given by:
$$P(x) = \begin{cases}\dfrac{1}{b-a} & \text{if } a \leq x \leq b\\ 0 & \text{in all other cases}\end{cases}$$
$\langle x\rangle = \tfrac12(a+b)$ and $\sigma^2 = \dfrac{(b-a)^2}{12}$.
6. The Gamma distribution is given by:
$$P(x) = \frac{x^{\alpha-1}e^{-x/\beta}}{\beta^{\alpha}\Gamma(\alpha)} \quad\text{if } x \geq 0$$
with $\alpha > 0$ and $\beta > 0$. The distribution has the following properties: $\langle x\rangle = \alpha\beta$, $\sigma^2 = \alpha\beta^2$.
7. The Beta distribution is given by:
$$P(x) = \begin{cases}\dfrac{x^{\alpha-1}(1-x)^{\beta-1}}{\beta(\alpha, \beta)} & \text{if } 0 \leq x \leq 1\\ 0 & \text{everywhere else}\end{cases}$$
and has the following properties: $\langle x\rangle = \dfrac{\alpha}{\alpha+\beta}$, $\sigma^2 = \dfrac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}$.
For $P(\chi^2)$ holds: $\alpha = V/2$ and $\beta = 2$.
8. The Weibull distribution is given by:
$$P(x) = \begin{cases}\alpha x^{\alpha-1}e^{-x^{\alpha}} & \text{if } x > 0\\ 0 & \text{in all other cases}\end{cases}$$
The average is $\langle x\rangle = \Gamma(\alpha^{-1} + 1)$.
9. For a two-dimensional distribution holds:
$$P_1(x_1) = \int P(x_1, x_2)dx_2\;,\qquad P_2(x_2) = \int P(x_1, x_2)dx_1$$
with
$$\langle g(x_1, x_2)\rangle = \iint g(x_1, x_2)P(x_1, x_2)dx_1dx_2 = \sum_{x_1}\sum_{x_2}g\cdot P$$
2.4 Regression analyses
When there exists a relation between the quantities $x$ and $y$ of the form $y = ax + b$ and there is a measured set $x_i$ with related $y_i$, the following relation holds for $a$ and $b$ with $\vec x = (x_1, x_2, ..., x_n)$ and $\vec e = (1, 1, ..., 1)$:
$$\vec y - a\vec x - b\vec e \perp \langle\vec x, \vec e\,\rangle$$
with $(\vec x, \vec x) = \sum_i x_i^2$, $(\vec x, \vec y) = \sum_i x_iy_i$, $(\vec x, \vec e) = \sum_i x_i$ and $(\vec e, \vec e) = n$. $a$ and $b$ follow from this.
A similar method works for higher order polynomial fits: for a second order fit holds:
$$\vec y - a\vec{x^2} - b\vec x - c\vec e \perp \langle\vec{x^2}, \vec x, \vec e\,\rangle$$
with $\vec{x^2} = (x_1^2, ..., x_n^2)$.
The correlation coefficient $r$ is a measure for the quality of a fit. In case of linear regression it is given by:
$$r = \frac{n\sum xy - \sum x\sum y}{\sqrt{\left(n\sum x^2 - \left(\sum x\right)^2\right)\left(n\sum y^2 - \left(\sum y\right)^2\right)}}$$
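Solving the orthogonality condition for the linear case gives the usual normal equations; a pure-Python sketch (function name is ours):

```python
# Least-squares line y = a x + b from the sums (x,x), (x,y), (x,e), (e,e).
def linear_fit(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]      # lies exactly on y = 2x + 1
print(linear_fit(xs, ys))      # (2.0, 1.0)
```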
Chapter 3
Calculus
3.1 Integrals
3.1.1 Arithmetic rules
The primitive function $F(x)$ of $f(x)$ obeys the rule $F'(x) = f(x)$. With $F(x)$ the primitive of $f(x)$ holds for the definite integral:
$$\int_a^b f(x)dx = F(b) - F(a)$$
If $u = f(x)$ holds:
$$\int_a^b g(f(x))df(x) = \int_{f(a)}^{f(b)}g(u)du$$
Partial integration: with $F$ and $G$ the primitives of $f$ and $g$ holds:
$$\int f(x)\cdot g(x)dx = f(x)G(x) - \int G(x)\frac{df(x)}{dx}dx$$
A derivative can be brought under the integral sign (see section 1.8.3 for the required conditions):
$$\frac{d}{dy}\left[\,\int\limits_{x=g(y)}^{x=h(y)}f(x, y)dx\right] = \int\limits_{x=g(y)}^{x=h(y)}\frac{\partial f(x, y)}{\partial y}dx - f(g(y), y)\frac{dg(y)}{dy} + f(h(y), y)\frac{dh(y)}{dy}$$
3.1.2 Arc lengths, surfaces and volumes
The arc length $\ell$ of a curve $y(x)$ is given by:
$$\ell = \int\sqrt{1 + \left(\frac{dy(x)}{dx}\right)^2}\,dx$$
The integral of a scalar field $F$ along a parameter curve $\vec x(t)$ is:
$$\ell = \int F\,ds = \int F(\vec x(t))\,|\dot{\vec x}(t)|\,dt$$
with
$$\vec t = \frac{d\vec x}{ds} = \frac{\dot{\vec x}(t)}{|\dot{\vec x}(t)|}\;,\qquad |\vec t\,| = 1$$
$$\int(\vec v, \vec t\,)ds = \int(\vec v, \dot{\vec x}(t))dt = \int(v_1dx + v_2dy + v_3dz)$$
The surface $A$ of a solid of revolution is:
$$A = 2\pi\int y\sqrt{1 + \left(\frac{dy(x)}{dx}\right)^2}\,dx$$
The volume $V$ of a solid of revolution is:
$$V = \pi\int f^2(x)dx$$
3.1.3 Separation of quotients
Every rational function $P(x)/Q(x)$ where $P$ and $Q$ are polynomials can be written as a linear combination of functions of the type $(x-a)^k$ with $k \in \mathbb{Z}$, and of functions of the type
$$\frac{px + q}{((x-a)^2 + b^2)^n}$$
with $b > 0$ and $n \in \mathbb{N}$. So:
$$\frac{p(x)}{(x-a)^n} = \sum_{k=1}^{n}\frac{A_k}{(x-a)^k}\;,\qquad \frac{p(x)}{((x-b)^2 + c^2)^n} = \sum_{k=1}^{n}\frac{A_kx + B_k}{((x-b)^2 + c^2)^k}$$
Recurrent relation: for $n \neq 0$ holds:
$$\int\frac{dx}{(x^2+1)^{n+1}} = \frac{1}{2n}\frac{x}{(x^2+1)^n} + \frac{2n-1}{2n}\int\frac{dx}{(x^2+1)^n}$$
3.1.4 Special functions
Elliptic functions
Elliptic functions can be written as a power series as follows:
$$\sqrt{1 - k^2\sin^2(x)} = 1 - \sum_{n=1}^{\infty}\frac{(2n-1)!!}{(2n)!!(2n-1)}k^{2n}\sin^{2n}(x)$$
$$\frac{1}{\sqrt{1 - k^2\sin^2(x)}} = 1 + \sum_{n=1}^{\infty}\frac{(2n-1)!!}{(2n)!!}k^{2n}\sin^{2n}(x)$$
with $n!! = n(n-2)!!$.
The Gamma function
The gamma function $\Gamma(y)$ is defined by:
$$\Gamma(y) = \int_0^{\infty}e^{-x}x^{y-1}dx$$
One can derive that $\Gamma(y+1) = y\Gamma(y) = y!$. This is a way to define factorials for non-integers. Further one can derive that:
$$\Gamma\left(n + \tfrac12\right) = \frac{\sqrt\pi}{2^n}(2n-1)!! \qquad\text{and}\qquad \Gamma^{(n)}(y) = \int_0^{\infty}e^{-x}x^{y-1}\ln^n(x)dx$$
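The half-integer identity is easy to verify with the standard-library gamma function (the double-factorial helper is ours; $(-1)!! = 1$ by convention):

```python
import math

def double_fact(m):
    # m!! for odd m, with (-1)!! = 1
    r = 1
    while m > 1:
        r *= m
        m -= 2
    return r

for n in range(5):
    lhs = math.gamma(n + 0.5)
    rhs = math.sqrt(math.pi) / 2**n * double_fact(2 * n - 1)
    print(n, lhs, rhs)   # the two columns should agree
```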
The Beta function
The beta function $\beta(p, q)$ is defined by:
$$\beta(p, q) = \int_0^1x^{p-1}(1-x)^{q-1}dx$$
with $p$ and $q > 0$. The beta and gamma functions are related by the following equation:
$$\beta(p, q) = \frac{\Gamma(p)\Gamma(q)}{\Gamma(p+q)}$$
The Delta function
The delta function $\delta(x)$ is an infinitely thin peak function with surface 1. It can be defined by:
$$\delta(x) = \lim_{\varepsilon\to0}P(\varepsilon, x) \qquad\text{with}\qquad P(\varepsilon, x) = \begin{cases}0 & \text{for } |x| > \varepsilon\\ \dfrac{1}{2\varepsilon} & \text{when } |x| < \varepsilon\end{cases}$$
Some properties are:
$$\int\limits_{-\infty}^{\infty}\delta(x)dx = 1\;,\qquad \int\limits_{-\infty}^{\infty}F(x)\delta(x)dx = F(0)$$
3.1.5 Goniometric integrals
When solving goniometric integrals it can be useful to change variables. The following holds if one defines $\tan(\tfrac12x) := t$:
$$dx = \frac{2dt}{1+t^2}\;,\qquad \cos(x) = \frac{1-t^2}{1+t^2}\;,\qquad \sin(x) = \frac{2t}{1+t^2}$$
Each integral of the type $\int R(x, \sqrt{ax^2+bx+c}\,)dx$ can be converted into one of the types that were treated in section 3.1.3. After this conversion one can substitute in the integrals of the type:
$$\int R(x, \sqrt{x^2+1}\,)dx:\quad x = \tan(\varphi)\;,\ dx = \frac{d\varphi}{\cos^2(\varphi)}\quad\text{or}\quad \sqrt{x^2+1} = t + x$$
$$\int R(x, \sqrt{1-x^2}\,)dx:\quad x = \sin(\varphi)\;,\ dx = \cos(\varphi)d\varphi\quad\text{or}\quad \sqrt{1-x^2} = 1 - tx$$
$$\int R(x, \sqrt{x^2-1}\,)dx:\quad x = \frac{1}{\cos(\varphi)}\;,\ dx = \frac{\sin(\varphi)}{\cos^2(\varphi)}d\varphi\quad\text{or}\quad \sqrt{x^2-1} = x - t$$
These definite integrals are easily solved:
$$\int_0^{\pi/2}\cos^n(x)\sin^m(x)dx = \frac{(n-1)!!\,(m-1)!!}{(m+n)!!}\cdot\begin{cases}\pi/2 & \text{when } m \text{ and } n \text{ are both even}\\ 1 & \text{in all other cases}\end{cases}$$
Some important integrals are:
$$\int_0^{\infty}\frac{x\,dx}{e^{ax}+1} = \frac{\pi^2}{12a^2}\;,\qquad \int\limits_{-\infty}^{\infty}\frac{x^2e^x\,dx}{(e^x+1)^2} = \frac{\pi^2}{3}\;,\qquad \int_0^{\infty}\frac{x^3\,dx}{e^x-1} = \frac{\pi^4}{15}$$
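Such integrals can be sanity-checked with a crude quadrature; here the first one for $a = 1$, truncated at $x = 40$ since the integrand decays exponentially (midpoint rule, all names ours):

```python
import math

def integrand(x):
    return x / (math.exp(x) + 1.0)

N, X = 200000, 40.0
h = X / N
total = sum(integrand((i + 0.5) * h) for i in range(N)) * h
print(total, math.pi**2 / 12)   # the two values should agree closely
```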
3.2 Functions with more variables
3.2.1 Derivatives
The partial derivative with respect to $x$ of a function $f(x, y)$ is defined by:
$$\left(\frac{\partial f}{\partial x}\right)_{x_0} = \lim_{h\to0}\frac{f(x_0 + h, y_0) - f(x_0, y_0)}{h}$$
The directional derivative in the direction of $\alpha$ is defined by:
$$\frac{\partial f}{\partial\alpha} = \lim_{r\downarrow0}\frac{f(x_0 + r\cos(\alpha), y_0 + r\sin(\alpha)) - f(x_0, y_0)}{r} = \left(\vec\nabla f, (\cos\alpha, \sin\alpha)\right) = \frac{\vec\nabla f\cdot\vec v}{|\vec v\,|}$$
When one changes to coordinates $f(x(u, v), y(u, v))$ holds:
$$\frac{\partial f}{\partial u} = \frac{\partial f}{\partial x}\frac{\partial x}{\partial u} + \frac{\partial f}{\partial y}\frac{\partial y}{\partial u}$$
If $x(t)$ and $y(t)$ depend only on one parameter $t$ holds:
$$\frac{\partial f}{\partial t} = \frac{\partial f}{\partial x}\frac{dx}{dt} + \frac{\partial f}{\partial y}\frac{dy}{dt}$$
The total differential $df$ of a function of 3 variables is given by:
$$df = \frac{\partial f}{\partial x}dx + \frac{\partial f}{\partial y}dy + \frac{\partial f}{\partial z}dz$$
So
$$\frac{df}{dx} = \frac{\partial f}{\partial x} + \frac{\partial f}{\partial y}\frac{dy}{dx} + \frac{\partial f}{\partial z}\frac{dz}{dx}$$
The tangent in point $\vec x_0$ at the surface $f(x, y) = 0$ is given by the equation $f_x(\vec x_0)(x - x_0) + f_y(\vec x_0)(y - y_0) = 0$.
The tangent plane in $\vec x_0$ is given by: $f_x(\vec x_0)(x - x_0) + f_y(\vec x_0)(y - y_0) = z - f(\vec x_0)$.
3.2.2 Taylor series
A function of two variables can be expanded as follows in a Taylor series:
$$f(x_0 + h, y_0 + k) = \sum_{p=0}^{n}\frac{1}{p!}\left(h\frac{\partial}{\partial x} + k\frac{\partial}{\partial y}\right)^pf(x_0, y_0) + R(n)$$
with $R(n)$ the residual error and
$$\left(h\frac{\partial}{\partial x} + k\frac{\partial}{\partial y}\right)^pf(a, b) = \sum_{m=0}^{p}\binom{p}{m}h^mk^{p-m}\frac{\partial^pf(a, b)}{\partial x^m\partial y^{p-m}}$$
3.2.3 Extrema
When $f$ is continuous on a compact boundary $V$ there exists a global maximum and a global minimum for $f$ on this boundary. A boundary is called compact if it is bounded and closed.
Possible extrema of $f(x, y)$ on a boundary $V \subset \mathbb{R}^2$ are:
1. Points on $V$ where $f(x, y)$ is not differentiable,
2. Points where $\vec\nabla f = \vec 0$,
3. If the boundary $V$ is given by $\varphi(x, y) = 0$, then all points where $\vec\nabla f(x, y) + \lambda\vec\nabla\varphi(x, y) = \vec 0$.
In $\mathbb{R}^3$ with a boundary given by $\varphi_1(x, y, z) = 0$ and $\varphi_2(x, y, z) = 0$, one searches for extrema of $f(x, y, z)$ among points (1) and (2). Point (3) is rewritten as follows: possible extrema are points where $\vec\nabla f(x, y, z) + \lambda_1\vec\nabla\varphi_1(x, y, z) + \lambda_2\vec\nabla\varphi_2(x, y, z) = \vec 0$.
3.2.4 The ∇ operator
In cartesian coordinates $(x, y, z)$ holds:
$$\vec\nabla = \frac{\partial}{\partial x}\vec e_x + \frac{\partial}{\partial y}\vec e_y + \frac{\partial}{\partial z}\vec e_z$$
$$\mathrm{grad}f = \frac{\partial f}{\partial x}\vec e_x + \frac{\partial f}{\partial y}\vec e_y + \frac{\partial f}{\partial z}\vec e_z\;,\qquad \mathrm{div}\,\vec a = \frac{\partial a_x}{\partial x} + \frac{\partial a_y}{\partial y} + \frac{\partial a_z}{\partial z}$$
$$\mathrm{curl}\,\vec a = \left(\frac{\partial a_z}{\partial y} - \frac{\partial a_y}{\partial z}\right)\vec e_x + \left(\frac{\partial a_x}{\partial z} - \frac{\partial a_z}{\partial x}\right)\vec e_y + \left(\frac{\partial a_y}{\partial x} - \frac{\partial a_x}{\partial y}\right)\vec e_z$$
$$\nabla^2f = \frac{\partial^2f}{\partial x^2} + \frac{\partial^2f}{\partial y^2} + \frac{\partial^2f}{\partial z^2}$$
In cylindrical coordinates $(r, \varphi, z)$ holds:
$$\vec\nabla = \frac{\partial}{\partial r}\vec e_r + \frac{1}{r}\frac{\partial}{\partial\varphi}\vec e_\varphi + \frac{\partial}{\partial z}\vec e_z$$
$$\mathrm{grad}f = \frac{\partial f}{\partial r}\vec e_r + \frac{1}{r}\frac{\partial f}{\partial\varphi}\vec e_\varphi + \frac{\partial f}{\partial z}\vec e_z\;,\qquad \mathrm{div}\,\vec a = \frac{\partial a_r}{\partial r} + \frac{a_r}{r} + \frac{1}{r}\frac{\partial a_\varphi}{\partial\varphi} + \frac{\partial a_z}{\partial z}$$
$$\mathrm{curl}\,\vec a = \left(\frac{1}{r}\frac{\partial a_z}{\partial\varphi} - \frac{\partial a_\varphi}{\partial z}\right)\vec e_r + \left(\frac{\partial a_r}{\partial z} - \frac{\partial a_z}{\partial r}\right)\vec e_\varphi + \left(\frac{\partial a_\varphi}{\partial r} + \frac{a_\varphi}{r} - \frac{1}{r}\frac{\partial a_r}{\partial\varphi}\right)\vec e_z$$
$$\nabla^2f = \frac{\partial^2f}{\partial r^2} + \frac{1}{r}\frac{\partial f}{\partial r} + \frac{1}{r^2}\frac{\partial^2f}{\partial\varphi^2} + \frac{\partial^2f}{\partial z^2}$$
In spherical coordinates $(r, \theta, \varphi)$ holds:
$$\vec\nabla = \frac{\partial}{\partial r}\vec e_r + \frac{1}{r}\frac{\partial}{\partial\theta}\vec e_\theta + \frac{1}{r\sin\theta}\frac{\partial}{\partial\varphi}\vec e_\varphi$$
$$\mathrm{grad}f = \frac{\partial f}{\partial r}\vec e_r + \frac{1}{r}\frac{\partial f}{\partial\theta}\vec e_\theta + \frac{1}{r\sin\theta}\frac{\partial f}{\partial\varphi}\vec e_\varphi$$
$$\mathrm{div}\,\vec a = \frac{\partial a_r}{\partial r} + \frac{2a_r}{r} + \frac{1}{r}\frac{\partial a_\theta}{\partial\theta} + \frac{a_\theta}{r\tan\theta} + \frac{1}{r\sin\theta}\frac{\partial a_\varphi}{\partial\varphi}$$
$$\mathrm{curl}\,\vec a = \left(\frac{1}{r}\frac{\partial a_\varphi}{\partial\theta} + \frac{a_\varphi}{r\tan\theta} - \frac{1}{r\sin\theta}\frac{\partial a_\theta}{\partial\varphi}\right)\vec e_r + \left(\frac{1}{r\sin\theta}\frac{\partial a_r}{\partial\varphi} - \frac{\partial a_\varphi}{\partial r} - \frac{a_\varphi}{r}\right)\vec e_\theta + \left(\frac{\partial a_\theta}{\partial r} + \frac{a_\theta}{r} - \frac{1}{r}\frac{\partial a_r}{\partial\theta}\right)\vec e_\varphi$$
$$\nabla^2f = \frac{\partial^2f}{\partial r^2} + \frac{2}{r}\frac{\partial f}{\partial r} + \frac{1}{r^2}\frac{\partial^2f}{\partial\theta^2} + \frac{1}{r^2\tan\theta}\frac{\partial f}{\partial\theta} + \frac{1}{r^2\sin^2\theta}\frac{\partial^2f}{\partial\varphi^2}$$
General orthonormal curvilinear coordinates $(u, v, w)$ can be derived from cartesian coordinates by the transformation $\vec x = \vec x(u, v, w)$. The unit vectors are given by:
$$\vec e_u = \frac{1}{h_1}\frac{\partial\vec x}{\partial u}\;,\qquad \vec e_v = \frac{1}{h_2}\frac{\partial\vec x}{\partial v}\;,\qquad \vec e_w = \frac{1}{h_3}\frac{\partial\vec x}{\partial w}$$
where the factors $h_i$ give normalization to length 1. The differential operators are then given by:
$$\mathrm{grad}f = \frac{1}{h_1}\frac{\partial f}{\partial u}\vec e_u + \frac{1}{h_2}\frac{\partial f}{\partial v}\vec e_v + \frac{1}{h_3}\frac{\partial f}{\partial w}\vec e_w$$
$$\mathrm{div}\,\vec a = \frac{1}{h_1h_2h_3}\left[\frac{\partial}{\partial u}(h_2h_3a_u) + \frac{\partial}{\partial v}(h_3h_1a_v) + \frac{\partial}{\partial w}(h_1h_2a_w)\right]$$
$$\mathrm{curl}\,\vec a = \frac{1}{h_2h_3}\left[\frac{\partial(h_3a_w)}{\partial v} - \frac{\partial(h_2a_v)}{\partial w}\right]\vec e_u + \frac{1}{h_3h_1}\left[\frac{\partial(h_1a_u)}{\partial w} - \frac{\partial(h_3a_w)}{\partial u}\right]\vec e_v + \frac{1}{h_1h_2}\left[\frac{\partial(h_2a_v)}{\partial u} - \frac{\partial(h_1a_u)}{\partial v}\right]\vec e_w$$
$$\nabla^2f = \frac{1}{h_1h_2h_3}\left[\frac{\partial}{\partial u}\left(\frac{h_2h_3}{h_1}\frac{\partial f}{\partial u}\right) + \frac{\partial}{\partial v}\left(\frac{h_3h_1}{h_2}\frac{\partial f}{\partial v}\right) + \frac{\partial}{\partial w}\left(\frac{h_1h_2}{h_3}\frac{\partial f}{\partial w}\right)\right]$$
Some properties of the $\nabla$ operator are:
$$\mathrm{div}(\phi\vec v\,) = \phi\,\mathrm{div}\,\vec v + \mathrm{grad}\phi\cdot\vec v\;,\qquad \mathrm{curl}(\phi\vec v\,) = \phi\,\mathrm{curl}\,\vec v + (\mathrm{grad}\phi)\times\vec v\;,\qquad \mathrm{curl}\;\mathrm{grad}\phi = \vec 0$$
$$\mathrm{div}(\vec u\times\vec v\,) = \vec v\cdot(\mathrm{curl}\,\vec u\,) - \vec u\cdot(\mathrm{curl}\,\vec v\,)\;,\qquad \mathrm{curl}\;\mathrm{curl}\,\vec v = \mathrm{grad}\;\mathrm{div}\,\vec v - \nabla^2\vec v\;,\qquad \mathrm{div}\;\mathrm{curl}\,\vec v = 0$$
$$\mathrm{div}\;\mathrm{grad}\phi = \nabla^2\phi\;,\qquad \nabla^2\vec v \equiv (\nabla^2v_1, \nabla^2v_2, \nabla^2v_3)$$
Here, $\vec v$ is an arbitrary vector field and $\phi$ an arbitrary scalar field.
3.2.5 Integral theorems
Some important integral theorems are:
Gauss:
$$\oiint(\vec v\cdot\vec n\,)d^2A = \iiint(\mathrm{div}\,\vec v\,)d^3V$$
Stokes for a scalar field:
$$\oint(\phi\cdot\vec e_t)ds = \iint(\vec n\times\mathrm{grad}\phi)d^2A$$
Stokes for a vector field:
$$\oint(\vec v\cdot\vec e_t)ds = \iint(\mathrm{curl}\,\vec v\cdot\vec n\,)d^2A$$
this gives:
$$\oiint(\mathrm{curl}\,\vec v\cdot\vec n\,)d^2A = 0$$
Ostrogradsky:
$$\oiint(\vec n\times\vec v\,)d^2A = \iiint(\mathrm{curl}\,\vec v\,)d^3V\;,\qquad \oiint(\phi\,\vec n\,)d^2A = \iiint(\mathrm{grad}\phi)d^3V$$
Here the orientable surface $\iint d^2A$ is bounded by the Jordan curve $s(t)$.
3.2.6 Multiple integrals
Let $A$ be a closed curve given by $f(x, y) = 0$, then the surface $A$ inside the curve in $\mathbb{R}^2$ is given by
$$A = \iint d^2A = \iint dxdy$$
Let the surface $A$ be defined by the function $z = f(x, y)$. The volume $V$ bounded by $A$ and the $xy$ plane is then given by:
$$V = \iint f(x, y)dxdy$$
The volume inside a closed surface defined by $z = f(x, y)$ is given by:
$$V = \iiint d^3V = \iint f(x, y)dxdy = \iiint dxdydz$$
3.2.7 Coordinate transformations
The expressions $d^2A$ and $d^3V$ transform as follows when one changes coordinates to $\vec u = (u, v, w)$ through the transformation $\vec x(u, v, w)$:
$$V = \iiint f(x, y, z)dxdydz = \iiint f(\vec x(\vec u))\left|\frac{\partial\vec x}{\partial\vec u}\right|dudvdw$$
In $\mathbb{R}^2$ holds:
$$\frac{\partial\vec x}{\partial\vec u} = \begin{vmatrix}\dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v}\\[8pt] \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v}\end{vmatrix}$$
Let the surface $A$ be defined by $z = F(x, y) = X(u, v)$. Then the volume bounded by the $xy$ plane and $F$ is given by:
$$\iint_Sf(\vec x\,)d^2A = \iint_Gf(\vec x(\vec u))\left|\frac{\partial X}{\partial u}\times\frac{\partial X}{\partial v}\right|dudv = \iint_Gf(x, y, F(x, y))\sqrt{1 + \partial_xF^2 + \partial_yF^2}\,dxdy$$
3.3 Orthogonality of functions
The inner product of two functions $f(x)$ and $g(x)$ on the interval $[a, b]$ is given by:
$$(f, g) = \int_a^bf(x)g(x)dx$$
or, when using a weight function $p(x)$, by:
$$(f, g) = \int_a^bp(x)f(x)g(x)dx$$
The norm $\|f\|$ follows from: $\|f\|^2 = (f, f)$. A set of functions $f_i$ is orthonormal if $(f_i, f_j) = \delta_{ij}$.
Each function $f(x)$ can be written as a sum of orthogonal functions:
$$f(x) = \sum_{i=0}^{\infty}c_ig_i(x)$$
and $\sum c_i^2 \leq \|f\|^2$. Let the set $g_i$ be orthogonal, then it follows:
$$c_i = \frac{(f, g_i)}{(g_i, g_i)}$$
3.4 Fourier series
Each function can be written as a sum of independent base functions. When one chooses the orthogonal basis
(cos(nx), sin(nx)) we have a Fourier series.
A periodical function $f(x)$ with period $2L$ can be written as:
$$f(x) = a_0 + \sum_{n=1}^{\infty}\left[a_n\cos\left(\frac{n\pi x}{L}\right) + b_n\sin\left(\frac{n\pi x}{L}\right)\right]$$
Due to the orthogonality follows for the coefficients:
$$a_0 = \frac{1}{2L}\int_{-L}^{L}f(t)dt\;,\qquad a_n = \frac{1}{L}\int_{-L}^{L}f(t)\cos\left(\frac{n\pi t}{L}\right)dt\;,\qquad b_n = \frac{1}{L}\int_{-L}^{L}f(t)\sin\left(\frac{n\pi t}{L}\right)dt$$
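The coefficient formulas can be checked numerically on a function with a known series; for $f(t) = t$ on $(-\pi, \pi)$ the exact result is $a_n = 0$, $b_n = 2(-1)^{n+1}/n$ (midpoint quadrature, all names ours):

```python
import math

L, N = math.pi, 20000
h = 2 * L / N

def coeff(n, trig):
    # (1/L) * integral over (-L, L) of trig(n*pi*t/L) * f(t) dt with f(t) = t
    return sum(trig(n * math.pi * t / L) * t * h
               for t in ((-L + (i + 0.5) * h) for i in range(N))) / L

for n in (1, 2, 3):
    print(n, coeff(n, math.cos), coeff(n, math.sin), 2 * (-1)**(n + 1) / n)
```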
A Fourier series can also be written as a sum of complex exponents:
$$f(x) = \sum_{n=-\infty}^{\infty}c_ne^{inx}$$
with
$$c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi}f(x)e^{-inx}dx$$
The Fourier transform of a function $f(x)$ gives the transformed function $\hat f(\omega)$:
$$\hat f(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(x)e^{-i\omega x}dx$$
The inverse transformation is given by:
$$\frac12\left[f(x^+) + f(x^-)\right] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\hat f(\omega)e^{i\omega x}d\omega$$
where $f(x^+)$ and $f(x^-)$ are defined by: $f(a^-) = \lim\limits_{x\uparrow a}f(x)$ and $f(a^+) = \lim\limits_{x\downarrow a}f(x)$.
For continuous functions is $\frac12[f(x^+) + f(x^-)] = f(x)$.
Chapter 4
Differential equations
4.1 Linear differential equations
4.1.1 First order linear DE
The general solution of a linear differential equation is given by $y_A = y_H + y_P$, where $y_H$ is the solution of the homogeneous equation and $y_P$ is a particular solution.
A first order differential equation is given by: $y'(x) + a(x)y(x) = b(x)$.
The solution of the homogeneous equation is given by
$$y_H = k\exp\left(-\int a(x)dx\right)$$
Suppose that $a(x) = a =$ constant. Substitution of $\exp(\lambda x)$ in the homogeneous equation leads to the characteristic equation $\lambda + a = 0 \Rightarrow \lambda = -a$.
Suppose $b(x) = \alpha\exp(\mu x)$. Then one can distinguish two cases:
1. $\lambda \neq \mu$: a particular solution is: $y_P = \beta\exp(\mu x)$
2. $\lambda = \mu$: a particular solution is: $y_P = \beta x\exp(\mu x)$
When a DE is solved by variation of parameters one writes: $y_P(x) = y_H(x)f(x)$, and then one solves $f(x)$ from this.
4.1.2 Second order linear DE
A differential equation of the second order with constant coefficients is given by: $y''(x) + ay'(x) + \omega^2y(x) = f(x)$. More generally, consider $y''(x) + p(x)y'(x) + q(x)y(x) = f(x)$ with boundary conditions $y(x_0) = K_0$ and $y'(x_0) = K_1$. When $p(x)$ and $q(x)$ are continuous on the open interval $I$ there exists a unique solution $y(x)$ on this interval.
The general solution can then be written as $y(x) = c_1y_1(x) + c_2y_2(x)$ where $y_1$ and $y_2$ are linearly independent. These are also all solutions of the LDE.
The Wronskian is defined by:
$$W(y_1, y_2) = \begin{vmatrix}y_1 & y_2\\ y_1' & y_2'\end{vmatrix} = y_1y_2' - y_2y_1'$$
$y_1$ and $y_2$ are linearly independent on the interval $I$ if and only if there is an $x_0 \in I$ for which holds: $W(y_1(x_0), y_2(x_0)) \neq 0$.
4.1.4 Power series substitution
When a series $y = \sum a_nx^n$ is substituted in the LDE with constant coefficients $y''(x) + py'(x) + qy(x) = 0$, one finds:
$$\sum_n\left[n(n-1)a_nx^{n-2} + pna_nx^{n-1} + qa_nx^n\right] = 0$$
Setting coefficients for equal powers of $x$ equal gives:
$$(n+2)(n+1)a_{n+2} + p(n+1)a_{n+1} + qa_n = 0$$
This gives a general relation between the coefficients. Special cases are $n = 0, 1, 2$.
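The recursion can be run directly; with $p = 0$, $q = 1$, $a_0 = 1$, $a_1 = 0$ it must reproduce the Taylor coefficients of $\cos(x)$, the solution of $y'' + y = 0$ with those initial values:

```python
import math

p, q = 0.0, 1.0
a = [1.0, 0.0]   # a0 = y(0), a1 = y'(0)
for n in range(30):
    # (n+2)(n+1) a_{n+2} + p (n+1) a_{n+1} + q a_n = 0
    a.append(-(p * (n + 1) * a[n + 1] + q * a[n]) / ((n + 2) * (n + 1)))

x = 0.7
y = sum(c * x**k for k, c in enumerate(a))
print(y, math.cos(x))   # the truncated series should match cos(0.7)
```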
4.2 Some special cases
4.2.1 Frobenius method
Given the LDE
$$\frac{d^2y(x)}{dx^2} + \frac{b(x)}{x}\frac{dy(x)}{dx} + \frac{c(x)}{x^2}y(x) = 0$$
with $b(x)$ and $c(x)$ analytical at $x = 0$. This LDE has at least one solution of the form
$$y_i(x) = x^{r_i}\sum_{n=0}^{\infty}a_nx^n \qquad\text{with } i = 1, 2$$
with $r$ real or complex and chosen so that $a_0 \neq 0$. When one expands $b(x)$ and $c(x)$ as $b(x) = b_0 + b_1x + b_2x^2 + ...$ and $c(x) = c_0 + c_1x + c_2x^2 + ...$, it follows for $r$:
$$r^2 + (b_0 - 1)r + c_0 = 0$$
There are now 3 possibilities:
1. $r_1 = r_2$: then $y(x) = y_1(x)\ln|x| + y_2(x)$.
2. $r_1 - r_2 \in \mathbb{N}$: then $y(x) = ky_1(x)\ln|x| + y_2(x)$.
3. $r_1 - r_2 \neq \mathbb{Z}$: then $y(x) = y_1(x) + y_2(x)$.
4.2.2 Euler
Given the LDE
$$x^2\frac{d^2y(x)}{dx^2} + ax\frac{dy(x)}{dx} + by(x) = 0$$
Substitution of $y(x) = x^r$ gives an equation for $r$: $r^2 + (a-1)r + b = 0$. From this one gets two solutions $r_1$ and $r_2$. There are now 2 possibilities:
1. $r_1 \neq r_2$: then $y(x) = C_1x^{r_1} + C_2x^{r_2}$.
2. $r_1 = r_2 = r$: then $y(x) = (C_1\ln(x) + C_2)x^r$.
4.2.3 Legendre's DE
Given the LDE
$$(1-x^2)\frac{d^2y(x)}{dx^2} - 2x\frac{dy(x)}{dx} + n(n+1)y(x) = 0$$
The solutions of this equation are given by $y(x) = aP_n(x) + by_2(x)$ where the Legendre polynomials $P(x)$ are defined by:
$$P_n(x) = \frac{d^n}{dx^n}\left[\frac{(1-x^2)^n}{2^nn!}\right]$$
For these holds: $\|P_n\|^2 = 2/(2n+1)$.
4.2.4 The associated Legendre equation
This equation follows from the $\theta$-dependent part of the wave equation $\nabla^2\Psi = 0$ by substitution of $\xi = \cos(\theta)$. Then follows:
$$(1-\xi^2)\frac{d}{d\xi}\left[(1-\xi^2)\frac{dP(\xi)}{d\xi}\right] + [C(1-\xi^2) - m^2]P(\xi) = 0$$
Regular solutions exist only if $C = l(l+1)$. They are of the form:
$$P_l^m(\xi) = (1-\xi^2)^{m/2}\frac{d^mP_l^0(\xi)}{d\xi^m} = \frac{(1-\xi^2)^{m/2}}{2^ll!}\frac{d^{m+l}}{d\xi^{m+l}}(\xi^2-1)^l$$
For $|m| > l$ is $P_l^m(\xi) = 0$. Some properties of $P_l^0(\xi)$ are:
$$\int_{-1}^{1}P_l^0(\xi)P_{l'}^0(\xi)d\xi = \frac{2}{2l+1}\delta_{ll'}\;,\qquad \sum_{l=0}^{\infty}P_l^0(\xi)t^l = \frac{1}{\sqrt{1 - 2\xi t + t^2}}$$
This polynomial can be written as:
$$P_l^0(\xi) = \frac{1}{\pi}\int_0^{\pi}\left(\xi + \sqrt{\xi^2 - 1}\cos(\theta)\right)^ld\theta$$
4.2.5 Solutions for Bessel's equation
Given the LDE
$$x^2\frac{d^2y(x)}{dx^2} + x\frac{dy(x)}{dx} + (x^2 - \nu^2)y(x) = 0$$
also called Bessel's equation, and the Bessel functions of the first kind
$$J_\nu(x) = x^\nu\sum_{m=0}^{\infty}\frac{(-1)^mx^{2m}}{2^{2m+\nu}m!\,\Gamma(\nu + m + 1)}$$
for $\nu := n \in \mathbb{N}$ this becomes:
$$J_n(x) = x^n\sum_{m=0}^{\infty}\frac{(-1)^mx^{2m}}{2^{2m+n}m!(n+m)!}$$
When $\nu \neq \mathbb{Z}$ the solution is given by $y(x) = aJ_\nu(x) + bJ_{-\nu}(x)$. In general the solution is given by $y(x) = aJ_\nu(x) + bY_\nu(x)$, where
$$Y_\nu(x) = \frac{J_\nu(x)\cos(\nu\pi) - J_{-\nu}(x)}{\sin(\nu\pi)} \qquad\text{and}\qquad Y_n(x) = \lim_{\nu\to n}Y_\nu(x)$$
The equation $x^2y''(x) + xy'(x) - (x^2 + \nu^2)y(x) = 0$ has the modified Bessel functions of the first kind $I_\nu(x) = i^{-\nu}J_\nu(ix)$ as solutions, and the modified Bessel functions of the second kind $K_\nu(x) = \dfrac{\pi}{2}\cdot\dfrac{I_{-\nu}(x) - I_\nu(x)}{\sin(\nu\pi)}$.
Sometimes it can be convenient to write the solutions of Bessel's equation in terms of the Hankel functions
$$H_n^{(1)}(x) = J_n(x) + iY_n(x)\;,\qquad H_n^{(2)}(x) = J_n(x) - iY_n(x)$$
4.2.6 Properties of Bessel functions
Bessel functions are orthogonal with respect to the weight function $p(x) = x$.
$J_{-n}(x) = (-1)^nJ_n(x)$. The Neumann functions $N_m(x)$ are defined as:
$$N_m(x) = \frac{2}{\pi}J_m(x)\ln(x) + \frac{1}{x^m}\sum_{n=0}^{\infty}\alpha_nx^{2n}$$
The following holds: $\lim\limits_{x\to0}J_m(x) = x^m$, $\lim\limits_{x\to0}N_m(x) = x^{-m}$ for $m \neq 0$, $\lim\limits_{x\to0}N_0(x) = \ln(x)$.
$$\lim_{r\to\infty}H(r) = \frac{e^{\pm ikr}e^{i\omega t}}{\sqrt r}\;,\qquad \lim_{x\to\infty}J_n(x) = \sqrt{\frac{2}{\pi x}}\cos(x - x_n)\;,\qquad \lim_{x\to\infty}J_{-n}(x) = \sqrt{\frac{2}{\pi x}}\sin(x - x_n)$$
with $x_n = \tfrac12\pi\left(n + \tfrac12\right)$.
$$J_{n+1}(x) + J_{n-1}(x) = \frac{2n}{x}J_n(x)\;,\qquad J_{n+1}(x) - J_{n-1}(x) = -2\frac{dJ_n(x)}{dx}$$
The following integral relations hold:
$$J_m(x) = \frac{1}{2\pi}\int_0^{2\pi}\exp[i(x\sin(\theta) - m\theta)]d\theta = \frac{1}{\pi}\int_0^{\pi}\cos(x\sin(\theta) - m\theta)d\theta$$
4.2.7 Laguerre's equation
Given the LDE
$$x\frac{d^2y(x)}{dx^2} + (1-x)\frac{dy(x)}{dx} + ny(x) = 0$$
Solutions of this equation are the Laguerre polynomials $L_n(x)$:
$$L_n(x) = \frac{e^x}{n!}\frac{d^n}{dx^n}\left(x^ne^{-x}\right) = \sum_{m=0}^{n}\frac{(-1)^m}{m!}\binom{n}{m}x^m$$
4.2.8 The associated Laguerre equation
Given the LDE
$$\frac{d^2y(x)}{dx^2} + \left(\frac{m+1}{x} - 1\right)\frac{dy(x)}{dx} + \frac{n + \tfrac12(m+1)}{x}y(x) = 0$$
Solutions of this equation are the associated Laguerre polynomials $L_n^m(x)$:
$$L_n^m(x) = \frac{(-1)^mn!}{(n-m)!}e^xx^{-m}\frac{d^{n-m}}{dx^{n-m}}\left(e^{-x}x^n\right)$$
4.2.9 Hermite
The differential equations of Hermite are:
$$\frac{d^2H_n(x)}{dx^2} - 2x\frac{dH_n(x)}{dx} + 2nH_n(x) = 0 \qquad\text{and}\qquad \frac{d^2He_n(x)}{dx^2} - x\frac{dHe_n(x)}{dx} + nHe_n(x) = 0$$
Solutions of these equations are the Hermite polynomials, given by:
$$H_n(x) = (-1)^n\exp\left(x^2\right)\frac{d^n(\exp(-x^2))}{dx^n} = 2^{n/2}He_n(x\sqrt2)$$
$$He_n(x) = (-1)^n\exp\left(\tfrac12x^2\right)\frac{d^n(\exp(-\tfrac12x^2))}{dx^n} = 2^{-n/2}H_n(x/\sqrt2)$$
4.2.10 Chebyshev
The LDE
(1 x
2
)
d
2
U
n
(x)
dx
2
3x
dU
n
(x)
dx
+n(n + 2)U
n
(x) = 0
has solutions of the form
U
n
(x) =
sin[(n + 1) arccos(x)]
1 x
2
The LDE
(1 x
2
)
d
2
T
n
(x)
dx
2
x
dT
n
(x)
dx
+n
2
T
n
(x) = 0
has solutions T
n
(x) = cos(narccos(x)).
4.2.11 Weber
The LDE $W_n''(x) + \left(n + \tfrac12 - \tfrac14x^2\right)W_n(x) = 0$ has solutions: $W_n(x) = He_n(x)\exp\left(-\tfrac14x^2\right)$.
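The closed form $T_n(x) = \cos(n\arccos(x))$ above can be cross-checked against the Chebyshev three-term recurrence $T_{n+1}(x) = 2xT_n(x) - T_{n-1}(x)$, which follows from the cosine addition formula:

```python
import math

def T(n, x):
    # Chebyshev polynomial of the first kind on [-1, 1]
    return math.cos(n * math.acos(x))

x = 0.3
for n in range(1, 6):
    print(n, T(n + 1, x), 2 * x * T(n, x) - T(n - 1, x))   # should agree
```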
4.3 Nonlinear differential equations
Some nonlinear differential equations and a solution are:
$$\begin{aligned}
y' &= a\sqrt{y^2 + b^2} &\qquad y &= b\sinh(a(x - x_0))\\
y' &= a\sqrt{y^2 - b^2} &\qquad y &= b\cosh(a(x - x_0))\\
y' &= a\sqrt{b^2 - y^2} &\qquad y &= b\sin(a(x - x_0))\\
y' &= a(y^2 + b^2) &\qquad y &= b\tan(ab(x - x_0))\\
y' &= a(y^2 - b^2) &\qquad y &= -b\coth(ab(x - x_0))\\
y' &= a(b^2 - y^2) &\qquad y &= b\tanh(ab(x - x_0))\\
y' &= ay\left(\frac{b - y}{b}\right) &\qquad y &= \frac{b}{1 + Cb\exp(-ax)}
\end{aligned}$$
4.4 SturmLiouville equations
SturmLiouville equations are second order LDEs of the form:
$$\frac{d}{dx}\left[p(x)\frac{dy(x)}{dx}\right] + q(x)y(x) = \lambda m(x)y(x)$$
The boundary conditions are chosen so that the operator
$$L = \frac{d}{dx}\left[p(x)\frac{d}{dx}\right] + q(x)$$
is Hermitean. The normalization function $m(x)$ must satisfy
$$\int_a^bm(x)y_i(x)y_j(x)dx = \delta_{ij}$$
When $y_1(x)$ and $y_2(x)$ are two linearly independent solutions one can write the Wronskian in this form:
$$W(y_1, y_2) = \begin{vmatrix}y_1 & y_2\\ y_1' & y_2'\end{vmatrix} = \frac{C}{p(x)}$$
where $C$ is constant. By changing to another dependent variable $u(x)$, given by: $u(x) = y(x)\sqrt{p(x)}$, the LDE transforms into the normal form:
$$\frac{d^2u(x)}{dx^2} + I(x)u(x) = 0 \qquad\text{with}\qquad I(x) = \frac14\left(\frac{p'(x)}{p(x)}\right)^2 - \frac12\frac{p''(x)}{p(x)} + \frac{q(x) - \lambda m(x)}{p(x)}$$
If $I(x) > 0$, then $y''/y < 0$ and the solution has an oscillatory behaviour; if $I(x) < 0$, then $y''/y > 0$ and the solution has an exponential behaviour.

4.5 Partial differential equations

A frequently used solution method for PDEs is separation of variables: one assumes that the solution can be written as $u(x, t) = X(x)T(t)$. When this is substituted two ordinary DEs for $X(x)$ and $T(t)$ are obtained.
4.5.2 Special cases
The wave equation
The wave equation in 1 dimension is given by
$$\frac{\partial^2u}{\partial t^2} = c^2\frac{\partial^2u}{\partial x^2}$$
When the initial conditions $u(x, 0) = \varphi(x)$ and $\partial u(x, 0)/\partial t = \Psi(x)$ apply, the general solution is given by:
$$u(x, t) = \frac12[\varphi(x + ct) + \varphi(x - ct)] + \frac{1}{2c}\int_{x-ct}^{x+ct}\Psi(\xi)d\xi$$
The diffusion equation
The diffusion equation is:
u
t
= D
2
u
Its solutions can be written in terms of the propagators P(x, x
, 0) = (x x
). In 1 dimension it reads:
P(x, x
, t) =
1
2
Dt
exp
_
(x x
)
2
4Dt
_
In 3 dimensions it reads:
P(x, x
, t) =
1
8(Dt)
3/2
exp
_
(x x
)
2
4Dt
_
With initial condition u(x, 0) = f(x) the solution is:
u(x, t) =
_
G
f(x
)P(x, x
, t)dx
2
u
x
2
= g(x, t)
is given by
u(x, t) =
_
dt
_
dx
g(x
, t
)P(x, x
, t t
)
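The 1-D propagator should carry unit mass and spread with variance $2Dt$; a crude midpoint-rule check (D and t chosen arbitrarily, names ours):

```python
import math

D, t, x0 = 0.5, 2.0, 0.0

def P(x):
    # 1-D diffusion propagator P(x, x0, t)
    return math.exp(-(x - x0)**2 / (4 * D * t)) / (2 * math.sqrt(math.pi * D * t))

h, X = 0.001, 20.0
xs = [-X + (i + 0.5) * h for i in range(int(2 * X / h))]
mass = sum(P(x) for x in xs) * h
var = sum(P(x) * (x - x0)**2 for x in xs) * h
print(mass, var, 2 * D * t)   # mass close to 1, variance close to 2Dt
```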
The equation of Helmholtz
The equation of Helmholtz is obtained by substitution of u(x, t) = v(x) exp(it) in the wave equation. This gives
for v:
2
v(x, ) +k
2
v(x, ) = 0
This gives as solutions for v:
1. In cartesian coordinates: substitution of v = Aexp(i
k x) gives:
v(x) =
_
_
A(k)e
i
kx
dk
with the integrals over
k
2
= k
2
.
2. In polar coordinates:
v(r, ) =
m=0
(A
m
J
m
(kr) +B
m
N
m
(kr))e
im
3. In spherical coordinates:
v(r, , ) =
l=0
l
m=l
[A
lm
J
l+
1
2
(kr) +B
lm
J
l
1
2
(kr)]
Y (, )
r
4.5.3 Potential theory and Green's theorem

Subject of the potential theory are the Poisson equation $\nabla^2 u = f(\vec x\,)$, where $f$ is a given function, and the Laplace equation $\nabla^2 u = 0$. The solutions of these can often be interpreted as a potential. The solutions of Laplace's equation are called harmonic functions.
When a vector field $\vec v$ is given by $\vec v = \mathrm{grad}\,\varphi$, the following holds:

$$\int\limits_a^b(\vec v, \vec t\,)ds = \varphi(\vec b\,) - \varphi(\vec a\,)$$

In this case there exist functions $\varphi$ and $\vec w$ so that $\vec v = \mathrm{grad}\,\varphi + \mathrm{curl}\,\vec w$.

The field lines of the field $\vec v(\vec x\,)$ follow from:

$$\dot{\vec x}(t) = \lambda\vec v(\vec x\,)$$
The first theorem of Green is:

$$\iiint\limits_G\left[u\nabla^2 v + (\nabla u, \nabla v)\right]d^3V = \oiint\limits_S u\frac{\partial v}{\partial n}d^2A$$

The second theorem of Green is:

$$\iiint\limits_G\left[u\nabla^2 v - v\nabla^2 u\right]d^3V = \oiint\limits_S\left(u\frac{\partial v}{\partial n} - v\frac{\partial u}{\partial n}\right)d^2A$$
A harmonic function which is 0 on the boundary of an area is also 0 within that area. A harmonic function with a
normal derivative of 0 on the boundary of an area is constant within that area.
The Dirichlet problem is:

$$\nabla^2 u(\vec x\,) = f(\vec x\,)\ ,\quad \vec x\in R\ ,\quad u(\vec x\,) = g(\vec x\,)\ \mbox{for all}\ \vec x\in S.$$

It has a unique solution.

The Neumann problem is:

$$\nabla^2 u(\vec x\,) = f(\vec x\,)\ ,\quad \vec x\in R\ ,\quad \frac{\partial u(\vec x\,)}{\partial n} = h(\vec x\,)\ \mbox{for all}\ \vec x\in S.$$

The solution is unique except for a constant. The solution exists if:

$$\iiint\limits_R f(\vec x\,)d^3V = \oiint\limits_S h(\vec x\,)d^2A$$
A fundamental solution of the Laplace equation satisfies:

$$\nabla^2 u(\vec x\,) = \delta(\vec x\,)$$

In 2 dimensions in polar coordinates this has the solution:

$$u(r) = \frac{\ln(r)}{2\pi}$$

In 3 dimensions in spherical coordinates the solution is:

$$u(r) = \frac{-1}{4\pi r}$$
The equation $\nabla^2 v = -\delta(\vec x - \vec\xi\,)$ has the solution

$$v(\vec x\,) = \frac{1}{4\pi|\vec x - \vec\xi\,|}$$
After substituting this in Green's 2nd theorem and applying the sieve property of the $\delta$ function one can derive Green's 3rd theorem:

$$u(\vec\xi\,) = -\frac{1}{4\pi}\iiint\limits_R\frac{\nabla^2 u}{r}d^3V + \frac{1}{4\pi}\oiint\limits_S\left[\frac{1}{r}\frac{\partial u}{\partial n} - u\frac{\partial}{\partial n}\left(\frac{1}{r}\right)\right]d^2A$$
The Green function $G(\vec x, \vec\xi\,)$ is defined by: $\nabla^2 G = \delta(\vec x - \vec\xi\,)$, and on boundary $S$ holds $G(\vec x, \vec\xi\,) = 0$. Then $G$ can be written as:

$$G(\vec x, \vec\xi\,) = \frac{-1}{4\pi|\vec x - \vec\xi\,|} + g(\vec x, \vec\xi\,)$$

Then $g(\vec x, \vec\xi\,)$ is a solution of the Dirichlet problem. The solution of the Poisson equation $\nabla^2 u = f(\vec x\,)$ with the boundary condition $u(\vec x\,) = g(\vec x\,)$ on $S$ is then:

$$u(\vec\xi\,) = \iiint\limits_R G(\vec x, \vec\xi\,)f(\vec x\,)d^3V - \oiint\limits_S g(\vec x\,)\frac{\partial G(\vec x, \vec\xi\,)}{\partial n}d^2A$$
Chapter 5
Linear algebra
5.1 Vector spaces
$G$ is a group for the operation $\otimes$ if:

1. $\forall a, b\in G \Rightarrow a\otimes b\in G$: a group is closed.

2. $(a\otimes b)\otimes c = a\otimes(b\otimes c)$: a group is associative.

3. $\exists e\in G$ so that $a\otimes e = e\otimes a = a$: there exists a unit element.

4. $\forall a\in G\ \exists\overline{a}\in G$ so that $a\otimes\overline{a} = e$: each element has an inverse.

If

5. $a\otimes b = b\otimes a$

the group is called Abelian or commutative. Vector spaces form an Abelian group for addition and multiplication: $1\cdot\vec a = \vec a$, $\lambda(\mu\vec a\,) = (\lambda\mu)\vec a$, $(\lambda + \mu)(\vec a + \vec b\,) = \lambda\vec a + \lambda\vec b + \mu\vec a + \mu\vec b$.
$W$ is a linear subspace if $\forall\vec w_1, \vec w_2\in W$ holds: $\lambda\vec w_1 + \mu\vec w_2\in W$.

$W$ is an invariant subspace of $V$ for the operator $A$ if $\forall\vec w\in W$ holds: $A\vec w\in W$.
5.2 Basis
For an orthogonal basis holds: $(\vec e_i, \vec e_j) = c\delta_{ij}$. For an orthonormal basis holds: $(\vec e_i, \vec e_j) = \delta_{ij}$.

The set of vectors $\{\vec a_n\}$ is linearly independent if:

$$\sum_i\lambda_i\vec a_i = 0 \ \Leftrightarrow\ \forall i:\ \lambda_i = 0$$

The set $\{\vec a_n\}$ is a basis if it is 1. independent and 2. $V = <\vec a_1, \vec a_2, ...> = \sum\lambda_i\vec a_i$.
5.3 Matrix calculus
5.3.1 Basic operations
For the matrix multiplication of matrices $A = a_{ij}$ and $B = b_{kl}$, with $r$ the row index and $k$ the column index, holds:

$$A^{r_1\times k_1}\cdot B^{r_2\times k_2} = C^{r_1\times k_2}\ ,\quad (AB)_{ij} = \sum_k a_{ik}b_{kj}$$

where $r$ is the number of rows and $k$ the number of columns.
The transpose of $A$ is defined by: $a^T_{ij} = a_{ji}$. For this holds $(AB)^T = B^TA^T$ and $(A^T)^{-1} = (A^{-1})^T$. For the inverse matrix holds: $(A\cdot B)^{-1} = B^{-1}\cdot A^{-1}$. The inverse matrix $A^{-1}$ has the property that $A\cdot A^{-1} = I\!I$ and can be found by diagonalization: $(A_{ij}|I\!I) \sim (I\!I|A^{-1}_{ij})$.
The inverse of a $2\times 2$ matrix is:

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$$
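As a quick sketch (plain Python; the helper name `inv2` is ours), the $2\times 2$ inverse formula can be verified by multiplying a matrix with its computed inverse:

```python
def inv2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the (ad - bc) formula above."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[4.0, 7.0], [2.0, 6.0]]
Ai = inv2(4.0, 7.0, 2.0, 6.0)
# A * A^-1 should be the identity matrix
prod = [[sum(A[i][k] * Ai[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
I = [[1.0, 0.0], [0.0, 1.0]]
err = max(abs(prod[i][j] - I[i][j]) for i in range(2) for j in range(2))
print(err < 1e-12)  # True
```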
The determinant function $D = \det(A)$ is defined by:

$$\det(A) = D(\vec a_{*1}, \vec a_{*2}, ..., \vec a_{*n})$$

For the determinant $\det(A)$ of a matrix $A$ holds: $\det(AB) = \det(A)\cdot\det(B)$. A $2\times 2$ matrix has determinant:

$$\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - cb$$
The derivative of a matrix is a matrix with the derivatives of the coefficients:

$$\frac{dA}{dt} = \frac{da_{ij}}{dt} \quad\mbox{and}\quad \frac{dAB}{dt} = B\frac{dA}{dt} + A\frac{dB}{dt}$$

The derivative of the determinant is given by:

$$\frac{d\det(A)}{dt} = D\left(\frac{d\vec a_1}{dt}, ..., \vec a_n\right) + D\left(\vec a_1, \frac{d\vec a_2}{dt}, ..., \vec a_n\right) + ... + D\left(\vec a_1, ..., \frac{d\vec a_n}{dt}\right)$$
When the rows of a matrix are considered as vectors the row rank of a matrix is the number of independent vectors in this set. Similarly for the column rank. The row rank equals the column rank for each matrix.

Let $\tilde A: \tilde V \rightarrow \tilde V$ be the complex extension of the real linear operator $A: V \rightarrow V$ in a finite dimensional $V$. Then $A$ and $\tilde A$ have the same characteristic equation.
When $A_{ij}\in I\!R$ and $\vec v_1 + i\vec v_2$ is an eigenvector of $A$ at eigenvalue $\lambda = \lambda_1 + i\lambda_2$, then holds:

1. $A\vec v_1 = \lambda_1\vec v_1 - \lambda_2\vec v_2$ and $A\vec v_2 = \lambda_2\vec v_1 + \lambda_1\vec v_2$.

2. $\vec v^{\,*} = \vec v_1 - i\vec v_2$ is an eigenvector at $\lambda^* = \lambda_1 - i\lambda_2$.

3. The linear span $<\vec v_1, \vec v_2>$ is an invariant subspace of $A$.
If $\vec k_n$ are the columns of $A$, then the transformed space of $A$ is given by:

$$R(A) = <A\vec e_1, ..., A\vec e_n> = <\vec k_1, ..., \vec k_n>$$

If the columns $\vec k_n$ of an $n\times m$ matrix $A$ are independent, then the nullspace $\mathcal{N}(A) = \{\vec 0\,\}$.
5.3.2 Matrix equations
We start with the equation $A\vec x = \vec b$ with $\vec b \neq \vec 0$. If $\det(A) \neq 0$ the unique solution is $\vec x = A^{-1}\vec b$. The homogeneous equation $A\vec x = \vec 0$ has a solution $\neq \vec 0$ only if $\det(A) = 0$.

Cramer's rule for the solution of systems of linear equations is: let the system be written as

$$A\vec x = \vec b \equiv \vec a_1x_1 + ... + \vec a_nx_n = \vec b$$

then $x_j$ is given by:

$$x_j = \frac{D(\vec a_1, ..., \vec a_{j-1}, \vec b, \vec a_{j+1}, ..., \vec a_n)}{\det(A)}$$
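Cramer's rule can be sketched directly from the column-replacement formula (assuming NumPy is available; the helper name `cramer` is ours):

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's rule: x_j = det(A with column j replaced by b) / det(A)."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for j in range(len(b)):
        Aj = A.copy()
        Aj[:, j] = b          # replace column j by the right-hand side
        x[j] = np.linalg.det(Aj) / d
    return x

A = [[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]]
b = [8.0, -11.0, -3.0]
print(np.allclose(cramer(A, b), np.linalg.solve(A, b)))  # True
```

For large systems Gaussian elimination is far cheaper; Cramer's rule is mainly of theoretical interest.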
5.4 Linear transformations
A transformation $A$ is linear if: $A(\lambda\vec x + \beta\vec y\,) = \lambda A\vec x + \beta A\vec y$.

Some common linear transformations are:

Transformation type | Equation
Projection on the line $<\vec a\,>$ | $P(\vec x\,) = (\vec a, \vec x\,)\vec a/(\vec a, \vec a\,)$
Projection on the plane $(\vec a, \vec x\,) = 0$ | $Q(\vec x\,) = \vec x - P(\vec x\,)$
Mirror image in the line $<\vec a\,>$ | $S(\vec x\,) = 2P(\vec x\,) - \vec x$
Mirror image in the plane $(\vec a, \vec x\,) = 0$ | $T(\vec x\,) = 2Q(\vec x\,) - \vec x = \vec x - 2P(\vec x\,)$

For a projection holds: $\vec x - P_W(\vec x\,) \perp P_W(\vec x\,)$ and $P_W(\vec x\,)\in W$.

If for a transformation $A$ holds: $(A\vec x, \vec y\,) = (\vec x, A\vec y\,) = (A\vec x, A\vec y\,)$, then $A$ is a projection.
Let $A: W \rightarrow W$ define a linear transformation; we define:

If $S$ is a subset of $V$: $A(S) := \{A\vec x\in W\,|\,\vec x\in S\}$

If $T$ is a subset of $W$: $A^{\leftarrow}(T) := \{\vec x\in V\,|\,A(\vec x\,)\in T\}$

Then $A(S)$ is a linear subspace of $W$ and the inverse transformation $A^{\leftarrow}(T)$ is a linear subspace of $V$. From this follows that $A^{\leftarrow}(\vec 0\,) = E_0$ is a linear subspace of $V$, the null space of $A$, notation: $\mathcal{N}(A)$. Then the following holds:

$$\dim(\mathcal{N}(A)) + \dim(\mathcal{R}(A)) = \dim(V)$$
5.5 Plane and line
The equation of a line that contains the points $\vec a$ and $\vec b$ is:

$$\vec x = \vec a + \lambda(\vec b - \vec a\,) = \vec a + \lambda\vec r$$

The equation of a plane is:

$$\vec x = \vec a + \lambda(\vec b - \vec a\,) + \mu(\vec c - \vec a\,) = \vec a + \lambda\vec r_1 + \mu\vec r_2$$

When this is a plane in $I\!R^3$, the normal vector to this plane is given by:

$$\vec n_V = \frac{\vec r_1\times\vec r_2}{|\vec r_1\times\vec r_2|}$$

A line can also be described by the points for which the line equation $\ell$: $(\vec a, \vec x\,) + b = 0$ holds, and for a plane $V$: $(\vec a, \vec x\,) + k = 0$. The normal vector to $V$ is then: $\vec a/|\vec a\,|$.

The distance $d$ between 2 points $p$ and $q$ is given by $d(p, q) = \|\vec p - \vec q\,\|$.

In $I\!R^2$ holds: the distance of a point $p$ to the line $(\vec a, \vec x\,) + b = 0$ is

$$d(p, \ell) = \frac{|(\vec a, \vec p\,) + b|}{|\vec a\,|}$$

Similarly in $I\!R^3$: the distance of a point $p$ to the plane $(\vec a, \vec x\,) + k = 0$ is

$$d(p, V) = \frac{|(\vec a, \vec p\,) + k|}{|\vec a\,|}$$

This can be generalized for $I\!R^n$ and $C^n$ (theorem from Hesse).
5.6 Coordinate transformations
The linear transformation $A$ from $I\!K^n \rightarrow I\!K^m$ is given by ($I\!K = I\!R$ or $C$):

$$\vec y = A^{m\times n}\vec x$$

where a column of $A$ is the image of a base vector in the original.

The matrix $A^\alpha_\beta$ transforms a vector given w.r.t. a basis $\alpha$ into a vector w.r.t. a basis $\beta$. It is given by:

$$A^\alpha_\beta = (\beta(A\vec a_1), ..., \beta(A\vec a_n))$$

where $\beta(\vec x\,)$ is the representation of the vector $\vec x$ w.r.t. basis $\beta$.

The transformation matrix $S^\alpha_\beta$ transforms vectors from coordinate system $\alpha$ into coordinate system $\beta$:

$$S^\alpha_\beta := I\!I^\alpha_\beta = (\beta(\vec a_1), ..., \beta(\vec a_n))$$

and $S^\alpha_\beta\cdot S^\beta_\alpha = I\!I$.

The matrix of a transformation $A$ is then given by:

$$A^\beta_\beta = \left(A^\beta_\beta\vec e_1, ..., A^\beta_\beta\vec e_n\right)$$

For the transformation of matrix operators to another coordinate system holds: $A^\alpha_\delta = S^\lambda_\delta A^\beta_\lambda S^\alpha_\beta$, $A^\alpha_\alpha = S^\beta_\alpha A^\beta_\beta S^\alpha_\beta$ and $(AB)^\alpha_\lambda = A^\beta_\lambda B^\alpha_\beta$.

Further holds $A^\alpha_\beta = S^\alpha_\beta A^\alpha_\alpha$ and $A^\beta_\alpha = A^\alpha_\alpha S^\beta_\alpha$.
5.7 Eigenvalues

The eigenvalue equation

$$A\vec x = \lambda\vec x$$

with eigenvalues $\lambda$ can be solved with $(A - \lambda I\!I)\vec x = \vec 0 \Rightarrow \det(A - \lambda I\!I) = 0$. The eigenvalues follow from this characteristic equation. The following is true: $\det(A) = \prod\limits_i\lambda_i$ and $\mathrm{Tr}(A) = \sum\limits_i a_{ii} = \sum\limits_i\lambda_i$.

The eigenvalues $\lambda_i$ are independent of the chosen basis. The matrix of $A$ in a basis of eigenvectors, with $S$ the transformation matrix to this basis, $S = (E_{\lambda_1}, ..., E_{\lambda_n})$, is given by:

$$\Lambda = S^{-1}AS = \mathrm{diag}(\lambda_1, ..., \lambda_n)$$

When 0 is an eigenvalue of $A$ then $E_0(A) = \mathcal{N}(A)$.

When $\lambda$ is an eigenvalue of $A$ holds: $A^n\vec x = \lambda^n\vec x$.
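The identities $\det(A) = \prod\lambda_i$, $\mathrm{Tr}(A) = \sum\lambda_i$ and $\Lambda = S^{-1}AS$ can be checked numerically (assuming NumPy is available; a sketch, not part of the original formulary):

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])   # eigenvalues 5 and 2
lam, S = np.linalg.eig(A)                # eigenvalues and matrix of eigenvectors

print(np.isclose(np.linalg.det(A), np.prod(lam)))   # True: det(A) = prod(lambda_i)
print(np.isclose(np.trace(A), np.sum(lam)))         # True: Tr(A) = sum(lambda_i)

Lam = np.linalg.inv(S) @ A @ S                      # S^-1 A S = diag(lambda_i)
print(np.allclose(Lam, np.diag(lam)))               # True
```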
5.8 Transformation types
Isometric transformations
A transformation is isometric when $\|A\vec x\,\| = \|\vec x\,\|$. This implies that the eigenvalues of an isometric transformation are given by $\lambda = \exp(i\varphi) \Rightarrow |\lambda| = 1$. Then also holds: $(A\vec x, A\vec y\,) = (\vec x, \vec y\,)$.

When $W$ is an invariant subspace of the isometric transformation $A$ with $\dim(A) < \infty$, then $W^\perp$ is also an invariant subspace.
Orthogonal transformations
A transformation $A$ is orthogonal if $A$ is isometric and the inverse $A^\leftarrow$ exists. For the eigenvalues of an orthogonal transformation in $I\!R^3$ holds $\lambda_1 = 1$ and $\lambda_2 = \lambda_3^* = \exp(i\varphi)$. A rotation over $E_{\lambda_1}$ is given by the matrix

$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos(\varphi) & -\sin(\varphi) \\ 0 & \sin(\varphi) & \cos(\varphi) \end{pmatrix}$$

$A$ is mirrored orthogonal if $\det(A) = -1$. Vectors from $E_{-1}$ are mirrored by $A$ w.r.t. the invariant subspace $E_{-1}^\perp$. A mirroring in $I\!R^2$ in $<(\cos(\frac{1}{2}\varphi), \sin(\frac{1}{2}\varphi))>$ is given by:

$$S = \begin{pmatrix} \cos(\varphi) & \sin(\varphi) \\ \sin(\varphi) & -\cos(\varphi) \end{pmatrix}$$

Mirrored orthogonal transformations in $I\!R^3$ are rotational mirrorings: rotations of axis $<\vec a_1>$ through angle $\varphi$ and mirror plane $<\vec a_1>^\perp$.
Unitary transformations

Let $V$ be a complex inner product space. The adjoint $A^\dagger$ of a transformation $A$ is defined by $(A\vec x, \vec y\,) = (\vec x, A^\dagger\vec y\,)$; in matrix form $A^\dagger = [a^*_{ji}]$. An alternative notation is: $A^H = A^\dagger$. The following holds: $(CD)^\dagger = D^\dagger C^\dagger$. Further, $A^\dagger = A^{-1}$ if $A$ is unitary and $A^\dagger = A$ if $A$ is Hermitian.
Definition: the linear transformation $A$ is normal in a complex vector space $V$ if $A^\dagger A = AA^\dagger$.

If $A$ is normal holds:

1. For all vectors $\vec x, \vec y\in V$ and a normal transformation $A$ holds:

$$(A\vec x, A\vec y\,) = (A^\dagger A\vec x, \vec y\,) = (AA^\dagger\vec x, \vec y\,) = (A^\dagger\vec x, A^\dagger\vec y\,)$$

2. $\vec x$ is an eigenvector of $A$ if and only if $\vec x$ is an eigenvector of $A^\dagger$.

3. Eigenvectors of $A$ for different eigenvalues are mutually perpendicular.

4. If $E_\lambda$ is an eigenspace of $A$, then the orthogonal complement $E_\lambda^\perp$ is an invariant subspace of $A$.
Let the different roots of the characteristic equation of $A$ be $\beta_i$ with multiplicities $n_i$. Then the dimension of each eigenspace $V_i$ equals $n_i$. These eigenspaces are mutually perpendicular and each vector $\vec x\in V$ can be written in exactly one way as

$$\vec x = \sum_i\vec x_i \quad\mbox{with}\quad \vec x_i\in V_i$$

This can also be written as: $\vec x_i = P_i\vec x$ where $P_i$ is a projection on $V_i$. This leads to the spectral mapping theorem: let $A$ be a normal transformation in a complex vector space $V$ with $\dim(V) = n$. Then:

1. There exist projection transformations $P_i$, $1 \leq i \leq p$, with the properties

$$P_i\cdot P_j = 0\ \mbox{for}\ i \neq j\ ,\quad P_1 + ... + P_p = I\!I\ ,\quad \dim P_1(V) + ... + \dim P_p(V) = n$$

and complex numbers $\alpha_1, ..., \alpha_p$ so that $A = \alpha_1 P_1 + ... + \alpha_p P_p$.

2. If $A$ is unitary then holds $|\alpha_i| = 1\ \forall i$.

3. If $A$ is Hermitian then $\alpha_i\in I\!R\ \forall i$.
Complete systems of commuting Hermitian transformations
Consider $m$ Hermitian linear transformations $A_i$ in an $n$-dimensional complex inner product space $V$. Assume they mutually commute.

Lemma: if $E_\lambda$ is the eigenspace for eigenvalue $\lambda$ from $A_1$, then $E_\lambda$ is an invariant subspace of every transformation $A_i$: if $\vec x\in E_\lambda$, then $A_i\vec x\in E_\lambda$.

Theorem: consider $m$ commuting Hermitian matrices $A_i$. Then there exists a unitary matrix $U$ so that all matrices $U^\dagger A_iU$ are diagonal. The columns of $U$ are the common eigenvectors of all matrices $A_j$.

If all eigenvalues of a Hermitian linear transformation in an $n$-dimensional complex vector space differ, then the normalized eigenvector is known except for a phase factor $\exp(i\alpha)$.

Definition: a commuting set of Hermitian transformations is called complete if for each set of two common eigenvectors $\vec v_i, \vec v_j$ there exists a transformation $A_k$ so that $\vec v_i$ and $\vec v_j$ are eigenvectors with different eigenvalues of $A_k$.

Usually a commuting set is taken as small as possible. In quantum physics one speaks of commuting observables. The required number of commuting observables equals the number of quantum numbers required to characterize a state.
5.9 Homogeneous coordinates
Homogeneous coordinates are used if one wants to combine both rotations and translations in one matrix transformation. An extra coordinate is introduced to describe the non-linearities. Homogeneous coordinates are derived from cartesian coordinates as follows:

$$\begin{pmatrix} x \\ y \\ z \end{pmatrix}_{\rm cart} = \begin{pmatrix} wx \\ wy \\ wz \\ w \end{pmatrix}_{\rm hom} = \begin{pmatrix} X \\ Y \\ Z \\ w \end{pmatrix}_{\rm hom}$$
so x = X/w, y = Y/w and z = Z/w. Transformations in homogeneous coordinates are described by the following
matrices:
1. Translation along vector $(X_0, Y_0, Z_0, w_0)$:

$$T = \begin{pmatrix} w_0 & 0 & 0 & X_0 \\ 0 & w_0 & 0 & Y_0 \\ 0 & 0 & w_0 & Z_0 \\ 0 & 0 & 0 & w_0 \end{pmatrix}$$
2. Rotations of the $x, y, z$ axis, resp. through angles $\alpha, \beta, \gamma$:

$$R_x(\alpha) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha & 0 \\ 0 & \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \quad R_y(\beta) = \begin{pmatrix} \cos\beta & 0 & \sin\beta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\beta & 0 & \cos\beta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \quad R_z(\gamma) = \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 & 0 \\ \sin\gamma & \cos\gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
3. A perspective projection on image plane z = c with the center of projection in the origin. This transformation
has no inverse.
$$P(z = c) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1/c & 0 \end{pmatrix}$$
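As a minimal sketch of how these $4\times 4$ matrices compose (plain Python; helper names are ours): rotate $(1, 0, 0)$ through $90°$ about $z$, then translate by $(0, 0, 5)$, and convert back via $x = X/w$ etc.

```python
import math

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

def translation(X0, Y0, Z0, w0=1.0):
    return [[w0, 0, 0, X0], [0, w0, 0, Y0], [0, 0, w0, Z0], [0, 0, 0, w0]]

def rot_z(gamma):
    c, s = math.cos(gamma), math.sin(gamma)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

v = [1.0, 0.0, 0.0, 1.0]                        # homogeneous point, w = 1
v = mat_vec(translation(0, 0, 5), mat_vec(rot_z(math.pi / 2), v))
x, y, z = (c / v[3] for c in v[:3])             # back to cartesian coordinates
print(round(x, 12), round(y, 12), round(z, 12))  # 0.0 1.0 5.0
```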
5.10 Inner product spaces
A complex inner product on a complex vector space is defined as follows:

1. $(\vec a, \vec b\,) = \overline{(\vec b, \vec a\,)}$,

2. $(\vec a, \beta_1\vec b_1 + \beta_2\vec b_2) = \beta_1(\vec a, \vec b_1) + \beta_2(\vec a, \vec b_2)$ for all $\vec a, \vec b_1, \vec b_2\in V$ and $\beta_1, \beta_2\in C$,

3. $(\vec a, \vec a\,) \geq 0$ for all $\vec a\in V$, $(\vec a, \vec a\,) = 0$ if and only if $\vec a = \vec 0$.
Due to (1) holds: $(\vec a, \vec a\,)\in I\!R$. The inner product space $C^n$ is the complex vector space on which a complex inner product is defined by:

$$(\vec a, \vec b\,) = \sum_{i=1}^n a_i^*b_i$$

For function spaces holds:

$$(f, g) = \int\limits_a^b f^*(t)g(t)dt$$

For each $\vec a$ the length $\|\vec a\,\|$ is defined by: $\|\vec a\,\| = \sqrt{(\vec a, \vec a\,)}$. The following holds: $\|\vec a\,\| - \|\vec b\,\| \leq \|\vec a + \vec b\,\| \leq \|\vec a\,\| + \|\vec b\,\|$, and with $\varphi$ the angle between $\vec a$ and $\vec b$ holds: $(\vec a, \vec b\,) = \|\vec a\,\|\cdot\|\vec b\,\|\cos(\varphi)$.
Let $\vec a_1, ..., \vec a_n$ be a set of vectors in an inner product space $V$. Then the Gramian $G$ of this set is given by: $G_{ij} = (\vec a_i, \vec a_j)$. The set of vectors is independent if and only if $\det(G) \neq 0$.
A set is orthonormal if $(\vec a_i, \vec a_j) = \delta_{ij}$. If $\vec e_1, \vec e_2, ...$ form an orthonormal row in an infinite dimensional vector space Bessel's inequality holds:

$$\|\vec x\,\|^2 \geq \sum_{i=1}^\infty|(\vec e_i, \vec x\,)|^2$$

The equal sign holds if and only if $\lim\limits_{n\rightarrow\infty}\|\vec x_n - \vec x\,\| = 0$.
The inner product space $\ell^2$ is defined in $C^\infty$ by:

$$\ell^2 = \left\{\vec a = (a_1, a_2, ...)\ \Big|\ \sum_{n=1}^\infty|a_n|^2 < \infty\right\}$$

A space is called a Hilbert space if it is $\ell^2$ and if also holds: $\lim\limits_{n\rightarrow\infty}|a_{n+1} - a_n| = 0$.
5.11 The Laplace transformation
The class LT exists of functions for which holds:

1. On each interval $[0, A]$, $A > 0$ there are no more than a finite number of discontinuities and each discontinuity has an upper- and lower limit,

2. $\exists t_0\in[0, \infty>$ and $a, M\in I\!R$ so that for $t \geq t_0$ holds: $|f(t)|\exp(-at) < M$.

Then there exists a Laplace transform for $f$.

The Laplace transformation is a generalisation of the Fourier transformation. The Laplace transform of a function $f(t)$ is, with $s\in C$ and $t \geq 0$:

$$F(s) = \int\limits_0^\infty f(t)\mathrm{e}^{-st}dt$$
The Laplace transform of the derivative of a function is given by:

$$\mathcal{L}\left(f^{(n)}(t)\right) = -f^{(n-1)}(0) - sf^{(n-2)}(0) - ... - s^{n-1}f(0) + s^nF(s)$$
The operator $\mathcal{L}$ has the following properties:

1. Equal shapes: if $a > 0$ then

$$\mathcal{L}\left(f(at)\right) = \frac{1}{a}F\left(\frac{s}{a}\right)$$

2. Damping: $\mathcal{L}(\mathrm{e}^{-at}f(t)) = F(s + a)$.

3. Translation: if $a > 0$ and $g$ is defined by $g(t) = f(t - a)$ if $t > a$ and $g(t) = 0$ for $t \leq a$, then holds: $\mathcal{L}(g(t)) = \mathrm{e}^{-sa}\mathcal{L}(f(t))$.

If $s\in I\!R$ then holds $\Re(F) = \mathcal{L}(\Re(f))$ and $\Im(F) = \mathcal{L}(\Im(f))$.
For some often occurring functions holds:

$$\begin{array}{ll}
f(t) & F(s) = \mathcal{L}(f(t)) \\[1mm]
\displaystyle\frac{t^n}{n!}\mathrm{e}^{at} & (s - a)^{-n-1} \\[1mm]
\mathrm{e}^{at}\cos(\omega t) & \displaystyle\frac{s - a}{(s - a)^2 + \omega^2} \\[1mm]
\mathrm{e}^{at}\sin(\omega t) & \displaystyle\frac{\omega}{(s - a)^2 + \omega^2} \\[1mm]
\delta(t - a) & \exp(-as)
\end{array}$$
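A table entry such as $\mathcal{L}(\mathrm{e}^{at}\cos(\omega t)) = (s - a)/((s - a)^2 + \omega^2)$ can be verified by computing the defining integral numerically (plain Python sketch; `laplace_numeric` is our own helper, valid only when the integrand decays, i.e. $s > a$):

```python
import math

def laplace_numeric(f, s, T=60.0, n=200000):
    """Approximate F(s) = integral_0^inf f(t) e^{-st} dt with the midpoint rule on [0, T]."""
    h = T / n
    return sum(f((k + 0.5) * h) * math.exp(-s * (k + 0.5) * h) for k in range(n)) * h

a, w, s = -0.5, 2.0, 1.0
num = laplace_numeric(lambda t: math.exp(a * t) * math.cos(w * t), s)
exact = (s - a) / ((s - a) ** 2 + w ** 2)   # = 1.5 / 6.25 = 0.24
print(abs(num - exact) < 1e-6)  # True
```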
5.12 The convolution
The convolution integral is defined by:

$$(f * g)(t) = \int\limits_0^t f(u)g(t - u)du$$

The convolution has the following properties:

1. $f * g\in$ LT
2. $\mathcal{L}(f * g) = \mathcal{L}(f)\cdot\mathcal{L}(g)$
3. Distribution: $f * (g + h) = f * g + f * h$
4. Commutative: $f * g = g * f$
5. Homogeneity: $f * (\lambda g) = \lambda f * g$

If $\mathcal{L}(f) = F_1\cdot F_2$, then $f(t) = f_1 * f_2$.
5.13 Systems of linear differential equations
We start with the equation $\dot{\vec x} = A\vec x$. Assume that $\vec x = \vec v\exp(\lambda t)$, then follows: $A\vec v = \lambda\vec v$. In the $2\times 2$ case holds:

1. $\lambda_1 \neq \lambda_2$: then $\vec x(t) = \sum\vec v_i\exp(\lambda_it)$.

2. $\lambda_1 = \lambda_2$: then $\vec x(t) = (\vec ut + \vec v\,)\exp(\lambda t)$.

Assume that $\lambda = \alpha + i\beta$ is an eigenvalue with eigenvector $\vec v$; then $\lambda^*$ is also an eigenvalue, for eigenvector $\vec v^{\,*}$. Decompose $\vec v = \vec u + i\vec w$, then the real solutions are

$$c_1[\vec u\cos(\beta t) - \vec w\sin(\beta t)]\mathrm{e}^{\alpha t} + c_2[\vec w\cos(\beta t) + \vec u\sin(\beta t)]\mathrm{e}^{\alpha t}$$

There are two solution strategies for the equation $\ddot{\vec x} = A\vec x$:

1. Let $\vec x = \vec v\exp(\lambda t) \Rightarrow \det(A - \lambda^2I\!I) = 0$.

2. Introduce: $\dot x = u$ and $\dot y = v$, this leads to $\ddot x = \dot u$ and $\ddot y = \dot v$. This transforms an $n$-dimensional set of second order equations into a $2n$-dimensional set of first order equations.
5.14 Quadratic forms
5.14.1 Quadratic forms in $I\!R^2$

The general equation of a quadratic form is: $\vec x^TA\vec x + 2\vec x^TP + S = 0$. Here, $A$ is a symmetric matrix. If $\Lambda = S^{-1}AS = \mathrm{diag}(\lambda_1, ..., \lambda_n)$ holds: $\vec u^T\Lambda\vec u + 2\vec u^TP + S = 0$, so all cross terms are 0. $\vec u = (u, v, w)$ should be chosen so that $\det(S) = +1$, to maintain the same orientation as the system $(x, y, z)$.

Starting with the equation

$$ax^2 + 2bxy + cy^2 + dx + ey + f = 0$$

we have $|A| = ac - b^2$. An ellipse has $|A| > 0$, a parabola $|A| = 0$ and a hyperbola $|A| < 0$. In polar coordinates this can be written as:

$$r = \frac{ep}{1 - e\cos(\theta)}$$

An ellipse has $e < 1$, a parabola $e = 1$ and a hyperbola $e > 1$.
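The sign test on $|A| = ac - b^2$ translates into a one-line classifier (plain Python sketch; the function name is ours):

```python
def conic_type(a, b, c):
    """Classify a x^2 + 2 b x y + c y^2 + ... = 0 by the sign of |A| = a c - b^2."""
    det = a * c - b * b
    if det > 0:
        return "ellipse"
    if det == 0:
        return "parabola"
    return "hyperbola"

print(conic_type(1, 0, 1))    # ellipse   (x^2 + y^2 = 1)
print(conic_type(0, 0.5, 0))  # hyperbola (x y = 1)
print(conic_type(0, 0, 1))    # parabola  (y^2 = x)
```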
5.14.2 Quadratic surfaces in $I\!R^3$

Rank 3:

$$p\frac{x^2}{a^2} + q\frac{y^2}{b^2} + r\frac{z^2}{c^2} = d$$

Ellipsoid: $p = q = r = d = 1$, $a, b, c$ are the lengths of the semi-axes.
Single-bladed hyperboloid: $p = q = d = 1$, $r = -1$.
Double-bladed hyperboloid: $r = d = 1$, $p = q = -1$.
Cone: $p = q = 1$, $r = -1$, $d = 0$.

Rank 2:

$$p\frac{x^2}{a^2} + q\frac{y^2}{b^2} + r\frac{z}{c^2} = d$$

Elliptic paraboloid: $p = q = 1$, $r = -1$, $d = 0$.
Hyperbolic paraboloid: $p = r = -1$, $q = 1$, $d = 0$.
Elliptic cylinder: $p = q = d = 1$, $r = 0$.
Hyperbolic cylinder: $p = d = 1$, $q = -1$, $r = 0$.
Pair of planes: $p = 1$, $q = -1$, $d = 0$.

Rank 1:

$$py^2 + qx = d$$

Parabolic cylinder: $p, q > 0$.
Parallel pair of planes: $d > 0$, $q = 0$, $p \neq 0$.
Double plane: $p \neq 0$, $q = d = 0$.
Chapter 6
Complex function theory
6.1 Functions of complex variables
Complex function theory deals with complex functions of a complex variable. Some definitions:

$f$ is analytical on $\mathcal{G}$ if $f$ is continuous and differentiable on $\mathcal{G}$.

A Jordan curve is a curve that is closed and simple (free of self-intersections).

If $K$ is a curve in $C$ with parameter equation $z = \varphi(t) = x(t) + iy(t)$, $a \leq t \leq b$, then the length $L$ of $K$ is given by:

$$L = \int\limits_a^b\sqrt{\left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2}\,dt = \int\limits_a^b\left|\frac{dz}{dt}\right|dt = \int\limits_a^b|\dot\varphi(t)|dt$$
The derivative of $f$ in point $z = a$ is:

$$f'(a) = \lim_{z\rightarrow a}\frac{f(z) - f(a)}{z - a}$$

If $f(z) = u(x, y) + iv(x, y)$ the derivative is:

$$f'(z) = \frac{\partial u}{\partial x} + i\frac{\partial v}{\partial x} = -i\frac{\partial u}{\partial y} + \frac{\partial v}{\partial y}$$

Setting both results equal yields the equations of Cauchy-Riemann:

$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}\ ,\quad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$$

These equations imply that $\nabla^2 u = \nabla^2 v = 0$. $f$ is analytical if $u$ and $v$ satisfy these equations.
6.2 Complex integration
6.2.1 Cauchys integral formula
Let $K$ be a curve described by $z = \varphi(t)$ on $a \leq t \leq b$ and let $f(z)$ be continuous on $K$. Then the integral of $f$ over $K$ is:

$$\int\limits_K f(z)dz = \int\limits_a^b f(\varphi(t))\dot\varphi(t)dt \stackrel{f\ {\rm continuous}}{=} F(b) - F(a)$$

Lemma: let $K$ be the circle with center $a$ and radius $r$ taken in a positive direction. Then holds for integer $m$:

$$\frac{1}{2\pi i}\oint\limits_K\frac{dz}{(z - a)^m} = \left\{\begin{array}{ll} 0 & \mbox{if}\ m \neq 1 \\ 1 & \mbox{if}\ m = 1\end{array}\right.$$

Theorem: if $L$ is the length of curve $K$ and if $|f(z)| \leq M$ for $z\in K$, then, if the integral exists, holds:

$$\left|\int\limits_K f(z)dz\right| \leq ML$$
Theorem: let $f$ be continuous on an area $G$ and let $p$ be a fixed point of $G$. Let $F(z) = \int_p^z f(\xi)d\xi$ for all $z\in G$ only depend on $z$ and not on the integration path. Then $F(z)$ is analytical on $G$ with $F'(z) = f(z)$.

This leads to two equivalent formulations of the main theorem of complex integration: let the function $f$ be analytical on an area $G$. Let $K$ and $K'$ be two curves with the same starting- and end points, which can be transformed into each other by continuous deformation within $G$. Let $B$ be a Jordan curve. Then holds

$$\int\limits_K f(z)dz = \int\limits_{K'} f(z)dz \ \Leftrightarrow\ \oint\limits_B f(z)dz = 0$$

By applying the main theorem on $\mathrm{e}^{iz}/z$ one can derive that

$$\int\limits_0^\infty\frac{\sin(x)}{x}dx = \frac{\pi}{2}$$
6.2.2 Residue
A point $a\in C$ is a regular point of a function $f(z)$ if $f$ is analytical in $a$. Otherwise $a$ is a singular point or pole of $f(z)$. The residue of $f$ in $a$ is defined by

$$\mathop{\rm Res}\limits_{z=a} f(z) = \frac{1}{2\pi i}\oint\limits_K f(z)dz$$

where $K$ is a Jordan curve which encloses $a$ in positive direction. The residue is 0 in regular points; in singular points it can be both 0 and $\neq 0$. Cauchy's residue proposition is: let $f$ be analytical within and on a Jordan curve $K$ except in a finite number of singular points $a_i$ within $K$. Then, if $K$ is taken in a positive direction, holds:

$$\frac{1}{2\pi i}\oint\limits_K f(z)dz = \sum_{k=1}^n\mathop{\rm Res}\limits_{z=a_k} f(z)$$

Lemma: let the function $f$ be analytical in $a$, then holds:

$$\mathop{\rm Res}\limits_{z=a}\frac{f(z)}{z - a} = f(a)$$

This leads to Cauchy's integral theorem: if $f$ is analytical on the Jordan curve $K$, which is taken in a positive direction, holds:

$$\frac{1}{2\pi i}\oint\limits_K\frac{f(z)}{z - a}dz = \left\{\begin{array}{ll} f(a) & \mbox{if}\ a\ \mbox{inside}\ K \\ 0 & \mbox{if}\ a\ \mbox{outside}\ K\end{array}\right.$$
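Cauchy's integral theorem is easy to verify numerically: the trapezoid rule on a parametrized circle converges extremely fast for periodic analytic integrands. A plain-Python sketch (helper name ours), with $f(z) = \mathrm{e}^z$ on the unit circle:

```python
import cmath

def cauchy_integral(f, a, center=0.0, radius=1.0, n=4000):
    """(1/2 pi i) * contour integral of f(z)/(z - a) over a circle, by the trapezoid rule."""
    total = 0.0
    for k in range(n):
        t = 2 * cmath.pi * k / n
        z = center + radius * cmath.exp(1j * t)
        dz = 1j * radius * cmath.exp(1j * t) * (2 * cmath.pi / n)
        total += f(z) / (z - a) * dz
    return total / (2j * cmath.pi)

f = lambda z: cmath.exp(z)
print(abs(cauchy_integral(f, 0.3) - f(0.3)) < 1e-9)  # True: a inside K gives f(a)
print(abs(cauchy_integral(f, 2.5)) < 1e-9)           # True: a outside K gives 0
```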
Theorem: let $K$ be a curve ($K$ need not be closed) and let $\varphi(\xi)$ be continuous on $K$. Then the function

$$f(z) = \int\limits_K\frac{\varphi(\xi)d\xi}{\xi - z}$$

is analytical with $n$-th derivative

$$f^{(n)}(z) = n!\int\limits_K\frac{\varphi(\xi)d\xi}{(\xi - z)^{n+1}}$$

Theorem: let $K$ be a curve and $G$ an area. Let $\varphi(\xi, z)$ be defined for $\xi\in K$, $z\in G$, with the following properties:

1. $\varphi(\xi, z)$ is limited, this means $|\varphi(\xi, z)| \leq M$ for $\xi\in K$, $z\in G$,

2. For fixed $\xi\in K$, $\varphi(\xi, z)$ is an analytical function of $z$ on $G$,

3. For fixed $z\in G$ the functions $\varphi(\xi, z)$ and $\partial\varphi(\xi, z)/\partial z$ are continuous functions of $\xi$ on $K$.

Then the function

$$f(z) = \int\limits_K\varphi(\xi, z)d\xi$$

is analytical with derivative

$$f'(z) = \int\limits_K\frac{\partial\varphi(\xi, z)}{\partial z}d\xi$$

Cauchy's inequality: let $f(z)$ be an analytical function within and on the circle $C: |z - a| = R$ and let $|f(z)| \leq M$ for $z\in C$. Then holds

$$\left|f^{(n)}(a)\right| \leq \frac{Mn!}{R^n}$$
6.3 Analytical functions defined by series

The series $\sum f_n(z)$ is called pointwise convergent on an area $G$ with sum $F(z)$ if

$$\forall_{\varepsilon>0}\forall_{z\in G}\exists_{N_0\in I\!R}\forall_{n>n_0}\left[\left|f(z) - \sum_{n=1}^N f_n(z)\right| < \varepsilon\right]$$

The series is called uniform convergent if

$$\forall_{\varepsilon>0}\exists_{N_0\in I\!R}\forall_{n>n_0}\forall_{z\in G}\left[\left|f(z) - \sum_{n=1}^N f_n(z)\right| < \varepsilon\right]$$

Uniform convergence implies pointwise convergence; the opposite is not necessarily true.

Theorem: let the power series $\sum\limits_{n=0}^\infty a_nz^n$ have a radius of convergence $R$. $R$ is the distance to the first non-essential singularity.

If $\lim\limits_{n\rightarrow\infty}\sqrt[n]{|a_n|} = L$ exists, then $R = 1/L$.

If $\lim\limits_{n\rightarrow\infty}|a_{n+1}|/|a_n| = L$ exists, then $R = 1/L$.

If these limits both don't exist one can find $R$ with the formula of Cauchy-Hadamard:

$$\frac{1}{R} = \limsup_{n\rightarrow\infty}\sqrt[n]{|a_n|}$$
6.4 Laurent series
Taylor's theorem: let $f$ be analytical in an area $G$ and let point $a\in G$ have distance $r$ to the boundary of $G$. Then $f(z)$ can be expanded into the Taylor series near $a$:

$$f(z) = \sum_{n=0}^\infty c_n(z - a)^n \quad\mbox{with}\quad c_n = \frac{f^{(n)}(a)}{n!}$$

valid for $|z - a| < r$. The radius of convergence of the Taylor series is $\geq r$. If $f$ has a pole of order $k$ in $a$ then $c_1, ..., c_{k-1} = 0$, $c_k \neq 0$.

Theorem of Laurent: let $f$ be analytical in the circular area $G: r < |z - a| < R$. Then $f(z)$ can be expanded into a Laurent series with center $a$:

$$f(z) = \sum_{n=-\infty}^\infty c_n(z - a)^n \quad\mbox{with}\quad c_n = \frac{1}{2\pi i}\oint\limits_K\frac{f(w)dw}{(w - a)^{n+1}}\ ,\quad n\in Z\!\!Z$$
valid for $r < |z - a| < R$ and $K$ an arbitrary Jordan curve in $G$ which encloses point $a$ in positive direction.

The principal part of a Laurent series is: $\sum\limits_{n=1}^\infty c_{-n}(z - a)^{-n}$. One can classify singular points with this. There are 3 cases:

1. There is no principal part. Then $a$ is a non-essential singularity. Define $f(a) = c_0$; the series is then also valid for $|z - a| < R$ and $f$ is analytical in $a$.

2. The principal part contains a finite number of terms. Then there exists a $k\in I\!N$ so that $\lim\limits_{z\rightarrow a}(z - a)^kf(z) = c_{-k} \neq 0$. Then the function $g(z) = (z - a)^kf(z)$ has a non-essential singularity in $a$. One speaks of a pole of order $k$ in $z = a$.

3. The principal part contains an infinite number of terms. Then, $a$ is an essential singular point of $f$, such as $\exp(1/z)$ for $z = 0$.

If $f$ and $g$ are analytical, $f(a) \neq 0$, $g(a) = 0$, $g'(a) \neq 0$, then $f(z)/g(z)$ has a pole of order 1 in $z = a$ with

$$\mathop{\rm Res}\limits_{z=a}\frac{f(z)}{g(z)} = \frac{f(a)}{g'(a)}$$
6.5 Jordan's theorem

Residues are often used when solving definite integrals. We define the notations $C^+_\rho = \{z\,|\,|z| = \rho, \Im(z) > 0\}$ and $C^-_\rho = \{z\,|\,|z| = \rho, \Im(z) < 0\}$ and $M^+(\rho, f) = \max\limits_{z\in C^+_\rho}|f(z)|$, $M^-(\rho, f) = \max\limits_{z\in C^-_\rho}|f(z)|$.

We assume that $f(z)$ is analytical for $\Im(z) > 0$ with a possible exception of a finite number of singular points which are not on the real axis, $\lim\limits_{\rho\rightarrow\infty}\rho M^+(\rho, f) = 0$ and that the integral exists; then

$$\int\limits_{-\infty}^\infty f(x)dx = 2\pi i\sum{\rm Res}\,f(z)\ \mbox{in}\ \Im(z) > 0$$

Replacing $\Im(z) > 0$ by $\Im(z) < 0$ and $M^+$ by $M^-$ in the conditions above gives:

$$\int\limits_{-\infty}^\infty f(x)dx = -2\pi i\sum{\rm Res}\,f(z)\ \mbox{in}\ \Im(z) < 0$$

Jordan's lemma: let $f$ be continuous for $|z| \geq R$, $\Im(z) \geq 0$ and $\lim\limits_{\rho\rightarrow\infty}M^+(\rho, f) = 0$. Then holds for $\alpha > 0$

$$\lim_{\rho\rightarrow\infty}\int\limits_{C^+_\rho} f(z)\mathrm{e}^{i\alpha z}dz = 0$$

Let $f$ be continuous for $|z| \geq R$, $\Im(z) \leq 0$ and $\lim\limits_{\rho\rightarrow\infty}M^-(\rho, f) = 0$. Then holds for $\alpha < 0$

$$\lim_{\rho\rightarrow\infty}\int\limits_{C^-_\rho} f(z)\mathrm{e}^{i\alpha z}dz = 0$$

Let $z = a$ be a simple pole of $f(z)$ and let $C_\delta$ be the half circle $|z - a| = \delta$, $0 \leq \arg(z - a) \leq \pi$, taken from $a + \delta$ to $a - \delta$. Then holds

$$\lim_{\delta\downarrow 0}\frac{1}{2\pi i}\int\limits_{C_\delta} f(z)dz = \frac{1}{2}\mathop{\rm Res}\limits_{z=a}f(z)$$
Chapter 7
Tensor calculus
7.1 Vectors and covectors
A finite dimensional vector space is denoted by $V$ or $W$. The vector space of linear transformations from $V$ to $W$ is denoted by $L(V, W)$. Consider $L(V, I\!R) := V^*$, the dual space of $V$, with basis $\hat c$. Properties of both are:

1. Vectors: $\vec x = x^i\vec c_i$ with basis vectors $\vec c_i$:

$$\vec c_i = \frac{\partial}{\partial x^i}$$

Transformation from system $i$ to $i'$ is given by:

$$\vec c_{i'} = A^i_{i'}\vec c_i\in V\ ,\quad x^{i'} = A^{i'}_ix^i$$

2. Covectors: $\hat x = x_i\hat c^{\,i}$ with basis vectors $\hat c^{\,i}$:

$$\hat c^{\,i} = dx^i$$

Transformation from system $i$ to $i'$ is given by:

$$\hat c^{\,i'} = A^{i'}_i\hat c^{\,i}\in V^*\ ,\quad x_{i'} = A^i_{i'}x_i$$

Here the Einstein convention is used:

$$a_ib^i := \sum_i a_ib^i$$

The coordinate transformation is given by:

$$A^i_{i'} = \frac{\partial x^i}{\partial x^{i'}}\ ,\quad A^{i'}_i = \frac{\partial x^{i'}}{\partial x^i}$$

From this follows that $A^i_k\cdot A^k_l = \delta^i_l$ and $A^i_{i'} = (A^{i'}_i)^{-1}$.

In differential notation the coordinate transformations are given by:

$$dx^i = \frac{\partial x^i}{\partial x^{i'}}dx^{i'} \quad\mbox{and}\quad \frac{\partial}{\partial x^{i'}} = \frac{\partial x^i}{\partial x^{i'}}\frac{\partial}{\partial x^i}$$

The general transformation rule for a tensor $T$ is:

$$T^{q_1...q_n}_{s_1...s_m} = \left|\frac{\partial x}{\partial u}\right|^\Delta\frac{\partial u^{q_1}}{\partial x^{p_1}}\cdots\frac{\partial u^{q_n}}{\partial x^{p_n}}\cdot\frac{\partial x^{r_1}}{\partial u^{s_1}}\cdots\frac{\partial x^{r_m}}{\partial u^{s_m}}\,T^{p_1...p_n}_{r_1...r_m}$$

For an absolute tensor $\Delta = 0$.
7.2 Tensor algebra
The following holds:

$$a_{ij}(x_i + y_i) \equiv a_{ij}x_i + a_{ij}y_i\ ,\quad \mbox{but:}\ a_{ij}(x_i + y_j) \not\equiv a_{ij}x_i + a_{ij}y_j$$

and

$$(a_{ij} + a_{ji})x_ix_j \equiv 2a_{ij}x_ix_j\ ,\quad \mbox{but:}\ (a_{ij} + a_{ji})x_iy_j \not\equiv 2a_{ij}x_iy_j \quad\mbox{and}\quad (a_{ij} - a_{ji})x_ix_j \equiv 0.$$

The sum and difference of two tensors is a tensor of the same rank: $A^p_q \pm B^p_q$. The outer tensor product results in a tensor with a rank equal to the sum of the ranks of both tensors: $A^{pr}_q\cdot B^m_s = C^{prm}_{qs}$. The contraction equals two indices and sums over them. Suppose we take $r = s$ for a tensor $A^{mpr}_{qs}$, this results in: $\sum\limits_r A^{mpr}_{qr} = B^{mp}_q$. The inner product of two tensors is defined by taking the outer product followed by a contraction.
7.3 Inner product
Definition: the bilinear transformation $B: V \times V^* \rightarrow I\!R$, $B(\vec x, \hat y\,) = \hat y(\vec x\,)$ is denoted by $<\vec x, \hat y>$. For this pairing holds:

$$\hat y(\vec x\,) = <\vec x, \hat y> = y_ix^i\ ,\quad <\hat c^{\,i}, \vec c_j> = \delta^i_j$$

Let $G: V \rightarrow V^*$ be a linear bijection, and define the bilinear forms

$$g(\vec x, \vec y\,) = <\vec x, G\vec y>\ ,\quad h(\hat x, \hat y\,) = <G^{-1}\hat x, \hat y>$$

Both are not degenerated. The following holds: $h(G\vec x, G\vec y\,) = <\vec x, G\vec y> = g(\vec x, \vec y\,)$. If we identify $V$ and $V^*$ with $G$, then $g$ (or $h$) gives an inner product on $V$.

The inner product $(\,,)_\Lambda$ on $\Lambda^k(V)$ is defined by:

$$(\Phi, \Psi)_\Lambda = \frac{1}{k!}(\Phi, \Psi)_{T^0_k(V)}$$

The inner product of two vectors is then given by:

$$(\vec x, \vec y\,) = x^iy^j<\vec c_i, G\vec c_j> = g_{ij}x^iy^j$$

The matrix $g_{ij}$ of $G$ is given by:

$$g_{ij}\hat c^{\,j} = G\vec c_i$$

The matrix $g^{ij}$ of $G^{-1}$ is given by:

$$g^{kl}\vec c_l = G^{-1}\hat c^{\,k}$$

For this metric tensor $g_{ij}$ holds: $g_{ij}g^{jk} = \delta^k_i$. This tensor can raise or lower indices:

$$x_j = g_{ij}x^i\ ,\quad x^i = g^{ij}x_j$$

and $du_i = \hat c_i = g_{ij}\hat c^{\,j}$.
7.4 Tensor product
Definition: let $U$ and $V$ be two finite dimensional vector spaces with dimensions $m$ and $n$. Let $U^* \times V^*$ be the cartesian product of $U$ and $V$. A function $t: U^* \times V^* \rightarrow I\!R$; $(\hat u; \hat v\,) \mapsto t(\hat u; \hat v\,) = t^{\alpha\beta}u_\alpha v_\beta\in I\!R$ is called a tensor if $t$ is linear in $\hat u$ and $\hat v$. The tensors $t$ form a vector space denoted by $U \otimes V$. The elements $T\in V \otimes V$ are called contravariant 2-tensors: $T = T^{ij}\vec c_i \otimes \vec c_j = T^{ij}\partial_i \otimes \partial_j$. The elements $T\in V^* \otimes V^*$ are called covariant 2-tensors: $T = T_{ij}\hat c^{\,i} \otimes \hat c^{\,j} = T_{ij}dx^i \otimes dx^j$. The elements $T\in V^* \otimes V$ are called mixed 2-tensors: $T = T^{.j}_i\hat c^{\,i} \otimes \vec c_j = T^{.j}_idx^i \otimes \partial_j$, and analogous for $T\in V \otimes V^*$.

The numbers given by

$$t^{\alpha\beta} = t(\hat c^{\,\alpha}, \hat c^{\,\beta})$$

with $1 \leq \alpha \leq m$ and $1 \leq \beta \leq n$ are the components of $t$.

Take $\vec x\in U$ and $\vec y\in V$. Then the function $\vec x \otimes \vec y$, defined by

$$(\vec x \otimes \vec y\,)(\hat u, \hat v\,) = <\vec x, \hat u>_U<\vec y, \hat v>_V$$

is a tensor. The components are derived from: $(\vec u \otimes \vec v\,)^{ij} = u^iv^j$. The tensor product of 2 tensors is given by:

$$\binom{2}{0}\ \mbox{form:}\quad (\vec v \otimes \vec w\,)(\hat p, \hat q\,) = v^ip_iw^kq_k = T^{ik}p_iq_k$$

$$\binom{0}{2}\ \mbox{form:}\quad (\hat p \otimes \hat q\,)(\vec v, \vec w\,) = p_iv^iq_kw^k = T_{ik}v^iw^k$$

$$\binom{1}{1}\ \mbox{form:}\quad (\vec v \otimes \hat p\,)(\hat q, \vec w\,) = v^iq_ip_kw^k = T^i_kq_iw^k$$
7.5 Symmetric and antisymmetric tensors
A tensor $t\in V \otimes V$ is called symmetric resp. antisymmetric if $\forall\vec x, \vec y\in V^*$ holds: $t(\vec x, \vec y\,) = t(\vec y, \vec x\,)$ resp. $t(\vec x, \vec y\,) = -t(\vec y, \vec x\,)$.

A tensor $t\in V^* \otimes V^*$ is called symmetric resp. antisymmetric if $\forall\vec x, \vec y\in V$ the same holds. The symmetric and antisymmetric parts of $t$ are given by:

$$\mathcal{S}t(\vec x, \vec y\,) = \frac{1}{2}\left(t(\vec x, \vec y\,) + t(\vec y, \vec x\,)\right)\ ,\quad \mathcal{A}t(\vec x, \vec y\,) = \frac{1}{2}\left(t(\vec x, \vec y\,) - t(\vec y, \vec x\,)\right)$$

Analogous in $V^* \otimes V^*$. The following identity holds:

$$\varepsilon_{ijk}\varepsilon_{klm} = \delta_{il}\delta_{jm} - \delta_{im}\delta_{jl}$$

The permutation-operators $e_{pqr}$ are defined by: $e_{123} = e_{231} = e_{312} = 1$, $e_{213} = e_{132} = e_{321} = -1$, for all other combinations $e_{pqr} = 0$. There is a connection with the $\varepsilon$ tensor:

$$\varepsilon_{pqr} = g^{1/2}e_{pqr}\ ,\quad \varepsilon^{pqr} = g^{-1/2}e^{pqr}$$
7.6 Outer product
Let $\Phi\in\Lambda^k(V)$ and $\Psi\in\Lambda^l(V)$. Then $\Phi\wedge\Psi\in\Lambda^{k+l}(V)$ is defined by:

$$\Phi\wedge\Psi = \frac{(k + l)!}{k!\,l!}\mathcal{A}(\Phi \otimes \Psi)$$

If $\Phi$ and $\Psi\in\Lambda^1(V) = V^*$ holds: $\Phi\wedge\Psi = \Phi \otimes \Psi - \Psi \otimes \Phi$

The outer product can be written as: $(\vec a\times\vec b\,)_i = \varepsilon_{ijk}a^jb^k$, $\vec a\times\vec b = G^{-1}\cdot{*}(G\vec a\wedge G\vec b\,)$.

Take $\vec a, \vec b, \vec c, \vec d\in I\!R^4$. Then $(dt\wedge dz)(\vec a, \vec b\,) = a^0b^4 - b^0a^4$ is the oriented surface of the projection on the $tz$-plane of the parallelogram spanned by $\vec a$ and $\vec b$.

Further

$$(dt\wedge dy\wedge dz)(\vec a, \vec b, \vec c\,) = \det\begin{pmatrix} a^0 & b^0 & c^0 \\ a^2 & b^2 & c^2 \\ a^4 & b^4 & c^4 \end{pmatrix}$$

is the oriented 3-dimensional volume of the projection on the $tyz$-plane of the parallelepiped spanned by $\vec a$, $\vec b$ and $\vec c$.

$(dt\wedge dx\wedge dy\wedge dz)(\vec a, \vec b, \vec c, \vec d\,) = \det(\vec a, \vec b, \vec c, \vec d\,)$ is the 4-dimensional volume of the hyperparallelepiped spanned by $\vec a$, $\vec b$, $\vec c$ and $\vec d$.
7.7 The Hodge star operator
$\Lambda^k(V)$ and $\Lambda^{n-k}(V)$ have the same dimension because $\binom{n}{k} = \binom{n}{n-k}$ for $1 \leq k \leq n$. $\dim(\Lambda^n(V)) = 1$. The choice of a basis means the choice of an oriented measure of volume, a volume $\mu$, in $V$. We can gauge $\mu$ so that for an orthonormal basis $\vec e_i$ holds: $\mu(\vec e_i) = 1$. This basis is then by definition positive oriented if $\mu = \hat e^{\,1}\wedge\hat e^{\,2}\wedge...\wedge\hat e^{\,n} = 1$.

Because both spaces have the same dimension one can ask if there exists a bijection between them. If $V$ has no extra structure this is not the case. However, such an operation does exist if there is an inner product defined on $V$ and the corresponding volume $\mu$. This is called the Hodge star operator and denoted by $*$. The following holds:

$$\forall_{w\in\Lambda^k(V)}\ \exists_{*w\in\Lambda^{n-k}(V)}\ \forall_{\theta\in\Lambda^k(V)}:\ \theta\wedge{*w} = (\theta, w)_\Lambda\,\mu$$
7.8.3 Christoffel symbols
To each curvilinear coordinate system $u^i$ we add a system of $n^3$ functions $\Gamma^i_{jk}$ of $u$, defined by

$$\frac{\partial^2\vec x}{\partial u^j\partial u^k} = \Gamma^i_{jk}\frac{\partial\vec x}{\partial u^i}$$

These are the Christoffel symbols of the second kind. Christoffel symbols are no tensors. The Christoffel symbols of the second kind are given by:

$$\left\{\begin{array}{c} i \\ jk \end{array}\right\} := \Gamma^i_{jk} = \left<\frac{\partial^2\vec x}{\partial u^k\partial u^j}, dx^i\right>$$
with $\Gamma^i_{jk} = \Gamma^i_{kj}$. Their transformation to a different coordinate system is given by:

$$\Gamma^{i'}_{j'k'} = A^{i'}_iA^j_{j'}A^k_{k'}\Gamma^i_{jk} + A^{i'}_i(\partial_{j'}A^i_{k'})$$

The first term in this expression is 0 if the primed coordinates are cartesian.

There is a relation between Christoffel symbols and the metric:

$$\Gamma^i_{jk} = \frac{1}{2}g^{ir}(\partial_jg_{kr} + \partial_kg_{rj} - \partial_rg_{jk})$$

and $\Gamma^\alpha_{\beta\alpha} = \partial_\beta\left(\ln\left(\sqrt{|g|}\right)\right)$.

Lowering an index gives the Christoffel symbols of the first kind: $\Gamma^i_{jk} = g^{il}\Gamma_{jkl}$.
7.8.4 The covariant derivative
The covariant derivative $\nabla_j$ of a vector, covector and of rank-2 tensors is given by:

$$\nabla_ja^i = \partial_ja^i + \Gamma^i_{jk}a^k\ ,\quad \nabla_ja_i = \partial_ja_i - \Gamma^k_{ij}a_k$$

$$\nabla_\gamma a^\alpha_\beta = \partial_\gamma a^\alpha_\beta + \Gamma^\alpha_{\gamma\varepsilon}a^\varepsilon_\beta - \Gamma^\varepsilon_{\gamma\beta}a^\alpha_\varepsilon\ ,\quad \nabla_\gamma a_{\alpha\beta} = \partial_\gamma a_{\alpha\beta} - \Gamma^\varepsilon_{\gamma\alpha}a_{\varepsilon\beta} - \Gamma^\varepsilon_{\gamma\beta}a_{\alpha\varepsilon}$$

Ricci's theorem:

$$\nabla_\gamma g_{\alpha\beta} = \nabla_\gamma g^{\alpha\beta} = 0$$
7.9 Differential operators
The gradient is given by:

$$\mathrm{grad}(f) = G^{-1}df = g^{ki}\frac{\partial f}{\partial x^i}\frac{\partial}{\partial x^k}$$

The divergence is given by:

$$\mathrm{div}(a^i) = \nabla_ia^i = \frac{1}{\sqrt{g}}\partial_k(\sqrt{g}\,a^k)$$

The curl is given by:

$$\mathrm{rot}(a) = G^{-1}\cdot{*}\cdot d\cdot G\vec a = \varepsilon^{pqr}\nabla_qa_p = \nabla_qa_p - \nabla_pa_q$$

The Laplacian is given by:

$$\Delta(f) = \mathrm{div}\,\mathrm{grad}(f) = {*}d{*}df = \nabla_ig^{ij}\partial_jf = g^{ij}\nabla_i\nabla_jf = \frac{1}{\sqrt{g}}\frac{\partial}{\partial x^i}\left(\sqrt{g}\,g^{ij}\frac{\partial f}{\partial x^j}\right)$$
7.10 Differential geometry
7.10.1 Space curves
We limit ourselves to $I\!R^3$ with a fixed orthonormal basis. A point is represented by the vector $\vec x = (x_1, x_2, x_3)$. A space curve is a collection of points represented by $\vec x = \vec x(t)$. The arc length of a space curve is given by:

$$s(t) = \int\limits_{t_0}^t\sqrt{\left(\frac{dx}{d\tau}\right)^2 + \left(\frac{dy}{d\tau}\right)^2 + \left(\frac{dz}{d\tau}\right)^2}\,d\tau$$

The derivative of $s$ with respect to $t$ is the length of the vector $d\vec x/dt$:

$$\left(\frac{ds}{dt}\right)^2 = \left(\frac{d\vec x}{dt}, \frac{d\vec x}{dt}\right)$$

The osculation plane in a point $P$ of a space curve is the limiting position of the plane through the tangent of the curve in point $P$ and a point $Q$ when $Q$ approaches $P$ along the space curve. The osculation plane is parallel with $\dot{\vec x}(s)$. If $\ddot{\vec x} \neq 0$ the osculation plane is given by:

$$\vec y = \vec x + \lambda\dot{\vec x} + \mu\ddot{\vec x} \quad\mbox{so}\quad \det(\vec y - \vec x, \dot{\vec x}, \ddot{\vec x}\,) = 0$$

In a bending point holds, if $\dddot{\vec x} \neq 0$:

$$\vec y = \vec x + \lambda\dot{\vec x} + \mu\dddot{\vec x}$$

The tangent has unit vector $\vec\ell = \dot{\vec x}$, the main normal unit vector $\vec n = \ddot{\vec x}/|\ddot{\vec x}\,|$ and the binormal $\vec b = \vec\ell\times\vec n$. So the main normal lies in the osculation plane, the binormal is perpendicular to it.

Let $P$ be a point and $Q$ be a nearby point of a space curve $\vec x(s)$. Let $\Delta\varphi$ be the angle between the tangents in $P$ and $Q$ and let $\Delta\psi$ be the angle between the osculation planes (binormals) in $P$ and $Q$. Then the curvature $\rho$ and the torsion $\tau$ in $P$ are defined by:

$$\rho^2 = \left(\frac{d\varphi}{ds}\right)^2 = \lim_{\Delta s\rightarrow 0}\left(\frac{\Delta\varphi}{\Delta s}\right)^2\ ,\quad \tau^2 = \left(\frac{d\psi}{ds}\right)^2$$

with $\rho > 0$. For plane curves $\rho$ is the ordinary curvature and $\tau = 0$. The following holds:

$$\rho^2 = (\dot{\vec\ell}, \dot{\vec\ell}\,) = (\ddot{\vec x}, \ddot{\vec x}\,) \quad\mbox{and}\quad \tau^2 = (\dot{\vec b}, \dot{\vec b}\,)$$

Frenet's equations express the derivatives as linear combinations of these vectors:

$$\dot{\vec\ell} = \rho\vec n\ ,\quad \dot{\vec n} = -\rho\vec\ell + \tau\vec b\ ,\quad \dot{\vec b} = -\tau\vec n$$

From this follows that $\det(\dot{\vec x}, \ddot{\vec x}, \dddot{\vec x}\,) = \rho^2\tau$.

Some curves and their properties are:

Screw line: $\tau/\rho =$ constant
Circle screw line: $\tau =$ constant, $\rho =$ constant
Plane curves: $\tau = 0$
Circles: $\rho =$ constant, $\tau = 0$
Lines: $\rho = \tau = 0$
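Curvature and torsion of a screw line (helix) can be checked numerically. A plain-Python sketch, using the standard general-parameter formulas $\kappa = |\dot{\vec x}\times\ddot{\vec x}|/|\dot{\vec x}|^3$ and $\tau = \det(\dot{\vec x}, \ddot{\vec x}, \dddot{\vec x})/|\dot{\vec x}\times\ddot{\vec x}|^2$ (these reduce to the arc-length expressions above). For $\vec x(t) = (a\cos t, a\sin t, bt)$ the known values are $\rho = a/(a^2 + b^2)$ and $\tau = b/(a^2 + b^2)$:

```python
import math

def cross(p, q):
    return (p[1]*q[2] - p[2]*q[1], p[2]*q[0] - p[0]*q[2], p[0]*q[1] - p[1]*q[0])

def dot(p, q):
    return sum(u * v for u, v in zip(p, q))

def det3(p, q, r):
    return dot(p, cross(q, r))

a, b, t = 3.0, 4.0, 0.7                            # helix parameters and sample point
x1 = (-a * math.sin(t), a * math.cos(t), b)        # first derivative
x2 = (-a * math.cos(t), -a * math.sin(t), 0.0)     # second derivative
x3 = (a * math.sin(t), -a * math.cos(t), 0.0)      # third derivative

cr = cross(x1, x2)
rho = math.sqrt(dot(cr, cr)) / dot(x1, x1) ** 1.5  # curvature: a/(a^2+b^2) = 3/25
tau = det3(x1, x2, x3) / dot(cr, cr)               # torsion:   b/(a^2+b^2) = 4/25
print(round(rho, 12), round(tau, 12))  # 0.12 0.16
```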
7.10.2 Surfaces in IR
3
A surface in IR
3
is the collection of end points of the vectors x = x(u, v), so x
h
= x
h
(u
= u
(t),
u
= v
dt
x ,
dx
d
=
dv
x
The first fundamental tensor of the surface in P is the inner product of these tangent vectors:

$$\left(\frac{d\vec{x}}{dt}, \frac{d\vec{x}}{d\tau}\right) = (\vec{c}_\alpha, \vec{c}_\beta)\frac{du^\alpha}{dt}\frac{dv^\beta}{d\tau}$$

The covariant components w.r.t. the basis $\vec{c}_\alpha = \partial_\alpha\vec{x}$ are:

$$g_{\alpha\beta} = (\vec{c}_\alpha, \vec{c}_\beta)$$

For the angle $\varphi$ between the parameter curves in P: $u = t$, $v =$ constant and $u =$ constant, $v = \tau$, holds:

$$\cos(\varphi) = \frac{g_{12}}{\sqrt{g_{11}g_{22}}}$$

For the arc length s of P along the curve $u^\alpha(t)$ holds:

$$ds^2 = g_{\alpha\beta}\,du^\alpha du^\beta$$
With $\vec{N}$ the unit normal of the surface, the second derivatives $\vec{x}_{\alpha\beta} = \partial_\alpha\partial_\beta\vec{x}$ decompose along the tangent basis and the normal:

$$\vec{x}_{\alpha\beta} = \Gamma^\gamma_{\alpha\beta}\vec{c}_\gamma + h_{\alpha\beta}\vec{N}$$

This leads to:

$$\Gamma^\gamma_{\alpha\beta} = (\vec{c}^{\,\gamma}, \vec{x}_{\alpha\beta})\ ,\quad h_{\alpha\beta} = (\vec{N}, \vec{x}_{\alpha\beta}) = \frac{1}{\sqrt{\det|g|}}\det(\vec{c}_1, \vec{c}_2, \vec{x}_{\alpha\beta})$$
7.10.5 Geodetic curvature

A curve on the surface $\vec{x}(u^\alpha)$ is given by $u^\alpha = u^\alpha(s)$, with s the arc length. The length of the projection of $\ddot{\vec{x}}$ on the tangent plane is called the geodetic curvature of the curve in P. This remains the same if the surface is curved and the line element remains the same. The projection of $\ddot{\vec{x}}$ on $\vec{N}$ has length

$$p = h_{\alpha\beta}\frac{du^\alpha}{ds}\frac{du^\beta}{ds}$$

and is called the normal curvature of the curve in P. The theorem of Meusnier states that different curves on the surface with the same tangent vector in P have the same normal curvature.

A geodetic line of a surface is a curve on the surface for which in each point the main normal of the curve is the same as the normal on the surface. So for a geodetic line the geodetic curvature is 0 in each point, so

$$\frac{d^2u^\gamma}{ds^2} + \Gamma^\gamma_{\alpha\beta}\frac{du^\alpha}{ds}\frac{du^\beta}{ds} = 0$$
50 Mathematics Formulary by ir. J.C.A. Wevers
The covariant derivative $\nabla/dt$ in P of a vector field of a surface along a curve is the projection on the tangent plane in P of the normal derivative in P.

For two vector fields $\vec{v}(t)$ and $\vec{w}(t)$ along the same curve of the surface follows Leibniz' rule:

$$\frac{d(\vec{v}, \vec{w})}{dt} = \left(\vec{v}, \frac{\nabla\vec{w}}{dt}\right) + \left(\vec{w}, \frac{\nabla\vec{v}}{dt}\right)$$

Along a curve holds:

$$\frac{\nabla}{dt}(v^\alpha\vec{c}_\alpha) = \left(\frac{dv^\gamma}{dt} + \Gamma^\gamma_{\alpha\beta}\frac{du^\beta}{dt}v^\alpha\right)\vec{c}_\gamma$$
The Riemann tensor is a $\binom{1}{3}$ tensor with $n^2(n^2-1)/12$ independent components not identically equal to 0. This tensor is a measure for the curvature of the considered space. If it is 0, the space is a flat manifold. It has the following symmetry properties:

$$R_{\alpha\beta\mu\nu} = R_{\mu\nu\alpha\beta} = -R_{\beta\alpha\mu\nu} = -R_{\alpha\beta\nu\mu}$$

and the cyclic identity $R_{\alpha\beta\mu\nu} + R_{\alpha\mu\nu\beta} + R_{\alpha\nu\beta\mu} = 0$.

In a space and coordinate system where the Christoffel symbols are 0 this becomes:

$$R_{\alpha\beta\mu\nu} = \frac{1}{2}(\partial_\alpha\partial_\mu g_{\beta\nu} + \partial_\beta\partial_\nu g_{\alpha\mu} - \partial_\alpha\partial_\nu g_{\beta\mu} - \partial_\beta\partial_\mu g_{\alpha\nu})$$

The Bianchi identities are: $\nabla_\lambda R_{\alpha\beta\mu\nu} + \nabla_\nu R_{\alpha\beta\lambda\mu} + \nabla_\mu R_{\alpha\beta\nu\lambda} = 0$.

The Ricci tensor is obtained by contracting the Riemann tensor: $R_{\alpha\beta} = R^\mu_{\ \alpha\mu\beta}$. The Einstein tensor $G_{\alpha\beta} = R_{\alpha\beta} - \frac{1}{2}g_{\alpha\beta}R$ obeys $\nabla^\beta G_{\alpha\beta} = 0$. The Ricci scalar is $R = g^{\alpha\beta}R_{\alpha\beta}$.
Chapter 8
Numerical mathematics
8.1 Errors
There will be an error in the solution if a problem has a number of parameters which are not exactly known. The dependency between errors in input data and errors in the solution can be expressed in the condition number c. If the problem is given by $x = \varphi(a)$ the first-order approximation for an error $\delta a$ in a is:

$$\frac{\delta x}{x} = \frac{a\varphi'(a)}{\varphi(a)}\cdot\frac{\delta a}{a}$$

The number $c(a) = |a\varphi'(a)/\varphi(a)|$ is called the condition number of the problem.

8.2 Floating point representations

Machine numbers are stored with base $\beta$, a t-digit mantissa and an exponent between q and p. The largest machine number is

$$a_{\max} = (1 - \beta^{-t})\beta^{p}$$

and the smallest positive machine number is

$$a_{\min} = \beta^{q-1}$$

The distance between two successive machine numbers in the interval $[\beta^{p-1}, \beta^{p}]$ is $\beta^{p-t}$.
If x is a real number and the closest machine number is rd(x), then holds:

$$\mathrm{rd}(x) = x(1 + \varepsilon)\ \mbox{with}\ |\varepsilon| \leq \frac{1}{2}\beta^{1-t}$$
$$x = \mathrm{rd}(x)(1 + \varepsilon')\ \mbox{with}\ |\varepsilon'| \leq \frac{1}{2}\beta^{1-t}$$

The number $\eta := \frac{1}{2}\beta^{1-t}$ is called the machine accuracy, and

$$\varepsilon, \varepsilon' \leq \eta\ ,\quad \left|\frac{x - \mathrm{rd}(x)}{x}\right| \leq \eta$$

An often used 32 bits float format is: 1 bit for the sign s, 8 for the exponent and 23 for the mantissa. The base here is 2.
8.3 Systems of equations

We want to solve the matrix equation $A\vec{x} = \vec{b}$. If the matrix is upper triangular, $U\vec{x} = \vec{c}$, the solution follows by back substitution: $x_n = c_n/U_{nn}$ and, for $k = n-1, \ldots, 1$:

$$x_k = \Big(c_k - \sum_{j=k+1}^{n} U_{kj}x_j\Big)\Big/U_{kk}$$
In code:
for (k = n; k > 0; k--)
{
   S = c[k];
   for (j = k + 1; j <= n; j++)
   {
      S -= U[k][j] * x[j];
   }
   x[k] = S / U[k][k];
}
This algorithm requires $\frac{1}{2}n(n+1)$ floating point calculations.
8.3.2 Gauss elimination
Consider a general set $A\vec{x} = \vec{b}$. This can be reduced by Gauss elimination to a triangular form by multiplying the first equation with $A_{i1}/A_{11}$ and then subtracting it from all others; now the first column contains all 0's except $A_{11}$. Then the 2nd equation is subtracted in such a way from the others that all elements in the second column are 0 except $A_{22}$, etc. In code:
for (k = 1; k <= n; k++)
{
   for (j = k; j <= n; j++) U[k][j] = A[k][j];
   c[k] = b[k];
   for (i = k + 1; i <= n; i++)
   {
      L = A[i][k] / U[k][k];
      for (j = k + 1; j <= n; j++)
      {
         A[i][j] -= L * U[k][j];
      }
      b[i] -= L * c[k];
   }
}
This algorithm requires $\frac{1}{3}n(n^2-1)$ floating point multiplications and divisions for operations on the coefficient matrix and $\frac{1}{2}n(n-1)$ multiplications for operations on the right-hand terms, whereafter the triangular set has to be solved with $\frac{1}{2}n(n+1)$ operations.
8.3.3 Pivot strategy
Some equations have to be interchanged if the corner elements $A_{11}, A^{(1)}_{22}, \ldots$ are not all $\neq 0$, to allow Gauss elimination to work. In the following, $A^{(n)}$ is the element after the nth iteration. One method is: if $A^{(k-1)}_{kk} = 0$, then search for an element $A^{(k-1)}_{pk}$ with $p > k$ that is $\neq 0$ and interchange the pth and the kth equation. This strategy fails only if the set is singular and has no solution at all.
8.4 Roots of functions
8.4.1 Successive substitution
We want to solve the equation F(x) = 0, so we want to find the root $\alpha$ with $F(\alpha) = 0$.
Many solutions are essentially the following:

1. Rewrite the equation in the form x = f(x) so that a solution of this equation is also a solution of F(x) = 0. Further, f(x) may not vary too much with respect to x near $\alpha$.

2. Assume an initial estimation $x_0$ for $\alpha$ and obtain the series $x_n$ with $x_n = f(x_{n-1})$, in the hope that $\lim\limits_{n\to\infty} x_n = \alpha$.
Example: choose

$$f(x) = x - \frac{F(x)}{G(x)}$$

for a suitable function G(x); then we can expect that the row $x_n$ with a chosen $x_0$ and

$$x_n = x_{n-1} - \frac{F(x_{n-1})}{G(x_{n-1})}$$

converges to $\alpha$.
8.4.2 Local convergence
Let $\alpha$ be a solution of x = f(x) and let $x_n = f(x_{n-1})$ for a given $x_0$. Let $f'(\alpha) = A$ with $|A| < 1$. Then there exists a $\delta > 0$ so that for each $x_0$ with $|x_0 - \alpha| \leq \delta$ holds:

1. $\lim\limits_{n\to\infty} x_n = \alpha$,

2. If for a particular k holds: $x_k = \alpha$, then for each $n \geq k$ holds that $x_n = \alpha$. If $x_n \neq \alpha$ for all n then holds

$$\lim_{n\to\infty}\frac{x_n - \alpha}{x_{n-1} - \alpha} = A\ ,\quad \lim_{n\to\infty}\frac{x_n - x_{n-1}}{x_{n-1} - x_{n-2}} = A\ ,\quad \lim_{n\to\infty}\frac{\alpha - x_n}{x_n - x_{n-1}} = \frac{A}{1 - A}$$

The quantity A is called the asymptotic convergence factor, the quantity $B = -{}^{10}\!\log|A|$ is called the asymptotic convergence speed.
8.4.3 Aitken extrapolation
We define

$$A_n = \frac{x_n - x_{n-1}}{x_{n-1} - x_{n-2}}$$

$A_n$ converges to $f'(\alpha)$. Then the row

$$\alpha_n = x_n + \frac{A_n}{1 - A_n}(x_n - x_{n-1})$$

will converge to $\alpha$.
8.4.4 Newton iteration
There are more ways to transform F(x) = 0 into x = f(x). One essential condition for them all is that in a neighbourhood of a root $\alpha$ holds that $|f'(x)| < 1$: the smaller $|f'(x)|$, the faster the convergence. If we choose $f(x) = x - F(x)/F'(x)$, this becomes Newton's method. The iteration formula then becomes:

$$x_n = x_{n-1} - \frac{F(x_{n-1})}{F'(x_{n-1})}$$
Some remarks:

This same result can also be derived with Taylor series.
Local convergence is often difficult to determine.
If $x_n$ is far from $\alpha$ the convergence can sometimes be very slow.
The assumption $F'(\alpha) \neq 0$ is essential for the quadratic convergence.

8.5 Polynomial interpolation

A base for polynomials of order n on the points $x_0, \ldots, x_n$ is given by Lagrange's interpolation polynomials:

$$L_j(x) = \prod_{\substack{l=0\\ l\neq j}}^{n}\frac{x - x_l}{x_j - x_l}$$
The following holds:

1. Each $L_j(x)$ has order n,

2. $L_j(x_i) = \delta_{ij}$ for $i, j = 0, 1, \ldots, n$,

3. Each polynomial p(x) can be written uniquely as

$$p(x) = \sum_{j=0}^{n} c_j L_j(x)\ \mbox{with}\ c_j = p(x_j)$$
This is not a suitable method to calculate the value of a polynomial in a given point x = a. To do this, the Horner algorithm is more usable: the value $s = \sum_k c_k a^k$ can be calculated as follows (the evaluation point a is passed here as a parameter):

float GetPolyValue(float c[], int n, float a)
{
   int i; float s = c[n];
   for (i = n - 1; i >= 0; i--)
   {
      s = s * a + c[i];
   }
   return s;
}

After it is finished s has value p(a).
8.6 Definite integrals

Almost all numerical methods are based on a formula of the type:

$$\int_a^b f(x)dx = \sum_{i=0}^{n} c_i f(x_i) + R(f)$$

with n, $c_i$ and $x_i$ independent of f(x) and R(f) the error, which has the form $R(f) = Cf^{(q)}(\xi)$ for all common methods. Here, $\xi \in (a, b)$ and $q \geq n + 1$. Often the points $x_i$ are chosen equidistant. Some common formulas are:
The trapezoid rule: n = 1, $x_0 = a$, $x_1 = b$, $h = b - a$:

$$\int_a^b f(x)dx = \frac{h}{2}[f(x_0) + f(x_1)] - \frac{h^3}{12}f''(\xi)$$
Simpson's rule: n = 2, $x_0 = a$, $x_1 = \frac{1}{2}(a+b)$, $x_2 = b$, $h = \frac{1}{2}(b-a)$:

$$\int_a^b f(x)dx = \frac{h}{3}[f(x_0) + 4f(x_1) + f(x_2)] - \frac{h^5}{90}f^{(4)}(\xi)$$
The midpoint rule: n = 0, $x_0 = \frac{1}{2}(a+b)$, $h = b - a$:

$$\int_a^b f(x)dx = hf(x_0) + \frac{h^3}{24}f''(\xi)$$
The interval will usually be split up and the integration formulas be applied to the partial intervals if f varies much within the interval.

A Gaussian integration formula is obtained when one wants to choose both the coefficients $c_j$ and the points $x_j$ in an integral formula so that the integral formula gives exact results for polynomials of an order as high as possible. Two examples are:
1. Gaussian formula with 2 points:

$$\int_{-h}^{h} f(x)dx = h\left[f\left(\frac{-h}{\sqrt{3}}\right) + f\left(\frac{h}{\sqrt{3}}\right)\right] + \frac{h^5}{135}f^{(4)}(\xi)$$
2. Gaussian formula with 3 points:

$$\int_{-h}^{h} f(x)dx = \frac{h}{9}\left[5f\left(-h\sqrt{\frac{3}{5}}\right) + 8f(0) + 5f\left(h\sqrt{\frac{3}{5}}\right)\right] + \frac{h^7}{15750}f^{(6)}(\xi)$$
8.7 Derivatives
There are several formulas for the numerical calculation of $f'(x)$:

Forward differentiation:

$$f'(x) = \frac{f(x+h) - f(x)}{h} - \frac{1}{2}hf''(\xi)$$

Backward differentiation:

$$f'(x) = \frac{f(x) - f(x-h)}{h} + \frac{1}{2}hf''(\xi)$$

Central differentiation:

$$f'(x) = \frac{f(x+h) - f(x-h)}{2h} - \frac{h^2}{6}f'''(\xi)$$

The approximation is better if more function values are used:

$$f'(x) = \frac{-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)}{12h} + \frac{h^4}{30}f^{(5)}(\xi)$$

There are also formulas for higher derivatives:

$$f''(x) = \frac{-f(x+2h) + 16f(x+h) - 30f(x) + 16f(x-h) - f(x-2h)}{12h^2} + \frac{h^4}{90}f^{(6)}(\xi)$$
8.8 Differential equations
We start with the first order DE $y'(x) = f(x, y)$ with initial condition $y(x_0) = y_0$, and look for approximations $z_n$ of the exact values $y(x_n)$ at the points $x_n = x_0 + nh$. Two classical formulas are:

Midpoint rule (two steps, explicit):

$$z_{n+1} = z_{n-1} + 2hf(x_n, z_n) + \frac{h^3}{3}y'''(\xi)$$

Trapezoid rule (single step, implicit):

$$z_{n+1} = z_n + \frac{1}{2}h\big(f(x_n, z_n) + f(x_{n+1}, z_{n+1})\big) - \frac{h^3}{12}y'''(\xi)$$
Runge-Kutta methods are an important class of single-step methods. They work so well because the solution y(x) can be written as:

$$y_{n+1} = y_n + hf(\xi_n, y(\xi_n))\ \mbox{with}\ \xi_n \in (x_n, x_{n+1})$$

Because $\xi_n$ is unknown some measurements are done on the increment function k = hf(x, y) in well chosen points near the solution. Then one takes for $z_{n+1} - z_n$ a weighted average of the measured values. One of the possible 3rd order Runge-Kutta methods is given by:

$$k_1 = hf(x_n, z_n)$$
$$k_2 = hf(x_n + \tfrac{1}{2}h, z_n + \tfrac{1}{2}k_1)$$
$$k_3 = hf(x_n + \tfrac{3}{4}h, z_n + \tfrac{3}{4}k_2)$$
$$z_{n+1} = z_n + \tfrac{1}{9}(2k_1 + 3k_2 + 4k_3)$$
and the classical 4th order method is:

$$k_1 = hf(x_n, z_n)$$
$$k_2 = hf(x_n + \tfrac{1}{2}h, z_n + \tfrac{1}{2}k_1)$$
$$k_3 = hf(x_n + \tfrac{1}{2}h, z_n + \tfrac{1}{2}k_2)$$
$$k_4 = hf(x_n + h, z_n + k_3)$$
$$z_{n+1} = z_n + \tfrac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4)$$
Often the accuracy is increased by adjusting the stepsize for each step with the estimated error. Step doubling is
most often used for 4th order RungeKutta.
8.9 The fast Fourier transform
The Fourier transform of a function can be approximated when some discrete points are known. Suppose we have N successive samples $h_k = h(t_k)$ with $t_k = k\Delta$, $k = 0, 1, 2, \ldots, N-1$. Then the discrete Fourier transform is given by:

$$H_n = \sum_{k=0}^{N-1} h_k\,e^{2\pi ikn/N}$$

and the inverse Fourier transform by

$$h_k = \frac{1}{N}\sum_{n=0}^{N-1} H_n\,e^{-2\pi ikn/N}$$

This operation is order $N^2$. It can be faster, order $N\cdot{}^2\!\log(N)$, with the fast Fourier transform. The basic idea is that a Fourier transform of length N can be rewritten as the sum of two discrete Fourier transforms, each of length N/2. One is formed from the even-numbered points of the original N, the other from the odd-numbered points.
This can be implemented as follows. The array data[1..2*nn] contains on the odd positions the real and on the even positions the imaginary parts of the input data: data[1] is the real part and data[2] the imaginary part of $f_0$, etc. The next routine replaces the values in data by their discrete Fourier transformed values if isign = 1, and by their inverse transformed values if isign = -1. nn must be a power of 2.
#include <math.h>
#define SWAP(a,b) tempr=(a);(a)=(b);(b)=tempr
void FourierTransform(float data[], unsigned long nn, int isign)
{
unsigned long n, mmax, m, j, istep, i;
double wtemp, wr, wpr, wpi, wi, theta;
float tempr, tempi;
n = nn << 1;
j = 1;
for (i = 1; i < n; i += 2)
{
if ( j > i )
{
SWAP(data[j], data[i]);
SWAP(data[j+1], data[i+1]);
}
m = n >> 1;
while ( m >= 2 && j > m )
{
j -= m;
m >>= 1;
}
j += m;
}
mmax = 2;
while ( n > mmax ) /* Outermost loop, is executed log2(nn) times */
{
istep = mmax << 1;
theta = isign * (6.28318530717959/mmax);
wtemp = sin(0.5 * theta);
wpr = -2.0 * wtemp * wtemp;
wpi = sin(theta);
wr = 1.0;
wi = 0.0;
for (m = 1; m < mmax; m += 2)
{
for (i = m; i <= n; i += istep) /* Danielson-Lanczos equation */
{
j = i + mmax;
tempr = wr * data[j] - wi * data[j+1];
tempi = wr * data[j+1] + wi * data[j];
data[j] = data[i] - tempr;
data[j+1] = data[i+1] - tempi;
data[i] += tempr;
data[i+1] += tempi;
}
wr = (wtemp = wr) * wpr - wi * wpi + wr;
wi = wi * wpr + wtemp * wpi + wi;
}
mmax=istep;
}
}