
MSc/Diploma/Certificate Module

Stochastic Simulation
Unit 2 Brownian motion

Contents

1 Brownian motion / Wiener process
   White and coloured noise
   Brownian motion in $\mathbb{R}^d$
   Approximation of a Brownian motion
   Other processes

2 Problems

1 Brownian motion / Wiener process


Although there are many different types of stochastic processes we could consider, we look
at one type of noise (probably the most common): that arising from Brownian motion. The
Scottish botanist Robert Brown observed that a small particle, such as a seed in a liquid
or gas, moves about in a seemingly random way. This is because it is being hit continuously
by the gas or liquid molecules. Later Norbert Wiener, an American mathematician, gave
a mathematical description of the process and proved its existence. In the literature both
the terms Brownian motion and Wiener process are widely used; here we will use Brownian
motion and denote it by $W$. For references see, for example, [7, 6, 5].

Definition 1.1 A scalar standard Brownian motion or standard Wiener process
is a collection of random variables $W(t)$ that satisfy

1. $W(0) = 0$ with probability 1.

2. For $0 \le s \le t$ the increment $W(t) - W(s)$ is normally distributed
with mean zero and variance $t - s$. Equivalently, $W(t) - W(s) \sim \sqrt{t - s}\, N(0, 1)$.

3. For $0 \le s < t \le u < v$ the increments $W(t) - W(s)$ and $W(v) - W(u)$ are independent.

4. $W(t)$ is continuous in $t$.

Brownian motion is often used to model random effects that are, on a microscale, small,
random, independent and additive, and where random behaviour is seen on a larger scale.
The central limit theorem then gives that the sum of these small contributions converges to
increments that are normally distributed on the large scale. We recall that the covariance
is a measure of how a random variable $X$ changes against a different random variable $Y$
and
$$\operatorname{cov}(X, Y) := E[(X - E[X])(Y - E[Y])] = E[XY] - E[X]E[Y].$$
Thus the variance of a random variable is given by $\operatorname{Var} X = \operatorname{cov}(X, X)$. To look at the
correlation in time of random variables $X$ and $Y$ we consider the covariance function
$c(s, t) := \operatorname{cov}(X(s), Y(t))$, for $s, t \in [0, T]$. If $X = Y$ then this is called the autocovariance.
When $c(s, t) = c(s - t)$ the covariance is said to be stationary.
The following properties of $W$ are straightforward to see.

Lemma 1.2 Let $W(t)$, $t \ge 0$, be a standard Brownian motion. Then,

1. $E[W(t)] = 0$ and $E[(W(t))^2] = t$.

2. $\operatorname{cov}(W(s), W(t)) = \min(s, t)$.

3. the process $Y(t) = -W(t)$ is also a standard Brownian motion.

Proof Since $W(0) = 0$ and $W(t) = W(t) - W(0) \sim N(0, t)$ we find that
$$E[W(t)] = 0, \qquad E[(W(t))^2] = t.$$
The covariance $\operatorname{cov}(W(s), W(t)) = E[W(t)W(s)]$ since $E[W(t)] = 0$. Let us take $0 \le s \le t$. Then
$$E[W(t)W(s)] = E[(W(s) + W(t) - W(s))W(s)]$$
and so
$$E[W(t)W(s)] = E[(W(s))^2] + E[(W(t) - W(s))W(s)].$$
From Definition 1.1 the increments $W(t) - W(s)$ and $W(s) - W(0)$ are independent, so
$$E[W(t)W(s)] = s + E[W(t) - W(s)]\, E[W(s)] = s.$$
Thus
$$E[W(t)W(s)] = \min(s, t), \quad \text{for all } s, t \ge 0.$$
To show that $Y(t) = -W(t)$ is also a Brownian motion it is straightforward to verify
the conditions of Definition 1.1. $\square$
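The covariance identity in Lemma 1.2 is easy to check numerically. The following NumPy sketch (not part of the original notes) estimates $E[W(s)W(t)]$ by Monte Carlo, building $W(t)$ from $W(s)$ plus an independent increment exactly as in Definition 1.1.

```python
import numpy as np

# Monte Carlo check of Lemma 1.2: cov(W(s), W(t)) = min(s, t).
# Build W(t) from W(s) plus an independent increment (Definition 1.1).
rng = np.random.default_rng(0)
s, t, M = 0.7, 1.5, 200_000

Ws = rng.normal(0.0, np.sqrt(s), M)           # W(s) ~ N(0, s)
Wt = Ws + rng.normal(0.0, np.sqrt(t - s), M)  # W(t) = W(s) + independent N(0, t - s)

cov_est = np.mean(Ws * Wt)                    # E[W(s)W(t)], since E[W] = 0
print(cov_est)                                # close to min(s, t) = 0.7
```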
The increment $W(t) - W(s)$ of a Brownian motion over the interval $[s, t]$ has distribution
$N(0, t - s)$ and hence the distribution of $W(t) - W(s)$ is the same as that of $W(t + h) - W(s + h)$
for any $h > 0$. That is, the distribution of the increments is independent of
translations and we say that Brownian motion has stationary increments (recall the
stationary covariance). Another important property is that the Brownian motion is self-similar.
That is, by a suitable scaling we obtain another Brownian motion.

Lemma 1.3 (self-similarity) If $W(t)$ is a standard Brownian motion then, for $\lambda > 0$,
$Y(t) := \lambda^{1/2}\, W(t/\lambda)$ is also a Brownian motion.

Proof To show that $Y$ is a Brownian motion we need to check the conditions of Definition 1.1.

1. $Y(0) = \lambda^{1/2}\, W(0) = 0$ a.s.

2. Consider $s < t$ and the increment $Y(t) - Y(s) = \lambda^{1/2}(W(t/\lambda) - W(s/\lambda))$. This has
distribution $\lambda^{1/2}\, N(0, \lambda^{-1}(t - s)) = N(0, t - s)$.

3. Increments are independent. Consider $Y(t) - Y(s) = \lambda^{1/2}(W(t/\lambda) - W(s/\lambda))$ and
$Y(v) - Y(u) = \lambda^{1/2}(W(v/\lambda) - W(u/\lambda))$ for $0 \le s \le t \le u \le v \le T$. Since
$W(t/\lambda) - W(s/\lambda)$ and $W(v/\lambda) - W(u/\lambda)$ are independent, so are $Y(t) - Y(s)$ and $Y(v) - Y(u)$.

4. Continuity of $Y$ follows from the continuity of $W$.

Hence we have shown $Y(t)$ is also a Brownian motion. $\square$
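Self-similarity can also be checked numerically at a fixed time: a sketch (assuming the scaling $Y(t) = \lambda^{1/2} W(t/\lambda)$ from Lemma 1.3) samples $Y(t)$ and confirms it has the same $N(0, t)$ distribution as $W(t)$.

```python
import numpy as np

# Check of Lemma 1.3 at a fixed time: Y(t) = sqrt(lam) * W(t/lam)
# should have the same N(0, t) distribution as W(t).
rng = np.random.default_rng(1)
lam, t, M = 4.0, 2.0, 500_000

Y = np.sqrt(lam) * rng.normal(0.0, np.sqrt(t / lam), M)  # sqrt(lam) * W(t/lam)
print(Y.mean(), Y.var())                                 # near 0 and t = 2.0
```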


We now examine how rough Brownian motion is. To do this we will need to consider
convergence of random variables. We say that the random variables $X_j$ converge to $X$ in
mean square if
$$E\big[\|X_j - X\|^2\big] \to 0, \quad \text{as } j \to \infty,$$
and converge in root mean square if
$$\big(E\big[\|X_j - X\|^2\big]\big)^{1/2} \to 0, \quad \text{as } j \to \infty.$$

We need to specify a norm. For vectors we will take the standard Euclidean norm $\|\cdot\|_2$ on
$\mathbb{R}^d$ with inner product $\langle x, y \rangle$ for $x, y \in \mathbb{R}^d$, so $\|x\|_2^2 = \langle x, x \rangle$. For a matrix $X \in \mathbb{R}^{d \times m}$ we
will use the Frobenius norm
$$\|X\|_F = \Big( \sum_{i=1}^{d} \sum_{j=1}^{m} |x_{ij}|^2 \Big)^{1/2} = \sqrt{\operatorname{trace}(X^* X)},$$
where $X^*$ is the conjugate transpose.


We start by examining the quadratic variation, $\sum_{j=0}^{N-1} (W(t_{j+1}) - W(t_j))^2$, as $N \to \infty$.
The following lemma will also be useful for developing stochastic integrals later.

Lemma 1.4 (Quadratic variation) Let $0 = t_0 < t_1 < \ldots < t_N = T$ be a partition of
$[0, T]$, where it is understood that $\Delta t := \max_{0 \le j \le N-1} |t_{j+1} - t_j| \to 0$ as $N \to \infty$.
Let $\Delta W_j := W(t_{j+1}) - W(t_j)$. Then $\sum_{j=0}^{N-1} (\Delta W_j)^2 \to T$ in mean square as $N \to \infty$.
Proof Let $\Delta t_j := t_{j+1} - t_j$, $V_N := \sum_{j=0}^{N-1} (\Delta W_j)^2$ and note that
$$V_N - T = \sum_{j=0}^{N-1} \big( (\Delta W_j)^2 - \Delta t_j \big).$$
Then look at $(V_N - T)^2$ and take the expectation. The terms $(\Delta W_j)^2 - \Delta t_j$ are independent
(for different $j$) and have mean zero. Therefore, since cross terms are zero,
$$E\big[(V_N - T)^2\big] = \sum_{j=0}^{N-1} E\Big[ \big( (\Delta W_j)^2 - \Delta t_j \big)^2 \Big],$$
and
$$\big( (\Delta W_j)^2 - \Delta t_j \big)^2 = (\Delta W_j)^4 - 2\Delta t_j (\Delta W_j)^2 + \Delta t_j^2
= \left( \frac{(\Delta W_j)^4}{\Delta t_j^2} - 2\frac{(\Delta W_j)^2}{\Delta t_j} + 1 \right) \Delta t_j^2
= (X_j^2 - 1)^2\, \Delta t_j^2,$$
where $X_j := \Delta W_j / \sqrt{\Delta t_j} \sim N(0, 1)$. It can be shown that if $Z \sim N(0, 1)$ then $E[Z^4] = 3$. Thus,
for some $C > 0$ we have
$$E\big[(V_N - T)^2\big] \le C \sum_{j=0}^{N-1} \Delta t_j^2 \le C T \Delta t.$$
Hence as $N \to \infty$, $E\big[(V_N - T)^2\big] \to 0$. $\square$

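Lemma 1.4 can be illustrated numerically: a short NumPy sketch sums the squared increments of a sampled path over finer and finer uniform partitions of $[0, T]$.

```python
import numpy as np

# Quadratic variation (Lemma 1.4): the sum of squared increments over a
# partition of [0, T] approaches T as the partition is refined.
rng = np.random.default_rng(2)
T = 1.0
for N in (100, 10_000):
    dt = T / N                               # uniform partition
    dW = np.sqrt(dt) * rng.normal(size=N)    # increments W(t_{j+1}) - W(t_j)
    qv = np.sum(dW**2)
    print(N, qv)                             # tends to T = 1.0
```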
We can now show that, although the quadratic variation is finite, the total variation of
Brownian paths is unbounded.

Lemma 1.5 (Total variation) Let $0 = t_0 < t_1 < \ldots < t_N = T$ be a partition of $[0, T]$ as
in Lemma 1.4. Then $\sum_{j=0}^{N-1} |W(t_{j+1}) - W(t_j)| \to \infty$ as $N \to \infty$.

Proof The proof is by contradiction and we outline the idea here. Note that
$$\sum_{j=0}^{N-1} (W(t_{j+1}) - W(t_j))^2 \le \max_{0 \le j \le N-1} |W(t_{j+1}) - W(t_j)| \sum_{j=0}^{N-1} |W(t_{j+1}) - W(t_j)|.$$
Now Brownian motion is continuous, so as $N \to \infty$ we have $\max_{0 \le j \le N-1} |W(t_{j+1}) - W(t_j)| \to 0$.
If $\sum_{j=0}^{N-1} |W(t_{j+1}) - W(t_j)|$ is finite then $\sum_{j=0}^{N-1} (W(t_{j+1}) - W(t_j))^2 \to 0$, but
this contradicts Lemma 1.4. $\square$

The fact that $\sum_{j=0}^{N-1} |W(t_{j+1}) - W(t_j)| \to \infty$ hints that Brownian motion, even if it is
continuous, may in fact be very rough. For example, consider the numerical approximation
of $\int_0^T \frac{dW}{dt}\, dt$. Below we argue that Brownian motion is not differentiable.
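The divergence in Lemma 1.5 is easy to observe numerically: for a uniform partition the total variation grows like $\sqrt{N}$, as a sketch shows.

```python
import numpy as np

# Total variation (Lemma 1.5): the sum of |increments| over [0, T] grows
# (like sqrt(N) for a uniform partition), so it diverges as N -> infinity.
rng = np.random.default_rng(3)
T = 1.0
totals = []
for N in (100, 10_000):
    dt = T / N
    dW = np.sqrt(dt) * rng.normal(size=N)
    totals.append(np.sum(np.abs(dW)))        # E[sum] = sqrt(2 N T / pi)
    print(N, totals[-1])
```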

White and coloured noise


Despite the nice properties of Brownian motion, a well known theorem of Dvoretzky, Erdős and
Kakutani states that sample paths of Brownian motion are not differentiable anywhere. To
see why this might be the case, note that from Definition 1.1 the increment $W(t + h) - W(t)$,
for $h > 0$, is a normal random variable with mean zero and variance $h$.
Now consider the approximation to the derivative
$$Y(t) = (W(t + h) - W(t))/h.$$
This is a normal random variable with mean zero and variance $h^{-1}$. For $Y$ to approximate
a derivative of $W$ we want to look at the limit as $h \to 0$. We see that the variance of the
random variable $Y$ goes to infinity as $h \to 0$ and in the limit $Y$ is not well behaved. The
numerical derivative of two different Brownian paths is plotted in Fig. 1(c). As $\Delta t \to 0$
these numerical derivatives do not converge to a well defined function.
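The blow-up of the variance can be seen directly by sampling the difference quotient for decreasing $h$:

```python
import numpy as np

# The difference quotient Y = (W(t+h) - W(t))/h is N(0, 1/h): its sample
# variance blows up as h -> 0, so the limit is not a usable derivative.
rng = np.random.default_rng(4)
for h in (0.1, 0.001):
    Y = rng.normal(0.0, np.sqrt(h), 200_000) / h  # (W(t+h) - W(t))/h
    print(h, Y.var())                             # roughly 1/h: 10, then 1000
```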
Although we have just argued that Brownian motion $W(t)$ is not differentiable, often in
applications you will see $\frac{dW(t)}{dt}$ used. To understand this properly we will need to develop
some stochastic integration theory. Before we do this let us first relate the term $\frac{dW(t)}{dt}$ to
white noise, whose covariance (or autocorrelation function) is the Dirac delta function,
so that
$$E\left[ \frac{dW(t)}{dt} \frac{dW(s)}{ds} \right] = \delta(s - t). \qquad (1)$$
Recall that the Dirac delta satisfies the following properties:
$$\delta(s) = 0 \text{ for } s \ne 0, \qquad \int \delta(s)\phi(s)\, ds = \phi(0)$$
for any continuous function $\phi(s)$. In particular, if $\phi(s) \equiv 1$ then $\int \delta(s)\, ds = 1$.
To see why (1) might be true, let us fix $s$ and $t$ and consider
$$d(s) := E\left[ \frac{(W(t + h) - W(t))}{h} \frac{(W(s + h) - W(s))}{h} \right].$$
For small $h$ this should approximate the Dirac delta. We can simplify $d(s)$ to get
$$d(s) = \frac{1}{h^2} \big( E[W(t + h)W(s + h)] - E[W(t + h)W(s)] - E[W(t)W(s + h)] + E[W(t)W(s)] \big)$$

and use Lemma 1.2 on the Brownian motion to obtain
$$d(s) = \frac{1}{h^2} \big( \min(t + h, s + h) - \min(t + h, s) - \min(t, s + h) + \min(t, s) \big).$$
We see that $d$ is a piecewise linear function with nodes at $s = t - h, t, t + h$ and
$$d(s) = \begin{cases}
0 & \text{for } s \le t - h \\
(s - t + h)/h^2 & \text{for } t - h < s < t \\
(t + h - s)/h^2 & \text{for } t < s < t + h \\
0 & \text{for } s \ge t + h.
\end{cases}$$
Furthermore $\int d(s)\, ds = 1$. Hence we see $d$ approximates $\delta(s - t)$ for small $h$, which
suggests (1) holds.
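The piecewise linear formula for $d$ is a hat of height $1/h$ and width $2h$ centred at $t$; a quick numerical check (with arbitrarily chosen $t$ and $h$) confirms its integral is 1.

```python
import numpy as np

# The function d(s) above is a hat of height 1/h and width 2h centred at t.
# Its integral is 1, as required for an approximation of the Dirac delta.
t, h = 1.0, 0.05

def d(s):
    # equals (s - t + h)/h^2 on (t-h, t) and (t + h - s)/h^2 on (t, t+h)
    return np.where(np.abs(s - t) < h, (h - np.abs(s - t)) / h**2, 0.0)

s = np.linspace(0.0, 2.0, 200_001)
ds = s[1] - s[0]
integral = np.sum(d(s)) * ds                 # Riemann sum of d over [0, 2]
print(integral)                              # close to 1
```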
We briefly discuss two ways to see that $\frac{dW(t)}{dt}$ can be interpreted as white noise and what
this means. The first of these is based on the Fourier transform of the covariance function.
The spectral density of a stochastic process $X$ with stationary covariance function $c$ (i.e.
$c(s, t) = c(s - t)$) is defined for $\omega \in \mathbb{R}$ by
$$f(\omega) := \frac{1}{2\pi} \int e^{-i\omega t} c(t)\, dt.$$
When $c(t) = \delta(t)$ we find, for all frequencies $\omega$,
$$f(\omega) = \frac{1}{2\pi} \int e^{-i\omega t} \delta(t)\, dt = \frac{1}{2\pi}.$$
Therefore, by analogy with white light, all frequencies contribute equally.


Note that the covariance function can also be found from the inverse Fourier transform
of the spectrum. One practical application of this is to estimate the correlation function
from numerical data using the Fourier transform.
The precise statement is given in the following theorem.

Theorem 1.6 (Wiener–Khintchine) The following statements are equivalent.

1. There exists a mean square continuous stationary process $\{X(t) : t \in \mathbb{R}\}$ with stationary
covariance $c(t)$.

2. The function $c : \mathbb{R} \to \mathbb{R}$ is such that
$$c(t) = \int_{\mathbb{R}} e^{i\omega t} f(\omega)\, d\omega.$$
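The practical remark above, that spectra can be estimated from data with the Fourier transform, can be illustrated with discrete white noise: the averaged periodogram is flat, so every frequency contributes equally. (The realisation count and sample length below are arbitrary choices.)

```python
import numpy as np

# Estimating the spectrum from data: the averaged periodogram of discrete
# white noise is flat, so every frequency contributes equally ("white").
rng = np.random.default_rng(7)
M, n = 400, 1024
X = rng.normal(size=(M, n))                   # M realisations of white noise

psd = np.mean(np.abs(np.fft.rfft(X, axis=1))**2, axis=0) / n
print(psd.mean(), psd.std())                  # mean near 1, small spread
```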

A second way to illustrate why $\frac{dW(t)}{dt}$ and the covariance (1) are called white noise is
to consider $t \in [0, 1]$ and let $\{\phi_j(t)\}_{j=1}^{\infty}$ be an orthonormal basis, for example $\phi_j(t) = \sqrt{2} \sin(j\pi t)$.
Now construct $\xi(t)$ by the random series with $t \in [0, 1]$
$$\xi(t) := \sum_{j=1}^{\infty} \xi_j \phi_j(t),$$
where each $\xi_j \sim N(0, 1)$ and the $\xi_j$ are independent of each other. Then we have formed a random
$\xi(t)$ that has a homogeneous mix of the different basis functions $\phi_j$, just like white light

consists of a homogeneous mix of wavelengths. Let us look at the covariance function for
$\xi(t)$:
$$\operatorname{cov}(\xi(s), \xi(t)) = \sum_{j,k=1}^{\infty} \operatorname{cov}(\xi_j, \xi_k)\, \phi_j(s)\phi_k(t) = \sum_{j=1}^{\infty} \phi_j(s)\phi_j(t).$$
Although the Dirac delta is not strictly speaking a function, formally we can write
$$\delta(s) = \sum_{j=1}^{\infty} \langle \delta, \phi_j \rangle \phi_j(s) = \sum_{j=1}^{\infty} \phi_j(0)\phi_j(s)$$
and so the covariance function for $\xi(t)$ is $\operatorname{cov}(\xi(s), \xi(t)) = c(s, t) = \delta(s - t)$. That is, $\xi(t)$
is a stochastic process with a homogeneous mix of the basis functions that has the Dirac
delta as covariance function, and the noise is uncorrelated.
Coloured noise, as the name suggests, has a heterogeneous mix of the basis functions.
For example, consider the stochastic process
$$\eta(t) = \sum_{j=1}^{\infty} \lambda_j \xi_j \phi_j(t) \qquad (2)$$
where each $\xi_j \sim N(0, 1)$, the $\xi_j$ are independent, and $\lambda_j \ge 0$. As we vary the $\lambda_j$ with $j$, the
process $\eta(t)$ is said to be coloured noise and the random variables $\eta(t)$ and $\eta(s)$ are correlated.
Expansions of the form (2) are a useful way to examine many stochastic processes.
When the basis functions are chosen as the eigenfunctions of the covariance function, this
is termed the Karhunen–Loève expansion.
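A truncated sample of the series (2) is straightforward to compute. The sketch below uses the sine basis from above; the decay $\lambda_j = 1/j^2$ is an illustrative choice, not one fixed by the notes.

```python
import numpy as np

# A truncated sample of the coloured-noise series (2) on [0, 1], with the
# basis phi_j(t) = sqrt(2) sin(j pi t) and (assumed) weights lambda_j = 1/j^2.
rng = np.random.default_rng(5)
J = 50                                        # truncation level
t = np.linspace(0.0, 1.0, 501)

xi = rng.normal(size=J)                       # iid N(0, 1) coefficients
lam = 1.0 / np.arange(1, J + 1)**2            # decaying weights: coloured noise
phi = np.sqrt(2.0) * np.sin(np.outer(np.arange(1, J + 1), np.pi * t))

eta = (lam * xi) @ phi                        # eta(t) = sum_j lam_j xi_j phi_j(t)
print(eta.shape)                              # one sample path on the grid
```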
Theorem 1.7 (Karhunen–Loève) Consider a stochastic process $\{X(t) : t \in [0, T]\}$
with mean $\mu(t)$ and suppose that $E\big[\int_0^T (X(s))^2\, ds\big] < \infty$. Then,
$$X(t) = \mu(t) + \sum_{j=1}^{\infty} \sqrt{\lambda_j}\, \phi_j(t)\, \xi_j, \qquad \xi_j := \frac{1}{\sqrt{\lambda_j}} \int_0^T (X(s) - \mu(s))\phi_j(s)\, ds \qquad (3)$$
and $\{\lambda_j, \phi_j\}$ denote the eigenvalues and eigenfunctions of the covariance operator, so that
$\int_0^T c(s, t)\phi_j(s)\, ds = \lambda_j \phi_j(t)$. The sum in (3) converges in a mean-square sense. The random
variables $\xi_j$ have mean zero, unit variance and are pairwise uncorrelated. Furthermore, if
the process is Gaussian, then the $\xi_j \sim N(0, 1)$ are independent and identically distributed.
Another way to generate coloured noise is to solve a stochastic differential equation (SDE), see Sec. ??, whose
solution has a particular frequency distribution. The OU process of Example ?? is often
used to generate coloured noise. For more information on coloured noise see, for example,
[3]. The Wiener–Khintchine theorem is covered in a wide range of books, including [2, 1, 6].
The Karhunen–Loève expansion is widely used to construct random fields as well as in data
analysis and signal processing, see also [4, 6].

Brownian motion in $\mathbb{R}^d$

We can extend the definition of a Brownian motion on $\mathbb{R}$ to $\mathbb{R}^d$. We can easily extend
Definition 1.1 to processes $\mathbf{W}(t) = (W_1(t), W_2(t), \ldots, W_d(t))^T \in \mathbb{R}^d$ where the increments
$\mathbf{W}(t) - \mathbf{W}(s) \sim \sqrt{t - s}\, N(0, I)$. Here $N(0, I)$ is the $d$-dimensional normal distribution with
covariance matrix $I$, the $\mathbb{R}^{d \times d}$ identity matrix. It is straightforward to show the following
lemma.

Lemma 1.8 The process $\mathbf{W}(t) = (W_1(t), W_2(t), \ldots, W_d(t))^T \in \mathbb{R}^d$ is a Brownian motion
(or a Wiener process) if and only if each of the components $W_i(t)$ is a Brownian motion.

Approximation of a Brownian motion

We can use Definition 1.1 directly to construct a numerical approximation $W_n$ to a Brownian
motion $W(t)$ at times $t_n$, where $0 = t_0 < t_1 < \cdots < t_N$. From 1) of Definition 1.1 we
have that $W_0 = W(0) = 0$. We construct the approximation by noting that $W(t_{n+1}) =
W(t_n) + (W(t_{n+1}) - W(t_n))$ and from 2) of Definition 1.1 we know how the increments are
distributed. Letting $W_n \approx W(t_n)$ we get that
$$W_{n+1} = W_n + dW_n, \qquad n = 0, 1, \ldots, N - 1,$$
where $dW_n \sim \sqrt{\Delta t}\, N(0, 1)$. This is described in Algorithm 1 and two typical results of this
process are shown in Fig. 1(a). In the figure we use a piecewise linear approximation to $W(t)$
for $t \in [t_n, t_{n+1}]$. To approximate a $d$-dimensional $\mathbf{W}$, by Lemma 1.8 we can simply use
Algorithm 1 for each component; the result of this is shown in Fig. 1(b).
Figure 1: (a) Two discretised Brownian motions $W_1(t)$, $W_2(t)$ constructed over $[0, 5]$ with
$N = 5000$ so $\Delta t = 0.001$. (b) Brownian motion $W_1(t)$ plotted against $W_2(t)$. The paths
start at $(0, 0)$ and the final point at $t = 5$ is marked with a $\star$. (c) Numerical derivatives of
$W_1(t)$ and $W_2(t)$ from (a).

Algorithm 1 Brownian motion in one dimension. We assume $t_0 = 0$.

input: vector $t = [t_0, t_1, \ldots, t_N]$
output: vector $W$ such that component $W_n \approx W(t_n)$.
1: $W_0 = 0$
2: for $n = 1 : N$ do
3:   $\Delta t = t_n - t_{n-1}$
4:   find $z \sim N(0, 1)$
5:   $dW_n = \sqrt{\Delta t}\, z$
6:   $W_n = W_{n-1} + dW_n$
7: end for
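A NumPy sketch of Algorithm 1 (one possible translation, vectorised with a cumulative sum in place of the explicit loop):

```python
import numpy as np

# A direct NumPy translation of Algorithm 1, vectorised with a cumulative
# sum instead of the explicit for loop.
def brownian_path(t, rng):
    """Return W with W[n] approximating W(t[n]); assumes t[0] = 0."""
    t = np.asarray(t, dtype=float)
    dt = np.diff(t)                               # t_n - t_{n-1}
    dW = np.sqrt(dt) * rng.normal(size=dt.size)   # increments sqrt(dt) * N(0, 1)
    return np.concatenate(([0.0], np.cumsum(dW))) # W_0 = 0, W_n = W_{n-1} + dW_n

# Usage matching Figure 1(a): [0, 5] with N = 5000 steps, dt = 0.001.
t = np.linspace(0.0, 5.0, 5001)
W = brownian_path(t, np.random.default_rng(6))
print(W[0], W.shape)
```

By Lemma 1.8, a $d$-dimensional path can be built by calling `brownian_path` once per component.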

Other processes

One extension of the Wiener process is the fractional Brownian motion.
This is a family of mean zero Gaussian processes which have the properties that they
are self-similar (see Lemma 1.3) and have stationary increments.

References
[1] Nils Berglund and Barbara Gentz. Noise-induced phenomena in slow-fast dynamical
systems. Probability and its Applications (New York). Springer-Verlag London Ltd.,
London, 2006. A sample-paths approach.

[2] Crispin Gardiner. Stochastic Methods: A handbook for the natural and social sciences.
Springer Series in Synergetics. Springer, Berlin, fourth edition, 2009.

[3] P. Hänggi and P. Jung. Colored noise in dynamical systems. Advances in Chem. Phys.,
pages 239–326, 1995.

[4] Peter E. Kloeden and Eckhard Platen. Numerical Solution of Stochastic Differential
Equations, volume 23 of Applications of Mathematics. Springer, 1992.

[5] G. J. Lord. Stochastic differential equations. In Michael Grinfeld, editor, Mathematical


Tools for Physicists, chapter 3. Wiley, 2nd edition, 2015. ISBN: 978-3-527-41188-7.

[6] G. J. Lord, C. E. Powell, and T. Shardlow. Introduction to Computational Stochastic


Partial Differential Equations. CUP, 2014. ISBN: 978-0-521-72852-2.

[7] Bernt Øksendal. Stochastic Differential Equations. Universitext. Springer, Berlin, sixth
edition, 2003.

2 Problems

1. Let $W(t)$ be a standard Brownian motion. Show that the Brownian bridge $Y(t) = W(t) - tW(1)$ has $E[Y(t)] = 0$ and $E[Y(s)Y(t)] = s(1 - t)$ for $s < t$.

2. Let $W(t)$ be a Brownian motion. Show that, for $c > 0$, $Y(t) = \frac{1}{c} W(c^2 t)$ is also a Brownian motion.

3. Use the properties of a Wiener process to consider
$$\lim_{t \to \infty} E\left[\frac{W(t)}{t}\right], \qquad \lim_{t \to \infty} \operatorname{Var}\left(\frac{W(t)}{t}\right).$$

4. Let $Y(t) = (W(t + h) - W(t))/h$. Show that $\operatorname{Var}(Y(t))$ is unbounded as $h \to 0$.
