Stochastic Simulation
Unit 2 Brownian motion
Contents
1 Brownian motion / Wiener process
  White and coloured noise
  Brownian motion in R^d
  Approximation of a Brownian motion
  Other processes
2 Problems
2. For 0 ≤ s ≤ t the random variable given by the increment W(t) − W(s) is normally distributed with mean zero and variance t − s. Equivalently, W(t) − W(s) ∼ √(t − s) N(0, 1).
3. For 0 ≤ s < t ≤ u < v the increments W(t) − W(s) and W(v) − W(u) are independent.
4. W(t) is continuous in t.
Brownian motion is often used to model random effects that are, on a microscale, small, random, independent and additive, where a random behaviour is seen on a larger scale. The central limit theorem then gives that the sum of these small contributions converges to increments that are normally distributed on the large scale. We recall that the covariance is a measure of how a random variable X changes against a different random variable Y:

cov(X, Y) := E[(X − E[X])(Y − E[Y])] = E[XY] − E[X]E[Y].

Thus the variance of a random variable is given by Var X = cov(X, X). To look at the correlation in time of random variables X and Y we consider the covariance function c(s, t) := cov(X(s), Y(t)) for s, t ∈ [0, T]. If X = Y then this is called the autocovariance. When c(s, t) = c(s − t) the covariance is said to be stationary.
The following properties of W are straightforward to see.
Proof Since W(0) = 0 and W(t) = W(t) − W(0) ∼ N(0, t) we find that

E[W(t)] = 0 and E[(W(t))²] = t.

The covariance cov(W(s), W(t)) = E[W(t)W(s)] since E[W(t)] = 0. Let us take 0 ≤ s ≤ t. Then

E[W(t)W(s)] = E[(W(s) + W(t) − W(s))W(s)]

and so

E[W(t)W(s)] = E[(W(s))²] + E[(W(t) − W(s))W(s)].

From Definition 1.1 the increments W(t) − W(s) and W(s) = W(s) − W(0) are independent, so E[(W(t) − W(s))W(s)] = E[W(t) − W(s)] E[W(s)] = 0. Thus

E[W(t)W(s)] = E[(W(s))²] = s = min(s, t), for all s, t ≥ 0.
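As a quick sanity check, the covariance formula can be verified by Monte Carlo. The sketch below (Python with NumPy; the times s, t and the sample count are illustrative choices, not from the text) builds W(s) and W(t) from independent increments and estimates E[W(s)W(t)]:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 200_000          # number of sample paths (illustrative)
s, t = 0.7, 1.3      # two fixed times, s < t (illustrative)

# W(s) ~ N(0, s); W(t) = W(s) + increment, with W(t) - W(s) ~ N(0, t - s)
Ws = rng.normal(0.0, np.sqrt(s), M)
Wt = Ws + rng.normal(0.0, np.sqrt(t - s), M)

cov_est = np.mean(Ws * Wt)   # estimates E[W(s)W(t)] since the means are zero
print(cov_est)               # close to min(s, t) = 0.7
```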
To show that Y(t) = −W(t) is also a Brownian motion it is straightforward to verify the conditions of Definition 1.1.
The increment W(t) − W(s) of a Brownian motion over the interval [s, t] has distribution N(0, t − s) and hence the distribution of W(t) − W(s) is the same as that of W(t + h) − W(s + h) for any h > 0. That is, the distribution of the increments is independent of translations and we say that Brownian motion has stationary increments (recall the stationary covariance). Another important property is that Brownian motion is self-similar. That is, by a suitable scaling we obtain another Brownian motion.
Lemma 1.3 (self-similarity) If W(t) is a standard Brownian motion then, for λ > 0, Y(t) := λ^{1/2} W(t/λ) is also a Brownian motion.
Proof To show that Y is a Brownian motion we need to check the conditions of Definition 1.1.
2. Consider s < t and the increment Y(t) − Y(s) = λ^{1/2}(W(t/λ) − W(s/λ)). This has distribution λ^{1/2} N(0, λ^{−1}(t − s)) = N(0, t − s).
3. Increments are independent. Consider Y(t) − Y(s) = λ^{1/2}(W(t/λ) − W(s/λ)) and Y(v) − Y(u) = λ^{1/2}(W(v/λ) − W(u/λ)) for 0 ≤ s ≤ t ≤ u ≤ v ≤ T. Since t/λ ≤ u/λ, the increments W(t/λ) − W(s/λ) and W(v/λ) − W(u/λ) are independent, and hence so are Y(t) − Y(s) and Y(v) − Y(u).
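The self-similarity lemma can also be checked empirically. This sketch (Python/NumPy; λ, t and the sample size are illustrative) samples Y(t) = λ^{1/2} W(t/λ) directly and confirms its marginal mean and variance match N(0, t):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, t, M = 4.0, 2.0, 200_000    # illustrative parameter choices

# W(t/lam) ~ N(0, t/lam); scaling by lam^{1/2} should give variance t
W_scaled = np.sqrt(lam) * rng.normal(0.0, np.sqrt(t / lam), M)

print(W_scaled.mean(), W_scaled.var())   # approx 0 and t = 2.0
```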
V_N^T := Σ_{j=0}^{N−1} ((ΔW_j)² − Δt_j).

Then look at (V_N^T)² and take the expectation. The terms (ΔW_j)² − Δt_j are independent (for different j) and have mean zero. Therefore, since cross terms are zero,

E[(V_N^T)²] = Σ_{j=0}^{N−1} E[((ΔW_j)² − Δt_j)²],

and

((ΔW_j)² − Δt_j)² = (ΔW_j)⁴ − 2Δt_j (ΔW_j)² + Δt_j²
                  = ((ΔW_j)⁴/Δt_j² − 2(ΔW_j)²/Δt_j + 1) Δt_j²
                  = (X_j² − 1)² Δt_j²,

where X_j := ΔW_j/√Δt_j ∼ N(0, 1). It can be shown that if Z ∼ N(0, 1) then E[Z⁴] = 3, so E[(X_j² − 1)²] = E[X_j⁴] − 2E[X_j²] + 1 = 2. Thus, for some C > 0 we have

E[(V_N^T)²] = Σ_{j=0}^{N−1} E[((ΔW_j)² − Δt_j)²] ≤ C Σ_{j=0}^{N−1} Δt_j² ≤ C T Δt.

Hence, as N → ∞, E[(V_N^T)²] → 0.
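The mean-square convergence of the quadratic variation to T can be observed numerically. A minimal sketch (Python/NumPy; T and the grid sizes are illustrative), summing squared increments of a sampled path on a uniform grid:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 1.0

for N in (100, 10_000):
    dt = T / N
    dW = rng.normal(0.0, np.sqrt(dt), N)   # increments of one Brownian path
    qv = np.sum(dW**2)                     # quadratic variation estimate
    print(N, qv)                           # approaches T as N grows
```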
We can now show that, although the quadratic variation is finite, the total variation of
Brownian paths is unbounded.
Proof The proof is by contradiction and we outline the idea here. Note that
Σ_{j=0}^{N−1} (W(t_{j+1}) − W(t_j))² ≤ max_{0≤j≤N−1} |W(t_{j+1}) − W(t_j)| · Σ_{j=0}^{N−1} |W(t_{j+1}) − W(t_j)|.
Y(t) = (W(t + h) − W(t))/h.

This is a normal random variable with mean zero and variance 1/h. For Y to approximate a derivative of W we want to look at the limit as h → 0. We see that the variance of the random variable Y goes to infinity as h → 0, and in the limit Y is not well behaved. The numerical derivative of two different Brownian paths is plotted in Fig. 1(c). As Δt → 0 these numerical derivatives do not converge to a well defined function.
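The blow-up of the difference quotient can be seen directly. This sketch (Python/NumPy; the values of h and the sample size are illustrative) samples Y = (W(t + h) − W(t))/h and estimates its variance:

```python
import numpy as np

rng = np.random.default_rng(3)
M = 100_000

for h in (0.1, 0.01, 0.001):
    # The increment W(t+h) - W(t) ~ N(0, h), so Y has variance h/h^2 = 1/h
    Y = rng.normal(0.0, np.sqrt(h), M) / h
    print(h, Y.var())        # variance grows like 1/h
```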
Although we have just argued that Brownian motion W(t) is not differentiable, in applications you will often see dW(t)/dt used. To understand this properly we will need to develop some stochastic integration theory. Before we do this, let us first relate the term dW(t)/dt to white noise, whose covariance (or autocorrelation function) is the Dirac delta function, so that

E[ (dW(t)/dt) · (dW(s)/ds) ] = δ(s − t).   (1)
Recall that the Dirac delta satisfies the following properties:

δ(s) = 0 for s ≠ 0,   ∫_R φ(s) δ(s) ds = φ(0)

for any continuous function φ(s). In particular, if φ(s) ≡ 1 then ∫_R δ(s) ds = 1.
To see why (1) might be true, let us fix s and t and consider

d(s) := E[ ((W(t + h) − W(t))/h) · ((W(s + h) − W(s))/h) ].

For small h this should approximate the Dirac delta. We can simplify d(s) to get

d(s) = (1/h²) ( E[W(t + h)W(s + h)] − E[W(t + h)W(s)] − E[W(t)W(s + h)] + E[W(t)W(s)] ).
Furthermore, ∫_R d(s) ds = 1. Hence we see that d approximates δ(s − t) for small h, which suggests that (1) holds.
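Since E[W(a)W(b)] = min(a, b), the function d(s) can be evaluated in closed form. The sketch below (Python/NumPy; t, h and the grid are illustrative) confirms that d is a triangular spike of height 1/h around s = t with unit integral:

```python
import numpy as np

t, h = 1.0, 0.05    # illustrative values

def d(s):
    # d(s) = (E[W(t+h)W(s+h)] - E[W(t+h)W(s)] - E[W(t)W(s+h)] + E[W(t)W(s)]) / h^2,
    # using E[W(a)W(b)] = min(a, b)
    return (min(t + h, s + h) - min(t + h, s)
            - min(t, s + h) + min(t, s)) / h**2

s_grid = np.linspace(0.5, 1.5, 10_001)
vals = np.array([d(s) for s in s_grid])
ds = s_grid[1] - s_grid[0]

print(d(t))                 # peak value 1/h = 20
print(np.sum(vals) * ds)    # integral approx 1
```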
We briefly discuss two ways to see that dW(t)/dt can be interpreted as white noise and what this means. The first of these is based on the Fourier transform of the covariance function. The spectral density of a stochastic process X with stationary covariance function c (i.e. c(s, t) = c(s − t)) is defined for ω ∈ R by

f(ω) := (1/2π) ∫_R e^{−iωt} c(t) dt.
1. There exists a mean-square continuous stationary process {X(t) : t ∈ R} with stationary covariance c(t).
A second way to illustrate why dW(t)/dt and the covariance (1) are called white noise is to consider t ∈ [0, 1] and let {φ_j(t)}_{j=1}^∞ be an orthonormal basis. Form

ξ(t) = Σ_{j=1}^∞ ξ_j φ_j(t),

where the ξ_j ∼ N(0, 1) are independent of each other. Then we have formed a random process ξ(t) that has a homogeneous mix of the different basis functions φ_j, just like white light
consists of a homogeneous mix of wavelengths. Let us look at the covariance function for ξ(t):

cov(ξ(s), ξ(t)) = Σ_{j,k=1}^∞ cov(ξ_j, ξ_k) φ_j(s) φ_k(t) = Σ_{j=1}^∞ φ_j(s) φ_j(t).
Although the Dirac delta is not strictly speaking a function, formally we can write

δ(s) = Σ_{j=1}^∞ ⟨δ, φ_j⟩ φ_j(s) = Σ_{j=1}^∞ φ_j(0) φ_j(s)

and so the covariance function for ξ(t) is cov(ξ(s), ξ(t)) = c(s, t) = δ(s − t). That is, ξ(t) is a stochastic process with a homogeneous mix of the basis functions that has the Dirac delta as its covariance function, and the noise is uncorrelated.
Coloured noise, as the name suggests, has a heterogeneous mix of the basis functions. For example, consider the stochastic process

η(t) = Σ_{j=1}^∞ α_j ξ_j φ_j(t),   (2)

where the ξ_j ∼ N(0, 1) are independent and α_j ≥ 0. As the α_j vary with j, the process η(t) is said to be coloured noise and the random variables η(t) and η(s) are correlated. Expansions of the form (2) are a useful way to examine many stochastic processes. When the basis functions are chosen as the eigenfunctions of the covariance function, this is termed the Karhunen–Loève expansion.
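A truncated version of expansion (2) is easy to simulate. In the sketch below (Python/NumPy) the sine basis φ_j(t) = √2 sin(jπt) on [0, 1] and the weights α_j = 1/j are purely illustrative assumptions, not choices made in the text:

```python
import numpy as np

rng = np.random.default_rng(4)
J = 200                                  # truncation level (illustrative)
t = np.linspace(0.0, 1.0, 501)

# Assumed orthonormal basis on [0, 1] (illustrative choice): phi_j(t) = sqrt(2) sin(j pi t)
phi = np.sqrt(2.0) * np.sin(np.outer(np.arange(1, J + 1), np.pi * t))

xi = rng.normal(size=J)                  # independent N(0, 1) coefficients
alpha = 1.0 / np.arange(1, J + 1)        # decaying weights -> coloured noise

white_ish = phi.T @ xi                   # alpha_j = 1 for all j: homogeneous mix
coloured = phi.T @ (alpha * xi)          # heterogeneous mix of basis functions
print(coloured.shape)                    # one sample path on the grid
```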
Theorem 1.7 (Karhunen–Loève) Consider a stochastic process {X(t) : t ∈ [0, T]} with mean function μ(t) and suppose that E[∫_0^T (X(s))² ds] < ∞. Then

X(t) = μ(t) + Σ_{j=1}^∞ √ν_j φ_j(t) ξ_j,   ξ_j := (1/√ν_j) ∫_0^T (X(s) − μ(s)) φ_j(s) ds,   (3)

where {ν_j, φ_j} denote the eigenvalues and eigenfunctions of the covariance operator, so that ∫_0^T c(s, t) φ_j(s) ds = ν_j φ_j(t). The sum in (3) converges in a mean-square sense. The random variables ξ_j have mean zero, unit variance and are pairwise uncorrelated. Furthermore, if the process is Gaussian, then the ξ_j ∼ N(0, 1) are independent and identically distributed.
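For Brownian motion on [0, T] the covariance is c(s, t) = min(s, t), whose eigenpairs are known in closed form: ν_j = (T/((j − ½)π))² and φ_j(t) = √(2/T) sin((j − ½)πt/T). The sketch below (Python/NumPy; truncation level and sample count are illustrative) samples the truncated sum (3) and checks the variance at one time point:

```python
import numpy as np

rng = np.random.default_rng(5)
T, J, M = 1.0, 500, 20_000
t = 0.5                                    # check the variance at one time

j = np.arange(1, J + 1)
nu = (T / ((j - 0.5) * np.pi))**2          # eigenvalues of c(s,t) = min(s,t)
phi_t = np.sqrt(2.0 / T) * np.sin((j - 0.5) * np.pi * t / T)

xi = rng.normal(size=(M, J))               # i.i.d. N(0, 1) coefficients
W_t = xi @ (np.sqrt(nu) * phi_t)           # truncated sum (3) evaluated at t

print(W_t.var())                           # approx min(t, t) = 0.5
```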
Another way to generate coloured noise is to solve a stochastic differential equation (SDE), see Sec. ??, whose solution has a particular frequency distribution. The OU process of Example ?? is often used to generate coloured noise. For more information on coloured noise see, for example, [3]. The Wiener–Khintchine theorem is covered in a wide range of books, including [2, 1, 6]. The Karhunen–Loève expansion is widely used to construct random fields as well as in data analysis and signal processing, see also [4, 6].
Brownian motion in R^d
We can extend the definition of a Brownian motion on R to R^d. We can easily extend Definition 1.1 to processes W(t) = (W_1(t), W_2(t), …, W_d(t))^T ∈ R^d whose increments satisfy W(t) − W(s) ∼ √(t − s) N(0, I). Here N(0, I) is the d-dimensional normal distribution with covariance matrix I, the R^{d×d} identity matrix. It is straightforward to show the following lemma.
Lemma 1.8 The process W(t) = (W_1(t), W_2(t), …, W_d(t))^T ∈ R^d is a Brownian motion (or a Wiener process) if and only if each of the components W_i(t) is a Brownian motion.
W_{n+1} = W_n + dW_n, n = 1, 2, …, N,

where dW_n ∼ √Δt N(0, 1). This is described in Algorithm 1, and two typical results of this process are shown in Fig. 1(a). In the figure we use a piecewise linear approximation to W(t) for t ∈ [t_n, t_{n+1}]. To approximate a d-dimensional W, by Lemma 1.8 we can simply use Algorithm 1 for each component; the result of this is shown in Fig. 1(b).
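A minimal sketch of this recursion in Python/NumPy (the function name and parameter defaults are illustrative; the text's Algorithm 1 may differ in details):

```python
import numpy as np

def brownian_paths(T=5.0, N=5000, d=2, seed=0):
    """Discretised Brownian motion on [0, T]: W_{n+1} = W_n + dW_n with
    dW_n ~ sqrt(dt) N(0, 1), one independent copy per component (Lemma 1.8)."""
    rng = np.random.default_rng(seed)
    dt = T / N
    dW = np.sqrt(dt) * rng.normal(size=(N, d))   # all increments at once
    W = np.zeros((N + 1, d))                     # W(0) = 0
    W[1:] = np.cumsum(dW, axis=0)                # running sum gives the path
    return np.linspace(0.0, T, N + 1), W

t, W = brownian_paths()
print(W[0], W.shape)    # path starts at the origin; N + 1 points, d components
```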
Figure 1: (a) Two discretised Brownian motions W_1(t), W_2(t) constructed over [0, 5] with N = 5000, so Δt = 0.001. (b) Brownian motion W_1(t) plotted against W_2(t). The paths start at (0, 0) and the final point at t = 5 is marked with a ?. (c) Numerical derivatives of W_1(t) and W_2(t) from (a).
Other processes
One extension of the Wiener process is fractional Brownian motion. This is a family of mean-zero Gaussian processes which have the properties that they are self-similar (see Lemma 1.3) and have stationary increments.
References
[1] Nils Berglund and Barbara Gentz. Noise-Induced Phenomena in Slow-Fast Dynamical Systems. Probability and its Applications (New York). Springer-Verlag London Ltd., London, 2006. A sample-paths approach.
[2] Crispin Gardiner. Stochastic Methods: A Handbook for the Natural and Social Sciences. Springer Series in Synergetics. Springer, Berlin, fourth edition, 2009.
[3] P. Hänggi and P. Jung. Colored noise in dynamical systems. Advances in Chem. Phys., pages 239–326, 1995.
[4] Peter E. Kloeden and Eckhard Platen. Numerical Solution of Stochastic Differential Equations, volume 23 of Applications of Mathematics. Springer, 1992.
[7] Bernt Øksendal. Stochastic Differential Equations. Universitext. Springer, Berlin, sixth edition, 2003.
2 Problems
1. Let W(t) be a standard Brownian motion. Show that the Brownian bridge Y(t) = W(t) − tW(1) has E[Y(t)] = 0 and E[Y(s)Y(t)] = s(1 − t) for s < t.
2. Let W(t) be a Brownian motion. Show that Y(t) = (1/2) W(4t) is also a Brownian motion.