Lorenzo Pareschi
Department of Mathematics
University of Ferrara, Italy
Outline
• Random sampling
– Monovariate distributions
– Multivariate distributions
• Numerical results
• Basic references
Random Sampling
Before entering the description of the methods, we give a brief review of random
sampling, which is at the basis of several Monte Carlo methods.
We assume that our computer is able to generate a uniformly distributed random
number between 0 and 1.
Monovariate distribution
Let x ∈ IR be a random variable with density px(x), i.e. px(x) ≥ 0, ∫_Ω px(x) dx = 1,
and let ξ be a uniformly distributed random variable (number) in [0, 1].
Then the relation between x and ξ can be found by equating the infinitesimal prob-
abilities
px(x) dx = 1 · dξ.
By integration one has
Px(x) = ∫_{−∞}^{x} px(y) dy = ξ,
where Px (x) is the distribution function corresponding to the random variable x, i.e.
the primitive of px (x). Then the random variable x can be sampled by sampling a
uniformly distributed variable ξ, and then solving
x = Px−1 (ξ).
Example: Let px(x) = exp(−x), x ≥ 0. Then
Px(x) = ∫_0^x exp(−y) dy = 1 − exp(−x) = ξ,
and therefore
x = − ln(1 − ξ)
or x = − ln ξ, because 1 − ξ is also uniformly distributed in [0, 1].
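The inversion above can be sketched in a few lines of Python; the function name and sample count are illustrative, not part of the original method description.

```python
import math
import random

random.seed(0)

def sample_exponential():
    # x = P^{-1}(xi) with P(x) = 1 - exp(-x); equivalently -log(xi)
    xi = random.random()
    return -math.log(1.0 - xi)

samples = [sample_exponential() for _ in range(200_000)]
mean = sum(samples) / len(samples)   # should be close to 1, the mean of Exp(1)
```

Since 1 − ξ is uniform in (0, 1], the logarithm is always well defined.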
Remark: We will consider density functions defined on the whole real line. If the
support of the density is strictly contained in IR, as in the previous example, the
density function can be defined by using the Heaviside Θ-function. In the example
above one could define px (x) = exp(− x)Θ(x), x ∈ IR.
It may be expensive to compute the inverse function, since in general a nonlinear equation has to be solved. A different technique is the so-called acceptance-rejection method.
Let x be a random variable with density px(x), x ∈ IR. We look for a function w(x) ≥ px(x) ∀x ∈ IR whose primitive W(x) is easily invertible. Let A = ∫_{−∞}^{∞} w(x) dx, and denote by ξ1 and ξ2 uniformly distributed [0, 1] random numbers.
Algorithm [acceptance-rejection]:
1. Sample from w(x)/A by solving the equation W (x) = Aξ1 ;
2. if w(x)ξ2 < px (x) then accept the sample, else reject the sample and repeat step 1.
[Figure: the envelope w(x) lies above the density p(x); a sample η drawn from w(x) is marked on the x axis.]
In the figure, η has been sampled from w(x). It will be accepted with probability equal to the ratio p(η)/w(η).
Remark: The efficiency of the scheme depends on how easy it is to invert the function W(x) and on how frequently we accept the sample. The fraction of accepted samples equals the ratio of the areas below the two curves px(x) and w(x), and is therefore equal to 1/A.
Discrete sampling
Let us suppose that k ∈ {1, . . . , M } is an integer random number, with probabilities
{wk}. In order to sample k with probability wk we can proceed as follows. Divide the interval [0, 1] into M subintervals, the i-th of length wi, extract a uniform [0, 1] random number ξ, and detect the interval k to which ξ belongs in the following way.
Algorithm [discrete sampling 1]:
1. Compute Wk = Σ_{i=1}^{k} wi, k = 1, . . . , M, W0 = 0;
2. find the index k such that Wk−1 ≤ ξ < Wk.
For an arbitrary set of probabilities {wi }, once Wi have been computed, step 2 can
be performed with a binary search, in O(ln M ) operations.
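A sketch of this table-plus-binary-search strategy in Python; the weights chosen here are arbitrary illustrative values.

```python
import bisect
import itertools
import random

random.seed(2)

weights = [0.1, 0.2, 0.3, 0.4]            # probabilities w_k, k = 1..M
W = list(itertools.accumulate(weights))   # W_k = w_1 + ... + w_k, computed once

def sample_index():
    # find k with W_{k-1} <= xi < W_k by binary search: O(log M) per sample
    xi = random.random()
    return bisect.bisect_right(W, xi)     # 0-based index k-1

n = 100_000
counts = [0] * len(weights)
for _ in range(n):
    counts[sample_index()] += 1
freqs = [c / n for c in counts]
```

The cumulative table W is built once and reused, which is exactly the regime in which this method is efficient.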
This approach is efficient if the numbers Wk can be computed once, and then used
several times, and if M is not too large. If the number M is very large and the numbers wk may change, a more efficient acceptance-rejection technique can be used, provided an estimate w̄ such that w̄ ≥ wi, i = 1, . . . , M, is known.
Algorithm [discrete sampling 2]:
1. Select an integer random number uniformly in {1, . . . , M}, as
k = [[M ξ1]] + 1,
where [[x]] denotes the integer part of x;
2. if w̄ ξ2 < wk then accept the sample, else reject it and repeat step 1.
Clearly the procedure can be generalized to the case in which the estimate depends on k, i.e. w̄k ≥ wk, k = 1, . . . , M.
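A minimal sketch of the rejection-based discrete sampler, assuming a known global bound w̄; the weight values are illustrative.

```python
import random

random.seed(3)

weights = [0.5, 1.0, 2.0, 0.25]   # unnormalized weights w_k (they may change often)
w_bar = 2.0                        # known bound: w_bar >= w_k for every k
M = len(weights)

def sample_index():
    while True:
        xi1, xi2 = random.random(), random.random()
        k = int(M * xi1)                 # uniform integer in {0, ..., M-1}
        if w_bar * xi2 < weights[k]:     # accept k with probability w_k / w_bar
            return k

n = 200_000
counts = [0] * M
for _ in range(n):
    counts[sample_index()] += 1
total_w = sum(weights)
probs = [wk / total_w for wk in weights]
freqs = [c / n for c in counts]
```

No cumulative table is needed, so the weights may change between samples at no extra cost.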
The above technique is of crucial relevance in the Monte Carlo simulation of the
scattering in the Boltzmann equation.
Sampling without repetition
Sometimes it is useful to extract n numbers, n ≤ N , from the sequence 1, . . . , N
without repetition. This sampling is often used in several Monte Carlo simulations.
A simple and efficient method to perform the sampling is the following.
Algorithm:
1. set indi = i, i = 1, . . . , N ;
2. M = N ;
3. for i = 1 to n
• set j = [[M ξ]] + 1, seqi = indj ,
• indj = indM , M = M − 1
end for
At the end the vector seq will contain n distinct integers randomly sampled from
the first N natural numbers. Of course if n = N , the vector seq contains a random
permutation of the sequence 1, . . . , N .
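The algorithm above (a partial Fisher-Yates shuffle) can be sketched as follows; the function name is illustrative.

```python
import random

random.seed(4)

def sample_without_repetition(n, N):
    """Draw n distinct integers from 1..N following the swap-with-last algorithm."""
    ind = list(range(1, N + 1))
    M = N
    seq = []
    for _ in range(n):
        j = int(M * random.random())   # uniform index among the M active entries
        seq.append(ind[j])
        ind[j] = ind[M - 1]            # overwrite slot j with the last active entry
        M -= 1
    return seq

seq = sample_without_repetition(5, 20)
perm = sample_without_repetition(10, 10)   # n = N gives a random permutation
```

Each draw costs O(1), so the whole sampling is O(n) after the O(N) initialization.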
Multivariate distributions
Suppose we want to sample an n-dimensional random variable x = (x1, . . . , xn) whose probability density is px(x). The simplest case occurs when the density factorizes into the product of n monovariate densities, which can then be sampled independently.
If this is not the case, then one may first look for a transformation T : x → η = T(x)
such that in the new variables the probability density is factorized, i.e.
px (x1 , . . . xn )dx1dx2 . . . dxn = pη1 (η1)pη2 (η2 ) · · · pηn (ηn )dη1dη2 . . . dηn ,
then sample the variables η1 , . . . ηn , and finally compute x by inverting the map T ,
i.e. x = T −1 (η).
Remark: In some cases such a transformation can be readily found; in other cases it is more complicated. There is a general technique to find a mapping T : x → ξ, where ξ = (ξ1, . . . , ξn) denotes a uniformly distributed random variable in [0, 1]^n. Of course such a transformation is not unique, since we only impose that its Jacobian determinant J = |∂ξ/∂x| equals the given density px(x1, . . . , xn).
An explicit transformation is constructed as follows:

T1(x1) = ∫_{−∞}^{x1} dη ∫_{IR^{n−1}} dx2 · · · dxn px(η, x2, . . . , xn),

T2(x1, x2) = [∫_{−∞}^{x2} dη ∫_{IR^{n−2}} dx3 · · · dxn px(x1, η, x3, . . . , xn)] / [∫_{−∞}^{∞} dη ∫_{IR^{n−2}} dx3 · · · dxn px(x1, η, x3, . . . , xn)],

T3(x1, x2, x3) = [∫_{−∞}^{x3} dη ∫_{IR^{n−3}} dx4 · · · dxn px(x1, x2, η, x4, . . . , xn)] / [∫_{−∞}^{∞} dη ∫_{IR^{n−3}} dx4 · · · dxn px(x1, x2, η, x4, . . . , xn)],

. . .

Tn(x1, . . . , xn) = [∫_{−∞}^{xn} dη px(x1, . . . , xn−1, η)] / [∫_{−∞}^{∞} dη px(x1, . . . , xn−1, η)].
Remark: This transformation is used to map an arbitrary measure with density px (x)
into the Lebesgue measure on the unit cube [0, 1]n . This can be used, for example,
to approximate a continuous measure by a discrete measure, once a good discrete
approximation of the Lebesgue measure is known.
Let us assume that we have a “good” approximation of the uniform measure obtained
by N suitably chosen points η(i) ∈ [0, 1]n , i = 1, . . . , N,
(1/N) Σ_{i=1}^{N} δ(η − η^{(i)}) ≈ χ_{[0,1]^n}(η),
then
(1/N) Σ_{i=1}^{N} δ(x − x^{(i)}) ≈ px(x),
where x^{(i)} = T^{−1}(η^{(i)}), i = 1, . . . , N. This technique can be effectively used to obtain good quadrature formulae for integrals in high-dimensional spaces, and is the basis of so-called quasi-Monte Carlo integration.
If the inverse transform map is too expensive, the acceptance-rejection technique can be used, exactly as in the monovariate case. More precisely, let x be a random variable with density px(x), x ∈ IR^n. We look for a function w(x) ≥ px(x) ∀x ∈ IR^n which is “easy to sample”. Let A = ∫_{IR^n} w(x) dx. Then the algorithm works as follows.
Algorithm:
1. sample x from the density w(x)/A;
2. if w(x)ξ < px(x) then accept the sample, else reject it and repeat step 1.
Example: As an example we show how to sample from a Gaussian distribution. Let
x be a normally distributed random variable with zero mean and unit variance,
p(x) = (1/√(2π)) exp(−x²/2).
In order to sample from p one could invert the distribution function P(x) = (1 + erf(x/√2))/2, where
erf(x) = (2/√π) ∫_0^x exp(−t²) dt
denotes the error function. However, the inversion of the error function may be expensive.
An alternative procedure is given by the so-called Box-Muller method described below. Let us consider a two-dimensional normally distributed random variable. Then
p(x, y) = (1/(2π)) exp(−(x² + y²)/2) = p(x)p(y).
If we use polar coordinates
x = ρ cos θ, y = ρ sin θ,
then we have
(1/(2π)) exp(−(x² + y²)/2) dx dy = (1/(2π)) exp(−ρ²/2) ρ dρ dθ.
Therefore in polar coordinates the density function is factorized as pρ dρ · pθ dθ, with
pρ(ρ) = ρ exp(−ρ²/2), ρ ≥ 0,
pθ(θ) = 1/(2π), 0 ≤ θ < 2π.
The random variables ρ and θ are readily sampled by inverting the corresponding distribution functions, i.e.
ρ = √(−2 ln ξ1), θ = 2πξ2,
and, from these x and y are easily obtained.
Remark: At the end of the procedure we have two samples from a Normal(0, 1) distribution (i.e. a Gaussian distribution with zero mean and unit variance). Of course, if the random variables have means µx, µy and standard deviations σx, σy, then x and y are scaled accordingly as
x = µx + σx ρ cos θ, y = µy + σy ρ sin θ.
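The Box-Muller construction above can be sketched directly; the function name and sample count are illustrative.

```python
import math
import random

random.seed(5)

def box_muller(mu=0.0, sigma=1.0):
    """Return two independent N(mu, sigma^2) samples from two uniforms."""
    xi1, xi2 = random.random(), random.random()
    rho = math.sqrt(-2.0 * math.log(1.0 - xi1))   # invert the rho distribution
    theta = 2.0 * math.pi * xi2                   # theta uniform in [0, 2*pi)
    return (mu + sigma * rho * math.cos(theta),
            mu + sigma * rho * math.sin(theta))

samples = []
for _ in range(50_000):
    samples.extend(box_muller())
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean ** 2
```

Using ln(1 − ξ1) rather than ln ξ1 keeps the argument of the logarithm strictly positive when ξ1 ∈ [0, 1).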
Example: Here we show how to sample a point uniformly from the surface of a sphere. A point on a unit sphere is identified by the two polar angles (ϕ, θ):
x = sin θ cos ϕ,
y = sin θ sin ϕ,
z = cos θ.
Because the distribution is uniform, the probability of finding a point in a region is
proportional to the solid angle
dP = dω/(4π) = (sin θ dθ)/2 · dϕ/(2π),
and therefore
dϕ/(2π) = dξ1,
(sin θ dθ)/2 = dξ2.
Integrating the above expressions we have
ϕ = 2πξ1 ,
θ = arccos(1 − 2ξ2 ).
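The two formulas above translate into a short sampler; the function name is illustrative.

```python
import math
import random

random.seed(6)

def sample_on_sphere():
    # phi uniform in [0, 2*pi); cos(theta) uniform in [-1, 1]
    xi1, xi2 = random.random(), random.random()
    phi = 2.0 * math.pi * xi1
    theta = math.acos(1.0 - 2.0 * xi2)
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

points = [sample_on_sphere() for _ in range(50_000)]
norms = [x * x + y * y + z * z for (x, y, z) in points]
mean_z = sum(p[2] for p in points) / len(points)
```

Note that sampling θ uniformly instead would wrongly cluster points near the poles; the arccos transform is what makes cos θ, and hence the surface measure, uniform.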
Direct simulation Monte Carlo (DSMC) methods
In this section we describe the classical DSMC methods due to Bird and Nanbu in the case of the spatially homogeneous Boltzmann equation.
We assume that the kinetic equations we are considering can be written in the form
∂f/∂t = (1/ε) [P(f, f) − µf],
where µ > 0 is a constant and P(f, f) is a nonnegative bilinear operator. In particular, for both the Kac equation and the Boltzmann equation for Maxwellian molecules we have P(f, f) = Q⁺(f, f).
Let us consider a time interval [0, tmax ], and let us discretize it in ntot intervals of
size ∆t. Let us denote by f n (v) an approximation of f (v, n∆t). The forward Euler
scheme writes
f^{n+1} = (1 − µ∆t/ε) f^n + (µ∆t/ε) · P(f^n, f^n)/µ.
Clearly if f^n is a probability density, both P(f^n, f^n)/µ and f^{n+1} are probability densities. Thus the equation has the following probabilistic interpretation: a particle with velocity vi will not collide with probability (1 − µ∆t/ε), and it will collide with probability µ∆t/ε, according to the collision law described by P(f^n, f^n)(v).
Maxwellian case
Let us consider first kinetic equations for which P (f, f ) = Q+ (f, f ), i.e. the collision
kernel does not depend on the relative velocity of the particles.
Remark: In its first version, Nanbu’s algorithm was not conservative, i.e. energy
was conserved only in the mean, but not at each collision. A conservative version
of the algorithm was introduced by Babovsky. Instead of selecting single particles,
independent particle pairs are selected, and conservation is maintained at each col-
lision.
The expected number of particles that collide in a small time step ∆t is Nµ∆t/ε, and the expected number of collision pairs is Nµ∆t/(2ε). The algorithm is the following.
Algorithm[Nanbu-Babovsky for Maxwell molecules]:
1. compute the initial velocity of the particles, {vi0 , i = 1, . . . , N },
by sampling them from the initial density f0 (v)
2. for n = 1 to ntot
given {vin , i = 1, . . . , N }
◦ set Nc = Iround(µN∆t/(2ε))
◦ select Nc pairs (i, j) uniformly among all possible pairs,
and for those
- perform the collision between i and j, and compute
vi0 and vj0 according to the collision law
- set vin+1 = vi0 , vjn+1 = vj0
◦ set vin+1 = vin for all the particles that have not been selected
end for
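The steps above can be sketched for the Kac model, whose collision law is the rotation (vi′, vj′) = (vi cos θ − vj sin θ, vi sin θ + vj cos θ) with θ uniform in [0, 2π); energy is then conserved at every collision. The function name and parameter values are illustrative choices, not part of the original algorithm statement.

```python
import math
import random

random.seed(7)

def nb_step(v, dt, mu=1.0, eps=1.0):
    """One Nanbu-Babovsky time step for the Kac model (illustrative sketch)."""
    N = len(v)
    Nc = round(mu * N * dt / (2.0 * eps))     # expected number of collision pairs
    idx = random.sample(range(N), 2 * Nc)     # Nc disjoint pairs, selected uniformly
    for a in range(Nc):
        i, j = idx[2 * a], idx[2 * a + 1]
        th = 2.0 * math.pi * random.random()  # Kac collision: rotation in (v_i, v_j)
        vi, vj = v[i], v[j]
        v[i] = vi * math.cos(th) - vj * math.sin(th)
        v[j] = vi * math.sin(th) + vj * math.cos(th)
    # particles not selected keep their velocity

N = 10_000
v = [random.uniform(-1.0, 1.0) for _ in range(N)]
e0 = sum(x * x for x in v)          # kinetic energy, conserved at each rotation
for _ in range(10):
    nb_step(v, 0.5)
e1 = sum(x * x for x in v)
```

Selecting disjoint pairs (each particle collides at most once per step) is exactly what makes the scheme conservative at each collision, as in Babovsky's version.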
The post-collisional direction ω is sampled uniformly:
2D: ω = (cos θ, sin θ), θ = 2πξ;
3D: ω = (cos φ sin θ, sin φ sin θ, cos θ), θ = arccos(2ξ1 − 1), φ = 2πξ2.
Evaluation of Σ
The upper bound Σ should be chosen as small as possible, to avoid inefficient rejection, and it should be fast to compute. It may be too expensive to compute Σ as
Σ = Bmax ≡ max_{i,j} B(|vi − vj|).
Maxwellian case
Let us consider first the Maxwellian case. The number of collisions in a short time
step ∆t is given by
Nc = Nµ∆t/(2ε).
This means that the average time between collisions ∆tc is given by
∆tc = ∆t/Nc = 2ε/(µN).
Now it is possible to set a time counter, tc , and to perform the calculation as follows
Algorithm[Bird for Maxwell molecules]:
1. compute the initial velocity of the particles, {vi0 , i = 1, . . . , N },
by sampling them from the initial density f0 (v)
2. set time counter tc = 0
3. set ∆tc = 2ε/(µN )
4. for n = 1 to ntot
◦ repeat
- select a random pair (i, j) uniformly within all possible N (N − 1)/2 pairs
- perform the collision and produce vi0 , vj0
- set ṽi = vi0 , ṽj = vj0
- update the time counter tc = tc + ∆tc
until tc ≥ (n + 1)∆t
◦ set vin+1 = ṽi , i = 1, . . . , N
end for
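A minimal sketch of Bird's time-counter loop, again with the Kac rotation as a concrete stand-in for the Maxwellian collision law; names and parameter values are illustrative.

```python
import math
import random

random.seed(8)

def bird_maxwell(v, t_span, mu=1.0, eps=1.0):
    """Bird's scheme for Maxwell molecules: fixed mean collision time 2*eps/(mu*N)."""
    N = len(v)
    dtc = 2.0 * eps / (mu * N)     # constant average time between collisions
    tc = 0.0
    while tc < t_span:
        i, j = random.sample(range(N), 2)      # uniform pair among N(N-1)/2
        th = 2.0 * math.pi * random.random()   # Kac rotation as the collision law
        vi, vj = v[i], v[j]
        v[i] = vi * math.cos(th) - vj * math.sin(th)
        v[j] = vi * math.sin(th) + vj * math.cos(th)
        tc += dtc                              # advance the time counter
    return tc

N = 5_000
v = [random.uniform(-1.0, 1.0) for _ in range(N)]
e0 = sum(x * x for x in v)
t_end = bird_maxwell(v, 2.0)    # a particle may collide many times in one call
e1 = sum(x * x for x in v)
```

Unlike the Nanbu-Babovsky sketch, nothing prevents the same particle from being selected repeatedly, which is the multiple-collision feature discussed in the remark below.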
Remark: The above algorithm is very similar to the Nanbu-Babovsky (NB) scheme
for Maxwellian molecules or for the Kac equation. The main difference is that in
NB scheme the particles can collide only once per time step, while in Bird’s scheme
multiple collisions are allowed. This has a profound influence on the time accuracy
of the method. In fact, while the solution of the NB scheme converges in probability
to the solution of the time discrete Boltzmann equation, Bird’s method converges
to the solution of the space homogeneous Boltzmann equation. In this respect it
may be considered a scheme of infinite order in time. For the space homogeneous
Boltzmann equation, the time step ∆t, in fact, can be chosen to be the full time
span tmax .
Variable Hard Sphere case
When a more general gas is considered, Bird’s scheme has to be modified to take
into account that the average number of collisions in a given time interval is not
constant, and that the collision probability on all pairs is not uniform. This can be
done as follows. The expected number of collisions in a time step ∆t is given by
Nc = NρB̄∆t/(2ε),
where B̄ denotes the average of the collision kernel B(|vi − vj|) over all pairs.
Once the expected number of collisions is computed, the mean collision time can be computed as
∆tc = ∆t/Nc = 2ε/(NρB̄).
The Nc collisions have to be performed with probability proportional to Bij = B(|vi − vj|). In order to do this one can use the same acceptance-rejection technique as in the Nanbu-Babovsky scheme. The drawback of this procedure is that computing B̄ would be too expensive. A solution to this problem is to use a local time counter, as follows. First select a collision pair (i, j) using rejection, then compute
∆tij = 2ε/(NρBij).
This choice gives the correct expected value for the collision time:
⟨∆tc⟩ = Σ_{1≤i<j≤N} ∆tij Bij / Σ_{1≤i<j≤N} Bij = 2ε/(NρB̄).
Bird’s algorithm for general VHS molecules can therefore be summarized as:
Algorithm[Bird for VHS molecules]:
1. compute the initial velocity of the particles, {vi0 , i = 1, . . . , N },
by sampling them from the initial density f0(v)
2. set time counter tc = 0
3. for n = 1 to ntot
◦ compute an upper bound Σ of the cross section
◦ repeat
- select a random pair (i, j) uniformly within all possible N (N − 1)/2 pairs
- compute the relative cross section Bij = B(|vi − vj |)
- if Σ Rand < Bij
perform the collision between i and j, and compute
vi0 and vj0 according to the collisional law
set ṽi = vi0 , ṽj = vj0
set ∆tij = 2ε/(N ρBij )
update the time counter tc = tc + ∆tij
until tc ≥ (n + 1)∆t
◦ set vin+1 = ṽi , i = 1, . . . , N
end for
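The VHS loop above can be sketched for hard spheres, taking Bij = |vi − vj| and the standard elastic collision law vi′, vj′ = (vi + vj)/2 ± |vi − vj| ω/2 with ω uniform on the unit sphere; the crude bound Σ used here, and all names, are illustrative assumptions.

```python
import math
import random

random.seed(9)

def unit_vector():
    # uniform direction on the unit sphere
    ct = 2.0 * random.random() - 1.0
    st = math.sqrt(1.0 - ct * ct)
    phi = 2.0 * math.pi * random.random()
    return (st * math.cos(phi), st * math.sin(phi), ct)

def bird_vhs_step(v, dt, eps=1.0, rho=1.0):
    """One Bird step with rejection on B_ij = |v_i - v_j| (hard spheres, sketch)."""
    N = len(v)
    # crude upper bound: Sigma >= |v_i - v_j| for every pair
    Sigma = 2.0 * max(math.sqrt(x * x + y * y + z * z) for (x, y, z) in v)
    tc = 0.0
    while tc < dt:
        i, j = random.sample(range(N), 2)
        g = tuple(v[i][k] - v[j][k] for k in range(3))
        Bij = math.sqrt(g[0] ** 2 + g[1] ** 2 + g[2] ** 2)
        if Bij > 0.0 and Sigma * random.random() < Bij:   # accept with prob B_ij/Sigma
            cm = tuple((v[i][k] + v[j][k]) / 2.0 for k in range(3))
            om = unit_vector()
            v[i] = tuple(cm[k] + 0.5 * Bij * om[k] for k in range(3))
            v[j] = tuple(cm[k] - 0.5 * Bij * om[k] for k in range(3))
            tc += 2.0 * eps / (N * rho * Bij)             # local time counter
    return v

N = 2_000
v = [(random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
p0 = [sum(u[k] for u in v) for k in range(3)]          # total momentum
e0 = sum(x * x + y * y + z * z for (x, y, z) in v)     # total energy
bird_vhs_step(v, 0.05)
p1 = [sum(u[k] for u in v) for k in range(3)]
e1 = sum(x * x + y * y + z * z for (x, y, z) in v)
```

Fast pairs advance the clock by small ∆tij and slow pairs by large ones, which is how the local counter reproduces the correct mean collision time without ever computing B̄.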
Remarks:
• At variance with NB scheme, there is no restriction on the time step ∆t for Bird’s
scheme. For space homogeneous calculations ∆t could be chosen to be the total
computation time tmax . However, the scheme requires an estimate of Bmax , and this
has to be updated in time. This can be done either by performing the estimate
at certain discrete time steps as in NB scheme, or by updating its value at every
collision process. A possible solution in O(1) operations is to check whether the new
particles vi0 , vj0 generated at each collision increase the quantity ∆v = maxj |vj − v̄|.
• Unfortunately Bird’s method too becomes very expensive and practically unusable
near the fluid regime because in this case the collision time between the particles
∆tij becomes very small, and a huge number of collisions is needed in order to reach
a fixed final time tmax .
Time Relaxed (TR) discretizations
It is possible to show that the function f satisfies the following formal expansion
f(v, t) = e^{−µt/ε} Σ_{k=0}^{∞} (1 − e^{−µt/ε})^k fk(v).   (1)
The functions fk are given, for k = 0, 1, . . ., by the recurrence formula
fk+1(v) = 1/(k + 1) Σ_{h=0}^{k} (1/µ) P(fh, fk−h),   with f0(v) = f(v, 0).   (2)
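The expansion (1)-(2) can be checked numerically in a degenerate case: with the (hypothetical, f-independent) choice P(f, g) = µM for a fixed Maxwellian M, the recurrence gives fk = M for every k ≥ 1, and the truncated series must approach the exact solution e^{−µt/ε} f0 + (1 − e^{−µt/ε}) M of ∂f/∂t = (µ/ε)(M − f). A sketch on a velocity grid:

```python
import math

# velocity grid on [-5, 5]
vs = [-5.0 + 10.0 * i / 200 for i in range(201)]
M = [math.exp(-u * u / 2.0) / math.sqrt(2.0 * math.pi) for u in vs]        # equilibrium
f0 = [math.exp(-(u - 1.0) ** 2 / 2.0) / math.sqrt(2.0 * math.pi) for u in vs]

mu, eps, t = 1.0, 1.0, 1.0
s = mu * t / eps
tau = 1.0 - math.exp(-s)

# with P(f, g) = mu * M the recurrence (2) gives f_k = M for all k >= 1
m = 20
f_series = [math.exp(-s) * (f0[i] + sum(tau ** k for k in range(1, m + 1)) * M[i])
            for i in range(len(vs))]
# exact solution of df/dt = (mu/eps) * (M - f)
f_exact = [math.exp(-s) * f0[i] + tau * M[i] for i in range(len(vs))]
err = max(abs(a - b) for a, b in zip(f_series, f_exact))
```

The truncation error is τ^{m+1} M(v), so it shrinks geometrically in m, consistent with the asymptotic behavior discussed below.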
Properties
i) Conservation: If the collision operator preserves some moments, then all the functions fk have the same moments, i.e. if, for some function φ(v),
∫_{IR³} P(f, f) φ(v) dv = µ ∫_{IR³} f φ(v) dv,
then the coefficients fk defined by (2) are nonnegative and satisfy, ∀ k > 0,
∫_{IR³} fk φ(v) dv = ∫_{IR³} f0 φ(v) dv.
ii) Asymptotic behavior: If the sequence {fk}k≥0 defined by (2) is convergent, then (1) is well defined for any value of ε. Moreover, if we denote M(v) = lim_{k→∞} fk(v), then
lim_{t→∞} f(v, t) = M(v),
where M(v) is the local (Maxwellian) equilibrium.
TR schemes
From the previous representation, the following class of numerical schemes is obtained:
f^{n+1}(v) = (1 − τ) Σ_{k=0}^{m} τ^k fk^n(v) + τ^{m+1} M(v),
where f^n = f(n∆t), ∆t is a small time interval, and τ = 1 − e^{−µ∆t/ε} (relaxed time).
Properties :
i) conservation: If the initial condition f^0 is a nonnegative function, then f^{n+1} is nonnegative for any µ∆t/ε, and satisfies
∫_{IR³} f^{n+1} φ(v) dv = ∫_{IR³} f^n φ(v) dv.
Generalized TR schemes
Generalized schemes can be derived, of the following form:
f^{n+1}(v) = Σ_{k=0}^{m} Ak fk^n(v) + Am+1 M(v).
The weights Ak = Ak(τ) are nonnegative functions that satisfy the following properties:
i) conservation:
Σ_{k=0}^{m+1} Ak(τ) = 1,   τ ∈ [0, 1];
ii) asymptotic behavior:
lim_{τ→1} Ak(τ) = 0,   k = 0, . . . , m;
iii) consistency:
Time Relaxed Monte Carlo Methods
First order TR scheme (TRMC1) :
In a space non homogeneous case, this would be equivalent to a particle method for
Euler equations.
Second order TR scheme (TRMC2) :
Test problems:
Kac equation: details of the distribution function at time t = 2.0 for ∆t = 1.0. Exact (line), DSMC (+), first order TRMC (△), second order TRMC (∗), third order TRMC (◦).
Kac equation: L2 norm of the error vs time. DSMC (+), first order TRMC (△), second order TRMC (∗), third order TRMC (◦). Left: time step ∆t = 1.0. Right: DSMC with ∆t = 0.25, first order TRMC with ∆t = 0.5, second order TRMC with ∆t = 0.75, third order TRMC with ∆t = 1.0.
Maxwellian case: L2 norm of the error vs time. DSMC (+), first order TRMC (△), second order TRMC (∗), third order TRMC (◦). Left: time step ∆t = 1.0. Right: DSMC with ∆t = 0.25, first order TRMC with ∆t = 0.5, second order TRMC with ∆t = 0.75, third order TRMC with ∆t = 1.0.
Space non homogeneous case
1D Shock wave profiles
Comparison between:
The values for ρ, u and T for x < 0 are given by the Rankine-Hugoniot conditions.
Test problem :
• Hard spheres : 50 − 100 space cells and 500 particles in each cell on x > 0. The
reference solution is obtained with 200 space cells and 500 particles in each cell on
x > 0.
Remark: Since we have a stationary shock wave, the accuracy of the methods can be increased by computing time averages of the solution for large t.
1D shock profile: DSMC (+) and first order TRMC (×) (top), second order (∗) and third order (◦) TRMC (bottom) for ε = 1.0 and ∆t = 0.025. From left to right: ρ, u, T. The line is the reference solution.
1D shock profile: DSMC (+) and first order TRMC (×) (top), second order (∗) and third order (◦) TRMC (bottom) for ε = 0.1, with ∆t = 0.0025 for DSMC and ∆t = 0.025 for TRMC. From left to right: ρ, u, T. The line is the reference solution.
1D shock profile: first order TRMC (×) for ε = 10⁻⁶ and ∆t = 0.025. From left to right: ρ, u, T.
2D Flow past an ellipse
Boltzmann region: ε > 0.01; ε ≪ 0.01 elsewhere.
2D flow: ε = 0.1. Transversal and longitudinal sections of the mass ρ at y = 6 and x = 5, respectively, for M = 20; DSMC-NB (◦), TRMC I (+), TRMC II (×).
2D flow: ε = 0.01. NB, TRMC1 and TRMC2 solutions for the mass ρ (panels: DSMC, TRMC I, TRMC II).
2D flow: ε = 0.01. Transversal and longitudinal sections of the mass ρ at y = 6 and x = 5, respectively, for M = 20; DSMC-NB (◦), TRMC I (+), TRMC II (×).
2D flow: ε = 10⁻⁶. TRMC1 and TRMC2 solutions for the mass ρ (panels: TRMC I, TRMC II).
2D flow: ε = 10⁻⁶. Transversal and longitudinal sections of the mass ρ at y = 6 and x = 5, respectively, for M = 20; DSMC-NB (◦), TRMC I (+), TRMC II (×).
2D flow: number of “collisions”. From left to right: ε = 0.1, 0.01, 0.001; NB (◦), TRMC1 (+), TRMC2 (×).
Basic References