
arXiv:math.PR/0408060 v1  4 Aug 2004
Infinite volume limits of high-dimensional sandpile models

Antal A. Járai

Frank Redig

August 5, 2004
Abstract: We study the Abelian sandpile model on Z^d. In d ≥ 5 we prove existence of the infinite volume addition operator, almost surely w.r.t. the infinite volume limit ν of the uniform measures on recurrent configurations. We prove the existence of a Markov process with stationary measure ν, and study ergodic properties of this process. The main techniques we use are a connection between the statistics of waves and uniform two-component spanning trees, and results on the uniform spanning tree measure on Z^d.

Key-words: Abelian sandpile model, wave, addition operator, two-component spanning tree, loop-erased random walk, tail triviality.
1 Introduction
The Abelian sandpile model (ASM), originally introduced in [2], has been studied extensively in the physics literature, mainly because of its remarkable self-organized critical state. Many exact results were obtained by Dhar using the group structure of the addition operators acting on recurrent configurations introduced in [4]; see e.g. [5] for a review. The relation between recurrent configurations and spanning trees, originally introduced in [17], has been used by Priezzhev to compute the stationary height probabilities of the two-dimensional model in the thermodynamic limit [20]. Later on, Ivashkevich, Ktitarev and Priezzhev introduced the concept of waves to study the avalanche statistics, and made a connection between two-component spanning trees and waves [8, 9]. In [21] this connection was used to argue that the critical dimension of the ASM is d = 4.

From the mathematical point of view, one is interested in the thermodynamic limit, both for the stationary measures and for the dynamics. Recently, in [1] the

Carleton University, School of Mathematics and Statistics, 1125 Colonel By Drive, Room 4302 Herzberg Building, Ottawa, ON K1S 5B6, Canada

Faculteit Wiskunde en Informatica, Technische Universiteit Eindhoven, Postbus 513, 5600 MB Eindhoven, The Netherlands
connection between recurrent configurations and spanning trees, combined with results of Pemantle [19] on existence and uniqueness of the uniform spanning forest measure on Z^d, has led to the result that the uniform measures ν_V on recurrent configurations in finite volume have a unique thermodynamic (weak) limit ν. In [14] this was proved for an infinite tree, and a Markov process generated by Poissonian additions to recurrent configurations was constructed. A natural continuation of [1] is therefore to study the dynamics defined on ν-typical configurations. The first question here is to study the addition operator. We first prove that in d ≥ 3 the addition operator a_x can be defined on ν-typical configurations. This turns out to be a rather simple consequence of the transience of the simple random walk. However, in order to construct a stationary process with the infinite volume addition operators, it is crucial that the measure ν is invariant under a_x. We prove that this is the case if avalanches are ν-a.s. finite. In order to obtain a.s. finiteness of avalanches, we prove that the statistics of waves has a bounded density with respect to the uniform two-component spanning tree. The final step is to show that one component of the uniform two-component spanning tree is a.s. finite in the infinite volume limit when d > 4. This is proved using Wilson's algorithm combined with a coupling of the two-component spanning tree with the (usual) uniform spanning tree.

Given existence of a_x, and stationarity of ν under its action, we can apply the formalism developed in [14] to construct a stationary process which is informally described as follows. Starting from a ν-typical configuration η, at each site x ∈ Z^d grains are added at the event times of a Poisson process N_t^x with mean φ(x), where φ(x) satisfies the condition

    Σ_x φ(x) G(0, x) < ∞,

with G the Green function. The condition ensures that the number of topplings has finite expectation at any time t > 0. In this paper we further study the ergodic properties of this process. We show that tail triviality of the measure ν implies ergodicity of the process. We prove that ν has trivial tail in any dimension d ≥ 2. For 2 ≤ d ≤ 4 this is a rather straightforward consequence of the fact that the height configuration is a (non-local) coding of the edge configuration of the uniform spanning tree, i.e., from the spanning tree in infinite volume one can reconstruct the infinite height configuration almost surely. This is not the case in d > 4, where we need a separate argument.

Our paper is organized as follows. We start with notations and definitions, recalling some basic facts about the ASM. In Sections 3 and 4 we prove existence of the addition operator a_x and invariance of the measure ν. In Section 5 we prove existence of inverse addition operators. In Section 6 we make the precise link between avalanches and waves; in Section 7 we prove that waves are finite if the uniform two-component spanning tree has a.s. a finite component. In Section 8 we prove the required a.s. finiteness of the component of the origin in dimensions d ≥ 5. In Sections 9 and 10 we discuss tail triviality of the stationary measure and, correspondingly, ergodicity of the stationary process.
2 Notations, definitions
We consider the Abelian sandpile model, as introduced by Bak, Tang and Wiesenfeld and generalized by Dhar. In this model one starts from a toppling matrix Δ_{xy}, indexed by sites in Z^d. In this paper Δ will always be the adjacency matrix (or minus the discrete lattice Laplacian):

    Δ_{xy} =  2d  if x = y,
              −1  if |x − y| = 1,
               0  otherwise.

A height configuration is a map η : Z^d → N, and a stable height configuration is one such that η(x) ≤ Δ_{xx} for all x ∈ Z^d. A site where η(x) > Δ_{xx} is called an unstable site. All stable configurations are collected in the set Ω. We endow Ω with the product topology. For V ⊂ Z^d, Ω_V denotes the set of stable configurations η : V → N. If η ∈ Ω and W ⊂ Z^d, then η_W denotes the restriction of η to the subset W. We also use η_W for the restriction of η ∈ Ω_V to a subset W ⊂ V. The matrix Δ_V is the finite volume analogue of Δ, indexed now by the sites in V.
A toppling of a site x in volume V is defined on configurations η : V → N by

    T_x(η)(y) = η(y) − (Δ_V)_{xy}.   (2.1)

A toppling is called legal if the site is unstable, otherwise it is called illegal. The stabilization of an unstable configuration is defined to be the stable result of a sequence of legal topplings, i.e.,

    S(η) = T_{x_n} ∘ T_{x_{n−1}} ∘ · · · ∘ T_{x_1}(η),   (2.2)

where all topplings are legal and S(η) is stable. That S(η) is well-defined follows from [4, 18], see also [6]. If η is stable, then, by definition, S(η) = η. The addition operator is defined by

    a_x η = S(η + δ_x).   (2.3)

As long as we are in finite volume, a_x is well-defined and a_x a_y = a_y a_x (Abelianness).
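The toppling rule (2.1)–(2.3) is easy to demonstrate in code. The following is a minimal sketch of our own (the function names are not from the paper): it stabilizes configurations on a box in Z^2, with grains toppled out of the volume lost, and checks Abelianness numerically.

```python
import random

def neighbours(x, d):
    """The 2d lattice neighbours of site x in Z^d."""
    for i in range(d):
        for s in (1, -1):
            y = list(x)
            y[i] += s
            yield tuple(y)

def stabilize(eta, d):
    """Apply legal topplings until the configuration is stable.

    eta maps the sites of V to heights; grains toppled onto sites
    outside V (i.e. not in the dict) are lost over the boundary.
    """
    eta = dict(eta)
    while True:
        unstable = [x for x in eta if eta[x] > 2 * d]
        if not unstable:
            return eta
        for x in unstable:
            eta[x] -= 2 * d            # diagonal entry of Delta
            for y in neighbours(x, d):
                if y in eta:           # off-diagonal entries are -1
                    eta[y] += 1

def add(eta, x, d):
    """The addition operator a_x: add one grain at x and stabilize."""
    eta = dict(eta)
    eta[x] += 1
    return stabilize(eta, d)

# Abelianness on a random stable configuration in a 3x3 box of Z^2
d = 2
V = [(i, j) for i in range(3) for j in range(3)]
rng = random.Random(0)
eta = {x: rng.randint(1, 2 * d) for x in V}
x1, x2 = (0, 0), (1, 1)
assert add(add(eta, x1, d), x2, d) == add(add(eta, x2, d), x1, d)
```

The order of topplings inside `stabilize` is irrelevant for the final result, which is exactly the Abelian property used throughout the paper.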
The dynamics of the finite volume ASM is described as follows: at each discrete time step choose a site at random according to a probability measure p(x) > 0, x ∈ V, and apply a_x to the configuration. After time n the configuration is

    a_{X_1} · · · a_{X_n} η,

where X_1, . . . , X_n is an i.i.d. sample of p. This gives a Markov chain with transition operator

    Pf(η) = Σ_x p(x) f(a_x η).   (2.4)
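As a small illustration (our own sketch, not code from the paper), one can simulate this chain on a path of two sites in d = 1; after the first addition the chain enters the recurrent class, here the three stable configurations with at most one height equal to 1, and never leaves it.

```python
import random

def stabilize(eta):
    """Stabilize a configuration on the path {0, 1} in Z^1 (2d = 2);
    grains toppled over either end of the path are lost."""
    eta = list(eta)
    while max(eta) > 2:
        for x in (0, 1):
            if eta[x] > 2:
                eta[x] -= 2
                eta[1 - x] += 1      # one grain stays, one leaves V
    return tuple(eta)

def a(x, eta):
    """Addition operator a_x on the two-site path."""
    eta = list(eta)
    eta[x] += 1
    return stabilize(eta)

rng = random.Random(1)
eta = (1, 1)                 # stable, but not recurrent
visited = set()
for n in range(200):
    eta = a(rng.choice((0, 1)), eta)
    if n >= 1:               # after one addition the chain is recurrent
        visited.add(eta)

# the recurrent class: all stable configurations with at most one 1
assert visited == {(2, 2), (1, 2), (2, 1)}
```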
Given a function F(V) defined for all sufficiently large finite volumes V ⊂ Z^d, and taking values in a metric space with metric ρ, we say that lim_{V → Z^d} F(V) = a if for all ε > 0 there exists W such that ρ(F(V), a) < ε whenever V ⊃ W. For a probability measure μ on Ω, E_μ will denote expectation with respect to μ. The boundary of V is defined by ∂V = {y ∈ V : y has a neighbour in V^c}, while its exterior boundary is ∂_e V = {y ∈ V^c : y has a neighbour in V}.
2.1 Recurrent configurations
A stable configuration η ∈ Ω_V is called recurrent (η ∈ R_V) if it is recurrent in the Markov chain, or equivalently, if for any x there exists n = n_x such that a_x^n η = η. The addition operators restricted to the recurrent configurations form an Abelian group, and from that fact one easily concludes that the uniform measure ν_V on R_V is the unique invariant measure of the Markov chain. One can compute the number of recurrent configurations:

    |R_V| = det(Δ_V),   (2.5)

see [4]. Another important identity of [4] is the following. Denote by N_V(x, y, η) the number of legal topplings at y needed to obtain a_x η from η + δ_x. Then the expectation satisfies

    E_{ν_V}(N_V(x, y, η)) = G_V(x, y) = (Δ_V)^{−1}_{xy}.   (2.6)

From this and the Markov inequality, one also obtains G_V(x, y) as an estimate of the ν_V-probability that a site y has to be toppled if one adds at x. We also note that for our specific choice of Δ, G_V is (2d)^{−1} times the Green function of simple random walk in V killed upon exiting V.

Recurrent configurations are characterized by the so-called burning algorithm [4]. A configuration is recurrent if and only if it does not contain a so-called forbidden sub-configuration, that is, a subset W ⊂ V such that for all x ∈ W:

    η(x) ≤ Σ_{y ∈ W∖{x}} (−Δ_{xy}).   (2.7)

From this explicit characterization, one easily infers a consistency property: if η ∈ R_V and W ⊂ V, then η_W ∈ R_W. From that one naturally defines recurrent configurations in infinite volume, as those configurations η ∈ Ω such that for all finite V ⊂ Z^d, η_V ∈ R_V. This set is denoted by R.
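The two characterizations can be checked against each other on a small example. The sketch below (our own illustration) enumerates the stable configurations on a path of 4 sites in d = 1, tests recurrence via burning, and compares the count with det(Δ_V) from (2.5): on the path, a configuration is recurrent precisely when at most one site has height 1, giving n + 1 = 5 configurations.

```python
from fractions import Fraction
from itertools import product

def det(M):
    """Exact determinant by Gaussian elimination over the rationals."""
    M = [[Fraction(v) for v in row] for row in M]
    n, result = len(M), Fraction(1)
    for i in range(n):
        p = next((r for r in range(i, n) if M[r][i] != 0), None)
        if p is None:
            return Fraction(0)
        if p != i:
            M[i], M[p] = M[p], M[i]
            result = -result
        result *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return result

def is_recurrent(eta):
    """Burning algorithm on a path of n sites in d = 1: the two sink
    neighbours (outside the path) count as burnt from the start."""
    n = len(eta)
    burnt = [False] * n
    progress = True
    while progress:
        progress = False
        for y in range(n):
            if burnt[y]:
                continue
            unburnt = sum(1 for z in (y - 1, y + 1)
                          if 0 <= z < n and not burnt[z])
            if eta[y] > unburnt:
                burnt[y] = True
                progress = True
    return all(burnt)

n = 4
# toppling matrix of the path: 2 on the diagonal, -1 for neighbours
Delta = [[2 if i == j else (-1 if abs(i - j) == 1 else 0)
          for j in range(n)] for i in range(n)]
recurrent = [eta for eta in product((1, 2), repeat=n) if is_recurrent(eta)]
assert len(recurrent) == det(Delta) == 5
```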
2.2 Infinite volume: basic questions and results
In studying infinite volume limits of the ASM, the following questions are addressed. In this (non-exhaustive) list, any question can be asked only after a positive answer to all previous questions.

1. Do the measures ν_V converge weakly to a measure ν? Does ν concentrate on the set R?

2. Is the addition operator a_x defined on ν-a.e. configuration η ∈ R, and does it leave ν invariant? Does Abelianness still hold in infinite volume?

3. Can one define a natural Markov process on R with stationary distribution ν?

4. Does the stationary Markov process of question 3 have good ergodic properties?

Question 1 is easily solved in d = 1, but unhappily ν is trivial, concentrating on the single configuration that is identically 2. Hence no further questions on our list are relevant in that case. See [16] for a result on convergence to equilibrium in this case. For an infinite regular tree, the first three questions have been answered affirmatively, and the fourth question remained open [14]. In dissipative models (Δ_{xx} > 2d), all four questions are affirmatively answered when Δ_{xx} is sufficiently large [15].

For Z^d, question 1 is positively answered in any dimension d ≥ 2, using a correspondence between spanning trees and recurrent configurations and properties of the uniform spanning forest on Z^d [1]. The limiting measure ν is translation invariant. The proof of convergence in [1] in the case d > 4 is restricted to regular volumes, such as a sequence of cubes centered at the origin. In the appendix, we outline how to prove convergence along an arbitrary sequence of volumes using the result of [11].

In this paper we study questions 2, 3 and 4 for Z^d, d ≥ 5, and all questions are affirmatively answered.

The main problem is to prove that avalanches are a.s. finite. This is done by a decomposition of avalanches into a sequence of waves (cf. [9, 10]) and a study of the a.s. finiteness of the waves. The latter can be achieved by a two-component spanning tree representation of waves, as introduced in [9, 10]. We then study the uniform two-component spanning tree in infinite volume and prove that the component containing the origin is a.s. finite. This turns out to be sufficient to ensure finiteness of waves.
3 Existence of the addition operator
In this section we show convergence of the finite volume addition operators to an infinite volume addition operator when d > 2. This is actually very easy, but in order to make appropriate use of this infinite volume addition operator, we need to establish that ν is invariant under its action, and for the latter we need to show that avalanches are finite ν-a.s.

Given η ∈ Ω, call N_V(x, y, η) the number of topplings caused at y by addition at x in η, where we apply the finite (V-)volume rule, that is, grains falling out of V disappear. More precisely, let a_{x,V} denote the addition operator acting on Ω_V, and for η ∈ Ω define

    a_{x,V} η = (a_{x,V} η_V) η_{V^c}.   (3.1)

Then

    η + δ_x − Δ_V N_V(x, ·, η) = a_{x,V} η,   (3.2)

where Δ_V(x, y) = Δ(x, y) I[x ∈ V] I[y ∈ V] is the toppling matrix restricted to V.
We start with the following simple lemma:
Lemma 3.3. N_V(x, y, η) is a non-decreasing function of V, and depends on η only through η_V.

Proof. Let V ⊂ W. Suppose we add a grain at x in configuration η. We perform topplings inside V until the configuration is stable inside V. The result of this procedure is the configuration (a_{x,V} η_V) η_{V^c}. Possibly (a_{x,V} η_V) η_{V^c ∩ W} is not stable; in that case we perform all the topplings still needed to stabilize it inside W. This can only cause (possibly) extra topplings at sites y inside V.
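The monotonicity in Lemma 3.3 can also be observed numerically. In this sketch of ours (d = 1 for brevity; the function names are not from the paper), we count topplings per site under the finite volume rule for a short path V and a longer path W containing it, and check N_V(x, y, η) ≤ N_W(x, y, η) sitewise.

```python
def topple_counts(eta, V, x):
    """Add one grain at x and stabilize with the finite V-volume rule
    (3.1): only sites of V topple and receive grains; everything
    falling outside V disappears. Returns N_V(x, y, eta) for y in V.
    (Here 2d = 2, since we work in d = 1.)"""
    h = dict(eta)
    h[x] += 1
    N = {y: 0 for y in V}
    while True:
        unstable = [y for y in V if h[y] > 2]
        if not unstable:
            return N
        for y in unstable:
            h[y] -= 2
            N[y] += 1
            for z in (y - 1, y + 1):
                if z in N:
                    h[z] += 1

# the all-2 configuration; V is the middle third of the larger path W
W = list(range(-4, 5))
V = list(range(-1, 2))
eta = {y: 2 for y in W}
NV = topple_counts(eta, V, 0)
NW = {y: n for y, n in topple_counts(eta, W, 0).items() if y in V}
assert all(NV[y] <= NW[y] for y in V)
```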
From Lemma 3.3 and by monotone convergence:

    E_ν(sup_V N_V(x, y, η)) = lim_{V → Z^d} E_ν(N_V(x, y, η)).   (3.4)

By weak convergence of ν_W to ν:

    lim_V E_ν(N_V(x, y, η)) = lim_V lim_{W ⊃ V} E_{ν_W}(N_V(x, y, η))
                            ≤ lim_V lim_{W ⊃ V} E_{ν_W}(N_W(x, y, η))
                            = lim_W G_W(x, y) = G(x, y).   (3.5)

In the last step we used that d > 2; otherwise G_W(x, y) diverges as W → Z^d. This proves that for all x, y ∈ Z^d, ν-a.s. N(x, y, η) = sup_V N_V(x, y, η) is finite, and hence

    ν(∀ x, y ∈ Z^d : N(x, y, η) < ∞) = 1.   (3.6)

Therefore, on the event in (3.6), we can define

    a_x η = η + δ_x − Δ N(x, ·, η).   (3.7)
It is easy to see that a_x η is stable, using that a_x η(y) is already determined by the number of topplings at y and its neighbours. We also get

    a_x η = lim_{V → Z^d} a_{x,V} η,  ν-a.s.,   (3.8)

where a_{x,V} is defined in (3.1).

Note that with this definition, there can be infinite avalanches. However, as the volume increases, it cannot happen that the number of topplings at a fixed site diverges, and that is the only obstacle to defining a_x (an obstacle which could arise in d = 2). More precisely, an infinite avalanche that eventually leaves every finite region does not pose a problem for defining the addition operator. However, as we will see later on, infinite avalanches do cause problems in defining a stationary process. To define a_x we only need d > 2; however, to exclude infinite avalanches our method will require d > 4.

It is obvious that a_x is well behaved with respect to translations, i.e.,

    a_x = θ_{−x} a_0 θ_x,   (3.9)

where θ_x is the shift on configurations: (θ_x η)(y) = η(y + x).

Integrating (3.7) over ν, we easily obtain the following infinite volume analogue of Dhar's formula [4].
Proposition 3.10. If ν is invariant under a_x, then

    E_ν(N(x, y, η)) = G(x, y).   (3.11)

At this point we cannot compose different a_x. Although a_x η is well-defined a.s., it is not obvious that a_y can be applied to a_x η.
Proposition 3.12. If ν is invariant under the action of a_0, then ν is also invariant under the action of a_x for all x, and there exists a ν-measure one set Ω', such that for any η ∈ Ω' and every x_1, . . . , x_n ∈ Z^d,

    a_{x_n} a_{x_{n−1}} · · · a_{x_1} η

is well-defined.

Proof. If ν is invariant under a_0, then by translation invariance of ν and by (3.9), it is invariant under all a_x. Define Ω_0 to be the set of those η where a_x η is well-defined for all x ∈ Z^d. For n ≥ 1, define inductively the sets

    Ω_n = Ω_{n−1} ∩ ⋂_{x ∈ Z^d} a_x^{−1}(Ω_{n−1}),

where a_x^{−1} here denotes inverse image (not to be confused with the inverse operator defined later). Since the a_x are measure preserving, it follows by induction that ν(Ω_n) = 1 for all n, and that compositions of length n + 1 are well-defined on Ω_n. Therefore, Ω' = ⋂_{n ≥ 0} Ω_n satisfies the properties stated.
The following proposition shows that if avalanches are finite (see later for the precise definition of avalanches), then Abelianness holds in infinite volume.

Proposition 3.13. Assume that ν is invariant under a_0. Further assume that there exists a measure one set Ω' such that for any η ∈ Ω' and any x ∈ Z^d, a_x η is well-defined and there exists V_x(η) such that for all W ⊃ V_x(η)

    a_x η = a_{x,W} η.   (3.14)

Then the set Ω' can be chosen such that a_x Ω' ⊂ Ω' for all x ∈ Z^d. Moreover,

    a_x(a_y η) = a_y(a_x η).   (3.15)

Proof. It is straightforward that the set Ω' can be chosen such that a_x Ω' ⊂ Ω' for all x ∈ Z^d. For η ∈ Ω' and for W ⊃ V_y(η) ∪ V_x(a_y η) ∪ V_x(η) ∪ V_y(a_x η),

    a_x(a_y η) = a_{x,W}(a_{y,W} η) = a_{y,W}(a_{x,W} η) = a_y(a_x η).   (3.16)
4 Invariance of ν under a_x

In order to define the addition operator, all we needed was the convergence of the finite volume Green function to the infinite volume Green function. However, in the construction of a stationary process, it is essential that the candidate stationary measure (which in this case is the infinite volume limit ν of the uniform measures on recurrent configurations) is invariant under the action of a_x.

The following proposition shows that ν is indeed invariant if there are no infinite avalanches ν-a.s. We define the avalanche cluster caused by addition at x to be the set

    C_x(η) = {y ∈ Z^d : N(x, y, η) > 0}.   (4.1)

We say that the avalanche at x is finite in η if C_x(η) is a finite set. We say that ν has the finite avalanche property if for all x ∈ Z^d, ν(|C_x| < ∞) = 1.

Proposition 4.2. If ν has the finite avalanche property, then for any local function f and for any x ∈ Z^d,

    ∫ f(a_x η) dν = ∫ f(η) dν.   (4.3)
Proof. We have

    ∫ f(a_x η) dν = ∫ f(a_{x,V} η) dν + ε_1(V, f)
                  = ∫ f(a_{x,V} η) dν_W + ε_1(V, f) + ε_2(V, W, f)
                  = ∫ f(a_{x,W} η) dν_W + ε_1(V, f) + ε_2(V, W, f) + ε_3(V, W, f).

Here ε_1 and ε_2 can be made arbitrarily small by (3.8) and by weak convergence. We also have

    |ε_3(V, W, f)| ≤ 2 ‖f‖_∞ ν_W(f(a_{x,W} η) ≠ f(a_{x,V} η)).

Next, by invariance of ν_W under the action of a_{x,W},

    ∫ f(a_{x,W} η) dν_W = ∫ f dν_W = ∫ f dν + ε_4(W, f).   (4.4)

Here, by weak convergence, ε_4 can be made arbitrarily small. Therefore, combining the estimates, we conclude

    |∫ f(a_x η) dν − ∫ f(η) dν| ≤ C limsup_{V → Z^d} limsup_{W ⊃ V} ν_W(f(a_{x,W} η) ≠ f(a_{x,V} η)).   (4.5)

Define the avalanche cluster in volume W by

    C_{x,W}(η) = {y ∈ W : N_W(x, y, η) > 0},  η ∈ R_W.

Let D_f denote the dependence set of the local function f. On the event {C_{x,W}(η) ∩ ∂V = ∅} we have a_{x,V} η = a_{x,W} η. Hence, provided D_f ⊂ V, we have

    ν_W(f(a_{x,W} η) ≠ f(a_{x,V} η)) ≤ ν_W(C_{x,W} ∩ ∂V ≠ ∅).

The event on the right hand side is a cylinder event (it only depends on heights in V). Therefore, the right hand side approaches ν(C_x ∩ ∂V ≠ ∅) as W → Z^d. By the assumptions of the proposition,

    lim_{V → Z^d} ν(C_x ∩ ∂V ≠ ∅) = ν(|C_x| = ∞) = 0,

which completes the proof.
5 Inverse addition operators
In this section we prove that a_x has an inverse defined ν-a.s., provided ν has the finite avalanche property. Recall that if there are no infinite avalanches, then for ν-a.e. η and every x ∈ Z^d there exists a finite set V_x(η) such that a_x η = a_{x,V_x(η)} η. Define

    a^{−1}_{x,V} η = (a^{−1}_{x,V} η_V) η_{V^c}.

This is well-defined since η_V ∈ R_V.

Lemma 5.1. Suppose that ν has the finite avalanche property.

1. For almost every η there exists V_0 = V_0(η) such that a^{−1}_{x,V} η = a^{−1}_{x,V_0(η)} η for all V ⊃ V_0.

2. If we define a^{−1}_x η = a^{−1}_{x,V_0(η)} η, then ν-a.s. a^{−1}_x(a_x η) = a_x(a^{−1}_x η) = η.

3. As operators on L^2(ν), a^*_x = a^{−1}_x, i.e., the a_x are unitary operators.
Proof. We will prove that

    lim_{V_0 → Z^d} ν(∃ V ⊃ V_0 : a^{−1}_{x,V}(η) ≠ a^{−1}_{x,V_0}(η)) = 0,   (5.2)

which is sufficient for the first statement, by monotonicity in V_0 of the event in (5.2). Write

    ν(∃ V ⊃ V_0 : a^{−1}_{x,V}(η) ≠ a^{−1}_{x,V_0}(η))
      = ν(∃ V ⊃ V_0 : a^{−1}_{x,V}(a_x η) ≠ a^{−1}_{x,V_0}(a_x η))
      ≤ ν({∃ V ⊃ V_0 : a^{−1}_{x,V}(a_x η) ≠ a^{−1}_{x,V_0}(a_x η)} ∩ {∀ V ⊃ V_0 : a_{x,V}(η) = a_{x,V_0}(η)}) + ν(Ω ∖ Ω_{V_0}),   (5.3)

where Ω_{V_0} = {η : ∀ V ⊃ V_0, a_{x,V} η = a_{x,V_0} η}. Here we used the invariance of ν under a_x in the first step. The first term on the right hand side of (5.3) vanishes, because if a_x η = a_{x,V} η = a_{x,V_0} η, then

    a^{−1}_{x,V}(a_x η) = a^{−1}_{x,V}(a_{x,V} η) = η = a^{−1}_{x,V_0}(a_{x,V_0} η) = a^{−1}_{x,V_0}(a_x η).   (5.4)

Next,

    ν(Ω ∖ Ω_{V_0}) = ν(∃ V ⊃ V_0 : a_{x,V} η ≠ a_{x,V_0} η),   (5.5)

which converges to zero as V_0 → Z^d, by the finite avalanche property. This proves the first statement of the lemma. To prove the second statement, first remark that by the definitions of a_x and a^{−1}_x, for almost every η there exists a (sufficiently large) V such that

    a_x(a^{−1}_x η) = a_{x,V}(a^{−1}_{x,V} η) = η = a^{−1}_{x,V}(a_{x,V} η) = a^{−1}_x(a_x η).   (5.6)

The last statement of the lemma is an obvious consequence of the first two.
The above lemma proves that, as operators on L^2(ν), the a_x generate a unitary group, which we denote by G.
6 Waves and avalanches
The goal of Sections 6, 7 and 8 is to prove the following theorem.

Theorem 6.1. Suppose d > 4. Then ν(|C_x| < ∞) = 1 for all x ∈ Z^d.

Remark 6.2. The assumption d > 4 can be replaced by the condition that d ≥ 3 and the conclusion of Proposition 7.11 (ii) holds.

In order to prove that avalanches are almost surely finite, we decompose avalanches into waves. We prove that almost surely there is a finite number of waves, and that all waves are almost surely finite. Without loss of generality, we assume that x = 0 (the origin), and we drop indices referring to x from our notation.

We first recall the definition of a wave, cf. [9, 10]. Consider a finite volume W, and add a grain at site 0 in a stable configuration. If the site becomes unstable, then topple it once, and topple all other unstable sites except 0. It is easy to see that in this procedure a site can topple at most once. The toppled sites form what is called the first wave. Next, if 0 has to be toppled again, we start another wave, and so on, until 0 is stable.
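The wave decomposition can be sketched as follows (our own illustration, not code from the paper): topple 0 once, sweep every other site stable, and repeat while 0 is unstable. The number of waves then equals the number of topplings at 0, each site topples at most once per wave, and by Abelianness the final configuration agrees with plain stabilization.

```python
def nbrs(y):
    i, j = y
    return [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]

def stabilize(h, sites, counts):
    """Topple every unstable site in `sites` (2d = 4 in d = 2) until
    all of them are stable; grains leaving the volume are lost."""
    while True:
        unstable = [y for y in sites if h[y] > 4]
        if not unstable:
            return
        for y in unstable:
            h[y] -= 4
            counts[y] = counts.get(y, 0) + 1
            for z in nbrs(y):
                if z in h:
                    h[z] += 1

V = [(i, j) for i in range(-2, 3) for j in range(-2, 3)]
origin = (0, 0)
others = [y for y in V if y != origin]

# wave decomposition of the avalanche started by one grain at the origin
h = {y: 4 for y in V}                  # the maximal stable configuration
h[origin] += 1
waves = []
while h[origin] > 4:
    counts = {origin: 1}
    h[origin] -= 4                     # topple the origin exactly once
    for z in nbrs(origin):
        if z in h:
            h[z] += 1
    stabilize(h, others, counts)       # never topples the origin again
    waves.append(counts)

# plain stabilization of the same addition, for comparison
g = {y: 4 for y in V}
g[origin] += 1
total = {}
stabilize(g, V, total)

assert len(waves) == total[origin]               # #waves = N(0, 0, eta)
assert all(c == 1 for w in waves for c in w.values())
assert set().union(*(set(w) for w in waves)) == set(total)
assert g == h                                    # same final configuration
```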
We define n_W(η) to be the number of waves caused by addition at 0 in the volume W. By definition, n_W is the number of topplings at 0 in W caused by addition at 0, that is, n_W(η) = N_W(0, 0, η). For fixed W, let C_W(η) denote the avalanche cluster in volume W. We decompose C_W as

    C_W(η) = ⋃_{i=1}^{n_W(η)} Γ^i_W(η),   (6.3)

where Γ^i_W(η) is the i-th wave in W caused by addition at 0.
We can define waves in infinite volume in the same way as we defined the toppling numbers and avalanches in Section 3, by monotonicity in the volume. More precisely, the definition is as follows. By the arguments of Section 3, Γ^1_W is a non-decreasing function of W, and therefore we can define Γ^1 = ⋃_W Γ^1_W = lim_W Γ^1_W. If 0 is unstable after the first wave (now considered in infinite volume), we consider the second wave Γ̃^2_W with the finite volume rule. Again we have a non-decreasing family, and we set Γ^2 = ⋃_W Γ̃^2_W = lim_W Γ̃^2_W. Note that if |Γ^1| < ∞, we have Γ̃^2_W = Γ^2_W for all large W, and consequently Γ^2 = lim_W Γ^2_W. We similarly define Γ̃^i_W as the result of the i-th wave with the finite volume rule after the first i − 1 waves have been carried out in infinite volume. We let Γ^i = ⋃_W Γ̃^i_W = lim_W Γ̃^i_W. Again, under the assumption |Γ^j| < ∞, 1 ≤ j < i, we also have Γ^i = lim_W Γ^i_W. For convenience, we define Γ^i_W or Γ^i as the empty set whenever such waves do not exist.
The easy part in proving finiteness of avalanches is to show that the number of waves is finite. Since n_W(η) is non-decreasing in W, it has a pointwise limit n(η), and as before,

    E_ν n(η) ≤ G(0, 0) < ∞.   (6.4)

This implies n < ∞ ν-a.s.

In order to prove that C(η) is finite ν-a.s., we show, by induction on i, that all sets Γ^i(η) are finite ν-a.s. We base the proof on the following proposition, proved in Sections 7 and 8.
Proposition 6.5. Let d > 4. For i ≥ 1 we have

    lim_{V → Z^d} limsup_{W ⊃ V} ν_W(Γ^i_W ⊄ V) = 0.   (6.6)

Noting that {Γ^1 ⊂ V} is a local event, Proposition 6.5 with i = 1 implies that ν(|Γ^1| < ∞) = 1. Assume now that ν(|Γ^j| < ∞) = 1, 1 ≤ j < i. Then

    ν(Γ^i ⊄ V) ≤ ν(Γ^i ⊄ V, Γ^j ⊂ V' for 1 ≤ j < i) + ν(Γ^j ⊄ V' for some 1 ≤ j < i).   (6.7)

By the induction hypothesis, the second term on the right hand side can be made arbitrarily small by choosing V' large. For fixed V', the event in the first term is a local event (it only depends on sites in V' ∪ ∂_e V', if V' ⊃ V). Therefore, the first term in (6.7) equals

    lim_W ν_W(Γ^i_W ⊄ V, Γ^j_W ⊂ V' for 1 ≤ j < i) ≤ limsup_W ν_W(Γ^i_W ⊄ V).   (6.8)

Here the right hand side goes to 0 as V → Z^d, by Proposition 6.5, proving that ν(|Γ^i| < ∞) = 1. With finiteness of all waves proved, we can pass to the limit in (6.3) and obtain the decomposition

    C(η) = ⋃_{i=1}^{n(η)} Γ^i(η).   (6.9)

It follows that ν(|C| < ∞) = 1, which completes the proof of Theorem 6.1, assuming Proposition 6.5.
7 Finiteness of waves
In this section we prove Proposition 6.5, showing that waves are finite, assuming Proposition 7.11 below. The proof of Proposition 7.11 is completed in Section 8. To prove that waves are finite, we use their representation as two-component spanning trees [8, 9], which we now describe. Consider a configuration η_W ∈ R_W with η_0 = 2d, and suppose we add a particle at 0. Consider the first wave, which is entirely determined by the recurrent configuration η_{W∖{0}}. The result of the first wave on W ∖ {0} is given by

    S^1_W(η) = (∏_{j ∼ 0} a_{j,W∖{0}}) η_{W∖{0}}.   (7.1)
Next we associate to any η_{W∖{0}} ∈ R_{W∖{0}} a tree T_W(η_{W∖{0}}) that will represent a wave starting at 0. For the definition, we use Majumdar and Dhar's tree construction [17].

Denote by W̄ the graph obtained from Z^d by identifying all sites in Z^d ∖ (W ∖ {0}) to a single site δ_W (removing self-loops). By [17], there is a one-to-one map between recurrent configurations η_{W∖{0}} and spanning trees of W̄. The correspondence is given by following the spread of an avalanche started at δ_W. Initially, each neighbour of δ_W receives a number of grains equal to the number of edges connecting it to δ_W, which results in every site toppling exactly once. The spanning tree records the sequence in which topplings have occurred. There is some flexibility in how to carry out the topplings (and hence in the correspondence with spanning trees), and here we make a specific choice, in accordance with [9]. Namely, we first transfer grains from δ_W only to the neighbours of 0, and carry out all possible topplings. We call this the first phase. The set of sites that topple in the first phase is precisely a wave started at 0. Now transfer grains from δ_W to the boundary sites of W, which will cause topplings at all sites that were not in the wave; this is the second phase.
The two phases can alternatively be described via the burning algorithm of Dhar [4], which in the above context looks as follows. For convenience, let W̃ denote the graph obtained by identifying all sites in Z^d ∖ W to a single site δ_W; that is, W̄ can be obtained from W̃ by identifying 0 and δ_W. We start with all sites of W̃ declared unburnt. At step 0 we burn 0 (the origin). At step t, we burn all sites y for which

    η_y > number of unburnt neighbours of y.   (7.2)

The process stops at some step T = T(η_{W∖{0}}). The set of sites burnt up to time T is precisely the set of sites toppling in the first phase. We continue by burning δ_W in step T + 1, and then repeating (7.2) as long as there are unburnt sites.
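The claim that the first phase burns exactly the sites of a wave can be tested directly. Below is a sketch of our own in d = 1: we compute the first wave by toppling, and phase one of the burning rule (7.2) started from 0 with δ_W left unburnt (so neighbours outside the volume count as unburnt), and compare the two sets.

```python
def first_wave(eta, sites, origin):
    """Add a grain at the origin and carry out the first wave:
    topple the origin once, then all other unstable sites (2d = 2)."""
    h = dict(eta)
    h[origin] += 1
    toppled = {origin}
    h[origin] -= 2
    for z in (origin - 1, origin + 1):
        if z in h:
            h[z] += 1
    changed = True
    while changed:
        changed = False
        for y in sites:
            if y != origin and h[y] > 2:
                h[y] -= 2
                toppled.add(y)
                changed = True
                for z in (y - 1, y + 1):
                    if z in h:
                        h[z] += 1
    return toppled

def phase_one(eta, sites, origin):
    """Phase one of the burning algorithm (7.2): burn the origin; the
    sink delta_W stays unburnt, so neighbours outside the path count
    as unburnt."""
    burnt = {origin}
    changed = True
    while changed:
        changed = False
        for y in sites:
            if y in burnt:
                continue
            unburnt = sum(1 for z in (y - 1, y + 1) if z not in burnt)
            if eta[y] > unburnt:
                burnt.add(y)
                changed = True
    return burnt

sites = list(range(-2, 3))
eta = {-2: 2, -1: 2, 0: 2, 1: 2, 2: 1}   # recurrent, with eta_0 = 2d
assert first_wave(eta, sites, 0) == phase_one(eta, sites, 0)
```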
Following Majumdar and Dhar's construction [17], we assign to each y ∈ W ∖ {0} burnt at time t a unique neighbour y* (called the parent of y) burnt at time t − 1. This defines a spanning subgraph of W̃ with two tree components, having roots 0 and δ_W. Identifying 0 and δ_W yields a spanning tree of W̄, also representing η_{W∖{0}}. We denote by T_W(η_{W∖{0}}) the component with root 0 (the origin). With slight abuse of language, we refer to the two-component graph as a two-component spanning tree.
We can generalize the above construction to further waves as follows. For k ≥ 2, define

    S^k_W(η) = (∏_{j ∼ 0} a_{j,W∖{0}})^k η_{W∖{0}}.   (7.3)

If there exists a k-th wave, then its result on W ∖ {0} is given by (7.3). Applying the above construction to S^{k−1}_W(η), we obtain that the k-th wave (if there is one) is represented by T_W(S^{k−1}_W(η)).

We will prove that T_W has a weak limit T, which is almost surely finite. But first let us show that this is actually sufficient for finiteness of all waves.
Consider the first wave, and let W ⊃ V. By construction, Γ^1_W(η) is precisely the vertex set of T_W(η_{W∖{0}}), hence

    ν_W(Γ^1_W(η) ⊄ V) = ν_W(T_W(η_{W∖{0}}) ⊄ V).   (7.4)

Here the right hand side is determined by the distribution of η_{W∖{0}} under ν_W. This is different from the law of η_{W∖{0}} under ν_{W∖{0}}, which is simply the uniform measure on R_{W∖{0}}. It is the latter that we can get information about using the correspondence to spanning trees. Indeed, under ν_{W∖{0}}, the spanning tree corresponding to η_{W∖{0}} is uniformly distributed on the set of spanning trees of W̄. In order to translate our results back to ν_W, we show that the former distribution has a bounded density with respect to the latter. This will be a consequence of the following lemma.
Lemma 7.5. There is a constant C(d) > 0 such that for all d ≥ 3

    sup_{V ⊂ Z^d} |R_{V∖{0}}| / |R_V| ≤ C(d).   (7.6)

Proof. By Dhar's formula (2.5),

    |R_{V∖{0}}| = det(Δ_{V∖{0}}) = det(Δ'_V),

where Δ'_V denotes the matrix indexed by the sites y ∈ V and defined by (Δ'_V)_{yz} = (Δ_{V∖{0}})_{yz} for y, z ∈ V ∖ {0}, and (Δ'_V)_{0z} = (Δ'_V)_{z0} = δ_{0z}. Clearly, Δ_V + P = Δ'_V, where P is a matrix whose only non-zero entries are P_{yz} with y, z ∈ N = {u : |u| ≤ 1}. Moreover, max_{y,z ∈ V} |P_{yz}| ≤ 2d − 1. Hence

    |R_{V∖{0}}| / |R_V| = det(Δ_V + P) / det(Δ_V) = det(I + G_V P),

where G_V = (Δ_V)^{−1}. Here (G_V P)_{yz} = 0 unless z ∈ N. Therefore

    det(I + G_V P) = det((I + G_V P)_{u ∈ N, v ∈ N}).   (7.7)

By transience of the simple random walk in d ≥ 3, we have sup_V sup_{y,z} G_V(y, z) ≤ G(0, 0) < ∞, and therefore the determinant of the finite matrix (I + G_V P)_{u ∈ N, v ∈ N} in (7.7) is bounded by a constant depending on d.
We note that an alternative proof of Lemma 7.5 can be given based on the following idea. Consider the graph W̿ obtained by adding an extra edge e between 0 and δ_W in W̃. Then the ratio in (7.6) can be expressed in terms of the probability that a uniformly chosen spanning tree of W̿ contains e. By standard spanning tree results [3, Theorem 4.1], the latter is the same as the probability that a random walk on W̿ started at 0 first hits δ_W through e.
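The quantities in (7.6) can be computed exactly in a small example (our own sketch; in d = 1 no uniform bound is claimed, we only illustrate the ratio). Removing the origin splits a path into two independent wired paths, and brute-force counts of recurrent configurations via burning match the determinants in (2.5).

```python
from itertools import product

def is_recurrent(eta, sites):
    """Burning test on a subset of Z^1: a site burns when its height
    exceeds its number of unburnt neighbours inside `sites`; all
    neighbours outside `sites` belong to the (always burnt) sink."""
    burnt = set()
    changed = True
    while changed:
        changed = False
        for y in sites:
            if y in burnt:
                continue
            unburnt = sum(1 for z in (y - 1, y + 1)
                          if z in sites and z not in burnt)
            if eta[y] > unburnt:
                burnt.add(y)
                changed = True
    return len(burnt) == len(sites)

def count_recurrent(sites):
    total = 0
    for heights in product((1, 2), repeat=len(sites)):
        eta = dict(zip(sites, heights))
        if is_recurrent(eta, sites):
            total += 1
    return total

V = [-2, -1, 0, 1, 2]
V0 = [-2, -1, 1, 2]        # V with the origin removed
RV, RV0 = count_recurrent(V), count_recurrent(V0)
# det(Delta) of a wired path of n sites is n + 1; removing the origin
# splits V into two wired 2-site paths, so |R_{V\{0}}| = 3 * 3 = 9
assert (RV, RV0) == (6, 9)
assert RV0 / RV == 1.5     # the ratio appearing in (7.6)
```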
We continue with the bounded density argument. For any configuration ξ_{W∖{0}} ∈ R_{W∖{0}} we have

    ν_W(η_{W∖{0}} = ξ_{W∖{0}}) = (1/|R_W|) · #{k ∈ {1, . . . , 2d} : the configuration equal to k at 0 and to ξ on W ∖ {0} lies in R_W}.   (7.8)

Therefore,

    ν_W(η_{W∖{0}} = ξ_{W∖{0}}) / ν_{W∖{0}}(η_{W∖{0}} = ξ_{W∖{0}}) ≤ 2d · |R_{W∖{0}}| / |R_W| ≤ C,   (7.9)

where, by (7.6), C > 0 depends neither on ξ nor on W. From this estimate it follows that

    ν_W(T_W(η_{W∖{0}}) ⊄ V) ≤ C · ν_{W∖{0}}(T_W(η_{W∖{0}}) ⊄ V).   (7.10)
For a more convenient notation, we let ν^{(0)}_W denote the probability measure assigning equal mass to each spanning tree of W̄, or alternatively, to each two-component spanning tree of W̃. We can view ν^{(0)}_W as a measure on {0, 1}^{E_d} in a natural way, where E_d is the set of edges of Z^d. By the Majumdar–Dhar correspondence [17], ν^{(0)}_W corresponds to the measure ν_{W∖{0}}, and the law of T_W under ν_{W∖{0}} is that of the component of 0 under ν^{(0)}_W. We keep the notation T_W when referring to ν^{(0)}_W. In Section 8 we prove the following proposition.
Proposition 7.11. (i) For any d ≥ 1, the limit lim_{W → Z^d} ν^{(0)}_W = ν^{(0)} exists.

(ii) Assume d > 4. The component T of 0 under ν^{(0)} satisfies ν^{(0)}(|T| < ∞) = 1.
By Proposition 7.11 (i), we have

    lim_{W ⊃ V} ν_{W∖{0}}(T_W(η_{W∖{0}}) ⊄ V) = lim_{W ⊃ V} ν^{(0)}_W(T_W ⊄ V) = ν^{(0)}(T ⊄ V).   (7.12)

By Proposition 7.11 (ii), the right hand side of (7.12) goes to zero as V → Z^d, and together with (7.10) and (7.4), we obtain the i = 1 case of (6.6).
Finiteness of the other waves follows similarly. For k ≥ 2 we have

    ν_W(Γ^k_W(η) ⊄ V) ≤ ν_W(T_W(S^{k−1}_W η) ⊄ V)
                      ≤ C · ν_{W∖{0}}(T_W(S^{k−1}_W η) ⊄ V)
                      = C · ν_{W∖{0}}(T_W(η) ⊄ V),   (7.13)

where the last step follows by invariance of ν_{W∖{0}} under ∏_{j ∼ 0} a_j. We have already seen that the right hand side of (7.13) goes to zero, which completes the proof of Proposition 6.5, assuming Proposition 7.11.
8 Finiteness of two-component spanning trees
In this section we complete the arguments for finiteness of avalanches by proving Proposition 7.11, which amounts to showing that the weak limit of T_W is almost surely finite.

Let μ_W denote the probability measure assigning equal weight to each spanning tree of W̃; μ_W is known as the uniform spanning tree measure in W with wired boundary conditions [3].
8.1 Wilson's algorithm
We use the algorithm below, due to Wilson [23], to analyze random samples from ν^{(0)}_W and μ_W.

Let G be a finite connected graph. By simple random walk on G we mean the random walk which at each step jumps to a uniformly chosen random neighbour. For a path π = [π_1, . . . , π_m] on G, define LE(π) as the path obtained by erasing loops chronologically from π. We call LE(π) the loop-erasure of π. Pick a vertex r ∈ G, called the root. Enumerate the vertices of G as x_1, . . . , x_k. Let (S^{(i)}_n)_{n ≥ 1}, 1 ≤ i ≤ k, be independent simple random walks started at x_1, . . . , x_k, respectively. Let

    T^{(1)} = min{n ≥ 0 : S^{(1)}_n = r},

and set

    γ^{(1)} = LE(S^{(1)}[0, T^{(1)}]).

Now recursively define T^{(i)}, γ^{(i)}, i = 2, . . . , k as follows. Let

    T^{(i)} = min{n ≥ 0 : S^{(i)}_n ∈ ⋃_{1 ≤ j < i} γ^{(j)}},

and

    γ^{(i)} = LE(S^{(i)}[0, T^{(i)}]).

(If x_i ∈ ⋃_{1 ≤ j < i} γ^{(j)}, then γ^{(i)} is the single point x_i.) Let T = ⋃_{1 ≤ i ≤ k} γ^{(i)}. Then T is a spanning tree of G, and it is uniformly distributed [23].
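The algorithm is short to implement. The sketch below (our own code, not from the paper or from [23]) runs it on a triangle graph; the triangle has three spanning trees, and each should appear in roughly a third of the samples.

```python
import random

def loop_erase(path):
    """Chronological loop-erasure LE(pi): when a vertex is revisited,
    erase the loop created since its first visit."""
    out = []
    for v in path:
        if v in out:
            del out[out.index(v) + 1:]
        else:
            out.append(v)
    return out

def wilson(graph, root, rng):
    """Wilson's algorithm: returns a uniform spanning tree of `graph`
    as a parent map, rooted at `root`."""
    in_tree = {root}
    parent = {}
    for start in sorted(graph):
        if start in in_tree:
            continue
        walk = [start]
        while walk[-1] not in in_tree:
            walk.append(rng.choice(graph[walk[-1]]))
        branch = loop_erase(walk)
        for a, b in zip(branch, branch[1:]):
            parent[a] = b
            in_tree.add(a)
    return parent

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
rng = random.Random(7)
counts = {}
for _ in range(3000):
    tree = tuple(sorted(wilson(triangle, 0, rng).items()))
    counts[tree] = counts.get(tree, 0) + 1

assert len(counts) == 3                        # the triangle has 3 trees
assert all(800 < c < 1200 for c in counts.values())
```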
Applying the algorithm with G =

W and root
W
gives a sample from
W
. Sim-
ilarly, applying the method with G =

W and root
W
we get a sample from
(0)
W
.
In the latter case, we can imagine the construction happening in

W, where 0 is also
considered part of the boundary. In other words, the two-component spanning tree
is built from loop-erased random walks in

W who attach either to a piece at 0, or to
a piece at the boundary.
One can extend the algorithm to the case where G is an innite graph, on which
simple random walk is transient [3]. In this case, one chooses the root to be at innity,
and note that loop-erasure makes sense for paths that visit each site nitely many
times.
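As a concrete illustration, the procedure just described fits in a few lines of code. The sketch below is our own (the adjacency-dict graph encoding and function names are not from the paper); it samples a uniform spanning tree of a finite connected graph, with chronological loop-erasure exactly as in the definition of $\mathrm{LE}(\pi)$.

```python
import random

def loop_erase(path):
    """Chronological loop-erasure: whenever the path revisits a vertex,
    erase the loop created since the earlier visit."""
    erased = []
    pos = {}  # vertex -> index of its occurrence in `erased`
    for v in path:
        if v in pos:
            del erased[pos[v] + 1:]
            pos = {u: i for i, u in enumerate(erased)}
        else:
            erased.append(v)
            pos[v] = len(erased) - 1
    return erased

def wilson(adj, root, rng=random):
    """Uniform spanning tree of the finite connected graph `adj`
    (dict: vertex -> list of neighbours), rooted at `root` [23]."""
    in_tree = {root}
    parent = {}
    for start in adj:
        if start in in_tree:
            continue
        # run a simple random walk until it hits the current tree
        walk = [start]
        while walk[-1] not in in_tree:
            walk.append(rng.choice(adj[walk[-1]]))
        # attach the loop-erasure of the walk to the tree
        branch = loop_erase(walk)
        for u, v in zip(branch, branch[1:]):
            parent[u] = v
        in_tree.update(branch)
    return parent  # each non-root vertex points towards the root
```

Running the same routine on the wired graph $\overline{W}$ with root $\partial_W$ gives a sample from $\mu_W$; merging $0$ into the root gives a sample from $\mu^{(0)}_W$.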
8.2 The weak limit

Denote the two-component spanning tree in $W$ by $F_W$. We denote by $\mathcal{T}_W$ the wired uniform spanning tree in $\overline{W}$. We regard $\partial_W$ as the root of $\mathcal{T}_W$. If $x$ is closer to the root than $y$ (in graph distance) then we say that $y$ is a descendant of $x$, and $x$ is an ancestor of $y$. For a set $B \subseteq W$ we write $\mathrm{desc}(B; \mathcal{T}_W)$ for the set of descendants of vertices in $B$. We sometimes think of the edges of $\mathcal{T}_W$ as being directed towards the root. We write $V_N = [-N, N]^d \cap \mathbb{Z}^d$.

It is well-known that $\mathcal{T}_W$ has a weak limit $\mathcal{T}$ as $W \uparrow \mathbb{Z}^d$, called the (wired) uniform spanning forest (USF) on $\mathbb{Z}^d$ [19, 3]. We denote its law by $\mu$. When $d \ge 3$, the USF can be constructed directly by Wilson's method in $\mathbb{Z}^d$, rooted at infinity [3, Theorem 5.1].
Similarly, $F_W$ has a weak limit $F$. To see this, let $W_n$ be an increasing sequence of finite volumes exhausting $\mathbb{Z}^d$. If $B$ is a finite set of edges, [3, Corollary 4.3] implies that $\mu^{(0)}_{W_n}(B \subseteq F_{W_n})$ is increasing in $n$. This is sufficient to imply the existence of a limit $\mu^{(0)}$ independent of the sequence $W_n$, and the limit is uniquely determined by the conditions
\[
\mu^{(0)}(B \subseteq F) = \lim_{n \to \infty} \mu^{(0)}_{W_n}(B \subseteq F_{W_n}),
\]
as $B$ varies over finite edge-sets (see the discussion in [3, Section 5]). This proves part (i) of Proposition 7.11. When $d \ge 3$, the configuration under $\mu^{(0)}$ can again be constructed by Wilson's method directly, by [3, Theorem 5.1]. Since $0$ is part of the boundary, the simple random walks in this construction are either killed when they hit the component growing at $0$, or they escape to infinity.
8.3 Finiteness of $T$

For part (ii) of Proposition 7.11, we prove
\[
\lim_{N \to \infty} \mu^{(0)}\bigl( T \cup \{0\} \subseteq V_N \bigr) = 1. \tag{8.1}
\]
The proof of (8.1) is based on a coupling of $\mu^{(0)}$ and $\mu$ that arises from applying Wilson's algorithm with the same random walks in the two cases and with a suitable common enumeration of sites.

Let $1 \le M < N$. We define the event
\[
G(M, N) = \bigl\{ \mathrm{desc}(V_M; \mathcal{T}) \subseteq V_N \bigr\}.
\]
In other words, $G(M, N)$ is the event that there exists a connected set $V_M \subseteq D \subseteq V_N$, such that there is no directed edge of $\mathcal{T}$ from $\mathbb{Z}^d \setminus D$ to $D$. By [3, Theorem 10.1], each component of the USF has one end, meaning that there are no two disjoint infinite paths within any component. This implies that for any $M \ge 1$,
\[
\lim_{N \to \infty} \mu\bigl( G(M, N) \bigr) = 1. \tag{8.2}
\]
We enumerate the sites in the following way. Let $x_1, \ldots, x_l$ be an enumeration of the sites of $V_N \setminus V_M$. We let $x_{l+1}, x_{l+2}, \ldots$ list the rest of the sites arbitrarily. As before, $(S^{(i)}_n)_{n \ge 0}$ denotes the random walk started at $x_i$, common to both constructions. Let $T^{(i)}$ and $\widetilde{T}^{(i)}$ be the hitting times in the construction of $\mathcal{T}$ and $F$, respectively. Let $\sigma^{(i)}$ and $\pi^{(i)}$ denote the corresponding families of loop-erased paths in the two constructions. The two families are determined by the same random walks, and we denote by $\Pr$ the probability law that governs both of them.

Consider the construction of $\mathcal{T}$, and condition on $G(M, N)$. In terms of the paths $\sigma^{(i)}$, the conditioning can be written as
\[
G(M, N) = \Bigl\{ \Bigl( \bigcup_{i=1}^{l} \sigma^{(i)} \Bigr) \cap V_M = \emptyset \Bigr\}.
\]
The right hand side is in fact an implicit condition on the random walks $S^{(i)}$. We claim that if we further condition on the paths $\sigma^{(i)}$, $1 \le i \le l$, then we have
\[
\Pr\bigl( \pi^{(i)} \ne \sigma^{(i)} \text{ for some } 1 \le i \le l \,\big|\, \sigma^{(i)}, 1 \le i \le l \bigr) \le \frac{C}{M^{d-4}}, \tag{8.3}
\]
for some constant $C$, uniformly in the $\sigma^{(i)}$. Equivalently, we show
\[
\Pr\bigl( S^{(i)}_n = 0 \text{ for some } 0 \le n \le T^{(i)}, 1 \le i \le l \,\big|\, \sigma^{(i)}, 1 \le i \le l \bigr) \le \frac{C}{M^{d-4}}. \tag{8.4}
\]
To see that (8.3) and (8.4) are indeed equivalent, note that if $S^{(i)}[0, T^{(i)}]$ does not hit $0$ for $1 \le i \le l$, then we have $\widetilde{T}^{(i)} = T^{(i)}$ and $\pi^{(i)} = \sigma^{(i)}$ for $1 \le i \le l$. On the other hand, if $j$ is the smallest index such that $S^{(j)}[0, T^{(j)}]$ hits $0$, then $\widetilde{T}^{(j)} < T^{(j)}$ and $\pi^{(j)} \ne \sigma^{(j)}$.

In order to show (8.4), we first fix $1 \le j \le l$ and prove a bound on
\[
\Pr\bigl( S^{(j)}_n = 0 \text{ for some } 0 \le n \le T^{(j)} \,\big|\, \sigma^{(i)}, 1 \le i \le l \bigr). \tag{8.5}
\]
By the definition of the $\sigma^{(i)}$, the expression in (8.5) in fact equals
\[
\Pr\bigl( S^{(j)}_n = 0 \text{ for some } 0 \le n \le T^{(j)} \,\big|\, \sigma^{(i)}, 1 \le i \le j \bigr). \tag{8.6}
\]
We analyze (8.6) using a description of the conditional distribution of a random walk given its loop-erasure (see [13]). This requires a few definitions. For $D \subseteq \mathbb{Z}^d$ and $y, z \in D \cup \partial D$, let $\mathcal{P}(D, y, z)$ denote the collection of all paths $\pi = [\pi_0, \ldots, \pi_s]$ such that $\pi_0 = y$, $\pi_s = z$ and $\pi[0, s) \subseteq D$. For $y, z \in D \cup \partial D$ let $G_D(y, z)$ be the Green function for simple random walk started at $y$ and killed at its first exit time $T_D$ from $D$. We have
\[
G_D(y, z) = E_y \sum_{0 \le n \le T_D} I[S_n = z] = \sum_{\pi \in \mathcal{P}(D, y, z)} (2d)^{-|\pi|}.
\]
In the last expression, $|\pi|$ denotes the number of steps in the path $\pi$. We also define the escape probability
\[
\mathrm{Es}_D(y, B) = P_y\bigl( S(n) \notin B, \ 1 \le n < T_D \bigr).
\]
Let $A \subseteq \mathbb{Z}^d$, $x \in \mathbb{Z}^d$ and let $\sigma = [\sigma_0, \ldots, \sigma_m]$ be a self-avoiding path with $\sigma_0 = x$ and $\sigma[0, m) \subseteq A$. Let $S[0, \infty)$ denote simple random walk started at $x$. Let $\mathrm{LE}_m$ denote the operation of creating the first $m$ steps of the loop-erased path, defined when there are at least $m$ steps in the loop-erasure. It is simple to deduce that for any $m$-step self-avoiding path $\sigma$,
\[
\Pr\bigl( \mathrm{LE}_m(S[0, T_A]) = \sigma \bigr)
= (2d)^{-|\sigma|} \Bigl( \prod_{p=0}^{|\sigma|} G_{A \setminus \sigma[0, p-1]}(\sigma_p, \sigma_p) \Bigr) \mathrm{Es}_A\bigl( \sigma_m, \sigma[0, m] \bigr). \tag{8.7}
\]
To see this, observe that one can decompose the random walk path starting at $x$ and ending in $A^c$ into its loop-erasure $\sigma$, a family of loops $\pi_p \in \mathcal{P}(A \setminus \sigma[0, p-1], \sigma_p, \sigma_p)$, and the portion from the endpoint of $\sigma$ to $A^c$ (if any). Summing over the possible loops attached at every vertex $\sigma_p$, $0 \le p \le |\sigma|$, gives (8.7).
A small modification of (8.7) gives an expression for the probability that the loop at $\sigma_p$ visits $0$, when the loop-erasure is $\sigma$. Define
\[
\widehat{G}_D(y, z) = \sum_{\substack{\pi \in \mathcal{P}(D, y, z) \\ \pi \text{ visits } 0}} (2d)^{-|\pi|}.
\]
Then
\[
\begin{split}
&\Pr\bigl( \mathrm{LE}_m(S[0, T_A]) = \sigma \text{ and the loop at } \sigma_p \text{ visits } 0 \bigr) \\
&\qquad = (2d)^{-|\sigma|} \, \widehat{G}_{A \setminus \sigma[0, p-1]}(\sigma_p, \sigma_p)
\Bigl( \prod_{q \ne p} G_{A \setminus \sigma[0, q-1]}(\sigma_q, \sigma_q) \Bigr) \mathrm{Es}_A\bigl( \sigma_m, \sigma[0, m] \bigr).
\end{split} \tag{8.8}
\]
Let $T_\sigma$ denote the last time that $S(n)$ visits $\sigma[0, m-1]$. Then equations (8.7) and (8.8) imply
\[
\Pr\bigl( S[0, T_\sigma] \text{ visits } 0 \,\big|\, \mathrm{LE}_m(S[0, T_A]) = \sigma \bigr)
\le \sum_{p=0}^{m-1} \frac{\widehat{G}_{A \setminus \sigma[0, p-1]}(\sigma_p, \sigma_p)}{G_{A \setminus \sigma[0, p-1]}(\sigma_p, \sigma_p)}. \tag{8.9}
\]
We analyze the right hand side of (8.9) further. First note that $G_D(y, y) \ge 1$, due to the contribution of the $0$-step walk. We also have
\[
\widehat{G}_D(y, y)
= \sum_{\pi_1 \in \mathcal{P}(D, y, 0)} (2d)^{-|\pi_1|}
\sum_{\substack{\pi_2 \in \mathcal{P}(D, 0, y) \\ \pi_2 \text{ does not return to } 0}} (2d)^{-|\pi_2|}
\le \sum_{\pi_1 \in \mathcal{P}(D, y, 0)} (2d)^{-|\pi_1|}
\sum_{\pi_2 \in \mathcal{P}(D, 0, y)} (2d)^{-|\pi_2|}
= G_D(y, 0) \, G_D(0, y) \le G(y)^2,
\]
where $G(y)$ is the Green function in $\mathbb{Z}^d$. This yields
\[
\Pr\bigl( S[0, T_\sigma] \text{ visits } 0 \,\big|\, \mathrm{LE}_m(S[0, T_A]) = \sigma \bigr) \le \sum_{p=0}^{m-1} G(\sigma_p)^2. \tag{8.10}
\]
From this we obtain
\[
\Pr\bigl( S[0, T_A] \text{ visits } 0 \,\big|\, \mathrm{LE}(S[0, T_A]) = \sigma \bigr) \le \sum_{0 \le p < |\sigma|} G(\sigma_p)^2 \tag{8.11}
\]
for any finite or infinite self-avoiding path $\sigma$. When $\sigma$ is finite, this follows from the case $\sigma_m \in \partial A$, when $T_A = T_\sigma + 1$, and when $\sigma$ is infinite, we let $m \to \infty$. Applying (8.11) with $A = \mathbb{Z}^d \setminus \bigcup_{1 \le i < j} \sigma^{(i)}$, the expression in (8.6) is at most
\[
\sum_{y \in \overline{\sigma}^{(j)}} G(y)^2, \tag{8.12}
\]
where $\overline{\sigma}^{(j)}$ denotes the set of vertices of the path $\sigma^{(j)}$, excluding the last one if $\sigma^{(j)}$ is finite. Summing this over $1 \le j \le l$, and using that the $\sigma^{(j)}$ are disjoint and $\sigma^{(j)} \cap V_M = \emptyset$, we obtain that the left hand side of (8.4) is at most
\[
\sum_{j=1}^{l} \sum_{y \in \overline{\sigma}^{(j)}} G(y)^2 \le \sum_{y \in \mathbb{Z}^d \setminus V_M} G(y)^2. \tag{8.13}
\]
Using $d > 4$ and the well-known fact $G(y) \le C |y|^{2-d}$ [12, Theorem 1.5.4], we obtain the claim in (8.4).
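The last step uses that $G(y)^2 \le C |y|^{2(2-d)}$ is summable outside $V_M$ precisely when $d > 4$, with a tail of order $M^{4-d}$. A back-of-the-envelope numeric check of this decay rate (our own illustration, replacing the lattice sum by radial shells of size of order $r^{d-1}$ and ignoring constants):

```python
def tail_sum(d, M, R=10**5):
    """Shell approximation to sum over |y| > M of |y|^(2(2-d)) on Z^d:
    the shell {|y| = r} contributes about r^(d-1) points, each of
    size r^(2(2-d))."""
    return sum(r ** (d - 1 + 2 * (2 - d)) for r in range(M + 1, R))

# For d = 5 the summand is r^(-2), so the tail decays like M^(4-d) = 1/M;
# rescaled by M, the values should be roughly constant in M:
ratios = [tail_sum(5, M) * M for M in (50, 100, 200)]
# For d = 4 the summand is 1/r and the sum diverges logarithmically,
# consistent with d = 4 being the critical dimension.
```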
Now we can complete the proof of Proposition 7.11 (ii). Observe that on the event
\[
G(M, N) \cap \bigl\{ \pi^{(i)} = \sigma^{(i)}, \ 1 \le i \le l \bigr\}
\]
we have $T \cup \{0\} \subseteq V_N$. Therefore, by (8.3),
\[
\Pr\bigl( T \cup \{0\} \subseteq V_N \,\big|\, G(M, N) \bigr) \ge 1 - \frac{C}{M^{d-4}}. \tag{8.14}
\]
Choosing first $M$ large and then $N$ large, (8.2) and (8.14) imply (8.1), which completes the proof.
9 Tail triviality of $\nu$

In Section 10 we are going to need the $d > 4$ part of the following theorem.

Theorem 9.1. The measure $\nu$ is tail trivial for any $d \ge 2$.

Proof. [Case $2 \le d \le 4$] The proof is based on the fact that the uniform spanning forest measure is tail trivial [3, Theorem 8.3]. Let $\mathcal{A} \subseteq \{0, 1\}^{E_d}$ denote the set of spanning trees of $\mathbb{Z}^d$ with one end. Recall the uniform spanning forest measure $\mu$ from Section 8. It was shown by Pemantle [19] that when $2 \le d \le 4$, the measure $\mu$ is concentrated on $\mathcal{A}$.

It is shown in [1] that there is a mapping $\varphi : \mathcal{A} \to \Omega$ such that $\nu$ is the image of $\mu$ under $\varphi$. Moreover, $\varphi$ has the following property. Let $T_x = T_x(\omega)$ denote the tree consisting of all ancestors of $x$ and its $2d$ neighbours in $\omega$. In other words, $T_x$ is the union of the paths leading from $x$ and its neighbours to infinity. It follows from the results in [1] that $\eta_x = (\varphi(\omega))_x$ is a function of $T_x$ alone.

Assume that $f(\eta)$ is a bounded tail measurable function. Then for any $n$, $f$ is a function of $\{\eta_x : |x| \ge n\}$ only. This means that $f(\eta) = f(\varphi(\omega)) = g(\omega)$ is a function of the family $\{T_x(\omega) : |x| \ge n\}$. Let $\mathcal{F}_k = \sigma\bigl( \omega_e : e \cap [-k, k]^d = \emptyset \bigr)$. For $1 \le k < n$ consider the event
\[
E_{n,k} = \bigcap_{x : |x| \ge n} \bigl\{ T_x \cap [-k, k]^d = \emptyset \bigr\}.
\]
Observe that $E_{n,k} \in \mathcal{F}_k$, and $g \, I[E_{n,k}]$ is $\mathcal{F}_k$-measurable. Using that $\omega$ has a single end $\mu$-a.s., it is not hard to check that for any $k \ge 1$,
\[
\lim_{n \to \infty} \mu(E_{n,k}) = 1.
\]
Letting $n \to \infty$, this implies that there is an $\mathcal{F}_k$-measurable function $g_k$ such that $g = g_k$ $\mu$-a.s. Since this holds for any $k \ge 1$, tail triviality of $\mu$ implies that $g$ is constant $\mu$-a.s. Letting $c$ denote the constant, this implies
\[
\nu\bigl( f(\eta) = c \bigr) = \mu\bigl( f(\varphi(\omega)) = c \bigr) = 1,
\]
which completes the proof in the case $2 \le d \le 4$.
[Case $d > 4$] The above simple proof does not work when $d > 4$, due to the fact that there is no coding of the sandpile configuration in terms of the USF in infinite volume. Nevertheless, it turns out that a coding is possible by adding extra randomness to the USF, namely, a random ordering of its components. Due to the presence of this random ordering, however, we have not been able to deduce tail triviality of $\nu$ directly from tail triviality of $\mu$.

We start by recalling results from [1]. Let $\mathcal{A}$ denote the set of spanning forests of $\mathbb{Z}^d$ with infinitely many components, where each component is infinite and has a single end. The USF measure $\mu$ is concentrated on $\mathcal{A}$ [3]. Given $x \in \mathbb{Z}^d$ and $\omega \in \mathcal{A}$, let $T^{(1)}_x(\omega), \ldots, T^{(k)}_x(\omega)$ denote the trees consisting of all ancestors of $x$ and its $2d$ neighbours in $\omega$. Here $k = k_x(\omega) \ge 1$. Each $T^{(i)}_x$ is a union of infinite paths starting at $x$ or a neighbour of $x$, and has a unique vertex $v^{(i)}_x$ that is the first point common to all paths. Let $F^{(i)}_x(\omega)$ denote the tree consisting of all descendants of $v^{(i)}_x$ in $\omega$.

Let $\mathcal{T}$ denote the collection of finite rooted trees in $\mathbb{Z}^d$. Let $\Sigma_l$ denote the set of permutations of the symbols $1, \ldots, l$.

It follows from the proofs of Lemma 3 and Theorem 1 in [1] that the sandpile height at $x$ is a function of $\{(F^{(i)}_x(\omega), v^{(i)}_x(\omega))\}_{i=1}^{k}$ and a random $\rho_x \in \Sigma_k$, in the following sense. There are functions $\psi_l$ defined on $\mathcal{T}^l \times \Sigma_l$, $l = 1, 2, \ldots$, such that if $\rho_x$ is a uniform random element of $\Sigma_k$, given $\omega$, then
\[
\eta_x = \psi_{k_x}\bigl( (F^{(1)}_x, v^{(1)}_x), \ldots, (F^{(k)}_x, v^{(k)}_x), \rho_x \bigr) \tag{9.2}
\]
has the distribution of the height variable at $x$ under $\nu$. Here it is convenient to think of $\rho_x$ as a random ordering of those components of $\omega$ that contain at least one neighbour of $x$. Then one can also view $\eta_x$ as a function of $\{T^{(i)}_x\}_{i=1}^{k}$ and $\rho_x$.

Next we turn to a description of the joint distribution of $\{\eta_x\}_{x \in A_0}$ for $A_0 \subseteq \mathbb{Z}^d$ finite. Let $A$ denote the set of those sites that are either in $A_0$ or have a neighbour in $A_0$. Let $\mathcal{C}^{(1)}, \ldots, \mathcal{C}^{(K)}$ denote the components of the USF intersecting $A$, with $K = K_A(\omega)$. Each $\mathcal{C}^{(i)}$ contains a unique vertex $v^{(i)}_A$ where the paths from $A \cap \mathcal{C}^{(i)}$ to infinity first meet. Let $F^{(i)}_A$ denote the tree consisting of all descendants of $v^{(i)}_A$. Each rooted tree $(F^{(j)}_x, v^{(j)}_x)$, $x \in A_0$, $1 \le j \le k_x$, is a subtree of some $F^{(i)}_A$, $1 \le i \le K$, and the former are determined by the latter. Let $\rho_A \in \Sigma_K$ be uniformly distributed, given $\omega$. For each $x \in A_0$, $\rho_A$ induces a permutation in $\Sigma_{k_x}$, by restriction. It follows from the results in [1] that the height configuration in $A_0$ is a function of $\{(F^{(i)}_A, v^{(i)}_A)\}_{i=1}^{K}$ and $\rho_A$. Moreover, the joint distribution of $\{\eta_x\}_{x \in A_0}$ is the one induced by $\rho_A$.
From the above we obtain the following description of $\nu$ in terms of the USF and a random ordering of its components. Let $\omega \in \mathcal{A}$ be distributed according to $\mu$. Given $\omega$, we define a random partial ordering $\preceq$ on $\mathbb{Z}^d$ in the following way. Let $\mathcal{C}^{(1)}, \mathcal{C}^{(2)}, \ldots$ be an enumeration of the components of $\omega$, and let $U_1, U_2, \ldots$ be i.i.d. random variables, given $\omega$, having the uniform distribution on $[0, 1]$. Define the random partial order $\preceq$ depending on $\omega$ and $\{U_i\}_{i \ge 1}$ by letting $x \preceq y$ if and only if $x \in \mathcal{C}^{(i)}$, $y \in \mathcal{C}^{(j)}$ and $U_i < U_j$. Even though $\preceq$ is defined for sites, it is simply an ordering of the components of $\omega$. The distribution of $\preceq$ is in fact uniquely characterized by the property that it induces the uniform permutation on any finite set of components, and one could define it by this property, without reference to the $U$'s. This in turn shows that the distribution is independent of the ordering $\mathcal{C}^{(1)}, \mathcal{C}^{(2)}, \ldots$ initially chosen.
Let $Q = \{0, 1\}^{\mathbb{Z}^d \times \mathbb{Z}^d}$ denote the space of binary relations (where for $q \in Q$ we interpret $q(x, y) = 1$ as $x \preceq y$, and $q(x, y) = 0$ otherwise). We denote the joint law of $(\omega, \preceq)$ on $\mathcal{A} \times Q$ by $\overline{\mu}$. From the couple $(\omega, \preceq)$, we can recover the random permutations $\rho_x$ as follows. If $v^{(1)}_x, \ldots, v^{(k)}_x$ are as defined earlier, then
\[
\bigl( \rho_x(1), \ldots, \rho_x(k) \bigr) = (j_1, \ldots, j_k)
\]
if and only if
\[
v^{(j_1)}_x \preceq \cdots \preceq v^{(j_k)}_x.
\]
The sandpile height configuration is a function of the couple $(\omega, \preceq)$. Using the above $\rho_x$ in (9.2) gives $\{\eta_x\}_{x \in \mathbb{Z}^d}$ with distribution $\nu$. In other words, there is a $\overline{\mu}$-a.s. defined function $\Phi : \mathcal{A} \times Q \to \Omega$ such that $\nu$ is the image of $\overline{\mu}$ under $\Phi$.
Before we start the argument proper, we need to recall some further terminology from [1]. Given finite rooted trees $(\overline{F}, \overline{v}) = \{(F_i, v_i)\}_{i=1}^{K}$ and a finite set of sites $A$, define the events
\[
D(\overline{v}) = \bigl\{ v_1, \ldots, v_K \text{ are in distinct components of } \omega \bigr\},
\]
\[
B(\overline{F}, \overline{v}) = D(\overline{v}) \cap \bigl\{ F^{(i)}_A = F_i, \ v^{(i)}_A = v_i \text{ for } 1 \le i \le K \bigr\}.
\]
For $\Lambda \subseteq \mathbb{Z}^d$ finite we also define
\[
D_\Lambda(\overline{v}) = \bigl\{ v_1, \ldots, v_K \text{ are in distinct components of } \omega_\Lambda \bigr\},
\]
\[
B_\Lambda(\overline{F}, \overline{v}) = D_\Lambda(\overline{v}) \cap \bigl\{ F^{(i)}_A = F_i, \ v^{(i)}_A = v_i \text{ for } 1 \le i \le K \bigr\},
\]
where $\omega_\Lambda$ is the wired UST in $\Lambda$.
Recall that tail triviality is equivalent to the following [7, page 120]. For any cylinder event $E'$ and $\epsilon > 0$ there exists $n$ such that (with $V_n = [-n, n]^d \cap \mathbb{Z}^d$) for any event $R' \in \mathcal{F}_{V_n^c}$ we have
\[
\bigl| \nu(E' \cap R') - \nu(E') \, \nu(R') \bigr| \le \epsilon. \tag{9.3}
\]
Fix $E'$ and $\epsilon$, and for the moment also fix $n$ and $R'$. Let $E = \Phi^{-1}(E')$ and $R = \Phi^{-1}(R')$. Let $A_0$ denote the set of sites on which $E'$ depends, and let $A$ be the set of sites that are either in $A_0$ or have a neighbour in $A_0$.
We first have a closer look at the event $E$. We define
\[
S(\overline{F}, \overline{v}, \pi) = B(\overline{F}, \overline{v}) \cap \bigl\{ v_{\pi(1)} \preceq \cdots \preceq v_{\pi(K)} \bigr\},
\]
\[
\mathcal{G}_E = \bigl\{ (\overline{F}, \overline{v}, \pi) : S(\overline{F}, \overline{v}, \pi) \subseteq E \bigr\},
\]
\[
\mathcal{G}_E(r) = \bigl\{ (\overline{F}, \overline{v}, \pi) \in \mathcal{G}_E : F_i \subseteq V_r \text{ for } 1 \le i \le K \bigr\}.
\]
Here $E$ is a disjoint union of the events $S(\overline{F}, \overline{v}, \pi)$ over $(\overline{F}, \overline{v}, \pi) \in \mathcal{G}_E$. By the definition of $\Phi$ we have
\[
\nu(E') = \overline{\mu}(E) = \sum_{(\overline{F}, \overline{v}, \pi) \in \mathcal{G}_E} \frac{1}{K!} \, \mu\bigl( B(\overline{F}, \overline{v}) \bigr).
\]
We also define an analogue of $S$ in a finite volume $\Lambda$. Assume that the relation $\preceq$ is prescribed on the exterior boundary of $\Lambda$. For any realization of the wired UST $\omega_\Lambda$ there is a unique extension of $\preceq$ into $\Lambda$, denoted $\preceq_\Lambda$, where $x \preceq_\Lambda y$ if and only if they are connected (in $\omega_\Lambda$) to boundary vertices $w(x)$ and $w(y)$ satisfying $w(x) \preceq w(y)$. Using this extension, we define
\[
S_\Lambda(\overline{F}, \overline{v}, \pi) = B_\Lambda(\overline{F}, \overline{v}) \cap \bigl\{ v_{\pi(1)} \preceq_\Lambda \cdots \preceq_\Lambda v_{\pi(K)} \bigr\}.
\]
We let $\mu_{\Lambda, \preceq}$ denote the law of $(\omega_\Lambda, \preceq_\Lambda)$ with boundary condition $\preceq$.
Introduce
\[
G = G(r) = \bigl\{ F^{(i)}_A \subseteq V_r \text{ for } 1 \le i \le K \bigr\},
\]
where we assume that $A_0 \subseteq V_r \subseteq V_n$. Now $E \cap G$ is a disjoint union of the events $S(\overline{F}, \overline{v}, \pi)$ over $(\overline{F}, \overline{v}, \pi) \in \mathcal{G}_E(r)$. Since $A_0$ is fixed, we can choose $r$ large enough so that $\overline{\mu}(G(r)^c) \le \epsilon$.

Turning to $R$, we define
\[
H = H_n = \bigcup_{x \in V_n^c} \bigcup_{i=1}^{k_x} \bigl( \text{vertex set of } T^{(i)}_x \bigr),
\qquad
T = T_n = \mathbb{Z}^d \setminus H_n.
\]
The occurrence of $R$ is determined by the collection of edges joining vertices in $H$, together with the restriction of $\preceq$ to $H$. We also introduce, for $r < m < n$ and $V_m \subseteq \Lambda \subseteq V_n$, the events
\[
F_\Lambda = \{ T_n = \Lambda \}
\qquad \text{and} \qquad
F = F(n, m) = \{ H_n \cap V_m = \emptyset \}.
\]
In other words, $F$ is the event that the portion of $\omega$ determining the sandpile configuration in $V_n^c$ does not intersect $V_m$. The value of $m$ will be chosen later. It is easy to see that for fixed $m$ one can choose $n$ large enough so that $\overline{\mu}(F^c) \le \epsilon$. This is because $F(n, m)$ is monotone increasing in $n$, and $\bigcap_{n=m+1}^{\infty} F(n, m)^c = \emptyset$, since each component of the USF has a single end.

In addition to $G$ and $F$ we need a third auxiliary event. Let
\[
J = \bigl\{ \text{for all } x, y \in V_r: \text{ if } x \sim y \text{ then they are connected inside } V_m \bigr\},
\]
where $x \sim y$ means that $x$ and $y$ are in the same component of the USF. Using again that each component of $\omega$ has one end, for large enough $m$ we have $\overline{\mu}(J^c) \le \epsilon_1$, where we have set $\epsilon_1 = \epsilon_1(r) = \epsilon / |\mathcal{G}_E(r)|$. Define the event
\[
J_0 = F \cap \bigl\{ \overline{\mu}\bigl( J^c \,\big|\, \mathcal{F}_{H_n} \bigr) \le \sqrt{\epsilon_1} \bigr\},
\]
where $\mathcal{F}_{H_n}$ denotes the $\sigma$-field generated by the configuration on the set of edges touching $H_n$. By Markov's inequality,
\[
\overline{\mu}(J_0^c)
\le \overline{\mu}(F^c) + \overline{\mu}\Bigl( F \cap \bigl\{ \overline{\mu}( J^c \mid \mathcal{F}_{H_n} ) > \sqrt{\epsilon_1} \bigr\} \Bigr)
\le \epsilon + \frac{\overline{\mu}(J^c)}{\sqrt{\epsilon_1}}
\le \epsilon + \sqrt{\epsilon_1}
\le 2\epsilon.
\]
Choosing $r$ large enough, then $m$ large enough and then $n$ large enough, we have
\[
\bigl| \nu(E') - \overline{\mu}(E \cap G \cap R \cap J_0) \bigr| \le 4\epsilon. \tag{9.4}
\]
Recall that we regard the edges of $\omega$ as being directed towards infinity. By the definition of $H$, there are no directed edges from $H$ to $T$. Therefore, given the restriction of $\omega$ to $H$, the conditional law of $\omega$ in $T$ is that of the wired uniform spanning tree in $T$ (denoted $\mu_T$). One can see this by using Wilson's method rooted at infinity to first generate the configuration on $H$, and then the configuration in $T$.

Note that the event $F_\Lambda$ only depends on the portion of $\omega$ outside $\Lambda$. We want to rewrite the second term on the left hand side of (9.4) by conditioning on $F_\Lambda$, the portion of $\omega$ outside $\Lambda$, and the restriction of $\preceq$ to $\mathbb{Z}^d \setminus \Lambda$. By the previous paragraph, the conditional distribution of $(\omega, \preceq)$ inside $\Lambda$ is given by $\mu_{\Lambda, \preceq}$, where $\preceq$ is determined by the conditioning.

The above implies
\[
\overline{\mu}(E \cap G \cap R \cap J_0) = \sum_{V_m \subseteq \Lambda \subseteq V_n} \int_{R \cap J_0 \cap F_\Lambda} \mu_{\Lambda, \preceq}(E \cap G) \, d\overline{\mu}. \tag{9.5}
\]
Next we further analyze
\[
\mu_{\Lambda, \preceq}(E \cap G) = \sum_{(\overline{F}, \overline{v}, \pi) \in \mathcal{G}_E(r)} \mu_{\Lambda, \preceq}\bigl( S_\Lambda(\overline{F}, \overline{v}, \pi) \bigr). \tag{9.6}
\]
Our aim is to show that the summand in (9.6) is close to $\mu(B(\overline{F}, \overline{v}))/K!$, uniformly in $\Lambda$ and the boundary condition, if $m$ is large enough. For this we turn to a description of the event $B_\Lambda(\overline{F}, \overline{v})$ in terms of Wilson's algorithm. This part of the proof is similar to the proof of Lemma 3 in [1], however it does not seem possible to use that result directly.
Fix $\{(F_i, v_i)\}_{i=1}^{K}$ and $\pi \in \Sigma_K$. Enumerate the vertices in $\bigcup_{i=1}^{K} F_i$, starting with $v_1, \ldots, v_K$ and followed by the rest of the vertices $y_1, y_2, \ldots$ in an arbitrary order. We apply Wilson's method with root at the wired vertex of $\Lambda$, and the above enumeration. Let $S^{(i)}$, $i = 1, \ldots, K$ be independent simple random walks started at $v_i$. Let $\sigma^{(i)}_\Lambda$ denote the loop-erasure of $S^{(i)}$ up to its first exit time from $\Lambda$. We define an event $C_\Lambda$ whose occurrence is equivalent to the occurrence of $B_\Lambda(\overline{F}, \overline{v})$, by Wilson's method. Since the event $D_\Lambda(\overline{v})$ has to occur, we require that for $i = 1, \ldots, K$, $S^{(i)}$ up to its first exit time be disjoint from $\bigcup_{1 \le j < i} \sigma^{(j)}_\Lambda$. In addition, the fact that $B_\Lambda(\overline{F}, \overline{v})$ has to occur gives conditions on the paths starting at $y_1, y_2, \ldots$, namely, these paths have to realize the events $\{(F^{(i)}_A, v^{(i)}_A) = (F_i, v_i)\}$, given the paths $\{\sigma^{(i)}_\Lambda\}_{i=1}^{K}$. These implicit conditions define $C_\Lambda$. We write $\Pr$ for probabilities associated with random walk events, and we couple the constructions in different volumes by using the same infinite random walks $S^{(i)}$. Analogously we define the $\Lambda = \mathbb{Z}^d$ version, $C$, which corresponds to $B(\overline{F}, \overline{v})$.

Let $W^{(i)}_\Lambda$ denote the vertex where $S^{(i)}$ exits $\Lambda$. Then we have
\[
\mu_{\Lambda, \preceq}\bigl( S_\Lambda(\overline{F}, \overline{v}, \pi) \bigr)
= \Pr\bigl( C_\Lambda, \ W^{(\pi(1))}_\Lambda \preceq \cdots \preceq W^{(\pi(K))}_\Lambda \bigr). \tag{9.7}
\]
For $r < l < m$ we consider the event $C_{V_l}$, and write $C_l$ for short. It is not hard to see that $\lim_{\Lambda} I[C_\Lambda] = I[C]$, $\Pr$-a.s., which implies that for $l$ large enough, $\Pr(C_l \,\triangle\, C) \le \epsilon_1$. The difference between the right hand side of (9.7) and
\[
\Pr\bigl( C_l, \ W^{(\pi(1))}_\Lambda \preceq \cdots \preceq W^{(\pi(K))}_\Lambda \bigr) \tag{9.8}
\]
is at most $2\epsilon_1$. Recall that $\Lambda \supseteq V_m$, and $m > l$. By conditioning on the first exit points from $V_l$, (9.8) can be written as
\[
\Pr(C_l) \cdot \Pr\bigl( W^{(\pi(1))}_\Lambda \preceq \cdots \preceq W^{(\pi(K))}_\Lambda \,\big|\, W^{(1)}_l, \ldots, W^{(K)}_l \bigr). \tag{9.9}
\]
The first factor here differs from $\Pr(C) = \mu(B(\overline{F}, \overline{v}))$ by at most $\epsilon_1$. If $m$ is large with respect to $l$, the value of the second factor is essentially independent of $\Lambda$. This is because, by a standard coupling argument, the distributions of $W^{(i)}_\Lambda$ and $W^{(j)}_\Lambda$ given $W^{(i)}_l$ and $W^{(j)}_l$ (respectively) can be made arbitrarily close in total variation distance. This implies that the difference between (9.9) and
\[
\Pr(C) \cdot \Pr\bigl( W^{(\pi(1))}_\Lambda \preceq \cdots \preceq W^{(\pi(K))}_\Lambda \,\big|\, W^{(1)}_l, \ldots, W^{(K)}_l \bigr)
\]
is at most $\epsilon_1$, if $m$ is large enough, uniformly in $\Lambda$.
Observe that if $W^{(i)}_\Lambda \sim W^{(j)}_\Lambda$ for some $1 \le i < j \le K$, then the event $J^c$ occurs. Since the integration in (9.5) is over a subset of $J_0$, we have
\[
\Pr\bigl( C_\Lambda, \ W^{(i)}_\Lambda \sim W^{(j)}_\Lambda \text{ for some } 1 \le i < j \le K \bigr)
\le \mu_{\Lambda, \preceq}(J^c) \le \sqrt{\epsilon_1}. \tag{9.10}
\]
It follows that for some universal constant $C$, if $m$ is large enough,
\[
\Bigl| \Pr\bigl( C_\Lambda, \ W^{(\pi(1))}_\Lambda \preceq \cdots \preceq W^{(\pi(K))}_\Lambda \bigr) - \Pr(C)/K! \Bigr| \le C \sqrt{\epsilon_1}.
\]
Now an application of (9.7) and (9.6) implies
\[
\bigl| \mu_{\Lambda, \preceq}(E \cap G) - \overline{\mu}(E \cap G) \bigr| \le C \epsilon,
\]
uniformly in $\Lambda$. Therefore, the difference between the right hand side of (9.5) and
\[
\overline{\mu}(E \cap G) \sum_{V_m \subseteq \Lambda \subseteq V_n} \overline{\mu}(R \cap J_0 \cap F_\Lambda) = \overline{\mu}(E \cap G) \, \overline{\mu}(R \cap J_0)
\]
is at most $C \epsilon$.

Using the choice of $r$ and the choice of $n$ again, we get
\[
\bigl| \nu(E' \cap R') - \nu(E') \, \nu(R') \bigr| \le C \epsilon,
\]
proving the claim in the case $d > 4$.
10 Ergodicity of the stationary process

Having arrived at this point, we can apply the results in [14], and we obtain the following.

Theorem 10.1. Let $\varphi : \mathbb{Z}^d \to (0, \infty)$ be an addition rate such that
\[
\sum_{x} \varphi(x) \, G(0, x) < \infty. \tag{10.2}
\]
Then the following hold.

1. The closure of the operator on $L^2(\nu)$ defined on local functions by
\[
L_\varphi f = \sum_{x} \varphi(x) (a_x - I) f \tag{10.3}
\]
is the generator of a stationary Markov process $\{\eta_t : t \ge 0\}$.

2. If $\varphi$ satisfies (10.2), then let $N^\varphi_t(x)$ denote Poisson processes with rate $\varphi(x)$ that are independent (for different $x$). The limit
\[
\eta_t = \lim_{V \uparrow \mathbb{Z}^d} \Bigl( \prod_{x \in V} a_x^{N^\varphi_t(x)} \Bigr) \eta \tag{10.4}
\]
exists a.s. with respect to the product of the Poisson process measures on the $N^\varphi_t$ with the stationary measure $\nu$ on the $\eta$. Moreover, $\eta_t$ is a càdlàg version of the process with generator $L_\varphi$.
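The operators $a_x$ add a grain at $x$ and stabilize by topplings. In finite volume, Dhar's abelian property [4] is what makes products such as the one in (10.4) well defined regardless of the order of the factors. A minimal sketch checking this commutativity directly (our own illustration, in a 2-dimensional box with open boundary for brevity, while the theorem concerns $d \ge 5$):

```python
import random

def stabilize(h, n):
    """Topple every site with at least 2d = 4 grains; grains falling
    off the n x n box are lost. Returns the stable configuration."""
    h = dict(h)
    unstable = [x for x in h if h[x] >= 4]
    while unstable:
        x = unstable.pop()
        if h[x] < 4:
            continue
        h[x] -= 4
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            y = (x[0] + dx, x[1] + dy)
            if y in h:
                h[y] += 1
                if h[y] >= 4:
                    unstable.append(y)
        if h[x] >= 4:
            unstable.append(x)
    return h

def a(x, h, n):
    """Finite-volume addition operator a_x: add one grain at x,
    then stabilize."""
    h = dict(h)
    h[x] += 1
    return stabilize(h, n)

n = 5
rng = random.Random(7)
sites = [(i, j) for i in range(n) for j in range(n)]
eta = {s: rng.randrange(4) for s in sites}  # a stable configuration
```

Whatever the toppling order, `a(x, a(y, eta, n), n)` agrees with `a(y, a(x, eta, n), n)`; this is the abelian property underlying the group $G$ generated by the $a_x$ used below.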
Let $\{\eta_t : t \ge 0\}$ be the stationary process with generator $L = \sum_x \varphi(x)(a_x - I)$. We recall that a process is called ergodic if every (time-)shift invariant measurable set has measure zero or one. For a Markov process, this is equivalent to the following: if $S_t f = f$ for all $t > 0$, then $f$ is constant $\nu$-a.s. This in turn is equivalent to the statement that $Lf = 0$ implies $f$ is constant $\nu$-a.s. The tail $\sigma$-field on $\Omega$ is defined as usual:
\[
\mathcal{T} = \bigcap_{n \in \mathbb{N}} \sigma\bigl( \{ \eta(x) : |x| \ge n \} \bigr). \tag{10.5}
\]
A function $f$ is tail measurable if its value does not change by changing the configuration in a finite number of sites, i.e., if
\[
f(\eta) = f(\zeta_V \eta_{V^c})
\]
for every $\zeta$ and $V \subseteq \mathbb{Z}^d$ finite.
Theorem 10.6. The stationary process of Theorem 10.1 is mixing.

Proof. Recall that $G$ denotes the group generated by the unitary operators $a_x$ on $L^2(\nu)$. Consider the following statements.

1. The process $\{\eta_t : t \ge 0\}$ is ergodic.
2. The process $\{\eta_t : t \ge 0\}$ is mixing.
3. Any $G$-invariant function is $\nu$-a.s. constant.
4. $\nu$ is tail trivial.

Then we have the following implications: 1, 2 and 3 are equivalent, and 4 implies 3. This will complete the proof, because 4 holds by Theorem 9.1.

It is easy to see that on $L^2(\nu)$,
\[
L^{*} = \sum_{x} \varphi(x) \bigl( a_x^{-1} - I \bigr). \tag{10.7}
\]
Hence $L$ and $L^{*}$ commute, i.e., $L$ is a normal operator. The equivalence of 1 and 2 then follows immediately, see [22]. To see the equivalence of 1 and 3: suppose $Lf = 0$; then, using invariance of $\nu$ under $a_x$,
\[
\langle Lf \mid f \rangle = -\frac{1}{2} \sum_{x} \varphi(x) \int (a_x f - f)^2 \, d\nu = 0. \tag{10.8}
\]
Similarly, $Lf = 0$ implies $L^{*} f = 0$, hence
\[
\langle L^{*} f \mid f \rangle = -\frac{1}{2} \sum_{x} \varphi(x) \int (a_x^{-1} f - f)^2 \, d\nu = 0, \tag{10.9}
\]
which shows the invariance of $f$ under $a_x$ and $a_x^{-1}$, and thus under the action of $G$.
Finally, to prove the implication 4 $\Rightarrow$ 3, we will show that a function invariant under the action of $G$ is tail measurable. Suppose $f : \mathcal{R} \to \mathbb{R}$, and $f(a_x \eta) = f(\eta) = f(a_x^{-1} \eta)$ for all $x$. If $\eta$ and $\zeta$ are elements of $\mathcal{R}$ that differ in a finite number of coordinates, then
\[
\zeta = \prod_{x} a_x^{\zeta(x) - \eta(x)} \, \eta, \tag{10.10}
\]
and hence $f(\eta) = f(\zeta)$. This proves that $f$ is tail measurable.
A Appendix

In this section we indicate how to extend the argument of [1] in the case $d > 4$ and prove $\lim_{\Lambda \uparrow \mathbb{Z}^d} \nu_\Lambda = \nu$. This boils down to showing that (18) and (19) in [1] (referred to as (18)[1], etc. below) hold with the limit taken through arbitrary volumes. Most of the argument in [1] has been carried out for general volumes, and we only mention the differences. We use the notation introduced in Section 9.
We start with the extension of (18)[1]. Let $x, y \in \mathbb{Z}^d$ be fixed, and let $S^{(1)}$ and $S^{(2)}$ be independent simple random walks starting at $x$ and $y$, respectively. Let $T^{(1)}_\Lambda$ and $T^{(2)}_\Lambda$ be the first exit times from $\Lambda$ for these random walks. The required extension of (18)[1] follows from an extension of (27)[1], which in turn follows from the statement
\[
\lim_{\delta \downarrow 0} \limsup_{\Lambda \uparrow \mathbb{Z}^d}
\Pr\Bigl( 1 - \delta \le \frac{T^{(1)}_\Lambda}{T^{(2)}_\Lambda} \le 1 + \delta \Bigr) = 0. \tag{A.1}
\]
Statement (A.1) is proved in [11].
For the extension of (19)[1], we recall from Section 9 the events $B_\Lambda(\overline{F}, \overline{v})$ and $B(\overline{F}, \overline{v})$ defined for a collection $\{(F_i, v_i)\}_{i=1}^{K}$. Let $S^{(i)}$, $i = 1, \ldots, K$ be independent random walks started at $v_i$, respectively. Let $T^{(i)}_\Lambda$ be the exit time of $S^{(i)}$ from $\Lambda$. Also recall the random walk events $C_\Lambda$ and $C$, and that $C_m$ and $T^{(i)}_m$ are short for $C_\Lambda$ and $T^{(i)}_\Lambda$ when $\Lambda = [-m, m]^d \cap \mathbb{Z}^d$. By the arguments in [1], the required extension of (19)[1] follows once we show an extension of (32)[1], namely that for any permutation $\pi \in \Sigma_K$,
\[
\lim_{m \to \infty} \lim_{\Lambda \uparrow \mathbb{Z}^d}
\Pr\bigl( C_m, \ T^{(\pi(1))}_\Lambda < \cdots < T^{(\pi(K))}_\Lambda \bigr) = \Pr(C) \, \frac{1}{K!}. \tag{A.2}
\]
Observe that $C_m$ and the collection $\widetilde{T}^{(i)}_{\Lambda, m} = T^{(i)}_\Lambda - T^{(i)}_m$, $i = 1, \ldots, K$, are conditionally independent, given $\{ S^{(i)}(T^{(i)}_m) \}_{i=1}^{K}$. Therefore, using (A.1), the left hand side of (A.2) equals
\[
\lim_{m \to \infty} \lim_{\Lambda \uparrow \mathbb{Z}^d}
\Pr(C_m) \, \Pr\bigl( \widetilde{T}^{(\pi(1))}_{\Lambda, m} < \cdots < \widetilde{T}^{(\pi(K))}_{\Lambda, m} \bigr). \tag{A.3}
\]
By a standard coupling argument, the second probability approaches $1/K!$ for any fixed $m$, and hence the limit in (A.3) equals $\Pr(C)/K!$. This completes the proof of the required extension of (19)[1].
Acknowledgements. The work of AAJ was carried out at the Centrum voor
Wiskunde en Informatica, Amsterdam, The Netherlands.
References

[1] Athreya, S.R. and Járai, A.A.: Infinite volume limit for the stationary distribution of Abelian sandpile models. Commun. Math. Phys. 249, 197–213 (2004).

[2] Bak, P., Tang, C. and Wiesenfeld, K.: Self-organized criticality. Phys. Rev. A 38, 364–374 (1988).

[3] Benjamini, I., Lyons, R., Peres, Y. and Schramm, O.: Uniform spanning forests. Ann. Probab. 29, 1–65 (2001).

[4] Dhar, D.: Self-organized critical state of sandpile automaton models. Phys. Rev. Lett. 64, 1613–1616 (1990).

[5] Dhar, D.: The Abelian sandpile and related models. Physica A 263, 4–25 (1999).

[6] Gabrielov, A.: Asymmetric Abelian avalanches and sandpiles. Preprint (1994).

[7] Georgii, H.-O.: Gibbs measures and phase transitions. de Gruyter, Berlin (1988).

[8] Ivashkevich, E.V., Ktitarev, D.V. and Priezzhev, V.B.: Critical exponents for boundary avalanches in two-dimensional Abelian sandpile. J. Phys. A 27, L585–L590 (1994).

[9] Ivashkevich, E.V., Ktitarev, D.V. and Priezzhev, V.B.: Waves of topplings in an Abelian sandpile. Physica A 209, 347–360 (1994).

[10] Ivashkevich, E.V. and Priezzhev, V.B.: Introduction to the sandpile model. Preprint (2003) http://arxiv.org/abs/cond-mat/9801182.

[11] Járai, A.A. and Kesten, H.: A bound for the distribution of the hitting time of arbitrary sets by random walk. Preprint (2004).

[12] Lawler, G.F.: Intersections of random walks. Birkhäuser, softcover edition (1996).

[13] Lawler, G.F.: Loop-erased random walk. In: Perplexing problems in probability, Progress in Probability, Vol. 44, Birkhäuser, Boston (1999).

[14] Maes, C., Redig, F. and Saada, E.: The Abelian sandpile model on an infinite tree. Ann. Probab. 30, 2081–2107 (2002).

[15] Maes, C., Redig, F. and Saada, E.: The infinite volume limit of dissipative Abelian sandpiles. Comm. Math. Phys. 244, 395–417 (2004).

[16] Maes, C., Redig, F., Saada, E. and Van Moffaert, A.: On the thermodynamic limit for a one-dimensional sandpile process. Markov Process. Related Fields 6, 1–22 (2000).

[17] Majumdar, S.N. and Dhar, D.: Equivalence between the Abelian sandpile model and the q → 0 limit of the Potts model. Physica A 185, 129–145 (1992).

[18] Meester, R., Redig, F. and Znamenski, D.: The Abelian sandpile: a mathematical introduction. Markov Process. Related Fields 6, 1–22 (2000).

[19] Pemantle, R.: Choosing a spanning tree for the integer lattice uniformly. Ann. Probab. 19, 1559–1574 (1991).

[20] Priezzhev, V.B.: Structure of two-dimensional sandpile. I. Height probabilities. J. Stat. Phys. 74, 955–979 (1994).

[21] Priezzhev, V.B.: The upper critical dimension of the Abelian sandpile model. J. Stat. Phys. 98, 667–684 (2000).

[22] Rosenblatt, M.: Transition probability operators. In: Proc. Fifth Berkeley Symposium Math. Stat. Prob., Vol. 2, 473–483 (1967).

[23] Wilson, D.B.: Generating random spanning trees more quickly than the cover time. In: Proceedings of the Twenty-Eighth ACM Symposium on the Theory of Computing, 296–303. ACM, New York (1996).