
Biol. Rev. (2001) 76, pp. 161–209. Printed in the United Kingdom. © Cambridge Philosophical Society
Scale invariance in biology: coincidence or
footprint of a universal mechanism?
T. GISIGER*
Groupe de Physique des Particules, Université de Montréal, C.P. 6128, succ. centre-ville, Montréal, Québec, Canada, H3C 3J7
(e-mail: gisiger@pasteur.fr)
(Received 4 October 1999; revised 14 July 2000; accepted 24 July 2000)
ABSTRACT
In this article, we present a self-contained review of recent work on complex biological systems which exhibit
no characteristic scale. This property can manifest itself with fractals (spatial scale invariance), flicker noise or 1/f-noise where f denotes the frequency of a signal (temporal scale invariance) and power laws (scale invariance in the size and duration of events in the dynamics of the system). A hypothesis recently put forward to explain these scale-free phenomena is criticality, a notion introduced by physicists while studying phase transitions in materials, where systems spontaneously arrange themselves in an unstable manner similar, for instance, to a row of dominoes. Here, we review in a critical manner work which investigates to what extent this idea can be generalized to biology. More precisely, we start with a brief introduction to the concepts of absence of characteristic scale (power-law distributions, fractals and 1/f-noise) and of critical phenomena. We then review typical mathematical models exhibiting such properties: edge of chaos, cellular automata and self-organized critical models. These notions are then brought together to see to what extent they can account for the scale invariance observed in ecology, evolution of species, type III epidemics and some aspects of the central nervous system. This article also discusses how the notion of scale invariance can give important insights into the workings of biological systems.
Key words: Scale invariance, complex systems, models, criticality, fractals, chaos, ecology, evolution, epidemics, neurobiology.
CONTENTS
I. Introduction ............................................................................................................................ 162
II. Power laws and scale invariance ............................................................................................. 165
(1) Definition and property of power laws ............................................................................. 165
(2) Fractals in space................................................................................................................ 166
(3) Fractals in time: 1/f-noise ................................................................................................ 167
(a) Power spectrum of signals........................................................................................... 168
(b) Hurst's rescaled range analysis method ...................................................................... 169
(c) Iterated function system method ................................................................................ 170
(4) Power laws in physics: phase transitions and universality ................................................ 171
(a) Critical systems, critical exponents and fractals.......................................................... 171
(b) Universality ................................................................................................................ 173
III. Generalities on models and their properties ............................................................................ 174
(1) Generalities ....................................................................................................................... 174
(2) Chaos: iterative maps and differential equations.............................................................. 175
* Present address: Unité de Neurobiologie Moléculaire, Institut Pasteur, 25 rue du Dr Roux, 75724 Paris, Cedex 15, France.
(3) Discrete systems in space: cellular automata and percolation .......................................... 178
(4) Self-organized criticality: the sandpile paradigm.............................................................. 179
(5) Limitations of the complex systems paradigm .................................................................. 182
IV. Complexity in ecology and evolution ...................................................................................... 182
(1) Ecology and population behaviour ................................................................................... 182
(a) Red, blue and pink noises in ecology ......................................................................... 182
(b) Ecosystems as critical systems ..................................................................................... 184
(2) Evolution........................................................................................................................... 185
(a) Self-similarity and power laws in fossil data............................................................... 186
(b) Remarks on evolution and modelling......................................................................... 190
(c) Critical models of evolution........................................................................................ 191
(d) Self-organized critical models ..................................................................................... 193
(e) Non-critical model of mass extinction......................................................................... 195
V. Dynamics of epidemics ............................................................................................................ 196
(1) Power laws in type III epidemics ..................................................................................... 196
(2) Disease epidemics modelling with critical models ............................................................. 197
VI. Scale invariance in neurobiology............................................................................................. 199
(1) Communication: music and language .............................................................................. 199
(2) 1/f-noise in cognition........................................................................................................ 201
(3) Scale invariance in the activity of neural networks .......................................................... 202
VII. Conclusion ............................................................................................................................... 204
VIII. Acknowledgements .................................................................................................................. 205
IX. References................................................................................................................................ 206
I. INTRODUCTION
Biology has come a long way since the days when,
because of a lack of experimental means, it could be
considered as a soft science. Indeed, with the
recent progress of molecular biology and genetic
engineering, extremely detailed knowledge has been
acquired about the mechanisms of living beings at
the molecular scale. Hard facts about the chemical
composition and function of ionic channels, enzymes,
neurotransmitters and neuroreceptors, and genes, to
name a few, are now routinely gathered using
powerful new methods and techniques. Parallel to
these experimental achievements, theoretical work
has been undertaken to test hypotheses using
mathematical models, which subsequently suggest
new theories and experiments. Such a dialogue
between theory and experiment, already present in
physics and chemistry, is now becoming common in
life sciences.
However, after taking this path, one is sooner or
later confronted with the reality that knowing the
elementary parts making up a system and the way
that they interact together, is not always sufficient to
understand the global behaviour of the system. This
fact is already being more and more recognized by
physicists about their own field. Indeed, after a very lengthy programme, particle physics has now explored matter down to infinitesimal scales (less than 10^(-18) m). We now know, at least at the energies
accessible to current experiments, that matter is
made of very small elementary particles, called
quarks. The way quarks interact together to form
protons and neutrons, the interaction between these
latter particles to constitute nuclei, and finally how
they are bound together with electrons into atoms is
also relatively well known. In fact, particle physicists
claim that only four forces are necessary to bind and
hold the universe together. However, this is not the
whole story. Already in the 19th century it became
clear that knowing the interactions between two
bodies was not enough to understand or solve
completely the dynamics of a group of such bodies
put together. The classical example is ordinary
Newtonian gravity. It is possible to solve exactly,
and therefore understand fully, the equations of
motion of two bodies orbiting around one another,
like the earth around the sun. However, when a
third body is introduced, the system is no longer
solvable, except perhaps numerically. Furthermore,
even numerical solution of the problem cannot
completely account for the behaviour of the system
since the system is extremely sensitive to initial
conditions. This means that any error in the positions
and velocities of the bodies at some initial time will
be amplified over time and will corrupt the solution.
(This is similar to the so-called 'butterfly effect' which renders impossible any long-term weather forecasting; see Lorenz, 1963). In other words, the
three-body problem can be studied, but it is much
harder to understand than the two-body problem.
Consequently, even if physicists knew how all the
particles in the universe interact with each other,
they still could not explain why, after the big bang,
matter has chosen to settle into a complex structure
with galaxies, stars, planets and life, instead of just
becoming a random-looking gas or a crystal. In a
nutshell, knowing how different parts of a system
work and interact together does not necessarily
explain how the whole system functions.
In the case of biology, it is not impossible that in
a not too distant future, man might be able to build
a computer capable of simulating the dynamics of a
system with approximately 10^4 types of components, roughly the number of proteins forming a bacterium for instance. However, this machine would probably add very little to the understanding of why the bacterium behaves as a living organism, or of life itself.
Similarly, even if we understood exactly how neurons
work and interact with each other, a purely
numerical approach would not solve the problem of
how the brain thinks. What might be more useful is
a better understanding of the emergent properties of
systems once the interactions between their parts are
known. Such work is already under way, and it deals
with what is called complexity (Parisi, 1993;
Ruthen, 1993; see also Nicolis & Prigogine, 1989
from which much of this discussion is reproduced).
Complexity is a difficult term to define exactly.
Here, we will only hint at its meaning in the
following way. Let us consider a system made of a
large number of constituents which interact with
each other in a simple way. What then will be the
behaviour of the system as a whole? If the system
represents molecules in a standard chemical reaction,
then the outcome of the dynamics will probably be
a chemical equilibrium of some sort, where reactant
and product concentrations are constant in time. If
instead we are considering a gas in a vessel at some
temperature, the dynamics will settle in a molecular
chaos where atoms bounce around the vessel in an
uncoordinated, erratic way. These two typical
behaviours are called 'simple' because little information is needed to describe them. They are not
interesting and as such are not considered complex.
Let us now consider the case of a fluid at rest between two parallel plates upon which is imposed a temperature gradient ΔT > 0: the lower plate is heated but not the upper plate. For low values of ΔT, a shallow density gradient establishes itself by heat conduction and dilatation of the liquid, but no convection occurs because of the fluid's viscosity. However, for ΔT larger than some critical value ΔT_c, convection motion sets in as convection rolls, called Bénard cells, with their axis parallel to the plates and a diameter of approximately 1 mm. The rotating cells allow cool liquid to sink towards the lower plate, at the contact of which it heats up, while the less dense warm fluid rises and cools down (this is very similar to the convection motion of air in a cloud). Structures, the Bénard cells, have spontaneously appeared, created by the dynamics of the system to help the fluid dissipate the energy poured into it as heat. This is a first example of a complex system, as a fair amount of information is needed to describe it (shape, size, rotational direction, number of cells formed, etc.). Another classical example is
the Belousov–Zhabotinski (BZ) (Belousov, 1959; Zhabotinski, 1964) chemical reaction of Ce2(SO4)3 with CH2(COOH)2 and KBrO3, all dissolved in
sulphuric acid. While constantly driven out of
equilibrium by addition of reactants and stirring,
complex and beautiful structures appear in various
regimes (clock-like oscillations, target patterns, spiral
waves, multiarmed spirals, etc.). These patterns
require even more information to be described, and
as such are regarded as more complex.
In the spirit of the theory of complex systems, we
should try not to look at these examples as physical
processes or reactions between chemical reactants,
but instead as systems made of many particles, or
agents , which interact with each other via certain
rules. This way, we can generalize what we know to
other systems and vice versa. A good example is the
case of a population of amoebas Dictyostelium discoi-
deum. Under ordinary conditions, the population
acts as a 'gas', with each amoeba living and feeding on
its own while ignoring the others. However, when
subject to starvation, the colony aggregates into a
plasmodium and forms a single entity with a new
dynamics of its own (pluricellular body). A closer
inspection of the mechanisms regulating this ag-
gregation (mainly the release of chemical messen-
gers) shows that certain phases of the phenomenon
are in fact similar to that of the BZ reaction, and
can be described using the same vocabulary (see
Nicolis & Prigogine, 1989 for details). This shows
first that, in some situations, the nature of the constituents of a system is important only insofar as it affects the interactions between them. Also, it hints
at how much the theory of complex systems can
enrich our understanding of systems in biology,
physics and chemistry, to mention only a few.
However, a review of all complex systems would take
us much too far. We will therefore only concentrate
in this review on a particular class of complex
Fig. 1. The Gutenberg–Richter law: the frequency per year D(s) of the magnitude s of earthquakes follows a straight line on a log–log scale and can be fitted by the power law D(s) = 1.6592×10^3 s^(−0.86) (continuous line). The data shown here were recorded between 1974 and 1983 in the south-eastern United States. (Axes: earthquake magnitude s; number D(s) of earthquakes per year.) Reproduced from Bak (1996). See also Gutenberg (1949). [This figure, as well as all others which present experimental results, was reproduced using a computer graphic package from data published in the literature. The source of the data is mentioned in the figure's caption. The other plots were obtained by the author using numerical simulations of the models presented in this article.]
systems: those which are scale independent (Bak,
1996).
A classical example of such systems in physics is
the earth's crust (Gutenberg, 1949; Gutenberg &
Richter, 1956; see also Turcotte, 1992). It is a well-
established fact that a photograph of a geological
feature, such as a rock or a landscape, is useless if it
does not include an object that defines the scale: a
coin, a person, trees, buildings, etc. This fact, which
has been known to geologists since long before it came to the attention of researchers from other fields, is described as
scale invariance: a geological feature stays roughly
the same as we look at it at larger or smaller scales.
In other words, there are no patterns there that the
eye can identify as having a typical size. The same
patterns roughly repeat themselves on a whole range
of scales. For this reason, such objects are sometimes
called self-similar or fractals , as we will see in more
detail in the next section (Mandelbrot, 1977, 1983).
It is usually believed that landscapes, coastlines, and the rest of the earth's crust are scale-invariant
because the dynamics of the processes which shaped
them, such as erosion and sedimentation, are also
scale-invariant. One line of evidence possibly sup-
porting this hypothesis is the Gutenberg–Richter law
represented on Fig. 1, which shows a plot of the
distribution of earthquakes per year D(s) as a
function of their magnitude s. This empirical law
states that the data follow a power-law distribution
D(s) = 1.6592×10^3 s^(−0.86). As we will see, power-law
distributions, unlike Gaussian distributions for in-
stance, have the particularity of not singling out any
particular value. So, the fact that the distribution of
earthquakes follows such a law indicates that
earthquake phenomena are scale-invariant: there is
no typical size for an earthquake. Smaller ones are
just (much) more probable than larger ones. Also,
the fact that all earthquakes, from the very small
(similar to a truck passing by) to the very large
(which can wipe out entire cities) obey the same
distribution, is a strong indication that they are all
produced by the same dynamics. Therefore, to
understand earthquakes, one should not exclusively
study the large events while neglecting the smaller
ones. It can also be shown that the distribution of
waiting time between earthquakes follows a power
law similar to that of Gutenberg–Richter: there
appears to be no typical waiting time between two
consecutive earthquakes. This could contradict
claims of finding periodicity in earthquake records.
We arrive then at the following conclusions. Since
the earth's crust has not yet settled into a completely
random or equilibrium state, it is a complex system.
Further, it is a scale-invariant complex system: it
does not exhibit any characteristic scales of length,
time or size of events. Any theory or model trying to
describe geological systems will have to reproduce
these power-law distributions and fractal structures.
Significant progress along these lines has been made
recently by using models of critical systems. Indeed,
it has been known for quite some time that systems
become scale-invariant when they are put near a
phase transition (such as the critical point of the
vapour–liquid transition of water at the temperature T_c = 647 K and density ρ_c = 0.323 g cm^(−3), where
the states of vapour and liquid coexist at all scales):
they become critical (see Section II.4 for a short
introduction to critical phenomena). However, it is
only relatively recently that such ideas have been
generalized and extended to complex systems (Bak,
1996) such as the earth's crust (Sornette & Sornette,
1989).
During the last few decades, evidence for scale
invariance has appeared in several fields other than
physics, and biology is no exception. Fractal struc-
tures have been observed in bones, the circulatory
system and lungs, to name only a few. The
distribution of gaps in the vegetation of rain forests
follows a power law. There does not seem to be any
characteristic time scale in extinction events com-
piled from fossil data. All these findings may be suggestive of scale-free complex systems, and they raise the interesting question of the possible
existence of criticality in some biological systems.
Research on this subject is of a multidisciplinary
character, including ideas from biology, physics and
computer science and it is sometimes published in
non-biological journals. The aim of the present
review is to put together these developments into a
form available to non-specialists.
This paper is divided roughly into two parts. The
first (Sections II and III) deals with the math-
ematical aspects of scale-invariant complex systems,
and as such is rather on the mathematical side. I have tried to make this part as easy as possible for non-mathematicians to read, by avoiding unnecessary technicalities and details. The second part (Sections IV–VI) addresses the issue of scale invariance in biological systems, first introducing experimental evidence of scale-free behaviour and
then proposing models which account for it. More
specically, in Section II I present the concepts of
power laws, fractals and 1/f-noise. In Section III, I
review some typical models such as chaotic systems,
cellular automata and self-organized critical models.
I will focus here on the scale-free properties of these
models, as excellent reviews about the other aspects
of their dynamics are available in the literature. In
Section IV, I review evidence and possible interpre-
tations of scale-free dynamics in ecological systems
(Section IV.1) and evolution (Section IV.2). In
Section V, I present work done on the dynamics of
measles epidemics in small communities. I end this
review by discussing some evidence of scale-free
dynamics in the brain: communication (Section
VI.1), cognition (Section VI.2) and neural networks
(Section VI.3).
II. POWER LAWS AND SCALE INVARIANCE
This section defines some of the mathematical
notions which will be used throughout this article. I
begin by introducing in Section II.1 the concept of
power law and show how it diers from other more
familiar functions. I then present in Section II.2 the
notion of fractals, structures without characteristic
length scales, and of fractal dimensions which
characterize them. This is followed in Section II.3 by
the definition of flicker or 1/f-noise, signals with no
typical time scale, and which are therefore fractal in
time. I end this section with an introduction to critical phenomena, to the critical exponents which characterize critical systems, and to how they are related to the very powerful notion of universality. This final
section, though deeply rooted in physics, has far-
reaching implications regarding complex systems in
biology.
(1) Definition and property of power laws
Let us consider the following function:
g(x) = A x^α, (1)
where A and α are real and constant (α being smaller than zero) and x is a variable. For instance g(x)
could represent the distribution D(s) of the size s of
events in an experiment, or the power spectrum
P( f ) of a signal as a function of its frequency f. This
type of function is sometimes referred to as a power law because of the exponent α. By taking the log of both sides, one obtains:
log g(x) = log A + α log x. (2)
When plotted on a log–log scale, this type of function therefore gives a characteristic straight line of slope α, which intersects the ordinate axis at log A (see Fig. 1 for example). When trying to fit a power law to experimental data, as I will do often in this review, it is customary to first take the log of the measurements, and then to fit a straight line to it (by the least-squares method for instance). This method proves less susceptible to sampling errors.
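To make this fitting procedure concrete, here is a minimal Python sketch (the function and variable names are my own illustration, not code from the original article) that estimates A and α of equation (1) by a least-squares fit of a straight line to the log-transformed data:

import numpy as np

def fit_power_law(x, y):
    # Fit y = A * x**alpha by least squares on log-log axes.
    # Assumes x and y are positive arrays; returns (A, alpha).
    log_x, log_y = np.log10(x), np.log10(y)
    alpha, log_A = np.polyfit(log_x, log_y, 1)   # slope, intercept
    return 10 ** log_A, alpha

# Example: noisy samples of g(x) = 2 * x**(-0.86)
x = np.logspace(0, 3, 50)
y = 2.0 * x ** (-0.86) * np.random.lognormal(0.0, 0.1, x.size)
A, alpha = fit_power_law(x, y)
print(A, alpha)   # should come out close to 2 and -0.86

Working on the logarithms, as here, weights the different decades of x more evenly than a direct fit of the power law itself would.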
Power laws are interesting because they are scale-
invariant. We can demonstrate this fact by changing x for a new variable x′ defined by x = a x′, where a is some numerical constant. Substituting into equation (1), one gets:
g(ax′) = A (a x′)^α = (A a^α) x′^α. (3)
The general form of the function is then the same as before, i.e. a power law with exponent α. Only the constant of proportionality has changed, from A to A a^α. We can therefore 'zoom in' or 'zoom out' on the function by changing the value of a while its general shape stays the same. This is partly because no particular value of x is singled out by g(x), contrary to the exponential e^(−bx) or the Gaussian
e^(−(x−x_0)^2) distributions, which are localized near x = 0 and x = x_0, respectively (where b and x_0 are arbitrary positive constants). Also, by comparison, the power law g(x) decreases slowly from infinity to zero when x goes from zero to infinity. All these characteristics
give it the property of looking the same no matter
which scale is chosen: this is what is meant by the
scale invariance property of power laws.
In this article, we will be more interested in the
exponent α than in the proportionality constant A. We will therefore often write g(x) as
g(x) ∝ x^α, (4)
where ∝ means 'is proportional to'.
(2) Fractals in space
As we briefly mentioned in the introduction, the earth's crust is full of structures without any
characteristic scale. This is, in fact, a property shared
by many objects found in nature. It was, however,
only in the 1960s and 1970s, with the pioneering
work of Mandelbrot (1977, 1983), that this fact was
given the recognition it deserved. We refer the
reader to Mandelbrot's beautiful and, sometimes,
challenging books for an introduction to this new
way of seeing and thinking about the world around
us. Here, we will barely scratch the surface of this
very vast subject by focusing on the concept of
fractal dimension which will be useful to us later on.
Early in one of his books (Mandelbrot, 1983),
Mandelbrot asks the following, and now famous,
question: 'How long is the coast of Britain?' The
intuitive way to answer that query is to take a map
of Britain, a yardstick of a given length d, and to see
how many times n(d) it can be fitted around the perimeter. The estimation of the length L(d) is then d×n(d). If we repeat this procedure with a smaller
yardstick, we expect the length to increase a little
and finally, when the yardstick is small enough, to ultimately converge towards a fixed value: the true
length of the coast. This yardstick method, which is
in fact just ordinary triangulation, works well with
regular or Euclidean shapes such as a polygon or a
circle. However, as Mandelbrot (1983) noticed,
triangulation does not bring the expected results
when applied to computing the length of coasts and
land frontiers. As we reduce the size d of the
yardstick, more details of the seashore or frontier
must be taken into account, making n(d) increase
quickly. It does so fast enough that the length L(d) = d×n(d) keeps increasing as d diminishes. Fur-
Fig. 2. (A–D) Computation of the length of a circle of radius 1 using the box counting method. A square lattice of a given size d is laid over the curve. The number of squares N(d) necessary to cover the circle's perimeter is then counted and the length of the circle L(d) approximated as d×N(d). (A) d = 0.5, N(d) = 16 and L(d) = 8. (B) d = 0.25, N(d) = 28 and L(d) = 7. (C) d = 0.125, N(d) = 52 and L(d) = 6.5. (D) d = 0.0625, N(d) = 100 and L(d) = 6.25. The true value of the perimeter of the circle is of course L = 2π ≈ 6.28319. (E) Estimation using the yardstick method of the length L(d) of the coast of Australia (1), South Africa (3) and the west coast of Britain (5), as well as the land frontiers of Germany (4) and Portugal (6), as a function of the yardstick d used to make the evaluation (axes: yardstick length d in km; measured length L(d) in km, on logarithmic scales). The length of a circle (2) of radius 1500 km is also included for comparison. Reproduced from Mandelbrot (1983).
thermore, in order to get a better estimate, one
should use a map of increasing resolution, where
previously absent bays and subbays, peninsulas and subpeninsulas now appear. Taking into account these
new features will also increase the length L(d) even
more.
Instead of using triangulation, one can rely on a similar and roughly equivalent method called box counting (Mandelbrot, 1977, 1983). It is more practical and significantly simpler to implement on a
computer. One superimposes a square grid of size d
on the curve under investigation, and counts the
minimum number N(d) of squares necessary to cover
it. The length of the curve L(d) is then approximated
by d×N(d). Fig. 2A–D illustrates the procedure for a circle of radius 1. The estimate is at first a little off,
mostly because of the curvature of the circle.
However, as d gets smaller this is quickly taken into
account and the measured length converges toward
the true value L = 2π ≈ 6.28319. Applying these
methods to other curves like the coast of Britain gives
different results. Fig. 2E shows the variation of L(d) as a function of d for different coastlines and land frontiers. As can be seen, the data for each curve follow a straight line over several orders of magnitude
of d. This suggests the power-law parametrization of
L(d) (Mandelbrot, 1977, 1983):
L(d) ∝ d^(1−D), (5)
where D is some real parameter to be fitted to the data.
As expected the perimeter of the circle quickly converges to a value, and stays there for any smaller values of d. For this part of the curve L(d), D = 1 (a horizontal line) fits the data well. The same goes in Fig. 2E for the coast of South Africa (line 3). However, for all the other curves, L(d) follows a straight line with non-zero slope. For instance, in the case of the west coast of Britain, the line has slope ≈ −0.25, and therefore L(d) ∝ d^(−0.25) and D ≈ 1.25. Mandelbrot (1977, 1983) defined D as the number of dimensions (or box dimension when using the box counting method) of the curve. For the circle, D = 1 as L(d) is independent of d: we recover the intuitive facts that a circle is a curve of dimension 1, with a finite value of its perimeter. The same is also almost true for the coast of South Africa.
However, for the coast of Britain for instance, D is not an integer, which indicates that the curve under investigation is not Euclidean. Mandelbrot coined the term fractal to designate objects with a fractional, or non-integer, number of dimensions. Also, the data in Fig. 2E indicate that Britain possesses a coast with a huge length, which is best described as quasi-infinite: L(d) ∝ d^(−0.25) goes to infinity as d goes to zero. This is due to the fact that no matter how closely we look at it, the coastline possesses structures such as bays, peninsulas, subbays and subpeninsulas, which constantly add to its length. It also means that no matter what scale we use, we keep seeing roughly the same thing: bays and peninsulas featuring subbays and subpeninsulas, and so on. The coastline is therefore effectively scale-invariant. Of course, this scale invariance is not without bounds: there are no features on the coastline larger than Britain itself, and no subbay smaller than an atom. We also note that the more intricate a curve, the higher the value of its box dimension D: a curve which moves about so much as to completely fill an area of the plane will have a box dimension D = 2.
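The box counting procedure itself is easy to automate. The following Python sketch is my own illustration (the function name and the choice of test curve are assumptions, not material from the article): it counts occupied boxes at several sizes d and extracts the box dimension from the slope of log N(d) against log d.

import numpy as np

def box_dimension(points, sizes):
    # points: (N, 2) array of coordinates of a curve, here inside the unit square.
    # sizes: decreasing sequence of box sizes d.
    # Counts the number N(d) of occupied boxes and fits log N(d) = -D log d + const.
    counts = []
    for d in sizes:
        boxes = {tuple(idx) for idx in np.floor(points / d).astype(int)}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Example: points sampled along a circle, whose box dimension should be close to 1.
t = np.linspace(0, 2 * np.pi, 20000)
circle = 0.5 + 0.4 * np.column_stack([np.cos(t), np.sin(t)])
print(box_dimension(circle, sizes=[0.1, 0.05, 0.025, 0.0125]))

Applied to a digitized coastline instead of a circle, the same fit would return a non-integer dimension, such as the D ≈ 1.25 quoted above for the west coast of Britain.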
The concept of non-integer dimension might seem a little strange or artificial at first, but the geometrical content of the exponent D is not. It is a measure of the plane- (space-)filling properties of a curve (structure). It quantifies the fact that fractals with a finite surface may have a perimeter of (quasi-)infinite length. Similarly, it shows how a body with finite volume may have an infinite area. It is therefore not surprising to find fractal geometry in
the shape of cell membranes (Paumgartner, Losa &
Weibel, 1981), the lungs (McNamee, 1991) and the
cardiovascular system (Goldberger & West, 1987;
Goldberger, Rigney & West, 1990). Fractal ge-
ometry also helps explain observed allometric scaling
laws in biology (West, Brown & Enquist, 1997).
Box dimension is just one measure of the intricate-
ness of fractals. Other definitions of fractal dimension have been proposed in the literature (Mandelbrot,
1977, 1983; Falconer, 1985; Barnsley, 1988; Feder,
1988), some better than others at singling out certain
features, or spatial correlations. However, they do
not give much insight into how fractals come about
in nature. Some mathematical algorithms have been
proposed to construct such fractals as the Julia and
Mandelbrot sets or even fractal landscapes, but the
results usually lack the richness and depth of the true
fractals observed in the world around us.
(3) Fractals in time: 1/f-noise
In this subsection, we will introduce the notion of
flicker or 1/f-noise, which is considered one of the footprints of complexity. We will also see how it differs from white or Brownian noise using the spectral analysis method (Section II.3.a), Hurst's rescaled range analysis (Section II.3.b) and the
iterated function system method (Section II.3.c).
Let us consider a record in time of a given
quantity h of a system. h can be anything from a
temperature, a sound intensity, the number of species
in an ecosystem or the voltage at the surface of a
neuron. Such a record is obtained by measuring h at discrete times t_0, t_1, t_2, ..., t_N, giving a series of data {t_i, h(t_i)}, i = 1, ..., N. This time series, also called a signal or noise, can be visualized by plotting h(t) as
a function of t.
Fig. 3 shows three types of signals h(t) which will
be of interest to us (this subsection merely reproduces
the discussion from Press, 1978): white noise, flicker or 1/f-noise and Brownian noise. Fig. 3A represents
what is usually called white noise: a random
superposition of waves over a wide range of
frequencies. It can be interpreted as a completely
Fig. 3. Three examples of signals h(t) plotted as functions of time t: white noise (A), 1/f- or flicker noise (B) and Brownian noise (C).
uncorrelated signal: the value of h at some time t is
totally independent of its value at any other instant.
An example is the result of tossing a coin N
consecutive times and recording the outcome each
time. Fig. 3A shows an example of white noise that
was obtained using, instead of a coin, a random
number generator with a Gaussian distribution (see
for instance Press et al., 1988). This gives a signal
which stays most of the time close to zero, with rare and brief excursions to higher values.
Fig. 3C represents Brownian noise, so called
because it resembles the Brownian motion of a
particle in one dimension: h(t) is then the position of
the particle as a function of time. Brownian motion
of a particle in a fluid is created by the random impact of the liquid's molecules on the immersed particle, which gives the latter an erratic displacement. This can be reproduced by what is called a random walk as follows: the position h of the particle at time t+1 is obtained by adding to its previous position (at time t) a random number (usually drawn from a Gaussian distribution) representing the thermal effect of the fluid on the particle. The
signal h obtained is therefore strongly correlated in
time as the particle remembers well where it was a
few steps ago. We see that the curve wiggles less than
that of white noise, and that it makes large excursions
away from zero.
The curve in Fig. 3B is different from the first two
but it shares some of their characteristics. It has a
tendency towards large variations like the Brownian
motion, but it also exhibits high frequencies like
white noise. This type of signal seems then to lie
somewhere between the two, and is called flicker or 1/f-noise. It is this type of signal which will interest
us in the present review because it exhibits long
trends which can be interpreted as the presence of
memory, an interesting feature in biological systems
[the method presented in Press (1978) was used to
obtain the sample shown in Fig. 3B]. Such signals
have been observed in many phenomena in physics
(see Press, 1978 and references therein): light
emission intensity curves in quasars, conduction of
electronic devices, velocities of underwater sea
currents, and even the flow of sand in an hourglass (Schick & Verveen, 1974). It is also present in some
of the biological systems presented in this review, as
well as in other phenomena such as the cyclic insulin
needs of diabetics (Campbell & Jones, 1972), healthy
heart rate (Pilgram & Kaplan, 1999), Physarum
polycephalum streaming (Coggin & Pazun, 1996) and
in some aspects of rat behaviour (Kafetzopoulos,
Gouskos & Evangelou, 1997), to name only a few.
Like fractals, flicker noise can be produced math-
ematically in several ways [see Mandelbrot (1983)
and Press (1978) for instance] though these algo-
rithms do not really help understand how it comes
about in nature.
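For readers who wish to experiment with such signals, the following Python sketch (my own, not the author's; the 1/f sample is produced here by a simple spectral-filtering trick rather than by the algorithm of Press, 1978 used for Fig. 3) generates the three types of noise discussed above:

import numpy as np

rng = np.random.default_rng(0)
n = 4096

# White noise: independent Gaussian samples, with no memory at all.
white = rng.normal(size=n)

# Brownian noise: a random walk, i.e. the cumulative sum of white noise.
brownian = np.cumsum(rng.normal(size=n))

# Flicker (1/f) noise: white noise whose spectrum is reshaped so that
# P(f) ~ 1/f, then transformed back to the time domain.
spectrum = np.fft.rfft(rng.normal(size=n))
freqs = np.fft.rfftfreq(n)
freqs[0] = freqs[1]              # avoid dividing by zero at f = 0
flicker = np.fft.irfft(spectrum / np.sqrt(freqs), n)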
Next, we describe mathematical methods which
can distinguish flicker noise from random or Brow-
nian noise (for more details on these methods and an
example of their application to the investigation of
insect populations, see Miramontes & Rohani,
1998).
(a) Power spectrum of signals
The power spectrum P( f ) of a signal h(t) is defined
as the contribution of each frequency f to the signal
h(t). This is the mathematical equivalent of a
spectrometer analysis, which decomposes a light
beam into its components in order to evaluate their
relative importance (see for instance Press et al., 1988
for a definition of the power spectrum of a signal).
Fig. 4. Power spectra of the signals shown in Fig. 3: white (A), 1/f- (B) and Brownian noise (C). The lines fitted to the spectra have slopes β = 0.01, β = −1.31 and β = −1.9, respectively. (Axes: frequency f; power spectrum P( f ), on logarithmic scales.)
We present in Fig. 4 the power spectra of the signals
of Fig. 3.
By analogy with white light, which is a super-
position of light of every wavelength, white noise
should have a spectrum with equal power P( f ) at
every frequency f. This is indeed what Fig. 4A
shows. P( f ) can be expressed as:
P( f ) ∝ f^β, (6)
where β is the gradient of the line fitted to the spectrum in Fig. 4. We find β = 0.01, consistent with P( f ) ∝ f^0.
The power spectrum of a Brownian signal also
follows a straight line on a log–log plot with a slope equal to −2 (the line fitted to our data in Fig. 4C gives β = −1.9, in reasonable accord with P( f ) ∝ 1/f^2). The power spectrum of a signal gives a quantitative measure of the importance of each frequency. For the Brownian motion, P( f ) goes quickly to zero when f goes to infinity, illustrating why h(t) wiggles very little: the signal has a small
content in high frequencies. The large oscillations,
which correspond to low frequencies, constitute a
large part of the signal. Dominance of these low
frequencies can be viewed as the persistence of
information in the random walk mentioned earlier.
Flicker noise, or 1/f-noise, is defined by the power spectrum:
P( f ) ∝ 1/f, (7)
or more generally, as in equation (6) with β ∈ [−1.5, −0.5]. The line fitted to the data of Fig. 4B has a slope β = −1.31, well in the right range. The interest in flicker noise is motivated by its strong content in both small and large frequencies. Behaving roughly like 1/f, P( f ) diverges as f goes to zero, which suggests, as in the case of Brownian motion, long-time correlations (or memory) in the signal. But, in addition, P( f ) goes to zero very slowly as f becomes large, and the accumulated power stored in the high frequencies is actually infinite. Any spectrum with β roughly equal to −1 will have these two characteristics, which explains the somewhat loose definition of 1/f-noise. Flicker noise is therefore a signal with a power spectrum without any characteristic frequency or, equivalently, time scale: this is reminiscent of the notion of fractals, but in space-time instead of just space.
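In practice the exponent β is estimated numerically. A minimal Python sketch (my own, with hypothetical names; real analyses usually smooth the spectrum before fitting, which is omitted here) computes the power spectrum of a time series and fits its log–log slope:

import numpy as np

def spectral_slope(signal):
    # Power spectrum as the squared modulus of the discrete Fourier transform;
    # the slope beta of P(f) ~ f**beta is fitted on log-log axes,
    # skipping the zero-frequency component.
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size)
    beta, _ = np.polyfit(np.log10(freqs[1:]), np.log10(power[1:]), 1)
    return beta

# Expected values: beta ~ 0 for white noise, ~ -1 for flicker noise,
# ~ -2 for Brownian noise (compare Fig. 4).
print(spectral_slope(np.random.default_rng(1).normal(size=4096)))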
(b) Hurst's rescaled range analysis method
By the 1950s, H. E. Hurst (Hurst, 1951; Hurst,
Black & Samayka, 1965) had started the investi-
gation of long- and medium-range correlations in
time records. He made the study of the Nile and the management of its water his life's work. To help him build the perfect reservoir, one which would never overflow or go dry, he developed rescaled range analysis, a method, he found, that detects long-range correlations in time series (this observation was later put on more solid theoretical grounds by Mandelbrot: see Mandelbrot, 1983). Using this method, he
could measure the impact of past rainfall on the level
of water. For a very thorough review of this method,
see Feder (1988).
This method associates to a time series an
exponent H which takes its values between 0 and 1.
If H > 1/2, the signal h(t) is said to exhibit
persistence: if the signal has been increasing during
a period prior to time t, then it will show a
tendency to continue that trend for a period after
time t. The same is true if h(t) has been decreasing:
it will probably continue doing so. The signal will
then tend to make long excursions upwards or
downwards. Persistence therefore favours the pres-
ence of low frequencies in the signal. This is the case
for the random walk of Fig. 3C, for which we find H ≈ 0.96. As H goes towards 1, the signal becomes more and more monotonous. The 1/f-noise of Fig. 3B has a Hurst exponent H ≈ 0.88, which is a further indication of long-term correlations in the signal.
When H < 1/2, h(t) is said to exhibit antipersistence: whatever trend has been observed prior to a certain time t for a duration τ will have a tendency to be reversed during the following period τ. This suppresses long-term correlations and favours the presence of high frequencies in the power spectrum of the signal. The extreme case, where H ≈ 0, is when h(t) oscillates in a very dense manner.
The case in between persistence and antipersistence, H = 1/2, is when there is no correlation whatsoever in the signal. This is the case for the white noise of Fig. 3A, for which we find H ≈ 0.57, consistent with the theoretical value of H = 1/2.
It can be shown that the different signals h(t) which we have presented here (white, flicker and Brownian noises) are curves which fill the space [spanned by the time and h(t) axes] to a certain extent: they are in fact fractals and their box dimension can be expressed as a function of the
Hurst exponent H (see Mandelbrot, 1983 and Feder,
1988 for details).
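A bare-bones implementation of rescaled range analysis is given below in Python (again my own sketch, not the exact procedure used by Hurst or by the author; for a rigorous treatment see Feder, 1988):

import numpy as np

def hurst_rescaled_range(signal, window_sizes):
    # For each window length tau, the series is cut into segments; in each
    # segment the range R of the cumulative deviations from the segment mean
    # is divided by the segment's standard deviation S.  The average R/S
    # grows roughly as tau**H, which gives the Hurst exponent H.
    rs_means = []
    for tau in window_sizes:
        rs = []
        for start in range(0, signal.size - tau + 1, tau):
            segment = signal[start:start + tau]
            dev = np.cumsum(segment - segment.mean())
            s = segment.std()
            if s > 0:
                rs.append((dev.max() - dev.min()) / s)
        rs_means.append(np.mean(rs))
    H, _ = np.polyfit(np.log(window_sizes), np.log(rs_means), 1)
    return H

# Example: a random walk should give H close to 1, white noise H close to 0.5.
walk = np.cumsum(np.random.default_rng(2).normal(size=8192))
print(hurst_rescaled_range(walk, window_sizes=[32, 64, 128, 256, 512]))

Applied to the signals of Fig. 3, such a routine returns values comparable to those quoted above.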
(c) Iterated function system method
Another way of differentiating time series with long-range correlations from ordinary noise makes use of the iterated function systems (IFS) algorithm introduced by Barnsley (1988). This very ingenious procedure was first proposed by Jeffrey (1990) to extract correlations in sequences of genes from DNA strands.
The method associates to a time series a two-
dimensional trajectory, or more accurately a cloud
of points, which gives visual indications of the structure and correlations in the signal (see Jeffrey, 1990; Miramontes & Rohani, 1998 for details; and Mata-Toledo & Willis, 1997). The main strength of this method is that it does not require as many data points as the power spectrum or the rescaled
range analysis methods.
Fig. 5. Results of the iterated function system (IFS) method when applied to the time series shown in Fig. 3: white noise (A), flicker noise (B) and Brownian signal (C). Each panel shows the resulting cloud of points in the unit square.
Fig. 5 shows the trajectories associated with the
signals of Fig. 3. The method gives dramatically
different results for the three signals. The trajectory associated with white noise fills the unit square with dots without exhibiting any particular pattern (Fig. 5A). On the other hand, the Brownian motion gives a trajectory that spends most of its time near the edges of the square (Fig. 5C). This is due to the large, slow excursions of the signal, which produced the 1/f^2 dependence of the power spectrum: the
signal h(t) stays in the same size range for quite some
time before moving to another. This produces a
trajectory that aggregates near corners, sides and
diagonals. When the signal finally migrates to
another interval, it will produce a trajectory which
follows the edges or the diagonals of the square.
Things are quite different for the flicker noise (Fig.
5B). The pattern exhibits a complex structure which
repeats itself at several scales, and actually looks
fractal. The divergence of the power spectrum at low
frequencies makes the trajectory spend a lot of time
near the diagonals and the edges, similarly to the
case of the Brownian motion. However, enough high
frequencies are present to move the dot away from
the corners for short periods of time. The result is the
appearance of patterns away from the edges and the
diagonals. Their regular structure and scaling
properties make them easy to recognize visually even
in short time series.
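The IFS construction itself is only a few lines of code. The sketch below (Python, my own illustration; the use of quartile bins is a common choice but an assumption on my part, not a detail given in the text) maps a time series onto a cloud of points in the unit square:

import numpy as np

def ifs_cloud(signal):
    # Each value is classified into one of four bins (here the quartiles of
    # the series); each bin is assigned a corner of the unit square, and the
    # current point is moved halfway towards the corner of the bin visited
    # at each time step (the 'chaos game').
    corners = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    bins = np.quantile(signal, [0.25, 0.5, 0.75])
    symbols = np.digitize(signal, bins)        # values 0..3
    points = np.empty((signal.size, 2))
    point = np.array([0.5, 0.5])
    for i, s in enumerate(symbols):
        point = (point + corners[s]) / 2.0
        points[i] = point
    return points

# White noise fills the square roughly uniformly; flicker noise produces
# self-similar patterns of the kind shown in Fig. 5B.
cloud = ifs_cloud(np.random.default_rng(3).normal(size=5000))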
(4) Power laws in physics: phase transitions
and universality
In this section, I briefly introduce the notion of
critical phenomena. This is relevant to the general
subject of this review because systems at, or near,
their critical point exhibit power laws and fractal
structures. They also illustrate the very powerful
notion of universality, which is of great interest to the
study of complex systems in physics and biology.
Critical systems being an active and vast field of
research in physics, it is not the goal of the present
review to give it a complete introduction. I will only justify the following affirmations [being obviously rather technical, Sections II.4.a and II.4.b can be skipped on first reading]:
(1) Systems near a phase transition become
critical: they do not exhibit any characteristic length
scale and spontaneously organize themselves in
fractals (see Section II.4.a).
(2) Critical systems behave in a simple manner:
they obey a series of power laws with various
exponents, called critical exponents, which can be
measured experimentally (see Section II.4.a).
(3) Experiments during the 1970s and 1980s
showed that critical exponents of materials only
come with certain special values: a classification of substances can therefore be developed, where materials with identical exponents are grouped together in
classes. The principle claiming that all systems
undergoing phase transitions fall into one of a
limited set of classes is known as universality (see
Section II.4.b).
This will be sufficient for the needs of this review
where we will apply these concepts to biological
systems. For a more detailed account of critical
phenomena, the reader is referred to Wilson (1979)
and the very abundant literature on the subject (see
for instance Maris & Kadanoff, 1978; Le Bellac, 1988; Binney et al., 1992).
(a) Critical systems, critical exponents and fractals
Everybody is familiar with phase transitions such as
water turning into ice. This is an example of
discontinuous phase transition: matter suddenly goes
from a disordered state (water phase) to an organised
state (ice phase). The sudden change in the
arrangement of the water molecules is accompanied
by the release of latent heat by the system. Here, we
will be interested in somewhat different systems. They still make transitions between two different phases as the temperature changes, but they do so in a smooth and continuous manner (i.e. without releasing latent heat). These are called continuous phase transitions and they are of great experimental and theoretical interest. (Here, I use the terms 'discontinuous' and 'continuous' for phase transitions; physicists prefer the terms 'first order' and 'second order' phase transitions.)
The classical example of a system exhibiting a
continuous phase transition is a ferromagnetic
material such as iron. Water too can be brought to
a point where it goes through a continuous phase
transition: it is known as the critical point of water, characterized by the critical temperature T_c = 647 K and the critical density ρ_c = 0.323 g cm^(−3). There, the liquid and vapour states of water coexist, and in fact look alike. This makes the dynamics of the system difficult, even confusing, to describe with
words. Here, we will therefore use the paradigm of
the ferromagnetic material as an example of critical
system, i.e. of a system near a continuous phase
transition.
It was shown by P. Curie at the turn of the
century that a magnetized piece of iron loses its
magnetization when it is heated above the critical temperature T_c ≈ 1043 K. Similarly, if the same piece is cooled again to below T_c, it spontaneously becomes remagnetized. This is an example of a continuous phase transition because the magnetization of the system, which we represent by the vector M, varies smoothly as a function of temperature: M = 0 in the unmagnetized phase and it takes a non-zero value in the magnetized phase. The magnetization M of the sample, which is easily measured
Fig. 6. Illustration of a two-dimensional sample of material in the magnetized phase (T < T_c): the spins m of the atoms, represented by the small arrows, are all aligned and produce a non-zero net magnetization M ∝ Σ_sample m for the sample. T, temperature; T_c, critical temperature.
using a compass or more accurately a magnetometer,
therefore allows us to determine which phase the
piece of iron is in at a given instant.
To understand better the physics at work in the
system, we have to look at the microscopic level. The
sample is made of iron atoms arranged in a roughly
regular lattice. In each atom, there are electrons
spinning around a nucleus. This creates near each
atom a small magnetic field m, called a spin, which can be approximated roughly by a little magnet like a compass arrow (see Fig. 6 for an illustration). m is of fixed length but it can be oriented in any direction. The total magnetization of the material is proportional to the sum of the spins:
M ∝ Σ_sample m (8)
over all the atoms of the sample. Therefore, if all the spins point in the same direction, their magnetic effects add up and give the iron sample a non-zero magnetization M. If they point in random directions, they cancel each other and the sample has no magnetization (M = 0). This microscopic picture
can be used to explain what happens during the
phase transition. If we start with a magnetized block
of iron (all m pointing roughly in the same direction) and heat it up, the thermal agitation in the solid will disrupt the alignment of the spins, therefore lowering the magnetization of the sample. When T = T_c, the agitation is strong enough to completely destroy the alignment, and the total magnetization is zero at T_c and for any temperature larger than T_c. Similarly, when the sample is hot and is then cooled to below the critical temperature, the spins spontaneously align with each other. It can be shown that for T slightly smaller than T_c, the magnetization M follows a power-law function of the temperature:
|M| ∝ |T_c − T|^β, (9)
where β can be measured to be approximately 0.37 (if T is larger than or equal to T_c then M = 0). β therefore quantifies the behaviour of the magnetiz-
ation of the sample as a function of temperature: it
is called a critical exponent. It can be shown that
other measurable quantities of the system, such as
the correlation length defined below, obey similar power laws near the critical point (but with different critical exponents: besides β, five other exponents are necessary to describe systems near phase transitions) (Maris & Kadanoff, 1978; Le Bellac, 1988; Binney et al., 1992).
Let us consider the sample at a given temperature,
and measure the effect that flipping one spin has on the other spins of the system. If T < T_c, the majority of spins will be aligned in one direction. Flipping one spin will not influence the others because they will all be subject at the same time to the much larger magnetic field of the rest of the sample. If T > T_c, changing the orientation of one spin will modify only that of its neighbours, since the net magnetization of the material is zero. However, near the phase transition (T ≈ T_c), one spin flip can change the spins of all the others. This is because as we approach the critical point, the range of interactions between spins becomes infinite: every spin interacts with all other spins. This can be formalized by the correlation length, ξ, defined as the distance at which spins interact with each other. Near the phase transition, it follows the power law:
ξ ∝ |T − T_c|^(−ν), (10)
with ν ≈ 0.69 for iron; ξ therefore diverges to infinity as T goes to T_c: there is no characteristic
length scale in the system. Another way of under-
standing this phenomenon is that, as T is fine-tuned to T_c, the spins of the sample behave like a row of dominoes where the fall of one brings down all the others. Here also, the interaction of one domino extends effectively to the whole system. This seems to take place by the spins arranging themselves in a scale-free, i.e. fractal, way (see Fig. 7). Fractal structures have been confirmed both experimentally and theoretically: at T = T_c, the spins are arranged
Fig. 7. Spin disposition of a sample as simulated by the Ising model. Each black square represents a spin pointing up, and the white ones stand for a spin pointing down. (A) T < T_c: almost all the spins are pointing in the same direction, giving the sample non-zero magnetization (magnetized phase). (B) T = T_c: at the phase transition, the net magnetization M = 0 but the spins have arranged themselves in islands within islands of spins pointing in opposite directions, which is a fractal pattern. (C) T > T_c: the system has zero magnetization and only short-range correlations between spins exist (unmagnetized phase). T, temperature; T_c, critical temperature.
in islands of all sizes where all m point up, within
others where all point down, and so on at smaller
scales, with a net magnetization of zero [see Fig.
7 for a computer simulation (Gould & Tobochnik,
1988) of a particularly simple spin system: the Ising
model].
(b) Universality
During the 1970s and 1980s, there was a great deal
of experimental work performed to measure the
critical exponents of materials: polymers, metals, alloys, fluids, gases, etc. It was expected that to each
material would correspond a different set of exponents. However, experiments proved this supposition wrong. Instead materials, even those with no obvious similarities, seemed to group themselves into classes characterised by a single set of critical exponents. For instance, it can be shown that when taking into account experimental errors, one-dimensional samples of the gas Xe and the alloy β-brass have the same values for critical exponents (see Maris & Kadanoff, 1978; Le Bellac, 1988; Binney et al., 1992). This is also the case for the binary fluid mixture of methanol-hexane, trimethylpentane and
nitroethane. This gas, alloy and liquid mixture
therefore all fall into a class of substances labeled by
a single set of critical exponents. By contrast, a three-
dimensional sample of Fe does not belong to this
class. However, it has the same critical exponents as
Ni. They therefore both belong to another class.
Since critical exponents completely describe the
dynamics of a system near a continuous phase
transition, the fact that the classification mentioned above exists proves that arbitrary critical behaviour is not possible. Rather, only a limited number of behaviours exist in nature, which are said to be universal, and define disjoint classes called universality classes. The principle which states this classification is therefore called universality. The following theoretical explanation of this astonishing fact has been proposed (Wilson, 1979; see also Maris & Kadanoff, 1978; Le Bellac, 1988; Binney et al.,
1992): near a continuous phase transition, a given
system is not very sensitive to the nature of the
particles it is constituted of, or to the details of the
interactions which exist between them. Instead, it
depends on other, more fundamental, characteristics
of the system such as the number of dimensions of the
sample (see Wilson, 1979). It is a point of view which
fits well within the philosophy of complex systems we
mentioned in the introduction.
Universality has been described as a physicist's dream come true. Indeed, what it tells us is that a
system, whether it is a sample in a laboratory or a
mathematical model, is very insensitive to details of
its dynamics or structure near critical points. From a
theoretical point of view, to study a given physical
system, one only has to consider the simplest conceivable mathematical model in the
same universality class. It will then yield the same
critical exponents as the system under study. A
famous example is the Ising model proposed to
explain the ferromagnetic phase transition and
which we introduce now. It represents the spins of the iron atoms by a binary variable S which can either be equal to +1 (spin up) or to −1 (spin down). The
spins are distributed on a lattice and they interact only with their nearest neighbours. Even though this model simply represents spins as + or −, and does not allow for impurities, irregularities in the disposition of spins, vibrations, etc., it yields the right critical exponents.
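To give a concrete feel for how little detail such a minimal model needs, here is a short Python sketch of the two-dimensional Ising model (the Metropolis Monte Carlo update and the lattice size are standard textbook choices made for illustration, not details taken from this review; in these units the critical temperature is T_c ≈ 2.27, near which islands of aligned spins appear on all scales, as in Fig. 7):

    import numpy as np

    def metropolis_sweep(spins, T, rng=np.random.default_rng()):
        """One Monte Carlo sweep of the 2-D Ising model (J = 1, k_B = 1,
        periodic boundaries): flip each randomly chosen spin with the
        Metropolis acceptance probability min(1, exp(-dE/T))."""
        L = spins.shape[0]
        for _ in range(L * L):
            i, j = rng.integers(L), rng.integers(L)
            # Energy change if spin (i, j) is flipped, from its four neighbours.
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
                  spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2 * spins[i, j] * nb
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1
        return spins

    L, T = 64, 2.27                       # T close to the critical temperature
    spins = np.where(np.random.default_rng(0).random((L, L)) < 0.5, 1, -1)
    for _ in range(500):
        spins = metropolis_sweep(spins, T)
    print('magnetization per spin:', spins.mean())

Despite its crudeness, a simulation of this kind reproduces the qualitative picture of Fig. 7: islands within islands of aligned spins near T_c, and only short-range order away from it.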
Critical phenomena is a field where the intuitive
idea that the description of a system is dependent on
the amount of detail put into it does not hold. As
long as a system is known to be critical, the simplest
model (sometimes simple enough to be solved by
hand) will do. This approach is not restricted to
physical systems. In fact, most of the biological
systems presented in this review will be studied using
extremely simple, usually critical, models.
III. GENERALITIES ON MODELS AND THEIR
PROPERTIES
So far, we have briefly introduced the notion of complex systems and why they are of interest in science. We have also presented in detail the property of some systems which do not possess any characteristic scale, and how it can be observed: scale invariance in fractals (measurable by the box counting method), correlations on all time scales in 1/f-noise (diagnosed by the power spectrum of the signal, for instance), and power-law distributions of event size or duration. Finally, we have briefly described physical critical systems which exhibit scale-free behaviour naturally when one tunes the temperature near its critical value. As a bonus, we encountered the notion of universality, which tells us that critical systems may be accurately described by models which only roughly approximate the interactions between their constituents.
We therefore have the necessary tools to detect
scale invariance in biological systems, and a general
principle, criticality, which might explain how this
scale-free dynamics arises. However, to make contact
between our understanding of a system and ex-
perimental data, one needs mathematical models.
Major types of models used in biology are differential
equation systems, iterative maps and cellular auto-
mata, to name only a few. Here, we will review in
turn those which can produce power laws and scale
invariance.
We start in Section III.2 with differential equations and discrete maps which exhibit transitions to chaotic behaviour. Section III.3 presents spatially discretized models like percolating systems and cellular automata. We then move on, in Section III.4, to the concept of self-organized criticality, illustrating it with the now famous sandpile model. This will be done with special care since most of the models presented in this review are of this type. At the end of this Section we take a few steps back from these developments to discuss from a critical point of view the limitations of the complex systems paradigm (Section III.5).
(1) Generalities
Changeux (1993) gives the following definition for theoretical models: 'In short, a model is an expanded and explicit expression of a concept. It is a formal representation of a natural object, process or phenomenon, written in an explicit language, in a coherent, non contradictory, minimal form, and, if possible, in mathematical terms. To be useful, a model must be formulated in such a way that it allows comparison with biological reality. [...] A mathematical algorithm cannot be identified with physical reality. At most, it may adequately describe some of its features.'
I agree with this definition. However, I feel that
two points need clarication.
First, there is the issue of the relationship between
a model and experimental data. In order to be
useful, a model should always reproduce, to a sufficient extent, the available measurements. It is
from an understanding of these data that hypotheses
about the system under study can emerge and be
crystalized into a model. The latter can then be
tested against reality by its ability to account for the
experimental data and make further predictions.
Without this two-way relationship, the line between
theoretical construction and theoretical speculation
can become blurred, and easily be crossed.
History has shown that, in physics for instance (see
Einstein & Infeld, 1938 for a general discussion on
modelling and numerous examples), some of the
most significant theoretical advances were made by the introduction of powerful new concepts to interpret a mounting body of experimental evidence. However, by no means do I imply here that theoretical reflection should be confined to the small group of problems solvable in the short or medium
term. I just want to stress that one should be careful
about the validity and robustness of results obtained
using models based on intuition alone or sparse experimental findings.
Second, I think that the adjective 'minimal' deserves to be elaborated on somewhat (see also Bak, 1996).
A good starting point for our discussion is weather
forecasting. The goal here is to use mathematical
equations to predict, from a set of initial conditions
(temperature, pressure, humidity, etc.), the state of
the system (temperature, form and amount of
precipitation, etc.) with the best precision possible
over a range of approximately 12–36 hours. In this case, 'minimal' translates to a model made of hundreds or thousands of partial differential equa-
tions with about the same amount of variables and
initial conditions. Such complexity allows the in-
clusion of a lot of details in the system, and is often
considered to be the most realistic representation of
nature. However, this approach has two serious
limitations. The first is that the equations used are often non-linear and their solutions are unstable and sensitive to errors in the initial conditions: any small error in temperature or pressure measurements at some initial time will grow and corrupt the predictions of the model as the simulation time increases. This is the well-known 'butterfly effect' (Lorenz,
1963) which prevents any long-term weather fore-
casting. It also explains why such models can give
little insight into global warming or the prediction of
the next ice age. The second limitation of this
approach stems from the sheer complexity of the
model, which usually needs supercomputers to run
on. It makes it difficult to get an intuitive feel of the
dynamics of the system under investigation. One
then has a right to wonder what has been gained,
besides predictive power: prediction is then not
always synonymous with understanding.
At the other end of the complexity spectrum stand
models which have been simplified so much that they have been reduced to their backbone, taking the 'minimal' adjective as far as one dares. Such
models are sometimes frowned upon by experimen-
talists, who do not recognize in them the realization
of the system they study. Theorists, on the other
hand, enjoy their simplicity: such models yield more
easily to analysis and can be simulated without
resorting to complicated algorithms or supercom-
puters. They too can be extremely sensitive to
variations in initial conditions (like the Lorenz
model presented in Section III.2) but in a more
tractable and controlled way. A lesser predictive
power, sometimes only qualitative predictions, is
usually the price to pay for this simplicity. However,
as we saw, in certain cases like the critical systems
described in Section II.4, simplicity can coexist with
spectacularly accurate quantitative results. One only
has to choose a simple, or simplistic, model in the
same universality class as the system under study. In
[Figure 8 near here; axes x, y, z.]
Fig. 8. Classical example of chaotic behaviour generated by the equations of Lorenz (1963). The trajectory winds forever in a finite portion of space without ever intersecting itself, creating a structure called a 'strange attractor'. It is fractal with box dimension 2.05, zero volume but infinite area.
this review, we will consider only models belonging
to this latter class.
(2) Chaos: iterative maps and differential equations
While reading popular literature, one can get the
impression that chaos and fractals are two sides of
the same coin. As we saw, fractals were introduced
by Mandelbrot (1977, 1983) in the 1970s. At about
the same time, Lorenz (1963) was discovering that
the simple set of non-linear differential equations he
had devised as a toy model for weather forecasting
exhibited strange, unheard-of behaviour. First, the
trajectory solution to these equations winds in an
unusual, non-periodic way (see Fig. 8): it has
actually been shown to follow a structure with box
dimension 2.05 (see Strogatz, 1994) which is
therefore fractal. Lorenz coined the now-famous term 'strange attractor' to designate it. [Attractors can be defined as the part of space which attracts the
trajectory of a system. It can be anything from a
point (when the system tends towards an equi-
librium), a cycle (for periodic motion), to a fractal
structure. The latter is then called a 'strange
attractor'.] Second, this trajectory is also extremely sensitive to changes in initial conditions. Any small modification ε₀ will grow with time t as ε₀ e^(λt), with λ, called a Lyapunov exponent, having a value of roughly 0.9 for the system of Lorenz (Lorenz, 1963; May, 1976; Feigenbaum, 1978, 1979; Strogatz, 1994). The variation induced by the introduction of ε₀ will then increase extremely quickly: this model is exponentially sensitive to changes in its initial conditions. This makes any long-term weather forecasting with such systems impossible since any error in the initial conditions (which are present in any measurements) will corrupt its predictions.
The word 'chaos' was chosen to describe the dynamics of systems which do not exhibit any periodicity in their behaviour and are exponentially sensitive to change in their initial conditions. [Thus, they have positive values of their Lyapunov exponents. For negative values of λ, e^(λt) will quickly go to zero and changes in initial conditions will therefore not affect the dynamics.] This behaviour
was put in a wider context by the work of
Feigenbaum (1978, 1979) who showed that, by
adjusting the values of their parameters, certain non-
linear models can shift from a non-chaotic regime
(i.e. periodic and not very sensitive to variations in
their initial conditions) to a chaotic state (i.e. non-
periodic with high sensitivity to initial conditions).
This transition takes place through a succession of discrete changes in the dynamics of the system, which he called 'bifurcations'. Feigenbaum (1978, 1979) used in his studies a simplified model of an ecological system, called the logistic map, proposed earlier by May (1976).
This classical model is a simple, non-linear, iterative equation which describes the evolution of the population x_t of an ecosystem as a function of time t. Though comprising only one parameter r, which takes its values between 0 and 4 and quantifies the reproductive capabilities of the organisms, this model is capable of producing time series x_1, x_2, ..., x_N
with increasingly complicated structures as r grows
from small values to larger ones. For r close to zero,
the population of the ecosystem tends to stabilize
with time to a constant value. For slightly higher r,
it becomes periodic, oscillating between two values
for the population. This radical change in dynamics
is an example of bifurcation. As r rises further, the
time series of the population becomes more and more
complicated as new values are added to the cycle by
further bifurcations. For the value r = r_∞ ≈ 3.569946, the time series is still periodic but barely so, as its period is infinitely long, and it depends little on variations in initial conditions. However, as r increases still further, the population now evolves without any periodicity and computations give a positive value for the Lyapunov exponent: the system has entered a chaotic regime (May, 1976; Feigenbaum, 1978, 1979; Bai-Lin, 1989; Strogatz, 1994).
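The behaviour described above is easy to reproduce numerically. The following Python sketch is purely illustrative: it iterates the standard logistic map x_{t+1} = r x_t (1 − x_t) and estimates the Lyapunov exponent as the orbit average of ln|r(1 − 2x_t)|, which is negative in the periodic regimes, close to zero near r_∞ and positive in the chaotic regime.

    import numpy as np

    def logistic_series(r, x0=0.5, n=10000, discard=1000):
        """Iterate the logistic map x -> r*x*(1-x) and return the time series."""
        x = x0
        xs = []
        for t in range(n + discard):
            x = r * x * (1.0 - x)
            if t >= discard:
                xs.append(x)
        return np.array(xs)

    def lyapunov_exponent(r, x0=0.5, n=10000, discard=1000):
        """Average of ln|d/dx (r*x*(1-x))| = ln|r*(1-2x)| along the orbit."""
        x = x0
        total = 0.0
        for t in range(n + discard):
            if t >= discard:
                total += np.log(abs(r * (1.0 - 2.0 * x)))
            x = r * x * (1.0 - x)
        return total / n

    for r in (2.8, 3.2, 3.569946, 3.8):
        # negative for fixed points and cycles, ~0 near the edge of chaos, positive for chaos
        print(r, lyapunov_exponent(r))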
One should note that transitions to chaotic
dynamics are more than artefacts of models running
on computers. They have been identified in nu-
merous experiments ranging from lasers (Harrison &
Biswas, 1986) and convection in liquids (Libchaber,
Laroche & Fauve, 1982) to muscle fibres of the chick
embryo (Guevara, Glass & Shrier, 1981). We refer
the interested reader to the literature for further
details (see for instance Strogatz, 1994).
So, to summarize, by changing the values of
parameters of non-linear models, stable or periodic
behaviours can change to a chaotic regime where the
system traces complicated (perhaps even complex)
trajectories which have no apparent periods, are
sometimes fractal, and are very sensitive to changes
in initial conditions. It is therefore understandable
that these two concepts, fractals and chaos, are
believed to be linked or even complementary.
However, there seems to be little evidence to support
this connection. In fact, there is evidence that seems
to suggest that it might not be well founded at all.
Bak (1996), in his book on complexity, gives the following strong statement: '[...] Also, simple chaotic systems cannot produce a spatial fractal structure like the coast of Norway. In the popular literature, one finds the subjects of chaos and fractal geometry linked together again and again, despite the fact that they have little to do with each other. [...] In short, chaos theory cannot explain complexity.' The reader should note that Bak (1996) uses the word 'complexity' to describe the behaviour of
scale-invariant complex systems, and not that of
complex systems in their full generality. I will not
add to this debate. Instead, I will just say that one
can certainly see that the typical examples of chaotic
systems presented above do not create fractal objects,
even if their trajectories indeed trace fractal struc-
tures.
According to Bak (1996), chaotic systems are also not able to emit fractal time series such as 1/f-noise: '[...] Chaos signals have a white noise spectrum, not 1/f. One could say that chaotic systems are nothing but sophisticated random noise generators. [...] Chaotic systems have no memory of the past and cannot evolve. However, precisely at the critical point where the transition to chaos occurs, there is
[Figure 9 near here; panels A ('Edge of chaos') and B ('Chaotic signal'); axes: frequency f versus power spectrum P(f).]
Fig. 9. (A) Power spectrum P(f) of a time series produced by the logistic map at the edge of chaos (r = r_∞ ≈ 3.569946). The spectrum is not of a 1/f form but does exhibit an interesting shape which seems to be self-similar. (B) Power spectrum of a chaotic signal (r = 3.8). P(f) seems to follow a Gaussian distribution located in the high-frequency part of the spectrum. r, parameter quantifying the reproductive capabilities of the organisms of the ecosystem of May.
complex behaviour, with a 1/f-like signal. The complex state is at the border between predictable periodic behaviour and unpredictable chaos. Complexity occurs only at one very special point, and not for the general values of r where there is real chaos. [...]'
This statement is easier to verify, for instance by using May's (1976) logistic map. Fig. 9B shows the power spectrum of the signal for this map in the chaotic regime. P(f) looks like a Gaussian distribution located in the high-frequency range of the spectrum. The signal is therefore very poor in low frequencies. The rescaled range analysis method for this chaotic signal gives H ≈ 0.36, therefore showing anti-persistence, contrary to 1/f- and Brownian noise. Similarly, the IFS method gives a graph which looks nothing like that of 1/f-noise: points group themselves in little islands along parallel lines. This seems to support Bak's (1996) claim that chaotic systems do not produce interesting signals.
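This diagnosis can be repeated by the reader with a few lines of Python (a rough sketch only: it reuses the logistic_series() helper defined above and a plain periodogram, which is cruder than the estimators of Section II.3):

    import numpy as np

    x = logistic_series(3.8, n=4096)          # chaotic regime
    x = x - x.mean()                          # remove the DC component
    power = np.abs(np.fft.rfft(x))**2         # periodogram P(f)
    freqs = np.fft.rfftfreq(x.size, d=1.0)    # frequencies in cycles per time step

    # A 1/f signal would give a slope near -1 on a log-log plot of P(f) versus f;
    # the chaotic logistic map instead concentrates its power at high frequencies.
    mask = freqs > 0
    slope = np.polyfit(np.log(freqs[mask]), np.log(power[mask]), 1)[0]
    print('spectral slope:', slope)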
Because of their high sensitivity to changes in the
initial conditions, it is impossible to predict what
[Figure 10 near here; panels A and B; axes: time intervals τ versus distribution D(τ).]
Fig. 10. (A) Signal for the Manneville iteration map tuned at the edge of chaos (upper), with the same signal after being filtered (lower; a spike was drawn each time the signal went over the value 0.6810). (B) Plot of the distribution D(τ) of time delays τ between consecutive spikes.
chaotic systems will do in the long run. There is a loss
of information as time passes. This suggests that, by
looking at this from the opposite point of view, a
chaotic system has a very short memory: it does not
remember where it was for very long. It might
therefore not be a good candidate to describe
biological systems which must adapt and learn all
the time.
However, as Bak (1996) mentions, the map exhibits a far richer behaviour at r = r_∞, right at the point between periodic behaviour and the chaotic regime, therefore sometimes called the 'edge of chaos'. Fig. 9A shows the power spectrum of the signal there: the structure is certainly not 1/f, but it is interesting and appears to be self-similar. It can also be shown that the time series with infinite periodicity produced for r = r_∞ falls on a discrete fractal set with a box dimension smaller than 1, called a Cantor set. Furthermore, Costa et al. (1997) have shown that at the edge of chaos, the sensitivity to initial conditions of the logistic map is a lot milder than in the chaotic regime. There, the Lyapunov exponent is zero and instead the sensitivity to
changes in initial conditions follows a power law which, as we saw in Section II.1, is a lot milder than the exponential e^(λt). This will sustain information in the system a lot longer, and might provide long-term correlation in the time evolution of the population x_t.
It was, however, Manneville (1980) who first showed that, tuned exactly, an iterative function similar to that of May (1976) can produce interesting behaviour and power laws (see also Procaccia & Schuster, 1983; Aizawa, Murakami & Kohyama, 1984; Kaneko, 1989). Fig. 10A (upper panel) shows the signal produced by his map: it is formed by a succession of peaks which, when interpreted as a series of spikes (Fig. 10A, lower panel), looks similar to the train of action potentials measured at the membrane of neurons (we will return to this analogy in Section VI.3). Fig. 10B shows the distribution of intervals τ between successive spikes: it follows a power law D(τ) ∝ τ^(−1.3) over several orders of magnitude. Manneville (1980) also computed the power spectrum of the signal and found that it was 1/f. This shows that it is indeed possible to generate power laws and 1/f-noise from simple iterative maps by fine-tuning the parameters of the system at the edge of chaos (see Manneville, 1980 for details).
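A minimal numerical sketch of this behaviour is given below. It assumes the standard Pomeau–Manneville form x → (x + x^z) mod 1 with z = 2 and reuses the spike threshold 0.6810 quoted in Fig. 10; neither choice is guaranteed to match Manneville's (1980) exact map or parameters, so the sketch is only indicative.

    import numpy as np

    def manneville_intervals(z=2.0, threshold=0.6810, n=500_000, x0=0.3):
        """Iterate x -> (x + x**z) mod 1 and return the intervals between
        successive crossings of the threshold (the 'spikes' of Fig. 10)."""
        x = x0
        spikes = []
        for t in range(n):
            x = (x + x**z) % 1.0
            if x > threshold:
                spikes.append(t)
        return np.diff(spikes)

    tau = manneville_intervals()
    counts, edges = np.histogram(tau, bins=np.logspace(0, np.log10(tau.max()), 30))
    # On logarithmic scales, counts versus interval length should fall roughly on a
    # straight line, i.e. a power-law distribution D(tau) of inter-spike intervals.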
(3) Discrete systems in space: cellular
automata and percolation
In this Section, we will introduce percolation systems
and cellular automata. This will be helpful since
most models in this review belong to one of these two
types.
Iterative functions and differential equations are not always the most practical way to describe biological systems. Let us take the following example. How should we go about building a model for tree or plant birth and growth in an empty field? The model should be able to predict how many plants will have grown after a given time interval, and also whether they will be scattered in small patches or in just one large patch which spans the whole field. We will make the simplifying assumption that seeds and spores are carried by the wind over the field, and scattered on it uniformly.
This is of course a difficult problem, and using differential equations to solve it might not be the
most practical method. The underlying dynamics of
how a seed lands on the ground and decides, or not,
to produce a plant is already delicate. Further, it
depends also on the types of seed and ground
involved, the amount of precipitation, the vegetation
[Figure 11 near here; panels A, B and C.]
Fig. 11. Simulation of plant growth in a square empty field divided into N = 2500 small areas. Cells with a plant on them are represented by a black square. Free areas are symbolized by white squares. (A) p = 0.1, (B) p = 0.3, and (C) p = 0.58, where p is the probability that a plant grows on each small area.
already present, etc. A more practical approach
might be a probabilistic one.
Here, we first divide the field into N small areas, using a square lattice for instance. Each cell must be small enough that at most one plant can grow on it at a time. We then define a number p ∈ [0, 1] which represents the probability that a plant grows on each small area. An approximate value for p can be found experimentally by reproducing the conditions found on location. We then proceed to fill all the cells using the probability p. Fig. 11 shows an example for p = 0.1, 0.3 and 0.58. We notice, as expected, that the number of plants present and the size of the patches increase with p.
Repeating the simulation several times also gives the following interesting result: the proportion of simulations where the field is spanned from border to border by a single patch varies abruptly as a function of the probability p. It is close to zero if p is smaller than approximately 0.59, and almost equal to 1 if p exceeds this value. Indeed, it has been shown that such systems, which are called percolation systems because of their similarity to the percolation of a fluid in a porous medium (for reviews see Broadbent & Hammersley, 1957; Stauffer, 1979; Hammersley, 1983), become critical when p is fine-tuned to the value p_c ≈ 0.59275: the shape of the patch spanning the field is fractal, and the properties of the system obey power-law functions of p − p_c. The probability p therefore plays a role similar to temperature T in the systems of Section II.4.
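A minimal Python version of this simulation is sketched below (the lattice size, the number of trials and the use of scipy's cluster labelling are choices made here for illustration only):

    import numpy as np
    from scipy.ndimage import label

    def spans(p, L=100, rng=np.random.default_rng()):
        """Fill an L x L field with plants with probability p per cell and test
        whether a single patch connects the top border to the bottom border."""
        field = rng.random((L, L)) < p
        clusters, _ = label(field)            # 4-neighbour connected components
        top = set(clusters[0, :][clusters[0, :] > 0])
        bottom = set(clusters[-1, :][clusters[-1, :] > 0])
        return len(top & bottom) > 0

    for p in (0.3, 0.55, 0.59275, 0.65):
        frac = np.mean([spans(p) for _ in range(200)])
        print(p, frac)    # jumps from near 0 to near 1 around p_c ~ 0.5927

On a finite lattice the transition is smoothed out, but the abrupt change of the spanning probability around p_c is already clearly visible at this modest size.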
How will this model help us answer the questions asked at the beginning of this Section? The number of plants which will grow on the field depends on p and the model predicts the value Np. As to whether plants will group themselves into small patches or spread all over the field, the answer depends also on
p (which one should be able to estimate experimentally). As we just saw, if p < 0.6 then small scattered patches are probable. Otherwise, complete colonisation of the field by the plants is almost certain.
This very simple model can be generalized to a
larger number of dimensions instead of just the two
considered here. For instance, one can easily con-
ceive of percolation in a three-dimensional cube.
Dynamics in a number of dimensions larger than
three, though harder to represent mentally, is easily
implementable on computers.
This model will be generalized in later sections to implement the dynamics of rain forests, forest fires and epidemics. Indeed, more complex dynamics can easily be developed. For instance, we can change the rules according to which we fill the areas of the field. So far, we have only used a random number generator for each cell. This means that there are no interactions between plants: the growth at one place does not affect the growth at neighbouring sites. Overcrowding is therefore not present in our model. This can easily be remedied using slightly more complicated rules.
A good example of such models is the so-called
Game of Life introduced by Conway (see Gardner,
1970 and Berlekamp, Conway & Guy, 1982) in the
1970s, which mimics some aspects of life and
ecosystems. The Game of Life is defined on a square
lattice. Each lattice site can be either occupied by a
live being or be empty. Time evolves in a discrete
fashion. At each time step, all the sites from the
lattice are updated once using the following rules:
(1) An individual will die of over-exposure if it has fewer than two neighbours, and of over-crowding if it has more than three neighbours. The site it
occupied previously will then become empty at the
next time step.
(2) A new individual will be born at an empty site
only if this site is surrounded by three live neigh-
bours.
The dynamics generated by these simple rules has
proven to be very rich, with structures which glide
without changing shape, others that do not move at
all but are unstable, etc. There has been a recent
revival of interest in this model as power laws and
fractals appear in its dynamics, possibly hinting at
the presence of criticality in models of ecosystems
(see Bak, Chen & Creutz, 1989 and Sales, 1993).
Mathematical models of this latter type, called
cellular automata (Wolfram, 1983, 1984, 1986; see also Langton, 1990), allow a simple and efficient
implementation of complicated interactions between
the constituents of a system using rules such as those
above.
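To illustrate how directly such rules translate into a cellular automaton, here is a minimal Python sketch of one synchronous update of the Game of Life (the periodic boundary conditions and lattice size are arbitrary choices made for the sketch):

    import numpy as np

    def life_step(grid):
        """One synchronous update of Conway's Game of Life on a periodic lattice.
        grid is a 2-D array of 0 (empty) and 1 (occupied)."""
        # Count the eight neighbours of every site by summing shifted copies.
        neighbours = sum(np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        survive = (grid == 1) & ((neighbours == 2) | (neighbours == 3))  # rule 1
        birth = (grid == 0) & (neighbours == 3)                          # rule 2
        return (survive | birth).astype(int)

    rng = np.random.default_rng(0)
    grid = (rng.random((50, 50)) < 0.3).astype(int)
    for _ in range(100):
        grid = life_step(grid)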
(4) Self-organized criticality: the sandpile
paradigm
Here, I present the principle of self-organized
criticality introduced by Bak, Tang and Wiesenfeld
(1987) and its paradigm, the sandpile. This subject
has attracted quite a lot of attention since its
introduction: hundreds of articles are published on it
every year in physics, mathematics, computer sci-
ence, biology and economics journals. Short reviews
are available to the interested reader wishing to learn more about this fast-moving field (Bak & Chen, 1989, 1991; Bak, 1990; Bak & Paczuski, 1993,
1995; Bak, 1996).
Self-organized criticality is a principle which
governs the dynamics of systems, leading them to a
complex state characterized by the presence of frac-
tals and power-law distributions. This state, as we
will see, is critical. However, self-organized critical
systems differ from the systems we presented in
Section II.4. There, one had to fine-tune a parameter, the temperature T, to be in a critical state.
Here, it is the dynamics of the system itself which
leads it to a scale-free state (it is therefore self-
organized). The classical example of such systems is
the sandpile (Bak et al., 1987; Bak, 1996) which we
turn to now.
Let us consider a level surface, such as a table, and some apparatus which allows one to drop sand on it, grain by grain, every few seconds. With time, this will create a growing accumulation of sand in the middle of the surface. At first, the sand grains will
stay where they land. However, as the sand pile gets
steeper, instabilities will appear: stress builds up in
the pile but it is compensated by friction between
sand grains. After a while, stress will be high enough
that just adding one single grain will release it in the
form of an avalanche: a particle of sand makes an
unstable grain topple, and briefly slide. This has the effect of slightly reducing the slope of the sandpile,
rendering it stable again until more sand is added.
Such toppling events will be small in the beginning,
but they will grow in size with time. After a while,
the sandpile will reach a state where avalanches of
all sizes are possible: from one grain of sand, to a
large proportion of the whole pile. This state is
critical since it exhibits the same domino effect which was present in the spin system: adding one
grain of sand might affect all the other grains from
the pile. Further, this state has been attained without
Fig. 12. Cellular automaton introduced by Bak et al. (1987) on a square lattice of size N = 20 in the critical state. Grains of sand are represented by cubes, which cannot be piled more than three particles high. Notice the correlations in the spatial distribution of the sand.
any fine tuning of parameters: all one has to do to
reach it is to keep adding sand slowly enough so that
avalanches are over before the next grain lands.
To better understand the dynamics of the system, let us consider the cellular automaton proposed by Bak et al. (1987). We represent the level surface by an N × N square grid, and grains of sand by cubes of unit side which can be piled one on top of another (see Fig. 12). Let Z(i, j) be the amount of sand at the position (i, j) on the grid. At each time step, a random number generator will choose at which position (i, j) of the grid the next grain of sand will land, or equivalently which Z will increase by one unit:
Z(i, j) → Z(i, j) + 1. (11)
To model the instabilities in the sandpile, we will not allow sand to pile higher than a certain critical value chosen arbitrarily to be 3. If at any given time the height of sand Z(i, j) is larger than 3, then that particular site will distribute four grains to its four nearest neighbours:
Z(i, j) → Z(i, j) − 4 (12)
Z(i+1, j) → Z(i+1, j) + 1 (13)
Z(i−1, j) → Z(i−1, j) + 1 (14)
Z(i, j+1) → Z(i, j+1) + 1 (15)
Z(i, j−1) → Z(i, j−1) + 1 (16)
A single toppling event like equations (12–16) is an
avalanche of size 1. If this event happens next to a
site where three grains are already piled up, sand
will topple there as well, giving an avalanche of size
[Figure 13 near here; panel A: avalanche size s versus time t; panel B: distribution D(s) versus size s on logarithmic scales.]
Fig. 13. (A) Record of avalanche size as a function of time for a 20 × 20 sandpile. (B) Distribution D(s) of avalanches for the same sandpile as a function of their size s. It clearly follows a straight line over more than two orders of magnitude, therefore implying a power law. Fit to the data gives D(s) ∝ s^(−1.0).
2, and so on. We apply the updating rule (equations 12–16) until all Z are equal to 3 or smaller. The duration of the avalanche is defined as the number of consecutive time steps during which Z(i, j) > 3 somewhere in the sandpile. The result does not look much like a real sandpile (see Fig. 12). It looks more like small piles of height 3 and less, held close together. However, this automaton proves a lot easier to implement on a computer (and runs faster) than models of more realistic piles, and it exhibits the same behaviour.
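A short Python sketch of this automaton is given below (lattice size, number of grains added and the open boundaries, through which sand is lost at the edges, are choices made here; the updating follows equations (11)–(16)):

    import numpy as np

    def drive_sandpile(N=20, grains=50_000, z_crit=3, rng=np.random.default_rng()):
        """Bak-Tang-Wiesenfeld sandpile: add grains one at a time (eq. 11),
        topple any site with Z > z_crit to its four neighbours (eqs. 12-16),
        and record the size (number of topplings) and duration of each avalanche."""
        Z = np.zeros((N, N), dtype=int)
        sizes, durations = [], []
        for _ in range(grains):
            i, j = rng.integers(N), rng.integers(N)
            Z[i, j] += 1                                   # eq. (11)
            size = duration = 0
            while (Z > z_crit).any():
                duration += 1
                for i, j in zip(*np.where(Z > z_crit)):    # sweep over unstable sites
                    Z[i, j] -= 4                           # eq. (12)
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < N and 0 <= nj < N:    # grains falling off the edge are lost
                            Z[ni, nj] += 1                 # eqs. (13)-(16)
                    size += 1
            if size:
                sizes.append(size)
                durations.append(duration)
        return np.array(sizes), np.array(durations)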
We now present the results of simulations of the cellular automaton on a 20 × 20 grid. Fig. 13A shows a record of avalanche sizes as a function of time. Notice that avalanches of a wide range of sizes occur, and that the smaller ones are much more frequent than the larger ones. In fact, as shown in Fig. 13B, the distribution D(s) of avalanche size s follows the power law D(s) ∝ s^(−1.0). Avalanche duration τ also
[Figure 14 near here; panel A: avalanche duration τ versus time t; panel B: distribution D(τ) versus duration τ on logarithmic scales.]
Fig. 14. (A) Record of avalanche duration τ as a function of time t for a 20 × 20 sandpile. (B) The distribution D(τ) of avalanches as a function of their duration τ follows a straight line over roughly two orders of magnitude, therefore implying a power law. Fit to the data gives D(τ) ∝ τ^(−0.8).
has interesting properties. Fig. 14A shows the time
record of the avalanche duration for the data of
Fig. 13. The two look similar. In fact, the avalanche duration distribution also follows a power law as shown in Fig. 14B: D(τ) ∝ τ^(−0.8). We refer the
interested reader to the literature for further details
about this fascinating system (Bak, Tang & Wiesen-
feld, 1988).
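Exponents such as those quoted in Figs 13 and 14 can be estimated from such a simulation by a straight-line fit on logarithmic scales, for instance as in the following sketch (it assumes the drive_sandpile() helper above; logarithmic binning with a least-squares fit is a simple, if crude, estimator):

    import numpy as np

    sizes, durations = drive_sandpile()

    def loglog_slope(values, bins=20):
        """Histogram the values in logarithmic bins and fit log(count) vs log(bin centre)."""
        edges = np.logspace(0, np.log10(values.max()), bins)
        counts, edges = np.histogram(values, bins=edges, density=True)
        centres = np.sqrt(edges[:-1] * edges[1:])
        keep = counts > 0
        slope, _ = np.polyfit(np.log(centres[keep]), np.log(counts[keep]), 1)
        return slope

    print('size exponent    :', loglog_slope(sizes))
    print('duration exponent:', loglog_slope(durations))

On a lattice as small as 20 × 20 the scaling region is short, so the fitted values should only be taken as rough estimates of the exponents quoted above.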
It is important to mention that in this system, the
criticality is only statistical, unlike the systems near
continuous phase transitions we presented in Section
II.4. In the case of the ferromagnetic sample, when T = T_c, changing the orientation of one spin will affect all the other spins of the sample. It is a kind of
avalanche, but it is always as large as the whole
sample. In the sandpile, events of all size and
duration are possible but their relative probability is
given by a power-law distribution. This is strongly
reminiscent of the example of earthquakes, which we
mentioned in the introduction: there is no more
typical size for avalanches than there is a typical size
for earthquakes. In the case of the sandpile, the size
and duration of an event depends of course on where
we put the grain of sand which triggers the
avalanche, but it depends much more strongly on
the amount of stress built in the pile. At some times,
the stress will be low and small events will occur. At
other times, it will be much higher and large events
will become very probable. Apart from the number
generator which distributes sand on the pile, nothing
is random in the system. One should therefore expect
strong long-term correlations in signals produced by
the dynamics of the sandpile, which is indeed what
is observed (we will come back to this point later).
These characteristics (long-term correlations, scale invariance, and the absence of any fine tuning) make self-organized criticality an attractive prin-
ciple to explain the dynamics of scale-free biological
systems.
The sandpile exhibits a few more power laws. For
instance, similarly to the logistic map at the edge of
chaos of Section III.2, the sensitivity of the pile to
changes in initial conditions is of a power-law type
(Bak, 1990). The system also exhibits fractal struc-
tures (Bak & Chen, 1989; Bak, 1990) (see also Fig.
12). It was also claimed (Bak et al., 1987, 1988; Tang & Bak, 1988) that it emits 1/f-noise.
After the ground-breaking theoretical work of P. Bak and his colleagues, many efforts were made to observe self-organized criticality experimentally. The experiments were difficult because it was necessary, in order to measure accurately the distributions D(s) and D(τ), to count all the grains of sand which moved during an avalanche. Careful compromises therefore had to be made while devising experimental devices. Early efforts were not
very successful, producing puzzling, and sometimes
misleading, results. It was not until the experimental
work by Frette et al. (1996) that true power-law
scaling was measured. The pile was not made of
sand, neither was it three dimensional. Instead it was
made of rice, constrained in a two-dimensional space
between two transparent plastic panels through
which digital cameras could, with the assistance of
computers, follow the motion of all the particles of
the pile. The power laws, signature of self-organized
criticality, were reproduced but only when using a
certain type of rice which provided enough friction
to reduce the size of avalanches. This experiment
gave insight into another power-law distribution:
that for the duration of the stay of a given grain of
rice in the pyramid (Christensen et al., 1996). It also
serves as a warning that self-organized criticality
might not be as general and universal a principle as it might appear at first, since it seems to be sensitive to details of the dynamics (such as the type of rice used).
Another unexpected development was the demonstration that the original sandpile automaton (Bak et al., 1987) does not produce 1/f-noise, but rather 1/f²-noise, like Brownian noise (Jensen, Christensen & Fogedby, 1989; Kertész & Kiss, 1990; Christensen, Fogedby & Jensen, 1991). However, other self-organized critical systems do give 1/f-noise (Jensen, 1990), and by incorporating dissipation in the sandpile automaton, one can tune the noise to be in the 1/f range (Christensen, Olami & Bak, 1992). Finally, in a recent paper, De Los Rios and Zhang (1999) propose a sandpile-type model which includes dissipation and a preferred direction for the propagation of energy, and which produces 1/f-noise in systems with any number of dimensions. Interestingly enough, because of dissipation effects, this system does not exhibit power-law distributions of spatial correlations, unlike the original sandpile model and other self-organized critical systems. This gives theoretical support to the experimental observation of systems which emit 1/f-noise but are not scale-free (see references in De Los Rios & Zhang, 1999). More work is therefore clearly needed to better understand the connection between flicker noise and criticality in nature.
(5) Limitations of the complex systems
paradigm
Before moving to scale invariance in biological
systems, I present a few limitations of the complex
systems paradigm proposed in the introduction, i.e.
of the study of the emergent properties of systems once the interactions between their components are known.
The first main problem of this approach, which is also present in any type of modelling, is to define rigorous ways in which models can be tested against reality. In the case of scale-invariant systems, things are relatively simple: quantities such as exponents of power laws and fractal dimensions can be measured experimentally and compared with predictions from models. The case of the Belouzov–Zhabotinski reaction, for instance, is more complex, as ways to quantify reliably the dynamics of the system are more difficult to find. Indeed, it is possible to construct models which create various spatial patterns, but verifying their validity in accounting for the mechanisms of the reaction just by comparing 'by eye' the structures obtained from simulations and those observed experimentally is obviously not satisfactory.
The second main problem is more central.
Although the idea of complex behaviours emerging
from systems of identical elementary units inter-
acting with each other in simple ways is quite
attractive, it is seldom realized in nature. Indeed, most real systems are made of a heterogeneous set of highly specialized elements which interact with each
other in complicated ways. Genetics and cell biology,
for instance, are replete with such systems.
In the light of these arguments, we see that although the complex system approach is an important first step towards a unified theory of systems, it is unlikely to be the last, and many exciting developments will have to take place in the future.
This concludes the part of this article devoted to
mathematical notions and concepts. We can now
apply these tools to biological systems.
IV. COMPLEXITY IN ECOLOGY AND
EVOLUTION
(1) Ecology and population behaviour
(a) Red, blue and pink noises in ecology
Following the work of Feigenbaum (1978, 1979) on the logistic map of May (1976), efforts were devoted to analysing experimental data from animal populations using similar models. This aimed at a better understanding of the variations in the population density of ecosystems as well as how the environment influences this variable. Another question which
attracted a lot of attention was whether ecosystems
poise themselves in a chaotic regime. A considerable
amount of literature has been devoted to this subject
and the reader is referred to it for further details (see
for instance the review by Hastings et al. (1993) and
references therein). Answering this second question
is especially dicult since time series obtained from
ecosystems are usually short while methods used to
detect chaotic behaviour require a large amount of
data (Sugihara & May, 1990; Kaplan & Glass,
1992; Nychka et al., 1992): results are not always
clear cut. One could argue that, considering the
complexity of the interactions between individuals,
population dynamics should be strongly influenced
by past events. However, as we saw earlier, one of
the main characteristics of chaotic systems is to be so
sensitive to initial conditions that information is lost
after a short period. I will not pursue this matter
183 Scale invariance in biology
further and refer the reader to Berryman & Millstein
(1989) for an interesting discussion on the subject.
The question seems to be still open.
Additional insight into the dynamics of ecosystems
was revealed by the power spectra of population
density time series from diverse ecosystems. Pimm &
Redfearn (1988) considered records of 26 terrestrial
populations (insects, birds and mammals) compiled
over more than 50 years for British farmlands.
Computing the power spectrum of these data, they found that it contains a surprisingly high content in low frequencies, and described this signal as 'red noise'. This indicates the presence of slow variations or trends in population density, which might suggest that the ecosystem is strongly influenced by past events (in other words, it possesses a memory). This came as quite a surprise since simple chaotic models, which were thought at the time to capture the essential ingredients of the dynamics of ecosystems, generate time series rich in high frequencies, so-called 'blue' signals (see Section III.2). However, it was
later shown that by including spatial degrees of freedom, chaotic models can exhibit complex spatial structures (such as spirals or waves), very similar to those in the Belouzov–Zhabotinski experiment (Hassel, Comins & May, 1991; Bascompte & Solé, 1995). With this addition, the dynamics of the model seem to generate signals with a higher content in low frequencies, as if the system was able to store information in these patterns. There is, however, still much debate about the role of such chaotic models in ecology (Bascompte & Solé, 1995; Cohen, 1995; Blarer & Doebeli, 1996; Kaitala & Ranta, 1996; Sugihara, 1996; White, Begon & Bowers, 1996a; White, Bowers & Begon, 1996b).
With the presence of low frequencies in population time series now firmly established experimentally, other questions arise: where do they come from? Are they induced by external influences from the environment, which typically have signals rich in low frequencies (Steele, 1985; see also Grieger, 1992, and references therein)? Or are they produced by the intrinsic dynamics of the population, i.e. by the interactions between individuals? Also, in what proportion do low frequencies arise? Are they strongly dominant, as in the 1/f² spectrum of the Brownian signal? Or are they less predominant, as in the 1/f signal, sometimes called 'pink noise'? (see Halley, 1996, for a discussion).
Interesting results which address these issues have
been published by Miramontes & Rohani (1998).
Their approach consists of analysing time series from
laboratory insect populations and extracting their
[Figure 15 near here; axes: time t (days) versus insect population.]
Fig. 15. Time series from a laboratory insect population of Lucilia cuprina. Notice the change in patterns after approximately 200 days of observation. Reproduced from Miramontes & Rohani (1998).
low-frequency content. The insect population under study is that of Lucilia cuprina, or Australian sheep blowfly, which Nicholson (1957) kept under identical conditions with a constant food supply for a duration of approximately 300–400 days. Population density was evaluated roughly daily, providing a data set of 360 points (see Fig. 15).
This data set has attracted an enormous amount of attention in the literature. This is first due to the irregularities it exhibits, even under constant environmental conditions. Another interesting feature of the data is the change in the behaviour of the time series after approximately 200 days. This was investigated by Nicholson (1957) who found that, unlike wild or laboratory stock flies, female flies after termination of the experiment could produce eggs when given very small quantities of protein food (in fact small enough that the original strains of flies could not produce eggs with it). Mutant flies might therefore have appeared in the colony and, because they were better adapted to living in such close quarters and with a limited food supply, took over the whole population. The effect of this change in the insect colony can be seen in the behaviour of the population density, which fluctuates more and becomes more ragged. For a theoretical investigation of this phenomenon using non-linear models, see Stokes et al. (1988).
The population time series of Nicholson's (1957) experiment obviously contains low frequencies and long trends. To better quantify this content, Miramontes & Rohani (1998) applied the three methods outlined in Section II.3 to the population density of
Lucilia cuprina, and also to that of the wasp parasitoid Heterospilus prosopidis and its host the bean weevil Callosobruchus chinensis, cultured by Utida (1957). The results from their analyses are consistent with a 1/f structure of the noise, rather than a 1/f² structure. They also find a power-law distribution for the absolute population changes, D(s) ∝ s^(−α) with α between 2.8 and 1.7, and D(τ) ∝ τ^(−β) with β between 0.95 and 1.23 for the distribution of the duration of these fluctuations. These studies show that the intrinsic dynamics of an ecosystem, even one comprising a single species and without any external perturbations, is able to generate long trends in population density. They also show that a 1/f power spectrum seems to be favoured over redder signals. This frequency dependence, and the existence of power-law distributions of event size and duration in the system, seem to hint toward a critical state, instead of a chaotic one.
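Readers wishing to apply this kind of diagnosis to their own time series can start from the following minimal Python sketch of Hurst's rescaled-range analysis (one of the methods of Section II.3); the window sizes and the least-squares fit are implementation choices, and H ≈ 0.5 is expected for an uncorrelated signal, with larger values indicating persistence:

    import numpy as np

    def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
        """Estimate the Hurst exponent H by rescaled-range analysis:
        for each window size n, average R/S over non-overlapping windows,
        then fit log(R/S) = H log(n) + const."""
        x = np.asarray(x, dtype=float)
        rs = []
        for n in window_sizes:
            ratios = []
            for start in range(0, len(x) - n + 1, n):
                w = x[start:start + n]
                dev = np.cumsum(w - w.mean())          # mean-adjusted cumulative sum
                R = dev.max() - dev.min()              # range of the cumulative deviations
                S = w.std()
                if S > 0:
                    ratios.append(R / S)
            rs.append(np.mean(ratios))
        H, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
        return H

    # Example with a synthetic signal (white noise should give H close to 0.5):
    print(hurst_rs(np.random.default_rng(1).normal(size=4096)))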
(b) Ecosystems as critical systems
We end this Section on ecology by presenting further evidence suggesting that some ecological systems seem to operate near a critical state. We start with the investigation of rain forest dynamics by R. V. Solé and S. C. Manrubia (Solé & Manrubia, 1995a, b; Manrubia & Solé, 1997), and finish with the work of Keitt & Marquet (1996) on extinctions of birds introduced by man in the Hawaiian islands.
The two main forces which appear to shape the
tree distribution in rain forests, which will be of
interest to us here, are treefall and tree regeneration.
There is also competition in the vegetation in order
to get as much sunlight as possible, inclining trees to
grow to large heights. From time to time, old trees
fall down, tearing a hole in the surrounding
vegetation, or canopy. Because of the intricate nature
of the forest, where trees are often linked to others by
elastic lianas, when a large tree falls, it often brings
others down with it. In fact, it has been observed
that treefalls bring the local area to the starting point
of vegetation growth. The gap in the vegetation is then filled as new trees develop again. This constant process of renewal assures a high level of diversity in the ecosystem.
Gaps in the vegetation, which are easy to pinpoint
and persist for quite some time, can be gathered by
surveys and displayed on maps. Solé & Manrubia (1995a) chose the case of the rain forest on Barro
Colorado Island, which is an isolated forest in
Panama. They present a 50 ha plot showing 2582
low canopy points where the height of trees was less
[Figure 16 near here; panels A and B; axes of panel B: gap size s (m) versus gap distribution D(s) on logarithmic scales.]
Fig. 16. (A) Plot of canopy gaps obtained from numerical simulations of the rain forest model of Solé & Manrubia (1995a). Each dot represents a gap (zero tree height) in the canopy. (B) Frequency distribution of gaps from the Barro Colorado Island rain forest for the years 1982 and 1983. The dashed line, representing a power-law distribution D(s) ∝ s^(−1.74), fits the data quite well. Reproduced from Solé & Manrubia (1995a).
than 10 m, in the years 1982 and 1983 (see Fig. 16A for a similar plot obtained by numerical simulation of the model described below). The map shows holes of various sizes in the vegetation, scattered across the plot, in a possibly fractal manner. To verify this, Solé and Manrubia (1995a) first computed the frequency of occurrence D(s) of canopy gap size s (see Fig. 16B). The distribution fits a power law with exponent −1.74 quite well, showing that there does not appear to be any typical size for gaps. Another indication of the fractal nature of the gap distribution can be gathered by computing the fractal dimensions of this set. Using methods such as the basic box counting method presented in Section II.2, the authors found the non-integer value 1.86. Solé and Manrubia (1995a) show in fact that, typical of real fractals, the rain forest gaps set possesses a whole spectrum of fractal dimensions, which shows corre-
lations of the gaps on all scales and ranges: it therefore seems to be a large, living fractal structure.
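The box counting estimate itself is straightforward to sketch in Python for any binary map of gap positions (the random 'gap map' used below is a placeholder standing in for real survey data):

    import numpy as np

    def box_counting_dimension(mask, box_sizes=(1, 2, 4, 8, 16, 32)):
        """Count how many boxes of side b contain at least one occupied cell,
        then fit log N(b) = -D log(b) + const; D is the box-counting dimension."""
        counts = []
        for b in box_sizes:
            n = 0
            for i in range(0, mask.shape[0], b):
                for j in range(0, mask.shape[1], b):
                    if mask[i:i + b, j:j + b].any():
                        n += 1
            counts.append(n)
        slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
        return -slope

    mask = np.random.default_rng(2).random((256, 256)) < 0.1   # placeholder gap map
    print(box_counting_dimension(mask))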
The presence of fractals and power-law distributions is strongly suggestive that the rain forest has evolved to a critical state where fluctuations of all sizes are present. To verify this hypothesis, Solé and Manrubia (1995a, b) propose a simple, critical mathematical model which reproduces some aspects of the dynamics.
The forest is represented using a generalization of
the model for plant growth of Section III.3 which
implements tree birth and death by four rules. Trees
start to grow in vacant areas using the stochastic
mechanism of Section III.3 (rule 1). Rule 2
implements tree growth at locations where the surrounding vegetation is not taller, so as to reproduce the effect of light screening by larger trees on smaller ones. Spontaneous tree death can take
place for two reasons : because of age or because of
disease. Rule 3 implements these mechanisms by the
introduction of a maximal value for tree age (after
which the tree dies) and a random elimination (with
a small probability) of trees of all ages. Finally, rule 4 reproduces the effect of treefall on the surrounding vegetation: when a tree dies and falls, it also brings
down some of its neighbours. This rule takes into
account as well the fact that older trees are higher
and will therefore damage a larger area of canopy
than smaller ones. We can see that rules 2 and 4 are
especially important to the dynamics of the system
because they introduce spatial and temporal corre-
lations in the system.
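The following Python sketch is only a caricature of this model: it implements the four rules in their simplest form, with all probabilities, ages and neighbourhood sizes chosen arbitrarily here rather than taken from Solé & Manrubia (1995a, b).

    import numpy as np

    def forest_step(age, p_grow=0.2, p_die=0.001, max_age=200,
                    rng=np.random.default_rng()):
        """One update of a toy forest automaton. age[i, j] = 0 means a gap,
        age > 0 is a tree whose height is assumed to grow with age."""
        L = age.shape[0]
        new = age.copy()
        for i in range(L):
            for j in range(L):
                nbrs = age[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
                if age[i, j] == 0:
                    # Rules 1 and 2: a seedling establishes with probability p_grow,
                    # but only if it is not screened by taller surrounding vegetation.
                    if rng.random() < p_grow and nbrs.max() <= 1:
                        new[i, j] = 1
                else:
                    # Rule 3: death by old age or at random (disease).
                    if age[i, j] > max_age or rng.random() < p_die:
                        new[i, j] = 0
                        # Rule 4: a falling tree clears part of its neighbourhood,
                        # older (taller) trees damaging a larger fraction of it.
                        damage = min(1.0, age[i, j] / max_age)
                        kill = rng.random(nbrs.shape) < damage
                        new[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2][kill] = 0
                    else:
                        new[i, j] += 1                      # the tree simply ages
        return new

Even such a crude version already shows gaps opening and refilling on many scales, which is the qualitative behaviour the original model quantifies.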
During the simulations, the system evolves as the
four rules are applied successively at each time step.
In doing so, the dynamics develops correlations
between different points of the forest, and finally on all length and time scales. The latter is observable by
studying the time series given by the variation of the
total biomass of the ecosystem. The power spectrum
of the signal reveals a 1/f^α dependence on frequency with 0.87 ≤ α ≤ 1.02. Spatial correlations can be studied by computing the fractal dimensions of the gap distribution in the system. Solé & Manrubia (1995a, b) and Manrubia & Solé (1997) found
results quite similar, both qualitatively and quan-
titatively, to those obtained from the real data set,
giving strong arguments in favour of the rain forest
operating near or at a critical state. We refer the
interested reader to their articles for further details
and applications of their models.
Keitt & Marquet (1996) approach the study of
the possible critical nature of ecosystems from a
different angle, focusing instead on their dynamics as they are gradually filled by species. For this task,
they chose the geographical area formed by six
Hawaiian islands : Oahu, Kauai, Maui, Hawaii,
Molokai and Lanai. These islands were originally
populated by native Hawaiian birds, until they were
driven extinct by Polynesian settlers and afterwards
by immigrants from North America. Since then,
other bird species have been artificially introduced.
Records show that 69 such species were introduced
between 1850 and 1984. They also contain data regarding the number of extinctions that occurred during this period (35 species went extinct during
these 70 years). Keitt & Marquet (1996) analysed
these records and found several indications that the
system might be operating in a critical state.
Extinction events seem not to have occurred until more than eight species were introduced into the ecosystem. This transition does not happen in a continuous manner, but rather in an abrupt fashion, reminiscent of a phase transition. Keitt & Marquet (1996) interpret this as the system going from a non-critical state to a critical one. Also, the number of extinction events seems to follow a power-law distribution, with an exponent around −0.91. This
means that small extinction events are a lot more
common than larger ones. We will see below (Section IV.2) that similar power laws are suggested by the analysis of fossil records. Lastly, the distribution of the lifetimes of species, which range from a few years to more than 60 years, also follows a power law with exponent −1.16. These findings might therefore
illustrate how an ecosystem self-organizes into a
critical state as the web of interactions between
species and individuals develops. However, more
data on this or similar ecosystems might prove
valuable to support this claim.
(2) Evolution
In this Section, I will present the results of some recent work done on evolution using mathematical models.
There has been a great deal of activity in this area in
the last 10 years or so, mainly because of exciting and
unexpected patterns emerging from fossil records for
families and genera. These patterns, which are best
described by power laws and self-similarity, give
biologists solid data to build models and better
understand the mechanisms of evolution and selec-
tion at the scale of species.
I will start by presenting some of the power laws
extracted from the fossil records (Section IV.2.a).
This will be followed by a few remarks about models
186 T. Gisiger
[Figure 17 near here; panels A and B; axes: geological time (Myr) versus per cent origination (A) and per cent extinction (B).]
Fig. 17. Time series for the percentage of origination (A) and extinction (B) compiled from the 1992 edition of A Compendium of Fossil Marine Families (see text). Reproduced from Sepkoski (1993). Myr, million years before recent.
in evolution, and the concepts and notions they use
(Section IV.2.b). I will then present models proposed
to reproduce these measurements (Sections IV.2.c–e).
(a) Self-similarity and power laws in fossil data
Traditionally, evolution is viewed as a continuous,
steady process where mutations constantly introduce
new characteristics into populations, and extinctions weed out the less-fit individuals. Via reproduction,
the advantageous mutations then spread to a large
part of the population, creating a new species or
replacing the former species altogether, therefore
inducing evolution of species. This mechanism,
named after Darwin, is in fact a microscopic rule of
evolution, or microevolution (i.e. it plays at the level
of species or individuals), and it governs the whole
ecosystem from this small scale. According to this
mechanism one expects extinction records to show
only a background activity of evolution and ex-
tinction, where a low number of species constantly
emerge and others die out.
However, fossil records show that history has
witnessed periods where a large percentage of species and families became extinct, the so-called 'mass extinction events'. The best documented case is the annihilation of dinosaurs at the end of the Cretaceous
period, even though at least four such events are
known (see Raup & Sepkoski, 1982 for a detailed list
of these events). These events clearly cannot be
accounted for in terms of continuous, background
extinctions as they represent discontinuous com-
ponents to the fossil data.
To explain these occurrences, events external to
the ecosystems were introduced. It is known that the earth's ecosystems have been subject to strong perturbations such as variations in sea level, worldwide climatic changes, volcanic eruptions and meteorites, to name a few. Although the effect of these events on animal populations is not very well understood, it is quite possible that they could have affected them enough to wipe out entire species and genera (Hoffman, 1989; Raup, 1989). The record of
the extinction rate as a function of time should then
consist of several sharp spikes, each representing a
mass extinction event, dominating a constant back-
ground of low extinctions as diversification is kept in
check by natural selection (see fig. 1 of Raup &
Sepkoski, 1982 for data with such a structure).
People interested in the interplay of evolution and
extinction have therefore traditionally pruned the
data, subtracting from it mass extinction events and
any other contribution believed to have been caused
by non-biological factors. However, with the ac-
cumulation of fossil data and their careful systematic
analysis, a somewhat different picture of evolution
and extinction has developed recently.
The first such study was carried out by Sepkoski
(1982) in A Compendium of Fossil Marine Families
which contained data from approximately 3500
families spanning 600 million years before recent
(Myr). This was recently updated to more than 4500
families over the same time period (Sepkoski, 1993).
This record enables one to see the variation in the
number of families as a function of time, as well as
the percentage of origination and extinction (see
Fig. 17). As can be seen in Fig. 17B, the extinctions
do not clearly separate into large ones (mass ex-
tinctions) and small ones (background extinctions).
In fact, extinctions of many sizes are present : a
few large ones, several medium-sized ones and lots
of small ones. This characteristic of the distribution
seems to be robust as was shown by Sepkoski (1982),
being already present in the 1982 Compendium. An-
other striking fact is that the origination curve
(Fig. 17A) is just as irregular as the extinction
curve.
300
0
60
40
20
0
500 400 300 200 100 0
Geological time (Myr)
P
e
r

c
e
n
t

f
a
m
i
l
y

e
x
t
i
n
c
t
i
o
n
T
o
t
a
l

f
a
m
i
l
y

e
x
t
i
n
c
t
i
o
n
A
B
200
100
600
Fig. 18. Time series of family extinction for both marine
and continental organisms, as compiled from Fossil Record
2 (Benton, 1993). (A) Total extinction of families. (B) Per
cent extinction of families. Each graph contains a minimal
and maximal curve, meant to take into account un-
certainty in a variety of taxonomic and stratigraphic
factors. Reproduced from Benton (1995). Myr, million
years before recent.
A similar result was obtained by Benton (1995)
using the Fossil Record 2 (Benton, 1993), which
contains 7186 families or family-equivalent taxa
from both continental and marine organisms. Fig.
18A shows the total number of family extinctions
and Fig. 18B the percentage of family extinctions as
a function of time. These curves too show extinctions
of many sizes. There are also similarities between the
general shape of the extinction curves from the fossil
compilations of Sepkoski (1982) and Benton (1993),
even though the curves correspond to organisms
which lived in dierent geographical areas. This last
fact had been noticed by Raup & Boyajian (1988),
using 20000 specimens of the 28000 from the Fossil
Compendium of Sepkoski (1982). They examined the
similarities between the extinction curves belonging
to dierent families or species and found that they
were quite similar, even if the species or families
concerned lived very far from each other. To describe
this situation, Raup & Boyajian (1988) coined the
phrase that the taxa seemed to 'march to the same
drummer'. They concluded that this obviously cannot
result from purely taxonomic selectivity, and
that external, large-scale, non-biological phenomena
were responsible for most of these extinctions.

Fig. 19. (A) Distribution of extinction events as a function
of their size, for 2316 marine animal families of the
Phanerozoic. Reproduced from Sneppen et al. (1995). (B)
Same distribution plotted on a log-log scale. The straight
line has slope -2.7 and has been fitted to the data while
neglecting the first point, which corresponds to events
smaller than 10%.
However, a closer inspection of the extinction
curves shows even more striking results. Raup (1986)
sorted the extinction events according to their size,
and computed the frequency of each size of events
(see Fig. 19A). This distribution is smooth, instead of
consisting of just two spikes corresponding to small
extinction events (background extinction) and very
large events (mass extinctions). In fact, it seems to
follow a power law as can be seen in Fig. 19B. The
frequency of the smallest extinction events (smaller
than 10%) is rather far from the distribution.
However, this can be explained by the fact that small
events are more sensitive to counting errors, as they
can be masked by background noise. The exponent
of the distribution can be evaluated to be between
-2.5 and -3.0. If, indeed, the distribution obeys a
power law, then extinction events of all sizes could
occur: from one family or species, to extinction of all
the organisms in the ecosystem. As in the case of
avalanches in the sand pile, there would be no
typical size for extinction events. Mass extinction
events would be contained in the skew end of the
distribution, and the separation of extinction events
into mass extinctions and background extinction
might therefore be artificial and debatable.

Fig. 20. Number of families of Ammonoidea as a function
of time over a period of 320 million years, with a time
definition of 8 million years (A) and 2 million years (B).
The bottom graph is therefore an expansion of the framed
region of the top graph, but at a higher time resolution.
Similar features appear as the time scale is reduced,
implying self-similarity in the record. Reproduced from
Solé et al. (1997). Myr, million years before recent.
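As a concrete illustration of the kind of fit shown in Fig. 19B, the short Python sketch below (entirely our own: the data are synthetic and all variable names are invented for the example) bins a set of extinction sizes and estimates the exponent by a straight-line fit on a log-log plot, neglecting events smaller than 10% as discussed above.

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic stand-in for the fossil data: 2000 extinction sizes (per cent
    # of families) drawn from a power law with exponent -2.7 above 4%.
    event_sizes = 4.0 * (1.0 - rng.random(2000)) ** (-1.0 / 1.7)

    bins = np.arange(4, 104, 4)                      # equal-width bins, as in Fig. 19A
    counts, edges = np.histogram(event_sizes, bins=bins)
    centres = 0.5 * (edges[:-1] + edges[1:])

    mask = (counts > 0) & (centres > 10)             # neglect events smaller than 10%
    slope, intercept = np.polyfit(np.log10(centres[mask]),
                                  np.log10(counts[mask]), 1)
    print(f"fitted exponent: {slope:.2f}")           # close to -2.7 for this sample

The same few lines, applied to real extinction data, would return a slope somewhere in the range quoted above; the exclusion of the smallest events mirrors the treatment of the first point in Fig. 19B.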
If extinction event statistics follow a power-law
distribution, then the time series of the number of
families present as a function of time should also
have some interesting properties. This is indeed the
case, as shown by Solé et al. (1997). They first
pointed out that there is self-similarity in the fossil
record of the families of Ammonoidea (House, 1989)
(see Fig. 20). Taking part of the time series and
expanding it by improving the time definition shows
a structure similar to the original record. The record
therefore seems to be self-similar, or fractal. Solé et al.
(1997) confirmed the presence of this fractal struc-
ture by computing the power spectra of the signals of
Fig. 18. Fig. 21 shows the result: the power spectrum
is of a 1/f type. The authors also computed the
power spectra of time series for origination, total
extinction rate and per family extinction rate of
continental or marine organisms, with similar results
(see Solé et al., 1997 for further details).

Fig. 21. Power spectra P( f ) of the time series of Fig. 18 for
the total number of family extinctions (A) and of the
percentage of family extinctions (B). The dashed lines
correspond to a 1/f^β spectrum with β ≈ 0.97 (A) and
β ≈ 0.98 (B). Reproduced from Solé et al. (1997).
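For readers wishing to repeat this type of spectral analysis, the sketch below (our own illustration, not the procedure used by Solé et al., 1997) estimates the exponent of a 1/f^β spectrum from a simple periodogram; the placeholder series is Brownian noise, for which the fitted exponent should come out close to 2, whereas the fossil series discussed here give values close to 1.

    import numpy as np

    rng = np.random.default_rng(1)
    series = np.cumsum(rng.standard_normal(1024))    # placeholder signal (Brownian noise)

    power = np.abs(np.fft.rfft(series - series.mean())) ** 2
    freqs = np.fft.rfftfreq(series.size, d=1.0)      # d is the sampling interval (e.g. in Myr)

    mask = freqs > 0                                  # drop the zero-frequency component
    slope, _ = np.polyfit(np.log10(freqs[mask]), np.log10(power[mask]), 1)
    print(f"P(f) behaves roughly as 1/f^{-slope:.2f}")   # near 2 for Brownian noise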
Fractal structure has also been shown in the
division of families into sub-families and of taxa into
sub-taxa. This was performed by Burlando (1990,
1993), who counted the number of sub-taxa associ-
ated with a given taxon. He then classified them
according to this number and compiled the dis-
tribution of sub-taxa related to a single taxon. The
distribution follows a power law, with the exponent
taking a value between -2.52 and -1: taxa
containing only one subtaxon numerically dominate
those with two subtaxa, and so on. The smaller the
value of the exponent, the lower the frequency of
taxa having many subtaxa. This work was carried
out on extant organisms (Burlando, 1990), and then
afterwards included fossils (Burlando, 1993).
Roughly, it shows that evolution has followed a path
of diversification which looks like a tree whose
branches often divide a few times, but very rarely
divide many times (see Burlando, 1990, 1993 for
further details). It has been shown that the dis-
tribution of life spans of genera also follows a power
law (see Fig. 22A), with an exponent roughly equal
to -2.0 (see Solé et al., 1997 for details).

Fig. 22. (A) Distribution of the life spans of fossil genera
in millions of years. It follows a power-law distribution
with an exponent of approximately -2. Reproduced from
Sneppen et al. (1995). Myr, million years before recent.
(B) Variation of the mean thoracic width of fossilized
Pseudocubus vema as a function of depth in core (cm), which
covers approximately 4 million years. Reproduced from
Kellogg (1975).
Two facts which might seem at first unrelated to
our discussion (but will be accounted for by the
models presented below) are some curious patterns
of the time evolution of species characteristics and of
the number of members in species or genera. First,
changes in the morphological characteristics of
species do not seem to happen in a continuous
fashion. Kellogg (1975) showed that the mean
thoracic width of Pseudocubus vema did not change
continuously in history, but rather followed sharp
transitions separated by periods of stasis (see Fig.
22B). This behaviour was called Punctuated Equi-
librium by S. J. Gould and N. Elderedge (Elder-
edge & Gould, 1972; Gould & Elderedge, 1993)
and describes the tendency of evolution to take place
via bursts of activity, instead of as a continuous,
steady progression. Second, the time variation of the
number of live individuals from a given species or
family can be shown to be discontinuous. Indeed,
Raup (1986) showed that the percentage of live
individuals of a given family (the so-called sur-
vivorship curve of the family) does not go to zero
following a simple decreasing exponential. An
exponential would imply a constant extinction
probability, as in the case of the decay of
radioactive material. Instead, Raup (1986) shows
that it decreases in bursts, separated by plateaux.
The bursts of extinction often coincide with known
large extinction events, giving further support for the
picture of the 'march to the same drummer'
mentioned above (Raup & Boyajian, 1988).
These puzzling results suggest several questions,
the first being whether the fossil record unearthed to
date follows power-law distributions of extinction
events and power spectra. There has been quite a lot
of work on this topic, using statistical tools as well as
Monte Carlo simulations. So far it seems that power
laws are the distributions which reproduce best the
data in most, but not all, cases. The interested reader
is referred to the literature for further details (see for
instance Newman, 1996 and Solé & Bascompte,
1996 for a discussion).
A more difficult question is whether real ex-
tinction and origination statistics truly follow power-
law distributions, and if their time series really are
1\f signals. It is a well-accepted fact that the great
majority of species which ever appeared on earth are
now extinct. Further, species alive today, which run
into the millions, largely outnumber the 250000 or
so fossil specimens uncovered to date. Therefore, the
results reviewed above have been computed using an
extremely small sample. Furthermore, these have
reached us because they were preserved as fossils,
which are the product of very particular geological
conditions. One can then wonder to what extent this
sample is representative of the set of species which
have lived so far. If not, is the process of fossilisation
responsible for the distributions presented above?
This is not a trivial question, and one which is more
likely to be answered by geologists than by bio-
paleontologists alone. In what follows, we will put
this debate aside, and make the following hypothesis :
power laws represent well the statistics of the
evolution of species which took place on earth.
However, the exponents derived to date might not
have the right values.
Making this bold hypothesis raises further ques-
tions. If indeed mass extinctions were caused by
catastrophes, then have the smaller events been
caused by smaller or more local perturbations? This
connection between the size of events and that of the
perturbations was postulated by Hoffman (1989),
Maynard Smith (1989) and Jablonski (1991). In
that spirit, some work has also been carried out to
find periodicity in the records in an attempt to then
match them to cyclic perturbations or phenomena
(Raup & Sepkoski, 1984). However, this raises the
following problems : (1) what perturbation distri-
bution would give the power laws observed in
extinction records ? To answer this, one needs (2) to
understand, to a certain degree at least, the impact
of a given perturbation on an ecosystem. This, in
turn, implies (3) knowledge about interactions
between species, since if one goes extinct because of
some external factor, others, which are dependent on
it in some way, might also disappear. In what
follows, we review concepts and mathematical
models which address these three issues.
(b) Remarks on evolution and modelling
However, before considering models, some remarks
have to be made on exactly what can be expected
from them in this context (see also Bak, 1996 from
which this discussion is reproduced).
Ecosystems are, to say the least, extremely
intricate systems. They are formed by biological
components (individuals), which are themselves
subject to complicated inuences from other individ-
uals, as well as from their environment (geological
and meteorological factors). It is certainly probable
that, at least for a small portion of its existence, the
earth's ecosystem has been sensitive to external
inuences. Let us consider for example the critical
period where life had just appeared on our planet.
Had the earth been subjected at that time to a large
dose of X-rays from some not-so-distant supernova,
these rst organisms might have died instead of
spreading and evolving as they did. Such an
untimely event might have delayed the appearance
of life on earth by millions of years, or maybe forever.
It is therefore possible that if the history of the earth
was run over again from the beginning, our planet
would not be as it is now: removing a single small
event in its history might change the present as we
know it. This explains why a historical approach to
evolution is almost always adopted. Events are
explained a posteriori by nding probable causes for
them. For instance, man and the chimpanzee are
said to have evolved from a common ancestor
because of some geographical factor: the part of the
initial population which chose to live in the forest
evolved to become the chimpanzee, while the part
which stayed on the open plains evolved towards
man. Although this gives insight into the chronology
of events, it does not explain why this diversification
occurred. In light of this argument, it would therefore
appear foolish to aim at reproducing with math-
ematical models the time series of Figs 17 and 18 for
instance. The question of what models of evolution
could, or should, be able to reproduce is therefore
not trivial.
This diculty can, however, be sidestepped by
considering only the statistical aspects of the time
series as relevant to modelling. To do so, one
considers the time series of extinctions as being
generated by a stochastic system, i.e. a system
subject to random perturbations. Because their
dynamics include some randomness, stochastic sys-
tems do not produce the same trajectories every
time, even when starting with exactly the same
initial conditions. Let us consider the simple example
of the percolation system of Section III.3 (per-
colation is not exactly a stochastic system, but it will
suffice for the present purposes). Because tree growth
on the field in that example was implemented using
an event generator with probability p, the positions
of trees will not be the same each time we run the
simulation. However, the statistical characteristics of
the system will be constant from one run to another.
For example, the number of trees will be roughly the
same each time and equal to Np. Similarly, one
should try to reproduce only the statistical aspects of
the extinction and evolution signals or, to be more
precise, the exponents of the power law distributions
of event size and duration, and of the power spectra
of the signals. Models will then be built by mimicking
the most important features of evolution and
extinction, and afterwards be judged on their ability
to reproduce these exponents.
Another issue concerns the amount of detail one
has to include in models in order to reproduce the
data. This is a somewhat technical problem, but it
would be reassuring to have some conceptual handle
on the question. After all, what good is a model if one
has to include in it an infinite amount of detail to
make it reproduce the data? However, here the
notion of universality proves helpful. The abundance
of self-similarity and power laws in fossil data is
suggestive that ecosystems operate near a critical
point (see Section IV.2.c). The exponents of the
distribution are therefore analogous to the critical
exponents defining the dynamics of magnetic sys-
tems, for instance. However, as we saw in subsection
II.4, these exponents cannot take arbitrary values
because of the notion of universality: they are
constrained by the specic universality class the
system is in. This argument can be extended to
evolution and ecology. If these systems are critical, as
fossil data seem to suggest, one does not have to build
very complex systems to produce the right values of
the critical exponents. One just has to consider the
simplest model conceivable in the same universality
class as the ecosystem. Conversely, if the model
reproduces the critical exponents correctly, then it
might be expected that some important features of
evolution and extinction have been taken into
account.
If the power laws observed turn out not to be a
signature of criticality, one can still hope that the
statistics of the data can be reproduced using simple
models with robust dynamics, i.e. dynamics which are
insensitive to small changes. In what follows, since
we are making the hypothesis of the existence of
power laws in the dynamics of evolution (but none
concerning the actual values of their exponents), we
will be more interested in the mechanisms imple-
mented in the models than in their actual predictions
for the exponents of these power laws. The latter can
be found in the literature.
(c) Critical models of evolution
It was Kauffman & Johnsen (1991) who first
introduced the idea of criticality in the modelling of
ecosystems. I will present their work next but,
before that, I will briefly illustrate the notion of criticality
in evolution using a toy model (i.e. a simplistic
model) similar to the magnetic sample of Section
II.4.
Here, we consider that the magnetic sample
represents an ecosystem, and that its spins symbolize
the species which may live there: if a spin is up, then
the species it corresponds to is present in the
ecosystem; if that spin points down, it will be absent
from it. Stretching this analogy further, flipping a
spin from the up position to the down position
represents an extinction event, while the contrary
symbolizes the introduction of a species into the
environment, or its origination. So, at a given time,
the arrangement of spins specifies the species content
of the ecosystem. The temperature parameter T of
the spin system does not have any immediate analogy
here. However, it allows us, by tuning it to specific
values, to fix the range of interactions between
species (as it did in the case of spins).
Let us first consider the case where the parameter
T is higher than its critical value: interactions in the
system (whether between spins or species) are then of
a short-range type. So if we remove or introduce a
species into the ecosystem (i.e. we flip a spin), only
neighbouring species will be affected: some might
disappear, because of competition or codependence;
some might also appear to take advantage of the new
free resources. Therefore, in this (stable) phase of the
dynamics of the ecosystem, only small extinction and
origination events will take place.
Next, let us set T at its critical value. Now, by
analogy with what we saw in Section II.4, inter-
actions are as large as the ecosystem itself. The
introduction or removal of just one species will aect
all others: we find ourselves in the situation where a
small perturbation to the system creates a large
extinction event, as large in fact as the whole
ecosystem. An origination event of similar size will
also take place as newcomers take advantage of the
new space available and free ecological niches. This
type of situation will arise in an ecosystem where
species are locked with each other in a tight chain of
codependence. This is the mathematical realization
of the concept of ultra-specialization, where beings
are so specialized that they cannot adjust to changes
in their environment. The classical example is that of
a predator which has evolved in order to catch a
single type of prey and cannot survive if this prey
disappears. Another way of viewing this is that, at
the critical point, the species arrange themselves as a
row of dominoes where the fall of one will bring
down all the others. Therefore, if ecosystems are
critical systems, then there is no need for huge
catastrophes to wipe out a large fraction of their
population: the elimination of a single species by a
small perturbation, or just natural selection, in a
critical ecosystem is enough to generate such large
extinction events. This therefore gives another
possible explanation of the fact that species seem to
'march to the same drummer' (Raup & Boyajian,
1988).
However, ecosystems are not made of spin-like
entities and there is no parameter T that can be
tuned to a critical value: we need to express the
concept of criticality in a more realistic biological
framework which enables us at the same time to
build models. For that purpose, the important
notions of fitness and fitness landscapes prove
extremely useful.
Fitness (see Wright, 1982; Kauffman & Levin,
1987) is a number, arbitrarily chosen between 0 and
1, which quantifies the aptitude to survive of a given
individual or species. [Note that in the following
discussion, living entities are approximated by their
genotype, and species are represented by a single
representative. They will then be used inter-
changeably.] The closer to 1 the fitness, the better
are the chances of the species thriving and surviving.
By contrast, a small fitness is usually characteristic of
organisms likely to disappear quickly from the
ecosystem. The fitness of a particular individual will
depend on several factors. The first, of course, is its
genotype, or more accurately its phenotype, which
will determine to a large extent how it will interact
with the exterior world. Second are environmental
factors such as geographical location, climatic
conditions, etc., but also the interactions with other
beings living in the area. When these latter con-
ditions are kept fixed, Kauffman & Levin (1987)
showed that it is possible to construct a fitness
landscape, i.e. a surface which associates a fitness
with every possible genotype. Pictorially, this should
look like a mountain landscape with hills (genotypic
regions of high fitness) and valleys (genotypic regions
of low fitness). In this construction, the genotype of
an individual is represented by a dot somewhere on
the landscape. Motion on the landscape is possible
by mutations. According to the formalism of Kauff-
man & Levin (1987), because of the selection
pressure exerted on the individual, this walk will
drive the entity from one fitness peak to another as it
tries to improve its chances of surviving.
Of course, as time passes, the environment of the
individual will change. For instance, if the favourite
prey of a given predator disappears, the latter might
have difficulty surviving. Similarly, if the prey,
instead of disappearing, develops a new tactic to
evade the predator, then the latter will also struggle.
So, in fact, in the latter case, by raising its fitness, the
prey has lowered at the same time that of the
predator. This is an example of the mechanism by
which an individual can affect the fitness of other
species by changing its own. So, in fact, members of
an ecosystem, while evolving, are performing a walk,
as Kauffman & Johnsen (1991) put it, on a rubbery
fitness landscape which changes all the time as other
species evolve as well. If, as a consequence, several
species simultaneously lock themselves together in a
codependence chain, one can immediately see how a
critical state can be attained by the ecosystem.
The NKC model (Kauffman & Johnsen, 1991) is
a mathematical realisation of these principles (see
Fig. 23). It simulates the dynamics of the interactions
between species by assigning to each of them their
own fitness landscape. Roughly, each species is
described by a set of N genes, the activity of which
determines its fitness B. The dependence of B on the
genotype is actually non-linear, as the contribution of
each gene depends also on that of K other genes (see
Fig. 23). The result is a fitness landscape with a
complicated structure comprising many dips and
hills (see Kauffman & Johnsen, 1991 for further
details). In order to implement the interactions
between species, the authors also included in the
definition of the fitness of each individual a
contribution from C genes of the genotypes of other
species (see Fig. 23): changing a gene by mutation
might therefore raise one's fitness, but it will also
change that of others. The addition of this mech-
anism gives rise to complicated dynamics in the
system, which depends strongly (for fixed values of N
and K) on the value chosen for C.

Fig. 23. Diagram of the NKC model for an ecosystem of
three individuals. Each entity (1, 2, 3) is defined by N
genes, including a subset of K regulatory genes
(represented by loops on the diagram), to which a fitness
and a fitness landscape can be associated. The current
genotype of each individual is shown as a dot on the grey
fitness surface. The fitness surfaces also depend on C genes
from the other members of the ecosystem (symbolized by
arrows). Here, entities 1 and 3 are on maxima of their
respective fitness landscapes. However, 2 is not, but it
might reach a nearby maximum at the next time step.
This would at the same time modify the fitness landscapes
of 1 and 3, which might not be at fitness peaks any more
in the new landscapes. They would then have to start
evolving as well. This is a simple example of a coevolution
event.
If C is smaller than K, the ecosystem settles quickly
into a stable state of equilibrium where all indi-
viduals have reached local fitness maxima. The
authors, using the vocabulary of phase transitions,
describe this phase as 'solid' or 'frozen'. On the
other hand, if C is large compared to K, the system
takes long periods of time before settling (if it ever
does) to an equilibrium state ('gaseous' phase). The
parameter C therefore plays a role similar to that of
temperature in phase transitions.
It is, however, at the border between these two
phases, where C is roughly equal to K, that the most
interesting behaviour takes place. For this value, the
system is able to evolve towards an equilibrium state.
However, when a perturbation, such as forcing one
individual off its fitness peak for instance, is intro-
duced, it modifies at the same time the fitness
landscape of its neighbours. This pushes the system
away from its equilibrium state, resulting in a phase
of activity where entities resume mutating until they
reach fitness maxima again. The distribution of the
size of these coevolution avalanches (to use the
vocabulary of the sandpile model) is shown to follow
a power law, similarly to those extracted from
fossil data. This result is, however, only obtained by
tuning the parameter C near K, thereby setting the
system in a critical state similar to that of the toy
model presented above. We refer the reader to the
literature for further details on the very rich
dynamics of the NKC model (Kauffman, 1989a, b;
Kauffman & Johnsen, 1991; Bak, Flyvbjerg &
Lautrup, 1992).
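The following sketch gives a very stripped-down flavour of NKC-style coevolution. It is our own construction, not Kauffman & Johnsen's code: each species is coupled to a single neighbour on a ring, and memoized random draws stand in for the lookup tables of fitness contributions used in the original model.

    import random
    from functools import lru_cache

    random.seed(0)
    S, N, K, C = 10, 8, 4, 4          # species, genes per species, epistatic and coupling genes

    # Each gene's contribution depends on its own state, K other genes of the same
    # species and C genes of the neighbouring species; memoization ensures the same
    # configuration always yields the same (random) contribution.
    @lru_cache(maxsize=None)
    def contribution(species, gene, key):
        return random.random()

    def fitness(s, genomes):
        g = genomes[s]
        neigh = genomes[(s + 1) % S]                 # one coupled neighbour, for simplicity
        total = 0.0
        for i in range(N):
            own = tuple(g[(i + j) % N] for j in range(K + 1))     # gene i plus K others
            other = tuple(neigh[(i + j) % N] for j in range(C))   # C genes of the neighbour
            total += contribution(s, i, own + other)
        return total / N

    genomes = [[random.randint(0, 1) for _ in range(N)] for _ in range(S)]

    for step in range(2000):
        s = random.randrange(S)
        i = random.randrange(N)
        old = fitness(s, genomes)
        genomes[s][i] ^= 1                            # try a single-gene mutation
        if fitness(s, genomes) < old:                 # adaptive walk: keep only non-deleterious moves
            genomes[s][i] ^= 1                        # revert deleterious mutations

Because each fitness evaluation reads the neighbour's genome, a species can be pushed off a peak whenever its neighbour moves, which is the coevolutionary ingredient discussed above.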
(d) Self-organized critical models
In the previous models, one had to tune parameters
(such as C in the NKC model) in order to put the
system in a critical state. This state seems an
interesting compromise for evolution as it ensures
periods of relative tranquillity, as well as a thorough
renewal of the species content of ecosystems which,
in turn, ensures diversity. It has then been argued
that nature might have evolved by itself towards this
fine-tuning.
A different approach consists in taking advantage
of the principle of self-organized criticality where, as
we saw, systems evolve towards a critical state
without the need to adjust any parameters. The first
model of evolution of this type was introduced by
Bak & Sneppen (1993).
Let us consider an ecosystem composed of N
species, each being assigned a fitness B_i (i = 1, 2, ...,
N) as defined before. The first rule of the dynamics
of the model implements the purely Darwinian
scheme of mutation and selection: each species i can
mutate with a probability q_i = e^{-B_i/λ}, where λ is some
positive parameter smaller than 1 which defines the
frequency of mutations. When a species does mutate,
it is then assigned a new fitness. q_i is a non-linear
function of B_i which ensures that only species with
low fitness will mutate, since those with high B_i have
considerably fewer options to raise their fitness
further. It is important to note at this point that, in
the Bak-Sneppen model, extinction and evolution
are two facets of a single mechanism: the species with
the lowest fitness goes extinct and is replaced by a new
one, which evolved from the former by mutation
(therefore inducing evolution). Evolution therefore
takes place in the ecosystem only because an
extinction event has occurred. This is a very simplified
view of the phenomenon, which has been modified in
subsequent models.
Fig. 24A shows the state of the ecosystem at some
initial time where we have assigned to each species a
random value for its fitness. Fig. 24B shows the result
of the application of a purely Darwinian rule of
natural selection, i.e. 'survival of the fittest'. The
system converges to a state where all the fitnesses are
close to 1. We note that the convergence to this state
becomes increasingly slower as the minimum fitness
of the species present in the ecosystem is progressively
raised by selection: at the beginning of the simu-
lation, the system goes through many mutation
events, but they become less and less frequent with
time. This does not reproduce the patterns we saw in
Figs 17 and 18, with extinction events of all sizes over
the whole time record. The dynamics is clearly
incomplete. The interactions, especially dependence,
between species, which were present in the NKC
model for instance, are missing.
To remedy this problem, Bak & Sneppen (1993)
implement the interactions between species of the
model in the following way: whenever a species
mutates, so will its immediate neighbours. The
addition of this simple rule gives a dramatically
different dynamics to the system. Instead of finding
itself again in a situation where B is close to 1 for
every species, the ecosystem settles in a different state
where almost all the fitnesses have grouped them-
selves in a band higher than some threshold value
B_c ≈ 0.667 (see Fig. 24C). This state is stable, as it
persists for however long we let the simulation run.
The convergence to this state gives the system the
following interesting dynamics.

Fig. 24. Distribution of the values of the fitness B for an
ecosystem of N = 200 species. (A) Initial state where all
fitnesses are chosen randomly. (B) State of the ecosystem
after following the purely Darwinian principle of re-
moving the least fit species and replacing it by a mutant
species. (C) The same ecosystem after the same number of
time steps, but also using the updating rule of Bak-Snep-
pen, which takes into account interactions between species.
As long as all species have fitnesses above B_c, the
system is in a phase of relative tranquillity where
mutations seldom happen. Typically, one should
wait a period of approximately 1/q_c = e^{B_c/λ} ≈ 10^29
time steps to see a single mutation occur. Here, the
organisms coexist peacefully and there is little change
taking place in the ecosystem.
However, when a mutation does occur in which
one of the species inherits a fitness lower than B_c, the
system enters a phase of frantic activity, as the
probability of a mutation taking place is now much
higher than q_c. This state is able to sustain itself as
species get knocked out of the high end of the fitness
region when one of their neighbours mutates. The
system will eventually settle back momentarily to its
quiet state when all species again have B > B_c. The
series of extinctions/mutations which took place
forms an event of measurable size and duration. Fig.
25A shows a series of such events as a function of
time. One notices that there are events of many sizes,
similarly to the records from Fig. 17. Bak & Sneppen
(1993) have shown that the distribution of event size
and duration follows a power law, as does the
distribution of interaction ranges between species of
the ecosystem. This demonstrates that the system has
indeed reached a critical state where species interlock
spontaneously (i.e. without any parameter fine
tuning) into a chain of codependence. What happens
is that in the quiet state, stress has been building up
in the ecosystem like in the sandpile when it is very
steep. When a species mutates to a fitness lower than
B_c, it is like adding the grain of sand which triggers
an avalanche. In this case, it is a co-evolutionary
avalanche where mutation of species induces further
mutations of their neighbours.

Fig. 25. (A) Time series of the size of extinction/mutation
events for an ecosystem with 20 species (in most events, a
single species can mutate several times). (B) Time
evolution of the accumulated change of a given species of
the ecosystem. Every time the species mutates, the
accumulated change increases by one unit. The time is
measured in mutation events.
Fig. 25B shows the 'accumulated change' of a
given species in the ecosystem as a function of time.
Notice the periods of change (vertical lines) sepa-
rated by periods of stasis (horizontal line). This
compares well with Fig. 22 illustrating the notion of
punctuated equilibrium.
For further details on the model, its dynamics and
how it compares with fossil data, the reader is
referred to the literature (Sneppen, 1995; Sneppen et
al., 1995).
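A minimal sketch of the Bak-Sneppen ecosystem is given below. It uses the extremal version that is most often simulated (at every step the single least-fit species, rather than a thermally selected one, mutates together with its two neighbours on a ring) and records avalanche sizes relative to the threshold B_c quoted above; the parameter values and bookkeeping are ours, not the original authors'.

    import random

    random.seed(1)
    N = 200                                          # number of species on the ring
    fitness = [random.random() for _ in range(N)]

    B_C = 0.667                                      # approximate self-organized threshold
    avalanche_sizes = []
    current = 0

    for step in range(100_000):
        i = min(range(N), key=fitness.__getitem__)   # least-fit species
        if fitness[i] < B_C:
            current += 1                             # avalanche in progress
        elif current:
            avalanche_sizes.append(current)          # avalanche just ended
            current = 0
        for j in ((i - 1) % N, i, (i + 1) % N):      # replace it and its two neighbours
            fitness[j] = random.random()

A histogram of avalanche_sizes on a log-log plot reproduces the power-law behaviour described in the text, without any parameter having been tuned.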
Another interesting model which describes ecosys-
tems as critical systems was introduced by Solé (1996)
and Solé & Manrubia (1996). Here, evolving
interactions between species are the key ingredient
which drives the system towards a critical state. At
each time step, the total stress imposed by the rest of
the ecosystem on each species is computed. Species
which are subject to too much pressure go extinct,
and are replaced by mutants of the remaining
species. There are also spontaneous mutations
occurring, but at a lower level. This will drive the
system towards a state where removing just one
species will perturb the existence of many others, and
extinction events of all sizes will occur. Solé,
Bascompte & Manrubia (1996) and Solé &
Manrubia (1997) showed that this system becomes
critical, with an extinction event distribution,
among other things, which follows a power law. For
further details see Solé et al. (1996), Solé & Manrubia
(1997) and references therein.
(e) Non-critical model of mass extinction
So far in this discussion, we have presented work
which interprets the presence of power laws in fossil
data as evidence that ecosystems operate at, or near,
a critical state. However, power laws do not always
imply criticality as Newman (1996, 1997a, b)
showed. Indeed, he proposes a model for evolution
where no interactions are explicitly present between
species (although they may be implicit) which
accounts well for the data and seems to indicate that
codependence between species might, although
important, not be an essential ingredient in eco-
system dynamics. As we will see, his model therefore
does not interpret mass extinction events as co-
evolutionary avalanches, but rather as the result of
strong external stress on a temporarily weakened
ecosystem.
Part of the model of Newman (1997a, b) is the
Darwinian mechanism of elimination of unfit species
by selection, which is implemented as follows. As
in the Bak-Sneppen model, each species possesses a
fitness B. Stress, symbolized by a number between 0
and 1, is drawn using a random number generator
and applied to the system, eliminating all species
with fitness smaller than this number. This stress can
be of physical origin (geographic location, climate,
volcanic eruptions, earthquakes, meteorites, etc.)
but also it can be caused by other species (predators,
competitors, parasites, etc.). Selection is then fol-
lowed by diversication as free ecological niches (in
this case, the spaces left vacant by the species which
just disappeared) are lled by new species. However,
these dynamics of a purely Darwinian type lead to
the same situation as the one illustrated by Fig. 24B
where the ecosystem becomes increasingly stable by
lling itself with highly t species. The interesting
variation which Newman (1997a, b) introduces to
complement his model is, instead of codependence,
the spontaneous ability of species to mutate and
evolve at all times, even in times of little external
stress. A new rule is then added to the model which
allows at each time step a small proportion of the
species of the ecosystem to mutate spontaneously.
This changes radically the dynamics of the system
which favours now a rich and complex activity.
Simulations show that the distribution of the size of
extinction events, as well as of species lifetimes obey
power laws. By making supplementary assumptions
about the model, Newman (1997a) was also able to
estimate the distribution of species in each genus,
which is also of this type and in the spirit of the
findings of Burlando (1990, 1993) (see Section
IV.2.a).
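A rough sketch of the mechanism just described is given below. It is our own reading of Newman's coherent-noise idea, with an exponentially distributed stress and purely illustrative parameter values; none of the names come from the original papers.

    import random

    random.seed(2)
    N = 1_000             # number of species (niches)
    F_MUT = 0.002         # fraction of species mutating spontaneously per step
    SIGMA = 0.05          # scale of the exponentially distributed stress

    fitness = [random.random() for _ in range(N)]
    extinction_sizes = []

    for step in range(20_000):
        eta = random.expovariate(1.0 / SIGMA)            # stress level this step
        killed = sum(1 for b in fitness if b < eta)      # species below the stress die
        fitness = [b if b >= eta else random.random() for b in fitness]  # refill niches
        for _ in range(int(F_MUT * N)):                  # spontaneous mutations
            fitness[random.randrange(N)] = random.random()
        if killed:
            extinction_sizes.append(killed)

Without the spontaneous mutations the ecosystem would freeze into a highly fit state, as in Fig. 24B; with them, the size distribution in extinction_sizes develops a broad, power-law-like tail.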
That this model is able to produce power laws
without any parameter fine tuning or resorting to a
critical state is quite impressive. It is important to
note that the ecosystem of Newman's model (1997a,
b) is not critical: the species do not spontaneously set
themselves in highly unstable states to then become
extinct like falling dominoes. These events, called
'coevolutionary avalanches', are a trademark of
critical models of evolution and a prediction which
has to be tested against fossil data. In Newman's
model (1997a, b), on the other hand, there are no
such avalanches as there are no direct interactions
between species. Interactions are implicitly included
in the stresses applied to the system. However, the
particular dynamics of the system allows it to make
original predictions of its own.
The model first predicts that the longer the
waiting time for a large stress, the greater the next
extinction event will be. Indeed, periods of low
stresses have a tendency to increase the number of
species with low to medium fitness. These will then
be wiped out at the next large stress. The model
therefore tells us that, in order to adapt to their
surroundings, species can sometimes render them-
selves vulnerable to other, less frequent stresses. This
could be tested using fossil data if information about
the magnitude of external perturbations can be
obtained independently.

Fig. 26. Time series of extinction events for the model of
Newman (1997a, b) for an ecosystem of N = 100 species,
illustrating the notion of aftershocks following a large
extinction event.
Another prediction of the model is the so-called
'aftershock', illustrated by Fig. 26. After a large
extinction event, the ecosystem is almost entirely
repopulated by new species. Statistically, about half
of them will have fitness lower than 0.5. If another
large stress is applied to the system, most of these new
species will be immediately wiped out, as they do not
have time to evolve sufficiently to withstand this new
perturbation. The second event will therefore be of
large size because the preceding extinction event left
the ecosystem vulnerable. Newman (1997a) gave the
name 'aftershock' to this phenomenon. This pre-
diction could also in principle be tested using fossil
data. We refer the reader to the literature for further
details (Newman, 1997a, b).
V. DYNAMICS OF EPIDEMICS
We now turn to the work of C. J. Rhodes, R. M.
Anderson and colleagues on type III epidemics.
Section V.1 presents evidence for scale invariance in
the dynamics of this type of epidemic. A self-
organized critical model, originally introduced to
account for the spreading of forest fires, and which
reproduces the experimental data well, is then discussed
in Section V.2.
(1) Power laws in type III epidemics
Let us consider the records of measles epidemics in
the Faroe Islands in the north-eastern Atlantic, and
more specically the time series of the number of
infected people in the community month by month,
also called the monthly number of case returns. This
example is interesting in several respects. The
population under investigation is approximately
stable, at between 25000 and 30000 individuals. It is
also somewhat isolated, the main contacts with other
communities taking place during whaling and
commercial trade. It is believed that this is the main
route by which the measles virus enters the popu-
lation. The virus therefore infects a, at least partially,
non-immune population and epidemic events of
various sizes take place. The population is, however,
small enough for the epidemics to die out before the
next one starts. Because of the easily noticeable
symptoms of measles, the record is also believed to be
highly accurate. Fig. 27A shows the time series of
measles case returns for 58 years, between 1912 and
1972, before vaccination was introduced. The record
shows 43 epidemics of various sizes and duration
(most of which are very small and do not appear on
the gure).
We define an epidemic as an event with non-zero
case returns bounded by periods of zero measles
activity. Its duration t is the total number of
consecutive months during which case returns are
non-zero, and the size s of the event is the total
number of persons infected during that period. With
these denitions, the records show epidemic events
ranging from one individual to more than 1500 (and
engulfing close to the whole of the islands in one instance),
and with duration between one month and more
than a year.
Rhodes & Anderson (1996a, b) and Rhodes,
Jensen & Anderson (1997) computed the size and
duration distribution of the events and obtained the
results shown in Fig. 27B, C. The cumulative size
distribution D(size > s) clearly follows a power law
over three orders of magnitude, as does the duration
distribution of events over one order of magnitude:
D(size > s) ∝ s^{-a},   (17)
D(duration > t) ∝ t^{-b},   (18)
where a ≈ 0.28 and b ≈ 0.8. [The cumulative distri-
butions D(size > s) ∝ s^{-a} and D(duration > t) ∝ t^{-b}
are related to the non-cumulative distributions
D(s) ∝ s^{-α} and D(t) ∝ t^{-β}, since α = 1 + a and β = 1 + b.]

Fig. 27. (A) Monthly measles-case returns for the Faroe
Islands (population of approximately 25000 inhabitants),
between 1912 and 1972. (B) Cumulative distribution
D(size > s) of epidemics according to their size s, compiled
from A. The straight line represents a power law with
exponent a ≈ 0.28. (C) Cumulative distribution D(dura-
tion > t) of epidemic event duration t. The straight line
has a slope of b ≈ 0.8. Reproduced from Rhodes &
Anderson (1996b).
Rhodes & Anderson (1996b) also performed this
analysis on records between 1912 and 1970 for the
island of Bornholm, Denmark (a ≈ 0.28 and b ≈ 0.85)
and the town of Reykjavik, Iceland (a ≈ 0.21
and b ≈ 0.62) (results not shown). They later
improved their estimations of the parameter a for
measles, whooping cough and mumps epidemics in
the Faroe Islands, by using longer records which run
from 1870 to 1970 (see Rhodes et al., 1997).
This power-law behaviour shows the absence of
characteristic scale in the size and duration of
epidemics. So far, it has been difficult to reproduce
the statistics of these events using traditional models
of epidemiology. This is the case for the SEIR (which
stands for susceptible, exposed, infective and re-
covered individuals) compartmental model of Ander-
son & May (1991), as was shown by Rhodes &
Anderson (1996a, b) and Rhodes et al. (1997). They
suggest that the mass-action law, on which the
model is based, overestimates the interactions be-
tween the susceptible (people who have not yet been
infected but can be if exposed) and the infective
(people who can infect other people), therefore over-
producing large epidemics. Heterogeneity is a factor
which differentiates different types of epidemic
dynamics. In large cities, because of the school
environment, measles is usually considered a child-
hood disease. However, in a mostly non-urban area
such as the Faroe Islands, the measles epidemics
afflict all age groups: the entire population is
susceptible to catching the virus. The epidemics of
the case presented here are therefore different from
those of urban areas, and are classified as type III
epidemics (Bartlett, 1957, 1960) (i.e. epidemics in a
small, isolated population of susceptibles).
(2) Disease epidemics modelling with
critical models
Given that traditional models such as the SEIR
model apparently failed to reproduce the observed
data, Rhodes & Anderson (1996a, b) tried a different
approach. They postulated that the power laws
observed in the distributions are in fact critical
exponents of some critical system, and directed their
attention instead towards a model first developed to
study turbulence in fluids and forest fire dynamics
(see Bak, Chen & Tang, 1990 and Drossel &
Schwabl, 1992 for details).
This model is somewhat similar to the percolation
model of plant growth of Section III.3. Here again,
at each time step, trees grow on empty areas of a field
with a probability p. Also present is a so-called
'lightning' mechanism which sets trees on fire with a
probability l. Once a tree is on fire it will burn in a
single time step and leave a vacant area behind it,
where new trees can grow. It will also at the same
time set ablaze its immediate neighbours, which will
do the same to nearby trees, and so on. A forest fire
is then defined as the event where trees are burning
on the field, and a size and duration can be associated
to it. Drossel & Schwabl (1992) showed that if the
introduction of new trees outnumbers the trees set
ablaze (so if p is much larger than l) then the system
will settle into a critical state with size and duration
distributions of a power-law type. Also in this state,
the distribution of burning trees in the forest can be
shown to be fractal (see Bak et al., 1990 and Drossel
& Schwabl, 1992 for details).

Fig. 28. (A) Time series of infectives obtained from the
simulation of case returns using the forest-fire model
described in the text. The time scale is arbitrary. (B)
Cumulative distribution D(size > s) of events according
to their size; it is compatible with a power law with
exponent a ≈ 0.29 (solid line). (C) Cumulative dis-
tribution D(duration > t) of duration of events, shown
with a power-law distribution of exponent b ≈ 1.5.
Reproduced from Rhodes & Anderson (1996b).
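The sketch below implements the forest-fire rules just described on a small two-dimensional lattice. It is only illustrative: the lattice size and probabilities are chosen for speed rather than to match the values used by Rhodes & Anderson, and the whole cluster connected to a struck tree is burned instantaneously, so only fire sizes (not durations) are meaningful here.

    import random

    random.seed(3)
    L_SIZE = 64                 # lattice side (illustrative, not the authors' value)
    P_GROW = 0.01               # tree growth probability per empty site per step
    P_LIGHT = P_GROW / 300      # lightning probability per tree per step (p >> l)

    EMPTY, TREE = 0, 1
    grid = [[EMPTY] * L_SIZE for _ in range(L_SIZE)]
    fire_sizes = []

    def burn(x, y):
        """Burn the whole tree cluster connected to (x, y); return its size."""
        stack, size = [(x, y)], 0
        while stack:
            i, j = stack.pop()
            if 0 <= i < L_SIZE and 0 <= j < L_SIZE and grid[i][j] == TREE:
                grid[i][j] = EMPTY
                size += 1
                stack.extend([(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)])
        return size

    for step in range(2000):
        for i in range(L_SIZE):
            for j in range(L_SIZE):
                if grid[i][j] == EMPTY and random.random() < P_GROW:
                    grid[i][j] = TREE                   # a new tree (susceptible) appears
                elif grid[i][j] == TREE and random.random() < P_LIGHT:
                    fire_sizes.append(burn(i, j))       # lightning: the cluster burns

In the epidemic reading, the recorded fire_sizes play the role of epidemic sizes, and their distribution is the quantity compared with equations (17) and (18).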
As Rhodes & Anderson (1996a) show, there are a
lot of similarities between the dynamics of forest fires
and the spreading of a disease in a community: trees
on the lattice represent susceptibles; burning trees
correspond to infectives; empty sites represent people
immune to the disease (or no people at all). The
forest fire model can then be viewed as a very simple
model of disease spreading in a community, but one
which does not enforce homogeneity as strongly as
the SEIR model, for instance. However, before
trying to apply this model to the problem at hand,
one must make sure that the fundamental condition
p ≫ l is satisfied. Otherwise the system will not
achieve a critical state. To do this, one must first
evaluate the parameters p and l corresponding to the
population of the Faroe Islands.
This population has been roughly constant over
the last century, at between 25000 and 30000
people. Between 1912 and 1970, there were 43
documented epidemics. An estimate of the prob-
ability of measles outbreaks would then be approxi-
mately l = 43/58 ≈ 0.74 per year. Also, to maintain
the population roughly constant, on average each
member of the community will give birth to one
child. Estimating the average lifetime of people in
the community to be approximately 70 years, this
gives a probability of 1/70 per year of giving birth to
a child. The number of newborns, and therefore
susceptibles, for the whole community is then
25000/70 ≈ 357 per year, which is roughly equal to
one per day. Therefore, we obtain l/p ≈ 0.74/365 ≈
1/493, which is very small compared to 1: the
condition for the system to settle into critical
dynamics is therefore well satisfied. There are clearly
two time scales in the model: births, which happen
almost every day, and the introduction of the virus,
which happens once a year. It is also important that
p is not too high, otherwise the birth of susceptibles
might fuel the existing epidemics for too long, maybe
even until the next epidemic arises.
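These estimates are easily checked with a few lines of arithmetic; the snippet below simply restates the reasoning above.

    # 58-year record, 43 epidemics, ~25000 inhabitants, ~70-year life span.
    epidemics, years, population, lifespan = 43, 58, 25_000, 70

    l = epidemics / years                      # ~0.74 virus introductions per year
    new_susceptibles = population / lifespan   # ~357 births per year, roughly 1 per day
    p = 1.0                                    # growth of susceptibles, taken as 1 per day

    ratio = (l / 365) / p                      # l/p with both rates expressed per day
    print(f"l = {l:.2f}/yr, births = {new_susceptibles:.0f}/yr, l/p = 1/{1/ratio:.0f}")
    # prints: l = 0.74/yr, births = 357/yr, l/p = 1/492
    # (the text quotes 1/493 because l is rounded to 0.74 before dividing)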
Rhodes & Anderson (1996b) used the following
parameters for their simulations: p = 0.000026 and
l = p/300. This gave them an average population of
approximately 25000 and a distribution of epidemic
events following power laws with exponents a ≈ 0.29
and b ≈ 1.5 (see Fig. 28). Later simulations (Rhodes
& Anderson, 1996a), where the system was allowed a
transitory period of approximately 130 years (to
attain criticality) and the data were collected during
the following 180 years, gave the improved expo-
nents a ≈ 0.25 and b ≈ 1.27, which are quite close to
those extracted from the records. This was done
using a two-dimensional lattice. Similar compu-
tations on three-dimensional and five-dimensional
lattices were carried out by Clar, Drossel & Schwabl
(1994), who found the values a = 0.23 and a = 0.45,
respectively. Apparently, the best match for the
observed measles and whooping cough patterns is
the three-dimensional forest-fire model, while the
five-dimensional version of the model reproduces
best the mumps critical exponent. Overall, the
accord is quite impressive.
I end this section with a few comments (Rhodes &
Anderson, 1996a, b; Rhodes et al., 1997). The power-
law distributions of epidemic events, and the fact
that the exponents a and b can be so well reproduced
by a critical model, are strong indications that the
spreading of measles and whooping cough in small,
isolated populations of susceptibles (i.e. in a type III
epidemic) is a critical phenomenon. This system and
that of the three-dimensional forest-fire model seem
to be in the same universality class (similarly with
the five-dimensional forest-fire model and the dy-
namics of type III mumps epidemics). Such a close
match using three- and five-dimensional models for
disease spreading can seem a little odd. However, as
Rhodes et al. (1997) pointed out, one should not view
these dimensions as physical or geographical dimen-
sions. They are closer to an effective dimensionality of
the space of social connections: the more dimensions,
the more social contacts are involved. If the disease
is less transmissible (like mumps compared with measles),
the social interactions are likely to be more frequent,
therefore the dimension of the social interaction
space will be higher.
Lastly, using the power-law distributions for D(s),
Rhodes & Anderson (1996a) showed that it is
possible to predict the number E of measles epidemics
between sizes s_l and s_u for a given time interval:
E = 43[s_l^{-0.28} - s_u^{-0.28}],   (19)
where s_l and s_u represent the lower and upper size
limits, respectively. For example, the predicted result
E for epidemics between s_l = 10 and s_u = 100 for the
next 60 years is approximately 10.7. However, the
model cannot tell us when these will occur (see
Rhodes & Anderson, 1996a for details).
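Equation (19) is easy to evaluate; the snippet below reproduces the worked example quoted above (the function name is ours).

    # Expected number of measles epidemics with sizes between s_l and s_u over
    # the next 60 years, using equation (19) with a = 0.28.
    def expected_epidemics(s_lower, s_upper, a=0.28, n_total=43):
        return n_total * (s_lower ** -a - s_upper ** -a)

    print(round(expected_epidemics(10, 100), 1))   # -> 10.7, as quoted in the text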
We refer the reader to the literature for further
details about the model and its applications to other
diseases and population types (Rhodes & Anderson,
1996a, b; Rhodes et al., 1997).
VI. SCALE INVARIANCE IN
NEUROBIOLOGY
The recent successes of the application of self-
organized criticality fostered interest in the idea that
the brain might be operating at, or near, a critical
state (Stassinopoulos & Bak, 1995; Bak, 1996; Papa
& da Silva, 1997; da Silva, Papa & de Souza, 1998;
Chialvo & Bak, 1999). However, it seems difficult to
conceive of a system which is further removed from, for
instance, a sandpile than the central nervous system. The
brain of the cat, ape or man, is very structured both
in form and function. Over the years, using
numerous methods (lesion studies, positron emission
tomography scan, functional electroencephalogram,
functional magnetic resonance imaging, etc.) it has
been possible to single out regions of the central
nervous system which are responsible for processing
sensory inputs, understanding and articulating lan-
guage, as well as those in charge of reflection and
building strategies to solve problems, to name only a
few (see Changeux, 1985 for a review). Recent work
has even suggested that a well-defined area of the
brain hardwires the notion of number and is
responsible for its perception (Dehaene, 1997). These
modules are believed to have developed through
evolution as animals, especially mammals, moved to
ever more complex forms over the ages. In healthy
subjects, the activity of a particular region is not
likely to spread to the totality of the brain like a
perturbation in an unstable system. This is radically
different from the sandpile system, which is essentially
homogeneous in structure and experiences pertur-
bations on all time and length scales. Finally, even
though the brain exhibits structure over a large
number of length scales, from its size of the order of
the decimeter to that of a single neuron, of about a
few micrometers, it is hardly fractal or self-similar in
the traditional sense.
In the light of these arguments, it may therefore
seem surprising that evidence for some aspects of
scale invariance has been found in the central
nervous system. We review here three particularly
intriguing examples in communication, cognition
and electrophysiological measurements in the cortex.
(1) Communication: music and language
Fig. 29 shows the power spectra obtained by Voss
& Clarke (1975) for the loudness of diverse signals
carrying complex information such as musical pieces
and radio talk stations. These clearly show a 1/f^γ
scaling behaviour with exponents γ in the
vicinity of 1. The authors also show that similar
power distributions are exhibited by the power
spectra of the pitch fluctuations of these signals
(results not shown).
This is quite intriguing, especially considering the
very different nature of the signals. Press (1978)
gives the following interpretation (or justification) of
the phenomenon: 'Music certainly does have struc-
ture on all different time scales. […] There are three
notes to a phrase, say, and three phrases to a bar,
and three bars to a theme, and three repetitions of a
theme in a development, and three developments in
a movement, and three movements in a concerto,
200 T. Gisiger
10
8
10
7
10
6
10
5
10
4
10
3
10
2
10
1
10
0
10
4
10
3
10
2
10
1
10
0
10
1
D
C
B
A
1/f
Frequency f
P
o
w
e
r

s
p
e
c
t
r
u
m

P
(
f
)
Fig. 29. Power spectrum P( f ) of the loudness uctuations
as a function of frequency f for: (A) Scott Joplin Piano
Rags ; (B) classical radio station; (C) rock station; (D)
news and talk station. Also shown is a straight line
corresponding to a 1\f signal. Reproduced from Voss &
Clarke (1975).
and perhaps three concertos in a radio broadcast. I
do not mean this really literally, but I think the idea
is clear enough. This type of argument helps to
explain the general trend of the Voss and Clarke
data, but I think there is still the real mystery of why
the agreement with 1\f looks so precise . What Press
(1978) is saying is that music, and the broadcast
itself, are a superposition of many dierent fre-
quencies which ll the power spectra on several
orders of magnitude, but that it does not explain
why the relative contribution from each frequency
follows so precisely a 1\f distribution. One could
have expected a Gaussian distribution in the medium
to high end of the frequency range, for instance. The
analysis of certain peaks in P( f ) has proven
interesting but not very enlightning to this question
(Voss & Clarke, 1975). The argument from Press
(1978) can also be extended to the rock music and
the talk stations because the signals they transmit are
too a superposition of components such as phrases,
sentences, songs, commercials, news broadcasts, etc.,
all with roughly characteristic time lengths. I see
these descriptive, geometrical arguments as explain-
ing only a facet of the phenomenon.
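Press's remark can be probed numerically. The following sketch is an illustration only (it is not an analysis taken from Voss & Clarke, 1975, or Press, 1978; the number of components and the range of their correlation times are arbitrary choices): it superposes relaxation processes whose characteristic times are spread evenly on a logarithmic scale and then estimates the slope of the resulting power spectrum.

import numpy as np

def relaxation_process(n, tau, rng):
    # first-order autoregressive process with correlation time tau (in samples)
    a = np.exp(-1.0 / tau)
    x = np.zeros(n)
    noise = rng.standard_normal(n)
    for i in range(1, n):
        x[i] = a * x[i - 1] + np.sqrt(1.0 - a * a) * noise[i]
    return x

rng = np.random.default_rng(0)
n = 2**16
taus = np.logspace(0.5, 3.5, 30)           # characteristic times spread over three decades
signal = sum(relaxation_process(n, tau, rng) for tau in taus)

power = np.abs(np.fft.rfft(signal))**2
freq = np.fft.rfftfreq(n)
band = (freq > 1e-3) & (freq < 3e-2)       # frequencies covered by the chosen time scales
slope = np.polyfit(np.log10(freq[band]), np.log10(power[band]), 1)[0]
print("spectral slope over the scaling region:", round(slope, 2))   # close to -1

The slope comes out close to −1, but only because the tuning that Press finds mysterious has been put in by hand, here through the uniform spread of the time constants on a logarithmic scale.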
Voss & Clarke (1975) used stochastic music
generators to understand the phenomenon better.
These devices produce music note by note, using
random number generators to determine both the
duration and the pitch of each note. The
resulting melody was then judged by listeners. Voss
& Clarke (1975) first used white-noise generators
which produced, as expected, a completely uncorre-
lated series of sounds, judged 'too random' by the
subjects of the experiment. The addition of strong
time correlations by switching to a Brownian signal
with a 1/f^2 power spectrum generated trends in the
music so prolonged that it was found boring by the
listeners. However, using 1/f-noise generators pro-
duced music which seemed much more pleasing and
was even judged 'just right'. This experiment
therefore indicates that the brain is used to, or at
least prefers, music with correlations on all time
scales.
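As an illustration of the procedure (a minimal sketch only: the note-to-pitch mapping, the scale and the updating scheme for the 1/f source are arbitrary choices made here for clarity, not the actual generators of Voss & Clarke, 1975), the following code produces a short melody whose pitch sequence has an approximately 1/f spectrum. Replacing the pink-noise source by independent draws (white noise) or by a running sum of them (Brownian noise) reproduces the 'too random' and 'too correlated' extremes described above.

import numpy as np

rng = np.random.default_rng(1)

def pink_sequence(n_steps, n_sources=8):
    # approximate 1/f noise: several random sources, source k being redrawn
    # every 2**k steps, their sum taken as the output
    sources = rng.standard_normal(n_sources)
    out = np.empty(n_steps)
    for t in range(n_steps):
        for k in range(n_sources):
            if t % (2**k) == 0:
                sources[k] = rng.standard_normal()
        out[t] = sources.sum()
    return out

scale = [60, 62, 64, 67, 69, 72]                 # pitches (MIDI numbers) of a pentatonic scale
x = pink_sequence(64)
idx = np.interp(x, (x.min(), x.max()), (0, len(scale) - 1)).round().astype(int)
melody = [scale[i] for i in idx]
print(melody)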
Voss & Clarke (1975) go further and propose the
following interpretation: 'Communication, like most
human activities, has correlations that extend over
all time scales. For most musical selections the
communication is through the melody and P( f ) is
1/f-like.' I certainly agree with the claim that
communication involves various time scales, as the
understanding of a particular sign or bit of in-
formation depends on the information previously
received and also on the present context. Music,
being a particularly simple type of communication,
should therefore contain long temporal correlations
which might appear on the power spectra for
loudness and pitch of signals.
Language, being the most advanced form of
communication, also contains long-term correlations
as indicated by the loudness power spectrum of Fig.
29. The 1/f behaviour is, however, less precise for
pitch, being of the white-noise type for very low
frequencies, and of the 1/f^2 type for high frequencies.
Intuitively, language involves a whole range of time
scales as sounds are compiled in syllables, from
syllables to words, from words to sentences, from
sentences to groups of sentences, while a general
meaning of the ideas expressed emerges and influ-
ences the understanding of the following sentences.
This applies both to the understanding of language
and to its articulation. It is therefore not totally
unexpected that time correlations appear in the
power spectra of Voss & Clarke (1975).
Other power laws, mainly in the distributions of
words, had already been observed in language by
Zipf (1949) several decades ago. Zipf (1949) presents
the example of the book Ulysses by James Joyce,
which possesses approximately 260,430 running
words, and has been the subject of numerous
linguistic studies. One of these studies proceeds as
follows. First one counts the number of occurrences
of each word used in the book. Then a rank k is
assigned to each word according to its frequency:
k = 1 for the word 'the' which appears the most
frequently, k = 2 for the second most frequent 'of', k
= 3 for 'and', k = 4 for 'to', etc. This defines a
distribution D(k) of words as a function of their rank
k which ranges from 1 to approximately 29,899 for
this particular book. Quite surprisingly, D(k) follows
a power law in k with exponent −1 with a very good
precision. Such a spectacular result cannot be
coincidental. Actually, many similar distributions
exist in other aspects of language, for instance in the
distribution of meanings of words in English. Let us
define by w the number of meanings of a given word
according to some reference dictionary. We then
evaluate w for a large set of different words. We find
that the number D(w) of words with w meanings
follows a power law with a slope of approximately
−0.5 (so there are more words with few meanings
than there are words with many meanings).
These features seem to be robust and even universal
as similar power law distributions were found for
different books and languages, although the expo-
nents varied somewhat. Zipf (1949) extended his
analysis to children, monitoring how frequency
distributions evolved as the vocabulary of the subject
improved over the years, and also to patients afflicted
by mental illnesses such as autism and schizophrenia.
We refer the reader to his book for further details
and spectacular findings.
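The rank-frequency analysis described above is straightforward to repeat on any machine-readable text. In the sketch below the file name is a placeholder, and the crude tokenization and the fit over the whole range of ranks are simplifications; the essential steps are counting word occurrences, ranking them, and estimating the exponent of D(k), which Zipf's law predicts to be close to −1.

from collections import Counter
import numpy as np

# 'ulysses.txt' stands for any large plain-text file
with open('ulysses.txt', encoding='utf-8') as fh:
    words = fh.read().lower().split()

counts = Counter(words)
freqs = np.array(sorted(counts.values(), reverse=True), dtype=float)
ranks = np.arange(1, len(freqs) + 1)

# slope of the rank-frequency relation on a log-log scale
slope = np.polyfit(np.log10(ranks), np.log10(freqs), 1)[0]
print(len(counts), "distinct words; rank-frequency exponent ~", round(slope, 2))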
Zipf (1949) describes and justifies a whole range of
human activities, including communication by lan-
guage, under the unifying principle of 'least effort',
which he defines as individuals '[...] minimizing a
probable average rate of work'. In a nutshell,
according to this principle, one's actions are dictated
by the goal of spending the least amount of effort in
the long run to solve a given problem. In the case of
language, Zipf (1949) explains the power law
distribution D(k) (and possibly D(w)) as the result of
two conflicting forces: one from the speaker who
wishes to express his ideas with the least amount of
effort (and therefore of words), the other from the
listener who wants to expend the least amount of
effort in understanding it (and therefore prefers a
more elaborate vocabulary). Zipf (1949) also pro-
posed phenomenological models constructed on this
principle which enabled him to reproduce some of
the data.
However, it seems clear that the scale-invariance
properties presented here, i.e. 1/f-noise and power-
law distributions, reflect more the dynamics of the
regions of the brain devoted to language than the
action of some more general principle. Neuroscience
should then provide a framework more suited to
investigating these phenomena. For example, Posner
& Pavese (1998) have presented evidence that
supports the location of lexical semantics in the
frontal areas, and comprehension of propositions in
the posterior areas. Such regions could therefore be
responsible for creating or storing words, and then
assembling them into meaningful sentences. How-
ever, these two functions are characterised by short
time scales (the utterance or comprehension of
words), and medium time scales (their assembly into
a sentence). These two processes working separately
might not be enough to provide the long-term
correlations necessary for efficient communication.
Feedback loops or links to other regions responsible
for longer time scales (perhaps those relating to
emotions) might then be necessary.
In our view, two approaches might be especially
helpful to the study of how the brain produces and
understands language. The first is to analyse the
exact mechanisms which generate precise words and
sentences. This will probably be an extremely
complex project experimentally and also theoretic-
ally as it implies the construction of neurally realistic
models able to simulate the production of real words
and sentences in mature language. The second
approach, dictated by the study of complex systems,
concentrates more on the general features of these
dynamics, such as how the brain manages to
generate strings of information with such long time
correlations. Also, since the power laws presented
above are to a large extent language-independent,
they might prove useful as data to test future, even
very simple, models of communication and language.
Finally, their sensitivity to vocabulary range and
mental ability of subjects could supply further
constraints on the models.
We end this Section by noting that the presence of
1/f-noise in the power spectra of Fig. 29 does not
necessarily imply that the areas of the brain which
are responsible for language operate near a critical
state. As we saw earlier (see Section III.4), 1/f-noise
can be produced by non-critical systems, which
therefore do not exhibit spatial scale invariance. The
presence of the latter should, in principle, be testable
experimentally.
(2) 1/f-noise in cognition
Another interesting result, this time in cognitive
psychology, has been presented by Gilden, Thornton
& Mallon (1995). They showed that the time series
of the errors made in reproducing time and spatial
intervals has a 1/f power spectrum. The experiment
Fig. 30. (A) Power spectrum P( f ) as a function of
frequency f of the error in the reproduction of a spatial
interval. Reproduced from Gilden et al. (1995). (B)
Distribution D(τ) of time delays τ, or periods of
inactivity, for a typical neuron from the visual cortex of
the macaque Macaca mulatta. The straight line has a slope
of −1.58. Reproduced from Papa & da Silva (1997).
proceeds as follows. Subjects are asked to reproduce
N times a given time interval, chosen between 0.3
and 10 s, by pushing a button on a keyboard. The
error e_i, i = 1, ..., N, is then recorded, interpreted as
a time series, and its power spectrum computed. The
resulting power spectrum P( f ) (see Gilden et al., 1995
for details) behaves like 1/f^γ with γ between 0.9 and
1.1 for frequencies smaller than approximately 0.2 Hz
(which corresponds to a period of roughly 5 s). For
larger f, the shape of the spectrum alters: it then
increases as approximately f^2. Another similar
experiment was conducted by Gilden et al. (1995),
this time asking subjects to reproduce spatial
intervals. The result, reproduced in Fig. 30A, follows
closely a 1/f spectrum for frequencies less than
approximately 0.1 Hz, and flattens (like white noise)
for higher f. In order to understand better this
phenomenon, Gilden et al. (1995) used a model
common in timing variance studies (Wing &
Kristofferson, 1973) which simulates the production
of temporal intervals using an internal clock and a
motor delay unit. By taking the internal clock to be
a source of 1/f-noise and the motor delay to be a
source of white noise (instead of considering them
both as white-noise generators, as is usually done),
Gilden et al. (1995) showed that the data for the time
intervals could be well accounted for. However, for
their model to be correct, they had to test the
hypothesis that the motor delay can indeed be
modelled by a white-noise generator. They settled
this issue by computing the spectral power density of
a signal obtained from a different experiment. This
time the subject was asked to react as quickly as
possible to a given visual stimulus. The power
spectrum of the time series constructed from these
delays indeed follows a 1/f^0, i.e. flat, white-noise power
spectrum. A similar result for spatial intervals has
been observed in pen placement experiments.
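A caricature of this two-component model can be written down directly. The sketch below rests on assumptions of convenience: the way the 1/f clock is synthesized and the relative amplitude of the motor noise are arbitrary, and the composition rule is the generic one of the Wing & Kristofferson framework rather than the exact formulation of Gilden et al. (1995). Each produced interval is the sum of a clock interval and the difference of two successive motor delays; with a 1/f clock and white motor noise, the spectrum of the simulated series falls off as roughly 1/f at low frequencies and turns up at high frequencies, as in the data described above.

import numpy as np

rng = np.random.default_rng(2)

def one_over_f_noise(n):
    # 1/f noise generated by spectral shaping of random phases
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                                   # avoid division by zero at f = 0
    phases = np.exp(2j * np.pi * rng.random(len(f)))
    return np.fft.irfft(phases / np.sqrt(f), n)

N = 4096
clock = one_over_f_noise(N)                       # internal clock treated as a 1/f source
motor = 0.1 * rng.standard_normal(N + 1)          # motor delays treated as white noise
intervals = clock + np.diff(motor)                # Wing-Kristofferson-type composition

P = np.abs(np.fft.rfft(intervals - intervals.mean()))**2
f = np.fft.rfftfreq(N)
low, high = (f > 1e-3) & (f < 2e-2), f > 0.2
print("low-frequency slope :", round(np.polyfit(np.log10(f[low]), np.log10(P[low]), 1)[0], 2))
print("high-frequency slope:", round(np.polyfit(np.log10(f[high]), np.log10(P[high]), 1)[0], 2))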
This phenomenon is different from that of the
previous Section: flicker noise emerges as error in the
data instead of being the central product of the
system. It is merely a side product, but one which
could nonetheless contain important information
about cognitive mechanisms which mediate the
judgment of time and spatial magnitude (Gilden et
al., 1995), about the structure of the neural networks
making up short-term memory, or even the noise
generated by the neurons themselves. Similar results
have been obtained later by Gilden (1997) for other
cognitive operations. In this case, the author asked
subjects to perform tasks involving mental rotation,
lexical decisions, and serial search along rotational
and translational directions, while timing their
performances. The mean response time was com-
puted, and subtracted from the time series of clocked
times. The power spectrum of the resulting signal
was then computed and found to be of the 1/f^γ form,
with γ ranging from 0.7 to 0.9. This raises the
hypothesis that 1/f spectra might be common to all
conscious natural behaviour. Further investigations
are certainly indicated.
(3) Scale invariance in the activity of neural
networks
We end this Section on scale invariance in the
central nervous system with some interesting findings
on the firing of neurons in the cortex. In a recent
article, Papa & da Silva (1997) present a plot of the
distribution of time intervals between successive
firings of cortex neurons (Fig. 30B). The data used
for this analysis come from a study of the visual
cortex of the macaque Macaca mulatta by Gattass &
Desimone (1996). As can be seen from Fig. 30B, the
Fig. 31. (A) Firing response of a neuron from the visual
cortex of the cat when exposed five separate times to the
same stimulus. Each vertical bar represents a spike: their
exact timing changes widely from one series to the next.
Reproduced from Koch (1997). (B) Distribution of the
time elapsed between two consecutive spikes in the record
shown in A, as a function of its frequency. The straight
line has a slope of approximately −1.65. The distribution
was obtained by compiling the time delays in a
histogram with 30 channels, giving a time resolution of
approximately 8 ms.
distribution D(τ) of time intervals τ clearly follows
a power law D(τ) ∝ τ^γ with γ ≈ −1.6 over several
orders of magnitude of τ. The distribution flattens
out for very small values and there is a cut-off for
large ones. The former could be caused by the
refractory period of neurons, which imposes an
upper bound on the cell firing rate, while the latter
might be a consequence of the finite size of the data.
Another interesting result mentioned by Papa &
da Silva (1997) is that a similar time distribution
seems to occur in cells responding to externally
applied stimuli. In Fig. 31A are reproduced electro-
physiological measurements from Koch (1997) made
on a neuron of the visual cortex of the cat when the
same stimulus was applied five separate times.
As can be seen, the neuron responds every time
with a different train of spikes. However, they do not
occur at random. Indeed, by looking at the series,
one finds that small time intervals are much more
frequent than medium ones, and medium ones than
large ones. Papa & da Silva performed a statistical
analysis of the distribution of τ, finding a power-
law distribution with an exponent of approximately
−1.66. It therefore seems that neuron activity in
reaction to stimuli is not random, as is sometimes
assumed. I repeated this analysis in order to plot the
results on Fig. 31B and found a power-law dis-
tribution of time delays for τ ranging between 10
and 65 ms, with an exponent of approximately
−1.65. The exponent is also close to that of the time
interval distribution in the case where no external
stimulus was applied. The results here are, however,
less clear cut than in Fig. 30B.
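The kind of analysis performed here is easily reproduced. In the sketch below the spike train is a synthetic placeholder (a Poisson-like train, which will not give the exponent quoted above); for real data one would substitute the recorded spike times. The essential points are the logarithmically spaced bins and the division by the bin widths before fitting the slope on a log-log scale.

import numpy as np

# placeholder spike train (in ms); replace by recorded spike times such as those of Fig. 31A
spike_times = np.sort(1000.0 * np.random.default_rng(3).random(500))
isi = np.diff(spike_times)                        # inter-spike intervals

bins = np.logspace(np.log10(max(isi.min(), 1e-3)), np.log10(isi.max()), 30)
counts, edges = np.histogram(isi, bins=bins)
density = counts / np.diff(edges)                 # correct for the unequal bin widths
centres = np.sqrt(edges[:-1] * edges[1:])

keep = counts > 0
exponent = np.polyfit(np.log10(centres[keep]), np.log10(density[keep]), 1)[0]
print("estimated power-law exponent:", round(exponent, 2))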
These results raise several questions. First, how
does this scale invariance in neuron activity arise?
Since the exponents of the power laws fitting both
data sets are almost identical, all we can say is that
similar mechanisms might be at work with and
without external stimuli. Other important questions
are how, why and under what conditions, do neural
networks function in a state with such scale-free
activity? It is important to note that, as we saw in
Section III.4, a power-law distribution of event
durations usually indicates the presence of long
trends in the signal produced by the system. These
might, in turn, inuence more macroscopic functions
of the brain such as cognition, for instance.
A possible source for this scale invariance is the
background activity of neural networks. In a recent
article, Arieli et al. (1996; see also Ferster, 1996)
observed that the visual cortex of the cat is
perpetually subject to a highly structured spon-
taneous activity. Neurons fire in a coordinated
fashion best described as waves of activity sweeping
through the network. Roughly, the firing of one
neuron from this tissue sample therefore corre-
sponds to the instant when one of these waves
passes through the position of the neuron. This
activity also strongly influences the probability with
which a neuron will fire if it is subjected to a
stimulus. Although interesting in its own right, this
activity has not yet received the theoretical and
experimental attention it deserves, unlike the similar
phenomenon of muscular contraction wave propa-
gation in the cardiac system. Why this ongoing
activity should exhibit temporal scale invariant
properties is still unknown.
One should note that, although the record of
Fig. 31A looks similar to the signal of the map of
Manneville (1980) when put at the edge of chaos (see
Fig. 10), the exponent of −1.3 found there falls short
of the value of −1.66 for the cortex of the cat.
This model therefore overestimates the proportion
of medium and large intervals between firings
compared to smaller ones. It also brings little in-
sight into the mechanisms producing the observed
patterns.
Papa & da Silva (1997) propose that the power-
law distribution observed is created by some mech-
anism which self-organizes the cortex into a critical
state via short-range interactions between neurons.
The neural network model they introduce to
illustrate this idea is mathematically very similar to
that used by Bak and Sneppen (1993) to model
evolution (see Section IV.2.d). As we saw above in
the sandpile model for instance, the distribution of
the duration of avalanches follows a power law. It
can be shown that so does the time one has to wait
between avalanches, called anti-avalanches by
Papa and colleagues (Papa & da Silva, 1997; da
Silva et al., 1998). During these waiting periods, sand
piles up but does not slip, as the height of the sand is
nowhere larger than the critical value 3. We can
therefore see how a self-organized critical model
might reproduce the distribution of delays between
spikes.
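These waiting times are easy to monitor in a simulation. The sketch below uses a small two-dimensional sandpile with threshold height 3, written for illustration only (grid size and number of grain additions are arbitrary); it records, for every avalanche, how many grain additions have passed since the previous one, so that the statistics of these quiet periods can be compared directly with those of the avalanches themselves.

import numpy as np

rng = np.random.default_rng(4)
L = 15
grid = rng.integers(0, 4, size=(L, L))            # a random stable configuration

def add_grain(grid):
    # drop one grain at a random site and topple until stable; return avalanche size
    i, j = rng.integers(0, L, size=2)
    grid[i, j] += 1
    size = 0
    while True:
        unstable = np.argwhere(grid > 3)
        if len(unstable) == 0:
            return size
        for a, b in unstable:
            grid[a, b] -= 4
            size += 1
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                x, y = a + da, b + db
                if 0 <= x < L and 0 <= y < L:      # grains falling off the edge are lost
                    grid[x, y] += 1

waiting_times, quiet = [], 0
for _ in range(20000):
    if add_grain(grid) > 0:
        waiting_times.append(quiet)
        quiet = 0
    else:
        quiet += 1
# waiting_times now holds the durations of the quiet periods ('anti-avalanches'),
# to be histogrammed on a log-log scale exactly as the avalanche sizes and durations are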
The model (Papa & da Silva, 1997; da Silva et al.,
1998) that the authors propose represents neurons as
devices which fire stochastically with a probability
roughly equal to e^(−λ), where λ is a parameter of the
neuron which quantifies its susceptibility to fire. The neurons
are then connected together in a network, with each
neuron only making synapses with its immediate
neighbours. When a neuron fires, this changes
(usually by raising it) the firing probability of
neighbouring cells as new values for λ are assigned
to them. Also included in the model is the refractory
period which forbids a neuron from firing a second
time before a minimum time delay R has passed.
The dynamics of the model are then quite simple as
neurons fire according to their intrinsic dynamics
(quantified by the parameter λ), itself subject to in-
fluences from neighbouring cells. The dynamics,
similar to those of the Bak–Sneppen model, lead the
system to a critical state where numerous power laws
arise and, even though each neuron only connects
to its immediate neighbours, the firing of one can
trigger that of all other cells in the array. The
model of Papa & da Silva (1997) and da Silva
et al. (1998) therefore predicts that cells in regions of
the visual cortex can arrange themselves in a falling-
domino fashion. They also show that the exponent
for the anti-avalanche distribution, which approxi-
mates the time separating two firings of an
arbitrary cell of the network, is roughly equal to
−1.60, although it depends on the particular value
chosen for the refractory time R (Papa & da Silva,
1997; da Silva et al., 1998). This value is quite close
to that obtained for the visual cortex of the cat (see
above). Another strong point of this model for the
spontaneous activity of neural networks is that this
critical state is attained without fine tuning of
parameters such as synaptic weights. This is im-
portant since there is mounting evidence that there
is not enough information contained in the genome
to code for the strengths of all the synapses of the
brain (Koch, 1997) (i.e. to fine-tune synaptic
strengths). Coding for such a simple algorithm as the
one reviewed here, which adjusts the firing barriers
of neurons, would certainly require a lot less
information.
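The flavour of such a model can be conveyed by a deliberately crude caricature. In the sketch below the distribution from which the parameters are drawn, the rule by which a firing raises the neighbours' probabilities and the value of the refractory period are guesses for illustration, and no claim is made that this particular version self-organizes to the published critical state. Each cell on a ring carries a parameter λ, fires with probability e^(−λ) when not refractory, and, upon firing, redraws its own parameter while lowering those of its two neighbours.

import numpy as np

rng = np.random.default_rng(5)
N, R, steps = 200, 5, 20000        # cells on a ring, refractory period, update steps

lam = 5.0 * rng.random(N)          # susceptibility parameters lambda
last_fire = np.full(N, -R)         # time of each cell's last firing
spikes = [[] for _ in range(N)]

for t in range(steps):
    p = np.exp(-lam)                         # firing probabilities
    p[t - last_fire < R] = 0.0               # refractory cells cannot fire
    for i in np.where(rng.random(N) < p)[0]:
        last_fire[i] = t
        spikes[i].append(t)
        lam[i] = 5.0 * rng.random()          # the firing cell draws a fresh parameter
        for j in ((i - 1) % N, (i + 1) % N):
            lam[j] *= rng.random()           # neighbours become more likely to fire

isi = np.diff(spikes[0])           # inter-firing intervals of one arbitrary cell,
                                   # to be histogrammed on a log-log scale as above

The interesting question, in the spirit of the discussion above, is whether the inter-firing intervals of such a network develop a power-law distribution without any fine tuning of the coupling; the sketch only sets up the measurement.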
VII. CONCLUSION
In this article, I have reviewed some recent advances
in the study of scale-free biological systems. Scale
invariance is very common in nature, but it is only
since the early 1970s that the mathematical tools
necessary to define it more clearly were introduced.
Objects without any characteristic length scales are
now called fractals (Mandelbrot, 1977, 1983) and
their structure can be analyzed and quantified using
fractal dimensions. Signals with correlations on
arbitrary time scales can be discriminated from
ordinary background noise by computing their
power spectrum or their Hurst exponent, or by using
graphical methods. Scale invariance in the dynamics
can be detected by plotting on a log–log scale the
distribution of size or duration of events in the
system.
Using these methods, scale invariance has been
observed in diverse areas of biology. Ecosystems
seem to be highly scale-invariant: rain forests
generate fractal structures (Solé & Manrubia, 1995a,
b; Manrubia & Solé, 1997), population time series
can exhibit 1/f behaviour (Halley, 1996; Mira-
montes & Rohani, 1998), and extinction events seem
to follow power-law distributions when enough
species are present (Keitt & Marquet, 1996). Scale
invariance persists when considering the evolution of
ecosystems on time scales of the order of hundreds of
millions of years. The time evolution of the number
of families of organisms is self-similar on several time
scales (Solé et al., 1997), and has a 1/f power
spectrum. Also, distributions of extinction and
diversification event sizes and durations follow power
laws (Sneppen et al., 1995), as do the number of
ramifications of families in sub-families, and taxa in
sub-taxa (Burlando, 1990, 1993). Certain types of
epidemics too exhibit power-law distributions,
clearly contrary to classical models of epidemics
(Rhodes & Anderson, 1996b). Finally, even the
central nervous system also seems to show some sort
of scale invariance at different levels: 1/f-noise in
communication (Voss & Clarke, 1975) and cognition
(Gilden et al., 1995; Gilden, 1997), power-law
distributions in language (Zipf, 1949) and back-
ground noise in the cortex (Papa & da Silva, 1997;
da Silva et al., 1998).
Scale invariance is a phenomenon well known to
physicists who worked, also during the 1970s, on
critical phenomena and phase transitions. These
systems, when one sets the temperature to some
critical value, arrange themselves in states without
any characteristic scale, exhibiting fractals and
power laws (Maris & Kadanoff, 1978; Wilson,
1979). This theory was later generalized by Bak et al.
(1987) to systems which spontaneously (i.e. without
the need for parameter fine tuning) organize
themselves in a critical state, therefore exhibiting
scale invariance and sometimes producing 1/f-noise.
In view of these remarkable advances, the important
question which arose from the work of Mandelbrot
(1977, 1983) has shifted from 'Why is there scale
invariance in nature?' to 'Is nature critical?' (Bak
& Paczuski, 1993).
It is likely that no general answer exists to this
question. As we saw earlier, 1/f-noise, spatial scale-
invariance and power-law distributions of events do
not always imply each other, let alone criticality.
For instance, systems can generate 1/f-noise without
being critical (see for instance De Los Rios & Zhang,
1999 and references therein). In my view, this
question is therefore best answered case by case, by
carefully comparing experimental data with the
predictions of mathematical models of the systems.
Solé & Manrubia (1995a, b) and Manrubia &
Solé (1997) make a strong case for rain forest
vegetation being in a critical state, reproducing
fractal dimensions of gaps in the vegetation. The
findings of Rhodes & Anderson (1996a, b) and
Rhodes et al. (1997) on type III epidemics are just as
impressive, with a model which reproduces well the
data and gives further insight into the mechanism of
propagation of diseases. Experimental evidence for
criticality in currently existing ecosystems of animals
could, however, be more convincing. By contrast, the
patterns obtained from fossil data are quite as-
tonishing and put the accepted theory of the effect
of catastrophes on ecosystems under considerable
pressure. However, a firm confirmation of the
occurrence of coevolutionary avalanches is needed to
prove that ecosystems evolve to critical states. The
case of the brain is the most puzzling of all, but
theoretical and mathematical investigation of this
system is only at its beginning. Correlations on all
time scales are certainly possible in the brain, as
thoughts, emotions and even communication are
perpetually ongoing phenomena without any be-
ginning or end. However, how this scale invariance
comes about and how it affects our actions and the
way we communicate, is not known. This should
prove an interesting starting point of investigation,
complementing more traditional approaches in
neurobiology.
Even if criticality turns out not to be a dominant
principle in nature, work on critical systems and
models has already tremendously increased our
comprehension of the world around us. In
Changeux's (1993) words, models are 'abstractions
which in no way can completely represent or be
identified with reality'. However, every new type of
model adds to our understanding of dynamics and
introduces new concepts which help us understand
reality. Chaotic systems triggered a revolution in
their day because no simple mathematical equations
were believed able to generate unstable and com-
plicated trajectories. Even though chaotic systems
have been constructed and studied in the laboratory,
few have been observed in nature, especially in
biology. However, concepts such as attractors,
fractals and Lyapunov exponents, which were
introduced by scholars of chaos, still serve as building
blocks to understand the dynamics of non-chaotic or
even more complicated systems. Critical phenomena
have introduced physicists from all fields to phase
transitions, critical exponents and universality. It is
probable that in the future these concepts will
become as commonly known to non-physicists as
fractals are today. Perhaps by then even more exotic
and exciting dynamics will have been encountered
elsewhere.
VIII. ACKNOWLEDGEMENTS
The author thanks M. B. Paranjape for his constant
support throughout the entire preparation of this review,
as well as M. Pearson, M. Kerszberg, J.-P. Changeux, R.
Klink and V. Gisiger for many useful discussions and their
comments on the manuscript. The use of the libraries and
the computational facilities of the Centre de Calcul of the
Université de Montréal, of the Laboratoire René J. A.
Lévesque and of the Institut Pasteur are also gratefully
acknowledged.
IX. REFERENCES
Arz.w., Y., Min.i.xr, C. & Konv.x., T. (1984). Statistical
mechanics of intermittent chaos. Progress of Theoretical Physics
79, 96124.
Axiinsox, R. M. & M.v, R. M. (1991). Infectious Diseases of
Humans, Dynamics and Control. Oxford University Press.
Arieli, A., Sterkin, A., Grinvald, A. & Aertsen, A. (1996).
Dynamics of ongoing activity: explanation of the large
variability in evoked cortical responses. Science 273, 1868–
1871.
B.r-Lrx, H. (1989). Elementary Symbolic Dynamics. World Scien-
tic Publishing.
B.i, P. (1990). Self-organized criticality. Physica A163, 403409.
B.i, P. (1996). How Nature Works : the Science of Self-Organized
Criticality. Springer-Verlag.
B.i, P. & Cnix, K. (1989). The physics of fractals. Physica D38,
512.
B.i, P. & Cnix, K. (1991). Self-organized criticality. Scientic
American 264, 4653.
B.i, P., Cnix, K. & Cnii1z, M. (1989). Self-organized
criticality in the Game of Life. Nature 342, 780782.
B.i, P., Cnix, K. & T.xo, C. (1990). A forest-re model and
some thoughts on turbulence. Physics Letters A147, 297300.
B.i, P., Fiv\njino, H. & L.i1niv, B. (1992). Coevolution in
a rugged tness landscape. Physical Review A46, 67246730.
Bak, P. & Paczuski, M. (1993). Why nature is complex. Physics
World 6, 39–43.
Bak, P. & Paczuski, M. (1995). Complexity, contingency, and
criticality. Proceedings of the National Academy of Sciences of the
United States of America 92, 6689–6696.
Bak, P. & Sneppen, K. (1993). Punctuated equilibrium and
criticality in a simple model of evolution. Physical Review Letters
71, 4083–4086.
Bak, P., Tang, C. & Wiesenfeld, K. (1987). Self-organized
criticality: an explanation of 1/f noise. Physical Review Letters
59, 381–384.
Bak, P., Tang, C. & Wiesenfeld, K. (1988). Self-organized
criticality. Physical Review A 38, 364–374.
B.nxsiiv, M. (1988). Fractals Everywhere. Academic Press Inc.
B.n1ii11, M. S. (1957). Measles periodicity and community
size. Journal of the Royal Statistical Society A120, 4870.
B.n1ii11, M. S. (1960). The critical community size for measles
in the United States. Journal of the Royal Statistical Society A123,
3744.
B.soxv1i, J. & Soii! , R. V. (1995). Rethinking complexity:
modelling spatiotemporal dynamics in ecology. Trends in
Ecology and Evolution 10, 361366.
Biioiso\, B. P. (1959). Sbornik Referatov po Radiacioni Medicine,
145 (in Russian).
Bix1ox, M. J. (1993). The Fossil Record 2, London: Chapman
and Hall.
Bix1ox, M. J. (1995). Diversication and extinction in the
history of life. Science 268, 5258.
Biniii.xv, E. R., Coxw.v, J. H. .xi Giv, R. K. (1982).
Winning Ways for Your Mathematical Plays, Vol. 2, Academic
Press.
Binnvx.x, A. A. & Mriis1irx, J. A. (1989). Are ecological
systems chaotic And if not, why not ? Trends in Ecology and
Evolution 4, 2629.
Brxiv, J. J., Downri, N. J., Frsnin, A. J. & Niwx.x, M. E. J.
(1992). The Theory of Critical Phenomena. Clarendon Press.
Bi.nin, A. & Doiniir, M. (1996). In the red zone. Nature 380,
589590.
Bno.inix1, S. R. & H.xxinsiiv, J. M. (1957). Percolation
processes. I. Crystals and mazes. Proceedings of the Cambridge
Philosophical Society 53, 629641.
Burlando, B. (1990). The fractal dimension of taxonomic
systems. Journal of Theoretical Biology 146, 99–114.
Burlando, B. (1993). The fractal geometry of evolution. Journal
of Theoretical Biology 163, 161–172.
C.xvniii, M. J. & Joxis, B. W. (1972). Cyclic changes in
insulin needs of an unstable diabetic. Science 177, 889891.
Changeux, J.-P. (1985). Neuronal Man: the Biology of Mind.
Pantheon Books.
Changeux, J.-P. (1993). A critical view of neuronal models of
learning and memory. In Memory Concepts – 1993: Basic and
Clinical Aspects (Elsevier Science Publishers), 413–433.
Cnr.i\o, D. R. & Bii, P. (1999). Learning from mistakes.
Neuroscience 90, 11371148.
Cnnrs1ixsix, K., Conn.i, A., Fni11i, V., Fiiin, J. & Joss.xo,
T. (1996). Tracer dispersion in a self-organized critical
system. Physical Review Letters 77, 107110.
Cnnrs1ixsix, K., Fooiinv, H. C. & Jixsix, H. J. (1991).
Dynamical and spatial aspects of sandpile cellular automata.
Journal of Statistical Physics 63, 653684.
Cnnrs1ixsix, K., Oi.xr, Z. & B.i, P. (1992). Deterministic 1\f
noise in nonconservative models of self-organized criticality.
Physical Review Letters 68, 24172420.
Ci.n, S., Dnossii, B. & Snw.ni, F. (1994). Scaling laws and
simulation results for the self-organized critical forest-re
model. Physical Review E50, 10091018.
Cooorx, S. J. & P.zix, J. L. (1996). Dynamic complexity in
Physarum polycephalum shuttle streaming. Protoplasma 194,
243249.
Conix, J. L. (1995). Unexpected dominance of high frequencies
in chaotic nonlinear population models. Nature 378, 610612.
Cos1., U. M. S., Lvn., M. L., Pi.s1rxo, A. R. & Ts.iirs, C.
(1997) Power-law sensitivity to initial conditions within a
logisticlike family of maps : fractality and nonextensivity.
Physical Review E56, 245250.
da Silva, L., Papa, A. R. R. & de Souza, A. M. C. (1998).
Criticality in a simple model for brain functioning. Physics
Letters A 242, 343–348.
Dehaene, S. (1997). The Number Sense: How the Mind Creates
Mathematics. Oxford University Press.
De Los Rios, P. & Zhang, Y.-C. (1999). Universal 1/f noise
from dissipative self-organized criticality models. Physical
Review Letters 82, 472–475.
Dnossii B. & Snw.ni, F. (1992). Self-organized critical forest-
re model. Physical Review Letters 69, 16291632.
Erxs1irx, A. & Ixiiii, L. (1938). The Evolution of Physics : the
Growth of Ideas from Early Concept to Relativity and Quanta. Simon
and Schuster, New York.
Eiiiniioi, N. & Goiii, S. J. (1972). Models in Paleobiology.
San Francisco: Freeman, Cooper.
F.ioxin, K. J. (1985). The Geometry of Fractal Sets. Cambridge
University Press.
Fiiin, J. (1988). Fractals. Plenum Press.
Firoixn.ix, M. J. (1978). Quantitative universality for a class
of nonlinear transformations. Journal of Statistical Physics 19,
2552.
Firoixn.ix, M. J. (1979). The universal metric properties of
nonlinear transformations. Journal of Statistical Physics 21,
669706.
Ferster, D. (1996). Is neural noise just a nuisance? Science 273,
1812.
Fni11i, V., Cnnrs1ixsix, K., M.i1ni-Sonixssix, A., Fiiin,
J., Joss.xo, T. & Mi.irx, P. (1996). Avalanche dynamics in
a pile of rice. Nature 379, 4952.
G.nixin, M. (1970). The fantastic combinations of John
Conways new solitaire game life. Scientic American 223, (4)
120124.
Gattass, R. & Desimone, R. (1996). Responses of cells in the
superior colliculus during performance of a spatial attention
task in the macaque. Revista Brasileira de Biologia 56, 257–279.
Gilden, D. L. (1997). Fluctuations in the time required for
elementary decisions. Psychological Science 8, 296–301.
Gilden, D. L., Thornton, T. & Mallon, M. W. (1995). 1/f
noise in human cognition. Science 267, 1837–1839.
Goiininoin, A. L., Rroxiv, D. R. & Wis1, B. J. (1990). Chaos
and fractals in human physiology. Scientic American 262,
4249.
Goiininoin, A. L. & Wis1, B. J. (1987). Fractals in physiology
and medicine. Yale Journal of Biology and Medicine 60,
421435.
Goiii, S. J. & Eiiiniioi, N. (1993). Punctuated equilibrium
comes of age. Nature 366, 223227.
Goiii, H. & Tononxri, J. (1988). An Introduction to Computer
Simulation Methods : applications to Physical Systems, Part II.
Addison-Wesley.
Gnrioin, B. (1992). Quaternary climatic uctuations as a
consequence of self-organized criticality. Physica A191, 5156.
Gii\.n., M. R., Gi.ss, L. & Snnrin, A. (1981). Phase locking,
period-doubling bifurcations, and irregular dynamics in
periodically stimulated cardiac cells. Science 2l4, 13501353.
Gi1ixnino, B. (1949). Seismicity of the Earth and Associated
Phenomena. Princeton University Press.
Gi1ixnino, B. & Rrn1in, C. F. (1956). Magnitude and
energy of earthquakes. Annali di Geosica 9, 115.
Halley, J. M. (1996). Ecology, evolution and 1/f-noise. Trends
in Ecology and Evolution 11, 33–37.
H.xxinsiiv, J. M. (1983). Origins of percolation theory.
Annals of the Israel Physical Society 5, 4757.
H.nnrsox, R. G. .xi Brsw.s, D. J. (1986). Chaos in light.
Nature 321, 394401.
H.ssii, M. P., Coxrxs, H. N. & M.v, R. M. (1991). Spatial
structure and chaos in insect population dynamics. Nature
353, 255258.
H.s1rxos, A., Hox, C. L., Eiixin, S., Tinnrx, P. & Goiin.v,
H. C. J. (1993). Chaos in ecology: Is mother nature a strange
attractor? Annual Review of Ecological Systems 24, 133.
Hoiix.x, A. (1989). What, if anything, are mass extinctions ?
Philosophical Transactions of the Royal Society of London B325,
253261.
Hoisi, M. R. (1989). Ammonoid extinction events. Philosophical
Transactions of the Royal Society of London B325, 307326.
Hins1, H. E. (1951) Transactions of the American Society of Civil
Engineers 116, 770808.
Hins1, H. E., Bi.i, R. P. & S.x.vi., Y. M. (1965). Long-
Term Storage : an Experimental Study. Constable.
J.nioxsir, D. (1991). Extinctions : a paleontological perspective.
Science 253, 754757.
Jiiiniv, H. J. (1990). Chaos game representation of gene
structure. Nucleic Acids Research 18, 21632170.
Jixsix, H. J. (1990). Lattice gas as a model of 1\f noise. Physical
Review Letters 64, 31033106.
Jixsix, H. J., Cnnrs1ixsix, K. & Fooiinv, H. C. (1989). 1\f
noise, distribution of lifetimes, and a pile of sand. Physical
Review B40, 74257427.
K.ii1zovoiios, E., Goisios, S. & E\.xoiioi, S. N. (1997).
1\f noise and multifractal uctuations in rat behavior.
Nonlinear Analysis, Theory, Methods and Applications 30, 2007
2013.
K.r1.i., V. & R.x1., E. (1996). Scientic correspondence.
Nature 381, 199.
K.xiio, K. (1989). Pattern dynamics in spatiotemporal chaos.
Physica D34, 141.
K.vi.x, D. T. & Gi.ss, L. (1992). Direct test for determinism
in a time series. Physical Review Letters 68, 427430.
K.iiix.x, S. A. (1989a). Adaptation on rugged tness land-
scapes. Lectures in the Sciences of Complexity. The Santa Fe Institute
Series (Addison Wesley), 527618 and 619712.
K.iiix.x, S. A. (1989b). Principles of adaptation in complex
systems. Lectures in the Sciences of Complexity. The Santa Fe Institute
Series (Addison Wesley), 619712.
K.iiix.x, S. A. & Jonxsix, S., (1991). Coevolution to the
edge of chaos : coupled tness landscapes, poised states and
coevolutionary avalanches. Journal of Theoretical Biology 149,
467505.
K.iiix.x, S. A. & Li\rx, S. (1987). Towards a general theory
of adaptive walks on rugged tness landscapes. Journal of
Theoretical Biology 128, 1145.
Keitt, T. H. & Marquet, P. A. (1996). The introduced
Hawaiian avifauna reconsidered: evidence for self-organized
criticality? Journal of Theoretical Biology 182, 161–167.
Kiiiooo, D. E. (1975). The role of phyletic changes in the
evolution of Pseudocubus vema (Radiolaria). Paleobiology 1,
359370.
Kin1i! sz, J. & Krss, L. B. (1990). The noise spectrum in the
model of self-organized criticality. Journal of Physics : Math-
ematical and General A23, L433L440.
Koch, C. (1997). Computation and the single neuron. Nature
385, 207–210.
L.xo1ox, C. (1990). Computation at the edge of chaos : phase
transitions and emergent computation. Physica D42, 1237.
Li Biii., M. (1988). Des pheTnomeZ nes critiques aux champs de jauge.
InterEditions\Editions du CNRS.
Lrnn.nin, A., L.noni, C. & F.i\i, S. (1982). Period
doubling cascade in mercury, a quantitative measurement.
Journal de Physique Lettres 43, L211L216.
Lonixz, E. N. (1963). Deterministic nonperiodic ow. Journal of
the Atmospheric Sciences 20, 130.
Mandelbrot, B. B. (1977). Fractals: Form, Chance and Dimension.
W. H. Freeman.
Mandelbrot, B. B. (1983). The Fractal Geometry of Nature. W. H.
Freeman.
Manneville, P. (1980). Intermittency, self-similarity and 1/f
spectrum in dissipative dynamical systems. Journal of Physics
(Paris) 41, 1235–1243.
Manrubia, S. C. & Solé, R. V. (1997). On forest spatial
dynamics with gap formation. Journal of Theoretical Biology
187, 159–164.
Maris, H. J. & Kadanoff, L. P. (1978). Teaching the renormaliz-
ation group. American Journal of Physics 46, 652–657.
M.1.-Toiiio, R. A. & Wriirs, M. A. (1997). Visualisation of
random sequences using the chaos game algorithm. Journal of
Systems and Software 39, 36.
M.v, R. M. (1976). Simple mathematical models with very
complicated dynamics. Nature 261, 459467.
M.vx.ni Sxr1n, J. (1989). The causes of extinction. Philo-
sophical Transactions of the Royal Society of London B325, 241252.
MN.xii, J. E. (1991). Fractal perspectives in pulmonary
physiology. Journal of Applied Physiology 71, 18.
Miramontes, O. & Rohani, P. (1998). Intrinsically generated
coloured noise in laboratory insect populations. Proceedings of
the Royal Society of London B 265, 785–792.
Niwx.x, M. E. J. (1996). Self-organized criticality, evolution
and the fossil extinction record. Proceedings of the Royal Society
of London B263, 16051610.
Niwx.x, M. E. J. (1997a). A model of mass extinction. Journal
of Theoretical Biology 189, 235252.
Niwx.x, M. E. J. (1997b). Evidence for self-organized criti-
cality in evolution. Physica D107, 293296.
Nrnoisox, A. J. (1957). The self-adjustment of populations to
change. Cold Spring Harbour Symposia on Quantitative Biology 22,
153173.
Nroirs, G. & Pnrooorxi, I. (1989). Exploring Complexity: an
Introduction. Freeman.
Nvni., D., Eiixin, S., MC.iiniv, D. & G.ii.x1, A. R.
(1992). Finding chaos in noisy systems. Journal of the Royal
Statistical Society B54, 399426.
Papa, A. R. R. & da Silva, L. (1997). Earthquakes in the
brain. Theory in Biosciences 116, 321–327.
P.nrsr, G. (1993). Statistical physics and biology. Physics World
6, 4247.
P.ixo.n1xin, D., Los., G. & Wirnii, E. R. (1981). Res-
olution eect on the stereological estimation of surface and
volume and its interpretation in terms of fractal dimension.
Journal of Microscopy 121, 5163.
Prion.x, B. & K.vi.x, D. T. (1999). Nonstationarity and 1\f
noise characteristics in heart rate. American Journal of
Physiology: Regulatory Integrative and Comparative Physiology 45,
R1R9.
Prxx, S. L. & Riiii.nx, A. (1988). The variability of
population densities. Nature 334, 613614.
Posner, M. I. & Pavese, A. (1998). Anatomy of word and
sentence meaning. Proceedings of the National Academy of Sciences
of the United States of America 95, 899–905.
Press, W. H. (1978). Flicker noises in astronomy and elsewhere.
Comments on Astrophysics 7, 103–119.
Pniss, W. H., Tiiioisiv, S. A., Vi11inirxo, W. T.,
Fi.xxinv, B. P. (1988). Numerical Recipes in C: the Art of
Scientic Computing, Second Edition, Cambridge University
Press.
Pno.r., I. & Snis1in, H. (1983). Functional renormaliz-
ation-group theory of universal 1\f noise in dynamical
systems. Physical Review A28, 12101212.
R.iv, D. M. (1986). Biological extinction in earth history.
Science 231, 15281533.
R.iv, D. M. (1989). The case for extraterrestrial causes of
extinction. Philosophical Transactions of the Royal Society of London
B325, 421435.
R.iv, D. M. & Bov.jr.x, G. E. (1988). Patterns of generic
extinction in the fossil record. Paleobiology 14, 109125.
R.iv, D. M. & Siviosir, J. J., Jn. (1982). Mass extinctions in
the marine fossil records. Science 215, 15011502.
R.iv, D. M. & Siviosir, J. J., Jn. (1984). Periodicity of
extinctions in the geological past. Proceedings of the National
Academy of Sciences of the United States of America 81, 801805.
Rhodes, C. J. & Anderson, R. M. (1996a). A scaling analysis of
measles epidemics in a small population. Philosophical Trans-
actions of the Royal Society of London B 351, 1679–1688.
Rhodes, C. J. & Anderson, R. M. (1996b). Power laws
governing epidemics in isolated populations. Nature 381,
600–602.
Rhodes, C. J., Jensen, H. J. & Anderson, R. M. (1997). On the
critical behaviour of simple epidemics. Proceedings of the Royal
Society of London B 264, 1639–1646.
Ri1nix, R. (1993) Adapting to complexity. Scientic American
268, 130140.
S.iis, T. R. (1993). Life in one dimension: statistics and self-
organized criticality. Journal of Physics A: Mathematical and
General 26, 61876193.
Snri, K. L. & Vin\iix, A. A. (1974) 1\f noise with a low
frequency white noise limit. Nature 251, 599601.
Siviosir, J. J., Jn. (1982). A compendium of fossil marine families
Milwaukee. Public Museum Contributions in Biology and
Geology 51.
Siviosir, J. J., Jn. (1993). Ten years in the library: new data
conrm paleontological patterns. Paleobiology 19, 4351.
Sneppen, K. (1995). Extremal dynamics and punctuated co-
evolution. Physica A 221, 168–179.
Sneppen, K., Bak, P., Flyvbjerg, H. & Jensen, M. H. (1995).
Evolution as a self-organized critical phenomenon. Proceedings
of the National Academy of Sciences of the United States of America
92, 5209–5213.
Solé, R. V. (1996). On macroevolution, extinctions and critical
phenomena. Complexity 1, 40–44.
Solé, R. V. & Bascompte, J. (1996). Are critical phenomena
relevant to large-scale evolution? Proceedings of the Royal
Society of London B 263, 161–168.
Solé, R. V., Bascompte, J. & Manrubia, S. C. (1996).
Extinction: bad genes or weak chaos? Proceedings of the Royal
Society of London B 263, 1407–1413.
Solé, R. V. & Manrubia, S. C. (1995a). Are rain forests self-
organized in a critical state? Journal of Theoretical Biology 173,
31–40.
Solé, R. V. & Manrubia, S. C. (1995b). Self-similarity in rain
forests: evidence for a critical state. Physical Review E 51,
6250–6253.
Solé, R. V. & Manrubia, S. C. (1996). Extinction and self-
organized criticality in a model of large-scale evolution.
Physical Review E 54, R42–R45.
Solé, R. V. & Manrubia, S. C. (1997). Criticality and
unpredictability in macroevolution. Physical Review E 55,
4500–4507.
Solé, R. V., Manrubia, S. C., Benton, M. & Bak, P. (1997).
Self-similarity of extinction statistics in the fossil record. Nature
388, 764–767.
Sonxi11i, A. & Sonxi11i, D. (1989). Self-organized criticality
and earthquakes. Europhysics Letters 9, 197202.
S1.ssrxovoiios, D. & B.i, P. (1995). Democratic reinforce-
ment : a principle for brain function. Physical Review E51,
50335039.
S1.iiiin, D. (1979). Scaling theory of percolation clusters.
Physics Reports 54, 174.
S1iiii, J. H. (1985). A comparison of terrestrial and marine
ecological systems. Nature 313, 355358.
S1oiis, T. K., Ginxiv, W. S. C., Nrsni1, R. M. & Biv1ni, S.
P. (1988). Parameter evolution in a laboratory insect
population. Theoretical Population Theory 34, 248265.
S1noo.1z, S. H. (1994). Nonlinear Dynamics and Chaos. Addison-
Wesley Publishing.
Siorn.n., G. (1996). Scientic correspondence. Nature 381,
199.
Siorn.n., G. & M.v, R. M. (1990). Nonlinear forecasting as a
way of distinguishing chaos from measurement error in time
series. Nature 344, 734741.
T.xo, C. & B.i, P. (1988). Critical exponents and scaling
relations for self-organized critical phenomena. Physical Review
Letters 60, 23472350.
Tino11i, D. L. (1992). Fractals and Chaos in Geology and
Geophysics. Cambridge University Press.
U1ri., S. (1957). Cyclic uctuations of population density
intrinsic to the host-parasite system. Ecology 38, 442449.
Voss, R. F. & Clarke, J. (1975). 1/f noise in music and speech.
Nature 258, 317–318.
Wis1, G. B., Bnowx, J. H. & Exoirs1, B. J. (1997). A general
model for the origin of allosteric scaling laws in biology. Nature
276, 122126.
Wnr1i, A., Bioox, M. & Bowins, R. G. (1996a). Explaining
the colour of power spectra in chaotic ecological models.
Proceedings of the Royal Society of London B263, 17311737.
Wnr1i, A., Bowins, R. G. & Bioox, M. (1996b). Red\Blue
chaotic power spectra. Nature 381, 198.
Wilson, K. G. (1979). Problems in physics with many scales of
length. Scientific American 241, 158–179.
Wing, A. M. & Kristofferson, A. B. (1973). The timing of
interresponse intervals. Perception and Psychophysics 14, 455–460.
Woiin.x, S. (1983). Statistical mechanics of cellular automata.
Review of Modern Physics 55, 601644.
Woiin.x, S. (1984). Universality and complexity in cellular
automata. Physica D10, 135.
Woiin.x, S. (1986). Theory and Applications of Cellular Automata.
World Scientic.
Wnron1, S. (1982). Character change, speciation, and the
higher taxa. Evolution 36, 427443.
Zn.no1rxsir, A. (1964). Biozika 9, 306 (in Russian).
Zipf, G. K. (1949). Human Behaviour and the Principle of Least
Effort. Hafner Publishing.