
DFG-Schwerpunktprogramm 1114

Mathematical methods for time series analysis and digital image processing

Order patterns in time series


Christoph Bandt Faten Shiha

Preprint 106

Preprint Series DFG-SPP 1114 Preprint 106 April 2005

The consecutive numbering of the publications is determined by their chronological order. The aim of this preprint series is to make new research rapidly available for scientific discussion. Therefore, the responsibility for the contents lies solely with the authors. The publications will be distributed by the authors.

Order Patterns in Time Series


Christoph Bandt and Faten Shiha March 2005
Abstract. In an attempt to find fast and robust methods for data mining, we study time series by counting order patterns of 2, 3 or 4 equally spaced values. The statistics for stationary Gaussian processes is given. The pattern distribution for some well-known empirical time series deviates strongly from any Gaussian model. Applications include descriptive methods, similar to autocorrelation, as well as tests for symmetry properties like reversibility.

Introduction

In applied statistics, the simplest methods are often the best. Moreover, in recent years the routine study of huge multivariate data sets, data mining, requires algorithms which work extremely fast. Here we present some very elementary methods for the analysis of a univariate time series (x_t)_{t=1,...,T} which can be generalized to the multivariate setting. We do not use the actual values x_t. We only need to know whether x_s < x_t or x_s > x_t. We shall assume that the underlying variables X_t have a continuous distribution so that ties x_s = x_t are very rare. In other words, we study time series on the ordinal level.

This means we forget a lot of structure. All standard methods of time series analysis, based on vector spaces and additive decomposition of processes, are not available in our setting. Probably this is the reason that, with the exception of a number of papers by Marc Hallin and co-authors on rank correlation [11, 12, 15], a few case studies like those of Brillinger [7, 8], and our recent work [4, 3], ordinal methods have rarely been used for time series, although they are very common in elementary statistics. On the other hand, methods based on the topological structure of time series instead of the vector space structure are very robust under noise and non-linear perturbation. We also need not assume much stationarity of the underlying process, in particular if we compare only values x_s, x_t for small |s − t|. So let us start now with the simplest case.

Counting up and down

First examples
We consider the time series (x_1, ..., x_T) and a delay parameter d ∈ {1, 2, 3, ...}. Now we determine the relative frequency of time points t for which x_t < x_{t+d}:

p(d) = card{t | 1 ≤ t ≤ T − d, x_t < x_{t+d}} / (T − d).
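The frequency p(d) takes only a few lines to compute; a minimal sketch in Python (the function name p is ours):

```python
import numpy as np

def p(x, d):
    """Relative frequency of x[t] < x[t+d] over all t with t + d < len(x)."""
    x = np.asarray(x)
    return np.mean(x[:-d] < x[d:])
```

For a strictly increasing series p(x, d) equals 1 for every d; for an alternating series like 1, 0, 1, 0, 1 it is 0.5 at delay 1 and 0 at delay 2.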

At first glance, one might expect that this value is always 1/2. However, for the well-known monthly sunspot series and the annual Canadian lynx series [20] the function p(d) looks curious (Figure 1). It oscillates between 0.42 and 0.64, clearly indicating the 11-year (132 months) periodicity of the sunspots and the 10-year cycle of the lynx. Note that the sunspots have 3046 values and the lynx only 114. Let us consider each series as a sample from a stationary process and each p(d) as an estimate, approximated by a value from a binomial distribution with n = T − d, p = 1/2. Then the standard deviation of a single p(d) becomes 0.01 for the sunspots and 0.05 for the lynx. This agrees with the fluctuations of consecutive values in Figure 1. Thus for the short lynx series, single values p(d) do not differ too significantly from 1/2, but the whole structure is meaningful.
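The quoted standard deviations follow from the binomial model: σ = √(p(1 − p)/n) = 1/(2√n) for p = 1/2. A quick check for the two series lengths (taking d = 1 for simplicity):

```python
import math

# sigma = sqrt(p(1-p)/n) with p = 1/2, i.e. 1/(2*sqrt(n)), where n = T - d
for T in (3046, 114):            # sunspot and lynx series lengths
    n = T - 1                    # taking d = 1
    print(f"T={T}: sigma={0.5 / math.sqrt(n):.3f}")
```

This prints approximately 0.009 and 0.047, i.e. the 0.01 and 0.05 stated above.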
Figure 1. The function p(d) for the monthly sunspot series 1749-2003 and the Canadian lynx series [20]

If x(t) is a function of continuous time, 0 ≤ t ≤ T, then let d assume real values between 0 and T, and let us define p(d) with the Lebesgue length measure λ:

p(d) = λ({t | 0 ≤ t ≤ T − d, x(t) < x(t + d)}) / (T − d).

Example 2.1. Consider a sawtooth function with period one: x(t) = t/a for 0 ≤ t ≤ a, and x(t) = (1 − t)/(1 − a) for a ≤ t ≤ 1. Extend x(t) periodically,

x(t + m) = x(t) for any integer m. Then

p(d) = d + a(1 − 2d) for 0 < d < 1.

Figure 2. Sawtooth function and its p(d) for a = 2/3

See Figure 2. Instead of considering large T and the limit T → ∞, we take T = 1 and all t ∈ [0, 1], calculating t + d modulo 1 for t > T − d. Then x(t) > x(t + d) if and only if a(1 − d) < t < 1 − d(1 − a), which proves the formula for p(d).

Of course, p(d) also has period one, p(d + 1) = p(d), and p(0) = 0. It might be better to say that p is not defined for integers d = m since x(t + m) = x(t) for all t. Actually, it is not possible to define p(0) so that p becomes continuous at 0, because p(0+) = lim_{d→0+} p(d) = λ({t | x′(t) > 0})/T and p(0−) = λ({t | x′(t) < 0})/T = 1 − p(0+). This argument holds for all piecewise differentiable functions x(t). Thus for continuous time, p(d) is discontinuous at 0 and will be considered only for d > 0. If x is periodic with period s, we take 0 < d < s. Let us say that a function, a time series or an underlying stochastic process is balanced if p(d) = 1/2 for all d. The sawtooth function is balanced only for the symmetric case a = 1/2.
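The closed form for the sawtooth can be checked numerically; a sketch under our own discretization (a grid of 200000 points on [0, 1)):

```python
import numpy as np

def sawtooth(t, a):
    """1-periodic piecewise linear function rising on [0, a], falling on [a, 1]."""
    t = np.mod(t, 1.0)
    return np.where(t <= a, t / a, (1 - t) / (1 - a))

# Estimate p(d) = lambda{t in [0,1): x(t) < x(t+d)} on a fine grid and
# compare with the closed form p(d) = d + a(1 - 2d).
a, d = 0.25, 0.1
t = np.linspace(0, 1, 200001)[:-1]
p_est = float(np.mean(sawtooth(t, a) < sawtooth(t + d, a)))
print(p_est, d + a * (1 - 2 * d))   # both close to 0.3
```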

Gaussian processes versus chaotic time series


We want to give an argument which shows that series from Gaussian processes are balanced while series derived from dynamical systems are usually not. For a stationary process (X_t), the value p(d) is determined by the two-dimensional distribution of (X_t, X_{t+d}), where t does not matter.

Proposition 2.2. Stationary Gaussian processes are balanced. A process with stationary increments is balanced if the median of increments over any time span d is 0 and P(X_{t+d} = X_t) = 0.

Proof. Since E(X_t) = E(X_{t+d}), we may assume the mean is zero. We have to show P(X_t < X_{t+d}) = 1/2. We only need one assumption: that the two-dimensional density φ(x, y) of (X_t, X_{t+d}) is symmetric with respect to the origin: φ(x, y) = φ(−x, −y). Then each line through the origin, in particular the line x = y, divides the two-dimensional distribution into two parts with probability 1/2. The second assertion follows directly from the definition.

Example 2.3. In [0, 1] we consider the chaotic time series x_{t+1} = f(x_t) where f is the tent map f(x) = 2x for 0 ≤ x ≤ 1/2 and f(x) = 2(1 − x) for 1/2 ≤ x ≤ 1 (Figure 3). Lebesgue measure is invariant with respect to f, so this dynamical system can be considered as a stochastic process where the initial value x_1 ∈ [0, 1] is the random element. For almost all x_1 ∈ [0, 1], the x_t become uniformly distributed on [0, 1] for T → ∞. Similarly, the (x_t, x_{t+1}) are uniformly distributed points on the graph of the function f.
Figure 3. Asymmetry of chaotic attractors for the tent map

The line y = x divides the square into two parts, and the upper part has probability 2/3. In other words, x_t < x_{t+1} if and only if x_t < 2/3, so p(1) = 2/3. It is easy to see that x_t < x_{t+2} for 0 < x < 2/5 and 2/3 < x < 4/5, so that p(2) = 8/15. The general formula is

p(d) = (1/2)(1 + 1/(2^{2d} − 1)).

Instead of proving the formula, we discuss this as a typical case of a chaotic dynamical system. (It is well-known that the above dynamical system is conjugate to the logistic map g(x) = 4x(1 − x) on [0, 1] with its invariant measure [10].) It is common to visualize the attractor of a chaotic system as a two-dimensional projection of its invariant measure, like the graph in Figure 3. To determine p(d), we have to calculate the invariant measure of the attractor intersected with the upper triangle. If the invariant measure of the whole space is 1, then the measure of the upper triangle is probably not far from 1/2, but it seems unlikely that it is exactly 1/2. Why should a chaotic attractor adapt to the two triangles?
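The formula for the tent map is easy to check by Monte Carlo: draw the initial value uniformly (the invariant measure) and iterate d steps. A sketch:

```python
import numpy as np

def tent(x):
    # tent map: f(x) = 2x on [0, 1/2], f(x) = 2(1 - x) on [1/2, 1]
    return np.where(x < 0.5, 2 * x, 2 * (1 - x))

rng = np.random.default_rng(1)
x0 = rng.random(200000)              # uniform initial values
for d in (1, 2, 3):
    xd = x0.copy()
    for _ in range(d):
        xd = tent(xd)
    p_emp = np.mean(x0 < xd)
    p_theo = 0.5 * (1 + 1 / (2 ** (2 * d) - 1))
    print(d, round(p_emp, 3), round(p_theo, 3))
```

For d = 1, 2 the theoretical values are 2/3 and 8/15, as derived above. (Iterating one long orbit instead would fail in floating point: repeated doubling shifts all mantissa bits out and the numerical orbit collapses to 0.)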

Some other examples


Example 2.4. In Figure 4 we study p(d) by simulation of AR processes with non-Gaussian noise. We take X_t = 0.9X_{t−1} + ε_t and X_t = 1.8X_{t−1} − 0.9X_{t−2} + ε_t where the white noise ε_t is exponentially distributed. In the AR(1) process the exponential distribution causes p(d) to be essentially below 1/2 up to d = 20 (to get the smooth appearance, we took the average of 500 functions p(d) with T = 5000). For the AR(2) process the deviations from 1/2 are smaller, and the periodicity of the process is indicated.
Figure 4. The function p(d) for two AR processes with exponential noise. Left: X_t = 0.9X_{t−1} + ε_t. Right: X_t = 1.8X_{t−1} − 0.9X_{t−2} + ε_t.

Let us come back to periodic functions, like the sawtooth example above. Can we find an example with sinusoidal p(d), as for the sunspots? The sine function itself is balanced since sin(t + d) − sin t = 2 sin(d/2) cos(t + d/2). But there are very simple analytic periodic functions which are not balanced.

Example 2.5. Let x(t) = sin t + (1/2) sin 2t (Figure 5). The graph of p(d) is S-shaped, point symmetric with respect to (π, 1/2), and it must have a discontinuity at d = 2π. However, this discontinuity disappears if we add to x(t) a little Gaussian noise. Then p(d) looks almost like a sine, as for the sunspots. If stronger noise is added, p(d) gets a flat part, as x(t) itself, and becomes asymmetric with respect to 1/2, with a range from 0.42 to 0.54.

Figure 5. (a) The function x(t) = sin t + (1/2) sin 2t. (b) p(d) for this function. (c), (d) p(d) for the disturbed function x(t) + c ε_t where ε_t is standard Gaussian white noise, c = 0.07 and c = 0.7.

Tests
Many tests have been suggested to check various properties of the underlying process of a given time series, as for example Gaussian distribution, linearity, and reversibility [20]. A stationary process is called reversible if (X_{t_1}, X_{t_2}, ..., X_{t_k}) has the same distribution as (X_{t−t_1}, X_{t−t_2}, ..., X_{t−t_k}), for arbitrary t, t_1, ..., t_k. Obviously, such a process must be balanced. Since p(d) = 1/2 is necessary for being Gaussian as well as reversible, p(d) can be used as a test statistic for checking these properties.

Test for balance. Under the null hypothesis p(d) = 1/2 for the underlying process, and rather weak independence requirements, the value (T − d)p(d) for the time series has a binomial distribution with n = T − d and p = 1/2. Its significance for each d can easily be checked. Nevertheless, we rather recommend drawing the function p(d) as a visual tool for deciding Gaussian distribution and reversibility. It gives only one particular parameter of the two-dimensional distributions, but it gives this parameter for all delays d within one figure.

Test for trend. When p(d) increases with d, we can expect that our time series has an increasing trend. This can be seen for the sunspots in Figure 1. Thus to check for trend, we can do a linear regression with p(d) instead of x_t, or we can compare the p(d), d = 1, ..., T/4 with the p(d), d = T/4 + 1, ..., T/2, for instance by a Mann-Whitney test.

Figure 6. p(d) for two simple trend models (linear trend and random walk), b = 0.02

Example 2.6. To see how p(d) describes trends, consider two simple models (Figure 6). The linear trend model X_t = a + bt + ε_t implies

p(d) = P(ε_t − ε_{t+d} < bd).

For uniform noise on [−1, 1], the density φ of ε_t − ε_{t+d} is symmetric to zero, and φ(z) = (2 − z)/4 for 0 ≤ z ≤ 2. Thus p(d) = 1/2 + bd(4 − bd)/8 for 0 ≤ bd ≤ 2, and p(d) = 1 for bd > 2. Thus p(d) is almost linear between (0, 1/2) and (2/b, 1), and it is not much different for Gaussian noise. The random walk with drift

X_t = X_{t−1} + b + ε_t

gives

p(d) = P( Σ_{k=1}^{d} ε_{t+k} + bd > 0 ) = Φ(b√d)

when ε_t is standard Gaussian noise. The increase of p(d) is much slower.
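The random walk formula p(d) = Φ(b√d) can be verified by simulation; a sketch with our own parameter choices:

```python
import numpy as np
from math import erf, sqrt

def Phi(z):
    # standard normal cumulative distribution function
    return 0.5 * (1 + erf(z / sqrt(2)))

rng = np.random.default_rng(0)
b, T, reps = 0.02, 400, 500
delays = (1, 25, 100)
p_avg = np.zeros(len(delays))
for _ in range(reps):                          # average p(d) over many paths
    x = np.cumsum(b + rng.standard_normal(T))  # X_t = X_{t-1} + b + eps_t
    p_avg += [np.mean(x[:-d] < x[d:]) for d in delays]
p_avg /= reps
print([round(v, 2) for v in p_avg])
print([round(Phi(b * sqrt(d)), 2) for d in delays])
```

Both lines should agree to about two decimals (Φ(0.02) ≈ 0.51, Φ(0.1) ≈ 0.54, Φ(0.2) ≈ 0.58).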



Figure 7. p(d) for three parts of the sunspot series. 1: 1749-1832, 2: 1833-1916, 3: 1917-2000

Checking stationarity. Calculating p(d) for different parts of a sufficiently long series, we can look for changes in the behavior of the time series. For Figure 7 the sunspot series was divided into three equal parts. While the periodicity remains stable, it can be seen that the trend which was visible in the whole series (Figure 1) is only due to the increased values within the 20th century (part 3). An improved method was used by A. Groth [13] to study changes in an EEG, caused for instance by epileptic seizure. There

are other properties, like self-similarity or long-term dependence of the underlying process, which can also be checked with p(d). Of course, p(d) only describes one tiny detail necessary for these properties, but this detail is easy to extract and evaluate even from short time series.

Order patterns of length 3

Definition
Now let us consider three equidistant time points. The corresponding values x_t, x_{t+d} and x_{t+2d} can form the six order patterns shown in Figure 8. The most intuitive way to denote these patterns is by the sequence of rank numbers (1 denotes the minimum and 3 the maximum of the values). For instance, the relation x_{t+d} < x_{t+2d} < x_t describes the order pattern π = 312 since x_t is the largest, x_{t+d} the smallest, and x_{t+2d} the second largest of the three values. This notation applies to any order pattern of n values x_t, x_{t+d}, ..., x_{t+(n−1)d}.
Figure 8. The six order patterns of length 3.

We define the relative frequency p_π(d) of an order pattern π in (x_t)_{t=1,...,T} as

p_π(d) = card{t | 1 ≤ t ≤ T − (n−1)d, (x_t, x_{t+d}, ..., x_{t+(n−1)d}) forms order pattern π} / (T − (n−1)d).

Our p(d) is the special case n = 2 and π = 12. As in the previous section, we can define p_π(d) also for functions x(t) in continuous time, t ∈ [0, T], replacing the cardinality of the finite set by the length measure λ of time points. For a stationary process, we can work with the distribution of (X_t, X_{t+d}, ..., X_{t+(n−1)d}) for a fixed t. Actually, it is enough to know the signs of increments. In this section, we consider the case n = 3. Thus for a process with stationary increments, we need only the two-dimensional distribution of (X_{t+d} − X_t, X_{t+2d} − X_{t+d}) for a fixed t.
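Counting order patterns is again a few lines of code; a sketch (the function name is ours), coding each pattern by its rank sequence:

```python
import numpy as np
from itertools import permutations

def pattern_frequencies(x, d, n=3):
    """Relative frequencies of the n! order patterns of x[t], x[t+d], ..., x[t+(n-1)d].

    A pattern is coded by its rank sequence, e.g. (3, 1, 2) means the first
    value is the largest and the second value the smallest (pattern 312)."""
    x = np.asarray(x)
    total = len(x) - (n - 1) * d
    counts = {pi: 0 for pi in permutations(range(1, n + 1))}
    for t in range(total):
        window = x[t : t + n * d : d]
        # double argsort turns values into ranks (0-based), +1 gives ranks 1..n
        ranks = tuple(int(r) for r in window.argsort().argsort() + 1)
        counts[ranks] += 1
    return {pi: c / total for pi, c in counts.items()}
```

For the series 1, 2, 3, 2.5, 0 at delay 1 this yields frequency 1/3 each for the patterns 123, 132 and 321.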

Basic properties
There are some obvious relations between the frequencies of order patterns. Let us write p_π for p_π(d) if no confusion is possible.

Proposition 3.1. For a process with stationary increments, and all d,

a) p_12 = p_123 + p_132 + p_231 = p_123 + p_213 + p_312
b) p_132 + p_231 = p_213 + p_312
c) p_123 − p_321 = 2p_12 − 1
d) p_12(2d) = p_123(d) + p_132(d) + p_213(d)
e) p_12(d) − p_12(2d) = p_231(d) − p_213(d) = p_312(d) − p_132(d)

For an arbitrary time series with T values, d) holds exactly, and the other equations hold approximately: the difference between both sides of the equation is at most d/(T − d) in a) and e), d/(T − 2d) in b), and 2d/(T − d) in c).

Proof. We start with processes. For a) we write p_12 as

P(X_t < X_{t+d}) = P(X_t < X_{t+d} < X_{t+2d}) + P(X_t < X_{t+2d} < X_{t+d}) + P(X_{t+2d} < X_t < X_{t+d}).

Then we work similarly with p_12 = P(X_{t+d} < X_{t+2d}), and for d) with p_12(2d) = P(X_t < X_{t+2d}). b) immediately follows from a). For c) we use a) and the complementary relation p_21 = p_321 + p_213 + p_312 to conclude p_12 − p_123 = p_21 − p_321. e) is a consequence of a) and d).

For a time series, the same argument works for d). For a) we note that p_12 is obtained from all t ≤ T − d while the order patterns of length 3 are calculated from t ≤ T − 2d. Thus

p_12 (T − d) = (p_123 + p_132 + p_231)(T − 2d) + y with 0 ≤ y ≤ d.

The difference between both sides in a) is (y − d(p_123 + p_132 + p_231))/(T − d); its absolute value is at most d/(T − d). This also holds for e), as difference of a) and d). For c) we take the difference of two such equations, so the error is at most 2d/(T − d). For b) we can use another argument: the numbers of local maxima (given by patterns 132 and 231) and of local minima (given by patterns 213 and 312) within any sequence x_t, x_{t+d}, ..., x_{t+kd} can differ only by one. In our case we take k so that T − d < t + kd ≤ T, and we sum over all initial values t with 0 < t ≤ d to obtain the bound d/(T − 2d). All arguments also apply to functions x(t) on [0, T].
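The exact relation d) and the approximate relation a) can be checked on simulated data; a self-contained sketch:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(7)
x = rng.standard_normal(10000)       # white noise series
T, d = len(x), 3

def p12(x, d):
    return np.mean(x[:-d] < x[d:])

# frequencies of the six length-3 patterns at delay d
w = np.stack([x[:T - 2 * d], x[d:T - d], x[2 * d:]])
ranks = w.argsort(axis=0).argsort(axis=0) + 1
c = Counter("".join(map(str, ranks[:, t])) for t in range(T - 2 * d))
q = {k: c[k] / (T - 2 * d) for k in ("123", "132", "213", "231", "312", "321")}

print(abs(p12(x, 2 * d) - (q["123"] + q["132"] + q["213"])))  # d): ~0 up to rounding
print(abs(p12(x, d) - (q["123"] + q["132"] + q["231"])))      # a): at most d/(T-d)
```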

Our error bounds are good for small d, but useless for d = T /4, for instance. Numerical simulations show, however, that the coincidence in the above relations is much better than the proven bounds (see Figure 10).
Figure 9. The frequencies of patterns of length 3 for the sunspot series

Examples
The frequencies of the six order patterns in Figure 8 for the sunspot series are shown in Figure 9. It can be seen that the patterns 123 and 321 appear more often than the others, but not for all d. Moreover, the patterns 132 and 213, as well as 231 and 312, appear with almost the same frequency, for all d. This could be accidental for some d but not for all of them! It indicates a kind of symmetry of the underlying process. Since 132 and 213 are obtained from each other by rotation around 180° (see Figure 8), we say that the process is rotation symmetric for patterns of length 3 if p_132 = p_213 for all d. Equation b) above then shows that also p_231 = p_312. For the third pair of rotated patterns, 123 and 321, equality would only be possible for those d for which p_12(d) = 1/2, by equation c).

Proposition 3.1 shows that, given p_12, there are only two independent numbers among the six frequencies p_π for length 3. So the question arises which of the p_π, or combinations thereof, are the most instructive ones. We suggest

u = p_123 + p_321 as an index of persistence

v = p_132 − p_213 = p_312 − p_231 as an index of rotation symmetry.

Additionally, we may consider w(d) = p_12(d) − p_12(2d), which indicates the difference of the shapes of maxima and minima, see equation e). The index u is good for detecting periodicity even in a very noisy time series. u(d) will be minimal at d = s/2, 3s/2, 5s/2, ... where s denotes the period. This is easy to explain: two successive steps of length s/2 in the same direction (both < or both >) are never possible in the strictly periodic case where x(t) = x(t + s) for all t. On the other hand, u(d) will be large for d near to s, 2s, 3s, ..., at least when the periodic function is sufficiently smooth: then if x(t) is on an increasing branch of the function, this is also true for x(t + s) and x(t + 2s), by periodicity. For d = s − ε with small ε this implies x(t) > x(t + d) > x(t + 2d). For d = s + ε we get x(t) < x(t + d) < x(t + 2d). But for d = s we would get equality, which a little perturbation will take to one or the other side. Thus u has not a maximum at s, but two maxima left and right of s, and between them a minimum at s. This effect can be seen very well in Figure 10.
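The indices u, v and w can be computed directly from the pattern frequencies; a sketch (the function name is ours):

```python
import numpy as np
from collections import Counter

def indices_uvw(x, d):
    """Persistence u = p123 + p321, rotation-symmetry index v = p132 - p213,
    and w = p12(d) - p12(2d)."""
    x = np.asarray(x)
    T = len(x)
    w3 = np.stack([x[:T - 2 * d], x[d:T - d], x[2 * d:]])
    ranks = w3.argsort(axis=0).argsort(axis=0) + 1
    c = Counter("".join(map(str, ranks[:, t])) for t in range(T - 2 * d))
    q = {k: c[k] / (T - 2 * d) for k in ("123", "132", "213", "231", "312", "321")}
    u = q["123"] + q["321"]
    v = q["132"] - q["213"]
    w = np.mean(x[:-d] < x[d:]) - np.mean(x[:-2 * d] < x[2 * d:])
    return u, v, w
```

A strictly monotone series gives u = 1 and v = w = 0.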
Figure 10. Upper panel: u(d) for sunspots and lynx. Lower panel: different versions of v(d) (thin: p_132 − p_213, thick: p_312 − p_231) and of w(d) (thick: p_12(d) − p_12(2d), thin: p_231 − p_213, dotted: p_312 − p_132) for the sunspots.

The lynx series is rather smooth since u(1) is the maximum. For the sunspots, the first maximum occurs for d between 15 and 20, indicating the noisy character of the series. The lower panel of Figure 10 shows that the sunspot process is rotation symmetric while the shape of maxima and minima is quite different. The differences between different representations of

v and w given by equations b) and e) are surprisingly small. Remember that an error estimate based on the binomial distribution says that the values u(d), considered as estimates for the underlying process, have standard deviation 0.1 (cf. comment to Figure 1).

Gaussian processes
For a reversible process we have p_123 = p_321, p_132 = p_231, and p_213 = p_312. If the distribution of (Z_1 = X_{t+d} − X_t, Z_2 = X_{t+2d} − X_{t+d}) is symmetric with respect to its mean (i.e. for mean zero the density fulfils φ(x, y) = φ(−x, −y)), then p_123 = p_321, p_132 = p_312, and p_213 = p_231 (cf. Figure 8). Let us consider a process with stationary increments for which (Z_1, Z_2) has a Gaussian distribution, for all d. It fulfils both sets of equations since φ(y, x) = φ(x, y) = φ(−x, −y) for the density of (Z_1, Z_2). Such a process is also rotation symmetric, and is completely characterized by p_123 = p_321 = u/2. The other patterns have probability (1 − u)/4. Let us determine p_123. The following is well-known [2, 17].

Lemma 3.2. If (Z_1, Z_2, Z_3) has a Gaussian distribution with zero mean and correlation coefficients ρ_ij = ρ(Z_i, Z_j) then

P(Z_1 > 0, Z_2 > 0) = 1/4 + (1/(2π)) arcsin ρ_12,
P(Z_1 > 0, Z_2 > 0, Z_3 > 0) = 1/8 + (1/(4π))(arcsin ρ_12 + arcsin ρ_13 + arcsin ρ_23).

For our processes, p_123(d) = P(Z_1 > 0, Z_2 > 0), so only ρ = ρ(Z_1, Z_2) needs to be determined. In the case of a stationary Gaussian process, we can assume E(X_s) = 0 and E(X_s²) = 1 for all s, and take t = 0 in Z_1, Z_2:

ρ = E((X_d − X_0)(X_{2d} − X_d)) / E((X_d − X_0)²) = (2ρ_d − ρ_{2d} − 1) / (2(1 − ρ_d))

where ρ_d = ρ(X_0, X_d) is the autocorrelation of the process. Now we use the formula arcsin ρ = 2 arcsin √((1 + ρ)/2) − π/2 and get the following result after a small calculation.

Proposition 3.3. For a stationary Gaussian process with autocorrelation function ρ_d,

p_123(d) = p_321(d) = (1/π) arcsin( (1/2) √((1 − ρ_{2d}) / (1 − ρ_d)) ).

It is clear that for white noise, or any process with exchangeable three-dimensional distributions, all patterns π of length 3 have the same probability 1/6. The reason is that for any vector x_1, x_2, x_3, all permutations of the coordinates have the same probability to occur in a time series. A corresponding remark holds for patterns of order n. With Proposition 3.3 we can analytically describe p_123 for some other processes.

Corollary 3.4. For a Gaussian AR(1)-process X_t = aX_{t−1} + ε_t we have

p_123(d) = (1/π) arcsin( (1/2) √(1 + a^d) ) for all d.

For a Gaussian AR(2)-process X_t = a_1 X_{t−1} + a_2 X_{t−2} + ε_t we have

p_123(1) = (1/π) arcsin( (1/2) √(1 + a_1/(1 − a_2)) ).

For fractional Brownian motion with Hurst parameter H ∈ (0, 1),

p_123(d) = (1/π) arcsin 2^{H−1} for all d.

Proof. ρ_d = a^d for AR(1), and ρ_1 = a_1/(1 − a_2), ρ_2 = (a_1² − a_2² + a_2)/(1 − a_2) for AR(2) [19] are just inserted into Proposition 3.3. Fractional Brownian motion B(t) = B_H(t) is a Gaussian process with mean zero, stationary increments, variance E(B²(t)) = t^{2H} and covariance E(B(s)B(t)) = (1/2)(s^{2H} + t^{2H} − |s − t|^{2H}). This implies E((B(t + d) − B(t))²) = d^{2H} and

E((B(t + d) − B(t))(B(t + 2d) − B(t + d))) = (1/2)((2d)^{2H} − 2d^{2H}),

so that ρ = ρ(Z_1, Z_2) = 2^{2H−1} − 1. Since B(t) is a self-similar process, ρ and also the p_π do not depend on the value of d. The formula arcsin ρ = 2 arcsin √((1 + ρ)/2) − π/2 together with Lemma 3.2 gives the result.
For ordinary Brownian motion, H = 1/2, we get p_123(d) = 1/4, p_132(d) = 1/8. Moreover, p_123(d) → 1/2 for H → 1. For H → 0, we obtain the case of white noise: equal probabilities for all permutations.
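Corollary 3.4 can be confirmed by simulating an AR(1) path; a sketch with our own parameter values:

```python
import numpy as np
from math import asin, pi, sqrt

def p123_ar1(a, d):
    # Corollary 3.4: p123(d) = (1/pi) * arcsin( sqrt(1 + a**d) / 2 )
    return asin(sqrt(1 + a ** d) / 2) / pi

rng = np.random.default_rng(3)
a, T = 0.6, 400000
eps = rng.standard_normal(T)
x = np.empty(T)
x[0] = eps[0] / sqrt(1 - a * a)      # start in the stationary distribution
for t in range(1, T):
    x[t] = a * x[t - 1] + eps[t]

for d in (1, 2, 5):
    emp = np.mean((x[:T - 2 * d] < x[d:T - d]) & (x[d:T - d] < x[2 * d:]))
    print(d, round(emp, 3), round(p123_ar1(a, d), 3))
```

Empirical and theoretical values agree to about three decimals; for white noise (a = 0) the formula gives arcsin(1/2)/π = 1/6, as it should.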

Persistence as autocorrelation
For a stationary process, we can consider p_123(d) as a kind of autocorrelation function. Actually, it is a diagonal version of Kendall's tau [3, 6]. The value p_123(d) = 1/6 corresponds to ρ_d = 0; it will be approached for d → ∞ if there is no trend or long-range dependence. On the other hand, the maximal value p_123(d) = 1/2 should correspond to ρ = 1 and the minimal value p_123(d) = 0 to ρ = −1. Thus one way to standardize the function p_123(d) is q_123(d) = 4(p_123(d) − 1/6), with range (−2/3, 4/3) near (−1, 1). Figure 11b shows the autocorrelation function of the sunspot series, influenced by the upwards trend, compared to q_123(d) as well as the standardized persistence function 2(u(d) − 1/3).

For Gaussian AR(1)-processes X_t = aX_{t−1} + ε_t, a < 1, the standardization q_123(d) = 12(p_123(d) − 1/6) with range (−2, 4) is more appropriate. Using the corollary we can easily check that there is a perfect coincidence between q_123 and the autocorrelation function: the difference is less than 0.02 for all a > 0.17 and all d. Even for an AR(2)-process like X_t = 1.7X_{t−1} − 0.8X_{t−2} + ε_t, where the corollary implies q_123(1) = 2.62, the coincidence shown in Figure 11a is remarkable. Thus although the standardization of p_123 is not uniquely determined, it can compete with classical autocorrelation.

Figure 11. Autocorrelation and standardized function p_123(d) for the Gaussian AR(2) process X_t = 1.7X_{t−1} − 0.8X_{t−2} + ε_t (left) and for the sunspot series (right, dotted: 2(u(d) − 1/3))

Longer order patterns

So far, we have considered only the simplest order patterns. There are many ways to extend this work. Even for patterns of length 3, we can take arbitrary delays d_1, d_2 instead of d, 2d. Some experiments in this direction were done by Groth [13]. For patterns of length 4, determined for vectors (x_t, x_{t+d_1}, x_{t+d_2}, x_{t+d_3}), the choice d_1 = d, d_2 = k and d_3 = d + k leads to Kendall's tau, an interesting autocorrelation function [3, 6]. Here k is the delay parameter of autocorrelation while d is another scale parameter. The tau-autocorrelation of Ferguson, Genest and Hallin [11] is a sum over all d while we prefer to have d as a free parameter which can be chosen in an appropriate way. Some results of the previous section, like Proposition 3.1, can be extended

to equidistant patterns of length 4, d_j = jd. In particular, we can determine the probabilities p_π for Gaussian processes, using the second part of Lemma 3.2. Because of the symmetries of normal distributions, there are eight classes of permutations:

Theorem 4.1. For a stationary Gaussian process and arbitrary d > 0,

p_1234 = p_4321 = 1/8 + (1/(4π))(arcsin ρ_1 + 2 arcsin ρ_2),
p_3142 = p_2413 = 1/8 + (1/(4π))(2 arcsin ρ_3 + arcsin ρ_4),
p_4231 = p_1324 = 1/8 + (1/(4π))(arcsin ρ_4 − 2 arcsin ρ_5),
p_2143 = p_3412 = 1/8 + (1/(4π))(2 arcsin ρ_6 + arcsin ρ_1),
p_1243 = p_2134 = p_3421 = p_4312 = 1/8 + (1/(4π))(arcsin ρ_7 − arcsin ρ_1 − arcsin ρ_5),
p_1423 = p_4132 = p_3241 = p_2314 = 1/8 + (1/(4π))(arcsin ρ_7 − arcsin ρ_4 − arcsin ρ_5),
p_3124 = p_1342 = p_4213 = p_2431 = 1/8 + (1/(4π))(arcsin ρ_3 + arcsin ρ_8 − arcsin ρ_5),
p_1432 = p_4123 = p_2341 = p_3214 = 1/8 + (1/(4π))(arcsin ρ_6 − arcsin ρ_8 + arcsin ρ_2),

where

ρ_1 = (2ρ_{2d} − ρ_d − ρ_{3d}) / (2(1 − ρ_d)),
ρ_2 = (2ρ_d − ρ_{2d} − 1) / (2(1 − ρ_d)),
ρ_3 = (ρ_{2d} + ρ_{3d} − ρ_d − 1) / (2√((1 − ρ_{2d})(1 − ρ_{3d}))),
ρ_4 = (ρ_d − ρ_{3d}) / (2(1 − ρ_{2d})),
ρ_5 = (1/2) √((1 − ρ_{2d}) / (1 − ρ_d)),
ρ_6 = (ρ_d + ρ_{3d} − ρ_{2d} − 1) / (2√((1 − ρ_d)(1 − ρ_{3d}))),
ρ_7 = (ρ_d + ρ_{2d} − ρ_{3d} − 1) / (2√((1 − ρ_d)(1 − ρ_{2d}))),
ρ_8 = (ρ_d − ρ_{2d}) / √((1 − ρ_d)(1 − ρ_{3d})).

Details are given in [18]. This theorem looks similar for patterns with delays d_1, d_2, d_3. However, analytical expressions as above seem not to exist for patterns of order 5 since there is no appropriate generalization of Lemma 3.2 [2, 17]. For fractional Brownian motion with Hurst parameter H, we have the same formulas as in the above theorem, but now with

ρ_1 = (1 + 3^{2H} − 2^{2H+1})/2, ρ_2 = 2^{2H−1} − 1, ρ_3 = (1 − 3^{2H} − 2^{2H})/(2 · 6^H),
ρ_4 = (3^{2H} − 1)/2^{2H+1}, ρ_5 = 2^{H−1}, ρ_6 = (2^{2H} − 3^{2H} − 1)/(2 · 3^H),
ρ_7 = (3^{2H} − 2^{2H} − 1)/2^{H+1}, ρ_8 = (2^{2H} − 1)/3^H.

In particular, ordinary Brownian motion fulfils p_1234 = p_4321 = 1/8 and p_1243 = p_2134 = p_3421 = p_4312 = 1/16, and all other patterns have smaller probability.
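For ordinary Brownian motion the stated values 1/8 and 1/16 are easy to check by Monte Carlo, since the increments are independent standard normals; a sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 500000
z = rng.standard_normal((N, 3))                    # increments Z1, Z2, Z3
b = np.concatenate([np.zeros((N, 1)), np.cumsum(z, axis=1)], axis=1)
ranks = b.argsort(axis=1).argsort(axis=1) + 1      # order pattern of B(0),...,B(3d)

def freq(pattern):
    return float(np.mean((ranks == pattern).all(axis=1)))

print(round(freq((1, 2, 3, 4)), 3))   # close to 1/8  = 0.125
print(round(freq((1, 2, 4, 3)), 3))   # close to 1/16 = 0.0625
```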

For length 4 we have not specified reasonable parameters, or groups of patterns, like the indices of persistence and rotation symmetry for length 3. Numerical experiments have shown, however, that the study of individual order patterns can be useful even for order 4. For instance, the rotation symmetry for the sunspot series can be verified for all order 4 patterns, which seems rather curious. One way to deal with the distribution of all patterns together is the permutation entropy introduced in [4]. This is just the Shannon entropy of the n! probabilities p_π of order patterns for any fixed length n. In numerical studies of chaotic time series, we can go with n up to 16 [4], and theoretically we can go with n to infinity [5]. For empirical time series of moderate length (500 or less), lengths up to n = 5 make sense, as was verified for speech signals in [4] and for EEG data in [9, 16]. Some other approaches to ordinal time series were sketched in [3]. We presented a starting point. A lot of work is waiting ahead.
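A minimal sketch of the permutation entropy of [4] (the implementation details are our own):

```python
import numpy as np
from collections import Counter

def permutation_entropy(x, n=3, d=1):
    """Shannon entropy (in bits) of the distribution of order patterns of length n."""
    x = np.asarray(x)
    m = len(x) - (n - 1) * d
    w = np.stack([x[i * d : i * d + m] for i in range(n)])
    ranks = w.argsort(axis=0).argsort(axis=0)
    counts = Counter(tuple(ranks[:, t]) for t in range(m))
    p = np.array([c / m for c in counts.values()])
    return max(0.0, float(-(p * np.log2(p)).sum()))

rng = np.random.default_rng(2)
print(permutation_entropy(rng.standard_normal(100000)))  # near log2(6) = 2.585
print(permutation_entropy(np.arange(100.0)))             # a single pattern: 0
```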

References
[1] H.D.I. Abarbanel, Analysis of observed chaotic data, Springer, New York 1995
[2] R.H. Bacon, Approximation to multivariate normal orthant probabilities, Ann. Math. Statist. 34 (1963), 191-198
[3] C. Bandt, Ordinal time series analysis, Ecological Modelling 182 (2005), 229-238
[4] C. Bandt and B. Pompe, Permutation entropy: a natural complexity measure for time series, Phys. Rev. Lett. 88 (2002), 174102
[5] C. Bandt, G. Keller and B. Pompe, Entropy of interval maps via permutations, Nonlinearity 15 (2002), 1595-1602
[6] C. Bandt, S. Laude and H. Lauer, Kendall's tau as autocorrelation, Preprint, Greifswald 2000
[7] D. Brillinger, An analysis of an ordinal-valued time series, Lecture Notes in Statistics 115, 73-87, Springer 1996

16

[8] D. Brillinger et al, Automatic methods for generating seismic intensity maps, J. Appl. Probab. 38A (2001), 188-201 [9] Y. Cao, W. Tung, J.B. Gao, V.A. Protopopescu and L.M. Hively, Detecting dynamical changes in time series using the permutation entropy, Phys. Rev. E 70 (2004), 046217 [10] P. Collet and J.-P. Eckmann, Iterated maps on the interval as dynamical systems, Birkh auser, Basel 1980 [11] S.T. Ferguson, C. Genest and M. Hallin, Kendalls tau for serial dependence, Canadian J. Statist. 28 (2000), 587-604 [12] B. Garel and M. Hallin, Rank-based autoregressive order identication, J. Am. Stat. Assoc. 94 (1999), 1357-1371 [13] A. Groth, Visualization of coupling in time series by order recurrence plots, Greifswald 2004, submitted [14] M. Hallin and J. Jure ckova, Optimal tests for autoregressive models based on autoregression rank scores, Annals Stat. 27 (1999), 1385-1414 [15] M. Hallin and B.J.M. Werker, Optimal testing for semi-parametric AR models from Gaussian Lagrange multipliers to autoregression rank scores and adaptive tests, Asymptotics, Nonparametrics, and time series (ed. S. Gosh), Marcel Dekker, New York 1999, 295-350 [16] K. Keller and H. Lauer, Symbolic analysis of high-dimensional time series, Int. J. Bifurcation and Chaos 13 (2003), 2428-2432 [17] R.L. Plackett, A reduction formula for normal multivariate integrals, Biometrika 41 (1954), 351-360 [18] F.A. Shiha, Distributions of order patterns in time series, PhD dissertation, Greifswald 2004 [19] R.H. Shumway, D.S. Stoer, Time Series Analysis and its Applications, Springer 2000 [20] H. Tong, Non-linear time series, Oxford University Press 1990 Christoph Bandt Institut f ur Mathematik und Informatik Arndt-Universit at 17487 Greifswald Germany bandt@uni-greifswald.de 17 Faten Shiha Mathematics Department Faculty of Science Mansoura University Mansoura, Egypt fshiha@mans.edu.eg
