
First-Order Spacings of Random Matrix Eigenvalues

Rebecca C. Lehman
Mathematics Department, Princeton University, Princeton, NJ 08544
E-mail: rclehman@math.princeton.edu

December 25, 2001

Abstract

The eigenvalues of large random matrices are useful in many contexts, particularly statistical physics. For the Gaussian Orthogonal Ensemble, we present the known distribution of their local spacings, an analogue of the Central Limit Theorem for eigenvalues. We then investigate the local spacings of eigenvalues from other distributions, in particular the Uniform, Cauchy and Poisson, and show evidence that the distribution from the Gaussian may in fact be universal.

1 A Little Motivation
In statistical mechanics, large and complicated systems can often be modeled as ensembles of random numbers. But random numbers are only meaningful if their probability distribution is known, and in physics it is sometimes difficult to guess the appropriate distribution. Classical probability theory gives us the Weak Law of Large Numbers and the Central Limit Theorem, which essentially state that large sums of random numbers behave in a certain way regardless of the particular distribution. However, not everything is independent of the distribution. Some things can only be proved for certain nice distributions. For instance, the first-order spacings of an ordered set are exponential for the uniform distribution, but the proof does not work for other distributions.

1.1 Classical Theory

Theorem 1.1 (Weak Law of Large Numbers) If $S_n = \frac{1}{n}\sum_{i=1}^n x_i$, where the $x_i$ are independently randomly distributed over any distribution with mean 0 and variance normalized to 1, then $E(S_n) = 0$ and $\lim_{n\to\infty} E(S_n^2) = 0$, so for all $\epsilon > 0$ the probability that $|S_n| \geq \epsilon$ goes to 0.

Proof:
$$E(S_n) = \frac{1}{n}\sum_{i=1}^n E(x_i) = E(x_i) = 0 \tag{1.1}$$
$$E(S_n^2) = \frac{1}{n^2}\sum_{i,j} E(x_i x_j) \tag{1.2}$$
$$\frac{1}{n^2}\sum_{i} E(x_i^2) = \frac{1}{n} E(x_i^2) \tag{1.3}$$
$$\frac{1}{n^2}\sum_{i\neq j} E(x_i x_j) = \frac{n^2-n}{n^2} E(x_i)^2 \tag{1.4}$$
But $E(x_i^2) = 1$ and $E(x_i) = 0$, so
$$E(S_n^2) = \frac{1}{n} + 0 = \frac{1}{n}.$$
The probability of $|S_n| \geq \epsilon$ is the probability of $|S_n|^2 \geq \epsilon^2$. By Chebyshev's inequality,
$$\epsilon^2\,\mathrm{Prob}(S_n^2 \geq \epsilon^2) = \int_{|x|\geq\epsilon} \epsilon^2\, dP \leq \int_{|x|\geq\epsilon} x^2\, dP \leq \int S_n^2\, dP = \frac{1}{n}. \tag{1.5, 1.6}$$
If we hold $\epsilon$ constant and allow $n$ to get large, $\mathrm{Prob}(S_n^2 \geq \epsilon^2) \leq \frac{1}{n\epsilon^2}$ approaches 0.
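
As a numerical illustration of the $\frac{1}{n}$ decay of $E(S_n^2)$, here is a minimal sketch in Python/NumPy (ours, not part of the original computations), using a uniform distribution rescaled to unit variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate E(S_n^2) for S_n = (1/n) * sum of n centered, unit-variance draws.
for n in [10, 100, 1000]:
    trials = 20_000
    # Uniform on [-sqrt(3), sqrt(3)] has mean 0 and variance 1.
    x = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(trials, n))
    s = x.mean(axis=1)
    print(n, s.var(), 1 / n)  # empirical E(S_n^2) vs. the predicted 1/n
```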

Theorem 1.2 (Central Limit Theorem) Let $S_n = \frac{1}{\sqrt n}\sum_{j=1}^n x_j$, where the $x_j$ are independently randomly distributed according to some probability distribution with mean 0 and variance 1, and suitably integrable. Then as $n \to \infty$, $S_n$ has a Gaussian limiting probability distribution $e^{-x^2/2}$, appropriately normalized.

Proof: We use Fourier analysis. If $x$ and $y$ have probability distributions $\rho$ and $\sigma$, then $x + y$ has distribution $\rho \star \sigma$. So $S_n$ has distribution $\rho' \star \cdots \star \rho'$, where $\rho'$ is the distribution $\rho$ evaluated at $x\sqrt n$, normalized appropriately by multiplying by $\sqrt n$. The Fourier transform converts convolution to multiplication, and $\hat f(cx) = \frac{1}{c}\hat f(\frac{x}{c})$, so the Fourier transform of the distribution of $S_n$ is $(\hat\rho)^n$ evaluated at $\frac{\xi}{\sqrt n}$, with the normalizations canceling out.
$$\hat\rho(\xi) = \int_{-\infty}^{\infty} e^{2\pi i x \xi}\, d\rho \tag{1.7}$$
$$\hat\rho(0) = 1 \tag{1.8}$$
$$\frac{d\hat\rho}{d\xi}\Big|_{\xi=0} = 2\pi i\, E(x) = 0 \tag{1.9}$$
$$\frac{d^2\hat\rho}{d\xi^2}\Big|_{\xi=0} = -4\pi^2 E(x^2) \tag{1.10}$$
$$= -4\pi^2 \tag{1.11}$$
We do a Taylor expansion of $\hat\rho(t)$ about the origin, then substitute $t = \frac{\xi}{\sqrt n}$:
$$(\hat\rho)^n(t) = (1 - 2\pi^2 t^2 + \cdots)^n \tag{1.12}$$
$$(\hat\rho)^n\!\left(\frac{\xi}{\sqrt n}\right) = \left(1 - \frac{2\pi^2\xi^2}{n} + \cdots\right)^n \tag{1.13}$$
As $n \to \infty$, $(\hat\rho)^n \to e^{-2\pi^2\xi^2}$, a Gaussian. Since Gaussians are only renormalized by the Fourier transform, the original distribution $\rho' \star \cdots \star \rho'$ must also be Gaussian. $\Box$

Theorem 1.3 (First Order Spacings) Let $x_0, \ldots, x_n$ be random numbers drawn from the uniform distribution on $[0,1)$, ordered by size. Then the probability distribution of the first-order spacings $x_j - x_{j-1}$, normalized by dividing by the average spacing $\frac{1}{n}$, approaches $e^{-x}$.

Proof: First we note that the uniform distribution on $[0,1)$ can be considered as the uniform distribution on $S^1 = \mathbb{R}/\mathbb{Z}$. Since the distribution is uniform and therefore invariant under translations, without loss of generality we can relabel the zero so that $x_j$ is $x_1$ and $x_{j-1} = x_0 = 0$. Since $x_1$ is the first non-zero value, the probability that $x_1$ is greater than $a$ is the probability that all the $x_j$ except $x_0 = 0$ are greater than $a$, which is $(1-a)^n$. The probability that $n x_1$ is between $t$ and $t + \delta$ is thus
$$\left(1 - \frac{t}{n}\right)^n - \left(1 - \frac{t+\delta}{n}\right)^n. \tag{1.14}$$
In the limit, as $n \to \infty$, this goes to $e^{-t} - e^{-(t+\delta)}$. Dividing by $\delta$ and taking the limit as $\delta$ shrinks to 0, the probability distribution of $n x_1$ is $e^{-t}$. $\Box$

The proof as given applies only to the uniform distribution. However, it can be extended to other suitably integrable distributions by scaling local density to 1, using the cumulative distribution function.
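
Theorem 1.3 is easy to check by simulation. The following sketch (ours; Python/NumPy) compares the empirical survival function of the normalized spacings with $e^{-t}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Order n uniform points on [0, 1) and normalize spacings by the mean spacing 1/n.
x = np.sort(rng.random(n))
spacings = np.diff(x) * n

# Compare the empirical survival function Prob(spacing > t) with e^{-t}.
for t in [0.5, 1.0, 2.0]:
    print(t, (spacings > t).mean(), np.exp(-t))
```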

1.2 Quantum mechanics and Random Matrix Theory

In quantum mechanics, many properties of a system in a given state can be represented by the eigenvalues of a symmetric or Hermitian linear operator; for instance, the energy levels of a system in state $\psi$ are the eigenvalues solving the equation $H\psi = E\psi$. Unfortunately, in practice, for systems of any reasonable complexity the size of the matrix is usually impracticably large, if it is even known to be finite. Computing the actual entries of such a matrix is usually impossible. Hence it is often useful to treat most systems as random matrices of size $N$ approaching infinity. Just as classical statistical mechanics treats positions and velocities as random variables in order to study their aggregate properties (for instance, the frequency of their collisions or their total energy), so in the quantum framework the analogous assumption would be to treat the linear operators defining the system as random matrices, and the individual properties as their eigenvalues.

It would therefore be useful to have analogues of the Law of Large Numbers and the Central Limit Theorem, revealing universal properties of the eigenvalues of unknown matrix distributions. Wigner's Semicircle Law is one such property. The local eigenvalue spacings, unlike the spacings of a classical random number distribution, are conjectured to be another, but as yet this is unproved.

2 Facts about Random Matrix Eigenvalues


2.1 Joint Probability Density

Theorem 2.1 (Joint Probability Density) If an $n \times n$ random symmetric matrix $X$ is drawn from a probability density $g(\lambda_1, \ldots, \lambda_n)$ with respect to the Lebesgue measure $\prod_{i\leq j} dX_{ij}$ on $\mathbb{R}^{n(n+1)/2}$, where $g$ is expressed in terms of the increasingly ordered eigenvalues $\lambda_1 \leq \cdots \leq \lambda_n$, then the joint probability density function of the eigenvalues on the eigenvalue space $\{\lambda \in \mathbb{R}^n \mid \lambda_1 < \cdots < \lambda_n\}$ is
$$C_n\, g(\lambda_1, \ldots, \lambda_n) \prod_{i<j} |\lambda_i - \lambda_j|. \tag{2.1}$$

Proof: We construct the measure on eigenvalue space corresponding to the standard Lebesgue measure. With probability 1, $X = ODO^T$, where $D$ is a diagonal matrix with distinct eigenvalues and $O$ is an orthogonal matrix: $OO^T = I$, so $dO^T\,O = -O^T\,dO$. Differentiating $X$ and substituting $dM = dO^T\,O$, we get
$$dX = dO\,D\,O^T + O\,dD\,O^T + O\,D\,dO^T \tag{2.2}$$
$$dO\,D\,O^T = O\,O^T\,dO\,D\,O^T = -O\,dM\,D\,O^T, \qquad O\,D\,dO^T = O\,D\,dO^T\,O\,O^T = O\,D\,dM\,O^T. \tag{2.3}$$
Therefore we find
$$dX = O\,(dD - dM\,D + D\,dM)\,O^T.$$
Since the measure $dX$ is invariant under orthogonal transformations, we get
$$\prod_{i\leq j} dX_{ij} = \prod_{i\leq j} (dD - dM\,D + D\,dM)_{ij} = \prod_{i<j} (\lambda_j - \lambda_i)\,dM_{ij}\ \prod_{i=1}^n d\lambda_i = \prod_{i<j} |\lambda_i - \lambda_j|\ \prod_{i<j} dM_{ij}\ \prod_{i=1}^n d\lambda_i. \tag{2.4}$$
We can integrate out the $dM_{ij}$ to find that the induced measure is
$$C_n \prod_{i<j} |\lambda_i - \lambda_j| \prod_{i=1}^n d\lambda_i.$$
It follows that if $X$ has a density $g(\lambda_1, \ldots, \lambda_n)\prod_{i\leq j} dX_{ij}$, then the probability density in eigenvalue space is $C_n\, g(\lambda_1, \ldots, \lambda_n) \prod_{i<j}|\lambda_i - \lambda_j| \prod_{i=1}^n d\lambda_i$. $\Box$
2.2 Semicircle Law

Theorem 2.2 (Wigner's Semicircle Law) If $X$ is an $n \times n$ symmetric matrix from some probability distribution such that the elements $x_{ij}$, up to the symmetry condition, are independently randomly distributed with mean 0 and variance 1, and $C_k(n) = \sup_{1\leq i,j\leq n} E(|x_{ij}|^k) = O(1)$ as $n \to \infty$, then the mean eigenvalue distribution of the matrix $\frac{X}{\sqrt n}$ tends to the semicircle distribution $\frac{1}{2\pi}\sqrt{4 - x^2}$ as $n \to \infty$.

Mehta [5] proves this in his discussion of the Gaussian ensembles, relying on the joint probability distribution. Hiai and Petz [2] prove the theorem by a more conceptual combinatorial argument citing Voiculescu, which does not rely on the messy joint probability distribution, and we will follow their argument here. The proof is by the method of moments: we write the moments of the mean eigenvalue distribution in a combinatorial form, and show that the same form characterizes the sequence of moments of the semicircle. The $k$th moment of the mean eigenvalue distribution is
$$\frac{1}{n}\,E\!\left(\mathrm{Tr}\!\left(\tfrac{X}{\sqrt n}\right)^{\!k}\right) = \frac{1}{n^{k/2+1}} \sum_{m_1, \ldots, m_k \leq n} E(x_{m_1 m_2} x_{m_2 m_3} \cdots x_{m_k m_1}). \tag{2.5}$$
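
A quick numerical check of the semicircle law (our sketch; the sign distribution and matrix size are illustrative) compares the fraction of eigenvalues of $\frac{X}{\sqrt n}$ in $[-1,1]$ with the semicircle prediction $\frac{1}{2\pi}\int_{-1}^{1}\sqrt{4-t^2}\,dt = \frac{1}{3} + \frac{\sqrt 3}{2\pi}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Symmetric matrix with iid +/-1 entries (mean 0, variance 1), scaled by 1/sqrt(n).
a = rng.choice([-1.0, 1.0], size=(n, n))
x = np.triu(a) + np.triu(a, 1).T
lam = np.linalg.eigvalsh(x / np.sqrt(n))

# Fraction of eigenvalues in [-1, 1] vs. the semicircle prediction.
pred = 1 / 3 + np.sqrt(3) / (2 * np.pi)
print((np.abs(lam) <= 1).mean(), pred)
```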

Definition 2.3 (Non-Crossing Partitions) We define a non-crossing partition $P$ of a set $S$ to be a partition into pairs $S_j = \{s_{j1}, s_{j2}\}$ such that $s_{j1} < s_{k1} < s_{j2}$ iff $s_{j1} < s_{k2} < s_{j2}$.

Lemma 2.4 The $k$th moment (2.5) approaches 0 if $k$ is odd, and the number of non-crossing partitions of $[k]$ if $k$ is even, as $n \to \infty$.

Proof: If any $x_{m_i m_{i+1}}$ appears without repetition, its expectation value is 0, so by independence the expectation value of the product containing it is 0. In particular, any term containing more than $\frac{k}{2}+1$ distinct indices contributes nothing to the sum. There are at most $n^l\, l^k$ possible terms where $l$ of the $m_j$ are distinct, since each of the $l$ distinct $m_j$ can take one of $n$ values, and each of the $k$ factors is chosen from the $l$ distinct values. Since
$$|E(x_{m_1 m_2} \cdots x_{m_k m_1})| \leq E(|x_{m_1 m_2}|^k)^{1/k} \cdots E(|x_{m_k m_1}|^k)^{1/k} \leq C_k(n), \tag{2.6}$$
the sum over all such terms is at most $n^{l-k/2-1}\, l^k\, C_k(n)$, which vanishes as $n \to \infty$ if $l < \frac{k}{2}+1$. So the only possible sum that doesn't go to zero is over the terms with $l = \frac{k}{2}+1$ distinct factors. If $k$ is odd, the moment is 0. If $k$ is even, we replace $k$ by $2k'$. Then we are interested in
$$\frac{1}{n^{k'+1}} \sum E(x_{m_1 m_2} \cdots x_{m_{2k'} m_1}), \tag{2.7}$$
where the sum is over sequences $\{m_j\}$ such that exactly $k'+1$ of the $m_j$ are distinct, and every consecutive pair $\{m_j, m_{j+1}\}$ (considering $j$ modulo $2k'$) appears at least twice.

By induction, to any non-crossing partition of $[2k']$ we can associate $n(n-1)\cdots(n-k')$ terms in this sum: if $k' = 1$ there is a single partition $\{1, 2\}$, to which we assign the $n(n-1)$ terms defined by $x_{m_1 m_2} x_{m_2 m_1}$. Any non-crossing partition of $[2k'+2]$ must contain some pair of the form $\{s_{j1}, s_{j1}+1\}$. Removing that pair from the partition, we associate to it a partition of $[2k']$ by shifting downward. To each of the $n(n-1)\cdots(n-k')$ terms that correspond to this partition of $[2k']$, we associate $(n-k'-1)$ terms of length $2k'+2$ by inserting one of the $(n-k'-1)$ indices not yet used in the $s_{j1}$ and $s_{j1}+2$ positions.

Conversely, any sequence satisfying the conditions has each pair $\{m_j, m_{j+1}\}$ appearing exactly twice, and defines a non-crossing partition by $\{i, j\} \in P$ iff $\{m_i, m_{i+1}\} = \{m_j, m_{j+1}\}$. We prove this inductively: for $k' = 1$ it is trivial. Assume it holds for $k'-1$. There must be some $r$ such that $m_r$ appears only once in the $m$ sequence. Then $m_{r-1} = m_{r+1} \neq m_r$. Removing $m_r$ and $m_{r+1}$ we get a sequence with $k'$ distinct elements, which defines some non-crossing partition. Combine this non-crossing partition with the additional pair $\{r-1, r\}$. The result is still non-crossing, so the claim holds.

So the sum is
$$\frac{1}{n^{k'+1}} \sum E(x_{m_1 m_2} \cdots x_{m_{2k'} m_1}) = \frac{n(n-1)\cdots(n-k')}{n^{k'+1}}\, s_{k'}, \tag{2.8}$$
where $s_{k'}$ is the number of non-crossing partitions of $[2k']$. As $n \to \infty$ the coefficient goes to 1, and our lemma is proved. $\Box$

Lemma 2.5 The number $s_k$ of non-crossing partitions of $[2k]$ is the $k$th Catalan number $c_k = \frac{1}{k+1}\binom{2k}{k}$.

Proof: Any non-crossing partition pairs 1 with some even element $2m$, since any element $s_{j1}$ between 1 and its pair partner must also have $s_{j2}$ between 1 and its pair partner. The number of pair partitions containing $\{1, 2m\}$ is $s_{m-1} s_{k-m}$: it is determined by a non-crossing partition of the numbers inside $(1, 2m)$ and one of those outside $(1, 2m)$. This gives us the recursion relation $s_k = \sum_{i=0}^{k-1} s_i s_{k-1-i}$ for $k \geq 2$. The function
$$g(x) = \frac{1}{2}\left(1 - \sqrt{1-4x}\right) = \sum_{k=0}^{\infty} \frac{1}{k+1}\binom{2k}{k}\, x^{k+1} \tag{2.9}$$
is a generating function of the Catalan numbers. Since $g(x)$ satisfies the functional equation $g(x)^2 = g(x) - x$, its Taylor coefficients satisfy $c_n = \sum_{i=0}^{n-1} c_i c_{n-1-i}$ for $n \geq 2$, the same recursion as the $s_n$, with the same initialization $c_0 = s_0 = c_1 = s_1 = 1$. So $c_n = s_n$ by induction. $\Box$
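
The lemma can be confirmed by brute force for small $k$; the recursive count below (our illustrative sketch) mirrors the pairing argument in the proof:

```python
from math import comb

def noncrossing_pairings(elems):
    """Count non-crossing pair partitions of a sorted tuple of points."""
    if not elems:
        return 1
    total = 0
    # Pair the first element with each candidate; the pairing is non-crossing
    # only if it splits the rest into an "inside" and an "outside" block.
    for i in range(1, len(elems)):
        inside, outside = elems[1:i], elems[i + 1:]
        total += noncrossing_pairings(inside) * noncrossing_pairings(outside)
    return total

for k in range(1, 7):
    catalan = comb(2 * k, k) // (k + 1)
    print(k, noncrossing_pairings(tuple(range(2 * k))), catalan)
```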

Lemma 2.6 The $(2k+1)$th moment of the semicircle distribution is 0, and the $2k$th is $c_k$.

Proof: The odd moments are 0 because the semicircle distribution is an even function, so the integral vanishes by symmetry. For the even moments $m_{2k} = \frac{1}{2\pi}\int_{-2}^{2} x^{2k}\sqrt{4-x^2}\,dx$, we integrate the derivative of $x^{2k-1}(4-x^2)^{3/2}$ over $[-2,2]$ to get
$$0 = \frac{1}{2\pi}\int_{-2}^{2} \left(x^{2k-1}(4-x^2)^{3/2}\right)' dx = 4(2k-1)\,m_{2k-2} - (2k-1)\,m_{2k} - 3\,m_{2k}, \tag{2.10}$$
$$m_{2k} = \frac{2(2k-1)}{k+1}\, m_{2k-2}. \tag{2.11}$$
The Catalan sequence clearly satisfies this recursion relation, and $m_0 = c_0 = 1$, so the lemma is proved. $\Box$

Thus the moments of the semicircle $\frac{1}{2\pi}\sqrt{4-x^2}$ and the moments of the mean eigenvalue distribution are both equal to 0 if $k$ is odd and to the Catalan numbers if $k$ is even. Since a compactly supported distribution is completely determined by its moments, the eigenvalue distribution must be semicircular.
3 Gaussian
The Gaussian Orthogonal Ensemble, the probability distribution defined on real symmetric matrices by choosing the $x_{ij}$ from a Gaussian distribution $C e^{-a x_{ij}^2}$, where $C$ and $a$ are appropriate normalization constants chosen such that the variance of the trace $E(\mathrm{Tr}(X^2)) = 1$, is a particularly nice distribution both physically and mathematically. It is invariant under the orthogonal group, which makes it suitable for modeling physical spaces, and also makes the critical properties of the eigenvalues relatively easy to compute. The joint distribution for the GOE is $C'\, e^{-a\sum_{i=1}^n \lambda_i^2} \prod_{i<j} |\lambda_i - \lambda_j|$ (see Theorem 2.1).

Theorem 3.1 (Characterizing the Gaussian) The probability distributions on real symmetric matrices which are independent of choice of basis (i.e. $P(Q^T X Q) = P(X)$ for $Q$ orthogonal) and have all entries independently randomly distributed are precisely those of the form $e^{-a\,\mathrm{Tr}(X^2) + b\,\mathrm{Tr}(X) + c}$ for some constants $a, b, c$ with $a \geq 0$.

Proof: We follow Mehta [5]. Let $P = \prod_{i\leq j} f_{ij}(X_{ij})$: $X$ has entries independently distributed, and suppose $P$ is invariant under the orthogonal group. In particular, consider $X' = O X O^T$, where
$$O = \begin{pmatrix} \cos\theta & \sin\theta & 0 & \cdots & 0 \\ -\sin\theta & \cos\theta & 0 & \cdots & 0 \\ 0 & 0 & 1 & & \vdots \\ \vdots & & & \ddots & \\ 0 & 0 & \cdots & & 1 \end{pmatrix}.$$
Then
$$\frac{\partial X}{\partial \theta} = \frac{\partial O^T}{\partial\theta} X' O + O^T X' \frac{\partial O}{\partial\theta} = \frac{\partial O^T}{\partial\theta}\, O X O^T\, O + O^T\, O X O^T\, \frac{\partial O}{\partial\theta} = AX + XA^T, \tag{3.1}$$
where, at $\theta = 0$,
$$A = \frac{\partial O^T}{\partial\theta}\, O = \begin{pmatrix} 0 & -1 & 0 & \cdots \\ 1 & 0 & 0 & \cdots \\ 0 & 0 & 0 & \\ \vdots & \vdots & & \ddots \end{pmatrix}.$$
Since $P$ is invariant under orthogonal transformations, the logarithmic derivative $\frac{\partial}{\partial\theta}\sum \log f_{ij}(X_{ij})$ should vanish:
$$\sum_{i\leq j} \frac{1}{f_{ij}} \frac{\partial f_{ij}}{\partial X_{ij}} \frac{\partial X_{ij}}{\partial\theta} = 0. \tag{3.2}$$
Substituting for $\frac{\partial X}{\partial\theta}$ and expanding, we get
$$\left(-\frac{1}{f_{11}}\frac{\partial f_{11}}{\partial X_{11}} + \frac{1}{f_{22}}\frac{\partial f_{22}}{\partial X_{22}}\right)(2X_{12}) + \frac{1}{f_{12}}\frac{\partial f_{12}}{\partial X_{12}}(X_{11} - X_{22}) + \sum_{k=3}^{n} \left(-\frac{1}{f_{1k}}\frac{\partial f_{1k}}{\partial X_{1k}} X_{2k} + \frac{1}{f_{2k}}\frac{\partial f_{2k}}{\partial X_{2k}} X_{1k}\right) = 0. \tag{3.3}$$
Since each term of the sum depends on independent variables, each individually must be constant. Dividing the $k$th term by $X_{1k} X_{2k}$, we get
$$-\frac{1}{X_{1k} f_{1k}}\frac{\partial f_{1k}}{\partial X_{1k}} + \frac{1}{X_{2k} f_{2k}}\frac{\partial f_{2k}}{\partial X_{2k}} = \frac{C_k}{X_{1k} X_{2k}}. \tag{3.4}$$
This equation is of the form $f(x_1) + g(x_2) = h(x_1 x_2)$, which can only be solved by functions of the form $a + b \ln x$. So $C_k = 0$ and
$$\frac{1}{X_{1k} f_{1k}}\frac{\partial f_{1k}}{\partial X_{1k}} = \frac{1}{X_{2k} f_{2k}}\frac{\partial f_{2k}}{\partial X_{2k}} = c, \text{ a constant.} \tag{3.5}$$
Integrating, we get $f_{1k}(X_{1k}) = C e^{c X_{1k}^2/2}$, a Gaussian in $X_{1k}$ (with $c < 0$ for integrability). We can do the same for $f_{jk}$ for any $j \neq k$. Since all invariants can be expressed in terms of traces of powers of $X$, and the off-diagonal elements appear as squares in the exponent, $P(X)$ can be expressed as an exponential in $\mathrm{Tr}(X)$ and $\mathrm{Tr}(X^2)$. $\Box$

3.1 Local Spacings

The Gaussian is also mathematically nice in that it is possible to calculate its local eigenvalue spacings. Wigner (of the Wigner Semicircle Law) surmised that the local nearest-neighbor distribution, on small intervals normalized to have density 1, would be $A x e^{-B x^2}$, with the constants chosen so as to set the integral and the mean to 1. Remarkably, Mehta [4] and Gaudin [1] have proved that the actual spacings are extremely close to Wigner's surmise.
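
With the integral and the mean both set to 1, the surmise constants work out to $A = \frac{\pi}{2}$ and $B = \frac{\pi}{4}$, so the surmise density is $P(s) = \frac{\pi}{2}\, s\, e^{-\pi s^2/4}$. A minimal sketch (ours, in Python rather than the Matlab of Section 4) evaluating it and checking the normalization:

```python
import numpy as np

def wigner_surmise(s):
    """Wigner's surmise for nearest-neighbor spacings, with constants chosen
    so that the density integrates to 1 and has mean 1."""
    return (np.pi / 2) * s * np.exp(-np.pi * s**2 / 4)

# Sanity check: total mass and mean spacing should both be 1.
s = np.linspace(0, 10, 100_001)
p = wigner_surmise(s)
ds = s[1] - s[0]
print(np.sum(p) * ds, np.sum(s * p) * ds)
```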

Theorem 3.2 (First-Order Spacings) As $n \to \infty$, the probability density $P(S)$ of the distance between two consecutive eigenvalues of an $n \times n$ matrix from the Gaussian distribution (in the region of constant density, normalized by their mean) approaches $\frac{1}{4}\frac{d^2\sigma}{dt^2}$, where $\sigma(t)$ is (up to several constants) the Fredholm determinant corresponding to the linear convolution operator $f \to \int_{-t}^{t} Kf$, for the kernel
$$K(\xi,\eta) = \frac{1}{2}\left(\frac{\sin(\xi-\eta)}{\xi-\eta} + \frac{\sin(\xi+\eta)}{\xi+\eta}\right).$$

The proof of this theorem is very technical. Essentially the idea is to fix $n$ and rewrite $P_n(S)$ by repeatedly converting from product to determinant form and using determinant operations. Eventually $P_n(S)$ is written as a finite Fredholm determinant whose kernels, fortunately, have a known limit as $n \to \infty$. We give only the barest sketch of the key points.

Since we are interested in what happens as $n \to \infty$, it suffices to consider even $n = 2m$. Fixing $m$, and writing $S = 2\theta$, the spacing distribution $P_m(S)$ for a matrix of size $2m$ is derived from the 2-point correlation function
$$P(-\theta, \theta) = (2m-2)! \int P(-\theta, \lambda_1, \ldots, \lambda_{2m-2}, \theta)\, d\lambda_1 \cdots d\lambda_{2m-2}, \tag{3.6}$$
where
$$P(\lambda_1, \ldots, \lambda_n) = e^{-(\lambda_1^2 + \cdots + \lambda_n^2)} \prod_{i<j} |\lambda_i - \lambda_j| \tag{3.7}$$
and the integral is taken over $\lambda_i$ ordered in increasing size and with no $\lambda_i$ in the interval $(-\theta, \theta)$. Expanding the absolute value product and factoring out a $2\theta$, we find
$$P(-\theta, \theta) = (2m-2)!\; 2\theta\, e^{-2\theta^2} R(\theta), \tag{3.8}$$
where $R(\theta)$ is, up to an explicit constant depending only on $m$, the integral of a Vandermonde determinant over the excluded region:
$$R(\theta) = \int e^{-(\lambda_1^2 + \cdots + \lambda_{2m-2}^2)} \det\begin{pmatrix} 1 & \lambda_1 & \cdots & \lambda_1^{2m-3} \\ \vdots & \vdots & & \vdots \\ 1 & \lambda_{2m-2} & \cdots & \lambda_{2m-2}^{2m-3} \end{pmatrix} d\lambda_1 \cdots d\lambda_{2m-2}. \tag{3.9, 3.10}$$
By integrating over the odd variables and applying column operations to the determinant, we get $R(\theta)$ as a symmetric integral over the even variables. This allows us to integrate independently over all the variables and divide by $(m-1)!$.

Expanding the determinant by minors and changing variables several times, it is possible to derive
$$R(\theta) = \left\{ R_1(\theta) - \frac{1}{4}\frac{d}{d\theta} R_1(\theta) \right\}, \tag{3.11}$$
where, if $\alpha_{2i} = 2\int_\theta^\infty e^{-2y^2} y^{2i}\, dy$,
$$R_1(\theta) = \det\begin{pmatrix} \alpha_0 & \alpha_2 & \cdots & \alpha_{2m-2} \\ \alpha_2 & \alpha_4 & \cdots & \alpha_{2m} \\ \vdots & & \ddots & \vdots \\ \alpha_{2m-2} & \alpha_{2m} & \cdots & \alpha_{4m-4} \end{pmatrix}.$$
The probability of an arbitrary spacing being at $(-\theta, \theta)$ is then $2m(2m-1)\, P_m(-\theta, \theta)$. $R_1$ can also be written out explicitly as an integral, and if we define
$$\psi_m(\theta) = \int_\theta^\infty \cdots \int_\theta^\infty e^{-2(y_1^2 + \cdots + y_m^2)} \prod_{i<j} (y_i^2 - y_j^2)^2\, dy_1 \cdots dy_m, \tag{3.12}$$
then it is possible to substitute in for $R_1$ to get
$$P_m(S) = C_m\, \frac{d^2 \psi_m(\theta)}{d\theta^2} \tag{3.13}$$
for an explicit constant $C_m$. Let $\sigma_m$ be the normalization $\frac{\psi_m(\theta)}{\psi_m(0)}$. We want the limit of $\sigma_m$ as $m$ increases but $\theta$ is normalized to the magnitude of the mean spacing: set $t = 2\theta\sqrt{2m}$. Let $\sigma(t) = \lim_{m\to\infty} \sigma_m(t)$; then $P(S)$ is a constant times $\frac{d^2\sigma}{dt^2}$. The constant can be found to be $\frac{1}{4}$ by calculating the moments. But we still need to find $\sigma$. Changing variables one more time, we renormalize $t$ to $\tau$ and the $y_i$ to $z_i$ by dividing by $\sqrt 2$:
$$\psi_m(\theta) = C \int_\tau^\infty \cdots \int_\tau^\infty e^{-(z_1^2 + \cdots + z_m^2)} \det\begin{pmatrix} 1 & z_1^2 & \cdots & z_1^{2m-2} \\ 1 & z_2^2 & \cdots & z_2^{2m-2} \\ \vdots & \vdots & & \vdots \\ 1 & z_m^2 & \cdots & z_m^{2m-2} \end{pmatrix}^{\!2} dz_1 \cdots dz_m. \tag{3.14, 3.15}$$
Rewriting the determinant with column operations, one can write it in terms of the Hermite polynomials and then the harmonic oscillator functions $u_k$; finally the integration is brought inside the determinant to get
$$\sigma_m(t) = \det\left(\delta_{ij} - \int_{-\tau}^{\tau} u_{2i}(z)\, u_{2j}(z)\, dz\right), \tag{3.16}$$
which is the Fredholm determinant for the kernel $K_m = \sum_{k=0}^{m-1} u_{2k}(x) u_{2k}(y)$. This kernel can be rewritten more suggestively as
$$K_m(x,y) = \sqrt{m}\;\frac{1}{2}\left(\frac{u_{2m}(x) u_{2m-1}(y) - u_{2m}(y) u_{2m-1}(x)}{x-y} + \frac{u_{2m}(x) u_{2m-1}(y) + u_{2m}(y) u_{2m-1}(x)}{x+y}\right).$$
Finally, changing variables and letting $m \to \infty$, we get
$$K(\xi,\eta) = \frac{1}{2}\left(\frac{\sin(\xi-\eta)}{\xi-\eta} + \frac{\sin(\xi+\eta)}{\xi+\eta}\right).$$

Despite its ugly infinite determinant form, $\sigma$ is actually rapidly convergent, and $P(S)$ can be computed to reasonable accuracy. It is approximated within 5% [1] by the Wigner surmise, which we will use in our plots because it is much faster to compute. Note that the proof of the spacings distribution relied crucially on the normal distribution: with any other distribution it would not be possible to write $P(-\theta, \theta)$ as a determinant and integrate out the odd terms. Most distributions are thought to be completely intractable at this time. Nevertheless, the level spacing distribution is conjectured to be a robust universal property: in fact, our data suggest that it may be even more universal than the semicircle law. This is the conjecture which we investigate numerically.

4 Computations
It is conjectured that the known first-order distributions for the Gaussian are in fact universal, like the Central Limit Theorem and the Semicircle Law, so that they hold for any suitably normalized, sufficiently integrable distribution, and perhaps even beyond. We investigate this conjecture by constructing large matrices using the software package Matlab [3] and plotting their first-order spacings against the Wigner surmise.

To construct matrix entries according to an arbitrary distribution $\rho$, one can draw random numbers from the uniform distribution on $[0,1]$ and then invert the cumulative distribution function $\int_0^t d\rho$. In some cases this integral and its inverse can be computed in closed form. If not, it is necessary to construct a lookup table of the CDF values for small increments. Matlab contains efficient built-in random matrix generators for the uniform, Gaussian and various other distributions. We found that to generate a symmetric matrix, it was significantly more efficient to call the random matrix generator once (generating $n^2$ entries) and then replace the lower half by symmetry, than to make $n(n+1)/2$ calls to the random number generator. A sketch of this procedure appears after the list below.

We look at matrix elements identically distributed according to the following continuous and discrete distributions.

Continuous:
Gaussian (to test the programs)
Uniform on $[-1,1]$: $P(t) = \frac{1}{2}$
Symmetric Cauchy on $(-\infty, \infty)$: $P(t) = \frac{C}{1+t^2}$

Discrete:
Sign: $P(1) = P(-1) = \frac{1}{2}$
Poisson: $P(n) = \frac{e^{-\lambda}\lambda^n}{n!}$

Note that the Poisson and Cauchy distributions do not have mean zero. The Poisson distribution has variable mean $\lambda$. The Cauchy distribution does not even have finite moments, and therefore does not obey a semicircle law at all.
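
The original code is Matlab [3]; as an illustration, the following Python/NumPy sketch (ours; the function names, the per-matrix normalization, and the Cauchy example are our choices, and the paper's batch-of-20 normalization is simplified away) generates a symmetric matrix from an arbitrary distribution via inverse-CDF sampling and extracts normalized central spacings:

```python
import numpy as np

rng = np.random.default_rng(0)

def symmetric_matrix(n, sampler):
    """Draw an n x n matrix from `sampler`, then reflect the upper triangle."""
    a = sampler((n, n))
    return np.triu(a) + np.triu(a, 1).T

def cauchy(shape):
    """Inverse-CDF sampling: the symmetric Cauchy CDF inverts in closed form."""
    u = rng.random(shape)
    return np.tan(np.pi * (u - 0.5))

def central_spacings(x, fraction=3 / 5):
    """Normalized spacings of the central `fraction` of the eigenvalues."""
    lam = np.linalg.eigvalsh(x)          # sorted ascending
    n = len(lam)
    lo = int(n * (1 - fraction) / 2)
    hi = int(n * (1 + fraction) / 2)
    s = np.diff(lam[lo:hi])
    return s / s.mean()

spacings = np.concatenate(
    [central_spacings(symmetric_matrix(300, cauchy)) for _ in range(50)]
)
print(len(spacings), spacings.mean())
```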

5 Results
The program could process about 700 matrices of size 300x300 per hour, varying slightly according to the distribution. Our data seem consistent with the conjecture for all the test distributions, for sets of up to 5000 matrices of size up to 300x300. The Cauchy distribution (which is not semicircular and does not provide a bound on the entries) converges to the surmise more slowly than the others, but as the size of the matrix grows, all do seem to be approaching the surmise.

[Figure: The local spacings of the central 3/5 of the eigenvalues of 5000 300x300 uniform matrices, normalized in batches of 20.]

[Figure: The local spacings of the central 3/5 of the eigenvalues of 5000 100x100 Cauchy matrices, normalized in batches of 20.]

[Figure: The local spacings of the central 3/5 of the eigenvalues of 5000 200x200 Cauchy matrices, normalized in batches of 20.]

[Figure: The local spacings of the central 3/5 of the eigenvalues of 5000 300x300 Cauchy matrices, normalized in batches of 20.]

[Figure: The eigenvalues of the Cauchy distribution are NOT semicircular.]

[Figure: The local spacings of the central 3/5 of the eigenvalues of 5000 300x300 Poisson matrices with lambda=5, normalized in batches of 20.]

[Figure: The local spacings of the central 3/5 of the eigenvalues of 5000 300x300 Poisson matrices with lambda=10, normalized in batches of 20.]

[Figure: The local spacings of the central 3/5 of the eigenvalues of 5000 300x300 Poisson matrices with lambda=20, normalized in batches of 20.]

[Figure: The local spacings of the central 3/5 of the eigenvalues of 5000 300x300 Poisson matrices with lambda=30, normalized in batches of 20.]

[Figure: The local spacings of the central 3/5 of the eigenvalues of 5000 300x300 Poisson matrices with lambda=50, normalized in batches of 20.]

[Figure: The local spacings of the central 3/5 of the eigenvalues of 5000 300x300 sign matrices, normalized in batches of 20.]

We then examined the minima from sets of normalized spacings. The minima from sets of 20 and from sets of 100 were predicted to look like $e^{-x}$, but our data look like a compressed front end of the Wigner curve. More work must be done in this area.

[Figure: Minima of the central 20 eigenvalue spacings of 5000 300x300 Gaussians.]

[Figure: Minima of the central 100 eigenvalue spacings of 5000 300x300 Gaussians.]

[Figure: Minima of the central 20 eigenvalue spacings of 500 300x300 sign matrices.]

[Figure: Minima of the central 100 eigenvalue spacings of 5000 300x300 sign matrices.]

[Figure: Minima of the central 20 eigenvalue spacings of 5000 300x300 uniform matrices.]

[Figure: Minima of the central 100 eigenvalue spacings of 5000 300x300 uniform matrices.]

[Figure: Minima of the central 20 eigenvalue spacings of 5000 300x300 Cauchy matrices.]

[Figure: Minima of the central 100 eigenvalue spacings of 5000 300x300 Cauchy matrices.]

6 Bibliography
[1] Michel Gaudin. "Sur la loi limite de l'espacement des valeurs propres d'une matrice aléatoire", Nuclear Physics, vol. 25 (1961). Reprinted in [6].

[2] Fumio Hiai and Dénes Petz. The Semicircle Law, Free Random Variables, and Entropy. American Mathematical Society, Providence, RI, 2000. Mathematical Surveys and Monographs, 77.

[3] Matlab. The MathWorks, Natick, Mass.

[4] M. L. Mehta. "On the Statistical Properties of the Level-Spacing", Nuclear Physics, vol. 18, 1960. Reprinted in [6].

[5] M. L. Mehta. Random Matrices. Academic Press, San Diego, CA, 1991.

[6] Charles Porter. Statistical Theories of Spectra: Fluctuations. Academic Press, New York and London, 1965.

[7] Craig A. Tracy and Harold Widom. "Introduction to Random Matrices", Geometric and Quantum Aspects of Integrable Systems. Springer, Berlin, 1993. Lecture Notes in Physics, vol. 424.
