
Random Matrices in Wireless Flexible Networks

Romain Couillet
Alcatel-Lucent Chair on Flexible Radio, Supélec, Gif sur Yvette, FRANCE
romain.couillet@supelec.fr

Johannes Kepler Universität, Linz, Austria.



Outline

Tools for Random Matrix Theory


Introduction to Large Dimensional Random Matrix Theory
History of Mathematical Advances
The Moment Approach and Free Probability
Introduction of the Stieltjes Transform
Summary of what we know and what is left to be done

Random Matrix Theory and Performance of Communication Systems


Performance of CDMA
Performance of MIMO Systems

Random Matrix Theory and Signal Source Sensing


Finite Random Matrix Analysis
Large Dimensional Random Matrix Analysis

Random Matrix Theory and Statistical Inference


Free Probability Method
The Stieltjes Transform Approach

Random Matrix Theory and Multi-Source Power Estimation


Free Probability Approach
The Stieltjes Transform Approach

Large dimensional data

Let $w_1, w_2, \ldots \in \mathbb{C}^N$ be independently drawn from an $N$-variate process with zero mean and covariance $R = E[w_1 w_1^H] \in \mathbb{C}^{N\times N}$.

Law of large numbers


As $n \to \infty$, with $W = [w_1, \ldots, w_n]$,
$$\frac{1}{n}\sum_{i=1}^{n} w_i w_i^H = \frac{1}{n} W W^H \xrightarrow{\text{a.s.}} R$$

In reality, one cannot afford $n \to \infty$.

▶ if $n \gg N$,
$$R_n = \frac{1}{n}\sum_{i=1}^{n} w_i w_i^H$$
is a “good” estimate of $R$.
▶ if $N/n = O(1)$, and if both $(n, N)$ are large, we can still say, for all $(i,j)$,
$$(R_n)_{ij} \xrightarrow{\text{a.s.}} (R)_{ij}$$

What about the global behaviour? What about the eigenvalue distribution?
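These questions can be explored numerically. Below is a minimal sketch (assuming real Gaussian samples and $R = I_N$, both choices purely illustrative): the entries of $R_n$ are individually accurate, yet the eigenvalues of $R_n$ spread over a whole interval.

```python
import numpy as np

# Illustrative sketch: entrywise convergence of R_n vs. global spectral behaviour.
# Assumptions: R = I_N, real Gaussian samples w_i.
rng = np.random.default_rng(0)
N, n = 500, 2000                       # N/n = 0.25: both dimensions "large"
W = rng.standard_normal((N, n))        # columns w_1, ..., w_n ~ N(0, I_N)
R_n = W @ W.T / n                      # sample covariance estimate of R = I_N

entry_error = np.max(np.abs(R_n - np.eye(N)))   # entrywise: R_n is close to I_N
eigs = np.linalg.eigvalsh(R_n)
spread = eigs.max() - eigs.min()                # globally: eigenvalues spread out
```

Entrywise the error is of order $1/\sqrt{n}$, but the eigenvalues fill (approximately) the interval $[(1-\sqrt{N/n})^2,\ (1+\sqrt{N/n})^2]$ rather than concentrating at $1$.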

Empirical and limit spectra of Wishart matrices


Figure: Histogram of the eigenvalues of $R_n$ for $n = 2000$, $N = 500$, $R = I_N$, against the Marcenko-Pastur law.



The Marcenko-Pastur Law

Figure: Marcenko-Pastur law (density $f_c(x)$) for different limit ratios $c = \lim N/n$, here $c = 0.1$, $0.2$, $0.5$.



The Marcenko-Pastur law

Let $W \in \mathbb{C}^{N\times n}$ have i.i.d. elements of zero mean and variance $1/n$.
The eigenvalue distribution of the matrix
$$W^H W \in \mathbb{C}^{n\times n}$$
when $N, n \to \infty$ with $N/n \to c$ IS NOT a single mass at $1$: the matrix does not get close to the identity!

Remark: If the entries are Gaussian, the matrix is called a Wishart matrix with n degrees of
freedom. The exact distribution is known in the finite case.

The birth of large dimensional random matrix theory

E. Wigner, “Characteristic vectors of bordered matrices with infinite dimensions,” The annals of
mathematics, vol. 62, pp. 546-564, 1955.

$$X_N = \frac{1}{\sqrt{N}}\begin{pmatrix}
0 & +1 & +1 & +1 & -1 & -1 & \cdots \\
+1 & 0 & -1 & +1 & +1 & +1 & \cdots \\
+1 & -1 & 0 & +1 & +1 & +1 & \cdots \\
+1 & +1 & +1 & 0 & +1 & +1 & \cdots \\
-1 & +1 & +1 & +1 & 0 & -1 & \cdots \\
-1 & +1 & +1 & +1 & -1 & 0 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}$$

As the matrix dimension increases, what can we say about the eigenvalues (energy levels)?

Semi-circle law, Full circle law...

▶ If $X_N \in \mathbb{C}^{N\times N}$ is Hermitian with i.i.d. entries of mean $0$ and variance $1/N$ above the diagonal, then $F^{X_N} \xrightarrow{\text{a.s.}} F$, where $F$ has density $f$, the semi-circle law
$$f(x) = \frac{1}{2\pi}\sqrt{(4 - x^2)^+}$$
▶ Shown by the method of moments:
$$\lim_{N\to\infty} \frac{1}{N}\operatorname{tr} X_N^{2k} = \frac{1}{k+1}\binom{2k}{k}$$
which are exactly the even moments of $f(x)$ (the Catalan numbers)!
▶ If $X_N \in \mathbb{C}^{N\times N}$ has i.i.d. entries of mean $0$ and variance $1/N$, then asymptotically its complex eigenvalues distribute uniformly over the complex unit disk.

Semi-circle law


Figure: Histogram of the eigenvalues of Wigner matrices and the semi-circle law, for N = 500

Circular law

Figure: Eigenvalues of XN with i.i.d. standard Gaussian entries, for N = 500.



More involved matrix models

▶ much study has surrounded the Marcenko-Pastur law, Wigner's semi-circle law, etc.
▶ for practical purposes, we often need more general matrix models:
  ▶ products and sums of random matrices
  ▶ i.i.d. models with correlation/variance profile
  ▶ distributions of inverses, etc.
▶ for these models, it is often impossible to obtain a closed-form expression of the limiting distribution.
▶ sometimes there is no limiting convergence at all.

To study these models, the method of moments is not enough!

A consistent, powerful mathematical framework is required.

Eigenvalue distribution and moments

▶ The Hermitian matrix $R_N \in \mathbb{C}^{N\times N}$, with eigenvalues $\lambda_1, \ldots, \lambda_N$, has successive empirical moments $M_k^N$, $k = 1, 2, \ldots$,
$$M_k^N = \frac{1}{N}\sum_{i=1}^{N}\lambda_i^k$$
▶ In classical probability theory, for $A$, $B$ independent,
$$c_k(A+B) = c_k(A) + c_k(B)$$
with $c_k(X)$ the cumulants of $X$. The cumulants $c_k$ are connected to the moments $m_k$ by
$$m_k = \sum_{\pi \in \mathcal{P}(k)}\ \prod_{V \in \pi} c_{|V|}$$
with $\mathcal{P}(k)$ the set of partitions of $\{1,\ldots,k\}$.

A natural extension of classical probability to non-commutative random variables exists, called Free Probability.

Free probability

▶ To connect the moments of $A + B$ to those of $A$ and $B$, independence is not enough: $A$ and $B$ must be asymptotically free. For instance,
  ▶ two Gaussian matrices are free,
  ▶ a Gaussian matrix and any deterministic matrix are free,
  ▶ unitary (Haar distributed) matrices are free,
  ▶ a Haar matrix and a Gaussian matrix are free, etc.
▶ As in classical probability, we define free cumulants $C_k$:
$$C_1 = M_1,\qquad C_2 = M_2 - M_1^2,\qquad C_3 = M_3 - 3M_1M_2 + 2M_1^3$$

R. Speicher, “Combinatorial theory of the free product with amalgamation and operator-valued free probability theory,” Mem. A.M.S., vol. 627, 1998.

▶ Combinatorial description by non-crossing partitions:
$$M_n = \sum_{\pi \in NC(n)}\ \prod_{V \in \pi} C_{|V|}$$
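For orders up to 3, every partition is non-crossing, so the free moment-cumulant relation reads $M_1 = C_1$, $M_2 = C_2 + C_1^2$, $M_3 = C_3 + 3C_1C_2 + C_1^3$; the cumulant formulas above are exactly its inversion. A minimal self-check:

```python
# Sketch: the free cumulant formulas invert the non-crossing
# moment-cumulant relation at orders 1-3.
def free_cumulants(M1, M2, M3):
    C1 = M1
    C2 = M2 - M1**2
    C3 = M3 - 3*M1*M2 + 2*M1**3
    return C1, C2, C3

def moments_from_cumulants(C1, C2, C3):
    # sums over the non-crossing partitions of {1}, {1,2}, {1,2,3}
    M1 = C1
    M2 = C2 + C1**2
    M3 = C3 + 3*C1*C2 + C1**3
    return M1, M2, M3

C = (0.7, 1.3, -0.4)                     # arbitrary test cumulants
roundtrip = free_cumulants(*moments_from_cumulants(*C))
```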

Non-crossing partitions


Figure: Non-crossing partition π = {{1, 3, 4}, {2}, {5, 6, 7}, {8}} of NC(8).

Moments of sums and products of random matrices

▶ Combinatorial calculus of all moments

Theorem
For free random matrices $A$ and $B$, we have the relationships
$$C_k(A+B) = C_k(A) + C_k(B)$$
$$M_n(AB) = \sum_{(\pi_1,\pi_2)\in NC(n)}\ \prod_{V_1\in\pi_1}\prod_{V_2\in\pi_2} C_{|V_1|}(A)\,C_{|V_2|}(B)$$
which, in conjunction with the free moment-cumulant formula, give all moments of the sum and the product.
Theorem
If F is a compactly supported distribution function, then F is determined by its moments.
▶ In the absence of support compactness, the distribution function cannot be retrieved from the moments. This is in particular the case for Vandermonde matrices.

Free convolution
▶ In classical probability theory, for independent $A$, $B$,
$$\mu_{A+B}(x) = (\mu_A * \mu_B)(x) = \int \mu_A(t)\,\mu_B(x-t)\,dt$$
▶ In free probability, for free $A$, $B$, we use the notations
$$\mu_{A+B} = \mu_A \boxplus \mu_B,\quad \mu_A = \mu_{A+B} \boxminus \mu_B,\quad \mu_{AB} = \mu_A \boxtimes \mu_B,\quad \mu_A = \mu_{AB} \boxslash \mu_B$$
for the free additive convolution, additive deconvolution, multiplicative convolution and multiplicative deconvolution, respectively.

Ø. Ryan, M. Debbah, “Multiplicative free convolution and information-plus-noise type matrices,”


Arxiv preprint math.PR/0702342, 2007.

Theorem (Convolution of the information-plus-noise model)
Let $W_N \in \mathbb{C}^{N\times n}$ have i.i.d. Gaussian entries of mean $0$ and variance $1$, and let $A_N \in \mathbb{C}^{N\times n}$ be such that $\mu_{\frac{1}{n}A_NA_N^H} \Rightarrow \mu_A$, as $n/N \to c$. Then the eigenvalue distribution of
$$B_N = \frac{1}{n}\left(A_N + \sigma W_N\right)\left(A_N + \sigma W_N\right)^H$$
converges weakly and almost surely to $\mu_B$ such that
$$\mu_B = \left(\left(\mu_A \boxslash \mu_c\right) \boxplus \delta_{\sigma^2}\right) \boxtimes \mu_c$$
with $\mu_c$ the Marcenko-Pastur law with ratio $c$.



Similarities between classical and free probability

                             Classical probability                                          Free probability
Moments                      $m_k = \int x^k\,dF(x)$                                        $M_k = \int x^k\,dF(x)$
Cumulants                    $m_n = \sum_{\pi\in\mathcal{P}(n)}\prod_{V\in\pi}c_{|V|}$      $M_n = \sum_{\pi\in NC(n)}\prod_{V\in\pi}C_{|V|}$
Independence                 classical independence                                         freeness
Additive convolution         $f_{A+B} = f_A * f_B$                                          $\mu_{A+B} = \mu_A \boxplus \mu_B$
Multiplicative convolution   $f_{AB}$                                                       $\mu_{AB} = \mu_A \boxtimes \mu_B$
Sum rule                     $c_k(A+B) = c_k(A) + c_k(B)$                                   $C_k(A+B) = C_k(A) + C_k(B)$
Central limit                $\frac{1}{\sqrt n}\sum_{i=1}^n x_i \to \mathcal{N}(0,1)$       $\frac{1}{\sqrt n}\sum_{i=1}^n X_i \Rightarrow$ semi-circle law

Bibliography on Free Probability related work

I D. Voiculescu, “Addition of certain non-commuting random variables,” Journal of functional


analysis, vol. 66, no. 3, pp. 323-346, 1986.
I R. Speicher, “Combinatorial theory of the free product with amalgamation and
operator-valued free probability theory,” Mem. A.M.S., vol. 627, 1998.
I R. Seroul, D. O’Shea, “Programming for Mathematicians,” Springer, 2000.
I H. Bercovici, V. Pata, “The law of large numbers for free identically distributed random
variables,” The Annals of Probability, pp. 453-465, 1996.
I A. Nica, R. Speicher, “On the multiplication of free N-tuples of noncommutative random
variables,” American Journal of Mathematics, pp. 799-837, 1996.
I Ø. Ryan, M. Debbah, “Multiplicative free convolution and information-plus-noise type
matrices,” Arxiv preprint math.PR/0702342, 2007.
I N. R. Rao, A. Edelman, “The polynomial method for random matrices,” Foundations of
Computational Mathematics, vol. 8, no. 6, pp. 649-702, 2008.
I Ø. Ryan, M. Debbah, “Asymptotic Behavior of Random Vandermonde Matrices With Entries
on the Unit Circle,” IEEE Trans. on Information Theory, vol. 55, no. 7, pp. 3115-3147, 2009.

The Stieltjes transform

Definition
Let $F$ be a real distribution function. The Stieltjes transform $m_F$ of $F$ is the function defined, for $z \in \mathbb{C}\setminus\mathbb{R}$, as
$$m_F(z) = \int \frac{1}{\lambda - z}\,dF(\lambda)$$
For $x$ real, denoting $z = x + iy$, we have the inverse formula (wherever $F$ admits a density)
$$F'(x) = \lim_{y\to 0}\frac{1}{\pi}\Im\left[m_F(x + iy)\right]$$

Knowing the Stieltjes transform is knowing the eigenvalue distribution!
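A sketch of the inversion formula in action for the e.s.d. of a Wishart-type matrix (all sizes and the evaluation point are illustrative): $\frac{1}{\pi}\Im[m_F(x+iy)]$ at small $y$ matches a histogram estimate of the density.

```python
import numpy as np

# Sketch: density recovery via the Stieltjes transform inversion formula.
rng = np.random.default_rng(2)
N, n = 500, 1000
X = rng.standard_normal((N, n)) / np.sqrt(n)
eigs = np.linalg.eigvalsh(X @ X.T)

def m_F(z):                                  # empirical Stieltjes transform
    return np.mean(1.0 / (eigs - z))

x, y = 1.0, 0.05
density = m_F(x + 1j * y).imag / np.pi

# crude histogram estimate of the density around x, bin width 0.2
hist = np.mean(np.abs(eigs - x) < 0.1) / 0.2
```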



Remark on the Stieltjes transform

▶ If $F$ is the eigenvalue distribution of a Hermitian matrix $X_N \in \mathbb{C}^{N\times N}$, we may denote $m_X = m_F$, and
$$m_X(z) = \int \frac{1}{\lambda - z}\,dF(\lambda) = \frac{1}{N}\operatorname{tr}\left(X_N - zI_N\right)^{-1}$$
▶ For a compactly supported eigenvalue distribution, and $|z|$ larger than the spectral radius,
$$m_F(z) = -\frac{1}{z}\int \frac{1}{1 - \frac{\lambda}{z}}\,dF(\lambda) = -\sum_{k=0}^{\infty} M_k^N\, z^{-k-1}$$

The Stieltjes transform is doubly more powerful than the moment approach! It
▶ conveys more information than any finite sequence $M_1, \ldots, M_K$;
▶ is not handicapped by the support compactness constraint.

▶ however, Stieltjes transform methods, while stronger, are more painful to work with.

The Marcenko-Pastur law


Theorem
Let $X_N \in \mathbb{C}^{N\times n}$ have i.i.d. zero mean, variance $1/n$ entries with finite eighth order moment. As $n, N \to \infty$ with $\frac{N}{n} \to c \in (0,\infty)$, the e.s.d. of $X_NX_N^H$ converges almost surely to a nonrandom distribution function $F_c$ with density $f_c$ given by
$$f_c(x) = (1 - c^{-1})^+\,\delta(x) + \frac{1}{2\pi c x}\sqrt{(x - a)^+ (b - x)^+}$$
where $a = (1 - \sqrt{c})^2$ and $b = (1 + \sqrt{c})^2$.

[Plot of the density $f_c(x)$ for $c = 0.1$, $0.2$, $0.5$.]
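A direct numerical check (a sketch, with an illustrative ratio $c$) that the continuous part of $f_c$ carries unit mass when $c \le 1$:

```python
import numpy as np

# Sketch: the continuous part of the Marcenko-Pastur density integrates to 1
# when c <= 1 (for c > 1 the atom (1 - 1/c) at zero carries the missing mass).
def mp_density(x, c):
    a, b = (1 - np.sqrt(c))**2, (1 + np.sqrt(c))**2
    out = np.zeros_like(x)
    inside = (x > a) & (x < b)
    xi = x[inside]
    out[inside] = np.sqrt((xi - a) * (b - xi)) / (2 * np.pi * c * xi)
    return out

c = 0.5
x = np.linspace(1e-6, 3.0, 200001)
f = mp_density(x, c)
mass = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))    # trapezoidal rule
```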

Diagonal entries of the resolvent

Since we want an expression for $m_F$, we start by identifying the diagonal entries of the resolvent $(X_NX_N^H - zI_N)^{-1}$ of $X_NX_N^H$. Denote
$$X_N = \begin{pmatrix} y^H \\ Y \end{pmatrix}$$
Now, for $z \in \mathbb{C}^+$, we have
$$\left(X_NX_N^H - zI_N\right)^{-1} = \begin{pmatrix} y^Hy - z & y^HY^H \\ Yy & YY^H - zI_{N-1} \end{pmatrix}^{-1}$$
Consider the first diagonal element of $(X_NX_N^H - zI_N)^{-1}$. From the matrix inversion lemma,
$$\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} (A - BD^{-1}C)^{-1} & -A^{-1}B(D - CA^{-1}B)^{-1} \\ -(A - BD^{-1}C)^{-1}CA^{-1} & (D - CA^{-1}B)^{-1} \end{pmatrix}$$
which here gives
$$\left[\left(X_NX_N^H - zI_N\right)^{-1}\right]_{11} = \frac{1}{-z - z\,y^H\left(Y^HY - zI_n\right)^{-1}y}$$

Trace Lemma

Z. Bai, J. Silverstein, “Spectral Analysis of Large Dimensional Random Matrices”, Springer Series
in Statistics, 2009.
To go further, we need the following result,
Theorem
Let $\{A_N\}$, $A_N \in \mathbb{C}^{N\times N}$, be a sequence of matrices of uniformly bounded spectral norm, and let $x_N \in \mathbb{C}^N$ be a random vector of i.i.d. entries with zero mean, variance $1/N$ and finite $8$th order moment, independent of $A_N$. Then
$$x_N^HA_Nx_N - \frac{1}{N}\operatorname{tr}A_N \xrightarrow{\text{a.s.}} 0.$$

Since the entries of $y$ have variance $1/n$, for large $N$ we therefore have approximately
$$\left[\left(X_NX_N^H - zI_N\right)^{-1}\right]_{11} \simeq \frac{1}{-z - z\,\frac{1}{n}\operatorname{tr}\left(Y^HY - zI_n\right)^{-1}}$$
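A numerical sketch of the trace lemma (with a simple deterministic $A_N$ of unit spectral norm, chosen purely for illustration):

```python
import numpy as np

# Sketch: x^H A x concentrates around (1/N) tr A for x with i.i.d.
# zero-mean, variance-1/N entries independent of A.
rng = np.random.default_rng(3)
N = 2000
x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2 * N)
A = np.diag(np.linspace(0.0, 1.0, N))   # deterministic, spectral norm 1

quad_form = (x.conj() @ A @ x).real
trace_avg = np.trace(A) / N
```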

Rank-1 perturbation lemma

J. W. Silverstein, Z. D. Bai, “On the empirical distribution of eigenvalues of a class of large


dimensional random matrices,” Journal of Multivariate Analysis, vol. 54, no. 2, pp. 175-192, 1995.
It is somewhat intuitive that adding a single column to Y won’t affect the trace in the limit.
Theorem
Let $A$ and $B$ be $N\times N$, with $B$ Hermitian nonnegative definite, and $v \in \mathbb{C}^N$. For $z \in \mathbb{C}\setminus\mathbb{R}^+$,
$$\left|\frac{1}{N}\operatorname{tr}\left(\left(B - zI_N\right)^{-1} - \left(B + vv^H - zI_N\right)^{-1}\right)A\right| \le \frac{1}{N}\,\frac{\|A\|}{\operatorname{dist}(z, \mathbb{R}^+)}$$
with $\|A\|$ the spectral norm of $A$, and $\operatorname{dist}(z, \mathcal{A}) = \inf_{y\in\mathcal{A}}\|y - z\|$.

Therefore, for large $N$, we have approximately
$$\left[\left(X_NX_N^H - zI_N\right)^{-1}\right]_{11} \simeq \frac{1}{-z - z\,\frac{1}{n}\operatorname{tr}\left(Y^HY - zI_n\right)^{-1}} \simeq \frac{1}{-z - z\,\frac{1}{n}\operatorname{tr}\left(X_N^HX_N - zI_n\right)^{-1}} = \frac{1}{-z - z\,m_F(z)}$$
in which we recognize the Stieltjes transform $m_F$ of the l.s.d. of $X_N^HX_N$.

End of the proof

Denote by $\underline{F}$ the l.s.d. of $X_NX_N^H$. Since $X_NX_N^H$ and $X_N^HX_N$ share the same nonzero eigenvalues, we have again the relation
$$m_F(z) = c\,m_{\underline{F}}(z) + \frac{c-1}{z}$$
Note that the choice $(1,1)$ is irrelevant here, so the expression above is valid for every pair $(i,i)$. Summing over the $N$ diagonal terms and averaging, we finally have
$$m_{\underline{F}}(z) = \frac{1}{N}\operatorname{tr}\left(X_NX_N^H - zI_N\right)^{-1} \simeq \frac{1}{-z - z\,m_F(z)}$$
Combining the two relations yields the implicit equation
$$m_F(z) = \frac{1}{c - 1 - z - z\,m_F(z)}$$
which is a polynomial equation of second order. Finally,
$$m_F(z) = \frac{c-1}{2z} - \frac{1}{2} + \frac{\sqrt{(c - 1 - z)^2 - 4z}}{2z}.$$
From the inverse Stieltjes transform formula, we then verify that this is consistent with the Marcenko-Pastur law.
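The implicit equation can be checked numerically (a sketch, with illustrative sizes): the empirical Stieltjes transform of $X_N^HX_N$ nearly satisfies $m = 1/(c - 1 - z - zm)$.

```python
import numpy as np

# Sketch: empirical check of m_F(z) = 1/(c - 1 - z - z m_F(z)) for m_F the
# Stieltjes transform of the e.s.d. of X^H X, c = N/n, at a real z < 0.
rng = np.random.default_rng(4)
N, n = 400, 800
c = N / n
X = rng.standard_normal((N, n)) / np.sqrt(n)
z = -1.0
eigs = np.linalg.eigvalsh(X.T @ X)              # spectrum of X^H X
m_emp = np.mean(1.0 / (eigs - z))
residual = abs(m_emp - 1.0 / (c - 1 - z - z * m_emp))
```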

Asymptotic results using the Stieltjes transform

J. W. Silverstein, Z. D. Bai, “On the empirical distribution of eigenvalues of a class of large


dimensional random matrices,” Journal of Multivariate Analysis, vol. 54, no. 2, pp. 175-192, 1995.

Theorem
Let $B_N = X_NT_NX_N^H \in \mathbb{C}^{N\times N}$, where $X_N \in \mathbb{C}^{N\times n}$ has i.i.d. entries of mean $0$ and variance $1/N$, $T_N \in \mathbb{C}^{n\times n}$ is Hermitian with $F^{T_N} \Rightarrow F^T$, and $n/N \to c$. Then $F^{B_N} \Rightarrow F$ almost surely, $F$ having Stieltjes transform
$$m_F(z) = \left(c\int \frac{t}{1 + t\,m_F(z)}\,dF^T(t) - z\right)^{-1} = \left(\frac{1}{N}\operatorname{tr}\,T_N\left(m_F(z)\,T_N + I_n\right)^{-1} - z\right)^{-1}$$
which has a unique solution $m_F(z) \in \mathbb{C}^+$ if $z \in \mathbb{C}^+$, and $m_F(z) > 0$ if $z < 0$.

▶ in general, no explicit expression for $F$.
▶ Stieltjes transform of $\underline{B}_N = T_N^{\frac12}X_N^HX_NT_N^{\frac12}$, with asymptotic distribution $\underline{F}$:
$$m_F(z) = c\,m_{\underline{F}}(z) + (c - 1)\frac{1}{z}$$

This is the spectrum of the sample covariance matrix model $\underline{B}_N = \sum_{i=1}^{n} x_i'x_i'^H$, with $X_N' = [x_1', \ldots, x_n']$ and $x_i'$ i.i.d. with zero mean and covariance $T_N = E[x_1'x_1'^H]$.

Getting F 0 from mF

▶ Remember that, for $x$ real,
$$f(x) \overset{\Delta}{=} F'(x) = \lim_{y\to 0}\frac{1}{\pi}\Im\left[m_F(x + iy)\right]$$
▶ to plot the density $f(x)$, span $z = x + iy$ over the line $\{x \in \mathbb{R},\ y = \varepsilon\}$, parallel and close to the real axis, solve $m_F(z)$ for each $z$, and plot $\frac{1}{\pi}\Im[m_F(z)]$.

Example (Sample covariance matrix)
For $N$ a multiple of $3$, let $dF^T(x) = \frac13\delta(x - 1) + \frac13\delta(x - 3) + \frac13\delta(x - K)$ and let $\underline{B}_N = T_N^{\frac12}X_N^HX_NT_N^{\frac12}$ with $F^{\underline{B}_N} \to \underline{F}$. Then
$$m_F(z) = c\,m_{\underline{F}}(z) + (c - 1)\frac{1}{z}$$
$$m_F(z) = \left(c\int \frac{t}{1 + t\,m_F(z)}\,dF^T(t) - z\right)^{-1}$$
We take $c = 1/10$ and alternatively $K = 7$ and $K = 4$.
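A sketch of this recipe for the example above ($K = 7$; evaluation points and iteration count are illustrative). For $\Im z > 0$ the fixed-point iteration stays in $\mathbb{C}^+$, and for real $z < 0$ it converges quickly to the unique positive solution:

```python
import numpy as np

# Sketch: fixed-point iteration for m_F(z); the sample covariance density is
# then read off via m_Fbar = (m_F - (c-1)/z)/c and Im[.]/pi.
c = 0.1
t_masses = np.array([1.0, 3.0, 7.0])            # dF^T: three masses, K = 7

def solve_mF(z, iters=2000):
    m = -1.0 / z                                # rough initialization
    for _ in range(iters):
        m = 1.0 / (c * np.mean(t_masses / (1 + t_masses * m)) - z)
    return m

# real z < 0: unique positive solution of the implicit equation
z0 = complex(-1.0)
m_neg = solve_mF(z0)
residual = abs(m_neg - 1.0 / (c * np.mean(t_masses / (1 + t_masses * m_neg)) - z0))

# near the real axis: density of the sample covariance spectrum around x = 3
z = 3.0 + 0.5j
m = solve_mF(z)
density_bar = ((m - (c - 1) / z) / c).imag / np.pi
```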



Spectrum of the sample covariance matrix

Figure: Histogram of the eigenvalues of $\underline{B}_N = T_N^{\frac12}X_N^HX_NT_N^{\frac12}$ against the limiting density $f(x)$, $N = 3000$, $n = 300$, with $T_N$ diagonal composed of three evenly weighted masses at (i) $1$, $3$ and $7$, (ii) $1$, $3$ and $4$.

The Shannon Transform

A. M. Tulino, S. Verdú, “Random matrix theory and wireless communications,” Now Publishers Inc., 2004.

Definition
Let $F$ be a probability distribution and $m_F$ its Stieltjes transform. The Shannon transform $\mathcal{V}_F$ of $F$ is defined as
$$\mathcal{V}_F(x) \overset{\Delta}{=} \int_0^\infty \log(1 + x\lambda)\,dF(\lambda) = \int_{1/x}^{\infty}\left(\frac{1}{t} - m_F(-t)\right)dt$$

If $F$ is the distribution function of the eigenvalues of $XX^H \in \mathbb{C}^{N\times N}$,
$$\mathcal{V}_F(x) = \frac{1}{N}\log\det\left(I_N + x\,XX^H\right).$$
Note that this last relation is fundamental for wireless communication purposes!
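Both expressions can be evaluated from the empirical eigenvalues of $XX^H$ (a sketch; the truncation of the integral at $t = 10^5$ and the grid are illustrative):

```python
import numpy as np

# Sketch: the log det form and the Stieltjes-transform integral form of the
# Shannon transform agree on the e.s.d. of XX^H.
rng = np.random.default_rng(5)
N, n = 300, 600
X = rng.standard_normal((N, n)) / np.sqrt(n)
eigs = np.linalg.eigvalsh(X @ X.T)
x = 2.0

V_logdet = np.mean(np.log(1 + x * eigs))        # (1/N) log det(I_N + x XX^H)

t = np.geomspace(1.0 / x, 1e5, 4000)            # truncated integration grid
m = np.array([np.mean(1.0 / (eigs + s)) for s in t])    # m_F(-t)
integrand = 1.0 / t - m
V_integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))
```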

Models studied with analytic tools

▶ Models involving i.i.d. matrices:
  ▶ sample covariance matrix models, $XTX^H$ and $T^{\frac12}X^HXT^{\frac12}$
  ▶ doubly correlated models, $R^{\frac12}XTX^HR^{\frac12}$; with $X$ Gaussian, this is the Kronecker model
  ▶ doubly correlated models with external matrix, $R^{\frac12}XTX^HR^{\frac12} + A$
  ▶ variance profiles, $XX^H$, where $X$ has i.i.d. entries with mean $0$ and variance $\sigma_{i,j}^2$
  ▶ Ricean channels, $XX^H + A$, where $X$ has a variance profile
  ▶ sums of doubly correlated i.i.d. matrices, $\sum_{k=1}^{K} R_k^{\frac12}X_kT_kX_k^HR_k^{\frac12}$
  ▶ information-plus-noise models, $(X + A)(X + A)^H$
  ▶ frequency-selective doubly correlated channels, $\bigl(\sum_{k=1}^{K} R_k^{\frac12}X_kT_kX_k^HR_k^{\frac12}\bigr)\bigl(\sum_{k=1}^{K} R_k^{\frac12}X_kT_kX_k^HR_k^{\frac12}\bigr)^H$
  ▶ sums of frequency-selective doubly correlated channels, $\sum_{k=1}^{K} R_k^{\frac12}H_kT_kH_k^HR_k^{\frac12}$, where $H_k = \sum_{l=1}^{L} R_{kl}'^{\frac12}X_{kl}T_{kl}'X_{kl}^HR_{kl}'^{\frac12}$

▶ Models involving a column subset $W$ of unitary matrices:
  ▶ sums of doubly correlated Haar matrices, $\sum_{k=1}^{K} R_k^{\frac12}W_kT_kW_k^HR_k^{\frac12}$
  ▶ sums of products of correlated i.i.d. and/or Haar matrices, $\sum_{k=1}^{K} R_k^{\frac12}X_kT_kW_k^HS_kW_kT_kX_k^HR_k^{\frac12}$

In most cases, $T$ and $R$ can be taken random, but independent of $X$. More involved random matrices, such as Vandermonde matrices, have not yet been studied with these tools.

Models studied with moments/free probability

▶ asymptotic results:
  ▶ most of the above models with Gaussian $X$
  ▶ products $V_1V_1^HT_1V_2V_2^HT_2\cdots$ of Vandermonde and deterministic matrices
  ▶ conjecture: any probability space of matrices invariant to row or column permutations
▶ marginal studies, not yet fully explored:
  ▶ rectangular free convolution: singular values of rectangular matrices
  ▶ finite size models: instead of the almost sure convergence of $m_{X_N}$ as $N \to \infty$, we can study the finite-size behaviour of $E[m_{X_N}]$.

Related bibliography

I R. B. Dozier, J. W. Silverstein, “On the empirical distribution of eigenvalues of large dimensional


information-plus-noise-type matrices,” Journal of Multivariate Analysis, vol. 98, no. 4, pp. 678-694, 2007.
I J. W. Silverstein, Z. D. Bai, “On the empirical distribution of eigenvalues of a class of large dimensional
random matrices,” Journal of Multivariate Analysis, vol. 54, no. 2, pp. 175-192, 1995.
I J. W. Silverstein, S. Choi “Analysis of the limiting spectral distribution of large dimensional random
matrices” Journal of Multivariate Analysis, vol. 54, no. 2, pp. 295-309, 1995.
I F. Benaych-Georges, “Rectangular random matrices, related free entropy and free Fisher’s information,”
Arxiv preprint math/0512081, 2005.
I Ø. Ryan, M. Debbah, “Multiplicative free convolution and information-plus-noise type matrices,” Arxiv
preprint math.PR/0702342, 2007.
I V. L. Girko, “Theory of Random Determinants,” Kluwer, Dordrecht, 1990.
I R. Couillet, M. Debbah, J. W. Silverstein, “A deterministic equivalent for the capacity analysis of correlated
multi-user MIMO channels,” submitted to IEEE Trans. on Information Theory.
I W. Hachem, Ph. Loubaton, J. Najim, “Deterministic Equivalents for Certain Functionals of Large Random
Matrices”, Annals of Applied Probability, vol. 17, no. 3, 2007.
I M. J. M. Peacock, I. B. Collings, M. L. Honig, “Eigenvalue distributions of sums and products of large
random matrices via incremental matrix expansions,” IEEE Trans. on Information Theory, vol. 54, no. 5, pp.
2123, 2008.
I D. Petz, J. Réffy, “On Asymptotics of large Haar distributed unitary matrices,” Periodica Math. Hungar., vol.
49, pp. 103-117, 2004.
I Ø. Ryan, A. Masucci, S. Yang, M. Debbah, “Finite dimensional statistical inference,” submitted to IEEE
Trans. on Information Theory, Dec. 2009.

Technical Bibliography

▶ W. Rudin, “Real and complex analysis,” New York, 1966.
▶ P. Billingsley, “Probability and measure,” Wiley, New York, 2008.
▶ P. Billingsley, “Convergence of probability measures,” Wiley, New York, 1968.

Uplink random CDMA

[Diagram: uplink random CDMA network with four mobile users, channels $h_1, \ldots, h_4$ and transmit powers $P_1, \ldots, P_4$, transmitting to a single base station.]

Capacity of uplink random CDMA

▶ System model conditions:
  ▶ uplink random CDMA,
  ▶ $K$ mobile users, $1$ base station,
  ▶ $N$ chips per CDMA spreading code,
  ▶ user $k$, $k \in \{1, \ldots, K\}$, has (normalized) code $w_k \sim \mathcal{CN}(0, \frac{1}{N}I_N)$,
  ▶ user $k$ transmits the symbol $s_k$,
  ▶ user $k$'s channel is $h_k\sqrt{P_k}$, with $P_k$ the power of user $k$.
▶ The base station receives
$$y = \sum_{k=1}^{K} h_k w_k \sqrt{P_k}\, s_k + n$$
▶ This can be written in the more compact form
$$y = WHP^{\frac12}s + n$$
with
  ▶ $s = [s_1, \ldots, s_K]^T \in \mathbb{C}^K$,
  ▶ $W = [w_1, \ldots, w_K] \in \mathbb{C}^{N\times K}$,
  ▶ $P = \operatorname{diag}(P_1, \ldots, P_K) \in \mathbb{C}^{K\times K}$,
  ▶ $H = \operatorname{diag}(h_1, \ldots, h_K) \in \mathbb{C}^{K\times K}$.

MMSE decoder
▶ The MMSE decoder consists in taking
$$r_k = w_k^H\left(WHPH^HW^H + \sigma^2 I_N\right)^{-1}y$$
as the estimated symbol of user $k$.
▶ The SINR for user $k$'s signal is
$$\gamma_k^{(\text{MMSE})} = P_k|h_k|^2\, w_k^H\Bigl(\sum_{\substack{1\le i\le K \\ i\ne k}} P_i|h_i|^2 w_iw_i^H + \sigma^2 I_N\Bigr)^{-1}w_k \qquad (1)$$
$$\phantom{\gamma_k^{(\text{MMSE})}} = P_k|h_k|^2\, w_k^H\left(WHPH^HW^H - P_k|h_k|^2w_kw_k^H + \sigma^2 I_N\right)^{-1}w_k. \qquad (2)$$
▶ Now we have the following result:

Theorem (Trace Lemma)
If $x \in \mathbb{C}^N$ has i.i.d. entries of zero mean and variance $1/N$, and $A \in \mathbb{C}^{N\times N}$ (of bounded spectral norm) is independent of $x$, then
$$x^HAx = \sum_{i,j} x_i^* x_j\,A_{ij},\qquad x^HAx - \frac{1}{N}\operatorname{tr}A \xrightarrow{\text{a.s.}} 0.$$

▶ Applying this result, for $N$ large,
$$w_k^H\left(WHPH^HW^H - P_k|h_k|^2w_kw_k^H + \sigma^2 I_N\right)^{-1}w_k - \frac{1}{N}\operatorname{tr}\left(WHPH^HW^H - P_k|h_k|^2w_kw_k^H + \sigma^2 I_N\right)^{-1} \xrightarrow{\text{a.s.}} 0.$$

MMSE decoder

$$w_k^H\left(WHPH^HW^H - P_k|h_k|^2w_kw_k^H + \sigma^2 I_N\right)^{-1}w_k - \frac{1}{N}\operatorname{tr}\left(WHPH^HW^H - P_k|h_k|^2w_kw_k^H + \sigma^2 I_N\right)^{-1} \xrightarrow{\text{a.s.}} 0.$$

▶ Second important result:

Theorem (Rank-1 perturbation Lemma)
Let $A \in \mathbb{C}^{N\times N}$ be Hermitian nonnegative definite, $x \in \mathbb{C}^N$, and $t > 0$. Then
$$\left|\frac{1}{N}\operatorname{tr}\left(A + tI_N\right)^{-1} - \frac{1}{N}\operatorname{tr}\left(A + xx^H + tI_N\right)^{-1}\right| \le \frac{1}{tN}$$

▶ As $N$ grows large,
$$\frac{1}{N}\operatorname{tr}\left(WHPH^HW^H - P_k|h_k|^2w_kw_k^H + \sigma^2 I_N\right)^{-1} - \frac{1}{N}\operatorname{tr}\left(WHPH^HW^H + \sigma^2 I_N\right)^{-1} \to 0,$$

▶ The right-hand trace is the Stieltjes transform of $WHPH^HW^H$ evaluated at $z = -\sigma^2$:
$$m_{WHPH^HW^H}(-\sigma^2)$$
mWHPHH WH (−σ 2 )

MMSE decoder
▶ From the previous results,
$$m_{WHPH^HW^H}(-\sigma^2) - m_N(-\sigma^2) \xrightarrow{\text{a.s.}} 0$$
with $m_N(-\sigma^2)$ the unique positive solution of
$$m = \left(\frac{1}{N}\operatorname{tr}\,HPH^H\left(m\,HPH^H + I_K\right)^{-1} + \sigma^2\right)^{-1}$$
independent of $k$!
▶ This is also
$$m = \left(\sigma^2 + \frac{1}{N}\sum_{1\le i\le K}\frac{P_i|h_i|^2}{1 + m\,P_i|h_i|^2}\right)^{-1}$$
▶ Finally,
$$\gamma_k^{(\text{MMSE})} - P_k|h_k|^2\,m_N(-\sigma^2) \xrightarrow{\text{a.s.}} 0$$
and the (per-chip) capacity reads
$$C^{(\text{MMSE})}(\sigma^2) - \frac{1}{N}\sum_{k=1}^{K}\log_2\left(1 + P_k|h_k|^2\,m_N(-\sigma^2)\right) \xrightarrow{\text{a.s.}} 0.$$
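A simulation sketch of this result (illustrative sizes; spreading codes with i.i.d. $\mathcal{CN}(0, 1/N)$ entries, unit powers and exponential $|h_k|^2$ for a Rayleigh channel): the SINR of user 1 computed from (2) is close to $P_1|h_1|^2 m_N(-\sigma^2)$.

```python
import numpy as np

# Sketch: deterministic equivalent of the MMSE SINR vs. direct simulation.
rng = np.random.default_rng(6)
N, K, s2 = 512, 256, 0.1
d = rng.exponential(1.0, K)                     # d_k = P_k|h_k|^2 with P_k = 1

m = 1.0 / s2                                    # fixed point for m_N(-s2)
for _ in range(2000):
    m = 1.0 / (s2 + np.sum(d / (1 + m * d)) / N)

W = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2 * N)
B = (W * d) @ W.conj().T                        # W diag(d) W^H
w1 = W[:, 0]
B1 = B - d[0] * np.outer(w1, w1.conj())         # remove user 1's contribution
gamma = d[0] * (w1.conj() @ np.linalg.solve(B1 + s2 * np.eye(N), w1)).real
gamma_det = d[0] * m
rel_err = abs(gamma - gamma_det) / gamma_det
```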

MMSE decoder

$$C^{(\text{MMSE})}(\sigma^2) - \frac{1}{N}\sum_{k=1}^{K}\log_2\left(1 + P_k|h_k|^2\,m_N(-\sigma^2)\right) \xrightarrow{\text{a.s.}} 0.$$

▶ AWGN channel, $P_k = P$, $h_k = 1$:
$$C^{(\text{MMSE})}(\sigma^2) \xrightarrow{\text{a.s.}} c\,\log_2\left(1 + \frac{-(\sigma^2 + (c-1)P) + \sqrt{(\sigma^2 + (c-1)P)^2 + 4P\sigma^2}}{2\sigma^2}\right)$$
▶ Rayleigh channels, $P_k = P$, $|h_k|$ Rayleigh distributed:
$$m = \left(\sigma^2 + c\int \frac{Pt}{1 + Ptm}\,e^{-t}\,dt\right)^{-1}$$
and
$$C^{(\text{MMSE})}(\sigma^2) \xrightarrow{\text{a.s.}} c\int \log_2\left(1 + Pt\,m(-\sigma^2)\right)e^{-t}\,dt.$$
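The AWGN expression is just the fixed point in closed form: with $u = Pm$, the equation for $m$ becomes $\sigma^2u^2 + (\sigma^2 + (c-1)P)u - P = 0$, whose positive root is the argument of the logarithm. A quick consistency check (parameter values are illustrative):

```python
import math

# Sketch: the closed-form AWGN argument equals P*m from the fixed point
#   m = 1 / (s2 + c*P / (1 + P*m)).
c, P, s2 = 0.5, 2.0, 0.3

u_closed = (-(s2 + (c - 1) * P)
            + math.sqrt((s2 + (c - 1) * P) ** 2 + 4 * P * s2)) / (2 * s2)

m = 1.0 / s2
for _ in range(200):
    m = 1.0 / (s2 + c * P / (1 + P * m))

C_mmse = c * math.log2(1 + u_closed)            # asymptotic spectral efficiency
```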

Matched-Filter, Optimal decoder ...

R. Couillet, M. Debbah, J. W. Silverstein, “A Deterministic Equivalent for the Capacity Analysis of Correlated Multi-User MIMO Channels,” IEEE Trans. on Information Theory, accepted, on arXiv.

▶ Similarly, we can compute deterministic equivalents for the matched-filter performance,
$$C^{(\text{MF})}(\sigma^2) - \frac{1}{N}\sum_{k=1}^{K}\log_2\left(1 + \frac{P_k|h_k|^2}{\frac{1}{N}\sum_{i=1}^{K} P_i|h_i|^2 + \sigma^2}\right) \xrightarrow{\text{a.s.}} 0$$
▶ AWGN case,
$$C^{(\text{MF})}(\sigma^2) \xrightarrow{\text{a.s.}} c\,\log_2\left(1 + \frac{P}{Pc + \sigma^2}\right)$$
▶ Rayleigh case,
$$C^{(\text{MF})}(\sigma^2) \xrightarrow{\text{a.s.}} -c\,\log_2(e)\,e^{\frac{Pc + \sigma^2}{P}}\,\operatorname{Ei}\left(-\frac{Pc + \sigma^2}{P}\right)$$
▶ ... and for the optimal joint-decoder performance,
$$C^{(\text{opt})}(\sigma^2) - \frac{1}{N}\sum_{k=1}^{K}\log_2\left(1 + \frac{P_k|h_k|^2}{\sigma^2}\,\frac{1}{1 + cP_k|h_k|^2\,m_N(-\sigma^2)}\right) - \frac{1}{N}\sum_{k=1}^{K}\log_2\left(1 + cP_k|h_k|^2\,m_N(-\sigma^2)\right) - \log_2(e)\left(\sigma^2 m_N(-\sigma^2) - 1\right) \xrightarrow{\text{a.s.}} 0,$$
with $m_N(-\sigma^2)$ defined as previously.

▶ Similar expressions are obtained for the AWGN and Rayleigh cases.

Simulation results: AWGN case


Figure: Spectral efficiency of random CDMA decoders, AWGN channels. Comparison between simulations and deterministic equivalents (det. eq.), for the matched-filter, the MMSE decoder and the optimal decoder, K = 16 users, N = 32 chips per code. Error bars indicate two standard deviations.

Simulation results: Rayleigh case



Figure: Spectral efficiency of random CDMA decoders, Rayleigh fading channels. Comparison between simulations and deterministic equivalents (det. eq.), for the matched-filter, the MMSE decoder and the optimal decoder, K = 16 users, N = 32 chips per code. Error bars indicate two standard deviations.

Simulation results: Performance as a function of K /N

Figure: Spectral efficiency of random CDMA decoders for different asymptotic ratios c = K/N, SNR = 10 dB, AWGN channels. Deterministic equivalents for the matched-filter, the MMSE decoder and the optimal decoder.

MIMO-MAC, SINR of the MMSE receiver

R. Couillet, M. Debbah, J. W. Silverstein, “A deterministic equivalent for the capacity analysis of correlated multi-user MIMO channels,” submitted to IEEE Trans. on Information Theory.

$$B_N = \sum_{k=1}^{K} H_kH_k^H,\qquad\text{with } H_k = R_k^{\frac12}X_kT_k^{\frac12}$$
with $X_k \in \mathbb{C}^{N\times n_k}$ having i.i.d. entries of zero mean and variance $1/n_k$, $R_k$ Hermitian nonnegative definite and $T_k$ diagonal. Denote $c_k = N/n_k$. Then, as $N$ and all $n_k$ grow large with ratios $c_k$,
$$\frac{1}{N}\operatorname{tr}\left(B_N - zI_N\right)^{-1} - \frac{1}{N}\operatorname{tr}\left(-z\left(I_N + \sum_{k=1}^{K}\bar e_k(z)R_k\right)\right)^{-1} \xrightarrow{\text{a.s.}} 0$$
where the set of functions $\{e_i(z)\}$ forms the unique solution of the $K$ equations
$$e_i(z) = \frac{1}{N}\operatorname{tr}\,R_i\left(-z\left(I_N + \sum_{k=1}^{K}\bar e_k(z)R_k\right)\right)^{-1},\qquad \bar e_i(z) = \frac{1}{n_i}\operatorname{tr}\,T_i\left(-z\left(I_{n_i} + c_ie_i(z)T_i\right)\right)^{-1}$$
Hence the SINR $\gamma_{k,i}$ at the output of the MMSE receiver, for stream $i$ of user $k$, satisfies
$$\gamma_{k,i} - t_{k,i}\,e_k(-\sigma^2) \xrightarrow{\text{a.s.}} 0,\qquad\text{where } \gamma_{k,i} = h_{k,i}^H\left(B_N - h_{k,i}h_{k,i}^H + \sigma^2 I_N\right)^{-1}h_{k,i}$$
with $h_{k,i}$ the $i$-th column of $H_k$ and $t_{k,i}$ the $i$-th diagonal entry of $T_k$.

Deterministic equivalent approach: guess work

We will use here the “guess-work” method to find the deterministic equivalent. Consider the simpler case $K = 1$.
Back to the original notations, we seek a matrix $D$ such that
$$\frac{1}{N}\operatorname{tr}\left(B_N - zI_N\right)^{-1} - \frac{1}{N}\operatorname{tr}\,D^{-1} \xrightarrow{\text{a.s.}} 0$$
as $N \to \infty$.

Resolvent lemma
For invertible matrices $A$ and $B$,
$$A^{-1} - B^{-1} = -A^{-1}\left(A - B\right)B^{-1}$$

Taking the matrix difference,
$$D^{-1} - \left(B_N - zI_N\right)^{-1} = D^{-1}\left(R^{\frac12}XTX^HR^{\frac12} - zI_N - D\right)\left(B_N - zI_N\right)^{-1}$$
It seems convenient to take $D = -zI_N + \bar e_{B_N}R$, with $\bar e_{B_N}$ left to be defined (the notation $B_N$ in $\bar e_{B_N}$ reminds us that we do not yet look for a deterministic quantity).

Deterministic equivalent approach: guess work (2)


“Silverstein's” lemma
Let $A$ be Hermitian invertible. Then, for any vector $x$ and scalar $\tau$ such that $A + \tau xx^H$ is invertible,
$$x^H\left(A + \tau xx^H\right)^{-1} = \frac{x^HA^{-1}}{1 + \tau\,x^HA^{-1}x}$$

With $D = -zI_N + \bar e_{B_N}R$, writing $X = [x_1, \ldots, x_n]$, $T = \operatorname{diag}(\tau_1, \ldots, \tau_n)$ and $B_{(j)} = B_N - \tau_jR^{\frac12}x_jx_j^HR^{\frac12}$,
$$D^{-1} - \left(B_N - zI_N\right)^{-1} = D^{-1}\left(R^{\frac12}XTX^HR^{\frac12} - zI_N - D\right)\left(B_N - zI_N\right)^{-1}$$
$$= D^{-1}R^{\frac12}XTX^HR^{\frac12}\left(B_N - zI_N\right)^{-1} - \bar e_{B_N}D^{-1}R\left(B_N - zI_N\right)^{-1}$$
$$= \sum_{j=1}^{n}\tau_j\,D^{-1}R^{\frac12}x_jx_j^HR^{\frac12}\left(B_N - zI_N\right)^{-1} - \bar e_{B_N}D^{-1}R\left(B_N - zI_N\right)^{-1}$$
$$= \sum_{j=1}^{n}\tau_j\,\frac{D^{-1}R^{\frac12}x_jx_j^HR^{\frac12}\left(B_{(j)} - zI_N\right)^{-1}}{1 + \tau_j\,x_j^HR^{\frac12}\left(B_{(j)} - zI_N\right)^{-1}R^{\frac12}x_j} - \bar e_{B_N}D^{-1}R\left(B_N - zI_N\right)^{-1}$$
Choice of $\bar e_{B_N}$:
$$\bar e_{B_N} = \frac{1}{n}\sum_{j=1}^{n}\frac{\tau_j}{1 + \tau_j\,c\,\frac{1}{N}\operatorname{tr}\,R\left(B_N - zI_N\right)^{-1}} = \frac{1}{n}\operatorname{tr}\,T\left(I_n + cT\,\frac{1}{N}\operatorname{tr}\,R\left(B_N - zI_N\right)^{-1}\right)^{-1}$$
With this choice, taking normalized traces on both sides,
$$\frac{1}{N}\operatorname{tr}\,D^{-1} - \frac{1}{N}\operatorname{tr}\left(B_N - zI_N\right)^{-1} = \frac{1}{N}\sum_{j=1}^{n}\tau_j\left[\frac{x_j^HR^{\frac12}\left(B_{(j)} - zI_N\right)^{-1}D^{-1}R^{\frac12}x_j}{1 + \tau_j\,x_j^HR^{\frac12}\left(B_{(j)} - zI_N\right)^{-1}R^{\frac12}x_j} - \frac{\frac{1}{n}\operatorname{tr}\,R^{\frac12}\left(B_N - zI_N\right)^{-1}D^{-1}R^{\frac12}}{1 + \tau_j\,c\,\frac{1}{N}\operatorname{tr}\,R\left(B_N - zI_N\right)^{-1}}\right]$$
and each bracketed difference vanishes asymptotically by the trace lemma and the rank-1 perturbation lemma.

Ergodic Mutual Information of MIMO-MAC


Remember now that
$$\int \log(1 + xt)\,dF(t) = \int_{1/x}^{\infty}\left(\frac{1}{t} - m_F(-t)\right)dt$$

R. Couillet, M. Debbah, J. W. Silverstein, “A deterministic equivalent for the capacity analysis of correlated multi-user MIMO channels,” submitted to IEEE Trans. on Information Theory.

Theorem
Under the previous model for $B_N$, as $N, n_k$ grow large,
$$E\left[\frac{1}{N}\log\det\left(xB_N + I_N\right)\right] - \Biggl[\frac{1}{N}\log\det\left(I_N + \sum_{k=1}^{K}\bar e_k(-1/x)R_k\right) + \sum_{k=1}^{K}\frac{1}{N}\log\det\left(I_{n_k} + c_ke_k(-1/x)T_k\right) - \frac{1}{x}\sum_{k=1}^{K}\bar e_k(-1/x)\,e_k(-1/x)\Biggr] \longrightarrow 0$$

Ergodic sum rate capacity of MIMO-MAC

We look for the precoders $P_1^\star, \ldots, P_K^\star$ that achieve the optimal sum rate.

R. Couillet, M. Debbah, J. W. Silverstein, “A deterministic equivalent for the capacity analysis of correlated multi-user MIMO channels,” submitted to IEEE Trans. on Information Theory.

The precoders $P_1^\circ, \ldots, P_K^\circ$ maximizing the deterministic equivalent satisfy
$$P_k^\circ = U_k\operatorname{diag}\left(p_{k,1}^\circ, \ldots, p_{k,n_k}^\circ\right)U_k^H,\qquad\text{where } T_k = U_k\operatorname{diag}\left(t_{k,1}, \ldots, t_{k,n_k}\right)U_k^H$$
and the $p_{k,i}^\circ$ are given by iterative water-filling as
$$p_{k,i}^\circ = \left(\mu_k - \frac{1}{e_k^\circ\, t_{k,i}}\right)^+$$
with $\mu_k$ such that $\operatorname{tr}P_k^\circ = P_k$, and $e_k^\circ$ defined as $e_k$ for $(P_1, \ldots, P_K) = (P_1^\circ, \ldots, P_K^\circ)$.
Moreover, under some conditions,
$$\|P_k^\star - P_k^\circ\| \to 0.$$
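A sketch of the water-filling step for one user (the scalar $e_k^\circ$ and the eigenvalues $t_{k,i}$ are illustrative inputs; $\mu_k$ is found by bisection on the power constraint):

```python
import numpy as np

# Sketch: single-user water-filling p_i = (mu - 1/(e*t_i))^+ with
# sum(p) = P, the water level mu found by bisection.
def waterfill(e, t, P, iters=200):
    lo, hi = 0.0, P + np.max(1.0 / (e * t))     # bracket for mu
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - 1.0 / (e * t), 0.0)
        if p.sum() > P:
            hi = mu
        else:
            lo = mu
    return p

t = np.array([0.2, 0.5, 1.0, 2.0])              # eigenvalues t_{k,i} of T_k
p = waterfill(e=1.0, t=t, P=4.0)                # power budget P_k = 4
```

Stronger eigenmodes (larger $t_{k,i}$) receive at least as much power, as expected from the water-filling shape.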

Variance profile

W. Hachem, Ph. Loubaton, J. Najim, “Deterministic equivalents for certain functionals of large
random matrices,” Annals of Applied Probability, vol. 17, no. 3, pp. 875-930, 2007.

Theorem
Let $X_N \in \mathbb{C}^{N\times n}$ have independent entries, with $(i,j)$-th entry of zero mean and variance $\frac{1}{n}\sigma_{ij}^2$. Let $A_N \in \mathbb{R}^{N\times n}$ be deterministic with uniformly bounded column norm. Then
$$\frac{1}{N}\operatorname{tr}\left(\left(X_N + A_N\right)\left(X_N + A_N\right)^H - zI_N\right)^{-1} - \frac{1}{N}\operatorname{tr}\,T_N(z) \xrightarrow{\text{a.s.}} 0$$
where $T_N(z)$ is the unique function that solves
$$T_N(z) = \left(\Psi^{-1}(z) - zA_N\tilde\Psi(z)A_N^T\right)^{-1},\qquad \tilde T_N(z) = \left(\tilde\Psi^{-1}(z) - zA_N^T\Psi(z)A_N\right)^{-1}$$
with $\Psi(z) = \operatorname{diag}(\psi_i(z))$, $\tilde\Psi(z) = \operatorname{diag}(\tilde\psi_j(z))$, with entries defined as
$$\psi_i(z) = \left(-z\left(1 + \frac{1}{n}\operatorname{tr}\,\tilde D_i\tilde T(z)\right)\right)^{-1},\qquad \tilde\psi_j(z) = \left(-z\left(1 + \frac{1}{n}\operatorname{tr}\,D_jT(z)\right)\right)^{-1}$$
and $D_j = \operatorname{diag}(\sigma_{ij}^2,\ 1\le i\le N)$, $\tilde D_i = \operatorname{diag}(\sigma_{ij}^2,\ 1\le j\le n)$.


Variance profile

W. Hachem, Ph. Loubaton, J. Najim, “Deterministic equivalents for certain functionals of large
random matrices,” Annals of Applied Probability, vol. 17, no. 3, pp. 875-930, 2007.

Theorem
For the previous model, we also have that

(1/N) E log det( I_N + (1/σ²)(X_N + A_N)(X_N + A_N)^H )

has deterministic equivalent

(1/N) log det( (1/σ²)Ψ(−σ²)^{-1} + A_N Ψ̃(−σ²)A_N^T ) + (1/N) log det( (1/σ²)Ψ̃(−σ²)^{-1} ) − (σ²/(nN)) Σ_{i,j} σ_ij² T_ii(−σ²) T̃_jj(−σ²).
Random Matrix Theory and Performance of Communication SystemsPerformance of MIMO Systems 60/113

Haar random matrices

M. Debbah, W. Hachem, P. Loubaton, M. de Courville, “MMSE analysis of certain large isometric


random precoded systems”, IEEE Transactions on Information Theory, vol. 49, no. 5, pp.
1293-1311, 2003.

I Recent results were proposed when the matrices XN are unitary and unitarily invariant (Haar
matrices).
I The central result is the trace lemma
Lemma
Let W ∈ C^{N×n} gather n < N columns of a Haar matrix and let w be a column of W. Let B_N ∈ C^{N×N} be a random matrix, function of all columns of W except w. Then, assuming that, for growing N, c = sup_n n/N < 1 and B = sup_N ‖B_N‖ < ∞, we have:

w^H B_N w − (1/(N−n)) tr((I_N − WW^H)B_N) −→ 0 a.s.
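The trace lemma can be checked numerically; the sketch below takes B_N deterministic (the simplest matrix that is a function of the columns of W other than w) and builds a Haar-distributed W by QR factorization with phase normalization:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 400, 100

# Haar-distributed W: QR of a complex Gaussian, with column phases normalized
Z = rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))
Q, R = np.linalg.qr(Z)
W = Q * (np.diag(R) / np.abs(np.diag(R)))       # n Haar-distributed columns

w = W[:, 0]
B = np.diag(np.linspace(0.0, 1.0, N))            # deterministic, hence independent of w

lhs = (w.conj() @ B @ w).real
rhs = np.trace((np.eye(N) - W @ W.conj().T) @ B).real / (N - n)
```

Both sides concentrate around tr(B)/N = 0.5 here, with a vanishing difference as N grows.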
Random Matrix Theory and Performance of Communication SystemsPerformance of MIMO Systems 61/113

Haar random matrices (2)

R. Couillet, J. Hoydis, M. Debbah, “Random beamforming over quasi-static and fading channels: a
deterministic equivalent approach”, to appear in IEEE Trans. on Inf. Theory.
Theorem
Let T_i ∈ C^{n_i×n_i} be nonnegative diagonal and let H_i ∈ C^{N×N_i}. Define R_i ≜ H_i H_i^H ∈ C^{N×N}, c_i = n_i/N_i and c̄_i = N_i/N. Denote

B_N = Σ_{i=1}^K H_i W_i T_i W_i^H H_i^H.

Then, as N, N_1, …, N_K, n_1, …, n_K → ∞ with bounded ratios c̄_i and 0 ≤ c_i ≤ 1 for all i, almost surely

F^{B_N} − F_N ⇒ 0, with m_N(z) = (1/N) tr( Σ_{i=1}^K ē_i(z)R_i − zI_N )^{-1}

where (ē_1(z), …, ē_K(z)) are the solutions (conditionally unique) of

e_i(z) = (1/N) tr R_i ( Σ_{j=1}^K ē_j(z)R_j − zI_N )^{-1}

ē_i(z) = (1/N) tr T_i ( e_i(z)T_i + [c̄_i − e_i(z)ē_i(z)]I_{n_i} )^{-1}  (compare to the i.i.d. case!)
Random Matrix Theory and Signal Source Sensing 63/113

Signal Sensing in Cognitive Radios


Random Matrix Theory and Signal Source Sensing 64/113

Position of the Problem

Decide on presence of informative signal or pure noise.

Limited a priori Knowledge


I Known parameters: the prior information I
I N sensors
I L sampling periods
I unit transmit power
I unit channel variance
I Possibly unknown parameters
I M signal sources
I noise power equals σ²

One situation, one solution


For a given prior information I, there must be a unique solution to the detection problem.
Random Matrix Theory and Signal Source Sensing 65/113

Problem statement

Signal detection is a typical hypothesis testing problem.


I H0 : only background noise,

Y = σΘ, with Θ = (θ_ij) ∈ C^{N×L}

I H1 : informative signal plus noise,

Y = [ H  σI_N ] · [ S ; Θ ]

with channel H = (h_ij) ∈ C^{N×M}, transmit signals S ∈ C^{M×L} of (j,l)th entry s_j^{(l)}, and noise Θ = (θ_ij) ∈ C^{N×L} ([A ; B] stacks A on top of B).
Random Matrix Theory and Signal Source SensingFinite Random Matrix Analysis 67/113

Solution

Solution of the hypothesis test is the function

C(Y) = P_{H1|Y}(Y) / P_{H0|Y}(Y) = ( P_{H1} · P_{Y|H1}(Y) ) / ( P_{H0} · P_{Y|H0}(Y) )

If the receiver does not know whether H1 is more likely than H0,

P_{H1} = P_{H0} = 1/2

Therefore,

C(Y) = P_{Y|H1}(Y) / P_{Y|H0}(Y)
Random Matrix Theory and Signal Source SensingFinite Random Matrix Analysis 68/113

Odds for hypothesis H0

If the SNR is known, then the Maximum Entropy Principle leads to

P_{Y|H0}(Y) = 1/(πσ²)^{NL} · e^{−(1/σ²) tr YY^H}
Random Matrix Theory and Signal Source SensingFinite Random Matrix Analysis 69/113

Odds for hypothesis H1

If only N, M and the SNR are known, then

P_{Y|H1}(Y) = ∫ P_{Y|Σ,H1}(Y, Σ) P_Σ(Σ) dΣ = ∫_{U(N)×(R⁺)^N} P_{Y|Σ,H1}(Y, U, LΛ) P_Λ(Λ) dU dΛ

with

Σ = L [ H  σI_N ][ H  σI_N ]^H = U (LΛ) U^H
Random Matrix Theory and Signal Source SensingFinite Random Matrix Analysis 70/113

Odds for hypothesis H1 (2)

Case M = 1.
The Maximum Entropy distribution for H is the Gaussian i.i.d. channel. The unordered eigenvalue distribution for Σ is

P_Λ(Λ)dΛ = (1/N) · ( e^{−(λ₁−σ²)} / (N−1)! ) · 1_{(λ₁>σ²)} (λ₁ − σ²)^{N−1} ∏_{i=2}^N δ(λ_i − σ²) dλ₁ ⋯ dλ_N

The Maximum Entropy distribution for Y|Σ,H1 is correlated Gaussian,

P_{Y|Σ,H1}(Y, U, LΛ) = 1/(π^{LN} det(Λ)^L) · e^{−tr(YY^H UΛ^{-1}U^H)}
Random Matrix Theory and Signal Source SensingFinite Random Matrix Analysis 71/113

Neyman-Pearson Test

I M = 1,

P_{Y|H1}(Y) = ( e^{σ²} e^{−(1/σ²) Σ_{i=1}^N λ_i} / (N π^{LN} σ^{2(N−1)(L−1)}) ) · Σ_{l=1}^N ( e^{λ_l/σ²} / ∏_{i≠l} (λ_l − λ_i) ) J_{N−L−1}(σ², λ_l)

with (λ₁, …, λ_N) = eig(YY^H) and

J_k(x, y) = ∫_x^{+∞} t^k e^{−t−y/t} dt

I From which we have the Neyman-Pearson test

C_{Y|H1}(Y) = (1/N) Σ_{l=1}^N ( σ^{2(N+L−1)} e^{σ² + λ_l/σ²} / ∏_{i≠l} (λ_l − λ_i) ) J_{N−L−1}(σ², λ_l)

The Neyman-Pearson test only depends on the eigenvalues! But in an involved way!


Random Matrix Theory and Signal Source SensingFinite Random Matrix Analysis 72/113

Neyman-Pearson Test against energy detector, SNR known

[ROC plot: correct detection rate versus false alarm rate, comparing the Bayesian detector and the energy detector.]
Figure: ROC curve for SIMO transmission, M = 1, N = 4, L = 8, SNR = −3 dB, FAR range of practical interest.
Random Matrix Theory and Signal Source SensingFinite Random Matrix Analysis 73/113

Neyman-Pearson Test, Unknown SNR

I We need to integrate out the prior for σ².
I This leads to

C(Y) = ∫ P_{Y|σ²,H1}(Y, σ²) P_{σ²}(σ²) dσ² / ∫ P_{Y|σ²,H0}(Y, σ²) P_{σ²}(σ²) dσ²

I The prior P_{σ²}(σ²) is chosen to be
I uniform over [σ₋², σ₊²]
I Jeffreys over (0, ∞)
Random Matrix Theory and Signal Source SensingLarge Dimensional Random Matrix Analysis 75/113

Reminder: the Marcenko-Pastur Law


Under H0, the eigenvalues of (1/N)YY^H asymptotically distribute as the Marcenko-Pastur law, supported on [σ²(1−√c)², σ²(1+√c)²].

[Plot of the density f_c(x) of the Marcenko-Pastur law, supported on [σ²(1−√c)², σ²(1+√c)²].]
Figure: Marcenko-Pastur law with c = lim N/L.


Random Matrix Theory and Signal Source SensingLarge Dimensional Random Matrix Analysis 76/113

Alternative Tests in Large Random Matrix Theory

Z. D. Bai, J. W. Silverstein, “No eigenvalues outside the support of the limiting spectral distribution
of large-dimensional sample covariance matrices,” The Annals of Probability, vol. 26, no.1 pp.
316-345, 1998.

Theorem
P( no eigenvalues outside [σ²(1−√c)², σ²(1+√c)²] for all large N ) = 1

I If H0,

λ_max((1/N)YY^H) / λ_min((1/N)YY^H) −→ (1+√c)²/(1−√c)² a.s.

I independent of the SNR!


Random Matrix Theory and Signal Source SensingLarge Dimensional Random Matrix Analysis 77/113

Conditioning Number Test

L. S. Cardoso, M. Debbah, P. Bianchi, J. Najim, “Cooperative spectrum sensing using random


matrix theory,” International Symposium on Wireless Pervasive Computing, Santorini, Greece,
2008.

I conditioning number test:

C_cond(Y) = λ_max((1/N)YY^H) / λ_min((1/N)YY^H)

I if C_cond(Y) > τ, decide presence of a signal.
I if C_cond(Y) < τ, decide absence of signal.
I but this is ad-hoc! How well does it perform compared to the optimal test?
I can we find non ad-hoc approaches?
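A quick sketch under H0: the empirical conditioning number of a pure-noise Gram matrix indeed approaches the Marcenko-Pastur ratio (1+√c)²/(1−√c)², independently of the noise power (the normalization of the Gram matrix cancels in the ratio):

```python
import numpy as np

rng = np.random.default_rng(2)
N, L = 100, 1000
c = N / L

# noise-only observation; dividing the Gram matrix by 2L gives the
# unit-variance normalization for these complex entries
Y = rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))
lam = np.linalg.eigvalsh(Y @ Y.conj().T / (2 * L))   # ascending eigenvalues

cond_stat = lam[-1] / lam[0]
mp_ratio = (1 + np.sqrt(c)) ** 2 / (1 - np.sqrt(c)) ** 2
```

A threshold τ slightly above mp_ratio then yields the ad-hoc detector.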
Random Matrix Theory and Signal Source SensingLarge Dimensional Random Matrix Analysis 78/113

Alternative Tests in Large Random Matrix Theory (2)

Bianchi, J. Najim, M. Maida, M. Debbah, “Performance of Some Eigen-based Hypothesis Tests for
Collaborative Sensing,” Proceedings of IEEE Statistical Signal Processing Workshop, 2009.

Generalized Likelihood Ratio Test


I An alternative test to Neyman-Pearson,

C_GLRT(Y) = sup_{H,σ²} P_{H1|Y,H,σ²}(Y) / sup_{σ²} P_{H0|Y,σ²}(Y)

I based on ratios of maximum likelihoods
I clearly sub-optimal, but avoids the need for priors.
I The GLRT test reads

C_GLRT(Y) = (1 − 1/N)^{(N−1)L} · ( λ_max((1/N)YY^H) / ((1/N)Σ_{i=1}^N λ_i) )^{−L} · ( 1 − λ_max((1/N)YY^H) / Σ_{i=1}^N λ_i )^{−(N−1)L}

I Contrary to the ad-hoc conditioning number test, the GLRT is based on

λ_max / ( (1/N) tr(YY^H) )
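A minimal sketch of the GLRT sufficient statistic λ_max/((1/N) tr YY^H) on synthetic data; the SNR and dimensions below are hypothetical, and under H1 the statistic visibly exceeds its H0 value:

```python
import numpy as np

rng = np.random.default_rng(3)
N, L, snr = 20, 200, 1.0

def glrt_stat(Y):
    """T = lambda_max / ((1/N) tr(YY^H)), the statistic underlying the GLRT."""
    lam = np.linalg.eigvalsh(Y @ Y.conj().T)
    return lam[-1] / lam.mean()

noise = lambda: (rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))) / np.sqrt(2)
h = (rng.standard_normal((N, 1)) + 1j * rng.standard_normal((N, 1))) / np.sqrt(2)
s = (rng.standard_normal((1, L)) + 1j * rng.standard_normal((1, L))) / np.sqrt(2)

t0 = glrt_stat(noise())                          # H0: pure noise
t1 = glrt_stat(np.sqrt(snr) * h @ s + noise())   # H1: rank-one signal plus noise
```

Under H0 the statistic concentrates near (1+√(N/L))², so a threshold just above that value yields the test.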
Random Matrix Theory and Signal Source SensingLarge Dimensional Random Matrix Analysis 79/113

Neyman-Pearson Test against Asymptotic Tests


[ROC plot: correct detection rate versus false alarm rate for the Bayesian detector with Jeffreys prior, the Bayesian detector with uniform prior, the conditioning number test and the GLRT.]

Figure: ROC curve for a priori unknown σ² of the Bayesian method, conditioning number method and GLRT method, M = 1, N = 4, L = 8, SNR = 0 dB. For the Bayesian method, both uniform and Jeffreys priors, with exponent α = 1, are provided.
Random Matrix Theory and Statistical Inference 81/113

Position of the problem

I It has long been difficult to analytically invert even the simplest B_N = T_N^{1/2} X_N X_N^H T_N^{1/2} model to recover the diagonal entries of T_N. Indeed, we only have the deterministic equivalent result

m_N(z) = ( −z + c ∫ t/(1 + t m_N(z)) dF^{T_N}(t) )^{-1}

with m_N the deterministic equivalent of the Stieltjes transform of B̲_N = X_N^H T_N X_N.
I When T_N has eigenvalues t₁, …, t_K with multiplicities n₁, …, n_K, this is

m_N(z) = ( −z + c (1/N) Σ_{k=1}^K n_k t_k/(1 + t_k m_N(z)) )^{-1}

I An N,n-consistent estimator for the t_k's was never found until recently...
I However, moment-based methods and free probability approaches provide simple solutions to estimate consistently all moments of F^{T_N}.
Random Matrix Theory and Statistical InferenceFree Probability Method 83/113

Free Deconvolution Approach: Reminders


I For free random matrices A and B, we have the cumulant/moment relationships

C_k(A + B) = C_k(A) + C_k(B)

M_n(AB) = Σ_{(π₁,π₂)∈NC(n)} ∏_{V₁∈π₁} C_{|V₁|}(A) ∏_{V₂∈π₂} C_{|V₂|}(B)

I This allows one to compute all moments of the sum and product distributions

μ_A ⊞ μ_B,  μ_A ⊠ μ_B

I In addition, we have results for the information-plus-noise model

B_N = (1/n)(R_N + σX_N)(R_N + σX_N)^H

whose e.s.d. converges weakly and almost surely to μ_B such that

μ_B = ((μ_Γ ⊠ μ_c) ⊞ δ_{σ²}) ⊠ μ_c

with μ_c the Marcenko-Pastur law and Γ_N = R_N R_N^H.
I All basic matrix operations needed in wireless communications are accessible for convenient matrices (Gaussian, Vandermonde, etc.)
I All operations are merely polynomial operations on the moments. As a consequence, for B_N = f(R_N),

All moments of the l.s.d. of B_N are obtained as polynomials of those of the l.s.d. of R_N
Random Matrix Theory and Statistical InferenceFree Probability Method 84/113

From free convolution to free deconvolution

Ø. Ryan, M. Debbah, “Multiplicative free convolution and information-plus-noise type matrices,”


Arxiv preprint math.PR/0702342, 2007.

I The kth moment of the l.s.d. of B_N is a polynomial in the first k moments of the l.s.d. of R_N.
I We can therefore invert the problem and express the kth moment of R_N as a polynomial in the first k moments of B_N. This entails the deconvolution operations

μ_A = μ_{A+B} ⊟ μ_B

μ_A = μ_{AB} ⊠^{-1} μ_B

and, for the information-plus-noise model B_N = (1/n)(R_N + σX_N)(R_N + σX_N)^H,

μ_Γ = ((μ_B ⊠^{-1} μ_c) ⊟ δ_{σ²}) ⊠^{-1} μ_c
I for more involved models, the polynomial relations can be iterated and even automatically
generated.
Random Matrix Theory and Statistical InferenceFree Probability Method 85/113

Example of polynomial relation


I Consider the information-plus-noise model

Y = D + X

with Y ∈ C^{N×n}, D ∈ C^{N×n}, X ∈ C^{N×n} with i.i.d. entries of mean 0 and variance 1. Denote

M_k = lim_{n→∞} (1/N) tr((1/n) YY^H)^k

D_k = lim_{n→∞} (1/N) tr((1/n) DD^H)^k

with c = lim N/n.
I For that model, we have the relations

M₁ = D₁ + 1
M₂ = D₂ + (2 + 2c)D₁ + (1 + c)
M₃ = D₃ + (3 + 3c)D₂ + 3cD₁² + (1 + 3c + c²)

hence

D₁ = M₁ − 1
D₂ = M₂ − (2 + 2c)M₁ + (1 + c)
D₃ = M₃ − (3 + 3c)M₂ − 3cM₁² + (6c² + 18c + 6)M₁ − (4c² + 12c + 4)
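The first two relations can be verified by Monte Carlo; the matrix D below, with a known spectrum, is a hypothetical example:

```python
import numpy as np

rng = np.random.default_rng(4)
N, n = 300, 600
c = N / n

# hypothetical deterministic matrix D with known spectrum d_1, ..., d_N
d = np.linspace(0.5, 2.0, N)
D = np.zeros((N, n))
D[np.arange(N), np.arange(N)] = np.sqrt(n * d)   # so that (1/n) DD^H = diag(d)
D1, D2 = d.mean(), (d ** 2).mean()

# one realization of the information-plus-noise model
X = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)
B = (D + X) @ (D + X).conj().T / n
M1 = np.trace(B).real / N
M2 = np.trace(B @ B).real / N
```

For this realization, M1 ≈ D1 + 1 and M2 ≈ D2 + (2+2c)D1 + (1+c), up to finite-size fluctuations.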
Random Matrix Theory and Statistical InferenceFree Probability Method 86/113

Free deconvolution: Eigenvalue Inference

I For practical finite-size applications, the deconvolved moments will exhibit errors. Different strategies are available,
I direct inversion with Newton-Girard formulas. Assuming perfect evaluation of (1/K) Σ_{k=1}^K P_k^m, the P₁, …, P_K are given by the K roots of the polynomial

X^K − Π₁X^{K−1} + Π₂X^{K−2} − … + (−1)^K Π_K

where the Π_m's (known as the elementary symmetric polynomials) are iteratively defined as

(−1)^k kΠ_k + Σ_{i=1}^k (−1)^{k+i} S_i Π_{k−i} = 0

where S_k = Σ_{i=1}^K P_i^k.
I may lead to non-real solutions!
I does not minimize any conventional error criterion
I convenient for one-shot power inference
I when multiple realizations are available, statistical solutions are preferable
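The Newton-Girard inversion can be sketched as follows, here recovering hypothetical powers (1, 3, 10) from their exact power sums via numpy.roots:

```python
import numpy as np

def powers_from_moments(S):
    """Recover P_1..P_K from the power sums S_m = sum_i P_i^m, m = 1..K.

    Newton's identities give the elementary symmetric polynomials Pi_k,
    whose monic polynomial has the P_i as roots.
    """
    K = len(S)
    Pi = [1.0]                                   # Pi_0 = 1
    for k in range(1, K + 1):
        # k * Pi_k = sum_{i=1}^{k} (-1)^(i-1) S_i Pi_{k-i}
        acc = sum((-1) ** (i - 1) * S[i - 1] * Pi[k - i] for i in range(1, k + 1))
        Pi.append(acc / k)
    # X^K - Pi_1 X^(K-1) + Pi_2 X^(K-2) - ... + (-1)^K Pi_K
    coeffs = [(-1) ** k * Pi[k] for k in range(K + 1)]
    return np.sort(np.roots(coeffs).real)

P = np.array([1.0, 3.0, 10.0])
S = [np.sum(P ** m) for m in (1, 2, 3)]
P_hat = powers_from_moments(S)
```

With noisy (deconvolved) power sums instead of exact ones, the roots may leave the real axis, which is precisely the caveat listed above.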
Random Matrix Theory and Statistical InferenceFree Probability Method 87/113

Free deconvolution: inferring powers

I alternative approach: estimators that minimize conventional error metrics

Z. D. Bai, J. W. Silverstein, “CLT of linear spectral statistics of large dimensional sample


covariance matrices,” Annals of Probability, vol. 32, no. 1A, pp. 553-605, 2004.
I For the model Y = T^{1/2}X, an asymptotic central limit result is known for the moments, i.e., for M_k^{(N)} the order-k empirical moment of (1/N)YY^H and M_k its limit, as N → ∞,

N( M_k^{(N)} − M_k ) ⇒ X

where X is a centered Gaussian random variable.
I Maximum-likelihood or MMSE estimators can then be used to find the moments of T.
Random Matrix Theory and Statistical InferenceThe Stieltjes Transform Approach 89/113

The Stieltjes Transform Method

I Consider the sample covariance matrix model

Y = T^{1/2}X ∈ C^{N×n},  B_N ≜ (1/n)YY^H ∈ C^{N×N},  B̲_N = (1/n)Y^H Y ∈ C^{n×n}

where T ∈ C^{N×N} has eigenvalues t₁, …, t_K, t_k with multiplicity N_k, and X ∈ C^{N×n} has i.i.d. entries of zero mean and variance 1.

J. W. Silverstein, Z. D. Bai, “On the empirical distribution of eigenvalues of a class of large


dimensional random matrices,” J. of Multivariate Analysis, vol. 54, no. 2, pp. 175-192, 1995.
I If F^T ⇒ T, then m_{F^{B̲_N}}(z) = m_{B̲_N}(z) −→ m_{F̲}(z) a.s., such that

m_T(−1/m_{F̲}(z)) = −z m_F(z) m_{F̲}(z)

with m_{F̲}(z) = c m_F(z) + (c − 1)/z and N/n → c.


Random Matrix Theory and Statistical InferenceThe Stieltjes Transform Approach 90/113

Complex integration
I From the Cauchy integral formula, denoting C_k a contour enclosing only t_k,

t_k = (1/2πi) ∮_{C_k} ω/(ω − t_k) dω = (1/2πi) ∮_{C_k} (1/N_k) Σ_{j=1}^K N_j ω/(ω − t_j) dω = (N/(2πi N_k)) ∮_{C_k} ω m_T(ω) dω.

I After the variable change ω = −1/m_{F̲}(z),

t_k = (N/N_k) (1/2πi) ∮_{C_{F,k}} z m_F(z) m′_{F̲}(z)/m_{F̲}²(z) dz,

I When the system dimensions are large,

m_F(z) ≃ m_{B_N}(z) ≜ (1/N) Σ_{k=1}^N 1/(λ_k − z), with (λ₁, …, λ_N) = eig(B_N) = eig((1/n)YY^H).

I Dominated convergence arguments then show

t_k − t̂_k −→ 0 a.s. with t̂_k = (N/N_k)(1/2πi) ∮_{C_{F,k}} z m_{B_N}(z) m′_{B̲_N}(z)/m_{B̲_N}²(z) dz = (n/N_k) Σ_{m∈N_k} (λ_m − μ_m)

with N_k the indexes of cluster k and μ₁ < … < μ_N the ordered eigenvalues of the matrix diag(λ) − (1/n)√λ√λ^T, λ = (λ₁, …, λ_N)^T.
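The final estimator t̂_k can be sketched as follows; the population eigenvalues (1 and 5), multiplicities and dimensions below are hypothetical, chosen so that the two eigenvalue clusters separate:

```python
import numpy as np

rng = np.random.default_rng(5)
t_true, Nk = [1.0, 5.0], [15, 15]          # population eigenvalues, multiplicities
N, n = sum(Nk), 600                        # c = N/n small => separated clusters

T_sqrt = np.sqrt(np.repeat(t_true, Nk))
X = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)
Y = T_sqrt[:, None] * X                    # Y = T^(1/2) X
lam = np.linalg.eigvalsh(Y @ Y.conj().T / n)   # eigenvalues of B_N, ascending

# mu: ordered eigenvalues of diag(lambda) - (1/n) sqrt(lambda) sqrt(lambda)^T
s = np.sqrt(lam)
mu = np.linalg.eigvalsh(np.diag(lam) - np.outer(s, s) / n)

# t_hat_k = (n/N_k) * sum over cluster k of (lambda_m - mu_m)
t_hat, idx = [], 0
for m in Nk:
    t_hat.append(n / m * np.sum(lam[idx:idx + m] - mu[idx:idx + m]))
    idx += m
```

The two cluster sums recover the population eigenvalues 1 and 5 up to O(1/N) errors.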
Random Matrix Theory and Multi-Source Power Estimation 92/113

Application Context: Coverage range in Femtocells


Random Matrix Theory and Multi-Source Power Estimation 93/113

Problem statement

I a device embedded with N antennas receives a signal


I originating from multiple sources
I number of sources K is not necessarily known
I source k is equipped with n_k antennas (ideally n_k ≫ 1)
I signal k goes through unknown MIMO channel Hk ∈ CN×nk
I the variance σ 2 of the additive noise is not necessarily known
I the problem is to infer
I P1 , . . . , PK knowing K , n1 , . . . , nK
I P1 , . . . , PK and n1 , . . . , nK knowing K
I K , P1 , . . . , PK and n1 , . . . , nK
I we will approach the problem from two angles:
I free deconvolution: from the moments of the receive matrix YY^H, infer the moments of P, and from these infer P
I Stieltjes transform: from analytical formulas for the asymptotic eigenvalue distribution of YY^H, derive consistent estimates of each P_k.
Random Matrix Theory and Multi-Source Power Estimation 94/113

System model
I at time t, source k transmits the signal x_k^{(t)} ∈ C^{n_k} with i.i.d. entries of zero mean and variance 1.
I we denote P_k the power emitted by user k
I the channel H_k ∈ C^{N×n_k} from user k to the receiver has i.i.d. entries of zero mean and variance 1/N.
I at time t, the additive noise is denoted σw^{(t)}, with w^{(t)} ∈ C^N with i.i.d. entries of zero mean and variance 1.
I hence the receive signal y^{(t)} at time t,

y^{(t)} = Σ_{k=1}^K √(P_k) H_k x_k^{(t)} + σw^{(t)}

Gathering M time instants into Y = [y^{(1)} … y^{(M)}] ∈ C^{N×M}, this can be written

Y = Σ_{k=1}^K √(P_k) H_k X_k + σW = HP^{1/2}X + σW

with H = [H₁ … H_K] ∈ C^{N×n}, n = Σ_{k=1}^K n_k, P = diag(P₁, …, P₁, P₂, …, P₂, …, P_K, …, P_K) where P_k has multiplicity n_k on the diagonal, X^H = [X₁^H … X_K^H] ∈ C^{n×M} with X_k = [x_k^{(1)} … x_k^{(M)}] ∈ C^{n_k×M}, and W defined similarly.
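The model can be generated as follows (powers, antenna counts and dimensions below are hypothetical); as a sanity check, the average receive power per antenna is close to (Σ_k n_k P_k)/N + σ²:

```python
import numpy as np

rng = np.random.default_rng(6)
N, M = 60, 600
P_vals, n_ks = [1.0, 3.0, 10.0], [2, 2, 2]
n, sigma2 = sum(n_ks), 0.1

def crandn(*shape):
    """Standard complex Gaussian entries (unit variance)."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H = crandn(N, n) / np.sqrt(N)               # channel entries of variance 1/N
P_half = np.sqrt(np.repeat(P_vals, n_ks))   # diagonal of P^(1/2)
Y = H @ (P_half[:, None] * crandn(n, M)) + np.sqrt(sigma2) * crandn(N, M)

# average receive power per antenna, ~ (sum_k n_k P_k)/N + sigma^2
rx_power = (np.abs(Y) ** 2).mean()
```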
Random Matrix Theory and Multi-Source Power EstimationFree Probability Approach 96/113

Reminder on free deconvolution


I Free probability provides tools to compute

d_k = (1/K) Σ_{i=1}^K λ_i(P)^k = (1/K) Σ_{i=1}^K P_i^k

as a function of

m_k = (1/N) Σ_{i=1}^N λ_i((1/M)YY^H)^k

I One can obtain all the successive power sums of P₁, …, P_K.
I From these, we can infer the values of each P_k!
I The tools come from the relations,
I cumulant to moment (and also moment to cumulant),

M_n = Σ_{π∈NC(n)} ∏_{V∈π} C_{|V|}

I sums of cumulants for asymptotically free A and B (of measure μ_A ⊞ μ_B),

C_k(A + B) = C_k(A) + C_k(B)

I products of cumulants for asymptotically free A and B (of measure μ_A ⊠ μ_B),

M_n(AB) = Σ_{(π₁,π₂)∈NC(n)} ∏_{V₁∈π₁} C_{|V₁|}(A) ∏_{V₂∈π₂} C_{|V₂|}(B)

I moments of the information-plus-noise model B_N = (1/n)(A_N + σW_N)(A_N + σW_N)^H,

μ_B = ((μ_A ⊠ μ_c) ⊞ δ_{σ²}) ⊠ μ_c

with μ_c the Marcenko-Pastur law with ratio c.
Random Matrix Theory and Multi-Source Power EstimationFree Probability Approach 97/113

Free deconvolution approach

I one can deconvolve YY^H in three steps,
I an information-plus-noise model with "deterministic matrix" HP^{1/2}XX^HP^{1/2}H^H,

YY^H = (HP^{1/2}X + σW)(HP^{1/2}X + σW)^H

I from HP^{1/2}XX^HP^{1/2}H^H, up to a Gram matrix commutation, we can deconvolve the signal X,

P^{1/2}H^H HP^{1/2} XX^H

I from P^{1/2}H^H HP^{1/2}, a new matrix commutation allows one to deconvolve H^H H,

PH^H H
Random Matrix Theory and Multi-Source Power EstimationFree Probability Approach 98/113

Free deconvolution operations

In terms of free probability operations, this is
I noise deconvolution

μ_{(1/M)HP^{1/2}XX^HP^{1/2}H^H} = ((μ_{(1/M)YY^H} ⊠^{-1} μ_c) ⊟ δ_{σ²}) ⊠^{-1} μ_c

with μ_c the Marcenko-Pastur law and c = N/M.
I signal deconvolution

μ_{(1/M)P^{1/2}H^H HP^{1/2}XX^H} = (N/n) μ_{(1/M)HP^{1/2}XX^HP^{1/2}H^H} + (1 − N/n) δ₀

I channel deconvolution

μ_P = μ_{(1/n)PH^H H} ⊠^{-1} μ_{c₁}

with c₁ = n/N
Random Matrix Theory and Multi-Source Power EstimationFree Probability Approach 99/113

Free deconvolution: moments


I from the three previous steps (plus addition of null eigenvalues), the moments of P can be computed from those of YY^H.
I this process can be automatized by combinatorics software
I finite-size formulas are also available
I the first moments m_k of (1/M)YY^H as a function of the first moments d_k of P read

m₁ = N⁻¹n d₁ + 1

m₂ = (N⁻²M⁻¹n + N⁻¹n) d₂ + (N⁻²n² + N⁻¹M⁻¹n²) d₁² + (2N⁻¹n + 2M⁻¹n) d₁ + 1 + NM⁻¹

m₃ = (3N⁻³M⁻²n + N⁻³n + 6N⁻²M⁻¹n + N⁻¹M⁻²n + N⁻¹n) d₃
  + (6N⁻³M⁻¹n² + 6N⁻²M⁻²n² + 3N⁻²n² + 3N⁻¹M⁻¹n²) d₂d₁
  + (N⁻³M⁻²n³ + N⁻³n³ + 3N⁻²M⁻¹n³ + N⁻¹M⁻²n³) d₁³
  + (6N⁻²M⁻¹n + 6N⁻¹M⁻²n + 3N⁻¹n + 3M⁻¹n) d₂
  + (3N⁻²M⁻²n² + 3N⁻²n² + 9N⁻¹M⁻¹n² + 3M⁻²n²) d₁²
  + (3N⁻¹M⁻²n + 3N⁻¹n + 9M⁻¹n + 3NM⁻²n) d₁

where

m_k = (1/N) Σ_{i=1}^N λ_i((1/M)YY^H)^k and d_k = (1/K) Σ_{i=1}^K λ_i(P)^k = (1/K) Σ_{i=1}^K P_i^k
Random Matrix Theory and Multi-Source Power EstimationFree Probability Approach 100/113

Free deconvolution: inferring powers

Direct inversion with Newton-Girard formulas. Assuming perfect evaluation of (1/K) Σ_{k=1}^K P_k^m, the P₁, …, P_K are given by the K roots of the polynomial

X^K − Π₁X^{K−1} + Π₂X^{K−2} − … + (−1)^K Π_K

where the Π_m's (known as the elementary symmetric polynomials) are iteratively defined as

(−1)^k kΠ_k + Σ_{i=1}^k (−1)^{k+i} S_i Π_{k−i} = 0

where S_k = Σ_{i=1}^K P_i^k.
I may lead to non-real solutions!
I does not minimize any conventional error criterion
I convenient for one-shot power inference
I when multiple realizations are available, statistical solutions are preferable
Random Matrix Theory and Multi-Source Power EstimationThe Stieltjes Transform Approach 102/113

Stieltjes transform approach

I Remember the matrix model

Y = HP^{1/2}X + σW

with W, Y ∈ C^{N×M}, H ∈ C^{N×n}, X ∈ C^{n×M}, and P ∈ C^{n×n} diagonal.
I this can be written in the following way

Y = [ HP^{1/2}  σI_N ] · [ X ; W ] ∈ C^{N×M}

and extend it into the matrix

Y_ext = [ HP^{1/2}  σI_N ; 0  0 ] · [ X ; W ] ∈ C^{(N+n)×M}

which is a sample covariance matrix model.
I the population covariance matrix is

[ HPH^H + σ²I_N  0 ; 0  0 ]

itself a sample covariance matrix.

Random Matrix Theory and Multi-Source Power EstimationThe Stieltjes Transform Approach 103/113

Asymptotic spectrum

R. Couillet, J. W. Silverstein, Z. Bai, M. Debbah, “Eigen-inference Energy Estimation of Multiple


Sources”, IEEE Trans. on Information Theory, 2010, submitted.
I the asymptotic spectrum of (1/M)YY^H has Stieltjes transform m(z), z ∈ C⁺, such that

m(z) = (M/N) m̲(z) + ((M−N)/N)(1/z)

where m̲(z) is the unique solution in C⁺ of

1/m̲(z) = −σ² + 1/f(z) − (1/N) Σ_{k=1}^K n_k P_k/(1 + P_k f(z))

where f(z) is given by

f(z) = ((M−N)/N) m̲(z) − (M/N) z m̲(z)²

and we want to determine the P_k.
Random Matrix Theory and Multi-Source Power EstimationThe Stieltjes Transform Approach 104/113

Asymptotic spectrum of (1/M)YY^H

[Plot: density versus eigenvalues of (1/M)YY^H; legend: asymptotic spectrum, empirical eigenvalues; x-axis ticks at 0.1, 1, 3 and 10.]

Figure: Empirical and asymptotic eigenvalue distribution of (1/M)YY^H when P has three distinct entries P₁ = 1, P₂ = 3, P₃ = 10, n₁ = n₂ = n₃, N/n = 10, M/N = 10, σ² = 0.1. Empirical test: n = 60.
Random Matrix Theory and Multi-Source Power EstimationThe Stieltjes Transform Approach 105/113

Stieltjes transform approach: final result

R. Couillet, J. W. Silverstein, Z. Bai, M. Debbah, “Eigen-inference Energy Estimation of Multiple


Sources”, IEEE Trans. on Information Theory, 2010, submitted.

Theorem
Let B_N = (1/M)YY^H ∈ C^{N×N}, with Y defined as previously. Denote its ordered eigenvalue vector λ = (λ₁, …, λ_N), λ₁ < … < λ_N. Further assume asymptotic spectrum separability. Then, for k ∈ {1, …, K}, as N, n, M grow large, we have

P̂_k − P_k −→ 0 a.s.

where the estimate P̂_k is given by

P̂_k = (NM/(n_k(M−N))) Σ_{i∈N_k} (η_i − μ_i)

with N_k = {N − Σ_{i=k}^K n_i + 1, …, N − Σ_{i=k+1}^K n_i} the set of indexes matching the cluster corresponding to P_k, (η₁, …, η_N) the ordered eigenvalues of diag(λ) − (1/M)√λ√λ^T and (μ₁, …, μ_N) the ordered eigenvalues of diag(λ) − (1/N)√λ√λ^T.
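A sketch of the complete estimation chain, in the same setting as the figures (P₁ = 1, P₂ = 3, P₃ = 10, n/N = N/M = 1/10, σ² = 0.1) at a reduced, hypothetical size; cluster separation is assumed to hold:

```python
import numpy as np

rng = np.random.default_rng(7)
P_vals, n_ks = [1.0, 3.0, 10.0], [10, 10, 10]
n = sum(n_ks)
N, M = 10 * n, 100 * n         # n/N = N/M = 1/10
sigma2 = 0.1

def crandn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H = crandn(N, n) / np.sqrt(N)                # i.i.d. channel, entries of variance 1/N
P_half = np.sqrt(np.repeat(P_vals, n_ks))    # diagonal of P^(1/2)
Y = H @ (P_half[:, None] * crandn(n, M)) + np.sqrt(sigma2) * crandn(N, M)

lam = np.linalg.eigvalsh(Y @ Y.conj().T / M)   # ordered eigenvalues of B_N
s = np.sqrt(lam)
eta = np.linalg.eigvalsh(np.diag(lam) - np.outer(s, s) / M)
mu = np.linalg.eigvalsh(np.diag(lam) - np.outer(s, s) / N)

P_hat = []
for k in range(len(P_vals)):
    lo = N - sum(n_ks[k:])       # cluster N_k in 0-based indexes
    hi = N - sum(n_ks[k + 1:])
    P_hat.append(N * M / (n_ks[k] * (M - N)) * np.sum(eta[lo:hi] - mu[lo:hi]))
```

Beyond the three eigenvalue decompositions, no numerical integration or fixed-point solving is needed, which illustrates the low complexity claimed above.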
Random Matrix Theory and Multi-Source Power EstimationThe Stieltjes Transform Approach 106/113

Comments on the result

I very compact formula


I low computational complexity
I assuming cluster separation, it also allows one to infer the number of distinct eigenvalues, as well as the multiplicity of each eigenvalue.
I however, there is a strong requirement on cluster separation
I if separation does not hold, the estimator returns the mean of the eigenvalues in a merged cluster instead of the individual eigenvalues.
I it is possible to infer K , all nk and all Pk using the Stieltjes transform method.
Random Matrix Theory and Multi-Source Power EstimationThe Stieltjes Transform Approach 107/113

Multi-Source Power Estimation: Performance Comparison

[Two histograms of the estimated powers (density versus estimated power, ticks at 1, 3 and 10), comparing the Stieltjes transform method and the moment-based method.]

Figure: Multi-source power estimation, for K = 3, P₁ = 1, P₂ = 3, P₃ = 10, n₁/n = n₂/n = n₃/n = 1/3, n/N = N/M = 1/10, SNR = 10 dB, for 10,000 simulation runs; top n = 60, bottom n = 6.
Random Matrix Theory and Multi-Source Power EstimationThe Stieltjes Transform Approach 108/113

Multi-Source Power Estimation: Performance Comparison

[Plot: normalized mean square error (dB) versus SNR (dB), comparing the Stieltjes transform method and the moment-based method for n = 6 and n = 60.]

Figure: Normalized mean square error of the vector (P̂₁, P̂₂, P̂₃), P₁ = 1, P₂ = 3, P₃ = 10, n₁/n = n₂/n = n₃/n = 1/3, n/N = N/M = 1/10, for 10,000 simulation runs.
Random Matrix Theory and Multi-Source Power EstimationThe Stieltjes Transform Approach 109/113

General comments and steps left to fulfill

I up to this day
I the moment approach is much simpler to derive
I it does not require any cluster separation
I the finite-size case is treated in the mean, which the Stieltjes transform approach cannot do.
I however, the Stieltjes transform approach makes full use of the spectral knowledge, while the moment approach is limited to a few moments.
I the results are more natural, and more "telling"
I in the future, it is expected that the cluster separation requirement can be overcome.
I a natural general framework attached to the Stieltjes transform method could arise
I central limit results on the estimates are expected
Random Matrix Theory and Multi-Source Power EstimationThe Stieltjes Transform Approach 110/113

Related bibliography

I N. El Karoui, “Spectrum estimation for large dimensional covariance matrices using random
matrix theory,” Annals of Statistics, vol. 36, no. 6, pp. 2757-2790, 2008.
I N. R. Rao, J. A. Mingo, R. Speicher, A. Edelman, “Statistical eigen-inference from large
Wishart matrices,” Annals of Statistics, vol. 36, no. 6, pp. 2850-2885, 2008.
I R. Couillet, M. Debbah, “Free deconvolution for OFDM multicell SNR detection”, PIMRC
2008, Cannes, France.
I X. Mestre, “Improved estimation of eigenvalues and eigenvectors of covariance matrices
using their sample estimates,” IEEE trans. on Information Theory, vol. 54, no. 11, pp.
5113-5129, 2008.
I R. Couillet, J. W. Silverstein, M. Debbah, “Eigen-inference for multi-source power estimation”,
submitted to ISIT 2010.
I Z. D. Bai, J. W. Silverstein, “No eigenvalues outside the support of the limiting spectral
distribution of large-dimensional sample covariance matrices,” The Annals of Probability, vol.
26, no.1 pp. 316-345, 1998.
I Z. D. Bai, J. W. Silverstein, “CLT of linear spectral statistics of large dimensional sample
covariance matrices,” Annals of Probability, vol. 32, no. 1A, pp. 553-605, 2004.
I J. Silverstein, Z. Bai, “Exact separation of eigenvalues of large dimensional sample
covariance matrices” Annals of Probability, vol. 27, no. 3, pp. 1536-1555, 1999.
I Ø. Ryan, M. Debbah, “Free Deconvolution for Signal Processing Applications,” IEEE
International Symposium on Information Theory, pp. 1846-1850, 2007.
Random Matrix Theory and Multi-Source Power EstimationThe Stieltjes Transform Approach 111/113

Selected authors’ recent bibliography

I Articles in Journals,
I R. Couillet, J. W. Silverstein, Z. Bai, M. Debbah, “Eigen-Inference for Energy Estimation of Multiple
Sources,” IEEE Trans. on Information Theory, 2010, submitted.
I R. Couillet, J. W. Silverstein, M. Debbah, “A Deterministic Equivalent for the Capacity Analysis of
Correlated Multi-User MIMO Channels,” IEEE Trans. on Information Theory, 2010, accepted for
publication.
I P. Bianchi, J. Najim, M. Maida, M. Debbah, “Performance of Some Eigen-based Hypothesis Tests for
Collaborative Sensing,” IEEE Trans. on Information Theory, 2010, accepted for publication.
I R. Couillet, M. Debbah, “A Bayesian Framework for Collaborative Multi-Source Signal Sensing,” IEEE
Trans. on Signal Processing, 2010, accepted for publication.
I S. Wagner, R. Couillet, M. Debbah, D. T. M. Slock, “Large System Analysis of Linear Precoding in
MISO Broadcast Channels with Limited Feedback,” IEEE Trans. on Information Theory, 2010,
submitted.
I A. Masucci, Ø. Ryan, S. Yang, M. Debbah, “Gaussian Finite Dimensional Statistical Inference,” IEEE
Trans. on Information Theory, 2009, submitted.
I Ø. Ryan, M. Debbah, “Asymptotic Behaviour of Random Vandermonde Matrices with Entries on the
Unit Circle,” IEEE Trans. on Information Theory, vol. 55, no. 7 pp. 3115-3148, July 2009.
I M. Debbah, R. Müller, “MIMO channel modeling and the principle of maximum entropy,” IEEE Trans.
on Information Theory, vol. 51, no. 5, pp. 1667-1690, 2005.
I Articles in International Conferences
I R. Couillet, S. Wagner, M. Debbah, A. Silva, “The Space Frontier: Physical Limits of Multiple
Antenna Information Transfer”, Inter-Perf 2008, Athens, Greece. BEST STUDENT PAPER
AWARD.
I R. Couillet, M. Debbah, V. Poor, “Self-organized spectrum sharing in large MIMO multiple access
channels”, submitted to ISIT 2010.
I R. Couillet, M. Debbah, “Uplink capacity of self-organizing clustered orthogonal CDMA networks in
flat fading channels”, ITW 2009 Fall, Taormina, Sicily.
Random Matrix Theory and Multi-Source Power EstimationThe Stieltjes Transform Approach 112/113

Available in bookstores!
Random Matrix Theory and Multi-Source Power EstimationThe Stieltjes Transform Approach 113/113

Detailed outline
Romain Couillet, Mérouane Debbah, Random Matrix Methods for Wireless Communications.
1. Theoretical aspects
1.1 Preliminary
1.2 Tools for random matrix theory
1.3 Deterministic equivalents
1.4 Central limit theorems
1.5 Spectrum analysis
1.6 Eigen-inference
1.7 Extreme eigenvalues
2. Applications to wireless communications
2.1 Introduction
2.2 System performance: capacity and rate-regions
2.2.1 Introduction
2.2.2 Performance of CDMA technologies
2.2.3 Performance of multiple antenna systems
2.2.4 Multi-user communications, rate regions and sum-rate
2.2.5 Design of multi-user receivers
2.2.6 Analysis of multi-cellular networks
2.2.7 Communications in ad-hoc networks
2.3 Detection
2.4 Estimation
2.5 Modelling
2.6 Random matrix theory and self-organizing networks
2.7 Perspectives
2.8 Conclusion
