
Münster Lectures in Mathematics

Free Probability and Operator Algebras

Dan-Virgil Voiculescu, Nicolai Stammeier and Moritz Weber, Editors

Free probability is a probability theory dealing with variables having the highest degree of
noncommutativity, an aspect found in many areas (quantum mechanics, free group algebras,
random matrices, etc.). Thirty years after its foundation, it is a well-established and very
active field of mathematics. Originating from Voiculescu’s attempt to solve the free group
factor problem in operator algebras, free probability has important connections with random
matrix theory, combinatorics, harmonic analysis, representation theory of large groups, and
wireless communication.

These lecture notes arose from a masterclass in Münster, Germany, and present the state of
free probability from an operator algebraic perspective. This volume includes introductory
lectures on random matrices and combinatorics of free probability (Speicher), free monotone
transport (Shlyakhtenko), free group factors (Dykema), free convolution (Bercovici), easy
quantum groups (Weber), and a historical review with an outlook (Voiculescu). In order to
make it more accessible, the exposition features a chapter on basics in free probability, and
exercises for each part.

This book is aimed at master students to early career researchers familiar with basic notions
and concepts from operator algebras.

ISBN 978-3-03719-165-1
www.ems-ph.org

Münster Lectures in Mathematics
Edited by
Christopher Deninger (c.deninger@uni-muenster.de) and
Linus Kramer (linus.kramer@uni-muenster.de), Universität Münster, Germany

Münster Lectures in Mathematics report on recent developments in mathematics. Material
considered for publication includes monographs and lecture notes or seminars on a new field or
a new angle at a classical field.
Free Probability
and Operator
Algebras

Dan-Virgil Voiculescu
Nicolai Stammeier
Moritz Weber
Editors
Editors:

Prof. Dan-Virgil Voiculescu
Department of Mathematics
University of California
Berkeley, CA 94720-3840
USA
E-mail: dvv@math.berkeley.edu

Prof. Moritz Weber
FB Mathematik und Informatik
Universität des Saarlandes
Postfach 151150
66041 Saarbrücken
Germany
E-mail: weber@math.uni-sb.de

Dr. Nicolai Stammeier
Department of Mathematics
University of Oslo
P.O. Box 1053 Blindern
1360 Oslo
Norway
E-mail: n.stammeier@wwu.de

2010 Mathematics Subject Classification: Primary 46L54; secondary 60B20, 47C15, 20G42

Key words: Free probability, operator algebras, random matrices, free monotone transport, free group
factors, free convolution, compact quantum groups, easy quantum groups, noncrossing partitions, free
independence, entropy, max-stable laws, exchangeability

ISBN 978-3-03719-165-1

The Swiss National Library lists this publication in The Swiss Book, the Swiss national bibliography, and the
detailed bibliographic data are available on the Internet at http://www.helveticat.ch.

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting,
reproduction on microfilms or in other ways, and storage in data banks. For any kind of use permission
of the copyright owner must be obtained.

© European Mathematical Society 2016

Contact address:
European Mathematical Society Publishing House
Seminar for Applied Mathematics
ETH-Zentrum SEW A27
CH-8092 Zürich
Switzerland

Phone: +41 (0)44 632 34 36


Email: info@ems-ph.org
Homepage: www.ems-ph.org

Typeset using the authors’ TEX files: Nicolai Stammeier (Oslo), Moritz Weber (Saarbrücken)
Printing and binding: Beltz Bad Langensalza GmbH, Bad Langensalza, Germany
∞ Printed on acid free paper
“Small observations can lead to big discoveries.”
(Dan-V. Voiculescu)
Contents

Preface – Nicolai Stammeier and Moritz Weber . . . . . . . . . ix

Background and outlook – Dan-Virgil Voiculescu . . . . . . . . . 1

Basics in free probability – Moritz Weber . . . . . . . . . . . . . . 7


Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Noncommutative probability spaces . . . . . . . . . . . . . . . . . . . 7
Freeness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Noncommutative distributions . . . . . . . . . . . . . . . . . . . . . . 11
Examples of noncommutative distributions . . . . . . . . . . . . . . . 13

Random matrices and combinatorics – Roland Speicher . . . . . 17


Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Gaussian random matrices and Wigner’s semicircle law . . . . . . . . 17
The free central limit theorem . . . . . . . . . . . . . . . . . . . . . . 21
Noncrossing partitions and free cumulants . . . . . . . . . . . . . . . 24
Sums and products of free variables . . . . . . . . . . . . . . . . . . . 27
Asymptotic freeness of random matrices . . . . . . . . . . . . . . . . 32
Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

Free monotone transport – Dimitri Shlyakhtenko . . . . . . . . . 39


Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Classical transportation theory . . . . . . . . . . . . . . . . . . . . . 39
Translation to the free case . . . . . . . . . . . . . . . . . . . . . . . . 41
Free Gibbs laws . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Connection between random matrices and free Gibbs states . . . . . 48
Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Is our map optimal? . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

Free group factors – Ken Dykema . . . . . . . . . . . . . . . . . . . 57


Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
C∗ -noncommutative probability spaces . . . . . . . . . . . . . . . . . 57
Reduced free products of C∗ -algebras and
von Neumann algebras . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Applications of random matrices to von Neumann algebras . . . . . . 61
Interpolated free group factors and
some results about free products . . . . . . . . . . . . . . . . . . . . . 65
Further results about free group factors . . . . . . . . . . . . . . . . . 68
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

Free convolution – Hari Bercovici . . . . . . . . . . . . . . . . . . . 73


Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Limit theorems in classical probability theory . . . . . . . . . . . . . 73

Limit theorems in free probability theory . . . . . . . . . . . . . . . . 74


Unbounded random variables . . . . . . . . . . . . . . . . . . . . . . 75
Univariate limit theorems . . . . . . . . . . . . . . . . . . . . . . . . . 81
Multiplicative free convolution . . . . . . . . . . . . . . . . . . . . . . 82
Multivariate limit theorems . . . . . . . . . . . . . . . . . . . . . . . 83
Subordination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Comments and exercises . . . . . . . . . . . . . . . . . . . . . . . . . 87
Easy quantum groups – Moritz Weber . . . . . . . . . . . . . . . . 95
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Compact matrix quantum groups . . . . . . . . . . . . . . . . . . . . 95
Categories of partitions . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Examples and classification of easy quantum groups . . . . . . . . . . 104
De Finetti theorems in free probability . . . . . . . . . . . . . . . . . 108
Laws of characters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
The Haar state on easy quantum groups . . . . . . . . . . . . . . . . 112
Fusion rules of easy quantum groups . . . . . . . . . . . . . . . . . . 113
Associated von Neumann algebras . . . . . . . . . . . . . . . . . . . . 115
Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

Participants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Contributors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Preface

The present lecture notes arose from the masterclass “Free Probability and
Operator Algebras” held September 2–6, 2013 in Münster, Germany. We would
like to express our deep gratitude to the lecturers
• Dan-V. Voiculescu for providing a comprehensive account on early days of
free probability and hints where this theory may lead in the future,
• Roland Speicher for explaining intriguing connections of free probability to
the theory of random matrices and combinatorics,
• Dima Shlyakhtenko for presenting the state of the art theory concerning free
monotone transport,
• Ken Dykema for telling us the fascinating, yet unfinished story of free group
factors,
• Hari Bercovici for discussing free convolution, the free way of dealing with
sums and products of independent random variables, and finally
• Moritz Weber for introducing us to easy quantum groups and explaining to
us why we might want to care about them.
The lectures were attended by roughly 50 participants from various coun-
tries whose seniority ranged from master student to full professor, but most
participants were PhD students and young postdoctoral researchers. Taking
into account the impressions we got during the week as well as the feedback
we collected afterwards, we feel that this event has been very successful in
stimulating sustainable interactions between distinguished experts in the field
and young emerging researchers.
While the lecturers conveyed their themes with great enthusiasm that captivated
the audience, a lot of work was carried out behind the scenes, both during
and before this event: Since the idea of having such a masterclass in Münster
came to life on a pleasant evening spent in the leisure room of the MFO (Ober-
wolfach) in October 2012, the organizers received advice and support from their
mentors Joachim Cuntz and Roland Speicher.
Many small and not so small things were taken care of by our phenomenal
team of secretaries: Elke Ernsting, Gabriele Dierkes and Lisa Steggemann.
Without their well-structured, competent work and their patience, this event
would have been impossible.
In order to ease the preparation of these lecture notes for the lecturers, the
participants Cédric Schonard and Jonas Wahl took notes and typeset a useful
first draft for each lecturer. We would like to thank both of them for their
contribution as well as Siegfried Echterhoff for the financial support granted to
undertake this step. In addition, we would like to thank Linus Kramer (WWU)
for all his efforts that led to the creation of this new lecture notes series within
the framework of the EMS Publishing House, Karin Halupczok (WWU) and
Simon Winter (Dimler & Albroscheit) for their valuable editing, and Thomas
Hintermann (EMS Publishing House) for his efficient and competent handling.

Finally and quite importantly, we wish to thank the SFB Groups, Geometry
and Actions at the Mathematics Department in Münster for hosting the event
and providing us with generous support. This enabled us to invite renowned
specialists to Münster as well as to offer support for young talented researchers
from distant places that otherwise would not have had access to sufficient
funding in order to attend.
We are convinced that this masterclass added to the outstanding reputation
of the Mathematics Department at the University of Münster. On the other
hand, it also served the mathematical community as a whole by stimulating
scientific interaction and spreading knowledge. With this perspective in mind,
the creation of lecture notes for this masterclass is nothing but the canonical
next step. The result of our efforts is right in front of you, and we hope that
it will prove itself an enjoyable and valuable source.
It is supposed to serve as an introduction to free probability from an
operator algebraic point of view as well as a reference book for this approach.
This is why we also inserted a lecture on basics of free probability which was
not part of the original masterclass lectures.
Again, we thank all speakers not only for giving the lectures in Münster but
also for all their efforts to improve these lecture notes and all their detailed
proof-reading. Finally, we thank Dan-V. Voiculescu for co-editing these lecture
notes together with us.

Nicolai Stammeier and Moritz Weber


(organizers of the masterclass)

June 2015
Background and outlook

Dan-Virgil Voiculescu

Free probability is a probability theory adapted to dealing with variables


which have the highest degree of noncommutativity. Failure of commutativity
may occur in many ways. One of the most famous examples is the quantum
mechanics’ commutation relation XY − Y X = I, but this is only “mild”
noncommutativity. Indeed, in this case the variables X and Y commute with
XY − Y X.
Where should we then look for the “highest noncommutativity”? Roughly,
it is often to be found in objects which are given the adjective “free”, like
free groups. The free group Fn on n generators g1 , . . . , gn consists of words
g_{i(1)}^{k(1)} g_{i(2)}^{k(2)} · · · g_{i(m)}^{k(m)}, where i(1) ≠ i(2) ≠ i(3) ≠ · · · ≠ i(m) and k(j) ∈ Z \ {0}.
Another source is the full Fock space. If H is a Hilbert space, let
T (H) = ⊕_{n≥0} H^{⊗n},
where H^{⊗0} = C1. Let L_h ξ = h ⊗ ξ, h ∈ H, be the left creation operators on
T (H). The L_h and L_h^∗ generate the extended Cuntz C∗-algebra.
Random variables in free probability, like the observables in quantum me-
chanics, are operators on Hilbert spaces. Many basic notions can be presented
in a more general purely algebraic setting. A unital algebra A over C endowed
with a linear functional ϕ : A → C such that ϕ(1) = 1 is then called a noncom-
mutative probability space and the elements a ∈ A are called noncommutative
random variables.
The distribution of a family (ai )i∈I ⊂ A of noncommutative random vari-
ables amounts to the information provided by the collection of moments
ϕ(ai(1) . . . ai(n) ), where i(1), . . . , i(n) ∈ I. This can also be put in the form
of the linear map ϕ ◦ χ : C⟨Xi | i ∈ I⟩ → C, where χ : C⟨Xi | i ∈ I⟩ → A
is the homomorphism from the algebra of noncommutative polynomials in the
indeterminates Xi , i ∈ I, to A which maps each Xi to ai . In the case of one
hermitian variable a = a∗ ∈ (A, ϕ) where A is a C ∗ -algebra and ϕ is a state,
the distribution corresponds to a compactly supported probability measure µa

While working on this paper the author was supported in part by NSF Grant
DMS 1001881.

on R such that
ϕ(P (a)) = ∫ P (t) dµa (t)
for all polynomials P . Thus, in the case of one hermitian random variable, we
get a probability measure, like in classical probability theory. It is completely
determined by the collection of the moments.
What distinguishes free probability from other noncommutative probability
theories is the definition of independence, which is different from the one used
in quantum mechanics (and in classical probability).
Indeed, in quantum mechanics, independence is modeled on tensor products
and we shall refer to it as classical independence. Two subalgebras 1 ∈ B, 1 ∈ C
in (A, ϕ) are classically independent if they commute (i.e. [B, C] = 0) and if
ϕ(bc) = ϕ(b)ϕ(c)
holds for all b ∈ B and c ∈ C. Note that in classical probability, this amounts
to the fact that independent variables factorize under the expectation.
In free probability we have free independence. A family of subalgebras
1 ∈ Ai , i ∈ I in (A, ϕ) is freely independent (or free) if
ϕ(a1 a2 . . . ak ) = 0
whenever aj ∈ Ai(j) , 1 ≤ j ≤ k, are such that i(j) ≠ i(j + 1), 1 ≤ j < k, and all
ϕ(aj ) = 0, 1 ≤ j ≤ k. Sets of variables in (A, ϕ) are free, by definition, if the
algebras they generate are free.
Recently, I developed a version of freeness with left and right variables,
which I called bi-freeness. Since this extension of free probability is at a very
early stage, I took a look, to get some inspiration, at my old first free probability
paper, which I presented at a conference in Buşteni.¹ It was a well-attended
international conference in a mountain resort in Romania, which took place
August 29–September 9, 1983, that is, almost exactly 30 years before the days
of our masterclass in Münster.
Just before starting in this new direction, I had worked with Mihai Pimsner,
computing the K-theory of the reduced C ∗ -algebras of free groups. From the
K-theory work I had acquired a taste for operator algebras associated with free
groups and I became interested in a famous problem about the von Neumann
algebras L(Fn ) generated by the left regular representations of free groups,
which appears in Kadison’s Baton–Rouge problem list. The problem, which
may have already been known to Murray and von Neumann, is:
Are L(Fm ) and L(Fn ) nonisomorphic if m ≠ n?
This is still an open problem. Fortunately, after trying in vain to solve it, I
realized it was time to be more humble and to ask: is there anything I can do,
which may be useful in connection with this problem? Since I had come across
computations of norms and spectra of certain convolution operators on free
groups (i.e., elements of L(Fn )), I thought of finding ways to streamline some
of these computations and perhaps to be able to compute more complicated
examples. This, of course, meant computing expectations of powers of such
operators with respect to the von Neumann tracial state τ (T ) = ⟨T e_e , e_e⟩,
(e_g)_{g∈Fn} being the canonical basis of the ℓ^2 space ℓ^2(Fn ), and e ∈ Fn the neutral
element.

¹ D. Voiculescu, Symmetries of some reduced free product C∗-algebras, in Operator algebras and their connections with topology and ergodic theory (Buşteni, 1983), 556–588, Lecture Notes in Math., 1132, Springer, Berlin, 1985. MR0799593
The key remark I made was that if T1 , T2 are convolution operators on
Fm and Fn , then the operator on Fm+n = Fm ∗ Fn which is T1 + T2 has
moments τ ((T1 + T2 )^p ) which depend only on the moments τ (T_j^k ), j = 1, 2,
but not on the actual T1 and T2 . This was like the addition of independent
random variables, only classical independence had to be replaced by a notion
of free independence, which led to a free central limit theorem, a free analog of
the Gaussian functor, free convolution, an abstract existence theorem for one
variable free cumulants, etc.
An important consequence of the central limit theorem was that I realized
that the analog of the Gauss law in free probability was the semicircle law. By
Wigner’s work this law is known to play a key role in random matrix theory
as a limit of eigenvalue distributions. After wondering for a few years about
this coincidence, I understood what the connection was: large independent
Gaussian random matrices give rise to freely independent random variables
asymptotically. On a suitable algebra F (X , µ, Mn ) of n × n matrix-valued
functions T : X → Mn , the probability measure µ on X gives rise to an
expectation functional
ϕ(T ) = n^{−1} ∫_X Tr(T (ω)) dµ(ω).

When T = T^∗, its distribution is easily seen to be the average of the probability
measures giving mass 1/n to the eigenvalues of T (ω).
A consequence of the asymptotic freeness of independent large Gaussian
matrices with i.i.d. entries was that the von Neumann algebra L(Fn ) could
be viewed as being asymptotically generated by an n-tuple of such random
matrices. With this asymptotic random matrix model I could prove, for in-
stance, the isomorphism of L(F∞ ) and P L(F∞ )P where P ≠ 0 is a projection
with τ (P ) ∈ Q. Florin Radulescu was then able to remove the restriction
that τ (P ) be rational, and Ken Dykema and Florin Radulescu then discovered
the interpolated free group factors L(Fr ) where r > 1 does not need to be
an integer. These enjoy many of the properties of the L(Fn ), for instance,
L(Fr ) ∗ L(Fs ) ≅ L(Fr+s ).
In the meantime, Roland Speicher developed a theory of free stochastic in-
tegration on the Cuntz algebra and began developing the combinatorial side
of free probability. He discovered that, at the combinatorial level, the pas-
sage from classical probability to free probability meant replacing the lattice
of all partitions of {1, . . . , n} by the lattice of noncrossing partitions. This
was precisely how the combinatorial formulae for classical cumulants turned
into combinatorial formulae for free cumulants based on noncrossing partitions.

The combinatorial development was joined by Alexandru Nica with his many
essential contributions to the subject. On the other hand, the first classi-
cal probabilist to join in the effort was Philippe Biane with a wide range of
contributions from advanced stochastics to free probability aspects of asymp-
totics of group representations and processes with free increments. At present
the combinatorial side has also reached beyond noncrossing partitions to more
complicated diagrammatics like in the developing connections with subfactor
theory and in the study of second-order freeness.
The early successes of free probability were marked by several other deep
results.
I should mention here the deep analytic theorem of Hari Bercovici and
Vittorino Pata about the correspondence between classical and free infinitely
divisible laws in which the domains of attractions of corresponding laws are
equal.
In another direction, rather recently, a sweeping generalization of the almost
sure results about largest eigenvalues from one to several Gaussian random
matrices was obtained by Uffe Haagerup and Steen Thorbjørnsen, which gave
a demonstration that free probability could make important analytically hard
contributions to random matrix theory.
By the end of the 80s and beginning of the 90s a parallel between classical
and free probability had emerged and it is natural to ask:

How far does the parallel between classical and free probability extend?

At present, we know that the answer is: very far and it is one of the most
surprising things about free probability.
Let me give a few examples of items on the list of classical probability items
with free probability analogs:

• limit laws,
• stochastic processes with independent increments,
• addition and multiplication of independent random variables,
• stochastic integration,
• combinatorics of cumulants,
• continuous entropy,
• max-stable laws,
• exchangeability.

There is an amazing parallelism in all this. However, when one takes a closer
look there are often serious differences and open questions, which one would
like to understand. I would like to illustrate this with some comments about
the last three entries of the list.
Entropy. A few minutes are certainly not enough time for a lecture on free
entropy. Looking at the parallelism, this should be a quantity χ(X1 , . . . , Xn ),
where Xj = Xj∗ are in a von Neumann algebra (M, τ ) endowed with a faithful
normal tracial state and which behaves like H(f1 , . . . , fn ), the continuous or
differential entropy of Shannon which is given by the familiar formula

H(f1 , . . . , fn ) = − ∫ · · · ∫ p(t1 , . . . , tn ) log p(t1 , . . . , tn ) dt1 . . . dtn

if the joint distribution of f1 , . . . , fn has a density p(t1 , . . . , tn ) with respect to
the Lebesgue measure.
I actually defined the free entropy in two ways (the so-called “microstates”
and “microstates free” approaches) and there are still many difficult open tech-
nical problems among which proving the equivalence of the two approaches is
perhaps the most important. I should mention the works of Biane–Capitaine–
Guionnet, Guionnet–Shlyakhtenko and Yoann Dabrowski for remarkable con-
tributions to this very difficult problem.
Leaving aside these problems and looking only at the classical/free parallel
one has to wonder about something else. In classical probability it is possible
to define a discrete entropy −∑_j pj log pj , which is the fundamental entropy
notion. This leads to the question: is there a definition for a discrete free
entropy or is this an instance where the parallel is interrupted by a fundamental
difference?
Max-Stable Laws. For classical i.i.d. random variables f1 , f2 , . . . and suitable
sequences (bn )n∈N and (cn )n∈N of numbers, the limit laws of
b_n^{−1} max(f1 , . . . , fn ) + cn

are called max-stable laws (if they exist). A free analog has been found in joint
work of Gerard Ben Arous and myself. For hermitian random variables in a
von Neumann algebra (M, τ ) with a faithful tracial state, the max X ∨ Y is
defined via its spectral projections:
E(X ∨ Y ; (−∞, a)) = E(X; (−∞, a)) ∧ E(Y ; (−∞, a)).
We were able to classify these laws and show that, like in the classical case,
there are deeper things like the connection to Poisson processes which have
free analogs. This may be viewed as the half-full glass of the parallel for max-
stable laws, but there is also a half-empty glass to be considered which concerns
the applications. The well-known question of how high to build a dam in
Amsterdam in order that the probability of a flooding within the next 100
years be less than 1%, based on the flooding data for a certain number of
years, is answered using max-stable laws. One may wonder whether there is a
free analog to this application. What is a free dam in free Amsterdam to avoid
a free flooding with free probability . . . ?
Exchangeability. Claus Köstler and Roland Speicher discovered a free
de Finetti type theorem where invariance under classical permutations (which,
in the classical setting, gives conditional classical independence with respect to
a tail algebra of events) is replaced by invariance under the free quantum per-
mutation group (which then yields free independence with amalgamation over
a tail algebra). Due to work of Roland Speicher and Teodor Banica noncross-
ing partitions also make their appearance in this setting. It remains an open
question whether the noncommutative distributions of the variables generat-


ing the free quantum groups can be well integrated into the free probability
framework. So, do the distributions which arise in the free quantum group
setting fit in the free probability context or do these laws go beyond?
Outlook. I was asked to make some comments about the outlook of free
probability. Since I do not have a crystal ball, I really cannot predict the
future. I could add that many important developments in a subject, though by
no means all, are often unexpected. Perhaps paying much attention to detail
can be rewarding, small observations can lead to important discoveries.
Perhaps it is better to conclude with some remarks. The hyperfinite II1
factor is the nicest among II1 factors and the free group factors may be next
in line in a beauty contest for II1 factors. A Huge Theorem of Alain Connes
characterizes the hyperfinite II1 factor among II1 factors. Often in discussions
with Dimitri Shlyakhtenko we wonder whether there might be a similar theo-
rem for free group factors. I understand that this question is also considered by
von Neumann algebra experts. Unfortunately, while several important proper-
ties of the free group factors are known, it is not clear what a good candidate
for a list of properties characterizing this class or some variant of this class
might be.
Another concluding comment I would like to make is to return to the notion
of bi-freeness which I mentioned earlier and advertise the recent articles about
free probability for pairs of faces. An interesting feature of this notion is
that it implies certain free independence relations as well as certain classical
independence relations.
Basics in free probability

Moritz Weber

1. Introduction
This small chapter provides some basic definitions and notions in free prob-
ability theory. It may be used as a warm-up for the following lectures as well
as a reference chapter for the most frequently used terms appearing in the
sequel. Most of the presented material is composed from Nica and Speicher’s
book [5]. Other sources are the books by Voiculescu, Dykema, and Nica [7]
and the one by Hiai and Petz [1]. Several aspects mentioned in this chapter
will be explained in more detail throughout the lectures of the other authors.

2. Noncommutative probability spaces


Definition 2.1 (Noncommutative probability spaces).
(a) An (algebraic) noncommutative probability space (A, ϕ) is a unital algebra
A over C together with a unital linear functional ϕ : A → C, i.e. ϕ(1) = 1.
(b) A noncommutative ∗ -probability space (A, ϕ) is a unital ∗ -algebra A to-
gether with a unital linear functional ϕ : A → C which is positive in the
sense that ϕ(a∗ a) ≥ 0 for all a ∈ A. Then also ϕ(a^∗ ) = \overline{ϕ(a)} is satisfied.
(c) A noncommutative C ∗ -probability space (A, ϕ) is a unital C ∗ -algebra A
together with a state ϕ : A → C, i.e. ϕ is positive, linear and unital.
(d) A noncommutative W ∗ -probability space (A, ϕ) is a von Neumann algebra
A together with a normal state ϕ : A → C, i.e. ϕ is ultra-weakly continuous.
(e) We say that a linear functional ϕ : A → C on an algebra is a trace (or ϕ
is tracial ), if ϕ(ab) = ϕ(ba) for all a, b ∈ A. If A is a ∗ -algebra, we say
that ϕ is faithful, if ϕ(a∗ a) = 0 implies a = 0. In some cases some of these
extra assumptions on ϕ are made for a noncommutative probability space,
in particular for W ∗ -probability spaces.
(f) Elements x ∈ A of a noncommutative probability space (A, ϕ) are called
(noncommutative) random variables.

The author thanks Guillaume Cébron and Roland Speicher for helping to improve this
chapter, in particular the section on examples of noncommutative distributions.

Example 2.2. Here are examples of noncommutative probability spaces.


(a) Let (Ω, P) be a classical probability space. Denote by L∞ (Ω, P) the set of
all bounded measurable complex-valued functions on Ω. Using the classical
expectation E : L∞ (Ω, P) → C, we put
A := L∞ (Ω, P), ϕ(a) := E(a) := ∫_Ω a(ω) dP(ω).

(b) For n ∈ N denote by Mn (C) the algebra of complex-valued n × n-matrices.
We put
A := Mn (C), ϕ(a) := tr(a) := (1/n) ∑_{i=1}^{n} a_{ii} .
(c) Let G be a group with neutral element e and denote by CG its group
algebra, i.e. the set of all formal finite linear combinations ∑_{g∈G} αg g,
where αg ∈ C. There is a multiplication on CG given by convolution:

(∑_{g∈G} αg g)(∑_{h∈G} βh h) := ∑_{f∈G} (∑_{g,h∈G, gh=f} αg βh) f.

Note that this product naturally extends the group multiplication when
identifying g ∈ G with g ∈ CG. We also have an involution on CG given
by
(∑_{g∈G} αg g)^∗ := ∑_{g∈G} \overline{αg} g^{−1}.
We put
A := CG, ϕ(∑ αg g) := τG(∑ αg g) := αe .
One can check that τG is actually a faithful trace.
(d) Let H be a Hilbert space with inner product h·, ·i and B(H) the algebra
of bounded linear operators on H. Let ξ ∈ H be a vector of norm one. We
put
A := B(H), ϕ(a) := haξ, ξi.
Replacing the scalars C somehow by a subalgebra B, we obtain a notion of
operator-valued probability spaces.
Definition 2.3 (Operator-valued noncommutative probability spaces). An
operator-valued noncommutative probability space (A, B, E) is the triple of a
unital algebra A over C, a subalgebra 1 ∈ B ⊂ A containing the unit of A, and
a conditional expectation E : A → B, i.e. E is a linear map such that E(b) = b
for all b ∈ B and E(b1 ab2 ) = b1 E(a)b2 for all a ∈ A and all b1 , b2 ∈ B.
Example 2.4. Let (A0 , ϕ) be a noncommutative probability space. Let
M2 (A0 ) be the algebra of 2 × 2-matrices with entries in A0 . Then the fol-
lowing is an operator-valued noncommutative probability space:
A := M2 (A0 ), B := M2 (C), E\begin{pmatrix} a & b \\ c & d \end{pmatrix} := \begin{pmatrix} ϕ(a) & ϕ(b) \\ ϕ(c) & ϕ(d) \end{pmatrix}.
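To see that E in Example 2.4 is indeed a conditional expectation, note that for b1 , b2 ∈ M2 (C) and a ∈ M2 (A0 ) the (i, j)-entry of b1 a b2 is ∑_{k,l} (b1)_{ik} a_{kl} (b2)_{lj} with scalar coefficients (b1)_{ik} and (b2)_{lj} , so linearity of ϕ gives
E(b1 a b2 )_{ij} = ∑_{k,l} (b1)_{ik} ϕ(a_{kl}) (b2)_{lj} = (b1 E(a) b2 )_{ij} ,
and E(b) = b for b ∈ M2 (C) since ϕ(λ1) = λ for scalars λ.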

3. Freeness
Definition 3.1 (Freeness). Let (A, ϕ) be a noncommutative probability space
and let 1 ∈ Ai ⊂ A be subalgebras containing the unit of A, for i ∈ I and some
index set I.
(a) The algebras (Ai )i∈I are tensor independent or classically independent, if
• ab = ba for all a ∈ Ai , b ∈ Aj with i ≠ j,
• ϕ(a1 . . . an ) = ∏_{j=1}^{n} ϕ(aj ) whenever aj ∈ Aij and all ij are mutually
different; for all n ∈ N.
(b) The algebras (Ai )i∈I are free or freely independent, if
ϕ(a1 . . . an ) = 0
whenever ϕ(aj ) = 0 for all j and aj ∈ Aij with i1 ≠ i2 ≠ · · · ≠ in ; for all
n ∈ N.
(c) Random variables xi ∈ A, i ∈ I are called free, if the algebras Ai :=
alg(xi , 1) ⊂ A they generate are free. The random variables xi are called
∗-free, if the ∗-algebras Ai := alg(xi , x_i^∗ , 1) ⊂ A are so, provided (A, ϕ) is
a ∗ -probability space. Likewise we define (∗ -)freeness for sets of random
variables via the unital (∗ -)algebras they generate.
Remark 3.2. There is also a definition of Boolean independence, as well as of
monotone and anti-monotone independence, see [4]. Further related concepts
are Male’s traffic freeness [3] or Lenczewski’s matricial freeness [2]. The very
recent concept of bi-freeness was initiated in 2013 by Voiculescu [6].
Example 3.3. (a) Let (L∞ (Ω, P), E) be the noncommutative probability space
as in Example 2.2 (a), and a, b ∈ L∞ (Ω, P) two random variables which are in-
dependent in the usual sense in probability theory. Then E(a^k b^l ) = E(a^k )E(b^l )
for all k, l ∈ N0 . Hence a and b are tensor independent in the sense of Defini-
tion 3.1 (a).
(b) Let us now come to the key example for freeness, namely the one from
which the definition of freeness was derived. Consider the free groups Fn and
Fm on n and m generators respectively. How do we capture the fact that Fn+m
can be written as the free products of Fn and Fm ? In other words, how to
formalize the fact that Fn and Fm sit freely inside Fn+m ?
Consider G := Fn+m with generators x1 , . . . , xn+m . We view G1 := Fn
and G2 := Fm as subgroups of G generated by x1 , . . . , xn and xn+1 , . . . , xn+m
respectively. Denote by e the neutral element in G. Then G1 , G2 ⊂ G are free
in the following sense. Whenever we take elements gj ∈ Gij with i1 ≠ i2 ≠
· · · ≠ in such that gj ≠ e, then
g1 . . . gn ≠ e.
In other words, elements from G1 and G2 share no relations. We take this as
a definition of freeness for groups.
This can easily be generalized to the group algebras: The algebras A1 :=
CFn and A2 := CFm in A := CFn+m have the property that whenever we take
aj = ∑_g α^j_g g ∈ Aij with i1 ≠ i2 ≠ · · · ≠ in such that α^j_e = 0, then
a1 . . . an = ∑_g βg g with βe = 0.
Now, how to generalize it again, to an operator algebraic setting? How to
formalize that the free group factors LFn and LFm are supposed to sit freely
inside LFn+m ? This is different from the case of group algebras, we cannot
simply say: “If the neutral element does not appear in the ai , it shall not
appear in the product of them.” The way out is provided by the trace τG as
defined in Example 2.2 (c). Reformulating the above property of freeness of
the group algebras, we obtain: The algebras A1 := CFn and A2 := CFm in
A := CFn+m are free in the sense that whenever we take aj = ∑_g α^j_g g ∈ Aij
with i1 ≠ i2 ≠ · · · ≠ in such that τG (aj ) = 0, then
τG (a1 . . . an ) = 0.
But this is exactly Definition 3.1 of freeness! Using τG , we can thus extend the
idea of freeness of groups to freeness of the enveloping von Neumann algebras.
Note that we have: If subgroups Gi ⊂ G for i ∈ I are free in the above
sense of freeness for groups, then also the group algebras (CGi )i∈I are free in
the noncommutative probability space (CG, τG ) in the sense of Definition 3.1.
This goes over to the C ∗ - and von Neumann algebras.
(c) Let H be a Hilbert space. The full Fock space over H is by definition
F (H) := CΩ ⊕ ⊕_{n≥1} H^{⊗n}. Here, Ω is some vector of norm one, the so-called
vacuum vector. Following Example 2.2 (d), we remark that B(F (H)) can be
turned into a noncommutative probability space endowed with the vector state
ϕ(x) = ⟨xΩ, Ω⟩. Given a vector ξ ∈ H, we define the (left) creation operator
l(ξ) ∈ B(F (H)) by
l(ξ)Ω := ξ, l(ξ)(η1 ⊗ · · · ⊗ ηn ) := ξ ⊗ η1 ⊗ · · · ⊗ ηn for n ≥ 1.
Its adjoint is called the (left) annihilation operator. Now, if ξ1 , . . . , ξn ∈ H
form an orthonormal system, then l(ξ1 ), . . . , l(ξn ) are ∗ -free in (B(F (H)), ϕ).
The example of creation operators on a Fock space is quite instructive in free
probability theory. Moreover, we can also define the right creation operator by
placing ξ to the right of the vectors. The interplay of left and right creation
operators is the starting point for Voiculescu’s definition of bi-freeness.
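For instance, the vector state ϕ in (c) is easy to evaluate on short words in creation and annihilation operators: since l(ξ)^∗ Ω = 0, one gets
ϕ(l(ξ)l(ξ)^∗ ) = ⟨l(ξ)^∗ Ω, l(ξ)^∗ Ω⟩ = 0, while ϕ(l(ξ)^∗ l(ξ)) = ⟨l(ξ)Ω, l(ξ)Ω⟩ = ⟨ξ, ξ⟩ = ‖ξ‖^2 ,
and l(ξ)^∗ l(η) = 0 whenever ξ ⊥ η. Computations of this kind are what make the Fock space model so useful; they reappear in the semicircle examples of Section 5.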
We now list a few basic properties of freeness. Freeness may be understood
as a rule for computing mixed moments, like classical independence (in the
sense of Definition 3.1). We begin with the classical situation. Let a, b ∈ (A, ϕ)
be tensor independent, and assume we know all moments {ϕ(am ) | m ∈ N}
of a and also all of b. Then we know all mixed moments in a and b, since
ϕ(a^n b^m ) = ϕ(a^n )ϕ(b^m ).
How about the free case? Let a, b ∈ (A, ϕ) be freely independent, and again
assume that we know all moments of a and b. Then we also know all mixed
moments in a and b, i.e. we can express ϕ(a^{n1} b^{n2} . . . a^{nm} ) with ni ∈ N0 as a
polynomial in the moments of a and b. For instance, centering the variables
yields ϕ(a − ϕ(a)1) = 0 and likewise for b. Then, ϕ((a − ϕ(a)1)(b − ϕ(b)1)) = 0
by the definition of freeness, which yields
ϕ(ab) = ϕ(a)ϕ(b).
Likewise we infer
ϕ(aba) = ϕ(a^2 )ϕ(b)
or
ϕ(abab) = ϕ(a^2 )ϕ(b)^2 + ϕ(a)^2 ϕ(b^2 ) − ϕ(a)^2 ϕ(b)^2 .
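To see where such formulas come from, consider ϕ(aba): writing a° = a − ϕ(a)1 and b° = b − ϕ(b)1, freeness yields ϕ(a°b°a°) = 0 as well as ϕ(a°b°) = ϕ(b°a°) = 0. Expanding
ϕ(aba) = ϕ((a° + ϕ(a)1)(b° + ϕ(b)1)(a° + ϕ(a)1))
and using ϕ(a°) = ϕ(b°) = 0, the only surviving terms are ϕ(b)ϕ(a°a°) + ϕ(a)^2 ϕ(b), and since ϕ(a°a°) = ϕ(a^2 ) − ϕ(a)^2 this equals ϕ(a^2 )ϕ(b). The formula for ϕ(abab) is obtained in exactly the same way, with a slightly longer expansion.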
Proposition 3.4. Let (Ai )i∈I be free in (A, ϕ) and denote by B the algebra
generated by all Ai . Then the restriction ϕ|B is uniquely determined by ϕ|Ai ,
i ∈ I. Hence, knowing the moments of the variables in the Ai implies knowing
the moments of the elements in the algebra generated by all Ai .
Proposition 3.5. Let x, y, z ∈ (A, ϕ) be random variables.
(a) If x and y are freely independent and xy = yx, then we have
ϕ((x − ϕ(x)1)^2 ) = 0 or ϕ((y − ϕ(y)1)^2 ) = 0.
Thus, if x and y are in addition classically independent, selfadjoint, and
if ϕ is faithful, at least one of these variables is almost surely constant.
Hence, freeness is not a generalization of classical independence. It is
somehow the other extreme, suitable for the noncommutative situation.
(b) If x ∈ C1 is a constant, then x and y are free.
(c) The concept of freeness is commutative: If x and y are free, then also y
and x are free.
(d) The concept of freeness is associative: The variables x, y, z are free if and
only if x and {y, z} are free as well as y and z are free.
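A short argument for (a): put x° = x − ϕ(x)1 and y° = y − ϕ(y)1. By freeness, ϕ(x°y°x°y°) = 0, and since x and y commute, x°y°x°y° = (x°)^2 (y°)^2 . On the other hand, centering (x°)^2 and (y°)^2 and applying freeness once more gives ϕ((x°)^2 (y°)^2 ) = ϕ((x°)^2 )ϕ((y°)^2 ), so one of the two factors must vanish.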
Definition 3.6 (Operator-valued freeness). Let (A, B, E) be an operator-
valued noncommutative probability space and let B ⊂ Ai ⊂ A be subalgebras
of A containing B. The algebras (Ai )i∈I ⊂ A are free with amalgamation over
B or free with respect to E, if
E(a1 . . . an ) = 0
whenever E(aj ) = 0 for all j and aj ∈ Aij with i1 ≠ i2 ≠ · · · ≠ in ; for all
n ∈ N.
Random variables xi ∈ A, i ∈ I are called free, if the algebras alg(xi , B) are
free; and likewise ∗ -freeness is defined via the ∗ -algebras they generate.
In the case B = C the above definition simply boils down to freeness as in
Definition 3.1.

4. Noncommutative distributions
Denote by C⟨X1 , . . . , Xn ⟩ the polynomials in the noncommuting variables
X1 , . . . , Xn .
Definition 4.1 (Noncommutative distribution). Let (A, ϕ) be a noncommu-
tative probability space and let a1 , . . . , an ∈ A.

(a) The collection of joint moments
{ϕ(a_{i1} . . . a_{im}) | m ∈ N, 1 ≤ ij ≤ n}
is called the joint distribution of a1 , . . . , an . Sometimes, it is more en-
lightening to distinguish between the moments and the distribution in the
sense that the linear functional µ : C⟨X1 , . . . , Xn ⟩ → C given by
µ(p) = ϕ(p(a1 , . . . , an ))
is called the joint distribution of the elements.
(b) The collection of joint ∗-moments
{ϕ(a_{i1}^{ǫ1} . . . a_{im}^{ǫm}) | m ∈ N, 1 ≤ ij ≤ n, ǫj ∈ {1, ∗}}
or rather the functional µ : C⟨X1 , X1^∗ , . . . , Xn , Xn^∗ ⟩ → C given by
µ(p) = ϕ(p(a1 , . . . , an ))
is called the joint ∗-distribution of a1 , . . . , an .
In the case of one variable, the noncommutative distribution of a (nice)
element is given by a measure, like in the classical situation. Hence, in this
case, we know exactly how to interpret the moments as a distribution in the
analytical sense.
Proposition 4.2. Let (A, ϕ) be a noncommutative C ∗ -probability space, and
let a ∈ A be normal. Then there is a compactly supported measure µ on C such
that
∫ z^k z̄^l dµ(z) = ϕ(a^k (a^∗ )^l ) for all k, l ∈ N0 .
If a is selfadjoint, then the measure is compactly supported on R.
The distribution of the elements completely determines the algebra they
generate in the following sense. This is the reason why—from an (operator)
algebraic perspective—we want to know mixed moments. Freeness provides a
rule for computing them, as mentioned before.
Proposition 4.3. Let (A, ϕ) and (B, ψ) be ∗ -probability spaces and let ϕ and
ψ be faithful. Let a1 , . . . , an ∈ A and b1 , . . . , bn ∈ B have the same joint

∗-distribution. Then the algebras A0 ⊂ A and B0 ⊂ B generated by a1 , . . . , an
and b1 , . . . , bn respectively are isomorphic via ai 7→ bi . This lifts to the
C ∗ -algebraic and von Neumann algebraic levels.
Definition 4.4. Let (AN , ϕN ) and (A, ϕ) be noncommutative probability
spaces, N ∈ N. Let (aN,i )i∈I be families in AN and (ai )i∈I be in A.
(a) We say that (aN,i )i∈I converges in distribution towards (ai )i∈I , if
ϕN (aN,i1 . . . aN,in ) → ϕ(ai1 . . . ain )
for N → ∞ and all n ∈ N, i1 , . . . , in ∈ I. It converges in ∗ -distribution, if
the same holds true for the joint ∗ -moments.
(b) The variables (aN,i )i∈I are asymptotically free, if they converge in distri-
bution to free elements (ai )i∈I .

Definition 4.5 (Operator-valued noncommutative distribution). Let (A, B, E)


be an operator-valued noncommutative probability space and let a1 , . . . , an ∈
A. The collection of joint moments

{E(a_{i1} b1 a_{i2} b2 a_{i3} . . . a_{i_{m−1}} b_{m−1} a_{im}) | m ∈ N, 1 ≤ ij ≤ n, bj ∈ B}
is called the joint distribution of a1 , . . . , an .

5. Examples of noncommutative distributions


Finally, we list examples of (univariate) distributions. Note that we can
express the distribution of a single element in three ways: via the measure
obtained from Proposition 4.2 (or rather via its density—if it exists—with
respect to the Lebesgue measure), via moments, and via the cumulants (see
Speicher’s lecture for the definition of cumulants).
Example 5.1. (a) Let a ∈ L∞ (Ω, P) be a classical random variable in a clas-
sical probability space. The measure obtained from Proposition 4.2 is exactly
the distribution of a in the classical sense.
(b) Let a ∈ (Mn (C), tr) be a normal matrix and let λ1 , . . . , λn ∈ C be
its eigenvalues counted with multiplicities. Diagonalizing a, we see that the
measure from Proposition 4.2 is given by the eigenvalue distribution of a, i.e.
µ = (1/n) ∑_{k=1}^{n} δ_{λk} .
Here, δλ denotes the Dirac measure on {λ}.
(c) One of the most important examples of a distribution in free probability
is the semi-circular element (with variance σ 2 ) or simply the semi-circle. We
say that s is a standard semi-circle, if its variance is σ 2 = 1. A semi-circle
with variance σ^2 is given by s ∈ (A, ϕ) with s = s^∗ and the characterization
by moments
ϕ(s^{2m}) = σ^{2m} Cm = σ^{2m} \frac{1}{m+1} \binom{2m}{m} , ϕ(s^{2m+1}) = 0
for all m ∈ N0 . Here, Cm denotes the m-th Catalan number—besides the
binomial coefficients one of the most important series of numbers in combina-
torics. The Catalan numbers count the number of noncrossing pair partitions
N C2 (2m) (or equivalently of all noncrossing partitions N C(m)). The cumu-
lants of the semi-circle are
κ2 (s, s) = σ^2 , κn (s, . . . , s) = 0 for n ≠ 2.
The measure according to Proposition 4.2 has the density
t ↦ \frac{1}{2πσ^2} \sqrt{4σ^2 − t^2} on [−2σ, 2σ].
The semicircle in free probability plays the role of the Gaussian in classical
probability—it is the limiting distribution of a central limit theorem.
Here are some concrete examples of semi-circular elements. Let S ∈ B(ℓ^2 (N))
be the unilateral shift given by S e_n := e_{n+1} and consider ϕ(x) := ⟨x e_1 , e_1 ⟩.
Then S + S^∗ is a semi-circle. Moreover, l(ξ) + l(ξ)^∗ is a semi-circle with respect
to ϕ(x) = ⟨xΩ, Ω⟩, where l(ξ) is a left creation operator on some Fock space and
Ω is the vacuum vector. In this case σ^2 = ‖ξ‖^2 . If some vectors ξ1 , . . . , ξn ∈ H
form an orthonormal system, then l(ξ1 ) + l(ξ1 )^∗ , . . . , l(ξn ) + l(ξn )^∗ form a
system of n free standard semi-circular elements.
(d) If s1 and s2 are two free standard semi-circles, then c = (1/\sqrt{2}) (s1 + i s2 )
is called a circular element. Its ∗ -moments are not so explicit, since c is not
normal—we would need to consider all possible expressions of the form
ϕ(c^{a1} (c^∗ )^{a2} c^{a3} . . . (c^∗ )^{an} ), ai ∈ N0 .
However, it is easy to see that all moments vanish where the numbers of c’s
and c∗ ’s do not coincide. The cumulants in turn are easy to write down:
κ2 (c, c∗ ) = κ2 (c∗ , c) = 1, all other cumulants are zero.
Since c is not normal, there does not exist a measure according to Proposi-
tion 4.2.
(e) An element u ∈ (A, ϕ) in a noncommutative ∗ -probability space is called
a Haar unitary, if it is unitary (i.e. u∗ u = uu∗ = 1) and the moments are given
by
ϕ(u^m ) = ϕ((u^∗ )^m ) = 0 for m ∈ N \ {0}.
In terms of cumulants, we know that those with alternating arguments are of
the form
κ_{2m} (u, u^∗ , . . . , u, u^∗ ) = κ_{2m} (u^∗ , u, . . . , u^∗ , u) = (−1)^{m−1} C_{m−1} .
All other cumulants vanish. The measure from Proposition 4.2 is the normal-
ized Lebesgue measure on the unit circle in C, whence the name Haar unitary.
(f) An element u ∈ (A, ϕ) in a noncommutative ∗ -probability space is called
a k-Haar unitary for k ∈ N, if it is unitary, uk = 1, and the moments are given
as follows:
ϕ(u^m ) = ϕ((u^∗ )^m ) = 0 for m ∈ N \ {0}, k ∤ m,
where k ∤ m means that k does not divide m. For the cumulants we have
no nice formula (but note that the case k = 2 is the same as the symmetric
Bernoulli in Example (i) below). The measure is the uniform measure on the
set of all k-th roots of the unity.
(g) Let u be a Haar unitary. Then the distribution of u + u^∗ is the arcsine
law. In moments:
ϕ((u + u^∗ )^{2m}) = \binom{2m}{m} , ϕ((u + u^∗ )^{2m+1}) = 0.
In terms of cumulants:
κ_{2m} (u + u^∗ , . . . , u + u^∗ ) = 2(−1)^{m−1} C_{m−1} ,
κ_{2m+1} (u + u^∗ , . . . , u + u^∗ ) = 0.
The density of the arcsine law is given by
t ↦ \frac{1}{π \sqrt{4 − t^2}} , t ∈ (−2, 2).
(h) An element a ∈ (A, ϕ) is a free Poisson with rate λ ≥ 0 and jump size
α ∈ R or the Marchenko–Pastur law, if its moments are given by
ϕ(a^n ) = α^n ∑_{k=1}^{n} \frac{λ^k}{n−k+1} \binom{n}{k} \binom{n−1}{k−1} .
The cumulants are
κn (a, . . . , a) = λα^n .
The measure of the free Poisson law is given by
(1 − λ)δ0 + ν, if 0 ≤ λ ≤ 1, and ν, if λ > 1,
where ν has density
t ↦ \frac{1}{2παt} \sqrt{4λα^2 − (t − α(1 + λ))^2} on [α(1 − \sqrt{λ})^2 , α(1 + \sqrt{λ})^2 ].
The square of a semicircular element of variance σ^2 is a free Poisson element
with rate λ = 1 and jump size α = σ^2 . For the definition of a compound
Poisson, see [5, Lecture 12].
(i) A selfadjoint variable b ∈ (A, ϕ) is called a symmetric Bernoulli variable,
if its moments are, with α > 0,
ϕ(b^{2m}) = α^{2m} , ϕ(b^{2m+1}) = 0.
The cumulants are
κ_{2m} (b, . . . , b) = (−1)^{m−1} C_{m−1} α^{2m} , κ_{2m+1} (b, . . . , b) = 0
for m ∈ N0 . Its measure according to Proposition 4.2 is
(1/2)(δ_{−α} + δ_α ).
(j) Let p ∈ (A, ϕ) be a projection, i.e. p = p∗ = p2 , with ϕ(p) = t ∈ [0, 1].
Its moments are
ϕ(p^n ) = t.
Its measure is
(1 − t)δ0 + tδ1 .
(k) The free Cauchy distribution is the distribution of an unbounded vari-
able. It is the same as the classical Cauchy distribution.
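As a small consistency check of the formulas in (c) and (h): for the semi-circle, ϕ(s^2 ) = σ^2 C1 = σ^2 and ϕ(s^4 ) = σ^4 C2 = 2σ^4 , the two noncrossing pair partitions of {1, 2, 3, 4} being {(1, 2), (3, 4)} and {(1, 4), (2, 3)}. For the free Poisson, the moment formula in (h) gives ϕ(a) = λα and ϕ(a^2 ) = (λ + λ^2 )α^2 , which agrees with the cumulants κ1 = λα and κ2 = λα^2 via the relation ϕ(a^2 ) = κ2 + κ1^2 .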

References
[1] F. Hiai and D. Petz, The semicircle law, free random variables and entropy, Math.
Surveys Monogr., 77, Amer. Math. Soc., Providence, RI, 2000. MR1746976
[2] R. Lenczewski, Matricially free random variables, J. Funct. Anal. 258 (2010), no. 12,
4075–4121. MR2609539
[3] C. Male, The distributions of traffics and their free product. arXiv:1111.4662 [math.PR]
(2011).
[4] N. Muraki, The five independences as natural products, Infin. Dimens. Anal. Quantum
Probab. Relat. Top. 6 (2003), no. 3, 337–371. MR2016316

[5] A. Nica and R. Speicher, Lectures on the combinatorics of free probability, London Math.
Soc. Lecture Note Ser., 335, Cambridge Univ. Press, Cambridge, 2006. MR2266879
[6] D.-V. Voiculescu, Free probability for pairs of faces I, Comm. Math. Phys. 332 (2014),
no. 3, 955–980. MR3262618
[7] D.-V. Voiculescu, K. J. Dykema, and A. Nica, Free random variables, CRM Monogr.
Ser., 1, Amer. Math. Soc., Providence, RI, 1992. MR1217253
Random matrices and combinatorics

Roland Speicher

1. Introduction
The notion of free independence was introduced by Voiculescu [7] in 1983
in the context of operator algebras, giving rise to free probability theory. In
1991, Voiculescu [10] discovered that this notion of freeness also appeared in
the context of random matrices. The latter had already been a subject of in-
vestigation in statistics (Wishart [13], 1928) and physics (Wigner [12], 1955)
for quite some time. One of the basic results in random matrix theory was
Wigner’s discovery that the eigenvalue distribution of a Gaussian unitary en-
semble is asymptotically given by the semicircular law. Since the semicircle
law is also the limit in the free version of a central limit theorem, this pointed
to a connection between free probability theory and random matrices. We
will first present a short introduction to random matrices and show Wigner’s
semicircle law, and then switch to the free probability side and show that the
semicircle shows also up as the limit in a free central limit theorem. This mo-
tivated Voiculescu to look for a deeper relation between random matrices and
asymptotic freeness. We will present a few examples of this connection in the
final Section 6. However, before coming to this, we will give a more thorough
treatment of the combinatorial structure of free probability theory, based on
the lattice of noncrossing partitions and the notion of free cumulants.

2. Gaussian random matrices and Wigner’s semicircle law


Definition 2.1. Let (Ω, P) be a classical probability space. A random matrix
is a matrix A = (a_{ij})_{i,j=1}^{N} where the entries a_{ij} : Ω → C, i, j = 1, . . . , N , are
classical random variables. The corresponding noncommutative probability
space (A, ϕ) of N × N random matrices is given by
A = MN (L^{∞−} (Ω, P)) = MN (C) ⊗ L^{∞−} (Ω, P)
and
ϕ = tr ⊗ E,
where
L^{∞−} (Ω, P) = ⋂_{1≤p<∞} L^p (Ω, P)

denotes the space of random variables for which all moments exist. Moreover,
tr denotes the normalized trace on MN (C) and E denotes the expectation on
L^{∞−} (Ω, P). Hence we have A = (a_{ij})_{i,j=1}^{N} ∈ A if and only if a_{ij} ∈ L^{∞−} (Ω, P) for
all i, j = 1, . . . , N , and
ϕ(A) = (tr ⊗ E)(A) = E[tr(A)] = \frac{1}{N} ∑_{i=1}^{N} E[a_{ii}].

Remark 2.2. Consider a selfadjoint random matrix A = A^∗ ∈ A, i.e. a_{ij} = \overline{a_{ji}}
for all i, j = 1, . . . , N . Let λ1 , . . . , λN denote the eigenvalues of A. Then
ϕ(A^k ) = E[tr(A^k )] = E[ \frac{1}{N} ∑_{i=1}^{N} λ_i^k ] = ∫ t^k dµ_A (t),
where
µ_A = ∫_Ω \frac{1}{N} ∑_{i=1}^{N} δ_{λ_i (ω)} dP(ω)
denotes the averaged eigenvalue distribution of A. In other words, the set of
moments of A with respect to ϕ corresponds to the analytic object µ_A .
Definition 2.3. A (selfadjoint) Gaussian random matrix is a random matrix
A = (a_{ij})_{i,j=1}^{N} where
• A = A^∗ , i.e. a_{ij} = \overline{a_{ji}} for all i, j = 1, . . . , N ,
• a_{ij} (1 ≤ i ≤ j ≤ N ) are independent complex Gaussian random variables
with
E[a_{ij}] = 0, E[a_{ij}^2 ] = 0 (i ≠ j), E[a_{ij} a_{ji}] = E[a_{ij} \overline{a_{ij}}] = \frac{1}{N} .
Remark 2.4. Such random matrices are addressed as Gaussian unitary en-
semble (GUE). “Unitary” refers to the fact that the distribution of the entries
of A is invariant under unitary conjugations.
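Concretely, the conditions of Definition 2.3 hold for i ≠ j if one takes a_{ij} = x_{ij} + i y_{ij} with x_{ij} , y_{ij} independent real Gaussians of mean 0 and variance 1/(2N ) (and a_{ji} = \overline{a_{ij}}): then E[a_{ij}^2 ] = E[x_{ij}^2 ] − E[y_{ij}^2 ] + 2i E[x_{ij} y_{ij}] = 0, while E[a_{ij} \overline{a_{ij}}] = E[x_{ij}^2 ] + E[y_{ij}^2 ] = 1/N ; the diagonal entries a_{ii} are real Gaussians of variance 1/N .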
We now want to calculate ϕ(A^m ) for a GUE. For this we need expectations
of products of entries, which form a Gaussian family in the following sense:
Definition 2.5. Random variables x1 , . . . , xn form a Gaussian family, if for
all m ∈ N and for all 1 ≤ i(1), . . . , i(m) ≤ n:
E[x_{i(1)} . . . x_{i(m)} ] = ∑_{π∈P2 (m)} ∏_{(r,s)∈π} E[x_{i(r)} x_{i(s)} ],

where P2 (m) denotes the set of pair-partitions of m elements (i.e., the de-
composition of the set {1, . . . , m} into disjoint pairs (r1 , s1 ), (r2 , s2 ), . . .). This
combinatorial formula, which expresses all higher moments of a Gaussian fam-
ily in terms of second moments, is usually called the Wick formula.
It might be interesting to note that, while the work of Wick (in physics) is
from 1950, the same formula was also shown by Isserlis in 1918 in probability
theory—thus the name “Isserlis formula” might also be appropriate.

Example 2.6. Let x1 , . . . , xn be a Gaussian family. Then for odd m, it follows


that E[xi(1) . . . xi(m) ] = 0 and for m = 2 the Wick-formula yields the trivial
identity E[xi(1) xi(2) ] = E[xi(1) xi(2) ]. However, for m = 4 the set P2 (4) consists
of three elements and we have that
E[xi(1) xi(2) xi(3) xi(4) ] = E[xi(1) xi(2) ]E[xi(3) xi(4) ]
+ E[xi(1) xi(3) ]E[xi(2) xi(4) ]
+ E[xi(1) xi(4) ]E[xi(2) xi(3) ].
We note that real i.i.d. Gaussian random variables x1 , . . . , xn form a Gauss-
ian family with E[x_i x_j ] = δ_{ij} σ^2 : first, note that x_i Gaussian implies
E[x_i^m ] = \frac{1}{\sqrt{2π} σ} ∫_R t^m e^{−t^2/(2σ^2)} dt,
which equals 0 for m odd and σ^m (m − 1)(m − 3) · . . . · 1 = σ^m #P2 (m) for m even;

second, since E[xi xj ] = δij σ 2 , the Wick formula only counts pairings of the
same xi ’s, hence it factorizes for different ones, corresponding to the indepen-
dence of different xi ’s.
Thus, the entries Re a_{ij} , Im a_{ij} (i, j = 1, . . . , N ) of a GUE A = (a_{ij})_{i,j=1}^{N}
form a Gaussian family, where the second moments are given by
E[a_{ij} a_{kl} ] = \frac{1}{N} δ_{il} δ_{jk} for i, j, k, l = 1, . . . , N .
Now we can calculate the moments of the GUE A = (a_{ij})_{i,j=1}^{N} for m even:

ϕ(A^m ) = \frac{1}{N} ∑_{i(1),...,i(m)=1}^{N} E[a_{i(1)i(2)} a_{i(2)i(3)} . . . a_{i(m)i(1)} ]

        = \frac{1}{N} ∑_{i(1),...,i(m)=1}^{N} ∑_{π∈P2 (m)} ∏_{(r,s)∈π} E[a_{i(r)i(r+1)} a_{i(s)i(s+1)} ]

        = \frac{1}{N} ∑_{i(1),...,i(m)=1}^{N} ∑_{π∈P2 (m)} ∏_{(r,s)∈π} \frac{1}{N} δ_{i(r)i(s+1)} δ_{i(r+1)i(s)}

        = \frac{1}{N^{1+m/2}} ∑_{π∈P2 (m)} ∑_{i(1),...,i(m)=1}^{N} ∏_{r=1}^{m} δ_{i(r)i(π(r)+1)}

        = \frac{1}{N^{1+m/2}} ∑_{π∈P2 (m)} ∑_{i(1),...,i(m)=1}^{N} ∏_{r=1}^{m} δ_{i(r)i(γπ(r))} ,

where we identify π ∈ P2 (m) with the permutation π ∈ Sm that switches the


places of r and s for (r, s) ∈ π and where we denote the long cycle permutation
(1, 2, 3, . . . , m − 1, m) ∈ Sm by γ. By noting that

∑_{i(1),...,i(m)=1}^{N} ∏_{r=1}^{m} δ_{i(r)i(γπ(r))} = N^{#(γπ)},

where #(γπ) denotes the number of cycles of γπ, we get the following theorem.
Theorem 2.7. For an N × N GUE random matrix A = (a_{ij})_{i,j=1}^{N} we have
the genus expansion
ϕ(A^m ) = ∑_{π∈P2 (m)} N^{#(γπ)−1−m/2} .

Example 2.8. For m = 2, P2 (m) only consists of the element π = (1, 2) and
we have γ = (1, 2). Hence γπ = e and #(γπ) = 2, which yields the second
moment
ϕ(A^2 ) = N^{2−1−1} = 1.
In the case m = 4, P2 (m) contains the elements
π1 = (1, 2)(3, 4), π2 = (1, 3)(2, 4), π3 = (1, 4)(2, 3)
and we have γ = (1, 2, 3, 4). Hence
#(γπ1 ) − 3 = 0, #(γπ2 ) − 3 = −2, #(γπ3 ) − 3 = 0,
which yields
ϕ(A^4 ) = 2 + \frac{1}{N^2} → 2 (N → ∞).
In the same way we obtain
ϕ(A^6 ) = 5 + \frac{10}{N^2} → 5, ϕ(A^8 ) = 14 + \frac{70}{N^2} + \frac{21}{N^4} → 14 (N → ∞).
More generally, one has #(γπ) − 1 − m/2 ≤ 0 for all π ∈ P2 (m) and equality
holds exactly for the so-called noncrossing π. A pair-partition π ∈ P2 (m) is
crossing if we can find two blocks (r1 , s1 ) and (r2 , s2 ) of π which cross, i.e.,
with r1 < r2 < s1 < s2 .
Thus, for N → ∞, the moments of a GUE A = (a_{ij})_{i,j=1}^{N} are given by

lim_{N→∞} ϕ(A^m ) = #N C2 (m),

where N C2 (m) denotes the set of noncrossing pair-partitions.
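In the situation of Example 2.8, for instance, π1 = (1, 2)(3, 4) and π3 = (1, 4)(2, 3) are noncrossing, whereas π2 = (1, 3)(2, 4) is crossing (its blocks interlace as 1 < 2 < 3 < 4); this is exactly why π2 is the pairing whose contribution N^{−2} vanishes in the limit ϕ(A^4 ) → 2.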
If we put cm = #N C2 (2m) for m ∈ N, one can show that
cm = ∑_{k=0}^{m−1} ck c_{m−k−1} ,
which is exactly the recursion for the Catalan numbers. Thus,
cm = \frac{1}{m+1} \binom{2m}{m} .
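The recursion can be seen as follows: in a noncrossing pair-partition of {1, . . . , 2m}, the element 1 must be paired with an even element 2k + 2 for some 0 ≤ k ≤ m − 1, since the points 2, . . . , 2k + 1 lying inside this pair have to be paired among themselves. The inner points contribute ck noncrossing pairings, the outer points 2k + 3, . . . , 2m contribute c_{m−k−1} , and summing over k gives the recursion.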

Figure 1. Comparison between the histogram for the 4000 eigen-


values of one realization of a 4000 × 4000 Gaussian random ma-
trix and the semicircle distribution; as the agreement between his-
togram and semicircle suggests, Wigner’s semicircle law does not
only hold for the averaged eigenvalue distributions, but also almost
surely for generic realizations.

One can show that these are exactly the even moments of the semicircular law, i.e.
$$c_m = \frac{1}{2\pi} \int_{-2}^{2} t^{2m} \sqrt{4 - t^2}\, dt,$$
and hence we get the following theorem; see Figure 1.
Theorem 2.9 (Wigner's semicircle law). The asymptotic eigenvalue distribution of a GUE $A$ is given by the semicircle law, i.e.
$$\lim_{N\to\infty} \mu_A = \mu_S \quad \text{(weak convergence)},$$
where
$$d\mu_S(t) = \frac{1}{2\pi}\sqrt{4 - t^2}\, dt \quad \text{on } [-2, 2].$$
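The comparison of Figure 1 is easy to reproduce on a smaller scale. The short simulation below (our own illustration; sizes and bins are arbitrary) draws one GUE matrix normalized so that $E[a_{ij} a_{kl}] = \frac{1}{N}\delta_{il}\delta_{jk}$ and prints its eigenvalue histogram next to the semicircle density:
\begin{verbatim}
import numpy as np

N = 1000
rng = np.random.default_rng(1)
G = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
A = (G + G.conj().T) / np.sqrt(2 * N)   # GUE with E[a_ij a_kl] = delta_il delta_jk / N
eigenvalues = np.linalg.eigvalsh(A)

bins = np.linspace(-2.2, 2.2, 23)
histogram, _ = np.histogram(eigenvalues, bins=bins, density=True)
centers = (bins[:-1] + bins[1:]) / 2
semicircle = np.sqrt(np.clip(4 - centers**2, 0, None)) / (2 * np.pi)
for c, h, s in zip(centers, histogram, semicircle):
    print(f"t = {c:+.2f}   histogram {h:.3f}   semicircle {s:.3f}")
\end{verbatim}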

3. The free central limit theorem


We will now switch to the free probability side and see that the semicircle
distribution appears there also as one of the basic distributions. For this we
will look on the free analog of the central limit theorem. This free central
limit theorem was one of the first theorems of Voiculescu in free probability
theory. It was also my entry point into the free world. From the work of
my PhD supervisor, Wilhelm von Waldenfels, I was aware of combinatorial
approaches to classical and bosonic/fermionic central limit theorems, and I
tried to understand in this spirit Voiculescu’s result. In the following I will
present this combinatorial approach.

In order to illuminate the parallels (and also the differences) between clas-
sical and free, we will give a uniform treatment of both the classical and the
free central limit theorem.
Consider a sequence (ai )∞ i=1 of elements of a noncommutative probability
space (A, ϕ) which are
• identically distributed,
• centered, i.e. ϕ(a1 ) = 0,
• normalized, i.e. ϕ(a21 ) = 1,
• either classically independent or freely independent.
Note that we require the existence of moments here.
What can we say about
$$S_N = \frac{a_1 + \cdots + a_N}{\sqrt{N}},$$
when $N \to \infty$?
Definition 3.1. We say that $S_N \in (A_N, \varphi_N)$, $N \in \mathbb{N}$, converges in distribution to $x \in (A, \varphi)$, if
$$\lim_{N\to\infty} \varphi_N(S_N^m) = \varphi(x^m)$$
for all $m \in \mathbb{N}$. This convergence is denoted by $S_N \xrightarrow{\text{dist.}} x$.
Let us see whether we can control the moments of $S_N$ when $N$ goes to $\infty$. We have
$$\varphi(S_N^m) = \frac{1}{N^{\frac{m}{2}}}\, \varphi((a_1 + \cdots + a_N)^m) = \frac{1}{N^{\frac{m}{2}}} \sum_{i(1),\ldots,i(m)=1}^N \varphi(a_{i(1)} \cdots a_{i(m)}).$$

For $i = (i(1), \ldots, i(m))$ we denote by $\ker(i)$ the maximal partition of $\{1, \ldots, m\}$ such that $i$ is constant on blocks. For example, $i = (1, 3, 1, 5, 3)$ and $j = (3, 4, 3, 6, 4)$ have the same kernel. Now we note that by the fact that all our variables are identically distributed and by basic properties of either classical independence or freeness we have
$$\varphi(a_{i(1)} \cdots a_{i(m)}) = \varphi(a_{j(1)} \cdots a_{j(m)}),$$
whenever $\ker i = \ker j$ and hence
\begin{align*}
\varphi(S_N^m) &= \frac{1}{N^{\frac{m}{2}}} \sum_{\pi \in P(m)} \kappa_\pi\, \#\{\, i : \{1, \ldots, m\} \to \{1, \ldots, N\} \mid \ker i = \pi \,\}\\
&= \frac{1}{N^{\frac{m}{2}}} \sum_{\pi \in P(m)} \kappa_\pi\, N(N-1) \cdots (N - (\#\pi - 1))\\
&\sim \sum_{\pi \in P(m)} \kappa_\pi\, N^{\#\pi - \frac{m}{2}}
\end{align*}
for large $N$ (i.e. these sequences tend to the same limit), where $P(m)$ denotes the set of all partitions of $\{1, \ldots, m\}$ (for a formal definition see Definition 4.1), $\kappa_\pi$ equals $\varphi(a_{i(1)} \cdots a_{i(m)})$ if $\ker i = \pi$, and $\#\pi$ denotes the number of blocks of $\pi$ (see again Definition 4.1).
Let $\pi$ have a singleton, i.e., a block consisting of just one element (meaning that one of the appearing indices is different from all the others). Then
$$\kappa_\pi = \varphi(a_{i(1)} \cdots a_{i(k)} \cdots a_{i(m)}) = \varphi(a_{i(1)} \cdots a_{i(k-1)} a_{i(k+1)} \cdots a_{i(m)})\, \varphi(a_{i(k)}) = 0,$$
where $i(k)$ is the index that differs from all the others. Here we used the free/classical independence of the sets $\{a_{i(1)}, \ldots, a_{i(k-1)}, a_{i(k+1)}, \ldots, a_{i(m)}\}$ and $\{a_{i(k)}\}$ and the fact that all our variables are centered. Hence $\kappa_\pi \neq 0$ implies that $\pi = \{V_1, \ldots, V_r\}$ with $|V_j| \geq 2$ for all $j = 1, \ldots, r$ and thus $r = \#\pi \leq \frac{m}{2}$.
Altogether we get
$$\lim_{N\to\infty} \varphi(S_N^m) = \sum_{\substack{\pi \in P(m)\\ \pi \text{ has no singleton}\\ \#\pi = \frac{m}{2}}} \kappa_\pi = \sum_{\pi \in P_2(m)} \kappa_\pi,$$
where $P_2(m)$ denotes, as in the previous section, the set of pair-partitions. In particular, it follows that
$$\lim_{N\to\infty} \varphi(S_N^m) = 0$$
for odd $m$.
Now we want to distinguish the classical and the free case:
(1) If we consider the ai ’s to be classical (commutative) independent random
variables, then we have, for even m and for all π ∈ P2 (m),
κπ = ϕ(ai(1) . . . ai(m) ) = 1,
where $\ker i = \pi$. Thus we have
$$\lim_{N\to\infty} \varphi(S_N^m) = \# P_2(m) = \begin{cases} 0, & m \text{ odd},\\ (m-1)(m-3) \cdots 1, & m \text{ even}, \end{cases}$$
which are exactly the moments of the Gaussian distribution. This proves the
classical central limit theorem, in the case where all moments exist.
(2) If the $a_i$'s are free, we get
$$\kappa_\pi = \begin{cases} 0, & \pi \text{ is crossing},\\ 1, & \pi \in NC_2(m). \end{cases}$$
For instance, if $\pi = \{(1, 6), (2, 5), (3, 4)\}$ (a noncrossing pairing), we obtain
$$\kappa_\pi = \varphi(a_1 a_2 a_3 a_3 a_2 a_1) = \varphi(a_3 a_3)\varphi(a_1 a_2 a_2 a_1) = \varphi(a_3 a_3)\varphi(a_2 a_2)\varphi(a_1 a_1) = 1,$$
and if $\pi = \{(1, 5), (2, 3), (4, 6)\}$ (a crossing pairing), we have
$$\kappa_\pi = \varphi(a_1 a_2 a_2 a_3 a_1 a_3) = \varphi(a_2 a_2)\varphi(a_1 a_3 a_1 a_3) = 0,$$
by definition of freeness and the fact that all ϕ(ai ) = 0.
So we get in the limit of the free central limit theorem that the moments are
counted by the number of noncrossing pair-partitions; hence the same moments
as in the limit of Gaussian random matrices.
Theorem 3.2. Assume $a_1, a_2, \ldots \in (A, \varphi)$ are free and identically distributed with $\varphi(a_1) = 0$ and $\varphi(a_1^2) = 1$. Then
$$\frac{a_1 + \cdots + a_N}{\sqrt{N}} \xrightarrow{\text{dist.}} s,$$
where $s$ is a semicircular element, i.e.
$$\varphi(s^m) = \frac{1}{2\pi} \int_{-2}^{2} t^m \sqrt{4 - t^2}\, dt = \# NC_2(m).$$
The free central limit theorem can be easily generalized to a multivariate
version:
Theorem 3.3. Let $\{a_1^{(i)} \mid i \in I\}, \{a_2^{(i)} \mid i \in I\}, \ldots \subset (A, \varphi)$ be a sequence of families of freely independent random variables with identical distribution and $\varphi(a_r^{(i)}) = 0$ for all $r \in \mathbb{N}$, $i \in I$. We denote the covariance by
$$c_{ij} = \varphi(a_r^{(i)} a_r^{(j)}), \quad i, j \in I.$$
Then
$$\Big( \frac{a_1^{(i)} + \cdots + a_N^{(i)}}{\sqrt{N}} \Big)_{i \in I} \xrightarrow{\text{dist.}} (s_i)_{i \in I},$$
where $(s_i)_{i \in I}$ is a semicircular family of covariance $(c_{ij})_{i,j \in I}$, i.e.
$$\varphi(s_{i(1)} \cdots s_{i(m)}) = \sum_{\pi \in NC_2(m)} \prod_{(r,p)\in\pi} c_{i(r)i(p)}$$
for all $m \in \mathbb{N}$.

4. Noncrossing partitions and free cumulants


After having realized that the transition from the classical to the free central
limit theorem consists, on a combinatorial level, in replacing all pair-partitions
by noncrossing pair-partitions, it was tempting to try to develop a general
approach to free probability theory based on this observation. For this the
combinatorial description of classical probability theory in terms of cumulants
and the lattice of all partitions, as presented in the work of Gian-Carlo Rota
and his coworkers, was instrumental. Motivated by this I developed the follow-
ing general combinatorial approach to free probability theory, resting on the
notion of free cumulants.

Definition 4.1. A partition of $\{1, \ldots, n\}$ is a collection $\pi = \{V_1, \ldots, V_r\}$ of subsets of $\{1, \ldots, n\}$ with
• $V_i \neq \emptyset$ for all $i = 1, \ldots, r$,
• $V_i \cap V_j = \emptyset$ for $i \neq j$ and
• $\bigcup_{i=1}^r V_i = \{1, \ldots, n\}$.
The $V_i$'s are called the blocks of $\pi$.
The partition $\pi$ is noncrossing if we do not have $p_1, p_2, q_1, q_2 \in \{1, \ldots, n\}$ such that $p_1 < q_1 < p_2 < q_2$ and $p_1, p_2$ belong to the block $V_i$, while $q_1, q_2$ belong to the block $V_j$, but $V_i \neq V_j$. We denote the set of all partitions of $\{1, \ldots, n\}$ by $P(n)$ and the subset of all noncrossing partitions by $NC(n)$.
We can define a partial order on $NC(n)$ by: $\pi_1 \leq \pi_2$ if and only if each block of $\pi_1$ is contained in a block of $\pi_2$. For instance, we have (diagram of two comparable partitions of $\{1, 2, 3, 4\}$ omitted).
This partial order induces a lattice structure on $NC(n)$, i.e. for all $\pi, \sigma \in NC(n)$ there is a minimal partition $\pi \vee \sigma$ that is larger than $\pi$ and larger than $\sigma$ (called the join of $\pi$ and $\sigma$) and a maximal partition $\pi \wedge \sigma$ that is smaller than $\pi$ and smaller than $\sigma$ (called the meet of $\pi$ and $\sigma$).
Example 4.2. We have (diagrams of the meet and the join of two noncrossing partitions omitted).
The lattice $NC(n)$ has the maximal element $1_n$ consisting of one block of size $n$ and the minimal element $0_n$ consisting of $n$ blocks of size one.
Definition 4.3. Let $(A, \varphi)$ be a noncommutative probability space. The free cumulants $\kappa_n : A^n \to \mathbb{C}$, $n \geq 1$, are inductively defined by the moment-cumulant formulas
$$\varphi(a_1 \cdots a_n) = \sum_{\pi \in NC(n)} \kappa_\pi(a_1, \ldots, a_n),$$
where, for $\pi = \{V_1, \ldots, V_r\}$,
$$\kappa_\pi(a_1, \ldots, a_n) = \prod_{i=1}^r \kappa_{|V_i|}((a_j)_{j \in V_i}).$$
Hence $\kappa_\pi$ is defined by factorization into the blocks, where each $\kappa_{|V_i|}$ is applied to those elements $a_j$ whose indices $j$ lie in the block $V_i$.
Example 4.4. For $n = 1$, we get
$$\kappa_1(a_1) = \varphi(a_1)$$
and for $n = 2$, the moment-cumulant formula yields
$$\varphi(a_1 a_2) = \kappa_2(a_1, a_2) + \kappa_1(a_1)\kappa_1(a_2).$$
Hence
$$\kappa_2(a_1, a_2) = \varphi(a_1 a_2) - \varphi(a_1)\varphi(a_2).$$
For $n = 3$, we have five noncrossing partitions and hence $\kappa_3$ is determined by
$$\varphi(a_1 a_2 a_3) = \kappa_3(a_1, a_2, a_3) + \kappa_1(a_1)\kappa_2(a_2, a_3) + \kappa_1(a_2)\kappa_2(a_1, a_3) + \kappa_1(a_3)\kappa_2(a_1, a_2) + \kappa_1(a_1)\kappa_1(a_2)\kappa_1(a_3).$$
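To make the inductive definition concrete, here is a small single-variable sketch (our own illustration, not from the text): it enumerates $NC(n)$ by brute force and solves the moment-cumulant formulas for $\kappa_1, \kappa_2, \ldots$. Feeding in the semicircle moments $0, 1, 0, 2, 0, 5, 0, 14$ should return $\kappa_2 = 1$ and all other cumulants $0$, in accordance with Example 5.4 below.
\begin{verbatim}
from itertools import combinations

def set_partitions(points):
    # All set partitions of the tuple `points`.
    if not points:
        yield []
        return
    first, rest = points[0], points[1:]
    for smaller in set_partitions(rest):
        for k in range(len(smaller)):
            yield smaller[:k] + [[first] + smaller[k]] + smaller[k + 1:]
        yield [[first]] + smaller

def is_noncrossing(partition):
    for V, W in combinations(partition, 2):
        for p1, p2 in combinations(sorted(V), 2):
            for q1, q2 in combinations(sorted(W), 2):
                if p1 < q1 < p2 < q2 or q1 < p1 < q2 < p2:
                    return False
    return True

def free_cumulants(moments):
    # Solve m_n = sum over NC(n) of prod_{V in pi} kappa_{|V|} for kappa_n.
    kappa = {}
    for n in range(1, len(moments) + 1):
        rest = 0.0
        for pi in set_partitions(tuple(range(1, n + 1))):
            if len(pi) == 1 or not is_noncrossing(pi):
                continue                  # skip 1_n, which contributes kappa_n itself
            product = 1.0
            for V in pi:
                product *= kappa[len(V)]
            rest += product
        kappa[n] = moments[n - 1] - rest
    return kappa

# Semicircle moments 0, 1, 0, 2, 0, 5, 0, 14  ->  kappa_2 = 1, all others 0.
print(free_cumulants([0, 1, 0, 2, 0, 5, 0, 14]))
\end{verbatim}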
One can use an inductive argument to show that κn is an n-linear functional.
Now we want to have a look at the behavior of κn with respect to products of
elements of A. We consider the following example:
κ2 (a1 a2 , a3 ) = ϕ((a1 a2 )a3 ) − ϕ(a1 a2 )ϕ(a3 )
= ϕ(a1 a2 a3 ) − ϕ(a1 a2 )ϕ(a3 )
= κ3 (a1 , a2 , a3 ) + κ1 (a1 )κ2 (a2 , a3 ) + κ1 (a2 )κ2 (a1 , a3 ).
We note that the cumulants appearing in the last equation are exactly the ones
that correspond to partitions in N C(3) that connect the blocks {1, 2} and {3}.
This can be generalized to the following result.
Theorem 4.5. Consider $a_1, \ldots, a_n \in A$ and multiply some of them together to $A_1, \ldots, A_m$ ($m \leq n$) such that $A_1 \cdots A_m = a_1 \cdots a_n$. Then
$$\kappa_m(A_1, \ldots, A_m) = \sum_{\substack{\pi \in NC(n)\\ \pi \vee \sigma = 1_n}} \kappa_\pi(a_1, \ldots, a_n),$$
where $i, j$ belong to the same block of $\sigma$ if and only if $a_i$ and $a_j$ are factors in the same $A_k$.
Remark 4.6. We note that the condition π ∨ σ = 1n appearing in the sum in
the last theorem means that one has to consider all partitions π that couple
all blocks of σ.
Now we present the main result on free cumulants, which connects them to
freeness.
Theorem 4.7. Let (A, ϕ) be a noncommutative probability space. Unital sub-
algebras A1 , . . . , As ⊂ A are free if and only if all mixed cumulants vanish,
i.e.
κn (a1 , . . . , an ) = 0
whenever there are k, l ∈ {1, . . . , n} such that ak ∈ Ai(k) , al ∈ Ai(l) , and
i(k) 6= i(l).
Proof. It is easy to show by induction that A1 , . . . , As are free whenever
mixed cumulants vanish. On the other hand, if A1 , . . . , As are free, we note
that another easy inductive argument shows that $\kappa_n(a_1, \ldots, a_n) = 0$ whenever $\varphi(a_i) = 0$ for all $i = 1, \ldots, n$, $a_j \in A_{i(j)}$ and $i(j) \neq i(j+1)$ for all $j = 1, \ldots, n-1$. The difficult part of the proof is to weaken this condition to the
one needed in the theorem. To do so, one shows first that κn (a1 , . . . , an ) = 0
if 1 ∈ {a1 , . . . , an } and n ≥ 2. But then we get that
κn (a1 , . . . , an ) = κn (a1 − ϕ(a1 )1, . . . , an − ϕ(an )1)
and hence we can get rid of the condition ϕ(ai ) = 0 for all i = 1, . . . , n.
Therefore κn (a1 , . . . , an ) = 0 whenever i(j) 6= i(j + 1) for all j = 1, . . . , n − 1.
Let there be k, l ∈ {1, . . . , n} such that i(k) 6= i(l). We multiply neigh-
bors from the same algebra together to get elements A1 , . . . , Am such that
A1 · · · Am = a1 · · · an and Aj , Aj+1 are from different algebras for all j =
1, . . . , m − 1. Hence
κm (A1 , . . . , Am ) = 0
but we also have
$$\kappa_m(A_1, \ldots, A_m) = \sum_{\substack{\pi \in NC(n)\\ \pi \vee \sigma = 1_n}} \kappa_\pi(a_1, \ldots, a_n) = \kappa_n(a_1, \ldots, a_n) + \sum_{\substack{\pi \neq 1_n\\ \pi \vee \sigma = 1_n}} \kappa_\pi(a_1, \ldots, a_n).$$

Now we assume that we know the statement for κl , l < n. Then we have that
κπ (a1 , . . . , an ) 6= 0 only for partitions π that couple elements ai from the same
algebra. But as σ also does so, we then have π ∨ σ 6= 1. 

Applying the product formula once more, we also get the following result.
Theorem 4.8. Let (A, ϕ) be a noncommutative probability space. Elements
b1 , . . . , bs ∈ A are free if and only if, for all n,
κn (bi(1) , . . . , bi(n) ) = 0
whenever there are k, l ∈ {1, . . . , n} such that i(k) 6= i(l).

5. Sums and products of free variables


5.1. Sums. Let (A, ϕ) be a noncommutative probability space and let a, b ∈ A
be free and selfadjoint. How can we describe the distribution of a + b in terms
of the distributions of a and b? Of course, one can calculate the moments of
a + b in terms of moments of a and b, but moments turn out not to be the
adequate tool to deal with sums of free variables, as calculating the moments of
a + b gets increasingly complicated for higher powers of a + b. Using cumulants
is more promising.
Let us denote $\kappa_n^a = \kappa_n(a, a, \ldots, a)$. Then we have
$$\kappa_n^{a+b} = \kappa_n(a+b, a+b, \ldots, a+b) = \kappa_n(a, a, \ldots, a) + \kappa_n(b, b, \ldots, b) + \kappa_n(\text{mixed terms}) = \kappa_n^a + \kappa_n^b,$$
as mixed cumulants vanish by the results of the last section. Hence cumulants
are additive with respect to what we want to call free convolution. However,
at the moment the relation between moments and cumulants is just given on
a combinatorial level by summations over large sets of noncrossing partitions.
In order to be really useful, we need analytic tools for a better understanding
of this relation between moments and cumulants.
Theorem 5.2. We denote the $n$-th moment $\varphi(a^n)$ of $a \in A$ by $m_n$ and we consider the formal power series
$$M(z) = 1 + \sum_{n=1}^\infty m_n z^n \quad \text{(moment series)}, \qquad
C(z) = 1 + \sum_{n=1}^\infty \kappa_n^a z^n \quad \text{(cumulant series)}.$$
Then the moment-cumulant relation
$$m_n = \sum_{\pi \in NC(n)} \kappa_\pi^a$$
is equivalent to
$$M(z) = C(zM(z)).$$
Proof. To simplify notation, we will write $\kappa_\pi$ instead of $\kappa_\pi^a$. The crucial observation is now that we can encode a noncrossing partition by its first block $V = (j_1 = 1, j_2, \ldots, j_s)$ and noncrossing partitions $\pi_1, \ldots, \pi_s$ of the points between consecutive points of $V$; i.e., $\pi_r$ is a noncrossing partition of $i_r := j_{r+1} - j_r - 1$ points (where we put $j_{s+1} := n+1$). This leads to the following rewriting of our moment-cumulant formula:
$$m_n = \sum_{\pi \in NC(n)} \kappa_\pi = \sum_{s=1}^n \sum_{\substack{i_1,\ldots,i_s \geq 0\\ i_1+\cdots+i_s+s=n}} \kappa_s \sum_{\pi_1 \in NC(i_1)} \cdots \sum_{\pi_s \in NC(i_s)} \kappa_{\pi_1} \cdots \kappa_{\pi_s}
= \sum_{s=1}^n \sum_{\substack{i_1,\ldots,i_s \geq 0\\ i_1+\cdots+i_s+s=n}} \kappa_s\, m_{i_1} \cdots m_{i_s}.$$
Hence we have that
\begin{align*}
M(z) = 1 + \sum_{n=1}^\infty m_n z^n
&= 1 + \sum_{n=1}^\infty \sum_{s=1}^n \sum_{\substack{i_1,\ldots,i_s \geq 0\\ i_1+\cdots+i_s+s=n}} \kappa_s z^s\, m_{i_1} z^{i_1} \cdots m_{i_s} z^{i_s}\\
&= 1 + \sum_{s=1}^\infty \kappa_s z^s \sum_{i_1,\ldots,i_s \geq 0} m_{i_1} z^{i_1} \cdots m_{i_s} z^{i_s}\\
&= 1 + \sum_{s=1}^\infty \kappa_s z^s M(z)^s\\
&= C(zM(z)). \qquad \square
\end{align*}

Remark 5.3. Classical cumulants $(c_n)$ are defined by the moment-cumulant formula
$$m_n = \sum_{\pi \in P(n)} c_\pi.$$
In terms of
$$A(z) = 1 + \sum_{n=1}^\infty \frac{m_n}{n!} z^n \quad \text{and} \quad B(z) = 1 + \sum_{n=1}^\infty \frac{c_n}{n!} z^n,$$
this is equivalent to
$$B(z) = \log A(z).$$
Thus classical cumulants are essentially the coefficients of the logarithm of the Fourier transform (or characteristic function) of the considered random variable.
To be able to use analytic methods, it is useful to rewrite $M(z)$ and $C(z)$ in terms of the Cauchy transform
$$G(z) = \varphi\Big(\frac{1}{z-a}\Big) = \sum_{n=0}^\infty \frac{\varphi(a^n)}{z^{n+1}} = \frac{M(\frac{1}{z})}{z}$$
and Voiculescu's R-transform
$$R(z) = \sum_{n=0}^\infty \kappa_{n+1} z^n = \frac{C(z) - 1}{z}.$$
Then, as
$$M(z) = \frac{G(\frac{1}{z})}{z} \quad \text{and} \quad C(z) = zR(z) + 1,$$
the relation $M(z) = C(zM(z))$ can be rewritten as
$$\frac{G(\frac{1}{z})}{z} = zM(z)R(zM(z)) + 1 = G\Big(\frac{1}{z}\Big) R\Big(G\Big(\frac{1}{z}\Big)\Big) + 1.$$
Replacing $z$ by $\frac{1}{z}$ leads to
$$zG(z) = G(z)R(G(z)) + 1$$
and hence
$$R(G(z)) + \frac{1}{G(z)} = z,$$
i.e. $R(z) + \frac{1}{z}$ and $G(z)$ are inverses under composition. Thus,
$$G\Big(R(z) + \frac{1}{z}\Big) = z$$
also holds.
The advantage of $G(z)$ over $M(z)$ is that
$$G(z) = \varphi\Big(\frac{1}{z-a}\Big) = \int \frac{1}{z-t}\, d\mu_a(t)$$
defines an analytic function $G : \mathbb{C}^+ \to \mathbb{C}^-$ and we can recover $\mu = \mu_a$ from $G$ by the Stieltjes inversion formula
$$d\mu(t) = -\frac{1}{\pi} \lim_{\varepsilon\to 0} \operatorname{Im} G(t + i\varepsilon)\, dt.$$
Example 5.4. We consider the semicircular distribution $\mu_s$ which is characterized by the moments being given by the Catalan numbers (counting noncrossing pair-partitions) or equivalently
$$\kappa_n = \begin{cases} 0, & \text{if } n \neq 2,\\ 1, & \text{if } n = 2. \end{cases}$$
The characterization in terms of cumulants can directly be derived from the following:
$$\# NC_2(m) = \varphi(s^m) = \sum_{\pi \in NC(m)} \kappa_\pi^s = \sum_{\pi \in NC(m)} \prod_{V \in \pi} \kappa_{|V|}(s, \ldots, s).$$
Suppose we only know the characterization by cumulants. What is the measure associated to the distribution of this element? We see that $R(z) = \kappa_2 z = z$ and
$$z = R(G(z)) + \frac{1}{G(z)} = G(z) + \frac{1}{G(z)}.$$
This implies $G(z)^2 + 1 = zG(z)$ and this is solved by
$$G(z) = \frac{z \pm \sqrt{z^2 - 4}}{2}.$$
As $G(z) \sim 1/z$ for $z \to \infty$, we have
$$G(z) = \frac{z - \sqrt{z^2 - 4}}{2}.$$
The Stieltjes inversion formula finally shows that
$$d\mu_s(t) = -\frac{1}{\pi} \lim_{\varepsilon\to 0} \operatorname{Im}\Big( \frac{t + i\varepsilon - \sqrt{(t+i\varepsilon)^2 - 4}}{2} \Big)\, dt
= \begin{cases} \frac{1}{2\pi}\sqrt{4 - t^2}\, dt, & \text{if } t \in [-2, 2],\\ 0, & \text{otherwise}, \end{cases}$$
which indeed describes the semicircle on the interval $[-2, 2]$.
Altogether, we have found an analytic way to calculate the free convolution $\mu = \mu_1 \boxplus \mu_2$ of two distributions $\mu_1$ and $\mu_2$:
(i) First calculate their Cauchy and R-transforms $G_{\mu_1}, G_{\mu_2}$ and $R_{\mu_1}, R_{\mu_2}$.
(ii) As the cumulants of free variables are additive, we have $R_\mu = R_{\mu_1} + R_{\mu_2}$.
(iii) Compute $G_\mu$ from $R_\mu$ using $G_\mu(R_\mu(z) + \frac{1}{z}) = z$.
(iv) Use the Stieltjes inversion formula to get $\mu$ from $G_\mu$.
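As a small numerical sketch of this recipe (our own illustration, not from the text), consider the free convolution of two standard semicircular distributions: here $R_{\mu_1}(z) = R_{\mu_2}(z) = z$, so $R_\mu(z) = 2z$ and step (iii) reduces to the quadratic equation $2G^2 - wG + 1 = 0$; the expected result is the semicircular distribution of variance 2. The snippet below carries out steps (iii) and (iv) with a small $\varepsilon$ in the Stieltjes inversion.
\begin{verbatim}
import numpy as np

def cauchy_transform_sum(w):
    # Step (iii): G solves R(G) + 1/G = w with R(z) = 2z, i.e. 2 G^2 - w G + 1 = 0;
    # pick the root in the lower half-plane, since G maps C^+ into C^-.
    roots = np.roots([2.0, -w, 1.0])
    return roots[np.argmin(roots.imag)]

# Step (iv): Stieltjes inversion with a small epsilon.
eps = 1e-6
for t in np.linspace(-3.0, 3.0, 13):
    density = -cauchy_transform_sum(t + 1j * eps).imag / np.pi
    exact = np.sqrt(max(8.0 - t * t, 0.0)) / (4.0 * np.pi)  # semicircle of variance 2
    print(f"t = {t:+.2f}   recovered {density:.4f}   exact {exact:.4f}")
\end{verbatim}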

Exercise 5.5.
(1) Calculate $(\frac{1}{2}\delta_{-1} + \frac{1}{2}\delta_1)^{\boxplus n}$ for $n = 2, 3, \ldots$.
(2) Prove: If $a, b$ are free, then $G_{a+b}(z) = G_a[z - R_b(G_{a+b}(z))]$.
5.6. Products. Let (A, ϕ) be a noncommutative probability space and let
a, b ∈ A be free. As in the sum case, we want to find a way of recovering
µab = µa ⊠ µb from µa and µb . We note that this is not an operation on real
probability measures as ab is in general not selfadjoint, even if a and b are.
However, if b ≥ 0, then b1/2 ab1/2 is selfadjoint and has the same moments as
ab (since ϕ is tracial on the algebra generated by a and b) and thus, we also
have
µa ⊠ µb = µb1/2 ab1/2 .
Calculating the moments directly, again, turns out to be rather complicated,
so we need other methods to determine µa ⊠ µb .
Let $\sigma$ denote the pair-partition $\{\{1, 2\}, \{3, 4\}, \ldots, \{2n-1, 2n\}\}$ for $n \in \mathbb{N}$. Then, for $\{a_1, \ldots, a_n\}$, $\{b_1, \ldots, b_n\}$ free, we have, by Theorem 4.5,
$$\kappa_n(a_1 b_1, a_2 b_2, \ldots, a_n b_n) = \sum_{\substack{\pi \in NC(2n)\\ \pi \vee \sigma = 1_{2n}}} \kappa_\pi(a_1, b_1, a_2, b_2, \ldots, a_n, b_n),$$
and we can decompose $\pi$ into $\pi = \pi_1 \cup \pi_2$, where
$$\pi_1 \in NC(\text{odd}) := NC(1, 3, 5, \ldots, 2n-1), \qquad \pi_2 \in NC(\text{even}) := NC(2, 4, 6, \ldots, 2n).$$
Hence we can write the expression above as
$$\sum_{\substack{\pi_1 \in NC(\text{odd}),\ \pi_2 \in NC(\text{even})\\ \pi_1 \cup \pi_2 \in NC(2n)\\ (\pi_1 \cup \pi_2) \vee \sigma = 1_{2n}}} \kappa_{\pi_1}(a_1, a_2, \ldots, a_n)\, \kappa_{\pi_2}(b_1, b_2, \ldots, b_n).$$
Given $\pi_1 \in NC(\text{odd})$, there exists a unique $\pi_2 \in NC(\text{even})$ that fulfills the summing conditions above. This $\pi_2$ is called the Kreweras complement of $\pi_1$ and we write $\pi_2 = K(\pi_1)$. (See Example 6.3 for an example.)
The induced map $K : NC(n) \to NC(n)$ is an anti-isomorphism in the sense that if $\sigma \leq \pi$, then $K(\sigma) \geq K(\pi)$. Moreover, $K(\pi)$ is the maximal $\sigma$ such that $\pi \in NC(\text{odd})$, $\sigma \in NC(\text{even})$ and $\pi \cup \sigma \in NC(2n)$.
Theorem 5.7. Let $(A, \varphi)$ be a noncommutative probability space and let $\{a_1, \ldots, a_n\}$, $\{b_1, \ldots, b_n\}$ be free. Then we have
$$\kappa_n(a_1 b_1, a_2 b_2, \ldots, a_n b_n) = \sum_{\pi \in NC(n)} \kappa_\pi(a_1, \ldots, a_n) \cdot \kappa_{K(\pi)}(b_1, \ldots, b_n)$$
and
$$\varphi(a_1 b_1 a_2 b_2 \cdots a_n b_n) = \sum_{\pi \in NC(n)} \kappa_\pi(a_1, \ldots, a_n) \cdot \varphi_{K(\pi)}(b_1, \ldots, b_n).$$
Here, $\varphi_{K(\pi)}$ is defined by factorization into blocks, similar to the definition of cumulants.

Translating this into power series gives Voiculescu’s description via the S-
transform.
Theorem 5.8. Let $(A, \varphi)$ be a noncommutative probability space and $a \in A$. We denote the moment series of $a$ by $M_a(z)$ and the S-transform of $a$ by
$$S_a(z) = \frac{1+z}{z}\, M_a^{\langle -1 \rangle}(z),$$
where $M_a^{\langle -1 \rangle}$ denotes the inverse with respect to composition. Then, if $b, c \in A$ are free, we have
$$S_{bc}(z) = S_b(z) S_c(z).$$

6. Asymptotic freeness of random matrices


Finally, we want to come back to random matrices and find freeness itself
(at least asymptotically) making its appearance there.
Definition 6.1. Two sequences of random matrices $(A_N)_{N\in\mathbb{N}}$ and $(B_N)_{N\in\mathbb{N}}$ are called asymptotically free if
• $A_N, B_N \xrightarrow{\text{dist.}} a, b$, for some $a, b \in A$, where $(A, \varphi)$ is a noncommutative probability space; and
• $a, b$ are free in $(A, \varphi)$.
Recall that convergence in distribution means
$$\lim_{N\to\infty} \varphi_N(p(A_N, B_N)) = \varphi(p(a, b))$$
for any polynomial $p \in \mathbb{C}\langle X_1, X_2 \rangle$ in two noncommuting variables and that $\varphi_N = E \otimes \operatorname{tr}$ denotes here the averaged trace on our $N \times N$-random matrices.
As an appetizer let us first consider Voiculescu's generalization of Wigner's theorem to the case of several independent GUE random matrices $A^{(1)} = (a_{ij}^{(1)})_{i,j=1}^N, \ldots, A^{(n)} = (a_{ij}^{(n)})_{i,j=1}^N$. Then
$$\big\{ a_{ij}^{(p)} \mid i, j = 1, \ldots, N;\ p = 1, \ldots, n \big\}$$
is a Gaussian family with
$$E[a_{ij}^{(p)} a_{kl}^{(r)}] = \frac{1}{N}\, \delta_{il}\delta_{jk}\delta_{pr}.$$
The same calculations as in Section 2 yield the mixed moments
$$\varphi(A^{(p(1))} \cdots A^{(p(m))}) = \sum_{\pi \in P_2^{(p)}(m)} N^{\#(\gamma\pi) - 1 - \frac{m}{2}},$$
where $P_2^{(p)}(m)$ denotes the set of pair-partitions $\pi$ that respect the "coloring" of the matrices, i.e. those $\pi \in P_2(m)$ for which $(r, s) \in \pi$ implies $p(r) = p(s)$. Thus
$$\lim_{N\to\infty} \varphi(A^{(p(1))} \cdots A^{(p(m))}) = \# NC_2^{(p)}(m).$$

These limiting mixed moments of $A^{(1)}, \ldots, A^{(n)}$ are exactly those of a semicircular family $s_1, \ldots, s_n$ with diagonal covariance, i.e.
$$\varphi(s_{i(1)} \cdots s_{i(m)}) = \sum_{\pi \in NC_2(m)} \prod_{(r,p)\in\pi} \varphi(s_{i(r)} s_{i(p)})$$
and $\varphi(s_i s_j) = \delta_{ij}$ for $i, j = 1, \ldots, n$. Note that we can also rewrite this in terms of cumulants as $\kappa_n(s_{i(1)}, \ldots, s_{i(n)}) = 0$ for $n \neq 2$ and
$$\kappa_2(s_i, s_j) = \begin{cases} 1, & \text{if } i = j,\\ 0, & \text{if } i \neq j. \end{cases}$$
However, this shows that mixed cumulants in $s_1, \ldots, s_n$ vanish and thus we have the following theorem.

Theorem 6.2. Elements of a semicircular family with diagonal covariance are free. Therefore, independent GUE random matrices are asymptotically free.
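A quick numerical illustration (our own, with arbitrary matrix sizes): for two independent GUE matrices the mixed moment $\varphi(ABAB)$ should approach the value $0$ of a free semicircular pair, while $\varphi(A^2B^2) \to 1$.
\begin{verbatim}
import numpy as np

def gue(N, rng):
    G = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    return (G + G.conj().T) / np.sqrt(2 * N)

rng = np.random.default_rng(2)
for N in (100, 400, 1600):
    A, B = gue(N, rng), gue(N, rng)
    tr = lambda X: np.trace(X).real / N
    # phi(s1 s2 s1 s2) = 0 and phi(s1^2 s2^2) = 1 for free standard semicirculars
    print(N, round(tr(A @ B @ A @ B), 3), round(tr(A @ A @ B @ B), 3))
\end{verbatim}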

Now we want to go further and find asymptotic freeness not just for semicircular distributions. For this we consider a GUE of $N \times N$-random matrices $(A_N)_{N\in\mathbb{N}}$ and a sequence of deterministic matrices $(D_N)_{N\in\mathbb{N}}$ (i.e. $D_N \in M_N(\mathbb{C})$) such that
$$\lim_{N\to\infty} \operatorname{tr}(D_N^m)$$
exists for all $m \in \mathbb{N}$, i.e. $D_N \xrightarrow{\text{dist.}} d$, where
$$\varphi(d^m) = \lim_{N\to\infty} \operatorname{tr}(D_N^m).$$
We know that $A_N \xrightarrow{\text{dist.}} s$ where $s$ is semicircular and $D_N \xrightarrow{\text{dist.}} d$, but what about the limit of the joint distribution of $A_N$ and $D_N$? In other words, can we calculate mixed moments in $A_N, D_N$?
We recall that for $A_N = (a_{ij})_{i,j=1}^N$, we have the Wick formula
$$E[a_{i(1)j(1)} \cdots a_{i(m)j(m)}] = \sum_{\pi \in P_2(m)} \prod_{(r,s)\in\pi} E[a_{i(r)j(r)} a_{i(s)j(s)}],$$
where
$$E[a_{ij} a_{kl}] = \frac{1}{N}\, \delta_{il}\delta_{jk}.$$
Now, we consider a deterministic $N \times N$-matrix $D = (d_{ij})_{i,j=1}^N$ and write
$$D^{q(k)} = (d_{ij}^{(k)})_{i,j=1}^N$$
for $q(k) \in \mathbb{N}$, $k = 1, \ldots, m$.

Then, for a general mixed moment we have
\begin{align*}
\varphi(AD^{q(1)} \cdots AD^{q(m)})
&= \frac{1}{N} \sum_{\substack{i(1),\ldots,i(m)=1\\ j(1),\ldots,j(m)=1}}^N E[a_{i(1)j(1)} d_{j(1)i(2)}^{(1)} a_{i(2)j(2)} \cdots d_{j(m)i(1)}^{(m)}]\\
&= \frac{1}{N} \sum_{\substack{i(1),\ldots,i(m)=1\\ j(1),\ldots,j(m)=1}}^N E[a_{i(1)j(1)} \cdots a_{i(m)j(m)}]\, d_{j(1)i(2)}^{(1)} \cdots d_{j(m)i(1)}^{(m)}\\
&= \frac{1}{N^{1+\frac{m}{2}}} \sum_{\pi \in P_2(m)} \sum_{\substack{i(1),\ldots,i(m)=1\\ j(1),\ldots,j(m)=1}}^N \prod_{r=1}^m \delta_{i(r)j(\pi(r))}\, d_{j(1)i(2)}^{(1)} \cdots d_{j(m)i(1)}^{(m)}\\
&= \frac{1}{N^{1+\frac{m}{2}}} \sum_{\pi \in P_2(m)} \sum_{j(1),\ldots,j(m)=1}^N d_{j(1)j(\pi\gamma(1))}^{(1)} \cdots d_{j(m)j(\pi\gamma(m))}^{(m)},
\end{align*}
where we denote as before the permutation $(1, 2, 3, \ldots, m-1, m) \in S_m$ by $\gamma$. Let us denote for a permutation $\sigma \in S_m$ the product of traces along the cycles of $\sigma$ by
$$\operatorname{tr}_\sigma(D^{q(1)}, \ldots, D^{q(m)}) = \prod_{c=(d_1,\ldots,d_l)\in\sigma} \operatorname{tr}(D^{q(d_1)} \cdots D^{q(d_l)}).$$
Then the above can be written as
$$\varphi(AD^{q(1)} \cdots AD^{q(m)}) = \sum_{\pi \in P_2(m)} \operatorname{tr}_{\pi\gamma}(D^{q(1)}, \ldots, D^{q(m)}) \cdot N^{\#(\pi\gamma)-1-\frac{m}{2}},$$
which converges for $N \to \infty$ to
$$\sum_{\pi \in NC_2(m)} \varphi_{\pi\gamma}(d^{q(1)}, \ldots, d^{q(m)}).$$

We recall that, if $s, d$ are free, it holds that
$$\varphi(sd^{q(1)} \cdots sd^{q(m)}) = \sum_{\pi \in NC(m)} \kappa_\pi(s, \ldots, s)\, \varphi_{K(\pi)}(d^{q(1)}, \ldots, d^{q(m)}) = \sum_{\pi \in NC_2(m)} \varphi_{K(\pi)}(d^{q(1)}, \ldots, d^{q(m)}),$$
as
$$\kappa_\pi(s, \ldots, s) = \begin{cases} 1, & \text{if } \pi \in NC_2(m),\\ 0, & \text{otherwise}. \end{cases}$$
Hence the asymptotic value of $\varphi(AD^{q(1)} \cdots AD^{q(m)})$ is given by the moment $\varphi(sd^{q(1)} \cdots sd^{q(m)})$, provided that $K(\pi)$ and $\pi\gamma$ coincide. This is indeed the case for general $\pi \in NC_2(m)$. We will check this in the following example.

Example 6.3. Consider the noncrossing pairing
$$\pi = \{(1, 2), (3, 6), (4, 5), (7, 8)\}.$$
Then we have
$$\pi\gamma = (1)(2, 6, 8)(3, 5)(4)(7).$$
That this agrees with $K(\pi)$ can be seen from the graphical representation on the interlaced points $1\,\bar{1}\,2\,\bar{2}\,3\,\bar{3}\,4\,\bar{4}\,5\,\bar{5}\,6\,\bar{6}\,7\,\bar{7}\,8\,\bar{8}$ (diagram omitted), where $\pi$ is drawn on the unbarred and $K(\pi)$ on the barred points.
Since $s$ and $d$ are free, we get the asymptotic freeness of $A_N$ and $D_N$.
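The computation of $\pi\gamma$ in Example 6.3 is easy to verify by hand or with a few lines of code (ours), using the convention $(\pi\gamma)(i) = \pi(\gamma(i))$:
\begin{verbatim}
m = 8
pairs = [(1, 2), (3, 6), (4, 5), (7, 8)]
pi = {}
for r, s in pairs:
    pi[r], pi[s] = s, r                       # the pairing as a permutation
gamma = {i: i % m + 1 for i in range(1, m + 1)}
pi_gamma = {i: pi[gamma[i]] for i in range(1, m + 1)}   # (pi gamma)(i) = pi(gamma(i))

def cycles(perm):
    seen, result = set(), []
    for start in sorted(perm):
        if start in seen:
            continue
        cycle, j = [], start
        while j not in seen:
            seen.add(j)
            cycle.append(j)
            j = perm[j]
        result.append(tuple(cycle))
    return result

print(cycles(pi_gamma))   # [(1,), (2, 6, 8), (3, 5), (4,), (7,)]
\end{verbatim}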


The above calculations can be generalized to several independent GUE and
deterministic matrices, resulting in the following theorem.
Theorem 6.4. If $A_N^{(1)}, \ldots, A_N^{(p)}$ are $p$ independent $N \times N$-random GUE matrices and $D_N^{(1)}, \ldots, D_N^{(q)}$ are $q$ deterministic $N \times N$-matrices such that
$$D_N^{(1)}, \ldots, D_N^{(q)} \xrightarrow{\text{dist.}} d_1, \ldots, d_q$$
for some $d_1, \ldots, d_q \in (A, \varphi)$, it holds that
$$A_N^{(1)}, \ldots, A_N^{(p)}, D_N^{(1)}, \ldots, D_N^{(q)} \xrightarrow{\text{dist.}} s_1, \ldots, s_p, d_1, \ldots, d_q,$$
where $s_1, \ldots, s_p$ are semicircular and $s_1, \ldots, s_p, \{d_1, \ldots, d_q\}$ are free.
Finally, we also want to mention a version of these asymptotic freeness
results for Haar unitary random matrices: Let U(N ) denote the set of unitary
N × N -matrices. As this is a compact group, we can equip U(N ) with its Haar
probability measure leading to the notion of Haar unitary random matrices.
Definition 6.5. (1) Random matrices distributed according to the Haar mea-
sure on U(N ) are called Haar unitary random matrices.
(2) Let (A, ϕ) be a noncommutative probability space. An element u ∈ A
is called a Haar unitary if
• u is unitary,
• ϕ(uk ) = δ0k for all k ∈ Z.
Theorem 6.6. Let $U_N^{(1)}, \ldots, U_N^{(p)}$ be $p$ independent Haar unitary $N \times N$-random matrices and let $D_N^{(1)}, \ldots, D_N^{(q)}$ be $q$ deterministic $N \times N$-matrices such that
$$D_N^{(1)}, \ldots, D_N^{(q)} \xrightarrow{\text{dist.}} d_1, \ldots, d_q$$
for some $d_1, \ldots, d_q \in (A, \varphi)$. Then
$$U_N^{(1)}, (U_N^{(1)})^*, \ldots, U_N^{(p)}, (U_N^{(p)})^*, D_N^{(1)}, \ldots, D_N^{(q)} \xrightarrow{\text{dist.}} u_1, u_1^*, \ldots, u_p, u_p^*, d_1, \ldots, d_q,$$
where $u_1, \ldots, u_p$ are Haar unitaries and $\{u_1, u_1^*\}, \ldots, \{u_p, u_p^*\}, \{d_1, \ldots, d_q\}$ are free.

Remark 6.7. Note that, if $u$ is a Haar unitary and $\{u, u^*\}$ is free from $\{a, b\}$, then $a$ and $ubu^*$ are free. Thus, if $(A_N)_{N\in\mathbb{N}}$ and $(B_N)_{N\in\mathbb{N}}$ are two sequences of deterministic matrices with limit distributions and $(U_N)_{N\in\mathbb{N}}$ is a sequence of Haar unitary random matrices, then $(A_N)_{N\in\mathbb{N}}$ and $(U_N B_N U_N^*)_{N\in\mathbb{N}}$ are asymptotically free.
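The following sketch (our own illustration; the deterministic matrices are an arbitrary choice) tests this numerically: a Haar unitary is generated via a phase-corrected QR decomposition of a Ginibre matrix, and the alternating moment $\operatorname{tr}(A\,UBU^*\,A\,UBU^*)$ of the centered matrices, which should vanish in the limit, is compared with $\operatorname{tr}(A^2)\operatorname{tr}(B^2)$, which does not.
\begin{verbatim}
import numpy as np

def haar_unitary(N, rng):
    Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))   # phase correction gives Haar measure

rng = np.random.default_rng(3)
for N in (200, 800):
    A = np.diag(np.linspace(-1, 1, N))             # deterministic, trace zero
    B = np.diag(np.sign(np.linspace(-1, 1, N)))    # deterministic, trace zero
    U = haar_unitary(N, rng)
    B_rot = U @ B @ U.conj().T
    tr = lambda X: np.trace(X).real / N
    print(N, round(tr(A @ B_rot @ A @ B_rot), 4), round(tr(A @ A) * tr(B @ B), 4))
\end{verbatim}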

7. Comments
Random matrices have been studied in statistics and in physics since the
influential papers of Wishart [13] and Wigner [12], respectively. Random ma-
trices appear nowadays in different fields of mathematics and physics (such as
combinatorics, probability theory, statistics, operator theory, number theory,
quantum chaos, quantum field theory, etc.) or applied fields (such as electrical
engineering). Some idea of the diversity of random matrix appearances can be
gotten by looking on the collection of surveys in [1].
The genus expansion for Gaussian random matrices is a folklore result in
physics; for a mathematical exposition see, for example, [14].
The notion of “asymptotic freeness” was introduced by Voiculescu in [10].
Our presentation of the asymptotic freeness results for Gaussian random ma-
trices follows essentially the ideas of Voiculescu’s original proofs in [10, 11];
however, our presentation is more streamlined by using the Wick formula and
the genus expansion to make contact with the combinatorial description of
freeness.
The combinatorial approach to free probability theory originated in my
work [5] on free limit theorems. (At this time I was not aware of the work of
Kreweras on noncrossing partitions and addressed the latter, not very imagina-
tively, as “admissible” partitions.) Inspired by the work of Rota [4] around the
combinatorial structure of classical probability theory, featuring in particular
multiplicative functions on the lattice of all partitions, I developed a few years
later in [6] the full combinatorial description of freeness, resting on the notion
of multiplicative functions on the lattice of noncrossing partitions and “free cu-
mulants”. Andu Nica showed a bit later in [2] how this combinatorial approach
connects in general to Voiculescu’s operator-theoretic approach in terms of cre-
ation and annihilation operators on the full Fock space. I teamed then up with
Nica, pushing the combinatorial approach much further. Whereas in the be-
ginning we were mainly driven by the desire to understand Voiculescu’s work
by giving new and “simpler” (at least for the combinatorially inclined) proofs
of existing results of Voiculescu (like his R- and S-transform descriptions in
[8, 9] for the free additive and multiplicative convolutions, respectively), later
we could also initiate new directions in free probability. Prominent examples
here are the determination of the distribution of the free commutator, the
introduction of R-diagonal elements or the proof of the existence of the free convolution power semigroup $(\mu^{\boxplus t})_{t\geq 1}$. A good source for these developments
and the combinatorial approach in general is our monograph [3].

References
[1] G. Akemann, J. Baik, and P. Di Francesco, The Oxford handbook of random matrix
theory, Oxford Univ. Press, Oxford, 2011. MR2920518
[2] A. Nica, R-transforms of free joint distributions and non-crossing partitions, J. Funct.
Anal. 135 (1996), no. 2, 271–296. MR1370605
[3] A. Nica and R. Speicher, Lectures on the combinatorics of free probability, London Math.
Soc. Lecture Note Ser., 335, Cambridge Univ. Press, Cambridge, 2006. MR2266879
[4] G.-C. Rota, Gian-Carlo Rota on combinatorics, Contemporary Mathematicians, Birk-
häuser Boston, Boston, MA, 1995. MR1392961
[5] R. Speicher, A new example of “independence” and “white noise”, Probab. Theory
Related Fields 84 (1990), no. 2, 141–159. MR1030725
[6] R. Speicher, Multiplicative functions on the lattice of noncrossing partitions and free
convolution, Math. Ann. 298 (1994), no. 4, 611–628. MR1268597
[7] D. Voiculescu, Symmetries of some reduced free product C ∗ -algebras, in Operator alge-
bras and their connections with topology and ergodic theory (Buşteni, 1983), 556–588,
Lecture Notes in Math., 1132, Springer, Berlin, 1985. MR0799593
[8] D. Voiculescu, Addition of certain noncommuting random variables, J. Funct. Anal. 66
(1986), no. 3, 323–346. MR0839105
[9] D. Voiculescu, Multiplication of certain noncommuting random variables, J. Operator
Theory 18 (1987), no. 2, 223–235. MR0915507
[10] D. Voiculescu, Limit laws for random matrices and free products, Invent. Math. 104
(1991), no. 1, 201–220. MR1094052
[11] D. Voiculescu, A strengthened asymptotic freeness result for random matrices with ap-
plications to free entropy, Internat. Math. Res. Notices 1998, no. 1, 41–63. MR1601878
[12] E. P. Wigner, Characteristic vectors of bordered matrices with infinite dimensions, Ann.
of Math. (2) 62 (1955), no. 3, 548–564. MR77805
[13] J. Wishart, The generalised product moment distribution in samples from a normal
multivariate population, Biometrika 20A (1928), no. 12, 32–52.
[14] A. Zvonkin, Matrix integrals and map enumeration; an accessible introduction, Combi-
natorics and physics (Marseilles, 1995), Math. Comput. Modelling 26 (1997), no. 8-10,
281–304. MR1492512
Free monotone transport

Dimitri Shlyakhtenko

1. Introduction
The subject of optimal transportation [20] is an extremely rich and well-
developed theory.
Since its introduction by Voiculescu in the early 1980s [21, 26], free proba-
bility has established itself as a fertile source of both noncommutative analogs
of classical probability results as well as applications to operator algebras. It
is thus an important question to understand if free probability analogs of clas-
sical transportation results are available. In these notes we describe our joint
work with A. Guionnet [14] in which we were able to describe the beginnings
of such a theory.

2. Classical transportation theory


One of the earliest questions in what is now called transportation theory is
the following question due to Monge, studied in his memoir “Sur la théorie des
déblais et des remblais” (Mém. de l’Acad. de Paris, 1781): how to transport
soil from the ground to a given configuration in the “most optimal way”.
The mathematical formulation goes as follows. Let µ be a measure sup-
ported on the interval [a, b] (which describes the initial distribution of the soil)
and let ν be a measure supported on [c, d] representing the desired distribution
of the soil. The procedure of moving the soil to the desired configuration is
then given by a function f : [a, b] → [c, d] which describes how much dirt from
point x is moved to the point f (x).
One is then interested in minimizing the "cost"
$$\int |x - f(x)|^2\, d\mu(x)$$
subject to the condition that we end up with the desired configuration of dirt, i.e., that the push-forward is
$$f_*\mu = \nu.$$

Research support by NSF grant DMS-1161411.



(There are various choices for what is meant by the cost of $f$. In fact, Monge originally considered the $L^1$-cost, $\int |x - f(x)|\, d\mu(x)$.)
More generally, let µ and ν be probability measures on Rn . One of the
first theorems from measure theory states that under mild assumptions on the
measures, there is always a function f : Rn → Rn such that the push-forward
of µ via the function f is exactly the measure ν (i.e. f∗ µ = ν).
Theorem 2.1. If µ is a nonatomic probability measure on Rn , then for any
probability measure ν there exists a function f : Rn → Rn such that f∗ µ = ν.
Proof. The "high-tech proof" goes as follows. The algebra $A = L^\infty(\mathbb{R}^n, \mu)$ may be embedded into $B(L^2(\mathbb{R}^n, \mu))$ as multiplication operators. It is a weakly closed $C^*$-subalgebra, hence it is an abelian $W^*$-algebra. Furthermore, it is diffuse (all minimal projections are zero) because $\mu$ has no atoms. The measure $\mu$ gives one a state (even a trace) on $A$. From the general theory of $W^*$-algebras it follows that $A$ is universal in the class of separable abelian von Neumann algebras: any other separable abelian von Neumann algebra $B$ with a trace $\nu$ admits a trace-preserving embedding into $A$. But such an embedding exactly corresponds to a map $f$ from $\mathbb{R}^n$ to $\mathbb{R}^n$ so that $f_*\mu = \nu$.
The "low-tech proof" (which of course underlies the high-tech one) proceeds by cutting $\mathbb{R}^n$ into subsets of measure $2^{-n}$, thus giving rise to an isomorphism with a Cantor set. $\square$
In fact, there are a lot of transport maps—given f with f∗ µ = ν we can
replace f by f ◦ g for any µ-preserving g, but we want to have an optimal
solution.
Theorem 2.2 (Brenier 1991 [4]). Assume that $\mu$ and $\nu$ are probability measures on $\mathbb{R}^n$ (with compact support) and Lebesgue absolutely continuous.¹ Then there exists a unique function $f : \mathbb{R}^n \to \mathbb{R}^n$ such that $f_*\mu = \nu$ and $f$ is the gradient of a convex function $g$. Moreover, $f$ solves the Monge problem (i.e. $f$ is an optimal transport):
$$\|f - \operatorname{id}\|_{L^2(\mu)} = \inf_{\hat{f}_*\mu = \nu} \|\hat{f} - \operatorname{id}\|_{L^2(\mu)}.$$
Recall that $\|\hat{f} - \operatorname{id}\|_{L^2(\mu)}$ is given by
$$\|\hat{f} - \operatorname{id}\|_{L^2(\mu)} = \Big( \int \|\hat{f}(x) - x\|_{\mathbb{R}^n}^2\, d\mu(x) \Big)^{\frac{1}{2}}.$$
An approach to the proof of Brenier's theorem comes with the theory of Kantorovich duality (which is a kind of convex optimization problem, see [4, 20]). It is convenient to introduce the following definition.
Definition 2.3. We call f : Rn → Rn monotone if the Jacobian satisfies
Jac f ≥ 0 (i.e., it is positive semi-definite as a matrix).
¹ A weaker condition is that they do not give weight to small sets, i.e., sets of dimension less than or equal to $n-1$.

This definition is motivated by the one-dimensional case, where a monotone function is characterized by positivity of its derivative. Unlike in the one-dimensional case, the composition of monotone functions is not necessarily monotone again (this is related to the fact that the product of positive matrices is not necessarily positive again).
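In dimension one the optimal map of Theorem 2.2 is simply the increasing rearrangement $f = G_\nu^{-1} \circ G_\mu$, where $G_\mu, G_\nu$ denote the cumulative distribution functions; in particular it is monotone in the sense of Definition 2.3. Here is a small numerical sketch (our own illustration, pushing a standard Gaussian onto the uniform distribution on $[0,1]$):
\begin{verbatim}
import numpy as np
from scipy import stats

# Monotone transport in dimension one: f = G_nu^{-1} o G_mu pushes mu to nu.
mu_cdf = stats.norm.cdf            # mu = standard Gaussian
nu_quantile = stats.uniform.ppf    # nu = uniform on [0, 1]

def f(x):
    return nu_quantile(mu_cdf(x))

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)                 # samples from mu
y = f(x)                                         # distributed (approximately) like nu
print("mean, variance of f(x):", y.mean().round(3), y.var().round(3))   # ~0.5, ~1/12
grid = np.linspace(-3, 3, 101)
print("f is nondecreasing:", bool(np.all(np.diff(f(grid)) >= 0)))
\end{verbatim}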

3. Translation to the free case


We now look for free probability analogs of this transport question. Note that instead of looking at the measure space $(\mathbb{R}^n, \mu)$ we can equivalently study the algebra $\mathbb{R}[x_1, \ldots, x_n]$ of polynomials in $n$ variables endowed with the linear expectation $E$ defined by $p \mapsto \int p(x_1, \ldots, x_n)\, d\mu(x_1, \ldots, x_n)$. This leads us to consider such objects as $L^2(\mathbb{R}^n, \mu)$ or the algebra $C(\operatorname{supp}\mu)$ of continuous functions on the support of $\mu$.
In the noncommutative case one considers the ∗-algebra A = C[X1 , . . . , Xn ]
of polynomials in n noncommuting selfadjoint variables (so Xj∗ = Xj by def-
inition), together with a linear functional τ satisfying positivity and growth
conditions: τ (p∗ p) ≥ 0 for all p ∈ A and |τ (p)| ≤ Rdeg(p) for any monomial p
and some fixed R.
Using the GNS construction, we define the Hilbert space L2 (τ ) as the clo-
1
sure of A with respect to the norm kpk = τ (p∗ p) 2 . If moreover τ is a trace (i.e.
τ (xy) = τ (yx)), we obtain a left and a right action of A on L2 (τ ). We denote
by C ∗ (τ ) the C ∗ -algebra generated by the image of A under the left represen-
tation. By W ∗ (τ ) we denote the weak closure of C ∗ (τ ). The linear functional
τ can then be extended to these algebras by the formula τ (p) = hp · 1, 1iL2 (τ ) .
As is well known, W ∗ -algebras with specified traces can then be interpreted
as noncommutative measure spaces with a fixed measure.
Let us consider a concrete example. Recall that a noncrossing partition of
{1, . . . , k} is a partition of {1, . . . , k} into blocks B1 , . . . , Bk so that whenever
t < u < v < w and t, v are in the same block and u, w are in the same
block, then all t, u, v, w are in the same block. A partition is called a pairing
if all blocks consist of exactly two elements. For example, (1, 2), (3, 4) is a
noncrossing pairing of {1, 2, 3, 4}, but (1, 3), (2, 4) is not.
If in addition we are given a set of n colors {1, . . . , n}, we say that a coloring
of {1, . . . , k} is a choice of colors i1 , . . . , ik for each element of {1, . . . , k}. A
partition is called compatible with a coloring if each block of the partition
consists of points of the same color.

Example 3.1. Let $S_1, \ldots, S_n$ be a free semicircular system with law $\tau$, i.e. $\tau(p(S_1, \ldots, S_n))$ is given by
$$\tau(X_{i_1} \cdots X_{i_k}) = \#\left\{ \text{noncrossing pairings of } \{1, \ldots, k\} \text{ which are compatible with the coloring } \{i_1, \ldots, i_k\} \right\}.$$
One can check that $\tau$ is both positive and satisfies our growth condition.

Proof (Sketch). Define noncommutative Chebychev polynomials of the second


kind Pr (k1 , . . . , kr ) recursively by the formulas
P0 = 1,
P1 (j) = Xj ,
Pr+1 (j, i1 , . . . , ir ) = Xj Pr (i1 , . . . , ir ) − δj=i1 Pr−1 (i2 , . . . , ir ).
Note that {Pr (i1 , . . . , ir ) | r = 0, 1, 2, . . . , i1 , . . . , ir ∈ {1, . . . , n}} form a linear
basis for the algebra of noncommutative polynomials.
Now let H be a Hilbert space with an orthonormal basis

Pr (i1 , . . . , ir ) | r = 0, 1, 2, . . . , i1 , . . . , ir ∈ {1, . . . , n} .
The maps
Lj : Pr (i1 , . . . , ir ) 7→ Pr+1 (j, i1 , . . . , ir ),
L†j : Pr (i1 , . . . , ir ) 7→ δj=i1 Pr−1 (i2 , . . . , ir )

are adjoints of one another and satisfy L†j Lj = 1 and are thus bounded. Let
τ = h·P0 , P0 i. One easily checks that τ (Q(X1 , . . . , Xn )) = τ (Q(S1 , . . . , Sn ))
for any noncommutative polynomial Q. Indeed, it is sufficient to check this for
Q = Pr (i1 , . . . , ir ). In this case, τ (Q(X1 , . . . , Xn )) = 0 if r 6= 0 by orthogo-
nality. On the other hand, one easily proves recursively that τ (Q(S)) = 0 as
well. Boundedness of Xj follows from Xj Q = (Lj + L†j )Q and boundedness of
Lj , L†j . 

If we think of C ∗ (τ ) and W ∗ (τ ) as replacements for function algebras, we can


think of f in Brenier’s theorem as the n-tuple f = (f1 , . . . , fn ) of measurable
functions. This motivates the following definition.
Definition 3.2. Given normal faithful traces τ0 and τ1 on C[X1 , . . . , Xn ], we
say that f = (f1 , . . . , fn ), fi ∈ W ∗ (τ0 ) is a transportation map from τ0 to τ1 ,
if τ0 (q(f1 , . . . , fn )) = τ1 (q(X1 , . . . , Xn )) for every polynomial q.
Out of a transportation map, we can construct an embedding of W ∗ (τ1 )
into W ∗ (τ0 ) sending Xi ∈ W ∗ (τ1 ) to fi ∈ W ∗ (τ0 ). Since this map is trace
preserving, this also extends to the von Neumann algebraic setting.
Note that the equation τ0 (q(f1 , . . . , fn )) = τ1 (q(X1 , . . . , Xn )) is an analog,
for f∗ µ = ν, of
Z Z
q ◦ f (x)dµ(x) = q(y)dν(y).

Recall that there is a measure µ such that for any measure ν there exists
a function f such that ν = f∗ µ. In particular, L∞ (Rn , µ), where µ is the
Gauss measure, contains all abelian W ∗ -algebras (see Theorem 2.1). Hence,
it is natural to ask whether there exists a faithful trace τ such that W ∗ (τ )
contains all tracial separable W ∗ -algebras. The next theorem states that there
is no noncommutative analog of this.

Theorem 3.3 (Ozawa). There is no separable tracial W ∗ -algebra N such


that N contains all other separable tracial W ∗ -algebras.
We won’t prove Ozawa’s theorem, but instead give an example of a von
Neumann algebra that cannot be embedded into the von Neumann algebra
associated to the semicircle law.
Let Γ be a discrete group (for instance Γ = SL3 (Z), the group of all 3 × 3-
matrices over Z with determinant 1).
Definition 3.4. A discrete group Γ has property (T) of Kazhdan, if there
exists an ε > 0 and a finite subset F ⊆ Γ such that the following holds. If
π : Γ → U(H) is a unitary representation on some Hilbert space H for which a
normalized vector h ∈ H exists such that kπ(γ)h − hk < ε for all γ ∈ F , then
there exists a nonzero vector h0 ∈ H such that π(γ)h0 = h0 for all γ ∈ Γ.
This means, if π almost contains the trivial representation, it actually does
contain it—this is another way of saying that the trivial representation is
isolated.
Property (T) is easy to show for finite groups. The true miracle is that it
holds for some infinite groups—for example, SL3 (Z).
If Γ is a discrete group, we consider its left regular representation λ : Γ →
U(ℓ2 (Γ)) and the reduced C ∗ -algebra Cred

(Γ), which is simply the C ∗ -algebra

generated by λ(Γ). By LΓ = W (λ(Γ)) we denote its von Neumann algebra.
If Γ is ICC (i.e. all conjugacy classes except for the trivial one are infinite),
then we have a unique trace on LΓ.
Theorem 3.5. Let Γ be a nontrivial ICC property (T) group. Then there is no
embedding of LΓ into the W ∗ -algebra generated by a free semicircular system.
This theorem goes back to the work of Connes and Jones on von Neumann
algebras with property (T); their proof relies on the fact that the von Neumann
algebra generated by a free semicircular system has the so-called Haagerup
approximation property, which descends to subalgebras. Thus if an embedding
of LΓ were to exist, it would follow that LΓ (and thus Γ) would have the
Haagerup property. One can then use group theory arguments to prove that
this is impossible if Γ has property (T).
We give a different proof, which relies on Popa’s deformation-rigidity theory
(see [19]). We first use some free probability to construct an “s-malleable
deformation”, and then use Popa’s patching argument (see [18]) to deduce a
contradiction.

Proof. First note that we can view the W ∗ -algebra M1 generated by a free
semicircular system S1 , S3 , . . . , S2n−1 as a W ∗ -subalgebra of
M := W ∗ (S1 , S2 , S3 , . . . , S2n ).
The von Neumann algebra M in turn is the free product of the algebras M1
and $M_2 := W^*(S_2, S_4, \ldots, S_{2n})$. Furthermore, $M_1$ is isomorphic to the free group factor $LF_n$. Thus, we have to show that there is no embedding of $L\Gamma$ into $M_1 \subset M_1 * M_2$. Assume the converse and denote by $\Upsilon$ this embedding.
Consider the map σt given by
σt (S2j ) = cos(t)S2j + sin(t)S2j−1 ,
σt (S2j−1 ) = − sin(t)S2j + cos(t)S2j−1
for j = 1, . . . , n. The map σt can be extended to the algebra generated by
S1 , . . . , S2n . It turns out that it preserves the trace τ . Thus σt can be extended
to the von Neumann algebra W ∗ (S1 , . . . , S2n ). We can similarly show that
there exists a period-two automorphism β of M1 ∗ M2 which sends Sj to Sj
if j is odd and Sj to −Sj if j is even. Moreover, β ◦ σt = σ−t ◦ β. Note that
β fixes elementwise M1 ⊂ M1 ∗ M2 ; in particular, β ◦ Υ = Υ . The pair σt , β
satisfying these properties is an s-malleable deformation.
For any t, we thus obtain a representation ρt of the group Γ on the Hilbert
space L2 (M1 ∗ M2 ) given by
ρt (g)h = σ0 (Υ (g))hσt (Υ (g)∗ ).
Clearly, $\rho_t(g)h \to h$ as $t \to 0$. Property (T) then implies that we may find an $m$ large enough so that if $t = 2^{-m}\pi/2$ then $\rho_t$ fixes a nonzero vector $h$. Thus
for some h ∈ L2 (M1 ∗ M2 ), we have
σ0 (Υ (g))hσt (Υ (g)∗ ) = h for all g ∈ Γ.
Equivalently, viewing h as an unbounded operator affiliated to M1 ∗ M2 ,
σ0 (Υ (g))h = hσt (Υ (g)) for all g ∈ Γ.
Letting h = v|h| be the polar decomposition of h, we obtain a nonzero partial
isometry v ∈ M1 ∗ M2 satisfying
σ0 (Υ (g))v = vσt (Υ (g)) for all g ∈ Γ.
One says that v intertwines Υ = σ0 ◦ Υ and σt ◦ Υ .
Our task will now be to “patch” such partial isometries to find a partial
isometry that intertwines Υ and σπ/2 ◦ Υ . This is made possible by a beautiful
argument due to Popa, which uses the automorphism β.
We claim first that β fixes vv ∗ . To see this, we note that
σ0 (Υ (g))vv ∗ = vσt (Υ (g))v ∗ = v(vσt (Υ (g −1 )))∗
= v(σ0 (Υ (g −1 ))v)∗ = vv ∗ σ0 (Υ (g)).
In particular, it follows that vv ∗ ∈ Υ (Γ)′ ∩ (M1 ∗ M2 ). Since Γ is ICC and is
nontrivial, its von Neumann algebra contains a unitary u with diffuse spectral
measure. It follows that vv ∗ ∈ {u}′ ∩ (M1 ∗ M2 ) for some diffuse unitary u ∈ M1
On the other hand, as an M1 -bimodule, L2 (M1 ∗M2 ) = L2 (M1 )⊕R, where R is
the infinite direct sum of L2 (M1 )⊗ L2 (M1 ); thus as a W ∗ (u)-bimodule, R is an
infinite direct sum of the space of Hilbert–Schmidt operators on L2 (W ∗ (u)).
Thus {u}′ ∩ R = 0, since no Hilbert–Schmidt operator can commute with
an element with diffuse spectrum. It follows that vv ∗ ∈ M1 . In particular,
β(vv ∗ ) = vv ∗ , as claimed.

Using the fact that $\beta$ fixes $M_1 \supset \Upsilon(\Gamma)$, we deduce


σ0 (Υ (g))β(v) = β(v)σ−t (Υ (g)) for all g ∈ Γ.
Taking adjoints and replacing v by v ∗ yields
β(v ∗ )σ0 (Υ (g)) = σ−t (Υ (g))β(v ∗ ).
Combining these equations, we obtain that, for any g ∈ Γ,
σ−t (Υ (g))β(v ∗ )v = β(v ∗ )σ0 (Υ (g))v = β(v ∗ )vσt (Υ (g)).
Applying σt to both sides gives us finally the equation
Υ (g)σt (β(v ∗ )v) = σt (β(v ∗ )v)σ2t (Υ (g)).
In other words, if we set v ′ = σt (β(v ∗ )v)), then
Υ (g)v ′ = v ′ σ2t (Υ (g)) for all g ∈ Γ.
The trace of the left support of v ′ is the same as that of β(v ∗ )v, since σt
is an automorphism. At the same time, β(v)β(v ∗ ) = β(vv ∗ ) = vv ∗ , which
implies that the right support of β(v ∗ ) is equal to that of v ∗ . It follows that
β(v ∗ )v (and thus v ′ ) has left support of the same trace as that of v. Since v ′
intertwines Υ and σ2t ◦ Υ , we again note that v ′ (v ′ )∗ ∈ M1 and is thus fixed
by β. Repeating the same argument with v replaced by v ′ gives us a nonzero
partial isometry v ′′ intertwining Υ and σ4t ◦ Υ . Iterating, we finally obtain a
nonzero partial isometry w with
(1) Υ (g)w = wσπ/2 (Υ (g)) for all g ∈ Γ.
We now deduce a contradiction. Let us denote by N1 the image Υ (LΓ) ⊂ M1
and let $N_2 = \sigma_{\pi/2}(N_1) \subset M_2$. Note that $N_1 \cong N_2$; we fix this identification and write $N$ for both $N_1$ and $N_2$ when identified in this way.
It is not hard to see that as bimodules,
$$_{M_1} L^2(M_1 * M_2)_{M_2} \cong\ _{M_1}\big(L^2(M_1) \otimes L^2(M_2)\big)^{\oplus\infty}_{M_2},$$
which means that also
$$_{N_1} L^2(M_1 * M_2)_{N_2} \cong\ _{N_1}\big(L^2(N_1) \otimes L^2(N_2)\big)^{\oplus\infty}_{N_2} \cong\ _{N_1} HS(L^2(N_1), L^2(N_2))^{\oplus\infty}_{N_2} =\ _N HS(L^2(N), L^2(N))^{\oplus\infty}_N.$$

The vector w in this bimodule is central: equation (1) states that


xw = wx for all x ∈ N.
But this is impossible, since this would mean that a Hilbert–Schmidt operator on $L^2(N)$ would commute with the action of $N$; but since $N \cong L\Gamma$ is not type I, its commutant contains no compact operators. $\square$

Hence, there is no hope for a free version of Brenier’s theorem—but note


that Brenier’s theorem is only for nice measures. Thus, our hope is to find a
nice class of traces τ (which give rise to free group factors) where the transport
is possible. As Voiculescu mentioned (see his lecture), a characterization of this
class could lead to a characterization of the free group factors.

4. Free Gibbs laws


In the classical case a nice class of measures arises from "potential functions" $V : \mathbb{R}^n \to \mathbb{R}$ such that
$$Z = \int_{\mathbb{R}^n} e^{-V(x_1,\ldots,x_n)}\, dx_1 \ldots dx_n < \infty$$
(for example $V(x_1, \ldots, x_n) = \frac{1}{2}\sum x_i^2$). The associated measure $\mu_V$ is then given by the density $\frac{1}{Z} e^{-V(x_1,\ldots,x_n)}\, dx_1 \ldots dx_n$, and is sometimes called the Gibbs measure (or Gibbs law) associated to $V$. The Gaussian measure is such an example, corresponding to quadratic $V$. These laws have some nice properties:
(i) For every differentiable function $p$ we have
$$\int_{\mathbb{R}^n} p(\vec{x})\, \frac{\partial V}{\partial x_j}(\vec{x})\, d\mu_V(\vec{x}) = \int_{\mathbb{R}^n} \frac{\partial p}{\partial x_j}(\vec{x})\, d\mu_V(\vec{x}).$$
If $V$ is convex (and some conditions at infinity are satisfied), then this property characterizes the potential $V$.
(ii) For a measure $\nu$ given by a density $q$ the entropy of $\nu$ is
$$H(\nu) = \int q(x) \log q(x)\, dx.$$
Now we consider the relative entropy
$$H_V(\nu) = H(\nu) - \int V(x)\, d\nu(x).$$
Then $\mu_V$ is the unique maximizer of $H_V$.


In order to give a free analog of property (i), we first describe some non-
commutative differential calculus introduced in the free probability context by
Voiculescu [22, 24]. As before, denote by A = C[X1 , . . . , Xn ] the algebra of
noncommutative polynomials in n variables. Regard A ⊗ A as a bimodule over
A by setting
A(P ⊗ Q)B = AP ⊗ QB.
Definition 4.1. We define the free difference quotient $\partial_j : A \to A \otimes A$ by $\partial_j X_i = \delta_{ij}\, 1 \otimes 1$ and $\partial_j(PQ) = \partial_j(P)Q + P\partial_j(Q)$ (Leibniz rule). So if $P$ is a monomial, then $\partial_j P = \sum_{P = AX_jB} A \otimes B$.
Definition 4.2. The $j$-th cyclic derivative of a polynomial $P$ is the linear function determined on monomials by $D_j P = \sum_{P = AX_jB} BA$.
In other words, $D_j = m \circ \sigma \circ \partial_j$, where $\sigma(a \otimes b) = b \otimes a$ and $m(a \otimes b) = ab$.


Note that for n = 1 this is the usual derivative.
Example 4.3. As an example, we compute
∂2 X1 X2 X3 X2 X1 X2 = X1 ⊗ X3 X2 X1 X2 + X1 X2 X3 ⊗ X1 X2
+ X1 X2 X3 X2 X1 ⊗ 1.
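These operations are easy to implement symbolically on monomials; the following sketch (ours, not from the text) represents a monomial by its tuple of indices and reproduces the computation of Example 4.3:
\begin{verbatim}
def free_difference_quotient(j, monomial):
    # partial_j P = sum over decompositions P = A X_j B of A (x) B
    return [(monomial[:k], monomial[k + 1:])
            for k, index in enumerate(monomial) if index == j]

def cyclic_derivative(j, monomial):
    # D_j P = sum over decompositions P = A X_j B of B A
    return [B + A for A, B in free_difference_quotient(j, monomial)]

P = (1, 2, 3, 2, 1, 2)                 # the monomial X1 X2 X3 X2 X1 X2
for A, B in free_difference_quotient(2, P):
    print(A, "(x)", B)                 # reproduces the three terms of Example 4.3
print(cyclic_derivative(2, P))
\end{verbatim}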
Definition 4.4. A state τ is called a free Gibbs law with potential V ∈ A if
τ satisfies the Schwinger–Dyson equation with potential V ,
τ (P Dj V ) = τ ⊗ τ (∂j P )
for all j = 1, . . . , n and all P ∈ A.
Note that this definition is given by a property exactly like in the above
item (i) for the classical case. There is a way to view the potential V as a
stationary measure of a stochastic PDE, namely
dXt = dBt − ∇V (xt )dt.
In such a situation, µV is stationary, and likewise τ is stationary for an analo-
gous PDE in the free situation [13].
Notation 4.5. For a polynomial $P \in A$ given by
$$P = \sum_{k=0}^\infty \sum_{i_1,\ldots,i_k=1}^n c(i_1, \ldots, i_k)\, X_{i_1} \cdots X_{i_k}$$
and $R \in (0, \infty)$ we write
$$\|P\|_R = \sum_{k=0}^\infty \sum_{i_1,\ldots,i_k=1}^n R^k\, |c(i_1, \ldots, i_k)|.$$
This should be read as a supremum on radius $R$ in the sense that
$$\|P\|_R \geq \sup_{\|X_j\| \leq R} \|P(X_1, \ldots, X_n)\|.$$

Theorem 4.6 (Guionnet, Maurel-Segala [12]). For all $R > 0$ and $W \in A$ there is a $\beta(W) > 0$ such that for all $|\beta| < \beta(W)$ there exists a unique state $\tau_\beta$ such that
(i) $|\tau_\beta(P)| \leq \|P\|_R$ and
(ii) $\tau_\beta(P D_j V_\beta) = \tau_\beta \otimes \tau_\beta(\partial_j P)$,
for all $P \in A$ and $V_\beta = \frac{1}{2}\sum X_j^2 + \beta W$. In addition $\tau_\beta$ satisfies $\tau_\beta(P^*P) \geq 0$ for every polynomial $P \in A$.
If $\beta = 0$ (i.e. if $V_\beta = \frac{1}{2}\sum X_j^2$), we get
$$D_j V_0 = D_j\Big( \frac{1}{2}\sum_i X_i^2 \Big) = X_j.$$
Thus when $\beta = 0$, the equation
$$\tau_0(X_j P) = \tau_0 \otimes \tau_0(\partial_j P)$$
has a unique solution $\tau_0$ (satisfying $\tau_0(1) = 1$) because this equation gives a


relation between polynomials of degree deg(P ) + 1 and degree deg(P ) − 1. In
fact, this unique solution is given by the semicircular law.
More generally, we have
$$D_j V_\beta = D_j\Big( \frac{1}{2}\sum_i X_i^2 + \beta W \Big) = X_j + \beta D_j W.$$
So the free Gibbs state $\tau_\beta$ has to fulfill
$$\tau_\beta(X_j P) = \tau_\beta \otimes \tau_\beta(\partial_j P) - \beta\tau_\beta(D_j W\, P).$$
While the left-hand side has degree deg(P ) + 1 in X and degree deg(τβ ) in β,
the first summand on the right-hand side has degree deg(P )−1 in X and degree
deg τβ in β, whereas the second summand has degree deg(P ) + deg(W ) − 1 in
X and degree deg(τβ ) + 1 in β. This gives a recursion up to higher degrees
in β. This shows that there exists a unique formal power series solution τβ to
this equation. In fact, for small enough β this power series converges [12].
Let us remark that if $V$ splits into a sum,
$$V(X_1, \ldots, X_n) = V_1(X_1, \ldots, X_k) + V_2(X_{k+1}, \ldots, X_n),$$
then we expect to find freeness of $(X_1, \ldots, X_k)$ and $(X_{k+1}, \ldots, X_n)$.

5. Connection between random matrices and free Gibbs states


We now turn to an analog of property (ii) of the classical Gibbs law. Fix a polynomial $W$ and consider $V_\beta = \frac{1}{2}\sum X_j^2 + \beta W$. We define
$$\mu_\beta^{(N)}(dA_1 \cdots dA_n) = \frac{1}{Z_{V_\beta,N}}\, e^{-N \operatorname{Tr}(V_\beta(A_1,\ldots,A_n))}.$$
Here, the measure $dA_1 \cdots dA_n$ is given by the Lebesgue measure on the selfadjoint $N \times N$-matrices $M_N^{sa}$ with complex entries.
Theorem 5.1 (Guionnet, Maurel-Segala [12]). For $W \in A$ and $|\beta| < \beta_0$ let $\tau_{V_\beta}$ be the unique solution of Theorem 4.6 associated to $V_\beta = \frac{1}{2}\sum X_j^2 + \beta W$. Then for every polynomial $P \in A$ we have
$$E_{\mu_\beta^{(N)}}\, \frac{1}{N} \operatorname{Tr}(P(A_1, \ldots, A_n)) \to \tau_{V_\beta}(P)$$
as $N \to \infty$. In particular, $\tau_{V_\beta}(P^*P) \geq 0$ holds true.
Recall that $H_V(\nu) = H(\nu) - \int V(x)\, d\nu(x)$ is maximized by the
Gibbs measure. Is there a noncommutative analog? There exists an analog of
the entropy H in free probability theory which is Voiculescu’s free entropy χ
(based on microstates). Let Fj be an element of the closure of C[Y1 , . . . , Yn ]
with respect to k·kR and R > kYj k. Let Xj = Fj (Y1 , . . . , Yn ). Then we have
χ(X1 , . . . , Xn ) = χ(Y1 , . . . , Yn ) + τ ⊗ τ (Tr log(|[∂j Fi ]ij |)).
This is like a change of variables (since Tr log behaves like the Jacobian of F ).
Note that $[\partial_j F_i]_{ij}$, as well as its absolute value, is well defined as an element in $M_n(M \otimes M^{op})$, where $M = W^*(Y_1, \ldots, Y_n)$. Since this expression only depends on $\tau$, we also write $\chi(\tau)$ instead of $\chi(X_1, \ldots, X_n)$.
Proposition 5.2. Fix V ∈ A and set χV (τ ) = χ(τ ) − τ (V ). If χV is maximal
at τ , then τ satisfies the Schwinger–Dyson equation.
Proof. Suppose Y1 , . . . , Yn are free random variables with law τ . For Pj ∈
C[Y1 , . . . , Yn ] we set Fj = Yj + εPj (Y1 , . . . , Yn ). Then we get
χV (F1 , . . . , Fn ) = (τ ⊗ τ ⊗ Tr)(log(∂j Fi ))
− τ (V (Y1 + εP1 , . . . , Yn + εPn )) + χ(Y1 , . . . , Yn ).
Here, ∂j Fi = I + ε[∂j Pi ]i,j .
Now if $\chi_V$ is maximal at $\varepsilon = 0$, it follows that $\partial_\varepsilon|_{\varepsilon=0}\, \chi_V(F) = 0$. We get
$$\tau \otimes \tau \otimes \operatorname{Tr}\big(\log([\partial_j P_i]_{i,j})\big) = \sum_i \tau \otimes \tau(\partial_i P_i) - \sum_j \tau(D_j V \cdot P_j).$$

If we set Pj = 0 except Pn = P , it follows that τ satisfies the Schwinger–Dyson


equation. 
Theorem 5.3 ([14]). There exists a transport map F = (F1 , . . . , Fn ) from the
semicircle law τ to τVβ for all |β| < β(W ).
We denote DV = (D1 V, . . . , Dn V ) and J P = [∂j Pi ]i,j ∈ Mn (A ⊗ Aop ),
furthermore M = W ∗ (A, τ ). Then the Schwinger–Dyson equation is equivalent
to
$$\tau(P\, DV) = (\tau \otimes \tau) \operatorname{Tr}(J P) \quad \text{for all } P = (P_1, \ldots, P_n).$$
Indeed, we simply use $P\, DV = \sum_j P_j D_j V$. The operator $J$ can be viewed as a
densely defined map L2 (M )n → L2 (Mn (M ⊗ M )) so that the above equation
can be rewritten as
J ∗ (I) = DV.
This follows from
(τ ⊗ τ ) Tr(J P ) = (τ ⊗ τ ) Tr(J P · I) = τ (J ∗ (I)P ).
Lemma 5.4 (Voiculescu [22]). If J ∗ (I) exists then J is closable.
Proof. One can write an explicit formula for $J^*\big([\sum_k a_{ij}^{(k)} \otimes b_{ij}^{(k)}]_{i,j}\big)$ in terms of $J^*(I)$. $\square$
Our goal is the following. Given semicircle variables X1 , . . . , Xn we want to
construct F1 , . . . , Fn ∈ M = W ∗ (X1 , . . . , Xn ) such that the system F1 , . . . , Fn
has law τVβ . This means that F1 , . . . , Fn must satisfy
JF∗ (I) = DVβ (F1 , . . . , Fn ).
Here, JF P is given by [∂Fj Pi ]ij , i.e. ∂Fj (Fi ) = δij 1 ⊗ 1 etc. By a kind of chain
rule and the assumption that J F is invertible, this equation can be rewritten
as
J ∗ ((J F )−1 ) = (DV )(F1 , . . . , Fn ).

Here, $J$ is with respect to $X_1, \ldots, X_n$. We want to analyze this equation, which is equivalent to $F_1, \ldots, F_n \in A^{\|\cdot\|_R}$, $R > 4$, having law $\tau_{V_\beta}$, where $X_1, \ldots, X_n$ have the semicircular law. We require that the operator $J F$ is invertible in $M_n(A^{\|\cdot\|_R} \otimes A^{\|\cdot\|_R})$. If we assume that $F_j(X_1, \ldots, X_n) = X_j + f_j$, we infer that the equation (or rather the $n$-tuple of equations)
$$J^*\big((I + J f)^{-1}\big) = X + f + \beta DW(X + f)$$
is equivalent to saying that $F_1, \ldots, F_n$ has law $\tau_{V_\beta}$. Here, we need $\|J f\| < 1$ and we use $DV_\beta = X + \beta DW$.
We do not know how to solve this equation, but it is implied if we apply $D$ to the following equation, in the case that our tuple $f$ can be written as a cyclic gradient of a function $g$:
$$(\tau \otimes 1 + 1 \otimes \tau)\big(\operatorname{Tr} \log(I + J f)\big) = S\Big\{ \frac{1}{2}\sum (X_j + f_j)^2 + \beta W(X + f) - \frac{1}{2}\sum X_j^2 \Big\}.$$
Note that the left-hand side is in $A^{\|\cdot\|_R}$. The last summand on the right-hand side, $-\frac{1}{2}\sum X_j^2$, comes into play since $J^*(1) = X$ if and only if $X$ has the semicircular law. In the above equation, $S$ is the symmetrization operator sending monomials $X_{i_1} \cdots X_{i_n}$ to $\frac{1}{n}\sum_k X_{i_k} \cdots X_{i_n} X_{i_1} \cdots X_{i_{k-1}}$. We have $DS = D$.
The above equation is called the free Monge–Ampère equation as an analog of the classical Monge–Ampère equation
$$\operatorname{Tr} \log \operatorname{Jac}(F(x)) = V(F(x)) - V_0(x),$$
or in an equivalent formulation,
$$\det \operatorname{Jac}(F(x)) = \frac{e^{-V_0(x)}}{e^{-V(F(x))}}.$$
This describes a change of densities.
One can prove that the Schwinger–Dyson equation has a solution by using Picard iteration, as follows. Let $f = Dg$ and consider
$$(\tau \otimes 1 + 1 \otimes \tau)\big(\operatorname{Tr} \log(I + J Dg)\big) = \sum X_j D_j g + \frac{1}{2}\sum (D_j g)^2 + \beta W(X + Dg).$$
Summing over $j$ in the equation
$$X_j D_j(X_{i_1} \cdots X_{i_n}) = \sum_{k,\, i_k = j} X_j X_{i_{k+1}} \cdots X_{i_n} X_{i_1} \cdots X_{i_{k-1}}$$
yields
$$\sum_j X_j D_j(X_{i_1} \cdots X_{i_n}) = \sum_k X_{i_k} X_{i_{k+1}} \cdots X_{i_n} X_{i_1} \cdots X_{i_{k-1}} = n\, S(X_{i_1} \cdots X_{i_n}).$$
Let N be the operator that multiplies each monomial by its degree. We infer
that Σj Xj Dj g = N g (cf. [23]). This allows us to re-write our equation as
g = N −1 S[ (τ ⊗ 1 + 1 ⊗ τ )(Tr log(I + J Dg)) − (1/2) Σj (Dj g)² − βW (X + Dg) ],
i.e., in the form
g = F (g).
It turns out that our F is contractive on a ball for ‖·‖R , hence we have a
fixed point and thus a solution by Picard iteration.
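To illustrate the mechanism (only as an analogy; the map below is an arbitrary contraction of our choosing, not the actual operator F on noncommutative power series), here is a minimal numerical sketch of Picard iteration converging to a fixed point.

```python
import numpy as np

# Toy illustration of Picard (fixed-point) iteration for a contraction F.
# Here F acts on vectors in R^3 with Lipschitz constant ~0.5; contractivity
# guarantees geometric convergence to the unique fixed point g = F(g).

def F(g):
    return 0.5 * np.cos(g) + 0.1   # an arbitrary smooth contraction (ours)

g = np.zeros(3)                    # initial guess g_0 = 0
for _ in range(50):
    g_new = F(g)
    if np.linalg.norm(g_new - g) < 1e-14:
        break
    g = g_new

print("fixed point:", g)
print("residual   :", np.linalg.norm(F(g) - g))   # ~0, i.e. g = F(g)
```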
As β → 0, the solution F = X + f converges to X, so for small β it is invertible.
Hence for small β there exists F such that Y = F (X1 , . . . , Xn ) has law τVβ and
an inverse F ⟨−1⟩ such that F ⟨−1⟩ (Y1 , . . . , Yn ) has the semicircular law τ , and
J F is positive.
6. Applications
It is a question asked by Voiculescu [25] some ten years ago whether for all β we
have W ∗ (τβ ) ≅ W ∗ (τsc ), with τsc the semicircle law. Now we know the following
result.
Corollary 6.1. For small β we have that W ∗ (τβ ) is isomorphic to W ∗ (τsc ).
It turns out that the so-called q-deformed free group factors of Bożejko and
Speicher [3] are particular examples of factors of the form W ∗ (τβ ). Thus we
can formulate the following corollaries.
Corollary 6.2. The q-deformed free group factors are isomorphic to free group
factors for small q.
Corollary 6.3. C ∗ (τβ ) is isomorphic to C ∗ (τsc ) for small values of β.
As C ∗ (τsc ) is projectionless by a result of Pimsner and Voiculescu (1981), so
is C ∗ (τβ ). This has important consequences for the histogram of eigenvalues.
In the classical case we know that for all measures µ on Rn with den-
sity ρ(x1 , . . . , xn ) there exists a unique transport map f which transports the
Gaussian distribution to the measure µ. The Jacobian of this transport map
more or less is the density ρ. Hence, having a transportation map is even
better than having a density, since it exists in more general settings.
In free probability however, we do not have densities, but given a law τ there
may be a transportation map F such that τ = F∗ τsc . In this case, we have the
following relation between the free entropy χ and the transport map F :
χ(τ ) = (τ ⊗ τ ) Tr log(J F ) + χ(τsc ).
We define a measure µ_V^{(N)} on (M_N^{sa} )^n via
µ_V^{(N)} (dA1 · · · dAn ) = (1/Z_{V,N} ) e^{−N Tr(V (A1 ,...,An ))} dA1 · · · dAn .
By Brenier's theorem, we find a transport map F̃ (N ) : R^{nN²} → R^{nN²}
transporting the Gaussian measure µG to µ_V^{(N)} . We use the following random
matrix model:
H_{−N Tr(V (A1 ,...,An ))} (µ_V^{(N)} ) → χV (τV ) as N → ∞.
Here, χV (τV ) is the free entropy relative to V . Applying F̃ (N ) to the random
matrices via functional calculus yields a map F (N ) with
‖F (N ) − F̃ (N ) ‖_{L²((1/N ) Tr ◦EµG )} → 0 as N → ∞.

Taking a look at the effect of F (N ) and F̃ (N ) on H_{N Tr(V )} , we infer that F̃ (N )
changes the measure by H(µ_V^{(N)} ) − H(µG ) whereas F (N ) effects a change by
E(det(Jac)). Thus, asymptotically, the maps get close and we obtain a kind of
density in free probability.

7. Is our map optimal?


In the classical case there exists a Wasserstein distance between two mea-
sures µ and ν defined by
dW2 (µ, ν)² = inf_π ∫ ‖x − y‖₂² dπ(x, y).

Here, the infimum is taken over all measures π on Rn × Rn with marginals
µ, ν. In some sense, integration yields µ at one end and ν at the other. The
Wasserstein distance can also be written as
dW2 (µ, ν)² = inf Σj E((Xj − Yj )²).

Here, the infimum runs over all tuples X1 , . . . , Xn which are distributed by µ
and likewise Y1 , . . . , Yn according to ν.
In this context the Brenier map is optimal.
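As a numerical illustration of this optimality in the simplest (one-dimensional, sample-based) situation, the following sketch compares the cost of the monotone coupling with that of an arbitrary coupling; the two distributions are sample choices of ours.

```python
import numpy as np

# In one dimension the Brenier/monotone coupling is optimal: pairing the sorted
# samples of mu with the sorted samples of nu minimizes the mean squared
# displacement among couplings of the two empirical measures (illustration only).
rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(0.0, 1.0, size=n)        # samples from mu (standard Gaussian)
y = rng.exponential(1.0, size=n)        # samples from nu (exponential)

w2_sq_monotone = np.mean((np.sort(x) - np.sort(y)) ** 2)   # quantile coupling
w2_sq_random   = np.mean((x - rng.permutation(y)) ** 2)    # some other coupling

print(f"monotone coupling cost: {w2_sq_monotone:.4f}")
print(f"random   coupling cost: {w2_sq_random:.4f}   (larger, as expected)")
```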
In the noncommutative case the metric dW2 is generalized by the Biane–
Voiculescu–Wasserstein distance [2]
d(τ, τ ′ )² = inf { Σj ‖xj − yj ‖²_{L²(M (τ )∗M (τ ′ ))} : x1 , . . . , xn ∼ τ, y1 , . . . , yn ∼ τ ′ }.

But it is still an open question if in this case our transport map is optimal.

8. Comments
Our result on the existence of free monotone transport is an example of the
use of analysis tools from free probability theory in the context of von Neumann
algebra theory, a topic that has a long history. Indeed, such applications were
the original motivation of Voiculescu in developing free probability.
Since the times of these lectures, there have been a number of important
developments, both around free transport and this more general area of ap-
plications of free probability theory. One such development is the work of
B. Nelson [16], who was able to extend our free monotone transportation re-
sults to nontracial context. The tracial requirement is absolutely essential in
our work, and the understanding of how to handle it in the nontracial context
is a big step forward. In another direction, Bekerman, Figalli and Guionnet
[1, 9] were able to substantially improve our results from [14] relating transport
between random matrix models of finite size and their free probability limit;
using these ideas they were able to obtain universality results for eigenvalues
of certain polynomials in arbitrary GUE matrices.
Although we did not mention it in our lecture, a lot of the work we have
described in the free probability setting has an extension to Jones subfactor
theory. The starting point [10] is the replacement of the ring of noncommu-
tative polynomials by a different ring, coming from a so-called Jones planar
algebra [15] of a subfactor inclusion. In a certain sense, this is akin to passage
from analysis on Rn to analysis on a certain symmetric space Rn /G. The
significant difference in the noncommutative case is the generality of the sym-
metry group G, which can be a “quantum symmetry” in the subfactor theory
sense. It turns out that random matrix models [11], free Gibbs states
[5, 11] and transportation theory [17] are all available in this greater generality.
Finally, it is worth mentioning that advanced tools from free probability
theory—such as stochastic differential equations—continue to play an impor-
tant role in von Neumann algebra theory. Let us just mention the work of
Dabrowski and Ioana [8], which builds on applications of free probability
found in Dabrowski’s earlier work [6, 7].
An important future direction for research in free transportation theory is
the analysis of free transportation beyond the perturbative regime. This is a
subject of active research.

9. Exercises
Recall from Definitions 4.1 and 4.2 that for the algebra of noncommutative
polynomials A = C[X1 , . . . , Xn ], we define the derivations ∂j : A → A ⊗ A
given on monomials by
∂j P = Σ_{P = A Xj B} A ⊗ B.
Define also Dj P by
Dj P = Σ_{P = A Xj B} B A.

Finally, for an A-bimodule H, denote by # : (A ⊗ A) × H → H the map
#(a ⊗ b, h) := (a ⊗ b)#h := ahb.
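Before turning to the exercises, here is a small computational sketch of the operators just recalled; the encoding of monomials as index tuples is ours and is meant only as an illustration. It also checks the identity Dj S = Dj used in the lecture on free monotone transport.

```python
from collections import Counter
from fractions import Fraction

# Monomials X_{i1}...X_{in} are encoded as tuples (i1, ..., in); polynomials are
# Counters {monomial: coefficient}.  This encoding is ours, for illustration only.

def partial(j, mono):
    """Free difference quotient: d_j(X_{i1}...X_{in}) = sum_{i_k = j} A (x) B."""
    out = Counter()
    for k, idx in enumerate(mono):
        if idx == j:
            out[(mono[:k], mono[k+1:])] += 1      # elementary tensor A (x) B
    return out

def cyclic_derivative(j, mono):
    """Cyclic gradient: D_j(X_{i1}...X_{in}) = sum_{i_k = j} B A."""
    out = Counter()
    for k, idx in enumerate(mono):
        if idx == j:
            out[mono[k+1:] + mono[:k]] += 1
    return out

def symmetrize(mono):
    """S(X_{i1}...X_{in}) = (1/n) * sum of the cyclic rotations of the word."""
    n = len(mono)
    out = Counter()
    for k in range(n):
        out[mono[k:] + mono[:k]] += Fraction(1, n)
    return out

def cyclic_derivative_poly(j, poly):
    out = Counter()
    for mono, coeff in poly.items():
        for m, c in cyclic_derivative(j, mono).items():
            out[m] += coeff * c
    return +out   # drop zero entries

if __name__ == "__main__":
    mono = (1, 2, 1, 3, 2)                       # the monomial X1 X2 X1 X3 X2
    print("d_1:", dict(partial(1, mono)))
    print("D_1:", dict(cyclic_derivative(1, mono)))
    for j in (1, 2, 3):                          # check D_j S = D_j
        lhs = cyclic_derivative_poly(j, symmetrize(mono))
        rhs = cyclic_derivative_poly(j, Counter({mono: Fraction(1)}))
        assert lhs == rhs
    print("Checked D_j S = D_j on this monomial for j = 1, 2, 3.")
```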
Exercise 9.1. Show that if n = 1 and if we identify A with the algebra
C[X] of polynomials in one variable and A ⊗ A with the algebra C[X, Y ] of
polynomials in two (commuting) variables, then
∂ : A → A ⊗ A
is precisely the difference quotient
∂p = (p(X) − p(Y ))/(X − Y ).
(Hint: Check that the difference quotient is a derivation and compare values
on X.)
Exercise 9.2. Let f = f (X1 , . . . , Xn ) ∈ A be a polynomial, and assume that
B is some algebra containing A. Let Q1 , . . . , Qn ∈ B; also, regard B as an
A-bimodule so that (a ⊗ a′ )#b = aba′ for all a, a′ ∈ A and b ∈ B. Show that
∂t |t=0 f (X1 + tQ1 , . . . , Xn + tQn ) = Σi (∂i f )#Qi .
Exercise 9.3. For F = (f1 , . . . , fn ) ∈ An , let J F ∈ Mn×n (A ⊗ A) be given by
J F = (∂j fi )ij .
Given F = (f1 , . . . , fn ), G = (g1 , . . . , gn ) ∈ An , define F ◦ G ∈ An by
(F ◦ G) = (f1 ◦ G, . . . , fn ◦ G),
where
(fj ◦ G)(X1 , . . . , Xn ) = fj (g1 (X1 , . . . , Xn ), . . . , gn (X1 , . . . , Xn )).
Express J (F ◦ G) in terms of J F and J G (a kind of chain rule for differenti-
ation).
Exercise 9.4. Let τ : A → C be a tracial linear functional, i.e., τ (U V ) =
τ (V U ) for all U, V ∈ A. Let Q1 , . . . , Qn ∈ A. Show that
∂t |t=0 τ (P (X1 + tQ1 , . . . , Xn + tQn )) = Σj τ (Dj P Qj ).

Exercise 9.5. Let τ : A → C and assume that τ satisfies the Schwinger–Dyson
equation with potential V , i.e.,
τ (Dj V P ) = τ ⊗ τ (∂j P ).
(a) Show that if V = (1/2) Σj Xj² , then τ is the semicircle law: X1 , . . . , Xn are
freely independent under τ and the even moments τ (Xj^{2k} ) are the Catalan
numbers (see the numerical sketch after this exercise).
(b) [harder] Show that if V = V1 + V2 where V1 ∈ C[X1 , . . . , Xk ] and V2 ∈
C[Xk+1 , . . . , Xn ], 1 ≤ k < n, then (X1 , . . . , Xk ) and (Xk+1 , . . . , Xn ) are
freely independent under τ .
(c) Let µ be a measure on R, let V be a polynomial, and let
χV (µ) = ∫∫ log |s − t| dµ(s) dµ(t) − ∫ V (t) dµ(t).

Show that χV has a unique maximum among all probability measures on R
(hint: use convexity). Show moreover that if µ is the unique maximizer of
χV (µ), then µ satisfies
∫ V ′ (t)p(t) dµ(t) = ∫∫ (p(s) − p(t))/(s − t) dµ(s) dµ(t)
for all polynomials p, i.e.,
τ (V ′ p) = τ ⊗ τ (∂p).
(Hint: Assuming that µ is a maximizer, consider µt = (f (t) )∗ µ with f (t) (x) =
x + tp(x) and differentiate χV (µt ) in t.)
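For Exercise 9.5(a), the following numerical sketch (ours) checks that the even moments of the semicircle density (1/2π)√(4 − t²) on [−2, 2] are the Catalan numbers and that the odd moments vanish.

```python
import numpy as np
from math import comb

# Moments of the standard semicircle law vs. Catalan numbers C_k = binom(2k,k)/(k+1).
t = np.linspace(-2.0, 2.0, 200_001)
dt = t[1] - t[0]
rho = np.sqrt(4.0 - t**2) / (2.0 * np.pi)

for n in range(9):
    moment = float(np.sum(t**n * rho) * dt)     # simple Riemann sum
    if n % 2 == 0:
        k = n // 2
        print(f"m_{n} = {moment:.6f}   C_{k} = {comb(2*k, k) // (k + 1)}")
    else:
        print(f"m_{n} = {moment:.6f}   (odd moment, should be 0)")
```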
Exercise 9.6. For a probability measure µ with density ρ on R define
HV (µ) = − ∫ ρ(x) log ρ(x) dx − ∫ V (x)ρ(x) dx.
Show that for a given V for which Z = ∫ exp(−V (x))dx < ∞, the measure µ
with density ρV (x) = Z −1 exp(−V (x)) is the unique maximizer of HV (µ).
Show also that the unique maximizer µ satisfies
∫ V ′ (t)p(t) dµV (t) = ∫ p′ (t) dµV (t).

(Hint: Integrate by parts.)
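A quick numerical sanity check of this integration-by-parts identity, with the sample choices V (x) = x²/2 and p(x) = x³ (ours, for illustration only), can be carried out as follows.

```python
import numpy as np

# Check  integral V'(t) p(t) dmu_V = integral p'(t) dmu_V  for the Gibbs measure
# dmu_V = Z^{-1} exp(-V(t)) dt, with V(t) = t^2/2 (standard Gaussian) and p(t) = t^3.
t = np.linspace(-12.0, 12.0, 400_001)
dt = t[1] - t[0]

rho = np.exp(-0.5 * t**2)
rho /= np.sum(rho) * dt            # normalized density of mu_V

lhs = np.sum(t * t**3 * rho) * dt          # V'(t) p(t) = t * t^3
rhs = np.sum(3 * t**2 * rho) * dt          # p'(t) = 3 t^2
print(f"lhs = {lhs:.6f}, rhs = {rhs:.6f}")  # both ~3 for the standard Gaussian
```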


Exercise 9.7. Suppose that V satisfies Z = ∫ exp(−V (x))dx < ∞. Let
µV = (1/ZV ) exp(−V (x))dx be the unique maximizer of HV as in Exercise 9.6.
Let ν be a measure for which HV (ν) < ∞. Assume moreover that V is a
strictly convex function.
(a) Let f : R → R be a function so that f ′ > 0 (i.e., f is monotone). Let
H(f ) := HV (f∗ ν).
Show that f 7→ H(f ) is strictly concave and thus there exists a unique f
with f ′ > 0 for which H(f ) achieves its maximum.
(b) Let g1 (t) = ∫_{−∞}^{t} dµV (x) and g2 (t) = ∫_{−∞}^{t} dν(x). Show that g = g1^{−1} ◦ g2
is monotone and g∗ ν = µV (see the numerical sketch after this exercise).
(c) Conclude that H(g) = HV (µV ) and so H(g) is maximal among all H(f )
for f monotone. Thus g is the unique monotone map satisfying g∗ ν = µV .
(d) [harder] Show directly that the maximizer of H(f ) satisfies f∗ ν = µV .
(Idea: Replace a maximizer by fε = f ◦ (id + εp) for a smooth function p and
differentiate H(fε ) in ε to recover ∫ V ′ (t)p(t) d(f∗ ν)(t) = ∫ p′ (t) d(f∗ ν)(t).)
(e) Carry out (a)–(d) for χV in place of HV .
(f) [quite a bit harder] Carry out (a) and (d) for measures on Rn , n > 1. The
condition f ′ > 0 is replaced by the requirement that the Jacobian of f is,
at every point, positive-definite as a matrix.
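The following sketch (ours; the choices of V and ν are arbitrary sample data) illustrates part (b) numerically: it builds g = g1^{−1} ◦ g2 from the two distribution functions and checks that it transports ν to µV by comparing moments.

```python
import numpy as np

# Monotone transport via distribution functions, as in Exercise 9.7(b).
# Sample choices (ours): V(x) = x^2/2, so mu_V is the standard Gaussian,
# and nu is the uniform distribution on [0, 1].
rng = np.random.default_rng(1)

x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]
rho_V = np.exp(-0.5 * x**2)
rho_V /= np.sum(rho_V) * dx
g1 = np.cumsum(rho_V) * dx                 # distribution function of mu_V

def g(y):                                  # g = g1^{-1} o g2, with g2 = CDF of nu
    u = np.clip(y, 0.0, 1.0)               # for nu = Uniform[0,1], g2(y) = y
    return np.interp(u, g1, x)             # quantile function of mu_V

Y = rng.uniform(0.0, 1.0, size=200_000)    # samples from nu
Z = g(Y)                                   # should be (approximately) ~ mu_V

for n in (1, 2, 3, 4):
    print(f"moment {n}: pushforward {np.mean(Z**n):+.3f}   "
          f"mu_V {np.sum(x**n * rho_V) * dx:+.3f}")
```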

References
[1] F. Bekerman, A. Figalli, and A. Guionnet, Transport maps for β-matrix models and
universality, Comm. Math. Phys. 338 (2015), no. 2, 589–619. MR3351052
[2] P. Biane and D. Voiculescu, A free probability analogue of the Wasserstein metric on
the trace-state space, Geom. Funct. Anal. 11 (2001), no. 6, 1125–1138. MR1878316
[3] M. Bożejko and R. Speicher, An example of a generalized Brownian motion, Comm.
Math. Phys. 137 (1991), no. 3, 519–531. MR1105428
[4] Y. Brenier, Polar factorization and monotone rearrangement of vector-valued functions,
Comm. Pure Appl. Math. 44 (1991), no. 4, 375–417. MR1100809
[5] S. Curran, Y. Dabrowski, and D. Shlyakhtenko, Free analysis and planar algebras.
arXiv.org:1411.0268 [math.OA] (2014).
[6] Y. Dabrowski, A note about proving non-Γ under a finite non-microstates free Fisher
information assumption, J. Funct. Anal. 258 (2010), no. 11, 3662–3674. MR2606868
[7] Y. Dabrowski, A free stochastic partial differential equation, Ann. Inst. Henri Poincaré
Probab. Stat. 50 (2014), no. 4, 1404–1455. MR3270000
[8] Y. Dabrowski and A. Ioana, Unbounded derivations, free dilations, and indecompos-
ability results for II1 factors, Trans. Amer. Math. Soc. 368 (2016), no. 7, 4525–4560.
MR3456153
[9] A. Figalli and A. Guionnet, Universality in several-matrix models via approximate trans-
port maps. Preprint, 2014.
[10] A. Guionnet, V. F. R. Jones, and D. Shlyakhtenko, Random matrices, free probability,
planar algebras and subfactors, in Quanta of maths, 201–239, Clay Math. Proc., 11,
Amer. Math. Soc., Providence, RI, 2010. MR2732052
[11] A. Guionnet, V. F. R. Jones, D. Shlyakhtenko, and P. Zinn-Justin, Loop models, random
matrices and planar algebras, Comm. Math. Phys. 316 (2012), no. 1, 45–97. MR2989453
[12] A. Guionnet and E. Maurel-Segala, Combinatorial aspects of matrix models, ALEA
Lat. Am. J. Probab. Math. Stat. 1 (2006), 241–279. MR2249657
[13] A. Guionnet and D. Shlyakhtenko, Free diffusions and matrix models with strictly con-
vex interaction, Geom. Funct. Anal. 18 (2009), no. 6, 1875–1916. MR2491694
[14] A. Guionnet and D. Shlyakhtenko, Free monotone transport, Invent. Math. 197 (2014),
no. 3, 613–661. MR3251831
[15] V. F. R. Jones, Planar algebras. Preprint, Berkeley, 1999.
[16] B. Nelson, Free monotone transport without a trace, Comm. Math. Phys. 334 (2015),
no. 3, 1245–1298. MR3312436
[17] B. Nelson, Free transport for finite depth subfactor planar algebras, J. Funct. Anal. 268
(2015), no. 9, 2586–2620. MR3325530
[18] S. Popa, Strong rigidity of II1 factors arising from malleable actions of w-rigid groups.
I, Invent. Math. 165 (2006), no. 2, 369–408. MR2231961
[19] S. Popa, Deformation and rigidity for group actions and von Neumann algebras, Inter-
national Congress of Mathematicians. Vol. I, 445–477, Eur. Math. Soc., Zürich, 2007.
MR2334200
[20] C. Villani, Topics in optimal transportation, Grad. Stud. Math., 58, Amer. Math. Soc.,
Providence, RI, 2003. MR1964483
[21] D. Voiculescu, Symmetries of some reduced free product C ∗ -algebras, in Operator alge-
bras and their connections with topology and ergodic theory (Buşteni, 1983), 556–588,
Lecture Notes in Math., 1132, Springer, Berlin, 1985. MR0799593
[22] D. Voiculescu, The analogues of entropy and of Fisher’s information measure in free
probability theory. V. Noncommutative Hilbert transforms, Invent. Math. 132 (1998),
no. 1, 189–227. MR1618636
[23] D. Voiculescu, A note on cyclic gradients, Indiana Univ. Math. J. 49 (2000), no. 3,
837–841. MR1803213
[24] D. Voiculescu, Cyclomorphy, Int. Math. Res. Not. 2002, no. 6, 299–332. MR1877005
[25] D. Voiculescu, Symmetries arising from free probability theory, in Frontiers in number
theory, physics, and geometry. I, 231–243, Springer, Berlin, 2006. MR2261097
[26] D. V. Voiculescu, K. J. Dykema, and A. Nica, Free random variables, CRM Monogr.
Ser., 1, Amer. Math. Soc., Providence, RI, 1992. MR1217253
Free group factors

Ken Dykema

1. Introduction
Infinite-dimensional von Neumann algebras that have trivial center and tra-
cial states are called II1 -factors. In the beginning, there were two of them:
Murray and von Neumann proved [16] that the hyperfinite II1 -factor (obtained
as a limit of finite-dimensional matrix algebras) is not isomorphic to the group
von Neumann algebra of the free group of two generators, L(F2 ). At the 1967
Baton Rouge conference, R. V. Kadison gave a list of open questions, includ-
ing the question of whether L(F2 ) and L(F3 ) are isomorphic (see [12]). Several
of Kadison’s questions have been answered, and today much more is known
about II1 -factors; in particular, there are uncountably many nonisomorphic ex-
amples, and some of them have quite exotic properties. But the isomorphism
question for free group factors remains open.
Voiculescu’s free probability theory and his random matrix results (of the
1980s and early 1990s) opened up new ways to try to understand these free
group factors. These lectures aim to provide an introduction to these random
matrix techniques and applications, by way of summarizing results obtained
in the 1990s and providing proofs (or sketches of proofs) of some of them. We
also include a section listing some more recent results about free group factors.

2. C∗ -noncommutative probability spaces


Recall that a noncommutative probability space (abbreviated n.c.p.s.) is a
pair (A, φ) where A is a unital algebra (over C) and where φ : A → C is a
linear map sending the identity element of A to 1. If A is a ∗-algebra and φ
satisfies in addition φ(a∗ ) = φ(a) and φ(a∗ a) ≥ 0 for all a ∈ A, then we say
(A, φ) is a ∗-n.c.p.s.
A C∗ -algebra A is a norm closed ∗ -subalgebra of the algebra of bounded
operators B(H) on some Hilbert space. An element a ∈ A is positive if and

The author would like to thank Nicolai Stammeier and Moritz Weber for organizing the
masterclass and for assisting mightily in the production of these notes. (Any mistakes are,
of course, the responsibility of the author.)
only if a = b∗ b for some b ∈ A. C∗ -algebras are the best places to work with
positivity in a noncommutative context.
Definition 2.1. A C∗ -noncommutative probability space is a pair (A, φ), where
A is a unital C∗ -algebra and φ is a state on A, i.e., φ : A → C is a linear
functional such that φ(1) = 1 and φ(a) ≥ 0 for all positive elements a ∈ A.
Our first goal is the following. Given two C∗ -noncommutative probability
spaces (A1 , φ1 ) and (A2 , φ2 ), we want to find a C∗ -n.c.p.s. (A, φ) such that
• Ai ֒→ A for i = 1, 2,
• φ|Ai = φi for i = 1, 2, and
• A1 and A2 are free with respect to φ.
We can do it if the GNS-representations of φ1 and φ2 are faithful. Recall
that given a C∗ -n.c.p.s. (A, φ), we can define a sesquilinear form on A by
⟨a1 , a2 ⟩φ := φ(a∗2 a1 ). This yields a semi-norm ‖a‖2 := φ(a∗ a)^{1/2} . Now separa-
tion and completion leads to a Hilbert space L2 (A, φ) and a canonical map
A → L2 (A, φ), denoted a 7→ â. The GNS-representation is the map
πφ : A → B(L2 (A, φ)), πφ (a)b̂ := (ab)ˆ .
Using the cyclic vector ξφ := 1̂, we recover our state φ as the vector state
φ(a) = hπφ (a)ξφ , ξφ i. This characterizes the GNS-representation in the follow-
ing sense. If π : A → B(H) is a representation of A and ξ ∈ H a normalized
cyclic vector such that φ(a) = hπ(a)ξ, ξi, then (π, H, ξ) is unitarily equivalent
to (πφ , L2 (A, φ), ξφ ). Finally, if φ is faithful, so is πφ , but the converse is not
true.

3. Reduced free products of C∗ -algebras and


von Neumann algebras
Let us first consider free products of Hilbert spaces or, more precisely,
free products of Hilbert spaces equipped with a specified unit vector. Let
(Hi , ξi )i∈I , where I is some index set, be a family of pairs consisting of a
Hilbert space Hi with a unit vector ξi . We let Hi0 := Hi ⊖ Cξi be the ortho-
complement. We define the free product Hilbert space (H, ξ) = ∗i∈I (Hi , ξi ) by
setting
M
H = Cξ ⊕ Hi01 ⊗ Hi02 ⊗ · · · ⊗ Hi0n .
n≥1
i1 ,...,in ∈I
ij 6=ij+1

Here, ξ represents a specified unit vector in the one-dimensional direct sum-
mand Cξ. We define maps σi : B(Hi ) → B(H) which act “on the left” and
which look something like
σi (T )(v1 ⊗ · · · ⊗ vn ) = (T v1 ) ⊗ v2 ⊗ · · · ⊗ vn if i1 = i, and
σi (T )(v1 ⊗ · · · ⊗ vn ) = (T ξi ) ⊗ v1 ⊗ · · · ⊗ vn if i1 ≠ i.
Here, vj ∈ Hi0j , ij ≠ ij+1 . But that is not quite correct. More precisely, for a
unit vector ηi , we set
K(i) := Cηi ⊕ ⨁ { Hi01 ⊗ Hi02 ⊗ · · · ⊗ Hi0n : n ≥ 1, i1 , . . . , in ∈ I, ij ≠ ij+1 , i1 ≠ i }

and we define unitary maps Vi : Hi ⊗ K(i) → H via
ξi ⊗ ηi 7→ ξ,
v ⊗ ηi 7→ v,
ξi ⊗ (v1 ⊗ · · · ⊗ vn ) 7→ v1 ⊗ · · · ⊗ vn ,
v ⊗ (v1 ⊗ · · · ⊗ vn ) 7→ v ⊗ v1 ⊗ · · · ⊗ vn .
Here, v ∈ Hi0 . Then σi (T ) ∈ B(H) is defined by σi (T ) := Vi (T ⊗ 1K(i) )Vi∗ .
Note Hi = Cξi ⊕ Hi0 is identified with the subspace Cξ ⊕ Hi0 of H by mapping
ξi to ξ.
By the following theorem of Voiculescu, we can now define the reduced free
product of C∗ -algebras. This is not to be confused with the full free product of
C∗ -algebras. In fact, “free product of C∗ -noncommutative probability spaces”
would be a better name. Independently, Avitzour [1] had a similar result (but
without defining the crucial notion of freeness), which he used to extend the
work of Powers [21] about simplicity.
Theorem 3.1 (Voiculescu [26]). Let I be a nonempty set and for all i ∈ I, let
(Ai , φi ) be C∗ -noncommutative probability spaces with faithful GNS-represen-
tations. Then there exists a unique C∗ -n.c.p.s. (A, φ) equipped with injective,
unital ∗ -homomorphisms λi : Ai → A such that
(1) φ ◦ λi = φi ,
(2) the family (λi (Ai ))i∈I is free (with respect to φ),
(3) A = C ∗ (⋃i∈I λi (Ai )),
(4) the GNS-representation of φ is faithful.
We then denote by (A, φ) = ∗i∈I (Ai , φi ) or simply A = ∗i∈I Ai the reduced
free product.
Proof. Existence: Let Hi = L2 (Ai , φi ), ξi = ξφi and (H, ξ) = ∗i∈I (Hi , ξi ),
σi : B(Hi ) → B(H). We put λi := σi ◦ πφi : Ai → B(H). Furthermore, we
define A = C ∗ (⋃i∈I λi (Ai )) and φ(·) = ⟨· ξ, ξ⟩.
Now, (1) is easy to see, (3) is by definition and (4) by the uniqueness prop-
erty for the GNS-representation. For (2), suppose aj ∈ Aij so that φij (aj ) = 0
(i.e., φ(λij (aj )) = 0, which yields ⟨âj , ξij ⟩ = 0). We have to show that the
equation φ(λi1 (a1 ) · · · λin (an )) = 0 holds for all i1 , . . . , in ∈ I, ik ≠ ik+1 . Note
that
λin (an )ξ = σin (πφin (an ))ξ = ân ∈ Hi0n ⊆ H
and
λin−1 (an−1 )ân = ân−1 ⊗ ân ∈ Hi0n−1 ⊗ Hi0n .
By iteration we obtain
φ(λi1 (a1 ) · · · λin (an )) = hλi1 (a1 ) · · · λin (an )ξ, ξi
= hâ1 ⊗ · · · ⊗ ân , ξi
= 0.
Uniqueness: Note that the linear span of the set
{1} ∪ {λi1 (a1 ) · · · λin (an ) | aj ∈ Aij ∩ ker φij , ij 6= ij+1 }
is dense in A and freeness determines φ and h·, ·i uniquely. 
Proposition 3.2. Some facts about the above construction:
(1) If the φi are traces for all i ∈ I, then φ is a trace (i.e. the tracial property
φ(ab) = φ(ba) is fulfilled).
(2) If the φi are faithful for all i ∈ I, then φ is faithful.
We leave the proof of (1) as an exercise. The first proof of (2) was in [8],
but a better proof is by E. Ricard, including the case of amalgamated free
products. His argument is reproduced in the paper [14].
Example 3.3. For a discrete group G we define the reduced group C∗ -algebra
by
Cr∗ (G) = span‖·‖ {λ(g) | g ∈ G} ⊆ B(ℓ2 (G)).
Here λ is the left regular representation of G, given by λ(g)δh = δgh . A state
on Cr∗ (G) is given by τG (·) = ⟨· δe , δe ⟩. Thus
τG (λ(g)) = 1 if g = e, and τG (λ(g)) = 0 otherwise.
Let (Gi )i∈I be a family of discrete groups such that G = ∗i∈I Gi . Then
∗i∈I (Cr∗ (Gi ), τGi ) = (Cr∗ (G), τG ).

Why do we want to have positivity in our noncommutative probability


spaces, why do we consider C∗ -noncommutative probability spaces? So that
the moments of selfadjoint random variables are given by integration against
probability measures (and more generally for ∗-moments of normal elements):
Let a ∈ A be a random variable in a C∗ -n.c.p.s. (A, φ). If a = a∗ then the
distribution of a is given by a probability measure µa with support equal to
the spectrum of a, which is a compact subset of R, i.e. φ(ak ) = ∫R tk dµa (t).
If a = a∗ and b = b∗ are free, then the free convolution µa+b = µa ⊞ µb is a
probability measure supported on the spectrum of a + b. This follows from the
fact that free copies of a and b can be realized in a C∗ -n.c.p.s., which is by
virtue of the construction of the reduced free product of C∗ -algebras.
The reduced free product construction fulfills the following universal prop-
erty. We consider a C∗ -n.c.p.s. (A, φ) = ∗i∈I (Ai , φi ). Let (B, ψ) be a C∗ -
n.c.p.s. together with injective ∗ -homomorphisms πi : Ai → B such that
ψ ◦ πi = φi . Furthermore, assume that the states φi are faithful for all i ∈ I.
Then, there exists a ∗ -homomorphism π : A → B such that π|Ai = πi and
ψ ◦ π = φ. This is not necessarily true if the φi are nonfaithful, while still hav-
ing faithful GNS representations, of course (see [10] for a counter example).
However, by [4], if (Bi , ψi )i∈I are C∗ -noncommutative probability spaces with
faithful GNS-constructions and if there exist ∗-homomorphisms πi : Ai → Bi
satisfying ψi ◦ πi = φi , then there exists a “free product” ∗-homomorphism
π : A → B, where (B, ψ) := ∗i∈I (Bi , ψi ), satisfying π|Ai = πi and ψ ◦ π = φ.
We will also use the notion of a W ∗ -noncommutative probability space (ab-
breviated W ∗ -n.c.p.s.), which is a pair (M, φ), where M is a von Neumann
algebra and φ is a normal state. Recall that a von Neumann algebra is a
unital ∗ -subalgebra of B(H) which is closed in the weak operator topology.
Since M contains plenty of projections, this is the right place to do noncom-
mutative measure theory. The following is a von Neumann algebra analog of
Theorem 3.1.
Theorem 3.4 (Voiculescu [26]). Let I be a nonempty set and for all i ∈ I, let
(Ai , φi ) be W ∗ -noncommutative probability spaces with faithful GNS-represen-
tations. Then there exists a unique W ∗ -n.c.p.s. (A, φ) equipped with injective,
normal ∗ -homomorphisms λi : Ai → A such that
(1) φ ◦ λi = φi ,
(2) the (λi (Ai ))i∈I are free (with respect to φ),
(3) A = W ∗ (⋃i∈I λi (Ai )),
(4) the GNS-representation of φ is faithful.
Proof. Let (A, φ̊) = ∗i∈I (Ai , φi ) be the C∗ -algebraic free product with A rep-
resented on the free product Hilbert space L2 (A, φ̊). By construction, φ̊ is the
restriction to A of the vector state ⟨·ξ, ξ⟩, where ξ = 1̂. Let A now denote the
s.o.t. closure of this C∗ -algebra, and let φ be the vector state ⟨·ξ, ξ⟩ on A. □
The analog of Proposition 3.2 was proved by Voiculescu.
Proposition 3.5 (Voiculescu [26]). About the W∗ -algebra free product:
(1) If the φi are traces for all i ∈ I, then φ is a trace.
(2) If the φi are faithful for all i ∈ I, then φ is faithful.
Example 3.6. For a discrete group G we define the group von Neumann
algebra L(G) as the closure of Cr∗ (G) ⊆ B(ℓ2 (G)) in the strong operator topology.
Then the canonical trace τ is given by τG (·) = h· δe , δe i. If (Gi )i∈I is a family
of discrete groups such that G = ∗i∈I Gi , then
∗i∈I (L(Gi ), τGi ) = (L(G), τG ).

4. Applications of random matrices to von Neumann algebras


Consider a usual probability space (Ω, ω) and define
L = ⋂_{1≤p<∞} Lp (Ω, ω).
Let E : L → C be the expectation E(f ) = ∫ f dω. For f ∈ L, we write
f ∼ N (0, σ 2 ) to mean that f has a Gaussian distribution with first moment
zero and second moment σ 2 . The algebra MN (L) of N × N -matrices over L
endowed with the tracial expectation φN = E ◦ trN is a ∗-noncommutative
probability space.
Definition 4.1. We define the following sorts of random matrices.
(1) A selfadjoint Gaussian random matrix is X = (xij ) ∈ MN (L) such that
xii ∼ N (0, σ²), Re xij , Im xij ∼ N (0, σ²/2) for i < j; xij = x̄ji ; the random
variables (xii )i , (Im xij )i<j and (Re xij )i<j are independent. We denote
this by X ∈ SGRM(N, σ²).
(2) A Gaussian random matrix is X = (xij ) ∈ MN (L) such that Re xij ,
Im xij ∼ N (0, σ²/2) and (Im xij )i,j and (Re xij )i,j are independent. We
denote this by X ∈ GRM(N, σ²).
(3) A Haar unitary random matrix is a UN -valued random matrix U ∈ MN (L)
(i.e., U (ω) is a unitary for all ω ∈ Ω) with Haar measure distribution. We
denote this by U ∈ HURM(N ).
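To make Definition 4.1 concrete, here is an illustrative sampling sketch (ours, using NumPy); it generates an SGRM(N, 1/N ) matrix, compares the moments of its empirical eigenvalue distribution with those of the semicircle law, and produces a Haar-distributed unitary by the standard QR recipe.

```python
import numpy as np

# Illustrative sampling of the ensembles in Definition 4.1 (sketch of ours).
rng = np.random.default_rng(0)
N = 1000
sigma2 = 1.0 / N

# X in SGRM(N, 1/N): symmetrize a complex Gaussian matrix so that the diagonal
# entries have variance sigma^2 and the real/imaginary parts of off-diagonal
# entries have variance sigma^2 / 2.
G = (rng.normal(0.0, np.sqrt(sigma2 / 2), (N, N))
     + 1j * rng.normal(0.0, np.sqrt(sigma2 / 2), (N, N)))
X = (G + G.conj().T) / np.sqrt(2)

eig = np.linalg.eigvalsh(X)
# The semicircle law has even moments equal to the Catalan numbers 1, 2, 5, ...
for n, catalan in [(2, 1), (4, 2), (6, 5)]:
    print(f"tr(X^{n}) ~ {np.mean(eig**n):.3f}   semicircle moment = {catalan}")

# U in HURM(N): QR factorization of a Gaussian matrix with phase correction.
Z = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2)
Q, R = np.linalg.qr(Z)
U = Q * (np.diagonal(R) / np.abs(np.diagonal(R)))
print("|tr(U)|/N ~", abs(np.trace(U)) / N, " U U* = I:",
      np.allclose(U @ U.conj().T, np.eye(N)))
```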
Definition 4.2. Let K be an index set and let a(k, N ) ∈ MN (L). We say
that the family (a(k, N ))k∈K converges in ∗ -moments (or in ∗ -distribution) if
lim φN (a(k1 , N )ε1 a(k2 , N )ε2 · · · a(kp , N )εp )
N →∞
exists for all p ∈ N, εj ∈ {1, ∗} and kj ∈ K. The family converges in ∗ -moments
to a family (ak )k∈K of random variables in a ∗-n.c.p.s. (A, φ) (which may be
called limit random variables) if
limN →∞ φN (a(k1 , N )ε1 a(k2 , N )ε2 · · · a(kp , N )εp ) = φ(a_{k1}^{ε1} a_{k2}^{ε2} · · · a_{kp}^{εp} )
for all εj ∈ {1, ∗} and kj ∈ K. We further say that the family (a(k, N ))k∈K
is asymptotically free (respectively, asymptotically ∗-free) if it converges in ∗ -
moments to a family of random variables that is free (respectively, ∗-free).
Analogously, a family of sets of random variables is asymptotically free or
∗-free if the corresponding family of sets of limit random variables is free or
∗-free.
Voiculescu’s discovery [28] that certain random matrices are asymptotically
free (we call this the matrix model ) was a key result for applications of free
probability theory to von Neumann algebras. A strengthening of Voiculescu’s
theorem from [28] is the following.
Theorem 4.3 (Voiculescu [33]). For s ∈ N let X(s, N ) ∈ SGRM(N, 1/N ),
Z(s, N ) ∈ GRM(N, 1/N ) and U (s, N ) ∈ HURM(N ). Assume that they are all
independent, i.e. for each N , the σ-algebras
(ΣXs,N )_{s=1}^{∞} , (ΣZs,N )_{s=1}^{∞} , (ΣUs,N )_{s=1}^{∞}
generated by these random matrices are independent. Suppose that B(s, N ) ∈
MN (C) is such that the family (B(s, N ))s∈N converges in ∗ -moments and for
all s, ‖B(s, N )‖ remains bounded as N → ∞.
Then the family


(X(s, N ))s∈N , (Z(s, N ))s∈N , (U (s, N ))s∈N , (B(s, N ))s∈N
converges in ∗ -moments and the family
({X(s, N )})s∈N , ({Z(s, N )})s∈N , ({U (s, N )})s∈N , {B(s, N ) | s ∈ N}
of sets of random variables is asymptotically ∗-free. The limit in ∗ -distribution
of
• X(s, N ) is a semicircular variable,
• Z(s, N ) is a circular variable,
• U (s, N ) is a Haar unitary.
We now turn to applications of the matrix model to von Neumann algebras.
Note that (L(Z), τZ ) is isomorphic to (L∞ (T), ∫ · dλ). The von Neumann al-
gebras L(Fn ) are factors (for n ≥ 2) since the groups Fn are ICC groups, and
are called the free group factors. Most applications of the matrix model to free
group factors are based on the fact that, by standard L∞ functional calculus,
L(Z) has a semicircular element generator, so that a family of n free semi-
circular elements generates L(Fn ) (also when n = ∞). The first and, perhaps,
most natural of these applications were by Voiculescu (see, for example, Theo-
rem 4.6 below). Let us, however, consider the following theorem, whose proof
is a bit easier.
Theorem 4.4 ([6]). For k ≥ 2 and 2 ≤ n ≤ ∞,
(L(Z), τZ ) ∗ (Mk (C), trk ) ≅ (L(Fk² ) ⊗ Mk (C), τFk² ⊗ trk ),
(L(Fn ), τFn ) ∗ (Mk (C), trk ) ≅ (L(Fnk² ) ⊗ Mk (C), τFnk² ⊗ trk ).
Proof. We only prove the first isomorphism for k = 2, the other cases being
similar.
Put M = L(Z) ∗ M2 (C). Let x = x∗ ∈ L(Z) be a semicircular generator
and (e_{ij}^{(2)} )1≤i,j≤2 ∈ M2 (C) matrix units. Then, x can be modeled by X(N ) ∈
SGRM(N, 1/N ) and, taking for convenience N even, e_{ij}^{(2)} can be modeled by
matrix units Eij (N ) ∈ MN (C) that are determined by
E12 (N ) = Σ_{i=1}^{N/2} e_{i,i+(N/2)}^{(N )} .

By Theorem 4.3, X(N ) and {Eij (N ) | 1 ≤ i, j ≤ 2} are asymptotically
free and, therefore, model the chosen generators of L(Z) ∗ M2 (C). Note that
e11 Me11 is generated as a von Neumann algebra by the set
{e11 xe11 , e11 xe21 , e12 xe21 }.
The family

E11 (N )X(N )E11 (N ), E11 (N )X(N )E21 (N ), E12 (N )X(N )E21 (N )
converges in ∗-moments to this generating set. Since E11 (N )X(N )E11 (N )
and E12 (N )X(N )E21 (N ) are in SGRM(N/2, 1/N ) and E11 (N )X(N )E21 (N ) ∈
GRM(N/2, 1/N ) and these are independent, applying Theorem 4.3, we obtain that
e11 Me11 is generated by four free semicircular elements e11 xe11 , Re(e11 xe21 ),
Im(e11 xe21 ) and e12 xe21 . Hence,
e11 Me11 = ∗_{i=1}^{4} (L(Z), τZ ) ≅ (L(F4 ), τF4 ).
Finally, we have M ≅ L(F4 ) ⊗ M2 (C). □
What happens if we take the first isomorphism of Theorem 4.4 and let k
tend to infinity? Recall that the hyperfinite II1 -factor R is given by the closure
of ⋃k∈N M2k (C) in the weak operator topology. Could we hope to deduce an
isomorphism L(Z) ∗ R ≅ L(F∞ ) ⊗ R from the above theorem? Such hope
would be in vain, since L(F∞ ) ⊗ R has property Γ whereas L(Z) ∗ R has not.
It is a question of central sequences. (Note that Murray and von Neumann
showed already in [16] that L(F2 ) ≇ R because L(F2 ) does not have central
sequences.) Instead, we have the following theorem.
Theorem 4.5 ([6]). The free product von Neumann algebra L(Z) ∗ R is iso-
morphic to the free group factor L(F2 ). Consequently, for n ≥ 2,
L(Fn ) ∗ R ≅ L(Fn+1 ).
The proof is similar in spirit to the proof of the above theorem, dividing
the semicircular element x = x∗ ∈ L(Z) into finer and finer matrix blocks and
using asymptotic freeness results for random matrices.
Using his matrix model, Voiculescu proved another theorem on the free
group factors.
Theorem 4.6 (Voiculescu [27]). If k ∈ N and n ∈ {2, 3, . . . , ∞}, then
L(Fn ) ≅ L(F1+k²(n−1) ) ⊗ Mk (C).
Let us now turn to Murray and von Neumann’s [16] rescaling of a II1 -
factor M with unique tracial state τ . Recall that a II1 -factor is an infinite-
dimensional von Neumann algebra M with trivial center (i.e. Z(M ) = C1) and
a normal faithful tracial state τ : M → C. II1 -factors have nice properties,
in some sense they behave like matrix algebras but infinite ones. We have
already encountered the free group factors L(Fn ), which are of type II1 , and
the hyperfinite II1 -factor R.
For 0 < t < 1 and a projection p ∈ M with τ (p) = t we define Mt = pM p. If
0 < t < ∞, then Mt = p(M ⊗ Mn (C))p , where p ∈ M ⊗ Mn (C) is a projection
such that τ ⊗ Trn (p) = t. Here, Trn is the unnormalized trace on Mn (C). This
is the rescaling of the II1 -factor M by t, and the result Mt is a II1 -factor. Note
that if p, q ∈ M are projections such that τ (p) = τ (q) then there exists v ∈ M
such that v ∗ v = p and vv ∗ = q, which implies that the choice of p is irrelevant,
so long as τ ⊗ Trn (p) = t. We have (Ms )t ∼ = Mst .
Definition 4.7. The fundamental group of a II1 -factor M is the subgroup of
the multiplicative group R∗+ := {t ∈ R | t > 0} defined by
F (M ) = {t ∈ R∗+ | Mt ≅ M }.
Theorem 4.8 ([16]). F (R) = R∗+ .


An immediate consequence of Theorem 4.6 is the following result.
Corollary 4.9 (Voiculescu [27]). F (L(F∞ )) contains Q∗+ .
This was improved upon by F. Rădulescu.
Theorem 4.10 (Rădulescu [22]). F (L(F∞ )) = R∗+ .
Further results are the following. Note that L(Z2 ) is the group von Neumann
algebra of the two-element group, and is simply C ⊕ C equipped with the trace
assigning value 1/2 to each minimal projection.
Proposition 4.11 ([7]). For two tracial von Neumann algebras A and B, we
have
(A ⊗ M2 (C)) ∗ (B ⊗ M2 (C)) ≅ (A ∗ B ∗ L(F3 )) ⊗ M2 (C),
(A ⊗ L(Z2 )) ∗ (B ⊗ M2 (C)) ≅ (A ∗ A ∗ B ∗ L(F2 )) ⊗ M2 (C),
(A ⊗ L(Z2 )) ∗ (B ⊗ L(Z2 )) ≅ (A ∗ A ∗ B ∗ B ∗ L(Z)) ⊗ M2 (C).
Combined with Theorem 4.5 and Voiculescu’s Theorem 4.6, the first of the
above isomorphisms yields the following corollary.
Corollary 4.12 ([7]). R ∗ R ∼= L(F2 ).
Proof. We have
R ∗ R ≅ (R ⊗ M2 (C)) ∗ (R ⊗ M2 (C))
≅ (R ∗ R ∗ L(F3 )) ⊗ M2 (C)
≅ L(F5 ) ⊗ M2 (C) ≅ L(F2 ). □
One further result, to set the stage for the next section, is the following.
Theorem 4.13 (Rădulescu [23]). For any two integers k, n ≥ 2, the rescaled
factor L(Fn )1/√k is isomorphic to L(F1+k(n−1) ).

5. Interpolated free group factors and


some results about free products
The above results suggest a way of defining “interpolated” free group fac-
tors, i.e. free group factors L(Fn ) also for noninteger values n. This was done by
Rădulescu and the author independently and by technically different methods.
We follow the author’s approach. Note that the theorem in this full general-
ity was proved by Rădulescu but not by the author, whose result lacked one
implication (see the proof below for details).
Theorem 5.1 (Rădulescu [24], Dykema [7]). There is a family (L(Fr ))1<r≤∞
of II1 -factors such that:
(1) If r ∈ {2, 3, 4, . . . , ∞} then L(Fr ) is the indicated free group factor.
(2) We have L(Fr )t ∼ = L(F1+t−2 (r−1) ) for all t ∈ (0, ∞).
(3) L(Fr ) ∗ L(Fs ) ∼= L(Fr+s ).
Furthermore, the isomorphism problem for the free group factors boils down to
the following dichotomy.
• either L(Fr ) ∼
= L(Fs ) for all 1 < r, s ≤ ∞ and then F (L(Fr )) = R∗+ ,
• or L(Fr ) ≇ L(Fs ) for all 1 < r < s ≤ ∞ and then F (L(Fr )) = {1}.
Proof. First we construct the interpolated free group factors. We consider
R ∗ L(F∞ ). Recall that L(F∞ ) is generated by a free semicircular family
(xn )n∈N . Let p1 , p2 , . . . ∈ R be projections such that r = 1 + Σj τ (pj )² . We
define
L(Fr ) := W ∗ (R ∪ {pj xj pj | 1 ≤ j < ∞}) ⊂ R ∗ L(F∞ ).
Using the matrix model, we can show that this is well-defined (i.e., it is inde-
pendent of the choice of the pj ).
To prove (2), consider L(Fr ) for 1 < r ≤ ∞ and let t ∈ (0, 1). Choose
a realization of L(Fr ) as above, such that there is a projection q ∈ R with
τ (q) = t and pj ≤ q for all j. We obtain
L(Fr )t ≅ qL(Fr )q = W ∗ (qRq ∪ {pj xj pj | 1 ≤ j < ∞}) ≅ L(Fx ),
where, since qL(Fr )q is endowed with the trace τ (q)−1 τ |qL(Fr )q , we have
x = 1 + Σj (t−1 τ (pj ))² = 1 + t−2 (r − 1).

This proves (2) in the case t ≤ 1, and the case t > 1 follows from this.
Assertion (3) follows from the result R ∗ R ∼ = L(Z) ∗ R and the above
construction of the interpolated free group factors.
In order to show the dichotomy result, let us first assume that L(Fr ) is
isomorphic to L(F∞ ) for some 1 < r < ∞. For every number s ∈ (1, ∞) we
find a 0 < t < ∞ such that s = 1 + t−2 (r − 1) and hence we obtain
= L(Fr )t ∼
L(Fs ) ∼ = L(F∞ )t = L(F∞ ).
Second, if L(Fr ) is isomorphic to L(Fs ) for some 1 < r < s < ∞, then for all
0 < t < ∞,
L(F1+t−2 (r−1) ) ≅ L(Fr )t ≅ L(Fs )t ≅ L(F1+t−2 (s−1) ).
Now, let 0 < ε < 1 and choose 0 < t < ∞ such that ε = t−2 (r − 1). Then
1 + t−2 (s − 1) = 1 + ε (s − 1)/(r − 1)
and
L(F3 ) ≅ L(F1+ε ) ∗ L(F2−ε ) ≅ L(F1+ε(s−1)/(r−1) ) ∗ L(F2−ε )
≅ L(F3+ε(s−r)/(r−1) ) ≅ L(F3 )α ,
where α−2 = 1 + (ε/2)(s − r)/(r − 1). By letting ε vary from 0 to 1, we see that F (L(F3 ))
contains an open interval and, since it is a subgroup of R∗+ , it equals R∗+ .
Hence L(F3 ) ∼= L(Fx ) for all x ∈ (1, ∞). Finally, we will show that this implies
also L(F3 ) ∼
= L(F∞ ). (This implication was missing from [7] but was proved
in [24]). We can choose projections pj ∈ R such that τ (pj ) > 0 for all j ≥ 1,
Σ_{j=1}^{∞} τ (pj )² = 2 and p2k−1 = p2k for all k ≥ 1. Thus,
M := W ∗ (R ∪ {pj xj pj | j ≥ 1}) ≅ L(F3 )
and for each k, the von Neumann subalgebra having pk for identity element and
generated by {p2k−1 x2k−1 p2k−1 , p2k x2k p2k } is isomorphic to L(F2 ). However,
from what we have already shown, it follows that L(F2 ) ∼ = L(Fm ) for all integers
m ≥ 2. We may replace {p2k x2k−1 p2k , p2k x2k p2k } by any set that generates
the same von Neumann algebra, and a set of m free semicircular generators
in this von Neumann algebra can be realized as the compressions by p2k of
m free semicircular elements in M. Thus, choosing integers mk ≥ 2, if we
renumber the semicircular family (xj )∞j=1 to be indexed by N × N, we see that
M is isomorphic to the von Neumann algebra generated by
R ∪ {p2k xk,j p2k | k ∈ N, 1 ≤ j ≤ mk },
and this von Neumann algebra is isomorphic to L(F∞ ) provided that we choose
the values of mk to ensure Σ_{k=1}^{∞} mk τ (p2k )² = ∞. □
Recall we have
L(Z) ∗ R ≅ L(F2 ),
R ∗ R ≅ L(F2 ),
M2 (C) ∗ M2 (C) ≅ L(F3 ) ⊗ M2 (C) ≅ L(F3/2 ),
L(Z) ∗ M2 (C) ≅ L(F4 ) ⊗ M2 (C) ≅ L(F7/4 ).

Thus, taking the free product with the hyperfinite factor R behaves like taking
the free product with L(Z), whereas taking the free product with M2 (C) rather
looks like taking the free product with L(F3/4 ), whatever that may be. This
pattern is part of a larger picture, which is summarized in the next result and
ensuing description.
Theorem 5.2 ([5]). Let A and B be von Neumann algebras with normal,
faithful tracial states τA and τB respectively, and assume dim A ≥ 2, dim B ≥ 3.
Suppose each is
(1) either finite dimensional,
(2) or hyperfinite,
(3) or an interpolated free group factor,
(4) or a (possibly infinite) direct sum of these.
Then, the free product (M, τ ) := (A, τA ) ∗ (B, τB ) is of the form M ∼= L(Ft )
or M ∼ = L(Ft ) ⊕ N for some t ∈ (1, ∞] and a finite-dimensional von Neumann
algebra N (which can be described exactly).
To find the parameter t in the preceding theorem, we use a “free dimension”
with the following properties:
(1) fdim(M, τ ) = fdim(A, τA ) + fdim(B, τB ).
(2) fdim(Mk (C)) = 1 − 1/k² .
(3) fdim(R) = 1 or in general fdim(M ) = 1 for any diffuse hyperfinite von
Neumann algebra M .
(4) fdim(L(Ft )) = t.
(5) fdim(A1 ⊕A2 ⊕· · · ) = 1+Σj tj² (fdim(Aj )−1), if the direct sum A1 ⊕A2 ⊕· · ·
is endowed with a trace assigning the value tj to the projection that is the
identity element of the j-th summand.
These rules are heuristic rather than formal. In particular, in view of the
property (4), it is clear that fdim will not be well-defined if the free group
factors turn out to be isomorphic to each other. The point is that fdim is only
used to find t in Theorem 5.2; if the free group factors are isomorphic to each
other, then we don’t need to find t.
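As a bookkeeping aid — heuristic in exactly the same sense as the rules above — one can mechanize the computation of fdim and recover the parameters in the isomorphisms recalled earlier in this section; the code below is ours and only illustrates the arithmetic.

```python
from fractions import Fraction

# Heuristic "free dimension" bookkeeping following rules (1)-(5) above.
def fdim_matrix(k):                  # rule (2): fdim(M_k(C)) = 1 - 1/k^2
    return 1 - Fraction(1, k * k)

FDIM_R = Fraction(1)                 # rule (3): diffuse hyperfinite algebras
def fdim_LF(t):                      # rule (4): fdim(L(F_t)) = t
    return Fraction(t)

def free_product(*dims):             # rule (1): fdim adds under free products
    return sum(dims)

# Examples recalled earlier in this section (L(Z) is diffuse abelian, fdim 1):
print(free_product(fdim_LF(1), FDIM_R))              # L(Z) * R      -> 2
print(free_product(FDIM_R, FDIM_R))                  # R * R         -> 2
print(free_product(fdim_matrix(2), fdim_matrix(2)))  # M2 * M2       -> 3/2
print(free_product(fdim_LF(1), fdim_matrix(2)))      # L(Z) * M2(C)  -> 7/4
```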

6. Further results about free group factors


In fact, there is a well-defined notion of free dimension, namely, Voiculescu’s
free entropy dimension. It is a quantity assigned to finite sets of operators in
tracial von Neumann algebras. There is a microstate version and nonmicrostate
version. (See the papers [29, 30, 31, 32, 33, 34], or, for a survey, [35].) It
is known that if these two versions agree on all sets of operators, then free
entropy dimension is an invariant for von Neumann algebras. This would solve
the free group factor isomorphism problem! Biane, Capitaine and Guionnet
proved in [3] that one inequality always holds. K. Jung showed in [15] that
for hyperfinite algebras, the free entropy dimension of any finite generating set
coincides with the heuristically defined number fdim introduced above.
We end with a brief review (by no means complete) of some further results
about free group factors.
If M is a von Neumann algebra, then a Cartan subalgebra is a maximal
abelian selfadjoint subalgebra A ⊂ M such that the normalizer subalgebra
NM (A) of A, namely, the von Neumann algebra generated by the set of all
unitaries u ∈ M such that uAu∗ = A, is all of M . The next theorem, which
Voiculescu proved using free entropy, provided the first example of a von Neu-
mann algebra with no Cartan subalgebra.
Theorem 6.1 (Voiculescu [31]). L(Ft ) has no Cartan subalgebra, t ∈ (1, ∞).
A II1 -factor is prime if it is not isomorphic to a (von Neumann algebra)
tensor product M ⊗ N where M, N are II1 -factors. The next theorem provided
the first examples of prime II1 -factors with separable predual.
Theorem 6.2 (Ge [11]). L(Ft ) is prime, t ∈ (1, ∞).
A factor M is solid, if for all diffuse abelian subalgebras A ⊂ M the von
Neumann algebra M ∩ A′ is hyperfinite.
Theorem 6.3 (Ozawa [17]). L(Fn ) is solid, n ∈ N, n ≥ 2.
In fact, Ozawa proved the stronger result that L(G) is solid whenever G is
a hyperbolic ICC group.
This theorem has been strengthened by Ozawa and Popa. A II1 -factor M
is strongly solid, if for all diffuse hyperfinite subalgebras A ⊂ M of M , the
normalizer subalgebra NM (A) is hyperfinite.

Theorem 6.4 (Ozawa, Popa [19]). L(Ft ) is strongly solid, t ∈ (1, ∞).

In particular, this implies that free products such as (L(Ft ) ⊗ R) ∗ L(Z) and
(L(Ft ) ⊗ L(Z)) ∗ L(Z) are not interpolated free group factors.
From Theorem 5.1, we have L(Fr ) ⊗ R ∼ = L(Fs ) ⊗ R for all 1 < r, s < ∞
but not L(F2 ) ⊗ R ∼= L(F ∞ ) ⊗ R. Though it is not a priori clear whether such
an isomorphism would imply isomorphism of free group factors, this does, in
fact, follow from the following result.

Theorem 6.5 (Popa [20]). If for non-Γ II1 -factors N1 , N2 we have N1 ⊗ R ≅ N2 ⊗ R,
then there exists t ∈ (0, ∞) such that (N1 )t ≅ N2 .

There is also the following Kurosh-type theorem for II1 -factors. A II1 -factor
is semi-exact if it contains a weakly dense exact C∗ -algebra. It is semi-solid if
the relative commutant of every subfactor of type II1 is hyperfinite.

Theorem 6.6 (Ozawa [18]). Let A1 , . . . , An and B1 , . . . , Bm be nonprime,
nonhyperfinite, semi-exact II1 -factors with n > m ≥ 1. Let A0 and B0 be
semi-solid, semi-exact II1 -factors. Then

A1 ∗ A2 ∗ . . . ∗ An ≇ B1 ∗ B2 ∗ . . . ∗ Bm ,
A0 ∗ A1 ∗ A2 ∗ . . . ∗ An ≇ B0 ∗ B1 ∗ B2 ∗ . . . ∗ Bm .

Example 6.7. For n > m ≥ 1, t0 , . . . , tn , s0 , . . . , sm ∈ (1, ∞], we have

(L(Ft1 ) ⊗ R) ∗ · · · ∗ (L(Ftn ) ⊗ R) ≇ (L(Fs1 ) ⊗ R) ∗ · · · ∗ (L(Fsm ) ⊗ R)

and

L(Ft0 ) ∗ (L(Ft1 ) ⊗ R) ∗ · · · ∗ (L(Ftn ) ⊗ R)
≇ L(Fs0 ) ∗ (L(Fs1 ) ⊗ R) ∗ · · · ∗ (L(Fsm ) ⊗ R).

In contrast, the following result holds.

Proposition 6.8. We have

L(F∞ ) ∗ (L(F∞ ) ⊗ L(Z)) ≅ L(F∞ ) ∗ (L(F∞ ) ⊗ L(Z)) ∗ (L(F∞ ) ⊗ L(Z)).

Proof. Starting with L(F∞ ) ⊗ L(Z) and splitting off either from the one side
of the tensor product or the other, we have the isomorphisms

L(F∞ ) ⊗ L(Z) ≅ (L(F∞ ) ⊗ L(Z)) ⊗ M2 (C) ≅ (L(F∞ ) ⊗ L(Z)) ⊗ L(Z2 ).
Using each of these and Proposition 4.11, we have
(L(F∞ ) ⊗ L(Z)) ∗ L(F∞ )
≅ ((L(F∞ ) ⊗ L(Z)) ⊗ L(Z2 )) ∗ (L(F∞ ) ⊗ M2 (C))
≅ ((L(F∞ ) ⊗ L(Z)) ∗ (L(F∞ ) ⊗ L(Z)) ∗ L(F∞ ) ∗ L(F2 )) ⊗ M2 (C)
≅ ((L(F∞ ) ⊗ L(Z)) ∗ (L(F∞ ) ⊗ L(Z)) ∗ L(F∞ )) ⊗ M2 (C)
and
(L(F∞ ) ⊗ L(Z)) ∗ L(F∞ )
≅ ((L(F∞ ) ⊗ L(Z)) ⊗ M2 (C)) ∗ (L(F∞ ) ⊗ M2 (C))
≅ ((L(F∞ ) ⊗ L(Z)) ∗ L(F∞ ) ∗ L(F3 )) ⊗ M2 (C)
≅ ((L(F∞ ) ⊗ L(Z)) ∗ L(F∞ )) ⊗ M2 (C).
Now taking each of the last right-indented isomorphisms and removing the
“⊗M2 (C)” yields the statement of the proposition. 

7. Exercises
Exercise 7.1. Suppose (A, φ) = ∗i∈I (Ai , φi ) is a reduced free product of C∗ -
algebras (or a free product of von Neumann algebras) and that for every i ∈ I,
φi is a trace. Show that φ is a trace.
Exercise 7.2. A useful tool for proving isomorphisms. Let (D, φ) = (A, φA ) ∗
(B, φB ) be a reduced free product of C∗ -algebras and suppose there is a central
projection p ∈ A. (For sake of clarity, assume φA and φB are faithful or even
traces, if you like.) Take the subalgebra A1 = Cp + (1 − p)A of A and let D1 be
the C∗ -subalgebra of D generated by A1 ∪ B. (Thus, D1 is the corresponding
reduced free product of A1 and B.) Show that pDp is generated by pD1 p and
pA, and that these two algebras are free in the C∗ -n.c.p.s. (pDp, φ(p)−1 φ|pDp ).
The next exercise is based partly on classical knowledge about two projec-
tions (or two subspaces) in Hilbert space that has been thoroughly explored in
the literature; see [13] and [25], for example.
Exercise 7.3. Let A be a unital C∗ -algebra that is generated by projections
p and q.
(a) The element 1 − p − q + pq + qp lies in the center of A (see also the numerical sketch after this exercise).
(b) Every irreducible representation of A (namely, a ∗ -representation whose
image has no nontrivial reducing subspaces) must be one- or two-dimen-
sional.
(c) Every irreducible unital representation is unitarily equivalent to one of
these:
(1) p 7→ 0, q 7→ 0 in C,
(2) p 7→ 0, q 7→ 1 in C,
(3) p 7→ 1, q 7→ 0 in C,
(4) p 7→ 1, q 7→ 1 in C,
(5) p 7→ [[1, 0], [0, 0]], q 7→ [[t, √(t(1−t))], [√(t(1−t)), 1 − t]] in M2 (C), for some t ∈ (0, 1).
(d) Let
à = {f : [0, 1] → M2 (C) | f continuous, f (0) and f (1) diagonal}
with P, Q ∈ à given by the functions
P ∼ [[1, 0], [0, 0]], Q ∼ [[t, √(t(1−t))], [√(t(1−t)), 1 − t]].

Show that A is a quotient of à by a ∗ -homomorphism sending P to p and
Q to q.
(e) Suppose τ is a faithful tracial state on A. Suppose you know the distri-
bution of pqp with respect to τ and the values τ (p) and τ (q). Write down
the C∗ -algebra A in terms of these.
(f) Remark: In the case that p and q are free with respect to τ , the S-transform
and Stieltjes inversion can be used to find explicitly the distribution of pqp
knowing only the values τ (p) and τ (q) (see [2]) and, thus, to determine the
C∗ -algebra generated by two free projections (see [9, Prop. 2.7]).
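For Exercise 7.3, the following quick numerical sketch (ours) illustrates parts (a) and (c)(5): in the two-dimensional representation the element 1 − p − q + pq + qp is a scalar matrix, hence it commutes with p and q.

```python
import numpy as np

# Numerical illustration for Exercise 7.3(a) and (c)(5): with the 2x2 model,
# 1 - p - q + pq + qp is a scalar multiple of the identity (here t * I).
for t in (0.1, 0.3, 0.7):
    p = np.array([[1.0, 0.0], [0.0, 0.0]])
    s = np.sqrt(t * (1 - t))
    q = np.array([[t, s], [s, 1 - t]])
    c = np.eye(2) - p - q + p @ q + q @ p
    print(f"t = {t}: c = {c.round(6).tolist()}",
          " [c,p]=0:", np.allclose(c @ p, p @ c),
          " [c,q]=0:", np.allclose(c @ q, q @ c))
```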

References
[1] D. Avitzour, Free products of C ∗ -algebras, Trans. Amer. Math. Soc. 271 (1982), no. 2,
423–435. MR0654842
[2] H. Bercovici and D. Voiculescu, Lévy-Hinčin type theorems for multiplicative and ad-
ditive free convolution, Pacific J. Math. 153 (1992), no. 2, 217–248. MR1151559
[3] P. Biane, M. Capitaine, and A. Guionnet, Large deviation bounds for matrix Brownian
motion, Invent. Math. 152 (2003), no. 2, 433–459. MR1975007
[4] E. F. Blanchard and K. J. Dykema, Embeddings of reduced free products of operator
algebras, Pacific J. Math. 199 (2001), no. 1, 1–19. MR1847144
[5] K. Dykema, Free products of hyperfinite von Neumann algebras and free dimension,
Duke Math. J. 69 (1993), no. 1, 97–119. MR1201693
[6] K. Dykema, On certain free product factors via an extended matrix model, J. Funct.
Anal. 112 (1993), no. 1, 31–60. MR1207936
[7] K. Dykema, Interpolated free group factors, Pacific J. Math. 163 (1994), no. 1, 123–135.
MR1256179
[8] K. J. Dykema, Faithfulness of free product states, J. Funct. Anal. 154 (1998), no. 2,
323–329. MR1612705
[9] K. J. Dykema, Simplicity and the stable rank of some free product C ∗ -algebras, Trans.
Amer. Math. Soc. 351 (1999), no. 1, 1–40. MR1473439
[10] K. J. Dykema and M. Rørdam, Projections in free product C ∗ -algebras, Geom. Funct.
Anal. 8 (1998), no. 1, 1–16. MR1601917
Erratum in Geom. Funct. Anal. 10 (2000), no. 4, 975. MR1791146
[11] L. Ge, Applications of free entropy to finite von Neumann algebras. II, Ann. of Math.
(2) 147 (1998), no. 1, 143–157. MR1609522
[12] L. M. Ge, On “Problems on von Neumann algebras by R. Kadison, 1967”, Acta Math.
Sin. (Engl. Ser.) 19 (2003), no. 3, 619–624. MR2014042
[13] P. R. Halmos, Two subspaces, Trans. Amer. Math. Soc. 144 (1969), 381–389.
MR0251519
[14] N. A. Ivanov, On the structure of some reduced amalgamated free product C ∗ -algebras,
Internat. J. Math. 22 (2011), no. 2, 281–306. MR2782689
[15] K. Jung, The free entropy dimension of hyperfinite von Neumann algebras, Trans. Amer.
Math. Soc. 355 (2003), no. 12, 5053–5089 (electronic). MR1997595
[16] F. J. Murray and J. von Neumann, On rings of operators. IV, Ann. of Math. (2) 44
(1943), 716–808. MR0009096
[17] N. Ozawa, Solid von Neumann algebras, Acta Math. 192 (2004), no. 1, 111–117.
MR2079600
[18] N. Ozawa, A Kurosh-type theorem for type II1 factors, Int. Math. Res. Not. 2006, Art.
ID 97560, 21 pp. MR2211141
[19] N. Ozawa and S. Popa, On a class of II1 factors with at most one Cartan subalgebra,
Ann. of Math. (2) 172 (2010), no. 1, 713–749. MR2680430
[20] S. Popa, Deformation and rigidity for group actions and von Neumann algebras, in
International Congress of Mathematicians. Vol. I, 445–477, Eur. Math. Soc., Zürich,
2007. MR2334200
[21] R. T. Powers, Simplicity of the C ∗ -algebra associated with the free group on two gen-
erators, Duke Math. J. 42 (1975), 151–156. MR0374334
[22] F. Rădulescu, The fundamental group of the von Neumann algebra of a free group with
infinitely many generators is R+ \ 0, J. Amer. Math. Soc. 5 (1992), no. 3, 517–532.
MR1142260
[23] F. Rădulescu, Stable equivalence of the weak closures of free groups convolution alge-
bras, Comm. Math. Phys. 156 (1993), no. 1, 17–36. MR1234103
[24] F. Rădulescu, Random matrices, amalgamated free products and subfactors of the von
Neumann algebra of a free group, of noninteger index, Invent. Math. 115 (1994), no. 2,
347–389. MR1258909
[25] I. Raeburn and A. M. Sinclair, The C ∗ -algebra generated by two projections, Math.
Scand. 65 (1989), no. 2, 278–290. MR1050869
[26] D. Voiculescu, Symmetries of some reduced free product C ∗ -algebras, in Operator alge-
bras and their connections with topology and ergodic theory (Buşteni, 1983), 556–588,
Lecture Notes in Math., 1132, Springer, Berlin, 1985. MR0799593
[27] D. Voiculescu, Circular and semicircular systems and free product factors, in Opera-
tor algebras, unitary representations, enveloping algebras, and invariant theory (Paris,
1989), 45–60, Progr. Math., 92, Birkhäuser, Boston, MA, 1990. MR1103585
[28] D. Voiculescu, Limit laws for random matrices and free products, Invent. Math. 104
(1991), no. 1, 201–220. MR1094052
[29] D. Voiculescu, The analogues of entropy and of Fisher’s information measure in free
probability theory. I, Comm. Math. Phys. 155 (1993), no. 1, 71–92. MR1228526
[30] D. Voiculescu, The analogues of entropy and of Fisher’s information measure in free
probability theory. II, Invent. Math. 118 (1994), no. 3, 411–440. MR1296352
[31] D. Voiculescu, The analogues of entropy and of Fisher’s information measure in free
probability theory. III. The absence of Cartan subalgebras, Geom. Funct. Anal. 6 (1996),
no. 1, 172–199. MR1371236
[32] D. Voiculescu, The analogues of entropy and of Fisher’s information measure in free
probability theory. IV. Maximum entropy and freeness, in Free probability theory (Wa-
terloo, ON, 1995), 293–302, Fields Inst. Commun., 12, Amer. Math. Soc., Providence,
RI, 1997. MR1426847
[33] D. Voiculescu, A strengthened asymptotic freeness result for random matrices with ap-
plications to free entropy, Internat. Math. Res. Notices 1998, no. 1, 41–63. MR1601878
[34] D. Voiculescu, The analogues of entropy and of Fisher’s information measure in free
probability theory. V. Noncommutative Hilbert transforms, Invent. Math. 132 (1998),
no. 1, 189–227. MR1618636
[35] D. Voiculescu, Free entropy, Bull. London Math. Soc. 34 (2002), no. 3, 257–278.
MR1887698
Free convolution

Hari Bercovici

1. Introduction
It may have seemed at the inception of free probability theory that the
connection with actual probability theory is somewhat tenuous. Subsequent
developments have shown that such connections exist and are quite deep. For
instance, the asymptotics (as the size of the matrices tends to infinity) of
eigenvalue distributions for sums of large random matrices can often be studied
by considering sums of freely independent random variables [53]. Wigner’s
semicircle law can be viewed as a manifestation of the central limit theorem
in free probability. In these notes we discuss the basic apparatus for studying
sums of freely independent random variables and the free counterparts of the
classical limit theorems of probability theory. We focus especially on the case of
unbounded random variables. Products of freely independent random variables
are discussed briefly. We use some of the material presented in earlier lectures,
particularly the existence of free cumulants as described by Roland Speicher.

2. Limit theorems in classical probability theory


Consider a classical probability space Ω, denote by P the probability mea-
sure, and by EX the expected value of a random variable X. A family of
random variables {Xnm | n, m ∈ N, m ≤ kn } is called an infinitesimal array if
P[|Xnm | > ε] → 0 as n → ∞ uniformly in m. A limit theorem about such an ar-
ray states that the probability distributions of the sums Xn1 +Xn2 +· · ·+Xnkn
have a weak limit as n → ∞. The hypotheses usually include the independence
of {Xn1 , Xn2 , . . . , Xnkn } for every n and kn → ∞ as n → ∞. Here are some
of the most important examples. For the first two examples, we construct
an array out of a sequence X1 , X2 , . . . of independent, identically distributed
random variables defined on Ω.

The author was supported in part by a grant from the National Science Foundation.
(1) If X1 has finite expected value (that is, X1 is integrable) then the averages
(X1 + X2 + · · · + Xn )/n
converge weakly to EX1 . This is the weak law of large numbers.
(2) If EX1 = 0 and EX1² = 1, the central limit theorem states that
(X1 + X2 + · · · + Xn )/√n
converges weakly to the standard Gaussian distribution N (0, 1).
(3) Suppose that the random variables {Xnm | m ≤ n} are independent, and
P[Xnm = 1] = 1 − P[Xnm = 0] = 1/n. In this case the sum Xn1 + Xn2 +
· · · + Xnn converges weakly to a Poisson distribution (see the simulation sketch below).
The proofs of these results are done easily using Fourier analysis. Under suit-
able assumptions on the random variables, a proof may be obtained using
classical cumulants.
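To illustrate example (3), here is a small simulation (ours; the success probability 1/n and the parameter of the limiting Poisson law are sample choices) comparing the distribution of the sums with the Poisson distribution.

```python
import numpy as np
from math import exp, factorial

# The sum of n independent Bernoulli variables with success probability 1/n
# is approximately Poisson with parameter 1 (illustration of example (3)).
rng = np.random.default_rng(0)
n, trials = 1000, 200_000
sums = rng.binomial(n, 1.0 / n, size=trials)   # samples of X_n1 + ... + X_nn

for k in range(6):
    empirical = np.mean(sums == k)
    poisson = exp(-1.0) / factorial(k)
    print(f"P[sum = {k}]  simulated {empirical:.4f}   Poisson(1) {poisson:.4f}")
```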
Voiculescu [51] observed that an analog of (2) holds for freely independent
random variables. The limit distribution is semicircular in this case. There-
fore, it is natural to ask whether the other classical limit theorems have such
counterparts in free probability theory.

3. Limit theorems in free probability theory


The law of large numbers and the central limit theorem are easily obtained in
the free context via cumulants. We recall briefly that, given a random variable
x in a noncommutative space (A, ϕ), one constructs a sequence (κn (x))n∈N of
complex numbers such that
κn (λx) = λn κn (x) and κn (x + y) = κn (x) + κn (y)
for every scalar λ, and for all freely independent pairs (x, y). A crucial property
of the sequence (κn (x))n∈N is that the moments ϕ(xn ) of the variable x can
be written as polynomials in the numbers (κn (x))n∈N .
Assume then that x1 , x2 , . . . are freely independent, identically distributed
random variables.
(1) Denote
yk = (x1 + x2 + · · · + xk )/k
for k ∈ N. We claim that limk→∞ ϕ(ykn ) = ϕ(x1 )n for every n ∈ N. In
other words, yk converges in moments to the constant variable equal to
the expected value of x1 . Indeed, observe that
κn (yk ) = k κn (x1 /k) = (k/k^n ) κn (x1 )
tends to zero except when n = 1, in which case κ1 (yk ) = κ1 (x1 ) = ϕ(x1 ).
Since κn (1) = 0 for n ≥ 2, while κ1 (1) = 1, the conclusion follows.
(2) Assume now that ϕ(x1 ) = 0 and ϕ(x1²) = 1. An analogous calculation
shows that √k yk converges in moments to the centered semicircular law γ
of variance one. This is an absolutely continuous distribution on the real
line, with density
dγ/dt = (1/2π) √(4 − t²) · 1[−2,2] .
The cumulants of this law are equal to zero except for κ2 (γ) = 1. The
calculation done above yields immediately
κn (√k yk ) = (k/k^{n/2} ) κn (x1 ),
and one sees again that these cumulants converge to κn (γ).
Now, we know that general distributions on the real line are not uniquely de-
termined by their moments and, indeed, they may fail to have any moments at
all. In addition, convergence in moments is not equivalent to weak convergence
of probability distributions, even when moments of all orders exist. It is true
however that convergence in moments can be promoted to weak convergence
for many limit distributions, including for instance point masses and semicircle
laws.
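For instance, the even moments of the semicircle law γ are the Catalan numbers,
∫ t^(2k) dγ(t) = (2k)!/(k!(k + 1)!) ≤ 4^k,
and the odd moments vanish; since these moments grow at most geometrically, the compactly supported measure γ is uniquely determined by them, and convergence in moments towards γ does imply weak convergence.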

4. Unbounded random variables


In order to discuss unbounded random variables, let us first consider non-
commutative probability spaces consisting of bounded linear operators on a
Hilbert space H. Assume that A is a unital algebra contained in the algebra
B(H) of bounded linear operators on H, and the “expected value” ϕ : A → C
is given by
ϕ(x) = ⟨xξ, ξ⟩, x ∈ A,
for some unit vector ξ ∈ H. It is convenient to assume that A is a selfadjoint
algebra, and that ϕ is tracial, that is, ϕ(xy) = ϕ(yx) for all x, y ∈ A. The
selfadjoint operators in the algebra A may be viewed as an analog of the space
of (real-valued) bounded measurable functions on a classical probability space.
Carrying this analogy further, the unbounded random variables correspond to
densely defined (unbounded) selfadjoint operators x. Assume that x is such
an operator, and write it as
x = ∫_{−∞}^{∞} t Ex(dt),

where Ex is the spectral measure associated to x.


Definition 4.1. An unbounded selfadjoint operator x is a real random variable
if Ex (σ) ∈ A for all Borel sets σ ⊆ R. Its distribution µx is the measure on R
defined by µx (σ) = ϕ(Ex (σ)). Arbitrary random variables are closed, densely
defined operators z = x + iy, where x and y are real random variables. We
denote by Ã the collection of (unbounded) random variables.

It can be shown [50, Chap. 9, §2] that Ã has an algebra structure such that
A is a subalgebra provided that, for instance, A is closed in the weak operator
topology and ϕ(x∗x) > 0 for every x ∈ A \ {0}. In other words, we can think of
A as sitting inside Ã if A is a von Neumann algebra and ϕ is a faithful tracial
state. Observe that, given a selfadjoint x ∈ Ã and a bounded Borel function
u on R, we have
∫_R u(t) µx(dt) = ϕ(u(x)) = ⟨u(x)ξ, ξ⟩.

Definition 4.2. Two real, unbounded variables x, y are said to be freely in-
dependent if the sets {Ex (σ) | σ Borel} and {Ey (σ) | σ Borel} are freely inde-
pendent in (A, ϕ).
As seen in Ken Dykema’s lectures, the construction of free products shows
that there exist noncommutative probability spaces (A, ϕ) which contain an
infinite sequence (An)_{n=1}^∞ of freely independent algebras, each of which is iso-
morphic to L∞(0, 1) (with the usual expectation). Unbounded random vari-
ables associated to (An, ϕ|An) can also be viewed as random variables in Ã,
and thus it is possible to construct unbounded, freely independent random
variables. In particular, given two probability distributions µ, ν on R, there
exist free random variables x and y such that µx = µ and µy = ν. Note that in
the context of classical probability, independent random variables can be con-
structed using products of probability spaces. This amounts to considering the
tensor products of the corresponding algebras of bounded random variables.
A free counterpart of classical convolution can be constructed because of
the following result (see [52, 41, 24] or the book [62]).
Proposition 4.3. If x, y are freely independent selfadjoint random variables,
then µx+y depends only on µx and µy .
Proof. Let us assume first that x and y are bounded. We use the notation
µ ⊞ ν for the distribution of x + y when x and y are free, bounded, µx = µ,
and µy = ν. The proposition, and the existence of µ ⊞ ν in that case, is
an immediate consequence of moment calculations. Indeed, we know that
κn (x + y) = κn (x) + κn (y) for all n. This uniquely determines the moments
of x + y with respect to ϕ, and hence the measure µx+y itself because it has
compact support.
The case of unbounded random variables can be treated using a continuity
property of the operation ⊞. Recall that for compactly supported probability
distributions µ and µ′ , the Lévy metric d is given by the infimum of all ε > 0
such that
µ((−∞, t − ε)) − ε ≤ µ′ ((−∞, t)) ≤ µ((−∞, t + ε)) + ε
for all t ∈ R. We will make use of the fact (see [24]) that for compactly
supported probability distributions µ, µ′ , ν and ν ′ , we have
d(µ ⊞ ν, µ′ ⊞ ν ′ ) ≤ d(µ, µ′ ) + d(ν, ν ′ ).

Note that the operation ⊞ is even Lipschitz continuous in the stronger Kolmogorov metric dK defined by
dK(µ, ν) = sup_{t∈R} |µ((−∞, t)) − ν((−∞, t))|.

But we will stick to the Lévy metric because it defines the topology of weak
convergence on probability distributions.
The uniform continuity of µ ⊞ ν in both variables, along with the fact that
every probability distribution can be approximated weakly by compactly sup-
ported ones, implies immediately that the operation ⊞ of free additive convo-
lution extends by continuity to arbitrary probability distributions.
We argue now that the equality µx+y = µx ⊞ µy persists for unbounded
random variables x and y. This will conclude the proof of the proposition. To
do this, set xn = Ex ([−n, n])x and yn = Ey ([−n, n])y for n ∈ N, so that xn
and yn are bounded, freely independent, and µxn → µx , µyn → µy weakly. We
have µxn +yn = µxn ⊞ µyn → µx ⊞ µy weakly, and the desired conclusion follows
because µxn +yn → µx+y weakly. The last assertion follows from the fact that
xn +yn coincides with x+y on the common range of the projections Ex ([−n, n])
and Ey ([−n, n]), and the projection Pn = Ex ([−n, n]) ∧ Ey ([−n, n]) satisfies
ϕ(Pn ) → 1 as n → ∞. 

We turn now to the calculation of free additive convolution, again consid-


ering bounded variables first. Assume then that (A, ϕ) is a noncommutative
probability space and x ∈ A. Setting αn = ϕ(x^n) for n ≥ 0 and κn = κn(x)
for n ≥ 1, consider the generating (formal) series
Gx(λ) = Σ_{n≥0} αn/λ^(n+1),    Rx(λ) = Σ_{n≥0} κ_{n+1}·λ^n.

The moment-cumulant formulas are then seen to be equivalent to the single equation
Gx(Rx(λ) + 1/λ) = λ
between formal power series. When A is an algebra of bounded operators, the
series Gx (λ) does in fact converge for large |λ| so that Gx is analytic at infinity
and Gx (∞) = 0. In addition, Gx behaves like 1/λ at infinity and is therefore
conformal in a neighborhood of infinity. It follows that the inverse function
Gx^⟨−1⟩ (with respect to composition) is meromorphic near zero and Gx^⟨−1⟩(λ) − 1/λ
is analytic near zero, hence a convergent power series. This convergent power
series is, of course, Rx (λ). In order to avoid the use of moments, we rewrite
Gx for |λ| > ‖x‖ as follows:
Gx(λ) = Σ_{n≥0} ϕ(x^n)/λ^(n+1) = ϕ( Σ_{n≥0} x^n/λ^(n+1) ) = ϕ((λ − x)^(−1)).
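As the simplest worked example, consider the semicircle law γ from Section 3: one computes
Gγ(λ) = (λ − √(λ^2 − 4))/2,
with the branch of the square root chosen so that Gγ(λ) ∼ 1/λ at infinity. Consequently Gγ^⟨−1⟩(λ) = λ + 1/λ and Rγ(λ) = λ, in accordance with the fact that κ2(γ) = 1 is the only nonzero free cumulant of γ.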

This suggests an extension for unbounded variables. Assume that A ⊂ B(H)


and ϕ is, as before, a vector state.

Definition 4.4. Given a selfadjoint (not necessarily bounded) random variable
x ∈ Ã, we define the Cauchy transform by
Gx(λ) = ϕ((λ − x)^(−1)), λ ∈ C \ R.
The name of the transform recalls its similarity to a Cauchy integral. Stielt-
jes was the first to use Cauchy transforms in the context of moment problems.
Given a probability distribution µ on R, we set
Gµ(λ) = ∫_R 1/(λ − t) dµ(t), λ ∈ C \ R.
The definition of the Cauchy transform Gx has the great advantage of making sense for unbounded variables. Furthermore, it determines µx uniquely.
Proposition 4.5. The functions Gx and Gµx coincide, and Gµ(λ̄) is the complex conjugate of Gµ(λ). A measure µ is uniquely determined by its Cauchy transform Gµ.
Proof. We only verify the last assertion. Given λ = x + iy, the imaginary part
Im Gµ(λ) = −∫_R y/((x − t)^2 + y^2) dµ(t)

is the Poisson integral of µ multiplied by −π. Recall that
(1/π)·∫_{−∞}^{∞} y/((x − t)^2 + y^2) dt = 1,
and
lim_{y↓0} y/((x − t)^2 + y^2) = 0
uniformly in t and x if |t − x| ≥ δ > 0. Define then probability measures µy
for y > 0 by setting
dµy(t) = −(1/π)·Im Gµ(t + iy) dt.
Standard arguments show that µy → µ as y ↓ 0. This is nothing but the
Stieltjes inversion formula. In other words, µ can be recovered from the values
of Gµ near the real line. 

The preceding argument can be refined, as done by Fatou (and, earlier, by H. A. Schwarz when µ is absolutely continuous with continuous density), to show that
lim_{y↓0} −(1/π)·Im Gµ(t + iy)
exists for almost all t (relative to the Lebesgue measure), and it equals the
density of the absolutely continuous part of µ. This is particularly useful when
Gµ is given by an explicit analytic expression which extends to R. It is also
easy to see that lim_{y↓0} iy·Gµ(t + iy) = µ({t}). This determines the atoms
of µ.
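To illustrate both formulas, consider again the semicircle law γ: for t ∈ (−2, 2) the boundary value is Gγ(t + i0) = (t − i√(4 − t^2))/2, so that
−(1/π)·Im Gγ(t + i0) = (1/2π)·√(4 − t^2),
recovering the density of γ, and lim_{y↓0} iy·Gγ(t + iy) = 0 for every t, so γ has no atoms. For a point mass δa, on the other hand, Gδa(λ) = 1/(λ − a) and lim_{y↓0} iy·Gδa(a + iy) = 1 = δa({a}).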

The function Gµ is no longer analytic at infinity if µ has unbounded support, but it still retains some of the behavior of 1/λ. We write
∡ lim_{λ→∞}
to indicate that λ tends to infinity nontangentially to R, that is, Re λ/Im λ remains bounded.
Proposition 4.6. For any probability measure µ on R, we have
∡ lim_{λ→∞} λGµ(λ) = 1.

Proof. Observe that
λGµ(λ) = ∫_R λ/(λ − t) dµ(t) = ∫_R 1/(1 − t/λ) dµ(t),
and
lim_{λ→∞} 1/(1 − t/λ) = 1
for all t ∈ R. The desired conclusion follows from the dominated convergence
theorem because the integrand remains bounded when λ → ∞ nontangentially.
Indeed, if Re λ/Im λ stays bounded as λ → ∞, then
|1 − t/λ| = |λ|^(−1)·|λ − t| ≥ |λ|^(−1)·|Im λ|
remains bounded away from 0.
Set now
Fµ(λ) = 1/Gµ(λ), Im λ > 0.
Then the preceding proposition shows that Fµ (λ) ∼ λ as λ → ∞ nontan-
gentially, and the argument principle for analytic functions implies that Fµ is
conformal in a set of the form Λα,β = {λ = x + iy | y > α|x|, y > β}, where
α > 0 is arbitrary, provided that β is sufficiently large. The inverse function
Fµ^⟨−1⟩ is also defined in a set of the form Λα,β. One can show that the difference
ϕµ(λ) = Fµ^⟨−1⟩(λ) − λ is precisely Rµ(1/λ) when the measure µ has compact
support (where we set Rµ = Rx for some x with distribution µ). Therefore,
the following result is not entirely surprising (see [24]).
Proposition 4.7. Given probability measures µ, ν on R, we have
ϕµ⊞ν (λ) = ϕµ (λ) + ϕν (λ)
for λ in any domain Λα,β where these three functions are defined.
The proof of this result follows easily from the case of compactly supported
measures once a continuity property of the map µ 7→ ϕµ is established (see
Lemma 4.9). We recall first the definition of a tight family of measures. A
family F of probability measures on R is said to be tight if
lim_{n→∞} inf_{µ∈F} µ([−n, n]) = 1.

A tight family can also be characterized as a totally bounded family in the Lévy metric d we introduced after Proposition 4.3. For our purposes, the following characterization of tightness is the most useful.
Lemma 4.8. Let F be a family of probability measures on R. Then F is tight
if and only if the following two conditions are satisfied.
(1) For every α > 0 there exists β > 0 such that ϕµ is defined on Λα,β for
every µ ∈ F .
(2) ∡ limλ→∞ ϕµ (λ)/λ = 0 uniformly in µ ∈ F .
The notion of tightness is useful because the collection of all probability
measures on R is not compact under weak convergence and tightness helps to
characterize weak convergence:
Lemma 4.9. A sequence {µn}_{n=1}^∞ of probability measures on R converges
weakly to µ if and only if
(1) the sequence {µn}_{n=1}^∞ is tight, and
(2) ϕµn → ϕµ pointwise in some domain Λα,β.
In order to deal with limit theorems for sums of freely independent variables,
a result of Nevanlinna is useful. The analogous result for functions defined in
the unit disk is due to Herglotz. We denote by C+ the upper half of the
complex plane.
Proposition 4.10. For every analytic function f : C+ → C+ there exist
α, β ∈ R, α ≥ 0, and a finite, positive Borel measure ρ on R, such that
f(λ) = αλ + β + ∫_R (1 + λt)/(t − λ) dρ(t), λ ∈ C+.
Moreover, α is given by α = ∡ limλ→∞ f (λ)/λ.
The integral in the preceding lemma is closely related to Cauchy transforms.
Indeed, note that
(1 + λt)/(t − λ) = (1 + (λ − t + t)t)/(t − λ) = (1 + t^2)/(t − λ) − t.
Thus
∫_R (1 + λt)/(t − λ) dρ(t) = ∫_R (1 + t^2)/(t − λ) dρ(t) − ∫_R t dρ(t),
provided that ∫_R |t| dρ(t) < ∞. It is easy to see that β and ρ can be recovered
from the values of Im f near the real line, just as it is the case for Cauchy
transforms.
We apply Nevanlinna's result to the reciprocal Fµ of the Cauchy transform
of a probability distribution µ on R. It is clear from the construction that Fµ
maps C+ to itself, and Proposition 4.6 implies that ∡ lim_{λ→∞} Fµ(λ)/λ = 1.
Proposition 4.10 implies then the existence of a constant β ∈ R and of a finite,
positive Borel measure ρ on R such that
Fµ(λ) = λ + β + ∫_R (1 + λt)/(t − λ) dρ(t), λ ∈ C+.

The correspondence between µ and ρ has good continuity properties. This
allows the construction of a good limit theory for the addition of freely indepen-
dent variables. In addition, the inversion of Fµ can be approximated very sim-
ply when the measure µ is very close in the Lévy metric to the unit point mass
at zero δ0. Indeed, writing Fµ(λ) = λ + ε(λ), we have ∡ lim_{λ→∞} ε(λ)/λ = 0,
and from this one deduces that Fµ^⟨−1⟩(λ) ∼ λ − ε(λ). Therefore a good approx-
imation of ϕµ(λ) = Fµ^⟨−1⟩(λ) − λ as λ → ∞ nontangentially is the function
−ε(λ). This approximation is exact when µ is a point mass, and it is still quite
good for infinitesimal measures.
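For a point mass δa, for instance, Gδa(λ) = 1/(λ − a) and Fδa(λ) = λ − a, so ε(λ) = −a and ϕδa(λ) = Fδa^⟨−1⟩(λ) − λ = a = −ε(λ) exactly; combined with Proposition 4.7 this gives ϕ_{δa ⊞ ν} = a + ϕν, so that δa ⊞ ν is simply ν translated by a. For the semicircle law one finds Fγ^⟨−1⟩(λ) = λ + 1/λ, hence ϕγ(λ) = 1/λ = Rγ(1/λ), illustrating the relation between ϕµ and Rµ mentioned above.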

5. Univariate limit theorems


As mentioned earlier, the classical limit theorems of probability theory deal
with the distributional limits of the row sums of an infinitesimal array of in-
dependent random variables. Equivalently, one can consider an infinitesimal
array of probability measures, and consider the limits of the row-wise (classical,
additive) convolutions. We replace now classical convolution by free additive
convolution. Consider a family {µnm | n ∈ N, m = 1, 2, . . . , kn } of probability
measures on R such that
lim_{n→∞} min_{1≤m≤kn} µnm([−ε, ε]) = 1

for every ε > 0. The question is then: Under what conditions do the measures
νn = µn1 ⊞ µn2 ⊞ · · · ⊞ µnkn converge weakly to a probability measure ν? For
simplicity, we restrict ourselves to the special case where the measures in each
row are identical: µnm = µn does not depend on m. This certainly covers
the analogs of the theorems described in Section 2. In this case νn = µn^{⊞kn}
is the kn-th convolution power of µn, and one discards the trivial case when
(kn )n is a bounded sequence, in which case νn → δ0 as n → ∞ (note that
δ0 ⊞ δ0 = δ0 ). The convergence of νn can then be studied using Lemma 4.9.
Thus we need to study the limiting behavior of the functions kn ϕµn , according
to Proposition 4.7. By the remarks at the end of the preceding section, we can
equivalently analyze the limiting behavior of the functions kn(λ − Fµn(λ)):
kn(λ − Fµn(λ)) = kn·(λ − 1/Gµn(λ))
             = (kn·λ/Gµn(λ))·(Gµn(λ) − 1/λ)
             ∼ kn·λ^2·(Gµn(λ) − 1/λ)
             = kn·λ^2·∫_R [1/(λ − t) − 1/λ] dµn(t)
             = −kn·∫_R λt/(t − λ) dµn(t)
             = βn − ∫_R (1 + λt)/(t − λ) dσn(t),

where
βn = kn·∫_R t/(1 + t^2) dµn(t) and dσn(t) = kn·t^2/(1 + t^2) dµn(t),
and the approximation in the third line is uniform in n as λ → ∞ nontangen-
tially, building on Proposition 4.6. These considerations lead to the following
result, see [22].
Theorem 5.1. The following assertions are equivalent.
(1) The sequence (µn^{⊞kn})_n converges weakly to a probability measure.
(2) The numerical sequence (βn )n converges, and the measures (σn )n have a
weak limit as n → ∞.
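As a quick illustration, take kn = n and µn({1}) = 1 − µn({0}) = 1/n, the situation of Exercise 9.7 below. Then βn = n·(1/n)·(1/2) = 1/2 and σn = (1/2)·δ1 for every n, so both conditions in (2) are satisfied and the free convolution powers µn^{⊞n} converge weakly; the limit is the free analog of the Poisson distribution.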
Remarkably, Gnedenko and Kolmogorov proved earlier that the conditions
in (2) are also equivalent to the weak convergence of the sequence µn^{∗kn}, where
µn^{∗kn} denotes the kn-fold classical convolution of µn with itself. The above
arguments can in fact be refined to yield an even more general result, see [31].
Theorem 5.2. Consider an infinitesimal array {Xnm | n ∈ N, 1 ≤ m ≤ kn } of
classically independent random variables and an array {Ynm | n ∈ N, 1 ≤ m ≤
kn} of freely independent variables such that µXnm = µYnm for all n, m. Let
(cn)_{n=1}^∞ be a sequence of real numbers. The following assertions are equivalent.
(1) The sequence cn + Xn1 + Xn2 + · · · + Xnkn converges in distribution.
(2) The sequence cn + Yn1 + Yn2 + · · · + Ynkn converges in distribution.
Thus the entire body of classical limit theorems transfers to the free context
simply by replacing classical independence by free independence. The methods
of proof in the free case have a vague family resemblance with the classical ones,
but they are quite different. One can hope that a common proof for these two
families of limit theorems will be found, but our optimism should be tempered,
for instance by the fact that the multiplicative analog of Theorem 5.2 is not
quite correct (but almost correct)—in fact, there are more limit theorems in
the free world [23].

6. Multiplicative free convolution


The multiplication of classically independent random variables with val-
ues in the positive real line or in the unit circle also yields interesting limit
theorems, but these can largely be deduced from the additive case via expo-
nentiation. Indeed, for classical real-valued random variables X, Y we have
eX+Y = eX eY and ei(X+Y ) = eiX eiY . These identities are no longer true in
a noncommutative probability space, which is why the multiplication of freely
independent random variables leads to essentially new convolutions. We dis-
cuss here the case of probability measures defined on the unit circle, which
can be viewed as the distributions of unitary operators in a noncommutative
probability space.
Assume that A ⊂ B(H) is a selfadjoint algebra, and ϕ : A → C is a vector
state. Given a unitary operator x ∈ A, its distribution µx is a probability

measure on the unit circle T = {λ ∈ C | |λ| = 1}, uniquely determined by the requirement that
∫_T λ^n dµ(λ) = ϕ(x^n), n ∈ N.
If x, y ∈ A are freely independent unitary operators then, as in the additive
case, µxy is uniquely determined by µx and µy via an operation called the
multiplicative free convolution:
µxy = µx ⊠ µy .
The analytic tool for the study of this convolution is a new transform associated
to a measure µ on T. Given such a measure, we define
ψµ(λ) = ∫_T λt/(1 − λt) dµ(t),    ηµ(λ) = ψµ(λ)/(1 + ψµ(λ)),
for |λ| < 1. The function ηµ is analytic and, provided that ∫_T t dµ(t) ≠ 0, it is
conformal near zero. Under this assumption one considers then the convergent
power series Σµ(λ) = ηµ^⟨−1⟩(λ)/λ. This power series satisfies the equation
Σµ⊠ν = Σµ Σν
when both measures have a nonzero first moment. This equation allows one to
construct a theory of infinite divisibility and limit theorems in the multiplica-
tive context. Essentially the same formulas define transforms for probability
distributions of the positive half-line, though one must change the domains
where the functions are defined, and Σµ is no longer a convergent power series
if µ has unbounded support. The multiplicative convolution on the positive
half-line extends to arbitrary probability measures, and a complete analog of
Theorem 5.2 exists, see [23, 27, 32]. See also [24, 62] for details concerning this
section.
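The simplest worked example here is given by point masses on T: for µ = δζ with ζ ∈ T one computes ψµ(λ) = λζ/(1 − λζ), hence ηµ(λ) = λζ and Σµ(λ) = ζ^(−1) is constant. Consequently Σδζ·Σδξ = (ζξ)^(−1) = Σδζξ, consistent with the fact that unitaries with distributions δζ and δξ are the scalars ζ·1 and ξ·1, whose product has distribution δζξ.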

7. Multivariate limit theorems


In classical probability theory one is also interested in the addition of inde-
pendent random variables with values in Rn or even with values in an infinite-
dimensional Banach space. When n = 2, this amounts to the consideration
of pairs (or independent families of pairs) of random variables. Given a pair
X = (X1 , X2 ) of classical random variables, its distribution µX is a measure on
R2 obtained by pushing forward the probability measure on the space where X
is defined. When X1 and X2 are bounded, the measure µX can be recovered
from its moments because
∫_{R^2} p(t1, t2) dµX(t1, t2) = E(p(X1, X2))
for every polynomial p in two (commuting) variables. Pairs of variables in a
noncommutative probability space no longer have a well-defined probability
distribution on R2 , but at least they have moments. Consider then a pair x =
(x1 , x2 ) of elements in a noncommutative probability space (A, ϕ). In a first
approximation, the distribution of x is simply embodied by the collection of

numbers ϕ(p(x1, x2)) as p runs through the collection C⟨t1, t2⟩ of polynomials in two noncommuting variables t1 and t2.
There is a way to replace the study of x by the study of a single random
variable in a more complicated kind of probability space. More precisely, one
considers the algebra C = M2(A) of 2 × 2 matrices with entries in A, and views
the algebra B = M2(C) of scalar 2 × 2 matrices as a subalgebra of C. There is a canonical conditional expectation
Φ : C → B obtained by simply applying ϕ entrywise. (The fact that Φ is a
conditional expectation means that
Φ(b1 cb2 ) = b1 Φ(c)b2
for all c ∈ C and b1 , b2 ∈ B, see [49].) The usual definition of free independence
extends to elements in the probability space (C, Φ) replacing ϕ by Φ. Given a
pair x = (x1, x2) of elements in A, consider the diagonal element
X = diag(x1, x2) ∈ C.
All the moments of the pair x can be recovered by looking at expressions of
the form
Φ(b0 Xb1 X · · · bn−1 Xbn ),
with b0 , b1 , . . . , bn ∈ B. When b0 = b1 = · · · = bn = b, these generalized mo-
ments appear naturally in the power series expansion of the Cauchy transform
GX(b) = Φ((b − X)^(−1)).
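To see how the mixed moments of the pair x are encoded in such expressions, take for instance b0 = b2 = 1 and b1 = e12, the scalar matrix unit with 1 in the (1, 2) entry: a direct computation gives X·e12·X = x1x2·e12, hence Φ(X e12 X) = ϕ(x1x2)·e12, and choosing other products of matrix units recovers every word ϕ(xi1·xi2 · · · xin).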
More generally, the functions
GX,n(b) = Φn((b − X ⊗ In)^(−1)), b ∈ Mn(B),
contain all the information of the distribution of the pair x; here Φn is simply
the map Φ applied to the entries of an n × n matrix of elements in C. The
addition of freely independent pairs amounts to the addition of freely indepen-
dent elements of (C, Φ) and this can be studied using the generalized Cauchy
transforms GX,n. The family (GX,n)_{n=1}^∞ is an example of a fully matricial
analytic function or noncommutative function. The theory of such functions is
naturally more difficult than the theory of analytic functions of one variable.
There has nonetheless been progress in extending some of the study of infinite
divisibility and limit laws to this context, see [15, 16, 46, 65].

8. Subordination
The study of limit laws only required understanding the behavior of the
functions Gµ and Fµ at infinity. To study finer analytic properties of free
convolutions of probability measures we need to understand the behavior of
Gµ near the real line. This requires the finer tool of analytic subordination.
Consider two analytic functions g1 , g2 : C+ → C. We say that g2 is sub-
ordinate to g1 if there exists an analytic function ω : C+ → C+ such that
∡ limλ→∞ ω(λ)/λ = 1 and g2 (λ) = g1 (ω(λ)) for all λ ∈ C+ . The following

result was proved under various forms and degrees of generality by Voiculescu,
Maassen, and Biane.
Theorem 8.1. Let µ, ν be two probability measures on R. Then Gµ⊞ν is
subordinate to both Gµ and Gν .
Proof. We need to prove that there exist analytic functions ω1 and ω2 such
that Fµ⊞ν = Fµ ◦ ω1 = Fν ◦ ω2 . We can define
ω1 = Fµ^⟨−1⟩ ◦ Fµ⊞ν and ω2 = Fν^⟨−1⟩ ◦ Fµ⊞ν
in some domain of the form Λα,β . This yields functions with the desired
behavior at infinity, so the issue is whether the functions thus defined continue
analytically to selfmaps of C+. Observe first that the equation ϕµ⊞ν(λ) =
ϕµ(λ) + ϕν(λ) is equivalent to Fµ⊞ν^⟨−1⟩(λ) = Fµ^⟨−1⟩(λ) + Fν^⟨−1⟩(λ) − λ. Replacing
λ by Fµ⊞ν(λ), we obtain
λ = Fµ^⟨−1⟩(Fµ⊞ν(λ)) + Fν^⟨−1⟩(Fµ⊞ν(λ)) − Fµ⊞ν(λ)
  = ω1(λ) + ω2(λ) − Fµ⊞ν(λ).
Of course, this identity only holds in some domain Λα,β where all the functions
are already defined. Note however that once the functions ωj are continued
analytically to C+ , this identity will extend to the entire upper half-plane. This
suggests the following approach. Fix λ ∈ C+ and search for points ωλ,1 , ωλ,2
such that
Fµ⊞ν (λ) = Fµ (ωλ,1 ) = Fν (ωλ,2 )
and
ωλ,1 + ωλ,2 = λ + Fµ⊞ν (λ).
We already know that such points exist if λ belongs to some Λα,β . Next, we
eliminate ωλ,2 :
ωλ,1 = λ + Fµ⊞ν (λ) − ωλ,2
= λ + Fν (ωλ,2 ) − ωλ,2
= λ + Fν (λ + Fµ⊞ν (λ) − ωλ,1 ) − (λ + Fµ⊞ν (λ) − ωλ,1 ).
Setting
Hλ (ω) = λ + Fν (λ + Fµ⊞ν (λ) − ω) − (λ + Fµ⊞ν (λ) − ω),
we see that Hλ (ωλ,1 ) = ωλ,1 . Now, Hλ is easily seen to be an analytic selfmap
of C+ . Moreover, Im Hλ (w) ≥ Im λ > 0 holds. Indeed, using the application
of Nevanlinna’s result presented right after Proposition 4.10, we may rewrite
Fν as
Fν(z) = z + β + ∫_R (1 + zt)/(t − z) dρ(t), z ∈ C+,
where β ∈ R and ρ is a finite positive Borel measure on R. Then
Im[(1 + zt)/(t − z)] = ((1 + t^2)/|t − z|^2)·Im z > 0

proves the claim. Now Im Hλ (ω) ≥ Im λ > 0 implies that Hλ is not a conformal
automorphism. Thus, Hλ has a fixed point for λ in some open set of the form
Λα,β . Therefore, the iterates of Hλ applied to the imaginary unit i (or any
other point of the upper half-plane) converge to that fixed point. Denote by
Hλ◦n the n-th iterate of Hλ , and set
Fn (λ) = Hλ◦n (i).
Then (Fn)_{n=1}^∞ is a normal family of analytic selfmaps of C+, and lim_{n→∞} Fn(λ)
exists for λ in some open set of the form Λα,β . Montel’s theorem shows that
this limit exists for all λ ∈ C+ . Moreover, it is analytic and it extends ω1 .
The extension of ω2 is proved analogously or by using the relation between ω1
and ω2 . 
Interestingly, in case µ = ν, we obtain a result in complex analysis which
had not been observed before its appearance in this context, see [13, 29].
Theorem 8.2. Let u : C+ → C be an analytic function such that
(1) lim_{t↑+∞} u(it)/(it) = 1, and
(2) Im u(z) ≤ Im z for all z ∈ C+ .
Then there exists a map ω : C+ → C+ such that u(ω(λ)) = λ for all λ ∈ C+ .
Proof. Consider a probability measure µ on R, and consider the subordination
function ω satisfying Fµ⊞µ = Fµ ◦ ω. We have then 2ω(λ) = λ + Fµ⊞µ (λ), so
that λ = 2ω(λ) − Fµ (ω(λ)) = u(ω(λ)), where u(λ) = 2λ − Fµ (λ). To conclude
the proof, we only need to observe that conditions (1) and (2) imply that the
function u is of the form u(λ) = 2λ−Fµ (λ) for some probability measure µ. 
We list a few consequences of the subordination result (Theorem 8.1).
(1) If µ is absolutely continuous relative to the Lebesgue measure, and its
density belongs to Lp for some p, then the same is true for µ ⊞ ν. This is
similar to results in classical probability. Note, however, that it is possible
that µ has an infinitely differentiable density on R while the density of
µ ⊞ ν is not everywhere differentiable.
(2) We have
(µ ⊞ ν)({s}) = max{0, max_{t∈R} (µ({t}) + ν({s − t}) − 1)}.

This limits the number of atoms of a free convolution of measures.


(3) If µ and ν are not point masses, and if µ has compact support, then
µ ⊞ ν has finitely many atoms and no continuous singular part. This is
quite different from classical probability, where the convolution of discrete
measures yields another discrete measure; a worked example is given after this list.
(4) For any probability measure µ there exists a continuous family {µt | t ≥ 1}
such that µ1 = µ and µt ⊞ µs = µt+s for all t, s ≥ 1. When t > 1,
the measure µt has finitely many atoms and no continuous singular part.
Moreover, Gµt is subordinate to Gµ for all t ≥ 1. The support of µt has
at most countably many connected components for t > 1, and the number

of these components is a nonincreasing function of t. Again, this is quite different from the classical situation where the “fractional” convolutions µ^{∗t} are usually nonexistent for noninteger t.
(5) If µ and ν are not point masses and µ ⊞ ν has two atoms α < β, then
(µ ⊞ ν)((α, β)) > 0.
Analogs of these results are true for free multiplicative convolutions on the
circle and on the positive half-line.
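For a concrete instance of the contrast described in (3), take µ = ν = (δ−1 + δ1)/2. Then Gµ(λ) = λ/(λ^2 − 1), Fµ(λ) = λ − 1/λ and ϕµ(λ) = (√(λ^2 + 4) − λ)/2, so that ϕµ⊞µ(λ) = √(λ^2 + 4) − λ and hence Gµ⊞µ(λ) = 1/√(λ^2 − 4). Stieltjes inversion shows that µ ⊞ µ is the arcsine distribution with density 1/(π√(4 − t^2)) on (−2, 2): the free convolution of these two atomic measures is absolutely continuous, whereas the classical convolution µ ∗ µ = (δ−2 + 2δ0 + δ2)/4 is again purely atomic.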

9. Comments and exercises


In this section we indicate some of the original sources for the results in the
text, along with a few technical remarks. The discussion follows the order in
which the material was presented in the preceding sections. Some exercises are
given along the way.
General references for free probability are the books [62] which describes
succinctly the initial development of the subject and [43] which is a gentler
introduction emphasizing the combinatorial aspects of the subject. The col-
lection [56] is very useful for following some later developments. The survey
[57] brings the reader all the way to current problems in the field. Biane’s
note [30] shows how classical and free probability theories can be described
in parallel using the medium of random matrices. The book [1] contains a
thorough discussion of the moment problem and its connection to selfadjoint
operators, Cauchy integrals, and continued fractions. The book [35] is a very
good reference for the theory of sums of independent random variables.
The first form of the free central limit theorem appeared in [51]. The proof
is done directly using moments; indeed, the R-transform and free cumulants
had not been discovered at the time. The R-transform first appears in [52]
in the context of bounded random variables. A first extension to unbounded
random variables appears in [41], where the variables are assumed to have a
finite second moment. For such a variable x, the function ϕx is shown to exist
in {λ ∈ C | Im λ > β} for some β > 0. Arbitrary random variables are treated
in [24].
Assume that (µn)_{n=1}^∞ and µ are positive, locally finite Borel measures on R.
We recall that µn → µ weakly as n → ∞ if ∫_R f dµn → ∫_R f dµ for every
continuous function f : R → C with compact support.
Exercise 9.1. Assume that µn and µ are probability measures. Show that
µn → µ weakly if and only if µn → µ, with respect to the Lévy metric d. Show
that d cannot be replaced by the Kolmogorov metric dK in this statement.
(See Proposition 4.3 for the definitions of the two metrics.)
The Lipschitz continuity of free additive convolution is a consequence of
a realization theorem which is not available in commutative probability, see
[24]: Assume that d(µ, µ′ ) < ε for some compactly supported measures µ, µ′ .
Then there exists a noncommutative probability space (A, ϕ) and there are
(selfadjoint) elements x, x′ , p ∈ A such that µx = µ, µx′ = µ′ , p is a projection,
xp = x′ p, and ϕ(p) > 1 − ε.

Exercise 9.2. Prove the converse of the above statement. Use the statement
and its converse, along with a free product construction, to prove that ⊞ is
Lipschitz in both variables with respect to the Lévy metric d.
Consider an element x in a noncommutative probability space (A, ϕ), denote
by αn = ϕ(xn ) its moments, and let κn = κn (x) be the free cumulants. The
formula relating Rx and the Cauchy transform Gx can be written using the
generating series
M(λ) = 1 + Σ_{n≥1} αn·λ^n,    C(λ) = 1 + Σ_{n≥1} κn·λ^n
associated to these sequences.
Exercise 9.3. Show that the relation Gx (Rx (λ) + 1/λ) = λ is equivalent to
C(λM (λ)) = M (λ). Deduce this last equality from the combinatorial formulas
relating αn and κn .
It is generally difficult to find explicit formulas for a measure for which
Rµ (λ) is known. Here are a couple of cases where this is possible.
Exercise 9.4. Use Stieltjes inversion in order to find the measures γ and ρ
for which Rγ (λ) = λ and Rρ (λ) = −i.
The relation Rµ⊞ν = Rµ + Rν gives a method for calculating µ ⊞ ν: First
calculate Rµ and Rν by inverting the Cauchy transforms, then calculate Gµ⊞ν
by inverting the function (1/λ) + Rµ (λ) + Rν (λ), and finally find µ ⊞ ν via
Stieltjes inversion.
Exercise 9.5. Let µ be defined by µ({0}) = ε ∈ (0, 1) and µ({1}) = 1 − ε.
Calculate explicitly the measure µ ⊞ µ. (You should obtain an absolutely
continuous measure except for one atom in case ε ≠ 1/2.)
Herglotz proved that an analytic function f with positive real part in the
unit disk can be represented as
f(λ) = i·Im f(0) + ∫_0^{2π} (e^{it} + λ)/(e^{it} − λ) dµ(t), |λ| < 1,
for some finite, positive Borel measure on [0, 2π]. The argument consists in
writing the Poisson formula for Re f (rλ), where r ∈ (0, 1), and using the weak
compactness of the set of probability measures on a compact interval.
Exercise 9.6. Use a conformal map to deduce the Nevanlinna representation
from this result of Herglotz, compare Proposition 4.10.
Theorem 5.1 was proved in [22]. Special cases were considered earlier; see
for instance [45] for the general free central limit theorem. The more general
Theorem 5.2 is from [31]. See also [23, 27, 32] for counterparts of these results
in the context of free multiplicative convolution.
The proof of Theorem 5.1 yields some information about the limiting prob-
ability measure: Its ϕ transform is defined in the entire upper half-plane, and

it has negative imaginary part there. This is precisely the characterization of


infinitely divisible measures relative to free additive convolution [24]. There-
fore this result shows that there is a bijection ν⊞ ↔ ν∗ between freely and
classically infinitely divisible laws such that the domain of free attraction of
ν⊞ is equal to the domain of attraction of ν∗ . This has been used in the study
of free infinite divisibility and free stochastic analysis [7, 8, 9].

Exercise 9.7. Define measures µn by µn({1}) = 1 − µn({0}) = 1/n and
νn = µn^{⊞n}. Show that νn converge weakly to a probability measure ν. Deter-
mine Rν and use Stieltjes inversion to determine ν explicitly.
The measure ν in the preceding exercise was first discovered by Marčenko
and Pastur in the study of certain random matrices. It is the free analog of
the Poisson distribution.
Exercise 9.8. Let X1, X2, . . . be free, identically distributed, symmetric variables, that is, Xj and −Xj have the same distribution. Show that
(X1 + X2 + · · · + Xn)/n
converges to zero in probability if and only if lim_{n→∞} n·P(|X1| > n) = 0.

Exercise 9.9. Consider two probability measures µ and ν on T such that
∫_T λ dµ(λ) = ∫_T λ dν(λ) = 0. Show that µ ⊠ ν is the normalized arclength
measure m on T, that is, ∫_T λ^n d(µ ⊠ ν)(λ) = 0 for all n ≥ 1. (For this exercise,
it is most convenient to consider µ = µx and ν = µy, where x and y are free
unitary elements, and calculate the moments ϕ((xy)^n).)
Exercise 9.10. Fix a natural number n, and denote by µ the measure which
assigns mass 1/n to each root of order n of unity. Show that ηµ(λ) = λ^n, for
the function η defined in Section 6.
Exercise 9.11. Given a probability measure µ on (0, +∞), we set
ψµ(λ) = ∫_{(0,+∞)} λt/(1 − λt) dµ(t)
for λ ∈ C \ (0, +∞). Show that the restriction of ψµ to the left half-plane
{z ∈ C | Re z ≤ 0} is conformal, and its range contains the segment (−1, 0).
(Look at the derivative ψµ′ .)
The preceding exercise shows that the function Σµ is analytic in a neigh-
borhood of (−∞, 0) if µ is supported on (0, +∞).
The generalized (operator-valued) random variables described in Section 7
were first studied in [55], and fully matricial analytic functions appeared in
[60]. There has been significant progress in understanding infinite divisibility
in the operator-valued context, though the results obtained so far are certainly
not definitive, see [2, 15, 16, 46, 64, 65] for details.
Theorem 8.1 was first stated in [54] under a genericity assumption for mea-
sures with bounded support. Related considerations appear in [41]. The result

was proved in full generality in [29]. The most conceptual approach is based on
coalgebras, see [58]. A direct analytic approach, inspired by the earlier work
in [41], is given in [33]. The argument we give in Section 8 is taken from [13].
The following exercise justifies the statement concerning fixed points in the
proof of Theorem 8.1 since C+ is conformally equivalent to D.
Exercise 9.12. Assume that f : D → D is analytic, f (0) = 0, but f is not a
bijection. Show that for every r ∈ (0, 1) there exists a constant c ∈ (0, 1) such
that |f (z)| ≤ c|z| for |z| ≤ r. Deduce that f ◦n (z) → 0 as n → ∞ for every
z ∈ D.
Subordination also holds for free multiplicative convolution. For measures
on T one can use the following result.
Exercise 9.13. Let f : D → C be an analytic function such that f (0) = 0
and |f (λ)| ≥ |λ| for all λ ∈ D. Show that there exists an analytic function
g : D → D such that f (g(λ)) = λ for all λ ∈ D.
The first regularity results using subordination, for instance item (1) in Sec-
tion 8, appeared in [54]. Details on the absence of a singular unitary part for
µ ⊞ ν can be found in [10]. The atoms of µ ⊞ ν were identified in [26]. The ex-
istence of µ⊞t for large t was proved in [25] when µ has compact support. The
extension to t ≥ 1 is taken from [42] for compact supports and from [12] for
arbitrary measures. The fact that connected components in the support of con-
volution powers tend to coalesce was shown in [36, 37, 66] in various contexts.
The fact that free convolutions have no “consecutive” atoms is explained in [28];
see also [33] for related results on freely indecomposable measures. We have not
mentioned many other interesting results, some of which are in the references;
see [3, 4, 5, 6, 11, 14, 17, 18, 19, 20, 21, 34, 38, 39, 40, 44, 47, 48, 59, 61, 63].
Exercise 9.14. Assume that µ({1}) = µ({−1}) = 1/2. Calculate the mea-
sures µ⊞t for all t ≥ 1. (Observe that Rµ⊞t = tRµ . The measure µ⊞t has two
atoms when t < 2.)

References
[1] N. I. Akhiezer, The classical moment problem and some related questions in analysis,
Translated by N. Kemmer, Hafner Publishing Co., New York, 1965. MR0184042
[2] M. Anshelevich, S. T. Belinschi, M. Février, and A. Nica, Convolution powers in the
operator-valued framework, Trans. Amer. Math. Soc. 365 (2013), no. 4, 2063–2097.
MR3009653
[3] M. Anshelevich, J.-C. Wang, and P. Zhong, Local limit theorems for multiplicative free
convolutions, J. Funct. Anal. 267 (2014), no. 9, 3469–3499. MR3261117
[4] O. Arizmendi and S. T. Belinschi, Free infinite divisibility for ultrasphericals, Infin. Di-
mens. Anal. Quantum Probab. Relat. Top. 16 (2013), no. 1, 1350001, 11 pp. MR3071453
[5] O. Arizmendi and T. Hasebe, Semigroups related to additive and multiplicative, free
and Boolean convolutions, Studia Math. 215 (2013), no. 2, 157–185. MR3071490
[6] O. Arizmendi and V. Pérez-Abreu, The S-transform of symmetric probability mea-
sures with unbounded supports, Proc. Amer. Math. Soc. 137 (2009), no. 9, 3057–3066.
MR2506464

[7] O. E. Barndorff-Nielsen and S. Thorbjørnsen, Lévy laws in free probability, Proc. Natl.
Acad. Sci. USA 99 (2002), no. 26, 16568–16575. MR1947756
[8] O. E. Barndorff-Nielsen and S. Thorbjørnsen, Self-decomposability and Lévy processes
in free probability, Bernoulli 8 (2002), no. 3, 323–366. MR1913111
[9] O. E. Barndorff-Nielsen and S. Thorbjørnsen, A connection between free and classical
infinite divisibility, Infin. Dimens. Anal. Quantum Probab. Relat. Top. 7 (2004), no. 4,
573–590. MR2105912
[10] S. T. Belinschi, The Lebesgue decomposition of the free additive convolution of two
probability distributions, Probab. Theory Related Fields 142 (2008), no. 1-2, 125–150.
MR2413268
[11] S. T. Belinschi, F. Benaych-Georges, and A. Guionnet, Regularization by free additive
convolution, square and rectangular cases, Complex Anal. Oper. Theory 3 (2009), no. 3,
611–660. MR2551632
[12] S. T. Belinschi and H. Bercovici, Partially defined semigroups relative to multiplicative
free convolution, Int. Math. Res. Not. 2005, no. 2, 65–101. MR2128863
[13] S. T. Belinschi and H. Bercovici, A new approach to subordination results in free prob-
ability, J. Anal. Math. 101 (2007), 357–365. MR2346550
[14] S. T. Belinschi, M. Bożejko, F. Lehner, and R. Speicher, The normal distribution is
⊞-infinitely divisible, Adv. Math. 226 (2011), no. 4, 3677–3698. MR2764902
[15] S. T. Belinschi, M. Popa, and V. Vinnikov, Infinite divisibility and a non-commutative
Boolean-to-free Bercovici-Pata bijection, J. Funct. Anal. 262 (2012), no. 1, 94–123.
MR2852257
[16] S. T. Belinschi, M. Popa, and V. Vinnikov, On the operator-valued analogues of the
semicircle, arcsine and Bernoulli laws, J. Operator Theory 70 (2013), no. 1, 239–258.
MR3085826
[17] F. Benaych-Georges, Failure of the Raikov theorem for free random variables, in
Séminaire de Probabilités XXXVIII, 313–319, Lecture Notes in Math., 1857, Springer,
Berlin, 2005. MR2126982
[18] F. Benaych-Georges, Infinitely divisible distributions for rectangular free convolution:
classification and matricial interpretation, Probab. Theory Related Fields 139 (2007),
no. 1-2, 143–189. MR2322694
[19] F. Benaych-Georges, Rectangular random matrices, related convolution, Probab. The-
ory Related Fields 144 (2009), no. 3-4, 471–515. MR2496440
[20] F. Benaych-Georges, Rectangular R-transform as the limit of rectangular spherical in-
tegrals, J. Theoret. Probab. 24 (2011), no. 4, 969–987. MR2851240
[21] F. Benaych-Georges and T. Cabanal-Duvillard, Marčenko-Pastur theorem and
Bercovici-Pata bijections for heavy-tailed or localized vectors, ALEA Lat. Am. J.
Probab. Math. Stat. 9 (2012), no. 2, 685–715. MR3069381
[22] H. Bercovici and V. Pata, Stable laws and domains of attraction in free probability
theory, Ann. of Math. (2) 149 (1999), no. 3, 1023–1060. MR1709310
[23] H. Bercovici and V. Pata, Limit laws for products of free and independent random
variables, Studia Math. 141 (2000), no. 1, 43–52. MR1782911
[24] H. Bercovici and D. Voiculescu, Free convolution of measures with unbounded support,
Indiana Univ. Math. J. 42 (1993), no. 3, 733–773. MR1254116
[25] H. Bercovici and D. Voiculescu, Superconvergence to the central limit and failure of the
Cramér theorem for free random variables, Probab. Theory Related Fields 103 (1995),
no. 2, 215–222. MR1355057
[26] H. Bercovici and D. Voiculescu, Regularity questions for free convolution. Nonselfadjoint
operator algebras, operator theory, and related topics, 37–47, Oper. Theory Adv. Appl.,
104, Birkhäuser, Basel, 1998. MR1639647
[27] H. Bercovici and J.-C. Wang, Limit theorems for free multiplicative convolutions, Trans.
Amer. Math. Soc. 360 (2008), no. 11, 6089–6102. MR2425704

[28] H. Bercovici and J.-C. Wang, On freely indecomposable measures, Indiana Univ. Math.
J. 57 (2008), no. 6, 2601–2610. MR2482992
[29] P. Biane, Processes with free increments, Math. Z. 227 (1998), no. 1, 143–174.
MR1605393
[30] P. Biane, Free probability for probabilists, in Quantum probability communications,
Vol. XI (Grenoble, 1998), 55–71, QP-PQ, XI, World Sci. Publ., River Edge, NJ, 2003.
MR2032363
[31] G. P. Chistyakov and F. Götze, Limit theorems in free probability theory. I, Ann.
Probab. 36 (2008), no. 1, 54–90. MR2370598
[32] G. P. Chistyakov and F. Götze, Limit theorems in free probability theory. II, Cent. Eur.
J. Math. 6 (2008), no. 1, 87–117. MR2379953
[33] G. P. Chistyakov and F. Götze, The arithmetic of distributions in free probability theory,
Cent. Eur. J. Math. 9 (2011), no. 5, 997–1050. MR2824443
[34] G. P. Chistyakov and F. Götze, Asymptotic expansions in the CLT in free probability,
Probab. Theory Related Fields 157 (2013), no. 1-2, 107–156. MR3101842
[35] B. V. Gnedenko and A. N. Kolmogorov, Limit distributions for sums of independent
random variables. Translated from the Russian, annotated, and revised by K. L. Chung.
With appendices by J. L. Doob and P. L. Hsu. Revised edition. Addison-Wesley Pub-
lishing Co., Reading, MA, 1968. MR0233400
[36] H.-W. Huang, Supports, regularity, and ⊞-infinite divisibility for measures of the form
(µ^{⊞p})^{⊎q}, arXiv:1209.5787 [math.CV] (2012).
[37] H.-W. Huang and P. Zhong, On the supports of measures in free multiplicative convo-
lution semigroups, Math. Z. 278 (2014), no. 1-2, 321–345. MR3267581
[38] V. Kargin, Berry-Esseen for free random variables, J. Theoret. Probab. 20 (2007), no. 2,
381–395. MR2324538
[39] V. Kargin, On superconvergence of sums of free random variables, Ann. Probab. 35
(2007), no. 5, 1931–1949. MR2349579
[40] V. Kargin, An inequality for the distance between densities of free convolutions, Ann.
Probab. 41 (2013), no. 5, 3241–3260. MR3127881
[41] H. Maassen, Addition of freely independent random variables, J. Funct. Anal. 106
(1992), no. 2, 409–438. MR1165862
[42] A. Nica and R. Speicher, On the multiplication of free N -tuples of noncommutative
random variables, Amer. J. Math. 118 (1996), no. 4, 799–837. MR1400060
[43] A. Nica and R. Speicher, Lectures on the combinatorics of free probability, London Math.
Soc. Lecture Note Ser., 335, Cambridge Univ. Press, Cambridge, 2006. MR2266879
[44] V. Pata, Lévy type characterization of stable laws for free random variables, Trans.
Amer. Math. Soc. 347 (1995), no. 7, 2457–2472. MR1311913
[45] V. Pata, The central limit theorem for free additive convolution, J. Funct. Anal. 140
(1996), no. 2, 359–380. MR1409042
[46] M. Popa and V. Vinnikov, Non-commutative functions and the non-commutative free
Lévy-Hinčin formula, Adv. Math. 236 (2013), 131–157. MR3019719
[47] M. Popa and J.-C. Wang, On multiplicative conditionally free convolution, Trans. Amer.
Math. Soc. 363 (2011), no. 12, 6309–6335. MR2833556
[48] R. Speicher, Free probability theory and random matrices, in Asymptotic combinatorics
with applications to mathematical physics (St. Petersburg, 2001), 53–73, Lecture Notes
in Math., 1815, Springer, Berlin, 2003. MR2009835
[49] M. Takesaki, Theory of operator algebras. I, Reprint of the first (1979) edition. Ency-
clopaedia Math. Sci., 124, Springer, Berlin, 2002. MR1873025
[50] M. Takesaki, Theory of operator algebras. II, Encyclopaedia Math. Sci., 125, Springer,
Berlin, 2003. MR1943006
[51] D. Voiculescu, Symmetries of some reduced free product C ∗ -algebras, in Operator alge-
bras and their connections with topology and ergodic theory (Buşteni, 1983), 556–588,
Lecture Notes in Math., 1132, Springer, Berlin, 1985. MR0799593

[52] D. Voiculescu, Addition of certain noncommuting random variables, J. Funct. Anal. 66 (1986), no. 3, 323–346. MR0839105
[53] D. Voiculescu, Limit laws for random matrices and free products, Invent. Math. 104
(1991), no. 1, 201–220. MR1094052
[54] D. Voiculescu, The analogues of entropy and of Fisher’s information measure in free
probability theory. I, Comm. Math. Phys. 155 (1993), no. 1, 71–92. MR1228526
[55] D. Voiculescu, Operations on certain non-commutative operator-valued random vari-
ables, Astérisque No. 232 (1995), 243–275. MR1372537
[56] D. Voiculescu (ed.), Free probability theory, Fields Inst. Commun., 12, Amer. Math.
Soc., Providence, RI, 1997. MR1426832
[57] D. Voiculescu, Lectures on free probability theory, in Lectures on probability theory and
statistics (Saint-Flour, 1998), 279–349, Lecture Notes in Math., 1738, Springer, Berlin,
2000. MR1775641
[58] D. Voiculescu, The coalgebra of the free difference quotient and free probability, Internat.
Math. Res. Notices 2000, no. 2, 79–106. MR1744647
[59] D. Voiculescu, Analytic subordination consequences of free Markovianity, Indiana Univ.
Math. J. 51 (2002), no. 5, 1161–1166. MR1947871
[60] D. Voiculescu, Free analysis questions. I, Int. Math. Res. Not. 2004, no. 16, 793–822.
MR2036956
[61] D.-V. Voiculescu, Free analysis questions II: the Grassmannian completion and the series
expansions at the origin, J. Reine Angew. Math. 645 (2010), 155–236. MR2673426
[62] D. V. Voiculescu, K. J. Dykema, and A. Nica, Free random variables, CRM Monogr.
Ser., 1, Amer. Math. Soc., Providence, RI, 1992. MR1217253
[63] J.-C. Wang, Local limit theorems in free probability theory, Ann. Probab. 38 (2010),
no. 4, 1492–1506. MR2663634
[64] J. D. Williams, An analogue of Hinčin’s characterization of infinite divisibility for
operator-valued free probability, J. Funct. Anal. 267 (2014), no. 1, 1–14. MR3206507
[65] J. D. Williams, Analytic function theory for operator-valued free probability, J. Reine
Angew. Math. (2015), DOI 10.1515/crelle-2014-0106.
[66] P. Zhong, On the free convolution with a free multiplicative analogue of the normal
distribution, J. Theoret. Probab. (2014) 1–26 (published online).
Easy quantum groups

Moritz Weber

1. Introduction
In the field of noncommutative operator algebras, quantum groups are a
good notion of what “symmetries” should be. In 1987, Woronowicz [72] gave
a definition of compact matrix quantum groups based on the theory of C ∗ -
algebras. They generalize compact Lie groups G ⊂ Mn (C). About twenty
years later, Banica and Speicher [21] isolated a class of compact matrix quan-
tum groups with an intrinsic combinatorial structure. These so-called easy
quantum groups are determined by categories of partitions (via some Tannaka–
Krein type result).
They have been proven useful in order to understand various aspects of
quantum groups, in particular those linked to free probability theory. Further-
more, they open a way to find examples of compact quantum groups apart
from q-deformations and quantum isometry groups. The class of easy quantum
groups contains the symmetric group Sn and the orthogonal group On as well
as Wang’s quantum permutation group Sn+ and his free orthogonal quantum
group On+ .
We begin with an introduction to compact matrix quantum groups, before
we turn to easy quantum groups and their relation to free probability.

2. Compact matrix quantum groups


There are several perspectives on what a quantum group should be, which
makes it a bit difficult to get an overview of the field. There are alge-
braic, C ∗ -algebraic and von Neumann algebraic versions of quantum groups;
furthermore, there are compact and locally compact quantum groups. In these
lecture notes, we will focus on C ∗ -algebraic compact quantum groups, but let
us provide at least some remarks about other approaches.
To start with, let us state clearly: A quantum group is not a group—it is
a more general object. In several fields of mathematics and physics, it turned
out that the notion of a group is not enough to describe the symmetries that
show up, in particular, when dualizations of algebraic structures (i.e. passing
to algebras of functions over the objects) play a role. One particular algebraic

structure occurring in connection with such dualizations is the Hopf algebra,


but we will not go into that.
An early definition of a quantum group was given by Drinfeld and Jimbo
in 1985, on the way to deformation of Lie algebras [43, 44, 50]. Another
perspective on quantum groups comes from Kac algebras, solving the question
of what Pontryagin duality should be for nonabelian locally compact groups.
Recall that if G is a locally compact, abelian group, its dual Ĝ (the set of
continuous group homomorphisms from G to T) is again a group and the double
dual of G is again G (up to isomorphism). This duality fails for nonabelian
groups. Extending the notion of groups solves this problem, see for instance
Enock and Schwartz’s book [36].
In 1987, Woronowicz came up with a definition of a quantum group in the
C ∗ -algebraic setting, because he considered the (von Neumann algebraic) Kac
algebra setting as too restrictive. He first gave a definition of compact matrix
quantum groups (which he at the time called compact matrix pseudogroups)
and later extended it to compact quantum groups [72, 76]. Later, locally
compact quantum groups were defined. Woronowicz’s motivation came from
the observation that some symmetries in classical physics break down on the
quantum physics level. He gave an important example of a compact matrix
quantum group, namely SUq (2), a q-deformed version of the group SU (2) (see
Example 2.5). This is a non-Kac-type quantum group. Woronowicz proved
that all compact quantum groups possess a Haar state, the natural generaliza-
tion of a Haar measure for groups. This is an extremely powerful tool in the
theory of quantum groups justifying the point of view that quantum groups
are a reasonable generalization of groups.
Let us now begin with a (maybe personal) motivation of Woronowicz’s ap-
proach to compact quantum groups. Let X be a compact, topological Haus-
dorff space. The algebra C(X) of continuous C-valued functions over X fulfills
the axioms of a unital commutative C ∗ -algebra. On the other hand, every
unital commutative C ∗ -algebra is exactly of this form. Thus, noncommutative
C ∗ -algebras may be seen as “noncommutative functions” over some “noncom-
mutative spaces”, or as a kind of “noncommutative topology”.
If G is a compact group acting on X, we immediately deduce that C(G)
coacts on C(X): The action G × X → X lifts to a ∗ -homomorphism C(X) →
C(G) ⊗ C(X) by composition, if we identify C(G × X) with C(G) ⊗ C(X).
If we now “quantize” C(X) (i.e. we replace it by some noncommutative C ∗ -
algebra), we should also be able to quantize C(G) or rather G. In this sense,
quantized versions of groups should yield a richer notion of symmetries of
noncommutative spaces in operator algebras.
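Concretely, if a : G × X → X denotes the action, the induced coaction α : C(X) → C(G) ⊗ C(X) is just composition with a, that is, α(f)(s, x) = f(a(s, x)) = f(s·x), once C(G × X) is identified with C(G) ⊗ C(X).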
Let us consider in more detail how a quantization of a compact group G
should work. An important part of the structure of a group is the group
law G × G → G sending (s, t) to st. On the dual level, we have a map
∆ : C(G) → C(G × G) mapping f ∈ C(G) to the map (s, t) 7→ f (st). Hence,
a quantum group should be equipped with a map ∆ : A → A ⊗min A encoding
the idea of a group law. This yields the following definition by Woronowicz.

Definition 2.1 ([76]). A compact quantum group is a unital C ∗ -algebra A


equipped with a unital ∗ -homomorphism ∆ : A → A ⊗min A (the comultipli-
cation) such that (∆ ⊗ id)∆ = (id ⊗∆)∆ (the coassociativity) and the linear
spans of the sets ∆(A)(1 ⊗ A) and ∆(A)(A ⊗ 1) are dense in A ⊗min A.
We often write A = C(G) even if A is noncommutative and speak of G as
the quantum group. In this sense, a quantum group G is only defined via its
associated (possibly noncommutative) C ∗ -algebra C(G) and the comultiplica-
tion ∆. We say that a quantum group G is a quantum subgroup of H (we write
G ⊂ H), if there is a surjective ∗ -homomorphism φ : C(H) → C(G) respecting
the comultiplications, i.e. ∆G ◦ φ = φ ⊗ φ ◦ ∆H , see for instance [69].
Note that the existence of the neutral element and inverse elements in a
group G can also be dualized to maps on the level of C(G). Extracting their
interplay with the dualized group law ∆ yields the notions of a counit and
an antipode, the latter one encoding the map to the inverse of an element.
These objects appear in the definition of algebraic quantum groups and Hopf
algebras—in Woronowicz’s definition however they are not required [75]. In
fact, the denseness condition in Definition 2.1 ensures that we really dualize
the group structure and not only the semigroup structure: If G is a com-
pact semigroup with dual law ∆ on C(G), and if ∆(C(G))(1 ⊗ C(G)) and
∆(C(G))(C(G) ⊗ 1) are dense in C(G × G), then G has the cancellation
property—and hence it is a group [51].
Definition 2.1 is an extension of the notion of a compact group in the fol-
lowing sense. Every compact group G is a compact quantum group seen as
the pair (C(G), ∆) where ∆ maps f to (s, t) 7→ f (st). Conversely, a compact
quantum group (A, ∆) is a group if and only if the C ∗ -algebra A is commuta-
tive. Indeed, use Gelfand–Naimark’s theorem to show A ∼ = C(X) and reveal a
group law on X using ∆.
Woronowicz’s definition has the striking advantage that it ensures the exis-
tence of a Haar state. Again, we first consider the classical case. Let G be a
compact group. Then there is a unique Haar measure µh such that
∫_G f(ts) dµh(s) = ∫_G f(s) dµh(s)
for all f ∈ C(G) and t ∈ G. This yields a state h : C(G) → C mapping
f 7→ ∫_G f dµh with the property (h ⊗ id)∆(f) = h(f)1. Indeed, using f(ts) = ∆(f)(t, s), we check that
(id ⊗ h)∆(f)(t) = ∫_G f(ts) dµh(s) = h(f)

for all t. The next theorem is due to Woronowicz [72, 76] with an improvement
by Van Daele [61].
Theorem 2.2. Let G be a compact quantum group. Then there exists a unique
state h on C(G) (the Haar state) such that (h ⊗ id)∆ = (id ⊗h)∆ = h1.
The Haar state is a very useful tool. For instance, the representation theory
of quantum groups very much relies on the existence (and use) of a Haar state.

Moreover, we may associate other operator algebraic objects to our quantum
group G; via the GNS construction we obtain a reduced version C∗red(G) and
a von Neumann algebra L∞ (G) = LG. This is one of the reasons why we pass
from the pair (A, ∆) of Definition 2.1 to G with C(G) = A when we speak
of G as the quantum group: We can study a quantum group G in its several
disguises C(G), C∗red(G), LG or even in a purely algebraic version Pol(G).
We should now take a look at some examples of quantum groups.
Example 2.3. Consider the group On ⊂ Mn (C) of orthogonal matrices. How
does C(On) look? Consider the universal, unital C∗-algebra generated by
n2 selfadjoint elements uij , 1 ≤ i, j ≤ n such that
Σ_k uik ujk = Σ_k uki ukj = δij
for all i and j. This simply encodes the fact that the matrix u = (uij ) is
orthogonal, hence uut = ut u = 1. Furthermore, we require that the generators
uij commute. Since this universal C ∗ -algebra is commutative, it is isomorphic
to the algebra of continuous functions over its space of characters. This space
in turn is homeomorphic to On as can be easily checked (see Exercise 11.1).
Hence, we have

C(On) ≅ C∗( uij, 1 ≤ i, j ≤ n | uij∗ = uij, uu^t = u^t u = 1, uij ukl = ukl uij ).
Note that under this isomorphism the generators uij are mapped to the coor-
dinate functions u′ij : On → C given by (akl ) 7→ aij .
Now what does the comultiplication on C(On ) look like? Recall that ∆(f )
is given by ∆(f )(s, t) = f (st) for matrices s, t ∈ On . Hence,
∆(u′ij )(s, t) = Σk u′ik (s) u′kj (t)
simply by matrix multiplication. Under the identification of C(On × On ) and C(On ) ⊗ C(On ), we thus have ∆(u′ij ) = Σk u′ik ⊗ u′kj .
In 1995, S. Wang [69] defined the free orthogonal quantum group On+ as the
universal C ∗ -algebra
C(On+ ) := C ∗ ( uij , 1 ≤ i, j ≤ n | u∗ij = uij , uut = ut u = 1 )
endowed with the comultiplication ∆(uij ) = Σk uik ⊗ ukj (check that it is indeed a ∗-homomorphism, using the universal property). This can be seen as the
first example of obtaining a quantum group by “liberations” of groups, which
is very different from quantum groups obtained by “deformations” of groups
(appearing in the Drinfeld–Jimbo setting or Woronowicz’s first examples, see
also Example 2.5). Note that in the literature C(On+ ) is also denoted by Ao (n).
Example 2.4. As a second example, consider the symmetric group Sn ⊂
Mn (C) of permutation matrices. As in the previous example, we first try
to find out which relations are fulfilled by the coordinate functions u′ij on Sn .
We infer that we can write C(Sn ) as a universal C ∗ -algebra generated by
elements uij with certain relations. Dropping the commutativity relations
uij ukl = ukl uij yields Wang’s [70] free symmetric quantum group (also called
the quantum permutation group) Sn+ given by
C(Sn+ ) := C ∗ ( uij , 1 ≤ i, j ≤ n | u∗ij = uij = u²ij , Σk uik = Σk ukj = 1 ∀ i, j )
together with the comultiplication ∆(uij ) = Σk uik ⊗ ukj .
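As a quick sanity check on these relations, one can evaluate the generators at a permutation matrix: this is what a character of the abelianization C(Sn ) does (compare Exercise 11.1 for On ). The following Python lines are a small illustration of ours, not part of the text; the concrete matrix is arbitrary.

import numpy as np

# the image of the generating matrix u = (uij) under a character of C(S4),
# i.e. a permutation matrix in M4(C)
P = np.array([[0., 1., 0., 0.],
              [0., 0., 1., 0.],
              [1., 0., 0., 0.],
              [0., 0., 0., 1.]])

for i in range(4):
    for j in range(4):
        x = P[i, j]
        assert x == x * x            # u_ij = u_ij* = u_ij^2 (a real projection)
    assert P[i, :].sum() == 1.0      # row sums equal 1
    assert P[:, i].sum() == 1.0      # column sums equal 1
print("permutation matrices satisfy the relations of C(S4+)")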
Example 2.5. For the sake of historical completeness, we should also mention
Woronowicz’s seminal example SUq (2), the first example of a compact quantum
group in his setting. For this, we observe that the group SU (2) ⊂ M2 (C) consists of unitary matrices of the form
( a   −c̄ )
( c    ā ).
For q ∈ [−1, 1], Woronowicz [73] defined the quantum group SUq (2) by the universal unital C ∗ -algebra generated by elements a and c such that the matrix
( a   −qc∗ )
( c    a∗  )
is unitary. The
comultiplication is given by
∆(a) = a ⊗ a − qc∗ ⊗ c and ∆(c) = c ⊗ a + a∗ ⊗ c.
This is an example of obtaining a quantum group via deformation, in con-
trast to Wang’s liberations. There exist generalizations to SUq (n), n ∈ N. See
also [58] for details.
The quantum groups On+ , Sn+ and SUq (2) are compact quantum groups
of a special type. The following definition (under the name compact matrix
pseudogroups) was introduced by Woronowicz [72] (see also [75] for an improve-
ment). In fact, the definition of compact matrix quantum groups preceded the
one of compact quantum groups (Definition 2.1) and one can check that the
latter one generalizes the first one [58, Prop. 6.1.4]. Once this is done, it is clear
that the above examples give rise to compact matrix quantum groups—and
hence to compact quantum groups.
Definition 2.6. Given n ∈ N, a compact matrix quantum group consists of a
unital C ∗ -algebra A and a ∗ -homomorphism ∆ : A → A ⊗min A such that
• A is generated by n² elements uij , 1 ≤ i, j ≤ n, in the sense that the ∗-algebra generated by the uij is dense in A,
• the matrices u = (uij ) and ut = (uji ) are invertible,
• the ∗-homomorphism ∆ : A → A ⊗min A sends uij to Σk uik ⊗ ukj .
Again, we write A = C(G) even if A is noncommutative, and speak of G
as the quantum group. Since the matrix u = (uij ) contains the essential
data of a compact matrix quantum group (in particular, it determines the
comultiplication ∆), it is also common to denote a compact matrix quantum
group by (A, u) or (C(G), u). In the sequel, we will only deal with compact
matrix quantum groups and C(G) will typically be a universal C ∗ -algebra
generated by selfadjoint elements uij such that u is orthogonal (and hence
also ut ) and some further relations are fulfilled. Sometimes, it is convenient to
work with the matrix ū := (u∗ij ) instead of ut = ū∗ .
Compact matrix quantum groups carry a very nice feature: They obey a
Tannaka–Krein type result, proved by Woronowicz [74]. In order to understand
it, we again begin with the classical case. Let G be a compact group and let
U : G → B(H) be a unitary representation of G on some finite-dimensional
Hilbert space H. Hence B(H) = Mm (C) for some m ∈ N. Thus, we can view
U as an element in C(G, Mm (C)) = Mm (C) ⊗ C(G), i.e. as a matrix with
entries from C(G). Let eij denote the matrix units in Mm (C). We can write U as U = Σi,j eij ⊗ Uij .
Passing to quantum groups, a unitary finite-dimensional corepresentation
of a compact quantum group G is by definition a unitary matrix v ∈ Mm (C) ⊗ C(G) for some m ∈ N such that ∆(vij ) = Σk vik ⊗ vkj . This reflects exactly U (gh) = U (g)U (h) in the case of groups. We can form the tensor product of two representations u = Σ euij ⊗ uij and v = Σ evij ⊗ vij by
u ⊗ v = Σ euij ⊗ evkl ⊗ uij vkl ∈ Mmu (C) ⊗ Mmv (C) ⊗ C(G) = Mmu mv (C) ⊗ C(G).
If now G is a compact matrix quantum group, we observe that the matrix u = (uij ) is a unitary corepresentation in case u is unitary (we can also de-
fine nonunitary corepresentations). It is called the fundamental (co-)represen-
tation. Taking tensor powers u⊗k ∈ Mn (C)⊗k ⊗ C(G) of it, we denote by
HomG (k, l) the space of intertwiners of G for k, l ∈ N0 , i.e. the set of all linear
maps T : (Cn )⊗k → (Cn )⊗l such that T u⊗k = u⊗l T . This makes sense once
we view T as a scalar-valued nk × nl -matrix in Mnk ×nl (C) ⊗ C(G). Due to
Woronowicz’s Tannaka–Krein result [74], we can reconstruct a compact ma-
trix quantum group from its intertwiner spaces. In other words, the quantum
group is determined by its intertwiner spaces, i.e. just by linear maps. This
gives us a powerful tool to actually deal with compact matrix quantum groups.
In some sense, it is plausible that the intertwiner spaces contain some infor-
mation about the quantum group: Any equality T u⊗k = u⊗l T yields concrete
relations on linear combinations of products ui1 j1 . . . uik jk (which are the en-
tries of u⊗k ), see for instance Example 3.2.

3. Categories of partitions
We now want to calculate some intertwiner spaces in an explicit way. For
instance, what are the intertwiner spaces of Sn+ and of On+ ? What are concrete
intertwiners of these quantum groups? Following the work of [21], we shall see
that the theory of categories of partitions provides a useful framework for the
construction of such intertwiners. Moreover, this will lead to the definition of
easy quantum groups.
A partition p ∈ P (k, l) is given by k ∈ N0 upper and l ∈ N0 lower points
which are connected by some strings. This gives rise to a partition of the
ordered set on k + l points, with the additional information which points are
upper and which are lower. A block of p is a maximal subset of connected
points. Here are two examples p1 consisting of four blocks and p2 having five
blocks:
[Diagrams of p1 and p2 omitted: p1 has three upper and six lower points, p2 has six upper and three lower points.]
A partition is called noncrossing (p ∈ N C(k, l)), if the strings connecting the
points may be drawn in such a way that they do not cross. In the above
example, p1 ∈ P (3, 6) but p1 ∉ N C(3, 6), whereas p2 ∈ N C(6, 3). The set of
all partitions is denoted by P , the set of all noncrossing partitions is N C.
Now, let e1 , . . . , en be the canonical basis of Cn and let p ∈ P (k, l) be a
partition. We define a linear map Tp : (Cn )⊗k → (Cn )⊗l by
Tp (ei1 ⊗ · · · ⊗ eik ) = Σj1 ,...,jl δp (i, j) ej1 ⊗ · · · ⊗ ejl ,
where the sum runs over 1 ≤ j1 , . . . , jl ≤ n.
We label the upper points of p with the multi-index i = (i1 , . . . , ik ) and
the lower ones with j = (j1 , . . . , jl ). We put δp (i, j) = 1 if and only if all
strings of the partition p connect only equal indices; otherwise, δp (i, j) = 0.
By convention, we put (Cn )⊗0 = C. As an example, the labeling i = (2, 2, 3)
and j = (2, 5, 3, 5, 5, 2) yields δp1 (i, j) = 1 with the above partition p1 , whereas
δp1 (i′ , j ′ ) = 0 for i′ = (2, 2, 3) and j ′ = (3, 5, 3, 5, 5, 2).

[Diagrams of p1 with these two labelings omitted: the first labeling yields δ = 1, the second yields δ = 0.]
As an example of such a linear map Tp , consider the partition
p= ✁❆ ∈ P (2, 2).
Here, δp ((i1 , i2 ), (j1 , j2 )) = 1 if and only if i1 = j2 and i2 = j1 . We thus have
the flip map Tp (ei1 ⊗ ei2 ) = ei2 ⊗ ei1 . As a second example, consider the
partition p ∈ P (2, 2) consisting of four points which are all connected. Then
Tp (ei1 ⊗ ei2 ) = δi1 i2 ei1 ⊗ ei1 .
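The rule behind δp can also be phrased algorithmically. Here is a minimal Python sketch; the encoding of a partition as a list of blocks is a hypothetical choice of ours, not taken from the text. The function δp (i, j) simply checks that every block of p carries only one index value.

# a partition p in P(k, l) is encoded as a list of blocks; upper points are
# ("u", 0), ..., ("u", k-1), lower points are ("l", 0), ..., ("l", l-1)
def delta_p(blocks, i, j):
    """Return 1 if every block of p connects only equal indices, and 0 otherwise."""
    def index(point):
        side, pos = point
        return i[pos] if side == "u" else j[pos]
    for block in blocks:
        if len({index(pt) for pt in block}) > 1:
            return 0
    return 1

# the crossing partition in P(2, 2): upper point 0 joined to lower point 1, etc.
crossing = [[("u", 0), ("l", 1)], [("u", 1), ("l", 0)]]
# the partition in P(2, 2) with all four points in one block
four_block = [[("u", 0), ("u", 1), ("l", 0), ("l", 1)]]

print(delta_p(crossing, (1, 2), (2, 1)))     # 1, since i1 = j2 and i2 = j1
print(delta_p(crossing, (1, 2), (1, 2)))     # 0
print(delta_p(four_block, (3, 3), (3, 3)))   # 1 only if all four indices agree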
We can define the tensor product p ⊗ q of two partitions p ∈ P (k, l) and
q ∈ P (k ′ , l′ ) by vertical concatenation, i.e. p ⊗ q ∈ P (k + k ′ , l + l′ ) is the
partition obtained by writing p and q side by side. We have Tp ⊗ Tq = Tp⊗q .
This is quite nice since intertwiner spaces are always closed under tensor
products, i.e. whenever we have two intertwiner maps S and T such that Su⊗k = u⊗l S and T u⊗k′ = u⊗l′ T , then also (S ⊗ T )u⊗(k+k′) = u⊗(l+l′) (S ⊗ T ).
Thus, we are able to model this operation on intertwiner maps already on the
partition level.
We define the composition of two partitions p ∈ P (k, l) and q ∈ P (l, m)
as the partition qp ∈ P (k, m) obtained from horizontal concatenation, i.e., we
first connect k upper points by p to l middle points, and then by q to m lower
points. The l middle points are removed which yields a partition connecting k
upper and m lower points. During this procedure, certain loops may arise—
there may be middle points which are neither connected to upper points nor
to lower points. Connected components of such middle points are called loops
and we denote their number by l(q, p). We have Tq Tp = n^l(q,p) Tqp . As an
example, consider the composition of the above partitions p1 and p2 . One loop
appears (points marked in white):
[Diagram omitted: stacking p1 on top of p2 and removing the six middle points yields p2 p1 ∈ P (3, 3); one connected component of middle points remains, i.e. one loop.]
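The composition of partitions, including the count of loops, is an equally concrete procedure. The following Python sketch (again with a hypothetical encoding of ours) glues p and q along the middle points, removes them, and counts the connected components consisting of middle points only.

def compose(p_blocks, q_blocks):
    """Blocks and loop count of the composition qp of p in P(k, l) and q in P(l, m).

    Upper points of p are labelled ("u", i), the l middle points (lower points of p
    = upper points of q) are labelled ("m", j), lower points of q are ("d", j).
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    # glue the blocks of p and q along the middle points
    for block in p_blocks + q_blocks:
        for pt in block:
            find(pt)
        for pt in block[1:]:
            union(block[0], pt)

    classes = {}
    for pt in parent:
        classes.setdefault(find(pt), []).append(pt)

    blocks, loops = [], 0
    for cls in classes.values():
        outer = sorted(pt for pt in cls if pt[0] != "m")
        if outer:
            blocks.append(outer)   # a block of the composed partition qp
        else:
            loops += 1             # a component of middle points only is a loop
    return blocks, loops

# composing the pair partition ⊓ in P(0, 2) with its adjoint in P(2, 0) yields the
# empty partition and one loop, matching T_{⊓*} T_⊓ = n · id
print(compose([[("m", 0), ("m", 1)]], [[("m", 0), ("m", 1)]]))   # ([], 1)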
Note that in general, the intertwiner spaces of compact matrix quantum
groups are closed under composition of (composable) maps.
Finally, we also have an involution p∗ ∈ P (l, k) of a partition p ∈ P (k, l)
obtained by turning p upside down (reflection at the horizontal axis). This
yields (Tp )∗ = Tp∗ . We conclude that the operations on the partitions behave
nicely with the map p ↦ Tp :
Tp ⊗ Tq = Tp⊗q ,   Tq Tp = n^l(q,p) Tqp ,   (Tp )∗ = Tp∗ .
If we normalize the maps Tp in a suitable way, we may even arrange that they are partial isometries; moreover Tp is then a projection whenever p = pp = p∗ (see [39]).
By ⊓ ∈ P (0, 2) we denote the pair partition on two connected points; by
| ∈ P (1, 1) we denote the identity partition connecting an upper to a lower
point. The following definition is due to Banica and Speicher [21].

Definition 3.1. Let C(k, l) ⊂ P (k, l) be subsets for all k, l ∈ N0 and let C ⊂ P
be the collection of these subsets. We say that C is a category of partitions,
if it is closed under taking tensor products, composition and involution (the
category operations), and if the pair partition ⊓ and the identity partition |
are in C.

Examples of categories of partitions are P , N C, the set P2 of all pair par-
titions (all blocks are of size two), and the set N C2 of all noncrossing pair
partitions.
Using the construction p 7→ Tp of linear maps associated to partitions, one
can show that the following intertwiner spaces can be described explicitly, see
for instance [21]:
HomSn+ (k, l) = span{Tp | p ∈ N C(k, l)},
HomSn (k, l) = span{Tp | p ∈ P (k, l)},
HomOn+ (k, l) = span{Tp | p ∈ N C2 (k, l)},
HomOn (k, l) = span{Tp | p ∈ P2 (k, l)}.
We observe that the passage from the classical group Sn to its quantum
counterpart Sn+ is given by restricting to noncrossing partitions; likewise in
the case of On and On+ . This feature is known to be an essential aspect in the
combinatorics of free probability. See Section 5 for more on this.
Let us work out a concrete example of relations on the generators uij arising
from an intertwiner map.
Example 3.2. Let G be any compact matrix quantum group. We consider
the partition p = ❆✁ ∈ P (2, 2). The map Tp : (Cn )⊗2 → (Cn )⊗2 gives rise to a
matrix in Mn2 ×n2 (C) ⊂ Mn2 ×n2 (C(G)). The matrix u⊗2 in turn is given by
(ui1 j1 ui2 j2 ) in Mn2 ×n2 (C(G)). We study the matrix u⊗2 entrywise, thus
u⊗2 (ei1 ⊗ ei2 ) = Σα1 ,α2 (eα1 ⊗ eα2 ) ⊗ uα1 i1 uα2 i2 ,
the sum running over 1 ≤ α1 , α2 ≤ n. We compute
Tp u⊗2 (ei ⊗ ej ) = Σk,l Tp (ek ⊗ el ) ⊗ uki ulj = Σk,l el ⊗ ek ⊗ uki ulj ,
u⊗2 Tp (ei ⊗ ej ) = u⊗2 (ej ⊗ ei ) = Σl,k el ⊗ ek ⊗ ulj uki .

We infer that Tp ∈ HomG (2, 2) (i.e. Tp u⊗2 = u⊗2 Tp ) if and only if the
generators uij commute. Hence, if Tp is in the intertwiner space of a compact
matrix quantum group G with selfadjoint generators uij , it is actually a group.
One can convince oneself that if G is a quantum subgroup of H (i.e. there is
a surjection C(H) → C(G) mapping generators to generators), then HomG ⊃
HomH . Thus, for any compact matrix quantum group G with Sn ⊂ G ⊂ On+ ,
we have
span{Tp | p ∈ P (k, l)} ⊃ HomG (k, l) ⊃ span{Tp | p ∈ N C2 (k, l)}.
Amongst such quantum groups G there are some with a particularly nice
description of the intertwiner space. These so-called easy quantum groups were
defined by Banica and Speicher in 2009 [21].
Definition 3.3. A compact matrix quantum group Sn ⊂ G ⊂ On+ is called
easy if there is a category of partitions C ⊂ P such that
HomG (k, l) = span{Tp | p ∈ C(k, l)}, k, l ∈ N0 .
Hence, easy quantum groups are determined by the combinatorics of parti-
tions. This is why one could also think of them as “partition quantum groups”.
The structure of the intertwiner space is exactly reflected in the category of
partitions. One of the major philosophies of easy quantum groups is that “all
of the quantum group structure should be visible in the associated category of
partitions”. See Sections 7 and 8 for more on this.
4. Examples and classification of easy quantum groups
One of the nice features of the concept of easy quantum groups is that it
provides a way of producing many new examples of compact matrix quantum
groups. This can be seen as a substantial extension of Wang’s liberation idea.
The main advantage of Banica and Speicher's approach is the well-behaved machinery that allows one to jump back and forth between combinatorics and operator
algebras.
Before taking a closer look at further examples, we note that there is another operation on partitions which may be deduced from the category operations. Let p ∈ P (k, l), k ≠ 0, be a partition and let p′ ∈ P (k − 1, l + 1) be the partition
obtained from p by moving the leftmost of the k upper points to the left of the
lower line of points. We do not change the partition structure, i.e. all strings
attached to this point remain attached. This procedure is called the rotation
of partitions. Let us quickly prove that p′ is in C if p is in C. Observe that the
tensor product | ⊗ p ∈ P (k + 1, l + 1) of the identity partition | and p is in C
and similarly ⊓ ⊗ |⊗k−1 ∈ C(k − 1, k + 1). Composing these two partitions,
we obtain p′ which is hence in C. More generally, we can prove the following
proposition, where we also include rotation of the leftmost lower point to the
left of the upper points, as well as rotation on the right-hand side of the points.
This proposition is often used to restrict the study to partitions on one line.

Proposition 4.1 ([21, Lem. 2.7]). Categories of partitions are closed under
rotation of partitions.

Using the category operations, we may construct many partitions out of
others. This is yet another step of simplifying the analysis of the intertwiner
space of an easy quantum group (besides that it is spanned by maps labeled
by partitions), and we write C = ⟨p1 , . . . , pn ⟩ if C is the smallest category con-
taining the partitions p1 , . . . , pn . In other words: C is generated by p1 , . . . , pn .
For instance, the set N C2 of noncrossing pair partitions is generated by
the two partitions ⊓ and |. Since it is part of the definition that a category
contains ⊓ and |, we omit these two generators when writing C = ⟨p1 , . . . , pn ⟩, so N C2 = ⟨∅⟩.
As a second example, note that the four-block partition in P (0, 4) (a single block on four points) generates all partitions of even length consisting of a single block. Indeed, composing the tensor product of two four-blocks with the partition |⊗3 ⊗ ⊓∗ ⊗ |⊗3 , we obtain the partition in P (0, 6) consisting of six connected points, an argument which we may use inductively.
Now, compositions using the singleton partition ↑ ∈ P (0, 1) (we use this sym-
bol in order to have no confusion with the identity partition |) and its adjoint
↑∗ ∈ P (1, 0), the partitions consisting of a single point respectively, yield par-
titions of arbitrary length consisting only of a single block. We may now nest
a partition p ∈ N C(0, l) between two legs of a partition q ∈ N C(0, l′ ) by com-
posing q with |⊗α ⊗ p ⊗ |⊗β , α + β = l′ . By such compositions and by rotation,
we can then produce any partition in N C. Thus we have N C = ⟨four-block, ↑⟩, see
Exercise 11.2.
Natural further examples of easy quantum groups are obtained from the
categories ⟨four-block⟩ and ⟨↑⟩. The first one gives rise to the hyperoctahedral quan-
tum group Hn+ of Banica, Bichon and Collins [9], whereas the second one is
the bistochastic quantum group Bn+ (see [21]). In [21] and [71], all free easy
quantum groups were classified, i.e. all easy quantum groups whose category
of partitions satisfies C ⊆ N C, see also Exercise 11.3. Besides the above four
examples, there are h↑ ⊗ ↑i, h i and h , ↑ ⊗ ↑i = h , i (for the latter
equality, see [71, Lem. 2.6 (c)]).
Theorem 4.2 ([21, 71]). For n ∈ N, there are exactly seven free easy quantum
groups. They contain Sn+ , On+ , Hn+ and Bn+ .
One should note that for small n ∈ N, certain quantum groups may coincide.
Hence, Theorem 4.2 is rather a statement on sequences (Gn )n∈N of quantum
groups.
A second natural subclass was completely classified in [21]—easy quantum
groups whose categories contain the partition ✁❆ . These categories give rise to
(classical) groups (see Example 3.2).
Theorem 4.3 ([21]). There are exactly six easy quantum groups which are
actually groups. They contain Sn , On , the hyperoctahedral group Hn = Z2 ≀ Sn
and the bistochastic group Bn .
These two theorems are yet another example of the fact that the noncommu-
tative world can be richer than the commutative one: The expected one-to-one
correspondence between classical easy groups and free easy quantum groups
breaks down; one of the groups splits into two quantum groups on the free
side, in the sense that there are two quantum groups which yield the same
classical group, if we add the relations that the generators commute. On the
combinatorial side, this is reflected by the fact that the categories h , ✁❆ i and
h↑ ⊗ ↑, ❆✁ i coincide, whereas their noncrossing counterparts h i and h↑ ⊗ ↑i
do not.
Another subclass of easy quantum groups is obtained by half-liberation, see
[21] for details. A category of partitions is called half-liberated, if it contains
the partition ❅ ❅ but not ❆✁ . The partition ❅❅ ∈ P (3, 3) gives rise to the relations
uij ukl ust = ust ukl uij in the sense of Example 3.2. The corresponding quantum
group is in general neither a classical group nor a free easy quantum group—in
case that the commutativity of the generators cannot be deduced from the
other relations. An example of such a half-liberated quantum group is On∗ ,
given by adding the above relations to those of On+ . The corresponding category is ⟨ ❅❅ ⟩. For n ≥ 4, we have On ⊊ On∗ ⊊ On+ .

Theorem 4.4 ([71]). The half-liberated easy quantum groups are exactly On∗ , Hn∗ , Bn#∗ and the hyperoctahedral series Hn(s) , s ≥ 3 of [16].
In joint work with Raum [54, 55, 56], the author could completely classify
all easy quantum groups, see also [56] for an overview on the history of the
classification program. We briefly review the main results here.
When working on the classification of easy quantum groups, Banica, Curran and Speicher distinguished between hyperoctahedral categories of partitions, i.e. those containing the four-block partition but not ↑ ⊗ ↑, and the complementary case of non-
hyperoctahedral categories. The latter one is quite simple.
Theorem 4.5 ([16, 71]). There are exactly 13 non-hyperoctahedral easy quan-
tum groups.
The case of hyperoctahedral categories is more complicated, and we sub-
divide it again. In [55] a category of partitions is called group-theoretical, if

❍✡
✡ ∈ C. Note that most of the group-theoretical categories are hyperoctahe-

dral (up to two examples). Group-theoretical categories give rise to quantum


groups where the relations u2ij ukl = ukl u2ij are fulfilled and the elements u2ij
are central projections.
The structure of group-theoretical categories is quite algebraic. Using the
partition ❍✡❍✡ , we may shift consecutive pairs belonging to the same block
to arbitrary positions, i.e. we compose a partition p ∈ C with partitions
|⊗α ⊗ ❍✡❍✡ ⊗ |⊗β or |⊗α ⊗ (❍✡❍✡)∗ ⊗ |⊗β iteratively and the resulting partition is
still contained in the category. Furthermore, we may erase such pairs of points using the pair partition (composition with |⊗α ⊗ ⊓∗ ⊗ |⊗β ), but we can also reproduce them: The composition with the rotated version in P (1, 3) of the four-block partition (which is always contained in group-theoretical categories) effects that we can
make “three points out of one”.
This leads to the insight that only partitions of a very special form matter
in group-theoretical categories, namely those in single leg form. This is the
case if p is—as a word—of the form p = ai(1) ai(2) . . . ai(n) ∈ P (0, n), where
ai(j) ≠ ai(j+1) for j = 1, . . . , n − 1. Note that we can always restrict to
partitions having no upper points using Proposition 4.1. The letters a1 , . . . , ak
correspond to the points connected by the partition p. In other words, in a
partition in single leg form no two consecutive points belong to the same block.
These partitions look like words in the infinite free product Z2∗∞ (note that
pairs aa may be neglected due to the above discussion—this corresponds to
aa = e in Z2 ).
In this way, we label all partitions in a group-theoretical category C in all
possible ways by letters of Z2∗∞ , and we obtain a subgroup F (C) ⊂ Z2∗∞ . The
product of two elements is (up to some technical consideration) given by the
tensor product of two partitions, whereas the inverse in the group setting comes
from the involution on the partition side.
In [54, 55] we prove that the map C 7→ F (C) is a lattice isomorphism be-
tween group-theoretical categories and a large class of subgroups of Z2∗∞ and
we deduce that there are uncountably many pairwise nonisomorphic easy quan-
tum groups. This proves that the class of easy quantum groups is very rich!
Moreover, the map F translates the problem of classifying group-theoretical
categories to a problem in group theory, a problem which is far from being
solvable. In [55] we show that it contains the problem of understanding all
varieties of groups. Nevertheless, the map F helps to explain the structure
of group-theoretical easy quantum groups in a satisfying way—they are determined by a group. Denote by Fn (C) the restriction of F (C) to Z2∗n .

Theorem 4.6 ([55]). Let Sn ⊂ G ⊂ On+ be an easy quantum group with
associated category C of partitions. If C is group-theoretical, then G may be
written as a semi-direct product
G ≅ (Z2∗n /Fn (C))^ ⋈ Sn ,
i.e.
C(G) ≅ C ∗ (Z2∗n /Fn (C)) ⊗ C(Sn )
with uij ↔ ugi ⊗ vij .
This decomposition contains a lot of information. See [55] for consequences
of this picture. Note that this theorem can be extended to noneasy quantum
groups. This proves that easy quantum groups are indeed an “easy” step into
the world of compact matrix quantum groups—we only obtained the more
general version of the above theorem by investigating the easy case first.
The non-group-theoretical hyperoctahedral categories (❍✡❍✡ ∉ C) in turn behave more like combinatorial objects than like groups. They are given by a one-parameter series ⟨πk ⟩, k ∈ N, and ⟨πl , l ∈ N⟩, where πk is given by k
blocks in the following way:
πk = a1 . . . ak ak . . . a1 a1 . . . ak ak . . . a1 .
The work in [54, 55, 56] draws a line between those easy quantum groups
which are closer to the group setting and those which are more like “free” ob-
jects. The precise meaning of this distinction awaits further investigation. The
full classification of easy quantum groups amounts to the following theorem.
Theorem 4.7 ([56]). If G is an easy quantum group, its corresponding category
of partitions
(i) either is non-hyperoctahedral, and hence it is one of the 13 cases of [71],
(ii) or coincides with ⟨πk ⟩ for some k ∈ N ∪ {∞} or with ⟨πl , l ∈ N⟩, see [56],
(iii) or is group-theoretical and hyperoctahedral and thus
G ≅ (Z2∗n /Fn (C))^ ⋈ Sn ,
see [54, 55].
Wang also defined a quantum version of the unitary group Un , the free
unitary quantum group Un+ , see [69]. Its associated C ∗ -algebra C(Un+ ) is given by
not necessarily selfadjoint generators uij , such that u = (uij ) and ut = (uji ) are
unitaries. Note that by Woronowicz’s definition of compact matrix quantum
groups (Definition 2.6), we need not only u but also ut to be invertible. It is straightforward to extend the notion of easy quantum groups to the setting
Sn ⊂ G ⊂ Un+ , using partitions where the points are colored with two colors
(corresponding to u and ū = (ut )∗ ). In joint work in progress with Tarrago,
the author is currently undertaking the classification of unitary easy quantum
groups [57].
5. De Finetti theorems in free probability
Let us now turn to a major application of easy quantum groups to free
probability. The most striking feature of quantum groups is that they provide
more symmetries in noncommutative (operator algebraic) settings. In free
probability, easy quantum groups seem to be the right symmetries: In 2009,
Köstler and Speicher proved a remarkable quantum version of the classical de
Finetti theorem from probability theory [47]. Let us prepare the statement.
Let (xi )i∈N be (classical) random variables, and let φ = E be the expec-
tation. The (distribution of the) sequence is exchangeable or invariant under
permutation, if
φ(xi1 . . . xim ) = φ(xσ(i1 ) . . . xσ(im ) )
holds for all permutations σ ∈ Sn , all m, n ∈ N and all 1 ≤ i1 , . . . , im ≤ n.
The following theorem is due to de Finetti (1931).
Theorem 5.1. A sequence (xi )i∈N of (classical) random variables is exchange-
able if and only if it is independent and identically distributed over the tail
σ-algebra, i.e. with respect to E : L∞ (Ω, Σ, µ) → L∞ (Ω, Σtail , µtail ), where Σtail := ∩n∈N σ(xk | k ≥ n).
This is an important theorem in classical probability theory because it set-
tled the question how to define independence for sequences of random variables:
Invariance under the action of Sn is an equivalent characterization.
We want to formulate a similar theorem in free probability using Sn+ . Let
us first consider an action α : G × X → X of a compact group G on a compact
space X. Dualization yields a ∗ -homomorphism
α̃ : C(X) → C(G × X) ≅ C(G) ⊗ C(X)
given by f 7→ f ◦ α. This motivates the following definition.
Definition 5.2. Let G be a compact quantum group and A be a C ∗ -algebra.
A (left) coaction of G on A is given by a ∗ -homomorphism α : A → C(G) ⊗ A
such that (id ⊗α)α = (∆ ⊗ id)α (coassociativity).
Let (A, φ) be a noncommutative C ∗ -probability space and (xi )i∈N ⊂ A
be random variables. Suppose Sn+ coacts on C ∗ (x1 , . . . , xn ) ⊂ A by xi ↦ Σj uji ⊗ xj . Here, the idea is that the matrix u acts by mapping a basis vector
ei to uei . We say that (xi )i∈N is quantum exchangeable or invariant under
quantum permutations if its joint distribution is invariant under this coaction
of Sn+ , i.e. if we have
φ(xi1 . . . xim ) 1C(Sn+ ) = Σj1 ,...,jm uj1 i1 . . . ujm im φ(xj1 . . . xjm )
as an equality in C(Sn+ ),
for all m, n ∈ N and all 1 ≤ i1 , . . . , im ≤ n. Note
that it would be more precise to speak of invariance under the coaction of the
sequence (Sn+ )n∈N .
Comparing this equation to the classical case, let u′ij be the coordinate
functions of C(Sn ) and σ ∈ Sn . Then u′ij (σ) = δiσ(j) . Hence, evaluating
Σ u′j1 i1 . . . u′jm im φ(xj1 . . . xjm ) at σ yields exactly φ(xσ(i1 ) . . . xσ(im ) ). There-
fore, quantum exchangeability implies classical exchangeability.
We are ready for the quantum version of de Finetti’s theorem. Köstler and
Speicher revealed a nice parallelism: While invariance under Sn is equivalent to
classical independence (with amalgamation), invariance under Sn+ is equivalent
to freeness (with amalgamation).
Theorem 5.3 ([47]). A sequence (xi )i∈N of selfadjoint random variables in a
noncommutative W ∗ -probability space (A, φ) is quantum exchangeable if and
only if it is conditionally free and identically distributed over the tail von Neumann algebra, i.e. with respect to the conditional expectation E : A → T , where T := ∩n∈N W ∗ (xk | k ≥ n) and φ ◦ E = φ.
Proof. Assume that (xi ) is free and identically distributed with respect to the
tail algebra T . The moment-cumulant formula (see Definition 4.3 of Speicher’s
lecture) can be extended to the operator-valued setting. Using it, together with
φ ◦ E = φ, we infer
Σnj1 ,...,jm =1 uj1 i1 . . . ujm im φ(xj1 . . . xjm )
   = Σπ∈N C(0,m) Σj1 ,...,jm uj1 i1 . . . ujm im φ( κEπ (xj1 , . . . , xjm ) ).

By the freeness assumption (“mixed cumulants vanish”), the cumulants κEπ (xj1 , . . . , xjm ) ∈ T are zero if ker j ≱ π, i.e. if there are distinct indices js ≠ jt belonging to the same block of π. By ker j we denote the partition
connecting equal indices in j = (j1 , . . . , jm ) to equal indices; by “≥” we denote
the partial order given by refinement of the block structure (see Speicher’s
lecture).
Furthermore, identical distribution yields that κEπ (xj1 , . . . , xjm ) is independent of the choice of the indices js as long as ker j ≥ π. We denote the corresponding cumulant simply by κEπ . We thus have
Σj1 ,...,jm uj1 i1 . . . ujm im φ(xj1 . . . xjm ) = Σπ∈N C φ(κEπ ) Σker j≥π uj1 i1 . . . ujm im .

Finally, the intertwiner map Tπ : (Cn )⊗m → C for π ∈ N C(m, 0) may simply be written as Tπ (ei1 ⊗ · · · ⊗ eim ) = δker i≥π . Thus, the equality Tπ u⊗m = Tπ yields Σker j≥π uj1 i1 . . . ujm im = δker i≥π in C(Sn+ ) and thus
Σj1 ,...,jm uj1 i1 . . . ujm im φ(xj1 . . . xjm ) = φ( Σπ∈N C δker i≥π κEπ ).

Using again the identical distribution, freeness, the moment cumulant formula
and φ ◦ E = φ (in this order), we infer that (xi ) is quantum exchangeable.
The converse direction is a bit more elaborate. One first has to show that
such a conditional expectation E exists once the sequence is quantum ex-
changeable, and that quantum exchangeability with respect to φ is equiva-
lent to quantum exchangeability with respect to E (defined analogously as for
states). Furthermore, quantum exchangeability implies (classical) exchange-
ability which yields the identical distribution of (xi ).
In a second step, let n ∈ N and p1 , . . . , pn be polynomials in a selfadjoint
variable with coefficients in T . Moreover, assume E(pi (x1 )) = 0 for all i =
1, . . . , n and let i1 ≠ i2 ≠ · · · ≠ in . We have to show E(p1 (xi1 ) . . . pn (xin )) = 0
in order to establish conditional freeness.
Now use
E(p1 (xi1 ) . . . pn (xin )) = Σj1 ,...,jn uj1 i1 . . . ujn in E(p1 (xj1 ) . . . pn (xjn ))

and the following ingredients. Consider E(x7 x2 x7 x9 ). Due to identical distribution, this coincides with E(x7 ((1/N ) ΣNk=1 x9+k ) x7 x9 ). Using the shift αk (x9 ) = x9+k and von Neumann's mean ergodic theorem, we infer that
E( x7 ((1/N ) ΣNk=1 x9+k ) x7 x9 ) → E(x7 E(x2 ) x7 x9 ).

With techniques like this, in combination with identical distribution, we basically replace E(p1 (xj1 ) . . . pn (xjn )) by E(p1 (xi1 ) . . . pn (xin )) in the above equation. This yields
E(p1 . . . pn ) = ( Σ uj1 i1 . . . ujn in ) E(p1 . . . pn ).
Now, all we have to prove is that the sum Σ uj1 i1 . . . ujn in (running over indices
j subject to some conditions) is not one—which hence implies E(p1 . . . pn ) = 0.
For doing so, we extend the following consideration about S4+ .
Let p and q be two noncommuting projections. The following matrix gives
rise to a representation of the matrix u = (uij ),
( p       1 − p   0       0     )
( 1 − p   p       0       0     )
( 0       0       q       1 − q )
( 0       0       1 − q   q     ).
This representation allows us to simplify our calculations for proving
Σ uj1 i1 . . . ujn in ≠ 1.  □
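The representation used at the end of the proof can also be checked numerically. The following Python sketch is an illustration of ours, with a concrete choice of noncommuting projections p and q in M2 (C): it verifies that the 4 × 4 block matrix above satisfies the relations of C(S4+ ), while its entries do not commute.

import numpy as np

p = np.array([[1.0, 0.0], [0.0, 0.0]])   # projection onto the first basis vector
q = np.array([[0.5, 0.5], [0.5, 0.5]])   # projection onto the diagonal; p and q do not commute
one, zero = np.eye(2), np.zeros((2, 2))

u = [[p, one - p, zero, zero],
     [one - p, p, zero, zero],
     [zero, zero, q, one - q],
     [zero, zero, one - q, q]]

for i in range(4):
    for j in range(4):
        x = u[i][j]
        assert np.allclose(x, x.T) and np.allclose(x @ x, x)    # selfadjoint projections
    assert np.allclose(sum(u[i][j] for j in range(4)), one)     # row sums are 1
    assert np.allclose(sum(u[j][i] for j in range(4)), one)     # column sums are 1

# the entries do not commute, so this representation does not factor through C(S4)
print(np.allclose(u[0][0] @ u[2][2], u[2][2] @ u[0][0]))        # False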

As a next step, it is natural to ask: What happens if we replace invariance under Sn+ by other quantum group coactions? Banica, Curran and Speicher
proved the following theorem.
Theorem 5.4 ([18]). Invariance of a sequence of noncommutative random
variables under quantum group actions corresponds to distributional features
in the following cases:
• Sn+ : freeness with amalgamation, identical distribution
• Hn+ : freeness with amalgamation, identical distribution, even distribution
• Bn+ : freeness with amalgamation, identical distribution, the sequence is an
operator valued free semicircular family with common mean and common
variance
• On+ : freeness with amalgamation, identical distribution, the sequence is
an operator valued free semicircular family with mean zero and common
variance
There is also a de Finetti result invoking the representation of certain easy
quantum groups as semi-direct products in [55], which is related to Theo-
rem 4.6.

6. Laws of characters
An easy quantum group G comes with an algebra C(G) and a state, the
Haar state h. Hence, we can do free probability on the quantum group itself. A
first step is the computation of the laws of characters, which is, by the way, also
relevant for understanding the object from a quantum group perspective. Let G be a compact matrix quantum group of dimension n. Denote by χ := Σni=1 uii its character. If G ⊂ On+ , the uij are selfadjoint, hence also χ = χ∗ .
What are the moments of χ? Due to Woronowicz we know that h(χk ) is
exactly the dimension of the fixpoint algebra Fix(u⊗k ) = Hom(0, k). In the
case of easy quantum groups, this is given by the number of partitions in
C(0, k), see [21]. The computation of the law of characters of easy quantum
groups thus reduces to the problem of counting partitions. By [16, 21], we have
the following laws:
• On : real Gaussian
• On+ : semicircle
• Sn : Poisson
• Sn+ : free Poisson
Furthermore, the law on On∗ is given by the squeezed complex Gaussian, i.e.
the moments are given by φ((aa∗ )k ) for k even, and zero for k odd. Here, a is
a complex Gaussian.
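Since the moments of χ are obtained by counting partitions (at least for n large enough), the four laws above can be checked by brute force. The following Python sketch is an illustration of ours: it counts noncrossing (pair) partitions on k points and compares the result with the Catalan numbers, i.e. with the even moments of the semicircle law and the moments of the free Poisson law of parameter 1.

from itertools import combinations
from math import comb

def set_partitions(points):
    """All partitions of the list `points` into blocks."""
    if not points:
        yield []
        return
    first, rest = points[0], points[1:]
    for r in range(len(rest) + 1):
        for others in combinations(rest, r):
            block = [first, *others]
            remaining = [p for p in rest if p not in others]
            for partition in set_partitions(remaining):
                yield [block] + partition

def has_crossing(partition):
    """True if there are a < b < c < d with a, c in one block and b, d in another."""
    for b1 in partition:
        for b2 in partition:
            if b1 is not b2 and any(a < b < c < d
                                    for a in b1 for c in b1 for b in b2 for d in b2):
                return True
    return False

def count(k, pairings_only=False):
    """Number of (pair) partitions in NC(0, k), i.e. the k-th moment of the character."""
    return sum(1 for p in set_partitions(list(range(k)))
               if not has_crossing(p)
               and not (pairings_only and any(len(b) != 2 for b in p)))

print([count(k, pairings_only=True) for k in range(1, 7)])   # 0, 1, 0, 2, 0, 5   (On+)
print([count(k) for k in range(1, 7)])                       # 1, 2, 5, 14, 42, 132  (Sn+)
print([comb(2 * m, m) // (m + 1) for m in range(1, 7)])      # the Catalan numbers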
There are several refinements of the study of laws of characters. For in-
stance, one can investigate Tr(um ) instead of Tr(u) = χ. One can also Ppass to
[nt]
tuples of elements, or one can study the truncated characters χt := i=1 uii .
See [17, 21] for research in this direction and further investigation of stochastic
aspects of easy quantum groups.
We want to point out that the investigation of the nature of these laws
of characters and of further possible links to free probability is still somehow
underdeveloped. We quote from Voiculescu’s lecture, contained in this volume:
It remains an open question whether the noncommutative dis-
tributions of the variables generating the free quantum groups
can be well integrated into the free probability framework. So,
do the distributions which arise in the free quantum group
setting fit in the free probability context or do these laws go
beyond?
7. The Haar state on easy quantum groups
As mentioned earlier, the philosophy of easy quantum groups is that opera-
tor algebraic properties of the quantum groups should be reflected by combina-
torial properties of the categories of partitions. We want to give two examples
in this direction.
As described in Section 2, the existence of the Haar state is an important
feature of compact quantum groups. The essence is the question of how to
evaluate monomials ui1 j1 . . . uik jk , a problem also considered in classical group
theory. Attacking this question in the quantum group setting, Banica and
Collins developed a Weingarten calculus for On+ and Sn+ . In the following
formulation, we use notation resembling that of classical group theory.
Theorem 7.1 ([13]). Let h = ∫On+ · du be the unique Haar state on On+ . Then, for all k ≤ n,
∫On+ ui1 j1 . . . uik jk du = Σ Wkn (p, q),
where the sum runs over all p, q ∈ N C2 (0, k) with p ≤ ker i and q ≤ ker j. Here, Wkn is the Weingarten matrix given as the inverse of the Gram matrix Gkn (p, q) := ⟨Tp , Tq ⟩.

Proof. Due to Woronowicz's Tannaka–Krein result [72], the matrix
P := ( ∫On+ ui1 j1 . . . uik jk du )i,j
projects onto the fixpoint algebra Fix(u⊗k ), which in turn has a basis {Tp | p ∈ N C2 (0, k)}. By linear algebra, the entry Pij is of the form
Pij = Σp,q∈N C2 (0,k), p≤ker i, q≤ker j G−1 (p, q),
where G(p, q) is obtained from evaluating the inner product ⟨Tp , Tq ⟩, for p, q ∈ N C2 (0, k).  □

The result can be extended to other free easy quantum groups, see [14,
17, 21], but the definition of the Gram matrix has to be adapted a bit. It is
then defined as Gkn (p, q) := n^#{blocks in p∨q} , where p ∨ q denotes the minimal partition r with r ≥ p and r ≥ q. In principle, this allows us to compute the Haar state on easy quantum groups by combinatorial means. In practice, it is quite hard to invert the Gram matrix, see for instance [14] for some low-dimensional computations in the case of Sn+ .
It follows from the Weingarten formula that the elements (√n uij )1≤i,j≤m
(with m ∈ N fixed) of C(On+ ) become asymptotically free semicircular elements,
as n → ∞ (see [13]).
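To see the Weingarten formula at work, here is a small Python sketch for On+ and k = 4, with a concrete dimension n = 5 (the encoding and the function names are ours, not from [13]): it builds the 2 × 2 Gram matrix over the two noncrossing pairings of four points, inverts it, and evaluates a few Haar integrals. One obtains for instance h(u11 u11 u11 u11 ) = 2/(n(n + 1)).

from fractions import Fraction

n = 5   # a concrete dimension
# the two noncrossing pair partitions of {0, 1, 2, 3}
pairings = [
    [{0, 1}, {2, 3}],
    [{0, 3}, {1, 2}],
]

def blocks_of_join(p, q):
    """Number of blocks of p ∨ q (glue blocks of p and q that intersect)."""
    blocks = [set(b) for b in p + q]
    merged = True
    while merged:
        merged = False
        for a in range(len(blocks)):
            for b in range(a + 1, len(blocks)):
                if blocks[a] & blocks[b]:
                    blocks[a] |= blocks.pop(b)
                    merged = True
                    break
            if merged:
                break
    return len(blocks)

def refines(p, i):
    """p ≤ ker i: every block of p carries only one value of the multi-index i."""
    return all(len({i[x] for x in block}) == 1 for block in p)

# Gram matrix G(p, q) = n^{#blocks(p ∨ q)} and its inverse, the Weingarten matrix
G = [[Fraction(n) ** blocks_of_join(p, q) for q in pairings] for p in pairings]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
W = [[G[1][1] / det, -G[0][1] / det], [-G[1][0] / det, G[0][0] / det]]

def haar(i, j):
    """h(u_{i1 j1} u_{i2 j2} u_{i3 j3} u_{i4 j4}) via the Weingarten formula."""
    return sum(W[a][b]
               for a, p in enumerate(pairings) if refines(p, i)
               for b, q in enumerate(pairings) if refines(q, j))

print(haar((1, 1, 1, 1), (1, 1, 1, 1)))   # 2/(n(n+1)) = 1/15 for n = 5
print(haar((1, 1, 2, 2), (1, 1, 2, 2)))   # 1/(n^2 - 1) = 1/24: only {0,1},{2,3} contributes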
8. Fusion rules of easy quantum groups
As a second example of how to read quantum algebraic properties from
the partitions, we want to calculate the fusion rules of easy quantum groups,
following the work of Freslon and the author [39]. Let v be a unitary (co-)
representation of a compact quantum group. By Woronowicz [76], it decom-
poses into a direct sum of irreducible finite-dimensional (co-)representations.
Thus, the representation theory for compact quantum groups reduces to the
following two tasks:
(a) Find all irreducible representations uα .
(b) In which way does uα ⊗ uβ = Σγ uγ decompose (fusion rules)?
The fusion rules of On+ and Sn+ have been found by Banica [1, 4]. The
irreducible representations of On+ are indexed by N0 and tensor products of
them decompose according to
uk ⊗ ul = u|k−l| ⊕ u|k−l|+2 ⊕ u|k−l|+4 ⊕ · · · ⊕ uk+l .
In the case of Sn+ , they are again indexed by N0 , but the fusion rules are
uk ⊗ ul = u|k−l| ⊕ u|k−l|+1 ⊕ u|k−l|+2 ⊕ · · · ⊕ uk+l .
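Both fusion rules can be encoded as a simple rule on the labels. The following Python sketch is a consistency check of ours, not a proof: it implements the two rules and verifies that the induced multiplication of labels is associative, as it must be for a fusion ring, and it reproduces the decompositions of u2 ⊗ u2 mentioned below.

from collections import Counter
from itertools import product

def fuse(k, l, step):
    """Labels in u_k ⊗ u_l: |k-l|, |k-l|+step, ..., k+l (step 2 for On+, step 1 for Sn+)."""
    return list(range(abs(k - l), k + l + 1, step))

def fuse_with(counter, m, step):
    """Fuse every label in `counter` (with multiplicities) with u_m."""
    out = Counter()
    for label, mult in counter.items():
        for new in fuse(label, m, step):
            out[new] += mult
    return out

for step in (2, 1):
    for a, b, c in product(range(5), repeat=3):
        left = fuse_with(Counter(fuse(a, b, step)), c, step)    # (u_a ⊗ u_b) ⊗ u_c
        right = fuse_with(Counter(fuse(b, c, step)), a, step)   # u_a ⊗ (u_b ⊗ u_c)
        assert left == right, (a, b, c)
    print("step", step, "is associative; u_2 ⊗ u_2 decomposes as", fuse(2, 2, step))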
How can we describe the fusion rules in terms of partitions? Recall that
the category corresponding to On+ is N C2 and the one of Sn+ is N C. As a
motivation, consider the tensor product u⊗4 in the case On+ . Since the maps
Tp for p ∈ N C2 (4, 4) span the intertwiner space of u⊗4 , any subrepresentation
is given by T u⊗4T for some projection T ∈ span{Tp | p ∈ N C2 (4, 4)}. How do
we obtain projections in this span? We first normalize Tp in such a way that
Tp is a projection whenever p ∈ N C2 (4, 4) is such that p = p∗ = pp. Here are
three examples of such projective partitions:
[Three diagrams omitted.]
And here are two more projective partitions which are in N C(4, 4) but not
in N C2 (4, 4):
[Two diagrams omitted.]
The indices in the decomposition of uk ⊗ ul as a direct sum increase in steps of two in the case of On+ and in steps of one in the case of Sn+ . For instance
u2 ⊗ u2 = u0 ⊕ u2 ⊕ u4 in On+ and u2 ⊗ u2 = u0 ⊕ u1 ⊕ u2 ⊕ u3 ⊕ u4 in Sn+ .
Maybe the reason for having three summands vs. five summands comes from
the fact that the above two partitions are in the category corresponding to Sn+
but not in the one of On+ ?
Even more enlightening is the following observation. Due to Banica's inves-
tigation, we know that the fundamental representation u of Sn+ decomposes as
u = u0 ⊕ u1 . Now
u⊗2 = (u0 ⊕ u1 ) ⊗ (u0 ⊕ u1 ) = u0 ⊕ u1 ⊕ u1 ⊕ u0 ⊕ u1 ⊕ u2 .
The projective partitions in N C(2, 2) are exactly the following six:
[Six diagrams omitted.]
Counting the number of through-blocks, i.e. of blocks containing upper as well as
lower points, we recover exactly the pattern of the irreducible representations
in the decomposition of u⊗2 , namely 0, 1, 1, 0, 1, 2. Hence, we suspect that the
irreducible representations of an easy quantum group are indexed by projective
partitions and that we obtain them using Tp u⊗k Tp . This idea needs to be
refined a bit, since such representations would not be irreducible in general
(note for instance that for p = |⊗k , we have Tp = id).
The systematic approach [38, 39] to the fusion rules of an easy quantum
group G with category C goes as follows. To each projective partition p ∈
C(k, k) we define
Rp := ∨q Tq ,   Pp := Tp − Rp ,   up := Pp u⊗k ⊂ u⊗k ,
where the supremum ∨q runs over all projective q ∈ C(k, k) with q ≺ p. Here, q ≺ p denotes the case that pq = qp = q ≠ p, and ∨q Tq denotes the maximum of the projections Tq .
of Aut(up ) shows that it is a quotient of a certain group algebra associated
to p. This gives rise to a decomposition of up into irreducible representations.
Nevertheless, we can work with the representations up —even when they are
not irreducible—and we can give a somewhat rougher version of the fusion
rules in general, a kind of “partition fusion rules”.
Theorem 8.1 ([39]). If G is an easy quantum group with category of parti-
tions C, tensor products of the representations up decompose in the following
way:
up ⊗ uq = Σm∈XC (p,q) um .
The set XC (p, q) ⊂ P can be given explicitly. If C ⊂ N C, the representations
up are irreducible and the above equation yields the complete fusion rules.
Note that certain representations up and uq might be unitarily equivalent.
This is the case if and only if there is a partition r ∈ C such that p = r∗ r and
q = rr∗ , mimicking a Murray–von Neumann equivalence.
Let us quickly sketch how the fusion rules for Sn+ may be deduced from the
above theorem. We may label the irreducible representations of Sn+ by projec-
tive partitions in N C. Now, two such representations are unitarily equivalent
if and only if they have the same number of through-blocks. Thus, the irre-
ducible representations may be labeled by N0 and we may take pk = |⊗k as a
representative for k 6= 0. Then, the elements of the set XC (pk , pl ) are mainly of
the same form as the above five examples of projective partitions in N C(4, 4).
This translates to the known fusion rules. The case of On+ is similar.
The above approach yields a unified proof for the fusion rules of all free easy
quantum groups. Furthermore, it can be extended to unitary easy quantum
groups. By the work of Freslon [38], operator algebraic properties like the
Haagerup property may be deduced from the fusion rules. Also, he studied
the possible fusion rules in more detail, isolating a “free part” and a “group
part”.
Note that this example of determining quantum algebraic properties by
combinatorial means is really a substantial one: The fusion rules of a quantum group are an essential piece of information not only for the full C ∗ -algebraic version of the quantum group, but also for the reduced one, the von Neumann algebraic
one and the purely algebraic one. The fusion rules are intrinsic for the quantum
group as such.

9. Associated von Neumann algebras
Since we have a Haar state on an easy quantum group, we can associate a re-
duced C ∗ -algebra and a von Neumann algebra to it, via the GNS-construction.
These objects are mainly studied in the cases On+ and Un+ , or (a bit less) for Sn+ .
Other easy quantum groups are still rarely covered. The current state of the
art is the following compilation of results by Banica, Vaes, Vergnioux, Bran-
nan, Freslon, Voigt, Isono and others (see for instance the introduction of [27]
for an overview).
Theorem 9.1 ([2, 27, 60]). The C ∗ -algebras C∗red (On+ ) and C∗red (Un+ ) are nonnuclear, exact and simple and they have the metric approximation property.
Theorem 9.2 ([2, 27, 42, 59, 60]). The von Neumann algebras L(On+ ) and
L(Un+ ) are strongly solid, noninjective, full, prime II1 -factors having no Cartan
subalgebra. They have the Haagerup property.
Thus, L(On+ ) and L(Un+ ) share many properties with the free group factors
L(Fk ), but it is unknown whether we have L(On+ ) ≅ L(Fk ) for some n, k or maybe L(Un+ ) ≅ L(Fk ). For n = 2 however, we have L(U2+ ) ≅ L(F2 ) (see [2]).
Theorem 9.3 ([66, 67, 68]). The quantum groups On+ and Un+ fulfill the Baum–
Connes conjecture for quantum groups and we have the following table of K-
groups:
K0 (C(On+ )) = Z generated by [1],
K1 (C(On+ )) = Z generated by [u],
K0 (C(Un+ )) = Z generated by [1],
K1 (C(Un+ )) = Z² generated by [u] and [ū] = [ut ],
K0 (C(Sn+ )) = Z^(n²−2n+2) generated by [uij ], i, j < n, and [1],
K1 (C(Sn+ )) = Z generated by [u].
Theorem 9.4 ([37, 63, 64]). The quantum groups On+ and Un+ are weakly
amenable and they satisfy the Akemann–Ostrand property and the property of
rapid decay.
For further results, see also [28, 29, 35, 65].

10. Comments
For an introduction to algebraic quantum groups, Hopf algebras and the
Drinfeld–Jimbo approach, see [43, 44, 50]. For Kac algebras, we refer to [36].
For a quick start to Woronowicz’s approach to C ∗ -algebraic compact quantum
groups, the author recommends the book by Timmermann [58]. The best is,
to jump directly to Chapters 4 and 5; the first chapters of that book are more
algebraic. Another excellent book on compact quantum groups is the recent
one by Neshveyev and Tuset [52]. There are also good surveys [48, 51] from
the 90s. The latter one contains also locally compact quantum groups (see
also [58]). Besides, it is certainly worthwhile to take a look at Woronowicz’s
original papers on compact matrix quantum groups [72, 75], compact quantum
groups [76], Tannaka–Krein [74] and SUq (2) [73]. See also the introduction of
[69] for an overview on the general history of quantum groups.
The story of liberation of Lie groups begins with the work of Wang [69, 70].
There he defined Sn+ , On+ and the free unitary quantum group Un+ . By the
way, see [69, §4.1] for a comment on why both matrices u and ut need to
be unitary in Un+ . Wang and van Daele [62] also defined deformations of
On+ and Un+ by matrices Q ∈ GLn (C), which are related to Woronowicz’s
SUq (2), see [1]. Please consider also Banica's slight adaptation of the definition
(which is the one used nowadays) [1]. Banica worked a lot on refinements and
further studies of Wang’s quantum groups, see [1, 2, 4, 5, 7, 8, 12, 19] and the
nice survey (joint with Bichon and Collins) [10]. Bichon was also one of the
pioneers in free quantum groups, see for instance [24, 25] besides the many
articles in joint work with Banica. The hyperoctahedral quantum group Hn+
was introduced in [9]. The Weingarten calculus on quantum groups—i.e. the
Haar state computations—was developed by Banica and Collins [13, 14], the
latter one also being an expert on the Weingarten calculus on other structures.
Building on all the above work on free quantum groups, easy quantum
groups were first defined in 2009 by Banica and Speicher [21]. Articles directly
related to the classification are [16, 54, 55, 56, 71]. Stochastic aspects of easy
quantum groups may be found in [11, 17, 53]. Further articles somehow related
to easy quantum groups are [6, 15, 20, 23, 26]. For first steps to extend the easy
quantum group setting from quantum subgroups of On+ to quantum subgroups
of Un+ , we refer to the ongoing research by Tarrago and the author [57]. In
order to get a better understanding of the operations on partitions, consult
[56] or [57].
The de Finetti result for Sn+ was stated by Köstler and Speicher [47]. It
was extended to other easy quantum groups in [18]. Further exchangeability
studies have been performed by Köstler partially in joint work with Gohm (for
actions of Sn and the braid group) [40, 41, 45, 46] and Curran partially in joint
work with Speicher (quantum setting) [30, 31, 32, 33, 34]. For the fusion rules
of easy quantum groups and others, see [3, 22, 38, 39, 49]. For literature on
von Neumann algebraic or other operator algebraic aspects, see Section 9.
This list is not guaranteed to be complete, but it might help to get an
overview of the subject. Here are some open questions and calls for possible
further developments.
• Find more proofs of the meta conjecture: “All quantum algebraic proper-
ties of easy quantum groups should be visible in terms of partitions.”
• Formulate analogs of classical results for Sn , On or Un for their quantum
versions Sn+ , On+ or Un+ .
• Extend any result obtained so far for Sn+ , On+ or Un+ to the other of these
three quantum groups. Even better: Extend it to any easy quantum group,
ideally with a uniform proof.
• Extend it to all compact matrix quantum groups Sn ⊂ G ⊂ On+ (see [55]
for a first, partial result in this direction—Theorem 4.6 can be extended
to noneasy quantum groups).
• We need more actions! Find (co-)actions of easy quantum groups on some
C ∗ -algebras. Can it be done in some systematic way?
• Do more free probability on easy quantum groups; for instance, answer
Voiculescu’s question about the laws of characters.

11. Exercises
Exercise 11.1. Consider the universal C ∗ -algebra A generated by commut-
ing selfadjoint elements uij , 1 ≤ i, j ≤ n such that the matrix u = (uij ) is
orthogonal (Example 2.3). Let φ : A → C be a character (i.e. a nonzero ∗-homomorphism). Check that the matrix (φ(uij )) ∈ Mn (C) is orthogonal and that it determines φ completely. Conversely, every orthogonal matrix in Mn (C) gives rise to a character. Conclude that the space of characters of A is homeomorphic to On and that C(On ) ≅ A. Check that the generators uij are
mapped to the coordinate functions u′ij in C(On ).
Exercise 11.2.
(a) Check that N C2 and P2 are categories of partitions in the sense of Def-
inition 3.1. Check that every noncrossing pair partition p ∈ N C(0, l)
having no upper points may be obtained inductively from composing p′ ∈
N C(0, l − 2) with |⊗α ⊗ ⊓ ⊗ |⊗β for suitable α and β. Using rotation
(Proposition 4.1), infer that N C2 = ⟨⊓, |⟩. Verify that we can permute the points of a partition using ✁❆ and composition, and deduce that P2 = ⟨✁❆⟩.
(We omit ⊓ and | as generating partitions since they are always contained
in a category by definition.)
(b) Check that N C and P are categories of partitions. Follow the lines of the
argumentation following Proposition 4.1 to show that N C = ⟨four-block, ↑⟩ and P = ⟨four-block, ↑, ✁❆⟩.
(c) Prove that ⟨four-block⟩ ⊂ N C is the category of all noncrossing partitions with blocks of even size, whereas ⟨↑⟩ is the category of all noncrossing partitions
with blocks of size one or two.
Details may be found in [71].
Exercise 11.3. Let C ⊆ N C be a category of noncrossing partitions. Show
the following implications:
(a) If the four-block partition is in C and ↑ ∈ C, then C = N C. (Use Exercise 11.2.)
(b) If the four-block partition is not in C and ↑ ∈ C, then C = ⟨↑⟩. (Hint: Assume that C contains a partition with a block of size at least three. Deduce that the four-block partition is in C. Now, use Exercise 11.2.)
(c) If the four-block partition is in C and ↑ ⊗ ↑ ∉ C, then C = ⟨four-block⟩. (Hint: Assume that C contains a partition with a block of odd size. Deduce that ↑ ⊗ ↑ ∈ C. Use Exercise 11.2.)
(d) If the four-block partition is not in C and ↑ ⊗ ↑ ∉ C, then C = N C2 .
Extending these observations, one can show that there are exactly seven cate-
gories of noncrossing partitions, see [71].
Exercise 11.4. Let Xn = {x1 , . . . , xn } be a finite set. We can write the algebra C(Xn ) of continuous functions over Xn as the universal C ∗ -algebra generated by projections e1 , . . . , en such that Σni=1 ei = 1. Let A be a C ∗ -
algebra generated by elements uij for 1 ≤ i, j ≤ n. Assume that there is a
unital ∗-homomorphism α : C(Xn ) → A ⊗ C(Xn ) such that
α(ei ) = Σnj=1 uji ⊗ ej .
Show that the elements uij in A have to fulfill the following relations:
uij = u∗ij = u²ij ,   Σnj=1 uij = 1.

We know that the automorphism group of Xn (set of bijective maps) is
the permutation group Sn . The above shows that the quantum automorphism
group of C(Xn ) is the quantum permutation group Sn+ , see also [69].

References
[1] T. Banica, Théorie des représentations du groupe quantique compact libre O(n), C. R.
Acad. Sci. Paris Sér. I Math. 322 (1996), no. 3, 241–244. MR1378260
[2] T. Banica, Le groupe quantique compact libre U(n), Comm. Math. Phys. 190 (1997),
no. 1, 143–172. MR1484551
[3] T. Banica, Fusion rules for representations of compact quantum groups, Exposition.
Math. 17 (1999), no. 4, 313–337. MR1734250
[4] T. Banica, Symmetries of a generic coaction, Math. Ann. 314 (1999), no. 4, 763–780.
MR1709109
[5] T. Banica, A note on free quantum groups, Ann. Math. Blaise Pascal 15 (2008), no. 2,
135–146. MR2468039
[6] T. Banica, S. Belinschi, M. Capitaine, and B. Collins, Free Bessel laws, Canad. J. Math.
63 (2011), no. 1, 3–37. MR2779129
[7] T. Banica and J. Bichon, Quantum automorphism groups of vertex-transitive graphs of
order ≤ 11, J. Algebraic Combin. 26 (2007), no. 1, 83–105. MR2335703
[8] T. Banica and J. Bichon, Quantum groups acting on 4 points, J. Reine Angew. Math.
626 (2009), 75–114. MR2492990
[9] T. Banica, J. Bichon, and B. Collins, The hyperoctahedral quantum group, J. Ramanu-
jan Math. Soc. 22 (2007), no. 4, 345–384. MR2376808
[10] T. Banica, J. Bichon, and B. Collins, Quantum permutation groups: a survey, in Non-
commutative harmonic analysis with applications to probability, 13–34, Banach Center
Publ., 78, Polish Acad. Sci. Inst. Math., Warsaw, 2008. MR2402345
[11] T. Banica, J. Bichon, B. Collins, and S. Curran, A maximality result for orthogonal
quantum groups, Comm. Algebra 41 (2013), no. 2, 656–665. MR3011789
[12] T. Banica, J. Bichon, and J.-M. Schlenker, Representations of quantum permutation
algebras, J. Funct. Anal. 257 (2009), no. 9, 2864–2910. MR2559720
[13] T. Banica and B. Collins, Integration over compact quantum groups, Publ. Res. Inst.
Math. Sci. 43 (2007), no. 2, 277–302. MR2341011
[14] T. Banica and B. Collins, Integration over quantum permutation groups, J. Funct. Anal.
242 (2007), no. 2, 641–657. MR2274824
[15] T. Banica, B. Collins and P. Zinn-Justin, Spectral analysis of the free orthogonal matrix,
Int. Math. Res. Not. IMRN 2009, no. 17, 3286–3309. MR2534999
[16] T. Banica, S. Curran, and R. Speicher, Classification results for easy quantum groups,
Pacific J. Math. 247 (2010), no. 1, 1–26. MR2718205
[17] T. Banica, S. Curran, and R. Speicher, Stochastic aspects of easy quantum groups,
Probab. Theory Related Fields 149 (2011), no. 3-4, 435–462. MR2776622
[18] T. Banica, S. Curran, and R. Speicher, De Finetti theorems for easy quantum groups,
Ann. Probab. 40 (2012), no. 1, 401–435. MR2917777
[19] T. Banica and S. Moroianu, On the structure of quantum permutation groups, Proc.
Amer. Math. Soc. 135 (2007), no. 1, 21–29. MR2280170
[20] T. Banica and A. Skalski, Two-parameter families of quantum symmetry groups, J.
Funct. Anal. 260 (2011), no. 11, 3252–3282. MR2776569
[21] T. Banica and R. Speicher, Liberation of orthogonal Lie groups, Adv. Math. 222 (2009),
no. 4, 1461–1501. MR2554941
[22] T. Banica and R. Vergnioux, Fusion rules for quantum reflection groups, J. Noncommut.
Geom. 3 (2009), no. 3, 327–359. MR2511633
[23] T. Banica and R. Vergnioux, Invariants of the half-liberated orthogonal group, Ann.
Inst. Fourier (Grenoble) 60 (2010), no. 6, 2137–2164. MR2791653
[24] J. Bichon, Quantum automorphism groups of finite graphs, Proc. Amer. Math. Soc.
131 (2003), no. 3, 665–673 (electronic). MR1937403
[25] J. Bichon, Free wreath product by the quantum permutation group, Algebr. Represent.
Theory 7 (2004), no. 4, 343–362. MR2096666
[26] J. Bichon and M. Dubois-Violette, Half-commutative orthogonal Hopf algebras, Pacific
J. Math. 263 (2013), no. 1, 13–28. MR3069074
[27] M. Brannan, Approximation properties for free orthogonal and free unitary quantum
groups, J. Reine Angew. Math. 672 (2012), 223–251. MR2995437
[28] M. Brannan, Quantum symmetries and strong Haagerup inequalities, Comm. Math.
Phys. 311 (2012), no. 1, 21–53. MR2892462
[29] M. Brannan, Reduced operator algebras of trace-preserving quantum automorphism groups, Doc. Math. 18 (2013), 1349–1402. MR3138849
[30] S. Curran, Quantum exchangeable sequences of algebras, Indiana Univ. Math. J. 58
(2009), no. 3, 1097–1125. MR2541360
[31] S. Curran, Quantum rotatability, Trans. Amer. Math. Soc. 362 (2010), no. 9, 4831–4851.
MR2645052
[32] S. Curran, A characterization of freeness by invariance under quantum spreading, J.
Reine Angew. Math. 659 (2011), 43–65. MR2837010
[33] S. Curran and R. Speicher, Asymptotic infinitesimal freeness with amalgamation for
Haar quantum unitary random matrices, Comm. Math. Phys. 301 (2011), no. 3, 627–
659. MR2784275
[34] S. Curran and R. Speicher, Quantum invariant families of matrices in free probability,
J. Funct. Anal. 261 (2011), no. 4, 897–933. MR2803836
[35] K. De Commer, A. Freslon, and M. Yamashita, CCAP for universal discrete quantum
groups, Comm. Math. Phys. 331 (2014), no. 2, 677–701. MR3238527
[36] M. Enock and J.-M. Schwartz, Kac algebras and duality of locally compact groups,
Springer, Berlin, 1992. MR1215933
[37] A. Freslon, Examples of weakly amenable discrete quantum groups, J. Funct. Anal. 265
(2013), no. 9, 2164–2187. MR3084500
[38] A. Freslon, Fusion (semi)rings arising from quantum groups, J. Algebra 417 (2014),
161–197. MR3244644
[39] A. Freslon and M. Weber, On the representation theory of partition (easy) quantum
groups, J. Reine Angew. Math. (2014), DOI 10.1515/crelle-2014-0049.
[40] R. Gohm and C. Köstler, Noncommutative independence from the braid group B∞ ,
Comm. Math. Phys. 289 (2009), no. 2, 435–482. MR2506759
[41] R. Gohm and C. Köstler, Noncommutative independence in the infinite braid and sym-
metric group, in Noncommutative harmonic analysis with applications to probability
III, 193–206, Banach Center Publ., 96, Polish Acad. Sci. Inst. Math., Warsaw, 2012.
MR2986827
[42] Y. Isono, Examples of factors which have no Cartan subalgebras, Trans. Amer. Math.
Soc. 367 (2015), no. 11, 7917–7937. MR3391904
[43] C. Kassel, Quantum groups, Grad. Texts in Math., 155, Springer, New York, 1995.
MR1321145
[44] A. Klimyk and K. Schmüdgen, Quantum groups and their representations, Texts
Monogr. Phys., Springer, Berlin, 1997. MR1492989
[45] C. Köstler, A noncommutative extended de Finetti theorem, J. Funct. Anal. 258 (2010),
no. 4, 1073–1120. MR2565834
[46] C. Köstler, On Lehner’s ‘free’ noncommutative analogue of de Finetti’s theorem, Proc.
Amer. Math. Soc. 139 (2011), no. 3, 885–895. MR2745641
[47] C. Köstler and R. Speicher, A noncommutative de Finetti theorem: invariance under
quantum permutations is equivalent to freeness with amalgamation, Comm. Math. Phys.
291 (2009), no. 2, 473–490. MR2530168
[48] J. Kustermans and L. Tuset, A survey of C∗-algebraic quantum groups. I, Irish Math.
Soc. Bull. No. 43 (1999), 8–63. MR1741102
[49] F. Lemeux, The fusion rules of some free wreath product quantum groups and applica-
tions, J. Funct. Anal. 267 (2014), no. 7, 2507–2550. MR3250372
[50] G. Lusztig, Introduction to quantum groups, Progr. Math., 110, Birkhäuser Boston,
Boston, MA, 1993. MR1227098
[51] A. Maes and A. Van Daele, Notes on compact quantum groups, Nieuw Arch. Wisk. (4)
16 (1998), no. 1-2, 73–112. MR1645264
[52] S. Neshveyev and L. Tuset, Compact quantum groups and their representation cate-
gories, Cours Spécialisés, 20, Soc. Math. France, Paris, 2013. MR3204665
[53] S. Raum, Isomorphisms and fusion rules of orthogonal free quantum groups and
their free complexifications, Proc. Amer. Math. Soc. 140 (2012), no. 9, 3207–3218.
MR2917093
[54] S. Raum and M. Weber, The combinatorics of an algebraic class of easy quantum groups,
Infin. Dimens. Anal. Quantum Probab. Relat. Top. 17 (2014), no. 3. MR3241025
[55] S. Raum and M. Weber, Easy quantum groups and quantum subgroups of a semi-direct
product quantum group, J. Noncommut. Geom. 9 (2015), no. 4, 1261–1293. MR3448336
[56] S. Raum and M. Weber, The full classification of orthogonal easy quantum groups,
Comm. Math. Phys. 341 (2016), no. 3, 751–779.
[57] P. Tarrago and M. Weber, Unitary easy quantum groups: the free case and the group
case. arXiv:1512.00195 [math.QA] (2015).
[58] T. Timmermann, An invitation to quantum groups and duality, EMS Textbk. Math.,
Eur. Math. Soc., Zürich, 2008. MR2397671
[59] S. Vaes and N. Vander Vennet, Poisson boundary of the discrete quantum group Âu(F),
Compos. Math. 146 (2010), no. 4, 1073–1095. MR2660685
[60] S. Vaes and R. Vergnioux, The boundary of universal discrete quantum groups, exact-
ness, and factoriality, Duke Math. J. 140 (2007), no. 1, 35–84. MR2355067
[61] A. Van Daele, The Haar measure on a compact quantum group, Proc. Amer. Math.
Soc. 123 (1995), no. 10, 3125–3128. MR1277138
[62] A. Van Daele and S. Wang, Universal quantum groups, Internat. J. Math. 7 (1996),
no. 2, 255–263. MR1382726
[63] R. Vergnioux, Orientation of quantum Cayley trees and applications, J. Reine Angew.
Math. 580 (2005), 101–138. MR2130588
[64] R. Vergnioux, The property of rapid decay for discrete quantum groups, J. Operator
Theory 57 (2007), no. 2, 303–324. MR2329000
[65] R. Vergnioux, Paths in quantum Cayley trees and L2-cohomology, Adv. Math. 229
(2012), no. 5, 2686–2711. MR2889142
[66] R. Vergnioux and C. Voigt, The K-theory of free quantum groups, Math. Ann. 357
(2013), no. 1, 355–400. MR3084350
[67] C. Voigt, The Baum-Connes conjecture for free orthogonal quantum groups, Adv. Math.
227 (2011), no. 5, 1873–1913. MR2803790
[68] C. Voigt, On the structure of quantum automorphism groups, J. Reine Angew. Math.
(2015), DOI 10.1515/crelle-2014-0141.
[69] S. Wang, Free products of compact quantum groups, Comm. Math. Phys. 167 (1995),
no. 3, 671–692. MR1316765
[70] S. Wang, Quantum symmetry groups of finite spaces, Comm. Math. Phys. 195 (1998),
no. 1, 195–211. MR1637425
[71] M. Weber, On the classification of easy quantum groups, Adv. Math. 245 (2013), 500–
533. MR3084436
[72] S. L. Woronowicz, Compact matrix pseudogroups, Comm. Math. Phys. 111 (1987),
no. 4, 613–665. MR0901157
[73] S. L. Woronowicz, Twisted SU(2) group. An example of a noncommutative differential
calculus, Publ. Res. Inst. Math. Sci. 23 (1987), no. 1, 117–181. MR0890482
[74] S. L. Woronowicz, Tannaka-Kreĭn duality for compact matrix pseudogroups. Twisted
SU(N) groups, Invent. Math. 93 (1988), no. 1, 35–76. MR0943923
[75] S. L. Woronowicz, A remark on compact matrix quantum groups, Lett. Math. Phys. 21
(1991), no. 1, 35–39. MR1088408
[76] S. L. Woronowicz, Compact quantum groups, in Symétries quantiques (Les Houches,
1995), 845–884, North-Holland, Amsterdam, 1998. MR1616348
Participants of the Masterclass on
Free Probability and Operator Algebras

Nilin Abrahamsen, University of Copenhagen
Vadim Alekseev, University of Göttingen
Johannes Alt, LMU München
Duygu Altinok, University of Bonn
Hiroshi Ando, IHES Paris / ESI Vienna
Scott Atkinson, University of Virginia
Selçuk Barlak, WWU Münster
Hari Bercovici, Indiana University Bloomington
Alcides Buss, UFSC Florianópolis
Simon Campese, University of Luxembourg
Martijn Caspers, WWU Münster
Jins de Jong, WWU Münster
Gauthier Dierickx, Free University Brussels
Philip Dowerk, University of Leipzig
Ken Dykema, Texas A&M University
Christoph Gamm, University of Leipzig
Malte Gerholt, University of Greifswald
Yinzheng Gu, Queen’s University
Tarek Hamdi, University of Tunis
Adrien Hardy, KU Leuven
Mitchell Hawkins, University of Wollongong
Yusuke Isono, University of Tokyo
Bas Jordans, NTNU Oslo
Pawel Józiak, University of Warsaw
Magdalena Kersting, University of Göttingen
Stephanie Lachs, University of Greifswald
François Lemeux, Besançon
Snigdhayan Mahanta, WWU Münster
Tobias Mai, Saarland University
Sebastian Mentemeier, WWU Münster
Tomasz Miller, University of Warsaw
Zachary Mitchell, Texas A&M University
Joseph Noles, Texas A&M University
Jolanta Pielaskiewicz, University of Linköping
Sven Raum, KU Leuven
Anna Reshetenko, University of Bielefeld
James Rout, University of Wollongong
Dominik Schillo, Saarland University
Cédric Schonard, Saarland University
Konrad Schrempf, TU Graz
Dimitri Shlyakhtenko, UC Los Angeles
Roland Speicher, Saarland University
Nicolai Stammeier, WWU Münster
Michael Stiller, University of Hamburg
Karen Strung, WWU Münster
Pierre Tarrago, Saarland University
Felipe Torres, WWU Münster
Michaël Ulrich, ENS Paris / Besançon
Carlos Vargas Obieta, Saarland University
Josue Daniel Vasquez Becerra, Queen’s University
Peter Verraedt, KU Leuven
Dan-V. Voiculescu, UC Berkeley
Jonas Wahl, Saarland University
Simeng Wang, Besançon
Moritz Weber, Saarland University
Xiao Xiong, Besançon
Impressions of the Masterclass

[Photographs of the lecturers: Dan-V. Voiculescu (UC Berkeley), Roland Speicher (Saarland University), Dimitri Shlyakhtenko (UC Los Angeles), Ken Dykema (Texas A&M University), Hari Bercovici (Indiana University), and Moritz Weber (Saarland University).]
Contributors

Hari Bercovici
Mathematics Department, Indiana University
Bloomington, IN 47405, USA
E-mail: bercovic@indiana.edu

Ken Dykema
Department of Mathematics, Texas A&M University
College Station, TX 77843-3368, USA
E-mail: kdykema@math.tamu.edu

Dimitri Shlyakhtenko
Department of Mathematics, UCLA,
Los Angeles, CA 90095, USA
E-mail: shlyakht@math.ucla.edu

Roland Speicher
Fachrichtung Mathematik, Saarland University,
Postfach 151150, 66041 Saarbrücken, Germany
E-mail: speicher@math.uni-sb.de

Dan-Virgil Voiculescu
Department of Mathematics, University of California, Berkeley,
970 Evans Hall #3840, Berkeley, CA 94720-3840, USA
E-mail: dvv@math.berkeley.edu

Moritz Weber
Faculty of Mathematics, Saarland University,
66123 Saarbrücken, Germany
E-mail: weber@math.uni-sb.de
Index
amenable, 116
arcsine law, 14

Baum-Connes conjecture, 115
Bercovici–Pata bijection, 82, 89
Bernoulli variable, 15, 31

Cartan subalgebra, 68, 115
Catalan number, 13, 20, 30, 54
category of partitions, 102, 117
Cauchy distribution, 15
Cauchy transform, 29, 78
central limit theorem
  classical, 23, 74
  free, 3, 17, 24, 36, 73, 87
Chebychev polynomials, 42
circular element, 14, 63
compact matrix quantum group, 99
compact quantum group, 97
conditional expectation, 8, 84
convergence in distribution, 12, 22, 62, 74
convergence in moments, see also convergence in distribution
creation operator, 1, 10, 36
cumulant series, 28
cumulants, 3, 13, 25, 29, 36, 74, 87
  vanishing of, 26, 109
cyclic derivative, 46

de Finetti theorem
  classical, 108
  free, 5, 109

easy quantum group, 103
eigenvalue distribution, 13, 18
entropy, see also free entropy
exchangeable, 5, 108

Fock space, 1, 10, 14, 36
free convolution
  additive, 3, 28, 30, 36, 60, 77, 87
  multiplicative, 31, 36, 83
free cumulants, see also cumulants
free difference quotient, 46, 53
free entropy, 4, 48
  dimension, 68
free Gibbs law, 47, 53
free group, 1
free group factors, 2, 10, 57, 61, 63
  q-deformed, 51
  interpolated, 3, 65
free max-stable laws, 5
free Monge-Ampère equation, 50
free orthogonal quantum group, 98
free Poisson/Marchenko–Pastur law, 15, 89, 111
  compound, 15
free product
  C∗-algebras, 59
  Hilbert spaces, 58
  von Neumann algebras, 61, 76
free symmetric quantum group, 99, 118
free unitary quantum group, 107
fundamental group, 64
fusion rules, 113

Gaussian distribution, 23, 62, 74
Gaussian family, 18, 32
Gaussian random matrix, 18, 21, 36, 62
Gaussian unitary ensemble, 17, 18, 32, 35, 53
genus expansion, 20, 36
group algebra, 8, 9
GUE, see also Gaussian unitary ensemble

Haagerup approximation property, 43, 115
Haar state, 97, 112
Haar unitary, 14, 35, 63
  k-Haar unitary, 14
  random matrix, 35, 62
hyperfinite factor, 64

ICC group, 43, 63, 68
independence
  asymptotically free, 12, 32, 36, 62
  bi-free, 2, 9, 10
  Boolean, 9
  classical/tensor, 2, 9
  free, 2, 9
  matricial free, 9
  monotone, 9
  operator-valued free/amalgamated free, 11
  traffic free, 9
infinitely divisible, 4, 89
intertwiner space, 100

join, 25

Kolmogorov metric, 77, 87
Kreweras complement, 31

laws of characters, 111
Lévy metric, 76, 87

Marchenko–Pastur law, see also free Poisson/Marchenko–Pastur law
max-stable laws, see also free max-stable laws
meet, 25
moment series, 28
moment-cumulant formula, 25, 28, 77, 109
moments, 12
Monge problem, 40

noncommutative distribution, 1, 12
  operator-valued, 13
noncommutative probability space, 1, 7, 57, 61
  operator-valued, 8, 84, 89
noncommutative random variable, 1, 9
  unbounded, 75, 87

partitions, 25, 100
  category of, 102, 117
  noncrossing, 3, 13, 20, 25, 36, 41, 101
  partial order on, 25
Poisson distribution, 74
prime factor, 68, 115
property T of Kazhdan, 43

quantum exchangeable, 5, 108
quantum permutation group, see also free symmetric quantum group

R-transform, 29, 36, 77, 87
random matrix, 4, 17, 36, 48, 53, 57, 87
  Gaussian, 18, 21, 36, 62
  Haar unitary, 35, 62
  matrix model, 62
reduced group C∗-algebra, 60

S-transform, 32, 36, 71
Schwinger-Dyson equation, 47, 49, 54
semi-exact factor, 69
semicircle, 3, 13, 17, 21, 24, 30, 48, 49, 51, 54, 63, 73, 75, 111
semicircular family, 24, 33, 35, 41, 43, 49, 111
solid factor, 68, 115
space of intertwiners, see also intertwiner space
Stieltjes inversion, 30, 71, 78, 88, 89
subordination, 84, 90

Tannaka-Krein for quantum groups, 99
transport
  free monotone, 42, 53
  monotone, 40
  optimal, 40

Wasserstein distance, 52
Weingarten calculus, 112
Wick formula, 18, 33, 36

