
Lectures on C ∗ -Algebras

Math 582
John Roe
FALL 2015


Lecture 1
Introduction

This is a course about C ∗ -algebras. The study of these algebras combines techniques
from functional analysis, topology, algebra, and lattice theory.
The course text is Murphy, C ∗ -algebras and operator theory. This book gives
an economical presentation of the theory but it does not contain many examples.
Supplementary reading will therefore come from Davidson, C ∗ -algebras by example.
A background in analysis on the level of the MATH 501-503 sequence is assumed.
The specific items that we will need from the Math 503 syllabus are:
(1) Normed spaces. Banach spaces. Linear operators. Examples.
(2) Spaces of bounded linear operators. The uniform boundedness principle and
the open mapping theorem.
(3) Bounded linear functionals. Dual spaces. The Hahn-Banach extension the-
orem. Separation of convex sets.
(4) Spaces of continuous functions. Ascoli’s theorem, Stone-Weierstrass theo-
rem.
(5) Hilbert spaces. Perpendicular projections. Orthonormal bases. Self-adjoint
operators.
(6) Compact operators on a Hilbert space. The Fredholm alternative. Spectrum
and eigenfunctions of a compact, self-adjoint operator.
We will need some extra background in functional analysis relating to duality in
locally convex spaces. We’ll wait until later to introduce some of these results (e.g.
the Krein-Milman theorem) but for now let’s just set up the basic terminology.
Definition 1.1. Let E be a vector space1. A seminorm on E is a function p : E →
R+ such that
(a) p(λx) = |λ|p(x) for all λ ∈ C, x ∈ E;
(b) p(x + y) 6 p(x) + p(y) for all x, y ∈ E.
A seminorm is a norm if, in addition, p(x) = 0 implies x = 0.
Many examples of topological vector spaces arise from the following construction.
Let {pα }α∈A be a family of seminorms on a vector space E. We can define a topology
on E by calling U ⊆ E open if for every x0 ∈ U there exist α1 , . . . , αn ∈ A and real
ε1 , . . . , εn > 0 such that
{x ∈ E : pαi (x − x0 ) < εi , (i = 1, . . . , n)} ⊆ U.
This topology makes E into a TVS; it is called the locally convex topology defined
by the seminorms {pα }. It is the weakest vector topology (fewest open sets) which
makes all the seminorms continuous. The family of seminorms is called separating
if for every nonzero x ∈ E there is a pα with pα (x) 6= 0: this is equivalent to the
Hausdorffness of the associated locally convex topology.
1All vector spaces in this course are over C unless otherwise specified.


Example 1.2. Suppose that E and F are vector spaces equipped with a bilinear
or sesquilinear pairing
h·, ·i : E × F → C.
Then for each y ∈ F , the expression py (x) = |hx, yi| gives a seminorm on E. The
locally convex topology on E generated by these seminorms is called the σ(E, F )-
topology on E. In the same way we can define the σ(F, E)-topology on F . These
topologies are Hausdorff if the pairing is nondegenerate.
Example 1.3. Let E be a Banach space and let F = E ∗ , the dual space of E.
It follows from the Hahn-Banach theorem that the natural pairing between E and
E ∗ is nondegenerate. The σ(E, E ∗ )-topology on E is called the weak topology. The
σ(E ∗ , E)-topology on E ∗ is called the weak-star topology.
Proposition 1.4. (Banach-Alaoglu theorem) Let E be a Banach space. Then
the closed unit ball of E ∗ is weak-star compact.
Proof. Let B be the closed unit ball of E, and let B ∗ be the closed unit ball of
E ∗ equipped with its weak-star topology. Let D be the closed unit disc in the
complex plane. Let X be the space of all maps B → D, equipped with the product
topology; by Tychonoff’s theorem (1.13), X is a compact space. Observe that B ∗ is
identified with a subspace of X (the subspace consisting of linear maps) and that
this identification is topological: the product topology on X restricts to the weak-
star topology on B ∗ . It suffices therefore to show that B ∗ is closed in X. But B ∗
is just the subset of X consisting of those maps ϕ satisfying the uncountably many
relations
ϕ(λ1 x1 + λ2 x2 ) − λ1 ϕ(x1 ) − λ2 ϕ(x2 ) = 0;
for λ1 , λ2 , x1 , x2 fixed the LHS depends continuously on ϕ, so it cuts out a closed
subset, and the intersection of all these closed subsets is B ∗ . 
Questions related to nonmetrizable vector topologies in operator algebras are
traditionally discussed using the language of nets (Moore-Smith convergence). We
review this language now (without detailed proofs: the reader may supply them as
exercises).
Definition 1.5. A directed set Λ is a partially ordered set in which any two elements
have an upper bound. A net in a topological space X is a function from a directed
set to X. Usually we denote a net like this: {xλ }λ∈Λ .
For example, a sequence is just a net based on the directed set N. Notice that in
general it is not required that there exist a least upper bound for any two elements.
Definition 1.6. Let U ⊆ X. A net {xλ }λ∈Λ in X is eventually in U if there is some
λ0 ∈ Λ such that xλ ∈ U for all λ > λ0 . It is frequently in U if it is not eventually
in X \ U .
Definition 1.7. If there is some point x ∈ X such that {xλ } is eventually in every
neighborhood of x, we say that {xλ } converges to x.


As an exercise in manipulating this definition, the reader may prove that a space
X is Hausdorff if and only if every net converges to at most one point. Similarly
a function f : X → Y is continuous if and only if, whenever {xλ } is a net in X
converging to x, the net {f (xλ )} converges in Y to f (x). The basic idea here is to
use the system of open neighborhoods of a point, directed by (reverse) inclusion, as
the parameter space of a net.
Lemma 1.8. Let E be a vector space with a locally convex topology defined by
seminorms pα . Then a net {xλ } in E converges to x if and only if pα (xλ − x)
converges to 0 for all α.
Remark 1.9. Watch out for the fact that a convergent net in a metric space need
not be bounded! For example, consider the directed set Λ = {1 − 1/n : n ∈
N} ∪ {3 − 1/n : n ∈ N} with the usual ordering. A convergent net parameterized by
Λ can do “whatever it wants” on the first segment (from 0 to 1) of the directed set.
A subnet of a net is defined as follows. Let {xλ }λ∈Λ be a net based on the directed
set Λ. Let Ξ be another directed set. A function λ : Ξ → Λ is called a final function
if, for every λ0 ∈ Λ, there is ξ0 ∈ Ξ such that λ(ξ) > λ0 whenever ξ > ξ0 . For such a
function the net {xλ(ξ) }ξ∈Ξ is a net based on the directed set Ξ; it is called a subnet
of the originally given one. Notice that this notion of subnet is more relaxed than
the usual one of subsequence; we allow some repetition and backtracking.
Definition 1.10. A net in X is universal if, for every subset A of X, the net is
either eventually in A or eventually in the complement of A.
Lemma 1.11. Every net has a universal subnet.
Proof. This requires the axiom of choice. See the appendix to this lecture for an
account of the proof. 
Proposition 1.12. Let X be a topological space. The following are equivalent:
(a) X is compact.
(b) Every universal net in X converges.
(c) Every net in X has a convergent subnet. 
This makes possible a rather short proof of Tychonoff’s theorem.
Theorem 1.13. A product of compact topological spaces is compact.
Proof. Let X = ∏α Xα be such a product, πα the coordinate projections. Pick a
universal net {xλ } in X; then a one-line proof shows that each of the nets {πα (xλ )}
is universal in Xα , and hence is convergent (a universal net with a convergent subnet
must itself be convergent), say to xα . But then by definition of the product topology
the net {xλ } converges to the point x whose coordinates are xα . 

Appendix to the lecture


Proof. (of Lemma 1.11.) This depends unavoidably on the axiom of choice.
Let N : D → X be a net in X, parameterized by a directed set D. A collection
F of subsets of X will be called an N -filter if N is frequently in every member
of F , and if F is closed under finite intersections and the formation of supersets.
Such objects exist: for example, {X} is an N -filter. The collection of N -filters is
partially ordered by inclusion, and every chain in this partially ordered set has an
upper bound (the union). Thus Zorn’s Lemma provides a maximal N -filter; call it
F0 .
Suppose that S ⊆ X has the property that N is frequently in A ∩ S for every
A ∈ F0 . Then the union of F0 with the set of all sets A ∩ S, A ∈ F0 , is again an
N -filter. By maximality we deduce that S itself belongs to F0 .
We will use this property to construct the desired universal subnet. Let D0 be
the collection of pairs (A, i) with A ∈ F0 , i ∈ D, and N (i) ∈ A; it is a directed set
under the partial order
(B, j) > (A, i) ⇔ B ⊆ A, j > i.
The map (A, i) 7→ i is final so defines a subnet N 0 of the net N . We claim that this
subnet is universal.
Let S ⊆ X have the property that N 0 is frequently in S. Let A ∈ F0 and let
i be arbitrary. By definition, there exist B ∈ F0 , B ⊆ A, and j > i, such that
N (j) = N 0 (B, j) ∈ B ∩ S ⊆ A ∩ S. We conclude that N is frequently in A ∩ S for
every A ∈ F0 and hence, as observed above, that S itself belongs to F0 .
Now let S be arbitrary. It is not possible that N 0 be frequently both in S and in
X \ S, for then (by the above) both S and X \ S would belong to F0 , and then their
intersection, the empty set, would do so as well, a contradiction. Thus N 0 fails to
be frequently in one of these sets, which is to say that it is eventually in the other
one. 


Lecture 2
Hilbert space

A solid foundation in Hilbert space theory is essential to understand C ∗ -algebras.


Here we review the basic terminology and ideas.
A (complex) Hilbert space is an inner product space which is complete in the
associated norm. We denote the inner product by h·, ·i and the corresponding norm
by k · k, so that kvk2 = hv, vi. Our convention is that the inner product hv, wi is
conjugate linear in v and linear in w.
If H is a Hilbert space and v ∈ H is fixed the map
ϕv : w 7→ hv, wi
is a continuous linear functional on H, i.e. an element of the dual space H ∗ . The
Riesz representation theorem says that every element of H ∗ is of this form, i.e. the
map v 7→ ϕv identifies H with H ∗ . This identification is antilinear (complex conju-
gation gets involved): the linear isomorphism is between H ∗ and H̄, the conjugate
Hilbert space to H.
Let H be a Hilbert space. A linear map T : H → H is bounded if its operator
norm, defined by
kT k = sup{kT vk : kvk 6 1}
is finite. Such maps are also called operators on H and the collection of such maps
is denoted B(H): it is a Banach space under the operator norm.
By the Riesz representation theorem, for any T ∈ B(H) and any v ∈ H there
exists T ∗ v ∈ H characterized by
hT ∗ v, wi = hv, T wi.
The Cauchy-Schwarz inequality gives
\[ \|T\|^2 = \sup_{\|v\|\le 1} \langle Tv,\, Tv\rangle = \sup_{\|v\|\le 1} \langle T^*Tv,\, v\rangle \le \|T^*T\|. \]

Thus
kT k2 6 kT ∗ T k 6 kT ∗ kkT k.
This gives kT k 6 kT ∗ k; but since ∗ is an involution we deduce that kT k = kT ∗ k
and also that
(2.1) kT ∗ T k = kT k2 .
Equation 2.1 is called the C ∗ -identity and will become one of the axioms for abstract
C ∗ -algebras. The operator T ∗ is called the adjoint of T .
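In the finite-dimensional case B(H) is just the algebra of n × n complex matrices and the operator norm is the largest singular value, so these identities can be checked numerically. A minimal NumPy sketch (the random matrix is an arbitrary stand-in for T ):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5
    T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

    op_norm = lambda A: np.linalg.norm(A, 2)   # operator norm = largest singular value
    Tstar = T.conj().T                         # the adjoint T*

    print(np.isclose(op_norm(T), op_norm(Tstar)))            # ||T|| = ||T*||
    print(np.isclose(op_norm(Tstar @ T), op_norm(T) ** 2))   # the C*-identity ||T*T|| = ||T||^2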
We recall some standard terminology for special kinds of operators on a Hilbert
space.
• T ∈ B(H) is selfadjoint if T = T ∗ .
• U ∈ B(H) is unitary if U U ∗ = U ∗ U = I. Geometrically, U is an isometric
isomorphism from H to itself.


• T ∈ B(H) is normal if T T ∗ = T ∗ T . Unitary and selfadjoint operators are


examples of normal operators.
• P ∈ B(H) is a projection if P = P 2 = P ∗ . Geometrically, P corresponds to
a direct sum decomposition of H into mutually orthogonal closed subspaces,
Ker(P ) ⊕ Im(P ).
• V ∈ B(H) is an isometry if V ∗ V = I. Geometrically, V is an isometric
map of H into itself, but the range may be a proper subspace. V V ∗ is the
orthogonal projection onto this range.
• V ∈ B(H) is a partial isometry if it restricts to an isometry from Ker(V )⊥
to Im(V ). There are four equivalent algebraic ways of expressing this: V ∗ V
is a projection; V V ∗ is a projection; V V ∗ V = V ; and V ∗ V V ∗ = V ∗ (see the
numerical sketch after this list).
• W ∈ B(H) is invertible if there exists W −1 ∈ B(H) with W W −1 =
W −1 W = I. A standard consequence of the Closed Graph Theorem is that
W is invertible if and only if it is bijective as a map H → H (in other words,
the boundedness of the inverse comes for free).
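The algebraic characterizations of a partial isometry are easy to test numerically in finite dimensions. Here is a small NumPy sketch using one concrete 3 × 3 partial isometry (a truncated shift); any other partial isometry would serve equally well.

    import numpy as np

    # V kills e1 and maps e2 -> e1, e3 -> e2 isometrically, so
    # Ker(V)^perp = span{e2, e3} and Im(V) = span{e1, e2}.
    V = np.array([[0., 1., 0.],
                  [0., 0., 1.],
                  [0., 0., 0.]])
    Vs = V.conj().T

    def is_projection(P):
        return np.allclose(P, P @ P) and np.allclose(P, P.conj().T)

    print(is_projection(Vs @ V))         # V*V is the projection onto Ker(V)^perp
    print(is_projection(V @ Vs))         # VV* is the projection onto Im(V)
    print(np.allclose(V @ Vs @ V, V))    # V V* V = V
    print(np.allclose(Vs @ V @ Vs, Vs))  # V* V V* = V*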
Definition 2.2. Let T ∈ B(H). The spectrum of T is the set
Spectrum(T ) = {λ ∈ C : (T − λI) is not invertible.}
It is a standard fact that the spectrum of T is a nonempty, compact subset of
C, contained within the closed disc of radius kT k. We will prove this again in the
context of Banach algebras in a moment.
Example 2.3. The number λ ∈ C is an eigenvalue of T if Ker(T − λI) 6= 0. Every
eigenvalue belongs to the spectrum of T , but in infinite dimensions the converse
need not be the case.
Definition 2.4. Let H be a Hilbert space. An operator T ∈ B(H) is compact if,
for every bounded subset B ⊆ H, the image T (B) ⊆ H has compact closure.
The collection of compact operators on H is denoted K(H). If H is finite dimen-
sional then K(H) = B(H). If H is infinite dimensional then the closed unit ball is
not compact and thus I ∉ K(H). We have
Proposition 2.5. For an infinite dimensional Hilbert space H, K(H) is a closed,
two-sided, proper ∗-ideal of B(H).
If the range of T is finite dimensional (such operators are called finite rank ) then
T is compact. It can be shown that the finite rank operators are dense in K(H).
This is an application of the spectral theorem for compact operators, which we recall
next.
Theorem 2.6. Let T be a compact selfadjoint operator on an infinite-dimensional
Hilbert space H. Then there is a sequence λj of real numbers tending to zero, and a
sequence Pj of finite rank orthogonal projections on H, such that
\[ T = \sum_j \lambda_j P_j \]


where the series converges in norm. The spectrum of T consists of the set {λj }∪{0};
each λj is an eigenvalue with a finite-dimensional eigenspace.
In brief, a compact selfadjoint operator can be “diagonalized” relative to a suitable
orthonormal basis.
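In finite dimensions every selfadjoint matrix is compact and the theorem reduces to diagonalization by a unitary. A short NumPy sketch of the decomposition T = Σj λj Pj , with rank-one projections built from an orthonormal eigenbasis (repeated eigenvalues would simply be grouped into one projection); the matrix is an arbitrary random example.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 4
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    T = (A + A.conj().T) / 2                     # an arbitrary selfadjoint matrix

    eigvals, U = np.linalg.eigh(T)               # real eigenvalues, orthonormal eigenvectors
    # Rank-one spectral projections P_j = u_j u_j^*; then T = sum_j lambda_j P_j.
    recon = sum(lam * np.outer(U[:, j], U[:, j].conj())
                for j, lam in enumerate(eigvals))
    print(np.allclose(T, recon))                 # True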
Definition 2.7. Let T be a selfadjoint operator on a Hilbert space H. One says
that T is positive if the quadratic form defined by T is positive semidefinite, that
is, if
hT v, vi > 0 for all v ∈ H.
Clearly, the positive operators form a cone — the sum of two positive operators
is positive, as is any positive real multiple of a positive operator.
Example 2.8. If T = S ∗ S, then hT v, vi = hSv, Svi > 0 and thus T is positive. In
fact, this condition is necessary and sufficient for positivity. To see this, we will need
a version of the spectral theorem that applies to all selfadjoint operators, not merely
compact ones. This will be proved as a consequence of our work on C ∗ -algebras.
It is helpful to know something about Hilbert-Schmidt and trace-class operators.
These are important subclasses of the compacts.
Let H and H 0 be (separable, infinite dimensional) Hilbert spaces, and choose
orthonormal bases (ei ) and (e0j ) in H and H 0 . A bounded linear operator A : H → H 0
can be represented by an “infinite matrix” with coefficients2
\[ c_{ij}(A) = \langle e'_j,\, A e_i \rangle. \]
Proposition 2.9. The quantity
\[ \|A\|_{HS}^2 = \sum_{i,j} |c_{ij}(A)|^2 \in [0, \infty] \]
is independent of the choice of orthonormal bases in H and H 0 . Moreover, kAkHS = kA∗ kHS .
Proof. By Parseval’s theorem
\[ \|A\|_{HS}^2 = \sum_{i,j} |c_{ij}(A)|^2 = \sum_i \|A e_i\|^2, \]
which is certainly independent of the choice of basis in H 0 . But since cij (A) is the
complex conjugate of cji (A∗ ), we have kAk2HS = kA∗ k2HS , which is independent of the
choice of basis in H by the same argument. □
Definition 2.10. An operator A such that kAkHS < ∞ is called a Hilbert-Schmidt
operator, and kAkHS is called its Hilbert-Schmidt norm.
Proposition 2.11. Hilbert-Schmidt operators have the following properties
2 Of course, not every such infinite matrix represents a bounded operator; but this does not
matter here.


(i) The Hilbert-Schmidt norm is induced by an inner product
\[ \langle A, B\rangle_{HS} = \sum_{i,j} \overline{c_{ij}(A)}\, c_{ij}(B). \]

(ii) Relative to this inner product, the space of Hilbert-Schmidt operators is a


Hilbert space.
(iii) The Hilbert-Schmidt norm dominates the operator norm.
(iv) Hilbert-Schmidt operators are compact.
(v) Hilbert-Schmidt operators form an ideal in B(H).
Proof. The proofs are all easy. Please do them as exercises. 
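For matrices the Hilbert-Schmidt norm is just the Frobenius norm, and the basis-independence of Proposition 2.9 together with property (iii) above can be checked numerically. In the sketch below the unitary Q (taken from a QR factorization of a random matrix) implements an arbitrary change of orthonormal basis.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 5
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

    hs = np.linalg.norm(A, 'fro')      # ||A||_HS = sqrt(sum_ij |c_ij(A)|^2)

    Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    print(np.isclose(hs, np.linalg.norm(Q.conj().T @ A @ Q, 'fro')))  # basis independence
    print(np.isclose(hs, np.linalg.norm(A.conj().T, 'fro')))          # ||A||_HS = ||A*||_HS
    print(hs >= np.linalg.norm(A, 2))                                 # dominates the operator norm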
Definition 2.12. A bounded operator T on a Hilbert space H is said to be of
trace class if it belongs to the linear span of the products AB, where A and B are
Hilbert-Schmidt. The trace is the linear functional on trace class operators defined by
\[ \operatorname{Tr}\Bigl(\sum_k \lambda_k A_k B_k\Bigr) = \sum_k \lambda_k \langle A_k^*,\, B_k\rangle_{HS}. \]

A priori, the trace depends on how T is represented in terms of Hilbert-Schmidt
operators. However, the calculation, for T = Σ_k λ_k A_k B_k ,
\[ \operatorname{Tr}(T) = \sum_{i,j,k} \lambda_k\, \overline{c_{ij}(A_k^*)}\, c_{ij}(B_k) = \sum_{i,j,k} \lambda_k\, c_{ji}(A_k)\, c_{ij}(B_k) = \sum_j c_{jj}(T) \tag{2.13} \]
shows that it in fact depends only on T .
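In finite dimensions every operator is of trace class, and the calculation (2.13) can be tested directly: the pairing hA∗ , BiHS equals the sum of the diagonal entries of T = AB, whichever factorization of T is used. A small NumPy check (the matrices A, B and the invertible M producing a second factorization are arbitrary):

    import numpy as np

    rng = np.random.default_rng(3)
    n = 4
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    T = A @ B

    # <X, Y>_HS = sum_ij conj(X_ji) Y_ji = Tr(X* Y), so <A*, B>_HS = Tr(AB).
    hs_pairing = np.sum(np.conj(A.conj().T) * B)
    diag_sum = np.trace(T)                          # sum_j c_jj(T), as in (2.13)
    print(np.isclose(hs_pairing, diag_sum))         # True

    M = np.eye(n) + 0.1 * rng.standard_normal((n, n))    # an invertible matrix
    C, D = A @ M, np.linalg.inv(M) @ B                    # another factorization of the same T
    print(np.isclose(np.sum(np.conj(C.conj().T) * D), diag_sum))   # same value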


Proposition 2.14. Let T be self-adjoint and of trace class. Then Tr(T ) is the sum
of the eigenvalues of T .
Proof. Choose an orthonormal basis of eigenvectors (which exists by the spectral
theorem for compact self-adjoint operators) and apply (2.13). 

The conclusion still holds if T is not self-adjoint, a result known as Lidskii’s


theorem. This is very much harder to prove and we will not really need it. For the
proof see Simon, Trace ideals and their applications, chapter 4.
The most important fact about the trace is its commutator property:
Proposition 2.15. Let A and B be bounded operators on a Hilbert space H, and
suppose that either both A and B are Hilbert-Schmidt or one of them is of trace
class. Then Tr(AB) = Tr(BA).
Remark 2.16. The proposition is still true under the minimal hypothesis that AB
and BA separately are trace class, but it is a good bit harder to prove. The usual
argument applies Lidskii’s theorem together with the algebraic fact that AB and
BA have the same nonzero eigenvalues, including multiplicity. For a proof not
depending on Lidskii’s theorem (but using some other techniques developed later in
this course), see the “loose ends” appendix, Proposition 37.10.


Proof. Think about the Hilbert-Schmidt case first. Choose an orthonormal basis
(ei ) for H, and write
\[ \operatorname{Tr}(AB) = \sum_i \langle e_i,\, AB e_i\rangle = \sum_i \langle A^* e_i,\, B e_i\rangle = \sum_{i,j} c_{ji}(A)\, c_{ij}(B) \quad\text{(by Parseval's theorem).} \]

This sum is absolutely convergent, and it is symmetrical in A and B, so the result


follows. To obtain the other case from this, suppose that B is of trace class. It
suffices to consider B = CD where C, D are Hilbert-Schmidt. Then
Tr(AB) = Tr(ACD) = Tr(DAC) = Tr(CDA) = Tr(BA),
applying the Hilbert-Schmidt case twice (and using the fact that Hilbert-Schmidt
operators form an ideal). 
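The Parseval computation in the proof, and the resulting symmetry Tr(AB) = Tr(BA), are easy to verify for matrices. In the sketch below c_ij(X) is the entry X[j, i], following the convention cij (X) = hej , Xei i used above; the matrices are arbitrary.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 4
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

    # sum_{i,j} c_ji(A) c_ij(B)  with  c_ji(A) = A[i, j],  c_ij(B) = B[j, i]
    parseval_sum = sum(A[i, j] * B[j, i] for i in range(n) for j in range(n))
    print(np.isclose(parseval_sum, np.trace(A @ B)))     # equals Tr(AB)
    print(np.isclose(np.trace(A @ B), np.trace(B @ A)))  # Tr(AB) = Tr(BA)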
Remark 2.17. Just as the collection of Hilbert-Schmidt operators is a Hilbert space
under the Hilbert-Schmidt norm (the Hilbert-Schmidt norm of T is the `2 norm of
the eigenvalue sequence of |T | = (T ∗ T )1/2 ), the collection of trace class operators
is a Banach space under the trace norm (the trace norm of T is the `1 norm of the
eigenvalue sequence of |T |). This is an important structure, but I will not develop
it in class (maybe in some exercises).
Exercise 2.18. Let (X, µ) be a compact metrizable space equipped with a Radon
measure. A bounded operator T on L2 (X, µ) is described by a continuous kernel k
on X × X: that is,
\[ T u(x) = \int k(x, y)\, u(y)\, d\mu(y). \]
Show that, if T is of trace class, its trace is given by the formula
\[ \operatorname{Tr}(T) = \int k(x, x)\, d\mu(x). \]

(It is not the case that every continuous kernel defines a trace-class operator;
but every smooth kernel on a compact manifold defines a trace-class operator with
respect to the smooth measure class).
Remark 2.19. We can use Hilbert-Schmidt operators to develop the theory of Hilbert
space tensor products. Specifically, let H and H 0 be Hilbert spaces. Let HS(H, H 0 )
be the space of Hilbert-Schmidt operators from H to H 0 ; it is a Hilbert space.
When H = C this Hilbert space is canonically identified with H 0 ; when H 0 = C it is
canonically identified with the dual space H ∗ . This leads us to make the following
definition: the Hilbert space tensor product of H and H 0 is
H ⊗ H 0 = HS(H ∗ , H 0 ).


If ξ ∈ H, η ∈ H 0 then ξ ⊗ η ∈ H ⊗ H 0 is defined to be the rank one Hilbert-Schmidt


operator H ∗ → H 0 given by ϕ 7→ ϕ(ξ)η. The expected properties then hold: ξ ⊗ η is
bilinear in ξ, η and has norm kξkkηk, the span of such “elementary tensors” is dense
in H ⊗ H 0 , and if {ξi }, {ηj } are orthonormal bases for H and H 0 then {ξi ⊗ ηj } is
an orthonormal base for H ⊗ H 0 . (The reader can easily verify these facts).
Suppose that S and T are operators on H and H 0 respectively. If x ∈ H ⊗ H 0 =
HS(H ∗ , H 0 ) we define (S ⊗ T )(x) ∈ H ⊗ H 0 as the composite
\[ H^* \xrightarrow{\;S^*\;} H^* \xrightarrow{\;x\;} H' \xrightarrow{\;T\;} H'. \]
Then S ⊗ T is a bounded operator on H ⊗ H 0 with norm kSkkT k, and its action on
elementary tensors is as expected: (S ⊗ T )(ξ ⊗ η) = (Sξ) ⊗ (T η).
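In coordinates an elementary tensor ξ ⊗ η is the rank-one matrix ηξ^T , the composite above becomes (up to the identification of H ∗ with coordinate vectors) x 7→ T xS^T , and S ⊗ T is represented on the basis of elementary tensors by the Kronecker product. A brief numerical illustration of these facts (dimensions and matrices chosen arbitrarily):

    import numpy as np

    rng = np.random.default_rng(5)
    m, n = 3, 4
    xi  = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    eta = rng.standard_normal(n) + 1j * rng.standard_normal(n)

    elem = np.outer(eta, xi)                       # xi (x) eta as a rank-one matrix
    print(np.isclose(np.linalg.norm(elem, 'fro'),
                     np.linalg.norm(xi) * np.linalg.norm(eta)))    # ||xi (x) eta|| = ||xi|| ||eta||

    S = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
    T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    print(np.isclose(np.linalg.norm(np.kron(S, T), 2),
                     np.linalg.norm(S, 2) * np.linalg.norm(T, 2)))  # ||S (x) T|| = ||S|| ||T||
    print(np.allclose(T @ elem @ S.T,
                      np.outer(T @ eta, S @ xi)))   # (S (x) T)(xi (x) eta) = (S xi) (x) (T eta)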
Exercise 2.20. In the purely algebraic context, the tensor product is characterized
by a universal property: every bilinear map V1 × V2 → V factors uniquely through
the universal bilinear map V1 × V2 → V1 ⊗ V2 given by (v1 , v2 ) 7→ v1 ⊗ v2 . Is the
Hilbert space analog of this statement true? In other words, does every bounded
bilinear map H1 × H2 → H factor through H1 ⊗ H2 ? (Hint: The answer is no, and
this is because there exist Hilbert-Schmidt operators that are not trace-class.)


Lecture 3
Fredholm theory

The material in this lecture goes back to the roots of functional analysis in the
theory of integral equations. It also provides a fundamental example in C ∗ -algebra
theory, specifically relating to the Calkin algebra, which is the quotient B(H)/K(H)
of the bounded operators on Hilbert space by the ideal of compact operators.
Definition 3.1. Let V and W be Hilbert spaces and let T : V → W be a bounded
linear operator. We say T is a Fredholm operator if
(a) The kernel of T has finite dimension,
(b) The range of T has finite codimension.
Proposition 3.2. The kernel and the range of a Fredholm operator on Hilbert space
are closed subspaces.
Proof. The kernel of any bounded linear operator is closed, since it is the inverse
image of a closed set, namely {0}, under a continuous map. As for the range, let
T : V → W be Fredholm. Let {w1 , . . . , wn } be a basis for a complement of Im T in
W . Define a bounded linear operator
\[ L : (\operatorname{Ker} T)^{\perp} \oplus \mathbb{C}^{n} \to W, \qquad (v, \lambda_1, \dots, \lambda_n) \mapsto T v + \sum_{i=1}^{n} \lambda_i w_i. \]
L is bijective, and therefore has a bounded inverse M = L^{-1} by the Closed Graph
Theorem. Then Im T is the inverse image M −1 ((Ker T )⊥ ) of the closed subspace
(Ker T )⊥ , so it is closed. 
Definition 3.3. The index Index(T ) of a Fredholm operator T is the difference of
dimensions Nullity T − Corank T . (By definition, the corank of T is the dimension
of a complementary subspace to Im(T ), i.e. it is dim(Im(T )⊥ ) = dim(Ker(T ∗ )).)
The rank-nullity theorem can be restated as follows: if V, W are finite-dimensional
and T : V → W is a linear map, then Index(T ) = dim V − dim W . In other words,
the index does not depend on T at all! In particular, an operator from a finite-
dimensional vector space to itself must have index zero. This statement is not true
for maps from an infinite-dimensional space to itself, as the following important
example shows.
Example 3.4. Let V = W = `2 , the Hilbert space of square-summable sequences.
Let T : V → W be the linear operator defined by
T (a0 , a1 , a2 , a3 , . . .) = (0, a0 , a1 , a2 , . . .)
called the unilateral shift. Clearly, Nullity T = 0 while Corank T = 1, so T is a
Fredholm operator of index −1. The adjoint operator T ∗ (the unilateral backward
shift) defined by
T ∗ (a0 , a1 , a2 , a3 , . . .) = (a1 , a2 , a3 , a4 , . . .)
has index +1.


Lemma 3.5. Let H be a Hilbert space and let S ∈ B(H) with kSk < 1. Then I − S
is an invertible operator.
Proof. Define R by the series
\[ R = I + S + S^2 + \cdots = \sum_{n=0}^{\infty} S^n. \]
If kSk = s < 1, then kS^nk 6 s^n, and so the series converges absolutely. We have
\[ SR = RS = \sum_{n=1}^{\infty} S^n = R - I, \]

whence R(I − S) = (I − S)R = I as required. 
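Numerically the Neumann series converges rapidly once kSk < 1. A minimal NumPy sketch, summing the series for an arbitrary matrix rescaled to have norm 0.9 and comparing with a directly computed inverse:

    import numpy as np

    rng = np.random.default_rng(6)
    n = 5
    S = rng.standard_normal((n, n))
    S *= 0.9 / np.linalg.norm(S, 2)       # rescale so that ||S|| = 0.9 < 1

    R, term = np.eye(n), np.eye(n)
    for _ in range(200):                  # partial sums of I + S + S^2 + ...
        term = term @ S
        R += term
    print(np.allclose(R, np.linalg.inv(np.eye(n) - S)))   # True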


Corollary 3.6. The set of invertible operators (on a single Hilbert space, or from
one Hilbert space to another ) is open.
Proof. Let T : H1 → H2 be invertible, with inverse S. Let ε = kSk−1 . If kT − T 0 k <
ε, then kI − ST 0 k < 1 and kI − T 0 Sk < 1, whence ST 0 and T 0 S are invertible. It
follows that T 0 is invertible. 
Proposition 3.7. Let V, W be Hilbert spaces. The set Fred(V, W ) of all Fredholm
operators from V to W is an open subset of B(V, W ), and the index is constant on
path components of this open set.
Proof. Let T be Fredholm. We are going to show that there exists ε > 0 such that,
if kT 0 − T k < ε, then T 0 is Fredholm and has the same index as T . This will clearly
show that the set of Fredholm operators is open. It also implies that the index is
a continuous integer-valued function on the set of Fredholm operators, and hence
that it is constant on path components.
We consider the orthogonal direct sum decompositions of V and W given by
V = V0 ⊕ V1 , V0 = Ker(T ), V1 = Ker(T )⊥
W = W0 ⊕ W1 , W0 = Im(T )⊥ , W1 = Im(T ).
Note that V0 and W0 are finite-dimensional, and Index(T ) = dim V0 −dim W0 . Every
linear transformation from V to W has a 2 × 2 matrix representation with respect
to this decomposition. In particular T itself has such a representation
\[ T = \begin{pmatrix} 0 & 0 \\ 0 & T_{11} \end{pmatrix}, \]
where T11 : V1 → W1 is invertible.
Let T 0 = T + L, where the perturbation L has norm smaller than ε = \|T_{11}^{-1}\|^{-1}, and write
\[ T + L = \begin{pmatrix} L_{00} & L_{10} \\ L_{01} & T_{11} + L_{11} \end{pmatrix}, \]


By our assumption, (T11 + L11 ) is invertible (Corollary 3.6). Now we are going to
perform “elementary row and column operations” on the matrix T + L. The operations
we want to carry out are these:
• Add −L10 (T11 + L11 )−1 times row 2 to row 1; that is, multiply on the left by
\[ \begin{pmatrix} 1 & -L_{10}(T_{11} + L_{11})^{-1} \\ 0 & 1 \end{pmatrix}. \]
• Add −(T11 + L11 )−1 L01 times column 2 to column 1; that is, multiply on the right by
\[ \begin{pmatrix} 1 & 0 \\ -(T_{11} + L_{11})^{-1} L_{01} & 1 \end{pmatrix}. \]
These operations are effected by invertible matrices, so they don’t change the di-
mension of the kernel or the codimension of the image (and in particular they don’t
change the Fredholm index). Their effect is to reduce T + L to the matrix
\[ \begin{pmatrix} L_{00} - L_{10}(T_{11} + L_{11})^{-1} L_{01} & 0 \\ 0 & T_{11} + L_{11} \end{pmatrix}. \]
The index of this diagonal matrix is clearly the sum of the indices of the diagonal
entries. But the first diagonal entry is just a linear transformation between finite-
dimensional vector spaces, so its index is dim V0 − dim W0 = Index(T ), and the
second entry is invertible so it has index zero. Thus T + L is Fredholm and has the
same index as T , completing the proof. 
Proposition 3.8. (Atkinson’s theorem) Let T be a bounded operator on a Hilbert
space H. The following conditions are equivalent
(a) T is Fredholm.
(b) T is invertible modulo finite-rank operators: there is a bounded operator S such
that I − ST and I − T S are of finite rank.
(c) T is invertible modulo compact operators: there is a bounded operator S such
that I − ST and I − T S are compact operators.
Proof. Suppose that T is Fredholm (a). Then T maps the orthogonal comple-
ment (Ker(T ))⊥ bijectively onto Im(T ). Let Q be the inverse map from Im(T )
to (Ker(T ))⊥ ; by the Closed Graph Theorem, Q is a bounded operator. Let P be
the orthogonal projection from H onto the closed subspace Im(T ) and let S = QP .
Then by construction, I − T S and I − ST are the orthogonal projections onto
Im(T )⊥ and Ker(T ) respectively. Since these are finite-dimensional, the associated
projections have finite rank. Thus T is invertible modulo finite-rank operators (b).
It is obvious that (b) implies (c). Suppose (c), that T is invertible modulo com-
pacts, and let S be such that I − ST ∈ K and I − T S ∈ K. There is a finite rank
operator F such that kI − ST − F k < 1/2. By Lemma 3.5, this implies that ST + F
is invertible. Consider now the identity map
I = (ST + F )−1 (ST + F )
when restricted to the kernel of T ; the restriction of ST + F to Ker(T ) has finite
rank, whence the restriction of I to Ker(T ) has finite rank, and thus Ker(T ) is


finite-dimensional. Similarly there is a finite-rank operator F 0 such that T S + F 0 is


invertible. The equation
v = (T S + F 0 )(T S + F 0 )−1 v = T S(T S + F 0 )−1 v + F 0 (T S + F 0 )−1 v
shows that Im(T ) + Im(F 0 ) = H and, since Im(F 0 ) is finite dimensional, this shows
that Im(T ) has finite codimension. Thus T is Fredholm (a), as required. 
Corollary 3.9. Let T be a Fredholm operator and K a compact operator. Then
T + K is Fredholm and has the same index as T .
Proof. Any inverse for T modulo compacts is also an inverse for T + K modulo
compacts, so T +K is Fredholm by Atkinson’s theorem. The linear path s 7→ T +sK
shows that T and T +K belong to the same path component of the space of Fredholm
operators, so they have the same index. 
Proposition 3.10. If T1 , T2 are Fredholm operators on a Hilbert space H then so
is their composite T1 T2 , and moreover
Index(T1 T2 ) = Index(T1 ) + Index(T2 ).
Proof. It follows from Atkinson’s theorem that the composite of Fredholm operators
is Fredholm. To prove the formula for the index, choose an operator S2 that is an
inverse for T2 modulo compacts. Consider the one-parameter family of 2×2 matrices
(operators on H ⊕ H)
\[ V_s = \begin{pmatrix} T_2 \cos(\pi s/2) & I \sin(\pi s/2) \\ -I \sin(\pi s/2) & S_2 \cos(\pi s/2) \end{pmatrix}, \qquad s \in [0, 1]. \]
These are all invertible modulo compacts (hence Fredholm) with
\[ V_0 = \begin{pmatrix} T_2 & 0 \\ 0 & S_2 \end{pmatrix}, \qquad V_1 = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix}. \]
Note that Index(V0 ) = Index(T2 ) + Index(S2 ), whereas V1 is invertible so has index
0; therefore Index(T2 ) = − Index(S2 ). Consider now the path of operators
\[ W_s = \begin{pmatrix} T_1 & 0 \\ 0 & I \end{pmatrix} V_s. \]
This is also a continuous path of Fredholm operators with
\[ W_0 = \begin{pmatrix} T_1 T_2 & 0 \\ 0 & S_2 \end{pmatrix}, \qquad W_1 = \begin{pmatrix} 0 & T_1 \\ -I & 0 \end{pmatrix}. \]
The equality Index(W0 ) = Index(W1 ) now gives
Index(T1 T2 ) + Index(S2 ) = Index(T1 ),
which, together with Index(S2 ) = − Index(T2 ), implies the desired result. 


Exercise 3.11. It is known that the group of invertible operators on an infinite-


dimensional separable Hilbert space is path connected in the norm topology. (We
will prove this later as a consequence of the spectral theorem. In fact, the group
is contractible, but this requires a more elaborate proof.) Granted this, show that
two Fredholm operators on such a space belong to the same connected component
of Fred(H) if and only if they have the same index.
Let T be a Fredholm operator on H. An operator S on H is called a parametrix
for T if I −ST and I −T S are compact. (The existence of parametrices is guaranteed
by Atkinson’s theorem.)
Proposition 3.12. (Trace index formula) Suppose that S is a parametrix for
the Fredholm operator T with the additional property that I − T S and I − ST are
trace class operators. Then for any n ∈ N
Index(T ) = Tr((I − ST )n ) − Tr((I − T S)n ).
Moreover, such parametrices S always exist.
Proof. The proof of Atkinson’s theorem gives us a parametrix S such that I − ST
and I − T S are the orthogonal projections onto Ker(T ) and Ker(T ∗ ). For this
parametrix the result follows from the fact that the trace of a finite-rank projection
is equal to the dimension of its range. So what we need to do is to show two things:
(a) for any parametrix S, the right hand side of the formula is independent of n, and
(b) for one fixed value of n (we will choose n = 2) the right hand side is independent
of the choice of parametrix S.
To prove (a), write

\[ \operatorname{Tr}((1 - ST)^n) - \operatorname{Tr}((1 - ST)^{n+1}) = \operatorname{Tr}(ST(1 - ST)^n) = \operatorname{Tr}(T(1 - ST)^n S) = \operatorname{Tr}(TS(1 - TS)^n) = \operatorname{Tr}((1 - TS)^n) - \operatorname{Tr}((1 - TS)^{n+1}). \]
(The equality between the first and second lines uses Proposition 2.15 applied to S
and T (1 − ST )n ; the rest is just algebra.) This rearranges to give (a).
Let us prove (b). Let R and S be two parametrices modulo trace-class operators.
We will use several times the algebraic identity x2 − y 2 = x(x − y) + (x − y)y (valid
in any ring). Applying this we get
Tr((1 − RT )2 ) − Tr((1 − ST )2 ) = Tr((1 − RT )(S − R)T + (S − R)T (1 − ST )).
Consider this expression as the sum of two traces. In the first we use Proposition 2.15
to bring T round to the front; in the second we use the algebraic identity T (1−ST ) =
(1 − T S)T . This gives
Tr(T (1 − RT )(S − R) + (S − R)(1 − T S)T ).
Play the same game the other way round, using the algebraic identity T (1 − RT ) =
(1 − T R)T on the first term and the trace property on the second term, to get
Tr((1 − T R)T (S − R) + T (S − R)(1 − T S)) = Tr((1 − T R)2 ) − Tr((1 − T S)2 ).


Putting this equal to the original expression Tr((1 − RT )2 ) − Tr((1 − ST )2 ) and


rearranging, we get (b). 
(This slightly convoluted argument is needed so that we can use 2.15 only in
the generality that we have proved it. If we were allowed to use the most general
version, the proof could be simplified quite a bit, especially if we only wanted the
most important case n = 1. )


Lecture 4
Banach algebras

We review some basic information about Banach algebras.


Definition 4.1. A normed algebra is a normed vector space that is equipped with
a bilinear, associative multiplication A × A → A such that kabk 6 kakkbk for all
a, b ∈ A. It is a Banach algebra if the underlying normed vector space is complete.
It is unital if there is 1 ∈ A which is a unit for multiplication and has k1k = 1.
Definition 4.2. An involution on a normed algebra is an antilinear map A → A,
denoted a 7→ a∗ , such that a∗∗ = a, kak = ka∗ k, and (ab)∗ = b∗ a∗ . A Banach algebra
equipped with an involution is called a Banach ∗-algebra.
It is easy to see that if a Banach ∗-algebra is unital, then 1∗ = 1 (exercise).
Example 4.3. Let H be a Hilbert space. The bounded operators B(H) form a
unital Banach ∗-algebra (relative to the operator norm). The compact operators
K(H) form a Banach ∗-subalgebra (in fact an ideal) which is not unital unless H is
finite-dimensional.
Example 4.4. With H a Hilbert space as above, the Hilbert-Schmidt operators
(equipped with the Hilbert-Schmidt norm) form a Banach ∗-algebra, which does
not have a unit unless H is finite-dimensional.
Notice that the norm in Example 4.3 satisfies the C ∗ -identity (2.1), namely
ka∗ ak = kak2 , whereas the norm in Example 4.4 does not.
Example 4.5. Let X be a locally compact Hausdorff space and let C0 (X) be the
Banach space of continuous functions f : X → C that vanish at infinity (that is,
for every ε > 0 there is K ⊆ X compact with x ∈ X \ K =⇒ |f (x)| < ε). Then
C0 (X) is a Banach ∗-algebra with pointwise addition, multiplication and complex
conjugation as the operations. It is unital iff X is compact (in which case we denote
it C(X)).
Example 4.6. Let U = {z ∈ C : |z| < 1} denote the unit disc. Let D denote the
collection of continuous functions on the closed disc \overline{U} that are holomorphic on U. We
equip it with the supremum norm, with pointwise addition and multiplication, and with the
involution f ∗ (z) = \overline{f(\bar z)}. Then D is a Banach ∗-algebra.
Example 4.7. Let Γ be a discrete group, and let `1 (Γ) be the usual sequence space
with basis vectors eγ (for γ ∈ Γ). We equip it with the convolution product defined
on basis elements by eγ eδ = eγδ , and the involution defined on basis elements by
(eγ )∗ = eγ −1 . Then `1 (Γ) becomes a unital Banach ∗-algebra. A similar construction
can be carried out for general locally compact groups G using L1 (G, µ) for a Haar
measure µ; this is not a unital algebra if G is not discrete.
Exercise: which of the preceding algebras satisfy the C ∗ -identity?


Definition 4.8. Let A be a unital Banach algebra. The spectrum Spectrum(a) of


a ∈ A is the set of complex numbers λ such that a − λ1 does not have an inverse in
A. The complement of the spectrum is called the resolvent set.
We’ll use the notation SpectrumA (a) if it is necessary to specify the algebra A —
although, as we shall see later, this will seldom be necessary for C ∗ -algebras.
Remark 4.9. If A is a non-unital algebra, it can always be embedded as a codi-
mension one ideal in the unitalization à which is the set of formal symbols a + λ1,
a ∈ A, λ ∈ C , with the obvious rules for addition and multiplication. If A is a
Banach algebra, one can always make à into a Banach algebra also; there may be
several ways to do this—we will detail the one appropriate for C ∗ -algebras later.
We define the spectrum of an element of a non-unital algebra A to be its spectrum
in the unitalization. Notice that 0 always belongs to this spectrum (why?).
Lemma 4.10. Let A be a unital Banach algebra. The subset Ainv ⊆ A of invertible
elements is open. Moreover, the inversion map Ainv → Ainv is continuous; in fact, it
is differentiable, with its derivative at a ∈ Ainv being the linear map x 7→ −a−1 xa−1 .
Remark 4.11. A continuous function f : E → F between Banach spaces is differ-
entiable at x ∈ E if there exists a bounded linear transformation T : E → F such
that
\[ \frac{\|f(x + h) - f(x) - T\cdot h\|}{\|h\|} \to 0 \quad \text{as } h \to 0 \]
for h ∈ E \{0}. The linear transformation T (necessarily unique) is the derivative of
f at x. “Differentiation” in the proposition above is taken in this sense. Reference:
Dieudonné, Foundations of Modern Analysis.
Proof. For kxk < 1 the power series
(1 + x)−1 = 1 − x + x2 − · · · ,
converges and shows that 1 + x is invertible (we already used this argument in
Lemma 3.5); thus 1 is an interior point of Ainv . Moreover simple estimates give
(1 + x)−1 = 1 − x + O(kxk2 ) for kxk < 1/2; this shows that the inversion map is
differentiable at 1 with derivative x 7→ −x. Corresponding results at other points
of Ainv follow since Ainv is a Banach manifold and left and right multiplication by
a ∈ Ainv are smooth linear maps. 
Corollary 4.12. Let a be an element of a unital Banach algebra A. The resolvent
set of a ∈ A is open, and so the spectrum is closed.
Proof. The resolvent set is the inverse image of Ainv under a continuous map. 
Corollary 4.13. Maximal ideals in a unital Banach algebra A are closed. Every
algebra homomorphism α : A → C is continuous, of norm 6 1.
Proof. Let m be a maximal ideal in A. Then m does not meet the open set of
invertible elements in A. The closure m̄ then does not meet the set of invertibles
either, so it is a proper ideal, and by maximality m̄ = m.


Since α is a homomorphism to a field, its kernel is a maximal ideal, hence closed.


Thus α is continuous, by the usual continuity criterion for linear functionals. The
norm estimate is obtained by refining the argument: if kαk > 1 then there is a ∈ A
with kak < 1 and α(a) = 1. But then 1 − a is invertible, so 1 − α(a) is invertible
too, and this is a contradiction. 
Proposition 4.14. Let A be a unital Banach algebra, and let a ∈ A. Then the
spectrum Spectrum(a) is a nonempty compact subset of C.
Proof. The power series expansion
(4.15) (λ1 − a)−1 = λ−1 + λ−2 a + λ−3 a2 + · · ·
converges for |λ| > kak and shows that the spectrum of a is contained within
D(0; kak). We already showed that the spectrum is closed, so it is compact. To
show that it is nonempty, suppose the contrary. Let ϕ be any continuous linear
functional on A. Then the complex-valued function
\[ \lambda \mapsto \varphi\bigl((\lambda 1 - a)^{-1}\bigr) \]
is entire (holomorphic on C) by the differentiability assertion in Lemma 4.10. Since


it also tends to 0 at infinity, it is identically 0 by Liouville’s theorem. Since this is
true for all choices of ϕ, the Hahn-Banach theorem implies that (λ1 − a)−1 = 0 for
all λ, which is absurd. 
Corollary 4.16. (Gelfand-Mazur theorem) The only Banach algebra which is
also a field (or a division ring) is C. Consequently, if m is a maximal ideal in a
Banach algebra A, then A/m = C.
Proof. Let A be such an algebra, a ∈ A. Then a has nonempty spectrum, so there is
some λ ∈ C such that a − λ1 is not invertible. But in a field the only non-invertible
element is zero; so a = λ1 is a scalar. 
Lemma 4.17. (Spectral mapping theorem) Let A be a unital algebra, let a ∈
A and let p be a polynomial (with complex coefficients). Then Spectrum(p(a)) =
p(Spectrum(a)) as subsets of C.
Proof. Factor p(λ) − µ = c(λ − λ1 ) · · · (λ − λn ) using the fundamental theorem of
algebra, and observe that
p(a) − µ1 = c(a − λ1 1) · · · (a − λn 1)
is invertible iff all of (a − λ1 1), . . . , (a − λn 1) are; that is, iff µ ∉ p(Spectrum(a)). □
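For matrices, where the spectrum is the set of eigenvalues, the spectral mapping theorem can be checked numerically: the eigenvalues of p(a), with multiplicity, are the values of p at the eigenvalues of a. A small NumPy sketch with an arbitrary polynomial and random matrix:

    import numpy as np

    rng = np.random.default_rng(7)
    n = 5
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

    p = lambda z: 2 * z**3 - z + 0.5                 # an arbitrary polynomial
    spec_a = np.linalg.eigvals(a)
    spec_pa = np.linalg.eigvals(2 * a @ a @ a - a + 0.5 * np.eye(n))   # spectrum of p(a)

    # p(Spectrum(a)) and Spectrum(p(a)) agree as (multi)sets; compare after sorting.
    print(np.allclose(np.sort_complex(p(spec_a)), np.sort_complex(spec_pa)))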
Definition 4.18. Let A be a Banach algebra. The spectral radius of a ∈ A is the
radius of the smallest disc around the origin enclosing the spectrum of a; that is
sprA (a) = inf{r ∈ R+ : SpectrumA (a) ⊆ D(0; r)}.
Lemma 4.19. (Spectral radius formula) In a Banach algebra A the spectral
radius of an element a is given by the formula
\[ \operatorname{spr}_A(a) = \lim_{n\to\infty} \|a^n\|^{1/n} = \inf_n \|a^n\|^{1/n} \]


(the limit always exists, and equals the infimum).


Remark 4.20. In the proof we will use a special case of the uniform boundedness
theorem. We recall this briefly. Let E be a Banach space and S a collection of
continuous linear functionals on E. One says that S is point bounded if for each
x ∈ E, the set {s(x) : s ∈ S} is a bounded subset of C. The UBP (also known as
the Banach-Steinhaus theorem) says that if S is point bounded then it is uniformly
bounded, that is, sup{ksk : s ∈ S} < ∞.
Proof. We showed in the proof of 4.14 that spr(a) 6 kak. By the spectral mapping
theorem spr(a)n = spr(an ) 6 kan k and thus spr(a) 6 kan k1/n for every n. It suffices
then to show that
\[ \operatorname{spr}(a) \ge \limsup_{n\to\infty} \|a^n\|^{1/n}. \]
This uses complex analysis again. Let ϕ be a continuous linear functional on A as
before. Then the function
fϕ : λ 7→ ϕ((1 − λa)−1 )
is holomorphic on D = {λ : |λ| < spr(a)−1 }. Its Taylor expansion, which is
\[ f_\varphi(\lambda) = \sum_{n=0}^{\infty} \varphi(a^n)\, \lambda^n, \]

must converge on D, so for each λ ∈ D the set Sλ = {λn an } has the property
that ϕ(Sλ ) is bounded for all ϕ ∈ A∗ . In other words, Sλ ⊂ A is point bounded
(when considered as a set of linear functionals on A∗ ). By the Uniform Boundedness
Principle (with E = A∗ ), Sλ is uniformly bounded, i.e., it is a bounded subset of
A ⊆ A∗∗ . Consequently
lim sup kan k1/n = |λ|−1 lim sup kλn an k1/n 6 |λ|−1 .
This holds for all λ ∈ D, that is, whenever |λ| < spr(a)−1 , so we finally get
lim sup kan k1/n 6 spr(a) as required. 
Remark 4.21. The basic idea of the above proof is the following: the series λ 7→
(1 − λa)^{-1} = Σ_n a^n λ^n has radius of convergence equal to spr(a)−1 . We apply the
Cauchy-Hadamard formula for the radius of convergence (supposing that this is
valid for Banach space valued holomorphic functions) to get
\[ \operatorname{spr}(a)^{-1} = \frac{1}{\limsup_{n\to\infty} \|a^n\|^{1/n}}. \]
The point of the argument we gave is to use the functionals ϕ to avoid having
to develop the theory of Banach valued holomorphic functions, reducing to the
scalar case. Essentially, we went through part of the proof of the Cauchy-Hadamard
formula and then applied the uniform boundedness principle.
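The spectral radius formula is also easy to watch numerically. For a non-normal matrix the norm can be far larger than the spectral radius, but kan k1/n still converges to it; the 2 × 2 example below is chosen (arbitrarily) to make the gap dramatic.

    import numpy as np

    a = np.array([[0.5, 100.0],
                  [0.0, 0.25]])
    spr = max(abs(np.linalg.eigvals(a)))                      # spectral radius = 0.5

    for k in (1, 10, 100, 1000):
        print(k, np.linalg.norm(np.linalg.matrix_power(a, k), 2) ** (1 / k))
    # The printed values decrease towards spr = 0.5, even though ||a|| is about 100.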


Lecture 5

C ∗ -algebra basics and the Gelfand-Naimark theorem

Gelfand used the theory of maximal ideals to analyze the structure of commutative
unital Banach algebras. If A is such an algebra then its Gelfand dual Â is the space
of maximal ideals of A; equivalently, by 4.16 and 4.13 above, it is the space of
algebra-homomorphisms A → C. Every such homomorphism is of norm 6 1, as we
saw, so Â may be regarded as a subset of the unit ball of the Banach space dual
A∗ of A (namely, it is the subset comprising all those linear functionals α which are
also multiplicative in the sense that α(xy) = α(x)α(y)). This is a weak-star closed
subset of the unit ball of A∗ , and so by the Banach-Alaoglu Theorem 1.4 it is a
compact Hausdorff space in the weak-star topology. If a ∈ A then we may define a
continuous function â on Â by the usual double dualization:
â(α) = α(a).
In this way we obtain a contractive3 algebra-homomorphism G : a 7→ â, called the
Gelfand transform, from A to C(Â), the algebra of continuous functions on the
Gelfand dual.
Theorem 5.1. Let A be a commutative unital Banach algebra. An element a ∈ A
is invertible if and only if its Gelfand transform â = Ga is invertible. Consequently,
the Gelfand transform preserves spectrum: the spectrum of Ga is the same as the
spectrum of a.
Proof. If a is invertible then Ga is invertible, since G is a homomorphism. On the
other hand, if a is not invertible then it is contained in some maximal ideal m,
which corresponds to a point of the Gelfand dual on which Ga vanishes. So Ga isn’t
invertible either. 
Example 5.2. Let A = `1 (Z), the Banach algebra of summable two-sided sequences
under convolution. Each such sequence (a_n) may be identified with the corresponding
absolutely convergent Fourier series Σ_n a_n z^n , z ∈ T, and in this way we obtain
a homomorphism
A → C(T).
In fact, this homomorphism is the Gelfand transform: the dual space Â is identified
with the circle. (Each character on A is determined by what it does to the generator
of the group Z, and it must send this generator to a complex number of modulus
one.) Thus the Gelfand transform generalizes some of the basic ideas of Fourier
analysis.
Remark 5.3. Norbert Wiener proved that if a continuous function f is nowhere
vanishing and has an absolutely convergent Fourier series, then its pointwise inverse
has an absolutely convergent Fourier series too. This is a classic application of “soft
analysis”: it follows immediately from Theorem 5.1 above.
3This means kGak 6 kak.
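Wiener's theorem can be illustrated numerically: starting from a finitely supported a ∈ `1 (Z) whose Fourier series never vanishes on the circle, the Fourier coefficients of the pointwise inverse can be approximated by a discrete Fourier transform, and they form a summable sequence which inverts a under convolution. The element a = 4e0 + e1 + e−1 below is an arbitrary choice, and the sample size N only controls the accuracy.

    import numpy as np

    # Gelfand transform of a: f(z) = 4 + z + 1/z, nonvanishing on |z| = 1 since |z + 1/z| <= 2 < 4.
    N = 256
    z = np.exp(2j * np.pi * np.arange(N) / N)
    f = 4 + z + 1 / z

    b = np.fft.ifft(1 / f)        # Fourier coefficients of 1/f (indices mod N; symmetric here)
    print(np.sum(np.abs(b)))      # finite l^1 mass of the inverse (about 0.5)

    conv = 4 * b + np.roll(b, 1) + np.roll(b, -1)         # the convolution a * b
    print(np.allclose(conv, np.eye(N)[0]))                # ~ e_0, so b inverts a in l^1(Z)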


The Gelfand theory works for all commutative Banach algebras: an involution is
not required. However, in that generality the Gelfand transform need not be either
injective or surjective. (We have already seen that it is not surjective for `1 (Z); for
injectivity, consider the 2-dimensional commutative algebra of matrices of the form
\(\begin{pmatrix} a & b \\ 0 & a \end{pmatrix}\) for a, b ∈ C; the Gelfand dual consists of only one point.)
Definition 5.4. A homomorphism ϕ of Banach ∗-algebras is a ∗-homomorphism if
it preserves the involution, that is, ϕ(a∗ ) = ϕ(a)∗ for all a.
Let A be a unital Banach ∗-algebra. We say that A is symmetric if, for all a ∈ A
such that a = a∗ , we have SpectrumA (a) ⊆ R. For example, if X is a compact
Hausdorff space, then C(X) is symmetric. On the other hand, the disk algebra D
is not symmetric (every element with all Taylor coefficients real has a = a∗ , but
the points of U give elements of the Gelfand dual, and no nonconstant holomorphic
function is real-valued at all these points).
Lemma 5.5. Let A be a commutative unital Banach ∗-algebra. The Gelfand trans-
form G : A → C(Â) is a ∗-homomorphism if and only if A is symmetric.
Proof. If G is a ∗-homomorphism, then for a = a∗ ∈ A, the function â is real-valued.
Since the spectrum of a is the same as the spectrum (that is, the closure of the
range) of â (by Theorem 5.1), it is a subset of R. Conversely, if A is symmetric, then
for any a ∈ A, the Gelfand transforms of the selfadjoint elements a + a∗ and i(a − a∗ )
are real-valued functions: a bit of algebra then shows that the Gelfand transform of a∗
is the complex conjugate of â, so G is a ∗-homomorphism. □
Lemma 5.6. Let A ⊆ B be symmetric unital Banach ∗-algebras. Let a ∈ A. Then
SpectrumA (a) = SpectrumB (a).
Proof. It is enough to show that a is invertible in A if and only if it is invertible
in B. “Only if” is obvious. To show “if”, suppose first that a = a∗ . Suppose that
a has an inverse b ∈ B. By the symmetric property of the algebra A, (a − (i/n)1)
is invertible in A for each n, and (a − (i/n)1)−1 → b as n → ∞ (by continuity of
inversion in B). But then b is a limit of members of A, so it belongs to A after all.
We have verified the claim when a = a∗ . To get the general case from this,
suppose that a is invertible in B, with inverse b. Then, in B, bb∗ a∗ a = b · 1 · a = 1,
and similarly a∗ abb∗ = 1, so a∗ a is invertible in B, hence in A by the special case
that we have already proved. That is, bb∗ ∈ A. Now it follows that b = bb∗ a∗ ∈ A
as required. 
We now focus on algebras satisfying the C ∗ -identity 2.1.
Definition 5.7. A Banach ∗-algebra A is a C ∗ -algebra if ka∗ ak = kak2 for all a ∈ A.
A normed ∗-algebra that satisfies the C ∗ -identity but need not be complete is
sometimes called a pre-C ∗ -algebra. Such an algebra can be completed to a C ∗ -
algebra in the usual way.
Proposition 5.8. Let A be a unital C ∗ -algebra. If a ∈ A is normal (that is,
commutes with a∗ ) then kak = spr(a). For every a ∈ A, kak = spr(a∗ a)1/2 .


Proof. If a is normal then


kak2 = ka∗ ak = ka∗ aa∗ ak1/2 = ka2 a∗2 k1/2 = ka2 k
using the C ∗ -identity several times. Thus the spectral radius formula gives kak =
spr(a). Whatever a is, the element b = a∗ a is normal (indeed selfadjoint), so the
second part follows from the first applied to b, together with the C ∗ -identity again.

It follows that “the norm on a C ∗ -algebra is unique”. That is, the algebraic
structure of a C ∗ -algebra (including the involution) uniquely determines the norm.
Proposition 5.9. Every (unital) C ∗ -algebra is symmetric. Consequently, the Gelfand
transform for a commutative unital C ∗ -algebra is a ∗-homomorphism.
Proof. Let a ∈ A with a = a∗ where A is a unital C ∗ -algebra. Let r ∈ R. Then
from the C ∗ -identity
ka ± ri1k2 = ka2 + r2 1k 6 kak2 + r2 .
Consequently the spectrum of a lies within the lozenge-shaped region
\[ L_r = \{\lambda : |\lambda \pm ri| \le (r^2 + \|a\|^2)^{1/2}\}. \]
As r → ∞ the intersection of all these lozenge-shaped regions is the interval
[−kak, kak] of the real axis. 
Corollary 5.10. The Gelfand transform for a commutative unital C ∗ -algebra is an
isometric ∗-homomorphism (in particular, it is injective).
Proof. This follows from the facts that G preserves the spectrum and the involution,
and that the norm on a C ∗ -algebra is determined by the spectral radius. 
What about the range of the Gelfand transform?
Lemma 5.11. The Gelfand transform for a commutative C ∗ -algebra is surjective.
Proof. The range of G is a ∗-subalgebra of C(Â) which separates points of Â (for
tautological reasons). By the Stone-Weierstrass theorem, then, the range of G is
dense. But the range is also complete, since it is isometrically isomorphic (via G) to
a Banach algebra; hence it is closed, and thus is the whole of C(Â). □
Putting it all together:
Theorem 5.12. (Gelfand-Naimark theorem) If A is a commutative unital C ∗ -
algebra, then A is isometrically isomorphic to C(X), for some (uniquely determined
up to homeomorphism) compact Hausdorff space X. Any unital ∗-homomorphism of
commutative unital C ∗ -algebras is induced (contravariantly) by a continuous map of
the corresponding compact Hausdorff spaces. Consequently, the Gelfand transform
gives rise to an equivalence of categories between the category of commutative unital
C ∗ -algebras and the opposite of the category of compact Hausdorff spaces.


Corollary 5.13. Every ∗-homomorphism between (unital) C ∗ -algebras is contrac-


tive, and injective ∗-homomorphisms are isometric.
Proof. It suffices to check this for self-adjoint elements by the C ∗ -identity. Then
by restricting to the subalgebras generated by a single self-adjoint element and its
image we may assume A and B are commutative. Now the result follows from the
Gelfand-Naimark theorem (the map C(Y ) → C(X) induced by f : X → Y is always
contractive, and it is isometric if f is surjective). 
Exercise 5.14. Suppose that a ∗-algebra is a C ∗ -algebra in one norm k · k1 and a
pre-C ∗ -algebra in another norm k · k2 . Prove that k · k1 = k · k2 .
Let A be a unital C ∗ -algebra, not necessarily commutative, and let a ∈ A be
a normal element. Then the unital C ∗ -subalgebra C ∗ (a) ⊆ A generated by a is
commutative, so according to Gelfand-Naimark it is of the form C(X) for some
compact Hausdorff space X. We ask: What is X? Notice that every homomor-
phism α : C ∗ (a) → C is determined by the complex number α(a). Thus â defines a
continuous injection \widehat{C^*(a)} → C.

Proposition 5.15. Let a be a normal element of a unital C ∗ -algebra A. Then the


Gelfand transform identifies C ∗ (a) with C(Spectrum(a)), the continuous functions
on the spectrum of a. Under this identification, the operator a corresponds to the
identity function z 7→ z, and the operator a∗ corresponds to the complex conjugation
function z 7→ z̄.
Proof. What is the image of the injection â : \widehat{C^*(a)} → C defined above? It is simply
the spectrum of the function â considered as an element of the commutative Banach
algebra C(\widehat{C^*(a)}). By Gelfand’s theorem above, this is the same thing as the
spectrum of a, considered as an element of the commutative Banach algebra C ∗ (a).
Thus â is a homeomorphism4 from \widehat{C^*(a)} to Spectrum_{C ∗ (a)}(a). But by Lemma 5.6,
the spectrum of a in C ∗ (a) is the same as the spectrum of a in A. □

4A continuous bijective map of compact Hausdorff spaces is a homeomorphism.


Lecture 6
Functional calculus and positivity

A slight reformulation of Proposition 5.15 yields the so-called functional calculus


for C ∗ -algebras. Namely, given a normal element a of a C ∗ -algebra A, and given
a continuous function f on Spectrum(a), there is a unique element of A (in fact
of C ∗ (a)) which corresponds to f under the Gelfand transform for C ∗ (a). What is
usually called the functional calculus is simply the following:
Decision: We denote the element described above by f (a).
This procedure has all the properties which the notation would lead you to expect
(so long as you confine yourself to thinking about only a single normal element).
For instance, (f + g)(a) = f (a) + g(a), (f · g)(a) = f (a)g(a), (f ◦ g)(a) = f (g(a))
and so on. Moreover, if f is a polynomial (or a rational function with poles in the
resolvent set of a), the above definition of f (a) agrees with the naive one obtained
by “substituting” a into the polynomial or rational function. If f is a holomorphic
function defined by a power series f (z) = Σ_n c_n z^n whose radius of convergence
is greater than spr(a), then the series Σ_n c_n a^n converges in A and defines f (a).
The proofs all follow from the fact that the Gelfand transform is an isomorphism,
sometimes also using polynomial approximation (see last part of next proof).
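For a normal matrix the functional calculus is computed concretely by diagonalizing and applying f to the eigenvalues, and the properties just listed can be checked numerically. A short NumPy sketch for a selfadjoint matrix (scaled small so that the exponential power series converges quickly); the functions exp and sin are arbitrary choices.

    import numpy as np
    from math import factorial

    rng = np.random.default_rng(8)
    n = 4
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    a = (A + A.conj().T) / 4                    # a small selfadjoint (hence normal) matrix

    def func_calc(f, a):
        # f(a): apply f to the eigenvalues and conjugate back by the eigenvector unitary.
        w, U = np.linalg.eigh(a)
        return U @ np.diag(f(w)) @ U.conj().T

    print(np.allclose(func_calc(np.exp, a) @ func_calc(np.sin, a),
                      func_calc(lambda t: np.exp(t) * np.sin(t), a)))   # (f.g)(a) = f(a) g(a)

    series = sum(np.linalg.matrix_power(a, k) / factorial(k) for k in range(25))
    print(np.allclose(func_calc(np.exp, a), series))   # agrees with the power series for exp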
Lemma 6.1. Let A be a unital C ∗ -algebra. Suppose that a, b ∈ A, that a is normal,
and that a commutes with b. Then g(a) commutes with b for any continuous function
g on Spectrum(a).
Proof. The key to the proof is to show that a∗ commutes with b (this is the special
case f (z) = z̄ of the lemma). This part is sometimes called the Fuglede-Putnam
theorem. To see this notice first that e^{λ̄a} b e^{−λ̄a} = b for any complex number λ. Thus
\[ f(\lambda) := e^{-\lambda a^*}\, b\, e^{\lambda a^*} = U_\lambda\, b\, U_\lambda^{*} \]
where U_λ = e^{−λa∗ + λ̄a} is unitary (being the exponential of a skew-adjoint operator).
Thus kf (λ)k is constant, in particular, f (λ) is a bounded A-valued holomorphic
function. Using duality and Liouville’s theorem as in the proof of Proposition 4.14,
it follows that f (λ) is constant. Now f 0 (0) = 0 gives the desired identity a∗ b = ba∗ .
To get the general case from this, let pn (z, z̄) be a sequence of polynomials tending
uniformly to g on Spectrum(a) (exists by the Stone-Weierstrass theorem). Then
pn (a, a∗ ) → g(a) in norm. By the special case, pn (a, a∗ ) commutes with b for all n.
Hence, so does g(a). 
To conclude this discussion, we make some comments about how all this goes in
the non-unital case. Let A be a non-unital C ∗ -algebra. As in Remark 4.9, we can
embed A as a codimension one ideal in Ã, the set of formal symbols a + λ1 with
a ∈ A, λ ∈ C. There is a natural involution: (a + λ1)∗ = a∗ + λ̄1.
Lemma 6.2. The unitalization of a nonunital C ∗ -algebra A is a unital C ∗ -algebra,
and A is a C ∗ -subalgebra of it.


Proof. Let A be a non-unital C ∗ -algebra. We must define a C ∗ -algebra norm, nec-


essarily unique, on Ã. Notice that each element x = a + λ1 ∈ Ã defines a left
multiplication map Lx (b) = ab + λb from A to A; we define kxk to be the norm of
Lx as an operator on the Banach space A. One must check that this satisfies the
C ∗ -identity: we have
kLx (b)k2 = kb∗ (a∗ a + λa∗ + λ̄a + |λ|2 )bk = kb∗ Lx∗ Lx (b)k
6 kb∗ k kLx∗ Lx (b)k 6 kbk2 kLx∗ Lx k.
Taking the sup over kbk 6 1 gives us kLx k2 6 kLx∗ Lx k 6 kLx∗ k kLx k; by symmetry
we have equality all through, which is the C ∗ -identity. It is another easy exercise to
show that if a ∈ A the norm of La is equal to the norm of a, which completes the
proof. 
Let A be a nonunital C ∗ -algebra and let a ∈ A be normal. We define the spectrum
of a in this case to be its spectrum in Ã. It always contains zero, and it is easy to
see that
C0∗ (a) ≅ C0 (Spectrum(a) \ {0}),
where C0∗ (a) denotes the non-unital C ∗ -subalgebra of A generated by a. So, in this

case we have a functional calculus for continuous functions on the spectrum that
vanish at zero. The Gelfand-Naimark theorem says that every nonunital commuta-
tive C ∗ -algebra is of the form C0 (X) for some locally compact Hausdorff space X.
The process of unitalization corresponds to forming the one-point compactification
of X.
Remark 6.3. Let A be a unital C ∗ -algebra and let a be a normal element of A. For
every connected component K of Spectrum(a) there is defined a spectral projection
p = χK (a) ∈ A, which is given by the functional calculus using the (continuous)
function χK , the characteristic function of K; we have p = p2 = p∗ . It is sometimes
important to know that p can be given by the Cauchy integral formula
(6.4)        p = (1/2πi) ∫Γ (z1 − a)−1 dz,
where Γ is a cycle in the resolvent set of a which has winding number 1 about K
and 0 about all other components of Spectrum(a). (The integrand is a continuous
function on a compact set with values in a Banach space; there is no difficulty
about defining such integrals as limits of Riemann sums in the usual way.) To prove
equation 6.4 it suffices, by the Gelfand-Naimark theorem, to consider the case where
A = C(X) for some compact X; then the result follows from the integral formula
for the winding number
wn(Γ; w) = (1/2πi) ∫Γ dz/(z − w)
together with an easy “Fubini-type” theorem that equates the result of evaluating a
C(X)-valued integral at x0 ∈ X with the ordinary integral of the function obtained
by evaluating the integrand at x0 .
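For those who like to compute, here is a rough numerical check of formula (6.4) — a
Python/numpy sketch, with a made-up 5 × 5 symmetric matrix whose spectrum falls into
two well-separated clusters; the contour integral is approximated by a Riemann sum as
described above:

import numpy as np

rng = np.random.default_rng(1)
q, _ = np.linalg.qr(rng.standard_normal((5, 5)))      # random orthogonal matrix
a = q @ np.diag([0.0, 0.1, -0.05, 2.0, 2.2]) @ q.T    # two spectral clusters

# contour: circle of radius 1 about 0, enclosing the cluster near 0 only
n = 2000
p = np.zeros((5, 5), dtype=complex)
for k in range(n):
    z = np.exp(2j * np.pi * k / n)
    dz = 1j * z * (2 * np.pi / n)
    p += np.linalg.solve(z * np.eye(5) - a, np.eye(5)) * dz   # resolvent (z1 - a)^{-1}
p /= 2j * np.pi

lam, u = np.linalg.eigh(a)
p_exact = sum(np.outer(u[:, i], u[:, i]) for i, l in enumerate(lam) if abs(l) < 1)
print(np.allclose(p, p_exact, atol=1e-8), np.allclose(p @ p, p, atol=1e-8))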


Lecture 7
Positivity and order

Let T be a selfadjoint operator on a Hilbert space H. As we already said in


Example 2.8, one says that T is positive if the quadratic form defined by T is
positive semidefinite, that is, if
hT v, vi > 0 for all v ∈ H.
The positive operators form a cone — the sum of two positive operators is positive,
as is any positive real multiple of a positive operator. It is easy to prove that the
following are equivalent for a selfadjoint T ∈ B(H):
(a) The spectrum of T is contained in the positive reals.
(b) T is the square of a selfadjoint operator.
(c) T = S ∗ S for some operator S.
(d) T is positive.
Indeed, (a) implies (b) by using the functional calculus (in the C ∗ -algebra B(H))
to construct a square root of T , (b) obviously implies (c) which implies (d) since
hS ∗ Sv, vi = kSvk2 , and finally if (d) holds let us define
(7.1) T+ = f (T ), T− = g(T )
where f (x) = max(x, 0) and g(x) = − min(x, 0). We have T = T+ − T− and T+ T− = 0, so
for any w ∈ H,
hT T− w, T− wi = −hT−2 w, T− wi = −k(T− )3/2 wk2 6 0.
So positivity implies that T− = 0, which yields (a).
Remark 7.2. A positive operator T may have many self-adjoint square roots as in (b)
above (even the complex number 1 has two!) but it can only have one positive square
root. To see this, let A be the positive square root of T constructed above (that
is, A = f (T ) where f (λ) = λ1/2 for λ > 0) and let B be another positive operator
with B 2 = T . Since B commutes with T , it commutes with A (Lemma 6.1). Now
let ξ ∈ H and write Aξ − Bξ = η. Then
hAη, ηi + hBη, ηi = h(A + B)η, ηi = h(A2 − B 2 )ξ, ηi = 0,
where we used AB = BA to write A2 − B 2 = (A + B)(A − B). By positivity
Aη = Bη = 0. Now
kηk2 = h(A − B)ξ, ηi = hξ, (A − B)ηi = 0,
so η = 0 whatever ξ is; hence A = B and we have the uniqueness.
Let H be a Hilbert space.
Definition 7.3. Let T ∈ B(H). A polar decomposition for T is a factorization
T = V P , where P is a positive operator and V is a partial isometry with initial
space Im(P ) and final space Im(T ).


Here we are using the notation Im(T ) for the closure of the range of the operator
T . Note that V and P need not commute, so that there are really two notions of
polar decomposition (left and right handed); we have made the conventional choice.
Proposition 7.4. Every operator T ∈ B(H) has a unique polar decomposition
T = V P . The operator P = |T | = (T ∗ T )1/2 belongs to C ∗ (T ), the C ∗ -subalgebra of
B(H) generated by T . If T happens to be invertible, then V is unitary, and it also
belongs to C ∗ (T ) in this case.
Proof. Let T = V P be a polar decomposition. Then T ∗ T = P ∗ V ∗ V P = P 2 , so P is
the unique positive square root of T ∗ T , which we denote by |T |. Fixing this P let
us now try to construct a polar decomposition. For ξ ∈ H,
kP ξk2 = hP ∗ P ξ, ξi = hT ∗ T ξ, ξi = kT ξk2
so an isometry Im(P ) → Im(T ) is defined by P v 7→ T v. Extending by zero on the
orthogonal complement we get a partial isometry V with the property that T = V P ;
and it is uniquely determined since the definition of polar decomposition requires
that V (P v) = T v and that V = 0 on the orthogonal complement of Im(P ).
If T is invertible then so is T ∗ T , and we have V = T (T ∗ T )−1/2 ; this shows that
V ∈ C ∗ (T ) in this case, and unitarity is a simple check. 
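A quick numerical sketch of the construction (Python/numpy, with an arbitrary —
generically invertible — made-up matrix; an illustration only):

import numpy as np

rng = np.random.default_rng(2)
t = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

lam, u = np.linalg.eigh(t.conj().T @ t)          # T^*T is positive
p = u @ np.diag(np.sqrt(lam)) @ u.conj().T       # P = |T| = (T^*T)^{1/2}
v = t @ np.linalg.inv(p)                         # V = T (T^*T)^{-1/2}

print(np.allclose(v @ p, t))                               # T = V P
print(np.allclose(v.conj().T @ v, np.eye(4), atol=1e-10))  # V unitary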
A key challenge in setting up the theory of abstract C ∗ -algebras is to prove the
equivalence of (a)–(c) above without using quadratic forms on Hilbert space as a
crutch. This was done by Kelley and Vaught some years after C ∗ -algebras were in-
vented! (This time lapse is responsible for a now-obsolete terminological difference in
the older literature: there was a distinction between ‘B ∗ -algebras’ and ‘C ∗ -algebras’,
and Kelley and Vaught’s theorem showed that this distinction was otiose.)
Proposition 7.5. Let A be a unital C ∗ -algebra. The following conditions on a
selfadjoint a ∈ A are equivalent:
(a) The spectrum of a is contained in the positive reals.
(b) a is the square of a selfadjoint element of A.
(c) kt1 − ak 6 t for all t > kak.
(d) kt1 − ak 6 t for some t > kak.
Proof. Conditions (a) and (b) are equivalent by a functional calculus argument.
Suppose that (a) holds. Then Spectrum(a) is contained in the interval [0, kak] of
the real axis and so the supremum of |t − λ|, for λ ∈ Spectrum(a), is
at most t; this proves (c) which implies (d). Finally, assume (d). Then |t − λ| 6 t
for all λ ∈ Spectrum(a); so λ is positive on Spectrum(a), yielding (a). 
We will call a ∈ A positive if it is selfadjoint and satisfies one of these four
conditions. Part (d) of this characterization implies that the positive elements form
a cone:
Proposition 7.6. The sum of two positive elements of A is positive.


Proof. Let a0 , a00 be positive elements of A. Let t0 , t00 be as in (d). Let a = a0 + a00
and t = t0 + t00 . Then by the triangle inequality t > kak and kt1 − ak 6 kt0 1 − a0 k +
kt00 1 − a00 k 6 t0 + t00 = t. So a is positive. 
We can therefore define a (partial) order on Asa by setting a 6 b if b − a is posi-
tive. This order relation is very useful, especially in connection with von Neumann
algebras, but it has to be handled with care — not every ‘obvious’ property of the
order is actually true. For instance, it is not true in general that if 0 6 a 6 b, then
a2 6 b2 . In Proposition 7.8 we will give some true properties of the order relation.
These depend on the following tricky result.
Theorem 7.7. (Kelley–Vaught) An element a of a unital C ∗ -algebra A is positive
if and only if a = b∗ b for some b ∈ A.
Proof. Any positive element can be written in this form with b = a1/2 . To prove the
converse we will argue first that b∗ b cannot be a negative operator (except zero).
Indeed, put b = x + iy with x and y selfadjoint; then
b∗ b + bb∗ = 2(x2 + y 2 ) > 0.
Thus if b∗ b is negative, bb∗ = 2(x2 + y 2 ) − b∗ b is positive. But the spectra of b∗ b and
bb∗ are the same (apart possibly from zero), by a well-known algebraic result; we
deduce that Spectrum(b∗ b) = {0}, and hence b∗ b = 0.
Now for the general case: suppose that a = b∗ b; certainly a is selfadjoint, so we
may write a = a+ − a− where the positive elements a+ and a− are defined as in
Equation 7.1. We want to show that a− = 0. Let c = b(a− )1/2 . Then we have
c∗ c = (a− )1/2 a (a− )1/2 = −(a− )2
so c∗ c is negative. Hence it is zero, by the special case above; so a− = 0 and a is
positive. 
Proposition 7.8. Let x and y be positive elements of a unital C ∗ -algebra A, with
0 6 x 6 y. Then
(a) We have x 6 kxk1 (and similarly for y); moreover kxk 6 kyk.
(b) For any a ∈ A, a∗ xa 6 a∗ ya.
(c) If x and y are invertible, we have 0 6 y −1 6 x−1 .
Proof. The first part of (a) is obvious from the functional calculus, and it implies
the second part by arguing that
x 6 y 6 kyk1
so x 6 kyk1 whence kxk 6 kyk. Item (b) becomes obvious if we use Theorem 7.7;
if x 6 y then y − x = c∗ c, so a∗ ya − a∗ xa = (ca)∗ ca is positive. For (c), if x 6 y
put z = x1/2 y −1/2 ; then z ∗ z = y −1/2 xy −1/2 6 1 (by (b)). But z ∗ z 6 1 if and only if
zz ∗ 6 1 (because the spectra of these are the same apart possibly from {0}), which
is to say that x1/2 y −1 x1/2 6 1; and applying (b) again we get y −1 6 x−1 . 
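Here is a small numerical check of part (c) — a Python/numpy sketch with randomly
generated positive matrices; an illustration, not a proof:

import numpy as np

rng = np.random.default_rng(3)
def random_positive(n):
    s = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return s.conj().T @ s + 0.1 * np.eye(n)      # positive and invertible

x = random_positive(5)
y = x + random_positive(5)                       # so 0 <= x <= y, both invertible
diff = np.linalg.inv(x) - np.linalg.inv(y)       # should be a positive matrix
print(np.linalg.eigvalsh(diff).min() >= -1e-10)  # True, up to rounding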


A (continuous) function f defined on a subset I of R is said to be operator mono-


tone increasing if, for selfadjoint elements x, y of a C ∗ -algebra with spectrum con-
tained in I, x 6 y implies f (x) 6 f (y). There is a similar definition of operator
monotone decreasing; for instance, (c) above says that λ 7→ λ−1 is operator mono-
tone decreasing on R+ .
Lemma 7.9. The following functions are operator monotone increasing on the in-
dicated domains.
(i) The function gµ (λ) = (1+µλ)−1 λ = µ−1 (1 − (1 + µλ)−1 ) defined on (−1/µ, ∞);
(ii) The function λ 7→ λα , for 0 < α 6 1, defined on R+ ;
(iii) The function equal to 0 at 0 and 1 elsewhere, defined on {0} ∪ [ε, ∞), for any
fixed ε > 0.
Proof. (i) follows immediately from the operator monotone decreasing property of
the inverse function. This also gives (ii), once we observe that the function λ 7→ λα ,
defined on R+ , is a norm limit of convex combinations of the functions gµ ; this
follows from the integral formula
λα = cα ∫ gµ (λ) µ−α dµ,
the integral being taken over µ ∈ (0, ∞); this is valid for 0 < α < 1 (the case α = 1
is trivial). Finally, (iii) follows from (ii) since λα → 1 uniformly
on [ε, M ] for any fixed M as α → 0. 
Remark 7.10. The function λ 7→ λα is not operator monotone for any α > 1. It
suffices to consider α = 2. Let p = ( 1 0 ; 0 0 ) and q = (1/2)( 1 1 ; 1 1 ); these are
orthogonal projections. Clearly p 6 p + q. But it is not true that p2 6 (p + q)2 , since
(p + q)2 − p = q + pq + qp = (1/2)( 3 2 ; 2 1 )
has negative determinant and so cannot be a positive operator.
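The 2 × 2 example is easy to check numerically (a Python/numpy sketch):

import numpy as np

p = np.array([[1.0, 0.0], [0.0, 0.0]])
q = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])

print(np.linalg.eigvalsh((p + q) - p).min())             # approximately 0: p <= p + q
print(np.linalg.eigvalsh((p + q) @ (p + q) - p).min())   # negative: p^2 is not <= (p + q)^2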

Let A be a C ∗ -algebra (unital or not) and let Λ(A) be the set of positive a ∈ A
such that kak < 1. This is a partially ordered set.
Lemma 7.11. For any A, the partially ordered set Λ(A) is directed (Definition 1.5).
Proof. We must show that any two elements x, y ∈ Λ(A) have an upper bound. Let
a = (1 − x)−1 x and b = (1 − y)−1 y, which are positive elements of A (even though
the functional calculus takes place in the unitalization Ã). Let c = a + b and let
z = (1 + c)−1 c, so that z ∈ Λ(A). I claim z is an upper bound for x and y. Indeed
z = g1 (c) (in the notation of Lemma 7.9) and x = g1 (a); since a 6 c, operator
monotonicity gives x 6 z, and similarly y 6 z, as required. 


Lecture 8
Ideals and quotients

Definition 8.1. Let A be a C ∗ -algebra, possibly without unit. An approximate


unit for A is an increasing net {uλ } in A made up of positive elements of norm 6 1,
such that for each a ∈ A, lim uλ a = lim auλ = a.
For example, let A = C0 (R). Then the sequence of functions
un (x) = exp(−x2 /n) ∈ A
tends to 1 uniformly on compact sets, and so constitutes an approximate unit.
A very similar argument produces an approximate unit for any commutative C ∗ -
algebra C0 (X) (if A is separable, so that X is second countable, our approximate
unit will be a sequence; otherwise we must use a net).
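A quick numerical illustration of the example (Python/numpy; the function f below is
just an arbitrary choice of element of C0 (R), and the supremum is approximated on a
finite grid):

import numpy as np

x = np.linspace(-100, 100, 100001)
f = 1.0 / (1.0 + x**2)                    # an element of C_0(R)
for n in (1, 10, 100, 1000):
    u_n = np.exp(-x**2 / n)
    print(n, np.abs(u_n * f - f).max())   # sup|u_n f - f| decreases to 0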
Theorem 8.2. Every C ∗ -algebra A has an approximate unit. In fact, the set Λ(A)
of all positive elements u ∈ A having kuk < 1, ordered by the usual order relation,
is an approximate unit.
Note that Λ(A) is directed (by Lemma 7.11). It is called the canonical approximate
unit for A.
Proof. To prove the approximate unit property, we first note that for any positive
x ∈ A there is a sequence {un } in Λ(A) with k(1 − un )xk and kx(1 − un )k tending
to zero. Indeed, the closure of the polynomials in x is a commutative, separable C ∗ -
algebra and we may take {un } to be an approximate unit for it, as in the example
above. If u, u0 ∈ Λ(A) with u 6 u0 , then5

k(1 − u0 )xk2 = kx(1 − u0 )2 xk 6 kx(1 − u0 )xk


6 kx(1 − u)xk 6 kxkk(1 − u)xk.
Since Λ(A) is directed this allows us to infer, from the existence of the sequence
{un } in Λ(A) for which limn k(1 − un )xk = 0, that the limit limu∈Λ(A) k(1 − u)xk exists
and is zero also. This shows that Λ(A) acts as an approximate unit at least on
positive elements, and since x∗ x is positive for any x, another simple argument (see
the next proof) shows that it acts as an approximate unit on all elements. 
By an ideal in a C ∗ -algebra we mean a norm-closed, two-sided ideal. We could
also require ∗-closure but this fact is automatic:
Proposition 8.3. Any ideal J in a C ∗ -algebra A is closed under the adjoint oper-
ation (that is J = J ∗ ).
5It
is convenient here and elsewhere to write the approximate unit condition in the form
lim kx(1 − u)k = 0. Formally speaking this is an identity in the unitalization of A. However,
the elements x(1 − u) = x − xu in fact belong to A itself.


Proof. Let x ∈ J and let {uλ } be an approximate unit for the C ∗ -algebra J ∩ J ∗ .
Using the C ∗ -identity
kx∗ (1 − uλ )k2 = k(1 − uλ )xx∗ (1 − uλ )k 6 kxx∗ (1 − uλ )k
which tends to zero since xx∗ ∈ J ∩ J ∗ . It follows that x∗ uλ → x∗ , whence
x∗ ∈ J. 
It is an important fact that the quotient of a C ∗ -algebra by an ideal (which is
certainly a Banach algebra in the induced norm) is in fact a C ∗ -algebra. We need
approximate units to check this.
Theorem 8.4. Let A be a C ∗ -algebra and J an ideal in A. Then A/J is a C ∗ -algebra
with the induced norm and involution.
Proof. A/J is a Banach algebra with involution (note that we need J = J ∗ here).
Recall that the quotient norm is defined as follows:
k[a]k = inf{ka + xk : x ∈ J}.

Here [a] denotes the coset (in A/J) of the element a ∈ A. I claim that
(8.5)        k[a]k = limu ka(1 − u)k

where u runs over an approximate unit for J. Granted this it is plain sailing to the
C ∗ -identity, since we get
k[a∗ ][a]k = limu ka∗ a(1 − u)k > limu k(1 − u)a∗ a(1 − u)k
= limu ka(1 − u)k2 = k[a]k2 .

To establish the claim notice that certainly ka(1 − u)k > k[a]k for every a. On the
other hand, given ε > 0 there exists x ∈ J with ka − xk 6 k[a]k + ε. It is eventually
true that kx(1 − u)k 6 ε also and then
ka(1 − u)k 6 k(a − x)(1 − u)k + kx(1 − u)k 6 k[a]k + 2ε
by the triangle inequality. This establishes the claim. 
Remark 8.6. It is instructive to think about how this proof works where A = C(X)
and J is the ideal of functions that vanish on a closed subset F ⊆ X.
We can use this result to set up the fundamental ‘isomorphism theorems’ in the
category of C ∗ -algebras.
Corollary 8.7. Let α : A → B be any homomorphism of C ∗ -algebras. Then α is
contractive, and it can be factored as
A −→ A/ ker(α) −→ B
where the first map is a quotient map and the second is an isometric injection whose
range is a C ∗ -subalgebra of B.


Proof. By passing to the unitalization, we may assume that A and B are unital and
that α is a unital ∗-homomorphism. By Corollary 5.13, α is contractive, hence in
particular it is continuous and ker(α) is a closed ideal. The standard construction
from algebra now factors α through an injective ∗-homomorphism β : A/ ker(α) →
B. Since A/ ker(α) is a C ∗ -algebra from the previous result, Corollary 5.13 shows
that β is an isometry of A/ ker(α) onto a subalgebra of B. This subalgebra is
complete (by the isometry property), hence closed. This completes the proof. 
We can also prove the ‘diamond isomorphism theorem’.
Proposition 8.8. Let A be a C ∗ -algebra. Let J be an ideal in A and let B be a
C ∗ -subalgebra of A. Then B +J is a C ∗ -subalgebra of A and there is an isomorphism
B/(B ∩ J) = (B + J)/J.
Proof. The ∗-homomorphism B → A → A/J has range a C ∗ -subalgebra of A/J,
isomorphic to B/B ∩ J by the previous theorem. Thus the inverse image in A of
this C ∗ -subalgebra (namely B + J) is closed, and so is a C ∗ -algebra also. The result
now follows by the usual algebra argument. 
Remark 8.9. When B is another ideal J there is a useful addendum to this result:
for two ideals I and J in A, their intersection is equal to their product. To see this,
observe
(I ∩ J)2 ⊆ IJ ⊆ I ∩ J.
But for any C ∗ -algebra B, we have B = B 2 , by an easy functional calculus argument.
Definition 8.10. Let A be a C ∗ -algebra. A C ∗ -subalgebra B of A is called heredi-
tary if for positive elements a ∈ A, b ∈ B, the inequality 0 6 a 6 b implies a ∈ B.
Example 8.11. Let p ∈ A be a projection. The subalgebra B = pAp (the “corner”
defined by p) is hereditary. Indeed, suppose 0 6 a 6 b = pbp, and write a = x∗ x.
Then
kx(1 − p)k2 = k(1 − p)a(1 − p)k 6 k(1 − p)b(1 − p)k = 0,
so x(1 − p) = 0 and thus pap = a.
Proposition 8.12. Every closed ideal of a C ∗ -algebra is a hereditary subalgebra.
Proof. Let J C A be such an ideal, and let 0 6 a 6 b with b ∈ J. Let {uλ } be an
approximate unit for J. Then lim(1 − uλ )b = 0 and hence
lim k(1 − uλ )a(1 − uλ )k 6 lim k(1 − uλ )b(1 − uλ )k = 0.
It follows that a1/2 = lim uλ a1/2 ∈ J, so a ∈ J. 
A theorem of Effros says that every hereditary subalgebra is of the form L ∩ L∗ ,
where L is a closed left ideal, and that every such left ideal gives rise to a hereditary
subalgebra. This includes the above result.
Exercise 8.13. Let A be a C ∗ -algebra and let B = xAx where x is a positive
element of A. Prove that B is a hereditary subalgebra.


Definition 8.14. Let I be an ideal in a C ∗ -algebra A. We say that I is essential


in A if it has trivial annihilator: i.e., if xa = 0 for all x ∈ I implies a = 0.
Example 8.15. Let H be a Hilbert space. The compact operators K(H) form an
essential ideal in B(H). Indeed, let T ∈ B(H) be nonzero. Then T ξ 6= 0 for some
ξ ∈ H and therefore T P 6= 0 where P is the rank one operator v 7→ ξhξ, vi.
Lemma 8.16. An ideal I C A is essential if and only if I ∩ J 6= 0 for every nonzero
ideal J C A.
Proof. If I is essential and I ∩ J = 0, then IJ = 0 and hence J = 0. Conversely, if
I has nonzero intersection with every nontrivial ideal, then for every nonzero a ∈ A
there is a nonzero element in I · AaA, by Remark 8.9, which implies Ia 6= 0. 
Remark 8.17. The relationship between ideals and approximate units, which we
have used repeatedly, can be tightened up. Let J be an ideal in a C ∗ -algebra A.
An approximate unit {uλ } for J is called quasicentral (in A) if for every a ∈ A the
commutator
auλ − uλ a
tends to zero in norm. Arveson and, independently, Akemann-Pedersen introduced
this notion in 1977 and showed that quasicentral approximate units always exist.
We will prove this later on (Proposition 29.6) when we need these approximate units
in connection with nuclear C ∗ -algebras.
Exercise 8.18. Show that K(H) is the only nontrivial C ∗ -ideal in B(H). (Hint:
Suppose that an ideal I contains an element T which is not compact. Show that
there is f ∈ C0 (R) and a projection p with infinite dimension and codimension such
that p 6 f (T ∗ T ). Deduce that I contains p and hence that it contains all projections
with infinite dimension and codimension. But the identity is the sum of two such
projections.)


Lecture 9
Examples of C ∗ -algebras

In this lecture we are going to list a number of important examples of C ∗ -algebras


that you need to know about. We have already seen that every commutative C ∗ -
algebra is of the form C0 (X) for some locally compact Hausdorff space X, so we
will focus on examples with a more “noncommutative” flavor.

Finite-dimensional algebras
The obvious examples of finite-dimensional C ∗ -algebras are the algebras of bounded
operators on a finite-dimensional Hilbert space, in other words, the matrix algebras
Mn (C) of n × n matrices. The norm of a matrix A is of course the square root of
the largest eigenvalue of A∗ A, as follows from the relationship between norm and
spectral radius (5.8).
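This is easy to check numerically (a Python/numpy sketch with a random matrix):

import numpy as np

rng = np.random.default_rng(5)
a = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
print(np.allclose(np.linalg.norm(a, 2),                                # operator norm
                  np.sqrt(np.linalg.eigvalsh(a.conj().T @ a).max())))  # sqrt of largest eigenvalue of A^*A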
Lemma 9.1. Every finite-dimensional C ∗ -algebra is unital.
Proof. Let A be such a C ∗ -algebra. For any selfadjoint a ∈ A let a0 be the self-
adjoint projection f (a∗ a), where f (λ) equals 0 at λ = 0 and equals 1 for all λ > 0.
Finite-dimensionality implies that a has discrete spectrum, so f is continuous on
the spectrum of a and a0 is well defined. For any x ∈ A we have
kx(1 − (x∗ x)0 )k2 = k(1 − (x∗ x)0 )x∗ x(1 − (x∗ x)0 )k = 0,
so x = x(x∗ x)0 . Now suppose that p is a projection with p > (x∗ x)0 ; then
kx(1 − p)k2 = kx(1 − p)x∗ k 6 kx(1 − (x∗ x)0 )x∗ k = 0,
so xp = x. Choose then a vector space basis of A consisting of self-adjoint elements
xj and let p = (x21 + · · · + x2N )0 . By Lemma 7.9 (iii), p > (x2j )0 for each j so then
xj p = pxj = xj . Linearity now shows that p is a unit. 
Definition 9.2. A C ∗ -algebra is called simple if it has no non-trivial 2-sided (closed)
ideals.
Lemma 9.3. Every simple finite-dimensional C ∗ -algebra is a matrix algebra. 
We will prove this later, after we have developed some ideas about representations
of C ∗ -algebras. It follows that
Proposition 9.4. Every finite-dimensional C ∗ -algebra is isomorphic to a direct sum
of matrix algebras.
Proof. Let A be such an algebra and let B = Z(A) be the center of A, which is a
finite-dimensional commutative C ∗ -algebra. The Gelfand-Naimark theorem shows
that B ∼ = Ce1 ⊕ · · · ⊕ Cek where {e1 , . . . , ek } are mutually orthogonal central pro-
jections. Thus the Aj = ej A = Aej are subalgebras of A and A is their direct sum.
Moreover, I claim that each Aj is simple. For suppose not and let J C Aj be a non-
trivial ideal. As a finite-dimensional C ∗ -algebra, J is unital: its unit, considered as


an element of A, is a projection f which commutes with all a ∈ J and therefore


with all a ∈ A (since
f a = f (f a) = (f a)f = f (af ) = (af )f = af.)
But ej is a minimal central projection and f ej = f , so f = ej and J = Aj . The
result now follows from Lemma 9.3. 
So all finite-dimensional C ∗ -algebras can be described in a simple way. The ∗-
homomorphisms between these objects also have a rather simple description.
Lemma 9.5. Let k and n be integers. For every d such that dk 6 n there is
a ∗-homomorphism from Mk (C) to Mn (C), embedding Mk (C) d times along the
diagonal in Mn (C); and every ∗-homomorphism between these algebras is conjugate,
by a unitary in Mn (C), to one of these.
Proof. Let e be a rank one projection in A = Mk (C). If ρ : A → Mn (C) is a homo-
morphism, then ρ(e) is a projection of some rank d. Let ξ1 , . . . , ξd be an orthonormal
basis for its range. Then the subspaces ρ(A)ξj of Cn are mutually orthogonal irre-
ducible representations of A, which (as we will later see) are isomorphic to copies of
the standard representation Ck . 
It follows that any ∗-homomorphism between finite-dimensional C ∗ -algebras A
and B is specified (up to unitary equivalence in B) by giving a list of the multiplicities
d of the embeddings of the matrix components of A in the matrix components of
B. This information is traditionally specified by means of a Bratteli diagram. For
example, the diagram

    1 ----> 2
      \
       \
    2 ====> 5

(a single edge from the vertex labelled 1 to each of the vertices labelled 2 and 5, and
a double edge from the vertex labelled 2 to the vertex labelled 5) describes the
homomorphism from C ⊕ M2 (C) to M2 (C) ⊕ M5 (C) that sends a ⊕ ( b c ; d e ) to

    ( a 0 )        ( a 0 0 0 0 )
    ( 0 0 )   ⊕    ( 0 b c 0 0 )
                   ( 0 d e 0 0 )
                   ( 0 0 0 b c )
                   ( 0 0 0 d e ).
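To make the embedding completely concrete, here is a Python/numpy sketch of it
(illustration only; a, and the 2 × 2 matrix with entries b, c, d, e, are as in the
display above), together with a check that the map is multiplicative and ∗-preserving
on a couple of random elements:

import numpy as np

def rho(a, m):
    # a in C, m a 2x2 matrix; return the images in M_2(C) and M_5(C)
    first = np.zeros((2, 2), dtype=complex)
    first[0, 0] = a                           # C sits once inside M_2
    second = np.zeros((5, 5), dtype=complex)
    second[0, 0] = a                          # C once inside M_5 ...
    second[1:3, 1:3] = m                      # ... and M_2(C) twice
    second[3:5, 3:5] = m
    return first, second

rng = np.random.default_rng(6)
a1 = rng.standard_normal() + 1j * rng.standard_normal()
a2 = rng.standard_normal()
m1 = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
m2 = rng.standard_normal((2, 2))

x1, y1 = rho(a1, m1)
x2, y2 = rho(a2, m2)
xp, yp = rho(a1 * a2, m1 @ m2)                # image of the product
xs, ys = rho(np.conj(a1), m1.conj().T)        # image of the adjoint
print(np.allclose(x1 @ x2, xp), np.allclose(y1 @ y2, yp))
print(np.allclose(x1.conj().T, xs), np.allclose(y1.conj().T, ys))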

Commutants and essential commutants


Let A ⊆ B(H) be a C∗-subalgebra. The commutant A0 of A is the set
A0 = {T ∈ B(H) : T a = aT ∀a ∈ A}.
It is easily seen to be a C ∗ -algebra. In fact, as we will later see, A0 is an example of
a von Neumann algebra, an object which has a more “measure theoretic” and less
“topological” structure than a C ∗ -algebra does.


Example 9.6. One can see this measure theory/topology contrast even in the com-
mutative case. Let X be a compact metric space (the circle for example) and let µ
be a Radon measure on X. Every continuous complex-valued function on X acts
on H = L2 (X, µ) as a multiplication operator, and therefore we obtain an embed-
ding6 of the C ∗ -algebra A into B(H). The commutant A0 of this representation
of A is precisely the algebra L∞ (X, µ) of essentially bounded measurable functions
on X (proving this is a good exercise). This algebra is almost independent of the
topological structure of X, it only reflects the measure theory.
In the same situation, the essential commutant of A is the algebra
D(A) = {T ∈ B(H) : T a − aT ∈ K(H) ∀a ∈ A}.
In contrast to the von Neumann commutant, we will see that this algebra en-
codes significant topological information about A, even in the commutative example
that we mentioned above. For example, let X = S 1 as above, A = C(S 1 ), and
H = L2 (S 1 ). The essential commutant of A contains in particular the algebra of
pseudodifferential operators of order zero on the circle, closely related to analysis
and PDE.
Since D(A) ⊇ K by definition, we can define the reduced commutant to be the

C -algebra quotient
QD(A) = D(A)/K.
An important example is the Calkin algebra
Q(H) = B(H)/K(H),
which is the reduced commutant of the identity. This arises in index theory: by
Atkinson’s theorem 3.8, an operator T ∈ B(H) is Fredholm precisely when it maps
to an invertible in Q(H).

Group algebras
Let G be a locally compact group. To keep things simple, we will assume for now
that G is a discrete group; but most of what we say extends easily to unimodular
locally compact groups (just by replacing sums over G with integrals with respect to
Haar measure); and indeed, with a bit of extra care about left versus right, to all locally
compact groups whatsoever.
The group algebra C[G] is the vector space with basis G, given a multiplication
operation which linearly extends the group multiplication on basis elements. To put
it another way, C[G] consists of finitely supported functions f : G → C equipped
with the convolution product
(9.7)        (f1 ∗ f2 )(g) = Σ_{g1 g2 =g} f1 (g1 )f2 (g2 ) = Σ_{h∈G} f1 (h)f2 (h−1 g).

6Provided µ is strictly positive on each open set, as Lebesgue measure is for example.


This algebra also has an involution, which is the antilinear extension of the map
g 7→ g −1 , or in other words
f ∗ (g) = f¯(g −1 ).
This algebra has a faithful representation on a Hilbert space. Namely, let H = `2 (G)
and define the convolution of f1 ∈ C[G] and f2 ∈ H by the same formula as in
Equation 9.7 above. Then f1 ∗ f2 belongs to H also. The operator
Tf1 : f2 7→ f1 ∗ f2
is bounded on H, and the map f 7→ Tf is an injective ∗-homomorphism from C[G]
to B(`2 (G)), which we may use to regard C[G] as a ∗-subalgebra of B(`2 (G)). The
closure of C[G] inside B(`2 (G)) is therefore a C ∗ -algebra.
Definition 9.8. The C ∗ -algebra so defined is called the reduced group C ∗ -algebra
of G, denoted Cr∗ (G).
Example 9.9. Let G be the group Z. Then the group algebra C[G] can be identified
with the algebra C[z, z −1 ] of finite Laurent series, or trigonometric polynomials
(think of z = eiθ ). Similarly, Parseval’s theorem from Fourier analysis identifies
H = `2 (Z) with the space L2 (S 1 ) of square integrable functions on the circle, on
which the trigonometric polynomials operate by pointwise multiplication. The norm
of such an operator is the supremum norm of the multiplying function. Thus Cr∗ (Z)
is the closure, in the sup norm, of the trigonometric polynomials on S 1 . By the
Stone-Weierstrass theorem, this is the C ∗ -algebra C(S 1 ) of continuous functions
on the circle. As this example suggests, it is easy to see that if the group G is
commutative, so is the C ∗ -algebra Cr∗ (G).
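As a numerical illustration of this example, here is a Python/numpy sketch; the
Laurent polynomial below is an arbitrary choice, and the truncation of the convolution
operator to `2 ({−N, . . . , N }) (a banded Toeplitz matrix) only approximates the true
norm on `2 (Z) from below:

import numpy as np

c = {-1: 0.5, 0: 1.0, 2: -0.25j}            # coefficients c_k of f(z) = sum c_k z^k

theta = np.linspace(0, 2 * np.pi, 10001)
sup_norm = np.abs(sum(ck * np.exp(1j * k * theta) for k, ck in c.items())).max()

N = 400
size = 2 * N + 1
T = np.zeros((size, size), dtype=complex)
for k, ck in c.items():
    T += ck * np.eye(size, k=-k)            # entry (i, j) of T_f is c_{i-j}
print(sup_norm, np.linalg.norm(T, 2))       # the two numbers should be close for large N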
Remark 9.10. The reason for the terminology “reduced” is the close association of
our construction with one particular representation of G, the regular representation
on `2 (G). In the classical representation theory of finite or compact groups, the
regular representation has a universal property (every irreducible representation is
a subrepresentation of it). But this is not true for all groups, even in a weak sense.
When it is not, the full scope of the representation-theory of G is not encompassed
by Cr∗ (G); one needs to use a larger construction, the so-called maximal C ∗ -algebra
of G. We will talk about this in much more detail later.

The irrational rotation algebra


Consider again Example 9.9. The generator of Z corresponds to a unitary operator
U on H = L2 (S 1 ) (the operator of multiplication by z), and Cr∗ (Z) = C(S 1 ) is the
C ∗ -algebra of operators on H generated by U .
Now there are other natural unitary operators on H. Fix an irrational number θ,
and let V be the unitary operator that rotates the circle through an angle 2πθ: in
other words
V f (z) = f (e−2πiθ z).
Then U and V do not commute; in fact, U V = e2πiθ V U . Any monomial in U and
V can be rearranged (using the above relation) to the form U k V l times a constant;


it follows that the linear combinations of such monomials form a ∗-subalgebra of


B(H), whose closure is called the irrational rotation algebra associated to θ, and
denoted by Aθ . This algebra has been an important example in the development
of noncommutative geometry. For example, the discovery by M. Rieffel (in the
1970s) that Aθ contains nontrivial projections gave an important impetus to C ∗ -
algebra K-theory. (Notice that the subalgebra generated by U or by V individually
does not contain nontrivial projections — such a projection would be a nontrivial
{0, 1}-valued continuous function on the circle, which can’t exist because the circle
is connected.)
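There is a finite-dimensional shadow of this relation which is easy to experiment
with: for rational θ = p/q the q × q ‘clock’ and ‘shift’ matrices satisfy the same
commutation relation exactly (no finite-dimensional pair can do so for irrational θ).
A Python/numpy sketch, with an arbitrary choice of p/q:

import numpy as np

p_num, q_den = 3, 7
omega = np.exp(2j * np.pi * p_num / q_den)

U = np.diag(omega ** np.arange(q_den))       # "clock" matrix
V = np.roll(np.eye(q_den), 1, axis=0)        # cyclic "shift" matrix

print(np.allclose(U @ V, omega * (V @ U)))   # U V = e^{2 pi i theta} V U
print(np.allclose(U @ U.conj().T, np.eye(q_den)),
      np.allclose(V @ V.conj().T, np.eye(q_den)))   # both unitary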
Example 9.11. Let X be a locally compact metric space, which for simplicity
we assume is uniformly discrete, that is, d(x, x0 ) > 1 if x 6= x0 . Generalizing the
irrational rotation algebra, we can define an algebra of operators on H = `2 (X) as
follows:
• A bounded function f ∈ `∞ (X) defines a multiplication operator Mf on H;
• A partial translation of X is a bijection v from a subset S ⊆ X to v(S) ⊆ X
having the property that
sup{d(s, v(s)) : s ∈ S} < ∞.
Each such partial translation acts on basis elements of H to generate a partial
isometry Vv : H → H.
The multipliers Mf and partial isometries Vv do not commute. Together they gener-
ate a subalgebra C ∗ (X) of B(`2 (X)) which reflects the coarse geometry of X. This
is called the translation algebra (or, by some people, the Roe algebra) of X.

Generators and relations


We’ve seen above that C(S 1 ) = C ∗ (Z) is generated by a single unitary element
U . In fact, it has a universal property: if A is a C ∗ -algebra generated by a unitary
u ∈ A, then there is a unique ∗-homomorphism C(S 1 ) → A sending U to u. (Reason:
The spectrum of u is contained within the unit circle, since both u and u−1 have
norm bounded by 1; the unique ∗-homomorphism is just the functional calculus
map f 7→ f (u).) This might lead the optimistic student to assume that there is a
general procedure for defining C ∗ -algebras by generators (e.g. u) and relations (e.g.
uu∗ = u∗ u = 1).
This does not work in general, and the reason is that there are no free objects
in the category of C ∗ -algebras. Suppose for instance that there was such a thing
as the C ∗ -algebra X freely generated by an element x. Then, for any a in any
C ∗ -algebra A, there would exist a ∗-homomorphism X → A sending x to a. Since
C ∗ -homomorphisms are contractive (5.13), we would have kak 6 kxk whatever a
was. This is an obvious contradiction.
So one must proceed carefully with this idea of generators and relations. General
theories can be given (e.g. Terry Loring, C ∗ -algebra relations, Math. Scand. 107,
43–72) but we will give ad hoc constructions when we discuss these notions: for


example, we will construct the Toeplitz algebra, which is the universal algebra gen-
erated by a single isometry in roughly the same sense that C(S 1 ) is the universal
algebra generated by a single unitary. More generally we shall make the following
definition:
Definition 9.12. Let n = 2, 3, 4, . . .. The Cuntz algebra On is the universal C ∗ -
algebra generated by n isometries s1 , . . . , sn subject to the single relation
Σ_{i=1}^{n} si s∗i = 1.
For more about when one can and can’t make such definitions, see Remark 20.11.


Lecture 10
Hilbert modules and multipliers

Let A be a C ∗ -algebra. A Hilbert module over A is a right A-module M equipped


with an A-valued C-sesquilinear ‘inner product’
h·, ·i : M × M → A
satisfying the following axioms analogous to the usual ones for a Hilbert space:
(i) hx, ya + y 0 a0 i = hx, yia + hx, y 0 ia0 , for all x, y, y 0 ∈ M and a, a0 ∈ A;
(ii) hx, yi = hy, xi∗ ;
(iii) hx, xi > 0 (the inequality in terms of the order on Asa );
(iv) If hx, xi = 0 then x = 0;
(v) M is complete in the norm kxkM = (khx, xikA )1/2 (we will prove in a moment that,
given (i) through (iv), this really is a norm).
Obviously, A is a Hilbert module over itself, with inner product hx, yi = x∗ y. The
basic commutative examples are sections of bundles. Indeed, let A = C(X) and let
V be a Hermitian vector bundle over the compact space X. Let M be the space of
continuous sections of V , and let hi : M × M → A be the fiberwise inner product.
Then M is a Hilbert A-module. Later, we will see that all finitely generated Hilbert
A-modules are of this form.
Remark 10.1. General Hilbert modules over C(X) can be thought of as made up of
sections of ‘generalized bundles’ over X. Indeed, suppose that M is a Hilbert C(X)-
module. Then, for each fixed p ∈ X, hx, yip = hx, yi(p) gives a positive semidefinite
sesquilinear form on M (a ‘pre-inner-product’) and completing we obtain a Hilbert
space Mp . Thus p 7→ Mp assigns to each point of X a Hilbert space and M provides
a space of ‘continuous sections’ of p 7→ Mp , whose pointwise inner products are
continuous and whose restrictions to each fiber are dense in that fiber. This data
is said to describe a continuous field of Hilbert spaces. Conversely, the sections of a
continuous field of Hilbert spaces form a Hilbert C(X)-module. For an example of
such a continuous field that is not a vector bundle, consider the space M = C0 (0, 1]
as a Hilbert module over A = C[0, 1]. This space is the space of sections of a field of
Hilbert spaces with fiber 0 over 0 and C over the other points of the interval. Note
that M is not finitely generated over A.
Return to the proof that the norm kxkM defined above really is a norm. As in
the classical case this will follow from a Cauchy–Schwarz inequality.
Lemma 10.2. Let M be a right A-module satisfying (i)–(iii) above and suppose that
x, y ∈ M . Then we have the inequality (in A)
hx, yi∗ hx, yi 6 khx, xikhy, yi.
Consequently, with the definition of k · kM in (v) above we have
khx, yikA 6 kxkM kykM ,


and it follows easily that k · kM is indeed a norm.


Proof. Imitate the usual proof, as follows: consider the inequalities in Asa
0 6 hxa + y, xa + yi
= a∗ hx, xia + a∗ hx, yi + hy, xia + hy, yi
6 khx, xika∗ a + a∗ hx, yi + hy, xia + hy, yi
where we have used Proposition 7.8. Putting a = λhx, yi with λ ∈ R, we get
(λ2 khx, xik + 2λ) hx, yi∗ hx, yi + hy, yi > 0.

If hx, xi = 0, then taking λ large and negative in this inequality we see that hx, yi = 0
also. Otherwise, put λ = −khx, xik−1 to get Cauchy-Schwarz. 
Remark 10.3. Note that the proof of Cauchy-Schwarz only requires (i)–(iii). It
follows that given an A-module M satisfying (i)–(iii), the set N of vectors of norm
zero forms a submodule, and dividing by that N we obtain a module M/N satisfying
(i)–(iv). Then by completing we may obtain a Hilbert module.
Example 10.4. An important example of a Hilbert A-module is the ‘universal’
module HA which is comprised of sequences {an } of elements of A such that Σ a∗n an
converges in A. Using the Cauchy-Schwarz inequality it is not hard to show that
this is a Hilbert module. Note the convergence condition carefully however: it is
not equivalent to say that Σ kan k2 converges (a series of positive elements of a C ∗ -
algebra can converge in norm without converging absolutely; for instance the series
Σ (1/n) en , where en is the orthogonal projection onto the n’th basis vector, converges
in norm but not absolutely in K(`2 )).
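The caveat is easy to see numerically (a Python/numpy sketch; since the operators
involved are diagonal, we only manipulate the diagonal coefficients, the norm of a
diagonal operator being the supremum of the absolute values of its entries):

import numpy as np

N = 100000
coeffs = 1.0 / np.arange(1, N + 1)           # the n-th term of the series is (1/n) e_n
for start in (10, 100, 1000, 10000):
    tail = coeffs[start - 1:]
    # the norm of the tail sum_{n >= start} (1/n) e_n is its largest coefficient,
    # 1/start -> 0, so the series converges in norm; but the sum of the norms of
    # the terms keeps growing (the harmonic series diverges)
    print(start, tail.max(), tail.sum())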
Lemma 10.5. Let M be a Hilbert A-module. Then M A is dense in M . In fact
M hM, M i is dense in M , where hM, M i denotes the subspace of A spanned by inner
products.
Notice that hM, M i need not be dense in A (ideals give obvious examples). We
say that M is full if this is so.
Proof. The closure of hM, M i is a C ∗ -ideal J in A. Let uα be an approximate unit
for J. Then for x ∈ M put xα = xuα . We have
hx − xα , x − xα i = hx, xi − uα hx, xi − hx, xiuα + uα hx, xiuα → 0
and thus xα → x, giving the result. 
In the theory of Hilbert space a critical role is played by the existence of orthogonal
complements to closed subspaces. If M is a Hilbert space (or Hilbert module) and
N a closed subspace, then the orthogonal subspace is
N ⊥ = {y ∈ M : hx, yi = 0 ∀x ∈ N }.
The Projection Theorem states that if M is a Hilbert space then N ⊥ is complemen-
tary to N ; each x ∈ M can be decomposed uniquely into a component in N and a


component in N ⊥ . The Projection Theorem however fails for Hilbert modules. For
instance if A = C[0, 1], M = A, and N = C0 (0, 1] considered as a Hilbert A-module,
then N ⊥ = {0} and we have no direct sum decomposition of M into N ⊕ N ⊥ . This
is the fundamental difficulty of Hilbert module theory.
Definition 10.6. Let M and N be Hilbert A-modules. An adjointable map from
M to N is a linear map T : M → N for which there exists an adjoint T ∗ : N → M ,
necessarily unique, satisfying
hT x, yi = hx, T ∗ yi, x ∈ M, y ∈ N.
The set of adjointable maps will be denoted B(M, N ), or B(M ) if M = N .
Because of the failure of the Projection Theorem, it need not be the case that a
norm bounded linear map M → N is adjointable. On the other hand, an adjointable
map must be bounded (a Uniform Boundedness Principle argument).
Proposition 10.7. Let M be a Hilbert A-module. Then B(M ) is a C ∗ -algebra.
Proof. B(M ) is clearly a normed ∗-algebra. The C ∗ -identity can be proved by
mimicking the Hilbert space proof: for x ∈ M , kxk = 1,
kT xk2 = khT x, T xik = khx, T ∗ T xik 6 kT ∗ T k.
The supremum of the left side over all possible x is kT k2 , giving the result. It
follows as usual that kT k = kT ∗ k for all T ∈ B(M ), and so we see that B(M ) is a
closed subspace of the Banach algebra of all bounded C-linear maps M → M , and
therefore is complete. 
Let E and F be Hilbert A-modules. A rank one operator from E to F is a linear
map E → F of the form
θx,y (z) = xhy, zi, x ∈ F, y ∈ E.
It is adjointable, with adjoint θy,x . The closed linear span of rank one operators is
denoted K(E, F ) and called the space of compact operators from E to F . (Warning:
They need not be compact in the sense of Banach space theory!) If E = F , the
subspace K(E) of compact operators in B(E) is a C ∗ -ideal, in fact an essential ideal
(same proof as Example 8.15).
Lemma 10.8. Let M be a Hilbert A-module, and let uα be an approximate unit for
the C ∗ -algebra K(M ). Then uα (ξ) → ξ for all ξ ∈ M .
Proof. By lemma 10.5 it suffices to show this assuming that ξ ∈ M hM, M i. But if
ξ = xhy, zi then
uα ξ = (uα θx,y )(z) → θx,y (z) = ξ
as required. 
Example 10.9. If A is a unital C ∗ -algebra, and we consider A as a Hilbert module
over itself, then it is easy to see that B(A) = K(A) = A.


Example 10.10. Now suppose that A is a non-unital C ∗ -algebra. It is still the case
that A = K(A); the isomorphism sends a ∈ A to the operator of left multiplication
La (a0 ) = aa0 on the Hilbert module A. It is clear that La is right A-linear; to see
that it is compact, choose an approximate unit {uλ } for A and write La = lim θa,uλ .
In the non-unital case the algebra B(A) is a unital C ∗ -algebra containing A =
K(A) as an ideal. It is called the multiplier algebra of A, and denoted M (A); by the
preceding remarks, it contains A as an essential ideal.
Proposition 10.11. The multiplier algebra M (A) of a non-unital C ∗ -algebra A has
a universal property: if A is also embedded as an ideal in a C ∗ -algebra B, then there
is a unique ∗-homomorphism B → M (A) that restricts to the identity on A; and
this extension is injective if A is essential in B.
Proof. Left multiplication Lb by b ∈ B defines an adjointable Hilbert A-module map
A → A (the adjoint is left multiplication by b∗ ), that is, an element of M (A). If A
is essential in B then Lb = 0 implies b = 0, which is to say that the induced map
B → M (A) is injective. 
Example 10.12. Let A = C0 (X) where X is a locally compact Hausdorff space.
Then M (A) = Cb (X), the algebra of all bounded continuous functions on X. In-
deed, it is clear that A is an essential ideal in Cb (X); on the other hand, if T is a
Hilbert module map A → A, then using linearity one sees that for each x ∈ X, the
quantity f (x) = (T ux )(x) is independent of the choice of ux ∈ A having ux (x) = 1;
the function f is continuous and bounded, and T is given by multiplication by f .
Thus the construction of the multiplier algebra corresponds to the Stone-Cech com-
pactification in the commutative case. (Contrast the process of unitalization, which
corresponds to the one-point compactification.)


Lecture 11
Review of locally convex spaces

The key objects of study in functional analysis are Banach spaces (of which C ∗ -
algebras are a particular case, of course). However, more general topological vector
space structures arise naturally in the study of C ∗ -algebras. In this lecture we’ll do
a quick review.
Definition 11.1. A topological vector space is a vector space E over the field F = R
or C equipped with a topology in which the vector space operations (addition and
scalar multiplication) are continuous as maps E × E → E, F × E → E.
Topological vector spaces form a category in which the morphisms are the con-
tinuous linear maps. Many authors (e.g. Rudin) restrict attention to Hausdorff
topological vector spaces, but it is occasionally useful to consider non-Hausdorff
examples.
Exercise 11.2. A TVS is Hausdorff if and only if the origin {0} is a closed subset.
There is a natural notion of Cauchy net in a topological vector space (a net {xλ }
is Cauchy if xλ − xµ → 0 as λ, µ → ∞) and therefore we may speak of a complete
TVS (one in which every Cauchy net converges).
Definition 11.3. A seminorm on a vector space E is a function p : E → R+ that
satisfies p(λx) = |λ|p(x) for all λ ∈ F, and p(x + y) 6 p(x) + p(y) (the triangle
inequality). It is a norm if in addition p(x) = 0 implies x = 0.
Many examples of topological vector spaces arise from the following construction.
Let {pα }α∈A be a family of seminorms on a vector space E. We can define a topology
on E by calling U ⊆ E open if for every x0 ∈ U there exist α1 , . . . , αn ∈ A and real
ε1 , . . . , εn > 0 such that
{x ∈ E : pαi (x − x0 ) < εi , (i = 1, . . . , n)} ⊆ U.
This topology makes E into a TVS; it is called the locally convex topology defined
by the seminorms {pα }.
Exercise 11.4. Let xj be a net in a LCTVS whose topology is defined by a family
of seminorms {pα }. Then xj → x if and only if, for each α, pα (xj − x) → 0. What
condition on the family of seminorms will ensure that E is Hausdorff?
There is a geometric reason for the “locally convex” terminology (namely, a TVS
is locally convex iff there is a basis for the neighborhoods of 0 comprised of convex
sets). To see this, suppose that p is a seminorm on the vector space E. Then the
set A = {x ∈ E : p(x) 6 1} has the following properties:
• A is convex : for any finite subset {a1 , . . . , an } of A, and any positive real
numbers λ1 , . . . , λn with Σ λi = 1, the convex combination Σ λi ai belongs
to A.
• A is balanced : if a ∈ A and λ ∈ F with |λ| 6 1, then λa ∈ A also.


• A is absorbing: for any x ∈ E there is λ > 0 such that λx ∈ A.


Regarding this terminology, note
Lemma 11.5. In a TVS every 0-neighborhood is absorbing. Every 0-neighborhood
contains a balanced 0-neighborhood; every convex 0-neighborhood contains a convex,
balanced 0-neighborhood.
Proof. These facts follow from the continuity of (scalar) multiplication. For instance,
let U be any 0-neighborhood. By continuity, there exist ε > 0 and a 0-neighborhood
W such that λW ⊆ U for all |λ| < ε. The union ∪_{|λ|<ε} λW is then a balanced 0-
neighborhood contained in U . 
We have seen that each seminorm gives a convex, balanced, absorbing 0-neighborhood.
To reverse this process, let A be a convex, balanced, absorbing subset of a vector
space E. Define µA : E → R+ (the Minkowski functional of A) by
µA (x) = inf{λ ∈ R+ : x ∈ λA}.
It is not hard to prove that µA is a seminorm on E.
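As a small computational illustration (a Python sketch with a hypothetical convex,
balanced, absorbing set in R2 — an ellipse — and a bisection routine invented for the
example), one can compute µA numerically and check that for the ellipse it recovers
the corresponding norm:

import numpy as np

def minkowski(x, contains, hi=1e6, tol=1e-10):
    # inf{ lam > 0 : x in lam * A }, where `contains` tests membership in A
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if contains(x / mid):
            hi = mid
        else:
            lo = mid
    return hi

# A = {(x, y) : x^2/4 + y^2 <= 1} is convex, balanced and absorbing in R^2
contains = lambda v: v[0]**2 / 4 + v[1]**2 <= 1
v = np.array([3.0, 1.5])
print(minkowski(v, contains), np.sqrt(v[0]**2 / 4 + v[1]**2))   # the two values agree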
It is natural (and traditional!) to begin a functional analysis course by under-
standing the finite-dimensional examples. It turns out that there aren’t very many.
Proposition 11.6. Let E be a finite-dimensional Hausdorff TVS. For any basis
{v1 , . . . , vn } of E the map ϕ : Fn → E defined by ϕ(λ1 , . . . , λn ) = Σ λi vi is an
isomorphism of topological vector spaces.
Proof. Clearly ϕ is bijective and continuous; we must prove that its inverse is con-
tinuous. Equip Fn with the standard Euclidean norm, and let B be the open unit
ball in that norm, S the unit sphere; it suffices to show that ϕ(B) contains a neigh-
borhood of 0 (for the given topology on E). Consider the compact set ϕ(S) ⊆ E;
because E is Hausdorff, for each x ∈ ϕ(S) there exist a neighborhood Ux of x and
a neighborhood Wx of 0 that do not meet. By compactness, there are finitely many
points x1 , . . . , xm such that Ux1 , . . . , Uxm cover S.
Let W be a balanced 0-neighborhood contained in Wx1 ∩ · · · ∩ Wxm . I claim that
W ⊆ ϕ(B). Suppose not. Then there exists v ∈ E with kvk > 1 and ϕ(v) ∈
W . Then ϕ(v/kvk) belongs both to W and to ϕ(S), which is impossible. This
contradiction proves the result. 
Proposition 11.7. A linear functional ϕ : E → F on a topological vector space E
is continuous if and only if its kernel is closed.
Proof. By definition the kernel K = ker(ϕ) = ϕ−1 {0} is the inverse image of a closed
set. So, if ϕ is continuous, then K is closed. Conversely suppose that K is closed.
Then factor ϕ (algebraically) as
E −→ E/K −→ F
The first arrow is the canonical quotient map, so is continuous; the second is an
isomorphism of finite-dimensional Hausdorff TVS, so it is continuous by proposi-
tion 11.6. 


Lemma 11.8. A finite dimensional subspace of a Hausdorff TVS is closed.


Proof. A finite-dimensional subspace is linearly homeomorphic to Fn ; hence it is
complete, and therefore closed. 
We need to develop some basic results about convexity and linear functionals. Let
E be a real vector space.
Definition 11.9. A map p : E → R is sublinear if p(x + y) 6 p(x) + p(y) and
p(λx) = λp(x) for λ > 0.
Obviously, a seminorm is an example of a sublinear map. There are other ex-
amples, e.g. the map f 7→ sup f on the space of real-valued continuous functions
on [0, 1]. The Minkowski functional of a convex (but not necessarily balanced),
absorbing set is sublinear.
Theorem 11.10. (Hahn-Banach 1) Let E be a real vector space equipped with
a sublinear functional p. Let F 6 E be a subspace and let ϕ : F → R be a linear
functional with ϕ(x) 6 p(x) for all x ∈ F . Then ϕ can be extended to a linear
functional ψ : E → R with ψ(x) 6 p(x) for all x ∈ E.
We emphasize that no topology is involved in this statement. “Extended” means
that ψ restricts to ϕ on F ; that is, the triangle formed by the inclusion F → E and
the functionals ϕ : F → R, ψ : E → R must commute.
Proof. Let C be the collection of pairs (G, χ), where F 6 G 6 E, and χ : G → R
is a linear functional extending ϕ and having χ 6 p. Order this set by saying that
(G1 , χ1 ) 6 (G2 , χ2 ) if G1 6 G2 and χ2 extends χ1 .
In this partially ordered set, each chain has an upper bound (the union). So by
Zorn’s Lemma, there is a maximal element (G, χ) in C. We claim that this maximal
G is the whole of E.
Suppose not. Choose x1 ∈ E \ G and try to extend χ to a functional χ1 on
G1 = hG, x1 i. Such an extension is determined by α = χ1 (x1 ); in fact, it is given by
the formula
χ1 (x + λx1 ) = χ(x) + λα (x ∈ G);
we need to pick α so that the condition χ1 6 p is satisfied.
Taking λ = ±1 it is evident that we must have, for all x ∈ G,
χ(x) + α 6 p(x + x1 ), χ(x) − α 6 p(x − x1 )
and conversely a scaling argument shows that this would be sufficient. Can we
choose α to make this condition true? Equivalently, can we choose α so that
χ(y) − p(y − x1 ) 6 α 6 p(x + x1 ) − χ(x) ∀x, y ∈ G?


Such an α can be found if and only if


χ(y) − p(y − x1 ) 6 p(x + x1 ) − χ(x) ∀x, y ∈ G,
which is equivalent to
χ(x) + χ(y) 6 p(x + x1 ) + p(y − x1 ).
But we have
χ(x) + χ(y) = χ(x + y) 6 p(x + y) 6 p(x + x1 ) + p(y − x1 ),
so the latter condition is true and we have shown that χ can be extended as required.
This contradicts the supposed maximality of χ so we conclude that E = G and the
theorem is proved. 
Theorem 11.11. (Hahn-Banach 2) Let E be a (real or complex) vector space
and let p be a seminorm on E. Let F 6 E and let ϕ : F → F be a linear functional
such that |ϕ(x)| 6 p(x) for all x ∈ F . Then ϕ can be extended to a linear functional
on E satisfying the same bound.
Proof. (Real case) Apply Theorem 11.10 to obtain an extension ψ with ψ 6 p. We
have ψ(x) 6 p(x) and −ψ(x) = ψ(−x) 6 p(−x) = p(x), so |ψ| 6 p as required.
(Complex case) Consider Fr , Er , the underlying real vector spaces of F and E;
and let ϕr = Re ϕ : Fr → R. Clearly |ϕr | 6 p; so there is an extension ψr : Er → R
with ψr 6 p. We can always find a complex linear ψ with Re ψ = ψr , namely
ψ(x) = ψr (x) − iψr (ix)
and ψ will extend ϕ. Given x ∈ E there is θ such that eiθ ψ(x) ∈ R+ ; then
|ψ(x)| = |ψ(eiθ x)| = ψr (eiθ x) 6 p(eiθ x) = p(x)
as required. 
Corollary 11.12. Let E be a LCTVS, F a subspace of E. Any continuous linear
functional on F extends to one on E. 
The conclusion may be expressed as follows: the ground field is an injective object
in the category of (real or complex) LCTVS.
Definition 11.13. The dual space E ∗ of a TVS E is the space of continuous linear
functionals on E.
Proposition 11.14. Let S be any subset of a Hausdorff LCTVS E. The closed
linear span of S (the closure of the set of finite linear combinations of members of
S) is equal to the intersection of the kernels of all ϕ ∈ E ∗ that annihilate S.
Proof. Let F be the closed linear span of S. If ϕ ∈ E ∗ annihilates S, then ker ϕ is a
closed subspace containing S, hence containing F . Conversely, suppose that x0 ∈ /F
and let F0 = hF, x0 i. The quotient space F0 /F is one-dimensional, spanned by the
equivalence class of x0 , so (by Proposition 11.6) there is a linear functional
F0 → F0 /F → F


that annihilates F and maps x0 to 1. Extend this to ϕ ∈ E ∗ and we have F ⊆ ker ϕ,


ϕ(x0 ) = 1. This completes the proof. 
Corollary 11.15. The dual space of a Hausdorff LCTVS E separates points of E;
that is, given x 6= 0 in E, there exists ϕ ∈ E ∗ with ϕ(x) 6= 0.
Proof. Apply the previous proposition to S = {0}. 
The next equivalent form of the Hahn-Banach theorem is stated geometrically.
Theorem 11.16. (Hahn-Banach 3) Let E be a real Hausdorff LCTVS, and let
A and B be disjoint closed convex subsets one of which (say A) is compact. Then
there is ϕ ∈ E ∗ such that
sup{ϕ(a) : a ∈ A} < inf{ϕ(b) : b ∈ B}.
In other words, A and B are separated by the hyperplane {x : ϕ(x) = λ}, for any λ
strictly between the sup and the inf above.
Proof. A standard compactness argument shows that there is a 0-neighborhood V
such that A0 = A + V does not meet B. Since E is locally convex, we may and shall
assume that V is convex. Then A0 is convex also.
Pick a0 ∈ A, b0 ∈ B, let x0 = b0 − a0 and let C = A0 − B + x0 ; C is a convex
0-neighborhood. Let p be its Minkowski functional. Convexity implies that p is
sublinear. Since x0 ∈
/ C, p(x0 ) > 1. Using Theorem 11.10 we find that there is a
linear map ϕ : E → R with ϕ 6 p and ϕ(x0 ) = 1.
Since p 6 1 on C, |ϕ(x)| 6 1 for x in the 0-neighborhood C ∩ (−C); hence ϕ is
continuous. Moreover, for a ∈ A0 and b ∈ B we have
ϕ(a) − ϕ(b) + 1 = ϕ(a − b + x0 ) 6 p(a − b + x0 ) 6 1,
since a − b + x0 ∈ C. Thus ϕ(a) 6 ϕ(b). Now ϕ(A0 ) and ϕ(B) are convex subsets of
R, that is, intervals; and we have shown that ϕ(A0 ) lies to the left of ϕ(B). Moreover
ϕ(A0 ) is open. Let λ = sup ϕ(A0 ). Then inf ϕ(B) > λ by the inequality above. On
the other hand, ϕ(A) is a compact interval contained in the open interval ϕ(A0 ), so
sup ϕ(A) < λ. This proves the theorem. 
Remark 11.17. A consequence of this is that if you know the dual space of a LCTVS
E, you know its closed convex sets: each such set is the intersection of a (usually
infinite) family of half-spaces defined by inequalities of the form Re ϕ(x) 6 c. Thus,
if two topologies on E have the same linear functionals, they have the same closed
convex sets. We will use this remark later.
Let S be any subset of a vector space E. The convex hull of S is the smallest
convex set containing S: it may be defined as the intersection of all convex sets
containing S, or more concretely as the collection of all finite convex combinations
of members of S. If E is a TVS, the closed convex hull of S is the smallest closed
convex set containing S; it is the closure of the convex hull of S.


Let C be a convex subset of a vector space E. A point p ∈ C is called an extreme
point of C if it cannot be written as a nontrivial convex combination of points of C;
that is, if p = λp0 + (1 − λ)p1 , λ ∈ (0, 1), p0 , p1 ∈ C, then in fact p0 = p1 = p.
Theorem 11.18. (Krein-Milman theorem) Let E be a Hausdorff LCTVS. Any
nonempty compact convex subset of E is the closed convex hull of its extreme points.
In particular, it has some extreme points.
An example is the space of Radon probability measures on [0, 1], with its weak-∗
topology as a subset of the dual of the C ∗ -algebra C[0, 1]. This is a compact set by
the Banach-Alaoglu theorem, and it is clearly convex. The extreme points are the
Dirac measures at the various points of [0, 1] (exercise—prove this!). The theorem
tells us that any such measure is the weak-∗ limit of a net of convex combinations
of Dirac measures.
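To make the example concrete, here is a small numerical sketch (Python with numpy; the uniform measure, the test functions, and the sample points are hypothetical choices made only for illustration). Averages of Dirac measures at equally spaced points converge weak-∗ to the uniform measure on [0, 1], in the sense that integrals of continuous test functions converge.

import numpy as np

# Continuous test functions on [0,1] and their integrals against the uniform
# (Lebesgue) probability measure, computed in closed form.
tests = [np.sin, np.exp, lambda t: t ** 2]
true_vals = np.array([1 - np.cos(1.0), np.e - 1.0, 1.0 / 3.0])

for n in (10, 100, 1000):
    pts = (np.arange(n) + 0.5) / n                     # n points of [0, 1]
    # integrals against the convex combination (1/n) * sum of Dirac measures at pts
    approx = np.array([f(pts).mean() for f in tests])
    print(n, np.abs(approx - true_vals).max())
# The errors shrink as n grows: these convex combinations of Dirac measures
# converge weak-* to the uniform measure, as the Krein-Milman theorem predicts.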
Proof. Let K be a nonempty compact convex subset of E. Generalizing the notion
of extreme point, let us call a subset S ⊆ K an extreme set if it is closed and
nonempty and, for each p ∈ S, if p = λp0 + (1 − λ)p1 , λ ∈ (0, 1), p0 , p1 ∈ K, then
in fact p0 , p1 ∈ S. Extreme sets exist (K is one); they can be partially ordered by
inclusion; every decreasing chain has a lower bound (use compactness to show that
the intersection of a chain is nonempty). By Zorn’s Lemma, there exist minimal
extreme sets. We’re going to show that they consist of single points.
Let S be any extreme set and ϕ ∈ E ∗ . Let M = sup{Re ϕ(x) : x ∈ S}. It is easy
to see that
{x ∈ S : Re ϕ(x) = M }
is also an extreme set. Thus, if S is minimal extreme, every ϕ ∈ E ∗ is constant on
S. By Corollary 11.15, S must consist of a single point only.
Let K 0 be the closed convex hull of the extreme points of K. Clearly, K 0 ⊆ K,
and thus K 0 is compact. Suppose that x0 ∈ K \ K 0 . Then by Theorem 11.16, there
is ϕ ∈ E ∗ with Re ϕ(x) < Re ϕ(x0 ) for all x ∈ K 0 . Reprising a previous argument,
let M = sup{Re ϕ(x) : x ∈ K}. Then {x ∈ K : Re ϕ(x) = M } is an extreme set
and it does not meet K 0 . Since every extreme set contains a minimal extreme set,
i.e. an extreme point, this is a contradiction. 
The Krein-Milman theorem tells us that if we take the set of extreme points of
a compact convex set, and then take the closed convex hull of that, we recover the
original set. We might ask what happens if we reverse the order of operations: start
with a set G, take its closed convex hull, and then take the extreme points of that.
The answer is given by Milman’s theorem. For simplicity we state and prove this
in a dual space context, because that’s where we are going to use it; but a similar
result holds in general.
Proposition 11.19. (Milman’s theorem) Let E be a separable Banach space and
let G be a closed subset of the unit ball of E ∗ (equipped with its weak-star topology).
Let K be the closed convex hull of G. Then all the extreme points of K are contained
in G.


Proof. Notice that both G and K are compact metrizable spaces. Let P (G) be the
space of probability measures on G, equipped with its vague topology (that is, the
weak-star topology as a subset of the dual of C(G)). It is a compact convex set.
Define a map Φ : P (G) → E ∗ by
Φ(µ)(x) = ∫_G σ(x) dµ(σ).
If µ is a convex combination of Dirac measures, Φ(µ) is a convex combination of
points of G. Since every µ is a vague limit of such convex combinations, one sees
easily that the range of Φ is exactly K.
Suppose that ψ is an extreme point of K. Let
S = {µ ∈ P (G) : Φ(µ) = ψ}.
It is nonempty (since the range of Φ is K), and since ψ is an extreme point of K, it
follows that S is an extreme set in P (G) (see the proof of 11.18 for the definition of
extreme set). Every extreme set contains an extreme point, as we saw above, and
the extreme points of P (G) are Dirac measures, so there is some Dirac measure in
S, say the unit mass δϕ at ϕ ∈ G. But then
ψ(x) = Φ(δϕ )(x) = ∫_G σ(x) dδϕ (σ) = ϕ(x),
so ψ = ϕ and belongs to G as required. 
Proposition 11.20. (Goldstine’s theorem) Let E be a Banach space, and regard
E as a subspace of its double dual E ∗∗ via the canonical embedding. The unit ball
of E is weak-∗ dense in the unit ball of E ∗∗ .
Proof. Let B be the unit ball of E; we want to know what is the weak-∗ closure of
B in E ∗∗ . Since B is convex, this is the same as finding its weak-∗ closed convex
hull, i.e., the intersection of all the weak-∗ closed half-spaces that contain it. But
the weak-∗ continuous linear functionals on a dual space F ∗ are just the points of
F . In other words, the weak-∗ closure of B is the intersection of all the half-spaces
{x ∈ E ∗∗ : Re ϕ(x) 6 1},
as ϕ runs over the unit ball of E ∗ , and this is just the unit ball of E ∗∗ . 


Lecture 12
Topologies on B(H)

Let H be a Hilbert space. The C ∗ -algebra B(H) carries a number of important
locally convex topologies in addition to the familiar norm topology. These are weak
topologies — they are defined to have the minimal number of open sets required to
make certain maps continuous.
Definition 12.1. Let H be a Hilbert space.
(a) The weak operator topology on B(H) is the weakest topology making the maps
B(H) → C given by T 7→ hT ξ, ηi continuous, for all fixed ξ, η ∈ H. In other
words, it is the locally convex topology defined by the seminorms T 7→ |hT ξ, ηi|.
(b) The strong operator topology on B(H) is the weakest topology making the maps
B(H) → H given by T 7→ T ξ continuous. In other words, it is the locally convex
topology defined by the seminorms T 7→ kTξk.
It is clear that the weak operator topology is weaker (fewer open sets) than the
strong operator topology, which in turn is weaker than the norm topology. These
relations are strict.
Here are some easy properties of these topologies.
(a) Addition is jointly continuous in both topologies; multiplication is not jointly
continuous in either topology.
(b) However, multiplication restricted to the unit ball is jointly continuous in the
strong topology.
(c) Multiplication by a fixed operator (on the left or the right) is both weakly and
strongly continuous (that is, multiplication is separately continuous).
(d) The unit ball of B(H) is weakly compact, but not strongly compact.
(e) A weakly (hence, also, a strongly) convergent sequence is bounded in norm.
(f) The adjoint operation T 7→ T ∗ is weakly continuous.
(g) The adjoint operation is not strongly continuous.
The proofs are mostly easy exercises, involving powers of the unilateral shift op-
erator and its adjoint. For the hardest one (multiplication is not strongly jointly
continuous) see Exercise 4.3 in Murphy for hints.
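For readers who like to experiment, here is a small numerical sketch (Python with numpy; the truncation size, the decaying vector, and the powers shown are arbitrary illustrative choices). Working with an N × N truncation of the unilateral shift S (valid as long as the power n stays well below N), one sees that the powers (S ∗ )n tend to 0 strongly while their adjoints S n do not, and that the matrix coefficients of S n do tend to 0, illustrating weak convergence and the failure of strong continuity of the adjoint.

import numpy as np

N = 200                                          # truncation size (illustrative)
S = np.zeros((N, N))
S[np.arange(1, N), np.arange(N - 1)] = 1.0       # S e_k = e_{k+1}: truncated unilateral shift

xi = 0.7 ** np.arange(N)                         # a rapidly decaying unit vector
xi /= np.linalg.norm(xi)

for n in (1, 5, 20):
    Sn = np.linalg.matrix_power(S, n)
    print(n,
          np.linalg.norm(Sn @ xi),       # stays ~1: S^n does not tend to 0 strongly
          np.linalg.norm(Sn.T @ xi),     # ~0.7^n -> 0: (S^*)^n -> 0 strongly
          abs(xi @ (Sn @ xi)))           # ~0.7^n -> 0: <S^n xi, xi> -> 0 (weak convergence)
# Thus (S^*)^n -> 0 strongly while the adjoints S^n do not converge strongly:
# the adjoint operation is weakly, but not strongly, continuous.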
Remark 12.2. It is sometimes worth noting that the adjoint operation is strongly
continuous on the set N of normal operators. This is left as another (slightly tricky)
exercise. Beware that N is not a vector space!
Remark 12.3. You may also encounter the ultraweak (also called σ-weak) topology.
This is the weakest topology that makes all the linear functionals T 7→
Tr(ST ) continuous for every trace class operator S (see Definition 2.12). When S
is of rank one we get exactly the linear functionals appearing in the definition of
the weak topology; it follows that the ultraweak topology is stronger than the weak
topology (confusing, huh?). However it can be shown that the ultraweak and weak
topologies agree on any bounded subset of B(H). The theoretical importance of


the ultraweak topology arises from the fact that the trace-class operators can be
made into a Banach space, and the above pairing makes B(H) the dual of this
Banach space: the ultraweak topology is just the weak-star topology associated to
this duality pairing. But I will try not to mention it any more. (Reading over these
notes makes it clear that this resolution was not successful.)
There is an intimate relation between positivity and the strong (and weak) oper-
ator topologies:
Lemma 12.4. If a norm-bounded net of positive operators converges weakly to zero,
then it converges strongly to zero.
Proof. Let {Tλ } be such a net in B(H). Let ξ ∈ H. By the Cauchy-Schwarz
inequality
kTλ ξk4 = hTλ ξ, Tλ ξi2 = hTλ^{1/2} ξ, Tλ^{3/2} ξi2 6 hTλ ξ, ξihTλ3 ξ, ξi 6 kTλ k3 kξk2 hTλ ξ, ξi
and the right-hand side tends to zero by weak convergence. 
The “monotone convergence theorem” holds in the strong topology.
Proposition 12.5. Let {Tλ } be an increasing net of selfadjoint operators in B(H),
and suppose that {Tλ } is bounded above by a selfadjoint operator S. Then {Tλ } is
strongly convergent to an operator T with T 6 S.
Proof. We need to review some standard material on operators, sesquilinear forms,
and quadratic forms. Let T be a bounded operator on a Hilbert space. Then the map
σT : H × H → C defined by σT (ξ, η) = hT ξ, ηi is called the sesquilinear form associ-
ated to T . Clearly it is bounded in the sense that |σT (ξ, η)| 6 Ckξkkηk; conversely,
it follows from the Riesz representation theorem that any bounded sesquilinear form
arises from a unique bounded operator T .
The quadratic form associated to a sesquilinear form σ is the map ϕ defined by
ϕ(ξ) = σ(ξ, ξ). One can recover σ from ϕ via the polarization identity
4σ(ξ, η) = Σ_{k=0}^{3} i^k ϕ(i^k ξ + η)

and, abstractly, one can define a quadratic form as a mapping ϕ such that the
polarization identity yields a sesquilinear form. A quadratic form is hermitian if
it has values in R and positive if it has values in R+ ; in the bounded case these
correspond to the hermitian and positive properties of the corresponding operator
T.
After this preparation, return to the proof. For each fixed ξ ∈ H the limit of
the increasing net hTλ ξ, ξi of real numbers exists and is bounded by hSξ, ξi. The
function ξ 7→ limλ hTλ ξ, ξi is a bounded quadratic form, and using the polarization
identity one may define a positive bounded operator T such that
hT ξ, ξi = lim_λ hTλ ξ, ξi.


By construction, the net Tλ converges to T in the weak operator topology. So T −Tλ
is a bounded net of positive operators converging weakly to zero; by Lemma 12.4 it
converges strongly to zero. 
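As a quick sanity check on the polarization identity used in the proof above, here is a short numerical verification for matrices (Python with numpy; the matrix size and random data are arbitrary, and the inner product is taken conjugate-linear in its first variable, the convention implicit in the formula above).

import numpy as np

rng = np.random.default_rng(0)
n = 5
T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))   # a bounded operator
xi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
eta = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def sigma(x, y):
    # the sesquilinear form sigma_T(x, y) = <Tx, y>, conjugate-linear in x
    return np.vdot(T @ x, y)

def phi(x):
    # the associated quadratic form
    return sigma(x, x)

# Polarization: 4 sigma(xi, eta) = sum_{k=0}^{3} i^k phi(i^k xi + eta)
lhs = 4 * sigma(xi, eta)
rhs = sum((1j ** k) * phi((1j ** k) * xi + eta) for k in range(4))
assert np.isclose(lhs, rhs)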
Remark 12.6. To what extent does the above discussion generalize when the Hilbert
space H is replaced by a Hilbert A-module M over a C ∗ -algebra A? In this case
it is customary to use yet another variant topology. The strong-star topology on
B(M ) is the topology defined by the seminorms T 7→ kT ξk and T 7→ kT ∗ ξk. (Thus,
if M happens to be a Hilbert space, the strong-star topology lies in between the
norm topology and the strong topology; it is the weakest topology stronger than the
strong topology for which the adjoint is continuous.) Multiplication of operators is
jointly continuous in the strong-star topology on the unit ball of B(M ) (essentially
the same proof as in the Hilbert space case).
The following lemma is important even in the Hilbert space case.
Lemma 12.7. Let M be a Hilbert A-module. Then the unit ball of K(M ) is strong-
starly dense in the unit ball of B(M ).
This implies of course that K(M ) is strong-starly dense in B(M ); but the state-
ment about the unit balls is a stronger result.

Proof. Let uα be an approximate unit for K(M ). By Lemma 10.8, uα ξ → ξ for all
ξ ∈ M . Thus for each T ∈ B(M ),
k(T uα − T )ξk → 0, k(uα T ∗ − T ∗ )ξk → 0,
and therefore the net T uα , which consists of compact operators of norm 6 kT k,
tends strong-starly to T . The result follows. 

Since the Riesz representation theorem is not true for Hilbert modules in general,
there is no correspondence between sesquilinear forms and operators as there is in
the Hilbert space case. Thus Proposition 12.5 fails for B(M ) where M is a Hilbert
module (i.e., as we shall see, B(M ) is not a von Neumann algebra in general).
However, there is a partial result:
Lemma 12.8. Let M be a Hilbert A-module and T an adjointable operator on M .
Then T is positive (in the C ∗ -algebra B(M )) if and only if hx, T xi > 0 (in A) for
all x ∈ M .
Proof. If T is positive, say T = S ∗ S, then hx, T xi = hSx, Sxi > 0 for all x. Conversely,
suppose that T is adjointable and satisfies hx, T xi > 0 for all x. Using the
obvious analog of the polarization identity it follows that T is self-adjoint. Write
T = T+ − T− in the usual way. Then for all x,
0 6 hx, T−3 xi = −hT− x, T T− xi 6 0,
whence T− = 0 and T is positive. 


Continuity properties of functional calculus


In this section we’ll look at the following question: let f be a function on R, and
consider the map
T 7→ f (T )
from the selfadjoint part of B(H) to B(H). When is this a continuous map, relative
to the various topologies that we have been discussing on B(H)?
Proposition 12.9. Let f : R → C be any continuous function. Then T 7→ f (T ) is
continuous relative to the norm topologies on B(H)sa and B(H).
Proof. Since neighborhoods in the norm topology are bounded, and the spectral
radius is bounded by the norm, only the behavior of f on a compact subinterval of
R is significant for continuity. Thus we may and shall assume that f ∈ C0 (R).
If f (λ) = (λ + z)−1 , for z ∈ C \ R, then T 7→ f (T ) is continuous; for we have
(12.10) f (S) − f (T ) = f (S)(T − S)f (T )
and kf (S)k and kf (T )k are bounded by | Im z|−1 .
Let A denote the collection of all functions f ∈ C0 (R) for which T 7→ f (T ) is
norm continuous. It is easy to see that A is a C ∗ -subalgebra. Since it contains
the functions λ 7→ (z + λ)−1 it separates points of R. Thus A = C0 (R) by the
Stone-Weierstrass theorem. 
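The resolvent identity (12.10) is easy to test numerically. The following sketch (Python with numpy; the matrices and the choice of z are arbitrary illustrative data) checks the identity for selfadjoint matrices and verifies the norm bound used in the proof.

import numpy as np

rng = np.random.default_rng(1)
n, z = 6, 0.5 + 1.0j

A = rng.standard_normal((n, n)); S = (A + A.T) / 2      # a selfadjoint S
B = rng.standard_normal((n, n)); T = (B + B.T) / 2      # a selfadjoint T

def f(X):
    # f(X) = (X + z)^{-1}
    return np.linalg.inv(X + z * np.eye(n))

lhs = f(S) - f(T)
rhs = f(S) @ (T - S) @ f(T)
assert np.allclose(lhs, rhs)                            # the identity (12.10)

# the bound ||f(X)|| <= 1/|Im z| for selfadjoint X
for X in (S, T):
    assert np.linalg.norm(f(X), 2) <= 1 / abs(z.imag) + 1e-12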
Remark 12.11. One could replace B(H) by any C ∗ -algebra without changing the
argument here.
Now we will look at continuity in the strong-star topology. Note that strong-
star neighborhoods need not be bounded, so that the behavior of the function f at
infinity will become important. We say that a function f : R → C is of linear growth
if |f (t)| 6 A|t| + B for some constants A and B.
Proposition 12.12. Let M be a Hilbert module over a C ∗ -algebra. Let f : R → C
be a continuous function of linear growth. Then T 7→ f (T ) is continuous relative to
the strong-star topologies on B(M )sa and B(M ).
Proof. Let V denote the vector space of all continuous functions f : R → C for which
T 7→ f (T ) is strong-starly continuous, and let V b denote the bounded elements of
V . Note that if f belongs to V (or V b ), then so does f¯, because the adjoint is a
strong-starly continuous map. We claim first that V b V ⊆ V ; this follows from the
identity
f g(S) − f g(T ) = f (S)(g(S) − g(T )) + (f (S) − f (T ))g(T )
together with the joint continuity of multiplication in the strong-star topology on
bounded sets. In particular, V b is closed under multiplication, so it is an algebra.
One sees easily that V b is a C ∗ -algebra. Arguing with Equation 12.10 exactly as
above, and using joint continuity of multiplication as above, we see that V b ⊇ C0 (R).
Now let f be a function of linear growth. Then the function t 7→ (1 + t2 )−1 f (t)
belongs to C0 (R), hence to V b . The identity function of course belongs to V , hence


so does the function g : t 7→ t(1 + t2 )−1 f (t), as it is the product of a member of
V and a member of V b . The function g is bounded, so belongs to V b , hence the
function t 7→ tg(t) = t2 (1 + t2 )−1 f (t) belongs to V . Finally, then, the function
t 7→ (1 + t2 )−1 f (t) + t2 (1 + t2 )−1 f (t) = f (t)
belongs to V , as required. 


Lecture 13
Von Neumann Algebras

Closed convex sets


We discuss closed convex sets in the various topologies that we have introduced
on B(H).
Proposition 13.1. The only strongly continuous linear functionals on B(H) are of
the form
T 7→ Σ_{i=1}^{n} hξi , T ηi i
for certain vectors η1 , . . . , ηn and ξ1 , . . . , ξn in H.
Proof. Let ϕ : B(H) → C be a strongly continuous linear functional. Then by
definition of the strong operator topology there exist vectors η1 , . . . , ηn such that
|ϕ(T )| 6 ( Σ_{i=1}^{n} kT ηi k2 )^{1/2} .
Let L be the linear subspace of H n comprised of those vectors of the form (T η1 , . . . , T ηn )
for T ∈ B(H). We may define a linear functional of norm 6 1 on L by
(T η1 , . . . , T ηn ) 7→ ϕ(T ).
Extend this functional to the whole of H n by the Hahn-Banach theorem, and then
represent it by an inner product with a vector (ξ1 , . . . , ξn ) in H n . Restricting back
down to L we find that
ϕ(T ) = Σ_{i=1}^{n} hξi , T ηi i
as required. 
Notice that all the functionals in the above proposition are also weakly continuous.
Thus the weak and strong duals of B(H) are the same. Using Remark 11.17 this
gives us the first part of
Corollary 13.2. Let H be a Hilbert space and C a convex subset of B(H). Then
(i) C is weakly closed if and only if it is strongly closed;
(ii) Suppose in addition that C is selfadjoint in the sense that T ∈ C ⇒ T ∗ ∈ C.
Then C is strong-starly closed if and only if it is weakly (or strongly) closed.

Proof. We have already proved (i). As for (ii), for an operator S write XS = (S + S ∗ )/2
and YS = (i/2)(S ∗ − S), so that XS and YS are selfadjoint and S = XS + iYS . Let C
be strong-starly closed and suppose Tα is a net in C converging weakly to T . Then
XTα → XT and YTα → YT weakly. The pairs (XS , YS ), as S runs over the convex
combinations of the Tα , form a convex set (view such a pair as a block-diagonal
operator on H ⊕ H), and (XT , YT ) lies in its weak closure; by part (i) it therefore
lies in its strong closure. Thus there are convex combinations Sβ of the Tα with
XSβ → XT and YSβ → YT strongly, hence Sβ → T and Sβ∗ → T ∗ strongly; that is,
Sβ → T strong-starly. Each Sβ lies in the convex set C, and C is strong-starly closed,
so T belongs to C. □
Let H be a Hilbert space and let A be a unital ∗-subalgebra of B(H). By
Corollary 13.2, A is strong-starly closed if and only if it is strongly closed if and
only if it is weakly closed.
Definition 13.3. A unital ∗-subalgebra of B(H) which is closed in any and hence
all of the above topologies is called a von Neumann algebra of operators on H.
If M is any selfadjoint set of operators on H its commutant M 0 is defined to be
the set
M 0 = {T ∈ B(H) : ST = T S ∀ S ∈ M }.
It is a von Neumann algebra. For tautological reasons, M 00 (the double commutant)
contains M .
Theorem 13.4. (Double commutant theorem) A unital C ∗ -subalgebra of B(H)
is a von Neumann algebra if and only if it is equal to its own double commutant. In
general, the double commutant of S ⊆ B(H) is the smallest von Neumann algebra
of operators on H that contains S.
Proof. The thing we have to prove is this: if A is a unital C ∗ -subalgebra of B(H)
and T ∈ A00 , then T is the strong limit of a net of operators in A. In other words,
we have to show that for any finite list ξ1 , . . . , ξn of vectors in H one can find an
operator R ∈ A such that
Σ_{i=1}^{n} k(T − R)ξi k2 < 1.
Let us first address the case n = 1, and ξ1 = ξ. Let P be the orthogonal projection
operator onto the A-invariant closed subspace generated by ξ (that is, the closure of Aξ). Then P
belongs to the commutant A0 of A, because if R ∈ A then R preserves the range
of P and R∗ preserves the kernel of P . Since T ∈ A00 , T commutes with P , and in
particular P T ξ = T P ξ = T ξ, so T ξ belongs to the range of P . That is, T ξ is a
limit of vectors of the form Rξ, R ∈ A, which is what was required.
Now consider the general case. Let K = H ⊕ · · · ⊕ H (n times) and let ξ ∈ K be
the vector (ξ1 , . . . , ξn ). Let ρ : B(H) → B(K) be the ‘inflation’ ∗-homomorphism
which is defined by
T 7→ diag(T, . . . , T )
(in a more sophisticated language this is T 7→ T ⊗ In ). A routine calculation
shows that ρ(A)0 consists of n × n matrices all of whose elements belong to A0 , and
therefore that ρ(A)00 = ρ(A00 ). Now apply the previous case to the C ∗ -algebra ρ(A),
the operator ρ(T ), and the vector ξ. We find that there is ρ(R) ∈ ρ(A) such that
k(ρ(R) − ρ(T ))ξk < 1, and when written out componentwise this is exactly what we
want. 
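In finite dimensions the double commutant can be computed directly, which makes a nice illustration of the theorem. The sketch below (Python with numpy; the algebra generated by the single diagonal matrix D = diag(1, 1, 2) is a hypothetical toy choice) computes commutants as null spaces of the maps X 7→ GX − XG and confirms that the commutant has dimension 5 while the double commutant has dimension 2, namely the span of I and D.

import numpy as np

def commutant_basis(gens, n, tol=1e-10):
    # Basis of {X : GX = XG for all G in gens}, computed as the null space of
    # the stacked maps vec(X) -> vec(GX - XG) = (G (x) I - I (x) G^T) vec(X).
    blocks = [np.kron(G, np.eye(n)) - np.kron(np.eye(n), G.T) for G in gens]
    M = np.vstack(blocks)
    _, s, Vh = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return [v.reshape(n, n) for v in Vh[rank:]]

D = np.diag([1.0, 1.0, 2.0])
A_prime = commutant_basis([D], 3)            # commutant of the algebra generated by D
A_double = commutant_basis(A_prime, 3)       # its double commutant

print(len(A_prime), len(A_double))           # 5 and 2
# dim A'' = 2 = dim span{I, D}: the double commutant recovers the algebra
# generated by D, as the theorem predicts (in finite dimensions every
# *-subalgebra is already weakly closed).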


Remark 13.5. Our hypothesis that A must be unital is stronger than necessary.
Suppose simply that A is non-degenerate, meaning that AH is dense in H, or equiv-
alently that Rη = 0 for all R ∈ A implies η = 0. Then the conclusion of the double
commutant theorem holds. To see this, we need only check that the vector ξ still
belongs to the closure of Aξ — the rest of the proof will go through as before. Indeed, one can
write ξ = P ξ + (1 − P )ξ, where P is the projection onto the closure of Aξ as above, and since
P ∈ A0 we have, for every R ∈ A,
R(1 − P )ξ = (1 − P )Rξ = Rξ − Rξ = 0;
so by non-degeneracy (1 − P )ξ = 0 and ξ = P ξ ∈ Aξ.
Let H be a Hilbert space and T ∈ B(H). Recall (Definition 7.3) that a polar
decomposition for T is a factorization T = V P , where P is a positive operator and
V is a partial isometry with initial space Im(P ) and final space Im(T ). We proved in
Proposition 7.4 that a polar decomposition exists and is unique, and that moreover
P = |T | belongs to the C ∗ -algebra generated by T . The partial isometry V does
not always belong to this C ∗ -algebra (example?), but we do have:
Lemma 13.6. Let T ∈ B(H) have polar decomposition T = V P . Then the partial
isometry V belongs to W ∗ (T ), the von Neumann algebra generated by T .
Proof. Using the bicommutant theorem, it is enough to show that V commutes with
every operator S that commutes with T and T ∗ . We have H = Ker(P ) ⊕ Im(P )
so it suffices to check that V Sξ = SV ξ separately for ξ ∈ Ker(P ) = Ker(T ) and
for ξ = P η. In the first case, ST ξ = T Sξ = 0 so Sξ ∈ Ker(T ), and then V ξ = 0,
V Sξ = 0. In the second case
V Sξ = V SP η = V P Sη = T Sη = ST η = SV P η = SV ξ
since P belongs to the C ∗ -algebra generated by T . 
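For matrices the polar decomposition is easily computed from a singular value decomposition. The following sketch (Python with numpy; the random matrix is an arbitrary illustrative choice) produces the factorization T = V P discussed above; for an invertible matrix the partial isometry V is actually unitary.

import numpy as np

rng = np.random.default_rng(2)
T = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

U, s, Wh = np.linalg.svd(T)              # T = U diag(s) Wh
P = Wh.conj().T @ np.diag(s) @ Wh        # P = |T| = (T^* T)^{1/2}, positive
V = U @ Wh                               # here unitary, since T is (almost surely) invertible

assert np.allclose(V @ P, T)                              # T = V P
assert np.allclose(P, P.conj().T)                         # P is selfadjoint
assert np.linalg.eigvalsh(P).min() >= -1e-10              # and positive
assert np.allclose(V @ V.conj().T, np.eye(4))             # V is unitary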
Remark 13.7. It follows of course that the orthogonal projection V V ∗ onto Im(T )
also belongs to the von Neumann algebra generated by T (this is also easy to see
directly). A corollary is that every von Neumann algebra of dimension > 1 contains
nontrivial projections. For suppose T is a self-adjoint element of such a von Neu-
mann algebra B which is not a multiple of the identity. Then the spectrum of T
contains at least two points, λ0 and λ1 . Let f be a continuous function equal to zero
in a neighborhood of λ0 and equal to one in a neighborhood of λ1 . Then f (T ) ∈ B
and the range of f (T ) is neither 0 nor all of H (the latter because there is g(T )
with nonzero range such that f (T )g(T ) = 0). The projection onto Im(f (T )) then
is nontrivial and belongs to B.


Lecture 14
Von Neumann Algebras, representations, and states

The Kaplansky density theorem strengthens the double commutant theorem by
showing that every element of A00 can be strongly approximated by a bounded net
from A (note that strongly convergent nets need not be bounded). The main step
in the proof has already been taken: it is the strong continuity of the functional
calculus discussed in the preceding section.
Lemma 14.1. Let A be any C ∗ -subalgebra of B(H) (unital or not), and let {uλ }
be an approximate unit for A. Then uλ converges strong-starly to the orthogonal
projection on the subspace AH of H.
(In a moment we will see that, in fact, AH is automatically closed; see Lemma 14.7.)
Proof. Since uλ is an increasing net of positive operators, bounded above by 1, it
converges to a positive operator P by Proposition 12.5. If ξ ∈ AH, say ξ = aη,
then P ξ = P aη = lim uλ aη = aη = ξ. If ξ ∈ (AH)⊥ then kuλ ξk2 = hξ, u2λ ξi = 0 so
P ξ = 0. Thus P is the orthogonal projection on AH as asserted. 
A C ∗ -subalgebra of B(H) is called nondegenerate if AH is dense in H.
Theorem 14.2. (Kaplansky density theorem) Let A be a nondegenerate C ∗ -
subalgebra of B(H) and let B = A00 be its double commutant. Then the unit ball of
A is strong-starly dense in the unit ball of B, the unit ball of Asa is strong-starly
dense in the unit ball of Bsa , and the unit ball of A+ is strong-starly dense in the
unit ball of B+ .
Proof. Without loss of generality we may assume that A is unital (if not, adjoin
a unit and note that by Lemma 14.1 A is strong-starly dense in its unitalization).
Thus we may apply the double commutant theorem.
Let’s start with the case of Bsa . Let T ∈ Bsa with kT k 6 1. Using the Double
Commutant Theorem, we see that T is the strong-star limit of a net Tλ of elements of
A, which we may take to be selfadjoint by replacing Tλ with (Tλ + Tλ∗ )/2. The norms of
the operators Tλ need not be bounded; but if we replace Tλ by f (Tλ ), where f (x) =
min(1, max(−1, x)), then f (Tλ ) → f (T ) = T strong-starly by Proposition 12.12.
The same argument (with f (x) = min(1, max(0, x))) works for B+ . To prove the
result for the full unit ball we go to 2 × 2 matrices: if T belongs to the unit ball of
B then let Tb be the 2 × 2 matrix ( 0 T ; T ∗ 0 ), which is selfadjoint and belongs to the unit ball of M2 (B).
Hence it can be strong-starly approximated by elements of the unit ball of M2 (A) (by
the previous result applied to 2 × 2 matrices) and looking at the top right entries we
see that T can be strong-starly approximated by elements of the unit ball of A. 
Remark 14.3. If A is unital, it is also true that the unitary group of A is strong-
starly dense in the unitary group of B = A00 . The proof of this uses the Borel functional
calculus to reduce to the self-adjoint case: specifically, what is needed is that every
unitary u in a von Neumann algebra is of the form e2πit with t positive and 6 1.
We will prove this later (16.7).


Let H be a Hilbert space and A a C ∗ -subalgebra of B(H). We say that A is
topologically irreducible on H if there are no closed A-invariant subspaces of H
apart from 0 and H itself. It is a non-trivial consequence of the double commutant
theorem that A is then algebraically irreducible on H (there are no non-trivial
invariant subspaces at all, closed or not). In fact
Theorem 14.4. (Kadison transitivity theorem) Let A be a topologically irre-
ducible C ∗ -subalgebra of B(H). Then for any two unit vectors ξ, η ∈ H, there exists
T ∈ A with T ξ = η.
Proof. Closed A-invariant subspaces of H correspond to projections in the commu-
tant A0 ; thus, topological irreducibility tells us that A0 has no nontrivial projections.
Since A0 is a von Neumann algebra, it must consist only of multiples of the identity,
by Remark 13.7; thus A00 = B(H). So by the Kaplansky Density Theorem, the unit
ball of A is strong-starly dense in the unit ball of B(H); in particular, given any
two unit vectors ξ and η there exists T ∈ A with kT k 6 1 and kT ξ − ηk < 1/2.
Now we apply this argument, and an obvious rescaling, iteratively as follows:
given ξ and η = η0 , produce a sequence {Tj } in A and a sequence {ηj } of vectors of
H with
ηj = ηj−1 − Tj ξ, kηj k < 2−j , kTj k 6 2 · 2−j .
The series T = Σj Tj then converges in A to an element of norm 6 2 having T ξ = η.
(With more care we can reduce the norm estimate from 2 to 1 + ε. We can also
move n linearly independent ξ’s simultaneously onto arbitrary η’s, rather than just
one. ) 
Remark 14.5. Remember from Remark 12.3 that B(H) is the dual of the Banach
space L1 (H) of trace-class operators. It is not hard to generalize this to show that
a von Neumann algebra B ⊆ B(H) is the dual of L1 (H)/B ⊥ , where
B ⊥ = {S ∈ L1 (H) : Tr(ST ) = 0 ∀T ∈ B}.
It was shown by Sakai that an (abstract) C ∗ -algebra is representable as a von Neu-
mann algebra of operators on a Hilbert space if and only if it is isomorphic, as a
Banach space, to the dual of some Banach space.
Definition 14.6. Let A be a C ∗ -algebra. A representation of A is a ∗-homomorphism
ρ : A → B(H) for some Hilbert space H. The representation is faithful if ρ is injec-
tive, and non-degenerate if ρ(A)H = H. The representation is irreducible if there
are no proper closed subspaces of H that are invariant under ρ(A). If A is unital,
the representation is unital if ρ(1) is the identity operator.
It follows from the Kadison transitivity theorem 14.4 that an irreducible repre-
sentation has no proper invariant subspaces at all, closed or not. Moreover, the
definition of non-degeneracy can be relaxed: a representation is non-degenerate if
and only if ρ(A)H is dense in H. This follows from the following lemma.
Lemma 14.7. For any representation ρ : A → B(H), the subspace ρ(A)H is closed
in H.


Proof. Let E be the closure of ρ(A)H and let y ∈ E. Let {uλ } be an approximate unit for A. We’ll
construct a ∈ A, x ∈ E such that y = ρ(a)x by an inductive process. It’s convenient
to carry out the intermediate steps of the induction inside the unitalization A e (notice
that ρ extends to a unital action of A e on E), but the final result will lie in A.
To start the induction define x0 = y ∈ E and a0 = 1 ∈ A e. Supposing that
x0 , . . . , xn−1 and a0 , . . . , an−1 have been defined, select un from the approximate
unit so that kρ(un )xn−1 − xn−1 k < 2−n (this can be done for any element of E). Let
an = an−1 − 2−n (1 − un ) = 2−n · 1 + Σ_{k=1}^{n} 2−k uk
and note that the latter expression is a sum of positive terms, so an is invertible in
A e with ka_n^{−1} k 6 2^n . Set xn = ρ(a_n^{−1} )y. Now

xn − xn−1 = ρ(a_n^{−1} (an−1 − an ) a_{n−1}^{−1} )y = 2−n ρ(a_n^{−1} (1 − un ))xn−1 ,
so kxn − xn−1 k 6 2−n . Thus {xn }, {an } are Cauchy sequences, convergent to x and a
respectively, and y = ρ(a)x since y = ρ(an )xn for each n. Finally, a = Σ_{k=1}^{∞} 2−k uk ∈
A as required. □
Remark 14.8. The construction used in the lemma above is called the Cohen fac-
torization theorem. Inspection of the proof shows that no Hilbert space properties
of H are used; it follows in the same way that for any Banach space X which is a
module over a C ∗ -algebra A, the subspace AX is closed in X. For instance, suppose
that A is a subalgebra of a C ∗ -algebra B and J is a closed ideal in B. Then AJ is closed in J.
Representations are studied by way of states.
Definition 14.9. A linear functional ϕ : A → C on a C ∗ -algebra A is called positive
if ϕ(a∗ a) > 0 for all a ∈ A. A state is a positive linear functional of norm one.
Example 14.10. Let ρ : A → B(H) be a representation and let ξ ∈ H be a unit
vector. Then the functional
ϕ(a) = hξ, ρ(a)ξi
is a state, called the vector state associated to ξ.
Any positive linear functional is bounded; for, if ϕ were positive and unbounded,
we could find elements ak ∈ A+ of norm one with ϕ(ak ) > 4^k , and then a = Σk 2^{−k} ak
is an element of A+ satisfying ϕ(a) > 2^{−k} ϕ(ak ) > 2^k for all k, a contradiction. Thus
there is little loss of generality in restricting attention to states. A state on C(X)
is a Radon probability measure on X, by the Riesz representation theorem from
measure theory. Note that the state defined by the Radon probability measure µ is
a vector state of C(X); take H = L2 (X, µ), ρ the representation by multiplication
operators, and ξ the constant function 1.
If σ is a state (or just a positive linear functional) on A then we may define a
(semidefinite) ‘inner product’ on A by
ha, biσ = σ(a∗ b).


The Cauchy–Schwarz inequality continues to hold (with the usual proof): we have
|ha, biσ |2 6 ha, aiσ hb, biσ .
Note that ha, aiσ is different from the C ∗ -norm kak2 .
Proposition 14.11. A bounded linear functional ϕ : A → C is positive if and only
if lim ϕ(uλ ) = kϕk for some approximate unit uλ for A; and in particular, if and
only if ϕ(1) = kϕk in case A happens to be unital.
Proof. There is no loss of generality in supposing that kϕk = 1.
Suppose that ϕ is positive. Then ϕ(uλ ) is an increasing net of real numbers,
bounded above by 1, and therefore convergent to some real r 6 1. For each a in the
unit ball of A, we have by Cauchy–Schwarz
|ϕ(auλ )|2 6 ϕ(a∗ a)ϕ(u2λ ) 6 ϕ(uλ ) 6 r.
Thus, since {uλ } is an approximate unit, |ϕ(a)|2 6 r, whence r > 1 on taking the
supremum over a in the unit ball of A. Since we already know r 6 1, this gives
r = 1 as required.
Conversely suppose that lim ϕ(uλ ) = 1. First we will prove that ϕ takes real
values on selfadjoint elements. Take a selfadjoint element a ∈ A, of norm one, and
write ϕ(a) = x + iy ∈ C; we want to prove that y = 0. Replacing a by −a if
necessary, assume for a contradiction that y > 0. For each n one can extract a un
from the approximate unit which has ϕ(un ) > 1 − 1/n3 and which commutes well
enough with a that we have the estimate
knun − iak2 6 n2 + 2
(of course if it were the case that un a = aun exactly then the bound would be n2 + 1).
Therefore
x2 + (n(1 − 1/n3 ) + y)2 6 |ϕ(nun − ia)|2 6 n2 + 2,
which yields a contradiction as n → ∞.


We have now shown that ϕ is a selfadjoint linear functional; to show positivity, let
x ∈ A with 0 6 x 6 1. Then for each λ, uλ −x is in the unit ball and so ϕ(uλ −x) 6 1.
Let λ → ∞ to obtain 1 − ϕ(x) 6 1, whence ϕ(x) > 0 as required. 
Corollary 14.12. Every state on a non-unital C ∗ -algebra extends uniquely to a
state on its unitalization.
Proof. Let σ : A → C be a state, and define σ̃ on the unitalization A
e in the only
possible way,
σ̃(a + µ1) = σ(a) + µ.
We need only to know that σ̃ is positive (and hence continuous), and this follows if
we write
σ̃(a + µ1) = lim σ(a + µuλ )
using the previous proposition. 


Definition 14.13. Let A be a C ∗ -algebra. A representation ρ : A → B(H) is cyclic
if there is a vector ξ ∈ H (called a cyclic vector ) such that ρ[A]ξ is dense in H.
Proposition 14.14. Every non-degenerate representation of a C ∗ -algebra is a (pos-
sibly infinite) direct sum of cyclic representations.
Proof. Let ρ : A → B(H) be a representation. For each unit vector ξ ∈ H, let
Hξ be the cyclic subspace of H generated by ξ, that is the closure of ρ[A]ξ. By
non-degeneracy, ξ ∈ Hξ (see Remark 13.5), and it clearly follows that Hξ is a cyclic
subrepresentation of H with cyclic vector ξ. Define a partially ordered set S as
follows: a member of S is a set S of unit vectors of H such that the correspond-
ing cyclic subspaces are pairwise orthogonal, and S is partially ordered by setwise
inclusion.
Zorn’s Lemma clearly applies and produces a maximal S ∈ S. Let
K = ⊕ξ∈S Hξ 6 H; I claim that K = H. If not, there is a vector η orthogonal to
K, and then Hη is orthogonal to K since K is ρ[A]-invariant. Thus η may be added
to S, contradicting maximality. The claim, and thus the Proposition, follows. 


Lecture 15
The GNS Construction

Theorem 15.1. (Gelfand-Naimark-Segal) Suppose A is a C ∗ -algebra and let σ
be a state on A. Then there exist a representation ρ of A on a Hilbert space Hσ ,
and a (unit) cyclic vector ξ ∈ Hσ , such that
σ(a) = hξ, ρ(a)ξiHσ ;
that is, σ is the vector state corresponding to ξ.
Proof. The Hilbert space Hσ is built out of A with the inner product ha, biσ = σ(a∗ b)
defined above, and the representation is the left regular representation. That is the
idea: here are the details.
Let N ⊆ A be defined by N = {a ∈ A : ha, aiσ = 0}. By virtue of the Cauchy–
Schwarz inequality, N is a closed subspace: in fact
N = {a ∈ A : ha, biσ = 0 ∀b ∈ A}.
Since hca, biσ = ha, c∗ biσ , N is actually closed under multiplication on the left by
members of A; it is a left ideal.
The quotient space A/N inherits a well-defined positive definite inner product
from the inner product on A. We complete this quotient space to obtain a Hilbert
space Hσ . Since N is a left ideal, the left multiplication representation of A on
A descends to a representation ρ (clearly a ∗-representation) of A on A/N . This
representation is norm-decreasing because
kρ(a)k2 = sup_{σ(b∗ b) 6 1} σ(b∗ a∗ ab) 6 ka∗ ak = kak2 ,

where for the inequality we used the relation


b∗ a∗ ab 6 ka∗ akb∗ b
between positive operators. Thus ρ extends by continuity to a representation of A
on Hσ .
Now we must produce the cyclic vector ξ in Hσ . Let uλ be an approximate unit
for A. The inequality
k[uλ ] − [uλ0 ]k2 = σ((uλ − uλ0 )2 ) 6 σ(uλ − uλ0 ) (for λ > λ0 )
shows that the classes [uλ ] form a Cauchy net in Hσ . Let ξ be the limit of this
Cauchy net. For each a ∈ A we have ρ(a)ξ = lim[auλ ] = [a], so ρ(A)ξ is dense in
Hσ and ξ is cyclic. Finally, for any positive element a∗ a of A,
hξ, ρ(a∗ a)ξiσ = h[a], [a]iσ = σ(a∗ a).
Hence by linearity hρ(b)ξ, ξiσ = σ(b) for all b ∈ A. 
The device used here to build the Hilbert space Hσ out of A and σ is called the
Gelfand-Naimark-Segal (GNS ) construction.
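Here is a finite-dimensional sketch of the construction (Python with numpy). The algebra is A = M2 (C) and the state is σ(a) = Tr(ρa) for a hypothetical density matrix ρ chosen only for illustration; since ρ is invertible the null space N is zero, so Hσ is just A with the inner product ha, biσ = σ(a∗ b) and the cyclic vector is the class of 1.

import numpy as np

rng = np.random.default_rng(3)

rho = np.diag([0.7, 0.3])                       # a density matrix: positive, trace one
def sigma(a):                                   # the state sigma(a) = Tr(rho a)
    return np.trace(rho @ a)

def ip(a, b):                                   # GNS inner product <a, b> = sigma(a* b)
    return sigma(a.conj().T @ b)

# Matrix units e_11, e_12, e_21, e_22 form a basis of A = M_2(C).
basis = []
for i, j in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    e = np.zeros((2, 2), dtype=complex); e[i, j] = 1.0
    basis.append(e)

# Gram matrix of the GNS inner product; it is positive definite, so N = 0
# and H_sigma is four-dimensional.
G = np.array([[ip(x, y) for y in basis] for x in basis])
assert np.linalg.eigvalsh((G + G.conj().T) / 2).min() > 0

# The representation is left multiplication and xi = [1] is cyclic;
# sigma is the corresponding vector state.
a = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
xi = np.eye(2)
assert np.isclose(ip(xi, a @ xi), sigma(a))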


Remark 15.2. There is a uniqueness theorem to complement the existence theo-
rem provided by the GNS construction. Namely, suppose that ρ : A → B(H) is a
cyclic representation with cyclic vector η. Then a vector state of A is defined by
σ(a) = hη, ρ(a)ηi, and from this state one can build according to the GNS con-
struction a cyclic representation ρσ : A → B(Hσ ) with cyclic vector ξ. The desired
uniqueness statement is then that this newly-built representation is unitarily equiv-
alent to the original one: in other words there is a unitary isomorphism U : H → Hσ ,
intertwining ρ and ρσ , and mapping η to ξ. It is easy to see this: from the GNS
construction the map
ρ(a)η 7→ [a]
is a well-defined isometric map from the dense subspace ρ[A]η of H to the dense
subspace A/N of Hσ ; so it extends by continuity to the required unitary isomor-
phism.
We are now going to show that every C ∗ -algebra has plenty of states, enough in
fact to produce a faithful Hilbert space representation by the GNS construction.
Lemma 15.3. Let A be a C ∗ -algebra. For any positive a ∈ A, there is a state σ of
A for which σ(a) = kak.
Proof. In view of Corollary 14.12 we may assume that A is unital. The commutative
C ∗ -algebra C ∗ (a) = C(Spectrum(a)) admits a state ϕ (in fact, a ∗-homomorphism
to C) corresponding to a point of Spectrum(a) where the spectral radius is attained;
for this ϕ, one has ϕ(a) = kak, and of course ϕ(1) = 1, kϕk = 1. Using the Hahn–
Banach theorem we may extend ϕ to a linear functional σ on A of norm one. Since
σ(1) = 1, Proposition 14.11 shows that σ is a state. 
Remark 15.4. By the same argument we may show that if a ∈ A is normal, there is
a state σ such that |σ(a)| = kak.
Theorem 15.5. (Gelfand-Naimark Representation Theorem) Every C ∗ -algebra
is ∗-isomorphic to a C ∗ -subalgebra of some B(H).
Proof. Let A be a C ∗ -algebra. For each a ∈ A, use the previous lemma to manufac-
ture a state σa having σa (a∗ a) = kak2 , and then use the GNS construction to build
a representation ρa : A → B(Ha ), with cyclic vector ua , from the state σa . We have
kρa (a)ua k2 = hρa (a∗ a)ua , ua i = σa (a∗ a) = kak2 . It follows that ρ = ⊕a∈A ρa is a
faithful representation, as required. 
When A is separable the enormously large Hilbert space we used above can be
replaced by a separable one; this is a standard argument.
Remark 15.6. We have applied the GNS construction only to C ∗ -algebras. The
construction, however, has a wider scope. Let A be a unital complex ∗-algebra
(without topology). A linear functional ϕ : A → C is said to be positive if ϕ(a∗ a) > 0
for all a ∈ A. It is representable if, in addition, for each x ∈ A there exists a constant
cx > 0 such that
ϕ(y ∗ x∗ xy) 6 cx ϕ(y ∗ y)


for all y ∈ A.
By following through the GNS construction exactly as above, one can show that to
any representable linear functional ϕ there corresponds a Hilbert space representation
ρ : A → B(H) and a cyclic vector u such that ϕ(a) = hρ(a)u, ui.
Representability is automatic for positive linear functionals on unital Banach
algebras:
Lemma 15.7. Any positive linear functional ϕ on a unital Banach ∗-algebra7 A is
continuous and representable.
Proof. Suppose x = x∗ ∈ A with kxk < 1. Using Newton’s binomial series we may
construct a selfadjoint
y = (1 − x)^{1/2} = 1 − (1/2)x − (1/8)x2 − · · ·
with y 2 = 1 − x. We conclude that ϕ(1 − x) > 0 by positivity, so ϕ(x) 6 ϕ(1). Now
for any a ∈ A we have
|ϕ(a)|2 6 ϕ(1)ϕ(a∗ a) 6 ϕ(1)2 ka∗ ak 6 ϕ(1)2 kak2
(the first inequality following from Cauchy-Schwarz and the second from the pre-
ceding discussion), so ϕ is continuous and kϕk 6 ϕ(1). To show representability, we
apply the preceding discussion to the positive linear functional a 7→ ϕ(y ∗ ay) to get
the desired conclusion with cx = kx∗ xk. 
It was shown by Varopoulos that this theorem extends to non-unital Banach
algebras having bounded approximate units.

7By definition, we require that the involution on a Banach ∗-algebra should be isometric.


Lecture 16
Abelian von Neumann algebras and the spectral theorem

Now we will discuss the GNS construction in the abelian case, and what can be
obtained from it. Let us begin by constructing a standard example of an abelian
von Neumann algebra.
Let (X, µ) be a finite measure space. For each essentially bounded function f ∈
L∞ (X, µ), the corresponding multiplication operator Mf on the Hilbert space H =
L2 (X, µ) is bounded, with operator norm equal to the L∞ -norm of f ; and in this
way one identifies L∞ (X, µ) with a ∗-subalgebra of B(H).
Proposition 16.1. The algebra L∞ (X, µ) is a von Neumann algebra of operators
on L2 (X, µ).
Proof. We will show that M = L∞ (X, µ) is equal to its own commutant. Suppose
that T ∈ B(H) commutes with each Mf , and let g = T · 1, where 1 denotes the
constant function 1, considered as an element of H. Then for all f ∈ L∞ (X, µ) ⊆
L2 (X, µ) we have
T f = T Mf 1 = Mf T 1 = Mf g = f g.
∞ 2
Since L is dense in L it follows that T = Mg , and standard estimates show that
kgk∞ 6 kT k, so that g ∈ L∞ . Thus M 0 ⊆ M ; but M ⊆ M 0 since M is abelian, and
consequently M = M 0 . 
The proposition still holds for a σ-finite measure space (X, µ), with essentially
the same proof.
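A toy finite version of this is worth keeping in mind (Python with numpy; the four-point measure space with counting measure is a hypothetical choice). There L∞ becomes the algebra of diagonal matrices acting on C^4, and a direct computation shows that anything commuting with a diagonal matrix with distinct entries is itself diagonal, so the algebra equals its own commutant.

import numpy as np

n = 4
f = np.array([0.5, 1.0, 2.0, 3.0])        # an "L^infty function" with distinct values
Mf = np.diag(f)                           # the multiplication operator on l^2({1,...,4})

rng = np.random.default_rng(4)
T = rng.standard_normal((n, n))

# T commutes with Mf iff (f_i - f_j) T_ij = 0 for all i, j; with distinct f_i
# this forces T to be diagonal.
commutes = np.allclose(Mf @ T - T @ Mf, 0)
diag_part = np.diag(np.diag(T))
assert not commutes                                        # a random T does not commute
assert np.allclose(Mf @ diag_part, diag_part @ Mf)         # but its diagonal part does
# Every matrix commuting with Mf is diagonal, so the algebra of diagonal
# matrices is its own commutant (it is maximal abelian).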
The content of the Spectral Theorem is that this is essentially the only example of
an abelian von Neumann algebra. However, to explain what is meant here we need
to distinguish between two notions of isomorphism for von Neumann algebras (or
other algebras of operators on Hilbert space). Let M ⊆ B(H) and N ⊆ B(K) be
von Neumann algebras. We will say that they are abstractly isomorphic if there is a
∗-isomorphism (an isomorphism of C ∗ -algebras) from M to N ; and we will say that
they are spatially isomorphic if there is a unitary U : H → K such that U M U ∗ = N .
Clearly spatial isomorphism implies abstract isomorphism but the converse is not
true: the 1-dimensional von Neumann algebras generated by the identity operators
on two Hilbert spaces of different finite dimension are abstractly isomorphic, but
they are not spatially isomorphic.
Proposition 16.2. Let X be a compact metrizable space. Every cyclic representa-
tion of C(X) is unitarily equivalent to a representation of C(X) by multiplication
operators on L2 (X, µ), where µ is some regular Borel probability measure on X.
Proof. Let ρ be a cyclic representation of A = C(X) with cyclic vector ξ. Then
σ(f ) = hξ, ρ(f )ξi defines a state on A. By the uniqueness of the GNS construction
(Remark 15.2), ρ is unitarily equivalent to the GNS representation constructed from
the state σ. Now we appeal to the Riesz Representation Theorem from measure
theory (see Rudin, Real and complex analysis, Theorem 2.14, for example): each


state σ of C(X) is the functional of integration with respect to some regular Borel
probability measure µ. The GNS construction then produces the space L2 (X, µ)
and the multiplication representation. 
Let M be an abelian von Neumann algebra in B(H). We say that M is maximal
abelian if it is contained in no larger abelian von Neumann algebra. It is easy
to see that this condition is equivalent to M 0 = M . In particular, the proof of
Proposition 16.1 shows that L∞ (X, µ) is a maximal abelian von Neumann algebra
of operators on L2 (X, µ).
Theorem 16.3. Let M be an abelian von Neumann algebra in B(H), where H is
separable. Then the following are equivalent:
(a) M has a cyclic vector;
(b) M is maximal abelian;
(c) M is spatially equivalent to L∞ (X, µ) acting on L2 (X, µ), for some finite mea-
sure space X.

Proof. Suppose that M has a cyclic vector. We begin by remarking that M has
a separable C ∗ -subalgebra A such that A00 = M . (Proof: The unit ball of B(H)
is separable and metrizable in the strong topology, hence second countable. Thus
the unit ball of M is second countable, hence M itself is separable, in the strong
topology. Let A be the C ∗ -algebra generated by a countable strongly dense subset
of M .) The representation of A on H is cyclic (since A is strongly dense in
M ) so by Proposition 16.2 it is unitarily equivalent to the representation of C(X)
on L2 (X, µ) by multiplication operators. Now L∞ (X, µ) is a von Neumann algebra
which contains C(X) as a strongly dense subset (by the Dominated Convergence
Theorem) so the unitary equivalence must identify L∞ (X, µ) with M . This proves
that (a) implies (c). We have already remarked that (c) implies (b). Finally, suppose
that M is any abelian von Neumann algebra of operators on H. Decompose H as
a countable direct sum of cyclic subspaces for the commutant M 0 , say with unit
cyclic vectors ξn ; and let ξ = Σn 2−n ξn . The projection Pn : H → Hn belongs to
M 00 = M ⊆ M 0 , using the bicommutant theorem and the fact that M is abelian.
Hence each ξn = 2n Pn ξ belongs to M 0 ξ, so ξ is a cyclic vector for M 0 . We have shown
that the commutant of any abelian von Neumann algebra on a separable Hilbert
space has a cyclic vector; in particular if M is maximal abelian, then M = M 0 and
M has a cyclic vector. 
We can use this to get the structure of a general abelian von Neumann algebra,
up to abstract (not spatial!) isomorphism:
Theorem 16.4. Each abelian von Neumann algebra on a separable Hilbert space is
abstractly isomorphic to some L∞ (X, µ).
Proof. Let M be an abelian von Neumann algebra on B(H). Recall from the last
part of the preceding proof that the commutant M 0 has a cyclic vector ξ; let P
denote the orthogonal projection onto M ξ. Then P is a projection in M 0 . The


map α : T 7→ T P is a ∗-homomorphism from M onto the von Neumann algebra
M P , and, by construction, ξ is a cyclic vector for M P (thought of as acting on
the range of P ). Thus M P is isomorphic to an algebra L∞ (X, µ) by the previous
theorem. I claim that Ker(α) = 0; this will show that M and M P are isomorphic
and so will complete the proof. If T ∈ Ker(α) then T ξ = 0. For any R ∈ M 0 , then,
T Rξ = RT ξ = 0; but since ξ is cyclic for M 0 , the vectors Rξ are dense in H, and it
follows that T = 0. 

Let us specialize these constructions to the case of a single operator.


Corollary 16.5. (Spectral Theorem) Let T be a normal operator on a separable
Hilbert space H. Then there exist a measure space (Y, µ) and a unitary equiva-
lence U : H → L2 (Y, µ), such that U T U ∗ is the multiplication operator by some L∞
function on Y .
Proof. Let X be the spectrum of T , so that the functional calculus gives a represen-
tation ρ of C(X) on H. By Proposition 14.14, we can break ρ up into a direct sum
of cyclic representations; there are only countably many summands since H is sepa-
rable. According to Proposition 16.2, each such cyclic representation ρn is unitarily
equivalent to the representation of C(X) by multiplication operators on L2 (X, µn )
for some regular Borel probability measure µn on X. Put now (Y, µ) = ⊔n (X, µn )
to get the result. 
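In finite dimensions the Spectral Theorem is just unitary diagonalization, and the Borel functional calculus amounts to applying f to the eigenvalues. A short sketch (Python with numpy; the Hermitian matrix and the function f are arbitrary illustrative choices):

import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
H = (A + A.conj().T) / 2                     # a selfadjoint (hence normal) matrix

lam, U = np.linalg.eigh(H)                   # H = U diag(lam) U^*
# U^* H U is multiplication by the "function" k -> lam_k on l^2({1,...,5}):
assert np.allclose(U.conj().T @ H @ U, np.diag(lam))

# Borel functional calculus: f(H) is, in the same picture, multiplication by f(lam).
def f(t):
    return np.exp(1j * t)                    # a unimodular function

fH = U @ np.diag(f(lam)) @ U.conj().T
assert np.allclose(fH @ fH.conj().T, np.eye(5))     # |f| = 1, so f(H) is unitary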

Notice that the spectrum of the operator of multiplication by an L∞ function
g on L2 (X, µ) is the essential range of g (that is, the set of complex numbers
z such that µ(g −1 (B(z; ε))) > 0 for each ε > 0). As a corollary of the above
representation theorem we therefore obtain the Borel functional calculus. Let T
be a normal operator as above, unitarily equivalent to multiplication by some L∞
function g on L2 (Y, µ). Given any bounded Borel function f on the spectrum of T ,
we may define f (T ) to be the bounded operator that is unitarily equivalent (via the
same equivalence) to multiplication by f ◦ g.
Proposition 16.6. The Borel functional calculus defined above has the following
properties:
(i) It is well-defined (f (T ) does not depend on the choice of representing space).
(ii) The operator f (T ) belongs to any von Neumann algebra that contains T .
(iii) If fn is a uniformly bounded sequence of Borel functions that tends pointwise
to f , then fn (T ) tends strong-starly to f (T ).
(iv) The norm of f (T ) is at most equal to sup |f |. The operator f (T ) is selfadjoint
if f is real-valued, positive if f > 0, and unitary if |f | = 1.
Proof. Item (iv) is a standard property of multiplication operators. It suffices to
prove (iii), since (i) and (ii) follow using the fact that every bounded Borel function
is a bounded, pointwise limit of continuous functions. To do this observe that if
fn → f pointwise and boundedly, and if ξ ∈ H corresponds under the unitary


equivalence to the L2 function u, then


k(fn (T ) − f (T ))ξk2 = ∫_Y |(fn − f ) ◦ g(y)|2 |u(y)|2 dµ(y) → 0
by Lebesgue’s dominated convergence theorem. 
Example 16.7. Let U be a unitary element of a von Neumann algebra B; then
Spectrum(U ) ⊆ T. Let g be the Borel function T → R that sends z ∈ T to t ∈ [0, 1)
with z = e2πit . Then T = g(U ) is selfadjoint and belongs to B, and e2πiT = U .
We used this fact when discussing the Kaplansky density theorem for unitaries
(Remark 14.3).
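A matrix illustration of this example (Python with numpy; the unitary is built from a hypothetical random spectral decomposition): applying the Borel function g to a unitary U produces a positive contraction T with e^{2πiT} = U, which we check by summing the exponential series.

import numpy as np

rng = np.random.default_rng(6)
V, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
t = rng.uniform(0.0, 1.0, size=4)                       # the values of g(U), in [0, 1)

U = V @ np.diag(np.exp(2j * np.pi * t)) @ V.conj().T    # a unitary with eigenvalues e^{2 pi i t_k}
T = V @ np.diag(t) @ V.conj().T                         # T = g(U), via the spectral decomposition

assert np.allclose(U @ U.conj().T, np.eye(4))           # U is unitary
assert np.allclose(T, T.conj().T)                       # T is selfadjoint, 0 <= T < 1

# check e^{2 pi i T} = U by summing the exponential power series
E = np.eye(4, dtype=complex)
term = np.eye(4, dtype=complex)
for k in range(1, 60):
    term = term @ (2j * np.pi * T) / k
    E = E + term
assert np.allclose(E, U)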
Remark 16.8. An alternative, classic formulation of these results makes use of the
notion of spectral measure (also called resolution of the identity). Let H be a Hilbert
space and let (X, M) be a measurable space, that is a space equipped with a σ-
algebra of subsets. A spectral measure associated to the above data is a map E
from M to the collection P (H) of selfadjoint projections in H, having the following
properties:
(a) E(∅) = 0, E(X) = 1;
(b) E(S ∩ T ) = E(S)E(T );
(c) if S ∩ T = ∅, then E(S ∪ T ) = E(S) + E(T );
(d) E is countably additive relative to the strong operator topology; that is, if Sn is a
sequence of mutually disjoint subsets of X with union S, then E(S) = Σn E(Sn )
(with strong operator convergence).
Given such a spectral measure it is straightforward to define an ‘integral’
∫_X f (x) dE(x) ∈ B(H)
for every function f on X which is bounded and M-measurable; one simply mimics
the usual processes of measure theory. For instance if f is the characteristic function
of some S ∈ M, then we define ∫_X f (x) dE(x) = E(S). By linearity we extend this
of some S ∈ M, then we define X f (x)dE(x) = E(S). By linearity we extend this
definition to simple functions (finite linear combinations of characteristic functions),
and then by a limit argument we extend further to all measurable bounded functions.
We may then formulate the spectral theorem as follows: for every normal operator
T on a separable Hilbert space, there is a resolution of the identity E on the σ-algebra
of Borel subsets of Spectrum(T ), such that
T = ∫_{Spectrum(T )} λ dE(λ).

For the proof, one need only verify that the definition E(S) = χS (T ) (using the Borel
functional calculus described above) describes a resolution of the identity. To show
that T has an integral decomposition as above one approximates the integral by
Riemann sums and uses the fact that, when restricted to the range of the projection
E((k/n, (k + 1)/n]), the operator T lies within 1/n in norm of the operator of
multiplication by k/n.


Remark 16.9. The point of the spectral theorem is that it provides a set of canonical
models (up to unitary equivalence) for normal operators: every normal operator is
equivalent to one of the models. For a complete picture, this existence statement
needs to be supplemented by a uniqueness statement saying when two of the models
are or are not unitarily equivalent to one another. This is not a completely straight-
forward matter. Even in the finite dimensional case, two normal operators with the
same spectrum need not be unitarily equivalent (because the multiplicities of the
eigenvalues may differ). In the infinite-dimensional case this “multiplicity” idea has
to be elaborated with a heavy dose of measure theory. (Example: Let X denote the
multiplication operator by x on L2 [−1, 1]. Show that X and X ⊕X are not unitarily
equivalent. Note that both operators have spectrum [−1, 1] and no eigenvalues at
all.) The final result is called spectral multiplicity theory (see Halmos’ book with
that title). We won’t discuss it further here.


Lecture 17
Pure States and Irreducible Representations

In this section we will begin a more detailed study of the representation theory of
a C ∗ -algebra A. By application to group C ∗ -algebras, this theory will include the
theory of unitary representations of groups. We’ll discuss this connection.
Let A be a C ∗ -algebra (unital for simplicity). Recall that a linear functional
σ : A → C is a state if it has norm one and moreover σ(1) = 1. Thus the collection
of states on A is a weak-star compact8, convex subset of A∗ . It is called the state
space of A, and denoted by S(A).
The general yoga of functional analysis encourages us to consider the extreme
points of S(A), called the pure states. By the Krein–Milman Theorem, S(A) is the
closed convex hull of the pure states.
Lemma 17.1. Let σ be a state of a C ∗ -algebra A, and let ρ : A → B(H) be the
associated GNS representation, with unit cyclic vector ξ. For any positive functional
ϕ 6 σ on A there is an operator T ∈ ρ[A]0 such that
ϕ(a) = hξ, ρ(a)T ξi
and 0 6 T 6 1.
This is a kind of ‘Radon-Nikodym Theorem’ for states.
Proof. Recall that H is obtained by completing the inner product space A/N , where
N is the set {a ∈ A : σ(a∗ a) = 0}. Define a sesquilinear form on A/N by (a, b) 7→
ϕ(a∗ b). This form is well-defined, positive semidefinite, and bounded by 1, so there
is an operator T on H such that
ϕ(a∗ b) = h[a], T [b]i = hρ(a)ξ, T ρ(b)ξi.
The computation
h[a], T ρ(b)[c]i = ϕ(a∗ bc) = h[b∗ a], T [c]i = h[a], ρ(b)T [c]i
now shows that T commutes with ρ[A]. 
Proposition 17.2. A state is pure if and only if the associated GNS representation
is irreducible.
Proof. Suppose first that the representation ρ : A → B(H) is reducible, say H =
H1 ⊕ H2 where H1 and H2 are closed, A-invariant subspaces. Write a cyclic vector
ξ = ξ1 + ξ2 where ξi ∈ Hi , i = 1, 2. We cannot have ξ1 = 0, otherwise ρ[A]ξ ⊆ H2 ;
and for similar reasons we cannot have ξ2 = 0. Now we have
σ(a) = hξ, ρ(a)ξi = hξ1 , ρ(a)ξ1 i + hξ2 , ρ(a)ξ2 i
8Warning: In the non-unital case the state space is not closed in the unit ball of A∗ , and hence
is not compact, because the condition lim σ(uλ ) = 1 is not preserved by weak-star convergence—
the zero functional can arise as a weak-star limit of states (consider the evaluations at the points
1, 2, 3, . . . as functionals on C0 (R)). One must replace the state space by the quasi-state space
Q(A) consisting of positive functionals of norm 6 1. See Pedersen’s book for details.


which is a nontrivial convex combination of the states corresponding to the unit
vectors ξ1 /kξ1 k and ξ2 /kξ2 k. Thus σ is impure.
Conversely, suppose that σ = tσ1 + (1 − t)σ2 where σ1 , σ2 are states and 0 <
t < 1. Consider the GNS representation ρ associated to σ, with cyclic vector ξ.
By Lemma 17.1 above there is an operator T in the commutant ρ[A]0 such that
tσ1 (a) = hξ, ρ(a)T ξi. If ρ is irreducible then ρ[A] has trivial commutant and so
T = cI for some constant c and thus σ1 = cσ. Normalization shows σ1 = σ.
Similarly σ2 = σ and thus σ is pure. 
Remark 17.3. It is important to know that there are enough irreducible unitary
representations of a C ∗ -algebra A to separate points of A, that is to say C ∗ -algebras
are semi-simple. To show this we need to refine Lemma 15.3 to show that, for any
positive a ∈ A, there exists a pure state σ with σ(a) = kak. Here is how to do this:
Lemma 15.3 shows that states having this property exist. The collection Σ of all
states having this property is a closed convex subset of the state space, so (by the
Krein–Milman Theorem) it has extreme points — indeed it is the closed convex hull
of these extreme points. Let σ be an extreme point of Σ. Now suppose that σ is a
convex combination σ = tσ1 + (1 − t)σ2 , 0 < t < 1, where σ1 , σ2 are arbitrary states.
The inequalities
σ(a) = kak, σ1 (a) 6 kak, σ2 (a) 6 kak
now imply that σ1 , σ2 ∈ Σ. Hence σ1 = σ2 = σ since σ is extreme in Σ. Thus σ is
extreme in S(A) and we are done.
Definition 17.4. Let A be a (unital) C ∗ -algebra. The spectrum of A, denoted Â, is
the collection of unitary (= spatial) equivalence classes of irreducible representations
of A. We topologize it as a quotient of the pure state space: that is, we give Â the
topology which makes the surjective map P (A) → Â, which associates to each pure
state its GNS representation, open and continuous. This is called the Fell topology.
Remark 17.5. The unitary group of A acts on P (A): σ u (a) = σ(u∗ au). Clearly,
σ u and σ generate the same irreducible representation. The converse is also true
(exercise! see Theorem 10.2.6 in Kadison and Ringrose). Thus Â = P (A)/U (A).

Remark 17.6. Let A = C(X) be commutative. The states of A are probability measures on
X; the pure states are the Dirac δ-measures at the points of X. The correspond-
ing irreducible representations are 1-dimensional, just given by the corresponding
characters of C(X) (homomorphisms to C). It is clear that the irreducible represen-
tations corresponding to distinct points of X are not unitarily equivalent. In this
case therefore we have P (A) = Â = X. Needless to say the general case is not so
simple.
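As a quick concrete illustration (a sketch, not part of the argument), take X finite with |X| = n and identify C(X) with Cⁿ. Assuming numpy is available, one can see by hand that a state is a probability vector of weights and that the Dirac masses are exactly the extreme points of that simplex:

```python
import numpy as np

# A state on C(X), X = {0,...,n-1}, is sigma(f) = sum_i p_i f(i) with p_i >= 0, sum p_i = 1.
n = 4
rng = np.random.default_rng(0)
p = rng.random(n); p /= p.sum()        # a generic (impure) state
f = rng.standard_normal(n)             # a real-valued "function" on X

diracs = np.eye(n)                     # the n pure states delta_0, ..., delta_{n-1}
sigma = p @ f
# sigma is the convex combination sum_i p_i * delta_i of the Dirac states:
assert np.isclose(sigma, sum(p[i] * (diracs[i] @ f) for i in range(n)))
# A Dirac mass cannot be a nontrivial convex combination of states with larger support,
# which is why the delta_i are exactly the extreme points of the state space here.
print("sigma(f) =", sigma)
```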
Exercise 17.7. Show “by hand” (i.e. without using the above general theory) that
every irreducible representation of a commutative C ∗ -algebra is 1-dimensional.
Let ρ : A → B(H) be a representation. Then the kernel of ρ is an ideal of A.


Definition 17.8. An ideal in a C ∗ -algebra A is called primitive if it arises as the
kernel of an irreducible representation. The space of primitive ideals is denoted
Prim(A).
Remark 17.9. The C ∗ -algebra A itself is called primitive if 0 is a primitive ideal
(that is, if A has a faithful irreducible representation). Since there are plenty of
irreducible representations by Remark 17.3, every simple C ∗ -algebra is primitive (a
simple C ∗ -algebra is one with no non-trivial ideals). The example A = B(H) shows
that there are primitive C ∗ -algebras that are not simple.
For the statement of the next result, recall from ring theory that a proper ideal I
in a ring is prime if, whenever an intersection J1 ∩ J2 of two ideals is a subset of I,
then either J1 ⊆ I or J2 ⊆ I.
Lemma 17.10. Every primitive ideal of a C ∗ -algebra is closed and prime.
Proof. We begin by proving a general fact. Suppose that ρ : A → B(H) is a rep-
resentation and J is an ideal in A. Then the orthogonal projection P onto the closed⁹
subspace ρ[J]H lies in the center of the bicommutant ρ[A]′′ . To see this, note that
the subspace ρ[J]H is both ρ[A]-invariant (because J is an ideal) and ρ[J]′ -invariant.
Therefore P lies in ρ[A]′ ∩ ρ[J]′′ . But this is contained in ρ[A]′ ∩ ρ[A]′′ , which is simply
the center of the bicommutant.
If we apply this when ρ is irreducible, the bicommutant of ρ[A] is B(H) which
has trivial center; so ρ[J]H is either zero or H, and J ⊆ Ker(ρ) precisely when
ρ[J]H = 0. It is now easily seen that if J1 and J2 are ideals such that J1 J2 ⊆ Ker(ρ),
then at least one of them is itself contained in Ker(ρ). That is, Ker(ρ) is prime. 
The converse of this result is also true, at least for separable C ∗ -algebras. How-
ever, to prove this will require something of a detour.
Let R be any ring. The collection of prime ideals of R always has a topology,
called the Zariski topology, according to which the non-empty closed sets of prime
ideals are just those sets of the form
Hull(X) = {p : X ⊆ p}
as X ranges over the set of subsets of R. (To prove that this defines a topology
you need to use the defining property of a prime ideal; it is also helpful to observe
that Hull(X) = Hull(JX ), where JX is the intersection of all the ideals containing
X. The closure of any set S of prime ideals is the set Hull(⋂p∈S p). The intersec-
tion appearing here is sometimes called the ‘kernel’ of S, a terminology that I find
confusing.) Since primitive ideals in a C ∗ -algebra A are prime, the Zariski topology
gives rise to a topology on Prim(A), which in this context is called the Jacobson
topology or the hull-kernel topology.
Remark 17.11. The Zariski topology is rather bad from the point of view of the
separation axioms. A single point of Prim(A) is closed iff it represents a maximal
9See Lemma 14.7.


ideal. Consequently, unless all prime ideals are maximal (a rare occurrence) the
topology is not even T1 , let alone Hausdorff. However, it is T0 — given any two
distinct points, at least one of them is not in the closure of the other one. (Proof is
an easy exercise, but a good check that you are following the definitions.)
Exercise 17.12. If A is unital show that Prim(A) is compact.


Lecture 18
Around Fell’s theorem

Theorem 18.1. (Fell) Let A be a separable C ∗ -algebra. The canonical map Â →
Prim(A) is a topological pullback: that is, the open sets in Â are precisely the inverse
images of the open sets in Prim(A).
Corollary 18.2. The canonical map P (A) → Prim(A) is a quotient map.
The converse of Lemma 17.10 in the separable case is an important deduction
from this:
Proposition 18.3. (Dixmier) In a separable, unital C ∗ -algebra A every closed
prime ideal is primitive.
Proof. The space P (A) sits inside the unit ball of A∗ , which is weak-∗ compact and
metrizable because A is separable; in particular P (A) is second countable, and one can show that it is a Baire space.
This latter assertion means that the Baire category theorem holds: each countable
intersection of dense open sets is dense. Both properties pass to quotients, and so
we conclude that Prim(A) is also a second countable Baire space.
Now suppose that 0 is a prime ideal of A (the general case can be reduced to this
one by passing to quotients), in other words that any two nontrivial ideals of A have
nontrivial intersection. We want to prove that 0 is a primitive ideal, that is, A has
a faithful irreducible representation. I claim first that every nonempty open subset
of Prim(A) is dense. Indeed, let U ⊆ Prim(A) be a nonempty open set, and let I and J be the ideals
I = ⋂p∈U p,    J = ⋂p∈Prim(A)\U p

(these are the “kernels” of U and Prim(A) \ U in the terminology mentioned above).
Since Prim(A) \ U is closed, it is equal to the hull of its kernel J; in particular,
since U is nonempty, J 6= 0. Note that I ∩ J is the intersection of the kernels of
all irreducible representations; so by Remark 17.3 I ∩ J = 0. Since nontrivial ideals
must intersect nontrivially, and J is nontrivial, it follows that I = 0, and therefore
that its hull is all of Prim(A). But Hull(I) is the closure of U , so U is dense.
Now applying the Baire category theorem to a countable base for the topology we
find that there is a dense point in Prim(A). This point corresponds to a primitive
ideal which is contained in every primitive ideal, and since the irreducible represen-
tations separate points of A, the only possibility for such an ideal is (0) which is
therefore primitive, as required. 
Remark 18.4. It is worthwhile to ask when the natural map Â → Prim(A) will be a
bijection (and therefore a homeomorphism). In such a circumstance an irreducible
representation of A is determined up to spatial equivalence by its kernel. We know
this is true for commutative C ∗ -algebras but we will (perhaps?) see that it extends
to a much wider class of algebras, the so-called type I algebras. These include many
algebras occurring in the representation theory of Lie groups, for example. The


antithesis of type I algebras are antiliminary algebras. UHF algebras (to be studied
later) provide examples of these.
Exercise 18.5. Show that Â → Prim(A) is bijective iff Â is a T0 space.
Now we will prove Theorem 18.1.
Lemma 18.6. Let A be a C ∗ -algebra, let E be a collection of states of A, and suppose
that for every a > 0 in A we have
sup{σ(a) : σ ∈ E} = kak.
Then the weak-∗ closure of E contains every pure state.
Proof. Once again, we stick with unital A. We’ll show in fact that the weak-∗ closed
convex hull C of E is the entire state space S of A. This will suffice, since the extreme
points of the closed convex hull of E must belong to the closure of E by Milman’s
theorem 11.19.
Think of Asa as a real Banach space, and think of C and S as compact convex
subsets of the LCTVS V which is the dual of Asa equipped with the weak-∗ topology.
The dual of V is then Asa once again. Suppose for a contradiction that C 6= S,
so that there exists ϕ ∈ S \ C. By the Hahn-Banach separation theorem applied to the space
V , there exist a ∈ Asa and c ∈ R such that ϕ(a) > c whereas σ(a) < c for all
σ ∈ C. We may assume a is positive (add a large multiple of 1). Now we get
ϕ(a) > sup{σ(a) : σ ∈ E} = kak which is a contradiction. 
Remark 18.7. A similar but easier Hahn-Banach argument proves that if S is the
state space of a unital C ∗ -algebra A, then the convex hull of S ∪ (−S) is the space of
all real linear functionals on A of norm 6 1. For suppose, if possible, that ψ is a real
linear functional of norm 1 not belonging to the convex hull K of S ∪ (−S). Then
by the Hahn–Banach theorem there exists a ∈ Asa and α ∈ R with ψ(a) > α and
ϕ(a) 6 α for all ϕ ∈ K. The latter condition implies in particular that |σ(a)| 6 α
for all σ ∈ S. But kak = sup{|σ(a)| : σ ∈ S} by Lemma 15.3 and the remark
following, so we get ψ(a) > kak, a contradiction.
Proof of Theorem 18.1. Let Π ⊆ Â be a set of irreducible representations of A, and
let ρ be another such representation. Then ρ belongs to the closure of Π for the
quotient topology from P (A) if and only if
(a) some vector state associated to ρ is a weak-∗ limit of states associated to repre-
sentations π ∈ Π.
On the other hand, ρ belongs to the closure of Π for the pull-back of the Jacobson
topology from Prim(A) if and only if
(b) the kernel Ker(ρ) contains ⋂{Ker(π) : π ∈ Π}.
We need therefore to prove that conditions (a) and (b) are equivalent.
Suppose (b). By passing to the quotient, we may assume without loss of generality
that ⋂{Ker(π) : π ∈ Π} = 0. The direct sum of all the representations π is then


faithful, and so for every positive a ∈ A


sup{σ(a) : σ ∈ E} = kak,
where E denotes the collection of vector states associated to the representations
belonging to Π. By the lemma, every pure state of A (and every state associated to
ρ in particular) belongs to the weak-∗ closure of E. This proves (a).
Conversely suppose (a). Let J denote the ideal ⋂{Ker(π) : π ∈ Π}. Suppose that
the state σ(a) = hξ, ρ(a)ξi is a weak-∗ limit of states associated to representations
π ∈ Π. All such states vanish on every a ∈ J, so σ(a) = 0 for all a ∈ J. But then
for all y, z ∈ A,
hρ(y)ξ, ρ(a)ρ(z)ξi = σ(y ∗ az) = 0
since J is an ideal. Because ξ is a cyclic vector for ρ this implies that ρ(a) = 0, so
Ker(ρ) ⊇ J, which is (b). 
Remark 18.8. Using Voiculescu’s theorem (which may be discussed later) one can
show that two points of Â map to the same point of Prim(A) if and only if the
corresponding representations are approximately unitarily equivalent; we say that
two representations ρ1 and ρ2 are approximately unitarily equivalent if there exists
a sequence of unitaries Un such that, for each a ∈ A, ρ1 (a)Un − Un ρ2 (a) → 0 as
n → ∞. (It is clear that this condition implies that the two representations ρ1 , ρ2
have the same kernel; the problem is to prove the converse.)


Lecture 19
Representations by compact operators

Now we will discuss representations by compact operators.


Lemma 19.1. Let A be a C ∗ -algebra, let J be an ideal in A, and let ρJ : J →
B(H) be a non-degenerate representation. There is a unique extension of ρJ to
a representation ρA : A → B(H). Moreover, ρA is irreducible if and only if ρJ is
irreducible.
Proof. By definition of “non-degenerate”, every ξ ∈ H is of the form ξ = ρJ (j)ξ ′ for
some j ∈ J, ξ ′ ∈ H. Define then
ρA (a)ξ = ρJ (aj)ξ ′ .
To see that this is well-defined, take an approximate unit uλ for J and write
ρJ (aj)ξ ′ = lim ρJ (auλ j)ξ ′ = lim ρJ (auλ )ξ,
which also shows that ρA (a) is the strong limit of the operators ρJ (auλ ) and therefore
has norm bounded by kak. It is routine to verify that ρA is a ∗-representation of A.
It is obvious that if ρA is reducible, so is ρJ . On the other hand, if ρJ has a proper
closed invariant subspace H ′ say, the construction by way of an approximate unit
shows that ρA will map H ′ to H ′ . So ρA is reducible too. 
Lemma 19.2. Let H be a Hilbert space and let A ⊆ K(H) be a C ∗ -algebra of
compact operators on H that is irreducibly represented on H. Then A = K(H).
Proof. Since A consists of compact operators, it contains finite-rank projections by
the functional calculus. Let P be such a projection, having minimal rank. I claim
that P is of rank one. To see this, note first that P T P = λT P for all T ∈ A (one
may assume T to be selfadjoint, and if the assertion were false then a nontrivial
spectral projection of P T P would have smaller rank than P , contradiction). Now
if there are orthogonal vectors ξ, η in the range of P , then for all T ∈ A,
hT ξ, ηi = hP T P ξ, ηi = λT hξ, ηi = 0.
Thus Aξ is a nontrivial invariant subspace, contradicting irreducibility. This estab-
lishes the claim that P has rank one.
Any rank-one projection P has the form
P (η) = ξhξ, ηi
for some unit vector ξ. Since the representation of A is nondegenerate and irre-
ducible, the Kadison transitivity theorem 14.4 tells us that for any unit vector ξ ′ ∈ H
there exists an element T ∈ A with T ξ = ξ ′ . The operator T P T ∗ ∈ A is a multiple
of the orthogonal projection onto the 1-dimensional subspace spanned by T ξ. It
follows that A contains all rank-one orthogonal projections, hence it contains the
compacts. 


Remark 19.3. We can use these ideas to pay a debt from 9.3, namely, to give the
proof that every simple finite-dimensional C ∗ -algebra is a matrix algebra. Indeed, let
A be such an algebra. Choose a non-trivial state and apply the GNS construction;
we obtain a finite-dimensional representation H which is non-trivial and therefore
faithful (its kernel must be zero). A standard argument shows that if V is a sub-
representation, so is V ⊥ ; so, by induction, H is a direct sum of irreducible faithful
representations. Picking just one of these we see that A is a matrix algebra by
Lemma 19.2.
Corollary 19.4. The algebra K(H) is simple (it has no non-trivial ideals).
Proof. Let J ⊆ K(H) be a non-zero ideal. The subspace J · H of H is invariant
under K(H), so it is all of H. Thus J is non-degenerately represented on H. By
Lemma 19.1, this representation of J is irreducible. By Lemma 19.2, J = K(H). 
Corollary 19.5. If A is an irreducible C ∗ -subalgebra of B(H) which contains a
single non-zero compact operator, then A contains all of K(H).
Proof. Consider the ideal J = A ∩ K(H) in A. The space J · H is A-invariant, and
A is irreducible, so JH = H and J is non-degenerate. Now we argue as before: by
Lemma 19.1 J is irreducible, and then by Lemma 19.2, J = K(H). 
Remark 19.6. Of course it follows that Prim(K(H)) consists of a single point (cor-
responding to the zero ideal). Let us also go on to prove that K(H) has only one
irreducible representation, so that the spectrum of K(H) also consists of just a single point. Let
ρ : K(H) → B(H ′ ) be an irreducible representation. Let P be a rank-one projection
in K(H). Since K(H) is simple, ρ is faithful; so ρ(P ) is a nonzero projection on H ′ .
Now let ξ ′ ∈ H ′ be a unit vector in the range of the projection ρ(P ), and let σ be
the associated state, that is
σ(T ) = hξ ′ , ρ(T )ξ ′ i.
Since ρ is irreducible, ξ ′ is a cyclic vector. By the GNS uniqueness theorem, ρ is
unitarily equivalent to the GNS representation associated to the state σ. But
σ(T ) = hξ ′ , ρ(P T P )ξ ′ i = λT = Tr(P T P )
and the GNS representation associated to this state is the standard one.
A C ∗ -algebra A is called liminal if, for every irreducible representation ρ : A →
B(H), one has ρ(A) = K(H). (Kaplansky originally called these CCR algebras,
for “completely continuous representations”, “completely continuous” being an old
term for what are nowadays called compact operators; but this abbreviation is not
used much anymore.) The argument in the above remark shows that for every
liminal C ∗ -algebra A, Â = Prim(A); irreducible representations are determined up
to spatial equivalence by their kernels.
Definition 19.7. A positive element x of a C ∗ -algebra A is abelian if the hereditary
subalgebra xAx is commutative.


For example, a rank one projection in K(H) is abelian.


Proposition 19.8. If a C ∗ -algebra is generated by its abelian elements, then it is
liminal. More generally, if an ideal in a C ∗ -algebra is generated (as an ideal ) by
abelian elements, it is liminal.
Proof. Let A be such a C ∗ -algebra and let ρ : A → B(H) be an irreducible repre-
sentation. Let x ∈ A be an abelian element. Then ρ(x)ρ(A)ρ(x) is a commutative
subalgebra of B(H). Since ρ(A) is strongly dense in B(H) by the bicommutant
theorem, ρ(x)B(H)ρ(x) is abelian. It follows easily that ρ(x) has rank one at most.
Thus, in particular, ρ(x) is compact. So ρ(A) is contained in the compact operators,
since the abelian elements generate A. Moreover, at least one ρ(x) is nonzero, so
ρ(A) = K(H) by Lemma 19.2.
The proof for an ideal is the same if we make use of Lemma 19.1; an irreducible
representation ρ of an ideal J ◁ A extends to an irreducible representation of A, and ρ(x) is
compact for each abelian x ∈ J. 
Remark 19.9. A C ∗ -algebra A is called postliminal or Type I if every non-zero
quotient of A contains a non-zero abelian element. Clearly a liminal C ∗ -algebra is
postliminal. Conversely, suppose that A is postliminal. Define ideals I0 , I1 , . . . of
A by transfinite induction as follows: I0 = 0, I1 is the ideal generated by all the
abelian elements of A, I2 is the ideal generated by all elements whose images in
A/I1 are abelian, and so on (at a limit ordinal, take the union of all the preceding
ideals and form the closure). The postliminal assumption means that the process
should terminate with Iβ = A for some ordinal β, and the successive quotients in
this “composition series” are generated by abelian elements and therefore are liminal
by Proposition 19.8. Thus a postliminal algebra is obtained from liminal ones by a
(possibly transfinite) “composition series”. If A is not postliminal, the construction
terminates at a maximal postliminal ideal J of A. The quotient A/J then has no
nonzero abelian elements at all. Such algebras are called antiliminal.


Lecture 20
The Toeplitz algebra and its representations

Let H = `2 be the usual sequence space. The unilateral shift operator V is the
isometry H → H given by
V (x0 , x1 , x2 , . . .) = (0, x0 , x1 , . . .).
Notice that V ∗ V = 1, whereas V V ∗ = 1 − P , where P is the rank one projection
onto the span of the basis vector e0 = (1, 0, 0, . . .). Thus V is a Fredholm operator,
of index −1.
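These relations are easy to inspect numerically on a finite truncation of the shift. A small sketch assuming numpy is available (no finite matrix can be a proper isometry, so V∗V = 1 only holds away from the truncation edge, while VV∗ = 1 − P is exact even on the truncation):

```python
import numpy as np

# Finite N x N truncation of the unilateral shift: V e_j = e_{j+1}.
N = 8
V = np.eye(N, k=-1)                      # ones on the subdiagonal
P = np.zeros((N, N)); P[0, 0] = 1.0      # rank-one projection onto span(e_0)

print(np.allclose(V.T @ V, np.eye(N)))                    # False: edge defect at (N-1, N-1)
print(np.allclose((V.T @ V)[:-1, :-1], np.eye(N - 1)))    # True away from the edge
print(np.allclose(V @ V.T, np.eye(N) - P))                # True: V V* = 1 - P exactly
```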
Definition 20.1. The Toeplitz C ∗ -algebra T is the C ∗ -subalgebra of B(H) generated
by the unilateral shift operator.
Lemma 20.2. The algebra T is irreducible and contains the compact operators.
Proof. Suppose that K ⊆ H is a closed invariant subspace for T. If K 6= 0, then K
contains a nonzero vector, and by applying a suitable power of V ∗ if necessary we
may assume that K contains a vector v = (x0 , x1 , x2 , . . .) with x0 6= 0. Since P ∈ T
we find that P v = x0 e0 ∈ K and so e0 ∈ K. Now applying powers of V we find
that each en = V n e0 ∈ K, so K = H. Thus T is irreducible. Since it contains the
compact operator P , the second statement follows from Corollary 19.5. 
The unilateral shift operator is unitary modulo the compacts, and so its essen-
tial spectrum, that is the spectrum of its image in the Calkin algebra Q(H) =
B(H)/K(H), is a subset of the unit circle S 1 .
Lemma 20.3. The essential spectrum of V is the whole unit circle.
Proof. This is a consequence of Fredholm index theory. Suppose the contrary: then
there is a continuous path of complex numbers λ, running from λ = 0 to λ = 2,
that avoids the essential spectrum of V . But then λ 7→ (V − λ) is a continuous
path of Fredholm operators, and Index V = −1 as we observed above, whereas
Index(V − 2) = 0 since V − 2 is invertible. This is a contradiction. 
We see therefore that the C ∗ -algebra T/K is commutative, generated by a single
unitary element whose spectrum is the whole unit circle. Consequently there is a
short exact sequence of C ∗ -algebras
0 → K → T → C(S 1 ) → 0
called the Toeplitz extension.
It is helpful to make the link with complex function theory.
Definition 20.4. Let S 1 denote the unit circle in C. The Hardy space H 2 (S 1 ) is
the closed subspace of L2 (S 1 ) spanned by the functions z n , for n > 0. A Toeplitz
operator on H 2 (S 1 ) is a bounded operator Tg of the form
Tg (f ) = P (gf ) (f ∈ H 2 (S 1 )),


where g ∈ L∞ (S 1 ) and P is the orthogonal projection from L2 (S 1 ) onto H 2 (S 1 ).


The function g is called the symbol of Tg .
Lemma 20.5. The C ∗ -algebra generated by the Toeplitz operators is isomorphic to
T, and the map g 7→ Tg is a linear splitting for the quotient map T → C(S 1 ) that
appears in the Toeplitz extension.
Proof. We have V = Tz (in the obvious basis) so certainly the C ∗ -algebra generated
by the Toeplitz operators contains T. In the other direction, suppose that p(z) =
a−n z̄ n +· · ·+a0 +· · ·+an z n is a trigonometric polynomial. Then Tp = a−n (V ∗ )n +· · ·+
a0 1+· · ·+an V n belongs to T. Since kTg k 6 kgk, we find using the Stone-Weierstrass
theorem that Tg belongs to T for each continuous function g, and therefore that the
C ∗ -algebra generated by the Tg is contained in T. 
Toeplitz operators serve as a prototype for the connection between C ∗ -algebras,
index theory, and K-homology. It is not the purpose of this course to focus on
these connections, but we should at least mention the fundamental index theorem
for Toeplitz operators. Recall that the winding number wn(g) ∈ Z of a continuous
function g : S 1 → C \ {0} is its class in the fundamental group π1 (C \ {0}), which
we identify with Z in such a way that the winding number of the function g(z) = z
is +1.
Theorem 20.6. (Toeplitz Index Theorem) If Tg is a Toeplitz operator on
H 2 (S 1 ) whose symbol is a continuous and nowhere vanishing function on S 1 then
Tg is a Fredholm operator and
Index(Tg ) = − wn(g).
Proof. Denote by Mg the operator of pointwise multiplication by g on L2 (S 1 ). The
set of all continuous g for which the commutator P Mg − Mg P is compact is a
C ∗ -subalgebra of C(S 1 ). A direct calculation shows that g(z) = z is in this C ∗ -
subalgebra, for in this case the commutator is a rank-one operator. Since the func-
tion g(z) = z generates C(S 1 ) as a C ∗ -algebra (by the Stone–Weierstrass Theorem)
it follows that P Mg − Mg P is compact for every continuous g. Identifying Tg with
P Mg , we see that
Tg1 Tg2 = P Mg1 P Mg2 = P P Mg1 Mg2 + compact operator
= P Mg1 g2 + compact operator
= Tg1 g2 + compact operator,
for all continuous g1 and g2 . In particular, if g is continuous and nowhere zero then
Tg−1 is inverse to Tg , modulo compact operators, and so by Atkinson’s Theorem,
Tg is Fredholm. Now the continuity of the Fredholm index shows that Index(Tg )
only depends on the homotopy class of the continuous function g : S 1 → C \ {0}.
So it suffices to check the theorem on a representative of each homotopy class; say
the functions g(z) = z n , for n ∈ Z. This is an easy computation, working in the
orthogonal basis { z n | n > 0 } for H 2 (S 1 ). 
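The winding number in Theorem 20.6 is easy to compute numerically, which gives a quick sanity check of the index formula against the generator cases g(z) = zⁿ (where Tg = V ⁿ has zero kernel and an n-dimensional cokernel for n > 0). A small sketch assuming numpy is available:

```python
import numpy as np

def winding_number(g, samples=4096):
    """Estimate wn(g) for a continuous nowhere-zero g : S^1 -> C \\ {0}
    by summing the increments of the argument around the circle."""
    t = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    values = g(np.exp(1j * t))
    increments = np.angle(np.roll(values, -1) / values)   # principal-branch jumps
    return int(round(increments.sum() / (2.0 * np.pi)))

print(winding_number(lambda z: z**3))    # 3, so Index(T_g) = -3
print(winding_number(lambda z: 1 / z))   # -1, so Index(T_g) = +1
print(winding_number(lambda z: 2 + z))   # 0: the symbol never encircles the origin
```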


Remark 20.7. There is also a matrix version of the Toeplitz index theorem. Taking
n × n matrices over the Toeplitz extension yields
0 → K → Mn (T) → Mn (C(S 1 )) → 0,
since Mn (K) ∼= K. Thus, any continuous map g : S 1 → GL(n, C) is the symbol of a
matrix Toeplitz operator which is Fredholm, and we can ask about its index. The
answer is minus the winding number of the determinant det g, which is a loop in
C\{0}. To see this, use induction on n, together with a fact from algebraic topology:
every loop S 1 → GL(n), n > 2, is homotopic to the direct sum of a constant loop
and a loop S 1 → GL(n − 1). In turn, this follows from the homotopy equivalence
GL(n) → U (n) and the fact that
U (n − 1) → U (n) → S 2n−1
is a fibration.
Now let us classify the irreducible representations of the Toeplitz algebra T. Let
ρ : T → B(H) be such a representation. We consider two cases:
(a) ρ annihilates the ideal K ⊆ T;
(b) ρ does not annihilate the ideal K ⊆ T.
In case (b), ρ(K) · H is a T-invariant subspace, hence is equal to H; so the
restriction of ρ to K(H) is a non-degenerate representation. Hence it is irreducible
(Lemma 19.1) and thus it is unitarily equivalent to the identity representation. It
follows (Lemma 19.1 again) that the representation ρ of T is unitarily equivalent to
the identity representation of T on H 2 (S 1 ).
In case (a), ρ passes to an irreducible representation of T/K = C(S 1 ). Such a
representation is 1-dimensional, determined by a point of S 1 ; conversely all such
1-dimensional representations of T are (of course!) irreducible.
Thus T̂ = Prim(T) consists of the union of S 1 and a disjoint point. What is the
topology? It is easy to see that the circle S 1 gets its usual topology. On the other
hand, the disjoint point corresponds to the zero ideal, which is of course contained
in every other ideal; so that the only neighborhood of that point is the whole space.
Thus the ‘extra point’ cannot be separated from any other representation. We
summarize:
Proposition 20.8. The space T̂ of irreducible representations of the Toeplitz algebra
T is of the form S 1 ⊔ {p0 }, where S 1 has its usual topology and the only open set
containing p0 is the whole space.


Remark 20.9. It follows that the Toeplitz algebra is not liminal (it has an irre-
ducible representation whose image does not consist of compact operators), but it
is postliminal (the Toeplitz extension gives a composition series).
The Toeplitz algebra has a universal property. To state it, let V : H → H be
an isometry of a Hilbert space. It is called a proper isometry if Im(V ) is a proper
subspace of H, i.e., if V is not a unitary.


Theorem 20.10. (Wold-Coburn) Let V ∈ B(H) be an isometry and let A be the
C ∗ -subalgebra of B(H) generated by V . There is a unique ∗-homomorphism from T
to A that takes the unilateral shift to V , and this homomorphism is an isomorphism
if and only if the isometry V is proper.
Proof. If V is unitary, spectral theory gives a unique ∗-homomorphism C(S 1 ) → A
that sends z to V ; compose with the quotient map T → C(S 1 ) to get the result of
the theorem.
Otherwise, let K = Im(V )⊥ = ker V ∗ . Observe that if ξ, η ∈ K then
hV n ξ, V m ηi = hV n−m ξ, ηi = 0 for n > m > 0,
and thus the spaces V n (K) are all mutually orthogonal. Also, if ξ and η ∈ K are
orthogonal then so are V n ξ and V n η. It is now evident that the restriction of V to
the subspace
L = K ⊕ V (K) ⊕ V 2 (K) ⊕ · · ·
is spatially equivalent to the direct sum of m = dim(K) copies of the unilateral
shift. Consider now the restriction of V to L⊥ . Clearly V ∗ (L) = L, so V (L⊥ ) = L⊥ ;
in other words, V restricts to a unitary L⊥ → L⊥ . The desired ∗-homomorphism
takes the unilateral shift to the sum of m copies of itself plus a unitary (treated as
in the first part of the proof). It is a monomorphism because m > 1. 
Remark 20.11. This is a good moment to talk a little more about generators and re-
lations and universal C ∗ -algebras. For our purposes a C ∗ -relation means a collection
R of noncommutative polynomials in finitely many formal variables (say x1 , . . . , xn )
and their adjoints. If we want to talk about general C ∗ -algebras the relations should
have no constant term; if we allow unital algebras, a constant term is okay. A rep-
resentation of a set R of relations is a C ∗ -algebra A containing elements x1 , . . . , xn
for which
p(x1 , . . . , xn , x∗1 , . . . , x∗n ) = 0
for all p ∈ R. Representations of R form a category, with ∗-homomorphisms pre-
serving the distinguished elements as morphisms. A universal C ∗ -algebra for R is a
universal object in this category. For example, the Wold-Coburn theorem above
shows that the Toeplitz algebra is the universal object for the relation x∗ x − 1 = 0
(on unital C ∗ -algebras).
Example 20.12. Universal C ∗ -algebras may not exist. For example, consider the
single relation x2 − x = 0, which says that x is idempotent. A (non-selfadjoint)
idempotent may have any norm, so no universal C ∗ -algebra for this relation can
exist by the contractive property of ∗-homomorphisms.
Example 20.13. Universal C ∗ -algebras may exist but not be algebraically faithful,
in the sense that the map from the quotient
Chx1 , . . . , xn , x∗1 , . . . , x∗n i/hRi,
of the noncommutative polynomial ring by the ideal generated by R, may not be
injective. For example consider the relations (in one variable) x∗ x = 0, xx∗ = 1.


There is a nontrivial involutive ring satisfying these relations, but no C ∗ -algebra can
satisfy them except the zero C ∗ -algebra.
Definition 20.14. A collection R of C ∗ -relations is compact if they imply norm
bounds: that is, if there is some constant K such that, whenever x1 , . . . , xn and their
adjoints satisfy the relations R in a C ∗ -algebra, then kxj k 6 K for j = 1, . . . , n.
Lemma 20.15. If R is a compact collection of C ∗ -relations, then it has a universal
representation. The universal representation is algebraically faithful if and only if
some representation of R is algebraically faithful.
Proof. We need the direct product ∏α Aα of a family of C ∗ -algebras. This is defined
to be the collection of all bounded sequences a = (aα ), aα ∈ Aα , with the supremum
norm
kak = sup{kaα k}.
Making whatever obeisance you find necessary to the gods of set theory, take the
direct product of all the representations of R. This is a C ∗ -algebra and, because of
the compactness condition, the product of all the representatives of xn is an element
of this C ∗ -algebra. Clearly, we have obtained a universal representation. 


Lecture 21
The Cuntz Algebras

Let us recollect that an isometry in a unital C ∗ -algebra A is an element w such
that w∗ w = 1. It then follows that ww∗ is a projection, which we sometimes call
the range projection of the isometry. Any isometry has norm 6 1. Any isometry on
a finite-dimensional Hilbert space is a unitary, but on infinite-dimensional Hilbert
spaces there exist isometries that are not unitaries, the most famous example being
the unilateral shift which we have discussed earlier in our work on Toeplitz operators.
Definition 21.1. Let n = 2, 3, 4, . . .. The Cuntz algebra On is the universal unital
C ∗ -algebra generated by n isometries s1 , . . . , sn (that is, s∗i si = 1) subject to the
additional relation s1 s∗1 + · · · + sn s∗n = 1.
Since the si are explicitly forced to be isometries they have norm 6 1, so this
family of relations is compact and the universal representation can be defined. In
a moment (Remark 21.2) we will explicitly construct an algebraically faithful rep-
resentation of these relations, and it will follow that the universal representation is
also algebraically faithful. See Lemma 20.15.
Since the projections si s∗i sum to 1, they must be mutually orthogonal. (Proof —
suppose that p1 + · · · + pn = 1 where the pi are self-adjoint projections. Multiplying
on the left and on the right by p1 we get
p1 p2 p1 + p1 p3 p1 + · · · + p1 pn p1 = 0.
The operators here are all positive and sum to zero, so they are all zero. In particular
p1 p2 p1 = (p2 p1 )∗ (p2 p1 ) = 0, and so p2 p1 = 0. Similarly all the other products pi pj
are zero for i 6= j.) Thus we have
s∗i sj = δij 1.
Remark 21.2. One can produce an explicit Hilbert space representation of the Cuntz
relations, by taking for generators the isometries S1 , . . . , Sn on the Hilbert space
H = `2 (N) defined by
Sk (ej ) = enj+k−1
where e0 , e1 , . . . are the standard basis vectors for H. We will see later that the
algebra generated by these isometries actually is isomorphic to On .
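One can check the Cuntz relations for these operators directly on basis vectors: Sk is determined by the injection j ↦ nj + k − 1 of N into itself, and the relations hold because these n injections have pairwise disjoint images which together cover N. A quick sketch of that combinatorial bookkeeping (plain Python; only the index maps are checked, not the operators themselves):

```python
n, cutoff = 3, 60                      # O_3, indices 0, ..., cutoff - 1
ranges = [{n * j + k - 1 for j in range(cutoff)} for k in range(1, n + 1)]

# S_k^* S_k = 1: each j |-> n*j + k - 1 is injective (each image has full size).
assert all(len(r) == cutoff for r in ranges)

# The range projections S_k S_k^* are mutually orthogonal: images are disjoint.
assert all(ranges[a].isdisjoint(ranges[b]) for a in range(n) for b in range(a + 1, n))

# sum_k S_k S_k^* = 1: the images cover every index (below the cutoff).
assert set(range(cutoff)) <= set().union(*ranges)
print("Cuntz relations verified at the index level for O_%d" % n)
```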
Let us use the term word for a finite product of s’s and their adjoints, something
like s1 s22 s3 s∗5 . Because of the relations that we just noted, every word can be written
in a reduced form in which all the terms with adjoints appear to the right of all those
without. If µ is a list whose members come from {1, 2, . . . , n}, we use the shorthand
notation sµ to denote the word sµ1 sµ2 · · · sµk . Thus any word can be written in
reduced form as sµ s∗ν . We use the notation |µ| for the number of members of the
list µ, and we call the number |µ| − |ν| the weight of the word sµ s∗ν . The linear span
of the words is a dense subalgebra of On .


The product of a word of weight l and a word of weight m is a word of weight
l + m, or else zero. Thus the words of weight zero span a subalgebra of the Cuntz
algebra. We are going to identify this subalgebra.
Lemma 21.3. Let Akn be the subspace of On spanned by the words sµ s∗ν where |µ| =
|ν| = k. Then Akn is a subalgebra, and it is isomorphic to the full matrix algebra
Mnk (C).
Proof. How does one identify a matrix algebra? An m2 -dimensional ∗-algebra over C
is a matrix algebra if and only if it is spanned by a system of matrix units, which is
a basis eIJ , where I, J = 1, . . . , m, with respect to which the multiplication law of
the algebra is
eIJ eKL = δJK eIL
and the adjunction law is
e∗IJ = eJI .
(Of course, eIJ corresponds to the matrix with 1 in the (I, J)-position and zeroes
elsewhere.) Now, in Akn , the n2k words sµ s∗ν where |µ| = |ν| = k are linearly indepen-
dent (for instance because their images in the explicit representation of Remark 21.2
are linearly independent). Moreover by repeated application of the relation s∗i sj = δij 1 we find
that
sµ s∗ν sµ′ s∗ν′ = δνµ′ sµ s∗ν′
so that the sµ s∗ν comprise a system of matrix units. 
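The matrix-unit relations used here are exactly the ones satisfied by the elementary matrices eIJ, which is what makes the identification with a full matrix algebra work. A quick numerical reminder of those relations, assuming numpy is available (this checks ordinary matrix units, not the Cuntz-algebra words themselves):

```python
import numpy as np

m = 4
E = [[np.outer(np.eye(m)[I], np.eye(m)[J]) for J in range(m)] for I in range(m)]

ok = True
for I in range(m):
    for J in range(m):
        # adjunction law: e_IJ* = e_JI
        ok &= np.array_equal(E[I][J].T.conj(), E[J][I])
        for K in range(m):
            for L in range(m):
                # multiplication law: e_IJ e_KL = delta_JK e_IL
                expected = E[I][L] if J == K else np.zeros((m, m))
                ok &= np.array_equal(E[I][J] @ E[K][L], expected)
print("matrix unit relations hold:", bool(ok))
```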
The fundamental relation
1 = s1 s∗1 + · · · + sn s∗n
expresses the identity operator (which we regard as the generator of A0n = C1) as
a linear combination of generators of A1n . Generalizing this observation we have
inclusions Ak−1n ⊆ Akn of algebras (namely the following: define µk to be the result
of adjoining a k to the right-hand end of the list µ, and then note the identity
sµ s∗ν = ∑k sµk s∗νk , where the sum runs over k = 1, . . . , n.)

In terms of the representations of Ak−1n and Akn as matrix algebras these inclusions
amount to the n-fold inflation maps
T 7→ diag(T, T, . . . , T ),   with n copies of T on the diagonal.
It follows that the C ∗ -subalgebra An of On defined as the closed linear span of
the words of weight zero is the direct limit of matrix algebras
C → Mn (C) → Mn2 (C) → Mn3 (C) → · · ·
which is called the uniformly hyperfinite (UHF) algebra of type n∞ . We will discuss
UHF algebras, and more general inductive limits of matrix algebras, later in the


course; we won’t need any of their theory in our discussion of the Cuntz algebra for
now.
Definition 21.4. Let A be a C ∗ -algebra and B a C ∗ -subalgebra of A. A conditional
expectation of A onto B is a positive linear map Φ : A → B which is the identity
on B. (Positive means that Φ carries positive elements to positive elements.) The
expectation Φ is faithful if, whenever a is positive and non-zero, Φ(a) is non-zero
also.
Our analysis of the Cuntz algebra will be based on a certain faithful conditional
expectation of On onto An . We will represent this expectation in a certain sense as
a limit of inner endomorphisms, and we will use this representation to show that On
is simple.
Let z ∈ T be a complex number of modulus one. The elements zs1 , . . . , zsn of On
are isometries which satisfy the defining relation for the Cuntz algebra, so by the
universal property there is an automorphism
θz : On → On
which sends si to zsi for each i. One has θz (sµ s∗ν ) = z |µ|−|ν| sµ s∗ν , so that An is
precisely the fixed-point algebra for the one-parameter group of automorphisms θz .
It is easy to see that z 7→ θz (a) is continuous for each a ∈ On .
Lemma 21.5. With notation as above, the map Φ defined by
Φ(a) = (1/2π) ∫0^{2π} θ_{e^{it}} (a) dt
is a faithful conditional expectation of On onto An .
The proof is easy. We will now show how to approximate Φ by inner endomor-
phisms:
Lemma 21.6. For each positive k one can find an isometry wk ∈ On with the
property that
Φ(a) = wk∗ awk
for every a in the span of those words sµ s∗ν where |µ|, |ν| < k.
Proof. Let us begin with the following observation. Suppose that m > k, that
x = s1^m s2 , and that y = sµ s∗ν is a word where |µ|, |ν| 6 k. Then x∗ yx = 0 in all
cases except when y is of the form s1^ℓ (s1^ℓ )∗ , when x∗ yx = 1. Indeed, in order that x∗ sµ
not equal zero it is necessary that µ be of the form (1, 1, 1, . . .) and in order that
s∗ν x not equal zero it is necessary that ν be of the form (1, 1, 1, . . .). Thus we need
only consider y = s1^ℓ (s1^{ℓ′} )∗ and direct computation yields the stated result.

Now put x = s1^{2k} s2 and let w = ∑γ sγ xs∗γ , where the sum runs over all words sγ
of length k. Notice that w is an isometry. Indeed, we have
w∗ w = ∑γ,δ sδ x∗ s∗δ sγ xs∗γ = ∑γ sγ x∗ xs∗γ = ∑γ sγ s∗γ = 1.


Let us consider what is w∗ yw for words y = sµ s∗ν with |µ|, |ν| 6 k. If |µ| 6= |ν| then
our previous observation shows that each of the terms in the double sum that makes
up w∗ yw is equal to zero, so the whole sum is zero. Suppose now that |µ| = |ν| = k.¹⁰
Then y is a matrix unit sµ s∗ν . We have
wy = ∑γ sγ xs∗γ y = sµ xs∗ν ,
since only the term with γ = µ contributes, and similarly
yw = sµ xs∗ν .
Thus w commutes with y, so w∗ yw = y. We have shown that (on the set of words
with |µ|, |ν| 6 k), the map y 7→ w∗ yw fixes those words of weight zero and annihilates
those words of nonzero weight. That is to say, w∗ yw = Φ(y) on those words. 
Theorem 21.7. Let a ∈ On be nonzero. Then there are elements x, y ∈ On such
that xay = 1.
As we shall see, this property implies that On is a purely infinite C ∗ -algebra.
Corollary 21.8. The Cuntz algebra On is simple. In particular, therefore, the
concrete C ∗ -algebra of operators on `2 described in Remark 21.2 is isomorphic to
On .
Proof. To begin the proof, notice that each word a = sµ s∗ν has the stated property;
just take x = s∗µ and y = sν . The proof is an approximation argument to use this
property of the words in the generators. We need to be careful because it is not at
all obvious that if a0 , a00 have the desired property then a = a0 + a00 will have it too.
Since a 6= 0, the faithfulness of the expectation Φ gives Φ(a∗ a) > 0, and by a
suitable normalization we may assume that Φ(a∗ a) is of norm one. Approximate
a∗ a by a finite linear combination of words b well enough that ka∗ a − bk < 1/4. In
particular we have kΦ(b)k = λ > 3/4.
Now Φ(b) is a positive element of some matrix algebra over C, and has norm
λ > 3/4. The norm of a positive element of a matrix algebra is just its greatest
eigenvalue. So there must be some one-dimensional eigenprojection e for Φ(b), in
the matrix algebra, for which eΦ(b) = Φ(b)e = λe > (3/4)e. We can find a unitary u in
the matrix algebra which conjugates e to the first matrix unit, that is
u∗ eu = s1^m (s1^m )∗ .
It is now simple to check that if we put z = λ^{−1/2} eus1^m , then z ∗ Φ(b)z = 1.
By the preceding lemma, Φ(b) = w∗ bw for some isometry w ∈ On . Put v = wz;
then v ∗ bv = 1. Moreover, kvk < 2/√3.
Now we compute
k1 − v ∗ a∗ avk 6 kvk2 kb − a∗ ak < 1/3,
10Remember that the terms with |µ| = |ν| < k are included in the linear span of those with
|µ| = |ν| = k, so we do not have to consider them separately.

and so v ∗ a∗ av is invertible. Let y = v(v ∗ a∗ av)^{−1/2} and let x = y ∗ a∗ . Then
xay = y ∗ a∗ ay = 1
as required. 
Remark 21.9. Nothing we have said so far rules out the possibility that On and Om
might be isomorphic for m 6= n. That is not the case, however. This was proved by
Cuntz using K-theory in a famous paper (Annals, 1981).


Lecture 22

Basics on Group C ∗ -Algebras and Harmonic Analysis

Let G be a group. Usually, later on in the course, we will assume that G is a
countable, discrete group; but for the moment we will allow continuous groups as
well, so let us assume that G is a topological group, with a Hausdorff and second
countable topology. A unitary representation of G is a (group) homomorphism
π : G → U (H), where U (H) is the unitary group of a Hilbert space H, and π is
required to be continuous relative to the strong topology on U (H). That is, if
gλ → g, then π(gλ )v → π(g)v, for each fixed vector v ∈ H.
Remark 22.1. It would not be a good idea to require that π be norm continuous.
To see why, recall that a locally compact topological group G carries an (essentially
unique) Haar measure µ: a regular Borel measure which is invariant under left
translation. One can therefore form the Hilbert space H = L2 (G, µ) and the unitary
representation of G on H by left translation
π(g)f (h) = f (g −1 h)
which is called the regular representation. It is a simple consequence of the Domi-
nated Convergence Theorem that this representation is strongly continuous; but it
is not norm continuous for instance when G = R.
The Haar measure µ on G, which is invariant under left translation, is usually
not invariant under right translation. However, for each fixed g ∈ G, the map
E 7→ µ(Eg) is another left-translation-invariant measure on G, and hence it differs
from µ only by a scalar multiple which we denote ∆(g): thus
µ(Eg) = ∆(g)µ(E).
It is easily checked that ∆ : G → R+ is a topological group homomorphism. ∆ is
called the modular function of G; and groups such as discrete groups or abelian
groups, for which ∆ ≡ 1, are called unimodular.
Example 22.2. An example of a non-unimodular group is the “(ax + b) group” —
the Lie group G of matrices
A = ( a  b )
    ( 0  1 )
with a > 0. A left Haar measure is da db/a2 , a right Haar measure is da db/a.
To see this, we identify G with an open subset of R2 in the obvious way. The left
action of an element A of G (represented as above) on R2 is given by
( a  b ) ( x  y )   ( ax  ay + b )
( 0  1 ) ( 0  1 ) = ( 0      1   ) .
The Jacobian of the transformation (x, y) 7→ (ax, ay + b) is a2 . Via the change of
variables formula, this tells us that a left Haar measure on G is da db/a2 . We can

AMS Open Math Notes: Works in Progress; Reference # OMN:201701.110660; 2017-01-01 20:43:25
95

also, however, make a similar calculation for the right action of G on itself:
( x  y ) ( a  b )   ( ax  bx + y )
( 0  1 ) ( 0  1 ) = ( 0      1   ) .
The Jacobian of (x, y) 7→ (ax, bx+y) is a, which tells us that the right Haar measure
is da db/a.
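The two Jacobian computations above are easy to confirm with a computer algebra system. A small sketch, assuming sympy is available:

```python
import sympy as sp

a, b, x, y = sp.symbols('a b x y', positive=True)
coords = sp.Matrix([x, y])

# Left action: (x, y) |-> (a*x, a*y + b); right action: (x, y) |-> (a*x, b*x + y).
left = sp.Matrix([a * x, a * y + b])
right = sp.Matrix([a * x, b * x + y])

print(sp.simplify(left.jacobian(coords).det()))    # a**2 -> left Haar measure da db / a^2
print(sp.simplify(right.jacobian(coords).det()))   # a    -> right Haar measure da db / a
```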
We’ll use the notation dg instead of dµ(g) for integration with respect to the Haar
measure. Notice the identities (for fixed h)
d(hg) = dg, d(gh) = ∆(h)dg, d(g −1 ) = ∆(g −1 )dg.
To prove the last identity note that both sides define right Haar measures, so they
agree up to a scalar multiple; consider integrating the characteristic function of a
small symmetric neighborhood of the identity (where ∆ ≈ 1) to show that the scalar
is 1.
Definition 22.3. We make L1 (G, µ) into a Banach ∗-algebra by defining
f1 ∗ f2 (g) = ∫G f1 (h)f2 (h−1 g) dh,    f ∗ (g) = ∆(g)−1 f¯(g −1 ).

Exercise: Check that the involution is isometric and that (f1 ∗ f2 )∗ = f2∗ ∗ f1∗ .
Theorem 22.4. There is a one-to-one correspondence between strongly continuous
unitary representations of G and nondegenerate ∗-representations of the Banach
algebra L1 (G).
Proof. To keep matters simple let us assume first of all that G is discrete. (In
this case the Haar measure on G is just counting measure, and we write `1 (G) for
L1 (G).) Let π : G → U (H) be a unitary representation. We define a representation
ρ : L1 (G) → B(H) by
ρ( ∑ ag [g]) = ∑ ag π(g);
the series on the right converges in norm. This representation has ρ([e]) = 1,
so it is non-degenerate. Conversely, given a representation ρ as above, the map
π : G → U (H) defined by
π(g) = ρ([g])
is a unitary representation of G.
In the general case we must replace summation by integration, handle the compli-
cations caused by the modular function, and use an approximate unit to deal with
the fact that the ‘Dirac masses’ at points of G are no longer elements of L1 (G). This
is routine analysis, the details are on page 183 of Davidson’s book. 
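For a finite group the correspondence in Theorem 22.4 can be checked directly. The sketch below (assuming numpy) takes G = Z/N, realizes π as the regular representation by cyclic shift matrices, extends it linearly to ℓ¹(G) as in the proof, and confirms that convolution of functions goes to multiplication of operators:

```python
import numpy as np

N = 6
rng = np.random.default_rng(1)

def shift(k):
    """pi(k): the cyclic shift [g] -> [g + k] on C^N (regular representation of Z/N)."""
    return np.roll(np.eye(N), k, axis=0)

def rho(f):
    """Extend pi linearly to l^1(Z/N): rho(f) = sum_g f(g) pi(g)."""
    return sum(f[g] * shift(g) for g in range(N))

def convolve(f1, f2):
    """(f1 * f2)(g) = sum_h f1(h) f2(g - h), with counting measure as Haar measure."""
    return np.array([sum(f1[h] * f2[(g - h) % N] for h in range(N)) for g in range(N)])

f1, f2 = rng.standard_normal(N), rng.standard_normal(N)
print(np.allclose(rho(convolve(f1, f2)), rho(f1) @ rho(f2)))   # True: rho is multiplicative
print(np.allclose(rho(np.eye(N)[0]), np.eye(N)))               # rho([e]) = 1, so non-degenerate
```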
The L1 -norm on L1 (G) is not a C ∗ -norm in general. To manufacture a C ∗ -algebra
related to the representation theory of G, we should form a completion of L1 (G) in an
appropriate C ∗ -norm. There are several different ways to form such a completion (as
will become apparent, this kind of issue turns up many times in C ∗ -algebra theory.)


Definition 22.5. The maximal C ∗ -algebra of G is the enveloping C ∗ -algebra of the
Banach ∗-algebra L1 (G).
Here is the explanation of the term ‘enveloping C ∗ -algebra’. As we saw in Re-
mark 15.6, the GNS construction can be applied in any Banach ∗-algebra (having
a norm-bounded approximate unit) and it shows that each state of such an algebra
(continuous positive linear functional of norm one) gives rise to a cyclic Hilbert space
representation. In particular there exists a universal representation ρA : A → B(HA )
for each Banach ∗-algebra A, obtained as the direct sum of the GNS representations
coming from all the states; and the C ∗ -subalgebra of B(HA ) generated by ρA [A] is
called the enveloping C ∗ -algebra of A. It has the universal property that any ∗-
homomorphism from A to a C ∗ -algebra B factors uniquely through the enveloping
C ∗ -algebra. In particular, then, there is a 1:1 correspondence between unitary rep-
resentations of G and non-degenerate representations of C ∗ (G). We usually denote
the spectrum of C ∗ (G) by Ĝ, regarding it as the space of irreducible unitary representations of G —
the ‘unitary dual’.
There is a canonical ∗-homomorphism from any Banach ∗-algebra A to its C ∗ -
envelope. In general this ∗-homomorphism need not be injective (think about the
disk algebra again!); it will be injective precisely when A has a faithful Hilbert space
representation. To see that this is so in the case of A = L1 (G) we consider once
again the left regular representation λ of L1 (G) on L2 (G):
(λ(f1 )f2 )(g) = ∫G f1 (h)f2 (h−1 g) dh.

It is an easy measure-theoretic fact that this representation is faithful. Thus the
natural homomorphism L1 (G) → C ∗ (G) is injective.
Remark 22.6. If G is a finitely generated discrete group, then its maximal C ∗ -algebra
can be defined by generators and relations (see 20.11): there is a unitary generator
for each element of a generating set for G, and the relations between these unitary
generators are the same as the relations in G. For instance, the maximal C ∗ -algebra
of the free group on n generators is just the C ∗ -algebra defined by n generators
u1 , . . . , un and the relations uj u∗j = 1 = u∗j uj .
The special rôle of the regular representation in the above argument prompts us
to define another ‘group C ∗ -algebra’. The reduced C ∗ -algebra Cr∗ (G) of G is the C ∗ -
subalgebra of B(L2 (G)) generated by the image of the left regular representation.
Thus we have a surjective ∗-homomorphism
C ∗ (G) → Cr∗ (G)
coming from λ via the universal property. Dually, the spectrum of Cr∗ (G) is identified
with a closed subspace of the spectrum Ĝ of C ∗ (G); this is the reduced dual Ĝr of
G.


Lemma 22.7. Let G be a discrete (or a locally compact) group. The commutant of
the left regular representation of G on L2 (G) is the von Neumann algebra generated
by the right regular representation (and vice versa of course).
Proof. Obviously the left and right regular representations commute. The commu-
tant R = λ[G]′ of the left regular representation consists of those operators on
L2 (G) that are invariant under the left-translation action of G on L2 (G), and the
commutant L = ρ[G]′ of the right regular representation consists of those operators
that are invariant under the right-translation action. It will be enough to show that
L and R commute, since then ρ[G] ⊆ R ⊆ (L)′ = ρ[G]′′ .
So let S ∈ L and T ∈ R. For simplicity, assume that G is discrete. Then (by
translation invariance) S is completely determined by S[e] = ∑x sx [x], and similarly
T is completely determined by T [e] = ∑y ty [y].
For group elements g and h we have
hT S[g], [h]i = hS[g], T ∗ [h]i = h∑x sx [gx], ∑y t̄y−1 [yh]i = ∑x sx thx−1 g−1 .
By a similar calculation
hST [g], [h]i = ∑y ty sg−1 y−1 h .
The substitution x = g −1 y −1 h shows that these sums agree, and thus T S = ST . 
Let’s take a moment to develop an important fact about these group von Neumann
algebras. For definiteness consider the algebra L(G) generated by the left regular
representation of G (i.e. the commutant of the right regular representation). There
is an important linear functional τ on L(G) defined by
τ (T ) = h[e], T [e]i,
where [e] is the basis element of `2 (G) corresponding to the identity of G. Clearly
τ is a state (in fact a vector state).
Lemma 22.8. The state τ is a trace on L(G); that is, τ (ST ) = τ (T S) for all
S, T ∈ L(G).
Proof. Let T ∈ L(G). Since T is in the commutant of the right regular representa-
tion, for any g ∈ G,
h[g], T [g]i = h[g], T ([e]g)i = h[e]g, (T [e])gi = h[e], T [e]i = τ (T ).
Thus τ (g −1 T g) = τ (T ) for all T . Since τ is weakly continuous it follows that
τ (ST ) = τ (T S) for all S ∈ L(G). 
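For a finite group, L(G) is just the group algebra acting on ℓ²(G), and the trace property of τ can be checked numerically. A sketch with G = S3, assuming numpy is available (permutations are composed directly):

```python
import numpy as np
from itertools import permutations

G = list(permutations(range(3)))                          # the six elements of S3
index = {g: i for i, g in enumerate(G)}
compose = lambda g, h: tuple(g[h[i]] for i in range(3))   # (g h)(i) = g(h(i))

def lam(g):
    """Left regular representation: lambda(g)[h] = [g h] on l^2(S3)."""
    M = np.zeros((6, 6))
    for h in G:
        M[index[compose(g, h)], index[h]] = 1.0
    return M

rng = np.random.default_rng(2)
S = sum(rng.standard_normal() * lam(g) for g in G)        # two generic elements of L(S3)
T = sum(rng.standard_normal() * lam(g) for g in G)

e = index[(0, 1, 2)]                                      # the identity permutation
tau = lambda X: X[e, e]                                   # tau(X) = <[e], X[e]>
print(np.isclose(tau(S @ T), tau(T @ S)))                 # True: tau is a trace
```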


Lecture 23
Some More Harmonic Analysis

Let us now consider the special case of abelian groups.


Lemma 23.1. Let G be a locally compact abelian group. Then its unitary dual Ĝ
is equal to its Pontrjagin dual Hom(G, T), the space of characters of G, that is
continuous homomorphisms of G into the circle group. The topology on Ĝ is the
restriction of the weak-∗ topology on L∞ (G); that is, a net αλ of characters converges
to the character α if and only if
∫G αλ (g)f (g) dg → ∫G α(g)f (g) dg
for all f ∈ L1 (G). If G is discrete, therefore, the topology on Ĝ is the topology of
pointwise convergence.
Proof. Since G is commutative, so is C ∗ (G). Thus by the theory of commutative C ∗ -
algebras, C ∗ (G) = C0 (Ĝ), and Ĝ is the space of ∗-homomorphisms from C ∗ (G) to C,
that is one-dimensional representations of C ∗ (G). By the discussion above, these are
the same thing as one-dimensional unitary representations of G, that is characters.
The topologies match up since (by construction) states on C ∗ (G) are all obtained by
extending states on L1 (G), which themselves are given by (positive) L∞ functions
on G. 
Lemma 23.2. Let G be a locally compact abelian group. Then C ∗ (G) = Cr∗ (G) =
C0 (Ĝ).

Proof. All we need to do is to show that C ∗ (G) = Cr∗ (G), and for this it will be
enough to show that every character α of G extends to a continuous linear map
Cr∗ (G) → C.
Let us first establish the following: pointwise multiplication by a character of G
preserves the norm on the regular representation. In other words, if f ∈ L1 (G) and
α is a character, then
kλ(αf )k = kλ(f )k.
To see this write
λ(αf ) = Uα λ(f )Uα∗
where Uα : L2 (G) → L2 (G) is the unitary operator of pointwise multiplication by α.
It follows that if α is any character of G, and β is a character which extends
continuously to Cr∗ (G) (that is, β ∈ Ĝr ), then the pointwise product αβ belongs to
Ĝr also. Since Ĝ is a group under pointwise product, and Ĝr is nonempty (after all,
Cr∗ (G) must have some characters), we deduce that Ĝ = Ĝr . 
We can use our results to discuss the bare bones of the Pontrjagin duality theory
for abelian groups. For starters let G be a discrete abelian group. As we saw above,
Ĝ is a compact abelian group also, and C ∗ (G) = Cr∗ (G) = C(Ĝ). The implied


isomorphism Cr∗ (G) → C(Ĝ) is called the Fourier transform, and denoted F. For an
element f ∈ L1 (G) it can be defined by the formula
Ff (α) = ∫G f (g)α(g) dg
where α is a character of G. (Of course this integral is really a sum in the discrete
case, but we use notation which is appropriate to the continuous case as well.)
Notice that it is immediate that the Fourier transform converts convolution into
multiplication.
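For G = Z/N these formulas reduce to the discrete Fourier transform, and the convolution and Plancherel statements can be verified directly. A sketch assuming numpy, with the character αk(g) = e^{2πikg/N} (so this F uses the opposite sign convention from numpy's np.fft default):

```python
import numpy as np

N = 8
g = np.arange(N)
# Characters of Z/N: alpha_k(g) = exp(2*pi*i*k*g/N); F f(k) = sum_g f(g) alpha_k(g).
F = np.exp(2j * np.pi * np.outer(g, g) / N)

rng = np.random.default_rng(3)
f1, f2 = rng.standard_normal(N), rng.standard_normal(N)
conv = np.array([sum(f1[h] * f2[(x - h) % N] for h in range(N)) for x in range(N)])

# Convolution goes to pointwise multiplication:
print(np.allclose(F @ conv, (F @ f1) * (F @ f2)))                        # True

# Plancherel: with counting measure on G and (1/N) * counting measure on the dual,
# the Fourier transform is unitary.
print(np.isclose(np.sum(np.abs(f1) ** 2), np.sum(np.abs(F @ f1) ** 2) / N))   # True
```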
Theorem 23.3. (Plancherel Theorem) The Fourier transform extends to a uni-
tary isomorphism of L2 (G) with L2 (Ĝ) (for an appropriate choice of Haar measure).

The phrase ‘for an appropriate choice of Haar measure’ conceals all the various
powers of 2π which show up in the classical theory of Fourier analysis.
Proof. The regular representation of Cr∗ (G) is a cyclic representation with cyclic
vector [e]. The associated vector state
τ (f ) = h[e], f ∗ [e]i
corresponds to a measure on Ĝ (by the Riesz representation theorem). Translation
by a character α on C(Ĝ) corresponds to pointwise multiplication by α on Cr∗ (G);
and since α(e) = 1 we see that the measure corresponding to τ is translation invari-
ant, so it is (up to a multiple) the Haar measure ν on Ĝ. By the uniqueness of the
GNS construction, the regular representation of Cr∗ (G) is spatially equivalent to the
multiplication representation of C(Ĝ) on L2 (Ĝ, ν), with cyclic vector equal to the
constant function 1. In particular there is a unitary U : L2 (G) → L2 (Ĝ) such that
U (f ∗ h) = (Ff ) · (U h)
for f ∈ L1 (G), h ∈ L2 (G). In particular, taking h = [e] (which is the cyclic vector,
so that U h = 1), we see that U f = Ff for f ∈ L1 . Thus the unitary U is an
extension of the Fourier transform, as asserted. 
The Plancherel theorem extends to non-discrete locally compact groups too. Now
the representing measure ν on Ĝ is not finite, and we cannot appeal directly to the
GNS theory. Instead we build the representing measure by hand, in the following
way. For each x ∈ L1 ∩ L2 (G) we define a measure νx by the GNS construction, so
that
∫Ĝ Ff dνx = hx, f ∗ xiL2 (G) .
If both x and y belong to L1 ∩ L2 (G) then a simple argument using commutativity
shows that
|Fx|2 νy = |Fy|2 νx
as measures on Ĝ. For any compact subset K of Ĝ one can find an x ∈ L1 ∩ L2 (G)
b For any compact subset K of G b one can find an x ∈ L1 ∩ L2 (G)
such that Fx is positive on K. Now we may define an unbounded measure ν on


Ĝ via the Riesz representation theorem as a positive linear functional on Cc (Ĝ): if
f ∈ Cc (Ĝ) is supported within K we put
∫ f dν = ∫ (f /|Fx|2 ) dνx ,
for any x with Fx > 0 on K; and this is independent of the choice of x. Thus we
obtain a measure ν on Ĝ.
Let uλ be an approximate unit for Cr∗ (G); then the functions Fuλ increase point-
wise to the constant function 1. For g ∈ L1 ∩ L2 (G) we have
kgk2 = lim hg, uλ ∗ gi = lim ∫ Fuλ dνg = ∫ dνg = ∫ |Fg|2 dν.
Consequently, the Fourier transform extends to a unitary equivalence between L2 (G)
and L2 (Ĝ). A similar argument with approximate units shows that ν is a Haar
measure on Ĝ.
The next step in the duality theory is to show that G is canonically isomorphic to the dual of Ĝ. We won't do this
here. Instead, we will develop some ideas in the harmonic analysis of nonabelian
groups. To simplify, let’s think about the discrete case.


Lecture 24
Amenability I

From now on, therefore, let us restrict attention to discrete and countable, but not
necessarily abelian, groups. Let G be such a group. A function p on such a group is
called positive definite if for every positive integer n and every n-tuple (x1 , . . . , xn ) of
group elements, the n × n matrix whose entries are p(x_j^{−1} x_i) is positive as a matrix; that is, for all α_1, . . . , α_n ∈ C we have
(24.1)   Σ_{i,j=1}^n α_i ᾱ_j p(x_j^{−1} x_i) ≥ 0.

The collection of positive definite functions on G is denoted by B+(G). Let us observe that every positive definite function is bounded: for every such function the 2 × 2 matrix
( p(e), p(x) ; p(x^{−1}), p(e) )
is positive, which implies that |p(x)| ≤ p(e) for all x.
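As a quick numerical sanity check (Python with numpy; the group here is Z/N, chosen purely for illustration), one can build a positive definite function as the Fourier transform of a nonnegative weight function on the dual group and verify both condition (24.1) and the bound |p(x)| ≤ p(e).

import numpy as np

N = 8
rng = np.random.default_rng(0)

# p(g) = sum_k w_k e^{2 pi i k g / N} with nonnegative weights w_k, so p is
# positive definite (it is the Fourier transform of a positive measure).
weights = rng.random(N)
p = np.fft.ifft(weights) * N

# Form the matrix p(x_j^{-1} x_i) = p(i - j mod N) and test positivity (24.1).
idx = np.arange(N)
P = p[(idx[:, None] - idx[None, :]) % N]
print(np.linalg.eigvalsh((P + P.conj().T) / 2).min())   # >= 0 up to rounding
print(np.all(np.abs(p) <= p[0].real + 1e-12))           # |p(x)| <= p(e)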
Lemma 24.2. Let ϕ be a positive linear functional on C ∗ (G). Then the mapping
p : G → C defined by p(x) = ϕ([x]) is positive definite; and every positive definite
function on G arises in this way from a unique positive linear functional. The states
of C ∗ (G) correspond to the positive definite functions with p(e) = 1.
Proof. If ϕ is positive linear and we define p as suggested above, then
Σ_{i,j=1}^n α_i ᾱ_j p(x_j^{−1} x_i) = ϕ(f* ∗ f) ≥ 0
where f = Σ_{i=1}^n α_i [x_i]. Thus p is positive definite. Conversely if p is positive definite and we define ϕ to be the linear extension of p to ℓ¹(G), then the same computation shows that ϕ is a positive linear functional on the Banach algebra ℓ¹(G). Every such functional is automatically continuous for the C*-norm and so extends to a positive linear functional on C*(G). □
We let B+ (G) denote the space of positive definite functions on G, and we let B(G)
denote the vector space of complex-valued functions on G spanned by B+ (G), that
is, the set {p1 − p2 + i(p3 − p4 )} where p1 , . . . , p4 are positive definite functions. But
every self-adjoint linear functional on a C ∗ -algebra is the difference of two positive
linear functionals (Remark 18.7) and thus we find that B(G) is isomorphic, as a
vector space, to the dual of C ∗ (G). We make B(G) a Banach space by giving it
the norm it acquires from C ∗ (G)∗ via this isomorphism. (Notice that if p is itself
positive definite, the norm of p is just equal to p(e), by Proposition 14.11.)
Proposition 24.3. The pointwise product of two positive definite functions is again
positive definite.

AMS Open Math Notes: Works in Progress; Reference # OMN:201701.110660; 2017-01-01 20:43:25
102

Proof. To prove that the product of positive definite functions is again positive definite, it is necessary to prove that the pointwise product (Schur product) of positive matrices is positive. This is a standard result of matrix theory. We reproduce the proof. Let A = (a_{ij}) and B = (b_{ij}) be positive matrices. Their Schur product is C = (a_{ij} b_{ij}). Now because B is positive, B = XX* for some matrix X; that is, b_{ij} = Σ_k x_{ik} x̄_{jk}. It follows that for any vector ξ,
ξ*Cξ = Σ_k Σ_{i,j} a_{ij} (x̄_{jk} ξ̄_j)(x_{ik} ξ_i)
and for each k the inner sum is positive. Hence C is a positive matrix, as required. □
Remark 24.4. Since (normalized) positive definite functions correspond to states on
C ∗ (G), which in turn correspond via the GNS construction to (cyclic) representa-
tions of G, it is natural to ask: is there an operation on representations of G which
corresponds to the pointwise product of positive definite functions on G? The an-
swer of course is the tensor product. Suppose that ρk : G → U (Hk ) are unitary
representations with cyclic vectors ξk ∈ Hk , k = 1, 2. The corresponding positive
definite functions on G are then
ϕk (g) = hξk , ρk (g)ξk i.
Their pointwise product is the positive definite function associated to the tensor
product ρ = ρ1 ⊗ ρ2 on H = H1 ⊗ H2, with cyclic vector ξ = ξ1 ⊗ ξ2.
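The Schur-product fact used in the proof of Proposition 24.3 is also easy to confirm numerically; the following small sketch (Python with numpy, random test matrices) checks that the entrywise product of two positive matrices is again positive.

import numpy as np

rng = np.random.default_rng(1)
n = 5
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Y = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = X.conj().T @ X          # A and B are positive matrices
B = Y.conj().T @ Y
C = A * B                   # Schur (entrywise) product

print(np.linalg.eigvalsh(A).min() >= -1e-10)   # True
print(np.linalg.eigvalsh(B).min() >= -1e-10)   # True
print(np.linalg.eigvalsh(C).min() >= -1e-10)   # True: the Schur product is positive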
It follows that B(G) is an algebra under pointwise multiplication. In fact it is a
Banach algebra: that is, with the norm described above (as the dual space of C ∗ (G))
we have kpqk 6 kpkkqk for p, q ∈ B(G). This is obvious if p, q are positive definite
because positive definite functions attain their norm at the identity element of G,
as we mentioned above. The general case follows from this. For example, suppose p
and q are real (i.e., correspond to self-adjoint linear functionals on C ∗ (G)) and have
norm 6 1. Then by Remark 18.7, each of them can be written
p = sp+ − (1 − s)p− , q = tq+ − (1 − t)q− ,
where s, t ∈ [0, 1] and p± , q± are positive definite functions of norm 1. It follows that
pq can also be written in this way, that is as λ(pq)+ − (1 − λ)(pq)− , where (pq)± are
positive definite of norm 1 and λ ∈ [0, 1]. Therefore pq has norm 6 1 as required.
Definition 24.5. B(G) is called the Fourier-Stieltjes algebra of G.
To explain this terminology, consider the case G = Z. The positive definite
functions on Z are the Fourier series of positive measures on the circle, and thus the
algebra B(G) is just the algebra of Fourier series of measures on the circle, classically
known as the Stieltjes algebra.
Can every positive definite function on G be approximated (in some sense) by
compactly supported positive definite functions? This turns out to be a delicate
question, and its answer leads us to consider the relationship between the maximal

AMS Open Math Notes: Works in Progress; Reference # OMN:201701.110660; 2017-01-01 20:43:25
103

C ∗ -algebra C ∗ (G) and the reduced C ∗ -algebra Cr∗ (G) which is associated to the
regular representation. To see why, suppose that p(g) = hξ, λ(g)ξi is the positive
definite function associated to a vector state of the regular representation, ξ ∈ `2 (G).
If ξ has finite support, so does the function p. Moreover, we have
‖p‖_{B(G)} ≤ ‖ξ‖²;
and therefore, since vectors of finite support are dense in ℓ²(G), we find that every
positive definite function associated to a vector state of the regular representation
of G belongs to the closure (in B(G)) of the space of functions of finite support.
Conversely we also have:
Lemma 24.6. Any positive definite function of finite support is associated to a
vector state of the regular representation.
Proof. Let p be such a function. The operator Tη = η ∗ p of right convolution by p is bounded on ℓ²(G), by finiteness of the support; and the fact that p is a positive definite function shows (check this!) that T is a positive operator. Moreover, T commutes with the left regular representation λ[G]. Now put ξ = T^{1/2}[e] (notice that ξ need not have finite support). Then for g ∈ G,
⟨ξ, λ(g)ξ⟩ = ⟨T^{1/2}[e], λ(g)T^{1/2}[e]⟩ = ⟨T[e], [g]⟩ = ⟨p, [g]⟩ = p(g)
as required. 
Denote by A+(G) the space of positive definite functions on G of the form
p(g) = Σ_i ⟨ξ_i, λ(g)ξ_i⟩
where ξ_i is a sequence in ℓ²(G) having Σ ‖ξ_i‖² < ∞ (notice that this implies that the above series converges uniformly to a positive definite function). More canonically we may write
p(g) = Tr(λ(g)T)
where T is a positive trace-class operator on ℓ²(G). We have ‖p‖_{B(G)} = Σ ‖ξ_i‖² = ‖T‖₁, the trace norm of T, and it follows from the fact that the space of trace-class operators is a Banach space under the trace norm that A+(G) is a closed cone in B+(G). In fact the arguments above prove
Proposition 24.7. A+ (G) is the closure of C[G] ∩ B+ (G) (the compactly supported
positive definite functions on G) in B+ (G).
Indeed, Lemma 24.6 shows that every compactly supported element of B+ (G)
belongs to A+ (G), and the remarks preceding the lemma show that every element
of A+ (G) can be approximated by compactly supported elements of B+ (G).



We can also introduce the notation A(G) for the subalgebra of B(G) generated
by A+ (G). It is the closure of C[G] in B(G). Elements of A(G) are functions of the
form
p(g) = Σ_i ⟨ξ_i, λ(g)η_i⟩ = Tr(λ(g)T)
where Σ ‖ξ_i‖² < ∞ and Σ ‖η_i‖² < ∞ (the trace-class operator T need no longer be positive). In other words, the corresponding elements of C*(G)* are precisely those that arise, via the regular representation, from the ultraweakly continuous linear functionals on B(ℓ²(G)).
Remark 24.8. The algebra A(G) is called the Fourier algebra of G; remember that
B(G) is the Fourier–Stieltjes algebra. In the example G = Z, the algebra A(G) is
generated by sequences of the form
n ↦ ∫₀^{2π} e^{int} u(t) v̄(t) dt
where u, v ∈ L²(T). This is the sequence of Fourier coefficients of the L¹ function
uv̄, and the collection of products of this sort has dense span in L1 , so we see that
A(Z) is just the algebra of Fourier transforms of L1 functions on the circle.
Lemma 24.9. Let G be a discrete group. The weak-∗ topology on a bounded subset
of C ∗ (G)∗ is equivalent to the topology of pointwise convergence on the corresponding
bounded subset of B(G).
Proof. It is apparent that weak-∗ convergence in the dual of C ∗ (G) implies point-
wise convergence in B(G). Conversely suppose that pλ is a bounded net in B(G)
converging pointwise to p. Then the corresponding functionals
ϕ_λ(Σ a_g [g]) = Σ a_g p_λ(g)
converge to
ϕ(Σ a_g [g]) = Σ a_g p(g)
for all a = Σ a_g [g] belonging to ℓ¹(G). Since the {ϕ_λ} are uniformly bounded and ℓ¹(G) is norm dense in C*(G), it follows that ϕ_λ(a) → ϕ(a) for all a ∈ C*(G). □


Lecture 25
Amenability II

Theorem 25.1. Let G be a countable discrete group. Then the following are equiv-
alent:
(a) Cr∗ (G) = C ∗ (G),
(b) The Banach algebra A(G) possesses a bounded approximate unit,
(c) Every function in B(G) can be pointwise approximated by a norm bounded se-
quence of compactly supported functions.
Proof. Suppose (c). Then (by lemma 24.9) each state σ of C ∗ (G) is a weak-∗ limit of
vector states a 7→ hξ, λ(a)ξi, where ξ is a unit vector in the regular representation.
It follows that for each such state σ,
|σ(a)| ≤ ‖λ(a)‖
for all a ∈ C*(G); and consequently ‖a‖_{C*(G)} ≤ ‖λ(a)‖, whence C*(G) = C*_r(G),

which is (a).
Now suppose (a), that C ∗ (G) = Cr∗ (G). Let a ∈ C ∗ (G) have ϕ(a) = 0 for each
ultraweakly continuous linear functional ϕ on B(`2 (G)). Then λ(a) = 0 and thus
a = 0 since λ is faithful on C ∗ (G). It follows that the space of ultraweakly con-
tinuous functionals on the regular representation is weak-∗ dense in C ∗ (G)∗ . Using
lemma 24.9, this implies that A+ (G) is dense in B+ (G) in the topology of pointwise
convergence. Now the constant function 1 belongs to B+ (G) (it corresponds to the
trivial representation of G) and thus there is a net in A+ (G) converging pointwise
to 1. Since A(G) is generated by compactly supported functions, such a net is nec-
essarily an approximate unit for A(G), and it is bounded because the norm in A(G)
is given by evaluation at e. This proves (b).
Finally, suppose (b). Let fλ be an approximate unit for A(G). We may assume
without loss of generality that fλ consists of compactly supported functions. Let
f ∈ B(G); then f fλ is a net in A(G) and it converges pointwise to f . 
Let G be a discrete group. An invariant mean on G is a state m on the com-
mutative C ∗ -algebra `∞ (G) which is invariant under left translation by G: that is
m(Lg f ) = m(f ), where Lg f (x) = f (g −1 x).
Definition 25.2. A group G that possesses an invariant mean is called amenable.
Example 25.3. The group Z is amenable. Indeed, consider the states of `∞ (Z)
defined by
σ_n(f) = (1/(2n+1)) Σ_{j=−n}^{n} f(j),
and let σ be a weak-∗ limit point of these states (which exists because of the compactness of the state space). Since for each k ∈ Z, and each f,
|σ_n(L_k f − f)| ≤ (2|k|/(2n+1)) ‖f‖
tends to zero as n → ∞, we deduce that σ is an invariant mean.


Example 25.4. Generalizing the construction above, suppose that G is a group and that {G_n} is a sequence of finite subsets having the property that for each fixed h ∈ G,
#(G_n Δ hG_n) / #G_n → 0
as n → ∞. (Such a sequence is called a Følner sequence.) Then any weak-∗ limit point of the sequence of states
f ↦ (1/#G_n) Σ_{g∈G_n} f(g)
is an invariant mean on G. It can be proved that every amenable group admits a Følner sequence; we may do this at the end of the lecture.
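For instance, in Z the intervals G_n = {−n, . . . , n} form a Følner sequence; the following short Python check (an illustration only, with a fixed shift h = 3) computes the ratio #(G_n Δ hG_n)/#G_n, which equals 2|h|/(2n + 1).

def folner_ratio(n, h):
    Gn = set(range(-n, n + 1))
    hGn = {h + g for g in Gn}
    return len(Gn.symmetric_difference(hGn)) / len(Gn)

for n in (10, 100, 1000):
    print(n, folner_ratio(n, h=3))   # 2*|h|/(2n+1), tending to 0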
Example 25.5. The free group on two or more generators is not amenable. To see
this, denote the generators of the free group G by x and y, and define a bounded
function fx on the group G as follows: fx (g) is equal to 1 if the reduced word for g
begins with the letter x, and otherwise it is zero. Notice then that fx(xg) − fx(g) ≥ 0, and that fx(xg) − fx(g) = 1 if the reduced word for g begins with y or y^{−1}. Define fy similarly; then we have
[fx(xg) − fx(g)] + [fy(yg) − fy(g)] ≥ 1
for all g ∈ G. But this contradicts the hypothesis of amenability: an invariant mean
m would assign zero to both terms on the left-hand side, yet would assign 1 to the
term on the right.
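The displayed inequality can also be checked mechanically on reduced words. The following Python sketch (an illustration; words are encoded as tuples over {x, X, y, Y} with X = x^{-1} and Y = y^{-1}) verifies it for all reduced words of length at most 6.

# Free reduction and the functions f_x, f_y on F_2.
INV = {"x": "X", "X": "x", "y": "Y", "Y": "y"}

def reduce_word(w):
    out = []
    for s in w:
        if out and out[-1] == INV[s]:
            out.pop()
        else:
            out.append(s)
    return tuple(out)

def f(letter, g):
    # f_x(g) = 1 iff the reduced word g begins with the letter x.
    return 1 if g and g[0] == letter else 0

def words(max_len):
    # All reduced words of length at most max_len.
    ws, frontier = [()], [()]
    for _ in range(max_len):
        frontier = [w + (s,) for w in frontier for s in INV
                    if not (w and w[-1] == INV[s])]
        ws.extend(frontier)
    return ws

print(all(f("x", reduce_word(("x",) + g)) - f("x", g)
          + f("y", reduce_word(("y",) + g)) - f("y", g) >= 1
          for g in words(6)))   # True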
There are numerous equivalent characterizations of amenability. For our purposes
the most relevant one is the following.
Proposition 25.6. A countable discrete group G is amenable if and only if there
is a sequence {ξn } of unit vectors in `2 (G) such that the associated positive definite
functions
g 7→ hξn , λ(g)ξn i
tend pointwise to 1.
Proof. Suppose that we can find such a sequence {ξn}. Since ξn and λ(g)ξn are unit vectors we may write
‖ξn − λ(g)ξn‖² = 2 − 2 Re⟨λ(g)ξn, ξn⟩ → 0
so that, for each fixed g, λ(g)ξn − ξn → 0 as n → ∞. Now define states σn on ℓ^∞(G)
by
σn (f ) = hξn , Mf ξn i
where Mf is the multiplication operator on `2 associated to f , and let σ be a weak-∗
limit point of the states {σn }. Since
|σn(L_g f − f)| ≤ ‖f‖ ‖ξn − λ(g)ξn‖
we see that σ(L_g f − f) = 0, that is, σ is an invariant mean.


Conversely, suppose that G is amenable and let m be an invariant mean. Then
m is an element of the unit ball of `∞ (G)∗ = `1 (G)∗∗ . Goldstine’s Theorem 11.20
states that for any Banach space E, the unit ball of E is weak-∗ dense in the unit
ball of E ∗∗ . Thus we can find a net {ϕλ } in the unit ball of `1 (G) that converges
weak-∗ to the invariant mean m. We have
Σ_g |ϕ_λ(g)| ≤ 1,   Σ_g ϕ_λ(g) → 1  as λ → ∞;

and these facts together show that we may assume without loss of generality that
each ϕλ is a positive function of norm one. For each g ∈ G we have Lg ϕλ − ϕλ → 0
weakly in `1 (G) as λ → ∞. Now we use the fact (a consequence of the Hahn-Banach
Theorem) that the weak and norm topologies on `1 have the same closed convex
sets to produce a sequence {ψ1 , ψ2 , . . .} of convex combinations of the {ϕλ } such
that for each g ∈ G, Lg ψn − ψn → 0 in the norm topology of `1 (G). Each ψn is a
positive function of `1 -norm equal to one. Take ξn to be the pointwise square root
of ψn . 
Remark 25.7. Examining the proof, we see that we can assume without loss of
generality that the vectors ξn are finitely supported.
Now we can finally prove
Theorem 25.8. A discrete group G is amenable if and only if Cr∗ (G) = C ∗ (G).
Proof. Suppose that G is amenable. Then by Proposition 25.6 there is a sequence
{pn } of positive definite functions on G of norm one , associated to vector states of
the regular representation, that tend pointwise to 1. These functions pn constitute
a bounded approximate unit for A(G), so by Theorem 25.1, C ∗ (G) = Cr∗ (G).
Conversely suppose that C ∗ (G) = Cr∗ (G). Then A(G) has a bounded approximate
unit, which we may assume without loss of generality is made up of compactly
supported positive definite functions of norm one. By Lemma 24.6, such functions
are associated to vector states of the regular representation. By Proposition 25.6,
G is amenable. 
To conclude let us prove Følner’s theorem.
Proposition 25.9. Any (discrete) amenable group admits a Følner sequence.
Proof. It will suffice to show that for any finite subset S ⊆ G and any ε > 0 there
exists a finite subset F ⊆ G with
#(xF Δ F) / #F < ε   for all x ∈ S;
a diagonal argument then completes the proof. By Proposition 25.6, there exists a positive, finitely supported function ψ on G with ‖ψ‖_{ℓ¹} = 1 such that ‖ψ − L_x ψ‖_{ℓ¹} < ε/#S for all x ∈ S.


Now we use what Terry Tao calls the "layer-cake decomposition" to write ψ = Σ_{i=1}^k c_i χ_{E_i} for some nested non-empty sets E_1 ⊃ E_2 ⊃ · · · ⊃ E_k and some positive constants c_i. Since ψ has norm one,
Σ_i c_i #E_i = 1.
Moreover, |ψ(g) − L_x ψ(g)| ≥ c_i if g ∈ xE_i Δ E_i. Therefore
Σ_{i=1}^k c_i #(xE_i Δ E_i) ≤ (ε/#S) Σ_{i=1}^k c_i #E_i
for all x ∈ S, and thus
Σ_{i=1}^k Σ_{x∈S} c_i #(xE_i Δ E_i) ≤ ε Σ_{i=1}^k c_i #E_i.
It follows that there must be some i for which
Σ_{x∈S} #(xE_i Δ E_i) ≤ ε #E_i,
which gives the desired result. □
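The layer-cake decomposition used in this proof is completely algorithmic. Here is a short Python sketch (an illustration with a made-up finitely supported probability function ψ on Z) that produces the constants c_i and the nested sets E_i and checks the identity Σ_i c_i #E_i = 1.

def layer_cake(psi):
    # psi: dict mapping group elements to positive weights summing to 1.
    # Returns pairs (c_i, E_i) with psi = sum_i c_i * chi_{E_i}, E_i nested.
    levels = sorted(set(psi.values()))             # 0 < t_1 < ... < t_k
    layers, prev = [], 0.0
    for t in levels:
        E = {g for g, v in psi.items() if v >= t}  # superlevel set
        layers.append((t - prev, E))
        prev = t
    return layers

psi = {0: 0.1, 1: 0.3, 2: 0.3, 3: 0.2, 4: 0.1}     # a function on Z with norm 1
layers = layer_cake(psi)
print(all(abs(sum(c for c, E in layers if g in E) - v) < 1e-12
          for g, v in psi.items()))                # psi is reconstructed
print(abs(sum(c * len(E) for c, E in layers) - 1.0) < 1e-12)   # sum c_i #E_i = 1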


Lecture 26
Free Group C ∗ -Algebras

Let G be a discrete group. The canonical trace on C*_r(G) is the vector state τ
associated with [e] ∈ `2 (G); that is,
τ (a) = h[e], λ(a)[e]i.
The associated positive definite function is equal to 1 at the identity and 0 at every
other group element. Notice that we may also write
τ (a) = h[g], λ(a)[g]i
for any fixed g ∈ G (the right-hand side agrees with τ on group elements, and it
is continuous, so it agrees with τ everywhere). As we already observed in the von
Neumann algebra context (Lemma 22.8), this implies that the state τ is a trace,
that is, τ (aa0 ) = τ (a0 a) for every a, a0 ∈ Cr∗ (G).
Remark 26.1. The trace τ is faithful, meaning that if a ∈ Cr∗ (G) is positive and
τ (a) = 0, then a = 0. Indeed, this is a consequence of the fact that the regular
representation (a faithful representation) is the GNS representation associated to τ .
To prove it from first principles, use the Cauchy–Schwarz inequality to estimate the matrix coefficients of λ(a):
|⟨[g], λ(a)[g′]⟩| ≤ ⟨[g], λ(a)[g]⟩^{1/2} ⟨[g′], λ(a)[g′]⟩^{1/2} = τ(a).
We are going to use the faithful trace τ as a key tool in analyzing the reduced
group C ∗ -algebra Cr∗ (F2 ). Notice that since the free group is non-amenable, this
is different from the full C ∗ -algebra C ∗ (F2 ). We are going to distinguish them by
C ∗ -algebraic properties. Specifically, we will prove that Cr∗ (F2 ) is simple. This is
very, very different from C ∗ (F2 ): let G be any finite group with two generators, then
according to the universal property of C ∗ (F2 ) there is a surjective ∗-homomorphism
C ∗ (F2 ) → C ∗ (G), and of course the kernel of this ∗-homomorphism is a non-trivial
ideal.
Here is the key lemma. Denote by u, v the unitaries in Cr∗ (F2 ) corresponding to
the generators x, y of the group.
Lemma 26.2. Let a ∈ C*_r(F2). Then the limit
lim_{m,n→∞} (1/mn) Σ_{i=1}^m Σ_{j=1}^n u^i v^j a v^{−j} u^{−i}
exists in the norm of C*_r(F2) and equals τ(a)1.


Before proving this lemma let’s see how we can apply it.
Theorem 26.3. The C ∗ -algebra Cr∗ (F2 ) has no non-trivial ideals (it is simple).


Remark 26.4. As we earlier pointed out (corollary 4.13), maximal ideals in C ∗ -


algebras are closed. Thus it makes no difference whether we say topologically simple
(no nontrivial closed ideals) or algebraically simple (no nontrivial ideals, closed or
not).
Proof. Let J be a nonzero ideal and let a ∈ J be positive and nonzero. Then the
norm limit
m n
1 X X i j −j −i
lim u v av u
m,n→∞ mn
i=1 j=1
belongs to J. But this limit equals τ (a)1 by Lemma 26.2, and τ (a) 6= 0 by the
faithfulness of the trace. Consequently 1 ∈ J so J is the whole C ∗ -algebra. 
Theorem 26.5. The trace on Cr∗ (F2 ) is unique.
Proof. If τ 0 is another trace, apply it to the left side of the formula in Lemma 26.2
to find that τ 0 (a) = τ (a)τ 0 (1) = τ (a). 
Let us now begin the proof of Lemma 26.2. The basic observation that is needed is this: if Hilbert space operators A and B have orthogonal ranges, then ‖A + B‖² ≤ ‖A‖² + ‖B‖². We are going to argue then that if a = [g], g ≠ e, then the ranges of the terms in the sum
Σ_{i=1}^m Σ_{j=1}^n u^i v^j a v^{−j} u^{−i}
are 'essentially' orthogonal, so that the norm of the sum is of order O((mn)^{1/2}), and thus becomes zero in the limit after we divide by mn.
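The basic observation is easy to confirm in finite dimensions; the following numpy sketch (an illustration with randomly generated operators) forces the ranges of A and B into orthogonal subspaces and checks the inequality.

import numpy as np

rng = np.random.default_rng(2)
n = 10
P = np.diag([1.0] * 5 + [0.0] * 5)     # projection onto the first 5 coordinates
Q = np.eye(n) - P                      # projection onto the last 5 coordinates
A = P @ rng.standard_normal((n, n))    # range of A lies in the range of P
B = Q @ rng.standard_normal((n, n))    # range of B lies in the range of Q

op = lambda T: np.linalg.norm(T, 2)    # operator norm
print(op(A + B) ** 2 <= op(A) ** 2 + op(B) ** 2 + 1e-10)   # True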
Here is a more precise statement of the idea in the form that is needed.
Lemma 26.6. Let H = H1 ⊕ H2 be a Hilbert space equipped with an orthogonal
direct sum decomposition. Let T ∈ B(H) be an operator mapping H2 into H1 , and
let U1 , . . . , Un ∈ B(H) be unitary operators such that all the compositions Uj∗ Ui ,
j 6= i, map H1 into H2 . Then
‖ Σ_{i=1}^n U_i T U_i* ‖ ≤ 2√n ‖T‖.

Proof. Start with a special case: assume that T maps H (and not just H2 ) into H1 .
Write
n n
X 2 X 2
Ui T Ui∗ = T + U1∗ Ui T Ui∗ U1 .
i=1 i=2
The first term T in the sum above has range contained in H1 , whereas the remaining
sum has range contained in H2 (since U1∗ Ui maps H1 into H2 ). Thus the ranges of
the two terms are orthogonal and we get
n n
X 2 X 2
Ui T Ui∗ 6 kT k2 + Ui T Ui∗ .
i=1 i=2

AMS Open Math Notes: Works in Progress; Reference # OMN:201701.110660; 2017-01-01 20:43:25
111

By induction we get
n
X 2
Ui T Ui∗ 6 nkT k2
i=1
giving the result (without the factor 2) in the special case. The general case follows
since any operator T that maps H2 into H1 can be written as T = T 0 + (T 00 )∗ , where
kT 0 k 6 kT k, kT 00 k 6 kT k, and both T 0 and T 00 map H into H1 (to see this, put
T 0 = T P2 and T 00 = P1 T ∗ , where P1 and P2 are the orthogonal projections of H
onto H1 and H2 respectively. 
Now we will use this to prove Lemma 26.2. It will be enough to prove that if a = [g], then the average (1/mn) Σ_{i=1}^m Σ_{j=1}^n u^i v^j a v^{−j} u^{−i} converges in norm to 1 if g = e and to 0 otherwise. For simplicity work with one sum at a time, so consider the expression
S_n(a) = (1/n) Σ_{j=1}^n v^j a v^{−j}.
Clearly, S_n([g]) = [g] if g is a power of y. Otherwise, I claim, ‖S_n([g])‖ ≤ 2n^{−1/2}.
To see this decompose the Hilbert space H = `2 (G) as H1 ⊕ H2 , where H1 is the
subspace of `2 spanned by those free group elements represented by reduced words
that begin with a non-zero power of x (positive or negative), and similarly H2 is
spanned by those free group elements represented by reduced words that begin with
a non-zero power of y (positive or negative), together with the identity. Suppose
that g is not a power of y. Then the reduced word for g contains at least one x and
by multiplying on the left and right by suitable powers of y (an operation which
commutes with Sn and does not change the norm) we may assume that g begins
and ends with a power of x. Now we apply Lemma 26.6 with T the operation of left
multiplication by g and Ui the operation of left multiplication by v i . Once we have
checked that these elements indeed map H2 and H1 in the way that Lemma 26.6
applies, the result follows immediately.
To complete the proof of Lemma 26.2 all we need to do is to reiterate the above argument to show that
‖ (1/mn) Σ_{i=1}^m Σ_{j=1}^n u^i v^j a v^{−j} u^{−i} ‖ ≤ 4 / min{√m, √n}
whenever a = [g] and g is not the identity.


Lecture 27
Free Group C ∗ -Algebras II

Now we are going to prove that the reduced C ∗ -algebra Cr∗ (G) for G = F2 has no
non-trivial projections. Thus Cr∗ (G) is an example of a simple, unital C ∗ -algebra
with no non-trivial projections. Such examples had long been sought in C ∗ -algebra
theory. This is not the first such example (that was constructed by Blackadar in the
seventies using the theory of AF algebras), but it is the ‘most natural’.
We begin with some abstract ideas which lie at the root of K-homology theory for
C ∗ -algebras. They are developments of a basic notion of Atiyah (1970) to provide
a ‘function-analytic’ setting for the theory of elliptic pseudodifferential operators.
Definition 27.1. Let A be a C ∗ -algebra. A Fredholm module M over A consists
of the following data: two representations ρ0 , ρ1 of A on Hilbert spaces H0 and
H1 , together with a unitary operator U : H0 → H1 that ‘almost intertwines’ the
representations, in the sense that U ρ0 (a) − ρ1 (a)U is a compact operator from H0
to H1 for all a ∈ A.
It is often convenient to summarize the information contained in a Fredholm
module in a ‘supersymmetric’ format: we form H = H0 ⊕ H1 , which we consider
as a ‘super’ Hilbert space (that is, the direct sum decomposition, which may be
represented by the matrix γ = ( 1, 0 ; 0, −1 ), is part of the data), we let ρ = ρ0 ⊕ ρ1, and we put
F = ( 0, U* ; U, 0 ),
which is a self-adjoint operator, odd with respect to the grading (that is, anticom-
muting with γ), having F 2 = 1, and commuting modulo compacts with ρ(a) for each
a ∈ A. Thus we may say that a Fredholm module is given by a triple M = (ρ, H, F ).
Lemma 27.2. Suppose that a Fredholm module over the C ∗ -algebra A is given (and
is denoted as above). Then for each projection e ∈ A the operator Ue = ρ1 (e)U ρ0 (e)
is Fredholm, when considered as an operator from the Hilbert space ρ0 (e)H0 to the
Hilbert space ρ1 (e)H1 .
Proof. We have
Ue Ue∗ = ρ1 (e)U ρ0 (e)U ∗ ρ1 (e) ∼ ρ1 (e)U U ∗ ρ1 (e) = ρ1 (e)
where we have used ∼ to denote equality modulo the compact operators. Similarly
Ue∗ Ue ∼ ρ0 (e). Thus Ue is unitary modulo the compacts. 
It follows that each Fredholm module M on A gives rise to an integer-valued
invariant
IndexM : Proj(A) → Z
defined on the space of projections in A.


Let M = (ρ, H, F) be a Fredholm module over a C*-algebra A. Its domain of summability is the subset
𝒜 = {a ∈ A : [F, ρ(a)] is of trace class}
of A (here [x, y] denotes the commutator xy − yx). One sees easily that 𝒜 is a ∗-subalgebra of A. In fact, it is a Banach algebra under the norm
‖a‖₁ = ‖a‖ + Tr |[F, ρ(a)]|,
which is in general stronger than the norm of A itself.
Lemma 27.3. Let M be a Fredholm module over A, with domain of summability 𝒜. The linear functional
τ_M(a) = ½ Tr(γF[F, a])
is a trace (that is to say τ_M(ab) = τ_M(ba) for a, b ∈ 𝒜) on 𝒜.
Proof. We write (suppressing for simplicity explicit mention of the representation ρ)
τ_M(ab) = ½ Tr(γF a[F, b] + γF[F, a]b)
        = ½ Tr(γF a[F, b] − γ[F, a]F b)
        = ½ Tr(γF a[F, b] − F bγ[F, a])
        = ½ Tr(γF a[F, b] + γF b[F, a])
and the last expression is now manifestly symmetric in a and b. (We used the identity F[F, a] + [F, a]F = [F², a] = 0, the symmetry property of the trace, and the fact that γ anticommutes with F and commutes with b.) □
We have now used our Fredholm module M to construct an integer-valued index invariant of projections in A, and a complex-valued trace invariant of general elements of 𝒜. These invariants agree where both are defined:
Proposition 27.4. Let M be a Fredholm module over a C*-algebra A and let e be a projection in the domain of summability 𝒜 of M. Then τ_M(e) = −Index_M(e). In particular, τ_M(e) is an integer.
Proof. Using the trace index formula 3.12 and the computation in the proof of the preceding lemma, we write
τ_M(e) = τ_M(e²) = Tr(γFρ(e)[F, ρ(e)]).
Putting
ρ(e) = ( e_0, 0 ; 0, e_1 ),   F = ( 0, U* ; U, 0 )
we get
[F, ρ(e)]F = Fρ(e)F − ρ(e) = ( U*e_1U − e_0, 0 ; 0, Ue_0U* − e_1 ).
Thus
τ_M(e) = Tr(γρ(e)[F, ρ(e)]Fρ(e)) = Tr ( U_e*U_e − e_0, 0 ; 0, e_1 − U_eU_e* ).
Since U_e* is a parametrix for U_e the index formula now gives
τ_M(e) = −Index(U_e) = −Index_M(e)
as asserted. □

Definition 27.5. We will say that a Fredholm module over a unital C ∗ -algebra A
is summable if its domain of summability is a dense, unital subalgebra of A.

Theorem 27.6. Let A be a unital C*-algebra. Suppose that there exists a summable Fredholm module M over A, for which the associated trace τ_M on the dense subalgebra 𝒜 ⊆ A is positive, faithful, and has τ_M(1) = 1. Then A has no non-trivial projections.
"Faithful" here means that for a positive a ∈ 𝒜, τ_M(a) = 0 implies a = 0.

Proof. Suppose first that e is a projection in 𝒜. Since 0 ≤ e ≤ 1, 0 ≤ τ(e) ≤ 1; since e is a projection, τ(e) is an integer by Proposition 27.4. Thus τ(e) equals zero or one. Since τ is faithful, if τ(e) = 0, then e = 0; if τ(e) = 1 then τ(1 − e) = 0 so 1 − e = 0 and e = 1.
To complete the proof we need only show that Proj 𝒜 is dense in Proj A. Suppose that p ∈ Proj A. Given a small ε > 0, we can (by density) find x ∈ 𝒜 such that x = x* and ‖x − p‖ < ε, so that the spectrum of x lies in (−ε, ε) ∪ (1 − ε, 1 + ε). Let q be the spectral projection of x associated to (1 − ε, 1 + ε); then, by the functional calculus, ‖q − x‖ < ε and therefore ‖q − p‖ < 2ε. We must show that q ∈ 𝒜. This is a consequence of the Cauchy integral formula 6.4
q = (1/2πi) ∫_Γ (z1 − x)^{−1} dz,
where Γ is, say, the circular contour with center 1 and radius ½. The well-known identity
[F, a^{−1}] = −a^{−1}[F, a]a^{−1}
shows that the integrand (z1 − x)^{−1} belongs to 𝒜 and is a continuous function, in the norm of 𝒜, of z ∈ Γ. Therefore the integral converges in 𝒜 and q ∈ 𝒜 as asserted. □

So much for abstraction. Following an idea of Cuntz, we will now construct a


Fredholm module over A = Cr∗ (F2 ) that meets the criteria of Theorem 27.6. This
Fredholm module makes use of the geometry of the tree associated to the free group
G = F2 , which is shown in the figure below.


The tree is an infinite graph (1-dimensional simplicial complex) which is equipped


with a natural action of G by translation. Let V denote the set of vertices of the
tree, E the set of oriented edges. Considered as a representation of G, the Hilbert
space `2 (V ) is a copy of the regular representation; the Hilbert space `2 (E) is the
direct sum of two copies of the regular representation, one copy being made up by
the edges in the ‘a-direction’ and the other by the edges in the ‘b-direction’.
We construct a Fredholm module as follows. First we describe the Hilbert spaces:
H0 = `2 (V ), H1 = `2 (E) ⊕ C.
The representations of Cr∗ (G) on H0 and H1 are the regular ones, and we use the
zero representation on the additional copy of C. We define the unitary U : H0 → H1
by first picking a ‘root’ vertex of the tree (labeled e in the figure). For each vertex
v 6= e we define
U [v] = ([lv ], 0),
where lv is the unique edge of the tree originating from v and pointing in the direction
of e (this can be ±1 times a basis element for `2 (E), depending on the orientation).
For the vertex e itself we define
U [e] = (0, 1),
thus mapping to the basis vector of the additional copy of C. It is clear that U is
unitary. Moreover, if g is a group element, then λ(g)U λ(g −1 ) differs from U only in
that the special rôle of the vertex e is now played by ge. This affects the definition


of l_v only for those finitely many vertices v that lie along the geodesic segment from e to ge. Thus λ(g)Uλ(g^{−1}) − U is of finite rank. It follows that U commutes
modulo finite rank operators with elements of C[G], and hence by an approximation
argument that U commutes modulo compact operators with elements of Cr∗ (G). We
have shown that our data define a summable Fredholm module.
To complete the proof it is necessary to identify the trace τM on the domain
of summability. We need only compute τM ([g]) for group elements g. If g = e
is the identity, we easily calculate τM ([e]) = 1. (For instance this follows from
Proposition 27.4 applied to the projection [e] = 1.) If g is not the identity, then to
compute τ_M([g]) = ½ Tr(γF[F, [g]]), we must compute traces of operators such as
λ(g) − U ∗ λ(g)U
on H0 . Represent this operator by an infinite matrix relative to the standard basis
of H0 . The diagonal entries of the matrix are all zeros, so the trace is zero. It follows
that
τ_M(Σ a_g [g]) = a_e
and thus τM agrees with the canonical trace on Cr∗ (F2 ), which we know to have the
properties listed in Theorem 27.6.
This completes the proof of
Theorem 27.7. The C ∗ -algebra Cr∗ (F2 ) has no non-trivial projections.


Lecture 28
Completely Positive Maps and Nuclearity I

Recall
Definition 28.1. Let A and B be C ∗ -algebras. A linear map (usually bounded)
ϕ : A → B is said to be positive if ϕ(a) > 0 whenever a > 0. If A and B are unital
and ϕ(1) = 1 it is unital positive.
A unital positive map is automatically bounded, with norm 1. Since an element
a ∈ A is positive iff a = x∗ x, we see that ∗-homomorphisms are positive maps.
Let ϕ : A → B be a linear map. Then ϕ also induces linear maps (inflations)
ϕn : Mn (A) → Mn (B). If ϕ is a ∗-homomorphism (and therefore positive) then all
its inflations are too. However, in general the inflation of a positive map need not
be positive: the standard example is the transposition map on A = M2 (C). Indeed,
consider the matrices in M4(C) = M2(M2(C))
    [ 1 0 0 1 ]        [ 1 0 0 0 ]
    [ 0 0 0 0 ]        [ 0 0 1 0 ]
    [ 0 0 0 0 ]   and  [ 0 1 0 0 ] ;
    [ 1 0 0 1 ]        [ 0 0 0 1 ]
the first is positive, the second is not, but the second is obtained by transposing the 2 × 2 blocks of the first.
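Numerically this is immediate to verify; the following numpy sketch (an illustration of the example above) computes the smallest eigenvalue of the first matrix and of its block-wise transpose.

import numpy as np

A = np.array([[1, 0, 0, 1],
              [0, 0, 0, 0],
              [0, 0, 0, 0],
              [1, 0, 0, 1]], dtype=float)

B = A.copy()                            # transpose each 2x2 block of A
for r in range(2):
    for c in range(2):
        B[2*r:2*r+2, 2*c:2*c+2] = A[2*r:2*r+2, 2*c:2*c+2].T

print(np.linalg.eigvalsh(A).min())      # 0.0: the first matrix is positive
print(np.linalg.eigvalsh(B).min())      # -1.0: the block transpose is not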
Definition 28.2. A positive linear map ϕ is completely positive if all its inflations
ϕn are positive also.
Remark 28.3. We use the abbreviations cp = “completely positive” and ucp =
“unital completely positive”.
Clearly a ∗-homomorphism is completely positive. To get a more general under-
standing of complete positivity we need to know something about positive elements
of matrix algebras.
Lemma 28.4. Let A be a C*-algebra and let X = (x_{ij}) ∈ Mn(A). The following conditions are equivalent:
(i) X is positive in Mn(A);
(ii) X belongs to the span of the matrices aa* = (a_i a_j*), where a = (a_i) is a column vector in A^n;
(iii) For every column vector b ∈ A^n, the element b*Xb = Σ_{i,j} b_i* x_{ij} b_j is positive in A.

Proof. For (i)⇒(ii), suppose that X is positive. Then X = Y*Y where Y ∈ Mn(A). Let a_1, . . . , a_n be the columns of Y*. Then Y*Y = Σ_i a_i a_i* (check this!). This proves (ii).
For (ii)⇒(iii), note that if X = aa*, then b*Xb = (b*a)(b*a)* ≥ 0.

For (iii)⇒(i), consider a cyclic representation π of A on H, with cyclic vector ξ. Condition (iii) implies that ⟨η, Xη⟩ ≥ 0 for each η = bξ ∈ H^n, b ∈ A^n. But since ξ is cyclic, such vectors η are dense in H^n. Thus X is positive in every cyclic representation, thus in the universal representation; so positive in Mn(A). □
Thus we obtain the “working definition” of complete positivity.
Corollary 28.5. A linear map ϕ : A → B between C*-algebras is completely positive if and only if Σ_{i,j} b_i* ϕ(a_i* a_j) b_j ≥ 0 for all n-tuples a_1, . . . , a_n ∈ A and b_1, . . . , b_n ∈ B.
From this we see that a positive linear functional (e.g. a state) is completely pos-
itive, and that the “cut-down” of a homomorphism, that is a map ϕ(a) = V ∗ π(a)V
with π a homomorphism and V ∈ B fixed, is completely positive. These two ex-
amples are related by the GNS construction: if ϕ is a state and π the associated
GNS representation on B(H), then ϕ(a) = V ∗ π(a)V where V is the inclusion of
the 1-dimensional subspace spanned by a cyclic vector. Stinespring’s theorem, a
generalization of GNS, states that every completely positive map to B(H) arises in
this way.
Theorem 28.6. (Stinespring) Let A be a unital C ∗ -algebra and ϕ : A → B(H) a
completely positive map. Then there exist a Hilbert space K, a representation π of
A on K, and an operator V : H → K such that ϕ(a) = V ∗ π(a)V for all a ∈ A.
Proof. It follows the line of the usual GNS theorem 15.1. Let W be the tensor
product A ⊗ H (as complex vector spaces). Equip W with the sesquilinear form
⟨ Σ_i a′_i ⊗ ξ′_i , Σ_j a_j ⊗ ξ_j ⟩_W = Σ_{i,j} ⟨ ϕ(a_j* a′_i) ξ′_i , ξ_j ⟩_H.

Complete positivity of ϕ implies that this form is positive (semidefinite), so dividing


W by the null space and completing gives us a Hilbert space K, exactly as in the
GNS construction. The left multiplication representation of A on A ⊗ H extends to
a unitary representation π of A on K, and if V : H → K is given by ξ 7→ 1 ⊗ ξ then,
by construction, V ∗ π(a)V = ϕ(a). 
Notice that if ϕ is unital, then V is an isometry: in this case H may be identified
with the subspace V (H) of K. Using the 2 × 2 matrix decomposition associated
to K = H ⊕ H ⊥ , we get the traditional statement of Stinespring’s theorem: every
unital completely positive map to B(H) can be dilated to a homomorphism (i.e. is
the top left corner entry in a homomorphism to 2 × 2 matrices).
Remark 28.7. Let ϕ be a unital completely positive map. According to Stinespring’s
theorem, there is a ∗-homomorphism π to 2 × 2 matrices with ϕ in the top left; that
is,
π(a) = ( ϕ(a), α(a) ; β(a), γ(a) ).
This gives us
ϕ(a∗ a) = ϕ(a)∗ ϕ(a) + β(a)∗ β(a).


It follows that ϕ(a∗ a) = ϕ(a∗ )ϕ(a) if and only if β(a) = 0. Similarly, ϕ(aa∗ ) =
ϕ(a)ϕ(a∗ ) if and only if α(a) = 0. Thus the set M of a ∈ A with ϕ(a∗ a) = ϕ(a)∗ ϕ(a)
and ϕ(aa∗ ) = ϕ(a)ϕ(a)∗ is a subalgebra, characterized by α(a) = β(a) = 0; it is
called the multiplicative domain of ϕ. If a ∈ M and a0 ∈ A then
ϕ(aa0 ) = ϕ(a)ϕ(a0 ), ϕ(a0 a) = ϕ(a0 )ϕ(a)
(the bimodule property).
As might be expected, cp maps involving matrix algebras are particularly impor-
tant and explicit. These characterizations are due to Choi.
Example 28.8. (CP maps from matrices) Let B be a unital C ∗ -algebra and
ϕ : Mn (C) → B a linear map. Then ϕ is completely positive if and only if the
matrix Φ = [ϕ(eµν )] is positive in Mn (B), where eµν are the standard matrix units.
To prove this, go back to the working definition (Corollary 28.5). We see that ϕ is completely positive if and only if the expression
S = Σ_{i,j} b_i* ϕ(a_i* a_j) b_j
is positive for all m-tuples a_i ∈ Mn(C) and b_i ∈ B. If this positivity condition is satisfied, take a_i = e_{1i} for i = 1, . . . , n; then a_i* a_j = e_{ij} and so the expression S becomes Σ_{i,j} b_i* Φ_{ij} b_j, which shows that Φ is positive in Mn(B) by Lemma 28.4.
Conversely, if we know that Φ is positive, then write each a_i as a linear combination of matrix units, say a_i = Σ_{µ,ν} c_i^{µν} e_{µν}. Then after some algebra, S becomes a linear combination of expressions of the form
S′ = Σ_{µ,ν} x_µ* ϕ(e_{µν}) x_ν = x*Φx,
where the x's are of the form x_ν = Σ_i c_i^{ρν} b_i for various ρ. These expressions are positive because Φ ∈ Mn(B) is positive (using Lemma 28.4 again).
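The criterion is easy to apply in small examples. The following Python sketch (an illustration; the helper choi below is ad hoc, not notation from the lectures) assembles the matrix [ϕ(e_{µν})] for two maps on M2(C): the transpose, which is positive but not completely positive, and a cut-down a ↦ V*aV, which is completely positive.

import numpy as np

def choi(phi, n=2):
    # Assemble the block matrix [phi(e_{mu,nu})] in M_n(M_n(C)).
    C = np.zeros((n * n, n * n), dtype=complex)
    for mu in range(n):
        for nu in range(n):
            e = np.zeros((n, n)); e[mu, nu] = 1.0
            C[mu*n:(mu+1)*n, nu*n:(nu+1)*n] = phi(e)
    return C

transpose = lambda a: a.T
V = np.array([[1.0, 2.0], [0.0, 1.0]])
cutdown = lambda a: V.conj().T @ a @ V

print(np.linalg.eigvalsh(choi(transpose)).min())   # negative: not completely positive
print(np.linalg.eigvalsh(choi(cutdown)).min())     # >= 0: completely positive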
Remark 28.9. An important special case: if b1 , . . . , bn ∈ B then the map Mn (C) → B
defined by
ϕ(eµν ) = bµ b∗ν (µ, ν = 1, . . . , n)
is completely positive.
Example 28.10. (CP maps to matrices) Let A be a unital C ∗ -algebra and let
ϕ : A → Mn (C) be a linear map. Then ϕ is completely positive if and only if the
linear functional Φ on Mn (A) defined by
Φ(a) = Σ_{µ,ν} [ϕ(a_{µν})]_{µν}

is positive. To see this, we once again use the “working definition” 28.5 of complete
positivity. First, suppose that ϕ is a cp map. Then for all lists b1 , . . . , bn in Mn (C)


and all positive a ∈ Mn(A) we have
Σ_{µ,ν} b_µ* ϕ(a_{µν}) b_ν ≥ 0   (in Mn(C)).

Take bν = eν1 (the matrix unit). Then the left side is Φ(a)e11 , so we must have
Φ(a) > 0.
For the converse, suppose that Φ is known to be a positive map. The map ϕ is
completely positive if and only if for all lists ai in A and bi in Mn (C), and all ξ ∈ Cn ,
the expressions
S′ = ξ* ( Σ_{i,j} b_i* ϕ(a_i* a_j) b_j ) ξ
are positive in C. Write each b_i as a linear combination of matrix units, say b_i = Σ_{µ,ν} c_i^{µν} e_{µν}. Then we can expand S′ as
S′ = Σ ξ̄_ν c̄_i^{µν} [ϕ(a_i* a_j)]_{µρ} c_j^{ρσ} ξ_σ.
Put y_ρ = Σ c_j^{ρσ} a_j ξ_σ ∈ A; then
S′ = Σ_{µ,ρ} [ϕ(y_µ* y_ρ)]_{µρ} = Φ(Y) ≥ 0,
since the matrix Y = y*y is positive.
Here is an important corollary.
Lemma 28.11. Suppose that A ⊆ B are C ∗ -algebras. Every ucp map from A to a
matrix algebra extends to a ucp map from B to the same matrix algebra.
Proof. By Example 28.10, ucp maps from A to Mn (C) are in one-to-one correspon-
dence with states on Mn (A). Thus, we need only show that every state ϕ on A
extends to a state on B. By the Hahn-Banach theorem, there is a linear functional
ψ on B of norm one and extending ϕ. We must show that ψ is positive, but this
follows from Proposition 14.11. 


Lecture 29
Completely Positive Maps and Nuclearity II

We will be interested in limits of completely positive maps. Such limits are con-
sidered in the point-norm sense: a sequence (or net) ϕn of completely positive maps
A → B converges (in this sense) to ϕ if and only if ϕn (a) → ϕ(a) (in norm) for each
a ∈ A. Clearly, complete positivity is preserved under this sort of limit.
Definition 29.1. A completely positive map ϕ : A → B is factorable if it is equal
to a composite of completely positive maps A → Mn (C) → B (for some n ∈ N). It
is nuclear if it is the point-norm limit of factorable maps. A C ∗ -algebra A is nuclear
if the identity map A → A is nuclear.
Example 29.2. Every finite-dimensional C ∗ -algebra is a direct sum of matrix al-
gebras, and is therefore nuclear. In fact, we obtain the same definition if we replace
Mn (C) by a general finite-dimensional algebra in the definitions above.
Example 29.3. Every commutative (unital) C ∗ -algebra A = C(X) is nuclear, by
the following argument. Let {U1 , . . . , UN } be a finite open cover of X, let xi ∈ Ui
and let {fi } be a subordinate partition of unity. Then the maps
A → C^N,   f ↦ (f(x_1), . . . , f(x_N))
and
C^N → A,   (λ_1, . . . , λ_N) ↦ Σ_i λ_i f_i
are completely positive and, as the mesh of the cover decreases, their composites approximate the identity in the point-norm sense.
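For A = C[0, 1] this factorization is very concrete. The following Python sketch (an illustration, with evaluation points k/(N − 1) and the standard piecewise-linear "hat" partition of unity, both chosen here for convenience) shows the composite converging to the identity in the sup norm.

import numpy as np

def factor_through_CN(f, N, ts):
    # (psi_N o phi_N)(f) evaluated at the points ts, where phi_N is evaluation
    # at x_k = k/(N-1) and psi_N rebuilds the function with hat functions f_k.
    xs = np.linspace(0.0, 1.0, N)
    values = f(xs)                       # phi_N(f) in C^N
    # With hat functions, sum_k values[k] * f_k is piecewise-linear interpolation.
    return np.interp(ts, xs, values)

f = lambda t: np.sin(2 * np.pi * t) + t ** 2
ts = np.linspace(0.0, 1.0, 1001)
for N in (5, 20, 100):
    print(N, np.max(np.abs(factor_through_CN(f, N, ts) - f(ts))))   # error -> 0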
Nuclearity has many applications in C ∗ -algebra theory. Here we will develop one
which is particularly relevant to K-homology.
Let 0 → J → A → B → 0 be a short exact sequence of C ∗ -algebras and ∗-
homomorphisms (an extension). A splitting is a map B → A which is a right
inverse to the quotient map A → B.
We know from algebra that the existence of a splitting which is a homomorphism
implies that the extension is a direct sum, A = J ⊕ B. But we might ask for a
splitting which has weaker properties, such as complete positivity. An extension
that admits a completely positive splitting is called semisplit.
Proposition 29.4. (Choi-Effros lifting theorem) Every extension (as above)
for which the quotient B = A/J is separable and nuclear is semisplit.
Proving this will require some new technology, foreshadowed in Remark 8.17.
Recall (Definition 8.1) that an approximate unit for a C ∗ -algebra is an increasing net
of positive contractions uλ such that the operations of left (and right) multiplication
by uλ tend to 1 in the point-norm topology. Now suppose that the algebra J for
which we are forming an approximate unit is in fact an ideal in some larger C ∗ -
algebra A. Consider the expressions uλ a and auλ , for a ∈ A. We should not expect


these to converge to a in general (after all, they belong to the ideal J), but we can
still ask what happens to their difference:
Definition 29.5. An approximate unit {uλ} for an ideal J ◁ A is quasicentral if
uλ a − auλ → 0 for all a ∈ A.
Proposition 29.6. Quasicentral approximate units always exist.
Proof. Let D = {uλ } be the canonical approximate unit (Theorem 8.2). I claim that
for each fixed a ∈ A the commutators auλ − uλ a tend to zero in the weak topology
of the Banach space A. Indeed, we must show that ϕ(auλ − uλ a) → 0 for all ϕ ∈ A∗ ,
and it suffices to consider the case where ϕ is a state. Let π be the corresponding
GNS representation of A on a Hilbert space H. Note that π(uλ) tends strongly to the orthogonal projection P onto the closed (14.7) subspace K = π(J)H (check this separately for ξ ∈ K and ξ ∈ K^⊥). But K is an A-invariant subspace, so P commutes with π(A), giving the result.
Now we apply the Hahn-Banach theorem to prove the following assertion: for
fixed a ∈ A and ε > 0 and any u ∈ D, there is v > u in D such that kav − vak < ε.
Indeed, let C = {av − va : v > u, v ∈ D}; it is a closed convex set (as D is closed
under the formation of convex combinations) and we want to prove it contains 0. If
it didn’t, by the Hahn-Banach theorem there would exist a linear functional ϕ that
has absolute value > 1 on C, and that contradicts the first part.
A simple “inflation” argument extends the previous paragraph to finite subsets:
for fixed a1 , . . . , ak ∈ A and ε > 0 and any u ∈ D, there is v > u in D such
that kaj v − vaj k < ε for j = 1, . . . , k. Now use this to define a subnet {uµ } of D
(parameterized by µ ∈ F × R+ × D, where F is the set of finite subsets of A) for
which kauµ − uµ ak → 0 for all a; that is, a quasicentral approximate unit. 
Let J ◁ A and let B be another C*-algebra. A unital completely positive map
B → A/J is liftable if there exists another unital completely positive map B → A
making the obvious diagram commute. We will use quasicentral approximate units
to prove
Proposition 29.7. In the above situation, suppose B is a separable, unital C ∗ -
algebra. Then the collection of liftable ucp maps B → A/J is closed in the point-
norm topology.
Proof. Let ϕn : B → A/J be a sequence of ucp maps converging in the point-norm topology to ϕ, and suppose that ψn : B → A are liftings of ϕn. We want to modify the ψn inductively to new liftings ψ′_n which also converge in the point-norm topology; then their limit will be a lifting of ϕ.
Let {b_j} be a dense subset of the unit ball of B. Without loss of generality we can assume a "fast convergence" condition: for each N = 2, 3, . . . we have
‖ϕ_N(b_j) − ϕ_{N−1}(b_j)‖ < 2^{−N}   for all j < N.
We'll modify the lifts ψ_n inductively to new ucp lifts ψ′_n satisfying
‖ψ′_N(b_j) − ψ′_{N−1}(b_j)‖ < 2 · 2^{−N}   for all j < N.


This condition (together with the density of the b_j and the fact that ucp maps have norm 1) implies that the sequence ψ′_n converges in the point-norm topology, as required.
Suppose that the lifts ψ′_1, . . . , ψ′_N have been constructed. Our formula for ψ′_{N+1} is
ψ′_{N+1}(b) = (1 − u)^{1/2} ψ_{N+1}(b) (1 − u)^{1/2} + u^{1/2} ψ′_N(b) u^{1/2},
where u ∈ J with 0 ≤ u ≤ 1. This is a ucp map and differs by an element of J from ψ_{N+1}, so it certainly lifts ϕ_{N+1}. Now let us choose u from a quasicentral approximate unit for J in A. Using quasicentrality, we can arrange that for b ∈ {b_1, . . . , b_{N+1}} we have
‖(ψ′_{N+1}(b) − ψ′_N(b)) − [(ψ_{N+1}(b) − ψ′_N(b))(1 − u)]‖ < 2^{−N}.
However, according to Equation 8.5, the norm of the expression in the right-hand set of square brackets tends (as u runs over the approximate unit) to the norm of ϕ_{N+1}(b) − ϕ_N(b) in A/J, which by assumption is less than 2^{−N}. This ends the proof. □
Proof of Proposition 29.4. Let J ◁ A and suppose B = A/J is separable and nu-
clear. We want to show that the identity map B → B is liftable. Since B is nuclear,
the identity map is a point-norm limit of factorable maps; by Proposition 29.7,
liftability is preserved under point-norm limits. So it suffices to show that any map
Mn (C) → B is liftable. By Example 28.8, completely positive maps Mn (C) → B
are identified with positive elements of Mn (B), and we want to lift these to positive
elements of Mn (A). But it is easy to lift a positive element from a quotient A/J
to A (first lift to a self-adjoint, and then apply the functional calculus to take the
positive part of the resulting lift). 
Remark 29.8. We have been a little sloppy about unitality here; to be specific,
the argument produces a completely positive lifting but not necessarily a unital
completely positive lifting. To fix this problem, we should notice that when we
identify cp maps Mn(C) → A with positive elements of Mn(A), the unital cp maps correspond to matrices (a_{ij}) ∈ Mn(A) whose "trace" Σ_i a_{ii} ∈ A is equal to 1. So what we want to know is that a positive element b of Mn(B) with trace 1 lifts to a positive element of Mn(A) with trace 1. Let a be any positive lift of b. Its trace is certainly equal to 1 + j, j ∈ J, and by perturbing a by an element of Mn(J) we can ensure that j is positive, so that 1 + j is invertible. Then
a′ = (1 + j)^{−1/2} a (1 + j)^{−1/2}
gives a positive lift with trace 1.


Lecture 30
Nuclearity and Group Algebras

Let G be a discrete group. Recall from Lemma 24.2 that a function p : G → C


with p(e) = 1 is positive definite if and only if the map p (defined on G) extends
linearly to a state of C ∗ (G). We can also characterize positive definite functions in
terms of their multiplier action on C[G]:
Definition 30.1. Let p be a function defined on a group G. The multiplier as-
sociated to p is the linear map Mp : C[G] → C[G] defined on basis elements by
Mp ([g]) = p(g)[g].
Proposition 30.2. A function p on G having p(e) = 1 is positive definite if and only
if the corresponding multiplier Mp extends to a unital completely positive map from
C ∗ (G) to itself, or from Cr∗ (G) to itself.
Proof. Suppose that Mp extends to a completely positive map on C ∗ (G) or Cr∗ (G).
Let x_1, . . . , x_n be group elements and α_1, . . . , α_n scalars. Apply the criterion 28.5 for complete positivity to a_i = [x_i] and b_i = α_i [x_i^{−1}]. The resulting element is a multiple of the identity, and that multiple must be positive. This yields positive-definiteness of p according to Equation 24.1.
Conversely, suppose that p is a positive definite function on G and let ϕ be the
corresponding state of A = C ∗ (G). Let H be the universal representation of G (i.e.
the Hilbert space on which C ∗ (G) acts via π : A → B(H)), and let ξ ∈ H be a unit
cyclic vector for ϕ, so that ϕ(a) = hπ(a)ξ, ξi.
Consider now the Hilbert space H ⊗ H (see Remark 2.19). There is a natural
representation π ⊗ π of G on this Hilbert space, which extends to a representation of
C ∗ (G) because of the universal property of the latter algebra. Let V : H → H ⊗ H
be the isometry η 7→ ξ ⊗ η. Then the composite map
a 7→ V ∗ (π ⊗ π)(a)V : C ∗ (G) → B(H)
is completely positive, agrees with Mp on C[G], and maps C[G] to C[G]; by continuity
it maps C ∗ (G) to C ∗ (G) as required.
The proof for the reduced C ∗ -algebra is similar but involves one additional nuance.
Now we let λ be the regular representation on `2 (G) and we consider H ⊗ `2 (G) with
the representation π ⊗ λ. We need to know that π ⊗ λ extends to a representation
of the reduced C ∗ -algebra Cr∗ (G). Here we cannot appeal to a universal property;
instead we must use Fell’s trick, see below, Remark 30.3. The rest of the argument
proceeds as before. 
Remark 30.3. Fell’s trick is (the proof of) the following statement: suppose G is a
discrete group, λ is the regular representation on `2 (G), and ρ is any representation
on a Hilbert space H. Then λ ⊗ ρ is unitarily equivalent to the direct sum of dim H
copies of λ. In particular, λ ⊗ ρ extends to a C ∗ -homomorphism on Cr∗ (G).


To prove this, simply consider the unitary U : `2 (G) ⊗ H → `2 (G) ⊗ H defined on


basis vectors by
[g] ⊗ ξ 7→ [g] ⊗ ρ(g)ξ.
A simple calculation shows
(λ(h) ⊗ ρ(h)) U([g] ⊗ ξ) = U (λ(h) ⊗ 1)([g] ⊗ ξ),
so that U intertwines λ ⊗ ρ and λ ⊗ 1, proving the result.
Recall now the notion of amenability. A discrete group G is amenable if and
only if there is a translation-invariant state on `∞ (G). In Proposition 25.6 and
the subsequent remark, we proved that G is amenable if and only if there is a
sequence of finitely supported vectors ξn ∈ `2 (G) such that the associated positive
definite functions pn (g) = hg · ξn , ξn i converge pointwise to 1. We then used this to
characterize amenability in C ∗ -algebraic terms: G is amenable iff Cr∗ (G) = C ∗ (G).
Now we will give another characterization of amenability: G is amenable iff Cr∗ (G)
is nuclear. Since the two directions of the proof are quite different, we’ll separate
them out.
Proposition 30.4. Suppose G is an amenable (discrete) group. Then Cr∗ (G) is
nuclear.
Proof. Let (ξn ) be a sequence of finitely supported vectors as guaranteed by Propo-
sition 25.6, and let Sn ⊆ G be the support of ξn . The inclusion of Sn into G induces
an isometry Vn : `2 (Sn ) → `2 (G).
Define ϕn : C*_r(G) → B(ℓ²(Sn)) by T ↦ Vn* T Vn; this is a completely positive map. Note that B(ℓ²(Sn)) is a matrix algebra.
Define Ξn : B(ℓ²(Sn)) → B(ℓ²(Sn)) by sending the matrix unit e_{gh} to ξn(g) ξ̄n(h) e_{gh}. This is a completely positive map by Remark 28.9.
Define ψn : B(ℓ²(Sn)) → C*_r(G) by sending S ∈ B(ℓ²(Sn)) to Σ_{g∈G} ρ(g) Vn S Vn* ρ(g)*, where ρ denotes the right regular representation; the series converges and defines an element of C*_r(G) (indeed of C[G]). This is a completely positive map.
Calculation shows that the composite of these three maps is the multiplier Mpn .
Since pn tends to 1 pointwise, these multipliers tend to the identity operator in the
point-norm topology. Thus Cr∗ (G) is nuclear. 
Proposition 30.5. Suppose Cr∗ (G) is nuclear. Then G is an amenable group.
Proof. By definition of nuclearity, there are nets of ucp maps
ϕλ : C*_r(G) → M_{nλ}(C),   ψλ : M_{nλ}(C) → C*_r(G)
whose composites tend point-normwise to the identity map. Using Lemma 28.11, extend the maps ϕλ to all of B(ℓ²(G)). We obtain composite maps αλ : B(ℓ²(G)) → C*_r(G) ⊆ L(G) (the group von Neumann algebra). Now use the compactness of the unit ball of B(ℓ²(G)) in the weak topology to pass to a point-weak limit
α : B(ℓ²(G)) → L(G)
which restricts to the identity on C*_r(G).


We can regard ℓ^∞(G) as a subalgebra of B(ℓ²(G)). Consider the map
m = τ ∘ α : ℓ^∞(G) → C,
where τ is the canonical trace (Lemma 22.8). I claim that m is an invariant mean. To see this, for g ∈ G let Ug be the unitary given by left multiplication by g. Then Ug ∈ C*_r(G) belongs to the multiplicative domain of α (Remark 28.7). Thus for T ∈ B(H),
τ ∘ α(Ug* T Ug) = τ(Ug* α(T) Ug) = τ ∘ α(T),
the first equality because of the bimodule property (Remark 28.7) and the second because τ is a trace. Applied to ℓ^∞(G) this shows that m is an invariant mean. □


Lecture 31
Approximate Equivalence and the Weyl-von Neumann Theorem

Unitary equivalence (conjugation by a unitary) is a fundamentally important


equivalence relation in C ∗ -algebra theory. Many deep results arise, though, when
we introduce compact perturbations into the mix also. The ancestor of all these
results is the Weyl-von Neumann theorem.
Definition 31.1. A bounded operator T on a Hilbert space H is diagonal if there
exists an orthonormal basis of H made up of eigenvectors for T .
Theorem 31.2. (Weyl-von Neumann theorem) Every bounded selfadjoint op-
erator on a Hilbert space is the sum of a compact operator and a diagonal operator.
Moreover, the norm of the compact part can be taken smaller than any prescribed ε.
Proof. According to the proof of Corollary 16.5 (the spectral theorem), every self-adjoint operator is unitarily equivalent to a direct sum of countably many operators of the form
Tf(λ) = λ f(λ)
on H = L2 (X, µ) where X is a compact subset of R and µ some Borel measure on
it. It suffices therefore to prove the result for just one operator of this sort.
Given ε > 0, let Hn be the subspace of H consisting of all functions that are constant on each of the intervals
[kε2^{−n}, (k + 1)ε2^{−n})
(for k ∈ Z). Let Pn be the orthogonal projection onto Hn. It is easy to compute that
‖T Pn − Pn T‖ ≤ 2ε2^{−n}.
Consequently if we put Qn = Pn − Pn−1, which is also an orthogonal projection because Hn−1 ⊆ Hn, we have
‖T Qn − Qn T‖ ≤ 6ε2^{−n}.
The series Σ_n Qn converges strongly to 1, and hence
T = Σ_n Qn T Qn + Σ_n (T Qn − Qn T) Qn;
the first term is a direct sum of self-adjoint operators on finite-dimensional spaces (hence diagonal by standard linear algebra), and the second has finite-rank terms and converges in norm (by our estimates), hence the limit is a compact operator and we estimate that its norm is at most 12ε. □
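The key commutator estimate can be seen numerically. The following numpy sketch (an illustration, with ε = 1 and X = [0, 1) discretized on a fine grid) computes ‖T Pn − Pn T‖ for the multiplication operator T and the dyadic step-function projections Pn.

import numpy as np

M = 2 ** 10                                 # grid points in [0, 1)
grid = (np.arange(M) + 0.5) / M
T = np.diag(grid)                           # multiplication by lambda

def P(n):
    # Projection onto functions constant on the 2^n dyadic intervals.
    block = M // (2 ** n)
    Pm = np.zeros((M, M))
    for k in range(2 ** n):
        sl = slice(k * block, (k + 1) * block)
        Pm[sl, sl] = 1.0 / block            # averaging over the k-th interval
    return Pm

for n in range(1, 6):
    Pn = P(n)
    print(n, np.linalg.norm(T @ Pn - Pn @ T, 2))   # commutator norm is O(2^{-n})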
Remark 31.3. This theorem goes back to the 1930s. It was not until 1971 that David
Berg proved the same statement for normal operators. What was so hard — where
does the above proof go wrong in the normal case? Can you fix the problem?
In what follows we will denote by π the quotient map B(H) → Q(H).
Definition 31.4. Let T be an operator on a Hilbert space H. We say that T is
essentially normal if T T ∗ − T ∗ T is compact (that is, if π(T ) is normal in the Calkin
algebra Q(H)). The essential spectrum of T is the spectrum of π(T ) in Q(H).
Bearing in mind Atkinson’s theorem (3.8), it is equivalent to say that the essential
spectrum of T is the set of λ ∈ C for which T − λ1 is not Fredholm.
Definition 31.5. Two operators T1 , T2 on H are essentially unitarily equivalent if
there exist a unitary U and a compact K such that T1 = U ∗ T2 U + K.
We want to understand the classification of essentially normal operators up to
essential unitary equivalence. As Brown, Douglas and Fillmore discovered, it is
helpful to introduce the following notion.
Definition 31.6. Let A be a C ∗ -algebra. An extension of A is an injective ∗-
homomorphism τ : A → Q(H); two extensions τ1 , τ2 are equivalent if there is a
unitary U ∈ B(H) with τ1 (a) = π(U )∗ τ2 (a)π(U ). The collection of equivalence
classes of extensions of A is denoted by Ext(A). (If A = C(X) is commutative, then
we refer to Ext(X) instead of Ext(C(X)).)
Example 31.7. Suppose that T is an essentially normal operator with essential
spectrum X ⊆ C. Then π(T ) generates a commutative subalgebra of Q(H), iso-
morphic to C(X). The inclusion C(X) → Q(H) gives an element of Ext(X). Thus
every essentially normal operator defines an extension. Essentially unitarily equiv-
alent operators define the same extension. Conversely, if two essentially normal
operators with the same essential spectrum define the same extension, then they
are essentially unitarily equivalent.
Let us analyze the essential spectrum of a normal operator in more detail.
Lemma 31.8. The essential spectrum of a normal operator is the whole of its (or-
dinary) spectrum except for the isolated eigenvalues of finite multiplicity.
Proof. It is equivalent to say that if T is a normal Fredholm operator, then the only
way 0 can appear in Spectrum(T ) is as an isolated eigenvalue of finite multiplicity.
Recall that if T is Fredholm then the restriction of T to a map ker(T )⊥ → Im(T )
has a bounded inverse; let M be the norm of that inverse. Then
kT ξk > M −1 kξk ∀ξ ∈ ker(T )⊥ ,
and it follows (via the spectral theorem) that
(Spectrum(T ) \ {0}) ∩ D(0; M −1 ) = ∅.
Thus 0 is an isolated point of the spectrum. Such an isolated point is necessarily an
eigenvalue, and the definition of “Fredholm” implies that it has finite multiplicity.

Note that this result fails for operators that are not normal. The essential spec-
trum of the unilateral shift is the unit circle, but its actual spectrum is the whole
unit disc.
Corollary 31.9. The essential spectrum of a diagonal operator is the set of limit
points of its eigenvalue sequence.
Lemma 31.10. Let X be a metric space. Suppose that (xn ) and (yn ) are two
sequences in X that have the same limit points. Then there is a bijection α : N → N
such that d(xn , yα(n) ) → 0 as n → ∞.
Proof. An exercise, using induction. 
Proposition 31.11. Two diagonal operators that have the same essential spectrum
are essentially unitarily equivalent.
Proof. Let D and D′ be diagonal operators, with corresponding eigenvalue sequences
(λn), (λ′n) and eigenbases (en), (e′n). Using Lemma 31.10, choose a permutation α of
N such that
|λn − λ′α(n)| → 0
as n → ∞. Then the unitary U defined by U en = e′α(n) satisfies (U∗D′U − D)en = (λ′α(n) − λn)en; this difference is a diagonal operator whose entries tend to 0, hence compact. Thus U implements an essential unitary equivalence between D and D′. □
Definition 31.12. An extension τ : A → Q(H) splits if there is a commutative diagram of ∗-homomorphisms: that is, if there is a ∗-homomorphism σ : A → B(H) with π ◦ σ = τ, so that τ lifts through the quotient map π : B(H) → Q(H).
We are going to prove that split extensions are unique up to equivalence. A
consequence (in the single operator case) will be that an extension is split if and
only if it comes from an operator of the form normal plus compact.
Proposition 31.13. Let X be a compact metric space. Then any two split exten-
sions of C(X) are equivalent.
Proof. We’ll define a special class of split extensions called diagonal extensions. To
construct such an extension, let (xn ) be a sequence in X which is dense and has every
point of X as a limit point. Let H = `2 and let ϕ(f ) be the diagonal operator on
H with diagonal elements (f (xn )). Then ϕ : C(X) → B(H) is a ∗-homomorphism
and τ = π ◦ ϕ is injective, defining a split extension.
Given two sequences (xn ) and (x0n ), we can always construct by Lemma 31.10 a
bijection α : N → N such that d(xn , x0α(n) ) → 0. The unitary U on `2 defined by
α then implements an equivalence between the corresponding extensions τ and τ 0 .
Thus all diagonal extensions are equivalent and our problem is to show that every
split extension is equivalent to a diagonal one.
Suppose that Y → X is a surjective map of compact metric spaces. Then the
induced map C(X) → C(Y ) is injective so every extension of C(Y ) gives rise to an
extension of C(X). Moreover, if the extension of C(Y ) is diagonal, so is the induced
extension of C(X).
Now suppose that τ is an extension of C(X), split by a ∗-monomorphism ϕ : C(X) →
B(H). Consider the commutative C ∗ -subalgebra of B(H) generated by the image
of ϕ together with the projections corresponding to the characteristic functions of
a countable basis for the open subsets of X (via the Borel functional calculus 16.6).
Its maximal ideal space Y is compact and metric and surjects onto X, giving us a commutative diagram in which the inclusion ψ : C(Y) → B(H) of this subalgebra restricts, along the induced map C(X) → C(Y), to ϕ, and π ◦ ϕ = τ : C(X) → Q(H).
As we remarked above, it suffices to show that the extension π ◦ ψ of C(Y ) is
diagonal. But C(Y ) is generated by countably many projections, whence Y is a zero-
dimensional compact metrizable space. By a standard result of general topology,
such a space is homeomorphic to a compact subset of R. Let g : Y → R be such a
homeomorphism. Then T = ψ(g) is a self-adjoint operator which generates ψ(C(Y ))
as a C ∗ -algebra. By the Weyl-von Neumann theorem, T is a compact perturbation of
a diagonal operator T′ whose essential spectrum is g(Y) ≅ Y. Thus the extension
π ◦ ψ is equivalent to one generated by a diagonal operator (and moreover, by
Proposition 31.11, we may assume without loss of generality that the eigenvalue
sequence of that operator belongs to Y ). Such an extension is, by construction,
diagonal. 
Lecture 32
Brown-Douglas-Fillmore Theory

Proposition 32.1. The extension defined by an essentially normal operator T splits
if and only if T is the sum of a normal operator and a compact operator.
Proof. Suppose that T is essentially normal with essential spectrum X. Construct
(as in the proof above) a diagonal (normal) operator T 0 with the same essential
spectrum. By Proposition 31.13 above, the extension defined by T splits if and only
if it is equivalent to the extension defined by T 0 . But this is true if and only if T
and T 0 are essentially unitarily equivalent. 
Proposition 32.2. Two normal operators are essentially unitarily equivalent if and
only if they have the same essential spectrum.
Proof. If they are equivalent then they have the same essential spectrum, of course.
Conversely, if they have the same essential spectrum, then they both define split
extensions (by the previous proposition) and these extensions are equivalent by
Proposition 31.13. 
We are now going to equip Ext with an algebraic structure. Notice that there is
an embedding Q(H) ⊕ Q(H) → Q(H ⊕ H) ≅ Q(H) (the last step requires a choice
of a unitary H ⊕ H → H, but the choice will wash out once we pass to equivalence
classes). This means that the operation of direct sum of extensions
τ1 ⊕ τ2 : A → Q(H ⊕ H) ≅ Q(H)
is well defined on Ext(A) and gives Ext(A) the structure of a (commutative, asso-
ciative) semigroup. Here are the two key theorems about this semigroup.
Theorem 32.3. (Voiculescu’s theorem) For any separable, unital C ∗ -algebra
A, all split extensions are equivalent in Ext(A), and the class of the unique trivial
element is an identity for the addition on Ext(A).
In the last lecture we proved the first of these statements when A is commutative.
We haven’t yet proved the second statement even in the commutative case.
Theorem 32.4. (Arveson) If A is separable and nuclear, then Ext(A) is a group.
Example 32.5. Suppose that A = C(S 1 ), the continuous functions on the circle.
I claim that Ext(A) = Z. To see this, notice that any extension of S 1 is generated
by an essentially unitary Fredholm operator, T say, and that such an operator has
an index. In this way we obtain a homomorphism Ext(S 1 ) → Z which is easily
seen to be surjective, so we must prove that its kernel is zero. In other words, we
must prove that if T has index 0, then it is essentially unitarily equivalent to a
unitary operator. But if Index(T ) = 0 then there is a compact (in fact finite-rank)
perturbation of T that is invertible. And if T is invertible, then U = T(T∗T)−1/2, the
partial isometry in the polar decomposition of T , is actually a unitary. Moreover,
since π(T ∗ T ) = 1, we find π(U ) = π(T ); in other words U is a compact perturbation
of T as required. (Notice how we made use of the group structure of Ext in this
example. Not knowing this in advance, BDF had to work quite a bit harder to do
this calculation.)
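Concretely, the index of the Toeplitz operator with continuous invertible symbol f (Lecture 20) is, with the usual sign convention, minus the winding number of f about 0, so the identification Ext(S1) ≅ Z can be computed directly from symbols. The following sketch approximates winding numbers numerically; the example symbols and sample count are hypothetical choices.

    import numpy as np

    # Sketch: winding number of an invertible symbol f : S^1 -> C \ {0};
    # by the Toeplitz index theorem this determines the class in Ext(S^1) = Z.
    def winding_number(f, samples=4096):
        t = np.linspace(0.0, 2 * np.pi, samples, endpoint=False)
        values = f(np.exp(1j * t))
        increments = np.angle(np.roll(values, -1) / values)
        return int(round(increments.sum() / (2 * np.pi)))

    print(winding_number(lambda z: z ** 3))          # 3
    print(winding_number(lambda z: z ** -2 + 0.1))   # -2
    print(winding_number(lambda z: 2 + z))           # 0 (index-zero symbol)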
Proof of Theorem 32.4, assuming Theorem 32.3. Suppose that A is separable nu-
clear. Let τ : A → Q(H) be an extension. By the Choi-Effros lifting theorem
(Proposition 29.4), there exists a unital completely positive map σ : A → B(H)
that lifts τ . By Stinespring’s theorem (Theorem 28.6), σ can be dilated to a ∗-
homomorphism ϕ : A → B(H ⊕ H 0 ): in other words, we have a ∗-homomorphism
ϕ(a) = ( ϕ11(a)  ϕ12(a)
         ϕ21(a)  ϕ22(a) ),
where π ◦ ϕ11 (a) = τ (a). By considering multiplicative domains modulo K (see
Remark 28.7) it follows that π ◦ ϕ12 = 0 and π ◦ ϕ21 = 0, and that τ 0 = π ◦ ϕ22 is a
homomorphism to the Calkin algebra (an extension). By construction τ ⊕ τ 0 = π ◦ ϕ
is split, so τ 0 is an inverse of τ . 
Remark 32.6. One might be concerned that τ 0 might not be injective here. But we
can always make it injective by adding a split extension. A similar device can be
used to make Ext into a functor: given a unital ∗-homomorphism α : A → B and an
extension τ : B → Q(H), the composite τ ◦ α : A → Q(H) might fail to be injective;
but taking the direct sum with a split extension yields an injective map, and thus a
functorially induced homomorphism α∗ : Ext(B) → Ext(A).
Now we’ll address the proof of Voiculescu’s theorem. I’m only going to talk about
one simple case (you can find the general proof in Analytic K-Homology). We’ll
look at extensions generated by essentially normal operators; in other words, the
case A = C(X) where X is a compact subset of C. Here is the basic observation.
Lemma 32.7. Let T be an essentially normal operator on a Hilbert space H. If T is
not Fredholm, then there exists an orthonormal sequence ξn in H such that T ξn → 0
and T ∗ ξn → 0.
Proof. If T fails to be Fredholm, then at least one of the following must hold:
dim ker(T ) = ∞, dim ker(T ∗ ) = ∞, or Im(T ) is not closed. If T has infinite-
dimensional kernel then we can find an orthonormal sequence ξn ∈ ker(T ). Then
T ξn = 0 and
‖T∗ξn‖2 = ⟨T T∗ξn, ξn⟩ = ⟨(T T∗ − T∗T)ξn, ξn⟩ → 0,
since a compact operator takes any orthonormal sequence to one that tends to 0. The
same argument works if T ∗ has infinite-dimensional kernel. If T fails to have closed
(i.e. complete) range, use the polar decomposition: the range of T is isometric to
1
the range of X = (T ∗ T ) 2 , which therefore can’t be complete (i.e. closed) either. It
follows from the spectral theorem that if the positive operator X fails to have closed
range, the spectral projections Pn = χ[0,1/n] (X) must all be infinite-dimensional.
But now inductively we can choose an orthonormal sequence ξn with ξn ∈ Im(Pn); we have
‖Tξn‖ = ‖Xξn‖ ≤ 1/n,
so Tξn → 0, and T∗ξn → 0 follows using essential normality in the same way as
above. □
Lemma 32.8. Suppose that T is an essentially normal operator with essential spec-
trum X. Then T is essentially unitarily equivalent to D ⊕ T 0 , where D is a diagonal
operator with essential spectrum X, and T 0 is some other operator.
Proof. I claim that it is enough to prove the following claim: given ε > 0 and a
point λ ∈ X, the operator T is equal to a sum
( λI   0
   0   T′ ) + K,
where K is a compact operator of norm < ε (and the identity operator I is on an
infinite-dimensional subspace). If this is true, then we can proceed inductively with
λn running over a dense sequence in X and ε = 2−n , to get the result desired.
To prove the claim, using Lemma 32.7 find an orthonormal sequence ξn with
‖Tξn − λξn‖ < ε2−n−2,  ‖T∗ξn − λ̄ξn‖ < ε2−n−2.
Let Q be the orthogonal projection onto the complement of the span of the {ξn },
and let T 0 = QT Q. Then simple estimates show that T − (T 0 + λ(I − Q)) is compact
and has norm less than ε, as required. 
Proof of Theorem 32.3 in the single operator case. We already know that all split
extensions are equivalent (Proposition 31.13), so what we need to show is that
the equivalence class of the (unique) split extension acts as the identity. Now
Lemma 32.8, translated into the language of extensions, says that every extension
τ is equivalent to ϕ ⊕ τ 0 , where ϕ is a split extension. But now if ψ is another split
extension, we have
ψ ⊕ τ ≈ ψ ⊕ ϕ ⊕ τ 0 ≈ ϕ ⊕ τ 0 ≈ τ,
since the split extensions ψ ⊕ ϕ and ϕ are equivalent. 
Lecture 33
More about BDF Theory

I want to relate the Ext theory that we have been learning about to the Fredholm
modules that showed up in our discussion of Cr∗ (F2 ) (Definition 27.1). Let A be a
separable nuclear C ∗ -algebra and τ an extension of A. According to Theorem 32.4,
there is a Hilbert space H = H 0 ⊕ H 00 and a ∗-homomorphism ϕ : A → B(H) with
the property that in the 2 × 2 matrix decomposition
ϕ = ( ϕ11  ϕ12
      ϕ21  ϕ22 ),
we have τ = π ◦ ϕ11 . Moreover, we saw in the proof of Theorem 32.4 that the
off-diagonal terms ϕ12 and ϕ21 take values in the compact operators. Equivalently,
if we define
F = ( 1   0
      0  −1 ),
then F 2 = 1, F = F ∗ , and F commutes modulo compacts with ϕ(a) for all a ∈ A.
These are exactly the criteria that we used to define a Fredholm module in Defini-
tion 27.1. The only difference is that in that section, we demanded an additional
piece of structure, namely a grading on our Hilbert space with respect to which
F is odd and ϕ is even. Let us agree to call a Fredholm module equipped with a
grading (as in 27.1) an even Fredholm module and one without this structure an
odd Fredholm module. Then we have shown that every extension of a separable
nuclear C ∗ -algebra gives rise to an odd Fredholm module.
Conversely, suppose (H, F, ϕ) is an odd Fredholm module over an algebra A. Then
P = (1 + F)/2 is a projection; let H′ be the range of P. For each a ∈ A we can define
a “Toeplitz” operator Ta = P ϕ(a)P on H 0 , and the map a 7→ Ta is a homomorphism
modulo compacts and therefore gives rise to an extension of A. Thus extensions and
odd Fredholm modules are equivalent (for nuclear C ∗ -algebras).
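As a small check on the compact-commutator condition, consider the standard odd Fredholm module for C(S1): H = ℓ2(Z), ϕ(f) the Laurent (multiplication) matrix of f, and F = 2P − 1 with P the projection onto the nonnegative modes. For a trigonometric polynomial symbol the commutator [P, ϕ(f)] actually has finite rank. The sketch below works in a finite window of the bi-infinite matrices; the window size and the symbol are hypothetical choices.

    import numpy as np

    # Sketch: odd Fredholm module for C(S^1) on l^2(Z), truncated to a window.
    N = 40
    idx = np.arange(-N, N)

    def laurent(coeffs):
        """Laurent matrix of f(z) = sum_k coeffs[k] z^k: entry (i, j) = c_{i-j}."""
        M = np.zeros((2 * N, 2 * N), dtype=complex)
        for k, c in coeffs.items():
            for row, i in enumerate(idx):
                j = i - k
                if -N <= j < N:
                    M[row, j + N] = c
        return M

    P = np.diag((idx >= 0).astype(float))       # projection onto nonnegative modes
    M = laurent({3: 1.0, -1: 0.5})              # hypothetical symbol z^3 + 0.5 z^{-1}
    print(np.linalg.matrix_rank(P @ M - M @ P)) # 4 = 3 + 1: the commutator has finite rank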
We saw in Lecture 20 that Toeplitz operators with invertible symbol are Fredholm
and therefore have an index. The corresponding notion in this general case makes
use of K-theory.
Let A be a unital C ∗ -algebra. The group U (A) of unitary elements of A is a
topological group (in the norm topology) and therefore π0 U (A), the collection of
connected components of U (A), is a discrete group. Moreover for any n, Mn (A) is
also a C ∗ -algebra and there is a natural embedding of U (Mn (A)) in U (Mn+1 (A))
(“direct sum with 1”).
Definition 33.1. For a unital C ∗ -algebra A, the K1 group K1 (A) is the (abelian)
group defined as a direct limit
K1(A) = lim_{n→∞} π0(U(Mn(A))).
Suppose that τ is an extension of A. If a is unitary in Mn (A), then τ (a) is unitary
in Mn (Q(H)) = Q(H n ). Thus τ (a) lifts to an essentially unitary operator on H n
which has an index denoted Index(τ (a)).
Proposition 33.2. The pairing (a, τ ) 7→ Index(τ (a)) respects the equivalence rela-
tions on Ext and K1 and therefore passes to a bilinear pairing
Ext(A) ⊗ K1 (A) → Z.
Proof. This is just a bunch of easy things to check.
If two extensions τ, τ 0 are equivalent by a unitary U , the corresponding Fredholm
operators τ (a) and τ 0 (a) are conjugate by U and therefore have the same index.
The index of the direct sum of two Fredholm operators is the sum of their indices:
this shows the homomorphism property on the Ext side. It also shows that the
“adding 1” in the direct limit that defines K-theory does not affect the index.
The homotopy invariance of the Fredholm index (Proposition 3.7) shows that the
pairing is well-defined after passing to connected components (π0 ) in the definition
of K1 .
Finally, Proposition 3.10 shows that the pairing has the homomorphism property
on the K1 side. 
K-theory is computable because it is a cohomology theory. For more about this,
come to the K-theory course that I’ll be teaching in SP17 hopefully. The key prop-
erties of a cohomology theory are two:
(a) Homotopy invariance: given a one-parameter family αt of ∗-homomorphisms
A → B, the induced maps α0∗ and α1∗ on K-theory are the same. (Note: A
“one parameter family of homomorphisms” is just a homomorphism α : A →
B ⊗ C[0, 1] = C([0, 1]; B).)
(b) Mayer-Vietoris property: given a pushout square of C∗-algebras, with maps
C → A0, C → A1, A0 → B, and A1 → B, there is an exact sequence relating
the K-theory groups of C, A0, A1, and B.
The key to BDF’s results is that Ext has the same properties. In contrast to K-
theory, where the homotopy invariance (for example) is a more or less trivial conse-
quence of the definitions, the homotopy invariance of Ext is a major theorem.
Remark 33.3. As well as the “odd” pairing defined in Proposition 33.2 there is also
an “even” pairing between graded Fredholm modules and the even K-group K0 .
We have, in fact, already made use of this pairing in a special case, lemma 27.2.
That lemma shows how to give an integer-valued pairing between even Fredholm
modules and projections in the algebra A. But it can easily be extended to give a
pairing between such modules and projections in matrix algebras Mn (A), and such
projections generate the even K-group K0 (A).
To finish our brief discussion of Ext I will give an example of a C ∗ -algebra B
(necessarily non-nuclear) for which Ext(B) is not a group. Let Γ be the discrete
group SL(3, Z). The key fact from representation theory that we need to use is that
Γ has Kazhdan’s Property T.
Definition 33.4. A finitely generated discrete group Γ has property T if the fol-
lowing is true: for each finite generating set S (or for just one such set) there is a
constant εS > 0 such that, if π : Γ → U (H) is a unitary representation and there is
a unit vector ξ ∈ H which is “almost fixed” by Γ, in the sense that
‖ξ − π(g)ξ‖ < εS  ∀g ∈ S,
then there is also a unit vector ξ 0 ∈ H that is fixed (that is, π(g)ξ 0 = ξ 0 for all
g ∈ Γ).
Example 33.5. Finite groups have property T.
Example 33.6. No infinite amenable group has property T: the normalized char-
acteristic functions of a Folner sequence, thought of as vectors in the regular rep-
resentation, become more and more “almost fixed”, but the regular representation
has no fixed vectors (because the group is infinite).
It is a deep fact that infinite property T groups exist at all; but they do, and SL(3, Z) is an example.
Remark 33.7. A fixed vector in a representation of Γ is the same thing as a subrep-
resentation which is a copy of the trivial representation. Therefore, the definition
of property T can be rephrased: if a unitary representation of Γ “approximately”
contains a copy of the trivial representation, then it actually does contain a copy of
the trivial representation. When formulated this way, Property T has an extension
to any finite-dimensional irreducible representation. Let π : Γ → U (Hπ ) be such a
representation and let ρ : Γ → U (Hρ ) be any unitary representation. We say that ρ
“approximately contains” π if there is an isometry V : Hπ → Hρ such that
kπ(g) − V ∗ ρ(g)V k < ηS ∀g ∈ S,
where S is a generating set as before. Then, for a property T group, there is an
ηS such that this approximate containment implies that ρ actually does contain a
subrepresentation isomorphic to π. To prove this, apply the definition of property T
to the space of bounded operators Hπ → Hρ ; because Hπ is finite-dimensional, this
is actually a Hilbert space in the Hilbert-Schmidt norm, and its invariant elements
are the intertwining operators.
Remark 33.8. It follows from the previous remark that for each finite-dimensional
irreducible representation π of a property T group, there is a nonzero projection
pπ ∈ C ∗ (Γ) (the maximal group C ∗ -algebra) with the following property: for any
representation ρ of Γ on H, ρ(pπ ) ∈ B(H) is an orthogonal projection whose image
is the π-isotypical subspace
HomΓ (Hπ , H) ⊗ Hπ
of H. (When π is the trivial representation, the π-isotypical subspace is the space of
invariant vectors.) To see why this is so, observe first (this is the key point) that the
π-isotypical subspace in any faithful representation of C∗(Γ) must be nonzero. For,
let ρ be such a representation; by “inflating” it, i.e. tensoring with the identity operator on an infinite-dimensional Hilbert space, we may assume that its image contains no non-zero compact operators.
mately a subrepresentation of ρ; by Remark 33.7 this “approximate” subrepresenta-
tion can be promoted to an actual subrepresentation. Now represent C ∗ (Γ) faithfully
on a Hilbert space H whose π-isotypical part is a single copy of Hπ . The action of
C∗(Γ) is diagonal relative to the direct sum decomposition H ≅ Hπ ⊕ Hπ⊥, and it must contain some nonzero element that acts as zero on Hπ⊥ (otherwise the representation on Hπ⊥ would be faithful, contradicting our previous observation). By irreducibility of π, C∗(Γ) contains the whole algebra B(Hπ) ⊕ 0 ⊆ B(Hπ ⊕ Hπ⊥) = B(H). The
Kazhdan projection pπ corresponds to IHπ ⊕ 0 in this representation.
Now to the construction of a C ∗ -algebra for which Ext is not a group. It uses the
fact that if Γ = SL(3, Z) then Γ has an infinite family {πn } of pairwise inequivalent
finite-dimensional irreducible representations. Let A = C ∗ (Γ) and let pn ∈ A be the
Kazhdan projections corresponding to πn. Let J be the closed ideal generated by
the pn and let B = A/J, with π : A → B the natural quotient map. Let H = ⊕n Hπn
be the direct sum of the given finite-dimensional representations and let ρ be the
natural representation of A on H. For each n, ρ(pn ) ∈ B(H) is the finite-rank
projection onto Hπn 6 H. Thus ρ maps J to the compact operators and therefore
gives rise to an extension
τ : B = A/J → Q(H).
I claim that this extension has no inverse in Ext(B).
To see why, suppose that it did have such an inverse. There would then be
a representation ϕ : A/J → B(H ⊕ H 0 ) whose top left corner was a completely
positive lifting of τ . Then ψ = ϕ ◦ π would be a representation of A whose top left
corner was a compact perturbation of ρ. If Qn denotes the orthogonal projection
onto Hπn 6 H 6 H ⊕ H 0 , compactness implies that for each a ∈ A,
lim_{n→∞} ‖πn(a) − Qn ψ(a) Qn‖ = 0.
By Remark 33.7, πn appears as a subrepresentation of ψ for all sufficiently large n.
But ψ factors through B = A/J and thus takes all Kazhdan projections pn to zero, which
is a contradiction.
Lecture 34
UHF Algebras

In this section we shall study C ∗ -algebras (and associated von Neumann algebras)
that are constructed as unital inductive limits of matrix algebras. An important
example is the algebra An ⊆ On which was used in our earlier discussion of Cuntz
algebras.
Lemma 34.1. Let Mm (C) and Mn (C) be matrix algebras. There exists a unital
∗-homomorphism Mm (C) → Mn (C) if and only if m|n; and, when this condition is
satisfied, all such unital ∗-homomorphisms are unitarily equivalent to that given by
T ↦ diag(T, T, . . . , T)
with T repeated n/m times down the diagonal and zeroes elsewhere.
Proof. This is a special case of Lemma 9.5. 
Suppose now that k1 |k2 |k3 | · · · is an increasing sequence of natural numbers, each
of which divides the next. By the lemma, there is associated to this sequence
a unique (up to unitary equivalence) sequence of matrix algebras and unital ∗-
homomorphisms
Mk1(C) → Mk2(C) → Mk3(C) → ··· , with connecting maps α1, α2, . . . .
Let A denote the (algebraic) inductive limit of this sequence. (Reminder about
what this means: Think of unions. More formally, the elements of A are equivalence
classes of sequences {Ti }, Ti ∈ Mki (C), which are required to satisfy αi (Ti ) = Ti+1
for all but finitely many i, and where two sequences are considered to be equivalent
if they differ only in finitely many places. These may be added, multiplied, normed
(remember that the αi are isometric inclusions!), and so on, by pointwise operations.)
The algebra A is a pre-C ∗ -algebra; that is, a normed algebra which satisfies all the
C ∗ -axioms except that it need not be complete. Its completion is a C ∗ -algebra which
is called the uniformly hyperfinite (UHF) algebra (or Glimm algebra) determined by
the inductive system of matrix algebras.
Proposition 34.2. All UHF algebras are simple, separable, unital, and have a
unique trace.
Proof. Obviously they are separable and unital. Let A be a UHF algebra and let
α : A → B be a unital ∗-homomorphism to another C ∗ -algebra. Since each ma-
trix subalgebra Mki (C) of A is simple, the restriction of α to it is an injective
∗-homomorphism and is therefore isometric. Thus α is isometric on the dense subalgebra of A given by the algebraic inductive limit, whence by continuity α is isometric on the whole of A and in particular has no kernel. Thus, A is simple.
For each matrix algebra Mki (C) let τki : Mki (C) → C denote the normalized trace,
that is,
τki(T) = (1/ki) Tr(T).
The τki then are consistent with the embeddings αi so they pass to the inductive limit
to give a trace τ on the dense subalgebra of A given by the algebraic inductive limit. Moreover, τ is norm continuous on this subalgebra (since the τki are all of norm one on the matrix algebras Mki(C)) and so extends
by continuity to a trace, indeed a tracial state, on A. The uniqueness of the trace
on matrix algebras12 shows that every other trace must agree, up to normalization
of course, with this one. 
Lemma 34.3. Any nonzero tracial state on a simple C ∗ -algebra is faithful (Re-
mark 26.1). In particular, the unique trace on a Glimm algebra is faithful.
Proof. Let τ be a nonzero tracial state on A and let N = {a ∈ A : τ (a∗ a) = 0}.
Since τ is a state, N is a left ideal in A (see the GNS construction, Theorem 15.1).
But N = N ∗ by the trace property, so N is in fact a two-sided ideal, which must
vanish by simplicity. Hence τ is faithful. 
Every UHF algebra contains many projections (because the matrix subalgebras
contain many projections). We are going to study the structure of UHF algebras by
looking at the projections that they contain. We need a couple of general lemmas
first.
Lemma 34.4. Let p and q be projections in a unital C ∗ -algebra A, and suppose that
kp − qk < 1. Then τ (p) = τ (q) for any trace τ on A. In fact, p and q are unitarily
equivalent.
Proof. Define x = qp + (1 − q)(1 − p). Then xp = qx, and a simple calculation shows
that
x − 1 = 2qp − q − p = (2q − 1)(p − q).
1
Thus kx − 1k < 1 and so x is invertible. We may define a unitary u = x(x∗ x)− 2 by
the functional calculus. Now x∗ x commutes with p, and so
1 1
up = xp(x∗ x)− 2 = qx(x∗ x)− 2 = qu
as required. 
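The construction in this proof is explicit enough to test numerically. The following sketch (random projections on a small space; the dimensions and perturbation size are hypothetical) forms x = qp + (1 − q)(1 − p) and checks that u = x(x∗x)−1/2 is a unitary with up = qu.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 6, 3

    def random_projection():
        """Orthogonal projection onto a random k-dimensional subspace of C^n."""
        g = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
        q, _ = np.linalg.qr(g)
        return q @ q.conj().T

    p = random_projection()
    # A nearby projection q, obtained from a small self-adjoint perturbation of p.
    w = p + 0.02 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    w = (w + w.conj().T) / 2
    vals, vecs = np.linalg.eigh(w)
    q = vecs[:, vals > 0.5] @ vecs[:, vals > 0.5].conj().T   # so ||p - q|| < 1

    x = q @ p + (np.eye(n) - q) @ (np.eye(n) - p)
    s, v = np.linalg.eigh(x.conj().T @ x)                    # x*x is positive and invertible
    u = x @ (v @ np.diag(s ** -0.5) @ v.conj().T)            # u = x (x*x)^{-1/2}

    print(np.linalg.norm(u.conj().T @ u - np.eye(n)))        # ~ 0: u is unitary
    print(np.linalg.norm(u @ p - q @ u))                     # ~ 0: u p = q u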
Lemma 34.5. Let A be a unital C ∗ -algebra and let x ∈ A be a self-adjoint element
which is approximately a projection in the sense that ‖x2 − x‖ < 1/4. Then there is a
projection p ∈ A such that ‖x − p‖ < 1/2.
Proof. From the functional calculus, the inequality ‖x2 − x‖ < 1/4 implies that x has
spectrum in (−1/2, 1/2) ∪ (1/2, 3/2). Let f be the function which equals 0 on (−1/2, 1/2) and
1 on (1/2, 3/2). Then f is continuous on the spectrum of x, f(x) is a projection, and
‖f(x) − x‖ < 1/2. □
12And how do you prove that? Compute with matrix units to show that every trace-zero finite
matrix lies in the span of commutators.
Definition 34.6. A unital C ∗ -algebra A is called finite if it has no proper isometries:
i.e., v ∗ v = 1 implies vv ∗ = 1 for v ∈ A. It is stably finite if all the matrix algebras
Mk (A) are finite.
Remark 34.7. More generally, a projection q in a C ∗ -algebra is called finite if, for
any partial isometry v such that v ∗ v = q and vv ∗ 6 q, we have in fact vv ∗ = q.
When q = 1 this reduces to the definition above.
Proposition 34.8. UHF algebras are stably finite.
Proof. We only need prove they are finite (because a matrix algebra over a UHF
algebra is UHF). Let A be a UHF algebra which is the direct limit of matrix algebras
Mki (C). Let v ∗ v = 1 and put vv ∗ = p which is a projection. By density we can find
an index i such that Mki(C) contains a selfadjoint element x with ‖x − p‖ < 1/20.
Simple estimates give ‖x2 − x‖ < 3/16. Thus, by Lemma 34.5, there is a projection
q ∈ Mki(C) with ‖q − x‖ < 1/2 and therefore ‖q − p‖ < 1. By Lemma 34.4, τ(q) =
τ (p) = τ (v ∗ v) = τ (vv ∗ ) = 1. This implies that q is the identity, since that is the
only projection in a matrix algebra which has normalized trace equal to 1. But now
p = 1 on applying lemma 34.4 again (the only projection unitarily equivalent to 1
is 1). 
By generalizing these ideas we can get a complete classification of UHF algebras.
Suppose that k1 |k2 | · · · is the sequence of orders of matrix algebras generating a
UHF algebra. For each prime p there is a natural number or infinity np defined
to be the supremum of the number of times p divides ki, as i → ∞. The formal
product ∏p p^{np} is called the supernatural number associated to the UHF algebra.
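From a finite initial segment of the chain k1 | k2 | ··· one can read off the exponents np found so far (whether an exponent is actually infinite can of course only be inferred from the whole sequence). A small sketch, with a hypothetical chain:

    from collections import Counter

    def prime_factorization(n):
        """Prime factorization of n as a Counter {prime: exponent}."""
        factors, d = Counter(), 2
        while d * d <= n:
            while n % d == 0:
                factors[d] += 1
                n //= d
            d += 1
        if n > 1:
            factors[n] += 1
        return factors

    def supernatural_exponents(ks):
        """Exponents n_p exhibited by a finite divisibility chain k_1 | k_2 | ..."""
        exps = Counter()
        for k in ks:
            for p, e in prime_factorization(k).items():
                exps[p] = max(exps[p], e)
        return dict(exps)

    print(supernatural_exponents([2, 6, 12, 120]))   # {2: 3, 3: 1, 5: 1}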
Remark 34.9. Different sequences of matrix algebras can produce the same UHF
algebra, and can also produce the same supernatural number. For instance, consider
the two sequences
M2(C) → M4(C) → M8(C) → ···
and
M4(C) → M16(C) → M64(C) → ··· .
Both of these are associated with the supernatural number 2∞ . On the other hand,
it is easy to see that they are both associated with the same UHF algebra (the
second sequence is a subsequence of the first, and, just as with ordinary limits, the
limit of a subsequence is equal to the limit of the larger sequence). This example is
particularly important; it is called the CAR algebra (we will see why in a moment)
or the Fermion algebra.
The key classification theorem for UHF algebras is the following: there is a one-to-
one correspondence between isomorphism classes of UHF algebras and supernatural
numbers. In other words, we have the following two results.
Proposition 34.10. If two UHF algebras arise from matrix algebra sequences hav-
ing the same supernatural number, then they are isomorphic.
Proposition 34.11. If two UHF algebras are isomorphic, then any two matrix
algebra sequences generating them have the same supernatural number.
Proof. First, let us reformulate the idea that two matrix algebra sequences have the
same supernatural number. If the two sequences have matrix orders k1 |k2 |k3 | · · ·
and l1 |l2 |l3 | · · · respectively, to say that the associated supernatural numbers are
the same is just to say that for each i there is a j such that ki divides lj , and that
for each i there is a j such that li divides kj .
Let us prove proposition 34.10. If the two sequences above have the same super-
natural number, then by passing to subsequences (which doesn’t change the direct
limit) we can arrange that ki | li | ki+1. Then, using Lemma 34.1, choose unital embeddings interleaving the two sequences,
Mk1(C) → Ml1(C) → Mk2(C) → Ml2(C) → Mk3(C) → ··· ,
whose alternate composites agree (up to unitary equivalence) with the connecting maps of the two given sequences.
We obtain a ∗-isomorphism between dense subalgebras of A and B, which extends
automatically to a ∗-isomorphism from A to B.
Now let us prove proposition 34.11. Let the two algebras A and B be obtained
from generating sequences k1 |k2 |k3 | · · · and l1 |l2 |l3 | · · · respectively. Let α : A → B
be an isomorphism. Notice that α takes the unique trace τA on A to the unique
trace τB on B.
We will show that for each i there is a j such that ki divides lj . Indeed, let p
denote the matrix unit e11 in Mki (C) ⊆ A. Then α(p) is a projection in B. By
density there is x = x∗ ∈ Mlj(C) ⊆ B such that ‖x − α(p)‖ < 1/20. By Lemma 34.5
there is a projection q ∈ Mlj(C) such that ‖q − α(p)‖ < 1. By Lemma 34.4 and
the uniqueness of the trace
τB(q) = τB(α(p)) = τA(p) = 1/ki.
Since the normalized trace of every projection in Mlj (C) is a multiple of 1/lj , we
conclude that ki divides lj , as required. 
Notice as a corollary that there are uncountably many isomorphism classes of
UHF algebras.
Example 34.12. The Fermion algebra (CAR algebra) arises in quantum field the-
ory. The second quantization of a fermion field leads to the following problem: given
a Hilbert space H, find a C ∗ -algebra A generated by operators cv , v ∈ H, such that
v 7→ cv is linear and isometric and
cv cw + cw cv = 0,    c∗v cw + cw c∗v = ⟨w, v⟩1.
These are called the canonical anticommutation relations. Compare the definition
of a Clifford algebra. We will show that the C ∗ -algebra A arising in this way is the
unique UHF algebra with supernatural number 2∞ .
Let v be a unit vector in H. Then cv has square zero, and c∗v cv + cv c∗v = 1.
Multiplying through by c∗v cv we find that c∗v cv is a projection, call it ev, and that cv
is a partial isometry with domain projection ev and range projection 1 − ev. We can
therefore identify the C ∗ -subalgebra of A generated by cv : it is a copy of M2 (C),
generated by matrix units e21 = cv , e11 = ev , e22 = 1 − ev , e12 = c∗v .
The idea now is to apply the process inductively to the members v1 , v2 , . . . of
an orthonormal basis for the Hilbert space H. Let A1 , A2 , . . . be the subalgebras
of A (all isomorphic to M2 (C)) generated by cv1 , cv2 , . . .. If the algebras Ai , Aj all
commuted with one another for i 6= j then we would be done, because the subalgebra
spanned by A1 , . . . , An would then be the tensor product M2 (C) ⊗ · · · ⊗ M2 (C) =
M2n (C) and we would have represented A as an inductive limit of matrix algebras.
Unfortunately the Ai ’s do not commute; but they do ‘supercommute’ when they are
considered as superalgebras in an appropriate way. Rather than developing a large
machinery for discussing superalgebras, we will inductively modify the algebras Aj
to algebras Bj which are each individually isomorphic to 2 × 2 matrix algebras,
which do commute, and which have the property that the subalgebra of A spanned
by A1 , . . . , An is equal to the subalgebra of A spanned by B1 , . . . , Bn . This, then,
will complete the proof.
Let γi be the grading operator for Ai , which is to say the operator corresponding
to the 2 × 2 matrix ( 1 0 ; 0 −1 ). Notice that γi is self-adjoint, has square 1 and commutes
with cvj for j 6= i, while it anticommutes with cvi ; in particular γi commutes with
γj . Let
B1 = A1 = C ∗ (cv1 ), B2 = C ∗ (cv2 γ1 ), . . . Bn = C ∗ (cvn γ1 · · · γn−1 ).
By the same argument as for the Ai above, each Bi is isomorphic to M2 (C). Moreover
the generators of the algebras Bi all commute. Finally, it is clear that the subalgebra
spanned by B1 , . . . , Bn is the same as the subalgebra spanned by A1 , . . . , An . The
proof is therefore complete.
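In the other direction, the same gradings give an explicit matrix realization of the relations inside M2(C) ⊗ ··· ⊗ M2(C) (a Jordan-Wigner-type formula): ci = γ ⊗ ··· ⊗ γ ⊗ c ⊗ 1 ⊗ ··· ⊗ 1, with γ in the first i − 1 slots. The sketch below (a hypothetical, finite number of modes) verifies the canonical anticommutation relations for these matrices.

    import numpy as np
    from functools import reduce
    from itertools import product

    c = np.array([[0, 0], [1, 0]], dtype=complex)    # the matrix unit e_21
    gamma = np.diag([1.0, -1.0]).astype(complex)     # grading operator
    I2 = np.eye(2, dtype=complex)

    m = 4                                            # hypothetical number of modes
    cs = [reduce(np.kron, [gamma] * i + [c] + [I2] * (m - i - 1)) for i in range(m)]

    def anti(a, b):
        return a @ b + b @ a

    ok = True
    for i, j in product(range(m), repeat=2):
        ok &= np.allclose(anti(cs[i], cs[j]), 0)
        expected = np.eye(2 ** m) if i == j else 0
        ok &= np.allclose(anti(cs[i].conj().T, cs[j]), expected)
    print(ok)   # True: c_i c_j + c_j c_i = 0 and c_i* c_j + c_j c_i* = delta_ij 1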
Lecture 35
UHF Algebras and von Neumann Factors

We are going to use UHF algebras to construct interesting examples of von Neu-
mann algebras. (This is something of a reversal of history: Glimm’s UHF-algebra
construction is explicitly patterned after a construction of Murray and von Neu-
mann, that of the so-called hyperfinite II1 factor.) Let H be a Hilbert space.
Recall that a von Neumann algebra of operators on H is a unital, weakly closed
∗-subalgebra M ⊆ B(H). Its commutant
M 0 = {T ∈ B(H) : T S = ST ∀ S ∈ M }
is also a von Neumann algebra.
Definition 35.1. If the center
Z(M ) = M ∩ M 0
consists only of scalar multiples of the identity, then the von Neumann algebra M
is called a factor.
There is a sophisticated version of the spectral theorem which expresses every von
Neumann algebra as a ‘direct integral’ of factors.
We are going to manufacture examples of factors by the following procedure: take
a C ∗ -algebra A (in our examples, a UHF algebra) and a state σ of A. By way of the
GNS construction, form a representation ρσ : A → B(Hσ ) and let Mσ be the von
Neumann algebra generated by the representation, that is Mσ = ρσ [A]00 . If Mσ is a
factor, then we call σ a factorial state. As we shall see, many different examples of
factors can be generated from the factorial states on one and the same C ∗ -algebra.
By way of a warm-up let us see that UHF C ∗ -algebras have trivial center.
Lemma 35.2. Let A be a UHF algebra; then Z(A) = C1.
Proof. The proof applies to any unital C ∗ -algebra with a unique faithful trace, τ .
Let z ∈ Z(A). Then the linear functional a 7→ τ (az) is a trace on A, so by uniqueness
it is equal to a scalar multiple of τ , say λτ . But then
τ (a(z − λ1)) = 0
for all a ∈ A, in particular for a = (z − λ1)∗ , so by faithfulness z − λ1 = 0 and z is
a multiple of the identity. 
Remark 35.3. Of course this does not prove that various von Neumann algebra
completions of A must have trivial center. We need more work to see that! Moreover,
the argument above does not give much insight into how the von Neumann algebra
proof is going to go. Consider the following alternative argument: let z ∈ Z(A).
Then for any ε > 0 there is a matrix subalgebra Mn (C) ⊆ A and an x ∈ Mn (C)
with ‖x − z‖ < ε. Let
x̂ = ∫_{Un(C)} u∗ x u dλ(u),
where λ is Haar measure on the compact group Un (C). Then x̂ is a multiple of the
identity, but kx̂ − zk < ε. Since ε is arbitrary, z is a multiple of the identity. This
proof is more like the von Neumann algebra argument we will give in Proposition 36.1
below.
We will discuss a special class of states on UHF algebras, the product states.
Recall that a UHF algebra A is a completed inductive limit of matrix algebras
Mk1 (C) ⊆ Mk2 (C) ⊆ Mk3 (C) ⊆ · · · .
Putting d1 = k1 , di+1 = ki+1 /ki for i > 1, we can write the inductive sequence
equivalently as
Md1 (C) ⊆ Md1 (C) ⊗ Md2 (C) ⊆ Md1 (C) ⊗ Md2 (C) ⊗ Md3 (C) ⊆ · · · .
Notice that Mdi+1 (C) is the commutant of Mki (C) inside Mki+1 (C). For each i, let σi
be a state of Mdi (C). (We will discuss the explicit form of these states in a moment.)
Then one can easily define a state σ on the algebraic inductive limit A by
σ(a1 ⊗ a2 ⊗ · · · ) = σ1 (a1 )σ2 (a2 ) · · · .
Note that all but finitely many of the ai are equal to one. When each of the σi
are the normalized trace, this is exactly the construction we performed in the last
lecture to obtain τ . The functional σ has norm one, so it extends by continuity to
a state on the UHF algebra A. Such a state is called a product state.
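Concretely, on a finite tensor product Md1(C) ⊗ ··· ⊗ Mdm(C) a product state is given by a tensor product of density matrices, σ(a) = Tr((ρ1 ⊗ ··· ⊗ ρm)a). The sketch below (hypothetical 2 × 2 factors and diagonal density matrices) evaluates such a state and checks the multiplicativity across commuting tensor factors that is proved in Lemma 35.4 below.

    import numpy as np

    # Sketch: a product state on M_2 (x) M_2 (x) M_2 given by density matrices.
    rho = [np.diag([0.3, 0.7]), np.diag([0.5, 0.5]), np.diag([0.9, 0.1])]
    rho_total = np.kron(np.kron(rho[0], rho[1]), rho[2])

    def sigma(a):
        return np.trace(rho_total @ a).real

    rng = np.random.default_rng(1)
    x = np.kron(rng.standard_normal((2, 2)), np.eye(4))   # element of the first factor
    y = np.kron(np.eye(2), rng.standard_normal((4, 4)))   # element of its commutant

    print(np.isclose(sigma(x @ y), sigma(x) * sigma(y)))  # True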
Lemma 35.4. Let A be a UHF algebra and let σ be a product state on it. Let
M = Mki(C) be one of the matrix algebras in an inductive sequence defining
A. If x ∈ M and y ∈ M 0 (by which I mean the commutant of M in A), then
σ(xy) = σ(x)σ(y).
Proof. The result is apparent if y ∈ Mkj (C) for some j > i. For then the commuta-
tion condition allows us to write x = X ⊗ 1, y = 1 ⊗ Y relative to the tensor product
decomposition of the matrix algebra, and the result follows from the definition of
a product state. The proof is completed by an approximation argument, similar to
Remark 35.3 above.
Let y ∈ A commute with Mki (C) and let ε > 0. Then there is a y 0 ∈ Mkj (C) (for
some big j) with ky 0 − yk < ε. Now y 0 probably does not commute with Mki (C)
anymore, but
ŷ = ∫_{U(ki)} u∗ y′ u dλ(u)
does, and it lies within ε of y. Moreover ŷ ∈ Mkj (C) and therefore σ(xŷ) = σ(x)σ(ŷ).
Thus
|σ(xy) − σ(x)σ(y)| ≤ 2ε‖x‖.
Since ε is arbitrary this gives us the result. 
Recall that a state σ is pure if it is an extreme point of the space of states. It is
equivalent13 to say that if ϕ is a positive linear functional with ϕ 6 σ, then ϕ is a
scalar multiple of σ. Let M = Mr (C) be a finite matrix algebra; then the states
σq (T ) = Tqq , q = 1, . . . , r
are pure states of M . To see this we can argue as follows, taking q = 1 for simplicity
of notation: if ϕ 6 σq and ϕ is positive, then ϕ(eii ) = 0 for i > 2; therefore
ϕ(eij e∗ij ) = 0 for i > 2; therefore by Cauchy-Schwarz ϕ(eij ) = 0 for i > 2, and
symmetrically for j > 2. Thus ϕ = ϕ(1)σ1 .
Lemma 35.5. Let A = Md1 (C) ⊗ Md2 (C) ⊗ · · · be a UHF algebra and let σ =
σq1 ⊗ σq2 ⊗ · · · be a product state in which each of the σqi is a pure state of Mdi (C)
of the sort described above. Then σ is a pure state of A.
Proof. Let ϕ 6 σ and prove by induction on i that ϕ = ϕ(1)σ when restricted to
Mki (C) ⊆ A, using the argument sketched above which proves that the σq are pure
states of the corresponding matrix algebras. 
Let us call the state described above the pure product state σq associated to the
sequence of indices q = (q1 , q2 , . . .). We are going to investigate when the irreducible
representations associated to two pure product states are equivalent. We need the
following lemma (which we in fact mentioned earlier, see Remark 17.5), which says
that unitarily equivalent irreducible representations are ‘internally’ unitarily equiv-
alent.
Lemma 35.6. Let A be a unital C ∗ -algebra and let σ, σ 0 be pure states of A. Then
the irreducible representations associated to σ and σ 0 are unitarily equivalent if and
only if there is a unitary u ∈ A for which σ(a) = σ 0 (uau∗ ).
Proof. Suppose that the associated irreducible representations are unitarily equiva-
lent. If we identify the representation spaces by the supposed unitary, we are reduced
to proving the following statement: suppose that ρ : A → B(H) is an irreducible
representation and ξ, ξ 0 ∈ H are two unit vectors; then there is a unitary u ∈ A such
that ρ(u)ξ = ξ 0 . To do this, let W ⊆ H be the two-dimensional subspace spanned
by ξ and ξ 0 ; we can always find a unitary operator on W mapping ξ to ξ 0 , and this
unitary can be written as exp(iT ) for some self-adjoint T . By Kadison’s transitivity
theorem there exists t ∈ A such that ρ(t) and T agree on W , and we may assume t
is selfadjoint (replace t by (t + t∗)/2). Then u = exp(it) ∈ A does the job. □
Proposition 35.7. Two pure product states σq and σq0 define unitarily equivalent
irreducible representations if and only if the sequences q and q0 agree except for
finitely many terms.
Proof. If the sequences q and q0 agree for subscripts i > n, say, then the associated
pure states define two different irreducible representations of Mkn (C). Any two
irreducible representations of a full matrix algebra are equivalent, so there is a
13This follows from the “Radon-Nikodym” theorem 17.1.
unitary u ∈ Mkn (C) such that σq (a) = σq0 (u∗ au) for all a ∈ Mkn (C). This equality
extends to all a ∈ A by construction, and then it extends to A by continuity. So
the associated irreducible representations of A are unitarily equivalent.
Conversely, suppose that σq (a) = σq0 (u∗ au) for some unitary u. Find an element
x ∈ Mkn(C) (n sufficiently large) with ‖u − x‖ < 1/4, so that ‖x‖ < 5/4. Suppose for a contradiction
that qj = 1, qj0 = 2 for some j > n, and let e ∈ Mkj (C) be of the form 1 ⊗ e11 ,
where e11 is the matrix unit in Mdj (C). Now σ 0 (e) = 0, so σ 0 (x∗ ex) = σ 0 (x∗ xe) =
σ 0 (x∗ x)σ 0 (e) = 0 using the multiplicative property of product states and the fact
that e commutes with x. Now write
1 = σ(e) = σ′(u∗eu) ≤ |σ′((u∗ − x∗)eu)| + |σ′(x∗e(u − x))| + |σ′(x∗ex)| < 1/4 + 5/16 + 0 < 1
to obtain a contradiction. 
Corollary 35.8. Let A be a UHF algebra. Then the unitary dual Â is uncountable, but has the indiscrete topology. Moreover, A is antiliminal.
Proof. Proposition 35.7 shows that a UHF algebra A has uncountably many inequivalent irreducible representations; but since it is simple the primitive ideal space is trivial, and hence Â has the indiscrete topology by Theorem 18.1.
Let us show that A is not postliminal (see Remark 19.9 for the definitions here).
Consider an irreducible representation ρ of A. If A is postliminal, then ρ(A) con-
tains an abelian element of norm one, that is, a rank one projection. But then it
contains all the compact operators (Corollary 19.5). Simplicity implies A ≅ K, a contradiction (A is unital and infinite-dimensional, while K is not unital).
Since A is not postliminal, its maximal postliminal ideal I is not equal to A.
Remark 19.9 tells us that A/I is antiliminal. But by simplicity I = 0 so A is
antiliminal as required. 
Lecture 36
The Classification of Factors

Irreducible representations are (of course!) factorial. However UHF algebras also
possess factorial representations that are not irreducible. In fact we have:
Proposition 36.1. Each product state of a UHF algebra is factorial.
Proof. Let σ be a product state of the UHF algebra A and let (ρσ , Hσ , uσ ) be a
cyclic representation associated with σ via the GNS construction. Suppose that
z ∈ Z(ρσ [A]00 ). By the Kaplansky Density Theorem 14.2, there is a sequence yj in
A with ρ(yj ) → z weakly and kyj k 6 kzk for all j. Fix n, and let zj be obtained
from yj by averaging over the unitary group of Mkn (C) ⊆ A, as in the proof of
Lemma 35.4. For each fixed such unitary u we have
ρ(uyj u∗ ) → ρ(u)zρ(u∗ ) = z
in the weak topology, boundedly in norm; so by applying the Dominated Conver-
gence Theorem we see that ρ(zj ) → z weakly. Since zj commutes with Mkn (C) we
have for x, y ∈ Mkn (C),
hz[x], [y]i = limh[x], ρ(zj )[y]i =
lim σ(x∗ zj y) = σ(x∗ y)σ(z) = h[x], [y]iσ(z)
using Lemma 35.4. Since this holds for all n, it follows that z = σ(z)1 is a scalar
multiple of the identity. Thus ρσ (A)00 is a factor. 
Many different interesting factors can be obtained from this construction. But
how are these factors to be distinguished? In this section we will briefly outline the
classification of factors that was developed by Murray and von Neumann.
Definition 36.2. Let M be a von Neumann algebra. A weight on M is a function
ϕ : M+ → [0, ∞] (notice that the value ∞ is permitted) which is positive-linear in
the sense that
ϕ(α1 x1 + α2 x2 ) = α1 ϕ(x1 ) + α2 ϕ(x2 )
for α1 , α2 ∈ R+ and x1 , x2 ∈ M+ .
The canonical example is the usual trace on B(H). We say that ϕ is semi-finite
if the domain
M+ϕ = {x ∈ M+ : ϕ(x) < ∞}
is weakly dense in M+ , and that ϕ is normal if
lim ϕ(xλ ) = ϕ(lim xλ )
whenever xλ is a monotone increasing net in M that is bounded above (and hence
weakly convergent). If ϕ(u∗ xu) = ϕ(x) for all x ∈ M+ and unitary u we say that ϕ
is a tracial weight, or sometimes just a trace (so long as we remember that it does
not have to be bounded).
Lemma 36.3. For each weight ϕ on M , the linear span M ϕ of M+ϕ is a hereditary
∗-subalgebra to which ϕ extends as a positive linear functional. If ϕ is tracial, this
extension of ϕ has the trace property ϕ(xy) = ϕ(yx).
Definition 36.4. A von Neumann algebra M is said to be finite if it has a faithful
normal finite trace (i.e. a tracial state). It is semifinite if it has a faithful normal
semifinite trace.
Recall the definition of finiteness for projections (Remark 34.7).
Theorem 36.5. (Murray-von Neumann) A factor M is finite if and only if every
projection in M is finite. Any finite or semifinite factor contains finite projections.
Murray and von Neumann introduced the idea of studying factors by looking
at the comparison properties of projections. Let p and q be projections in a von
Neumann algebra M. One says that p is weaker than q (or that q is stronger than p)
if p is equivalent to a subprojection of q: that is, if there is a partial isometry v ∈ M
with v∗v = p and vv∗ ≤ q. We write p ≼ q in this case. (Thus, an infinite projection
is one that is strictly weaker than itself.) Murray and von Neumann showed that, if
M is a factor, then the projections form a totally ordered set under the relation ≼.
This led to the classification of factors:
                        Finite faithful      Semifinite faithful     No faithful
                        normal trace         normal trace            normal trace
Minimal projection      Type In              Type I∞                 Impossible
No minimal projection   Type II1             Type II∞                Type III
It is easy to give examples of factors of type I. In fact, the matrix algebras Mn (C)
are factors of type In , and the bounded operators B(H) form a factor of type I∞ .
These can be shown to be the only examples.
Lemma 36.6. The completion of a UHF algebra relative to the canonical trace is a
type II1 factor.
Proof. Let’s check the definition.
The canonical trace is a product state, so we have obtained a factor R by Propo-
sition 36.1.
The canonical trace extends to a linear functional on R which is weakly continuous
(by construction!) and hence normal. Weak continuity also shows that the extended
functional is still a trace.
The extended trace is still faithful: Suppose that T ∈ R with τ (T ∗ T ) = 0. Then
(letting ξ denote the cyclic vector), for any unitary u ∈ A
0 = τ (ρ(u∗ )T ∗ T ρ(u)) = hξ, ρ(u∗ )T ∗ T ρ(u)ξi = kT ρ(u)ξk2 ,
so T ρ(u)ξ = 0. But as u runs over the unitaries of A, the ρ(u)ξ span H, so T = 0.
There is no minimal projection: Any projection in R has a positive trace (by
faithfulness). The UHF algebra A contains a projection with a smaller positive
trace, so the original projection cannot have been minimal. 
It turns out that the factor R obtained by this construction is independent of the
particular UHF algebra used to construct it. It is called the hyperfinite II1 factor.
Remark 36.7. Group von Neumann algebras provide another example of II1 factors.
Let Γ be a discrete group such that every non-identity conjugacy class is infinite (a
so-called ICC group). Then the group von Neumann algebra (lemma 22.7) L(Γ) is a
II1 factor. (To see that it is a factor, observe that if T ∈ L(Γ) is a central element,
then T[e] = ∑γ tγ[γ] ∈ ℓ2(Γ), where the coefficients tγ are constant on conjugacy
classes. For an ICC group this implies that T is a multiple of the identity.) If Γ is
amenable, then we obtain the hyperfinite factor R defined above.
Now we are going to sketch the construction of some type III examples. Let
A be the CAR algebra (the UHF algebra with supernatural number 2∞ ) and let
α ∈ (0, 1/2). Define a state ϕα on A to be the product state obtained from the state
( a11  a12
  a21  a22 ) ↦ α a11 + (1 − α) a22
on each copy of the 2 × 2 matrices, and let πα be the associated factorial repre-
sentation. The factor Mα = πα (A)00 is called the Powers factor with parameter
α.
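Equivalently, ϕα is the product state whose density matrix in every 2 × 2 factor is diag(α, 1 − α). A quick computation (with a hypothetical value of α) shows that, unlike the tracial state, ϕα is not a trace even on a single factor:

    import numpy as np

    alpha = 0.25                              # hypothetical parameter in (0, 1/2)
    rho = np.diag([alpha, 1 - alpha])
    phi = lambda a: np.trace(rho @ a)

    e12 = np.array([[0.0, 1.0], [0.0, 0.0]])
    e21 = e12.T
    print(phi(e12 @ e21), phi(e21 @ e12))     # alpha vs 1 - alpha: phi is not a trace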
Proposition 36.8. The Powers factors are all factors of type III. (Moreover, they
are mutually non-isomorphic, although we won’t even start to prove this.)
The key idea is that Mα has plenty of inner automorphisms—enough that we can
“move” any one element by an inner automorphism to make it almost commute
with another. Specifically
Proposition 36.9. There is a sequence {un } of unitaries in the CAR algebra A
with the following property: for every x ∈ A,
πα (u∗n xun ) → ϕα (x)1,
where the convergence is in the weak topology.
Proof. For any finitely supported permutation σ of N there is a unitary uσ ∈ A (in
fact in one of its matrix subalgebras) implementing the corresponding permutation
of the tensor product factors in
A = M2 ⊗ M2 ⊗ M2 ⊗ · · · .
Choose a sequence σn of finitely supported permutations with the property that
σn (m) > n for all m 6 n and let un be the corresponding unitaries. I claim that
{un } has the property stated.
Notice that ϕα (u∗n xun ) = ϕα (x) for all x. Notice also that if x, y ∈ M2m (C) ⊆ A
then u∗n xun commutes with y for all n > m. Now suppose that x, y, z ∈ M2k (C) ⊆ A.
Then
ϕα (z ∗ u∗n xun y) = ϕα (u∗n xun z ∗ y) = ϕα (u∗n xun )ϕα (z ∗ y) = ϕα (x)ϕα (z ∗ y)
using the multiplicative property of product states (Lemma 35.4). In terms of the
GNS representation this means
h[z], πα (u∗n xun )[y]i → ϕα (x)h[z], [y]i.
But the vectors [z], [y] span a dense subspace of the representation space, so this
shows πα (u∗n xun ) → ϕα (x)1 weakly as required. 
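The mechanism of this proof is visible already with two tensor factors: conjugating an element of the first factor by the flip unitary moves it into the second factor, where the product state ϕα becomes multiplicative against the first factor. A sketch, with hypothetical α and random elements:

    import numpy as np

    alpha = 0.25
    rho2 = np.kron(np.diag([alpha, 1 - alpha]), np.diag([alpha, 1 - alpha]))
    phi2 = lambda a: np.trace(rho2 @ a)

    U = np.zeros((4, 4))                      # flip unitary: e_i (x) e_j -> e_j (x) e_i
    for i in range(2):
        for j in range(2):
            U[2 * j + i, 2 * i + j] = 1.0

    rng = np.random.default_rng(2)
    x, y, z = (np.kron(rng.standard_normal((2, 2)), np.eye(2)) for _ in range(3))

    lhs = phi2(z.T @ (U @ x @ U) @ y)         # phi(z* (u* x u) y)
    rhs = phi2(x) * phi2(z.T @ y)             # phi(x) phi(z* y)
    print(np.isclose(lhs, rhs))               # True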
It follows immediately that Mα has no finite normal trace (so it is not a factor of
type II1 ). For, supposing for a moment that τ was such a trace, we would have
τ (u∗n xun ) = τ (x) ∀n.
A normal functional is weakly continuous on bounded sets (dominated convergence
theorem) and so we would have
τ(x) = τ( lim_{n→∞} u∗n x un ) = ϕα(x) τ(1).
But ϕα is not a trace so this is ridiculous.
To extend this argument to show there is no semifinite trace either (i.e. Mα is
not a factor of type II∞ ) requires some techniques for working with (unbounded)
normal functionals. I won’t get started on that here, but simply refer to Theorem
6.5.15 in Pedersen.
Lecture 37
Weak containment: loose ends

The objective of these notes is to tidy up a few loose ends that were left dangling
regarding
• the Fell topology on the spectrum of a C ∗ -algebra (definition 17.4);
• the definitions of amenability and property T;
• the notion of “weak containment” of representations, defined below;
• and the proof of the trace property Tr(AB) = Tr(BA) under minimal as-
sumptions, see Lecture 2.
Throughout this section Γ denotes a discrete, finitely generated group. (The results
extend to the locally compact case.) We will consider unitary representations of
Γ, which are the same thing as unitary representations of the maximal group C ∗ -
algebra C ∗ (Γ). If π is such a representation, a (normalized) positive definite function
associated to that representation is a function on Γ of the form
γ 7→ hξ, π(γ)ξi
for some unit vector ξ ∈ Hπ . In other words, a positive definite function associated
to π corresponds to a vector state of C ∗ (Γ) coming from that representation. (Via
the GNS construction, one can go back from the positive definite function to an
associated representation, though what you get may not be the whole of the original
π; you will get its restriction to the cyclic subspace generated by ξ.)
Let P (π) denote the collection of normalized positive definite functions associated
to π.
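For a concrete example, take Γ = Z/nZ and the regular representation λ on ℓ2(Γ). The function ϕ(γ) = ⟨ξ, λ(γ)ξ⟩ is then positive definite in the classical sense that the matrix [ϕ(γi−1γj)] is positive semidefinite. A sketch, with hypothetical n and ξ:

    import numpy as np

    n = 6
    rng = np.random.default_rng(3)
    xi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    xi /= np.linalg.norm(xi)                              # a unit vector in l^2(Z/n)

    def lam(g):
        """Regular representation of Z/n by cyclic permutation matrices."""
        return np.eye(n)[:, [(k - g) % n for k in range(n)]]

    phi = np.array([np.vdot(xi, lam(g) @ xi) for g in range(n)])
    M = np.array([[phi[(j - i) % n] for j in range(n)] for i in range(n)])

    print(np.isclose(phi[0], 1.0))                        # normalized
    print(np.min(np.linalg.eigvalsh(M)) > -1e-10)         # positive semidefinite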
Definition 37.1. Let π and ρ be representations of Γ. Then π is weakly contained
in ρ (in symbols: π ≺ ρ) if every member of P(π) is a pointwise limit of a sequence
of convex combinations of members of P (ρ).
Remark 37.2. Weak containment is clearly a transitive relation, and containment
(π is a direct summand in ρ) implies weak containment. On the other hand, weak
containment takes no account of multiplicities: if π ≺ ρ then ⊕n π ≺ ρ also. More generally, if a family of representations is each individually weakly contained in ρ, then so is their direct sum.
Remark 37.3. A simple normalization argument shows that it is equivalent to demand
that every positive definite function associated to π (normalized or not) is in
the pointwise-closed subspace of ℓ^∞(Γ) spanned by P(ρ). We will use this below.
Lemma 37.4. Suppose that π has a cyclic vector ξ. Then, in order to check that π ≺
ρ, it suffices to prove that the single positive definite function ϕ(γ) = ⟨ξ, π(γ)ξ⟩
is in the pointwise-closed linear span of P(ρ).
Proof. Consider a vector η of the form
η = λ_1 π(γ_1)ξ + · · · + λ_n π(γ_n)ξ,


where λ_i ∈ C and γ_i ∈ Γ are given. The associated positive definite function is
γ ↦ Σ_{i,j} λ̄_i λ_j ϕ(γ_i^{-1} γ γ_j)
(the computation is recorded after the proof). Call the right side of this expression A(ϕ). Then A is a linear, pointwise-continuous
operation which maps the span of P(ρ) to itself, and hence it also maps the pointwise-closed
linear span of P(ρ) to itself. Consequently, all the positive definite functions γ ↦ ⟨η, π(γ)η⟩
belong to the pointwise-closed linear span of P(ρ). But the collection of vectors η of this
sort is dense in Hπ, since ξ is a cyclic vector.
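For the record, the displayed formula for the positive definite function of η is the routine expansion below, using the unitarity of each π(γ_i) and the convention (as in the formula ⟨ξ, π(γ)ξ⟩ above) that the inner product is conjugate-linear in its first variable:
\[
\langle \eta, \pi(\gamma)\eta\rangle
= \sum_{i,j} \bar\lambda_i \lambda_j \,\bigl\langle \pi(\gamma_i)\xi,\; \pi(\gamma\gamma_j)\xi \bigr\rangle
= \sum_{i,j} \bar\lambda_i \lambda_j \,\bigl\langle \xi,\; \pi(\gamma_i^{-1}\gamma\gamma_j)\xi \bigr\rangle
= \sum_{i,j} \bar\lambda_i \lambda_j \,\varphi(\gamma_i^{-1}\gamma\gamma_j).
\]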
Lemma 37.5. Suppose that π is an irreducible representation. Then π ≺ ρ if and
only if each member of P(π) can be pointwise approximated by members of P(ρ) (it
is not necessary to take linear or convex combinations).
Proof. We work in the state space S = S(C ∗ (Γ)). Let σ be a vector state associated
to π. Let X denote the collection of vector states associated to ρ (i.e., the states
corresponding to the members of P (ρ)) and let Y be its closed convex hull. The
weak containment relation π ≺ ρ says in particular that σ ∈ Y. But σ is a pure state,
that is, an extreme point of S, and therefore in particular an extreme point of Y.
It follows from Milman’s theorem 11.19 that σ belongs to the closure of X, which
is what was required. 
The unitary dual Γ̂ of Γ is the collection of equivalence classes of irreducible unitary
representations of Γ, i.e., the spectrum of C∗(Γ). It is given the Fell topology.
Definition 37.6. Let ρ be any representation of Γ. The support of ρ is the subset
of Γ̂ consisting of those irreducible representations weakly contained in ρ. It is a
closed subset of Γ̂.

Proposition 37.7. A group Γ is amenable if and only if the trivial representation
belongs to the support of the regular representation.

Proof. This is a reformulation of Proposition 25.6.
Lemma 37.8. A group Γ has property T if and only if, for any representation ρ,
π0 ≺ ρ =⇒ π0 ≤ ρ,
where π0 denotes the trivial representation and π0 ≤ ρ means that π0 is contained in ρ as a subrepresentation.
Proof. If ρ weakly contains the trivial representation, then it almost has invariant
vectors. To see this, let ε > 0 and a generating set S be given. Since the constant
function 1 is a positive definite function associated to the trivial representation,
there exists a unit vector ξ ∈ Hρ such that
|1 − ⟨ξ, ρ(x)ξ⟩| < ε²/2 for all x ∈ S,
and this implies that ‖ρ(x)ξ − ξ‖ < ε by an elementary argument (spelled out below). Consequently,
given that Γ has property T, ρ must have an invariant vector, i.e., the trivial representation is a subrepresentation.
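The elementary argument alluded to is presumably the standard computation, valid because ρ(x) is unitary and ξ is a unit vector:
\[
\|\rho(x)\xi - \xi\|^{2}
\;=\; 2 - 2\,\operatorname{Re}\langle \xi, \rho(x)\xi\rangle
\;\le\; 2\,\bigl|1 - \langle \xi, \rho(x)\xi\rangle\bigr|
\;<\; \varepsilon^{2}.
\]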


Conversely, suppose that Γ does not have property T. Then (fixing the generating
set S) for each ε = 1/n, say, there is a representation ρn that contains an ε-fixed
vector but no fixed vector. Form ρ = ⊕n ρn. Then π0 ≺ ρ but ρ contains no fixed
vector (both claims are checked below).
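To check the two claims (reading “ε-fixed” as “(S, ε)-invariant”): let ξn ∈ Hρn be a unit vector with ‖ρn(s)ξn − ξn‖ < 1/n for every s ∈ S, and hence for every s ∈ S ∪ S^{-1}, since ρn(s) is unitary. For a word γ = s_1 ⋯ s_ℓ of length ℓ in S ∪ S^{-1}, telescoping gives
\[
\|\rho_n(\gamma)\xi_n - \xi_n\|
\;\le\; \sum_{k=1}^{\ell} \bigl\| \rho_n(s_1\cdots s_k)\xi_n - \rho_n(s_1\cdots s_{k-1})\xi_n \bigr\|
\;=\; \sum_{k=1}^{\ell} \|\rho_n(s_k)\xi_n - \xi_n\|
\;\le\; \frac{\ell}{n},
\]
so the positive definite functions γ ↦ ⟨ξn, ρn(γ)ξn⟩, which belong to P(ρ), converge pointwise to the constant function 1; hence π0 ≺ ρ. On the other hand, a ρ-invariant unit vector would have a nonzero component in some Hρn, and that component would be a ρn-invariant vector, which does not exist.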
Proposition 37.9. A group Γ has property T if and only if the point representing
the trivial representation is isolated (that is, the one-point set containing it is open)
in the unitary dual Γ̂.

Proof. Suppose that π0 is not isolated in Γ̂. Then there is a net πν in Γ̂ \ {π0}
that converges to π0. Then ρ = ⊕ν πν weakly contains π0 but does not contain it
(compare the previous proof). Thus Γ does not have property T.
Conversely, suppose that Γ does not have property T. Then, by Lemma 37.8, there is a
representation ρ that weakly contains π0 but does not contain it. This means that
given any ε > 0 we can find a positive definite function ϕ associated to ρ such
that |ϕ(γ) − 1| < ε for all γ in a generating set S. Now we can approximate ϕ by
convex combinations of pure states, so we can write ϕ = limν ϕν and single out the
contribution (if any) from the trivial representation to decompose
ϕν = (1 − λν)ψν + λν · 1,
where λν ∈ [0, 1] and ψν is a convex combination of pure states corresponding to
representations other than π0. Moreover, we may assume, by passing to a universal
subnet, that ψν → ψ (pointwise) and that λν → λ. We must have λ = 0; otherwise,
ρ would have a trivial subrepresentation, contrary to hypothesis (a justification is sketched after the proof). Thus any element of P(ρ)
can be arbitrarily well approximated by convex combinations of positive definite
functions corresponding to irreducible representations in Γ̂ \ {π0}. Since S generates Γ
and ε was arbitrary, the constant function 1 is a pointwise limit of such functions ϕ, so
it too can be so approximated; using Milman's theorem again, as in Lemma 37.5, it
follows that π0 is non-isolated in Γ̂.
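A word on the step “otherwise, ρ would have a trivial subrepresentation” (this is presumably the intended justification). Passing to the limit along the universal subnet gives
\[
\varphi \;=\; (1-\lambda)\psi + \lambda\cdot 1,
\]
where ψ, as a pointwise limit of normalized positive definite functions, is again a normalized positive definite function. If λ > 0, then ϕ − λ·1 = (1 − λ)ψ is positive definite, i.e. λ·1 ≤ ϕ in the order on positive definite functions; since ϕ(γ) = ⟨ξ, ρ(γ)ξ⟩ for some ξ ∈ Hρ, the usual GNS comparison of a functional dominated by a vector functional realizes the trivial representation inside the cyclic subrepresentation of ρ generated by ξ, so that π0 ≤ ρ, contrary to the choice of ρ.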
Our final “loose end” goes back to Lecture 2.
Proposition 37.10. Suppose A and B are bounded operators on a Hilbert space,
and AB and BA are trace class. Then Tr(AB) = Tr(BA).
Proof. (Compare Lemma 2.1 of On triangularization of algebras of operators, by
Nordgren and Rosenthal, Crelle 327 (1981), 143–155.) Consider first the case where
A is self-adjoint. Let Pn be the spectral projection of A corresponding to R \
(−1/n, 1/n). We have
Tr(AB) = lim_n Tr(Pn AB) = lim_n Tr(Pn APn · Pn BPn).

(We moved the extra Pn ’s into the middle because they commute with A, and
round to the far end using the trace property.) Now consider Pn APn and Pn BPn as
operators on the Hilbert space Hn which is the range of Pn . Their composite equals
Pn ABPn so is of trace class. But Pn APn is invertible on Hn by construction. Thus
Pn BPn is trace class on Hn . Consequently
Tr(Pn APn · Pn BPn ) = Tr(Pn BPn · Pn APn )


by the easy case of the result. Unwinding the limits as before completes the proof
for self-adjoint A.
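For the record, the unwinding on the right-hand side can presumably be carried out as follows: since Pn commutes with A,
\[
\operatorname{Tr}(P_n B P_n \cdot P_n A P_n)
\;=\; \operatorname{Tr}(P_n B A P_n)
\;=\; \operatorname{Tr}(B A P_n)
\;\longrightarrow\; \operatorname{Tr}(B A Q)
\;=\; \operatorname{Tr}(BA),
\]
where the outer Pn is cycled away using the easy case, and Q = lim_n Pn is the spectral projection of A corresponding to R \ {0}, so that AQ = QA = A. The convergence uses the fact that Tr(T Sn) → Tr(T S) whenever T is trace class and the uniformly bounded sequence Sn converges strongly to S; the first displayed limit in the proof is justified in the same way.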
Now for the general case, we use polar decomposition to write A = V P with P
positive and V a partial isometry. Then P B = V^*AB is of trace class. Consequently
Tr(AB) = Tr(V P B) = Tr(P B V) = Tr(B V P) = Tr(BA),
where the first step (commuting V) uses the “easy case” and the second step (commuting P) uses the self-adjoint case proved above.
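As a check on the hypotheses in this last chain (assuming that the “easy case” from Lecture 2 is the statement that Tr(ST) = Tr(TS) whenever S is trace class and T is bounded): PB = V^*AB is trace class and V is bounded, which justifies the first commutation; and for the self-adjoint case applied to the pair P, BV, both products
\[
P\cdot(BV) \;=\; V^{*}(AB)\,V
\qquad\text{and}\qquad
(BV)\cdot P \;=\; B(VP) \;=\; BA
\]
are trace class, as that case requires (here we use P = V^*A, which follows from the polar decomposition, and the fact that the trace class operators form a two-sided ideal).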
