
Lecture Notes: Mathematical Methods I

S Chaturvedi
October 16, 2017

Contents

1 Finite dimensional Vector Spaces
1.1 Vector space
1.2 Examples
1.3 Linear combinations, Linear Span
1.4 Linear independence
1.5 Dimension
1.6 Basis
1.7 Representation of a vector in a given basis
1.8 Relation between bases
1.9 Subspace
1.10 Basis for a vector space adapted to its subspace
1.11 Direct Sum and Sum
1.12 Linear Operators
1.13 Null space, Range and Rank of a linear operator
1.14 Invertibility
1.15 Invariant subspace of a linear operator
1.16 Eigenvalues and Eigenvectors of a linear operator
1.17 Representation of a linear operator in a given basis
1.18 Change of basis
1.19 Diagonalizability
1.20 From linear operators to Matrices
1.21 Rank of a matrix
1.22 Eigenvalues and Eigenvectors of a matrix
1.23 Diagonalizability
1.24 Jordan Canonical Form
1.25 Cayley Hamilton Theorem
1.26 Scalar or inner product, Hilbert Space
1.27 Orthogonal complement
1.28 Orthonormal Bases
1.29 Relation between orthonormal bases
1.30 Gram Schmidt Orthogonalization procedure
1.31 Gram Matrix
1.32 Adjoint of a linear operator
1.33 Special kinds of linear operators, their matrices and their properties
1.34 Simultaneous Diagonalizability of Self adjoint operators
1.35 Simultaneous reduction of quadratic forms
1.36 Standard constructions of new vector spaces from old ones

2 Fourier Series and Fourier Transforms
2.1 Periodic functions
2.2 Convergence
2.3 Parseval Identity
2.4 T → ∞: Fourier Series to Fourier transform
2.5 Delta function
2.6 Parseval Identity
2.7 Discrete Fourier Transform

3 Second order differential equations
3.1 Power series, interval of convergence
3.2 Ordinary, singular and regular singular points
3.3 Solution around an ordinary point
3.4 Solution around a regular singular point
3.5 Example: Bessel Equation
3.6 Second order diff. eqns : Sturm Liouville form
3.7 Sturm Liouville form: Polynomial solutions

4 Group theory
4.1 Group
4.2 Subgroup
4.3 Finite groups: Multiplication table
4.4 Examples
4.5 The symmetric or the permutation group Sn
4.6 Some important ways of constructing subgroups
4.7 Decompositions of a group into disjoint subsets
4.7.1 Conjugacy Classes
4.7.2 Decompositions into cosets with respect to a subgroup
4.8 Normal or invariant subgroups
4.9 Factor or Quotient group
4.10 Group homomorphisms
4.10.1 Isomorphisms
4.10.2 Automorphism
4.10.3 Inner Automorphisms
4.11 Direct product of groups
4.12 Semi-direct product of groups
4.13 Action of a group on a set
4.14 Orbits, Isotropy groups, Fixed points
4.15 Burnside’s Lemma
4.16 Representations of a group
4.17 Basic questions in representation theory
4.18 Characters of a representation
4.19 Orthogonality properties of irreducible characters
4.20 Character table
4.21 The trivial and the Regular representation of a group
4.22 Two important questions in representation theory with relevance to physics

1 Finite dimensional Vector Spaces
1.1 Vector space
A vector space V is a set of mathematical objects, called vectors written as
x, y, z, u, · · · equipped with two operations - addition and multiplication by
scalars, such that the following hold
• For any pair x, y ∈ V , x + y = y + x is also in V [Closure under addition and commutativity of addition]
• For any x, y, z ∈ V , x + (y + z) = (x + y) + z [Associativity of addition]
• There is a unique zero vector 0 ∈ V such that, for any x ∈ V , x+0 = x
[Additive identity]
• For each x ∈ V there is a vector denoted by −x such that x+(−x) = 0
[Additive inverse]
• For any scalar α and any x ∈ V , αx is also in V [ Closure under scalar
multiplication]
• For any x ∈ V , 0 · x = 0 and 1 · x = x
• For any scalar α and any pair x, y ∈ V , α(x + y) = αx + αy. Further, for any pair of scalars α, β and any x ∈ V , α(βx) = (αβ)x and (α + β)x = αx + βx
Depending on whether the scalars are real numbers or complex numbers one
speaks of a real or a complex vector space. In general, the scalars may
be drawn from any field, usually denoted by F and in that case we speak
of a vector space over the field F. ( A field F is a set equipped with two
composition laws–addition and multiplication such that F is an abelian group
under addition (with ‘0’ denoting the additive identity element) and F ∗ =
F − {0} is an abelian group with respect to multiplication (with ‘1’ denoting
the multiplicative identity element). Two familiar examples of fields are the set of real numbers and the set of complex numbers. Both of these are infinite fields. Another not so familiar example of an infinite field is the field of rational numbers. Finite fields also exist but they come only in sizes p^n where p is a prime number.) In what follows we will consider only the real or the complex field.
It is, however, important to appreciate that the choice of the field is an
integral part of the definition of the vector space.

1.2 Examples
• Mn×m (C): the set of n × m complex matrices.

• Mn (C): the set of n × n complex matrices.

• Cn : the set of n-dimensional column vectors with complex entries.

• Pn (t): the set of polynomials {a0 + a1 t + a2 t^2 + · · · + an−1 t^(n−1)} in t of degree less than n with real or complex coefficients.

( The vector space Mn (C) can also be viewed as the set of all linear operators
on Cn . We note that the vector space Cn is of special interest as all finite
dimensional vector spaces of dimension d are isomorphic to Cd as we shall
see later )

1.3 Linear combinations, Linear Span


For vectors x1 , x2 , · · · , xn ∈ V and scalars α1 , α2 , · · · , αn , we say that the
vector x = α1 x1 + α2 x2 + · · · + αn xn is a linear combination of x1 , x2 , · · · , xn
with coefficients α1 , α2 , · · · , αn .
The set of all linear combinations of a given set of vectors x1 , x2 , · · · , xn ∈
V is called the linear span of the vectors x1 , x2 , · · · , xn and is itself a vector
space.

1.4 Linear independence


A set of vectors x1 , x2 , · · · , xn ∈ V is said to be linearly independent if
α1 x1 + α2 x2 + · · · + αn xn = 0 ⇒ α1 = α2 = · · · = αn = 0. Otherwise the set
is said to be linearly dependent.

1.5 Dimension
A vector space is said to be of dimension n if there exists a set of n linearly
independent vectors but every set of n + 1 vectors is linearly dependent.
On the other hand, if for every integer n it is possible to find n linearly
independent vectors then the vector space is said to be infinite dimensional.
In what follows we will exclusively deal with finite dimensional vector
spaces.

1.6 Basis
In a finite dimensional vector space V of dimension n, any n linearly independent vectors x1 , x2 , · · · , xn ∈ V are said to provide a basis for V . In general there are infinitely many bases for a given vector space.

1.7 Representation of a vector in a given basis


Given a basis e1 , e2 , · · · , en ∈ V any x ∈ V can be uniquely written as x =
x1 e1 +x2 e2 +· · ·+xn en . The coefficients x1 , · · · , xn , called the components of
x in the basis e1 , e2 , · · · , en , can be arranged in the form of a column vector

x ↦ x = (x1 , x2 , · · · , xn )^T

In particular

e1 ↦ e1 = (1, 0, · · · , 0)^T , · · · , en ↦ en = (0, 0, · · · , 1)^T

The column vector x is called the representation of x ∈ V in the basis
e1 , e2 , · · · , en .

1.8 Relation between bases


Let e1 , e2 , · · · , en and e'1 , e'2 , · · · , e'n be two bases for a vector space V . Since e1 , e2 , · · · , en is a basis, each e'i can be written as a linear combination of e1 , e2 , · · · , en :

e'_i = \sum_{j=1}^{n} S_{ji} e_j

The matrix S must necessarily be invertible since e'1 , e'2 , · · · , e'n is also a basis and each ei can be written as a linear combination of e'1 , e'2 , · · · , e'n :

e_i = \sum_{j=1}^{n} (S^{-1})_{ji} e'_j


Two bases are thus related to each other through an invertible matrix S – there are as many bases in a vector space of dimension n as there are n × n invertible matrices. The set of all n × n invertible real (complex) matrices forms a group denoted by GL(n, R) (GL(n, C)). Under a change of basis the components x of a vector x in e1 , e2 , · · · , en are related to the components x' of x in the basis e'1 , e'2 , · · · , e'n as follows

x'_i = \sum_{j=1}^{n} (S^{-1})_{ij} x_j


1.9 Subspace
A subset V1 of V which is a vector space in its own right is called a subspace
of V .

1.10 Basis for a vector space adapted to its subspace


If V1 is a subspace of dimension m of a vector space V of dimension n then a basis e1 , e2 , · · · , em , em+1 , · · · , en such that the first m vectors e1 , e2 , · · · , em provide a basis for V1 is called a basis for V adapted to V1 .

1.11 Direct Sum and Sum


A vector space V is said to be a direct sum of its subspaces V1 and V2 , V = V1 ⊕ V2 , if every vector x ∈ V can be uniquely written as x = u + v where u ∈ V1 and v ∈ V2 . The requirement of uniqueness implies that the two subspaces V1 and V2 of V can have no vectors in common except the zero vector. This has the consequence that dim V1 + dim V2 = dim V .
Given a subspace V1 of V , there is no unique choice for the subspace V2 such that V = V1 ⊕ V2 – there are infinitely many ways in which this can be done.
If the requirement of the uniqueness of the decomposition x = u + v is dropped then one says that V is a sum of V1 and V2 . In this case V1 and V2 do have vectors in common other than the zero vector. The set of common vectors themselves form a subspace of V of dimension equal to dim V1 + dim V2 − dim V .

1.12 Linear Operators
A linear operator A on a vector space V is a rule which assigns, to any vector x, another vector Ax such that A(αx + βy) = αAx + βAy for any x, y ∈ V and any scalars α, β.
Linear operators on a vector space V of dimension n themselves form a vector space of dimension n^2 .

1.13 Null space, Range and Rank of a linear operator


Given a linear operator A, the set of vectors obtained by applying A to all of V , written symbolically as AV , form a subspace of V called the range of A. The dimension of the range of A is called the rank of A. The set of all vectors x such that Ax = 0, i.e. the set of all vectors which get mapped to the zero vector, also form a subspace of V called the null space of A. Clearly the rank of A is equal to the dimension of V minus the dimension of the null space.

1.14 Invertibility
An operator is said to be invertible if its range is the whole of V or in other
words its null space is trivial– there is no nonzero vector x ∈ V such that
Ax = 0.

1.15 Invariant subspace of a linear operator


A subspace V1 of V is said to be an invariant subspace of a linear operator
A if Ax ∈ V1 whenever x ∈ V1 .

1.16 Eigenvalues and Eigenvectors of a linear operator


A non zero vector x ∈ V is said to be an eigenvector of A if Ax = λx and λ is called the corresponding eigenvalue. Note that if x is an eigenvector of A corresponding to the eigenvalue λ then so is αx for any nonzero scalar α.

1.17 Representation of a linear operator in a given ba-
sis
It is evident that a linear operator on V , owing to linearity, is completely specified by its action on a chosen basis e1 , e2 , · · · , en for V :

A e_i = \sum_{j=1}^{n} A_{ji} e_j

The matrix A of the coefficients Aij is called the representation of the linear operator in the basis e1 , e2 , · · · , en .
It can further be seen that if the linear operators A and B are represented by A and B respectively in a given basis in V then the operator AB is represented in the same basis by AB.

1.18 Change of basis


Clearly the representation of a linear operator depends on the chosen ba-
sis. If we change the basis the matrix representing the operator will also
change. Let A and A' be the representations of the linear operator in the bases e1 , e2 , · · · , en and e'1 , e'2 , · · · , e'n related to each other as

e'_i = \sum_{j=1}^{n} S_{ji} e_j

then the representations A and A' are related to each other as A' = S^{-1} A S.
Thus under a change of basis the representation of a linear operator undergoes a ‘similarity’ transformation: A → A' = S^{-1} A S.
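
As a quick numerical illustration (added here; not part of the original notes, and the matrices used are arbitrary), the following minimal Python/NumPy sketch checks the two change-of-basis rules, x' = S^{-1} x for components and A' = S^{-1} A S for operator representations.

import numpy as np

rng = np.random.default_rng(0)
n = 3

A = rng.standard_normal((n, n))        # representation of an operator in the basis e_1,...,e_n
x = rng.standard_normal(n)             # components of a vector in the same basis
S = rng.standard_normal((n, n))        # columns express the new basis vectors e'_i in the old basis
assert abs(np.linalg.det(S)) > 1e-12   # S must be invertible to define a basis

S_inv = np.linalg.inv(S)
x_new = S_inv @ x                      # components of the same vector in the primed basis
A_new = S_inv @ A @ S                  # representation of the same operator in the primed basis

# The vector Ax must have consistent representations in the two bases:
print(np.allclose(S_inv @ (A @ x), A_new @ x_new))    # True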

1.19 Diagonalizability
A linear operator A is said to be diagonalizable if one can find a basis in V such that it is represented in that basis by a diagonal matrix. If this can be done then clearly each of the basis vectors must be an eigenvector of A. This also means that for an operator to be diagonalizable its eigenvectors must furnish a basis for V , i.e. the n eigenvectors of A must be linearly independent.

1.20 From linear operators to Matrices
From the discussion above it is evident that for any vector space V of dimen-
sion n, whatever be its nature, after fixing a basis, we can make the following
identifications:
x ∈ V ↔ x ∈ Cn
Linear operator A on V ↔ A ∈ Mn (C)
Rank of A ↔ Rank of A
Invertibility of A ↔ Invertibility of A
Diagonalizability of A ↔ Diagonalizability of A
Eigenvalues and eigenvectors of A ↔ Eigenvalues and eigenvectors of A
In mathematical terms, every finite dimensional vector space of dimension n
is isomorphic to Cn .

1.21 Rank of a matrix


The rank of an n × n matrix A = (x1 , x2 , · · · , xn ), where x1 , x2 , · · · , xn denote its columns, equals the size of the largest linearly independent subset of the columns. It also equals the size of the largest non vanishing minor of A. Alternatively one may compute
the number of linearly independent solutions to Ax = 0. This gives the
dimension of the null space of A. This number subtracted from n gives the
rank of A.
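
A short NumPy sketch (illustrative only; the example matrix is made up) showing two of the routes to the rank mentioned above: counting independent columns via numpy.linalg.matrix_rank, and computing n minus the dimension of the null space.

import numpy as np

# The third column is the sum of the first two, so the rank is 2.
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [2., 3., 5.]])

print(np.linalg.matrix_rank(A))        # 2

# Dimension of the null space = number of (numerically) zero singular values;
# rank = n - dim(null space)
sv = np.linalg.svd(A, compute_uv=False)
null_dim = int(np.sum(sv < 1e-10))
print(A.shape[0] - null_dim)           # 2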

1.22 Eigenvalues and Eigenvectors of a matrix


The eigenvalue problem Ax = λx may be rewritten as (A − λI)x = 0. This
set of homogeneous linear equations has a non trivial solution if and only if
Det(A − λI) equals zero. This yields an nth degree polynomial equation in λ:

C(λ) = λ^n + c_{n−1} λ^{n−1} + c_{n−2} λ^{n−2} + · · · + c_0 = 0

whose roots give the eigenvalues. The polynomial C(λ) is called the characteristic polynomial and the equation C(λ) = 0 the characteristic equation of A. It is here that the role of the field F comes to the fore. In general, there is no guarantee that an nth degree polynomial with coefficients in F has n roots also in F. This is, however, true for the field of complex numbers (one says that the complex field is algebraically closed) and this is the main reason behind considering vector spaces over the complex field.
The roots λ1 , · · · , λn of the characteristic equation, the eigenvalues of A, may
all be distinct or some of them may occur several times. An eigenvalue that
occurs more than once is said to be degenerate and the number of times it
occurs is called its (algebraic) multiplicity or degeneracy. Having found the
eigenvalues one proceeds to construct the corresponding eigenvectors. Two
situations may arise
• An eigenvalue λk is non degenerate. In this case, there is essentially (or up to multiplication by a scalar) only one eigenvector corresponding to that eigenvalue.
• An eigenvalue λk is κ-fold degenerate. In this case one may or may not find κ linearly independent eigenvectors. Further, there is much greater freedom in choosing the eigenvectors – any linear combination of the eigenvectors corresponding to a degenerate eigenvalue is also an eigenvector corresponding to that eigenvalue.
Given the fact that the eigenvectors corresponding to distinct eigenvalues are
always linearly independent, we can make the following statements:
• If the eigenvalues of an n × n matrix A are all distinct then the corresponding eigenvectors, n in number, are linearly independent and hence form a basis in Cn .
• If this is not so, the n eigenvectors may or may not be linearly independent. (Special kinds of matrices for which the existence of n linearly independent eigenvectors is guaranteed, regardless of the degeneracies or otherwise in its spectrum, will be considered later.)

1.23 Diagonalizability
An n × n matrix A is diagonalizable i.e. there exists a matrix S such that
S −1 AS = Diag(λ1 , · · · , λn )
if and only if the eigenvectors x1 , · · · , xn corresponding to the eigenvalues
(λ1 , · · · , λn ) are linearly independent and the matrix S is simply obtained
by putting the eigenvectors side by side:
S = (x1 x2 · · · xn )

In view of what has been said above, a matrix whose eigenvalues are all distinct can certainly be diagonalized. When this is not so, i.e. when one or more eigenvalues are degenerate, we may or may not be able to diagonalize it, depending on whether or not it has n linearly independent eigenvectors. If the matrix can not be diagonalized, what is the best we can do? This leads us to the Jordan canonical form (of which the diagonal form is a special case).
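
The construction is easy to carry out numerically; the sketch below (illustrative, with an arbitrary 2 × 2 matrix having distinct eigenvalues) computes eigenvalues and eigenvectors with numpy.linalg.eig, builds S by placing the eigenvectors side by side, and checks that S^{-1} A S is diagonal.

import numpy as np

A = np.array([[2., 1.],
              [0., 3.]])                  # distinct eigenvalues 2 and 3

lam, X = np.linalg.eig(A)                 # columns of X are the eigenvectors x_1, x_2
S = X                                     # S = (x_1 x_2 ... x_n)

D = np.linalg.inv(S) @ A @ S              # should equal Diag(lam_1, ..., lam_n)
print(np.allclose(D, np.diag(lam)))       # True

# Each column of S really is an eigenvector: A x_k = lam_k x_k
for k in range(len(lam)):
    print(np.allclose(A @ S[:, k], lam[k] * S[:, k]))    # True, True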

1.24 Jordan Canonical Form


Consider a matrix A whose eigenvalues are (λ1 , λ2 , · · · , λn ). Some of the en-
tries in this list may be the same. Notationally it proves convenient to replace
this list by a shorter list λ̃1 , · · · , λ̃r with all distinct entries and specify the
(algebraic) multiplicity κi of the entry λ̃i , i = 1, · · · , r. (Thus, for instance,
the list (0.5, 0.5, 1.5, 1.5, 1.5, 0.3) would get abridged to (0.5, 1.5, 0.3) with κ1 = 2, κ2 = 3, κ3 = 1). Clearly all the κ’s must add up to n : \sum_{i=1}^{r} κ_i = n.
Now let µi , i = 1, · · · , r denote the number of linearly independent eigenvectors corresponding to the eigenvalue λ̃i . This number is also referred to as the geometric multiplicity of λ̃i . It is evident that 1 ≤ µi ≤ κi , i = 1, · · · , r. Further, the sum \sum_{i=1}^{r} µ_i = ℓ gives the total number of linearly independent eigenvectors of A. It can be shown that for every matrix A there is an S such that S^{-1} A S = J where J, the Jordan form, has the block diagonal structure J = Diag(J1 , J2 , · · · , Jℓ ) where each block Ji , i = 1, · · · , ℓ has
the structure

J_i = \begin{pmatrix} λ & 1 & & & \\ & λ & 1 & & \\ & & \ddots & \ddots & \\ & & & λ & 1 \\ & & & & λ \end{pmatrix}

i.e. λ along the diagonal and 1’s along the superdiagonal, where λ is one of the eigenvalues of A. Some general statements can be made at this stage:

• The sizes of the blocks add up to n

• The number of times each eigenvalue occurs along the diagonal equals
its algebraic multiplicity

• The number of blocks in which each eigenvalue occurs equals its geo-
metric multiplicity.

12
Needless to say, the diagonal form is a special case of the Jordan form in which each block is of dimension 1.
Further details concerning the sizes of the blocks and the explicit construction of the S which effects the Jordan form have to be worked out case by case and will be omitted.
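
For small explicit examples the Jordan form can be obtained with a computer algebra system. The sketch below (illustrative; the matrix is made up) uses SymPy's Matrix.jordan_form on a 2 × 2 matrix whose only eigenvalue 3 is doubly degenerate but has a single eigenvector, so a single 2 × 2 Jordan block results.

import sympy as sp

A = sp.Matrix([[4, 1],
               [-1, 2]])       # characteristic polynomial (lam - 3)^2, only one eigenvector

P, J = A.jordan_form()         # A = P * J * P**(-1)
sp.pprint(J)                   # [[3, 1], [0, 3]]: a single 2x2 Jordan block
print(sp.simplify(P * J * P.inv() - A))    # zero matrix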

1.25 Cayley Hamilton Theorem


The Cayley Hamilton theorem states that every matrix satisfies its characteristic equation. Thus if the characteristic equation of a 3 × 3 matrix A is λ^3 + c2 λ^2 + c1 λ + c0 = 0 then A satisfies A^3 + c2 A^2 + c1 A + c0 I = 0. This expresses A^3 , and hence any higher power of A, as a linear combination of A^2 , A and I. This result is very useful in the explicit computation of functions f (A) of any n × n matrix A. We illustrate below the procedure for a 3 × 3 matrix.
Recall that if A has eigenvalues λ1 , · · · , λn with corresponding eigenvec-
tors x1 , · · · , xn then f (A) has eigenvalues f (λ1 ), · · · , f (λn ) with x1 , · · · , xn
as eigenvectors.
Now consider a function f (A) of, say, a 3 × 3 matrix. The Cayley Hamilton theorem tells us that computing any function of A (which can meaningfully be expanded in a power series in A) reduces, in this instance, to computing powers of A up to two:

f (A) = a2 A2 + a1 A + a0 I

The only thing that remains is to determine the three coefficients a2 , a1 , a0 and to do that we need three equations. If the three eigenvalues are distinct, by virtue of what was said above, one obtains

f(λ_1) = a_2 λ_1^2 + a_1 λ_1 + a_0
f(λ_2) = a_2 λ_2^2 + a_1 λ_2 + a_0
f(λ_3) = a_2 λ_3^2 + a_1 λ_3 + a_0
which when solved for a2 , a1 , a0 yield the desired result.
What if one of the eigenvalues λ1 is two fold degenerate i.e what if the
eigenvalues turn out to be λ1 , λ1 , λ2 ? We then get only two equations for the
three unknowns. It can be shown that in such a situation the third equation
needed to supplement the two equations

f(λ_1) = a_2 λ_1^2 + a_1 λ_1 + a_0
f(λ_2) = a_2 λ_2^2 + a_1 λ_2 + a_0
is

∂f(λ)/∂λ |_{λ=λ_1} = 2 a_2 λ_1 + a_1
What if all the three eigenvalues are the same, i.e. if the eigenvalues turn out to be λ1 , λ1 , λ1 ? The three desired equations then would be

f(λ_1) = a_2 λ_1^2 + a_1 λ_1 + a_0

∂f(λ)/∂λ |_{λ=λ_1} = 2 a_2 λ_1 + a_1

∂²f(λ)/∂λ² |_{λ=λ_1} = 2 a_2
One can easily recognise how the pattern outlined above extends to more
general situations.
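
The procedure is easy to automate; the NumPy/SciPy sketch below (illustrative; the matrix is an arbitrary example with distinct eigenvalues) solves the three equations f(λ_i) = a_2 λ_i^2 + a_1 λ_i + a_0 for f = exp and checks the result against scipy.linalg.expm.

import numpy as np
from scipy.linalg import expm

A = np.array([[1., 2., 0.],
              [0., 3., 1.],
              [0., 0., 5.]])            # eigenvalues 1, 3, 5 (distinct)

lam = np.linalg.eigvals(A)

# Solve  f(lam_i) = a2*lam_i^2 + a1*lam_i + a0  for f = exp (a 3x3 Vandermonde system)
V = np.vander(lam, N=3)                 # columns: lam^2, lam, 1
a2, a1, a0 = np.linalg.solve(V, np.exp(lam))

fA = a2 * (A @ A) + a1 * A + a0 * np.eye(3)
print(np.allclose(fA, expm(A)))         # True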

1.26 Scalar or inner product, Hilbert Space


A scalar product is a rule which assigns a scalar, denoted by (x, y), to any
pair of vectors in x, y ∈ V such that
• (x, y) = (y, x)∗ (hermitian symmetry)
• (x, αy + βz) = α(x, y) + β(x, z) (linearity)
• (x, x) ≥ 0. Equality holds if and only if x = 0 (Positivity)
Examples:
• Cn : (x, y) = x† y

• Mn (C) : (x, y) = Tr[x† y]


• Pn (t) : (x, y) = \int_a^b dt\, w(t) x^*(t) y(t), for any fixed w(t) such that w(t) ≥ 0 for t ∈ (a, b)
A vector x is said to be normalized if its norm ||x|| ≡ \sqrt{(x, x)} = 1. If
a vector is not normalized, it can be normalized by dividing it by its norm.
Two vectors x, y ∈ H are said to be orthogonal if (x, y) = 0. A vector space
V equipped with a scalar product is called a Hilbert space H. On a given
vector space one can define a scalar product in infinitely many ways. Hilbert
spaces corresponding to the same vector space with distinct scalar products
are regarded as distinct Hilbert spaces.

1.27 Orthogonal complement
Given a subspace H1 of a Hilbert space H, the set of all vectors orthogonal
to all vectors in H1 forms a subspace, denoted by H1⊥ , called the orthogonal
complement of H1 in H. Further, as the nomenclature suggests, H = H1 ⊕
H1⊥ .

1.28 Orthonormal Bases


A basis e1 , e2 , · · · , en ∈ H such that (ei , ej ) = δij is said to be an orthonormal basis in H. If a vector x ∈ H is expressed in terms of the orthonormal basis e1 , · · · , en as x = x1 e1 + · · · + xn en then its components xi are simply equal to (ei , x). Similarly if a linear operator is represented in an orthonormal basis e1 , e2 , · · · , en ∈ H by a matrix A

A e_i = \sum_{j=1}^{n} A_{ji} e_j


then the matrix elements Aij are simply given by

Aij = (ei , Aej )

Remember that this holds only when the chosen basis is an orthonormal basis
and not otherwise.

1.29 Relation between orthonormal bases


Two orthonormal bases e1 , e2 , · · · , en and e01 , e02 , · · · , e0n are related to each
other by a unitary matrix:

e'_i = \sum_{j=1}^{n} U_{ji} e_j ,   U† U = I


Thus in an n dimensional Hilbert space there are as many orthonormal bases


as n × n unitary matrices.

1.30 Gram Schmidt Orthogonalization procedure


Given a set of linearly independent vectors x1 , x2 , · · · , xn the Gram Schmidt
procedure enables one to construct out of it an orthonormal set z1 , z2 , · · · , zn

in a recursive way. The first step consists of constructing an orthogonal basis y1 , y2 , · · · , yn as follows

y_1 = x_1 ,   y_i = x_i − \sum_{j=1}^{i−1} \frac{(y_j , x_i)}{(y_j , y_j)} y_j ,   i = 2, · · · , n

The desired orthonormal basis z1 , z2 , · · · , zn is then obtained by normalizing this orthogonal set: z_i = y_i / ||y_i || .
There are infinitely many orthogonalization procedures. However, only the Gram-Schmidt procedure has the advantage of being sequential – if one more vector is added to the set, the construction up to the previous step remains unaffected.
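
A direct transcription of the recursion above into NumPy (an illustrative sketch; the standard Euclidean inner product on R^n is assumed):

import numpy as np

def gram_schmidt(X):
    """Columns of X are the linearly independent vectors x_1,...,x_n.
    Returns a matrix whose columns z_1,...,z_n are orthonormal."""
    n = X.shape[1]
    Y = np.zeros_like(X, dtype=float)
    for i in range(n):
        y = X[:, i].astype(float).copy()
        for j in range(i):                         # subtract projections on y_1,...,y_{i-1}
            y -= (Y[:, j] @ X[:, i]) / (Y[:, j] @ Y[:, j]) * Y[:, j]
        Y[:, i] = y
    return Y / np.linalg.norm(Y, axis=0)           # normalize each column

X = np.array([[1., 1., 0.],
              [1., 0., 1.],
              [0., 1., 1.]])
Z = gram_schmidt(X)
print(np.allclose(Z.T @ Z, np.eye(3)))             # True: the z_i are orthonormal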

1.31 Gram Matrix


Given a set of vectors x1 , x2 , · · · , xn , one can associate with it a matrix G with Gij = (xi , xj ) called the Gram matrix. A neces-
sary and sufficient condition for x1 , x2 , · · · , xn to be linearly independent is
that the determinant of the Gram matrix must be non zero. ( In fact the
determinant of a Gram matrix is always ≥ 0)

1.32 Adjoint of a linear operator


An operator, denoted by A† , such that (x, Ay) = (A† x, y) for all pairs x, y ∈ H is called the adjoint of A. Stated in terms of a basis e1 , e2 , · · · , en ∈ H this may equivalently be expressed as

(e_i , A e_j) = (A† e_i , e_j)

(e_i , A e_j) = (e_j , A† e_i)^*

If the basis is chosen to be an orthonormal basis, after recognising that (ei , Aej ) and (ei , A† ej ) are simply the matrix elements Aij and (A†)ij of the matrices representing A and A† respectively in the chosen basis, the last equation translates into

(A†)_{ij} = A_{ji}^*

i.e. the matrix for A† is simply the complex conjugate transpose of the matrix
A for A. Remember that this is so only if the basis chosen is an orthonormal
basis and is not so otherwise.

1.33 Special kinds of linear operators, their matrices


and their properties
• Self adjoint or Hermitian operator : An operator A for which A† = A, or in other words (x, Ay) = (Ax, y) for all pairs x, y, is called a self adjoint operator. Such an operator can be shown to have the following properties:
– Its eigenvalues are real
– The eigenvectors corresponding to distinct eigenvalues are orthog-
onal
– Its eigenvectors are linearly independent and therefore it can always be diagonalized regardless of whether its eigenvalues are distinct or not. Its eigenvectors can always be chosen to form an orthonormal basis
– In an orthonormal basis, a self adjoint operator A is represented
by a Hermitian matrix A, A† = A
– A hermitian matrix A can always be diagonalized by a unitary matrix: U† A U = Diag
• Unitary operator: An operator U such that (Ux, Uy) = (x, y) for all pairs x, y is called a unitary operator. Such an operator can be shown to have the following properties:
– Its eigenvalues are of unit modulus
– The eigenvectors corresponding to distinct eigenvalues are orthog-
onal
– Its eigenvectors are linearly independent and therefore it can always be diagonalized regardless of whether its eigenvalues are distinct or not. Its eigenvectors can always be chosen to form an orthonormal
basis
– In an orthonormal basis, a unitary operator U is represented by a
unitary matrix U, U † U = I

• Positive operator : An operator A such that (x, Ax) ≥ 0 for all x is called a positive (or non negative) operator. Such an operator can be shown to have the following properties:

– its eigenvalues are ≥ 0


– It is necessarily self adjoint and hence inherits all the properties
of a self adjoint operator.

• Projection operator: A self adjoint operator P such that P^2 = P is called a projection operator. Such an operator can be shown to have the following properties:

– its eigenvalues are either 1 or 0


– Being self adjoint, it has all the properties of a self adjoint opera-
tor.
– the number of 1’s in its spectrum gives its rank.
– A projection operator P of rank m fixes an m dimensional sub-
space of H. The operator Id − P is also a projection operator of
rank n − m and fixes the orthogonal complement of the subspace
corresponding to P.
– If P1 , P2 , · · · , Pn denote the projection operators corresponding
to the one dimensional subspaces determined by an orthonormal
basis e1 , e2 , · · · , en ∈ H then

P_i P_j = δ_{ij} P_i ,   P_1 + P_2 + · · · + P_n = Id

– If e1 , e2 , · · · , en ∈ H is an eigenbasis of a self adjoint operator A corresponding to the eigenvalues λ1 , · · · , λn then A may be resolved as:

A = λ_1 P_1 + λ_2 P_2 + · · · + λ_n P_n   (Spectral Decomposition)

• Real symmetric matrices: These arise in the study of quadratic forms


over the real field. An expression q(x1 , · · · , xn ) of the form
q(x_1 , · · · , x_n) = \sum_{i,j=1}^{n} A_{ij} x_i x_j ,   A_{ij} ∈ R , x ∈ R^n ,

a real homogeneous polynomial of degree 2, is called a real quadratic form in n variables. A real quadratic form can be compactly expressed as q(x) = x^T A x where A is a real symmetric matrix. Under a linear
change of variables x → y = S −1 x, A suffers a congruence transfor-
mation : A → A0 = S T AS. Given a real symmetric matrix A can we
always find a matrix S such that S T AS is diagonal so that the quadratic
expression in the new variables has only squares and no ‘cross terms’ ?
The answer is yes :

– Every real symmetric A has real eigenvalues


– Its eigenvectors are real and can always be chosen to form an orthonormal basis.
– An orthogonal matrix S, S T S = I can always be found such that
S T AS = Diag. The entries along the diagonal are the eigenvalues
of A.
– The matrix S is constructed by putting the eigenvectors of A side by side.

• 2n dimensional real symmetric positive matrices A can always be diag-


onalized by a congruence transformation through a symplectic matrix:

S^T A S = Diag ,   S^T β S = β ,   β = \begin{pmatrix} 0 & I \\ −I & 0 \end{pmatrix}

The entries along the diagonal are not the eigenvalues of A but rather
what are known as symplectic eigenvalues of A.
Symplectic matrices arise naturally in the context of linear canonical
transformations in the Hamiltonian formulation of classical mechanics
and quantum mechanics (linear canonical transformations in classical mechanics (quantum mechanics) are those linear transformations
which preserve the fundamental Poisson brackets (commutation rela-
tions))

All these operators (matrices) are examples of normal operators (matrices) – operators A which commute with their adjoint A† , i.e. [A, A† ] = 0 where [A, B] ≡ AB − BA.

1.34 Simultaneous Diagonalizability of Self adjoint op-
erators
Two self adjoint operators A, B, A† = A, B† = B can be diagonalized simultaneously by a unitary transformation if and only if they commute: [A, B] = 0. The task of diagonalizing two commuting self adjoint operators essentially consists in constructing a common eigenbasis which, as we know, can always be chosen as an orthonormal basis. The unitary operator which effects the simultaneous diagonalization is then obtained by putting the elements of the common basis side by side as usual. If one of the two has no degenerate eigenvalues then its eigenbasis is also an eigenbasis of the other. More work is needed if neither of the two has a non degenerate spectrum – suitable linear combinations of the eigenvectors corresponding to degenerate eigenvalues have to be constructed so that they are also eigenvectors of the other.
The significance of this result in the context of quantum mechanics arises
in the process of labelling the elements of a basis in the Hilbert space by the
eigenvalues of a commuting set of operators.
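
The nondegenerate case is easy to verify numerically. The sketch below (illustrative; the commuting Hermitian matrices are constructed by hand from a common eigenbasis) diagonalizes A with numpy.linalg.eigh and checks that the same unitary also diagonalizes B.

import numpy as np

rng = np.random.default_rng(1)

# Two commuting self adjoint matrices, built so that they share an eigenbasis.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
A = Q @ np.diag([1.0, 2.0, 3.0]) @ Q.conj().T       # non degenerate spectrum
B = Q @ np.diag([5.0, -1.0, 2.0]) @ Q.conj().T

print(np.allclose(A @ B, B @ A))                    # True: [A, B] = 0

# Diagonalize A alone; since its eigenvalues are distinct, its orthonormal
# eigenbasis automatically diagonalizes B as well.
_, U = np.linalg.eigh(A)
D_B = U.conj().T @ B @ U
print(np.allclose(D_B, np.diag(np.diag(D_B))))      # True: off-diagonal entries vanish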

1.35 Simultaneous reduction of quadratic forms


If A is a real symmetric strictly positive matrix and B a real symmetric matrix then there is an S such that

S^T A S = Id ,   S^T B S = Diag

This result is of considerable relevance in the context of finding normal modes of oscillations of coupled harmonic oscillators:

M \frac{d^2 x}{dt^2} = −K x ,

where M is a real positive matrix and K is a real symmetric matrix.

1.36 Standard constructions of new vector spaces from


old ones
• Quotient spaces : Given a vector space V and a subspace V1 thereof one
can decompose V into disjoint subsets using the equivalence relation
that two elements of V are equivalent if they differ from each other by an element of V1 . The subsets, i.e. the equivalence classes, themselves form
a vector space V /V1 , called the quotient of V by V1 , of dimension equal
to the difference in the dimensions of V and V1 .

• Dual of a vector space : Given a vector space V , the set of all linear
functionals on V themselves form a vector space V ∗ of the same dimen-
sion as V . Here by a linear functional on V one means a rule which assigns a scalar to each element in V respecting linearity.

• Tensor product of vector spaces: Consider two vector spaces V1 and


V2 of dimensions n and m respectively. Let e1 , · · · , en and f1 , · · · , fm
denote the bases in V1 and V2 respectively. By introducing a formal
symbol ⊗, we construct a set of nm objects ei ⊗ fj ; i = 1, · · · , n, j = 1, · · · , m and decree them to be a basis for a new vector space V1 ⊗ V2 of dimension nm : elements x of V1 ⊗ V2 are taken to be all linear combinations of the ei ⊗ fj

x = \sum_{i=1}^{n} \sum_{j=1}^{m} α_{ij} e_i ⊗ f_j

(It is assumed that the formal symbol ⊗ satisfies certain ‘common sense’ properties such as (u + v) ⊗ z = u ⊗ z + v ⊗ z; (αu) ⊗ z = α(u ⊗ z) = u ⊗ (αz) etc.)
Here a few comments are in order:
Elements x of V1 ⊗ V2 can be divided into two categories, product or
separable vectors i.e those which can be written as u⊗v; u ∈ V1 , v ∈ V2
and non separable or entangled vectors i.e. those which can not be
written in this form.
Operators A and B on V1 and V2 may respectively be extended to
operators on V1 ⊗ V2 as A ⊗ I and I ⊗ B.
Operators on V1 ⊗V2 can be divided into two categories : local operators
i.e. those which can be written as A ⊗ B and non local operators i.e.
those which can not be written in this way.
If the operators A and B on V1 and V2 are represented by the matrices
A and B in the bases e1 , · · · , en and f1 , · · · , fm then the operator A ⊗
B is represented in the lexicographically ordered basis ei ⊗ fj ; i = 1, · · · , n, j = 1, · · · , m by the matrix A ⊗ B, where

A ⊗ B = \begin{pmatrix} A_{11} B & \cdots & A_{1n} B \\ \vdots & & \vdots \\ A_{n1} B & \cdots & A_{nn} B \end{pmatrix}

This construction can easily be extended to tensor products of three or


more vector spaces.
Tensor products of vector spaces arise naturally in the description of composite systems in quantum mechanics. The notion of entanglement plays a crucial role in quantum information theory.
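
The matrix A ⊗ B written above is exactly what numpy.kron produces. The short sketch below (illustrative; the matrices and vectors are arbitrary) also checks the compatibility property (A ⊗ B)(u ⊗ v) = (Au) ⊗ (Bv) for separable vectors.

import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])           # operator on V1 (dimension 2)
B = np.array([[0., 1.],
              [1., 0.]])           # operator on V2 (dimension 2)

AB = np.kron(A, B)                 # 4x4 matrix in the lexicographically ordered basis e_i (x) f_j
print(AB.shape)                    # (4, 4)

u = np.array([1., -1.])
v = np.array([2., 5.])
# Separable (product) vectors are mapped to separable vectors:
print(np.allclose(np.kron(A @ u, B @ v), AB @ np.kron(u, v)))    # True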

2 Fourier Series and Fourier Transforms
2.1 Periodic functions
A (real or complex) function f (t) of a real variable t is said to be a periodic
function if there is a smallest T > 0 such that

f (t) = f (t + T )

for all t and T is called its period.


If f1 (t) and f2 (t) are two periodic functions with periods T1 and T2 then
their linear combination

g(t) = af1 (t) + bf2 (t)

is a periodic function if and only if T1 and T2 are commensurate, i.e. T1 /T2 is a rational number:

T_1 / T_2 = m/n ,   m, n integers

or in other words T1 = mα and T2 = nα. The period T of g(t) will be the smallest number divisible by both T1 and T2 , i.e. LCM(m, n) α.
A function f (t) defined over a finite interval say 0 ≤ t ≤ τ can be em-
bedded into a periodic function g(t) with period T in many different ways.
The function g(t) is called the periodic extension of f (t). [Figure showing two possible periodic extensions of f (t) omitted.]
Any piecewise continuous periodic function f (t) of period T can be expanded as

f(t) = \sum_{n=−∞}^{∞} c_n e^{inωt} ,   ω = \frac{2π}{T}

in terms of the set of functions

f_n(t) = e^{inωt} ,   ω ≡ 2π/T ;   n = 0, ±1, ±2, · · ·

This set of functions form an orthonormal basis with respect to the scalar
product:

(f, g) = \frac{1}{T} \int_{−T/2}^{T/2} dt\, f^*(t) g(t)

i.e.

(f_n , f_m) = \frac{1}{T} \int_{−T/2}^{T/2} dt\, f_n^*(t) f_m(t) = \frac{1}{T} \int_{−T/2}^{T/2} dt\, e^{i(m−n)ωt} = δ_{nm}

Using this orthogonality property, given an f (t), we can compute the cn ’s, the Fourier coefficients, as follows:

c_n = (f_n , f) = \frac{1}{T} \int_{−T/2}^{T/2} dt\, e^{−inωt} f(t)

The Fourier series thus ‘digitizes’ a periodic ‘signal’ f (t) in that it stores
the periodic signal f (t), 0 ≤ t ≤ T in terms of a denumerable set cn , n =
0, ±1, ±2, · · · . Often one does not need all the cn ’s and the signal can be
fairly well approximated by a small number of cn ’s.
Using einωt = cos nωt + i sin nωt, the Fourier series can also be expressed
in the ‘sine-cosine’ form as

f(t) = \frac{a_0}{2} + \sum_{n=1}^{∞} [ a_n \cos nωt + b_n \sin nωt ] ,   a_0 ≡ 2c_0 ,  a_n ≡ (c_n + c_{−n}) ,  b_n ≡ i(c_n − c_{−n})

Given an f (t) the Fourier coefficients that appear in this form can be computed using the relations

a_n = \frac{2}{T} \int_{−T/2}^{T/2} dt\, f(t) \cos nωt , n = 0, 1, · · · ,   b_n = \frac{2}{T} \int_{−T/2}^{T/2} dt\, f(t) \sin nωt , n = 1, 2, · · ·
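
As a numerical illustration (added here, not in the original notes), the sketch below computes a_n and b_n for a square wave of period T = 2π by numerical quadrature; for this signal the known answer is a_n = 0 and b_n = 4/(nπ) for odd n.

import numpy as np

T = 2 * np.pi
w = 2 * np.pi / T                        # here omega = 1
f = lambda t: np.sign(np.sin(t))         # square wave of period 2*pi

t = np.linspace(-T / 2, T / 2, 20001)

def a(n):
    return (2 / T) * np.trapz(f(t) * np.cos(n * w * t), t)

def b(n):
    return (2 / T) * np.trapz(f(t) * np.sin(n * w * t), t)

print(b(1), 4 / np.pi)                   # both ~1.2732
print(b(3), 4 / (3 * np.pi))             # both ~0.4244
print(a(2), b(2))                        # both ~0

# A truncated Fourier sum evaluated away from the jumps is already close to f:
t0 = np.pi / 2
partial = a(0) / 2 + sum(a(n) * np.cos(n * w * t0) + b(n) * np.sin(n * w * t0)
                         for n in range(1, 31))
print(partial, f(t0))                    # ~1.0 and 1.0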

2.2 Convergence
If f (t) is continuous at t = t0 then the Fourier series evaluated at t = t0 converges to f (t0 ).
If f (t) is discontinuous at t = t0 then the Fourier series evaluated at t = t0 converges to the average value of f (t) at t = t0 , i.e. to [f (t0+ ) + f (t0− )]/2.

2.3 Parseval Identity

\frac{1}{T} \int_{−T/2}^{T/2} dt\, |f(t)|^2 = \sum_{n=−∞}^{∞} |c_n|^2 = \frac{1}{4} |a_0|^2 + \frac{1}{2} \sum_{n=1}^{∞} [ |a_n|^2 + |b_n|^2 ]

2.4 T → ∞: Fourier Series to Fourier transform


A non periodic function may be viewed as a periodic function with T = ∞.
We now consider this limit of the Fourier series. Substituting for the cn ’s in the Fourier series we have

f(t) = \sum_{n=−∞}^{∞} e^{inωt} \frac{1}{T} \int_{−T/2}^{T/2} dt'\, f(t') e^{−inωt'}

     = \int_{−T/2}^{T/2} dt'\, f(t') \frac{1}{T} \sum_{n=−∞}^{∞} e^{inω(t−t')}

     = \int_{−T/2}^{T/2} dt'\, f(t') \frac{1}{2π} \sum_{n=−∞}^{∞} \frac{2π}{T} e^{inω(t−t')}

Introducing a discrete variable x taking values {nω, n = 0, ±1, ±2, · · · } with the difference ∆x between adjacent values being 2π/T we find that the sum on the RHS becomes an integral in the limit T → ∞ leading to

f(t) = \int_{−∞}^{∞} dt'\, f(t') \frac{1}{2π} \int_{−∞}^{∞} dx\, e^{ix(t−t')}

which after interchanging the order of integration and replacing x by ω may also be written as

f(t) = \frac{1}{\sqrt{2π}} \int_{−∞}^{∞} dω\, e^{iωt} \frac{1}{\sqrt{2π}} \int_{−∞}^{∞} dt'\, e^{−iωt'} f(t')

The first of the two equations leads to the notion of the Dirac delta function

δ(t − t') = \frac{1}{2π} \int_{−∞}^{∞} dx\, e^{ix(t−t')}

and the second to the notion of the Fourier transform

f(t) = \frac{1}{\sqrt{2π}} \int_{−∞}^{∞} dω\, e^{iωt} F(ω)

F(ω) ≡ F[f(t)] = \frac{1}{\sqrt{2π}} \int_{−∞}^{∞} dt\, e^{−iωt} f(t)

|F(ω)|^2 is called the power spectrum of f (t): it gives the amount of the frequency ω present in the signal f (t).

2.5 Delta function


The delta function, by definition, has the property:

\int_{−∞}^{∞} dt\, f(t) δ(t − t_0) = f(t_0)

It can be pictured as a function which is zero everywhere except at the point


where its argument vanishes, where it has an infinite spike. As a result the integral

\int_a^b dt\, f(t) δ(t − t_0)

equals f (t0 ) if t0 is contained in the interval (a, b) and equals 0 if not.

2.6 Parseval Identity

\int_{−∞}^{∞} dt\, |f(t)|^2 = \int_{−∞}^{∞} dω\, |F(ω)|^2

Functions f (t) for which

\int_{−∞}^{∞} dt\, |f(t)|^2 < ∞

are called square integrable functions. From the Parseval identity it follows
that the Fourier transforms of square integrable functions are also square

integrable. Further, it can be shown that linear combinations of square in-
tegrable functions are also square integrable : the set of square integrable
functions form a Hilbert space – a vector space equipped with the scalar product

(f, g) = \int_{−∞}^{∞} dt\, f^*(t) g(t)

2.7 Discrete Fourier Transform


The Fourier transform F (j), j = 0, 1, · · · , N − 1 of a function f (k), k =
0, 1, · · · , N − 1 over a discrete set of N points labelled 0, 1, 2, · · · , N − 1 is
defined as

F(j) = \frac{1}{\sqrt{N}} \sum_{k=0}^{N−1} ω^{jk} f(k) ,   ω ≡ e^{2πi/N}
[ Note that ω here stands for the N th root of unity in contrast to the variable
ω which appears in the definition of Fourier transforms.] The inverse relation
expressing f in terms of its Fourier transform is

f(k) = \frac{1}{\sqrt{N}} \sum_{j=0}^{N−1} ω^{−jk} F(j) ,

which follows from the ‘orthogonality relation’

\frac{1}{N} \sum_{k=0}^{N−1} ω^{(j−ℓ)k} = δ_{jℓ}

[When j = ℓ, the LHS is clearly equal to 1. When j ≠ ℓ, the LHS is, up to the factor 1/N , the sum of the N roots of unity, which we know equals 0.]
As before we have the Parseval identity

\sum_{j=0}^{N−1} |F(j)|^2 = \sum_{j=0}^{N−1} |f(j)|^2

The Discrete Fourier transform above, in the special case when N = 2^n , is variously referred to in the literature as the Fast Fourier transform, as in this case there are efficient algorithms for computing it. In the Quantum Information Theory literature it is referred to as the Quantum Fourier Transform.
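
A minimal sketch (illustrative) implementing the definition above as a matrix and checking the inverse relation and the Parseval identity. Note the convention here (ω = e^{2πi/N} together with the 1/√N factor) differs from numpy.fft.fft, which uses e^{−2πi/N} and no symmetric normalization; the last line shows the translation.

import numpy as np

N = 8
k = np.arange(N)
f = np.cos(2 * np.pi * k / N) + 0.5 * np.sin(4 * np.pi * k / N)    # sample signal

omega = np.exp(2j * np.pi / N)
W = omega ** np.outer(np.arange(N), k)          # W[j, k] = omega**(j*k)

F = W @ f / np.sqrt(N)                          # F(j) = (1/sqrt(N)) sum_k omega^{jk} f(k)
f_back = W.conj() @ F / np.sqrt(N)              # (1/sqrt(N)) sum_j omega^{-jk} F(j)
print(np.allclose(f_back, f))                   # True

# Parseval: sum_j |F(j)|^2 = sum_k |f(k)|^2
print(np.allclose(np.sum(np.abs(F)**2), np.sum(np.abs(f)**2)))     # True

# Relation to NumPy's convention (e^{-2*pi*i/N}, no 1/sqrt(N)):
print(np.allclose(F, np.sqrt(N) * np.fft.ifft(f)))                 # True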

The three ‘Fourier transforms’ that we have considered respectively deal
with functions defined on a circle, the real line, and a periodic lattice of N
points (N points on a circle )

3 Second order differential equations
3.1 Power series, interval of convergence
An infinite series of the form \sum_{n=0}^{∞} a_n (x − x_0)^n is called a power series around x0 . It converges for values of x lying in the interval |x − x0 | < R where R, the ‘radius of convergence’ of the power series, is given by

R = \lim_{n → ∞} \left| \frac{a_n}{a_{n+1}} \right|
Stated in words, the infinite series converges for values of x lying in the
interval x0 − R to x0 + R where R is to be computed as above.

3.2 Ordinary, singular and regular singular points


A point x = x0 of a second order differential equation

\frac{d^2 y}{dx^2} + p(x) \frac{dy}{dx} + q(x) y = 0

is said to be an ordinary point of the differential equation if both p(x) and
q(x) are ‘analytic’ at x = x0 , i.e. both can be expanded in a power series around x0 :

p(x) = \sum_{n=0}^{∞} p_n (x − x_0)^n ,   q(x) = \sum_{n=0}^{∞} q_n (x − x_0)^n


If this is not so i.e if either p(x) or q(x) or both are not analytic at x = x0
then x0 is said to be a singular point.
If x = x0 is a singular point such that (x − x0 )p(x) and (x − x0 )2 q(x) are
analytic at x = x0 then x0 is called a regular singular point of the differential
equation.

3.3 Solution around an ordinary point
Hereafter, without any loss of generality, we will choose x0 = 0.
It can be shown that if both p(x) and q(x) are analytic at x = 0 then so
is the solution y(x) of the differential equation. Further the power series for
y(x) around x = 0 converges at least in the common interval of convergence of that for p(x) and q(x). To explicitly solve the differential equation one therefore makes the ansatz:

y(x) = \sum_{n=0}^{∞} a_n x^n

and puts it in the differential equation. On equating like powers of x on both


sides one obtains, in general, a two step recursion formula of the form :

(·)an+2 + (·)an+1 + (·)an = 0, n = 0, 1, 2, · · ·

for the coefficients an . The recursion formula therefore determines all the
an ’s in terms of a0 and a1 . Putting the expressions for an in the power series
y(x) then yields
y(x) = a0 y1 (x) + a1 y2 (x)
where the two functions y1 (x), y2 (x) provide us with the two independent
solutions of the differential equation. Any solution can be expressed as a
linear combination thereof.

3.4 Solution around a regular singular point


To explicitly solve the differential equation around a regular singular point
one makes the ansatz:

y(x, λ) = \sum_{n=0}^{∞} a_n x^{n+λ}

Substituting it in the differential equation and on equating like powers of x


on both sides one obtains,

1. the indicial equation, a quadratic equation for λ:

(λ − λ1 )(λ − λ2 ) = 0

2. a one step recursion relation for the an ’s of the form :

(·)an+1 + (·)an = 0, n = 0, 1, · · ·

The coefficient of an+1 always turns out to be proportional to (n + 1 +


λ − λ1 )(n + 1 + λ − λ2 )

Note that this method can also be used for solving the differential equation
around an ordinary point as well.
Three situations may arise
Case I : (λ1 − λ2 ) is neither 0 nor an integer. In this case solve the recursion relation to obtain an , n = 1, 2, · · · in terms of a0 for arbitrary λ to obtain

y(x, λ) = a_0 \sum_{n=0}^{∞} (·) x^{n+λ}

The two independent solutions y1 (x) and y2 (x) are then given by

y1 (x) = y(x, λ1 ); y2 (x) = y(x, λ2 )

Case II : (λ1 − λ2 ) = 0. In this case the two independent solutions y1 (x) and y2 (x) are given by

y_1(x) = y(x, λ_1) ;   y_2(x) = \frac{∂y(x, λ)}{∂λ} \Big|_{λ=λ_1}

Case III : (λ1 − λ2 ) = N , a positive integer. In this case finding the solution y1 (x) corresponding to the larger root λ1 presents no difficulties and, as before, it is given by

y_1(x) = y(x, λ_1)

Difficulties arise when one tries to find the solution corresponding to the smaller root. Here for λ = λ2 one finds that the factor in front of aN in the expression relating aN to aN −1 becomes zero. Two situations may arise

1. (A) 0 · aN = 0

2. (B) 0 · aN ≠ 0

In the case A one can simply put aN = 0 ( If one doesn’t, one simply
generates an expression proportional to the first solution)
In the case B the second solution is obtained by putting a0 = (λ − λ2 )b0 ,
solving for an ’s in terms of b0 to obtain

y(x, λ) = b_0 \sum_{n=0}^{∞} (·) x^{n+λ}

The second solution is then given by

y_2(x) = \frac{∂y(x, λ)}{∂λ} \Big|_{λ=λ_2}
Note that in the cases II and III B, the second solution, in general, has
the structure

y_2(x) = \log(x) \sum_{n=0}^{∞} (·) x^{n+λ_2} + · · ·

which is singular at x = 0 and hence not of much physical interest.


Further, if this method is used for solving a differential equation around an ordinary point then one would find that both λ1 and λ2 are integers and that the case III A always obtains, ensuring that the two solutions have the structure of a power series as they should.

3.5 Example: Bessel Equation


A particularly good example of a second order differential equation where all
the cases discussed above obtain is that of the Bessel equation

x^2 \frac{d^2 y}{dx^2} + x \frac{dy}{dx} + (x^2 − α^2) y = 0
Here the roots of the indicial equation turn out to be α and −α and one has

• Case I : 2α not an integer

• Case II : α = 0

• Case III A 2α an odd integer

• Case III B 2α an even integer i.e α an integer

In Case I, proceeding as above, the two solutions turn out to be:

J_α(x) = \sum_{n=0}^{∞} \frac{(−1)^n}{n!\, Γ(n + α + 1)} \left(\frac{x}{2}\right)^{2n+α} ;   J_{−α}(x) = \sum_{n=0}^{∞} \frac{(−1)^n}{n!\, Γ(n − α + 1)} \left(\frac{x}{2}\right)^{2n−α}

and the general solution of the Bessel equation can be written as


y(x) = c1 Jα (x) + c2 J−α (x)
In cases II, IIIA, IIIB, while the first solution (corresponding to the larger
root of the indicial equation) is still Jα (x), to obtain the second solution
one has to follow the procedure outlined above. However, in the context of
Bessel’s equation, one finds that the two solutions continue to remain valid
in the Case III A as well. So the only problematic cases that remain are Case
II and Case III B i.e. when α is zero or an integer. Here ( and only in this
context) to find the second solution the following trick works. One defines a
suitable linear combination of Jα (x) and J−α (x) as follows
Y_α(x) = \frac{J_α(x) \cos πα − J_{−α}(x)}{\sin πα}
and for cases I and IIIA the general solution of the Bessel equation can
equally well be written as
y(x) = c1 Jα (x) + c2 Yα (x)
When α = 0 or an integer one finds that the function Yα (x), the way it is defined, becomes an indeterminate form and has to be computed by applying L’Hospital’s rule. [This happens because for N integer or zero, J_{−N}(x) = (−1)^N J_N(x), as can be verified from the definition of Jα (x).] With this caveat the general
solution of the Bessel equation can always be written as
y(x) = c1 Jα (x) + c2 Yα (x)
The functions Yα (x) are called Bessel functions of the second kind
The equation
x^2 \frac{d^2 y}{dx^2} + x \frac{dy}{dx} − (x^2 + α^2) y = 0
obtained by replacing x by ix in the Bessel equation is called the modified
Bessel equation. The considerations given above apply here as well and its
general solution is given by
y(x) = c1 Iα (x) + c2 Kα (x)

where

I_α(x) = \sum_{n=0}^{∞} \frac{1}{n!\, Γ(n + α + 1)} \left(\frac{x}{2}\right)^{2n+α}

and

K_α(x) = \frac{π}{2} \frac{I_{−α}(x) − I_α(x)}{\sin πα}
As before when α is zero or an integer, Kα (x) becomes an indeterminate
form ( by virtue of the fact that IN (x) = I−N (x) ) and has to be computed
as a limit. The functions Kα (x) are called modified Bessel functions of the
second kind
There are several equations of mathematical physics which are related to
the Bessel equation by suitable changes of variables. It can be shown that
the family of equations

x^2 \frac{d^2 y}{dx^2} + (1 − 2s) x \frac{dy}{dx} + [ (s^2 − r^2 p^2) + a^2 r^2 x^{2r} ] y = 0

after putting t = a x^r ; y = x^s u transforms into the Bessel equation

t^2 \frac{d^2 u}{dt^2} + t \frac{du}{dt} + (t^2 − p^2) u = 0

and hence their general solution can be written as

y(x) = x^s [ c_1 J_p(a x^r) + c_2 Y_p(a x^r) ]
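
The functions J_α, Y_α, I_α, K_α are available in scipy.special (jv, yv, iv, kv). The sketch below (illustrative; the order α = 1.5 and the x grid are arbitrary choices) checks that the truncated series above reproduces scipy's jv and that jv satisfies the Bessel equation up to finite-difference error.

import numpy as np
from scipy.special import jv, gamma

alpha = 1.5
x = np.linspace(0.5, 10.0, 200)

# Truncated series for J_alpha (30 terms are ample on this x range)
n = np.arange(30)[:, None]
J_series = np.sum((-1.0)**n / (gamma(n + 1) * gamma(n + alpha + 1))
                  * (x / 2)**(2 * n + alpha), axis=0)
print(np.allclose(J_series, jv(alpha, x)))              # True

# Check x^2 y'' + x y' + (x^2 - alpha^2) y = 0 by central differences
h = 1e-4
y   = jv(alpha, x)
yp  = (jv(alpha, x + h) - jv(alpha, x - h)) / (2 * h)
ypp = (jv(alpha, x + h) - 2 * y + jv(alpha, x - h)) / h**2
residual = x**2 * ypp + x * yp + (x**2 - alpha**2) * y
print(np.max(np.abs(residual)) < 1e-4)                   # True (finite-difference accuracy)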

3.6 Second order diff. eqns : Sturm Liouville form


Second order differential equations with the structure

\frac{1}{w(x)} \frac{d}{dx} \left( s(x) w(x) \frac{dy}{dx} \right) = λ y

are said to have the Sturm Liouville form. They have the structure of the eigenvalue problem for the second order differential operator

L ≡ \frac{1}{w(x)} \frac{d}{dx} \left( s(x) w(x) \frac{d}{dx} \right)

If w(x) and s(x) are such that

• w(x) ≥ 0 in the interval (a, b)

• s(a)w(a) = s(b)w(b) = 0

then it can easily be shown that the solutions y_λ(x) and y_{λ'}(x) with λ ≠ λ' are orthogonal to each other with respect to the weight function w(x) in the interval (a, b):

\int_a^b dx\, w(x) y_λ(x) y_{λ'}(x) = 0   if λ ≠ λ'
Alternatively this may be seen as a consequence of the fact that L is self adjoint with respect to the scalar product

(f, g) = \int_a^b dx\, w(x) f(x) g(x)

and that the eigenvectors of a self adjoint operator corresponding to distinct


eigenvalues are orthogonal.

3.7 Sturm Liouville form: Polynomial solutions


If in addition to the two conditions on s(x) and w(x) above one further
stipulates that

• s(x) is a polynomial in x of degree ≤ 2 with real roots


• C_1(x) ≡ \frac{1}{K_1 w(x)} \frac{d}{dx} [ s(x) w(x) ] ; K_1 a constant, is a polynomial of degree 1.

then it can be shown that


C_n(x) ≡ \frac{1}{K_n w(x)} \frac{d^n}{dx^n} [ s^n(x) w(x) ] ;   n = 0, 1, 2, · · ·   [Rodrigues Formula]

1. are polynomials in x of degree n

2. satisfy the second order differential equation

L C_n(x) = λ_n C_n(x) ;   with   λ_n = n \left( K_1 \frac{d C_1(x)}{dx} + \frac{1}{2}(n − 1) \frac{d^2 s(x)}{dx^2} \right)

3. form an orthogonal system
\int_a^b dx\, w(x) C_n(x) C_m(x) = 0   if n ≠ m

4. satisfy recursion relations of the form Cn+1 (x) = (An x + Bn )Cn (x) +
Dn Cn−1 (x)

A systematic analysis of the four conditions on s(x) and w(x) leads one
to eight distinct systems of orthogonal polynomials - Hermite, Legendre,
Laguerre, Jacobi, Gegenbauer and Tchebychef. [For details see, for instance, Mathematics for Physicists by Dennery and Krzywicki.]

s(x)        w(x)                    Interval    Name of the polynomial
1           e^{−x^2}                (−∞, ∞)     Hermite: H_n(x)
x           e^{−x}                  [0, ∞)      Laguerre: L_n(x)
(1 − x^2)   1                       [−1, 1]     Legendre: P_n(x)
x           x^ν e^{−x}              [0, ∞)      Associated Laguerre: L^ν_n(x), ν > −1
(1 − x^2)   (1 − x)^ν (1 + x)^µ     [−1, 1]     Jacobi: P^{(ν,µ)}_n(x), ν, µ > −1
(1 − x^2)   (1 − x^2)^{λ−1/2}       [−1, 1]     Gegenbauer: C^λ_n(x), λ > −1/2
(1 − x^2)   (1 − x^2)^{−1/2}        [−1, 1]     Tchebychef of the first kind: T_n(x)
(1 − x^2)   (1 − x^2)^{1/2}         [−1, 1]     Tchebychef of the second kind: U_n(x)

The table below gives the values of Kn appearing in the Rodrigues formula and those of hn appearing in the orthogonality relation

\int_a^b dx\, w(x) C_n(x) C_m(x) = h_n δ_{nm}

for the first three orthogonal polynomial systems.

C_n(x)      K_n            h_n
H_n(x)      (−1)^n         2^n n! √π
L_n(x)      n!             1
P_n(x)      (−2)^n n!      2/(2n + 1)

It is often not possible to remember detailed expressions for these polynomials. However their suitably defined generating functions

F(x, t) = \sum_{n} a_n C_n(x) t^n


have simple analytical expressions which can easily be remembered and these
can be used directly to deduce various properties of the corresponding poly-
nomials. The table below gives the generating functions for the first three
polynomial systems

C_n(x)      a_n       F(x, t)
H_n(x)      1/n!      e^{2xt − t^2}
L_n(x)      1         \frac{1}{1 − t} e^{−xt/(1−t)}
P_n(x)      1         \frac{1}{\sqrt{1 − 2xt + t^2}}
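
These table entries are easy to verify numerically. The sketch below (illustrative) uses numpy.polynomial to check the Hermite orthogonality relation with h_n = 2^n n! √π via Gauss–Hermite quadrature, and the Legendre normalization h_n = 2/(2n + 1).

import numpy as np
from numpy.polynomial import hermite as H      # "physicists'" Hermite polynomials H_n
from numpy.polynomial import legendre as P
from math import factorial, sqrt, pi

# Gauss-Hermite quadrature integrates p(x) e^{-x^2} exactly for polynomials p of degree <= 2*20 - 1
xs, ws = H.hermgauss(20)

def herm_inner(n, m):
    Hn = H.hermval(xs, [0] * n + [1])          # coefficient vector selecting H_n
    Hm = H.hermval(xs, [0] * m + [1])
    return np.sum(ws * Hn * Hm)

print(np.isclose(herm_inner(3, 3), 2**3 * factorial(3) * sqrt(pi)))   # True
print(abs(herm_inner(3, 5)) < 1e-8)                                   # True

# Legendre on [-1, 1]: integral of P_4^2 equals 2/(2*4 + 1)
xs, ws = P.leggauss(20)
P4 = P.legval(xs, [0] * 4 + [1])
print(np.isclose(np.sum(ws * P4 * P4), 2 / (2 * 4 + 1)))              # True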

4 Group theory
4.1 Group
A group is a set G = {g, g', g'', · · · } equipped with a composition rule · such that

• g · g 0 is in G for all g, g 0 in G (closure)

• g · (g 0 · g”) = (g · g 0 ) · g 00 is in G for all g, g 0 , g 00 in G (associativity)

• there is an element e ∈ G such that e · g = g · e = g for all g ∈ G (existence of identity element)

• for every g ∈ G there is an element g −1 such that g · g −1 = g −1 · g = e (existence of inverses)

G is said to be abelian if g · g' = g' · g for all pairs g, g' ∈ G, non abelian otherwise.

Hereafter, for notational simplicity, we will write g · g 0 simply as gg 0 .

4.2 Subgroup
A subset H = {h, h', h'', · · · } of G which is a group by itself (under the same composition rule as in G) is called a subgroup of G. A subset H of G is a subgroup of G if and only if h^{−1} h' ∈ H for all pairs h, h' in H.

4.3 Finite groups: Multiplication table


The table displaying the result of composing two group elements is called its multiplication table. It can be presented either in the g − g form

        e       g1       g2       ·    gk
e       e       g1       g2       ·    gk
g1      g1      g1 g1    g1 g2    ·    g1 gk
g2      g2      g2 g1    g2 g2    ·    g2 gk
·       ·       ·        ·        ·    ·
gk      gk      gk g1    gk g2    ·    gk gk

or more usefully in the g − g^{−1} form

        e       g1^{−1}      g2^{−1}      ·    gk^{−1}
e       e       g1^{−1}      g2^{−1}      ·    gk^{−1}
g1      g1      e            g1 g2^{−1}   ·    g1 gk^{−1}
g2      g2      g2 g1^{−1}   e            ·    g2 gk^{−1}
·       ·       ·            ·            ·    ·
gk      gk      gk g1^{−1}   gk g2^{−1}   ·    e
· · · · · ·
gk gk gk g1−1 gk g2−1 · e
From the fact that we are dealing with a group it follows that every element
of G does appear in each row or column and does so exactly once. (Note that
given a multiplication table with this property alone we can not conclude
it is the multiplication table of a group. Only after checking associativity
can one conclude that it is the multiplication table of a group.)
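
A tiny sketch (illustrative) that builds the multiplication table of the cyclic group Z_n under addition modulo n and checks the property just noted: every element appears exactly once in each row and in each column.

n = 5
elements = list(range(n))                        # Z_5 = {0, 1, 2, 3, 4} under addition mod 5

table = [[(g + h) % n for h in elements] for g in elements]
for row in table:
    print(row)

rows_ok = all(sorted(row) == elements for row in table)
cols_ok = all(sorted(col) == elements for col in zip(*table))
print(rows_ok and cols_ok)                       # True: each element occurs once per row/column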

4.4 Examples
• Zn , the symmetric group Sn , symmetries of a triangle, square, cube, tetrahedron....

• Z

• GL(n, C), SL(n, C), U (n), SU (n), O(n), SO(n), translations, affine group, Lorentz group....

4.5 The symmetric or the permutation group Sn


4.6 Some important ways of constructing subgroups
• Cyclic subgroup generated by an element g ∈ G: Pick an element g ∈ G and compute g, g^2 , g^3 , · · · . For a finite group G, owing to the closure property, there would be a smallest integer m, called the order of g, such that g^m = e. The set of elements {e, g, g^2 , · · · , g^{m−1} } constitute a cyclic subgroup of G. Such subgroups are necessarily abelian.

• The center Z(G) of G: The set of all elements of G, which commute


with all elements of G. This subgroup is, by construction, abelian.

• Commutator subgroup : It consists of all possible products of any


number of commutators q(g, g') = g g' g^{−1} g'^{−1} , one for each pair g, g' ∈ G.

• Given one subgroup H of G we can construct another Hg = gHg −1 , g ∈
G, g fixed called the conjugate of H by g.

4.7 Decompositions of a group into disjoint subsets


Recall that given an equivalence relation on a set, we may put it to use to
decompose the set into disjoint subsets. Each subset, called an equivalence class, consists of all the elements of the set related to each other by the
equivalence relation. Each equivalence class is completely specified by any
one of its elements, the class representative, as all the others in the class can
be generated by applying the equivalence relation to the class representative.
In the context of groups two equivalence relations given below are particularly
useful and important.

4.7.1 Conjugacy Classes


The equivalence relation underlying this decomposition reads g1 ∼ g2 if there
is a g ∈ G such that g2 = g g1 g⁻¹, and one says that g2 is conjugate to g1
by g. To carry out this decomposition one starts with an element g1 and
constructs its class by letting g run over G. Next one starts with an element
of G not present in this subset and generates its class in the same fashion,
and continues the process until all the elements of G are exhausted. The
equivalence classes are referred to as conjugacy classes C1 , C2 , · · · , Cs . In
general the sizes c1 , c2 , · · · , cs of the classes C1 , C2 , · · · , Cs are not the same.
The class of e consists of e alone, and by convention the first class C1 is taken
to be this class of the identity. Further, if the group is abelian, each element
is a class by itself.
An interesting property of the classes is that if we compose all the ele-
ments of a class, say Ci, with all those of Cj, then in the set of elements thus
obtained whole classes appear a certain number of times. This is symboli-
cally expressed as

        Ci Cj = Σ_{k=1}^{s} c^k_{ij} Ck

where the non-negative integers c^k_{ij} give the number of times the class Ck
appears when Ci and Cj are multiplied; they are referred to as the class constants.
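
A minimal sketch in Python of both ideas for S3 (the encoding of permutations as tuples of images is an assumption made purely for illustration): the conjugacy classes are found by brute force, and the product of the class of transpositions with itself is seen to contain whole classes an integral number of times.

    # Conjugacy classes of S3; p = (p(0), p(1), p(2)), composition (p*q)(i) = p[q[i]].
    from itertools import permutations
    from collections import Counter

    S3 = list(permutations(range(3)))
    comp = lambda p, q: tuple(p[q[i]] for i in range(3))
    inv  = lambda p: tuple(p.index(i) for i in range(3))

    def conjugacy_classes(G):
        classes, seen = [], set()
        for a in G:
            if a in seen:
                continue
            cls = {comp(comp(g, a), inv(g)) for g in G}   # { g a g^-1 : g in G }
            classes.append(sorted(cls))
            seen |= cls
        return classes

    classes = conjugacy_classes(S3)
    print([len(c) for c in classes])      # sizes 1, 3, 2: identity, transpositions, 3-cycles

    # Class constants: compose every element of C2 (transpositions) with every
    # element of C2 and count how often a representative of each class appears.
    prod = Counter(comp(a, b) for a in classes[1] for b in classes[1])
    print({c[0]: prod[c[0]] for c in classes})   # C2 C2 = 3 C1 + 0 C2 + 3 C3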

4.7.2 Decompositions into cosets with respect to a subgroup
Given a group G and a subgroup H thereof, one may define two equivalence
relations: (a) g1 ∼ g2 if g2 = g1 h for some h in H. Thus to generate the
equivalence class of g one simply multiplies g on the right by all the elements
of H. The subset of G thus obtained is called the right coset of g by H
and is symbolically denoted by gH. The decomposition of G based on this
equivalence relation is referred to as the right coset decomposition.
(b) g1 ∼ g2 if g2 = h g1 for some h in H. Thus to generate the equivalence
class of g one simply multiplies g on the left by all the elements of H. The
subset of G thus obtained is called the left coset of g by H and is symbolically
denoted by Hg. The decomposition of G based on this equivalence relation
is referred to as the left coset decomposition.
The set of (left or right) cosets of G by H is referred to as the (left or
right) coset space and denoted by G/H.
In general the two decompositions, left and right, are not the same. In
both cases, however, the number of elements in each coset is exactly |H|.
It immediately follows that |G|/|H| is an integer, i.e. the order of any
subgroup H of G is a divisor of the order |G| of G (Lagrange's theorem).
Note that this does not imply that for every divisor of |G| there is a subgroup
H of G of that order. (Sylow's theorems do, however, guarantee the existence
of subgroups whose orders are the prime-power divisors of |G|.)
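
A small sketch in Python (the choice G = Z12, H = {0, 4, 8} is only an illustrative assumption) showing the coset decomposition and the |G|/|H| count of Lagrange's theorem:

    # Cosets of the subgroup H = {0, 4, 8} in Z_12 under addition mod 12.
    G = list(range(12))
    H = [0, 4, 8]

    cosets = {tuple(sorted((g + h) % 12 for h in H)) for g in G}
    print(sorted(cosets))          # 4 disjoint cosets, each of size |H| = 3
    print(len(G) // len(H))        # 4 = |G|/|H|, an integer as Lagrange demands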

4.8 Normal or invariant subgroups


A subgroup for which the right and left cosets are the same, i.e. gH =
Hg for every g ∈ G (the equality sign here understood as equality between
sets), is called a normal or an invariant subgroup of G. The requirement
gH = Hg may be re-expressed as H = gHg⁻¹, and we may say that a subgroup
H of G is an invariant subgroup if every subgroup conjugate to H, gHg⁻¹
for any g ∈ G, is H itself.

4.9 Factor or Quotient group


An invariant subgroup leads naturally to the notion of a factor or quotient
group G/H. Its elements are the cosets themselves. Owing to the equality
of the right and left cosets one has gH g′H = gg′HH = gg′H. Stated in
words, the coset of g composed with the coset of g′ yields the coset of gg′.

Further, the identity element in this composition rule is the coset of the
identity i.e. H itself. The order of the new group thus constructed is clearly
equal to the number of cosets i.e. |G|/|H|.

4.10 Group homomorphisms


A map τ from G to G′ preserving the group composition is called a homo-
morphism from G to G′:
        τ : G → G′ ,   τ(g1 g2 ) = τ(g1 )τ(g2 )
and one says that G is homomorphic to G′.
The set of elements of G which map to the identity e′ of G′ under the
homomorphism τ forms a subgroup of G. This subgroup is called the kernel
of the homomorphism τ and is denoted by Ker(τ):
        Ker(τ) = {g ∈ G | τ(g) = e′}                                  (1)
Further, Ker(τ) is a normal subgroup of G.
The image of G under τ, Im(G), is a subgroup of G′.
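
As a sketch (the map x → x mod 3 from Z6 to Z3, both under addition, is chosen only for illustration), one can verify the homomorphism property and read off the kernel:

    # tau: Z_6 -> Z_3, tau(x) = x mod 3, is a homomorphism of additive groups.
    G = list(range(6))
    tau = lambda x: x % 3

    # check tau(g1 + g2) = tau(g1) + tau(g2) for all pairs
    assert all(tau((a + b) % 6) == (tau(a) + tau(b)) % 3 for a in G for b in G)

    kernel = [g for g in G if tau(g) == 0]
    print(kernel)   # [0, 3]: a normal subgroup of Z_6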

4.10.1 Isomorphisms
A homomorphism τ from G to G′ such that Ker(τ) = {e} and Im(G) = G′ is called
an isomorphism from G to G′. Two groups G and G′ which are isomorphic
to each other (we denote this by G ≃ G′) are essentially the same.

4.10.2 Automorphism
A homomorphism τ from G to G such that Ker(τ) = {e} and Im(G) = G is called an
automorphism of G.
The set of all automorphisms of a group G, Aut(G), forms a group under
the usual composition of maps, with the trivial automorphism g → g as the
identity element.

4.10.3 Inner Automorphisms


These are special automorphisms τ_g, one for each g ∈ G:
        τ_g : G → G ,   τ_g(g′) = g g′ g⁻¹
The set of all inner automorphisms forms a normal subgroup of Aut(G).

4.11 Direct product of groups
Given two groups G1 and G2 one can formally construct out of them a larger
group G1 × G2, of order equal to the product of the orders of G1 and G2,
consisting of the set of all pairs (g1 , g2 ); g1 ∈ G1 , g2 ∈ G2, endowed with
the composition rule

        (g1 , g2 )(g1′ , g2′ ) = (g1 g1′ , g2 g2′ )

Some obvious subgroups of G1 × G2 are {(g1 , e2 )} ≃ G1 and {(e1 , g2 )} ≃
G2 . If G1 = G2 then there is another obvious subgroup, called the diagonal
subgroup, {(g1 , g1 )} ≃ G1 . Further, it is also evident that if G1 and G2 are
abelian then so is their direct product.
Note that
• every element (g1 , g2 ) of G1 ×G2 can be uniquely expressed as a product
of elements of its subgroups {(g1 , e2 )} ' G1 and {(e1 , g2 )} ' G2 as
(g1 , e2 )(e1 , g2 )
• the elements belonging to the subgroups commute with each other and
• the two subgroups have no elements in common except for the identity
(e1 , e2 ).
This motivates the following definition:
A group G is said to be the direct product of its subgroups H1 and H2
provided
• every element g of G can be uniquely expressed as g = h1 h2 ; h1 ∈
H1 , h2 ∈ H2
• the elements belonging to the subgroups commute with each other
• the two subgroups have no elements in common except for the identity.

4.12 Semi-direct product of groups


The semi-direct product G1 ⋉ G2 of two groups again consists of ordered pairs
(g1 , g2 ); g1 ∈ G1 , g2 ∈ G2 as before, except that now the composition rule is
given a 'twist':
        (g1 , g2 )(g1′ , g2′ ) = (g1 g1′ , g2 τ_{g1}(g2′ ))

where the τ_{g1} are a set of automorphisms of G2, labelled by elements of G1, satis-
fying τ_{g1} τ_{g1′} = τ_{g1 g1′}. One consequence of this twist is that G1 ⋉ G2 may not
be abelian even if G1 and G2 are individually abelian.
The semidirect product reduces to the direct product when all the auto-
morphisms {τ_g , g ∈ G1 } are taken to be the trivial automorphism.
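
A sketch in Python of this construction (the twist r → −r on Z3 is an assumed choice of automorphism): the semidirect product Z2 ⋉ Z3 comes out non-abelian of order 6, the symmetry group of a triangle, even though both factors are abelian.

    # Semidirect product Z_2 x| Z_3: pairs (s, r) with the twisted composition
    # (s1, r1)(s2, r2) = (s1 + s2, r1 + tau_{s1}(r2)), tau_1(r) = -r mod 3.
    n = 3
    tau = lambda s, r: r % n if s == 0 else (-r) % n     # automorphisms of Z_3

    def mult(a, b):
        (s1, r1), (s2, r2) = a, b
        return ((s1 + s2) % 2, (r1 + tau(s1, r2)) % n)

    G = [(s, r) for s in range(2) for r in range(n)]
    print(len(G))                                        # order 6
    print(mult((1, 0), (0, 1)), mult((0, 1), (1, 0)))    # (1, 2) vs (1, 1): non-abelian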

4.13 Action of a group on a set


Let X be a set consisting of elements x, y, · · · and G a group consisting of
elements e, g, g′, · · · . We say that we have an action of G on X if we can
associate with each g, in a one-to-one fashion, a map ψ_g taking X → X such
that

        ψ_g ψ_{g′} = ψ_{g g′} ,   for all pairs g, g′ ∈ G

i.e. ψ_g ψ_{g′}(x) = ψ_g(ψ_{g′}(x)) = ψ_{g g′}(x) for all pairs g, g′ ∈ G and for all x ∈ X.

Since the correspondence g → ψ_g is one to one we may simplify the notation
by writing ψ_g(x) as just gx, with the understanding that gx = y ∈ X.

4.14 Orbits, Isotropy groups, Fixed points


The structure assumed above naturally leads to the following notions:

• The orbit ϑ_x of x ∈ X is the set of all elements of X obtained by acting
with all elements g of G on x. It is easy to convince oneself that an orbit
is completely determined by any point on it and that two orbits either
coincide completely or are disjoint. (The decomposition of X into orbits
may also be seen to arise by regarding x′ ∼ x if x′ = gx as an equivalence
relation on X.)

• The isotropy group G_x of x is the set of elements of G which leave the
chosen x unmoved or fixed. It can easily be seen that G_x is a subgroup
of G.

• The fixed-point set X_g of g ∈ G is the set of elements of X which remain
fixed under the action of the chosen g.

From the definition of the orbit ϑ_x of x and the isotropy group G_x of x it
is immediately obvious that the points on the orbit of x are in one-to-one
correspondence with the cosets of G by G_x. Hence the size |ϑ_x| of the orbit
is the same as the size |G|/|G_x| of the coset space G/G_x:

        |ϑ_x| = |G|/|G_x|

Further, from the definitions of G_x and X_g it is clear that the following equality
holds:

        Σ_{g∈G} |X_g| = Σ_{x∈X} |G_x|

4.15 Burnside’s Lemma


This beautiful theorem relates the number of orbits to the sum of the number
of fixed points of each g ∈ G :
        number of orbits = (1/|G|) Σ_{g∈G} |X_g|
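
A sketch of Burnside's lemma in action (the example of necklaces of 4 beads in 2 colours under cyclic rotation is an assumption chosen for illustration):

    # X = all 2^4 colourings of 4 beads, G = Z_4 acting by cyclic rotation.
    from itertools import product

    n, colours = 4, 2
    X = list(product(range(colours), repeat=n))
    rotate = lambda x, g: tuple(x[(i + g) % n] for i in range(n))

    fixed = [sum(1 for x in X if rotate(x, g) == x) for g in range(n)]
    print(fixed)                  # |X_g| for g = 0, 1, 2, 3: [16, 2, 4, 2]
    print(sum(fixed) // n)        # number of orbits (distinct necklaces): 6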

4.16 Representations of a group


Having learnt what is meant by the action of a group on a set, the notion of
a representation of a group arises when the set in question is a vector space
V:

g → D(g), D(g) linear operators on V satisfying D(g1 )D(g2 ) = D(g1 g2 )

The collection Γ = {D(g)} is said to furnish a representation of G. If V
is of finite dimension n then, in some chosen basis of V, the {D(g)} can be rep-
resented by n × n matrices {D(g)} and one speaks of an n-dimensional
matrix representation Γ = {D(g)}, D(g1 )D(g2 ) = D(g1 g2 ). The condition
D(g1 )D(g2 ) = D(g1 g2 ) is a strong requirement on the set of matrices {D(g)},
one for each g, for them to furnish a representation of G. In particular, it
implies that D(e) = I_{n×n} and D(g⁻¹) = D⁻¹(g).
Under a change of basis in V the matrices D(g) suffer a similarity trans-
formation D(g) → S −1 D(g)S. As the choice of a basis can not be of any real
mathematical significance, we say that two representations are equivalent if
they are related to each other in this way.
A representation is said to be reducible if there is a non-trivial subspace
V1 of V invariant under {D(g)}, i.e. one that goes into itself under the action of

{D(g)}. If so, then in a basis adapted to V1 the matrices {D(g)} will have
the form

        D(g) =  [ D^(1)(g)     X(g)    ]
                [    0       D^(2)(g)  ]
By virtue of the fact that {D(g)} is a representation, i.e. D(g1 )D(g2 ) =
D(g1 g2 ), it follows that {D^(1)(g)} and {D^(2)(g)} are also (smaller) repre-
sentations, i.e. they satisfy D^(1)(g1 )D^(1)(g2 ) = D^(1)(g1 g2 ) and
D^(2)(g1 )D^(2)(g2 ) = D^(2)(g1 g2 ) respectively. Thus a representation for which
this can be done is clearly reducible in the sense that we can pass from it
to smaller representations. In the same spirit, a representation is said to be
irreducible if this cannot be done. Clearly this means that the vector space
on which the {D(g)} act has no non-trivial subspaces invariant under {D(g)}.
A representation is said to be decomposable if there are subspaces V1
and V2 of V such that V = V1 ⊕ V2 and both V1 and V2 are invariant under
the action of {D(g)}. In that case, in a basis adapted to the two invariant
subspaces, the matrices {D(g)} will have the form
        D(g) =  [ D^(1)(g)      0      ]
                [    0       D^(2)(g)  ]

and the representation is said to be decomposable.


For finite groups, one can establish the following results:

• Every representation is decomposable

• The matrices {D(g)} can always be chosen to be unitary matrices.

Given this, we may now focus on {D(1) (g)} and {D(2) (g)}, decompose them
further and continue the process until we reach a stage where no further
decomposition is possible. We would have then decomposed the given repre-
sentation into irreducible representations.
Conversely, we can build any representation (up to equivalence) from the
knowledge of all the irreducible representations by simply putting them along
the diagonal a certain number of times.
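
As an illustration (the 3-dimensional permutation representation of S3, with permutations encoded as tuples of images, is an assumed example), one can verify the defining property D(g1)D(g2) = D(g1 g2) directly:

    # Permutation representation of S3: D(p)[i][j] = 1 iff i = p(j).
    from itertools import permutations

    S3 = list(permutations(range(3)))
    comp = lambda p, q: tuple(p[q[i]] for i in range(3))
    D = lambda p: [[1 if i == p[j] else 0 for j in range(3)] for i in range(3)]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    # the homomorphism property D(p)D(q) = D(pq) holds for all 36 pairs
    assert all(matmul(D(p), D(q)) == D(comp(p, q)) for p in S3 for q in S3)
    print([sum(D(p)[i][i] for i in range(3)) for p in S3])   # traces: [3, 1, 1, 0, 0, 1]

This particular representation is reducible: the one-dimensional subspace spanned by (1, 1, 1) is invariant, and its complement carries a two-dimensional representation.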

4.17 Basic questions in representation theory


Given a group G

• What are all its irreducible representations Γ^(α) = {D^(α)(g)}? One ir-
reducible representation, the trivial representation in which each group
element is represented by the number 1, is always available for any
group. By convention the first in the list of irreducible representations
is taken to be this trivial representation and is denoted by Γ^(1).

• How many of them are there? As we shall see shortly, the number of
irreducible representations is the same as s, the number of conjugacy
classes.

• What are their dimensions?

4.18 Characters of a representation


Given a representation Γ = {D(g)} its character is defined by the set of
numbers {χ(g)}
χ(g) = Tr[D(g)]
It can be easily seen that

• χ(g) is a class function i.e. it has the same value for all g belonging
to the same conjugacy class. As a consequence the list of characters
{χ(g)} can be abbreviated to a shorter list χi where i labels the classes.

• χ(e) gives the dimension of the representation.

4.19 Orthogonality properties of irreducible characters
Let Γ^(α) = {D^(α)(g)}, α = 1, 2, · · · , s, be the irreducible representations of G,
with characters χ^(α) = {χ^(α)(g)}. Then

        (χ^(α), χ^(β)) ≡ (1/|G|) Σ_{g∈G} χ^(α)*(g) χ^(β)(g) = δ^{αβ}

i.e.    (1/|G|) Σ_{i=1}^{s} c_i χ_i^(α)* χ_i^(β) = δ^{αβ}

or      Σ_{i=1}^{s} √(c_i/|G|) χ_i^(α)* √(c_i/|G|) χ_i^(β) = δ^{αβ}        (column orthogonality)

Here c_i is the number of elements in the class C_i, i = 1, 2, · · · , s. Similarly,

        Σ_{α} χ_i^(α)* χ_j^(α) = (|G|/c_i) δ_{ij}

or      Σ_{α=1}^{s} √(c_i/|G|) χ_i^(α)* √(c_j/|G|) χ_j^(α) = δ_{ij}        (row orthogonality)
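
A numerical sketch of both orthogonality relations (the character table of S3, with class sizes 1, 3, 2 and characters (1, 1, 1), (1, −1, 1), (2, 0, −1), is assumed as illustrative data); both checks print, up to floating point, the 3 × 3 identity matrix.

    # S3: classes of sizes c = [1, 3, 2]; rows of chi are the irreducible characters.
    c   = [1, 3, 2]
    chi = [(1, 1, 1), (1, -1, 1), (2, 0, -1)]
    order = sum(c)                                            # |G| = 6

    # column orthogonality between irreducible representations alpha and beta
    col = lambda a, b: sum(ci * x * y for ci, x, y in zip(c, chi[a], chi[b])) / order
    print([[col(a, b) for b in range(3)] for a in range(3)])

    # row orthogonality between classes i and j (with the sqrt(c_i c_j)/|G| weight)
    row = lambda i, j: sum(ch[i] * ch[j] for ch in chi) * (c[i] * c[j]) ** 0.5 / order
    print([[row(i, j) for j in range(3)] for i in range(3)])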

4.20 Character table


The characters of the irreducible representations can be assembled in the
form of a table, called the character table, as follows:
               χ^(1)   χ^(2)   ·   ·   χ^(s)
    1    C1      ·       ·     ·   ·     ·
    c2   C2      ·       ·     ·   ·     ·
    ·    ·       ·       ·     ·   ·     ·
    ·    ·       ·       ·     ·   ·     ·
    cs   Cs      ·       ·     ·   ·     ·
For most applications in physics and chemistry all we need to know about a
group G is its character table.
Given a reducible representation Γ = {D(g)} of G, and the irreducible
representations Γ^(α) = {D^(α)(g)} of G, we wish to know which irreducible
representations are present in Γ and how many times. In symbols, if
Γ = Σ_α m_α Γ^(α), what are the m_α giving the multiplicity of occurrence of Γ^(α)?
This is easily answered from the knowledge of the characters {χ(g)} of Γ and
{χ^(α)(g)} of the irreducible representations Γ^(α):

        Γ = Σ_α m_α Γ^(α)   ⇒   χ(g) = Σ_α m_α χ^(α)(g)

        m_α = (χ^(α), χ) = (1/|G|) Σ_{g∈G} χ^(α)*(g) χ(g)
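
A sketch of this recipe (using, as assumed data, the S3 character table above and the character (3, 1, 0) of the 3-dimensional permutation representation):

    # Multiplicities m_alpha = (1/|G|) sum_i c_i chi_alpha(i)* chi(i), grouped by class.
    c       = [1, 3, 2]
    chi_irr = {'trivial': (1, 1, 1), 'sign': (1, -1, 1), 'standard': (2, 0, -1)}
    chi     = (3, 1, 0)               # character of the permutation representation
    order   = sum(c)

    mult = {name: sum(ci * a * b for ci, a, b in zip(c, ch, chi)) // order
            for name, ch in chi_irr.items()}
    print(mult)    # {'trivial': 1, 'sign': 0, 'standard': 1}

So the permutation representation decomposes as the trivial representation plus the two-dimensional one, in agreement with the reducibility noted earlier.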

4.21 The trivial and the Regular representation of a


group
For any group G two representations are readily available, one irreducible
and the other reducible:

• The trivial representation: g → D^(1)(g) = 1.

• The regular representation: g → D^(R)(g), with D^(R)_{kj}(g) = 1 if g =
g_k g_j⁻¹, and zero otherwise.

The regular representation matrices D^(R)(g) are easily constructed: in the
g − g⁻¹ form of the multiplication table put 1 wherever g occurs and 0
everywhere else. Hence χ^(R)(g) = |G| if g = e, and 0 otherwise. As a result

        m_α = (1/|G|) Σ_{g∈G} χ^(α)*(g) χ^(R)(g) = (1/|G|) χ^(α)*(e) χ^(R)(e) = χ^(α)*(e)
            = n_α , the dimension of the irreducible representation Γ^(α)

Further,

        χ^(R)(e) = Σ_α m_α χ^(α)(e) ;   or   |G| = Σ_α n_α²

Thus we have two results:

• The sum of the squares of the dimensions of the irreducible representations
adds up to the order of the group.

• Every irreducible representation is present in the regular representation
as many times as its dimension.

It can also be shown that each n_α divides |G|. Hence one can make the following
general statements about the irreducible representations of a finite group
(a quick numerical check for S3 follows the list):

• The number of irreducible representations equals the number of conju-
gacy classes.

• The sum of the squares of the dimensions of the irreducible representations
adds up to the order of the group.

• The dimensions of the irreducible representations divide the order of
the group.
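
A quick numerical check of these statements for S3 (the class count and the dimensions 1, 1, 2 are taken as known data purely for illustration):

    # S3 has order 6, three conjugacy classes, and irreps of dimensions 1, 1, 2.
    order, num_classes, dims = 6, 3, [1, 1, 2]

    assert len(dims) == num_classes                 # as many irreps as classes
    assert sum(n ** 2 for n in dims) == order       # 1 + 1 + 4 = 6
    assert all(order % n == 0 for n in dims)        # each dimension divides |G|
    print("all three statements hold for S3")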

4.22 Two important questions in representation the-
ory with relevance to physics
• Given two representations Γ′ = {D′(g)} and Γ″ = {D″(g)} we can build a
bigger representation, of dimension equal to the product of the dimen-
sions of the two, by taking tensor products: Γ = {D′(g) ⊗ D″(g)}. By
construction, it is evident that the characters {χ(g)} of this represen-
tation are related to {χ′(g)} and {χ″(g)} by χ(g) = χ′(g)χ″(g). Such
a representation is in general reducible even if Γ′ and Γ″ are irreducible.
Decomposition of the tensor product of irreducible representations into
irreducible representations constitutes an important activity in appli-
cations of group theory to physics.

• Given an irreducible representation Γ = {D(g)} of a group G, one
can immediately construct a representation γ of a subgroup H of G by
simply restricting g to H (subduction). The representation of H thus
obtained will in general be a reducible representation of H. Decom-
position of this representation into irreducible representations of H is
another activity of great importance in physics.
