LIVIU I. NICOLAESCU
CONTENTS
1. Multilinear forms and determinants
1.1. Multilinear maps
1.2. The symmetric group
1.3. Symmetric and skew-symmetric forms
1.4. The determinant of a square matrix
1.5. Additional properties of determinants
1.6. Examples
1.7. Exercises
2. Spectral decomposition of linear operators
2.1. Invariants of linear operators
2.2. The determinant and the characteristic polynomial of an operator
2.3. Generalized eigenspaces
2.4. The Jordan normal form of a complex operator
2.5. Exercises
3. Euclidean spaces
3.1. Inner products
3.2. Basic properties of Euclidean spaces
3.3. Orthonormal systems and the Gram-Schmidt procedure
3.4. Orthogonal projections
3.5. Linear functionals and adjoints on Euclidean spaces
3.6. Exercises
4. Spectral theory of normal operators
4.1. Normal operators
4.2. The spectral decomposition of a normal operator
4.3. The spectral decomposition of a real symmetric operator
4.4. Exercises
5. Applications
5.1. Symmetric bilinear forms
5.2. Nonnegative operators
5.3. Exercises
6. Elements of linear topology
6.1. Normed vector spaces
6.2. Convergent sequences
6.3. Completeness
6.4. Continuous maps
Date: Started January 7, 2013. Completed on . Last modified on April 28, 2014.
These are notes for the Honors Algebra Course, Spring 2013.
A map
$$\omega : \underbrace{U \times \cdots \times U}_{k} \to \mathbb{F}$$
is called a $k$-linear form on $U$. When $k = 2$, we will refer to 2-linear forms as bilinear forms. We will denote by $T^k(U)$ the space of $k$-linear forms on $U$. ⊓⊔
Example 1.2. Suppose that $U$ is an $\mathbb{F}$-vector space and $U^*$ is its dual, $U^* := \operatorname{Hom}(U, \mathbb{F})$. We have a natural bilinear map
$$\langle -, - \rangle : U^* \times U \to \mathbb{F}, \quad U^* \times U \ni (\xi, u) \mapsto \langle \xi, u \rangle := \xi(u).$$
The bilinear map is called the canonical pairing between the vector space $U$ and its dual. ⊓⊔
Example 1.3. Suppose that $A = (a_{ij})_{1 \le i,j \le n}$ is an $n \times n$ matrix with real entries. Define
$$\Phi_A : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}, \quad \Phi_A(x, y) = \sum_{i,j} a_{ij} x_i y_j,$$
$$x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}, \quad y = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}.$$
To show that $\Phi_A$ is indeed a bilinear form we need to prove that for any $x, y, z \in \mathbb{R}^n$ and any $\lambda \in \mathbb{R}$ we have
$$\Phi_A(x + z, y) = \Phi_A(x, y) + \Phi_A(z, y), \tag{1.1a}$$
$$\Phi_A(x, y + z) = \Phi_A(x, y) + \Phi_A(x, z), \tag{1.1b}$$
$$\Phi_A(\lambda x, y) = \lambda \Phi_A(x, y) = \Phi_A(x, \lambda y). \tag{1.1c}$$
To verify (1.1a) we observe that
$$\Phi_A(x + z, y) = \sum_{i,j} a_{ij}(x_i + z_i) y_j = \sum_{i,j} (a_{ij} x_i y_j + a_{ij} z_i y_j) = \sum_{i,j} a_{ij} x_i y_j + \sum_{i,j} a_{ij} z_i y_j.$$
The equalities (1.1b) and (1.1c) are proved in a similar fashion. Observe that if $e_1, \dots, e_n$ is the natural basis of $\mathbb{R}^n$, then
$$\Phi_A(e_i, e_j) = a_{ij}.$$
This shows that $\Phi_A$ is completely determined by its action on the basis vectors $e_1, \dots, e_n$. ⊓⊔
Proposition 1.4. For any bilinear form $\Phi \in T^2(\mathbb{R}^n)$ there exists an $n \times n$ real matrix $A$ such that $\Phi = \Phi_A$, where $\Phi_A$ is defined as in Example 1.3. ⊓⊔
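Proposition 1.4 can be illustrated numerically. The sketch below is my own illustration, not part of the notes: it builds $\Phi_A$ from a matrix $A$ as in Example 1.3, then recovers $A$ by evaluating the form on pairs of basis vectors, using $\Phi_A(e_i, e_j) = a_{ij}$.

```python
# Illustrative sketch (not from the notes): a bilinear form from a matrix,
# and the matrix recovered from the form.
def phi(A, x, y):
    # Phi_A(x, y) = sum_{i,j} a_ij * x_i * y_j  (0-based indices)
    n = len(A)
    return sum(A[i][j] * x[i] * y[j] for i in range(n) for j in range(n))

def recover_matrix(form, n):
    # Evaluate the form on the basis vectors e_i, e_j.
    e = [[1 if k == i else 0 for k in range(n)] for i in range(n)]
    return [[form(e[i], e[j]) for j in range(n)] for i in range(n)]

A = [[1, 2], [3, 4]]
B = recover_matrix(lambda x, y: phi(A, x, y), 2)  # B equals A
```

The function names `phi` and `recover_matrix` are of course not from the notes; they only illustrate the correspondence of Proposition 1.4.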
A transposition is defined to be a permutation of the form $\tau_{ij}$ for some $i < j$. Observe that
$$|\tau_{ij}| = 2|j - i| - 1,$$
so that
$$\operatorname{sign}(\tau_{ij}) = -1, \quad i \ne j. \tag{1.2}$$
has the same sign as the signature of $\sigma$. Hence, to prove (1.3) it suffices to show that
$$\prod_{1 \le i < j \le n} \left| \frac{\sigma(j) - \sigma(i)}{j - i} \right| = |\operatorname{sign}(\sigma)| = 1,$$
i.e.,
$$\prod_{i < j} |\sigma(j) - \sigma(i)| = \prod_{i < j} |j - i|. \tag{1.5}$$
This is now obvious because the factors in the left-hand side are exactly the factors in the right-hand side multiplied in a different order. Indeed, for any $i < j$ we can find a unique pair $i' < j'$ such that
$$\sigma(j') - \sigma(i') = \pm (j - i).$$
(b) Observe that
$$\operatorname{sign}(\varphi) = \prod_{i<j} \frac{\varphi(j) - \varphi(i)}{j - i} = \prod_{i<j} \frac{\varphi(\sigma(j)) - \varphi(\sigma(i))}{\sigma(j) - \sigma(i)},$$
and we deduce
$$\operatorname{sign}(\varphi) \operatorname{sign}(\sigma) = \prod_{i<j} \frac{\varphi(\sigma(j)) - \varphi(\sigma(i))}{\sigma(j) - \sigma(i)} \cdot \prod_{i<j} \frac{\sigma(j) - \sigma(i)}{j - i} = \prod_{i<j} \frac{\varphi(\sigma(j)) - \varphi(\sigma(i))}{j - i} = \operatorname{sign}(\varphi \sigma).$$
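The signature and its multiplicativity can be checked by brute force. In the following sketch (an illustration, not from the notes) permutations of $\{0, \dots, n-1\}$ are encoded as tuples, and the sign is computed by counting inversions, which agrees with the product formula used above.

```python
# Illustrative check (not from the notes) of sign multiplicativity.
from itertools import permutations

def sign(s):
    # (-1)^(number of inversions), equivalent to the product formula.
    n = len(s)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n) if s[i] > s[j])
    return -1 if inversions % 2 else 1

def compose(phi, sigma):
    # (phi o sigma)(i) = phi(sigma(i))
    return tuple(phi[sigma[i]] for i in range(len(sigma)))

# Multiplicativity holds for every pair of permutations of {0, 1, 2}.
ok = all(sign(compose(p, s)) == sign(p) * sign(s)
         for p in permutations(range(3)) for s in permutations(range(3)))
```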
Example 1.10. Suppose that $\omega \in \Lambda^n U^*$ and $u_1, \dots, u_n \in U$. The skew-symmetry implies that for any $i < j$ we have
$$\omega(u_1, \dots, u_{i-1}, u_i, u_{i+1}, \dots, u_{j-1}, u_j, u_{j+1}, \dots, u_n) = -\omega(u_1, \dots, u_{i-1}, u_j, u_{i+1}, \dots, u_{j-1}, u_i, u_{j+1}, \dots, u_n).$$
Indeed, we have
$$\omega(u_1, \dots, u_{i-1}, u_j, u_{i+1}, \dots, u_{j-1}, u_i, u_{j+1}, \dots, u_n) = \omega(u_{\tau_{ij}(1)}, \dots, u_{\tau_{ij}(k)}, \dots, u_{\tau_{ij}(n)})$$
and $\operatorname{sign}(\tau_{ij}) = -1$. In particular, this implies that if $i \ne j$ but $u_i = u_j$, then
$$\omega(u_1, \dots, u_n) = 0. \quad ⊓⊔$$
Proposition 1.11. Suppose that $U$ is an $n$-dimensional $\mathbb{F}$-vector space and $e_1, \dots, e_n$ is a basis of $U$. Then for any scalar $c \in \mathbb{F}$ there exists a unique skew-symmetric $n$-linear form $\omega \in \Lambda^n U^*$ such that
$$\omega(e_1, \dots, e_n) = c.$$
Proof. To understand what is happening we consider first the special case $n = 2$. Thus $\dim U = 2$. If $\omega \in \Lambda^2 U^*$ and $u_1, u_2 \in U$ we can write
$$u_1 = a_{11} e_1 + a_{21} e_2, \quad u_2 = a_{12} e_1 + a_{22} e_2,$$
for some scalars $a_{ij} \in \mathbb{F}$, $i, j \in \{1, 2\}$. We have
$$\omega(u_1, u_2) = \omega(a_{11} e_1 + a_{21} e_2, a_{12} e_1 + a_{22} e_2)$$
$$= a_{11} \omega(e_1, a_{12} e_1 + a_{22} e_2) + a_{21} \omega(e_2, a_{12} e_1 + a_{22} e_2)$$
$$= a_{11} a_{12} \omega(e_1, e_1) + a_{11} a_{22} \omega(e_1, e_2) + a_{21} a_{12} \omega(e_2, e_1) + a_{21} a_{22} \omega(e_2, e_2).$$
The skew-symmetry of $\omega$ implies that
$$\omega(e_1, e_1) = \omega(e_2, e_2) = 0, \quad \omega(e_2, e_1) = -\omega(e_1, e_2).$$
Hence
$$\omega(u_1, u_2) = (a_{11} a_{22} - a_{21} a_{12})\, \omega(e_1, e_2).$$
If $\dim U = n$ and $u_1, \dots, u_n \in U$, then we can write
$$u_1 = \sum_{i_1=1}^n a_{i_1 1} e_{i_1}, \ \dots, \ u_k = \sum_{i_k=1}^n a_{i_k k} e_{i_k},$$
and we deduce
$$\omega(u_1, \dots, u_n) = \omega\Big( \sum_{i_1=1}^n a_{i_1 1} e_{i_1}, \dots, \sum_{i_n=1}^n a_{i_n n} e_{i_n} \Big) = \sum_{i_1, \dots, i_n = 1}^n a_{i_1 1} \cdots a_{i_n n}\, \omega(e_{i_1}, \dots, e_{i_n}).$$
Observe that if the indices $i_1, \dots, i_n$ are not pairwise distinct then
$$\omega(e_{i_1}, \dots, e_{i_n}) = 0.$$
Thus, in the above sum we get contributions only from pairwise distinct choices of indices $i_1, \dots, i_n$. Such a choice corresponds to a permutation $\sigma \in S_n$, $\sigma(k) = i_k$. We deduce that
$$\omega(u_1, \dots, u_n) = \sum_{\sigma \in S_n} a_{\sigma(1)1} \cdots a_{\sigma(n)n}\, \omega(e_{\sigma(1)}, \dots, e_{\sigma(n)}) = \sum_{\sigma \in S_n} \operatorname{sign}(\sigma)\, a_{\sigma(1)1} \cdots a_{\sigma(n)n}\, \omega(e_1, \dots, e_n).$$
Thus, $\omega \in \Lambda^n U^*$ is uniquely determined by its value on $(e_1, \dots, e_n)$.
Conversely, the map
$$(u_1, \dots, u_n) \mapsto c \sum_{\sigma \in S_n} \operatorname{sign}(\sigma)\, a_{\sigma(1)1} \cdots a_{\sigma(n)n}, \quad u_k = \sum_{i=1}^n a_{ik} e_i,$$
is indeed $n$-linear and skew-symmetric. The proof is notationally bushy, but it does not involve any subtle idea, so I will skip it. Instead, I'll leave the proof in the case $n = 2$ as an exercise. ⊓⊔
1.4. The determinant of a square matrix. Consider the vector space $\mathbb{F}^n$ with canonical basis
$$e_1 = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad e_2 = \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix}, \ \dots, \ e_n = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix}.$$
According to Proposition 1.11 there exists a unique $n$-linear skew-symmetric form on $\mathbb{F}^n$ such that
$$\omega(e_1, \dots, e_n) = 1.$$
We will denote this form by $\det$ and we will refer to it as the determinant form on $\mathbb{F}^n$. The proof of Proposition 1.11 shows that if $u_1, \dots, u_n \in \mathbb{F}^n$,
$$u_k = \begin{pmatrix} u_{1k} \\ u_{2k} \\ \vdots \\ u_{nk} \end{pmatrix}, \quad k = 1, \dots, n,$$
then
$$\det(u_1, \dots, u_n) = \sum_{\sigma \in S_n} \operatorname{sign}(\sigma)\, u_{\sigma(1)1} u_{\sigma(2)2} \cdots u_{\sigma(n)n}. \tag{1.6}$$
Note that
$$\det(u_1, \dots, u_n) = \sum_{\sigma \in S_n} \operatorname{sign}(\sigma)\, u_{1\sigma(1)} u_{2\sigma(2)} \cdots u_{n\sigma(n)}. \tag{1.7}$$
LINEAR ALGEBRA
Definition 1.12. Suppose that $A = (a_{ij})_{1 \le i,j \le n}$ is an $n \times n$ matrix with entries in $\mathbb{F}$ which we regard as a linear operator $A : \mathbb{F}^n \to \mathbb{F}^n$. The determinant of $A$ is the scalar
$$\det A := \det(Ae_1, \dots, Ae_n),$$
where $e_1, \dots, e_n$ is the canonical basis of $\mathbb{F}^n$, and $Ae_k$ is the $k$-th column of $A$,
$$Ae_k = \begin{pmatrix} a_{1k} \\ a_{2k} \\ \vdots \\ a_{nk} \end{pmatrix}, \quad k = 1, \dots, n. \quad ⊓⊔$$
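Formula (1.6) translates directly into code. The following sketch (illustrative, not from the notes; 0-based indices) sums $\operatorname{sign}(\sigma)\, a_{\sigma(1)1} \cdots a_{\sigma(n)n}$ over all permutations.

```python
# Illustrative transcription of the permutation formula (1.6).
from itertools import permutations

def sign(s):
    # parity of the number of inversions
    inv = sum(1 for i in range(len(s)) for j in range(i + 1, len(s)) if s[i] > s[j])
    return -1 if inv % 2 else 1

def det(A):
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = sign(sigma)
        for k in range(n):
            term *= A[sigma[k]][k]  # entry on row sigma(k), column k
        total += term
    return total
```

For a $2 \times 2$ matrix this recovers the familiar $a_{11} a_{22} - a_{21} a_{12}$; the $n!$ terms make this impractical for large $n$, which motivates the reduction methods below.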
Remark 1.13. Consider a typical summand in the first sum in (1.8), $a_{\sigma(1)1} \cdots a_{\sigma(n)n}$. Observe that the $n$ entries
$$a_{\sigma(1)1},\ a_{\sigma(2)2},\ \dots,\ a_{\sigma(n)n}$$
lie on different columns of $A$ and thus occupy all the $n$ columns of $A$. Similarly, these entries lie on different rows of $A$.
A collection of $n$ entries so that no two lie on the same row or the same column is called a rook placement.¹ Observe that in order to describe a rook placement, you need to indicate the position of the entry on the first column, by indicating the row $\sigma(1)$ on which it lies, then you need to indicate the position of the entry on the second column, etc. Thus, the sum in (1.8) has one term for each rook placement. ⊓⊔
Since $a^\top_{ij} = a_{ji}$ we deduce that
$$\operatorname{sign}(\sigma)\, a^\top_{\sigma(1)1} \cdots a^\top_{\sigma(n)n} = \operatorname{sign}(\sigma)\, a_{1\sigma(1)} \cdots a_{n\sigma(n)},$$
so that
$$\det A^\top = \sum_{\sigma \in S_n} \operatorname{sign}(\sigma)\, a_{1\sigma(1)} \cdots a_{n\sigma(n)} = \det A. \tag{1.9}$$
¹If you are familiar with chess: a rook controls the row and the column at whose intersection it is situated.
Proof. To keep the ideas as transparent as possible, we carry out the proof in the special case $n = 3$. Suppose first that $A$ is upper triangular. Then
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{pmatrix},$$
so that
$$Ae_1 = a_{11} e_1, \quad Ae_2 = a_{12} e_1 + a_{22} e_2, \quad Ae_3 = a_{13} e_1 + a_{23} e_2 + a_{33} e_3.$$
Then
$$\det A = \det(Ae_1, Ae_2, Ae_3) = \det(a_{11} e_1, a_{12} e_1 + a_{22} e_2, a_{13} e_1 + a_{23} e_2 + a_{33} e_3)$$
$$= \underbrace{\det(a_{11} e_1, a_{12} e_1, a_{13} e_1 + a_{23} e_2 + a_{33} e_3)}_{=0} + \det(a_{11} e_1, a_{22} e_2, a_{13} e_1 + a_{23} e_2 + a_{33} e_3)$$
$$= a_{11} a_{22} \Big( \underbrace{\det(e_1, e_2, a_{13} e_1)}_{=0} + \underbrace{\det(e_1, e_2, a_{23} e_2)}_{=0} + \det(e_1, e_2, a_{33} e_3) \Big)$$
$$= a_{11} a_{22} a_{33} \det(e_1, e_2, e_3) = a_{11} a_{22} a_{33}.$$
This proves the proposition when $A$ is upper triangular. If $A$ is lower triangular, then its transpose $A^\top$ is upper triangular and we deduce
$$\det A = \det A^\top = a^\top_{11} a^\top_{22} a^\top_{33} = a_{11} a_{22} a_{33}. \quad ⊓⊔$$
Recall that we have a collection of elementary column (row) operations on a matrix. The next
result explains the effect of these operations on the determinant of a matrix.
Proposition 1.16. Suppose that $A$ is an $n \times n$ matrix. The following hold.
(a) If the matrix $B$ is obtained from $A$ by multiplying the elements of the $i$-th column of $A$ by the same nonzero scalar $\lambda$, then
$$\det B = \lambda \det A.$$
(b) If the matrix $B$ is obtained from $A$ by switching the order of the columns $i$ and $j$, $i \ne j$, then
$$\det B = -\det A.$$
(c) If the matrix $B$ is obtained from $A$ by adding to the $i$-th column the $j$-th column, $j \ne i$, then
$$\det B = \det A.$$
(d) Similar results hold if we perform row operations of the same type.
(d) Similar results hold if we perform row operations of the same type.
Proof. (a) We have
$$\det B = \det(Be_1, \dots, Be_n) = \det(Ae_1, \dots, \lambda Ae_i, \dots, Ae_n) = \lambda \det(Ae_1, \dots, Ae_i, \dots, Ae_n) = \lambda \det A.$$
(b) Observe that for any $\sigma \in S_n$ we have
$$\det(Ae_{\sigma(1)}, \dots, Ae_{\sigma(n)}) = \operatorname{sign}(\sigma) \det(Ae_1, \dots, Ae_n) = \operatorname{sign}(\sigma) \det A.$$
Now observe that the columns of $B$ are
$$Be_1 = Ae_{\tau_{ij}(1)}, \ \dots, \ Be_n = Ae_{\tau_{ij}(n)},$$
and $\operatorname{sign}(\tau_{ij}) = -1$.
The above results provide an efficient method for computing determinants, because we know that by performing elementary row operations on a square matrix we can reduce it to upper triangular form.
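This reduction strategy can be sketched as follows (an illustrative implementation, not from the notes): eliminate below each pivot while tracking how each operation changes the determinant, then multiply the diagonal entries of the resulting triangular matrix.

```python
# Illustrative sketch: determinant by reduction to triangular form.
def det_by_elimination(A):
    # Gaussian elimination with partial pivoting; a row swap flips the sign
    # of the determinant (Proposition 1.16(b)), while adding a multiple of
    # one row to another leaves it unchanged (Proposition 1.16(c)).
    A = [row[:] for row in A]      # work on a copy
    n = len(A)
    det = 1.0
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        if abs(A[pivot][col]) < 1e-12:
            return 0.0             # no nonzero pivot: the matrix is singular
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            det = -det             # effect of a row switch
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= factor * A[col][c]
        det *= A[col][col]         # triangular determinant: diagonal product
    return det
```

Unlike the $n!$-term permutation sum, this runs in $O(n^3)$ arithmetic operations.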
Here is a first application of determinants.
Proposition 1.17. Suppose that $A$ is an $n \times n$ matrix with entries in $\mathbb{F}$. Then the following statements are equivalent.
(a) The matrix $A$ is invertible.
(b) $\det A \ne 0$.
Proof. A matrix $A$ is invertible if and only if by performing elementary row operations we can reduce it to an upper triangular matrix $B$ whose diagonal entries are nonzero, i.e., $\det B \ne 0$. By performing elementary row operations the determinant changes by a nonzero factor, so that
$$\det A \ne 0 \iff \det B \ne 0. \quad ⊓⊔$$
$$\det AB = \det(ABe_1, \dots, ABe_n) = \sum_{i_1, \dots, i_n = 1}^n b_{i_1 1} \cdots b_{i_n n} \det(Ae_{i_1}, \dots, Ae_{i_n}).$$
In the above sum, the only nontrivial terms correspond to choices of pairwise distinct indices $i_1, \dots, i_n$. For such a choice, the sequence $i_1, \dots, i_n$ describes a permutation $\sigma$ of $I_n$. We deduce
$$\det AB = \sum_{\sigma \in S_n} b_{\sigma(1)1} \cdots b_{\sigma(n)n} \underbrace{\det(Ae_{\sigma(1)}, \dots, Ae_{\sigma(n)})}_{= \operatorname{sign}(\sigma) \det A} = \det A \sum_{\sigma \in S_n} \operatorname{sign}(\sigma)\, b_{\sigma(1)1} \cdots b_{\sigma(n)n} = \det A \det B. \quad ⊓⊔$$
Proof. We denote by $s_{ij}$ the $(i,j)$-entry of $S$, $i, j \in I_{m+n}$. From the block description of $S$ we deduce that
$$j \le m \ \text{ and } \ i > m \implies s_{ij} = 0. \tag{1.11}$$
We have
$$\det S = \sum_{\sigma \in S_{m+n}} \operatorname{sign}(\sigma) \prod_{i=1}^{m+n} s_{\sigma(i)i}.$$
From (1.11) we deduce that in the above sum the nonzero terms correspond to permutations $\sigma \in S_{m+n}$ such that
$$\sigma(i) \le m, \quad \forall i \le m. \tag{1.12}$$
If $\sigma$ is such a permutation, then its restriction to $I_m$ is a permutation $\alpha$ of $I_m$, and its restriction to $I_{m+n} \setminus I_m$ is a permutation of this set, which we regard as a permutation $\beta$ of $I_n$. Conversely, given $\alpha \in S_m$ and $\beta \in S_n$ we obtain a permutation $\sigma = \alpha \oplus \beta \in S_{m+n}$ satisfying (1.12), given by
$$(\alpha \oplus \beta)(i) = \begin{cases} \alpha(i), & i \le m, \\ m + \beta(i - m), & i > m. \end{cases}$$
Observe that
$$\operatorname{sign}(\alpha \oplus \beta) = \operatorname{sign}(\alpha) \operatorname{sign}(\beta),$$
and we deduce
$$\det S = \sum_{\alpha \in S_m,\ \beta \in S_n} \operatorname{sign}(\alpha \oplus \beta) \prod_{i=1}^{m+n} s_{(\alpha \oplus \beta)(i) i} = \Big( \sum_{\alpha \in S_m} \operatorname{sign}(\alpha) \prod_{i=1}^m s_{\alpha(i)i} \Big) \Big( \sum_{\beta \in S_n} \operatorname{sign}(\beta) \prod_{j=1}^n s_{m+\beta(j),\, j+m} \Big) = \det A \det B. \quad ⊓⊔$$
Definition 1.22. If $A$ is an $n \times n$ matrix and $i, j \in I_n$, we denote by $A(i, j)$ the matrix obtained from $A$ by removing the $i$-th row and the $j$-th column. ⊓⊔
Corollary 1.23. Suppose that the $j$-th column of an $n \times n$ matrix $A$ is sparse, i.e., all the elements on the $j$-th column, with the possible exception of the element on the $i$-th row, are equal to zero. Then
$$\det A = (-1)^{i+j} a_{ij} \det A(i, j).$$
Proof. Observe that if $i = j = 1$ then $A$ has the block form
$$A = \begin{pmatrix} a_{11} & * \\ 0 & A(1, 1) \end{pmatrix}$$
and the result follows from Proposition 1.21.
We can reduce the general case to this special case by permuting rows and columns of $A$. If we switch the $j$-th column with the $(j-1)$-th column we can arrange that the $(j-1)$-th column is the sparse column. Iterating this procedure we deduce after $(j-1)$ such switches that the first column is the sparse column.
By performing $(i-1)$ row switches we can arrange that the nontrivial element on this sparse column is situated on the first row. Thus, after a total of $i + j - 2$ row and column switches we obtain a new matrix $A'$ with the block form
$$A' = \begin{pmatrix} a_{ij} & * \\ 0 & A(i, j) \end{pmatrix}.$$
We have
$$(-1)^{i+j} \det A = \det A' = a_{ij} \det A(i, j). \quad ⊓⊔$$
Corollary 1.24 (Row and column expansion). Fix $j \in I_n$. Then for any $n \times n$ matrix $A$ we have
$$\det A = \sum_{i=1}^n (-1)^{i+j} a_{ij} \det A(i, j) = \sum_{k=1}^n (-1)^{j+k} a_{jk} \det A(j, k).$$
The first equality is referred to as the $j$-th column expansion of $\det A$, while the second equality is referred to as the $j$-th row expansion of $\det A$.
Proof. We prove only the column expansion. The row expansion is obtained by applying the column expansion to the transpose matrix. For simplicity we assume that $j = 1$. We have
$$\det A = \det(Ae_1, Ae_2, \dots, Ae_n) = \det\Big( \sum_{i=1}^n a_{i1} e_i,\ Ae_2, \dots, Ae_n \Big) = \sum_{i=1}^n a_{i1} \det\big( e_i, Ae_2, \dots, Ae_n \big).$$
Denote by $A_i$ the matrix whose first column is the basic column vector $e_i$, and whose other columns are the corresponding columns of $A$: $Ae_2, \dots, Ae_n$. We can rewrite the last equality as
$$\det A = \sum_{i=1}^n a_{i1} \det A_i.$$
The first column of $A_i$ is sparse, and the submatrix $A_i(i, 1)$ is equal to the submatrix $A(i, 1)$. We deduce from the previous corollary that
$$\det A_i = (-1)^{i+1} \det A_i(i, 1) = (-1)^{i+1} \det A(i, 1).$$
This completes the proof of the column expansion formula. ⊓⊔
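The column expansion yields a recursive procedure for computing determinants. A minimal sketch (my own illustration, expanding along the first column with 0-based indices):

```python
# Illustrative recursive cofactor expansion along the first column:
# det A = sum_i (-1)^i * a_{i0} * det A(i, 0)   (0-based version of
# Corollary 1.24 with j = 1).
def minor(A, i, j):
    # A(i, j): remove row i and column j.
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** i * A[i][0] * det(minor(A, i, 0)) for i in range(n))
```

This is exact for integer entries, but like the permutation sum it takes exponential time; it is best suited to small matrices or matrices with sparse columns.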
Proof. Denote by $A'$ the matrix obtained from $A$ by removing the $j$-th column and replacing it with the $k$-th column of $A$. Thus, in the new matrix $A'$ the $j$-th and the $k$-th columns are identical, so that $\det A' = 0$. On the other hand, $A'(i, j) = A(i, j)$. Expanding $\det A'$ along the $j$-th column we deduce
$$0 = \det A' = \sum_{i=1}^n (-1)^{i+j} a'_{ij} \det A'(i, j) = \sum_{i=1}^n (-1)^{i+j} a_{ik} \det A(i, j). \quad ⊓⊔$$
Definition 1.26. For any $n \times n$ matrix $A$ we define the adjoint matrix $A^\dagger$ to be the $n \times n$ matrix with entries
$$a^\dagger_{ij} = (-1)^{i+j} \det A(j, i), \quad i, j \in I_n. \quad ⊓⊔$$
From Corollary 1.24 we deduce that for any $j$ we have
$$\sum_{i=1}^n a^\dagger_{ji} a_{ij} = \det A.$$
Denote by $A_j(u)$ the matrix obtained from $A$ by replacing the $j$-th column with the column vector $u$. Then
$$x_j = \frac{\det A_j(u)}{\det A}, \quad j = 1, \dots, n. \tag{1.15}$$
Proof. By expanding along the $j$-th column of $A_j(u)$ we deduce
$$\det A_j(u) = \sum_{k=1}^n (-1)^{j+k} u_k \det A(k, j). \tag{1.16}$$
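Formula (1.15) can be turned into a small solver. The sketch below is illustrative (the names `cramer_solve` and `det` are mine, not from the notes); it uses exact rational arithmetic, forms each $A_j(u)$, and divides determinants.

```python
# Illustrative sketch of Cramer's rule (1.15), with exact rationals.
from fractions import Fraction

def det(A):
    # recursive cofactor expansion along the first column
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** i * A[i][0]
               * det([[A[r][c] for c in range(n) if c != 0]
                      for r in range(n) if r != i])
               for i in range(n))

def cramer_solve(A, u):
    # x_j = det A_j(u) / det A, where A_j(u) replaces column j by u.
    n = len(A)
    d = Fraction(det(A))           # assumed nonzero (A invertible)
    xs = []
    for j in range(n):
        Aj = [[u[r] if c == j else A[r][c] for c in range(n)] for r in range(n)]
        xs.append(Fraction(det(Aj)) / d)
    return xs
```

For example, the system $2x + y = 5$, $x + 3y = 10$ yields $x = 1$, $y = 3$. In practice Gaussian elimination is cheaper; Cramer's rule is mainly of theoretical interest.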
1.6. Examples. To any list of complex numbers $(x_1, \dots, x_n)$ we associate the $n \times n$ matrix
$$V(x_1, \dots, x_n) = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ x_1 & x_2 & \cdots & x_n \\ \vdots & \vdots & \ddots & \vdots \\ x_1^{n-1} & x_2^{n-1} & \cdots & x_n^{n-1} \end{pmatrix}. \tag{1.17}$$
This matrix is called the Vandermonde matrix associated to the list of numbers $(x_1, \dots, x_n)$. We want to compute its determinant. Observe first that
$$\det V(x_1, \dots, x_n) = 0$$
if the numbers $x_1, \dots, x_n$ are not distinct. Observe next that
$$\det V(x_1, x_2) = \det \begin{pmatrix} 1 & 1 \\ x_1 & x_2 \end{pmatrix} = x_2 - x_1.$$
Consider now the $3 \times 3$ situation. We have
$$\det V(x_1, x_2, x_3) = \det \begin{pmatrix} 1 & 1 & 1 \\ x_1 & x_2 & x_3 \\ x_1^2 & x_2^2 & x_3^2 \end{pmatrix}.$$
Subtract from the 3rd row the second row multiplied by $x_1$ to deduce
$$\det V(x_1, x_2, x_3) = \det \begin{pmatrix} 1 & 1 & 1 \\ x_1 & x_2 & x_3 \\ 0 & x_2^2 - x_1 x_2 & x_3^2 - x_1 x_3 \end{pmatrix} = \det \begin{pmatrix} 1 & 1 & 1 \\ x_1 & x_2 & x_3 \\ 0 & x_2(x_2 - x_1) & x_3(x_3 - x_1) \end{pmatrix}.$$
Subtract from the 2nd row the first row multiplied by $x_1$ to deduce
$$\det V(x_1, x_2, x_3) = \det \begin{pmatrix} 1 & 1 & 1 \\ 0 & x_2 - x_1 & x_3 - x_1 \\ 0 & x_2(x_2 - x_1) & x_3(x_3 - x_1) \end{pmatrix} = \det \begin{pmatrix} x_2 - x_1 & x_3 - x_1 \\ x_2(x_2 - x_1) & x_3(x_3 - x_1) \end{pmatrix}$$
$$= (x_2 - x_1)(x_3 - x_1) \det \begin{pmatrix} 1 & 1 \\ x_2 & x_3 \end{pmatrix} = (x_2 - x_1)(x_3 - x_1) \det V(x_2, x_3) = (x_2 - x_1)(x_3 - x_1)(x_3 - x_2).$$
We can write the above equalities in a more compact form:
$$\det V(x_1, x_2) = \prod_{1 \le i < j \le 2} (x_j - x_i), \quad \det V(x_1, x_2, x_3) = \prod_{1 \le i < j \le 3} (x_j - x_i). \tag{1.18}$$
Proof. We will argue by induction on $n$. The case $n = 2$ is contained in (1.18). Assume now that (1.20) is true for $n - 1$. This means that
$$\det V(x_2, \dots, x_n) = \prod_{2 \le i < j \le n} (x_j - x_i).$$
Using this in (1.19) we deduce
$$\det V(x_1, \dots, x_n) = (x_2 - x_1) \cdots (x_n - x_1) \prod_{2 \le i < j \le n} (x_j - x_i) = \prod_{1 \le i < j \le n} (x_j - x_i). \quad ⊓⊔$$
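The product formula just proved is easy to test numerically. An illustrative check (not from the notes), comparing an exact determinant of the Vandermonde matrix with the product over pairs:

```python
# Illustrative check of det V(x_1,...,x_n) = prod_{i<j} (x_j - x_i).
def det(A):
    # recursive cofactor expansion along the first column (exact for ints)
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** i * A[i][0]
               * det([[A[r][c] for c in range(n) if c != 0]
                      for r in range(n) if r != i])
               for i in range(n))

def vandermonde(xs):
    # row i holds the i-th powers, as in (1.17)
    n = len(xs)
    return [[x ** i for x in xs] for i in range(n)]

def product_formula(xs):
    n = len(xs)
    p = 1
    for i in range(n):
        for j in range(i + 1, n):
            p *= xs[j] - xs[i]
    return p

xs = [2, 3, 5, 7]
```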
Here is a simple application of the above computation.
Corollary 1.30. If $x_1, \dots, x_n$ are distinct complex numbers, then for any complex numbers $r_1, \dots, r_n$ there exists a polynomial of degree $\le n - 1$ uniquely determined by the conditions
$$P(x_1) = r_1, \ \dots, \ P(x_n) = r_n. \tag{1.21}$$
Proof. The polynomial $P$ must have the form
$$P(x) = a_0 + a_1 x + \cdots + a_{n-1} x^{n-1},$$
where the coefficients $a_0, \dots, a_{n-1}$ are to be determined. We will do this using (1.21), which can be rewritten as a system of linear equations in which the unknowns are the coefficients $a_0, \dots, a_{n-1}$:
$$a_0 + a_1 x_1 + \cdots + a_{n-1} x_1^{n-1} = r_1$$
$$a_0 + a_1 x_2 + \cdots + a_{n-1} x_2^{n-1} = r_2$$
$$\vdots$$
$$a_0 + a_1 x_n + \cdots + a_{n-1} x_n^{n-1} = r_n.$$
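The interpolation system above can be solved directly. The following sketch is my own illustration (names like `interpolate` are not from the notes): it performs exact Gauss-Jordan elimination on the Vandermonde system and recovers the coefficients $a_0, \dots, a_{n-1}$.

```python
# Illustrative sketch: solving the system (1.21) for the coefficients of P.
from fractions import Fraction

def solve(M, rhs):
    # Gauss-Jordan elimination over the rationals (exact arithmetic);
    # assumes the matrix is invertible, as the Vandermonde matrix is
    # for distinct nodes.
    n = len(M)
    M = [[Fraction(v) for v in row] + [Fraction(b)] for row, b in zip(M, rhs)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [row[-1] for row in M]

def interpolate(xs, rs):
    # Rows of the system: a_0 + a_1 x_k + ... + a_{n-1} x_k^{n-1} = r_k.
    n = len(xs)
    M = [[x ** j for j in range(n)] for x in xs]
    return solve(M, rs)

def evaluate(coeffs, x):
    return sum(a * x ** j for j, a in enumerate(coeffs))
```

For instance, the data $(0, 1), (1, 3), (2, 9)$ yields the coefficients of $P(x) = 1 + 2x^2$.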
1.7. Exercises.
Exercise 1.1. Prove that the map in Example 1.2 is indeed a bilinear map. ⊓⊔
Exercise 1.7. (a) Show that a bilinear form $\Phi : U \times U \to \mathbb{F}$ is skew-symmetric if and only if
$$\Phi(u, u) = 0, \quad \forall u \in U.$$
Hint: Expand $\Phi(u + v, u + v)$ using the bilinearity of $\Phi$.
(b) Prove that an $n$-linear form $\omega \in T^n(U)$ is skew-symmetric if and only if for any $i \ne j$ and any vectors $u_1, \dots, u_n \in U$ such that $u_i = u_j$ we have
$$\omega(u_1, \dots, u_n) = 0.$$
Hint: Use the trick in part (a) and Exercise 1.3. ⊓⊔
Exercise 1.9. Fix complex numbers $x$ and $h$. Compute the determinant of the matrix
$$\begin{pmatrix} 1 & 1 & 0 & 0 \\ x & h & 1 & 0 \\ x^2 & hx & h & 1 \\ x^3 & hx^2 & hx & h \end{pmatrix}.$$
Exercise 1.13. Suppose that $A = (a_{ij})_{1 \le i,j \le n}$ is an $n \times n$ matrix with complex entries.
(a) Fix complex numbers $x_1, \dots, x_n$, $y_1, \dots, y_n$ and consider the $n \times n$ matrix $B$ with entries
$$b_{ij} = x_i y_j a_{ij}.$$
Show that
$$\det B = (x_1 y_1 \cdots x_n y_n) \det A.$$
Exercise 1.14. (a) Suppose we are given three sequences of numbers $a = (a_k)_{k \ge 1}$, $b = (b_k)_{k \ge 1}$ and $c = (c_k)_{k \ge 1}$. To these sequences we associate a sequence of Jacobi matrices
$$J_n = \begin{pmatrix} a_1 & b_1 & 0 & \cdots & 0 & 0 \\ c_1 & a_2 & b_2 & \cdots & 0 & 0 \\ 0 & c_2 & a_3 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & c_{n-1} & a_n \end{pmatrix}. \tag{J}$$
Show that
$$\det J_n = a_n \det J_{n-1} - b_{n-1} c_{n-1} \det J_{n-2}. \tag{1.22}$$
Hint: Expand along the last row.
(b) Suppose that above we have
$$c_k = 1, \quad b_k = 2, \quad a_k = 3, \quad \forall k \ge 1.$$
Compute $\det J_1$, $\det J_2$. Using (1.22) determine $\det J_3$, $\det J_4$, $\det J_5$, $\det J_6$, $\det J_7$. Can you detect a pattern? ⊓⊔
Exercise 1.15. Suppose we are given a sequence of polynomials with complex coefficients $(P_n(x))_{n \ge 0}$, $\deg P_n = n$ for all $n \ge 0$,
$$P_n(x) = a_n x^n + \cdots, \quad a_n \ne 0.$$
Denote by $V_n$ the space of polynomials with complex coefficients and degree $\le n$.
(a) Show that the collection $\{P_0(x), \dots, P_n(x)\}$ is a basis of $V_n$.
(b) Show that for any $x_1, \dots, x_n \in \mathbb{C}$ we have
$$\det \begin{pmatrix} P_0(x_1) & P_0(x_2) & \cdots & P_0(x_n) \\ P_1(x_1) & P_1(x_2) & \cdots & P_1(x_n) \\ \vdots & \vdots & \ddots & \vdots \\ P_{n-1}(x_1) & P_{n-1}(x_2) & \cdots & P_{n-1}(x_n) \end{pmatrix} = a_0 a_1 \cdots a_{n-1} \prod_{i<j} (x_j - x_i). \quad ⊓⊔$$
Exercise 1.16. To any polynomial $P(x) = c_0 + c_1 x + \cdots + c_{n-1} x^{n-1}$ of degree $\le n - 1$ with complex coefficients we associate the $n \times n$ circulant matrix
$$C_P = \begin{pmatrix} c_0 & c_1 & c_2 & \cdots & c_{n-2} & c_{n-1} \\ c_{n-1} & c_0 & c_1 & \cdots & c_{n-3} & c_{n-2} \\ c_{n-2} & c_{n-1} & c_0 & \cdots & c_{n-4} & c_{n-3} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ c_1 & c_2 & c_3 & \cdots & c_{n-1} & c_0 \end{pmatrix}.$$
Set
$$\rho = e^{\frac{2\pi i}{n}}, \quad i = \sqrt{-1},$$
so that $\rho^n = 1$. Consider the $n \times n$ Vandermonde matrix $V = V(1, \rho, \dots, \rho^{n-1})$ defined as in (1.17).
(a) Show that for any $j = 1, \dots, n-1$ we have
$$1 + \rho^j + \rho^{2j} + \cdots + \rho^{(n-1)j} = 0.$$
Thus,
$$f(2) = 2, \ f(3) = 3, \ f(4) = 5, \ f(5) = 8, \ f(6) = 13, \ \dots$$
Use the results (a)-(d) above to find a short formula describing $f(n)$. ⊓⊔
Exercise 1.19. Let $b, c$ be two distinct complex numbers. Consider the $n \times n$ Jacobi matrix
$$J_n = \begin{pmatrix} b+c & b & 0 & \cdots & 0 & 0 \\ c & b+c & b & \cdots & 0 & 0 \\ 0 & c & b+c & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & b+c & b \\ 0 & 0 & 0 & \cdots & c & b+c \end{pmatrix}.$$
Find a short formula for $\det J_n$.
Hint: Use the results in Exercises 1.14 and 1.18. ⊓⊔
A priori, there is no good reason for choosing the basis $e = (e_1, \dots, e_n)$ over another basis $f = (f_1, \dots, f_n)$. With respect to this new basis the operator $T$ is represented by another matrix
$$B = M(f, T) = (b_{ij})_{1 \le i,j \le n}, \quad T f_k = \sum_{j=1}^n b_{jk} f_j.$$
Thus, the $k$-th column of $C$ describes the coordinates of the vector $f_k$ in the basis $e$. Then $C$ is invertible and
$$B = C^{-1} A C. \tag{2.1}$$
The space $U$ has lots of bases, so the same operator $T$ can be represented by many different matrices. The question we want to address in this section can be loosely stated as follows.
Find bases of $U$ so that, in these bases, the operator $T$ is represented by very simple matrices.
We will not define what a very simple matrix is, but we will agree that the more zeros a matrix has, the simpler it is. We already know that we can find bases in which the operator $T$ is represented by upper triangular matrices. These have lots of zero entries, but it turns out that we can do much better than this.
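A quick numerical illustration of (2.1), not part of the notes: conjugating a matrix by an invertible $C$ leaves the determinant, and the trace of Exercise 2.2, unchanged; these are instances of the invariants discussed next.

```python
# Illustrative check (assumed 2x2 example, not from the notes) that
# B = C^{-1} A C has the same trace and determinant as A.
from fractions import Fraction

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inverse2(C):
    # explicit inverse of a 2x2 matrix with nonzero determinant
    (a, b), (c, d) = C
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2, 1], [0, 3]]
C = [[1, 1], [1, 2]]          # det C = 1, so C is invertible
B = mat_mul(mat_mul(inverse2(C), A), C)

trace = lambda M: sum(M[i][i] for i in range(len(M)))
det2 = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]
```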
The above question is closely related to the concept of invariant of a linear operator. An invariant
is roughly speaking a quantity naturally associated to the operator that does not change when we
change bases.
Definition 2.1. (a) A subspace $V \subset U$ is called an invariant subspace of the linear operator $T \in L(U)$ if
$$T v \in V, \quad \forall v \in V.$$
(b) A nonzero vector $u_0 \in U$ is called an eigenvector of the linear operator $T$ if and only if the linear subspace spanned by $u_0$ is an invariant subspace of $T$. ⊓⊔
Example 2.2. (a) Suppose that $T : U \to U$ is a linear operator. Its null space or kernel
$$\ker T := \{ u \in U ;\ T u = 0 \}$$
is an invariant subspace of $T$. Its dimension, $\dim \ker T$, is an invariant of $T$ because in its definition we have not mentioned any particular basis. We have already encountered this dimension under a different guise.
Corollary 2.3.
$$\ker T \ne 0 \iff \det T = 0. \quad ⊓⊔$$
Definition 2.5. The polynomial $P_T(x)$ is called the characteristic polynomial of the operator $T$. ⊓⊔
Recall that a number $\lambda \in \mathbb{F}$ is called an eigenvalue of the operator $T$ if and only if there exists $u \in U \setminus \{0\}$ such that $T u = \lambda u$, i.e.,
$$(\lambda \mathbb{1} - T) u = 0.$$
Thus $\lambda$ is an eigenvalue of $T$ if and only if $\ker(\lambda \mathbb{1} - T) \ne 0$. Invoking Corollary 2.3 we obtain the following important result.
Corollary 2.6. A scalar $\lambda \in \mathbb{F}$ is an eigenvalue of $T$ if and only if it is a root of the characteristic polynomial of $T$, i.e., $P_T(\lambda) = 0$. ⊓⊔
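Corollary 2.6 in the $2 \times 2$ case can be sketched as follows (an illustration of mine, not from the notes): the characteristic polynomial is $x^2 - (\operatorname{tr} T)x + \det T$, and its roots are exactly the eigenvalues.

```python
# Illustrative 2x2 sketch of Corollary 2.6.
def char_poly_2x2(T):
    # det(xI - T) = x^2 - (tr T) x + det T; return the coefficients.
    a, b = T[0]
    c, d = T[1]
    return (1, -(a + d), a * d - b * c)

def p_value(coeffs, x):
    return coeffs[0] * x * x + coeffs[1] * x + coeffs[2]

T = [[2, 1], [1, 2]]           # eigenvalues 1 and 3
coeffs = char_poly_2x2(T)      # x^2 - 4x + 3 = (x - 1)(x - 3)
```

Indeed, $T(1,1)^\top = (3,3)^\top$ and $T(1,-1)^\top = (1,-1)^\top$, matching the roots $3$ and $1$.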
Proposition 2.12. Let $T \in L(U)$, $\dim_{\mathbb{F}} U = n$, and $\lambda \in \operatorname{spec}(T)$. Then the generalized eigenspace $E_\lambda(T)$ is an invariant subspace of $T$.
Proof. We need to show that $T E_\lambda(T) \subset E_\lambda(T)$. Let $u \in E_\lambda(T)$, i.e.,
$$(\lambda \mathbb{1} - T)^n u = 0.$$
Clearly $\lambda u - T u \in \ker(\lambda \mathbb{1} - T)^{n+1} = E_\lambda(T)$. Since $u \in E_\lambda(T)$ we deduce that
$$T u = \lambda u - (\lambda u - T u) \in E_\lambda(T). \quad ⊓⊔$$
Proof. To prove (a) we will argue by induction on $n$. For $n = 1$ the result is trivially true. For the inductive step we assume that the result is true for any triangulable operator on an $(n-1)$-dimensional $\mathbb{F}$-vector space $V$, and we will prove that the same is true for triangulable operators acting on an $n$-dimensional space $U$.
Let $T \in L(U)$ be such an operator. We can then find a basis $e = (e_1, \dots, e_n)$ of $U$ such that, in this basis, the operator $T$ is represented by the upper triangular matrix
$$A = \begin{pmatrix} \lambda_1 & * & \cdots & * & * \\ 0 & \lambda_2 & \cdots & * & * \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & \lambda_{n-1} & * \\ 0 & 0 & \cdots & 0 & \lambda_n \end{pmatrix}.$$
Suppose that $\lambda \in \operatorname{spec}(T)$. For simplicity we assume $\lambda = 0$; otherwise, we carry out the discussion for the operator $T - \lambda \mathbb{1}$. Let $\nu$ be the number of times $0$ appears on the diagonal of $A$; we have to show that
$$\nu = \dim \ker T^n.$$
Denote by $V$ the subspace spanned by the vectors $e_1, \dots, e_{n-1}$. Observe that $V$ is an invariant subspace of $T$, i.e., $T V \subset V$. If we denote by $S$ the restriction of $T$ to $V$ we can regard $S$ as a linear operator $S : V \to V$.
The operator $S$ is triangulable because in the basis $(e_1, \dots, e_{n-1})$ of $V$ it is represented by the upper triangular matrix
$$B = \begin{pmatrix} \lambda_1 & * & \cdots & * \\ 0 & \lambda_2 & \cdots & * \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_{n-1} \end{pmatrix}.$$
Denote by $\nu'$ the number of times $0$ appears on the diagonal of $B$. The induction hypothesis implies that
$$\nu' = \dim \ker S^{n-1} = \dim \ker S^n.$$
Clearly $\nu' \le \nu$. Note that
$$\ker S^n \subset \ker T^n,$$
so that
$$\nu' = \dim \ker S^n \le \dim \ker T^n.$$
We distinguish two cases.
Proof. Set
$$v_n := T e_n.$$
Observe that $v_n \in V$. From (2.3b) we deduce that $R(S^{n-1}) = R(S^n)$, so that there exists $v' \in V$ such that
$$S^{n-1} v_n = S^n v'.$$
Set $u := e_n - v'$. Note that $u \in U \setminus V$,
$$T u = v_n - T v' = v_n - S v',$$
$$T^n u = T^{n-1}(v_n - S v') = S^{n-1} v_n - S^n v' = 0. \quad ⊓⊔$$
This proves (a). The remaining equalities (2.4b), (2.4c) follow easily from (a). ⊓⊔
In the remainder of this section we will assume that $\mathbb{F}$ is the field of complex numbers, $\mathbb{C}$.
Suppose that $U$ is a complex vector space and $T \in L(U)$ is a linear operator. We already know that $T$ is triangulable, and we deduce from the above theorem the following important consequence.
Corollary 2.15. Suppose that $T$ is a linear operator on the complex vector space $U$. Then
$$\det T = \prod_{\lambda \in \operatorname{spec}(T)} \lambda^{m_\lambda(T)}, \quad P_T(x) = \prod_{\lambda \in \operatorname{spec}(T)} (x - \lambda)^{m_\lambda(T)},$$
$$\dim U = \sum_{\lambda \in \operatorname{spec}(T)} m_\lambda(T). \quad ⊓⊔$$
Proof. Fix a basis $e = (e_1, \dots, e_n)$ in which $T$ is represented by the upper triangular matrix
$$A = \begin{pmatrix} \lambda_1 & * & \cdots & * \\ 0 & \lambda_2 & \cdots & * \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}.$$
Note that
$$P_T(x) = \det(x \mathbb{1} - T) = \prod_{j=1}^n (x - \lambda_j),$$
so that
$$P_T(T) = \prod_{j=1}^n (T - \lambda_j \mathbb{1}).$$
For $j = 1, \dots, n$ we define
$$U_j := \operatorname{span}\{e_1, \dots, e_j\},$$
and we set $U_0 = \{0\}$. Note that for any $j = 1, \dots, n$ we have
$$(T - \lambda_j \mathbb{1}) U_j \subset U_{j-1}.$$
Thus
$$P_T(T) U = \prod_{j=1}^n (T - \lambda_j \mathbb{1}) U_n = \prod_{j=1}^{n-1} (T - \lambda_j \mathbb{1}) (T - \lambda_n \mathbb{1}) U_n \subset \prod_{j=1}^{n-1} (T - \lambda_j \mathbb{1}) U_{n-1} \subset \prod_{j=1}^{n-2} (T - \lambda_j \mathbb{1}) U_{n-2} \subset \cdots \subset (T - \lambda_1 \mathbb{1}) U_1 \subset \{0\}.$$
In other words,
$$P_T(T) u = 0, \quad \forall u \in U. \quad ⊓⊔$$
2.4. The Jordan normal form of a complex operator. Let $U$ be a complex $n$-dimensional vector space and $T : U \to U$ a linear operator. For each eigenvalue $\lambda \in \operatorname{spec}(T)$ we denote by $E_\lambda(T)$ the corresponding generalized eigenspace, i.e.,
$$u \in E_\lambda(T) \iff \exists k > 0 :\ (T - \lambda \mathbb{1})^k u = 0.$$
From Proposition 2.12 we know that $E_\lambda(T)$ is an invariant subspace of $T$. Suppose that the spectrum of $T$ consists of $\ell$ distinct eigenvalues,
$$\operatorname{spec}(T) = \{\lambda_1, \dots, \lambda_\ell\}.$$
Proposition 2.18.
$$U = E_{\lambda_1}(T) \oplus \cdots \oplus E_{\lambda_\ell}(T).$$
$$(\lambda \mathbb{1} - S_\lambda)^{m_\lambda(T)} u = 0,$$
i.e.,
$$(S_\lambda - \lambda \mathbb{1})^{m_\lambda(T)} = (-1)^{m_\lambda(T)} (\lambda \mathbb{1} - S_\lambda)^{m_\lambda(T)} = 0.$$
In Figure 1 we depicted a tower of height 4. Observe that the vectors in a tower are generalized eigenvectors of the corresponding nilpotent operator.
[FIGURE 1. A tower of height 4: $u_1 = N u_2$, $u_2 = N u_3$, $u_3 = N u_4$.]
[FIGURE 2. A group of 5 towers $T_1, \dots, T_5$. The tops are shaded and about to be removed.]
The inductive assumption implies that the towers $T'_1, \dots, T'_r$ are mutually disjoint and their union $T'$ is a linearly independent family of vectors. Hence $T'$ is a basis of $R(S)$, and thus
$$\dim R(S) = k' = k - r.$$
On the other hand,
$$S b_j = N b_j = 0, \quad j = 1, \dots, r.$$
Since the vectors $b_1, \dots, b_r$ are linearly independent we deduce that
$$\dim \ker S \ge r.$$
From the rank-nullity theorem we deduce that
$$\dim V = \dim \ker S + \dim R(S) \ge r + (k - r) = k.$$
On the other hand,
$$\dim V = \dim \operatorname{span}(T) \le |T| \le k.$$
Hence
$$|T| = \dim V = k.$$
This proves that the towers $T_1, \dots, T_r$ are mutually disjoint and their union is linearly independent. ⊓⊔
Theorem 2.22 (Jordan normal form of a nilpotent operator). Let N : U U be a nilpotent operator
on an n-dimensional complex vector space U . Then U has a basis consisting of a disjoint union of
towers of N .
Proof. We will argue by induction on the dimension n of U . For n = 1 the result is trivially true. We
assume that the result is true for any nilpotent operator on a space of dimension < n and we prove it
is true for any nilpotent operator N on a space U of dimension n.
[FIGURE 3. The towers $T_1, \dots, T_r$ extended by the vectors $u_1, \dots, u_r$, together with the new one-element towers $\{v_1\}, \{v_2\}, \dots$ at the bottom level $\ker N$.]
Next observe that the bottoms $b_1, \dots, b_r$ belong to $\ker N$ and are linearly independent, because they are a subfamily of the linearly independent family $T_1 \cup \cdots \cup T_r$. We can therefore extend the family $\{b_1, \dots, b_r\}$ to a basis of $\ker N$,
$$b_1, \dots, b_r, v_1, \dots, v_s, \quad r + s = \dim \ker N.$$
We obtain new towers $\widetilde{T}_1, \dots, \widetilde{T}_r, \widetilde{T}_{r+1}, \dots, \widetilde{T}_{r+s}$ defined by (see Figure 3)
$$\widetilde{T}_1 := T_1 \cup \{u_1\}, \ \dots, \ \widetilde{T}_r := T_r \cup \{u_r\}, \quad \widetilde{T}_{r+1} := \{v_1\}, \ \dots, \ \widetilde{T}_{r+s} := \{v_s\}.$$
The sum of the heights of these towers is
$$(k_1 + 1) + (k_2 + 1) + \cdots + (k_r + 1) + \underbrace{1 + \cdots + 1}_{s} = \underbrace{(k_1 + \cdots + k_r)}_{= \dim R(N)} + \underbrace{(r + s)}_{= \dim \ker N} = \dim U.$$
By construction, their bottoms are linearly independent and Proposition 2.21 implies that they are mutually disjoint and their union is a linearly independent collection of vectors. The above computation shows that the number of elements in the union of these towers is equal to the dimension of $U$. Thus, this union is a basis of $U$. ⊓⊔
Example 2.24. (a) Suppose that the nilpotent operator $N : U \to U$ admits a Jordan basis consisting of a single tower
$$e_1, \dots, e_n.$$
Denote by $C_n$ the matrix representing $N$ in this basis. We use this basis to identify $U$ with $\mathbb{C}^n$ and thus
$$e_1 = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad e_2 = \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix}, \ \dots, \ e_n = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix}.$$
From the equalities
$$N e_1 = 0, \quad N e_2 = e_1, \quad N e_3 = e_2, \ \dots$$
we deduce that the first column of $C_n$ is trivial, the second column is $e_1$, the 3rd column is $e_2$, etc. Thus $C_n$ is the $n \times n$ matrix
$$C_n = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & 0 & \cdots & 0 & 0 \end{pmatrix}.$$
The matrix $C_n$ is called a (nilpotent) Jordan cell of size $n$.
(b) Suppose that the nilpotent operator $N : U \to U$ admits a Jordan basis consisting of mutually disjoint towers $T_1, \dots, T_r$ of heights $k_1, \dots, k_r$. For $j = 1, \dots, r$ we set
$$U_j = \operatorname{span}(T_j).$$
Observe that $U_j$ is an invariant subspace of $N$, $T_j$ is a basis of $U_j$, and we have a direct sum decomposition
$$U = U_1 \oplus \cdots \oplus U_r.$$
The restriction of $N$ to $U_j$ is represented in the basis $T_j$ by the Jordan cell $C_{k_j}$, so that in the basis $T = T_1 \cup \cdots \cup T_r$ the operator $N$ has the block-matrix description
$$\begin{pmatrix} C_{k_1} & 0 & \cdots & 0 \\ 0 & C_{k_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & C_{k_r} \end{pmatrix}. \quad ⊓⊔$$
We want to point out that the sizes of the Jordan cells correspond to the heights of the towers in a Jordan basis. While there may be several Jordan bases, the heights of the towers are the same in all of them; see Remark 2.26. In other words, these heights are invariants of the nilpotent operator. ⊓⊔
Definition 2.25. The Jordan invariant of a nilpotent operator $N$ is the nonincreasing list of the sizes of the Jordan cells that describe the operator in a Jordan basis. ⊓⊔
Remark 2.26 (Algorithmic construction of a Jordan basis). Here is how one constructs a Jordan basis of a nilpotent operator $N : U \to U$ on a complex vector space $U$ of dimension $n$.
(i) Compute $N^2, N^3, \dots$ and stop at the moment $m$ when $N^m = 0$. Set
$$R_0 = U, \ R_1 = R(N), \ R_2 = R(N^2), \ \dots, \ R_m = R(N^m) = \{0\}.$$
Observe that $R_1, R_2, \dots, R_m$ are invariant subspaces of $N$, satisfying
$$R_0 \supset R_1 \supset R_2 \supset \cdots.$$
(ii) Denote by $N_k$ the restriction of $N$ to $R_k$, viewed as an operator $N_k : R_k \to R_k$. Note that $N_{m-1} = 0$, $N_0 = N$ and
$$R_k = R(N_{k-1}), \quad k = 1, \dots, m.$$
Set $r_k = \dim R_k$, so that $\dim \ker N_k = r_k - r_{k+1}$.
(iii) Construct a basis $B_{m-1}$ of $R_{m-1} = \ker N_{m-1}$. $B_{m-1}$ consists of $r_{m-1}$ vectors.
(iv) For each $b \in B_{m-1}$ find a vector $t(b) \in U$ such that
$$b = N^{m-1} t(b).$$
For each $b \in B_{m-1}$ we thus obtain a tower of height $m$,
$$T_{m-1}(b) = \big\{ b = N^{m-1} t(b),\ N^{m-2} t(b),\ \dots,\ N t(b),\ t(b) \big\}, \quad b \in B_{m-1}.$$
(v) For uniformity set $C_{m-1} = B_{m-1}$, and for any $j = 1, \dots, m$ denote by $c_j$ the cardinality of $C_{j-1}$. In the above Jordan basis the operator $N$ will be a direct sum of $c_1$ cells of dimension 1, $c_2$ cells of dimension 2, etc. We obtain the identities
$$n = c_1 + 2 c_2 + \cdots + m c_m,$$
$$r_1 = c_2 + 2 c_3 + \cdots + (m-1) c_m, \quad r_2 = c_3 + \cdots + (m-2) c_m,$$
and in general
$$r_j = c_{j+1} + 2 c_{j+2} + \cdots + (m - j) c_m, \quad j = 0, \dots, m-1, \tag{2.8}$$
where $r_0 = n$.
If we treat the equalities (2.8) as a linear system with unknowns $c_1, \dots, c_m$, we see that the matrix of this system is upper triangular with only 1's along the diagonal. It is thus invertible, so that the numbers $c_1, \dots, c_m$ are uniquely determined by the numbers $r_j$, which are invariants of the operator $N$. This shows that the sizes of the Jordan cells are independent of the chosen Jordan basis.
If you are interested only in the sizes of the Jordan cells, all you have to do is find the integers $m$, $r_1, \dots, r_{m-1}$ and then solve the system (2.8). Exercise 2.13 explains how to explicitly express the $c_j$'s in terms of the $r_j$'s. ⊓⊔
Remark 2.27. To find the Jordan invariant of a nilpotent operator $N$ on a complex $n$-dimensional space proceed as follows.
(i) Find the smallest integer $m$ such that $N^m = 0$.
(ii) Find the ranks $r_j$ of the matrices $N^j$, $j = 0, \dots, m-1$, where $N^0 := \mathbb{1}$.
(iii) Find the nonnegative integers $c_1, \dots, c_m$ by solving the linear system (2.8).
(iv) The Jordan invariant of $N$ is the list
$$\big( \underbrace{m, \dots, m}_{c_m},\ \underbrace{m-1, \dots, m-1}_{c_{m-1}},\ \dots,\ \underbrace{1, \dots, 1}_{c_1} \big). \quad ⊓⊔$$
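The recipe of Remark 2.27 can be sketched in code (an illustration of mine, assuming the input matrix is genuinely nilpotent). Differencing the system (2.8) twice gives the explicit solution $c_j = r_{j-1} - 2 r_j + r_{j+1}$, which the sketch uses directly.

```python
# Illustrative sketch of Remark 2.27 (assumes N is nilpotent).
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def rank(M):
    # rank via row reduction (floating point, adequate for small integers)
    M = [[float(v) for v in row] for row in M]
    n, rk = len(M), 0
    for col in range(n):
        piv = next((r for r in range(rk, n) if abs(M[r][col]) > 1e-9), None)
        if piv is None:
            continue
        M[rk], M[piv] = M[piv], M[rk]
        for r in range(n):
            if r != rk:
                f = M[r][col] / M[rk][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[rk])]
        rk += 1
    return rk

def jordan_invariant(N):
    n = len(N)
    ranks = [n]                  # r_0 = rank of N^0 = identity
    P = N
    while ranks[-1] > 0:         # terminates because N is nilpotent
        ranks.append(rank(P))
        P = mat_mul(P, N)
    m = len(ranks) - 1           # smallest m with N^m = 0
    ranks.append(0)              # r_{m+1} = 0
    c = [ranks[j - 1] - 2 * ranks[j] + ranks[j + 1] for j in range(1, m + 1)]
    sizes = []                   # c[j-1] cells of size j, largest first
    for size in range(m, 0, -1):
        sizes += [size] * c[size - 1]
    return sizes
```

For the nilpotent matrix $N$ of the example below (one tower of height 2 and two of height 1) this returns $(2, 1, 1)$.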
Example. Consider the matrix
    A = [ 1  0  0  0
          1 −1  1  0
          2 −4  3  0
          0  0  0  1 ],
viewed as a linear operator C^4 → C^4.
Expanding along the first row and then along the last column we deduce that
    P_A(x) = det(xI − A) = (x − 1)^2 det [ x + 1  −1
                                            4    x − 3 ] = (x − 1)^2 ( (x + 1)(x − 3) + 4 ) = (x − 1)^4.
Thus A has a single eigenvalue λ = 1, which has multiplicity 4. Set
    N := A − I = [ 0  0  0  0
                   1 −2  1  0
                   2 −4  2  0
                   0  0  0  0 ].
The matrix N is nilpotent. In fact we have
    N^2 = 0.
Upon inspecting N we see that each of its columns is a multiple of the first column. This means that the range of N is spanned by the vector
    u_1 := N e_1 = (0, 1, 2, 0)^T,
where e_1, . . . , e_4 denotes the canonical basis of C^4.
The vector u_1 is a tower in R(N) which we can extend to a taller tower of N,
    T_1 = (u_1, u_2),  u_2 = e_1.
Next, we need to extend the basis {u_1} of R(N) to a basis of ker N. The rank–nullity theorem tells us that
    dim R(N) + dim ker N = 4,
so that dim ker N = 3. Thus, we need to find two more vectors v_1, v_2 so that the collection {u_1, v_1, v_2} is a basis of ker N.
To find ker N we need to solve the linear system
    N x = 0,  x = (x_1, x_2, x_3, x_4)^T,
which we do using Gauss elimination, i.e., row operations on N. Observe that
    [ 0  0  0  0        [ 1 −2  1  0        [ 1 −2  1  0
      1 −2  1  0    →     2 −4  2  0    →     0  0  0  0
      2 −4  2  0          0  0  0  0          0  0  0  0
      0  0  0  0 ]        0  0  0  0 ]        0  0  0  0 ].
Hence
    ker N = { x ∈ C^4 ; x_1 − 2x_2 + x_3 = 0 }.
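The bookkeeping of this example can be checked numerically. A sketch assuming numpy; the sign pattern of N below is the one consistent with the kernel equation x_1 − 2x_2 + x_3 = 0 (minus signs are easily lost in plain-text transcriptions):

```python
import numpy as np

# N = A - I with the signs restored; A = I + N
N = np.array([[0.0,  0.0, 0.0, 0.0],
              [1.0, -2.0, 1.0, 0.0],
              [2.0, -4.0, 2.0, 0.0],
              [0.0,  0.0, 0.0, 0.0]])
A = np.eye(4) + N

assert np.allclose(N @ N, 0)                 # N is nilpotent: N^2 = 0
assert np.allclose(np.linalg.eigvals(A), 1)  # P_A(x) = (x - 1)^4
assert np.linalg.matrix_rank(N) == 1         # R(N) is spanned by u1 = N e1
u1 = N[:, 0]                                 # u1 = (0, 1, 2, 0)^T

# ker N = { x : x1 - 2 x2 + x3 = 0 }; spot-check one such vector
x = np.array([1.0, 1.0, 1.0, 5.0])
assert np.allclose(N @ x, 0)
```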
40 LIVIU I. NICOLAESCU
2.5. Exercises.
Exercise 2.1. Denote by U_3 the space of polynomials of degree ≤ 3 with real coefficients in one variable x. We denote by B the canonical basis of U_3,
    B = { 1, x, x^2, x^3 }.
Exercise 2.2. (a) For any n × n matrix A = (a_ij)_{1≤i,j≤n} we define the trace of A as the sum of the diagonal entries of A,
    tr A := a_11 + · · · + a_nn = Σ_{i=1}^n a_ii.
Show that if A, B are two n × n matrices then
    tr AB = tr BA.
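Part (a) can be sanity-checked numerically before attempting a proof; a small sketch assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# the trace is invariant under swapping the two factors of a product
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
```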
(b) Let U be an n-dimensional F-vector space, and e, f be two bases of U. Suppose that T : U → U is a linear operator represented in the basis e by the matrix A and in the basis f by the matrix B. Prove that
    tr A = tr B.
(The common value of these traces is called the trace of the operator T and it is denoted by tr T.) Hint: Use part (a) and (2.1).
(c) Consider the operator A : F^2 → F^2 described by the matrix
    A = [ a_11  a_12
          a_21  a_22 ].
Show that
    P_A(x) = x^2 − (tr A) x + det A.
(d) Let T be a linear operator on an n-dimensional F-vector space. Prove that the characteristic polynomial of T has the form
    P_T(x) = det(x·1 − T) = x^n − (tr T) x^{n−1} + · · · + (−1)^n det T. □
Exercise 2.3. Suppose T : U → U is a linear operator on the F-vector space U, and V_1, V_2 ⊂ U are invariant subspaces of T. Show that V_1 ∩ V_2 and V_1 + V_2 are also invariant subspaces of T. □
Exercise 2.5. Consider the n × n matrix A = (a_ij)_{1≤i,j≤n}, where a_ij = 1 for all i, j, which we regard as a linear operator C^n → C^n. Consider the vector
    c = (1, 1, . . . , 1)^T ∈ C^n.
(a) Compute Ac.
(b) Compute dim R(A), dim ker A and then determine spec(A).
(c) Find the characteristic polynomial of A. □
Exercise 2.6. Find the eigenvalues and the eigenvectors of the circulant matrix described in Exercise 1.16. □
Exercise 2.7. Prove the equality (2.7) that appears in the proof of Proposition 2.21. □
Exercise 2.8. Let T : U → U be a linear operator on the finite dimensional complex vector space U. Suppose that m is a positive integer and u ∈ U is a vector such that
    T^{m−1} u ≠ 0 but T^m u = 0.
Show that the vectors
    u, T u, . . . , T^{m−1} u
are linearly independent. □
Exercise 2.9. Let T : U → U be a linear operator on the finite dimensional complex vector space U. Show that if
    dim ker T^{dim U − 1} ≠ dim ker T^{dim U},
then T is a nilpotent operator and
    dim ker T^j = j,  j = 1, . . . , dim U.
Can you construct an example of an operator T satisfying the above properties? □
LINEAR ALGEBRA 43
Exercise 2.10. Let S, T be two linear operators on the finite dimensional complex vector space U. Show that ST is nilpotent if and only if TS is nilpotent. □
Exercise 2.11. Find a Jordan basis of the linear operator C^4 → C^4 described by the 4 × 4 matrix
    A = [ 1  0  1  1
          0  1  1  1
          0  0  1  0
          0  0  0  1 ]. □
Exercise 2.12. Find a Jordan basis of the linear operator C^7 → C^7 given by the matrix
    A = [ 3  2  1  1  0  3  1
          1  2  0  0  1  1  1
          1  1  3  0  6  5  1
          0  2  1  4  1  3  1
          0  0  0  0  3  0  0
          0  0  0  0  2  5  0
          0  0  0  0  2  0  5 ]. □
Exercise 2.13. (a) Find the inverse of the m × m matrix
    A = [ 1  2  3  · · ·  m−1  m
          0  1  2  · · ·  m−2  m−1
          ⋮  ⋮  ⋮        ⋮    ⋮
          0  0  0  · · ·  1    2
          0  0  0  · · ·  0    1 ].
(b) Find the Jordan normal form of the above matrix. □
3. E UCLIDEAN SPACES
In the sequel F will denote either the field R of real numbers, or the field C of complex numbers. Any complex number has the form z = a + bi, i = √−1, so that i^2 = −1. The real number a is called the real part of z and it is denoted by Re z. The real number b is called the imaginary part of z and it is denoted by Im z.
The conjugate of a complex number z = a + bi is the complex number
    z̄ = a − bi.
In particular, any real number is equal to its conjugate. Note that
    z + z̄ = 2 Re z,  z − z̄ = 2i Im z.
The norm or absolute value of a complex number z = a + bi is the real number
    |z| = √(a^2 + b^2).
Observe that
    |z|^2 = a^2 + b^2 = (a + bi)(a − bi) = z z̄.
In particular, if z ≠ 0 we have
    1/z = (1/|z|^2) z̄.
Example 3.2. (a) The standard real n-dimensional Euclidean space. The vector space R^n is equipped with a canonical inner product
    ⟨−, −⟩ : R^n × R^n → R.
More precisely, if
    u = (u_1, u_2, . . . , u_n)^T,  v = (v_1, v_2, . . . , v_n)^T ∈ R^n,
then
    ⟨u, v⟩ = u_1 v_1 + · · · + u_n v_n = Σ_{k=1}^n u_k v_k = u · v.
You can verify that this is indeed an inner product, i.e., it satisfies the conditions (i)-(iv) in Definition
3.1.
(b) The standard complex n-dimensional Euclidean space. The vector space C^n is equipped with a canonical inner product
    ⟨−, −⟩ : C^n × C^n → C.
More precisely, if
    u = (u_1, u_2, . . . , u_n)^T,  v = (v_1, v_2, . . . , v_n)^T ∈ C^n,
then
    ⟨u, v⟩ = u_1 v̄_1 + · · · + u_n v̄_n = Σ_{k=1}^n u_k v̄_k.
You can verify that this is indeed an inner product.
(c) Denote by P_n the vector space of polynomials with real coefficients and degree ≤ n. We can define an inner product on P_n,
    ⟨−, −⟩ : P_n × P_n → R,
by setting
    ⟨p, q⟩ = ∫_0^1 p(x) q(x) dx,  p, q ∈ P_n.
You can verify that this is indeed an inner product.
(d) Any finite dimensional F-vector space U admits an inner product. Indeed, if we fix a basis (e_1, . . . , e_n) of U, then we can define
    ⟨−, −⟩ : U × U → F
by setting
    ⟨u_1 e_1 + · · · + u_n e_n, v_1 e_1 + · · · + v_n e_n⟩ = u_1 v̄_1 + · · · + u_n v̄_n. □
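The inner products of parts (a), (b), (c) can be evaluated numerically; a small sketch assuming numpy (note the conjugation on the second slot in the complex case):

```python
import numpy as np

# (a) real case: <u, v> = u1 v1 + ... + un vn
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -1.0, 2.0])
assert np.dot(u, v) == 8.0

# (b) complex case: <u, v> = sum_k u_k * conj(v_k)
w = np.array([1 + 1j, 2j])
z = np.array([1j, 1.0])
inner = np.sum(w * np.conj(z))
assert inner == 1 + 1j

# (c) polynomial case: <p, q> = integral_0^1 p(x) q(x) dx, here p = x, q = x^2
p = np.polynomial.Polynomial([0, 1])      # p(x) = x
q = np.polynomial.Polynomial([0, 0, 1])   # q(x) = x^2
pq_int = (p * q).integ()                  # antiderivative of x^3
assert abs((pq_int(1.0) - pq_int(0.0)) - 0.25) < 1e-12
```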
3.2. Basic properties of Euclidean spaces. Suppose that (U, ⟨−, −⟩) is an Euclidean vector space. We define the norm or length of a vector u ∈ U to be the nonnegative number
    ‖u‖ := √⟨u, u⟩.
Example 3.3. In the standard Euclidean space R^n of Example 3.2(a) we have
    ‖u‖ = √( Σ_{k=1}^n u_k^2 ). □
Proof. If ⟨u, v⟩ = 0, then the inequality is trivially satisfied. Hence we can assume that ⟨u, v⟩ ≠ 0. In particular, u, v ≠ 0.
From the positive definiteness of the inner product we deduce that for any x ∈ F we have
    0 ≤ ‖u − xv‖^2 = ⟨u − xv, u − xv⟩ = ⟨u, u − xv⟩ − x⟨v, u − xv⟩
      = ⟨u, u⟩ − x̄⟨u, v⟩ − x⟨v, u⟩ + x x̄⟨v, v⟩
      = ‖u‖^2 − x̄⟨u, v⟩ − x⟨v, u⟩ + |x|^2 ‖v‖^2.
If we let
    x_0 = (1/‖v‖^2) ⟨u, v⟩,    (3.1)
then
    x̄_0⟨u, v⟩ + x_0⟨v, u⟩ = ( ⟨v, u⟩⟨u, v⟩ + ⟨u, v⟩⟨v, u⟩ ) / ‖v‖^2 = 2 |⟨u, v⟩|^2 / ‖v‖^2,
    |x_0|^2 ‖v‖^2 = |⟨u, v⟩|^2 / ‖v‖^2,
and thus
    0 ≤ ‖u − x_0 v‖^2 = ‖u‖^2 − 2 |⟨u, v⟩|^2/‖v‖^2 + |⟨u, v⟩|^2/‖v‖^2 = ‖u‖^2 − |⟨u, v⟩|^2/‖v‖^2.    (3.2)
Thus
    |⟨u, v⟩|^2/‖v‖^2 ≤ ‖u‖^2,
so that
    |⟨u, v⟩|^2 ≤ ‖u‖^2 ‖v‖^2.
Note that if 0 = |⟨u, v⟩| = ‖u‖ ‖v‖, then at least one of the vectors u, v must be zero.
If 0 ≠ |⟨u, v⟩| = ‖u‖ ‖v‖, then by choosing x_0 as in (3.1) we deduce as in (3.2) that
    ‖u − x_0 v‖ = 0.
Hence u = x_0 v. □
Remark 3.5. The Cauchy–Schwarz theorem is a rather nontrivial result, which in skilled hands can produce remarkable consequences. Observe that if U is the standard real Euclidean space of Example 3.2(a), then the Cauchy–Schwarz inequality implies that for any real numbers u_1, v_1, . . . , u_n, v_n we have
    | Σ_{k=1}^n u_k v_k | ≤ √( Σ_{k=1}^n u_k^2 ) · √( Σ_{k=1}^n v_k^2 ).
If we square both sides of the above inequality we deduce
    ( Σ_{k=1}^n u_k v_k )^2 ≤ ( Σ_{k=1}^n u_k^2 ) ( Σ_{k=1}^n v_k^2 ).    (3.3)
□
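The inequality (3.3) is easy to probe numerically; a quick sketch assuming numpy, including the equality case for proportional vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal(5)
v = rng.standard_normal(5)

# (sum u_k v_k)^2 <= (sum u_k^2)(sum v_k^2)
lhs = np.dot(u, v) ** 2
rhs = np.dot(u, u) * np.dot(v, v)
assert lhs <= rhs

# equality holds when the vectors are proportional, e.g. v = 2u
assert np.isclose(np.dot(u, 2 * u) ** 2, np.dot(u, u) * np.dot(2 * u, 2 * u))
```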
Observe that if u, v are two nonzero vectors in an Euclidean space (U, ⟨−, −⟩), then the Cauchy–Schwarz inequality implies that
    Re⟨u, v⟩ / ( ‖u‖ ‖v‖ ) ∈ [−1, 1].
Thus there exists a unique θ ∈ [0, π] such that
    cos θ = Re⟨u, v⟩ / ( ‖u‖ ‖v‖ ).
This angle is called the angle between the two nonzero vectors u, v. We denote it by ∠(u, v). In particular, we have, by definition,
    cos ∠(u, v) = Re⟨u, v⟩ / ( ‖u‖ ‖v‖ ).    (3.4)
Note that if the two vectors u, v were perpendicular in the classical sense, i.e., ∠(u, v) = π/2, then Re⟨u, v⟩ = 0. This justifies the following notion.
Definition 3.6. Two vectors u, v in an Euclidean vector space (U, ⟨−, −⟩) are said to be orthogonal if ⟨u, v⟩ = 0. We will indicate the orthogonality of two vectors u, v using the notation u ⊥ v. □
= ‖u‖^2 + ‖v‖^2. □
3.3. Orthonormal systems and the Gram–Schmidt procedure. In the sequel U will denote an n-dimensional Euclidean F-vector space. We will denote the inner product on U by ⟨−, −⟩.
Definition 3.9. A family of nonzero vectors
    u_1, . . . , u_k ∈ U
is called orthogonal if
    u_i ⊥ u_j,  ∀i ≠ j.
An orthogonal family
    u_1, . . . , u_k ∈ U
is called orthonormal if
    ‖u_i‖ = 1,  ∀i = 1, . . . , k.
A basis of U is called orthogonal (respectively orthonormal) if it is an orthogonal (respectively orthonormal) family. □
be such a family. The induction assumption implies that we can find an orthonormal system
    e_1, . . . , e_{k−1}
such that
    span{u_1, . . . , u_j} = span{e_1, . . . , e_j},  j = 1, . . . , k − 1.
Define
    v_k := ⟨u_k, e_1⟩e_1 + · · · + ⟨u_k, e_{k−1}⟩e_{k−1},  f_k := u_k − v_k.
Observe that
    v_k ∈ span{e_1, . . . , e_{k−1}} = span{u_1, . . . , u_{k−1}}.
Since {u_1, . . . , u_k} is a linearly independent family we deduce that
    u_k ∉ span{e_1, . . . , e_{k−1}},
so that f_k = u_k − v_k ≠ 0. We can now set
    e_k := (1/‖f_k‖) f_k.
By construction ‖e_k‖ = 1. Also note that if 1 ≤ j < k, then
    ⟨f_k, e_j⟩ = ⟨u_k − v_k, e_j⟩ = ⟨u_k, e_j⟩ − ⟨v_k, e_j⟩
      = ⟨u_k, e_j⟩ − ⟨ ⟨u_k, e_1⟩e_1 + · · · + ⟨u_k, e_{k−1}⟩e_{k−1}, e_j ⟩
      = ⟨u_k, e_j⟩ − ⟨u_k, e_j⟩⟨e_j, e_j⟩ = 0.
This proves that {e_1, . . . , e_{k−1}, e_k} is an orthonormal family.
Finally observe that
    u_k = v_k + f_k = v_k + ‖f_k‖ e_k.
Since
    v_k ∈ span{e_1, . . . , e_{k−1}}
we deduce
    u_k ∈ span{e_1, . . . , e_{k−1}, e_k},
and thus
    span{u_1, . . . , u_k} = span{e_1, . . . , e_k}. □
Remark 3.12. The strategy used in the proof of the above theorem is as important as the theorem itself. The procedure we used to produce the orthonormal family {e_1, . . . , e_k} goes by the name of the Gram–Schmidt procedure. To understand how it works we consider a simple case, when U is the space R^2 equipped with the canonical inner product.
Suppose that
    u_1 = (3, 4)^T,  u_2 := (5, 0)^T.
Then
    ‖u_1‖^2 = 3^2 + 4^2 = 9 + 16 = 25,
and we set
    e_1 := (1/‖u_1‖) u_1 = (3/5, 4/5)^T.
Next
    v_2 = ⟨u_2, e_1⟩e_1 = 3e_1 = (9/5, 12/5)^T,
    f_2 = u_2 − v_2 = (5, 0)^T − (9/5, 12/5)^T = (16/5, −12/5)^T = 4 · (4/5, −3/5)^T.
We see that ‖f_2‖ = 4 and thus
    e_2 = (4/5, −3/5)^T. □
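The computation of Remark 3.12 can be sketched as a short routine (assuming numpy; this is the textbook procedure, not a numerically robust orthogonalization):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent family, as in Theorem 3.11."""
    basis = []
    for u in vectors:
        v = sum(np.dot(u, e) * e for e in basis)   # projection onto span(basis)
        f = u - v
        basis.append(f / np.linalg.norm(f))
    return basis

u1 = np.array([3.0, 4.0])
u2 = np.array([5.0, 0.0])
e1, e2 = gram_schmidt([u1, u2])
print(e1, e2)   # [0.6 0.8] [ 0.8 -0.6]
```

This reproduces e_1 = (3/5, 4/5) and e_2 = (4/5, −3/5) from the remark.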
The Gram–Schmidt theorem has many useful consequences. We will discuss a few of them.
Corollary 3.13. Any finite dimensional Euclidean vector space (over F = R, C) admits an orthonor-
mal basis.
Proof. Apply Theorem 3.11 to a basis of the vector space. □
The orthonormal bases of an Euclidean space have certain computational advantages. Suppose that
e1 , . . . , en
is an orthonormal basis of the Euclidean space U. Then the coordinates of a vector u ∈ U in this basis are easily computed. More precisely, if
    u = x_1 e_1 + · · · + x_n e_n,  x_1, . . . , x_n ∈ F,    (3.5)
then
    x_j = ⟨u, e_j⟩,  ∀j = 1, . . . , n.    (3.6)
Indeed, the equality (3.5) implies that
    ⟨u, e_j⟩ = ⟨x_1 e_1 + · · · + x_n e_n, e_j⟩ = x_j⟨e_j, e_j⟩ = x_j,
where at the last step we used the orthonormality condition, which translates to
    ⟨e_i, e_j⟩ = 1 if i = j,  ⟨e_i, e_j⟩ = 0 if i ≠ j.
Applying Pythagoras' theorem we deduce
    ‖x_1 e_1 + · · · + x_n e_n‖^2 = |x_1|^2 + · · · + |x_n|^2 = |⟨u, e_1⟩|^2 + · · · + |⟨u, e_n⟩|^2.    (3.7)
Example 3.14. Consider the orthonormal basis e_1, e_2 of R^2 constructed in Remark 3.12. If
    u = (2, 1)^T,
then the coordinates x_1, x_2 of u in this basis are given by
    x_1 = ⟨u, e_1⟩ = 6/5 + 4/5 = 2,
    x_2 = ⟨u, e_2⟩ = 8/5 − 3/5 = 1,
so that
    u = 2e_1 + e_2. □
u
In other words, X^⊥ consists of the vectors orthogonal to all the vectors in X. For this reason we will often write u ⊥ X to indicate that u ∈ X^⊥. The following result is left to the reader.
Proposition 3.16. (a) The subset X^⊥ is a vector subspace of U.
(b) X ⊂ Y ⇒ Y^⊥ ⊂ X^⊥.
(c) X^⊥ = ( span X )^⊥. □
3.5. Linear functionals and adjoints on Euclidean spaces. Suppose that U is a finite dimensional F-vector space, F = R, C. The dual of U is the F-vector space of linear functionals on U, i.e., linear maps
    ξ : U → F.
The dual of U is denoted by U^∗. The vector space U^∗ has the same dimension as U, and thus they are isomorphic. However, there is no distinguished isomorphism between these two vector spaces! We want to show that if we fix an inner product on U, then we can construct in a concrete way an isomorphism between these spaces. Before we proceed with this construction let us first observe that there is a natural bilinear map
    B : U^∗ × U → F,  B(ξ, u) = ξ(u).
Theorem 3.22 (Riesz representation theorem: the real case). Suppose that U is a finite dimensional real vector space equipped with an inner product
    ⟨−, −⟩ : U × U → R.
To any u ∈ U we associate the linear functional u^† ∈ U^∗ defined by the equality
    B(u^†, x) = u^†(x) := ⟨x, u⟩,  ∀x ∈ U.
Then the map
    U ∋ u ↦ u^† ∈ U^∗
is a linear isomorphism.
Proof. Let us show that the map u ↦ u^† is linear. For any u, v, x ∈ U we have
    (u + v)^†(x) = ⟨x, u + v⟩ = ⟨x, u⟩ + ⟨x, v⟩ = u^†(x) + v^†(x) = (u^† + v^†)(x),
which shows that
    (u + v)^† = u^† + v^†.
For any u, x ∈ U and any t ∈ R we have
    (tu)^†(x) = ⟨x, tu⟩ = t⟨x, u⟩ = t u^†(x),
i.e., (tu)^† = t u^†. This proves the claimed linearity. To prove that it is an isomorphism, we need to show that it is both injective and surjective.
Injectivity. Suppose that u ∈ U is such that u^† = 0. This means that u^†(x) = 0, ∀x ∈ U. If we let x = u we deduce
    0 = u^†(u) = ⟨u, u⟩ = ‖u‖^2,
so that u = 0.
Surjectivity. We have to show that for any ξ ∈ U^∗ there exists u ∈ U such that ξ = u^†. Let n = dim U. Fix an orthonormal basis e_1, . . . , e_n of U. The linear functional ξ is uniquely determined by its values on the e_i,
    ξ_i = ξ(e_i).
Define
    u := Σ_{k=1}^n ξ_k e_k.
Then
    u^†(e_i) = ⟨e_i, u⟩ = ⟨e_i, ξ_1 e_1 + · · · + ξ_i e_i + · · · + ξ_n e_n⟩ = ξ_i⟨e_i, e_i⟩ = ξ_i,
so that ξ = u^†. □
Theorem 3.23 (Riesz representation theorem: the complex case). Suppose that U is a finite dimensional complex vector space equipped with an inner product
    ⟨−, −⟩ : U × U → C.
To any u ∈ U we associate the linear functional u^† ∈ U^∗ defined by the equality
    B(u^†, x) = u^†(x) := ⟨x, u⟩,  ∀x ∈ U.
Then the map
    U ∋ u ↦ u^† ∈ U^∗
is bijective and conjugate linear, i.e., for any u, v ∈ U and any z ∈ C we have
    (u + v)^† = u^† + v^†,  (zu)^† = z̄ u^†.
Proof. Let us first show that the map u ↦ u^† is conjugate linear. For any u, v, x ∈ U we have
    (u + v)^†(x) = ⟨x, u + v⟩ = ⟨x, u⟩ + ⟨x, v⟩ = u^†(x) + v^†(x) = (u^† + v^†)(x),
which shows that
    (u + v)^† = u^† + v^†.
For any u, x ∈ U and any z ∈ C we have
    (zu)^†(x) = ⟨x, zu⟩ = z̄⟨x, u⟩ = z̄ u^†(x),
i.e., (zu)^† = z̄ u^†. This proves the claimed conjugate linearity. We now prove the bijectivity claim.
Injectivity. Suppose that u ∈ U is such that u^† = 0. This means that u^†(x) = 0, ∀x ∈ U. If we let x = u we deduce
    0 = u^†(u) = ⟨u, u⟩ = ‖u‖^2,
so that u = 0.
Surjectivity. We have to show that for any ξ ∈ U^∗ there exists u ∈ U such that ξ = u^†. Let n = dim U. Fix an orthonormal basis e_1, . . . , e_n of U. The linear functional ξ is uniquely determined by its values on the e_i,
    ξ_i = ξ(e_i).
Define
    u := Σ_{k=1}^n ξ̄_k e_k.
Then
    u^†(e_i) = ⟨e_i, u⟩ = ⟨e_i, ξ̄_1 e_1 + · · · + ξ̄_i e_i + · · · + ξ̄_n e_n⟩
      = ⟨e_i, ξ̄_1 e_1⟩ + · · · + ⟨e_i, ξ̄_i e_i⟩ + · · · + ⟨e_i, ξ̄_n e_n⟩ = ⟨e_i, ξ̄_i e_i⟩ = ξ_i⟨e_i, e_i⟩ = ξ_i.
Thus
    u^†(e_i) = ξ(e_i),  ∀i = 1, . . . , n,
so that ξ = u^†. □
Suppose that U and V are two F-vector spaces equipped with inner products ⟨−, −⟩_U and respectively ⟨−, −⟩_V. Next, assume that T : U → V is a linear map.
Theorem 3.24. There exists a unique linear map S : V → U satisfying the equality
    ⟨T u, v⟩_V = ⟨u, S v⟩_U,  ∀u ∈ U, v ∈ V.    (3.10)
This map is called the adjoint of T with respect to the inner products ⟨−, −⟩_U, ⟨−, −⟩_V and it is denoted by T^∗. The equality (3.10) can then be rewritten
    ⟨T u, v⟩_V = ⟨u, T^∗ v⟩_U,  ⟨T^∗ v, u⟩_U = ⟨v, T u⟩_V,  ∀u ∈ U, v ∈ V.    (3.11)
Proof. Uniqueness. Suppose there are two linear maps S_1, S_2 : V → U satisfying (3.10). Thus
    0 = ⟨u, S_1 v⟩_U − ⟨u, S_2 v⟩_U = ⟨u, (S_1 − S_2)v⟩_U,  ∀u ∈ U, v ∈ V.
For fixed v ∈ V we let u = (S_1 − S_2)v and we deduce from the above equality
    0 = ⟨(S_1 − S_2)v, (S_1 − S_2)v⟩_U = ‖(S_1 − S_2)v‖_U^2,
so that (S_1 − S_2)v = 0 for any v ∈ V. This shows that S_1 = S_2, thus proving the uniqueness part of the theorem.
Existence. Any v ∈ V defines a linear functional
    L_v : U → F,  L_v(u) = ⟨T u, v⟩_V.
Thus there exists a unique vector Sv ∈ U such that
    L_v = (Sv)^†, i.e., ⟨T u, v⟩_V = ⟨u, Sv⟩_U,  ∀u ∈ U.
One can verify easily that the correspondence V ∋ v ↦ Sv ∈ U described above is a linear map; see Exercise 3.13. □
We deduce that
    ⟨T e_j, f_i⟩_V = a_ij.
Hence
    ⟨f_i, T e_j⟩_V = conj ⟨T e_j, f_i⟩_V = ā_ij.
On the other hand, the i-th column of A^∗ describes the coordinates of T^∗ f_i in the basis e, so that
    T^∗ f_i = a^∗_{1i} e_1 + · · · + a^∗_{mi} e_m,
and we deduce that
    ⟨T^∗ f_i, e_j⟩ = a^∗_{ji}.
On the other hand, by (3.11) we have
    a^∗_{ji} = ⟨T^∗ f_i, e_j⟩_U = ⟨f_i, T e_j⟩_V = ā_ij.
Thus, A^∗ is the conjugate transpose of A. In other words, the entries of A^∗ are the conjugates of the corresponding entries of the transpose of A,
    A^∗ = Ā^T. □
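The defining identity ⟨Au, v⟩ = ⟨u, A^∗v⟩, with A^∗ the conjugate transpose, is quick to check numerically; a sketch assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A_adj = A.conj().T            # conjugate transpose of A

u = rng.standard_normal(3) + 1j * rng.standard_normal(3)
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)

def inner(x, y):
    # <x, y> = sum_k x_k conj(y_k), the convention of Example 3.2(b)
    return np.sum(x * np.conj(y))

# defining property of the adjoint: <A u, v> = <u, A* v>
assert np.isclose(inner(A @ u, v), inner(u, A_adj @ v))
```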
Definition 3.26. Let U be a finite dimensional Euclidean F-space with inner product ⟨−, −⟩. A linear operator T : U → U is called selfadjoint or symmetric if T = T^∗, i.e.,
    ⟨T u_1, u_2⟩ = ⟨u_1, T u_2⟩,  ∀u_1, u_2 ∈ U. □
Example 3.27. (a) Consider the standard real Euclidean n-dimensional space R^n. Any real n × n matrix A can be identified with a linear operator T_A : R^n → R^n. The operator T_A is selfadjoint if and only if the matrix A is symmetric, i.e.,
    a_ij = a_ji,  ∀i, j,
or, equivalently, A = A^T = the transpose of A.
(b) Consider the standard complex Euclidean n-dimensional space C^n. Any complex n × n matrix A can be identified with a linear operator T_A : C^n → C^n. The operator T_A is selfadjoint if and only if the matrix A is Hermitian, i.e.,
    a_ij = ā_ji,  ∀i, j,
or, equivalently, A = A^∗ = the conjugate transpose of A.
(c) Suppose that V is a subspace of the finite dimensional Euclidean space U. Then the orthogonal projection P_V : U → U is a selfadjoint operator, i.e.,
    ⟨P_V u_1, u_2⟩ = ⟨u_1, P_V u_2⟩,  ∀u_1, u_2 ∈ U.
Indeed, let u_1, u_2 ∈ U. They decompose uniquely as
    u_1 = v_1 + w_1,  u_2 = v_2 + w_2,  v_1, v_2 ∈ V,  w_1, w_2 ∈ V^⊥.
Then P_V u_1 = v_1, so that
    ⟨P_V u_1, u_2⟩ = ⟨v_1, v_2 + w_2⟩ = ⟨v_1, v_2⟩ + ⟨v_1, w_2⟩ = ⟨v_1, v_2⟩,
since w_2 ⊥ v_1.
Corollary 3.29. If A is an n × n complex matrix such that A = A^∗, then all the roots of the characteristic polynomial P_A(λ) = det(λ·1 − A) are real.
Proof. The roots of P_A(λ) are the eigenvalues of the linear operator T_A : C^n → C^n defined by A. Since A = A^∗ we deduce that T_A is selfadjoint with respect to the natural inner product on C^n, so that all its eigenvalues are real. □
Theorem 3.30. Suppose that U, V are two finite dimensional Euclidean F-vector spaces and T : U → V is a linear operator. Then the following hold.
(a) (T^∗)^∗ = T.
(b) ker T^∗ = R(T)^⊥.
(c) ker T = R(T^∗)^⊥.
(d) R(T^∗) = (ker T)^⊥.
(e) R(T) = (ker T^∗)^⊥.
Proof. (a) The operator (T^∗)^∗ is a linear operator U → V. We need to prove that for any u ∈ U we have (T^∗)^∗ u − T u = 0. Let x := (T^∗)^∗ u − T u.
Because (T^∗)^∗ is the adjoint of T^∗ we deduce from (3.11) that
    ⟨(T^∗)^∗ u, v⟩_V = ⟨u, T^∗ v⟩_U.
Because T^∗ is the adjoint of T we deduce from (3.11) that
    ⟨u, T^∗ v⟩_U = ⟨T u, v⟩_V.
Hence, for any v ∈ V we have
    0 = ⟨(T^∗)^∗ u, v⟩_V − ⟨T u, v⟩_V = ⟨x, v⟩_V.
By choosing v = x we deduce x = 0.
Corollary 3.31. Suppose that U is a finite dimensional Euclidean vector space, and T : U → U is a selfadjoint operator. Then
    ker T = R(T)^⊥,  R(T) = (ker T)^⊥,  U = ker T ⊕ R(T). □
Proposition 3.32. (a) Suppose that U, V, W are finite dimensional Euclidean F-spaces. If
    T : U → V,  S : V → W
are linear operators, then
    (ST)^∗ = T^∗ S^∗.
(b) Suppose that T : U → V is a linear operator between two finite dimensional Euclidean F-spaces. Then T is invertible if and only if the adjoint T^∗ : V → U is invertible. Moreover, if T is invertible, then
    (T^{−1})^∗ = (T^∗)^{−1}.
(c) Suppose that S, T : U → V are linear operators between two finite dimensional Euclidean F-spaces. Then
    (S + T)^∗ = S^∗ + T^∗,  (zS)^∗ = z̄ S^∗,  ∀z ∈ F. □
Definition 3.34. Let U, V be two finite dimensional Euclidean F-vector spaces. A linear operator T : U → V is called an isometry if for any u ∈ U we have
    ‖T u‖_V = ‖u‖_U. □
Proposition 3.35. A linear operator T : U → V between two finite dimensional Euclidean vector spaces is an isometry if and only if
    T^∗ T = 1_U. □
Proof. The implication ⇒ follows from Proposition 3.35. To prove the opposite implication, assume that T is an orthogonal operator. Hence
    ‖T u‖ = ‖u‖,  ∀u ∈ U.
This implies in particular that ker T = 0, so that T is invertible. If we let u = T^{−1} v in the above equality we deduce
    ‖T^{−1} v‖ = ‖v‖,  ∀v ∈ U.
Hence T^{−1} is also an isometry, so that
    (T^{−1})^∗ T^{−1} = 1_U.
Using Proposition 3.32 we deduce (T^{−1})^∗ = (T^∗)^{−1}. Hence
    (T^∗)^{−1} T^{−1} = 1_U.
Taking the inverses of both sides of the above equality we deduce
    1_U = ( (T^∗)^{−1} T^{−1} )^{−1} = (T^{−1})^{−1} ( (T^∗)^{−1} )^{−1} = T T^∗. □
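A plane rotation is the standard example of an isometry, and it satisfies both T^∗T = 1_U and TT^∗ = 1_U; a sketch assuming numpy:

```python
import numpy as np

t = 0.7
Q = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])   # rotation of R^2 by angle t

assert np.allclose(Q.T @ Q, np.eye(2))    # T* T = 1_U
u = np.array([3.0, 4.0])
assert np.isclose(np.linalg.norm(Q @ u), np.linalg.norm(u))   # ||Tu|| = ||u||
assert np.allclose(Q @ Q.T, np.eye(2))    # and T T* = 1_U, as in the proof
```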
3.6. Exercises.
Exercise 3.1. Prove the claims made in Example 3.2 (a), (b), (c). □
Exercise 3.2. Let (U, ⟨−, −⟩) be an Euclidean space.
(a) Show that for any u, v ∈ U we have
    ‖u + v‖^2 + ‖u − v‖^2 = 2( ‖u‖^2 + ‖v‖^2 ),
and
    |z_1|^2 + · · · + |z_n|^2 ≤ ( |z_1| + · · · + |z_n| )^2.
Exercise 3.4. Consider the space P_3 of polynomials with real coefficients and degree ≤ 3, equipped with the inner product
    ⟨P, Q⟩ = ∫_0^1 P(x) Q(x) dx,  P, Q ∈ P_3.
Construct an orthonormal basis of P_3 by using the Gram–Schmidt procedure applied to the basis of P_3 given by
    E_0(x) = 1, E_1(x) = x, E_2(x) = x^2, E_3(x) = x^3. □
Exercise 3.5. Fill in the missing details in the proof of Corollary 3.15. □
Exercise 3.6. Suppose that T : U → U is a linear operator on a finite dimensional complex Euclidean vector space. Prove that there exists an orthonormal basis of U such that, in this basis, T is represented by an upper triangular matrix. □
Exercise 3.8. Consider the standard Euclidean space R^3. Denote by e_1, e_2, e_3 the canonical orthonormal basis of R^3 and by V the subspace generated by the vectors
    v_1 = 12e_1 + 5e_3,  v_2 = e_1 + e_2 + e_3.
Find the matrix representing the orthogonal projection P_V : R^3 → R^3 in the canonical basis e_1, e_2, e_3. □
Exercise 3.9. Consider the space P_3 of polynomials with real coefficients and degree ≤ 3. Find p ∈ P_3 such that p(0) = p′(0) = 0 and
    ∫_0^1 ( 2 + 3x − p(x) )^2 dx
is as small as possible.
Hint: Observe that the set
    V = { p ∈ P_3 ; p(0) = p′(0) = 0 }
is a subspace of P_3. Then compute P_V, the orthogonal projection onto V with respect to the inner product
    ⟨p, q⟩ = ∫_0^1 p(x) q(x) dx,  p, q ∈ P_3.
The answer will be P_V q, q = 2 + 3x ∈ P_3. □
Exercise 3.10. Suppose that (U, ⟨−, −⟩) is a finite dimensional real Euclidean space and P : U → U is a linear operator such that
    P^2 = P,  ‖P u‖ ≤ ‖u‖,  ∀u ∈ U.    (P)
Show that there exists a subspace V ⊂ U such that P = P_V = the orthogonal projection onto V.
Hint: Let V = R(P) = P(U), W := ker P. Using (P) argue by contradiction that V ⊥ W, and then conclude that P = P_V. □
Exercise 3.11. Let P_2 denote the space of polynomials with real coefficients and degree ≤ 2. Describe the polynomial p_0 ∈ P_2 uniquely determined by the equalities
    ∫ cos x · q(x) dx = ∫ p_0(x) q(x) dx,  ∀q ∈ P_2. □
Exercise 3.12. Let k be a positive integer, and denote by P_k the space of polynomials with real coefficients and degree ≤ k. For k = 2, 3, 4, describe the polynomial p_k ∈ P_k uniquely determined by the equalities
    q(0) = ∫_{−1}^{1} p_k(x) q(x) dx,  ∀q ∈ P_k. □
Exercise 3.13. Finish the proof of Theorem 3.24. (Pay special attention to the case when F = C.) □
Exercise 3.14. Let P_2 denote the space of polynomials with real coefficients and degree ≤ 2. We equip it with the inner product
    ⟨p, q⟩ = ∫_0^1 p(x) q(x) dx.
Consider the linear operator T : P_2 → P_2 defined by
    T p = dp/dx,  p ∈ P_2.
Describe the adjoint of T. □
Exercise 3.17. Let U be a finite dimensional Euclidean F-vector space and T : U → U a linear operator. Prove that the following statements are equivalent.
(i) The operator T : U → U is orthogonal.
(ii) ⟨T u, T v⟩ = ⟨u, v⟩,  ∀u, v ∈ U.
(iii) For any orthonormal basis e_1, . . . , e_n of U, the collection T e_1, . . . , T e_n is also an orthonormal basis of U. □
Proposition 4.2. If T : U → U is a normal operator, then so are all of its powers T^k, k > 0.
Proof. To see this we invoke Proposition 3.32 and deduce
    (T^k)^∗ = (T^∗)^k.
Then, using the equality T^∗ T = T T^∗ repeatedly to move each factor T^∗ past each factor T,
    (T^k)^∗ T^k = (T^∗ · · · T^∗)(T · · · T) = (T^∗)^{k−1} (T T^∗) T^{k−1} = · · · = T^k (T^∗)^k = T^k (T^k)^∗. □
We can unravel a bit the above definition and observe that a linear operator T on an n-dimensional Euclidean F-space is orthogonally diagonalizable if and only if there exists an orthonormal basis e_1, . . . , e_n and numbers a_1, . . . , a_n ∈ F such that
    T e_k = a_k e_k,  ∀k.
Thus the above basis is rather special: it is orthonormal, and it consists of eigenvectors of T. The numbers a_k are eigenvalues of T.
Note that the converse is also true: if U admits an orthonormal basis consisting of eigenvectors of T, then T is orthogonally diagonalizable.
Proposition 4.4. Suppose that U is a complex Euclidean space of dimension n and T : U → U is orthogonally diagonalizable. Then T is a normal operator.
4.2. The spectral decomposition of a normal operator. We want to show that the converse of
Proposition 4.4 is also true. This is a nontrivial and fundamental result of linear algebra.
Theorem 4.5 (Spectral Theorem for Normal Operators). Let U be an n-dimensional complex Euclidean space and T : U → U a normal operator. Then T is orthogonally diagonalizable, i.e., there exists an orthonormal basis of U consisting of eigenvectors of T.
Proof. The key fact behind the Spectral Theorem is contained in the following auxiliary result.
Lemma 4.6. Let λ ∈ spec(T). Then
    ker(λ1_U − T)^2 = ker(λ1_U − T).
We first complete the proof of the Spectral Theorem assuming the validity of the above result. Invoking Lemma 2.9 we deduce that
    ker(λ1_U − T) = ker(λ1_U − T)^2 = ker(λ1_U − T)^3 = · · · ,
so that the generalized eigenspace of T corresponding to an eigenvalue λ coincides with the eigenspace ker(λ1_U − T),
    E_λ(T) = ker(λ1_U − T).
Suppose that
    spec(T) = { λ_1, . . . , λ_ℓ }.
From Proposition 2.18 we deduce that
    U = ker(λ_1 1_U − T) ⊕ · · · ⊕ ker(λ_ℓ 1_U − T).    (4.1)
The next crucial observation is contained in the following elementary result.
Lemma 4.7. Suppose that λ, μ are two distinct eigenvalues of T, and u, v ∈ U are eigenvectors,
    T u = λu,  T v = μv.
Then
    T^∗ u = λ̄u,  T^∗ v = μ̄v,
and
    u ⊥ v.
Proof. The operator S := T − λ1_U is normal and Su = 0, so that
    0 = S^∗ S u = S S^∗ u.
Hence
    0 = ⟨S S^∗ u, u⟩ = ⟨S^∗ u, S^∗ u⟩ = ‖S^∗ u‖^2.
This proves that T^∗ u = λ̄u. A similar argument shows that T^∗ v = μ̄v.
From the equality T u = λu we deduce
    λ⟨u, v⟩ = ⟨T u, v⟩ = ⟨u, T^∗ v⟩ = ⟨u, μ̄v⟩ = μ⟨u, v⟩.
Hence
    (λ − μ)⟨u, v⟩ = 0.
Since λ ≠ μ we deduce ⟨u, v⟩ = 0. □
From the above result we conclude that the direct summands in (4.1) are mutually orthogonal. Set d_k = dim ker(λ_k 1_U − T). We fix an orthonormal basis e^k of ker(λ_k 1_U − T). By construction, the vectors in this basis are eigenvectors of T. Since the spaces ker(λ_k 1_U − T) are mutually orthogonal, we deduce from (4.1) that the union of the orthonormal bases e^k is an orthonormal basis of U consisting of eigenvectors of T. This completes the proof of the Spectral Theorem, modulo Lemma 4.6. □
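Theorem 4.5 can be illustrated numerically on a normal operator that is not selfadjoint; a sketch assuming numpy (for a normal matrix with distinct eigenvalues, `np.linalg.eig` already returns an orthonormal eigenbasis up to rounding):

```python
import numpy as np

# rotation by 90 degrees: normal (T T* = T* T) but not selfadjoint
T = np.array([[0.0, -1.0],
              [1.0,  0.0]])
assert np.allclose(T @ T.conj().T, T.conj().T @ T)

w, V = np.linalg.eig(T)          # eigenvalues +-i, eigenvectors in the columns
assert np.allclose(V.conj().T @ V, np.eye(2))      # orthonormal eigenbasis
assert np.allclose(V @ np.diag(w) @ V.conj().T, T) # T = V diag(w) V*
```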
Proof of Lemma 4.6. The operator S = λ1_U − T is normal, so the conclusion of the lemma follows if we prove that for any normal operator S we have
    ker S^2 = ker S.
Note that ker S ⊂ ker S^2, so it suffices to show that ker S^2 ⊂ ker S.
Let u ∈ U such that S^2 u = 0. We have to show that Su = 0. Note that
    0 = (S^∗)^2 S^2 u = S^∗ S^∗ S S u = S^∗ S S^∗ S u.
Set A := S^∗ S. Note that A is selfadjoint, A = A^∗, and we can rewrite the above equality as 0 = A^2 u. Hence
    0 = ⟨A^2 u, u⟩ = ⟨Au, Au⟩ = ‖Au‖^2.
The equality Au = 0 now implies
    0 = ⟨Au, u⟩ = ⟨Su, Su⟩ = ‖Su‖^2,
so that Su = 0. □
4.3. The spectral decomposition of a real symmetric operator. We begin with the real counterpart of Proposition 4.4.
Proposition 4.8. Suppose that U is a real Euclidean space of dimension n and T : U → U is orthogonally diagonalizable. Then T is a symmetric operator.
Proof. Fix an orthonormal basis
    e = (e_1, . . . , e_n)
of U such that, in this basis, the operator T is represented by the diagonal matrix
    D = diag(a_1, a_2, . . . , a_n),  a_1, . . . , a_n ∈ R.
Clearly D = D^T, and the computations in Example 3.25 show that T is a selfadjoint operator. □
We can now state and prove the real counterpart of Theorem 4.5.
Theorem 4.9 (Spectral Theorem for Real Symmetric Operators). Let U be an n-dimensional real Euclidean space and T : U → U a symmetric operator. Then T is orthogonally diagonalizable, i.e., there exists an orthonormal basis of U consisting of eigenvectors of T.
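For concrete symmetric matrices this decomposition is exactly what `numpy.linalg.eigh` computes; a small sketch:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # a real symmetric matrix

w, V = np.linalg.eigh(A)          # eigh is tailored to symmetric/Hermitian input
assert np.allclose(w, [1.0, 3.0])            # real eigenvalues, ascending order
assert np.allclose(V.T @ V, np.eye(2))       # orthonormal eigenbasis
assert np.allclose(V @ np.diag(w) @ V.T, A)  # A = V D V^T
```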
Proof. We argue by induction on n = dim U. For n = 1 the result is trivially true. We assume that the result is true for real symmetric operators acting on Euclidean spaces of dimension < n, and we prove that it holds for a symmetric operator T on a real n-dimensional Euclidean space U.
To begin with, let us observe that T has at least one real eigenvalue. Indeed, if we fix an orthonormal basis e_1, . . . , e_n of U, then in this basis the operator T is represented by a symmetric n × n real matrix A. As explained in Corollary 3.29, all the roots of the characteristic polynomial det(λ1 − A) are real, and they coincide with the eigenvalues of T.
Fix one such eigenvalue λ ∈ spec(T) ⊂ R and denote by E_λ the corresponding eigenspace
    E_λ := ker(λ1 − T) ⊂ U.
Lemma 4.10. The orthogonal complement E_λ^⊥ is an invariant subspace of T, i.e.,
    u ∈ E_λ^⊥ ⇒ T u ∈ E_λ^⊥.
Proof. Let u ∈ E_λ^⊥. We have to show that T u ∈ E_λ^⊥, i.e., T u ⊥ v, ∀v ∈ E_λ. Given such a v we have
    T v = λv.
Next observe that u ⊥ v since u ∈ E_λ^⊥. Hence
    ⟨T u, v⟩ = ⟨u, T v⟩ = λ⟨u, v⟩ = 0. □
4.4. Exercises.
Exercise 4.1. Let U be a finite dimensional complex Euclidean vector space and T : U U a
normal operator. Prove that for any complex numbers a0 , . . . , ak the operator
    a_0 1_U + a_1 T + · · · + a_k T^k
is a normal operator. □
Exercise 4.2. Let U be a finite dimensional complex vector space and T : U → U a normal operator. Show that the following statements are equivalent.
(i) The operator T is orthogonal.
(ii) If λ is an eigenvalue of T, then |λ| = 1. □
Exercise 4.3. (a) Prove that the product of two orthogonal operators on a finite dimensional Euclidean
space is an orthogonal operator.
(b) Is it true that the product of two selfadjoint operators on a finite dimensional Euclidean space is also a selfadjoint operator? □
Exercise 4.4. Suppose that U is a finite dimensional Euclidean space and P : U → U is a linear operator such that P^2 = P. Show that the following statements are equivalent.
(i) P is the orthogonal projection onto a subspace V ⊂ U.
(ii) P = P^∗. □
Exercise 4.5. Suppose that U is a finite dimensional complex Euclidean space and T : U → U is a normal operator. Show that
    R(T) = R(T^∗). □
Exercise 4.6. Does there exist a symmetric operator T : R^3 → R^3 such that
    T (1, 1, 1)^T = (0, 0, 0)^T  and  T (1, 2, 3)^T = (1, 2, 3)^T ? □
Exercise 4.7. Show that a normal operator on a complex Euclidean space is selfadjoint if and only if all its eigenvalues are real. □
Exercise 4.8. Suppose that T is a normal operator on a complex Euclidean space U such that T^7 = T^5. Prove that T is selfadjoint and T^3 = T. □
Exercise 4.9. Let U be a finite dimensional complex Euclidean space and T : U → U be a selfadjoint operator. Suppose that there exists a vector u, a complex number λ, and a number ε > 0 such that
    ‖T u − λu‖ < ε‖u‖.
Prove that there exists an eigenvalue μ of T such that |λ − μ| < ε. □
5. A PPLICATIONS
5.1. Symmetric bilinear forms. Suppose that U is a finite dimensional real vector space. Recall that a symmetric bilinear form on U is a bilinear map
    Q : U × U → R
such that
    Q(u_1, u_2) = Q(u_2, u_1),  ∀u_1, u_2 ∈ U.
We denote by Sym(U) the space of symmetric bilinear forms on U.
Suppose that e = (e_1, . . . , e_n) is a basis of the real vector space U. This basis associates to any symmetric bilinear form Q ∈ Sym(U) a symmetric matrix
    A = (a_ij)_{1≤i,j≤n},  a_ij = Q(e_i, e_j) = Q(e_j, e_i) = a_ji.
Note that the form Q is completely determined by the matrix A. Indeed, if
u = Σ_{i=1}^n u_i e_i,  v = Σ_{j=1}^n v_j e_j,
then
Q(u, v) = Q( Σ_{i=1}^n u_i e_i, Σ_{j=1}^n v_j e_j ) = Σ_{i,j=1}^n u_i v_j Q(e_i, e_j) = Σ_{i,j=1}^n a_ij u_i v_j.
The matrix A is called the symmetric matrix associated to the symmetric form Q in the basis e.
Conversely, any symmetric n × n matrix A defines a symmetric bilinear form Q_A ∈ Sym(R^n)
by
Q_A(u, v) = Σ_{i,j=1}^n a_ij u_i v_j,  ∀u, v ∈ R^n.
If ⟨−, −⟩ denotes the canonical inner product on R^n, then we can rewrite the above equality in the
more compact form
Q_A(u, v) = ⟨u, Av⟩.
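The identity Q_A(u, v) = ⟨u, Av⟩ is easy to check numerically. Below is a minimal sketch in pure Python (the helper name `bilinear` is ours, not the notes'); it evaluates the double sum directly and confirms the symmetry Q_A(u, v) = Q_A(v, u) forced by a_ij = a_ji.

```python
# Sketch: evaluating the symmetric bilinear form Q_A(u, v) = <u, A v>
# for a small symmetric matrix A, using plain Python lists.

def bilinear(A, u, v):
    """Q_A(u, v) = sum over i, j of a_ij * u_i * v_j."""
    n = len(u)
    return sum(A[i][j] * u[i] * v[j] for i in range(n) for j in range(n))

A = [[2, 1], [1, 3]]      # symmetric: a_ij = a_ji
u, v = [1, 2], [4, -1]

Quv = bilinear(A, u, v)
Qvu = bilinear(A, v, u)   # symmetry of A forces Q_A(u, v) = Q_A(v, u)
```

Here both evaluations return the same number, illustrating that a symmetric matrix always produces a symmetric bilinear form.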
Definition 5.1. Let Q ∈ Sym(U) be a symmetric bilinear form on the finite dimensional real space
U. The quadratic form associated to Q is the function
Φ_Q : U → R,  Φ_Q(u) = Q(u, u),  ∀u ∈ U. □
Observe that if Q ∈ Sym(U), then we have the polarization identity
Q(u, v) = (1/4)( Φ_Q(u + v) − Φ_Q(u − v) ).
This shows that a symmetric bilinear form is completely determined by its associated quadratic form.
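The polarization identity can be checked on a concrete matrix. The sketch below (helper names are ours) recovers Q(u, v) from values of the quadratic form alone.

```python
# Sketch: checking the polarization identity
#   Q(u, v) = (1/4) * ( Phi(u + v) - Phi(u - v) )
# where Phi(u) = Q(u, u), for Q given by a symmetric matrix A.

def Q(A, u, v):
    n = len(u)
    return sum(A[i][j] * u[i] * v[j] for i in range(n) for j in range(n))

def phi(A, u):                 # the associated quadratic form
    return Q(A, u, u)

A = [[1, -2], [-2, 5]]
u, v = [3, 1], [-1, 2]

lhs = Q(A, u, v)
upv = [u[k] + v[k] for k in range(2)]
umv = [u[k] - v[k] for k in range(2)]
rhs = (phi(A, upv) - phi(A, umv)) / 4
```

The two sides agree exactly, as the algebraic identity predicts.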
Proposition 5.2. Suppose that Q ∈ Sym(U) and that
e = {e_1, ..., e_n},  f = {f_1, ..., f_n}
are two bases of U. Denote by A and B the matrices associated to Q by the bases e and f respectively,
and by S the matrix describing the transition from the basis e to the basis f. In other words, the
j-th column of S describes the coordinates of f_j in the basis e, i.e.,
f_j = Σ_{i=1}^n s_ij e_i.
Then
B = S^⊤ A S. (5.1)
Proof. We have
b_ij = Q(f_i, f_j) = Q( Σ_{k=1}^n s_ki e_k, Σ_{ℓ=1}^n s_ℓj e_ℓ )
= Σ_{k,ℓ=1}^n s_ki s_ℓj Q(e_k, e_ℓ) = Σ_{k,ℓ=1}^n s_ki a_kℓ s_ℓj.
If we denote by s⊤_ij the entries of the transpose matrix S^⊤, s⊤_ij = s_ji, we deduce
b_ij = Σ_{k,ℓ=1}^n s⊤_ik a_kℓ s_ℓj.
The last equality shows that b_ij is the (i, j)-entry of the matrix S^⊤ A S. □
+ We strongly recommend that the reader compare the change of basis formula (5.1) with the change
of basis formula (2.1).
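The change of basis formula (5.1) can be sanity checked numerically: take a symmetric A and a transition matrix S, and compare S^⊤AS against the values b_ij = Q(f_i, f_j) computed directly. The helper names below are ours.

```python
# Sketch: verifying the change-of-basis formula (5.1), B = S^T A S.
# The j-th column of S holds the e-coordinates of the new basis vector f_j.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

A = [[2, 1], [1, 3]]          # matrix of Q in the basis e
S = [[1, 1], [0, 1]]          # f_1 = e_1, f_2 = e_1 + e_2

B = matmul(transpose(S), matmul(A, S))

def Qcols(i, j):
    """Direct computation of b_ij = Q(f_i, f_j) from the columns of S."""
    fi = [S[r][i] for r in range(2)]
    fj = [S[r][j] for r in range(2)]
    return sum(A[r][c] * fi[r] * fj[c] for r in range(2) for c in range(2))
```

Both routes produce the same matrix B, which is again symmetric, as Proposition 5.2 asserts.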
Theorem 5.3. Suppose that Q is a symmetric bilinear form on a finite dimensional real vector space
U. Then there exists at least one basis of U such that the matrix associated to Q by this basis is a
diagonal matrix.
Proof. We will employ the spectral theory of real symmetric operators. For this reason we fix a
Euclidean inner product ⟨−, −⟩ on U and then choose a basis e of U which is orthonormal with
respect to this inner product. We denote by A the symmetric matrix associated to Q by this
basis, i.e.,
a_ij = Q(e_i, e_j).
This matrix defines a symmetric operator T_A : U → U by the formula
T_A e_j = Σ_{i=1}^n a_ij e_i.
Let us observe that
Q(u, v) = ⟨u, T_A v⟩,  ∀u, v ∈ U. (5.2)
To verify the above equality we first notice that both sides are bilinear in u and v, so that it suffices
to check the equality in the special case when the vectors u, v belong to the basis e. We have
⟨e_i, T_A e_j⟩ = ⟨ e_i, Σ_k a_kj e_k ⟩ = a_ij = Q(e_i, e_j).
The spectral theorem for real symmetric operators implies that there exists an orthonormal basis f of
U such that the matrix B representing T_A in this basis is diagonal. If S denotes the matrix describing
the transition from the basis e to the basis f, then the equality (2.1) implies that
B = S⁻¹ A S.
Since the bases e and f are orthonormal, we deduce from Exercise 3.17 that the matrix S is orthogonal,
i.e., S^⊤S = SS^⊤ = 1. Hence S⁻¹ = S^⊤ and we deduce that
B = S^⊤ A S.
The above equality and Proposition 5.2 imply that the diagonal matrix B is also the matrix associated
to Q by the basis f. □
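The proof of Theorem 5.3 goes through the spectral theorem, but a diagonalizing basis can also be produced by elementary means: simultaneous row and column elimination realizes a congruence S^⊤AS with S invertible. The sketch below is our code, not the notes' construction, and it assumes the pivots A[p][p] never vanish (true for this example).

```python
# Sketch: diagonalizing a symmetric matrix by a congruence S^T A S,
# using simultaneous row/column elimination.  Assumes nonzero pivots.

def diagonalize_congruence(A):
    n = len(A)
    A = [row[:] for row in A]            # work on a copy
    S = [[float(i == j) for j in range(n)] for i in range(n)]
    for p in range(n):
        for r in range(p + 1, n):
            c = A[r][p] / A[p][p]
            for j in range(n):           # row operation ...
                A[r][j] -= c * A[p][j]
            for i in range(n):           # ... and the matching column op
                A[i][r] -= c * A[i][p]
            for i in range(n):           # record the column op in S
                S[i][r] -= c * S[i][p]
    return A, S

A = [[2.0, 1.0], [1.0, 3.0]]
D, S = diagonalize_congruence(A)         # D diagonal, D = S^T A S
```

For this A the new basis is f_1 = e_1, f_2 = e_2 − (1/2)e_1, and D = Diag(2, 5/2).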
Theorem 5.4 (The law of inertia). Let U be a real vector space of dimension n and Q a symmetric
bilinear form on U. Suppose that e, f are bases of U such that the matrices associated to Q by these
bases are diagonal. Then these matrices have the same number of positive (respectively negative,
respectively zero) entries on their diagonal.
Proof. We will show that these two matrices have the same number of positive elements and the same
number of negative entries on their diagonals. Automatically then they must have the same number
of trivial entries on their diagonals.
We take care of the positive entries first. Denote by A the matrix associated to Q by e and by B
the matrix associated to Q by f. We denote by p the number of positive entries on the diagonal of A and
by q the number of positive entries on the diagonal of B. We have to show that p = q. We argue by
contradiction and we assume that p ≠ q, say p > q.
We can label the elements in the basis e so that
a_ii = Q(e_i, e_i) > 0, ∀i ≤ p,  a_jj = Q(e_j, e_j) ≤ 0, ∀j > p. (5.3)
Observe that since A is diagonal we have
Q(e_i, e_j) = 0, ∀i ≠ j. (5.4)
Similarly we can label the elements in the basis f so that
b_kk = Q(f_k, f_k) > 0, ∀k ≤ q,  b_ℓℓ = Q(f_ℓ, f_ℓ) ≤ 0, ∀ℓ > q. (5.5)
Since B is diagonal we have
Q(f_i, f_j) = 0, ∀i ≠ j. (5.6)
Denote by V the subspace spanned by the vectors e_i, i = 1, ..., p, and by W the subspace spanned by
the vectors f_{q+1}, ..., f_n. From the equalities (5.3), (5.4), (5.5), (5.6) we deduce that
Q(v, v) > 0, ∀v ∈ V \ 0, (5.7a)
Q(w, w) ≤ 0, ∀w ∈ W \ 0. (5.7b)
On the other hand, we observe that dim V = p, dim W = n − q. Hence
dim V + dim W = n + p − q > n = dim U,
so that there exists a vector
u ∈ (V ∩ W) \ 0.
The vector u cannot simultaneously satisfy both inequalities (5.7a) and (5.7b). This contradiction
implies that p = q.
Using the above argument for the form −Q we deduce that A and B have the same number of
negative elements on their diagonals. □
The above theorem shows that no matter what diagonalizing basis of Q we choose, the diagonal
matrix representing Q in that basis will have the same number of positive, negative, and zero elements
on its diagonal. We will denote these common numbers by ν₊(Q), ν₋(Q) and respectively ν₀(Q).
These numbers are called the indices of inertia of the symmetric form Q. The integer ν₋(Q) is called
the Morse index of the symmetric form Q, and the difference
τ(Q) = ν₊(Q) − ν₋(Q)
is called the signature of the form Q.
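The law of inertia is easy to watch in action: diagonalize the same form in two different ways and compare the sign counts. In the sketch below (helper names are ours) D1 and D3 are matrices of one and the same form in two diagonalizing bases, and both give the inertia triple (1, 1, 0).

```python
# Sketch: the indices of inertia do not depend on the diagonalizing basis.
# We transform a diagonal form by a congruence S^T A S and re-diagonalize.

def congruent(A, S):            # computes S^T A S entrywise
    n = len(A)
    return [[sum(S[k][i] * A[k][l] * S[l][j]
                 for k in range(n) for l in range(n))
             for j in range(n)] for i in range(n)]

def inertia(D, eps=1e-12):      # (nu_+, nu_-, nu_0) of a diagonal matrix
    d = [D[i][i] for i in range(len(D))]
    return (sum(x > eps for x in d),
            sum(x < -eps for x in d),
            sum(abs(x) <= eps for x in d))

D1 = [[2.0, 0.0], [0.0, -5.0]]          # Q in one diagonalizing basis
S  = [[1.0, 3.0], [0.0, 2.0]]           # an invertible change of basis
D2 = congruent(D1, S)                    # no longer diagonal: [[2, 6], [6, -2]]
T  = [[1.0, -3.0], [0.0, 1.0]]           # a second change of basis ...
D3 = congruent(D2, T)                    # ... that re-diagonalizes Q
```

D1 and D3 have different diagonal entries (2, −5 versus 2, −20) but the same sign counts, exactly as Theorem 5.4 predicts.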
Definition 5.5. A symmetric bilinear form Q ∈ Sym(U) is called positive definite if
Q(u, u) > 0,  ∀u ∈ U \ 0.
It is called positive semidefinite if
Q(u, u) ≥ 0,  ∀u ∈ U.
It is called negative (semi)definite if −Q is positive (semi)definite. □
We observe that a symmetric bilinear form on an n-dimensional real space U is positive definite if
and only if ν₊(Q) = n = dim U.
Definition 5.6. A real symmetric n × n matrix A is called positive definite if and only if the associated
symmetric bilinear form Q_A ∈ Sym(R^n) is positive definite. □
5.2. Nonnegative operators. Suppose that U is a finite dimensional Euclidean F-vector space.
Definition 5.7. A linear operator T : U → U is called nonnegative if the following hold.
(i) T is selfadjoint, T = T∗.
(ii) ⟨Tu, u⟩ ≥ 0, for all u ∈ U.
The operator is called positive if it is nonnegative and ⟨Tu, u⟩ = 0 ⟹ u = 0. □
Definition 5.9. Suppose that T : U → U is a linear operator on the finite dimensional F-space U.
A square root of T is a linear operator S : U → U such that S² = T. □
Theorem 5.10. Let T : U → U be a linear operator on the n-dimensional Euclidean F-space. Then
the following statements are equivalent.
(i) The operator T is nonnegative.
(ii) The operator T is selfadjoint and all its eigenvalues are nonnegative.
(iii) The operator T admits a nonnegative square root.
(iv) The operator T admits a selfadjoint square root.
(v) There exists an operator S : U → U such that T = S∗S.
Proof. (i) ⟹ (ii) The operator T, being nonnegative, is also selfadjoint. Hence all its eigenvalues are real.
If λ is an eigenvalue of T and u ∈ ker(λ1_U − T) \ 0, then
λ‖u‖² = ⟨Tu, u⟩ ≥ 0.
This implies λ ≥ 0.
(ii) ⟹ (iii) Since T is selfadjoint, there exists an orthonormal basis e = {e_1, ..., e_n} of U such
that, in this basis, the operator T is represented by the diagonal matrix
A = Diag(λ_1, ..., λ_n),
where λ_1, ..., λ_n are the eigenvalues of T. They are all nonnegative so we can form a new diagonal
matrix
B = Diag(√λ_1, ..., √λ_n).
The matrix B defines a selfadjoint linear operator S on U which is represented by the matrix B in
the basis e. More precisely,
S e_i = √λ_i e_i,  i = 1, ..., n.
If u = Σ_{i=1}^n u_i e_i, then
S u = Σ_{i=1}^n √λ_i u_i e_i,  ⟨S u, u⟩ = Σ_{i=1}^n √λ_i |u_i|² ≥ 0.
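The construction in (ii) ⟹ (iii) can be carried out concretely. The 2 × 2 example below is hand-coded (the eigendata is supplied rather than computed, and the names are ours): it builds S = P √D P^⊤ from an orthonormal eigenbasis and checks that S² recovers the matrix of T.

```python
# Sketch: the nonnegative square root built from an orthonormal eigenbasis.
# A = [[2, 1], [1, 2]] has eigenvalues 1 and 3 with orthonormal
# eigenvectors (1, -1)/sqrt(2) and (1, 1)/sqrt(2), supplied by hand.

import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[2.0, 1.0], [1.0, 2.0]]                 # matrix of T, nonnegative
r = 1.0 / math.sqrt(2.0)
P = [[r, r], [-r, r]]                        # columns: orthonormal eigenvectors
Pt = [[r, -r], [r, r]]                       # P transposed
D_sqrt = [[1.0, 0.0], [0.0, math.sqrt(3.0)]]  # sqrt of eigenvalues 1, 3

S = matmul(P, matmul(D_sqrt, Pt))            # S = P sqrt(D) P^T, selfadjoint
S2 = matmul(S, S)                            # should recover A
```

Up to floating point error S² = A, and S is itself symmetric with nonnegative eigenvalues 1 and √3, i.e., a nonnegative square root of T.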
Proposition 5.11. Let U be a finite dimensional Euclidean F-space. Then any nonnegative operator
T : U → U admits a unique nonnegative square root.
Proof. We have an orthogonal decomposition
U = ⊕_{λ ∈ spec(T)} ker(λ1_U − T).
Moreover, if u = Σ_{λ ∈ spec(T)} u_λ with u_λ ∈ ker(λ1_U − T), then
Tu = Σ_{λ ∈ spec(T)} λ u_λ.
Suppose that S is a nonnegative square root of T. If μ ∈ spec(S) and u ∈ ker(μ1_U − S), then
Tu = S²u = S(Su) = S(μu) = μ²u.
Hence
μ² ∈ spec(T)
and
ker(μ1_U − S) ⊂ ker(μ²1_U − T).
We have a similar orthogonal decomposition
U = ⊕_{μ ∈ spec(S)} ker(μ1_U − S) ⊂ ⊕_{μ ∈ spec(S)} ker(μ²1_U − T) ⊂ ⊕_{λ ∈ spec(T)} ker(λ1_U − T) = U.
These inclusions must therefore be equalities, and if u = Σ_{λ ∈ spec(T)} u_λ as above, then
Su = Σ_{λ ∈ spec(T)} √λ u_λ.
The last equality determines S uniquely. □
Definition 5.12. If T is a nonnegative operator, then its unique nonnegative square root is denoted by
√T. □
5.3. Exercises.
Exercise 5.1. Let Q be a symmetric bilinear form on an n-dimensional real vector space U. Prove that
there exists a basis of U such that the matrix associated to Q by this basis is diagonal and all the
entries belong to {−1, 0, 1}. □
Exercise 5.2. Let Q be a symmetric bilinear form on an n-dimensional real vector space U with
indices of inertia ν₊, ν₋, ν₀. We say Q is positive definite on a subspace V ⊂ U if
Q(v, v) > 0,  ∀v ∈ V \ 0.
(a) Prove that if Q is positive definite on a subspace V, then dim V ≤ ν₊.
(b) Show that there exists a subspace of dimension ν₊ on which Q is positive definite.
Hint: (a) Choose a diagonalizing basis e = {e_1, ..., e_n} of Q. Assume that Q(e_i, e_i) > 0, for
i = 1, ..., ν₊, and Q(e_j, e_j) ≤ 0 for j > ν₊. If dim V > ν₊, argue by contradiction, using the
inequality dim V + dim span{e_j ; j > ν₊} > dim U = n. □
Exercise 5.3. Let Q be a symmetric bilinear form on an n-dimensional real vector space U with
indices of inertia ν₊, ν₋, ν₀. Define
Null(Q) := { u ∈ U ; Q(u, v) = 0, ∀v ∈ U }.
Show that Null(Q) is a vector subspace of U of dimension ν₀. □
Exercise 5.4 (Jacobi). For any n × n matrix M we denote by M_i the i × i matrix determined by the
first i rows and columns of M.
Suppose that Q is a symmetric bilinear form on the real vector space U of dimension n. Fix a
basis e = {e_1, ..., e_n} of U. Denote by A the matrix associated to Q by the basis e and assume that
Δ_i := det A_i ≠ 0,  ∀i = 1, ..., n.
(a) Prove that there exists a basis f = (f_1, ..., f_n) of U with the following properties.
(i) span{e_1, ..., e_i} = span{f_1, ..., f_i}, ∀i = 1, ..., n.
(ii) Q(f_k, e_i) = 0, ∀1 ≤ i < k ≤ n, and Q(f_k, e_k) = 1, ∀k = 1, ..., n.
Hint: For fixed k, express the vector f_k in terms of the vectors e_i,
f_k = Σ_{i=1}^n s_ik e_i,
and then show that the conditions (i) and (ii) above uniquely determine the coefficients s_ik, again with
k fixed.
(b) If f is the basis found above, show that
Q(f_k, f_i) = 0, ∀i ≠ k,
Q(f_k, f_k) = Δ_{k−1}/Δ_k,  ∀k = 1, ..., n,  Δ_0 := 1.
(c) Show that the Morse index ν₋(Q) is the number of sign changes in the sequence
1, Δ_1, Δ_2, ..., Δ_n. □
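Part (c) can be sanity checked in code. The sketch below (pure Python, helper names ours) computes the leading minors of a small symmetric matrix by Laplace expansion and counts the sign changes of 1, Δ_1, ..., Δ_n; for this example the count is 1, matching the single negative eigenvalue of the matrix.

```python
# Sketch: the Morse index as the number of sign changes in the sequence
# 1, Delta_1, ..., Delta_n of leading principal minors (Exercise 5.4(c)).

def det(M):                      # Laplace expansion; fine for tiny matrices
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def leading_minors(A):
    n = len(A)
    return [det([row[:i] for row in A[:i]]) for i in range(1, n + 1)]

A = [[1, 2, 0],
     [2, 1, 0],
     [0, 0, 4]]                  # symmetric, all leading minors nonzero

deltas = [1] + leading_minors(A)          # 1, Delta_1, Delta_2, Delta_3
sign_changes = sum(deltas[k] * deltas[k + 1] < 0 for k in range(3))
```

The sequence here is 1, 1, −3, −12: one sign change, so the Morse index of Q_A is 1.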
Exercise 5.5 (Sylvester). For any n × n matrix M we denote by M_i the i × i matrix determined by
the first i rows and columns of M.
Suppose that Q is a symmetric bilinear form on the real vector space U of dimension n. Fix a
basis e = {e_1, ..., e_n} of U and denote by A the matrix associated to Q by the basis e. Prove that
the following statements are equivalent.
(i) The form Q is positive definite.
(ii) det A_i > 0, ∀i = 1, ..., n. □
Exercise 5.6. (a) Let f : [0, 1] → [0, ∞) be a continuous function which is not identically zero. For
any k = 0, 1, 2, ... we define the k-th moment of f to be the real number
μ_k := μ_k(f) = ∫_0^1 x^k f(x) dx.
Prove that the symmetric (n + 1) × (n + 1) matrix
A = (a_ij)_{0≤i,j≤n},  a_ij = μ_{i+j},
is positive definite.
Hint: Associate to any vector
u = (u_0, u_1, ..., u_n)ᵀ
the polynomial P_u(x) = u_0 + u_1 x + ⋯ + u_n x^n and then express the integral
∫_0^1 P_u(x) P_v(x) f(x) dx
in terms of A.
(b) Prove that the symmetric n × n matrix
B = (b_ij)_{1≤i,j≤n},  b_ij = 1/(i + j),
is positive definite.
(c) Prove that the symmetric (n + 1) × (n + 1) matrix
C = (c_ij)_{0≤i,j≤n},  c_ij = (i + j)!,
is positive definite, where 0! := 1 and n! = 1 · 2 ⋯ n.
Hint: Show that for any k = 0, 1, 2, ... we have
∫_0^∞ x^k e^{−x} dx = k!,
and then use the trick in (a). □
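For part (a) with the constant function f(x) = 1 the moments are μ_k = 1/(k + 1), so A is the classical Hilbert matrix. The sketch below (the choice f = 1 and the sample vectors are ours) checks u^⊤Au > 0 for a few nonzero u using exact rational arithmetic.

```python
# Sketch for Exercise 5.6(a) with f(x) = 1, so mu_k = 1/(k + 1) exactly.
# Then u^T A u = integral_0^1 P_u(x)^2 dx, which must be positive for u != 0.

from fractions import Fraction

n = 2
A = [[Fraction(1, i + j + 1) for j in range(n + 1)] for i in range(n + 1)]

def quad(u):                     # u^T A u = sum of a_ij * u_i * u_j
    return sum(A[i][j] * u[i] * u[j]
               for i in range(n + 1) for j in range(n + 1))

values = [quad(u) for u in ([1, 0, 0], [1, -3, 1], [0, 1, -1])]
```

All three values are positive rationals; e.g. quad([1, -3, 1]) = 11/30, the exact value of ∫_0^1 (1 − 3x + x²)² dx.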
Show that
T^# = A⁻¹ T∗ A,
where T∗ is the adjoint of T with respect to the canonical inner product ⟨−, −⟩. What does this
formula tell you in the special case when T is described by the symmetric matrix
[ 1 2 ]
[ 2 3 ] ?
Hint: In (#) use the equalities ⟨x, y⟩_A = ⟨x, Ay⟩, ⟨Tx, y⟩ = ⟨x, T∗y⟩, ∀x, y ∈ R². □
Exercise 5.9. (a) Suppose that U is a finite dimensional real Euclidean space and Q ∈ Sym(U) is a
positive definite symmetric bilinear form. Prove that there exists a unique positive operator
T : U → U
such that
Q(u, v) = ⟨Tu, Tv⟩,  ∀u, v ∈ U. □
(f) If (U_0, ‖·‖_0) and (U_1, ‖·‖_1) are two normed F-spaces, then their cartesian product U_0 × U_1
is equipped with a natural norm
‖(u_0, u_1)‖ := ‖u_0‖_0 + ‖u_1‖_1,  ∀u_0 ∈ U_0, u_1 ∈ U_1. □
Theorem 6.3 (Minkowski). Consider the space R^N with canonical basis e_1, ..., e_N. Let p ∈ (1, ∞).
Then the function
|·|_p : R^N → [0, ∞),  |u_1 e_1 + ⋯ + u_N e_N|_p = ( Σ_{k=1}^N |u_k|^p )^{1/p},
is a norm on R^N.
Proof. The non-degeneracy and homogeneity of |·|_p are obvious. The tricky part is the triangle
inequality. We will carry out the proof of this inequality in several steps. Define q ∈ (1, ∞) by the
equality
1 = 1/p + 1/q ⟺ q = p/(p − 1).
Step 1. Young's inequality. We will prove that if a, b are nonnegative real numbers, then
ab ≤ a^p/p + b^q/q. (6.2)
The inequality is obviously true if ab = 0, so we assume that ab ≠ 0. Set α := 1/p and define
f : (0, ∞) → R,  f(x) = x^α − αx + α − 1.
Observe that
f′(x) = α(x^{α−1} − 1).
Hence f′(x) = 0 ⟺ x = 1. Moreover, since α < 1 we deduce
f″(1) = α(α − 1) < 0,
so 1 is a global max of f(x), i.e.,
f(x) ≤ f(1) = 0,  ∀x > 0.
Now let x = a^p/b^q. We deduce
(a^p/b^q)^{1/p} ≤ (1/p)(a^p/b^q) + 1/q,
i.e.,
a b^{−q/p} ≤ (1/p) a^p b^{−q} + 1/q.
Multiplying both sides by b^q we deduce
a b^{q − q/p} ≤ a^p/p + b^q/q.
This is precisely (6.2) since q − q/p = q(1 − 1/p) = 1.
Step 2. Hölder's inequality. We prove that for any x, y ∈ R^N we have
|⟨x, y⟩| ≤ |x|_p |y|_q, (6.3)
where ⟨x, y⟩ is the canonical inner product of the vectors x, y ∈ R^N,
⟨x, y⟩ = x_1 y_1 + ⋯ + x_N y_N.
Clearly it suffices to prove the inequality only in the case x, y ≠ 0. Using Young's inequality with
a = |x_k|/|x|_p,  b = |y_k|/|y|_q,
we deduce
(|x_k| |y_k|)/(|x|_p |y|_q) ≤ (1/p)(|x_k|^p/|x|_p^p) + (1/q)(|y_k|^q/|y|_q^q),  ∀k = 1, ..., N.
Summing over k we deduce
(1/(|x|_p |y|_q)) Σ_{k=1}^N |x_k y_k| ≤ (1/(p|x|_p^p)) Σ_{k=1}^N |x_k|^p + (1/(q|y|_q^q)) Σ_{k=1}^N |y_k|^q = 1/p + 1/q = 1.
Hence
Σ_{k=1}^N |x_k y_k| ≤ |x|_p |y|_q.
Now observe that
|⟨x, y⟩| = | Σ_{k=1}^N x_k y_k | ≤ Σ_{k=1}^N |x_k y_k|.
Step 3. Conclusion. We have
|u + v|_p^p = Σ_{k=1}^N |u_k + v_k|^p = Σ_{k=1}^N |u_k + v_k| · |u_k + v_k|^{p−1}
≤ Σ_{k=1}^N |u_k| |u_k + v_k|^{p−1} + Σ_{k=1}^N |v_k| |u_k + v_k|^{p−1}.
Using Hölder's inequality and the equalities q(p − 1) = p, p/q = p − 1, we deduce
Σ_{k=1}^N |u_k| |u_k + v_k|^{p−1} ≤ ( Σ_{k=1}^N |u_k|^p )^{1/p} ( Σ_{k=1}^N |u_k + v_k|^{q(p−1)} )^{1/q} = |u|_p |u + v|_p^{p/q}.
Similarly, we have
Σ_{k=1}^N |v_k| |u_k + v_k|^{p−1} ≤ |v|_p |u + v|_p^{p/q}.
Hence
|u + v|_p^p ≤ |u + v|_p^{p/q} ( |u|_p + |v|_p ),
so that
|u + v|_p = |u + v|_p^{p − p/q} ≤ |u|_p + |v|_p. □
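The inequalities of the proof are easy to sanity check numerically. The sketch below (pure Python, names ours) verifies Hölder and Minkowski for the conjugate pair p = 3, q = 3/2 on a pair of sample vectors.

```python
# Sketch: numerically checking Hoelder and Minkowski for p = 3, q = 3/2,
# so that 1/p + 1/q = 1.

def pnorm(x, p):
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

p, q = 3.0, 1.5
x = [1.0, -2.0, 3.0]
y = [0.5, 4.0, -1.0]

inner = sum(a * b for a, b in zip(x, y))
holder_ok = abs(inner) <= pnorm(x, p) * pnorm(y, q) + 1e-12
minkowski_ok = (pnorm([a + b for a, b in zip(x, y)], p)
                <= pnorm(x, p) + pnorm(y, p) + 1e-12)
```

Both booleans come out true, as they must for any choice of vectors.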
Definition 6.4. Suppose that ‖·‖_0 and ‖·‖_1 are two norms on the same vector space U. We say
that ‖·‖_1 is stronger than ‖·‖_0, and we denote this by ‖·‖_0 ≺ ‖·‖_1, if there exists C > 0 such
that
‖u‖_0 ≤ C‖u‖_1,  ∀u ∈ U. □
The relation ≺ is a partial order on the set of norms on a vector space U. This means that if
‖·‖_0, ‖·‖_1, ‖·‖_2 are three norms such that
‖·‖_0 ≺ ‖·‖_1,  ‖·‖_1 ≺ ‖·‖_2,
then
‖·‖_0 ≺ ‖·‖_2.
82 LIVIU I. NICOLAESCU
Definition 6.5. Two norms ‖·‖_0 and ‖·‖_1 on the same vector space U are called equivalent, and
we denote this by ‖·‖_0 ∼ ‖·‖_1, if
‖·‖_0 ≺ ‖·‖_1 and ‖·‖_1 ≺ ‖·‖_0. □
Observe that two norms ‖·‖_0, ‖·‖_1 are equivalent if and only if there exist positive constants c, C
such that
c‖u‖_1 ≤ ‖u‖_0 ≤ C‖u‖_1,  ∀u ∈ U. (6.4)
The relation ∼ is an equivalence relation on the space of norms on a given vector space.
Proposition 6.6. The norms |·|_p, p ∈ [1, ∞], on R^N are all equivalent with the norm |·|_∞.
Proof. Indeed, for u ∈ R^N we have
|u|_p = ( Σ_{k=1}^N |u_k|^p )^{1/p} ≤ ( N max_{1≤k≤N} |u_k|^p )^{1/p} = N^{1/p} |u|_∞,
so that |·|_p ≺ |·|_∞. Conversely,
|u|_∞ = max_{1≤k≤N} |u_k| = ( max_{1≤k≤N} |u_k|^p )^{1/p} ≤ ( Σ_{k=1}^N |u_k|^p )^{1/p} = |u|_p,
so that |·|_∞ ≺ |·|_p. □
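The two bounds in the proof give explicit equivalence constants, which the following sketch (names ours) checks on a sample vector for several values of p.

```python
# Sketch: the two-sided bound from Proposition 6.6,
#   |u|_inf <= |u|_p <= N^(1/p) * |u|_inf,
# checked on a concrete vector.

def pnorm(x, p):
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

def supnorm(x):
    return max(abs(t) for t in x)

u = [3.0, -1.0, 2.0, 0.5]
N = len(u)
ok = all(supnorm(u) <= pnorm(u, p) <= N ** (1.0 / p) * supnorm(u) + 1e-12
         for p in (1.0, 2.0, 3.0, 7.5))
```

As p grows, |u|_p shrinks toward |u|_∞, while the constant N^{1/p} shrinks toward 1, so both bounds tighten together.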
Corollary 6.8. For any p ∈ [1, ∞], the norm |·|_p on R^N is stronger than any other norm ‖·‖ on
R^N.
Proof. Indeed, the norm |·|_p is stronger than |·|_∞, which is stronger than ‖·‖. □
Proposition 6.10. A sequence of vectors
u(n) = (u_1(n), ..., u_N(n)) ∈ R^N
converges to the vector u_∞ = (u_1^∞, ..., u_N^∞) ∈ R^N in the norm |·|_∞ if and only if
lim_{n→∞} u_k(n) = u_k^∞,  ∀k = 1, ..., N.
Proof. We have
lim_{n→∞} |u(n) − u_∞|_∞ = lim_{n→∞} max_{1≤k≤N} |u_k(n) − u_k^∞| = 0 ⟺ lim_{n→∞} |u_k(n) − u_k^∞| = 0, ∀k = 1, ..., N. □
Proposition 6.11. Let (U, ‖·‖) be a normed space. If the sequence ( u(n) )_{n≥0} in U converges to
u_∞ ∈ U in the norm ‖·‖, then
lim_{n→∞} ‖u(n)‖ = ‖u_∞‖.
Proof. We have
| ‖u(n)‖ − ‖u_∞‖ | ≤ ‖u(n) − u_∞‖ → 0 as n → ∞,
where the first inequality follows from (6.1b). □
Proposition 6.13. Let U be a vector space and ‖·‖_0, ‖·‖_1 be two norms on U. The following
statements are equivalent.
(i) ‖·‖_0 ≺ ‖·‖_1.
(ii) If a sequence converges in the norm ‖·‖_1, then it converges to the same limit in the norm
‖·‖_0 as well.
Proof. (i) ⟹ (ii). We know that there exists a constant C > 0 such that ‖u‖_0 ≤ C‖u‖_1, ∀u ∈ U.
We have to show that if the sequence ( u(n) )_{n≥0} converges in the norm ‖·‖_1 to u_∞, then it also
converges to u_∞ in the norm ‖·‖_0. We have
‖u(n) − u_∞‖_0 ≤ C‖u(n) − u_∞‖_1 → 0 as n → ∞.
(ii) ⟹ (i). We argue by contradiction and we assume that for any n > 0 there exists v(n) ∈ U \ 0
such that
‖v(n)‖_0 ≥ n‖v(n)‖_1. (6.5)
Set
u(n) := (1/‖v(n)‖_0) v(n),  ∀n > 0.
Multiplying both sides of (6.5) by 1/‖v(n)‖_0 we deduce
1 = ‖u(n)‖_0 ≥ n‖u(n)‖_1,  ∀n > 0.
Hence u(n) → 0 in the norm ‖·‖_1 and thus u(n) → 0 in the norm ‖·‖_0. Using Proposition 6.11
we deduce ‖u(n)‖_0 → 0. This contradicts the fact that ‖u(n)‖_0 = 1 for any n > 0. □
Corollary 6.14. Two norms on the same vector space are equivalent if and only if any sequence that
converges in one norm converges to the same limit in the other norm as well. □
We set
m := inf_{u∈S} ‖u‖.
We claim that
m > 0. (6.6)
Let us observe that the above inequality implies that |·|_∞ ≺ ‖·‖. Indeed, for any u ∈ R^N \ 0 we
have
ū := (1/|u|_∞) u ∈ S,
so that
‖ū‖ ≥ m.
Multiplying both sides of the above inequality by |u|_∞ we deduce
‖u‖ ≥ m|u|_∞ ⟺ |u|_∞ ≤ (1/m)‖u‖, ∀u ∈ R^N \ 0 ⟹ |·|_∞ ≺ ‖·‖.
Let us prove the claim (6.6). Choose a sequence u(n) ∈ S such that
lim_{n→∞} ‖u(n)‖ = m. (6.7)
Since u(n) ∈ S, the coordinates u_k(n) of u(n) satisfy the inequalities
u_k(n) ∈ [−1, 1],  ∀k = 1, ..., N, ∀n.
The Bolzano-Weierstrass theorem implies that we can extract a subsequence ( u(ν) ) of the sequence
( u(n) ) such that the sequence ( u_k(ν) ) converges to some u_k^∞ ∈ R, for any k = 1, ..., N. Let u_∞ ∈ R^N
be the vector with coordinates u_1^∞, ..., u_N^∞. Using Proposition 6.10 we deduce that u(ν) → u_∞ in the
norm |·|_∞. Since |u(ν)|_∞ = 1, ∀ν, we deduce from Proposition 6.11 that |u_∞|_∞ = 1, i.e., u_∞ ∈ S.
On the other hand, ‖·‖ ≺ |·|_∞, and we deduce that u(ν) converges to u_∞ in the norm ‖·‖.
Hence, by (6.7),
‖u_∞‖ = lim_ν ‖u(ν)‖ = m.
Since |u_∞|_∞ = 1 we deduce that u_∞ ≠ 0, so that m = ‖u_∞‖ > 0. □
Corollary 6.16. On a finite dimensional real vector space U any two norms are equivalent.
Proof. Let N = dim U. We can then identify U with R^N and invoke Theorem 6.15, which
implies that any two norms on R^N are equivalent. □
Corollary 6.17. Let U be a finite dimensional real vector space. A sequence in U converges in some
norm if and only if it converges to the same limit in any norm. □
6.3. Completeness.
Definition 6.18. Let (U, ‖·‖) be a normed space. A sequence ( u(n) )_{n≥0} of vectors in U is said
to be a Cauchy sequence (in the norm ‖·‖) if
lim_{m,n→∞} ‖u(m) − u(n)‖ = 0,
i.e.,
∀ε > 0 ∃ν = ν(ε) > 0 such that dist( u(m), u(n) ) = ‖u(m) − u(n)‖ < ε, ∀m, n > ν. □
Proposition 6.19. Let (U, ‖·‖) be a normed space. If the sequence ( u(n) ) is convergent in the
norm ‖·‖, then it is also Cauchy in the norm ‖·‖.
Proof. Denote by u_∞ the limit of the sequence ( u(n) ). For any ε > 0 we can find ν = ν(ε) > 0
such that
‖u(n) − u_∞‖ < ε/2,  ∀n > ν(ε).
If m, n > ν(ε), then
‖u(m) − u(n)‖ ≤ ‖u(m) − u_∞‖ + ‖u_∞ − u(n)‖ < ε/2 + ε/2 = ε. □
Definition 6.20. A normed space (U, ‖·‖) is called complete if any Cauchy sequence is also convergent.
A Banach space is a complete normed space. □
Proposition 6.21. Let ‖·‖_0 and ‖·‖_1 be two equivalent norms on the same vector space U. Then
the following statements are equivalent.
(i) The space U is complete in the norm ‖·‖_0.
(ii) The space U is complete in the norm ‖·‖_1.
Proof. We prove only that (i) ⟹ (ii). The opposite implication is completely similar. Thus, we know
that U is ‖·‖_0-complete and we have to prove that it is also ‖·‖_1-complete.
Suppose that ( u(n) ) is a Cauchy sequence in the norm ‖·‖_1. We have to prove that it converges
in the norm ‖·‖_1. Since ‖·‖_0 ≺ ‖·‖_1, we deduce that there exists a constant C > 0 such that
‖u(n) − u(m)‖_0 ≤ C‖u(n) − u(m)‖_1 → 0 as m, n → ∞.
Hence the sequence ( u(n) ) is also Cauchy in the norm ‖·‖_0. Since U is ‖·‖_0-complete, we
deduce that the sequence ( u(n) ) converges in the norm ‖·‖_0 to some vector u_∞ ∈ U.
Using the fact that ‖·‖_1 ≺ ‖·‖_0, we deduce from Proposition 6.13 that the sequence converges
to u_∞ in the norm ‖·‖_1 as well. □
Proof. For simplicity we assume that U is a real vector space of dimension N. Thus we can identify
it with R^N. Invoking Proposition 6.21 we see that it suffices to prove that U is complete with respect
to any norm equivalent to the given one. Invoking Theorem 6.15 we see that it suffices to prove that R^N is
complete with respect to the norm |·|_∞.
Suppose that ( u(n) ) is a sequence in R^N which is Cauchy with respect to the norm |·|_∞. As
usual, we denote by u_1(n), ..., u_N(n) the coordinates of u(n) ∈ R^N. From the inequalities
|u_k(m) − u_k(n)| ≤ |u(m) − u(n)|_∞,  ∀k = 1, ..., N,
we deduce that for any k = 1, ..., N the sequence of coordinates ( u_k(n) ) is a Cauchy sequence of
real numbers. Therefore it converges to some real number u_k^∞. Invoking Proposition 6.10 we deduce
that the sequence ( u(n) ) converges to the vector u_∞ ∈ R^N with coordinates u_1^∞, ..., u_N^∞. □
6.4. Continuous maps. Let (U, ‖·‖_U) and (V, ‖·‖_V) be two normed vector spaces and S ⊂ U.
Definition 6.23. A map f : S → V is said to be continuous at a point s_∗ ∈ S if for any sequence
( s(n) ) of vectors in S that converges to s_∗ in the norm ‖·‖_U, the sequence ( f(s(n)) ) of vectors in
V converges in the norm ‖·‖_V to the vector f(s_∗). The function f is said to be continuous (on S)
if it is continuous at any point s ∈ S. □
Remark 6.24. The notion of continuity depends on the choice of norms on U and V only through
the notion of convergence in these norms. Hence, if we replace these norms with equivalent ones,
the notion of continuity does not change. □
Example 6.25. (a) Let (U, ‖·‖) be a normed space. Proposition 6.11 implies that the function
f : U → R, f(u) = ‖u‖, is continuous.
(b) Suppose that U, V are two normed spaces, S ⊂ U and f : S → V is a continuous map. Then
the restriction of f to any subset S′ ⊂ S is also a continuous map f|_{S′} : S′ → V. □
Proposition 6.26. Let U, V and S be as above, f : S → V a map and s_∗ ∈ S. Then the following
statements are equivalent.
(i) The map f is continuous at s_∗.
(ii) For any ε > 0 there exists δ = δ(ε) > 0 such that if s ∈ S and ‖s − s_∗‖_U < δ, then
‖f(s) − f(s_∗)‖_V < ε.
Proof. (i) ⟹ (ii) We argue by contradiction and we assume that there exists ε_0 > 0 such that for any
n > 0 there exists s_n ∈ S such that
‖s_n − s_∗‖_U < 1/n and ‖f(s_n) − f(s_∗)‖_V ≥ ε_0.
From the first inequality above we deduce that the sequence s_n converges to s_∗ in the norm ‖·‖_U.
The continuity of f implies that
lim_{n→∞} ‖f(s_n) − f(s_∗)‖_V = 0.
The last equality contradicts the fact that ‖f(s_n) − f(s_∗)‖_V ≥ ε_0 for any n.
(ii) ⟹ (i) Let s(n) be a sequence in S that converges to s_∗. We have to show that f( s(n) ) converges
to f(s_∗).
Let ε > 0. By (ii), there exists δ(ε) > 0 such that if s ∈ S and ‖s − s_∗‖_U < δ(ε), then
‖f(s) − f(s_∗)‖_V < ε. Since s(n) converges to s_∗, we can find ν = ν(ε) such that if n ≥ ν(ε), then
‖s(n) − s_∗‖_U < δ(ε), and hence ‖f(s(n)) − f(s_∗)‖_V < ε. □
Definition 6.27. A map f : S → V is said to be Lipschitz if there exists L > 0 such that
‖f(s_1) − f(s_2)‖_V ≤ L‖s_1 − s_2‖_U,  ∀s_1, s_2 ∈ S.
Observe that if f : S → V is a Lipschitz map, then
sup{ ‖f(s_1) − f(s_2)‖_V / ‖s_1 − s_2‖_U ; s_1, s_2 ∈ S, s_1 ≠ s_2 } < ∞.
The above supremum is called the Lipschitz constant of f.
Proposition 6.28. A Lipschitz map f : S → V is continuous.
Proof. Denote by L the Lipschitz constant of f. Let s_∗ ∈ S and let ( s(n) ) be a sequence in S that
converges to s_∗. Then
‖f(s(n)) − f(s_∗)‖_V ≤ L‖s(n) − s_∗‖_U → 0 as n → ∞. □
Proof. (i) ⟹ (ii) We argue by contradiction. We assume that for any positive integer n there exists a
vector u(n) ∈ U such that
‖T u(n)‖_V ≥ n‖u(n)‖_U.
Set
u_n := (1/(n‖u(n)‖_U)) u(n).
Then
‖u_n‖_U = 1/n, (6.9a)
‖T u_n‖_V ≥ n‖u_n‖_U = 1. (6.9b)
The equality (6.9a) implies that u_n → 0. Since T is continuous we deduce ‖T u_n‖_V → 0. This
contradicts (6.9b).
(ii) ⟹ (iii) For any u_1, u_2 ∈ U we have
‖T u_1 − T u_2‖_V = ‖T(u_1 − u_2)‖_V ≤ C‖u_1 − u_2‖_U.
This proves that T is Lipschitz. The implication (iii) ⟹ (i) follows from Proposition 6.28. □
We denote by B(U, V) the space of linear operators T : U → V such that ‖T‖ < ∞, and we will refer to
such operators as bounded. When U = V and ‖·‖_U = ‖·‖_V we set
B(U) := B(U, U). □
Observe that we can rephrase Proposition 6.29 as follows.
Corollary 6.31. Let (U, ‖·‖_U) and (V, ‖·‖_V) be two normed vector spaces. A linear map
T : U → V is continuous if and only if it is bounded. □
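To make boundedness concrete, here is a hedged sketch: when both spaces carry the sup-norm, the operator norm of a matrix map works out to the maximum absolute row sum (a standard fact, though not proved in the text), and the defining bound |Tu|_∞ ≤ ‖T‖ |u|_∞ can be spot-checked on sample vectors. All names below are ours.

```python
# Sketch: operator norm of T : (R^3, |.|_inf) -> (R^2, |.|_inf) given by a
# matrix, computed as the maximum absolute row sum, with the bound
# |T u|_inf <= ||T|| * |u|_inf spot-checked on a few vectors.

T = [[1.0, -2.0, 0.5],
     [3.0, 0.0, -1.0]]

op_norm = max(sum(abs(a) for a in row) for row in T)   # = 4.0 here

def apply(T, u):
    return [sum(a * x for a, x in zip(row, u)) for row in T]

def supnorm(x):
    return max(abs(t) for t in x)

samples = [[1.0, 1.0, 1.0], [-1.0, 2.0, 0.0], [0.5, -0.5, 3.0]]
bound_ok = all(supnorm(apply(T, u)) <= op_norm * supnorm(u) + 1e-12
               for u in samples)
# the bound is attained at the sign pattern of the largest row:
attained = supnorm(apply(T, [1.0, 1.0, -1.0]))
```

The vector (1, 1, −1) matches the signs of the second row, so the supremum defining ‖T‖ is actually attained there.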
Corollary 6.33. If (Sn ) and (Tn ) are sequences in B(U ) which converge in the operator norm to
S B(U ) and respectively T B(U ), then the sequence (Sn Tn ) converges in the operator norm to
ST .
Proof. We have
‖S_nT_n − ST‖ = ‖(S_nT_n − ST_n) + (ST_n − ST)‖ ≤ ‖(S_n − S)T_n‖ + ‖S(T_n − T)‖
≤ ‖S_n − S‖ · ‖T_n‖ + ‖S‖ · ‖T_n − T‖.
Observe that
‖S_n − S‖ → 0,  ‖T_n‖ → ‖T‖,  ‖S‖ · ‖T_n − T‖ → 0.
Hence
‖S_n − S‖ · ‖T_n‖ + ‖S‖ · ‖T_n − T‖ → 0,
and thus ‖S_nT_n − ST‖ → 0. □
Theorem 6.34. Let (U, ‖·‖_U) and (V, ‖·‖_V) be two finite dimensional normed vector spaces.
Then any linear map T : U → V is continuous.
Proof. For simplicity we assume that the vector spaces are real vector spaces, N = dim U, M =
dim V. By choosing bases in U and V we can identify them with the standard spaces U = R^N,
V = R^M.
The notion of continuity is based only on the notion of convergence of sequences and, as we know,
on finite dimensional spaces the notion of convergence of sequences is independent of the norm used.
Thus we can assume that the norm on both U and V is |·|_∞.
where u(n) is a vector in U for any n ≥ 0. The n-th partial sum of the series is the finite sum
S_n = u(0) + u(1) + ⋯ + u(n).
The series is called convergent if the sequence (S_n) is convergent. The limit of this sequence is called
the sum of the series. The series is called absolutely convergent if the series of nonnegative real
numbers
Σ_{n≥0} ‖u(n)‖
is convergent. □
Proposition 6.36. Let (U, ‖·‖) be a Banach space, i.e., a complete normed space. If the series in U
Σ_{n≥0} u(n) (6.11)
is absolutely convergent, then it is also convergent. Moreover,
‖ Σ_{n≥0} u(n) ‖ ≤ Σ_{n≥0} ‖u(n)‖. (6.12)
Proof. Denote by S_n the n-th partial sum of the series (6.11), and by S_n^+ the n-th partial sum of the
series
Σ_{n≥0} ‖u(n)‖.
Since U is complete, it suffices to show that the sequence (S_n) is Cauchy. For m < n we have
‖S_n − S_m‖ = ‖u(m + 1) + ⋯ + u(n)‖ ≤ ‖u(m + 1)‖ + ⋯ + ‖u(n)‖ = S_n^+ − S_m^+.
Since the sequence (S_n^+) is convergent, we deduce that it is also Cauchy, so that
lim_{m,n→∞} (S_n^+ − S_m^+) = 0.
This forces
lim_{m,n→∞} ‖S_n − S_m‖ = 0.
The inequality (6.12) is obtained by letting n → ∞ in the inequality
‖u(0) + ⋯ + u(n)‖ ≤ ‖u(0)‖ + ⋯ + ‖u(n)‖. □
We denote by B_N the vector space of continuous linear operators C^N → C^N equipped with the
operator norm defined above. The space is a finite dimensional complex vector space (dim_C B_N =
N²) and thus it is complete. Consider the series in B_N
Σ_{n≥0} (1/n!) A^n = 1 + (1/1!)A + (1/2!)A² + ⋯ .
Observe that
‖A^n‖ ≤ ‖A‖^n,
so that
(1/n!)‖A^n‖ ≤ (1/n!)‖A‖^n.
The series of nonnegative numbers
Σ_{n≥0} (1/n!)‖A‖^n = 1 + (1/1!)‖A‖ + (1/2!)‖A‖² + ⋯
is convergent, so by Proposition 6.36 the series Σ_{n≥0} (1/n!)A^n is absolutely convergent, hence
convergent. Its sum is denoted by e^A. Note that
‖e^A‖ ≤ e^{‖A‖}.
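A minimal sketch of the series definition: for a nilpotent matrix the series truncates, so the partial sums can be compared with the exact answer. The helper `expm` below is ours, not a library routine.

```python
# Sketch: summing the exponential series for the nilpotent matrix
# A = [[0, 1], [0, 0]].  Since A^2 = 0, the series truncates and
# e^A = 1 + A = [[1, 1], [0, 1]] exactly.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=20):
    S = [[1.0, 0.0], [0.0, 1.0]]         # partial sum, starts at identity
    P = [[1.0, 0.0], [0.0, 1.0]]         # running term A^n / n!
    for n in range(1, terms):
        P = matmul(P, A)
        P = [[x / n for x in row] for row in P]
        S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return S

A = [[0.0, 1.0], [0.0, 0.0]]
E = expm(A)
```

All terms of degree two and higher vanish, so the partial sums stabilize immediately at 1 + A.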
Example 6.39. Suppose that A is a diagonal matrix,
A := Diag(λ_1, ..., λ_N).
Then
A^n = Diag(λ_1^n, ..., λ_N^n),
and thus
e^A = Σ_{n≥0} (1/n!) Diag(λ_1^n, ..., λ_N^n) = Diag(e^{λ_1}, ..., e^{λ_N}). □
Before we explain the uses of the exponential of a matrix, we describe a simple strategy for
computing it.
Proposition 6.40. Let A be a complex N × N matrix and S an invertible N × N matrix. Then
S⁻¹ e^A S = e^{S⁻¹AS}. (6.14)
6.7. The exponential of a matrix and systems of linear differential equations. The usual exponential
function e^x satisfies the very important property
e^{x+y} = e^x e^y,  ∀x, y.
If A, B are two N × N matrices, then the equality
e^{A+B} = e^A e^B
no longer holds, because the multiplication of matrices is non commutative. Something weaker does
hold.
Theorem 6.41. Suppose that A, B are complex N × N matrices that commute, i.e., AB = BA. Then
e^{A+B} = e^A e^B.
Proof. The proof relies on the following generalization of Newton's binomial formula, whose proof
is left as an exercise.
Lemma 6.42 (Newton's binomial formula: the matrix case). If A, B are two commuting N × N
matrices, then for any positive integer n we have
(A + B)^n = Σ_{k=0}^n C(n, k) A^{n−k} B^k = A^n + C(n, 1) A^{n−1}B + C(n, 2) A^{n−2}B² + ⋯,
where C(n, k) = n!/(k!(n − k)!) denotes the binomial coefficient.
We set
S_n(A) = 1 + (1/1!)A + ⋯ + (1/n!)A^n,  S_n(B) = 1 + (1/1!)B + ⋯ + (1/n!)B^n,
S_n(A + B) = 1 + (1/1!)(A + B) + ⋯ + (1/n!)(A + B)^n.
We have
S_n(A) → e^A, S_n(B) → e^B, S_n(A + B) → e^{A+B} as n → ∞,
and we will prove that
lim_{n→∞} ‖S_n(A)S_n(B) − S_n(A + B)‖ = 0,
which then immediately implies the desired conclusion. Observe that
S_n(A)S_n(B) = ( 1 + (1/1!)A + ⋯ + (1/n!)A^n )( 1 + (1/1!)B + ⋯ + (1/n!)B^n )
= Σ_{k=0}^n ( Σ_{i=0}^k (1/i!)(1/(k−i)!) A^i B^{k−i} ) + Σ_{k=n+1}^{2n} ( Σ_{i+j=k, i,j≤n} (1/i!)(1/j!) A^i B^j )
= Σ_{k=0}^n (1/k!) Σ_{i=0}^k (k!/(i!(k−i)!)) A^i B^{k−i} + Σ_{k=n+1}^{2n} (1/k!) Σ_{i+j=k, i,j≤n} (k!/(i!j!)) A^i B^j
= Σ_{k=0}^n (1/k!) (A + B)^k + Σ_{k=n+1}^{2n} (1/k!) Σ_{i+j=k, i,j≤n} (k!/(i!j!)) A^i B^j,
where in the next-to-last step we used Lemma 6.42. Hence
S_n(A)S_n(B) − S_n(A + B) = Σ_{k=n+1}^{2n} (1/k!) Σ_{i+j=k, i,j≤n} (k!/(i!j!)) A^i B^j =: R_n,
and we deduce
‖R_n‖ ≤ Σ_{k=n+1}^{2n} (1/k!) Σ_{i+j=k, i,j≤n} (k!/(i!j!)) ‖A‖^i ‖B‖^j
≤ Σ_{k=n+1}^{2n} (1/k!) Σ_{i=0}^k (k!/(i!(k−i)!)) ‖A‖^i ‖B‖^{k−i} = Σ_{k=n+1}^{2n} (1/k!) (‖A‖ + ‖B‖)^k.
Set C := ‖A‖ + ‖B‖, so that
‖R_n‖ ≤ Σ_{k=n+1}^{2n} (1/k!) C^k. (6.17)
Fix a positive integer n_0 such that
n_0 > 2C.
For k > n_0 we have
k! = 1 · 2 ⋯ n_0 · (n_0 + 1) ⋯ k ≥ n_0^{k−n_0},
so that
1/k! ≤ (1/n_0)^{k−n_0},  ∀k > n_0.
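Theorem 6.41 can be illustrated exactly with commuting nilpotent matrices, where each exponential series truncates after the linear term (the names below are ours).

```python
# Sketch illustrating Theorem 6.41: A and B below satisfy AB = BA and
# A^2 = B^2 = (A + B)^2 = 0, so each exponential is exactly 1 + M.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(M):                     # e^M = 1 + M, valid here because M^2 = 0
    return [[1.0 + M[0][0], M[0][1]],
            [M[1][0], 1.0 + M[1][1]]]

A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0, 2.0], [0.0, 0.0]]
AB, BA = matmul(A, B), matmul(B, A)      # both zero: A and B commute

lhs = expm([[A[i][j] + B[i][j] for j in range(2)] for i in range(2)])
rhs = matmul(expm(A), expm(B))
```

Both sides come out to [[1, 3], [0, 1]], i.e., e^{A+B} = e^A e^B, exactly as the theorem asserts for commuting matrices.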
Remark 6.44. The notion of closed set with respect to a norm is based only on the notion of convergence
with respect to that norm. Thus, if a set is closed in a given norm, it is closed in any equivalent norm.
In particular, the notion of closed set in a finite dimensional normed space is independent of the norm
used. A similar observation applies to open sets. □
Proposition 6.45. Suppose that (U, ‖·‖) is a normed vector space and O ⊂ U. Then the following
statements are equivalent.
(i) The set O is open in the norm ‖·‖.
(ii) For any u ∈ O there exists ε > 0 such that any v ∈ U satisfying ‖v − u‖ < ε belongs to O. □
Proof. (i) ⟹ (ii) We argue by contradiction. We assume that there exists u_∗ ∈ O so that for any
n > 0 there exists v_n ∈ U satisfying
‖v_n − u_∗‖ < 1/n but v_n ∉ O.
Set C := U \ O, so that C is closed and v_n ∈ C for any n > 0. The inequality ‖v_n − u_∗‖ < 1/n implies
v_n → u_∗. Since C is closed we deduce that u_∗ ∈ C. This contradicts the initial assumption that
u_∗ ∈ O = U \ C.
(ii) ⟹ (i) We have to prove that C = U \ O is closed, given that O satisfies (ii). Again we
argue by contradiction. Suppose that there exists a sequence ( u(n) ) in C which converges to a vector
u_∗ ∈ U \ C = O. Then there exists ε > 0 such that any v ∈ U satisfying ‖v − u_∗‖ < ε does not
belong to C. This contradicts the fact that u(n) → u_∗, because at least one of the terms u(n) satisfies
‖u(n) − u_∗‖ < ε, and yet it belongs to C. □
Definition 6.46. Let (U, ‖·‖) be a normed space. The open ball of center u and radius r is the set
B(u, r) := { v ∈ U ; ‖v − u‖ < r }. □
We can rephrase Proposition 6.45 as follows.
Corollary 6.47. Suppose that (U, ‖·‖) is a normed vector space. A subset O ⊂ U is open in the
norm ‖·‖ if and only if for any u ∈ O there exists ε > 0 such that B(u, ε) ⊂ O. □
6.9. Compactness. We want to discuss a central concept in modern mathematics which is often used
in proving existence results.
Definition 6.49. Let (U, ‖·‖) be a normed space. A subset K ⊂ U is called compact if any
sequence of vectors in K has a subsequence that converges to some vector in K. □
Remark 6.50. Since the notion of compactness is expressed only in terms of convergence of sequences, we deduce that if a set is compact in some norm, it is compact in any equivalent norm. In particular, on finite dimensional vector spaces the notion of compactness is independent of the norm. ⊓⊔
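The equivalence behind this remark can be checked numerically. A sanity check (not a proof; the dimension N and the sample ranges are our choices) of the two-sided inequalities |u|_∞ ≤ |u|₁ ≤ N·|u|_∞ on Rᴺ, which force the two norms to have the same convergent sequences:

```python
import random

# check |u|_inf <= |u|_1 <= N * |u|_inf on random vectors in R^N
random.seed(1)
N = 5
for _ in range(1_000):
    u = [random.uniform(-10, 10) for _ in range(N)]
    n_inf = max(abs(c) for c in u)   # |u|_inf
    n_1 = sum(abs(c) for c in u)     # |u|_1
    assert n_inf <= n_1 <= N * n_inf
print("equivalence inequalities hold on all samples")
```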
Example 6.51 (Fundamental Example). For any real numbers a < b the closed interval [a, b] is a compact subset of R. ⊓⊔
The next result illustrates the usefulness of compactness in establishing existence results.
Theorem 6.52 (Weierstrass). Let (U, ‖·‖) be a normed space and f : K → R a continuous function defined on the compact subset K ⊂ U. Then there exist u_*, u^* ∈ K such that
f(u_*) ≤ f(u) ≤ f(u^*), ∀u ∈ K,
i.e.,
f(u_*) = inf_{u∈K} f(u), f(u^*) = sup_{u∈K} f(u).
In particular, f is bounded both from above and from below.
Proof. Let
m := inf_{u∈K} f(u) ∈ [−∞, ∞).
There exists a sequence (u(n)) in K such that
lim_{n→∞} f(u(n)) = m.
Since K is compact, there exists a subsequence (u(ν)) of (u(n)) that converges to some u_* ∈ K. Observing that
lim_ν f(u(ν)) = lim_{n→∞} f(u(n)) = m,
we deduce from the continuity of f that
f(u_*) = lim_ν f(u(ν)) = m.
In particular m > −∞, so f is bounded from below and attains its infimum at u_*. The vector u^* is obtained in a similar fashion, starting from a sequence (u(n)) with f(u(n)) → sup_{u∈K} f(u). ⊓⊔
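A numerical companion to the theorem (an illustration only; the function cos and the interval K = [0, 2π] are our choice of example): dense sampling on a compact interval approximates the extreme values that Theorem 6.52 guarantees are attained.

```python
import math

# sample f = cos on the compact set K = [0, 2*pi] on a fine grid
f = math.cos
a, b = 0.0, 2 * math.pi
n = 100_000
samples = [f(a + (b - a) * k / n) for k in range(n + 1)]
m, M = min(samples), max(samples)
# the true extrema are f(pi) = -1 and f(0) = 1
print(m, M)
```

On a noncompact set such an approximation can fail: f(x) = x on (0, 1) has infimum 0 and supremum 1, but neither value is attained.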
Definition 6.53. Let (U, ‖·‖) be a normed space. A set S ⊂ U is called bounded in the norm ‖·‖ if there exists M > 0 such that
‖s‖ ≤ M, ∀s ∈ S.
In other words, S is bounded if and only if
sup_{s∈S} ‖s‖ < ∞. ⊓⊔
Let us observe again that if a set S is bounded in a norm ‖·‖, it is also bounded in any other equivalent norm.
Proposition 6.54. Let (U, ‖·‖) be a normed space and K ⊂ U a compact subset. Then K is both bounded and closed.
Proof. We first prove that K is bounded. Indeed, Example 6.25 shows that the function f : K → R, f(u) = ‖u‖ is continuous, and Theorem 6.52 implies that sup_{u∈K} f(u) < ∞. This proves that K is bounded.
To prove that K is also closed, consider a sequence (u(n)) in K which converges to some u in U. We have to show that in fact u ∈ K. Indeed, since K is compact, the sequence (u(n)) contains a subsequence which converges to some point in K. On the other hand, since (u(n)) is convergent, the limit of this subsequence must coincide with the limit of (u(n)), which is u. Hence u ∈ K. ⊓⊔
Theorem 6.55. Let (U, ‖·‖) be a finite dimensional normed space, and K ⊂ U. Then the following statements are equivalent.
(i) The set K is compact.
(ii) The set K is bounded and closed.
Proof. The implication (i) ⇒ (ii) is true for any normed space, so we only have to concentrate on the opposite implication. For simplicity we assume that U is a real vector space of dimension N. By fixing a basis of U we can identify it with R^N. Since the notions of compactness, boundedness and closedness are independent of the norm we use on R^N, we can assume that ‖·‖ is the sup-norm |·|_∞.
Since K is bounded we deduce that there exists M > 0 such that
|u|_∞ ≤ M, ∀u ∈ K.
In particular we deduce that
u_k ∈ [−M, M], k = 1, . . . , N, ∀u ∈ K,
where as usual we denote by u_1, . . . , u_N the coordinates of a vector u ∈ R^N.
Suppose now that (u(n)) is a sequence in K. We have to show that it contains a subsequence that converges to a vector in K. We deduce from the above that
u_k(n) ∈ [−M, M], k = 1, . . . , N, ∀n.
Since the interval [−M, M] is a compact subset of R, we can find a subsequence (u(ν)) of (u(n)) such that each of the sequences (u_k(ν)), k = 1, . . . , N, converges to some u_k^* ∈ [−M, M]. Denote by u^* the vector in R^N with coordinates u_1^*, . . . , u_N^*. Invoking Proposition 6.10 we deduce that the subsequence (u(ν)) converges in the norm |·|_∞ to u^*. Since K is closed and the sequence (u(ν)) is in K, we deduce that its limit u^* is also in K. ⊓⊔
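The heart of the proof is extracting a subsequence along which every coordinate converges. A hand-picked illustration (the sequence is our own example, not from the text): u(n) = ((−1)ⁿ, cos(nπ/2)) is bounded in R² but not convergent, yet along the indices n = 4k both coordinates are constant, so that subsequence converges.

```python
import math

def u(n):
    # a bounded, non-convergent sequence in R^2
    return ((-1) ** n, math.cos(n * math.pi / 2))

# along n = 0, 4, 8, ... both coordinates equal 1 (up to float error)
sub = [u(4 * k) for k in range(10)]
assert all(abs(x - 1) < 1e-9 and abs(y - 1) < 1e-9 for x, y in sub)
print("the subsequence (u(4k)) converges to (1, 1)")
```

Note the order of extraction in the proof matters: one first passes to a subsequence making coordinate 1 converge, then refines it for coordinate 2, and so on through all N coordinates.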
6.10. Exercises.
Exercise 6.1. (a) Consider the space R^N with canonical basis e_1, . . . , e_N. Prove that the functions |·|_1, |·|_∞ : R^N → [0, ∞) defined by
|u_1 e_1 + · · · + u_N e_N|_∞ := max_{1≤k≤N} |u_k|, |u|_1 := Σ_{k=1}^{N} |u_k|
are norms on R^N.
(b) Let U denote the space C([0, 1]) of continuous functions u : [0, 1] → R. Then the function
‖·‖ : C([0, 1]) → [0, ∞), ‖u‖ := sup_{x∈[0,1]} |u(x)|
is a norm on U. ⊓⊔
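Before writing the proof for part (a), one can sanity-check the norm axioms numerically (a check on random samples, not a proof; the dimension and sample ranges are our choices):

```python
import random

def n1(u):
    # |u|_1
    return sum(abs(c) for c in u)

def ninf(u):
    # |u|_inf
    return max(abs(c) for c in u)

random.seed(2)
N = 4
for _ in range(1_000):
    u = [random.uniform(-5, 5) for _ in range(N)]
    v = [random.uniform(-5, 5) for _ in range(N)]
    t = random.uniform(-3, 3)
    for n in (n1, ninf):
        # homogeneity: |t*u| = |t| * |u|
        assert abs(n([t * c for c in u]) - abs(t) * n(u)) < 1e-9
        # triangle inequality: |u + v| <= |u| + |v|
        assert n([a + b for a, b in zip(u, v)]) <= n(u) + n(v) + 1e-12
print("norm axioms hold on all samples")
```

The remaining axiom, positivity with |u| = 0 only for u = 0, is immediate for both formulas.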
Exercise 6.3. (a) Prove that the space U of continuous functions u : [0, 1] → R equipped with the norm
‖u‖ = max_{x∈[0,1]} |u(x)|
is a complete normed space.
(b) Prove that the above vector space U is not finite dimensional.
For part (a) you need to use the concept of uniform convergence of functions. For part (b) consider the linear functionals L_n : U → R, L_n(u) = u(1/n), u ∈ U, n > 0, and then show they are linearly independent. ⊓⊔
Exercise 6.7. Suppose that (U, ‖·‖) is a normed space. Prove that for any u ∈ U and any r > 0 the ball B(u, r) is an open subset. ⊓⊔
are continuous. ⊓⊔
Exercise 6.10. Prove Newton's binomial formula, Lemma 6.42, using induction on n. ⊓⊔