
Algebra Tutorial Examples

Key:
E: easy examples
S: standard examples
P: hard or peripheral examples
Fields and Vector Spaces
1. (E) Find the multiplicative inverses of the non-zero elements in Z_7. (Just experimenting is probably easier than using the Euclidean algorithm.)

Part hand-in
for week 2

Solution: Listing the multiples of each element mod 7 until we reach 1, we have

1 : 1, ...
2 : 2, 4, 6, 1, ...
3 : 3, 6, 2, 5, 1, ...

and so 1^{-1} = 1 (as always), 2^{-1} = 4, 3^{-1} = 5, whence 4^{-1} = 2, 5^{-1} = 3 and 6^{-1} = 2^{-1} . 3^{-1} = 4 . 5 = 20 mod 7 = 6.

To use the Euclidean algorithm to find (say) 4^{-1}, we seek integers a, b such that 7a + 4b = 1, and then b = 4^{-1} mod 7. Thus

7 - 1.4 = 3
4 - 1.3 = 1,

and so (working backwards) 1 = 4 - 3 = 4 - (7 - 4) = 2.4 - 7. Hence 4^{-1} = 2.
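If you want to confirm the whole table by machine, here is a short Python sketch (an illustration only, not part of the sheet, which uses Maple later on); it simply repeats the "just experimenting" approach:

    # Brute-force the multiplicative inverses of the non-zero elements of Z_7.
    for a in range(1, 7):
        inv = next(b for b in range(1, 7) if (a * b) % 7 == 1)
        print(a, inv)
    # Output pairs: (1,1), (2,4), (3,5), (4,2), (5,3), (6,6), as found above.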


2. (E) Show that if L ⊆ K is a subfield then 1, 1 + 1, 1 + 1 + 1, ... are all elements of L. (It is tempting to call these 1, 2, 3, ..., but note that (e.g. in Z_p) they are not necessarily all distinct.) Deduce that Z_p does not have any subfields (other than itself). What do you think the smallest subfield of R is?
[Ans: Q]
Solution: The first part uses closure under addition (and, implicitly, the fact that addition is associative, otherwise expressions like 1 + 1 + 1 need brackets).
Any subfield of Z_p contains 1, hence all of the sums 1, 1 + 1, 1 + 1 + 1, ..., and these give every element of Z_p; so the only subfield is Z_p itself.
By the same argument, the smallest subfield of R must include Z, the inverses of all non-zero integers, and all products of elements generated so far, which leads to Q. (Note: though Z_2 is a field, and its elements are a subset of R, it is not a subfield, as the arithmetic operations are different.)
3. (E) Do the following equations have solutions in the fields C, R, Q, Z_3, Z_2?
(a) x^2 + 1 = 0,
(b) x^2 - x - 1 = 0.
Note: what this means in each case is this: is there an element of the given field such that if you substitute it into this equation and do all the arithmetic in that field then you get zero? The answers for the first three fields should be easy from elementary background knowledge. The last two fields have very few elements and so you can just experiment.
Solution:
(a) In C, x = ±i. No solution in R or Q. In Z_3 try 0^2 + 1 = 1 ≠ 0, 1^2 + 1 = 2 ≠ 0, 2^2 + 1 = 5 mod 3 = 2 ≠ 0; therefore no solution. In Z_2, try 0^2 + 1 = 1 ≠ 0, 1^2 + 1 = 2 mod 2 = 0; hence there is a unique solution x = 1.
(b) In C, x = (1 ± √5)/2, which is also in R but not in Q. Trial and error shows there is no solution in either Z_3 or Z_2.
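A brute-force check in Python (not part of the sheet; the helper `roots` below is purely illustrative) confirms the answers for the two finite fields:

    # Roots in Z_p of a polynomial given by its coefficient list [c0, c1, c2, ...].
    def roots(coeffs, p):
        return [x for x in range(p)
                if sum(c * x**k for k, c in enumerate(coeffs)) % p == 0]

    print(roots([1, 0, 1], 3), roots([1, 0, 1], 2))      # x^2 + 1: [] and [1]
    print(roots([-1, -1, 1], 3), roots([-1, -1, 1], 2))  # x^2 - x - 1: [] and []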

Completion of
hand-in for
week 2

4. (E) Which of the following are subspaces of the given vector space?
(a) {x ∈ R^3 | 2x1 - x2 + x3 = 1} ⊆ R^3
(b) {x ∈ R^3 | x1 = 2x2} ⊆ R^3
(c) {P ∈ P3 | P(1) = 0} ⊆ P3, where Pn denotes the vector space of real polynomials of degree ≤ n in a variable x
(d) {A ∈ M | A^T = -A} ⊆ M, where M is the vector space of real 2×2 matrices and T denotes matrix transpose
[Ans: no, yes, yes, yes]
Solution:
(a) 0 ∈ R^3 is not in the set.
(b) To prove that a subset U is a subspace, it is enough to prove that U is closed under linear combinations, i.e. that, if u1, u2 ∈ U and a1, a2 ∈ F, then a1u1 + a2u2 ∈ U.
Here, suppose ui = (xi1, xi2, xi3), i = 1, 2. Then
a1u1 + a2u2 = (a1x11 + a2x21, a1x12 + a2x22, a1x13 + a2x23).   (1)
Now we have to check that x1 = 2x2, where x1, x2 are the first and second components of (1). Now
a1x11 + a2x21 - 2(a1x12 + a2x22) = a1(x11 - 2x12) + a2(x21 - 2x22) = 0
because ui ∈ U and so xi1 = 2xi2.
(c) Similarly, if p1, p2 are polynomials in P3 such that pi(1) = 0, then (a1p1 + a2p2)(1) = 0, so a1p1 + a2p2 is in the set.
(d) If A1, A2 are two such matrices, then (a1A1 + a2A2)^T = a1(A1)^T + a2(A2)^T = -a1A1 - a2A2 = -(a1A1 + a2A2).
5. (E) Find all the vectors
(a) in Z_3^2 that are scalar multiples of a = (1, 2);
(b) in the subset U := {x | x1 + 2x2 = 0} of Z_3^2;
(c) in Sp(a, b) ⊆ Z_2^3, where a := (1, 1, 0), b := (1, 1, 1).
[Ans. (a) (0,0), (1,2), (2,1); (b) (0,0), (1,1), (2,2); (c) (0,0,0), (1,1,0), (1,1,1), (0,0,1).]
Solution:
(a) Multiply a by 0, 1, 2.
(b) Choose all possible values for x2, and find x1 for each.
(c) Write down all possible values of c1a + c2b, where c1, c2 ∈ {0, 1} = Z_2. There are four combinations of possibilities.
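Since these spaces are tiny, a Python enumeration (purely illustrative, not part of the sheet) lists all three answers directly:

    from itertools import product

    # (a) scalar multiples of (1, 2) in Z_3^2
    print([tuple((c * t) % 3 for t in (1, 2)) for c in range(3)])
    # (b) solutions of x1 + 2*x2 = 0 in Z_3^2
    print([x for x in product(range(3), repeat=2) if (x[0] + 2 * x[1]) % 3 == 0])
    # (c) Sp(a, b) in Z_2^3: all c1*a + c2*b with c1, c2 in {0, 1}
    a, b = (1, 1, 0), (1, 1, 1)
    print({tuple((c1 * s + c2 * t) % 2 for s, t in zip(a, b))
           for c1 in (0, 1) for c2 in (0, 1)})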
6. (E) In Z_2^3 find all the vectors in
(a) Sp(x, y), where x = (1, 1, 0), y = (0, 1, 1);
(b) the subspace V ⊆ Z_2^3 given by V = {x ∈ Z_2^3 | x1 + x2 + x3 = 0}.
Solution:
(a) (0,0,0), (1,1,0), (0,1,1), (1,0,1) (cf. Q.5(c)).
(b) Same as (a). Try all four combinations of x2, x3 and, for each combination, solve for x1.

Part hand-in
for week 3

7. (E) Let U = {x | x1 = 0} and V = {x | x2 = 0} be subspaces of R^3. What is the sum U + V of these subspaces? State the Dimension Theorem for sums of subspaces and verify it in this example. Is this an example of a direct sum?
[Ans: R^3, no]
Solution: U = {(0, a, b) : a, b ∈ R}, V = {(c, 0, d) : c, d ∈ R}. Hence U + V = {(c, a, b + d) : a, b, c, d ∈ R}. Any element of R^3 may be expressed in this way (e.g. with d = 0), and so U + V = R^3.
Dimension Theorem: dim(U) + dim(V) = dim(U + V) + dim(U ∩ V).
Here dim(U) = 2 ({(0,1,0), (0,0,1)} is a basis). Similarly dim(V) = 2. If x ∈ U ∩ V then x1 = x2 = 0 and so x = (0, 0, b), b ∈ R. Hence dim(U ∩ V) = 1, and indeed 2 + 2 = 3 + 1.
For a direct sum we would require U ∩ V = {0}, so this is not a direct sum.

8. (E)
(a) Show that in Z_5^3 the vectors (1, 1, 3), (2, 0, 2), (4, 3, 0) are linearly dependent.
(b) Complete the following sentence without using any terms from linear algebra. If R were finite-dimensional as a vector space over Q, it would mean that there exist a finite number r1, ..., rn of ... such that every ... could be written as ....

Part hand-in
for week 3

Solution:
(a) Call the vectors a, b, c. Looking at the last entry in each, we see that a + b has last entry 0, as c has. In fact a + b = (3, 1, 0), and so 3(a + b) = (9, 3, 0) = (4, 3, 0) = c mod 5, i.e. 3a + 3b - c = 0.
(b) If R were finite-dimensional as a vector space over Q, it would mean that there exist a finite number r1, ..., rn of real numbers such that every real number could be written as a sum of rational multiples of r1, ..., rn.
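The relation found in (a) is easy to verify by machine; a one-line Python check (illustrative only):

    a, b, c = (1, 1, 3), (2, 0, 2), (4, 3, 0)
    print(tuple((3*x + 3*y - z) % 5 for x, y, z in zip(a, b, c)))  # (0, 0, 0)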
9. (E) Give a basis of the subspace of Z_5^3 defined by the equation x1 + x2 + x3 = 0. What is the dimension of this subspace? How many vectors are there in this subspace? Find the coordinate matrix of the vector v = (4, 3, 3) in your chosen basis.

Completion of
hand-in for
week 3

Solution: Any vector in the subspace has the form (-x2 - x3, x2, x3) = x2.a + x3.b, where a = (-1, 1, 0) = (4, 1, 0) and b = (-1, 0, 1) = (4, 0, 1). Hence the subspace is Sp(a, b). Also a, b are linearly independent. Hence (a, b) is a linearly independent spanning set, i.e. a basis, and the subspace is 2-dimensional.
x2, x3 can each take 5 values, and these determine x1, so there are 25 vectors in the subspace.
First, check that v is in the subspace (x1 + x2 + x3 = 4 + 3 + 3 = 10 ≡ 0 mod 5). Clearly v = 3a + 3b, so the coordinate matrix has entries (3, 3).
10. (a) (E) Find all the vectors in the subspace V ⊆ Z_2^3 given by V = {x ∈ Z_2^3 | x1 + x2 + x3 = 0}.
(b) (S) There are seven different non-zero vectors in Z_2^3 and hence (since the only scalars are {0, 1}) there are seven different one-dimensional subspaces. There are also seven different linear equations of the form
λ1 x1 + λ2 x2 + λ3 x3 = 0,   λj ∈ Z_2,
apart from the trivial one with all the λj being zero. Each of these defines a different two-dimensional subspace of Z_2^3. In the picture there are seven blobs and seven lines (six straight ones together with a circle). [The picture, not reproduced here, shows the Fano plane: a triangle whose vertices, edge-midpoints and centre are the seven blobs, with the three sides, the three medians and the inscribed circle as the seven lines.] Label each blob with a different one-dimensional subspace and each line with a two-dimensional subspace in such a way that a blob is on a line if and only if the one-dimensional subspace lies inside the two-dimensional one (i.e. iff the vector satisfies the equation). (Note by the way that this configuration has the property that through every pair of points there is a unique line, and every pair of lines meets in precisely one point. It is an example of a finite projective plane.)
[Ans (a) (0,0,0), (1,0,1), (1,1,0), (0,1,1); (b) For example, A = (1,0,0), B = (0,1,0), C = (0,0,1), etc.]
Solution:
(a) The form of a general vector is as in the solution to Q.9, but now x2, x3 can take only 2 values each, and so the points in V are (0,0,0), (1,1,0), (1,0,1), (0,1,1). (Remember you are doing arithmetic mod 2.)
(b) Having decided on A and B (e.g. as in the answer stated), the third point c on the line AB is fixed, since it must be the one non-trivial linear combination of A and B, i.e. (1,1,0). Next, C can be anything that has not been used yet. Then the remaining vertices can be filled in (with no further choice) by the argument used to determine c.
In the answer stated, the lines AB, BC, CA are, respectively, x3 = 0, x1 = 0, x2 = 0. The line Cc is x1 + x2 = 0, and the circle is the subspace V discussed in part (a).
There are many possible correct answers.
11. (S) How many two-dimensional subspaces does Z_2^4 have? You might want to think along the lines of defining such a subspace by choosing a nonzero vector, and then choosing another vector that is not a multiple of it; their span determines a subspace. Count how many ways there are of doing this and then work out how many times each subspace has been counted.
[Ans: 35]
Solution: For each of the four entries in an element of Z_2^4 there are two choices, and so there are 2^4 points in the whole space. One of these is the zero vector, and so there are 2^4 - 1 choices for the first vector. For the second vector there remain 2^4 - 2 choices, and so there are (2^4 - 1)(2^4 - 2) ordered pairs of suitable non-zero vectors. Each two-dimensional subspace, however, contains exactly 3 non-zero points. (Recall the discussion of the two-dimensional subspaces in Qs. 9, 10a.) Therefore to each subspace there correspond 3.2 = 3! ordered pairs of non-zero points. Hence the number of distinct subspaces is (2^4 - 1)(2^4 - 2)/3! = 15.14/6 = 35.
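The count can also be confirmed by brute force. The Python sketch below (not from the sheet) stores each two-dimensional subspace of Z_2^4 as the set of its four elements, so the double counting disappears automatically:

    from itertools import product

    vecs = [v for v in product((0, 1), repeat=4) if any(v)]  # the 15 non-zero vectors
    subspaces = set()
    for u in vecs:
        for w in vecs:
            if w != u:  # over Z_2, "not a multiple of u" just means w != u
                s = tuple((x + y) % 2 for x, y in zip(u, w))
                subspaces.add(frozenset([(0, 0, 0, 0), u, w, s]))
    print(len(subspaces))  # 35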
12. Assuming that R and C are fields, state which of the following sets are fields:
(a) (E) all positive integers
(b) (S) all numbers a + b√3, where a, b ∈ Q
(c) (S) all numbers a + b.∛5, a, b ∈ Q
(d) (E) all non-integer rational numbers
(e) (S) all numbers a + b√2.i, a, b ∈ Q
[Ans: N, Y, N, N, Y]
Solution:
(a) 0 is not in the set.
(b) Refer forward to Q.14a. 0 and 1 are given by a = b = 0 and a = 1, b = 0, respectively. Closure under addition: (a1 + b1√3) + (a2 + b2√3) = (a1 + a2) + (b1 + b2)√3, where (a1 + a2) and (b1 + b2) are rational. Closure under multiplication: (a1 + b1√3)(a2 + b2√3) = (a1a2 + 3b1b2) + (a1b2 + a2b1)√3, where the two bracketed expressions are rational. Additive inverse: -(a + b√3) = (-a) + (-b)√3, which is again in the set. Multiplicative inverse: if a + b√3 ≠ 0 then

(a + b√3)^{-1} = (a - b√3) / ((a + b√3)(a - b√3)) = a/(a^2 - 3b^2) - (b/(a^2 - 3b^2))√3,

and the two rational expressions are rational numbers. Note that the introduction of the common factor a - b√3 is valid because it also is non-zero: if a - b√3 = 0 and b ≠ 0, it would follow that √3 = a/b, i.e. √3 is rational (contradiction); so we would require b = 0 and hence a = 0, which contradicts our assumption that a + b√3 ≠ 0. In particular the denominator a^2 - 3b^2 = (a + b√3)(a - b√3) ≠ 0.
(c) Closure under multiplication is a problem. Suppose there exist rational numbers a, b such that (∛5)^2 = a + b.∛5. Multiplying by ∛5 we get 5 = a.∛5 + b.(∛5)^2 = ab + (a + b^2).∛5. Since ∛5 is irrational (by a similar argument to the irrationality of √2) we deduce a + b^2 = 0 and ab = 5. Hence 5 = (-b^2).b = -b^3, i.e. ∛5 = -b is rational. (Contradiction.)
(d) 1/2 + 1/2 = 1 is not in the set.
(e) Like (b).
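The inverse formula in (b) can be checked exactly in Python by representing a + b√3 as a pair of rationals (a sketch under that representation, not part of the sheet):

    from fractions import Fraction as F

    def mul(x, y):   # (a1 + b1*r)(a2 + b2*r) with r^2 = 3
        return (x[0]*y[0] + 3*x[1]*y[1], x[0]*y[1] + x[1]*y[0])

    def inv(x):      # the formula derived above; x[0]^2 - 3*x[1]^2 != 0 for x != 0
        d = x[0]**2 - 3 * x[1]**2
        return (x[0] / d, -x[1] / d)

    x = (F(2), F(5))          # the element 2 + 5*sqrt(3)
    print(mul(x, inv(x)))     # (Fraction(1, 1), Fraction(0, 1)), i.e. the element 1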

13. (E) The field extension Q(√2) consists of all numbers of the form a + b√2, where a, b ∈ Q. Show that this set is closed under multiplication and addition, that it contains 0 and 1, and that it contains the multiplicative and additive inverses of a + b√2. Write down a basis of Q(√2) regarded as a vector space over Q. What is its dimension?

Part hand-in
for week 4

Solution: Up to near the end, this is like the solution to Q.12b. The set {1, √2} is clearly a spanning set. These numbers are also linearly independent over Q, because √2 is irrational. Hence this set is a basis, and the dimension is 2.
14. (S)
(a) Prove the following theorem: A subset S of a field F is a subfield if S contains the zero and unity of F, if S is closed under addition and multiplication, and if each a ∈ S has its negative -a and (provided a ≠ 0) its inverse a^{-1} in S.
(b) Show that the pair of conditions 0 ∈ S and 1 ∈ S can be replaced by the single condition "S contains at least two elements". (Hint: aa^{-1} = 1 if a ≠ 0.)
Solution:
(a) The axioms for a field fall into two kinds: (a) identities (such as the distributive law a(b + c) = ab + ac) and (b) statements of existence (e.g. there exists a^{-1}). Axioms of the first kind apply to all elements of F and therefore to all elements of S. The existence axioms are
i. Closure under addition (i.e. a, b ∈ S ⇒ a + b ∈ S)
ii. Existence of 0
iii. Existence of the additive inverse -a
iv. Closure under multiplication
v. Existence of 1
vi. Existence of the multiplicative inverse a^{-1} if a ≠ 0.
All of these hold in S by hypothesis, so S is a field under the operations of F, i.e. a subfield.
(b) If S contains at least two elements, there is at least one non-zero element a. By (vi) and (iv) we have aa^{-1} = 1 ∈ S. Similarly, by (iii) and (i), a + (-a) = 0 ∈ S.
Linear Transformations
15. (E) Give an example of two vector spaces V, W over the field Z_2 and a linear transformation T : V → W which is injective but not surjective. Write down the null space and range for your example, find the rank and nullity, and verify the rank theorem.
[Ans: T : Z_2 → Z_2^2, x ↦ (x, 0); {0}; Sp((1, 0)); 1; 0]
Solution: There are, of course, lots of solutions to this problem.
Injective: (x, 0) = (x′, 0) ⇒ x = x′.
Not surjective: no element maps to (0, 1).
Null space: (x, 0) = (0, 0) ⇒ x = 0.
Range: (x, 0) = x(1, 0), i.e. it is an element of the space stated in the Answer, and clearly any element of that space is in the range.
The range is spanned by the single vector (1, 0), and the set consisting of this vector is obviously linearly independent; hence it is a basis, and the rank is 1.
The null space is {0}, which is 0-dimensional.
dim(V) = 1 = rank + nullity = 1 + 0, and so the rank theorem is verified.

16. (E) Consider the linear map T : Z_2^3 → Z_2^3 defined by
T(x1, x2, x3) = (x1 + x2, x3 + x1, x2 + x3).
Find all vectors in the null space and all vectors in T(Z_2^3). Find the rank and nullity of T, and verify the rank theorem. Is T injective? surjective? bijective? an isomorphism?
[Ans: Sp((1,1,1)); Sp((1,1,0), (1,0,1)); 2; 1; N, N, N, N]
Solution: Null space: x1 + x2 = x3 + x1 = x2 + x3 = 0, and so x1 = x2 = x3. Both possibilities (0 and 1) give vectors in the null space, which therefore consists of (0,0,0) and (1,1,1), i.e. it is the span of the second of these.
Since Z_2^3 contains only 8 vectors, you can try them all. In summary,
Completion of
hand-in for
Week 4

(a) if all xi = 0, the result is (0, 0, 0);
(b) if exactly one of x1, x2, x3 is 1, then exactly one component of T(v) equals 0. This gives the two vectors stated in the answer and also (0,1,1) = (1,1,0) + (1,0,1), which is therefore in the stated span;
(c) if exactly two of the xi are 1, the result is the same set of vectors;
(d) if all three are 1, we get the null vector.
Hence the result.
The two stated spanning sets are linearly independent, and therefore are bases. Hence the dimensions (rank and nullity) are as stated.
dim(Z_2^3) = 3 = rank + nullity.
T is not injective, because two vectors map to the null vector.
T is not surjective, since nothing maps to (1, 0, 0) (for example).
Since it is neither injective nor surjective it cannot be bijective, nor an isomorphism.
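Since Z_2^3 has only 8 vectors, the whole question can also be checked by enumeration; a short Python sketch (illustrative, not the sheet's method):

    from itertools import product

    def T(x):
        return ((x[0] + x[1]) % 2, (x[2] + x[0]) % 2, (x[1] + x[2]) % 2)

    null_space = [v for v in product((0, 1), repeat=3) if T(v) == (0, 0, 0)]
    image = sorted({T(v) for v in product((0, 1), repeat=3)})
    print(null_space)  # [(0, 0, 0), (1, 1, 1)]
    print(image)       # 4 vectors: the span of (1, 1, 0) and (1, 0, 1)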
17. (S) Consider a 2×2 matrix A = [a b; c d] with entries in Z_2.
(a) How many such matrices are there?
(b) Find a necessary and sufficient condition for A to have a matrix inverse of the same form (with respect to ordinary matrix multiplication).
(c) How many of the matrices found in (a) have inverses?
(d) (Harder) Let A be such an invertible matrix, and let e1, e2, e3 be the three non-zero vectors in Z_2^2. Show that A permutes these three vectors, and find the permutation explicitly for each such A.
[Ans (a) 16, (b) ad - bc ≠ 0, (c) 6, (d) [1 1; 1 0] gives a particular 3-cycle, etc.]
Solution:
(a) a, b, c, d have two independent possibilities each; hence the number of such matrices is 2^4 = 16.
(b) If there exists a matrix inverse [e f; g h] then (taking determinants) we must have (ad - bc)(eh - gf) = 1, and so ad - bc ≠ 0. This is therefore a necessary condition. Conversely, suppose ad - bc ≠ 0, and consider the matrix (1/(ad - bc)) [d -b; -c a]. Check that this is an inverse of A. Hence the condition is also sufficient.
(c) In Z_2 the necessary and sufficient condition is ad - bc = 1. Working through the 8 possibilities for a, b, c we can determine the possible values of d easily:

a b c | d
0 0 0 | no solution
0 1 0 | no solution
0 0 1 | no solution
0 1 1 | 0 or 1
1 0 0 | 1
1 0 1 | 1
1 1 0 | 1
1 1 1 | 0

This gives 6 invertible matrices in all.
(d) Aei ≠ 0 (i.e. the null vector of Z_2^2), otherwise ei = A^{-1}0 = 0. Hence A maps the three non-zero vectors into the set of non-zero vectors. The map is injective, because (applying the inverse) Aei = Aej ⇒ ei = ej. Hence it acts as a permutation.
Suppose e1 = (1, 0), e2 = (0, 1), e3 = (1, 1). Then for the example given, e1 → e3 → e2 → e1.
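All parts of this question are small enough to enumerate. The Python sketch below (an aside, not part of the sheet) lists the invertible matrices and the permutation each induces on e1, e2, e3:

    from itertools import product

    vecs = [(1, 0), (0, 1), (1, 1)]   # e1, e2, e3

    def apply(M, v):
        (a, b), (c, d) = M
        return ((a*v[0] + b*v[1]) % 2, (c*v[0] + d*v[1]) % 2)

    invertible = [((a, b), (c, d)) for a, b, c, d in product((0, 1), repeat=4)
                  if (a*d - b*c) % 2 == 1]
    print(len(invertible))            # 6
    for M in invertible:
        print(M, [apply(M, v) for v in vecs])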

18. (S) Let T1, T2 be two linear mappings acting from V to W. Define their sum T1 + T2 and prove that it is linear. Prove that rank(T1 + T2) ≤ rank(T1) + rank(T2).
[Ans: (T1 + T2)(v) = T1(v) + T2(v)]
Solution: Define (T1 + T2)(v) = T1(v) + T2(v).
Linearity:
(T1 + T2)(α1v1 + α2v2) = T1(α1v1 + α2v2) + T2(α1v1 + α2v2)   (definition of T1 + T2)
= α1T1(v1) + α2T1(v2) + α1T2(v1) + α2T2(v2)   (linearity of T1 and T2)
= α1(T1 + T2)(v1) + α2(T1 + T2)(v2)   (definition of T1 + T2 again).
It follows from the definition that the range of T1 + T2 is a subspace of W1 + W2, where Wi is the range of Ti. Hence rank(T1 + T2) ≤ dim(W1 + W2) ≤ dim(W1) + dim(W2) = rank(T1) + rank(T2).
19. (S) Let Pj[Z_3] denote the vector space of polynomials of degree ≤ j with coefficients in Z_3.
(a) State the dimension of Pj[Z_3].
(b) Show that (1 + x, x + x^2, x^2) is a basis for P2[Z_3].
(c) Show that T : P2[Z_3] → P3[Z_3], where T : p(x) ↦ (x + 2)p(x), is a linear map.
(d) Calculate the matrix of T with respect to the basis above for P2[Z_3] and the basis (1, x, x^2, x^3) for P3[Z_3].
[Ans: (a) j + 1; (d) (entries reduced mod 3)
[ 2 0 0 ]
[ 0 2 0 ]
[ 1 0 2 ]
[ 0 1 1 ] ]

Solution:
(a) As usual, there is an obvious basis (1, x, x^2, ..., x^j), of size j + 1.
(b) If a1(1 + x) + a2(x + x^2) + a3x^2 = 0 (i.e. it is the zero polynomial), then the coefficient of every power of x must vanish, i.e.
a1 = 0,   a1 + a2 = 0,   a2 + a3 = 0,
whence a1 = a2 = a3 = 0. Therefore the set is linearly independent. The size of the set is the same as the dimension of the space, and so the set is a basis.
(c)
T(a1p1(x) + a2p2(x)) = (x + 2)(a1p1(x) + a2p2(x))
= a1(x + 2)p1(x) + a2(x + 2)p2(x)
= a1T(p1(x)) + a2T(p2(x)).
(d) The columns of the matrix are the images of the basis vectors of P2[Z_3], written in the basis (1, x, x^2, x^3): T(1 + x) = 2 + 3x + x^2 ≡ 2 + x^2 gives the column (2, 0, 1, 0)^T; T(x + x^2) = 2x + 3x^2 + x^3 ≡ 2x + x^3 gives (0, 2, 0, 1)^T; T(x^2) = 2x^2 + x^3 gives (0, 0, 2, 1)^T.
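Part (d) can be cross-checked mechanically: multiply each basis polynomial by x + 2 and reduce the coefficients mod 3. A hedged Python sketch (coefficient lists in increasing order of degree; not part of the sheet):

    def times_x_plus_2(p):                   # p = [c0, c1, c2] for c0 + c1*x + c2*x^2
        shifted = [0] + p                    # coefficients of x * p(x)
        doubled = [2 * c for c in p] + [0]   # coefficients of 2 * p(x)
        return [(s + d) % 3 for s, d in zip(shifted, doubled)]

    basis = [[1, 1, 0], [0, 1, 1], [0, 0, 1]]    # 1 + x, x + x^2, x^2
    cols = [times_x_plus_2(p) for p in basis]
    for row in zip(*cols):                   # rows of the matrix w.r.t. (1, x, x^2, x^3)
        print(row)                           # (2,0,0), (0,2,0), (1,0,2), (0,1,1)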

20. (S) Let
A =
[ 1 0 1 ]
[ 0 1 1 ]
[ 0 0 0 ].
(It would be better to have an example which could not be done with eigenvectors.) Find non-singular matrices P, Q such that the matrix P A Q is a diagonal matrix D in which the diagonal elements are ones or zeroes, the ones coming before the zeroes,

Part hand-in
for Week 5

(a) using elementary row and column operations; and


(b) by finding suitable bases of R^3.
(Here as elsewhere, unless otherwise stated, the diagonal is the leading diagonal, from top left to bottom
right.)
Solution:
(a) No row operations are needed, so we take P = I. Carry out the column operation C3 := C3 - C1 - C2, which is equivalent to postmultiplying by the matrix obtained from I by applying the same operation to it, i.e.
Q =
[ 1 0 -1 ]
[ 0 1 -1 ]
[ 0 0  1 ].
Then P A Q = diag(1, 1, 0).
(b) Since C3 = C1 + C2, and the vectors (C1, C2) are linearly independent, the image is Sp(C1, C2), the rank is 2, and the nullity 1. The null space is clearly Sp(v3) where v3 = (1, 1, -1). Complete a basis of R^3 with any other two suitable vectors v1, v2, e.g. the first two vectors of the standard basis. The transformation from this basis to the standard basis of R^3 has the matrix whose columns are the three basis vectors, i.e. we take
Q =
[ 1 0  1 ]
[ 0 1  1 ]
[ 0 0 -1 ].
Then Av1, Av2 (which are actually the same vectors as v1, v2) are chosen as the first two basis vectors w1, w2, which can be extended to a basis with the third standard basis vector. Thus the transformation from the basis (w1, w2, w3) to the standard basis is the identity, and its inverse (which we need for P) is also I.
Clearly these solutions are not unique.
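A quick numerical confirmation of (a) in Python/numpy (an aside; the sheet's own computer algebra system is Maple):

    import numpy as np

    A = np.array([[1, 0, 1], [0, 1, 1], [0, 0, 0]])
    P = np.eye(3)
    Q = np.array([[1, 0, -1], [0, 1, -1], [0, 0, 1]])
    print(P @ A @ Q)   # diag(1, 1, 0)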

21. (S) Show that the matrix
B =
[ 5  2  7 ]
[ 3 -4 -1 ]
[ 1  2  3 ]
is equivalent to the matrix A in Q.20, in a sense which you should state.
Solution: Again C3 = C1 + C2, while C1, C2 are linearly independent. Hence the rank is 2, and the same procedure gives matrices P′, Q′ such that P′BQ′ = diag(1, 1, 0). Hence B = P′^{-1} diag(1, 1, 0) Q′^{-1} = P′^{-1} P A Q Q′^{-1}. Thus there exist non-singular matrices X (= P′^{-1}P) and Y (= QQ′^{-1}) such that B = XAY.
Operators
22. (E) Let p be a polynomial.
(a) Suppose that the square matrix A is diagonal with entries λj. Show that p(A) is diagonal with entries p(λj).
(b) Suppose T, v, λ are (respectively) an operator, vector and scalar such that Tv = λv, and let p be a polynomial. Show that p(T)(v) = p(λ)v.
Solution: Let p(x) = Σ_{i=0}^{n} ai x^i.
(a) A^i is diagonal with entries λj^i. Hence p(A) = Σ_{i=0}^{n} ai A^i is a diagonal matrix with j-th diagonal entry Σ_{i=0}^{n} ai λj^i = p(λj).
(b) Tv = λv, and T^2 v = T(λv) = λ(Tv) = λ^2 v by linearity. By induction, T^i v = λ^i v. Hence the result.
23. (E) Give an example of a vector space V and a linear transformation T such that T cannot be represented by an upper triangular matrix with respect to any basis of V.
Solution: V cannot be complex, because of a theorem stated in class. Any 1-dimensional example corresponds to a 1×1 matrix, which is trivially upper triangular, and so we have to try at least two dimensions.

Completion of
hand-in for
week 5

If dim(V) = 2 and T has an eigenvalue λ with eigenvector v, then choosing v as the first basis vector makes the first column of the matrix representing T equal to (λ, 0)^T, and so the matrix is upper triangular. We therefore try a transformation on R^2 with no eigenvalue in R, e.g. rotation by π/2, which has matrix [0 -1; 1 0].
24. (S) A subspace U of a vector space V is said to be invariant under a transformation T if T(U) ⊆ U.
(a) Suppose that the matrix of a linear transformation (on a two-dimensional vector space) with respect to some coordinate system is [0 0; 0 1]. How many subspaces are there which are invariant under the transformation?
(b) Let λ be an eigenvalue of a linear transformation T on a vector space V, and Uλ the corresponding set of eigenvectors (including the zero vector). Show that Uλ is invariant under T.
[Ans: (a) 4]
Solution:
(a) Two trivial invariant subspaces are V itself and the null subspace {0}. These are the only possibilities of dimension 2 and 0 respectively. Any one-dimensional subspace is Sp(v) for some v ∈ V, and if it is invariant we require Tv = λv for some λ, i.e. v is an eigenvector. The eigenvalues of the matrix are given by -λ(1 - λ) = 0, i.e. λ = 0, 1, with eigenvectors (1, 0), (0, 1) respectively. The spans of these vectors are the only possible 1-dimensional invariant subspaces, giving 4 invariant subspaces in all.
(b) u ∈ Uλ ⇒ Tu = λu ∈ Uλ, since Uλ is closed under multiplication by scalars.
25. (S) Give an example of a matrix A which represents a projection on R^2. Verify that A^2 = A. Find the eigenvalues of your matrix A.
In general a linear transformation T is a projection if and only if T^2 = T. Prove that the eigenvalues of a projection are 0 or 1.
[Ans: See Q.24a; 0, 1]
Solution: Orthogonal projection in R^2 onto the x1-axis has matrix A = [1 0; 0 0]. Obviously A^2 = A. Eigenvalues satisfy (1 - λ)(-λ) = 0, and so λ = 0, 1.
Let λ be an eigenvalue of T, with a corresponding non-zero eigenvector v. Then T^2 v = Tv and so λ^2 v = λv. Since v ≠ 0 we require λ^2 - λ = 0. Hence the result.

26. (S) For each of the following matrices A find, when possible, a complex non-singular matrix P for which P^{-1}AP is diagonal:
(a) [2 4; 5 3]
(b) [3 2; -2 3]
(c) [1 2; 2 -2]
(d) [1 -2i; 2i -2]
[Ans e.g. (a) [4 1; 5 -1], (b) [1 1; i -i], (c) [2 1; 1 -2], (d) [-2i i; 1 2]]

(d) is part
hand-in for
week 6

Solution:
(a) Eigenvalues are given by (2 - λ)(3 - λ) - 20 = 0, i.e. λ^2 - 5λ - 14 = 0, i.e. (λ - 7)(λ + 2) = 0, and so λ = 7, -2. For λ = 7, eigenvectors are given by
[ -5  4 ] [x1]   [0]
[  5 -4 ] [x2] = [0],
and so -5x1 + 4x2 = 0, and we can take as eigenvector (4, 5). (Note that this is the top row of the matrix A - λI (which is written out above) with the order reversed and one sign changed.) Similarly for λ = -2 we may take (1, -1).
Next, these two eigenvectors are taken as the columns of P, i.e. P = [4 1; 5 -1]. Hence (if you need it) P^{-1} = (1/9)[1 1; 5 -4]. (Remember: to find the inverse of a 2×2 matrix, swop the diagonal entries, change the sign of the off-diagonal entries, and divide by the determinant.)
Infinitely many different answers are possible, by swapping the columns of P, and multiplying them by different non-zero constants.
(b) λ = 3 ± 2i, P as stated.
(c) λ = 2, -3, P as stated.
(d) λ = 2, -3, P = [-2i i; 1 2].
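Any of these answers is quickly verified numerically; for instance, for (a) (a Python/numpy check, illustrative only):

    import numpy as np

    A = np.array([[2, 4], [5, 3]])
    P = np.array([[4, 1], [5, -1]])
    print(np.linalg.inv(P) @ A @ P)   # approximately diag(7, -2)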
In the next few problems we write Jn(λ) for the Jordan matrix, the n×n matrix whose (i, j)-th entry is: λ if i = j; 1 if j = i + 1; and 0 otherwise. Thus for example
J3(5) =
[ 5 1 0 ]
[ 0 5 1 ]
[ 0 0 5 ].

27. (E) Write down A = J2(1), where the notation is defined above. State its eigenvalue(s). Write down the matrix A - λI for each eigenvalue λ, state its rank and nullity, and hence (or otherwise) write down the geometric multiplicity of each eigenvalue. Also compute (A - λI)^2, and hence or otherwise write down the algebraic multiplicity of each eigenvalue.



Solution: J2(1) = [1 1; 0 1]. The eigenvalues of an upper triangular matrix are the diagonal entries, i.e. only 1 here. J2(1) - 1.I = [0 1; 0 0]. Rank is the dimension of the space spanned by the columns of the matrix, i.e. the space Sp((1, 0)) in this case, and so rank = 1. Rank + nullity = 2 (since 2×2 matrices act on R^2, with dimension 2), and so the nullity is 1. This is the dimension of the space of solutions v of (A - 1.I)v = 0, i.e. the eigenspace corresponding to eigenvalue 1, and therefore equals the geometric multiplicity. (J2(1) - 1.I)^2 = [0 0; 0 0]. Here the rank is 0 and the nullity 2, i.e. the space of solutions of (A - I)^2 v = 0 has dimension 2, and so the algebraic multiplicity is 2.

Completion of
hand-in for
week 6

28. (S) Write down the eigenvalues of J3(λ), and find their algebraic and geometric multiplicity. How does this generalize to Jn(λ)?
[Ans: λ, 3, 1; for Jn(λ): λ, n, 1]

Solution: J3(λ) =
[ λ 1 0 ]
[ 0 λ 1 ]
[ 0 0 λ ].
This is an upper triangular matrix, and so the diagonal entries give all the eigenvalues, i.e. only λ here. Following the procedure of Q.27 we compute
J3(λ) - λI =
[ 0 1 0 ]
[ 0 0 1 ]
[ 0 0 0 ].
This has rank 2; rank + nullity = 3, and so the nullity is 1, which is the geometric multiplicity.
(J3(λ) - λI)^2 =
[ 0 0 1 ]
[ 0 0 0 ]
[ 0 0 0 ]
and (J3(λ) - λI)^3 = 0. The cube has rank 0 and nullity 3, and so the algebraic multiplicity is 3.
(Note the following: in lectures, a generalised eigenvector was defined as a solution v of (A - λI)^k v = 0, for some k = 1, ..., n, where A is n×n. Then the algebraic multiplicity was defined as the dimension of the space of all such generalised eigenvectors. But the result of Q.29a, which shows that the spaces corresponding to successive k are nested, implies that we need only consider k = n.)
For general n, the diagonal entries are again all λ, which is the only eigenvalue. Jn(λ) - λI has 1s immediately above the leading diagonal (in the (i, i+1) positions) and 0s elsewhere. The column space is spanned by the last n - 1 columns, and so the rank is n - 1, the nullity 1, and the geometric multiplicity is 1. Clearly (Jn(λ) - λI)^2 is zero except for a diagonal line of 1s shifted one place further up and to the right, intersecting the top row at the 3rd entry. The line shifts 1 place each time the power of Jn(λ) - λI is increased by 1, and so (Jn(λ) - λI)^n = 0 (the null matrix). Then the rank is 0, the nullity is n, and the algebraic multiplicity is n.
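The rank pattern of the powers is easy to watch numerically; e.g. for n = 4 (a Python/numpy illustration, not part of the sheet):

    import numpy as np

    n, lam = 4, 2.0
    J = lam * np.eye(n) + np.diag(np.ones(n - 1), k=1)   # J_4(2)
    N = J - lam * np.eye(n)
    for k in range(1, n + 1):
        print(k, np.linalg.matrix_rank(np.linalg.matrix_power(N, k)))
    # ranks 3, 2, 1, 0, i.e. nullities 1, 2, 3, 4; cf. Q.29(c): min(k, n)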
29. (S) For k > 1 the k-th generalized eigenspace of T : V → V with eigenvalue λ is Eλ,k := {v ∈ V | (T - λI)^k v = 0}. So, for k = 1 the generalized eigenspace is just the eigenspace in the usual sense.
(a) Show that if k ≤ l then Eλ,k ⊆ Eλ,l. Deduce that the algebraic multiplicity is greater than or equal to the geometric multiplicity.
(b) Let A = J3(λ). Show that Eλ,k = V for k ≥ 3. Describe Eλ,2 and give its dimension.
(c) In general, what is dim Eλ,k for the Jordan matrix Jn(λ)?
[Ans: (b) Sp((1,0,0), (0,1,0)), 2; (c) min(k, n)]
Solution:
(a) v ∈ Eλ,k ⇒ (T - λI)^k v = 0. Hence (T - λI)^l v = (T - λI)^{l-k} (T - λI)^k v = (T - λI)^{l-k} 0 = 0, i.e. v ∈ Eλ,l.
The geometric and algebraic multiplicities are, respectively, dim(Eλ,1) and dim(Eλ,n). Hence the result.
(b) The calculations in Q.28 contain all you need.
(c) Again, the solution to Q.28 shows that the matrix (Jn(λ) - λI)^k consists of zeros except for a diagonal line of ones which intersects the top row at the (k+1)-th position. The column space is spanned by columns k+1 to n inclusive. For k < n the rank of the matrix is therefore n - k, and for k ≥ n it is 0. Hence the nullity is k if k < n, and n otherwise.

30. (E) Give examples of each of the following:
(a) an operator on C^3 whose minimal polynomial is x^2.
(b) an operator on C^4 whose minimal polynomial is x(x - 1)^2.
(c) an operator on C^4 whose characteristic and minimal polynomials are both x(x - 1)^2(x - 3).
(d) an operator on C^4 whose characteristic polynomial is x(x - 1)^2(x - 3) and whose minimal polynomial is x(x - 1)(x - 3).
[Ans: (a) T(x, y, z) = (z, 0, 0), (b) T(x, y, z, w) = (x + y, y, 0, 0), (c) T(x, y, z, w) = (0, y + z, z, 3w), (d) T(x, y, z, w) = (0, y, z, 3w)]
Solution:
(a) 0 is the only eigenvalue. In a basis giving an upper triangular matrix, the diagonal entries are 0, and so the matrix is of the form
A =
[ 0 a b ]
[ 0 0 c ]
[ 0 0 0 ].
Also, we require A^2 = 0 (since x = A must satisfy x^2 = 0) but A ≠ 0 (otherwise the minimal polynomial would be x). Since the only possibly non-zero entry of A^2 is ac, we require ac = 0 but a, b, c not all zero, and one solution is
A =
[ 0 0 1 ]
[ 0 0 0 ]
[ 0 0 0 ].
Multiplying into the vector (x, y, z)^T gives the stated transformation in the standard basis.
(b) The characteristic polynomial is a quartic of which the minimal polynomial is a factor, and so one possibility for the characteristic polynomial is x^2(x - 1)^2. Then the eigenvalues are 0, 1, with algebraic multiplicity 2 each. In a basis with respect to which the matrix is block diagonal, with upper triangular blocks, we have a matrix of the form
A =
[ 1 a 0 0 ]
[ 0 1 0 0 ]
[ 0 0 0 b ]
[ 0 0 0 0 ].
The diagonal entries of A - I are 0, 0, -1, -1, and computing A(A - I)^2 shows that its only possibly non-zero entry is the (3,4) entry, which equals b; so we require b = 0 (all other entries are already 0). Taking (for example) a = 1, and the standard basis of C^4, we get the example stated.
For less cumbersome notation, you could note that the matrix is block diagonal with upper triangular blocks, i.e. of the form A = [A1 0; 0 A2], where A1, A2 are upper triangular 2×2 matrices having diagonal entries 1, 1 and 0, 0 respectively, and 0 means the 2×2 null matrix. In the same notation
A(A - I)^2 = [ A1(A1 - I)^2  0; 0  A2(A2 - I)^2 ],
where I now means the 2×2 unit matrix. Since (A1 - I)^2 = 0 automatically and A2 - I is invertible, it is clear that we require A2 = 0.
(c) Now the diagonal entries are 0, 1, 1, 3. Viewed as a block diagonal matrix, the 2×2 upper triangular block with diagonal entries 1 must have a non-zero off-diagonal entry, otherwise the minimal polynomial would be x(x - 1)(x - 3); and the matrix may be taken to be
A =
[ 0 0 0 0 ]
[ 0 1 1 0 ]
[ 0 0 1 0 ]
[ 0 0 0 3 ],
which leads to the stated example.
(d) In a suitable basis the matrix is upper triangular with diagonal entries which may be ordered 0, 1, 1, 3. Now there is no non-zero off-diagonal entry, i.e. the matrix is diag(0, 1, 1, 3), and the stated answer is obtained.

31. (S) Show that, for any operator T on C^2 such that T^2 = 0, 0 is the only eigenvalue, and that there is a basis such that T has matrix [0 1; 0 0] or [0 0; 0 0]. What are the characteristic and minimal polynomials in each case?

Hand-in for
Week 7

Solution: λ is an eigenvalue means there is a non-zero v such that Tv = λv, and so 0 = T^2(v) = T(T(v)) = T(λv) = λT(v) = λ^2 v. Hence λ^2 = 0, so λ = 0, and the characteristic polynomial is x^2.
There is a basis for which the matrix of T is upper triangular, and the diagonal entries must be 0, i.e. the matrix is A = [0 a; 0 0]. Thus the basis vectors (v1, v2) are such that T(v1) = 0, T(v2) = av1. If a ≠ 0, let v1′ = av1. Then (v1′, v2) is also a basis (obviously), and since T(v2) = v1′ the matrix of T is A = [0 1; 0 0]. The minimal polynomial is obviously x^2.
If a = 0 we already have the other possibility, and then the minimal polynomial is x.

32. (S) Suppose T is an operator on a vector space V, and let v ∈ V. Let p be the monic polynomial of smallest degree such that p(T)v = 0. Prove that p divides the minimal polynomial of T.
Solution: Let m(x) = q(x)p(x) + r(x), where m is the minimal polynomial, and r is the remainder on division by p. Thus deg(r) < deg(p) and m(T) = q(T)p(T) + r(T). Applying this to v, we conclude that 0 = 0 + r(T)v. If r were not the zero polynomial then, after scaling to make it monic, it would be a monic polynomial of smaller degree than p with r(T)v = 0, contradicting the minimality of p. Hence r = 0 and p divides m.
33. (S) Suppose T is the operator on C^3 which takes (x, y, z) to (y, z, 0). Show that there is no operator S such that S^2 = T.
Solution: Direct calculation shows that T^2 ≠ 0 but T^3 = 0, and the same argument as in Q.31 shows that 0 is the only eigenvalue of T. For any eigenvalue λS of S there must be a non-zero vector v such that S(v) = λS v. Therefore T(v) = S^2(v) = λS^2 v. Hence λS^2 is also an eigenvalue of T, and so λS = 0. Again the characteristic polynomial of S is x^3, i.e. S^3 = 0. Hence T^2 = S^4 = S^3 . S = 0, which is in contradiction with the direct computation of T^2.
34. (S) Prove the Cayley-Hamilton theorem for strictly upper triangular matrices, by direct computation. (The "strictly" here means that the diagonal values are 0.) Do the same for upper triangular matrices.
Solution: All diagonal entries of a strictly upper triangular n×n matrix A are 0, and so 0 is the only eigenvalue, and the characteristic polynomial is x^n; thus we have to show that A^n = 0. Consider first A^2, noting that a_{ij} = 0 for j ≤ i. The typical entry in A^2 is (A^2)_{ij} = Σ_{k=1}^{n} a_{ik} a_{kj}. Now a_{ik} = 0 if k ≤ i and a_{kj} = 0 if k ≥ j. Hence all terms in the sum vanish, and so (A^2)_{ij} = 0, provided that j ≤ i + 1. Therefore A^2 is also strictly upper triangular, with the additional property that the diagonal above the leading diagonal also vanishes. Similarly A^3 has two zero diagonals above the leading diagonal. Continuing, A^n = 0. (This can be dressed up as a proof by induction if you wish.)
If A is merely upper triangular, its diagonal entries are the eigenvalues, and the number of times each is present is its multiplicity. Hence the characteristic polynomial is Π_{i=1}^{n} (x - a_{ii}), and we must show that Π_{i=1}^{n} (A - a_{ii}I) = 0. Consider the product of the first two terms, M = (A - a11 I)(A - a22 I): the first factor is upper triangular with its first column zero, the second is upper triangular with its (2,2) entry zero, and multiplying them out shows that M is upper triangular with its entire first two columns zero. (This argument can be expressed in much the same manner as for the strictly upper triangular case, but the result is a little harder to follow.) Then M(A - a33 I) is upper triangular with its first three columns zero. Continuing, we find that A makes the stated characteristic polynomial vanish.


35. (S) A nilpotent operator T is one for which there exists a positive integer p such that T^p = 0.
(a) Suppose N is a nilpotent operator. Prove that 0 is the only eigenvalue.
(b) Suppose 0 is the only eigenvalue of an operator N on a complex vector space. Prove that N is nilpotent. Give an example to show that this is not necessarily true on a real vector space.
[Ans (b) T(x, y, z) = (0, -z, y), x, y, z ∈ R]
Solution:
(a) This is a trivial generalisation of the first bit of Q.31.
(b) In a basis in which the matrix of N is upper triangular, the diagonal entries are 0, and the matrix is strictly upper triangular (in the sense of Q.34). Then the first part of that question gives the desired result.
For the real case, we can try to find a matrix which has zero as an eigenvalue, and other eigenvalues which are complex. Projections have a zero eigenvalue (Q.24a), while all plane rotations (except for a rotation by a multiple of π) have complex eigenvalues, and so we try a combination of projection onto the y,z-plane followed by rotation by π/2 about the x-axis. This leads to the stated example. Its matrix in the standard basis is
[ 0 0  0 ]
[ 0 0 -1 ]
[ 0 1  0 ],
which has eigenvalues 0, ±i over C, i.e. just 0 over R, as required. It is not nilpotent: direct calculation shows that N^3 = -N (which also follows from the Cayley-Hamilton theorem), and clearly no power of N vanishes.

36. (S) A matrix is said to be in Jordan form if it is block-diagonal with each diagonal block being a Jordan matrix (see the description above Q.27). There may be more than one block with a given parameter value. So for example the 7×7 matrix
[ J3(5)  0      0     ]
[ 0      J1(5)  0     ]
[ 0      0      J3(2) ]
is in Jordan form. Note also that J1(λ) is the 1×1 matrix (a.k.a. number) λ, and so a diagonal matrix is an example of Jordan form where all the blocks are of size 1.

Parts (a),(b)
are part
hand-in for
Week 8

(a) Consider a matrix A in Jordan form with just two Jordan blocks Jp(λ), Jq(μ) where λ ≠ μ. Find the characteristic polynomial, eigenvalues, minimal polynomial, and algebraic multiplicity of each eigenvalue (i.e. the dimension of the corresponding generalized eigenspace) of A.
(b) The same, only now assume that λ = μ.
(c) Conjecture how this generalizes to a general matrix in Jordan form, with blocks J_{pi(λj)}(λj) (i.e. for each eigenvalue λj, a collection of blocks of sizes pi(λj)).
(d) (Added after the tutorial sheet was issued to the class.) In (a)-(c) above, give the geometric multiplicity of each eigenvalue.
[Ans (a) (x - λ)^p (x - μ)^q; λ, μ; (x - λ)^p (x - μ)^q; p, q. (b) (x - λ)^{p+q}; λ; (x - λ)^{max(p,q)}; p + q. (c) Π_j (x - λj)^{Σ_i pi(λj)}; {λj, j = 1, ...}; Π_j (x - λj)^{max_i pi(λj)}; Σ_i pi(λj). (d) 1, 1; 2; mG(λ) is the number of blocks for which λj = λ.]
Solution:
(a) A is upper triangular. The diagonal entries (p λ's and q μ's) give the eigenvalues and their algebraic multiplicities, and hence the characteristic polynomial, as stated. The minimal polynomial must be of the form (x - λ)^i (x - μ)^j. Using block notation, we have
A - λI = [ Jp(0)  0; 0  Jq(μ - λ) ],
since subtraction of λI lowers all diagonal entries by λ, and there is a corresponding form for A - μI. Then
(A - λI)^i (A - μI)^j = [ (Jp(0))^i (Jp(λ - μ))^j  0; 0  (Jq(μ - λ))^i (Jq(0))^j ].
The diagonal entries of (Jp(λ - μ))^j are (λ - μ)^j, i.e. non-zero, while (Jp(0))^i has all entries 0 except a diagonal of 1s, i places above the leading diagonal (by arguments similar to the calculations about strictly upper triangular matrices in Q.34). Therefore the upper left block vanishes iff i ≥ p, and similarly for the lower right block. The minimal polynomial is given by the smallest such values of i and j; hence the result.
(b) Again the eigenvalue(s), algebraic multiplicity and characteristic polynomial are immediate. The minimal polynomial must be of the form (x - λ)^i, and we have (A - λI)^i = [ (Jp(0))^i  0; 0  (Jq(0))^i ]. This vanishes iff {i ≥ p and i ≥ q}.
(c) If you have understood (a) and (b) this is obvious.
(d) (a) The matrix A - λI has p + q - 1 linearly independent rows (i.e. rows 1, ..., p - 1, corresponding to the first block, and rows p + 1, ..., p + q, corresponding to the second block). Hence its rank is p + q - 1, its nullity 1. Similarly for A - μI.
(b) Now A - λI has p + q - 2 linearly independent rows (i.e. rows 1, ..., p - 1, corresponding to the first block, and rows p + 1, ..., p + q - 1, corresponding to the second block). Hence the nullity is 2.
(c) As in the answer stated: the geometric multiplicity of λ is the number of blocks with parameter λ.
37. (S) Find two matrices A, B in Jordan form which have the same minimal polynomial, characteristic polynomial and dimensions of the (ordinary, not generalized) eigenspaces, and such that A ≠ B (and neither do A, B differ only by a change in the order of the blocks down the diagonal).
[Ans (example): A = diag(J3(λ), J2(λ), J2(λ)), B = diag(J3(λ), J3(λ), J1(λ))]
Solution: Let's try matrices with a single eigenvalue λ. As in Q.36(b),(c), the largest Jordan block determines the power of x - λ in the minimal polynomial, and so we ensure that this is the same for A and B. Next, the geometric multiplicity is given by the number of Jordan blocks. To see this, suppose (x1, ..., xn)^T is an eigenvector and that the first block has size r. Then the first r linear equations determining the eigenvector are just 0.x1 = 0 and xi = 0, i = 2, ..., r. Continuing with the remaining equations, it is easily seen that the most general eigenvector is (x1, 0, ..., 0, x_{r+1}, 0, ..., 0)^T, i.e. there is one arbitrary coordinate for each block. Therefore to complete A and B we add to the largest block the same number of smaller blocks, in such a way as to give matrices of the same size. The above seems the simplest solution.

Completion of
hand-in for
Week 8

38. (E)
(a) Suppose T is an operator on a vector space V and (v1, ..., vn) is a basis of V with respect to which the matrix of T is the Jordan matrix Jn(a). Describe the matrix of T with respect to the basis (vn, ..., v1).
(b) Repeat the question if (v1, ..., vn) is a basis with respect to which the matrix of T is in Jordan form (Q.36).
[Ans (a) The matrix J̃n(a) = (j̃_{i,j}) where j̃_{i,j} = a if i = j, 1 if i = j + 1, and 0 otherwise, i.e. the transpose of Jn(a). (b) A block diagonal matrix with the transposed blocks J̃_{nk}(λk), in the opposite order.]
Solution:
(a) Let wi = v_{n+1-i}, i = 1, ..., n. Since Tvi = avi + v_{i-1} if i > 1 and Tv1 = av1, it follows that Twj = awj + w_{j+1} if j < n and Twn = awn. Hence the result.
(b) Another way of getting to the previous result is by reversing the columns of the matrix and then reversing the rows. (The reason for this is that the change-of-basis matrix is a matrix of 1s along the diagonal from lower left to upper right, 0s elsewhere.) This makes the second result obvious.
39. (S) Let V be an n-dimensional real vector space.
(a) Let V have basis {v1, ..., vn}. Write down a basis of ^C V, the complexification of V, verifying that your basis is indeed a linearly independent spanning set. Hence find dim ^C V.
(b) Let c1, c2 ∈ R, c = c1 + ic2, and (v1, v2) ∈ ^C V. Show that c(v1, v2) = (c1v1 - c2v2, c1v2 + c2v1).
(c) Let T be an operator on V, and ^C T its complexification. Show that ^C T is an operator on ^C V.
(d) If I is the identity on V, show that ^C I is the identity on ^C V.
[Ans. (a) {(v1, 0), ..., (vn, 0)}, n.]
Solution:
(a) The stated set is linearly independent: for, let cj = xj + iyj, j = 1, ..., n, be complex numbers such that c1(v1, 0) + ... + cn(vn, 0) = 0. Then (x1v1, y1v1) + ... + (xnvn, ynvn) = 0, i.e. (x1v1 + ... + xnvn, y1v1 + ... + ynvn) = 0. Since {v1, ..., vn} is linearly independent over R, all the x's and y's must vanish. The set stated in the answer is also a spanning set, for any element of ^C V may be written as (x1v1 + ... + xnvn, y1v1 + ... + ynvn) = c1(v1, 0) + ... + cn(vn, 0).
(b) We write (v1, v2) as v1 + iv2 and multiply formally, giving (c1 + ic2)(v1 + iv2) = (c1v1 - c2v2) + i(c1v2 + c2v1), which we write as (c1v1 - c2v2, c1v2 + c2v1).
(c) Since ^C T(v1, v2) = (Tv1, Tv2), clearly ^C T maps ^C V into ^C V. For linearity, let α = α1 + iα2, β = β1 + iβ2 ∈ C and (v1, v2), (w1, w2) ∈ ^C V. Then
^C T(α(v1, v2) + β(w1, w2))
= ^C T((α1v1 - α2v2, α1v2 + α2v1) + (β1w1 - β2w2, β1w2 + β2w1))
= ^C T(α1v1 - α2v2 + β1w1 - β2w2, α1v2 + α2v1 + β1w2 + β2w1)
= (T(α1v1 - α2v2 + β1w1 - β2w2), T(α1v2 + α2v1 + β1w2 + β2w1))
= (α1Tv1 - α2Tv2 + β1Tw1 - β2Tw2, α1Tv2 + α2Tv1 + β1Tw2 + β2Tw1)
= (α1Tv1 - α2Tv2, α1Tv2 + α2Tv1) + (β1Tw1 - β2Tw2, β1Tw2 + β2Tw1)
= α(Tv1, Tv2) + β(Tw1, Tw2)
= α.^C T(v1, v2) + β.^C T(w1, w2).
(d) ^C I(v1, v2) = (Iv1, Iv2) = (v1, v2).

40. (S) Let ^C T be the complexification of an operator T on a finite-dimensional real vector space V, and ^C V the complexification of V.
(a) If λ is a real eigenvalue of ^C T, show that it is also an eigenvalue of T.
(b) If λ is a complex eigenvalue of ^C T, show that its complex conjugate λ̄ is also an eigenvalue of ^C T.
(c) If λ is a complex eigenvalue of ^C T, it may be proved that the algebraic multiplicities of λ and λ̄ are equal. Deduce that, if dim V is odd, T must have an eigenvalue.
Solution:
(a) There is a non-zero (v1, v2) ∈ ^C V such that ^C T(v1, v2) = λ(v1, v2). Hence (Tv1, Tv2) = (λv1, λv2), since λ is real. Hence Tv1 = λv1 and Tv2 = λv2. Now v1, v2 are not both zero, and so at least one is a non-zero eigenvector of T with eigenvalue λ.
(b) Again there is a non-zero (v1, v2) ∈ ^C V such that ^C T(v1, v2) = λ(v1, v2). Hence (Tv1, Tv2) = (αv1 - βv2, αv2 + βv1), where λ = α + iβ, α, β ∈ R. Hence (Tv1, -Tv2) = (αv1 - βv2, -αv2 - βv1), and so ^C T(v1, -v2) = (α - iβ)(v1, -v2). Hence the result.
(c) The sum of the algebraic multiplicities of the eigenvalues of ^C T is dim ^C V = dim V. Complex (non-real) eigenvalues give an even contribution, and so if dim V is odd, the contribution from real eigenvalues must be non-zero. Hence there must exist a real eigenvalue of ^C T, and this is also an eigenvalue of T.

41. (P) Let V be a complex n-dimensional space. Its decomplexification is the real vector space ^R V = {v : v ∈ V} where addition of vectors is defined as in V and multiplication by real numbers is the scalar multiplication of V, restricted to the reals. Let {v1, ..., vn} be a basis of V. Show that {v1, ..., vn, iv1, ..., ivn} is a basis of ^R V, and write down its dimension. Is ^C(^R V) = V?
[Ans: No]
Solution: Any v ∈ V may be written v = c1v1 + ... + cnvn, where c1, ..., cn ∈ C. Expressing the real and imaginary parts as cj = aj + ibj, we conclude v = a1v1 + ... + anvn + b1.iv1 + ... + bn.ivn, and so the stated set is a spanning set of ^R V. It is also linearly independent, for if a1v1 + ... + anvn + b1.iv1 + ... + bn.ivn = 0, with all aj, bj ∈ R, we may define cj = aj + ibj and deduce that c1v1 + ... + cnvn = 0. Since the vj are linearly independent over C we conclude that all cj = 0, and hence all aj and bj are zero. Thus the set {v1, ..., vn, iv1, ..., ivn} is a linearly independent spanning set of ^R V. Hence dim ^R V = 2 dim V.
^C(^R V) ≠ V (unless dim V = 0), because dim ^C(^R V) = dim ^R V (by Q.39), and so dim ^C(^R V) = 2 dim V.

Bilinear and Quadratic Forms on Real Vector Spaces


42. (S)
(a) In lectures an SBF was defined by its properties of (i) symmetry, (ii) linearity in the first argument. Prove that an SBF is linear in the second argument.
(b) If B is a symmetric n×n matrix, prove that b(x, y) = x^T B y defines an SBF on R^n, where x, y are column vectors as usual.
(c) Let b be an SBF on a vector space V. Show that Nb := {x ∈ V | b(x, v) = 0 for all v ∈ V} is a subspace of V.
Solution:
(a)
b(u, α1v1 + α2v2) = b(α1v1 + α2v2, u)   (symmetry)
= α1 b(v1, u) + α2 b(v2, u)   (linearity in the first argument)
= α1 b(u, v1) + α2 b(u, v2)   (symmetry).
(b)
i. (symmetry) x^T B y is a product of 1×n, n×n and n×1 matrices, i.e. a 1×1 matrix. Hence it equals its own transpose, i.e. x^T B y = (x^T B y)^T = y^T B^T (x^T)^T = y^T B x. (Here we have used the rule for transposing a product, and the fact that B is symmetric.)
ii. (linearity) Obvious, really:
(α1x1 + α2x2)^T B y = (α1x1^T + α2x2^T) B y = α1 x1^T B y + α2 x2^T B y.
(c) Clearly 0 ∈ Nb; and if x1, x2 ∈ Nb then, for every v ∈ V, b(α1x1 + α2x2, v) = α1 b(x1, v) + α2 b(x2, v) = 0, so α1x1 + α2x2 ∈ Nb. Hence Nb is a subspace.

43. (E)
(a) Consider the SBF b on R^2 with matrix B = [2 3; 3 1]. Write down the corresponding quadratic form in terms of standard coordinates x1, x2 on R^2. Find a vector v ≠ 0 in R^2 such that b(v, v) = 0. Give a sketch showing the regions in the plane where b(x, x) is positive, negative and zero. Find the eigenvalues of the matrix, and hence or otherwise determine the type, rank and signature of b.
(b) Consider the SBF on R^3 with matrix B = diag(1, 1, -1). Write down the corresponding quadratic form in terms of standard coordinates x1, x2, x3 on R^3. Sketch the regions in R^3 where b(x, x) is positive, negative and zero. Find the eigenvalues of the matrix, and hence or otherwise determine the type, rank and signature of b.
[Ans: (a) 2x1^2 + 6x1x2 + x2^2; v = (-3 + √7, 2); λ = (3 ± √37)/2; (1,1), 2, 0. (Sketch: the two lines b = 0 through the origin divide the plane into four sectors, with b > 0 and b < 0 in alternate sectors.)
(b) x1^2 + x2^2 - x3^2; {1, 1, -1}; (2,1), 3, 1. (Sketch: the double cone x3^2 = x1^2 + x2^2, with b = 0 on the cone, b > 0 in the equatorial region outside it, and b < 0 in the two polar regions inside it.)]

Solution:

Part (c) is
part hand-in
for Week 9




(a) The quadratic form is
(x1, x2) B (x1, x2)^T = 2x1^2 + 6x1x2 + x2^2.
To find v we set this to zero: 2x1^2 + 6x1x2 + x2^2 = 0 ⇒ 2(x1/x2)^2 + 6(x1/x2) + 1 = 0 if x2 ≠ 0. Hence x1/x2 = -3/2 ± √7/2. Taking the plus sign and x2 = 2 (for example) leads to the stated result.
There are two lines on which b = 0 (obtained by taking the two signs), and these are sketched in the Maple plot. To find which regions are positive and which negative, just try some simple points. For instance b(v, v) = 9 > 0 for v = (1, 1).
Eigenvalues are given by (2 - λ)(1 - λ) - 9 = 0. Hence the stated results. The eigenvalues are of opposite sign (one each), hence p = 1 and q = 1. Rank and signature are p + q and p - q respectively.
(b) The quadratic form is obvious. b > 0 iff x3^2 < x1^2 + x2^2. In cylindrical polar coordinates r, θ, z this is |z| < r, which is the equatorial zone between the two cones in the figure. b = 0 on their surface, and b < 0 in the polar regions.
The eigenvalues are obvious: two positive and one negative. Hence the results.
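A numerical cross-check of the eigenvalues in (a) (Python/numpy, illustrative only):

    import numpy as np

    B = np.array([[2.0, 3.0], [3.0, 1.0]])
    print(np.linalg.eigvalsh(B))                  # approx [-1.541, 4.541]
    print((3 - 37**0.5) / 2, (3 + 37**0.5) / 2)   # the same values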
44. (S)
(a) Let V be the vector space of 2×2 matrices with real entries and trace zero. What is the dimension of V? (Read (a) right through before deciding whether to give up.) Consider the SBF b(X, Y) = Trace(XY) (known, by the way, as the trace form) on V. Find the matrix of b with respect to the basis
[0 1; 0 0], [0 0; 1 0], [1 0; 0 -1]
of V. Find the eigenvalues of the matrix, and hence or otherwise determine the type, rank and signature of b.
(b) Let P2 denote the real vector space of polynomials of degree ≤ 2 in a variable x. What is the dimension of P2? Show that b(p(x), q(x)) := ∫_0^1 p′(x)q′(x) dx defines an SBF on P2. (The dashes indicate derivatives.) Find a nonzero element of Nb (see question 42c) and hence deduce that b is degenerate. Find the matrix of b with respect to the basis {x^2, x, 1} of P2 and hence or otherwise find the type, rank and signature of b.
[Ans: (a) 3;
[ 0 1 0 ]
[ 1 0 0 ]
[ 0 0 2 ];
{1, -1, 2}; (2,1), 3, 1. (b) 3; 1;
[ 4/3 1 0 ]
[ 1   1 0 ]
[ 0   0 0 ];
{(7 ± √37)/6, 0}; (2,0), 2, 2]
Solution:
(a) A general such matrix is
A = [a b; c -a] = a[1 0; 0 -1] + b[0 1; 0 0] + c[0 0; 1 0],
which shows that the three matrices given later are indeed a spanning set; they are also obviously linearly independent, since their general linear combination A equals zero iff a = b = c = 0. Thus they are a basis, and dim V = 3.
Calling the three matrices (as ordered in the question) X1, X2, X3, the entries of the matrix B are Bij = b(Xi, Xj). Thus B11 = Tr(X1^2) = Tr 0 = 0. Similar, not always quite so trivial, calculations lead to the stated matrix.
The eigenvalues are given by (λ^2 - 1)(2 - λ) = 0; hence the results. Two are positive, one negative, and so the type, rank and signature are as stated.
(b) P2 (sometimes called R2(x) in class) has the obvious basis given, hence the dimension.
b is obviously symmetric, and linear in the first argument (as both differentiation and integration are linear).
Clearly b(1, p) = 0 for all p ∈ P2. Hence Nb contains at least the one-dimensional subspace Sp(1). Hence the rank of b must be less than 3, i.e. the form is degenerate.
The matrix has 11-component B11 = ∫_0^1 (2x)^2 dx = 4/3. Similarly the other entries are easily found, as given in the answer. The eigenvalues are given by ((4/3 - λ)(1 - λ) - 1)(-λ) = 0, which leads to the stated results. p = 2, q = 0, and the other results follow immediately.
45. (S)
(a) Suppose that the matrix of b in a basis (v1, ..., vn) is in the standard form
[ Ip  0    0 ]
[ 0  -Iq   0 ]
[ 0   0    0 ].
Identify a p-dimensional subspace on which b is positive definite. Identify also an (n - p)-dimensional subspace on which b is negative semi-definite (meaning that b(v, v) ≤ 0 for all v in the subspace). By considering intersections, deduce that there is no subspace of dimension larger than p on which b is positive definite.
(b) True or false: an SBF is non-degenerate if and only if its matrix does not have zero as an eigenvalue.
(c) What are the possible types of an SBF on R^3 if there exists a 2-dimensional subspace on which it is negative definite?
(d) An SBF on R^n has type (p, q). What is the largest possible dimension for a subspace V such that b(v, v) < 0 for all nonzero v ∈ V?
(e) Same as (d), but now b(v, v) ≤ 0 for all nonzero v ∈ V.
(f) True or false: if an SBF is positive definite on subspaces U, U′ ⊆ V then it is positive definite on their sum U + U′. Explain your answer. (Hint: take a look at Q.43.)
[Ans: (a) Sp(v1, ..., vp) if p > 0, Sp(vp+1, ..., vn) if p < n; (b) T; (c) (0,2), (0,3), (1,2); (d) q; (e) n - p; (f) F]
Solution:
(a) The associated quadratic form is Σ_{i=1}^{p} xi^2 - Σ_{i=p+1}^{p+q} xi^2, and so the subspaces stated are obvious. Let U be a subspace on which b is positive definite, and W the stated subspace on which it is negative semi-definite. On non-zero vectors v ∈ U ∩ W we would have both b(v, v) > 0 and b(v, v) ≤ 0. Hence dim(U ∩ W) = 0. Also U + W ⊆ V (the vector space on which b is defined) and so dim(U + W) ≤ n. Using the general result that dim(U) + dim(W) = dim(U + W) + dim(U ∩ W) we deduce that
dim(U) ≤ dim(U + W) + dim(U ∩ W) - dim(W) ≤ n + 0 - (n - p) = p.
(b) The form is non-degenerate iff, when put in standard form, p + q = n. If this is true, all eigenvalues are non-zero, and conversely.
(c) Here we know only that q ≥ 2, while p + q ≤ 3. This leads to the three possibilities listed. All are realisable; for instance the forms -x1^2 - x2^2, -x1^2 - x2^2 - x3^2, -x1^2 - x2^2 + x3^2.
(d) See the first part of this question (swapping positive and negative).
(e) Another application of the same idea: there is a subspace of dimension p on which b is positive definite, and intersecting with it as in (a) bounds the dimension by n - p.
(f) Look at part (a) of the question indicated. Choose any two 1-dimensional subspaces U, U′ lying in the region where b > 0. Then U + U′ = R^2, but b is not positive definite on the whole of R^2.
46. (S)
(a) Find a 2×2 orthogonal matrix P such that P^T S P is diagonal, where S = [7 -6; -6 -2].
(b) Find also a non-singular matrix P′ such that P′^T S P′ is diagonal with diagonal entries ±1 or 0.
(c) What is the type of the SBF on R^2 given by the matrix S? What is its rank and what is its signature?
[Ans: (a) P = (1/√5)[2 1; -1 2]; (b) P′ = [√2/5 1/5; -√2/10 2/5]; (c) (1,1), 2, 0]
Solution:
(a) Eigenvalues are given by (7 - λ)(-2 - λ) - 36 = 0, whence λ = 10, -5. For λ = 10 an eigenvector is given by -3x1 - 6x2 = 0, e.g. (2, -1). For an orthogonal matrix this must be normalised, by dividing by √5. This gives the first column of the matrix given, and the other eigenpair gives the second.
(b) Divide each column by √|λ| to get the stated matrix.
(c) At this point the form is represented by the matrix diag(1, -1), which leads to the stated results.
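Both parts can be verified numerically; a Python/numpy sketch (illustrative only):

    import numpy as np

    S = np.array([[7.0, -6.0], [-6.0, -2.0]])
    P = np.array([[2.0, 1.0], [-1.0, 2.0]]) / np.sqrt(5)
    print(P.T @ S @ P)              # approx diag(10, -5)
    Pp = P / np.sqrt([10.0, 5.0])   # divide column j by sqrt(|lambda_j|)
    print(Pp.T @ S @ Pp)            # approx diag(1, -1)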

47. (S) Let V denote the vector space of n×n real matrices.
(a) What is the dimension of V, of the subspace of symmetric matrices, and of the subspace of antisymmetric matrices?
(b) Let b(X, Y) = Trace(XY). Show that b(X, Y) = Σ_{j=1}^{n} Σ_{k=1}^{n} Xjk Ykj, and that b defines an SBF on V.
(c) Show that b is positive definite on the subspace of symmetric matrices and negative definite on the subspace of antisymmetric matrices.
(d) Find the type, rank and signature of b.

This whole
question
completes the
hand-in for
Week 9

[Ans: (a) n2 , n(n + 1)/2, n(n 1)/2 (d) (n(n + 1)/2, n(n 1)/2), n2 , n]
Solution:
(a) V is spanned by the n2 matrices VIJ , where (VIJ )ij = 1 if i = I and j = J, 0 otherwise. These are
obviously linearly independent, and so dim V = n2 . A less formal but more straightforward way of
seeing this is that n n matrices have n2 elements which can be chosen independently.
For symmetric matrices the elements above the diagonal determine those below the diagonal; in
addition the elements on the diagonal may be chosen independently. Therefore the total number of
elements that may be chosen independently is 1 + 2 + . . . + n = n(n + 1)/2.
For antisymmetric matrices the diagonal entries are 0, and again those above the diagonal determine
those below. Therefore the number of entries which may be chosen independently is less than for
symmetrical matrices by the number on the diagonal, n, giving the stated result.
(b) (XY)_ii = Σ_{j=1}^n X_ij Y_ji; hence the result. (Recall that the trace is the sum of diagonal entries.)
b is obviously linear in the first argument. For symmetry we note that Σ_{j=1}^n Σ_{k=1}^n X_jk Y_kj = Σ_{k=1}^n Σ_{j=1}^n Y_kj X_jk, which is the expression for b(Y, X), except that the dummy indices of summation j, k are interchanged.
(c) Let Y = X, where X is symmetric. Then b(X, X) = Σ_{j=1}^n Σ_{k=1}^n X_jk X_kj = Σ_{j=1}^n Σ_{k=1}^n X_jk^2, by symmetry. This is positive definite (i.e. strictly positive, unless X = 0). Similarly, if X is antisymmetric we have b(X, X) = −Σ_{j=1}^n Σ_{k=1}^n X_jk^2.
(d) Choose bases of the subspaces of symmetric and antisymmetric matrices with respect to which the matrix of b is diagonal in standard form (Q.45a). Their union is clearly a basis of V with respect to which b is diagonal with n(n + 1)/2 values of 1 on the diagonal and n(n − 1)/2 values of −1 on the diagonal. These are the values of p, q, from which the other results are immediate.
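Not part of the original question: for a concrete check of (d), one can build the n^2 × n^2 matrix of b on the basis E_IJ (where (E_IJ)_ij = 1 if i = I, j = J, else 0) and count eigenvalue signs. A NumPy sketch for n = 3:

import numpy as np

n = 3
B = np.zeros((n * n, n * n))
for I in range(n):
    for J in range(n):
        for K in range(n):
            for L in range(n):
                # b(E_IJ, E_KL) = Trace(E_IJ E_KL) = 1 iff J = K and L = I
                B[I * n + J, K * n + L] = 1.0 if (J == K and L == I) else 0.0

eigs = np.linalg.eigvalsh(B)
print(int(np.sum(eigs > 1e-9)), int(np.sum(eigs < -1e-9)))   # 6, 3: i.e. n(n+1)/2, n(n-1)/2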


48. (S) Let b be the SBF on R^4 given by the matrix B given in block form as B = ( I_2 0 ; 0 −I_2 ). Let A be a fixed 2 × 2 matrix. Define U := { x ∈ R^4 | x = ( Av ; v ), v ∈ R^2 }. (We are using block form notation above.) Show that U is a subspace of R^4 and state its dimension.
Show that b is identically zero on U if and only if A is an orthogonal matrix (i.e. iff A^T A = I).
[Ans: 2]

Solution: Let (v_1, v_2) be a basis of R^2. Then clearly U is spanned by the two vectors ( Av_i ; v_i ), i = 1, 2, which are obviously linearly independent. Hence they are a basis of U, whose dimension must be 2.
Next, with u = ( Av ; v ), we have
b(u, u) = u^T B u
        = ( (Av)^T , v^T ) B ( Av ; v )
        = ( v^T A^T , v^T ) ( Av ; −v )
        = v^T A^T Av − v^T v
        = v^T (A^T A − I)v.
Hence b is identically zero iff the form on R^2 defined by the matrix A^T A − I is identically zero. By considering this form with respect to a basis which puts its matrix in standard form, this happens if and only if A^T A − I = 0.
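A quick numerical illustration (mine, not the sheet's): b vanishes on U for a rotation matrix but not for a general A.

import numpy as np

B = np.diag([1.0, 1.0, -1.0, -1.0])

def b_on_U(A, v):
    u = np.concatenate([A @ v, v])       # the block vector (Av, v) in U
    return u @ B @ u                     # equals v^T (A^T A - I) v

t = 0.7
A_orth = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])  # orthogonal
A_gen = np.array([[1.0, 2.0], [0.0, 1.0]])                            # not orthogonal

rng = np.random.default_rng(0)
for v in rng.standard_normal((3, 2)):
    print(b_on_U(A_orth, v), b_on_U(A_gen, v))   # first ~ 0, second generally not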

49. (S) Consider the quadratic form φ = 2x^2 + 3y^2 + z^2 − 4xy + 2xz + 2yz.
(a) Write down the (symmetric) matrix B of this quadratic form.
(b) By evaluating determinants only, determine the type of this form.

Part (a) is part hand-in for Week 10

(c) Find the eigenvalues and eigenvectors of B and check that the signs of the eigenvalues are consistent with the derivation of the type in the previous part. (You might want to use Maple to find the eigenvalues: use evalf(LinearAlgebra[Eigenvalues](B)).)

[Ans: (a) B = ( 2 −2 1 ; −2 3 1 ; 1 1 1 ), (b) (2,1), (c) 4.57, −0.71, 2.14 or (harder) λ_k = 2√(7/3) sin( (1/3) arcsin( 3√3/(14√7) ) + 2kπ/3 ) + 2, k = 0, 1, 2]
Solution:
(a) See above. The diagonal entries are the coefficients of x^2, y^2, z^2, and the off-diagonal entries are half the coefficients of the mixed terms (e.g. −4xy gives (B)_12 = (B)_21 = −2).
(b) Computing the 1 × 1, 2 × 2 and 3 × 3 (sub)determinants starting from the top left, the results are 2, 2, −7. Thus the number of changes of sign in the sequence 1, 2, 2, −7 is 1, and so q = 1. The form is non-degenerate (det B ≠ 0) and so p = 2.
(c) Suitable Maple code is
>with(LinearAlgebra):
>A:=Matrix([[2,-2,1],[-2,3,1],[1,1,1]]):
>evalf(Eigenvalues(A));
Without evalf the expressions are surprisingly complex, even though it is known that the result is real. Even the numerical evaluation leads to a small complex part, because of rounding error. The trigonometric form given in the answer is obtained by a standard method; see, for example, http://mathworld.wolfram.com/CubicFormula.html, starting at about eq.(71).
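The same checks can be made in NumPy, for those without Maple (an illustrative sketch, not part of the sheet):

import numpy as np

B = np.array([[2.0, -2.0, 1.0], [-2.0, 3.0, 1.0], [1.0, 1.0, 1.0]])

# (b) Leading principal minors, supplemented with a leading 1.
print([1.0] + [np.linalg.det(B[:k, :k]) for k in (1, 2, 3)])   # ~ [1, 2, 2, -7]

# (c) Eigenvalues: two positive, one negative, consistent with type (2,1).
print(np.linalg.eigvalsh(B))                                   # ~ -0.71, 2.14, 4.57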
50. (S, with H extension) Show that the origin is a critical point of
f(x, y, z) = 2x^2 + y sin y + z^2 + 2(y + z) sin x − 2ky sin z
(where k is a constant). What can you say about the nature of the critical point for different values of k?
[Ans: Min if −1 < k < 0, saddle if k < −1 or if k > 0; (harder) min if k = 0, −1]
Solution: ∇f = (4x + 2(y + z) cos x, sin y + y cos y + 2 sin x − 2k sin z, 2z + 2 sin x − 2ky cos z) = (0, 0, 0) at (0, 0, 0).
At this point the Hessian is
H = ( 4    2    2  )
    ( 2    2  −2k  )
    ( 2  −2k    2  ).
The sequence of subdeterminants (supplemented with a leading 1) is 1, 4, 4, −16k(k + 1). If −1 < k < 0 then k(k + 1) < 0, so the final entry is positive; there is no sign change, and H is positive definite. Then (0, 0, 0) is a minimum. If k < −1 or k > 0 then H has type (2,1), and the critical point is a saddle.
Dealing with the end-points 0 and −1 is a little tedious, but illustrates a fairly standard procedure for such problems. It is best to first diagonalise. For k = 0 the eigenpairs are
( 2, (0, −1, 1)^T ), ( 6, (2, 1, 1)^T ), ( 0, (−1, 1, 1)^T ).
Let P be the matrix with columns equal to these eigenvectors. It gives the transformation from new coordinates x′, y′, z′ to the original coordinates, i.e.
x = 2y′ − z′
y = −x′ + y′ + z′
z = x′ + y′ + z′.
In these coordinates we have (for k = 0) f = 18y′^2 + 2x′^2 plus higher order terms, which confirms the expected type. The leading higher order terms are quartic; there are no cubic terms as f is invariant under the reflection (x, y, z) → (−x, −y, −z). Now perform a further coordinate transformation
x′ = x″ + g
y′ = y″ + h
z′ = z″,
where g, h are functions whose series expansions start with cubic terms. Then f = 2x″^2 + 18y″^2 + 4x″g + 36y″h plus the original quartic terms and terms of higher order. Now we can choose g, h to eliminate quartic terms containing x″ or y″ as a factor. This leaves the term (1/24)(∂^4 f/∂z″^4)(0, 0, 0) z″^4. We can compute this by setting x″ = y″ = 0 in f, i.e. setting x = −z″, y = z″, z = z″. Then f = 3z″^2 − 3z″ sin z″, and so the required quartic term is (1/2)z″^4. Thus f = 2x″^2 + 18y″^2 + (1/2)z″^4 plus terms of degree six and up. The dominant terms are positive, and so the origin is a minimum.

For k = −1, the eigenvalues are 4 ± 2√2, 0, and, in summary, a diagonalising transformation has matrix
(  2  −2   0 )
( √2  √2   1 )
( √2  √2  −1 ).
Then f = (16 + 8√2)x′^2 + (16 − 8√2)y′^2 plus quartic terms. All except the term in z′^4 can be eliminated by a further (nonlinear) change of variable, and this term can be evaluated by expanding f after setting x″ = y″ = 0, i.e. x = 0, y = z″, z = −z″, which gives the term (1/6)z″^4. Again the origin is a minimum.
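A numerical look at the Hessian spectrum as k varies (my sketch, not in the notes), confirming the classification:

import numpy as np

def hessian(k):
    return np.array([[4.0, 2.0, 2.0],
                     [2.0, 2.0, -2.0 * k],
                     [2.0, -2.0 * k, 2.0]])

for k in (-1.5, -1.0, -0.5, 0.0, 0.5):
    print(k, np.round(np.linalg.eigvalsh(hessian(k)), 3))
# For -1 < k < 0 all eigenvalues are positive (minimum); for k < -1 or k > 0
# one is negative (saddle); at k = 0 and k = -1 one eigenvalue is zero, and the
# quartic analysis above is needed.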

51. (S) What is the type of the SBF with matrix ( −3 12 7 ; 12 4 2 ; 7 2 2 )? You do not need to evaluate a 3 × 3 determinant (or compute eigenvalues - don't even think of it).
[Ans: (2,1)]
Solution: The lower right 1 × 1 and 2 × 2 subdeterminants are positive, and therefore b, restricted to the span of the last two basis vectors, is positive definite. Hence p ≥ 2. It is clearly negative definite on the span of the first basis vector, and so q ≥ 1. But p + q ≤ 3 and so the result follows.

52. (S) What is the type of the SBF with matrix
(a) ( 0 1 1 ; 1 1 1 ; 1 1 2 )
(b) ( 1 6 1 ; 6 −1 3 ; 1 3 2 )
(c) ( −3 −2 7 4 ; −2 −2 6 3 ; 7 6 1 1 ; 4 3 1 2 ) ?
[Ans: (a) (2,1), (b) (2,1), (c) (2,2)]

Solution:
(a) Evaluating subdeterminants, starting from the lower right, we get the sequence 2, 1, −1. There is one sign change (starting with a supplementary 1), hence the result.
(b) Evaluating subdeterminants, starting from the top left, gives the sequence 1, −37, −46, with only one sign change. Hence q = 1.
(c) The top left 1 × 1 and 2 × 2 subdeterminants are −3, 2, and so b is negative definite on the span of the first two basis vectors. Thus q ≥ 2. Similarly, however, on the span of the last two basis vectors, b is positive definite, and so p ≥ 2. But p + q ≤ 4, hence the result.
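Eigenvalue cross-check in NumPy (illustrative only; the intended method is subdeterminants, and the matrices are as reconstructed above):

import numpy as np

mats = {"a": [[0, 1, 1], [1, 1, 1], [1, 1, 2]],
        "b": [[1, 6, 1], [6, -1, 3], [1, 3, 2]],
        "c": [[-3, -2, 7, 4], [-2, -2, 6, 3], [7, 6, 1, 1], [4, 3, 1, 2]]}
for name, M in mats.items():
    eigs = np.linalg.eigvalsh(np.array(M, dtype=float))
    print(name, (int(np.sum(eigs > 0)), int(np.sum(eigs < 0))))   # (2,1), (2,1), (2,2)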
53. (S) Classify the following quadrics.
(a) x^2 + 2y^2 + 3z^2 + 2xy + 2xz = 1
(b) 2xy + 2xz + 2yz = 1
(c) x^2 + 3y^2 + 2xz + 2yz − 6z^2 = 1
You may be able to do all of these using determinants if you think carefully. You can always check your answer by asking Maple for the eigenvalues.
[Ans: (a) ellipsoid, (b) hyperboloid of two sheets, (c) hyperboloid of one sheet]
Solution:
(a) The corresponding matrix is ( 1 1 1 ; 1 2 0 ; 1 0 3 ), and the sequence of subdeterminants from the top left is 1, 1, 1. No sign change, therefore type (3,0). Hence the result.
(b) The matrix is ( 0 1 1 ; 1 0 1 ; 1 1 0 ), which looks very tricky. But the form is clearly not negative definite, and so p ≥ 1 and q ≤ 2. Looking at the top left 2 × 2 determinant (−1) we see that, restricted to the x, y plane, the form has type (1,1), and so q ≥ 1. Similarly, when restricted to each of the three coordinate planes, it has this type. Hence the subspace on which it is negative definite must intersect these planes, and must itself be a plane (no line can do so). Therefore q = 2.
(c) The matrix is ( 1 0 1 ; 0 3 1 ; 1 1 −6 ), and the sequence of determinants is 1, 3, −22. One sign change, and so q = 1.
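As the question suggests, the answers can be checked by computing eigenvalues; a NumPy version of that check (not part of the sheet):

import numpy as np

quadrics = {"a": [[1, 1, 1], [1, 2, 0], [1, 0, 3]],
            "b": [[0, 1, 1], [1, 0, 1], [1, 1, 0]],
            "c": [[1, 0, 1], [0, 3, 1], [1, 1, -6]]}
for name, M in quadrics.items():
    print(name, np.round(np.linalg.eigvalsh(np.array(M, dtype=float)), 3))
# (a) all positive: ellipsoid; (b) signs (-,-,+): hyperboloid of two sheets;
# (c) signs (-,+,+): hyperboloid of one sheet.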
54. (S) Let φ(x) be a quadratic form on R^n given by a symmetric matrix S. How are the maximum and minimum values of φ(x) on the unit sphere x^T x = 1 related to the eigenvalues of S? (Hint: orthogonal change of coordinates to standard form.)
Continuing, use Lagrange multipliers to find the critical points of x^T Sx subject to the constraint x^T x = 1.
[Ans: max_i λ_i, min_i λ_i; x a normalised eigenvector]
Solution: There is an orthogonal change of coordinates to a set of coordinates x′ such that the matrix of φ is diag(λ_1, . . . , λ_n). Also, because the transformation is orthogonal, the unit sphere is x′^T x′ = 1. Thus
φ = λ_1 x′_1^2 + ... + λ_n x′_n^2 ≥ min_i(λ_i)(x′_1^2 + ... + x′_n^2) = min_i(λ_i),
and the minimum is attainable, by taking x′_i = 1, where i is the index of the smallest eigenvalue. Similarly for the maximum.
The above calculation essentially gives the answer to the second part of the question. Using Lagrange multipliers, however, we set
∂/∂x_k ( Σ_{i,j} x_i S_ij x_j − λ Σ_i x_i^2 ) = 0, i.e. Σ_j S_kj x_j + Σ_i x_i S_ik − 2λx_k = 0.
By symmetry of S, this is Σ_j S_kj x_j + Σ_i x_i S_ki − 2λx_k = 0, where the two sums differ only in the dummy index of summation. Hence Σ_j S_kj x_j − λx_k = 0. In matrix notation this is just Sx = λx, and so x is an eigenvector. It must be normalised to satisfy the constraint x^T x = 1.
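A numerical illustration (mine, not the sheet's): sample x^T Sx on the unit sphere and compare with the extreme eigenvalues.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
S = (A + A.T) / 2                               # a random symmetric matrix

x = rng.standard_normal((100000, 4))
x /= np.linalg.norm(x, axis=1, keepdims=True)   # random points on the sphere
vals = np.einsum('ij,jk,ik->i', x, S, x)        # x^T S x for each sample

print(vals.min(), vals.max())                   # just inside the bounds:
print(np.linalg.eigvalsh(S)[[0, -1]])           # min and max eigenvalues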




55. (S) Consider the SBFs on R^2 given (with respect to the standard basis) by the matrices B = ( 1 3 ; 3 3 ), A = ( 2 1 ; 1 1 ). Show that one of these is positive definite and hence find a basis for R^2 with respect to which the matrices of both the SBFs are diagonal (with the positive definite one having the identity as its matrix). Write down the change of basis matrix that diagonalises both forms.
[Ans: ( 0 1 ; 1 −1 )]
Solution: The 1 × 1 and 2 × 2 leading subdeterminants of B are 1, −6, and so it is not positive definite. For A we get 2, 1, and it is positive definite.
det(B − λA) = det ( 1 − 2λ  3 − λ ; 3 − λ  3 − λ ) = (1 − 2λ)(3 − λ) − (3 − λ)^2,
and so λ = −2, 3. For λ = −2 (relative) eigenvectors are given by (1 − 2(−2))x_1 + (3 − (−2))x_2 = 0, and so x = ( a ; −a ) is a possibility. It has to be normalised with respect to A, i.e. x^T Ax = 1, and so a^2 = 1.

Completion of hand-in for week 10

We choose a = 1. Similarly for λ = 3 we may get ( 0 ; 1 ). Hence (in a different order) we obtain the columns of the transformation matrix X as given above. It is easy to verify that X^T AX and X^T BX are diag(1, 1) and diag(3, −2).
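Verification in NumPy (illustrative):

import numpy as np

B = np.array([[1.0, 3.0], [3.0, 3.0]])
A = np.array([[2.0, 1.0], [1.0, 1.0]])
X = np.array([[0.0, 1.0], [1.0, -1.0]])

print(X.T @ A @ X)    # diag(1, 1)
print(X.T @ B @ X)    # diag(3, -2)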

Quotient Spaces
56. (S) In each of the following cases show that ∼ is an equivalence relation on the set X, and describe [x] for x ∈ X.
(a) X = R, u ∼ v ⇔ u − v ∈ Z
(b) X = Z, a ∼ b ⇔ a + 2b = 3k for some k ∈ Z
(c) X = {(a, b) | a, b ∈ Z and b ≠ 0} (so an element of X is a pair of integers, with the second one non-zero), (a, b) ∼ (k, l) ⇔ al = bk.
[Ans: (a) {x + k | k ∈ Z}; (b) {x + 3k | k ∈ Z}; (c) {(k, l) | k/l = a/b}]


Solution:
(a) (i) a − a = 0 ∈ Z, and so a ∼ a; (ii) a ∼ b ⇒ a − b ∈ Z ⇒ b − a ∈ Z ⇒ b ∼ a; (iii) a ∼ b and b ∼ c ⇒ a − b ∈ Z and b − c ∈ Z. Hence (adding) a − c ∈ Z, and so a ∼ c. The equivalence class is obvious.
(b) Preliminary: subtracting 3b, a ∼ b ⇔ a − b is divisible by 3, i.e. iff a = b mod 3. Then the three assumptions of the definition are easy to verify. The equivalence class is obvious.
(c) Preliminary: (a, b) ∼ (k, l) ⇔ a/b = k/l. Hence the three assumptions are easy to check. The equivalence class is obvious.
57. (S)
(a) Show that if x ∈ R^2 is non-zero then there exists an invertible 2 × 2 matrix A such that Ae_1 = x, where e_1 is the first standard basis vector in R^2.
(b) Use the above to show that, given two non-zero vectors x, y ∈ R^2, there exists an invertible 2 × 2 matrix P such that y = Px. (Hint: take x to e_1 and then e_1 to y.)
(c) Let x ∼ y iff there exists an invertible 2 × 2 matrix A such that y = Ax. Show that this defines an equivalence relation on R^2.
(d) What are the equivalence classes for this equivalence relation? How many elements does R^2/∼ have?
[Ans: (d) {[e_1], [0]}, 2]
Solution:
(a) Place x in the first column of A, and choose for the second column any vector such that the columns are linearly independent. (This is possible, as x ≠ 0.) Call this matrix A_x. It is invertible, since the columns are linearly independent.
(b) Since A_x e_1 = x and A_y e_1 = y, y = A_y A_x^{-1} x. Let P = A_y A_x^{-1}; then P^{-1} = A_x A_y^{-1}.
(c) (i) Choose A = I. Thus x ∼ x for all x ∈ R^2. (ii) x ∼ y implies that there is an invertible A such that y = Ax. Then x = A^{-1}y; A^{-1} is invertible. Hence y ∼ x. (iii) is rather like (b) above.
(d) If x ≠ 0, (b) shows that all non-zero y are in [x]. By (a), obviously [x] = [e_1]. Also, x = 0 and x ∼ y implies that y = 0. Thus [0] consists of the single vector 0. There are two equivalence classes.

58. (S) The three assumptions for an equivalence relation are (i) reflexivity, (ii) symmetry, (iii) transitivity. The following argument appears to show that the first of these is redundant. By (ii), a ∼ b ⇒ b ∼ a, and then (iii) implies that a ∼ a. Spot the flaw, and produce an example of a relation which satisfies (ii) and (iii) but not (i).
[Ans: X = {0, 1}, a ∼ b ⇔ a + b = 0]
Solution: The flaw is that, for some a, there may be no b such that a ∼ b. In the example, this is the case for 1.

59. (E) Let V ⊆ X be a subspace.
(a) Check that the relation x ∼ y ⇔ x − y ∈ V is symmetric and transitive.
(b) Check that the addition defined in class for the quotient space X/V is well defined.
(c) Check that the distributive law λ(x + y) = λx + λy, for all vectors x, y and scalars λ, holds in X/V.

(a) is part hand-in for Week 11

Solution:
(a) x ∼ y ⇒ x − y ∈ V. Hence y − x ∈ V, as a subspace must be closed under multiplication by the scalar −1. Hence y ∼ x. Similarly x ∼ y and y ∼ z ⇒ x − y ∈ V and y − z ∈ V. Hence x − z ∈ V, as V must be closed under addition. Hence x ∼ z.
(b) Suppose [x] = [x′] and [y] = [y′]. Then x − x′ ∈ V and y − y′ ∈ V. Hence x − x′ + y − y′ = (x + y) − (x′ + y′) ∈ V, and so [x + y] = [x′ + y′], i.e. addition is well defined.
(c)
λ([x] + [y]) = λ([x + y]) by definition of + in X/V
             = [λ(x + y)] by definition of scalar multiplication in X/V
             = [λx + λy] by the distributive law in X
             = [λx] + [λy] by definition of addition again
             = λ[x] + λ[y] by definition of scalar multiplication again.

60. (S) Suppose T : X → Y is a linear map and that V ⊆ X is a subspace such that V ⊆ ker T. Define a linear map T̄ : X/V → Y (checking that the map you have defined is both well defined and linear) such that the diagram

    X --T--> Y
    |       ^
    P      T̄
    v     /
     X/V

commutes. (The vertical map is the usual one, denoted P in lectures.) Find the dimension of the kernel of T̄ in terms of the dimensions of V and ker T. (Hint: apply the rank theorem to T and T̄.)
[Ans: dim ker T − dim V]
Solution: Let T̄([x]) = T(x). This is well defined, as [x] = [x′] ⇒ x − x′ ∈ V ⊆ ker T, and so T(x − x′) = 0 = T(x) − T(x′). It is also linear, as
T̄(λ_1[x_1] + λ_2[x_2]) = T̄([λ_1x_1 + λ_2x_2]) by definition of the operations in X/V
                        = T(λ_1x_1 + λ_2x_2) by definition of T̄
                        = λ_1T(x_1) + λ_2T(x_2) since T is linear
                        = λ_1T̄([x_1]) + λ_2T̄([x_2]) by definition of T̄ again.
T̄P(x) = T̄([x]) = T(x), and so the diagram commutes.
rank T + dim ker T = dim X and rank T̄ + dim ker T̄ = dim X/V = dim X − dim V. But obviously rank T = rank T̄, and so dim X − dim ker T = dim X − dim V − dim ker T̄. Hence dim ker T̄ = dim ker T − dim V.

61. (S) Let T be the linear map on R^3 with matrix ( 1 0 −1 ; 0 2 0 ; 1 0 0 ) relative to the standard basis (e_1, e_2, e_3). Show that V = Sp(e_2) is invariant under T (i.e. T(V) ⊆ V). Write down a basis of R^3/V, and the matrix of the canonical map T̄ with respect to this basis.
[Ans: ([e_1], [e_3]), ( 1 −1 ; 1 0 )]

Solution: T(e_2) = ( 1 0 −1 ; 0 2 0 ; 1 0 0 )( 0 ; 1 ; 0 ) = ( 0 ; 2 ; 0 ) = 2e_2. Thus T(e_2) ∈ V, and so V is invariant. Therefore T̄ exists.

Completion of hand-in for Week 11

The vectors [e_1], [e_3] are linearly independent, as a_1[e_1] + a_3[e_3] = [0] ⇒ a_1e_1 + a_3e_3 ∈ V, i.e. there exists a_2 such that a_1e_1 + a_3e_3 = a_2e_2. Since (e_1, e_2, e_3) is a basis of R^3, a_1 = a_2 = a_3 = 0. Also dim R^3/V = dim R^3 − dim V = 2, and so ([e_1], [e_3]) must be a basis of R^3/V.
T̄([e_1]) = [T(e_1)] = [e_1 + e_3] = [e_1] + [e_3] and T̄([e_3]) = [T(e_3)] = [−e_1] = −[e_1]. The coefficients of [e_1], [e_3] give the columns of the required matrix.
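A small computational sketch (my own identification, not spelled out in the notes): since V = Sp(e_2) is T-invariant, R^3/V can be identified with the e_1, e_3 coordinates, and the matrix of T̄ is then T with the second row and column deleted.

import numpy as np

T = np.array([[1, 0, -1], [0, 2, 0], [1, 0, 0]])
keep = [0, 2]                        # coordinates corresponding to [e1], [e3]
print(T[np.ix_(keep, keep)])         # [[1, -1], [1, 0]], as in the answer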
62. (S) Consider the differentiation map D : P_3 → P_3, where P_3 is the real vector space of polynomials of degree ≤ 3 in a variable x. Show that D gives rise to a linear map D̄ : P_3/V → P_3/V, where V is the subspace of constant polynomials. What is the matrix of D̄ with respect to the basis ([x], [x^2], [x^3]) of P_3/V?
[Ans: ( 0 2 0 ; 0 0 3 ; 0 0 0 )]
Solution: Since V is invariant under D, D̄ is well defined. Then
D̄([x]) = [D(x)] = [1] = 0 ∈ P_3/V
D̄([x^2]) = [D(x^2)] = [2x] = 2[x]
D̄([x^3]) = [D(x^3)] = [3x^2] = 3[x^2],
from which the columns of the matrix are immediate.
Polar and Singular Value Decomposition
63. (S) Find (by explicit calculation) all 2 × 2 real symmetric positive definite matrices S satisfying S^2 = I_2, where I_2 is the 2 × 2 unit matrix.
[Ans: I_2]

Solution: S = ( a b ; b d ) works iff
a^2 + b^2 = 1        (2)
b(a + d) = 0
b^2 + d^2 = 1        (3)
By eqs.(2) and (3), a = ±d.
(a) If a = d the only conditions are a^2 + b^2 = 1 and 2ab = 0. If a ≠ 0 then b = 0 and the results are ±I_2. Of these only I_2 is positive definite.
(b) If a = −d the only condition is a^2 + b^2 = 1. But if a ≠ 0 then either a < 0 or d < 0, and so the matrix is negative definite on the subspace spanned by the first or second basis vector. If a = 0, b = ±1, det S = −1, and so S must be of type (1,1), and so not positive definite.


64. (S) Find the polar form A = S√(A^T A) of the matrices
(a) (1/√10) ( 10 6 ; 0 8 )    (b) ( 0 2 0 ; 0 0 3 ; 1 0 0 )
[Ans: (a) ( 3/√10 1/√10 ; −1/√10 3/√10 ) ( 3 1 ; 1 3 ), (b) ( 0 1 0 ; 0 0 1 ; 1 0 0 ) ( 1 0 0 ; 0 2 0 ; 0 0 3 )]

Solution:
(a) A^T A = ( 10 6 ; 6 10 ). This has eigenvalues 4, 16, orthogonal diagonalising matrix P = (1/√2) ( 1 1 ; −1 1 ), and singular values σ_1, σ_2 = 2, 4. Hence √(A^T A) = P diag(2, 4)P^T = ( 3 1 ; 1 3 ).
Now construct the matrix Q whose ith column is Av_i/σ_i, where A is the original matrix and v_i is the ith column of P. Thus Q = (1/√5) ( 1 2 ; −2 1 ). Then A = QP^T √(A^T A), and QP^T gives the answer stated.
(b) Here A^T A = diag(1, 4, 9), P = I_3, σ_i = i (i = 1, 2, 3), √(A^T A) = diag(1, 2, 3), Q = ( 0 1 0 ; 0 0 1 ; 1 0 0 ).
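The polar form can also be extracted from the SVD: if A = UΣV^T then A = (UV^T)(VΣV^T) with VΣV^T = √(A^T A). A NumPy sketch checking (a) (not part of the sheet):

import numpy as np

A = np.array([[10.0, 6.0], [0.0, 8.0]]) / np.sqrt(10)
U, s, Vt = np.linalg.svd(A)

S_orth = U @ Vt                      # the orthogonal factor
P = Vt.T @ np.diag(s) @ Vt           # = sqrt(A^T A)
print(S_orth * np.sqrt(10))          # ~ [[3, 1], [-1, 3]]
print(P)                             # ~ [[3, 1], [1, 3]]
print(np.allclose(S_orth @ P, A))    # True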

65. (S) Suppose A is a 2 × 2 symmetric matrix with unit eigenvectors u_1, u_2. If its eigenvalues are λ_1 = 3 and λ_2 = −2, what are U, Σ and V in the SVD A = UΣV^T?
[Ans: U has columns u_1, −u_2, Σ = diag(3, 2), V has columns u_1, u_2]
Solution: Let P have columns u_1, u_2. Thus P^T AP = Λ := diag(3, −2) and A = PΛP^T. Hence A^T A = PΛ^2P^T where Λ^2 = diag(9, 4). P is the diagonalising matrix also of A^T A, and taking the square root requires the positive square roots of the diagonal entries. Thus the singular values are 3, 2, and √(A^T A) = P diag(3, 2)P^T. We take V = P, and for U the matrix whose columns are Au_i/σ_i = λ_iu_i/σ_i, i.e. u_1, −u_2.
66. (S) Find the singular value decomposition of A if it has orthogonal columns w_1, . . . , w_n of lengths σ_1, . . . , σ_n (i.e. w_i^T w_i = σ_i^2, and you may assume σ_i > 0).
[Ans: U diag(σ_1, . . . , σ_n)I^T, where the ith column of U is w_i/σ_i]
Solution: A^T has rows w_i^T, and the w_i are mutually orthogonal, so A^T A is a diagonal matrix with ith diagonal entry equal to w_i^T w_i = σ_i^2. Hence I is the diagonalising matrix of A^T A, and V = I. Also, the singular values are σ_1, . . . , σ_n, and Σ = diag(σ_1, . . . , σ_n). Finally, the ith column of U is Av_i/σ_i, where v_i is the ith column of V, which is the ith standard unit vector. Hence the ith column of U is w_i/σ_i, as stated.


67. (S) Find the SVD of the matrix ( 1 4 ; 2 8 ).
[Ans: ( 1/√5 −2/√5 ; 2/√5 1/√5 ) ( √85 0 ; 0 0 ) ( 1/√17 4/√17 ; −4/√17 1/√17 )]
Solution: A^T A = ( 5 20 ; 20 80 ), eigenvalues 85, 0, orthogonal diagonalising matrix V = (1/√17) ( 1 −4 ; 4 1 ). Singular value σ_1 = √85. The first column of U is Av_1/σ_1, where v_1 is the first column of V, and so u_1 = ( 1/√5 ; 2/√5 ). Complete U to make it orthogonal, giving the stated result (not quite uniquely).
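Cross-check with np.linalg.svd (illustrative; the signs of the singular vectors may differ):

import numpy as np

A = np.array([[1.0, 4.0], [2.0, 8.0]])
U, s, Vt = np.linalg.svd(A)
print(s)                                    # ~ [sqrt(85), 0]
print(U[:, 0] * np.sqrt(5))                 # ~ +/-(1, 2)
print(Vt[0] * np.sqrt(17))                  # ~ +/-(1, 4)
print(np.allclose(U @ np.diag(s) @ Vt, A))  # True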


68. (S) Let A_1, A_2 be real n × n matrices. Prove that they have the same singular values iff there exist real orthogonal matrices S_1, S_2 such that A_1 = S_1 A_2 S_2.
Solution: Let the SVDs be A_i = U_iΣ_iV_i^T, so that Σ_i = U_i^T A_i V_i.
If they have the same singular values then Σ_1 = Σ_2. (Strictly, the singular values need not appear in the same order, in which case Σ_2 = ΠΣ_1Π^T, where Π is a permutation matrix. But then Π is orthogonal, and has no effect on the following argument except slightly complicating the formulae.) Hence A_1 = U_1Σ_2V_1^T = U_1U_2^T A_2 V_2V_1^T = S_1A_2S_2, where S_1 = U_1U_2^T and S_2 = V_2V_1^T. These are orthogonal, as S_1S_1^T = U_1U_2^TU_2U_1^T = U_1U_1^T = I = S_1^TS_1, and similarly for S_2.
Conversely, suppose A_1 = S_1A_2S_2, where S_1, S_2 are orthogonal. Then A_1^TA_1 = S_2^TA_2^TS_1^TS_1A_2S_2 = S_2^TA_2^TA_2S_2. Hence if V_1 diagonalises A_1^TA_1, i.e. V_1^TA_1^TA_1V_1 = Λ_1, where Λ_1 is the diagonal matrix whose diagonal entries are the squares of the singular values of A_1, then V_1^TS_2^TA_2^TA_2S_2V_1 = Λ_1, and so V_2 = S_2V_1 is a diagonalising matrix which diagonalises A_2^TA_2 to the same diagonal matrix as A_1^TA_1. Hence the result.





69. (S) Find the SVD of (a) ( 1 1 1 1 ), (b) ( 0 1 0 ; 1 0 0 ), (c) ( 1 1 ; 0 0 ).
[Ans: (a) (1)(2 0 0 0)U^T, where e.g.
U = ( 1/2  1/2  1/√2   0   )
    ( 1/2  1/2 −1/√2   0   )
    ( 1/2 −1/2   0    1/√2 )
    ( 1/2 −1/2   0   −1/√2 );
(b) ( 0 1 ; 1 0 ) ( 1 0 0 ; 0 1 0 ) I_3;
(c) I_2 ( √2 0 ; 0 0 ) ( 1/√2 1/√2 ; 1/√2 −1/√2 )]

Solution:
(a) It is easier to find the SVD of B = A^T first. Thus B^TB = AA^T = (4) (a 1 × 1 matrix). This is diagonalised by V = I_1 (the 1 × 1 unit matrix), and there is one singular value, σ_1 = 2. The matrix Σ has the same shape as B, and is therefore (2, 0, 0, 0)^T.
The first column of U is BI_1/σ_1, i.e. (1/2, 1/2, 1/2, 1/2)^T. It can be extended to an orthogonal matrix in many ways, e.g. the matrix U given in the answer. (Just try constructing each column in such a way that it is orthogonal to everything that has gone before, and then normalise.) With these choices, B = UΣV^T and so A = VΣ^TU^T, giving the solution stated.
(b) A^TA = diag(1, 1, 0), and so the singular values are 1, 1, the diagonalising matrix is V = I_3, and Σ = ( 1 0 0 ; 0 1 0 ). In the present case the columns of U are the first two columns of A (because V = I_3 and σ_1 = σ_2 = 1).
(c) A^TA = ( 1 1 ; 1 1 ), eigenvalues 2, 0, singular value σ_1 = √2, orthogonal diagonalising matrix V as given in the answer. The first column of U is Av_1/σ_1 = ( 1 ; 0 ). This can be extended to an orthogonal matrix in only two ways.

70. (S) This question is an extended hand-in for Week 12 or Semester 2, Week 1
(a) A relation ∼ on a set X is said to be circular if a ∼ b and b ∼ c imply c ∼ a, for all a, b, c ∈ X. Prove that a relation is reflexive and circular iff it is reflexive and symmetric and transitive. [6]
(b) State the properties and sizes of the matrices U, Σ, V in the singular value decomposition A = UΣV^T of an m × n real matrix A. What is meant by singular values? [4]
(c) Find the singular value decomposition of the matrix ( 1 1 ; 1 1 ). [7]
Solution:
(a)
i. Suppose ∼ is reflexive and circular. To check symmetry, suppose a ∼ b; then by reflexivity, b ∼ b; and by circularity, b ∼ a; hence ∼ is symmetric. To check transitivity, suppose a ∼ b and b ∼ c; by circularity, c ∼ a; by symmetry (already proved) a ∼ c; hence ∼ is transitive. [2,2]
ii. Conversely, suppose ∼ is reflexive, symmetric and transitive. To check circularity, suppose a ∼ b and b ∼ c; then by transitivity, a ∼ c; by symmetry, c ∼ a; hence ∼ is circular. [2]
(b) U is m × m orthogonal, Σ is m × n diagonal, V is n × n orthogonal. Singular values are the non-zero diagonal elements of Σ. [3,1]
(c) Denoting the matrix by A, A^TA = ( 2 2 ; 2 2 ). Eigenvalues are given by (2 − λ)^2 − 4 = 0, i.e. λ = 4, 0; the singular value is thus σ_1 = 2 and Σ = ( 2 0 ; 0 0 ). [3]
The corresponding eigenvectors are ( 1 ; 1 ), ( 1 ; −1 ); the corresponding normalised eigenvectors give the columns of V, and so we can take V = ( 1/√2 1/√2 ; 1/√2 −1/√2 ). [2]
Denoting its columns by v_1, v_2, the first column of U is Av_1/σ_1, and the second column is any orthogonal unit vector, e.g. U = ( 1/√2 1/√2 ; 1/√2 −1/√2 ). [2]
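A final numerical check of (c) (not part of the hand-in; signs may differ):

import numpy as np

A = np.array([[1.0, 1.0], [1.0, 1.0]])
U, s, Vt = np.linalg.svd(A)
print(s)                                     # ~ [2, 0]
print(U)                                     # columns ~ +/-(1, 1)/sqrt(2), +/-(1, -1)/sqrt(2)
print(np.allclose(U @ np.diag(s) @ Vt, A))   # True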
