Key:
E: easy examples
S: standard examples
P: hard or peripheral examples
Fields and Vector Spaces
1. (E) Find the multiplicative inverses of the non-zero elements in Z_7. (Just experimenting is probably easier than using the Euclidean algorithm.)
Part hand-in for week 2
Solution: Experiment with successive multiples of each element:
1: 1, . . .
2: 2, 4, 6, 1, . . .
3: 3, 6, 2, 5, 1, . . . ,
and so 1^{-1} = 1 (as always), 2^{-1} = 4, 3^{-1} = 5, whence 4^{-1} = 2, 5^{-1} = 3 and 6^{-1} = 2^{-1}·3^{-1} = 4·5 = 20 mod 7 = 6.
To use the Euclidean algorithm to find (say) 4^{-1}, we seek integers a, b such that 7a + 4b = 1, and then b = 4^{-1} mod 7. Thus
7 − 1·4 = 3
4 − 1·3 = 1,
whence 1 = 4 − (7 − 4) = 2·4 − 1·7, i.e. 4^{-1} = 2 mod 7.
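Both routes can be checked mechanically; a minimal sketch (the helper names are ours, not the sheet's):

```python
# Brute-force the inverses in Z_7 ("just experimenting"), and compare
# with the extended Euclidean algorithm route sketched above.
p = 7

def inverse_by_search(x, p):
    # Try every candidate y until x*y = 1 (mod p).
    return next(y for y in range(1, p) if (x * y) % p == 1)

def inverse_by_euclid(x, p):
    # Find a, b with p*a + x*b = 1; then b mod p is the inverse.
    old_r, r = p, x
    old_s, s = 0, 1          # running coefficients of x
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    return old_s % p

inverses = {x: inverse_by_search(x, p) for x in range(1, p)}
print(inverses)  # {1: 1, 2: 4, 3: 5, 4: 2, 5: 3, 6: 6}
```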
(b) In C, x = (1 ± √5)/2, which is also in R but not in Q. Trial and error shows there is no solution in either Z_3 or Z_2.
Completion of hand-in for week 2
4. (E) Which of the following are subspaces of the given vector space?
(a) {x ∈ R³ | 2x1 − x2 + x3 = 1} ⊆ R³
(b) {x ∈ R³ | x1 = 2x2} ⊆ R³
(c) {P ∈ P3 | P(1) = 0} ⊆ P3, where Pn denotes the vector space of real polynomials of degree ≤ n in a variable x.
(d) {A ∈ M | A^T = A} ⊆ M, where M is the vector space of real 2 × 2 matrices and T denotes matrix transpose
[Ans: no, yes, yes, yes]
Solution:
(a) 0 ∈ R³ is not in the set.
(b) To prove that a subset U is a subspace, it is enough to prove that U is closed under linear combinations, i.e. that, if u1, u2 ∈ U and a1, a2 ∈ F, then a1 u1 + a2 u2 ∈ U.
Here, suppose u_i = (x_{i1}, x_{i2}, x_{i3}), i = 1, 2. Then
a1 u1 + a2 u2 = (a1 x11 + a2 x21, a1 x12 + a2 x22, a1 x13 + a2 x23).   (1)
Now we have to check that x1 = 2x2, where x1, x2 are the first and second components of (1). Now
a1 x11 + a2 x21 − 2(a1 x12 + a2 x22) = a1 (x11 − 2x12) + a2 (x21 − 2x22) = 0
because u_i ∈ U and so x_{i1} = 2x_{i2}.
[Ans. (a) (0, 0), (1, 2), (2, 1), (b) (0, 0), (1, 1), (2, 2), (c) (0, 0, 0), (1, 1, 0), (1, 1, 1), (0, 0, 1).]
Solution:
(a) Multiply a by 0, 1, 2.
(b) Choose all possible values for x2, and find x1 for each.
(c) Write down all possible values of c1 a + c2 b, where c1, c2 = 0, 1 ∈ Z_2. There are four combinations of possibilities.
6. (E) In Z_2^3 find all the vectors in
(a) Sp(x, y) where x = (1, 1, 0), y = (0, 1, 1)
(b) the subspace V ⊆ Z_2^3 given by V = {x ∈ Z_2^3 | x1 + x2 + x3 = 0}.
Solution:
(a) (0, 0, 0), (1, 1, 0), (0, 1, 1), (1, 0, 1) (cf. Q.5(c))
(b) Same as (a). Try all four combinations of x2, x3 and, for each combination, solve for x1.
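The enumeration of a span over a finite field is easy to automate; a sketch (the `span` helper is ours):

```python
from itertools import product

# Enumerate Sp(x, y) in Z_2^3 by trying all scalar pairs (c1, c2) in Z_2,
# matching the four vectors listed in the solution to (a).
def span(vectors, q):
    vecs = set()
    for coeffs in product(range(q), repeat=len(vectors)):
        vecs.add(tuple(sum(c * v[i] for c, v in zip(coeffs, vectors)) % q
                       for i in range(len(vectors[0]))))
    return vecs

x, y = (1, 1, 0), (0, 1, 1)
print(sorted(span([x, y], 2)))
# [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

# Part (b): solutions of x1 + x2 + x3 = 0 over Z_2 -- the same set.
V = {v for v in product(range(2), repeat=3) if sum(v) % 2 == 0}
```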
Part hand-in for week 3
Here dim(U) = 2. ({(0, 1, 0), (0, 0, 1)} is a basis.) Similarly dim(V) = 2. If x ∈ U ∩ V then c = a = 0 and so x = (0, 0, b), b ∈ R. Hence dim(U ∩ V) = 1.
8. (a) Show that in Z_5^3 the vectors (1, 1, 3), (2, 0, 2), (4, 3, 0) are linearly dependent.
(b) Complete the following sentence without using any terms from linear algebra. If R were finite-dimensional as a vector space over Q, it would mean that there exist a finite number r1, . . . , rn of . . . such that every . . . could be written as . . . .
Part hand-in for week 3
Solution:
(a) Call the vectors a, b, c. Looking at the last entry in each, we see that a + b has last entry 0, as c has.
In fact a + b = (3, 1, 0), and so 3(a + b) = c, i.e. 3a + 3b − c = 0.
(b) If R were finite-dimensional as a vector space over Q, it would mean that there exist a finite number r1, . . . , rn of real numbers such that every real number could be written as a linear combination of r1, . . . , rn with rational coefficients.
9. (E) Give a basis of the subspace of Z_5^3 defined by the equation x1 + x2 + x3 = 0. What is the dimension of this subspace? How many vectors are there in this subspace? Find the coordinate matrix of the vector v = (4, 3, 3) in your chosen basis.
Completion of hand-in for week 3
Solution: Any vector in the subspace has the form (−x2 − x3, x2, x3) = x2 a + x3 b, where a = (−1, 1, 0), b = (−1, 0, 1). Hence the subspace is Sp(a, b). Also a, b are linearly independent. Hence a, b is a linearly independent spanning set, i.e. a basis, and the subspace is 2-dimensional.
x2, x3 can each take 5 values, which determine x1, and so there are 25 vectors in the subspace.
First, check that v is in the subspace (x1 + x2 + x3 = 4 + 3 + 3 = 10 = 0 mod 5). Clearly the coordinate matrix has entries (3, 3).
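The count of 25 and the coordinates (3, 3) can both be spot-checked; here a and b are taken mod 5, so a = (4, 1, 0) and b = (4, 0, 1):

```python
from itertools import product

# Enumerate the subspace x1 + x2 + x3 = 0 of Z_5^3, confirm it has 25
# vectors, and confirm that v = (4, 3, 3) = 3a + 3b in the basis (a, b).
a, b = (4, 1, 0), (4, 0, 1)          # (-1, 1, 0) and (-1, 0, 1) mod 5
subspace = {v for v in product(range(5), repeat=3) if sum(v) % 5 == 0}
v = (4, 3, 3)
coords = (3, 3)
recombined = tuple((coords[0] * ai + coords[1] * bi) % 5
                   for ai, bi in zip(a, b))
print(len(subspace), recombined)  # 25 (4, 3, 3)
```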
10. (a) (E) Find all the vectors in the subspace V ⊆ Z_2^3 given by V = {x ∈ Z_2^3 | x1 + x2 + x3 = 0}.
(b) (S) There are seven different non-zero vectors in Z_2^3 and hence (since the only scalars are {0, 1}) there are seven different one-dimensional subspaces. There are also seven different linear equations of the form
λ1 x1 + λ2 x2 + λ3 x3 = 0,   λj ∈ Z_2,
apart from the trivial one with all the λj being zero. Each of these defines a different two-dimensional subspace of Z_2^3. In the picture there are seven blobs and seven lines (six straight ones together with a circle). Label each blob with a different one-dimensional subspace and each line with a two-dimensional subspace in such a way that a blob is on a line if and only if the one-dimensional subspace lies inside the two-dimensional one (i.e. iff the vector satisfies the equation). (Note by the way that this configuration has the property that through every pair of points there is a unique line, and every pair of lines meet in precisely one point. It is an example of a finite projective plane.)
[Ans (a) (0, 0, 0) , (1, 0, 1) , (1, 1, 0) , (0, 1, 1); (b) For example, A = (1, 0, 0) , B = (0, 1, 0) , C = (0, 0, 1), etc.]
Solution:
(a) The form of a general vector is as in the solution to Q.9, but now x2 , x3 can take only 2 values each,
and so the points in V are (0, 0, 0) , (1, 1, 0) , (1, 0, 1) , (0, 1, 1) . (Remember you are doing arithmetic
mod 2.)
(b) Having decided on A and B (e.g. as in the answer stated), c is fixed, since it must be the one
non-trivial linear combination of A and B, i.e. (1, 1, 0). Next, C can be anything that has not been
used yet. Then the remaining vertices can be filled in (with no further choice) by the argument used
to determine c.
In the answer stated, clearly AB, BC, CA are, respectively, x3 = 0, x1 = 0, x2 = 0. Cc is x1 + x2 = 0,
and the circle is the subspace V discussed in part (a).
There are many possible correct answers.
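The projective-plane properties noted in the question can be verified by brute force, taking the seven non-zero vectors as blobs and the seven equations as lines (a sketch, with our own variable names):

```python
from itertools import product

# Blobs: the 7 non-zero vectors of Z_2^3. Lines: for each non-zero
# coefficient vector a, the non-zero solutions of a.x = 0 (mod 2).
points = [p for p in product(range(2), repeat=3) if any(p)]
lines = {a: {p for p in points
             if sum(ai * pi for ai, pi in zip(a, p)) % 2 == 0}
         for a in points}

# Every line contains exactly 3 blobs ...
sizes = {len(L) for L in lines.values()}
# ... and every pair of distinct blobs lies on exactly one line.
pair_counts = {frozenset((p, q)): sum(1 for L in lines.values()
                                      if p in L and q in L)
               for p in points for q in points if p != q}
print(sizes, set(pair_counts.values()))  # {3} {1}
```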
11. (S) How many two-dimensional subspaces does Z_2^4 have? You might want to think along the lines of defining such a subspace by choosing a non-zero vector, and then choosing another vector that is not a multiple of it; their span determines a subspace. Count how many ways there are of doing this and then work out how many times each subspace has been counted.
[Ans: 35]
[Ans: 35]
Solution: For each of the four entries in an element of Z_2^4 there are two choices, and so there are 2⁴ points in the whole space. One of these is the zero vector, and so there are 2⁴ − 1 choices for the first vector. For the second vector there remain 2⁴ − 2 choices, and so there are (2⁴ − 1)(2⁴ − 2) pairs of non-zero vectors. Each subspace, however, has exactly 3 non-zero points. (Recall the discussion of the two-dimensional subspaces in Qs.9, 10a.) Therefore to each subspace there correspond 3! ordered pairs of non-zero points. Hence the number of distinct subspaces is (2⁴ − 1)(2⁴ − 2)/3! = 35.
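The count of 35 can be confirmed by listing the subspaces directly, since over Z_2 a two-dimensional subspace is just {0, u, v, u + v}:

```python
from itertools import product

# Count two-dimensional subspaces of Z_2^4 by brute force: form the
# span of each pair of distinct non-zero vectors, dedup with a set.
vectors = list(product(range(2), repeat=4))
nonzero = [v for v in vectors if any(v)]

def add(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

subspaces = set()
for u in nonzero:
    for v in nonzero:
        if v != u:
            subspaces.add(frozenset({(0, 0, 0, 0), u, v, add(u, v)}))
print(len(subspaces))  # 35
```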
12. Assuming that R and C are fields, state which of the following sets are fields:
(a) (E) all positive integers
Solution:
(b) Closure under addition: (a1 + b1√3) + (a2 + b2√3) = (a1 + a2) + (b1 + b2)√3, where (a1 + a2) and (b1 + b2) are rational. Closure under multiplication: (a1 + b1√3)(a2 + b2√3) = (a1 a2 + 3 b1 b2) + (a1 b2 + a2 b1)√3, where the two bracketed expressions are rational. Additive inverse: −(a + b√3) = (−a) + (−b)√3, which is again in the set. Multiplicative inverse: if a + b√3 ≠ 0 then
(a + b√3)^{-1} = (a − b√3)/((a + b√3)(a − b√3)) = a/(a² − 3b²) − (b/(a² − 3b²))√3,
and the two rational expressions are rational numbers. Note that the introduction of the common factor a − b√3 is valid because it also is non-zero. If it were zero then we would have a² − 3b² = (a + b√3)(a − b√3) = 0. If also b ≠ 0 it would follow that √3 = ±a/b, i.e. it is rational (contradiction). So we would require b = 0 and hence a = 0, which contradicts our assumption that a + b√3 ≠ 0.
(c) Closure under multiplication is a problem. Suppose there exist rational numbers a, b such that (∛5)² = a + b∛5. Multiplying by ∛5 we get 5 = a∛5 + b(∛5)² = a∛5 + ab + b²∛5. Since ∛5 is irrational (by a similar argument to the irrationality of √2) we deduce a + b² = 0 and ab = 5. Hence 5 = (−b)³, i.e. ∛5 is rational. (Contradiction)
(d) 1/2 + 1/2 is not in the set
(e) Like (b)
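The arithmetic in (b) can be spot-checked exactly by representing a + b√3 as a pair (a, b) of rationals (the representation and helper names are ours):

```python
from fractions import Fraction as F

# Model a + b*sqrt(3) as a pair (a, b) of rationals, and check the
# product and inverse formulas derived in the solution to 12(b).
def mul(x, y):
    (a1, b1), (a2, b2) = x, y
    return (a1 * a2 + 3 * b1 * b2, a1 * b2 + a2 * b1)

def inv(x):
    a, b = x
    d = a * a - 3 * b * b       # non-zero whenever (a, b) != (0, 0)
    return (a / d, -b / d)

x = (F(2), F(5))                # the number 2 + 5*sqrt(3)
print(mul(x, inv(x)))           # (Fraction(1, 1), Fraction(0, 1))
```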
13. (E) The field extension Q(√2) consists of all numbers of the form a + b√2, where a, b ∈ Q. Show that this set is closed under multiplication and addition, that it contains 0 and 1, and that it contains the multiplicative and additive inverses of a + b√2. Write down a basis of Q(√2) regarded as a vector space over Q. What is its dimension?
Part hand-in for week 4
Solution: Up to near the end, this is like the solution to Q.12b. The set {1, √2} is clearly a spanning set. These numbers are also linearly independent over Q, because √2 is irrational. Hence this set is a basis, and the dimension is 2.
14. (S)
(a) Prove the following theorem: A subset S of a field F is a subfield if S contains the zero and unity of F, if S is closed under addition and multiplication, and if each a ∈ S has its negative −a and (provided a ≠ 0) its inverse a^{-1} in S.
(b) Show that the pair of conditions 0 ∈ S and 1 ∈ S can be replaced by the single condition that S contains at least two elements. (Hint: a·a^{-1} = 1 if a ≠ 0.)
Solution:
(a) The axioms for a field fall into two kinds: (a) identities (such as the distributive law a(b + c) = ab + ac) and (b) statements of existence (e.g. there exists a^{-1}). Axioms of the first kind apply to all elements of F and therefore to all elements of S. The existence axioms are
i. Closure under addition (i.e. a, b ∈ S ⇒ a + b ∈ S)
ii. Existence of 0
iii. Existence of additive inverse −a
iv. Closure under multiplication
v. Existence of 1
vi. Existence of multiplicative inverse a^{-1} if a ≠ 0,
and these are precisely the conditions assumed about S.
(b) If S contains at least two elements, there is at least one non-zero element a. By (vi) and (iv) we have a·a^{-1} = 1 ∈ S. Similarly a − a = 0 ∈ S.
Linear Transformations
15. (E) Give an example of two vector spaces V, W over the field Z_2 and a linear transformation T : V → W which is injective but not surjective. Write down the null space and range for your example, find the rank and nullity, and verify the rank theorem.
[Ans: Z_2 → Z_2^2, x ↦ (x, 0), {0}, Sp((1, 0)), 1, 0]
Solution: There are, of course, lots of solutions to this problem.
Injective: (x, 0) = (x′, 0) ⇒ x = x′.
Range: (x, 0) = x(1, 0), i.e. it is an element of the space stated in the Answer, and clearly any element of that space is in the range.
The set consisting of the single vector (1, 0) spans the range and is obviously linearly independent, so it is a basis, and the rank is 1.
The null space is {0}, which is 0-dimensional, so the nullity is 0. Rank + nullity = 1 + 0 = 1 = dim(Z_2), verifying the rank theorem.
Completion of hand-in for Week 4
( a b ; c d ) with entries in Z_2.
(d) A ei ≠ 0 (i.e. the null vector of Z_2^2), otherwise ei = A^{-1} 0 = 0. Hence A maps the three non-zero vectors onto the set of non-zero vectors. The map is injective, because (applying the inverse) A ei = A ej ⇒ ei = ej. Hence it acts as a permutation.
Suppose e1 = (1, 0), e2 = (0, 1), e3 = (1, 1). Then for the example given, e1 ↦ e3 ↦ e2 ↦ e1.
18. (S) Let T1, T2 be two linear mappings acting from V to W. Define their sum T1 + T2 and prove that it is linear. Prove that rank(T1 + T2) ≤ rank(T1) + rank(T2).
[Ans (T1 + T2)(v) = T1(v) + T2(v)]
Solution:
(T1 + T2)(λ1 v1 + λ2 v2) = T1(λ1 v1 + λ2 v2) + T2(λ1 v1 + λ2 v2)   (definition of T1 + T2)
= λ1 T1(v1) + λ2 T1(v2) + λ1 T2(v1) + λ2 T2(v2)   (linearity of T1 and T2)
= λ1 (T1 + T2)(v1) + λ2 (T1 + T2)(v2)   (definition of T1 + T2 again).
It follows from the definition that the range of T1 + T2 is a subspace of W1 + W2, where Wi is the range of Ti. Hence the rank of T1 + T2 is less than or equal to dim(W1 + W2) ≤ dim(W1) + dim(W2).
19. (S) Let Pj[Z3] denote the vector space of polynomials of degree ≤ j with coefficients in Z3.
(a) State the dimension of Pj[Z3].
(b) Show that (1 + x, x + x², x²) is a basis for P2[Z3].
(c) Show that T : P2[Z3] → P3[Z3] where T : p(x) ↦ (x + 2)p(x) is a linear map.
(d) Find the matrix of T with respect to the basis above for P2[Z3] and the basis (1, x, x², x³) for P3[Z3].
[Ans: (a) j + 1, (d) the 4 × 3 matrix ( 2 0 0 ; 0 2 0 ; 1 0 2 ; 0 1 1 )]
Solution:
(a) As usual, there is an obvious basis (1, x, x², . . . , x^j).
(b) If a1(1 + x) + a2(x + x²) + a3 x² = 0 (i.e. it is the zero polynomial), then the coefficient of every power of x must vanish, i.e.
a1 = 0
a1 + a2 = 0
a2 + a3 = 0,
whence a1 = a2 = a3 = 0. Therefore the set is linearly independent. The size of the set is the same as the dimension of the space, and so the set is a basis.
(c) T(a1 p1(x) + a2 p2(x)) = (x + 2)(a1 p1(x) + a2 p2(x)) = a1 (x + 2)p1(x) + a2 (x + 2)p2(x) = a1 T(p1(x)) + a2 T(p2(x)).
(d) The columns of the matrix A are the images of the basis vectors of P2[Z3], expressed in the basis (1, x, x², x³): T(1 + x) = 2 + x² (mod 3), T(x + x²) = 2x + x³, T(x² ) = 2x² + x³.
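The matrix of (d) can be computed by encoding polynomials as coefficient lists mod 3 (the helper name is ours):

```python
# Compute the matrix of T: p(x) |-> (x + 2) p(x) from P2[Z3] to P3[Z3],
# with respect to the bases (1 + x, x + x^2, x^2) and (1, x, x^2, x^3).
def times_x_plus_2(p):
    # p is a coefficient list [c0, c1, ...]; multiply by (x + 2) mod 3.
    shifted = [0] + p                      # x * p
    doubled = [2 * c for c in p] + [0]     # 2 * p
    return [(s + d) % 3 for s, d in zip(shifted, doubled)]

domain_basis = [[1, 1, 0], [0, 1, 1], [0, 0, 1]]   # 1+x, x+x^2, x^2
columns = [times_x_plus_2(p) for p in domain_basis]
A = [[col[row] for col in columns] for row in range(4)]
for row in A:
    print(row)
# [2, 0, 0]
# [0, 2, 0]
# [1, 0, 2]
# [0, 1, 1]
```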
20. (S) Let A = ( 1 0 1 ; 0 1 1 ; 0 0 0 ). Find non-singular matrices P, Q such that the matrix P AQ is a diagonal matrix D in which the diagonal elements are ones or zeroes, the ones coming before the zeroes. (It would be better to have an example which could not be done with eigenvectors.)
Part hand-in for Week 5
(b) Since C3 = C1 + C2, and the vectors (C1, C2) are linearly independent, the image is Sp(C1, C2), the rank is 2, and the nullity 1. The null space is clearly Sp(v3) where v3 = (1, 1, −1). Complete a basis of R³ with any other two suitable vectors v1, v2, e.g. the first two vectors of the standard basis. The transformation from this basis to the standard basis of R³ has the matrix whose columns are the three basis vectors, i.e. we take Q = ( 1 0 1 ; 0 1 1 ; 0 0 −1 ).
Then Av1, Av2, which are actually the same vectors as v1, v2, are chosen as the first two basis vectors w1, w2, which can be extended to a basis with the third standard basis vector. Thus the transformation from the basis (w1, w2, w3) to the standard basis is the identity, and its inverse (which we need for P) is also I.
Clearly these solutions are not unique.
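The choice above can be verified numerically, taking v3 = (1, 1, −1), which does span the null space of A:

```python
import numpy as np

# Check that P = I and Q = [v1 v2 v3] give P A Q = diag(1, 1, 0).
A = np.array([[1, 0, 1], [0, 1, 1], [0, 0, 0]])
Q = np.array([[1, 0, 1], [0, 1, 1], [0, 0, -1]])
P = np.eye(3, dtype=int)
D = P @ A @ Q
print(D)  # diag(1, 1, 0)
```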
21. (S) Show that the matrix B = ( 5 2 7 ; 3 −4 −1 ; 1 2 3 ) is equivalent to the matrix A in Q.20, in a sense which you should state.
Solution: Again C3 = C1 + C2, while C1, C2 are linearly independent. Hence the rank is 2, and the same procedure gives matrices P̄, Q̄ such that P̄ B Q̄ = diag(1, 1, 0). Hence B = P̄^{-1} diag(1, 1, 0) Q̄^{-1} = P̄^{-1} P A Q Q̄^{-1}. Thus there exist non-singular matrices X (= P̄^{-1} P), Y (= Q Q̄^{-1}) such that B = XAY.
Operators
22. (E) Let p be a polynomial.
(a) Suppose that the square matrix A is diagonal with entries λj. Show that p(A) is diagonal with entries p(λj).
(b) Suppose T, v, λ are (respectively) an operator, vector and scalar such that T v = λv, and let p be a polynomial. Show that p(T)(v) = p(λ)v.
Solution: Let p(x) = Σ_{i=0}^n ai x^i. Then p(A) = Σ_{i=0}^n ai A^i, and A^i is diagonal with entries λj^i, so the j-th diagonal entry of p(A) is Σ_i ai λj^i = p(λj). Similarly T^i v = λ^i v, and so p(T)(v) = Σ_i ai T^i v = (Σ_i ai λ^i) v = p(λ)v.
Completion of hand-in for week 5
dimensions. If dim(V) = 2 and T has an eigenvalue λ and eigenvector v then, if we choose v as the first basis vector, the first column of the matrix representing T is (λ, 0), and so the matrix is upper triangular.
We try a transformation on R² with no eigenvalue in R, e.g. rotation by π/2, which has matrix ( 0 −1 ; 1 0 ).
24. (S) A subspace U of a vector space V is said to be invariant under a transformation T if T(U) ⊆ U.
(a) Suppose that the matrix of a linear transformation (on a two-dimensional vector space) with respect to some coordinate system is ( 0 0 ; 0 1 ). How many subspaces are there which are invariant under the transformation?
(b) Let λ be an eigenvalue of a linear transformation T on a vector space V, and U the corresponding set of eigenvectors (including the zero vector). Show that U is invariant under T.
[Ans: (a) 4]
Solution:
(a) Two trivial invariant subspaces are V itself and the null subspace {0}. These are the only possibilities with dimension 2, 0 respectively. Any one-dimensional subspace is Sp(v) for some v ∈ V, and if it is invariant we require T v = λv for some λ, i.e. v is an eigenvector. The eigenvalues of the matrix are given by λ(1 − λ) = 0, i.e. λ = 0, 1, with eigenvectors (1, 0), (0, 1), respectively. The spans of these vectors are the only possible 1-dimensional invariant subspaces.
(b) u ∈ U ⇒ T u = λu ∈ U, since U is closed under multiplication by scalars.
25. (S) Give an example of a matrix A which represents a projection on R2 . Verify that A2 = A. Find the
eigenvalues of your matrix A.
In general a linear transformation T is a projection if and only if T 2 = T . Prove that the eigenvalues of
a projection are 0 or 1.
[Ans: See Q.24a; 0,1]
Solution: A = ( 0 0 ; 0 1 ) (see Q.24a). Obviously A² = A. Eigenvalues 0, 1.
In general, if T² = T and T v = λv with v ≠ 0, then λv = T v = T²v = λ²v, and so λ² = λ, i.e. λ = 0 or 1.
26. (S) For each of the following matrices A find, when possible, a complex non-singular matrix P for which P^{-1} A P is diagonal:
(a) ( 2 4 ; 5 3 )
(b) ( 3 −2 ; 2 3 )
(c) ( 1 2 ; 2 −2 )
(d) ( −1 2i ; −2i 2 )
[Ans e.g. (a) ( 4 1 ; 5 −1 ), (b) ( 1 1 ; i −i ), (c) ( 2 1 ; 1 −2 )]
(d) is part hand-in for week 6
Solution:
(a) Eigenvalues are given by (2 − λ)(3 − λ) − 20 = 0, i.e. λ² − 5λ − 14 = 0, i.e. (λ − 7)(λ + 2) = 0, and so λ = 7, −2. For λ = 7, eigenvectors are given by ( −5 4 ; 5 −4 )(x1, x2)^T = (0, 0)^T, and so −5x1 + 4x2 = 0, and we can take as eigenvector (4, 5)^T. (Note that this is the top row of the matrix A − λI (which is written out above) with the order reversed and one sign changed.) Similarly for λ = −2 we may take (1, −1)^T.
Next, these two eigenvectors are taken as the columns of P, i.e. P = ( 4 1 ; 5 −1 ). Hence (if you need it) P^{-1} = (1/9)( 1 1 ; 5 −4 ). (Remember: to find the inverse of a 2 × 2 matrix, swop the diagonal entries, change the sign of the off-diagonal entries, and divide by the determinant.)
Infinitely many different answers are possible, by swapping the columns of P, and multiplying them by different non-zero constants.
(b) λ = 3 ∓ 2i, P as stated.
(c) λ = 2, −3, P as stated.
(d) λ = −2, 3, P = ( −2i i ; 1 2 ).
In the next few problems we write Jn(λ) for the Jordan matrix which is the n × n matrix whose i, j-th entry is: λ if i = j; 1 if j = i + 1; and 0 otherwise. Thus for example J3(5) = ( 5 1 0 ; 0 5 1 ; 0 0 5 ).
27. (E) Write down A = J2(1), where the notation is defined above. State its eigenvalue(s). Write down the matrix A − λI for each eigenvalue λ, state its rank and nullity, and hence (or otherwise) write down the geometric multiplicity of each eigenvalue. Also compute (A − λI)², and hence or otherwise write down the algebraic multiplicity of each eigenvalue.
Solution: J2(1) = ( 1 1 ; 0 1 ). Eigenvalues of an upper triangular matrix are the diagonal entries, i.e. 1 only here. J2(1) − 1·I = ( 0 1 ; 0 0 ). Rank is the dimension of the space spanned by the columns of the matrix, i.e. the space Sp((1, 0)) in this case, and so rank = 1. Rank + nullity = 2 (since 2 × 2 matrices act on R², with dimension 2), and so nullity is 1. This is the dimension of the space of solutions v of (A − 1·I)v = 0, i.e. the eigenspace corresponding to eigenvalue 1, and therefore equals the geometric multiplicity. (J2(1) − 1·I)² = ( 0 0 ; 0 0 ). Here rank is 0, nullity 2, i.e. the space of solutions of (A − I)²v = 0 is 2-dimensional, and so the algebraic multiplicity is 2.
Completion of hand-in for week 6
28. (S) Write down the eigenvalues of J3(λ), and find their algebraic and geometric multiplicity. How does this generalize to Jn(λ)?
[Ans: λ, 3, 1. λ, n, 1]
Solution: J3(λ) = ( λ 1 0 ; 0 λ 1 ; 0 0 λ ). Upper triangular matrix, and so the diagonal entries give all the eigenvalues, i.e. λ only here. Following the procedure of Q.27 we compute J3(λ) − λI = ( 0 1 0 ; 0 0 1 ; 0 0 0 ). This has rank 2, rank + nullity = 3, and so nullity is 1, which is the geometric multiplicity. (J3(λ) − λI)² = ( 0 0 1 ; 0 0 0 ; 0 0 0 ) and (J3(λ) − λI)³ = ( 0 0 0 ; 0 0 0 ; 0 0 0 ). This has rank 0, nullity 3, and so the algebraic multiplicity is 3.
(Note the following: in lectures, a generalised eigenvector was defined as a solution v of (A − λI)^k v = 0, for some k = 1, . . . , n, where A is n × n. Then the algebraic multiplicity was defined as the dimension of the space of all such generalised eigenvectors. But the result of Q.29a, which shows that the spaces corresponding to successive k are nested, implies that we need only consider k = n.)
For general n, (Jn(λ) − λI)² has its line of 1s shifted up and to the right, intersecting the top row at the 3rd entry. The line shifts 1 place each time the power of Jn(λ) − λI is increased by 1, and so (A − λI)^n = 0 (the null matrix). Then the rank is 0, nullity is n, and the algebraic multiplicity is n.
29. (S) For k > 1 the k-th generalized eigenspace of T : V → V with eigenvalue λ is E_{λ,k} := {v ∈ V | (T − λI)^k v = 0}. So, for k = 1 the generalized eigenspace is just the eigenspace in the usual sense.
(a) Show that if k ≤ l then E_{λ,k} ⊆ E_{λ,l}. Deduce that the algebraic multiplicity is greater than or equal to the geometric multiplicity.
(b) Let A = J3(λ). Show that E_{λ,k} = V for k ≥ 3. Describe E_{λ,2} and give its dimension.
(c) In general, what is dim E_{λ,k} for the Jordan matrix Jn(λ)?
(c) an operator on C⁴ whose characteristic and minimal polynomials are both x(x − 1)²(x − 3).
(d) an operator on C⁴ whose characteristic polynomial is x(x − 1)²(x − 3) and whose minimal polynomial is x(x − 1)(x − 3).
[Ans: (a) T(x, y, z) = (z, 0, 0), (b) T(x, y, z, w) = (x + y, y, 0, 0), (c) T(x, y, z, w) = (0, y + z, z, 3w), (d) T(x, y, z, w) = (0, y, z, 3w)]
Solution:
(a) 0 is the only eigenvalue. In a basis giving an upper triangular matrix, the diagonal entries are 0, and so the matrix is of the form A = ( 0 a b ; 0 0 c ; 0 0 0 ). Also, we require A² = 0 (since x = A must satisfy x² = 0) but A ≠ 0 (otherwise the minimal polynomial would be x). Therefore we require ac = 0 but a, b, c not all zero, and one solution is A = ( 0 0 1 ; 0 0 0 ; 0 0 0 ). Multiplying into the vector (x, y, z)^T gives the stated transformation in the standard basis.
(b) The characteristic polynomial is a quartic of which the minimal polynomial is a factor, and so one possibility for the characteristic polynomial is x²(x − 1)². Then the eigenvalues are 0, 1, with algebraic multiplicity 2 each. In a basis with respect to which the matrix is block diagonal, A = ( A1 0 ; 0 A2 ) with A1 = ( 1 a ; 0 1 ), we have A(A − I)² = ( A1(A1 − I)² 0 ; 0 A2(A2 − I)² ), where I now means the 2 × 2 unit matrix. Now it is clear that we require A2 = 0, and (taking a = 1) this gives the stated example.
(c) Now the diagonal entries are 0, 1, 1, 3. Viewed as a block diagonal matrix, the 2 × 2 upper triangular block with diagonal entries 1 must have a non-zero off-diagonal entry, otherwise the minimal polynomial would be x(x − 1)(x − 3); and the matrix may be taken to be A = ( 0 0 0 0 ; 0 1 1 0 ; 0 0 1 0 ; 0 0 0 3 ), which leads to the stated example.
(d) In a suitable basis the matrix is upper triangular with diagonal entries which may be ordered 0, 1, 1, 3. Now there is no non-zero off-diagonal entry (the matrix is diagonal), and the stated answer is obtained.
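The distinction between (c) and (d) is exactly that p(x) = x(x − 1)(x − 3) kills the (d) matrix but not the (c) matrix, which can be checked directly:

```python
import numpy as np

# The matrices from the solutions to (c) and (d): same characteristic
# polynomial x(x-1)^2(x-3), different minimal polynomials.
C = np.array([[0, 0, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 3]], dtype=float)
D = np.diag([0.0, 1.0, 1.0, 3.0])

def p(M):
    I = np.eye(4)
    return M @ (M - I) @ (M - 3 * I)   # x(x-1)(x-3) evaluated at M

print(np.allclose(p(C), 0), np.allclose(p(D), 0))  # False True
```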
31. (S) Show that, for any operator T on C² such that T² = 0, 0 is the only eigenvalue, and that there is a basis such that T has matrix ( 0 1 ; 0 0 ) or ( 0 0 ; 0 0 ). What are the characteristic and minimal polynomials in each case?
Hand-in for Week 7
If a = 0 we already have the other possibility, and then the minimal polynomial is x.
32. (S) Suppose T is an operator on a vector space V , and let v V . Let p be the monic polynomial of
smallest degree such that p(T )v = 0. Prove that p divides the minimal polynomial of T .
Solution: Let m(x) = q(x)p(x) + r(x), where m is the minimal polynomial, and r is the remainder on
division by p. Thus deg(r) < deg(p) and m(T ) = q(T )p(T ) + r(T ). Applying this to v, we conclude that
0 = 0 + r(T )v. Hence r is a polynomial of smaller degree such that r(T )v = 0, and so r(x) = 0 (i.e. the
zero polynomial). Hence result.
33. (S) Suppose T is the operator on C³ which takes (x, y, z) to (y, z, 0). Show that there is no operator S such that S² = T.
Solution: Direct calculation shows that T² ≠ 0 but T³ = 0, and the same argument as in Q.31 shows that 0 is the only eigenvalue of T. For any eigenvalue λS of S there must be a non-zero vector v such that S(v) = λS v. Therefore T(v) = S²(v) = λS² v. Hence λS² is also an eigenvalue of T, and so λS = 0. Again the characteristic polynomial of S is x³, i.e. S³ = 0. Hence T² = S⁴ = 0, which is in contradiction with the direct computation of T².
34. (S) Prove the Cayley-Hamilton theorem for strictly upper triangular matrices, by direct computation. (The "strictly" here means that the diagonal values are 0.) Do the same for upper triangular matrices.
Solution: All diagonal entries of a strictly upper triangular n × n matrix A are 0, and so 0 is the only eigenvalue, and the characteristic polynomial is x^n; thus we have to show that A^n = 0. Consider first A², noting that a_{ij} = 0 for j ≤ i. The typical entry in A² is (A²)_{i,j} = Σ_{k=1}^n a_{i,k} a_{k,j}. Now a_{i,k} = 0 if k ≤ i and a_{k,j} = 0 if k ≥ j. Hence all terms in the sum vanish, and so (A²)_{i,j} = 0, provided that j ≤ i + 1. Therefore A² is also strictly upper triangular, with the additional property that the diagonal above the leading diagonal also vanishes. Similarly A³ has two zero diagonals above the leading diagonal. Continuing, A^n = 0. (This can be dressed up as a proof by induction if you wish.)
If A is merely upper triangular, its diagonal entries are the eigenvalues, and the number of times each is present is its multiplicity. Hence the characteristic polynomial is Π_{i=1}^n (x − a_ii), and we must show that Π_{i=1}^n (A − a_ii I) = 0. The factor A − a11 I has zero first column; composing it with the factor A − a22 I (the factors commute, being polynomials in A) gives a product whose first two columns are zero, and, continuing in this way, the product of the first k factors has its first k columns zero. Hence the whole product is the zero matrix.
35. (S) A nilpotent operator T is one for which there exists a positive integer p such that T^p = 0.
(a) Suppose N is a nilpotent operator. Prove that 0 is the only eigenvalue.
(b) Suppose 0 is the only eigenvalue of an operator N on a complex vector space. Prove that N is nilpotent. Give an example to show that this is not necessarily true on a real vector space.
[Ans (b) T(x, y, z) = (0, −z, y), x, y, z ∈ R]
Solution:
(a) This is a trivial generalisation of the first bit of Q.31.
(b) In a basis in which the matrix of N is upper triangular, the diagonal entries are 0, and the matrix is
strictly upper triangular (in the sense of Q.34). Then the first part of that question gives the desired
result.
For the real case, we can try to find a matrix which has zero as an eigenvalue, and other eigenvalues which are complex. Projections have a zero eigenvalue (Q.24a), while all plane rotations (except for a rotation by a multiple of π) have complex eigenvalues, and so we try a combination of projection onto the y, z plane followed by rotation by π/2 about the x-axis. This leads to the stated example. Its matrix in the standard basis is ( 0 0 0 ; 0 0 −1 ; 0 1 0 ), which has eigenvalues 0, ±i over C, i.e. just 0 over R, as required. It is not nilpotent, as direct calculation shows that N³ = −N (which also follows from the Cayley-Hamilton theorem), and clearly no power of N vanishes.
36. (S) A matrix is said to be in Jordan form if it is block-diagonal with each diagonal block being a Jordan matrix (see the description above Q.27). There may be more than one block with a given parameter value. So for example the 7 × 7 matrix diag(J3(5), J1(5), J3(2)) is in Jordan form. Note also that J1(λ) is the 1 × 1 matrix (a.k.a. number) λ, and so a diagonal matrix is an example of Jordan form where all the blocks are of size 1.
Parts (a),(b) are part hand-in for Week 8
(a) Consider a matrix A in Jordan form with just two Jordan blocks Jp(λ), Jq(μ) where λ ≠ μ. Find the characteristic polynomial, eigenvalues, minimal polynomial, and algebraic multiplicity of each eigenvalue (i.e. the dimension of the corresponding generalized eigenspace) of A.
(b) The same, only now assume that λ = μ.
(c) Conjecture how this generalizes to a general matrix in Jordan form, with blocks J_{p_i(λ_j)}(λ_j).
(d) (Added after the tutorial sheet was issued to the class.) In (a)–(c) above, give the geometric multiplicity of each eigenvalue.
[Ans (a) (x − λ)^p (x − μ)^q; λ, μ; (x − λ)^p (x − μ)^q; p, q, (b) (x − λ)^{p+q}; λ; (x − λ)^{max(p,q)}; p + q, (c) Π_j (x − λ_j)^{Σ_i p_i(λ_j)}; {λ_j, j = 1, . . .}; Π_j (x − λ_j)^{max_i(p_i(λ_j))}; Σ_i p_i(λ_j), (d) 1, 1; 2; m_G(λ) is the number of blocks for which λ_j = λ.]
Solution:
(a) A is upper triangular. The diagonal entries (p λs and q μs) give the eigenvalues and their algebraic multiplicities, and hence the characteristic equation, as stated. The minimal polynomial must be of the form (x − λ)^i (x − μ)^j. Using block notation, we have A − λI = ( Jp(0) 0 ; 0 Jq(μ − λ) ), since subtraction of λI alters all diagonal entries by −λ, and there is a corresponding form for A − μI. Then (A − λI)^i (A − μI)^j = ( (Jp(0))^i (Jp(λ − μ))^j 0 ; 0 (Jq(μ − λ))^i (Jq(0))^j ). The diagonal entries of (Jp(λ − μ))^j are (λ − μ)^j, i.e. non-zero, while (Jp(0))^i has all entries 0 except a diagonal of 1s, i places above the leading diagonal (by arguments similar to the calculations about strictly upper triangular matrices in Q.34). Therefore the upper left block vanishes iff i ≥ p, and similarly for the lower right block. The minimal polynomial is given by the smallest such values of i and j, hence result.
(b) Again the eigenvalue(s), algebraic multiplicity and characteristic polynomial are immediate. The minimal polynomial must be of the form (x − λ)^i, and we have (A − λI)^i = ( (Jp(0))^i 0 ; 0 (Jq(0))^i ). This vanishes iff {i ≥ p and i ≥ q}.
(c) If you have understood (a) and (b) this is obvious.
(d) (a) The matrix A − λI has p + q − 1 linearly independent rows (i.e. rows 1 . . . (p − 1), corresponding to the first block, and rows p + 1 . . . p + q, corresponding to the second block). Hence its rank is p + q − 1, its nullity 1. Similarly for A − μI.
(b) Now A − λI has p + q − 2 linearly independent rows (i.e. rows 1 . . . (p − 1), corresponding to the first block, and rows p + 1 . . . (p + q − 1), corresponding to the second block). Hence the nullity is 2.
(c) See the solution to (c) above.
37. (S) Find two matrices A, B in Jordan form which have the same minimal polynomial, characteristic polynomial and dimensions of the (ordinary, not generalized) eigenspaces and such that A ≠ B (and neither do A, B differ only by a change in the order of the blocks down the diagonal).
[Ans (example): A = diag(J3(λ), J2(λ), J2(λ)), B = diag(J3(λ), J3(λ), J1(λ))]
Solution: Let's try matrices with a single eigenvalue λ. As in Q.36(b),(c), the largest Jordan block determines the power of (x − λ) in the minimal polynomial, and so we ensure that this is the same for A and B. Next, the geometric multiplicity is given by the number of Jordan blocks. To see this, suppose (x1, . . . , xn)^T is an eigenvector and that the first block has size r. Then the first r linear equations determining the eigenvector are just 0·x1 = 0 and xi = 0, i = 2 . . . r. Continuing with the remaining equations, it is easily seen that the most general eigenvector is (x1, 0, . . . , 0, x_{r+1}, 0, . . . , 0)^T, i.e. there is one arbitrary coordinate for each block. Therefore to complete A and B we add to the largest block the same number of smaller blocks, in such a way as to give matrices of the same size. The above seems the simplest solution.
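The stated example can be verified numerically with λ = 0 (the helper functions are ours):

```python
import numpy as np

# A = diag(J3, J2, J2) and B = diag(J3, J3, J1), all with eigenvalue 0:
# same size, same largest block (so same minimal polynomial x^3), same
# number of blocks (so same eigenspace dimension), yet A != B.
def jordan(n, lam=0.0):
    return lam * np.eye(n) + np.diag(np.ones(n - 1), 1)

def blockdiag(blocks):
    n = sum(len(b) for b in blocks)
    M = np.zeros((n, n))
    i = 0
    for b in blocks:
        M[i:i + len(b), i:i + len(b)] = b
        i += len(b)
    return M

A = blockdiag([jordan(3), jordan(2), jordan(2)])
B = blockdiag([jordan(3), jordan(3), jordan(1)])

def nullity(M):
    return len(M) - np.linalg.matrix_rank(M)

print(nullity(A), nullity(B))  # 3 3  (equal eigenspace dimensions)
print(np.any(np.linalg.matrix_power(A, 2)),
      np.any(np.linalg.matrix_power(B, 2)))  # True True  (A^2, B^2 != 0)
print(np.any(np.linalg.matrix_power(A, 3)),
      np.any(np.linalg.matrix_power(B, 3)))  # False False (A^3 = B^3 = 0)
```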
Completion of hand-in for Week 8
38. (E)
(a) Suppose T is an operator on a vector space V and (v1 , . . . , vn ) is a basis of V with respect to which
the matrix of T is the Jordan matrix Jn (a). Describe the matrix of T with respect to the basis
(vn , . . . , v1 ).
(b) Repeat the question if (v1 , . . . , vn ) is a basis with respect to which the matrix of T is in Jordan Form
(Qu.36).
Solution:
(a) Let wi = v_{n+1−i}, i = 1 . . . n. Since T vi = a vi + v_{i−1} if i > 1 and T v1 = a v1, it follows that T wi = a wi + w_{i+1} if i < n and T wn = a wn. Hence the matrix of T with respect to (vn, . . . , v1) is the transpose of Jn(a): the 1s now lie just below the diagonal.
(b) Another way of getting to the previous result is by reversing the columns of the matrix and then reversing the rows. (The reason for this is that the change-of-basis matrix is a matrix of 1s along the diagonal from lower left to upper right, 0s elsewhere.) This makes the second result obvious.
39. (S) Let V be an n-dimensional real vector space.
(a) Let V have basis {v1 , . . . , vn }. Write down a basis of C V , the complexification of V , verifying that
your basis is indeed a linearly independent spanning set. Hence find dimC V .
(b) Let c1, c2 ∈ R, c = c1 + ic2, and (v1, v2) ∈ C V . . .
Writing λ = λ1 + iλ2 and μ = μ1 + iμ2, the linearity of C T is checked as follows:
C T(λ(v1, v2) + μ(w1, w2))
= C T((λ1 v1 − λ2 v2, λ1 v2 + λ2 v1) + (μ1 w1 − μ2 w2, μ1 w2 + μ2 w1))
= C T(λ1 v1 − λ2 v2 + μ1 w1 − μ2 w2, λ1 v2 + λ2 v1 + μ1 w2 + μ2 w1)
= (T(λ1 v1 − λ2 v2 + μ1 w1 − μ2 w2), T(λ1 v2 + λ2 v1 + μ1 w2 + μ2 w1))
= (λ1 T v1 − λ2 T v2 + μ1 T w1 − μ2 T w2, λ1 T v2 + λ2 T v1 + μ1 T w2 + μ2 T w1)
= (λ1 T v1 − λ2 T v2, λ1 T v2 + λ2 T v1) + (μ1 T w1 − μ2 T w2, μ1 T w2 + μ2 T w1)
= λ(T v1, T v2) + μ(T w1, T w2)
= λ C T(v1, v2) + μ C T(w1, w2).
(d)
40. (S) Let C T be the complexification of an operator T on a finite dimensional real vector space V , and
C V the complexification of V .
(a) If λ is a real eigenvalue of C T , show that it is also an eigenvalue of T .
(b) If λ is a complex eigenvalue of C T , show that its complex conjugate λ̄ is also an eigenvalue of C T .
(c) If λ is a complex eigenvalue of C T , it may be proved that the algebraic multiplicities of λ and λ̄ are
equal. Deduce that, if dim V is odd, T must have a (real) eigenvalue.
Solution:
(a) There is a non-zero (v1 , v2 ) ∈ C V such that C T (v1 , v2 ) = λ(v1 , v2 ). Hence (T v1 , T v2 ) = (λv1 , λv2 ),
since λ is real. Hence T v1 = λv1 and T v2 = λv2 . Now v1 , v2 are not both zero, and so at least one
is a non-zero eigenvector of T with eigenvalue λ.
(b) Again there is a non-zero (v1 , v2 ) ∈ C V such that C T (v1 , v2 ) = λ(v1 , v2 ). Hence (T v1 , T v2 ) =
(μv1 − νv2 , μv2 + νv1 ), where λ = μ + iν, μ, ν ∈ R. Hence (T v1 , T (−v2 )) = (μv1 − νv2 , −(μv2 + νv1 )),
and so C T (v1 , −v2 ) = (μ − iν)(v1 , −v2 ). Hence result.
(c) The sum of the algebraic multiplicities of the eigenvalues of C T is dim C V = dim V . Non-real
eigenvalues occur in conjugate pairs of equal multiplicity, and so give an even contribution; thus if dim V is odd, the contribution from real eigenvalues
must be non-zero. Hence there must exist a real eigenvalue of C T , and this is also an eigenvalue of
T by (a).
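A quick numerical illustration of (b) and (c), ours in Python/numpy: for a real matrix the non-real eigenvalues occur in conjugate pairs, so in odd dimension at least one eigenvalue is real. The 3 × 3 matrix below is an arbitrary example (a rotation block plus a real eigenvalue 2).

```python
import numpy as np

# Illustration of Q.40(b),(c): eigenvalues of a real 3x3 matrix.
T = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 2.]])
eigs = np.linalg.eigvals(T)
real_eigs = [z for z in eigs if abs(z.imag) < 1e-10]
# odd dimension forces at least one real eigenvalue; here it is 2
assert len(real_eigs) == 1 and np.isclose(real_eigs[0].real, 2.0)
# the non-real eigenvalues i, -i are conjugates of one another
complex_eigs = sorted((z for z in eigs if abs(z.imag) >= 1e-10), key=lambda z: z.imag)
assert np.isclose(complex_eigs[0], np.conj(complex_eigs[1]))
```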
41. (P) Let V be a complex n-dimensional space. Its decomplexification is the real vector space R V = {v : v ∈ V },
where addition of vectors is defined as in V and multiplication by real numbers is the scalar
multiplication of V restricted to the reals. Let {v1 , . . . , vn } be a basis of V . Show that {v1 , . . . , vn , iv1 , . . . , ivn }
is a basis of R V , and write down its dimension. Is C (R V ) = V ?
[Ans: No]
Solution: Any v ∈ V may be written v = c1 v1 + . . . + cn vn , where c1 , . . . , cn ∈ C. Expressing the real
and imaginary parts as cj = aj + ibj , we conclude v = a1 v1 + . . . + an vn + b1 .iv1 + . . . + bn .ivn , and so the stated
set is a spanning set of R V . It is also linearly independent, for if a1 v1 + . . . + an vn + b1 .iv1 + . . . + bn .ivn = 0,
with all aj , bj R, we may define cj = aj + ibj and deduce that c1 v1 + . . . + cn vn = 0. Since the vj
are linearly independent over C we conclude that all cj = 0, and hence all aj and bj are zero. Thus
the set {v1 , . . . , vn , iv1 , . . . , ivn } is a linearly independent spanning set of R V . Hence dim R V = 2 dim V .
C (R V ) ≠ V (unless dim V = 0), because dim C (R V ) = dim R V (by ex.39), and so dim C (R V ) = 2 dim V ≠ dim V .
43. (E)
(a) Consider the SBF b on R2 with matrix B = [ 2 3 ; 3 1 ]. Write down the corresponding quadratic
form in terms of standard coordinates x1 , x2 on R2 . Find a vector v ≠ 0 in R2 such that b(v, v) = 0.
Give a sketch showing the regions in the plane where b(x, x) is positive, negative and zero. Find the
eigenvalues of the matrix, and hence or otherwise determine the type, rank and signature of b.
(b) Consider the SBF on R3 with matrix B = [ 1 0 0 ; 0 1 0 ; 0 0 −1 ]. Write down the corresponding quadratic
form in terms of standard coordinates x1 , x2 , x3 on R3 . Sketch the regions in R3 where b(x, x) is
positive, negative and zero. Find the eigenvalues of the matrix, and hence or otherwise determine
the type, rank and signature of b.
[Ans: (a) 2x1² + 6x1 x2 + x2² , v = (−3 + √7, 2), eigenvalues (3 ± √37)/2, type (1,1), rank 2, signature 0;
(b) x1² + x2² − x3² , eigenvalues 1, 1, −1, type (2,1), rank 3, signature 1]
[Figure: Maple plot of the two lines b = 0 through the origin, separating the regions of the plane where b > 0 and b < 0.]
Solution:
Part (c) is
part hand-in
for Week 9
(a) b(x, x) = (x1 , x2 ) B (x1 , x2 )T = 2x1² + 6x1 x2 + x2² . To find v we set this to zero:
2x1² + 6x1 x2 + x2² = 0, i.e. 2(x1 /x2 )² + 6(x1 /x2 ) + 1 = 0 if x2 ≠ 0. Hence
x1 /x2 = −3/2 ± √7/2. Taking the plus sign and x2 = 2 (for example) leads to the stated result.
There are two lines on which b = 0 (obtained by taking the two signs), and these are sketched in the
Maple plot. To find which regions are positive and which negative, just try some simple points. For
instance b((1, 1), (1, 1)) = 9 > 0.
Eigenvalues are given by (2 − λ)(1 − λ) − 9 = 0. Hence stated results. Eigenvalues are of opposite
sign (one each), hence p = 1 and q = 1. Rank and signature are p + q and p − q respectively.
(b) Quadratic form obvious. b > 0 iff x3² < x1² + x2² . In cylindrical polar coordinates r, θ, z this is |z| < r,
which is the equatorial zone between the two cones in the figure. b = 0 on their surface, and b < 0
in the polar regions.
Eigenvalues are obvious, two positive and one negative. Hence results.
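The numbers in part (a) can be checked numerically; the following numpy sketch (ours) verifies the null vector, a sample positive point, and the eigenvalues (3 ± √37)/2.

```python
import numpy as np

# Check of Q.43(a): B, the null vector v = (-3 + sqrt(7), 2), and the
# eigenvalues (3 +- sqrt(37))/2, which have opposite signs (type (1,1)).
B = np.array([[2., 3.],
              [3., 1.]])
def b(x, y):
    return x @ B @ y

v = np.array([-3 + np.sqrt(7), 2.])
assert np.isclose(b(v, v), 0)                          # v lies on a line b = 0
assert np.isclose(b(np.array([1., 1.]), np.array([1., 1.])), 9)   # sample point, b > 0
eigs = np.sort(np.linalg.eigvalsh(B))
assert np.allclose(eigs, [(3 - np.sqrt(37)) / 2, (3 + np.sqrt(37)) / 2])
assert eigs[0] < 0 < eigs[1]                           # p = q = 1: rank 2, signature 0
```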
Eigenvalues are obvious, two positive and one negative. Hence results.
44. (S)
(a) Let V be the vector space of real 2 × 2 matrices with trace zero. What is the
dimension of V ? (Read (a) right through before deciding whether to give up.) Consider the SBF
b(X, Y ) = Trace(XY ) (known, by the way, as the trace form) on V . Find the matrix of b with
respect to the basis [ 0 1 ; 0 0 ], [ 0 0 ; 1 0 ], [ 1 0 ; 0 −1 ] of V . Find the eigenvalues of the matrix,
and hence or otherwise determine the type, rank and signature of b.
(b) Let P2 denote the real vector space of polynomials of degree ≤ 2 in a variable x. What is the
dimension of P2 ? Show that b(p(x), q(x)) := ∫₀¹ p′(x)q′(x) dx defines an SBF on P2 . (The dashes
indicate derivatives.) Find a nonzero element of Nb (see question 42c) and hence deduce that b is
degenerate. Find the matrix of b with respect to the basis {x², x, 1} of P2 and hence or otherwise
find the type, rank and signature of b.
[Ans: (a) 3, [ 0 1 0 ; 1 0 0 ; 0 0 2 ], {1, −1, 2}, (2,1), 3, 1;
(b) 3, 1, [ 4/3 1 0 ; 1 1 0 ; 0 0 0 ], {(7 ± √37)/6, 0}, (2,0), 2, 2]
Solution:
(a) Any element of V may be written
[ a b ; c −a ] = a[ 1 0 ; 0 −1 ] + b[ 0 1 ; 0 0 ] + c[ 0 0 ; 1 0 ],
which shows that the three matrices given later are indeed a spanning set; they are also obviously linearly independent, since their general linear combination A equals zero iff a = b = c = 0. Thus they are a
basis, and dim V = 3.
Calling the three matrices (as ordered in the question) X1 , X2 , X3 , the entries of the matrix B are
Bij = b(Xi , Xj ). Thus B11 = Tr(X1²) = Tr 0 = 0. Similar, not always quite so trivial, calculations
lead to the stated matrix.
The eigenvalues are given by (λ² − 1)(2 − λ) = 0, hence results. Two are positive, one negative, and
so the type, rank and signature are as stated.
(b) P2 (sometimes called R2 (x) in class) has the obvious basis given, hence the dimension.
b is obviously symmetric, and linear in the first argument (as both differentiation and integration are
linear).
Clearly b(1, p) = 0 for all p ∈ P2 . Hence there is at least a one-dimensional subspace (Sp(1))
on which b(p(x), p(x)) = 0. Hence the rank of b must be less than 3, i.e. the form is degenerate.
The matrix has 11-component B11 = ∫₀¹ (2x)² dx = 4/3. Similarly the other entries are easily found,
as given in the answer. The eigenvalues are given by ((4/3 − λ)(1 − λ) − 1)(−λ) = 0, which leads to
the stated results. p = 2, q = 0, and the other results follow immediately.
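The matrix of the trace form in (a) is easy to compute mechanically; this numpy sketch (ours) builds Bij = Trace(Xi Xj) over the basis used in the question and checks the eigenvalues.

```python
import numpy as np

# Check of Q.44(a): matrix of b(X, Y) = Trace(XY) on the basis X1, X2, X3
# of trace-free 2x2 matrices, and its eigenvalues.
X1 = np.array([[0., 1.], [0., 0.]])
X2 = np.array([[0., 0.], [1., 0.]])
X3 = np.array([[1., 0.], [0., -1.]])
basis = [X1, X2, X3]
Bmat = np.array([[np.trace(Xi @ Xj) for Xj in basis] for Xi in basis])
assert np.allclose(Bmat, [[0., 1., 0.],
                          [1., 0., 0.],
                          [0., 0., 2.]])
eigs = np.sort(np.linalg.eigvalsh(Bmat))
assert np.allclose(eigs, [-1., 1., 2.])    # type (2,1): rank 3, signature 1
```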
45. (S)
(a) Suppose that the matrix of b in a basis is in the standard form [ Ip 0 0 ; 0 −Iq 0 ; 0 0 0 ]. Identify a p-dimensional subspace on which b is positive definite. Identify also an (n − p)-dimensional subspace
on which b is negative semi-definite (meaning that b(v, v) ≤ 0 for all v in the subspace). By considering
intersections, deduce that there is no subspace of dimension larger than p on which b is positive definite.
(b) True or false: an SBF is non-degenerate if and only if its matrix does not have zero as an eigenvalue.
(c) What are the possible types of an SBF on R3 if there exists a 2-dimensional subspace on which it is
negative definite?
(d) An SBF on Rn has type (p, q). What is the largest possible dimension for a subspace V such that
b(v, v) < 0 for all nonzero v ∈ V ?
(e) Same as (d), but now b(v, v) ≤ 0 for all nonzero v ∈ V .
(f) True or false: If an SBF is positive definite on subspaces U, U′ ⊆ V then it is positive definite on
their sum U + U′ . Explain your answer. (Hint: take a look at Q.43.)
[Ans: (a) Sp(v1 , . . . , vp ) if p > 0, Sp(vp+1 , . . . , vn ) if p < n; (b) T; (c) (0,2), (0,3), (1,2); (d) q; (e) n − p;
(f) F]
Solution:
(a) The associated quadratic form is x1² + . . . + xp² − (xp+1² + . . . + xp+q²), and so the subspaces stated are obvious. Let
U be a subspace on which b is positive definite, and W the stated subspace on which it is negative
semi-definite. On non-zero vectors v ∈ U ∩ W we must have both b(v, v) > 0 and b(v, v) ≤ 0. Hence
dim(U ∩ W ) = 0. Also U + W ⊆ V (the vector space on which b is defined) and so dim(U + W ) ≤ n.
Using the general result that dim(U ) + dim(W ) = dim(U + W ) + dim(U ∩ W ) we deduce that
dim(U ) ≤ dim(V ) + dim(U ∩ W ) − dim(W ) = n + 0 − (n − p) = p.
(b) It is non-degenerate iff, when put in standard form, p + q = n. If this is true, all eigenvalues are
non-zero, and conversely.
(c) Here we know only that q ≥ 2, while p + q ≤ 3. This leads to the three possibilities listed. All are
realisable; for instance the forms −x1² − x2² , −x1² − x2² − x3² , −x1² − x2² + x3² .
(d) See the first part of this question (swapping positive and negative).
(e) Another application of the same idea. There is a subspace of dimension p on which b is positive
definite.
(f) Look at part (a) of the question indicated. Choose any two distinct 1-dimensional subspaces U, U′ lying in
the region where b > 0. Then U + U′ = R2 , but b is not positive definite on the whole of R2 .
46. (S) Let S = [ 7 −6 ; −6 −2 ].
(a) Find an orthogonal matrix P such that P T SP is diagonal.
(b) Find also a non-singular matrix P such that P T SP is diagonal with diagonal entries ±1 or 0.
(c) What is the type of the SBF on R2 given by the matrix S? What is its rank and what is its signature?
[Ans: (a) [ −2/√5 1/√5 ; 1/√5 2/√5 ], (b) [ −√2/5 1/5 ; √2/10 2/5 ], (c) (1,1), 2, 0]
Solution:
(a) Eigenvalues given by (7 − λ)(−2 − λ) − 36 = 0, whence λ = 10, −5. For λ = 10 an eigenvector is
given by −3x1 − 6x2 = 0, e.g. (−2, 1)T . For an orthogonal matrix this must be normalised, by dividing
by √5. This gives the first column of the matrix given, and the other eigenpair gives the second.
(b) Divide each column by √|λ| to get the stated matrix.
(c) At this point the form is represented by the matrix diag(1, −1), which leads to the stated results.
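With the sign conventions reconstructed above (S = [ 7 −6 ; −6 −2 ]), the diagonalisations in (a) and (b) can be checked directly; the snippet below is our numpy sketch.

```python
import numpy as np

# Check of Q.46: P diagonalises S orthogonally; rescaling its columns by
# 1/sqrt|lambda| reduces S to diag(1, -1).
S = np.array([[7., -6.],
              [-6., -2.]])
P = np.array([[-2., 1.],
              [1., 2.]]) / np.sqrt(5)
assert np.allclose(P.T @ P, np.eye(2))                  # P is orthogonal
assert np.allclose(P.T @ S @ P, np.diag([10., -5.]))
Pp = P / np.sqrt(np.array([10., 5.]))                   # divide columns by sqrt|lambda|
assert np.allclose(Pp.T @ S @ Pp, np.diag([1., -1.]))   # type (1,1), rank 2, signature 0
```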
(c) Show that b is positive definite on the subspace of symmetric matrices and negative definite on the
subspace of antisymmetric matrices.
This whole
question
completes the
hand-in for
Week 9
[Ans: (a) n² , n(n + 1)/2, n(n − 1)/2; (d) (n(n + 1)/2, n(n − 1)/2), n² , n]
Solution:
(a) V is spanned by the n² matrices VIJ , where (VIJ )ij = 1 if i = I and j = J, 0 otherwise. These are
obviously linearly independent, and so dim V = n² . A less formal but more straightforward way of
seeing this is that n × n matrices have n² elements which can be chosen independently.
For symmetric matrices the elements above the diagonal determine those below the diagonal; in
addition the elements on the diagonal may be chosen independently. Therefore the total number of
elements that may be chosen independently is 1 + 2 + . . . + n = n(n + 1)/2.
For antisymmetric matrices the diagonal entries are 0, and again those above the diagonal determine
those below. Therefore the number of entries which may be chosen independently is less than for
symmetrical matrices by the number on the diagonal, n, giving the stated result.
(b) (XY )ii = Σj Xij Yji ; hence result. (Recall that the trace is the sum of diagonal entries.)
b is obviously linear in the first argument. For symmetry we note that Σj Σk Xjk Ykj = Σk Σj Ykj Xjk ,
which is the expression for b(Y, X), except that the dummy indices of summation j, k are interchanged.
(c) Let Y = X, where X is symmetric. Then b(X, X) = Σj Σk Xjk Xkj = Σj Σk Xjk² ,
by symmetry. This is positive definite (i.e. strictly positive, unless X = 0). Similarly, if X is
antisymmetric we have b(X, X) = −Σj Σk Xjk² . Choose bases of the subspaces of symmetric
. Choose bases of the subspaces of symmetric
and antisymmetric matrices with respect to which the matrix of b is diagonal in standard form
(Q.45a). Their union is clearly a basis of V with respect to which b is diagonal with n(n + 1)/2 values
of 1 on the diagonal and n(n 1)/2 values of -1 on the diagonal. These are the values of p, q, from
which the other results are immediate.
48. (S) Let b be the SBF on R4 given by the matrix B given in block form as B = [ I2 0 ; 0 −I2 ]. Let A be
a fixed 2 × 2 matrix. Define U := { x ∈ R4 | x = (Av ; v), v ∈ R2 }. (We are using block form notation:
(Av ; v) denotes the column vector with Av in the first two entries and v in the last two.)
Show that U is a subspace of R4 and state its dimension.
Show that b is identically zero on U if and only if A is an orthogonal matrix (i.e. iff AT A = I).
[Ans: 2]
[Ans: 2]
Solution: Let (v1 , v2 ) be a basis of R2 . Then clearly U is spanned by the two vectors (Avi ; vi ), i = 1, 2,
which are obviously linearly independent. Hence they are a basis of U , whose dimension must be 2.
Next, with u = (Av ; v), we have
b(u, u) = uT Bu
= ((Av)T , v T ) B (Av ; v)
= ((Av)T , −v T ) (Av ; v)
= v T AT Av − v T v
= v T (AT A − I)v.
Hence b is identically zero iff the form on R2 defined by the matrix AT A − I is identically zero. By
considering this form with respect to a basis which puts its matrix in standard form, this happens if and
only if AT A − I = 0.
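The identity b(u, u) = v T (AT A − I)v can be seen numerically; below is our numpy sketch, using a rotation as the orthogonal example and an arbitrary non-orthogonal matrix as the counterexample.

```python
import numpy as np

# Check of Q.48: with B = diag(I2, -I2), b vanishes on U = {(Av, v)}
# exactly when A^T A = I.
B = np.block([[np.eye(2), np.zeros((2, 2))],
              [np.zeros((2, 2)), -np.eye(2)]])

def b_on_U(A, v):
    u = np.concatenate([A @ v, v])
    return u @ B @ u          # equals v^T (A^T A - I) v

t = 0.7
A_orth = np.array([[np.cos(t), -np.sin(t)],
                   [np.sin(t),  np.cos(t)]])    # orthogonal (a rotation)
A_not = np.array([[2., 0.],
                  [0., 1.]])                    # not orthogonal
rng = np.random.default_rng(0)
for _ in range(5):
    v = rng.standard_normal(2)
    assert np.isclose(b_on_U(A_orth, v), 0.0)
assert not np.isclose(b_on_U(A_not, np.array([1., 0.])), 0.0)
```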
49. (S) Consider the quadratic form φ = 2x² + 3y² + z² − 4xy + 2xz + 2yz.
(a) Write down the (symmetric) matrix B of this quadratic form.
(b) By evaluating determinants only, determine the type of this form.
Part (a) is
part hand-in
for Week 10
(c) Find the eigenvalues and eigenvectors of B and check that the signs of the eigenvalues are consistent
with the derivation of the type in the previous part. (You might want to use Maple to find the eigenvalues:
use evalf(LinearAlgebra[Eigenvalues](B)).)
[Ans: (a) B = [ 2 −2 1 ; −2 3 1 ; 1 1 1 ]; (b) type (2,1); (c) the eigenvalues are the roots of
λ³ − 6λ² + 5λ + 7 = 0, two positive and one negative]
Solution:
(a) Halving the cross-term coefficients gives the stated matrix.
(b) Computing 1 × 1, 2 × 2 and 3 × 3 (sub)determinants starting from the top left, the results are 2, 2, −7.
Thus the number of changes of sign in the sequence 1, 2, 2, −7 is 1, and so q = 1. The form is
non-degenerate (det B ≠ 0) and so p = 2.
(c) Suitable Maple code is
>with(LinearAlgebra):
>A:=Matrix([[2,-2,1],[-2,3,1],[1,1,1]]):
evalf(Eigenvalues(A));
Without evalf the expressions are surprisingly complex, even though it is known that the result
is real. Even the numerical evaluation leads to a small complex part, because of rounding error.
The trigonometric form given in the answer is obtained by a standard method; see, for example,
http://mathworld.wolfram.com/CubicFormula.html, starting at about eq.(71).
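The subdeterminant computation in (b) can also be done in Python/numpy instead of Maple; this is our sketch of the same check.

```python
import numpy as np

# Check of Q.49: leading minors of B are 2, 2, -7 (one sign change in
# the sequence 1, 2, 2, -7, so q = 1 and type (2,1)), consistent with
# the eigenvalue signs.
B = np.array([[2., -2., 1.],
              [-2., 3., 1.],
              [1., 1., 1.]])
minors = [np.linalg.det(B[:k, :k]) for k in (1, 2, 3)]
assert np.allclose(minors, [2., 2., -7.])
eigs = np.linalg.eigvalsh(B)
assert sum(eigs > 0) == 2 and sum(eigs < 0) == 1
```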
50. (S, with H extension) Show that the origin is a critical point of
f (x, y, z) = 2x² + y sin y + z² + 2(y + z) sin x − 2ky sin z
(where k is a constant). What can you say about the nature of the critical point for different values of k?
[Ans: Min if −1 < k < 0, saddle if k < −1 or if k > 0; (harder) min if k = 0, −1]
Solution: ∇f = (4x + 2(y + z) cos x, sin y + y cos y + 2 sin x − 2k sin z, 2z + 2 sin x − 2ky cos z) = (0, 0, 0)
at (0, 0, 0).
At this point the Hessian is H = [ 4 2 2 ; 2 2 −2k ; 2 −2k 2 ]. The sequence of subdeterminants (supplemented
with a leading 1) is 1, 4, 4, −16k(k + 1). If −1 < k < 0 then k(k + 1) < 0; there is no sign change, and
H is positive definite. Then (0, 0, 0) is a minimum. If k < −1 or k > 0 then H has type (2,1), and the
critical point is a saddle.
Dealing with the end-points 0 and −1 is a little tedious, but illustrates a fairly standard procedure for
such problems. It is best to first diagonalise. For k = 0 eigenpairs are
(2, (0, 1, −1)T ), (6, (2, 1, 1)T ), (0, (−1, 1, 1)T ).
Let P be the matrix with columns equal to these eigenvectors. It gives the transformation from new
coordinates x′ , y′ , z′ to the original coordinates, i.e.
x = 2y′ − z′
y = x′ + y′ + z′
z = −x′ + y′ + z′ .
In these coordinates we have (for k = 0) f = 2x′² + 18y′² plus higher order terms, which confirms the
expected type. The leading higher order terms are quartic; there are no cubic terms as f is invariant
under the reflection (x, y, z) → (−x, −y, −z). Now perform a further coordinate transformation
x′ = x″ + g
y′ = y″ + h
z′ = z″ ,
where g, h are functions whose series expansions start with cubic terms. Then f = 2x″² + 18y″² + 4x″g +
36y″h plus the original quartic terms and terms of higher order. Now we can choose g, h to eliminate
quartic terms containing x″ or y″ as a factor. This leaves the term (1/24) z″⁴ ∂⁴f/∂z″⁴(0, 0, 0). We can compute
this by setting x′ = y′ = 0, i.e. x = −z′ , y = z′ , z = z′ , which gives f = 3z′² − 3z′ sin z′ = (1/2)z′⁴ + . . . ,
so the origin is a minimum.
For k = −1, in summary: the eigenvalues are 4 ± 2√2, 0, and a diagonalising transformation has matrix
[ 2 −2 0 ; √2 √2 1 ; √2 √2 −1 ].
Then f = (16 + 8√2)x″² + (16 − 8√2)y″² plus quartic terms. All except the term
in z″⁴ can be eliminated by a further (nonlinear) change of variable, and this term can be evaluated by
expanding f after setting x″ = y″ = 0, i.e. x = 0, y = z″ , z = −z″ , which gives the term (1/6)z″⁴ . Again
the origin is a minimum.
51. (S) What is the type of the SBF with matrix [ −3 12 7 ; 12 4 2 ; 7 2 2 ]? You do not need to evaluate a 3 × 3
determinant (or compute eigenvalues - don't even think of it).
[Ans: (2,1)]
Solution: The lower right 1 × 1 and 2 × 2 subdeterminants are positive, and therefore b, restricted to the
span of the last two basis vectors, is positive definite. Hence p ≥ 2. It is clearly negative definite on the
span of the first basis vector, and so q ≥ 1. But p + q ≤ 3 and so the result follows.
52. (S) What is the type of the SBF with matrix
(a) [ 0 1 1 ; 1 1 1 ; 1 1 2 ]
(b) [ 1 6 1 ; 6 −1 3 ; 1 3 2 ]
(c) [ −3 2 7 4 ; 2 −2 6 3 ; 7 6 1 1 ; 4 3 1 2 ] ?
[Ans: (a) (2,1), (b) (2,1), (c) (2,2)]
Solution:
(a) Evaluating subdeterminants, starting from the lower right, we get the sequence 2, 1, −1. There is one
sign change (starting with a supplementary 1), hence result.
(b) Evaluating subdeterminants, starting from top left, gives the sequence 1, −37, −46, with only one
sign change. Hence q = 1 and, since the form is non-degenerate, p = 2.
(c) The top left 1 × 1 and 2 × 2 subdeterminants are −3, 2, and so b is negative definite on the span of
the first two basis vectors. Thus q ≥ 2. Similarly, however, on the span of the last two basis vectors,
b is positive definite, and so p ≥ 2. But p + q ≤ 4, hence result.
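With the matrices as reconstructed above, the types can be confirmed by counting eigenvalue signs; this numpy sketch (ours) does exactly that.

```python
import numpy as np

# Check of Q.52: counting eigenvalue signs reproduces the types found by
# the subdeterminant arguments.
def sbf_type(B):
    eigs = np.linalg.eigvalsh(np.asarray(B, dtype=float))
    return int(sum(eigs > 1e-9)), int(sum(eigs < -1e-9))

assert sbf_type([[0, 1, 1], [1, 1, 1], [1, 1, 2]]) == (2, 1)
assert sbf_type([[1, 6, 1], [6, -1, 3], [1, 3, 2]]) == (2, 1)
assert sbf_type([[-3, 2, 7, 4], [2, -2, 6, 3],
                 [7, 6, 1, 1], [4, 3, 1, 2]]) == (2, 2)
```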
53. (S) Classify the following quadrics.
(a) x2 + 2y 2 + 3z 2 + 2xy + 2xz = 1
(b) 2xy + 2xz + 2yz = 1
(c) x2 + 3y 2 + 2xz + 2yz 6z 2 = 1
You may be able to do all of these using determinants if you think carefully. You can always check your
answer by asking Maple for the eigenvalues.
[Ans: (a) ellipsoid, (b) hyperboloid of two sheets (c) hyperboloid of one sheet]
Solution:
(a) The corresponding matrix is [ 1 1 1 ; 1 2 0 ; 1 0 3 ], and the sequence of subdeterminants from top left is
1, 1, 1. No sign change, therefore type (3,0). Hence result.
(b) The matrix is [ 0 1 1 ; 1 0 1 ; 1 1 0 ], which looks very tricky. But the form is clearly not negative definite,
and so p ≥ 1 and q ≤ 2. Looking at the top left 2 × 2 determinant (−1) we see that, restricted to the
x, y plane, the form has type (1,1), and so q ≥ 1. Similarly, when restricted to each of the three
coordinate planes, it has this type. Hence the subspace on which it is negative definite must intersect
these planes, and must itself be a plane (no line can do so). Therefore q = 2.
(c) The matrix is [ 1 0 1 ; 0 3 1 ; 1 1 −6 ], and the sequence of determinants is 1, 3, −22. One sign change, and
so q = 1.
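The three classifications can be cross-checked by eigenvalue signs, as the question itself suggests (there via Maple); this is our numpy version.

```python
import numpy as np

# Check of Q.53: eigenvalue signs give types (3,0), (1,2), (2,1), i.e.
# ellipsoid, hyperboloid of two sheets, hyperboloid of one sheet.
def sbf_type(B):
    eigs = np.linalg.eigvalsh(np.asarray(B, dtype=float))
    return int(sum(eigs > 1e-9)), int(sum(eigs < -1e-9))

assert sbf_type([[1, 1, 1], [1, 2, 0], [1, 0, 3]]) == (3, 0)
assert sbf_type([[0, 1, 1], [1, 0, 1], [1, 1, 0]]) == (1, 2)
assert sbf_type([[1, 0, 1], [0, 3, 1], [1, 1, -6]]) == (2, 1)
```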
54. (S) Let φ(x) be a quadratic form on Rn given by a symmetric matrix S. How are the maximum and
minimum values of φ(x) on the unit sphere xT x = 1 related to the eigenvalues of S? (Hint: orthogonal
change of coordinates to standard form.)
Continuing, use Lagrange multipliers to find the critical points of xT Sx subject to the constraint xT x = 1.
[Ans: maxi λi , mini λi ; xi (normalised eigenvector)]
Solution: There is an orthogonal change of coordinates to a set of coordinates x′ such that the matrix of
φ is diag(λ1 , . . . , λn ). Also, because the transformation is orthogonal, the unit sphere is still x′T x′ = 1. Thus
φ = λ1 x1′² + . . . + λn xn′² ≥ min(λi )(x1′² + . . . + xn′²) = min(λi ),
with equality at the corresponding normalised eigenvector; similarly the maximum is max(λi ).
For the Lagrange multiplier version, ∇(xT Sx − λ(xT x − 1)) = 0 gives 2Sx − 2λx = 0, i.e. Sx = λx,
so the critical points are the normalised eigenvectors xi , with critical values the eigenvalues λi .
55. (S) Let A = [ 2 1 ; 1 1 ] and B = [ 1 3 ; 3 3 ]. Find a non-singular matrix X such that X T AX = I and
X T BX is diagonal.
Solution: The relative eigenvalues are given by det(B − λA) = 0, i.e.
det [ 1 − 2λ 3 − λ ; 3 − λ 3 − λ ] = (1 − 2λ)(3 − λ) − (3 − λ)² = (3 − λ)(−2 − λ),
and so λ = −2, 3. For λ = −2 (relative) eigenvectors are given by (1 − 2(−2))x1 + (3 − (−2))x2 = 0, and
so x = (a, −a)T is a possibility. It has to be normalised with respect to A, i.e. xT Ax = 1, and so a² = 1.
Completion of
hand-in for
week 10
We choose a = 1. Similarly for λ = 3 we may get (0, 1)T . Hence (in a different order) we obtain the
columns of the transformation matrix X = [ 0 1 ; 1 −1 ]. It is easy to verify that X T AX and X T BX are
diag(1, 1) and diag(3, −2).
Quotient Spaces
56. (S) In each of the following cases show that ∼ is an equivalence relation on the set X, and describe [x] for
x ∈ X.
(a) X = R, u ∼ v ⇔ u − v ∈ Z
(c) X = {(a, b) | a, b ∈ Z and b ≠ 0} (so an element of X is a pair of integers, with the second one
non-zero.) (a, b) ∼ (k, l) ⇔ al = bk.
(c) (i) Choose A = I. Thus x ∼ x for all x ∈ R2 . (ii) x ∼ y implies that there is an invertible A such that
y = Ax. Then x = A−1 y; A−1 is invertible. Hence y ∼ x. (iii) is rather like (b) above.
(d) If x ≠ 0, (b) shows that all non-zero y are in [x]. By (a), obviously [x] = [e1 ]. Also, x = 0 and x ∼ y
implies that y = 0. Thus [0] consists of the single vector 0. There are two equivalence classes.
58. (S) The three assumptions for an equivalence relation are (i) reflexivity, (ii) symmetry, (iii) transitivity.
The following argument appears to show that the first of these is redundant. By (ii), a ∼ b ⇒ b ∼ a, and
then (iii) implies that a ∼ a. Spot the flaw, and produce an example of a relation which satisfies (ii) and
(iii) but not (i).
[Ans: X = {0, 1}, a ∼ b ⇔ a + b = 0]
Solution: The flaw is that, for some a, there may be no b such that a ∼ b. In the example, this is the
case for 1.
(b) Check that the addition defined in class for the quotient space X/V is well defined.
(c) Check that the distributive law λ(x + y) = λx + λy, for all vectors x, y ∈ X/V and scalars λ, holds in X/V .
(a) is part
hand-in for
Week 11
Solution:
(a) x ∼ y ⇔ x − y ∈ V . First, x − x = 0 ∈ V , so x ∼ x. If x ∼ y then y − x ∈ V , as a subspace must be closed under multiplication by the
scalar −1. Hence y ∼ x. Similarly x ∼ y and y ∼ z ⇒ x − y ∈ V and y − z ∈ V . Hence x − z ∈ V ,
as V must be closed under addition. Hence x ∼ z.
(c)
λ([x] + [y]) = λ[x + y]
= [λ(x + y)]
= [λx + λy]
= [λx] + [λy]
= λ[x] + λ[y].
60. (S) Suppose T : X → Y is a linear map and that V ⊆ X is a subspace such that V ⊆ ker T . Define a
linear map T̄ : X/V → Y (checking that the map you have defined is both well defined and linear) such
that the diagram

X --T--> Y
P ↓    ↗ T̄
X/V

commutes. (The vertical map is the usual one, denoted P in lectures.) Find the dimension of the kernel
of T̄ in terms of the dimensions of V and ker T . (Hint: apply the rank theorem to T and T̄.)
[Ans: dim ker T − dim V ]
Solution: Let T̄ ([x]) = T (x). This is well defined, as [x] = [x′ ] ⇒ x − x′ ∈ V ⊆ ker T , and so
T (x − x′ ) = 0 = T (x) − T (x′ ). It is also linear, as
T̄ (λ1 [x1 ] + λ2 [x2 ]) = T̄ ([λ1 x1 + λ2 x2 ])
= T (λ1 x1 + λ2 x2 )
= λ1 T (x1 ) + λ2 T (x2 )
= λ1 T̄ ([x1 ]) + λ2 T̄ ([x2 ]).
61. (S) Let T be the linear map on R3 with matrix [ 1 0 −1 ; 0 2 0 ; 1 0 0 ] relative to the standard basis (e1 , e2 , e3 ).
Show that V = Sp(e2 ) is invariant under T (i.e. T (V ) ⊆ V ). Write down a basis of R3 /V , and the matrix
of the canonical map T̄ with respect to this basis.
[Ans: ([e1 ], [e3 ]), [ 1 −1 ; 1 0 ]]
Solution: T (e2 ) = [ 1 0 −1 ; 0 2 0 ; 1 0 0 ] (0, 1, 0)T = (0, 2, 0)T = 2e2 . Thus T (e2 ) ∈ V , and so V is invariant.
Therefore T̄ exists.
Completion of
hand-in for
Week 11
The vectors [e1 ], [e3 ] are linearly independent, as a1 [e1 ] + a3 [e3 ] = [0] ⇒ a1 e1 + a3 e3 ∈ V , i.e. there
exists a2 such that a1 e1 + a3 e3 = a2 e2 . Since (e1 , e2 , e3 ) is a basis of R3 , a1 = a2 = a3 = 0. Also
dim R3 /V = dim R3 − dim V = 2, and so ([e1 ], [e3 ]) must be a basis of R3 /V .
T̄ ([e1 ]) = [T (e1 )] = [e1 + e3 ] = [e1 ] + [e3 ] and T̄ ([e3 ]) = [T (e3 )] = [−e1 ] = −[e1 ]. The coefficients of
[e1 ], [e3 ] give the columns of the required matrix.
62. (S) Consider the differentiation map D : P3 → P3 , where P3 is the real vector space of polynomials of
degree ≤ 3 in a variable x. Show that D gives rise to a linear map D̄ : P3 /V → P3 /V , where V is the
subspace of constant polynomials. What is the matrix of D̄ with respect to the basis ([x], [x²], [x³]) of
P3 /V ?
[Ans: [ 0 2 0 ; 0 0 3 ; 0 0 0 ]]
Solution: Since V is invariant under D, D̄ is well defined. Then
D̄([x]) = [D(x)] = [1] = 0 ∈ P3 /V
D̄([x²]) = [D(x²)] = [2x] = 2[x]
D̄([x³]) = [D(x³)] = [3x²] = 3[x²],
from which the columns of the matrix are immediate.
Polar and Singular Value Decomposition
63. (S) Find (by explicit calculation) all 2 2 real symmetric positive definite matrices S satisfying S 2 = I2 ,
where I2 is the 2 2 unit matrix.
[Ans: I2 ]
Solution: S = [ a b ; b d ] works iff
a² + b² = 1,  b(a + d) = 0,  b² + d² = 1.
Subtracting the first and last equations gives a² = d², and positive definiteness forces a, d > 0, so
a = d and a + d > 0. Then b = 0, and a² = 1 with a > 0 gives a = 1. Hence S = I2 .
64. (S) Find the polar form A = S √(AT A) (with S orthogonal) of the matrices
(a) (1/√10) [ 10 6 ; 0 8 ]
(b) [ 0 2 0 ; 0 0 3 ; 1 0 0 ]
[Ans: (a) S = [ 3/√10 1/√10 ; −1/√10 3/√10 ], √(AT A) = [ 3 1 ; 1 3 ];
(b) S = [ 0 1 0 ; 0 0 1 ; 1 0 0 ], √(AT A) = [ 1 0 0 ; 0 2 0 ; 0 0 3 ]]
Solution:
(a) AT A = [ 10 6 ; 6 10 ]. This has eigenvalues 4, 16, orthogonal diagonalising matrix P = (1/√2) [ 1 1 ; −1 1 ],
and singular values σ1 , σ2 = 2, 4. Hence √(AT A) = P diag(2, 4)P T = [ 3 1 ; 1 3 ].
Now construct the matrix Q whose ith column is Avi /σi , where A is the original matrix and vi is
the ith column of P . Thus Q = (1/√5) [ 1 2 ; −2 1 ]. Then A = QP T √(AT A), and QP T gives the answer
stated.
(b) √(AT A) = diag(1, 2, 3), Q = [ 0 1 0 ; 0 0 1 ; 1 0 0 ].
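Part (a) can be verified end to end; this numpy sketch (ours) builds √(AT A) by diagonalisation and recovers the orthogonal polar factor.

```python
import numpy as np

# Check of Q.64(a): compute sqrt(A^T A) and the orthogonal polar factor R
# with A = (1/sqrt(10)) [10 6; 0 8]; both agree with the stated answer.
A = np.array([[10., 6.],
              [0., 8.]]) / np.sqrt(10)
lam, P = np.linalg.eigh(A.T @ A)          # eigenvalues 4, 16
S = P @ np.diag(np.sqrt(lam)) @ P.T       # sqrt(A^T A), the positive factor
assert np.allclose(S, [[3., 1.], [1., 3.]])
R = A @ np.linalg.inv(S)                  # orthogonal polar factor
assert np.allclose(R.T @ R, np.eye(2))
assert np.allclose(R, np.array([[3., 1.], [-1., 3.]]) / np.sqrt(10))
assert np.allclose(R @ S, A)
```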
65. (S) Suppose A is a 2 × 2 symmetric matrix with unit eigenvectors u1 , u2 . If its eigenvalues are λ1 = 3 and
λ2 = −2, what are U, Σ and V in the SVD A = U ΣV T ?
[Ans: U has columns u1 , u2 , Σ = diag(3, 2), V has columns u1 , −u2 ]
66. (S) Find the SVD of A = [ 1 4 ; 2 8 ].
[Ans: U = [ 1/√5 −2/√5 ; 2/√5 1/√5 ], Σ = [ √85 0 ; 0 0 ], V = [ 1/√17 −4/√17 ; 4/√17 1/√17 ]]
Solution: AT A = [ 5 20 ; 20 80 ], eigenvalues 85, 0, orthogonal diagonalising matrix V = (1/√17) [ 1 −4 ; 4 1 ]. Singular
value σ1 = √85. First column of U is Av1 /σ1 , where v1 is the first column of V , and so u1 = (1/√5, 2/√5)T .
Complete U to make it orthogonal, giving the stated result (not quite uniquely).
68. (S) Let A1 , A2 be real n × n matrices. Prove that they have the same singular values iff there exist real
orthogonal matrices S1 , S2 such that A1 = S1 A2 S2 .
Solution: Let the SVDs be Ai = Ui Σi ViT , so that Σi = UiT Ai Vi .
If they have the same singular values then Σ1 = Σ2 .¹ Hence A1 = U1 Σ2 V1T = U1 U2T A2 V2 V1T = S1 A2 S2 ,
where S1 = U1 U2T and S2 = V2 V1T . These are orthogonal, as S1 S1T = U1 U2T U2 U1T = U1 U1T = I = S1T S1 ,
and similarly for S2 .
Conversely, suppose A1 = S1 A2 S2 , where S1 , S2 are orthogonal. Then A1T A1 = S2T A2T S1T S1 A2 S2 = S2T A2T A2 S2 .
Hence if V1 diagonalises A1T A1 , i.e. V1T A1T A1 V1 = Σ1² , where Σ1² is the diagonal matrix whose diagonal
entries are the squares of the singular values of A1 , then V1T S2T A2T A2 S2 V1 = Σ1² , and so
V2 = S2 V1 diagonalises A2T A2 to the same diagonal matrix as A1T A1 . Hence result.
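One direction of the result is easy to see numerically; the sketch below (ours) multiplies a random matrix by random orthogonal factors and checks that the singular values are unchanged.

```python
import numpy as np

# Illustration of Q.68: multiplying by orthogonal matrices on either side
# leaves the singular values unchanged.
rng = np.random.default_rng(1)
A2 = rng.standard_normal((4, 4))
S1, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # random orthogonal matrices
S2, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A1 = S1 @ A2 @ S2
assert np.allclose(np.linalg.svd(A1, compute_uv=False),
                   np.linalg.svd(A2, compute_uv=False))
```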
69. (S) Find the SVD of (a) [ 1 1 1 1 ] (b) [ 0 1 0 ; 1 0 0 ] (c) [ 1 1 ; 0 0 ]
[Ans: (a) (1) (2, 0, 0, 0) U T , where U is the orthogonal matrix constructed in the solution;
(b) [ 0 1 ; 1 0 ] [ 1 0 0 ; 0 1 0 ] I3 ;
(c) I2 [ √2 0 ; 0 0 ] [ 1/√2 −1/√2 ; 1/√2 1/√2 ]T ]
¹ Strictly, they need not appear in the same order, in which case Σ2 = ΠΣ1 ΠT , where Π is a permutation matrix. But then Π is
orthogonal, and has no effect on the foregoing argument except slightly complicating the formulae.
Solution:
(a) It is easier to find the SVD of B = AT first. Thus B T B = AAT = (4) (a 1 × 1 matrix). This is
diagonalised by V = I1 (the 1 × 1 unit matrix), and there is one singular value, σ1 = 2. The
matrix Σ has the same shape as B, and is therefore (2, 0, 0, 0)T .
The first column of U is BI1 /σ1 , i.e. (1/2, 1/2, 1/2, 1/2)T . It can be extended to an orthogonal
matrix in many ways, e.g.
U = [ 1/2 1/√2 0 1/2 ; 1/2 −1/√2 0 1/2 ; 1/2 0 1/√2 −1/2 ; 1/2 0 −1/√2 −1/2 ].
(Just try constructing each column in such a way that it is orthogonal to everything that has gone before, and then normalise.)
With these choices, B = U ΣV T and so A = V ΣT U T , giving the solution stated.
(b) AT A = diag(1, 1, 0), and so the singular values are 1, 1, the diagonalising matrix is V = I3 , and
Σ = [ 1 0 0 ; 0 1 0 ]. In the present case the columns of U are the first two columns of A (because
V = I3 and σ1,2 = 1).
(c) AT A = [ 1 1 ; 1 1 ], eigenvalues 2, 0, singular value σ1 = √2, orthogonal diagonalising matrix V
as given in the answer. The first column of U is Av1 /σ1 = (1, 0)T . This can be extended to an
orthogonal matrix in only two ways.
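The singular values of all three matrices can be checked against a library SVD; this is our numpy sketch.

```python
import numpy as np

# Check of Q.69: the singular values of the three matrices are
# (a) 2, (b) 1, 1, (c) sqrt(2), 0.
sv = lambda A: np.linalg.svd(np.asarray(A, dtype=float), compute_uv=False)
assert np.allclose(sv([[1, 1, 1, 1]]), [2.])
assert np.allclose(sv([[0, 1, 0], [1, 0, 0]]), [1., 1.])
assert np.allclose(sv([[1, 1], [0, 0]]), [np.sqrt(2), 0.])
```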
70. (S) This question is an extended hand-in for Week 12 or Semester 2, Week 1
(a) A relation ∼ on a set X is said to be circular if a ∼ b and b ∼ c imply c ∼ a, for all a, b, c ∈ X. Prove
that a relation is reflexive and circular iff it is reflexive and symmetric and transitive. [6]
(b) State the properties and sizes of the matrices U, Σ, V in the singular value decomposition A = U ΣV T
of an m × n real matrix A. What is meant by singular values? [4]
(c) Find the singular value decomposition of the matrix [ 1 1 ; 1 1 ]. [7]
Solution:
(a) Suppose ∼ is reflexive and circular. If a ∼ b then, since also b ∼ b, circularity (with c = b) gives b ∼ a;
so ∼ is symmetric. If a ∼ b and b ∼ c then circularity gives c ∼ a, and symmetry then gives a ∼ c; so
∼ is transitive. Conversely, if ∼ is symmetric and transitive then a ∼ b and b ∼ c imply a ∼ c and
hence c ∼ a, so ∼ is circular.
(b) U is an m × m orthogonal matrix, V is an n × n orthogonal matrix, and Σ is an m × n matrix which is
zero except for non-negative entries σ1 ≥ σ2 ≥ . . . on the leading diagonal; these σi are the singular
values, the non-negative square roots of the eigenvalues of AT A.
(c) AT A = [ 2 2 ; 2 2 ] has eigenvalues 4, 0, and orthogonal diagonalising matrix
V = [ 1/√2 −1/√2 ; 1/√2 1/√2 ].
[2]
Hence σ1 = 2 and Σ = diag(2, 0). Denoting the columns of V by v1 , v2 , the first column of U is Av1 /σ1 = (1/√2, 1/√2)T , and the second column is any
orthogonal unit vector, e.g. U = [ 1/√2 −1/√2 ; 1/√2 1/√2 ].
[2]
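Part (a) can also be verified exhaustively on a small set; the sketch below (ours) enumerates all 512 relations on a 3-element set and confirms that reflexive-and-circular coincides with reflexive-symmetric-transitive.

```python
import itertools

# Brute-force check of Q.70(a) on a 3-element set: a relation is reflexive
# and circular iff it is an equivalence relation.
X = range(3)
pairs = list(itertools.product(X, X))
count = 0
for bits in itertools.product([0, 1], repeat=len(pairs)):
    R = {p for p, bit in zip(pairs, bits) if bit}
    reflexive = all((a, a) in R for a in X)
    symmetric = all((b, a) in R for (a, b) in R)
    transitive = all((a, c) in R for (a, b) in R for (bb, c) in R if b == bb)
    circular = all((c, a) in R for (a, b) in R for (bb, c) in R if b == bb)
    assert (reflexive and circular) == (reflexive and symmetric and transitive)
    count += 1
assert count == 512
```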