
Homework 5 Solutions

4.1 #19(b). Determine the image and kernel of the following linear operators on P3.
(b) L(p(x)) = p(x) − p′(x).

Solution. Let
p(x) = a + bx + cx²
be a general element of P3. Then,

L(p(x)) = (a + bx + cx²) − (b + 2cx) = (a − b) + (b − 2c)x + cx².
So L(p(x)) = 0 iff
a − b = 0
b − 2c = 0
c = 0
and this simple 3×3 system has solution c = 0, b = 0, and a = 0. Hence,
ker(L) = {0}.
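As a quick sanity check (not part of the assigned solution), the kernel computation above can be verified with SymPy; the symbol names here are arbitrary:

```python
from sympy import symbols, diff, Poly, solve

x, a, b, c = symbols("x a b c")
p = a + b*x + c*x**2              # general element of P3
Lp = (p - diff(p, x)).expand()    # L(p) = p - p'

# Collect the coefficients of L(p) in x and solve L(p) = 0.
coeffs = Poly(Lp, x).all_coeffs()   # [c, b - 2*c, a - b]
print(solve(coeffs, [a, b, c]))     # {a: 0, b: 0, c: 0}
```

The only solution is a = b = c = 0, confirming ker(L) = {0}.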
Likewise, a polynomial q(x) = α + βx + γx² is in the range of L iff
a − b = α
b − 2c = β
c = γ
and this system can be solved for any choice of constants α, β, and γ:
c = γ
b = β + 2γ
a = α + β + 2γ
Thus, given any polynomial q(x) = α + βx + γx² in P3, the polynomial p(x) = (α + β + 2γ) + (β + 2γ)x + γx² satisfies
L(p(x)) = q(x).
This shows that L is onto; that is,
R(L) = P3.
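The preimage formula can also be checked symbolically. This is a sketch assuming SymPy, with placeholder names al, be, ga standing in for α, β, γ:

```python
from sympy import symbols, diff, expand

x = symbols("x")
al, be, ga = symbols("alpha beta gamma")   # coefficients of q

# The claimed preimage p(x) = (al + be + 2*ga) + (be + 2*ga)x + ga*x^2.
p = (al + be + 2*ga) + (be + 2*ga)*x + ga*x**2
Lp = expand(p - diff(p, x))   # apply L

# L(p) should reduce to q(x) = al + be*x + ga*x^2.
assert expand(Lp - (al + be*x + ga*x**2)) == 0
```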
Remark. As soon as you figure out that ker(L) = {0}, you also know that
dim P3 = dim ker(L) + dim R(L) = 0 + dim R(L).
Since dim R(L) = dim P3 and R(L) ⊆ P3, you know (why?) that R(L) = P3
(without the algebra).

4.2 #24. Let A be a 2×2 matrix, and let LA be the linear operator defined by
LA(x) = Ax. Show that

(a) LA maps R² onto the column space of A.

(b) If A is nonsingular, then LA maps R² onto R².

Solution. Both parts can be established by direct calculations in the 2×2
case. The same results are true in the n×n case, and the proofs are easy if you
use what you know. So let's just do the general case.
(a) For any n×n matrix A and any x ∈ Rⁿ, you know that

Ax = x1 a1 + x2 a2 + ... + xn an

where A = [a1 a2 ... an] expresses A in terms of its columns and x = [x1 x2 ... xn]T.
Since R(LA) is the set of all the vectors LA(x) = Ax as x varies over Rⁿ, the
displayed equation says exactly that

R(LA) = {x1 a1 + x2 a2 + ... + xn an : xk ∈ R} = span(a1, a2, ..., an),

and this span is, by definition, the column space of A.
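A small numeric illustration of the displayed identity, with a sample matrix and vector chosen arbitrarily for the example:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])   # sample 2x2 matrix (illustrative choice)
x = np.array([5.0, -1.0])

# Ax equals the linear combination x1*a1 + x2*a2 of the columns of A.
combo = x[0] * A[:, 0] + x[1] * A[:, 1]
assert np.allclose(A @ x, combo)
```

So every value of LA is a linear combination of the columns, and conversely.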

(b) If A is nonsingular, then A⁻¹ exists. To show R(LA) = Rⁿ, you must
show that given b ∈ Rⁿ there exists x ∈ Rⁿ such that

LA(x) = b;

that is, Ax = b. But clearly x = A⁻¹b does the job:

LA(A⁻¹b) = A(A⁻¹b) = (AA⁻¹)b = b.

Thus, R(LA) = Rⁿ.
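A quick numeric check of part (b), with a sample nonsingular matrix chosen arbitrarily for illustration:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])   # det A = 1, so A is nonsingular
b = np.array([3.0, -2.0])

x = np.linalg.solve(A, b)     # computes x = A^{-1} b
assert np.allclose(A @ x, b)  # LA(x) = b, so b lies in the range of LA
```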

Remarks (on (b) once (a) is done). Another proof uses the fact that LA : Rⁿ → Rⁿ is onto iff it is one-to-one. Here it is equally easy to check onto directly as
above, but you should provide the argument that LA is one-to-one; hence, onto.
Alternatively, since A is nonsingular, det(A) ≠ 0, and the determinant test shows
that the columns a1, a2, ..., an of A are linearly independent in Rⁿ. Since there
are n columns, these linearly independent vectors are a basis. (Why?) Since the
elements of a basis span the vector space, R(LA) = span(a1, a2, ..., an) = Rⁿ.

AP 8 #1. Let T : V → W be a linear transformation. Prove: If T is one-to-one, then

T maps linearly independent sets in V to linearly independent sets in W;
that is, if v1, ..., vn are linearly independent in V, then T(v1), ..., T(vn)
are linearly independent in W.

Solution. To show that T(v1), ..., T(vn) are linearly independent in W you
must show that the equation
α1 T(v1) + ... + αn T(vn) = 0
implies that all the scalars αj are zero. By linearity the foregoing equation can
be expressed as
T(α1 v1 + ... + αn vn) = 0.
Now, T(0) = 0 for any linear transformation. Since this T is one-to-one, the only
vector that T maps to zero is the zero vector. Hence, the preceding equation
implies
α1 v1 + ... + αn vn = 0.
Since v1, ..., vn are linearly independent in V, this equation implies
α1 = 0, α2 = 0, ..., αn = 0.
As noted at the start, this proves that T(v1), ..., T(vn) are linearly independent
in W.
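A concrete instance of this result, using a matrix map as T; the particular matrix and vectors below are just illustrative choices, not part of the proof:

```python
import numpy as np

# A one-to-one linear map T: R^2 -> R^3 (this matrix has full column rank).
T = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

v1, v2 = np.array([1.0, 2.0]), np.array([3.0, -1.0])  # independent in R^2
images = np.column_stack([T @ v1, T @ v2])

# T(v1), T(v2) are independent iff the stacked matrix has rank 2.
assert np.linalg.matrix_rank(images) == 2
```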

AP 8 #2. Let T : V → W be a linear transformation. Prove: If T is onto, then

T maps spanning sets in V to spanning sets in W; that is, if S is a
set of vectors in V and span(S) = V, then span(T(S)) = W. (Here
T(S) = {w ∈ W : w = T(s) for some s ∈ S}.)

Solution. By definition of span, to prove that span(T(S)) = W you must
show that given any w ∈ W there exist scalars α1, ..., αn in the scalar field and
vectors s1, ..., sn in S such that
w = α1 T(s1) + ... + αn T(sn).
Since the map T is onto W, given w ∈ W there exists a vector v in V with
w = T(v).
Since V = span(S), there exist scalars α1, ..., αn in the scalar field and vectors
s1, ..., sn in S such that
v = α1 s1 + ... + αn sn.
Then by linearity of T,
T(v) = T(α1 s1 + ... + αn sn) = α1 T(s1) + ... + αn T(sn).
Since w = T(v),
w = α1 T(s1) + ... + αn T(sn).
Thus, as noted at the outset, this proves span(T(S)) = W.
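A concrete instance, again with an illustrative matrix map; all specific choices below are assumptions for the example only:

```python
import numpy as np

# An onto map T: R^3 -> R^2 (this matrix has full row rank).
T = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, -1.0]])

# S = the standard basis spans R^3, so T(S) should span R^2.
S = np.eye(3)
TS = T @ S                              # columns are the images T(s)
assert np.linalg.matrix_rank(TS) == 2   # T(S) spans R^2
```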

Remark. Several of you assumed that S was a finite set. This made the proof
a little easier notationally, but the set S need not be a finite set of vectors. For
example, P, the vector space of all polynomials, does not have a finite spanning
set but is spanned by the infinite set S = {1, x, x², x³, ...}.
