
CDS 212 - Solutions to Homework #1

Instructor: Danielle C. Tarraf


Fall 2007

Posted Online: Saturday October 27, 2007.


Last Modified: Tuesday November 13, 2007.

Problem 1

(a) Recall that the null space and the range of a matrix $A \in \mathbb{C}^{m \times n}$ are defined as:

$$\mathcal{N}(A) = \{x \in \mathbb{C}^n \mid Ax = 0\}$$
$$\mathcal{R}(A) = \{y \in \mathbb{C}^m \mid y = Ax, \text{ for some } x \in \mathbb{C}^n\}$$

Note that $\mathcal{N}(A)$ is a subspace of $\mathbb{C}^n$ since it contains $0 \in \mathbb{C}^n$ (as $A0 = 0$) and is closed under addition and scalar multiplication:

$$x_1, x_2 \in \mathcal{N}(A) \;\Rightarrow\; Ax_1 = 0 \text{ and } Ax_2 = 0 \;\Rightarrow\; Ax_1 + Ax_2 = 0 \;\Rightarrow\; A(x_1 + x_2) = 0 \;\Rightarrow\; x_1 + x_2 \in \mathcal{N}(A)$$

$$x \in \mathcal{N}(A),\ \alpha \in \mathbb{C} \;\Rightarrow\; A(\alpha x) = \alpha Ax = 0 \;\Rightarrow\; \alpha x \in \mathcal{N}(A)$$


Similarly, note that $\mathcal{R}(A)$ is a subspace of $\mathbb{C}^m$ since it contains $0 \in \mathbb{C}^m$ (as $A0 = 0$) and is closed under addition and scalar multiplication:

$$y_1, y_2 \in \mathcal{R}(A) \;\Rightarrow\; y_1 = Ax_1 \text{ and } y_2 = Ax_2 \text{ for some } x_1, x_2 \in \mathbb{C}^n \;\Rightarrow\; y_1 + y_2 = Ax_1 + Ax_2 = A(x_1 + x_2) = Ax_3 \text{ for some } x_3 \in \mathbb{C}^n \;\Rightarrow\; y_1 + y_2 \in \mathcal{R}(A)$$

$$y \in \mathcal{R}(A),\ \alpha \in \mathbb{C} \;\Rightarrow\; y = Ax \text{ for some } x \in \mathbb{C}^n \;\Rightarrow\; \alpha y = \alpha Ax = A(\alpha x) \;\Rightarrow\; \alpha y \in \mathcal{R}(A)$$
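As a quick numerical illustration of these closure properties, here is a minimal sketch in Python, assuming numpy is available; the matrix $A$, the null-space vectors, and the random coefficients are arbitrary illustrative choices, not part of the original solution:

```python
# Sanity check of the closure properties of N(A) and R(A); numpy assumed.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])  # rank 1, so N(A) is nontrivial

# Two vectors in N(A) (chosen by hand for this particular A):
x1 = np.array([2.0, -1.0, 0.0])
x2 = np.array([3.0, 0.0, -1.0])
assert np.allclose(A @ x1, 0) and np.allclose(A @ x2, 0)
# Closure of N(A) under addition and scalar multiplication:
assert np.allclose(A @ (x1 + x2), 0)
assert np.allclose(A @ (5.0 * x1), 0)

# Two vectors in R(A); their sum is again of the form A x:
y1, y2 = A @ rng.standard_normal(3), A @ rng.standard_normal(3)
x3, *_ = np.linalg.lstsq(A, y1 + y2, rcond=None)
assert np.allclose(A @ x3, y1 + y2)
```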

(b) Note that:

$$x \in \mathcal{R}(A)^\perp \;\Leftrightarrow\; x^* A y = 0,\ \forall y \in \mathbb{C}^n \;\Leftrightarrow\; y^* A^* x = 0,\ \forall y \in \mathbb{C}^n \;\Leftrightarrow\; A^* x = 0 \;\Leftrightarrow\; x \in \mathcal{N}(A^*)$$

Hence $\mathcal{R}(A)^\perp = \mathcal{N}(A^*)$.

(c) $\mathcal{N}(A)^\perp = \mathcal{R}(A^*)$ follows from part (b) and the following linear algebra facts. Let $V$ and $W$ be subspaces of an inner product space $(X, \langle \cdot, \cdot \rangle)$; we have:

$(V^\perp)^\perp = V$, for all $V$.

$V = W \;\Leftrightarrow\; V^\perp = W^\perp$.
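A small numerical illustration of parts (b) and (c), assuming numpy; the complex matrix is an arbitrary example. Orthonormal bases for $\mathcal{R}(A)$ and $\mathcal{N}(A^*)$ are read off from the SVD and checked to be mutually orthogonal:

```python
# Check that R(A)-perp = N(A*): basis vectors of N(A*) are orthogonal to R(A).
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))

U, s, Vh = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))     # rank of A
range_A = U[:, :r]             # orthonormal basis for R(A)
null_Astar = U[:, r:]          # orthonormal basis for N(A*)

# Every vector in N(A*) is orthogonal to every vector in R(A):
assert np.allclose(null_Astar.conj().T @ range_A, 0)
# Consistency check: A* really annihilates the N(A*) basis.
assert np.allclose(A.conj().T @ null_Astar, 0)
```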

(d) Let $r = \dim(\mathcal{N}(A))$, let $B_1 = \{v_1, v_2, \ldots, v_r\}$ be a set of basis vectors for $\mathcal{N}(A)$, and let $B_2 = \{v_{r+1}, \ldots, v_n\}$ be a set of vectors such that $B = B_1 \cup B_2$ is a basis for $\mathbb{C}^n$. We will prove that $\dim(\mathcal{R}(A)) = n - r$.

We begin by noting that $Av_{r+1}, Av_{r+2}, \ldots, Av_n$ are linearly independent, for if that were not the case, there would exist scalars $a_{r+1}, a_{r+2}, \ldots, a_n$ (not all zero) such that:

$$a_{r+1} A v_{r+1} + a_{r+2} A v_{r+2} + \cdots + a_n A v_n = 0 \;\Rightarrow\; A(a_{r+1} v_{r+1} + a_{r+2} v_{r+2} + \cdots + a_n v_n) = 0 \;\Rightarrow\; a_{r+1} v_{r+1} + a_{r+2} v_{r+2} + \cdots + a_n v_n \in \mathcal{N}(A)$$

contradicting the fact that $B$ is a basis. Thus $\dim(\mathcal{R}(A)) \geq n - r$, since there are at least $n - r$ linearly independent vectors in $\mathcal{R}(A)$.

Now suppose there exists $y \in \mathcal{R}(A)$ such that $y, Av_{r+1}, Av_{r+2}, \ldots, Av_n$ are linearly independent. Then there exists $x \in \mathbb{C}^n$ such that $y = Ax$. Writing $x = \sum_{i=1}^n a_i v_i$ in the basis $B$ and using $Av_i = 0$ for $i \leq r$, we get $y = \sum_{i=r+1}^n a_i A v_i \in \mathrm{span}\{Av_{r+1}, \ldots, Av_n\}$, again leading to a contradiction. Thus $\dim(\mathcal{R}(A)) = n - r$.
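As a quick numerical sanity check of this rank-nullity result, here is a sketch assuming numpy; the rank-deficient matrix is an arbitrary example:

```python
# Check dim R(A) = n - dim N(A) on a concrete rank-deficient matrix.
import numpy as np

A = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 6.0, 8.0],
              [1.0, 0.0, 1.0, 0.0]])   # 3 x 4, rank 2
n = A.shape[1]
s = np.linalg.svd(A, compute_uv=False)
rank = int(np.sum(s > 1e-12))          # dim R(A)
nullity = n - rank                     # dim N(A), by the result just proved
assert rank == np.linalg.matrix_rank(A)
print(f"dim R(A) = {rank}, dim N(A) = {nullity}, n = {n}")
```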

Problem 2

(a) Recall that the 1-induced norm of $A$ is defined as:

$$\|A\|_1 = \sup_{x \neq 0} \frac{\|Ax\|_1}{\|x\|_1} = \max_{\|x\|_1 = 1} \|Ax\|_1$$

We begin by showing that $\|A\|_1 \leq \max_{1 \leq j \leq n} \sum_{i=1}^{m} |a_{ij}|$. We have:

$$\begin{aligned}
\|Ax\|_1 &= \sum_{i=1}^{m} \Big| \sum_{j=1}^{n} a_{ij} x_j \Big| \\
&\leq \sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij} x_j| \\
&= \sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}| |x_j| \\
&= \sum_{j=1}^{n} \sum_{i=1}^{m} |a_{ij}| |x_j| \\
&= \sum_{j=1}^{n} |x_j| \left( \sum_{i=1}^{m} |a_{ij}| \right) \\
&\leq \sum_{j=1}^{n} |x_j| \left( \max_{1 \leq j \leq n} \sum_{i=1}^{m} |a_{ij}| \right) \\
&= \left( \max_{1 \leq j \leq n} \sum_{i=1}^{m} |a_{ij}| \right) \|x\|_1
\end{aligned}$$

To show that this upper bound can be achieved, let $j_0 = \operatorname{argmax}_j \sum_{i=1}^{m} |a_{ij}|$, and consider $\bar{x} = e_{j_0}$, the $j_0$-th standard basis vector (i.e. $\bar{x}_i = 1$ when $i = j_0$ and $0$ otherwise). Clearly, $\|\bar{x}\|_1 = 1$ and:

$$A\bar{x} = \begin{bmatrix} a_{1 j_0} \\ \vdots \\ a_{m j_0} \end{bmatrix} \;\Rightarrow\; \|A\bar{x}\|_1 = \sum_{i=1}^{m} |a_{i j_0}|$$

which completes the proof.
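A numerical sanity check of part (a), assuming numpy; the matrix is an arbitrary example. The 1-induced norm should equal the maximum absolute column sum, achieved by the corresponding standard basis vector:

```python
# Verify ||A||_1 = max_j sum_i |a_ij|, attained at x = e_{j0}.
import numpy as np

A = np.array([[ 1.0, -4.0,  2.0],
              [-3.0,  0.5,  1.0]])
col_sums = np.abs(A).sum(axis=0)           # sum_i |a_ij| for each column j
j0 = int(np.argmax(col_sums))
x = np.zeros(A.shape[1]); x[j0] = 1.0      # x = e_{j0}, so ||x||_1 = 1
assert np.isclose(np.linalg.norm(A @ x, 1), col_sums[j0])
assert np.isclose(np.linalg.norm(A, 1), col_sums.max())  # numpy agrees
```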

(b) Recall that the $\infty$-induced norm of $A$ is defined as:

$$\|A\|_\infty = \sup_{x \neq 0} \frac{\|Ax\|_\infty}{\|x\|_\infty} = \max_{\|x\|_\infty = 1} \|Ax\|_\infty$$

We begin by showing that $\|A\|_\infty \leq \max_{1 \leq i \leq m} \sum_{j=1}^{n} |a_{ij}|$. We have:

$$\begin{aligned}
\|Ax\|_\infty &= \max_{1 \leq i \leq m} \Big| \sum_{j=1}^{n} a_{ij} x_j \Big| \\
&\leq \max_{1 \leq i \leq m} \sum_{j=1}^{n} |a_{ij} x_j| \\
&= \max_{1 \leq i \leq m} \sum_{j=1}^{n} |a_{ij}| |x_j| \\
&\leq \left( \max_{1 \leq i \leq m} \sum_{j=1}^{n} |a_{ij}| \right) \left( \max_{1 \leq j \leq n} |x_j| \right) \\
&= \left( \max_{1 \leq i \leq m} \sum_{j=1}^{n} |a_{ij}| \right) \|x\|_\infty \\
&= \max_{1 \leq i \leq m} \sum_{j=1}^{n} |a_{ij}| \quad \text{whenever } \|x\|_\infty = 1
\end{aligned}$$
To show that this upper bound can be achieved, let $i_0 = \operatorname{argmax}_i \sum_{j=1}^{n} |a_{ij}|$ and consider the vector $\bar{x}$ defined as:

$$\bar{x} = \begin{bmatrix} \operatorname{sgn}(a_{i_0 1}) \\ \vdots \\ \operatorname{sgn}(a_{i_0 n}) \end{bmatrix}$$

Clearly $\|\bar{x}\|_\infty = 1$ since $|\bar{x}_j| = 1$ for all $j$, and

$$\|A\bar{x}\|_\infty = \sum_{j=1}^{n} a_{i_0 j} \operatorname{sgn}(a_{i_0 j}) = \sum_{j=1}^{n} |a_{i_0 j}| = \max_{1 \leq i \leq m} \sum_{j=1}^{n} |a_{ij}|$$

which completes the proof.
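An analogous numerical check for part (b), assuming numpy; the matrix is the same arbitrary example. The $\infty$-induced norm should equal the maximum absolute row sum, achieved at $\bar{x} = \operatorname{sgn}(\text{row } i_0)$:

```python
# Verify ||A||_inf = max_i sum_j |a_ij|, attained at x = sgn(row i0).
import numpy as np

A = np.array([[ 1.0, -4.0,  2.0],
              [-3.0,  0.5,  1.0]])
row_sums = np.abs(A).sum(axis=1)           # sum_j |a_ij| for each row i
i0 = int(np.argmax(row_sums))
x = np.sign(A[i0, :])                      # |x_j| = 1, so ||x||_inf = 1
assert np.isclose(np.linalg.norm(A @ x, np.inf), row_sums[i0])
assert np.isclose(np.linalg.norm(A, np.inf), row_sums.max())
```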

(c) We wish to show that

$$\frac{1}{\sqrt{n}}\, \|A\|_\infty \;\leq\; \|A\|_2 \;\leq\; \sqrt{m}\, \|A\|_\infty$$

We begin by noting that the inequality

$$\|A\|_\infty \leq \|A\|_2 \leq \sqrt{m}\, \|A\|_\infty$$

holds when $n = 1$ (i.e. when $A$ is a vector) since:

$$\|A\|_\infty^2 = \left( \max_{1 \leq i \leq m} |a_i| \right)^2 \leq \sum_{i=1}^{m} |a_i|^2 = \|A\|_2^2$$

and

$$\|A\|_2^2 = \sum_{i=1}^{m} |a_i|^2 \leq \sum_{i=1}^{m} \left( \max_{1 \leq i \leq m} |a_i| \right)^2 = m\, \|A\|_\infty^2$$
When $x \neq 0$, we have:

$$\frac{\|Ax\|_\infty}{\|x\|_\infty} \;\geq\; \frac{\|Ax\|_\infty}{\|x\|_2} \;\geq\; \frac{1}{\sqrt{m}} \frac{\|Ax\|_2}{\|x\|_2}$$

with each of the inequalities following directly from the established vector inequalities. Hence:

$$\|A\|_\infty = \sup_{x \neq 0} \frac{\|Ax\|_\infty}{\|x\|_\infty} \;\geq\; \frac{1}{\sqrt{m}} \frac{\|Ax\|_2}{\|x\|_2} \quad \text{for all } x \neq 0$$

from which it follows that:

$$\|A\|_\infty \;\geq\; \frac{1}{\sqrt{m}} \sup_{x \neq 0} \frac{\|Ax\|_2}{\|x\|_2} = \frac{1}{\sqrt{m}}\, \|A\|_2$$

which proves the right hand side inequality.
When $x \neq 0$, it also follows from the established vector inequalities that $\|Ax\|_\infty \leq \|Ax\|_2$. Hence we have:

$$\sqrt{n}\, \frac{\|Ax\|_\infty}{\|x\|_2} \;\leq\; \sqrt{n}\, \frac{\|Ax\|_2}{\|x\|_2} \;\leq\; \sqrt{n} \sup_{x \neq 0} \frac{\|Ax\|_2}{\|x\|_2} = \sqrt{n}\, \|A\|_2$$

Noting that for $x \in \mathbb{C}^n$ the established vector inequality implies that $\dfrac{1}{\|x\|_\infty} \leq \dfrac{\sqrt{n}}{\|x\|_2}$, we have:

$$\frac{\|Ax\|_\infty}{\|x\|_\infty} \;\leq\; \sqrt{n}\, \frac{\|Ax\|_\infty}{\|x\|_2} \;\leq\; \sqrt{n}\, \|A\|_2 \;\Rightarrow\; \|A\|_\infty = \sup_{x \neq 0} \frac{\|Ax\|_\infty}{\|x\|_\infty} \;\leq\; \sqrt{n}\, \|A\|_2$$

which proves the left hand side inequality.
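A quick randomized check of part (c), assuming numpy; the matrix sizes, seed, and tolerances are arbitrary:

```python
# Verify (1/sqrt(n)) ||A||_inf <= ||A||_2 <= sqrt(m) ||A||_inf on random matrices.
import numpy as np

rng = np.random.default_rng(2)
for _ in range(100):
    m, n = rng.integers(1, 8, size=2)
    A = rng.standard_normal((m, n))
    norm_inf = np.linalg.norm(A, np.inf)   # induced infinity-norm (max row sum)
    norm_2 = np.linalg.norm(A, 2)          # induced 2-norm (largest singular value)
    assert norm_inf / np.sqrt(n) <= norm_2 + 1e-12
    assert norm_2 <= np.sqrt(m) * norm_inf + 1e-12
```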

(d) With the inherent assumption that $m \geq n$, suppose that $\operatorname{rank}(A) = n$ and let $U \Sigma V^*$ be the singular value decomposition of $A$. We have:

$$\begin{aligned}
\|Ax\|_2^2 &= \|U \Sigma V^* x\|_2^2 \\
&= \|\Sigma V^* x\|_2^2 \quad \text{(since $U$ is unitary)} \\
&= \|\Sigma y\|_2^2 \quad \text{where } y = V^* x \\
&= \sum_{i=1}^{n} |\sigma_i y_i|^2 \\
&\geq \sigma_{\min}(A)^2\, \|y\|_2^2 \\
&= \sigma_{\min}(A)^2\, \|V^* x\|_2^2 \\
&= \sigma_{\min}(A)^2\, \|x\|_2^2 \quad \text{(since $V$ is unitary)}
\end{aligned}$$
Hence $\dfrac{\|Ax\|_2}{\|x\|_2} \geq \sigma_{\min}(A)$ for all $x \neq 0$, and $\min_{\|x\|_2 = 1} \|Ax\|_2 \geq \sigma_{\min}(A)$. To show that this lower bound can be achieved, pick $x = v_n$, where $v_n$ is the $n$-th column of $V$. Then:

$$V^* x = \begin{bmatrix} v_1^* \\ \vdots \\ v_{n-1}^* \\ v_n^* \end{bmatrix} v_n = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} = e_n, \qquad \Sigma V^* x = \Sigma e_n = \sigma_{\min}(A)\, \tilde{e}_n$$

where $\tilde{e}_n \in \mathbb{C}^m$ is the basis vector with unity entry in row $n$, and

$$Ax = U \Sigma V^* x = \sigma_{\min}(A)\, U \tilde{e}_n = \sigma_{\min}(A)\, u_n$$

where $u_n$ is the $n$-th column of $U$. Hence:

$$\|Ax\|_2 = \|\sigma_{\min}(A)\, u_n\|_2 = \sigma_{\min}(A)\, \|u_n\|_2 = \sigma_{\min}(A).$$

When $\operatorname{rank}(A) < n$, note that $Ax = 0$ for $x = v_n$ since $\Sigma V^* x = \sigma_n e_n = 0$. Hence $\min_{\|x\|_2 = 1} \|Ax\|_2 = 0$ in this case.
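A numerical illustration of part (d), assuming numpy; the matrix size, seed, and number of trials are arbitrary. Over random unit vectors, $\|Ax\|_2$ stays above $\sigma_{\min}(A)$, and $x = v_n$ attains the bound:

```python
# Verify min_{||x||=1} ||Ax||_2 = sigma_min(A), attained at x = v_n.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))            # m >= n, full column rank a.s.
U, s, Vh = np.linalg.svd(A)
sigma_min = s[-1]

for _ in range(100):
    x = rng.standard_normal(3)
    x /= np.linalg.norm(x)                 # random unit vector
    assert np.linalg.norm(A @ x) >= sigma_min - 1e-12

vn = Vh[-1, :]                             # n-th column of V (last row of V*)
assert np.isclose(np.linalg.norm(A @ vn), sigma_min)
```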

Problem 3

(a) Let $x = \alpha_{k+1} v_{k+1} + \cdots + \alpha_n v_n$ for some $\alpha_{k+1}, \ldots, \alpha_n \in \mathbb{C}$. We have:

$$\|Ax\|_2 = \|U \Sigma V^* x\|_2 = \|\Sigma V^* x\|_2$$

since $U$ is unitary, and:

$$V^* x = \begin{bmatrix} v_1^* \\ \vdots \\ v_n^* \end{bmatrix} x = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ \alpha_{k+1} \\ \vdots \\ \alpha_n \end{bmatrix}$$

since $V$ is unitary, hence $v_i^* v_j = 1$ whenever $i = j$ and is $0$ otherwise. Thus:

$$\|Ax\|_2^2 = \sum_{i=k+1}^{n} |\sigma_i \alpha_i|^2 \;\leq\; \sigma_{k+1}^2 \sum_{i=k+1}^{n} |\alpha_i|^2$$

and

$$\|x\|_2^2 = \|\alpha_{k+1} v_{k+1} + \cdots + \alpha_n v_n\|_2^2 = (\alpha_{k+1} v_{k+1} + \cdots + \alpha_n v_n)^* (\alpha_{k+1} v_{k+1} + \cdots + \alpha_n v_n) = \sum_{i=k+1}^{n} |\alpha_i|^2$$

Hence $\|Ax\|_2 \leq \sigma_{k+1}\, \|x\|_2$.
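A randomized check of part (a), assuming numpy; the matrix, the choice $k = 1$, and the tolerance are arbitrary. For $x$ in the span of the trailing right singular vectors, the gain is bounded by $\sigma_{k+1}$:

```python
# Verify ||Ax||_2 <= sigma_{k+1} ||x||_2 for x in span{v_{k+1}, ..., v_n}.
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 4))
U, s, Vh = np.linalg.svd(A)
k = 1
for _ in range(100):
    alpha = rng.standard_normal(4 - k)     # coefficients alpha_{k+1}, ..., alpha_n
    x = Vh[k:, :].T @ alpha                # x in span{v_{k+1}, ..., v_n}
    assert np.linalg.norm(A @ x) <= s[k] * np.linalg.norm(x) + 1e-12
```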

(b) We have:

$$\begin{aligned}
A \text{ is invertible} &\;\Leftrightarrow\; \det(A) \neq 0 \\
&\;\Leftrightarrow\; \det(U \Sigma V^*) \neq 0 \\
&\;\Leftrightarrow\; \det(U) \det(\Sigma) \det(V^*) \neq 0 \\
&\;\Leftrightarrow\; \det(\Sigma) \neq 0 \\
&\;\Leftrightarrow\; \underline{\sigma}(A) \neq 0 \\
&\;\Leftrightarrow\; \underline{\sigma}(A) > 0
\end{aligned}$$

where the third equivalence follows because $U$, $\Sigma$ and $V$ are square matrices of identical sizes, and where the fourth equivalence follows because both $U$ and $V$ are unitary, hence invertible.

Note that when $A$ is invertible, $A^{-1}$ is given by

$$A^{-1} = V \Sigma^{-1} U^*$$

where

$$\Sigma^{-1} = \begin{bmatrix} \frac{1}{\bar{\sigma}(A)} & 0 & \cdots & 0 \\ 0 & \ddots & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & \frac{1}{\underline{\sigma}(A)} \end{bmatrix}$$

Hence, the singular values of $A^{-1}$ are the reciprocals of the singular values of $A$, with

$$\bar{\sigma}(A^{-1}) = \frac{1}{\underline{\sigma}(A)}.$$
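A numerical check of part (b), assuming numpy; the random square matrix is an arbitrary example (invertible with probability one):

```python
# Verify that the singular values of A^{-1} are the reciprocals of those of A.
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))
s = np.linalg.svd(A, compute_uv=False)     # singular values of A, decreasing
s_inv = np.linalg.svd(np.linalg.inv(A), compute_uv=False)
assert np.allclose(s_inv, 1.0 / s[::-1])   # reciprocals, in decreasing order
assert np.isclose(s_inv[0], 1.0 / s[-1])   # sigma_max(A^{-1}) = 1/sigma_min(A)
```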

(c) Recall that a Hermitian matrix is positive semidefinite iff all its eigenvalues are non-negative, and that the singular values of a Hermitian matrix are the absolute values of its eigenvalues, so that $\bar{\sigma}(A) = \max_i |\lambda_i(A)|$. Also, note that a Hermitian matrix admits a unitary diagonalization of the form $A = U \Lambda U^*$, with $\Lambda$ real and diagonal. We have:

$$\bar{\sigma}(A) I - A = \bar{\sigma}(A)\, U U^* - U \Lambda U^* = U \left( \bar{\sigma}(A) I - \Lambda \right) U^*$$

The eigenvalues of $\bar{\sigma}(A) I - A$ are then given by $\bar{\sigma}(A) - \lambda_1(A), \ldots, \bar{\sigma}(A) - \lambda_n(A)$ and are thus all non-negative, since $\lambda_i(A) \leq |\lambda_i(A)| \leq \bar{\sigma}(A)$, which proves that:

$$\bar{\sigma}(A) I - A \geq 0 \;\Leftrightarrow\; A \leq \bar{\sigma}(A) I$$

Similarly we have:

$$A + \bar{\sigma}(A) I = U \left( \Lambda + \bar{\sigma}(A) I \right) U^*$$

and the eigenvalues of $A + \bar{\sigma}(A) I$ are clearly non-negative, since $\lambda_i(A) \geq -|\lambda_i(A)| \geq -\bar{\sigma}(A)$. Hence:

$$A + \bar{\sigma}(A) I \geq 0 \;\Leftrightarrow\; -\bar{\sigma}(A) I \leq A.$$
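A numerical check of part (c), assuming numpy; the random Hermitian matrix (possibly indefinite) is an arbitrary example. Both $\bar{\sigma}(A) I - A$ and $A + \bar{\sigma}(A) I$ should have non-negative eigenvalues:

```python
# Verify -sigma_max(A) I <= A <= sigma_max(A) I for a random Hermitian A.
import numpy as np

rng = np.random.default_rng(6)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T                          # Hermitian, possibly indefinite
sigma_max = np.linalg.svd(A, compute_uv=False)[0]
I = np.eye(4)
assert np.all(np.linalg.eigvalsh(sigma_max * I - A) >= -1e-12)
assert np.all(np.linalg.eigvalsh(A + sigma_max * I) >= -1e-12)
```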

Problem 4

$\sup_t |\dot{u}(t)|$ is not a norm as it doesn't satisfy the positivity condition: consider a constant signal $u(t) = c > 0$, and note that $\dot{u}(t) = 0$ for all $t$, hence $\sup_t |\dot{u}(t)| = 0$ even though $u(t)$ is a non-zero signal.
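To see this failure of positivity concretely, here is a minimal discrete sketch, assuming numpy; the sampling grid, the constant value $c = 3$, and the use of np.gradient as a finite-difference stand-in for the derivative are illustrative choices:

```python
# A constant nonzero signal has sup_t |du/dt| = 0, so positivity fails.
import numpy as np

t = np.linspace(0.0, 10.0, 1001)
u = np.full_like(t, 3.0)                   # u(t) = c = 3, a nonzero signal
du = np.gradient(u, t)                     # finite-difference derivative
print(np.max(np.abs(du)))                  # 0.0: the candidate "norm" vanishes
```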

$\|u\| = |u(0)| + \sup_t |\dot{u}(t)|$ qualifies as a norm:

(i) (Positivity) Being the sum of nonnegative quantities, $\|u\| \geq 0$ for all $u$. Moreover,

$$\|u\| = 0 \;\Leftrightarrow\; |u(0)| = 0 \text{ and } |\dot{u}(t)| = 0\ \forall t \geq 0 \;\Leftrightarrow\; u(t) = 0, \text{ for all } t \geq 0$$

(ii) (Homogeneity) Note that for any scalar $\alpha$, we have:

$$\|\alpha u\| = |\alpha u(0)| + \sup_t \left| \frac{d(\alpha u(t))}{dt} \right| = |\alpha|\,|u(0)| + |\alpha| \sup_t |\dot{u}(t)| = |\alpha|\, \|u\|$$

(iii) (Triangle Inequality) Note that:

$$\begin{aligned}
\|u + v\| &= |u(0) + v(0)| + \sup_t \left| \frac{d(u(t) + v(t))}{dt} \right| \\
&\leq |u(0)| + |v(0)| + \sup_t |\dot{u}(t) + \dot{v}(t)| \\
&\leq |u(0)| + |v(0)| + \sup_t |\dot{u}(t)| + \sup_t |\dot{v}(t)| \\
&= \|u\| + \|v\|
\end{aligned}$$

$\|u\| = \max\{\sup_t |u(t)|,\ \sup_t |\dot{u}(t)|\}$ qualifies as a norm:

(i) (Positivity) Being the maximum of nonnegative quantities, $\|u\| \geq 0$ for all $u$. Moreover,

$$\|u\| = 0 \;\Leftrightarrow\; \sup_t |u(t)| = 0 \text{ and } \sup_t |\dot{u}(t)| = 0 \;\Leftrightarrow\; u(t) = 0, \text{ for all } t \geq 0$$

(ii) (Homogeneity) Follows from the homogeneity of the supremum (see above) and from the fact that $\max\{\alpha f_1, \alpha f_2\} = \alpha \max\{f_1, f_2\}$ for $\alpha \geq 0$.

(iii) (Triangle Inequality) Having already established that:

$$\sup_t |u(t) + v(t)| \leq \sup_t |u(t)| + \sup_t |v(t)|$$
$$\sup_t |\dot{u}(t) + \dot{v}(t)| \leq \sup_t |\dot{u}(t)| + \sup_t |\dot{v}(t)|$$

what is left to note is that:

$$\max\{f_1 + f_2,\ f_3 + f_4\} \leq \max\{f_1, f_3\} + \max\{f_2, f_4\}$$

The triangle inequality then follows.

$\|u\| = \sup_t |u(t)| + \sup_t |\dot{u}(t)|$ qualifies as a norm:

(i) (Positivity) Being the sum of nonnegative quantities, $\|u\| \geq 0$ for all $u$. Moreover,

$$\|u\| = 0 \;\Leftrightarrow\; \sup_t |u(t)| = 0 \text{ and } \sup_t |\dot{u}(t)| = 0 \;\Leftrightarrow\; u(t) = 0, \text{ for all } t \geq 0$$

(ii) (Homogeneity) Follows from the homogeneity of the supremum.

(iii) (Triangle Inequality) Follows from the fact that both $\sup_t |u(t)|$ and $\sup_t |\dot{u}(t)|$ satisfy the triangle inequality, and hence so does their sum.
