
EECS 16B Designing Information Devices and Systems II

Spring 2017 Murat Arcak and Michel Maharbiz Homework 9


This homework is due April 5, 2017, at 17:00.

1. Midterm 2 - Question 1
Redo the midterm!

2. Midterm 2 - Question 2
Redo the midterm!

3. Midterm 2 - Question 3
Redo the midterm!

4. Midterm 2 - Question 4
Redo the midterm!

5. Midterm 2 - Question 5
Redo the midterm!

6. Midterm 2 - Question 6
Redo the midterm!

7. Midterm 2 - Question 7
Redo the midterm!

8. The Moore-Penrose pseudoinverse for “fat” matrices


Say we have a set of linear equations described by A~x = ~y. If A is invertible, we know that the solution is ~x = A^{−1}~y. However, what if A is not a square matrix? In 16A, you saw how this problem can be approached for tall matrices A, where it generally is not possible to find a solution that exactly matches all the measurements; the linear least-squares solution gives a reasonable answer by asking for the "best" match in the sense of minimizing the norm of the error vector.
This problem deals with the other case: when the matrix A is short and fat. In this case there are generally going to be many possible solutions, so which should we choose, and why? We will walk you through the Moore-Penrose pseudoinverse, which generalizes the idea of the matrix inverse and is derived from the singular value decomposition.

(a) Say you have the following matrix:

    A = [ 1, 1, 1; 1, −1, 1 ]

Calculate the SVD of A. That is to say, calculate U, Σ, V such that

    A = UΣV^T


What are the dimensions of U, Σ and V ?
Note. Do NOT use a computer to calculate the SVD. You may be asked to solve similar questions on
your own in the exam.

Solution:
First, note that

    A^T A = [ 2, 0, 2; 0, 2, 0; 2, 0, 2 ].

Since the first and last columns are identical, A^T A has linearly dependent columns, so we can always find a non-zero solution to the following equation:

    A^T A ~v = ~0    (1)

which means that the matrix has an eigenvalue of 0. It is clear that a vector that solves the above equation is [1, 0, −1]^T, which can be normalized to (1/√2)[1, 0, −1]^T. Since the other eigenvectors need to be orthogonal to it, some natural candidates include [0, 1, 0]^T, [1, 0, 1]^T, and so on. We can check and verify that they are indeed eigenvectors, corresponding to eigenvalues of 2 and 4, respectively.
Therefore, we have λ0 = 4, λ1 = 2, λ2 = 0 as the eigenvalues, and the corresponding eigenvectors ~v0 = (1/√2)[1, 0, 1]^T, ~v1 = [0, 1, 0]^T, ~v2 = (1/√2)[1, 0, −1]^T respectively. Then, noting that the singular values are the square roots of the eigenvalues, we get

    Σ = [ 2, 0, 0; 0, √2, 0 ],    V = [ 1/√2, 0, 1/√2; 0, 1, 0; 1/√2, 0, −1/√2 ]
We can then solve for U by noting that

    UΣ = [ 2~u0, √2~u1, ~0 ] = AV = [ √2, 1, 0; √2, −1, 0 ]

From the above equation, we can see that ~u0 = [√2/2, √2/2]^T and ~u1 = [√2/2, −√2/2]^T. This gives us

    U = [ 1/√2, 1/√2; 1/√2, −1/√2 ]

So the SVD is

    A = [ 1/√2, 1/√2; 1/√2, −1/√2 ] · [ 2, 0, 0; 0, √2, 0 ] · [ 1/√2, 0, 1/√2; 0, 1, 0; 1/√2, 0, −1/√2 ]

(rows separated by semicolons; the last factor is V^T, which here happens to equal V).
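Although you should compute the SVD by hand, it can be reassuring to verify the result numerically afterward. A minimal check with numpy (assuming it is available) that the hand-computed factors reproduce A:

```python
import numpy as np

# The matrix from part (a).
A = np.array([[1.0,  1.0, 1.0],
              [1.0, -1.0, 1.0]])

s = 1 / np.sqrt(2)
# Hand-computed SVD factors from the solution above.
U = np.array([[s,  s],
              [s, -s]])
Sigma = np.array([[2.0, 0.0,        0.0],
                  [0.0, np.sqrt(2), 0.0]])
V = np.array([[s,   0.0,  s],
              [0.0, 1.0,  0.0],
              [s,   0.0, -s]])

# The factors should multiply back to A ...
assert np.allclose(U @ Sigma @ V.T, A)
# ... and the singular values should match numpy's.
assert np.allclose(np.linalg.svd(A, compute_uv=False), [2.0, np.sqrt(2)])
```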

(b) Let us think about what the SVD does. Consider the matrix A acting on some vector ~x to give the result ~y. We have

    A~x = UΣV^T~x = ~y

Observe that V^T~x rotates the vector, Σ scales it, and U rotates it again. We will try to "reverse" these operations one at a time and then put them together.
If U "rotates" the vector ΣV^T~x, what operator can we derive that will undo the rotation?

Solution: By orthonormality, we know that U^T U = UU^T = I. Therefore, U^T undoes the rotation.



(c) Derive a matrix that will "unscale", or undo the effect of Σ where it is possible to undo it. Recall that Σ has the same dimensions as A. Ignore any divisions by zero (that is to say, let those entries stay zero).

Solution: If you observe the equation

    Σ~x = ~y,    (2)

you can see that σi xi = yi for i = 0, ..., m − 1, which means that to obtain xi from yi we need to multiply yi by 1/σi. For any i > m − 1, the information in xi is lost by the multiplication with 0, so the reasonable guess for xi is 0 in this case. That is why we pad zeros into the bottom of the Σ̃ given below. If Σ is the m × n matrix

    Σ = [ σ0  0   ···  0      0 ··· 0
          0   σ1  ···  0      0 ··· 0
          ⋮        ⋱          ⋮
          0   0   ···  σm−1   0 ··· 0 ]

then Σ̃ is the n × m matrix

    Σ̃ = [ 1/σ0  0     ···  0
          0     1/σ1  ···  0
          ⋮           ⋱
          0     0     ···  1/σm−1
          0     0     ···  0
          ⋮                ⋮
          0     0     ···  0 ]
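As a sketch of this "unscaling" step, the Σ̃ above can be built mechanically from Σ: transpose it and invert each non-zero entry, leaving the zeros alone. A small Python illustration (the function name `unscale` is ours, not from the course):

```python
import numpy as np

def unscale(Sigma, tol=1e-12):
    """Transpose Sigma and invert each non-zero singular value,
    leaving the zero entries as zeros (the 'ignored' divisions)."""
    St = Sigma.T.copy()
    nonzero = np.abs(St) > tol
    St[nonzero] = 1.0 / St[nonzero]
    return St

# Sigma from the earlier 2x3 example: sigma_0 = 2, sigma_1 = sqrt(2).
Sigma = np.array([[2.0, 0.0,        0.0],
                  [0.0, np.sqrt(2), 0.0]])
S_tilde = unscale(Sigma)

# S_tilde @ Sigma is the identity on the first m coordinates, 0 elsewhere.
assert np.allclose(S_tilde @ Sigma, np.diag([1.0, 1.0, 0.0]))
```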

(d) Derive an operator that will "unrotate", or undo the effect of, V^T.

Solution: By orthonormality, we know that V^T V = VV^T = I. Therefore, V undoes the rotation.


(e) Try to use this idea of "unrotating" and "unscaling" to derive an "inverse" (which we will denote by A†). That is to say,

    ~x = A†~y

The reason the word inverse is in quotes (or why this is called a pseudo-inverse) is that we are ignoring the "divisions" by zero.

Solution: We can use the unrotation and unscaling matrices we derived above to "undo" the effect of
A and get the required solution. Of course, nothing can possibly be done for the information that was
destroyed by the nullspace of A — there is no way to recover any component of the true ~x that was in
the nullspace of A. However, we can get back everything else.

    ~y = A~x = UΣV^T~x
    U^T~y = ΣV^T~x        (unrotating by U)
    Σ̃U^T~y = V^T~x        (unscaling by Σ̃)
    VΣ̃U^T~y = ~x          (unrotating by V)

Therefore, we have A† = VΣ̃U^T, where Σ̃ is given in (c).
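As a quick sanity check, the formula A† = VΣ̃U^T matches numpy's built-in pseudoinverse. A sketch, assuming numpy:

```python
import numpy as np

A = np.array([[1.0,  1.0, 1.0],
              [1.0, -1.0, 1.0]])

# Build A-dagger = V S_tilde U^T directly from the SVD.
U, s, Vt = np.linalg.svd(A)
S_tilde = np.zeros((A.shape[1], A.shape[0]))
for i, sv in enumerate(s):
    if sv > 1e-12:               # ignore (zero out) the divisions by zero
        S_tilde[i, i] = 1.0 / sv

A_dagger = Vt.T @ S_tilde @ U.T
assert np.allclose(A_dagger, np.linalg.pinv(A))
```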
(f) Use A† to solve for ~x in the following system of equations:

    [ 1, 1, 1; 1, −1, 1 ] ~x = [ 2; 4 ]

Solution: From the above, we have the solution given by:



    ~x = A†~y = VΣ̃U^T~y
       = [ 1/√2, 0, 1/√2; 0, 1, 0; 1/√2, 0, −1/√2 ] · [ 1/2, 0; 0, 1/√2; 0, 0 ] · [ 1/√2, 1/√2; 1/√2, −1/√2 ] · [ 2; 4 ]
       = [ 3/2; −1; 3/2 ]

Therefore, the solution to the system of equations is

    ~x = [ 3/2, −1, 3/2 ]^T
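The arithmetic above can be double-checked with numpy's pinv; a minimal sketch:

```python
import numpy as np

A = np.array([[1.0,  1.0, 1.0],
              [1.0, -1.0, 1.0]])
y = np.array([2.0, 4.0])

x = np.linalg.pinv(A) @ y
assert np.allclose(x, [1.5, -1.0, 1.5])   # matches [3/2, -1, 3/2]
assert np.allclose(A @ x, y)              # and it solves the system exactly
```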

(g) (Optional) Now we will see why this matrix is a useful proxy for the matrix inverse in such circumstances. Show that the solution given by the Moore-Penrose pseudoinverse satisfies the minimality property that if ~x̂ is the pseudo-inverse solution to A~x = ~y, then k~x̂k ≤ k~zk for all other vectors ~z satisfying A~z = ~y.
(Hint: look at the vectors involved in the V basis. Think about the relevant nullspace and how it is connected to all this.)
This minimality property is useful in both control applications (as you will see in the next problem) and in communications applications.

Solution: Since ~x̂ is the pseudo-inverse solution, we know that

    ~x̂ = VΣ̃U^T~y

Let us write down what ~x̂ is with respect to the columns of V. Let there be k non-zero singular values. The following expression comes from expanding the matrix multiplication:

    ~x̂|V = V^T~x̂ = V^T A†~y = V^T VΣ̃U^T~y = Σ̃U^T~y
         = [ ⟨~y,~u0⟩/σ0, ⟨~y,~u1⟩/σ1, ..., ⟨~y,~u_{k−1}⟩/σ_{k−1}, 0, ..., 0 ]^T

The n − k zeros at the end come from the fact that there are only k non-zero singular values. Therefore, by construction, ~x̂ is a linear combination of the first k columns of V.

Since any other ~z is also a solution to the original problem, we have

    A~z = UΣV^T~z = UΣ ~z|V = ~y,    (3)

where ~z|V = V^T~z is the representation of ~z in the V basis. Using the idea of "unscaling" for the first k elements (where the unscaling is clearly invertible) and "unrotation" after that, we see that the first k elements of ~z|V must be identical to the first k elements of ~x̂|V.



However, since the information in the last n − k elements of ~z|V is lost by the multiplication with 0s, any values α_ℓ there are unconstrained as weights on the last part of the V basis, namely the weights on the basis for the nullspace of A. Therefore,

    ~z|V = [ ⟨~y,~u0⟩/σ0, ⟨~y,~u1⟩/σ1, ..., ⟨~y,~u_{k−1}⟩/σ_{k−1}, α_k, α_{k+1}, ..., α_{n−1} ]^T.

Now, since the columns of V are orthonormal, observe that

    ||~x̂||² = Σ_{i=0}^{k−1} |⟨~y,~ui⟩/σi|²

and that

    ||~z||² = Σ_{i=0}^{k−1} |⟨~y,~ui⟩/σi|² + Σ_{i=k}^{n−1} |αi|².

Therefore,

    ||~z||² = ||~x̂||² + Σ_{i=k}^{n−1} |αi|²,

which tells us that

    ||~z|| ≥ ||~x̂||.
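This minimality property is easy to probe numerically: every other solution differs from ~x̂ by a nullspace component, and its norm can only grow. A sketch with the matrix from part (f), assuming numpy:

```python
import numpy as np

A = np.array([[1.0,  1.0, 1.0],
              [1.0, -1.0, 1.0]])
y = np.array([2.0, 4.0])

x_hat = np.linalg.pinv(A) @ y
# Nullspace direction of A, normalized (v2 from part (a)).
n = np.array([1.0, 0.0, -1.0]) / np.sqrt(2)

rng = np.random.default_rng(0)
for _ in range(100):
    z = x_hat + rng.normal() * n       # every such z also solves A z = y
    assert np.allclose(A @ z, y)
    assert np.linalg.norm(z) >= np.linalg.norm(x_hat) - 1e-9
```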

9. SVD for minimum energy control


Consider applying the open-loop control u(t) to the discrete-time linear system

    ~x(t + 1) = A~x(t) + Bu(t)

to drive it from some initial state ~x0 to a target state ~x_f. (For simplicity we consider scalar u(t), but the conclusion of this problem can be readily extended to vector inputs.) We know that if (A, B) is controllable and the dimension is n, then clearly we can get to the desired ~x_f in n steps. However, suppose that we only need to get there in m > n steps. We now have a lot of flexibility.
Among all controls that guarantee "reachability", we could ask for a control that gets us to the desired ~x_f using minimal energy, i.e., having minimal

    Σ_{t=0}^{m−1} |u(t)|².

A concrete example in which Σ_{t=0}^{m−1} |u(t)|² can be interpreted as the "energy" of the control inputs is when the input is a voltage, since power is proportional to voltage squared.

(a) Considering the system evolution equations from t = 1 to t = m, obtain an expression for ~x(m) as a function of the initial state ~x(0) and the control inputs.

Solution: We have
~x(1) = A~x(0) + Bu(0)



    ~x(2) = A~x(1) + Bu(1) = A(A~x(0) + Bu(0)) + Bu(1) = A²~x(0) + ABu(0) + Bu(1)
    ...
    ~x(m) = A^m~x(0) + A^{m−1}Bu(0) + A^{m−2}Bu(1) + ··· + ABu(m − 2) + Bu(m − 1)
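The closed-form expression for ~x(m) can be checked against a step-by-step simulation; a sketch with a random system, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 5
A = rng.normal(size=(n, n))
B = rng.normal(size=(n, 1))
x0 = rng.normal(size=n)
u = rng.normal(size=m)

# Step-by-step simulation of x(t+1) = A x(t) + B u(t).
x = x0.copy()
for t in range(m):
    x = A @ x + B.flatten() * u[t]

# Closed form: x(m) = A^m x0 + sum_t A^(m-1-t) B u(t).
x_closed = np.linalg.matrix_power(A, m) @ x0
for t in range(m):
    x_closed = x_closed + np.linalg.matrix_power(A, m - 1 - t) @ B.flatten() * u[t]

assert np.allclose(x, x_closed)
```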

(b) Write out the above equation in matrix form, with ~u = [u(0), u(1), ..., u(m − 1)]^T.

Solution:

    ~x(m) − A^m~x(0) = [ A^{m−1}B  A^{m−2}B  ···  AB  B ] · [ u(0); u(1); ...; u(m − 2); u(m − 1) ]

(c) You have now obtained a linear equation of the form ~y = C~u, where ~y and C contain your results from the last question. Recall that in the previous problem you showed that the solution obtained by the pseudo-inverse (using the SVD) has a nice minimality property. Use this to derive the minimum energy control inputs ~u.

Solution: If ~û is the pseudo-inverse solution to ~y = C~u, then k~ûk ≤ k~zk for all other vectors ~z satisfying ~y = C~z.
Now observe that obtaining the minimum energy control inputs is equivalent to finding the control inputs of minimum norm k~uk. Hence all we have to do is find the pseudo-inverse solution to ~y = C~u using the SVD, as was discussed in the previous problem.
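Putting the whole problem together, a minimal numerical sketch (assuming numpy; the system here is random, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 6                       # state dimension n, horizon m > n
A = rng.normal(size=(n, n))
B = rng.normal(size=(n, 1))
x0 = rng.normal(size=n)
xf = rng.normal(size=n)

# C = [A^(m-1)B  A^(m-2)B  ...  AB  B],  y = xf - A^m x0.
C = np.hstack([np.linalg.matrix_power(A, m - 1 - t) @ B for t in range(m)])
y = xf - np.linalg.matrix_power(A, m) @ x0

u = np.linalg.pinv(C) @ y         # minimum-norm, i.e. minimum-energy, input

# Simulate to confirm the input drives x0 to xf in m steps.
x = x0.copy()
for t in range(m):
    x = A @ x + B.flatten() * u[t]
assert np.allclose(x, xf)
```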

10. Recommendation system


On Saavan’s recommendation, the EE16B TAs hang out all the time outside of work. Every Friday night, we watch movies on Netflix, and we have been collecting ratings for all the movies we’ve watched. A sample of this data set gives star ratings (between 1 and 5 stars) for each of the movies we’ve watched; these data are saved in the file data_TAs.csv. Professors Maharbiz and Arcak sometimes crash movie night, and when they do we also collect their ratings. These data are saved in data_arcak.json and data_maharbiz.json.



In this problem, we will use the SVD to build a system that will predict ratings for unrated movies based
on a small sample of rated movies. This will allow us to make customized movie recommendations for the
professors, like Netflix does for its viewers. Use the iPython notebook Recommender_System.ipynb.
Note that the first cell loads in the TAs’ ratings for you already.
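The idea behind the recommender, in one hedged sketch: treat the ratings as a matrix and use a low-rank truncation of its SVD as the predictor. The matrix below is made up for illustration; the notebook loads the real ratings from data_TAs.csv.

```python
import numpy as np

# Hypothetical ratings (rows: raters, cols: movies), for illustration only.
R = np.array([[5.0, 4.0, 1.0, 1.0],
              [4.0, 5.0, 1.0, 2.0],
              [1.0, 1.0, 5.0, 4.0],
              [1.0, 2.0, 4.0, 5.0]])

# Keep only the top-k singular directions: the dominant "taste" patterns.
U, s, Vt = np.linalg.svd(R)
k = 1
R_approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Entries of R_approx can be read as smoothed/predicted ratings.
assert R_approx.shape == R.shape
```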

11. Brain-machine interface


The iPython notebook pca_brain_machine_interface.ipynb will guide you through the process of analyzing brain-machine interface data using principal component analysis (PCA). This will help you to prepare for the project, where you will need to use PCA as part of a classifier that will allow you to use voice or music inputs to control your car.
Please complete the notebook by following the instructions given.

Solution: The notebook pca_brain_machine_interface_sol.ipynb contains solutions to this exercise.
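For reference, the core PCA computation the notebook walks through can be sketched in a few lines (the data here is synthetic; the notebook uses the recorded brain-machine interface data):

```python
import numpy as np

# Synthetic data matrix (rows: samples, cols: features), for illustration.
rng = np.random.default_rng(3)
X = rng.normal(size=(50, 4)) @ np.diag([3.0, 1.0, 0.2, 0.1])

# PCA via the SVD: center the data, then take the right singular vectors.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 2
scores = Xc @ Vt[:k].T     # projection onto the top-k principal components
assert scores.shape == (50, 2)
```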



EE 16B Midterm 2, March 21, 2017

Name:
SID #:
Discussion Section and TA:
Lab Section and TA:
Name of left neighbor:
Name of right neighbor:

Important Instructions:
• Show your work. An answer without explanation
is not acceptable and does not guarantee any credit.
• Only the front pages will be scanned and
graded. You can use the back pages as scratch paper.
• Do not remove pages, as this disrupts the scanning.
Instead, cross out the parts that you don’t want us to grade.

Problem Points
1 10
2 15
3 10
4 20
5 15
6 15
7 15
Total 100
1. (10 points) The thirteenth century Italian mathematician Fibonacci described the growth of a rabbit population by the recurrence relation:

    y(t + 2) = y(t + 1) + y(t)

where y(t) denotes the number of rabbits at month t. A sequence generated by this relation from initial values y(0), y(1) is known as a Fibonacci sequence.

a) (5 points) Bring the recurrence relation above to the state space form using the variables x1(t) = y(t) and x2(t) = y(t + 1).

b) (5 points) Determine the stability of this system.
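If you would like to check your state-space answer numerically afterward, one consistent choice of update matrix (an assumption to verify against your own derivation) can be simulated against the recurrence directly; a Python sketch:

```python
import numpy as np

# One candidate update matrix for x1(t) = y(t), x2(t) = y(t+1);
# verify it against your own part (a) answer.
A = np.array([[0, 1],
              [1, 1]])

# Generate the Fibonacci sequence from y(0) = y(1) = 1.
y = [1, 1]
for _ in range(10):
    y.append(y[-1] + y[-2])

# The state-space model should reproduce the same sequence.
x = np.array([1, 1])               # [y(0), y(1)]
for t in range(10):
    assert x[0] == y[t]
    x = A @ x

# The eigenvalue magnitudes of A determine stability (part (b)).
print(np.abs(np.linalg.eigvals(A)))
```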

2. (15 points) Consider the circuit below, which consists of a capacitor, an inductor, and a third element with the nonlinear voltage-current characteristic:

    i = −v + v³.

(Circuit figure: capacitor C with voltage vC, inductor L with current iL, and the nonlinear element with voltage v and current i.)

a) (5 points) Write a state space model of the form

    dx1(t)/dt = f1(x1(t), x2(t))
    dx2(t)/dt = f2(x1(t), x2(t))

using the states x1(t) = vC(t) and x2(t) = iL(t).

f1(x1, x2) =        f2(x1, x2) =

b) (5 points) Linearize the state model at the equilibrium x1 = x2 = 0 and
specify the resulting A matrix.

c) (5 points) Determine stability based on the linearization.

3. (10 points) Consider the discrete-time system

    ~x(t + 1) = A~x(t) + Bu(t)

where

    A = [ 0, 1, 0; 0, 0, 0; 0, 0, 0 ],    B = [ 0; 1; 0 ].

a) (5 points) Determine if the system is controllable.

b) (5 points) Explain whether or not it is possible to move the state vector from ~x(0) = 0 to

    ~x(T) = [ 2; 1; 0 ].

If your answer is yes, specify the smallest possible time T and an input sequence u(0), ..., u(T − 1) to accomplish this task.
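A numerical check for this kind of question (rank of the controllability matrix, plus a direct simulation of a candidate input sequence; verify the sequence against your own derivation) can be sketched as follows, assuming numpy:

```python
import numpy as np

A = np.array([[0, 1, 0],
              [0, 0, 0],
              [0, 0, 0]])
B = np.array([[0],
              [1],
              [0]])

# Controllability matrix [B, AB, A^2 B] and its rank.
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(3)])
print(np.linalg.matrix_rank(ctrb))    # rank 2 < 3: not controllable

# Even so, a target inside the reachable subspace may be hit; simulate a
# candidate input sequence from x(0) = 0 and see where the state lands.
x = np.zeros(3)
for u in [2.0, 1.0]:
    x = A @ x + B.flatten() * u
print(x)                              # lands on [2, 1, 0]
```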

4. (20 points) Consider the system

    ~x(t + 1) = [ cos θ, −sin θ; sin θ, cos θ ] ~x(t) + [ 0; 1 ] u(t)

where θ is a constant.

a) (5 points) For which values of θ is the system controllable?

b) (10 points) Select the coefficients k1, k2 of the state feedback controller

    u(t) = k1 x1(t) + k2 x2(t)

such that the closed-loop eigenvalues are λ1 = λ2 = 0. Your answer should be symbolic and well-defined for the values of θ you specified in part (a).

Additional workspace for Problem 4b.

c) (5 points) Suppose the state variable x1(t) evolves as depicted below when no control is applied (u = 0). What is the value of θ?

(Figure: plot of x1(t) versus t for t = 0 to 16, with values oscillating between −1 and 1.)

5. (15 points) Consider the inverted pendulum below, where p(t) is the position of the cart, θ(t) is the angle of the pendulum, and u(t) is the input force.

(Figure: a pendulum mounted on a cart of mass M; the force u acts on the cart, whose position is p.)

When linearized about the upright position, the equations of motion are

    p̈(t) = −(m/M) g θ(t) + (1/M) u(t)
    θ̈(t) = ((M + m)/(M ℓ)) g θ(t) − (1/(M ℓ)) u(t)     (1)

where M, m, ℓ, g are positive constants.

a) (5 points) Using (1), write the state model for the vector

    ~x(t) = [ p(t), ṗ(t), θ(t), θ̇(t) ]^T.

b) (5 points) Suppose we measure only the position; that is, the output is
y(t) = x1 (t). Determine if the system is observable with this output.

c) (5 points) Suppose we measure only the angle; that is, the output is y(t) =
x3 (t). Determine if the system is observable with this output.

6. (15 points) Consider the system

    ~x(t + 1) = [ 0.9, 0, 0; 0, 1, 1; 0, 1, 0 ] ~x(t),    y(t) = [ 0, 1, 0 ] ~x(t),

where ~x(t) = [ x1(t); x2(t); x3(t) ], the state matrix is denoted A, and the output matrix is denoted C.

a) (5 points) Select values for ℓ1, ℓ2, ℓ3 in the observer below such that x̂1(t), x̂2(t), x̂3(t) converge to the true state variables x1(t), x2(t), x3(t) respectively:

    ~x̂(t + 1) = [ 0.9, 0, 0; 0, 1, 1; 0, 1, 0 ] ~x̂(t) + [ ℓ1; ℓ2; ℓ3 ] (x̂2(t) − y(t)),

where the observer gain vector is denoted L = [ ℓ1; ℓ2; ℓ3 ].
Additional workspace for Problem 6a.

b) (5 points) Professor Arcak found a solution to part (a) that guarantees convergence of x̂3(t) to x3(t) in one time step; that is,

    x̂3(t) = x3(t),   t = 1, 2, 3, ...

for any initial ~x(0) and ~x̂(0). Determine his ℓ3 value based on this behavior of the observer. Explain your reasoning.

c) (5 points) When Professor Arcak solved part (a), he found the convergence
of x̂1 (t) to x1 (t) to be rather slow no matter what L he chose. Explain the
reason why no choice of L can change the convergence rate of x̂1 (t) to x1 (t).

7. (15 points) Consider a system with the symmetric form

    d/dt [ ~x1(t); ~x2(t) ] = [ F, H; H, F ] [ ~x1(t); ~x2(t) ] + [ G; G ] ~u(t),    (2)

where ~x1 and ~x2 have identical dimensions and, therefore, F and H are square matrices.

a) (5 points) Define the new variables

    ~z1 = ~x1 + ~x2   and   ~z2 = ~x1 − ~x2,

and write a state model with respect to these variables:

    d/dt [ ~z1(t); ~z2(t) ] = [ ___ , ___ ; ___ , ___ ] [ ~z1(t); ~z2(t) ] + [ ___ ; ___ ] u(t).

b) (5 points) Show that the system (2) is not controllable.

c) (5 points) Write a state model for the circuit below using the inductor currents as the variables. Show that the model has the symmetric form (2).

(Circuit figure: a source u driving two inductors L, carrying currents x1 and x2, and a resistor R.)

Contributors:

• Siddharth Iyer.

• Ioannis Konstantakopoulos.

• John Maidens.
