
Uppsala University

Dept. of Systems and Control


Final Exam in System Identification for F and STS
Answers and Brief Solutions
Date: March 16, 2007
Examiner: Magnus Mossberg
1. a)
\[
\lim_{N\to\infty}\hat\theta_N-\theta_0=
\begin{bmatrix} E\{y^2(t)\} & E\{y(t)u(t)\} \\ E\{u(t)y(t)\} & E\{u^2(t)\} \end{bmatrix}^{-1}
\begin{bmatrix} E\{y(t-1)w(t)\} \\ E\{u(t-1)w(t)\} \end{bmatrix}
= 0
\]
b) For example,
\[
\hat\theta_N=\Big(\frac{1}{N}\sum_{t=1}^{N} z(t)\varphi^T(t)\Big)^{-1}\frac{1}{N}\sum_{t=1}^{N} z(t)y(t)
\]
where
\[
z(t)=\begin{bmatrix} y(t-2) & u(t-1) \end{bmatrix}^T
\]
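As a numerical check, the IV estimate above can be computed directly from data. The sketch below assumes, for illustration only, the first-order model y(t) = a y(t-1) + b u(t-1) + w(t) with regressor phi(t) = [y(t-1), u(t-1)]^T; the model and variable names are my own, not taken from the exam.

```python
import numpy as np

def iv_estimate(y, u):
    """IV estimate theta = (sum z*phi^T)^{-1} sum z*y for the illustrative
    model y(t) = a*y(t-1) + b*u(t-1) + w(t), using the instrument
    z(t) = [y(t-2), u(t-1)]^T as in the solution above."""
    N = len(y)
    Z = np.column_stack([y[0:N-2], u[1:N-1]])    # z(t)   = [y(t-2), u(t-1)]
    Phi = np.column_stack([y[1:N-1], u[1:N-1]])  # phi(t) = [y(t-1), u(t-1)]
    # Solve (1/N sum z phi^T) theta = 1/N sum z y (the 1/N factors cancel)
    return np.linalg.solve(Z.T @ Phi, Z.T @ y[2:N])
```

With noise-free data the estimate recovers the true parameters exactly, since the normal equations then hold without error.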
2. a)
\[
\mathrm{mse}(\hat d)=\mathrm{var}(\hat d)+\underbrace{\big[\mathrm{bias}(\hat d)\big]^2}_{=0}
=\mathrm{var}(\hat d)=\frac{\sigma^2}{N}
\]
b)
\[
\mathrm{mse}(\tilde d)=\mathrm{var}(\tilde d)+\big[\mathrm{bias}(\tilde d)\big]^2
=\frac{a^2\sigma^2}{N}+d^2(a-1)^2
\]
where
\[
a_{\mathrm{opt}}=\frac{d^2}{d^2+\sigma^2/N}
\]
minimizes $\mathrm{mse}(\tilde d)$. The estimator $\tilde d$ is not realizable with this choice of $a$, since $a_{\mathrm{opt}}$ depends on the unknown parameter $d$.
c)
\[
\hat d(t)=\Big(\sum_{k=1}^{t}\lambda^{t-k}\Big)^{-1}\sum_{k=1}^{t}\lambda^{t-k}x(k)
\]
\[
\hat d(t)=\hat d(t-1)+\frac{1-\lambda}{1-\lambda^{t}}\big[x(t)-\hat d(t-1)\big]
\]
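The recursive form of the weighted average can be checked numerically; a minimal sketch (function and variable names are my own):

```python
def recursive_weighted_mean(xs, lam=1.0):
    """Recursive exponentially weighted mean with forgetting factor lam:
    d(t) = d(t-1) + (1 - lam)/(1 - lam**t) * (x(t) - d(t-1)).
    For lam = 1 the gain reduces to 1/t and d(t) is the plain sample mean."""
    d = 0.0
    for t, x in enumerate(xs, start=1):
        gain = 1.0 / t if lam == 1.0 else (1 - lam) / (1 - lam ** t)
        d += gain * (x - d)
    return d
```

For lam = 1 the function returns the ordinary sample mean of the data.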
3. a) Not identifiable; the predictor is $\hat y(t\,|\,b_1,b_2)=(b_1+b_2)u_0$.
b) Not identifiable; the predictor is $\hat y(t\,|\,a,b)=(bc-a)y(t-1)$.
Use, for example, $u(t)=cy(t-1)$, or use two different values of $c$.
4. a) One resonance peak is given by a complex-conjugated pair of poles. Start with
model order four.
b) ARX; linear regression.
c) Overfit.
d) The result is often given as a table or as a curve.
e) The A-polynomial of the ARX model must describe the disturbance dynamics
through 1/A. This can result in a slightly erroneous description of the system
dynamics. It is easier for the ARMAX model to describe both the system and
the disturbance dynamics, thanks to the C-polynomial.
f) A useful and good model can describe new data with high enough accuracy. Such
a test also reduces the risk of overfitting.
5. Predictor
\[
\hat y(t\,|\,\theta)=\begin{bmatrix} -y(t-1) & -y(t-2) \end{bmatrix}
\begin{bmatrix} a_1 \\ a_2 \end{bmatrix}=\varphi^T(t)\theta
\]
a)
\[
E\{(\hat\theta_N-\theta_0)(\hat\theta_N-\theta_0)^T\}\approx
\frac{\sigma^2}{N}\big(E\{\varphi(t)\varphi^T(t)\}\big)^{-1}
=\frac{\sigma^2}{N}\begin{bmatrix} r_y(0) & r_y(1) \\ r_y(1) & r_y(0) \end{bmatrix}^{-1}
=\frac{1}{N}\begin{bmatrix} 1 & a_0 \\ a_0 & 1 \end{bmatrix}
\]
so $\mathrm{var}(\hat a_1)=\mathrm{var}(\hat a_2)=1/N$.
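The result $\mathrm{var}(\hat a_1)=\mathrm{var}(\hat a_2)=1/N$ can be checked by Monte Carlo simulation. The sketch below fits the AR(2) predictor by least squares to data from the first-order system $y(t)+a_0y(t-1)=e(t)$; the function names and simulation parameters are my own choices.

```python
import numpy as np

def ar2_ls(y):
    """LS fit of the predictor yhat(t) = -a1*y(t-1) - a2*y(t-2)."""
    Phi = np.column_stack([-y[1:-1], -y[:-2]])
    return np.linalg.lstsq(Phi, y[2:], rcond=None)[0]

def mc_variance(a0=0.5, N=200, runs=2000, burn=50, seed=0):
    """Empirical mean and variance of the LS estimates over many
    realizations of the AR(1) system y(t) = -a0*y(t-1) + e(t)."""
    rng = np.random.default_rng(seed)
    est = np.empty((runs, 2))
    for m in range(runs):
        e = rng.standard_normal(N + burn)
        y = np.zeros(N + burn)
        for t in range(1, N + burn):
            y[t] = -a0 * y[t-1] + e[t]
        est[m] = ar2_ls(y[burn:])   # discard burn-in for stationarity
    return est.mean(axis=0), est.var(axis=0)
```

Empirically, both variances come out close to $1/N$ and the estimate means close to $(a_0, 0)$, in line with parts a) and b).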
b)
\[
\lim_{N\to\infty}\hat\theta_N=\big(E\{\varphi(t)\varphi^T(t)\}\big)^{-1}E\{\varphi(t)y(t)\}
=\begin{bmatrix} r_y(0) & r_y(1) \\ r_y(1) & r_y(0) \end{bmatrix}^{-1}
\begin{bmatrix} -r_y(1) \\ -r_y(2) \end{bmatrix}
=\begin{bmatrix} a_0 \\ 0 \end{bmatrix}
\]
$\lim_{N\to\infty}\hat a_1=a_0$: the estimate is correct.
$\lim_{N\to\infty}\hat a_2=0$: this makes sense when comparing the model with the system.
6. a)
\[
\hat G(e^{i\omega})=\sum_{k=1}^{\infty}\hat g(k)e^{-i\omega k}
=\frac{1}{\alpha}\sum_{k=1}^{\infty}y(k)e^{-i\omega k}
=\frac{1}{\alpha}\sqrt{N}\,Y_N(\omega)
=\frac{Y_N(\omega)}{U_N(\omega)}
=\hat G_N(e^{i\omega})
\]
where it is used that $U_N(\omega)=\alpha/\sqrt{N}$ (the input is an impulse of amplitude $\alpha$) in the second-last equality.
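The identity $\hat G_N=Y_N/U_N$ can be illustrated as a ratio of DFTs; a minimal sketch (the function name is my own):

```python
import numpy as np

def etfe(y, u):
    """Empirical transfer-function estimate: the ratio of the DFTs of
    output and input, G_N(e^{i w_k}) = Y_N(w_k) / U_N(w_k)."""
    return np.fft.fft(y) / np.fft.fft(u)
```

For an impulse input of amplitude alpha, the input DFT is the constant alpha at every frequency, so the estimate reduces to the scaled output spectrum, as in the calculation in part a).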
b)
\[
\tilde g(t)=\hat g(t)-g_0(t)=\frac{v(t)-v(t-1)}{\alpha},\qquad
\mathrm{var}\big(\tilde g(t)\big)=\frac{2\sigma^2}{\alpha^2}
\]
7. Linear regression
\[
r(\tau)=\begin{bmatrix} -r(\tau-1) & \cdots & -r(\tau-na) \end{bmatrix}
\begin{bmatrix} a_1 \\ \vdots \\ a_{na} \end{bmatrix}
\]
where
\[
r(\tau)=\frac{1}{N}\sum_{t=\tau+1}^{N}y(t)y(t-\tau)
\]
$\tau=nc+1,\,nc+2,\,\dots,\,nc+na$ give
\[
\begin{bmatrix} r(nc) & \cdots & r(nc+1-na) \\ \vdots & & \vdots \\ r(nc+na-1) & \cdots & r(nc) \end{bmatrix}
\begin{bmatrix} a_1 \\ \vdots \\ a_{na} \end{bmatrix}
=-\begin{bmatrix} r(nc+1) \\ \vdots \\ r(nc+na) \end{bmatrix}
\]
This can (essentially, apart from different start indexes in the sums) be written as
\[
\Big(\frac{1}{N}\sum_{t=1}^{N}z(t)\varphi^T(t)\Big)\hat\theta_N^{IV}
=\frac{1}{N}\sum_{t=1}^{N}z(t)y(t)
\]
where
\[
\varphi(t)=\begin{bmatrix} -y(t-1) & \cdots & -y(t-na) \end{bmatrix}^T
\]
(the AR part is estimated) and
\[
z(t)=\begin{bmatrix} -y\big(t-(nc+1)\big) & \cdots & -y\big(t-(nc+na)\big) \end{bmatrix}^T
\]
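Given covariance estimates $r(0),\dots,r(nc+na)$, the linear system above can be solved directly; a sketch (implementation details are my own):

```python
import numpy as np

def ar_part(r, na, nc):
    """Solve the modified Yule-Walker equations
    r(tau) + a_1*r(tau-1) + ... + a_na*r(tau-na) = 0
    for tau = nc+1, ..., nc+na, using r(-tau) = r(tau).
    r is a sequence with r[k] = r(k)."""
    R = np.array([[r[abs(nc + i - j)] for j in range(1, na + 1)]
                  for i in range(1, na + 1)])
    rhs = -np.array([r[nc + i] for i in range(1, na + 1)])
    return np.linalg.solve(R, rhs)
```

For example, an AR(1) process $y(t)+0.5y(t-1)=e(t)$ has theoretical covariances $r(\tau)=(-0.5)^{\tau}r(0)$, and any choice of $nc$ then recovers $a_1=0.5$.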
8. a) Let
\[
V_N=\frac{1}{N}\sum_{t=1}^{N}\varepsilon^2\big(t,\hat\theta_N^{(p)}\big)
\]
The $C_p$-criterion can then be written as
\[
C_p=\frac{N V_N}{s_N^2}-(N-2p) \tag{1}
\]
When selecting $p$, the quantities $N$ and
\[
s_N^2=\frac{1}{N}\sum_{t=1}^{N}\varepsilon^2\big(t,\hat\theta_N^{(p_{\max})}\big)
\]
are seen as constants. Minimizing (1) with respect to $p$ is therefore equivalent to
minimizing
\[
N V_N+2p\,s_N^2 \tag{2}
\]
with respect to $p$. Compare with AIC, which minimizes
\[
N V_N\Big(1+\frac{2p}{N}\Big)=N V_N+2p\,V_N \tag{3}
\]
The only difference between (2) and (3) is that $V_N$ in (3) is replaced by $s_N^2$ in (2). From
an operational point of view, this difference is minor.
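The equivalence between (1) and (2) is easy to verify numerically; a minimal sketch with made-up loss values (all numbers below are illustrative, not from the exam):

```python
def cp(VN, s2N, N, p):
    """Mallows' C_p criterion, eq. (1): N*VN/s2N - (N - 2p)."""
    return N * VN / s2N - (N - 2 * p)

def cp_equiv(VN, s2N, N, p):
    """The equivalent criterion, eq. (2): N*VN + 2*p*s2N."""
    return N * VN + 2 * p * s2N

# Hypothetical losses V_N for model orders p = 1..4 (illustrative only)
losses = {1: 2.00, 2: 1.20, 3: 1.15, 4: 1.14}
N, s2N = 100, 1.10

best_cp = min(losses, key=lambda p: cp(losses[p], s2N, N, p))
best_eq = min(losses, key=lambda p: cp_equiv(losses[p], s2N, N, p))
```

Both selections agree, since (1) and (2) differ only by the positive constant factor $s_N^2$ and the constant term $N s_N^2$.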
b) Assume that the denominator is
\[
C(z)=(z-p_1)(z-p_2)
\]
so
\[
C(e^{i\omega})=(e^{i\omega}-p_1)(e^{i\omega}-p_2)
\]
Resonance peaks at $\omega=\pm 1$ means that $|C(e^{i\omega})|$ has minimum values for $\omega=\pm 1$:
\[
e^{i}-p_1=0,\qquad e^{-i}-p_2=0
\]
so $p_1=e^{i}$ and $p_2=e^{-i}$.
c) Minimum point of $Q(x)$? Solve $Q'(x)=0$ to get
\[
x=x_k-\big[J''(x_k)\big]^{-1}J'(x_k)
\]
Take $x_{k+1}$ as the minimum point:
\[
x_{k+1}=x_k-\big[J''(x_k)\big]^{-1}J'(x_k)
\]
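The Newton iteration above can be sketched in a few lines; the example function $J$ in the usage below is my own, not from the exam.

```python
def newton_step(x, Jp, Jpp):
    """One Newton step: x_{k+1} = x_k - [J''(x_k)]^{-1} J'(x_k)."""
    return x - Jp(x) / Jpp(x)

def minimize(Jp, Jpp, x0, iters=20):
    """Iterate Newton steps from x0 (scalar case)."""
    x = x0
    for _ in range(iters):
        x = newton_step(x, Jp, Jpp)
    return x
```

For a quadratic $J$ the minimum is reached in a single step, since the quadratic approximation $Q(x)$ is then exact.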