
DCA-CINVESTAV

Stochastic Process
Homework

Suresh Kumar Gadi


7/27/2010
Contents

Contents
List of Figures
List of Programs
1. Introduction:
1.1. List of Noises used in program:
2. Nonlinear regression:
2.1. Limitations of the function:
2.2. Algorithm:
2.3. Implementation (Scalar Case):
2.4. Implementation (Two Dimensional Case):
3. Stochastic Optimization:
3.1. Limitations of the function:
3.2. Stochastic gradient method (1st order Stochastic Optimization):
3.2.1. Algorithm:
3.2.2. Implementation:
3.3. Kiefer–Wolfowitz procedure (0th order Stochastic Optimization):
3.3.1. Algorithm:
3.3.2. Implementation:
3.4. Random Search:
3.4.1. Algorithm:
3.4.2. Implementation:
4. Parametric Identification:
4.1. Least Square Method (LSM):
4.1.1. Algorithm:
4.1.2. Implementation:
4.2. Instrumental Variable Method:
4.2.1. Algorithm:
4.2.2. Implementation:
5. Kalman Filter (for Discrete Time):
5.1. Filtering:
5.1.1. Algorithm:
5.1.2. Implementation:
5.2. Prediction:
5.2.1. Algorithm:
5.2.2. Implementation:
5.3. Smoothing:
5.3.1. Algorithm:
5.3.2. Implementation:
6. Add-ins to the algorithms:
6.1. Ruppert–Polyak version with averaging:
6.1.1. Algorithm:
6.1.2. Implementation:
6.2. Obtaining Global minimum using Random search method:
6.2.1. Method:
6.2.2. Implementation:
6.3. Adding saturation function to estimated variable:
6.3.1. Algorithm:
6.3.2. Implementation:
6.4. Whitening Method (with LSM algorithm):
6.4.1. Algorithm:
6.4.2. Implementation:
6.5. Simple Application of Kalman Filter (Software sensor)
6.5.1. Circuit Details:
6.5.2. Mathematical Modeling:
6.5.3. Implementation:
Bibliography

List of Figures

Figure 1 : Screenshot of Program GUI
Figure 2 : Input function
Figure 3 : Iterations vs. X*
Figure 4 : Iterations vs. Mean square error (taking final reading as correct)
Figure 5 : Convergence Graph
Figure 6 : Linear fit of convergence to calculate the value of 'Const'
Figure 7 : Noise used
Figure 8 : PDF of the noise used
Figure 9 : Input function
Figure 10 : Iterations vs. x1
Figure 11 : Iterations vs. x2
Figure 12 : x1 vs. x2
Figure 13 : Mean square error (w.r.t. final value)
Figure 14 : Convergence and its linear fit
Figure 15 : PDF of noise added to function 1
Figure 16 : PDF of noise added to function 2
Figure 17 : Input Function
Figure 18 : Iterations vs. x1
Figure 19 : Iterations vs. x2
Figure 20 : x1 vs. x2
Figure 21 : Mean square error (w.r.t. final value)
Figure 22 : Convergence and its linear fit
Figure 23 : PDF of Noise 1
Figure 24 : PDF of Noise 2
Figure 25 : Input function
Figure 26 : Iterations vs. x1
Figure 27 : Iterations vs. x2
Figure 28 : Iterations vs. x1 (zoomed)
Figure 29 : Iterations vs. x2 (zoomed)
Figure 30 : PDF of the noise
Figure 31 : Input Function
Figure 32 : Iterations vs. x1
Figure 33 : Iterations vs. x2
Figure 34 : x1 vs. x2
Figure 35 : Mean Square error (w.r.t. Final Value)
Figure 36 : PDF of v1(1)
Figure 37 : PDF of v1(2)
Figure 38 : PDF of
Figure 39 : Iterations vs. Parameters (Case 1)
Figure 40 : Iterations vs. Parameters (Case 2)
Figure 41 : Iterations vs. Parameters (Case 3)
Figure 42 : Iterations vs. Parameters (Case 4)
Figure 43 : Iterations vs. Parameters (Case 4)
Figure 44 : Iterations vs. Parameters (case vn=zn)
Figure 45 : Iterations vs. Parameters (case vn=zn)
Figure 46 : Iterations vs. Parameters (case vn=zn-1)
Figure 47 : Iterations vs. Parameters (case vn=zn-1)
Figure 48 : Iterations vs. Parameters (case vn=zn-2)
Figure 49 : Iterations vs. Parameters (case vn=zn-2)
Figure 50 : Iterations vs. Parameters (case vn=zn-3)
Figure 51 : Iterations vs. Parameters (case vn=zn-3)
Figure 52 : Iterations vs. State 1 [Kalman Filter]
Figure 53 : Iterations vs. State 2 [Kalman Filter]
Figure 54 : Iterations vs. State 3 [Kalman Filter]
Figure 55 : Iterations vs. State x1 (Prediction)
Figure 56 : Iterations vs. State x2 (Prediction)
Figure 57 : Iterations vs. State x3 (Prediction)
Figure 58 : Iterations vs. State x1 (Smoothing)
Figure 59 : Iterations vs. State x2 (Smoothing)
Figure 60 : Iterations vs. State x3 (Smoothing)
Figure 61 : Iterations vs. x1
Figure 62 : Iterations vs. x2
Figure 63 : Input function for finding global minimum
Figure 64 : Iterations vs. x1
Figure 65 : Iterations vs. x2
Figure 66 : x1 vs. x2
Figure 67 : PDF of the v1(1)
Figure 68 : PDF of the v2(1)
Figure 69 : PDF of
Figure 70 : Iterations vs. x1
Figure 71 : Iterations vs. x2
Figure 72 : Iterations vs. x1
Figure 73 : Iterations vs. x2
Figure 74 : Iterations vs. Parameters (Whitening Method)
Figure 75 : Iterations vs. Parameters (Whitening Method)
Figure 76 : Circuit Diagram of the RLC circuit
Figure 77 : Settings of input power supply
Figure 78 : Time vs. Actual Current
Figure 79 : Time vs. Current (Measured using Kalman Filter)
Figure 80 : Time vs. Input, Output and States
Figure 81 : Actual Input and Output

List of Programs

Program 1 : Nonlinear regression (Scalar case)
Program 2 : Nonlinear Regression (Two Dimensional Case)
Program 3 : Stochastic gradient method
Program 4 : Kiefer–Wolfowitz Method
Program 5 : Random Search Method
Program 6 : Program of Least Square Method
Program 7 : Program for Kalman Filter
Program 8 : Program for Prediction
Program 9 : Program for Smoothing
Program 10 : Ruppert–Polyak version with averaging
Program 11 : Optimization with modification of saturation
Program 12 : Program for Whitening Method (with LSM algorithm)
Program 13 : Program for measuring current (state) by software sensor

1. Introduction:

A MATLAB program was made to implement the first few algorithms presented in this report on stochastic processes. The program contains the following noises, each shifted and scaled so that the resulting sequence has zero mean (a sketch of this normalization follows the list below). The remaining algorithms are tested by individual programs as shown.

(Oishi)

1.1. List of Noises used in program:

1. Uniform Distribution (Continuous)


2. Normal Distribution
3. Student's t Distribution
4. Beta Distribution
5. Binomial Distribution
6. Chi-Square Distribution
7. Exponential Distribution
8. Extreme Value Distribution
9. F Distribution
10. Gamma Distribution
11. Generalized Extreme Value Distribution
12. Generalized Pareto Distribution
13. Geometric Distribution
14. Hypergeometric Distribution
15. Lognormal Distribution
16. Negative Binomial Distribution
17. Noncentral F Distribution
18. Noncentral t Distribution
19. Noncentral Chi-Square Distribution
20. Poisson Distribution
21. Rayleigh Distribution
22. Uniform Distribution (Discrete)
23. Weibull Distribution
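The exact target moments are not stated above; assuming the intent is a zero-mean (and, here, unit-variance) sequence, one of the noises can be shifted and scaled empirically as in the following minimal sketch:

% Hypothetical sketch: draw a Student's t sample and normalize it so
% that the empirical mean is 0 and the empirical variance is 1.
N  = 10000;
Xi = random('t', 5, 1, N);        % 5 degrees of freedom (assumed)
Xi = (Xi - mean(Xi)) / std(Xi);   % shifted and scaled noise sequence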

The program allows switching between the different noises and the different functions, and lets us visualize the step-by-step progress of the convergence. A screenshot is shown below.

Figure 1 : Screenshot of Program GUI


2. Nonlinear regression:

There exist many efficient methods for finding the roots of equations, such as the Newton–Raphson method, which can even find more than one root (depending on the initial conditions) and can locate imaginary roots. However, these methods cannot be applied to real-life signals, which are measured with noise.

The nonlinear regression method of stochastic processes is able to find the root of such an equation. Unfortunately, roots cannot be found for all types of signals; the method imposes the following limitations on the function.

2.1. Limitations of the function:

1. The function must have a real root, and the root should be unique; that is, this method cannot find imaginary roots or more than one root.
2. The function should be bounded within a cone; that is, for some positive scalar values $a$ and $b$ the relation

$\|f(x)\| \le a + b\,\|x - x^*\|$

is satisfied, where $x^*$ denotes the root of $f$.

2.2. Algorithm:

The following recurrent method converges from any given initial point $x_1$ to the root $x^*$:

$x_{n+1} = x_n - \gamma_n \Gamma\, y_n$

Where:

$\gamma_n > 0$ with $\sum_n \gamma_n = \infty$ and $\sum_n \gamma_n^2 < \infty$, and $\Gamma$ is termed the gain matrix.

$y_n = f(x_n) + \xi_n$, that is, the function value measured with noise.

$\xi_n$ is noise with $E\{\xi_n\} = 0$ and $E\{\xi_n^2\} < \infty$.

2.3. Implementation (Scalar Case):

The following code implements the algorithm shown above.

x = 1;
for k = 1 : chrvv000;
Outputx(k,1) = k;% for x* vs Iterations graph
Outputy(k,1) = x;

F = eval(get(handles.edit1,'string'));
y = F+Xi1(k);
gamma = eval(get(handles.edit2,'string'));
x = x - gamma*y;
end

Program 1 : Nonlinear regression (Scalar case)

The algorithm was tested on a chosen scalar function, whose graph is shown below.

Figure 2 : Input function

The following results were obtained from the implementation of the algorithm. The noise used for these results follows the Student's t distribution; the results observed with the other noises were similar.

Figure 3 : Iterations vs. X*
Figure 4 : Iterations vs. Mean square error (taking final reading as correct)

The convergence formula is

$\mathrm{err}_n \approx \dfrac{Const}{n^{p}}$

where $\mathrm{err}_n$ denotes the mean square error at iteration $n$. The convergence formula can be written in the logarithmic form

$\log \mathrm{err}_n \approx \log Const - p\,\log n$

The following graph has $\log n$ on the x-axis and $\log \mathrm{err}_n$ on the y-axis. The points then allow us to approximate a linear fit and calculate the value of $Const$.
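As a sketch of this fit (assuming the recorded errors decay as $Const/n^{p}$, and using the illustrative name err for the vector of recorded errors):

% Hypothetical sketch: fit a line to the log-log convergence data and
% recover the constant from the intercept and the rate from the slope.
n     = (1:numel(err))';           % err holds the recorded (positive) errors
cfit  = polyfit(log(n), log(err(:)), 1);
p     = -cfit(1);                  % estimated decay rate
Const = exp(cfit(2));              % estimated constant of the convergence law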

Figure 5 : Convergence Graph


Figure 6 : Linear fit of convergence to calculate the value of 'Const'

The calculated value of $Const$ can be read off from the linear fit in the above graph.

Finally, the noise used and its PDF are shown in the graphs below.

Figure 7 : Noise used Figure 8 : PDF of the noise used

2.4. Implementation (Two Dimensional Case):

The following code implements the nonlinear regression for a two-dimensional case.
for k = 1 : chrvv000;
x1 = x(1,1);
x2 = x(2,1);
GAMMA = eval(get(handles.edit3,'string'));

Outputx(k,1) = k;% for x1* vs Iterations graph


Outputy(k,1) = x1;
Outputx(k,2) = k;% for x1* vs Iterations graph
Outputy(k,2) = x2;

F(1,1) = eval(get(handles.edit1,'string'));
F(2,1) = eval(get(handles.edit2,'string'));
Fn(1,1) = F(1,1)+Xi1(k);
Fn(2,1) = F(2,1)+Xi2(k);
gamma = eval(get(handles.edit4,'string'));
x = x - gamma*GAMMA*Fn;
end
Program 2 : Nonlinear Regression (Two Dimensional Case)

The two functions used are plotted in the following figure, with $x_1$ on the x-axis, $x_2$ on the y-axis and the function values on the z-axis. As the graph is in 3D, the intersection of the two function surfaces is a straight line. To get a single point from that straight line, an additional plane is drawn; the intersection of the three surfaces is the point which is the required root of the given system of equations.

Figure 9 : Input function

The following results were obtained from the implementation of the algorithm. The noise added to function 1 follows the Gamma distribution, and that added to function 2 follows the Extreme Value distribution. The measured value converges close to the true root, as can be seen below.

Figure 10 : Iterations vs. x1 Figure 11 : Iterations vs. x2 Figure 12 : x1 vs. x2

Figure 13 : Mean square error (w.r.t final value) Figure 14 : Convergence and its linear fit

Figure 15 : PDF of noise added to function 1 Figure 16 : PDF of noise added to function 2

3. Stochastic Optimization:

The optimization problem for a deterministic function can be solved by equating the first derivative to zero. The same cannot be done in stochastic optimization, since in the presence of noise the measured first derivative is not zero at any point. The following algorithms guarantee that the recurrent procedure asymptotically converges to the minimum.

1. Stochastic gradient method


2. Kiefer–Wolfowitz procedure
3. Random Search

These algorithms impose the following limitations on the function.

3.1. Limitations of the function:

1. The function should be strictly convex, that is, it should have only one minimum point, and the following condition should be satisfied:

$(x - x^*)^{\top} \nabla f(x) > 0 \quad \text{for all } x \neq x^*$

Here $x^*$ is the minimum point of $f$.

3.2. Stochastic gradient method (1st order Stochastic Optimization):

3.2.1. Algorithm:

The following recurrent series guarantees convergence of $x_n$ to the minimizer $x^*$:

$x_{n+1} = x_n - \gamma_n \Gamma\, y_n$

Where:

$\gamma_n > 0$ with $\sum_n \gamma_n = \infty$ and $\sum_n \gamma_n^2 < \infty$, and $\Gamma$ is termed the gain matrix.

$y_n = \nabla f(x_n) + \xi_n$, that is, the gradient measured with noise.

$\xi_n$ is noise with $E\{\xi_n\} = 0$ and $E\{\|\xi_n\|^2\} < \infty$.

3.2.2. Implementation:

The following code implements the above algorithm for a two-dimensional case. The input function is taken in the quadratic form $f(x) = x^{\top}A x + B^{\top}x + C$, where $A$, $B$ and $C$ are matrices.

for k = 1 : chrvv000;
A = eval(get(handles.edit1,'string'));
B = eval(get(handles.edit2,'string'));
C = eval(get(handles.edit3,'string'));
GAMMA = eval(get(handles.edit4,'string'));

Outputx(k,1) = k;% for x1* vs Iterations graph


Outputy(k,1) = x(1);
Outputx(k,2) = k;% for x1* vs Iterations graph
Outputy(k,2) = x(2);

dF=2*A*x + B;
Y = dF + [Xi1(k);Xi2(k)];
gamma = eval(get(handles.edit5,'string'));
x = x - gamma*GAMMA*Y;
end
Program 3 : Stochastic gradient method

The function used is obtained by fixing particular values of $A$, $B$ and $C$, and is shown in the following graph.

Figure 17 : Input Function

The following results were obtained from the implementation of the algorithm. The noise taken for the first gradient component follows the Beta distribution and that for the second follows the Binomial distribution. The measured minimizer is close to the true one, as can be seen below.

Figure 18 : Iterations vs. x1 Figure 19 : Iterations vs. x2 Figure 20 : x1 vs. x2

Figure 21 : Mean square error (w.r.t. final value) Figure 22 : Convergence and its linear fit

Figure 23 : PDF of Noise 1 Figure 24 : PDF of Noise 2

3.3. Kiefer–Wolfowitz procedure (0th order Stochastic Optimization):

3.3.1. Algorithm:

The following recurrent series guarantees convergence of $x_n$ to the minimizer $x^*$:

$x_{n+1} = x_n - \dfrac{\gamma_n}{2\alpha_n}\, \Gamma \sum_{i=1}^{N}\left(y_n^{+i} - y_n^{-i}\right) e_i$

Where:

$\gamma_n > 0$ and $\alpha_n > 0$, and $\Gamma$ is termed the gain matrix.

$y_n^{\pm i} = f(x_n \pm \alpha_n e_i) + \xi_n^{\pm i}$, that is, the function value measured with noise at the point perturbed along the $i$-th unit vector $e_i$.

$\xi_n$ is noise with $E\{\xi_n\} = 0$ and $E\{\xi_n^2\} < \infty$.

3.3.2. Implementation:

The following code implements the above algorithm for a two-dimensional case. The input function is taken in the quadratic form $f(x) = x^{\top}A x + B^{\top}x + C$, where $A$, $B$ and $C$ are matrices.
for k = 1 : chrvv000;
A = eval(get(handles.edit1,'string'));
B = eval(get(handles.edit2,'string'));
C = eval(get(handles.edit3,'string'));

alpha = eval(get(handles.edit8,'string'));
Yn = [0;0];
N=2;

Outputx(k,1) = k;% for x1* vs Iterations graph


Outputy(k,1) = x(1);
Outputx(k,2) = k;% for x1* vs Iterations graph
Outputy(k,2) = x(2);

for n = 1:N
if n==1
e=[1;0];
end
if n==2
e=[0;1];
end
xn = x + alpha*e;
yn = xn'*A*xn+B'*xn+C + Xi1(4*(k-1)+2*(n-1)+1);
Yn = Yn + yn*e;
xn = x - alpha*e; % symmetric perturbation in the negative direction
yn = xn'*A*xn+B'*xn+C + Xi1(4*(k-1)+2*(n-1)+2);
Yn = Yn - yn*e;
end

gamma = eval(get(handles.edit5,'string'));
x = x - gamma*Yn/(2*alpha);
end
Program 4 : Kiefer–Wolfowitz Method

The function used is obtained by fixing particular values of $A$, $B$ and $C$, and is shown in the following graph.

Figure 25 : Input function

The following results were obtained from the implementation of the algorithm. The noise taken follows the Hypergeometric distribution. Since we are working with a two-dimensional case, $N = 2$, $e_1 = [1\;\,0]^{\top}$ and $e_2 = [0\;\,1]^{\top}$.

Figure 26 : Iterations vs. x1 Figure 27 : Iterations vs. x2

Figure 28 : Iterations vs. x1 (zoomed) Figure 29 : Iterations vs. x2 (zoomed)

Figure 30 : PDF of the noise

3.4. Random Search:

3.4.1. Algorithm:

The following recurrent series guarantees convergence of $x_n$ to the minimizer $x^*$:

$x_{n+1} = x_n - \dfrac{\gamma_n}{\alpha_n}\, \Gamma\, y_n v_n, \qquad y_n = f(x_n + \alpha_n v_n) + \xi_n$

Where:

$\gamma_n > 0$ and $\alpha_n > 0$, and $\Gamma$ is termed the gain matrix.

$\{v_n\}$ is a sequence of independent random vectors.

3.4.2. Implementation:

The following code implements the above algorithm for a two-dimensional case. The input function is taken in the quadratic form $f(x) = x^{\top}A x + B^{\top}x + C$, where $A$, $B$ and $C$ are matrices.
for i = 1 : chrvv000;
k=i;
A = eval(get(handles.edit1,'string'));
B = eval(get(handles.edit2,'string'));
C = eval(get(handles.edit3,'string'));
GAMMA = eval(get(handles.edit4,'string'));

Outputx(i,1) = i;% for x1* vs Iterations graph


Outputy(i,1) = x(1);
Outputx(i,2) = i;% for x1* vs Iterations graph
Outputy(i,2) = x(2);

if i>30
k=i-30;
else
k=1;
end
gamma = eval(get(handles.edit5,'string'));
alpha = eval(get(handles.edit8,'string'));
vn = [Xi1(i);Xi2(i)];
xn = x+alpha*vn;
F = (xn')*A*xn+(B')*xn+C;
yn = F + Xi3(i);
Yn = vn*yn;

x = x - (gamma/alpha)*GAMMA*Yn;
end
Program 5 : Random Search Method

The function used is obtained by fixing particular values of $A$, $B$ and $C$, and is shown in the following graph.

Figure 31 : Input Function

The following results were obtained from the implementation of the algorithm. The noises taken for the perturbation vector $v_n$ and for the measurement are uniformly distributed.

Figure 32 : Iterations vs. x1 Figure 33 : Iterations vs. x2

Figure 34 : x1 vs. x2 Figure 35 : Mean Square error (w.r.t. Final Value)

Figure 36 : PDF of v1(1) Figure 37 : PDF of v1(2) Figure 38 : PDF of

4. Parametric Identification:

The identification problem consists of finding the parameters of a system whose states are considered available (measurable). The following methods will be studied and verified.

1. Least Square Method (LSM)


2. Instrumental Variable Method (IVM)

We will use the following nomenclature for this section:

$x_n$ are the states.

$x_{n+1} = C z_n + \xi_n$ is the system dynamics.

$z_n$ (the stacked states and inputs) are the measurable quantities.

$C = [a\;\; b]$ are the parameters to identify.

$d_i$ are the weights of the noise.

4.1. Least Square Method (LSM):

4.1.1. Algorithm:

Direct method:

$\hat{C}_n = \left(\sum_{k=1}^{n} x_{k+1} z_k^{\top}\right)\left(\sum_{k=1}^{n} z_k z_k^{\top}\right)^{-1}$

Where $z_k$ collects the measured states and inputs. The matrix $\sum_k z_k z_k^{\top}$ should be persistently nonsingular (persistent excitation) for this algorithm to be applicable.

Recursive method:

$\hat{C}_{n+1} = \hat{C}_n + \left(x_{n+1} - \hat{C}_n z_n\right) z_n^{\top} \Gamma_{n+1}, \qquad \Gamma_{n+1} = \Gamma_n - \dfrac{\Gamma_n z_n z_n^{\top} \Gamma_n}{1 + z_n^{\top} \Gamma_n z_n}$

Where $\Gamma_n$ approximates $\left(\sum_k z_k z_k^{\top}\right)^{-1}$.
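The implementation below uses the recursive form; for comparison, a direct (batch) estimate from stored data might look like the following sketch, where the names Z (6xN, columns $z_k$) and X1 (3xN, columns $x_{k+1}$) are illustrative:

% Hypothetical sketch of the direct LSM estimate from batch data.
% Z : 6xN matrix whose columns are z_k; X1 : 3xN matrix of x_{k+1}.
c_direct = (X1 * Z') * pinv(Z * Z');   % estimate of C = [a b]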

4.1.2. Implementation:

Let us consider the state representation of a system described by the difference equation

$x_{n+1} = a_1 x_n + a_2 x_{n-1} + a_3 x_{n-2} + b_1 u_n + b_2 u_{n-1} + b_3 u_{n-2} + \xi_n$

We have taken four cases, matching the commented alternatives in the code below, with the following values of $a = [a_1\;\; a_2\;\; a_3]$ and $b = [b_1\;\; b_2\;\; b_3]$:

1. $a = [0.1\;\; 0.2\;\; 0.3]$ and $b = [2\;\; 15\;\; 1]$

2. $a = [0\;\; 0\;\; 0]$ and $b = [2.5\;\; 3.1\;\; 0]$

3. $a = [0.2\;\; 0\;\; 0.5]$ and $b = [0\;\; 0\;\; 0]$

4. $a = [0.3\;\; {-0.21}\;\; 0]$ and $b = [5.1\;\; 2.3\;\; 0]$

The following code implements the algorithm.

clc
p1=[];
p2=[];
p3=[];

%a = [0.1 0.2 0.3;1 0 0;0 1 0];


a = [00 00 00;1 0 0;0 1 0];
%a = [0.2 00 0.5;1 0 0;0 1 0];
%a = [0.3 -0.21 0.0;1 0 0;0 1 0];

%b = [2 15 1;0 0 0;0 0 0];


b = [2.5 3.1 0;0 0 0;0 0 0];
%b = [0 0 0;0 0 0;0 0 0];
%b = [5.1 2.3 0;0 0 0;0 0 0];

C = [a b]
c = zeros(3,6);
k=0;
un = sin(0.2*k);
un_1 = .3*cos(0.4*(k-1))+0.2*sin(0.2*(k-1))+0.5*sin(0.5*(k-1));

un_2 = .3*cos(0.4*(k-2))+0.2*sin(0.2*(k-2))+0.5*sin(0.5*(k-2));
xn = 0;
xn_1 = 0;
xn_2 = 0;
x = [xn;xn_1;xn_2];
u = [un;un_1;un_2];
z = [x;u];
v = zeros(6,3);
iGAMMA = zeros(6,6);
GAMMA = zeros(6,6);
randd1 = random('unif',-1,1,1,10002);
randd2 = random('unif',-1,1,1,10002);
for i = 1:10000
x1 = a*x+b*u+1*randd1(i+2);
if min(min(eig(GAMMA)>0)) == 1
c = c+(x1-c*z)*z'*GAMMA;
GAMMA = GAMMA - (GAMMA*(z*z')*GAMMA)/(1+z'*GAMMA*z);
else
iGAMMA = iGAMMA + z*z';
GAMMA = pinv(iGAMMA);
c = v'*GAMMA;
v = v+z*x1';
end
x = x1;
u = [0 0 0;1 0 0;0 1 0]*u + [.3*cos(0.4*i)+0.5*sin(0.5*(i))+0.2*sin(0.2*i);0;0];
z = [x;u];
p1(:,i) = c(1,:);
p2(:,i) = c(2,:);
p3(:,i) = c(3,:);
end
c
e = C-c
plot(1:10000,p1,1:10000,p2,1:10000,p3);
Program 6 : Program of Least Square Method

The following results were obtained by running the MATLAB code shown above; each case appears as a commented alternative in the code. The same program can be used to reproduce all the results by switching the matrices a and b.

Figure 39 : Iterations vs. Parameters (Case 1)

Figure 40 : Iterations vs. Parameters (Case 2)

Figure 41 : Iterations vs. Parameters (Case 3)

Figure 42 : Iterations vs. Parameters (Case 4) Figure 43 : Iterations vs. Parameters (Case 4)

4.2. Instrumental Variable Method:

4.2.1. Algorithm:

Direct Method:

$\hat{C}_n = \left(\sum_{k} x_{k+1} v_k^{\top}\right)\left(\sum_{k} z_k v_k^{\top}\right)^{-1}$

Where $v_k$ is the instrumental variable (for example a delayed regressor, $v_k = z_{k-m}$), chosen so as to be correlated with $z_k$ but uncorrelated with the noise.

Recursive Method:

$\hat{C}_{n+1} = \hat{C}_n + \left(x_{n+1} - \hat{C}_n z_n\right) v_n^{\top} \Gamma_{n+1}, \qquad \Gamma_{n+1} = \Gamma_n - \dfrac{\Gamma_n z_n v_n^{\top} \Gamma_n}{1 + v_n^{\top} \Gamma_n z_n}$

Where $\Gamma_n$ approximates $\left(\sum_k z_k v_k^{\top}\right)^{-1}$.
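Analogously to the LSM case, a batch version of the direct IV estimate might look like the following sketch, with illustrative names Z, V and X1 holding the regressors $z_k$, the instruments $v_k$ and the states $x_{k+1}$ as columns:

% Hypothetical sketch of the direct instrumental-variable estimate.
% Z, V : 6xN matrices of z_k and v_k; X1 : 3xN matrix of x_{k+1}.
c_iv = (X1 * V') * pinv(Z * V');       % estimate of C = [a b]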

4.2.2. Implementation:

Let us verify the algorithm on the system

$x_{n+1} = a_1 x_n + a_2 x_{n-1} + a_3 x_{n-2} + b_1 u_n + b_2 u_{n-1} + b_3 u_{n-2} + d_0 w_n + d_1 w_{n-1} + d_2 w_{n-2}$

which can be written in companion matrix form as in the previous section. Let us consider the following values for the implementation in the code:

$a = [0.1\;\; 0.2\;\; 0.3]$, $b = [2\;\; 15\;\; 1]$, $d_0 = d_1 = d_2 = 0.5$

MATLAB code is as follows:

clc
a = [0.1 0.2 0.3;1 0 0;0 1 0];
b = [2 15 1;0 0 0;0 0 0];
C = [a b]
c = zeros(3,6);
k=0;
un = sin(0.2*k);
un_1 = sin(0.2*(k-1))+sin(0.5*(k-1));
un_2 = sin(0.2*(k-2))+sin(0.5*(k-2));
un_3 = sin(0.2*(k-2))+sin(0.5*(k-3));
un_4 = sin(0.2*(k-2))+sin(0.5*(k-4));
un_5 = sin(0.2*(k-2))+sin(0.5*(k-5));
xn = 0;
xn_1 = 0;
xn_2 = 0;
xn_3 = 0;
xn_4 = 0;
xn_5 = 0;
xp = [xn_1;xn_2;xn_3];
up = [un_1;un_2;un_3];
xpp = [xn_2;xn_3;xn_4];
upp = [un_2;un_3;un_4];
xppp = [xn_3;xn_4;xn_5];
uppp = [un_3;un_4;un_5];

zp = [xp;up];
zpp = [xpp;upp];
zppp = [xppp;uppp];
x = [xn;xn_1;xn_2];
u = [un;un_1;un_2];
z = [x;u];
v = zeros(6,3);
vi = zeros(6,3);
iGAMMA = zeros(6,6);
GAMMA = zeros(6,6);
randd1 = random('unif',-1,1,1,10002);
for i = 1:10000
x1 = a*x+b*u+0.5*randd1(i+2)+0.5*randd1(i+1)+0.5*randd1(i);
% vt = z; %vt=zn = Least Square Method
% vt = zp; %vt=zn-1
% vt = zpp; %vt=zn-2
vt = zppp; %vt=zn-3
if min(min(eig(GAMMA)>0)) == 1
GAMMA = GAMMA - (GAMMA*(z*vt')*GAMMA)/(1+vt'*GAMMA*z);
c = c+(x1-c*z)*vt'*GAMMA;
else
v = v+vt*x1';
iGAMMA = iGAMMA + z*vt';
GAMMA = pinv(iGAMMA);
c = v'*GAMMA;
end
x = x1;
u = [0 0 0;1 0 0;0 1 0]*u + [.3*cos(0.4*i)+0.5*sin(0.5*(i))+0.2*sin(0.2*i);0;0];
zppp=zpp;
zpp=zp;
zp=z;
z = [x;u];

end
c
e = C-c
Results obtained are as follows:

For $v_n = z_n$ (this choice reduces to the least square method):

Figure 44 : Iterations vs. Parameters (case vn=zn) Figure 45 : Iterations vs. Parameters (case vn=zn)

For $v_n = z_{n-1}$:

Figure 46 : Iterations vs. Parameters (case vn=zn-1) Figure 47 : Iterations vs. Parameters (case vn=zn-1)

For $v_n = z_{n-2}$:

Figure 48 : Iterations vs. Parameters (case vn=zn-2) Figure 49 : Iterations vs. Parameters (case vn=zn-2)

For $v_n = z_{n-3}$:

Figure 50 : Iterations vs. Parameters (case vn=zn-3) Figure 51 : Iterations vs. Parameters (case vn=zn-3)

5. Kalman Filter (for Discrete Time):

The Kalman filter is an optimal recursive filter for linear systems: it provides the best possible estimate of the state for a given Gaussian noise. The filter can be extended to predict or to smooth the states. In this section we will see filtering, prediction and smoothing.

We will consider the following system and nomenclature:

$x_{n+1} = A_n x_n + B_n + w_n, \qquad y_n = H_n x_n + v_n$

where $w_n$ and $v_n$ are zero-mean Gaussian noises with covariances $\Theta$ and $\Omega$ respectively.

5.1. Filtering:

Filtering reduces the noise of the signal and can be applied in real-time systems. The filtering involves a prediction part and a correction part; the following algorithm illustrates this in detail.

5.1.1. Algorithm:

Prediction:

$\hat{x}_{n|n-1} = A_{n-1}\hat{x}_{n-1|n-1} + B_{n-1}, \qquad P_{n|n-1} = A_{n-1} P_{n-1|n-1} A_{n-1}^{\top} + \Theta$

Correction:

$K_n = P_{n|n-1} H_n^{\top}\left(\Omega + H_n P_{n|n-1} H_n^{\top}\right)^{-1}$

$\hat{x}_{n|n} = \hat{x}_{n|n-1} + K_n\left(y_n - H_n \hat{x}_{n|n-1}\right), \qquad P_{n|n} = \left(I - K_n H_n\right) P_{n|n-1}$

Where:

$\hat{x}_{n|n}$ is the estimated value of the state with the information up to $y_n$.

$\hat{x}_{n|n-1}$ is the estimated value of the state with the information up to $y_{n-1}$.

$\hat{y}_{n|n-1} = H_n \hat{x}_{n|n-1}$ is the estimated value of the output with information up to $y_{n-1}$.

$K_n$ is known as the Kalman gain matrix.

$P_{n|n}$ is known as the estimation error covariance matrix.

$P_{n|n-1}$ is known as the predicted estimation error covariance matrix.
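For reference, the same recursion can be collected into a single-step MATLAB function; this is a sketch of the standard form, with illustrative names:

% Hypothetical sketch: one step of the discrete Kalman filter.
function [xc, P] = kf_step(xc, P, y, A, B, H, Theta, Omega)
    xp = A*xc + B;                     % state prediction
    Pp = A*P*A' + Theta;               % covariance prediction
    K  = Pp*H' / (Omega + H*Pp*H');    % Kalman gain
    xc = xp + K*(y - H*xp);            % measurement correction
    P  = (eye(size(P,1)) - K*H)*Pp;    % covariance correction
end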

5.1.2. Implementation:

Let us implement the algorithm for the time-varying system whose matrices $A_n$, $B_n$ and $H_n$ are defined (as strings) at the top of the code below, where $w_n$ and $v_n$ are Gaussian noises.

The following MATLAB code implements the algorithm.

N = 1000;
A = '[0.2*sin(0.01*n) 0.1*sin(0.01*n) 0.5*sin(0.01*n);1 0 0; 0 1 0]';
B = '[1*sin(0.02*n);1*sin(0.01*n);1*sin(0.01*n)]';
H = '[1 1 1]';
Theta = [0.001 0.001 0.001;0.001 0.001 0.001;0.001 0.001 0.001];
OMEGA = 0.0012;
x=[5;5;5];
X=x;
xc=x;
randd1 = random('norm',0,1,1,N);
randd2 = random('norm',0,1,1,N);
randd3 = random('norm',0,1,1,N);
randd4 = random('norm',0,1,1,N);
P = eye(3);
K = [1;1;1];

for i = 1:N
n = i;
An = eval(A);
Bn = eval(B);
Hn = eval(H);
n = i-1;
An_1 = eval(A);
Bn_1 = eval(B);
Hn_1 = eval(H);
xn = An*x+Bn+1*[randd1(i);randd2(i);randd3(i)];
y =Hn*x+randd4(i);

Xn = An*X+Bn;
Y = Hn*X;

Pp = An_1*P*An_1' + Theta; % covariance prediction, including the process noise
K = Pp*Hn'*pinv(OMEGA + Hn*Pp*Hn'); % Kalman gain from the predicted covariance
P = (eye(3)-K*Hn)*Pp; % covariance correction with the current gain
xp = An_1*xc+Bn_1;

yp = Hn*xp;
xc = xp + K*(y-yp);

X=Xn;
x=xn;
end
Program 7 : Program for Kalman Filter

Following results are obtained:

Figure 52 : Iterations vs. State 1 [Kalman Filter]

Figure 53 : Iterations vs. State 2 [Kalman Filter]

Figure 54 : Iterations vs. State 3 [Kalman Filter]

5.2. Prediction:

The algorithm can be extended to predict the state variables. The predictions are based on the currently available estimate of the state. The farther ahead we wish to predict, the larger the error becomes; closer predictions are much more accurate.

5.2.1. Algorithm:

The prediction $k$ steps ahead, $\hat{x}_{n+k|n}$, is based on the information available up to $y_n$ and is obtained by iterating the noise-free model starting from the current filtered estimate:

$\hat{x}_{m+1|n} = A_m \hat{x}_{m|n} + B_m, \qquad m = n, n+1, \dots, n+k-1$

The implementation below expands this recursion into a closed form built from products of the $A_m$ matrices and the accumulated $B_m$ terms.
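A minimal sketch of this recursion, written against the variables of the program below (the strings A and B, the loop index i, the filtered estimate xc and the horizon pred):

% Hypothetical sketch: pred-step-ahead prediction by iterating the model.
xpr = xc;                          % start from the filtered estimate
for j = 0:pred-1
    n = i + j;                     % time index of the model step applied
    xpr = eval(A)*xpr + eval(B);   % noise-free model step
end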

5.2.2. Implementation:

The same system as seen above was used, and the state was predicted 15 steps ahead.

N = 1000;
A = '[0.2*sin(0.01*n) 0.1*sin(0.01*n) 0.5*sin(0.01*n);1 0 0; 0 1 0]';
B = '[1*sin(0.02*n);1*sin(0.01*n);1*sin(0.01*n)]';
H = '[1 1 1]';
Theta = [0.001 0.001 0.001;0.001 0.001 0.001;0.001 0.001 0.001];
OMEGA = 0.0012;
x=[5;5;5];
X=x;
xc=x;
randd1 = random('norm',0,1,1,N);
randd2 = random('norm',0,1,1,N);
randd3 = random('norm',0,1,1,N);
randd4 = random('norm',0,1,1,N);
P = eye(3);
K = [1;1;1];
pred = 15;

for i = 1:N
n = i;
An = eval(A);
Bn = eval(B);
Hn = eval(H);
n = i-1;
An_1 = eval(A);
Bn_1 = eval(B);
Hn_1 = eval(H);
xn = An*x+Bn+1*[randd1(i);randd2(i);randd3(i)];
y =Hn*x+randd4(i);

Xn = An*X+Bn;
Y = Hn*X;

Pp = An_1*P*An_1' + Theta; % covariance prediction, including the process noise
K = Pp*Hn'*pinv(OMEGA + Hn*Pp*Hn'); % Kalman gain from the predicted covariance
P = (eye(3)-K*Hn)*Pp; % covariance correction with the current gain
xp = An_1*xc+Bn_1;
yp = Hn*xp;
xc = xp + K*(y-yp);

%prediction
Ank = 1;
for k = 0 : pred-1
n=i+k;
Ank = Ank*eval(A);
end
An1sb = 0;
for k = 0 : pred-1
An1s = 1;
for s = k : pred-2
n=i+1+s;
An1s = An1s*eval(A);
end
n=i+k;
An1sb = An1sb+An1s*eval(B);
end

xpr = Ank*xc+An1sb;

X=Xn;
x=xn;
end
Program 8 : Program for Prediction

Following results were obtained

Figure 55 : Iterations vs. State x1 (Prediction)

Figure 56 : Iterations vs. State x2 (Prediction)

Figure 57 : Iterations vs. State x3 (Prediction)

5.3. Smoothing:

Smoothing estimates previous values of the state based on the present information. This estimate is better than the filtered value because more information is available.

5.3.1. Algorithm:

The smoothed estimate $k$ steps back, $\hat{x}_{n-k|n}$, is based on the information available up to $y_n$ and is obtained by inverting the noise-free state recursion, starting from the current filtered estimate:

$\hat{x}_{m-1|n} = A_{m-1}^{-1}\left(\hat{x}_{m|n} - B_{m-1}\right), \qquad m = n, n-1, \dots, n-k+1$

The implementation below expands this into a closed form using a product of the $A$ matrices (inverted via pinv) and the accumulated $B$ terms.
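A minimal sketch of this backward recursion, written against the variables of the program below (the strings A and B, the loop index i, the filtered estimate xc and the lag smt):

% Hypothetical sketch: smt-step-back smoothing by inverting the model.
xsm = xc;                              % start from the filtered estimate
for j = 1:smt
    n = i - j;                         % time index of the inverted step
    xsm = pinv(eval(A)) * (xsm - eval(B));
end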

5.3.2. Implementation:

The MATLAB code implementing the smoothing of 3 steps back is given below:

N = 300;
A = '[0.2*sin(0.01*n) 0.1*sin(0.01*n) 0.5*sin(0.01*n);1 0 0; 0 1 0]';
B = '[1*sin(0.02*n);1*sin(0.01*n);1*sin(0.01*n)]';
H = '[1 1 1]';
Theta = [0.001 0.001 0.001;0.001 0.001 0.001;0.001 0.001 0.001];
OMEGA = 0.0012;
x=[5;5;5];
X=x;
xc=x;
randd1 = random('norm',0,1,1,N);
randd2 = random('norm',0,1,1,N);
randd3 = random('norm',0,1,1,N);
randd4 = random('norm',0,1,1,N);
P = eye(3);
K = [1;1;1];
smt = 3;

for i = 1:N
n = i;
An = eval(A);
Bn = eval(B);
Hn = eval(H);
n = i-1;
An_1 = eval(A);
Bn_1 = eval(B);
Hn_1 = eval(H);
xn = An*x+Bn+1*[randd1(i);randd2(i);randd3(i)];
y =Hn*x+randd4(i);

Xn = An*X+Bn;

Y = Hn*X;

Pp = An_1*P*An_1' + Theta; % covariance prediction, including the process noise
K = Pp*Hn'*pinv(OMEGA + Hn*Pp*Hn'); % Kalman gain from the predicted covariance
P = (eye(3)-K*Hn)*Pp; % covariance correction with the current gain
xp = An_1*xc+Bn_1;
yp = Hn*xp;
xc = xp + K*(y-yp);

%smoothing
Ank = 1;
for k = -smt : -1
n=i+k;
Ank = Ank*eval(A);
end
An1sb = 0;
for k = 1 : smt
An1s = 1;
for s = -k : -2
n=i+1+s;
An1s = An1s*eval(A);
end
n=i-k;
An1sb = An1sb+An1s*eval(B);
end

xsmt = pinv(Ank)*(xc-An1sb);
p1(i) = x(1); % record the true state (first component)
p2(i) = xsmt(1); % record the smoothed estimate, smt steps back

X=Xn;
x=xn;
end
plot(1:N,p1,1:N,p2);
Program 9 : Program for Smoothing

Following results were obtained:

Figure 58 : Iterations vs. State x1 (Smoothing)

Figure 59 : Iterations vs. State x2 (Smoothing)

Figure 60 : Iterations vs. State x3 (Smoothing)

6. Add-ins to the algorithms:

6.1. Ruppert–Polyak version with averaging:

The following algorithm improves the convergence performance of the nonlinear regression. Since the same recursion is exploited by the stochastic gradient method for stochastic optimization, the add-in can be used there as well.

6.1.1. Algorithm:

$\bar{x}_n = \bar{x}_{n-1} - \dfrac{1}{n}\left(\bar{x}_{n-1} - x_n\right), \qquad \text{equivalently} \quad \bar{x}_n = \dfrac{1}{n}\sum_{k=1}^{n} x_k$

Where:

$\bar{x}_n$ is the Ruppert–Polyak average value.

$x_n$ is the estimated value from the actual algorithm.

6.1.2. Implementation:

The following code shows the changes made to 'Program 3 : Stochastic gradient method' to improve its performance.
for k = 1 : chrvv000;
A = eval(get(handles.edit1,'string'));
B = eval(get(handles.edit2,'string'));
C = eval(get(handles.edit3,'string'));
GAMMA = eval(get(handles.edit4,'string'));

Outputx(k,1) = k;% for x1* vs Iterations graph


Outputy(k,1) = x(1);
Outputx(k,2) = k;% for x1* vs Iterations graph
Outputy(k,2) = x(2);

dF=2*A*x + B;
Y = dF + [Xi1(k);Xi2(k)];
gamma = eval(get(handles.edit5,'string'));
x = x - gamma*GAMMA*Y;

Outy(k,1) = xb(1);% Ruppert–Polyak version with averaging


Outy(k,2) = xb(2);
xb = xb - (1/k)*(xb-x);
end
Program 10 : Ruppert-Polyak version with averaging

By using the same conditions, we get the following graphs.

Figure 61 : Iterations vs. x1 Figure 62 : Iterations vs. x2

Here we can see that the values converge more smoothly than in the previous results.

6.2. Obtaining Global minimum using Random search method:

6.2.1. Method:

We can see that, by changing the noise added in the Random Search method to a normal distribution, we can obtain the global minimum.

To verify this, we take a function that has more than one minimum (its parameters are set in the program) and observe the following graph of the function.

Figure 63 : Input function for finding global minimum

6.2.2. Implementation:

The simulation gave the following results; the initial conditions were taken far from the global minimum to verify that the algorithm converges to it. A sketch of the modified search step is shown below.
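The modified program is not listed in the source; a minimal self-contained sketch of the idea, with an illustrative two-minimum test function standing in for the one used here and a two-measurement difference (a common variance-reduction variant) in place of the single measurement, is:

% Hypothetical sketch: random search with Gaussian perturbations; the
% normally distributed trial points can escape local minima.
f = @(x) sum((x.^2 - 1).^2) + 0.5*x(1); % illustrative stand-in function
x = [2; 2];                             % initial point away from the minima
alpha = 0.5;                            % trial step size
for n = 1:20000
    gamma = 0.01/n;                     % decaying gain
    vn = randn(2,1);                    % normal (Gaussian) perturbation
    yn = f(x + alpha*vn) - f(x) + 0.01*randn; % noisy difference measurement
    x = x - (gamma/alpha) * yn * vn;    % random-search update
    x = min(5, max(-5, x));             % guard against transient blow-up (cf. section 6.3)
end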

Figure 64 : Iterations vs. x1 Figure 65 : Iterations vs. x2

Figure 66 : x1 vs. x2

Figure 67 : PDF of the v1(1) Figure 68 : PDF of the v2(1) Figure 69 : PDF of

6.3. Adding saturation function to estimated variable:

The optimization technique can be applied in many practical situations, such as inventory control, where $x$ may refer to the cost to be spent on each material. During the implementation of the above algorithms, we see that the value of $x_n$ can pass through very high values; in the implementation of the Kiefer–Wolfowitz procedure (0th order stochastic optimization), for example, the iterates temporarily reach very large values (see the zoomed figures). To solve this problem, a saturation is applied to the value of $x_n$.

6.3.1. Algorithm:

$x_n^{\mathrm{sat}} = \min\left(x^{+}, \max\left(x^{-}, x_n\right)\right) \quad \text{(applied componentwise)}$

Where:

$x^{-}$ is the lower bound.

$x^{+}$ is the upper bound.
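With scalar bounds, the same clamping can also be written componentwise in one line; a sketch with illustrative bounds:

% Hypothetical sketch: elementwise saturation of the estimate vector x.
xmin = -5; xmax = 5;              % illustrative bounds
x = min(xmax, max(xmin, x));      % clamp every component of x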

6.3.2. Implementation:

The previously shown 'Program 4 : Kiefer–Wolfowitz Method' was modified by taking the upper and lower bounds as ±5, as implemented below.
for k = 1 : chrvv000;
A = eval(get(handles.edit1,'string'));
B = eval(get(handles.edit2,'string'));
C = eval(get(handles.edit3,'string'));

alpha = eval(get(handles.edit8,'string'));
Yn = [0;0];
N=2;

Outputx(k,1) = k;% for x1* vs Iterations graph


Outputy(k,1) = x(1);
Outputx(k,2) = k;% for x1* vs Iterations graph
Outputy(k,2) = x(2);

for n = 1:N
if n==1
e=[1;0];
end
if n==2
e=[0;1];
end
xn = x + alpha*e;
yn = xn'*A*xn+B'*xn+C + Xi1(4*(k-1)+2*(n-1)+1);
Yn = Yn + yn*e;
xn = x - alpha*e; % symmetric perturbation in the negative direction
yn = xn'*A*xn+B'*xn+C + Xi1(4*(k-1)+2*(n-1)+2);
Yn = Yn - yn*e;
end
gamma = eval(get(handles.edit5,'string'));
x = x - gamma*Yn/(2*alpha);

if abs(x(1))>5 % for fixing maximum and minimum limits


x(1)=sign(x(1))*5;
end
if abs(x(2))>5
x(2)=sign(x(2))*5;
end
end
Program 11 : Optimization with modification of saturation

The simulation was run without modifying the other parameters of the 'Kiefer–Wolfowitz procedure (0th order Stochastic Optimization)', and the following results were obtained. We can see that the values converge correctly.

Figure 70 : Iterations vs. x1 Figure 71 : Iterations vs. x2

When the saturation limits are tightened so that part of the true minimizer lies outside the bounds, the following graphs are obtained. Here we can see that the component whose optimal value is within the bounds is optimized correctly, but the other component converges to the extreme bound value.

Figure 72 : Iterations vs. x1 Figure 73 : Iterations vs. x2

6.4. Whitening Method (with LSM algorithm):

In section 4.2 we saw that LSM (the case $v_n = z_n$) does not work for models with correlated noise, so we add the whitening method, which guarantees convergence of the parameters.

6.4.1. Algorithm:

The LSM estimate is applied to whitened data:

$\hat{C}_n = \left(\sum_k \tilde{x}_{k+1}\,\tilde{z}_k^{\top}\right)\left(\sum_k \tilde{z}_k\,\tilde{z}_k^{\top}\right)^{-1}$

Where,

$\tilde{z}_n = \dfrac{1}{d_0}\left(z_n - d_1 z_{n-1} - d_2 z_{n-2}\right)$ and

$\tilde{x}_{n+1} = \dfrac{1}{d_0}\left(x_{n+1} - d_1 x_n - d_2 x_{n-1}\right)$ and

$d_0$, $d_1$ and $d_2$ are the weights of the noise, as stated in section 4.2.2.

6.4.2. Implementation:

Let us verify the algorithm on the system

$x_{n+1} = a_1 x_n + a_2 x_{n-1} + a_3 x_{n-2} + b_1 u_n + b_2 u_{n-1} + b_3 u_{n-2} + d_0 w_n + d_1 w_{n-1} + d_2 w_{n-2}$

which can be written in the same companion matrix form as before. Let us consider the following values for the implementation in the code:

$a = [0.1\;\; 0.2\;\; 0.3]$, $b = [2\;\; 15\;\; 1]$, $d_0 = 0.5$, $d_1 = 0.15$, $d_2 = 0.01$

The values of $d_0$, $d_1$ and $d_2$ are chosen in such a way that the noise filter $d_0 + d_1 q^{-1} + d_2 q^{-2}$ is stable, where $q^{-1}$ is a unit delay. (Note that the corresponding difference equation is stable when the roots of this polynomial lie inside the unit circle.)

MATLAB code is as follows:

a = [0.1 0.2 0.3;1 0 0;0 1 0];


b = [2 15 1;0 0 0;0 0 0];
C = [a b]
c = zeros(3,6);
k=0;
un = sin(0.2*k);
un_1 = sin(0.2*(k-1))+sin(0.5*(k-1));
un_2 = sin(0.2*(k-2))+sin(0.5*(k-2));
un_3 = sin(0.2*(k-2))+sin(0.5*(k-3));
un_4 = sin(0.2*(k-2))+sin(0.5*(k-4));
un_5 = sin(0.2*(k-2))+sin(0.5*(k-5));
xn = 0;
xn_1 = 0;
xn_2 = 0;
xn_3 = 0;

xn_4 = 0;
xn_5 = 0;
xp = [xn_1;xn_2;xn_3]*0;
up = [un_1;un_2;un_3]*0;
xpp = [xn_2;xn_3;xn_4]*0;
xppp = [xn_3;xn_4;xn_5]*0;
upp = [un_2;un_3;un_4];
xppp = [xn_3;xn_4;xn_5];
uppp = [un_3;un_4;un_5];

zp = [xp;up]*0;
zpp = [xpp;upp]*0;
zppp = [xppp;uppp]*0;
zpppp = zp*0;
x = [xn;xn_1;xn_2];
u = [un;un_1;un_2];
z = [x;u];
v = zeros(6,3);
vi = zeros(6,3);
iGAMMA = zeros(6,6);
randd1 = random('unif',-1,1,1,10002);
for i = 1:10000
x1 = a*sin(0.01*x)+b*(u)+.5*randd1(i+2)+.15*randd1(i+1)+.01*randd1(i);
zu = (z - .15*zp - .01*zpp)/.5;
vt = zu;
x1u = (x1 - .15*x - .01*xp)/.5;
v = v+vt*x1u';
iGAMMA = iGAMMA + zu*vt';
GAMMA = pinv(iGAMMA);
c = v'*GAMMA;
xppp=xpp;
xpp=xp;
xp=x;
x = x1;
uppp=upp;
upp=up;
up=u;
u = [0 0 0;1 0 0;0 1 0]*u + [.3*cos(0.4*i)+0.5*sin(0.5*(i))+0.2*sin(0.2*i);0;0];
zpppp=zppp;
zppp=zpp;
zpp=zp;
zp=z;
z = [x;u];
end
c
e = C-c
Program 12 : Program for Whitening Method (with LSM algorithm)

Results are obtained as shown below:

Figure 74 : Iterations vs. Parameters (Whitening Method)

Figure 75 : Iterations vs. Parameters (Whitening Method)

6.5. Simple Application of Kalman Filter (Software sensor)

The Kalman filter is known as a software sensor; here we apply the filter to an electrical circuit to obtain the current flowing in it. It is easy to measure a voltage in a circuit by connecting a voltmeter across the required points, but measuring a current in a PCB or a ready-made circuit is almost impossible without breaking the loop. This example addresses that problem by estimating the current with a Kalman filter.

6.5.1. Circuit Details:

Let us take a simple series RLC circuit consisting of a resistance, an inductance and a capacitor (the component values are set in the schematic below). The following figure shows the setup made in NI's Multisim software.

Figure 76 : Circuit Diagram of the RLC circuit

The input supply is set to a triangle wave with the settings shown in the following figure. The output voltage is measured by an oscilloscope; the output considered here is the voltage across the capacitor.

Figure 77 : Settings of input power supply

6.5.2. Mathematical Modeling:

The following state-space model is obtained by selecting the loop current and the voltage across the capacitance as the state variables:

$\dot{x} = \begin{bmatrix} -R/L & -1/L \\ 1/C & 0 \end{bmatrix} x + \begin{bmatrix} 1/L \\ 0 \end{bmatrix} u$

where the state vector is $x = [\,i_L;\; v_C\,]$ and the input $u$ is the supply voltage. Substituting the component values gives the continuous-time system.

Converting the system to discrete time with a sampling time of 10 µs gives the discrete matrices used in the code below (a sketch of this conversion follows).

By saving data from the oscilloscope in NI's Multisim, we obtain the sequences of the output $y_n$ and the input $u_n$. These values are used to calculate the current virtually using the software sensor. Gaussian noise is also added to $y_n$ to simulate the real-life situation of measuring the data with a probe. The output equation therefore becomes

$y_n = H x_n + v_n = v_C(n) + v_n$
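The discretization can be reproduced, for example, with MATLAB's c2d; the component values below are placeholders, since the actual R, L and C values are given only in the schematic:

% Hypothetical sketch: discretize the continuous RLC state-space model.
R = 1e3; L = 10e-3; C = 1e-6;          % placeholder component values
Ac = [-R/L -1/L; 1/C 0];               % states: [inductor current; v_C]
Bc = [1/L; 0];
Cc = [0 1];                            % measured output: v_C
sysd = c2d(ss(Ac, Bc, Cc, 0), 10e-6);  % 10 us sampling time
Ad = sysd.A; Bd = sysd.B;              % discrete A and B matrices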

6.5.3. Implementation:

The values of the output $y_n$ are stored in the variable 'obt_yn' and those of the input $u_n$ in the variable 'obt_u'. The following MATLAB code implements the filter to obtain the estimated state values.

clc;
N = 2886;
p1=zeros(N,1);
p2=zeros(N,1);
p3=zeros(N,1);
p4=zeros(N,1);
p5=zeros(N,1);
A = '[-0.0009471 -0.000992;0.992 0.991]';
B = '[0.000992;0.008967]';

H = '[0 1]';
Theta = [0.001 0.001;0.001 0.001];
OMEGA = 0.0012;
x=[0;0];
X=x;
xc=x;
randd1 = random('norm',0,1,1,N);
randd2 = random('norm',0,1,1,N);
randd3 = random('norm',0,1,1,N);
randd4 = random('norm',0,1,1,N);
P = eye(2);
K = [1;1];

for i = 2:N
n = i;
An = eval(A);
Bn = eval(B)*obt_u(n);
Hn = eval(H);
n = i-1;
An_1 = eval(A);
Bn_1 = eval(B)*obt_u(n);
Hn_1 = eval(H);

y =obt_yn(n);

Xn = An*X+Bn;
Y = Hn*X;

Pp = An_1*P*An_1' + Theta; % covariance prediction, including the process noise
K = Pp*Hn'*pinv(OMEGA + Hn*Pp*Hn'); % Kalman gain from the predicted covariance
P = (eye(2)-K*Hn)*Pp; % covariance correction with the current gain
xp = An_1*xc+Bn_1;
yp = Hn*xp;
xc = xp + K*(y-yp);

p1(i)=y;
p2(i)=xc(1);
p3(i)=xc(2);
p4(i)=Xn(2); % noise-free model propagation of the output state
p5(i)=X(2);

X=Xn;
end
plot(1:N,p1,'g',1:N,p3,'r',1:N,p2);
%plot(1:N,p1,1:N,p2,1:N,p3,1:N,p4,1:N,p5);
Program 13 : Program for measuring current (state) by software sensor

The following graph was obtained by transferring data from the simulation software to MS Excel. The graph shown in Figure 79 was calculated by MATLAB using the output and input data.

Figure 78 : Time vs. Actual Current

Figure 79 : Time vs. Current (Measured using Kalman Filter)

The following figure shows the input and output relations of both the estimated and the actual values; from the graphs we can observe that the filter works well in the shown example.

Figure 80 : Time vs. Input, Output and States

[Chart legends: Input Voltage (Channel A); Voltage Across Capacitor (Channel B)]
Figure 81 : Actual Input and Output

Bibliography
Oishi, D. (n.d.). State Equation Representation of Dynamic Systems. Retrieved July 15, 2010, from http://courses.ece.ubc.ca/360/lectures/Lecture07.pdf

Poznyak, A. S. (2009). Advanced Mathematical Tools for Automatic Control Engineers. Jordan Hill, Oxford: Elsevier Ltd.

Welch, G., & Bishop, G. (n.d.). An Introduction to the Kalman Filter. Retrieved July 15, 2010, from Department of
Computer Science, University of North Carolina: http://www.cs.unc.edu/~welch/media/pdf/kalman_intro.pdf
