
Instruction Sheet for

SOFT COMPUTING LABORATORY (EE 753/1)

Develop the following programs in the MATLAB environment:


1. Write a program in MATLAB for a Feed Forward Neural Network with the Back
Propagation training algorithm for realizing:
(a) XOR problem
(b) Straight line, e.g. y = 2x + 3
(c) Ellipse, e.g. x^2/a^2 + y^2/b^2 = 1

2. Write a generalized C/C++ program for Multilayer Feed Forward Neural Networks with Back
Propagation training algorithm.
3. Develop a Kohonen's self-organizing neural network to classify the following
patterns into the required number of groups. The number of groups should be
flexible and may change according to the input data set.
X1 = [1 2 3 4 5]
X2 = [1.1 2.1 3.1 4.1 5.1]
X3 = [2 3 4 5 6]
X4 = [2.1 3.1 4.1 5.1 6.1]
X5 = [3 4 5 6 7]
X6 = [3.1 4.1 5.1 6.1 7.1]
4. For problem no. 3, consider a test pattern X = [2.2 3.2 4.2 5.2 6.2].
Find the group in which the test pattern is classified and verify your results.
5. Develop an LVQ neural network to classify the following patterns into three
predetermined groups as: {X1, X2} in Group 1, {X3, X4} in Group 2, and
{X5, X6} in Group 3, where
X1 = [1 2 3 4 5]
X2 = [1.1 2.1 3.1 4.1 5.1]
X3 = [2 3 4 5 6]
X4 = [2.1 3.1 4.1 5.1 6.1]
X5 = [3 4 5 6 7]
X6 = [3.1 4.1 5.1 6.1 7.1]

6. For problem no. 5, consider a test pattern X = [2.2 3.2 4.2 5.2 6.2].
Find the group in which the test pattern is classified and verify your results.
7. Consider an anti-lock braking system, directed by a microcontroller chip.
The microcontroller has to take decisions based on the temperature (T) and
the speed (N). The input and output ranges are as follows:
T (0-125 C): [cold, moderate, hot]
N (0-100): [low, medium, high]
Brake position (B) (0-1): [low, medium, high]
Rule 1: if T is cold and N is high, B is medium
Rule 2: if T is moderate and N is high, B is high
Rule 3: if T is hot and N is medium, B is low
Use uniform membership functions, assuming the centre of largest area
de-fuzzification strategy. Determine the value of B when T = 100 C and N = 55.

Outline of the Program Structures


1. Feed Forward ANN with Back Propagation Training Algorithm
clear all;
x=-10:.25:10;
% since there are 81 values of x
for i=1:81
y(i)=x(i)/(1+x(i)^2); % Data generation
end
p=x; % Training data: input
t=y; % Training data: output/target
% 15 tansig hidden neurons, 1 purelin output neuron, trained with
% Levenberg-Marquardt (trainlm)
net=newff(minmax(p),[15 1],{'tansig' 'purelin'},'trainlm');


net.trainParam.show = 10;
net.trainParam.epochs = 300; % Number of iterations during training
net.trainParam.goal = .001; % Goal of training
[net,tr]=train(net,p,t); % Train the network with training data

q=p; % Test data

a = sim(net,q); % Testing the neural network
b=a'; % Network output for the new test data
f=t;
d=(b-f'); % Error
for i=1:81
e(i)=d(i);
end
plot(x,a,'ro'); % Network output (red circles)
grid;
hold on;
plot(x,y); % Target function
% Calculate the RMS error (avoid naming a variable "sum", which shadows
% the MATLAB built-in)
s=0;
for i=1:81
s=s+(e(i))^2;
end
rms=sqrt(s/81);
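
The same skeleton can be adapted to the XOR problem of Problem 1(a). The
following is a minimal sketch, assuming the same newff/train interface as
above; the hidden layer size and training parameters are illustrative choices:

p = [0 0 1 1; 0 1 0 1]; % XOR inputs, one pattern per column
t = [0 1 1 0]; % XOR targets
net = newff(minmax(p),[4 1],{'tansig' 'purelin'},'trainlm');
net.trainParam.epochs = 500;
net.trainParam.goal = 1e-4;
[net,tr] = train(net,p,t); % Train on all four patterns
a = sim(net,p) % Should approximate [0 1 1 0]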

2. Kohonen's Network for Pattern Classification:


clc;
% Consider FOUR input patterns as follows:
%
p1=[0.1 0.2 0.3 0.4 0.5];
p2=[0.11 0.22 0.33 0.44 0.55];
p3=[1 2 3 4 5];
p4=[1.1 2.1 3.1 4.1 5.1];

% Represent these input patterns in matrix form:
%
p = [0.1 0.2 0.3 0.4 0.5;
0.11 0.22 0.33 0.44 0.55;
1 2 3 4 5;
1.1 2.1 3.1 4.1 5.1];
%
% However, data has to be input column-wise to the ANN, and hence
% transposition is necessary:
q = p'
%
% You can now verify the transposed input data (which is in a
% column-wise format):
% q =
%     0.1000    0.1100    1.0000    1.1000
%     0.2000    0.2200    2.0000    2.1000
%     0.3000    0.3300    3.0000    3.1000
%     0.4000    0.4400    4.0000    4.1000
%     0.5000    0.5500    5.0000    5.1000
% Visibly, there are two distinct classes in the input data.


% *********************************************************************
% *********** Creating the Competitive Learning ANN Architecture *****
% *********************************************************************
% To classify these 4 input patterns, create a competitive learning ANN
% with FIVE input elements (the number of input neurons must be the same
% as the dimension of the input data), each ranging from 0.1 to 5.1 (the
% overall min-max range of the data).
% Note that in "newc" the first argument indicates the ranges of each of
% the FIVE input elements, and the second argument says that there are
% to be TWO neurons in the output layer.
% Note that the number of output neurons decides the maximum number of
% output classes to be created by the ANN. In this case there are TWO
% distinct classes to be created.
%
net = newc([0.1 5.1; 0.1 5.1; 0.1 5.1; 0.1 5.1; 0.1 5.1],2);
%
% The weights are initialized to the centres of the input ranges with
% the function "midpoint". You can check these initial values as follows:
%
wts1 = net.IW{1,1}
%
% wts1 =
%     2.6000    2.6000    2.6000    2.6000    2.6000
%     2.6000    2.6000    2.6000    2.6000    2.6000
%
% These weights are indeed the values at the midpoint of the range
% (0.1 to 5.1) of the inputs, as you would expect when using midpoint
% for initialization.
%
% The biases are computed by "initcon", which gives
%
biases = net.b{1}
%
% biases =
%     5.4366
%     5.4366
% Now you have a network, but you need to train it to do the
% classification job.
% Recall that each neuron competes to respond to an input vector p.
% If the biases are all 0, the neuron whose weight vector is closest to
% p gets the highest net input and, therefore, wins the competition and
% outputs 1. All other neurons output 0. You want to adjust the winning
% neuron so as to move it closer to the input. A learning rule to do
% this is discussed in the next section.

Page 4 of 7

% **************** Kohonen Learning Rule (learnk) ********************
%
% The function "learnk" is used to perform the Kohonen learning rule in
% this toolbox. The weights of the winning neuron (a row of the input
% weight matrix) are adjusted with the Kohonen learning rule. Supposing
% that the ith neuron wins, the elements of the ith row of the input
% weight matrix are adjusted as
%     dw = lr*(p' - w)
% i.e., the winning row w moves toward the input p by a fraction lr.
% The Kohonen rule allows the weights of a neuron to learn an input
% vector, and because of this it is useful in recognition applications.
% Thus, the neuron whose weight vector was closest to the input vector
% is updated to be even closer. The result is that the winning neuron is
% more likely to win the competition the next time a similar vector is
% presented, and less likely to win when a very different input vector
% is presented. As more and more inputs are presented, each neuron in
% the layer closest to a group of input vectors soon adjusts its weight
% vector toward those input vectors. Eventually, if there are enough
% neurons, every cluster of similar input vectors will have a neuron
% that outputs 1 when a vector in the cluster is presented, while
% outputting a 0 at all other times. Thus, the competitive network
% learns to categorize the input vectors it sees.
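%
% As a quick numeric illustration of the update above (the learning
% rate and the weight row are illustrative assumptions, not values taken
% from the trained network):
lr = 0.01; % illustrative learning rate
w = [2.6 2.6 2.6 2.6 2.6]; % a winning row of the initial weight matrix
w_new = w + lr*(p1 - w) % the row moves a fraction lr toward input p1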
% *******************************
% Training
% *******************************
% Now train the network for 200 epochs. You can use either train or adapt.
%
net.trainParam.epochs = 200;
net = train(net,q);
% For each epoch, all training vectors (or sequences) are each presented
% once in a different random order, with the network weight and bias
% values updated after each individual presentation.
% Next, supply the original vectors as input to the network, simulate
% the network, and finally convert its output vectors to class indices:
%
a = sim(net,q);
class_index = vec2ind(a)
% This yields
% class_index = 1     1     2     2
%
% Note that the network is trained to classify the input vectors into
% two groups: the first two vectors are put into class 1, and the other
% two vectors are put into class 2.
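%
% Following the spirit of Problem no. 4, an unseen pattern can now be
% supplied to the trained network to find its group (a sketch; the test
% vector below is an illustrative choice lying close to the first group):
q1 = [0.12 0.23 0.33 0.44 0.52];
a1 = sim(net,q1');
test_class = vec2ind(a1) % expected to match the class of p1 and p2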


3. LVQ Network for Pattern Classification:


clc;
% Consider FOUR input patterns as follows:
%
p1=[0.1 0.2 0.3 0.4 0.5];
p2=[0.11 0.22 0.33 0.44 0.55];
p3=[1 2 3 4 5];
p4=[1.1 2.1 3.1 4.1 5.1];

% Represent these input patterns in matrix form:
%
P = [0.1 0.2 0.3 0.4 0.5;
0.11 0.22 0.33 0.44 0.55;
1 2 3 4 5;
1.1 2.1 3.1 4.1 5.1];
%
% However, data has to be input column-wise to the ANN, and hence
% transposition is necessary:
q = P'

%
% You can now verify the transposed input data (which is in a
% column-wise format):
% q =
%     0.1000    0.1100    1.0000    1.1000
%     0.2000    0.2200    2.0000    2.1000
%     0.3000    0.3300    3.0000    3.1000
%     0.4000    0.4400    4.0000    4.1000
%     0.5000    0.5500    5.0000    5.1000

% Visibly, there are two distinct classes in the input data.


% The objective is now to classify these 4 input patterns into 2
% pre-defined classes (let us call them class 1 and class 2).

%
% Since LVQ is a supervised learning algorithm, the target output class
% for each vector has to be defined. In our example, the 1st and 2nd
% input vectors belong to Class 1, whereas the 3rd and 4th vectors
% belong to Class 2.
% Let us now define the target classes for the 4 input vectors as follows:
Tc = [1 1 2 2]

%
% Now use the command "newlvq" with the proper arguments.
% Note that the 1st argument defines the range of the input vectors,
% followed by the number of hidden layer neurons (chosen as 10). The
% third argument stands for the class percentages of the input vectors:
% here 2 input vectors belong to class 1 and the remaining 2 vectors
% belong to class 2, hence both Class 1 and Class 2 have an equal
% percentage of 50%.
%
net = newlvq(minmax(q),10,[.5 .5]);


%
% Next convert the Tc matrix to target vectors:
%
T = ind2vec(Tc);
%
% This gives a sparse matrix T that can be displayed in full with
%
targets = full(T)
%
% which gives
% targets =
%      1     1     0     0
%      0     0     1     1
%
% *******************************
% Training
% *******************************
% Now train the network for 50 epochs. You can use either train or adapt.
%
net.trainParam.epochs = 50;
net = train(net,q,T);
% After training is complete, simulate the network with the same input
% set to verify the accuracy of classification.
a = sim(net,q);
% The output class indices of the input patterns can be checked as:
%
class_index = vec2ind(a)
% This yields
% class_index = 1     1     2     2
%
% Note that the network is trained to classify the input vectors into
% two predefined groups: the first two vectors are put into class 1,
% and the other two vectors are put into class 2.
%
% Now it is necessary to check the classification accuracy for an
% unseen vector. Let us choose an unseen input vector as:
% q1 = [1.2 2.2 3.2 4.2 5.2], which should belong to Class 2.
%
q1 = [1.2 2.2 3.2 4.2 5.2];
a = sim(net,q1');
%
% The output class index of this unseen input pattern can be checked as:
%
class_index = vec2ind(a)
%
% This yields
% class_index = 2
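
4. Fuzzy Inference for the Anti-lock Braking Problem (Problem no. 7):

The following is a minimal sketch, not a prescribed solution: it assumes
uniform triangular membership functions over the stated ranges and standard
min (Mamdani) rule firing; the helper function "tri" and all membership
breakpoints are illustrative assumptions.

T = 100; N = 55; % crisp inputs from Problem no. 7
% Triangular membership function with breakpoints a, b, c
tri = @(x,a,b,c) max(min((x-a)/(b-a),(c-x)/(c-b)),0);
% Uniform partition of T in [0,125]: cold / moderate / hot
mu_T = [tri(T,-62.5,0,62.5) tri(T,0,62.5,125) tri(T,62.5,125,187.5)];
% Uniform partition of N in [0,100]: low / medium / high
mu_N = [tri(N,-50,0,50) tri(N,0,50,100) tri(N,50,100,150)];
% Rule firing strengths (min operator)
w1 = min(mu_T(1),mu_N(3)); % Rule 1: cold and high -> B medium
w2 = min(mu_T(2),mu_N(3)); % Rule 2: moderate and high -> B high
w3 = min(mu_T(3),mu_N(2)); % Rule 3: hot and medium -> B low
% Clip the output sets over the B universe [0,1]
b = 0:0.01:1;
mu_B = [min(w3,tri(b,-0.5,0,0.5));   % low
        min(w1,tri(b,0,0.5,1));      % medium
        min(w2,tri(b,0.5,1,1.5))];   % high
% Centre of largest area de-fuzzification: pick the clipped output set
% with the largest area and take the centroid of that set
areas = trapz(b,mu_B,2);
[~,k] = max(areas);
B = trapz(b,b.*mu_B(k,:))/trapz(b,mu_B(k,:))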

