Abstract- The purpose of this paper is to study, analyze and improve the performance of RBF, Cascade, Elman, Feed Forward and Pattern Recognition networks using the 'Contrast-limited Adaptive Histogram Equalization' method of feature extraction. This work is divided into two sections. In the earlier work, we performed the performance analysis of the RBF, Cascade, Elman and Feed Forward neural networks for the character recognition of handwritten Marathi curve scripts using the 'edge detection and dilation' method of feature extraction. In this paper, we have applied the feature extraction method known as Contrast-limited Adaptive Histogram Equalization (CLAHE). This feature extraction method enhances the contrast of images by transforming the values in the intensity image. For this experiment, we have considered six samples each of 48 Marathi characters. For every sampled character, the CLAHE feature extraction method is applied. We have then studied and analyzed the performance of these five neural networks for character recognition. It is found that, except for the Elman network, the performance of all the networks increased.
Keywords- Histogram Equalization; Performance analysis; RBF neural network; Cascade neural network; Elman neural network; Feed Forward network; Pattern Recognition networks; CLAHE.
__________________________________________________*****_________________________________________________
The RBF network approach is more intuitive than the MLP. An RBF network performs classification by measuring the input's similarity to examples from the training set. Each RBF network neuron stores a "prototype", which is just one of the examples from the training set. When we want to classify a new input, each neuron computes the Euclidean distance between the input and its prototype. Roughly speaking, if the input more closely resembles the class A prototypes than the class B prototypes, it is classified as class A.

A Pattern Recognition network is an N-layer feed-forward backpropagation network. The transfer functions can be any differentiable transfer function, such as tansig, logsig or purelin. The training function BTF can be any of the backpropagation training functions, such as trainlm, trainbfg, trainrp, traingd, etc.

Table 1: Regression values in the earlier experiment

Network            Regression value
Elman BP NN        0.48278
Feed Forward NN    0.47774
RBF NN             1.00000

III. FEATURE EXTRACTION (CLAHE)
The feature extraction method used in the present work is the 'Contrast-limited Adaptive Histogram Equalization' (CLAHE) method of feature extraction. The data set used in the present research work comprises 288 handwritten characters of the Marathi language. The set was created by collecting six samples of 48 characters each from six different people on paper. All 288 characters are used as training samples. Following are the steps used in the 'Contrast-limited Adaptive Histogram Equalization' (CLAHE) method
IJFRCSCE | November 2017, Available @ http://www.ijfrcsce.org
International Journal on Future Revolution in Computer Science & Communication Engineering, ISSN: 2454-4248, Volume: 3, Issue: 11, pp. 178-183
In this experiment, with the help of CLAHE, we have created P as the training pattern, which is nothing but all the characters converted into matrix form. T is the testing pattern with which we have to perform the recognition.

The CLAHE feature extraction method proposed here has the following steps:
1] Read the image.
2] Convert the image into gray scale.
3] Adjust the image intensity values.
4] Create a morphological structuring element P1 (intensity area) by using 22 points in the image.
5] Normalize the input pattern vector.
6] Now P is the matrix which is to be trained.

The specialty of this method is that a flat disk-shaped morphological structuring element is created which considers 22 different points of the image. This covers the entire morphological structure of the image. The above-mentioned steps are applied to each of the 288 images.

IV. METHODOLOGY
In this experiment we have used five different networks to train the pattern vector P. T is the target vector with which we are going to train the pattern vector P. These five networks are:
1] RBF neural networks
2] Cascade neural networks
3] Elman neural networks
4] Feed Forward neural networks
5] Pattern Recognition neural networks
For every network, corresponding MATLAB functions are available. Before applying the methodology mentioned in Figure 2, we prepared the input pattern vector P after performing the CLAHE feature extraction technique. Our testing vector T was kept ready. All the steps, viz. extract the features, create the network, train the network, simulate the network and plot the regression, were carried out and the results were stored. After performing all these steps, the regression plots of every network were stored.

V. RBF NEURAL NETWORK
An RBF neural network is a three-layer feed-forward network that consists of one input layer, one radial layer and one output layer; each input neuron corresponds to a component of an input vector x. Generally, these classifiers are used for interpolation in multidimensional space [10].

The radial layer consists of K neurons and one bias neuron. Each node in the radial layer uses an RBF, denoted $\varphi(r)$, as its non-linear activation function. The hidden layer performs a non-linear transform of the input, and the output layer is a linear combiner which maps the nonlinearity into a new space. The biases of the output layer neurons can be modeled by an additional neuron in the hidden layer which has a constant activation function $\varphi_0(r) = 1$.

The RBF network can achieve a globally optimal solution for the adjustable weights in the minimum-MSE sense by using the linear optimization method. Thus, for an input pattern x, the output of the j-th node of the output layer can be defined as:

$y_j(x) = \sum_{k=1}^{K} w_{kj}\,\varphi_k(x_i - \mu_k) + w_{0j}$    (1)

The radial basis function $\varphi(\cdot)$ is typically selected as the Gaussian function, which can be represented as:

$\varphi_k(x_l) = \exp\left(-\frac{\lVert x_l - \mu_k \rVert^2}{2\sigma_k^2}\right)$ for $k \in \{1, 2, \ldots, K\}$    (2)

and $\varphi_0 = 1$ for $k = 0$ (bias neuron), where x is the N-dimensional input vector, $\mu_k$ is the vector determining the centre of the basis function $\varphi_k$, and $\sigma_k$ represents the width of the neuron. The weight vector between the input layer and the k-th hidden-layer neuron can be considered as the centre $\mu_k$ for the feed-forward RBF neural network.
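Equations (1) and (2) can be sketched directly in code. The following is a minimal illustration in Python with NumPy; the centres, widths and weights are arbitrary stand-in values, not quantities from the experiment:

```python
import numpy as np

def rbf_forward(x, centres, widths, W, w0):
    """Output of an RBF network per equations (1)-(2):
    y_j(x) = sum_k w_kj * phi_k(x) + w_0j, with Gaussian phi_k."""
    # phi_k(x) = exp(-||x - mu_k||^2 / (2 * sigma_k^2))  -- equation (2)
    d2 = np.sum((centres - x) ** 2, axis=1)      # squared distance to each centre
    phi = np.exp(-d2 / (2.0 * widths ** 2))      # radial-layer activations
    # linear output layer plus bias neuron phi_0 = 1    -- equation (1)
    return phi @ W + w0

# toy example: 2-D input, K = 3 radial neurons, one output node
centres = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])   # mu_k
widths  = np.array([1.0, 1.0, 1.0])                        # sigma_k
W       = np.array([[0.5], [0.3], [0.2]])                  # w_kj
w0      = np.array([0.1])                                  # w_0j (bias)
y = rbf_forward(np.array([1.0, 1.0]), centres, widths, W, w0)
```

An input lying exactly on a centre activates that neuron fully (phi = 1), which is the "prototype similarity" idea described in Section II.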
VI. CASCADE NEURAL NETWORKS
Cascade Forward models are similar to feed-forward
networks, but include a weight connection from the input to
each layer and from each layer to the successive layers.
While two-layer feedforward networks can potentially learn
virtually any input-output relationship, feed-forward
networks with more layers might learn complex
relationships more quickly [11].
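The cascade connectivity described above can be illustrated with a small sketch. This is written in Python with NumPy and uses hypothetical layer sizes; it mirrors the idea behind MATLAB's cascadeforwardnet rather than reproducing that function:

```python
import numpy as np

rng = np.random.default_rng(0)

def cascade_forward(x, W_ih, W_ho, W_io):
    """Two-layer cascade-forward pass: unlike a plain feed-forward net,
    the input x also connects directly to the output layer (via W_io)."""
    h = np.tanh(W_ih @ x)          # hidden layer sees the input
    return W_ho @ h + W_io @ x     # output layer sees hidden AND input

n_in, n_hidden, n_out = 4, 5, 2
W_ih = rng.standard_normal((n_hidden, n_in))
W_ho = rng.standard_normal((n_out, n_hidden))
W_io = rng.standard_normal((n_out, n_in))   # the extra cascade connection
y = cascade_forward(rng.standard_normal(n_in), W_ih, W_ho, W_io)
```

Dropping the `W_io @ x` term recovers an ordinary two-layer feed-forward network, which makes the difference between the two architectures explicit.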
Here, the error is given by equations (3)-(8). [Equations (3)-(8) and the intervening sections are not recoverable from the extracted text.]
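The paper's own equations (3)-(8) did not survive extraction. As a generic illustration only, the error minimized by the backpropagation-trained networks discussed here is typically the mean squared error between targets and network outputs:

```python
import numpy as np

def mse(targets, outputs):
    """Mean squared error E = (1/N) * sum_n ||t_n - y_n||^2 --
    the usual cost minimized by backpropagation training."""
    diff = np.asarray(targets) - np.asarray(outputs)
    return float(np.mean(np.sum(diff ** 2, axis=-1)))

# toy one-hot targets vs. network outputs
e = mse([[1.0, 0.0], [0.0, 1.0]], [[0.9, 0.1], [0.2, 0.8]])
```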
The confusion matrix shows the right and wrong classifications of the patterns.
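A confusion matrix of the kind plotted here can be computed directly. The sketch below uses Python/NumPy with made-up labels for three classes, not the experiment's 48 Marathi characters:

```python
import numpy as np

def confusion_matrix(true_labels, predicted_labels, n_classes):
    """cm[i, j] counts patterns of true class i classified as class j;
    the diagonal holds the right classifications, off-diagonal the wrong ones."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(true_labels, predicted_labels):
        cm[t, p] += 1
    return cm

cm = confusion_matrix([0, 0, 1, 2, 2], [0, 1, 1, 2, 0], n_classes=3)
accuracy = np.trace(cm) / cm.sum()   # fraction of patterns on the diagonal
```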
XI. COMPARATIVE CHART SHOWING THE REGRESSION VALUES OF FIVE NETWORKS WITH EDGE DETECTION AND DILATION METHOD AND CLAHE METHOD OF FEATURE EXTRACTION
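The regression values compared in this chart are correlation coefficients between network outputs and targets, as reported by MATLAB's regression plots (R = 1 indicates a perfect fit). An equivalent computation, with illustrative values only, is:

```python
import numpy as np

def regression_value(targets, outputs):
    """Pearson correlation R between targets and network outputs --
    the value a regression plot reports; R = 1 means a perfect fit."""
    t = np.ravel(targets)
    y = np.ravel(outputs)
    return float(np.corrcoef(t, y)[0, 1])

# an output that matches its target exactly yields R = 1,
# as observed for the RBF network in both experiments
r_perfect = regression_value([0.0, 0.5, 1.0], [0.0, 0.5, 1.0])
```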
Figure 5b. Performance plot for the RBF network
Figure 6a. Regression plot for the Pattern Recognition network
Figure 6b. Confusion matrix for the Pattern Recognition network

XII. CONCLUSION AND FUTURE SCOPE
The results presented in this paper clearly exhibit the comparison of the earlier and present performances of the Cascade network, Elman BP network, Feed Forward network, RBF network and Pattern Recognition network with respect to the two different feature extraction methods. These two methods are 'edge detection and dilation' and CLAHE. The experimental details obtained after applying these five neural network techniques are explored in terms of regression plots. From the graphs, it is clear that the regression value of the RBF classifier is 1 in both experiments. In this comparison, the regression value of the Elman network in the present experiment is less than in the previous experiment by 0.04115. For the remaining networks, i.e. Feed Forward, Cascade and Pattern Recognition, the regression values have increased, which shows improved performance. In future, experiments can be carried out to enhance the performance of the Elman network.

ACKNOWLEDGMENT
The author thanks Dr. Bharat Shinde, Principal, Vidya Pratishthan's Arts, Science and Commerce College, Vidyanagari, Baramati, Dist. Pune (India), for his valuable guidance and encouragement of the research work.

REFERENCES
[1] Naveen Kumar Sharma, S. R. Pande and Manu Pratap Singh, "Performance Evaluation Analysis of MLP & DG-RBF Feed Forward Neural Networks for Pattern Classification of Handwritten English Curve Scripts", (IJCSIT) International Journal of Computer Science and Information Technologies, Vol. 2 (2), 2011, pp. 932-945.
[2] Manu Pratap Singh and V. S. Dhaka, "Handwritten Character Recognition Using Modified Gradient Descent Technique of Neural Networks and Representation of Conjugate Descent for Training Patterns", IJE Transactions A: Basics, Vol. 22, No. 2, June 2009.
[3] Subhash Pawar and Neeta Jain, "Performance Evaluation Analysis of MLP & DG-RBF Feed Forward Neural Networks for Pattern Classification of Handwritten English Curve Scripts", (IJCSIT) International Journal of Computer Science and Information Technologies, Vol. 2 (2), 2011, pp. 932-945.
[4] Rahul Vivek Purohit and S. A. Imam, "Feature Extraction Pattern Recognition: A Review", MASAUM Journal of Reviews and Surveys, Volume 1, Issue 1, September 2009.
[5] O. D. Trier, A. K. Jain and T. Taxt, "Feature Extraction Methods for Character Recognition: A Survey", Pattern Recognition, Vol. 29, No. 4, 1996, pp. 641-662.
[6] Gauri Katiyar and Shabana Mehfuz, "MLPNN Based Handwritten Character Recognition Using Combined Feature Extraction", International Conference on Computing, Communication and Automation (ICCCA 2015).
[7] J. Pradeep, E. Srinivasan and S. Himavathi, "Diagonal Based Feature Extraction for Handwritten Character Recognition System Using Neural Network", IEEE, 2011.
[8] Amal Ramzi and Ammar Zahary, "Online Arabic Handwritten Character Recognition using Online-Offline Feature Extraction and Back Propagation Neural Network", IEEE, 2014.
[9] Jarernsri L. Mitrpanont and Urairat Limkonglap, "Using Contour Analysis to Improve Feature Extraction in Thai Handwritten Character Recognition Systems", IEEE, 2007, DOI 10.1109/CIT.2007.103.
[10] Birijesh K. Verma, "Handwritten Hindi Character Recognition Using Multilayer Perceptron and Radial Basis Function Neural Networks", IEEE, 1995.
[11] G. Lo Sciuto, G. Cammarata, G. Capizzi, S. Coco and G. Petrone, "Design Optimization of Solar Chimney Power Plant by Finite Elements Based Numerical Model and Cascade Neural Networks", IEEE, 2016.
[12] Yihe Wan, Shiji Song and Gao Huang, "Incremental Extreme Learning Machine based on Cascade Neural Networks", IEEE, 2015, DOI 10.1109/SMC.2015.330.
[13] Kannathal N., Sadasivan K. Putthuserypady and Lim Choo Min, "Elman Neural Networks for Dynamic Modeling of Epileptic EEG", Proceedings of the 28th IEEE EMBS Annual International Conference, New York City, USA, Aug 30 - Sept 3, 2006.
[14] Soni Chaturvedi, Rutika Titre and Neha Sondhiya, "Review of Handwritten Pattern Recognition of Digits and Special Characters using Feed Forward Neural Network and Izhikevich Neural Model", IEEE, 2014, DOI 10.1109/ICESC.2014.83.