Neuro-fuzzy Pattern Classifier for Channel
Equalization
Priti Ranjan Hathy, Siba Prasada Panigrahi, and Prashanta Kumar Patra
Abstract—This paper proposes a novel equalizer in which a hybrid structure of two multi-layer neural networks acts as a classifier for the detected signal pattern. The neurons are embedded with optimization algorithms; we consider two, Bacteria Foraging Optimization (BFO) and Ant Colony Optimization (ACO). The proposed structure reduces both training time and decision delay. Simulation results also demonstrate the superior performance of the proposed equalizer over existing equalizers.

Index Terms—Channel Equalization, Multilayer Perceptron Networks, Bacteria Foraging, Ant Colony Optimization.



1 INTRODUCTION

Channel equalization plays an important role in digital communication systems. There have been tremendous developments in equalizer structures since the advent of neural networks in signal processing applications. Recent literature is rich with newer applications of neural networks [1-5], in particular to independent component analysis, noise cancellation, and channel equalization [6-16]. However, none of these papers considered the large amount of training time required to train the neural network, and they also fail to reduce the decision delay. The authors in [17] tried to address these two problems through a hybrid structure of two ANNs, and succeeded in reducing both training time and decision delay. However, the result was suboptimal in nature, since the neurons were not trained with any optimization algorithm.

This paper adopts a structure similar to that of [17]. The novelty of this paper is to use the structure as a pattern classifier for classifying the detected signal pattern, with each neuron in the classifier structure embedded with an optimization algorithm.

Recently, Particle Swarm Optimization (PSO) [18], Bacteria Foraging Optimization (BFO) [19-21] and Ant Colony Optimization (ACO) [22-25] have been used for optimization in different fields of research. This paper uses BFO and ACO.

The rest of the paper is organized as follows: Section 2 discusses the problem considered in the paper. Section 3 presents the proposed equalizer structure and the optimization algorithms. Section 4 is dedicated to simulation results. The paper is finally concluded in Section 5.
2. PROBLEM STATEMENT
The impulse response of the channel and co-channel can be represented as:

$$H_i(z) = \sum_{j=0}^{p_i - 1} a_{i,j}\, z^{-j}, \qquad 0 \le i \le n \qquad (1)$$

Here $p_i$ and $a_{i,j}$ are the length and tap weights of the $i$th channel impulse response. We assume a binary communication system, which makes the analysis simple, though it can be extended to any communication system in general. The transmitted symbols satisfy the conditions

$$E[x_i(n)] = 0 \qquad (2)$$

$$E[x_i(n_1)\, x_j(n_2)] = \delta(i - j)\, \delta(n_1 - n_2) \qquad (3)$$

where $E[\cdot]$ represents the expectation operator and

$$\delta(n) = \begin{cases} 1 & n = 0 \\ 0 & n \ne 0 \end{cases} \qquad (4)$$

The channel output scalars can be represented as

$$y(n) = d(n) + d_{co}(n) + n(n) \qquad (5)$$

Here $d(n)$ is the desired received signal, $d_{co}(n)$ is the interfering signal, and $n(n)$ is the noise component, assumed to be Gaussian with variance $E[n^2(n)] = \sigma_n^2$ and uncorrelated with the data.






- Priti Ranjan Hathy, Dept. of Computer Application, Government Polytechnic, Bhubaneswar, Orissa, India.
- Siba Prasada Panigrahi, Professor, Electrical Engineering, KIST, Bhubaneswar, Orissa, India.
- Prashanta Kumar Patra, Department of Computer Science & Engineering, College of Engineering & Technology, Biju Patnaik University of Technology, Bhubaneswar, India.



The desired and interfering signals can be represented as

$$d(n) = \sum_{j=0}^{p_0 - 1} a_{0,j}\, x_0(n - j) \qquad (6)$$

$$d_{co}(k) = \sum_{i=1}^{n} \sum_{j=0}^{p_i - 1} a_{i,j}\, x_i(k - j) \qquad (7)$$

The task of the equalizer is to estimate the transmitted sequence $x_0(n - d)$ based on the channel observation vector $y(n) = [y(n), y(n-1), \ldots, y(n-m+1)]^T$, where $m$ is the order of the equalizer and $d$ is the decision delay.

The cost function is the MSE value of $e(p, q, n+1)$, so

$$J = \tfrac{1}{2}\, e^2(p, q, n+1) \qquad (8)$$

The error generated at the output of the equalizer should be minimized to give an acceptable solution. The initial condition for the equalizer model is derived from the DFE expressions, so determination of the error at interior points of the channel is essential. We assume the input to be the same as the desired output; hence, the total error is the difference between the output and the input of the network model. The cost function is the mean of the sum of squares of this error (MSE).
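As a concrete illustration of the signal model of (1)-(8), the following Python sketch generates a binary sequence, passes it through a desired channel (taps taken from the test channel of Section 4) and a hypothetical co-channel, adds Gaussian noise per (5), and evaluates the cost of (8). The co-channel taps, noise level, equalizer order and delay are illustrative assumptions, not the paper's settings.

import numpy as np

rng = np.random.default_rng(0)

# Assumed settings: desired-channel taps a_{0,j} (from Section 4's test
# channel), one hypothetical co-channel a_{1,j}, noise std, order m, delay d.
a0 = np.array([0.2887, 0.9129, 0.2887])
a1 = np.array([0.10, 0.05])
sigma_n, N, m, d = 0.1, 1000, 4, 1

x0 = rng.choice([-1.0, 1.0], size=N)   # desired binary sequence, E[x] = 0
x1 = rng.choice([-1.0, 1.0], size=N)   # interfering binary sequence

# Channel output per (5)-(7): y(n) = d(n) + d_co(n) + n(n)
d_sig = np.convolve(x0, a0)[:N]
d_co = np.convolve(x1, a1)[:N]
y = d_sig + d_co + sigma_n * rng.standard_normal(N)

def observation(n):
    # Observation vector y(n) = [y(n), y(n-1), ..., y(n-m+1)]^T
    return np.array([y[n - i] if n - i >= 0 else 0.0 for i in range(m)])

def cost(x0_hat, n):
    # J = (1/2) e^2 per (8), with e the error against x0(n-d)
    return 0.5 * (x0[n - d] - x0_hat) ** 2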
3. PROPOSED EQUALIZER

The proposed equalizer, shown in Figure 1, consists of two basic components: a classifier and an optimizer. The classifier receives the distorted output from the channel and forms two separate and independent patterns, one for $\{+1\}$ and one for $\{-1\}$. The optimizer minimizes the cost function of (8) and thereby the error. The details of the classifier and the working algorithms of the optimizer are discussed in the following two subsections.
Figure 1: Proposed equalizer structure


3.1 Hybrid ANN classifier
The first part of the equalizer, the classifier, has the same structure as that used in [17]. The authors in [17] used the structure for equalization, whereas we use it for classification of the distorted signal. The authors in [17] embedded a DFE algorithm into the neural networks; in this paper we embed an optimization algorithm instead. This optimization algorithm is, however, shown separately in the equalizer structure and is discussed separately in the following subsection. The structure is a hybrid of two ANNs (ANN-I and ANN-II). Each of the two ANNs has four inputs and one output node. Three inputs, 1, $p$ and $n$, are common to both networks. The fourth input $q_i$ to ANN-I comes from ANN-II, and similarly the fourth input to ANN-II comes from ANN-I. The output node of ANN-I represents $\{+1\}$ and that of ANN-II represents $\{-1\}$. The discrete values of the space and time components of the input signal are represented as $p$ and $n$ respectively. These discrete inputs form a $2 \times M$ matrix, with $M = \max(p, n)$; simple do (for) loops generate these matrix elements on line. This three-layered feedforward network uses a mean-variance type connection [26] for the input-to-hidden layer, similar to an RBF neural network. The input to each neuron in a layer is the outputs of the neurons of the previous layer multiplied by a set (two sets for the hidden layer) of weighting factors. A tanh and a linear activation function give, respectively, the output of a hidden neuron and of the output neuron. The ANN training uses the back-propagation algorithm. The adaptation signal of the network is the discrepancy between the network output and the desired output. The number of neurons in the hidden layer should be such that overtraining is avoided and good accuracy is obtained on test data; the minimum number of hidden neurons is [(number of input layer neurons) × (number of output layer neurons)] [26]. This classifier structure reduces the decision delay [17].
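As a schematic sketch of this cross-coupling (not the trained networks themselves), the fragment below shows one way each network's fourth input can be taken from the other network's previous output. The iteration count and the placeholder callables ann1 and ann2 are assumptions for illustration only.

import numpy as np

def classify(p, n, ann1, ann2, iters=3):
    # ann1/ann2 stand in for ANN-I and ANN-II (three-layer networks).
    q1 = q2 = 0.0            # fourth inputs, exchanged between the ANNs
    for _ in range(iters):
        out1 = ann1(np.array([1.0, p, n, q2]))   # ANN-I: pattern for {+1}
        out2 = ann2(np.array([1.0, p, n, q1]))   # ANN-II: pattern for {-1}
        q1, q2 = out1, out2
    return +1 if out1 >= out2 else -1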
The weights are updated by minimizing the cost function with respect to each weight; through this formulation, the optimization algorithm used in the optimizer is embedded into the neural network. The mean and variance connection strengths between the $h$th input neuron and the $k$th hidden neuron are $u_{hk}$ and $v_{hk}$ respectively. The connection strength between the $k$th hidden neuron and the output neuron is $w_k$. Similarly, $a_1$, $a_2$ and $a_3$ correspond respectively to 1, $p$ and $n$.








The weight updating is done according to the normal back-propagation technique as:

$$w_{k,new} = w_{k,old} - \eta\, \frac{\partial J}{\partial w_{k,old}}, \qquad u_{hk,new} = u_{hk,old} - \alpha\, \frac{\partial J}{\partial u_{hk,old}}, \qquad v_{hk,new} = v_{hk,old} - \beta\, \frac{\partial J}{\partial v_{hk,old}} \qquad (9)$$

Here $\eta$, $\alpha$ and $\beta$ are the learning-rate terms that speed up the training process.
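To make the update of (9) concrete, the sketch below applies it to an assumed mean-variance (RBF-like) input-to-hidden connection with a tanh hidden layer and a linear output, as described in Section 3.1. The exact forward pass of [17] may differ in detail, and the gradients are estimated numerically here for brevity.

import numpy as np

def forward(x, u, v, w):
    # Mean-variance connection: hidden neuron k compares the input vector x
    # against its mean u[:, k] and variance v[:, k] connection strengths.
    h = np.tanh(-np.sum((x[:, None] - u) ** 2 / v ** 2, axis=0))
    return w @ h   # linear output neuron

def update(x, target, u, v, w, eta=0.05, alpha=0.05, beta=0.05, eps=1e-6):
    # One step of (9): w, u, v move against the gradient of J = 0.5 e^2.
    def J(u_, v_, w_):
        return 0.5 * (target - forward(x, u_, v_, w_)) ** 2
    def num_grad(theta, f):
        # Forward-difference gradient estimate, element by element.
        g = np.zeros_like(theta)
        it = np.nditer(theta, flags=["multi_index"])
        for _ in it:
            t = theta.copy()
            t[it.multi_index] += eps
            g[it.multi_index] = (f(t) - f(theta)) / eps
        return g
    w_new = w - eta * num_grad(w, lambda t: J(u, v, t))
    u_new = u - alpha * num_grad(u, lambda t: J(t, v, w))
    v_new = v - beta * num_grad(v, lambda t: J(u, t, w))
    return u_new, v_new, w_new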
3.2 The Optimizer
The second part of the proposed equalizer is an optimizer. In practice, however, a single optimization algorithm, discussed in this section, is embedded into the neural network used as the classifier. We have taken two different optimization algorithms, first Bacteria Foraging Optimization and then Ant Colony Optimization. For clarity, the algorithms are discussed in the following sections with their own nomenclatures. This paper has not tested the structure with any other optimization algorithm; this can be seen as one area for future work.
3.2.1 Bacteria Foraging Optimization
Natural selection tends to eliminate animals with poor foraging strategies and to favor the propagation of genes of those animals that have successful foraging strategies, since they are more likely to enjoy reproductive success. After many generations, poor foraging strategies are either eliminated or shaped into good ones. This foraging activity led researchers to use it as an optimization process. The E. coli bacteria present in our intestines also follow a foraging strategy. The control system of these bacteria that dictates how foraging should proceed can be subdivided into four sections: chemotaxis, swarming, reproduction, and elimination-dispersal.
Algorithm

For initialization, we must choose $p$, $S$, $N_c$, $N_s$, $N_{re}$, $N_{ed}$, $P_{ed}$, and the $C(i)$, $i = 1, 2, \ldots, S$. In case of swarming, we also have to pick the parameters of the cell-to-cell attractant function; here we use the parameters given above. Initial values for the $\theta^i$, $i = 1, 2, \ldots, S$, must also be chosen. Choosing these to lie in areas where an optimum value is likely to exist is a good choice; alternatively, we may simply distribute them randomly across the domain of the optimization problem. The algorithm that models bacterial population chemotaxis, swarming, reproduction, and elimination-dispersal is given here (initially, $j = k = l = 0$). For the algorithm, note that updates to the $\theta^i$ automatically result in updates to $P$. Clearly, we could have added a more sophisticated termination test than simply specifying a maximum number of iterations.
1) Elimination-dispersal loop: $l = l + 1$
2) Reproduction loop: $k = k + 1$
3) Chemotaxis loop: $j = j + 1$
   a) For $i = 1, 2, \ldots, S$, take a chemotactic step for bacterium $i$ as follows.
   b) Compute $J(i, j, k, l)$ (i.e., add the cell-to-cell attractant effect to the nutrient concentration).
   c) Let $J_{last} = J(i, j, k, l)$ to save this value, since we may find a better cost via a run.
   d) Tumble: generate a random vector $\Delta(i) \in \mathbb{R}^p$ with each element $\Delta_m(i)$, $m = 1, 2, \ldots, p$, a random number on $[-1, 1]$.
   e) Move:
   $$\theta^i(j+1, k, l) = \theta^i(j, k, l) + C(i)\, \frac{\Delta(i)}{\sqrt{\Delta^T(i)\, \Delta(i)}}$$
   This results in a step of size $C(i)$ in the direction of the tumble for bacterium $i$.
   f) Compute $J(i, j+1, k, l)$ and then let
   $$J(i, j+1, k, l) = J(i, j+1, k, l) + J_{cc}\big(\theta^i(j+1, k, l),\, P(j+1, k, l)\big)$$
   g) Swim (note that we use an approximation, since we decide the swimming behavior of each cell as if the bacteria numbered $\{1, 2, \ldots, i\}$ have moved and $\{i+1, \ldots, S\}$ have not; this is much simpler to simulate than simultaneous decisions about swimming and tumbling by all bacteria at the same time):
      I. Let $m = 0$ (counter for swim length).
      II. While $m < N_s$ (if we have not climbed down too long):
         * Let $m = m + 1$.
         * If $J(i, j+1, k, l) < J_{last}$ (if doing better), let $J_{last} = J(i, j+1, k, l)$, let
         $$\theta^i(j+1, k, l) = \theta^i(j+1, k, l) + C(i)\, \frac{\Delta(i)}{\sqrt{\Delta^T(i)\, \Delta(i)}}$$
         and use this $\theta^i(j+1, k, l)$ to compute the new $J(i, j+1, k, l)$ as in step f).
         * Else, let $m = N_s$. This is the end of the while statement.
   h) Go to the next bacterium $(i + 1)$ if $i \ne S$ (i.e., go to b) to process the next bacterium).

4) If $j < N_c$, go to step 3. In this case, continue chemotaxis, since the life of the bacteria is not over.
5) Reproduction:
   a) For the given $k$ and $l$, and for each $i = 1, 2, \ldots, S$, let
   $$J^i_{health} = \sum_{j=1}^{N_c + 1} J(i, j, k, l)$$
   be the health of bacterium $i$ (a measure of how many nutrients it got over its lifetime and how successful it was at avoiding noxious substances). Sort the bacteria and the chemotactic parameters $C(i)$ in order of ascending cost $J_{health}$ (higher cost means lower health).
   b) The $S_r$ bacteria with the highest $J_{health}$ values die, and the other $S_r$ bacteria with the best values split (the copies that are made are placed at the same location as their parent).
6) If $k < N_{re}$, go to step 2. In this case, we have not reached the number of specified reproduction steps, so we start the next generation in the chemotactic loop.
7) Elimination-dispersal: for $i = 1, 2, \ldots, S$, with probability $P_{ed}$, eliminate and disperse each bacterium (this keeps the number of bacteria in the population constant). To do this, if you eliminate a bacterium, simply disperse one to a random location in the optimization domain.
8) If $l < N_{ed}$, then go to step 1; otherwise end.
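A compact Python sketch of the loops above is given below for a generic cost $J$. The swarming term $J_{cc}$ is omitted for brevity, reproduction keeps the best half of the population (i.e., $S_r = S/2$), and all parameter values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

def bfo(J, p=2, S=20, Nc=30, Ns=4, Nre=4, Ned=2, Ped=0.25, C=0.05):
    theta = rng.uniform(-1.0, 1.0, size=(S, p))       # initial positions
    for _ in range(Ned):                              # elimination-dispersal loop
        for _ in range(Nre):                          # reproduction loop
            health = np.zeros(S)
            for _ in range(Nc):                       # chemotaxis loop
                for i in range(S):
                    J_last = J(theta[i])
                    delta = rng.uniform(-1.0, 1.0, p) # tumble direction, step d)
                    step = C * delta / np.linalg.norm(delta)
                    theta[i] += step                  # move, step e)
                    m = 0
                    while m < Ns and J(theta[i]) < J_last:  # swim while improving
                        J_last = J(theta[i])
                        theta[i] += step
                        m += 1
                    health[i] += J(theta[i])
            best = np.argsort(health)[:S // 2]        # ascending cost = healthiest
            theta = np.concatenate([theta[best], theta[best]])  # best half splits
        scatter = rng.random(S) < Ped                 # elimination-dispersal
        theta[scatter] = rng.uniform(-1.0, 1.0, (scatter.sum(), p))
    return theta[np.argmin([J(t) for t in theta])]

# Hypothetical usage: minimize a simple quadratic cost
best = bfo(lambda t: float(np.sum(t ** 2)))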
3.2.2 Ant Colony Optimization
Ant colony optimization (ACO) is a constructive, population-based search technique that solves optimization problems using the principle of pheromone information. It is an evolutionary approach in which several generations of artificial agents search cooperatively for good solutions. Agents are initially generated at random nodes, and stochastically move from a start node to feasible neighbor nodes. During the process of finding feasible solutions, agents collect and store information in pheromone trails. Agents can release pheromone online while building solutions. In addition, pheromone is evaporated during the search process to avoid local convergence and to explore more of the search space. Thereafter, additional pheromone is deposited offline to update the pheromone trails so as to bias the search process in favor of the currently optimal path. The pseudo code of ant colony optimization is stated as [20]:
Procedure: Ant colony optimization (ACO)
Begin
  While (ACO has not been stopped) do
    Agents_generation_and_activity();
    Pheromone_evaporation();
    Daemon_actions();
  End;
End.
In ACO, agents find solutions starting from a start node and moving to feasible neighbor nodes in the process of Agents_generation_and_activity. During this process, information collected by agents is stored in the so-called pheromone trails. Agents can release pheromone while building the solution (online step-by-step) or after the solution has been built (online delayed). An agent-decision rule, made up of the pheromone and heuristic information, governs the agents' search toward neighbor nodes stochastically. The $k$th ant at time $t$ positioned on node $r$ moves to the next node $s$ with the rule governed by

$$s = \begin{cases} \arg\max_{u \in allowed_k(t)} \left\{ [\tau_{ru}(t)]^{\alpha}\, [\eta_{ru}]^{\beta} \right\} & \text{when } q \le q_0 \\ S & \text{otherwise} \end{cases} \qquad (15)$$
where $\tau_{ru}(t)$ is the pheromone trail at time $t$, $\eta_{ru}$ is the problem-specific heuristic information, $\alpha$ is a parameter representing the importance of pheromone information, $\beta$ is a parameter representing the importance of heuristic information, $q$ is a random number uniformly distributed in $[0, 1]$, $q_0$ is a pre-specified parameter ($0 \le q_0 \le 1$), $allowed_k(t)$ is the set of feasible nodes currently not assigned by ant $k$ at time $t$, and $S$ is an index of node selected from $allowed_k(t)$ according to the probability distribution given by

$$P^k_{rs}(t) = \begin{cases} \dfrac{[\tau_{rs}(t)]^{\alpha}\, [\eta_{rs}]^{\beta}}{\sum_{u \in allowed_k(t)} [\tau_{ru}(t)]^{\alpha}\, [\eta_{ru}]^{\beta}} & \text{if } s \in allowed_k(t) \\ 0 & \text{otherwise} \end{cases} \qquad (16)$$
Pheromone_evaporation is the process of decreasing the intensities of the pheromone trails over time. This process is used to avoid local convergence and to explore more of the search space. Daemon actions are optional for ant colony optimization and are often used to collect useful global information by depositing additional pheromone. In the original algorithm of [20], there is a scheduling process for the above three processes; it provides freedom in deciding how these three processes should interact in ant colony optimization and other approaches.
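The transition rules (15) and (16) can be sketched in Python as below for a single ant positioned at node $r$. The pheromone and heuristic matrices and all parameter values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)

def next_node(r, allowed, tau, eta, alpha=1.0, beta=2.0, q0=0.9):
    # Scores [tau_ru]^alpha [eta_ru]^beta over the feasible nodes allowed_k(t)
    scores = (tau[r, allowed] ** alpha) * (eta[r, allowed] ** beta)
    if rng.random() <= q0:                 # q <= q0: exploit per (15)
        return allowed[np.argmax(scores)]
    probs = scores / scores.sum()          # otherwise: sample S per (16)
    return rng.choice(allowed, p=probs)

# Hypothetical 5-node instance
n_nodes = 5
tau = np.ones((n_nodes, n_nodes))                    # initial pheromone tau_ru
eta = 1.0 / (1.0 + rng.random((n_nodes, n_nodes)))   # heuristic info eta_ru
s = next_node(0, np.array([1, 2, 3, 4]), tau, eta)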
4. SIMULATION RESULTS
To test the effectiveness of the proposed equalizer, a real symmetric channel was considered, with impulse response:

$$H(z) = 0.2887 + 0.9129\, z^{-1} + 0.2887\, z^{-2} \qquad (17)$$

The transmitted signal constellation was set to $\{\pm 1\}$, keeping the transmitted power unity. Co-channel interference was treated as noise. For the simulation, the training data consisted of 8 random values of $p$ and 25 random values of $n$ (including $n = 0$ and $n = 256$).
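As a brief sketch of this test setup, the fragment below drives the channel of (17) with unit-power binary symbols, adds Gaussian noise, and counts the symbol error rate against the delayed input. The noise level, the delay, and the trivial sign-slicer (standing in for the equalizer under test) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)

h = np.array([0.2887, 0.9129, 0.2887])      # channel taps of (17)
x = rng.choice([-1.0, 1.0], size=10000)     # unit-power {+1, -1} symbols
y = np.convolve(x, h)[:x.size] + 0.1 * rng.standard_normal(x.size)

d = 1                                       # assumed decision delay
x_hat = np.sign(y)                          # placeholder detector, not the equalizer
ser = np.mean(x_hat[d:] != x[:x.size - d])  # symbol error rate vs x(n - d)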

For the simulations, we considered three cases. In the first case, the detected signal was fed back to the classifier for weight updating of the neurons, similar to [1]. In the second case, the detected signal was optimized with BFO and sent back to the classifier as the feedback signal for updating the neurons. In the third case, the detected signal was optimized with ACO and sent back to the classifier as the feedback signal for updating the neurons. Figures 2 and 3 respectively show the Mean Square Error (MSE) and Symbol Error Rate (SER) curves for these three cases.

Figure 2: MSE for the NDFE [1], BFO-trained and ACO-trained equalizers

Figure 3: SER for the NDFE [1], BFO-trained and ACO-trained equalizers
Both figures show that the equalizer with an optimizer outperforms the equalizer of [1] without an optimizer. It is also interesting to compare the two optimization strategies: here, ACO performs better than BFO.
5. CONCLUSION
This paper proposed a novel equalizer in which a hybrid structure of two multi-layer neural networks acts as a classifier for the detected signal pattern. The neurons were embedded with optimization algorithms. Simulation results demonstrate the superior performance of the proposed equalizer, with ACO outperforming BFO.

REFERENCES
1. Elif Derya Übeyli, Lyapunov exponents/probabilistic neural
networks for analysis of EEG signals, Expert Systems with Ap-
plications, Volume 37, Issue 2, March 2010, Pages 985-992.
2. Elif Derya Übeyli, Recurrent neural networks employing Lya-
punov Exponents for analysis of ECG signals, Expert Systems
with Applications, Volume 37, Issue 2, March 2010, Pages 1192-
1199.
3. Daw-Tung Lin, Judith E. Dayhoff, Pangs A. Ligomenides, Tra-
jectory production with adaptive time-delay neural network,
Neural Networks, Volume 8, Issue 3, 1995, Pages 447-461.
4. Ling Gao, Shouxin Ren, Combining orthogonal signal correc-
tion and wavelet packet transform with radial basis function
neural networks for multicomponent determination, Chemo-
metrics and Intelligent Laboratory Systems, Volume 100, Issue
1, 15 January 2010, Pages 57-65
5. K.-L. Du, Clustering: A neural network approach, Neural Net-
works, Volume 23, Issue 1, January 2010, Pages 89-107
6. Haiquan Zhao, Xiangping Zeng, Jiashu Zhang, Adaptive re-
duced feedback FLNN filter for active control of noise
processes, Signal Processing, Volume 90, Issue 3, March 2010,
Pages 834-847
7. Chris Potter, Ganesh K. Venayagamoorthy, Kurt Kosbar, RNN
based MIMO channel prediction, Signal Processing, Volume 90,
Issue 2, February 2010, Pages 440-450
8. Jagdish C. Patra, Pramod K. Meher, Goutam Chakraborty, Non-
linear channel equalization for wireless communication sys-
tems using Legendre Neural networks, Signal Processing, Vo-
lume 89, Issue 11, November 2009, Pages 2251-2262
9. Sasmita Kumari Padhy, Siba Prasada Panigrahi, Prasanta Ku-
mar Patra, Santanu Kumar Nayak, Non-linear channel equali-
zation using adaptive MPNN, Applied Soft Computing, Vo-
lume 9, Issue 3, June 2009, Pages 1016-1022
10. Ling Zhang, Xianda Zhang, MIMO Channel Estimation and
equalization using three-layer neural network with feedback,
Tsinghua Science & Technology, Volume 12, Issue 6, December
2007, Pages 658-662
11. Haiquan Zhao, Jiashu Zhang, Functional link neural network
cascaded with Chebyshev orthogonal polynomial for nonlinear
channel equalization, Signal Processing, Volume 88, Issue 8,
August 2008, Pages 1946-1957
12. Haiquan Zhao, Jiashu Zhang, A novel nonlinear adaptive filter
using a pipelined second-order Volterra recurrent neural net-
work, Neural Networks, Volume 22, Issue 10, December 2009,
Pages 1471-1483
13. Wan-De Weng, Che-Shih Yang, Rui-Chang Lin, A channel
equalizer using reduced decision feedback Chebyshev func-
tional link artificial neural networks, Information Sciences, Vo-
lume 177, Issue 13, 1 July 2007, Pages 2642-2654
14. Wai Kit Wong, Heng Siong Lim, A robust and effective fuzzy
adaptive equalizer for powerline communication channels,
Neurocomputing, Volume 71, Issues 1-3, December 2007, Pages
311-322
15. Jungsik Lee, Ravi Sankar, Theoretical derivation of minimum
mean square error of RBF based equalizer, Signal Processing,
Volume 87, Issue 7, July 2007, Pages 1613-1625

16. Haiquan Zhao, Jiashu Zhang, Nonlinear dynamic system iden-
tification using pipelined functional link artificial recurrent
neural network, Neurocomputing, Volume 72, Issues 13-15,
August 2009, Pages 3046-3054
17. Siba Prasada Panigrahi, Santanu Kumar Nayak, Sasmita Kuma-
ri Padhy, Hybrid ANN reducing training time requirements
and decision delay for equalization in presence of co-channel
interference, Applied Soft Computing, Volume 8, Issue 4, Sep-
tember 2008, Pages 1536-1538
18. María Alejandra Guzmán, Alberto Delgado, Jonas De Carvalho,
A novel multiobjective optimization algorithm based on Bac-
terial chemotaxis, Engineering Applications of Artificial Intelli-
gence, Volume 23, Issue 3, April 2010, Pages 292-301
19. Babita Majhi, G. Panda, Development of efficient identification
scheme for nonlinear dynamic systems using swarm intelli-
gence techniques, Expert Systems with Applications, Volume
37, Issue 1, January 2010, Pages 556-566
20. D.P. Acharya, G. Panda, Y.V.S. Lakshmi, Effects of finite regis-
ter length on fast ICA, bacteria foraging optimization based
ICA and constrained genetic algorithm based ICA algorithm,
Digital Signal Processing, Available online 12 August 2009
21. B.K. Panigrahi, V. Ravikumar Pandi, Congestion management
using adaptive Bacterial foraging Algorithm, Energy Conver-
sion and Management, Volume 50, Issue 5, May 2009, Pages
1202-1209
22. Mehmet Korürek, Ali Nizam, Clustering MIT-BIH arrhythmias
with Ant colony Optimization using time domain and PCA
compressed wavelet coefficients, Digital Signal Processing,
Available online 13 November 2009
23. Mehdi Hosseinzadeh Aghdam, Nasser Ghasem-Aghaee, Mo-
hammad Ehsan Basiri, Text feature selection using ant colony
optimization, Expert Systems with Applications, Volume 36, Is-
sue 3, Part 2, April 2009, Pages 6843-6853
24. Sung-Shun Weng, Yuan-Hung Liu, Mining time series data for
segmentation by using Ant colony Optimization, European
Journal of Operational Research, Volume 173, Issue 3, 16 Sep-
tember 2006, Pages 921-937
25. Jing Tian, Weiyu Yu, Lihong Ma, Antshrink: Ant colony opti-
mization for image shrinkage, Pattern Recognition Letters,
Available online 7 January 2010.
26. W. Chen, N. Minh, and J. Litva, On incorporating finite im-
pulse response neural network with finite difference time do-
main method for simulating electromagnetic problems, Anten-
nas and Propagation Society International Symposium, 1996.
AP-S. Digest, Volume: 3, 1996, pp 1678 -1681.

Priti Ranjan Hathy, MCA, MBA, M.Sc. (Mathematics), is presently working with the Department of Computer Application, Government Polytechnic, Bhubaneswar, Orissa, India.

Dr. Siba Prasada Panigrahi received his Ph.D. (Electronics) and M. Tech. (Electrical) from NIT, Rourkela, and his B. Tech. (Electrical) from CET, Bhubaneswar. He has published more than 22 international journal papers. He is presently working as Professor in Electrical Engineering with KIST, Bhubaneswar, Orissa, India.

Prof. (Dr.) Prashant Kumar Patra received his Bachelor Degree in Electronics Engineering from SVRCET (NIT), Surat, India, his M. Tech. Degree in Computer Engineering from the Indian Institute of Technology, Kharagpur, India, and his Ph.D. Degree in Computer Science from Utkal University, Bhubaneswar, India, in the years 1986, 1993 and 2003 respectively. He is presently working as Professor in Computer Science & Engineering at the College of Engineering & Technology, Bhubaneswar, India, a constituent college of Biju Patnaik University of Technology, Orissa, India. He has published many papers in national/international journals and conferences in the areas of soft computing, image processing and pattern recognition, which are the subjects of his research interest.
