
IntelliSec – The 1st International Workshop on Intelligent Security Systems

11-24th November 2009, Bucharest, Romania

INTEGRATED HANDWRITING RECOGNITION SYSTEM USING ARTIFICIAL NEURAL NETWORKS

RAILEANU, Ana-Maria; CARSTOIU, Dorin

Abstract: In this study we set out to prove the high degree of security that can be offered by using ANNs as the base of a biometric system. Neural networks based upon a feed-forward architecture are used in problem solving as universal approximators in concrete tasks such as classification (including nonlinearly separable classes), prediction and compression. The error backpropagation algorithm has been used to train the multi-layered perceptron network. The results showed that errors can be reduced, up to a point, by increasing the number of learning epochs and the number of input characters, and that, of course, there is room for improvement.

Key words: neural networks, biometrics, character recognition

1. INTRODUCTION

A biometric system is essentially a pattern recognition system, which makes a personal identification by determining the authenticity of a specific physiological or behavioral characteristic possessed by the user. Pattern recognition, as a branch of artificial intelligence, aims at identifying similarity relationships between abstract representations of objects or phenomena; to recognize is to classify input data as belonging to certain classes, using classification criteria based on previously built information.

An important issue in designing a practical system is to determine how an individual is identified. Biometrics dates back to the ancient Egyptians, who measured people to identify them. Keeping to the basics, we submit to your attention the idea of identifying someone by his handwriting. Every person who desires to enter a secured perimeter is obliged to write a random text, which is compared with previously taken samples of his handwriting. Depending on the result, a percentage which illustrates the similarity, the person is, or is not, allowed to enter.

Biometric devices have three primary components: an automated mechanism that scans and captures a digital or analog image of a living personal characteristic; a second that handles compression, processing, storage and comparison of the image with the stored data; and a third that interfaces with application systems.

This study deals with the first part of the biometric system, illustrated by the use of artificial neural networks, which are, as their name indicates, computational networks that attempt to simulate the networks of nerve cells (neurons) of the biological central nervous system. The neural network is in fact a novel computer architecture and a novel algorithmization architecture relative to conventional computers. It allows very simple computational operations (additions, multiplications and fundamental logic elements) to solve complex, mathematically ill-defined problems. A conventional algorithm will employ complex sets of equations, and will apply only to a given problem and exactly to it. The ANN will be computationally and algorithmically very simple, and it will have a self-organizing feature that allows it to hold for a wide range of problems.

A multitude of types of neural networks have been proposed over time. Indeed, neural networks have been so intensely studied (for example by IT engineers, electronics engineers, biologists and psychologists) that they have received a variety of names. Scientists refer to them as "Artificial Neural Networks", "Multi-Layered Perceptrons" and "Parallel Distributed Processors". Despite this, a small group of classical networks is mainly used: networks which use the BackPropagation algorithm, Hopfield networks, "competitive" networks, and networks which use spiking neurons.

Knowledge can be classified by its degree of generality. At the basic level are signals, which contain useful data as well as parasitic elements (noise). Data consists of elements which can raise a potential interest. Processed data lead to information, driven by a specific interest; when a piece of information is subjected to a certain specialization, we face knowledge. Knowledge-based systems, depending on their purpose and type, can reason on their own, starting from signals, data and pieces of information; furthermore, in these knowledge-based systems we may be dealing with metaknowledge.

Here are a number of reasons why we should study neural networks:
1. They are a viable alternative to the computational paradigm based upon the use of a formal model and the design of algorithms whose behaviour does not alter during use.
2. They incorporate results deriving from different fields of study, for the purpose of obtaining simple calculus architectures.
3. They model human intelligence, helping us to better understand the way the human brain works.
4. They offer better rejection of errors, being able to perform well even when the input data are flawed.

ANNs have multiple representational forms, but the most common is the mathematical one. For each artificial neuron, the mathematical form consists of a function g(x) of the input vector x, where x = (x_1, x_2, ..., x_n). Each input x_i is weighted according to its weight in w = (w_1, w_2, ..., w_n), and K is the post-processing function that is finally applied. This results in the following equation for a single neuron:

    g(x) = K( Σ_i w_i · x_i )                                 (1)

When interpreting the results we must take into consideration that in handwritten text we face variability due to the loss of synchronism between the muscles of the hand, as well as variation of one's style due to several factors, including but not limited to education and mood.

Reading handwriting is a very difficult task considering the diversity that exists in ordinary penmanship. However, progress is being made. Early devices, using non-reading inks to define specifically sized character boxes, read constrained handwritten entries.
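To make equation (1) concrete, the following minimal Python sketch (our illustration, not code from the original system) evaluates a single artificial neuron; the input values, the weights and the choice of K here are arbitrary examples.

    import math

    def neuron(x, w, K):
        # Equation (1): g(x) = K( sum_i w_i * x_i )
        weighted_sum = sum(wi * xi for wi, xi in zip(w, x))
        return K(weighted_sum)

    # Example inputs and weights (arbitrary), with a sigmoid as the
    # post-processing function K.
    x = [0.5, 1.0, -0.25]
    w = [0.8, -0.3, 0.6]
    K = lambda s: 1.0 / (1.0 + math.exp(-s))
    print(neuron(x, w, K))   # prints a value in (0, 1)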

2. RELATED WORK

Obviously, if one is to solve a set of differential equations, one would not use an ANN. But problems of recognition, filtering and control are problems well suited for ANNs. As always, no tool or discipline can be expected to do it all. And then, ANNs are certainly in their infancy: they started in the 1950s, and widespread interest in them dates from the early 1980s. So, all in all, ANNs deserve our serious attention.

One field that has developed from Character Recognition is Optical Character Recognition (OCR). OCR is widely used today in post offices, banks, airports, airline offices and businesses. Address readers sort incoming and outgoing mail, check readers in banks capture images of checks for processing, airline ticket and passport readers are used for everything from accounting for passenger revenues to checking database records, and form readers can read and process up to 5,800 forms per hour. OCR software is also used in scanners and faxes that allow the user to turn graphic images of text into editable documents. Newer applications have even expanded beyond the limitations of mere characters: eye, face and fingerprint scans used in high-security areas employ a newer kind of recognition.

Optical Character Recognition has even advanced into a newer field, Handwriting Recognition, which of course is also based on the simplicity of Character Recognition.

The basic principles of artificial neural networks (ANNs) were first formulated by McCulloch and Pitts in 1943, in terms of five assumptions, as follows:
1. The activity of a neuron (ANN) is all-or-nothing.
2. A certain fixed number of synapses larger than 1 must be excited within a given interval of neural addition for a neuron to be excited.
3. The only significant delay within the neural system is the synaptic delay.
4. The activity of any inhibitory synapse absolutely prevents the excitation of the neuron at that time.
5. The structure of the interconnection network does not change over time.

The Hebbian Learning Law (Hebbian Rule) due to Donald Hebb (1949) is also a widely applied principle. It states that "when an axon of cell A is near enough to excite cell B and when it repeatedly and persistently takes part in firing it, then some growth process or metabolic change takes place in one or both of these cells such that the efficiency of cell A is increased" [Hebb, 1949] (i.e., the weight of the contribution of the output of cell A to the firing of cell B is increased).

Historically, the earliest ANNs are:
1. The Perceptron, proposed by the psychologist Frank Rosenblatt (Psychological Review, 1958).
2. The Artron (Statistical Switch-based ANN), due to R. Lee (1950s).
3. The Adaline (Adaptive Linear Neuron), due to B. Widrow (1960). This artificial neuron is also known as the ALC (adaptive linear combiner), the ALC being its principal component. It is a single neuron, not a network.
4. The Madaline (Many Adaline), also due to Widrow (1988). This is an ANN (network) formulation based on the Adaline above.
5. The Back-Propagation network, a multi-layer Perceptron-based ANN giving an elegant solution to hidden-layer learning [Rumelhart et al., 1986, and others].
6. The Hopfield Network, due to John Hopfield (1982).
7. The Counter-Propagation Network [Hecht-Nielsen, 1987], where Kohonen's Self-Organizing Map (SOM) is utilized to facilitate unsupervised learning (absence of a "teacher").

The other networks, such as ART, the Cognitron, LAMSTAR, etc., incorporate certain elements of these fundamental networks, or use them as building blocks, usually combined with other decision elements, statistical or deterministic, and with higher-level controllers.

The Adaptive Resonance Theory (ART) was originated by Carpenter and Grossberg (1987a) for the purpose of developing artificial neural networks whose manner of performance, especially (but not only) in pattern recognition and classification tasks, is closer to that of the biological neural network (NN).

Since the purpose of the ART neural network is to closely approximate the biological NN, it needs no "teacher" but functions as an unsupervised, self-organizing network. Its ART-I version deals with binary inputs. The extension of ART-I known as ART-II [Carpenter and Grossberg, 1987b] deals both with analog patterns and with patterns represented by different levels of grey.

The cognitron, as its name implies, is a network designed mainly with recognition of patterns in mind. To do this, the cognitron employs both inhibitory and excitatory neurons in its various layers. It was first devised by Fukushima (1975), and it is an unsupervised network, resembling the biological neural network in that respect.

The LAMSTAR (LArge Memory STorage And Retrieval) network is not a specific network but a system of networks for storage, recognition, comparison and decision that in combination allow such storage and retrieval to be accomplished.

3. POSSIBILITY OF HARDWARE IMPLEMENTATION

Most physical implementations of neural systems are based on the mathematical model due to McCulloch and Pitts (1943). The main issues raised by the synthesis of artificial systems which simulate actual behavior are the number and nature of real biological features, starting with the connectivity matrix of the elements, whose size increases with the square of their number, and the processing time, which must be independent of the size of the network.

Complex neural networks produce temporal variations of network parameters and can perform more sophisticated mathematical operations than the mere summation of signals. Consequently, the processing elements are organized in several layers: input, output and one or more hidden layers. A physical ANN implementation should incorporate as many aspects of the physiological and operational characteristics of the mathematical models as possible.

We can highlight three main physical models of neurons and, implicitly, of artificial neural networks, considering the advantages and limitations of technology:
a) analog modeling with gain-controlled amplifiers and resistive synapses;
b) ANN modeling with semi-parallel shift registers;
c) electro-optical modeling of ANNs.

The main trend in implementing ANNs in the form of a semiconductor integrated circuit (IC) is to increase the density of circuit components per unit area. The achievable level of integration is limited by the connectivity matrix, whose size increases with the square of the number of dynamic processing units (neurons).

In essence, the connectivity matrix is physically made through a network of perforations arranged in an insulating material, into which conductive material (usually polycrystalline silicon) is injected. On the two sides of the insulating material are secured two sets of metal interconnections, which correspond to the inputs (dendrites) and outputs (axons) of the amplifiers (neurons). Positive and negative synapses are physically achieved by doubling each neuron.
The value of each resistor is determined by the cross-section of the hole and corresponds to the inverse of the synaptic efficacy. The circuit integration can be achieved both in standard bipolar technology, with a modest level of integration, and in CMOS technology.

In order to avoid the difficulty brought about by the imposed compromise between the high level of connectivity and the inaccessibility of the synaptic contacts, we could use a neural network implemented with CCD (Charge Coupled Device) microelectronic circuits, because CCD shift registers can store discrete groups of electrons in well-defined positions, which can then be quickly moved by applying an external potential while keeping their local value.

Fig. 1. A neural network architecture implemented with CCD shift registers

The circuit shown in the previous figure mostly avoids the limitations imposed by the high degree of connectivity, being characterized by a synaptic matrix that is easily accessible and modifiable. On the other hand, the proposed arrangement partially sacrifices the parallelism and the asynchronous signal processing which gave rise to the original idea. It is true that the relatively high speed of CCD circuits partially compensates for the non-parallel data processing. Currently, shift registers containing up to 2000 CCD circuits operating at frequencies of 10 MHz can be implemented.

4. PROPOSED SOFTWARE ARCHITECTURE

In order to obtain positive results, a feedforward-type network was chosen for implementing the integrated system, consisting of 150 neurons in the input layer, 250 neurons in the hidden layer and 16 neurons in the output layer, the latter representing the characters of the alphabet in binary code, each character uniquely represented on 16 bits.

Inserting hidden layers enhances the representational capacity of feed-forward networks, but raises difficulties in terms of learning, as "delta"-type algorithms cannot be directly applied. This was one of the main reasons for the stagnation of the development of feedforward networks with supervised learning between 1969 (when Papert and Minsky highlighted the limits of single-level networks) and 1985 (when the BackPropagation algorithm, developed in parallel by several researchers, became known).

In determining the number of neurons in each layer, the following were taken into account:
- Both the input level and the output level should have as many units as needed to represent the input data and the output, respectively.
- The number of hidden units should be just enough to solve the problem, but not higher than necessary. The number of hidden units is based either on theoretical results concerning the representational capacity of the architecture (as is the case for the currently chosen network) or on heuristic rules (e.g., for a network with N input units, M output units and one hidden layer we can choose for the latter a size of M·N).

If the number of hidden-layer neurons is too small, the network fails to form an adequate internal representation of the training data and the classification error will be high. With too large a number, the network learns the training data very well but turns out to be incapable of good generalization, yielding high error levels on the test data.

The input vector therefore consists of 150 components, representing the elements of the 10x15-pixel binary matrix representation of a character. The matrix size was chosen considering the average size of the characters represented, with a minimum of noise introduced.

The algorithm used for network learning is the well-known Backpropagation, proposed in 1986 by Rumelhart, Hinton and Williams for setting the weights, and hence for training multi-layer perceptrons.

Here is how learning proceeds: the network's weights are initialized with random numbers, usually between -1 and 1. The next step consists of applying the set of input data and calculating the output (this step is called the "forward step"). This calculation yields a result completely different from the target, because all the weights have random values. At this point the error of each neuron is calculated, which usually follows the formula: target minus effective output. This error is then used to modify the weights so that the error becomes increasingly smaller. The process is repeated until the error is minimal.

The learning rate (η) speeds up or slows down the learning process, as appropriate. We have decided upon a learning rate of 150 as being appropriate for this system, but we allow the user to modify it in the range of 1 to 200 and configure it according to his own needs.

The detection of the symbols is a very important part of the program. It is based on the premise that we are dealing only with black-and-white bitmap images of any resolution, where a white pixel, ARGB (255, 255, 255, 255), means space and a black pixel, ARGB (255, 0, 0, 0), belongs to a character. It is also considered that the image contains only characters; anything else (table lines, edges, etc.) is considered noise.

Fig. 2. Neural Network Chosen Architecture

We should also mention that each training set consists of an image and a text file containing the desired output.
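As a sketch of the learning procedure described above (initialization in [-1, 1], forward step, "target minus output" error, weight adjustment), assuming the 150-250-16 architecture and a bipolar sigmoid activation: this is our illustrative Python reconstruction, not the authors' published code, and eta stands for the learning rate η.

    import numpy as np

    rng = np.random.default_rng(0)
    N_IN, N_HID, N_OUT = 150, 250, 16            # 10x15 pixels in, 16-bit code out
    W1 = rng.uniform(-1.0, 1.0, (N_HID, N_IN))   # weights start as random values in [-1, 1]
    W2 = rng.uniform(-1.0, 1.0, (N_OUT, N_HID))

    def f(v):
        # bipolar sigmoid, range (-1, 1)
        return 2.0 / (1.0 + np.exp(-v)) - 1.0

    def df(y):
        # derivative of the bipolar sigmoid, written in terms of its output y
        return 0.5 * (1.0 - y * y)

    def train_step(x, target, eta):
        global W1, W2
        h = f(W1 @ x)                            # "forward step"
        y = f(W2 @ h)
        delta_out = (target - y) * df(y)         # error: target minus effective output
        delta_hid = (W2.T @ delta_out) * df(h)   # error propagated back to the hidden layer
        W2 += eta * np.outer(delta_out, h)       # adjust weights so the error shrinks
        W1 += eta * np.outer(delta_hid, x)
        return float(np.sum((target - y) ** 2))

Repeating train_step over all training pairs for the chosen number of epochs reproduces the loop described above.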
Concerning the user's ability to customize the application, we have granted the possibility to choose an activation function. To fully understand the mechanism we should acknowledge that in biologically inspired neural networks, the activation function is usually an abstraction representing the rate of action-potential firing in the cell. In its simplest form, this function is binary: either the neuron is firing or it is not. The function looks like φ(v_i) = U(v_i), where U is the Heaviside step function. In this case a large number of neurons must be used in computation beyond linear separation of categories.

Activation functions for the hidden units are needed to introduce nonlinearity into the network. Without nonlinearity, hidden units would not make nets more powerful than plain perceptrons (which have no hidden units, just input and output units), because a linear function of linear functions is again a linear function. It is the nonlinearity (i.e., the capability to represent nonlinear functions) that makes multilayer networks so powerful. Almost any nonlinear function does the job, except for polynomials.

For hidden units, sigmoid activation functions are usually preferable to threshold activation functions. Here they are (collected in a short code sketch after the list):

- the standard sigmoid function, which ranges from 0 to 1:

      y = 1 / (1 + e^(-D·x))                                  (2)

- the hyperbolic tangent, which ranges from -1 to 1:

      y = 2 / (1 + e^(-2·x)) - 1                              (3)

- the Gauss function:

      y = e^(-x²)                                             (4)

- another commonly used sigmoid:

      y = x / (1 + |x|)                                       (5)
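The four functions above translate directly into Python (our sketch; the slope parameter D of equation (2) is exposed as an argument, and in equation (5) we assume the denominator is 1 + |x|, the usual form of this sigmoid):

    import numpy as np

    def unipolar_sigmoid(x, D=1.0):
        # eq. (2): ranges from 0 to 1
        return 1.0 / (1.0 + np.exp(-D * x))

    def bipolar_sigmoid(x):
        # eq. (3): the hyperbolic tangent, ranges from -1 to 1
        return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

    def gauss(x):
        # eq. (4)
        return np.exp(-x ** 2)

    def ratio_sigmoid(x):
        # eq. (5): cheap to evaluate, ranges from -1 to 1
        return x / (1.0 + np.abs(x))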
As for the last function, if the network output is a set of numerical values, then more iterations will be required to reach the target value. But if the problem is a classification one, as in this case, this function is appropriate because it consumes less processing time in the central unit, without the number of iterations being affected.

Networks with threshold units are difficult to train because the error function is piecewise constant, hence the gradient either does not exist or is zero, making it impossible to use backpropagation or more efficient gradient-based training methods.

Considering all of the above, the user has a choice to make between the unipolar sigmoid, bipolar sigmoid, linear, Heaviside and Gauss functions.

Most supervised learning algorithms rely on minimizing an error function using a gradient-type method, therefore the general structure comprises two stages: the initialization of the parameters and the iterative adjustment process. The operations performed in the implemented software can be classified into two phases (a simplified sketch of the first training steps follows the lists):

1. Training phase
1.1. Analysis of the image and character separation;
1.2. Conversion of the symbols to pixel matrices;
1.3. Search for the desired output and conversion to ASCII;
1.4. Matrix linearization and submission to the network input;
1.5. Output calculation;
1.6. Comparison of the obtained output to the desired output and calculation of the error;
1.7. Proper adjustment of the weights until the maximum number of iterations is reached.

2. Testing phase
2.1. Analysis of the image and character separation;
2.2. Conversion of the symbols to pixel matrices;
2.3. Output calculation;
2.4. Display of the recognized character.
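A simplified sketch of steps 1.1, 1.2 and 1.4 (our illustration, built on the black-and-white premise stated in section 4; the authors' exact segmentation routine is not reproduced here), assuming the image arrives as a NumPy array with 1 for black ink and 0 for white background:

    import numpy as np

    def separate_characters(img):
        # Step 1.1: cut the binary image at fully white columns.
        ink_cols = img.any(axis=0)
        chars, start = [], None
        for j, has_ink in enumerate(ink_cols):
            if has_ink and start is None:
                start = j
            elif not has_ink and start is not None:
                chars.append(img[:, start:j])
                start = None
        if start is not None:
            chars.append(img[:, start:])
        return chars

    def to_input_vector(char_img, rows=15, cols=10):
        # Steps 1.2 and 1.4: resample the character to the 10x15 binary
        # matrix and linearize it into the 150-element network input.
        r = (np.arange(rows) * char_img.shape[0] / rows).astype(int)
        c = (np.arange(cols) * char_img.shape[1] / cols).astype(int)
        return char_img[np.ix_(r, c)].astype(float).ravel()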
5. EXPERIMENTAL RESULTS

5.1. Results obtained for variation in the number of training epochs are illustrated in the following tables.

Activation function: bipolar sigmoid. Number of symbols = 90, learning rate = 150. (Misid. = misidentified characters.)

                        50 epochs        200 epochs       500 epochs
    Used Font           Misid.  Error    Misid.  Error    Misid.  Error
    Arial               19      24%      2       3%       4       4.5%
    Tahoma              13      14.5%    5       5.6%     3       3.4%
    Times New Roman     11      12.3%    4       4.5%     1       1.2%

                        800 epochs       1000 epochs      2000 epochs
    Used Font           Misid.  Error    Misid.  Error    Misid.  Error
    Arial               2       3%       1       1.2%     2       3%
    Tahoma              2       3%       2       3%       2       3%
    Times New Roman     2       3%       1       1.2%     1       1.2%

                        3000 epochs
    Used Font           Misid.  Error
    Arial               0       0%
    Tahoma              1       1.2%
    Times New Roman     0       0%

Fig. 3. Variations in the number of epochs for different font styles

5.2. Results obtained for variation in the number of input characters

Activation function: bipolar sigmoid. Number of epochs = 300, learning rate = 150.

                        20 symbols       50 symbols       90 symbols
    Used Font           Misid.  Error    Misid.  Error    Misid.  Error
    Latin Arial         0       0%       6       12%      11      12.22%
    Latin Tahoma        0       0%       3       6%       8       8.89%
    Latin Times Roman   0       0%       2       4%       9       10%

5.3. Results obtained for variation of the learning rate

Activation function: bipolar sigmoid. Number of symbols = 90, number of epochs = 300.

                        LR = 1           LR = 10          LR = 40
    Used Font           Misid.  Error    Misid.  Error    Misid.  Error
    Arial               56      63%      6       6.78%    5       5.6%
    Tahoma              70      78%      8       8.9%     10      11.2%
    Times Roman         48      54%      4       4.5%     3       3.4%

                        LR = 80          LR = 120
    Used Font           Misid.  Error    Misid.  Error
    Arial               2       2.33%    0       0%
    Tahoma              2       2.33%    2       2.33%
    Times Roman         0       0%       0       0%

Fig. 4. Variations of the learning rate for different font styles

6. POSSIBLE IMPROVEMENTS

The motivation for developing new versions of the standard BP algorithm is that it presents a number of drawbacks:
- Slow convergence: it requires too many epochs to reach a value for which the error is low enough.
- Blocking in a local minimum: once the algorithm reaches a local minimum of the error function, it cannot escape from this minimum to achieve the global optimum.
- Stagnation (paralysis): the algorithm stagnates in an area that is not necessarily near a local minimum, because the adjustments of the parameters are very small.
- Overtraining: the network provides a good approximation on the training set, but possesses a low generalization ability.

Starting from the standard BP, BackPropagation variants can be developed that differ in the following respects (an illustrative sketch follows the list):
- How the learning rate is chosen: constant or adaptive;
- The adjustment relations (determined by the minimization algorithm used, which may differ from the simple gradient algorithm: conjugate-gradient algorithms, Newton-type algorithms, random descent algorithms, genetic algorithms, etc.);
- The way the parameters are initialized: random initialization or initialization based upon a search algorithm;
- The ordering of the training set (it only influences the serial version of the algorithm): sequential or random;
- The error function: besides the mean squared error, problem-specific error measures can be used (e.g., in classification problems an entropy-based error can be used);
- The stopping criterion: in addition to the criterion based on the maximum number of epochs and the corresponding training-set error, we can use criteria related to the validation-set error and to the size of the adjustments made in the last epoch.
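As one concrete example of such a variant (our sketch, not taken from the paper): an adaptive learning-rate rule that enlarges η while the training error keeps decreasing and cuts it back when the error grows, combined with a momentum term in the adjustment relation.

    def adaptive_update(w, grad, velocity, eta, prev_err, err,
                        grow=1.05, shrink=0.7, momentum=0.9):
        # Enlarge the learning rate while the error decreases,
        # cut it back when the error increases.
        eta = eta * grow if err < prev_err else eta * shrink
        # Momentum: reuse part of the previous adjustment to smooth the descent.
        velocity = momentum * velocity - eta * grad
        return w + velocity, velocity, eta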
7. CONCLUSIONS

The NN was trained and tested on different training and test patterns.
- In all cases the degree of convergence and the error rate were observed.
- The convergence greatly depended on the hidden layers and on the number of neurons in each hidden layer.
- The number of neurons in each hidden layer should be neither too small nor too high.
- Once properly trained, the NN was very accurate in classifying data in most of the test cases. The amount of error observed makes it well suited for classification problems such as face detection.

There are a few things worth mentioning about this system:
- If the learning rate is subunitary, the network cannot handle the learning process, the number of correctly recognized characters tending to zero.
- Tests have also been performed on the recognition of character sets other than the ones initially learned by the network. Unfortunately, the results have not been very encouraging.
- Following the experimental results, we can see that, in general, increasing the number of epochs has a positive effect on the network's performance, up to the point where the network reaches its optimum. We can also see that if the number of epochs is increased further, the network tends to become unstable, increasing the number of wrongly recognized characters. This is called "overlearning".
- The size of the input set is also very important in terms of network performance: the more symbols the network must learn, the greater the likelihood of errors. In conclusion, for a set of at most 90 symbols in the learning set, the network requires 250 neurons in its hidden layer.

In the current study the best results were provided by the additive model, the bipolar sigmoid activation function, the feed-forward architecture and the supervised learning algorithm, backpropagation.
We were able to see that learning was more effective when the input set had a reduced number of items, and optimum results were obtained when the test image contained a small number of words, preferably words with as many repeated letters as possible.

Also, the characters most likely to give rise to errors were "H", "I", "L" and "g". The digits caused recognition errors only if the network was trained for an insufficient number of epochs (<100).
It has been shown that the optimal number of hidden neurons is generally impossible to predict with accuracy before evaluating the network's performance in several sets of experiments. This conclusion also holds for the total number of training epochs and for the final value of the mean squared error: the optimal values of these parameters can be determined only experimentally, and they are conditioned by the type and structure of the artificial neural network.

Handwriting recognition using neural networks has seen many implementations, but none has managed to achieve performance acceptable enough to be used in commercial applications. Systems lack the reliability and robustness which may be achieved only through extensive research and decades of experiments.

In summary, the excitement around ANNs should not be limited to their resemblance to the human brain; even their degree of self-organizing capability can be built into conventional digital computers using complicated artificial intelligence algorithms. The main contribution of ANNs is that, in their gross imitation of the biological neural network, they allow very low-level programming to solve complex problems, especially those that are non-analytical and/or nonlinear and/or nonstationary and/or stochastic, and to do so in a self-organizing manner that applies to a wide range of problems with no re-programming or other interference in the program itself. The insensitivity to partial hardware failure is another great attraction, but only when dedicated ANN hardware is used.

Given enough entrepreneurial designers and sufficient research and development dollars, OCR can become a powerful tool for future data entry applications.

8. REFERENCES

Danciu, D. & Răsvan, V. (2008). Neural Networks: Equilibria, Synchronization, Delays, Information Science Reference (Idea Group Inc.), ISBN 978-1-59904-849-9, U.S.A.
Danciu, D. (2008). Neural Networks Dynamics as Systems with Several Equilibria, Information Science Reference (Idea Group Inc.), ISBN 978-1-59904-996-0, U.S.A.
Fausett, L. (1994). Fundamentals of Neural Networks, Prentice-Hall, ISBN 0-13-042250-9, U.S.A.
Graupe, D. (2007). Principles of Artificial Neural Networks, World Scientific Publishing, ISBN 978-981-270-624-9, Singapore
Gurney, K. (1997). An Introduction to Neural Networks, UCL Press, ISBN 1-85728-503-4, U.S.A.
Stanasila, O. & Neagoe, V. (2000). Teoria Recunoasterii Formelor, Editura Academiei Romane, ISBN 973-27-0341-5, Romania
*** (2009) http://franck.fleurey.free.fr/FaceDetection/index.htm, Accessed on: 2009-10-20
*** (2009) http://page.mi.fu-berlin.de/rojas/neural/chapter/K3.pdf, Accessed on: 2009-10-14
*** (2009) http://www.ine-web.org/, Accessed on: 2009-10-14
*** (2009) http://free.beyonddream.com/NeuralNet/fundamental.htm, Accessed on: 2009-10-11
