
Signature Recognition Using Neural Networks

25 May 2007
(translated from Portuguese: 18 January 2010)

by Luis Rei me@luisrei.com

Objective
The objective of this project is to present a solution to a simple learning task. The task chosen was signature recognition, and the solution was to use an Artificial Neural Network (ANN). An ANN is a computational model that tries to simulate aspects of biological neural networks. To use an ANN it is necessary first to design its structure, define how it functions, and train it to perform the task at hand, in this case recognizing signatures.

Program Structure
The program can be divided into three separate modules.

Signature Processing

The signature processing module reads and processes images for later classification. Currently, it accepts images in Bitmap format (Windows BMP) with fixed dimensions of 512x128 and 256 colors (8 bits). To reduce the probability of errors and to make the signatures easier to use in the following modules, the signatures are normalized after loading by scaling and translation operations. First, the image is reduced to 128x32 pixels and converted to black and white (binary values, no shades of gray); then the signature is aligned to the upper right corner of the image and scaled to fill as much space as possible within the image without being distorted (a sketch of this step follows Figure 1).

Figure 1: Image normalization
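As an illustration of this normalization step, the following Python sketch (using NumPy, and assuming the BMP has already been decoded into an array of 8-bit grayscale values where 0 is ink) binarizes the image, crops it to the bounding box of the signature, rescales it to 128x32 without distortion and aligns it to the upper right corner. The function name normalize and the nearest-neighbour resampling are illustrative choices, not details taken from the original program.

# A minimal sketch of the normalization step, assuming the signature arrives
# as a NumPy array of 8-bit grayscale values (0 = black ink, 255 = white).
import numpy as np

TARGET_W, TARGET_H = 128, 32   # normalized size used in this report

def normalize(gray, ink_threshold=128):
    # 1. Binarize: True where there is ink (dark pixels).
    ink = gray < ink_threshold

    # 2. Crop to the bounding box of the ink.
    rows = np.any(ink, axis=1)
    cols = np.any(ink, axis=0)
    if not rows.any():                      # blank image: return an empty canvas
        return np.zeros((TARGET_H, TARGET_W), dtype=bool)
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    crop = ink[top:bottom + 1, left:right + 1]

    # 3. Scale to fill as much of the 128x32 canvas as possible without
    #    distortion (nearest-neighbour resampling keeps the sketch dependency-free).
    h, w = crop.shape
    scale = min(TARGET_H / h, TARGET_W / w)
    new_h, new_w = max(1, int(h * scale)), max(1, int(w * scale))
    ys = (np.arange(new_h) / scale).astype(int)
    xs = (np.arange(new_w) / scale).astype(int)
    resized = crop[ys][:, xs]

    # 4. Align to the upper right corner of the canvas, as described above.
    canvas = np.zeros((TARGET_H, TARGET_W), dtype=bool)
    canvas[:new_h, TARGET_W - new_w:] = resized
    return canvas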

Feature Extraction

One could feed the entire normalized signature (the output of the previous module) directly to the neural network. However, the results would not be particularly good, and it would require many input nodes, which would also make the program much slower. Instead, this module extracts the projections of the signature at different angles, namely at 0° and 90°.

Figure 2: Signature projections

Each projection image is divided into 8x2 blocks, and the number of black pixels in each block is used as an input to the neural network. The input of the network therefore consists of 8x2x2 = 32 integer values. This appears to be the minimum number of inputs that still gives good results with the signatures used in the tests (a small feature-extraction sketch is given after Figure 3). The following is an example of how a signature is processed from normalization to feature extraction:

Figure 3: Original signature, normalized signature, projection at 0° and projection at 90°
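The report does not spell out exactly how the projection images are built, so the following sketch is one plausible reading: each projection is rendered as a binary bar image of the same size as the normalized signature, split into an 8x2 grid, and the ink pixels in each cell are counted, yielding the 32 integer inputs. The helpers projection_image, block_counts and extract_features are illustrative names, and the assignment of 0° to the column-wise projection is an assumption.

# A minimal sketch of the feature extraction step, assuming the 128x32 binary
# image produced by normalize() above.
import numpy as np

def projection_image(binary, axis):
    # Render the ink-count projection along the given axis as a binary bar image.
    h, w = binary.shape
    counts = binary.sum(axis=axis)
    img = np.zeros((h, w), dtype=bool)
    if axis == 0:                       # column-wise projection (taken here as 0 degrees)
        for x, c in enumerate(counts):
            img[:c, x] = True
    else:                               # row-wise projection (taken here as 90 degrees)
        for y, c in enumerate(counts):
            img[y, :c] = True
    return img

def block_counts(img, blocks_x=8, blocks_y=2):
    # Count ink pixels in each cell of a blocks_y x blocks_x grid.
    h, w = img.shape
    bh, bw = h // blocks_y, w // blocks_x
    return [int(img[by*bh:(by+1)*bh, bx*bw:(bx+1)*bw].sum())
            for by in range(blocks_y) for bx in range(blocks_x)]

def extract_features(binary):
    # 8x2 blocks from each of the two projections -> 32 integers.
    return (block_counts(projection_image(binary, axis=0)) +
            block_counts(projection_image(binary, axis=1)))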

Figure 4: Division into blocks (8x2)

Finally, the corresponding array that will be the input of the neural network is: 402001153121614800117160000000120000100

The Artificial Neural Network

The neural network module performs the actual signature recognition based on the output of the feature extraction module. The network implemented has 3 layers: an input layer, a middle (also known as hidden) layer and an output layer. The input layer has the same number of neurons (nodes) as the number of outputs of the previous module, 32. The output layer has as many neurons as there are different people using the system, i.e. the number of people who supplied signatures to be identified. The number of neurons in the middle layer can be specified by the user at runtime; by default, 30 neurons are used, as this number yielded good results in the experiments.

There are 3 additional values that specify the ANN: alpha (also known as momentum), which was set at 0.1; beta (also known as the learning coefficient), which was set at 0.3; and the threshold, the value of the Mean Squared Error beneath which ANN training ends, which was set at 0.0001, the highest value that did not noticeably reduce accuracy during the experiments (the higher this value, the sooner the training phase ends).

The output of the ANN is an array of doubles (one value for each output neuron) indicating the probability (between 0 and 1) that the tested signature corresponds to the person in the database associated with that output neuron. Thus each possible identity corresponds to a different output neuron.
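A minimal sketch of the network shape described above: 32 inputs, a configurable hidden layer (30 by default) and one sigmoid output per enrolled person, whose value is read as the matching score for that identity. The class name SignatureNet, the bias handling and the weight initialization range are illustrative, not taken from the original program.

# A sketch of the 3-layer feedforward network described above.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SignatureNet:
    def __init__(self, n_people, n_inputs=32, n_hidden=30, seed=0):
        rng = np.random.default_rng(seed)
        # Small random initial weights; a bias column is appended to each layer.
        self.w_hidden = rng.uniform(-0.5, 0.5, (n_hidden, n_inputs + 1))
        self.w_output = rng.uniform(-0.5, 0.5, (n_people, n_hidden + 1))

    def forward(self, features):
        x = np.append(np.asarray(features, dtype=float), 1.0)   # input + bias
        hidden = sigmoid(self.w_hidden @ x)
        hidden = np.append(hidden, 1.0)                          # hidden + bias
        return sigmoid(self.w_output @ hidden)                   # one score per person

# The highest output indicates the most likely identity, e.g.:
# scores = net.forward(extract_features(normalize(image)))
# identity = int(np.argmax(scores))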

Each neuron of a layer has a weighted link to every neuron of the following layer (feedforward); there are no links to nodes in previous layers (feedback). These weights represent the knowledge of the ANN (its memory or state of learning). Initially the weights are random values; they are changed as the ANN is trained by supervised learning using back-propagation, according to the following algorithm:
1. Initialize the weights in the network (often randomly)
2. repeat
   * foreach example e in the training set do
     1. O = neural-net-output(network, e) ; forward pass
     2. T = teacher output for e
     3. Calculate error (T - O) at the output units
     4. Compute delta_wi for all weights from hidden layer to output layer ; backward pass
     5. Compute delta_wi for all weights from input layer to hidden layer ; backward pass continued
     6. Update the weights in the network
   * end
3. until all examples classified correctly or stopping criterion satisfied
4. return(network)
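The following training-loop sketch follows the algorithm above and plugs in the parameters reported earlier: beta = 0.3 as the learning rate, alpha = 0.1 as momentum, and a Mean Squared Error threshold of 0.0001 as the stopping criterion. It extends the SignatureNet sketch (reusing its sigmoid and weight matrices), and targets are assumed to be one-hot vectors, one per enrolled person; it is an illustration of standard back-propagation, not the original implementation.

# A back-propagation training loop for the SignatureNet sketch above.
import numpy as np

def train(net, examples, targets, beta=0.3, alpha=0.1, threshold=1e-4,
          max_epochs=10000):
    prev_dwo = np.zeros_like(net.w_output)
    prev_dwh = np.zeros_like(net.w_hidden)
    for _ in range(max_epochs):
        sq_err = 0.0
        for features, target in zip(examples, targets):
            # Forward pass (same computation as SignatureNet.forward).
            x = np.append(np.asarray(features, dtype=float), 1.0)
            h = sigmoid(net.w_hidden @ x)
            hb = np.append(h, 1.0)
            o = sigmoid(net.w_output @ hb)

            # Error at the output units (T - O).
            err = np.asarray(target, dtype=float) - o
            sq_err += float(np.mean(err ** 2))

            # Backward pass: deltas for the output and hidden layers.
            delta_o = err * o * (1.0 - o)
            delta_h = (net.w_output[:, :-1].T @ delta_o) * h * (1.0 - h)

            # Weight updates with learning rate beta and momentum alpha.
            dwo = beta * np.outer(delta_o, hb) + alpha * prev_dwo
            dwh = beta * np.outer(delta_h, x) + alpha * prev_dwh
            net.w_output += dwo
            net.w_hidden += dwh
            prev_dwo, prev_dwh = dwo, dwh

        if sq_err / len(examples) < threshold:    # stopping criterion (MSE threshold)
            break
    return net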

Figure 5: ANN schema. n corresponds to the number of different people (signatures) using the system.

State of the Project


In its present state the program allows:
1 - Importing signatures for the learning phase;
2 - Training the network using those signatures;
3 - Signature recognition according to the training;
4 - Verifying the matching percentage of the tested signature against each person in the system (a hypothetical usage sketch is given below).

Currently the program only runs on UNIX platforms, because some of its functions use calls to the UNIX API, namely those that manipulate the filesystem. It is important to note that the signatures used for testing recognition are always different from those used to train the network.
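A hypothetical end-to-end flow tying these four capabilities to the earlier sketches might look like the following; load_bmp is a stand-in for whatever routine the real program uses to read the 512x128, 256-color BMP files, and the other helpers come from the sketches above.

import numpy as np

def enroll_and_recognize(training_paths, person_ids, n_people, unknown_path):
    # 1-2: import the training signatures and train the network on them.
    feats = [extract_features(normalize(load_bmp(p))) for p in training_paths]   # load_bmp is hypothetical
    targets = [np.eye(n_people)[i] for i in person_ids]    # one-hot identity per training signature
    net = train(SignatureNet(n_people), feats, targets)

    # 3-4: recognize an unseen signature and report the per-person matching scores.
    scores = net.forward(extract_features(normalize(load_bmp(unknown_path))))
    return int(np.argmax(scores)), [round(100.0 * float(s), 1) for s in scores]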

Performance Evaluation
The most important criteria for selecting a signature recognition program are:
- Accuracy of the answers, i.e. whether the signatures are correctly identified;
- Average time for signature recognition;
- Time taken by the training phase.

The accuracy seems to be quite good. Occasionally the program is unable to correctly recognize a signature (from the test database) or, very rarely, two signatures; most of the time it correctly recognizes all of them (100% accuracy). A curious fact is that the number of errors (1, occasionally 2) seems to be constant: when presented with 17 signatures it would occasionally fail to recognize 1, and as the number of signatures in the test database progressively grew to 34, it still only failed to recognize 1 (very rarely, 2). The misidentified test signatures were not the same ones each time. The accuracy of the program thus seems to be good enough but, as expected, not perfect. Recognizing a single signature takes only an instant, while training the ANN takes several minutes, depending mainly on the configuration of the network; this is acceptable because, in real use, training would be done rarely (only when adding a new person's signatures to the system) in comparison with testing.
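A small sketch of how the accuracy figure could be measured on the held-out test signatures, reusing the helpers from the earlier sketches (this is an evaluation convenience, not part of the original program):

import numpy as np

def accuracy(net, test_features, test_ids):
    # Fraction of held-out signatures whose highest-scoring output neuron
    # matches the true identity.
    correct = sum(int(np.argmax(net.forward(f))) == person
                  for f, person in zip(test_features, test_ids))
    return correct / len(test_ids)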

Bibliography
Hugo Rodrigues, Mário Lopes, Pedro Pinto, "Reconhecimento de Assinaturas Utilizando Redes Neuronais"
J. Coetzer, B. M. Herbst, J. A. du Preez, "Offline Signature Verification Using the Discrete Radon Transform and a Hidden Markov Model"
S. Russell, P. Norvig, "Artificial Intelligence: A Modern Approach", Prentice-Hall
Rasha Abbas, "A Prototype System For Off-Line Signature Verification Using Multilayered Feedforward Neural Networks"
Wikipedia, "Backpropagation"
H. Shivanan, "Coding a Pattern Recognition Engine: An Implementation Of Artificial Neural Networks"
