ABSTRACT: The stable uniqueness of the human iris (see figure 1), carrying a striking information density of more than 3.2 measurable bits per mm², makes iris scans suitable for individual identification. This report describes the motivation, basic fundamentals, techniques, recent fields of application and future visions of iris recognition systems. The award-winning state-of-the-art identification method is elucidated and explained in mathematical detail. Strengths and weaknesses of the methods are shown and discussed.
Computers struggle with such a variety of input signals and devices and with such high algorithm complexity, and often there is no time to train the system on a person's behaviour. As a way out, people are given keys, ID cards, and smart cards in combination with passwords, and all the other systems that rule today's markets. Figure 2 gives a structured overview of recent identification techniques. Memorising loads of passwords and carrying lots of devices is reality, but it runs against our human nature: it is a well-known fact that people do not like to remember passwords or to obtain new keys and devices. Yet we carry our uniqueness with us without holding anything, because our body itself is unique. Holdings can be stolen, and passwords can be copied or simply forgotten; our body, in its structure and behaviour, is directly connected with us, always, and carries singular information.
[figure 2 diagram: a human is identified either by another human, who checks all available information perceived by the senses combined with memory, or by an IT system, which checks Biometrics (geometry of the hand, fingerprint, look/appearance, signature, retina scan, iris patterns, voice, skin, DNA) or Holdings (paper document, metal key, magnetic card, smart card, calculator)]
figure 1 : RGB image of a human iris, showing the striking singular structure of this amazing part of the body. (picture source [Daug])
Finally, a decision to identify a stranger is made based on what the unknown person is, has, and knows.
1. General Introduction
How do humans identify each other? We use the whole variety of body information we can perceive from the other person with our senses (look, voice, walk, smell, ...) and compare this input to our memory. About half of our brain structures can be involved in such a single act of identification. The enormous quantity of processed indications makes the manual identification of humans highly reliable, especially for well-known persons, where a good knowledge base about the person's properties lies behind it. The price to pay for this high-quality identification is a giant amount of data processing: our visual system alone processes a data volume of about 5 billion bits per second with amazing precision. Recent computer systems are not able to deal with such a huge bandwidth and such a large variety of inputs.
figure 2 : Structured overview of recent identification techniques.
figure 4 : Error comparison of biometric verification methods: iris recognition (on the vertical axis) leads far ahead of all other compared biometric technologies in 2001. (figure source [Ger02])
If the enrolled template most similar to the new template meets a similarity threshold, the system declares a match. This identification matching process is known as 1:N (one-to-many) matching.
The verification process is quite similar, except that after enrollment, when individuals present themselves, they claim an identity. The new template created at the verification point is compared to the template associated with the claimed enrolled identity. If the two templates are similar enough to meet the threshold, a match is declared. This is known as a 1:1 (one-to-one) match. But which characteristic of the human body should be used, how is a template computed, and how can templates be compared?
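The two matching modes above can be sketched in a few lines. This is only an illustrative sketch: the bit-vector templates, the simulated noise level, and the 0.32 decision threshold are assumptions for the demo, not values prescribed by the report.

```python
import numpy as np

def dissimilarity(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of disagreeing bits between two binary templates."""
    return float(np.mean(a != b))

def verify(probe, enrolled, threshold=0.32):
    """1:1 match: compare the probe against the single claimed identity."""
    return dissimilarity(probe, enrolled) <= threshold

def identify(probe, database, threshold=0.32):
    """1:N match: return the best-matching enrolled identity, if any."""
    name, template = min(database.items(),
                         key=lambda kv: dissimilarity(probe, kv[1]))
    return name if dissimilarity(probe, template) <= threshold else None

rng = np.random.default_rng(0)
alice = rng.integers(0, 2, 2048)
noisy_alice = alice.copy()
noisy_alice[:100] ^= 1                       # simulate ~5% acquisition noise
db = {"alice": alice, "bob": rng.integers(0, 2, 2048)}
print(verify(noisy_alice, alice))            # True
print(identify(noisy_alice, db))             # alice
```

Unrelated random templates disagree on about half their bits, so a threshold well below 0.5 separates genuine matches from impostors.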
Obviously, it is necessary to find parts of the human body that offer stable, highly distinctive, easily measurable and quickly processable patterns in order to increase the quality and speed of identification algorithms. Gaining algorithm quality in this specific field means decreasing both the rate at which the system fails to recognise a valid user (false rejection) and the rate at which an impostor is accepted (false acceptance).
SINGULARITY. Current algorithms can extract at least about 3.2 bits per mm² (this number varies from person to person) within the analysed iris area [Daug01]. This density can be interpreted as 266 unique spots of one iris (compared with 15-30 for other biometric techniques). And this number even describes a lower limit of the actually available information (see also chapter 11). Furthermore, the randomness of iris patterns has very high dimensionality (173 free binary dimensions). This enables recognition decisions to be made with confidence levels high enough to support rapid and reliable exhaustive searches through huge databases. INVARIABILITY. Most human characteristics vary over time. For example, handwriting, facial appearance or voice change over the years or even from day to day (e.g. a flu changes the voice). Our iris does not change its outer appearance, apart from slight colour fluctuations in early years. The basic structures of the iris are formed from the 3rd to the 8th month of
figure 3 : Example of a biometric system configuration. The iris recognition system shown differentiates between the registration device and the verification device. (source http://www.oki.com)
pregnancy and are completely and permanently present at the beginning of the 9th month. In the first months after birth the pigments reach their final formation and create a permanent colouring. This characteristic has long been used in the personal descriptions on ID cards.
Finally, the photo technology used to take iris pictures must be inexpensive and available on public markets.
figure 6 : The OKI Iris Recognition System according to figure 3. (source OKI)
For example, IriScan has developed an active application that automatically captures the iris image from a distance of approximately 12 inches. Sensar has further developed the technology so that the biometric system can automatically locate the user's face and focus the camera from a distance of 3 feet (a system used in iris-recognition ATMs). Other systems are produced by OKI, Panasonic and many more. Other eye-observing techniques like retina recognition often need much higher resolutions. This increases the hardware costs and usually decreases the maximum distance between the camera and the person's face. The enrollment rate for iris recognition
systems is very good. Only very young children may be unable to follow the simple
figure 5 : Iridian iris scanner at Heathrow Airport supporting efficient boarding procedures (source Iridian Technologies)
instructions for iris measurement. The final result of this first acquisition step is a digital image of the eye.
This core G produces a sharp cut of high frequencies and thereby filters out noise. As discussed, all positions (x0, y0) of the captured iris picture are parsed and tested as centre points for circles C with different radii r. The image function f is differentiated in the direction of the radius r along a line integral around the circular arc s of the circle C in order to compute the function CL.
(eq.1)

circles = {(x0, y0, r) | x0, y0, r ∈ N}
(c_iris, c_pupil) = max2_CL(circles)

max2_CL(S) ... ordering function based on contrast lines; gives the two biggest elements of set S ordered by the function CL
N ... natural numbers
x0, y0 ... image coordinates
r ... radius of the test circle
Iris and pupil can be simply differentiated from each other by the radius. Note that the centres of the two circles need not coincide: "Although the results of the iris search greatly constrain the pupil search, concentricity of these boundaries cannot be assumed. Very often the pupil centre is nasal, and inferior to, the iris centre." [Daug01]
figure 8 : Visualisation of the algorithm for pupil/iris location. It shows one test circle C with radius r around the centre point (x0, y0). The line integral is computed on the image pixels (symbolised by squares) at all positions (x, y) around the circle along s. The areas around (x, y) are blurred within a specified radius.
The essential computation of the strength of contrast around the circle, according to the described algorithm, reduces to the following (non-deterministic) functional description of CL:
(eq.2, extended from [Daug01])

CL(x0, y0, r) = | G(r) ∗ ∂/∂r ∮_(x0,y0,r) I(x,y) / (2πr) ds |

(∗ denotes convolution of the blur core G with the radial derivative of the normalised line integral)
The computation of CL can be time-optimised by differential methods that are not discussed here. The final result of this second recognition step is detailed information about the iris position, which is used to extract the iris information of interest. The next step extracts this unique data.
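The contrast-line search of eq.2 can be illustrated for one fixed candidate centre with a short NumPy sketch. The toy test image, the blur width, and the sampling density are assumptions for the demo, not parameters taken from the report.

```python
import numpy as np

def gaussian_kernel(sigma=2.0):
    """Discrete Gaussian used as the blur core G."""
    k = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    g = np.exp(-k ** 2 / (2 * sigma ** 2))
    return g / g.sum()

def circle_mean(img, x0, y0, r, n=256):
    """Line integral of the image around circle C, normalised by its length."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    xs = np.clip(np.round(x0 + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(y0 + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def CL(img, x0, y0, radii):
    """|G(r) * d/dr of the circular line integrals| for one candidate centre."""
    means = np.array([circle_mean(img, x0, y0, r) for r in radii])
    deriv = np.gradient(means)                 # radial derivative
    return np.abs(np.convolve(deriv, gaussian_kernel(), mode="same"))

# Toy image: dark "pupil" disc of radius 20 on a bright background.
img = np.full((101, 101), 200.0)
yy, xx = np.ogrid[:101, :101]
img[(xx - 50) ** 2 + (yy - 50) ** 2 <= 20 ** 2] = 30.0

radii = np.arange(5, 45)
best_r = radii[np.argmax(CL(img, 50, 50, radii))]
print(best_r)                                  # a radius close to 20
```

The radius at which the blurred radial derivative peaks marks the pupil boundary; a full implementation repeats this over all candidate centres (x0, y0).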
The next passage describes how to compute the function CL: a blur function characterised by a functional core G and scaled by a size parameter is applied to the image in order to reduce the influence of noise on the operation. The Gauss Function is usually chosen as the specific core G.
A dynamic adaptation of the window size according to the currently computed frequency can lead to better results, because some selected frequencies can then be demodulated with higher precision than others. This idea leads to the Wavelet Transformation, which scales the window size by a new parameter a:

(eq.7)

F(a, b) = 1/√|a| ∫ h((x − b)/a) f(x) dx
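A minimal numeric reading of the Wavelet Transformation (eq.7): the Mexican-hat function used as the Basis Wavelet h below is chosen purely for illustration, and the test signal is an assumption for the demo.

```python
import numpy as np

def basis_wavelet(x):
    """Mexican-hat wavelet as an illustrative Basis Wavelet h."""
    return (1 - x ** 2) * np.exp(-x ** 2 / 2)

def cwt(f, xs, a, b):
    """Discrete approximation of F(a, b) = 1/sqrt(|a|) * integral of h((x-b)/a) f(x) dx."""
    dx = xs[1] - xs[0]
    return np.sum(basis_wavelet((xs - b) / a) * f) * dx / np.sqrt(abs(a))

xs = np.linspace(-10, 10, 2001)
f = np.exp(-(xs - 3) ** 2)                    # a localised bump at x = 3
responses = [cwt(f, xs, a=1.0, b=b) for b in (-3.0, 0.0, 3.0)]
print(int(np.argmax(responses)))              # 2: strongest where the wavelet sits on the bump
```

Unlike the global Fourier basis, the translated and scaled wavelet responds strongly only where its window overlaps the signal feature, which is exactly the localisation property the text motivates.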
(eq.3)

F(u ∈ U) = ∫_(x ∈ X) C(x, u) f(x) dx
On the one hand, a captured iris image can be described as an image function f that gives an 8-bit value for every position x of the pixel image. On the other hand, the image can be defined by a frequency function F that gives a complex value for every spatial frequency s. This value describes the phase angle and the amplitude of every frequency in the picture. A transformation core C allows transforming the image description from one version into the other. Replacing the general transformation core C with the specific Fourier core leads to the basic formula of the Fourier Transformation (FT).
(eq.4)
The function h is the core function of this transformation (compare to (eq.3)) and is called the Basis Wavelet. In this equation, parameter b translates and parameter a scales the function h. This form of the Basis Wavelet is often modified to meet specific needs of the analysis.
F(s) = 1/√(2π) ∫ e^(−isx) f(x) dx
This transformation is not suitable for a compressing image demodulation, because the base functions of the FT are complex sine functions:

(eq.5)

(1/√(2π)) e^(−isx) = (1/√(2π)) (cos(sx) − i sin(sx))

Edges express very high spatial frequencies and therefore influence all coefficients. To avoid this, the transformation is extended to the Short-Time Fourier Transformation (STFT), which introduces a Window Function w:
(eq.6)
figure 9 : Simplifying the iris area via transformation and mapping into a polar coordinate system.
F(s, b) = 1/√(2π) ∫ e^(−isx) f(x) w(x − b) dx

The Window Function w can be positioned on the image by the newly introduced parameter b. It is usual to use a Gauss Function as the Window Function w. But this Window Function is static.
Hence, every point (e.g. pixel) of the iris image can be specified by a radius r and an angle θ. After the transformation, the image function f takes this radius and angle instead of a position vector x and returns the 8-bit image value.
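The remapping into polar coordinates can be sketched as a nearest-neighbour resampler. The annulus bounds, grid resolution, and test image are arbitrary demo assumptions; a production system would interpolate and normalise for pupil dilation.

```python
import numpy as np

def to_polar(img, cx, cy, r_in, r_out, n_r=16, n_theta=128):
    """Resample the annular iris region into an f(r, theta) grid."""
    rs = np.linspace(r_in, r_out, n_r)
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(rs, thetas, indexing="ij")
    xs = np.clip((cx + R * np.cos(T)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((cy + R * np.sin(T)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]          # one 8-bit value per (radius, angle) sample

img = (np.arange(200 * 200) % 256).astype(np.uint8).reshape(200, 200)
polar = to_polar(img, 100, 100, 30, 80)
print(polar.shape)              # (16, 128)
```

Each row of the result holds one ring of the iris at constant radius, so the later Wavelet projections can be applied on a simple rectangular grid.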
In a final step, a strong quantisation is applied: only the signs of the resulting phase angle values are stored.
(eq.10, extended from [Daug01])

F_Daugman = sign(F) ∈ {(0,0), (0,1), (1,0), (1,1)}

sign ... gives a vector containing the signs of the real and the imaginary value, coded as 0 for negative and 1 for positive
Hence, the function F_Daugman gives two bits per analysed frequency of the source polar coordinate system (note again that the transformation F produces complex results), specifying one quadrant of the complex destination plane (see figure 10).
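The two-bit quantisation of eq.10 amounts to reading off the quadrant of each complex Wavelet response. One point to note: coding an exact zero as positive is an assumption here, since the report does not specify how zeros are handled.

```python
def quadrant_bits(z: complex):
    """Return (sign of real part, sign of imaginary part), coded 0/1.

    Assumption: an exact zero is coded as positive (1); the report
    does not specify this edge case.
    """
    return (int(z.real >= 0), int(z.imag >= 0))

print(quadrant_bits(1 + 1j))     # (1, 1) - first quadrant
print(quadrant_bits(-2 + 0.5j))  # (0, 1) - second quadrant
print(quadrant_bits(-1 - 3j))    # (0, 0) - third quadrant
print(quadrant_bits(2 - 1j))     # (1, 0) - fourth quadrant
```

Because only the quadrant survives, the code is insensitive to the amplitude of the response, which matches the report's point that amplitude depends on brightness, contrast and camera gain.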
The Basis Wavelet used here is a 2D Gabor function in polar coordinates:

h(r, θ) = e^(−iw(θ − θ0)) · e^(−(r − r0)²/a²) · e^(−(θ − θ0)²/b²)
This Base Wavelet is used as core function for an integral transformation from the pixel image (in polar coordinates) to the frequency space as follows:
(eq.9, extended from [Daug01])
Integral Transformation F:

F = ∫∫ h(r, θ) f(r, θ) r dr dθ
F gives a complex value containing a phase angle and an amplitude for every specific Wavelet h. The amplitude information depends on the brightness, contrast and camera gain of the picture; the phase information is independent of those parameters. That is why only the phase information, and not the amplitude within the power spectrum, is used to describe the iris. This yields good independence from picture quality and lighting conditions.
figure 10 : Visualisation of Daugman's final iris bitcode differentiating positions on the complex plane. Local regions of an iris are projected onto Wavelets, generating complex values whose real and imaginary parts specify the coordinates of a phasor in the complex plane. The angle of each phasor is quantised to one of the four quadrants, setting two bits of phase information. (source [Daug])
This process of computing bit pairs is repeated all across the iris with many Wavelets h (e.g. different frequencies and orientations) to finally extract 2,048 bits (256 bytes) out of one iris picture. Such a describing template can be viewed at the top of figure 7. Different templates can be compared using modified Hamming Distances.
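The comparison via modified Hamming Distances can be sketched with a validity mask that excludes occluded bits. The masking scheme, noise level, and occlusion pattern below are simplifying assumptions for the demo, not Daugman's exact method.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits, counted only where both codes are valid."""
    valid = mask_a & mask_b
    return np.sum((code_a ^ code_b) & valid) / np.sum(valid)

rng = np.random.default_rng(1)
a = rng.integers(0, 2, 2048)          # one 2,048-bit iris code
b = a.copy()
b[:205] ^= 1                          # ~10% simulated bit disagreement
mask = np.ones(2048, dtype=int)
mask[:100] = 0                        # e.g. bits hidden by an eyelid
print(round(float(hamming_distance(a, b, mask, mask)), 3))
```

Normalising by the number of mutually valid bits keeps the score comparable even when eyelids or reflections obscure part of the iris, which is what allows matching on partially occluded images.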
The totality of all bit pairs describes the fundamental image of the iris in an amazingly compact form with a maximum of high dimensionality and randomness. The algorithms for iris recognition based on Daugman's transformation earned the British Computer Society's 1997 IT Award and Medal, the Smithsonian Award in 2000, and finally the "Time 100" Innovation Award in 2001.
The algorithm can also be used for partially injured (or occluded) irises: even if 2/3 of the iris is completely obscured, accurate measurement of the remainder would still result in an equal error rate of 1 in 100,000. Furthermore, the discussed algorithm works independently of the pupil's expansion and contraction and of skews and stretches of the iris.
12. Cost
Iris recognition was traditionally one of the most expensive biometric technologies; the cost of a single system was tens of thousands of dollars. The significant drop in the price of computer hardware and cameras, together with several industry cooperations, brought the price of a high-end physical security unit with automatic focussing into the $4000-$5000 range. The IriScan PC Iris shows that iris technology can also be used in the home or office; it is priced in the $500-$800 range.
The technology itself was designated a Millennium Product by the UK Design Council in 1998, and throughout 2000 it was used in the Millennium Dome. The algorithms are used in most of the currently available iris recognition systems (see Appendix A).
Generally, there are two key public acceptance factors: ease of use and intrusiveness. Using invisible wavelengths can help make the imaging procedure more comfortable. Nevertheless, some people are hesitant to use the system due to the perception that the camera is taking a picture of one's eye.
A user-active iris scan without auto-focus requires more participation of the user, because the capture mechanism needs to be manually focused and the user must be closer to the camera (about 3 inches). Furthermore, users see a reflection of the eye and are thus more aware of what the system is doing. This kind of system might be used for low-cost applications. Nevertheless, for all kinds of systems the measurement process is very fast: it takes only a moment to take the image and about 1 second to compute the template.
The United Nations High Commission for Refugees uses iris scan technologies for the anonymous identification of returning Afghan refugees. The current situation underlines a well-known fact: globally operating firms, military forces and state agencies often seem to be the very first organisations to invest in new information security technologies.
Appendix A: Links to some firms dealing actively with iris recognition systems:
IBM: http://www-916.ibm.com/press/prnews.nsf/contacts
Iridian Technologies, USA: http://www.iridiantech.com
Panasonic: http://www.panasonic.com/medical_industrial/iris.asp
London Heathrow Airport: http://www.cnn.com/2002/TECH/science/02/08/airports.eyes/index.html
Amsterdam Schiphol Airport: http://www.idg.net/go.cgi?id=647567
Charlotte Airport USA: http://www.cnn.com/2000/TECH/computing/07/24/iris.explainer/index.html
IrisAccess: http://www.iridiantech.com/lgiris.htm
Appendix B: References
[Daug] Prof. Dr. J. Daugman, OBE (2001), personal webpage of the University of Cambridge, Internet source (viewed on 20/10/02): http://www.cl.cam.ac.uk/users/jgd1000
[Daug01] Prof. Dr. J. Daugman, OBE (2001), How iris recognition works, University of Groningen/University of Cambridge. Internet source (viewed on 20/10/02): http://www.cl.cam.ac.uk/users/jgd1000/irisrecog.pdf
[Daug02] Daugman, J. and Downing, C. (2001) "Epigenetic randomness, complexity, and singularity of human iris patterns." Proceedings of the Royal Society, B, 268, Biological Sciences, pp. 1737-1740. University of Cambridge, Computer Laboratory. Internet source (viewed on 20/10/02): http://www.cl.cam.ac.uk/users/jgd1000/roysoc.pdf
[Daug93] Daugman, J. (1993) "High confidence visual recognition of persons by a test of statistical independence." IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15(11), pp. 1148-1161.
[NSCS] National Center for State Courts of the United States of America, The Court Technology Laboratory. Internet source (viewed on 20/10/02): http://ctl.ncsc.dni.us/biomet%20web/BMIris.html#basics , http://ctl.ncsc.dni.us/biomet%20web/BMIndividuals.html
[Kron] Kronfeld, P. (1962) Gross anatomy and embryology of the eye. In: The Eye (H. Davson, ed.), Academic Press: London.
[Stei02] Steinwandt, R. (2002), Wavelet Transformations. Internet source (viewed on 10/11/02): http://i31www.ira.uka.de/docs/semin94/03_Wavelet
[Pfitz01] Prof. Dr. A. Pfitzmann (2001), Information Security and Cryptography, Dresden University of Technology. Internet source (viewed on 18/09/01): http://ikt.inf.tu-dresden.de
[Mei01] Prof. Dr. K. Meissner (2001), Scriptum Introduction to Multimedia, Dresden University of Technology, 2001.
[Fu01] Prof. Dr. S. Fuchs (2001), Scriptum Image Processing, Dresden University of Technology.
[Ger02] Geruso, M. (2002), An Analysis of the Use of Iris Recognition Systems in U.S. Travel Document Applications, Virginia Tech, 2002.
[IS02] International Biometric Group, Independent Expert Group. Internet source (viewed on 20/10/02): http://www.iris-scan.com/iris_technology.htm