
Report on Identity Verification | Assignment Information Security | COMS40213

Inside Iris Recognition


Bakk. Medien-Inf. Tilo Burghardt (tiloburghardt@bytemaster.de), European student from Germany, Master Course in Global Computing and Multimedia, University of Bristol, November 2002

ABSTRACT: The stable singularity of the human iris (see figure 1), carrying a striking information density of more than 3.2 measurable bits per mm², gives reason to use iris scans for individual identification. This report describes motivations, basic fundamentals, techniques, recent fields of application and future visions for iris recognition systems. The award-winning state-of-the-art identification method is elucidated and explained in mathematical detail. Potentials and weaknesses of the method are shown and discussed.

1. General Introduction

How do humans identify each other? We use the whole variety of body information we can perceive from the other person with our senses (look, voice, walk, smell, ...) and compare this input to our memory. About half of our brain structures can be involved in such a single act of identification. The enormous quantity of processed indications leads to the high confidence of manual identification of humans, especially of well-known persons for whom a good knowledge base of personal properties exists. The price to pay for this high-quality identification is a giant amount of data processing: our visual system alone processes a data volume of about 5 billion bits per second with amazing precision. Recent computer systems are not able to deal with such a huge bandwidth, such a large variety of input signals and devices, and such a high algorithm complexity. Often there is no time to train a system on a person's behaviour. As a way out, people are given keys, ID cards, and smart cards in combination with passwords, and all the other systems which rule the markets of today. Figure 2 gives a structured overview of recent identification techniques. Knowing loads of passwords and holding lots of devices is reality, but it is against our human nature. It is a well-known fact that people do not like to remember passwords or to get new keys and devices. Yet we carry our singularity with us without holding anything: our body itself is unique. Holdings can be stolen, passwords can be copied or simply forgotten; our body, in its structure and behaviour, is directly connected with us, always, and carries singular information.

Finally, a decision to identify a stranger is made by what the unknown person is, has, or knows.

figure 1 : RGB image of a human iris. It shows the striking singular structure of this amazing part of the body. (picture source [Daug])

[figure 2 content: a human identifies another human by all available information gathered by the senses combined with memory; an IT system identifies a human by Biometrics (geometry of the hand, fingerprint, look/appearance, signature, retina scan, iris patterns, voice, skin, DNA, ...) or by Holdings (paper document, metal key, magnetic card, smart card, calculator)]

[figure 2 content: an IT system also identifies a human by Knowledge (password, answers to questions, calculated results, ...); a human identifies an IT system by Physical specifics (casing, seal, hologram, dirt), by the location of the system, or by the port of connection]

figure 2 : Overview of identification methods (extended from [Pfitz01] chapter 2.2.1)

2. Basics of Biometric Systems


Hence, a simple question arises: How can static or dynamic characteristics of the human body be measured and analysed with available technology to get a unique identification code of a person? This leads to the field of biometrics, which tries to answer that question. There are two fundamental operation modes for biometric systems: the first is identification (including registration) and the second is verification. For registration, the user presents appropriate documentation, such as a birth certificate, to establish an initial identity to which his biometric information will be linked. Afterwards, the user presents (possibly multiple times) some personal characteristic, such as his iris, to be scanned by an acquisition device. The scanned data are converted into a compact and unique template, which acts as a map of characteristics and which should carry as little redundancy as possible. When an individual presents himself at another time or place for identification, the system creates another template. This is compared to all the enrollment templates in the database. If the enrolled template most similar to the new template meets a similarity threshold, the system declares a match. This identification matching process is known as 1:N (one-to-many) matching.

The verification process is quite similar, except that after enrollment, when an individual presents himself, he claims an identity. The new template created at the verification point is compared only to the template associated with the claimed enrolled identity. If the two templates are similar enough to meet the threshold, a match is declared. This is known as a 1:1 (one-to-one) match. But which characteristic of the human body should be used, how is a template computed, and how can templates be compared?

3. Properties of the Human Iris

Iris recognition systems produce far fewer errors than any other known and applied biometric technology (see figure 4). Why?

figure 4 : Error comparison of biometric verification methods: iris recognition (situated on the vertical axis) leads far in front of all other compared biometric technologies in 2001. (figure source [Ger02])
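As a sketch, the two matching modes described in section 2 might look as follows in code. This is an illustrative assumption, not any real product's API: templates are modelled as binary vectors and compared by their fraction of disagreeing bits (a plain Hamming distance, as discussed later in this report), and the threshold value 0.32 is only a placeholder.

```python
import numpy as np

def dissimilarity(t1: np.ndarray, t2: np.ndarray) -> float:
    """Fraction of disagreeing bits between two binary templates."""
    return float(np.mean(t1 != t2))

def identify(probe, enrolled: dict, threshold: float = 0.32):
    """1:N matching: compare the probe against every enrolled template
    and accept the closest one if it meets the threshold."""
    best_id, best_d = None, 1.0
    for user_id, template in enrolled.items():
        d = dissimilarity(probe, template)
        if d < best_d:
            best_id, best_d = user_id, d
    return best_id if best_d <= threshold else None

def verify(probe, claimed_id, enrolled: dict, threshold: float = 0.32) -> bool:
    """1:1 matching: compare only against the claimed identity's template."""
    t = enrolled.get(claimed_id)
    return t is not None and dissimilarity(probe, t) <= threshold
```

Note how verification touches a single stored template, while identification scans the whole database; this is why identification needs far lower false-match odds per comparison.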

Obviously, it is necessary to find parts of the human body that offer stable, highly unique, easily measurable and quickly processable patterns in order to increase the quality and speed of identification algorithms. Gaining algorithm quality in this specific field means decreasing the failures to recognise a valid user (false rejections) and decreasing the errors that lead to the acceptance of an impostor (false acceptances, or impostor passes).

SINGULARITY. Current algorithms can extract at least about 3.2 bits per mm² (this number varies from person to person) within the analysed iris area [Daug01]. This density can be interpreted as 266 unique spots in one iris (compared to 15-30 for other biometric techniques). And this number even describes a lower limit of the actually available information (see also chapter 11). Furthermore, the randomness of iris patterns has very high dimensionality (173 free binary dimensions). It enables recognition decisions to be made with confidence levels high enough to support rapid and reliable exhaustive searches through huge databases.

INVARIABILITY. Most human characteristics vary over time. For example, handwriting, facial look or voice get modified over the years or even from day to day (e.g. a flu changes the voice). Our iris does not change its outer look apart from slight colour fluctuations in early years. The basic structures of the iris are formed from the 3rd to the 8th month of pregnancy and are completely and permanently existent at the beginning of the 9th month. In the first months after birth the pigments reach their final formation and create a permanent colouring. This indication has been used in the personal descriptions on ID cards for a long time.

figure 3 : Example of a biometric system configuration. The shown iris recognition system differentiates between the registration device and the verification device. (source http://www.oki.com)

4. Brief History of Iris Recognition

The idea of using iris patterns for personal identification was originally proposed by an ophthalmologist named Frank Burch in 1936. In the 1980s the idea had appeared in James Bond films, but it still remained science fiction and conjecture. In 1987 two other ophthalmologists, Aran Safir and Leonard Flom, patented this idea, and in 1989 they asked John Daugman (then teaching at Harvard University) to create actual applicable algorithms for iris recognition. The Daugman algorithms were initially published in the paper "High confidence visual recognition of persons by a test of statistical independence" [Daug93]. They combine the field of classical pattern recognition with modern computer vision, mathematical statistics and studies of the human-machine interface; it is an interdisciplinary field. The patents are owned by Iridian Technologies and are the basis for all current iris recognition systems and products.

5. Requirements for Iris Cameras

The general purpose is high-confidence, real-time recognition of an individual's identity by mathematical analysis of the random patterns that are scanned from the iris of an eye from some distance. The procedure of filming human irises must meet some general requirements to be applicable in real scenarios: the measurement process should be fast and comfortable for the measured person as well as robust against natural modifications of the eyes. Hence, the filming algorithms used are required to be largely invariant to translation in space, reflections, scaling, deformation (e.g. pupil dilation) and possible camera and light differences like shading or noise. Finally, the photo technology used to take iris pictures must be inexpensive and available on public markets.

6. The Real World

How does the recognition process work in real systems? First of all, a snapshot of the iris with a radius of more than 70 pixels (usually about 120), taken with a basic CCD camera operating with monochromatic light at a wavelength of 700-900 nm, is an appropriate basis for iris recognition algorithms. Humans cannot see light of this near-infrared wavelength, so filmed persons do not perceive the observation process. The time to focus, by maximisation of the high frequencies in the Fourier spectrum of the image, is about 15 ms with current technology [Daug01].

For example, IriScan has developed an active application that automatically captures the iris image from a distance of approximately 12 inches. Sensar has further developed the technology so that the biometric system can automatically locate the user's face and focus the camera from a distance of 3 feet (a system used in iris recognition ATMs). Other systems are produced by OKI, Panasonic and many more. Other eye-observing techniques like retina recognition often need much higher resolutions. This increases the hardware costs and usually decreases the maximum distance between the camera and the person's face. The enrollment rate for iris recognition systems is very good. Only very young children may be unable to follow the simple instructions for iris measurement. The final result of this first acquisition step is usually an 8-bit image of one iris.

figure 5 : Iridian iris scanner at Heathrow Airport supporting efficient boarding procedures (source Iridian Technologies)

figure 6 : The OKI iris recognition system according to figure 3. (source OKI)
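The focus maximisation mentioned above can be illustrated with a small sketch: a sharp image concentrates more energy in the high spatial frequencies of its 2D Fourier spectrum than a blurred one, so the fraction of high-frequency energy can serve as a focus score to maximise while adjusting the lens. The cutoff radius used here is an arbitrary assumption for illustration.

```python
import numpy as np

def focus_score(img: np.ndarray, cutoff: float = 0.25) -> float:
    """Energy in the high spatial frequencies of the image's 2D Fourier
    spectrum, normalised by the total energy. Sharper images score higher."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # distance of each frequency bin from the spectrum centre,
    # as a fraction of the half image size
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    return float(spec[r > cutoff].sum() / spec.sum())
```

An autofocus loop would simply pick the lens position where this score peaks.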

7. Algorithm Details: Locating the Iris


How to find the iris within the taken image? The picture is searched for circles c that have a strong contrast line CL around them. The two circles with the most striking contrast lines (called bestmatch) characterise iris and pupil. This gives a basic functional description of the recognition behaviour as follows:

(eq. 1, extended from [Daug01] and [Daug])

    circles = { (x0, y0, r) | x0, y0, r ∈ N }
    bestmatch = { c ∈ circles | CL(c) is among the two largest values of CL over circles }

    CL ... ordering function based on contrast lines
    N ... natural numbers
    x0, y0 ... image coordinates
    r ... radius of a test circle

Iris and pupil can simply be differentiated from each other by their radius. Note that the centres of the two circles need not coincide: "Although the results of the iris search greatly constrain the pupil search, concentricity of these boundaries cannot be assumed. Very often the pupil centre is nasal, and inferior to, the iris centre." [Daug01]

The next passage describes how to compute the function CL: a blur function characterised by a functional core G and scaled with a size parameter sigma is applied in order to reduce the influence of noise on the operation. The Gauss function is usually chosen as the specific G; it cuts off high frequencies and thereby filters out noise. All positions (x0, y0) of the taken iris picture are parsed through and tested as centre points for circles C with different radiuses r. A differentiation, in the direction of the radius r, of the line integral of the image function around the circular arc s of the circle C yields the function CL.

The essential computation of the strength of contrast around the circle according to the described algorithm melts down to the following functional description for CL:

(eq. 2, extended from [Daug01])

    CL(x0, y0, r) = abs( G_sigma(r) * d/dr ∮_{C(x0, y0, r)} I(x, y) / (2 pi r) ds )

    G_sigma ... blur function core (Gaussian of scale sigma), applied by convolution in r
    I ... image (grey value) function
    x0, y0 ... image coordinates
    r ... radius of the test circle

figure 7 : Image of a located iris; the two marked circles carry the highest contrast lines of all circles in the picture. They are interpreted as pupil and iris border lines. (picture source [Daug])

figure 8 : Visualisation of the algorithm for pupil/iris location; it shows one test circle C with radius r around the centre point (x0, y0). The line integral is computed on the image pixels (symbolised by squares) at all positions (x, y) around the circle along s. The areas around (x, y) are blurred within a specified radius.

The computation of CL can be time-optimised by differential methods that are not discussed here. The final result of this second recognition step is detailed information about the iris position, which allows the iris data of interest to be extracted. The next step is to extract this unique data.
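A minimal, unoptimised sketch of the circle search of (eq. 1) and (eq. 2) could look as follows; all numeric parameters (number of sample points, Gaussian scale, candidate grids) are illustrative assumptions, and real implementations use the differential speed-ups mentioned above.

```python
import numpy as np

def circular_mean(img, x0, y0, r, n=64):
    """Mean grey value along the circle of radius r centred at (x0, y0):
    a discretised version of the line integral divided by 2*pi*r."""
    s = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    xs = np.clip(np.round(x0 + r * np.cos(s)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(y0 + r * np.sin(s)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def contrast_line(img, x0, y0, radii, sigma=2.0):
    """CL from (eq. 2): Gaussian-smoothed magnitude of the derivative,
    over r, of the circular means -- large where a circular edge sits."""
    means = np.array([circular_mean(img, x0, y0, r) for r in radii])
    deriv = np.gradient(means)
    k = np.arange(-3, 4)
    g = np.exp(-k**2 / (2 * sigma**2))
    g /= g.sum()
    return np.abs(np.convolve(deriv, g, mode="same"))

def best_circle(img, centres, radii):
    """Exhaustively search candidate centres and radii for the circle
    with the strongest contrast line."""
    best, best_val = None, -1.0
    for (x0, y0) in centres:
        cl = contrast_line(img, x0, y0, radii)
        i = int(np.argmax(cl))
        if cl[i] > best_val:
            best, best_val = (x0, y0, radii[i]), float(cl[i])
    return best
```

Running the same search twice, restricted to small and to large radii, would yield the pupil and the iris boundary respectively, as the section describes.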

8. Mathematical Basis for Extraction

The unique iris information can be found in a very compressed form as the specific spectrum of spatial frequencies of light reflected by the iris. Hence, a demodulation of this reflection (i.e. of the iris image) can give a compact description of the iris. Therefore, a functional transformation (in this specific case an integral transformation) from the pixel space X to the frequency space U has to be applied to the image. An integral transformation has the following fundamental form:

(eq. 3)

    F(u ∈ U) = ∫ C(x ∈ X, u ∈ U) f(x ∈ X) dx

On the one hand, a taken iris image can be described as an image function f that gives an 8-bit value for every position x of the pixel image. On the other hand, the image can be defined by a frequency function F that gives a complex value for every spatial frequency s. This value describes the phase angle and the amplitude of that frequency in the picture. A transformation core C allows transforming the image description from one version into the other. Replacing the general transformation core C with the specific Fourier core leads to the basic formula of the Fourier Transformation (FT):

(eq. 4)

    F(s) = 1/sqrt(2 pi) ∫ e^(-isx) f(x) dx

This transformation is not well suited for a compressing image demodulation because the base functions of the FT are complex sine functions:

(eq. 5)

    e^(-isx) = cos(sx) - i sin(sx)

An edge in the image expresses very high spatial frequencies and thus influences all coefficients. To avoid this, the transformation is extended to the Short-Time Fourier Transformation (STFT), which introduces a window function w:

(eq. 6)

    F(s, b) = 1/sqrt(2 pi) ∫ e^(-isx) f(x) w(x - b) dx

The window w can be positioned on the image by the newly introduced parameter b. It is usual to use a Gauss function as the window function w. But this window function is static. A dynamic adaptation of the window size according to the currently computed frequency can lead to better results, because some selected frequencies can then be demodulated with higher precision than others. This idea leads to the Wavelet Transformation, which scales the window size by a new parameter a:

(eq. 7)

    F(a, b) = 1/sqrt(abs(a)) ∫ h((x - b) / a) f(x) dx

The function h is the core function of this transformation (compare to (eq. 3)) and is called the basis wavelet. In this equation the parameter b translates and the parameter a scales the function h. This form of the basis wavelet is often modified to meet specific needs of the analysis.

9. Tricky Step: Polar Transformation

Another step needs to be performed before the wavelet demodulation can be applied to the iris image. The iris area on the image is a ring, and it is quite difficult to deal with a hole in an image when applying an integral transformation. That is why the image is first mapped into a polar coordinate system whose origin is defined by the points directly neighbouring the central pupil area. Every taken iris is scaled to the same size, so the resulting coordinates are invariant with respect to translation and scaling for all taken iris images and give a basis for iris templates and comparisons.

figure 9 : Simplifying the iris area via transformation and mapping into a polar coordinate system.

Hence, every point (i.e. pixel) of the iris image can be specified by a radius r and an angle theta. After the transformation, the image function f takes this radius and angle instead of a position vector x to give back the 8-bit image value.
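The polar mapping of this section can be sketched as a simple nearest-neighbour resampling between the pupil and the iris boundary circles ("rubber sheet"), which also handles non-concentric centres; the output resolution (n_r x n_theta) is an arbitrary assumption for illustration.

```python
import numpy as np

def to_polar(img, pupil, iris, n_r=8, n_theta=32):
    """Map the ring between the pupil circle and the iris circle onto a
    fixed-size (n_r x n_theta) rectangle, as in figure 9. `pupil` and
    `iris` are (x0, y0, r) triples; their centres may differ."""
    px, py, pr = pupil
    ix, iy, ir = iris
    out = np.empty((n_r, n_theta))
    for j, theta in enumerate(np.linspace(0, 2 * np.pi, n_theta, endpoint=False)):
        # boundary points of the ring along this angle
        x_in, y_in = px + pr * np.cos(theta), py + pr * np.sin(theta)
        x_out, y_out = ix + ir * np.cos(theta), iy + ir * np.sin(theta)
        for i, t in enumerate(np.linspace(0.0, 1.0, n_r)):
            # linear interpolation between inner and outer boundary
            x = int(round((1 - t) * x_in + t * x_out))
            y = int(round((1 - t) * y_in + t * y_out))
            out[i, j] = img[np.clip(y, 0, img.shape[0] - 1),
                            np.clip(x, 0, img.shape[1] - 1)]
    return out
```

Because every iris lands on the same fixed-size rectangle, the result is invariant to the translation and scaling of the eye in the camera image, as the section states.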

10. Daugman's Transformation Formula

In 1993 John Daugman (see figure 11) applied a Wavelet Transformation to iris images in polar coordinates using complex quadrature 2D Gabor wavelets (see the original paper [Daug93]). This step was the central idea of the state-of-the-art iris recognition method that is secured by the U.S. core technology patent "Biometric Personal Identification System Based on Iris Analysis", U.S. Patent No. 5,291,560, issued in 1994. Daugman used a specific 2D base wavelet function h in order to code the two components of one polar coordinate separately, to avoid taking square roots, and to have complete control over translation and scaling of the wavelet. The parameters a and b scale the filter window for the two components of the polar input coordinates. All parameters indexed with 0 describe a translation to position the wavelet filter. Radius r and angle theta are used as the variables of integration.

(eq. 8, extended from [Daug01])

    Fundamental base wavelet h:

    h(r, theta) = e^(-i w (theta0 - theta)) * e^(-(r0 - r)^2 / a^2) * e^(-(theta0 - theta)^2 / b^2)

This base wavelet is used as the core function for an integral transformation from the pixel image (in polar coordinates) to the frequency space as follows:

(eq. 9, extended from [Daug01])

    Integral transformation F:

    F = ∫∫ h(r, theta) f(r, theta) r dr dtheta

F gives a complex value containing a phase angle and an amplitude for every specific wavelet h. The amplitude information depends on brightness, contrast and camera gain of the picture. The phase information is independent of those parameters. That is why only the phase information, and not the amplitude within the power spectrum, is used to describe the iris. This gains good independence from the picture quality and light situation.

In a final step a strong quantisation is applied: only the signs of the resulting complex values are stored.

(eq. 10, extended from [Daug01])

    Daugman's transformation function:

    F_Daugman = sign(F)

    sign ... gives a vector containing the signs of the real and the imaginary part, coded as 0 for negative and 1 for positive
    F_Daugman ∈ {(0,0), (0,1), (1,0), (1,1)}

Hence, the function F_Daugman gives two bits (note again that the transformation F produces complex results) specifying one quadrant of the complex destination plane per analysed frequency of the source polar coordinate system (see figure 10).

figure 10 : Visualisation of Daugman's final iris bitcode differentiating positions on the complex plane. Local regions of an iris are projected onto wavelets, generating complex values whose real and imaginary parts specify the coordinates of a phasor in the complex plane. The angle of each phasor is quantised to one of the four quadrants, setting two bits of phase information. (source [Daug])

This process of computing bit pairs is repeated all across the iris with many wavelets h (i.e. different frequencies, positions and orientations) to finally extract 2,048 bits (256 bytes) out of one iris picture; stored together with an equally long mask of valid bits, the template reaches the often-quoted 512 bytes. Such a describing template can be viewed at the top of figure 7. Different templates can be compared using modified Hamming distances.

The totality of all bit pairs describes the fundamental image of the iris in an amazingly compact form with a maximum of high dimensionality and randomness. The algorithms for iris recognition based on Daugman's transformation earned the British Computer Society's 1997 IT Award and Medal as well as the Smithsonian Award in 2000 and finally the "Time 100" Innovation Award in 2001.
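A toy version of the phase demodulation of (eq. 8)-(eq. 10) might look as follows. It discretises the integrals as plain sums over a small polar image, which is only a rough illustration of Daugman's method; the wavelet parameters w, a, b and the sample positions are arbitrary assumptions, not the patented settings.

```python
import numpy as np

def gabor_bits(polar_img, r0, theta0, w=4.0, a=3.0, b=0.5):
    """Project one local region of the polar iris image onto the 2D Gabor
    wavelet of (eq. 8) and quantise the phase of the complex result to
    two bits, as in (eq. 10)."""
    n_r, n_theta = polar_img.shape
    r = np.arange(n_r)[:, None]
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)[None, :]
    # complex quadrature 2D Gabor wavelet h(r, theta) from (eq. 8)
    h = (np.exp(-1j * w * (theta0 - theta))
         * np.exp(-((r0 - r) ** 2) / a ** 2)
         * np.exp(-((theta0 - theta) ** 2) / b ** 2))
    F = np.sum(h * polar_img)  # discretised integral transformation (eq. 9)
    # only the quadrant of the phasor survives: two bits per wavelet
    return (int(F.real > 0), int(F.imag > 0))

def iris_code(polar_img, positions):
    """Repeat the projection at many positions (r0, theta0) to build
    the bit template."""
    bits = []
    for (r0, theta0) in positions:
        bits.extend(gabor_bits(polar_img, r0, theta0))
    return np.array(bits, dtype=np.uint8)
```

A full-scale encoder would use 1,024 wavelet placements (at several frequencies and orientations) to obtain the 2,048-bit template described above.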

The algorithm can also be used for partially injured (or occluded) irises: even if 2/3 of the iris were completely obscured, an accurate measurement of the remaining part would still result in an equal error rate of 1 in 100,000. Furthermore, the discussed algorithm works independently of the pupil's expansion and contraction and of skews and stretches of the iris.
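Comparing templates under such occlusion is usually done with a masked Hamming distance: bits hidden by eyelids, lashes or reflections are excluded on both sides before the fraction of disagreeing bits is computed. The following sketch assumes templates and masks are 0/1 integer arrays, with a mask bit of 1 marking a valid template bit.

```python
import numpy as np

def masked_hamming(code_a, mask_a, code_b, mask_b):
    """Hamming distance restricted to bits that are valid in both codes."""
    valid = mask_a & mask_b
    n = int(valid.sum())
    if n == 0:
        return 1.0  # nothing comparable: treat as maximally different
    return float(np.count_nonzero((code_a ^ code_b) & valid)) / n
```

Dividing by the number of jointly valid bits is what lets the measure stay meaningful when large parts of the iris are obscured.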

12. Cost

Iris recognition was traditionally one of the most expensive biometric technologies; the cost of a single system was tens of thousands of dollars. The significant drop in the price of computer hardware and cameras, together with several industry cooperations, brought the price of a high-end physical security unit with automatic focusing technology into the $4000-$5000 range. The IriScan PC Iris shows that iris technology can be used in the home or office as well; it is priced in the $500-$800 range.

13. User Acceptance


figure 11 : Prof. John Daugman, OBE; Johann Bernoulli Professor at the Institute of Mathematics and Informatics of the University of Groningen and member of the University of Cambridge Computer Laboratory (picture source [Daug])

The technology itself was designated a Millennium Product by the UK Design Council in 1998, and throughout 2000 it was used in the Millennium Dome. The algorithms are used in most of the currently available iris recognition systems (see Appendix A).

Generally, there are two key public acceptance factors: ease of use and intrusiveness. The usage of an invisible wavelength can help to make the filming procedure more comfortable. Nevertheless, some people are hesitant to use the system due to the perception that the camera is taking a picture of one's eye.

11. Accuracy of the Method


According to Jim Cambier of Iridian, there has not been a single case of false matching reported during any testing of the matching algorithm for iris systems [Ger02]. The odds of two different irises returning a 75% match (i.e. having a Hamming distance of 0.25) can be stated as 1 in 10^16. The equal error rate (the point at which the likelihoods of a false accept and a false reject are the same) is 1 in 1.2 million, and finally the odds of two different irises returning identical iris codes are a striking 1 in 10^52.
figure 12 : Close-up image of the iris system actually used at London Heathrow Airport. Although the scene looks quite artificial, the airport reports good user acceptance and easy usage after a first user introduction. (source BBC Television)

A user-active iris scan without auto-focus camera properties requires more participation of the user, because the capture mechanism needs to be manually focused and the user must be closer to the camera (about 3 inches). Furthermore, users see a reflection of their eye and are thus more aware of what the system is doing. This kind of system might be used for low-cost applications. Nevertheless, for all kinds of systems the measurement process is very fast: it takes about a second to take the image and one second to compute the template.

14. Strength against System Attacks


Latest iris recognition systems can account for fraudulent samples in several ways: 2D samples (photographs) produce noticeably different cornea reflections, and the wavelength used does not allow simple photographs to be presented. The pupil can be tested for liveness by activation with different light situations. There are adaptations of the basic algorithms that detect contact lenses atop the cornea. Some systems additionally use infrared illumination to determine the state of the sample's eye tissue. At the moment there is no known successful system attack. The only way to cheat seems to be an iris transplantation; such a surgical act has never been documented.
figure 13 : Two world-famous photographs of one and the same Afghan refugee. She was re-identified after years by an iris recognition system. The pictures were published by the National Geographic Society. (source [Daug])

The United Nations High Commissioner for Refugees uses iris scan technologies for the anonymous identification of returning Afghan refugees. The current situation underlines a long-known fact: globally operating firms, military forces and state agencies often seem to be the very first organisations investing in new information security technologies.

15. Recent Applications

Applications (so far) have been aviation security, automatic border crossing controls, database access, computer login, building entry, ATMs, and some government programmes. In Britain, the algorithms are currently being considered for biometrically enabled national identity cards. Large airports such as Frankfurt Airport in Germany or Douglas International Airport in North Carolina, USA, allow frequent passengers to register their iris images in order to make the boarding procedures more effective [NCSC]. Other airports around the globe have recently installed these algorithms for passenger screening and immigration control (in lieu of passports), including London Heathrow and Amsterdam Schiphol Airport. In 2003, iris recognition systems will be installed in all 11 international airports in Canada, including Toronto and Vancouver, in automatic iris cameras replacing passport inspection by immigration officers. Japan Airlines has begun a trial at Narita Airport.

16. Forecasts and Visions

Iris technology is forecast to be an important part of a wide range of applications in which a person's identity must be established or confirmed. In general, these cover financial transactions including electronic commerce, information security, entitlements authorisation, building entry, automobile ignition, forensic and police applications, computer login, and any other transaction in which personal identification currently relies just on special possessions or secrets. It is imaginable that a large future world-wide identification system for individuals will store iris images as unique patterns of persons. The time for comparing billions of different iris images can be decreased to around only one second by parallel processing using cheap CPUs [Daug01]. The question is whether such large databases, holding the iris information of millions of registered people, will ever come to exist.

Appendix A: Links to some firms actively dealing with iris recognition systems:

IBM: http://www-916.ibm.com/press/prnews.nsf/contacts
Iridian Technologies, USA: http://www.iridiantech.com
Panasonic: http://www.panasonic.com/medical_industrial/iris.asp
London Heathrow Airport: http://www.cnn.com/2002/TECH/science/02/08/airports.eyes/index.html
Amsterdam Schiphol Airport: http://www.idg.net/go.cgi?id=647567
Charlotte Airport USA: http://www.cnn.com/2000/TECH/computing/07/24/iris.explainer/index.html
IrisAccess: http://www.iridiantech.com/lgiris.htm

Appendix B: References
[Daug] Daugman, J. (2001), personal webpage, University of Cambridge, Internet source (viewed 20/10/02): http://www.cl.cam.ac.uk/users/jgd1000

[Daug01] Daugman, J. (2001), "How iris recognition works", University of Cambridge, Internet source (viewed 20/10/02): http://www.cl.cam.ac.uk/users/jgd1000/irisrecog.pdf

[Daug02] Daugman, J. and Downing, C. (2001), "Epigenetic randomness, complexity, and singularity of human iris patterns", Proceedings of the Royal Society B, 268, Biological Sciences, pp. 1737-1740, Internet source (viewed 20/10/02): http://www.cl.cam.ac.uk/users/jgd1000/roysoc.pdf

[Daug93] Daugman, J. (1993), "High confidence visual recognition of persons by a test of statistical independence", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15(11), pp. 1148-1161.

[NCSC] National Center for State Courts of the United States of America, The Court Technology Laboratory, Internet source (viewed 20/10/02): http://ctl.ncsc.dni.us/biomet%20web/BMIris.html#basics and http://ctl.ncsc.dni.us/biomet%20web/BMIndividuals.html

[Kron] Kronfeld, P. (1962), "Gross anatomy and embryology of the eye", in: The Eye (H. Davson, ed.), Academic Press: London.

[Stei02] Steinwandt, R. (2002), "Wavelet Transformations", Internet source (viewed 10/11/02): http://i31www.ira.uka.de/docs/semin94/03_Wavelet

[Pfitz01] Pfitzmann, A. (2001), "Information Security and Cryptography", Dresden University of Technology, Internet source (viewed 18/09/01): http://ikt.inf.tu-dresden.de

[Mei01] Meissner, K. (2001), Scriptum "Introduction to Multimedia", Dresden University of Technology.

[Fu01] Fuchs, S. (2001), Scriptum "Image Processing", Dresden University of Technology.

[Ger02] Geruso, M. (2002), "An Analysis of the Use of Iris Recognition Systems in U.S. Travel Document Applications", Virginia Tech.

[IS02] International Biometric Group, Independent Expert Group, Internet source (viewed 20/10/02): http://www.iris-scan.com/iris_technology.htm

