Author Topic: Segmentation Techniques for Iris Recognition System


Segmentation Techniques for Iris Recognition System
« on: April 23, 2011, 09:21:29 am »
Author : Surjeet Singh, Kulbir Singh
International Journal of Scientific & Engineering Research, IJSER - Volume 2, Issue 4, April-2011
ISSN 2229-5518

Abstract— A biometric system provides automatic identification of an individual based on a unique feature or characteristic possessed by that individual. Iris recognition is regarded as the most reliable and accurate biometric identification system available. An iris recognition system captures an image of an individual's eye; the iris in the image is then segmented and normalized for the feature extraction process. The performance of iris recognition systems depends heavily on segmentation and normalization. This paper discusses the performance of segmentation techniques for iris recognition systems with the aim of increasing overall accuracy.

Index Terms—Active contour, Biometrics, Daugman’s method, Hough Transform, Iris, Level Set method, Segmentation.

Reliable personal recognition is critical to many processes. Modern societies attach growing importance to systems that increase security and reliability, largely in response to terrorism and other extremist or illegal acts. In this context, public and private entities have increasingly encouraged the use of biometric systems to replace or improve traditional security systems. Basically, the aim is to establish an identity based on who the person is, rather than on what the person possesses or what the person remembers.
Biometrics can be regarded as the automated measurement of biological characteristics in order to obtain a quantitative value that can, with high confidence, distinguish between individuals.

Although less automated, biometrics has been used for centuries at least. The Portuguese writer João de Barros reported its first known application: in the 14th century, Chinese merchants stamped children's palm prints and footprints on paper for identification purposes. In the Western world, until the late 1800s the recognition of individuals was largely done using "photographic memory". In 1883, the French police officer and anthropologist Alphonse Bertillon developed an anthropometric system, known as Bertillonage, to address the problem of identifying convicted criminals.
In 1880, the British scientific journal Nature published articles by Henry Faulds and William James Herschel describing the uniqueness and permanence of fingerprints. This motivated the design of the first elementary fingerprint recognition system by Sir Francis Galton, later improved by Sir Edward R. Henry. The technique disseminated quickly: the first fingerprint system in the United States was inaugurated by the New York State Prison Department in 1903, and the first known conviction based on fingerprint evidence was reported in 1911.
Presently, due to increasing concerns associated with security and the war on terrorism, biometrics has become considerably more relevant. It has moved from a single, almost standardized trait (the fingerprint) to the use of more than ten distinct traits.
According to Matyas Jr. and Riha [1], every biometric system depends on the features it is based on, whether genotypic or phenotypic. Like Daugman [2], these authors divide biometric traits into two types. Fried [3] and A. Bromba [4] classified the origin of biometric traits into three different types: genotypic, behavioral, and randotypic.
Following the proposal of Jain et al. [5], biometric systems can be evaluated regarding seven parameters: uniqueness, universality, permanence, collectability, performance, acceptability and circumvention.
Figure 2 contains a comparison between the most common biometric traits. Each value was obtained through averaging and weighting of the classifications proposed in [6], [7], [4], [8], [9], [10] and [11].
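The averaging-and-weighting procedure behind such a comparison can be sketched as follows. Note that the trait names, per-source ratings, and source weights below are illustrative placeholders, not the actual classifications from [6]-[11]:

```python
# Illustrative sketch: combining per-source classifications of biometric
# traits into one weighted-average score per trait for a single parameter
# (e.g. uniqueness). All numbers are made-up placeholders, not values
# taken from the cited surveys.

# scores[source][trait] = one source's rating of the trait, on a 0-1 scale
scores = {
    "source_A": {"fingerprint": 0.8, "face": 0.4, "iris": 0.9},
    "source_B": {"fingerprint": 0.7, "face": 0.5, "iris": 1.0},
}

# hypothetical weight assigned to each source
weights = {"source_A": 0.6, "source_B": 0.4}

def weighted_average(trait):
    """Weighted mean of one trait's ratings across all sources."""
    total_weight = sum(weights.values())
    return sum(weights[s] * scores[s][trait] for s in scores) / total_weight

for trait in ("fingerprint", "face", "iris"):
    print(trait, round(weighted_average(trait), 3))
```

With these placeholder numbers, the iris receives the highest combined score, mirroring the kind of aggregate ranking a figure of this sort presents.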
For the purposes of our work, one of the most important features is the ability to perform covert recognition, which is possible with the fingerprint, face, iris and palmprint. Among these, the iris stands out, as it provides higher uniqueness and circumvention values.

The iris is a thin, circular structure located anterior to the lens, often compared to the diaphragm of an optical system. The central aperture, the pupil, is actually located slightly nasal and inferior to the iris centre. Pupil size regulates retinal illumination. Its diameter can vary from 1 mm to 9 mm depending on lighting conditions: the pupil is very small (miotic) in brightly lit conditions and fairly large (mydriatic) in dim illumination. The average diameter of the iris is 12 mm, and its thickness varies. It is thickest in the region of the collarette, a circular ridge approximately 1.5 mm from the pupillary margin. This slightly raised jagged ridge was the attachment site for the fetal pupillary membrane during embryologic development. The collarette divides the iris into the pupillary zone, which encircles the pupil, and the ciliary zone, which extends from the collarette to the iris root. The colour of these two zones often differs [12].
The pupillary margin of the iris rests on the anterior surface of the lens and, in profile, the iris has a truncated cone shape such that the pupillary margin lies anterior to its peripheral termination, the iris root. The root, approximately 0.5 mm thick, is the thinnest part of the iris and joins the iris to the anterior aspect of the ciliary body. The iris divides the anterior segment of the globe into anterior and posterior chambers, and the pupil allows the aqueous humor to flow from the posterior into the anterior chamber with no resistance.
