International Journal of Emerging Trends & Technology in Computer Science (IJETTCS)
Web Site: www.ijettcs.org  Email: editor@ijettcs.org, editorijettcs@gmail.com
Volume 1, Issue 2, July – August 2012    ISSN 2278-6856




Acquisition of Iris Images, Iris Localization, Normalization, and Quality Enhancement for Personal Identification

B. Sabarigiri¹, T. Karthikeyan²

¹Research Scholar, Department of Computer Science, PSG College of Arts and Science, Coimbatore, India
²Associate Professor, Department of Computer Science, PSG College of Arts and Science, Coimbatore, India



Abstract: Iris-based biometric personal identification and verification methods have gained much interest with an increasing emphasis on security. The proposed approach comprises acquisition of iris images, iris localization, normalization, and quality enhancement. Algorithms such as the circular Hough transform, Canny edge detection, Gabor filters, the homogeneous rubber sheet model, and Daubechies wavelets were used according to the requirements of the Iris Image Processing (IIP) module. Accurate templates are the key to an iris recognition system, and artifact removal and pre-processing help to produce accurate matching patterns. The proposed work achieved 99.14% accuracy in edge detection, which provides a reliable basis for segmentation and quality enhancement.

Keywords: Iris Segmentation, Quality Enhancement, Circular Hough Transform, Canny Edge Detection, Gabor Filters, Homogeneous Rubber Sheet Model, Daubechies Wavelets, IIP

1. INTRODUCTION
Traditional methods for personal identification are based on what a person possesses (a physical key, an ID card, etc.) or what a person knows (a secret password, etc.). These methods have problems: a key may be lost, an ID card may be forged, and a password may be forgotten.
In recent years, biometric personal identification has received growing interest from both academia and industry [1]. There are two types of biometric features: physiological (e.g., iris, face, fingerprint) and behavioral (e.g., voice and handwriting).
An iris image usually contains artifacts such as the pupil, eyelids, and eyelashes. These artifacts may distort the iris texture, so they must be eliminated through image pre-processing before iris recognition.
MATLAB was used for image manipulation and pre-processing. In this paper, Section 2 deals with the acquisition of iris images, iris image manipulation, segmentation, and edge detection. Section 3 deals with the image processing steps, which include iris localization (pupil and iris boundary detection, eyelash detection, and eyelid detection), followed by iris normalization and iris image quality enhancement. Finally, the experimental results and the conclusion are presented.

Figure 1: Proposed Block Diagram for the Iris Image Processing (IIP) Module

2. ACQUISITION OF IRIS IMAGES
Iris recognition has been an essential research area for personal identification due to its high accuracy and the encouragement of both government and private entities to replace traditional security systems, which suffer from a noticeable margin of error.

Iris images were collected from forty healthy volunteers (25 males and 15 females). Extraction of the iris region is complicated, since the iris is small in size and dark in color. Iris patterns are differentiated by several characteristics, including ligaments, furrows, ridges, crypts, rings, a corona, freckles, and a zigzag collarette. Stability is one of the key advantages of iris recognition, and it is suitable for one-to-many identification. The VeriEye Standard SDK was used to extract iris patterns effectively. The image size is 328×356 pixels at 500 dpi, and the images are stored in JPEG format. The right-eye image of each volunteer was collected, and eight impressions of each eye were captured for iris recognition. Care was taken to control the quality of the eye pictures, and appropriate settings such as lighting and distance to the camera were adjusted [2].
2.1 Iris Image Manipulation
The images captured by the camera are RGB color iris images. We transformed the images from RGB to gray level and from eight-bit to double precision, which facilitates manipulation of the images in the subsequent steps.
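The authors report using MATLAB for these steps; purely as an illustration, a minimal Python/OpenCV sketch of the same conversion (the file name is a placeholder) is:

```python
import cv2
import numpy as np

# Load the captured eye image; "iris.jpg" is a placeholder path, not from the paper.
bgr = cv2.imread("iris.jpg")

# RGB (BGR in OpenCV) colour image to a single-channel gray-level image.
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)

# Eight-bit integers to double precision in [0, 1] for the later filtering steps.
gray = gray.astype(np.float64) / 255.0
```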
2.2 Iris Segmentation and Edge Detection
An image can be viewed as depicting a scene composed of different regions and objects. Image segmentation is the process of decomposing the image into these regions and objects by associating or labeling each pixel with the object to which it corresponds. Hence, segmentation subdivides an image into its constituent regions or objects. Before moving into the IIP module, we need to reduce the noise of the image using Gaussian smoothing, which replaces each pixel by a weighted average of the neighboring pixel values. Mathematically, the 2-D Gaussian function is written as

g(x, y) = \frac{1}{2\pi\sigma^{2}} e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}    (1)

The Gaussian outputs a "weighted average" of each pixel's neighborhood, with the average weighted more towards the value of the central pixels. This is in contrast to the mean filter's uniformly weighted average. Because of this, a Gaussian filter provides gentler smoothing and preserves edges better than a similarly sized mean filter.
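A minimal NumPy/SciPy sketch that builds the kernel of Equation (1) and smooths the gray image from the previous step (the kernel size of 5 and sigma = 1.0 are assumed, illustrative values):

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(size=5, sigma=1.0):
    # Sample Equation (1) on a size x size grid and normalize the weights to sum to 1.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return g / g.sum()

# "gray" is the double-precision gray-level image from Section 2.1.
smoothed = convolve(gray, gaussian_kernel(5, 1.0), mode="nearest")
```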
Edge detection refers to the process of identifying and locating sharp discontinuities in an image. Variables involved in the selection of an edge detection operator include edge orientation, the noise environment, and edge structure. There are many ways to perform edge detection; they may be grouped into two categories: (1) gradient and (2) Laplacian.
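As a brief illustration of the two categories (kernel sizes and data types here are assumptions, not values from the paper):

```python
import cv2

# Gradient category: first-derivative (Sobel) responses and their magnitude.
gx = cv2.Sobel(smoothed, cv2.CV_64F, 1, 0, ksize=3)   # derivative along x
gy = cv2.Sobel(smoothed, cv2.CV_64F, 0, 1, ksize=3)   # derivative along y
gradient_magnitude = cv2.magnitude(gx, gy)

# Laplacian category: second-derivative response (edges lie at zero crossings).
laplacian = cv2.Laplacian(smoothed, cv2.CV_64F, ksize=3)
```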
3. IRIS IMAGE PROCESSING
In the Iris Image Processing (IIP) module, we use image processing algorithms to demarcate the region of interest from the input image containing an eye. The IIP module contains three major tasks: (i) iris localization, (ii) iris normalization, and (iii) iris image enhancement.

3.1 Iris Localization
The iris localization unit contains three sections: (i) pupil detection and iris boundary detection, (ii) eyelid detection, and (iii) eyelash detection.
3.1.1 Pupil Detection and Iris Boundary Detection
Both the inner boundary and the outer boundary of a typical iris can be taken as circles, but the two circles are usually not concentric. Compared with other parts of the eye, the pupil is much darker. The eyelids and eyelashes normally occlude the upper and lower parts of the iris region, and specular reflections can sometimes occur within the iris region, corrupting the iris pattern. These artifacts must be detected first, so that accurate matching patterns (templates) can be produced [3].
The Hough transform is a standard computer vision algorithm that can be used to determine the parameters of simple geometric objects, such as lines and circles, in an image. The circular Hough transform can be employed to deduce the radius and center coordinates of the pupil and iris regions.
An automatic segmentation algorithm based on the circular Hough transform is employed by Wildes et al. [4] and Kong and Zhang [5]. First, an edge map is generated by calculating the first derivatives of the intensity values in an eye image and then thresholding the result. From the edge map, votes are cast in Hough space for the parameters of circles passing through each edge point. These parameters are the center coordinates x_c and y_c and the radius r, which define any circle according to the equation

x_c^{2} + y_c^{2} - r^{2} = 0

A maximum point in Hough space corresponds to the radius and center coordinates of the circle best defined by the edge points.
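The paper does not list the Hough parameter values; a minimal OpenCV sketch of circular Hough voting for the pupil and iris boundaries, with assumed radius ranges and accumulator thresholds, is:

```python
import cv2
import numpy as np

# HoughCircles expects an 8-bit image; rescale the smoothed double-precision image.
eye8 = (smoothed * 255).astype(np.uint8)

# Vote in (xc, yc, r) space and keep the strongest circle in each radius range.
# All parameter values below are illustrative guesses, not values from the paper.
pupil = cv2.HoughCircles(eye8, cv2.HOUGH_GRADIENT, dp=1, minDist=eye8.shape[0],
                         param1=100, param2=20, minRadius=15, maxRadius=60)
iris = cv2.HoughCircles(eye8, cv2.HOUGH_GRADIENT, dp=1, minDist=eye8.shape[0],
                        param1=100, param2=30, minRadius=60, maxRadius=140)

if pupil is not None:
    xc, yc, r = np.round(pupil[0, 0]).astype(int)   # center and radius of the pupil circle
```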
Wildes et al. [4] and Kong and Zhang [5] also make use of the parabolic Hough transform to detect the eyelids, approximating the upper and lower eyelids with parabolic arcs, which are represented as

(-(x - h_j)\sin\theta_j + (y - k_j)\cos\theta_j)^{2} = a_j\left((x - h_j)\cos\theta_j + (y - k_j)\sin\theta_j\right)    (2)

where a_j controls the curvature, (h_j, k_j) is the peak of the parabola, and \theta_j is the angle of rotation relative to the x-axis.
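As a simplified, axis-aligned stand-in for Equation (2) (i.e., assuming the rotation angle is negligible, which the paper does not state), an eyelid arc can be fitted to candidate edge points by least squares:

```python
import numpy as np

def fit_eyelid_parabola(xs, ys):
    # xs, ys: coordinates of candidate eyelid edge points from the biased edge map.
    # Fit y = c2*x^2 + c1*x + c0, a simplification of the rotated parabola in Eq. (2).
    c2, c1, c0 = np.polyfit(xs, ys, deg=2)
    return lambda x: c2 * x**2 + c1 * x + c0   # eyelid boundary as y(x)
```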
In performing the preceding edge detection step, Wildes et al. bias the derivatives in the horizontal direction for detecting the eyelids and in the vertical direction for detecting the outer circular boundary of the iris. The motivation for this is that the eyelids are usually horizontally aligned, and the eyelid edge map would corrupt the circular iris boundary edge map if all of the gradient data were used. Taking only the vertical gradients for locating the iris boundary reduces the influence of the eyelids when performing the circular Hough transform, and not all of the edge pixels defining the circle are required for successful localization.
Not only does this make circle localization more accurate, it also makes it more efficient, since there are fewer edge points to cast votes in the Hough space [6].
Figure 2: (a) An eye image; (b) corresponding edge map; (c) edge map with only horizontal gradients; (d) edge map with only vertical gradients
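A minimal sketch of this directional biasing, reusing the 8-bit image from the earlier Hough sketch (the threshold values are assumptions):

```python
import cv2
import numpy as np

# The derivative along x responds to vertically oriented edges (the circular iris/sclera
# boundary); the derivative along y responds to horizontally oriented edges (the eyelids).
gx = np.abs(cv2.Sobel(eye8, cv2.CV_64F, 1, 0, ksize=3))
gy = np.abs(cv2.Sobel(eye8, cv2.CV_64F, 0, 1, ksize=3))

iris_edge_map = (gx > 60).astype(np.uint8)     # biased towards the circular boundary
eyelid_edge_map = (gy > 60).astype(np.uint8)   # biased towards the eyelids
```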
An active contour model can also be used to localize the iris. The contour is defined as a set of n vertices connected as a simple closed curve. The movement of the contour is caused by internal and external forces acting on the vertices; the internal forces expand the contour into a perfect circle.

Figure 3: Pupil Detection

The external forces push the contour inward. The average radius and center of the contour obtained are taken as the parameters of the iris boundary. The discrete circular active contour search for the iris boundary is affected by specular reflections from the cornea; therefore, an image preprocessing algorithm is required to remove the specular reflections.
3.1.2 Eyelid Detection
Texture segmentation is adopted to detect the upper and lower eyelids in [7]. The energy of the high-frequency spectrum in each region is computed to segment the eyelashes, and the region with high frequency is considered the eyelash area. The information about the pupil position is used, and the upper eyelashes are fitted with a parabolic arc; this parabolic arc indicates the position of the upper eyelid. For eyelid detection, the histogram of the original image is used: the lower eyelid area is segmented to compute the edge points of the lower eyelid, and the lower eyelid is fitted to these edge points.
In [8], the Daubechies wavelet method is used to decompose the original image into four bands: HH, HL, LH, and LL. Canny edge detection is applied to the LH image. To minimize the influence of eyelashes, the Canny edge detector is tuned to the horizontal direction. The Canny edge detection algorithm runs in five steps: (1) smoothing, (2) finding gradients, (3) non-maximum suppression, (4) double thresholding, and (5) edge tracking by hysteresis. The edge points that are close to each other are connected to detect the upper eyelid, and the longest connected edge that fits a parabolic arc is taken as the upper eyelid. To detect the lower eyelid, the steps are repeated on the lower iris boundary area.
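A sketch of the wavelet decomposition followed by a horizontally biased Canny pass (the specific Daubechies filter "db4", the pre-smoothing kernel, and the hysteresis thresholds are assumptions, since the paper does not state them):

```python
import cv2
import numpy as np
import pywt

# One-level 2-D Daubechies decomposition: approximation plus horizontal, vertical and
# diagonal detail sub-bands (the LL, LH, HL, HH bands of the text, up to naming convention).
cA, (cH, cV, cD) = pywt.dwt2(gray, "db4")

# Canny on the horizontal-detail band; smoothing more strongly along x than along y
# biases the detector towards horizontally oriented (eyelid) edges.
band8 = cv2.normalize(cH, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
band8 = cv2.blur(band8, (9, 1))
eyelid_edges = cv2.Canny(band8, 30, 90)
```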
3.1.3 Eyelash Detection
Gabor filtering and variance-of-intensity approaches are proposed for eyelash detection. The eyelashes are categorized into separable eyelashes and multiple eyelashes. Separable eyelashes are detected using 1-D Gabor filters: a low output value is obtained from the convolution of a separable eyelash with the Gabor filter. For multiple eyelashes, the variance of intensity is very small; if the variance of intensity in a window is smaller than a threshold, the center of the window is considered to belong to an eyelash.

Figure 4: Eyelash and Eyelid Detection
3.2 Iris Normalization
Irises may be captured at different sizes depending on the imaging distance, and, due to illumination variations, the radial size of the pupil may change accordingly. The resulting deformation of the iris texture will affect the performance of the subsequent feature extraction and matching stages; therefore, the iris region needs to be normalized to compensate for these variations.
The homogeneous rubber sheet model algorithm remaps each pixel in the localized iris region from Cartesian coordinates to polar coordinates [9], [10]. The non-concentric polar representation is normalized to a fixed-size rectangular block. The homogeneous rubber sheet model accounts for pupil dilation, imaging distance, and non-concentric pupil displacement, but it does not compensate for rotational variance.
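A minimal NumPy sketch of the rubber sheet remapping, assuming the pupil and iris circles have already been localized (the output resolution and nearest-neighbour sampling are illustrative choices):

```python
import numpy as np

def rubber_sheet(img, pupil_xy, pupil_r, iris_xy, iris_r, radial_res=64, angular_res=256):
    # Sample the iris annulus along rays running from the pupil boundary to the iris
    # boundary and unwrap it into a fixed-size rectangular block.
    thetas = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    radii = np.linspace(0, 1, radial_res)

    # Boundary points of the (possibly non-concentric) pupil and iris circles per angle.
    px = pupil_xy[0] + pupil_r * np.cos(thetas)
    py = pupil_xy[1] + pupil_r * np.sin(thetas)
    ix = iris_xy[0] + iris_r * np.cos(thetas)
    iy = iris_xy[1] + iris_r * np.sin(thetas)

    # Linear interpolation between the two boundaries, then nearest-neighbour sampling.
    xs = np.outer(1 - radii, px) + np.outer(radii, ix)
    ys = np.outer(1 - radii, py) + np.outer(radii, iy)
    xs = np.clip(np.rint(xs).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.rint(ys).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]   # radial_res x angular_res normalized block
```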
Table 1: Accuracy of Iris Boundary Detection

    Method                  Accuracy (%)
    Hanho Sung [11]         98.55
    Xiaofu He [12]          97.67
    Cui et al. [13]         97.35
    Md. Slim [14]           90
    Mojtaba Najafi [15]     98.64
    Proposed System         99.14

3.3 Image Enhancement
The original image has low contrast and may have non-uniform illumination caused by the position of the light source.

Figure 5: Iris Normalization and Texture Image after Enhancement

These effects may impair the results of the texture analysis. We enhance the iris image and reduce the effect of non-uniform illumination by means of local histogram equalization, a median filter, and the 2-D Wiener filter. The window size used for the median filter is 3×3, and the local mean and local variance are computed according to the following equations:

\mu = \frac{1}{MN} \sum_{(n_1, n_2) \in \eta} a(n_1, n_2)    (3)

\sigma^{2} = \frac{1}{MN} \sum_{(n_1, n_2) \in \eta} a^{2}(n_1, n_2) - \mu^{2}    (4)

where \eta refers to the local window of size M×N around each pixel being processed, \mu is the local mean, and \sigma^{2} is the local variance. The output of the Wiener filter is obtained as follows:

b(n_1, n_2) = \mu + \frac{\sigma^{2} - \nu^{2}}{\sigma^{2}} \left( a(n_1, n_2) - \mu \right)    (5)

where \nu^{2} denotes the noise variance, which is taken to be equal to the average of the local variances.
4. CONCLUSION
The algorithms used in this paper already exist individually, but the particular combination of them produced the reported iris segmentation accuracy. These steps will help to produce better feature extraction and matching patterns in an iris recognition system. In future work, EEG will be added as an additional modality alongside the iris; the combination of the two is expected to create a high-performance multimodal biometric system.

REFERENCES
[1] Special Issue on Biometrics, Proceedings of the IEEE, Vol. 85, No. 9, Sept. 1997.
[2] T. Karthikeyan, B. Sabarigiri, "Countermeasures against Iris Spoofing and Liveness Detection Using Electroencephalogram (EEG)", International Conference on Computing, Communication and Applications (ICCCA), 2012.
[3] Ujwalla Gawande, Mukesh Zaveri, Avichal Kapur, "Improving Iris Recognition Accuracy by Score Based Fusion Method", International Journal of Advancements in Technology, Vol. 1, No. 1, June 2010.
[4] R. P. Wildes, "Iris Recognition: An Emerging Biometric Technology", Proceedings of the IEEE, Vol. 85, No. 9, pp. 1348-1363, 1997.
[5] W. K. Kong, D. Zhang, "Detecting Eyelash and Reflection for Accurate Iris Segmentation", Proceedings of the International Symposium on Intelligent Multimedia, Video and Speech Processing, Vol. 8, pp. 897-906, 2005.
[6] Deepak Sharma, Ashok Kumar, "Iris Recognition - An Effective Human Identification", International Journal of Computing and Business Research, Vol. 2, Issue 2, May 2011.
[7] J. Cui, Y. Wang, T. Tan, L. Ma, Z. Sun, "A Fast and Robust Iris Localization Method Based on Texture Segmentation", SPIE Defense and Security Symposium, Vol. 5404, pp. 401-408, 2004.
[8] Y. Chen, S. Dass, A. Jain, "Localized Iris Image Quality Using 2-D Wavelets", Proceedings of the International Conference on Biometrics, 2006.
[9] J. Daugman, "High Confidence Visual Recognition of Persons by a Test of Statistical Independence", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, pp. 1148-1161, 1993.
[10] J. Daugman, "How Iris Recognition Works", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, pp. 21-30, 2004.
[11] Hanho Sung, Jaekyung Lim, et al., "Iris Localization Using Collarette Boundary Localization", 17th International Conference on Pattern Recognition, Cambridge, UK, Vol. 4, pp. 857-860, August 2004.
[12] Xiaofu He, Pengfei Shi, "A New Segmentation Approach for Iris Recognition Based on Hand-held Capture Device", Pattern Recognition, Vol. 40, Issue 4, pp. 1326-1333, April 2007.
[13] Richard Y. F. Ng, "An Effective Segmentation Method for Iris Recognition System", 5th International Conference on Visual Information Engineering, Xi'an, China, pp. 548-553, August 2008.
[14] Md. Slim et al., "Iris Recognition: A New Approach for Iris Segmentation", 12th ICCIT, 2009.
[15] Mojtaba Najafi, Sedigheh Ghofrani, "A New Iris Identification Method Based on Ridgelet Transform", 3rd International Conference on Machine Vision (ICMV), 2010.




AUTHORS

Sabarigiri B. received the M.C.A. and M.Phil. degrees in Computer Science in 2007 and 2010 respectively, and is currently working towards the Ph.D. degree in Computer Science at PSG College of Arts and Science, Coimbatore. He is involved in the development of a multimodal biometric system which includes iris and EEG. He has published 7 papers in national and international conferences and journals. His areas of interest include Medical Image Processing, Biometrics, Fusion Techniques and Neuro Imaging.

Thirunavu Karthikeyan received his graduate degree in Mathematics from Madras University in 1982, his postgraduate degree in Applied Mathematics from Bharathidasan University in 1984, and his Ph.D. in Computer Science from Bharathiar University in 2009. Presently he is working as an Associate Professor in the Computer Science Department of P.S.G. College of Arts and Science, Coimbatore. His research interests are Image Coding, Medical Image Processing and Data Mining. He has published many papers in national and international conferences and journals, and has completed many funded projects with excellent comments. He has contributed as a program committee member for a number of international conferences, is a review board member of various reputed journals, and is a board of studies member for various autonomous institutions and universities.



