


International Journal of Computer Information Systems, Vol. 2, No. 6, 2011

Handprint Recognition Technique Based on Image Segmentation

Miss. Khamael Abbas Al-Dulaimi
Department of Computer Science
College of Science, Al-Nahrain University
Baghdad, Iraq

Mr. Aiman Abdul Razzak Al-Saba’awi
Department of Computer Science
College of Science, Al-Nahrain University
Baghdad, Iraq

Abstract—In this paper, a handprint recognition system based on an image segmentation technique called JSEG (J-image segmentation) is presented. Hand recognition systems should provide a reliable personal recognition scheme to either confirm or determine the identity of an individual. Applications of such a system include computer system security, secure electronic banking, secure access to buildings, and health and social services. A criterion over local windows is applied to the class-map results of the "J-image", in which high and low values correspond to possible region boundaries and region centers, respectively. A region-growing method is then used to segment the color image into regions based on the multi-scale J-images, the centric point of each region is determined, and the Euclidean distance between each pixel (x, y) on the boundary and the centric point is calculated. The least-squares criterion is finally utilized to determine the difference between the hand images existing in the database file and a new hand image.

Keywords: JSEG algorithm, Euclidean distance, region growing

I. INTRODUCTION

Image segmentation is the foundation of hand recognition and computer vision. In general, image noise should be eliminated through image preprocessing, and some specific work (such as region extraction and image marking) is done after the main segmentation operation for the sake of a better visual result. Image segmentation is the process of partitioning a digital image into multiple meaningful regions, or sets of pixels, with respect to a particular application. The segmentation is based on measurements taken from the image, which might be grey level, color, texture, depth, or motion. The result of image segmentation is a set of segments that collectively cover the entire image: all the pixels in a region are similar with respect to some characteristic or computed property, such as color, intensity, or texture, while adjacent regions differ with respect to the same characteristics. The JSEG algorithm is one of the frequently used techniques in digital image processing [1].

Today a wide variety of applications require reliable verification schemes to confirm the identity of an individual, and recognizing humans based on their body characteristics has become more and more interesting in emerging technology applications. Biometrics cannot be borrowed, stolen, or forgotten, and forging one is practically impossible. Among different body characteristics, the handprint has attracted scientists in the last decade, and many papers have been presented based on it [2].

There are several reasons why handprint recognition has found popularity. The most important factors are: (1) low complexity, (2) high accuracy, (3) low-resolution imaging, (4) stable line features, and (5) high user acceptance [4].

Hand recognition technology exploits some of these hand features. Friction ridges do not always flow continuously throughout a pattern and often result in specific characteristics such as ending ridges, dividing ridges, and dots. A hand recognition system is designed to interpret the flow of the overall ridges to assign a classification and then extract the minutiae detail, a subset of the total amount of information available, yet enough information to effectively search a large repository of palmprints. Minutiae are limited to the location, direction, and orientation of ridge endings and bifurcations (splits) along the ridge path [4].

In this paper, a criterion over local windows is applied to the class-map results of the "J-image", in which high and low values correspond to possible region boundaries and region centers, respectively. A region-growing method is then used to segment the color image into regions based on the multi-scale J-images, the centric point of each region is determined, and the Euclidean distance between each pixel (x, y) on the boundary and the centric point is calculated. The least-squares criterion is finally utilized to determine the difference between the hand images existing in the database file and a new hand image.

II. PRIOR WORK

A considerable number of papers have been published in the last decade about biometric recognition using palm-print features. One of the most important stages in these methods is pre-processing, which contains operations such as filtering, Region Of Interest (ROI) extraction, normalization, etc. This paper proposes a precise method for extracting the ROI of the palm-print; the basis of this method is geometrical calculations and the Euclidean distance.

Biometrics-based authentication is a verification approach using the biological features inherent to each individual. They are processed based on the identical, portable, and arduous-to-

       June Issue                                              Page 7 of 78                                     ISSN 2229 5208
duplicate characteristics. Another paper proposes a scanner-based personal authentication system using palm-print features. The authentication system consists of enrollment and verification stages: in the enrollment stage, the training samples are collected and processed to generate the matching templates.

Although a considerable amount of work has been done to improve the accuracy of biometric verification systems on the recognition or classification front, not much work has been reported in the area of feature selection for biometric-based verification systems. In [5] a feature selection mechanism has been proposed for a hand-geometry based identification system; its authors perform statistical analysis to determine the discriminability of the features using multiple discriminant analyses.

III. METHODOLOGY

Suppose you have M hand images of the training set. All hand images must be of the same size (e.g. n × n pixels).

Figure 1. Training set of four hand images, each of 256×256 pixels size.

The first step of the algorithm is to quantize the colors in the image in order to obtain edges that can be used to differentiate regions, where a threshold, which may be a dynamic value, decides the quantization level, as shown in Fig. 2.

Figure 2. Training set of four hand images after color quantization.

In the second step some calculations are performed on the quantized image. The image is partitioned into classes so that the calculation can be performed on each class independently, where the new value of each pixel is extracted from the difference between the original value of the pixel and the mean of its class.

- The class-map can be viewed as a set of spatial data points located in a 2-D plane. The value of each point is the image pixel position, a 2-D vector (x, y).
- These data points have been classified and each point is assigned a label.

Before proceeding further, let us first consider the measure J defined as follows. Let Z be the set of all N data points in the class-map. Let $z = (x, y)$, $z \in Z$, and let m be the mean,

$m = \frac{1}{N} \sum_{z \in Z} z$    (1)

Suppose Z is classified into C classes $Z_i$, $i = 1, \ldots, C$. Let $m_i$ be the mean of the $N_i$ data points of class $Z_i$,

$m_i = \frac{1}{N_i} \sum_{z \in Z_i} z$    (2)

Let

$S_T = \sum_{z \in Z} \lVert z - m \rVert^2$    (3)

$S_W = \sum_{i=1}^{C} S_i = \sum_{i=1}^{C} \sum_{z \in Z_i} \lVert z - m_i \rVert^2$    (4)

The measure J is defined as

$J = \frac{S_B}{S_W} = \frac{S_T - S_W}{S_W}$    (5)

It essentially measures the distance between different classes, $S_B$, over the distance between the members within each class, $S_W$. A higher value of J indicates that the classes are more separated from each other and the members within each class are closer to each other, and vice versa [7].

Spatial Segmentation Algorithm

The characteristics of the J-images allow us to use a region-growing method to segment the image. Considering the original image as one initial region, the algorithm starts segmenting all the regions in the image at an initial large scale.

Figure 3. J-images

A. Valley Determination

At the beginning, a set of small initial areas is determined to form the bases for region growing. These areas have the lowest local J values and are called valleys. In general, finding the best set of valleys in a region is a non-trivial problem. The following simple heuristics have provided good results in the experiments:

- Calculate the average and the standard deviation of the local J values in the region, denoted by $\mu_J$ and $\sigma_J$.
- Set a threshold $T_J$ at

  $T_J = \mu_J + a \, \sigma_J$    (6)

- Pixels with local J values less than $T_J$ are considered as candidate valley points. Connect the candidate valley points based on 4-connectivity to obtain candidate valleys.
- If a candidate valley has a size larger than the minimum size for the current scale, it is determined to be a valley.
- The parameter a is chosen from the set of values [-0.6, -0.4, -0.2, 0, 0.2, 0.4] that gives the most valleys [6].
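As a concrete illustration of the measure J and the valley threshold $T_J$, the following Python sketch computes J for a small class-map and marks candidate valley points. It is a minimal illustration under stated simplifications (no local windowing over multiple scales, no 4-connectivity grouping, no minimum-size test), not the authors' implementation:

```python
import numpy as np

def j_measure(class_map: np.ndarray) -> float:
    """J = S_B / S_W = (S_T - S_W) / S_W over a 2-D map of class labels."""
    rows, cols = np.indices(class_map.shape)
    z = np.stack([cols, rows], axis=-1).reshape(-1, 2).astype(float)  # z = (x, y)
    labels = class_map.ravel()
    m = z.mean(axis=0)                              # overall mean m
    s_t = ((z - m) ** 2).sum()                      # total scatter S_T
    s_w = 0.0
    for c in np.unique(labels):
        zi = z[labels == c]                         # points of class Z_i
        s_w += ((zi - zi.mean(axis=0)) ** 2).sum()  # within-class scatter S_W
    return (s_t - s_w) / s_w

def valley_candidates(local_j: np.ndarray, a: float = 0.0) -> np.ndarray:
    """Mark pixels whose local J value falls below T_J = mu_J + a * sigma_J."""
    t_j = local_j.mean() + a * local_j.std()
    return local_j < t_j
```

For a class-map whose classes form compact, well-separated spatial regions, J is large; for thoroughly mixed classes, J approaches zero.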

B. Valley Growing

The new regions are then grown from the valleys. It is slow to grow the valleys pixel by pixel, so a faster approach is used in the implementation:

1) Remove "holes" in the valleys.
2) Average the local J values in the remaining unsegmented part of the region and connect pixels below the average to form growing areas. If a growing area is adjacent to one and only one valley, it is assigned to that valley.
3) Calculate local J values for the remaining pixels at the next smaller scale to more accurately locate the boundaries, and repeat step 2.
4) Grow the remaining pixels one by one at the smallest scale. Unclassified pixels at the valley boundaries are stored in a buffer. Each time, the pixel with the minimum local J value is assigned to its adjacent valley, and the buffer is updated until all the pixels are classified [7].

C. Region Merge

After region growing, an initial segmentation of the image is obtained. It often contains over-segmented regions, which are merged based on their color similarity. The quantized colors naturally form color-histogram bins; the color-histogram features for each region are extracted, and the distances between these features can be calculated. Since the colors are very coarsely quantized, our algorithm assumes that there are no correlations between the quantized colors, so a Euclidean distance measure is applied directly. First, the distances between neighboring regions are calculated and stored in a distance table; the pair of regions with the minimum distance is then merged together [7].

Figure 4. Region growing for J-images

In this step the regions obtained in the previous step are mapped onto the original image in order to find the segments of the image. The check is performed on the image that contains only the regions: if a pixel is colored black, its color and location are mapped to the original image, so that the segments appear clearly.

Figure 5. Mapping on original images

The next stage in our approach is determining the centric point. The information we have up to now is the boundary of the hand as sequential coordinates in the image. We now use these boundary pixels and the centric point of the boundary image. The centric point of the hand boundary image is calculated by equations (8) and (9); here f(i, j) is the hand image function and ($X_{Centroid}$, $Y_{Centroid}$) are the coordinates of the centric point.

$m_{pq} = \sum_i \sum_j i^p j^q f(i, j)$    (7)

$X_{Centroid} = \frac{m_{10}}{m_{00}}$    (8)

$Y_{Centroid} = \frac{m_{01}}{m_{00}}$    (9)
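The centric point defined by the raw-moment equations above can be computed in a few lines. This is a sketch for a grayscale or binary hand image f(i, j), not the paper's exact code:

```python
import numpy as np

def centric_point(f: np.ndarray) -> tuple[float, float]:
    """Centric point of an image via raw moments m_pq = sum_ij i^p j^q f(i, j)."""
    i, j = np.indices(f.shape)        # i: row index, j: column index
    m00 = float(f.sum())              # zeroth moment (total mass)
    m10 = float((i * f).sum())        # first-order moment in i
    m01 = float((j * f).sum())        # first-order moment in j
    return m10 / m00, m01 / m00       # (X_centroid, Y_centroid)
```

For a binary boundary image, f is 1 on hand pixels and 0 elsewhere, so the centric point is simply the mean coordinate of the hand pixels.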

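The region-merge step described in Section C can likewise be sketched. Histograms here are plain arrays over the quantized colors; for brevity this illustration compares all region pairs rather than only neighboring ones, and the stopping threshold is an assumed parameter rather than one from the paper:

```python
import numpy as np

def merge_closest_regions(histograms: dict[int, np.ndarray],
                          threshold: float) -> dict[int, np.ndarray]:
    """Repeatedly merge the pair of regions whose color histograms
    have the smallest Euclidean distance, until no pair is close enough."""
    regions = {k: v.astype(float) for k, v in histograms.items()}
    while len(regions) > 1:
        keys = list(regions)
        best, best_d = None, np.inf
        # build the distance table over all region pairs
        for a in range(len(keys)):
            for b in range(a + 1, len(keys)):
                d = np.linalg.norm(regions[keys[a]] - regions[keys[b]])
                if d < best_d:
                    best, best_d = (keys[a], keys[b]), d
        if best_d > threshold:
            break                              # nothing close enough to merge
        a, b = best
        regions[a] = regions[a] + regions[b]   # merged region's histogram
        del regions[b]
    return regions
```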
We calculate the Euclidean distance between each pixel (x, y) on the boundary and the centric point as in equation (10); the results are shown in Table I.

$D(x, y) = \sqrt{(x - x_{Centroid})^2 + (y - y_{Centroid})^2}$    (10)

Figure 6. Centric point of images

Using these key points, we find the points A, B, C, D that are the initial and end points of the holes between the fingers in the boundary image. A sample of the way we find these points is shown in Fig. 6 (a, b, c, d). We find H1 and then calculate the Euclidean distance of the pixels on the boundary. As is obvious, the similarity between the verifying hand and the trained set can be represented by the minimum-distance test (i.e. utilizing the Mean-Square-Error (MSE) criterion), given by:

$\min\{MSE_K\} = \min\{\sum_i (V_{K,i} - V_{M+1,i})^2\}$, for $K = 1, 2, \ldots, M$    (11)

Table I. Euclidean distances between boundary pixels and the centric point for the training set

No   ED hand1   ED hand2   ED hand3   ED hand4
 1      3         12.3       21.6       12.9
 2     12         20         28         15
 3     20          2.5       15         32.5
 4     30         30.2       30.4       30.6
 5     11.5       16         20.5       25
 6     16         25         34         43
 7     22.3       10          2.3       14.6
 8     12          4.2        3.6       11.4

IV. EXPERIMENTS AND RESULTS

We used the database for the hand recognition experiments. Here we have experimented with nearly 60 images with variations from 40 persons. The experimental results show the effectiveness of the proposed method; they clearly show the flexibility of the method with respect to rotation and translation of the hands. A preview of the hand database is shown in Figs. 7, 8, 9, 10, and 11, and the experimental observations of the experiments performed on the datasets are shown in Table II and Table III as follows.

Suppose you have a trained set of hand images, as illustrated in Fig. 7.

Figure 7. Training set of hand

Following the same procedures done for the trained set (mentioned above), each hand image is segmented into regions.

Figure 8. J-Image

Figure 9. Region growing

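The boundary-distance feature and the minimum-MSE matching rule can be sketched together as follows. The helper names are illustrative (they do not come from the paper), and $V_K$ stands for the vector of boundary-to-centric-point distances of training hand K:

```python
import numpy as np

def boundary_distances(boundary: np.ndarray, centroid: np.ndarray) -> np.ndarray:
    """D(x, y) = sqrt((x - x_c)^2 + (y - y_c)^2) for each boundary pixel.
    boundary has shape (n, 2); centroid has shape (2,)."""
    return np.sqrt(((boundary - centroid) ** 2).sum(axis=1))

def best_match(trained: list[np.ndarray], verify: np.ndarray) -> int:
    """Return the index K minimizing MSE_K = sum_i (V_K,i - V_verify,i)^2."""
    mses = [((v - verify) ** 2).sum() for v in trained]
    return int(np.argmin(mses))
```

The verifying image is thus assigned to the training hand whose distance vector it matches most closely, which is how the Min{MSE} columns of Table III are read.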

Figure 10. Mapping on original image

Suppose an image existing within the trained set has been selected to be verified after rotation by 90°, as illustrated in Fig. 11.

Figure 11. Verify image after rotation by 90°

The Euclidean distances between the trained set and the verifying hand image are as listed below:


No   ED hand1  ED hand2  ED hand3  ED hand4  ED hand5  ED hand6  ED hand7  ED hand8  ED hand9  ED hand10  ED hand11
 1      10        51         5        17        14        12        10         7         5         2         10
 2      33        10        41        36        40        44        48        52        56        60         40
 3      22         2         9         2         8.5       15        21.5      28        34.5      41         21
 4      40        23        36        29        27        25        23        21        19        17         23
 5      15        24        11        12.6      10.66      8.66      6.67      4.67      2.6       0.8         8
 6      12        15        27        33        40.5      48        55.5      63        70.5      78         55.5
 7      10        41         2         9.6       6         1.67      2.3       6.33     10        14          2.3
 8      20        29        20        23        23        23        23        23        23        23         22
 9      33        19        20        11         4.5       2         8.5      15        21.5      28          8.5
10      12         2        13        10        10.5      11        11.5      12        12.5      13         11.5

The Min{MSE} between the trained set and the verifying hand image is clearly that between ED hand7 and ED hand11, i.e. Min{MSE_K} = MSE_7, as listed below:


                           No     MSE1          MSE2      MSE3      MSE4      MSE5          MSE6      MSE7     MSE8      MSE9           MSE10
                            1        0           41           -5     7          4            2             0      -3        -5            -8
                            2       -7           -30           1     -4         0            4             8      12       16            20
                            3        1           -19          -12    -19      -12.5          -6        0.5        7       13.5           20
                            4       17            0           13     6          4            2             0      -2        -4            -6
                            5        7           16            3     4.6       2.66         0.66       -1.33    -3.33      -5.4          -7.2
                            6      -43.5        -40.5     -28.5     -22.5      -15          -7.5           0     7.5       15            22.5
                            7       7.7          38.7      -0.3      7.3       3.7          -0.63          0     4.03      7.7           11.7
                            8       -2            7           -2     1          1            1             1      1         1             1
                            9      24.5          10.5      11.5      2.5        -4          -6.5           0     6.5       13            19.5
                           10       0.5          -9.5         1.5   -1.5        -1          -0.5           0     0.5        1            1.5

V. CONCLUSION

In this paper a new approach to hand-geometry feature extraction has been developed: the JSEG algorithm is used to segment the hand image into regions, and the distances between regions are then calculated. In this way we can extract features without imposing any restriction on the user, which makes it possible to identify the hand under virtually any rotation and translation. The developed method gives better recognition accuracy and better discriminatory power, and significantly reduces the computational load and calculation time.

REFERENCES

[1] Y. Ramadevi, T. Sridevi, B. Poornima, B. Kalyani, "Segmentation and Object Recognition Using Edge Detection Techniques", Department of CSE, Chaitanya Bharathi Institute of Technology, Gandipet.
[2] K. Delac, M. Grgic, "A Survey of Biometric Recognition Methods", 46th International Symposium Electronics in Marine, ELMAR-2004, June 2004.
[3] A. K. Jain, A. Ross, S. Prabhakar, "An Introduction to Biometric Recognition", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, No. 1, January 2004.
[4] D. Zhang, J. You, "Online Palmprint Identification", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 25, No. 9, pp. 1041-1050, 2003.
[5] W. K. Kong, D. Zhang, W. Li, "Palmprint Feature Extraction Using 2-D Gabor Filters", Pattern Recognition, Vol. 36, pp. 2339-2347, 2003.
[6] G. Boreki, A. Zimmer, "Hand Geometry: A New Approach for Feature Extraction", Computer Engineering Department, UNICENP, Centro Universitário Positivo, Curitiba, Brazil.
[7] S. Ribaric, I. Fratric, "A Biometric Identification System Based on Eigenpalm and Eigenfinger Features", IEEE Transactions on Pattern Analysis and Machine Intelligence, 24:1698-1709, 2005.
[8] Y. Deng, B. S. Manjunath, H. Shin, "Color Image Segmentation", Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106-9560; Samsung Electronics Inc.

AUTHORS PROFILE

Miss. Khamael Abbas Al-Dulaimi received her M.Sc. degree in Computer Science from the College of Science, Baghdad University, in 2005. The author is working as a lecturer in the Computer Science Department at Al-Nahrain University.

Mr. Aiman Abdul Razzak Al-Saba’awi received his Higher Diploma of Computer Science from the Informatics Institute for Postgraduate Studies in 2002. The author is working as a programmer in the Computer Science Department at Al-Nahrain University.
