A Comparison of Genetic Feature Selection and Weighting Techniques for Multi-Biometric Recognition

Khary Popplewell, Aniesha Alford, Gerry Dozier, Kelvin Bryant, John Kelly, Josh Adams, Tamirat Abegaz, Kamiliah Purrington, and Joseph Shelton

Center for Advanced Studies in Identity Sciences (CASIS@A&T)
North Carolina Agricultural and Technical State University
1601 E Market St., Greensboro, NC 27411

ktpopple@ncat.edu, aalford@ncat.edu, gvdozier@ncat.edu, ksbryant@ncat.edu, jck@ncat.edu, jcadams2@ncat.edu, tamirat@programmer.net, kmpurrin@ncat.edu

ABSTRACT
In this paper, we compare genetic-based feature selection (GEFeS) and weighting (GEFeW) techniques for multi-biometric recognition using face and periocular biometric modalities. Our results show that fusing face and periocular features outperforms face-only and periocular-only biometric recognition. Of the two genetic-based approaches, GEFeW outperforms GEFeS.

Categories and Subject Descriptors
I.5.4 [Pattern Recognition]: Applications – Computer Vision, Signal Processing.

General Terms
Algorithms, Measurement, Performance, Reliability, Experimentation, Verification.

Keywords
Eigenface, Feature Selection, Feature Weighting, Local Binary Pattern (LBP), Steady State Genetic Algorithm (SSGA)

1. INTRODUCTION
A genetic algorithm (GA) can be described as a simulation of Darwinian evolution in an effort to evolve an optimal (or near-optimal) solution to a problem [4]. GAs have been used to solve a variety of complex, real-world problems [1, 3, 4]. In the area of biometrics, GAs have been successfully applied to feature selection in an effort to reduce the number of features needed while improving recognition accuracy [1, 3].

The biometrics research community is seeing an increasing interest in feature selection [1, 3, 5]. Gentile et al. proposed using short-length iris codes (SLIC), which significantly reduced the number of bits needed for recognition, improving efficiency while maintaining accuracy [5]. Dozier et al. proposed GRIT (Genetically Refined Iris Templates) as a method for reducing the number of bits needed as well [3]. Adams et al. [1] used genetic and evolutionary feature selection to improve recognition accuracy while reducing the number of bits needed for matching. Weighting the features of a feature set for a particular biometric modality has also been shown to be a successful method of increasing recognition accuracy [6, 7]. In feature weighting, less discriminating features are typically assigned lower weights relative to more discriminating features rather than being removed altogether as in feature selection.

Much of the work in the biometrics literature related to feature selection and weighting has focused on a single biometric modality [1, 3]. In this paper, we use Genetic & Evolutionary feature selection (GEFeS) and weighting (GEFeW) techniques in an effort to increase the performance of multi-biometric recognition. The underlying rationale for multi-biometrics is that one may achieve greater recognition accuracy by using multiple biometric modalities [6]. In this paper we use the face and periocular biometric modalities. The features from the face are extracted via the Eigenface method [12] while the periocular features are extracted via Local Binary Patterns (LBP) [10].

The remainder of this paper is as follows. In Section 2, we provide a brief introduction to Eigenface and LBP feature extraction. In Section 3, GEFeS and GEFeW are introduced. In Section 4, we describe our experiments, and in Section 5, the results of our experiments are presented. Finally, in Section 6, we present our conclusions and future work.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
ACM SE '11, Mar 24-26, 2011, Kennesaw, GA, USA
Copyright 2011 ACM 978-1-4503-0686-7/11/03...$10.00.

2. BACKGROUND OVERVIEW: FEATURE EXTRACTION AND RECOGNITION
The Eigenface [12] approach uses the concept behind Principal Component Analysis (PCA) for face recognition. Eigenface is a dimensionality reduction technique that reduces the image space into a face space, using the Eigenfaces as the orthogonal basis for the face space. The principal components are normalized eigenvectors of the covariance matrix of the features, ordered according to how much of the variation present in the data they contain. Each component can then be interpreted as the direction, uncorrelated to previous components, that maximizes the variance of the data when projected onto the face space. According to PCA, the first principal component is the direction with the largest variation; the second principal component is the direction, uncorrelated to the first, with the second-largest variation, and so on.
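The Eigenface projection described above can be sketched in a few lines of NumPy. This is our own illustrative toy example, not the authors' implementation; the image count, pixel count, and variable names are arbitrary choices:

```python
import numpy as np

def eigenface_features(images, k):
    """Project flattened face images onto the top-k Eigenfaces.

    images: (n_samples, n_pixels) array, one flattened grayscale face per row.
    Returns an (n_samples, k) array of face-space coordinates.
    """
    mean_face = images.mean(axis=0)
    centered = images - mean_face
    # The right singular vectors of the centered data are the normalized
    # eigenvectors of its covariance matrix, ordered by explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:k]                 # orthogonal basis of the face space
    return centered @ eigenfaces.T      # coordinates in the face space

rng = np.random.default_rng(0)
faces = rng.random((6, 64))             # six toy "images" of 64 pixels each
features = eigenface_features(faces, k=3)
```

Because the singular vectors are sorted by singular value, the first feature column captures the most variance, the second the next most, and so on, matching the ordering property described above.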
The LBP [10] approach for feature extraction was applied to a periocular image instead of an iris image for our research. This approach takes an image and segments it into a grid of evenly sized patches. The grayscale value of each pixel within a patch (the center pixel) is compared to the grayscale values of its eight adjacent pixels. If an adjacent pixel has a grayscale value greater than or equal to that of the center pixel, it is recorded as a 1; otherwise it is recorded as a 0. These values are then concatenated, starting from the upper-left corner and proceeding clockwise around the center pixel, until all eight adjacent pixels' values have been added to the output binary string. This binary string's value is then used to classify the center pixel, placing it into a bin of the patch's histogram. After all of the center pixels have been placed into the histogram, the histograms for the patches are concatenated, forming the probe's template.
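As a concrete illustration of the patch-and-histogram scheme just described, here is a minimal LBP sketch. This is our own toy code rather than the authors' implementation; the 16x16 image and 8x8 patch size are arbitrary assumptions:

```python
import numpy as np

# Offsets of the eight neighbors, starting at the upper-left corner
# and proceeding clockwise around the center pixel.
NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
             (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_histogram(patch):
    """256-bin LBP histogram of one grayscale patch (2-D uint8 array)."""
    hist = np.zeros(256, dtype=int)
    rows, cols = patch.shape
    for r in range(1, rows - 1):            # skip border pixels, which lack
        for c in range(1, cols - 1):        # a full set of eight neighbors
            code = 0
            for dr, dc in NEIGHBORS:
                code <<= 1
                # A neighbor >= the center contributes a 1 bit, else a 0 bit.
                if patch[r + dr, c + dc] >= patch[r, c]:
                    code |= 1
            hist[code] += 1
    return hist

def lbp_template(image, patch_size):
    """Concatenate the per-patch histograms into one feature template."""
    rows, cols = image.shape
    hists = [lbp_histogram(image[r:r + patch_size, c:c + patch_size])
             for r in range(0, rows, patch_size)
             for c in range(0, cols, patch_size)]
    return np.concatenate(hists)

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
template = lbp_template(image, patch_size=8)   # 4 patches x 256 bins
```

Each 8-bit neighbor string indexes one of 256 bins, so the template length is the number of patches times 256.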
The distance between two instances of a biometric feature set is defined as the sum of the differences of the corresponding feature set values using the Manhattan distance. This distance is a numerical representation of the dissimilarity between two biometric instances and can be calculated using the following formula:

    d = Σ_{i=1}^{n} m_i × |p_i − g_i|                      (1)

where d is the distance, P = {p1, p2, ..., pn} is the probe feature set, G = {g1, g2, ..., gn} is the gallery feature set, M = {m1, m2, ..., mn} represents the feature mask, and i is the index of the feature. If the feature mask is set to 0 for a value of i, the corresponding probe and gallery features do not contribute to the distance.

3. GEFES AND GEFEW
Genetic and evolutionary feature selection (GEFeS) uses a steady-state genetic algorithm (SSGA) to evolve a feature mask, essentially selecting the features that are the most salient. The feature mask is a set of binary values where a 1 represents a feature that is turned on and a 0 represents a feature that is turned off. Each feature mask is created the same size as the corresponding feature set and then evolved using genetic algorithms.

Genetic and evolutionary feature weighting (GEFeW) uses an SSGA to evolve the weights of the features. Like GEFeS, a feature mask is created that is the same size as the feature set. Instead of using a binary representation, the feature mask for GEFeW is composed of real values between 0.0 and 1.0. Each feature value is multiplied by its feature mask value to produce the weighted feature.

Evolution of the feature mask is done primarily inside of X-TOOLSS [11]. X-TOOLSS is a suite of Genetic and Evolutionary Computations (GECs) that evolves candidate solutions based on a user-defined fitness function [11]. The candidate solutions were evaluated using an evaluation function that compared biometric samples using the feature mask and returned the number of errors and the number of features used. An error occurs when a probe is closest to an instance in the gallery that is not from the same subject as the probe. The number of features used was determined by the feature mask values that were greater than 0.0. The function used to evaluate the candidate solutions is as follows:

    c_fit = 10m + n_feat / n_total                         (2)

where c_fit is the fitness of the candidate solution, m is the number of errors, n_feat is the number of features used, and n_total is the total number of features.

After the fitness of the individuals is evaluated, the individuals are evolved until a near-optimal solution is found or a certain number of function evaluations is reached. Parents are selected via binary tournament selection, and offspring are created using uniform crossover. The worst individual in the population is always replaced by the offspring. Only one offspring is created per iteration.

4. EXPERIMENT
For our experiment, the performances of GEFeS and GEFeW were compared. A subset of the Face Recognition Grand Challenge (FRGC) dataset was used to compare each approach [9]. This subset contained the first 105 subjects. Each of these subjects had a total of three images. The probe set consisted of one image of each subject (105 images total) while the gallery set consisted of the other two images of each subject (210 images total).

The tested biometric modalities for the comparisons were face, periocular, and face plus periocular. The features for our experiment were extracted using the Eigenface [12] approach for face images and the LBP approach [10] for periocular images. As a control for our experiment, all of the features were used for matching. The 210 face features were extracted using the Eigenface approach, and 1416 periocular features were extracted from each periocular region using the LBP approach. When the baseline Eigenface + LBP approach was used for the fusion of the face and periocular biometrics, they were weighted evenly.

5. RESULTS
The results of our experiment were obtained by comparing feature sets from a probe to feature sets from the gallery set. The probe consisted of a feature set (corresponding to a subject). The gallery set represented all of the 105 subjects (two images each) within the dataset.

For our results, GEFeS and GEFeW used a population of 20 individuals and a Gaussian mutation range of 0.2, and were allowed a maximum of 1000 function evaluations. GEFeS and GEFeW were each run a total of 30 times.

The fitness of the best individuals from each run was returned from X-TOOLSS. From that data, the best individual, the average fitness, the average percentage of features, the average number of features, the average accuracy, and the best accuracy were calculated.

The rank is defined as the number of attempts necessary to correctly match a subject probe to its corresponding subject group. Let R = {r1, r2, ..., ri} represent the number of correctly classified subjects at a given rank i. Let n_subj be the total number of subjects for the given dataset and let accuracy_i represent the accuracy for a given rank. The accuracy at a given rank was calculated by the following formula:

    accuracy_i = ((r_i + sum_{i-1}) / n_subj) × 100        (3)

where sum_{i-1} is the number of correctly identified subjects at rank r_{i-1}.
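The masked distance, the fitness function, and the rank-based accuracy (equations (1)-(3) above) can be tied together in a short sketch. This is our own illustrative code under the stated definitions, not the authors' implementation; a GEFeS-style binary mask is shown, and GEFeW would instead use real-valued weights in [0.0, 1.0]:

```python
import numpy as np

def masked_distance(probe, gallery, mask):
    """Equation (1): Manhattan distance with a per-feature mask (or weights)."""
    return np.sum(mask * np.abs(probe - gallery))

def fitness(errors, mask, n_total):
    """Equation (2): recognition errors dominate; feature count breaks ties."""
    n_feat = np.count_nonzero(mask)      # features with mask value > 0.0
    return 10 * errors + n_feat / n_total

def cmc_accuracies(r, n_subj):
    """Equation (3): cumulative accuracy at each rank.

    r[i] = number of probes first matched correctly at rank i+1, so the
    running sum r_i + sum_{i-1} is just a cumulative sum over ranks.
    """
    return np.cumsum(r) / n_subj * 100

# Toy example: 105 subjects, most matched correctly at rank 1.
r = np.array([95, 6, 3, 1])
acc = cmc_accuracies(r, 105)             # acc[-1] reaches 100% at rank 4
```

Lower fitness is better here: one additional recognition error always outweighs any reduction in the fraction of features used, which is why the error term is scaled by 10.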
Table 1 shows the results from the Face, Periocular, and Face + Periocular experiments. Row 2 shows the results from the face biometric experiments, Row 3 shows the results from the experiments using the periocular biometric, and Row 4 shows the results from using the fusion of the face + periocular biometrics. For the baseline Face + Periocular experiment, the face and periocular biometrics were weighted evenly. However, for the Face + Periocular experiments using GEFeS and GEFeW, each biometric modality's weight was evolved along with the feature mask being used. Column 1 represents the biometric modalities that were used; each dataset is a subset of the FRGC data. Column 2 represents the algorithms used for the experiments. Column 3 represents the average percentage of features used by each algorithm for each biometric. Column 4 represents the average accuracy at rank 1.

              Table 1. Accuracies with Percentage of Features

  Dataset             Algorithm                        Percent of Features   Accuracy
  Face                Eigenface                             100.00%          64.76%
                      Eigenface + GEFeS                      51.03%          77.87%
                      Eigenface + GEFeW                      87.59%          87.59%
  Periocular          LBP                                   100.00%          94.29%
                      LBP + GEFeS                            47.82%          95.14%
                      LBP + GEFeW                            85.84%          95.46%
  Face + Periocular   Eigenface + LBP [evenly fused]        100.00%          90.77%
                      Eigenface + LBP + GEFeS                47.97%          97.40%
                      Eigenface + LBP + GEFeW                87.20%          98.78%

Figure 1. CMC Curve for Face-Only

In Table 1, GEFeW had the highest average accuracy rates for all three datasets, while GEFeS used the lowest percentage of features.

ANOVA and t-tests were performed using the best results of each of the 30 runs. Based on the results of those tests, GEFeW performs better in terms of accuracy on the Face-Only and the Face + Periocular datasets. GEFeW does not statistically outperform GEFeS on the Periocular-Only dataset in terms of accuracy.

Figure 1, Figure 2, and Figure 3 show the cumulative match characteristic (CMC) curves for each of the datasets for the first 10 ranks. In these figures, the performance of GEFeS is denoted as 'Selection', the performance of GEFeW is denoted as 'Weighting', and 'Baseline' denotes the performance of the baseline algorithms, which are Eigenface for Figure 1, LBP for Figure 2, and the hybrid Eigenface + LBP for Figure 3. These charts were based on the best individual from each algorithm.

Figure 2. CMC Curve for Periocular-Only

Notice that for Face-Only and Face + Periocular, both GEFeS and GEFeW outperform the baseline algorithms at all ranks. For Periocular-Only, both GEFeS and GEFeW provide a performance improvement for Ranks 1-4. LBP (the baseline method) performs best at Rank 5, and both LBP and GEFeS outperform GEFeW at Rank 6. At Ranks 7 and 8, all three methods have the same performance. Finally, at Ranks 9 and 10, GEFeS outperforms both GEFeW and LBP. Notice also that the multi-biometric dataset had the best overall performance. This result suggests that the genetic-based fusion of face and periocular is better than GEFeS and GEFeW using a single biometric modality.

Figure 3. CMC Curve for Face and Periocular

6. CONCLUSIONS AND FUTURE WORK
Our results show that fusing the face and periocular biometric modalities provides better recognition accuracy than using the face and periocular modalities alone. The accuracy resulting from using feature weighting is higher than that of using feature selection. It is interesting to note that not all features were
selected in the weighted features experiment. This is because some features' weights were so small that the genetic algorithm eventually gave them a weight of zero due to round-off error. This suggests that removing non-discriminating features will improve performance. We believe that a hybrid weighting/selection algorithm may further increase recognition accuracy. This will be a topic of future research.

7. ACKNOWLEDGMENTS
This research was funded by the Office of the Director of National Intelligence (ODNI), Center for Academic Excellence (CAE) for the multi-university Center for Advanced Studies in Identity Sciences (CASIS) and by the National Science Foundation (NSF) Science & Technology Center: Bio/computational Evolution in Action CONsortium (BEACON). The authors would like to thank the ODNI and the NSF for their support of this research.

8. REFERENCES
[1] Adams, J., Woodard, D. L., Dozier, G., Miller, P., Bryant, K., and Glenn, G. Genetic-based type II feature extraction for periocular biometric recognition: less is more. In Proceedings of the Int. Conf. on Pattern Recognition, 2010.
[2] Chang, K.I., Bowyer, K.W., and Flynn, P.J. An evaluation of multimodal 2D+3D face biometrics. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 4, pp. 619-624, Apr. 2005.
[3] Dozier, G., Frederiksen, K., Meeks, R., Savvides, M., Bryant, K., Hopes, D., and Munemoto, T. Minimizing the number of bits needed for iris recognition via Bit Inconsistency and GRIT. In IEEE Workshop on Computational Intelligence in Biometrics: Theory, Algorithms, and Applications (CIB 2009), pp. 30-37, March 30-April 2, 2009.
[4] Fogel, D.B. An introduction to simulated evolutionary optimization. IEEE Trans. Neural Networks, vol. 5, pp. 3-14, Jan. 1994.
[5] Gentile, J.E., Ratha, N., and Connell, J. SLIC: Short-length iris codes. In IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems (BTAS '09), pp. 1-5, Sept. 28-30, 2009.
[6] Jain, A.K., Nandakumar, K., and Ross, A. Score normalization in multimodal biometric systems. Pattern Recognition, vol. 38, no. 12, pp. 2270-2285, Dec. 2005.
[7] Kohavi, R., Langley, P., and Yun, Y. The utility of feature weighting in nearest-neighbor algorithms. In Proceedings of the European Conference on Machine Learning (ECML-97), 1997.
[8] Park, U., Ross, A., and Jain, A.K. Periocular biometrics in the visible spectrum: a feasibility study. In Proceedings of BTAS '09, Washington, DC.
[9] Phillips, P.J., Flynn, P.J., Scruggs, T., Bowyer, K.W., Chang, J., Hoff, K., Marques, J., Min, J., and Worek, W. Overview of the face recognition grand challenge. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2005.
[10] Sun, Z.N., Tan, T.N., and Qiu, X.C. Graph matching iris image blocks with local binary pattern. In: Zhang, D., Jain, A.K. (eds.) Advances in Biometrics, LNCS, vol. 3832, pp. 366-372. Springer, Heidelberg (2005).
[11] Tinker, M. L., Dozier, G., and Garrett, A. The exploratory toolset for the optimization of launch and space systems (X-TOOLSS). http://xtoolss.msfc.nasa.gov/, 2010.
[12] Turk, M. and Pentland, A. Eigenfaces for recognition. Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 76-81, Winter 1991.
