National Conference on Role of Cloud Computing Environment in Green Communication 2012



Feature extraction for face recognition by using a novel and effective color boosting

       S.S. Sugania                                            V.R. Bhuma, M.E.
       II M.E. - CSE                                           Asst. Prof.
       Vins Christian College of Engineering                   Vins Christian College of Engineering



  Abstract—This paper introduces a new color face recognition (FR) method that makes effective use of boosting learning as a color-component feature selection framework. The proposed boosting color-component feature selection framework is designed to find the best set of color-component features from various color spaces (or models), aiming to achieve the best FR performance for a given FR task. In addition, to facilitate the complementary effect of the selected color-component features for the purpose of color FR, they are combined using the proposed weighted feature fusion scheme. The effectiveness of my color FR method has been successfully evaluated on the following five public face databases (DBs): CMU-PIE, Color FERET, XM2VTSDB, SCface, and FRGC 2.0. Experimental results show that the results of the proposed method are impressively better than the results of other state-of-the-art color FR methods over different FR challenges, including highly uncontrolled illumination, moderate pose variation, and small-resolution face images.

  Index Terms—Boosting learning, color face recognition, color space, color component, feature selection.

                I. INTRODUCTION

  RECENTLY, considerable research work in face recognition (FR) has shown that facial color information can be used to considerably improve FR performance, compared to FR methods relying only on grayscale information. Most of the existing color FR methods are restricted to using a fixed color-component configuration comprising only “two” or “three” [1] color components. In particular, currently used color-component choices are mostly made through a combination of intuition and empirical comparison, without any systematic selection strategy. As such, existing methods may be limited in attaining the best FR result for a given FR task. This is because specific color components effective for a particular FR problem may not work well for other FR problems under other FR operating conditions (e.g., illumination variations) that differ from those considered during the process of determining those color components.

  Hence, the important issue in color FR is: how can one select the color components from various color models in order to achieve the best FR performance for a specific FR task? In this paper, to cope with the aforementioned issue, I propose a new color FR method. My method takes advantage of “boosting” learning as a feature selection mechanism, aiming to find the optimal set of color-component features for the purpose of achieving the best FR result [2]. To the best of our knowledge, my work is the first attempt to incorporate a feature selection scheme underpinned by boosting learning into FR methods using color information.

  A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One of the ways to do this is by comparing selected facial features from the image and a facial database.



It is typically used in security systems and can be compared to other biometrics such as fingerprint [2] or eye iris recognition systems.

  The remainder of the paper is organized as follows. Section II describes feature extraction for face recognition within the boosting learning framework; in particular, it details the proposed selection criterion. In Section III, I explain the proposed modules for FR purposes. Conclusions are presented in Section IV.




                                 Fig 1: Proposed color framework model




II. FEATURE EXTRACTION FOR FACE RECOGNITION

  In this paper, a multiclass boosting “AdaBoost.M2” framework is adapted to implement color-component feature selection. Differing from other boosting learning frameworks, the key advantage of the AdaBoost.M2 framework is that it forces the weak learners to concentrate not only on the hard instances (or patterns), but also on the incorrect class labels that are hardest to classify. The overall framework of the proposed color FR method largely consists of two parts: 1) color-component feature selection with boosting, and 2) a color FR solution using the selected color-component features.

  To determine the best color-component feature at each boosting round for recognizing the hard-to-classify sample subset of a training set, termed the “learning set,” an effective selection criterion is proposed. The proposed selection criterion takes the form of a penalty-based objective function with an associated weighting parameter, for the purpose of selecting color-component features that not only produce small classification errors but also keep their mutual dependence low. The proposed selection criterion is highly useful for achieving a low generalization classification error. In addition, to perform color FR, the color-component features chosen via our boosting framework are combined at the feature level. Specifically, the selected color-component features are fused based on a weighted feature fusion scheme, depending upon the associated confidence of each color-component feature, to achieve better FR performance.
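  To make this two-stage design concrete, the sketch below shows one possible way such a boosting-style selection loop and the confidence-weighted fusion could be organized. It is only an illustration of the idea, not the exact AdaBoost.M2 procedure of this paper; the callables weak_miss (per-sample errors of a weak FR learner built on one color-component feature) and dependence (a redundancy measure between two features), as well as the parameters lam and n_rounds, are hypothetical placeholders.

import numpy as np

def select_color_features(features, labels, weak_miss, dependence, n_rounds=3, lam=0.5):
    # features: dict mapping a color-component name to an (n_samples, d) array.
    # weak_miss(feat, labels, weights): 0/1 per-sample errors of a weak FR learner
    #   trained on that single color-component feature (hypothetical callable).
    # dependence(f1, f2): redundancy between two features, in [0, 1] (hypothetical).
    n = len(labels)
    w = np.full(n, 1.0 / n)                                # sample weights (mislabel emphasis)
    selected, alphas = [], []
    for _ in range(n_rounds):
        best, best_score, best_miss = None, np.inf, None
        for name, feat in features.items():
            if name in selected:
                continue
            miss = weak_miss(feat, labels, w)              # per-sample errors
            err = float(np.dot(w, miss))                   # weighted classification error
            dep = max((dependence(feat, features[s]) for s in selected), default=0.0)
            score = err + lam * dep                        # penalty-based criterion
            if score < best_score:
                best, best_score, best_miss = name, score, miss
        if best is None:
            break
        err = float(np.clip(np.dot(w, best_miss), 1e-6, 1 - 1e-6))
        alpha = 0.5 * np.log((1.0 - err) / err)            # confidence of the selected feature
        w *= np.exp(alpha * (2.0 * best_miss - 1.0))       # emphasise hard samples
        w /= w.sum()
        selected.append(best)
        alphas.append(alpha)
    return selected, np.asarray(alphas)

def fuse_features(features, selected, alphas):
    # Confidence-weighted, feature-level concatenation of the selected components.
    weights = np.asarray(alphas) / np.sum(alphas)
    return np.hstack([weights[i] * features[name] for i, name in enumerate(selected)])

  The confidence values returned by the loop are the same quantities reused as fusion weights, which is one simple way to realize the weighted feature fusion described above.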




  In order to evaluate the effectiveness of the proposed color FR method, comparative and extensive experiments have been carried out. For this, five public face databases (DBs), CMU-PIE, Color FERET, XM2VTSDB, SCface, and FRGC 2.0, are used. Experimental results show that the results of the proposed method are impressively better than the results of other state-of-the-art color FR methods over different FR challenges, including highly uncontrolled illumination, moderate pose variation, and small-resolution face images.

  This section first describes our color-component feature selection method within the boosting learning framework and, in particular, details the proposed selection criterion. It then explains the proposed weighted feature fusion approach to combining the selected color-component features for FR. Later sections present extensive and comparative experimental results that demonstrate the effectiveness of the proposed color FR method, followed by conclusions and directions for future research.

A. RGB Generation

   BT.601 defines Y to have a nominal range of 16-235 (black-white); Cb and Cr are defined to have a nominal range of 16-240, with 128 corresponding to zero. YCbCr is also defined to have been derived from gamma-corrected RGB (R´G´B´) data. The BT.601 equations used by many video ICs to convert between digital R´G´B´ data and YCbCr are:

        Y = (77/256)R´ + (150/256)G´ + (29/256)B´
        Cb = -(44/256)R´ - (87/256)G´ + (131/256)B´ + 128
        Cr = (131/256)R´ - (110/256)G´ - (21/256)B´ + 128
        R´ = Y + 1.371(Cr - 128)
        G´ = Y - 0.698(Cr - 128) - 0.336(Cb - 128)
        B´ = Y + 1.732(Cb - 128)

   When performing YCbCr to R´G´B´ conversion using the above equations, note that the resulting R´G´B´ values have a nominal range of 16-235 (black-white). Occasional excursions into the 0-15 and 236-255 ranges are possible, because Y and CbCr may occasionally go outside the 16-235 and 16-240 ranges, respectively, due to video processing, rounding errors, and noise.

   However, if the 24-bit R´G´B´ data are to have a range of 0-255 (black-white), as is commonly found on PCs, the following equations (used by the HMP8115) should be used to maintain the correct black and white levels:

        Y = 0.257R´ + 0.504G´ + 0.098B´ + 16
        Cb = -0.148R´ - 0.291G´ + 0.439B´ + 128
        Cr = 0.439R´ - 0.368G´ - 0.071B´ + 128
        R´ = 1.164(Y - 16) + 1.596(Cr - 128)
        G´ = 1.164(Y - 16) - 0.813(Cr - 128) - 0.392(Cb - 128)
        B´ = 1.164(Y - 16) + 2.017(Cb - 128)

   For the YCbCr to R´G´B´ equations, the R´G´B´ values must be saturated at the 0 and 255 levels due to occasional excursions outside the nominal YCbCr ranges.

B. Linear RGB Generation

   PCs usually prefer the linear RGB data format because of the amount of software already written for it and the simplified algorithms. Gamma correction for the display monitor may then be done in real time in the GUI acceleration chip. Therefore, it may be desirable to remove the gamma information from the R´G´B´ data. NTSC video is pre-corrected using a gamma of 2.2. Thus, to generate 24-bit linear RGB data:

for (R´, G´, B´) < 21

        R = ((R´/255) / 4.5) * 255
        G = ((G´/255) / 4.5) * 255
        B = ((B´/255) / 4.5) * 255

for (R´, G´, B´) ≥ 21

        R = 255 * (((R´/255) + 0.099) / 1.099)^2.2
        G = 255 * (((G´/255) + 0.099) / 1.099)^2.2
        B = 255 * (((B´/255) + 0.099) / 1.099)^2.2

   PAL video specifies a gamma of 2.8, although a value of 2.2 is now commonly used. If the video is pre-corrected using a gamma of 2.8, the following equations may be used to generate 24-bit linear RGB data:

        R = 255 * (R´/255)^2.8
        G = 255 * (G´/255)^2.8
        B = 255 * (B´/255)^2.8

   Many modern PAL video decoders, such as the HMP8115, allow the selection of either the 2.2 or 2.8 gamma factor for these calculations.
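   As a quick illustration, the equations above translate directly into the following small numeric sketch. The coefficients come from this section; the 0-255 input scaling and the clipping choice are assumptions made for the example.

import numpy as np

def rgb_to_ycbcr(rgb):
    # R'G'B' in 0-255 -> YCbCr with Y in 16-235 and Cb/Cr in 16-240 (scaled BT.601 form).
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.257 * r + 0.504 * g + 0.098 * b + 16
    cb = -0.148 * r - 0.291 * g + 0.439 * b + 128
    cr =  0.439 * r - 0.368 * g - 0.071 * b + 128
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(ycbcr):
    # YCbCr -> R'G'B' in 0-255, saturated at 0 and 255 as noted above.
    y, cb, cr = ycbcr[..., 0], ycbcr[..., 1], ycbcr[..., 2]
    r = 1.164 * (y - 16) + 1.596 * (cr - 128)
    g = 1.164 * (y - 16) - 0.813 * (cr - 128) - 0.392 * (cb - 128)
    b = 1.164 * (y - 16) + 2.017 * (cb - 128)
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255)

def linearize_ntsc(rgb_prime):
    # Remove the NTSC gamma (2.2) from R'G'B' data, following the piecewise rule above.
    x = np.asarray(rgb_prime, dtype=float)
    low = x < 21
    lin = np.where(low, (x / 255.0) / 4.5, ((x / 255.0 + 0.099) / 1.099) ** 2.2)
    return lin * 255.0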




C. Proposed Selection Criterion

   At each boosting round, the best FR learner (i.e., the best color-component feature) should be determined from among the constructed FR learners, each of which depends upon a single color-component feature. To this end, a selection criterion plays a crucial role in determining the “goodness” of feature selection. In ensemble classification (including boosting), it has been shown that, to achieve the lowest generalization error, we need to create ensembles (or classifiers) with low training classification error while, at the same time, keeping their mutual dependence minimal.

   In particular, in our feature selection problem, the mutual dependence between color-component features has to be carefully considered, as different color channels may have similar properties from the viewpoint of classification. For instance, two channels drawn from different color spaces may both encode the intensity information for green colors. Therefore, before a FR learner is selected, the mutual dependence between the new FR learner and each of the already selected FR learners should be examined, to ensure that the complementary information (that improves classification) carried by the new FR learner is not already captured by the preceding FR learners.

   To address the aforementioned issue, we develop an effective selection criterion which aims at striking an optimal balance between classification error and the degree of mutual dependence among the selected FR learners. Here, the classification error is calculated based on the “pseudo-loss,” which is computed from the mislabel weight vector. Note that, in computing this pseudo-loss, both hard-to-classify samples and hard-to-separate pairs of class labels are taken into account at the same time.
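   Purely as an illustration of this kind of criterion (not the paper's exact formulation), the snippet below scores a candidate color-component feature as its weighted classification error plus a dependence penalty, where the dependence between two color channels is estimated with a histogram-based normalized mutual information. Both the estimator and the weighting parameter lam are assumptions.

import numpy as np

def normalized_mutual_info(ch1, ch2, bins=32):
    # Rough redundancy measure between two colour channels (near 0 when independent).
    h, _, _ = np.histogram2d(ch1.ravel(), ch2.ravel(), bins=bins)
    pxy = h / h.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return mi / max(np.sqrt(hx * hy), 1e-12)

def selection_score(weighted_error, candidate, selected_channels, lam=0.5):
    # Smaller is better: low (pseudo-)loss AND low redundancy with what is already chosen.
    dep = max((normalized_mutual_info(candidate, s) for s in selected_channels), default=0.0)
    return weighted_error + lam * dep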
                III. EXPERIMENTS

  Most of the existing color FR methods are restricted to using a fixed color-component configuration comprising only “two” or “three” color components. In particular, currently used color-component choices are mostly made through a combination of intuition and empirical comparison, without any systematic selection strategy. As such, existing methods may be limited in attaining the best FR result for a given FR task. This is because specific color components effective for a particular FR problem may not work well for other FR problems under other FR operating conditions (e.g., illumination variations) that differ from those considered during the process of determining those color components.

  The proposed color FR method largely consists of two parts: 1) color-component feature selection with boosting, and 2) a color FR solution using the selected color-component features.

  We propose a new color FR method. Our method takes advantage of “boosting” learning as a feature selection mechanism, aiming to find the optimal set of color-component features for the purpose of achieving the best FR result, with the help of the DWT, eigenfaces, and phase congruency.

  To the best of our knowledge, our work is the first attempt to incorporate a feature selection scheme underpinned by boosting learning into FR methods using color information.

A. Discrete Wavelet Transform




  The field of Discrete Wavelet Transforms (DWTs) is an amazingly recent one. The basic principles of wavelet theory were put forth in a paper by Gabor in 1945, but all of the definitive papers on discrete wavelets, an extension of Gabor's theories involving functions with compact support, have been published in the past few years. Although the Discrete Wavelet Transform is merely one more tool added to the toolbox of digital signal processing, it is a very important concept for data compression; its utility in image compression has been effectively demonstrated. The reference work [1] discusses the DWT and demonstrates one way in which it can be implemented as a real-time signal processing system; although it describes a very general implementation, the actual project used the STAR Semiconductor SPROC lab digital signal processing system.

  A wavelet, in the sense of the Discrete Wavelet Transform (or DWT), is an orthogonal function which can be applied to a finite group of data. Functionally, it is very much like the Discrete Fourier Transform, in that the transforming function is orthogonal, a signal passed twice through the transformation is unchanged, and the input signal is assumed to be a set of discrete-time samples. Both transforms are convolutions. Whereas the basis function of the Fourier transform is a sinusoid, the wavelet basis is a set of functions which are defined by a recursive difference equation,

        Φ(x) = Σ_k c_k Φ(2x - k)                              (1)

where the range of the summation is determined by the specified number of nonzero coefficients M. The number of nonzero coefficients is arbitrary, and will be referred to as the order of the wavelet. The values of the coefficients are, of course, not arbitrary, but are determined by constraints of orthogonality and normalization. A good way to solve for the values in equation (1) is to construct a matrix of coefficient values. This is a square M x M matrix, where M is the number of nonzero coefficients. The matrix is designated L, with entries built from the coefficients c_k. This matrix always has an eigenvalue equal to 1, and its corresponding (normalized) eigenvector contains, as its components, the values of the Φ function at integer values of x. Once these values are known, all other values of the function Φ(x) can be generated by applying the recursion equation to get values at half-integer x, quarter-integer x, and so on, down to the desired dilation. This effectively determines the accuracy of the function approximation.

  This class of wavelet functions is constrained, by definition, to be zero outside of a small interval. This is what makes the wavelet transform able to operate on a finite set of data, a property which is formally called “compact support.” Most wavelet functions, when plotted, appear to be extremely irregular. This is due to the fact that the recursion equation assures that the wavelet function is non-differentiable everywhere. The functions which are normally used for performing transforms consist of a few sets of well-chosen coefficients, resulting in a function which has a discernible shape.

  The Mallat “pyramid” algorithm is a computationally efficient method of implementing the wavelet transform, and this is the one used as the basis of the hardware implementation. The lattice filter is equivalent to the pyramid algorithm, except that a different approach is taken for the convolution, resulting in a different set of coefficients, related to the usual wavelet coefficients c_k by a set of transformations.

The Pyramid Algorithm

  The pyramid algorithm operates on a finite set of N input data, where N is a power of two; this value will be referred to as the input block size. These data are passed through two convolution functions, each of which creates an output stream that is half the length of the original input. These convolution functions are filters; one half of the output is produced by the “low pass” filter function, related to equation (1):



        a_i = Σ_j c_j f_(2i+j-2),                       i = 1, …, Z/2        (2)

and the other half is produced by the “high pass” filter function,

        b_i = Σ_j (-1)^(j-1) c_(M+1-j) f_(2i+j-2),      i = 1, …, Z/2        (3)

where the sum over j runs from 1 to M, Z is the input block size, the c_j are the coefficients, f is the input block, and a and b are the output functions. (In the case of the lattice filter, the low- and high-pass outputs are usually referred to as the odd and even outputs, respectively.) The derivation of these equations from the original scaling and wavelet function equations can be found in [1]. In many situations, the odd or low-pass output contains most of the “information content” of the original input signal. The even or high-pass output contains the difference between the true input and the value of the reconstructed input if it were to be reconstructed from only the information given in the odd output. In general, higher-order wavelets (i.e., those with more nonzero coefficients) tend to put more information into the odd output, and less into the even output. If the average amplitude of the even output is low enough, then the even half of the signal may be discarded without greatly affecting the quality of the reconstructed signal. An important step in wavelet-based data compression is finding wavelet functions which cause the even terms to be nearly zero.
B. Phase Congruency

  Phase congruency is a measure of feature significance in computer images, a method of edge detection that is particularly robust against changes in illumination and contrast. Phase congruency reflects the behaviour of the image in the frequency domain. It has been noted that edge-like features have many of their frequency components in the same phase. The concept is similar to coherence, except that it applies to functions of different wavelength. For example, the Fourier decomposition of a square wave consists of sine functions whose frequencies are odd multiples of the fundamental frequency. At the rising edges of the square wave, each sinusoidal component has a rising phase; the phases have maximal congruency at the edges. This corresponds to the human-perceived edges in an image where there are sharp changes between light and dark.

  Congruency of phase at any angle produces a clearly perceived feature. The angle at which the congruency occurs dictates the feature type, for example, step or delta. The Local Energy Model was developed by Morrone et al. and Morrone and Owens. Other work on this model of feature perception can be found in Morrone and Burr, Owens et al., Venkatesh and Owens, and Kovesi. The work of Morrone and Burr has shown that this model successfully explains a number of psychophysical effects in human feature perception. The local, complex-valued Fourier components at a location x in the signal will each have an amplitude A_n(x) and a phase angle φ_n(x). The magnitude of the vector from the origin to the end point is the Local Energy, |E(x)|. The measure of phase congruency developed by Morrone et al. is

        PC(x) = |E(x)| / Σ_n A_n(x)

  Under this definition, phase congruency is the ratio of |E(x)| to the overall path length taken by the local Fourier components in reaching the end point. If all the Fourier components are in phase, all the complex vectors would be aligned and the ratio |E(x)| / Σ_n A_n(x) would be 1. If there is no coherence of phase,




the ratio falls to a minimum of 0.

     Figure 2: Polar diagram showing the Fourier components at a location in the signal plotted head to tail. The weighted mean phase angle is also shown. The noise circle represents the level of E(x) one can expect just from the noise in the signal.

  Phase congruency provides a measure that is independent of the overall magnitude of the signal, making it invariant to variations in image illumination and/or contrast. Fixed threshold values of feature significance can then be used over wide classes of images.

  The basic measure of phase congruency does not provide good localization and it is also sensitive to noise. Kovesi developed a modified measure consisting of the cosine minus the magnitude of the sine of the phase deviation; this produces a more localized response. This new measure also incorporates noise compensation. A small constant is incorporated to avoid division by zero, and only energy values that exceed T, the estimated noise influence, are counted in the result. The symbols ⌊ ⌋ denote that the enclosed quantity is equal to itself when its value is positive, and zero otherwise.

  In practice, local frequency information is obtained via banks of Gabor wavelets tuned to different spatial frequencies, rather than via the Fourier transform. The appropriate noise threshold T is readily determined from the statistics of the filter responses to the image.
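  A rough sketch of the basic measure for a one-dimensional signal is given below, using a small bank of complex Gabor filters in place of a full Fourier decomposition. The filter parameters and the small epsilon are assumptions, and Kovesi's refined measure (phase-deviation cosine minus |sine|, with the noise threshold T) is deliberately omitted for brevity.

import numpy as np

def gabor_kernel(wavelength, sigma_factor=0.65, width=None):
    # Complex 1-D Gabor: even (cosine) + i * odd (sine) filter pair.
    sigma = sigma_factor * wavelength
    width = width or int(3 * sigma)
    x = np.arange(-width, width + 1)
    envelope = np.exp(-x**2 / (2 * sigma**2))
    return envelope * np.exp(1j * 2 * np.pi * x / wavelength)

def phase_congruency(signal, wavelengths=(4, 8, 16, 32), eps=1e-6):
    signal = np.asarray(signal, dtype=float)
    total_energy = np.zeros(len(signal), dtype=complex)   # vector sum, |.| gives E(x)
    total_amplitude = np.zeros(len(signal))                # sum of A_n(x)
    for lam in wavelengths:
        resp = np.convolve(signal, gabor_kernel(lam), mode="same")  # complex response
        total_energy += resp
        total_amplitude += np.abs(resp)
    return np.abs(total_energy) / (total_amplitude + eps)  # near 1 where phases agree

# A step edge produces a phase congruency peak at the discontinuity:
pc = phase_congruency(np.concatenate([np.zeros(64), np.ones(64)]))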
C. Eigenface

  Eigenfaces are a set of eigenvectors used in the computer vision problem of human face recognition. A set of eigenfaces can be generated by performing a mathematical process called principal component analysis (PCA) on a large set of images depicting different human faces. Informally, eigenfaces can be considered a set of “standardized face ingredients,” derived from statistical analysis of many pictures of faces. Any human face can be considered to be a combination of these standard faces. For example, one's face might be composed of the average face plus 10% from eigenface 1, 55% from eigenface 2, and even -3% from eigenface 3. Remarkably, it does not take many eigenfaces combined together to achieve a fair approximation of most faces. Also, because a person's face is then not recorded as a digital photograph, but instead as just a list of values (one value for each eigenface in the database used), much less space is taken for each person's face.

Fig 3: Eigenface reconstruction

  To create a set of eigenfaces, one must first prepare a training set of face images. The pictures constituting the training set should have been taken under the same lighting conditions, and must be normalized to have the eyes and mouths aligned across all images. They must also all be resampled to a common pixel resolution (r × c). Each image is treated as one vector, simply by concatenating the rows of pixels in the original image, resulting in a single row with r × c elements. For this implementation, it is assumed that all images of the training set are stored in a single matrix T, where each row of the matrix is an image. Next, subtract the mean: the average image a has to be calculated and then subtracted from each original image in T. Then calculate the eigenvectors and




eigenvalues of the covariance matrix S. Each eigenvector has the same dimensionality (number of components) as the original images, and thus can itself be seen as an image. The eigenvectors of this covariance matrix are therefore called eigenfaces. They are the directions in which the images differ from the mean image. Usually this will be a computationally expensive step (if feasible at all), but the practical applicability of eigenfaces stems from the possibility of computing the eigenvectors of S efficiently, without ever computing S explicitly. Finally, choose the principal components: the D × D covariance matrix will result in D eigenvectors, each representing a direction in the r × c-dimensional image space, and the eigenvectors (eigenfaces) with the largest associated eigenvalues are kept.
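  As an illustration of these steps (not code from this paper), the sketch below builds eigenfaces with the usual small-matrix trick, taking eigenvectors of the n × n matrix A·Aᵀ rather than of the full covariance matrix; the array names follow the text (T for the stacked images, the mean face, and the eigenfaces).

import numpy as np

def compute_eigenfaces(images, n_components=20):
    # images: array of shape (n_images, r, c), taken under comparable conditions.
    n, r, c = images.shape
    T = images.reshape(n, r * c).astype(float)       # one flattened image per row
    mean_face = T.mean(axis=0)
    A = T - mean_face                                 # subtract the average image
    small = A @ A.T                                   # small n x n surrogate of S
    vals, vecs = np.linalg.eigh(small)
    order = np.argsort(vals)[::-1][:n_components]     # keep largest eigenvalues
    eigenfaces = (A.T @ vecs[:, order]).T             # map back to image space, (k, r*c)
    eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)
    return mean_face, eigenfaces

def project(face, mean_face, eigenfaces):
    # Represent a face as its list of eigenface weights.
    return eigenfaces @ (face.ravel().astype(float) - mean_face)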
  Unfortunately, this type of facial recognition does have a drawback to consider: it has trouble recognizing faces when they are viewed under different levels of light or from different angles. For the system to work well, the faces need to be seen from a frontal view under similar lighting. Even so, face recognition using eigenfaces has been shown to be quite accurate.

                IV. CONCLUSION

   In this paper, a novel and effective color FR method is proposed. It is based on the selection of the best color-component features (from various color models) using the proposed variant of the boosting learning framework, together with the discrete wavelet transform, phase congruency, and eigenfaces. These selected color-component features are then combined into a single concatenated color feature using weighted feature fusion, which makes the FR method effective. For an input represented by a list of 2^n numbers, the Haar wavelet transform may be considered to simply pair up input values, storing the difference and passing the sum. This process is repeated recursively, pairing up the sums to provide the next scale, finally resulting in 2^n - 1 differences and one final sum. Phase congruency reflects the behaviour of the image in the frequency domain; it has been noted that edge-like features have many of their frequency components in the same phase. Facial recognition was the source of motivation behind the creation of eigenfaces. For this use, eigenfaces have advantages over other available techniques, such as the system's speed and efficiency: using eigenfaces is very fast, and able to operate on many faces in very little time.

                ACKNOWLEDGEMENT

   First and foremost, I acknowledge the abiding presence and the abounding grace of Almighty God for His unseen hand yet tangible guidance all through the formation of this project. I express my sincere gratitude to our Chairman, Thiru. Nanjil M. Vincent, B.A., B.L., Ex. M.P., for providing me all support. I am extremely grateful to our Principal, Dr. B. Sasi Kumar, M.E., Ph.D., for his inspiration in pursuing this project. I would like to express my heartfelt thanks to our Head of the Department, Mr. K. John Peter, M.Tech., for his interest in my project and his valuable suggestions. I am grateful to my internal guide, Mrs. V.R. Bhuma, M.E., Lecturer, for her suggestions, motivation, and dedicated guidance in seeing this project work through to its completion. I express my heartiest thanks to all other staff of Computer Science and Engineering who have helped me in one way or another for the successful completion of the project. I express my thanks to my beloved friends for their kind co-operation and continuous encouragement. Finally, I express my thanks to my beloved parents.

                REFERENCES

[1] “Discrete Wavelet Transforms: Theory and Implementation,” Tim Edwards (tim@sinh.stanford.edu), Stanford University, September 1991.
[2] “Phase Congruency Detects Corners and Edges,” Peter Kovesi, School of Computer Science & Software Engineering, The University of Western Australia, Crawley, W.A. 6009.




[3] “Color Face Recognition for Degraded Face Images,” Jae Young Choi, Yong Man Ro, Senior Member, IEEE, and Konstantinos N. (Kostas) Plataniotis, Senior Member, IEEE.
[4] “Boosting Color Feature Selection for Color Face Recognition,” Jae Young Choi, Student Member, IEEE, Yong Man Ro, Senior Member, IEEE, and Konstantinos N. Plataniotis, Senior Member, IEEE.
[5] “A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting,” Yoav Freund and Robert E. Schapire, AT&T Labs, 180 Park Avenue, Florham Park, New Jersey.

