International Journal of Electronics and Communication Engineering & Technology (IJECET)
ISSN 0976 – 6464 (Print), ISSN 0976 – 6472 (Online)
Volume 3, Issue 1, January–June (2012), pp. 311–316
© IAEME: www.iaeme.com/ijecet.html
Journal Impact Factor (2011): 0.8500 (Calculated by GISI)

                     POSE PARAMETERS
Abhishek Choubey
Head, Department of Electronics and Communication
R.K.D.F. Institute of Technology, Bhopal, INDIA

Girish D. Bonde
M.Tech. Student, Department of EC
R.K.D.F. Institute of Technology, Bhopal, INDIA
girish_bonde55@rediffmail.com

ABSTRACT

         In this paper, we implement eigenface-based face recognition and compare the results
with the fisherface algorithm. The process required preprocessing: the images had to be
resized to a consistent size. Since the database used consisted of cropped faces of various
sizes, the need for face detection was eliminated.
         We compared two of the most frequently used algorithms, eigenface and fisherface,
evaluating the performance of each against two constraints: pose and the size of the training
data. Our study shows that the fisherface algorithm is robust in both cases. This leads us to
conclude that the eigenface algorithm is beneficial when the database is large, but given the
robustness of the fisherface algorithm, it would be the algorithm of choice if resources are not
a problem.
         We have extended the work towards automatic estimation of pose parameters using a
patch-based approach.

Keywords: Eigenface, Fisherface, pose

         1. INTRODUCTION

          The face plays a major role in our social intercourse in conveying identity and emotion.
      The human ability to recognize faces is remarkable. We can recognize thousands of faces
      learned throughout our lifetime and identify familiar faces at a glance even after years of
      separation. The skill is quite robust, despite large changes in the visual stimulus due to
      viewing conditions, expression, aging, and distractions such as glasses or changes in hairstyle.
          We have implemented the eigenface and fisherface algorithms and tested them against two
      face databases, observing results across pose (out-of-plane face rotation). We evaluated


performance against databases with both densely-sampled and sparsely-sampled facial
poses. We have also extended the work towards automatic estimation of pose parameters.

Our (Specific) Problem Statement

Given a training database of pre-processed face images, train an automated system to
recognize the identity of a person from a new image of the person. Examine sensitivity to pose
using the eigenface approach suggested in [1,2] and the fisherface approach developed in [3].

Our new ideas include:

   1. comparing results of eigenface and fisherface across pose,
   2. testing dense and sparse training databases, and
   3. estimating the pose parameters of the face.


2. EIGENFACE AND FISHERFACE

    The Eigenface [1, 2] was the first method considered a successful technique for face
recognition. The Eigenface method uses Principal Component Analysis (PCA) to linearly
project the image space to a low-dimensional feature space.
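The projection step can be sketched as follows. The paper's implementation was in MATLAB; the snippet below is a minimal NumPy equivalent (the function names and the toy data are ours, for illustration only):

```python
import numpy as np

def compute_eigenfaces(images, num_components):
    """Compute an eigenface basis from a stack of face images.

    images: array of shape (n_images, height*width), one flattened face per row.
    Returns the mean face and the top `num_components` eigenfaces.
    """
    mean_face = images.mean(axis=0)
    centered = images - mean_face
    # SVD of the centered data gives the principal components directly;
    # rows of Vt are the eigenfaces, ordered by decreasing variance.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, Vt[:num_components]

def project(image, mean_face, eigenfaces):
    """Project a flattened face image into the low-dimensional face space."""
    return eigenfaces @ (image - mean_face)

# Toy example: 10 random "faces" of 16x16 pixels, projected onto 5 components.
rng = np.random.default_rng(0)
faces = rng.random((10, 256))
mean_face, basis = compute_eigenfaces(faces, 5)
coeffs = project(faces[0], mean_face, basis)
print(coeffs.shape)  # (5,)
```

Each 256-pixel image is thus represented by only 5 coefficients, which is the dimensionality reduction the Eigenface method relies on.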
    The Fisherface [3] is an enhancement of the Eigenface method. The Eigenface method
uses PCA for dimensionality reduction and thus yields projection directions that maximize the
total scatter across all classes, i.e., across all images of all faces. The PCA projections are
optimal for representation in a low-dimensional basis, but they may not be optimal from a
discrimination standpoint. Instead, the Fisherface method uses Fisher's Linear Discriminant
Analysis (FLDA or LDA), which maximizes the ratio of between-class scatter to
within-class scatter.
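The scatter-ratio criterion can be illustrated with a minimal NumPy sketch (ours, not the paper's MATLAB code; the function name and toy data are assumptions):

```python
import numpy as np

def fisher_directions(X, labels, num_components):
    """Directions maximizing between-class over within-class scatter.

    X: (n_samples, n_features) data, e.g. PCA-reduced face coefficients.
    labels: integer class id (person id) for each row of X.
    """
    overall_mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - overall_mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # FLD maximizes the generalized Rayleigh quotient; its solutions are the
    # leading eigenvectors of pinv(Sw) @ Sb.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:num_components]].real

# Toy example: two well-separated classes in 3-D.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 3)), rng.normal(5, 0.1, (20, 3))])
labels = np.array([0] * 20 + [1] * 20)
W = fisher_directions(X, labels, 1)
proj = X @ W  # the two classes are far apart along the FLD direction
```

Unlike PCA, which keeps directions of maximum overall variance, this criterion keeps directions along which the classes are most separable.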


    Eigenface and Fisherface are global approaches to face recognition that take the entire
image as a 2-D array of pixels. The two methods are closely related, since Fisherface is a
modified version of Eigenface. Both linearly project the images into a face space, capturing
the common features of faces and finding a suitable orthonormal basis for the projection. The
difference lies in the method of projection: Eigenface uses PCA while Fisherface uses FLD.
PCA works better for dimensionality reduction, while FLD works better for classification of
different classes.
   Eigenface is a practical approach to face recognition. Due to the simplicity of its algorithm,
an Eigenface recognition system can be implemented easily. It is also efficient in processing
time and storage: PCA reduces the dimensionality of an image greatly in a short period of
time. The accuracy of Eigenface is also satisfactory (over 90%) with frontal faces. However,
it requires a high correlation between the training data and the recognition data.


     Fisherface is similar to Eigenface but improves the classification of images of different
classes. With FLD, we can classify the training set to deal with different people and different
poses, achieving better accuracy across pose than the Eigenface approach. In addition,
because Fisherface removes the first three principal components, which are responsible for
light intensity changes, it is more invariant to light intensity.
     Fisherface is more complex than Eigenface in finding the projection into face space.
Calculating the ratio of between-class scatter to within-class scatter requires a lot of
processing time. In addition, due to the need for better classification, the projection dimension
in face space is not as compact as in Eigenface, resulting in larger storage per face and more
processing time during recognition.

3. EXPERIMENTAL RESULTS

     The facial recognition software was developed using the MATLAB programming
language by The MathWorks. This environment was chosen because it easily supports image
processing, image visualization, and linear algebra. The software was tested against the
UMIST database. UMIST was created by Daniel B. Graham with the purpose of collecting a
controlled set of images that vary pose uniformly from frontal to side view. The UMIST
database has 565 total images of 20 people. The UMIST database images, displayed below,
have uniform lighting and pose varying from side to frontal.

                               Figure 1: UMIST database Images

3.1 Comparison by Size of training data

    For these results, 20 recognition faces (one for each person) were randomly picked from
the database, leaving 545 photos to use as training faces. Mp, the number of principal
components to use, was chosen as 20.

    All 20 of 20 images were correctly recognized, confirming the very good performance of
eigenface with densely and uniformly sampled inputs. For this same database and setup,
fisherface performs very similarly.
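Recognition itself reduces to nearest-neighbor matching in the projected space. A minimal NumPy stand-in for the MATLAB implementation (function names and toy data are ours) might look like:

```python
import numpy as np

def recognize(query, gallery, gallery_ids, mean_face, basis):
    """Return the identity of the gallery face closest to `query` in face space.

    gallery: (n, height*width) flattened training faces.
    basis: (Mp, height*width) eigenface (or fisherface) basis rows.
    """
    gallery_coeffs = (gallery - mean_face) @ basis.T  # project training set
    q = basis @ (query - mean_face)                   # project query face
    dists = np.linalg.norm(gallery_coeffs - q, axis=1)
    return gallery_ids[int(np.argmin(dists))]

# Toy example: the query is an exact copy of gallery face 2, so its own
# identity (12) must be returned at zero distance.
rng = np.random.default_rng(2)
gallery = rng.random((4, 64))
ids = np.array([10, 11, 12, 13])
mean_face = gallery.mean(axis=0)
basis = np.linalg.svd(gallery - mean_face, full_matrices=False)[2][:3]
print(recognize(gallery[2], gallery, ids, mean_face, basis))  # 12
```

In the experiments above, `Mp = 20` plays the role of the number of basis rows.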


3.2 Comparison by Image pose

    For these results, 20 recognition faces (one for each person) were randomly picked from
the database, then 60 more photos were used as training faces. Three training faces were
picked for each person: a frontal, side, and 45-degree view.
    Out of the 20 faces, 16 were correctly classified in the first match. Also notice that this
approach is rather pose invariant: it often (13 times) picks out all 3 training images of the
correct person as the closest matches.
    For comparison, the same setup was run using the eigenface algorithm. Here 14 of the 20
faces were correctly classified, and all 3 correct images were never found together. Clearly,
the fisherface algorithm performs better under pose variation when only a few samples across
pose are available in the training set.

                            Table comparing eigenface to fisherface

                                     Fisherface                      Eigenface
              Computational          slightly more complex           simpler
              complexity
              Effectiveness across   good, even with                 some, with
              pose                   limited data                    enough data
              Sensitivity to         little                          very
              lighting

   We find that both the eigenface and fisherface techniques work very well for a uniformly
and densely sampled data set varied over pose. When a more sparse data set across pose is
available, the fisherface approach performs better than eigenface.


4. POSE ESTIMATION

    Automatic estimation of head pose facilitates human facial analysis. It has widespread
applications such as gaze direction detection, video teleconferencing, and human-computer
interaction (HCI). It can also be integrated into a multi-view face detection and recognition
system. Most current methods estimate pose in a limited range or treat pose as a classification
problem by assigning the face to one of many discrete poses [6, 7], and they are mainly tested
on images taken in controlled environments, e.g. the FacePix dataset [8] (Fig. 2a). In short, a
framework is missing for continuous face pose estimation in uncontrolled environments
(Fig. 2b).


   Figure 2: a) Example images from the FacePix database [8] typically used for pose
   estimation. These images are taken in controlled environments with fixed scale, lighting
   and background. b) We address the problem of estimating pose as a continuous parameter
   on “real world” images with large variations in background, illumination and expression.

      (a) Our method decomposes a test image Y into a regular grid of patches.
      (b) There is a large predefined library of object instances. The library can be
      considered as a palette from which image patches can be taken.
      (c) The library is used to approximate each patch from the test image. The choice of
      library patch provides information about the true pose.
      (d) Model parameters W are used to interpret these patch choices in a Bayesian
      framework to calculate a posterior over pose (e).
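The patch-matching step can be illustrated with a simplified sketch. Instead of the Bayesian posterior of step (d), this toy version (ours; the function name, SSD matching, and the pose-averaging rule are simplifying assumptions, not the paper's model) just averages the pose labels of the best-matching library patches:

```python
import numpy as np

def estimate_pose(test_image, library_patches, library_poses, patch=8):
    """Estimate pose by matching each grid patch to a pose-labelled library.

    library_patches: (n, patch*patch) flattened patches with known poses.
    Returns the mean pose of the best-matching library patch per grid cell,
    a crude stand-in for the Bayesian posterior over pose.
    """
    h, w = test_image.shape
    votes = []
    for y in range(0, h - patch + 1, patch):       # regular grid of patches
        for x in range(0, w - patch + 1, patch):
            p = test_image[y:y + patch, x:x + patch].ravel()
            ssd = ((library_patches - p) ** 2).sum(axis=1)
            votes.append(library_poses[int(np.argmin(ssd))])
    return float(np.mean(votes))

# Toy example: a uniformly bright image matches the bright library patch,
# whose pose label is +30 degrees, so the estimate is 30.0.
library = np.vstack([np.ones(64), np.zeros(64)])
poses = np.array([30.0, -30.0])
img = np.ones((16, 16))
print(estimate_pose(img, library, poses))  # 30.0
```

The full method replaces the hard per-patch choice with a posterior over pose computed from the model parameters W, which makes it robust to the background and illumination variation of "real world" images.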

      Figure 3: Example results. True pose, i.e. human estimate (above), vs. our estimate
5. CONCLUSION

    The Eigenface and Fisherface methods were investigated and compared. The comparative
experiment showed that the Fisherface method outperformed the Eigenface method. The
usefulness of the Fisherface method under varying pose and varying sizes of training
databases was verified. Our results also show that a patch-based representation is suitable for
face pose estimation; we achieve promising results on automatic face pose estimation in
uncontrolled environments.

REFERENCES

1. M. Turk and A. Pentland, "Eigenfaces for recognition," J. Cognitive Neuroscience, vol. 3,
no. 1, 1991.
2. M. Turk and A. Pentland, "Face recognition using eigenfaces," Proc. IEEE Conf. on
Computer Vision and Pattern Recognition, 1991, pp. 586-591.


3. P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, “Eigenfaces vs. fisherfaces:
recognition using class specific linear projection,” IEEE Transactions on Pattern Analysis and
Machine Intelligence, vol. 19, no. 7, pp. 711-720, July 1997.
4. Alan Brooks (in collaboration with Li Gao), "Face Recognition: Eigenface and Fisherface
Performance Across Pose," ECE 432 Computer Vision with Professor Ying Wu, 2004.
5. Jania Aghajanian and Simon J. D. Prince, "Face Pose Estimation in Uncontrolled
Environments," Department of Computer Science, University College London.
6. N. Kruger, M. Potzsch, T. Maurer, and M. Rinne, "Estimation of face position and pose
with labeled graphs," in BMVC, pages 735-743, 1996.
7. S. Z. Li, X. H. Peng, X. W. Hou, H. J. Zhang, and Q. S. Cheng, "Multi-view face pose
estimation based on supervised ISA learning," in AFGR, 2002.
8. G. Little, S. Krishna, J. Black, and S. Panchanathan, "A methodology for evaluating
robustness of face recognition algorithms with respect to changes in pose and illumination
angle," in ICASSP, 2005.
9. P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, 1997, "Eigenfaces vs. Fisherfaces:
Recognition using class specific linear projection," IEEE Trans. Patt. Anal. Mach. Intell. 19.
10. R. Bhati, S. Jain, D. K. Mishra, and D. Bhati, 2010, "A comparative analysis of different
neural networks for face recognition using Principal Component Analysis and efficient
variable learning rate," IEEE Fourth Asia International Conference on Mathematical/
Analytical Modelling and Computer Simulation, pp. 354-359.
11. J. Huang, B. Heisele, and V. Blanz, 2003, "Component-based face recognition with 3D
morphable models," in Proceedings, International Conference on Audio- and Video-Based
Person Authentication.
12. M. Kirby and L. Sirovich, 1990, "Application of the Karhunen-Loeve procedure for the
characterization of human faces," IEEE Trans. Patt. Anal. Mach. Intell. 12.
13. P. B. Khanale, 2010, "Recognition of Marathi numerals using artificial neural network,"
J. Artificial Intell., 3: 135-140.
14. A. Lanitis, C. J. Taylor, and T. F. Cootes, 1995, "Automatic face identification system
using flexible appearance models," Image Vis. Comput. 13, 393-401.
15. Marian Stewart Bartlett, Javier R. Movellan, and Terrence J. Sejnowski, 2002, "Face
Recognition by Independent Component Analysis," IEEE Transactions on Neural Networks,
Vol. 13, No. 6.
16. S. Lawrence, C. L. Giles, A. C. Tsoi, and A. D. Back, "Face recognition: a convolutional
neural-network approach," IEEE Transactions on Neural Networks, vol. 8, no. 1, pp. 98-113.
17. M. Turk and A. Pentland, 1991, "Eigenfaces for recognition," J. Cogn. Neurosci. 3,
72-86.
18. L. Wiskott, J.-M. Fellous, and C. von der Malsburg, 1997, "Face recognition by elastic
bunch graph matching," IEEE Trans. Patt. Anal. Mach. Intell. 19, 775-779.
19. W. Zhao, R. Chellappa, and A. Rosenfeld, 2003, "Face Recognition: A Literature
Survey," ACM Computing Surveys, Vol. 35, No. 4, pp. 399-458.

