
          Face tracking for interaction
               - review and work

                  Changbo Hu
            Advisor: Matthew Turk
       Department of Computer Science,
    University of California, Santa Barbara
                        Outline
   Review
    – What is the aim of face tracking?
    – How did people do it?
     – What are we going to do?


   Current work
    – Mean-shift skin tracking
    – Mean-shift elliptical head tracking
    – Face tracking and imitation
             Face in interaction
 Where?                     Detection
 Who?                       Recognition, verification
 What?                      Expression, talking, … attributes

 What do we expect from the computer?
     To perceive the above information (→ applications)
     To respond properly
             Applications
– Authentication
– Human recognition
– Internet
– Human-computer interface
– Facial animation
– Talking agent
– Model-based video coding
           The role of tracking
   Two meanings:
    – Once a face is detected, keep up with its motion
        Tracking is easier, in some sense
        Some tasks require this

    – To know its pose
        To improve performance in recognizing faces and
         expressions
        Synthesis and animation
         What factors cause face
               variation?

1. Pose (models the view relative to the camera)
2. Deformation (models facial expression, talking, …)
3. Intensity change (models illumination and the sensor)
       What is face tracking?
 To find all the variation factors
 Problem formulation:

       I( P( R·g(x) + t ) ) = I0

   where g(x) is the deformation, R the rotation, t the
   translation, P the projection, and I the intensity change
   due to illumination and the sensor.
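The formulation above can be read as an intensity-matching objective. The following is a minimal numerical sketch, not the talk's implementation: the helper names (deform, project, residual), the linear expression basis, and the pinhole camera parameters are all assumptions for illustration.

```python
# Sketch of the objective I(P(R g(x) + t)) = I0 for a set of 3-D model points.
import numpy as np

def deform(x, expr_params, basis):
    """g(x): nonrigid deformation as a linear combination of expression modes."""
    return x + (basis @ expr_params).reshape(-1, 3)

def project(X, focal=500.0, cx=160.0, cy=120.0):
    """P(.): simple pinhole projection of 3-D points to pixel coordinates."""
    u = focal * X[:, 0] / X[:, 2] + cx
    v = focal * X[:, 1] / X[:, 2] + cy
    return np.stack([u, v], axis=1)

def residual(image, I0, x, R, t, expr_params, basis):
    """Intensity residual between the warped model points and the template I0.
    Assumes the projected points fall inside the (grayscale) image."""
    X = deform(x, expr_params, basis) @ R.T + t      # rigid motion of the deformed shape
    uv = np.round(project(X)).astype(int)            # project to the image plane
    sampled = image[uv[:, 1], uv[:, 0]]              # I(P(R g(x) + t))
    return sampled.astype(float) - I0                # ~0 when the parameters are right
```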
How did people do it?
(continued)
        To look into some details
Gang Xu, ICPR98




Black, CVPR 95
         To look into some details
Blake, ICCV98



         Bilinear combination of motion
         and expression



Cassia CVPR99
          To look into some details

Pentland, Computer
Graphics, 96




 DT, PAMI 93
          To look into some details

Pentland ICCV workgroup 99
        To look into some details

GorkTurk ICCV01
            What will we do?
   Task:
    Personalized full tracking and animation of face
 Start point: 2d face location
 Selecting face model
 Modeling expression
 Modeling illumination
 Animation
      What conditions do we have?
    The face is personalized (subject-specific), so we can
     –   model its shape
     –   model its expression
     –   rely on stable feature points
     –   sample its lighting effects
    Statistical learning
     – PCA, ASM, AAM
     – muscle vectors, human metrics for expression
     – learn feature point locations
     Start point--current work
 Mean shift tracking of skin color
 Mean shift tracking of elliptical head
 Two-step face tracking and expression imitation
              Selecting face model
Face modeling is itself a
large topic, related to
graphics, talking faces, etc.
Whichever model we choose
must consider:
1. The model can account
for 3D motion
2. The model is easy to
adjust to an individual

                               From Reference [29]
     Face model: data capture
   to determine head geometry
    – method
         two calibrated front and profile images
         10 feature points -- four eye corners, two nostrils, the
          bottom of the upper front teeth, the chin, and the base
          of the ears
    Face model: locate features
   to locate the facial features with high
    precision in three steps
     – to find a coarse outline of the head and an
       estimate of the main features
     – to analyze the important areas in more detail
     – to zoom in on specific points and measure them
       with high accuracy
Face model: locate features
Face model: Location of main features
   texture segmentation
    – using luminance image
    – bandpass filter and adaptive threshold
    – morphological operation
    – connected component analysis
    – extracting the center of mass, width, and height
      of each blob
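A hedged OpenCV sketch of the texture-segmentation steps above: band-pass filtering of the luminance image, adaptive thresholding, a morphological operation, connected-component analysis, and blob statistics. The filter sizes, thresholds, and file name are assumed values, not those of the original system.

```python
import cv2
import numpy as np

gray = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)

# Band-pass filter approximated by a difference of Gaussians on the luminance image.
low = cv2.GaussianBlur(gray, (3, 3), 0)
high = cv2.GaussianBlur(gray, (15, 15), 0)
bandpass = cv2.normalize(cv2.absdiff(low, high), None, 0, 255, cv2.NORM_MINMAX)

# Adaptive threshold, then a morphological closing to merge nearby responses.
binary = cv2.adaptiveThreshold(bandpass, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY, 21, -5)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

# Connected-component analysis: keep center of mass, width, and height per blob.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
blobs = [{"center": tuple(centroids[i]),
          "width": int(stats[i, cv2.CC_STAT_WIDTH]),
          "height": int(stats[i, cv2.CC_STAT_HEIGHT])}
         for i in range(1, n)]  # label 0 is the background
```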
Face model: Location of main features
   color segmentation
    – background color / skin and hair color
    – extracting the same features as in the texture case
   evaluating combinations of features (see the sketch below)
    – train a 2-D head model (size)
    – score blobs to select candidates
    – check each eye candidate for a good combination
    – evaluate the whole head
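An illustrative sketch of the blob-scoring idea, with assumed details rather than the procedure of [29]: blob pairs are scored as eye candidates against a simple 2-D head model using an expected inter-eye distance relative to head width and a roughly horizontal eye axis. The blob list, the 0.4 ratio, and the tilt threshold are made-up placeholders.

```python
import math

def eye_pair_score(blob_a, blob_b, head_width, max_tilt_deg=20.0):
    (xa, ya), (xb, yb) = blob_a["center"], blob_b["center"]
    dist = math.hypot(xb - xa, yb - ya)
    tilt = abs(math.degrees(math.atan2(yb - ya, xb - xa)))
    if tilt > max_tilt_deg:
        return 0.0                              # eye axis too far from horizontal
    expected = 0.4 * head_width                 # assumed ratio from a trained 2-D head model
    return max(0.0, 1.0 - abs(dist - expected) / expected)

# Candidate blobs from the texture/color segmentation (example values).
blobs = [{"center": (60, 80)}, {"center": (130, 82)}, {"center": (95, 150)}]
pairs = [(a, b, eye_pair_score(a, b, head_width=180))
         for i, a in enumerate(blobs) for b in blobs[i + 1:]]
best_pair = max(pairs, key=lambda t: t[2])      # keep the best-scoring eye pair
```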
Face model: Measuring facial features
   to find the exact dimensions
    – areas around the mouth and the eyes
    – using the HSI color space
    – thresholds for each (predefined) color cluster
    – recalibrating the color thresholds dynamically
    – remarkably accurate, but not robust enough
    – standard deviation of about 2 pixels
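A hedged sketch of measuring a feature area by color thresholding, using HSV as OpenCV's close stand-in for the HSI space mentioned above: threshold a pre-learned color cluster, then recalibrate the thresholds from the matched pixels. The numeric ranges and file name are placeholders, not the values used in [29].

```python
import cv2
import numpy as np

def measure_region(bgr_patch, lo, hi):
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))

    # Dynamic recalibration: re-center the threshold window on the statistics
    # of the matched pixels, to be used on the next frame.
    matched = hsv[mask > 0]
    if len(matched):
        mean, std = matched.mean(axis=0), matched.std(axis=0)
        lo = np.clip(mean - 2 * std, 0, 255).astype(np.uint8)
        hi = np.clip(mean + 2 * std, 0, 255).astype(np.uint8)

    ys, xs = np.where(mask > 0)
    size = (int(np.ptp(xs)) + 1, int(np.ptp(ys)) + 1) if len(xs) else (0, 0)
    return size, (lo, hi)          # (width, height) in pixels, updated thresholds

# Example: a lips-like hue/saturation cluster (assumed range) on a mouth patch.
patch = cv2.imread("mouth_patch.png")
(size, new_range) = measure_region(patch, lo=[160, 60, 60], hi=[180, 255, 255])
```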
Face model: Measuring facial features

  the colors of the teeth, the lips, and the inner, dark
  part of the mouth are pre-learned
    Face model: High accuracy
          feature points
   Correlation analysis
    – a group of kernels
    – kernels chosen according to feature width and height
    – scan the image for the best correlation
    – 20×20 kernel in a 100×100 region, conjugate gradient
      descent approach
    – 0.5 pixel standard deviation
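A sketch of the correlation step: a small (here 20×20) kernel scanned over a 100×100 search region with normalized cross-correlation. OpenCV's matchTemplate stands in for the conjugate-gradient refinement mentioned above; reaching the quoted sub-pixel accuracy would need an extra peak fit. The synthetic data below is only for illustration.

```python
import cv2
import numpy as np

def locate_feature(search_region, kernel):
    """Return the (x, y) of the best normalized cross-correlation match."""
    response = cv2.matchTemplate(search_region, kernel, cv2.TM_CCORR_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(response)
    x, y = max_loc
    kh, kw = kernel.shape[:2]
    return x + kw // 2, y + kh // 2          # center of the best-matching window

# Example: a 20x20 kernel inside a 100x100 search window (synthetic data).
region = np.random.randint(0, 256, (100, 100), dtype=np.uint8)
kernel = region[40:60, 30:50].copy()
print(locate_feature(region, kernel))        # ~ (40, 50)
```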
Face model: High accuracy
     from correlation
Face model: Pose estimation
   using 6 corners whose 3D positions are known from the model

       iterative equation (to find i, j, and Z0)

       low-pass filtering of their trajectories
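The slide refers to the iterative equation of [29], which solves for the rotation rows i, j and the depth Z0 under scaled orthography. As a stand-in illustration only, a standard perspective PnP solve over the same six known 3-D corner points gives an equivalent pose; the point coordinates and camera matrix below are assumptions.

```python
import cv2
import numpy as np

model_pts = np.array([[-30, 35, 0], [-10, 35, 0], [10, 35, 0], [30, 35, 0],
                      [-8, 10, 5], [8, 10, 5]], dtype=np.float64)   # 6 corners in the model frame (mm)
image_pts = np.array([[120, 90], [150, 92], [180, 91], [210, 93],
                      [155, 130], [185, 131]], dtype=np.float64)    # tracked 2-D corner locations

K = np.array([[500, 0, 160], [0, 500, 120], [0, 0, 1]], dtype=np.float64)  # assumed camera matrix
ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)   # rotation matrix; its rows play the role of i, j in [29]
# Low-pass filter the pose over time, e.g. pose_t = 0.8 * pose_{t-1} + 0.2 * pose_t.
```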
         Modeling expression
   Like AAM, create pose-free appearance
    patches
        Modeling illumination
   3D linear space, assuming a Lambertian
    surface without shadowing:

         E(p) = a(p) · n(p)^T · s

   Considering shadowing and distortion, the basis can
    be increased to around 10

   Using only one subject, we can learn the
    linear space by experiment
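A hedged sketch of learning that low-dimensional illumination space from images of one subject under varying lighting: stack the images as vectors and keep the top principal components (~3 for the Lambertian, shadow-free model, ~10 once shadows and distortions are included). The function names and image list are assumptions.

```python
import numpy as np

def learn_illumination_basis(images, n_basis=10):
    """images: list of equally sized grayscale arrays of one subject under different lights."""
    X = np.stack([im.astype(np.float64).ravel() for im in images])  # one image per row
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_basis]        # basis vectors spanning the illumination space

def relight_coefficients(image, mean, basis):
    """Project a new image of the same subject onto the learned space."""
    return basis @ (image.astype(np.float64).ravel() - mean)
```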
              Animation
 Synthesized animation
 Performance driven sketch animation
         End


Questions and comments?
    Mean shift color tracking
   An implementation to show the power of the skin cue
   The feature is the probability of skin hue
   Mean-shift search:
     1.   Choose a search window size.
     2.   Choose the initial location of the search window.
     3.   Compute the mean location in the search window.
     4.   Center the search window at the mean location
          computed in Step 3.
     5.   Repeat Steps 3 and 4 until convergence.
                                      (continued)
 Find the zeroth moment M00
 Find the first moments for x and y, M10 and M01
 Then the mean search-window location (the
  centroid) is (xc, yc):
     xc = M10 / M00,   yc = M01 / M00
 Get features from the blob (sketched below):
    – length, width, rotation
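A sketch of the mean-shift iteration above, using the image moments just described. `prob` is assumed to be the back-projected skin-hue probability image for the current frame; the window parameters are placeholders.

```python
import numpy as np

def mean_shift(prob, x, y, w, h, max_iter=20, eps=1.0):
    """Shift a (w x h) window over the probability image until its centroid stops moving."""
    for _ in range(max_iter):
        win = prob[y:y + h, x:x + w].astype(np.float64)
        m00 = win.sum()                                    # zeroth moment
        if m00 == 0:
            break
        ys, xs = np.mgrid[0:win.shape[0], 0:win.shape[1]]
        m10, m01 = (xs * win).sum(), (ys * win).sum()      # first moments
        xc, yc = m10 / m00, m01 / m00                      # centroid inside the window
        new_x = int(round(x + xc - w / 2))                 # re-center the window on the centroid
        new_y = int(round(y + yc - h / 2))
        if abs(new_x - x) < eps and abs(new_y - y) < eps:
            break                                          # converged
        x, y = max(new_x, 0), max(new_y, 0)
    return x, y
```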
       Mean-shift elliptical head
              tracking
Based on shape and adaptive color: the head is modeled
  as an ellipse, and the head's appearance is represented
  by an adaptive color model.

● First: mean shift to track the color blob
● Second: maximize the normalized gradient around the
  boundary of the elliptical head.
             Why adaptive color
The head's hue varies during tracking, especially across different views
or large rotations, such as:

In order to handle this problem, we modify the head's color model
continuously during tracking, using the tracking result:

                  h_N = α · h_T + (1 − α) · h_R

 h_T : the initial color representation
 h_R : the color of the tracking result in the current frame
 h_N : the head's color used for tracking in the next frame
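The update rule above, applied to hue histograms. This is a minimal sketch; the blending weight α and the histogram representation are assumed details.

```python
import numpy as np

def update_color_model(h_T, h_R, alpha=0.7):
    """h_N = alpha * h_T + (1 - alpha) * h_R, renormalized as a distribution."""
    h_N = alpha * np.asarray(h_T, float) + (1 - alpha) * np.asarray(h_R, float)
    return h_N / h_N.sum()
```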
       Relocate elliptical head
   Maximize the normalized gradient

▲ Assume the elliptical head's state is s = (y, h)
▲ gi is the intensity gradient at perimeter pixel i of the ellipse
▲ Nh is the number of pixels on the perimeter of the ellipse

        s* = argmax_{s ∈ S}  (1/Nh) · Σ_{i=1..Nh} gi

   Then update the color model (see the sketch below)
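A sketch of the gradient step: score candidate ellipse states by the mean intensity-gradient magnitude sampled on the ellipse perimeter and keep the best one. Searching only a small grid of center positions (with fixed axes) is an assumed simplification; the full state s = (y, h) would also vary the size.

```python
import numpy as np

def perimeter_score(grad_mag, cx, cy, a, b, n_pts=72):
    """(1/Nh) * sum_i g_i along the ellipse with center (cx, cy) and axes (a, b)."""
    theta = np.linspace(0, 2 * np.pi, n_pts, endpoint=False)
    xs = np.clip(np.round(cx + a * np.cos(theta)).astype(int), 0, grad_mag.shape[1] - 1)
    ys = np.clip(np.round(cy + b * np.sin(theta)).astype(int), 0, grad_mag.shape[0] - 1)
    return grad_mag[ys, xs].mean()

def relocate(grad_mag, cx, cy, a, b, radius=5):
    """Local search around the mean-shift result for the best-scoring center."""
    candidates = [(cx + dx, cy + dy) for dx in range(-radius, radius + 1)
                                     for dy in range(-radius, radius + 1)]
    return max(candidates, key=lambda c: perimeter_score(grad_mag, c[0], c[1], a, b))
```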
                    Benefits
   Compared with Bradski's paper [28] and the
    Stanford elliptical head paper [16], our approach
    has these benefits:
     – Robust (fusion of color and gradient cues,
       adaptive to color changes)
     – Fast (no exhaustive search is needed; mean shift
       iterates quickly)
       Demo




Real-time face pose tracking &
 expression imitation (in progress)
 A modification of the Active Appearance Model (AAM)
 The most obvious drawback of AAM?
    – slow, because it cannot apply the PCA projection
      directly
 Explicitly compute the rigid motion from a set of
  rigid feature points
 Learn the PCA space for nonrigid shape
  and appearance
             Two-step face tracking
Formulation:
Rigid features x1, nonrigid features x2
T_a(x1) -> z1; the same T_a maps x2 -> z2

    z2 = z2_mean + P·b

Deal with the imprecision of the rigid points by
synthesized feedback:
in the synthesized z2, relocate the rigid features x1
and compute a new T;
iterate until convergence (sketched below)
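A minimal sketch of the two-step idea under assumed conventions: estimate a similarity transform T from the rigid points x1 to their pose-free reference z1_ref, map the nonrigid points x2 with the same T, and project them onto the learned PCA shape space (z2 = z2_mean + P·b). The helper names are illustrative, and the synthesized-feedback relocation loop is only indicated in a comment.

```python
import numpy as np

def similarity_from_points(src, dst):
    """Least-squares similarity transform (scale, rotation, translation): src -> dst."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    s, d = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(d.T @ s)
    R = U @ Vt
    scale = S.sum() / (s ** 2).sum()
    t = mu_d - scale * (R @ mu_s)
    return scale, R, t

def track_step(x1, x2, z1_ref, z2_mean, P):
    """x1, x2: tracked 2-D points; z1_ref: pose-free rigid reference; P: PCA basis."""
    scale, R, t = similarity_from_points(x1, z1_ref)    # step 1: rigid pose from rigid points
    z2 = scale * (R @ x2.T).T + t                       # nonrigid points in the pose-free frame
    b = P.T @ (z2 - z2_mean).ravel()                    # step 2: PCA expression coefficients
    z2_syn = z2_mean + (P @ b).reshape(z2_mean.shape)   # model-consistent synthesized shape
    # The talk's feedback loop would relocate x1 in the synthesized face,
    # recompute T, and repeat until convergence.
    return (scale, R, t), b, z2_syn
```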
Pose-free expression


                      Pose T




   New face with pose and expression
                    Animation
One implementation: using hand-drawn corresponding
modes, for example:




                                  Reference
1.   [H. Li, PAMI93] H. Li, P. Roivainen, and R. Forchheimer, "3-D motion estimation in model-based
     facial image coding", PAMI, 6, 1993
2.   [DT, PAMI 93] D. Terzopoulos and K. Waters, Analysis and synthesis of facial image sequences
     using physical and anatomical models, PAMI, 6, 1993
3.   [Black, CVPR 95] M. Black and Y. Yacoob, Tracking and recognizing rigid and non-rigid facial
     motion using local parametric models of image motion, CVPR95
4.   [Essa ICCV95] I. Essa and A. Pentland, Facial expression recognition using a dynamic model
     and motion energy, in Proc. 5th Int. Conf. on Computer Vision, pages 360-367, 1995
5.   [Darell CVPR96] Trevor Darrell, Baback Moghaddam, and Alex Pentland, Active face tracking and
     pose estimation in an interactive room, CVPR96
6.   [Pentland, Computer Graphics, 96] Irfan Essa, Sumit Basu, T. Darrell, and A. Pentland, Modeling,
     tracking and interactive animation of faces and heads using input from video, Proceedings
     Computer Graphics, 1996
7.   [L. Davis FG96] T. Horprasert, Y. Yacoob, and L.S. Davis, "Computing 3D head orientation from a
     monocular image sequence", FG96
8.   [Yacoob, PAMI96] Y. Yacoob and L.S. Davis, "Computing spatio-temporal representations of
     human faces", PAMI, 6, 1996
9.   [Decarlo, CVPR 96] D. DeCarlo and D. Metaxas, The integration of optical flow and
     deformable models with applications to human face shape and motion estimation, CVPR 96
10. [Nesi RTI96] P. Nesi and R. Magnolfi, Tracking and synthesizing facial motions with dynamic
    contours, Real Time Imaging, 2:67-79, 1996
11. [Oliver CVPR97] Nuria Oliver and Alex Pentland, LAFTER: Lips and face real-time tracker,
    CVPR97
12. [DT, CVPR97] P. Fieguth and D. Terzopoulos, "Color-based tracking of heads and other
    mobile objects at video frame rates", CVPR97
13. [Pentland CVPR97] T. Jebara and A. Pentland, "Parameterized structure from motion for 3D
    adaptive feedback tracking of faces", CVPR97
14. [Cootes ECCV 98] T. Cootes and G. Edwards, Active appearance models, ECCV98
15. [Gang Xu, ICPR98]Gang Xu and Takeo Sugimoto, "Rits Eye: A Software-Based System for
    Realtime Face Detection and Tracking Using Pan-Tilt-Zoom Controllable Camera", Proc. of
    14th International Conference on Pattern Recognition, pp.1194-1197, 1998
16. [Birchfield CVPR98] Stan Birchfield, Elliptical head tracking using intensity gradients and
    color histograms, CVPR 98
17. [Hager PAMI98] G. Hager and P. Belhumeur, Efficient region tracking with parametric models of
    geometry and illumination, IEEE Transactions on Pattern Analysis and Machine Intelligence,
    20(10), pp. 1125-1139, 1998
18. [Shodl PUI98] Schödl, Haro, and Essa, Head tracking using a textured polygonal model,
    PUI98.
19. [Blake ICCV98] B. Bascle, A. Blake, Separability of pose and expression in facial tracking and
    animation, ICCV98
20. [Cassia CVPR99] M. La Cascia and S. Sclaroff, Fast, reliable head tracking under varying
    illumination, CVPR99
21. [Pentland ICCV workgroup 99] J. Ström, T. Jebara, S. Basu, and A. Pentland, Real time
    tracking and modeling of faces: an EKF-based analysis by synthesis approach, in
    International Conference on Computer Vision: Workshop on Modelling People, Corfu,
    Greece, September 1999
22. [GorkTurk ICCV01] Salih Burak Gokturk, Jean-Yves Bouguet, et al., A data-driven
    model for monocular face tracking, ICCV 2001
23. [Y Li ICCV01] Yongmin Li, Shaogang Gong and Heather Liddell, Modeling face
    dynamically across views and over time, ICCV, 2001
24. [Feris ICCV workgroup 01] Rogerio S Feris, Roberto m. Cesar Jr, Efficient real-time
    face tracking in wavelet subspace, ICCV Workshop, 2001
25. [Ahlberg RATFFG-RTS01] Jorgen Ahlberg, Using the Active Appearance Algorithm for
    Face and Facial Feature Tracking 2nd International Workshop on Recognition, Analysis
    and Tracking of Faces and Gestures in Realtime Systems (RATFFG-RTS), pp. 68 - 72,
    Vancouver, Canada, July 2001.
26. [CC Chang IJCV02] Chin-Chun Chang and Wen-Hsiang Tsai, Determination of head
    pose and facial expression from a single perspective view by successive scaled
    orthographic approximations, IJCV, 3, 2002
27. Dorin Comaniciu and Peter Meer, Real-time tracking of non-rigid objects using mean
    shift, in Proc. of the IEEE CVPR, 2000, pp. 142-149
28. G.R.Bradski. Real-Time Face and Object Tracking as a Component of a Perceptual
    User Interface. IEEE Workshop Application of Computer Vision. 1998, pp: 214-219
29. Eric Cosatto and Hans Peter Graf, Photo-realistic talking-heads from image samples,
    IEEE trans. On Multimedia, vol.2, No.3, September 2000

								