FACIAL RECOGNITION SYSTEM




Submitted By:
Biswa Ranjan Patra
   ENTC
0401229099
                OVERVIEW
People have an amazing ability to recognize and remember thousands of faces. In this seminar we will see how computers turn our faces into computer code so they can be compared to thousands of other faces.

Facial recognition software can be used to find criminals in a crowd, turning a mass of people into a big lineup.
             DEFINITION
Facial Recognition Systems are computer-based security systems that can automatically detect and identify human faces.

They can also serve as automatic access control systems, applying modern biometric methods of face identification.
System Snapshots

Identification     Verification
       EARLY DEVELOPMENT
Pioneers of automated facial recognition include Woody Bledsoe, Helen Chan Wolf and Charles Bisson.

During 1964 and 1965, Bledsoe, along with Helen Chan and Charles Bisson, worked on using the computer to recognize human faces. This project was labeled man-machine because the human extracted the coordinates of a set of features from the photographs, which were then used by the computer for recognition.
                       THE FACE


Your face is an important part of who you are and how people identify you. Imagine how hard it would be to recognize an individual if all faces looked the same. Except in the case of identical twins, the face is arguably a person's most unique physical characteristic. While humans have had the innate ability to recognize and distinguish different faces for millions of years, computers are just now catching up.

If you look in the mirror, you can see that your face has certain distinguishable landmarks. These are the peaks and valleys that make up facial features. Visionics defines these landmarks as NODAL POINTS.
There are about 80 nodal points on a human face. Here are a few of the nodal points that are measured by the software:
► Distance between eyes
► Width of the nose
► Depth of eye sockets
► Cheek bones
► Jaw lines
► Chin
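As a rough sketch of how such measurements become machine-comparable data, the snippet below computes a few inter-landmark distances and collects them into a feature vector. The landmark names and coordinates are hypothetical, purely for illustration:

```python
import math

# Hypothetical landmark coordinates (x, y) in pixels; a real system would
# obtain these from an automatic facial-landmark detector.
landmarks = {
    "left_eye": (120, 95),
    "right_eye": (180, 95),
    "nose_left": (138, 140),
    "nose_right": (162, 140),
    "chin": (150, 210),
}

def dist(a, b):
    """Euclidean distance between two landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# A few of the measurements listed above, collected into a feature vector.
features = {
    "eye_distance": dist(landmarks["left_eye"], landmarks["right_eye"]),
    "nose_width": dist(landmarks["nose_left"], landmarks["nose_right"]),
    "eye_to_chin": dist(landmarks["left_eye"], landmarks["chin"]),
}
```

Comparing two faces then reduces to comparing two such vectors of numbers.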
                     SOFTWARE
Facial recognition software is based on the ability to first recognize faces, which is a technological feat in itself, and then measure the various features of each face.

The software packages used are FaceIt and FRS SDK Ver 2.0.
Facial recognition software is designed to pinpoint a face and measure its features.
        THE BASIC PROCESS
1) Detection: When the system is attached to a video surveillance system, the recognition software searches the field of view of a video camera for faces. If there is a face in the view, it is detected within a fraction of a second.

2) Alignment: Once a face is detected, the system determines the head's position, size and pose. A face needs to be turned at least 35 degrees toward the camera for the system to register it.

3) Normalization: The image of the head is scaled and rotated so that it can be registered and mapped into an appropriate size and pose. Normalization is performed regardless of the head's location and distance from the camera. Light does not impact the normalization process.

4) Representation: The system translates the facial data into a unique code. This coding process allows for easier comparison of the newly acquired facial data to stored facial data.

5) Matching: The newly acquired facial data is compared to the stored data and (ideally) linked to at least one stored facial representation.
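The five steps above can be sketched as a toy pipeline. Every helper below is a hypothetical stand-in for the real computer-vision component; only the control flow mirrors the described process:

```python
def detect_face(frame):
    # Step 1: Detection -- search the frame for a face region.
    return frame.get("face")

def align(face):
    # Step 2: Alignment -- estimate position, size and pose; the system
    # registers the face only if it is within 35 degrees of the camera.
    return {**face, "pose_ok": abs(face["angle"]) <= 35}

def normalize(face):
    # Step 3: Normalization -- scale and rotate to a canonical size/pose.
    return {**face, "angle": 0, "size": 100}

def represent(face):
    # Step 4: Representation -- translate facial data into a compact code
    # (a stand-in for a real faceprint).
    return (face["size"], face["id"])

def match(code, database):
    # Step 5: Matching -- find stored representations with the same code.
    return [name for name, stored in database.items() if stored == code]

# Demo: enroll one face, then match a fresh capture of the same face.
frame = {"face": {"id": 7, "angle": 20, "size": 80}}
face = normalize(align(detect_face(frame)))
database = {"alice": represent(face)}
matches = match(represent(face), database)
```

Because normalization maps every head to the same canonical size and pose, the representation step sees comparable inputs regardless of where the person stood.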
The heart of the FaceIt facial recognition system is the Local Feature Analysis (LFA) algorithm. This is the mathematical technique the system uses to encode faces. The system maps the face and creates a faceprint, a unique numerical code for that face. Once the system has stored a faceprint, it can compare it to the thousands or millions of faceprints stored in a database. Each faceprint is stored as an 84-byte file.
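To illustrate how compact 84-byte faceprints could be compared against a database, here is a minimal sketch; the byte-wise distance metric is an assumption for demonstration, not the proprietary LFA comparison:

```python
FACEPRINT_SIZE = 84  # each faceprint is stored as an 84-byte record

def faceprint_distance(a, b):
    """Sum of absolute byte-wise differences (illustrative metric, not LFA)."""
    return sum(abs(x - y) for x, y in zip(a, b))

def best_match(probe, database):
    """Name of the stored faceprint closest to the probe."""
    return min(database, key=lambda name: faceprint_distance(probe, database[name]))

# Demo database with two deterministic faceprints.
database = {
    "suspect_a": bytes([0] * FACEPRINT_SIZE),
    "suspect_b": bytes([10] * FACEPRINT_SIZE),
}
probe = bytes([1] * FACEPRINT_SIZE)
```

Because each record is only 84 bytes, even millions of faceprints fit comfortably in memory, which is what makes database-scale matching practical.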
Using facial recognition software, police can zoom in with cameras and take a snapshot of a face.
     MULTI MODAL SYSTEM FOR
    LOCATING HEADS AND FACES
For a head location system to be perceived as truly non-intrusive by the observed people, free motion of the heads has to be permitted. This results in large variations of the heads' sizes and orientations. To handle such a large range of conditions efficiently, we combine the information of three channels: shape, color and motion. This has resulted in a robust face and head location system that is suited for such applications as tracking people for surveillance purposes, model-based image compression for video telephony, and intelligent computer-user interfaces.
                  SHAPE ANALYSIS
The shape analysis tries to find outlines of heads or combinations of facial features that indicate the presence of a face. It uses luminance only, and can therefore work even with cheap monochrome cameras. For frontal views of faces, we first identify candidate areas for various facial features, and then we search for combinations of such areas to find the whole face.
Example of the facial feature detection process. The top-left image is the original; the top-right image has been filtered to select a range of spatial frequencies and sizes. The bottom-left shows the image after adaptive thresholding, where areas of interest have been marked with connected areas of gray pixels. The bottom-right image shows the best combination of facial features that could be identified: eyebrows, eyes and nostrils.
                        FILTERING
An image is first transformed by two filters: the first is a band-pass filter selecting a range of spatial frequencies; the second is tuned to detect a range of sizes of a simple shape such as a rectangle or an ellipse. These processing steps reduce variations due to different lighting conditions and enhance areas of facial features or head boundaries. The band-pass filter eliminates slowly varying gradients in light intensity and reduces differences in intensities between the two sides of a face in the case of side illumination.
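A crude way to see the band-pass idea is a difference of two moving averages: the narrow blur keeps fine structure, the wide blur captures the slowly varying gradient, and subtracting them suppresses that gradient. The window sizes below are illustrative, not the values used in the actual system:

```python
def box_blur(signal, radius):
    """Simple moving average with edge clamping."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def band_pass(signal, small=1, large=4):
    """Band-pass = narrow-window blur minus wide-window blur."""
    fine = box_blur(signal, small)
    coarse = box_blur(signal, large)
    return [f - c for f, c in zip(fine, coarse)]
```

On a linear ramp in intensity (a slowly varying gradient), both blurs reproduce the ramp in the interior, so the band-pass output is zero there; an isolated bright spot, in contrast, survives the subtraction.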




Examples of head locations, as well as eye and mouth locations, found with the shape analysis. In the face on the left, only the search for the head outline succeeded, since the search for facial features is limited to a head tilt of ±30 degrees.
                  THRESHOLDING
After the filtering operations, the image is thresholded with an adaptive thresholding technique. The goal is to identify individual facial features with a simple connected component analysis.
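A minimal sketch of both operations, assuming a row-wise local-mean threshold and 4-connectivity (the actual system's parameters are not specified here):

```python
def adaptive_threshold(img, radius=1, offset=0):
    """Mark a pixel as foreground when it exceeds the mean of its row neighborhood."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            lo, hi = max(0, x - radius), min(w, x + radius + 1)
            local_mean = sum(img[y][lo:hi]) / (hi - lo)
            out[y][x] = 1 if img[y][x] > local_mean + offset else 0
    return out

def connected_components(mask):
    """Count 4-connected foreground regions via flood fill."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                count += 1
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy][cx] and not seen[cy][cx]:
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
    return count
```

Each connected component is then a candidate facial feature for the combination search described next.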


                  n-GRAM SEARCH
Once candidate facial features are marked with connected components, combinations of these features that could represent a face have to be found. This is done with an 'n-gram' search. First the shape of each individual connected component is analyzed, and those that definitely cannot represent a facial feature are discarded. Then combinations of two connected components are tested to determine whether they can represent a combination of two facial features, for example an eye pair, eyebrows, or an eye and a mouth. In the next step, triple combinations are evaluated, and so on.
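The pruning order described above (single components first, then pairs) might look like the following sketch; the geometric plausibility rules are simplified assumptions, not the system's actual criteria:

```python
from itertools import combinations

def plausible_feature(comp):
    # 1-gram test: reject components with an implausible aspect ratio.
    return 0.2 <= comp["w"] / comp["h"] <= 8.0

def plausible_eye_pair(a, b):
    # 2-gram test: eyes sit roughly on the same horizontal line,
    # separated by more than one component width.
    return abs(a["y"] - b["y"]) < a["h"] and abs(a["x"] - b["x"]) > a["w"]

def find_eye_pairs(components):
    """Prune single components, then test every pair."""
    comps = [c for c in components if plausible_feature(c)]
    return [(a, b) for a, b in combinations(comps, 2)
            if plausible_eye_pair(a, b)]

# Demo: two eye-like components on one line, plus a mouth-like one below.
candidates = [
    {"x": 10, "y": 50, "w": 12, "h": 6},
    {"x": 40, "y": 51, "w": 12, "h": 6},
    {"x": 25, "y": 90, "w": 12, "h": 6},
]
pairs = find_eye_pairs(candidates)
```

Surviving pairs would then seed the triple (and larger) combinations, so the cheap single-component test prunes the combinatorial search early.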
Marking areas of interest for identifying the positions of facial features. The top two images have been filtered for spatial frequencies and sizes and are binarized with two different thresholds.
          COLOUR ANALYSIS
Color information is an efficient tool for identifying facial areas and specific facial features, if the system can be calibrated properly for the particular conditions.

We can use the color information to track a face or a facial part, such as the lips.

As the color space we chose normalized rgb values:
[r = R/(R+G+B), g = G/(R+G+B), b = B/(R+G+B)]
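The transform in brackets is straightforward to compute; note that scaling all three channels by the same factor (a brightness change) leaves the normalized values unchanged, which is what makes this color space useful for tracking skin under varying illumination:

```python
def normalized_rgb(R, G, B):
    """Normalized rgb transform: each channel divided by total intensity."""
    total = R + G + B
    if total == 0:
        return (0.0, 0.0, 0.0)  # avoid division by zero on black pixels
    return (R / total, G / total, B / total)
```

Since r + g + b = 1 for any pixel, only two of the three values carry information, and skin colors can be characterized as a region in the r-g plane, as in the figure below.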
Example of the color analysis. The top-left image is the original and the top-right the downsampled representation, where different hue values have been transformed into different gray levels. The bottom-left image is the segmented image, where the colors inside the ellipse of the diagram at the bottom right were taken as skin colors. The bottom-right diagram shows the distribution of the image colors in the r-g plane. Many of the background colors are very similar to the face colors, and without calibration face and background could not be distinguished.
            MOTION ANALYSIS
If multiple images of a video sequence are available, motion is often a parameter that is easily extracted and offers a quick way of locating objects such as heads. When a close-up view of a talking person is analyzed, motion usually leads quickly to the area around the mouth, because this is the part of the face that moves the fastest.

The first step in the algorithm is to compute the absolute value of the differences in a neighborhood (typically 8x8 pixels) surrounding each pixel. When the accumulated difference is above a predetermined threshold T, we classify the pixel as belonging to a moving object.
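A minimal sketch of this motion test, using frame differencing over a small neighborhood (3x3 here for brevity, instead of the 8x8 mentioned in the text) and an illustrative threshold T:

```python
def moving_pixels(prev, curr, radius=1, T=10):
    """Mark pixels whose accumulated neighborhood frame difference exceeds T."""
    h, w = len(curr), len(curr[0])
    diff = [[abs(curr[y][x] - prev[y][x]) for x in range(w)] for y in range(h)]
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = sum(diff[ny][nx]
                      for ny in range(max(0, y - radius), min(h, y + radius + 1))
                      for nx in range(max(0, x - radius), min(w, x + radius + 1)))
            mask[y][x] = 1 if acc > T else 0
    return mask

# Demo: two 4x4 frames identical except for one pixel that changed.
prev = [[0] * 4 for _ in range(4)]
curr = [row[:] for row in prev]
curr[1][1] = 20
mask = moving_pixels(prev, curr)
```

Accumulating differences over a neighborhood makes the test robust to single-pixel sensor noise, at the cost of slightly blurring the boundary of the moving region.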
Example of the motion analysis. The white outline
shows the boundary of a moving object. The cross
marks the center of the head.
Examples of heads located with the motion detection
algorithm.
          POTENTIAL USES
By law enforcement agencies, to capture random faces in crowds.

Eliminating voter fraud.

Check-cashing identity verification.

Computer security.

ATM and check-cashing security.
Many people who don't use banks use check-cashing machines. Facial recognition could eliminate possible criminal activity.
Facial recognition software can be used to
lock your computer.
                   CONCLUSION
   While humans have had the innate ability to recognize
    and distinguish different faces for millions of years,
    computers are just now catching up.

   To identify someone, facial recognition software
    compares newly captured images to databases of stored
    images.

   The newly acquired facial data is compared to the stored
    data and (ideally) linked to at least one stored facial
    representation.

While facial recognition can be used to protect your private information, it can just as easily be used to invade your privacy by taking your picture when you are entirely unaware of the camera.
Thank You!