Image processing is the science of manipulating a picture. It covers a broad range of
techniques used in numerous applications. These techniques can enhance or distort an
image, highlight certain features of an image, create a new image from portions of other
images, restore an image that has been degraded during or after acquisition, and
so on. In the era of multimedia and the Internet, image processing is a key technology. Image
processing is any form of information processing for which the input is an image, such as a
photograph or a frame of video; the output is not necessarily an image, but can be, for instance, a
set of features of the image. Most image-processing techniques involve treating the image as a
two-dimensional signal and applying standard signal-processing techniques to it.
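As a minimal sketch of this idea, a grayscale image is just a two-dimensional array of intensities, so a standard signal-processing operation such as a 3x3 mean (box) filter can be applied to it directly. The pixel values below are made-up illustrative data:

```python
import numpy as np

# A tiny synthetic "image": a bright square on a dark background.
image = np.array([[10, 10, 10, 10],
                  [10, 90, 90, 10],
                  [10, 90, 90, 10],
                  [10, 10, 10, 10]], dtype=float)

def box_filter(img):
    # Replace each interior pixel with the mean of its 3x3 neighbourhood,
    # a standard smoothing (low-pass) operation on a 2-D signal.
    out = img.copy()
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            out[r, c] = img[r - 1:r + 2, c - 1:c + 2].mean()
    return out

smoothed = box_filter(image)  # bright pixels are pulled toward the background
```

The same principle underlies sharpening, edge detection, and the other filtering techniques mentioned above; only the kernel changes.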
Image analysis uses techniques that can identify shades, colors, and
relationships that cannot be perceived by the human eye. Image processing is used to solve
identification problems, such as in forensic medicine or in creating weather maps from satellite
pictures. It deals with images in bitmapped graphics format that have been scanned or
captured with digital cameras, and it covers any image improvement, such as refining a picture
that has been scanned or entered from a video source in a paint program.
A facial recognition system is a computer-driven application for automatically
identifying or verifying a person from a digital still or video image. It does this by comparing
selected facial features in the live image against a facial database. It is typically used in security
systems and can be compared to other biometrics such as fingerprint or iris recognition
systems. Facial recognition software is based on the ability to recognize a face and then measure
its various features. This paper gives a general overview of the image processing
techniques used in facial recognition, along with its methods and applications.
Face recognition methods:
Feature based methods: These use geometric features, such as the distance between the eyes
and their size, to represent a face. The features are computed using simple correlation filters with
expected templates. Every face has numerous distinguishable landmarks, the different peaks and
valleys that make up facial features; these landmarks are defined as nodal points. Each human
face has approximately 80 nodal points. Some of those measured by the software are:
Distance between the eyes
Width of the nose
Depth of the eye sockets
The shape of the cheekbones
The length of the jaw line
Facial recognition software measures nodal points on
the human face to create a faceprint and find a match.
Measuring these nodal points creates a numerical code, called a faceprint, that represents the
face in the database. One example of feature-based face recognition is its use in mobile
handsets. Facial feature identification systems today typically allow only a two-dimensional
frontal image of one's face. However, there are systems that accept both front and side views,
which in effect produce a three-dimensional mapping of one's face. The advantage of this method
is that it eliminates the security concern of unauthorized individuals showing photographs of
authorized users to the camera.
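The geometric idea behind a faceprint can be sketched as follows. This is an illustration, not any vendor's actual algorithm; the landmark coordinates are made-up values standing in for detected nodal points:

```python
import numpy as np

# Hypothetical nodal-point coordinates (x, y) for one face.
landmarks = {
    "left_eye":  np.array([30.0, 40.0]),
    "right_eye": np.array([70.0, 40.0]),
    "nose_tip":  np.array([50.0, 60.0]),
    "chin":      np.array([50.0, 95.0]),
}

def faceprint(points):
    # Pairwise Euclidean distances between landmarks, taken in a fixed
    # order, form a numeric code that can be compared across faces.
    names = sorted(points)
    return np.array([np.linalg.norm(points[a] - points[b])
                     for i, a in enumerate(names)
                     for b in names[i + 1:]])

code = faceprint(landmarks)  # one distance per landmark pair
```

A real system uses many more nodal points (the text mentions roughly 80) and normalizes for head size and pose, but the resulting code is compared in the same spirit.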
When the Face Recognition function is enabled, the keypad is locked until the handset is
opened and the pre-registered customer's facial features are matched. If facial features cannot
be properly sensed due to dark or backlit conditions, the face and secret question of the person
who opens the handset are displayed, and the handset can be unlocked by entering the answer to
the secret question.
The Face Recognition function helps ensure the security of customers' handsets by preventing
misuse by third parties, and can also protect private information such as registered numbers, mail
addresses and mails in the event that a handset is lost or stolen. In addition, the function also
features a Mask Mode, which enables the sensing of facial features even if customers are
wearing masks.
Eigenface based method: This approach is based on ideas like eigenfaces. After a large training
set of images is collected, principal component analysis is used to compute eigenfaces. Each new
face is then characterized by its projection onto the space of principal eigenfaces. In image
processing, processed images of faces can be seen as vectors whose components are the
brightnesses of the pixels; the dimension of this vector space is the number of pixels. The
eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces
are called eigenfaces. They are very useful for expressing any face image as a linear combination
of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of
applying data compression to faces for identification purposes.
An eigenvector of a linear transformation is a nonzero vector that is either left unaffected or
simply multiplied by a scale factor after the transformation. An eigenspace of a given
transformation is the set of all eigenvectors of that transformation that have the same eigenvalue,
together with the zero vector (which has no direction).
For the example shown on the left, let A be the matrix that would produce a shear
transformation similar to this. The set of eigenvectors X of A is defined as those
vectors which, when multiplied by A, result in a simple scaling of X by a factor λ; thus, AX = λX.
If we restrict ourselves to real eigenvalues, the only effect of the matrix on its eigenvectors is to
change their length, and possibly reverse their direction. Multiplying the right-hand side by
the identity matrix I, we have

AX = (λI)X, which gives (A - λI)X = 0.

For this equation to have non-trivial solutions, we require the determinant det(A - λI), which is
called the characteristic polynomial of the matrix A, to be zero. The solutions of the characteristic
polynomial are the eigenvalues. Substituting the λ values into (A - λI)X = 0 gives the
corresponding eigenvectors.
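The computation above can be sketched numerically. The particular shear matrix below is an assumption (the figure referred to in the text is not reproduced here), but it shows the defining property AX = λX:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # a horizontal shear transformation

# numpy solves det(A - lambda*I) = 0 and returns eigenvalues with the
# eigenvectors as columns of the second result.
eigvals, eigvecs = np.linalg.eig(A)

# For a shear, both eigenvalues are 1: vectors along the shear axis
# are left completely unaffected by the transformation.
x = eigvecs[:, 0]
assert np.allclose(A @ x, eigvals[0] * x)  # AX = lambda X holds
```

Note that a shear has only one independent eigendirection; vectors off the shear axis are tilted, not merely scaled, which is why they are not eigenvectors.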
The eigenface approach: sample training faces are shown (left), followed by the first 15
principal eigenfaces. A 2D face is represented by its projection onto this space.
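The eigenface computation described above can be sketched as follows. The "faces" here are random synthetic vectors standing in for real normalized face images, and the number of eigenfaces kept (5) is an arbitrary choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((10, 8 * 8))        # 10 fake 8x8 "face" images, flattened to rows
mean_face = faces.mean(axis=0)
centred = faces - mean_face            # normalized training set

# Eigenvectors of the covariance matrix of the training faces
# are the eigenfaces (eigh returns eigenvalues in ascending order).
cov = centred.T @ centred
eigvals, eigvecs = np.linalg.eigh(cov)
eigenfaces = eigvecs[:, ::-1][:, :5]   # top 5 principal eigenfaces

# A face is then characterized by its projection onto this space:
# 5 weights instead of 64 pixel values, i.e. data compression.
weights = (faces[0] - mean_face) @ eigenfaces
```

Reconstructing `mean_face + eigenfaces @ weights` gives an approximation of the original face, and comparing weight vectors is how recognition is performed in this space.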
Three dimensional method:
An enquiry face image, shot by a video camera or the like, is collated with face image data
stored in a database. The data in the database are three-dimensional shape data of the surfaces of
faces and color image data. The shooting conditions (angle and lighting directions) of the
enquiry image are applied to these data, and thereby a color image of each person's face is
generated from the database in accordance with the shooting conditions. The collation is then
implemented by comparing this color image with the enquiry image.
Brief description of the drawings:
The objects and features of the present invention will become more apparent from the
consideration of the following detailed description taken in conjunction with the accompanying
drawings in which:
FIG. 1 is a block diagram showing constitution of a face image recognition system according to
an embodiment of the present invention;
FIG. 2 is a diagram explaining an example of the three-dimensional shape obtaining means of the
present invention;
FIG. 3 is a diagram explaining the pixel values obtained for each pixel of a video camera by the
example of the three-dimensional shape obtaining means of the present invention;
FIG. 4 is a diagram explaining a method of deciding an angle of face of an enquiry face image by
a condition input means of the present invention;
FIG. 5 is a diagram explaining a method of deciding a direction of lighting of an enquiry face
image by a condition input means of the present invention;
FIG. 6 is a diagram explaining the Phong's model for generating a referential color image by a
graphics means of the present invention;
FIG. 7 is a block diagram showing typical constitution of an image collating means according to
the present invention; and
FIG. 8 is a diagram explaining a method of collating images by the image collating means
according to the present invention.
Process of face recognition:
Detection - When the system is attached to a video surveillance system, a multi-scale algorithm
is used to search for faces at low resolution. The system switches to a high-resolution search
only after a head-like shape is detected.
Alignment - Once a face is detected, the system determines the head's position, size and pose. A
face needs to be turned at least 35 degrees toward the camera for the system to register it.
Normalization - The image of the head is scaled and rotated so that it can be registered and
mapped into an appropriate size and pose.
Representation - The system translates the facial data into a unique code. This coding process
allows for easier comparison of the newly acquired facial data to stored facial data.
Matching - The newly acquired facial data is compared to the stored data and (ideally) linked to
at least one stored facial representation. The system maps the face and creates a faceprint. Once
the system has stored a faceprint, it can compare it to the thousands or millions of faceprints
stored in a database. Each faceprint is stored as an 84-byte file.
How does facial recognition work?
The detection and recognition scheme must be able to tolerate variations in the faces
themselves. Inter-personal variations can be due to race, identity, or genetics, while
intra-personal variations can be due to deformations, expression, aging, facial hair, cosmetics
and facial paraphernalia.
The output of the detection and recognition system has to be accurate. It has to associate
an identity or name with each face it comes across by matching it against a large database of
individuals. Simultaneously, the system must be robust to typical image-acquisition problems
such as noise, video-camera distortion and limited image resolution; this is where restoration
comes in. The aim of restoration is also to improve the image but, unlike enhancement,
knowledge of how the image was formed is used in an attempt to retrieve the ideal (uncorrupted)
image. No image-forming system is perfect, and each will introduce artifacts (for example,
blurring or aberrations) into the final image that would not be present in an ideal image.
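The principle of restoration can be sketched in one dimension. Assuming the degradation is a known blur kernel and there is no noise, inverse filtering in the frequency domain recovers the ideal signal exactly; real restoration (e.g. Wiener filtering) must also account for noise, which this toy example ignores:

```python
import numpy as np

# A 1-D "image" and a known blur kernel (the image-formation model).
ideal = np.array([0.0, 0.0, 1.0, 3.0, 1.0, 0.0, 0.0, 0.0])
kernel = np.array([0.5, 0.3, 0.2])  # chosen so its spectrum has no zeros

# Degradation: circular convolution with the kernel, done in the
# frequency domain (zero-padding the kernel to the signal length).
K = np.fft.fft(kernel, n=ideal.size)
blurred = np.real(np.fft.ifft(np.fft.fft(ideal) * K))

# Restoration: because we know K, dividing by it inverts the blur.
restored = np.real(np.fft.ifft(np.fft.fft(blurred) / K))
```

This is exactly the contrast drawn above: enhancement would sharpen `blurred` heuristically, while restoration uses the known formation model `K` to retrieve `ideal`.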
The processing involved should be efficient with respect to run-time and storage space.
A newly emerging trend in facial recognition software uses a 3D model, which claims to provide
more accuracy. Capturing a real-time 3D image of a person's facial surface, 3D facial recognition
uses distinctive features of the face, where rigid tissue and bone are most apparent, such as the
curves of the eye socket, nose and chin, to identify the subject. These areas are unique to each
person and do not change over time.
Real time applications:
Transport hubs: airports, railway stations, bus stations, sea ports
Defence stations and crime branches
Banks, business centres and casinos
Government centres and software organizations
Eliminating voter fraud
Check-cashing identity verification
Document control (digital chips in passports, drivers' licenses)
Transactional authentication (credit cards, ATMs, point-of-sale)
Computer security (user access verification)
Physical access control (smart doors)
Voter registration (election accuracy)
Time and attendance (entry and exit verification)
Computer games (a virtual "you" plays against virtual opponents)
Image processing is an active area of research in such diverse fields as medicine, astronomy,
microscopy, seismology, defense, industrial quality control, and the publication and
entertainment industries. The concept of an image has expanded to include three-dimensional
data sets (volume images) and even four-dimensional volume-time data sets. Image processing is
also used to solve identification problems, such as in forensic medicine or in creating weather
maps from satellite pictures. This paper has surveyed the image processing techniques used in
facial recognition, along with its methods and applications. A facial recognition system, then, is
a computer-driven application for automatically identifying or verifying a person from a digital
still or video image, which it does by comparing selected facial features in the live image against
a facial database.