A New Approach for Model-based Gait Signature Extraction
Mohamed Rafi, Dept. of CS&E, HMS Institute of Technology, Tumkur, Karnataka, India (mdrafi2km@yahoo.com)
Shanawaz Ahmed J, Dept. of Computer Science, College of Computer Science, King Khalid University, KSA (jshanawaz@gmail.com)
Md. Ekramul Hamid, Dept. of Computer Network Engg., College of Computer Science, King Khalid University, Abha, KSA (ekram_hamid@yahoo.com)
R.S.D Wahidabanu, Dept. of E&C, Government College of Engineering, Salem, Tamil Nadu, India (drwahidabanu@gmail.com)


Abstract— Identifying individuals for security purposes is becoming essential nowadays. Gait recognition aims to address this problem by identifying people at a distance based on the way they walk. In this paper, a model is proposed for gait signature extraction consisting of gait capture, segmentation and feature extraction steps. We use the Hough transform technique, which helps to read all the parameters used to generate gait signatures and may therefore yield a better gait recognition rate. In the preprocessing steps, the picture frames taken from video sequences are given as input to the Canny edge detection algorithm, which detects the edges of the image by extracting the foreground from the background and reduces noise using a Gaussian filter. The output of edge detection is the input to the Hough transform, which extracts the gait parameters. We have used five parameters to successfully extract gait signatures. It is observed that when the camera is placed at 90 and 270 degrees, all the parameters used in the proposed work are clearly visible. Then, using the Hough transform, a clear line-based model is designed to extract gait signatures. The utility of the model is tested on a variety of body and stride parameters recovered under different viewing conditions on a database consisting of 15 to 20 subjects walking at both an angled and a frontal-parallel view with respect to the camera, both indoors and outdoors, and the method is found to be highly successful.

Keywords- Biometric, Gait signature extraction, Hough Transform, Canny Edge detection, Gaussian filter

I. INTRODUCTION

The demand for automatic human identification systems is growing strongly in many important applications, especially at a distance, and it has recently gained great interest from pattern recognition and computer vision researchers because such systems are widely used in security-sensitive environments such as banks, parks and airports. Biometrics is a powerful tool for reliable human identification; it makes use of human physiological or behavioral characteristics such as the face, iris, fingerprints and hand geometry for identification. However, these biometric methodologies are either intrusive or restricted to controlled environments. For example, most face recognition methods are capable of recognizing only frontal or nearly frontal faces, and other biometrics such as fingerprint and iris are no longer applicable when a person appears suddenly in the surveillance scene. Therefore, new biometric recognition methods are strongly needed in many surveillance applications, especially at a distance [1].

Gait is defined as "a particular way or manner of moving on foot". Early psychological studies into gait by Murray [2] suggested that gait is a unique personal characteristic, with cadence, and is cyclic in nature. Gait recognition as a physiological biometric technique has become popular in recent times. Gait as a biometric can be seen as advantageous [3] over other forms of biometric identification because it is unobtrusive, can be captured at a distance, does not require high-quality images and is difficult to disguise. The first scientific article on animal walking gaits was written around 350 BC by Aristotle [4]. He observed and described the different walking gaits of bipeds and quadrupeds and analyzed why all animals have an even number of legs. Recognition approaches to gait were first developed in the early 1990s; they were evaluated on small databases and showed promise. DARPA's Human ID at a Distance program [5] then collected a rich variety of data and developed a wide variety of techniques, showing not only that gait recognition could be extended to large databases but also that it could handle covariate factors. Since the DARPA program, research has continued to extend and develop these techniques, with special consideration of practical factors such as feature potency.

In Silhouette Analysis-Based Gait Recognition for Human Identification [6], a combination of a background subtraction procedure and a simple correspondence method was used to segment and track the spatial silhouettes of an image, but this method generates considerable noise, which leads to poor gait signature extraction, and therefore the recognition rate was low. In gait recognition by symmetry analysis [7], the Generalized Symmetry Operator was used; it locates features according to their symmetrical properties rather than relying on the borders of a shape or on general appearance, and hence does not require the shape of an object to be known in advance. The evaluation was done by masking with a rectangular bar of different widths (5, 10 and 15 pixels) in each image frame of the test subject, at the same position; the masked area was on average 13.2%, 26.3% and 39.5% of the image silhouettes, respectively. A recognition rate of 100% was obtained for the 5 pixel bar. For a bar width of 10 pixels the test failed, as the test subject could not be recognized because the subject was completely covered in most of the image frames. This suggests that recognition is likely to be adversely affected when a subject walks behind a vertically placed object. There were also other limitations, noted by Mark Ruane Dawson [8], such as the legs not being tracked to a high enough standard for gait recognition and a segmentation process that leads to a very crude model fitting procedure, which in turn adversely affects the recognition rate.
In another gait recognition method, the subjects in the video are always walking perpendicular to the camera [9]. This would not be the case in real life, as people walk at all angles to the video camera. The use of too few parameters for the gait signature is another major drawback which has to be addressed.

The motivation behind this research is to find out whether an increase in the number of gait signature parameters can improve the recognition rate, whether an improvement in model fitting can give better results, which factors affect gait recognition and to what extent, and which vision components are critical for gait recognition from video. The objective of this paper is to explore the possibility of extracting a gait biometric from a sequence of images of a walking subject without using markers. Sophisticated computer vision techniques are developed, aimed at extracting a gait signature that can be used for person recognition.

Using video feeds from conventional cameras, without the use of special hardware, implies the development of a marker-less body motion capture system. Research in this domain is generally based on the articulated-models approach. Haritaoglu et al. [10] presented an efficient system capable of tracking 2D body motion using a single camera. Amos Y. Johnson [11] used a single camera with the viewing plane perpendicular to the ground plane; 18 subjects walked in an open indoor space at two view angles: a 45° path (angle view) toward the camera, and a frontal-parallel path (side view) in relation to the viewing plane of the camera. The side-view data was captured at two different depths, 3.9 meters and 8.3 meters from the camera. These three viewing conditions are used to evaluate our multi-view technique. In this research, we use images captured at different views, as the image captured from the frontal or perpendicular view does not give the required signatures. Segmentation is performed on the captured image in order to extract the foreground from the background using the Canny edge detection technique, since the purpose of edge detection in general is to significantly reduce the amount of data in an image while preserving the structural properties needed for further image processing. In order to obtain the gait model, the output of segmentation is processed using the Hough transform, a technique that can be used to isolate features of a particular shape within an image.

II. MODEL FOR GAIT SIGNATURE EXTRACTION

We propose a gait signature extraction model having the following steps: picture frame capture, segmentation and feature extraction, which lead to gait signature identification, as shown in Figure 1.

Figure 1. Components of the proposed model for Gait Signature Extraction System.
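To make the flow of Figure 1 concrete, the following minimal sketch wires the three stages together with OpenCV. This is an illustrative assumption rather than the code used in the paper: the helper name gait_signature_from_frame, the video filename, the Canny thresholds and the Hough accumulator threshold are all placeholders.

```python
import cv2
import numpy as np

def gait_signature_from_frame(frame_bgr):
    # Preprocessing: colour frame to grayscale (Section II-A).
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Segmentation: Canny edge detection (Section II-B); thresholds are placeholders.
    edges = cv2.Canny(gray, 50, 150)
    # Feature extraction: Hough transform for straight lines (Section II-C).
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)
    # Each detected line is an (r, theta) pair; the gait parameters
    # (thigh lines, distance between the legs, ...) are derived from these.
    return [] if lines is None else [tuple(l[0]) for l in lines]

cap = cv2.VideoCapture("walking_sequence.avi")  # hypothetical input video
ok, frame = cap.read()
if ok:
    print(gait_signature_from_frame(frame))
cap.release()
```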
A. Gait Capturing

At this step the subjects are asked to walk so that their gait can be captured. This is a very important step, as the overall results depend on the quality of the captured gait, so care should be taken to ensure that the quality of gait capture is maintained. This step includes the video sequence and an XML data store. In our proposed research the following preprocessing steps are carried out before segmenting a captured image:

    •   Reading an RGB image
    •   Converting the RGB image to grayscale
    •   Converting the grayscale image to an indexed image

The indexed image is the input to the segmentation algorithm for further processing. The above preprocessing of an image is shown in Figure 2, and a code sketch of these steps is given below.
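The paper does not give an implementation for these preprocessing steps, so the sketch below approximates them with OpenCV and NumPy. OpenCV has no dedicated indexed-image type, so the grayscale-to-indexed conversion is approximated here by uniform quantization into a small palette; the frame filename and the palette size are assumed values.

```python
import cv2
import numpy as np

def preprocess(frame_bgr, levels=64):
    """RGB -> grayscale -> indexed image, as described in Section II-A."""
    # 1. Reading an RGB image: OpenCV supplies frames as BGR arrays.
    # 2. Convert the colour frame to grayscale.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # 3. Convert grayscale to an indexed image. Approximated by uniform
    #    quantization: each pixel becomes an index into a palette of
    #    `levels` gray values (the paper does not specify the palette size).
    indexed = (gray.astype(np.uint16) * levels // 256).astype(np.uint8)
    palette = np.linspace(0, 255, levels).astype(np.uint8)
    return indexed, palette

frame = cv2.imread("frame_001.png")  # hypothetical frame taken from the video
indexed, palette = preprocess(frame)
print(indexed.shape, indexed.max(), len(palette))
```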
B. Segmentation

In computer vision, segmentation refers to the process of partitioning a digital image into multiple segments (sets of pixels, also known as superpixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyse. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics.

The Canny Edge Detection Algorithm:

The picture frames taken from the video sequences are given as input to the Canny edge detection algorithm to detect the edges of the image frames. The algorithm consists of five steps.

1. Image smoothing

Image smoothing is used to reduce noise within an image. The Canny edge detector uses a filter based on the first derivative of a Gaussian, in the form given by equation (1).
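Equation (1) is not reproduced legibly in the source, so the sketch below simply uses a standard Gaussian smoothing kernel; the 5x5 kernel size and sigma = 1.4 are assumed values, not taken from the paper.

```python
import cv2
import numpy as np

# Step 1, image smoothing: suppress noise before gradient computation.
gray = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
smoothed = cv2.GaussianBlur(gray, (5, 5), 1.4)

# Equivalent explicit form: build the 1-D Gaussian and apply it separably;
# the two results should agree up to border handling and rounding.
g = cv2.getGaussianKernel(5, 1.4)          # column vector, sums to 1
smoothed_manual = cv2.sepFilter2D(gray, -1, g, g)
print(np.abs(smoothed.astype(int) - smoothed_manual.astype(int)).max())
```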
2. Finding gradients

Edges are marked where the gradients of the image have large magnitudes. The Canny algorithm basically finds edges where the grayscale intensity of the image changes the most. These areas are found by determining the gradients of the image. The first step is to approximate the gradients in the x- and y-directions by applying the appropriate kernels. Then the gradient magnitudes (also known as the edge strengths) are determined as a Euclidean distance measure, applying the law of Pythagoras as given by equation (2):

    |G| = √(Gx² + Gy²)                              (2)

This can be simplified by applying the Manhattan distance measure, |G| = |Gx| + |Gy|, where Gx and Gy are the gradients in the x- and y-directions, respectively.
The direction of the edges is determined and stored as given by equation (3):

    θ = arctan(|Gy| / |Gx|)                         (3)
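The paper does not name the derivative kernels, so the sketch below assumes the common Sobel kernels for the x- and y-derivatives and computes both magnitude measures from equation (2) and the direction from equation (3).

```python
import cv2
import numpy as np

smoothed = cv2.GaussianBlur(
    cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE), (5, 5), 1.4)

gx = cv2.Sobel(smoothed, cv2.CV_64F, 1, 0, ksize=3)   # Gx
gy = cv2.Sobel(smoothed, cv2.CV_64F, 0, 1, ksize=3)   # Gy

mag_euclidean = np.hypot(gx, gy)          # |G| = sqrt(Gx^2 + Gy^2), eq. (2)
mag_manhattan = np.abs(gx) + np.abs(gy)   # |G| = |Gx| + |Gy| approximation
# eq. (3) uses arctan(|Gy|/|Gx|); arctan2 additionally keeps the sign,
# which is convenient for the neighbour comparison in step 3.
theta = np.arctan2(gy, gx)
```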
3. Non-maximum suppression

In the proposed study, only local maxima are marked as edges. The purpose of this step is to convert the "blurred" edges in the gradient-magnitude image into "sharp" edges. Basically, this is done by preserving all the local maxima in the gradient image and deleting everything else. For each pixel in the gradient image, the algorithm is:

     a.  Round the gradient direction to the nearest 45 degrees, corresponding to the use of an 8-connected neighborhood.

     b.  Compare the edge strength of the current pixel with the edge strength of the pixels in the positive and negative gradient directions, i.e., if the gradient direction is north (θ = 90 degrees), compare with the pixels to the north and south.

     c.  If the edge strength of the current pixel is the largest, preserve the value of the edge strength; if not, suppress (i.e. remove) the value.

A sketch of this procedure is given below.
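A minimal NumPy sketch of steps (a) to (c), written for clarity rather than speed; `mag` and `theta` are assumed to be the gradient magnitude and direction arrays from the previous step.

```python
import numpy as np

def non_max_suppression(mag, theta):
    """Keep only local maxima of the gradient magnitude along the gradient
    direction, following steps (a)-(c) above."""
    h, w = mag.shape
    out = np.zeros_like(mag)
    # (a) Round the gradient direction to the nearest 45 degrees.
    angle = np.rad2deg(theta) % 180
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            a = angle[y, x]
            if a < 22.5 or a >= 157.5:        # ~0 deg: east-west neighbours
                n1, n2 = mag[y, x - 1], mag[y, x + 1]
            elif a < 67.5:                     # ~45 deg: diagonal neighbours
                n1, n2 = mag[y - 1, x + 1], mag[y + 1, x - 1]
            elif a < 112.5:                    # ~90 deg: north-south neighbours
                n1, n2 = mag[y - 1, x], mag[y + 1, x]
            else:                              # ~135 deg: other diagonal
                n1, n2 = mag[y - 1, x - 1], mag[y + 1, x + 1]
            # (b) + (c) Preserve the pixel only if it is the strongest.
            if mag[y, x] >= n1 and mag[y, x] >= n2:
                out[y, x] = mag[y, x]
    return out
```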
4. Double thresholding

Potential edges are determined by thresholding.

5. Edge tracking by hysteresis

Finally, edges are determined by suppressing all edges that are not connected to a very certain (strong) edge, as shown in Figure 2.

Figure 2: [a] Original Image [b] RGB to Grayscale [c] Grayscale to Indexed Image [d] Edge Detected Image.
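Steps 4 and 5 can be sketched together as below; the two thresholds are assumed values, not taken from the paper. In practice, all five steps of the detector are also available as the single call cv2.Canny(gray, low, high).

```python
import cv2
import numpy as np

def hysteresis_threshold(nms, low=20, high=60):
    """Step 4 (double thresholding) followed by step 5 (edge tracking)."""
    strong = nms >= high                      # certainly edges
    weak = (nms >= low) & ~strong             # edges only if linked to strong
    edges = strong.copy()
    changed = True
    while changed:
        # Grow the current edge set by one pixel (8-connected) and keep
        # only the weak pixels it reaches; repeat until nothing changes.
        grown = cv2.dilate(edges.astype(np.uint8), np.ones((3, 3), np.uint8)) > 0
        new_edges = edges | (grown & weak)
        changed = bool((new_edges != edges).any())
        edges = new_edges
    return edges.astype(np.uint8) * 255
```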
C. Gait Feature Extraction

The Hough transform is a technique which can be used to isolate features of a particular shape within an image. Because it requires that the desired features be specified in some parametric form, the classical Hough transform is most commonly used for the detection of regular curves such as lines, circles and ellipses. A convenient equation for describing a set of lines uses the parametric or normal notation:

    x cosθ + y sinθ = r                             (4)

where r is the length of a normal from the origin to the line and θ is the orientation of r with respect to the x-axis. For any point (x, y) on this line, r and θ are constant.

In an image analysis context, the coordinates of the points of the edge segments (i.e. (xi, yi)) in the image are known and therefore serve as constants in the parametric line equation, while r and θ are the unknown variables we seek. If we plot the possible (r, θ) values defined by each (xi, yi), points in Cartesian image space map to curves (i.e. sinusoids) in the polar Hough parameter space. This point-to-curve transformation is the Hough transformation for straight lines. When viewed in Hough parameter space, points which are collinear in Cartesian image space become readily apparent, as they yield curves which intersect at a common (r, θ) point.

The transform is implemented by quantizing the Hough parameter space into finite intervals, or accumulator cells. As the algorithm runs, each (xi, yi) is transformed into a discretized (r, θ) curve and the accumulator cells which lie along this curve are incremented. Resulting peaks in the accumulator array represent strong evidence that a corresponding straight line exists in the image.

The main advantage of the Hough transform technique is that it is tolerant of gaps in feature boundary descriptions and is relatively unaffected by image noise. We use this technique to extract lines from the segmented image. The Hough transform can be used to identify the parameters of the curve which best fits a set of given edge points. This edge description is obtained from the Canny edge detector and may be noisy, i.e. it may contain multiple edge fragments corresponding to a single whole feature. Furthermore, as the output of an edge detector defines only where features are in an image, the work of the Hough transform is to determine both what the features are (i.e. to detect the feature(s) for which it has a parametric or other description) and how many of them exist in the image.

Hough Transform Algorithm for Straight Lines (a code sketch of this procedure is given after the list, following Figure 3):

   1.  Identify the maximum and minimum values of r and θ.
   2.  Generate an accumulator array A(r, θ) and set all values to zero.
   3.  For all edge points (xi, yi) in the image:
           a.  Use the gradient direction for θ.
           b.  Compute r from equation (4).
           c.  Increment A(r, θ) by one.
   4.  For all cells in A(r, θ):
           a.  Search for the maximum value of A(r, θ).
           b.  Calculate the equation of the line.
   5.  To reduce the effect of noise, more than one element (the elements in a neighbourhood) in the accumulator array is incremented.

The edge-detected image and the image after the Hough transform are shown in Figure 3.

Figure 3: Images before and after the Hough Transform
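The following NumPy sketch implements the accumulator procedure listed above. For simplicity it votes over all θ values for every edge point rather than using the gradient direction of step 3a, and the neighbourhood voting of step 5 is omitted; the accumulator threshold is an assumed value.

```python
import numpy as np

def hough_lines(edge_img, n_theta=180, threshold=80):
    """Accumulator-based Hough transform for straight lines (steps 1-4)."""
    h, w = edge_img.shape
    thetas = np.deg2rad(np.arange(n_theta))            # step 1: theta range
    r_max = int(np.ceil(np.hypot(h, w)))               # step 1: |r| <= diagonal
    acc = np.zeros((2 * r_max + 1, n_theta), dtype=np.int32)   # step 2

    ys, xs = np.nonzero(edge_img)                       # step 3: edge points
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in zip(xs, ys):
        r = np.round(x * cos_t + y * sin_t).astype(int)  # step 3b: eq. (4)
        acc[r + r_max, np.arange(n_theta)] += 1          # step 3c
    # Step 4: cells with enough votes correspond to lines x cos(t) + y sin(t) = r.
    r_idx, t_idx = np.nonzero(acc >= threshold)
    lines = [(int(r) - r_max, float(thetas[t])) for r, t in zip(r_idx, t_idx)]
    return lines, acc

# Usage with the Canny output from Section II-B (threshold is an assumed value):
# lines, acc = hough_lines(edges, threshold=80)
```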
III. EXPERIMENTAL RESULTS AND DISCUSSION

One of the most important aspects of this research is to extract the gait signatures needed for a successful recognition rate. The tables below show the parameters which are used to generate a gait signature for different views of a subject (90 degrees and 270 degrees). The attempts column shows how many persons were used to extract the signature; the success column shows how many of the subjects gave successful gait signatures.

Table 1: Parameters and percentage of clarity when the camera is placed at a 90 degree angle to the subject, for frame 1.

Figure 4. Graphical representation of clarity for frame 1, when the camera is placed at 90 degrees.

Table 2: Parameters and percentage of clarity when the camera is placed at a 270 degree angle to the subject, for frame 1.

Figure 5. Graphical representation of clarity for frame 1, when the camera is placed at 270 degrees.

Table 1 and Table 2 show the results. When the camera is placed at 90 degrees and 270 degrees, it is found that for frame 1 the clarity of the parameter distance between the legs is highest; the y-axis is taken as the reference axis for the subject. Therefore this parameter can be used to extract gait signatures for better recognition. It is also observed that the parameter right thigh length can be considered for extraction of a gait signature. Likewise, when the camera is placed at 90 degrees and 270 degrees for frame 2, the clarity of the parameter right thigh length is highest, so this parameter can be used to extract gait signatures for better recognition. When the camera is placed at 90 degrees and 270 degrees for frame 3, it is found that the clarity of the parameter left thigh length is highest, so this parameter can also be used to extract gait signatures for better recognition.
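As a hedged illustration of how such parameters could be measured once the relevant lines have been labeled, the sketch below derives an angle and a distance from the (r, θ) pairs returned by the Hough transform. The pairing of lines to body parts (left/right thigh, legs) is assumed to be given, and the sample values are hypothetical.

```python
import numpy as np

def angle_between(theta1, theta2):
    """Acute angle in degrees between two Hough lines, e.g. the thigh lines."""
    d = abs(theta1 - theta2) % np.pi
    return np.degrees(min(d, np.pi - d))

def distance_between_parallel(r1, r2):
    """Distance between two (near-)parallel lines, e.g. the two leg lines,
    from their normal distances r to the origin."""
    return abs(r1 - r2)

# Hypothetical (r, theta) pairs for labeled lines:
left_thigh, right_thigh = (120, np.deg2rad(75)), (135, np.deg2rad(105))
left_leg, right_leg = (140, np.deg2rad(2)), (185, np.deg2rad(3))

print(angle_between(left_thigh[1], right_thigh[1]))         # ~30 degrees
print(distance_between_parallel(left_leg[0], right_leg[0])) # ~45 pixels
```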
CONCLUSIONS

The presented research has shown that gait signatures can be extracted in a better way by using the Hough transform. When the camera is placed at 90 and 270 degrees, it is found that most of the parameters listed in the research provide good clarity. Since the lines are clearly visible, they can easily be labeled, and the distance and angle between them can be measured accurately. The proposed research gives the best results when the camera is placed at 90 degrees to the subject, and it is recommended that the subjects be made to pass through an area with a white background, because this helps in getting a better gait signature extraction model. The research achieved 100 percent clarity when the parameters length of the left thigh, length of the right thigh and distance between the legs were analyzed at a 90 degree angle. The signatures thus extracted can be used to obtain a better gait recognition rate. In future work it is recommended that the lines extracted by the Hough transform be labeled using an effective line labeling algorithm so that the angles and distances between the various parameters can be calculated for effective gait recognition.






REFERENCES

[1]  Jiwen Lu, Erhu Zhang, "Gait recognition for human identification based on ICA and fuzzy SVM through multiple views fusion", School of Electrical and Electronic Engineering, Nanyang Technological University, Nanyang Avenue, Singapore 639798, 25 July 2007.
[2]  Murray, M. P., "Gait as a total pattern of movement", American Journal of Physical Medicine, 46(1):290-333, 1967.
[3]  Davrondzhon Gafurov, Einar Snekkenes, Patrick Bours, "Improved Gait Recognition Performance Using Cycle Matching", in Proceedings of the IEEE 24th International Conference on Advanced Information Networking and Applications Workshops, 2010.
[4]  Aristotle (350 BC), "On the Gait of Animals", translated by A. S. L. Farquharson, 2007.
[5]  Sarkar, S., Phillips, P. J., Liu, Z., Vega, I. R., Grother, P., and Bowyer, K., "The humanID gait challenge problem: Data sets, performance and analysis", IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 2, pp. 162-177, Feb. 2005.
[6]  Liang Wang, Tieniu Tan, Huazhong Ning, and Weiming Hu, "Silhouette Analysis-Based Gait Recognition for Human Identification", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 12, December 2003.
[7]  James B. Hayfron-Acquah, Mark S. Nixon, John N. Carter, "Automatic gait recognition by symmetry analysis", Image, Speech and Intelligent Systems Group, Department of Electronics and Computer Science, University of Southampton, Southampton, SO17 1BJ, United Kingdom.
[8]  Dawson, M. R., "Gait Recognition", Imperial College of Science, Technology & Medicine, London, June 2002.
[9]  Han, X., "Gait Recognition Considering Walking Direction", University of Rochester, USA, August 20, 2010.
[10] Haritaoglu, I., Harwood, D., Davis, L., "A real-time system for detecting and tracking people in 2.5D", Proceedings of the 5th European Conference on Computer Vision, Freiburg, Germany, vol. 1, pp. 877-892, 1998.
[11] Amos Y. Johnson and Aaron F. Bobick, "A Multi-view Method for Gait Recognition Using Static Body Parameters", Electrical and Computer Engineering, Georgia Tech, Atlanta, GA 30332.

AUTHORS PROFILE

Mohamed Rafi received his BE and ME degrees in Computer Science & Engineering from Bangalore University, India. He is presently pursuing a PhD at Vinayaka Mission University, Salem, Tamil Nadu, India. From August 2007 to date he has been working as a Professor in the Dept. of Computer Science & Engineering, HMS Institute of Technology, Tumkur, Karnataka, India. From November 2001 to July 2007 he worked as an Assistant Professor in the Department of Computer Science and Information Technology at Adama University, Ministry of Education, Ethiopia. His research interests include image processing, database systems and software engineering.

Shanawaz Ahmed J received his MCA degree from the Department of Computer Science, Bharathidasan University, India. After that he obtained a Master of Philosophy degree from Periyar University, India. He is presently pursuing his PhD degree at Anna University, India. During 2004-2007, he was a lecturer in the Department of Computer Science, The New College, Chennai, India. Since 2007, he has been serving as a Lecturer in the College of Computer Science at King Khalid University, Abha, KSA. His research interests include image processing and image retrieval.

Md. Ekramul Hamid received his B.Sc and M.Sc degrees from the Department of Applied Physics and Electronics, Rajshahi University, Bangladesh. After that he obtained a Master of Computer Science from Pune University, India. He received his PhD degree from Shizuoka University, Japan. During 1997-2000, he was a lecturer in the Department of Computer Science and Engineering, Rajshahi University. Since 2007, he has been serving as an associate professor in the same department. He is currently working as an assistant professor in the College of Computer Science at King Khalid University, Abha, KSA. His research interests include digital signal processing and speech enhancement.

Dr. R.S.D Wahidabanu received her BE (Electronics & Communication) and ME (Applied Electronics) degrees from the University of Madras, Chennai, India, and obtained her PhD from Anna University, Tamil Nadu, India. She has more than 30 years of experience in teaching and research and is working as Professor & Head, Dept. of Electronics & Communication Engineering, Government College of Engineering, Salem. More than 13 students have obtained their PhD and more than 20 students are pursuing PhDs under her guidance. She has published more than 30 papers in international journals.



