

									                                                             (IJCSIS) International Journal of Computer Science and Information Security,
                                                                                                                  Vol. 8, No.2 May 2010

          Marker-less 3D Human Body Modeling using
            Thinning algorithm in Monocular Video
              K. Srinivasan                              K.Porkumaran                                        G.Sainarayanan
            Department of EIE                         Department of EEE                                      Head, R&D
  Sri Ramakrishna Engineering College         Dr.N.G.P Institute of Technology                        ICT Academy of Tamilnadu
            Coimbatore, India                           Coimbatore, India                                  Chennai, India                                   
         * Corresponding author

Abstract— Automatic marker-less 3D human body modeling for motion analysis in security systems has been an active research field in computer vision. This research work attempts to develop an approach for 3D human body modeling using the thinning algorithm in monocular indoor video sequences for activity analysis. Here, the thinning algorithm has been used to extract the skeleton of the human body for the pre-defined poses. This approach includes 13 feature points, namely Head, Neck, Left shoulder, Right shoulder, Left hand elbow, Right hand elbow, Abdomen, Left hand, Right hand, Left knee, Right knee, Left leg and Right leg, covering the upper body as well as the lower body. Eleven activities have been analyzed for different videos and for persons wearing half-sleeve and full-sleeve shirts. We evaluate the time utilization and efficiency of our proposed algorithm. Experimental results validate both the likelihood and the effectiveness of the proposed method for the analysis of human activities.

  Keywords- Video surveillance, Background subtraction, Human body modeling, Thinning algorithm, Activity analysis.

                      I.    INTRODUCTION

    In recent years, human tracking, modeling and activity recognition from videos [1]-[5] have gained much importance in human-computer interaction due to applications in surveillance areas such as security systems, banks, railways, airports, supermarkets, homes, and departmental stores. A passive surveillance system needs many cameras monitored by a single operator, and it is inefficient for tracking and motion analysis of people for better security. An automated video surveillance system uses a single camera with a single operator for motion analysis and provides better results. Marker-based human tracking and modeling is a simple approach, but it cannot reconstruct all human poses in practical situations, and it requires markers to be attached to the surveillance subjects at all times. So, marker-less motion tracking and modeling have become very important for motion analysis. In human body modeling, two kinds of representation are available: 2D modeling and 3D modeling. Of the two, 2D modeling is the simpler approach for modeling the complex nature of the human body, whereas 3D modeling is much more complex for tracking persons in video data. In this paper, 3D human body modeling based activity analysis has been implemented with the help of the thinning algorithm. The recovery of 3D human body poses is very important in many video processing applications. A 3D human body model is an interconnection of all body elements in a three-dimensional view. Onishi [6] describes 3D human body posture estimation using Histograms of Oriented Gradient (HOG) feature vectors that can express the shape of the object in the input image obtained from a monocular camera. A model-based approach for estimating 3D human body poses in static images has been implemented by Mun Wai Lee and Isaac Cohen [7]. They develop a Data-Driven Markov Chain Monte Carlo (DD-MCMC) approach, where component detection results generate state proposals for 3D pose estimation.

    Thinning is one of the important morphological operations that can be used to remove selected foreground pixels from images. Usually, the thinning operation is applied to binary images. In previous work, the thinning algorithm has mostly been applied to image processing applications such as pattern recognition and character recognition [8]-[11]. Here we apply the thinning algorithm to model the human body in a 3D view, and it can be used for motion analysis of humans without placing any markers on the body.

    This paper is organized as follows: Section 1 gives a brief introduction to the problem. Section 2 deals with the proposed work of activity analysis using 3D modeling. The frame conversion algorithm and the background subtraction algorithm are explained in Sections 3 and 4. Section 5 illustrates the morphological operations, and the thinning algorithm is described in Section 6. Section 7 presents the human body feature point identification. Section 8 includes the results and analysis of our proposed work. The conclusion and future work are discussed in Section 9. The acknowledgements and references are included in the last part of the paper.

                     II.   PROPOSED WORK

    Human body modeling has been used in the analysis of human activities in indoor as well as outdoor surveillance environments. Model-based motion analysis involves 2D and 3D human model representations [12]-[13]. The features that are extracted from the human body are useful to model the surveillance persons and it has been applied to

                                                                                                   ISSN 1947-5500
recover the human body poses [14] and to find their activities. In the proposed work, as in Figure 1, the video sequence is first acquired by the video camera from the indoor environment and converted into frames for further processing. Due to illumination changes, camera noise and lighting conditions, there may be a chance of noise being added to the video data. These unwanted details have to be removed to obtain an enhanced video frame, and the pre-processing stage helps to enhance the video frames. In all the processing here, the human body is our desired region of interest. The next aim is to obtain the human body from the video frame by eliminating the background scene; so, the background is subtracted with the help of the frame differencing algorithm. Then, the video frames undergo a morphological operation to remove details smaller than a certain reference shape. After the morphological operation, the thinning algorithm has been used to find the skeleton of the human body. In this work, 13 features have been considered for full body modeling: Head, Neck, Left shoulder, Right shoulder, Left hand elbow, Right hand elbow, Abdomen, Left hand, Right hand, Left knee, Right knee, Left leg and Right leg, as in Figure 2. Initially, the five terminating points (head, left hand, left leg, right leg, and right hand) are determined. Then, the intersecting, shoulder, elbow, and knee points are obtained using image processing techniques. Finally, the 3D modeling has been achieved for the activity analysis of humans in video data.

Input Video sequence → Frame conversion → Background subtraction → Morphological operation → Thinning algorithm → Find Terminating points → Find Intersecting, Shoulder, Elbow, and Knee points

                  Figure 2. A human body model with thirteen feature points

              III.   FRAME CONVERSION ALGORITHM

    In the first stage, the video sequence is captured by a high-resolution Nikon COOLPIX digital video camera having 8.0 million effective pixels and a 1/2.5-in. CCD image sensor, which produces NTSC and PAL video output. It has a focal length of 6.3-18.9 mm (equivalent, in 35mm [135] format picture angle, to 38-114 mm). The video sequence is taken at a rate of 30 frames/second from the indoor surveillance environment for finding the human behaviour. After that, the video sequence has been converted into individual frames with the help of the algorithm given below.

    VIDEO TO FRAME CONVERSION ALGORITHM

Step0: Acquire the video sequence from the video camera into the MATLAB environment.
Step1: Read the video file using the 'aviread' function and store it in a variable.
Step2: Assign the required frame format as 'jpg'.
Step3: Determine the size of the video file and the number of frames (fnum).
Step4: Then,
         for i=1:fnum,
             strtemp = strcat(int2str(i), '.', pickind);
             imwrite(mov(i).cdata(:,:,:), strtemp);
         end
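The MATLAB loop above names each extracted frame `<index>.jpg`. The same naming scheme can be sketched in Python; this is an illustrative re-implementation only (the paper's code uses MATLAB's `aviread`/`imwrite`, and a real port might decode frames with OpenCV, noted in comments as an assumption):

```python
def frame_filenames(fnum, pickind="jpg"):
    """Mirror of the MATLAB statement strtemp = strcat(int2str(i), '.', pickind):
    return the output file name for every frame index 1..fnum."""
    return [f"{i}.{pickind}" for i in range(1, fnum + 1)]


# A real port would pair each name with a decoded frame, e.g. (hypothetical,
# requires the OpenCV package, not used here):
#   cap = cv2.VideoCapture("input.avi")
#   ok, frame = cap.read()        # one decoded frame per call
#   cv2.imwrite(name, frame)      # write it under the generated name

if __name__ == "__main__":
    print(frame_filenames(3))
```

The 1-based range matches MATLAB's `for i=1:fnum`, so frame numbering in the file names is identical to the paper's output.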
→ 3D modeling → Perform Activity analysis

     Figure 1. Proposed model of 3D modeling for activity analysis

         IV.      BACKGROUND SUBTRACTION ALGORITHM

    In the proposed work, the background subtraction technique plays an important role in subtracting the foreground from the background image, as described in Figure 3. The frame differencing algorithm [15] has been
proposed here to highlight the desired foreground scene, and it is given below.

          FRAME DIFFERENCING ALGORITHM

Step0: Read the video data.
Step1: Convert it into video frames.
Step2: Set the background image.
Step3: Separate the R, G, B components of the background individually for easy computation.
           bc_r = bg(:,:,1); bc_g = bg(:,:,2); bc_b = bg(:,:,3);
Step4: Read the current frame from the video sequence.
Step5: Separate its R, G, B components individually for the computation.
           cc_r = fr(:,:,1); cc_g = fr(:,:,2); cc_b = fr(:,:,3);
Step6: Subtract the R, G, B components of the current frame from the R, G, B components of the background frame.
Step7: Check the threshold values of the colour components.

   Figure 3. Background subtraction using frame differencing algorithm:
   (a) Input video frame, (b) Background subtracted image, and (c) Silhouette of human body

                 V.     MORPHOLOGICAL OPERATION

    Next, the proposed algorithm applies morphological operations, which help to enhance the video frame for further processing. The morphological operations include dilation and erosion [16]; finally, noise is removed using median filtering. Dilation adds pixels to the boundaries of the objects in an image. The number of pixels added or removed from the objects depends on the size and shape of the structuring element. If F(j,k), for 1 ≤ j,k ≤ N, is a binary-valued image and H(j,k), for 1 ≤ j,k ≤ L, where L is an odd integer, is a binary-valued array called a structuring element, the dilation is expressed as in equation (1).

G(j,k) = F(j,k) ⊕ H(j,k)                                                 (1)

    Erosion removes pixels on object boundaries. To erode an image, the imerode function is used in our application. The erosion is expressed as in equation (2), where H(j,k) is an odd-size L×L structuring element.

G(j,k) = F(j,k) ⊗ H(j,k)                                                 (2)

    At the end of this stage, median filtering has been used to reduce the salt-and-pepper noise present in the frame. It is similar to using an averaging filter, except that each output pixel is set to the median, rather than the average, of the pixel values in the neighborhood of the corresponding input pixel.

                         VI.       THINNING ALGORITHM

    In this paper, the thinning operation is used to find the skeleton of the entire human body. The thinning operation is performed by translating the origin of the structuring element to each pixel position in the image and comparing it with the corresponding image pixels. When the background and foreground pixels of the structuring element exactly match the image, the pixel under the origin of the structuring element is set to background; otherwise it is left unchanged. Here, the structuring element determines the effect of the thinning operation. The thinning operation is achieved through the hit-and-miss transform: the thinning of an image A by a structuring element B is given by equation (3).

thin(A, B) = A − hit-and-miss(A, B)                                      (3)

    Mostly the thinning operation has been used for skeletonization, to produce a connected skeleton of the human body. Figure 4 shows the structuring elements for skeletonization by morphological thinning. At each iteration, the image is first thinned by the left-hand structuring element, then by the right-hand one, and then with the remaining six 90° rotations of the two elements.

    Figure 4. Examples of structuring elements for the thinning operation

    The process is repeated in cyclic fashion until none of the thinnings produces any further change. Normally, the origin of the structuring element is at the center. The steps of the thinning algorithm are:

Step0: Partition the video frame into two distinct subfields in a checkerboard pattern.
Step1: Delete the pixel p from the first subfield if and only if conditions (4), (5), and (6) are satisfied.

X_H(p) = 1                                                               (4)

where

X_H(p) = b1 + b2 + b3 + b4,

bi = 1 if x_{2i-1} = 0 and (x_{2i} = 1 or x_{2i+1} = 1), and bi = 0 otherwise (indices taken modulo 8, so x9 = x1); x1, x2, …, x8 are the values of the eight neighbors of p, starting with the east neighbor and numbered in counter-clockwise order.

2 ≤ min{ n1(p), n2(p) } ≤ 3                                              (5)
where

n1(p) = (x1 ∨ x2) + (x3 ∨ x4) + (x5 ∨ x6) + (x7 ∨ x8)
n2(p) = (x2 ∨ x3) + (x4 ∨ x5) + (x6 ∨ x7) + (x8 ∨ x1)

(x2 ∨ x3 ∨ x̄8) ∧ x1 = 0                                                 (6)

where x̄8 denotes the complement of x8.

Step2: Then, delete the pixel p from the second subfield if and only if conditions (4), (5), and (7) are satisfied.

(x6 ∨ x7 ∨ x̄4) ∧ x5 = 0                                                 (7)

    The combination of Step1 and Step2 produces one iteration of the thinning algorithm. Here, an unbounded number of iterations (n = ∞) has been specified, so thinning continues until the image no longer changes. Figure 5 shows the thirteen points on the thinned image for different poses.

    Figure 5. Results of thinned image for a human body with 13 points

             VII.   HUMAN BODY FEATURE POINT IDENTIFICATION

    In order to model the human body, thirteen feature points have been considered in the upper body as well as the lower body. The feature points include the terminating points (5), intersecting points (2), shoulder points (2), elbow joints (2), and knee joints (2). Using the terminating points, the ends of features such as the head, hands and legs have been determined. The following steps are involved in determining the terminating points.

Step0: Input the thinned image.
Step1: Initialize the relative vectors to the side borders from the current pixel.
Step2: Select the current coordinate to be tested.
Step3: Determine the coordinates of the pixels around this pixel.
Step4: If this pixel is an island, then it is an edge to an island of 1 pixel; save it.
Step5: By default, assume the pixel is an edge unless otherwise stated.
Step6: Test all the pixels around this current pixel.
Step7: For each surrounding pixel, test whether there is a corresponding pixel on the other side.
Step8: If any pixels lie on the opposite side of the current pixel, then this pixel is not a terminating pixel.
Step9: If the current pixel does not satisfy the above condition, then it is an edge.

    Once the terminating points are determined, the two intersecting points, which join the hands and the legs, have been calculated. Then, the two shoulder points are determined. The left shoulder co-ordinate is plotted at the pixel where the iteration encounters a white pixel; similarly, the right shoulder co-ordinate is plotted using the same technique. Figure 6 shows a graphical representation of determining the shoulder, elbow and knee of the human body.

    Figure 6. Graphical representation to find Shoulder, Elbow and Knee

    The elbow point is approximately halfway between the shoulder and the terminating point of the hand. A problem arises when the hand is bent. In order to get an accurate elbow joint, a right-angled triangle has been constructed, as in Figure 7(a). The (x1, y2) point of the right-angled triangle is determined by taking the x-coordinate of terminating point 1 (x1) and the y-coordinate of the shoulder point (y2). The distance between the points (x1, y1) and (x2, y2) is calculated using equation (8).

Distance between points (D) = √((x1 − x2)² + (y1 − y2)²)                 (8)

Elbow Joint (EJ) = √((x1 − x2)² + (y1 − y2)²) / 2                        (9)

    Using the available distance as the x-axis reference, a for loop is iterated from the first point of the same x-axis. The point at which the iteration encounters a white pixel is plotted as the elbow joint. Similarly, the other elbow joint is determined using the same technique. The process of determining the knee joints is similar to the technique adopted for the elbows. Figure 7(b) shows the graphical way to
determine the knee joint; in this case, however, the loop is iterated with a constant y-axis and a varying x-axis. The knee joint has likewise been identified using equation (9). After the determination of the thirteen points, they are displayed as in Figure 5.

  Figure 7. Graphical representations to find the Elbow joint and Knee joint:
     (a) Determination of Elbow joint, (b) Determination of Knee joint

                           VIII. RESULTS AND ANALYSIS

   In this section, the experimental results of the proposed work are shown. The algorithm has been developed using MATLAB 7.6 (2008a) on an Intel dual-core processor with 2 GB RAM and Windows XP SP2. For implementing this 3D human body model, more than 60 videos are considered. Figure 8 shows the MATLAB results of human body modeling in a 3D view for a single person in different views.

   Figure 8. Results of 3D modeling of a human pose in different views

   This algorithm is implemented for a single person in indoor surveillance video with straight poses for eleven activities, namely Standing, Right hand rise, Left hand rise, Both hands rise, Right hand up, Left hand up, Both hands up, Left leg rise, Right salute, Left salute, and Crouching, as in Figure 9.
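The subfield deletion rules of Section VI, conditions (4) through (7), can be expressed as predicate functions on the eight neighbors x1…x8 (east neighbor first, counter-clockwise). The following is a minimal, illustrative Python re-implementation of those tests, not the authors' MATLAB code:

```python
def crossing_number(x):
    """X_H(p) of condition (4); x is [x1..x8], east first, counter-clockwise."""
    def v(i):                      # 1-based neighbor access; x9 wraps to x1
        return x[(i - 1) % 8]
    return sum(
        1
        for i in range(1, 5)
        if v(2 * i - 1) == 0 and (v(2 * i) == 1 or v(2 * i + 1) == 1)
    )

def n1(x):
    # n1(p) = (x1 v x2) + (x3 v x4) + (x5 v x6) + (x7 v x8)
    return sum(x[2 * k - 2] | x[2 * k - 1] for k in range(1, 5))

def n2(x):
    # n2(p) = (x2 v x3) + (x4 v x5) + (x6 v x7) + (x8 v x1)
    def v(i):
        return x[(i - 1) % 8]
    return sum(v(2 * k) | v(2 * k + 1) for k in range(1, 5))

def deletable_first(x):
    """Conditions (4), (5) and (6): may p be deleted in the first subfield?"""
    return (crossing_number(x) == 1
            and 2 <= min(n1(x), n2(x)) <= 3
            and ((x[1] | x[2] | (1 - x[7])) & x[0]) == 0)   # (x2 v x3 v ~x8) ^ x1

def deletable_second(x):
    """Conditions (4), (5) and (7): may p be deleted in the second subfield?"""
    return (crossing_number(x) == 1
            and 2 <= min(n1(x), n2(x)) <= 3
            and ((x[5] | x[6] | (1 - x[3])) & x[4]) == 0)   # (x6 v x7 v ~x4) ^ x5
```

Step0's checkerboard partition assigns a pixel at row r, column c to a subfield by the parity of (r + c); one full iteration applies `deletable_first` over one subfield and then `deletable_second` over the other, repeating until no pixel changes. Note that a skeleton endpoint such as x = [1,0,0,0,0,0,0,0] has min(n1, n2) = 1, so condition (5) correctly protects it from deletion.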
model. If the thirteen points are matched and lie inside the silhouette, then the corresponding activity is identified. From the response shown in Figure 10, the time taken to compute our algorithm, in steps of 10 frames of a video, is observed. For the first frame in the video sequence, it takes approximately 2.6 seconds, high compared to the consecutive frames, due to the computation of initial processing such as frame conversion, background subtraction and preprocessing. It was noticed that the proposed algorithm takes 1.6 seconds on average per 3D model.

    Figure 10. Response of time utilization for an indoor video (3D Modeling vs. Time: Frame Number on the x-axis, Time in Sec on the y-axis)

   We have experimented with the proposed models on the eleven activities listed in Table I, using indoor monocular videos. Three videos were considered for calculating the speed of our proposed models: Video 1 takes an average of 1.62 seconds, and Video 2 and Video 3 take 1.68 and 1.78 seconds respectively. The efficiency of our models has been evaluated using True Positives (TP) and False Positives (FP). True Positives indicate the number of frames in which the output is correct in a video sequence; False Positives indicate the number of frames for which the output is incorrect. Table II shows the efficiency of our proposed modeling for different input videos.
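The efficiency measure just described is a direct ratio; the sketch below (illustrative, not the authors' code) reproduces the Video 1 row of Table II from its TP and FP counts.

```python
def efficiency(tp, fp):
    """Frame-level efficiency TP/(TP+FP), also as a percentage (Table II)."""
    ratio = tp / (tp + fp)
    return round(ratio, 4), round(100 * ratio, 2)

# Video 1 in Table II: TP = 914, FP = 92
ratio, percent = efficiency(914, 92)  # -> (0.9085, 90.85)
```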

Figure 9. Experimental results of activity analysis using 3D modeling for different persons. (Column A) Original video frame; (Column B) 3D modeling. A. Standing, B. Right hand rise, C. Left hand rise, D. Both hand rise, E. Right hand up, F. Left hand up, G. Both hands up, H. Left leg rise, I. Right salute, J. Left salute, and K. Crouching.

   To post-process the frames for the identification of human activities, a silhouette matching technique is used. For this, the silhouettes of the eleven activities are stored in a database. Then, the thirteen feature points of the current video frame are identified and compared with the silhouette of the human body model.

                 TABLE I.    TIME CALCULATION OF ELEVEN ACTIVITIES

      S.No   Activity Name      Video 1 (Sec)   Video 2 (Sec)   Video 3 (Sec)
      1      Standing               1.81            2.01            2.25
      2      Right hand rise        1.74            1.89            2.05
      3      Left hand rise         1.59            1.78            1.64
      4      Both hand rise         1.76            1.64            2.00
      5      Right hand up          1.58            1.82            1.66
      6      Left hand up           1.57            1.52            1.56
      7      Both hands up          1.56            1.59            2.00
      8      Left leg rise          1.54            1.54            1.50
      9      Right salute           1.63            1.65            1.69
      10     Left salute            1.58            1.71            1.58
      11     Crouching              1.50            1.40            1.66
             Average                1.62            1.68            1.78
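A minimal sketch of this matching step, under our assumption (not stated in exactly this form by the authors) that an activity is accepted when all thirteen feature points fall inside a stored activity silhouette; all names and the toy database are illustrative.

```python
def identify_activity(feature_points, database):
    """Return the first stored activity whose silhouette contains all
    feature points of the current frame, else None.

    database: maps activity name -> set of (x, y) foreground pixels.
    feature_points: list of (x, y) tuples (Head, Neck, ... in the paper).
    """
    for activity, silhouette in database.items():
        if all(p in silhouette for p in feature_points):
            return activity
    return None

# Toy database: a 20x40 rectangular 'Standing' silhouette
db = {"Standing": {(x, y) for x in range(20) for y in range(40)}}
points = [(10, 2), (10, 6), (5, 12), (15, 12), (10, 20)]  # a few of the 13
```

Here `identify_activity(points, db)` returns "Standing", while a point set outside every stored silhouette returns None.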

             TABLE II.    EFFICIENCY OF OUR PROPOSED ALGORITHM

      Input      TP      FP     TP+FP    TP/(TP+FP)   Efficiency (%)
      Video 1    914     92     1006       0.9085         90.85
      Video 2    1100    47     1147       0.9590         95.90
      Video 3    1286    96     1382       0.9305         93.05
      Video 4    798     66     864        0.9236         92.36
      Video 5    1349    153    1502       0.8981         89.81
      Video 6    1114    143    1257       0.8862         88.62
      Video 7    1171    115    1286       0.9105         91.05

                 IX.    CONCLUSION AND FUTURE WORK

    We have implemented an approach for human 3D modeling for motion analysis in video security applications. The proposed algorithm works on straight poses acquired by a single static camera, without markers on the human body. Eleven activities of 3D models have been discussed based on the thinning algorithm, and these activities describe almost all human activities in an indoor environment. We have considered 13 feature points for modeling the upper body as well as the lower body. In this paper, the time expenditure and efficiency of the pre-defined 3D models have been presented. In future work, this approach can be extended to an algorithm for tracking and modeling multiple persons. The occlusion problem of human body segments has not been considered here; it will be addressed together with outdoor surveillance videos containing side poses.

                          ACKNOWLEDGMENT

    We would like to express our sincere thanks to the Management of SNR Charitable Trust, Coimbatore, India, for providing the Image Processing Laboratory at Sri Ramakrishna Engineering College to collect and test the real-time videos for the proposed work.

                           REFERENCES

[1] N. Jin and F. Mokhtarian, "Human motion recognition based on statistical shape analysis," Proceedings of AVSS, pp. 4-9, 2005.
[2] Wei Niu, Jiao Long, Dan Han, and Yuan-Fang Wang, "Human activity detection and recognition for video surveillance," Proceedings of ICME, Vol. 1, pp. 719-722, 2004.
[3] H. Su and F. Huang, "Human gait recognition based on motion analysis," Proceedings of MLC, pp. 4464-4468, 2005.
[4] Tao Zhao, Ram Nevatia, and Bo Wu, "Segmentation and tracking of multiple humans in crowded environments," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 30, No. 7, pp. 1198-1211, July 2008.
[5] Mun Wai Lee and Ramakant Nevatia, "Human pose tracking in monocular sequence using multilevel structured models," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 31, No. 1, pp. 27-38, 2009.
[6] K. Onishi, T. Takiguchi, and Y. Ariki, "3D human posture estimation using the HOG features from monocular image," Proc. of 18th IEEE Int. Conference on Pattern Recognition, Tampa, FL, pp. 1-4, 2008.
[7] Mun Wai Lee and Isaac Cohen, "A model-based approach for estimating human 3D poses in static images," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 28, No. 6, pp. 905-916, June 2006.
[8] S. Veni, K. A. Narayanankutty, and M. Kiran Kumar, "Design of architecture for skeletonization on hexagonal sampled image grid," ICGST-GVIP Journal, Vol. 9, Issue I, pp. 25-34, February 2009.
[9] L. Huang, G. Wan, and C. Liu, "An improved parallel thinning algorithm," Proc. of the Seventh International Conference on Document Analysis and Recognition, Vol. 2, pp. 780-783, 2003.
[10] V. Vijaya Kumar, A. Srikrishna, Sadiq Ali Shaik, and S. Trinath, "A new skeletonization method based on connected component approach," IJCSNS Int. J. of Computer Science and Network Security, Vol. 8, No. 2, pp. 133-137, February 2008.
[11] S. Schaefer and C. Yuksel, "Example-based skeleton extraction," Proc. of Eurographics Symposium on Geometry Processing, pp. 1-10, 2007.
[12] R. Horaud, M. Niskanen, G. Dewaele, and E. Boyer, "Human motion tracking by registering an articulated surface to 3D points and normals," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 31, pp. 158-163, 2009.
[13] Jianhui Zhao, Ling Li, and Kwoh Chee Keong, "Motion recovery based on feature extraction from 2D images," Computer Vision and Graphics, pp. 1075-1081, Springer, Netherlands, 2006.
[14] Jingyu Yan and M. Pollefeys, "A factorization-based approach for articulated non-rigid shape, motion and kinematic chain recovery from video," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 30, Issue 5, pp. 865-877, 2008.
[15] K. Srinivasan, K. Porkumaran, and G. Sainarayanan, "Improved background subtraction techniques for security in video applications," Proceedings of 3rd IEEE International Conference on Anti-counterfeiting, Security, and Identification in Communication, pp. 114-117, 20-22 Aug. 2009.
[16] William K. Pratt, Digital Image Processing, John Wiley & Sons, Inc., Third Edition, 2002.

                         AUTHORS PROFILE

    K. Srinivasan received his BE degree in Electronics and Communication Engineering from VLB Janakiammal College of Engineering and Technology, Coimbatore, and his ME in Process Control and Instrumentation Engineering from Annamalai University, India, in 1996 and 2004 respectively. He is currently working as an Assistant Professor at Sri Ramakrishna Engineering College, Coimbatore, India. His research interests include image/video processing, digital signal processing, and neural networks and fuzzy systems.

    K. Porkumaran is Vice-Principal of Dr. N.G.P. Institute of Technology, Anna University, Coimbatore, India. He received his Master's and PhD from PSG College of Technology, India. He was named a Foremost Engineer of the World and an Outstanding Scientist of the 21st Century by the International Biographical Centre, Cambridge, England, in 2007 and 2008 respectively. He has published more than 70 research papers in national and international journals of high repute. His research interests include image and video processing, modelling and simulation, neural networks and fuzzy systems, and bio-signal processing.

    G. Sainarayanan received his Engineering degree from Annamalai University and his ME degree from PSG College of Technology, India, in 1998 and 2000 respectively, and his PhD degree from the School of Engineering and Information Technology, University Malaysia Sabah, Malaysia, in 2002. He is currently Head of R&D, ICT Academy of Tamilnadu, Chennai, India. He is the author of many papers in reputed national and international journals and has received funds from many funding agencies. His research areas include image/video processing, video surveillance systems, control systems, neural networks & fuzzy logic, and instrumentation.
