
ICGST-GVIP Journal, ISSN 1687-398X, Volume (9), Issue (II), April 2009




          RGB Color Centroids Segmentation (CCS) for Face Detection

                                  Jun Zhang, Qieshi Zhang, and Jinglu Hu
                  Graduate School of Information, Production and System, Waseda University
                              Hibikino 2-7, Wakamatsu-ku, Kitakyushu, Japan
               J.Zhang@Akane.Waseda.jp, Q.Zhang@Akane.Waseda.jp, and Jinglu@Waseda.jp

Abstract
Nowadays, face detection plays an important role in many application areas such as video surveillance, human-computer interfaces, face recognition and face image database management. In face detection applications, faces usually occupy only a small region of the image. Consequently, preliminary segmentation of the image into regions that contain "non-face" objects and regions that may contain "face" candidates can greatly accelerate the process of human face detection. Color information based methods have received great attention, because color is an obvious and robust visual cue for detection. This paper presents a new color thresholding method for detecting and tracking multiple faces in video sequences. The proposed method first calculates the color centroids of the image in the RGB color space and segments the centroids region to get an ideal binary image. It then analyzes the structural character of the facial features in each candidate (wait-face) region to fix the face region. The novel contribution of this paper is the creation of a color triangle from the RGB color space and the analysis of the character of the centroids region for color segmentation. The experimental results show that the proposed method achieves ideal thresholding results, performs much better than other color analysis based thresholding methods, and can overcome the influence of background conditions, position, scale and orientation in images.

Keywords: Face detection, Color image thresholding, Color centroids segmentation.

1. Introduction
In our daily life, more and more application techniques based on biometric recognition, such as fingerprints, iris patterns and face recognition, are developed to secure access control. Along with the development of those techniques, computer control plays an important role in making biometric recognition more economically feasible.
Face recognition is a major research direction in this field. In recent years, face recognition has become a popular research topic and is applied in many applications such as financial transactions, monitoring systems, credit card verification, ATM access, personal PC access and video surveillance.
Face detection and tracking are the key processes of face recognition. Recent surveys on face detection cover techniques based on principal component analysis (PCA), neural networks (NN), support vector machines (SVM), the Hough transform (HT), geometrical template matching (GTM), color analysis and others.
PCA based methods [1]-[3] need to create eigenfaces from high-dimensional data and training samples. NN based methods [4][5] require a large number of "face" and "non-face" images to train the network model [6]. The SVM based method [7] is a linear classifier that separates the goal region with a hyper-plane. HT [8] and GTM [9] based methods are incorporated to detect faces in gray images in real-time applications. Classifying face detection methods by the representation they use reveals that algorithms based on holistic representations have the advantage of finding small faces or faces in low-quality images, while those based on geometrical facial features [10] provide a good solution for detecting faces in different poses. A combination of holistic and feature-based approaches is a promising direction for face detection as well as face recognition.
Human skin color has been used and proven to be an effective feature in face detection and tracking applications. Although different people have different skin color, several studies have shown that the major difference lies largely in intensity rather than in chrominance [11]-[13]. Several color spaces have been used to detect skin pixels, including RGB [14], [15], HSV (or HSI) [16], YCbCr [17]-[19] and YIQ [20]. Color information is an efficient tool for identifying facial areas and specific facial features if the skin color model can be properly adapted to different lighting environments. However, such skin color models are not effective where the spectrum of the light source varies significantly. In








other words, color appearance is often unstable due to changes in both background and foreground lighting.
To solve the above problems, this paper analyzes the character of color itself instead of using existing color space and color channel analysis based methods. This paper proposes a color image thresholding algorithm based on color centroids segmentation (CCS) for face detection and tracking. Experiments have been made on video sequences with multiple faces in different positions, scales and poses, where faces may also appear in or disappear from the sequence. This paper is an extension of our paper [21]. We have added automatic multi-threshold selection and robust detection for color images. Experimental results show that the proposed method can achieve ideal detection results.

1.1 Organization
In Section 2, we introduce in detail how to create the CCS model from the color triangle. Section 3 describes the thresholding algorithm based on analyzing the color centroids region. The detection and tracking method is discussed in detail in Section 4. Section 5 presents the thresholding results of our approach together with comparative results against other thresholding methods; we also give detection results for various situations and compare them with some reference methods. In Section 6, we summarize this paper and propose some future work.

2. Color Centroids Segmentation Model
This section first describes how to transform the three components of the 3-D RGB color space to a 2-D polar coordinate system to create the color triangle. It then calculates the centroids distribution region of all colors and transforms it to a histogram. Finally, it analyzes the histogram and obtains multiple thresholds to segment the centroids region. After this processing, the colors in one image can be divided into 2~7 colors by 2~7 thresholds, and the result is better than that of traditional thresholding methods.

2.1. Color Triangle
In image processing, color spaces such as RGB, YCbCr, HSV and HSI are widely used. These color spaces use three components to represent a color; e.g. the RGB color space consists of the R, G and B components. This paper transforms the 3-D RGB color space (Figure 1(a)) to a 2-D polar system as the color triangle (Figure 1(b)) following equation (1):

    [theta_R]   [0 0 0]         [ 90°]
    [rho_R  ]   [1 0 0] [R]     [  0 ]
    [theta_G] = [0 0 0] [G]  +  [210°]    (1)
    [rho_G  ]   [0 1 0] [B]     [  0 ]
    [theta_B]   [0 0 0]         [330°]
    [rho_B  ]   [0 0 1]         [  0 ]

Here R, G and B are the component values of the RGB color space, and (theta_R, rho_R), (theta_G, rho_G), (theta_B, rho_B) are the coordinates of R, G and B in the polar coordinate system. The color triangle is created by the following steps:
Step 1: create a standard 2-D polar coordinate system;
Step 2: create three color vectors to reflect R, G and B; their radii range over [0, 255] and their directions alternate by 120° as follows:

    R component: theta_R =  90°, rho_R in [0, 255]
    G component: theta_G = 210°, rho_G in [0, 255]    (2)
    B component: theta_B = 330°, rho_B in [0, 255]

Step 3: connect the three apexes.
After the above process, the color triangle can be created as in Figure 1(b). For different R, G and B, the shape of the triangle changes. For example, Figure 1(c) shows three sets of R, G and B values and their corresponding color triangles. From this example, it can be seen that no matter how the R, G and B values change, the main structure is unmodified.

    (a) RGB color cube    (b) RGB color triangle
    (c) Examples of color triangle shapes for [R,G,B] = [150,25,15], [25,91,143] and [218,200,143]
    Figure 1. Color triangle model

2.2. Centroids Hexagon
Because the directions of the R, G and B vectors are fixed and the value range is [0, 255], different combinations of R, G and B represent different colors, and the shapes of the corresponding color triangles differ too. Triangles of different shapes have different centroids, and all centroids fall in a hexagonal region shown in Figure 2. This hexagon is divided into 7 regions: M (Magenta), R (Red), Y (Yellow), G (Green), C (Cyan), B (Blue) and L (Luminance, achromatic). So we may use seven threshold curves as the separating lines for thresholding.
Observing the relationship between a color and the centroid position of its color triangle, we find that if the R, G and B values are close, no matter whether small or large, the triangle only reflects luminance information (weak color information). So the centroids of the corresponding color triangles will be located in a circular region (the L region), and the other six color regions reflect the color character of the R, G and B components.
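As a concrete illustration of equations (1) and (2), the centroid of a pixel's color triangle can be computed by placing the three vertices at the fixed angles 90°, 210° and 330° with radii R, G and B, converting to Cartesian coordinates and averaging. The following is a minimal Python sketch (the paper's experiments used Matlab; the function name is ours):

```python
import math

def ccs_centroid(r, g, b):
    """Map an RGB pixel to the centroid (theta, rho) of its color triangle.

    The triangle vertices sit at fixed angles 90deg, 210deg, 330deg with
    radii R, G, B (equations (1)-(2)); the centroid is the mean of the
    three vertices, so rho never exceeds 255/3 = 85.
    """
    angles = (90.0, 210.0, 330.0)
    # Mean of the three vertices in Cartesian coordinates
    x = sum(v * math.cos(math.radians(a)) for v, a in zip((r, g, b), angles)) / 3.0
    y = sum(v * math.sin(math.radians(a)) for v, a in zip((r, g, b), angles)) / 3.0
    rho = math.hypot(x, y)
    theta = math.degrees(math.atan2(y, x)) % 360.0
    return theta, rho
```

For pure red (255, 0, 0) this gives (90°, 85), the top peak of the hexagon, and any gray pixel (R = G = B) maps to rho = 0, the origin of the L region.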








    Figure 2. CCS model

3. CCS Based Color Image Thresholding
3.1 Multiple thresholds
Considering that the L region is usually not the goal region and that existing methods cannot effectively separate the white and black regions, which are noise, clustering them into one region effectively overcomes the influence of white, black and other achromatic regions. Let rho_L be the threshold of the L region and theta the angle; the function of the threshold curve is:

    rho = rho_L,  0° <= theta <= 360°    (3)

The other six regions, which surround the L region, are given by the following formulas:

    M region: 0°      < theta <= theta_M, rho > rho_L
    R region: theta_M < theta <= theta_R, rho > rho_L
    Y region: theta_R < theta <= theta_Y, rho > rho_L
    G region: theta_Y < theta <= theta_G, rho > rho_L    (4)
    C region: theta_G < theta <= theta_C, rho > rho_L
    B region: theta_C < theta <= theta_B, rho > rho_L

In formulas (3) and (4), rho_L, theta_M, theta_R, theta_Y, theta_G, theta_C and theta_B are the thresholds, and their initial values are 5, 60°, 120°, 180°, 240°, 300° and 360°.
The seven thresholds can divide all colors in one image into seven clusters, and the result is shown in Figure 3(b). But these thresholds cannot always give an ideal result for different image scenes. So we propose an automatic threshold selection method to obtain suitable thresholds for different images.

3.2 Automatic Multi-threshold Acquisition
By observing the distribution of color centroids in the hexagon in Figure 3(d), we can see that the centroid distributions of different colors are different; only when R = G = B is the centroid the same, namely the origin of the hexagon. The stronger the color information, the farther the centroid is from the origin. For example, (R, G, B) = (255, 0, 0) is pure red, and its centroid is the top peak point in Figure 2. But determining the threshold curves in Figure 3(d) is not easy. To display the distribution character more clearly, we transform the centroids hexagon distribution in the polar coordinate system to a histogram distribution in the Cartesian coordinate system, as shown in Figure 3(e). In Figure 3(e), the horizontal axis is theta (0°~360°), the vertical axis is rho (0~85), and the six vertical color lines are the threshold curves (theta_M, theta_R, theta_Y, theta_G, theta_C and theta_B).

    (a) Original image  (b) Initial thresholding  (c) Self-adjusting  (d) Hexagon distribution  (e) Histogram distribution
    Figure 3. Color centroids distribution selection

To segment the goal region ideally, the thresholds must be calculated accurately. Because the histogram is not smooth, we use a 1-D iterative median filter to smooth it for analysis. Through many experiments and observations, we define the adjustment range of each threshold curve as from 20° to the left to 20° to the right and find the left and right valleys in this range by histogram analysis. For rho_L, we define the range from 3 to 15 and calculate the average value of the valley bottoms (each color region contributes only one minimum value). After this process, Figure 3(c) can be obtained.

4. Face Detection and Tracking
4.1. Thresholding
By analyzing the characteristics introduced above, thresholding results can be acquired to detect face regions in a color image.
4.1.1. CCS for Thresholding
The CCS can solve the problems of existing methods based on color information, because it can overcome the influences of color and luminance. The proposed method only calculates the direction of color and clusters the dark and light regions into one cluster, which removes some noise. By analyzing many sample face region colors, we find that they always fall in the range 50°~150°, as shown in Figure 4. So we only need to calculate theta'_M, theta'_R and rho_L, which saves time. The pre-face region is:

    F = { (theta, rho) : theta'_M < theta <= theta'_R, rho_L < rho <= 85 }    (5)

Then the method described in Section 3 is used to select the thresholds theta'_M, theta'_R and rho_L and obtain the threshold curves for thresholding. The other thresholds keep their initial values without calculation. In this way, the binary image can be obtained as in Figure 5(b). From the result we can see that the sky, background, white scarf and red cloth regions are clustered to black, and only the regions with colors similar to the face are clustered to white.
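The seven-region decision of formulas (3) and (4) amounts to one radial test followed by six angular tests. A minimal sketch, assuming the initial thresholds listed above (the boundary ordering M, R, Y, G, C, B is our reading of formula (4)):

```python
def ccs_region(theta, rho, rho_l=5.0,
               bounds=(("M", 60.0), ("R", 120.0), ("Y", 180.0),
                       ("G", 240.0), ("C", 300.0), ("B", 360.0))):
    """Assign a centroid (theta in degrees, rho) to one of the 7 CCS regions.

    rho_l and the six angular bounds default to the initial thresholds of
    formulas (3)-(4); centroids with rho <= rho_l are achromatic (L region).
    """
    if rho <= rho_l:
        return "L"
    for name, upper in bounds:
        if theta <= upper:
            return name
    return "M"  # safety net; theta is expected in [0, 360]
```

With these defaults, pure red (centroid at 90°) falls in the R region and pure yellow (centroid near 150°) in the Y region, as the hexagon of Figure 2 suggests.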




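The automatic acquisition of Section 3.2 searches ±20° around each initial boundary for a valley of the smoothed angular histogram. The sketch below is simplified to picking the lowest bin in that window rather than the paper's separate left/right valley analysis:

```python
def adjust_threshold(hist, init_deg, window=20):
    """Shift one angular threshold to the lowest histogram bin within
    +/- window degrees of its initial value.

    hist[d] is the (median-smoothed) count of centroids at angle d for
    d = 0..359; init_deg is an initial boundary from formula (4).
    """
    candidates = [d % 360 for d in range(init_deg - window, init_deg + window + 1)]
    return min(candidates, key=lambda d: hist[d])
```

A boundary only moves if a deeper bin exists nearby; on a flat histogram it stays within the allowed window.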




Through this processing, much noise can be removed, especially the excessively bright, dark and differently colored regions. But because some colors of the background and clothing are close to the face region color, they are clustered into the pre-face region.
4.1.2. Nonlinear Thresholding for Correction
Although CCS thresholding gets a better result, it cannot remove some noise caused by dark color regions. To remove this noise, this paper uses a nonlinear thresholding method to correct the binary image acquired by CCS. Considering that the gray values of dark colors are lower, we apply the nonlinear transform described by equation (6) to the gray image to divide it into 2 clusters, and finally apply the inverse transform to get the binary image. Figure 5(c) is the binary image produced by nonlinear thresholding.

    g(x, y) = k * ln(1 + 255 f(x, y)) / ln(1 + 255)    (6)

where k is the number of clusters, here k = 2, and f(x, y) is the normalized gray value. From Figure 5(c), it can be seen that the white background and the scarf region of the binary image have been clustered to the same value as the face region by nonlinear thresholding, so it is hard to separate the face. But this binary image can overcome some shortcomings of CCS, for example in dark color regions. Sometimes the centroids of some bright regions also fall in the goal region, but in fact they are noise regions. So formula (7) is used to correct the image processed by the CCS based method with an "and" operation:

    B(x, y) = B_CCS(x, y) AND B_NL(x, y)    (7)

This yields an ideal result. Figure 5(d) is the corrected binary image.

    (a) Hexagon distribution  (b) Histogram distribution
    Figure 4. Sample face region centroids distribution

    (a) Original image  (b) CCS thresholding result  (c) Nonlinear thresholding result  (d) Corrected thresholding result
    Figure 5. Proposed thresholding method

4.2. Pre-face Region Decision
Once the binary image is obtained, a median filter is used to remove noise, as Figure 6(b) shows. The white regions are the wait-decision regions; they may include face, hand and skin regions and other regions of similar color (white regions in Figure 6(b)). All wait-decision regions are analyzed in a selection process, and some of them are accepted by aspect ratio and size.

Accepted by size: Calculate the average area S_avg of all wait-decision regions, and delete the regions whose area is larger than 50% or smaller than 0.25% of the average area. If the area S of a candidate face lies in [0.8 S_avg, 1.2 S_avg], it is accepted, as in Figure 6(c).

Accepted by aspect ratio:

    gamma = P^2 / (4 * pi * S)    (8)

where gamma is the aspect ratio, P is the perimeter of the boundary and S is the area of the wait-decision region. If gamma lies in [1, 1.7], the region is accepted, as in Figure 6(d).

4.3. Face Region Tracking
After the face region is fixed, a circle is used to mark it, in six steps:
Step 1: Use the binary face region as a mask to extract the face region from the original image.
Step 2: Because the eye and mouth regions are usually darker than the face skin, divide the face region into 9 sub-blocks. The maximum entropy method is then used to threshold them respectively, as shown in Figure 7.
Step 3: Remove noise points with a median filter and find all pre-eye and pre-mouth regions in the inscribed circle of the face region.
Step 4: Fix the eye and mouth regions by aspect ratio and occupancy as follows:
        Aspect ratio: between 0.2 and 1.7.
        Occupancy: between 0.5% and 4% of the face region.
Step 5: Calculate the area centroids respectively.
Step 6: Draw the circumcircle (blue circle in Figure 7) of the triangle created by the three centroids (red points in Figure 7). Then use a concentric circle with the radius multiplied by 1.5 to mark the face (green circle in Figure 7).
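The correction of Section 4.1.2 can be sketched as follows: equation (6) log-compresses a normalized gray image into k clusters, and formula (7) ANDs the two binary images. The exact quantization rule is our assumption; the paper does not spell it out:

```python
import math

def nonlinear_quantize(gray, k=2):
    """Equation (6) as a quantizer: log-compress normalized gray values
    f(x, y) in [0, 1] and clip to cluster indices 0..k-1 (rule assumed)."""
    scale = math.log(1.0 + 255.0)
    return [[min(k - 1, int(k * math.log(1.0 + 255.0 * f) / scale))
             for f in row] for row in gray]

def and_correct(ccs_bin, nl_bin):
    """Formula (7): a pixel stays a face candidate only if both the CCS
    binary image and the nonlinear-thresholded binary image mark it."""
    return [[a & b for a, b in zip(r1, r2)]
            for r1, r2 in zip(ccs_bin, nl_bin)]
```

With k = 2 the log compression sends only very dark pixels (f below roughly 0.06) to cluster 0, which is exactly the dark-region noise that CCS alone cannot remove.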



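The pre-face decision rules of Section 4.2 can be sketched as a size band around the mean area followed by the circularity test of formula (8); reading gamma as P^2/(4*pi*S), which equals 1 for a perfect circle, is our interpretation:

```python
import math

def accept_regions(regions):
    """Filter wait-decision regions; each region is a (area, perimeter) pair.

    The size band [0.8, 1.2] * mean area and the gamma band [1, 1.7] follow
    Section 4.2; the form gamma = P^2 / (4*pi*S) is our reading of (8).
    """
    mean_area = sum(a for a, _ in regions) / len(regions)
    kept = []
    for area, perim in regions:
        if not (0.8 * mean_area <= area <= 1.2 * mean_area):
            continue                                  # rejected by size
        gamma = perim ** 2 / (4.0 * math.pi * area)   # formula (8)
        if 1.0 <= gamma <= 1.7:
            kept.append((area, perim))                # accepted
    return kept
```

A compact face-like blob passes (gamma close to 1), while a long thin clothing strip of the same area fails the gamma band.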
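Step 6 of the tracking procedure needs the circumcircle of the triangle formed by the two eye centroids and the mouth centroid. A standard closed-form sketch (the formula is textbook geometry, not taken from the paper):

```python
def circumcircle(p1, p2, p3):
    """Center and radius of the circle through three non-collinear points,
    e.g. the eye/eye/mouth centroids of Step 6; the face is then marked
    with a concentric circle of 1.5x this radius."""
    (ax, ay), (bx, by), (cx, cy) = p1, p2, p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    r = ((ax - ux) ** 2 + (ay - uy) ** 2) ** 0.5
    return (ux, uy), r
```

The division by d fails only when the three centroids are collinear, which would mean the eye/mouth detection of Steps 2-4 has already gone wrong.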




    (a) Original image  (b) Binary image  (c) Accepted by size  (d) Accepted by aspect ratio
    Figure 6. Face region decision

    Figure 7. Face tracking model

5. Experimental Results
All the experiments are implemented in Matlab 7.0 on a Celeron M 1.73 GHz platform with 2 GB of memory.

5.1. The Results of the Proposed Method
5.1.1. Thresholding results and comparison
Figure 8 shows an example with a complex background in an outdoor situation: (a) is the original image; (b) is the thresholding result of the proposed method; (c)~(f) are the results of traditional thresholding methods. Figure 9 shows the different thresholding results for the R channel gray image of the RGB color space. Figures 10, 11, 12 and 13 show the different thresholding results for the C channel gray image of the CMYK color space, the V channel of the HSV color space, the L channel of the Lab color space and the Y channel of the YIQ color space. Comparing those thresholding results, we can easily conclude that the proposed method is better than the others.

5.1.2. Face detection results
Figure 14 shows an indoor image which contains more than one person. From Figure 14(b), we can see that it is easy to segment the goal color region; Figure 14(c) is the binary image and the green circles in Figure 14(a) are the detected result.
Figure 15 shows an outdoor image. The ideal binary image can be obtained by histogram analysis for detection. Figure 15(a) shows the accurate tracking result.
Figure 16(a) is a simple-background image with multiple faces. Usually, light-colored clothing regions affect the thresholding results obtained by methods based on luminance or histogram analysis. Figure 16(c) shows that the proposed method can overcome those influences. But the clothing region is connected with the face of the second person, so that face region cannot be detected. To overcome this, the face detection method for determining the pre-face regions needs to be improved.

5.2. Comparison with reference [22]
Figure 17 shows a sample image from reference [22]; (c)~(f) are the results of [22]. It is easy to see that Figures 17(e) and (f) are bad thresholding results; (d) is better but includes some noise. Both (b) and (c) are acceptable results, and (b) is better than (c) because (c) loses a small region under the left eye. So (b) is the best result of the five.

6. Conclusions and Future Works
We have presented a novel color thresholding method to segment color images for temporal face detection and tracking. Experimental results demonstrate that the proposed method can detect and track faces effectively under various conditions. First, the centroids distribution of all colors in one image is calculated. Then the binary image acquired by nonlinear thresholding is used to correct the binary image acquired by the CCS method. Finally, a close operation and filtering are used to get the ideal binary image. In addition, the face can be marked correctly following our proposed tracking method.
Experimental results show excellent performance of the proposed method in different environments, such as changes of room, lighting, complex background and color.
In the future, we need to complete the following items to improve the performance of our method further:
        Overcome changes of pose and viewpoint.
        Use motion analysis for effective prediction.
        Integrate the detection and tracking information to build a face model for real-time recognition.








    (a) Original image                                (b) Proposed method                       (c) Histogram analysis thresholding




  (d) Non-linear thresholding                  (e) Maximum entropy thresholding                         (f) Otsu thresholding
                                Figure 8. Sample image thresholding with different methods




(a) Non-linear thresholding                    (b) Maximum entropy thresholding                         (c) Otsu thresholding
                      Figure 9. R channel of RGB color space thresholding with different methods




(a) Non-linear thresholding                    (b) Maximum entropy thresholding                         (c) Otsu thresholding
                     Figure 10. C channel of CMYK color space thresholding with different methods




(a) Non-linear thresholding                    (b) Maximum entropy thresholding                         (c) Otsu thresholding
                         Figure 11. V channel of HSV color space thresholding with different methods








(a) Non-linear thresholding                     (b) Maximum entropy thresholding                       (c) Otsu thresholding
                         Figure 12. L channel of Lab color space thresholding with different methods




(a) Non-linear thresholding                     (b) Maximum entropy thresholding                       (c) Otsu thresholding
                      Figure 13. Y channel of YIQ color space thresholding with different methods




    (a) Original image                                (b) Centroids histogram                           (c) Binary image
                                         Figure 14. Detection with indoor condition




Figure 15. Detection under an outdoor condition: (a) original image, (b) centroids histogram, (c) binary image.








Figure 16. Detection with a simple background: (a) original image, (b) centroids histogram, (c) binary image.




Figure 17. Thresholding results compared with reference [22]: (a) original image, (b) proposed method, (c) reference [22], (d) skin locus, (e) single Gaussian, (f) histogram.



7. References
[1] N. Morizet, F. Amiel, I. D. Hamed, and T. Ea, "A Comparative Implementation of PCA Face Recognition Algorithm," 2007 IEEE International Conference on Electronics, Circuits and Systems, pp. 865-868, Dec. 2007.
[2] H. M. El-Bakry and Q. Zhao, "Fast Neural Implementation of PCA for Face Detection," International Joint Conference on Neural Networks, pp. 806-811, Jul. 2006.
[3] V. D. M. Nhat and S. Lee, "Two-dimensional Weighted PCA Algorithm for Face Recognition," Proceedings of the 2005 IEEE International Symposium on Computational Intelligence in Robotics and Automation, pp. 219-223, Jun. 2005.
[4] K. Yang, H. Zhu, and Y. J. Pan, "Human Face Detection Based on SOFM Neural Network," Proceedings of the 2006 IEEE International Conference on Information Acquisition, pp. 1253-1257, Aug. 2006.
[5] S. H. Lin, S. Y. Kung, and L. J. Lin, "Face Recognition/Detection by Probabilistic Decision-based Neural Network," IEEE Transactions on Neural Networks, Vol. 8, pp. 114-132, Jan. 1997.
[6] H. Jee, K. Lee, and S. Pan, "Eye and Face Detection Using SVM," Proceedings of the 2004 International Sensor Networks and Information Processing Conference, pp. 577-580, Dec. 2004.
[7] C. Shavers, R. Li, and G. Lebby, "An SVM-based Approach to Face Detection," Proceedings of the 38th Southeastern Symposium on System Theory, pp. 362-366, Mar. 2006.
[8] H. Wu, G. Yoshikawa, T. Shioyama, T. Lao, and T. Kawade, "Glasses Frame Detection with 3D Hough Transform," Proceedings of the 16th International Conference on Pattern Recognition, Vol. 2, pp. 346-349, Aug. 2002.
[9] A. Basavaraj and P. Nagaraj, "The Facial Features Extraction for Face Recognition Based on Geometrical Approach," 2006 Canadian Conference on Electrical and Computer Engineering, pp. 1936-1939, May 2006.
[10] S. Jeng, H. Liao, Y. Liu, and M. Chern, "An Efficient Approach for Facial Feature Detection Using Geometrical Face Model," Proceedings of the 13th International Conference on Pattern Recognition, Vol. 3, pp. 426-430, Aug. 1996.
[11] H. P. Graf, T. Chen, E. Petajan, and E. Cosatto, "Locating Faces and Facial Parts," Proceedings of the 1st International Workshop on Automatic Face and Gesture Recognition, pp. 41-46, 1995.
[12] H. P. Graf, E. Cosatto, D. Gibbon, M. Kocheisen, and E. Petajan, "Multimodal System for Locating Heads and Faces," Proceedings of the 2nd International Conference on Automatic Face and Gesture Recognition, pp. 88-93, 1996.
[13] J. Yang and A. Waibel, "A Real-Time Face Tracker," Proceedings of the 3rd Workshop on Applications of Computer Vision, pp. 142-147, 1996.
[14] H. Stokman and T. Gevers, "Selection and Fusion of Color Models for Image Feature Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29, pp. 371-381, Mar. 2007.
[15] T. Gevers and A. W. M. Smeulders, "Combining Color and Shape Invariant Features for Image Retrieval," IEEE Transactions on Image Processing, Vol. 9, pp. 102-119, Jan. 2000.
[16] W. Chen, Y. Q. Shi, and G. Xuan, "Identifying Computer Graphics Using HSV Color Model and Statistical Moments of Characteristic Functions," 2007 IEEE International Conference on Multimedia and Expo, pp. 1123-1126, Jul. 2007.
[17] C. N. R. Kumar and A. Bindu, "An Efficient Skin Illumination Compensation Model for Efficient Face Detection," 32nd Annual Conference of the IEEE Industrial Electronics Society, pp. 3444-3449, Nov. 2006.
[18] L. P. Son, A. Bouzerdoum, and D. Chai, "A Novel Skin Color Model in YCbCr Color Space and Its Application to Human Face Detection," Proceedings of the 2002 International Conference on Image Processing, Vol. 1, pp. 289-292, Sept. 2002.
[19] Y. T. Pai, S. J. Ruan, M. C. Shie, and Y. C. Liu, "A Simple and Accurate Color Face Detection Algorithm in Complex Background," 2006 IEEE International Conference on Multimedia and Expo, pp. 1545-1548, Jul. 2006.
[20] Y. Dai and Y. Nakano, "Face-Texture Model Based on SGLD and Its Application in Face Detection in a Color Scene," Pattern Recognition, Vol. 29, No. 6, pp. 1007-1017, 1996.
[21] Q. Zhang, J. Zhang, and S. Kamata, "Face Detection Method Based on Color Barycenter Hexagon Model," 2008 International Multi-Conference of Engineers and Computer Scientists, Vol. 1, pp. 655-658, Mar. 2008.
[22] L. Sabeti and Q. M. J. Wu, "High-speed Skin Color Segmentation for Real-time Human Tracking," 2007 IEEE International Conference on Systems, Man and Cybernetics, pp. 2378-2382, Oct. 2007.

Biographies
Jun Zhang received the B.E. degree in Automation and the M.E. degree in Pattern Recognition and Intelligent Systems from Xi'an University of Technology, China, in 2001 and 2005 respectively, and is currently pursuing a doctoral degree at the Graduate School of Information, Production and Systems, Waseda University, Japan. She was an instructor in the Department of Information and Control Technology of Xi'an Institute of Post & Telecom from 2005 to 2007. Her research interests include image recognition, detection and analysis. She is a member of the Chinese Institute of Electronics and a student member of IEEE, IEICE, IIEEJ and JAMIT.

Qieshi Zhang received the B.E. degree in Automation from Xi'an University of Technology, China, in 2004, and the Master's degree in Information, Production and Systems Engineering from Waseda University, Japan, in 2009. He is currently pursuing a doctoral degree at the Graduate School of Information, Production and Systems, Waseda University, Japan. He was an associate in the Department of Mechanical and Electronic Technology at Xi'an Siyuan University, China, from 2004 to 2006. His research interests include image compression, detection and recognition. He is a member of the Chinese Institute of Electronics and a student member of IEEE, IEICE and IIEEJ.

Jinglu Hu received the M.S. degree in Electronic Engineering from Zhongshan University, China, in 1986, and the Ph.D. degree in Computer Science and Engineering from Kyushu Institute of Technology, Japan, in 1997. From 1986 to 1993, he worked at Zhongshan University, where he was a Research Associate and then a Lecturer. From 1997 to March 2003, he was a Research Associate in the Graduate School of Information Science and Electrical Engineering, Kyushu University, Japan. From April 2003 to March 2008, he was an Associate Professor in the Graduate School of Information, Production and Systems, Waseda University. Since April 2008, he has been a Professor in the Graduate School of Information, Production and Systems, Waseda University. Dr. Hu is a member of IEEE, SICE, IEEJ and IEICE.



