					IJCSN International Journal of Computer Science and Network, Volume 2, Issue 5, October 2013
ISSN (Online) : 2277-5420       www.ijcsn.org


                   Eyes Detection by Pulse Coupled Neural Networks

       Maminiaina Alphonse Rafidison, Andry Auguste Randriamitantsoa, Paul Auguste Randriamitantsoa

                  Telecommunication - Automatic - Signal - Image Research Laboratory
                 High School Polytechnic of Antananarivo, University of Antananarivo
                            Antananarivo, Ankatso BP 1500, Madagascar




Abstract

This paper presents a new, fast and robust method for eyes detection using Pulse-Coupled Neural Networks (PCNN). The approach differs from traditional neural networks in that no training step is required; thanks to this feature, the algorithm response time is around three milliseconds. The method has two components: face area detection based on segmentation, and eyes detection based on edges, both performed by a PCNN. The biggest region made up of pixels of value one is taken as the human face area. The segmented face zone, which becomes the input of the PCNN for edge detection, then undergoes a vertical gradient operation. The eyes are the two centers of gravity of the closed edges nearest the horizontal line corresponding to the peak of the horizontal projection of the vertical gradient image.

Keywords: Pulse Coupled Neural Networks, Face Detection, Eyes Detection, Image Segmentation, Edge Detection.


1. Introduction

In recent decades, the image processing domain has evolved exponentially, and its current state is completely different from its initial one. Today, image processing research is oriented toward object recognition, and face recognition in particular. Eyes detection is an important phase that conditions the performance of face recognition. In this paper, an eyes detection method based on pulse coupled neural networks is proposed. It is divided into two parts: face detection first, followed by eyes detection. The next section presents the pulse coupled neural network model, then the details of the proposed algorithm, followed by the test phase, its performance measurement and its prospects.

2. Pulse Coupled Neural Networks Model

The architecture of a Pulse Coupled Neural Network (PCNN) is rather simpler than most other neural network implementations. A PCNN does not have multiple layers; it receives its input directly from the original image and produces a resulting "pulse" image. The network consists of multiple nodes coupled together with their neighbors within a definite distance, forming a grid (2D vector). The PCNN neuron has two input compartments: linking and feeding. The feeding compartment receives both an external and a local stimulus, whereas the linking compartment only receives a local stimulus. When the internal activity becomes larger than an internal threshold, the neuron fires and the threshold sharply increases. Afterward, the threshold begins to decay until once again the internal activity becomes larger.

                   Fig. 1 Pulse Coupled Neural Networks Structure

This process gives rise to the pulsing nature of the PCNN, forming a wave signature which is invariant to rotation, scale, shift or skew of an object within the image. This last feature makes the PCNN a suitable approach for feature extraction in very-high-resolution imagery, where the view angle of the sensor may play an important role. A PCNN system can be defined by the following expressions:

    F_{ij}[n] = e^{-\alpha_F} F_{ij}[n-1] + S_{ij} + V_F \sum_{kl} M_{ijkl} Y_{kl}[n-1]    (1)

    L_{ij}[n] = e^{-\alpha_L} L_{ij}[n-1] + V_L \sum_{kl} W_{ijkl} Y_{kl}[n-1]    (2)

where S_{ij} is the input stimulus to the neuron (i, j), and F_{ij} and L_{ij} are respectively the values of the feeding and linking compartments. Each of these neurons

communicates with its neighboring neurons by means of the weights given by the M and W kernels. Y is the output of a neuron from the previous iteration, while the \alpha and V terms indicate the delay and normalizing constants. The outputs of the feeding and linking compartments are combined to create the internal state U of the neuron:

    U_{ij}[n] = F_{ij}[n] (1 + \beta L_{ij}[n])    (3)

A dynamic threshold \Theta is also calculated, as follows:

    \Theta_{ij}[n] = e^{-\alpha_\Theta} \Theta_{ij}[n-1] + V_\Theta Y_{ij}[n]    (4)

In the end, the internal activity U is compared with \Theta to produce the output Y:

    Y_{ij}[n] = 1 if U_{ij}[n] > \Theta_{ij}[n], 0 otherwise    (5)

The result of PCNN processing depends on many parameters. For instance, the linking strength \beta affects segmentation and, together with M and W, scales the feeding and linking inputs, while the normalizing constants scale the internal signals. Moreover, the dimension of the convolution kernel affects the propagation speed of the autowave. With the aim of developing an edge-detecting PCNN, many tests have been made, changing each parameter [1][3].

3. Proposed Method

The method does not depend on the input image format; in the case of a color image, conversion to grayscale is required. There are two steps to follow: face detection, then eyes detection. The following figure presents the outline of our algorithm.

                 Fig. 2 Face/Eyes detection method

3.1 Face Detection

The search for the face area focuses on skin detection, because skin is the dominant part in the top portion of an image of a person. Once we have the grayscale image as input, the PCNN is configured with the following parameters:

Weights matrix:
    (6)

Initial values of the matrices: the initial values of the linking matrix L, the feeding matrix F and the stimulus S are the same as the input image. The output Y of the PCNN is initialized by the convolution between a null matrix of the same size R x C as the input image and the weights matrix. The dynamic threshold \Theta is initialized to an R-by-C matrix of twos.

Delay constants: \alpha_F, \alpha_L and \alpha_\Theta.

Normalizing constants: \beta, V_F, V_L and V_\Theta.

The PCNN is now ready for the iteration exercise. For skin segmentation, the iteration count is set to n = 3: an image is already segmented after the first iteration, but to obtain a good result we repeat the operation three times. Fig. 4, Fig. 5 and Fig. 6 show the PCNN outputs.

                       Fig. 3 Original image
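As a rough illustration (not the authors' exact code), the iteration described by Eqs. (1)-(5) with the initialization above can be sketched in Python with NumPy/SciPy. The coupling kernel and the constant values below are placeholders, since the paper's Eq. (6) matrix and the actual constant values are not reproduced here:

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_segment(image, n_iter=3, beta=0.2,
                 alpha_f=0.1, alpha_l=1.0, alpha_t=0.3,
                 v_f=0.5, v_l=0.2, v_t=20.0):
    """One PCNN pass following Eqs. (1)-(5); constants are illustrative only."""
    s = image.astype(float) / image.max()      # stimulus S, normalized grayscale
    # Placeholder 3x3 coupling kernel (the paper's Eq. (6) matrix is not shown)
    w = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    f = s.copy()                               # feeding compartment F
    l = s.copy()                               # linking compartment L
    y = convolve(np.zeros_like(s), w)          # output Y: null matrix * kernel
    theta = np.full(s.shape, 2.0)              # dynamic threshold, R-by-C of twos
    for _ in range(n_iter):
        coupling = convolve(y, w, mode='constant')
        f = np.exp(-alpha_f) * f + s + v_f * coupling   # Eq. (1)
        l = np.exp(-alpha_l) * l + v_l * coupling       # Eq. (2)
        u = f * (1.0 + beta * l)                        # Eq. (3): internal activity
        y = (u > theta).astype(float)                   # Eq. (5): pulse output
        theta = np.exp(-alpha_t) * theta + v_t * y      # Eq. (4): threshold decay/boost
    return y
```

Note that the threshold update (Eq. (4)) uses the current output Y[n], so it is applied after the firing decision within each iteration.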




              Fig. 4 PCNN output for first iteration n=1

              Fig. 5 PCNN output for second iteration n=2

              Fig. 6 PCNN output for third iteration n=3

Once the original image of size R x C is segmented, we calculate the sum of pixel values per row, H(i), and per column, V(j):

    H(i) = \sum_{j=1}^{C} P(i, j)    (7)

    V(j) = \sum_{i=1}^{R} P(i, j)    (8)

where P(i, j) is the value of the segmented pixel at row i and column j.

              Fig. 7 Vertical projection of Fig. 6 (sum of pixel values against image column)
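A sketch of the projections in Eqs. (7)-(8), plus one simple way to bound the band where a profile peaks (the thresholding fraction here is an illustrative choice, not taken from the paper):

```python
import numpy as np

def projections(binary):
    """Row and column sums of a binary segmented image, Eqs. (7)-(8)."""
    h = binary.sum(axis=1)   # H(i): horizontal projection, one sum per row
    v = binary.sum(axis=0)   # V(j): vertical projection, one sum per column
    return h, v

def peak_band(profile, frac=0.5):
    """Index range where a projection profile exceeds a fraction of its peak,
    used to delimit the face band (illustrative criterion)."""
    idx = np.flatnonzero(profile >= frac * profile.max())
    return idx.min(), idx.max()
```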

The vertical projection graph presents peak values over a range of columns; the same holds for the horizontal projection over a range of rows.

                Fig. 8 Horizontal projection of Fig. 6 (sum of pixel values against image row)

The face area is the intersection of these two bands, that is, the rectangular area described in Fig. 9:

    (9)

                Fig. 9 Face detection method

With our experimental image, we get the following picture:

                Fig. 10 Face area

3.2 Eyes Detection

After detecting the face, the next step is to localize the iris. We first extract the content of the face rectangle from the segmented image at the PCNN's output (Fig. 6). Then, we close each region so that it is well delimited; in Matlab, this operation is available as "imclose". Fig. 11 presents the result of this operation.

                Fig. 11 Region customization

The image with closed regions becomes the input of the PCNN for edge detection. The network uses the same parameters as before, during the segmentation step. Three iterations are enough to obtain a good edge detection result; the following figures show the output of each iteration.
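In Python, the "imclose" region customization above can be approximated with SciPy's binary morphological closing. This is only a sketch: the 5x5 structuring element is an assumption, as the paper does not state the element actually used.

```python
import numpy as np
from scipy.ndimage import binary_closing

def customize_regions(segmented, size=5):
    """Close small gaps so each region is well delimited,
    mimicking Matlab's imclose on a binary image."""
    structure = np.ones((size, size), dtype=bool)   # assumed structuring element
    return binary_closing(segmented.astype(bool), structure=structure)
```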




                       Fig. 12 First iteration

                       Fig. 13 Second iteration

                       Fig. 14 Third iteration

The PCNN has thus played two important roles: segmentation and edge detection. The closed edges are filled with blank color (the "imfill" Matlab function), and we compute the difference between this filled image and the image of the last PCNN iteration:

                       Fig. 15 Eyes region candidates

Once the eye region candidates are found, we look for the gravity's center of each region. For a 2D continuous domain, knowing the characteristic function f(x, y) of a region, the raw moment of order (p + q) is defined as:

    M_{pq} = \int \int x^p y^q f(x, y) dx dy    (10)

Adapting this to a scalar (greyscale) image with pixel intensities I(x, y), the raw image moments M_{pq} are calculated by:

    M_{pq} = \sum_x \sum_y x^p y^q I(x, y)    (11)

The two raw moments of order one, M_{10} and M_{01}, associated with the moment of order zero, M_{00}, are used to calculate the centroid of each region. Its position is defined as:

    \bar{x} = M_{10} / M_{00} and \bar{y} = M_{01} / M_{00}    (12)

Now, our problem is: how do we identify the eyes? To answer this question, we calculate the vertical gradient of the segmented face area image:

    (13)

                Fig. 16 Vertical gradient of face area segmented

We use the same principle as in face detection, calculating the sum of gray levels of the vertical gradient image per row [2]. We get the peaks and draw the corresponding horizontal line.
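The centroid computation of Eqs. (10)-(12) and the peak-line selection can be sketched as follows. The function names are mine, and `scipy.ndimage.label` stands in for whatever connected-component step separates the candidate regions:

```python
import numpy as np
from scipy.ndimage import label

def region_centroids(candidates):
    """Centroid (x_bar, y_bar) of each labeled binary region via raw moments,
    Eqs. (10)-(12); for a binary region, M00 is simply the pixel count."""
    labeled, n = label(candidates)
    centroids = []
    for k in range(1, n + 1):
        ys, xs = np.nonzero(labeled == k)
        m00 = len(xs)                       # M00: region area
        centroids.append((xs.sum() / m00,   # x_bar = M10 / M00
                          ys.sum() / m00))  # y_bar = M01 / M00
    return centroids

def pick_eyes(candidates, grad_y):
    """Pick the two centroids closest to the peak row of the horizontal
    projection of the vertical gradient image (the selection described
    around Eqs. (14)-(15))."""
    row_sums = np.abs(grad_y).sum(axis=1)
    y_peak = int(np.argmax(row_sums))             # peak row of the projection
    cents = region_centroids(candidates)
    cents.sort(key=lambda c: abs(c[1] - y_peak))  # distance of centroid to line
    return cents[:2]
```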

                Fig. 17 Horizontal projection of Fig. 16 (sum of the vertical gradient against image row)

    (14)

The peak row y_p is the line carrying the relevant information in the top part of the image, and the two centers of gravity of the regions nearest this horizontal line are the eyes. The distance between y_p and a center of gravity is calculated by:

    (15)

                Fig. 18 Line and gravity's centers positions

Finally, the eyes are detected with good precision.

                Fig. 19 Eyes detected

4. Results and Performance

All tests were performed with color images of different dimensions. As the algorithm does not use an image database for training, eyes detection is very fast. However, it has a weakness when the person wears glasses, because the iris is then not detected correctly. Samples of the experimental results are shown in the series of pictures (Fig. 20 and Fig. 21) below:

                Fig. 20 Multiple detection

                Fig. 21 Testing results

An approximate measure of performance was obtained by passing the image database tests used by the methods listed in Table 2, plus some images from the internet, as input to our algorithm. Table 1 shows the results of this testing:

                  Table 1: Performance measurement

                    Face Detection    Eyes Detection
  Without glasses       99.6%             99.4%
  With glasses          98.4%             97.6%
  Total                 99%               98.5%

With a performance of 98.5%, we can say that our method is powerful. A comparison with other algorithms was done, and Table 2 indicates the results. This algorithm can be used for face recognition or for reading facial expressions.

                  Table 2: Comparison results

  No.   Method                                                Eyes detection success rate
  1     Choi and Kim [4]                                      98.7%
  2     Proposed Method                                       98.5%
  3     S. Asteriadis, N. Nikolaidis, A. Hajdu, I. Pitas [8]  98.2%
  4     Song and Liu [6]                                      97.6%
  5     Kawaguchi and Rizon [7]                               96.8%
  6     Eye detection based on Haar cascade classifier        96.5%
  7     Zhou and Geng [5]                                     95.9%

5. Conclusion

In this paper, we proposed a method for eyes detection using Pulse Coupled Neural Networks (PCNN), which are inspired by the human visual cortex. The algorithm has two parts: face detection, which is based on segmentation, and eyes detection, which is based on edge detection. The method is very fast because it iterates directly on the image instead of learning from an image database. The running time of the algorithm is about three milliseconds, which is acceptable for real-time applications, and less than this for a grayscale image. The success rate is up to 99.4% for a picture of a person without glasses, against 97.6% with glasses.

Our prospects turn to extracting other face features such as the nose and the mouth: the two iris positions define a segment, and the perpendicular line passing through the middle of this segment crosses, in the PCNN output (Fig. 6), first the black region of the nose and then that of the mouth.

Acknowledgments

The authors thank Mrs Hellen Ndiri Ayuku and Mr Arnou Georges Philippe Jean for the English language review.

References

[1]   F. D. Frate, G. Licciardi, F. Pacifici, C. Pratola, and D. Solimini, "Pulse Coupled Neural Network for Automatic Features Extraction from Cosmo-Skymed and Terrasar-x Imagery", Tor Vergata Earth Observation Lab, Dipartimento di Informatica, Sistemi e Produzione, Tor Vergata University, Via del Politecnico 1, 00133 Rome, Italy, 2009.
[2]   A. Soetedjo, "Eye Detection Based-on Color and Shape Features", (IJACSA) International Journal of Advanced

      Computer Science and Applications, Vol. 3, No. 5, pp. 17-22, 2011.
[3]   T. Lindblad, J. M. Kinser, "Image Processing Using Pulse-Coupled Neural Networks", Second, Revised Edition, Springer, 2005.
[4]   I. Choi, D. Kim, "Eye Correction Using Correlation Information", in Y. Yagi et al. (Eds.): ACCV 2007, Part I, LNCS 4843, pp. 698-707, 2007.
[5]   Z. Zhou, X. Geng, "Projection Functions for Eye Detection", Pattern Recognition, Vol. 37, pp. 1049-1056, 2004.
[6]   J. Song, Z. Chi, J. Liu, "A Robust Eye Detection Method Using Combined Binary Edge and Intensity Information", Pattern Recognition, Vol. 39, pp. 1110-1125, 2006.
[7]   T. Kawaguchi, M. Rizon, "Iris Detection Using Intensity and Edge Information", Pattern Recognition, Vol. 36, pp. 549-562, 2003.
[8]   S. Asteriadis, N. Nikolaidis, A. Hajdu, I. Pitas, "An Eye Detection Algorithm Using Pixel to Edge Information", Department of Informatics, Aristotle University of Thessaloniki, Box 451, 54124, Thessaloniki, Greece, 2010.

Maminiaina A. Rafidison was born in Moramanga, Madagascar, in 1984. He received his Engineer Diploma in Telecommunication in 2007 and his M.Sc. in 2011 at the High School Polytechnic of Antananarivo, Madagascar. He is currently a consultant expert on Value Added Services (VAS) in the telecom domain at Mahindra Comviva Technologies and, in parallel, a Ph.D. student at the High School Polytechnic of Antananarivo. His current research concerns image processing, especially using neural networks.

Andry A. Randriamitantsoa received his Engineer Diploma in Telecommunication in 2008 at the High School Polytechnic of Antananarivo, Madagascar, and his M.Sc. in 2009. He currently works for the High School Polytechnic and obtained a Ph.D. in Automatic and Computer Science in 2013. His research interests include automatic control, robust control and computer science.

Paul A. Randriamitantsoa was born in Madagascar in 1953. He is a professor at the High School Polytechnic of Antananarivo and head of the Telecommunication - Automatic - Signal - Image Research Laboratory.

				