Team4_Terrorists by FlavioBernardotti1


                   The Project
•   What is our goal?
•   Who are they?
•   How to start?
•   How to recognize a face?
    – Face detection
    – Face feature detection
    – Related search for eyes, mouth and nose
                 What is our goal?

• Find out if someone is a terrorist.

• Try to identify them even if they
  are disguised
• We have a problem...
                           Who are they?
•   They are people who
     –   Blow up cars and buildings
     –   Kill people
     –   Seize control

•   Enough reason to do something
                              How to start?
•   Database
     –   Images of terrorists
     –   Training images for identification (by computer)

•   Take a picture of suspicious person

•   Write a program that decides whether someone is a terrorist
       How to recognize a face?
• Problems
  – Disguised person
  – Other: rotated head, glasses, ...

• Use some algorithms
  – PCA
  – LDA

• OpenCV
  – Haar object detection
  – AdaBoost
•   Principal Component Analysis

•   reduce the dimensionality of the data while retaining as much
    as possible of the variation present in the original dataset

•   implies information loss

•   The best low-dimensional space can be determined by
    the "best" eigenvectors of the covariance matrix

•   (i.e., the eigenvectors corresponding to the "largest"
    eigenvalues, also called "principal components").

•   PCA projects the data along the directions where the
    data varies the most.
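A minimal numerical sketch of the idea (NumPy only; the data, dimensions and variable names are made up for illustration):

```python
import numpy as np

# PCA sketch: project data onto the eigenvectors of the covariance
# matrix with the largest eigenvalues (the "principal components").
rng = np.random.default_rng(0)
# 100 samples in 5 dimensions; most variance lies along the first two axes.
X = rng.normal(size=(100, 5)) * np.array([10.0, 5.0, 1.0, 0.5, 0.1])

Xc = X - X.mean(axis=0)                  # 1. center the data
cov = np.cov(Xc, rowvar=False)           # 2. covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # 3. eigendecomposition (ascending)
order = np.argsort(eigvals)[::-1]        #    largest eigenvalues first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 2                                    # 4. keep k principal components
W = eigvecs[:, :k]
Z = Xc @ W                               # 5. project along max-variance dirs
print(Z.shape)                           # (100, 2)
```

Dropping the remaining three components is exactly the information loss the slide mentions: it is small because little variance lies along those directions.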
Problems of Eigenface technique

• Sensitive to rotation, scale and translation.
• Sensitive to lighting variations
• Background interference

• Face images should be preprocessed to lessen the
  effects of possible variations.

• Variations such as lighting and rotation can also be
  taken into account during training. The training dataset
  may include samples with such variations.
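As one illustration of such preprocessing, a histogram-equalization sketch (NumPy only; this is a common lighting normalization, not necessarily the one used in this project):

```python
import numpy as np

# Histogram equalization: spread an 8-bit grayscale image's intensities
# over the full 0-255 range to lessen lighting variation.
def equalize_histogram(img):
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level through the normalized cumulative distribution.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# A dark, low-contrast toy "face" image: values only in 40..80.
rng = np.random.default_rng(0)
dark = rng.integers(40, 81, size=(64, 64)).astype(np.uint8)
bright = equalize_histogram(dark)
print(dark.max(), bright.max())   # the equalized image uses the full range
```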
•   Linear Discriminant Analysis
•   The objective of LDA is to perform
    dimensionality reduction while preserving as
    much of the class discriminatory information
    as possible
•   It seeks to find directions along which the
    classes are best separated.
•    It does so by taking into consideration not only
    the within-class scatter but also the
    between-class scatter.
•    It is also more capable of distinguishing
    image variation due to person identity from
    variation due to other sources such as
    illumination and expression.
•   μr : mean feature vector for class r.
•   Kr : number of training samples from class r.
•   LDA computes a transformation that
    maximizes the between-class scatter while
    minimizing the within-class scatter (the
    Fisher criterion).
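A two-class sketch of this (NumPy only, toy data; the standard Fisher formulation, with μr the class means as above):

```python
import numpy as np

# Two-class Fisher LDA: maximize between-class scatter over
# within-class scatter.
rng = np.random.default_rng(1)
X1 = rng.normal(loc=[0.0, 0.0], size=(50, 2))    # class 1, K_1 = 50
X2 = rng.normal(loc=[4.0, 4.0], size=(60, 2))    # class 2, K_2 = 60
mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)      # class means (mu_r)
mu = np.vstack([X1, X2]).mean(axis=0)            # overall mean

# Within-class scatter S_w and between-class scatter S_b.
Sw = (X1 - mu1).T @ (X1 - mu1) + (X2 - mu2).T @ (X2 - mu2)
Sb = (len(X1) * np.outer(mu1 - mu, mu1 - mu)
      + len(X2) * np.outer(mu2 - mu, mu2 - mu))

# LDA maximizes J(w) = (w' S_b w) / (w' S_w w); the optimal directions
# are the leading eigenvectors of inv(S_w) @ S_b.
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
w = eigvecs[:, np.argmax(eigvals.real)].real

p1, p2 = X1 @ w, X2 @ w                          # 1-D projections per class
print(abs(p1.mean() - p2.mean()) > 3.0)          # True: classes stay apart
```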
•   Open Source Computer Vision Library
    –   Extensive vision support
        •   Convolution, thresholding, floodfills, histogramming
        •   Pyramidal-subsampling
        •   Learning-based vision
        •   Feature detection
            –   Edge detection
            –   Blob finders, ...

    –   Haar cascade classifier
                          OpenCV -- Haar
•   OpenCV has a Haar features based face
    detection module.

•   Uses local features such as edges and line
    patterns. It scans a given image at different
    scales as in template matching.

•   Scale, translation and light invariant.

•   However it is sensitive to rotation.
     –   Rotate image and run again
               Advantages of using OpenCV
                  Haar object detection

•   Face detector already implemented

•   Its only argument is an XML file

•   Detection at any scale

•   Face detection (for videos) at 15 frames per second for 384x288-pixel images

•   90% of objects detected, achievable with about two weeks of training
                      Haar-Like Features

•   Each Haar-like feature consists of two or three adjoining “black” and “white” rectangles.

     Figure 1: A set of basic Haar-like features.

                                                    Figure 2: A set of extended Haar-like features.

•   The value of a Haar-like feature is the difference between the sums of the
    pixel gray-level values within the black and the white rectangular regions:
       f(x) = Sum_black rectangle(pixel gray level) – Sum_white rectangle(pixel gray level)
•   Compared with raw pixel values, Haar-like features can reduce/increase
    the in-class/out-of-class variability, making classification easier.
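The feature value above can be computed in constant time with an integral image, as Viola–Jones-style detectors do (a sketch with a toy 4x4 image; the helper names are illustrative):

```python
import numpy as np

# Integral image: ii[r, c] = sum of all pixels at or above-left of (r, c),
# so any rectangle sum costs at most four lookups.
def integral_image(img):
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1], read from the integral image ii."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

# Toy "image" with a vertical edge: left half 1s, right half 0s.
img = np.array([[1, 1, 0, 0]] * 4, dtype=np.int64)
ii = integral_image(img)

# Two-rectangle edge feature: f = sum(black half) - sum(white half).
f = rect_sum(ii, 0, 0, 4, 2) - rect_sum(ii, 0, 2, 4, 4)
print(f)  # 8: a strong response on the vertical edge
```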
               Adaboost classifier
• Selects a small number of
  critical visual features

• Combines a collection of weak
  classification functions to form
  a strong classifier
                                     Figure: the first and second features
                                     selected by AdaBoost for face detection.
•   The computation cost of using Haar-like features:
    Example: original image size: 320x240,
             sub-window size: 24x24.
    The total number of sub-windows to evaluate is already large;
    considering the scaling factor and the total number of Haar-like
    features, the computation cost is huge.

•   AdaBoost (Adaptive Boost) is an iterative learning algorithm to construct
    a “strong” classifier using only a training set and a “weak” learning
    algorithm. A “weak” classifier with the minimum classification error is
    selected by the learning algorithm at each iteration.

•   AdaBoost is adaptive in the sense that later classifiers are tuned up in
    favor of those sub-windows misclassified by previous classifiers.
•   The algorithm:
     –   AdaBoost starts with a uniform distribution of “weights” over the
         training examples. The weights tell the learning algorithm the
         importance of each example.
     –   Obtain a weak classifier, hj(x), from the weak learning algorithm.
     –   Increase the weights on the training examples that were
         misclassified.
     –   Repeat.
     –   At the end, carefully make a linear combination of the weak
         classifiers obtained at all iterations:

         f_final(x) = sign( α1·h1(x) + … + αn·hn(x) )
• Simple to implement

• But..
  – Suboptimal solution
  – Overfits in the presence of noise
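The steps above can be sketched with 1-D threshold "stumps" as the weak classifiers (toy data; every value here is invented for illustration):

```python
import numpy as np

# AdaBoost sketch: weak classifiers are stumps h(x) = s * sign(x - t).
X = np.array([0.1, 0.2, 0.25, 0.3, 0.6, 0.65, 0.7, 0.8])
y = np.array([-1,  -1,   1,  -1,   1,  -1,   1,   1])  # two "hard" points

n = len(X)
w = np.ones(n) / n                     # uniform distribution of weights
stumps, alphas = [], []
for _ in range(5):
    # Pick the stump with minimum weighted classification error.
    best = None
    for t in np.unique(X):
        for s in (1, -1):
            pred = s * np.sign(X - t + 1e-12)
            err = w[pred != y].sum()
            if best is None or err < best[0]:
                best = (err, t, s)
    err, t, s = best
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
    pred = s * np.sign(X - t + 1e-12)
    w *= np.exp(-alpha * y * pred)     # increase weights on mistakes
    w /= w.sum()
    stumps.append((t, s))
    alphas.append(alpha)

def strong(x):
    """Linear combination of the weak classifiers from all iterations."""
    vote = sum(a * s * np.sign(x - t + 1e-12)
               for a, (t, s) in zip(alphas, stumps))
    return np.sign(vote)

acc = (strong(X) == y).mean()          # training accuracy of the ensemble
```

No single stump can separate this data, but the weighted vote recovers most of it, which is the point of boosting weak classifiers into a strong one.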
         The Cascade of Classifiers
•   A series of classifiers are applied to every sub-window.
•   Increases speed
•   The first classifier eliminates a large number of negative sub-windows and
    passes almost all positive sub-windows (a high false-positive rate) with very
    little computation.
•   Subsequent layers eliminate additional negative sub-windows (passed by
    the first classifier) but require more computation.
•   After several stages of processing, the number of negative sub-windows has
    been reduced radically.
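A toy numerical illustration of why this is fast (the scores, stage thresholds and counts are all invented; a real cascade applies Haar-feature classifiers, not raw scores):

```python
import random

# Cascade sketch: each stage is a cheap test that rejects many negative
# sub-windows while passing (almost) all positives to the next stage.
random.seed(0)
# Fake detector scores: positive windows score high, negatives score low.
windows = ([("pos", random.uniform(0.7, 1.0)) for _ in range(10)]
           + [("neg", random.uniform(0.0, 0.6)) for _ in range(990)])

stages = [0.2, 0.4, 0.55]              # increasingly strict (and costly)
surviving = windows
for i, threshold in enumerate(stages):
    before = len(surviving)
    surviving = [win for win in surviving if win[1] > threshold]
    print(f"stage {i}: {before} -> {len(surviving)} sub-windows")

# Only sub-windows passing every stage reach the final, expensive classifier.
positives_kept = sum(1 for label, _ in surviving if label == "pos")
```

Most of the 1000 windows are discarded by the cheap early stages, so the expensive later stages run on only a small fraction of the image.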
               The Cascade of Classifiers
•   Negative samples: non-object images.
    Negative samples are taken from arbitrary
    images. These images must not contain
    object representations.

•   Positive samples: images that contain the
    object (the face, in our case). The object in
    the positive samples must be marked out for
    classifier training.
                   Recognition pipeline

•   Detect the face in the input image (144x150) and crop it (90x130)
•   Detect the facial features and create a feature vector (256x256)
•   Compare the vector against the database of terrorists to decide
    whether the person is or is not in the database

      Eye detection with Haar
• eye_haarcascade_classifier
• create a growable
   sequence of eyes
• detect the objects
• store them in
  the sequence
