					  Ensemble Tracking
           Shai Avidan
IEEE TRANSACTIONS ON PATTERN ANALYSIS
       AND MACHINE INTELLIGENCE
             February 2007
                Outline

Prior knowledge: AdaBoost
Introduction
Ensemble tracking
Implementation issues
Experiments
                      AdaBoost

Resampling for Classifier Design
  Bagging
    Use multiple versions of a training set
       • Each is created by drawing n' < n samples from D with replacement (i.e., if a sample is drawn, it is not removed from D but can be drawn again in the next sampling)
       • Each data set is used to train a different component classifier
       • The final classification decision is based on the vote of the component classifiers (see the sketch below)
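A minimal sketch of bagging under these definitions (my own illustration, not from the paper; `make_classifier` stands for any factory returning an object with `fit`/`predict`, and labels are assumed to be in {-1, +1}):

```python
import numpy as np

def bagging_train(X, y, n_components, n_prime, make_classifier):
    """Train component classifiers on bootstrap samples of size n' < n."""
    classifiers = []
    n = len(X)
    for _ in range(n_components):
        idx = np.random.randint(0, n, size=n_prime)  # draw n' samples with replacement
        clf = make_classifier()
        clf.fit(X[idx], y[idx])
        classifiers.append(clf)
    return classifiers

def bagging_predict(classifiers, x):
    # Final decision: majority vote of the component classifiers.
    votes = [clf.predict(x.reshape(1, -1))[0] for clf in classifiers]
    return 1 if sum(votes) >= 0 else -1
```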
                      AdaBoost

Boosting
  Generates complementary classifiers by training the next component classifier on the mistakes of the previous ones
     • Using a subset of the training data that is most informative given the current set of component classifiers
  AdaBoost trains a weak classifier on increasingly more difficult examples and combines the results to produce a strong classifier that is better than any of the weak classifiers.

  Weak classifier: $h_k(x) \in \{-1, +1\}$

  Strong classifier: $g(x) = \sum_{k=1}^{K_{\max}} \alpha_k h_k(x)$,  $y = \mathrm{sign}[g(x)] \in \{-1, +1\}$
                      AdaBoost

AdaBoost (adaptive boosting)
  Uses the same training set over and over: $\{(x_i, y_i),\ i = 1, \dots, n\}$,  $y_i \in \{-1, +1\}$
  Each training pattern receives a weight $W_k(i)$
     • The probability that the i-th pattern is drawn to train the k-th component classifier.
     • Uniform initialization: $W_1(i) = 1/n$
     • If a training pattern is accurately classified, $h_k(x_i) = y_i$, its chance of being used again is reduced:
          $W_{k+1}(i) = W_k(i)\, e^{-\alpha_k}$, with $\alpha_k = \frac{1}{2} \ln\left(\frac{1 - E_k}{E_k}\right)$
     • Otherwise ($h_k(x_i) \ne y_i$), its weight is increased:
          $W_{k+1}(i) = W_k(i)\, e^{+\alpha_k}$
       ($E_k$ is the training error measured on D using $W_k(i)$; see the sketch after this list.)
                      AdaBoost

Final decision:
   $g(x) = \sum_k \alpha_k h_k(x)$
                      AdaBoost

$K_{\max}$ component classifiers:
   $h_k(x) \in \{-1, +1\}$
   $g(x) = \sum_{k=1}^{K_{\max}} \alpha_k h_k(x)$,  $y = \mathrm{sign}[g(x)] \in \{-1, +1\}$
   $\{(x_i, y_i),\ i = 1, \dots, n\}$,  $y_i \in \{-1, +1\}$

Exponential loss: $J = \sum_{i=1}^{n} \exp(-y_i\, g(x_i))$

With $h_1(\cdot), \dots, h_{t-1}(\cdot)$ and $\alpha_1, \dots, \alpha_{t-1}$ fixed:
   $g_{t-1}(x) = \sum_{k=1}^{t-1} \alpha_k h_k(x)$
                      AdaBoost

At step t:
   $g_t(x) = \sum_{k=1}^{t} \alpha_k h_k(x) = g_{t-1}(x) + \alpha_t h_t(x)$

   $J_t = \sum_{i=1}^{n} \exp(-y_i\, g_t(x_i)) = \sum_{i=1}^{n} \exp(-y_i\, g_{t-1}(x_i) - \alpha_t y_i h_t(x_i)) = \sum_{i=1}^{n} w_t(i) \exp(-\alpha_t y_i h_t(x_i))$

where $w_t(i) = \exp(-y_i\, g_{t-1}(x_i))$.
                      AdaBoost

Splitting the sum into misclassified patterns (total weight $E_t$) and correctly classified ones (total weight $1 - E_t$):
   $J_t = E_t\, e^{\alpha_t} + (1 - E_t)\, e^{-\alpha_t}$

Setting the derivative to zero:
   $\frac{\partial J_t}{\partial \alpha_t} = E_t\, e^{\alpha_t} - (1 - E_t)\, e^{-\alpha_t} = 0$

   $E_t\, e^{2\alpha_t} = 1 - E_t$

   $\alpha_t = \frac{1}{2} \ln\left(\frac{1 - E_t}{E_t}\right)$
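A quick sanity check with made-up numbers: a weak classifier with weighted error $E_t = 0.2$ receives weight $\alpha_t = \frac{1}{2}\ln\frac{0.8}{0.2} = \frac{1}{2}\ln 4 \approx 0.69$, while one at chance level, $E_t = 0.5$, receives $\alpha_t = 0$ and so contributes nothing to the strong classifier.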
                      AdaBoost

Weight update:
   $w_{t+1}(i) = e^{-y_i g_t(x_i)} = e^{-y_i (g_{t-1}(x_i) + \alpha_t h_t(x_i))} = e^{-y_i g_{t-1}(x_i)}\, e^{-y_i \alpha_t h_t(x_i)} = w_t(i)\, e^{-y_i \alpha_t h_t(x_i)}$
               Introduction

Tracking is considered as a binary classification problem.
Ensemble tracking is presented as a method for training classifiers on time-varying distributions.
An ensemble of weak classifiers is trained online to distinguish between the object and the background.
               Introduction

Ensemble tracking maintains an implicit representation of both the foreground and the background, instead of describing the foreground object explicitly alone.
Ensemble tracking is not a template-based method. Template-based methods maintain the spatial integrity of the objects and are especially suited for handling rigid objects.
                 Introduction

Ensemble tracking extends traditional mean-shift tracking in a number of important directions:
  Mean-shift tracking usually works with histograms of RGB colors, because gray-scale images do not provide enough information for tracking, and high-dimensional feature spaces cannot be modeled with histograms due to exponential memory requirements.
               Introduction

This is in contrast to existing methods that
 either represent the foreground object
 using the most recent histogram or some
 ad hoc combination of the histograms of
 the first and last frames.
                 Introduction

Other advantages:
  It breaks the time-consuming training phase into a sequence of simple, easy-to-compute learning tasks that can be performed online.
  It can also integrate offline and online learning seamlessly.
  Integrating classifiers over time improves the stability of the tracker in cases of partial occlusions or illumination changes.
 In each frame, we keep the K "best" weak classifiers, discard the remaining T-K weak classifiers, train T-K new weak classifiers on the newly available data, and reconstruct the strong classifier (see the sketch below).
 The margin of the weak classifier h(x) is mapped to a confidence measure c(x) by clipping negative margins to zero and rescaling the positive margins to the range [0, 1].
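A hedged sketch of this update step (my own illustration; `train_weak` and `score` are hypothetical helpers, and T and K are as in the text above):

```python
import numpy as np

def update_ensemble(ensemble, X, y, T, K, train_weak, score):
    """Keep the K best weak classifiers, retrain the remaining T - K."""
    # Rank the existing weak classifiers by their score on the new frame.
    order = np.argsort([-score(h, X, y) for h in ensemble])
    kept = [ensemble[i] for i in order[:K]]
    # Train T - K new weak classifiers on the newly available data;
    # the strong classifier is then rebuilt from the resulting set.
    new = [train_weak(X, y) for _ in range(T - K)]
    return kept + new

def confidence(margins):
    """Map margins to confidences c(x) in [0, 1]: clip negative margins
    to zero and rescale the positive ones."""
    clipped = np.maximum(np.asarray(margins, dtype=float), 0.0)
    m = clipped.max()
    return clipped / m if m > 0 else clipped
```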
Ensemble update
Ensemble tracking
During Step 7, when choosing the K best weak classifiers, weak classifiers that do not perform much better than chance are removed.
We allow up to two existing weak classifiers to be removed this way, because a larger number might be a sign of occlusion, in which case we keep the ensemble unchanged for this frame.
          Implementation issues

 Outlier Rejection

          Implementation issues

 Multiresolution Tracking
                  Experiments

 The first version uses five weak classifiers, each working on an 11D feature vector per pixel that consists of an 8-bin local histogram of oriented gradients, calculated on a 5x5 window, as well as the pixel's R, G, and B values (sketched below).
 To improve robustness, we only count edges that are above some predefined threshold, which was set to 10 intensity values.
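A rough sketch of such a per-pixel feature (my own reading of the description; the 8 orientation bins, 5x5 window, and threshold of 10 come from the text, while the exact binning and border handling are assumptions):

```python
import numpy as np

def pixel_features(rgb, gray, row, col, n_bins=8, win=5, thresh=10.0):
    """11D feature: 8-bin local histogram of oriented gradients over a
    5x5 window plus the pixel's R, G, and B values."""
    h = win // 2
    # Take a (win+2)x(win+2) patch so gradients are valid on the 5x5 core.
    patch = gray[row - h - 1:row + h + 2, col - h - 1:col + h + 2].astype(float)
    gy, gx = np.gradient(patch)
    mag = np.hypot(gx, gy)[1:-1, 1:-1]
    ang = np.arctan2(gy, gx)[1:-1, 1:-1]       # orientations in [-pi, pi]
    keep = mag > thresh                        # only count strong edges
    hist, _ = np.histogram(ang[keep], bins=n_bins,
                           range=(-np.pi, np.pi), weights=mag[keep])
    return np.concatenate([hist, rgb[row, col].astype(float)])
```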
                    Experiments

 We found that the original feature space was not stable enough and used a nonlinear version of that feature space instead:
    $[x_i, x_i^2, x_i^3]$

 We use only three, instead of five, weak classifiers.
 Three levels of the pyramid are used.
 In each frame, we drop one weak classifier and add a newly trained weak classifier.
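The cubic expansion itself is a one-liner (a sketch; `x` is the per-pixel feature vector):

```python
import numpy as np

def nonlinear_features(x):
    # Elementwise [x_i, x_i^2, x_i^3]: triples the feature dimension.
    x = np.asarray(x, dtype=float)
    return np.concatenate([x, x**2, x**3])
```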
               Experiments

We allow the tracker to drop up to two weak classifiers per frame, because dropping more than that could be a sign of occlusion, and in such a case we do not update the ensemble.
                           Experiments
Results on Color Sequences: a pedestrian crossing the street.
                         Experiments
Results on Color Sequences: tracking a couple walking, filmed with a hand-held camera.
                           Experiments
Results on Color Sequences: tracking a face exhibiting out-of-plane rotations.
                          Experiments
Results on Color Sequences: tracking a red car undergoing out-of-plane rotations and partial occlusions. With the 11D feature vector at a single scale, an ensemble of three classifiers was enough to obtain robust and stable tracking.
                        Experiments
Analyze the importance of the update scheme for tracking.
                        Experiments
Analyze how often the weak classifiers are updated.
                        Experiments
Analyze how the weak classifiers' weights change over time.
                        Experiments
Analyze how this method compares with a standard AdaBoost classifier that trains all its weak classifiers on a given frame.
                        Experiments
Results on a gray-scale sequence.
                        Experiments
Results on an IR sequence.
                   Experiments
 Handling long-period occlusion (see the sketch below)
  The classification rate is the fraction of pixels that were correctly classified.
  As long as the classification rate is high, tracking proceeds unchanged.
  When the classification rate drops (< 0.5), switch to prediction mode.
  Once occlusion is detected, we start sampling possible locations where the object might appear, according to the particle filter.
  At each such location, compute the classification score. If it is above a threshold (0.7), tracking resumes.
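A hedged sketch of this switching logic (illustrative; `classification_rate` and the particle filter interface are assumptions, while the 0.5 and 0.7 thresholds follow the text):

```python
def track_with_occlusion(frame, rect, strong_classifier, particle_filter,
                         classification_rate, lo=0.5, hi=0.7):
    """Switch between tracking and prediction based on the fraction of
    correctly classified pixels inside the current rectangle."""
    rate = classification_rate(strong_classifier, frame, rect)
    if rate >= lo:
        return rect, "tracking"      # rate is high: tracking goes unchanged
    # Occlusion suspected: sample candidate locations from the particle
    # filter and resume tracking wherever the score clears the threshold.
    for candidate in particle_filter.sample():
        if classification_rate(strong_classifier, frame, candidate) > hi:
            return candidate, "resumed"
    particle_filter.predict()        # stay in prediction mode this frame
    return rect, "predicting"
```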
                        Experiments
Handling occlusions:
