
Advanced Computer Vision
Introduction
Lecture 02
Roger S. Gaborski

             Corner Detection: Basic Idea
     • We should easily recognize the point by
       looking through a small window
     • Shifting a window in any direction should give
       a large change in intensity




[Figure: three example windows]
“flat” region: no change in all directions
“edge”: no change along the edge direction
“corner”: significant change in all directions
Source: A. Efros
   Corner Detection: Mathematics
   Change in appearance for the shift [u,v]:

E(u,v) \;=\; \sum_{x,y} w(x,y)\,\bigl[ I(x+u,\, y+v) - I(x,y) \bigr]^2

where w(x,y) is the window function, I(x+u, y+v) the shifted intensity, and I(x,y) the original intensity.




Window function w(x,y): either 1 inside the window and 0 outside (a box window), or a Gaussian.
                                                                 Source: R. Szeliski
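As a concrete illustration, E(u,v) can be evaluated directly for a box window centred on a single pixel. This is a minimal sketch, not the lecture code; the test image, window centre, and half-width are arbitrary choices.

    im = double(imread('cameraman.tif'));      % any grayscale test image
    r0 = 60; c0 = 60; hw = 7;                  % window centre and half-width (assumed values)
    shifts = -5:5;                             % shifts (u,v) to evaluate
    E = zeros(numel(shifts));
    win0 = im(r0-hw:r0+hw, c0-hw:c0+hw);       % reference window, I(x,y)
    for i = 1:numel(shifts)                    % v = row shift
        for j = 1:numel(shifts)                % u = column shift
            winS = im(r0+shifts(i)-hw : r0+shifts(i)+hw, ...
                      c0+shifts(j)-hw : c0+shifts(j)+hw);   % shifted window, I(x+u, y+v)
            E(i,j) = sum((winS(:) - win0(:)).^2);           % box window: w(x,y) = 1
        end
    end
    figure, surf(shifts, shifts, E), xlabel('u'), ylabel('v'), title('E(u,v)')

For a flat region the surface stays low everywhere, for an edge it stays low along the edge direction, and for a corner it rises in every direction, matching the three cases on the previous slide.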
Corner Detection: Mathematics
Change in appearance for the shift [u,v]:

E(u,v) \;=\; \sum_{x,y} w(x,y)\,\bigl[ I(x+u,\, y+v) - I(x,y) \bigr]^2


[Figure: an example image I(x,y) and the corresponding error surface E(u,v), with the values E(0,0) and E(3,2) marked.]
                                               Source: R. Szeliski
      Corner Detection: Mathematics
      Change in appearance for the shift [u,v]:

E(u,v) \;=\; \sum_{x,y} w(x,y)\,\bigl[ I(x+u,\, y+v) - I(x,y) \bigr]^2


    We want to find out how this function behaves for
    small shifts
   Second-order Taylor expansion of E(u,v) about (0,0)
   (local quadratic approximation):
E(u,v) \;\approx\; E(0,0) + [\,u \;\; v\,]\begin{bmatrix} E_u(0,0) \\ E_v(0,0) \end{bmatrix} + \tfrac{1}{2}\,[\,u \;\; v\,]\begin{bmatrix} E_{uu}(0,0) & E_{uv}(0,0) \\ E_{uv}(0,0) & E_{vv}(0,0) \end{bmatrix}\begin{bmatrix} u \\ v \end{bmatrix}
Source: R. Szeliski
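The link between this expansion and the quadratic form on the next slide is the first-order approximation of the shifted intensity; written out (a standard step, added here for completeness):

    I(x+u,\, y+v) \approx I(x,y) + I_x(x,y)\,u + I_y(x,y)\,v

    \Rightarrow\quad E(u,v) \approx \sum_{x,y} w(x,y)\,\bigl( I_x u + I_y v \bigr)^2 \;=\; [\,u \;\; v\,]\, M \begin{bmatrix} u \\ v \end{bmatrix}

since E(0,0) = 0 and the first derivatives E_u(0,0), E_v(0,0) vanish.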
Corner Detection: Mathematics
The quadratic approximation simplifies to


E(u,v) \;\approx\; [\,u \;\; v\,]\, M \begin{bmatrix} u \\ v \end{bmatrix}
where M is a second moment matrix computed from image
derivatives:

M \;=\; \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}
                                                     Source: R. Szeliski
Interpreting the second moment matrix
 First, consider the axis-aligned case
 (gradients are either horizontal or vertical)


M \;=\; \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} \;=\; \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}
λ1 and λ2 are proportional to the principal curvatures of the autocorrelation function E(u,v).
If either eigenvalue is close to 0, then this is not a corner, so look for locations where both eigenvalues are large.
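A quick numerical check of this interpretation (a sketch; the matrix entries are made-up values and the threshold is a placeholder, not a value from the lecture):

    M = [10  2;                        % example second moment matrix (made-up values)
          2  8];
    lambda = eig(M);                   % eigenvalues = principal curvatures of E(u,v)
    thresh = 1;                        % placeholder threshold
    isCorner = min(lambda) > thresh;   % a corner only if both eigenvalues are large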
      Interpreting the eigenvalues
Classification of image points using eigenvalues
of M:
                         2       “Edge”
                                  2 >> 1        “Corner”
                                                  1 and 2 are large,
                                                  1 ~ 2 ;
                                                  E increases in all
                                                  directions



  1 and 2 are small;
  E is almost constant           “Flat”                          “Edge”
  in all directions              region                          1 >> 2

                              Roger S. Gaborski                          1   8
Defining the Corner Response Function, R
Recall, for A = [a b; c d]:
det(A) = ad − bc,  trace(A) = a + d

Then:

R \;=\; \det(M) - \alpha\,\operatorname{trace}(M)^2 \;=\; \lambda_1\lambda_2 - \alpha\,(\lambda_1 + \lambda_2)^2

where α ≈ 0.04 to 0.06
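In MATLAB this response can be computed directly for a single 2x2 matrix (a sketch; the matrix entries below are example values, not from the lecture):

    M = [10 2; 2 8];                 % any 2x2 second moment matrix (example values)
    alpha = 0.04;                    % from the range above
    R = det(M) - alpha*trace(M)^2;   % = lambda1*lambda2 - alpha*(lambda1 + lambda2)^2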
 Corner Response Function
R \;=\; \det(M) - \alpha\,\operatorname{trace}(M)^2 \;=\; \lambda_1\lambda_2 - \alpha\,(\lambda_1 + \lambda_2)^2


[Figure: the (λ1, λ2) plane labelled by the sign of R]
• R > 0: corner
• R < 0: edge (one eigenvalue much larger than the other)
• |R| small: “flat” region

    Harris Detector Algorithm
• Compute Gaussian Derivatives at each
  point
• Compute Second Moment Matrix M
• Compute Corner Response Function
• Threshold R
• Find Local Maxima


         Harris Corner Detector
• Reference: C.G. Harris and M.J. Stephens “A
  Combined Corner and Edge Detector”
• Code inspired by Peter Kovesi
  – Derivative Masks: dx = [-1, 0, 1; -1, 0, 1; -1, 0, 1]
  – dy = dx'                                  % transpose gives the vertical mask
  – Image Derivatives:
     • Ix = imfilter(im, dx, 'conv', 'same');
     • Iy = imfilter(im, dy, 'conv', 'same');
  – Gaussian Filter:
     • g = fspecial('gaussian', 6*sigma, sigma);   % smoothing window, size about 6*sigma

– Smoothed squared image derivatives:
   • Ix2  = imfilter(Ix.^2,  g, 'conv', 'same');
   • Iy2  = imfilter(Iy.^2,  g, 'conv', 'same');
   • IxIy = imfilter(Ix.*Iy, g, 'conv', 'same');
– Corner measure:
   c = (Ix2.*Iy2 - IxIy.^2)./(Ix2 + Iy2 + eps).^2;   % eps avoids division by zero in flat regions
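Note that c above is a normalized variant of the cornerness measure. The R = det(M) − α·trace(M)² response defined on the earlier slides can be computed from the same smoothed images Ix2, Iy2 and IxIy, for example (a sketch; k plays the role of α):

    k = 0.04;                                      % alpha from the earlier slide
    R = (Ix2.*Iy2 - IxIy.^2) - k*(Ix2 + Iy2).^2;   % det(M) - alpha*trace(M)^2 at every pixel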




Non-maximal Suppression and Threshold
• Extract local maxima using grayscale morphological dilation
     • sze = 2*radius + 1;               % radius is a parameter; 'sze' avoids shadowing the built-in size()
     • mx = imdilate(c, ones(sze));      % grayscale dilation: local maximum in each neighborhood
     • cc = (c == mx) & (c > thresh);    % keep local maxima above the threshold
     • [r, c] = find(cc);                % row, column coordinates (note: this reuses c)
     • figure, imagesc(im), colormap(gray)
     • hold on
     • plot(c, r, 'rs'), title('Corners')

     100x100 Grid
background =1, lines = 1




Image rotated 45 Degrees




Image rotated 45 Degrees
   same parameters




Porsche Image




Harris Points




1983 Porsche




   HW#2 – Due Tuesday, noon
• Work in teams of 2 or 3
• Write a Harris Detector Function (do not simply
  copy one from web, write your own)
• Experiment with ‘grid image’ and Flower2 image
  and two ‘interesting’ images of your choice
• Goals:
  – Find all intersections on the grid image
  – Detect all petal end points on the flower image, with better results than the class lecture slide
• Email: (1) a write-up including result images and observations, and (2) your MATLAB code

            Object Recognition
• Issues:
  – Viewpoint
  – Scale
  – Deformable vs. rigid
  – Clutter
  – Occlusion
  – Intra-class variability

                Current Work
• Fix:
  – Viewpoint
  – Scale
  – Rigid
• Explore effects of:
  – Intra-class variability
  – Clutter
  – Occlusion

                   Goal
• Locate all instances of automobiles in a
  cluttered scene




        Acknowledgements
• Students:
  – Tim Lebo
  – Dan Clark


• Images used in presentation:
  – ETHZ Database, UIUC Database



Object Recognition Approaches
• For specific object class:
  – Holistic
     • Model whole object
  – Parts based
     • Simple parts
     • Geometric relationship information




Training Images and Segmentation




         Implicit Shape Model
• Patches – local appearance prototypes
• Spatial relationship – where the patch can be
  found on the object
• For a given class w:
     ISM(w) = (Iw, Pw)
     where Iw is the codebook containing the patches
     and Pw is the probability distribution that describes
     where the patch is found on the object
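One possible way to hold ISM(w) in MATLAB, as a data-structure sketch only; the field names and patch sizes below are assumptions, not the implementation used in this work:

    ism.class    = 'car';
    ism.codebook = {rand(25), rand(25)};         % Iw: representative grayscale appearance patches
    ism.offsets  = {[12 -3; 10 -2], [-8 15]};    % Pw: observed (row,col) offsets from patch to object centre

Each codebook entry keeps the offsets observed during training, which together approximate the spatial distribution Pw.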


• How do we find ‘interesting’ patches?
        Harris Point Operator
• what is it?




Harris Points




        Segmented Training Mask




The segmentation mask ensures that only patches containing valid car regions are selected.

A corresponding segmentation patch is also extracted.

Selected Patches




      How is spatial information
           represented?
• Estimate the center of the object using the
  centroid of the segmentation mask
• Displacement between:
  – Center of the patch
  – Centroid of the segmentation mask (see the sketch below)
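A minimal sketch of this computation (the mask and patch centre below are toy values; regionprops is from the Image Processing Toolbox):

    mask = false(100); mask(30:70, 20:90) = true;   % toy binary segmentation mask
    stats  = regionprops(mask, 'Centroid');         % centroid of the mask = estimated object centre
    objCtr = stats(1).Centroid;                     % [x y]
    patchCtr = [25 40];                             % hypothetical patch centre, [x y]
    offset = objCtr - patchCtr;                     % displacement stored with the patch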




Individual Patch and Displacement
            Information




Typical Training Example




Typical Training Example




Extracted Training Patches




            Cluster Patches
• Many patches will be visually similar
• Normalized grayscale correlation (NGC) is used to cluster patches (see the sketch after this list)
  – All patches within a certain neighborhood defined by the NGC are grouped together
  – The representative patch is the mean of the patches in the cluster
  – The geometric information for each patch in the cluster is assigned to the representative patch
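A sketch of the grouping test for one pair of patches (the patches are toy data and the 0.7 threshold is an assumption, not a value from the lecture):

    p = rand(25); q = p + 0.05*randn(25);   % two toy grayscale patches
    ngc = corr2(p, q);                      % normalized grayscale correlation, in [-1, 1]
    if ngc > 0.7                            % assumed clustering threshold
        rep = (p + q)/2;                    % representative patch = mean of the cluster members
    end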
Patches




Wheel Patch Example




                        Clusters




Opportunity for better clustering method

Clusters




           Object Detection
• Harris point operator to find interesting
  points
• Extract patches
• Match extracted patches with model
  patches
• Spatial information predicts center of
  object
• Create a voting space (see the sketch below)
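A sketch of the voting step (all values below are made up for illustration): each matched patch adds its stored offset to its image location and votes for the predicted object centre.

    im      = zeros(100);                         % toy image, fixes the size of the voting space
    matches = [20 30  5 -2;                       % [row col rowOffset colOffset] per matched patch
               22 31  3  0];                      % (made-up values)
    votes   = zeros(size(im));                    % voting (Hough) space
    for k = 1:size(matches,1)
        ctr = round(matches(k,1:2) + matches(k,3:4));    % predicted object centre
        if all(ctr >= 1 & ctr <= size(votes))
            votes(ctr(1), ctr(2)) = votes(ctr(1), ctr(2)) + 1;
        end
    end
    % local maxima of 'votes' are the hypothesis candidates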
Ideal Voting Space Example




                   Multiple Votes




Multiple geometric interpretations

Resolving False Detections




Localization: Find Corners




Localization: Model Matching




Localization: Find Corners




Model Matching




Spatial Activation (Hough Space)
9000 different locations




                         Hypothesis Candidates
16 candidate locations




Hypothesis Candidates




                        References
SEE RESOURCES ON COURSE WEB PAGE:


Timothy Lebo and Roger Gaborski, “A Shape Model with Coactivation Networks for Recognition and Segmentation,” Eighth International Conference on Signal and Image Processing, Honolulu, HI, August 2006.

Timothy Lebo, “Guiding Object Recognition: A Shape Model with Co-activation Networks,” MS Thesis, RIT, 2005.

Daniel Clark, “Object Detection and Tracking using a Parts Based Approach,” MS Thesis, RIT, 2005.

                        References

Bastian Leibe, Ales Leonardis, and Bernt Schiele, “Combined Object Categorization and Segmentation with an Implicit Shape Model,” ECCV 2004 Workshop on Statistical Learning in Computer Vision, May 2004.

Shivani Agarwal, Aatif Awan, and Dan Roth, “Learning to Detect Objects in Images via a Sparse, Part-Based Representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(11):1475–1490, 2004.




Voting Space




Model Patches Selected




True Object Patches




Identified Objects





								