Volume 1 No. 1, April 2011
ARPN Journal of Systems and Software
©2010-11 AJSS Journal. All rights reserved
http://www.scientific-journals.org



Performance Evaluation of Shape Analysis Techniques

S. Selvarajah
Department of Physical Science,
Vavuniya Campus of the University of Jaffna, Vavuniya, Sri Lanka
shakeelas@mail.vau.jfn.ac.lk

S.R. Kodituwakku
Department of Statistics & Computer Science,
University of Peradeniya, Sri Lanka
salukak@pdn.ac.lk


ABSTRACT

Shape is one of the important features used in Content-Based Image Retrieval (CBIR) systems. The shape of an object is a binary image representing that object, and shape descriptors are broadly categorized into two groups: contour-based and region-based. This paper presents an experimental comparison of a number of different shape features for CBIR. The objective of this research is to determine which shape feature, or combination of features, is most efficient in representing images. We first present the comparison of individual shape features and then the comparison of combined shape features. For the experiments, publicly available image databases are used and the retrieval performance of the features is analyzed in detail. The article concludes by stating which shape features perform well for CBIR.

Keywords: colour moments, directional edges, non-directional edges


1. INTRODUCTION

A basic requirement of an image database is to perform content-based searches for images. Shape is one of the most powerful image descriptors [5, 10, 12, 22]. Shape processing plays an important role in several applications, particularly in computer vision, such as object recognition, image retrieval and the processing of pictorial information. Shape is probably the most important property that is perceived about objects: it allows more facts about an object to be predicted than other features such as color. Therefore, shape recognition is crucial for object recognition. In some applications, it may be used as the only feature to recognize images; logo recognition is one such example [1, 2, 10].

Shape analysis methods analyze the objects in an image. The shape of an object is a binary image representing that object. Shape descriptors are broadly categorized into two groups: contour-based and region-based. Figure 1 shows the available methods that fall into these two categories. Contour-based descriptors concentrate on the boundary lines, while region-based descriptors consider the whole area of the object.

This paper presents the comparison of individual shape features and combined shape features. For the experiments, publicly available image databases are used and the retrieval performance of the features is analyzed in detail. The article concludes by stating which shape features perform well for CBIR.

2. METHODS AND MATERIALS

For retrieving images based on the shape feature, some simple geometric features can be used [1]. This section summarizes such simple shape descriptors and the other techniques used.

2.1. Materials

In order to analyze the performance of shape features, the following are considered.
2.1.1. Simple Shape Descriptors

Area, circularity, eccentricity, major axis orientation, Euler number and perimeter are the common shape parameters considered.

2.1.2. Shape Signature

A shape signature maps a two-dimensional shape to a one-dimensional function derived from the shape boundary points. Centroidal profile, complex coordinates, centroidal distance, tangent angle, cumulative angle, curvature, area and chord-length are the shape signatures analyzed [2, 4]. These are considered because they are invariant to translation and scale.

2.1.3. Moments

For both the contour and the region of a shape, moment theory, an integrated theory from mathematics and physics, can be used to analyze the object. Boundary moments can be used to reduce the dimension of the boundary representation. Assuming that the shape boundary has been represented as a shape signature z(i), the rth moment $m_r$ and the rth central moment $\mu_r$ [3, 4, 5, 6, 8] can be estimated as

$m_r = \frac{1}{N}\sum_{i=1}^{N}[z(i)]^r, \qquad \mu_r = \frac{1}{N}\sum_{i=1}^{N}[z(i) - m_1]^r$

where N is the number of boundary points. The normalized moments

$\overline{m}_r = \frac{m_r}{(\mu_2)^{r/2}}, \qquad \overline{\mu}_r = \frac{\mu_r}{(\mu_2)^{r/2}}$

are invariant to translation, rotation and scaling. Less noise-sensitive shape descriptors can be obtained from

$F_1 = \frac{(\mu_2)^{1/2}}{m_1}, \qquad F_2 = \frac{\mu_3}{(\mu_2)^{3/2}}, \qquad F_3 = \frac{\mu_4}{(\mu_2)^2}$

Region moments can be classified into two categories: invariant moments [3] and Zernike moments [6].

M. K. Hu's traditional invariant moments

The two-dimensional traditional geometric moments of order p+q of an intensity function f(x, y) are defined as

$m_{pq} = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} x^p y^q f(x, y)\,dx\,dy, \qquad p, q = 0, 1, 2, \ldots \qquad (1)$

These are not invariant. The intensity function gives the intensity of the point (x, y) in image space. In the case of a binary image, f(x, y) takes the value 1 when the pixel (x, y) represents the object or noise, and 0 when it is part of the background.

When the geometric moments $m_{pq}$ given in equation (1) are referred to the object centroid $(x_c, y_c)$, they become the central moments, given by

$\mu_{pq} = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} (x - x_c)^p (y - y_c)^q f(x, y)\,dx\,dy, \qquad x_c = \frac{m_{10}}{m_{00}}, \quad y_c = \frac{m_{01}}{m_{00}} \qquad (2)$

In practical applications, equations (1) and (2) are discretized for binary images according to the following formulae:

$m_{pq} = \sum_x \sum_y x^p y^q f(x, y), \qquad \mu_{pq} = \sum_x \sum_y (x - x_c)^p (y - y_c)^q f(x, y)$

where $m_{pq}$ and $\mu_{pq}$ are computed by sweeping the image space. The total area of the object is given by $m_{00}$, and the central moments $\mu_{pq}$ are invariant to translation. They may be normalized to become invariant to area scaling as well. The set of seven lowest-order rotation, translation and scale invariants up to the third order is given by:

$\phi_1 = \eta_{20} + \eta_{02}$

$\phi_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2$

$\phi_3 = (\eta_{30} - 3\eta_{12})^2 + (\eta_{03} - 3\eta_{21})^2$

$\phi_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{03} + \eta_{21})^2$

$\phi_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{03} + \eta_{21})^2]$

$\phi_6 = (\eta_{20} - \eta_{02})[(\eta_{30} + \eta_{12})^2 - (\eta_{03} + \eta_{21})^2] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{03} + \eta_{21})$

$\phi_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{03} + \eta_{21})^2] + (3\eta_{12} - \eta_{30})(\eta_{03} + \eta_{21})[3(\eta_{30} + \eta_{12})^2 - (\eta_{03} + \eta_{21})^2]$
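To make the discretized moment formulae concrete, the following minimal sketch (ours, not from the paper) computes $m_{pq}$, $\mu_{pq}$ and the first four Hu invariants for a binary mask with NumPy; it assumes the standard normalization $\eta_{pq} = \mu_{pq}/\mu_{00}^{(p+q)/2+1}$, which the text uses but does not spell out:

```python
import numpy as np

def geometric_moment(img, p, q):
    """Discretized m_pq = sum_x sum_y x^p y^q f(x, y) for a binary mask."""
    y, x = np.nonzero(img)                 # coordinates of pixels with f(x, y) = 1
    return np.sum((x ** p) * (y ** q))

def central_moment(img, p, q):
    """Discretized mu_pq, referred to the object centroid (x_c, y_c)."""
    y, x = np.nonzero(img)
    m00 = x.size                           # m_00 equals the object area
    xc, yc = x.sum() / m00, y.sum() / m00  # x_c = m_10/m_00, y_c = m_01/m_00
    return np.sum(((x - xc) ** p) * ((y - yc) ** q))

def hu_invariants_1_to_4(img):
    """phi_1 .. phi_4 from normalized central moments (eta normalization assumed)."""
    mu00 = central_moment(img, 0, 0)
    eta = lambda p, q: central_moment(img, p, q) / mu00 ** ((p + q) / 2 + 1)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    phi3 = (eta(3, 0) - 3 * eta(1, 2)) ** 2 + (eta(0, 3) - 3 * eta(2, 1)) ** 2
    phi4 = (eta(3, 0) + eta(1, 2)) ** 2 + (eta(0, 3) + eta(2, 1)) ** 2
    return phi1, phi2, phi3, phi4
```

OpenCV users can obtain the same quantities, including $\phi_5$ through $\phi_7$, with cv2.moments and cv2.HuMoments.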



Zernike moment invariants expressed in terms of the usual moments

The second, third and fourth order Zernike moment invariants are defined by the following formulae.

Second order:

$S_1 = A_{20} = \frac{3}{\pi}[2(\mu_{20} + \mu_{02}) - 1]$

$S_2 = |A_{22}|^2 = \frac{9}{\pi^2}[(\mu_{20} - \mu_{02})^2 + 4\mu_{11}^2]$

Third order:

$S_3 = |A_{33}|^2 = \frac{16}{\pi^2}[(\mu_{03} - 3\mu_{21})^2 + (\mu_{30} - 3\mu_{12})^2]$

$S_4 = |A_{31}|^2 = \frac{144}{\pi^2}[(\mu_{03} + \mu_{21})^2 + (\mu_{30} + \mu_{12})^2]$

$S_5 = A_{33}(A_{31})^3 + \text{c.c.} = \frac{13824}{\pi^4}\{(\mu_{03} - 3\mu_{21})(\mu_{03} + \mu_{21})[(\mu_{03} + \mu_{21})^2 - 3(\mu_{30} + \mu_{12})^2] - (\mu_{30} - 3\mu_{12})(\mu_{30} + \mu_{12})[(\mu_{30} + \mu_{12})^2 - 3(\mu_{03} + \mu_{21})^2]\}$

$S_6 = (A_{31})^2 A_{22} + \text{c.c.} = \frac{864}{\pi^3}\{(\mu_{02} - \mu_{20})[(\mu_{03} + \mu_{21})^2 - (\mu_{30} + \mu_{12})^2] + 4\mu_{11}(\mu_{03} + \mu_{21})(\mu_{30} + \mu_{12})\}$

Fourth order:

$S_7 = |A_{44}|^2 = \frac{25}{\pi^2}[(\mu_{40} - 6\mu_{22} + \mu_{04})^2 + 16(\mu_{31} - \mu_{13})^2]$

$S_8 = |A_{42}|^2 = \frac{25}{\pi^2}\{[4(\mu_{04} - \mu_{40}) + 3(\mu_{20} - \mu_{02})]^2 + 4[4(\mu_{31} + \mu_{13}) - 3\mu_{11}]^2\}$

$S_9 = A_{40} = \frac{5}{\pi}[6(\mu_{40} + 2\mu_{22} + \mu_{04}) - 6(\mu_{20} + \mu_{02}) + 1]$

$S_{10} = A_{44}(A_{42})^2 + \text{c.c.} = \frac{250}{\pi^3}\{(\mu_{40} - 6\mu_{22} + \mu_{04})([4(\mu_{04} - \mu_{40}) + 3(\mu_{20} - \mu_{02})]^2 - 4[4(\mu_{31} + \mu_{13}) - 3\mu_{11}]^2) - 16[4(\mu_{04} - \mu_{40}) + 3(\mu_{20} - \mu_{02})][4(\mu_{31} + \mu_{13}) - 3\mu_{11}](\mu_{31} - \mu_{13})\}$

$S_{11} = A_{42} A_{22} + \text{c.c.} = \frac{30}{\pi^2}\{[4(\mu_{04} - \mu_{40}) + 3(\mu_{20} - \mu_{02})](\mu_{02} - \mu_{20}) + 4\mu_{11}[4(\mu_{31} + \mu_{13}) - 3\mu_{11}]\}$

2.1.4. Scale Space Method

The noise sensitivity and boundary variations that trouble most spatial-domain shape methods inspire the use of scale space analysis [13, 14, 15, 16, 17]. The scale space representation of a shape is created by tracking the positions of the inflection points of the shape boundary as it is filtered by low-pass Gaussian filters of variable widths.

2.1.5. Chain Code Representation

A chain code represents an object by a sequence of unit-size segments with given orientations. In this representation, an arbitrary curve is represented by a sequence of small vectors of unit length drawn from a limited set of possible directions; the approach is therefore called the unit-vector method. In the chain code representation scheme, the digital boundary of an image is superimposed with a grid, the boundary points are approximated to the nearest grid points, and a sampled image is obtained. From a selected starting point, a chain code can be generated using 4-directional or 8-directional codes. An N-directional chain code (N > 8, N = 2^k) is also possible; it is called the general chain code [11].

2.1.6. Chain Code Histogram (CCH)

The CCH is independent of the choice of starting point. It is a translation and scale invariant shape descriptor, and it can be made invariant to rotations of 90 degrees irrespective of the number of directions in the chain code.

This shape descriptor is based on the Freeman chain code, an ordered sequence of n links $\{c_i \mid i = 1, 2, \ldots, n\}$, where $c_i$ is a vector connecting neighboring edge pixels. The directions of the $c_i$ are coded with integer values k = 0, 1, ..., K−1 (K is the number of directions) in a counterclockwise sense starting from the direction of the positive x-axis. The CCH is calculated from the chain code of a contour: it is the discrete function $p(k) = n_k / n$, where $n_k$ is the number of chain code values equal to k and n is the number of links in the chain code [12, 21].
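As an illustration of the $p(k) = n_k/n$ definition above, the sketch below (ours, not the authors' code) builds an 8-directional Freeman chain code and its histogram, assuming the boundary is already available as an ordered, 8-connected closed sequence of (x, y) points:

```python
import numpy as np

# 8-directional Freeman codes, counterclockwise from the positive x-axis
# (mathematical axes; in image coordinates the y direction is mirrored).
FREEMAN_8 = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
             (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(boundary):
    """Freeman chain code of a closed boundary given as ordered (x, y) points."""
    n = len(boundary)
    codes = []
    for i in range(n):
        x0, y0 = boundary[i]
        x1, y1 = boundary[(i + 1) % n]   # wrap around: the contour is closed
        codes.append(FREEMAN_8[(x1 - x0, y1 - y0)])
    return codes

def cch(codes, K=8):
    """CCH p(k) = n_k / n: the fraction of links having direction k."""
    counts = np.bincount(codes, minlength=K)
    return counts / len(codes)
```

Because p(k) is normalized by the number of links n, enlarging the boundary leaves the histogram roughly unchanged, which is the scale invariance claimed above.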



2.1.7. Fourier Descriptors

Shape signatures are very sensitive to noise: any small change in the boundary leads to a large error in matching. Fourier Descriptors [4, 7] are used to overcome this problem. The Fourier transform of the signature s(t) is defined as

$u_n = \frac{1}{N}\sum_{t=1}^{N} s(t)\exp\left(\frac{-j 2\pi n t}{N}\right)$

The $u_n$, n = 1, 2, ..., N are called Fourier Descriptors and are denoted $FD_n$.

2.1.8. Grid Method

The grid shape descriptor was proposed by Lu and Sajjanhar [13, 20]. A grid of cells is overlaid on a shape, and the grid is then scanned from left to right and top to bottom. The result of this process is a bitmap: cells covered by the shape are assigned 1 and those not covered are assigned 0, so the shape can be represented as a binary feature vector. The binary Hamming distance is used to measure the similarity between two shapes.

2.1.9. Hausdorff Distance

The Hausdorff distance is a measure defined between two point sets representing a model and an image. This distance can be used to determine the degree of resemblance between two objects that are superimposed on one another. The key advantages of this approach are: (i) relative insensitivity to small perturbations of the image, (ii) simplicity and speed of computation, and (iii) naturally allowing portions of one shape to be compared with another [19].

Given two finite point sets $A = \{a_1, \ldots, a_p\}$ and $B = \{b_1, \ldots, b_q\}$, the Hausdorff distance is defined as

$H(A, B) = \max(h(A, B), h(B, A)), \qquad h(A, B) = \max_{a \in A} \min_{b \in B} \|a - b\|$

where $\|a - b\|$ is any metric between the points a and b. For simplicity, $\|a - b\| = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$, the Euclidean distance between $a(x_1, y_1)$ and $b(x_2, y_2)$.

The function h(A, B) is called the directed Hausdorff distance from A to B. It identifies the point a ∈ A that is farthest from any point of B and measures the distance from a to its nearest neighbor in B. That is, if h(A, B) = d, then each point of A must be within distance d of some point of B, and there is also some point of A that is exactly distance d from the nearest point of B (the most mismatched point). The Hausdorff distance H(A, B) is the maximum of h(A, B) and h(B, A).

2.1.10. Similarity Measurements

Eight similarity measurements have been proposed [9]. In this work, we use the fixed threshold, Sum-of-Squared-Differences (SSD) and Sum-of-Absolute-Differences (SAD) methods:

$SSD(f_q, f_t) = \sum_{i=0}^{n-1}(f_q[i] - f_t[i])^2$

$SAD(f_q, f_t) = \sum_{i=0}^{n-1}|f_q[i] - f_t[i]|$

where $f_q$ and $f_t$ represent the query feature vector and a database feature vector, and n is the number of features in each vector.

2.2. Methodology

Since the spatial distribution of gray values defines the qualities of texture, binary images are used for experimentation. In order to evaluate the efficiency of the shape features, the features defined in the previous section are used. The sum-of-squared-differences (SSD) and sum-of-absolute-differences (SAD) are used to measure the similarity between the query image and the database images. The MPEG7_CE-Shape-1_Part_B image database, which contains 1403 images in GIF format, is used for experimentation.

For each image in the database, shape features such as area, CCH, Hausdorff distance and so forth are computed and stored in a database. The absolute differences between the feature vector values of the query image and those of the database images are then calculated, and SAD and SSD are used to identify the relevant images. The average precision of the retrieved images for each feature is computed and recorded for analysis and comparison purposes.

In image retrieval based on simple shape descriptors, area, circularity, eccentricity, major axis orientation, Euler number and perimeter are calculated for the query image as well as the database images. The relevant images are then retrieved by combining all these simple shape parameters.
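To make the matching step concrete, here is a small sketch of SSD/SAD-based ranking over stored feature vectors; the function names and the dictionary layout are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def ssd(f_q, f_t):
    """Sum-of-Squared-Differences between query and target feature vectors."""
    d = np.asarray(f_q, dtype=float) - np.asarray(f_t, dtype=float)
    return float(np.sum(d ** 2))

def sad(f_q, f_t):
    """Sum-of-Absolute-Differences between query and target feature vectors."""
    d = np.asarray(f_q, dtype=float) - np.asarray(f_t, dtype=float)
    return float(np.sum(np.abs(d)))

def rank_database(f_q, features, measure=ssd, top=50):
    """Rank database images by ascending distance to the query and keep the
    top-ranked ones, mirroring the top-50 evaluation used in the experiments.
    `features` maps an image name to its precomputed feature vector."""
    scored = sorted(features.items(), key=lambda item: measure(f_q, item[1]))
    return [name for name, _ in scored[:top]]
```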
To calculate a shape signature, the first step is to extract the shape boundary points. The complex-coordinates signature is then calculated using

$z(t) = [x(t) - x_c] + i[y(t) - y_c]$

where

$x_c = \frac{1}{N}\sum_{t=1}^{N} x(t), \qquad y_c = \frac{1}{N}\sum_{t=1}^{N} y(t)$

The centroid-distance shape signature is calculated by

$r(t) = \sqrt{[x(t) - x_c]^2 + [y(t) - y_c]^2}$

The curvature function is calculated using

$K(t) = \theta(t) - \theta(t-1), \qquad \theta(t) = \tan^{-1}\left(\frac{y(t) - y(t+w)}{x(t) - x(t+w)}\right)$

where w is the jumping step in selecting the next pixel; in our case we selected w = 1.

The cumulative angular function was calculated by

$\phi(t) = [\theta(t) - \theta(0)] \bmod (2\pi), \qquad \psi(t) = \phi\left(\frac{Lt}{2\pi}\right) + t$

where L is the perimeter of the shape boundary.

In the CCH-based feature description method, a morphological closing operation (dilation followed by erosion) is applied to the binary image. The contour image is then obtained by subtracting the closed image from the dilated image. After that, a Gaussian filter is applied and the CCH of the smoothed image is calculated as described in section 2.1.6.

From the extracted shape boundary coordinates, the shape signatures are derived. The shape is then sampled to a fixed number of points, and the Discrete Fourier Transform is applied to calculate the Fourier descriptors (FD). Since not all four shape signatures are invariant to all three transformations (translation, rotation and scale), rotation invariant FDs are obtained by taking their magnitude values. For scale invariant FDs, the magnitude values of all other descriptors are divided by the magnitude value of the second descriptor.

In the grid-based method, after identifying the object, rotation and scale normalization is done. The normalized object is then mapped onto a grid with a fixed cell size of 2x2. After that, the grid is scanned and each cell is assigned either one or zero: if the number of pixels in the cell that lie inside the object is greater than a predefined threshold, one is assigned; otherwise zero is assigned. By performing this, a binary feature vector is obtained for the object. Relevant images are identified by calculating the eccentricity.

In Hausdorff-distance-based shape description, edges of the image are detected before the Hausdorff distance is calculated using the formula given in section 2.1.9. To identify similar images, the minimum and maximum Hausdorff distance values of the images are considered: if Min_Hausdorff distance <= Max_Hausdorff distance for an image, it is taken as a similar image.

The first step of the process of computing CSS is the same as that used in computing FD; its output is the boundary coordinates (x(t), y(t)), t = 0, 1, 2, ..., N−1. The second step is scale normalization, which samples the entire shape boundary into a fixed number of points so that shapes with different numbers of boundary points can be matched. The normalization is done by an equal arc length sampling technique, which best preserves the topological structure of the boundary.

The two main steps in the process are the CSS contour map computation and CSS peak extraction. The CSS contour map is a multi-scale organization of the inflection points (curvature zero-crossing points). To calculate it, curvature is first derived from the shape boundary points (x(t), y(t)), t = 0, 1, 2, ..., N−1, and the curvature zero-crossing points are located on the shape boundary. The shape is then evolved to the next scale by applying Gaussian smoothing, and new curvature zero-crossing points are located at each evolving scale. This process continues until no curvature zero-crossing points are found. The CSS contour map is composed of all curvature zero-crossing points zc(t, σ), where t is the location and σ is the scale at which the zc point is obtained.

The peaks (local maxima) of the CSS contour map are then extracted and sorted in descending order of σ. The next step is to normalize all the obtained CSS peaks: the average height of all the peaks, extracted from the database, is used for the peak normalization. Finally, the normalized CSS peaks are used as CSS descriptors to index the shape.
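Before turning to the results, a compact sketch (ours, with illustrative parameter choices) ties together the centroid-distance signature and the Fourier descriptor normalization described above; simple index-based resampling stands in for the equal arc length sampling used in the paper:

```python
import numpy as np

def centroid_distance(boundary, n_samples=128):
    """Centroid-distance signature r(t), resampled to a fixed length."""
    pts = np.asarray(boundary, dtype=float)        # ordered (x, y) boundary points
    xc, yc = pts.mean(axis=0)                      # object centroid
    r = np.hypot(pts[:, 0] - xc, pts[:, 1] - yc)   # r(t) = sqrt((x-xc)^2 + (y-yc)^2)
    idx = np.linspace(0, len(r) - 1, n_samples)    # crude fixed-length resampling
    return np.interp(idx, np.arange(len(r)), r)

def fourier_descriptors(signature, n_fd=10):
    """Rotation invariance from coefficient magnitudes; scale invariance by
    dividing by the magnitude of the second coefficient, as described above
    (assumes a non-degenerate boundary so that this magnitude is nonzero)."""
    mags = np.abs(np.fft.fft(signature))
    return mags[1:n_fd + 1] / mags[1]
```

The resulting vectors can then be compared with the SSD or SAD measures of section 2.1.10.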
3. RESULTS AND DISCUSSION

The shape features described in section 2 are used to retrieve images and to compare the performance of individual and combined shape features. In this experiment, only the top 50 images are chosen as the retrieval results. Test results for some of the individual shape features are shown in Figures 2, 3, 4 and 5; the first image in each result set is the query image. The results for combinations of features are given in Table 1.

Table 1: Average retrieval accuracy of combined shape features

Feature / Combined Feature           Average Precision
Simple Shape Descriptors (SSD)       0.22
Shape Signature (SS)                 0.28
Hu's invariant moments (HM)          0.44
Zernike moments (ZM)                 0.49
Scale Space Method (CSS)             0.50
Chain Code representation (CC)       0.35
Chain Code Histogram (CCH)           0.40
Fourier Descriptors (FD)             0.52
Grid Method (GM)                     0.25
Hausdorff Distance (HD)              0.46
SSD+SS                               0.30
SSD+HM                               0.45
SSD+ZM                               0.49
SSD+FD                               0.54

Figure 2: First 50 retrieved images for the feature Simple Descriptors.

Figure 3: First 50 retrieved images for the feature M. K. Hu's traditional invariant moments.

Figure 4: First 50 retrieved images for the feature CCH.

Figure 5: First 50 retrieved images for the feature Hausdorff Distance.

4. CONCLUSIONS

An experimental comparison of a number of different shape features for content-based image retrieval was carried out. Both contour-based and region-based methods were considered for retrieval. The retrieval efficiency of the shape features was investigated by means of relevance.
According to the results obtained, it is difficult to claim that any individual feature is superior to the others; the performance depends on the spatial distribution of the images. The test results indicated that Fourier Descriptors perform well compared with the other individual features. In most of the image categories, the Scale Space Method and Zernike moments also showed good performance. The performance of Fourier Descriptors can be improved by combining them with simple descriptors.

REFERENCES

1. Dengsheng Zhang and Guojun Lu, Review of shape representation and description techniques, Pattern Recognition 37 (2004) 1–19.

2. John M. Z. Jr. and Sitharama S. I., An information theoretic approach to content based image retrieval, 2000.

3. Ming-Kuei Hu, Visual pattern recognition by moment invariants, IRE Transactions on Information Theory, 1962.

4. David McG. Squire and Terry M. Caelli, Invariance signatures: characterizing contours by their departures from invariance.

5. Herbert Freeman, Computer processing of line-drawing images, Computing Surveys, Vol. 6, No. 1, March 1974.

6. Richard J. Prokop and Anthony P. Reeves, A survey of moment-based techniques for unoccluded object representation and recognition.

7. D.S. Zhang and G. Lu, A comparative study of Fourier descriptors for shape representation and retrieval, in: Proceedings of the Fifth Asian Conference on Computer Vision (ACCV02), Melbourne, Australia, January 22–25, 2002, pp. 646–651.

8. R.C. Gonzalez and R.E. Woods, Digital Image Processing, Addison-Wesley, Reading, MA, 1992, pp. 502–503.

9. Dengsheng Zhang and Guojun Lu, Evaluation of similarity measurement for image retrieval, IEEE Int. Conf. on Neural Networks & Signal Processing, Nanjing, China, December 14–17, 2003.

10. S.Z. Li, Shape matching based on invariants, in: O. Omidvar (Ed.), Shape Analysis, Progress in Neural Networks, Vol. 6, Ablex, Norwood, NJ, 1999, pp. 203–228.

11. Y. Rui and T. S. Huang, Image retrieval: past, present, and future, Journal of Visual Communication and Image Representation, 1997.

12. H. Freeman, On the encoding of arbitrary geometric configurations, IRE Trans. Electron. Comput. EC-10 (1961) 260–268.

13. Cui Ming, Peter Wonka, Anshuman Razdan and Jiuxiang Hu, A new image registration scheme based on curvature scale space curve matching.

14. F. Mokhtarian and A. K. Mackworth, A theory of multiscale, curvature-based shape representation for planar curves, IEEE Trans. Pattern Analysis and Machine Intelligence 14, pp. 789–805, 1992.

15. H. Asada and M. Brady, The curvature primal sketch, MIT AI Memo 758, 1984.

16. T. Lindeberg, Scale-space: a framework for handling image structures at multiple scales, Proc. CERN School of Computing, Egmond aan Zee, The Netherlands, 8–21 September 1996.

17. F. Mokhtarian, S. Abbasi and J. Kittler, Robust and efficient shape indexing through curvature scale space, Proc. British Machine Vision Conference, pp. 53–62, Edinburgh, UK, 1996.

18. S. Abbasi, F. Mokhtarian and J. Kittler, Curvature scale space image in shape similarity retrieval, Multimedia Systems, 7:467–476, 1999.

19. Hacer Şengül Akaç and Dilay Bingül, Hausdorff distance for shape matching, May 2010.

20. Cyrus Shahabi and Maytham Safar, An experimental study of alternative shape-based image retrieval techniques, Multimedia Tools and Applications, DOI 10.1007/s11042-006-0070-y.

21. G. Lu and A. Sajjanhar, On performance measurement of multimedia information retrieval systems, in: Proc. of the International Conference on Computational Intelligence and Multimedia Applications, pp. 781–787, Australia, Feb. 9–11, 1998.

22. Jukka Iivarinen and Ari Visa, Shape recognition of irregular objects, 1996.

