                    (IJCSIS) International Journal of Computer Science and Information Security, Vol. 8, No. 2, 2010

              An Efficient Feature Extraction Technique for Texture Learning

    R. Suguna, Research Scholar, Department of Information Technology, Madras Institute of Technology, Anna University, Chennai-600 044, Tamil Nadu, India.
    P. Anandhakumar, Assistant Professor, Department of Information Technology, Madras Institute of Technology, Anna University, Chennai-600 044, Tamil Nadu, India.

Abstract— This paper presents a new methodology for discovering features of texture images. An Orthonormal Polynomial based Transform is used to extract features from the images. Using an orthonormal polynomial basis function, polynomial operators of different sizes are generated. These operators are applied over the images to capture texture features. The training images are segmented into fixed-size blocks and features are extracted from them. The operators are applied over each block, and their inner product yields the transform coefficients. This set of transform coefficients forms a feature set of a particular texture class. Using a clustering technique, a codebook is generated for each class. Then significant class representative vectors, which characterize the textures, are calculated. Once the orthonormal basis function of a particular size is found, the operators can be realized with a few matrix operations, and hence the approach is computationally simple. The Euclidean distance measure is used in the classification phase. The transform coefficients have rotation-invariant capability. In the training phase the classifier is trained with samples at one particular rotation angle and tested with samples at different angles. Texture images are collected from the Brodatz album. Experimental results show that the proposed approach provides good discrimination between the textures.

    Keywords- Texture Analysis; Orthonormal Transform; codebook generation; Texture Class representatives; Texture Characterization.

                       I.      INTRODUCTION

    Texture can be regarded as the visual appearance of a surface or material. Textures appear in numerous objects and environments and can consist of very different elements. Texture analysis is a basic issue in image processing and computer vision. It is a key problem in many application areas, such as object recognition, remote sensing, content-based image retrieval and so on. A human may describe textured surfaces with adjectives like fine, coarse, smooth or regular, but finding mathematical features that capture the same properties is very difficult. We recognize texture when we see it, yet it is very difficult to define. In computer vision, the visual appearance of a view is captured with digital imaging and stored as image pixels. Texture analysis researchers agree that there is significant variation in intensity levels or colors between nearby pixels, and that at the limit of resolution there is non-homogeneity. Spatial non-homogeneity of pixels corresponds to the visual texture of the imaged material, which may result from physical surface properties such as roughness, for example. Image resolution is important in texture perception, and low-resolution images typically appear very homogeneous.

    The appearance of texture depends upon three ingredients: (i) some local 'order' is repeated over a region which is large in comparison to the order's size, (ii) the order consists in the nonrandom arrangement of elementary parts, and (iii) the parts are roughly uniform entities having approximately the same dimensions everywhere within the textured region [1].

    Image texture, defined as a function of the spatial variation in pixel intensities (gray values), is useful in a variety of applications and has been a subject of intense study by many researchers. One immediate application of image texture is the recognition of image regions using texture properties. Texture is the most important visual cue in identifying homogeneous regions of this kind. This is called texture classification. The goal of texture classification is to produce a classification map of the input image in which each uniform textured region is identified with the texture class it belongs to [2].

    Texture analysis methods have been utilized in a variety of application domains. Texture plays an important role in automated inspection, medical image processing, document processing and remote sensing. In the detection of defects in texture images, most applications have been in the domain of textile inspection. Some diseases, such as interstitial fibrosis, affect the lungs in such a manner that the resulting changes in the X-ray images are texture changes as opposed to clearly delineated lesions; texture analysis methods are ideally suited for such images. Texture also plays a significant role in document processing and character recognition. The text regions in a document are characterized by their high-frequency content. Texture analysis has been used extensively to classify remotely sensed images. Land use classification, where homogeneous regions with different types of terrain (such as wheat, bodies of water, urban regions, etc.) need to be identified, is an important

                                                                                                     ISSN 1947-5500
application. Haralick et al. [3] used gray level co-occurrence features to analyze remotely sensed images.

    Since we are interested in the interpretation of images, we can define texture as the characteristic variation in intensity of a region of an image which should allow us to recognize and describe it and outline its boundaries. The degrees of randomness and of regularity are the key measures when characterizing a texture. In texture analysis, the similar textural elements that are replicated over a region of the image are called texels. This leads us to characterize textures in the following ways:

•   The texels will have various sizes and degrees of uniformity
•   The texels will be oriented in various directions
•   The texels will be spaced at varying distances in different directions
•   The contrast will have various magnitudes and variations
•   Various amounts of background may be visible between texels
•   The variations composing the texture may each have varying degrees of regularity

    It is quite clear that a texture is a complicated entity to measure, primarily because many parameters are likely to be required to characterize it. Characterization of textured materials is usually very difficult, and the goal of characterization depends on the application. In general, the aim is to give a description of the analyzed material, which can be, for example, the classification result for a finite number of classes or a visual exposition of the surfaces. It gives additional information compared to color or shape measurements of the objects alone. Sometimes it is not even possible to obtain color information at all, as in night vision with infrared cameras. Color measurements are usually more sensitive than texture to varying illumination conditions, making them harder to use in demanding environments like outdoor conditions. Therefore texture measures can be very useful in many real-world applications, including, for example, outdoor scene image analysis.

    To exploit texture in applications, the measures should be accurate in detecting different texture structures, but still be invariant or robust to varying conditions that affect the texture appearance. Computational complexity should not be too high, to preserve realistic use of the methods. Different applications set various requirements on the texture analysis methods, and usually the selection of measures is done with respect to the specific application.

    Typically, textures and the analysis methods related to them are divided into two main categories with different computational approaches: the stochastic and the structural methods. Structural textures are often man-made, with a very regular appearance consisting, for example, of line or square primitive patterns that are systematically located on the surface (e.g. brick walls). In structural texture analysis the properties and the appearance of the textures are described with different rules that specify what kind of primitive elements there are in the surface and how they are located. Stochastic textures are usually natural and consist of randomly distributed texture elements, which again can be, for example, lines or curves (e.g. tree bark). The analysis of these kinds of textures is based on statistical properties of image pixels and regions. The above categorization of textures is not the only possible one; several others exist as well, for example, artificial vs. natural or micro textures vs. macro textures. Regardless of the categorization, texture analysis methods try to describe the properties of the textures in a proper way. It depends on the application what kind of properties should be sought from the textures under inspection and how to do that. This is rarely an easy task.

    One of the major problems when developing texture measures is to include invariant properties in the features. It is very common in a real-world environment that, for example, the illumination changes over time and causes variations in the texture appearance. Texture primitives can also rotate and be located in many different ways, which also causes problems. On the other hand, if the features are too invariant, they might not be discriminative enough.

                       II.     TEXTURE MODELS

    Image texture has a number of perceived qualities which play an important role in describing texture. One of the defining qualities of texture is the spatial distribution of gray values. The use of statistical features is therefore one of the early methods proposed in the machine vision literature.

    The gray-level co-occurrence matrix approach is based on studies of the statistics of pixel intensity distributions. The early paper by Haralick et al. [4] presented 14 texture measures, and these were used successfully for classification of many types of materials, for example wood, corn, grass and water. However, Conners and Harlow [5] found that only five of these measures were normally used, viz. "energy", "entropy", "correlation", "local homogeneity", and "inertia". The size of the co-occurrence matrix is high, and a suitable choice of d (distance) and θ (angle) has to be made to get relevant features.

    A novel texture energy approach was presented by Laws [6]. It involved the application of simple filters to digital images. The basic filters he used were common Gaussian, edge detector, and Laplacian-type filters, designed to highlight points of high "texture energy" in the image. Ade investigated the theory underlying Laws' approach and developed a revised rationale in terms of eigenfilters [7]. Each eigenvalue gives the part of the variance of the original image that can be extracted by the corresponding filter. The filters that give rise to low variances can be taken to be relatively unimportant for texture recognition.

    The structural models of texture assume that textures are composed of texture primitives. The texture is produced by the placement of these primitives according to certain placement rules. This class of algorithms, in general, is limited in power unless one is dealing with very regular textures. Structural texture analysis consists of two major steps: (a) extraction of the texture elements, and (b) inference of the placement rule. An approach to model the texture by

structural means is described by Fu [8]. In this approach the texture image is regarded as texture primitives arranged according to a placement rule. The primitive can be as simple as a single pixel that can take a gray value, but it is usually a collection of pixels. The placement rule is defined by a tree grammar. A texture is then viewed as a string in the language defined by the grammar, whose terminal symbols are the texture primitives. An advantage of this method is that it can be used for texture generation as well as texture analysis.

    Model based texture analysis methods are based on the construction of an image model that can be used not only to describe texture, but also to synthesize it. The model parameters capture the essential perceived qualities of texture. Markov random fields (MRFs) have been popular for modeling images. They are able to capture the local (spatial) contextual information in an image. These models assume that the intensity at each pixel in the image depends on the intensities of only the neighboring pixels. Many natural surfaces have a statistical quality of roughness and self-similarity at different scales. Fractals are very useful and have become popular for modeling these properties in image processing.

    However, the majority of existing texture analysis methods make the explicit or implicit assumption that texture images are acquired from the same viewpoint (e.g. the same scale and orientation), which limits these methods. In many practical applications, it is very difficult or impossible to ensure that captured images have the same translation, rotation or scaling relative to each other. Texture analysis should ideally be invariant to viewpoint. Furthermore, based on cognitive theory and our own perceptual experience, no matter how a texture image is changed under translation, rotation, scaling or even perspective distortion, it is always perceived as the same texture by a human observer. Invariant texture analysis is thus highly desirable from both the practical and theoretical viewpoints.

    Recent developments include work on automated visual inspection. Ojala et al. [9] and Manthalkar et al. [10] aimed at rotation invariant texture classification. Pun and Lee [11] aim at scale invariance. Davis [12] describes a new tool (called the polarogram) for image texture analysis and used it to obtain invariant texture features. In Davis's method, the co-occurrence matrix of a texture image must be computed prior to the polarograms. However, it is well known that a texture image can produce a set of co-occurrence matrices due to the different values of θ and d. This also results in a set of polarograms corresponding to a texture. One polarogram alone is not enough to describe a texture image, and how many polarograms are required to describe a texture image remains an open problem. The polar grid is also used by Mayorga and Ludeman [13] for rotation invariant texture analysis. The features are extracted from texture edge statistics obtained through directional derivatives among circularly layered data. Two sets of invariant features are used for texture classification. The first set is obtained by computing the circularly averaged differences in gray level between pixels. The second computes the correlation function along circular levels. It is demonstrated by many recent publications that Zernike moments perform well in practice for obtaining geometric invariance.

    Local frequency analysis has also been used for texture analysis. One of the best known methods uses Gabor filters and is based on magnitude information [14]. Phase information has been used in [15], and histograms together with spectral information in [16]. Ojala T & Pietikäinen M [17] proposed a multichannel approach to texture description by approximating joint occurrences of multiple features with marginal distributions, as 1-D histograms, and combining similarity scores for 1-D histograms into an aggregate similarity score. Ojala T introduced a generalized approach to gray scale and rotation invariant texture classification based on local binary patterns [18]. The current status of a new initiative aimed at developing a versatile framework and image database for empirical evaluation of texture analysis algorithms is also presented by him. Another frequently used approach in texture description is using distributions of quantized filter responses to characterize the texture (Leung and Malik, Varma and Zisserman) [19] [20]. Ahonen T proved that the local binary pattern operator can be seen as a filter operator based on local derivative filters at different orientations and a special vector quantization function [21].

    A rotation invariant extension to the blur insensitive local phase quantization texture descriptor is presented by Ojansivu V [22].

    Unitary transformations are also used to represent images. A simple and powerful class of transform coding is linear block transform coding, where the entire image is partitioned into a number of non-overlapping blocks and the transformation is applied to each block to yield transform coefficients. This is necessitated by the fact that the original pixel values of the image are highly correlated. A framework using orthogonal polynomials for edge detection and texture analysis is presented in [23] [24].

               III.    ORTHONORMAL POLYNOMIAL TRANSFORM

    A linear 2-D image formation system is usually considered around a Cartesian coordinate separable, blurring, point spread operator, in which the image I results from the superposition of point source impulses weighted by the values of the object f. Expressing the object function f in terms of derivatives of the image function I relative to its Cartesian coordinates is very useful for analyzing the image. The point spread function M(x, y) can be considered to be a real valued function defined for (x, y) ∈ X × Y, where X and Y are ordered subsets of real values. In the case of a gray-level image of size (n x n), where X (rows) consists of a finite set, which for convenience is labeled as {0, 1, 2, …, n-1}, the function M(x, y) reduces to a sequence of functions

                 M(i, τ) = u_i(τ),   τ = 0, 1, …, n-1                  (1)

    The linear two-dimensional transformation can be defined by the point spread operator M(x, y), with M(i, t) = u_i(t), as shown in equation (2):

                 β′(ζ, η) = Σ_x Σ_y u_ζ(x) u_η(y) I(x, y)              (2)
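Equation (2) is separable: the double summation over x and y factorizes into two matrix multiplications when the columns of |M| are the basis functions u_i(t). The following minimal sketch checks this numerically; the 3 x 3 orthogonal basis is the n = 3 operator given later in this section, while NumPy and the sample gray-level block are illustrative assumptions, not part of the paper.

```python
# Check: equation (2) as an explicit double summation vs. the
# equivalent separable matrix form M^T I M.
import numpy as np

# Columns of M are the basis functions u_0, u_1, u_2 sampled at t = 1, 2, 3.
M = np.array([[1, -1,  1],
              [1,  0, -2],
              [1,  1,  1]], dtype=float)

I = np.array([[ 52,  55,  61],    # an arbitrary 3x3 gray-level block
              [ 59,  70,  85],
              [ 93, 103, 107]], dtype=float)

n = M.shape[0]
# beta'(zeta, eta) = sum_x sum_y u_zeta(x) u_eta(y) I(x, y)  -- equation (2)
beta = np.zeros((n, n))
for zeta in range(n):
    for eta in range(n):
        beta[zeta, eta] = sum(M[x, zeta] * M[y, eta] * I[x, y]
                              for x in range(n) for y in range(n))

# The same n^2 coefficients from one separable matrix product.
assert np.allclose(beta, M.T @ I @ M)
```

The coefficient beta[0, 0] is just the sum of the block (u_0 is the all-ones polynomial), which is why the transform concentrates the block's energy in the low-order coefficients.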

    Considering both X and Y to be the finite set of values {0, 1, 2, …, n-1}, equation (2) can be written in matrix notation as

                 |β′| = (|M| ⊗ |M|)′ |I|                               (3)

where ⊗ is the outer product, |I| and |β′| are (n² x 1) matrices arranged in the dictionary sequence, |I| is the image, |β′| holds the coefficients of transformation, and the point spread operator |M| is

                      | u_0(t_1)  u_1(t_1)  …  u_{n-1}(t_1) |
                      | u_0(t_2)  u_1(t_2)  …  u_{n-1}(t_2) |
        |M|     =     |    .         .              .       |          (4)
                      |    .         .              .       |
                      | u_0(t_n)  u_1(t_n)  …  u_{n-1}(t_n) |

    We consider the set of orthogonal polynomials u_0(t), u_1(t), …, u_{n-1}(t) of degrees 0, 1, 2, …, n-1 respectively to construct the polynomial operators of different sizes from equation (4) for n ≥ 2 and t = 1, 2, …, n. The generating formula for the polynomials is as follows:

        u_{k+1}(t) = (t − μ) u_k(t) − b_k(n) u_{k-1}(t),  k ≥ 1,
        with u_1(t) = t − μ and u_0(t) = 1                             (5)

    where

        b_k(n) = ⟨u_k, u_k⟩ / ⟨u_{k-1}, u_{k-1}⟩
               = Σ_t u_k²(t) / Σ_t u_{k-1}²(t)                         (6)

    and

        μ = (1/n) Σ_t t                                                (7)

    Considering the range of values of t to be {1, 2, 3, …, n}, we get

        μ = (n + 1)/2                                                  (8)

        b_k(n) = k²(n² − k²) / (4(4k² − 1))                            (9)

    We can construct point spread operators |M| of different sizes from equation (4) using the above orthogonal polynomials for n ≥ 2 and t = 1, 2, …, n. The orthogonal basis functions for n = 2 and n = 3 are given below:

                  | 1  -1 |                 | 1  -1   1 |
        |M|_2  =  | 1   1 |       |M|_3  =  | 1   0  -2 |
                                            | 1   1   1 |

    Orthonormal basis functions can be derived from orthogonal sets. Suppose that S is a set of vectors in an inner product space. (a) If each pair of distinct vectors from S is orthogonal, we call S an orthogonal set. (b) If S is an orthogonal set and each of the vectors in S also has a norm of 1, we call S an orthonormal set.

    To enforce the orthonormal property, divide each vector by its norm. Suppose {v_1, v_2, v_3} forms an orthogonal set. Then ⟨v_1, v_2⟩ = ⟨v_1, v_3⟩ = ⟨v_2, v_3⟩ = 0. Any vector v can be turned into a vector with norm 1 by dividing it by its norm, v / ‖v‖. Thus S is converted to an orthonormal set by

        û_i = u_i / ‖u_i‖,   i = 1, 2, 3                               (11)

    After finding the orthonormal basis functions, the operators are generated by applying the outer product. For an orthonormal basis function of size n, n² operators are generated. Applying the operators over a block of the image, we get the transform coefficients.

                       IV.    METHODOLOGY

    Sample images representing different textures are collected. We collected the images from the Outex Texture Database. Each image is of size 128 x 128. The images of each texture are partitioned into two groups, a Training Set and a Test Set. The process involved in capturing the texture characterization is depicted in Figure 1. Each training image is partitioned into non-overlapping blocks of size M*M. We have chosen M = 4. Features are extracted from each block using the orthonormal polynomial based transform described in Section III. From each block a k-dimensional feature vector is generated. A codebook is built for each texture class. The algorithm for construction of the codebook is discussed below.

A. Codebook Generation Algorithm
    Input: Training Images of Texture Ti
    Output: Codebook of the Texture Ti
    1.  Read the image Tr(m) from the Texture Class Ti, where m=1,2,…M; M denotes the number of training images in Ti and i=1,2,…L; L denotes the number of Textures. The size of Tr(m) is 128x128.
    2.  Each image is partitioned into p x p blocks, giving P blocks for each training image; p=4.
    3.  For each block apply the orthonormal polynomial based transform using the set of (p x p) polynomial operators and extract the feature coefficients. The inner product between a polynomial operator and the image block results in a transform coefficient. We get p² coefficients for each block.
    4.  Rearrange the feature coefficients into a 1-D array in descending sequence.


        Figure 1.  Process involved in Texture Characterization

    5.    Take only d coefficients to form the feature vector z,
          where z = {z(j), j = 1, 2, ..., d; d < k}.
    6.    From P blocks, obtain P x d coefficients.
    7.    Repeat steps 2-6 for all images in Ti and collect the
          z vectors.

    A clustering technique is applied to cluster the feature
vectors of Ti. The number of clusters decides the codebook size.
The means of the clusters form the code vectors.

B. Building Class Representative Vector
    Input:   Images of size N x N, texture codebook
    Output:  Class representative vector Ri
    1.    For each image in Ti, generate the code indices
          associated with the corresponding codebook.
    2.    Find the number of occurrences of each code index for
          each image.
    3.    Compute the mean of the occurrences to generate the
          class representative vector Ri, i = 1, 2, ..., L, where
          L is the number of texture classes.
    4.    Repeat steps 1-3 for all Ti.

C. Texture Classification
    Given a texture image, this phase determines which texture
class the image belongs to. Images from the test set are
partitioned into non-overlapping blocks of size M x M. Features
are extracted using the orthonormal polynomial transform.
Consulting the codebooks, code indices are generated and the
corresponding input representative vector IRi is formed. The
distance di between the class representative vector Ri and the
input representative vector IRi is computed for each Ti, with
the Euclidean distance as the similarity measure:
          di = dist(IRi, Ri)
The texture class is the one that minimizes di.

                  V.  RESULTS AND DISCUSSION
    We demonstrate the performance of our approach with the
proposed transform coefficients on texture image data that have
been used in recent studies on rotation-invariant texture
classification. Since the data include samples from several
rotation angles, we also present results for a more challenging
setup, where the samples of just one particular rotation angle
are used for training the texture classifier, which is then
tested with the samples of the other rotation angles.

A. Image Data and Experimental Setup
    The image data included 12 textures from the Brodatz album.
Textures are presented at 6 different rotation angles (0°, 30°,
60°, 90°, 120°, and 150°), with 16 images for each class and
angle (hence 1152 images in total). Each texture class therefore
comprises the following subsets of images: 16 'original' images
and 16 images rotated at each of 30°, 60°, 90°, 120°, and 150°.
The size of each image is 128 x 128.
    The texture classes considered for our study are shown in
Figure 2. The texture classes are divided into two sets: Texture
Set-1 contains structural textures (regular patterns) and
Texture Set-2 contains stochastic textures (irregular patterns).
    Texture Set-1 includes {bark, brick, bubbles, raffia, straw,
weave}. Texture Set-2 includes {grass, leather, pigskin, sand,
water, wool}.
    The statistical features of the texture classes are studied
first. The mean and variance of the texture classes are found
and depicted in Figure 3 to Figure 6.

               Figure 2.  Sample Images of Textures
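Phases B and C above — codebook construction by clustering, class representative vectors built from code-index histograms, and nearest-representative classification — can be sketched end to end. This is a toy illustration with synthetic feature vectors standing in for the block features; the simple k-means routine, the helper names, and the parameters (k = 4 codes, 32 blocks per image) are assumptions for the sketch, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_codebook(features, k, iters=25):
    """Toy k-means over block feature vectors; the cluster means
    become the code vectors of the codebook."""
    codebook = features[rng.choice(len(features), size=k, replace=False)].copy()
    for _ in range(iters):
        # nearest code vector for every feature vector
        idx = np.argmin(((features[:, None, :] - codebook) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(idx == j):
                codebook[j] = features[idx == j].mean(axis=0)
    return codebook

def index_histogram(image_features, codebook):
    """Number of occurrences of each code index for one image."""
    idx = np.argmin(((image_features[:, None, :] - codebook) ** 2).sum(-1), axis=1)
    return np.bincount(idx, minlength=len(codebook)).astype(float)

def class_representative(images, codebook):
    """Mean of the per-image index histograms (vector Ri)."""
    return np.mean([index_histogram(im, codebook) for im in images], axis=0)

def classify(image_features, codebook, reps):
    """Nearest class representative under Euclidean distance."""
    ir = index_histogram(image_features, codebook)
    return int(np.argmin([np.linalg.norm(ir - r) for r in reps]))

# toy demo: two synthetic 'texture classes', 8 training images
# each, every image yielding 32 nine-dimensional block features
class0_train = [rng.normal(0.0, 0.1, size=(32, 9)) for _ in range(8)]
class1_train = [rng.normal(5.0, 0.1, size=(32, 9)) for _ in range(8)]
codebook = build_codebook(np.vstack(class0_train + class1_train), k=4)
reps = [class_representative(class0_train, codebook),
        class_representative(class1_train, codebook)]

test0 = rng.normal(0.0, 0.1, size=(32, 9))   # unseen class-0 image
test1 = rng.normal(5.0, 0.1, size=(32, 9))   # unseen class-1 image
print(classify(test0, codebook, reps), classify(test1, codebook, reps))
```

Because the codebook is shared across classes, the per-image histograms of training and test images are directly comparable, which is what makes the Euclidean distance to Ri a meaningful similarity measure.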
              Figure 3.  Mean of Structural Textures

              Figure 4.  Mean of Stochastic Textures

              Figure 5.  Variance of Structural Textures

              Figure 6.  Variance of Stochastic Textures

B. Contribution of Transform Coefficients
    Each texture class at rotation angle 0° is taken for
training; the other images are used for testing. For each
texture class a codebook is generated from the training samples
and a class representative vector is estimated. Figure 7 and
Figure 8 show the representatives of the textures.

      Figure 7.  Class Representatives of Structural Textures

      Figure 8.  Class Representatives of Stochastic Textures

    Table 1 and Table 2 present results for the challenging
experimental setup where the classifier is trained with samples
of just one rotation angle and tested with samples of the other
rotation angles.

           Classification accuracy (%) for different test angles
   Texture      30°       60°       90°       120°      150°
   Bark         86.6      68.75     86.6      75.0      68.75
   Brick        75.0      86.6      87.5      75.0      86.6
   Bubbles      93.75     93.75     100       100       100
   Raffia       100       93.75     87.5      87.5      87.5
   Straw        56.25     62.5      68.75     56.25     62.5
   Weave        93.75     100       100       100       93.75

  Table 1.  Classification accuracies (%) of structural textures
  trained with one rotation angle (0°) and tested with the other
  versions
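The overall per-texture figures reported later (Figures 11 and 12) can be reproduced from the tables by averaging each row across the test angles — assuming the overall score is a plain mean, which the text does not state explicitly. A quick check on the Bark row of Table 1:

```python
# Per-texture overall accuracy = mean over the five test angles.
# Values are the Bark row of Table 1.
bark = [86.6, 68.75, 86.6, 75.0, 68.75]
overall = sum(bark) / len(bark)
print(round(overall, 2))  # -> 77.14
```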

           Classification accuracy (%) for different test angles
   Texture      30°       60°       90°       120°      150°
   Grass        100       100       100       93.75     93.75
   Leather      87.5      93.75     87.5      93.75     93.75
   Pigskin      87.5      87.5      93.75     75.0      68.75
   Sand         75.0      75.0      68.75     68.75     75.0
   Water        100       93.75     93.75     87.5      87.5
   Wool         86.6      86.6      62.5      68.75     75.0

  Table 2.  Classification accuracies (%) of stochastic textures
  trained with one rotation angle (0°) and tested with the other
  versions

    It is observed that among the structural textures, Bark is
misclassified as Straw and occasionally as Brick, Brick is
misclassified as Raffia, and Straw is misclassified as Bark and
Bubbles. Among the stochastic textures, Sand is misclassified as
Pigskin, and Wool is misclassified as Pigskin and Sand. Compared
to the structural textures, the performance on the stochastic
textures is good. The performance of the structural and
stochastic textures is shown in Figure 9 and Figure 10.

    Figure 9.  Classification Performance of Structural Textures

    Figure 10.  Classification Performance of Stochastic Textures

    The overall performance on the structural and stochastic
textures is reported in Figure 11 and Figure 12. If the mean
difference between two textures is small, their classification
performance degrades.

    Figure 11.  Overall Classification Performance of Structural Textures

    Figure 12.  Overall Classification Performance of Stochastic Textures

    We have also compared the performance of our feature
extraction method with other approaches. Table 3 shows the
comparative study with other texture models.

          Texture model                  Recognition rate in %
          Co-occurrence matrix                   78.6
          Autocorrelation method                 76.1
          Laws texture measure                   82.2
          Transformed feature                    89.2

  Table 3.  Performance of various texture measures in classification
                      VI.  CONCLUSION
    An efficient way of extracting features from textures is
presented. From the orthonormal basis, new operators are
generated. These operators perform well in characterizing
textures and can be used for gray-scale and rotation-invariant
texture classification. Experimental results are appreciable
when the original versions of the image samples are used for
learning and the rotated versions are used for testing.
Computational simplicity is another advantage, since the
operator is evaluated by computing an inner product; this keeps
the implementation time low. The efficiency can be further
improved by varying the codebook size and the dimension of the
feature vectors.

    Suguna R received the M.Tech degree in CSE from IIT Madras,
Chennai, in 2004. She is currently pursuing the Ph.D. degree in
the Dept. of IT, MIT Campus, Anna University.

    Anandhakumar P received the Ph.D. degree in CSE from Anna
University in 2006. He is working as Assistant Professor in the
Dept. of IT, MIT Campus, Anna University. His research areas
include image processing and networks.