(IJCSIS) International Journal of Computer Science and Information Security, Vol. 8, No. 5, 2010

Hybrid Model of Texture Classification using 2D Discrete Wavelet Transform and Probabilistic Neural Network

Reem Abd El-Salam El-Deeb, Taher Hamza and Elsayed Radwan
Department of Computer Science, Faculty of Computer and Information Sciences,
Mansoura University, Egypt, P.O. Box 35516
Reemm_db@yahoo.com, Taher_hamza@yahoo.com, elsfradwan@yahoo.com




Abstract— In this paper, we present a combinational approach for texture classification. The proposed method analyzes texture by the 2D Discrete Wavelet Transform (DWT); wavelet energy and some statistical features construct the features vector that characterizes a texture. For improving accuracy, the Probabilistic Neural Network (PNN), which is considered a good estimator of the probability density function, is used as a classifier that maps input feature vectors to the most appropriate texture classes. Two comparative evaluations have been done in order to ensure the effectiveness and efficiency of this model.

Keywords- Texture classification, feature extraction, discrete wavelet transform, probabilistic neural network

I. INTRODUCTION

Texture is the variation of data at scales smaller than the scales of interest [7]. Techniques for the analysis of texture in digital images are essential to a range of applications in areas as diverse as robotics, medicine and the geo-sciences. In biological vision, texture is an important cue allowing humans to discriminate objects, because the brain is able to decipher important variations in data at scales smaller than those of the viewed objects. Texture may be important as well in object recognition, as it tells us something about the material from which the object is made. In order to deal with texture in digital data, many techniques have been developed by image processing researchers [14][7].
Texture classification aims to assign texture labels to unknown textures, according to training samples and classification rules, by finding the best matched category for the given texture among existing textures. Two major issues are critical for texture classification: texture feature extraction and the texture classification algorithm [4].
Texture feature extraction is the main basis of the efficiency of a texture classification algorithm. In order to design an effective algorithm for texture classification, it is essential to find a set of texture features with good discriminating power. Unfortunately, because of the scale dependency of texture, feature extraction has become a difficult problem. There have been many studies addressing the texture classification problem based on various types of features and different methods of feature extraction. Most textural features are obtained from the application of a local operator, statistical analysis, or measurement in a transformed domain [14]. Features estimated from Laws' texture energy measures, Markov random field models, Gibbs distribution models and local linear transforms were found not to be robust enough to allow a one-to-one mapping between patterns and parameter sets, for several reasons: the parameters computed rely on the model assumed, the neighborhoods used must not be self-contradictory, and they rely on the number of samples available for each combination of neighborhoods. In short, no model fits the observed textures perfectly, and so no model parameters are perfect in capturing all characteristics of a texture image [7][8]. Other studies used the Fourier transform domain, fractals and co-occurrence matrices. Co-occurrence features such as contrast and homogeneity were found to be the best of these features and are popular due to the perceptual meaning they carry. However, they are not adequate for texture and object discrimination, as they throw away most of the information conveyed by the co-occurrence matrices [7].
In recent years, wavelet analysis has become a powerful tool for multi-resolution analysis. The Discrete Wavelet Transform (DWT) and the Gabor Transform are extensively used for texture analysis. While the DWT uses fixed filter parameters for image decomposition across scales, the Gabor Transform requires proper tuning of the filter parameters for different scales of decomposition. Further, wavelet based methods have been shown to be efficient in detection, classification and segmentation for many reasons: the wavelet transform is able to de-correlate the data and achieve the same goal as a linear transformation, it provides orientation sensitive information which is essential in texture analysis, and the computational complexity is significantly reduced by considering the wavelet decomposition [11][14].
As denoted before, the efficiency of any classification system depends on effective characterization as well as choosing the appropriate classifier.




Some classifier algorithms, such as support vector machines, have been used in some works and faced problems of high algorithmic complexity and extensive memory requirements [16]; distance classifiers have also been used for measuring similarity and for the consequent labeling, but they suffer from limitations in speed, and adding parameters may cause the classifier to fail [4].
In this study, a hybrid model based on a combinational approach is proposed, which combines the 2D Discrete Wavelet Transform (DWT) and the Probabilistic Neural Network (PNN) for solving the texture classification problem. In the hybrid configuration, the 2D DWT is used for texture analysis and for constructing the features vector that characterizes the texture image by capturing all essential information. The obtained features vectors are then fed into the PNN, which is used as a good estimator of the probability density function and maps each texture feature vector to the most appropriate class with fast and efficient performance. For illustrating the effectiveness of this model, two comparative evaluations have been done. The first one was among a variety of wavelet filters, to find the feature extractor that provides the best characterization. The other was between the PNN and a Backpropagation Neural Network (NN) as classifiers, according to their mean success rates.
This paper is organized as follows: in section II, the Discrete Wavelet Transform (DWT), the Probabilistic Neural Network (PNN) and the wavelet energy are presented. The hybrid model of 2D DWT and PNN is described in section III. The effectiveness of the proposed hybrid model for the classification of texture images and the comparative evaluations are demonstrated in section IV. Finally, section V presents the discussion and conclusion.

II. PRELIMINARIES

A. Discrete wavelet transform
Wavelets are functions that satisfy certain mathematical requirements. They are used to cut up data into different frequency components and then study each component with a resolution matched to its scale. The basic idea of the wavelet transform is to represent any arbitrary function as a superposition of wavelets. Any such superposition decomposes the given function into different scale levels, where each level is further decomposed with a resolution adapted to that level [12].
By applying the DWT, the image is divided, i.e., decomposed, into four sub-bands and critically sub-sampled, as shown in Figure 1(a). These four sub-bands arise from separable applications of vertical and horizontal filters. The sub-bands labeled LH1, HL1 and HH1 represent the finest scale wavelet coefficients, i.e., the detail images, while the sub-band LL1 corresponds to the coarse level coefficients, i.e., the approximation image. To obtain the next coarse level of wavelet coefficients, the sub-band LL1 alone is further decomposed and critically sampled. This results in a two-level wavelet decomposition, as shown in Figure 1(b). This process continues until some final scale is reached. The values, or transformed coefficients, in the approximation and detail images (sub-band images) are the essential features, which are shown here to be useful for texture analysis and discrimination. As textures have non-uniform pixel value variations, they can be characterized by the values in the sub-band images, their combinations, or features derived from these bands [11].

Figure 1. Image decomposition: (a) one-level decomposition into the sub-bands LL1, HL1, LH1 and HH1; (b) two-level decomposition, in which LL1 is further split into LL2, HL2, LH2 and HH2.

B. Energy
Energy is one of the most commonly used features for texture analysis [4]. Wavelet energy reflects the distribution of energy along the frequency axis over scale and orientation and has proven to be very powerful for texture classification. The energy of a sub-band s containing N coefficients x_s(c_i, c_j) is defined as in equation (1) [5],

    E_s = (1/N) \sum_{i,j} |x_s(c_i, c_j)|^2          (1)
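For instance, a minimal Python sketch of this decomposition and of the energy of equation (1) could look as follows (NumPy and the PyWavelets package are assumed, and the random array merely stands in for a real grayscale texture patch):

    import numpy as np
    import pywt

    # Stand-in for a grayscale texture patch; in practice this would be
    # read from an image file.
    img = np.random.rand(64, 64)

    # One-level 2D DWT: cA is the approximation (LL) sub-band, and cH, cV, cD
    # are the detail sub-bands (LH, HL, HH in the notation of Figure 1).
    cA, (cH, cV, cD) = pywt.dwt2(img, 'bior2.2')

    def wavelet_energy(subband):
        # Energy as in equation (1): average of the squared coefficients.
        return np.sum(np.abs(subband) ** 2) / subband.size

    for name, sb in zip(('LL', 'LH', 'HL', 'HH'), (cA, cH, cV, cD)):
        print(name, wavelet_energy(sb))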



C. Probabilistic neural network
It has been shown that, by replacing the sigmoid activation function often used in neural networks with an exponential function, a neural network can be formed which computes nonlinear decision boundaries. The resulting network is an estimator of probability density functions, which can be used to map input patterns to output patterns and to classify patterns. This technique yields decision surfaces which approach the Bayes optimal under certain conditions [3].
The PNN is one of these networks, called a radial basis network: an artificial neural network with a radial basis function (RBF) as its transfer function. The RBF is a bell-shaped function that scales a variable nonlinearly [15]. This network provides a general solution to pattern classification problems by following an approach developed in statistics, called Bayesian classification [6]. The PNN is suitable for these kinds of classification problems for several reasons: its training speed is many times faster than that of a standard feed-forward backpropagation network, it can approach a Bayes optimal result under certain easily met conditions, and it is robust to noisy examples.
The most important advantage of the PNN is that training is easy and instantaneous: weights are not "trained" but assigned. Existing weights are never altered; only new vectors are inserted into the weight matrices during training, so it can be used in real time. Since the training and running procedure can be implemented by matrix manipulation, the speed of the PNN is very fast [15].
The probabilistic neural network uses a supervised training set to develop distribution functions within a pattern layer. These functions, in the recall mode, are used to estimate the likelihood of an input feature vector being part of a learned category, or class. The learned patterns can also be combined, or weighted, with the a priori probability, also called the relative frequency, of each category to determine the most likely class for a given input vector. If the relative frequency of the categories is unknown, then all categories can be assumed to be equally likely, and the determination of the category is based solely on the closeness of the input feature vector to the distribution function of a class [6][1].
Probabilistic neural networks can be used for classification problems. When an input is presented, the first layer computes the distances from the input vector to the training input vectors and produces a vector whose elements indicate how close the input is to each training input. The second layer sums these contributions for each class of inputs to produce, as its net output, a vector of probabilities. Finally, a compete transfer function on the output of the second layer picks the maximum of these probabilities and produces a 1 for that class and a 0 for the other classes. The architecture of this system is shown in Figure 2 [6].

Figure 2. PNN architecture.

It is assumed that there are Q input vector/target vector pairs. Each target vector has K elements; one of these elements is 1 and the rest are 0. Thus, each input vector is associated with one of K classes.
The first-layer input weights IW1 are set to the transpose of the matrix formed from the Q training pairs, P'. When an input is presented, the ||dist|| box produces a vector whose elements indicate how close the input is to the vectors of the training set. These elements are multiplied, element by element, by the bias and sent to the radial basis transfer function. An input vector close to a training vector is represented by a number close to 1 in the output vector a1. If an input is close to several training vectors of a single class, it is represented by several elements of a1 that are close to 1.
The second-layer weights LW2 are set to the matrix T of target vectors. Each vector has a 1 only in the row associated with that particular class of input, and 0's elsewhere. The multiplication T·a1 sums the elements of a1 due to each of the K input classes. Finally, the second-layer transfer function, compete, produces a 1 corresponding to the largest element of n2, and 0's elsewhere. Thus, the network classifies the input vector into a specific one of the K classes, because that class has the maximum probability of being correct [9].
The bias b allows the sensitivity of the radial basis neuron to be adjusted. Each bias in the first layer is set to 0.8326/SPREAD. This determines the width of the area in the input space to which each neuron responds; SPREAD should be large enough that neurons respond strongly to overlapping regions of the input space [13].
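As a small illustration of the role of SPREAD, the sketch below (Python with NumPy; the value of spread is an arbitrary example) evaluates the radial basis transfer function at the bias 0.8326/SPREAD: a training vector at distance SPREAD from the input yields a response of roughly 0.5, which is how the bias controls the width of each neuron's response region.

    import numpy as np

    def radbas(n):
        # Radial basis transfer function a = exp(-n^2), the same function
        # used in equation (5) of section III.B.
        return np.exp(-n ** 2)

    spread = 1.5              # smoothing parameter, chosen by experiment
    b = 0.8326 / spread       # first-layer bias, as described above

    # Response of one radial basis neuron for inputs at increasing distance
    # from its stored training vector.
    for dist in (0.0, spread, 3 * spread):
        print(f"distance {dist:.2f} -> response {radbas(dist * b):.3f}")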
The Probabilistic Neural Network is based on Bayesian classification and on the estimation of the probability density function that is necessary to classify the input vectors into one of the target classes, approaching the Bayesian optimality [15].

III. HYBRID MODEL FOR TEXTURE CLASSIFICATION

The texture classification scheme is based on two principles: choosing features that provide the best characterization of the texture image, and working with a fast, easy and robust classifier in order to reach the best classification result.
In this study, a viable algorithm with high precision and low computational load is proposed to classify texture images using the wavelet transform in combination with the probabilistic neural network. In the proposed combinatory configuration, the DWT and the PNN function as black boxes in a complementary manner. The functionality involved can be divided into two phases:

i. the texture characterization phase, and
ii. the PNN classification phase.

The texture characterization phase starts by taking the texture images as input; with the help of the DWT, the texture images are analyzed and the features vectors are constructed. The obtained features vectors are then entered into the PNN for training, which starts the PNN classification phase; this phase continues with testing and ends with displaying the classification result, as illustrated in Figure 3.




Figure 3. Structure of the hybrid classification model: texture image → DWT → calculation of wavelet energy and wavelet statistical features for the 1-level decomposed sub-bands → features vectors → train PNN → test PNN → display and compare the results.

A. Texture characterization phase
In order to overcome the obstacle that texture characterization poses because of the scale-dependent property of texture, the discrete wavelet transform is used as a powerful tool for multi-resolution analysis.
For the wavelet decomposition of the various texture images, a 1-level decomposition is performed using different wavelet transform filters. Thus, each image is decomposed into one approximation image with approximation coefficients LL and three detail images with horizontal LH, vertical HL and diagonal HH detail coefficients.
The wavelet transform of an image measures light fluctuation at different scales. Therefore, the wavelet energy, which reflects the distribution of energy along the frequency axis over scale and orientation, is calculated for the approximation and the detail coefficients matrices. Also, to increase sensitivity and precision, some wavelet statistical features are calculated, namely the mean and the standard deviation of the approximation matrix as well as the mean and the standard deviation of the detail coefficients matrices; these are then added to the image features. The features obtained construct a feature vector with 12 elements, organized as illustrated in Figure 4. The first four elements represent the wavelet energy of the approximation and the detail coefficients matrices, computed as in equation (1). The second four elements represent the arithmetic mean of the approximation and the detail coefficients matrices, i.e., the standard average of the matrix elements, computed as in equation (2),

    M = (1/N) \sum_{i,j} |x(c_i, c_j)|          (2)

The last four elements represent the standard deviation of the approximation and the detail coefficients matrices, which measures the variability of the matrix elements and is computed as the square root of the variance, as in equation (3),

    \sigma = \sqrt{ (1/(N-1)) \sum_{i=1}^{N} (x_i - M)^2 }          (3)

Figure 4. Features vector: wavelet energy (LL, LH, HL, HH), mean (LL, LH, HL, HH) and standard deviation (LL, LH, HL, HH).

Hence, at the end of this phase, each texture image has been well characterized by a 12-element features vector that captures all the essential information needed for discrimination.
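The whole characterization phase can be sketched in a few lines of Python (a minimal sketch assuming NumPy and PyWavelets; the function name and the random patch are illustrative, and the ordering of the 12 elements follows Figure 4):

    import numpy as np
    import pywt

    def texture_features(patch, wavelet='bior2.2'):
        # 12-element features vector of a texture patch (Figure 4): wavelet
        # energy, mean and standard deviation of the four sub-bands of a
        # 1-level 2D DWT.
        cA, (cH, cV, cD) = pywt.dwt2(patch, wavelet)
        subbands = (cA, cH, cV, cD)
        energy = [np.sum(np.abs(sb) ** 2) / sb.size for sb in subbands]  # eq. (1)
        mean = [np.mean(np.abs(sb)) for sb in subbands]                  # eq. (2)
        std = [np.std(sb, ddof=1) for sb in subbands]                    # eq. (3)
        return np.array(energy + mean + std)

    # Hypothetical 64x64 patch; in the experiments these are VisTex sub-images.
    patch = np.random.rand(64, 64)
    print(texture_features(patch).shape)   # (12,)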
B. PNN classification phase
The PNN is considered a good estimator of the probability density function; it can be used to map input patterns to output patterns and to classify patterns efficiently, with fast execution and easy implementation.
The PNN classification phase has two parts, training and testing. After the features vectors that represent the texture images have been constructed in the texture characterization phase, the network is trained with the features vectors and the corresponding texture classes, as described in the following steps:

1) The input is of size R × Q, with R = 12 feature elements and Q training samples.
2) The radial basis layer weight matrix IW1 is set to the transpose of the R × Q matrix of training samples, so that IW1 is of size Q × 12.
3) The dot product between the input vector p, of size 12 × 1, and the i-th row of IW1 produces the i-th element of the distance vector ||IW1 - p||, whose size is Q × 1.
4) The radial basis layer biases b are all set to 0.8326/SPREAD, where SPREAD is a constant chosen according to experiment.
5) The net input n1 is obtained from the element-by-element multiplication of the bias vector b with the distance vector ||IW1 - p||, denoted as

    n1 = ||IW1 - p|| .* b          (4)

6) The transfer function is the radial basis function defined in equation (5); its shape is illustrated in Figure 5.

    a = radbas(n) = e^{-n^2}          (5)




Figure 5. Radial basis function.

If the input p is identical to the i-th row of IW1, then the i-th element of a1 is equal to 1. If the input p is close to the i-th row of IW1, the radial basis function produces a value near 1; otherwise it produces a value far from 1.

7) The competitive layer weight matrix LW2 is set to the K × Q matrix of target classes T.
8) In the competitive layer, the vector a1 is multiplied by the matrix LW2, producing the output vector n2 of size K × 1.
9) The competitive function produces a 1 corresponding to the largest value of n2 and 0 elsewhere.
10) For testing the network, an unknown features vector is entered as input and the network classifies it according to the class associated with the largest probability.
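Steps 1) to 10) can be condensed into the following NumPy sketch (an illustrative reimplementation, not the authors' code; the names IW1, LW2, a1 and the use of the Euclidean distance for ||IW1 - p|| are assumptions modeled on the description above):

    import numpy as np

    def pnn_train(train_features, train_labels, n_classes, spread=1.0):
        # Steps 1)-4): store the Q training vectors as the radial basis layer
        # weights IW1 (Q x R), build the K x Q one-hot target matrix LW2 and
        # set the shared bias b = 0.8326 / SPREAD.
        IW1 = np.asarray(train_features, dtype=float)         # Q x 12
        Q = IW1.shape[0]
        LW2 = np.zeros((n_classes, Q))
        LW2[np.asarray(train_labels), np.arange(Q)] = 1.0
        b = 0.8326 / spread
        return IW1, LW2, b

    def pnn_classify(p, IW1, LW2, b):
        # Steps 5)-9): distances to all training vectors, radial basis layer
        # (equations (4) and (5)), class-wise summation and competitive layer.
        dist = np.linalg.norm(IW1 - p, axis=1)                 # ||IW1 - p||
        a1 = np.exp(-(dist * b) ** 2)                          # radbas(n1)
        n2 = LW2 @ a1                                          # sum per class
        return int(np.argmax(n2))                              # winning class

    # Hypothetical usage with random 12-element features vectors, 14 classes.
    rng = np.random.default_rng(0)
    X, y = rng.random((756, 12)), rng.integers(0, 14, 756)
    IW1, LW2, b = pnn_train(X, y, n_classes=14)
    print(pnn_classify(rng.random(12), IW1, LW2, b))           # step 10): testing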
IV. EXPERIMENTAL RESULTS

In order to assess the discrimination capability of the proposed hybrid model, experiments are conducted with 14 Vision Texture (VisTex) database images, each of size 512×512, divided into sub-images of size 64×64, for a total of 896 texture images belonging to 14 texture classes.
First, for the texture characterization phase, a one-level wavelet decomposition is applied to each texture image using the db2, sym6, db5, bior2.2, coif2 and db3 wavelet filters separately. After feature extraction, each texture image is characterized and represented by a 12-element features vector, giving in total 896×12 features vectors for all the texture images used. Then, before entering the second phase, the PNN classification phase, the features vectors obtained at the end of the texture characterization phase are divided into 756 features vectors for training and 140 for testing.
Finally, the PNN is trained with the 756×12 input features vectors and the 14 target classes. After training, the PNN is tested with the 140×12 features vectors.
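A brief sketch of this experimental setup (NumPy assumed; the random 512×512 array stands in for one VisTex texture, which would really be loaded from the database):

    import numpy as np

    # Stand-in for one 512x512 VisTex texture image.
    texture = np.random.rand(512, 512)

    # Non-overlapping 64x64 sub-images: 8 x 8 = 64 patches per texture,
    # so the 14 textures yield 14 * 64 = 896 patches in total.
    patches = [texture[r:r + 64, c:c + 64]
               for r in range(0, 512, 64)
               for c in range(0, 512, 64)]
    print(len(patches))   # 64

    # After characterization (896 x 12 features vectors), the vectors are split
    # into 756 for training and 140 for testing, e.g. 54 + 10 patches per class.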
For illustrating the effectiveness and efficiency of this model, two comparative evaluations have been done. The first one was between the features extracted by the chosen wavelet filters, according to the corresponding success rates, in order to evaluate the efficiency of the characterization obtained with each filter. The other was between the PNN and a Backpropagation NN as classifiers, according to the mean success rates of each classifier. The obtained results are discussed below.
The number of incorrect classifications made by the PNN on the tested texture images, using the wavelet filters mentioned before, is tabulated in TABLE I. It is found that the wavelet filters db2, bior2.2 and db3 give the best characterization of the texture images Bark.0000 and Paintings.11.0003; the filters sym6, bior2.2 and coif2 give the best characterization of Leaves.0008; the filters db2, sym6 and bior2.2 give the best characterization of Metal.0004; and the filters sym6, db5, coif2 and db3 give the best characterization of Flowers.0006.

TABLE I. THE NUMBER OF INCORRECT CLASSIFICATIONS WITH VARIOUS WAVELET FEATURES

No   Texture image              db2   sym6   db5   bior2.2   coif2   db3
1    Bark.0000.pgm               1     3      2       1        3      1
2    Bark.0006.pgm               1     1      1       1        1      1
3    Fabric.0015.pgm             1     1      1       1        1      1
4    Flowers.0006.pgm            2     1      1       2        1      1
5    Food.0000.pgm               0     0      0       0        0      0
6    Leaves.0008.pgm             1     0      1       0        0      2
7    Metal.0002.pgm              0     0      0       0        0      0
8    Metal.0004.pgm              0     0      1       0        1      2
9    Misc.0003.pgm               1     1      1       1        1      1
10   Paintings.11.0003.pgm       2     4      3       2        4      2
11   Water.0001.pgm              0     0      0       0        0      0
12   Stone.0004.pgm              1     1      1       1        1      1
13   Wood.0002.pgm               0     0      0       0        0      0
14   Water.0005.pgm              0     0      0       0        0      0

The mean success rates of the chosen wavelet filters are shown in Figure 6. The characterization of the texture images using the bior2.2 wavelet filter has the highest mean success rate (93.57%), while the lowest mean success rate (90.72%) is obtained for the characterization using the coif2 wavelet filter, as observed in Figure 7.

Figure 6. The mean success rates of the various wavelet filters: db2 92.86%, sym6 91.43%, db5 91.43%, bior2.2 93.57%, coif2 90.72%, db3 91.43%.




Figure 7. Comparison between the best (bior2.2) and the worst (coif2) characterization according to the corresponding correct classification rates of the 14 texture classes.

The features vectors obtained with the bior2.2 wavelet filter, which provide the best characterization of the texture images, have been used to compare the performance of the PNN employed in this model with that of a Backpropagation NN in classification.
As illustrated in Figure 8, the mean success rate obtained over the 14 texture classes using the PNN is far higher than that obtained using the Backpropagation NN. In addition, the great differences in speed and simplicity between the two classifiers prove the effectiveness and efficiency of the PNN used in this model with respect to the Backpropagation NN.

Figure 8. Comparison between the mean success rates obtained using the PNN and the mean success rates obtained using the Backpropagation NN.

V. CONCLUSION

The texture classification problem has attracted wide interest because of its great effect on many fields. As an attempt to solve this problem, many studies have been carried out, in spite of the hindrances encountered on two sides, characterization and classification. On the characterization side, researchers have tried to choose features that provide the best representation of the texture images in the presence of the scale-dependent property of texture, while on the classification side they have worked on using the appropriate classifier algorithm to provide the best discrimination capability.
In this work, a hybrid model is presented to classify texture images: the 2D DWT is combined with the PNN to construct this hybrid model. The discrete wavelet transform is used as a powerful tool for multi-resolution analysis, so it is used for texture analysis as an attempt to overcome the obstacle of the scale-dependent property of texture. The PNN is a radial basis network that is considered an estimator of the probability density functions, which can be used to map input patterns to output patterns and to classify patterns. The PNN is suitable for these kinds of classification problems in that it can approach a Bayes optimal result under certain easily met conditions; moreover, its training is easy, fast and robust.
The structure of the proposed hybrid model is divided into two phases. The first phase is the texture characterization phase, in which a 1-level wavelet decomposition is performed and 4 sub-bands are obtained, representing the approximation and the vertical, horizontal and diagonal detail. As a way to select features that capture all the essential information needed to uniquely characterize the texture, the wavelet energy, which reflects the distribution of energy along the frequency axis over scale and orientation, is calculated for the approximation and the detail coefficients matrices. Also, to increase sensitivity and precision, some wavelet statistical features are calculated, namely the mean and the standard deviation of the approximation matrix as well as the mean and the standard deviation of the detail coefficients matrices; these are then added to the image features. The features obtained construct a feature vector with 12 elements, which is fed, with the corresponding target class, as input to the PNN, starting the second phase, the PNN classification phase. In this phase, the PNN is trained with the input features vectors and then tested with other features vectors for evaluating its discrimination capability.
Experiments have been conducted for evaluating the performance of the proposed hybrid model. The model proved that the features derived from the approximation and detail coefficients, i.e., the wavelet energy and the statistical features, uniquely characterize a texture. In order to find the best feature extractor, a comparative evaluation has been done between the features extracted by the different wavelet filters according to the corresponding correct classification rates. Another comparative evaluation, with respect to classifiers, has been performed between the PNN and the Backpropagation NN, which provided great evidence of the effectiveness of the PNN as a classifier for texture images, with a simple structure, fast execution and high performance.
                                                                                        provided great evidence about the effectiveness of the PNN as



This hybrid classification model achieved good progress in solving the texture classification problem through the proposed complementary combination of the 2D DWT and the PNN. Compared with other methods, the system is fast in execution, efficient in recognition and easy to implement. We are still in need of finding more features that capture more of the essential information of texture in order to provide the best characterization and achieve optimal classification results. More extended efforts are under development in order to improve the efficiency of the system.

ACKNOWLEDGMENT

First of all, I thank my father, my mother, my sisters and my brother for the moral support I required in my life. I deeply thank my advisor, Dr. ElSayed Radwan, whose help, advice and supervision were invaluable. Lastly, I offer my regards and blessings to my friends and all of those who supported me in any respect during the completion of this paper.

REFERENCES

[1] Ali H. Al-Timemy, Fawzi M. Al-Naima and Nebras H. Qaeeb, "Probabilistic Neural Network for Breast Biopsy Classification," MASAUM Journal of Computing, Iraq, September 2009.
[2] Anagnostopoulos I., Anagnostopoulos C., Vergados D., Loumos V. and Kayafas E., "A Probabilistic Neural Network for Human Face Identification based on Fuzzy Logic Chromatic Rules," IEEE MED03, Greece, 2007.
[3] Donald F. Specht, "Probabilistic neural networks for classification, mapping, or associative memory," IEEE International Conference on Neural Networks, California, 2001.
[4] Engin Avci, Abdulkadir Sengur and Davut Hanbay, "An optimum feature extraction method for texture classification," Pergamon Press, Inc., Tarrytown, NY, USA, 2009.
[5] G. Van de Wouwer, P. Scheunders and D. Van Dyck, "Statistical texture characterization from discrete wavelet representations," IEEE Transactions on Image Processing, Belgium, 1999.
[6] Leila Fallah Araghi, Hamid Khaloozade and Mohammad Reza Arvan, "Ship Identification Using Probabilistic Neural Networks (PNN)," The International Multi-Conference of Engineers and Computer Scientists, Hong Kong, March 2009.
[7] Maria Petrou and Pedro Garcia Sevilla, "Image Processing - Dealing with Texture," John Wiley & Sons Ltd, England, 2006.
[8] M. Ghazvini, S. A. Monadjemi, N. Movahhedinia and K. Jamshidi, "Defect Detection of Tiles Using 2D-Wavelet Transform and Statistical Features," World Academy of Science, Engineering and Technology, Iran, 2009.
[9] Neural Network Matlab Toolbox, 2008.
[10] Sami Gazzah and Najoua Essoukri Ben Amara, "Arabic Handwriting Texture Analysis for Writer Identification Using the DWT-Lifting Scheme," Ninth International Conference on Document Analysis and Recognition, Parana, 2007.
[11] S. Arivazhagan and L. Ganesan, "Texture Classification using Wavelet Transform," The Sixth International Conference on Computational Intelligence and Multimedia Applications (ICCIMA'05), IEEE, India, 2005.
[12] S. Arivazhagan, S. Deivalakshmi and K. Kannan, "Performance Analysis of Image Denoising System for different levels of Wavelet decomposition," International Journal of Imaging Science and Engineering (IJISE), USA, July 2007.
[13] S. N. Sivanandam, S. Sumathi and S. N. Deepa, "Introduction to Neural Networks Using Matlab 7.0," Tata McGraw-Hill Publishing Company Limited, 2006.
[14] S. Shivashankar and P. S. Hiremath, "Wavelet Based Features for Texture Classification," ICGST International Journal on Graphics, Vision and Image Processing, India, December 2006.
[15] Stephen Gang Wu, Forrest Sheng Bao, Eric You Xu, Yu-Xuan Wang, Yi-Fan Chang and Qiao-Liang Xiang, "A Leaf Recognition Algorithm for Plant Classification Using Probabilistic Neural Network," arXiv, USA, July 2007.
[16] http://www.svms.org/disadvantages.html, 2010-07-07.



