Image Classification in Transform Domain

(IJCSIS) International Journal of Computer Science and Information Security,
Vol. 10, No. 3, March 2012

Dr. H. B. Kekre, Professor, Computer Engineering, Mukesh Patel School of Technology Management and Engineering, NMIMS University, Vileparle(W), Mumbai 400-056, India. hbkekre@yahoo.com
Dr. Tanuja K. Sarode, Associate Professor, Computer Engineering, Thadomal Shahani Engineering College, Bandra(W), Mumbai 400-050, India. tanuja_0123@yahoo.com
Jagruti K. Save, Ph.D. Scholar, MPSTME, NMIMS University; Associate Professor, Fr. C. Rodrigues College of Engineering, Bandra(W), Mumbai 400-050, India. jagrutik_save@yahoo.com


Abstract— Organizing images into meaningful categories using low level or high level features is an important task in image databases. Although image classification has been studied for many years, it is still a challenging problem in multimedia and computer vision. In this paper a generic image classification approach using different transforms is proposed. The two main steps in image classification are feature extraction and the classification algorithm. This paper proposes to generate the feature vector from an image transform. The paper also investigates the effectiveness of different transforms (Discrete Fourier Transform, Discrete Cosine Transform, Discrete Sine Transform, Hartley Transform and Walsh Transform) in the classification task. The size of the feature vector is also varied to see its impact on the results. Classification is done using a nearest neighbor classifier, with Euclidean and Manhattan distance used as the similarity measure. Images from the Wang database are used to carry out the experiments. The experimental results and a detailed analysis are presented.

Keywords- Image classification; Image Transform; Discrete Fourier Transform (DFT); Discrete Sine Transform (DST); Discrete Cosine Transform (DCT); Hartley Transform; Walsh Transform; Nearest neighbor Classifier.

                        I.    INTRODUCTION
    Though image classification is usually not a very difficult task for humans, it has proven to be an extremely complex task for machines. In the existing literature, most frameworks for image classification include two main steps: feature extraction and a classification algorithm. In the first step, some discriminative features are extracted to represent the image content, such as color [1] [2], shape [3] and texture [4]. There has been a lot of research work done in the area of feature extraction. A saliency map is used to extract features to classify both the query image and database images into attentive and non-attentive classes [5]. The image texture feature is calculated based on the gray-level co-occurrence matrix (GLCM) [6]. The color co-occurrence method, in which both the color and texture of an image are taken into account, is used to generate the features [7]. Transforms have been applied to gray scale images to generate feature vectors [8]. In the classification algorithm step, various multi-class classifiers like the k nearest neighbor classifier [9], Support Vector Machine (SVM) [10] [11], Artificial Neural Network [12] [13] and Genetic algorithm [14] are used.

                        II.    IMAGE TRANSFORMS

A. Discrete Fourier Transform (DFT)
    The discrete Fourier transform (DFT) is one of the most important transforms used in digital signal processing and image processing [15]. The two dimensional discrete Fourier transform of an image f(x, y) of size N by N is given by equation 1.

    F(u, v) = Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x, y) e^{−j2π(ux + vy)/N},  for 0 ≤ u, v ≤ N − 1    (1)

B. Discrete Cosine Transform (DCT)
    The discrete cosine transform (DCT), introduced by Ahmed, Natarajan and Rao [16], has been used in many applications of digital signal processing, data compression, information hiding and content based image retrieval (CBIR) [17]. The DCT is closely related to the discrete Fourier transform. It is a separable linear transformation; that is, the two-dimensional transform is equivalent to a one-dimensional DCT performed along a single dimension followed by a one-dimensional DCT in the other dimension. The two dimensional DCT can be written in terms of the pixel values f(x, y), for x, y = 0, 1, ..., N−1, and the frequency-domain transform coefficients F(u, v) as shown in equation 2.

    F(u, v) = α(u) α(v) Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x, y) cos[(2x + 1)uπ / 2N] cos[(2y + 1)vπ / 2N],  for 0 ≤ u, v ≤ N − 1    (2)

    Where




                                                                       91                                 http://sites.google.com/site/ijcsis/
                                                                                                          ISSN 1947-5500
    α(u) = 1/√N  for u = 0;    α(u) = √(2/N)  for 1 ≤ u ≤ N − 1
    α(v) = 1/√N  for v = 0;    α(v) = √(2/N)  for 1 ≤ v ≤ N − 1

C. Discrete Sine Transform (DST)
    The discrete sine transform was introduced by A. K. Jain in 1974. The two dimensional sine transform is defined by equation 3. The discrete sine transform has been widely used in signal and image processing [18] [19].

    F(u, v) = (2/(N + 1)) Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x, y) sin[(x + 1)(u + 1)π / (N + 1)] sin[(y + 1)(v + 1)π / (N + 1)],  for 0 ≤ u, v ≤ N − 1    (3)

D. Discrete Hartley Transform (DHT)
    The Hartley transform [20] is an integral transform closely related to the Fourier transform. It has some advantages over the Fourier transform in the analysis of real signals, as it avoids the use of complex arithmetic.
    A discrete Hartley transform (DHT) is a Fourier-related transform of discrete, periodic data similar to the discrete Fourier transform (DFT), with analogous applications in signal processing and related fields [21]. Its main distinction from the DFT is that it transforms real inputs to real outputs, with no intrinsic involvement of complex numbers. Just as the DFT is the discrete analogue of the continuous Fourier transform, the DHT is the discrete analogue of the continuous Hartley transform. The discrete two dimensional Hartley transform for an image of size N x N is defined as in equation 4.

    F(u, v) = (1/N) Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x, y) cas(2π(ux + vy)/N),  where cas θ = cos θ + sin θ    (4)

E. Discrete Walsh Transform (DWT)
    The Walsh transform [22] has become quite useful in applications of image processing [23] [24]. Walsh functions were established as a set of normalized orthogonal functions, analogous to sine and cosine functions, but having uniform values of ±1 throughout their segments. The Walsh transform matrix is defined as a set of N rows, denoted Wj, for j = 0, 1, ..., N − 1, which have the following properties:

    •  Wj takes on the values +1 and −1.
    •  Wj[0] = 1 for all j.
    •  Wj × [Wk]^t = 0 for j ≠ k, and Wj × [Wk]^t = N for j = k.
    •  Wj has exactly j zero crossings, for j = 0, 1, ..., N − 1.

    Each row Wj is even (when j is even) or odd (when j is odd) with respect to its midpoint.

                        III.    ROW MEAN VECTOR
    The row mean vector [25] [26] is the set of averages of the intensity values of the respective rows, as shown in equation 5.

    Row mean vector = [Avg(Row 1), Avg(Row 2), ..., Avg(Row N)]^t    (5)

                        IV.    PROPOSED ALGORITHM
    The image database is divided into a training set and a testing set. The feature vector of each training/testing image is calculated. Given an image to be classified from the testing set, a nearest neighbor classifier compares it against the images of the training set, in order to identify the most similar image and consequently the correct class. Euclidean and Manhattan distance is used as the similarity measure.

A. Generation of feature vector
    1.  For each color image f(x, y), generate its three color (R, G, and B) planes fR(x, y), fG(x, y) and fB(x, y) respectively.
    2.  Apply a transform T (DCT, DFT, DST, Hartley, Walsh) on the columns of the three image planes, as given in equations 6 to 8, to get the column transformed images.

        [T] × [fR(x, y)] = FR(x, v)    (6)
        [T] × [fG(x, y)] = FG(x, v)    (7)
        [T] × [fB(x, y)] = FB(x, v)    (8)

    3.  Calculate the row mean vector of each column transformed image.
    4.  Make a feature vector of size 75 by fusing the row mean vectors of the R, G, and B planes: take the first 25 values from the R plane, followed by the first 25 values from the G plane, followed by the first 25 values from the B plane.
    5.  Do the above process for the training images to generate the feature database.

    Feature vector sizes of 150 (50R + 50G + 50B), 225 (75R + 75G + 75B), 300 (100R + 100G + 100B), 450 (150R + 150G + 150B), and 768 (256R + 256G + 256B) are also considered to generate feature vectors.
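Steps 1 to 5 above can be sketched in Python with NumPy (a minimal illustration, not the authors' MATLAB implementation; the DCT of equation 2 is used as the column transform T, and the names `dct_matrix` and `feature_vector` are illustrative):

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix T whose rows are the equation-2 basis:
    # T[u, x] = alpha(u) * cos((2x + 1) u pi / 2n)
    T = np.zeros((n, n))
    for u in range(n):
        alpha = np.sqrt(1.0 / n) if u == 0 else np.sqrt(2.0 / n)
        T[u, :] = alpha * np.cos((2 * np.arange(n) + 1) * u * np.pi / (2 * n))
    return T

def feature_vector(img, k=25):
    # img: H x W x 3 array (R, G, B planes). Column-transform each plane
    # (T x plane, equations 6-8), take the row mean of the transformed
    # plane (equation 5), keep the first k values per plane, and fuse the
    # three parts into one feature vector of length 3k (75 for k = 25).
    T = dct_matrix(img.shape[0])
    parts = []
    for c in range(3):
        col_transformed = T @ img[:, :, c].astype(float)  # column-wise DCT
        row_mean = col_transformed.mean(axis=1)           # row mean vector
        parts.append(row_mean[:k])
    return np.concatenate(parts)
```

Passing k = 50, 75, 100, 150 or 256 yields the larger feature vector sizes listed above.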

B.    Classification
    1.  In this phase, for the given testing images, their feature vectors are generated.
    2.  The Euclidean distance and Manhattan distance are calculated between each testing image feature vector and each training image feature vector.
    3.  The minimum distance indicates the most similar training image for that testing image. The given testing image is then assigned to the corresponding class.

    We have also considered another training set where each feature vector is the average of the feature vectors of all training images of a particular class.
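The classification steps above amount to a one-nearest-neighbor search, which can be sketched as follows (a minimal NumPy illustration; the function name `classify` and the array layout are assumptions, not the authors' code):

```python
import numpy as np

def classify(test_fv, train_fvs, train_labels, metric="euclidean"):
    # 1-nearest-neighbor: the training feature vector at minimum distance
    # from the test feature vector decides the predicted class.
    diffs = np.asarray(train_fvs, dtype=float) - np.asarray(test_fv, dtype=float)
    if metric == "euclidean":
        dists = np.sqrt((diffs ** 2).sum(axis=1))
    else:  # Manhattan (city block) distance
        dists = np.abs(diffs).sum(axis=1)
    return train_labels[int(np.argmin(dists))]
```

With the averaged training set, `train_fvs` simply holds one averaged vector per class (8 rows instead of 40), and the same function applies unchanged.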
                        V.    RESULTS
    The implementation of the proposed technique is done in MATLAB 7.0 on a computer with an Intel Core 2 Duo T8100 processor (2.1 GHz) and 2 GB RAM. The proposed technique is tested on the Wang image database, created by the group of Professor Wang at the Pennsylvania State University [27]. The experiment is carried out on 8 classes of the Wang database. For testing, 30 images from each class were used, and for training, 5 images from each class were used; thus there were 240 testing images and 40 training images in total, and the training set contains 40 feature vectors. The proposed method is also implemented using another training set that contains 8 feature vectors, where each feature vector is the average of the feature vectors of all training images of the same class. Fig. 1 shows the sample database of training images and Fig. 2 shows the sample database of testing images. Each image is resized to 256 x 256.

Figure 1. Sample database of training images
Figure 2. Sample database of testing images

    Table I and Table II show the number of correctly classified images (out of 240) for the different transforms over different vector sizes for the two training sets. The correctness of classification is checked visually.

    With the average training set, the Walsh transform gives better performance than the other transforms when Manhattan distance is the similarity measure. If Euclidean distance is used, then a feature vector size of 768 gives marginally better performance for all transforms. Considering the results shown in Table I, the best results are obtained with Manhattan distance as the similarity measure; DST, Walsh and DFT gave better performance, in that order.

    The individual class classification performance using these two similarity measures is shown in Table III to Table VI. For this purpose the vector size is selected based on the performance. For the Euclidean distance criterion, the number of correctly classified images in each class for the different transforms over the two training sets is shown in Table III and Table IV, with feature vector size 768. If the Manhattan distance criterion is used, the performance of the transforms varies with the feature vector size; in most cases, vector size 225 gives better performance. Using this vector size, the number of correctly classified images in each class for the different transforms over the two training sets is shown in Table V and Table VI.
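The second training set described above (one averaged feature vector per class) can be built from the per-image vectors as follows (a minimal sketch; the function name is illustrative):

```python
import numpy as np

def averaged_training_set(train_fvs, train_labels):
    # Collapse the per-image training vectors (40 = 8 classes x 5 images
    # in the paper) to a single mean feature vector per class (8 vectors).
    fvs = np.asarray(train_fvs, dtype=float)
    labels = np.asarray(train_labels)
    classes = sorted(set(train_labels))
    means = np.stack([fvs[labels == c].mean(axis=0) for c in classes])
    return means, classes
```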





TABLE I.    NUMBER OF CORRECTLY CLASSIFIED IMAGES (OUT OF 240) FOR DFT, DCT, DST, HARTLEY AND WALSH OVER DIFFERENT FEATURE VECTOR SIZES USING EUCLIDEAN AND MANHATTAN DISTANCE. TRAINING SET: FEATURE VECTORS OF 5 IMAGES FROM EACH CLASS

(Distance: E = Euclidean, M = Manhattan; columns are feature vector sizes)

Transform   Dist.    75    150    225    300    450    768
DFT          E      155    159    159    159    160    167
DFT          M      166    163    169    169    164    163
DCT          E      151    156    159    162    162    163
DCT          M      163    167    169    170    164    163
DST          E      159    160    160    160    161    160
DST          M      164    173    176    174    168    161
HARTLEY      E      148    150    151    151    152    158
HARTLEY      M      154    162    165    167    161    161
WALSH        E      149    152    155    156    160    161
WALSH        M      160    162    166    170    171    170

TABLE II.    NUMBER OF CORRECTLY CLASSIFIED IMAGES (OUT OF 240) FOR DFT, DCT, DST, HARTLEY AND WALSH OVER DIFFERENT FEATURE VECTOR SIZES USING EUCLIDEAN AND MANHATTAN DISTANCE. TRAINING SET: AVERAGE OF FEATURE VECTORS OF 5 IMAGES FROM EACH CLASS

(Distance: E = Euclidean, M = Manhattan; columns are feature vector sizes)

Transform   Dist.    75    150    225    300    450    768
DFT          E      155    160    162    162    161    166
DFT          M      175    173    171    169    164    156
DCT          E      156    158    157    159    160    160
DCT          M      171    172    169    168    163    156
DST          E      161    160    160    159    161    161
DST          M      161    162    168    169    169    164
HARTLEY      E      159    162    161    162    163    167
HARTLEY      M      169    168    172    171    168    164
WALSH        E      155    157    158    158    158    159
WALSH        M      179    175    173    169    169    159



TABLE III.    TOTAL CLASSIFIED IMAGES (OUT OF 30 IMAGES) IN EACH CLASS FOR DIFFERENT TRANSFORMS. VECTOR SIZE: 768, DISTANCE CRITERION: EUCLIDEAN DISTANCE, TRAINING SET: FEATURE VECTORS OF 5 IMAGES FROM EACH CLASS

Classes          DFT    DCT    DST    HARTLEY    WALSH
Beach             15     14     11       14        11
Monument          10     13      7        9         8
Bus               24     21     27       22        25
Dinosaur          30     30     30       30        30
Elephant          24     23     23       24        24
Flower            27     25     26       27        25
Horse             26     28     26       25        28
Snow Mountain     11      9     10        7        10

TABLE IV.    TOTAL CLASSIFIED IMAGES (OUT OF 30 IMAGES) IN EACH CLASS FOR DIFFERENT TRANSFORMS. VECTOR SIZE: 768, DISTANCE CRITERION: EUCLIDEAN DISTANCE, TRAINING SET: AVERAGE OF FEATURE VECTORS OF 5 IMAGES FROM EACH CLASS

Classes          DFT    DCT    DST    HARTLEY    WALSH
Beach             20     18     14       19        17
Monument           3      4      9        6         5
Bus               23     24     25       23        24
Dinosaur          30     30     30       30        30
Elephant          25     22     24       25        24
Flower            30     30     29       30        30
Horse             16     17     17       16        16
Snow Mountain     19     15     13       18        13

TABLE V.    TOTAL CLASSIFIED IMAGES (OUT OF 30 IMAGES) IN EACH CLASS FOR DIFFERENT TRANSFORMS. VECTOR SIZE: 225, DISTANCE CRITERION: MANHATTAN DISTANCE, TRAINING SET: FEATURE VECTORS OF 5 IMAGES FROM EACH CLASS

Classes          DFT    DCT    DST    HARTLEY    WALSH
Beach             23     21     19       24        23
Monument           9     11      9       11         8
Bus               25     20     27       22        24
Dinosaur          30     30     30       30        30
Elephant          22     23     20       22        21
Flower            30     28     30       30        25
Horse             22     23     24       19        25
Snow Mountain      8     13     17        7        10

TABLE VI.    TOTAL CLASSIFIED IMAGES (OUT OF 30 IMAGES) IN EACH CLASS FOR DIFFERENT TRANSFORMS. VECTOR SIZE: 225, DISTANCE CRITERION: MANHATTAN DISTANCE, TRAINING SET: AVERAGE OF FEATURE VECTORS OF 5 IMAGES FROM EACH CLASS

Classes          DFT    DCT    DST    HARTLEY    WALSH
Beach             24     23     16       24        26
Monument           9      9      7       11         6
Bus               24     25     26       25        28
Dinosaur          30     30     30       30        30
Elephant          21     18     21       21        19
Flower            30     30     30       30        30
Horse             20     22     22       20        22
Snow Mountain     13     12     16       11        12
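As a consistency check, summing the per-class counts in Table V reproduces the corresponding totals in Table I (Manhattan distance, vector size 225):

```python
# Per-class counts from Table V (Manhattan distance, vector size 225,
# training set: feature vectors of 5 images from each class), in the order
# Beach, Monument, Bus, Dinosaur, Elephant, Flower, Horse, Snow Mountain.
table_v = {
    "DFT":     [23, 9, 25, 30, 22, 30, 22, 8],
    "DCT":     [21, 11, 20, 30, 23, 28, 23, 13],
    "DST":     [19, 9, 27, 30, 20, 30, 24, 17],
    "HARTLEY": [24, 11, 22, 30, 22, 30, 19, 7],
    "WALSH":   [23, 8, 24, 30, 21, 25, 25, 10],
}
totals = {t: sum(counts) for t, counts in table_v.items()}
# These totals match the 225-column Manhattan rows of Table I:
# DFT 169, DCT 169, DST 176, HARTLEY 165, WALSH 166.
accuracy = {t: n / 240 for t, n in totals.items()}  # DST: 176/240, about 73.3%
```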




    The comparisons of the performances of the different transforms are shown in Fig. 3 to Fig. 6 (number of correctly classified images vs. feature vector size).

Figure 3. Performance of the different transforms, Euclidean distance criterion (training set: feature vectors of 5 images from each class)
Figure 4. Performance of the different transforms, Manhattan distance criterion (training set: feature vectors of 5 images from each class)
Figure 5. Performance of the different transforms, Euclidean distance criterion (training set: average of feature vectors of 5 images from each class)
Figure 6. Performance of the different transforms, Manhattan distance criterion (training set: average of feature vectors of 5 images from each class)

                        VI.    CONCLUSIONS
    This paper proposes preparing the feature vector from a column transform of the image and using it for image classification. This gives a considerable saving in computational time compared to the full transform. The paper investigates the performance of different transforms. The performance is tested thoroughly using different criteria: the distance measure (Euclidean distance, Manhattan distance), the size of the feature vector (75, 150, 225, 300, 450 and 768) and the training set (feature vectors, average of feature vectors). Conclusions drawn from the results of the individual class classification are given in Table VII.
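The pipeline summarized above, applying a transform column-wise to the image, truncating the result to form the feature vector, and assigning the class whose training vector is nearest under a Manhattan or Euclidean distance, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the orthonormal DCT is used as the example transform, and the number of retained rows and the toy class-mean setup are assumptions made for the example.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix; T @ x gives the 1-D DCT of vector x.
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    T = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    T[0, :] = 1.0 / np.sqrt(n)
    return T

def column_feature(img, rows_kept):
    # Apply the transform to every column at once (one matrix product,
    # cheaper than the full 2-D transform) and keep only the first few
    # rows, where most of the energy concentrates.
    T = dct_matrix(img.shape[0])
    return (T @ img)[:rows_kept, :].ravel()

def classify(feature, class_means):
    # Assign the class whose (average) training vector is nearest
    # under the Manhattan (city-block) distance.
    return min(class_means, key=lambda c: np.abs(feature - class_means[c]).sum())
```

Swapping `np.abs(...).sum()` for `np.sqrt(((...) ** 2).sum())` gives the Euclidean variant; using the average of several per-image features as each entry of `class_means` corresponds to the "average of feature vectors" training set.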
 TABLE VII.      BEST 3 CLASS PERFORMANCES FOR DIFFERENT CRITERIA

  Training Set                   Similarity   Best 3 performer classes
                                 Measure
  Feature vectors of 5 images    Euclidean    Dinosaur (100%), Horse (88.66%), Flower (86.66%)
  from each class                Manhattan    Dinosaur (100%), Flower (95.33%), Bus (78.66%)
  Average of feature vectors     Euclidean    Dinosaur (100%), Flower (99.33%), Elephant (80%)
  of 5 images from each class    Manhattan    Dinosaur (100%), Flower (100%), Bus (85.33%)

    Results also show that the training set containing averages of feature vectors gives better results, and since these vectors are fewer in number, the computation is fast. It is also seen that Manhattan distance gives higher performance for small feature vector sizes when compared with the Euclidean distance criterion.

                            REFERENCES
[1]  M. J. Swain and D. H. Ballard, "Color indexing," International Journal of Computer Vision, vol. 7, no. 1, pp. 11-32, 1991.
[2]  A. K. Jain and A. Vailaya, "Image retrieval using color and shape," Pattern Recognition, vol. 29, no. 8, pp. 1233-1244, 1996.
[3]  F. Mokhtarian and S. Abbasi, "Shape similarity retrieval under affine transforms," Pattern Recognition, vol. 35, pp. 31-41, 2002.
[4]  B. S. Manjunath and W. Y. Ma, "Texture features for browsing and retrieval of image data," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 8, pp. 837-842, 1996.
[5]  Z. Liang, H. Fu, Z. Chi, and D. Feng, "Image Pre-Classification Based on Saliency Map for Image Retrieval," Proc. of the IEEE International Conference on Information, Communications and Signal Processing, pp. 1-5, Dec 2009.
[6]  F. Siraj, M. Salahuddin, and S. Yusof, "Digital Image Classification for Malaysian Blooming Flower," Proc. of the IEEE Second International Conference on Computational Intelligence, Modelling and Simulation (CIMSiM), pp. 33-38, Bali, Sept 2010.
[7]  D. Bashish, M. Braik, and S. Bani-Ahmad, "A Framework for Detection and Classification of Plant Leaf and Stem Diseases," Proc. of the IEEE International Conference on Signal and Image Processing (ICSIP), pp. 113-118, Chennai, Dec 2010.
[8]  H. B. Kekre, T. K. Sarode, and M. S. Ugale, "Performance Comparison of Image Classifier Using DCT, Walsh, Haar and Kekre's Transform," International Journal of Computer Science and Information Security (IJCSIS), vol. 9, no. 7, 2011.
[9]  M. Szummer and R. W. Picard, "Indoor-Outdoor Classification," IEEE International Workshop on Content-Based Access of Image and Video Databases, in conjunction with ICCV'98, pp. 384-390, Jan 1998.
[10] O. Chapelle, P. Haffner, and V. Vapnik, "Support vector machines for histogram-based image classification," IEEE Transactions on Neural Networks, vol. 10, pp. 1055-1064, 1999.
[11] S. Agrawal, N. Verma, P. Tamrakar, and P. Sircar, "Content Based Color Image Classification using SVM," Proc. of the IEEE International Conference on Information Technology: New Generations (ITNG), pp. 1090-1094, Las Vegas, April 2011.
[12] M. Lotfi, A. Solimani, A. Dargazany, H. Afzal, and M. Bandarabadi, "Combining wavelet transforms and neural networks for image classification," IEEE Symposium on System Theory (SSST), pp. 44-48, Aug 2009.
[13] S. Sadek, A. Hamadi, B. Michaelis, and U. Sayed, "Robust Image Classification Using Multi-level Neural Networks," Proc. of the IEEE International Conference on Intelligent Computing and Intelligent Systems, vol. 4, pp. 180-183, Shanghai, Dec 2009.
[14] J. Z. Wang, J. Li, and G. Wiederhold, "SIMPLIcity: semantics-sensitive integrated matching for picture libraries," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 9, pp. 947-963, 2001.
[15] E. O. Brigham and R. E. Morrow, "The Fast Fourier Transform," IEEE Spectrum, vol. 4, no. 12, pp. 63-70, Dec. 1967.
[16] N. Ahmed, T. Natarajan, and K. R. Rao, "Discrete Cosine Transform," IEEE Transactions on Computers, vol. C-23, pp. 90-93, Jan 1974.
[17] H. B. Kekre, T. K. Sarode, and S. D. Thepade, "Color-Texture Feature based Image Retrieval using DCT applied on Kekre's Median Codebook," International Journal on Imaging (IJI), vol. 2, no. A09, pp. 55-65, Autumn 2009. Available online at www.ceser.res.in/iji.html (ISSN: 0974-0627).
[18] S. A. Martucci, "Symmetric convolution and the discrete sine and cosine transforms," IEEE Transactions on Signal Processing, vol. 42, no. 5, pp. 1038-1051, 1994.
[19] H. B. Kekre and D. Mishra, "Feature Extraction of Color Images using Sectorization of Discrete Sine Transform," IJCA Proceedings on International Conference and Workshop on Emerging Trends in Technology (ICWET), vol. 4, pp. 27-32, 2011.
[20] R. V. L. Hartley, "A More Symmetrical Fourier Analysis Applied to Transmission Problems," Proceedings of the IRE, vol. 30, pp. 144-150, Mar. 1942.
[21] R. P. Millane, "Analytical properties of the Hartley Transform and its Implications," Proceedings of the IEEE, vol. 82, no. 3, pp. 413-428, Mar. 1994.
[22] J. L. Walsh, "A Closed Set of Normal Orthogonal Functions," American Journal of Mathematics, vol. 45, pp. 5-24, 1923.
[23] H. B. Kekre and D. Mishra, "Density Distribution and Sector Mean with Zero-Sal and Highest-Cal Components in Walsh Transform Sectors as Feature Vectors for Image Retrieval," International Journal of Computer Science and Information Security (IJCSIS), vol. 8, no. 4, 2010, ISSN 1947-5500.
[24] H. B. Kekre and V. Bharadi, "Walsh Coefficients of the Horizontal & Vertical Pixel Distribution of Signature Template," Proc. of Int. Conference ICIP-07, Bangalore University, Bangalore, 10-12 Aug 2007.
[25] H. B. Kekre, S. D. Thepade, and A. Maloo, "Performance Comparison for Face Recognition using PCA, DCT & Walsh Transform of Row Mean and Column Mean," ICGST International Journal on Graphics, Vision and Image Processing (GVIP), vol. 10, no. II, pp. 9-18, June 2010.
[26] H. B. Kekre, T. Sarode, and S. D. Thepade, "DCT Applied to Row Mean and Column Vectors in Fingerprint Identification," Proc. of Int. Conf. on Computer Networks and Security (ICCNS), VIT, Pune, 27-28 Sept. 2008.
[27] J. Z. Wang, J. Li, and G. Wiederhold, "SIMPLIcity: Semantics-sensitive Integrated Matching for Picture LIbraries," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 9, pp. 947-963, 2001.

                            AUTHORS PROFILE

Dr. H. B. Kekre has received B.E. (Hons.) in Telecomm. Engineering from Jabalpur University in 1958, M.Tech (Industrial Electronics) from IIT Bombay in 1960, M.S.Engg. (Electrical Engg.) from University of Ottawa in 1965 and Ph.D. (System Identification) from IIT Bombay in 1970. He has worked as Faculty of Electrical Engineering and then HOD Computer Science and Engg. at IIT Bombay. For 13 years he was working as a professor and head in the Department of Computer Engg. at Thadomal Shahani Engineering College, Mumbai.



                                                                            96                                    http://sites.google.com/site/ijcsis/
                                                                                                                  ISSN 1947-5500
Now he is Senior Professor at MPSTME, SVKM's NMIMS University. He has guided 17 Ph.D.s, more than 100 M.E./M.Tech and several B.E./B.Tech projects. His areas of interest are Digital Signal Processing, Image Processing and Computer Networking. He has more than 450 papers in National/International Conferences and Journals to his credit. He was Senior Member of IEEE. Presently he is Fellow of IETE and Life Member of ISTE. Recently twelve students working under his guidance have received best paper awards and six research scholars have been conferred the Ph.D. degree by NMIMS University. Currently 7 research scholars are pursuing the Ph.D. program under his guidance.

Tanuja K. Sarode has received B.Sc. (Mathematics) from Mumbai University in 1996, B.Sc.Tech. (Computer Technology) from Mumbai University in 1999, and M.E. (Computer Engineering) from Mumbai University in 2004, and is currently pursuing Ph.D. from Mukesh Patel School of Technology, Management and Engineering, SVKM's NMIMS University, Vile-Parle (W), Mumbai, INDIA. She has more than 10 years of experience in teaching. She is currently working as Associate Professor in the Dept. of Computer Engineering at Thadomal Shahani Engineering College, Mumbai. She is a life member of IETE and ISTE, and a member of the International Association of Engineers (IAENG) and the International Association of Computer Science and Information Technology (IACSIT), Singapore. Her areas of interest are Image Processing, Signal Processing and Computer Graphics. She has more than 100 papers in National/International Conferences/journals to her credit.

Jagruti K. Save has received B.E. (Computer Engg.) from Mumbai University in 1996 and M.E. (Computer Engineering) from Mumbai University in 2004, and is currently pursuing Ph.D. from Mukesh Patel School of Technology, Management and Engineering, SVKM's NMIMS University, Vile-Parle (W), Mumbai, INDIA. She has more than 10 years of experience in teaching. She is currently working as Associate Professor in the Dept. of Computer Engineering at Fr. Conceicao Rodrigues College of Engg., Bandra, Mumbai. Her areas of interest are Image Processing, Neural Networks, Fuzzy Systems, Database Management and Computer Vision. She has 6 papers in National/International Conferences/journals to her credit.




