IJCSN International Journal of Computer Science and Network, Volume 2, Issue 3, June 2013
ISSN (Online) : 2277-5420   www.ijcsn.org


Compensatory Fuzzy Min-Max Neural Network for Object Recognition

1 Paras A. Tolia

1 Department of Computer Engineering, Sinhgad College of Engineering, University of Pune, Pune - 411 041, India



Abstract

Object recognition is an important component of computer vision. An object recognition system (ORS) is mainly concerned with determining the identity of the object in an image. Many ORS are available, but they are either computationally expensive or not invariant under rotation, translation and scale (RTS). Moreover, the fuzzy min-max neural network (FMNN) classifier suffers from classification and gradation errors. The object recognition system is divided into two steps, namely feature extraction and pattern classification. The feature extraction part consists of rotation, translation and scale invariant features. These features are used to train a fuzzy min-max neural network with compensatory neuron architecture (FMCN). FMCN uses hyperbox fuzzy sets to represent the pattern classes. The concept of compensatory neurons is inspired by the reflex system of the human brain, which takes over control in hazardous conditions. Compensatory neurons are activated when a test sample falls in the overlapped regions of different classes.

Keywords: Object recognition, Neural network, RTS, FMCN.

1. Introduction

Object recognition is an important part of computer vision. Humans recognize a multitude of objects in images with very little effort, even though the image of an object may vary with the point of view, appear at many different sizes and scales, or be translated or rotated. Object recognition systems are inspired by the way the human brain recognizes objects in an image [1].

Several methods already exist, viz. the fuzzy min-max classification neural network, the general fuzzy min-max neural network for clustering and classification, and others. Fuzzy min-max classification is built from hyperbox fuzzy sets. A hyperbox determines a region of the n-dimensional pattern space with full class membership. A hyperbox is defined by its min point and max point, and a membership function is defined with respect to these min and max points. The min points are stored in matrix V and the max points in matrix W. Hyperboxes are created and adjusted during the training phase, and in the test phase these hyperboxes and their membership functions are used to assign an input sample to a class. As the data become available, FMM creates a hyperbox and expands it up to the maximum allowed size; FMM then checks for overlap among different classes with an overlap test, and if an overlap exists, FMM eliminates it with a contraction method. After this contraction, however, a hyperbox can no longer provide full class membership over its original region.

An object recognition system finds objects in the real world from an image. Object recognition mainly involves two steps, feature extraction and pattern classification. In the feature extraction part, the features of the object are extracted. Pattern classification extracts the underlying structure of the data and then performs recognition.

The remaining part of the paper is organized as follows. Section 2 describes the feature extraction part, Section 3 explains the classification part, the compensatory neuron architecture is discussed in Section 4, and Section 5 concludes the work. References are cited at the end.

2. Feature Extraction

The feature extraction part extracts the features, where a feature is a quantitative description of the input. Feature extraction plays an important role in object recognition systems (ORS), since the information related to an object is contained within the extracted features. For feature extraction, the gray scale image is first converted into a binary image and the centroid is calculated. For an invariant ORS, feature extraction must be invariant to translation, rotation and scale [1]. The features used here are the normalized moment of inertia, the max to average ratio, the average to max-min difference ratio, radial coding and radial angles. To extract these features, the centroid of the object must be computed. The centroid (Cx, Cy) of a two-dimensional object is given by

    Cx = (1/N) Σ xi f(xi, yi),   Cy = (1/N) Σ yi f(xi, yi)        (1)

    f(x, y) = 1 if (x, y) is an object pixel, 0 otherwise         (2)

where xi, yi are the pixel co-ordinate values and N is the total number of object pixels.
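To make Eqs. (1)-(2) concrete, the following is a minimal sketch of the centroid computation, assuming the object is available as a binary NumPy mask (1 at object pixels, 0 elsewhere); the function name is illustrative and not taken from the paper.

```python
import numpy as np

def centroid(binary_img):
    """Centroid (Cx, Cy) of a binary object image, Eqs. (1)-(2)."""
    ys, xs = np.nonzero(binary_img)   # co-ordinates (yi, xi) of pixels where f(x, y) = 1
    n = xs.size                       # N: total number of object pixels
    cx = xs.sum() / n                 # Cx = (1/N) * sum(xi * f(xi, yi))
    cy = ys.sum() / n                 # Cy = (1/N) * sum(yi * f(xi, yi))
    return cx, cy
```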
A. Normalized Moment of Inertia

In general, the moment of inertia quantifies the inertia of a rotating object by considering its mass distribution. The moment of inertia (MI) is normally calculated by dividing the object into N small pieces of mass m1, m2, ..., mN, each piece at a distance di from the axis of rotation. MI is given by

    I = Σ mi di²                                                  (3)

For an object in a binary image, each pixel is treated as a unit piece (i.e. mi = 1). Due to the finite resolution of any digitized image, a rotated object may not conserve its number of pixels, so the moment of inertia may vary; the normalized moment of inertia reduces this problem and is invariant to translation, rotation and scale. The normalized moment of inertia of an object is computed as

    IN = (1/N²) Σ di² = (1/N²) Σ ((xi − Cx)² + (yi − Cy)²)        (4)

where (Cx, Cy) are the centroid co-ordinates, xi, yi are the object pixel co-ordinates, and di is the pixel distance from the centroid.
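A corresponding sketch of Eq. (4), reusing the hypothetical `centroid` helper shown earlier; again this assumes a binary NumPy mask and is only an illustration.

```python
def normalized_moment_of_inertia(binary_img):
    """Normalized moment of inertia, Eq. (4): IN = (1/N^2) * sum(di^2)."""
    ys, xs = np.nonzero(binary_img)
    cx, cy = centroid(binary_img)            # centroid from Eq. (1)
    d2 = (xs - cx) ** 2 + (ys - cy) ** 2     # squared distances di^2 from the centroid
    return d2.sum() / xs.size ** 2           # dividing by N^2 keeps the feature RTS-invariant
```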
B. Max to Average Length Ratio

The max to average length ratio is the ratio of the maximum distance (dmax) of the object pixels from the centroid to the average pixel distance (davg) from the centroid:

    MAR = dmax / davg                                             (5)

C. Average to Max-Min Difference Ratio

The average to max-min difference ratio is the ratio of the average pixel distance from the centroid (davg) to the difference between the maximum (dmax) and minimum (dmin) pixel distances from the centroid:

    AMMD = davg / (dmax − dmin)                                   (6)

D. Radial Coding and Radial Angles

The radial coding features are based on the fact that the circle is the only geometrical shape that is naturally and perfectly invariant to rotation. Radial coding (RC) is computed by counting the number of intensity changes on circular boundaries of some radius inside the object. This simple coding scheme extracts the topological characteristics of an object regardless of its position, orientation and size. Radial angles (RA) are found as follows (a sketch of both features is given after this list):

  1. Obtain the centroid of the object.
  2. Generate K equidistant concentric circles Ci around the centroid. The spacing is equal to the distance between the centroid and the furthest pixel of the object, divided by K.
  3. For each circular boundary, count the number of intensity changes (zero to one or one to zero) that occur in the image. These are the radial coding features.
  4. Find the largest angle (θ) between two successive intensity changes for every circle. These are the radial angles. If θ > π then take θ as 2π − θ. If there is no intensity change then take θ = 0.
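The distance-ratio features of Eqs. (5)-(6) and the radial coding/angle procedure can be sketched as below. This is a rough illustration under stated assumptions: the object is a binary NumPy mask, each circle is sampled at 360 points, and `centroid` is the hypothetical helper from Section 2; it is not the authors' implementation.

```python
def distance_ratio_features(binary_img):
    """Max-to-average ratio (Eq. 5) and average-to-max-min-difference ratio (Eq. 6)."""
    ys, xs = np.nonzero(binary_img)
    cx, cy = centroid(binary_img)
    d = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)    # pixel distances from the centroid
    mar = d.max() / d.mean()                        # MAR  = dmax / davg
    ammd = d.mean() / (d.max() - d.min())           # AMMD = davg / (dmax - dmin)
    return mar, ammd

def radial_coding_and_angles(binary_img, k=8):
    """Radial coding (intensity-change counts) and radial angles on k concentric circles."""
    ys, xs = np.nonzero(binary_img)
    cx, cy = centroid(binary_img)
    d = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
    spacing = d.max() / k                           # step 2: furthest-pixel distance / K
    theta = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
    codes, angles = [], []
    for i in range(1, k + 1):
        # Sample the binary image along circle Ci of radius i * spacing.
        px = np.clip(np.round(cx + i * spacing * np.cos(theta)).astype(int), 0, binary_img.shape[1] - 1)
        py = np.clip(np.round(cy + i * spacing * np.sin(theta)).astype(int), 0, binary_img.shape[0] - 1)
        ring = binary_img[py, px]
        changes = np.nonzero(ring != np.roll(ring, 1))[0]
        codes.append(len(changes))                  # step 3: radial coding feature
        if len(changes) == 0:
            angles.append(0.0)                      # step 4: no intensity change -> theta = 0
        else:
            gaps = np.diff(np.append(changes, changes[0] + theta.size)) * (2 * np.pi / theta.size)
            largest = gaps.max()                    # largest angle between successive changes
            angles.append(2 * np.pi - largest if largest > np.pi else largest)
    return codes, angles
```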
3. Classification

Pattern classification is the key element for extracting the underlying structure of the data [4]. A classifier is used to determine the input-output relationship. The neural network classifier creates classes by aggregating several smaller fuzzy sets into a single fuzzy set class. Fuzzy min-max classification neural networks are made up of hyperbox fuzzy sets. A hyperbox defines a region of the n-dimensional pattern space whose patterns have full class membership. A hyperbox is completely defined by its min point and max point, and a membership function is defined with respect to these hyperbox min-max points. The min-max (hyperbox) membership function combination defines a fuzzy set; hyperbox fuzzy sets are aggregated to form a single fuzzy set class, and the resulting structure fits naturally into a neural network framework. For this reason the classification system is called a fuzzy min-max classification neural network.

Fuzzy min-max classification neural network recall consists of computing the fuzzy union of the membership function values produced by each of the fuzzy set hyperboxes.
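The paper does not spell the membership function out. As a reference point, the sketch below implements the hyperbox membership function of Simpson's original FMNN [8] (with sensitivity parameter γ), which is the usual starting point; FMCN's own membership and compensation functions differ in detail, so this is an assumption, not the paper's formula.

```python
import numpy as np

def hyperbox_membership(a, v, w, gamma=4.0):
    """Membership of pattern `a` in the hyperbox with min point `v` and max
    point `w`, after Simpson's FMNN [8]: 1 inside the box, decaying towards 0
    as `a` moves away from it; gamma controls how fast it decays."""
    above = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, a - w)))
    below = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, v - a)))
    return np.mean(above + below) / 2.0
```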
The fuzzy min-max classification learning algorithm is a three-step process, described below.

A. Expansion:

In the expansion step, identify a hyperbox of the correct class that can expand, and expand it. If no expandable hyperbox can be found, add a new hyperbox for that class.

B. Overlap Test:

Determine whether an overlap exists between hyperboxes that belong to different classes.
C. Contraction:

If an overlap exists between two hyperboxes that belong to different classes, eliminate the overlap by adjusting each of the hyperboxes.
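To make the expansion / overlap-test / contraction cycle concrete, here is a minimal sketch of a fuzzy min-max trainer. It is an assumption-laden simplification rather than the paper's algorithm: patterns are assumed to be NumPy vectors scaled to [0, 1], the expansion criterion is checked per dimension instead of using Simpson's summed criterion [8], and contraction always splits the smallest-overlap dimension at its midpoint, which is only one of the cases handled in practice.

```python
import numpy as np

class FuzzyMinMaxSketch:
    """Simplified expansion / overlap-test / contraction cycle on hyperboxes
    stored as min points V, max points W and class labels."""

    def __init__(self, theta=0.3):
        self.theta = theta                         # maximum hyperbox size per dimension
        self.V, self.W, self.labels = [], [], []

    def train_sample(self, a, label):
        # Expansion: look for a hyperbox of the same class that can absorb `a`.
        for j, (v, w, c) in enumerate(zip(self.V, self.W, self.labels)):
            if c == label and np.all(np.maximum(w, a) - np.minimum(v, a) <= self.theta):
                self.V[j], self.W[j] = np.minimum(v, a), np.maximum(w, a)
                self._overlap_test_and_contract(j)
                return
        # No expandable hyperbox found: create a new point-sized hyperbox.
        self.V.append(np.asarray(a, dtype=float).copy())
        self.W.append(np.asarray(a, dtype=float).copy())
        self.labels.append(label)

    def _overlap_test_and_contract(self, j):
        # Overlap test against every hyperbox of a different class; contract if needed.
        for k, c in enumerate(self.labels):
            if c == self.labels[j]:
                continue
            lo = np.maximum(self.V[j], self.V[k])
            hi = np.minimum(self.W[j], self.W[k])
            if np.all(lo < hi):                    # the hyperboxes overlap in every dimension
                i = int(np.argmin(hi - lo))        # contract along the smallest-overlap dimension
                mid = (lo[i] + hi[i]) / 2.0
                if self.V[j][i] <= self.V[k][i]:   # box j lies below box k along dimension i
                    self.W[j][i], self.V[k][i] = mid, mid
                else:
                    self.V[j][i], self.W[k][i] = mid, mid
```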
In the fuzzy min-max neural network classifier with compensatory neurons (FMCN), overlapped regions are handled by compensation nodes (CN). The concept of CN is inspired by the reflex system of the human brain, which reacts in hazardous situations. Compensatory neurons (CNs) imitate this behavior by being activated whenever a test sample falls in the overlapped regions amongst different classes [2]. These neurons handle hyperbox overlap and containment more efficiently. The compensation nodes are divided into overlap compensation nodes (OCN) and containment compensation nodes (CCN). OCNs handle simple overlap regions, and CCNs handle overlaps in which one box is fully or partially contained in another box. Hyperbox overlapping is shown in Fig. 1.

Fig. 1 FMCN membership function for CN

FMNN retains information about the learned hyperboxes by storing their min-max points; these min-max points represent the acquired knowledge. The contraction process in the learning algorithm modifies these min-max points to remove the ambiguity between overlapped classes, which creates classification errors for the learned data itself. Fig. 2 depicts a hyperbox overlap case [2].

Fig. 2 Full containment problem

When the data shown in Fig. 3 are used to train the FMNN classifier, two hyperboxes are created with an overlap. To remove the overlap, the hyperboxes are contracted. Note that after contraction, training samples B and C are contained in the hyperboxes of classes 1 and 2, respectively. Thus, point C gets full membership of class 2 and only partial membership of class 1. This can be solved by the compensatory neuron architecture.

Fig. 3 Partial containment problem

FMCN is able to approximate the complex topology of the data more accurately; hence its performance is better in a single pass through the data [2]. FMCN also avoids the dependency of the classifier on the learning parameter to a larger extent than FMNN and GFMN.

4. Compensatory Neuron Architecture

The object recognition system uses a supervised classification technique with the compensatory neuron architecture. The concept of the compensatory neuron is inspired by the reflex system of the human brain, which takes over control in hazardous conditions.

Compensatory neurons (CNs) imitate this behavior by being activated whenever a test sample falls in the overlapped regions amongst different classes. These neurons handle hyperbox overlap and containment more efficiently.

The fuzzy min-max neural network classifier with compensatory neurons (FMCN) uses hyperbox fuzzy sets to represent the pattern classes. FMCN is able to learn the data online, in a single pass, with reduced classification and gradation errors. One of the good features of FMCN is that its performance is less dependent on the initialization of the expansion coefficient, i.e., the maximum hyperbox size [7]. The architecture of FMCN is shown in Fig. 4.
Fig. 4 FMCN architecture
The number of nodes in the input layer is equal to the dimension of the applied input vector Ah, where:

ah1 - ahn : input samples ∈ I^n,
a1 - an : input nodes,
b1 - bj : classification hyperbox nodes,
c1 - ck : class nodes,
d1 - dp : overlap compensation hyperbox nodes,
e1 - eq : containment compensation hyperbox nodes,
o1 - ok : overall compensation nodes.

The middle layer neurons and output layer nodes are divided into three parts: the Classifying Neuron (CLN) section, the Overlap Compensation Neuron (OCN) section, and the Containment Compensation Neuron (CCN) section. A neuron in the middle layer represents an n-dimensional hyperbox. The connections between an input node and a hyperbox node in the middle layer represent the min-max points (V, W). Hyperbox nodes in the OCN and CCN sections represent overlap and containment of hyperboxes in the CLN section, respectively. All middle layer neurons are created during the training process.

A. Classifying Neurons

A hyperbox node in the CLN section is created if a training sample belongs to a class that has not been encountered so far, or if the existing hyperboxes of that class cannot be expanded further to accommodate it. The connections between hyperbox and class nodes in the CLN section are represented by matrix U. A node in the classifying section is shown in Fig. 5.

Fig. 5 Node in CLN section

The connection between a hyperbox node bj and a class node ci is given by

    uji = 1 if bj ∈ ci,   uji = 0 if bj ∉ ci                      (7)
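As an illustration of Eq. (7) and of the recall rule mentioned in Section 3 (fuzzy union of hyperbox memberships), a small sketch follows; the array layout and function names are assumptions, not the paper's notation.

```python
import numpy as np

def build_u_matrix(hyperbox_classes, num_classes):
    """U[j, i] = 1 if hyperbox b_j belongs to class c_i, else 0 (Eq. 7)."""
    u = np.zeros((len(hyperbox_classes), num_classes))
    for j, class_index in enumerate(hyperbox_classes):
        u[j, class_index] = 1.0
    return u

def cln_class_outputs(memberships, u):
    """Class-node outputs: fuzzy union (max) of the memberships of the
    hyperboxes connected to each class node."""
    return (memberships[:, None] * u).max(axis=0)

# Example: three CLN hyperboxes assigned to classes 0, 1 and 0.
# build_u_matrix([0, 1, 0], 2) -> [[1, 0], [0, 1], [1, 0]]
```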
B. Overlap Compensation Neurons

A hyperbox node in the middle layer of the OCN section is created whenever the network faces an overlap; the OCN section takes care of the overlap problem. The connections between hyperbox and class nodes in the OCN section are represented by matrix Y. A node in the OCN section is shown in Fig. 6.

Fig. 6 Node in OCN section

The connection weights from a neuron dp to the ith and jth class nodes in the OCN section are given by

    ypi = 1 if dp ∈ ci ∩ cj (i ≠ j),   0 otherwise                (8)

C. Containment Compensation Neurons

A hyperbox node in the CCN section is created whenever a hyperbox of one class is contained fully or partially within a hyperbox of another class. The connections between hyperbox and class nodes in the CCN section are represented by matrix Z. A node in the CCN section is shown in Fig. 7.

Fig. 7 Node in CCN section

The connection weights from a neuron eq to a class node ci in the CCN section are given by

    zqi = 1 if eq ∈ ci ∩ cj (i ≠ j),   0 otherwise                (9)

The number of output layer nodes in the CLN section is the same as the number of classes learned. The number of class nodes in the OCN and CCN sections depends on the nature of the overlaps the network faces during the training phase.
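Eqs. (8)-(9) can be read the same way as Eq. (7): each compensation node is wired to the two classes whose hyperboxes produced the overlap or containment it represents, and it only reacts when the test sample actually falls inside that region. A hedged sketch follows, with the overlap/containment region stored simply as min/max corners (an assumption about the representation, not the paper's).

```python
def build_compensation_matrix(node_class_pairs, num_classes):
    """Y (OCN) or Z (CCN) connections, Eqs. (8)-(9): row p gets a 1 in the
    columns of the two classes c_i, c_j (i != j) involved in the overlap."""
    m = np.zeros((len(node_class_pairs), num_classes))
    for p, (i, j) in enumerate(node_class_pairs):
        m[p, i] = m[p, j] = 1.0
    return m

def compensation_node_active(sample, region_min, region_max):
    """A compensation node fires only if the test sample lies inside the
    stored overlap/containment region."""
    return bool(np.all(region_min <= sample) and np.all(sample <= region_max))
```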
5. Conclusion

The fuzzy min-max neural network with compensatory neuron architecture (FMCN) is based on compensatory neurons (CNs) inspired by the reflex system of the human brain, which takes over control in hazardous conditions. FMCN uses compensatory neurons to handle overlap regions: they are activated whenever a test sample falls in the overlapped regions amongst different classes. FMCN learns the data online, in a single pass, and maintains simplicity in learning. Further, OCNs handle overlap between the min and max points of two different classes, and CCNs handle the full or partial containment problem.

Acknowledgment

I humbly thank Prof. D. R. Pawar (Professor, Computer Engineering Department, Sinhgad College of Engineering, Pune, Maharashtra, India) for lending her invaluable expertise by refereeing this project.

References

[1]  A. V. Nandedkar and P. K. Biswas, "Object Recognition using Reflex Fuzzy Min-Max Neural Network with Floating Neurons."
[2]  A. V. Nandedkar and P. K. Biswas, "A Fuzzy Min-Max Neural Network Classifier with Compensatory Neuron Architecture," IEEE Transactions on Neural Networks, vol. 18, pp. 42-54, 2007.
[3]  B. Gabrys and A. Bargiela, "General Fuzzy Min-Max Neural Network for Clustering and Classification," IEEE Transactions on Neural Networks, vol. 11, no. 3, pp. 769-783, 2000.
[4]  R. O. Duda, P. E. Hart, et al., Pattern Classification, 2nd edition, John Wiley and Sons Inc., Singapore, 2001.
[5]  R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd edition, Pearson Education Pvt. Ltd., Delhi, 2002.
[6]  L. A. Zadeh, "Fuzzy Sets," Information and Control, vol. 8, pp. 338-353, 1965.
[7]  A. V. Nandedkar and P. K. Biswas, "A Fuzzy Min-Max Neural Network Classifier with Compensatory Neuron Architecture," 17th Int. Conf. on Pattern Recognition (ICPR 2004), Cambridge, UK, vol. 4, Aug. 2004, pp. 553-556.
[8]  P. K. Simpson, "Fuzzy Min-Max Neural Networks - Part 1: Classification," IEEE Trans. Neural Networks, vol. 3, no. 5, pp. 776-786, Sep. 1992.
[9]  S. J. Perantonis and P. J. G. Lisboa, "Translation, Rotation and Scale Invariant Pattern Recognition by High Order Neural Networks and Moment Classifiers," IEEE Transactions on Neural Networks, vol. 4, July 1993, pp. 276-283.
[10] D. Reza, P. Mostafa, et al., "M-FMCN: Modified Fuzzy Min-Max Classifier using Compensatory Neurons," Recent Researches in Artificial Intelligence and Database Management.
[11] C. H. Teh and R. T. Chin, "On Image Analysis by the Methods of Moments," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 10, July 1988, pp. 496-513.
[12] L. A. Torres-Mendez, J. C. Ruiz-Suarez, et al., "Translation, Rotation and Scale Invariant Object Recognition," IEEE Transactions on Systems, Man and Cybernetics, vol. 30, no. 1, February 2000, pp. 125-130.


Paras Tolia received the B.E. degree in Computer Science & Engineering from Visvesvaraya Technological University, Karnataka, in 2011 and is currently pursuing an M.E. in Computer Networks at Sinhgad College of Engineering, affiliated to the University of Pune. His current research interests include fuzzy neural networks and object recognition.

								