INTERNATIONAL JOURNAL OF COMPUTER ENGINEERING & TECHNOLOGY (IJCET)

ISSN 0976 – 6367 (Print)
ISSN 0976 – 6375 (Online)
Volume 4, Issue 5, September – October (2013), pp. 67-75
© IAEME: www.iaeme.com/ijcet.asp
Journal Impact Factor (2013): 6.1302 (Calculated by GISI)
www.jifactor.com




   A NEW APPROACH TO CONTEXT AWARE SALIENCY DETECTION
                USING GENETIC ALGORITHM

                              Geetkiran Kaur¹ and Parvinder Kaur²
          ¹M. Tech. Student, Department of Computer Science & Engg., SUSCET, Tangori
          ²Assistant Professor, Department of Computer Science & Engg., SUSCET, Tangori




ABSTRACT

        In this paper we present a new saliency detection model which detects not only the
dominant salient object but also the image regions that give some information about the scene.
Starting from an initial segmentation obtained by applying a multi-iteration multithresholding
algorithm, a morphology based edge detection method is applied. The proposed method then
applies the Hough transform to detect the energy content of the image, after which a genetic
algorithm is exploited to detect the salient region in the image.

Keywords: Saliency, Informative saliency, Content based saliency, context aware, Genetic
algorithms, Hough transform, multithresholding, computer vision, pattern recognition.

1. INTRODUCTION

        Saliency detection is the process of detecting the interesting visual information in an image.
It is fascinating that the human visual system can adapt to a wide range of environmental changes
automatically and quickly extract only the useful information needed from a complex scene; this
has led to the widespread view that such efficient information processing owes much to the human
attention mechanism.
        The main information of concern to the visual system is the brightness of the environment
and the depth, intensity, colour, shape and motion of objects. In recent years many visual attention
models have been proposed; although they differ in structure and method, a general conclusion has
been reached that a visual attention model consists of two aspects, bottom up and top down. The
bottom up aspect is concerned with the sensory data, that is, what an observer viewing freely
without any bias will notice in a scene. It takes sensory input such as the brightness, colour, motion,
depth and orientation of the scene into consideration to predict the attention spotlight. The top down
aspect deals with the mental and viewing condition of the viewer, such as his experiences and state
of mind; it relates to the reflection of human activities.
        At present, a significant number of saliency detection algorithms have been developed
around bottom up mechanisms. Some take local considerations into account, i.e. brightness,
intensity, colour, depth, etc.; some take global considerations, such as suppressing frequently
appearing features; and some use both local and global methods. Most aim to extract the most
salient object from the visual scene.
        But when we describe a picture, it is not only the dominant object but also its immediate
surroundings that must be taken into account to give a complete description. So what we actually
describe is the main object with its context. Goferman et al. [1] introduced context aware saliency.
We propose here a context aware visual model that uses edge information, energy distribution and
genetic algorithms to detect the salient objects along with the salient context; in other words, it
provides information about the surroundings of the salient object.
        A genetic algorithm is a search heuristic that mimics the process of natural evolution. This
heuristic is routinely used to generate useful solutions to optimization and search problems. In a
genetic algorithm, a population of candidate solutions (called individuals, creatures or phenotypes)
to an optimization problem is evolved towards better solutions. Each candidate solution has a set of
properties (its chromosomes or genotype) which can be mutated or altered; traditionally, solutions
are represented as strings of '0's and '1's. The evolution usually starts from a population of
randomly generated individuals and is an iterative process, each iteration producing a new
generation. In each generation the fitness of every individual is evaluated; the fitness is usually the
value of the objective function in the optimization problem being solved. The fittest individuals are
selected from the population and each selected genome is modified to form a new generation,
which is then used for the next iteration of the algorithm. The algorithm terminates when either a
maximum number of generations has been produced or a satisfactory fitness level has been reached
for the population.
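The loop described above can be sketched in a few lines of Python. This is an illustrative OneMax example, not the intensity based fitness function used later in the paper; the operator choices (tournament selection, single-point crossover, bit-flip mutation) and all parameter values are assumptions made only for the sketch.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30, generations=50,
                      crossover_rate=0.9, mutation_rate=0.02, seed=0):
    """Minimal binary-string GA: tournament selection, crossover, mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        # Tournament selection: the fitter of two random individuals survives.
        selected = [max(rng.choice(pop), rng.choice(pop), key=fitness)
                    for _ in range(pop_size)]
        # Single-point crossover between consecutive parents.
        children = []
        for i in range(0, pop_size, 2):
            p1, p2 = selected[i], selected[(i + 1) % pop_size]
            if rng.random() < crossover_rate:
                cut = rng.randrange(1, n_bits)
                children += [p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]]
            else:
                children += [p1[:], p2[:]]
        # Bit-flip mutation.
        for child in children:
            for j in range(n_bits):
                if rng.random() < mutation_rate:
                    child[j] = 1 - child[j]
        pop = children[:pop_size]
        best = max(pop + [best], key=fitness)  # keep the best seen so far
    return best

# OneMax fitness: count of 1-bits; the optimum is the all-ones string.
def onemax(bits):
    return sum(bits)
```

Swapping `onemax` for an image-derived objective (e.g. the intensity based fitness of Section 3) leaves the evolutionary loop unchanged.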

2. RELATED WORK

        Many visual attention models have been proposed for detecting saliency. Zhi Liu et al. [2]
proposed an image segmentation model aimed at salient object extraction: starting from an over-
segmented image, region merging is performed using a novel dissimilarity measure that considers
the impact of colour difference and an area factor, and a binary partition tree is generated to record
the merging sequence. Uvika et al. [3] proposed a morphology based edge detection method and a
binary partition tree for object extraction using intensity based region merging. Xiangyun Hu et al.
[4] proposed a simple method for measuring the saliency of texture and objects based on the edge
density and the spatial evenness of the edge distribution in the local window of each pixel. Xuejie
Zhang et al. [5] proposed a saliency detection model that starts from an over-segmented image:
segments that are similar and spread over the image receive low saliency, while segments that are
distinct in the whole image or in a local region receive high saliency. Zhenzhong Chen et al. [6]
proposed hybrid saliency detection using low level cues and the high level cues imposed by the
photographer. Shangwang Liu et al. [7] proposed an automatic salient region detection algorithm
that extends the graph based visual saliency model using pulse coupled neural networks (PCNN) to
implement well defined criteria for saliency detection.





3. IMPLEMENTATION

3.1 Proposed Algorithm

Step 1: Read the RGB image.




                                        Fig 1: Input Image

Step 2: Extract the red, green and blue components from the image.




       (a) Red Component             (b) Green component         (c) Blue Component

                                    Fig 2: Extracted Component

Step 3: Apply multithresholding and a morphological function to each component.




     (a) Red component            (b) Green Component        (c) Blue Component

                              Fig 3: Thresholded Components of the image
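Step 3 can be sketched with plain NumPy. The number of bands, the equally spaced thresholds and the 3x3 morphological gradient below are illustrative assumptions; the paper does not specify the exact multithresholding iteration or structuring element.

```python
import numpy as np

def multithreshold(channel, levels=4):
    """Quantize one colour channel into `levels` bands (equally spaced cuts)."""
    thresholds = np.linspace(0, 255, levels + 1)[1:-1]
    return np.digitize(channel, thresholds).astype(np.uint8)

def morph_edges(labels):
    """Morphological gradient over a 3x3 window: dilation minus erosion > 0."""
    p = np.pad(labels, 1, mode='edge')
    h, w = labels.shape
    stack = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return (stack.max(axis=0) - stack.min(axis=0) > 0).astype(np.uint8)
```

Running this on each of the R, G and B components and OR-ing the three edge maps gives one way of realising the recombination of Step 4.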





Step 4: Recombine the edges to form the required oversegmented result.




                                  Fig 4: Oversegmented Image

Step 5: To determine the energy content of the image, apply the Hough transform.




                                       Fig 5: Hough Image
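A minimal line-based Hough accumulator, as one way of realising the energy measurement of Step 5, might look as follows; the resolution of the (rho, theta) grid is an assumption:

```python
import numpy as np

def hough_accumulator(edges, n_theta=180):
    """Vote each edge pixel into (rho, theta) space; bin counts act as energy."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))          # largest possible |rho|
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int64)
    ys, xs = np.nonzero(edges)
    for t in range(n_theta):
        theta = np.deg2rad(t)
        rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int)
        np.add.at(acc[:, t], rhos + diag, 1)     # shift rho to a valid index
    return acc
```

Strong, structured edges concentrate votes in few bins, so high accumulator values mark high-energy regions of the image.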

Step 6: To normalize the energy from the Hough-generated image, the maximum energy value is
calculated for the image and then, iteratively, various energy levels are generated in the angle
range of 155 to 255.




                                    Fig 6: Normalized Image
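One plausible reading of the normalization step above, dividing the accumulator by its peak and quantizing into discrete energy levels, can be sketched as follows; the number of levels is an assumption, and the 155 to 255 angle range from the text is not modelled here:

```python
import numpy as np

def normalize_energy(acc, n_levels=8):
    """Scale the accumulator by its maximum, then quantize into energy levels."""
    peak = acc.max()
    norm = acc / peak if peak > 0 else acc.astype(float)
    levels = np.minimum((norm * n_levels).astype(int), n_levels - 1)
    return norm, levels
```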





Step 7: A genetic algorithm is applied to enhance the energy levels of the output image of the
previous step. An intensity based fitness function is used in this case.




                               Fig 7: Genetically Enhanced Image

Step 8: The output image now contains two parts, foreground (the salient part) and background.
The intensity of the foreground is increased and that of the background is decreased to further
improve the result.




                                   Fig 8: Gradient of the image
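The foreground/background intensity adjustment described above amounts to a masked contrast stretch; the gain and attenuation factors below are hypothetical values chosen only for illustration:

```python
import numpy as np

def stretch_contrast(img, mask, fg_gain=1.3, bg_gain=0.7):
    """Brighten pixels where mask == 1 (foreground), darken the rest."""
    out = img.astype(float)
    out[mask == 1] *= fg_gain
    out[mask == 0] *= bg_gain
    return np.clip(out, 0, 255).astype(np.uint8)
```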

Step 9: The global and local centres of focus are determined with the help of the energy content
and the gradient calculated above.




                                      Fig 9: Centre of focus
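A global centre of focus, in the sense of the step above, can be computed as the energy-weighted centroid of the image; the paper's exact formulation of the local centres is not specified, so only the global case is sketched:

```python
import numpy as np

def centre_of_focus(energy):
    """Energy-weighted centroid (row, col) of a non-negative energy map."""
    ys, xs = np.indices(energy.shape)
    total = energy.sum()
    return (float((ys * energy).sum() / total),
            float((xs * energy).sum() / total))
```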





Step 10: Focusing of the gradient is done from the centre of focus determined in the previous step,
to generate the final saliency map.




                                        Fig 10: Saliency Map


3.2 Flowchart for proposed algorithm

                         Image loading in memory
                                  ↓
                         Multithresholding for oversegmentation
                                  ↓
                         Hough transform as energy detection criterion
                                  ↓
                         Normalization of energy levels
                                  ↓
                         Genetic algorithm for energy level enhancement
                                  ↓
                         Increasing the intensity of the salient region
                         based on the energy content
                                  ↓
                         Predicting the centre of gravity using the gradient
                                  ↓
                         Gradient focusing from the centre of saliency
                                  ↓
                         Output saliency map
                                  ↓
                         Calculating parameters to compute the ROC curve

                            Fig 11: Flowchart of the Proposed Algorithm

4. EXPERIMENTAL RESULTS

        We consider edges to be the main source of information for the salient part. Taking into
consideration the energy density of the edges of the image, we can incorporate both local and
global features, as a region of high energy is a region that attracts the attention of the human eye.
The centre of focus can easily be detected from the energy content of the image. We have used the
Hough transform, which divides the image into various energy levels, and genetic algorithms to
enhance the result.



       In Fig 12 we compare our results with a local method and a global method: Fig 12(a) is the
input image, Fig 12(b) the saliency map from a local method [4], Fig 12(c) the saliency map from a
global method [8], and Fig 12(d) our approach.




     Fig 12: a) Input Image      b) Local Method [4]      c) Global Method [8]      d) Our Approach

5. QUANTITATIVE EVALUATION

        To obtain a quantitative evaluation we plot the ROC curve. First, the true positive rate
(TPR) and false positive rate (FPR) are calculated on the basis of the ground truth database. The
TPR measures how many of the positives in the ground truth are correctly marked positive by the
algorithm; the FPR, on the other hand, measures how many of the negatives in the ground truth are
incorrectly marked positive by the algorithm.
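Computed per threshold over a binarized saliency map, TPR and FPR can be sketched as follows; sweeping the threshold over the map's value range yields the points of the ROC curve:

```python
import numpy as np

def tpr_fpr(saliency, ground_truth, threshold):
    """TPR and FPR of a thresholded saliency map against a binary mask."""
    pred = saliency >= threshold
    gt = ground_truth.astype(bool)
    tp = int(np.sum(pred & gt))
    fp = int(np.sum(pred & ~gt))
    fn = int(np.sum(~pred & gt))
    tn = int(np.sum(~pred & ~gt))
    tpr = tp / (tp + fn) if tp + fn else 0.0   # hits among ground-truth positives
    fpr = fp / (fp + tn) if fp + tn else 0.0   # false alarms among negatives
    return tpr, fpr
```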

        A ROC space is defined with FPR and TPR as the x and y axes respectively, and depicts
the relative trade-off between true positives and false positives. The diagonal divides the ROC
space: points above the diagonal represent good classification results and points below it show
poor results. We have compared our results with 7 different state of the art algorithms [8], [9],
[10], [11], [12], [13], [14]. From the ROC curve plotted we find that our algorithm outperforms the
other state of the art algorithms, as our curve, shown in red, lies higher than those of the other
algorithms.




                         Fig 13: Receiver Operating Characteristic (ROC) Curve


6. CONCLUSION

        In this paper, we present a new saliency detection method which detects not only the salient
object but also the salient background, which conveys some meaning about the salient object. As
shown by the quantitative results, our approach, which considers the energy density of the image to
calculate the salient regions, outperforms most of the present state of the art saliency detection
algorithms.

REFERENCES

 1.   Stas Goferman, Lihi Zelnik-Manor, and Ayellet Tal, “Context Aware Saliency”, IEEE
      Transactions on Pattern Analysis and Machine Intelligence, Vol. 34, No. 10, October 2012.
 2.   Zhi Liu, Liquan Shen, Zhaoyang Zhang, “Unsupervised Image Segmentation Based on Analy-
      sis of Binary Partition Tree for Salient Object Extraction”, ELSEVIER, Signal Processing 91
      (2011) pp. 290-299.
 3.   Uvika and Kaur Sumeet, “Image Segmentation and Object Extraction using Binary Partition
      Tree”, IJCSC, Vol. 8, No. 1, January-June 2012, pp. 147-150.


 4.    Xiangyun Hu, Jiajie Shen, Jie Shan, and Li Pan “Local Edge Distributions for Detection of Sa-
       lient Structure Textures and Objects” IEEE GEOSCIENCE AND REMOTE SENSING LET-
       TERS, VOL. 10, NO. 3, MAY 2013.
 5.    Xuejie Zhang, Zhixiang Ren, Deepu Rajan, Yiqun Hu, “Salient Object Detection through
       Over-Segmentation”, 2012 IEEE International Conference on Multimedia and Expo.
 6.    Zhenzhong Chen, Junsong Yuan, and Yap-Peng Tan “Hybrid Saliency Detection for Images”
       IEEE SIGNAL PROCESSING LETTERS, VOL. 20, NO. 1, JANUARY 2013.
 7.    Shangwang Liu, Dongjian He, and Xinhong Liang “An Improved Hybrid Model for Automat-
       ic Salient Region Detection” Proceedings of the 2012 International Conference on Wavelet
       Analysis and Pattern Recognition, Xian, 15-17 July, 2012.
 8.    R. Achanta, S. Hemami, F. Estrada, and S. Susstrunk, “Frequency Tuned Salient Region
       Detection,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 1597-1604, 2009.
 9.    C. Guo, Q. Ma, and L. Zhang, “Spatio-Temporal Saliency Detection Using Phase Spectrum of
       Quaternion Fourier Transform,” Proc. IEEE Conf. Computer Vision and Pattern Recognition,
       pp. 1-8, 2008.
 10.   J. Harel, C. Koch, and P. Perona, “Graph-Based Visual Saliency,” Advances in Neural Infor-
       mation Processing Systems, vol. 19, pp. 545- 552, 2007.
 11.   X. Hou and L. Zhang, “Saliency Detection: A Spectral Residual Approach,” Proc. IEEE Conf.
       Computer Vision and Pattern Recognition, pp. 1-8, 2007.
 12.   L. Itti, C. Koch, and E. Niebur, “A Model of Saliency-Based Visual Attention for Rapid Scene
       Analysis,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 11, pp. 1254-
       1259, Nov. 1998.
 13.   T. Judd, K. Ehinger, F. Durand, and A. Torralba, “Learning to Predict Where Humans Look,”
       Proc. IEEE Int’l Conf. Computer Vision, pp. 2106-2113, 2009.
 14.   E. Rahtu, J. Kannala, M. Salo, and J. Heikkilä, “Segmenting Salient Objects from Images and
       Videos,” Proc. 11th European Conf. Computer Vision, pp. 366-379, 2010.
 15.   Prof. S.V.M.G.Bavithiraja and Dr.R.Radhakrishnan, “Power Efficient Context-Aware Broad-
       casting Protocol for Mobile Ad Hoc Network”, International Journal of Computer Engineering
       & Technology (IJCET), Volume 3, Issue 1, 2012, pp. 81 - 96, ISSN Print: 0976 – 6367,
       ISSN Online: 0976 – 6375.
 16.   Shameem Akthar, Dr. D Rajaylakshmi and Dr. Syed Abdul Sattar, “A Modified Pso Based
       Graph Cut Algorithm for the Selection of Optimal Regularizing Parameter in Image
       Segmentation”, International Journal of Advanced Research in Engineering & Technology
       (IJARET), Volume 4, Issue 3, 2013, pp. 273 - 279, ISSN Print: 0976-6480, ISSN Online:
       0976-6499.
 17.   Gaganpreet Kaur and Dr. Dheerendra Singh, “Pollination Based Optimization for Color Image
       Segmentation”, International Journal of Computer Engineering & Technology (IJCET),
       Volume 3, Issue 2, 2012, pp. 407 - 414, ISSN Print: 0976 – 6367, ISSN Online: 0976 – 6375.



