INTERNATIONAL JOURNAL OF COMPUTER ENGINEERING & TECHNOLOGY (IJCET)

ISSN 0976 – 6367(Print)
ISSN 0976 – 6375(Online)
Volume 4, Issue 2, March – April (2013), pp. 632-641
© IAEME: www.iaeme.com/ijcet.asp
Journal Impact Factor (2013): 6.1302 (Calculated by GISI)
www.jifactor.com




     RECOGNITION OF BASIC KANNADA CHARACTERS IN SCENE
        IMAGES USING EUCLIDEAN DISTANCE CLASSIFIER

                            M.M.Kodabagi1, Shridevi.B.Kembhavi2
    1
       Department of Computer Science and Engineering, Basaveshwar Engineering College,
                             Bagalkot-587102, Karnataka, India
     2
      Department of Computer Science and Engineering, Basaveshwar Engineering College,
                             Bagalkot-587102, Karnataka, India



  ABSTRACT

          Character recognition in scene images is a challenging visual recognition problem.
  The field of scene text recognition has received growing attention due to the proliferation
  of digital cameras and the great variety of potential applications, including robotic vision,
  image retrieval, intelligent navigation systems, and assistance for visually impaired
  persons. In this paper, a novel methodology for recognition of basic Kannada characters in
  scene images is proposed. It is divided into two phases, namely training and testing. During
  training, zone wise horizontal and vertical profile based features are extracted from
  training samples and a knowledge base is created. During testing, the test image is
  processed to obtain features and recognized using a Euclidean distance classifier. The
  method has been evaluated on 460 Kannada character images captured with 2-megapixel mobile
  phone cameras at resolutions of 240x320, 600x800 and 900x1200, containing samples of
  different sizes, styles and degradations, and achieves an average recognition accuracy of
  91%. The system is efficient and insensitive to variations in size and font, noise, blur,
  dark background, slant/tilt and other degradations.

  Keywords: Character Recognition, Scene Images, Zone Wise Horizontal and Vertical
  Features, Euclidean Distance Classifier.





1.     INTRODUCTION

        Character recognition in scene images is a challenging visual recognition problem.
Until a few decades ago, research in the field of Optical Character Recognition was limited to
document images acquired with flatbed desktop scanners. The usability of such systems is
limited: they are not portable, because of the large size of the scanners and the need for a
computing system, and the shot speed of a scanner is slower than that of a digital camera.
Hence the field of scene text recognition has received growing attention due to the
proliferation of digital cameras and the great variety of potential applications, including
robotic vision, image retrieval, intelligent navigation systems and assistance for visually
impaired persons.
        Recognition of characters from scene images is a very complex problem. Natural
scene images usually suffer from low resolution and quality, perspective distortion, complex
background and foreground texture, varying font style and thickness, geometric distortions
introduced by the camera position, shadows, non-uniform illumination, low contrast, large
signal-dependent noise, and slant and tilt, as shown in Figure 1. The problem is significantly
more difficult than recognizing text in scanned document images.




                       Figure 1. Sample Images of Display Boards

        In this paper, a novel method for recognizing basic Kannada characters in natural
scene images is proposed. The proposed method uses zone wise horizontal and vertical
profile based features extracted from character images, and works in two phases. During the
training phase, zone wise horizontal and vertical profile based features are extracted from
training samples and a knowledge base is created. During testing, the test image is processed
to obtain features and recognized using a Euclidean distance classifier. The method is
evaluated on 460 Kannada character images captured with 2-megapixel mobile phone cameras
at resolutions 240x320, 600x800 and 900x1200, containing samples of different sizes, styles
and degradations, and achieves an average recognition accuracy of 91%. The system is
efficient and insensitive to variations in size and font, noise, blur, dark background,
slant/tilt and other degradations.
        The rest of the paper is organized as follows: a detailed survey of related work on
character recognition in scene images is given in Section 2. The proposed method is
presented in Section 3. Experimental results and discussions are given in Section 4.
Section 5 concludes the work and lists future directions.


2.     RELATED WORK

Some of the related works on character recognition of text in scene images are summarized
below:
        A robust approach for recognition of text embedded in natural scenes is given in [11].
The proposed method extracts features from intensity of an image directly and utilizes a local
intensity normalization to effectively handle lighting variations. Then, Gabor transform is
employed to obtain local features and linear discriminant analysis (LDA) is used for selection
and classification of features. The proposed method has been applied to a Chinese sign
recognition task. This work is further extended integrating sign detection component with
recognition [12]. The extended method embeds multi-resolution and multi-scale edge
detection, adaptive searching, color analysis, and affine rectification in a hierarchical
framework for sign detection. The affine rectification recovers deformation of the text
regions caused by an inappropriate camera view angle and significantly improve text
detection rate and optical character recognition.
         A framework that exploits both bottom-up and top-down cues for scene text
recognition at word level is presented in [13]. The method derives bottom-up cues from
individual character detections from the image. Then, a Conditional Random Field model is
built on these detections to jointly model the strength of the detections and the interactions
between them. It also imposes top-down cues obtained from a lexicon-based prior, i.e.
language statistics. The optimal word represented by the text image is obtained by
minimizing the energy function corresponding to the random field model. The method reports
significant improvements in accuracy on two challenging public datasets, namely Street
View Text and ICDAR 2003, compared to other methods. However, the reported accuracy is
only 73% and requires further improvement.
    The hierarchical multilayered neural network recognition method described in [14]
extracts oriented edges, corners, and end points for color text characters in scene image. A
method called selective metric clustering which mainly deals with color is employed in [15].
A fast lexicon based and discriminative semi-Markov models for recognizing scene text are
presented in [16, 17]. An object categorization framework based on a bag-of-visual-words
representation for recognition of character in natural scene images is described in [18]. The
effectiveness of raw grayscale pixel intensities, shape context descriptors, and wavelet
features to recognize the characters is evaluated in [19]. A method for unconstrained
handwritten Kannada vowels recognition based upon invariant moments is described in [20].
     The technique presented in [21] extracts stroke density, length, and number of strokes for
recognition of handwritten Kannada and English characters. The method found in [22] uses
modified invariant moments for recognition of multi-font/size Kannada vowels and numerals.
A model employed in [23] calculates features from connected components and
obtains 3k dimensional feature vectors for memory based recognition of camera-captured
characters. A character recognition method described in [24] uses local features for
recognition of multiple characters in a scene image.
    A thorough study of the literature shows that some of the reported methods [12, 14, 18, 23]
work with limited datasets, other cited works [16, 17, 18] report low recognition rates in the
presence of noise and other degradations, and very few works [18-22] pertain to recognition
of Kannada characters from scene images. Hence, more research is desirable to obtain a new
set of discriminating features suitable for Kannada text in scene images. In the current paper,
zone wise horizontal and vertical profile based features are


employed for recognition of Kannada characters in low resolution images. The detailed
description of the proposed methodology is given in the next section.

3.     PROPOSED METHODOLOGY

       The proposed method uses zone wise horizontal and vertical profile based features for
recognition of basic Kannada characters. It comprises several phases: preprocessing, feature
extraction, construction of the knowledge base, and character recognition using a Euclidean
distance classifier. The block diagram of the proposed model is shown in Figure 2. A detailed
description of each phase is given in the following subsections.

       3.1 Preprocessing
       The input character image is binarized, cropped to its bounding box, and resized to a
constant resolution of 30×30 pixels. Finally, the image is thinned.
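These preprocessing steps can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the paper does not name its binarization or resizing algorithms, so a global mean threshold and nearest-neighbour resampling stand in here, the thinning step is omitted, and `preprocess` is a hypothetical helper name.

```python
import numpy as np

def preprocess(gray, out_size=30):
    """Binarize, crop to the character's bounding box, and resize to
    out_size x out_size pixels (thinning is omitted in this sketch)."""
    # Binarize: foreground = 1 where the pixel is darker than the mean
    # intensity (stand-in for whatever thresholding the paper used).
    binary = (gray < gray.mean()).astype(np.uint8)
    # Bounding box of the foreground pixels.
    ys, xs = np.nonzero(binary)
    cropped = binary[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # Nearest-neighbour resize to the constant 30x30 resolution.
    h, w = cropped.shape
    rows = np.arange(out_size) * h // out_size
    cols = np.arange(out_size) * w // out_size
    return cropped[np.ix_(rows, cols)]
```

On a synthetic grey image containing a dark square, the result is a 30×30 binary array ready for zone-wise feature extraction.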


     Training Sample Images → Pre-Processing → Feature Extraction → Database

     Testing Character Image → Pre-Processing → Feature Extraction →
     Character Recognition Model → Recognised Character

                            Figure 2. Block Diagram of Proposed Model

        3.2 Feature extraction
        Features are extracted from the pre-processed image as follows. Each image is divided
into 15 horizontal zones and 15 vertical zones, where the size of each horizontal zone is 2×30
pixels and the size of each vertical zone is 30×2 pixels. The sum of all pixels in every zone is
taken as a feature value. Finally, 30 features are obtained and stored in a feature vector FV as
described in equations (1) to (5):

        FV = [V1, V2, ..., V15, H1, H2, ..., H15]                                   (1)

        FV(i) = Vi,                                                1 ≤ i ≤ 15       (2)

        FV(15 + i) = Hi,                                           1 ≤ i ≤ 15       (3)

where Hi is the feature value of the ith horizontal zone, computed as shown in (4), and Vi is
the feature value of the ith vertical zone, computed as shown in (5).





        Hi = Σ p(x, y),  summed over all pixels (x, y) of the ith horizontal zone        (4)

        Vi = Σ p(x, y),  summed over all pixels (x, y) of the ith vertical zone          (5)

where p(x, y) is the binary pixel value at position (x, y) and each zone encompasses the
chosen region of the character image. The dataset of such feature vectors obtained from
training samples is further used for construction of the knowledge base.
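The zone-wise sums of equations (1) to (5) can be illustrated with a short NumPy sketch. The function name `extract_features` is illustrative, and the ordering of vertical features before horizontal features follows the feature vector layout shown in Table 1:

```python
import numpy as np

def extract_features(img):
    """Zone-wise profile features: the 30x30 binary image is split into
    15 horizontal zones of size 2x30 and 15 vertical zones of size 30x2;
    each feature is the sum of the pixel values in one zone."""
    assert img.shape == (30, 30)
    horizontal = [int(img[2 * i:2 * i + 2, :].sum()) for i in range(15)]  # Hi, eq. (4)
    vertical = [int(img[:, 2 * i:2 * i + 2].sum()) for i in range(15)]    # Vi, eq. (5)
    # Vertical features first, then horizontal, matching Table 1.
    return vertical + horizontal
```

For an all-foreground 30×30 image every zone contains 60 pixels, so every feature value is 60; for a blank image all 30 features are 0.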

       3.3 Construction of knowledge base
       For the purpose of knowledge base construction, images are captured from display
boards of Karnataka Government offices, street names, institute names, shop names, building
names, company names, road signs, and traffic direction and warning signs, using 2-megapixel
mobile phone cameras. The images are captured at resolutions 240x320, 600x800 and
900x1200, at distances of 1 to 6 meters. All these images are used for evaluating the
performance of the proposed model. The images in the database are characterized by variable
font size and style, uneven thickness, minimal information context, small skew, noise,
perspective distortion and other degradations. The image database consists of 460 basic
Kannada character images of varying resolutions. From this database, 80% of the samples are
used for training. During training, features are extracted from all training samples and the
knowledge base is organized as a dataset of feature vectors, as depicted in (6). The stored
information in the knowledge base sufficiently characterizes all variations in the input.
Testing is carried out on all samples, i.e. the 80% trained and the 20% untrained samples.



        KB = {FV1, FV2, ..., FVN}                                                   (6)

        where KB is the knowledge base comprising the feature vectors of the training
samples, FVj is the feature vector of the jth image in KB, and N is the number of training
sample images. Sample character images are shown in Figure 3.
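Organizing the training feature vectors into the knowledge base of equation (6) can be sketched as follows. The paper does not specify a storage layout, so the `build_knowledge_base` helper and its (label, feature vector) input format are assumptions; the character label is kept alongside each record so the classifier can report which character the nearest record belongs to:

```python
import numpy as np

def build_knowledge_base(training_samples):
    """Collect training feature vectors into KB = {FV1, ..., FVN}.
    `training_samples` is an iterable of (label, 30-dim feature vector)
    pairs; returns the label list and an N x 30 feature matrix."""
    labels = [label for label, _ in training_samples]
    kb = np.array([fv for _, fv in training_samples], dtype=float)  # N x 30
    return labels, kb
```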




                           Figure 3. Sample Characters Images





        3.4 Training and Recognition using Euclidean Distance Classifier
        After the dataset is obtained and organized into the knowledge base of basic Kannada
character images, training and recognition tasks are carried out using the Euclidean distance
classifier. In the recognition phase, the test image is processed to obtain zone wise horizontal
and vertical profile based features, which are stored in a feature vector FV1 using equation (1).
The classifier then determines the distance between the test image and every record in the
knowledge base using the Euclidean distance measure in equation (7):

        Dj = sqrt( Σ (FV1(k) − FVj(k))² ),  summed over k = 1, ..., 30,   1 ≤ j ≤ N     (7)

        The record with the minimum distance to the test image is used to recognize the
character. The proposed methodology performs well under variability in font size, style and
dark backgrounds. However, the method requires sufficient training samples covering all
variations in font size, style and other degradations.
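The classification step of equation (7) amounts to a nearest-neighbour search over the knowledge base. A minimal sketch, with the function name and return convention chosen for illustration:

```python
import numpy as np

def recognize(test_fv, labels, kb):
    """Euclidean distance classifier: compute the distance Dj between the
    test feature vector FV1 and every record FVj in the knowledge base,
    then return the label of the nearest record and its distance."""
    diffs = kb - np.asarray(test_fv, dtype=float)
    distances = np.sqrt((diffs ** 2).sum(axis=1))  # Dj for 1 <= j <= N
    j = int(np.argmin(distances))                  # record at minimum distance
    return labels[j], float(distances[j])
```

A test vector close to one stored record is assigned that record's character label.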

4.     EXPERIMENTAL RESULTS AND ANALYSIS

        The proposed methodology has been evaluated on 460 basic Kannada character
images of varying font size and style, uneven thickness, dark background and other
degradations. The experimental results of processing a sample character image are described
in Section 4.1. The results of processing several other character images dealing with various
issues, and the overall performance of the system, are reported in Section 4.2.

       4.1 An Experimental Analysis for a Sample Kannada Character Image
       The character image with uneven thickness, uneven lighting conditions, and other
degradations given in Figure 4a is first preprocessed: it is binarized, a bounding box is
generated, and the image is resized to a constant size of 30x30 pixels and thinned, as shown
in Figures 4b, 4c and 4d.




                  a)                    b)                 c)              d)

      Figure 4.    a) Test image     b) Image with Bounding Box c) Resized image
                                     d) Thinned image

       Further, the image is divided into 15 vertical zones and 15 horizontal zones. Then, the
zone wise horizontal and vertical profile based features are computed for the image and
organized into a feature vector FV as in (1) to (5). The experimental values of all zones are
shown in Table 1.





 TABLE 1. Zone Wise Vertical and Horizontal Profile based Features of Sample Input
                               Image in Figure 4d


    Feature Vector     [Vertical_Features (3 12 6 6 8 10 8 4 10 8 4 4 5 14 0)
         FV            Horizontal_Features (0 7 9 8 8 8 11 10 12 4 4 4 6 11 0) ]


           FV= [3 12 6 6 8 10 8 4 10 8 4 4 5 14 0 0 7 9 8 8 8 11 10 12 4 4 4 6 11 0]

        The experimental values in Table 1 clearly depict the distribution of pixels in various
primitives of the character image. These distributions differ from character to character
because of the varying positions and shapes of the primitives of basic Kannada characters.
This is demonstrated for two sample images in Table 2.

  TABLE 2. Vertical and Horizontal Features of Two Sample Images Demonstrating
                            Pixel Distribution Patterns

  Character Image (glyph not reproduced) | Zone Wise Vertical and Horizontal Profile based Features
  Sample 1 | 21 5 5 8 8 16 4 4 20 4 9 14 4 5 9 0 25 8 8 7 9 8 8 8 11 5 4 6 7 22
  Sample 2 | 15 7 8 9 10 11 8 8 10 9 8 7 6 9 18 3 23 3 3 5 13 15 13 16 3 4 7 11 14 10

        The values in Table 2 clearly show that the feature values in most of the
corresponding zones of the two characters are distinct. The arrangement of these features into
a feature vector creates a pixel distribution pattern that makes samples distinguishable. It is
also observed that the proposed zone wise features accommodate uncertainty in the
appearance of the primitives of a character image. After extracting features from the test input
image in Figure 4a, the Euclidean distance classifier is used to recognize the character.

        4.2 An Experimental Analysis Dealing with Various Issues
        The proposed methodology has produced good results for scene images containing
basic Kannada characters of different size, font and alignment with varying background. Its
advantage lies in the low computation involved in the feature extraction and recognition
phases, since the feature set is reduced by summing consecutive values of the zone wise
horizontal and vertical profiles. The proposed work is robust and achieves an average
recognition accuracy of 91%. The overall performance of the system after conducting the
experimentation on the dataset is reported in Table 3.



                               TABLE 3. Overall System Performance
   (two panels side by side; each row gives, per character: number of samples tested, number
   of samples correctly recognized, number of samples misclassified, and % recognition
   accuracy — the character glyph images of the original table are not reproduced)
               10         10            0           100                       10          8            2            80

               10          9            1            90                       10          8            2            80

               10          9            1            90                       10         10            0           100

               10          8            2            80                       10          9            1            90

               10          9            1            90                       10          8            2            80

               10         10            0           100                       10          8            2            80
               10         10            0           100                       10          9            1            90

               10         10            0           100                       10         10            0           100

               10          8            2            80                       10          9            1            90

               10         10            0           100                       10          9            1            90
               10         10            0           100                       10          9            1            90

               10          9            1            90                       10         10            0           100

               10          8            2            80                       10          8            2            80

               10          9            1            90                       10         10            0           100

               10         10            0           100                       10          9            1            90

               10         10            0           100                       10          8            2            80

               10         10            0           100                       10          9            1            90
               10          9            1            90                       10          8            2            80

               10         10            0           100                       10          8            2            80

               10          9            1            90                       10          8            2            80

               10         10            0           100                       10          8            2            80

               10          8            2            80
               10         10            0           100

               10          8            2            80
               10         10            0           100




5.       CONCLUSION

        In this paper, a novel methodology for recognition of basic Kannada characters from
scene images is proposed. The method uses zone wise horizontal and vertical profile based
features and a Euclidean distance classifier. The system works in two phases: a training phase
and a testing phase. Exhaustive experimentation was done to analyze the horizontal and
vertical profile based features. The results obtained using zone wise horizontal and vertical
profile features with the Euclidean distance classifier are encouraging, and it has been
observed that the system is robust and insensitive to several challenges such as unusual fonts,
variable lighting conditions, noise, blur and orientation. The method was tested on 460
samples and gives a recognition accuracy of 91%. The proposed method can be extended
with new sets of features and classification algorithms, and to recognize the characters of
other languages.

REFERENCES

[1] Abowd Gregory D. Christopher G. Atkeson, Jason Hong, Sue Long, Rob Kooper, and
Mike Pinkerton, 1997, “CyberGuide: A mobile context-aware tour guide”, Wireless
Networks, 3(5): pp.421-433.
[2] Natalia Marmasse and Chris Schamandt, 2000, “Location aware information delivery
with comMotion”, In Proceedings of Conference on Human Factors in Computing Systems,
pp.157-171.
[3] Tollmar K. Yeh T. and Darrell T., 2004, “IDeixis - Image-Based Deixis for Finding
Location-Based Information”, In Proceedings of Conference on Human Factors in
Computing Systems (CHI’04), pp.781-782.
[4] Gillian Leetch, Dr. Eleni Mangina, 2005, “A Multi-Agent System to Stream Multimedia
to Handheld Devices”, Proceedings of the Sixth International Conference on Computational
Intelligence and Multimedia Applications (ICCIMA’05).
[5] Wichian Premchaiswadi, 2009, “A mobile Image search for Tourist Information System”,
Proceedings of 9th international conference on SIGNAL PROCESSING,
COMPUTATIONAL GEOMETRY and ARTIFICIAL VISION, pp.62-67.
[6] Ma Chang-jie, Fang Jin-yun, 2008, “Location Based Mobile Tour Guide Services
Towards Digital Dunhuang", International Archives of Photogrammetry, Remote Sensing and
Spatial Information Sciences, Vol. XXXVII, Part B4, Beijing.
[7] Shih-Hung Wu, Min-Xiang Li, Ping-Che Yang, Tsun Ku, 2010, “Ubiquitous Wikipedia
on Handheld Device for Mobile Learning”, 6th IEEE International Conference on Wireless,
Mobile, and Ubiquitous Technologies in Education, pp. 228-230.
[8] Tom Yeh, Kristen Grauman, and K. Tollmar, 2005, “A picture is worth a thousand
keywords: image-based object search on a mobile platform”, In Proceedings of Conference
on Human Factors in Computing Systems, pp.2025-2028.
[9] Fan X. Xie X. Li Z. Li M. and Ma. 2005, “Photo-to-search: using multimodal queries to
search web from mobile phones”, In proceedings of 7th ACM SIGMM international
workshop on multimedia information retrieval.
[10] Lim Joo Hwee, Jean Pierre Chevallet and Sihem Nouarah Merah, 2005, “SnapToTell:
Ubiquitous information access from camera”, Mobile human computer interaction with
mobile devices and services, Glasgow, Scotland.
[11] Jing Zhang, Xilin Chen, Andreas Hanneman, Jie Yang, and Alex Waibel.,2002, “A
Robust Approach for Recognition of Text Embedded in Natural Scenes”, proc. 16th
International conf. Pattern recognition, volume 3, pp. 204-207 (2002).
[12] Xilin Chen, Jie Yang, Jing Zhang, and Alex Waibel, January 2004, “Automatic
Detection and Recognition of Signs From Natural Scenes”, IEEE Transactions On Image
Processing, Vol. 13, No. 1, pp. 87-99 (January 2004).
[13] Anand Mishra, Karteek Alahari, C. V. Jawahar, 2012, “Top-Down and Bottom-Up Cues
for Scene Text Recognition” , Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), 2012.



[14] Zohra Saidane and Christophe Garcia, 2007, “Automatic Scene Text Recognition using a
Convolutional Neural Network”, CBDAR, p6, pp. 100-106 (2007).
[15] Céline Mancas-Thillou, June 2007, “Natural Scene Text Understanding”, Segmentation
and Pattern Recognition, I-Tech, Vienna, Austria, pp.123-142 (June 2007).
[16] Jerod J. Weinman, Erik Learned-Miller, and Allen Hanson, September 2007, “Fast
Lexicon-Based Scene Text Recognition with Sparse Belief Propagation”, Proc. Intl. Conf. on
Document Analysis and Recognition, Curitiba, Brazil (September 2007).
[17] Jerod J. Weinman, Erik Learned-Miller and Allen Hanson, December 2008, “A
Discriminative Semi-Markov Model for Robust Scene Text Recognition”, IEEE, Proc. Intl.
Conf. on Pattern Recognition (ICPR), Tampa, FL, USA, pp. 1-5 (December 2008).
[18] Teófilo E. de Campos and Bodla Rakesh Babu, 2009, “Character Recognition in Natural
Images”, Proc. International Conference on Computer Vision Theory and Applications, pp.
273-280 (2009).
[19] Onur Tekdas and Nikhil Karnad, 2009, “Recognizing Characters in Natural Scenes: A
Feature Study”, CSCI 5521 Pattern Recognition, pp. 1-14 (2009).
[20] Sangame S.K., Ramteke R.J., and Rajkumar Benne, 2009, “Recognition of isolated
handwritten Kannada vowels”, Advances in Computational Research, ISSN: 0975– 3273,
Volume 1, Issue 2, pp 52-55 (2009).
[21] B.V.Dhandra, Mallikarjun Hangarge, and Gururaj Mukarambi, 2010, ”Spatial Features
for Handwritten Kannada and English Character Recognition”, IJCA Special Issue on Recent
Trends in Image Processing and Pattern Recognition (RTIPPR), pp 146-151 (2010).
[22] Mallikarjun Hangarge, Shashikala Patil, and B.V.Dhandra, 2010, “Multi-font/size
Kannada Vowels and Numerals Recognition Based on Modified Invariant Moments”, IJCA
Special Issue on Recent Trends in Image Processing and Pattern Recognition (RTIPPR),
pp 126-130 (2010).
[23] Masakazu Iwamura, Tomohiko Tsuji, and Koichi Kise, 2010, “Memory-Based
Recognition of Camera-Captured Characters”, 9th IAPR international workshop on document
analysis systems, pp. 89-96 (2010).
[24] Masakazu Iwamura, Takuya Kobayashi, and Koichi Kise, 2011, “Recognition of
Multiple Characters in a Scene Image Using Arrangement of Local Features”, IEEE,
International Conference on Document Analysis and Recognition, pp. 1409-1413(2011).
[25] Primekumar K.P and Sumam Mary Idicula, “Performance of on-Line Malayalam
Handwritten character Recognition using Hmm And Sfam”, International Journal of
Computer Engineering & Technology (IJCET), Volume 3, Issue 1, 2012, pp. 115 - 125, ISSN
Print: 0976 – 6367, ISSN Online: 0976 – 6375.
[26] Mr.Lokesh S. Khedekar and Dr.A.S.Alvi, “Advanced Smart Credential Cum Unique
Identification and Recognition System. (Ascuirs)”, International Journal of Computer
Engineering & Technology (IJCET), Volume 4, Issue 1, 2013, pp. 97 - 104, ISSN Print: 0976
– 6367, ISSN Online: 0976 – 6375.
[27] M. M. Kodabagi, S. A. Angadi and Chetana. R. Shivanagi, “Character Recognition of
Kannada Text in Scene Images using Neural Network”, International Journal of Graphics and
Multimedia (IJGM), Volume 4, Issue 1, 2013, pp. 9 - 19, ISSN Print: 0976 – 6448,
ISSN Online: 0976 –6456
[28] M. M. Kodabagi, S. A. Angadi and Anuradha. R. Pujari, “Text Region Extraction from
Low Resolution Display Board Images using Wavelet Features”, International Journal of
Information Technology and Management Information Systems (IJITMIS), Volume 4,
Issue 1, 2013, pp. 38 - 49, ISSN Print: 0976 – 6405, ISSN Online: 0976 – 6413


				