(IJCSIS) International Journal of Computer Science and Information Security,
Vol. 11, No. 6, June 2013




IMPLEMENTATION OF OMNI DIRECTIONAL COGNITIVE MEMORY IN OFFICE MOBILE ROBOT

S. Sivagnana Sundari, Research Scholar, Department of Statistics, Manonmaniam Sundaranar University, Tirunelveli, INDIA

Dr. C. Vijayalakshmi, Professor, School of Advanced Sciences, Mathematics Division, VIT University, Chennai, INDIA




Abstract- This paper presents a method for building omni directional memory from two dimensional memory storage. The omni directional memory is implemented in an office mobile robot. A time series method is used to estimate the next position of the robot based on the stored memory. Images and sounds are collected in the office environment to build an expert database in the memory of the robot. A section of the image frames taken in the corridor is shown, along with how each image is associated in the omni directional memory. Based on the information in the memory and the speed with which the robot is moving, the method of predicting the next position by the time series method is discussed.

Keywords: Omni directional, Mobile robot, Two dimensional memory, Time series method

I. INTRODUCTION

The concept of omni directional memory plays an important role in implementing a full-fledged, self-navigating autonomous office mobile robot. Conventional robots are mostly fixed to a point, or they are guided by remote control or along a wired path. Heavy operations require the robot to be fixed in place, with preprogrammed movements stored using teach-pendant procedures. Robots that are mobile, however, should be able to navigate on their own. The robot must be able to retrace its path and come back to the starting point without the guidance of floor-pasted line sensors. For the robot to move from the starting point to the required destination and back, information such as images and sounds should be preprogrammed in one form or the other. The information for preprogramming can be any one or a combination of the following (a toy sketch follows the list):

i) By knowing the distance of important landmarks from the starting point to reach a given destination.

ii) By knowing the distance between adjacent landmarks.

iii) In case any one of the landmarks has been displaced, the distance between the present adjacent landmarks should be known.

iv) By using the images acquired at different positions on the route from the starting point to the destination and back to the starting point.

v) If the robot is displaced on its route, it still has to find a way to reach the next landmark.

vi) By using the sounds that occur regularly to confirm the path.
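As a toy illustration of items i)-iii), the following sketch (in Python; the landmark names and distances are invented for illustration, not measurements from the office environment) derives adjacent-landmark gaps from distances measured from the starting point:

    # Distances (in metres) of important landmarks from the starting
    # point, as in item i); names and values are invented.
    landmarks = {"door": 3.0, "pillar": 7.5, "lift": 12.0}

    def adjacent_gaps(marks):
        # Item ii): distance between adjacent landmarks, obtained as
        # consecutive differences of the sorted distances.
        d = sorted(marks.values())
        return [round(b - a, 2) for a, b in zip(d, d[1:])]

    # If a landmark is displaced, item iii) requires re-measuring these gaps.
    print(adjacent_gaps(landmarks))  # [4.5, 4.5]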
II. LITERATURE REVIEW

Cao et al., 1986, described an algorithm for omnidirectional vision navigation. A prototype omnidirectional vision system and the implementation of the navigation techniques using this sensor and an advanced automatic image processor are described.

Krotkov, 1989, assumed a mobile robot equipped with a single camera and a map marking the positions of landmarks in its environment. The robot moves on a flat surface, acquires one image, extracts vertical edges from it, and computes the directions to visible landmarks. The problem is to determine the robot's position and orientation (pose) by establishing the correspondence between landmark directions and points in the map. This approach capitalizes on the excellent angular resolution of standard CCD cameras, while avoiding the feature-correspondence and 3D reconstruction problems. The problem is formulated as a search in a tree of interpretations (pairings of landmark directions and landmark points), and an algorithm to search the tree efficiently for the solution pose(s) is developed, taking into account errors in the landmark directions extracted by image processing. Quantitative results from simulations and experiments with real imagery are presented.

Leonard et al., 1992, presented an algorithm for autonomous map building and maintenance for a mobile robot.
They represent each feature in the map by a location estimate (the feature state vector) and two distinct measures of uncertainty: a covariance matrix to represent uncertainty in feature location, and a credibility measure to represent the validity of the feature. During each position update cycle, predicted measurements are generated for each geometric feature in the map and compared with actual sensor observations. Successful matches cause a feature's credibility to be increased. Unpredicted observations are used to initialize new geometric features, while unobserved predictions result in a geometric feature's credibility being decreased. They describe experimental results obtained with the algorithm that demonstrate successful map building using real sonar data. Mobile robotic devices hold great promise for a variety of applications in industry, and a key step in the design of a mobile robot is to determine the navigation method for mobility control.

Atiya and Hager, 1993, described an algorithm for determining robot location from visual landmarks. This algorithm determines both the correspondence between observed landmarks (in this case vertical edges in the environment) and a stored map, and computes the location of the robot using those correspondences. The primary advantages of this algorithm are its use of a single geometric tolerance to describe observation error, its ability to recognize ambiguous sets of correspondences, its ability to compute bounds on the error in localization, and fast execution. The algorithm has been implemented and tested on a mobile robot system. In several hundred trials it has never failed, and it computes location accurate to within a centimeter in less than 0.5 s.

Yagi et al., 1994, described a conic projection image sensor (COPIS) and its application: navigating a mobile robot in a manner that avoids collisions with objects approaching from any direction. The COPIS system acquires an omnidirectional view around the robot, in real time, with the use of a conic mirror. Based on the assumption of constant linear motion of the robot and objects, the objects moving along collision paths are detected by monitoring azimuth changes. Confronted with such objects, the robot changes velocity to avoid collision and determines their locations and velocities.

Yagi et al., 1995, designed an omnidirectional image sensor, COPIS (Conic Projection Image Sensor), to guide the navigation of a mobile robot. The feature of COPIS is passive sensing of the omnidirectional image of the environment, in real time (at the frame rate of a TV camera), using a conic mirror, which makes it a suitable sensor for visual navigation in a real world environment. They report a method for navigating a robot by detecting the azimuth of each object in the omnidirectional image. The azimuth is matched with the given environmental map. The robot can precisely estimate its own location and motion (the velocity of the robot) because COPIS observes a 360° view around the robot, even when not all edges are extracted correctly from the omnidirectional image. The robot can avoid colliding with unknown obstacles and estimate locations by detecting azimuth changes while moving about in the environment. Under the assumption of known motion of the robot, an environmental map of an indoor scene is generated by monitoring azimuth change in the image.

Branca et al., 1997, estimated the egomotion from the motion field to determine the robot's direction and the time to collide (TTC) with the environment. The mobile robot is restricted to travel on a flat surface in a stationary environment.

The development of sensors for small indoor mobile robots has been addressed with a sonar ring which can rapidly measure accurate direction and distance to reflecting points. This sonar ring needs to receive the echo signal by multiple receivers and to detect multiple Times-of-Flight (TOFs) in the received signal at each receiver simultaneously to achieve fast and accurate measurement, Teruko et al., 2000.

Stoffler and Schnepf, 1998, and Stoffler et al., 2000, created an application for a mobile robot that is capable of detecting potential obstacles, roughly estimating their positions, and planning an avoidance task.

Krishna and Kalra, 2001, dealt with the advantages of incorporating cognition and remembrance capabilities in a sensor-based real-time navigation algorithm. The specific features of the algorithm, apart from real-time collision avoidance, include spatial comprehension of the local scenario of the robot, remembrance and recollection of such comprehended scenarios, and temporal correlation of similar scenarios witnessed during different instants of navigation. These features enhance the robot's performance by providing for memory-based reasoning whereby the robot's forthcoming decisions are affected by its previous experiences during navigation as well as by the current range inputs. The environment of the robot is modeled by classifying temporal sequences of spatial sensory patterns. A fuzzy classification scheme coupled to Kohonen's self-organizing map and a fuzzy ART network determines this classification. A detailed comparison of the method with other recent approaches in the specific case of local minimum detection and avoidance is also presented. As far as escaping the local minimum barrier is concerned, the paper divulges a new system of rules that lead to shorter paths than the other methods. The method has been tested in concave, maze-like, unstructured and altered environments and its efficacy established.

The supervisory control of mobile robots using a biologically inspired short-term memory structure called a sensory Egosphere has been described. The Egosphere is implemented as a virtual geodesic dome upon which sensory data from the surroundings of the robot are written. The Egosphere is managed by a distinct agent in a distributed agent-based robot control architecture called the Intelligent Machine Architecture. A human-robot interface and a test bed for evaluating the control system have been proposed, Kawamura et al., 2001.

Memory-based methods estimate self-location by directly comparing the input image with stored images in memory. This approach has received much attention because the localization accuracy increases simply by increasing the number of images memorized.
A novel self-localization method based on eigenspace analysis has been proposed as one of these memory-based methods, employing an omni directional image sensor that can take an image of the surrounding environment at video rate. The method identifies the sensor's location by evaluating the similarity between autocorrelation images, which are converted from omni directional images and are invariant against the rotation of the sensor. To reduce the computational cost and memory space, the autocorrelation images are projected into eigenspaces, and the similarity is evaluated in the eigenspaces. The experimental results show that the method is capable of estimating the location of the sensor even with low dimensional images, Nobuhiro et al., 2003.

An omni directional vision sensor for the view-based navigation method has been proposed with an extended model of a route called the 'omni view sequence'. The authors propose a map named the 'view-sequenced map', which represents an entire corridor environment in a building. A method for the automatic acquisition of a view-sequenced map based on exploration in a corridor using both stereo and omni directional vision is also described. The experimental results of the autonomous navigation and the map acquisition are presented to show the feasibility of the proposed methods, Yoshio et al., 2003.

In a technique for vision-based robot navigation, Emanuele et al., 2004, the basic framework is to localize the robot by comparing images taken at its current location with reference images stored in its memory. The only sensor mounted on the robot is an omni directional camera. The Fourier components of the omni directional image provide a signature for the views acquired by the robot and can be used to simplify the solution of the robot navigation problem. The proposed system can calculate the robot position with variable accuracy ("hierarchical localization"), saving computational time when the robot does not need a precise localization (e.g. when it is traveling through a clear space). In addition, the system is able to self-organize its visual memory of the environment. The self-organization of visual memory is essential to realize a fully autonomous robot that is able to navigate in an unexplored environment. Experimental evidence of the robustness of this system is given in unmodified office environments.

Campbell et al., 2004, 2005, proposed a new technique for estimating egomotion in which a video camera is mounted on a mobile robot and the optical flow field is divided into two portions, the ground region and the sky region, separated by a horizon line. The motion in the ground portion is used to compute the robot translation, while the sky portion is used to compute the robot rotation. The application is created for the mobile robot's visual odometry.

A method has been developed for cognitive robot behavior control, Palis et al., 2005, in which a small number of behaviors are loaded into a workspace, called working memory, where they are combined to generate actions during task execution. The existing components of a cognitive robot architecture, such as the Long-Term Memory and the Short-Term Memory, can be used with the addition of a working memory system and a control mechanism called the Central Executive Agent to create a modular control system. This control method is used to drive the behaviors of the humanoid robot ISAC.

The design and creation of an episodic memory system for a cognitive robot has been presented. This memory system is interfaced with a machine emotion system with the goal of producing intelligent behaviors in a cognitive humanoid robot. The design of the system is tested and analyzed through the explanation of a case in which emotion assists in the retrieval of an episode, Will Dodd et al., 2005.

Briggs et al., 2006, presented a method for navigation and localization of a mobile robot equipped with an omnidirectional camera. They represent the environment using a collection of one-dimensional panoramic images formed by averaging the center scanlines of a cylindrical view. Such 1D images can be stored and processed with few resources, allowing a fairly dense sampling of the environment. Image matching proceeds in real time using dynamic programming on scale-invariant features extracted from each circular view. By analyzing the shape of the matching curve, the relative orientation of pairs of views can be recovered and utilized for navigation. When navigating, the robot continually matches its current view against stored reference views taken from known locations, and determines its location and heading from the properties of the matching results. Experiments show that the method is robust to occlusion, repeating patterns, and lighting variations.

Ching-Chih et al., 2010, presented a behavior-based navigation method for a mobile service robot with a three-wheeled omnidirectional mobile platform. A heuristic fuzzy Kohonen clustering network (FKCN) is presented to obtain laser area weights and the desired robot motion heading for all types of indoor environments. An efficient behavior-based navigation structure is then proposed to merge the heuristic FKCN with three fuzzy behaviors in order to navigate the robot without any collisions in crowded and cluttered indoor environments. Simulation and experimental results are presented to verify the feasibility and effectiveness of the proposed method.

Jwu-Sheng Hu et al., 2011, proposed a framework that simultaneously localizes the mobile robot and multiple sound sources using a microphone array on the robot. First, an eigenstructure-based generalized cross correlation method for estimating time delays between microphones under multi-source environments is described. A method to compute the far field source directions as well as the speed of sound using the estimated time delays is proposed. In addition, the correctness of the sound speed estimate is utilized to eliminate spurious sources, which greatly enhances the robustness of sound source detection. The arrival angles of the detected sound sources are used as observations in a bearings-only SLAM procedure. As the source signals are not persistent and there is no identification of the signal content, the data association is unknown; this is solved using FastSLAM. The experimental results demonstrate the effectiveness of the proposed approaches.
III. SCHEMATIC DIAGRAM

The machine vision head is given rotation instructions in terms of azimuth and elevation, and the present position is known. If the present position is not known to the machine vision system, the head is planned to return to its home position. The information is transmitted wirelessly. Information from each attached sensor is recorded; for example, a proximity sensor gives the distance between the robot and the surroundings, and an imaging sensor can describe the lighting conditions. For each azimuth and elevation, the information coming from each sensor is stored with a filename formed as the azimuth value concatenated with the elevation value concatenated with the content type {image, distance or color}, as sketched at the end of this section.

The image information can be taken at certain intervals; however, information from the other sensors is to be analyzed and stored. Information received from each sensor at a particular distance and at a particular location is to be fused and presented. At certain points of time it is not necessary to receive information from the camera; by knowing the azimuth and elevation through the machine vision system along with the position of the robot, the relevant images can be retrieved from the stored information. Based on the information received from the sensors, collaborative signal processing is done, and the time series method is applied to estimate the next position of the robot.
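As an illustration of this naming scheme, the following is a minimal sketch (in Python; the directory layout, helper name and file extension are our assumptions for illustration, not part of the deployed system):

    import os

    SENSOR_KINDS = ("image", "distance", "color")

    def memory_filename(azimuth_deg, elevation_deg, kind, root="memory"):
        # Filename is the azimuth value concatenated with the elevation
        # value concatenated with the sensor content type, as described
        # above. Zero-padding keeps entries sorted by pose when listed.
        if kind not in SENSOR_KINDS:
            raise ValueError("unknown sensor kind: %s" % kind)
        name = "%03d_%03d_%s.dat" % (azimuth_deg, elevation_deg, kind)
        return os.path.join(root, name)

    # Example: the proximity reading at azimuth 120°, elevation 15°
    # would be stored under memory/120_015_distance.dat
    print(memory_filename(120, 15, "distance"))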
IV. TIME SERIES METHOD FOR ESTIMATING NEXT POSITION

A time series is a sequence of signals or observations x_t, each one recorded at a specified time instant t. A discrete time series is one in which the x_t are recorded at discrete time points. Unless noted otherwise, a discrete time series refers to an observation sequence with a fixed time interval between two neighboring observations; this research handles only such discrete time series.

A time series is multivariate if there is more than one variable involved in each observation and these variables are cross-related. Time series estimation methods can be parametric or non-parametric. In a parametric method, a model is assumed. There is no predefined model for non-parametric methods; instead, features such as mean values, variances, correlations and frequencies are extracted from the available time series data, and the estimation is based on these features.

A parametric approach, the AutoRegressive Moving Average model ARMA(p, q), is proposed here. ARMA(p, q) is used for modeling stationary, Gaussian-distributed time series.

ARMA(p, q) is an identifiable model: for a given time series sample there is one and only one ARMA(p, q) model with specified coefficients corresponding to it, which makes the estimation well defined. Compared with non-parametric methods, the ARMA(p, q) model based estimation captures lag influence and interactions, which are not easily summarized by features.

Given a time series observation sequence, we can estimate the parameters of the ARMA model which generated it. A time series sequence can therefore be represented by a low dimensional vector consisting of the ARMA parameters.

ARMA(p, q) model

In a time series, the current signal x_t is influenced by the previous signals x_{t-1}, x_{t-2}, and so on. A general time series model may be formed as

    x_t = f(x_{t-1}, x_{t-2}, ..., u_{t-1}, u_{t-2}) + ξ_t

where u_t is the control input and ξ_t is the white noise at time t.

A linear model is tractable and is given by

    x_t = α_1 x_{t-1} + α_2 x_{t-2} + ... + α_p x_{t-p} + β_1 ξ_{t-1} + β_2 ξ_{t-2} + ... + β_q ξ_{t-q} + ξ_t        (1)

Without the control term, equation (1) is the ARMA(p, q) model; if the moving average terms are also dropped, it reduces to the autoregression model AR(p), in which p is the window size. The α's are the coefficients of the autoregression part, and the β's are the coefficients of the moving average part.
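To make the prediction step concrete, the following is a minimal sketch (in Python with NumPy; the sample positions and the plain least-squares fit are illustrative assumptions, not the system's actual code) of fitting the autoregression part of equation (1) to a sequence of robot positions and forecasting the next one:

    import numpy as np

    def fit_ar(x, p):
        # Least-squares estimate of the AR coefficients α_1..α_p of
        # equation (1), with the moving average and control terms dropped.
        # Row t of the design matrix holds [x[t-1], x[t-2], ..., x[t-p]].
        X = np.array([[x[t - k] for k in range(1, p + 1)]
                      for t in range(p, len(x))])
        alpha, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
        return alpha

    def predict_next(x, alpha):
        # One-step-ahead forecast: x_next = sum over k of α_k x_{t-k}.
        return float(np.dot(alpha, x[::-1][:len(alpha)]))

    # Positions (in metres) of the robot sampled at a fixed interval
    # along the corridor; the numbers are invented for illustration.
    positions = np.array([0.0, 0.52, 1.05, 1.57, 2.11, 2.63, 3.15])
    print(predict_next(positions, fit_ar(positions, p=2)))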


Memory-based time series recognition

Memory-based time series estimation consists of the following four phases.

1. Collection of time series samples

Many time series samples are collected (images and sounds from important locations on the path of the robot).

2. Parameter estimation

The parameters α̂ and β̂ of the ARMA(p, q) model are estimated for each sample. If several time series samples share the same α̂ and β̂, these samples are homogeneous.

3. Construction of the knowledge memory

In this work, many time series images and sounds are collected and converted into low dimensional vectors of the ARMA(p, q) model parameters α̂ and β̂. Each time series sample has an evaluation value v. If N images and sounds are collected, a knowledge memory containing N data points is constructed. Each data point stands for a robot location and consists of two parts: the ARMA(p, q) parameter vectors α̂_i and β̂_i, and the evaluation value v_i, i = 1, ..., N.
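To show how one recorded sample becomes a memory entry, the following is a minimal sketch (in Python, assuming the statsmodels library, whose ARIMA class with zero differencing fits an ARMA(p, q) model; the synthetic signal is an assumption for illustration):

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    def arma_feature_vector(x, p, q):
        # Fit ARMA(p, q) (ARIMA with d = 0) and return the concatenated
        # parameter vector [α̂, β̂] that represents this time series
        # as one data point in the knowledge memory.
        result = ARIMA(x, order=(p, 0, q)).fit()
        return np.concatenate([result.arparams, result.maparams])

    # Synthetic stand-in for one recorded sensor sequence.
    rng = np.random.default_rng(0)
    x = np.zeros(200)
    for t in range(1, 200):
        x[t] = 0.7 * x[t - 1] + rng.normal()
    print(arma_feature_vector(x, p=2, q=1))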
4. Query recognition

A query time series is a time series observation sequence whose evaluation value is unknown. The objective of query recognition is to approximate the query time series' evaluation value. This task can be done in three steps:

a. Estimate the parameters of the query series, α̂_q and β̂_q.

b. Calculate the distance from the query series to every data point in the memory. The distance from the query to the k'th data point in the memory is defined as

    dist²(k) = Σ_{i=1}^{p} u_i (α̂_i(k) − α̂_i(q))² + Σ_{j=1}^{q} w_j (β̂_j(k) − β̂_j(q))²        (2)

in which k = 1, ..., N. The weights u_i and w_j in the distance definition assign higher weight values to the more significant parameters.

c. If dist(k) is small enough, we claim that the query series is similar to a time series observed previously, which is represented by the k'th data point in this memory.
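The query step can be sketched as follows (in Python with NumPy; the stored parameter vectors, weights and location labels are made-up values for illustration, not data from the experiments):

    import numpy as np

    def query_memory(alpha_q, beta_q, memory, u, w):
        # memory: list of (α̂_k, β̂_k, evaluation value) triples.
        # Returns the index, squared distance and evaluation value of the
        # data point nearest the query under the weighted distance (2).
        best = None
        for k, (alpha_k, beta_k, v_k) in enumerate(memory):
            d2 = (np.sum(u * (alpha_k - alpha_q) ** 2)
                  + np.sum(w * (beta_k - beta_q) ** 2))
            if best is None or d2 < best[1]:
                best = (k, d2, v_k)
        return best

    # Two stored locations, each an (AR params, MA params, location id) triple.
    memory = [(np.array([1.8, -0.9]), np.array([0.3]), 1),
              (np.array([1.2, -0.4]), np.array([0.7]), 2)]
    u, w = np.ones(2), np.ones(1)  # equal weights
    print(query_memory(np.array([1.75, -0.85]), np.array([0.25]), memory, u, w))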
V. EXPERIMENTS

The experimental setup includes the collection of video from the movement of the robot along the corridor, as shown in Figure 1; only 15 frames are shown owing to space constraints. The details of the camera are as follows:

Camera: Canon PowerShot A1100 IS
Focal length: 6.2 mm
Image size: 640 x 480
Lens: 6.2-24.8 mm
Aperture value: 8
Shutter speed: 1/60 s

The position of the mobile office robot is estimated based on the images and sounds already stored in the robot memory. Images are segmented using standard segmentation algorithms in MATLAB and the important features are extracted. Similarly, cepstrum values are extracted from the sounds using MATLAB and stored in the memory. Hence features from both images and sounds are held in the memory. The time series model is then applied in MATLAB and the next position of the robot is estimated.
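As an illustration of the sound feature step, the following is a minimal sketch (in Python with NumPy, standing in for the MATLAB processing described above; the synthetic frame is an assumption) of computing real cepstrum values from one audio frame:

    import numpy as np

    def real_cepstrum(frame):
        # Real cepstrum: inverse FFT of the log magnitude spectrum.
        # A small constant guards against log(0) for silent bins.
        spectrum = np.abs(np.fft.fft(frame)) + 1e-12
        return np.real(np.fft.ifft(np.log(spectrum)))

    # Synthetic stand-in for one frame of corridor sound sampled at 8 kHz.
    t = np.arange(256) / 8000.0
    frame = np.sin(2 * np.pi * 440.0 * t)
    print(real_cepstrum(frame)[:5])  # first few values stored as features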


Fig. 1: Frames of a corridor along the path of the office mobile robot

VI. CONCLUSION

The time series approach is used for estimating the next position of the robot. The input parameters for the model are the features extracted from the images and sounds using MATLAB. The next position of the robot is predicted from its current position and the corresponding image and sound features stored in the memory.

REFERENCES

[1] Atiya S., and Hager G., 1993, Real-time vision-based robot localization, IEEE Transactions on Robotics and Automation, Vol.9, Issue 6, pp.785-800.
[2] Branca A., Stella E., and Distante A., 1997, Mobile robot navigation using egomotion estimates, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol.2, pp.533-537.
[3] Briggs A., Li Y., Scharstein D., and Wilder M., 2006, Robot navigation using 1D panoramic images, IEEE International Conference on Robotics and Automation, pp.2679-2685.
[4] Campbell J., Sukthankar R., and Nourbakhsh I., 2004, Techniques for evaluating optical flow for visual odometry in extreme terrain, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol.4, pp.3704-3711.
[5] Campbell J., Sukthankar R., Nourbakhsh I., and Pahwa A., 2005, A robust visual odometry and precipice detection system using consumer-grade monocular vision, Proceedings of the IEEE International Conference on Robotics and Automation, pp.3412-3427.
[6] Cao Z.L., Oh S.J., and Hall E.L., 1986, Dynamic omnidirectional vision for mobile robots, Journal of Robotic Systems, Vol.3, No.1, pp.5-17.
[7] Ching-Chih T., Chin-Cheng C., Cheng-Kain C., and Yi Yu L., 2010, Behavior-based navigation using heuristic fuzzy Kohonen clustering network for mobile service robots, International Journal of Fuzzy Systems, Vol.12, No.1, pp.25-32.
[8] Emanuele Menegatti, Takeshi Maeda, and Hiroshi Ishiguro, 2004, Image-based memory for robot navigation using properties of omnidirectional images, Robotics and Autonomous Systems, Vol.47, Issue 4, pp.251-267.
[9] Jwu-Sheng Hu, Chen-Yu Chan, Cheng-Kang Wang, and Chieh-Chih Wang, 2011, Simultaneous localization of mobile robot and multiple sound sources using microphone array, Advanced Robotics, Vol.25, pp.135-152.
[10] Kawamura K., Peters R.A., Johnson C., Nilas P., and Thongchai S., 2001, Supervisory control of mobile robots using sensory Egosphere, Proceedings of the 2001 IEEE International Symposium on Computational Intelligence in Robotics and Automation, July 29-August 1, 2001, Canada.
[11] Krishna K.M., and Kalra P.M., 2001, Perception and remembrance of the environment during real-time navigation of a mobile robot, Robotics and Autonomous Systems, Vol.37, Issue 1, pp.25-51.
[12] Krotkov E., 1989, Mobile robot localization using a single image, Proceedings of the IEEE International Conference on Robotics and Automation, Vol.2, pp.978-983.
[13] Leonard J.J., Durrant-Whyte H.F., and Cox I.J., 1992, Dynamic map building for an autonomous mobile robot, International Journal of Robotics Research, Vol.11, No.4, pp.286-298.
[14] Nobuhiro Aihara, Hidehiko Iwasa, Naokazu Yokoya, and Haruo Takemura, 2003, Memory-based self-localization using omnidirectional images, Systems and Computers in Japan, Vol.34, Issue 5, pp.56-68.
[15] Palis Ratanaswasd, Will Dodd, Kazuhiko Kawamura, and David C. Noelle, 2005, Modular behavior control for a cognitive robot.
[16] Stoffler N.O., and Schnepf Z., 1998, An MPEG-processor-based robot vision system for real-time detection of moving objects by a moving observer, Proceedings of the 14th International Conference on Pattern Recognition, Vol.1, pp.477-481.
[17] Stoffler N.O., Burkert T., and Farber G., 2000, Real-time obstacle avoidance using an MPEG-processor-based optic flow sensor, 15th International Conference on Pattern Recognition, Vol.4, pp.161-166.
[18] Teruko Yata, Akihisa Ohya, and Shin'ichi Yuta, 2000, Using one bit wave memory for mobile robots' new sonar-ring sensors, 2000 IEEE International Conference on Systems, Man and Cybernetics.
[19] Will Dodd and Ridelto Gutierrez, 2005, The role of episodic memory and emotion in a cognitive robot, IEEE International Workshop on Robots and Human Interactive Communication, 13-15 Aug. 2005, pp.692-697.
[20] Yagi Y., Kawato S., and Tsuji S., 1994, Real-time omnidirectional image sensor (COPIS) for vision-guided navigation, IEEE Transactions on Robotics and Automation, Vol.10, No.1, pp.11-22.
[21] Yagi Y., Nishizawa Y., and Yachida M., 1995, Map-based navigation for a mobile robot with omnidirectional image sensor COPIS, IEEE Transactions on Robotics and Automation, Vol.11, Issue 5, pp.634-648.
[22] Yoshio Matsumoto, Masayuki Inaba, and Hirochika Inoue, 2003, View-based navigation using an omni view sequence in a corridor environment, Machine Vision and Applications, Vol.14, Issue 2, pp.121-128.
AUTHORS' BIOGRAPHY

Dr. C. Vijayalakshmi is currently working as a Professor in the Mathematics Division, School of Advanced Sciences, VIT University, Chennai, Tamilnadu. She has more than 17 years of teaching experience at the graduate and postgraduate levels. She has published more than twenty research papers in international and national journals and authored three books for engineering colleges, Anna University. She has received the Bharat Jyothi Award for outstanding services, achievements and contributions, and a Best Teachers award for academic excellence. Her area of specialization is stochastic processes and their applications. Her other research interests include optimization techniques, data mining and bio-informatics.

Mrs. S. Sivagnana Sundari is currently working as a Professor in the Mathematics Department, D.B. Jain College, Thuraipakkam, Chennai - 97, Tamilnadu. She has 10 years of teaching experience and has research papers to her credit.