					    International Journal of Emerging Trends & Technology in Computer Science (IJETTCS)
       Web Site: www.ijettcs.org Email: editor@ijettcs.org, editorijettcs@gmail.com
Volume 2, Issue 5, September – October 2013                                    ISSN 2278-6856

                Planning architecture for control of an
                           intelligent agent
                                      Rabia Moussaoui1 and Hicham Medroumi2
                     1,2
                       Hassan II University Casablanca, National Higher School of Electricity and Mechanics,
                          Architecture System Team – LISER, Eljadida Road, BP 8118 Oasis, Morocco

Abstract: In this paper we propose an agent-based planning architecture for control and supervision, intended to improve the global autonomy of a mobile system based on multi-agent systems. This architecture combines two parts in particular: a structural part and a control part. After studying the existing kinematic solutions for wheeled mobile robots, we show how maximum autonomy of motion can be obtained for a mobile robot by focusing on its structural architecture; the result obtained is an omnidirectional robot. This structural aspect, however, is only one part of the improvement. Indeed, the overall concept of autonomy requires completing the work on the structural architecture by improving the control architecture. To perform the tasks of an intelligent agent while optimizing energy and the communication between agents, our architecture consists of three control layers and three associated knowledge bases that represent the agent and the environment at different levels of abstraction.

Keywords: Multi-Agent Systems, Sensors, Actuators, Control architecture, Internet, Agent UML.

1. INTRODUCTION
The most basic tasks in the daily life of a human can become extremely complex when analyzed more closely. For example, attending a seminar or a conference can be a real headache. First of all, the attendee must consult his calendar to check his availability. Then he must travel to the venue by combining various modes of transportation such as cars, buses and perhaps even a plane. While these tasks may seem simple to us humans, they are not nearly as obvious to a robot. To be autonomous, a mobile robot must have many skills. First, it must be able to perceive its environment and locate itself in it. To do this, a robot has sensors, such as sonar and laser scanning devices, for measuring the distances between itself and nearby obstacles.
Once located in its environment, the robot must be able to move from one point to another by finding safe and efficient paths that avoid collisions with obstacles. In addition, a robot is often called upon to communicate with people or with other nearby agents. This can be done in various ways, such as by voice or through a gateway.
Besides being able to perceive its environment, a robot must often be able to identify objects, recognize people, read signs, and even identify graphic symbols. These operations are performed by analyzing pictures acquired by the camera(s) installed on the robot. After identifying and locating an object, we can imagine the robot then manipulating that object with its robotic arm.
Finally, another robotic capacity, as important as those listed above, is the ability to make decisions while conducting and coordinating complex missions. This capacity is very important because, for many robotic tasks, there may be many ways to achieve them. The robot has to select, by reasoning, the best actions to take in order to accomplish its mission adequately.
Although many other capacities of perception and action could be added to this list, it is important to bear in mind that mobile robotics is a highly multidisciplinary subject drawing on research from diverse disciplines. For this reason, in this article we use most of these robotic capabilities and assume that we have access to them, since they are derived from works other than the one presented here.

2. STATE OF THE ART
2.1 Concepts of robotics
Before entering into our subject, it is important to have a general idea of how mobile robots work, in order to understand the interactions between the different modules we refer to.
2.1.1 Components of a mobile robot
Basically, a mobile robot consists of hardware and software components. Among the hardware components, there is a moving platform which carries all the other components, such as sensors, actuators and an energy source (batteries).
a) Sensors
Sensors are used to acquire data from the environment. The sensors typically installed on a mobile robot (see Figure 1) include ultrasonic sonar, laser proximity sensors, wheel encoders (odometry), one or two optical cameras and microphones. The kind of information collected, as well as its accuracy, varies greatly from one sensor to another. For example, Figure 2 [1] shows that a laser proximity sensor (c) has a better perception of the contours of the environment than sonars (a) and (b), because it offers better angular resolution and better accuracy on distance.



            Figure 1 Mobile robot components

The robot has eight IR sensors, of which six are placed in the front and two in the back (Figure 1). These sensors can be used both to sense obstacles (called proximity sensors in this text) and to sense bright light in the vicinity (called light sensors in this text). They all return real values in the range [0..1], where the extreme value 1 indicates bright light or a very close obstacle and 0 indicates darkness or no obstacle.
Note that the proximity sensors are sensitive to the color and glossiness of the obstacle. Darker objects can reduce the distance of the first sensor reading to as low as ~10 mm. Figure 3 shows the readings obtained from wooden blocks [2].

             Figure 2 Perception of some sensors

  Figure 3 Sensitivity of the proximity sensors to different colors and glossiness of the obstacle

b) Actuators
To move inside its environment and to interact with it, a robot is equipped with actuators. For example, a robot is provided with one or more motors which rotate its wheels to perform movements. Generally, the wheels of the robot are controlled by two motor commands: a forward speed and a rate of rotation. Usually, these commands are expressed in meters per second (m/s) and degrees of rotation per second (deg/s).
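To make the effect of these two commands concrete, the sketch below integrates a forward speed and a rotation rate into a robot pose over one time step using the standard unicycle model; the function name and the fixed time step are illustrative assumptions, not part of the platform described here.

```python
import math

def update_pose(x, y, theta_deg, v, omega_deg, dt=0.1):
    """Advance an (x, y, heading) pose by one time step of length dt.

    v         : forward speed in m/s
    omega_deg : rate of rotation in deg/s
    A simple Euler step of the unicycle model, assumed here for illustration.
    """
    theta = math.radians(theta_deg)
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta_deg = (theta_deg + omega_deg * dt) % 360.0
    return x, y, theta_deg

# Example: drive forward at 0.2 m/s while turning at 30 deg/s for one second.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = update_pose(*pose, v=0.2, omega_deg=30.0)
print(pose)
```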
2.1.2 Software modules
To operate a mobile robot, several software modules are involved. These modules can be used to interpret the data collected by the sensors in order to extract information, or to process high-level commands and generate commands at a lower level. Among the most frequently used modules are the positioning, navigation and vision modules.
a) Localization
One of the most important functions for the robot is to be able to locate itself in its environment. Using the data provided by the sensors, the localization module estimates the current position of the robot. Typically, this position is expressed by a tuple (x, y, θ) representing a position and an orientation on a two-dimensional plane [5]. Localization can be done using techniques based on the theory of Markov decision processes [6], using Monte Carlo sampling techniques (particle filters) [7], or using other methods.
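As an illustration of the Monte Carlo approach cited above [7], the following sketch maintains a set of (x, y, θ) particles, moves them according to the motion command, weights them against a single range measurement and resamples. The motion noise, the measurement model and the map function are simplified assumptions made only for this example, not the models used by our platform.

```python
import math, random

def mcl_step(particles, v, omega, dt, measured_range, expected_range):
    """One Monte Carlo localization step over particles [(x, y, theta), ...].

    expected_range(x, y, theta) -> range the sensor would return at that pose
    (a stand-in for ray casting into a known map, assumed for this sketch).
    """
    # 1. Prediction: apply the motion command with some noise.
    moved = []
    for x, y, th in particles:
        nv = v + random.gauss(0, 0.02)
        nth = th + (omega + random.gauss(0, 0.05)) * dt
        moved.append((x + nv * math.cos(nth) * dt,
                      y + nv * math.sin(nth) * dt,
                      nth))

    # 2. Correction: weight each particle by how well it explains the reading.
    sigma = 0.2  # assumed range measurement noise (m)
    weights = [math.exp(-((measured_range - expected_range(x, y, th)) ** 2)
                        / (2 * sigma ** 2)) for x, y, th in moved]
    if sum(weights) == 0:
        weights = [1.0] * len(moved)   # degenerate case: keep a uniform weighting

    # 3. Resampling: draw a new particle set proportionally to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

# Toy usage: a wall at x = 2.0 m, range sensor looking along +x.
expected = lambda x, y, th: max(2.0 - x, 0.0)
particles = [(random.uniform(0, 1), 0.0, 0.0) for _ in range(200)]
particles = mcl_step(particles, v=0.1, omega=0.0, dt=1.0,
                     measured_range=1.5, expected_range=expected)
print(sum(p[0] for p in particles) / len(particles))  # estimated x, roughly 0.5
```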
b) Vision
When we analyze the pictures captured by cameras, we can extract a wealth of information. For example, by using a segmentation algorithm, we can recognize the colors of objects and estimate their relative position (angle) with respect to the camera view. Using three-dimensional vision techniques, it is also possible to estimate some distances in the environment. We can also recognize symbols and characters and read messages such as posters in a corridor, direction signs or conference badges.
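As a small illustration of estimating an object's relative angle from a color segmentation, the sketch below masks the pixels close to a target RGB color and converts the centroid column of the mask into a bearing, assuming a pinhole-like camera with a known horizontal field of view. The image array, the color tolerance and the field of view are hypothetical values chosen for this example.

```python
import numpy as np

def color_bearing(image, target_rgb, tol=40, hfov_deg=60.0):
    """Return the approximate bearing (deg) of pixels matching target_rgb.

    image : HxWx3 uint8 array; positive bearings are to the right of the
    optical axis. Returns None if no pixel is close enough to the color.
    """
    diff = np.abs(image.astype(int) - np.array(target_rgb)).sum(axis=2)
    mask = diff < tol                      # crude color segmentation
    if not mask.any():
        return None
    u = np.nonzero(mask)[1].mean()         # centroid column of the blob
    width = image.shape[1]
    return (u / width - 0.5) * hfov_deg    # linear pixel-to-angle mapping

# Toy usage: a red patch on the right side of a synthetic 120x160 image.
img = np.zeros((120, 160, 3), dtype=np.uint8)
img[40:80, 120:140] = (255, 0, 0)
print(color_bearing(img, (255, 0, 0)))     # roughly +18 degrees
```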

c) Navigation
A navigation module is responsible for moving the robot from its current position to a desired destination safely and efficiently. In addition to relying on perception and localization, the navigation module has the responsibility of finding a path between the origin and the destination, consisting of a list of intermediate points to reach, and of guiding the robot along this path. To find such a path efficiently, it uses a path planner.
2.2 Multi-agent approach
2.2.1 Notion of agent
An agent is a physical or virtual entity which has all or part of the following properties [3]:
Located in an environment: the agent can receive sensory input from its environment and can perform actions that are likely to change this environment.
Independent: the agent is able to act without the direct intervention of a human (or of another agent) and has control over its actions and its internal state.
Flexible: the agent is able to respond in time; it can perceive its environment and respond quickly to the changes taking place in it.
Proactive: the agent does not simply act in response to its environment; it is also able to behave opportunistically, driven by its goals or its utility function, and to take the initiative when appropriate.
Social: the agent is capable of interacting with other agents (to complete its tasks or to help others complete theirs).
2.2.2 Multi-agent system
A multi-agent system (MAS) is a system composed of a set of agents located in some environment and interacting according to certain relations [10]. There are four types of agent architecture [4]:
Reactive agent: responds to changes in the environment.
Deliberative agent: performs some deliberation to choose its actions based on its goals.
Hybrid agent: includes a deliberative as well as a reactive component.
Learning agent: uses its perceptions not only to choose its actions, but also to improve its ability to act in the future.

               Figure 4 Multi-agent system

3. APPROACH OF THE CONTROL OF ROBOTS BASED ON PLANNING
A first approach to robot control favors a centralized representation of the environment. In general, a planner is responsible for building an action plan allowing the robot to reach a goal, ideally from a geometric or topological model of the environment. This approach establishes a single loop (perception, decision, action) [8], with however many possible refinements, including a decomposition in terms of the perception-planning-execution loop (Figure 5) [12].

  Figure 5 Perception-planning-execution loop

3.1 Path planning
A path planner is different from a task planner. When the task planner generates an action of the displacement type, the path planner is called to find a way to move the robot from its current position to the destination, optimally and safely.
3.2 Map of the environment
To find a path, a path planner needs a map of the environment in which the robot operates. Usually, this map is represented by an occupancy grid obtained by discretizing the environment into cells. A cell is either free or occupied by one or more obstacles, as shown below (Figure 6). From a cell, a robot can reach a neighboring cell if it is free. There are two definitions of the neighborhood relation: 4 neighbors and 8 neighbors. In the first, the robot can move in the four cardinal directions: north, south, east and west. In the second, the four diagonal directions are also allowed.
The occupancy grid model can be improved by adding attributes to it. One can, for example, assign costs to the cells, or even probabilities of occupancy.
Mission planner: converts the objectives and constraints defined by the user into cost functions for the numerical path planner. In the simplest case, it is only necessary to define a destination point for the robot. Other constraints can be added, such as staying close to a road or being as inconspicuous as possible [13].

   Figure 6 Example of a path in an occupancy grid
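To make the grid search concrete, the following sketch runs an A* search over such an occupancy grid, with a flag to choose between 4-neighbor and 8-neighbor connectivity and optional per-cell costs. The grid encoding, function name and cost convention are assumptions made for this illustration, not the format used by our platform.

```python
import heapq

def astar(grid, start, goal, eight_neighbors=False, cell_cost=None):
    """A* search on an occupancy grid (0 = free cell, 1 = occupied cell).

    grid : list of rows; start, goal : (row, col) tuples.
    cell_cost(cell), if given, must return a traversal cost >= 1 for free cells.
    Returns the path as a list of cells, or None if the goal is unreachable.
    """
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]            # N, S, W, E
    if eight_neighbors:
        moves += [(-1, -1), (-1, 1), (1, -1), (1, 1)]     # diagonal directions
    rows, cols = len(grid), len(grid[0])
    # Chebyshev distance: admissible for both 4- and 8-neighbor moves
    # as long as every step costs at least 1.
    h = lambda c: max(abs(c[0] - goal[0]), abs(c[1] - goal[1]))
    frontier = [(h(start), start)]
    came_from, g = {start: None}, {start: 0}
    while frontier:
        _, cell = heapq.heappop(frontier)
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        for dr, dc in moves:
            nxt = (cell[0] + dr, cell[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                new_g = g[cell] + (cell_cost(nxt) if cell_cost else 1)
                if new_g < g.get(nxt, float("inf")):
                    g[nxt] = new_g
                    came_from[nxt] = cell
                    heapq.heappush(frontier, (new_g + h(nxt), nxt))
    return None

# Toy grid with a wall: the planner routes the robot around it.
demo = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
print(astar(demo, (0, 0), (2, 0)))
```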

4. PLANNING ALGORITHM

    Initial state: n = 1
    n = 1 : reactive behavior layer
    KB n = 1 : world knowledge base (KB-World)
    n = 2 : local planning layer
    KB n = 2 : planning knowledge base (KB-Planning)
    n = 3 : collaborative planning layer
    KB n = 3 : social knowledge base (Social-KB)

               Figure 7 Planning algorithm


5. ARCHITECTURE
5.1 Planning architecture
Our architecture is composed of three control layers and three associated knowledge bases that represent the agent and the environment at various levels of abstraction, as shown in Figure 8. Each layer has a specific set of associated operations, and an upper layer uses the simpler operations of the layer below it to carry out its more elaborate operations. The flow of control passes from bottom to top, and a layer takes control when the layer below it can no longer contribute its operations to achieving the current goals. Each layer has three modules: a module for the activation of goals, a module for the recognition of situations (RS) and a module for planning and execution (PE). Perceptions of the environment are transmitted to the RS module of the first layer and then, from module to module, up to the top of the hierarchy. The flow of control of actions passes up and down the hierarchy until it reaches the PE module of the last layer, and the associated actions are executed on the environment.
The knowledge base KB-World represents the information that the agent has about the environment (its beliefs about the environment), the knowledge base KB-Planning is equivalent to the library of plans, and finally Social-KB represents the beliefs of the agent about the other agents in the system, including their abilities to help it achieve its goals.

          Figure 8 Layers of the planning architecture

For example, the reactive behavior layer includes three modules: a module for goal activation, a module for the recognition of situations and a module for planning and execution. Perceptions of the environment are transmitted to the situation recognition module of the first layer (the reactive behavior layer), which informs the goal activation module and the planning and execution module; when this layer cannot handle the goal, the flow of control of actions passes to the upper layer (the local planning layer). The knowledge base KB-World represents the information that the agent has about the environment (Figure 9).

          Figure 9 Reactive behavior layer
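To illustrate how control escalates through the three layers, the sketch below models each layer with its recognition, activation and planning/execution steps and its own knowledge base, and hands a goal to the next layer up when the current one cannot produce a plan. The class and method names, the toy planners and the knowledge-base contents are illustrative assumptions, not the interfaces of our implementation.

```python
class Layer:
    """One control layer: situation recognition (RS), goal activation (Ac)
    and planning/execution (PE), backed by its own knowledge base."""

    def __init__(self, name, kb, planner):
        self.name, self.kb, self.planner = name, kb, planner

    def recognize(self, perception):
        self.kb["last_perception"] = perception   # update beliefs

    def plan_and_execute(self, goal):
        # Returns a list of actions, or None if this layer cannot handle the goal.
        return self.planner(goal, self.kb)

def run_control_cycle(layers, perception, goal):
    """Perceptions flow bottom-up; control escalates when a layer fails."""
    for layer in layers:
        layer.recognize(perception)
    for layer in layers:                          # reactive -> local -> collaborative
        actions = layer.plan_and_execute(goal)
        if actions is not None:
            return layer.name, actions
    return None, []

# Toy planners: the reactive layer only handles obstacle avoidance, the local
# planning layer handles navigation goals, and the collaborative layer would
# negotiate with other agents (not shown here).
reactive = Layer("reactive behavior", {"type": "KB-World"},
                 lambda g, kb: ["stop", "turn"] if g == "avoid" else None)
local = Layer("local planning", {"type": "KB-Planning"},
              lambda g, kb: ["plan_path", "follow_path"] if g == "goto" else None)
social = Layer("collaborative planning", {"type": "Social-KB"},
               lambda g, kb: ["request_help"])

print(run_control_cycle([reactive, local, social],
                        perception={"front": 0.9}, goal="goto"))
# -> ('local planning', ['plan_path', 'follow_path'])
```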
5.2 Control architecture
Our architecture includes an explicit representation of the process of cooperation of the agent with the other agents in the system.

     Ex: Execution   Ac: Activation   Re: Recognition
          Figure 10 Control architecture

The physical layer defines the physical agent, consisting of its sensors and its effectors. It establishes direct contact with the environment. The reaction layer includes a collection agent which, after processing, uses the perceptions to determine the location of the mobile unit and to create representations of the environment [11].
For the perception of the environment, several modules may each propose a different action. To choose the most appropriate action, the modules are organized in hierarchical layers, each layer having a different priority. The upper layers represent more abstract tasks that are broken down into more concrete and simpler tasks; the upper layers have a lower priority than the lower layers. The lower layers correspond to simple tasks and have a higher priority.
The action agent defines the sequence of actions leading to the goal set by the remote user, based on the location of the mobile system, the current action, the representations of the environment and their validity.
The communication layer provides communication between the layers (physical, reaction, planning and control).
Finally, with the control layer, the mobile system can supervise itself and make decisions. There is a hierarchy that is connected with the knowledge bases and the planning layers.
Each level of abstraction passes information to the next level.

6. TEST AND EXPERIMENTATION
The proposed software platform is used to control the robot in its environment, which may or may not be known in advance.
Our application allows the control of the robotic platform from a remote computer over the Internet. The remote computer runs a program that can send commands over the Internet to the local computer embedded on the robot (Figure 11).

          Figure 11 Control software platform
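As a minimal sketch of such remote control, assuming a plain line-based TCP protocol between the remote program and the on-board computer (the host name, port and command strings are hypothetical, not those of our platform), a command could be sent as follows:

```python
import socket

def send_command(command, host="robot.local", port=9000, timeout=2.0):
    """Send one text command to the on-board computer and return its reply.

    Assumes a hypothetical newline-terminated protocol; replace host, port
    and command names with those of the actual platform.
    """
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall((command + "\n").encode("utf-8"))
        reply = sock.makefile("r", encoding="utf-8").readline()
    return reply.strip()

# Example: ask the robot to move one cell up on the grid.
# print(send_command("MOVE UP"))
```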
Our agent can move on a grid and must collect the objects placed in some grid squares while avoiding obstacles (Figure 12). The agent is able to perform the following actions: move up, down, left or right, perceive the environment to see whether there is an object in a square, avoid obstacles, and collect objects.

We can define:
• M0 - a module that has the ability to avoid obstacles;
• M1 - a module that is responsible for movement in the environment while avoiding obstacles, using M0;
• M2 - a module with a higher-level skill, which can explore the environment (the grid) systematically by moving through it via the actions of module M1;
• M3 - a module that collects objects.

A module on a lower layer has a higher priority than a module located on a higher layer, because it is responsible for a simpler but more "emergent" task. For this purpose, the operation of a module located in an upper layer is subject to the lower modules.
A module on a lower layer can modify the input of a module on a higher layer through a suppression node, and can invalidate the action of the upper module through an inhibition node. For example, if our robot wants to move eastwards from some position and there is no obstacle in this direction, the action performed by the execution component is the one requested by M1, moving eastward. If there is an obstacle, module M0 takes this obstacle into account through its perception of the environment and inhibits the eastward move; M1 will then try to move in another direction. A sketch of this arbitration is given below.

          Figure 12 Applied test for robot control
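The following sketch reproduces this priority scheme on the grid world: the lower module is consulted first, and an active lower module (here M0 detecting an obstacle) inhibits the eastward move proposed by M1, which then falls back to another direction. The module behaviors, the grid encoding and the action names are simplified assumptions for illustration, not our actual implementation.

```python
# Grid: 0 = free, 1 = obstacle; the agent starts at (row, col) = (1, 1).
GRID = [[0, 0, 0],
        [0, 0, 1],   # an obstacle lies directly east of the agent
        [0, 0, 0]]

MOVES = {"east": (0, 1), "north": (-1, 0), "west": (0, -1), "south": (1, 0)}

def blocked(pos, move):
    r, c = pos[0] + MOVES[move][0], pos[1] + MOVES[move][1]
    return not (0 <= r < len(GRID) and 0 <= c < len(GRID[0])) or GRID[r][c] == 1

def m0_avoid(pos, proposed):
    """M0: lowest layer, highest priority - inhibits moves into obstacles."""
    return None if blocked(pos, proposed) else proposed

def m1_move(pos, preferred="east"):
    """M1: proposes the preferred direction, subject to M0's inhibition."""
    for move in [preferred] + [m for m in MOVES if m != preferred]:
        if m0_avoid(pos, move) is not None:
            return move          # first direction not inhibited by M0
    return None                  # completely surrounded by obstacles

pos = (1, 1)
print(m1_move(pos))              # 'east' is inhibited by M0 -> 'north' is chosen
```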

7. CONCLUSION
In this article, our goal was to create a behavior arbitration mechanism for an autonomous mobile agent; this mechanism must ensure the adaptation of the agent to unfamiliar environments.
The proposed architecture is generic enough to be used in the same way on different mobile systems, combining the capabilities of the mobile agent and allowing the agent to navigate autonomously without human intervention.
The autonomy of mobile agents is an international concern; this autonomy is necessary for the application of mobile agents to dangerous activities such as the exploration of nuclear areas.
While we have shown several examples of agent usage, there are many more.

References
[1] J.C. Gallagher, S. Perretta, "WSU Khepera Robot Simulator User's Manual", Wright State University, Dayton, Ohio, March 24, 2005.
[2] S.C.F. Neuhauss, University of Zürich, Switzerland, "A Robotics-Based Behavioral Paradigm to Measure Anxiety-Related Responses in Zebrafish", July 29, 2013.
[3] B. Chaib-draa, I. Jarras, B. Moulin, "Systèmes multiagents : Principes généraux et applications", Hermès, 2001.
[4] J. Ferber, "Les systèmes multi-agents, vers une intelligence collective", InterEditions, 1995.
[5] IEEE Transactions on Aerospace and Electronic Systems, Vol. 41, Issue 4, Oct. 2005.
[6] "Robust Markov Decision Processes", Mathematics of Operations Research, 38:153-183, 2013.
[7] "Monte Carlo Localization: Efficient Position Estimation for Mobile Robots", 1999.
[8] "Perception, Planning, and Execution for Mobile Manipulation in Unstructured Environments", 2012.
[9] "Modeling Agent Interaction Protocols with AUML Diagrams and Petri Nets", 2003.
[10] R. Moussaoui, A. Sayouti, H. Medromi, "Conception d'une architecture Machine to Machine appliquée à la localisation des systèmes mobiles", Les 2èmes Journées Doctorales en Technologie de l'Information et de la Communication, Fès, Morocco, July 2010.
[11] R. Moussaoui, H. Medromi, H. Mansouri, "Intelligent Architecture based on SMA to track, locate and communicate with mobile systems", International Workshop on Information and Communication Technologies (WOTIC'11), Casablanca, Morocco.
[12] R. Moussaoui, H. Medromi, H. Mansouri, "Architecture de la localisation des systèmes mobiles en utilisant les SMA (système multi-agent)", 3èmes Journées Doctorales en Technologies de l'Information et de la Communication (JDTIC 2011).
[13] A. Albore, E. Beaudry, P. Bertoli, F. Kabanza, "Using a contingency planner within a robot architecture", to appear in Proceedings of the Workshop on Planning, Learning and Monitoring with Uncertainty and Dynamic Worlds, in conjunction with the 17th European Conference on Artificial Intelligence, 2006.

AUTHORS

Rabia Moussaoui received her engineering degree in Computer Science in 2009 from the National School of Computer Science and Systems Analysis (ENSIAS), Rabat, Morocco. In 2010 she joined the system architecture team of the National Higher School of Electricity and Mechanics (ENSEM), Casablanca, Morocco. Her current main research interests concern the modeling and simulation of complex systems based on Multi-Agent Systems. Ms. Moussaoui is currently a Software Engineer in the Office of Vocational Training and Employment Promotion (OFPPT) of Casablanca.

Hicham Medroumi received his PhD in engineering science from the Sophia Antipolis University, Nice, France, in 1996. He is the head of the system architecture team of ENSEM, Hassan II University, Casablanca, Morocco. His current main research interests concern control architectures of mobile systems based on Multi-Agent Systems. Since 2003 he has been a full professor of automatic control, production engineering and computer science at the ENSEM school, Hassan II University, Casablanca.
