Multi-Agent Systems Project



Hamzeh Khazaei
Professor: Dr. John Anderson
Multi-Agent Systems Project

     Introduction

   - What is network mapping?
   - Multi-agent cooperation/communication for network mapping
   - What is dynamic network routing?
   - Multi-agent cooperation for routing
   - Simulation and results discussion
   - Future work
     Two Scenarios

   - Mobile agents for mapping networks
      - Multi-agent aspects: cooperation, communication (and, to some extent, learning)
   - Mobile agents for dynamic network routing
      - Multi-agent aspects: cooperation
    Network Mapping

   - A set of wireless nodes, distributed in a physical domain
   - Each node has its own wireless radio range, which is fixed
   - There is a link from one node to another if the second lies within the
     first's radio range; since ranges differ, links need not be symmetric
   - All nodes are stationary
   - The topology of the network is therefore a directed graph
   - One example: wireless sensor networks
   - Here, network mapping means detecting the edges in the network, i.e.,
     finding the topology of the network
   - This is an inherently distributed and decentralized problem
   - A good test case for studying the behavior of multiple agents
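As a minimal sketch of this setup (illustrative names, not the paper's simulator), the directed topology can be derived from node positions and per-node radio ranges:

```java
import java.util.*;

// Illustrative sketch: derive the directed topology of a stationary
// wireless network from node positions and per-node radio ranges.
// An edge u -> v exists when v lies within u's radio range; because
// ranges differ per node, edges need not be symmetric.
public class Topology {

    static boolean hasEdge(double[][] pos, double[] range, int u, int v) {
        double dx = pos[u][0] - pos[v][0];
        double dy = pos[u][1] - pos[v][1];
        return u != v && Math.hypot(dx, dy) <= range[u];
    }

    // Enumerate every directed edge; this is the "map" that the agents
    // must discover piece by piece.
    static List<int[]> edges(double[][] pos, double[] range) {
        List<int[]> result = new ArrayList<>();
        for (int u = 0; u < pos.length; u++)
            for (int v = 0; v < pos.length; v++)
                if (hasEdge(pos, range, u, v)) result.add(new int[]{u, v});
        return result;
    }
}
```

For instance, two nodes 3 units apart with ranges 5 and 2 yield only the single edge 0 -> 1, which is why the graph is directed.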

   - A swarm of mobile software agents is injected into the network to map
     it cooperatively
   - We examine three different types of agents:
      - As a baseline, the Random agent, which simply moves to a random
        adjacent node on every update
      - The Conscientious agent, more sophisticated, which chooses a
        neighbor that it has never visited, does not remember visiting,
        or, failing that, visited longest ago
      - This behaves like a depth-first search of the network

   - Conscientious agents use only their first-hand information when moving
   - In contrast, Super-conscientious agents behave like conscientious
     agents but use both their own experience and data learned from their
     peers when choosing the next node
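A minimal sketch of the conscientious move rule (the names and the timestamp representation are assumptions, not the paper's code): unvisited or forgotten neighbors are preferred; otherwise the neighbor visited longest ago wins.

```java
import java.util.*;

// Sketch of the conscientious move rule (illustrative representation):
// lastVisit maps a node id to the simulation step of the agent's most
// recent visit. A node absent from the map was never visited or has
// been forgotten, and is treated as maximally stale.
public class ConscientiousChoice {

    static int chooseNext(List<Integer> neighbors, Map<Integer, Long> lastVisit) {
        int best = neighbors.get(0);
        long bestTime = Long.MAX_VALUE;
        for (int n : neighbors) {
            // Unknown nodes score Long.MIN_VALUE, so they always win.
            long t = lastVisit.getOrDefault(n, Long.MIN_VALUE);
            if (t < bestTime) {
                bestTime = t;
                best = n;
            }
        }
        return best;
    }
}
```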
    Simulation and results

   - The network is physically situated in two dimensions
   - Implemented with a discrete-event, time-based simulation engine
   - The simulation system consists of 40 classes and 5500 lines of Java
     code, which implement:
      - a discrete-event scheduler
      - the simulated objects: agents, wireless nodes, the wireless
        network, and a network-monitoring entity
    Simulation and results

   - Data collection system
   - Graphical view:
      - user interface
      - plotting with JFreeChart

   - To compare results across population sizes and algorithms, a
     connected network of 200 nodes with 1424 edges was chosen
   - In each run, a population of agents is randomly placed in the network
   - The simulation continues until all agents have perfect knowledge of
     the network's edges
    Sample networks
     Agent operation in each step

   - Every step of simulation time, an agent does four things:
      1. Obtain the current node's information (its edges): first-hand data
      2. Update its history (the agent cache)
      3. Visit the other agents currently on the same node and learn edge
         information from them: second-hand data
      4. Choose one of the neighbors as the next destination, based on its
         underlying algorithm
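The first three operations can be sketched as a toy model (names and the string edge encoding are assumptions, not the paper's classes); step 4, the choice of the next node, is where the random, conscientious, and super-conscientious variants differ.

```java
import java.util.*;

// Toy model of a mapping agent's per-step operations (assumed names).
// The agent's knowledge is a set of directed edges encoded as "u->v".
public class MappingAgent {
    final Set<String> knownEdges = new HashSet<>();   // the agent's cache

    // Steps 1 & 2: observe the current node's edges first hand and fold
    // them into the cache.
    void observe(int node, List<Integer> neighbors) {
        for (int v : neighbors) knownEdges.add(node + "->" + v);
    }

    // Step 3: exchange edge information with a co-located peer
    // (second-hand data flows both ways).
    void meet(MappingAgent other) {
        knownEdges.addAll(other.knownEdges);
        other.knownEdges.addAll(knownEdges);
    }

    // The run ends once every agent knows all edges of the network.
    boolean hasPerfectKnowledge(int totalEdges) {
        return knownEdges.size() == totalEdges;
    }
}
```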
     Single-agent results

   - There is no opportunity for agent cooperation here
   - The result is simply a comparison between the two core wandering
     algorithms, random and conscientious
   - With a single agent, conscientious = super-conscientious
     Single agent, 40 runs
     Effect of cooperation
     Surprising results

   - Super-conscientious agents use both their own data and peer-obtained
     information to make movement decisions
   - Intuitively, the more information that is factored into a decision,
     the better that decision should be
   - For small populations, super-conscientious agents do perform the best
   - For moderate populations, the two types perform roughly the same
   - But for larger populations, the conscientious agents gradually
     outperform the super-conscientious ones!?
     Why? Clues from the simulation

   - Super-conscientious agents meet far more often over the course of a
     run than conscientious agents do
   - The number of meetings increases over the course of a run for
     super-conscientious agents, while it remains fixed for conscientious
     agents
   - At the end of a typical run, 100 conscientious agents are distributed
     across 82 different nodes on average, while 100 super-conscientious
     agents are distributed across only 36 nodes
     Why do super-conscientious agents cluster?

   - When two agents meet, they share all of their information (of every
     type)
   - Conscientious agents assimilate this data but still make their
     movement decisions based only on first-hand information (their own
     observations)
   - In contrast, super-conscientious agents use all the information they
     have, first- and second-hand, to make their decisions
   - Therefore, two super-conscientious agents that have just met find
     themselves making subsequent choices based on identical sets of data,
     and so they choose identical paths
     Conclusions

   - The results show that the performance of a multi-agent algorithm
     depends not only on how efficiently each individual agent acts, but
     also on how the agent population as a whole distributes its effort
   - Super-conscientious agents duplicate each other: gradually they
     become homogeneous agents that all act alike
   - In general, diversity of behavior is important for the effectiveness
     of cooperating multi-agent systems
   - If an agent population is too homogeneous, the benefit of having many
     agents working together is lost
   - Adding 20% randomness to the super-conscientious algorithm makes
     super-conscientious and conscientious agents perform the same
   - "Effective cooperation requires division of labor"
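The 20%-randomness fix can be sketched as an epsilon parameter on the move choice (a hypothetical reconstruction of the tweak, not the authors' code): with probability epsilon the agent ignores its informed choice and moves randomly, breaking the lock-step symmetry between agents that hold identical data.

```java
import java.util.*;

// Sketch of the randomness tweak: with probability epsilon the agent
// discards its (possibly peer-duplicated) informed choice and picks a
// random neighbor instead, restoring behavioral diversity.
public class EpsilonChoice {

    static int chooseNext(List<Integer> neighbors, int informedChoice,
                          double epsilon, Random rng) {
        if (rng.nextDouble() < epsilon) {
            // Random move: two agents with identical caches no longer
            // necessarily follow identical paths.
            return neighbors.get(rng.nextInt(neighbors.size()));
        }
        return informedChoice;   // the usual super-conscientious pick
    }
}
```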
     Conscientious agent (different cache policy)
     Second Scenario

   - Mobile agents for dynamic network routing
      - Multi-agent aspects: cooperation
     Network Description

   - Consider a network of low-power, relatively short-range
     radio-frequency transceivers distributed throughout a two-dimensional
     space
   - In the simulation, the network has 250 nodes, where:
      - roughly half of the nodes are mobile
      - mobile nodes move at different velocities
      - each node has a specific radio range
      - 5% of the nodes are gateways
      - the network diameter is roughly 20 hops

   - Gateways are connected to the outside world: a wired local network,
     the Internet, etc.
     Network routing

   - Most packets in such a network require multiple hops to travel from
     source to destination
   - Resident mobile agents need to move around the network in order to
     effectively gather data about the whole system
   - Due to the mobility of some nodes, radio links form and break as the
     nodes move in and out of range of each other
   - As a result, the topology of the network (a directed graph) is quite
     dynamic
     Routing table

   - Every node knows who its neighbors are
   - Each node has a simple routing table, which contains information
     about the route to a gateway
   - These routing tables are not updated by the nodes themselves; the
     nodes are completely passive and rely on mobile agents to update
     their tables
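As an illustration of such a passive per-node routing table (the entry layout and the shortest-route update policy are assumptions for this sketch): the node stores only a next hop toward a gateway and a hop count, and a visiting agent overwrites the entry when it knows a shorter route.

```java
// Illustrative per-node routing-table entry (layout and update policy
// are assumptions for this sketch). The node itself never calls
// update(); only visiting mobile agents write to it.
public class RouteEntry {
    int nextHop = -1;                       // -1 means: no valid route yet
    int hopsToGateway = Integer.MAX_VALUE;

    // A visiting agent overwrites the entry if it brings a shorter route.
    void update(int viaNeighbor, int hops) {
        if (hops < hopsToGateway) {
            nextHop = viaNeighbor;
            hopsToGateway = hops;
        }
    }

    boolean hasRoute() {
        return nextHop != -1;
    }
}
```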
     Agents – General characteristics

   - Here, the nodes are dumb: they run no program of their own; they
     simply host agents and provide a place to store a database of routing
     information
   - The mobile agents embody the intelligence in the system
   - They have one mission: explore the network, updating every node they
     visit with what they have learned in their travels
   - Each routing agent keeps a history of where it has been; the history
     size is quite small
   - The longer the history, the higher the overhead of moving an agent
     Agents – General characteristics

   - The system as a whole relies on the cooperative behavior of the
     population of agents
   - The population size is very important: the more routing agents, the
     higher the overhead
   - Agents in the population don't communicate with one another directly;
     they are blind to each other
   - Agents don't even read information from the routing tables; they only
     write to them
   - In each step of the simulation, an agent does three things: first, it
     selects one of the neighbors to move to; second, it moves itself to
     the new node, learning about the edge it travels; third, it updates
     the routing table on the new node

   - Here, two algorithms were tested:
      - The Random algorithm, as a baseline for comparison: it selects the
        next move randomly
      - The Oldest-node agent, which preferentially visits the adjacent
        node it last visited longest ago, never visited, or doesn't
        remember visiting; this agent uses its history (cache) to try to
        avoid backtracking
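The oldest-node agent's bounded history can be sketched with a small queue (the capacity handling, refresh-on-revisit, and tie-breaking here are assumptions for the sketch):

```java
import java.util.*;

// Sketch of a routing agent's bounded history (cache): the agent keeps
// the last H nodes it visited, evicting the oldest entry when full.
// A small H keeps the per-hop cost of carrying the agent low.
public class AgentHistory {
    private final int capacity;
    private final Deque<Integer> visits = new ArrayDeque<>();

    AgentHistory(int capacity) { this.capacity = capacity; }

    void record(int node) {
        visits.remove(Integer.valueOf(node));   // re-visiting refreshes the entry
        if (visits.size() == capacity) visits.removeFirst();  // evict oldest
        visits.addLast(node);
    }

    // Oldest-node rule: prefer neighbors not in the history at all;
    // otherwise pick the one that appears earliest (visited longest ago).
    int chooseNext(List<Integer> neighbors) {
        List<Integer> hist = new ArrayList<>(visits);
        int best = neighbors.get(0);
        int bestIdx = Integer.MAX_VALUE;
        for (int n : neighbors) {
            int idx = hist.indexOf(n);          // -1 = never visited / forgotten
            if (idx < bestIdx) {
                bestIdx = idx;
                best = n;
            }
        }
        return best;
    }
}
```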
     Performance measures

   - Here, the performance measure is connectivity:
      - the fraction of nodes in the system that have a valid route to at
        least one gateway node
      - this measure is a reasonable aggregate of overall connectivity at
        any given time
   - Every run goes for 300 steps; in our experience, the system reaches a
     stable level of connectivity in fewer than 150 steps
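The connectivity measure can be computed by chasing next-hop pointers (a sketch with an assumed data layout: -1 marks a missing route, and a step bound guards against routing loops):

```java
// Sketch of the connectivity measure: the fraction of nodes whose chain
// of next-hop pointers actually reaches a gateway.
public class Connectivity {

    // nextHop[i] = neighbor that node i would forward to (-1 = no route);
    // gateway[i] = true if node i is a gateway.
    static double measure(int[] nextHop, boolean[] gateway) {
        int n = nextHop.length, reachable = 0;
        for (int i = 0; i < n; i++) {
            int cur = i, steps = 0;
            // Follow the route for at most n hops (loop protection).
            while (cur != -1 && !gateway[cur] && steps++ < n)
                cur = nextHop[cur];
            if (cur != -1 && gateway[cur]) reachable++;
        }
        return (double) reachable / n;
    }
}
```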
     Performance of system over time, history size: 25
     Some hints:

   - There are two significant results in this first experiment:
      - First, a population of agents can maintain a reasonable level of
        connectivity across the simulated network
      - Second, the population maintains this connectivity level with
        reasonable stability
     Parameter settings

   - Here, three main variables will be varied independently:
      - the number of agents
      - the history size of each agent
      - the type of agent (Random versus Oldest-node)
     History window size
     Population size
     Some practical hints:

   - Oldest-node agents perform better than random agents for every
     parameter setting
   - The random agent is not as bad as we expected
   - In some situations it may be better to use random agents: when
     computational load is important or the network is highly dynamic
   - Increasing the history size and the population does not increase
     connectivity as much as one might expect
   - However, more agents and a bigger history size narrow the spread
     between the maximum and minimum connectivity
     100 Random agents, varying history
     Random agent, varying population, history size = 25
     Oldest Node agent, varying history
     Oldest Node agent, varying population
     Overhead analysis

   - Having more agents and more history capacity is good, and leads to
     higher connectivity
   - But in the real world, these attributes come at a cost
   - The more agents there are in the system, or the more memory the
     agents have, the more overhead there will be for the network to
     support the routing agents
   - A sensible analysis must account for agent overhead when measuring
     system performance
     Overhead analysis

   - Here is a rough estimate of the cost of transmitting an agent:
      - 8 bits for a node id
      - 120 bits for the associated routing-table data
      - so, 128 bits for every node carried
   - Therefore, for an agent with a history of size H, the overhead from
     carrying nodes is 128 * H bits
   - Let us assume the agent's own data takes 128 bits, and allow another
     128 bits for the agent's digital signature
     Overhead analysis

   - Taking these estimates together, the overhead of the system is given
     by the equation:
               O = N * (128 * H + 256) bits/step
   - A simple linear function, dominated by the product of the number of
     agents and their history size
   - Using this formula, we can compare different configurations of
     population and history size that have the same overhead
   - For 100 agents with a history size of 25, the overhead is
     345,600 bits/step
   - Many other parameter settings also produce the same overhead
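The formula and the constant-overhead comparison can be checked numerically (a direct transcription of the equation above):

```java
// Direct transcription of the overhead equation O = N * (128*H + 256)
// bits/step, where N is the number of agents and H their history size.
public class Overhead {

    static long bitsPerStep(int agents, int historySize) {
        return (long) agents * (128L * historySize + 256L);
    }
}
```

For example, 100 agents with history size 25 cost 100 * (3200 + 256) = 345,600 bits/step, and 50 agents with history size 52 cost exactly the same, so the two configurations can be compared on equal footing.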
     Finding optimal parameter settings for constant overhead
     Future work

   - Network mapping
      - Different links, different agents, adding some skills
      - What if agents wander only in a specific part of the network?

   - Dynamic network routing
      - Radio links differ, but are fixed during the course of an
        experiment; in reality this is not the case
      - What if we had some kind of inter-agent communication?
      - Agents could have the ability to increase their population in
        order to maintain a predefined connectivity (different mobility or ...)
      - Network clustering based on gateways: distinct groups of agents,
        each in charge of a particular gateway
Any Questions?


Any Ideas?
