SOFTWARE MULTI-AGENTS FOR NETWORK MAPPING AND DYNAMIC NETWORK ROUTING
Hamzeh Khazaei
Professor: Dr. John Anderson, Multi-Agent Systems Project

Outline
- Introduction
- What is network mapping?
- Multi-agent cooperation and communication for network mapping
- What is dynamic network routing?
- Multi-agent cooperation for routing
- Simulation and discussion of results
- Future work

Two Scenarios
- Mobile agents for mapping networks
  Multi-agent aspects: cooperation, communication, and, to some extent, learning
- Mobile agents for dynamic network routing
  Multi-agent aspect: cooperation

Network Mapping
- A collection of wireless nodes distributed over a physical area
- Nodes have different, but fixed, radio ranges
- There is a link between two nodes when they are within radio range of each other; because ranges differ, the resulting topology is a directed graph
- All nodes are stationary
- One example: wireless sensor networks
- Here, network mapping means detecting the edges of the network, i.e., discovering its topology
- This is an inherently distributed, decentralized problem, and a good test case for studying the behavior of multi-agent systems

Agents
- A swarm of mobile software agents is injected into the network to map it cooperatively
- Three types of agents are examined:
  - Random agents (baseline): simply move to a random adjacent node on every update
  - Conscientious agents: more sophisticated; choose a neighbor they have never visited or don't remember visiting, otherwise the one they visited longest ago (similar to a depth-first search of the network)

Agents (cont.)
- Conscientious agents use only their first-hand information when moving
- In contrast, super-conscientious agents behave like conscientious agents but use both their own experience and data learned from their peers when choosing the next node

Simulation and Results
- The network is physically situated in two dimensions
- Implemented as a discrete-event, time-based simulation engine
- The simulator consists of 40 classes and 5,500 lines of Java code, implementing:
  - a discrete-event scheduler
  - simulated objects: agents, wireless nodes, the wireless network, and a network-monitoring entity
  - a data-collection system, graphical view, and user interface
  - plotting via JFreeChart (http://www.jfree.org/jfreechart/)
- To compare results across population sizes and algorithms, a connected network of 200 nodes and 1,424 edges was chosen
- In each run, a population of agents is placed randomly in the network
- The simulation continues until every agent has complete knowledge of the network's edges

Sample Networks
(figure slide)

Agent Operation
Every simulation step, an agent does four things:
1. Obtain the current node's information (its edges): first-hand data
2. Update its history (the agent's cache)
3. Visit the other agents currently on the same node and learn their edge information: second-hand data
4. Choose one of the neighbors as the next destination according to its underlying algorithm

Single-Agent Results
- With one agent there is no opportunity for cooperation
- The result is simply a comparison of the two core wandering algorithms, random and conscientious
- With a single agent, conscientious = super-conscientious
(Plot: single agent, 40 runs)

Effect of Cooperation
(three plot slides)

Surprising Results
- Super-conscientious agents use both their own data and peer-obtained information to make movement decisions
- Intuitively, the more information factored into a decision, the better that decision should be
- For small populations, super-conscientious agents do perform best
- For moderate populations, the two types perform roughly the same
- But as the population grows, conscientious agents gradually outperform the super-conscientious ones. Why?
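The two wandering policies compared above can be sketched in Java, the language of the original simulator; this is a minimal illustration, and all class and method names are assumptions rather than code from the paper's 5,500-line system. The conscientious rule prefers a neighbor that was never visited (or is no longer remembered), otherwise the one visited longest ago, using first-hand history only; the super-conscientious rule applies the same preference over the union of first- and second-hand history.

```java
import java.util.*;

// Illustrative sketch of the wandering policies (names are assumptions,
// not taken from the original simulator).
class WanderingAgent {
    // node id -> simulation time of last visit (first-hand history)
    final Map<Integer, Integer> firstHand = new HashMap<>();
    // same information, but learned from peers (second-hand history)
    final Map<Integer, Integer> secondHand = new HashMap<>();

    void visit(int node, int time) { firstHand.put(node, time); }

    // Merge a peer's history, keeping the most recent timestamp per node.
    void learnFromPeer(Map<Integer, Integer> peerHistory) {
        peerHistory.forEach((n, t) -> secondHand.merge(n, t, Math::max));
    }

    // Conscientious rule: prefer a never-visited (or forgotten) neighbor;
    // otherwise the one visited longest ago. First-hand data only.
    int chooseConscientious(List<Integer> neighbors) {
        return oldest(neighbors, firstHand);
    }

    // Super-conscientious rule: the same preference, but over the union
    // of first- and second-hand information.
    int chooseSuperConscientious(List<Integer> neighbors) {
        Map<Integer, Integer> all = new HashMap<>(secondHand);
        firstHand.forEach((n, t) -> all.merge(n, t, Math::max));
        return oldest(neighbors, all);
    }

    private static int oldest(List<Integer> neighbors, Map<Integer, Integer> lastVisit) {
        int best = neighbors.get(0);
        long bestTime = Long.MAX_VALUE;
        for (int n : neighbors) {
            // Unknown nodes count as "visited infinitely long ago".
            long t = lastVisit.containsKey(n) ? lastVisit.get(n) : Long.MIN_VALUE;
            if (t < bestTime) { bestTime = t; best = n; }
        }
        return best;
    }
}
```

Note how the sketch makes the clustering result plausible: once two super-conscientious agents have exchanged histories, their merged maps coincide, so the deterministic `oldest` rule sends them down identical paths.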
Clues from the Simulation
- Super-conscientious agents meet far more often over the course of a run than conscientious agents do
- The number of meetings increases over the course of a run for super-conscientious agents, while it stays roughly fixed for conscientious agents
- At the end of a typical run, 100 conscientious agents are spread across 82 different nodes on average, while 100 super-conscientious agents are spread across only 36

Why Do Super-Conscientious Agents Cluster Together?
- When two agents meet, they share all of their information (of all types)
- Conscientious agents assimilate this data but still base their movement decisions on first-hand information only
- Super-conscientious agents, in contrast, use everything they have (first- and second-hand)
- Therefore two super-conscientious agents that have just met find themselves making subsequent choices from identical sets of data, and so choose identical paths

Conclusion
- The results show that the performance of a multi-agent algorithm depends not only on how efficiently each individual agent acts, but also on how the agent population as a whole distributes its effort
- Super-conscientious agents duplicate each other: they gradually become homogeneous agents that all act alike
- In general, diversity of behavior is important for the effectiveness of cooperating multi-agent systems; if an agent population is too homogeneous, the benefit of having many agents working together is lost
- Adding 20% randomness to the super-conscientious policy makes its performance match the conscientious one
- "Effective cooperation requires division of labor"
(Plot: conscientious agents with different cache policies)

Second Scenario
- Mobile agents for dynamic network routing
- Multi-agent aspect: cooperation

Network Description
- Consider a network of low-power, relatively short-range radio-frequency transceivers distributed throughout a two-dimensional space
- In the simulation the network has 250 nodes:
  - roughly half of them are mobile
  - mobile nodes move at different velocities
  - each node has a specific radio range
  - 5% of the nodes are gateways
  - the network diameter is roughly 20 hops
- Gateways are connected to the outside world: a wired local network, the Internet, etc.

Network Routing
- Most packets in such a network require multiple hops to travel from source to destination
- Resident mobile agents must move around the network in order to gather data about the whole system effectively
- Because some nodes are mobile, radio links form and break as nodes move in and out of each other's range; as a result, the topology of the network is quite dynamic (a directed graph)

Routing Tables
- Every node knows who its neighbors are
- Each node holds a simple routing table containing route information toward the gateways
- These routing tables are not updated by the nodes themselves; nodes are completely passive and rely on mobile agents to update their tables

Agents: General Characteristics
- Nodes are dumb: they run no program of their own; they simply host agents and provide a place to store a database of routing information
- The mobile agents embody all the intelligence in the system; their one mission is to explore the network, updating every node they visit with what they learn in their travels
- Each routing agent keeps a small history of where it has been; the longer the history, the higher the overhead of moving the agent
- The system as a whole relies on the cooperative behavior of the agent population
- Population size matters: the more routing agents, the higher the overhead
- Agents in the population do not communicate with one another directly; they are blind to each other
- Agents do not even read information from the routing tables; they only write to them
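The write-only routing tables described above can be sketched as follows; the entry fields and the freshness rule are assumptions for illustration, not the paper's exact scheme. The point is that the node is purely a data store: only visiting agents call the write method.

```java
import java.util.*;

// Illustrative sketch of a passive node's routing table, which only
// visiting agents write to (the node itself never updates it).
// Field names and the overwrite rule are assumptions.
class RouteEntry {
    final int nextHop;   // neighbor to forward packets toward the gateway
    final int hopCount;  // estimated hops to that gateway
    final int updatedAt; // simulation step of the agent's write

    RouteEntry(int nextHop, int hopCount, int updatedAt) {
        this.nextHop = nextHop;
        this.hopCount = hopCount;
        this.updatedAt = updatedAt;
    }
}

class NodeRoutingTable {
    // gateway id -> best known route toward it
    final Map<Integer, RouteEntry> routes = new HashMap<>();

    // A visiting agent overwrites an entry when its information is
    // fresher, or equally fresh but over a shorter path.
    void agentWrite(int gateway, RouteEntry candidate) {
        RouteEntry cur = routes.get(gateway);
        if (cur == null
                || candidate.updatedAt > cur.updatedAt
                || (candidate.updatedAt == cur.updatedAt && candidate.hopCount < cur.hopCount)) {
            routes.put(gateway, candidate);
        }
    }

    boolean hasRouteTo(int gateway) { return routes.containsKey(gateway); }
}
```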
- In each simulation step an agent does three things: first, it selects one of the neighbors to move to; second, it moves itself to that node, learning about the edge it travels; third, it updates the routing table on the new node

Algorithms
Two algorithms were tested:
- Random (baseline for comparison): select the next move at random
- Oldest-node: preferentially visit the adjacent node last visited longest ago, or one never visited or not remembered; this agent uses its history (cache) to try to avoid backtracking

Performance Measure
- Here, the performance measure is connectivity: the fraction of nodes in the system that have a valid route to at least one gateway node
- This is a reasonable aggregate measure of overall connectivity at any given time
- Every run lasts 300 steps; experience shows the system reaches a stable connectivity level in fewer than 150 steps
(Plot: performance of the system over time, history size 25)

Observations
Two significant results emerge from this first experiment:
- A population of agents can maintain a reasonable level of connectivity across the simulated network
- The population maintains this connectivity with reasonable stability

Parameter Settings
Three main variables are altered independently:
- the number of agents
- the history size of each agent
- the type of agent (random versus oldest-node)
(Plots: history window size; population size)

Observations in Practice
- Oldest-node agents perform better than random agents for every parameter setting
- The random agent is not as bad as expected; in some situations it may even be preferable, e.g., when computational load matters or the network is highly dynamic
- Increasing the history size or the population does not increase connectivity proportionally
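The connectivity measure defined above reduces to a simple fraction; in this sketch route validity is abstracted into one boolean per node, and the class name is illustrative.

```java
// Sketch of the connectivity metric: the fraction of nodes that currently
// hold a valid route to at least one gateway. What makes a route "valid"
// is abstracted into a boolean per node here.
class Connectivity {
    static double of(boolean[] hasValidRoute) {
        int reachable = 0;
        for (boolean b : hasValidRoute) if (b) reachable++;
        return (double) reachable / hasValidRoute.length;
    }
}
```

Averaging this value over the steady-state portion of a run (after the first ~150 steps) gives the aggregate number reported per parameter setting.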
- However, more agents and a bigger history size do narrow the spread between maximum and minimum connectivity
(Plots: 100 random agents, varying history; random agents, varying population, history size 25; oldest-node agents, varying history; oldest-node agents, varying population)

Overhead Analysis
- More agents and more history capacity are good and lead to higher connectivity, but in the real world these attributes come at a cost
- The more agents there are in the system, or the more memory the agents carry, the more overhead the network must bear to support the routing-agent population
- A sensible analysis must account for agent overhead when measuring system performance
- A rough estimate of the cost of transmitting an agent:
  - 8 bits for a node id
  - 120 bits for its routing table
  - so 128 bits per carried node
- Therefore, for an agent with history size H, the overhead of the carried nodes is 128 * H bits
- Assume the agent's own data takes 128 bits, plus another 128 bits for a digital signature of the agent's contents

Overhead Analysis (cont.)
- Taking these estimates together, the per-step overhead of the system is defined by the equation:
  O = N * (128 * H + 256) bits/step
- a simple linear function dominated by the product of the number of agents N and their history size H
- Using this formula, we can compare different configurations of population and history size that incur the same overhead
- For 100 agents with history size 25, the overhead is 345,600 bits/step; many other parameter settings produce the same overhead
(Plot: finding optimal parameter settings at constant overhead)

Future Work
- Network mapping: different links, different agents, adding skills; what if agents wander only in a specific part of the network?
- Dynamic network routing: radio links differ but are fixed during the course of an experiment, which is not realistic; what about some form of inter-agent communication?
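The overhead equation above is straightforward to encode and check, e.g. to enumerate configurations with equal cost; the class name is illustrative.

```java
// The per-step overhead formula from the slides: each of the H history
// entries costs 128 bits (8-bit node id + 120-bit routing table), and each
// agent adds 128 bits of its own data plus a 128-bit signature.
// O = N * (128 * H + 256) bits/step
class Overhead {
    static long bitsPerStep(int agents, int historySize) {
        return (long) agents * (128L * historySize + 256L);
    }
}
```

With it, constant-overhead comparisons reduce to solving N * (128H + 256) = O for integer pairs (N, H).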
- Give agents the ability to grow their population in order to maintain a predefined level of connectivity (under different mobility or traffic conditions)
- Cluster the network around gateways, with distinct groups of agents each in charge of a particular gateway

Any questions? Any ideas?