Introduction to Intelligent Agents

Jacques Robin

Ontologies • Reasoning • Components • Agents • Simulations
Outline

 • What are intelligent agents?
 • Characteristics of artificial intelligence
 • Applications and sub-fields of artificial intelligence
 • Characteristics of agents
 • Characteristics of agents' environments
 • Agent architectures
What are Intelligent Agents?

 • Q: What are software agents?
 • A: Software whose architecture is based on the following abstractions:
   immersion in a distributed environment, continuous thread, encapsulation,
   sensor, perception, actuator, action, own goal, autonomous decision making
   [Diagram: agents sit at the intersection of Artificial Intelligence,
   Software Engineering and Distributed Systems]
 • Q: What is Artificial Intelligence?
 • A: The field of study dedicated to:
   - Reducing the range of tasks that humans carry out better than current
     software or robots
   - Emulating humans' capability to solve approximately but efficiently most
     instances of problems proven (or suspected) hard to solve algorithmically
     (NP-hard, undecidable, etc.) in the worst case, using innovative, often
     human-inspired, alternative computational metaphors and techniques
   - Emulating humans' capability to solve vaguely specified problems using
     partial, uncertain information
Artificial Intelligence: Characteristics

 • Highly multidisciplinary, inside and outside computer science
 • A runaway field, by definition at the forefront of computing, tackling ever
   more innovative, challenging problems as the ones it has solved become
   mainstream computing
 • Most research in any other field of computing also involves AI problems,
   techniques and metaphors
 • Q: What conclusions can be derived from these characteristics?
 • A: Hard to avoid; very, very hard to do well
   "Well" as in:
   - Well-founded (rigorously defined theoretical basis, explicit simplifying
     assumptions and limitations)
   - Easy to use (seamlessly integrated, easy to understand)
   - Easy to reuse (general, extendable techniques)
   - Scalable (at run time, at development time)
What is an Agent?
General Minimal Definition

 • Any entity (human, animal, robot, or software):
   - Situated in an environment (physical, virtual or simulated)
   - Perceives the environment through sensors (eyes, camera, socket)
   - Acts upon the environment through effectors (hands, wheels, socket)
   - Possesses its own goals, i.e., preferred states of the environment
     (explicit or implicit)
   - Autonomously chooses its actions to alter the environment towards its
     goals, based on its perceptions and prior encapsulated information about
     the environment
 • Processing cycle:
   1. Uses sensors to perceive P
   2. Interprets I = f(P)
   3. Chooses the next action A = g(I,G) to perform to reach its goal G
   4. Uses actuators to execute A
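To make the cycle concrete, here is a minimal sketch in Python. It is illustrative only: the environment interface and the interpret/choose names are placeholders derived from the f and g notation above, not part of any particular agent framework.

    # Minimal sketch of the perceive-interpret-choose-act cycle above.
    # Environment, interpret and choose are illustrative placeholders.

    class SimpleAgent:
        def __init__(self, interpret, choose, goal):
            self.interpret = interpret  # I = f(P)
            self.choose = choose        # A = g(I, G)
            self.goal = goal            # G: preferred environment states

        def step(self, percept):
            interpretation = self.interpret(percept)       # 2. interpret I = f(P)
            return self.choose(interpretation, self.goal)  # 3. choose A = g(I, G)

    def run(agent, environment, cycles):
        for _ in range(cycles):
            percept = environment.perceive()   # 1. use sensors to perceive P
            action = agent.step(percept)       # 2-3. reason
            environment.execute(action)        # 4. use actuators to execute A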
What is an Agent?

[Diagram: an agent embedded in an environment:
 • Sensors deliver percepts P: 1. environment percepts, 2. self-percepts,
   3. communicative percepts
 • Percept interpretation: I = f(P)
 • Autonomous reasoning combines I with the goals G
 • Action choice: A = g(I,G), yielding 1. environment-altering actions,
   2. perceptive actions, 3. communicative actions, executed by the effectors]
Agent x Object

Agent:
 • Intentionality: encapsulates its own goals (even if implicitly) in addition
   to data and behavior
 • Decision autonomy: pro-actively executes behaviors to satisfy its goals;
   can refuse a request from another agent to execute a behavior
 • More complex input/output: percepts and actions
 • Temporal continuity: encapsulates an endless thread that constantly
   monitors the environment
 • Coarser granularity: encapsulates code of size comparable to a package or
   component; composed of various objects when implemented in an OO language

Object:
 • No goal
 • No decision autonomy: executes behaviors only reactively, whenever invoked
   by other objects, and always executes the behavior invoked
 • Simpler input/output: mere method parameters and return values
 • Temporally discontinuous: active only during the execution of its methods
Intelligent Agent x Simple Software Agent

[Diagram: both share the pipeline sensors → percept interpretation I = f(P) →
action choice A = g(I,G), with goals, → effectors; in the intelligent agent
the two steps use AI techniques, in the simple software agent they use
conventional processing]
Intelligent Agent x Classical AI System

[Diagram: a situated agent is immersed in its environment, with AI inside its
percept interpretation and action choice steps and goals in between; a
disembodied classical AI system instead receives input data and a goal,
reasons with AI, and returns output data]
What is an Agent?
Other Optional Properties

 • Reasoning autonomy:
   - Requires AI: an inference engine and a knowledge base
   - Key for: embedded expert systems, intelligent controllers, robots,
     games, internet agents, ...
 • Adaptability:
   - Requires AI: machine learning
   - Key for: internet agents, intelligent interfaces, ...
 • Sociability:
   - Requires AI + advanced distributed systems techniques:
     - Standard protocols for communication, cooperation and negotiation
     - Automated reasoning about other agents' beliefs, goals, plans and
       trustworthiness
     - Social interaction architectures
   - Key for: multi-agent simulations, e-commerce, ...
 • Personality:
   - Requires AI: attitude and emotional modeling
   - Key for: digital entertainment, virtual reality avatars, user-friendly
     interfaces, ...
 • Temporal continuity and persistence:
   - Requires interfaces with the operating system and DBMS
   - Key for: information filtering, monitoring, intelligent control, ...
 • Mobility:
   - Requires: a network interface, secure protocols, mobile code support
   - Key for: information gathering agents, ...
   - Security concerns have prevented its adoption in practice
Welcome to the Wumpus World!

 Agent-oriented formulation:
 • Agents: gold digger
 • Environment objects: caverns, walls, pits, wumpus, gold, bow, arrow
 • Environment's initial state
 • Agents' goals: be alive in cavern (1,1) with the gold
 • Perceptions:
   - Touch sensor: breeze, bump
   - Smell sensor: stench
   - Light sensor: glitter
   - Sound sensor: scream
 • Actions:
   - Legs effector: forward, rotate 90º
   - Hands effector: shoot, climb out
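This formulation maps directly to data. The sketch below is a hypothetical encoding, not taken from any standard Wumpus World implementation; the names are illustrative.

    # Hypothetical encoding of the agent-oriented Wumpus World formulation above.
    PERCEPTS = {"breeze", "bump", "stench", "glitter", "scream"}
    ACTIONS = {"forward", "rotate_90", "shoot", "pick", "climb_out"}

    def goal_reached(state):
        # Goal: be alive in cavern (1,1) with the gold
        return state["alive"] and state["location"] == (1, 1) and state["has_gold"]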
Wumpus World: Abbreviations

[Figure: a 4x4 cavern grid; the agent starts at (1,1), with the wumpus, pits
and gold scattered over the other caverns]

 • A - Agent             • W - Wumpus            • P - Pit
 • G - Gold              • X? - Possibly X       • X! - Confirmed X
 • V - Visited cavern    • B - Breeze            • S - Stench
 • G - Glitter           • OK - Safe cavern
Perceiving, Reasoning and Acting in the Wumpus World

 • Percept sequence: nothing (t=0), then breeze (t=2)
 • Wumpus world model maintained by the agent:
   [Figure: at t=0 the agent in (1,1) marks its cavern and the adjacent
   caverns OK; at t=2, having moved to (2,1) and felt a breeze, it marks
   (3,1) and (2,2) as possible pits (P?)]
 • Percept sequence: stench, then {stench, breeze, glitter}
 • Wumpus world model:
   [Figure: at t=7 the stench perceived in (1,2) confirms the wumpus at (1,3)
   and the pit at (3,1); at t=11, in (2,3), the agent perceives stench,
   breeze and glitter at once, locating the gold and marking (2,4) and (3,3)
   as possible pits (P?)]
 • Action sequence:
   - t=7: go to (2,2), the sole safe unvisited cavern
   - t=11: go to (2,3) to find the gold
Classification Dimensions of Agent Environments

 • Agent environments can be classified as points in a multi-dimensional space
 • The dimensions are:
   - Observability
   - Determinism
   - Dynamicity
   - Mathematical domains of the variables
   - Episodic or not
   - Multi-agency
   - Size
   - Diversity
Observability

 • Fully observable (or accessible): the agent's sensors perceive, at each
   instant, all the aspects of the environment relevant to choosing the best
   action to take to reach its goal
 • Partially observable (or inaccessible, or with hidden variables)
 • Sources of partial observability:
   - Realm inaccessible to any available sensor
   - Limited sensor scope
   - Limited sensor sensitivity
   - Noisy sensors
Determinism

 • Deterministic: executing a given action in a given situation always yields
   the same result
 • Non-deterministic (or stochastic): action consequences are partially
   unpredictable
 • Sources of non-determinism:
   - Inherent to the environment: quantum-level granularity, games of chance
   - Other agents with unknown or non-deterministic goals or action policies
   - Noisy effectors
   - Limited granularity of the effectors or of the representation used to
     choose the actions to execute
Dynamicity: Stationary and Sequential Environments

 • Stationary: a single perception-reasoning-action cycle, during which the
   environment is static
   [Diagram: one percept → reasoning → action cycle takes the environment
   from state 1 to state 2]
 • Sequential: a sequence of perception-reasoning-action cycles, during each
   of which the environment changes only as a result of the agent's actions
   [Diagram: successive percept → reasoning → action cycles take the
   environment from state 1 through state 2, state 3, ... to state N]
Dynamicity: Concurrent Synchronous and Asynchronous

 • Synchronous: the environment can change on its own between one action and
   the next perception of an agent, but not during its reasoning
   [Diagram: between an agent's action and its next percept the environment
   may move through several states on its own]
 • Asynchronous: the environment can change on its own at any time, including
   during the agent's reasoning
   [Diagram: the environment may also change state while the agent is still
   reasoning]
Multi-Agency

 • Sophistication of the agent society:
   - Number of agent roles and agent instances
   - Multiplicity and dynamicity of agent roles
   - Communication, cooperation and negotiation protocols
 • Main classes:
   - Mono-agent
   - Multi-agent cooperative
   - Multi-agent competitive
   - Multi-agent cooperative and competitive, with static or dynamic
     coalitions
Mathematical Domain of Variables

 • MAS variables:
   - Parameters of agent percepts, actions and goals
   - Attributes of environment objects
   - Arguments of environment relations, states, events and locations
 • Taxonomy of variable domains:
   - Discrete:
     - Qualitative:
       - Binary: Boolean or dichotomic
       - Nominal
       - Ordinal
     - Quantitative:
       - Interval
       - Fractional
   - Continuous: R, [0,1]
Mathematical Domain of Variables

 • Binary:
   - Boolean, e.g., Male ∈ {True, False}
   - Dichotomic, e.g., Sex ∈ {Male, Female}
 • Nominal (or categorical):
   - Finite partition of a set without order or measure
   - Relations: only = or ≠
   - e.g., Brazilian, French, British
 • Ordinal (or enumerated):
   - Finite partition of a (partially or totally) ordered set without measure
   - Relations: only =, ≠, <, >
   - e.g., poor, medium, good, excellent
 • Interval:
   - Finite partition of an ordered set with a measure m defining the distance
     d: ∀X,Y, d(X,Y) = |m(X) - m(Y)|
   - No inherent zero
   - e.g., Celsius temperature
 • Fractional (or proportional):
   - Partition with distance and an inherent zero
   - Relations: any
   - e.g., Kelvin temperature
 • Continuous (or real):
   - Infinite set of values
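A small worked example of the interval/fractional distinction, a sketch using the Celsius/Kelvin example above: ratios are only meaningful when the domain has an inherent zero.

    # Ratios make sense in fractional domains (inherent zero, e.g., Kelvin)
    # but not in interval domains (arbitrary zero, e.g., Celsius).
    k1, k2 = 300.0, 150.0              # Kelvin: 300 K really is twice 150 K
    c1, c2 = k1 - 273.15, k2 - 273.15  # the same temperatures in Celsius
    print(k1 / k2)                     # 2.0   -> meaningful ratio
    print(c1 / c2)                     # ~-0.22 -> meaningless: zero is arbitrary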
Other Characteristics

 • Episodic:
   - The agent's experience is divided into separate episodes
   - The results of actions in each episode are independent of previous
     episodes
   - e.g., an image classifier is episodic, a chess game is not; a soccer
     tournament is episodic, a soccer game is not
 • Open environment:
   - Partially observable, non-deterministic, non-episodic, continuous
     variables, concurrent asynchronous, multi-agent
   - e.g., RoboCup, the Internet, the stock market
Size and Diversity

 • Size, i.e., the number of instances of: agent percepts, actions and goals;
   environment agents, objects, relations, states, events and locations.
   Dramatically affects the scalability of agent reasoning execution.
 • Diversity, i.e., the number of classes of: agent percepts, actions and
   goals; environment agents, objects, relations, states, events and
   locations. Dramatically affects the scalability of the agent knowledge
   acquisition process.
Agents' Internal Architectures

 • Reflex agent (purely reactive)
 • Automata agent (reactive with state)
 • Goal-based agent
 • Planning agent
 • Hybrid, reflex-planning agent
 • Utility-based agent (decision-theoretic)
 • Layered agent
 • Adaptive agent (learning agent)
 • Cognitive agent
 • Deliberative agent
Reflex Agent

[Diagram: sensors feed percepts straight into condition-action rules
(percepts → action, A(t) = h(P(t))), whose output drives the effectors;
there is no environment model and no explicit goal]
Remember …

[Diagram: the generic agent again: sensors → percept interpretation
I = f(P) → autonomous reasoning with goals → action choice A = g(I,G) →
effectors]
So?

[Diagram: in the reflex agent, a single set of condition-action rules
(percepts → action, A(t) = h(P(t))) replaces percept interpretation,
goal-directed reasoning and action choice]
Reflex Agent

 • Principle:
   - Use rules (or functions, procedures) that directly map percepts to
     actions
     - e.g., IF speed > 60 THEN fine
     - e.g., IF front car's stop light switches on THEN brake
   - Execute the first rule whose left-hand side matches the current percepts
 • Wumpus World example:
   - IF visualPercept = glitter THEN action = pick
   - see(glitter) → do(pick) (logical representation)
 • Pros:
   - Condition-action rules are a clear, modular, efficient representation
 • Cons:
   - Lack of memory prevents use in partially observable, sequential, or
     non-episodic environments
   - e.g., in the Wumpus World a reflex agent cannot remember which path it
     has followed, when to go out of the cavern, where exactly the dangerous
     caverns are located, etc.
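The first-matching-rule principle fits in a few lines of Python. The rule set below is illustrative: only the glitter rule comes from the slides; the bump rule and the default are assumed for the sake of the example.

    # Reflex agent sketch: execute the first rule whose condition matches the
    # current percepts. A(t) = h(P(t)); no memory is kept between cycles.
    REFLEX_RULES = [
        (lambda p: "glitter" in p, "pick"),    # IF visualPercept = glitter THEN pick
        (lambda p: "bump" in p, "rotate_90"),  # assumed rule: turn after hitting a wall
        (lambda p: True, "forward"),           # assumed default action
    ]

    def reflex_agent(percepts):
        for condition, action in REFLEX_RULES:
            if condition(percepts):
                return action

    print(reflex_agent({"glitter", "breeze"}))  # -> pick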
Automata Agent

[Diagram: sensors and effectors are mediated by a persistent (past and
current) environment model and goals, via three rule sets:
 • Percept interpretation rules: percepts(t) ∧ model(t) → model'(t)
 • Model update rules: model(t-1) → model(t), model'(t) → model''(t)
 • Action choice rules: model''(t) → action(t),
   action(t) ∧ model''(t) → model(t+1)]
Automata Agent

 • Rules associate percepts to actions indirectly, through the incremental
   construction of an environment model (the internal state of the agent)
 • Action choice is based on: current percepts + previous percepts + previous
   actions + encapsulated knowledge of the initial environment state
 • Overcomes the reflex agent's limitations in partially observable,
   sequential and non-episodic environments:
   - Can integrate past and present perception to build a rich representation
     from partial observations
   - Can distinguish between distinct environment states that are
     indistinguishable by instantaneous sensor signals
 • Limitations:
   - No explicit representation of the agent's preferred environment states
   - For agents that must change goals many times to perform well, the
     automata architecture does not scale (combinatorial explosion of rules)
Automata Agent Rule Examples

 • Rules percept(t) ∧ model(t) → model'(t):
   IF visualPercept at time T is glitter
   AND location of agent at time T is (X,Y)
   THEN location of gold at time T is (X,Y)
   ∀X,Y,T see(glitter,T) ∧ loc(agent,X,Y,T) → loc(gold,X,Y,T).

 • Rules model'(t) → model''(t):
   IF agent is with gold at time T
   AND location of agent at time T is (X,Y)
   THEN location of gold at time T is (X,Y)
   ∀X,Y,T withGold(T) ∧ loc(agent,X,Y,T) → loc(gold,X,Y,T).

 • Rules model(t) → action(t):
   IF location of agent at time T is (X,Y)
   AND location of gold at time T is (X,Y)
   THEN choose action pick at time T
   ∀X,Y,T loc(agent,X,Y,T) ∧ loc(gold,X,Y,T) → do(pick,T).

 • Rules action(t) ∧ model(t) → model(t+1):
   IF the chosen action at time T was pick
   THEN agent is with gold at time T+1
   ∀T done(pick,T) → withGold(T+1).
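The same rule groups can be sketched in Python to show how the model mediates between percepts and actions. The dictionary-based model is an assumption made for illustration.

    # Automata agent sketch: rules read and write a persistent environment model.
    def interpret(percepts, model):
        # percept(t) ∧ model(t) → model'(t): glitter means the gold is here
        if "glitter" in percepts:
            model["gold_at"] = model["agent_at"]
        return model

    def choose_action(model):
        # model''(t) → action(t): pick when co-located with the gold
        if model.get("gold_at") == model["agent_at"]:
            return "pick"
        return "forward"

    def apply_action(action, model):
        # action(t) ∧ model''(t) → model(t+1): after pick the agent holds the gold
        if action == "pick":
            model["with_gold"] = True
        return model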
(Explicit) Goal-Based Agent

[Diagram: as the automata agent, plus an explicit goal-update step:
 • Percept interpretation rules: percept(t) ∧ model(t) → model'(t)
 • Model update rules: model(t-1) → model(t), model'(t) → model''(t)
 • Goal update rules: model''(t) ∧ goals(t-1) → goals'(t)
 • Action choice rules: model''(t) ∧ goals'(t) → action(t),
   action(t) ∧ model''(t) → model(t+1)]
(Explicit) Goal-Based Agent

 • Principle: explicit and dynamically alterable goals
 • Pros:
   - More flexible and autonomous than the automata agent
   - Adapts its strategy to the situation patterns summarized in its goals
 • Limitations:
   - When the current goal cannot be reached by the effect of a single
     action, it is unable to plan a sequence of actions
   - Does not make long-term plans
   - Does not handle multiple, potentially conflicting active goals
Goal-Based Agent Rule Examples

 • Rule model(t) ∧ goal(t) → action(t):
   IF goal of agent at time T is to return to (1,1)
   AND agent is in (X,Y) at time T
   AND orientation of agent is 90º at time T
   AND (X,Y+1) is safe at time T
   AND (X,Y+1) has not been visited until time T
   AND (X-1,Y) is safe at time T
   AND (X-1,Y) was visited before time T
   THEN choose action turn left at time T
   [Figure: the agent at (X,Y) facing up, with (X,Y+1) safe but unvisited and
   (X-1,Y) safe and visited]

   ∀X,Y,T (∃N goal(T,loc(agent,1,1,T+N)) ∧ loc(agent,X,Y,T)
    ∧ orientation(agent,90,T) ∧ safe(loc(X,Y+1),T)
    ∧ ¬∃M loc(agent,X,Y+1,T-M) ∧ safe(loc(X-1,Y),T) ∧ ∃K loc(agent,X-1,Y,T-K))
    → do(turn(left),T)
 • Rule model(t) ∧ goal(t) → action(t):
   IF goal of agent at time T is to find gold
   AND agent is in (X,Y) at time T
   AND orientation of agent is 90º at time T
   AND (X,Y+1) is safe at time T
   AND (X,Y+1) has not been visited until time T
   AND (X-1,Y) is safe at time T
   AND (X-1,Y) was visited before time T
   THEN choose action forward at time T
   [Figure: the same situation as above; with the goal of finding gold, the
   agent moves forward to the unvisited cavern instead of turning back]

   ∀X,Y,T (∃N goal(T,withGold(T+N)) ∧ loc(agent,X,Y,T)
    ∧ orientation(agent,90,T) ∧ safe(loc(X,Y+1),T)
    ∧ ¬∃M loc(agent,X,Y+1,T-M) ∧ safe(loc(X-1,Y),T) ∧ ∃K loc(agent,X-1,Y,T-K))
    → do(forward,T)
 • Rule model(t) ∧ goal(t) → goal'(t):
   // If the agent reached its goal to hold the gold,
   // then its new goal shall be to go back to (1,1)
   IF goal of agent at time T-1 was to find gold
   AND agent is with gold at time T
   THEN goal of agent at time T+1 is to be in location (1,1)
   ∀T (∃N goal(agent,T-1,withGold(T+N)) ∧ withGold(T)
    → ∃M goal(agent,T,loc(agent,1,1,T+M))).
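The goal-update mechanism can be sketched as ordinary code over explicit goal data; the names below are illustrative placeholders.

    # Goal-based agent sketch: goals are explicit data the agent rewrites at
    # run time, and action choice depends on both model and current goal.
    def update_goal(model, goal):
        # model(t) ∧ goal(t) → goal'(t): once the gold is held, return to (1,1)
        if goal == "find_gold" and model.get("with_gold"):
            return "return_to_(1,1)"
        return goal

    def choose_action(model, goal):
        # model(t) ∧ goal(t) → action(t): the same model yields different
        # actions under different goals (explore forward vs. retrace back)
        if goal == "find_gold":
            return "forward"
        if goal == "return_to_(1,1)":
            return "turn_left"
        return "noop"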
Planning Agent

[Diagram: as the goal-based agent, plus prediction of future environments:
 • Percept interpretation rules: percept(t) ∧ model(t) → model'(t)
 • Model update rules: model(t-1) → model(t), model'(t) → model''(t)
 • Goal update rules: model''(t) ∧ goals(t-1) → goals'(t)
 • Prediction of future environments rules: model''(t) → model(t+n),
   model''(t) ∧ action(t) → model(t+1)
 • Action choice rules: model(t+n) = result([action1(t), ..., actionN(t+n)]),
   model(t+n) ∧ goal(t) → do(action(t))
 The hypothetical future environment models mediate between prediction and
 action choice]
Planning Agent

 • Percepts and actions are associated very indirectly, through:
   - The past and current environment model
   - Past and current explicit goals
   - Prediction of the future environments resulting from the different
     possible action sequences to execute
 • Rule chaining is needed to build an action sequence from rules that
   capture the immediate consequences of a single action
 • Pros:
   - Foresight allows choosing more relevant and safer actions in sequential
     environments
 • Cons: there is little point in building elaborate long-term plans in:
   - Highly non-deterministic environments (too many possibilities to
     consider)
   - Largely non-observable environments (not enough knowledge available
     before acting)
   - Asynchronous concurrent environments (only cheap reasoning can reach a
     conclusion under time pressure)
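The rule-chaining idea can be sketched as breadth-first search over action sequences. This is a minimal sketch, assuming hashable states and a successor function supplied by the caller; none of the names come from the slides.

    # Sketch of rule chaining as breadth-first search over action sequences.
    # successor(state, action) captures the immediate consequences of one
    # action (model(t) ∧ action(t) → model(t+1)); states must be hashable.
    from collections import deque

    def plan(initial_state, goal_test, actions, successor, max_depth=10):
        """Return a shortest action sequence reaching a goal state, or None."""
        frontier = deque([(initial_state, [])])
        visited = {initial_state}
        while frontier:
            state, seq = frontier.popleft()
            if goal_test(state):
                return seq
            if len(seq) >= max_depth:
                continue
            for action in actions:
                nxt = successor(state, action)
                if nxt is not None and nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, seq + [action]))
        return None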
Hybrid Reflex-Planning Agent

[Diagram: two concurrent threads share the sensors and effectors through a
synchronization component:
 • Reflex thread: reflex rules mapping percepts → actions directly
 • Planning thread: percept interpretation, current model update, future
   environments prediction, goal update and action choice, over a current,
   past and future environment model plus goals]
Hybrid Reflex-Planning Agent

 • Pros:
   - Takes advantage of all the time and knowledge available to choose the
     best possible action (within the limits of its prior knowledge and
     percepts)
   - Sophisticated yet robust
 • Cons:
   - Costly to develop
   - The same knowledge is encoded in different forms in each component
   - Global behavior coherence is harder to guarantee
   - Analysis and debugging are hard due to synchronization issues
   - Not that many environments feature large variations in available
     reasoning time across perception-reasoning-action cycles
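One plausible synchronization scheme, an assumption on our part since the slides only show the two threads sharing sensors and effectors, is to always have the reflex answer ready and adopt the planner's answer only when it arrives before the cycle's deadline:

    # Hybrid arbitration sketch (assumed design, not from the slides): fall
    # back on the reflex rule if the planner misses the cycle's deadline.
    import concurrent.futures

    def hybrid_step(percept, reflex_rules, planner, deadline_s=0.1):
        action = reflex_rules(percept)  # fast reflex choice, always available
        pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
        future = pool.submit(planner, percept)
        try:
            action = future.result(timeout=deadline_s)  # prefer the plan
        except concurrent.futures.TimeoutError:
            pass                                        # keep the reflex action
        pool.shutdown(wait=False)
        return action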
Layered Agents

 • Many sensors/effectors are too fine-grained to reason about goals directly
   on the data/commands they provide
 • Such cases require a layered agent that decomposes its reasoning into
   multiple abstraction layers
 • Each layer represents the percepts, environment model, goals and actions
   at a different level of detail
 • Abstraction can consist of: discretizing, approximating, clustering or
   classifying data from prior layers along temporal, spatial, functional or
   social dimensions
 • Detailing can consist of: decomposing higher-level actions into
   lower-level ones along temporal, spatial, functional or social dimensions

[Diagram: perceive in detail → abstract → decide abstractly → detail → act
in detail]
Layered Automata Agent

[Diagram: percept interpretation, environment model update, and action choice
and execution control each span three abstraction layers:
 • Layer 2: logical rules, e.g., s(A,B) ∧ r(B) → q(A)
 • Layer 1: probability estimates, e.g., P(s) = Σ P(z|y)·P(y)
 • Layer 0: signal-level quantities, e.g., y = ∫ f(x)·dx]
Abstraction Layer Examples

[Figure: example of abstraction layers over two-dimensional X-Y data]
Utility-Based Agent

 • Principle:
   - Goals only express boolean agent preferences among environment states
   - A utility function u allows expressing finer-grained agent preferences
 • u can be defined on a variety of domains and ranges:
   - actions, i.e., u: action → R (or [0,1])
   - action sequences, i.e., u: [action1, ..., actionN] → R (or [0,1])
   - environment states, i.e., u: environmentStateModel → R (or [0,1])
   - environment state sequences, i.e., u: [state1, ..., stateN] → R (or [0,1])
   - (environment state, action) pairs,
     i.e., u: environmentStateModel x action → R (or [0,1])
   - (environment state, action) pair sequences,
     i.e., u: [(action1,state1), ..., (actionN,stateN)] → R (or [0,1])
 • Pros:
   - Allows solving optimization problems, aiming to find the best solution
   - Allows trading off among multiple conflicting goals with distinct
     probabilities of being reached
 • Cons:
   - Currently available methods to compute (even approximately) argmax(u)
     do not scale up to large or diverse environments
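The argmax principle in code, for the simplest of the domains listed above (u defined directly on actions); the utility values are made up for illustration.

    # Utility-based choice sketch: do(argmax over a ∈ actions of u(a)),
    # with u: action → R.
    def choose(actions, u):
        return max(actions, key=u)

    u = {"forward": 0.4, "turn_left": 0.1, "pick": 0.9}.get  # hypothetical utilities
    print(choose(["forward", "turn_left", "pick"], u))       # -> pick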
Utility-Based Reflex Agent

[Diagram: sensors → percept interpretation rules (percept → actions); action
choice executes do(argmax over a ∈ actions of u(a)) using the goals and a
utility function u: actions → R; the effectors carry out the chosen action]
Utility-Based Planning Agent

[Diagram: as the planning agent, but with utility-maximizing action choice:
 • Percept interpretation rules: percept(t) ∧ model(t) → model'(t)
 • Model update rules: model'(t) → model''(t)
 • Future environment prediction rules: model''(t) ∧ action(t) → model(t+1),
   model''(t) → model(t+1)
 • Utility function u: model(t+n) → R over the hypothesized future
   environment models
 • Action choice: do(argmax over action sequences [action1, ..., actionn] of
   u(result([action1, ..., actionn])))]
Adaptive Agent

[Diagram: a performance analysis component, a learning component and a new
problem generation component wrap the acting component, which can follow any
of the previous architectures (reflex, automata, goal-based, planning,
utility-based, hybrid). The learning component learns rules or functions
such as:
 • percept(t) → action(t)
 • percept(t) ∧ model(t) → model'(t)
 • model(t) → model'(t)
 • model(t-1) → model(t)
 • model(t) → action(t)
 • action(t) → model(t+1)
 • model(t) ∧ goal(t) → action(t)
 • goal(t) ∧ model(t) → goal'(t)
 • utility(action) = value
 • utility(model) = value]
Simulated Environments

 • Environment simulators:
   - Often themselves internally follow an agent architecture
   - Should be able to simulate a large class of environments, specialized by
     setting many configurable parameters either manually or randomly within
     a manually selected range
   - e.g., configure a generic Wumpus World simulator to generate world
     instances with a square-shaped cavern grid, a static wumpus and a single
     gold nugget, where the cavern size, the pit numbers and locations, and
     the wumpus and gold locations are randomly picked
 • Environment simulator processing cycle:
   1. Compute the percept of each agent in the current environment
   2. Send these percepts to the corresponding agents
   3. Receive the action chosen by each agent
   4. Update the environment to reflect the cumulative consequences of all
      these actions
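The four-step cycle translates directly to code. In this sketch, env and the agents' choose_action method are illustrative stand-ins, not a specific simulator's API.

    # Sketch of the environment simulator processing cycle above.
    def simulate(env, agents, cycles):
        for _ in range(cycles):
            # 1. Compute the percept of each agent in the current environment
            percepts = {a: env.percept_for(a) for a in agents}
            # 2-3. Send percepts to the agents and receive their chosen actions
            actions = {a: a.choose_action(percepts[a]) for a in agents}
            # 4. Update the environment with the cumulative consequences
            env.apply(actions)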
Environment Simulator Architecture

[Diagram: an environment simulation server holds the simulated environment
model and two rule sets:
 • Environment update rules: model(t-1) → model(t),
   action(t) ∧ model(t-1) → model(t)
 • Percept generation rules: model(t) → percept(t)
 It exchanges percepts and actions over the network with agent clients 1..N,
 and feeds a simulation visualization GUI]
AI's Pluridisciplinarity

[Diagram: Artificial Intelligence at the hub of many disciplines: Economics,
Sociology, Zoology, Neurology, Psychology (Cognitive), Paleontology,
Linguistics, Information Theory, Decision Theory, Game Theory, Operations
Research, Philosophy, Mathematics (Logic, Probabilities & Statistics,
Calculus, Algebra) and Computer Science (Theory, Distributed Systems,
Software Engineering, Databases)]
AI Roadmap

 • Generic sub-fields: Heuristic Search; Automated Reasoning & Knowledge
   Representation; Machine Learning & Knowledge Acquisition; Pattern
   Recognition
 • Specific sub-fields: Multi-Agent Communication, Cooperation & Negotiation;
   Speech & Natural Language Processing; Computer Perception & Vision;
   Robotic Navigation & Manipulation; Games; Intelligent Tutoring Systems
 • Computational metaphors: Algorithmic Exploration; Logical Derivation;
   Probability Estimation; Connectionist Activation; Evolutionary Selection
 • Generic tasks: Clustering; Classification; Temporal Projection; Diagnosis;
   Monitoring; Repair; Control; Recommendation; Configuration; Discovery;
   Design; Allocation; Timetabling; Planning; Simulation

[Diagram: AI metaphors and abstractions, e.g., P(A|B), map problems to
algorithms]
Today's Diversity of AI Applications

 • Agriculture, Natural Resource Management, and the Environment
 • Architecture & Design
 • Art
 • Artificial Noses
 • Astronomy & Space Exploration
 • Assistive Technologies
 • Banking, Finance & Investing
 • Bioinformatics
 • Business & Manufacturing
 • Drama, Fiction, Poetry, Storytelling & Machine Writing
 • Earth & Atmospheric Sciences
 • Engineering
 • Filtering
 • Fraud Detection & Prevention
 • Hazards & Disasters
 • Information Retrieval & Extraction
 • Knowledge Management
 • Law
 • Law Enforcement & Public Safety
 • Libraries
 • Marketing, Customer Relations & E-Commerce
 • Medicine
 • Military
 • Music
 • Networks, including Maintenance, Security & Intrusion Detection
 • Politics & Foreign Relations
 • Public Health & Welfare
 • Scientific Discovery
 • Social Science
 • Sports
 • Telecommunications
 • Transportation & Shipping
 • Video Games, Toys, Robotic Pets & Entertainment
AI Pays!

 • AI industry gross revenue:
   - 2002: US $11.9 billion
   - Annual growth rate: 12.2%
   - Projection for 2007: US $21.2 billion
   - www.aaai.org/AITopics/html/stats.html
 • Companies specialized in AI:
   - http://dmoz.org/Computers/Artificial_Intelligence/Companies/
 • Corporations developing and using AI:
   - Google, Amazon, IBM, Microsoft, Yahoo, ...
 • Corporations using AI:
   - www.businessweek.com/bw50/content/mar2003/a3826072.htm
   - Wal-Mart, Abbot Labs, US Bancorp, LucasArts, Petrobrás, ...
 • Government agencies using AI:
   - US National Security Agency
When is a Machine Intelligent? What is Intelligence?

 • The Turing Test
 • Who's smarter?
   - Your medical doctor or your cleaning lady?
   - Your lawyer or your two-year-old daughter?
   - Kasparov or Ronaldo?
 • What have 40 years of AI research discovered?
   - Common-sense intelligence is harder than expert intelligence
   - Embodied intelligence is harder than purely intellectual, abstract
     intelligence
   - Kid intelligence is harder than adult intelligence
   - Animal intelligence is harder than specifically human intelligence
     (after all, we share 99% of our genes with chimpanzees!)

[Figure: chess, 1997: 2x1; soccer, 2050?: 2x1]
www.robocup.org

 • A new benchmark task for AI
 • Annual competition associated with conferences on AI, Robotics or
   Multi-Agent Systems
Tomorrow's AI Applications

[Figure: movie posters: Blade Runner, The Matrix, A.I.]
				