Artificial Intelligence
CS 165A
Fall 2004
Lecture Notes 3

 1950 Turing paper
 Agents
Main Approaches to AI
• Definition of intelligence
     – Viewed in early years (e.g., Turing) as the ability to think
• Four main approaches:
    1. Acting humanly: Turing test approach (1950)
             idea: define intelligence by comparison with
             (acceptedly) intelligent entities
    2. Thinking humanly: cognitive modeling approach
    3. Thinking rationally: laws of thought approach
             typically based on logical representation
             focus on making “correct inferences”
    4. Acting rationally: rational agent approach
             rational agent acts to achieve the best (expected) outcome
AI as ideal behavior

                        Human                    Ideal

 Thought processes   Systems that think      Systems that think
 and reasoning       like humans             rationally

 Behavior            Systems that act        Systems that act
                     like humans             rationally
Approach of text/course to AI
•   Will briefly overview the Turing Test approach
    –   because of its historical importance
•   Will adopt the rational agent approach
    –   a standard of rationality can be defined more easily
        than one based on comparison with humans
    –   more general than approaches based on rational inference
        (“thinking rationally”)
    –   rational agent approach as an “ideal” model
           Compare with ideal (frictionless) models in physics
Turing’s seminal AI paper
   Computing Machinery and Intelligence (1950)

• Considers the question, “Can Machines Think?”
   – Too subjective, meaningless – rather, replace this question with an
     operational definition of thinking/intelligence
• The Imitation Game

        [Diagram: the Imitation Game. An Interrogator questions two unseen
        players, A (human) and B (woman), and must determine which is which.]
Turing paper (cont.)
• The Turing Test
   – “Are there imaginable digital computers which would do well in
     the imitation game?”
   – i.e., Can a computer fool an interrogator into thinking it is a human?
• Properties of the Turing Test
   – Operational/functional/behavioral definition of intelligence
   – Distinguishes between physical and intellectual capacities
   – Question and answer method
       language comprehension and generation

• Might there be other kinds of Turing Tests?
   – Emotional, physical, visual…
Digital Computers
• In 1950, computers were not household items!
   – Turing had to define digital computers
       Distinguishes from “human computers”

   – States basic Theory of Computation results regarding universality
       All digital computers are essentially equivalent

       Don’t need different machines for different tasks

   – Main technical issues
       Adequate storage (10^9), Speed, Programming

• Key for Turing was “learning machines”
   – Probabilistic (not completely determined)
   – Simulate a child’s mind, then educate
“I should be surprised if [a storage of] more than 10^9 was
required for satisfactory playing of the imitation game…. It
is probably not necessary to increase the speed of operations
of the machines at all…. Our problem then is to find out how
to program these machines to play the game.”

“Instead of trying to produce a program to simulate the adult
mind, why not rather try to produce one which simulates the
child's? If this were then subjected to an appropriate course of
education one would obtain the adult brain…. Our hope is
that there is so little mechanism in the child-brain that
something like it can be easily programmed.”
Objections to intelligent computers (Turing)
1. The Theological Objection
   – Thinking is part of the soul, which is particular to man
2. The 'Heads in the Sand' Objection
   – I don’t want it to be true
3. The Mathematical Objection
   – Gödel’s Incompleteness Theorem
4. The Argument from Consciousness
   – How would we really know?
5. Arguments from Various Disabilities
   – Computers will never be able to do X
Main Objections (Turing)
6. Lady Lovelace's Objection
   – Computers can only do what we instruct them to do
7. Argument from Continuity in the Nervous System
   – The nervous system is analog
8. The Argument from Informality of Behaviour
   – Rules cannot capture behavior
9. The Argument from Extra-Sensory Perception
   – What if ESP is real…?
Insight from 1950
“We may hope that machines will eventually compete with men in all
purely intellectual fields. But which are the best ones to start with?”
    - Chess
    - Understanding and speaking language

“I believe that in about fifty years’ time it will be possible to program
computers with a storage capacity of about 10^9 to make them play the
imitation game so well that an average interrogator will not have more
than 70 per cent chance of making the right identification after five
minutes of questioning.”

“I believe that at the end of the century the use of words and general
educated opinion will have altered so much that one will be able to speak
of machines thinking without expecting to be contradicted.”
The Loebner Prize
AI and Intelligent Agents
AI as ideal behavior

                        Human                    Ideal

 Thought processes   Systems that think      Systems that think
 and reasoning       like humans             rationally

 Behavior            Systems that act        Systems that act
                     like humans             rationally

Our view of AI
• AIMA view: AI is building intelligent (rational) agents
    – Principles of rational agents
    – Models for constructing them
         Their components

• Rational: “Does the right thing” in a particular situation
    – Maximize expected performance (not actual performance)

• So a rational agent does the “right” thing (at least tries to)
    – Maximizes the likelihood of success, given its information
    – How is “the right thing” chosen?
        Possible actions (from which to choose)

        Percept sequence (current and past)

        Knowledge (static or modifiable)

        Performance measure (wrt goals – defines success)
What's an Agent?
"An intelligent agent is an entity capable of combining cognition,
  perception and action in behaving autonomously, purposively and
  flexibly in some environment." (agents@USC)

• Possible properties of agents:
    – Agents are autonomous – they act on behalf of the user
    – Agents can adapt to changes in the environment
    – Agents don't only act reactively, but sometimes also proactively
    – Agents have social ability – they communicate with the user, the
      system, and other agents as required
    – Agents also cooperate with other agents to carry out more
      complex tasks than they themselves can handle
    – Agents migrate from one system to another to access remote
      resources or even to meet other agents

Agent portal
   –   News
   –   Organizations
   –   Labs
   –   Courses
   –   Companies
   –   Software
   –   Topics
   –   Etc.
Our model of an agent
• An agent
   – perceives its environment,
   – reasons about its goals,
   – acts upon the environment
• Abstractly, a function from percept histories to actions
        f : P*  A
• Main components of an agent
   – Perception (sensors)
   – Reasoning/cognition
   – Action (actuators)
• Supported by
   – knowledge representation, search, inference, planning,
     uncertainty, learning, communication…
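• A minimal Python sketch of this abstraction (the names Percept, Action,
  AgentFunction, and run_episode are illustrative, not from the text): the
  agent is just a function from the percept history to an action, and a
  sense-decide-act loop drives it.

    from typing import Any, Callable, List

    Percept = Any
    Action = Any

    # Abstractly, an agent is a function f : P* -> A, mapping the full
    # percept history (a list of percepts) to the next action.
    AgentFunction = Callable[[List[Percept]], Action]

    def run_episode(agent: AgentFunction,
                    get_percept: Callable[[], Percept],
                    do_action: Callable[[Action], None],
                    steps: int = 10) -> None:
        """Drive an agent: sense, decide from the whole history, act."""
        percepts: List[Percept] = []
        for _ in range(steps):
            percepts.append(get_percept())   # perception (sensors)
            action = agent(percepts)         # cognition: compute f(P*)
            do_action(action)                # action (actuators)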
Our view of AI (cont.)
• So this course is about designing rational agents
   – Constructing f
   – For a given class of environments and tasks, we seek the agent (or
     class of agents) with the “best” performance
   – Note: computational limitations make complete rationality
     unachievable in most cases

• In practice, we will focus on problem-solving techniques
  for agents
   – Cognition (not perception or action)
   – View as ways of constructing f
Ideal Rational Agent
• Basic definition:

   “For each possible percept sequence, an ideal rational agent
     should do whatever action is expected to maximize its
     performance measure, on the basis of the evidence provided by
     the percept sequence and whatever built-in knowledge the
     agent has.”

• Potential problems?

         Note that:     Rational ≠ Omniscient
                        Rational ≠ Clairvoyant
                        Rational ≠ Successful
Do the Right Thing
• Task: Get to the top
• What’s the right action?
Describing an agent

• PEAS description of an agent – Performance measure,
  Environment, Actuators, Sensors
   – Goals may be explicit or implicit (built into performance measure)

• Not limited to physical agents (robots)
   – Any AI program
The Vacuum World

   Performance measure, Environment, Actuators, Sensors
 Vacuum world
• Environment (E)
    – Location
    – Cleanliness
• Three actions (A)
    – Move right
    – Move left
    – Suck
• Sensed information (percepts) of environment (S)
    – Two locations
        Left
        Right
    – Two states
        Dirty
        Clean

• Performance (P)
    – Keep world clean (?)
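• One possible way to write the vacuum world’s PEAS description as data
  (the dictionary layout below is just an illustration, not a standard
  encoding; the square names A and B match the reflex agent shown later):

    # Illustrative PEAS description of the two-square vacuum world
    vacuum_world_peas = {
        "Performance": "keep the world clean (e.g., clean squares over time)",
        "Environment": {"locations": ["A", "B"], "status": ["Dirty", "Clean"]},
        "Actuators":   ["Left", "Right", "Suck"],
        "Sensors":     ["current location", "dirt status at current location"],
    }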
PEAS Descriptions
Agent Program
• Implementing f : P* → A         …or…         f (P*) = A
  – Lookup table?
  – Learning?
                    [Figure: table-driven agent program. The agent keeps its
                    knowledge, past percepts, and past actions; on each step it
                    adds the new percept to percepts and returns the action
                    LUT[percepts, table].]
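• A rough Python sketch of the lookup-table idea (TableDrivenAgent and the
  example table below are hypothetical, not from the course materials):

    from typing import Any, Dict, List, Tuple

    class TableDrivenAgent:
        """Implements f(P*) = A by indexing a table with the whole percept history."""
        def __init__(self, table: Dict[Tuple[Any, ...], Any]):
            self.table = table             # an action for every percept sequence
            self.percepts: List[Any] = []  # past percepts (the agent's memory)

        def __call__(self, percept: Any) -> Any:
            self.percepts.append(percept)            # add percept to percepts
            return self.table[tuple(self.percepts)]  # LUT[percepts, table]

    # Tiny example table for a two-step vacuum-world run (made-up entries):
    table = {
        (("A", "Dirty"),): "Suck",
        (("A", "Dirty"), ("A", "Clean")): "Right",
    }
    agent = TableDrivenAgent(table)
    print(agent(("A", "Dirty")))   # -> Suck
    print(agent(("A", "Clean")))   # -> Right

  The obvious drawback: the table grows exponentially with the length of the
  percept sequence, which is why a literal lookup table is rarely practical.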
Basic types of agent programs
• Simple reflex agent
• Model-based reflex agent
• Goal-based agent
• Utility-based agent
• Learning agent
Simple Reflex Agent

• Input/output associations
• Condition-action rule: “If-then” rule (production rule)
    – If condition then action (if in a certain state, do this)
    – If antecedent then consequent
Simple Reflex Agent

  • Simple state-based agent – Classify the current percept into a
    known state, then apply the rule for that state
• Function REFLEX-VACUUM-AGENT ([location, status])
   – Returns an action
          If status=Dirty, then return Suck
          Else if location = A then return Right
          Else if location = B then return Left
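• A direct Python rendering of this rule set (a sketch; the function name
  reflex_vacuum_agent simply mirrors the slide’s pseudocode):

    def reflex_vacuum_agent(percept):
        """Simple reflex agent for the two-square vacuum world.

        percept is a (location, status) pair, e.g. ("A", "Dirty").
        The decision uses only the current percept, never the history.
        """
        location, status = percept
        if status == "Dirty":
            return "Suck"
        elif location == "A":
            return "Right"
        elif location == "B":
            return "Left"

    print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
    print(reflex_vacuum_agent(("B", "Clean")))  # -> Left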
• Early expert systems
   – Production system architecture
   – Short term memory (STM): state of world
   – Long term memory (LTM) IF-THEN rules
   – Matching

• Must make the correct decision on the basis of the current percept alone
   – Environment must be fully observable
Alternatives to the simple reflex agent model
• Maintain a view of the parts of the world the agent can’t see
   – Construct and use models of the world

• Construct and use goals for the agent
   – Simple goals
          Current/past states of the environment are not sufficient to choose an action
   – Utility-based model of the agent
          Goals may be too simple a representation
             – Goals constitute a special case of a utility function

• Construct agents with learning capabilities
Model-Based Reflex Agent

• Internal state – keeps track of the world, models the world
Model-Based Reflex Agent

  • State-based agent – Given the current state, classify the
    current percept into a known state, then apply the rule for that state
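• A minimal sketch of this idea in Python (ModelBasedReflexAgent and the
  update_state/rule_for parameters are illustrative placeholders, not an
  API from the text):

    class ModelBasedReflexAgent:
        """Reflex agent that maintains an internal model (state) of the world."""
        def __init__(self, initial_state, update_state, rule_for):
            self.state = initial_state        # internal model of the world
            self.update_state = update_state  # (state, last_action, percept) -> new state
            self.rule_for = rule_for          # state -> action
            self.last_action = None

        def __call__(self, percept):
            # Fold the new percept (and the expected effect of the last
            # action) into the internal state, then act on that state.
            self.state = self.update_state(self.state, self.last_action, percept)
            self.last_action = self.rule_for(self.state)
            return self.last_action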
Goal-Based Agent

• Goal: immediate, or long sequence of actions?
    – Search and planning – finding action sequences that achieve the agent’s goals
Utility-Based Agent

• “There are many ways to skin a cat”
• Utility function U: specifies degree of usefulness (happiness)
    – Maps a state onto a real number
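• For example, a toy utility function for the vacuum world might reward
  clean squares and charge a small cost per move (made-up numbers, purely
  illustrative):

    def utility(state) -> float:
        """Toy utility: clean squares minus a small cost per move taken."""
        clean = sum(1 for s in state["squares"].values() if s == "Clean")
        return clean - 0.1 * state["moves"]

    # Both squares clean after 3 moves:
    print(utility({"squares": {"A": "Clean", "B": "Clean"}, "moves": 3}))  # 1.7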
Learning Agent
• Properties of environments
   –   Fully vs. partially observable
   –   Deterministic vs. stochastic
   –   Episodic vs. sequential
   –   Friendly vs. hostile
   –   Static vs. dynamic
   –   Discrete vs. continuous
   –   Single agent vs. multiagent

• The environment types largely determine the agent design
• The real world is inaccessible, stochastic, nonepisodic,
  hostile, dynamic, and continuous
Coming next
• Chapter 3, Problem solving and search (blind search)
• Chapter 4, Heuristic search
