# Artificial Intelligence and Lisp #10


Part II: Logic-Based Planning
Lab statistics 2009-11-09

Registration: 50 students

| Number of:        | lab2a | lab2b | lab3a | lab3b |
|-------------------|------:|------:|------:|------:|
| Lab completed     |    39 |    16 |    13 |     8 |
| Incomplete upload |     1 |     3 |     0 |     1 |

Other information: Notice that two lectures have
been rescheduled! Please check the schedule on
the official (university) webpage for the course.
Repeat from previous lecture:
Reasoning about Plans – two cases
   Given the logical rules, action laws, initial state of the
world, and one or more intentions by the agent – predict
future states of the world: done by inference for obtaining
logic formulas that refer to later timepoints
         rules, initstate, plan |= effects
   Given the logical rules, action laws, initial state of the
world, and an action term expressing a goal for the agent –
obtain a plan (an intention) that will achieve that goal
        rules, initstate, plan |= goal
   where the desired result is the effects in the first case and the
plan in the second (marked in red on the original slide)
   Planning is therefore an inverse problem wrt inference. In
formal logic this operation is called abduction. This will be
the main topic of the next lecture.
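The two cases can be illustrated with a small sketch, assuming a toy world with made-up actions (`walk-to-door`, `open-door`) and states as sets of fluents: prediction runs a given plan forward, while planning has to invert prediction, here by searching for a plan whose predicted effects cover the goal.

```python
from itertools import product

# Toy action laws: name -> (precondition fluents, added fluents, deleted fluents).
# All names are illustrative, not from the lecture.
ACTIONS = {
    "open-door": ({"at-door"}, {"door-open"}, set()),
    "walk-to-door": (set(), {"at-door"}, set()),
}

def apply_action(state, name):
    pre, add, delete = ACTIONS[name]
    if not pre <= state:
        return None                    # precondition fails: not applicable
    return (state - delete) | add

def predict(initstate, plan):
    """rules, initstate, plan |= effects  --  run the plan forward."""
    state = initstate
    for a in plan:
        state = apply_action(state, a)
        if state is None:
            return None
    return state

def plan_for(initstate, goal, max_len=3):
    """rules, initstate, ??? |= goal  --  invert prediction by search."""
    for n in range(max_len + 1):
        for plan in product(ACTIONS, repeat=n):
            final = predict(initstate, list(plan))
            if final is not None and goal <= final:
                return list(plan)
    return None

print(predict(set(), ["walk-to-door", "open-door"]))   # effects of a given plan
print(plan_for(set(), {"door-open"}))                  # a plan for a given goal
```

The brute-force search stands in for abduction; the lecture's point is precisely that this inverse direction needs its own systematic method.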
Major topics today

   1. How can planning-by-abduction be done
systematically?
   2. Examples of reasoning about actions that can readily be done
in a logic-based approach but not in a state-space-based approach
   3. Planning using the situation calculus, where plans can
be obtained deductively, i.e. without the need for abduction
Important points in the example
   It was possible to work with actions that produce new objects
   It was possible to select between several objects (the parent rabbits)
that could be combined to produce the desired result
   Commitment to specific choices (of parents) could be postponed to a
later point in the reasoning process, and did not have to be done
when an action was selected. Uncertainty could be carried along.
   The same problem arises even if the set of objects is held fixed,
whenever the outcome of an action on one object depends on that object's
relations to other objects and on attributes of those other objects
   Several subfunctions: suggest a possible action (or plan), check that
the plan is feasible, (modify the plan if not), execute the plan
   Alternation between deduction (for drawing conclusions) and
extralogical operations e.g. for choosing a hypothesis for an action.
Deduction is subordinate to the agent architecture; the agent is not
just a deduction machine.
What if things are not so simple?

   In the example, step 2 serves to select an action that may be tried,
and in step 3 it is verified that it is applicable and that all parts of the
goal are achieved. What if this does not happen? Several
possibilities:
   Select other bindings for variables in the resolution process (e.g.,
other parent rabbits)
   Select other resolution pairs, e.g. for how an action is performed
   Select another action instead of the first chosen one
   If the goal is not entirely achieved by the selected action a, then
replace a by [seq a b] so as to extend the plan
   If the precondition is not satisfied for the selected action a, then
replace a by [seq c a] for some action c so as to prepare the
precondition for a
   Notice that the extension of the plan can be done repeatedly.
Example of precondition enabling action

   In the example, suppose we add a precondition that the two
parents must be co-located for the breed action to be applicable
   Suppose also that the candidate parent rabbits are not co-
located at time 0 but there is an action for moving rabbits
   The planning process described above will select the breed
action, notice that it is not applicable, and replace the first
selected plan by
         [seq [move albert beda][breed albert beda]]
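This repair step can be sketched as follows, using the rabbit example. The set-based state representation and helper names are illustrative, not from the lecture; only the [seq c a] construction follows the slides.

```python
# Each action is a triple: (printable term, precondition fluents, effect fluents).
def breed(k, r):
    return (f"breed {k} {r}", {f"colocated {k} {r}"}, {f"offspring {k} {r}"})

def move(k, r):
    # Moving k to r's location establishes co-location; no precondition assumed.
    return (f"move {k} {r}", set(), {f"colocated {k} {r}"})

def applicable(state, action):
    _, pre, _ = action
    return pre <= state

def repair(state, action, enablers):
    """If `action` is not applicable, prepend an enabling action c,
    turning the plan a into [seq c a]."""
    if applicable(state, action):
        return [action[0]]
    for c in enablers:
        if applicable(state, c) and action[1] <= (state | c[2]):
            return [c[0], action[0]]          # [seq c a]
    return None

state0 = set()                                # parents not co-located at time 0
plan = repair(state0, breed("albert", "beda"), [move("albert", "beda")])
print(plan)   # ['move albert beda', 'breed albert beda']
```

As the slides note, this extension can be applied repeatedly, growing the plan one enabling action at a time.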
Variation 1: Extended Duration of Action

   Use rule such as [D s t [breed k r]] → [= t (+ s 12)], possibly
with additional preconditions so that the duration depends on
them
   Or: use rule [D s t a] → [= t (+ s (duration a))]
   Gestation period conditional on the species:
     [H s (species-of k) rabbit] → [= 12 (duration [breed k r])]
   The latter approach has the advantage of making the duration
of an action explicit, so that the agent can reason about it in
various ways, although at the expense of introducing an
additional complication
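The duration rule [D s t a] → [= t (+ s (duration a))] with a species-conditional gestation period can be sketched like this (the default duration of 1 for other actions is an assumption for illustration):

```python
# [H s (species-of k) rabbit] -> [= 12 (duration [breed k r])]
GESTATION = {"rabbit": 12}

def duration(action, species_of):
    op, k, *_ = action
    if op == "breed":
        return GESTATION[species_of[k]]   # conditional on the species of k
    return 1                              # assumed default for other actions

def end_time(s, action, species_of):
    """[D s t a] -> [= t (+ s (duration a))]"""
    return s + duration(action, species_of)

species = {"albert": "rabbit", "beda": "rabbit"}
print(end_time(0, ("breed", "albert", "beda"), species))   # 12
```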
Variation 2: Undesirable Side-effect

   Suppose the birth of a rabbit at time 13 brings bad luck
   Write [D s 13 [breed k r]] → [H 13 badluck y]
   and [D s t a] & [H t badluck y] → [H s (dont-do a) y]
   Extend “Step 3” in the detailed example so that it not merely
identifies success, but also continues the deduction a bit
more so as to identify any dont-do conclusions
   If the above example is run with the plan
     [G 0 [seq [move albert beda][breed albert beda]]]
   and the above rules, it will conclude
      [H 0 (dont-do [seq [move albert beda][breed albert beda]]) y]

   The plan can be fixed by introducing a delay, for example
by having an action [wait n] that simply waits n timesteps
without anything happening
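A sketch of the side-effect check and the [wait n] repair, assuming (as in Variation 1) that breed takes 12 timesteps and, additionally, that move takes 1:

```python
DUR = {"move": 1, "breed": 12}        # assumed durations; wait carries its own

def end_time(s, plan):
    t = s
    for action in plan:
        op = action[0]
        t += action[1] if op == "wait" else DUR[op]
    return t

def dont_do(plan):
    """[D s 13 [breed k r]] -> [H 13 badluck y]: a birth at time 13 is bad luck."""
    return end_time(0, plan) == 13

plan = [("move", "albert", "beda"), ("breed", "albert", "beda")]
assert dont_do(plan)                  # ends at 13: plan is rejected

# Repair by introducing a delay before breeding:
fixed = [plan[0], ("wait", 1), plan[1]]
assert not dont_do(fixed)             # now ends at 14
print(end_time(0, fixed))             # 14
```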
   This example is analogous to the example in the first
lecture of this course, of the toddler that moves the chair to
beside the freezer instead of in front of the freezer, in order
to avoid an undesirable side-effect
Relating logic-based planning of these kinds
to state-space planning methods

   Progressive planning is effectively realized using the
replacement a → [seq a b]
   Regressive planning is effectively realized using the
replacement a → [seq c a]
   Full partial-order planning is not realized by the method
shown above, but there are ways of realizing it also

The frame problem

   In the example, suppose we add a precondition that the two
parents must be co-located for the breed action to be applicable
   Suppose also that the candidate parent rabbits are not co-
located at time 0 but there is an action for moving rabbits
   The planning process described above will select the breed
action, notice that it is not applicable, and replace the first
selected plan by
         [seq [move albert beda][breed albert beda]]
   However, the breed action then needs rednose and whitefur to apply
at time 1, and this is not obtained from the axioms shown above:
nothing states that these features persist from time 0 to time 1.
   This problem is a general one for all plans that consist of more
than one action
Simple solutions to the frame problem

   Forward frame axioms, of the form
    [D s t a] & [H s f v] → [H t f v]
   for all combinations of a and f where a does not change f
   Reverse frame axioms, of the form
    [H s f v] & [H t f v'] & v ≠ v' & [D s t a] →
      [a = action1] v [a = action2] v …
   with an enumeration of all actions that may influence f
   Nonmonotonic rules, of the form
    [D s t a] & [H s f v] & Unless [-H t f v] → [H t f v]
   where the nonstandard operator Unless says “if it cannot be
proved that”
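The forward-frame-axiom approach can be sketched mechanically, which also shows why it is resource-consuming: one axiom is needed for every combination of an action a and a fluent f that a does not change. (Actions and fluents here are illustrative.)

```python
# Fluents that each action changes; everything else needs a frame axiom.
ACTIONS = {"feed": {"hungry"}, "paint": {"colour"}}
FLUENTS = {"hungry", "colour", "location"}

# One forward frame axiom [D s t a] & [H s f v] -> [H t f v]
# for every (a, f) pair where a does not change f.
frame_axioms = [(a, f) for a in ACTIONS for f in FLUENTS if f not in ACTIONS[a]]
print(len(frame_axioms))   # 2 actions x 3 fluents - 2 changed pairs = 4 axioms

def step(state, a, new_values):
    """Apply a's direct effects, then the frame axioms for a."""
    nxt = dict(new_values)                  # direct effects of the action
    for (act, f) in frame_axioms:
        if act == a and f in state:
            nxt.setdefault(f, state[f])     # [H t f v] carried over unchanged
    return nxt

s0 = {"hungry": "y", "colour": "white", "location": "garden"}
print(step(s0, "feed", {"hungry": "n"}))
```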
Nonmonotonic rules

   A & Unless B → C may be interpreted as follows: if you know A,
try to prove B; if you cannot, then assume B is false and
conclude C
   This falls outside the usual process of drawing successive
conclusions from given axioms, facts, or assumptions
   Why is it called nonmonotonic? Standard logic is
monotonic in the following sense: if Th(S) is the set of
conclusions that can be obtained from the set S of
premises, and S ⊆ U, then Th(S) ⊆ Th(U)
   Nonmonotonic logic does not have this property
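The failure of monotonicity can be shown with the classic birds-fly example (not from the lecture), reading Unless as negation by failure: enlarging the premise set removes a conclusion.

```python
def th(facts):
    """Conclusions under the rule: flies(x) if bird(x) & Unless abnormal(x)."""
    conclusions = set(facts)
    for x in {"tweety"}:
        if f"bird({x})" in facts and f"abnormal({x})" not in facts:
            conclusions.add(f"flies({x})")
    return conclusions

S = {"bird(tweety)"}
U = S | {"abnormal(tweety)"}             # S is a subset of U, yet...
print("flies(tweety)" in th(S))          # True
print("flies(tweety)" in th(U))          # False: Th(S) is not a subset of Th(U)
```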
Pros and cons of these approaches

   Forward frame axioms: Resource consuming
   Reverse frame axioms: Not modular (significant when new
actions are added); do not accommodate causality easily; do
not accommodate nondeterministic actions easily
   Nonmonotonic rules: introduce a nonstandard form of logic
which requires new reasoning techniques
   However, there are techniques for automatically converting a
formulation using nonmonotonic rules to one using reverse
frame axioms
Other uses of nonmonotonic rules

   In inheritance with exceptions:
    [subsumes a b] & [subsumes b c] & Unless [-subsumes a c]
           → [subsumes a c]
   Another example later on in this lecture
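The inheritance rule above can be sketched as a transitive closure that is blocked wherever [-subsumes a c] is provable (the class names are illustrative):

```python
# [subsumes a b] & [subsumes b c] & Unless [-subsumes a c] -> [subsumes a c]
facts = {("flier", "bird"), ("bird", "penguin")}

def subsumes_closure(facts, exceptions):
    """Transitive closure of subsumes, skipping pairs with a provable exception."""
    closed = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closed):
            for (b2, c) in list(closed):
                if b == b2 and (a, c) not in exceptions and (a, c) not in closed:
                    closed.add((a, c))
                    changed = True
    return closed

print(("flier", "penguin") in subsumes_closure(facts, set()))                 # True
print(("flier", "penguin") in subsumes_closure(facts, {("flier", "penguin")}))  # False
```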

Semantic interpretation of nonmonotonic rules

   In the previous lecture, we defined A, B, … |= G as
    Mod[{A, B, …}] ⊆ Mod[G]
   Now introduce a preference relation < on models, and let
Min(<, S) be the set of <-minimal members of S
   Define A, B, … |=< G as
    Min(<, Mod[{A, B, …}]) ⊆ Mod[G]
   In the case of the frame problem, one can define < so that it
prefers a model where a feature does not change, over a model
where the same feature does change at a specific point in time
   Conversion from nonmonotonic to standard monotonic logic can
be done by transforming such preference conditions to
corresponding logical rules (clauses)
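The definition of |=< can be sketched by brute force for a single boolean fluent observed at times 0 and 1, with < preferring models in which the fluent does not change (the frame-problem preference):

```python
from itertools import product

def models(constraint):
    # A model assigns the fluent a truth value at time 0 and at time 1.
    return [m for m in product([False, True], repeat=2) if constraint(m)]

def changes(m):
    return int(m[0] != m[1])            # how often the feature changes

def preferentially_entails(constraint, goal):
    """A |=< G  iff  G holds in every <-minimal model of A."""
    ms = models(constraint)
    least = min(changes(m) for m in ms)
    minimal = [m for m in ms if changes(m) == least]   # Min(<, Mod[A])
    return all(goal(m) for m in minimal)

A = lambda m: m[0]                       # A: the fluent holds at time 0
G = lambda m: m[1]                       # G: the fluent still holds at time 1
print(preferentially_entails(A, G))      # True: minimal models avoid the change
print(all(G(m) for m in models(A)))      # False: classical |= does not give G
```

This is exactly the frame-problem effect mentioned above: persistence holds preferentially even though it does not hold classically.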
The main autonomous-agent behavior
aka: the agent architecture
   The procedure shown above can be generalized as follows.
Do repeatedly:
   Identify what your current goal(s) are, and select one of them if
there are several
   Identify possible actions for achieving the selected goal, and
pick one of the actions for consideration
   Consider the consequences of taking that action, including
possible side-effects. If necessary, consider several alternative
actions
   Decide on an action to take, and perform it
   Review the results, learn from the experience, and proceed with
the loop
   We have seen how logical deduction can be used for
implementing several of these steps
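The cycle above can be written as a skeleton in which each step is a replaceable subfunction; all bodies in the usage example are illustrative stubs, not an actual agent.

```python
def agent_loop(goals, suggest_actions, consequences, acceptable, execute, steps=10):
    history = []
    for _ in range(steps):
        if not goals:
            break
        goal = goals[0]                          # select one current goal
        chosen = None
        for a in suggest_actions(goal):          # identify candidate actions
            if acceptable(consequences(a)):      # consider effects and side-effects
                chosen = a
                break
        if chosen is None:
            break                                # no acceptable action found
        result = execute(chosen)                 # decide and perform
        history.append((goal, chosen, result))   # review / learn from experience
        if result == "achieved":
            goals = goals[1:]
    return history

# A trivial run: one goal, one action that always succeeds.
log = agent_loop(
    goals=["door-open"],
    suggest_actions=lambda g: ["open-door"],
    consequences=lambda a: [],
    acceptable=lambda effects: True,
    execute=lambda a: "achieved",
)
print(log)   # [('door-open', 'open-door', 'achieved')]
```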
Agent architecture SOAR
   SOAR was proposed in the 1980s and is an early standard
reference for agent architectures
   It is proposed as a theory of general intelligence in an empirical
sense and, at the same time, as a design for artificial intelligent
agents
   Similar to the routine on the previous slide, with two major
amendments:
   (1) At the beginning of the cycle, include a step for acquiring
up-to-date information e.g. from sensors
   (2) If a step does not have an immediate realization, then
consider it as an impasse and invoke a subprocess containing
the same cyclic behavior for resolving the impasse
   Both the lack of a method for performing a task and the
availability of several methods are considered as impasses
Procedural vs declarative mechanisms
in main autonomous-agent behavior
   The behavior described on the previous slide can be
implemented using brute-force deduction (e.g. using the
resolution operator) in each of several steps
   The resulting deduction often follows standard patterns, e.g.
from [G s a] for some action a to checking preconditions and
obtaining effects of the action. Such standard patterns can be
turned into deduction scripts, or even into parts of the
implementing software
   On the other hand it is possible to go further in the direction of
declarative (= deduction-oriented) mechanisms (next slide)
The “Consider” predicate
   Recall the end of Step 2 in the worked example:
      [D 0 t [breed albert beda]] → [Success t]
   Instead of having the program harvest such conclusions, introduce the
following rule
      ([D s t a] → [Success t]) → [Consider s a]
   (Separate issue how to write this as clauses)
   An additional deduction step in Step 2 will result in
      [Consider 0 [breed albert beda]]
   and one would instead let the program harvest conclusions of this form
   The advantage of this is that it makes it possible to write other rules that
also lead to a 'Consider' conclusion
   In particular, rules of the following form express precondition advice:
     [Consider s a] & [-P s a] → [Consider s [seq a' a]]
The “Consider” predicate, continued
   Instead of having the program harvest such conclusions, introduce
the following rule
      ([D s t a] → [Success t]) → [Consider s a]
   (Separate issue how to write this as clauses) and harvest clauses
(only) consisting of a Consider literal
   This rule can be improved as follows:
    ([D s t a] → [Success t]) & Unless [H s (dont-do a) y] →
          [Consider s a]
   This builds one part of Step 3, for checking the proposed plan, into
the deductive process in Step 2. Notice the introduction of a
nonmonotonic literal.
Alternative representation for planning:
the situation calculus

   Modify the H predicate so that it takes a composite action
(sequence of actions) as its first argument, instead of a
timepoint
   Example of action law:
   [H s (hungry a) y] → [-H [seq s [feed a]] (hungry a) y]
   expressing that if something is the case after the execution of
an action sequence s, then something else is the case after
execution of s followed by an additional action e.g. [feed a]
   The planning problem is then the problem of finding an s that
satisfies one or more literals formed using H
   In this formulation it is possible to find the solution by deduction
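A minimal sketch of this representation, using the rabbit domain: a situation is the sequence of actions performed so far, and H is evaluated by recursion over that sequence. Two assumptions are made for illustration: whitefur is inherited from the second parent (the slides only show rednose, inherited from the first), and plain evaluation against a candidate situation stands in for the resolution proof that binds the situation variable.

```python
def holds(s, fluent):
    if not s:                                   # the initial situation [seq]
        return fluent in {("exists", "albert"), ("rednose", "albert"),
                          ("exists", "beda"), ("whitefur", "beda")}
    *rest, last = s                             # s = [seq rest last]
    op, k, r = last                             # only breed actions occur here
    child = ("offspring", k, r)
    if fluent == ("exists", child):
        return holds(rest, ("exists", k)) and holds(rest, ("exists", r))
    if fluent == ("rednose", child):
        return holds(rest, ("rednose", k))      # inherited from the first parent
    if fluent == ("whitefur", child):
        return holds(rest, ("whitefur", r))     # assumed: from the second parent
    return holds(rest, fluent)                  # frame: everything else persists

def goal(s, m):
    """[H u (exists m) y] & [H u (rednose m) y] & [H u (whitefur m) y]"""
    return all(holds(s, (f, m)) for f in ("exists", "rednose", "whitefur"))

# A situation s (an action sequence) satisfying the goal:
s = [("breed", "albert", "beda")]
m = ("offspring", "albert", "beda")
print(goal(s, m))    # True
```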
Example: breeding of rabbits, again
   [H [seq] (exists albert) y], etc.
   [H s (exists k) y] & [H s (exists r) y] →
     [H [seq s [breed k r]] (exists (offspring k r)) y] &
     ([H s (rednose k) y] → [H [seq s [breed k r]] (rednose (offspring k r)) y]) & …
   [H u (exists m) y] & [H u (rednose m) y] & [H u (whitefur m) y] → [Success u]
   Rewrite as clauses. The goal statement becomes
   [-H u (exists m) y][-H u (rednose m) y][-H u (whitefur m) y] [Success u]
   Resolve against one of the clauses from the action law, which is
       [-H s (exists k) y][-H s (exists r) y][-H s (rednose k) y]
          [H [seq s [breed k r]] (rednose (offspring k r)) y]
   obtaining
    [-H s (exists k) y] [-H s (exists r) y] [-H s (rednose k) y]
       [-H [seq s [breed k r]] (exists (offspring k r)) y]
       [-H [seq s [breed k r]] (whitefur (offspring k r)) y]
       [Success [seq s [breed k r]]]
   All remaining literals can then be resolved away except the Success
literal, with variable bindings being made in the process.
Resolution example, continued
   If the precondition is not satisfied in the plan [seq] then some other
action has to be inserted before the breed action. This occurs
automatically through the resolution process, obtaining
[Success [seq [move...][breed …]]]
   The actual situation calculus literature writes e.g.
       Holds(s, rednose(albert), true)
   and Holds(Do(s, breed(albert,beda)),
               rednose(offspring(albert,beda)), true)
   The argument 's' is called a situation and was historically
considered as a partial “state of the world”. The initial situation,
corresponding to [seq], is usually written S0.
Perspective on situation calculus

   There are two main approaches to reasoning about actions: explicit
time logic (aka time and action logic, or modern event calculus),
which was treated first in this lecture, and situation calculus which
has been briefly described now
   The basic approach of situation calculus – inference of a plan by
regression from a goal statement – is elegant and compact
   The more complex aspects of the logic-based planning process that
were described in this lecture are not provided by this
situation-calculus method
   The basic situation-calculus approach is not well suited for more
complex cases, like nondeterministic actions and concurrent actions
   A number of extensions of the situation calculus have been proposed
which make the approach more general.
