CS1412 Artificial Intelligence Fundamentals

Mary McGee Wood

7 February 2006

Lecture 3 - Production rule systems
Reading:
  • Winston, Ch. 7, “Rules and Rule Chaining”, OR Rich & Knight, Ch. 2, “Problems, Problem Spaces, and Search” (not as good)
  • E. Friedman-Hill, JESS in Action, Part I
  • G.A. Ringland & D.A. Duce, eds., Approaches to Knowledge Representation: Ch. 5, “Rule Based Systems”; Ch. 8, “The Explicit Representation of Control Knowledge”

Production rules
Condition-action rules; IF-THEN rules: IF condition THEN action; rewrite rules.

IF the cats are asking for food AND it’s after 6pm AND they haven’t been fed since breakfast
→ THEN feed them.

IF there’s a teddy at a place, active AND there’s a “thing” at the same place AND the story-telling agent doesn’t already know about the seeing
→ THEN add to the story that the teddy sees the “thing”.
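The cat-feeding rule above can be sketched in Python. This is illustrative only (the course uses JESS, not Python), and the fact names are invented: a condition-action rule tests its conditions against a set of facts and performs its action only when all of them hold.

```python
# Illustrative sketch: one condition-action rule over a set of fact names.
def feed_cats(facts):
    """Fire the feeding rule only if every condition is satisfied."""
    if ("cats_asking_for_food" in facts
            and "after_6pm" in facts
            and "fed_since_breakfast" not in facts):
        return "feed the cats"
    return "do nothing"

print(feed_cats({"cats_asking_for_food", "after_6pm"}))  # feed the cats
print(feed_cats({"cats_asking_for_food"}))               # do nothing
```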


; Describe TEDDY seeing any MOBILE
(defrule see_thing
  (declare (salience 4))
  ?teddy <- (TEDDY (name ?teddy-name) (at ?location) (active TRUE))
  ?thing <- (THING (name ?thing-name) (at ?location))
  (not (NARRATION (predicate sees)))
  =>
  (tellNarrator sees ?teddy-name ?thing-name))

Try this:
z → l
u → p
p → h
w → lo
q → r
v → w
r → e

uqzv → ??

Or this “insult grammar”:
insult → suggest "you" misname
suggest → "buzz off"
suggest → "go jump in a big hole"
misname → "nasty fellow"
misname → "little toad"
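The insult grammar is itself a set of rewrite rules. A minimal Python sketch (illustrative only, not part of the lab code): each non-terminal maps to a list of alternative right-hand sides, and we rewrite until only terminals remain.

```python
import random

# Hypothetical encoding of the insult grammar above:
# non-terminal -> list of alternative right-hand sides.
GRAMMAR = {
    "insult":  [["suggest", "you", "misname"]],
    "suggest": [["buzz off"], ["go jump in a big hole"]],
    "misname": [["nasty fellow"], ["little toad"]],
}

def expand(symbol):
    """Rewrite a symbol until only terminals remain."""
    if symbol not in GRAMMAR:             # terminal: emit as-is
        return symbol
    rhs = random.choice(GRAMMAR[symbol])  # pick one rewrite rule
    return " ".join(expand(s) for s in rhs)

print(expand("insult"))  # e.g. "buzz off you little toad"
```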


Production rule systems
Three main components: Working memory, rule memory, interpreter

Working memory: a store containing objects defined by attribute-value lists.
  • Objects represent facts about the world (given or inferred; real or hypotheses).
  • Facts can be changed or withdrawn as the system runs.

(TEDDY (name Hector) (text "Hector") (proper True))

Rule memory: contains rules governing the system’s behaviour.
  • Conditions (antecedents, left-hand sides): define a pattern of objects and attributes to be matched against the content of the working memory.
  • Actions: define changes or additions to working memory.

Unlike IF-THEN statements in conventional programming languages:
  • The conditional (IF) is a pattern, not a Boolean;
  • The flow of control is not from rule to rule but is determined separately, by the interpreter. (Remember what I said in the first lecture about the importance of separating knowledge from inference.)

Interpreter or inference engine: selects rules from rule memory that match the contents of working memory, and performs their actions - “fires” the rules.
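To see why the IF part is a pattern rather than a Boolean, here is a Python sketch (illustrative only; attribute names follow the TEDDY example above): a working-memory object is an attribute-value list, and a condition is a partial attribute-value list matched against it.

```python
# Sketch: a working-memory object as an attribute-value list.
teddy = {"type": "TEDDY", "name": "Hector", "text": "Hector", "proper": True}

def matches(pattern, obj):
    """A pattern matches an object if every attribute it mentions agrees."""
    return all(obj.get(attr) == value for attr, value in pattern.items())

# The condition is a pattern, not a Boolean expression:
print(matches({"type": "TEDDY", "proper": True}, teddy))  # True
print(matches({"type": "VILLAIN"}, teddy))                # False
```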


Objectives of the interpreter:
  • Fire rules as facts come in to the knowledge base
  • Never fire a rule unless its conditions are satisfied
  • Fire every rule whose conditions are satisfied

Production system strategy: all rules are tested at each cycle; only one rule fires at a time.

Contexts or modules can group rules to make testing more efficient:

(defmodule HERO-MAIN)
(defrule meet_local <conditions> => (focus HERO-EVENT))
(defrule play_hero <conditions> => (focus HERO-ACTION))

(defmodule HERO-EVENT)
(defrule meet_spirit <conditions> => (focus HERO-SPIRIT))
(defrule meet_villain <conditions> => (focus HERO-VILLAIN))

(defmodule HERO-VILLAIN)
(defrule stand <conditions> => (tellNarrator stands ?teddy_name ?villain_name))
(defrule flee <conditions> => (tellNarrator flees ?teddy_name ?flee_location))


Production System Cycle:
1. Test all rules
2. Put all rules satisfied into the “conflict set”
3. Choose one rule from the conflict set
4. Fire the rule
5. Update the dynamic database
6. Repeat until the goal is reached or no more rules are satisfied
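The six steps above can be sketched as a minimal recognize-act loop in Python. This is an illustrative toy, not the lab system: facts are strings, each rule is (name, conditions, facts-to-add), and conflict resolution is simply “take the first satisfied rule”.

```python
# Toy production system cycle. Rule format: (name, conditions, additions).
# The rules here (getting ready for work) are invented for illustration.
RULES = [
    ("dress", {"awake"},              {"dressed"}),
    ("brew",  {"awake", "dressed"},   {"has_tea"}),
    ("leave", {"dressed", "has_tea"}, {"at_work"}),
]

def run(facts, goal):
    facts = set(facts)
    while goal not in facts:
        # Steps 1-2: test all rules, build the conflict set
        # (skip rules whose additions are already in working memory).
        conflict_set = [r for r in RULES
                        if r[1] <= facts and not r[2] <= facts]
        if not conflict_set:
            break                        # no rule satisfied: halt
        name, _, adds = conflict_set[0]  # Step 3: choose one rule
        facts |= adds                    # Steps 4-5: fire, update memory
    return facts                         # Step 6: loop until goal or stuck

print(run({"awake"}, "at_work"))
```

Each pass through the loop fires exactly one rule, matching the strategy above.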

Now write a small pseudo-code “production rule system” describing something you know how to do, where you make different choices according to the conditions. Try to think of something unusual - don’t all describe getting into the Department from Owen’s Park!


Conflict resolution
Possible conflict resolution strategies:
  • Random
  • Source file ordering
  • Specificity
  • Priority
  • Explicit rules for conflict resolution - a rule based system within a rule based system

IF cat_asking_for_food --> THEN say_"Yes, I’m coming"
IF cat_asking_for_food --> THEN feed_cat
IF cat_asking_for_food --> THEN feed_cat with Kit-e-Kat
IF cat_asking_for_food AND cat_name "Charles Douglas" --> THEN feed_cat with Whiskas
IF cat_asking_for_food AND cat_already_fed --> THEN feed_cat with water
IF cat_asking_for_food AND cat_already_fed AND cat_name "Charles Douglas" --> THEN feed_cat with crunchy
IF cat_asking_for_food AND cat_injured --> THEN take_cat_to_vet

JESS handles conflict resolution using priority, which it calls “salience”. Thus the Hector’s World lab has rules like

(defrule see_and_take_object (declare (salience 3)) ...
(defrule meet_villain <no salience declared - default is 0> ...

If Hector comes across an object and a Villain at the same time, he will see the object and pick it up before interacting with the Villain.
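Salience-style conflict resolution can be sketched in a few lines of Python. This is illustrative only: the rule names and salience values are invented, loosely following the cat rules above, and, as in JESS, an undeclared salience defaults to 0.

```python
# Sketch of priority ("salience") conflict resolution.
# Rule format: (name, salience, conditions). Values are invented.
RULES = [
    ("feed_cat",        0, {"cat_asking_for_food"}),
    ("feed_whiskas",    1, {"cat_asking_for_food", "cat_is_charles"}),
    ("take_cat_to_vet", 5, {"cat_asking_for_food", "cat_injured"}),
]

def select_rule(facts):
    """Build the conflict set, then fire the highest-salience rule."""
    conflict_set = [r for r in RULES if r[2] <= facts]
    if not conflict_set:
        return None
    return max(conflict_set, key=lambda r: r[1])[0]

print(select_rule({"cat_asking_for_food", "cat_injured"}))  # take_cat_to_vet
print(select_rule({"cat_asking_for_food"}))                 # feed_cat
```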

Control strategy
Forward chaining: from facts to goal (OPS5, CLIPS). Encodes knowledge about how to respond to situations: Where am I? What can I do? Will it get me where I want to go?

Backward chaining: from goal to facts (PROLOG). Encodes knowledge about how to achieve goals or test hypotheses: Where do I want to go? How could I get there, in principle? Does it start from here?

Both strategies are inefficient. Forward chaining will reason down lots of paths from where we are that go to the wrong place; backward chaining will consider lots of paths to the right place but not from here. Two ways of addressing this: complex domain knowledge, and hybrid control strategies.

“Production System” almost always means forward chaining.
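For contrast with the forward-chaining cycle, here is a minimal backward chainer in Python (PROLOG works from the goal in this spirit, though real PROLOG also does unification over variables; this propositional sketch, with invented rules and facts, only shows the direction of reasoning).

```python
# Minimal propositional backward chainer. To prove a goal, find a rule
# that concludes it and prove each of its conditions in turn.
# Rules and facts are invented for illustration.
RULES = {                  # conclusion -> list of alternative condition sets
    "at_work": [["dressed", "has_tea"]],
    "has_tea": [["kettle_boiled"]],
}
FACTS = {"dressed", "kettle_boiled"}

def prove(goal):
    if goal in FACTS:                       # ground fact: proved
        return True
    for conditions in RULES.get(goal, []):  # try each rule for this goal
        if all(prove(c) for c in conditions):
            return True
    return False                            # no rule concludes the goal

print(prove("at_work"))  # True
```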

Classic examples of rule based systems
R1, aka XCON (1982), is a forward-chaining system, implemented in OPS5, used by DEC to configure VAX and PDP-11 minicomputers. It’s notable for its extensive use of domain knowledge, modelled on knowledge “elicited” from human domain experts, and for the highly efficient “Rete” pattern-matching algorithm, which avoids repetition of failed attempts. MYCIN (1984) is a backward-chaining system to diagnose blood disease. It asks questions about a patient, choosing its questions so as to test one hypothesis at a time. When confident, it suggests a diagnosis and recommends a treatment. If questioned, it can show the rules it used to reach that conclusion, as an “explanation”. Details of both are given in the recommended reading.

