
Solving Problems by Searching

By
Ahmed M. Khedr, PhD
ECECS Department
University of Cincinnati
Problem-Solving Agent

[Diagram: an agent interacting with its environment through sensors and effectors]

Problem-solving agents decide what to do by finding sequences of actions that lead to desirable states.
  Problem-Solving Agent continued

• We discuss informally how the agent can
  formulate an appropriate view of the
  problem it faces.
• Agents can adopt a goal and aim to satisfy
  it.
• What is the sequence of moves that will
  transform the start state into the goal state?
Problem-Solving Agent

[Diagram: the same agent/environment picture, annotated with the elements of a problem]

• Actions
• Initial state
• Goal state
What does the agent need to know?
1. A global database of state descriptions
   (the state space)
2. Operators that transform states
   (the successor function)
3. A control strategy: the order in which the operators
   are applied to the start state
   (the actions)
State Space and Successor Function

[Diagram: state space and successor function, with the problem elements: Actions, Initial state, Goal state]

Initial State

[Same diagram, highlighting the initial state]

Goal State

[Same diagram, highlighting the goal state]
        Example: 8-puzzle


8 2 _            1 2 3
3 4 7            4 5 6
5 1 6            7 8 _

Initial state    Goal state
            Example: 8-puzzle

Size of the state space = 9!/2 = 181,440  (→ 0.18 sec)

15-puzzle → ~0.65 × 10^12 states  (→ ~6 days)
24-puzzle → ~0.5 × 10^25 states   (→ ~12 billion years)

(times assume 10 million states/sec)
            Search Problem

•   State space
•   Initial state
•   Successor function
•   Goal state
•   Path cost
               Search Problem
• State space
    – each state is an abstract representation of the
      environment
    – the state space is discrete
•   Initial state
•   Successor function
•   Goal state
•   Path cost
               Search Problem

• State space
• Initial state:
   – usually the current state
   – sometimes one or several hypothetical states.
• Successor function
• Goal state
• Path cost
             Search Problem
• State space
• Initial state
• Successor function:
   – [state → subset of states]
  – an abstract representation of the possible
    actions
• Goal state
• Path cost
              Search Problem

•   State space
•   Initial state
•   Successor function
•   Goal state:
    – usually a condition
    – sometimes the description of a state
• Path cost
              Search Problem
•   State space
•   Initial state
•   Successor function
•   Goal state
•   Path cost:
    – [path → positive number]
    – usually, path cost = sum of step costs
    – e.g., number of moves of the empty tile
      Search of State Space




[Figure: the state space explored incrementally, forming a search tree]
      Simple Agent Algorithm

Problem-Solving-Agent
1. initial-state ← sense/read state
2. goal ← select/read goal
3. successor ← select/read action models
4. problem ← (initial-state, goal, successor)
5. solution ← search(problem)
6. perform(solution)
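The loop above maps directly onto code. Below is a minimal Python sketch of it, assuming hypothetical callables sense_state, select_goal, and search are supplied by the surrounding system (none of these names come from the slides):

```python
# A minimal sketch of Problem-Solving-Agent; sense_state, select_goal,
# and search are hypothetical stand-ins for the pieces the slides leave
# abstract.
def problem_solving_agent(sense_state, select_goal, successor, search):
    initial_state = sense_state()                 # 1. sense/read state
    goal = select_goal()                          # 2. select/read goal
    problem = (initial_state, goal, successor)    # 3-4. bundle the problem
    solution = search(problem)                    # 5. search for a solution
    for action in solution:                       # 6. perform(solution)
        print("executing", action)
```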
            Example: 8-queens

Place 8 queens on a chessboard so that no two queens
are in the same row, column, or diagonal.

[Figure: two example boards — a solution and a non-solution]
Example: 8-queens

   Formulation #1:
   • States: any arrangement of
     0 to 8 queens on the board
   • Initial state: 0 queens on the
     board
   • Successor function: add a
     queen in any square
   • Goal state: 8 queens on the
     board, none attacked

  64^8 states with 8 queens
           Example: 8-queens
Formulation #2:
• States: any arrangement of k = 0 to 8 queens in the
  k leftmost columns with none attacked
• Initial state: 0 queens on the board
• Successor function: add a queen to any square in the
  leftmost empty column such that it is not attacked
  by any other queen
• Goal test: 8 queens on the board

2,057 states
         The vacuum world

States: one of the eight states
Operators: move left, move right, suck
Goal test: no dirt left in any square
Path cost: each action costs 1
     Missionaries and Cannibals
      ML  CL  MB  CB  MR  CR
S      3   3   0   0   0   0
G      0   0   0   0   3   3

Operators (each crossing L → R or R → L):
1M, 1C, 2M, 2C, 1M1C
   Missionaries and Cannibals

Boat capacity: 2
If #C > #M on any bank, the missionaries will be
killed; this situation must be avoided.

Path cost = number of moves
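As a sketch of how this formulation looks in code (Python, with the state encoded as (missionaries on left, cannibals on left, boat side); the encoding is an assumption, not from the slides):

```python
# A sketch of the Missionaries-and-Cannibals successor function: a state
# is (missionaries_left, cannibals_left, boat), boat = 'L' or 'R', with
# 3 missionaries and 3 cannibals in total.
MOVES = [(1, 0), (0, 1), (2, 0), (0, 2), (1, 1)]  # (M, C) carried by the boat

def safe(m, c):
    # A bank is safe if it has no missionaries, or at least as many
    # missionaries as cannibals; check both banks.
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def successors(state):
    m, c, boat = state
    direction = -1 if boat == 'L' else 1      # crossing empties one bank
    for dm, dc in MOVES:
        m2, c2 = m + direction * dm, c + direction * dc
        if 0 <= m2 <= 3 and 0 <= c2 <= 3 and safe(m2, c2):
            yield (m2, c2, 'R' if boat == 'L' else 'L')

# Example: the legal first crossings from the start state (3, 3, 'L').
print(list(successors((3, 3, 'L'))))   # 1C, 2C, or 1M1C may cross
```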
     Assumptions in Basic Search

•   The environment is static
•   The environment is discretizable
•   The environment is observable
•   The actions are deterministic

→ open-loop solution
   Search Problem Formulation

• Real-world environment → Abstraction
  – Validity:
     • Can the solution be executed?
     • Does the state space contain the solution?
  – Usefulness
     • Is the abstract problem easier than the real-world
       problem?
     Search Problem Variants

• One or several initial states
• One or several goal states
• The solution is the path or a goal node
  – In the 8-puzzle problem, it is the path to a goal
    node
  – In the 8-queens problem, it is a goal node
            Problem Variants

•   One or several initial states
•   One or several goal states
•   The solution is the path or a goal node
•   Any, or the best, or all solutions
         Important Parameters

• Number of states in the state space

8-puzzle → 181,440               8-queens → 2,057
15-puzzle → ~0.65 × 10^12        100-queens → ~10^52
24-puzzle → ~0.5 × 10^25

There exist techniques to solve N-queens problems efficiently!

Stating a problem as a search problem is not always a good idea!
       Important Parameters

• Number of states in state space
• Size of memory needed to store a state
• Running time of the successor function
              Applications

• Route finding: airline travel,
  telephone/computer networks
• Pipe routing, VLSI routing
• Robot motion planning
• Video games
                Summary

• Problem-solving agent
• State space, successor function, search
• Examples: 8-puzzle, 8-queens, route
  finding, robot navigation, assembly
  planning
• Assumptions of basic search
• Important parameters
Blind Search

Chapter 3, Sections 3.4–3.6

By Ahmed M. Khedr
      Simple Agent Algorithm

Problem-Solving-Agent
1. initial-state ← sense/read state
2. goal ← select/read goal
3. successor ← select/read action models
4. problem ← (initial-state, goal, successor)
5. solution ← search(problem)
6. perform(solution)
      Search of State Space




[Figure: the state space explored incrementally, forming a search tree]
         Basic Search Concepts

•   Search tree
•   Search node
•   Node expansion
•   Search strategy: At each stage it
    determines which node to expand
Search Nodes ≠ States

The search tree may be infinite even when the state space is finite.

[Figure: the same 8-puzzle state (8 2 _ / 3 4 7 / 5 1 6) appearing at several different nodes of the search tree]
          Node Data Structure

•   STATE
•   PARENT
•   ACTION
•   COST
•   DEPTH

If a state is too large, it may be preferable to only
represent the initial state and (re-)generate the
other states when needed
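One possible Python encoding of this node structure (a sketch; the field names follow the slide, everything else is an assumption):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # STATE: the state this node represents
    parent: Optional["Node"] = None  # PARENT: node this one was generated from
    action: Any = None               # ACTION: action applied to the parent
    cost: float = 0.0                # COST: path cost from the root
    depth: int = 0                   # DEPTH: number of steps from the root

def path(node):
    # Walk PARENT links back to the root to recover the solution path.
    states = []
    while node is not None:
        states.append(node.state)
        node = node.parent
    return list(reversed(states))
```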
                 Fringe

• Set of search nodes that have not been
  expanded yet
• Implemented as a queue FRINGE
  – INSERT(node,FRINGE)
  – REMOVE(FRINGE)
• The ordering of the nodes in FRINGE
  defines the search strategy
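A sketch of the FRINGE operations with collections.deque, with strings standing in for nodes; which end REMOVE takes from is exactly what selects the strategy:

```python
from collections import deque

fringe = deque()
fringe.append("n1")          # INSERT(node, FRINGE)
fringe.append("n2")
print(fringe.popleft())      # REMOVE as FIFO -> breadth-first ("n1")
print(fringe.pop())          # REMOVE as LIFO -> depth-first   ("n2")
```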
              Search Algorithm
1.   If GOAL?(initial-state) then return initial-state
2.   INSERT(initial-node, FRINGE)
3.   Repeat:
         If FRINGE is empty then return failure
         n ← REMOVE(FRINGE)
         s ← STATE(n)
         For every state s’ in SUCCESSORS(s):
             Create a node n’ as a successor of n
             If GOAL?(s’) then return path or goal state
             INSERT(n’, FRINGE)
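A runnable Python sketch of this skeleton on a toy problem (the FIFO fringe, the +1/+2 successor function, and the goal test are illustrative assumptions):

```python
from collections import deque

def tree_search(initial_state, is_goal, successors):
    if is_goal(initial_state):                          # step 1
        return [initial_state]
    fringe = deque([(initial_state, [initial_state])])  # step 2: (state, path)
    while fringe:                                       # step 3: repeat
        state, path = fringe.popleft()                  # n <- REMOVE(FRINGE)
        for s2 in successors(state):                    # every successor s'
            if is_goal(s2):                             # GOAL?(s') -> path
                return path + [s2]
            fringe.append((s2, path + [s2]))            # INSERT(n', FRINGE)
    return None                                         # empty -> failure

# Toy example: reach 4 from 0 using +1 / +2 steps.
print(tree_search(0, lambda s: s == 4, lambda s: [s + 1, s + 2]))
```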
       Performance Measures
• Completeness
 Is the algorithm guaranteed to find a solution
 when there is one?

Probabilistic completeness:
If there is a solution, the probability
that the algorithm finds one goes
to 1 “quickly” with the running time
       Performance Measures
• Completeness
 Is the algorithm guaranteed to find a solution
 when there is one?
• Optimality
 Is this solution optimal?
• Time complexity
 How long does it take?
• Space complexity
 How much memory does it require?
        Important Parameters
• Maximum number of successors of any state

  → branching factor b of the search tree

• Minimal length of a path in the state space
  between the initial and a goal state

  → depth d of the shallowest goal node in the
    search tree
   Blind vs. Heuristic Strategies

• Blind (or un-informed) strategies do not
  exploit any of the information contained in
  a state

• Heuristic (or informed) strategies exploit
  such information to assess that one node is
  “more promising” than another
            Example: 8-puzzle
For a blind strategy, N1 and N2 are just two nodes
(at some depth in the search tree).

For a heuristic strategy counting the number of
misplaced tiles, N2 is more promising than N1.

N1 (STATE)    N2 (STATE)    Goal state
8 2 _         1 2 3         1 2 3
3 4 7         4 5 _         4 5 6
5 1 6         7 8 6         7 8 _
            Blind Strategies
• Breadth-first                 (step cost = 1)
  – Bidirectional

• Depth-first                   (step cost = 1)
  – Depth-limited
  – Iterative deepening

• Uniform-cost                  (step cost = c(action) > 0)
            Breadth-First Strategy

New nodes are inserted at the end of FRINGE

            1

    2               3       FRINGE = (1)


4       5       6       7

The root node is expanded first; then all the nodes
generated by the root node are expanded, and so on,
level by level.
            Breadth-First Strategy

New nodes are inserted at the end of FRINGE

            1

    2               3       FRINGE = (2, 3)


4       5       6       7
            Breadth-First Strategy

New nodes are inserted at the end of FRINGE

            1

    2               3       FRINGE = (3, 4, 5)


4       5       6       7
            Breadth-First Strategy

New nodes are inserted at the end of FRINGE

            1

    2               3       FRINGE = (4, 5, 6, 7)


4       5       6       7
                Evaluation
• b: branching factor
• d: depth of shallowest goal node
• Complete
• Optimal if step cost is 1
• Number of nodes generated:
  1 + b + b^2 + … + b^d = (b^(d+1) − 1)/(b − 1)
                        = O(b^d)
• Time and space complexity is O(b^d)
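A quick numeric check of the node-count formula, with b = 10 and d = 5 as arbitrary example values:

```python
# Verify that 1 + b + ... + b^d equals (b^(d+1) - 1)/(b - 1).
b, d = 10, 5
direct = sum(b**i for i in range(d + 1))     # sum the series term by term
closed = (b**(d + 1) - 1) // (b - 1)         # closed form from the slide
assert direct == closed
print(direct)   # 111111 -- O(b^d) growth
```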
           Big O Notation

g(n) is in O(f(n)) if there exist two positive
constants a and N such that:

    for all n > N, g(n) ≤ a·f(n)
        Bidirectional Strategy
2 fringe queues: FRINGE1 and FRINGE2

[Figure: two search frontiers growing toward each other]

Search simultaneously forward from the initial state
and backward from the goal state; stop when the two
searches meet in the middle.

Time and space complexity = O(b^(d/2)) << O(b^d)
Time and Memory Requirements

d      #Nodes     Time        Memory
2      111        .01 msec    11 KB
4      11,111     1 msec      1 MB
6      ~10^6      1 sec       100 MB
8      ~10^8      100 sec     10 GB
10     ~10^10     2.8 hours   1 TB
12     ~10^12     11.6 days   100 TB
14     ~10^14     3.2 years   10,000 TB

Assumptions: b = 10; 1,000,000 nodes/sec; 100 bytes/node
         Depth-First Strategy

New nodes are inserted at the front of FRINGE
                   1

     2                                3
              FRINGE = (1)
4             5
         Depth-First Strategy

New nodes are inserted at the front of FRINGE
                   1

     2                                3
              FRINGE = (2, 3)
4             5
         Depth-First Strategy

New nodes are inserted at the front of FRINGE
                   1

     2                                3
              FRINGE = (4, 5, 3)
4             5
                    Evaluation
•   b: branching factor
•   d: depth of shallowest goal node
•   m: maximal depth of a leaf node
•   Complete only for finite search trees
•   Not optimal
•   Number of nodes generated:
    1 + b + b^2 + … + b^m = O(b^m)
• Time complexity is O(b^m)
• Space complexity is O(bm) or O(m)
  (it needs to store only a single path from the root
  to a leaf node)
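Depth-first is the same skeleton as before with a LIFO fringe (a Python list used as a stack); this sketch tests the goal on removal rather than on generation, a minor variant:

```python
def depth_first(initial_state, is_goal, successors):
    fringe = [(initial_state, [initial_state])]   # stack of (state, path)
    while fringe:
        state, path = fringe.pop()                # LIFO REMOVE -> deepest node
        if is_goal(state):
            return path
        for s2 in successors(state):
            fringe.append((s2, path + [s2]))
    return None                                   # finite tree exhausted

print(depth_first(0, lambda s: s == 4, lambda s: [s + 1, s + 2]))
```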
      Depth-Limited Strategy
• Depth-first with depth cutoff k (maximal
  depth below which nodes are not
  expanded)

• Three possible outcomes:
  – Solution
  – Failure (no solution)
  – Cutoff (no solution within cutoff)
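A recursive Python sketch of depth-limited search that distinguishes the three outcomes ('failure' and 'cutoff' as sentinel strings are an assumption):

```python
def depth_limited(state, is_goal, successors, k):
    if is_goal(state):
        return [state]                 # outcome 1: a solution path
    if k == 0:
        return "cutoff"                # depth limit reached
    cutoff_seen = False
    for s2 in successors(state):
        result = depth_limited(s2, is_goal, successors, k - 1)
        if result == "cutoff":
            cutoff_seen = True
        elif result != "failure":
            return [state] + result
    # outcome 3: cutoff somewhere below; outcome 2: tree truly exhausted
    return "cutoff" if cutoff_seen else "failure"
```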
    Iterative Deepening Strategy
Repeat for k = 0, 1, 2, …:
  Perform depth-first search with depth cutoff k
  (i.e., choose the best depth limit by trying them all)
• Complete
• Optimal if step cost = 1
• Time complexity is:
  (d+1)(1) + db + (d−1)b^2 + … + (1)b^d = O(b^d)
• Space complexity is: O(bd) or O(d)
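Iterative deepening then just wraps the depth_limited sketch from the previous slide:

```python
from itertools import count

def iterative_deepening(initial_state, is_goal, successors):
    # Run depth-limited search with k = 0, 1, 2, ... until the result
    # is no longer 'cutoff' (i.e., a solution path or 'failure').
    for k in count():
        result = depth_limited(initial_state, is_goal, successors, k)
        if result != "cutoff":
            return result

print(iterative_deepening(0, lambda s: s == 4, lambda s: [s + 1, s + 2]))
```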
                    Calculation
db + (d−1)b^2 + … + (1)b^d
  = b^d + 2b^(d−1) + 3b^(d−2) + … + db
  = b^d (1 + 2b^(−1) + 3b^(−2) + … + d·b^(1−d))

  ≤ b^d (Σ_{i=1,…,∞} i·b^(1−i))

  = b^d (b/(b−1))^2
                Uniform-Cost Strategy
• Each step has some cost ε > 0.
• The cost of the path to each fringe node N is
  g(N) = Σ of the step costs along the path.
• The goal is to generate a solution path of minimal cost.
• The queue FRINGE is sorted in increasing path cost.

Example graph: S→A costs 1, S→B costs 5, S→C costs 15,
A→G costs 10, B→G costs 5, C→G costs 5.
Search tree with path costs: S (0); A (1), B (5), C (15);
G via A (11), G via B (10).
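A Python sketch of uniform-cost search on the example graph above, using heapq so the fringe is always sorted by g(N):

```python
import heapq

GRAPH = {"S": [("A", 1), ("B", 5), ("C", 15)],
         "A": [("G", 10)], "B": [("G", 5)], "C": [("G", 5)], "G": []}

def uniform_cost(start, goal):
    fringe = [(0, start, [start])]        # (g(N), state, path), heap by cost
    while fringe:
        g, state, path = heapq.heappop(fringe)
        if state == goal:                 # goal test on removal, so the
            return g, path                # first goal popped is cheapest
        for s2, step in GRAPH[state]:
            heapq.heappush(fringe, (g + step, s2, path + [s2]))
    return None

print(uniform_cost("S", "G"))   # (10, ['S', 'B', 'G'])
```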
      Comparison of Strategies

• Breadth-first is complete and optimal, but
  has high space complexity
• Depth-first is space efficient, but neither
  complete nor optimal
• Iterative deepening is asymptotically
  optimal
     Avoiding Repeated States

• Requires comparing state descriptions
• Breadth-first strategy:
  – Keep track of all generated states
  – If the state of a new node already exists, then
    discard the node
      Avoiding Repeated States
• Depth-first strategy:
   – Solution 1:
      • Keep track of all states associated with nodes in the current path
      • If the state of a new node already exists, then discard the node
    → Avoids loops
   – Solution 2:
      • Keep track of all states generated so far
      • If the state of a new node has already been generated, then
        discard the node
    → Space complexity becomes that of breadth-first
     Modified Search Algorithm
1.   INSERT(initial-node, FRINGE)
2.   Repeat:
         If FRINGE is empty then return failure
         n ← REMOVE(FRINGE)
         s ← STATE(n)
         If GOAL?(s) then return path or goal state
         For every state s’ in SUCCESSORS(s):
             Create a node n’ as a successor of n
             INSERT(n’, FRINGE)
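A Python sketch of this modified algorithm with repeated-state checking (solution 2) added: every generated state goes into a set, and nodes whose state was already generated are discarded. Breadth-first ordering is assumed.

```python
from collections import deque

def graph_search(initial_state, is_goal, successors):
    fringe = deque([(initial_state, [initial_state])])
    generated = {initial_state}             # all states generated so far
    while fringe:
        state, path = fringe.popleft()
        if is_goal(state):                  # GOAL? tested on removal here
            return path
        for s2 in successors(state):
            if s2 not in generated:         # discard repeated states
                generated.add(s2)
                fringe.append((s2, path + [s2]))
    return None
```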
                 Exercises

• Adapt uniform-cost search to avoid
  repeated states while still finding the
  optimal solution
• Uniform-cost search looks like breadth-first search
  (it is exactly breadth-first if the step cost is
  constant). Adapt iterative deepening in a
  similar way to handle variable step costs
                 Summary

• Search tree ≠ state space
• Search strategies: breadth-first, depth-first,
  and variants
• Evaluation of strategies: completeness,
  optimality, time and space complexity
• Avoiding repeated states
• Optimal search with variable step costs

								