Ch4 State Space Search

State Space Search

August 2008
Suzaimah Ramli
suzaimah@upnm.edu.my
     MARK DISTRIBUTION
• TEST 1 – 25%
• ASSIGNMENTS – 30% (4 assignments & 1 presentation)
• TUTORIAL – 5%
• FINAL EXAM – 40%
                        Contents
1.   An Introduction : State Space Search
2.   Strategies for State Space Search
3.   Graph Search
4.   Recursion-based Search
5.   Pattern-directed Search
6.   Production System
7.   Machine Learning
8.   Natural Language Processing
9.   Game Playing


   1 An Introduction : State Space Search
 Many problems in AI take the form of state-space search.

 The states might be legal board configurations in a game, towns
  and cities in some sort of route map, collections of mathematical
  propositions, etc.

 The state space is the set of possible states together with the
  connections between them, e.g. the legal moves from one state to
  another.

 When we don't have an algorithm that tells us definitively how to
  negotiate the state space, we need to search the state space to
  find an optimal path from a start state to a goal state.

 We can only decide what to do (or where to go) by considering
  the possible moves from the current state, and trying to look
  ahead as far as possible. Chess, for example, is a very difficult
  state-space search problem.
 State-space search is all about finding, in a state-space
  (which may be extremely large: e.g. chess), some optimal
  state/node.

 `Optimal' can mean very different things depending on the
  nature of the domain being searched.

 For a puzzle, `optimal' might mean reaching the goal state, e.g.
  Connect 4.

 For a route-finder, like our problem, which searches for
  shortest routes between towns, or components of an
  integrated circuit, `optimal' might mean the shortest path
  between two towns/components.

 For a game such as chess, in which we typically can't see
  the goal state, `optimal' might mean the best move we think
  we can make, looking ahead to see what effects the possible
  moves have.
The problem space consists of:
 a state space which is a set of states
representing the possible configurations of
the world
 a set of operators which can change one
state into another
The problem space can be viewed as a
graph where the states are the nodes and the
arcs represent the operators.
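
This graph view is easy to sketch in code. A minimal Python sketch
follows (the states and operators are hypothetical, chosen to match the
example tree on the next slide):

    # State space as a graph: nodes are states, edges are operators
    # that change one state into another.
    state_space = {
        'A': ['E', 'B'],   # applying operators to A yields E or B
        'E': ['D', 'F'],
        'B': ['F', 'C'],
        'F': ['C'],
        'D': [],
        'C': [],
    }

    def successors(state):
        """Apply every operator to `state` and return the reachable states."""
        return state_space.get(state, [])

    print(successors('A'))  # ['E', 'B']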
Example of State-Space Representation

                  A             Initial state is the root
                /   \
              E       B
             / \     / \
            D   F   F   C
                |   |
                C   C

 Connected nodes represent states in the domain.
 The branching factor denotes how many new states you
  can move to from any state. This problem has an average
  branching factor of 2.
 The depth of a node denotes how many moves away from
  the initial state it is. 'C' appears at two depths, 2 and 3.
         Areas Of AI And Their
          Inter-dependencies

[Figure: a concept map linking Search, Logic, Knowledge Representation,
Machine Learning, Planning, NLP, Vision, Robotics, and Expert Systems]
 2 Strategies for State Space Search
 Data-Driven or Goal Driven
    Forward chaining / Backward chaining
 Irrevocable Strategy or Revocable
    Irrevocable
      Most popular in human problem solving
      No shift of attention back to suspended alternatives
      May end up at a local maximum
   Revocable
     Uninformed search : the search does not depend on the
     nature of the solution
       Systematic Search Methods
         Breadth-First Search
         Depth-First Search
         Uniform Cost Search
      Informed or Heuristic Search
         Best First
• “Blind/Uninformed Search”
   – do not use any specific problem domain information
      • e.g., searching for a route on a map without using any
        information about direction
   – the power is in the generality
   – examples: breadth-first, depth-first, etc.

• “Heuristic/Informed Search”
   – use domain specific heuristics (“rules of thumb”, “hints”)
      • e.g. since Seattle is north of LA, explore northerly
        routes first
   – This is the AI approach to search
      • i.e., add domain-specific knowledge
 A search strategy is defined by picking the order of node
  expansion

 Strategies are evaluated along the following dimensions:
    completeness: does it always find a solution if one exists?
    time complexity: number of nodes generated
    space complexity: maximum number of nodes in memory
    optimality: does it always find a least-cost solution?

 Time and space complexity are measured in terms of
    b: maximum branching factor of the search tree
    d: depth of the least-cost solution
    m: maximum depth of the state space (may be ∞)
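
As an illustration of these dimensions, here is a minimal breadth-first
search sketch in Python (the graph and goal are hypothetical). BFS is
complete, and optimal when every step has the same cost; its time and
space complexity are both O(b^d):

    from collections import deque

    graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['C', 'G'], 'C': ['G'], 'G': []}

    def bfs(start, goal):
        frontier = deque([[start]])     # FIFO fringe of partial paths
        explored = set()
        while frontier:
            path = frontier.popleft()   # expand the shallowest node first
            state = path[-1]
            if state == goal:
                return path
            if state in explored:
                continue
            explored.add(state)
            for child in graph[state]:
                frontier.append(path + [child])
        return None                     # no solution exists

    print(bfs('S', 'G'))  # ['S', 'B', 'G']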
                              3 Graph Search
1. State Space
     Set of valid states for a problem
     Linked by operators

2. Graph
   A path that contains any node more than once is said to contain a cycle or loop.
   A tree is a graph in which there is a unique path between every pair of nodes.
   Two nodes are said to be connected if a path exists that includes them both.

3. Search Tree
     S, the starting point for the search, is always the root node
     The search algorithm searches by expanding leaf nodes
     Internal nodes are states the algorithm has already explored
     Leaves are potential goal nodes: the algorithm stops expanding once it finds
       (attempts to expand) the first goal node G

   Key Concept
    Search trees are a data structure to represent how the search algorithm
    explores the state space, i.e., they dynamically evolve as the search
    proceeds
                 3.1 Main Definitions
•   State Space - a graph showing states (nodes) and operators (edges)
•   Search tree - a tree showing the list of explored (closed) and leaf
    (open) nodes
•   Fringe (open nodes) - nodes waiting to be expanded, organized as a
    priority queue. Search algorithms differ primarily in the way they
    order the priority queue for the fringe
•   Node expansion - applying all possible operators to the node (the
    state corresponding to it) and adding the children to the fringe
•   Solution - a path (sequence of states) from the start state to a goal
•   Uninformed or blind search is performed in state spaces where
    operators have no costs; informed search is performed in search
    spaces where operators have costs and it makes sense to talk about
    the optimality of a search algorithm
•   Optimal algorithm - finds the lowest cost solution (i.e. path from
    start state to goal with lowest cost)
•   Complete algorithm - finds a solution if one exists
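
These definitions suggest one generic search loop; a minimal Python
sketch follows (hypothetical graph with step costs). Ordering the fringe
as a priority queue keyed by path cost gives uniform-cost search, which
is both complete and optimal:

    import heapq

    graph = {'S': [('A', 1), ('B', 4)], 'A': [('G', 5)], 'B': [('G', 1)], 'G': []}

    def graph_search(start, goal):
        fringe = [(0, [start])]          # priority queue of (cost, path)
        closed = set()
        while fringe:
            cost, path = heapq.heappop(fringe)
            state = path[-1]
            if state == goal:
                return cost, path        # lowest-cost solution
            if state in closed:
                continue
            closed.add(state)            # node expansion
            for child, step in graph[state]:
                heapq.heappush(fringe, (cost + step, path + [child]))
        return None

    print(graph_search('S', 'G'))  # (5, ['S', 'B', 'G'])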
3.2 Example of Graph : The city of Königsberg

The city is divided by a river. There are two
islands in the river. The first island is
connected by two bridges to each riverbank
and is also connected by a bridge to the other
island. The second island has two bridges, each
connecting to one riverbank.

Question
Is there a walk around the city that crosses
each bridge exactly once?
Swiss mathematician Leonhard Euler invented graph
theory to solve this problem.
3.3 Graph Of The Königsberg Bridge System
3.4 A Labeled Directed Graph
3.5 A Rooted Tree : Exemplifying
      Family Relationships
A graph consists of :
   A set of nodes (can be infinite)
   A set of arcs that connect pairs of
  nodes.
If a directed arc connects N and M,
N is called the parent, and M is
called the child. If N is also
connected to K, M and K are siblings.
A rooted tree has a unique node which has no
parents. The edges in a rooted tree are directed away
from the root. Each node in a rooted tree has a
unique parent.
A leaf or tip node is a node that has no children
(sometimes also called a dead end).
A path of length n is an ordered sequence of n+1
nodes such that the graph contains an arc from each
node to the next. E.g., [a b e] is a path of
length 2.
On a path in a rooted graph, a node is said to be an
ancestor of all the nodes positioned after it (to its
right), as well as a descendant of all nodes before it
(to its left).
           3.6 Search Tree Notation
•   Branching Factor, b
     – the number of children of a node
•   Depth of a node, d
     – the number of branches from the root to the node
•   Partial Paths
     – paths which do not end in a goal
•   Complete Paths
     – paths which end in a goal
•   Open Nodes (Fringe)
     – nodes which have not been expanded (i.e.,
       the leaves of the tree)
•   Closed Nodes
     – nodes which have already been
       expanded (internal nodes)

[Figure: a search tree with root S at depth 0, branching factor b = 2,
and goal G at depth 2]
   3.7 Why Search can be difficult
 At the start of the search, the search algorithm does not
  know
    the size of the tree
    the shape of the tree
    the depth of the goal states

 How big can a search tree be?
    say there is a constant branching factor b
    and one goal exists at depth d
     a search tree which includes a goal can have
             b^d different branches (worst case)
 Examples:
     b = 2,  d = 10:   b^d = 2^10 = 1024
     b = 10, d = 10:   b^d = 10^10 = 10,000,000,000
  3.8 What is a Search Algorithm ?
 A search algorithm is an algorithm which specifies precisely how the
  state space is searched to find a goal state

 Search algorithms differ in the order in which nodes are explored in
  the state space
    since it is intractable to look at all nodes, the order in which we
      search is critically important
    different search algorithms will generate different search trees

 For now we will assume
    we are looking for one goal state
    all goal states are equally good, i.e., all have the same utility = 1
             3.9 How can we compare
                Search Algorithms?
•   Completeness
     – is it guaranteed to find a goal state if one exists?

•   Time Complexity
     – if a goal state exists, how long does it take (worst-case) to find it?

•   Space Complexity
     – if a goal state exists, how much memory (worst-case) is needed
       to perform the search?

•   Optimality
     – if goal states have different qualities, will the search algorithm
       find the state with the highest quality?
         4 Recursion-based Search

Recursion is a natural fit for search.
Recursive algorithms are
   easier to write
   easier to analyze formally
Recursion consists of
   a base case
   a recursive/repeat component that resolves
    into a smaller version of the problem and
    ultimately to the base case


          DFS Without/With Recursion

procedure depth-first search
   open = [start]; closed = [ ]
   while open ≠ [ ] do
      remove leftmost state of open, call it X
      if X is a goal, return success
      generate all children of X
      put X on closed
      put children of X not already on open
         or closed on the left end of open
   return fail

function depthsearch(current_state)
   if current_state is a goal
      then return success
   add current_state to closed
   while current_state has unexamined children
      child := next unexamined child
      if depthsearch(child) = success
         then return success
   return fail
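
A runnable Python rendering of both versions, as a sketch (the graph is
a hypothetical stand-in for a successor function):

    graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['G'], 'C': [], 'G': []}

    def dfs_iterative(start, goal):
        open_list, closed = [start], set()
        while open_list:
            x = open_list.pop(0)              # remove leftmost state of open
            if x == goal:
                return 'success'
            closed.add(x)
            children = [c for c in graph[x]
                        if c not in closed and c not in open_list]
            open_list = children + open_list  # put children on the left end
        return 'fail'

    def dfs_recursive(state, goal, closed=None):
        closed = set() if closed is None else closed
        if state == goal:
            return 'success'
        closed.add(state)
        for child in graph[state]:            # the unexamined children
            if child not in closed and dfs_recursive(child, goal, closed) == 'success':
                return 'success'
        return 'fail'

    print(dfs_iterative('S', 'G'), dfs_recursive('S', 'G'))  # success success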
          5 Pattern-directed Search

Apply recursive search to develop a
 general search procedure for predicate
 calculus.
How is goal-directed search for predicate
 calculus recursive?
   identify subgoal(s) that imply the goal
   recur on the subgoal(s)
   success if a subgoal matches a fact in the
    knowledge base


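A minimal sketch of this recursion in Python, simplified to
propositional facts and rules (no predicate-calculus variables or
bindings; the rules and facts are hypothetical):

    facts = {'cold', 'tired'}
    rules = [({'cold', 'tired'}, 'stay_home'),   # premises imply conclusion
             ({'stay_home'}, 'watch_tv')]

    def prove(goal):
        if goal in facts:                        # base case: goal matches a fact
            return True
        for premises, conclusion in rules:
            # identify a rule whose conclusion implies the goal,
            # then recur on its premises as subgoals
            if conclusion == goal and all(prove(p) for p in premises):
                return True
        return False

    print(prove('watch_tv'))  # True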
     This leaves out two possibilities:

     • current goal is negated (¬p)
       -- success if the goal fails

     • current goal is a disjunction (p ∨ …)
       -- success if any disjunct is satisfied
          with consistent bindings

     And directions for consistent
     bindings

     Rewritten on p.198




               6 Production System

  Pattern-directed search: based on a set of
  production rules (similar to, but different from, a
  representation in predicate calculus).
  Production system: a system for pattern-directed
  control of problem solving.
A production system consists of
1. A set of production rules (productions)
2. Working memory
3. Recognize/act cycle


1. A set of production rules (productions)
  (condition/action) pairs
   condition: when the action can be applied
    action: the problem-solving step (add something to known facts or do something)
     if it is cold or rainy then drive to work : p(x) → q(x)

2. Working memory
    the current state of the world
    what's true
    production rules can alter this

3.   Recognize/act cycle
     match patterns in working memory against conditions of production rules.
     conflict set = those rules with matches
     select one member of conflict set to fire (perform the action)
     this will change the contents of the working memory

Conflict resolution
picks which rule to fire, often the first whose condition matches the current state.
Pure production systems: no way to recover from dead ends (no backtracking)

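A minimal sketch of the recognize/act cycle in Python (the rules and
working-memory contents are hypothetical; conflict resolution here
simply fires the first applicable rule):

    working_memory = {'cold'}
    productions = [
        (lambda wm: 'cold' in wm or 'rainy' in wm,   # condition
         lambda wm: wm | {'drive_to_work'}),         # action
        (lambda wm: 'drive_to_work' in wm,
         lambda wm: wm | {'at_work'}),
    ]

    while True:
        # recognize: the conflict set is every rule whose condition
        # matches and whose action would still change working memory
        conflict_set = [(cond, act) for cond, act in productions
                        if cond(working_memory)
                        and not act(working_memory) <= working_memory]
        if not conflict_set:
            break
        # resolve and act: fire the first rule, altering working memory
        _, act = conflict_set[0]
        working_memory = act(working_memory)

    print(working_memory)  # {'cold', 'drive_to_work', 'at_work'}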
                 7 Machine Learning

 Machine learning is programming computers to
  optimize a performance criterion using example
  data or past experience.
 There is no need to “learn” to calculate payroll
 Learning is used when:
   – Human expertise does not exist (navigating on Mars),
   – Humans are unable to explain their expertise (speech
     recognition)
   – Solution changes in time (routing on a computer network)
   – Solution needs to be adapted to particular cases (user
     biometrics)


              7.1 What is Learning
 Webster's definition of "to learn":
   "To gain knowledge or understanding of, or skill in,
    by study, instruction or experience"
     • Learning a set of new facts
     • Learning HOW to do something
     • Improving ability at something already learned

 Simon's definition of "machine learning":
   "Learning denotes changes in the system that are
    adaptive in the sense that they enable the
    system to do the same task or tasks drawn from
    the same population more effectively the next
    time"
    7.2 Major Paradigms Of Machine Learning

• Rote learning – One-to-one mapping from inputs to
  stored representation. “Learning by memorization.”
  Association-based storage and retrieval.
• Induction – Use specific examples to reach general
  conclusions
• Clustering – Unsupervised identification of natural
  groups in data
• Analogy – Determine correspondence between two
  different representations
• Discovery – Unsupervised, specific goal not given
• Genetic algorithms – “Evolutionary” search techniques,
  based on an analogy to “survival of the fittest”
• Reinforcement – Feedback (positive or negative reward)
  given at the end of a sequence of steps

• AI systems grow from a minimal amount of
  knowledge by learning
• Herbert Simon (1983):
  – Any change in a system that allows it to perform
    better the second time on repetition of the same
    task or on another task drawn from the same
    population
• Machine learning issues:
  – Generalization from experience
     • Induction
     • Inductive biases
  – Performance change: improve or degrade

  7.3 Machine Learning Categories
Symbol-based learning
  – Inductive learning -- learning by examples
  – Supervised learning/unsupervised learning
    • Concept learning -- classification
    • Concept formation -- clustering
  – Explanation-based learning
  – Reinforcement learning
Neural/connectionist networks
Genetic/evolutionary learning


7.4 A General Model Of The Learning Process

[Figure: a general model of the learning process]
          7.5 Learning Components
 Data and goals of learning task
    What are given – training instances
    What are expected
 Knowledge representation
    Logic expressions
    Decision trees
    Rules
 Operations
    Generalization/specialization
    Heuristic rules
    Weight adjustment
 Concept space
    Search space: representation, format
 Heuristic search
    Search control in the concept space


       8 Natural Language Processing

 NLP is the application of a computational theory of
  human language
 Language is the predominant repository of human
  interaction and knowledge
 Goal of NLP: programs that “listen in”
 The AI Challenge: the Turing test
 Lots of speech and text data available




8.1 NLP: Lots of Applications
   • Doc classification
   • Doc clustering
   • Spam detection
   • Information extraction
   • Summarization
   • Machine translation
   • Cross Language IR
   • Multiple language summarization
   • Language generation
   • Plagiarism or author detection
   • Error correction, language restoration
   • Language teaching
   • Question answering
   • Knowledge acquisition (dictionaries, thesaurus, semantic lexicons)
   • Speech recognition
   • Text to Speech
   • Speaker Identification
   • (multi-modal) Dialog systems
   • Deciphering ancient scripts
         8.2 Natural Language

Answers from linguistics: the scientific study of
human language
Natural Language (NL) vs. Artificial Language
Genetic basis of human language
Mysteriously distinct from other species (human
language is unique to humans)
NL is complex, displays recursive structure
Learning of language is an inherent part of NL
Language has idiosyncratic rules and a complex
mapping to thought
       Language Has Structure


•   What he did was climb a tree
•   What he ran was to the store
•   Drink your beer and go home!
•   What are drinking and go home?
•   Linus lost his security blanket
•   Lost Linus blanket security his
             Language Is Recursive

• This is the house
• This is the house that Jack built
• This is the grain that lay in the house that Jack
built
• This is the rat that ate the grain that lay in the
house that Jack built
• This is the cat that killed the rat that ate the grain
that lay in the house that Jack built
• This is the dog that chased the cat that killed the
rat that ate the grain that lay in the house that Jack
built
    8.3 Facets of Language Structure
Phonetics: acoustic and perceptual elements of speech
Phonology: inventory of basic sounds (phonemes) and
basic rules for their combination, e.g. vowel harmony
Morphology: how morphemes combine to form words and
how they relate to meaning, e.g. delight-ed vs.
de-light-ed
Syntax: sentence (utterance) formation, word order and
the formation of constituents from word groupings
Semantics: how word meanings recursively
compose to form sentence meanings (from syntax to
logical formulas)
Pragmatics: meaning that is not part of compositional
meaning, e.g. This professor dresses even worse than
Anoop!
                   9 Game Playing

 In solving problems, sometimes we have to search
  through many possible ways of doing something.
 Example :
   We may know all the possible actions our robot can do,
    but we have to consider various sequences to find a
    sequence of actions to achieve a goal.
   We may know all the possible moves in a chess game, but
    we must consider many possibilities to find a good move.
 Many problems can be formalised in a general way
  as search problems.



          9.1 Types of games

                          Deterministic         Chance

Perfect information       Chess, checkers,      Backgammon,
                          go, othello           monopoly

Imperfect information                           Bridge, poker,
                                                scrabble, nuclear
                                                war
          9.2 Why Study Games?
• Clear criteria for success
• Offer an opportunity to study problems involving
  {hostile, adversarial, competing} agents.
• Historical reasons
• Fun
• Interesting, hard problems which require minimal
  “initial structure”
• Games often define very large search spaces
   – chess: 35^100 nodes in the search tree, 10^40 legal states
                       State Of The Art
• How good are computer game players?
    – Chess:
        • Deep Blue beat Garry Kasparov in 1997
        • Garry Kasparov vs. Deep Junior (Feb 2003): tie!
        • Kasparov vs. X3D Fritz (November 2003): tie!
           http://www.cnn.com/2003/TECH/fun.games/11/19/kas
           parov.chess.ap/
    – Checkers: Chinook (an AI program with a very large endgame database) is(?)
      the world champion.
    – Go: Computer players are decent, at best
    – Bridge: “Expert-level” computer players exist (but no world champions yet!)
• Good places to learn more:
    – http://www.cs.ualberta.ca/~games/
    – http://www.cs.unimass.nl/icga
                          Chinook
• Chinook is the World Man-Machine Checkers
  Champion, developed by researchers at the University
  of Alberta.
• It earned this title by competing in human
  tournaments, winning the right to play for the
  (human) world championship, and eventually
  defeating the best players in the world.
• Visit http://www.cs.ualberta.ca/~chinook/ to play a
  version of Chinook over the Internet.
• The developers claim to have fully analyzed the game
  of checkers, and can provably always win if they play
  black
• “One Jump Ahead: Challenging Human Supremacy in
  Checkers” Jonathan Schaeffer, University of Alberta
  (496 pages, Springer. $34.95, 1998).
            9.3 Typical Case
• 2-person game
• Players alternate moves
• Zero-sum: one player’s loss is the other’s gain
• Perfect information: both players have access to
  complete information about the state of the game.
  No information is hidden from either player.
• No chance (e.g., using dice) involved
• Examples: Tic-Tac-Toe, Checkers, Chess, Go, Nim,
  Othello
• Not: Bridge, Solitaire, Backgammon, ...
          How To Play A Game
• A way to play such a game is to:
   – Consider all the legal moves you can make
   – Compute the new position resulting from each move
   – Evaluate each resulting position and determine which is
     best
   – Make that move
   – Wait for your opponent to move and repeat
• Key problems are:
   – Representing the “board”
   – Generating all legal next boards
   – Evaluating a position
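
A minimal sketch of that loop in Python; `legal_moves`, `result`, and
`evaluate` are hypothetical stand-ins for the three key problems above:

    def legal_moves(board):      # generate all legal next moves
        return [0, 1, 2]

    def result(board, move):     # compute the position after a move
        return board + [move]

    def evaluate(board):         # rate the goodness of a position
        return board[-1] - len(board)

    def choose_move(board):
        # consider every legal move, evaluate each resulting
        # position, and pick the move that leads to the best one
        return max(legal_moves(board), key=lambda m: evaluate(result(board, m)))

    print(choose_move([0]))  # 2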
         9.4 Evaluation Function
• Evaluation function or static evaluator is used to evaluate
  the “goodness” of a game position.
   – Contrast with heuristic search where the evaluation function was
     a non-negative estimate of the cost from the start node to a goal
     and passing through the given node
• The zero-sum assumption allows us to use a single
  evaluation function to describe the goodness of a board
  with respect to both players.
   –   f(n) >> 0: position n good for me and bad for you
   –   f(n) << 0: position n bad for me and good for you
   –   f(n) near 0: position n is a neutral position
   –   f(n) = +infinity: win for me
   –   f(n) = -infinity: win for you
  Evaluation Function Examples
• Example of an evaluation function for Tic-Tac-Toe:
   f(n) = [# of 3-lengths open for me] - [# of 3-lengths open for you]
   where a 3-length is a complete row, column, or diagonal
• Alan Turing’s function for chess
   – f(n) = w(n)/b(n) where w(n) = sum of the point value of white’s
     pieces and b(n) = sum of black’s
• Most evaluation functions are specified as a weighted sum
  of position features:
   f(n) = w1*feat1(n) + w2*feat2(n) + ... + wk*featk(n)
• Example features for chess are piece count, piece
  placement, squares controlled, etc.
• Deep Blue had over 8000 features in its evaluation function
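
A minimal Python sketch of the Tic-Tac-Toe evaluator above (the board
encoding, a 3x3 grid of 'X', 'O', or None, is a hypothetical choice):

    LINES = ([[(r, c) for c in range(3)] for r in range(3)] +      # rows
             [[(r, c) for r in range(3)] for c in range(3)] +      # columns
             [[(i, i) for i in range(3)],                          # diagonals
              [(i, 2 - i) for i in range(3)]])

    def f(board):
        def open_for(player):  # 3-lengths containing no opponent piece
            return sum(all(board[r][c] in (player, None) for r, c in line)
                       for line in LINES)
        return open_for('X') - open_for('O')

    board = [['X', None, None],
             [None, 'O',  None],
             [None, None, None]]
    print(f(board))  # 4 lines open for X - 5 open for O = -1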
              9.5 Game Tree
Game Tree = Special case of AND/OR
 Graph
Objective of Game tree Search
   To find good first move
Static Evaluation Function e
   Measures the worth of a tip node
     if p is a win for MAX, then e(p) = +∞
     if p is a win for MIN, then e(p) = −∞
• Problem spaces for typical games are
  represented as trees
• Root node represents the current
  board configuration; player must decide
  the best single move to make next
• Static evaluator function rates a board
  position: f(board) = a real number, with
  f > 0 for "white" (me), f < 0 for black (you)
• Arcs represent the possible legal moves for a player
• If it is my turn to move, then the root is labeled a "MAX" node;
  otherwise it is labeled a "MIN" node, indicating my opponent's turn.
• Each level of the tree has nodes that are all MAX or all MIN; nodes at
  level i are of the opposite kind from those at level i+1
           9.6 Minimax Procedure
Complete search of game tree
      Usually depth-first
Apply utility function on leaves of the tree
Starting with leaves determine value of
 nodes
      If n is a MAX node, then value(n) = max over succ(n)
      If n is a MIN node, then value(n) = min over succ(n)
Max then chooses best move in root of the
 tree

• Create start node as a MAX node with current board
  configuration
• Expand nodes down to some depth (a.k.a. ply) of
  lookahead in the game
• Apply the evaluation function at each of the leaf nodes
• “Back up” values for each of the non-leaf nodes until a
  value is computed for the root node
   – At MIN nodes, the backed-up value is the minimum of the values
     associated with its children.
   – At MAX nodes, the backed-up value is the maximum of the values
     associated with its children.
• Pick the operator associated with the child node whose
  backed-up value determined the value at the root
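
A minimal minimax sketch in Python over an explicit game tree (the
encoding is a hypothetical choice: leaves are static-evaluation numbers,
interior nodes are lists, levels alternate MAX/MIN). The values mirror
the example figure below:

    def minimax(node, maximizing):
        if isinstance(node, (int, float)):   # leaf: apply the static evaluator
            return node
        values = [minimax(child, not maximizing) for child in node]
        # back up the max at MAX nodes, the min at MIN nodes
        return max(values) if maximizing else min(values)

    tree = [[2, 7], [1, 8]]                  # MAX root over two MIN nodes
    print(minimax(tree, True))               # min(2,7)=2, min(1,8)=1 -> 2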
[Figure: minimax example — the leaves have static evaluator values 2, 7,
1, 8; the MIN nodes back up 2 and 1; the MAX root backs up 2, and
minimax selects the move leading to the MIN node with value 2]
        MiniMax for Tic-Tac-Toe
• Two players, X and O; X plays first
• Static evaluation function
  – If p is not a winning position:
     e(p) = (# complete rows, columns, or diagonals that are
       still open for X) - (# complete rows, columns, or
       diagonals that are still open for O)
  – If p is a win for X, e(p) = +∞
  – If p is a win for O, e(p) = −∞
Partial Game Tree for Tic-Tac-Toe




                  • f(n) = +1 if the position is
                    a win for X.
                  • f(n) = -1 if the position is
                    a win for O.
                  • f(n) = 0 if the position is a
                    draw.
            9.7 Alpha-beta Pruning
 • We can improve on the performance of the minimax
   algorithm through alpha-beta pruning
 • Basic idea: “If you have an idea that is surely bad, don't take
   the time to see how truly awful it is.” -- Pat Winston


[Figure: a MAX root over two MIN nodes; the left MIN node = 2 (leaves 2
and 7), so the root is >= 2; the right MIN node is <= 1 after its first
leaf (1), leaving one leaf "?" unexamined]

• We don't need to compute the value at the unexamined leaf: no matter
  what it is, it can't affect the value of the root node.
• Traverse the search tree in depth-first order
• At each MAX node n, alpha(n) = maximum value found so
  far
• At each MIN node n, beta(n) = minimum value found so far
   – Note: The alpha values start at -infinity and only increase, while beta
     values start at +infinity and only decrease.
• Beta cutoff: Given a MAX node n, cut off the search below n
  (i.e., don’t generate or examine any more of n’s children) if
  alpha(n) >= beta(i) for some MIN node ancestor i of n.
• Alpha cutoff: stop searching below MIN node n if beta(n) <=
  alpha(i) for some MAX node ancestor i of n.
• Keep track of α for each MAX node
     –   α never decreases
     –   Denotes the value of the best path so far below the MAX node
     –   Compute α as the value of the best (largest) successor of MAX
     –   Discontinue search below a MAX node when there exists a MIN ancestor
         of the node for which α ≥ β
• Keep track of β for each MIN node
     –   β never increases
     –   Denotes the value of the best path so far below the MIN node
     –   Compute β as the value of the best (smallest) successor of MIN
     –   Discontinue search below a MIN node when there exists a MAX ancestor
         of the node for which β ≤ α
• Alpha-beta result is identical to that of minimax ! Only more
  efficient
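
A minimal alpha-beta sketch in Python, using the same explicit-tree
encoding as the minimax example above; it returns exactly the minimax
value. The tree mirrors the worked example that follows, with
hypothetical values filled in for the pruned leaves:

    def alphabeta(node, alpha=float('-inf'), beta=float('inf'), maximizing=True):
        if isinstance(node, (int, float)):
            return node
        if maximizing:
            value = float('-inf')
            for child in node:
                value = max(value, alphabeta(child, alpha, beta, False))
                alpha = max(alpha, value)    # alpha never decreases
                if alpha >= beta:
                    break                    # cutoff: prune remaining children
            return value
        else:
            value = float('inf')
            for child in node:
                value = min(value, alphabeta(child, alpha, beta, True))
                beta = min(beta, value)      # beta never increases
                if alpha >= beta:
                    break                    # cutoff
            return value

    tree = [[3, 12, 8], [2, 4, 6], [14, 1, 7]]
    print(alphabeta(tree))  # 3; the leaves 4, 6, and 7 are never examined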

• When we have BUV(A) = -1 (BUV = backed-up value), we know
   – BUV(S) >= -1   (lower bound: α-value)
• When we have BUV(C) = -2, we know
   – BUV(B) <= -2   (upper bound: β-value)
• The final value of B can never exceed the current value of S, so we
  can prune the other children of B.
• The α-value of a MAX node never decreases
• The β-value of a MIN node never increases
• Rules for discontinuing search
   – Any MIN node having a β-value <= the α-value of any MAX
     ancestor (α-cutoff)
   – Any MAX node having an α-value >= the β-value of any MIN
     ancestor (β-cutoff)
• α/β-value computation during search
   – The α-value of a MAX node is set to the largest final BUV of its successors
   – The β-value of a MIN node is set to the smallest final BUV of its successors
               Alpha-beta Example

[Figure: a MAX root (value 3) over three MIN nodes; the first MIN node
examines leaves 3, 12, 8 and backs up 3; the second backs up 2 from its
first leaf and its remaining children are pruned; the third examines
leaves 14 and 1, backs up a value <= 1, and its remaining children are
pruned]
    Effectiveness Of Alpha-beta
• Alpha-beta is guaranteed to compute the same value for
  the root node as computed by minimax, with less or equal
  computation
• Worst case: no pruning, examining b^d leaf nodes, where
  each node has b children and a d-ply search is performed
• Best case: examine only about 2b^(d/2) leaf nodes.
   – Result is you can search twice as deep as minimax!
• Best case is when each player’s best move is the first
  alternative generated
• In Deep Blue, they found empirically that alpha-beta
  pruning meant that the average branching factor at each
  node was about 6 instead of about 35!
         Group game presentation

•   9
•   7
•   10
•   12
•   3
•   6
•   8
•   4

								