# Introduction to Artificial Intelligence 236501
## Heuristic Search

Ruth Bergman, Fall 2002
## Search Strategies

- Uninformed search (= blind search)
  - has no information about the number of steps or the path cost from the current state to the goal
- Informed search (= heuristic search)
  - has some domain-specific information
  - we can use this information to speed up the search
  - e.g., Bucharest is southeast of Arad
  - e.g., the number of tiles that are out of place in an 8-puzzle position
  - e.g., for the missionaries-and-cannibals problem, prefer moves that carry people across the river quickly
## Heuristic Search

- Suppose we have one piece of information: a heuristic function h
  - h(n) = 0 if n is a goal node
  - h(n) > 0 if n is not a goal node
  - we can think of h(n) as a "guess" of how far n is from the goal

```
Best-First-Search(state, h)
  nodes <- MakePriorityQueue(state, h(state))
  while (nodes != empty)
    node = pop(nodes)
    if (GoalTest(node) succeeds) return node
    for each child in succ(node)
      nodes <- push(child, h(child))
  return failure
```
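The pseudocode above can be sketched in Python with a binary heap as the priority queue. This is a minimal illustration, not the course's reference implementation; the toy graph, heuristic values, and function names are assumptions for the example. A visited set is added so the sketch terminates on graphs with cycles, which the slide's pseudocode does not do.

```python
import heapq

def best_first_search(start, h, successors, is_goal):
    """Greedy best-first search: always expand the node in the
    frontier with the smallest heuristic value h(n)."""
    frontier = [(h(start), start)]   # priority queue ordered by h
    visited = {start}                # avoid re-expanding states
    while frontier:
        _, node = heapq.heappop(frontier)
        if is_goal(node):
            return node
        for child in successors(node):
            if child not in visited:
                visited.add(child)
                heapq.heappush(frontier, (h(child), child))
    return None                      # failure

# Toy example (hypothetical graph): search for node 'G' from 'A'.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['G'], 'D': [], 'G': []}
h = {'A': 3, 'B': 2, 'C': 1, 'D': 2, 'G': 0}
result = best_first_search('A', h.__getitem__, graph.__getitem__,
                           lambda n: n == 'G')
```

Here the search pops `A`, then `C` (h = 1 beats `B`'s h = 2), then finds `G`.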
## Heuristics: Example

- Travel: h(n) = distance(n, goal)

[Figure: road map of Romanian cities (Arad, Sibiu, Fagaras, Bucharest, Craiova, ...) with straight-line distance to the goal used as the heuristic]
## Heuristics: Example

- 8-puzzle: h(n) = number of tiles out of place

```
1 2 3
8 6 _      h(n) = 3
7 5 4
```
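The tiles-out-of-place heuristic is a one-line count. Below is a small Python sketch; the board encoding (a 9-tuple read row by row, with 0 for the blank) and the goal layout 1 2 3 / 8 _ 4 / 7 6 5 are assumptions chosen to match the h values on these slides.

```python
# Goal layout assumed from the slide's h values (0 = blank).
GOAL = (1, 2, 3,
        8, 0, 4,
        7, 6, 5)

def h_misplaced(board, goal=GOAL):
    """Count tiles that are not on their goal square.
    The blank (0) is not a tile, so it is never counted."""
    return sum(1 for tile, target in zip(board, goal)
               if tile != 0 and tile != target)

# The start state from the slide: tiles 6, 5, and 4 are misplaced.
start = (1, 2, 3,
         8, 6, 0,
         7, 5, 4)
```

`h_misplaced(start)` gives 3, matching the slide.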
## Example (cont.)

Best-first search expands the lowest-h node at each step (goal state: 1 2 3 / 8 _ 4 / 7 6 5):

```
Start (h = 3):       1 2 3
                     8 6 _
                     7 5 4

Successors:          1 2 3      1 2 3      1 2 _
                     8 6 4      8 _ 6      8 6 3
                     7 5 _      7 5 4      7 5 4
                     h = 2      h = 3      h = 4

Expand h = 2:        1 2 3      1 2 3
                     8 6 4      8 6 _
                     7 _ 5      7 5 4
                     h = 1      h = 3

Expand h = 1:        1 2 3      1 2 3      1 2 3
                     8 _ 4      8 6 4      8 6 4
                     7 6 5      _ 7 5      7 5 _
                     h = 0      h = 2      h = 2
```

The h = 0 node is the goal, so the search succeeds after three expansions.
## Best-First-Search Performance

- Completeness
  - complete if either the depth is finite, or every operator decreases h by at least some minimum amount
- Time complexity
  - depends on how good the heuristic function is
  - a "perfect" heuristic function would lead the search directly to the goal
  - we rarely have a "perfect" heuristic function
- Space complexity
  - maintains the entire fringe of the search in memory
  - high storage requirement
- Optimality
  - non-optimal solutions: suppose the heuristic drops to 1 everywhere except along the path on which the solution lies; the search then follows that path regardless of its true cost
## Iterative Improvement Algorithms

- Start with a complete configuration and make modifications to improve its quality
- Consider the states laid out on the surface of a landscape
- Keep track of only the current state
  - => a simplification of Best-First-Search
- Do not look ahead beyond the immediate neighbors of that state
  - e.g., an amnesiac climbing to a summit in thick fog

## Iterative Improvement: Basic Principle

[Figure: state-space landscape; iterative improvement moves the current state toward a peak]
## Hill-Climbing

"Like climbing Everest in thick fog with amnesia"

- A simple loop that continually moves in the direction of increasing value
- Does not maintain a search tree, so the node data structure need only record the state and its evaluation
- Always tries to make changes that improve the current state
- Steepest ascent: pick the best-valued next state

```
Hill-Climbing(state, h)
  current = state
  do forever
    next = maximum-valued successor of current
    if (value(next) < value(current)) return current
    current = next
```
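The loop above can be sketched in Python as follows. This is an illustrative sketch, not the course's code: the value function, successor function, and the toy maximization problem are assumptions. Note the slide's pseudocode maximizes a value; to use a heuristic h that shrinks toward the goal, one would maximize -h.

```python
def hill_climbing(state, value, successors):
    """Steepest-ascent hill-climbing: move to the best-valued
    successor until no successor improves on the current state."""
    current = state
    while True:
        best = max(successors(current), key=value)
        if value(best) <= value(current):   # no uphill move left
            return current
        current = best

# Toy example (hypothetical): maximize -(x - 3)^2 over integer steps.
value = lambda x: -(x - 3) ** 2
successors = lambda x: [x - 1, x + 1]
result = hill_climbing(0, value, successors)   # climbs 0 -> 1 -> 2 -> 3
```

On this single-peaked landscape the climb reaches the global maximum at x = 3; on a landscape with local maxima it would halt at whichever peak is nearest.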
## Drawbacks

- Local maxima: the search halts at a local maximum
- Plateaux: the search degenerates into a random walk
- Ridges: the search oscillates from side to side, making little progress

- Random-restart hill-climbing: conducts a series of hill-climbing searches from randomly generated initial states
## Hill-Climbing Performance

- Completeness
  - not complete; does not use a systematic search method
- Time complexity
  - depends on the heuristic function
- Space complexity
  - very low storage requirement
- Optimality
  - non-optimal solutions; often halts at a locally optimal solution
## Simulated Annealing

- Takes some uphill steps to escape a local minimum
- Instead of picking the best move, it picks a random move
- If the move improves the situation, it is executed; otherwise, the move is made with some probability less than 1
- Physical analogy with the annealing process:
  - allowing a liquid to cool gradually until it freezes
- The heuristic value plays the role of the energy, E
- A temperature parameter, T, controls the speed of convergence
## Simulated Annealing Algorithm

```
Simulated-Annealing(state, schedule)
  current = state
  for t = 1, 2, ...
    T = schedule(t)
    if (T = 0) return current
    next = a randomly selected successor of current
    ΔE = value(next) - value(current)
    if (ΔE > 0) current = next
    else current = next with probability e^(ΔE/T)
```
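A Python sketch of the algorithm above, under stated assumptions: the toy problem, successor function, and the reuse of the slide's linear schedule T = 100 - 5t are illustrative choices, not part of the lecture material.

```python
import math
import random

def simulated_annealing(state, value, successors, schedule):
    """Simulated annealing as in the slide's pseudocode: a random
    successor is always taken if it improves value; otherwise it is
    accepted with probability exp(dE / T)."""
    current = state
    for t in range(1, 10**6):
        T = schedule(t)
        if T <= 0:
            return current
        nxt = random.choice(successors(current))
        dE = value(nxt) - value(current)
        # dE > 0: always accept; dE <= 0: accept with prob e^(dE/T)
        if dE > 0 or random.random() < math.exp(dE / T):
            current = nxt
    return current

# Toy run (hypothetical problem): maximize -(x - 3)^2 with the
# slide's schedule T = 100 - 5t (the run stops when T reaches 0).
random.seed(0)
best = simulated_annealing(
    0,
    lambda x: -(x - 3) ** 2,
    lambda x: [x - 1, x + 1],
    lambda t: 100 - 5 * t,
)
```

With such a short, hot schedule most uphill and downhill moves alike are accepted, so the final state is essentially a random walk's endpoint; a slower schedule is what lets the walk settle near an optimum.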
## Simulated Annealing Solution Quality

- The schedule determines the rate at which the temperature is lowered
- If the schedule lowers T slowly enough, the algorithm will find a global optimum
- A high temperature T is characterized by a large portion of accepted uphill moves, whereas at low temperature only downhill moves are accepted

[Plot: acceptance probability over time t for the schedule T = 100 - 5t; at high temperature the probability of accepting an uphill move exceeds 0.9]

=> If a suitable annealing schedule is chosen, simulated annealing has been found capable of finding a good solution, though this is not guaranteed to be the absolute minimum.
## Beam Search

- Overcomes the storage complexity of Best-First-Search
- Maintains the k best nodes in the fringe of the search tree (sorted by the heuristic function)
- When k = 1, beam search is equivalent to hill-climbing
- When k is infinite, beam search is equivalent to Best-First-Search
- If you add a check to avoid repeated states, the memory requirement remains high
- Incomplete: the search may delete the path to the solution
## Beam Search Algorithm

```
Beam-Search(state, h, k)
  nodes <- MakePriorityQueue(state, h(state))
  while (nodes != empty)
    node = pop(nodes)
    if (GoalTest(node) succeeds) return node
    for each child in succ(node)
      nodes <- push(child, h(child))
      if (size(nodes) > k)
        delete last item in nodes
  return failure
```
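A Python sketch of the algorithm above: it is best-first search with the frontier pruned to the k best entries after each expansion. The toy graph and heuristic values are illustrative assumptions, and, like the slide's pseudocode, the sketch omits repeated-state checking.

```python
import heapq

def beam_search(start, h, successors, is_goal, k):
    """Beam search: best-first search whose frontier is truncated
    to the k nodes with the smallest heuristic values."""
    frontier = [(h(start), start)]
    while frontier:
        _, node = heapq.heappop(frontier)
        if is_goal(node):
            return node
        for child in successors(node):
            heapq.heappush(frontier, (h(child), child))
        frontier = heapq.nsmallest(k, frontier)  # keep only the k best
        heapq.heapify(frontier)
    return None                                  # failure

# Toy example (hypothetical graph), beam width k = 2.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['G'], 'D': [], 'G': []}
h = {'A': 3, 'B': 2, 'C': 1, 'D': 2, 'G': 0}
result = beam_search('A', h.__getitem__, graph.__getitem__,
                     lambda n: n == 'G', k=2)
```

With k = 2 the goal is still found here; with a beam too narrow, the node leading to the solution can be pruned, which is exactly the incompleteness the slide warns about.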
## Search Performance

8-puzzle results for two heuristics (iterative deepening is uninformed, so its numbers do not depend on the heuristic):

| Search algorithm    | Heuristic 1 (tiles out of place): expanded | solution length | Heuristic 2 (Manhattan distance*): expanded | solution length |
|---------------------|--------------------------------------------|-------------------|---------------------------------------------|-----------------|
| Iterative deepening | 1105                                       | 9                 | 1105                                        | 9               |
| Hill-climbing       | 2                                          | no solution found | 10                                          | 9               |
| Best-first          | 495                                        | 24                | 9                                           | 9               |

*Manhattan distance = total number of horizontal and vertical moves required to move all tiles from their current positions to their positions in the goal state.

```
state:   3 2 5        goal:   0 1 2
         7 1 _                3 4 5      h1 = 7
         4 6 8                6 7 8      h2 = 2+1+1+2+1+1+1+0 = 9
```

=> The choice of heuristic is critical to the performance of a heuristic search algorithm.
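The Manhattan-distance computation above can be checked with a short Python sketch. The board encoding (a 9-tuple read row by row, with 0 for the blank) is an assumption; the goal layout is the one shown on the slide.

```python
# Goal layout from the slide (0 = blank, top-left corner).
GOAL = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)

def h_manhattan(board, goal=GOAL):
    """Sum of horizontal + vertical moves needed to bring every
    tile to its goal square (the blank is not counted)."""
    total = 0
    for idx, tile in enumerate(board):
        if tile == 0:                      # skip the blank
            continue
        goal_idx = goal.index(tile)
        total += (abs(idx // 3 - goal_idx // 3)    # row distance
                  + abs(idx % 3 - goal_idx % 3))   # column distance
    return total

# The state from the slide; each tile's distance sums to 9.
state = (3, 2, 5,
         7, 1, 0,
         4, 6, 8)
```

`h_manhattan(state)` gives 9, matching the slide's tile-by-tile sum.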
