November 2009
Overview
Heuristics
Informed Search Methods: Greedy Best-First Search, A* Search, Iterative Deepening A* Search, Local Search
Conclusion
Example: n-queens
Put n queens on an n × n board with no two queens on the same row, column, or diagonal
[Ref] http://en.wikipedia.org/wiki/Eight_queens_puzzle
241-320 Design Architecture & Engineering for Intelligent System
[Figure: objective function over the state space, showing a descent direction from the current state A toward B]
Steepest Descent
1) S ← initial state
2) Repeat:
   a) S' ← arg min_{S' ∈ SUCCESSORS(S)} h(S')
   b) if GOAL?(S') then return S'
   c) if h(S') < h(S) then S ← S', else return failure
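The steps above can be sketched in Python. The `successors`, `h`, and `is_goal` callables are problem-specific assumptions, not part of the slides' notation:

```python
def steepest_descent(initial, successors, h, is_goal):
    """Steepest descent: repeatedly move to the best successor.

    successors(s) -> iterable of successor states
    h(s)          -> heuristic value to minimize
    is_goal(s)    -> True if s is a goal state
    """
    s = initial
    while True:
        best = min(successors(s), key=h)  # arg min over successors of h
        if is_goal(best):
            return best
        if h(best) < h(s):
            s = best                      # strictly downhill: keep going
        else:
            return None                   # local minimum: report failure
```

For example, minimizing h(x) = (x − 3)² over the integers with neighbors x ± 1 walks straight down to x = 3.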
Application: 8-Queen
Repeat n times:
1) Pick an initial state S at random, with one queen in each column
2) Repeat k times:
   a) if GOAL?(S) then return S
   b) Pick an attacked queen Q at random
   c) Move Q within its column to minimize the number of attacking queens → new S [min-conflicts heuristic]
3) Return failure
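A minimal Python sketch of this min-conflicts procedure; the restart and step counts are illustrative choices, not from the slides:

```python
import random

def min_conflicts_queens(n, restarts=50, steps=100):
    """Min-conflicts local search for n-queens.
    State: queens[c] = row of the queen in column c."""
    def conflicts(queens, col, row):
        # Number of other queens attacking square (row, col).
        return sum(1 for c in range(n) if c != col and
                   (queens[c] == row or abs(queens[c] - row) == abs(c - col)))

    for _ in range(restarts):                       # "Repeat n times"
        queens = [random.randrange(n) for _ in range(n)]
        for _ in range(steps):                      # "Repeat k times"
            attacked = [c for c in range(n) if conflicts(queens, c, queens[c]) > 0]
            if not attacked:
                return queens                       # GOAL?(S)
            col = random.choice(attacked)           # pick an attacked queen at random
            # Move it within its column to a minimum-conflict row (random tie-break).
            best = min(conflicts(queens, col, r) for r in range(n))
            queens[col] = random.choice(
                [r for r in range(n) if conflicts(queens, col, r) == best])
    return None
```

For n = 8 this typically finds a solution within a few dozen steps of a single restart.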
Application: 8-Queen
Why does it work?
1) There are many goal states, and they are well-distributed over the state space
2) If no solution has been found after a few steps, it is better to start over; building a search tree would be much less efficient because of the high branching factor
3) Running time is almost independent of the number of queens
Steepest Descent
Steepest descent may easily get stuck in local minima (see next slides). Remedies:
- Random restart (as in the n-queens example), or
- Use another technique, e.g. simulated annealing
Objective function
[Figure: objective-function landscape over the state space, with current state A, goal/global minimum B, and a local minimum m]
Suppose we are at point A and would like to be at point B, the goal. Everything goes fine until we get to m, a local minimum; then we are stuck.
[Figure: good and bad moves from the current state]
Note: if you're curious, annealing refers to the metallurgical process of heating a metal to a high temperature and then gradually cooling it to alter its properties
Simulated Annealing
Improving moves are always accepted. Non-improving moves may be accepted probabilistically, in a manner that depends on the temperature parameter T. Loosely:
- the worse the move, the less likely it is to be accepted
- the cooler the temperature, the less likely a worsening move is to be accepted
The temperature T starts high and is gradually lowered as the search progresses. Initially (when things are hot) virtually anything is accepted; at the end (when things are nearly frozen) only improving moves are allowed, and the search effectively reduces to hill-climbing.
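A few concrete numbers make this hot/cold behavior visible, assuming the standard Boltzmann acceptance probability exp(−Δh/T) for a move that worsens h by Δh:

```python
import math

def accept_prob(delta_h, T):
    """Probability of accepting a move that changes h by delta_h at temperature T."""
    return 1.0 if delta_h <= 0 else math.exp(-delta_h / T)

hot = accept_prob(5, T=100.0)   # hot: even a clearly worse move is very likely accepted (~0.95)
cold = accept_prob(5, T=0.1)    # cold: the same move is essentially never accepted (~2e-22)
```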
Setting p
What if p is too low? We don't make many uphill moves and might not escape many local minima. What if p is too high? We may be making too many suboptimal moves. Should p be constant? We might be making too many random moves when we are near the global minimum.
Setting p (cont.)
Decrease p as iterations progress: accept more uphill moves early, fewer as the search goes on. Intuition: as the search progresses, we are moving toward more promising areas, quite likely toward a global minimum. Decrease p as h(S') − h(S) increases: accept fewer uphill moves when the slope is steep. See the next slide for the intuition.
Problem Solving and Search part 4
When h(S') − h(S) is large, we are likely moving toward a sharp (interesting) minimum, so we should not move uphill too much. When h(S') − h(S) is small, we are likely moving toward a smooth (uninteresting) minimum, so we want a chance to escape this local minimum.
1) S ← initial state; T ← large initial temperature
2) Repeat:
   a) if GOAL?(S) then return S
   b) S' ← successor of S picked at random
   c) if h(S') ≤ h(S) then S ← S'
   d) else, let Δh = h(S') − h(S); accept the change with probability ~ exp(−Δh/T), where T is called the temperature: S ← S'
3) Lower T according to the cooling schedule; when enough iterations have passed without improvement, terminate.
Simulated annealing lowers T over the k iterations: it starts with a large T and slowly decreases T.
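The loop above can be sketched in Python. The initial temperature, geometric cooling factor, and termination patience are illustrative choices, not prescribed by the slides:

```python
import math
import random

def simulated_annealing(initial, random_successor, h,
                        T0=10.0, alpha=0.95, patience=1000):
    """Simulated annealing sketch: accept improving moves always,
    worsening moves with probability exp(-dh / T); cool T geometrically."""
    s, T = initial, T0
    best, since_improve = s, 0
    while since_improve < patience:
        s2 = random_successor(s)               # successor picked at random
        dh = h(s2) - h(s)
        if dh <= 0 or random.random() < math.exp(-dh / T):
            s = s2                             # accept (always, if improving)
        if h(s) < h(best):
            best, since_improve = s, 0         # new best-so-far
        else:
            since_improve += 1
        T = max(alpha * T, 1e-9)               # cool the temperature
    return best
```

On the toy problem of minimizing h(x) = (x − 3)² with random ±1 neighbors, the search descends to x = 3 and then, as T shrinks, effectively stops accepting uphill moves.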
Convergence
If the schedule lowers T slowly enough, the algorithm will find a global optimum with probability approaching 1. In practice, however, reaching the global optimum can take an enormous number of iterations.
[Figure: simulated annealing behavior at T = 0.00001]
Conclusion
Walking downhill is not as easy as you'd think. Informed search algorithms try to move quickly towards a goal, based on a distance metric from their current point. Greedy search algorithms follow only the paths in the search space that bring them closest to the goal. Local search algorithms keep no memory of tree structures; instead, they work by intelligently covering selected parts of the search space.
Search problems
Blind search
Heuristic search: best-first and A*
Construction of heuristics
Variants of A*
Local search
Reading
2 2.4.3