
241-320 Design Architecture and Engineering for Intelligent System
Suntorn Witosurapot

Contact Address: Phone: 074 287369 or Email: wsuntorn@coe.psu.ac.th

November 2009

Lecture 5: Problem Solving and Search, Part 4: Informed Search

Overview

- Heuristics
- Informed Search Methods
  - Greedy Best-First Search
  - A* Search
  - Iterative Deepening A* Search
  - Local Search
- Conclusion


Local search algorithms

- In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution
- State space = set of "complete" configurations
- Goal: find a configuration satisfying the constraints, e.g., n-queens
- In such cases, we can use local search algorithms: keep a single "current" state and try to improve it
  (Ex: as in our part-2 slides, the evaluation function should get lower and lower at each step)

Example: n-queens
Put n queens on an n x n board with no two queens on the same row, column, or diagonal

Note: The eight queens puzzle has 92 distinct solutions

[Ref] http://en.wikipedia.org/wiki/Eight_queens_puzzle

Hill-climbing with Steepest Descent


It is a kind of local search algorithm. Analogy: imagine you have climbed a hill, but it got dark before you could get back down. Your goal is the bottom of the hill, and your problem is finding it. You have no GPS, so greedy search is not an option. You would make sure that every step you take leads downhill and, ignoring cliffs, as steeply as possible.


Hill-climbing with Steepest Descent


That is the fastest way to the bottom, unless:
- You are at the bottom of a dip (point M), and every step you might make takes you uphill
- You are on a plateau (region B), and every step makes no difference

[Figure: objective function plotted over the state space, showing the descent direction at the current state A, a dip at M, and a plateau region B]

Steepest Descent Search


- Steepest descent searches are designed to take (you guessed it) the steepest path down the evaluation function
- The evaluation function equals zero at the goal and is positive elsewhere
- You don't actually need a goal; you could just be trying to minimise the cost of something
- Or reverse it all and try to maximise something


Steepest Descent
1) S ← initial state
2) Repeat:
   a) S' ← arg min_{S'' ∈ SUCCESSORS(S)} h(S'')
   b) if GOAL?(S') return S'
   c) if h(S') < h(S) then S ← S' else return failure
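The loop above can be sketched in a few lines of Python. This is a minimal illustration, not the lecture's own code: `successors`, `h`, and `is_goal` are hypothetical problem-specific callables supplied by the caller.

```python
def steepest_descent(initial_state, successors, h, is_goal):
    """Greedily move to the lowest-h successor; stop at a local minimum."""
    s = initial_state
    while True:
        best = min(successors(s), key=h)  # arg min over SUCCESSORS(S)
        if is_goal(best):
            return best
        if h(best) < h(s):
            s = best                      # strict improvement: keep descending
        else:
            return None                   # no improving successor: failure
```

For example, with integer states, `successors(x) = [x - 1, x + 1]`, and `h(x) = |x - 3|`, the search walks straight to the goal state 3 from any start with no intervening local minimum.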


Application: 8-Queens

Repeat n times:
1) Pick an initial state S at random, with one queen in each column
2) Repeat k times:
   a) If GOAL?(S) then return S
   b) Pick an attacked queen Q at random
   c) Move Q within its column to minimize the number of attacking queens → new S   [min-conflicts heuristic]
3) Return failure
[Figure: an 8-queens board annotated with the number of attacking queens for each candidate square in the chosen column]
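The min-conflicts procedure above is short enough to sketch in full. This is an illustrative implementation, not the lecture's own code; the function and parameter names (`min_conflicts`, `restarts`, `steps`) are my own, and a state is a list mapping each column to the row of its queen.

```python
import random

def conflicts(state, col, row, n):
    """Number of queens attacking a queen placed at (col, row)."""
    count = 0
    for c in range(n):
        if c == col:
            continue
        r = state[c]
        if r == row or abs(r - row) == abs(c - col):  # same row or diagonal
            count += 1
    return count

def min_conflicts(n=8, restarts=50, steps=200):
    for _ in range(restarts):                             # "repeat n times"
        state = [random.randrange(n) for _ in range(n)]   # one queen per column
        for _ in range(steps):                            # "repeat k times"
            attacked = [c for c in range(n)
                        if conflicts(state, c, state[c], n) > 0]
            if not attacked:                              # GOAL?(S)
                return state
            col = random.choice(attacked)                 # pick an attacked queen
            # move it within its column to the min-conflicts row
            state[col] = min(range(n),
                             key=lambda r: conflicts(state, col, r, n))
        # no solution after k steps: restart from scratch
    return None
```

A returned state has zero conflicts for every queen, i.e., it is one of the 92 distinct solutions mentioned earlier.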

Application: 8-Queens (why does it work?)

1) There are many goal states, and they are well-distributed over the state space
2) If no solution has been found after a few steps, it is better to start all over again; building a search tree would be much less efficient because of the high branching factor
3) Running time is almost independent of the number of queens

Steepest Descent
1) S ← initial state
2) Repeat:
   a) S' ← arg min_{S'' ∈ SUCCESSORS(S)} h(S'')
   b) if GOAL?(S') return S'
   c) if h(S') < h(S) then S ← S' else return failure

This may easily get stuck in local minima (see the next slides). Remedies:
- Random restart (as in the n-queens example), or
- Use another technique, e.g., Simulated Annealing Search

Steepest Descent Search (cont.)


- Problem: depending on the initial state, steepest descent can get stuck in local minima
- Suppose we are at point A and would like to be at point B, the goal. Everything goes fine until we get to M, a local minimum; then we are stuck.

[Figure: objective function over the state space, with the descent direction from the current state at A, a local minimum at M, and the global minimum at B]

Simulated Annealing Search


- Idea: escape local minima by allowing some "bad" moves, but gradually decrease their frequency

[Figure: state space with the current state near A, a local minimum at M, and arrows marking a good (downhill) move and a bad (uphill) move]

Note: if you're curious, "annealing" refers to the metallurgical process of heating a metal to a high temperature and then cooling it gradually

Simulated Annealing Search: No pain, no gain


- Allow non-improving moves, so that it is possible to go down in order to rise again and reach the global optimum

[Figure: a curve z(x) with a sequence of states x0, x1, ..., x13 escaping a local optimum and reaching the global one]

Simulated Annealing
- Improving moves are always accepted
- Non-improving moves may be accepted probabilistically, in a manner depending (loosely) on the temperature parameter T:
  - the worse the move, the less likely it is to be accepted
  - the cooler the temperature, the less likely a worsening move is to be accepted
- The temperature T starts high and is gradually cooled as the search progresses
  - Initially (when things are hot), virtually anything is accepted; at the end (when things are nearly frozen), only improving moves are allowed, and the search effectively reduces to hill-climbing

Simulated Annealing Search


1) S ← initial state
2) Repeat:
   a) S' ← arg min_{S'' ∈ SUCCESSORS(S)} h(S'')
   b) if GOAL?(S') return S'
   c) if h(S') < h(S) then S ← S'
   d) else, with probability p: S ← S'   [this step is what differs from hill-climbing]

Q: How can we calculate p?


Setting p
- What if p is too low? We don't make many uphill moves, and we might not get out of many local minima
- What if p is too high? We may be making too many suboptimal moves
- Should p be constant? If so, we might be making too many random moves when we are near the global minimum


Setting p (cont.)
- Decrease p as iterations progress
  - Accept more uphill moves early; accept fewer as the search goes on
  - Intuition: as the search progresses, we are moving towards more promising areas, quite likely toward a global minimum
- Decrease p as h(S') - h(S) increases
  - Accept fewer uphill moves if the slope is high (see the next slide for intuition)

Decreasing p as h(S') - h(S) increases

[Figure: two objective-function profiles around the current state, one sharp and one smooth]

- When h(S') - h(S) is large, we are likely moving towards a sharp (interesting) minimum, so don't move uphill too much
- When h(S') - h(S) is small, we are likely moving towards a smooth (uninteresting) minimum, so we want to escape this local minimum
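Both requirements on p (decrease it with temperature, and decrease it as the move gets worse) are met by the acceptance rule p = exp(-Δh/T) that the algorithm uses. A quick numeric check, with `accept_prob` as an illustrative helper name:

```python
import math

def accept_prob(delta_h, T):
    """Probability of accepting an uphill move of size delta_h > 0 at temperature T."""
    return math.exp(-delta_h / T)

# p shrinks as the move gets worse (larger delta_h) and as T cools
for T in (10.0, 1.0, 0.1):
    row = [round(accept_prob(dh, T), 3) for dh in (0.5, 1.0, 5.0)]
    print(f"T = {T:>4}: p for delta_h in (0.5, 1.0, 5.0) -> {row}")
```

At high temperature nearly every uphill move is accepted; at low temperature even small uphill moves are almost always rejected, which matches the "random walk vs. stochastic hill-climbing" extremes described later.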

Complete Simulated Annealing Search Algorithm

1) S ← initial state
2) Repeat k times:
   a) If GOAL?(S) then return S
   b) S' ← successor of S picked at random
   c) if h(S') ≤ h(S) then S ← S'   [improving: definitely accept the change]
   d) else, let Δh = h(S') - h(S); with probability ~ exp(-Δh/T), where T is called the temperature, do: S ← S'   [accept the change probabilistically]
3) T ← αT; when enough iterations have passed without improvement, terminate

Simulated annealing lowers T over the k iterations: it starts with a large T and slowly decreases it.
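The complete algorithm can be sketched as follows. This is an illustrative minimization version, not the lecture's own code: `h` and `random_successor` are hypothetical problem-specific callables, and `alpha`, `k`, and `T_min` are assumed parameter names for the cooling rate, moves per temperature, and stopping temperature.

```python
import math
import random

def simulated_annealing(s, h, random_successor,
                        T=100.0, alpha=0.95, k=400, T_min=1e-5):
    """Minimize h by simulated annealing; returns the best state seen."""
    best = s
    while T > T_min:
        for _ in range(k):                        # k trial moves per temperature
            s2 = random_successor(s)
            dh = h(s2) - h(s)
            if dh <= 0:                           # improving: always accept
                s = s2
            elif random.random() < math.exp(-dh / T):
                s = s2                            # worsening: accept with prob exp(-dh/T)
            if h(s) < h(best):
                best = s
        T *= alpha                                # cool the temperature
    return best
```

For instance, minimizing h(x) = (x - 3)^2 over real states with random steps of up to ±1 reliably ends near x = 3.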

Simulated Annealing Search Algorithm (cont.)


- Probability of accepting a worsening move, for various Δh values, at different temperature ranges:
  - High temperature: accept nearly all moves (Random Walk)
  - Low temperature: accept almost only improving moves (Stochastic Hill-Climbing)


Convergence
- If the schedule lowers T slowly enough, the algorithm will find a global optimum with probability approaching 1
- In practice, reaching the global optimum could take an enormous number of iterations


Very Basic Simulated Annealing Example

Iteration 1:  T = 100         do 400 trial moves
Iteration 2:  T = T * 0.95    do 400 trial moves
Iteration 3:  T = T * 0.95    do 400 trial moves
Iteration 4:  T = T * 0.95    do 400 trial moves
...
Iteration m:  T = T * 0.95    do 400 trial moves
(final)       T = 0.00001     do 400 trial moves
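The geometric schedule above is easy to examine directly: starting from T = 100 and multiplying by 0.95 each iteration, we can count how many cooling steps it takes to reach the final temperature of 0.00001.

```python
# Count the cooling steps in the schedule T <- T * 0.95, from 100 down to 0.00001
T, k = 100.0, 0
while T > 0.00001:
    T *= 0.95
    k += 1
print(k, "cooling steps, each with 400 trial moves")
```

This shows that even a "very basic" schedule runs for a few hundred temperature levels, i.e., over a hundred thousand trial moves in total.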

Conclusion: Simulated annealing


- Design of the neighborhood is critical
- There are many parameters to tweak, e.g., the cooling rate, the number of trial moves k per temperature, and the initial temperature
- Simulated annealing is usually better than hill-climbing, if you can find the right parameters


Parallel Local Search Techniques


These techniques perform several local searches concurrently, but not independently:
- Beam search
- Genetic algorithms (to be studied later)


Conclusion
- Walking downhill is not as easy as you'd think
- Informed search algorithms try to move quickly towards a goal, based on a distance metric from their current point
- Greedy search algorithms only follow paths in the search space that bring them closest to the goal
- Local search algorithms have no memory to store tree structures, but work by intelligently covering selected parts of the search space

Search problems

- Blind search
- Heuristic search: best-first and A*
- Construction of heuristics
- Variants of A*
- Local search

See a Visualization of N-Queens Solutions

For a live demo of N-Queens solutions with different algorithms, visit:
http://yuval.bar-or.org/index.php?item=9


Reading

[2], Section 2.4.3
