
Heuristic Search


Page 1: Heuristic Search

Heuristic Search
• Best First Search
  – A*
  – IDA*
  – Beam Search
• Generate and Test
• Local Searches
  – Hill Climbing
    • Simple Hill Climbing
    • Steepest Ascent Hill Climbing
    • Stochastic Hill Climbing
  – Simulated Annealing
  – Local beam search
  – Genetic Algorithms

Page 2: Heuristic Search

State spaces

⟨S, P, I, G, W⟩ – state space

– S – set of states
– P = {(x, y) | x, y ∈ S} – set of production rules
– I ∈ S – initial state
– G ⊆ S – set of goal states
– W: P → R+ – weight function

Page 3: Heuristic Search

Best First Search

BestFirstSearch(state space Σ = ⟨S, P, I, G, W⟩, f)
  Open ← {⟨f(I), I⟩}
  Closed ← ∅
  while Open ≠ ∅ do
    ⟨fx, x⟩ ← ExtractMin(Open)
    if Goal(x, Σ) then return x
    Insert(x, Closed)
    for y ∈ Child(x, Σ) do
      if y ∉ Closed then
        Insert(⟨f(y), y⟩, Open)
  return fail

Open is implemented as a priority queue.
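The pseudocode above can be sketched in Python using the standard `heapq` module as the priority queue. This is an illustrative sketch, not the slides' notation: `children(x)` yields the successors of x and `f(x)` scores a state (lower is better); both are supplied by the caller.

```python
import heapq

def best_first_search(start, goal_test, children, f):
    """Best-first search: always expand the open state with the smallest f.

    `children(x)` yields successors of x; `f(x)` scores a state (lower is
    better).  Open is a heap-based priority queue, Closed a set.
    """
    counter = 0                      # tie-breaker so the heap never compares states
    open_heap = [(f(start), counter, start)]
    closed = set()
    while open_heap:
        fx, _, x = heapq.heappop(open_heap)   # ExtractMin(Open)
        if goal_test(x):
            return x
        if x in closed:
            continue                          # duplicate heap entry
        closed.add(x)
        for y in children(x):
            if y not in closed:
                counter += 1
                heapq.heappush(open_heap, (f(y), counter, y))
    return None                      # fail
```

For example, searching the integers toward 7 with `f(x) = |7 - x|` returns 7.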

Page 4: Heuristic Search

Best First Search – Example: The Eight-puzzle

The task is to transform the initial puzzle

2 8 3
1 6 4
7   5

into the goal state

1 2 3
8   4
7 6 5

by shifting numbers up, down, left or right.

Page 5: Heuristic Search

Best-First (Heuristic) Search

[Figure: best-first search tree for the eight-puzzle. The start state (h = 4) is expanded; its children have h = 5, 3 and 5, and the search repeatedly follows the open node with the smallest h toward the goal. Misleading heuristic values lead to further unnecessary expansions.]

Page 6: Heuristic Search

Best First Search – A*

⟨S, P, I, G, W⟩ – state space

h*(x) – minimum path weight from x to a goal state
h(x) – heuristic estimate of h*(x)
g(x) – minimum path weight from I to x

f(x) = g(x) + h(x)

Page 7: Heuristic Search

Search strategies – A*

A*Search(state space Σ = ⟨S, P, I, G, W⟩, h)
  Open ← {⟨h(I), 0, I⟩}
  Closed ← ∅
  while Open ≠ ∅ do
    ⟨fx, gx, x⟩ ← ExtractMin(Open)   [minimum for fx]
    if Goal(x, Σ) then return x
    Insert(⟨fx, gx, x⟩, Closed)
    for y ∈ Child(x, Σ) do
      gy ← gx + W(x, y)
      fy ← gy + h(y)
      if there is no ⟨f, g, y⟩ ∈ Closed with f ≤ fy then
        Insert(⟨fy, gy, y⟩, Open)   [replace existing ⟨f, g, y⟩, if present]
  return fail
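A minimal Python sketch of the A* loop above, under the assumption that `children(x)` yields `(successor, edge_weight)` pairs; instead of an explicit Closed list of triples it keeps the best known g per state, which has the same effect of discarding dominated entries.

```python
import heapq

def a_star(start, goal_test, children, h):
    """A* search: expand by minimum f = g + h.

    `children(x)` yields (y, w) pairs with edge weight w; `h` is the
    heuristic.  Returns (goal_state, path_cost) or (None, inf).
    """
    counter = 0
    open_heap = [(h(start), 0, counter, start)]     # <f, g, x>
    best_g = {start: 0}                             # best g seen per state
    while open_heap:
        fx, gx, _, x = heapq.heappop(open_heap)     # minimum for f
        if gx > best_g.get(x, float("inf")):
            continue                                # stale (dominated) entry
        if goal_test(x):
            return x, gx
        for y, w in children(x):
            gy = gx + w                             # gy = gx + W(x, y)
            if gy < best_g.get(y, float("inf")):    # no better <f,g,y> known
                best_g[y] = gy
                counter += 1
                heapq.heappush(open_heap, (gy + h(y), gy, counter, y))
    return None, float("inf")
```

On a small weighted graph with an admissible h this returns the minimum-cost goal, e.g. cost 5 via I → B → G below rather than 6 via A.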

Page 8: Heuristic Search

Best-First Search (A*)

[Figure: A* search tree for the eight-puzzle. The start state has f = 0+4; its children have f = 1+5, 1+3 and 1+5; the search proceeds through nodes with f = 2+3, 3+2 and 4+1 to the goal at f = 5+0, while siblings such as 2+4, 3+3 and 3+4 are left unexpanded.]

Page 9: Heuristic Search

Admissible search

Definition

An algorithm is admissible if it is guaranteed to return an optimal solution (one with the minimal possible path weight from the start state to a goal state) whenever a solution exists.

Page 10: Heuristic Search

Admissibility

What must be true about h for A* to find the optimal path?

[Figure: an example graph with two intermediate states X and Y, h(X) = 100, h(Y) = 1, edge weights 2, 7, 3, 1, 1, and a goal with h = 0.]

Page 11: Heuristic Search

Admissibility

What must be true about h for A* to find the optimal path?

A* finds the optimal path if h is admissible; h is admissible when it never overestimates.

[Figure: the same example graph, with h(X) = 100 and h(Y) = 1.]

Page 12: Heuristic Search

Admissibility

What must be true about h for A* to find the optimal path?

A* finds the optimal path if h is admissible; h is admissible when it never overestimates.

In this example, h is not admissible:

g(X) + h(X) = 102
g(Y) + h(Y) = 74

The optimal path is not found!

Page 13: Heuristic Search

Admissibility of a Heuristic Function

Definition

A heuristic function h is said to be admissible if

0 ≤ h(n) ≤ h*(n) for all n ∈ S.

Page 14: Heuristic Search

Optimality of A* (proof)

Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.

f(G2) = g(G2)    since h(G2) = 0
f(G) = g(G)      since h(G) = 0
g(G2) > g(G)     since G2 is suboptimal
f(G2) > f(G)     from above

Page 15: Heuristic Search

Optimality of A* (proof)

Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.

f(G2) > f(G)                   from above
h(n) ≤ h*(n)                   since h is admissible
g(n) + h(n) ≤ g(n) + h*(n)
f(n) ≤ f(G)

Hence f(G2) > f(n), and A* will never select G2 for expansion.

Page 16: Heuristic Search

Consistent heuristics

A heuristic is consistent if, for every node n and every successor n' of n generated by any action a,

h(n) ≤ c(n, a, n') + h(n')

If h is consistent, we have

f(n') = g(n') + h(n') = g(n) + c(n, a, n') + h(n') ≥ g(n) + h(n) = f(n)

i.e., f(n) is non-decreasing along any path. Thus: if h(n) is consistent, A* using graph search is optimal, and the first goal node selected for expansion is an optimal solution.

Page 17: Heuristic Search

Consistent heuristics

[Figure]

Page 18: Heuristic Search

Extreme Cases

– If we set h = 0, A* becomes uniform-cost search (breadth-first search when all step costs are equal); h = 0 is an admissible heuristic (when all costs are non-negative).

– If g = 0 for all nodes, A* is similar to Steepest Ascent Hill Climbing.

– If both g and h are 0, it can behave as DFS or BFS, depending on the structure of Open.

Page 19: Heuristic Search

How to choose a heuristic?

Original problem P → relaxed problem P':
– P has a set of constraints; removing one or more constraints turns the complex problem P into a simpler problem P'.
– Use the cost of a best solution path from n in P' as h(n) for P.
– Admissibility: h ≤ h*, because the cost of a best solution in P ≥ the cost of a best solution in P'.

[Figure: the solution space of P contained in the solution space of P'.]

Page 20: Heuristic Search

How to choose a heuristic – 8-puzzle

Example: 8-puzzle

– Constraints: to move a tile from cell A to cell B:
  cond1: there is a tile on A
  cond2: cell B is empty
  cond3: A and B are adjacent (horizontally or vertically)

– Removing cond2 gives h2 (the sum of Manhattan distances of all misplaced tiles).

– Removing both cond2 and cond3 gives h1 (the number of misplaced tiles).

Here h2 ≥ h1.
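Both relaxed-problem heuristics are easy to compute in Python. A sketch, assuming states are length-9 tuples read row by row with 0 for the blank (function names are illustrative):

```python
def misplaced(state, goal):
    """h1: number of tiles (excluding the blank, 0) not in their goal cell."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal):
    """h2: sum of Manhattan distances of each tile from its goal cell."""
    pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}  # tile -> (row, col)
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            r, c = divmod(i, 3)
            gr, gc = pos[tile]
            total += abs(r - gr) + abs(c - gc)
    return total
```

For the start and goal states of the earlier eight-puzzle example, h1 = 4 (tiles 2, 8, 1 and 6 are misplaced) and h2 = 5, consistent with h2 ≥ h1.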

Page 21: Heuristic Search

How to choose a heuristic – 8-puzzle

[Figure: a start state with h1(start) = 7 and h2(start) = 18.]

Page 22: Heuristic Search

Performance comparison

        Search Cost (nodes)           Effective Branching Factor
 d      IDS     A*(h1)   A*(h2)       IDS     A*(h1)   A*(h2)
 2       10        6        6        2.45      1.79     1.79
 4      112       13       12        2.87      1.48     1.45
 6      680       20       18        2.73      1.34     1.30
 8     6384       39       25        2.80      1.33     1.24
10    47127       93       39        2.79      1.38     1.22
12  3644035      227       73        2.78      1.42     1.24
14        –      539      113          –       1.44     1.23
16        –     1301      211          –       1.45     1.25
18        –     3056      363          –       1.46     1.26
20        –     7276      676          –       1.47     1.27
22        –    18094     1219          –       1.48     1.28
24        –    39135     1641          –       1.48     1.26

Data are averaged over 100 instances of the 8-puzzle, for various solution lengths. Note: there are still better heuristics for the 8-puzzle.

Page 23: Heuristic Search

Comparing heuristic functions

Bad estimates of the remaining distance can cause extra work!

Given two algorithms A1 and A2 with admissible heuristics h1 and h2 < h*(n): if h1(n) < h2(n) for all non-goal nodes n, then A1 expands at least as many nodes as A2.

h2 is said to be more informed than h1, or: A2 dominates A1.

Page 24: Heuristic Search

Weighted A* (WA*)

fw(n) = (1 – w)·g(n) + w·h(n)

w = 0   – uniform-cost search (breadth-first for unit costs)
w = 1/2 – A*
w = 1   – (greedy) Best First search
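The weighted evaluation function is a one-liner; note that w = 1/2 gives exactly half of g + h, so it orders nodes the same way A* does. A sketch (the function name is illustrative):

```python
def weighted_f(g, h, w):
    """f_w(n) = (1 - w) * g(n) + w * h(n).

    w = 0 ignores the heuristic (uniform-cost ordering), w = 1/2 orders
    nodes like A*'s g + h (scaled by 1/2), w = 1 is greedy best-first.
    """
    return (1 - w) * g + w * h
```

For g = 3 and h = 5: f_0 = 3, f_{1/2} = 4 (half of g + h = 8), f_1 = 5.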

Page 25: Heuristic Search

IDA*: Iterative deepening A*

To reduce the memory requirements at the expense of some additional computation time, combine uninformed iterative deepening search with A*.

Use an f-cost limit instead of a depth limit.

Page 26: Heuristic Search

Iterative Deepening A*

In the first iteration, we determine an "f-cost limit"

f'(n0) = g'(n0) + h'(n0) = h'(n0), where n0 is the start node.

We expand nodes using the depth-first algorithm and backtrack whenever f'(n) for an expanded node n exceeds the cut-off value.

If this search does not succeed, determine the lowest f'-value among the nodes that were visited but not expanded.

Use this f'-value as the new limit value and do another depth-first search.

Repeat this procedure until a goal node is found.

Page 27: Heuristic Search

IDA* Algorithm – Top level

function IDA*(problem) returns a solution
  root := Make-Node(Initial-State[problem])
  f-limit := f-Cost(root)
  loop do
    solution, f-limit := DFS-Contour(root, f-limit)
    if solution is not null then return solution
    if f-limit = infinity then return failure
  end

Page 28: Heuristic Search

IDA* contour expansion

function DFS-Contour(node, f-limit) returns a solution sequence and a new f-limit
  if f-Cost[node] > f-limit then return (null, f-Cost[node])
  if Goal-Test[problem](State[node]) then return (node, f-limit)
  next-f := infinity
  for each node s in Successors(node) do
    solution, new-f := DFS-Contour(s, f-limit)
    if solution is not null then return (solution, f-limit)
    next-f := Min(next-f, new-f)
  end
  return (null, next-f)
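The two routines above combine naturally into one recursive Python function. A sketch under the same assumptions as the earlier A* example (`children(x)` yields `(successor, weight)` pairs; the cycle check on the current path is an addition not in the slides):

```python
import math

def ida_star(start, goal_test, children, h):
    """IDA*: depth-first contour search with an increasing f-cost limit."""

    def dfs_contour(node, g, f_limit, path):
        f = g + h(node)
        if f > f_limit:
            return None, f                      # prune; report this f upward
        if goal_test(node):
            return path, f_limit
        next_f = math.inf                       # lowest f among pruned nodes
        for y, w in children(node):
            if y in path:                       # avoid cycles on the current path
                continue
            solution, new_f = dfs_contour(y, g + w, f_limit, path + [y])
            if solution is not None:
                return solution, f_limit
            next_f = min(next_f, new_f)
        return None, next_f

    f_limit = h(start)                          # first limit: f(n0) = h(n0)
    while True:
        solution, f_limit = dfs_contour(start, 0, f_limit, [start])
        if solution is not None:
            return solution
        if f_limit == math.inf:
            return None                         # failure
```

On the small graph used earlier it returns the optimal path I → B → G.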

Page 29: Heuristic Search

IDA*: Iteration 1, Limit = 4

[Figure: the first IDA* contour on the eight-puzzle. The start state (f = 0+4) and the child with f = 1+3 are expanded; successors with f-values 1+5, 2+3, 2+4 and 2+3 exceed the limit, so the next limit becomes 5.]

Page 30: Heuristic Search

IDA*: Iteration 2, Limit = 5

[Figure: the second IDA* contour with f-limit 5. The search expands nodes with f-values 0+4, 1+3, 2+3, 3+2 and 4+1 and reaches the goal at f = 5+0; siblings such as 1+5, 3+3 and 3+4 are pruned.]

Page 31: Heuristic Search

Beam Search (with limit = 2)

[Figure: beam search on the eight-puzzle, keeping only the two best-scoring nodes at each level. From the start state (f = 0+4) the beam retains the children with f = 1+3 and 1+5, then proceeds through 2+3, 3+2 and 4+1 to the goal at f = 5+0.]

Page 32: Heuristic Search

Generate and Test

Algorithm:
1. Generate a (potential goal) state:
   – a particular point in the problem space, or
   – a path from a start state
2. Test if it is a goal state:
   – stop if positive
   – go to step 1 otherwise

Systematic or heuristic? It depends on "Generate".
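The generate-and-test loop is almost trivially short in Python. A sketch in which the caller's `generate` decides whether the search is systematic or heuristic (names are illustrative):

```python
def generate_and_test(generate, goal_test):
    """Generate-and-test: draw candidate states until one passes the test.

    `generate()` returns an iterable of candidates; its ordering determines
    whether the search is systematic (exhaustive) or heuristic (guided).
    """
    for candidate in generate():
        if goal_test(candidate):
            return candidate      # stop if positive
    return None                   # generator exhausted without success
```

For example, systematically generating 0, 1, 2, … and testing for 4 returns 4.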

Page 33: Heuristic Search

Local search algorithms

In many problems, the path to the goal is irrelevant; the goal state itself is the solution.

State space = set of "complete" configurations.

Find a configuration satisfying the constraints, e.g., n-queens.

In such cases, we can use local search algorithms: keep a single "current" state and try to improve it.

Page 34: Heuristic Search

Hill Climbing

Simple Hill Climbing
– expand the current node
– evaluate its children one by one (using the heuristic evaluation function)
– choose the FIRST node with a better value

Steepest Ascent Hill Climbing
– expand the current node
– evaluate all its children (using the heuristic evaluation function)
– choose the node with the BEST value
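The steepest-ascent variant can be sketched in a few lines of Python: evaluate all neighbors of the current state and move to the best one, stopping at a local maximum (function and parameter names are illustrative).

```python
def steepest_ascent(start, neighbors, value):
    """Steepest-ascent hill climbing.

    `neighbors(x)` yields all children of x; `value(x)` is the evaluation
    function (higher is better).  Stops when no child improves on the
    current state, i.e. at a local maximum.
    """
    current = start
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current                      # local maximum reached
        current = best
```

For example, maximizing value(x) = -(x - 5)^2 over the integers from x = 0 climbs 0 → 1 → … → 5 and stops at 5.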

Page 35: Heuristic Search

Hill-climbing

Steepest Ascent

[Figure]

Page 36: Heuristic Search

Hill Climbing

Problems:
– the local maxima problem
– the plateau problem
– ridges

[Figure: a state-space landscape showing a ridge, a plateau, and the goal.]

Page 37: Heuristic Search

Drawbacks

Ridge = a sequence of local maxima that is difficult for greedy algorithms to navigate.

Plateau = an area of the state space where the evaluation function is flat.

Page 38: Heuristic Search

Hill-climbing example

a) An 8-queens state with h = 17, showing the h-value for each possible successor.

b) A local minimum in the 8-queens state space (h = 1).

[Figures a and b]

Page 39: Heuristic Search

Hill-climbing variations

Stochastic hill-climbing
– Random selection among the uphill moves.
– The selection probability can vary with the steepness of the uphill move.

First-choice hill-climbing
– Stochastic hill climbing that generates successors randomly until a better one is found.

Random-restart hill-climbing
– A series of hill-climbing searches from randomly generated initial states.

Simulated annealing
– Escape local maxima by allowing some "bad" moves, but gradually decrease their frequency.

Page 40: Heuristic Search

Simulated Annealing

T = initial temperature
x = initial guess
v = Energy(x)
Repeat while T > final temperature:
  Repeat n times:
    x' ← Move(x)
    v' = Energy(x')
    If v' < v then accept the new x  [x ← x']
    Else accept the new x with probability exp(-(v' - v)/T)
  T = 0.95·T   /* annealing schedule */

At high temperature, most moves are accepted; at low temperature, only moves that improve the energy are accepted.

Page 41: Heuristic Search

Local beam search

Keep track of k "current" states instead of one:
– Initially: k random initial states
– Next: determine all successors of the k current states
– If any of the successors is a goal, finish
– Else select the k best from the successors and repeat.

Major difference from k random-restart searches:
– Information is shared among the k search threads.

Can suffer from lack of diversity.
– Stochastic variant: choose k successors at random, with probability proportional to the state's quality.
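The deterministic variant can be sketched in Python: all successors of all k current states compete for the k slots, which is what distinguishes it from k independent restarts (`max_iters` and the function names are illustrative assumptions).

```python
import heapq

def local_beam_search(k, random_state, neighbors, value, goal_test,
                      max_iters=100):
    """Local beam search: keep the k best states each round.

    `random_state()` draws an initial state, `neighbors(s)` yields
    successors, `value(s)` scores a state (higher is better).
    """
    states = [random_state() for _ in range(k)]
    for _ in range(max_iters):
        for s in states:
            if goal_test(s):
                return s                          # a successor is a goal
        successors = [y for s in states for y in neighbors(s)]
        if not successors:
            break
        # all successors of all k states compete for the k slots
        states = heapq.nlargest(k, successors, key=value)
    return max(states, key=value)                 # best state found so far
```

For example, with k = 2 and value(x) = -|10 - x| the beam climbs from 0 to the goal 10.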

Page 42: Heuristic Search

Genetic algorithms Variant of local beam search with sexual recombination.

A successor state is generated by combining two parent states

Start with k randomly generated states (population)

A state is represented as a string over a finite alphabet (often a string of 0s and 1s)

Evaluation function (fitness function). Higher values for better states.

Produce the next generation of states by selection, crossover, and mutation


Page 43: Heuristic Search

Genetic algorithms

Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28)

24/(24+23+20+11) = 31%
23/(24+23+20+11) = 29%
etc.

Page 44: Heuristic Search

Genetic algorithms


Page 45: Heuristic Search

Genetic algorithm

function GENETIC_ALGORITHM(population, FITNESS-FN) returns an individual
  inputs: population, a set of individuals
          FITNESS-FN, a function that measures the quality of an individual
  repeat
    new_population ← empty set
    loop for i from 1 to SIZE(population) do
      x ← RANDOM_SELECTION(population, FITNESS-FN)
      y ← RANDOM_SELECTION(population, FITNESS-FN)
      child ← REPRODUCE(x, y)
      if (small random probability) then child ← MUTATE(child)
      add child to new_population
    population ← new_population
  until some individual is fit enough, or enough time has elapsed
  return the best individual
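The loop above can be sketched in Python for bit-string individuals, with fitness-proportional selection, one-point crossover and bit-flip mutation. The string length, population size, mutation rate, generation cap and seeded generator are all illustrative assumptions, not from the slides.

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, p_mut=0.05,
                      generations=100, rng=random.Random(1)):
    """GA over bit-string individuals (tuples of 0s and 1s).

    RANDOM_SELECTION is fitness-proportional, REPRODUCE is one-point
    crossover, MUTATE flips one random bit.  Stops when an individual
    reaches the maximum fitness `length` or after `generations` rounds.
    """
    def random_selection(pop):
        weights = [fitness(ind) for ind in pop]   # higher fitness, higher odds
        return rng.choices(pop, weights=weights, k=1)[0]

    def reproduce(x, y):
        c = rng.randrange(1, length)              # one-point crossover
        return x[:c] + y[c:]

    def mutate(child):
        i = rng.randrange(length)                 # flip one random bit
        return child[:i] + (1 - child[i],) + child[i + 1:]

    population = [tuple(rng.randint(0, 1) for _ in range(length))
                  for _ in range(pop_size)]
    for _ in range(generations):
        new_population = []
        for _ in range(pop_size):
            child = reproduce(random_selection(population),
                              random_selection(population))
            if rng.random() < p_mut:              # small random probability
                child = mutate(child)
            new_population.append(child)
        population = new_population
        best = max(population, key=fitness)
        if fitness(best) == length:               # "fit enough": all ones
            return best
    return max(population, key=fitness)
```

With `fitness = sum` this is the classic OneMax toy problem: the GA evolves bit strings toward all ones.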