
Heuristic Search Introduction to Artificial Intelligence COS302 Michael L. Littman Fall 2001


Page 1

Heuristic Search

Introduction to Artificial Intelligence

COS302

Michael L. Littman

Fall 2001

Page 2

Administration

Thanks for emails to Littman@cs!

Test message tonight… let me know if you don’t get it.

Forgot to mention reading… (sorry).

Page 3

AI in the News

“Computers double their performance every 18 months. So the danger is real that they could develop intelligence and take over the world.” Eminent physicist Stephen Hawking, advocating genetic engineering as a way to stay ahead of artificial intelligence. [Newsweek, 9/17/01]

Page 4

Blind Search

Last time we discussed BFS and DFS and talked a bit about how to choose the right algorithm for a given search problem.

But, if we know about the problem we are solving, we can be even cleverer…

Page 5

Revised Template

• fringe = {(s0, f(s0))}; /* initial cost */
• markvisited(s0);
• While (1) {
    If empty(fringe), return failure;
    (s, c) = removemincost(fringe);
    If G(s) return s;
    Foreach s' in N(s)
      if s' in fringe, reduce cost if f(s') smaller;
      else if unvisited(s') fringe U= {(s', f(s'))};
      markvisited(s');
  }
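
A minimal runnable rendering of this template in Python (a sketch, not from the original slides): the fringe is a priority queue ordered by f, neighbors plays the role of N(s), is_goal plays the role of G(s), and all names are illustrative.

    import heapq
    from itertools import count

    def best_first_search(s0, f, is_goal, neighbors):
        """Generic best-first search: always expand the fringe state with least f(s)."""
        tie = count()                         # tie-breaker so the heap never compares states
        fringe = [(f(s0), next(tie), s0)]     # priority queue of (cost, tie, state)
        visited = {s0}
        while fringe:
            cost, _, s = heapq.heappop(fringe)
            if is_goal(s):
                return s
            for s2 in neighbors(s):
                if s2 not in visited:         # f depends only on the state here, so a
                    visited.add(s2)           # revisit never has a smaller cost
                    heapq.heappush(fringe, (f(s2), next(tie), s2))
        return None                           # failure: fringe exhausted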

Page 6

Cost as True Distance

[Figure: a small grid of states with goal G, each labeled with its true distance to the goal (0–5), shown step by step as the search expands.]

Page 7

Some Notation

• t(s): minimum (true) distance to goal
• f(s): estimated cost during search
• g(s): steps from start state
• h(s): estimated distance to goal (heuristic)

Page 8

Compare to Optimal

Recall b is the branching factor, d is the depth of the goal (d = t(s0)).

Using true distances as costs in the search algorithm (f(s) = t(s)), how long is the path discovered?

How many states get visited during search?

Page 9

Greedy

True distance would be ideal. Hard to achieve.

What if we use some function h(s) that approximates t(s)?

f(s) = h(s): expand closest node first.
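
In terms of the Python sketch after page 5's template, greedy search simply passes the heuristic in as the evaluation function (assuming h, is_goal, and neighbors have been defined for the problem at hand):

    # Greedy best-first search: evaluate states by the heuristic alone.
    result = best_first_search(s0, f=h, is_goal=is_goal, neighbors=neighbors)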

Page 10

Approximate Distances

We saw already that if the approximation is perfect, the search is perfect.

What if costs are within ±1 of the true distance?

|h(s) − t(s)| ≤ 1

Page 11

Problem with Greedy

Four criteria?

[Figure: a chain of states labeled with h values (2, 1, 2, 1, … with the goal at 0), illustrating how greedy search can be led astray.]

Page 12

Algorithm A

Discourage wandering off:
f(s) = g(s) + h(s)

In words:
Estimate of total path cost: cost so far plus estimated completion cost.

Search along the most promising path (not node).
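
A Python sketch of Algorithm A (illustrative names, not from the slides): because f now depends on g, the search tracks the best cost-so-far for each state instead of evaluating states in isolation. Unit step costs are assumed, as in the lecture's examples.

    import heapq
    from itertools import count

    def algorithm_a(s0, h, is_goal, neighbors):
        """Best-first search with f(s) = g(s) + h(s); returns (goal state, g(goal))."""
        tie = count()
        g = {s0: 0}                            # best known cost-so-far per state
        fringe = [(h(s0), next(tie), s0)]      # f(s0) = 0 + h(s0)
        while fringe:
            f, _, s = heapq.heappop(fringe)
            if is_goal(s):
                return s, g[s]
            for s2 in neighbors(s):
                g2 = g[s] + 1                  # unit step cost
                if s2 not in g or g2 < g[s2]:  # "reduce cost if f(s') smaller"
                    g[s2] = g2
                    heapq.heappush(fringe, (g2 + h(s2), next(tie), s2))
        return None, None                      # failure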

Page 13

A Behaves Better

Only wanders a little.

[Figure: the same chain of states, now labeled with f = g + h values, showing that A wanders only a little before returning to the promising path.]

Page 14

A Theorem

If h(s) ≤ t(s) + k (overestimate bound), then the path found by A is no more than k longer than optimal.

Page 15

A Proof

f(goal) is the length of the found path.

Every node s on the optimal path has

f(s) = g(s) + h(s) ≤ g(s) + t(s) + k = (length of optimal path) + k.

Until the goal is removed from the fringe, some node of the optimal path is on the fringe, and A always removes the node with the smallest f; hence f(goal) ≤ (length of optimal path) + k.

Page 16

How to Ensure Optimality?

Let k = 0! That is, heuristic h(s) must always underestimate the distance to the goal (be optimistic).

Such an h(s) is called an “admissible heuristic”.

A with an admissible heuristic is known as A*. (Trumpets sound!)
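
One way to sanity-check a candidate heuristic on small problems (a sketch, not from the slides, under the assumption that moves are reversible, as in the maze and 8-puzzle): compute the true distances t(s) by breadth-first search backward from the goal and confirm h never overestimates.

    from collections import deque

    def check_admissible(h, goal, neighbors, states):
        """Check h(s) <= t(s) on the given states; t is found by BFS from the goal
        (valid when every move is reversible, so forward and backward distances agree)."""
        t = {goal: 0}
        frontier = deque([goal])
        while frontier:
            s = frontier.popleft()
            for s2 in neighbors(s):
                if s2 not in t:
                    t[s2] = t[s] + 1
                    frontier.append(s2)
        return all(h(s) <= t[s] for s in states if s in t)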

Page 17

A* Example

[Figure: a grid of states with goal G, labeled with f = g + h values (4–6) as A* expands them toward the goal.]

Page 18

Time Complexity

Optimally efficient for a given heuristic function: no other complete search algorithm would expand fewer nodes.

Even a perfect evaluation function could be O(b^d), though. When?

Still, more accurate is better!

Page 19

Simple Maze

[Figure: a simple maze with start S and goal G; cells are labeled with their f values (4–7) as the search expands.]

Page 20

Relaxation in Maze

Move from (x,y) to (x',y') is illegal
• If |x-x'| > 1
• Or |y-y'| > 1
• Or (x',y') contains a wall

Otherwise, it’s legal.
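
One natural relaxation drops the wall condition. With the move rule as stated above (each step changes x and y by at most 1), the relaxed, wall-free distance has a closed form, the Chebyshev distance, which never overestimates the true maze distance and so is admissible. A sketch (names illustrative, not from the slides):

    def maze_h(state, goal):
        """Admissible maze heuristic from the relaxation that ignores walls.
        With steps that change x and y by at most 1, the wall-free distance is
        the Chebyshev distance, a lower bound on the true maze distance."""
        (x, y), (gx, gy) = state, goal
        return max(abs(x - gx), abs(y - gy))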

Page 21

Relaxations Admissible

Why does this work?

Any legal path in the full problem is still legal in the relaxation. Therefore, the optimal solution to the relaxation must be no longer than the optimal solution to the full problem.

Page 22

Relaxation in 8-puzzle

Tile move from (x,y) to (x',y') is illegal
• If |x-x'| > 1 or |y-y'| > 1
• Or (|x-x'| ≠ 0 and |y-y'| ≠ 0)
• Or (x',y') contains a tile

Otherwise, it’s legal.

Page 23

Two 8-puzzle Heuristics

• h1(s): total tiles out of place
• h2(s): total Manhattan distance

Note: h1(s) ≤ h2(s), so the latter leads to more efficient search.

Both are easy to compute and provide useful guidance.
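
A sketch of both heuristics in Python (not from the slides; it assumes a state is represented as a dict mapping each tile to its (row, column) position, with the blank omitted):

    def h1(state, goal):
        """Tiles out of place: count tiles whose position differs from the goal's."""
        return sum(1 for tile, pos in state.items() if pos != goal[tile])

    def h2(state, goal):
        """Total Manhattan distance of every tile from its goal position."""
        total = 0
        for tile, (r, c) in state.items():
            gr, gc = goal[tile]
            total += abs(r - gr) + abs(c - gc)
        return total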

Page 24

Knapsack Example

Optimize value, budget: $10B.

Market   Cost   Value
NY       6      8
LA       5      8
Dallas   3      5
Atl      3      5
Bos      3      4

Page 25

Knapsack Heuristic

State: which markets to include or exclude (some undecided).

Heuristic: consider including markets in order of value/cost. If the cost goes over budget, compute the value of a “fractional” purchase.

Fractional relaxation.
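
A sketch of the fractional bound in Python (illustrative, not from the slides): given the still-undecided markets and the remaining budget, take markets in decreasing value/cost order and buy a fraction of the first one that no longer fits. Because it relaxes the all-or-nothing constraint, the bound can only overestimate the best achievable value, which is the optimistic direction for this maximization problem.

    def fractional_bound(undecided, budget):
        """Upper bound on the value obtainable from `undecided` (cost, value) pairs
        within `budget`, via the fractional relaxation (assumes positive costs)."""
        bound = 0.0
        for cost, value in sorted(undecided, key=lambda cv: cv[1] / cv[0], reverse=True):
            if cost <= budget:
                bound += value
                budget -= cost
            else:
                bound += value * (budget / cost)   # fractional purchase of this market
                break
        return bound

    # With the table above and the full $10B budget:
    # fractional_bound([(6, 8), (5, 8), (3, 5), (3, 5), (3, 4)], 10)
    # takes Dallas, Atl, and 4/5 of LA, giving roughly 5 + 5 + 6.4 = 16.4.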

Page 26

Memory Bounded

Just as iterative deepening gives a more memory-efficient version of BFS, we can define IDA* as a more memory-efficient version of A*.

Just use DFS with a cutoff on f values. Repeat with a larger cutoff until a solution is found.
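
A compact IDA* sketch in Python (illustrative, not from the slides), assuming unit step costs: each pass is a depth-first search pruned where f = g + h exceeds the cutoff, and the cutoff is raised to the smallest f value that exceeded it on the previous pass.

    def ida_star(s0, h, is_goal, neighbors):
        """Iterative-deepening A*: repeated depth-first searches bounded by f = g + h."""

        def dfs(path, g, cutoff):
            s = path[-1]
            f = g + h(s)
            if f > cutoff:
                return None, f                    # report the f that broke the cutoff
            if is_goal(s):
                return path, f
            smallest_excess = float('inf')
            for s2 in neighbors(s):
                if s2 in path:                    # avoid cycles along the current path
                    continue
                found, f2 = dfs(path + [s2], g + 1, cutoff)
                if found is not None:
                    return found, f2
                smallest_excess = min(smallest_excess, f2)
            return None, smallest_excess

        cutoff = h(s0)
        while True:
            found, cutoff = dfs([s0], 0, cutoff)
            if found is not None:
                return found                      # list of states from s0 to the goal
            if cutoff == float('inf'):
                return None                       # search space exhausted: no solution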

Page 27

What to Learn

The A* algorithm: its definition and behavior (finds optimal).

How to create admissible heuristics via relaxation.

Page 28

Homework 2 (partial)

1. Consider the heuristic for Rush Hour of counting the cars blocking the ice cream truck and adding one. (a) Show this is a relaxation by giving conditions for an illegal move and showing what was eliminated. (b) For the board on the next page, show an optimal sequence of boards en route to the goal. Label each board with the f value from the heuristic.

2. Describe an improved heuristic for Rush Hour. (a) Explain why it is admissible. (b) Is it a relaxation? (c) Label the boards from 1b with the f values from your heuristic.

Page 29

Rush Hour Example

[Figure: the Rush Hour board referred to in Homework 2.]