School of Computing, Dublin City University.


f(n) = g(n) + h(n)
where g(n) = actual distance from start state to state n
h(n) = heuristic estimated distance from n to goal
In this case, lowest f is best.
Luger Fig 4.15
(See goal state at bottom)
e.g. the middle choice has tiles 2, 8, 1 out of place, so h(n) = 3; g(n) = 1; f(n) = 4.
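For illustration, this heuristic can be sketched in Python (a minimal sketch: the flat-tuple board encoding is an assumption, with the start, goal and "middle choice" states read from Luger Fig 4.15):

```python
# Sketch of the "tiles out of place" heuristic for the 8-puzzle of
# Luger Fig 4.15: boards are flat tuples read row by row, 0 is the blank.

GOAL = (1, 2, 3,
        8, 0, 4,
        7, 6, 5)

def h_misplaced(state, goal=GOAL):
    """h(n): number of tiles (ignoring the blank) not in their goal position."""
    return sum(1 for s, t in zip(state, goal) if s != 0 and s != t)

# The middle choice, one move from the start: tiles 2, 8, 1 are out of place.
state = (2, 8, 3,
         1, 0, 4,
         7, 6, 5)
g = 1                    # one move from the start state
h = h_misplaced(state)   # 3 tiles out of place
f = g + h                # f(n) = g(n) + h(n)
```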
How this would direct search:
Exercise: Put in all the h values.
Progression of OPEN and CLOSED: See Luger Notes
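The OPEN/CLOSED bookkeeping can be sketched generically (a minimal illustration, not Luger's exact pseudocode: OPEN is kept as a priority queue ordered by f, each move is assumed to cost 1, and `successors` and `h` are supplied by the problem):

```python
# Best-first search sketch with OPEN and CLOSED lists.
# OPEN holds frontier states ordered by f(n) = g(n) + h(n);
# CLOSED holds states already expanded.

import heapq

def best_first(start, goal, successors, h):
    open_list = [(h(start), 0, start, [start])]   # (f, g, state, path)
    closed = set()
    while open_list:
        f, g, state, path = heapq.heappop(open_list)  # lowest f is best
        if state == goal:
            return path
        if state in closed:
            continue
        closed.add(state)
        for child in successors(state):
            if child not in closed:
                heapq.heappush(open_list,
                               (g + 1 + h(child), g + 1, child, path + [child]))
    return None   # no path exists
```

With an admissible h this behaves as A*; with h(n) = 0 for all n it degenerates towards breadth-first search.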
A search algorithm is said to be
admissible
if it provably finds the (joint-)shortest path to the goal,
where one exists.
Breadth-first search is admissible,
but impractical.
f(n) = g(n) + h(n)
where g(n) = distance from start state to state n along the current, known path (we haven't yet searched other paths, and may find there are multiple different ways to get to n)
h(n) = heuristic estimated distance from n to goal
Consider:
f^{*}(n) = g^{*}(n) + h^{*}(n)
where g^{*}(n) = minimum distance from start state to state n along any path
h^{*}(n) = actual, minimum distance from n to goal along any path
Best-first search with f^{*} is admissible.
But we do not know f^{*}
(it would be an "oracle").
We want f to be as close to f^{*} as possible.
Note that g(n) may be too high - the search may not yet have found the shortest path to n.
For all A algorithms: g(n) ≥ g^{*}(n) for all n.
An A^{*} algorithm is algorithm A where h(n) ≤ h^{*}(n) for all n.
i.e. Every state's actual distance to the goal is equal to or worse than predicted,
but never better.
h is always optimistic (an underestimate).
For example, h(n) = 0 for all n satisfies this trivially.
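One way to make the condition h(n) ≤ h^{*}(n) concrete (a sketch on a small graph invented for this illustration): compute h^{*} exactly by breadth-first search backwards from the goal, then check that a candidate heuristic never overestimates.

```python
# Sketch: verify h(n) <= h*(n) on a small invented graph with unit-cost
# edges, computing h*(n) exactly by breadth-first search from the goal.

from collections import deque

edges = {
    'A': ['B', 'C'],
    'B': ['A', 'D'],
    'C': ['A', 'D'],
    'D': ['B', 'C', 'goal'],
    'goal': ['D'],
}

def true_distances(goal):
    """h*(n): actual minimum distance from each state n to the goal."""
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        n = queue.popleft()
        for m in edges[n]:
            if m not in dist:
                dist[m] = dist[n] + 1
                queue.append(m)
    return dist

h_star = true_distances('goal')
h_zero = {n: 0 for n in edges}   # h(n) = 0 for all n
assert all(h_zero[n] <= h_star[n] for n in edges)   # the A* condition holds
```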
Theorem:
All A^{*}
algorithms are admissible.
Proof:
See here.
Roughly:
When A* terminates, it has by definition found a path whose actual cost
is no higher than the estimated cost of any path through any unsearched node.
But since those estimates are all optimistic (the real cost will be equal or higher),
A* can safely ignore those nodes.
The trick is to find a nonzero h(n) that will prune the search, but that is still always an underestimate.
The states searched by any A^{*} algorithm will be a subset of the states searched by breadth-first search.
This shows that it may not be hard to find a non-zero h(n)
that is still always optimistic.
Then you have an algorithm guaranteed to find the optimal path (eventually).
Obviously some h(n)'s are better (weed out more states) than others.
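To see the weeding concretely, one can count the states a best-first search expands under two heuristics on one 8-puzzle instance (a self-contained sketch under stated assumptions: the instance is the start state of Luger Fig 4.15, moves cost 1, and the zero heuristic is compared against misplaced tiles):

```python
# Count states expanded (size of CLOSED at termination) for h(n) = 0
# versus the misplaced-tiles heuristic, on one 8-puzzle instance.
# Boards are flat tuples read row by row; 0 is the blank.

import heapq

GOAL  = (1, 2, 3, 8, 0, 4, 7, 6, 5)
START = (2, 8, 3, 1, 6, 4, 7, 0, 5)

def successors(state):
    """All states reachable by sliding one tile into the blank."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            s = list(state)
            j = nr * 3 + nc
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def misplaced(state):
    return sum(1 for s, t in zip(state, GOAL) if s != 0 and s != t)

def states_expanded(h):
    open_list = [(h(START), 0, START)]   # (f, g, state)
    closed = set()
    while open_list:
        f, g, state = heapq.heappop(open_list)
        if state == GOAL:
            return len(closed)
        if state in closed:
            continue
        closed.add(state)
        for child in successors(state):
            if child not in closed:
                heapq.heappush(open_list, (g + 1 + h(child), g + 1, child))

# The better-informed heuristic expands no more states than h(n) = 0.
assert states_expanded(misplaced) <= states_expanded(lambda s: 0)
```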
Imagine we have two A^{*} heuristics.
h_{1}(n) ≤ h^{*}(n) for all n.
h_{2}(n) ≤ h^{*}(n) for all n.
If h_{1}(n) ≤ h_{2}(n) for all n, we say that heuristic h_{2} is more informed (i.e. better) than h_{1}
But still need h_{2}(n) ≤ h^{*}(n) for all n.
h_{1}(n) ≤ h_{2}(n) ≤ h^{*}(n) for all n: h_{2} is more informed (better) than h_{1}.
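A concrete instance of this ordering on the 8-puzzle: take h_{1} = tiles out of place and h_{2} = summed Manhattan distances. Each misplaced tile needs at least one move, and at least its Manhattan distance in moves, so h_{1}(n) ≤ h_{2}(n) ≤ h^{*}(n) for all n. A minimal sketch (flat-tuple board encoding assumed):

```python
# h1: misplaced tiles; h2: summed Manhattan distances (both admissible).
# Boards are flat tuples read row by row; 0 is the blank.

GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)

def h1(state):
    return sum(1 for s, t in zip(state, GOAL) if s != 0 and s != t)

def h2(state):
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            j = GOAL.index(tile)   # tile's goal position
            total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

state = (2, 8, 3, 1, 0, 4, 7, 6, 5)
assert h1(state) <= h2(state)   # h2 is at least as informed
```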
One can imagine a sequence of search spaces, each smaller than the previous one, converging on the optimal solution.
Luger Fig 4.28
"Control strategy cost" = Cost of calculating heuristic.
"Rule application cost" = Number of states searched.
LHS - Simple heuristic. Quick to calculate.
But doesn't prune much - huge space to search.
RHS - Good heuristic. Cuts the space.
But calculating the heuristic takes time.
Tradeoff - find the middle point where the total computational cost of problem solving is lowest.