The field of Artificial Intelligence (AI)
can be defined in many ways
(e.g. trying to model human intelligence,
or trying to solve human-level problems by any means,
even if quite unlike how humans do it).
But one definition is:
Trying to write programs that come up with some form of "novelty"
- that discover a new solution/strategy/algorithm
for solving a problem
that the human programmer did not imagine.
Obviously the programmer must do something
to get this discovery process started.
The interesting (and common) scenario is when the space is too big to search exhaustively.
That is, in any finite experiment (and all experiments are finite)
we can only ever search part of the space.
In this case, we usually
have to give up any guarantee of finding
the best solution.
Instead the standard strategy is:
Define a "heuristic"
rule to search the space.
This is a "rule of thumb"
that does a smart search of the space in the finite time available.
It gives a good answer most of the time, but there is no guarantee it won't fail in some situations.
Solutions are scored by an "evaluation function".
This may be well-defined (the solution solves the problem
and receives some score).
Or the evaluation function itself may be a heuristic.
(e.g. The "solution" generated is the next move in a chess game,
that is, a mere step on the way to the ultimate win-lose score.
How "good" is it to reach one given half-way position in chess?)
Types of heuristic
So we may have:
Heuristic search algorithm.
Heuristic evaluation function.
These are different types of thing!
Example of a heuristic search algorithm
Initialise with 20 random solutions.
Pick the best 10.
Make small changes in them, and generate a new pool of 20 candidate solutions.
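A minimal sketch of this heuristic in Python. The fitness function, the solution encoding (a number in [0, 1]), and the mutation step size are illustrative assumptions, not part of the notes:

```python
import random

def heuristic_search(fitness, generations=100, pool_size=20, keep=10, step=0.1):
    # Initialise with 20 random solutions (here: numbers in [0, 1]).
    pool = [random.random() for _ in range(pool_size)]
    for _ in range(generations):
        # Pick the best 10.
        best = sorted(pool, key=fitness, reverse=True)[:keep]
        # Make small changes in them to generate a new pool of 20 candidates.
        pool = [x + random.uniform(-step, step)
                for x in best
                for _ in range(pool_size // keep)]
    return max(pool, key=fitness)

random.seed(0)
# Made-up fitness function: the best solution is x = 0.8.
print(heuristic_search(lambda x: -(x - 0.8) ** 2))
```

Note that nothing in the algorithm stops all 20 candidates ending up as near-copies of one early good solution, which is exactly the risk discussed next.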
Is this a good heuristic?
Consider if an apparently good (better than its local neighbours),
but sub-optimal (compared with the global best)
solution is easy to find
at the start.
The search could easily converge on 10 similar variants of it.
An easy-to-find local maximum
and a harder-to-find global maximum.
X-axis - The solution.
Y-axis - The fitness of the solution.
Find solution with max fitness.
Consider starting with a random x and then trying to improve it.
It is easy to get (by chance) onto the slopes of the local maximum,
and then repeatedly make changes in x looking for an improvement,
until you get to the local maximum.
The local maximum is very "visible".
Whereas unless you start (by chance) near the global maximum
you may never find out it is there.
Many heuristics might find the local maximum
but not the global one.
In fact, anything other than exhaustive search
would be at risk of finding the local maximum
but not the global one.
The global maximum could be arbitrarily hard to find:
Chess has an extremely high "branching factor"
of about 35.
This means that at a typical position, there are about 35 legal moves on average
(exact range is from 0 to 218).
The opponent can then make about 35 possible moves,
then you can make about 35, and so on.
To search 1 step into future, need to look at about 35 moves.
To search 2 steps into future, need to look at about 35^2 moves.
To search n steps into future, need to look at about 35^n moves.
Quickly becomes impossible to do exhaustive search.
Move and Ply:
Average length of chess game
= 40 "moves"
(where move = each player moves).
i.e. 80 individual turns, or "plies"
(a "ply" is one turn taken by one player).
So exhaustive search would need to look ahead to about
35^80 possible situations.
Many of these will be the same situation duplicated many times,
but we don't know that until we calculate it.
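These numbers can be checked directly:

```python
# Growth of the chess search tree with a branching factor of about 35.
b = 35
for n in [1, 2, 3, 4]:
    print(f"{n} ply ahead: about {b ** n:,} positions")

# An exhaustive search of a typical 80-ply game is hopeless:
print(f"35^80 is a {len(str(b ** 80))}-digit number")
```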
McDermott discusses whether humans work by some unconscious search:
"When people express the opinion that human grandmasters do not examine 200,000,000 move sequences per second,
I ask them, 'How do you know?'
The answer is usually that human grandmasters are not aware
of searching this number of positions, or are
aware of searching many fewer. But almost everything that goes on in our minds we are unaware of.
I tend to agree that grandmasters are not searching the way Deep Blue does,
but whatever they are doing would, if implemented on a computer, seem equally 'blind.'"
When you choose your mate, you do not do an optimal, exhaustive search.
You use a heuristic.
Can't try out 1 billion partners.
Can't wait 80 years.
Have to make decision in limited time
after limited experience.
"Commit to first solution you find".
Obviously a poor search strategy
(you could meet the best first, but unlikely).
"Date for a few years but don't commit. Then switch mode.
Commit to next really good partner that comes along."
n = no. of samples before switch to "commit to next" mode.
f = tolerable fitness to stop with.
n and f could vary.
f probably evolves over time
- you need to date a few partners (the first n) in order to have an accurate estimate of f.
For normal humans
it might be something like
n=4 and f = 80 percent.
With "normal" values of n and f, this heuristic
avoids committing too early.
But it won't sample forever, passing up good solutions until it is too late.
Will this heuristic guarantee finding the best partner you
could possibly meet? No.
Won't meet the best possible partner on the planet, because n is small and finite.
Might even lose the best partner you do meet,
if you met them within the first n.
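A sketch of the "sample n, then commit" heuristic. The fitness values and the fallback rule (settle for the last one met if time runs out) are illustrative assumptions:

```python
import random

def choose_partner(partners, n, f):
    """Date the first n partners without committing, then commit to
    the next one whose fitness is at least f.
    partners: fitness values in the order you meet them."""
    for i, fitness in enumerate(partners):
        if i >= n and fitness >= f:
            return fitness
    return partners[-1]   # time ran out: settle for the last one met

random.seed(42)
partners = [random.random() for _ in range(30)]   # made-up fitness values in [0, 1]
print(choose_partner(partners, n=4, f=0.8))
```

Note the failure mode from the notes: if the best partner appears at index i < n, they are passed over and lost.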
UCLA student devises algorithm to generate dating profiles that get more responses.
Algorithm fails because it is focused on getting responses
not on being compatible.
He ends up going on 87 dates with incompatible people.
Some problems are so hard that, in practice, they can only be solved by heuristics. For example:
The knapsack problem
and the Bin packing problem
This is explained in
Complexity and computability,
but basically means no easy way of finding global optimal solution
short of exhaustive search.
Exhaustive search only possible if small number of items,
impossible if large number.
If large number of items, need to use a heuristic.
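For example, a common greedy heuristic for the knapsack problem is to take items in order of value-per-weight. It is fast and usually good, but (like any heuristic) can miss the global optimum, as this made-up instance shows:

```python
def greedy_knapsack(items, capacity):
    """Greedy heuristic: take items in order of value-per-weight ratio.
    Fast, usually good, but not guaranteed optimal."""
    total_value = 0
    for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True):
        if weight <= capacity:   # take the item if it still fits
            capacity -= weight
            total_value += value
    return total_value

# Made-up items as (value, weight) pairs.
items = [(60, 10), (100, 20), (120, 30)]
print(greedy_knapsack(items, capacity=50))   # 160, but the true optimum is 220
```

Greedy takes the (60, 10) item first because it has the best ratio, which blocks the optimal combination of the other two.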
Heuristic Search as a general-purpose, useful, computer algorithm
Even if one is not interested in AI
or classic AI problems,
the concept of defining a solution space
and using a smart heuristic to search it
is just a general-purpose useful addition to the toolkit of any computer software developer.
Consider where you have a system with 50 parameters to be modified,
e.g. software to control traffic lights.
Parameters define the sequences of lights changing,
the length of time they stay red for,
synchronisation with lights down the road,
changes for rush-hour and night-time, and so on.
You define what you think are sensible values for all 50 parameters,
after perhaps a lot of tweaking.
You are still unsure which particular combination of the 50 parameters
leads to the best traffic flow.
You could simply set up a search algorithm.
Pick a random combination of the 50 parameters. Test it.
Pick another one. Test it.
Run in simulation overnight.
Search space could be too large to search exhaustively.
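A sketch of such a random search. Here `simulate_traffic_flow` is a hypothetical stand-in for the overnight traffic simulation, with a dummy scoring rule invented for illustration:

```python
import random

def simulate_traffic_flow(params):
    # Hypothetical placeholder for the real overnight simulation.
    # Dummy score for illustration: flow is best when every parameter is 0.5.
    return -sum((p - 0.5) ** 2 for p in params)

def random_search(n_params=50, trials=1000):
    best_params, best_score = None, float("-inf")
    for _ in range(trials):
        params = [random.random() for _ in range(n_params)]   # pick a random combination
        score = simulate_traffic_flow(params)                 # test it
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best_params, best_score = random_search()
print(best_score)
```

With 50 parameters the space is far too large to enumerate, so the loop simply keeps the best combination seen in the time available.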
Heuristic search may be customised for one particular problem
(e.g. specialised chess heuristics).
Or it may be a "metaheuristic"
- a general strategy across multiple problems
(e.g. a general strategy to maximise a function of n parameters).
- The limitations of Metaheuristics.
"a general-purpose universal optimization strategy is theoretically impossible,
and the only way one strategy can outperform another is if it is
specialized to the specific problem under consideration"
Heuristic (problem-specific):
- Non-random search of the space
- Guided by domain-specific knowledge
- More domain-specific tweaking. More work for the human.

Metaheuristic:
- Non-random search of the space
- But no domain-specific knowledge

Random search:
- Random search of the space
- Works ok if space small