State-Space Search - often have a heuristic "distance from goal".
Here - we have no idea how far away we might be from the optimal solution.
Neural Networks - we have a measure of error E.
Here we don't.
Absolute fitness gives us less info than an E measure.
More on this in Comparison of Neural Net and GA.
Here, we're trying to find
the n-dimensional Input that leads to the highest Output,
i.e. maximising the function.
We don't need to build up a representation of the function.
Requirements: Given any Input, tell me the Output.
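A minimal sketch of this black-box requirement: the GA below only ever *queries* the function, never inspects it. The function `black_box`, the population size, and the truncation-selection scheme are all illustrative choices, not part of the notes.

```python
import random

def black_box(x):
    # Stand-in for the unknown function: we may only query it, not inspect it.
    # (Its peak, fitness 0, is at the origin.)
    return -sum(xi * xi for xi in x)

def ga_maximise(f, n_dims, pop_size=30, generations=100,
                mutation_sd=0.1, seed=0):
    """Toy real-valued GA: truncation selection plus Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(n_dims)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # f is the ONLY information we have about the landscape.
        pop.sort(key=f, reverse=True)
        parents = pop[:pop_size // 2]
        children = [[xi + rng.gauss(0, mutation_sd)
                     for xi in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=f)

best = ga_maximise(black_box, n_dims=3)
```

Note the GA never needs a gradient or a "distance from goal" - just Input in, Output out.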
Could we combine these?
Do our GA search,
and, as we go along, build up a network to represent the map.
Problem is we won't be representing it in the same detail over the whole range.
We spend more time (get more detail) on the higher peaks.
We get random coverage
at the start, but it quickly becomes non-random.
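One hypothetical way to "build up the map" as we go: wrap the fitness function so every GA query is logged as a (input, output) training pair for a later network. The wrapper idea and the 1-D landscape are assumptions for illustration; the sketch also shows the coverage problem, since the archive ends up dense near the peak and sparse elsewhere.

```python
import random

def make_recording(f, archive):
    """Wrap a black-box function so every query is logged
    as an (input, output) pair for later model fitting."""
    def wrapped(x):
        y = f(x)
        archive.append((list(x), y))
        return y
    return wrapped

# One-dimensional landscape with its peak at x = 2.
def f(x):
    return -(x[0] - 2.0) ** 2

archive = []
g = make_recording(f, archive)

rng = random.Random(1)
pop = [[rng.uniform(-10, 10)] for _ in range(20)]
for _ in range(50):
    pop.sort(key=g, reverse=True)           # every evaluation is recorded
    parents = pop[:10]
    pop = parents + [[rng.choice(parents)[0] + rng.gauss(0, 0.2)]
                     for _ in range(10)]

# Coverage is non-uniform: most logged samples cluster near the peak.
near_peak = sum(1 for x, _ in archive if abs(x[0] - 2.0) < 1.0)
```

Fitting a network to `archive` is left out; the point is only that the training data it would see over-represents the high peaks.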
The GA can track a changing fitness landscape,
in that each generation has to re-prove its fitness.
Problem is that any
step size
and
temperature
that have decayed during the search must be increased again.
Also, instead of starting with a random scattering of individuals,
we may be starting with them all concentrated in a small, sub-optimal
area of the landscape.
How do you detect if the fitness landscape has changed?
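One common heuristic (an assumption here, not something the notes prescribe): keep a few fixed "sentinel" points, remember their fitnesses, and re-evaluate them each generation. If any sentinel's fitness has changed, the landscape itself must have moved.

```python
def landscape_changed(f, sentinels, remembered, tol=1e-9):
    """Re-evaluate fixed sentinel points; a change in any of their
    fitnesses signals that the landscape has moved."""
    return any(abs(f(s) - r) > tol for s, r in zip(sentinels, remembered))

# Toy landscape whose peak sits at position t and shifts over time.
def make_f(t):
    return lambda x: -(x - t) ** 2

sentinels = [0.0, 1.0, 2.0]
f0 = make_f(0.0)
remembered = [f0(s) for s in sentinels]

same = landscape_changed(f0, sentinels, remembered)            # False
moved = landscape_changed(make_f(0.5), sentinels, remembered)  # True
```

This costs a few extra evaluations per generation, but needs no model of the landscape.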
Typically, we need a random (or pseudo-random) number generator to randomly initialise our population, and also to make probabilistic decisions thereafter.
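A sketch of both uses of the generator, random initialisation and per-gene probabilistic decisions. The bit-string encoding and mutation rate are illustrative assumptions; the seeding detail is the practical point, since it makes GA runs reproducible for debugging.

```python
import random

def init_population(pop_size, n_genes, rng):
    """Random bit-string initialisation; all randomness
    flows through the single rng passed in."""
    return [[rng.randint(0, 1) for _ in range(n_genes)]
            for _ in range(pop_size)]

def maybe_mutate(genome, p_mut, rng):
    """Probabilistic decision per gene: flip with probability p_mut."""
    return [1 - g if rng.random() < p_mut else g for g in genome]

# Same seed => same initial population (reproducible runs).
pop_a = init_population(10, 8, random.Random(42))
pop_b = init_population(10, 8, random.Random(42))
```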
Useful for optimising n parameters, all of which influence each other.
Useful for scheduling that minimises some cost function. e.g. Timetabling of lectures/exams. Minimise no. of clashes (options). Minimise no. of exams on consecutive days, etc.
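The exam-timetabling cost function above can be sketched directly. The data layout (exam -> slot, student -> set of exams) and the weights are assumptions; the idea is just that hard constraints (clashes) are weighted far more heavily than soft ones (consecutive days).

```python
from itertools import combinations

def clashes(timetable, enrolments):
    """Count pairs of exams some student has in the same slot."""
    return sum(1
               for exams in enrolments.values()
               for a, b in combinations(exams, 2)
               if timetable[a] == timetable[b])

def consecutive_days(timetable, enrolments):
    """Count pairs of exams some student has on consecutive days
    (treating one slot per day for simplicity)."""
    return sum(1
               for exams in enrolments.values()
               for a, b in combinations(exams, 2)
               if abs(timetable[a] - timetable[b]) == 1)

def cost(timetable, enrolments, w_clash=100, w_consec=1):
    # Weight the hard constraint far above the soft one.
    return (w_clash * clashes(timetable, enrolments)
            + w_consec * consecutive_days(timetable, enrolments))

# Hypothetical data: slots are day numbers 0, 1, ...
enrolments = {"alice": {"maths", "physics"}, "bob": {"maths", "art"}}
t = {"maths": 0, "physics": 0, "art": 1}
```

A GA would then evolve the `timetable` mapping, using `-cost` as fitness.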