Brute force: Uninformed search (depth-first, breadth-first) doesn't scale well to large problems.
Heuristic Search:
Informed search.
Use problem-specific info.
Instead of systematically searching states in order in which they present themselves:
Restrict search by first choosing branch that is "more likely" to lead to goal.
How do we know "more likely" if we don't know where the goal is?
Heuristics give up the idea of ever doing an exhaustive search.
Symmetry reduction: recognise that many states are mirror images of each other,
and so don't have to be searched separately.
Symmetry reduction on 1st level reduces no. of paths to 3 . 8! = 120,960
Symmetry reduction on 2nd level reduces no. of paths to 12 . 7! = 60,480
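A quick arithmetic check of these counts (a sketch, assuming the usual 3x3 Xs-and-Os board, where a complete game path fixes an order for all 9 squares):

```python
from math import factorial

# Without symmetry reduction: 9 choices of first square, then 8, ... = 9!
total = factorial(9)          # 362,880 complete paths

# 1st-level symmetry: only 3 distinct openings (corner, edge, centre)
level1 = 3 * factorial(8)     # 3 . 8! = 120,960

# 2nd-level symmetry: 12 distinct two-move positions
level2 = 12 * factorial(7)    # 12 . 7! = 60,480

print(total, level1, level2)
```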
Symmetry reduction may require a lot of thought and work by the human.
But worth it if it reduces state space.
For Xs and Os, or any 2D board,
could write an algorithm to do the symmetry reduction (rotate and mirror).
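A minimal sketch of such an algorithm, assuming a board is a 9-character string read row by row: map each state to the lexicographically smallest of its 8 rotations/mirrors, so symmetric states collapse to one representative.

```python
def rotate(b):
    """Rotate the 3x3 board 90 degrees clockwise."""
    return b[6] + b[3] + b[0] + b[7] + b[4] + b[1] + b[8] + b[5] + b[2]

def mirror(b):
    """Mirror the board left-right."""
    return b[2] + b[1] + b[0] + b[5] + b[4] + b[3] + b[8] + b[7] + b[6]

def canonical(board):
    """Smallest of the 8 symmetric variants (4 rotations x mirror)."""
    variants = []
    b = board
    for _ in range(4):
        variants.append(b)
        variants.append(mirror(b))
        b = rotate(b)
    return min(variants)

# All four corner openings collapse to one canonical state,
# and the 9 possible openings collapse to 3 (corner, edge, centre):
corners = ["X........", "..X......", "......X..", "........X"]
assert len({canonical(c) for c in corners}) == 1
openings = {canonical("." * i + "X" + "." * (8 - i)) for i in range(9)}
assert len(openings) == 3
```

The same idea works for any 2D grid with rotational/mirror symmetry, not just game boards.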
Might not be a board game:
e.g. a 2D traffic-lights junction simulation
could be rotated.
On the next move, there are actually only 2 different replies the opponent can make:
Luger Fig 4.3
Depending which reply he makes, our heuristic tells us what to do next in each case.
Oddly, in both cases, it mightn't be the move you think.
The numbers circled show
"Number of winning opportunities (that contain at least one of my pieces)"
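A sketch of one reading of that heuristic (the exact definition is in Luger Fig 4.3; here a winning line counts if it holds at least one of my pieces and none of the opponent's):

```python
# The 8 winning lines of a 3x3 board (rows, columns, diagonals).
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winning_opportunities(board, me, opponent):
    """Lines containing at least one of my pieces and none of the opponent's."""
    count = 0
    for line in LINES:
        cells = [board[i] for i in line]
        if me in cells and opponent not in cells:
            count += 1
    return count

# X in the centre sits on 4 open lines; a corner on 3; an edge on 2.
assert winning_opportunities("....X....", "X", "O") == 4
assert winning_opportunities("X........", "X", "O") == 3
assert winning_opportunities(".X.......", "X", "O") == 2
```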
Two ways of using the heuristic:
1. Hill-climbing.
2. Best-first search.
Always go uphill, never downhill,
until stop,
probably at a local optimum
rather than the global optimum.
Unable to get temporarily into a worse state in order
to get into a better one after that.
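Hill-climbing as a sketch ("uphill" in fitness means a lower heuristic value here; `neighbours` and `h` are hypothetical):

```python
def hill_climb(start, neighbours, h):
    """Greedy search: always move to the best-looking neighbour.
    Stops at the first state with no better neighbour -- possibly
    only a local optimum, since it never accepts a worse state."""
    current = start
    while True:
        succs = neighbours(current)
        if not succs:
            return current
        best = min(succs, key=h)        # best-looking next step
        if h(best) >= h(current):       # no neighbour improves: stop
            return current
        current = best

# Toy example: integer states, goal at 7, h = distance from 7.
assert hill_climb(0, lambda x: [x - 1, x + 1], lambda x: abs(x - 7)) == 7
```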
The local optimum problem:
Numbers show the heuristic's estimated fitness of each state (lowest number is best).
When B is evaluated, its children E and F seem worse
/ further from goal
(our heuristic got it wrong, on either parent or child, or both).
E and F go into sorted queue.
C now has lowest number, so we jump to there.
We jump around the graph.
Next state to be evaluated may be at any level.
States actually expanded are in bold.
State O is a dead-end. Note the heuristic was wrong about O.
When state O is not the goal, we turn to the next in the queue, P.
We have backtracking when we go down an apparently promising but fruitless path.
Less likely to get stuck in a local optimum
than if we always took the best-looking next step (hill-climbing).
Eventually some other state probably evaluates as best
and we jump to that.
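Best-first search as a sketch: a sorted (priority) queue of unexpanded states, so we can jump back to C when B's children look worse. The graph and `h` values below are hypothetical, loosely mirroring the situation above (O looks best but is a dead-end, so we turn to P):

```python
import heapq

def best_first(start, neighbours, h, is_goal):
    """Always expand the open state with the lowest heuristic value,
    wherever it sits in the graph. Backtracking is implicit: when a
    promising path proves fruitless, another queued state is now best."""
    open_queue = [(h(start), start)]
    seen = {start}
    while open_queue:
        _, state = heapq.heappop(open_queue)
        if is_goal(state):
            return state
        for s in neighbours(state):
            if s not in seen:
                seen.add(s)
                heapq.heappush(open_queue, (h(s), s))
    return None

graph = {"A": ["B", "C"], "B": ["E", "F"], "C": ["H"], "H": ["O", "P"]}
hvals = {"A": 5, "B": 4, "C": 4, "E": 5, "F": 5, "H": 3, "O": 1, "P": 2}
# Expansion order: A, B, C (jump back), H, O (dead-end), then P.
found = best_first("A", lambda s: graph.get(s, []), hvals.get,
                   lambda s: s == "P")
assert found == "P"
```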
Note we could get stuck down a path if the heuristic is very inaccurate:
e.g. go down a line where the heuristic says we are 2 steps from the goal forever, no matter how deep we go.
Q. How might you solve this? (How might you compensate for an inaccurate heuristic?)
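One common answer (an assumption, not stated in these notes): add the depth already travelled to the heuristic, f(n) = g(n) + h(n), as in algorithm A/A*. A path that claims to be 2 steps from the goal forever then looks steadily worse as it deepens. A sketch, with a hypothetical infinite "bad" line where h is stuck at 2:

```python
import heapq

def best_first_with_depth(start, neighbours, h, is_goal):
    """Best-first on f(n) = g(n) + h(n): the depth term g penalises
    paths that keep claiming to be close, no matter how deep they go."""
    open_queue = [(h(start), 0, start)]
    seen = {start}
    while open_queue:
        _, depth, state = heapq.heappop(open_queue)
        if is_goal(state):
            return state
        for s in neighbours(state):
            if s not in seen:
                seen.add(s)
                heapq.heappush(open_queue, (depth + 1 + h(s), depth + 1, s))
    return None

def neighbours(s):
    if s == "S":
        return ["bad1", "G1"]       # a bad infinite line, and a real route
    if s == "G1":
        return ["GOAL"]
    if s.startswith("bad"):
        return ["bad" + str(int(s[3:]) + 1)]   # bad line never ends
    return []

def h(s):
    # The heuristic wrongly says every "bad" state is 2 steps from goal.
    return 0 if s == "GOAL" else (2 if s.startswith("bad") else 3)

# Plain best-first (h only) would pop bad1, bad2, ... forever (2 < 3).
# With f = g + h the bad line's f grows with depth, so we escape it.
assert best_first_with_depth("S", neighbours, h, lambda s: s == "GOAL") == "GOAL"
```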