Dr. Mark Humphrys

School of Computing. Dublin City University.


Alternatives to Supervised Learning

Supervised Learning requires a teacher who actually knows the right answer for a large set of exemplars. This is not how most animal and human learning is done.


Unsupervised Learning

Machine is left to just make sense of the world as best it can. Learns to divide input into regions (classification, category formation). There must be some sense in which one set of category definitions is better than another. The basic learning algorithm is simply to learn to represent the world:

  1. Input x.
  2. Run through network to get output y.
  3. Compare y with x.
  4. Backpropagate the error.

Simply grouping together inputs is useful - e.g. Which countries' economies are similar to which? Should all post-communist economies adopt the same reforms?

Consider a 40-dimensional input, reconstructed as a 40-dimensional output, but encoded in the middle in just 7 hidden nodes (not 40 hidden nodes). What is the encoding? Which x's are grouped together?
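
As a concrete sketch of steps 1-4 above, here is a minimal 40-7-40 network in Python/NumPy. The toy data, learning rate, and sigmoid hidden layer are illustrative assumptions, not details from these notes:

  import numpy as np

  rng = np.random.default_rng(0)
  n_in, n_hidden = 40, 7                       # 40-dim input, 7-unit encoding
  W1 = rng.normal(0, 0.1, (n_hidden, n_in))    # encoder weights
  W2 = rng.normal(0, 0.1, (n_in, n_hidden))    # decoder weights
  lr = 0.01                                    # learning rate (biases omitted)

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  X = rng.random((500, n_in))                  # toy data: 500 random inputs

  for _ in range(100):
      for x in X:                              # 1. Input x.
          h = sigmoid(W1 @ x)                  # 2. Run through network...
          y = W2 @ h                           #    ...to get output y.
          err = y - x                          # 3. Compare y with x.
          dh = (W2.T @ err) * h * (1 - h)      # 4. Backpropagate the error:
          W2 -= lr * np.outer(err, h)          #    gradient steps on both
          W1 -= lr * np.outer(dh, x)           #    weight layers.

After training, the 7 hidden activations h for an input x are its learned encoding; inputs with similar encodings have been grouped together.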

Fewer hidden units to represent and reconstruct the input means a more efficient network.

Of course, the network could consist of a dedicated hidden unit for every possible input, with all weights 1 or 0. Reconstruction would be perfect. But this would just be a lookup-table. Whole idea of a network is a more efficient representation than a lookup-table, because we want some predictive ability.




Reinforcement Learning

Machine learns not from being explicitly told the right answer, but from sporadic rewards and punishments. e.g. I cannot tell machine what signals to send to its motors to walk, but I can tell when it has succeeded, and will say "Good dog!"
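
These notes defer the details to the 4th year course. Purely as an illustration, here is a sketch of tabular Q-learning (one standard reinforcement learning algorithm, not something named in these notes) on a toy corridor world. The agent is never told which action is right; it only receives a sporadic reward at the goal:

  import random

  random.seed(0)
  n_states = 5                      # corridor cells 0..4; goal at cell 4
  moves = [-1, +1]                  # actions: step left, step right
  Q = [[0.0, 0.0] for _ in range(n_states)]
  alpha, gamma, epsilon = 0.5, 0.9, 0.1

  for episode in range(200):
      s = 0
      while s != n_states - 1:
          # explore occasionally, otherwise take the best-looking action
          if random.random() < epsilon:
              a = random.randrange(2)
          else:
              a = 0 if Q[s][0] >= Q[s][1] else 1
          s2 = min(max(s + moves[a], 0), n_states - 1)
          r = 1.0 if s2 == n_states - 1 else 0.0    # "Good dog!" only at the goal
          # update the estimated value of (state, action) from the reward signal
          Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
          s = s2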

See 4th year AI.

Further reading, in the 4th year AI course: using a neural network as a generalisation for behaviour rather than for classification.




Recurrent (Feedback) Networks

All networks above are feed-forward. In a recurrent network, activity may feed back to earlier layers.

Activity flows around closed loops. The network may settle down into a stable state, or may have more complex dynamics.

Much of the brain seems better described by this model.

People who evolve neural nets often give the DNA the ability to encode a recurrent network, in case it is found useful.

Also used in pattern recognition for time-varying signals, e.g. speech recognition. The state of the network encodes information about recent past events, which may be used to modify the processing of the current input pattern. e.g. the total input at time t is the sensory input I(t) plus the output from the previous step, O(t-1) (which itself was the result of running the network on I(t-1) and O(t-2), and so on).

e.g. The inputs to the network are the sensory inputs plus the output of the previous time step. The latter has some influence over how the raw sensory inputs are interpreted. It is like the current "emotional mood" you are in when you see the input. It is like an internal state.
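
A minimal sketch of this recurrence (the sizes, random weights, and tanh squashing are illustrative assumptions): the input to each step is the current sensory input I(t) concatenated with the previous output O(t-1), which acts as the internal state:

  import numpy as np

  rng = np.random.default_rng(1)
  n_sense, n_out = 8, 4
  # one weight matrix over the combined input [I(t), O(t-1)]
  W = rng.normal(0, 0.5, (n_out, n_sense + n_out))

  def step(I_t, O_prev):
      x = np.concatenate([I_t, O_prev])   # total input: I(t) plus O(t-1)
      return np.tanh(W @ x)               # new output O(t)

  O = np.zeros(n_out)                     # at t=0 there is no past output yet
  for t in range(10):
      I = rng.random(n_sense)             # sensory input I(t)
      O = step(I, O)                      # O(t) feeds back into step t+1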




The brain

The brain has 100 billion neurons, each with up to 15,000 connections to other neurons. (Actually these figures include the entire nervous system, distributed over the body, which can be seen as an extension of the brain.)

The adult brain of H. sapiens is the most complex known object in the universe. Perhaps the most complex object that has ever existed. One brain is far more complex than the entire world telephone system / Internet (which has a smaller number of nodes, and much less connectivity).

If we consider each neuron as roughly the equivalent of a simple CPU with 100 KB of memory, then we have 100 billion CPUs with 10,000 terabytes of memory (10^11 CPUs x 10^5 bytes = 10^16 bytes), all working in parallel and massively interconnected with hundreds of trillions of connections.

It is not surprising that the brain is so complex and at the same time consciousness and intelligence are mysterious. What would be surprising would be if the brain were a simple object.



Parallel Hardware

Clearly a neural network maps perfectly to parallel hardware. It consists almost entirely of calculations that could be done in parallel, with a CPU at each node.

It is very wasteful to implement a neural network on serial hardware (though commonly done).
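
As a conceptual sketch only (Python threads will not actually speed up this arithmetic, and the sizes are illustrative): each node in a layer depends only on the previous layer's activations, so every node can in principle be computed by its own processor at the same time:

  from concurrent.futures import ThreadPoolExecutor
  import math, random

  random.seed(0)
  n_in, n_out = 40, 7
  W = [[random.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_out)]
  x = [random.random() for _ in range(n_in)]

  def node(j):
      # one "CPU" per node: weighted sum of its inputs, then a squash
      s = sum(W[j][i] * x[i] for i in range(n_in))
      return 1.0 / (1.0 + math.exp(-s))

  # all 7 nodes of the layer computed concurrently, one worker each
  with ThreadPoolExecutor(max_workers=n_out) as pool:
      y = list(pool.map(node, range(n_out)))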



