Dr. Mark Humphrys

School of Computing. Dublin City University.


Types of learning

There are many forms of Machine Learning. Many of the concepts we have learnt so far transfer, in some form, to these other types of learning.
  


Alternatives to Supervised Learning

Supervised Learning requires a teacher, who actually knows what the right answer is for a large set of exemplars.
This is not how most animal and human learning is done.




Unsupervised Learning

With Unsupervised Learning, the program is focused not on exemplars but rather on dividing up the input space into regions (classification, category formation).
There must be some sense of one set of category definitions being better than another.
The basic learning algorithm is simply to learn to represent the world:

  1. Input x.
  2. Run through network to get output y.
  3. Compare y with x.
  4. Backpropagate the error.
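As a concrete sketch of these four steps (my own example; the page does not name a framework, so PyTorch is assumed here), the loop below trains a small network to reproduce its own input:

  import torch
  import torch.nn as nn

  # Sizes chosen arbitrarily for illustration.
  n_in, n_hidden = 10, 3

  # A small autoencoder: the network's only job is to reproduce its input.
  net = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid(), nn.Linear(n_hidden, n_in))
  opt = torch.optim.SGD(net.parameters(), lr=0.1)
  loss_fn = nn.MSELoss()

  data = torch.rand(1000, n_in)     # placeholder inputs for illustration
  for x in data:                    # 1. Input x.
      y = net(x)                    # 2. Run through network to get output y.
      loss = loss_fn(y, x)          # 3. Compare y with x.
      opt.zero_grad()
      loss.backward()               # 4. Backpropagate the error.
      opt.step()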
Simply grouping together inputs is useful - e.g. Which countries' economies are similar to which? Should all post-communist economies adopt the same reforms?

Consider a 40-dimensional input that must be reconstructed as a 40-dimensional output, but encoded in the middle by just 7 hidden nodes (not 40 hidden nodes). What is the encoding? Which x's are grouped together? Fewer hidden units to represent and reconstruct the input means a more efficient network.
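Continuing the sketch above (still assuming PyTorch), the 40-7-40 shape can be written so that the 7-unit code is directly accessible; inputs whose codes lie close together are the ones the network has grouped:

  import torch
  import torch.nn as nn

  # Split the 40 -> 7 -> 40 autoencoder so the 7-unit code is easy to read off.
  encoder = nn.Sequential(nn.Linear(40, 7), nn.Sigmoid())
  decoder = nn.Linear(7, 40)
  net = nn.Sequential(encoder, decoder)     # train exactly as in the loop above

  # After training, the 7-dimensional code is the network's representation of x.
  with torch.no_grad():
      x = torch.rand(100, 40)               # placeholder inputs
      codes = encoder(x)                    # shape (100, 7)
      # Inputs whose codes are close together have been grouped by the network;
      # pairwise distances between the codes make that grouping explicit.
      dists = torch.cdist(codes, codes)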

Imagine if the network consisted of a dedicated hidden unit for every possible input, with all weights 1 or 0. Reconstruction would be perfect.
But this would just be a lookup-table. The whole idea of a network is to be a more efficient representation than a lookup-table, because we want some predictive ability for inputs we have not seen.




Reinforcement Learning

With Reinforcement Learning, the program learns not from being explicitly told the right answer, but from sporadic rewards and punishments.
e.g. I cannot tell the machine what signals to send to its motors to walk, but I can tell when it has succeeded, and can say "Good dog!"

Typically, the rewards are numbers. The program uses various trial-and-error algorithms to maximise these numeric rewards over time.
e.g. I do not program the robot soccer player, but I give it 10 points every time it scores a goal, and minus 10 points when the opposition scores. It uses these points to "grade" every action in every state.
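One standard family of trial-and-error algorithms for this grading is temporal-difference learning. The sketch below is a minimal tabular Q-learning update (my example; the page does not commit to any particular algorithm), where Q[(state, action)] is the running "grade" of taking that action in that state:

  import random

  Q = {}                              # maps (state, action) -> estimated value
  alpha, gamma, epsilon = 0.1, 0.9, 0.1

  def choose_action(state, actions):
      # Trial and error: usually pick the best-graded action, sometimes explore.
      if random.random() < epsilon:
          return random.choice(actions)
      return max(actions, key=lambda a: Q.get((state, a), 0.0))

  def update(state, action, reward, next_state, next_actions):
      # Move the grade of (state, action) towards the reward just received
      # plus the best grade available from the next state.
      best_next = max((Q.get((next_state, a), 0.0) for a in next_actions), default=0.0)
      old = Q.get((state, action), 0.0)
      Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

The soccer player above would call update() with reward = 10 after scoring and reward = -10 after conceding, and choose_action() to pick its next move.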




Convolutional Neural Networks




Recurrent (Feedback) Networks

We have up to now considered feed-forward networks.
In a recurrent network, activity may feed back to earlier layers.

Activity flows around closed loops. The network may settle down into a stable state, or may have more complex dynamics. Much of the brain seems better described by this model.
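As an illustration of settling into a stable state (my choice of example; the page does not name a specific model), the sketch below runs a Hopfield-style network, a classic recurrent network with symmetric weights, until its activity stops changing:

  import numpy as np

  rng = np.random.default_rng(0)
  n = 8
  W = rng.standard_normal((n, n))
  W = (W + W.T) / 2                   # symmetric weights
  np.fill_diagonal(W, 0)              # no self-connections

  state = rng.choice([-1, 1], size=n) # arbitrary starting activity
  changed = True
  while changed:                      # activity flows around the loops...
      changed = False
      for i in rng.permutation(n):    # ...updating one unit at a time
          new_val = 1 if W[i] @ state >= 0 else -1
          if new_val != state[i]:
              state[i] = new_val
              changed = True
  # 'state' is now stable: every unit agrees with the input it receives.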

  

Continuous streams of input

Some form of recurrent network model seems better suited for problems where we do not have discrete inputs (like images) but rather have a continuous stream of input.

Example: Speech recognition, where:

  1. It is not clear where words begin and end in the audio.
  2. Words are easier to understand in the context of the words that come before and after.
With a recurrent network, the state of the network encodes information about recent past events, which may be used to modify the processing of the current input pattern.
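A minimal sketch of that idea (my example, using numpy with arbitrary untrained weights): the hidden state h is fed back in at every step, so the output for the current frame depends on the frames that came before it.

  import numpy as np

  rng = np.random.default_rng(0)
  n_in, n_hidden, n_out = 13, 20, 5          # e.g. audio features in, word scores out
  W_in  = 0.1 * rng.standard_normal((n_hidden, n_in))
  W_rec = 0.1 * rng.standard_normal((n_hidden, n_hidden))
  W_out = 0.1 * rng.standard_normal((n_out, n_hidden))

  h = np.zeros(n_hidden)                     # the network's memory of recent input
  stream = rng.standard_normal((1000, n_in)) # a continuous stream of input frames
  outputs = []

  for x in stream:
      h = np.tanh(W_in @ x + W_rec @ h)      # current input modified by recent context
      outputs.append(W_out @ h)              # output depends on x AND on earlier frames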


