Re: Supervised Vs. Unsupervised Learning

From: HARNAD Stevan (
Date: Thu May 30 1996 - 21:24:07 BST

> From: "Petrie Susie" <>
> Date: Wed, 22 May 1996 09:26:48 GMT
> Neural nets like human neurons can learn to recognise any pattern.

Not yet known whether they can recognise ANY pattern; we humans
certainly can't...

> Inputs put into a device initiate particular patterns which in turn
> lead to an output. The net is guided to acknowledge a pattern by
> feedback (behaviourism).

Not sure what "acknowledge" means. Reinforcement learning is one form of
learning from the feedback provided by the consequences of responses.

> Every time an output is correct, its
> original pattern is back-propagated past each connection and
> strengthened.

The connections that led from the input to that correct output are
strengthened.

> Once one unit is activated it is likely to be
> activated over and over (Hebb's rule).

Actually, backprop nets don't use the Hebb rule but the generalised
delta rule (see the chapter from Best). There are Hebbian nets too,
but backprop nets are not among them.
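To make the difference concrete, here is a minimal sketch (mine, not from
the Best chapter; all names, learning rates and targets are illustrative)
contrasting the two update rules for a single linear unit:

```python
import numpy as np

def hebb_update(w, x, y, lr=0.1):
    # Hebb rule: strengthen a connection whenever input and output
    # are active together -- no error signal is involved.
    return w + lr * y * x

def delta_update(w, x, target, lr=0.1):
    # Delta rule (single-layer form of the generalised delta rule):
    # change each weight in proportion to the error (target - output).
    y = w @ x
    return w + lr * (target - y) * x

# With the delta rule, repeated corrections drive the output toward
# the target; the Hebb rule has no such error term to stop it.
w = np.zeros(2)
x = np.array([1.0, 0.0])
for _ in range(50):
    w = delta_update(w, x, target=1.0)
```

After these 50 corrections the unit's output for x is close to the
target 1.0, which is the "supervised" part: the error signal does the
guiding.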

> Whereas if a wrong output is
> the case, those "input to output" connections are weakened.
> Eventually, this trial and error method succeeds to produce the right
> output more and more frequently. Learning has been supervised to
> improve results each try. Whereas, unsupervised learning receives no
> feedback on an outcome. The consequences of it have no significance.

There are no external consequences from classifying "incorrectly" in an
unsupervised net; the net is guided by the structure of the input;
it enhances differences as well as similarities, bringing out contrasts
and boundaries. Unsupervised learning includes "competitive" learning,
where units compete to be active, and when one gets the lead, this lead
is enhanced; "winner take all" is one of the learning principles used in
unsupervised nets.
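A minimal winner-take-all sketch (again illustrative, not from any of the
readings; the clusters, learning rate and the trick of initialising each
unit's weights at a data point, a common way to avoid "dead" units, are
my assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def competitive_step(weights, x, lr=0.2):
    # The winner is the unit whose weight vector is nearest the input;
    # only the winner learns, moving toward the input and so enhancing
    # its lead on similar inputs. No "correct answer" is ever supplied.
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    weights[winner] += lr * (x - weights[winner])
    return winner

# Two clusters of inputs; two competing units discover them from the
# structure of the input alone.
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                  rng.normal(1.0, 0.1, (50, 2))])
weights = data[[0, 50]].copy()   # start each unit at a sample point
for _ in range(20):
    for x in rng.permutation(data):
        competitive_step(weights, x)
```

After training, one unit's weight vector sits near each cluster centre:
the contrasts and boundaries in the input have been brought out without
any external feedback.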

> Learning has to rely on the existing physical structure of the
> pattern to categorise features to certain outputs.

Not to categorise features; to categorise inputs.

> "Nettalk" (Rosenberg and Seynowski, 1987) is an artificial example of
> supervised learning where the pronounciation of letters is the
> output.

And "written" letters are the input.

> Every time the right letter is produced, feedback strengthens
> that connection or weakens it for a wrong letter. This supervised
> learning technique has an 80% success rate for pronouncing correct
> words. Feedback is also available in real life; getting sunstroke
> from too much sun teaches most people to moderate exposure for
> example.

Yes, but you should give a more cognitive example, where patterns or
rules are learned.

> Supervised learning of any pattern succeeds due to the
> guidance of feedback in nets with many layers. However the exclusive
> OR pattern (same output from different inputs) is unlearnable in
> basic two layer nets.

If you want to discuss XOR you need to describe it more fully than that.
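For the record, XOR maps (0,0) and (1,1) to 0, and (0,1) and (1,0) to 1;
no single line separates the 1s from the 0s, which is why a net with only
an input and an output layer cannot learn it, while one hidden layer
suffices. A sketch of both facts (the weights and thresholds for the
hidden-layer solution are hand-picked by me, not learned):

```python
import numpy as np

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
t = np.array([0., 1., 1., 0.])            # XOR targets

# Best two-layer fit (input -> output, no hidden units): least squares
# over weights and bias. For XOR it collapses to a constant 0.5 --
# the linear net cannot do better than guessing.
A = np.hstack([X, np.ones((4, 1))])       # inputs plus bias column
coef, *_ = np.linalg.lstsq(A, t, rcond=None)
linear_out = A @ coef                     # 0.5 for every input

# With one hidden layer the pattern is easy: threshold units computing
# OR and AND, combined as "OR but not AND".
def step(z):
    return (z > 0).astype(float)

h_or = step(X @ np.array([1., 1.]) - 0.5)    # x1 OR x2
h_and = step(X @ np.array([1., 1.]) - 1.5)   # x1 AND x2
xor_out = step(h_or - h_and - 0.5)           # OR but not AND
```

The hidden units re-represent the input so that the output layer's job
becomes linearly separable; that re-representation is what the extra
layer buys.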

> Supervision aids quick results whereas
> unsupervised learning takes an inefficient approach.

Supervised learning is not necessarily faster, and not necessarily more
efficient. It depends on the task and the problem.

This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:23:42 GMT