On Thu, 18 May 2000, Pentland, Gary wrote:
> Davidsson: Toward a General Solution to the Symbol Grounding Problem
> http://www.cogsci.soton.ac.uk/~harnad/Temp/CM302/davidsson.pdf
>
> > DAVIDSSON:
> > A possible solution to this problem would be to attempt to describe the
> > meaning of the symbols in a more powerful language. However, this would
> > only lead to another set of symbols, ones which would otherwise need to be
> > interpreted, and in the end to an infinite regression.
>
> I fully agree!
>
> Has anyone any ideas about approaching this the opposite way? Give
> grounding to symbols using a lower-level language. By doing this, will the
> regression lead to insignificantly small and pointless symbols, so as not
> to matter?
I have no idea what you mean. No matter how "small" they get,
meaningless squiggles are just meaningless squiggles, squig, squi,
s...
> OR...
>
> Ground symbols in relation to each other, although this will lead to an
> infinite loop of referrals in the search for a grounding.
That's exactly what the Chinese/Chinese Dictionary-Go-Round did, and
that WAS the symbol-grounding problem. (A little more reflection is
perhaps called for here.)
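For concreteness, here is a toy sketch of the dictionary-go-round (an assumed
illustration in Python, not anyone's actual model): every symbol is "defined"
only in terms of other symbols, so chasing the definitions never reaches
anything but more ungrounded symbols.

    # Toy Chinese/Chinese dictionary: each symbol is "defined" only by other symbols.
    dictionary = {
        "zebra": ["horse", "stripes"],
        "horse": ["animal", "hooves"],
        "stripes": ["pattern", "lines"],
        "animal": ["creature"],
        "creature": ["animal"],      # definitions eventually circle back on themselves
        "hooves": ["feet"],
        "feet": ["animal"],
        "pattern": ["lines"],
        "lines": ["pattern"],
    }

    def look_up(symbol, steps=5):
        """Chase definitions: all we ever reach is more ungrounded symbols."""
        frontier = [symbol]
        for _ in range(steps):
            frontier = [d for s in frontier for d in dictionary.get(s, [])]
            print(frontier)

    look_up("zebra")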
> > DAVIDSSON:
> > Concerning Harnad's approach, one can remark that, although it seems clear
> > that a pure symbolic system does not suffice .....
> > .....regarding connectionist networks alone as being capable of serving
> > this function appears too limited.
>
> Connectionist networks (Neural Nets, back-propagation) can't learn from
> single pieces of training data.
So what? I don't see the point.
> > DAVIDSSON:
> > One such restriction is that the algorithm must be incremental. Since
> > the robot cannot control the environment, it will probably not encounter
> > all instances of a category at one point in time.
>
> Harnad, Please respond.
Why/how should a robot encounter all instances of a category at one
point in time? We ourselves don't do so either (and no test should try
to be holier than T3).
I'm not sure what DAVIDSSON's "incremental algorithm" is (a learning
algorithm that can change itself as a result of learning? fine).
And we (and robots) interact with our environment. Not sure what
"controlling" an environment would mean (seems only God could do that),
but interaction and feedback certainly give us SOME control, no?
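If "incremental" just means on-line learning, one labelled instance at a
time, then the requirement is easy enough to state. Here is a minimal sketch
(an assumed toy in Python, not Davidsson's actual algorithm): one prototype
vector per category, nudged toward each new instance as it arrives, so
nothing ever requires encountering all instances of a category at once.

    import numpy as np

    class IncrementalPrototypeLearner:
        """Toy on-line category learner: one prototype vector per category,
        updated from each new instance as it arrives (never needs the whole
        set of instances at once)."""

        def __init__(self):
            self.prototypes = {}   # category label -> (mean vector, count)

        def learn_one(self, features, label):
            """Update from a single labelled instance (one incremental step)."""
            x = np.asarray(features, dtype=float)
            if label not in self.prototypes:
                self.prototypes[label] = (x.copy(), 1)
            else:
                mean, n = self.prototypes[label]
                self.prototypes[label] = (mean + (x - mean) / (n + 1), n + 1)

        def categorize(self, features):
            """Assign the label of the nearest prototype."""
            x = np.asarray(features, dtype=float)
            return min(self.prototypes,
                       key=lambda lab: np.linalg.norm(x - self.prototypes[lab][0]))

    # Usage: instances arrive one at a time, as they would for a robot.
    learner = IncrementalPrototypeLearner()
    learner.learn_one([1.0, 0.9], "horse")
    learner.learn_one([0.9, 1.1], "horse")
    learner.learn_one([0.1, 0.2], "mushroom")
    print(learner.categorize([0.95, 1.0]))   # -> "horse"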
> > DAVIDSSON:
> > .....the first is that designers tend to program their own grounding
> > based on their own experience.....
>
> This non-explicit approach will restrict the potential of any system
> designed in this way.
I disagree. Program whatever you like, as long as the result can pass
T3. Putting in a lot of advance symbolic "knowledge" does not tend to be
useful, because you can't anticipate everything (so the robot runs into
the "Frame Problem").
But remember that getting along autonomously in the world (which
includes day-to-day learning) is part of T3.
Pylyshyn, Z. (Ed.). (1987). The Robot's Dilemma. Norwood, NJ: Ablex.
country.rs.itd.umich.edu/~lormand/phil/cogsci/frame.htm
Harnad, S. (1993) Problems, Problems: The Frame Problem as a Symptom
of the Symbol Grounding Problem. PSYCOLOQUY 4(34) frame-problem.11
http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?4.34
> The argument that vision is necessary, and thus symbolic representations
> will not be needed, I find hard to understand. If there are no symbols,
> then what does the system manipulate in order to perform any function?
I agree that symbols are likely to be needed, but there are other kinds
of manipulations besides syntactic ones (e.g., analog ones).
> As DAVIDSSON states, if this is the case the symbol grounding problem is
> irrelevant, and he goes on to say that no matter what the system uses it
> still MUST have some sort of concepts.
Yes, there is no symbol grounding problem without symbols -- but what is
the likelihood that T3 can be passed without any symbol-manipulation
(computation) at all?
And what on earth are "concepts"?
> DAVIDSSON sets out to find a general solution for the symbol grounding
> problem. He suggests the type of learning that would be needed
> (incremental) and that it would have to learn by example and from experience.
Sounds like part of the obvious performance specs for passing T3. Tells
you nothing about how to do it...
> This has not yet been achieved, as DAVIDSSON states, but when it is, I
> believe that DAVIDSSON's argument for a visual (or at least multi-sensory)
> learning system will be the most effective approach that has been
> described in this course.
And what IS that approach?
> HARNAD's solution, although workable, does
> have the problem of a potential lack of back-propagation and does not
> include much supervised learning, which I believe is necessary to increase
> the rate of learning in the early stages, as an unsupervised system may
> take some years to learn and will still not interact with humans without
> some example of OUR grounding rules.
Not at all clear what this is referring to. My own models are all supervised
learning models, with backprop.
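For anyone unsure what "supervised learning with backprop" amounts to, here
is a minimal generic sketch (a toy two-layer net in Python/numpy, assumed
purely for illustration; it is not the specific models referred to above):
the teacher supplies the correct category labels, the output error is
propagated backwards, and the connection weights are adjusted.

    import numpy as np

    # Minimal supervised backprop net: 2 inputs -> 4 hidden units -> 1 output.
    rng = np.random.default_rng(0)
    W1 = rng.normal(0.0, 1.0, (2, 4))
    W2 = rng.normal(0.0, 1.0, (4, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Supervised data: the "teacher" provides the correct category (XOR-like toy).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    for _ in range(10000):
        # Forward pass
        hidden = sigmoid(X @ W1)
        out = sigmoid(hidden @ W2)
        # Backward pass: propagate the supervised error and adjust the weights
        d_out = (out - y) * out * (1 - out)
        d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
        W2 -= 1.0 * hidden.T @ d_out
        W1 -= 1.0 * X.T @ d_hidden

    print(np.round(out, 2))   # should approach the teacher's labels: 0, 1, 1, 0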
(Always use a spell-checker before posting Skywriting...)
Stevan