> Pentland:
> Has anyone any ideas about approaching this the opposite way? Give
> grounding to symbols using a lower-level language. By doing this, will the
> regression lead to insignificantly small and pointless symbols, so as not
> to matter?
Davidsson did not say a "higher level" language, but a "more powerful"
language. I don't think the two are necessarily connected.
Pentland mentions using a lower-level language to describe symbols, but is
there a lower level than symbols? I don't think so.
> Ground symbols in relation to each other, although this will lead to an
> infinite loop of referrals in the search for a grounding.
Indeed, this will lead to a recursive grounding of symbols on symbols on
symbols...
One symbol may be grounded on another, but somewhere in the chain, there
must be a symbol grounded on something that doesn't need grounding. But
what doesn't need grounding? How do humans perform symbol grounding? Our
eyes and brain interpret what is around us; we see a computer and it means
something to us. How can this be implemented in a robot?
> >DAVIDSSON
> >One such restriction is that the algorithm must be incremental. Since
> >the robot cannot control the environment, it will probably not encounter
> >all instances of a category at one point in time.
So the robot must be able to add instances of a category at any time.
Taking the chair example: the robot will not know what all chairs look
like, but may start with a few examples. Every time a new example of that
category is encountered, it must be able to add it to those already stored.
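To make the incremental requirement concrete, here is a minimal sketch
(in Python; the exemplar store and the toy feature encoding are my own
invention, not Davidsson's actual algorithm). The point is only that
adding a new chair never means retraining on the old ones:

    import math

    class IncrementalCategoriser:
        """Exemplar store: categories grow as instances arrive;
        there is no fixed training phase."""
        def __init__(self):
            self.exemplars = {}  # category name -> list of feature vectors

        def add_instance(self, category, features):
            # Incremental step: just append; nothing already
            # stored has to be revisited or rebuilt.
            self.exemplars.setdefault(category, []).append(features)

        def classify(self, features):
            # The nearest stored exemplar decides the category.
            best, best_dist = None, float("inf")
            for category, instances in self.exemplars.items():
                for inst in instances:
                    d = math.dist(features, inst)
                    if d < best_dist:
                        best, best_dist = category, d
            return best

    robot = IncrementalCategoriser()
    robot.add_instance("chair", [4, 1, 0])  # toy features: legs, seats, arms
    robot.add_instance("chair", [4, 1, 1])  # an armchair, encountered later
    print(robot.classify([4, 1, 2]))        # -> "chair"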
> The argument that vision is necessary, and thus symbolic representations
> will not be needed, I find hard to understand. If there are no symbols,
> then what does the system manipulate in order to perform any function?
>
> As Davidsson states, if this is the case the symbol grounding problem is
> irrelevant, and he goes on to say that no matter what the system uses, it
> still MUST have some sort of concepts.
A form of the symbol grounding problem will always be present. Vision
shouldn't be necessary, but certainly some sensory interaction with the
world is required. Humans born blind are still able to learn. Even without
sight and hearing, humans have tactile sensory interaction with the world,
and this is an important sense to have. Babies and young children like to
touch and play with things, and I believe this is an important part of
learning. As humans learn by interpreting the world through whatever
senses are available, that interpretation is itself the manipulation of
symbols presented to the system by the sensors.
> Davidsson sets out to find a general solution to the symbol grounding
> problem. He suggests the type of learning that would be needed
> (incremental) and that it has to learn from examples and from experience.
Indeed, to get around the symbol grounding problem, the robot cannot be
told about the world by a supervisor, but has to make inferences for
itself.
> a visual (or at least multi-sensory) learning system will be the most
> effective approach that has been described on this course.
For a robot to make inferences for itself (which is necessary to surmount
the symbol grounding problem), it must have sensors with which to
interpret the world it is presented with.
> supervised learning [which I believe] is necessary to increase the
> rate of learning in the early stages, as an unsupervised system may take
> some years to learn and will still not interact with humans without some
> example of OUR grounding rules.
Indeed, learning cannot be solely unsupervised. Humans do not learn
totally unsupervised: we see objects in the world and assign them to
categories. Babies and young children are told names for these categories
(chair, cup), and are perhaps helped in assigning multiple instances to
them (different types of chair and cup). This is the learning process that
should be sought in robots.
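As a hedged sketch of that mixed regime (again in Python, with an
invented grouping threshold, not a proposal from the readings): the
learner forms groups of percepts on its own, and the supervisor's only
contribution is to attach OUR names to the groups it has already formed.

    import math

    def group_percepts(percepts, threshold=1.0):
        """Unsupervised step: group feature vectors lying within
        the threshold distance of a group's first member."""
        groups = []
        for p in percepts:
            for g in groups:
                if math.dist(p, g[0]) < threshold:
                    g.append(p)
                    break
            else:
                groups.append([p])
        return groups

    # Supervised step: a "parent" names each group, as a child is
    # told "chair" or "cup" for categories it formed by itself.
    percepts = [[4.0, 1.0], [4.2, 1.1], [0.0, 3.0]]
    named = dict(zip(["chair", "cup"], group_percepts(percepts)))
    print(named)  # {'chair': [[4.0, 1.0], [4.2, 1.1]], 'cup': [[0.0, 3.0]]}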
Steve
sjlb197@ecs.soton.ac.uk