Re: Ziemke on "Rethinking Grounding"

From: HARNAD, Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Sat May 13 2000 - 09:59:09 BST


On Tue, 9 May 2000, Shaw, Leo wrote:

> Ziemke, T. (1997) Rethinking Grounding.
> http://www.cogsci.soton.ac.uk/~harnad/Temp/CM302/ziemke.htm
>
> Shaw:
> The explanation of cognitivism seems fairly straightforward - the
> processes of transduction (percepts -> internal representations), and
> cognition (manipulation of the internal representations) are distinct.
> (Interestingly, this still permits cognitive processes to be
> implementation independent).

The trouble with cognitivism is that it may still be too "modular":
Maybe the sensorimotor transduction "component" cannot be separated
from the computational (symbol-manipulation) component. Maybe cognition
has to be hybrid through and through.

Note that this is all maybe: The arbiter will have to be T3, or rather,
whatever it takes to pass T3. If a modular system can do it, then it's
grounded.
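
(To make the architectural contrast concrete, here is a throwaway
Python sketch of the "modular" picture; every name in it is
hypothetical and nothing hinges on the details. One component
transduces raw sensor values into symbol tokens, and a separate
symbolic component manipulates those tokens by shape alone. The hybrid
worry is precisely that T3-scale cognition may not factor into two
components that meet at a single, narrow interface like this.)

    # Modular cognitivist sketch: transduction and symbol-manipulation
    # as two self-contained components joined at one interface.

    def transducer(sensor_reading):
        """Sensorimotor side: analog input -> discrete symbol token."""
        return "HOT" if sensor_reading > 30.0 else "COLD"

    def symbol_module(token):
        """Symbolic side: manipulates tokens by their shape alone."""
        rules = {"HOT": "SEEK-SHADE", "COLD": "SEEK-SUN"}
        return rules[token]

    def modular_agent(sensor_reading):
        # The two components only ever meet at this single hand-off.
        return symbol_module(transducer(sensor_reading))

    print(modular_agent(35.2))   # -> SEEK-SHADE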

> Shaw:
> Enactivism, on the other hand, is based on the less intuitive concept of
> cognition as a function of 'embodied action'. This term refers to the
> belief that cognition is inseparably linked to processes of perception
> and action that are experienced through sensorimotor faculties.

This is certainly a possibility. Perhaps a modular system, with two
autonomous components, one sensorimotor, the other symbolic, would not
be able to pass T3. (Who knows?) Maybe the hybrid system needs to be
fully integrated at all levels.

> > ZIEMKE:
> > a number of behavioural subsystems or components working in parallel
> > from whose interaction the overall behaviour of a system emerges. Hence,
> > each of these subsystems (as well as the overall system) can be viewed
> > as transducing sensory input onto motor output, typically more or less
> > directly, i.e. without being mediated by internal world models.
>
> Shaw:
> Incidentally, although this is not really the point of the paper, the
> enactivist approach seems a little unnatural. Although the idea of an
> agent's behavior 'evolving' as more components are added is sound, there
> is no central 'intelligence' that could think about, for example, which
> actions to take.

You are assuming that a central intelligence has to be a symbolic
module, rather than some integrated hybrid function. Maybe, maybe not.
Only T3 success can tell.

> Shaw:
> The crux of Ziemke's argument is that neither paradigm produces a
> sufficiently grounded system.

It seems to me that it's not helpful to speak of whether a system is
"sufficiently" grounded. A symbol system alone is ungrounded. A symbol
system module plus transducer modules interacting with the world might be
grounded, but until it approaches T3 scale the grounding is trivial. It
is T3 that determines the "sufficiency" of the grounding.

> Shaw:
> For cognitivism, there are two problems:
> The first is ungrounded behavior, in the case of Regier's system (that
> learns spatial relations between two objects, eg. 'on', 'into'):
>
> > ZIEMKE:
> > Accordingly, for the above labelling act to make sense to an agent, that
> > agent would have to be able to at least use its spatial labels in some
> > way, to profit in some way from developing the capacity to do so, etc.
>
> Shaw:
> [The] point seems robust: an agent would certainly have to
> understand its actions so they would need to be 'intrinsic' to the
> system.

Real-world robotic interaction is indeed needed to ground the
labelling. But note that spatial sensorimotor performance and spatial
labelling alone are just "toy" (subtotal) tasks; and it is not at all
clear that in a T3-scale system this kind of task would be done by an
autonomous "spatial" module.

All these toy fragments of our capacity that we single out like this
might in reality be just arbitrary parts of our overall integrated
capacity, and modelling them separately might be unrealistic and
misleading.
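
(To see how small such a fragment is, here is a toy Python sketch,
which is not Regier's actual model but a hypothetical stand-in, of a
spatial-labelling component: it maps simulated object coordinates onto
labels like "on" or "above", and the labels feed into nothing else.
The system cannot use them, profit from them, or connect them to the
rest of any capacity, which is what makes the grounding trivial at
this scale.)

    # Hypothetical stand-in for a "toy" spatial-labelling fragment
    # (not Regier's actual model). Boxes are (x, y, width, height) in
    # image coordinates, with y increasing downward.

    def spatial_label(obj, landmark):
        ox, oy, ow, oh = obj
        lx, ly, lw, lh = landmark
        overlaps_horizontally = (ox < lx + lw) and (lx < ox + ow)
        if overlaps_horizontally and abs((oy + oh) - ly) < 2:
            return "on"       # resting right on top of the landmark
        if overlaps_horizontally and (oy + oh) <= ly:
            return "above"
        return "near" if abs(ox - lx) < lw else "elsewhere"

    # The label is produced, and then... nothing. No use, no benefit.
    print(spatial_label(obj=(10, 8, 4, 2), landmark=(8, 10, 10, 5)))   # -> on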

It's not even clear that it is useful to think of such bits of our
capacity in terms of "making sense to an agent." Best to just try to
reverse-engineer the capacity and not worry about whether there's an
agent there, or anything going on in its mind.

> Shaw:
> the concept of a 'fully grounded system' hasn't really been justified

Correct. "Degree of grounding" still sounds like an arbitrary idea. The
symbol grounding problem is real enough (how do we connect symbols to
their meanings without the mediation of an external interpreter's
mind?), but where does the "degree" come in? A symbol system whose
symbols are autonomously connected to the things they are about is
grounded, but
only nontrivial symbol systems are worth talking about. (An "on/off"
system, whose only two symbolic states are "I am on" and "I am off" is
grounded if it's on when it's on and off when it's off, but so what?)
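
(For what it is worth, that on/off system can be spelled out in a few
lines of Python; a throwaway sketch, nothing more. Its two symbol
states are causally connected to the very thing they are about, namely
its own power state, so by the letter of the definition it is
grounded; and it is utterly trivial, which is the point.)

    # The trivially grounded "on/off" system from the paragraph above.

    class OnOffSystem:
        def __init__(self):
            self.powered = False

        def switch(self, powered):
            self.powered = powered

        def report(self):
            # The symbol token covaries with its referent (the system's
            # own state); no external interpreter is needed.
            return "I am on" if self.powered else "I am off"

    s = OnOffSystem()
    s.switch(True)
    print(s.report())   # -> I am on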

The only "degree" worth talking about is the degree to which a system
has scaled up from an arbitrary toy capacity to full T3 capacity. There
may be way-stations along the way (e.g., the T3 capacities of mammals
other than humans), but, for technical reasons (namely, that we can
"mind-read" humans but not other mammals), the test is really only
decisive with human-grade T3. (This is partly because only humans seem to
have language, and natural language is the "mother of all symbol
systems," and the "language of [human] thought." Don't forget that T2
is a part of [human] T3.)

> > ZIEMKE:
> > [With a modular transducer/symbolic system], [t]he result of the
> > transduction (i.e. the system's actions) could be considered grounded,
> > the transducer itself however (i.e. the agent function as composed of
> > the behavioural modules and their interconnection) is in no way
> > intrinsic to the system.
>
> Shaw:
> This argument against imposing artificial design on the system makes
> more sense in the context of the enactive system, because the functions
> of the transducers define the behavior of the whole agent. Hence, by
> artificially constructing 'behavior modules', the behavior of the system
> is made extrinsic.

I'm afraid I can't see this at all. In the case of T2 (the pen-pal TT),
being passed by a pure symbol system, the meaning of "intrinsic" and
"extrinsic" is quite clear:

A system has EXTRINSIC meaning if its symbols are systematically
interpretable by an external interpreter. (This is what we defined as
being a nontrivial symbol system; if it is not interpretable, its
squiggle-squoggle rules [algorithms] are just arbitrary and of no
interest.)

A system has INTRINSIC meaning (= it is grounded) if its symbols are
systematically interpretable WITHOUT the mediation of an external
interpreter. My own "language of thought" does not depend in any way on
YOUR interpretation to mean what it means; it means what it means TO
ME, intrinsically, on its own, autonomously.

But to meet this condition, to be grounded, all a system needs is
autonomy (and T3 power). With that, it's grounded, regardless of
whether it is integrated or modular, and regardless of whether (or how)
its transducers are "designed."
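
(A toy contrast, with hypothetical names and no pretence of T3 scale,
may help fix the two definitions. The ungrounded system's token means
something only because WE keep a dictionary mapping it onto the world;
the grounded system's token is connected to what it is about through
the system's own transducer, with no external dictionary in the loop.
At this scale, of course, the grounding is as trivial as the on/off
example above.)

    # EXTRINSIC meaning: the interpretation lives outside the system.
    ungrounded_token = "ZORK"   # just a squiggle, as far as the system goes
    external_interpreters_dictionary = {"ZORK": "the light is on"}
    print(external_interpreters_dictionary[ungrounded_token])

    # INTRINSIC (grounded) meaning: the token is produced by the
    # system's own causal contact with what it is about.
    def light_sensor():
        return 0.92             # simulated photodetector reading

    def grounded_report():
        return "LIGHT-ON" if light_sensor() > 0.5 else "LIGHT-OFF"

    print(grounded_report())    # covaries with the light itself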

Remember that the "mediation" of a human mind in DESIGNING a system has
always been a red herring. The only requirement for groundedness is
that there should be no human mediator needed in the exercise of its T3
capacity. How it got that capacity is irrelevant. (In fact, we had
BETTER be the designers; otherwise we do not have a T3 model that we
understand, but merely a T3 clone that we understand as little as we
understand ourselves.)

> > ZIEMKE:
> > The problem of design, however, remains to some degree, since by choice
> > of architecture (including number of hidden units, layers, etc.) the
> > designer will necessarily impose extrinsic constraints on the system

This is arbitrary and incorrect. There is nothing wrong with imposing
constraints on a system to get it to perform. If the performance is
autonomous and T3-scale, it is grounded, and its meanings are as
grounded as yours and mine (for all we know about any of us -- or it).

Ziemke confuses a pure (hence ungrounded) symbol system's dependence on
external interpretation with any artificial system's dependence on
"external" design. Performance has to be autonomous, but design
certainly does not.

> Shaw:
> In light of the enactivist belief that cognition is a result of
> interaction between agent and environment, and crucially for this point,
> their mutual specification, it does seem fair to require that designer
> input be minimised.

The only sense in which designer input has to be "minimized" is that it
must be the system, and not the designer (pulling the strings), that
passes T3. The system must be DESIGNED to be autonomous. This means
that it can't be doing everything by rote, running off a pre-planned
video. It has algorithms, but those algorithms must make it able to
learn and adapt, because being able to learn and adapt is part of T3
capacity!
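
(A minimal sketch, again with hypothetical names, of the difference
being pointed to here: running off a pre-planned script versus having
an algorithm that lets the system acquire and revise its own responses
through its own interactions.)

    import random

    # Rote: every response fixed in advance; no adaptation possible.
    SCRIPT = {"ball-left": "turn-left", "ball-right": "turn-right"}

    # Learning: a trivial reward-driven update; the mapping is acquired
    # and revisable through the system's own experience.
    policy = {}   # situation -> preferred action, learned online

    def act(situation, actions=("turn-left", "turn-right")):
        return policy.get(situation, random.choice(actions))

    def learn(situation, action, reward):
        if reward > 0:
            policy[situation] = action   # keep whatever worked

    # One interaction: the world rewards turning toward the ball.
    a = act("ball-left")
    learn("ball-left", a, reward=1 if a == "turn-left" else 0)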

But, apart from that (which simply amounts to the requirement of
robotic groundedness and T3 scale), there are no designer
"minimisation" requirements.

(By the way, remember this extremely subtle point, which is again
implicit in the Turing Test and the Other-Minds Problem: "Being
grounded" is not equivalent to, nor does it guarantee "having a mind."
It is merely indistinguishable from it. That is
Turing-Indistinguishability. Turing reminds us that we cannot tell the
difference, and that no experiment can tell the difference either. The
only sure way to know whether a T3 candidate -- or anyone else -- has a
mind, is to BE that candidate. That is impossible; so
T3-indistinguishability -- and hence grounding -- are all we have left
to go by.)

> Shaw:
> In the section 'Grounding Complete Agents', mention is made of the fact
> that the only known intelligent systems are the result of millions of
> years' co-evolution between individual systems and their environment.

Irrelevant. We are only interested in the result (T3-power). If
understanding and/or imitating evolution helps us (or is the only way)
to design T3 power, then it is relevant; otherwise it is not.

> Shaw:
> [Ziemke] suggested that attempts to produce artificial
> agents should pay greater attention to factors like 'physiological
> grounding', of which one example is:
>
> > ZIEMKE:
> > Compare [the] natural pre-adaptation [in the]
> > ... perfect match/correspondence between the ultraviolet vision
> > of bees and the ultraviolet reflectance patterns of flowers.
> > ... to that of the typical robot
> > which is rather arbitrarily equipped with ultrasonic and infrared
> > sensors all around its body, because its designers or buyers considered
> > that useful (i.e. a judgement entirely extrinsic to the robot).
>
> Shaw:
> Again, this could be seen as taking the idea of minimising designer
> input too far. Although the sensory inputs of biological organisms are
> the result of evolution, it is hard to see how the presence of
> ultrasonic sensors on a robot would hinder its cognitive capacity.

Correct. The short rule is: Shoot for T3 power, use whatever works, but
other than passing T3 itself without cheating (i.e., autonomously),
there are no "design" rules.

> Shaw:
> To summarise, some of the ideas presented in the report seem very solid:
> the idea of producing grounded behavior by allowing the agent to 'learn'
> functions in an evolutionary style is sensible, and the enactivist
> concept of cognition as embodied action has merit. On the other hand,
> treating the removal of designer contribution as the Holy Grail seems overly
> stringent, bearing in mind that the goal is to produce 'Artificial
> Intelligence'. In addition, while constructing a system in a 'bottom-
> up' fashion could be attractively simple, in my opinion there is a
> definite requirement of a central intelligence for cognition.

Correct.

HARNAD, Stevan


