Harnad: The Symbol Grounding Problem

From: Cattell Christopher (cjc398@ecs.soton.ac.uk)
Date: Thu Mar 01 2001 - 15:35:06 GMT


http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad90.sgproblem.html

Cattell:
This paper, written by Stevan Harnad in 1990, sets out the problem of
symbol grounding in a formal symbol system. Harnad first explains how
psychology has modelled the mind, from behaviourism to cognitivism. He
then explains symbol systems and connectionist systems, and the scope
and limits of each. This provides the background for tackling the
symbol grounding problem, which is the main purpose of the paper.
Lastly, Harnad discusses human behavioural capacity and the use of
connectionism to form categorical representations.

> HARNAD:
> behaviorism had declared that it was just as illicit to theorize
> about what went on in the head of the organism to generate its
> behavior as to theorize about what went on in its mind. Only
> observables were to be the subject matter of psychology; and,
> apparently, these were expected to explain themselves.

Cattell:
To show the progression from behaviourism to cognitivism, Harnad
first explains that behaviourism's thought - that it was not how or
why an organism had certain behaviours that was useful, but only the
observation, which was apparently self-explanatory. This thought then
changed with the advent of cognitivism as Harnad explains.

> HARNAD:
> with the gradual advent of cognitivism (Miller 1956, Neisser 1967,
> Haugeland 1978), it became acceptable to make inferences about the
> unobservable processes underlying behavior.

Cattell:
Here, Harnad explain that as Psychology became more like an empirical
science, the Behaviourism view was taken over by cognitivism.

> HARNAD:
> for the hypothetical internal processes came embellished with
> subjective interpretations

Cattell:
Harnad also explains that, for the reason above, this new view also
let in mentalism again "through the back door".

> HARNAD:
> According to proponents of the symbolic model of mind such as Fodor
> (1980) and Pylyshyn (1980, 1984), symbol-strings of this sort
> capture what mental phenomena such as thoughts and beliefs are.
> Symbolists emphasize that the symbolic level (for them, the mental
> level) is a natural functional level of its own, with ruleful
> regularities that are independent of their specific physical
> realizations

Cattell:
Harnad then goes on to explain what symbol systems are and states 8
properties where, if something complies with them all then it is
symbolic. Harnad states that all 8 properties seem to be essential
for the definition of being symbolic. He goes on to say that many
phenomena have some of the properties, but this doesn't mean they are
symbolic.
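
To make the idea of shape-based manipulation concrete, here is a small
Python sketch of my own (the rules and token names are invented, not
Harnad's): a rewrite system that transforms token strings using
explicitly represented rules, with no reference to what the tokens
might mean.

    # A minimal formal symbol system: explicit rewrite rules applied to
    # token strings purely on the basis of token shape, never meaning.
    RULES = [
        (("A", "B"), ("B", "A")),   # "A B" rewrites to "B A"
        (("B", "B"), ("C",)),       # "B B" rewrites to "C"
    ]

    def rewrite_once(tokens):
        """Apply the first rule whose left-hand side matches anywhere."""
        for lhs, rhs in RULES:
            for i in range(len(tokens) - len(lhs) + 1):
                if tuple(tokens[i:i + len(lhs)]) == lhs:
                    return tokens[:i] + list(rhs) + tokens[i + len(lhs):]
        return tokens  # no rule applies

    print(rewrite_once(["A", "B", "B"]))  # ['B', 'A', 'B']

Nothing in the program knows what "A" or "C" stands for; any
interpretation is supplied entirely by us, which is just Harnad's point.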

> HARNAD:
> It is not enough, for example, for a phenomenon to be interpretable
> as rule-governed, for just about anything can be interpreted as
> rule-governed. A thermostat may be interpreted as following the
> rule: Turn on the furnace if the temperature goes below 70 degrees
> and turn it off if it goes above 70 degrees, yet nowhere in the
> thermostat is that rule explicitly represented.

Cattell:
Here, Harnad proceeds to distinguish between explicitly following a
rule and implicitly behaving in accordance with a rule. He states
that the critical difference is compositeness and systematically
criteria. Harnad also says that to be a symbolic the rule must be
part of a formal system, it can't just be a block on its own.
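
The thermostat point can be illustrated with a hypothetical Python
sketch of my own: the first function merely behaves in accordance with
the 70-degree rule, while in the second the rule is explicitly
represented as a data object that the system could inspect and compose
with other rules.

    # Implicit: the thermostat just behaves; no rule token exists anywhere.
    def thermostat(temperature):
        return "furnace on" if temperature < 70 else "furnace off"

    # Explicit: the rule itself is a represented, inspectable object,
    # part of a system that could combine it with other rules.
    rule = {"condition": ("temperature", "<", 70),
            "then": "furnace on", "else": "furnace off"}

    def apply_rule(rule, state):
        var, op, threshold = rule["condition"]
        holds = state[var] < threshold if op == "<" else state[var] > threshold
        return rule["then"] if holds else rule["else"]

    print(thermostat(65))                          # furnace on
    print(apply_rule(rule, {"temperature": 65}))   # furnace on

Both behave identically, but only the second has the rule explicitly
represented, which is the difference Harnad is drawing.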

> HARNAD:
> So the mere fact that a behavior is "interpretable" as ruleful does
> not mean that it is really governed by a symbolic rule.[3] Semantic
> interpretability must be coupled with explicit representation (2),
> syntactic manipulability (4), and systematicity (8) in order to be
> symbolic.

Cattell:
Harnad then starts to explain some of the scope of the rest of the
paper. He explains why it is only the formal sense of symbolic and
symbol system the will be considered in symbol grounding. The reason
is that if you weaken any of the criteria (as they aren't arbitrary)
mentioned above you would lose the links with the formal theory of
computing.

> HARNAD:
> An early rival to the symbolic model of mind appeared (Rosenblatt
> 1962), was overcome by symbolic AI (Minsky & Papert 1969) and has
> recently re-appeared in a stronger form that is currently vying
> with AI to be the general theory of cognition and behavior
> (McClelland, Rumelhart et al. 1986, Smolensky 1988).

> HARNAD:
> in this paper it will be assumed that, first and foremost, a
> cognitive theory must stand on its own merits, which depend on how
> well it explains our observable behavioral capacity. Whether or not
> it does so in a sufficiently brainlike way is another matter, and
> a downstream one, in the course of theory development.

> HARNAD:
> To "constrain" a cognitive theory to account for behavior in a
> brainlike way is hence premature in two respects: (1) It is far
> from clear yet what "brainlike" means, and (2) we are far from
> having accounted for a lifesize chunk of behavior yet, even without
> added constraints

> HARNAD:
> Connectionism will accordingly only be considered here as a
> cognitive theory

Cattell:
The paper then moves on to discussing connectionist systems.
Connectionist systems are another word for neural networks, which
includes the study of the function of the brain. Here, Harnad
discusses the difficulty of studying the brain. Very little is known
about the behaviour and the structure of the brain, This ultimately
makes it very difficult, as a lot of the work on the function of the
brain is theoretical. It is for this reason that connectionism is
considered as cognitive theory in the paper. Harnad then goes to
discuss the scope and the limits of symbols and connectionism.

> HARNAD:
> nets seem to do what they do nonsymbolically. According to Fodor &
> Pylyshyn, this is a severe limitation, because many of our
> behavioral capacities appear to be symbolic, and hence the most
> natural hypothesis about the underlying cognitive processes that
> generate them would be that they too must be symbolic.

Cattell:
Harnad raises the issue that there is a lot of contention to whether
connectionism is symbolic. In this paper Harnad adopts the position
that it isn't as they don't meet all the criteria needed to be
symbolic (as discussed earlier in the paper). Harnad then goes on to
say that our linguistic capabilities and logical reasoning are prime
examples of skill we have that are symbolic. I am not too sure
whether Harnad is correct in stating this as I think it would be very
difficult to represent these in a symbolic fashion. I don't think
that there is enough reasoning in the paper to make that statement
stand up.

> HARNAD:
> for the symbolic approach turns out to suffer from a severe
> handicap, one that may be responsible for the limited extent of its
> success to date (especially in modeling human-scale capacities) as
> well as the uninteresting and ad hoc nature of the symbolic
> "knowledge" it attributes to the "mind" of the symbol system.

Cattell:
This is the reason why Harnad has come up with the symbol grounding
problem as he states that he has noticed it in various forms since
the advent of computing.

> HARNAD:
> according to the symbolic theory of mind, if a computer could pass
> the Turing Test (Turing 1964) in Chinese -- i.e., if it could
> respond to all Chinese symbol strings it receives as input with
> Chinese symbol strings that are indistinguishable from the replies
> a real Chinese speaker would make (even if we keep testing for a
> lifetime) -- then the computer would understand the meaning of
> Chinese symbols in the same sense that I understand the meaning of
> English symbols

> HARNAD:
> The symbols and the symbol manipulation, being all based on shape
> rather than meaning, are systematically interpretable as having
> meaning -- that, after all, is what it is to be a symbol system,
> according to our definition. But the interpretation will not be
> intrinsic to the symbol system itself: It will be parasitic on the
> fact that the symbols have meaning for us, in exactly the same way
> that the meanings of the symbols in a book are not intrinsic, but
> derive from the meanings in our heads.

Cattell:
Harnad then explains the symbol grounding problem using Searle's
Chinese Room argument. Searle imagined himself as the computer,
receiving Chinese symbols, manipulating them, and then producing
Chinese output. In doing this Searle argued that the computer (or he
himself) would have no understanding of Chinese but would still be
able to produce the correct output. This exposed a major flaw in the
Turing Test, which Harnad later extended into the "Total Turing Test".

> HARNAD:
> The difficult version is: Suppose you had to learn Chinese as a
> second language and the only source of information you had was a
> Chinese/Chinese dictionary. The trip through the dictionary would
> amount to a merry-go-round, passing endlessly from one meaningless
> symbol or symbol-string (the definientes) to another (the
> definienda), never coming to a halt on what anything meant.[6]

> HARNAD:
> Suppose you had to learn Chinese as a first language and the only
> source of information you had was a Chinese/Chinese dictionary![8]
> This is more like the actual task faced by a purely symbolic model
> of the mind: How can you ever get off the symbol/symbol merry-go-
> round? How is symbol meaning to be grounded in something other than
> just more meaningless symbols?[9] This is the symbol grounding
> problem.

Cattell:
This shows two versions of the symbol grounding problem. The top one,
the difficult version, describes how it would be impossible to learn
a second language if the only information you had was a dictionary
converting form one word in that language to another. Harnad says
that you would continue in a circle, never getting anywhere. The
second, impossible version, concentrates on the language being learnt
is your first language. Harnad asks how you are ever meant to get off
the symbol/symbol merry-go-round. I can agree with this view as the
way in which humans learn is trial and error and being told what is
right and what is wrong. If the only thing you had was the dictionary
you wouldn't know what word was right for a certain meaning and what
was wrong.
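
The merry-go-round is easy to simulate. In this toy Python sketch of
my own (the three made-up "words" stand in for entries in a
Chinese/Chinese dictionary), every lookup yields only more undefined
symbols, so the search for meaning never bottoms out:

    # A toy all-symbolic dictionary: every definition is just more symbols.
    dictionary = {
        "ma": ["shou", "dong"],
        "shou": ["dong", "ma"],
        "dong": ["ma", "shou"],
    }

    def seek_meaning(word, visited=None):
        """Chase definitions; report when we loop back to symbols seen before."""
        visited = visited or set()
        if word in visited:
            return f"back at '{word}' -- merry-go-round, no meaning reached"
        visited.add(word)
        # Every path leads only to further symbols, never to the world.
        return seek_meaning(dictionary[word][0], visited)

    print(seek_meaning("ma"))  # back at 'ma' -- merry-go-round, no meaning reached

However far the lookup chain is followed, it only ever lands on more
symbols, which is exactly the grounding problem.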

> HARNAD:
> Many symbolists believe that cognition, being symbol-manipulation,
> is an autonomous functional module that need only be hooked up to
> peripheral devices in order to "see" the world of objects to which
> its symbols refer (or, rather, to which they can be systematically
> interpreted as referring).[11] Unfortunately, this radically
> underestimates the difficulty of picking out the objects, events
> and states of affairs in the world that symbols refer to, i.e., it
> trivializes the symbol grounding problem.

Cattell:
Harnad explains about connecting the symbol system to the world in
the right way. He says that if that each definien in the
Chinese/Chinese dictionary problem was connected to the world in the
right way, there would be no need for the definienda. Harnad then
sketches a possible solution, a hybrid nonsymbolic/symbolic system,
which symbols are grounded in two kinds of nonsymbolic ways, pick
out categories to which the symbols refer. Harnad then goes on to
explain Human behavioural capacity starting with the difference
between discrimination and identification.

> HARNAD:
> Discrimination is a relative judgment, based on our capacity to
> tell things apart and discern their degree of similarity. To be
> able to identify is to be able to assign a unique (usually
> arbitrary) response -- a "name" -- to a class of inputs, treating
> them all as equivalent or invariant in some respect

Cattell:
When discriminating between two objects, you do not need to know what
they are; you only need to see the differences. Harnad explains iconic
and categorical representations: iconic representations are what allow
us to discriminate. Identification, on the other hand, is very
different. To identify something, prior knowledge of the object is
required, mainly learned from experience; this is where categorical
representations come in.
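
A small sketch of my own may make the difference concrete (the feature
vectors and the choice of "invariant" feature are invented):
discrimination only needs a relative similarity judgment between two
sensory projections, while identification must reduce a projection to
invariant features and assign a name.

    # Discrimination: a relative judgment -- how alike are two projections?
    def discriminate(icon_a, icon_b):
        # No names or categories needed, just a degree of difference.
        return sum(abs(a - b) for a, b in zip(icon_a, icon_b))

    # Identification: assign a name by checking an invariant feature.
    # Here feature 0 is a made-up invariant that marks the category.
    def identify(icon):
        return "horse" if icon[0] > 5 else "not-horse"

    icon1, icon2 = [9, 2, 4], [8, 3, 1]
    print(discriminate(icon1, icon2))  # 5 -- fairly similar
    print(identify(icon1))             # horse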

 
> HARNAD:
> For identification, icons must be selectively reduced to those
> "invariant features" of the sensory projection that will reliably
> distinguish a member of a category from any nonmembers with which
> it could be confused. Let us call the output of this category-
> specific feature detector the "categorical representation"

Cattell:
Both iconic and categorical representations are nonsymbolic,
according to Harnad. Iconic representations are analog copies of
sensory projection, and categorical representations are filtered
icons that preserve only some of the features. Harnad explains
symbolic representations.
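
In code terms (again a hypothetical sketch of my own with made-up
features), an iconic representation is like a raw copy of the sensory
projection, while the categorical representation keeps only the
features assumed to mark the category:

    # Iconic representation: an analog copy of the sensory projection.
    sensory_projection = {"height": 1.6, "shape": "horse", "colour": "brown",
                          "pattern": "plain", "background": "field"}
    icon = dict(sensory_projection)  # preserved more or less whole

    # Categorical representation: the icon reduced to invariant features.
    INVARIANT_FEATURES = ("shape", "pattern")  # assumed to mark the category

    categorical = {k: icon[k] for k in INVARIANT_FEATURES}
    print(categorical)  # {'shape': 'horse', 'pattern': 'plain'}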

> HARNAD:
> There is no justification for interpreting it holophrastically as
> meaning "This is a [member of the category] horse" when produced in
> the presence of a horse, because the other expected systematic
> properties of "this" and "a" and the all-important "is" of
> predication are not exhibited by mere passive taxonomizing. What
> would be required to generate these other systematic properties?
> Merely that the grounded names in the category taxonomy be strung
> together into propositions about further category membership
> relations

Cattell:
Harnad explains that is you have two grounded objects or names:
horse, and stripes. You could then create the category zebra by a
symbolic description of that category: "Zebra" = "horse" & "stripes".
I do not agree with this view because I think it is too open to
interpretation. For example, would a brown horse with stripes be a
zebra? No, It would be a brown horse with stripes. I think Harnad
should have explained this and backed it up more. Harnad then moves
on to another role for Connectionism.
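
Harnad's composition can be sketched as follows (the detectors are
hypothetical stand-ins for grounded categorical representations): once
"horse" and "stripes" are grounded, "zebra" is defined purely
symbolically from them and inherits their grounding.

    # Pretend these are grounded: each bottoms out in a stubbed detector.
    def is_horse(sensory_input):
        return sensory_input.get("shape") == "horse"

    def is_striped(sensory_input):
        return sensory_input.get("pattern") == "stripes"

    # "Zebra" is never grounded directly: it is a symbolic composition
    # of grounded names, so it inherits their connection to the world.
    def is_zebra(sensory_input):
        return is_horse(sensory_input) and is_striped(sensory_input)

    print(is_zebra({"shape": "horse", "pattern": "stripes"}))  # True
    print(is_zebra({"shape": "horse", "pattern": "plain"}))    # False

Whether these two features really suffice to pick out zebras is
exactly the worry I raised above.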

> HARNAD:
> The symbol grounding scheme just described has one prominent gap:
> No mechanism has been suggested to explain how the all-important
> categorical representations could be formed

> HARNAD:
> Connectionism, with its general pattern learning capability, seems
> to be one natural candidate (though there may well be others):
> Icons, paired with feedback indicating their names, could be
> processed by a connectionist network that learns to identify icons
> correctly from the sample of confusable alternatives it has
> encountered by dynamically adjusting the weights of the features
> and feature combinations that are reliably associated with the
> names in a way that (provisionally) resolves the confusion, thereby
> reducing the icons to the invariant (confusion-resolving) features
> of the category to which they are assigned.

Cattell:
Harnad then explains the proposed hybrid system, which has no
autonomous symbolic level. This means that instead there is an
intrinsically dedicated symbol system, its elementary symbols
connected to nonsymbolic representations that can pick up objects to
which they refer, via connectionist networks that extract the
invariant features of their analog sensory projections. I can agree
with this view, as it seems that the connectionism system that Harnad
described could indeed help remedy the weaknesses of the two current
competitors. Harnad then concludes his paper.
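
The mechanism Harnad proposes, a net that adjusts feature weights from
icons paired with name feedback, resembles simple supervised learning.
Here is a minimal perceptron-style sketch of my own (the toy data and
learning rule are assumptions, not Harnad's): the weights drift toward
the feature that reliably separates the two names.

    # Icons as feature vectors, paired with feedback naming their category.
    # Feature 0 is (by construction) the invariant one; feature 1 is noise.
    samples = [([1.0, 0.3], 1), ([0.9, 0.8], 1),   # name "A" -> label 1
               ([0.1, 0.7], 0), ([0.2, 0.2], 0)]   # name "B" -> label 0

    weights, bias, rate = [0.0, 0.0], 0.0, 0.1

    for _ in range(20):                      # a few passes over the samples
        for icon, label in samples:
            activation = sum(w * x for w, x in zip(weights, icon)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction       # name feedback resolves confusion
            for i in range(2):               # reweight toward reliable features
                weights[i] += rate * error * icon[i]
            bias += rate * error

    print(weights)  # weight on feature 0 dominates: the invariant feature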

> HARNAD:
> The expectation has often been voiced that "top-down" (symbolic)
> approaches to modeling cognition will somehow meet "bottom-up"
> (sensory) approaches somewhere in between. If the grounding
> considerations in this paper are valid, then this expectation is
> hopelessly modular and there is really only one viable route from
> sense to symbols: from the ground up

> HARNAD:
> In an intrinsically dedicated symbol system there are more
> constraints on the symbol tokens than merely syntactic ones.
> Symbols are manipulated not only on the basis of the arbitrary
> shape of their tokens, but also on the basis of the decidedly
> nonarbitrary "shape" of the iconic and categorical representations
> connected to the grounded elementary symbols out of which the
> higher-order symbols are composed

> HARNAD:
> The present grounding scheme is still in the spirit of behaviorism
> in that the only tests proposed for whether a semantic
> interpretation will bear the semantic weight placed on it consist
> of one formal test (does it meet the eight criteria for being a
> symbol system?) and one behavioral test (can it discriminate,
> identify and describe all the objects and states of affairs to
> which its symbols refer?).

Cattell:
In his conclusion Harnad states that the bottom-up approach and top-
down approach will not meet in the middle if the grounding
considerations are valid. I agree with this as I think the only way
in which to reach a good level is starting from the ground and
working up. This is the only way in which the symbols can be grounded
properly. Harnad then concludes that symbols are manipulated on their
catagorical and iconic representations. Harnad also concludes that
the properties of such symbol systems should depend on the
behavioural considerations. If a system can adhere to the 8 symbol
tests and discriminate, identify and describe all other objects then
the sementic interpritation of its symbols is fixes by the
behavioural capacity of the symbol system. He concludes that this is
no guarantee that the model has captured subjective meaning, but he
states this is as close as we can ever hope to get.
I agree with most of what Harnad says in his paper. Most of the
symbol grounding problem makes sense and, after reading the paper, I
can see how difficult it is to overcome this problem. Harnads
solutions seem to be reasonable, but I think more could have been
written to convince me of certain aspects.


