COMPUTATION IS JUST INTERPRETABLE SYMBOL MANIPULATION; COGNITION ISN'T
HARNAD, Stevan
http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad94.computation.cognition.html
>> HARNAD:
>> not everything is
>> a computer, because not everything can be given a systematic interpretation;
>> ... mental states will not just be the
>> implementations of the right symbol systems, because of the symbol grounding
>> problem: The interpretation of a symbol system is not intrinsic to the
>> system; it is projected onto it by the interpreter. This is not true of our
>> thoughts.
> TERRY:
> There may be some argument here along the lines of "we just interpret
> our thoughts from some internal symbol system, and project a meaning
> onto them".
> This extra layer of abstraction doesn't actually matter though, as even
> if we give meaning to internal squiggles and squoggles, the
> interpretation is still intrinsic in the system (our brains).
I think the significant point here is that the thoughts do not depend on the
system for their outcome: given the system, the thoughts are still independent
of it. That we project meaning onto an internal system does seem true, as
we can see from the learning and ideas of a newborn child.
>> HARNAD:
>> We must accordingly be more than just computers. My guess is that
>> the meanings of our symbols are grounded in the substrate of our robotic
>> capacity to interact with that real world of objects, events and states of
>> affairs that our symbols are systematically interpretable as being about.
> TERRY:
> And computers must therefore be less than us. It is interesting that
> Harnad supposes that interaction is key. Defining what level this
> interaction must occur at would seem an important problem. ie, is being
> told what a donkey looks like enough, or do we have to see a donkey, or
> do we have to see a donkey in the correct context to be able to
> correctly identify another donkey.
An interesting point can be made here: given that a donkey is normally seen in
a particular place or context, would we still interpret it as a donkey if it
were seen in space, or somewhere equally strange? I think that we would have
to see the donkey in the correct context to believe it was real, and hence a
computer would also have to be able to interpret the context and situation to
assure itself that it is a donkey.
>> HARNAD:
>> Let me declare right away that I subscribe to
>> what has come to be called the Church/Turing Thesis (CTT) (Church 1956),
>> which is based on the converging evidence that all independent attempts to
>> formalise what mathematicians mean by a "computation" or an "effective
>> procedure," even when they have looked different on the surface, have turned
>> out to be equivalent (Galton 1990).
> TERRY:
> So do I, if only for the reason that no one has been able to disprove
> it.
> This is just to remind us that if we accept this, we know the limits of
> computation, and can't make brash claims about what computers "may be
> able to do". I'll assume we are all familiar with the Turing machine's
> operation.
> Regarding this formal model of computation:
For sure, although the fact that something cannot be disproved does not make
it true; at most we can say that it has not been disproved and so must hold
some weight. We may not be able to say exactly what computers will be able to
do, but given the evidence and Moore's law we can see the direction in which
we are travelling.
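Since Terry assumes familiarity with the Turing machine, here is a minimal
reminder of the formal model as a Python sketch (my own toy example, not
anything from Harnad's paper): a two-rule machine that appends a '1' to a
unary-encoded number.

    # A minimal Turing machine sketch (toy example, not from Harnad's paper).
    # It increments a unary number: scan right over the 1s, write a 1 on the
    # first blank square, then halt.

    def run_turing_machine(tape, rules, state="scan", blank="_", halt="halt"):
        """Run a one-tape Turing machine until it reaches the halt state."""
        tape, head = list(tape), 0
        while state != halt:
            if head >= len(tape):
                tape.append(blank)          # tape is unbounded to the right
            write, move, state = rules[(state, tape[head])]
            tape[head] = write
            head += 1 if move == "R" else -1
        return "".join(tape).rstrip(blank)

    # Transition table: (state, symbol read) -> (symbol to write, move, next state)
    rules = {
        ("scan", "1"): ("1", "R", "scan"),  # skip over the existing 1s
        ("scan", "_"): ("1", "R", "halt"),  # write one more 1, then halt
    }

    print(run_turing_machine("111", rules))  # "1111", i.e. 3 + 1 in unary

The point, for the discussion below, is that the machine's behaviour depends
only on this table of shape-based rules, not on what the 1s are taken to mean.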
>> HARNAD:
>> it is still an open question whether people can "compute" things
>> that are not computable in this formal sense: If they could, then CTT would
>> be false. The Thesis is hence not a Theorem, amenable to proof, but an
>> inductive conjecture supported by evidence; yet the evidence is about formal
>> properties, rather than about physical, empirical ones.
> TERRY:
> It's good to keep the above in mind - CTT isn't a theorem. It has not
> yet been disproved, and subscribers to it believe it never will.
Yes, it has never been disproved, but it has also never been proved. The
thesis is hard to either prove or disprove, and we may never know, as
computation takes so many forms.
>> HARNAD:
>> There is a natural generalisation of CTT to physical systems (CTTP).
>> According to the CTTP, everything that a discrete physical system can do (or
>> everything that a continuous physical system can do, to as close an
>> approximation as we like) can be done by computation. The CTTP comes in two
>> dosages: A Weak and a Strong CTTP, depending on whether the thesis is that
>> all physical systems are formally equivalent to computers or that they are
>> just computers.
> TERRY:
> Harnad points out that much of his following argument is reliant on his belief
> in both CTT and CTTP.
> Systematic Interpretability
Would it be possible to have a number of levels between the Strong and Weak
CTTP, so that we could say that parts of a physical system are formally
equivalent to computers?
>> HARNAD:
>> shape-based operations are usually called "syntactic" to contrast them with
>> "semantic" operations, which would be based on the meanings of symbols,
>> rather than just their shapes.
> TERRY:
> As we know. Just keep it in mind below:
>> HARNAD:
>> Meaning does not enter into the definition of formal computation.
> TERRY:
> This is clearly the crux of the argument. Harnad then uses the example
> of the first time you were formally taught arithmetic or similar:
>> At no time was the meaning of the
>> symbol used to justify what you were allowed to do with it. However,
>> although it was left unmentioned, the whole point of the exercise of
>> learning formal mathematics (or logic, or computer programming) is that all
>> those symbol manipulations are meaningful in some way ("+" really does
>> square with what we mean by adding things together, and "=" really does
>> correspond to what we mean by equality). It was not merely a meaningless
>> syntactic game.
> TERRY:
> When we are given some new symbol, the first thing we want to know is
> what it means. The meaning of the symbol was entirely used to justify
> what we could do with it. The first time I was taught algebra, and the
> notion of "value X" we were taught that it's any number we like, and
> should be treated as such. Maybe I was just taught in a strange way. I
> agree that it isn't just syntax, but I think meaning was crucial in the
> teaching.
Given a new symbol, it is often enough to know what is around it to
work out its meaning. Take the mathematical example 3#1=4: here we can
say that the symbol # is the addition operator. But to know this we must
already know the meaning of all the other symbols, so the approach is
limited.
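A small sketch of that inference (my own illustration, assuming the digits
and '=' are already interpreted for us): given the observation 3#1=4, we can
test candidate interpretations of '#' and keep the ones that fit.

    import operator

    # Toy illustration of guessing an unknown symbol's meaning from context.
    # It only works because every other symbol already has a fixed meaning.
    candidates = {"+": operator.add, "-": operator.sub,
                  "*": operator.mul, "/": operator.truediv}

    def consistent_meanings(left, right, result):
        """Return the candidate operators consistent with: left # right = result."""
        return [name for name, op in candidates.items()
                if op(left, right) == result]

    print(consistent_meanings(3, 1, 4))   # ['+']       -> '#' behaves like addition
    print(consistent_meanings(2, 2, 4))   # ['+', '*']  -> one example can be ambiguous

A single example can leave the meaning underdetermined, and the whole exercise
presupposes the meanings of the surrounding symbols, which is the limitation
noted above.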
>> HARNAD:
>> definitional property of computation that symbol manipulations must be
>> semantically interpretable -- and not just locally, but globally: All the
>> interpretations of the symbols and manipulations must square systematically
>> with one another, as they do in arithmetic, at the level of the individual
>> symbols, the formulas, and the strings of formulas. It must all make
>> systematic sense, in whole and in part (Fodor & Pylyshyn 1988).
> TERRY:
> This is restating another of the requirements for computation, as
> defined in class. The symbols must be interpretable systematically,
> throughout the system, and they must make sense. As Harnad states, this
> is not trivial.
Given the whole system, or any given part of that system, it must make
systematic sense; the system can be fragmented into many parts, but those
parts must all square systematically with one another.
>> HARNAD:
>> It is easy to pick a bunch of arbitrary symbols and to
>> formulate arbitrary yet systematic syntactic rules for manipulating them,
>> but this does not guarantee that there will be any way to interpret it all
>> so as to make sense (Harnad 1994b).
> TERRY:
> The definition of 'make sense' would be interesting. What makes perfect
> sense to one person may make no sense to the next. Chinese doesn't make
> sense to me, but it does to someone who speaks it. Should the above
> read "make sense to somebody" ?
I think Harnad is trying to make the point that a totally new symbol
system invented by someone would be garbage to anyone but the person who
invented it. It takes us back to the problem of whether a symbol system
could be interpreted from context alone, like a dictionary whose entries
are defined only by other words in the same dictionary.
>> HARNAD:
>> the set of semantically interpretable formal symbol systems
>> is surely much smaller than the set of formal symbol systems simpliciter,
>> and if generating uninterpretable symbol systems is computation at all,
>> surely it is better described as trivial computation, whereas the kind of
>> computation we are concerned with (whether we are mathematicians or
>> psychologists), is nontrivial computation: The kind that can be made
>> systematic sense of.
> TERRY:
> So it's pointless to consider symbol systems that make no sense as they
> don't do anything useful. We are only concerned with the sort that
> make sense. Further definitions of a trivial symbol system:
If we can truly say that a symbol system is doing nothing, then it is
trivial; but we may not have the capacity to understand a given
symbol system, and so may dismiss it as trivial when in fact it is non-trivial.
>> HARNAD:
>> Trivial symbol systems have countless arbitrary "duals": You can swap the
>> interpretations of their symbols and still come up with a coherent semantics
>> . Nontrivial symbol systems do not in
>> general have coherently interpretable duals, or if they do, they are a few
>> specific formally provable special cases (like the swappability of
>> conjunction/negation and disjunction/negation in the propositional
>> calculus). You cannot arbitrarily swap interpretations in general, in
>> Arithmetic, English or LISP, and still expect the system to be able to bear
>> the weight of a coherent systematic interpretation (Harnad 1994 a).
> TERRY:
> Clearly, if I learn Chinese and randomly swap the meanings of words
> about, I will still be talking Chinese, but not making any sense. Thus
> Chinese is non-trivial.
> Harnad makes a stronger claim:
This can be said to be true of any symbol system with a single intended
interpretation; if a symbol system admits a number of interpretations for
the same symbols, then it could still make sense under more than one of
them, provided the symbols are not swapped at random.
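A quick check (my own sketch, not from the paper) of the one special case
Harnad concedes: conjunction and disjunction can have their interpretations
swapped coherently, because of De Morgan duality, provided true and false
are swapped as well.

    from itertools import product

    # De Morgan duality: AND and OR can trade places coherently as long as
    # truth values are flipped too. This is the provable "dual" Harnad allows;
    # arbitrary swaps of interpretation in a nontrivial system do not work.

    def dual_holds(a, b):
        # a AND b is true exactly when (NOT a) OR (NOT b) is false.
        return (a and b) == (not ((not a) or (not b)))

    print(all(dual_holds(a, b) for a, b in product([True, False], repeat=2)))  # True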
>> HARNAD:
>> It is this rigidity and uniqueness of the
>> system with respect to the standard, "intended" interpretation that will, I
>> think, distinguish nontrivial symbol systems from trivial ones. And I
>> suspect that the difference will be an all-or-none one, rather than a matter
>> of degree.
> TERRY:
> Things aren't generally classified as being "a bit trivial" or "half
> trivial".
>> HARNAD:
>> The shapes of the
>> symbol tokens must be arbitrary. Arbitrary in relation to what? In relation
>> to what the symbols can be interpreted to mean.
> TERRY:
> I think most people would assume that the shapes of letters and numbers
> are arbitrary in relation to what they actually mean (apart from maybe
> the numbers 1 and 0). As Harnad points out.
I think these types of symbols evolve as the systems are used; take our
own written language, where certain symbols such as ':)' and ';)' take
their shape from what they depict.
> TERRY:
> Harnad then addresses my earlier question about interpretation:
>> HARNAD:
>> We may need a successful human interpretation
>> to prove that a given system is indeed doing nontrivial computation, but
>> that is just an epistemic matter. If, in the eye of God, a potential
>> systematic interpretation exists, then the system is computing, whether or
>> not any Man ever finds that interpretation.
> TERRY:
> Isn't it possible that every symbol system has the potential to be
> systematically interpretable? Can we ever say "there is no systematic
> interpretation to system X" and be guaranteed correctness ?
As I argued earlier, a given symbol system may be interpretable to only
one entity; but if even that one entity can interpret it systematically,
then it must still be a symbol system.
>> HARNAD:
>> It would be trivial to say that every object, event and
>> state of affairs is computational because it can be systematically
>> interpreted as being its own symbolic description: A cat on a mat can be
>> interpreted as meaning a cat on the mat, with the cat being the symbol for
>> cat, the mat for mat, and the spatial juxtaposition of them the symbol for
>> being on. Why is this not computation? Because the shapes of the symbols are
>> not arbitrary in relation to what they are interpretable as meaning, indeed
>> they are precisely what they are interpretable as meaning.
>> HARNAD:
>> Another way of characterising the
>> arbitrariness of the shapes of the symbols in a formal symbol system is as
>> "implementation independent": Completely different symbol-shapes could be
>> substituted for the ones used, yet if the system was indeed performing a
>> computation, it would continue to be performing the same computation if the
>> new shapes were manipulated on the basis of the same syntactic rules.
Given a syntactic system, new symbols can be introduced that carry the
same meanings as the ones they replace, and the computation would then
have the same meaning.
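A minimal sketch (my own toy example) of that point about implementation
independence: the same syntactic rule for unary addition, run over two
different and equally arbitrary token shapes, gives results that correspond
exactly under the relabelling.

    # Implementation independence, illustrated with a toy rewrite rule:
    # unary addition is just juxtaposition of strings, whatever the token shape.

    def unary_add(x, y):
        """The syntactic rule: concatenate the two symbol strings."""
        return x + y

    def relabel(s, old, new):
        """Swap one arbitrary symbol shape for another."""
        return s.replace(old, new)

    a, b = "|||", "||"                                    # 3 and 2 in the shape '|'
    a2, b2 = relabel(a, "|", "@"), relabel(b, "|", "@")   # same numbers, shape '@'

    result, result2 = unary_add(a, b), unary_add(a2, b2)
    print(result, result2)                                # |||||  @@@@@
    print(relabel(result, "|", "@") == result2)           # True: the same computation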
> TERRY:
> So now we also have the implementation independence part of
> computation.
> If the symbols in a system are not shape independent it is not
> computation.
>> HARNAD:
>> The power of computation
>> comes from the fact that neither the notational system for the symbols nor
>> the particulars of the physical composition of the machine are relevant to
>> the computation being performed. A completely different piece of hardware,
>> using a completely different piece of software, might be performing exactly
>> the same formal computation. What matter are the formal properties, not the
>> physical ones. This abstraction from the physical particulars is part of
>> what gives the Universal Turing Machine the power to perform any computation
>> at all.
> TERRY:
> This is, of course, all leading us towards the hybrid system idea.
> Could our thoughts really be independent from our bodies?
> Harnad then presents some arguments for Computationalism (C=C).
> He talks of the mind-body problem, "a problem we all have in seeing how
> mental states could be physical states" and offers how computation and
> cognition seemed related (computers can do many things only cognition
> can also do, and CTTP states that whatever physical systems can do
> computers can).
> Harnad mentions Turing's test and his interpretation:
I believe that certain systems can be said to be independent of their
hardware/bodies, but the limitations of the system still lead to limitations
of thought; being presumptuous, I could say that a bird could not interpret
some of the human mind's complex thoughts because of the system that it is
operating on.
>> HARNAD:
>> So I see Turing as championing machines in general that have functional
>> capacities indistinguishable from our own, rather than computers and
>> computation in particular. Yet there are those who do construe Turing's Test
>> as support for C=C. They argue: Cognition is computation. Implement the
>> right symbol system -- the one that can pass the penpal test (for a
>> lifetime) -- and you will have implemented a mind.
> TERRY:
> This view is what we discussed in the first part of the course. Harnad
> then gives Searle's chinese room argument as refuting the above view. I
> had problems accepting Searle's test - it always seemed like a trick
> (Can we actually say we understand how _our_ minds process input and
> produce output? No.
> So we no more understand the symbol system going on in our heads than
> we do the memorised pen-pal program. So why is our symbol system the
> only mind present?)
> Anyway, Harnad defends the Turing test:
I agree with Terry here: we can never know how the mind works from
input to output, and every mind probably interprets its inputs differently
from the next. The input and output might be the same even though the
computation performed on them differs.
>> HARNAD:
>> But, as I suggested, Searle's Argument does not really impugn Turing Testing
>> (Harnad 1989); it merely impugns the purely symbolic, pen-pal version of the
>> Turing Test, which I have called T2. It leaves the robotic version (T3) --
>> which requires Turing-indistinguishable symbolic and sensorimotor capacity
>> -- untouched (just as it fails to touch T4: symbolic, sensorimotor and
>> neuromolecular indistinguishability).
>> meaning, as stated earlier, is not contained in the symbol system.
>> Now here is the critical divergence point between computation and cognition:
>> I have no idea what my thoughts are, but there is one thing I can say for
>> sure about them: They are thoughts about something, they are meaningful, and
>> they are not about what they are about merely because they are
>> systematically interpretable by you as being about what they are about. They
>> are about them autonomously and directly, without any mediation. The symbol
>> grounding problem is accordingly that of connecting symbols to what they are
>> about without the mediation of an external interpretation (Harnad 1992 d,
>> 1993 a).
> TERRY:
> At this point I'd like to point out my previous problems with Searle's
> CRA are well and truly wiped out - this is the difference between
> Searle's mind and the program he's memorised.
>> HARNAD:
>> One solution that suggests itself is that T2 needs to be grounded in T3:
>> Symbolic capacities have to be grounded in robotic capacities. Many
>> sceptical things could be said about a robot who is T3-indistinguishable
>> from a person (including that it may lack a mind), but it cannot be said
>> that its internal symbols are about the objects, events, and states of
>> affairs that they are about only because they are so interpretable by me,
>> because the robot itself can and does interact, autonomously and directly,
>> with those very objects, events and states of affairs in a way that coheres
>> with the interpretation. It tokens "cat" in the presence of a cat, just as
>> we do, and "mat" in the presence of a mat, etc. And all this at a scale that
>> is completely indistinguishable from the way we do it, not just with cats
>> and mats, but with everything, present and absent, concrete and abstract.
>> That is guaranteed by T3, just as T2 guarantees that your symbolic
>> correspondence with your T2 pen-pal will be systematically coherent.
>> But there is a price to be paid for grounding a symbol system: It is no
>> longer just computational! At the very least, sensorimotor transduction is
>> essential for robotic grounding, and transduction is not computation.
> TERRY:
> Harnad then goes over the old "a virtual furnace isn't hot" argument
> and points out:
>> HARNAD:
>> A bit less obvious is the equally valid fact that a
>> virtual pen-pal does not think (or understand, or have a mind) -- because he
>> is just a symbol system systematically interpretable as if it were thinking
>> (understanding, mentating).
> TERRY:
> Harnad goes on to point out that we could simulate a T3 robot, but it
> still wouldn't be thinking, it would still be ungrounded symbol
> manipulation. Only by interacting with the real world and grounding its
> understanding in what it interacts with can something be said to be
> cognizing. This seems to fit in with my understanding of how people
> work. We can of course imagine worlds different from our own,
> inventions not yet real etc. However, all these things must be based on
> the world we know. Otherwise, such things would make no sense to us.
For sure: given a newborn child, if it were confined to a room with no
interaction then the child would have no understanding of anything
outside the room; if someone told the child about the world outside the
room then he would know of it, but not have a cognitive understanding of it.
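As a toy contrast (my own sketch, with made-up feature values, not anything
from Harnad): an ungrounded system only ever maps symbols onto more symbols,
like the child being told about the outside world, whereas a grounded one
tokens "cat" because a sensorimotor categoriser fired on the world itself.

    # Ungrounded: every symbol is defined only in terms of other symbols,
    # like a dictionary that never leaves the page.
    ungrounded = {
        "cat": "a small furry quadruped that meows",
        "quadruped": "a thing with four legs",
    }

    def grounded_token(sensor_reading):
        """Token a symbol because a (hypothetical) perceptual categoriser fired."""
        furry, legs, meows = sensor_reading   # pretend sensorimotor feature vector
        return "cat" if (furry and legs == 4 and meows) else "unknown"

    print(ungrounded["cat"])                # symbols pointing at more symbols
    print(grounded_token((True, 4, True)))  # "cat", tokened in the presence of a cat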
>> HARNAD:
>> I actually think the Strong CTTP is wrong, rather than just vacuous,
>> because it fails to take into account the all-important
>> implementation-independence that does distinguish computation as a natural
>> kind: For flying and heating, unlike computation, are clearly not
>> implementation-independent. The pertinent invariant shared by all things
>> that fly is that they obey the same sets of differential equations, not that
>> they implement the same symbol systems (Harnad 1993 a). The test, if you
>> think otherwise, is to try to heat your house or get to Seattle with the one
>> that implements the right symbol system but obeys the wrong set of
>> differential equations.
> TERRY:
> At this point you may well be thinking "But flying / being hot are
> physical states. Thinking is a mental state". So what is a mental state
> if it is anything more than a physical thing? This is back to the Turing
> test, and if there is indeed some other thing present, we will never be
> able to produce machines that think.
Given the makeup of our brain, a physical state can be said to have an
equal and equivalent mental state; for example, when something is hot the
mind interprets this as an electrical signal and can be said to be in a
mental state.
>> HARNAD:
>> For cognition, defined by ostension (for lack of a cognitive scientific
>> theory), is observable only to the mind of the cognizer. This property --
>> the flip-side of the mind/body problem, and otherwise known as the
>> other-minds problem -- has, I think, drawn the Strong Computationalist
>> unwittingly into the hermeneutic circle. Let us hope that reflection on
>> Searle's Argument and the Symbol Grounding Problem, and especially the
>> potential empirical routes to the latter's solution (Andrews et al in prep;
>> Harnad 1987, Harnad et al 1991,1994), may help the Strong Computationalist
>> break out again. A first step might be to try to deinterpret the
>> symbol system into the arbitrary squiggles and squoggles it really is (but,
>> like unlearning a language one has learnt, this is not easy to do!).
> TERRY:
> It becomes eminently clear why we keep coming back to "it's just
> squiggles and squoggles" in class now. There was an interesting program
> about robots, where scientists had designed a system that used sonar
> (like bats) to recognise objects. It could learn the name of a human
> face, and if presented with the same face, could identify it again.
> This initially seems exciting, but you quickly realise that in order to
> learn concepts we need to be able to break the world into categories,
> and a signal wave alone was completely incapable of doing this. So visual
> interpretation of the world (to the same level of detail as ours, to be
> as intelligent) would seem necessary. I think interaction beyond the
> visual is only necessary to identify things in different ways. Having
> said that, certain things are by their nature only identifiable to us in
> one way (a smell, a noise). It's interesting to note that there would be
> no need to stop at our five senses when designing a robot; why not
> incorporate the bat's sonar as well?
An interesting point: from this could we say that a computer with a symbol
system similar to our own mind, but with more senses, would have greater
interpretational abilities and so a greater cognitive capacity? I agree that
to truly model the human mind we must have at least a T3 level of
capability, and I think it would be unrealistic to devise a system with the
same molecular structure as in T4. Many senses and perceptions are compared
in order to distinguish objects, and the bat system that Terry mentions is
interesting because we could incorporate it alongside other recognition
systems, such as computer vision, to give a more thorough grounding of
objects.
Terry, Mark <mat297@ecs.soton.ac.uk>
Nick Worrall<nw297@ecs.soton.ac.uk>