Harnad, S. (1993) Grounding Symbols in the Analog World with Neural Nets. Think 2: 12-78 (Special Issue on "Connectionism versus Symbolism," D.M.W. Powers & P.A. Flach, eds.). Pp. 23-26.
Stevan Harnad and I seem to be thinking about many of the same issues. Sometimes we agree, sometimes we don't; but I always find his reasoning refreshing, his positions sensible, and the problems with which he's concerned to be of central importance to cognitive science. His "Grounding Symbols in the Analog World with Neural Nets" (= GS) is no exception. And GS not only exemplifies Harnad's virtues, it also provides a springboard for diving into Harnad-Bringsjord terrain:
The Harnad-Bringsjord agreement looks like this:
(A1) Harnad claims in GS that "computationalism" is refuted by Searle's "Chinese Room Argument." I think he's right; in fact, in my What Robots Can and Can't Be (1992) I prove this doctrine false with the help of, among others, Jonah, a mono savant with a gift for flawlessly visualizing every aspect of Turing machine computation -- and my proof disarms both the multiple person rebuttal [Cole (1990), Dyer (1990)], and the pontifical complaint that human implementation isn't real implementation [Hayes et al. (1992)].
(A2) Harnad claims and in part tries to show in GS that computationalism and connectionism can be profitably distinguished. I think he's right; in fact, in [Bringsjord (1991a)] I formally distinguish these camps.
(A3) Harnad claims in GS that connectionism is refuted by his Searlean "Three Room Argument." Again, I think he's right: my Searlean argument [Bringsjord (1992)] successfully targets computationalism and connectionism.
(A4) Harnad claims in GS that the symbol grounding problem -- essentially the problem of how a candidate AI can have intentionality (= genuine beliefs about objects in the world external to it) -- is a very serious problem. I heartily agree. I discussed one version of the symbol grounding problem in my dissertation, and thanks to Harnad's recent seminal work on the subject I'm currently burning more than a few grey cells pondering the problem again.
That's what we agree on. On the other hand, the Harnad-Bringsjord clash looks like this:
(C1) Contra Harnad, I think connectionism and logicism can be conflated for formal reasons (pertaining to the equivalence of neural nets and cellular automata, and the fact that there is an as-precise-as-you-like discrete mathematical representation of any analog computation), which makes the supposed clash between them a red herring [the conflation is achieved in Bringsjord (1991a)]. Since Harnad's hybridism presupposes the reality of the clash, his doctrine is apparently a non-starter.
(C2) The heart of Harnad's GS is his claim that TTT survives what TT couldn't, and that the symbol grounding problem can be solved for a candidate AI by insisting that it be a TTT-passer. I think that while TTT survives Searle, it (and other tests in the same spirit) succumbs to other thought-experiments [a defense of this view is in Bringsjord (in press)]. And I'm inclined to believe that no candidate AI, perhaps nothing physical, will ever have intentionality (which, yes, given that we have intentionality, does imply that I'm at least agnostic on the truth or falsity of substance dualism, the doctrine that human agents are incorporeal).
(C3) Harnad (hastily) rejects in GS the idea that we could in principle survive the complete loss of transduction (the loss of limbs, sensory surfaces, neurological motor analogs, ...) and become "brains in vats." I think it's easy to imagine existing in a cerebration-filled but transduction-empty state, and that such thought-experiments establish not only the logical possibility of such existence, but the physical possibility [in which case sensorimotor capacity is superfluous for an AI-building project; see Bringsjord & Zenzen (1991b)].
(C4) Harnad ends his paper with a large disjunction meant to capture "the possible ways his proposal" -- hybridism -- "could be wrong." The disjunction isn't exhaustive. My own position fails to appear, but perhaps comes closest to the Chomskyian view [Chomsky (1980)]. In my opinion, people are provably symbol systems able in principle to get along just dandy without sensorimotor capacity [= (C3)]; moreover, they're "infinitary" symbol systems of a sort beyond the power of a Turing machine to handle. [My specification and defense of this position can be found in Bringsjord (1993) and Bringsjord & Zenzen (forthcoming).]
That, then, is what Harnad-Bringsjord terrain looks like. The topography seems interesting enough, but -- who's right, who's wrong, and are they ever both right or both wrong? Isn't that the question? We haven't sufficient space to take informed positions on all (Ai) and (Ci) -- but I will endeavor to substantiate a significant part of (C2), since this issue falls right at the heart of Harnad's GS.
As is well known, Turing (1964) holds that if a candidate AI can pass TT, then it is to be declared a conscious agent. His position is apparently summed up by the bold proposition that
(TT-P) If x passes TT, then x is conscious.
[Turing Harnadishly said -- in my opinion incorrectly -- that the alternative to (TT-P) was solipsism, the view that one can be sure only that oneself has a mind. See Turing's discussion of Jefferson's "Argument from Consciousness" in Turing (1964).] Is (TT-P) tenable? Apparently not, not only because of Searle, but because of my much more direct "argument from serendipity" [Bringsjord (in press)]: It seems obvious that there is a non-vanishing probability that a computer program P incorporating a large but elementary sentence generator could fool an as-clever-as-you-like human judge within whatever parameters are selected for a running of TT. I agree, of course, that it's wildly improbable that P would fool the judge -- but it is possible. And since such a "lucky" case is one in which (TT-P)'s antecedent is true while its consequent is apparently false, we have a counter-example.
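Put schematically (a minimal regimentation, in notation of my own choosing rather than that of the formal treatment in Bringsjord (in press)), the target is the stronger-than-material reading of Turing's conditional,

\[
(\textrm{TT-P}^{\Box}) \quad \Box\,\forall x\,(\mathit{Pass}_{TT}(x) \rightarrow \mathit{Cons}(x)),
\]

while the serendipity scenario asserts

\[
(\textrm{Ser}) \quad \Diamond\,\exists x\,(\mathit{Pass}_{TT}(x) \wedge \neg\,\mathit{Cons}(x)).
\]

In any normal modal logic (TT-P^Box) is equivalent to the negation of (Ser); so if the lucky sentence-generator is genuinely possible, the strengthened conditional is false.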
This sort of argument, even when spelled out in formal glory, and even when adapted to target different formal renditions of Turing's conditional [all of which is carried out in Bringsjord (in press)], isn't likely to impress Harnad. For he thinks Turing's conditional ought to be the more circumspect "none the wiser"
(TT-P') If a candidate passes TT we are no more (or less) justified in denying that it has a mind than we are in the case of real people.
Hence, TTT's corresponding conditional, which encapsulates GS' heart of hearts, would for Harnad read
(TTT-P) If a candidate passes TTT we are no more (or less) justified in denying that it has a mind than we are in the case of real people.
Unfortunately, this conditional is ambiguous between a proposition concerning a verdict on two TTT-passers, one robotic, one human, and a proposition concerning a verdict on a TTT-passer matched against a verdict on a human person in ordinary circumstances. The two construals, respectively, are:
(TTT-P1) If h, a human person, and r, a robot, both pass TTT, then our verdict as to whether or not h and r are conscious must be the same in both cases.
(TTT-P2) If a robot r passes TTT, then we are no more (or less) justified in denying that r is conscious than we are justified in denying that h, a human, observed in ordinary circumstances, is conscious.
But these propositions are problematic:
First, it must be conceded that both conditionals are unacceptable if understood to be English renditions of formulae in standard first-order logic -- because both would then be vacuously true. After all, both antecedents are false, since there just aren't any robotic TTT-passers around (the domain of quantification, in the standard first-order case, includes, at most, that which exists); and the falsity of an antecedent in a material conditional guarantees vacuous truth for the conditional itself. The other horn of the dilemma is that once these propositions are formalized with help from a more sophisticated logic, it should be possible to counter-example them with armchair thought-experiments [like that upon which my argument from serendipity is based -- an argument aimed at a construal of (TT-P) that's stronger than a material conditional]. Harnad is likely to insist that such propositions are perfectly meaningful, and perfectly evaluable, in the absence of such formalization. The two of us will quickly reach a methodological impasse here.
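The vacuity horn is just the textbook behavior of the material conditional. As a minimal worked instance (a schematic rendering of (TTT-P1), not a formalization drawn from GS), take

\[
\forall x\,\forall y\,\bigl((\mathit{Robot}(x) \wedge \mathit{Human}(y) \wedge \mathit{PassTTT}(x) \wedge \mathit{PassTTT}(y)) \rightarrow \mathit{SameVerdict}(x,y)\bigr).
\]

In any model whose domain contains no robotic TTT-passer, every instance of the antecedent is false; a material conditional with a false antecedent is true, so the universally quantified sentence comes out vacuously true, whatever one's views on consciousness.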
But -- there is a second problem with (TTT-P1): Anyone disinclined to embrace Harnad/Turing testing would promptly ask, with respect to (TTT-P1), whether the verdict is to be based solely on behavior performed in TTT. If so, someone disenchanted with this proposition at the outset would simply deliver a verdict of "No" in the case of both h and r -- for h, so the view here goes, could be regarded as conscious for reasons not captured in TTT. In fact, these reasons are enough to derail not only (TTT-P1), but (TTT-P2) as well, as will now be shown.
(TTT-P2) is probably what Harnad means to champion. But what is meant by the phrase "ordinary circumstances," over and above "outside the confines of TTT"? Surely the phrase covers laic reasons for thinking that other human persons are conscious, or have minds. Now, what laic reasons have I for thinking that my wife has a mind? Many of these reasons are based on my observation that her physiognomy is a human one, and on my justified belief that her sensory apparatus (eyes, ears, etc.), and even her brain, are quite similar to mine. But such reasons -- and these are darn good reasons for thinking that my spouse has a mind -- are not accessible from within TTT: if I put my wife in TTT, I'll be restricted to verifying that her sensorimotor behavior matches my own. The very meaning of the test rules out emphasis on (say) the neurophysiological properties shared by Selmer and Elizabeth Bringsjord. The upshot is that we have found a counter-example to (TTT-P2) after all: we are more justified in denying that TTT-passing r is conscious than we are in denying that Elizabeth is. And as (TTT-P2) goes, so goes the entire sensorimotor proposal that is GS.
In response to my argument Harnad may flirt with supplanting TTT with TTTT, the latter a test in which a passer must be neurophysiologically similar to humans [see Harnad's excellent discussion of TT, TTT, TTTT (1991)]. Put barbarically for lack of space, the problem with this move is that it gives rise to yet another dilemma: On the one hand, if a "neuro-match" is to be very close, TTTT flies in the face of functionalism, the view that mentality can arise in substrates quite different from our own carbon-based one; and functionalism is part of the very cornerstone of AI and Cognitive Science. On the other hand, if the "neuro-match" is relaxed so that it need only hold at the level of information, so that robotic and human "brains" match when they embody the same program, then in attempting to administer TTTT we face what may well be an insurmountable mathematical hurdle: no finite argument-value list determines the underlying function, and deciding in general whether two programs compute the same function is uncomputable.
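The hurdle can be made vivid with a toy example (mine, not one drawn from the works cited). Consider

\[
f(n) = n, \qquad
g_N(n) =
\begin{cases}
n & \text{if } n \le N,\\
0 & \text{otherwise.}
\end{cases}
\]

For any finite argument-value list there is an \(N\) large enough that \(f\) and \(g_N\) agree on every listed argument, yet \(f \neq g_N\): finite behavioral samples never settle function identity. And if the two "brains" are given as programs rather than as lists, deciding whether they compute the same function is undecidable, since the halting problem reduces to it.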
Bringsjord, S. & Zenzen, M. (forthcoming) Non-Algorithmic Cognition.
Bringsjord, S. (1993) "Toward Non-Algorithmic AI," in Ryan, K. T. & Sutcliffe, R. F. E., eds., Proceedings of AICS '92: The Fifth Irish Conference on AI and Cognitive Science, University of Limerick, September 10-12 (New York, NY: Springer-Verlag).
Bringsjord, S. (in press) "Could, How Could We Tell If, and Why Should -- Androids Have Inner Lives," in Ford, K. & Glymour, C., eds., Android Epistemology (Greenwich, CT: JAI Press).
Bringsjord, S. (1992) What Robots Can and Can't Be (Dordrecht, The Netherlands: Kluwer), ISBN 0-7923-1662-2.
Bringsjord, S. (1991a) "Is the Connectionist-Logicist Clash One of AI's Wonderful Red Herrings?" Journal of Experimental and Theoretical Artificial Intelligence 3.4: 319-349.
Bringsjord, S. & Zenzen, M. (1991b) "In Defense of Hyper-Logicist AI," IJCAI '91, Morgan Kaufmann Publishers, Mountain View, CA, pp. 1066-1072.
Chomsky, N. (1980) "Rules and Representations," Behavioral and Brain Sciences 3: 1-61.
Cole, D. (1990) "Artificial Intelligence and Personal Identity," APA Central Division Meeting, New Orleans, LA, April 27.
Dyer, M. G. (1990) "Intentionality and Computationalism: Minds, Machines, Searle and Harnad," Journal of Experimental and Theoretical Artificial Intelligence 2.4.
Harnad, S. (1991) "Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem," Minds and Machines 1: 43-54.
Hayes, P., Harnad, S., Perlis, D. & Block, N. (1992) "Virtual Symposium on the Virtual Mind," Minds and Machines (in press).
Turing, A. M. (1964) "Computing Machinery and Intelligence," in A. R. Anderson, ed., Minds and Machines, Contemporary Perspectives in Philosophy Series (Englewood Cliffs, NJ: Prentice Hall), pp. 4-30.
It's difficult to determine whether Bringsjord's is another friendly commentary or whether he's really changing the subject. I don't think we differ about connectionism. In my model, neural nets play a circumscribed role in a hybrid system. The hybridism is analog/symbolic, not connectionist/symbolic, however, and as I wrote, I don't much care whether neural nets are like room one (PAR) or room two (SIM) in my three-room argument, so long as they can do the job (TTT-scale categorization). The analog transducer projection is not optional, however, and can no more be replaced by a digital approximation than the world can. So much for discrete approximations.
Bringsjord can imagine cerebration without transduction. I don't see how that establishes its physical possibility (and that's why I think he's talking about another subject). I have a similar reaction to his "argument from serendipity": So chimpanzees might write Shakespeare by chance: What light does that cast on how Shakespeare wrote Shakespeare? And of course hypotheses conditional on passing the TT or TTT by whatever means do not yet have any true antecedents. How one is to go about finding a true antecedent is the name of the game here (my game, at any rate)! In this game, whatever it turns out to take to pass the TT, the TTT, or the TTTT exhausts all the empirical possibilities. Pure chance is not one of the possibilities worth considering.
It is, I think, easy to see that if the TTTT -- requiring a system whose every observable property, behavioral and neuromolecular, is totally indistinguishable from those of our brains -- fails to capture mental properties, then we can certainly not hope to be any the wiser. It is only a bit more difficult to see that, although it too may be fallible, the TTT, rather than the TTTT, is all we have to go on in judging whether anyone else has a mind (since no one to date, amateur or professional, has been near expert enough about his neighbor's brain to base much on that -- and I assume this applies also to Selmer and Elizabeth). The only vexed question is whether the Blind Watchmaker had anything stronger to go on. If He did -- if there are TTT-equivalent but TTTT-inequivalent options that somehow differ in their survival value -- then we had better turn to the TTTT instead of just the TTT. My own hunch, however (and I can only repeat it), is that the TTT already narrows down the empirical degrees of freedom for success enough so that a mindless option need not be worried about (or not worried about much beyond the normal limits of scientific underdetermination).
-- S.H.