Harnad, S. (1995) What Thoughts Are Made Of. Nature 378: 455-456. Book Review of: Churchland, PM. (1995) The Engine of Reason, the Seat of the Soul: A Philosophical Journey into the Brain (MIT Press) and Greenfield, SA (1995) Journey to the Centers of the Mind. (Freeman)
Upon minor prodding by Hamlet, Polonius shows himself quite capable of seeing a cloud as, successively, a camel, a weasel, or a whale. Let us call this remarkable capacity to see one thing in terms of another -- and the pattern-completing cognitive closure, hence insight, that it brings with it -- the human hermeneutic capacity. It is responsible for everything from our ability to answer wrenching existential questions with a bible and a pin to our inability to avoid seeing every dog as looking like its owner. But when it comes to explaining the mind, it is not clear whether this Polonian predilection of ours is a boon or a bane.
Paul Churchland is a philosopher of what used to be called the "eliminative materialist" turn of mind. The objective was to eliminate our everyday view of the mind in favor of a new, scientific one, in which there is no longer a mind/body problem. What is the mind/body problem? It's a difficulty we all seem to have in seeing how mental states -- feelings, thoughts, indeed any form of experience -- can be physical, or even objectively explicable. It is fashionable these days to blame this problem on Descartes, but the much more ancient pedigree of religion and of belief in the immaterial soul hints that the problem may be more deeply rooted than anything planted in our minds in the seventeenth century.
Churchland makes no secret of his strategy to uproot it: "I hope to make available here a conceptual framework of sufficient richness and integrity [so] that you will be able to reconceive at least some of your own mental life in explicitly neurocomputational terms" (p. 19). And so he does, surveying a good bit of contemporary cognitive neuroscience, including brain and behavioral modeling (to which he himself has made some original computational contributions, putting into practice his principle that philosophical problems can be solved by assimilating them to science).
Churchland sets great store by "neural nets." The term is ambiguous, because there are, on the one hand, undeniable networks of neurons in the brain, but there are also networks of artificial or notional "neurons" that are implemented on computers. Worse still, the nets implemented on computers are usually discrete, serial simulations of what are intended to be continuous, parallel, distributed systems. Churchland is betting on the first and the third of these; indeed, he takes them to be one and the same thing. Yet he is an explicit critic of purely computational models of the mind, in that respect making common cause with other critics of computation such as the philosopher John Searle and the mathematician Roger Penrose, though he is at pains to dissociate his own arguments from theirs, which he finds wanting.
Churchland first introduces the notion of "vector activations" by surveying the sensory physiology of taste, color and smell. Each of these senses has detectors, with complex sensations involving the activity of combinations of them. Color is a familiar example. Among the cones in the retina there are some that are selectively activated by red, others by green, and still others by blue. Leaving out a few complicating details, it can be said that linear combinations of the activation levels of each of those three detectors will generate all the colors we are capable of seeing. It accordingly seems correct to describe color space vectorially as a sensory activation space.
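To fix ideas, the vectorial picture can be put in a few lines of code. This is my illustration, not Churchland's: the activation values are invented, and real cone responses are tuned to wavelength bands rather than neatly to "red," "green," and "blue."

```python
import numpy as np

# A color percept as a 3-component activation vector: the relative
# firing levels of the three cone types (values invented for illustration).
red    = np.array([0.9, 0.2, 0.1])   # long-wave ("red") cones dominate
green  = np.array([0.2, 0.9, 0.1])   # medium-wave ("green") cones dominate
yellow = np.array([0.8, 0.7, 0.1])   # red and green cones co-active

# "Linear combinations ... will generate all the colors we are capable
# of seeing": every visible color is a point in this 3-D activation space.
def mix(c1, c2, w=0.5):
    """Linear interpolation between two points in color space."""
    return w * c1 + (1.0 - w) * c2

orange = mix(red, yellow)
print(orange)   # a point lying between red and yellow in the space
```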
As a similar story can be told about the other senses, it seems reasonable to think of all sensory input as vector activations of some kind. Indeed, since, to a human, all "input" is sensory input, vector activation sounds like a very general notion indeed. Churchland next moves a level higher than raw sensory processing to the more complex case of face recognition. In doing so he also takes leave of brain and behavioral evidence and passes to computational evidence: Like ourselves, artificial neural nets turn out to be able to classify faces (both facial expressions and facial identities), and they do it by partitioning vector activation space. Neural nets consist of layers: the input layer is the sensory input vector; the output layer is the motor activation vector; and various hidden layers in between are internal vectors, their components interconnected with the input and output vectors. The net accomplishes what it does by adjusting the strength of its interconnections, and hence the strength of the input, hidden, and output vector activations, on the basis of various kinds of built-in learning rules.
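Churchland's input/hidden/output vocabulary maps directly onto the textbook feedforward net. A minimal sketch (the layer sizes, random initial weights, and sigmoid squashing function are standard choices of mine, not details from the book):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sigmoid(x):
    """Squash summed inputs into activation levels between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-x))

# Connection strengths between layers; learning rules adjust these.
W_input_to_hidden  = rng.normal(size=(5, 3))  # 3 input units -> 5 hidden units
W_hidden_to_output = rng.normal(size=(2, 5))  # 5 hidden units -> 2 output units

def forward(input_vector):
    hidden_vector = sigmoid(W_input_to_hidden @ input_vector)    # internal vector
    output_vector = sigmoid(W_hidden_to_output @ hidden_vector)  # motor vector
    return output_vector

sensory_input = np.array([0.9, 0.2, 0.1])  # e.g., a cone-activation vector
print(forward(sensory_input))              # the net's output activation vector
```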
Two suggestive concepts emerge for Churchland from the computational modeling work, that of prototypes (the centers of regions in the partitioned vector space) and "vector completion" (input vectors that are correctly classified even though they are incomplete). Mere feedforward nets (in which activation flows only in one direction, from input to hidden layer to output) are contrasted with recurrent nets in which there are descending feedback connections as well, and repetitive activation cycles are possible. Feedforward nets seem to be data-driven learners of static invariants, as in face classification, whereas recurrent nets can learn temporal sequences, such as whether certain symbol strings are grammatical or ungrammatical (although I think Churchland misconstrues this particular toy model of a tiny fragment of grammar, taking it to be a refutation of Chomsky's evidence and arguments for an unlearnable, hence inborn, universal grammar).
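Both concepts are easy to make concrete. In the simplest reading (my toy rendering, with fabricated "face" vectors, not the models Churchland describes), a prototype is the center of a region of the partitioned activation space, and vector completion is classifying a partial input by its nearest prototype and letting the prototype fill in the missing components:

```python
import numpy as np

# Prototypes: the centers of regions in a partitioned activation space
# (four-component "face" vectors, fabricated for illustration).
prototypes = {
    "face_A": np.array([0.9, 0.1, 0.8, 0.2]),
    "face_B": np.array([0.2, 0.8, 0.1, 0.9]),
}

def complete(partial, observed):
    """Vector completion: classify using only the observed components,
    then fill the unobserved components in from the winning prototype."""
    def distance(label):
        return np.linalg.norm((partial - prototypes[label])[observed])
    winner = min(prototypes, key=distance)
    return winner, np.where(observed, partial, prototypes[winner])

# An incomplete input: only the first two components are observed.
partial_input = np.array([0.85, 0.15, 0.0, 0.0])
observed_mask = np.array([True, True, False, False])
print(complete(partial_input, observed_mask))
# -> ('face_A', array([0.85, 0.15, 0.8 , 0.2 ]))
```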
With this repertoire of concepts, we learn that "If having a feedforward neural architecture is what allows one to discriminate instances of prototypical things, then having a recurrent neural architecture is what provides one with the further capacity to discriminate instances of processes" (p. 104) and that "Without [the vectorial sequences generated within a well-trained recurrent network]... we would have no concept of temporal extension or of causal processes at all" (p. 107).
This is eliminative materialism at work. Chapter 5 is a grander and grander series of hermeneutic exercises (we are playing Polonius to Churchland's Hamlet here) on recurrent nets' capacity to recognize the past, understand causality, disambiguate figures, do scientific discovery! The trouble is that in place of empirical evidence that nets' actual behavioral capacities can scale all the way up to our own in this way, the few tiny toy demos that exist are instead ratcheted up hermeneutically, by being subjected to a Protean mentalistic interpretation in which recurrent nets turn out to have the seven core properties of consciousness singled out by Churchland: short-term memory, independence of sensory input, steerable attention, alternative interpretation capability, absence in sleep, presence in dreaming, unity of experience. And conscious knowledge turns out to be what passes along "auto-connected" pathways (directly connected to the information source, as in my knowledge that my own bladder is full) in contrast to "hetero-connected" pathways (as in my knowledge that your bladder is full). (The reason this vocabulary doesn't quite do the eliminative trick for me is that even my hetero-connected knowledge that your bladder is full strikes me as conscious; moreover, the auto-connected thermostat strikes me as being unlikely to be conscious of heat. Ditto for any servosystem or recurrent net.)
It is not that it is impossible that this is what the mind amounts to; it just seems grossly premature to say so on the evidence to date. Most of the mileage seems to be coming instead from our Polonian impressionability and credulity (and I take this to be bad news for the eliminativist programme as a whole, since it shows that we can easily be brainwashed into thinking the mind/body problem has been solved when it hasn't).
I also think Churchland underestimates the power and purpose of the Turing Test, dismissing it as the trivial game to which the Loebner Prize (offered for the computer program that can fool judges into thinking it's human) has reduced it, whereas it is really an exacting empirical criterion: It requires that the candidate model for the mind have our full behavioral capacities -- so fully that it is indistinguishable from any of us, to any of us (not just for one Contest night, but for a lifetime). Scaling up to such a model is (or ought to be) the programme of that branch of reverse bioengineering called cognitive science. It's harmless enough to do the hermeneutics after the research has been successfully completed, but self-deluding and question-begging to do it before.
Following its own informative and well-written survey of the neurobiological data, Susan Greenfield's book proceeds straight to the hermeneutics without even pausing over the problem of performance modeling: She offers a consciousness criterion list even shorter than Churchland's, consisting of "concentric" epicycles -- mind-patterns of which she espies an abundance nestled in the brain-clouds.