NEURAL ACTIVATION, INFORMATION, AND PHENOMENAL CONSCIOUSNESS.
Max Velmans, Department of Psychology, Goldsmiths, University of London, New Cross, London SE14 6NW, England. Email M.Velmans@gold.ac.uk.
ABSTRACT O’Brien & Opie defend a "vehicle" rather than a "process" theory of consciousness largely on the grounds that only conscious information is "explicit". I argue that preconscious and unconscious representations can be functionally explicit (semantically well-formed and causally active). I also suggest that their analysis of how neural activation space mirrors the information structure of phenomenal experience fits more naturally into a dual-aspect theory of information than into their reductive physicalism.
It is self-evident that something in the brain must differentiate conscious from preconscious and unconscious states. In their thoughtful article, O’Brien and Opie suggest that conscious states are characterised by stable (versus unstable) patterns of activation in neural networks - a physical "vehicle theory" of consciousness in which each phenomenal experience is identical to a stable pattern of neural activation. Their argument in favour of a vehicle theory rather than a classical "process" theory largely centres on the claim that only conscious information is explicit (is formed into physically distinct, semantically interpretable objects) - and a stable activation pattern is appropriately explicit. Classical processing theories involving symbol manipulation assume that much nonconscious information is also explicit (in which case something has to be done to the information to make it conscious). Neural nets, they suggest, combine explicitness and consciousness in a more natural way. Given its potential for advancing our understanding of the physical substrates of phenomenology, their case merits serious consideration.
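The notion of a "stable pattern of activation" can be made concrete with a toy Hopfield-style attractor network. This is a deliberately minimal sketch of the general idea, not O'Brien & Opie's own model: a "stable pattern" is simply a fixed point of the network's update dynamics, to which nearby (noisy) states settle.

```python
import numpy as np

# Toy Hopfield-style network: a "stable activation pattern" is a fixed
# point of the update dynamics. Illustrative only; the pattern and
# network size are arbitrary.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])   # a stored pattern
W = np.outer(pattern, pattern).astype(float)        # Hebbian weights
np.fill_diagonal(W, 0.0)                            # no self-connections

def settle(state, sweeps=5):
    """Repeatedly update each unit until the state stops changing."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

noisy = pattern.copy()
noisy[0] = -noisy[0]        # corrupt one unit
stable = settle(noisy)
print(np.array_equal(stable, pattern))   # True: settles to the stored pattern
```

On a vehicle theory of the kind the authors defend, it is a state like `stable` itself (rather than any process operating on it) that would constitute the conscious representation.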
Much depends, of course, on whether only conscious information is explicit. Given the massive evidence that at least some preconscious and unconscious information is "explicit" (in the sense of being sufficiently well-formed to be semantically interpretable), the authors’ claim requires all the evidence to the contrary (not just some of it) to be methodologically flawed - and it is notable that in making their case they rely on a strictly one-sided reading of the literature (for example, they cite reviews by Holender, 1986 and Shanks & St John, 1994, but ignore extensive, contrary reviews by Dixon, 1971, 1981, Kihlstrom, 1996, Reber, 1997, and Velmans, 1991). Even one good example of preconscious or unconscious semantic processing would be troublesome for their theory, and there are many examples which, to my knowledge, have never been challenged. Groeger (1984), for example, found evidence of preconscious semantic analysis in a nonattended ear, under conditions that cannot be explained by focal-attentive switching (with accompanying consciousness). That is, he found that the effects of disambiguating words in the nonattended ear on a sentence-completion task in the attended ear differed according to whether the nonattended words were at threshold (consciously detectable) or below it. For example, in one experiment subjects were asked to complete the sentence "She looked ___ in her new coat" with one of two completion words, "smug" or "cosy". Simultaneously with the attended sentence, the word "snug" was presented to the nonattended ear (a) at threshold, or (b) below it. With "snug" presented at threshold, subjects tended to choose "smug", which can be explained by subjects becoming momentarily aware of the physical form of the cue. With "snug" presented below threshold, subjects tended to choose "cosy", indicating semantic analysis of the cue without accompanying awareness.
That is, below-threshold, nonattended, semantic information can be causally active - and, according to O’Brien & Opie (section 3.2, para 3), that makes it explicit. Other experiments show that when spoken words are attended to, their multiple meanings are simultaneously, preconsciously activated (in the first 250 milliseconds). Depending on context, one meaning is selected, and the subsequent entry of the word into consciousness is accompanied by inhibition (or deactivation) of the inappropriate meanings (Pynte, Do & Scampa, 1984; Swinney, 1979, 1982). Such briefly activated, preconscious, semantic codes give every appearance of being sufficiently well-formed to influence subsequent processing, as classical theory suggests. Long-term memory provides an additional store of encoded meaning, comprising our knowledge of the world. Such knowledge is largely unconscious and stable, although it is causally active in determining our expectations and interactions with the world. The authors suggest that in PDP systems this can be handled by the connection weights and patterns of connectivity (section 4.1, para 12). But, in a sense, the "vehicle" which carries this information is irrelevant to whether it is unconscious, causally active and functionally "explicit". If a waiter gives one the bill before the menu, one knows immediately that something is wrong - one does not have to consciously rehearse a script of what is supposed to happen in restaurants! So, even if they are right, such unconscious "Connection Weight Representations" must be sufficiently "explicit" (semantically well-formed) to act as they do.
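The restaurant point can be sketched in miniature. In the toy associator below (an invented illustration, not a model of semantic memory), the expected order of events lives entirely in the connection weights: no occurrent activation pattern carries the "script", yet the stored knowledge acts immediately when probed.

```python
import numpy as np

# Knowledge in connection weights ("Connection Weight Representations"):
# the restaurant-script point, very loosely sketched. All vectors and
# pairings are arbitrary stand-ins.
menu = np.array([1.0, 0.0, 0.0])     # event codes (invented)
bill = np.array([0.0, 0.0, 1.0])
first, last = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Hebbian storage: "menu comes first, bill comes last" is laid down in
# W, not in any current activation pattern.
W = np.outer(first, menu) + np.outer(last, bill)

def expected_position(event):
    """Probe the stored knowledge with an event; the answer is produced
    without the 'script' ever having been an explicit activation."""
    out = W @ event
    return "first" if out @ first > out @ last else "last"

print(expected_position(bill))   # "last" - so a bill served first is anomalous
```

Whether information so stored counts as "explicit" in the authors' sense is exactly the point at issue: it is semantically well-formed enough to do causal work, whichever vehicle carries it.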
Their physicalist reductionism also needs to be treated with caution. The authors take it for granted that if "vehicle" theory is correct, then "the complex physical object constituted by the stable pattern of spiking frequencies is the phenomenal experience" (section 5.1, para 8). Nowhere in their paper, however, do they defend this ontological identity claim. A neural activation "vehicle" is a carrier of information. If the authors are right, such activation patterns correlate with phenomenal experience - and, in section 5.4, they give an interesting analysis of how similarities and differences in the "dimensionality" and "shape" of neural "activation spaces" might mirror patterns of similarity and difference in phenomenal experience. The necessary and sufficient conditions for the creation of such "activation spaces" could then also be thought of as the causes of phenomenal experience. But correlation and causation are very different from ontological identity (cf. Velmans, 1998). I do not have space to elaborate on these distinctions here. But it should be clear that while "information structure" can express the patterns of similarity and difference in phenomenal experience, it does not capture its "subjectivity" and "qualia". One might, for example, know everything there is to know about the "shape" and "dimensionality" of a given neural activation space and still know nothing about what it is like to have the corresponding experience. This is obscured in the normal, human case by the fact that third-person access to brain states is complemented by first-person access to our own experience. By means of this dual access, we can discover whether certain "activation spaces" correspond to "auditory experiences", others to "visual experiences", and so on. If silicon had the appropriate "qualia producing" powers, it might then be possible to construct neural nets with the same "activation spaces" and corresponding experiences.
But suppose we arrange a net to operate in a nonhuman configuration, with an "activation space shape" which is quite unlike that of the five main, human, sensory modalities. What would it experience? We cannot know! And here’s the point: if we can know the "shape" of the space very precisely and still not know what it is like to have the experience, then having a particular activation space can’t be all there is to having an experience!
Such points (which echo Nagel, 1974) are very difficult to accommodate within a reductive "physicalism" or "functionalism" which tries to translate the phenomenology of first-person experience entirely into how things appear from a third-person point of view, although they present no impediment to nonreductive positions. O’Brien & Opie’s analysis of how the information structure of neural activation space mirrors that of phenomenal space fits naturally, for example, into a dual-aspect theory of information (of the kind that I have proposed in this journal in Velmans, 1991, 1993, 1996). This accepts that information encoding in the brain, PDP systems and so on can only be properly known from a third-person perspective, while phenomenal experience can only be properly known from a first-person perspective. The patterns of similarity and difference (the "information structure") within a given phenomenal experience and within its neural correlates are identical, but this information appears in very different neural and phenomenal formats because the first- and third-person ways of accessing it (the "observational arrangements") are very different. A shared information structure allows one to relate first-person phenomenology to third-person neural accounts very precisely, but it does not reduce the phenomenology to "activation space" (or to any other physical correlate). On this view, first- and third-person observations of consciousness and brain are complementary and mutually irreducible. A complete account of mind requires both.
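What "shared information structure" amounts to can be illustrated concretely. In the sketch below (all numbers invented for illustration), the pairwise distance structure among third-person activation vectors closely mirrors the structure of first-person dissimilarity judgments over the same stimuli - the two structures can match very precisely without the one reducing to the other.

```python
import numpy as np

# "Shared information structure": similarity relations among neural
# activation vectors can mirror similarity relations among phenomenal
# judgments without the two being identical. All data are invented.
activations = np.array([      # third-person: activation vectors, 4 stimuli
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
    [0.1, 0.9, 0.8],
    [0.0, 1.0, 0.9],
])
reported = np.array([         # first-person: reported dissimilarities
    [0.0, 0.1, 0.9, 1.0],
    [0.1, 0.0, 0.8, 0.9],
    [0.9, 0.8, 0.0, 0.1],
    [1.0, 0.9, 0.1, 0.0],
])

def distance_matrix(X):
    """Pairwise Euclidean distances between the rows of X."""
    return np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

neural = distance_matrix(activations)
iu = np.triu_indices(4, k=1)                    # each pair counted once
r = np.corrcoef(neural[iu], reported[iu])[0, 1]
print(r > 0.9)   # the two similarity structures correspond closely
```

The correlation relates the two structures; it does not tell us, of itself, what it is like to occupy any point in the activation space.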
REFERENCES (additional to those in target article)
Dixon, N. F. (1981) Preconscious Processing. Wiley.
Dixon, N. F. (1971) Subliminal Perception: The Nature of a Controversy. McGraw-Hill.
Groeger, J. A. (1984) Evidence of unconscious semantic processing from a forced error situation. British Journal of Psychology, 75, 305-314.
Kihlstrom, J. F. (1996) Perception without awareness of what is perceived, learning without awareness of what is learned. In The Science of Consciousness: Psychological, Neuropsychological, and Clinical Reviews, ed. M. Velmans. Routledge.
Pynte, J., Do, P. & Scampa, P. (1984) Lexical decisions during the reading of sentences containing polysemous words. In Preparatory States and Processes, ed. S. Kornblum & J. Requin. Erlbaum.
Reber, A. S. (1997) How to differentiate implicit and explicit modes of acquisition. In Scientific Approaches to Consciousness, ed. J. D. Cohen & J. W. Schooler. Erlbaum.
Swinney, D. A. (1979) Lexical access during sentence comprehension: (Re)consideration of context effects. Journal of Verbal Learning and Verbal Behavior, 18, 645-659.
Swinney, D. A. (1982) The structure and time-course of information interaction during speech comprehension: Lexical segmentation, access, and interpretation. In Perspectives on Mental Representation, ed. J. Mehler, E. C. T. Walker & M. Garrett. Erlbaum.
Velmans, M. (1998) Goodbye to reductionism. In Toward a Science of Consciousness II: The Second Tucson Discussions and Debates, ed. S. Hameroff, A. Kaszniak & A. Scott. MIT Press.
Velmans, M. (1996) Consciousness and the "causal paradox." Behavioral and Brain Sciences, 19(3), 537-542.
Velmans, M. (1993) Consciousness, causality and complementarity. Behavioral and Brain Sciences, 16(2), 409-416.
Velmans, M. (1991) Is human information processing conscious? Behavioral and Brain Sciences, 14(4), 651-726.