This view is not without its critics. Daniel Dennett (Dennett, 1991; Dennett and Kinsbourne, 1992) has developed a "multiple drafts" theory of consciousness that purports to do away with the Cartesian theater in which the events of consciousness take place, showing how the events of consciousness cannot be localized to any particular place or time in the perceptual processes that occur in the brain. Dennett concludes that since the events of consciousness cannot be localized, the notion of a single phenomenal substrate for consciousness is incoherent. More recently he has asserted (1996) that "all that the things that people talk about under the rubric of phenomenal consciousness get very handsomely included into a proper notion of access consciousness."
The issue is whether the subject of consciousness is located in a mind
consisting of nonphysical, "mental" stuff, or in a mind consisting of the
basis, structure and operation of world-type stuff, i.e. the brain. Without
precommitting oneself to a particular answer to this question, or even
to the existence or nonexistence of P-consciousness, one can approach
it by addressing in more detail what is being accessed by A-consciousness.
If the subject of consciousness is in the world, then consciousness might
be adequately explained as a process that links the world to behavior.
Critiques of this functional definition (Chalmers, 1996;
Searle, 1992) assert that awareness must be in
a nonphysical mind, since awareness remains uncaptured even when all the
functional and behavioral properties of humans and minds are exhaustively
described. They argue that the chain of functional transformations between
sensory input and behavioral output leaves no place for consciousness,
and thus it must be a mental phenomenon, with an ontological status independent
of the physical world.
The method of this paper is to present a precise, unambiguous functional definition of representations, and to use this definition to link the abstract idea of a function to the dynamics of the classical electrochemistry that operates the human brain, and thus show how physical systems can contain dynamical structures with all the properties of phenomenal awareness. A fundamental component of this definition is an enhanced functional form, in which the function is reified as a datum on which other functions can operate. While this first-class functional is commonly used in computer science and cognitive science, it is rarely found in physics, biology, or philosophy [Note 1].
We use the word "awareness" throughout this paper in order to avoid the association of essential mystery that often comes along with the word "consciousness". The preservation of mystery in the nature of consciousness often appears to be a goal in many discussions of the subject. The operation, utility, and origin of mysterious elements in the understanding of the universe, and the need for their preservation, form a large and complex topic in psychology and sociology that is beyond the scope of this paper. Since our goal here is to provide a framework for understanding that eliminates the need for mysterious elements in the nature of consciousness, we attempt to avoid triggering needless injections of mystery by not using the word except in quotational contexts.
Allen Newell provided the key definition in Unified Theories of Cognition (1990), in the form of The Representation Law -
"This is called the representation law because it is the general form. Actually, there are myriad representation laws, one each for the myriad particular encode-apply-decode paths that represent an external path....The processes of encoding and decoding are part of the path, so they become an essential part of the law."
The notion of function supporting the Representation Law is more powerful than that which may be familiar to many philosophers and natural scientists. In this system, a function T is not only a process that transforms one state Xi to another state Xj, written as Xj = T(Xi), but it is an object that can be operated on by other functions to produce still more functions. This kind of function appears, for example, in the Representation Law as D = encode(T). The value D might be more precisely written D = encode(T()), and is of type function - it is fully capable of acting upon other objects, Yj = D(Yi). The idea that functions can operate on other functions is a fundamental concept of computation, providing structure to the understanding of computerized phenomena such as compilers, interpreters, and loaders that convert static text into changes in machine behavior (Friedman et al., 1992). Introducing higher-order functions into the theoretical armamentarium applicable to the problems of awareness greatly expands the range of phenomena that can be encompassed effectively. As we shall see later, the absence of unitary serial processing in the brain leads to a radical difference in the way higher-order functions appear in its operation, yet the fundamental concept of spatially organized patterns in a uniform physical substrate causing corresponding changes in the sequential structure of behavior remains.
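This higher-order reading of the Representation Law can be sketched in a few lines of Python. The particular functions below (a doubling code, an increment-by-three external transformation) are illustrative assumptions of ours, not part of Newell's formulation; the point is only that D = encode(T) is itself a function, an ordinary value that can be applied.

```python
# Illustrative sketch of the Representation Law's higher-order functions.
# encode_state/decode_state map between external and internal states;
# encode() lifts a whole external transformation T into an internal one D.

def encode_state(x):
    """Map an external state into the internal code (here: doubling)."""
    return 2 * x

def decode_state(y):
    """Inverse of encode_state."""
    return y // 2

def T(x):
    """An external transformation: the path X_i -> X_j in the world."""
    return x + 3

def encode(t):
    """Lift an external transformation t into an internal one.
    The result D is of type function: D = encode(T)."""
    def D(y):
        # decode, apply the external transformation, re-encode
        return encode_state(t(decode_state(y)))
    return D

D = encode(T)

# The encode-apply-decode path represents the external path:
# encoding the result of T agrees with applying D to the encoding.
x = 5
assert encode_state(T(x)) == D(encode_state(x))
```

Here D is a datum produced by operating on another function, exactly the first-class functional the text describes; nothing about it is specific to serial computation.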
This functional definition of representational system sheds light on a number of confusions about the nature of representations that lead to unfounded claims of the insufficiency of representational explanations of awareness. Representational systems are not static sets of brain states. A frozen brain is not a representational system and does not contain any representations, since the transformational functions of a frozen brain are radically different from those of a normal awake brain. A representational system is not a composition of pure functions providing a stateless series of transformations from sensory input to behavioral output, but contains an encoded state component Y that provides a place to locate a ground for any content that might be revealed by the process of introspection that interrogates Access-consciousness.
Awareness of the past of course takes the form of memories, and there is an enormous body of psychological and physiological work devoted to the topic of "memory and learning". Most of this body has developed within a paradigm directed at following the trace of a particular event through the various memory systems of an organism, discovering how the memory trace evolves in consolidation and degradation as it moves from one subsystem to another by processes of access and storage. In the context of awareness, however, we want to adopt a complementary, spacetime-centric perspective, in which events are fixed and the experience of the organism moves, being influenced by some of the incidents in a stream of events as they occur, while being uninfluenced by others until they are recalled from storage to pass once again through the processing structures that make memories available to Access-consciousness.
In introducing memory as a constraint on the structure of awareness,
we are led to the observation that perception of the past is continuous
with perception of the present. A well-formed theory of Access-consciousness
will make no fundamental distinction between what kinds of experiences
are made available by A-consciousness of past events and what kinds of
experiences are made available by A-consciousness of present events. Thus
a representational relation between events in a system and events in its
environment is not a relation of awareness unless the internal transformation
aspect of those internal events occurs at the same time as the external
transformation aspect of the corresponding external events. A system is
aware of an external phenomenon only if it contains a synchronous representation
of the phenomenon.
The notion that time and memory are somehow deeply related to awareness has been lurking in the literature for some time (e.g. Tulving, 1985; Edelman, 1989; Schacter, 1989) yet it is rarely found in more abstract discussions of the subject (but see Hardcastle, 1995). This absence has aided the survival of a number of unsound refutations of the possibility that functional models can explain all concepts of awareness. For example, once realtime is introduced, it is easy to explain much of the discomfort associated with the relation between Searle's Chinese Room situation and awareness. The Chinese Room (Searle, 1980) is unaware not because of any necessary flaws in the explanatory power of functionalism, but because it is defined as an offline system - it contains no timing functionality to synchronize its internal operations with its environment, and thus is incapable of maintaining any temporally sound "awareness", or even responding with any temporally appropriate behavior to its input. The behaviorally unsound Chinese Room is a fortiori unsound with respect to its interior properties such as intentionality and awareness. [Note 2]
Next we can consider a digital television system. It operates in realtime, encodes the scene viewed by the camera according to the MPEG algorithm (ISO, 1996), and transmits to the digital receiver not a simple encoding of the scene, but encoded instructions to the receiver concerning how it should update its internal state to transform it from a representation of the previous state of the external scene to a representation of the scene's current state. The internal state is then decoded and presented to the viewer via a CRT or LCD display, and this displayed scene is (approximately) the same as the original scene. The digital TV system satisfies the Representation Law in realtime.
But the television is clearly not itself aware of the scene. What might be missing? We can find the answer in our question: there's no "itself" in a simple realtime representational system. An essential step in the development of a self is the detachment of control over one's fate from the influence of others. The representation in the receiver has no causal autonomy, and the Representation Law as stated by Newell has no place for an autonomous representation. This can be remedied by replacing the equality term in Newell's Law by an equalizing function, which continually applies the internal transformation to the internal state, and then verifies that the updated state remains in accurate correspondence (via the encode/decode relation) with the external state. This replacement accomplishes two things: it brings dynamics explicitly into our functional description, and it provides a causal role for awareness.
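The proposed modification can be made concrete in a short sketch (our own notation; the particular encode, T, and D below are illustrative stand-ins). The internal state evolves autonomously under its own transformation, and equalize() verifies the correspondence with the world, intervening only when it has failed.

```python
# Sketch of replacing the equality in Newell's Law with an equalizing
# function: the internal state Y evolves autonomously under D, and
# equalize() checks it against the encoded external state, correcting drift.

def encode(x):   # external state -> internal code (illustrative)
    return 2 * x

def T(x):        # external transformation
    return x + 1

def D(y):        # internal transformation, the lifted counterpart of T
    return y + 2

def equalize(y, x):
    """Verify the autonomously updated internal state against the world;
    correct it only when the correspondence has failed."""
    target = encode(x)
    return y if y == target else target

x, y = 0, encode(0)
for _ in range(5):
    x = T(x)            # the world evolves
    y = D(y)            # the representation evolves on its own
    y = equalize(y, x)  # the equalizing step of the modified law

assert y == encode(x)
```

Because y is updated by D before equalize() ever looks at the world, the representation has causal autonomy: equalization is a check on an already-running internal process, not the source of its dynamics.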
The relation between the ASR Rule and awareness can be stated as a hypothesis:
Motivational systems are critical to the survival of mobile organisms; without them, either motion is absent and the organism is dependent on passive processes such as diffusion, convection, and the behavior of mobile prey to bring nutrients to it, or motion is random and undirected, which expands the space searched by the organism for additional resources only sublinearly. Thus even bacteria are sensitive to the distribution of nutrients in their environment and modify their swimming in a way that leads them in the direction of greater nutrient concentrations. Without an understanding of how this modification of behavior occurs, people characteristically attribute awareness and motivation to each bacterium, saying that it "wants to go" up the nutrient concentration gradient. After decades of study, however, bacterial chemotaxis is now understood at the molecular level (Hazelbauer, Berg and Matsumura, 1993; Armitage, 1992). The models that have been developed are sufficiently detailed that they can be analyzed exhaustively for adherence to the ASR Rule. Although such an analysis has not been attempted, it appears likely that there are no processes or structures in bacteria for which the ASR rule holds: bacteria are not aware of their environment.
In more complex systems, motivation has endured intense scrutiny from the perspective of reinforcement learning, with analyses at both the behavioral level (Bower and Hilgard, 1981) and the physiological level (Grossman, 1973). The theories developed in these analyses have not used the higher-order functions and dynamics that are essential to the ASR framework, and thus have not been able to capture explicitly the involvement of consciousness in motivated behavior.
We can use William James's theory of emotion (James, 1884) to see how the ASR framework expands our theory-building capabilities for interoceptive systems. In its original form, this theory stated that emotions arise as responses to behavior: aware fear appears in us as a consequence of running away, and not vice versa. Research since then has shown that the autonomic correlates of emotion (changes in respiration, perspiration, heart rate, gastric motility, etc.) can appear without somatic activity. Their appearance in these situations was credited to the influence of learning, and classical conditioning provides an effective means for accomplishing this link without involving awareness. In the ASR framework, for emotions to be aware requires the existence of a neural representation of the behavioral and autonomic effects of the perceptual situation which evolves semi-independently of the represented situation and behavior. When the rate of evolution of the internal representation exceeds the evolution of the represented situation, a phase differential occurs. This differential can be identified with the experience of a motivational drive in the situation, the "urge to run".
The phase differential between the represented situation and the representing situation is one of the places in the ASR framework where qualia might be located. To bring this differential to access-consciousness requires a second-level ASR system. In the second-level system the first-level ASR system becomes the represented subject, and another representing system is linked to it via encoding and decoding transformations. The range of changes in the differential between represented and representing systems is captured as the representing transformation function for the represented system.
The role of second-level ASR systems is especially salient in the case of aversive motivation. For instance, in the analysis of pain, the evolutionary requirements on intelligent organisms conflict. In order to minimize the effects of stimuli that cause damage to the organism, it is necessary to escape the situation as rapidly as possible. Yet in order to identify the correct mode of escape and learn how to avoid future damaging incidents, it is necessary to direct attention towards the site of damage. With a second-level ASR, we can suggest that the representation of the conflict between approach and avoidance constitutes an essential aspect of the conscious painfulness of damaging situations.
In cross-modal and higher-order awareness, in which, for instance, the subject produces verbal descriptions of perceptual phenomena or responds in a mode requiring comparisons in multiple sensory modalities, multiple ASRs linked by additional, intermodal transduction functions are required. Disruption of these functions due to brain damage produces disconnection syndromes (Geschwind, 1965; Gazzaniga, 1970). In some disconnection scenarios, the positive feedback channels needed to sustain autonomous activity are damaged, leading to a decline in autonomous activity, while in others, the controlling, negative feedback channels are the ones that are affected, leading to the runaway activity found in syndromes such as fluent aphasia.
Research in the semantics of
programming languages has approached these problems in the context of procedural
reflection. Procedural reflection was developed as a way of permitting
programs in high-level, functional languages such as Lisp to modify their
own execution at runtime while maintaining the simplicity
and purity of their semantics. It does this in two steps. The first step
gives the program access to its execution environment by the introduction
of only two functions, often named reify() and reflect().
These bring the hidden portions of the execution environment, such as the
call stack and the name-value bindings, into the realm of normal data structures
where they can be examined or modified, and then reinstalled back into
their original functional roles. The second step preserves the semantics
of the language during such modifications by means of a meta-circular
interpreter: the language is implemented by an interpreter as usual, but
instead of expressing that interpreter in another language or in the
semantics of physical hardware, with the corresponding increase in
semantic complexity, it is viewed as a program in the same reflective
language, with the same reflective powers to modify its own interpreter.
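A toy interpreter can sketch the first of these steps. This is our own illustration, far less general than real reflective languages such as 3-Lisp (which reify the continuation and environment of a meta-circular tower); here a running program merely obtains its own environment as an ordinary data value, and can later reinstall a modified or saved copy.

```python
# Toy sketch of procedural reflection: "reify" brings the hidden execution
# environment into the realm of normal data structures, and "reflect"
# reinstalls such a structure back into its original functional role.

def run(program, env):
    """Interpret a list of (opcode, args...) instructions over env."""
    for op, *args in program:
        if op == "set":           # ("set", name, value)
            name, value = args
            env[name] = value
        elif op == "add":         # ("add", target, a, b)
            target, a, b = args
            env[target] = env[a] + env[b]
        elif op == "reify":       # expose the environment as a datum
            (name,) = args
            env[name] = dict(env)
        elif op == "reflect":     # reinstall a (possibly modified) copy
            (name,) = args
            env = dict(env[name])
    return env

env = run([
    ("set", "x", 1),
    ("reify", "snapshot"),    # capture the environment as a value
    ("set", "x", 99),         # mutate the running environment
    ("reflect", "snapshot"),  # restore the captured environment
], {})
assert env["x"] == 1
```

The program manipulates its own execution state with the same operations it uses on ordinary data, which is the essential move of reflection; what the toy omits is the meta-circular second step that keeps the semantics closed under such manipulation.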
This combination of reflection and meta-circularity contains a difficulty in its structure: there is now an infinite tower of meta-circular interpreters in operation simultaneously, with the interpreter at each level implementing the program at the level below it, and being implemented by the level above (des Rivieres and Smith, 1984)[Note 4]. But this difficulty is only semantic. In the actual implementation of reflective languages, there is a top level of physical implementation where all the work really occurs, and intermediate levels are only created on demand. In fact, reification of interpreter contents one level up need not even occur by a recursive level-shift (Friedman and Wand, 1984) but can be accomplished iteratively within a single-level interpreter, without any actual level-shifting (Wand and Friedman, 1988).
The collapse of the reflective tower in certain implementations of reflective programming languages shows how the issue of the causal power of awareness can be resolved in the case of interior awareness, but does not extend to awareness of the world. For this we need to recognize that the ASR rule involves transfer of information between the system and the world via the encode() and decode() functions, in computational terms input and output, and that the concepts of reflection can be applied to this process as well. We can distinguish this mode of reflection from its original mode with two new names: exteroflection and interoflection, respectively. With exteroflection, we find an important role for the notion of referential transparency.
All universally powerful formal languages have mechanisms for expressing opaque references. Commonly this is provided via special syntax such as quotation marks, but sometimes in other ways, such as Lisp's QUOTE special form. In conventional computer hardware it is provided via "immediate-mode" instructions, which transport a value from the preloaded, fixed instruction stream into mutable storage. In Turing machines this capability appears in the state transition table as an operation that combines tape motion with a state change, contingent on the value of the tape mark at the current head position. A prediction from this theory of perception is then that the neural architectures of brains that support conscious awareness contain structures supporting opaque reference within their perceptual systems.
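The distinction between transparent and opaque reference can be illustrated with a small Python example of our own (Python's nearest everyday analogue to Lisp's QUOTE is holding the expression as text and evaluating it later):

```python
# Transparent vs. opaque reference. A transparent reference is evaluated
# where it occurs; an opaque (quoted) reference is carried as a datum whose
# internal structure the evaluator does not look through until asked.

x = 7

value = x + 1      # transparent: evaluated now, yields 8
quoted = "x + 1"   # opaque: the expression itself, held as data

assert value == 8
assert quoted == "x + 1"    # the datum is the text, not its value
assert eval(quoted) == 8    # evaluation can be applied later, on demand

x = 100
assert value == 8           # the transparent result is fixed
assert eval(quoted) == 101  # the quoted form re-evaluates in context
```

The quoted form behaves like an immediate-mode operand carried along in the instruction stream: inert until a distinct operation converts it into a value in the current context.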
Identification of a direct mechanism for perceptual opacity at the level of neural circuitry may prove difficult. Consider the problem of identifying which transistors in a conventional computer are responsible for execution of an ADD-IMMEDIATE instruction, if one is denied access to the processor manual describing the instruction encoding and to the architecture manual describing the direction of opcodes to functional units. However, an indirect mechanism has already been described.
In research on the physiology of visual attention, Anderson and others
(Anderson and Van Essen, 1987; Olshausen,
Anderson and Van Essen, 1993)
have proposed three classes of neural circuits, shifter circuits, scaler
circuits, and control circuits, that are anatomically, physiologically
and developmentally plausible, and can be combined to form a system with
many of the properties of attention-guided visual perception, including
position-invariance, scale-invariance, and object-centered coordinate transformations.
Beyond their analyses, these circuits, with the addition of recurrence
and associative memory, constitute sufficient elements to support universal
computability, and also provide for a physiological substrate to the mental
zooming that accompanies the focusing of visual attention.
When attention is directed at the perceptual environment, its fundamental role is to bring certain aspects of that environment into more complex kinds of processing than are applied to the aspects outside its focus. This direction of focus can be accomplished by peripheral means such as eye movements, direction of ear pinnae, or locomotion towards the object of attention. [Note 5] It can also be accomplished by purely neural transformations, in which the activity flows within the brain are reorganized to apply central processing to restricted portions of the perceptual field, with an effect on the information delivered to central processes nearly equivalent to physical approach, but with significantly reduced energy expenditure, as well as reduced risk of detection by movement-sensitive predators and prey. However, due to limitations in receptor resolution, an attentional zoom operation reaches a limit at which the central attentional field is focused to fill its capacity with the information delivered by a single peripheral element, and further extension of the operation provides no additional value. At this limit, attention is fully occupied, the broader context of the phenomenon that produced the stimulation of the peripheral perceptual element is lost, and a single element of perceptual quality, a quale, has been delivered to the processes of higher-order perception and cognition. A quale is a limit fixpoint of focused perceptual attention. [Note 6]
That this linkage is possible is proved every time a student in a course on the theory of computation solves a homework problem involving the hand-execution of the specifications for a universal Turing machine. Yet attempts by mathematicians to make the linkage explicit have proved unproductive when approached from the direction of discrete computation (Blum, Shub and Smale, 1989; Wolpert and MacLennan, 1993; Seligmann, 1995), with little effect on the broader research community following from these efforts.
However, a more active community is advancing under the banner of recurrent neural networks (Tino, Horne and Giles, 1995; Omlin and Giles, 1996; Hyötyniemi, 1996). This work analyzes the properties of neural networks as discrete-time dynamical systems. It remains unrealistically simplified at this date, since even with the assumption that all neural activity consists of discrete action potentials, neurons operate asynchronously except in pathological situations such as epileptic seizures. An additional limitation of analyses so far is that they have been limited to a single level of recurrence. Multiple recurrence levels are required to obtain the complexities of human memory phenomena such as short-term memory and rehearsed recall.
It is possible to develop a series of conjectures about the results that may emerge in development of this line of research. As the spectrum of recurrence intervals in an asynchronous neural net broadens from the sharp line that corresponds to full synchronization, certain modes will appear in which one set of units operating at a long interval is linked to another set of units operating at a much shorter interval (cf. Strogatz and Stewart, 1993). This will indicate the appearance of a "rehearsal" capability.
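The conjectured relation between the two intervals can be sketched in a deliberately simple Python model (the intervals and the counting scheme are arbitrary illustrative choices of ours, not derived from any of the cited analyses): a slow loop delivers new content, and a fast loop replays it several times within each slow cycle.

```python
# Speculative sketch of linked recurrence modes: a unit recurring at a
# long interval drives one recurring at a much shorter interval, so the
# fast loop "rehearses" the current content within each slow cycle.

LONG, SHORT = 12, 3  # recurrence intervals, in arbitrary ticks

slow_events, fast_events = [], []
for t in range(24):  # two slow cycles
    if t % LONG == 0:
        slow_events.append(t)   # slow loop: new content arrives
    if t % SHORT == 0:
        fast_events.append(t)   # fast loop: rehearsal of current content

rehearsals_per_cycle = len(fast_events) // len(slow_events)
assert rehearsals_per_cycle == LONG // SHORT  # 4 rehearsals per slow cycle
```

The point of the sketch is only the ratio: once the interval spectrum supports two well-separated modes, a rehearsal count per content-update falls out of the arithmetic.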
The appearance of multiple, linked recurrence modes will permit the introduction of three-dimensional phase portraits. These will make it possible to view certain system components from the perspective of a "potential" as is required for the use of catastrophe theory in analyses. With the help of catastrophe theory, it will be possible to classify the modes of appearance of stable constellations of attractor basins out of an undifferentiated embryonic net. These stable constellations will have properties that correspond to important psychological phenomena. These phenomena include the "catastrophic" reclassification that occurs in the course of insight learning and categorical perception, the resistance to reorganization of conceptual structures that controls the historical evolution of phonology, syntax and semantics in human language, and most importantly for our purposes, the explicit representing transformations that are necessary for conscious awareness.
The representing transformation in the ASR rule can be viewed as a communication channel within the context of classical information theory. The channel capacity of the representing transformation is the amount of state-change that it induces in the representing system due to changes in the represented system. Since the rate of change of the represented system varies, the amount of information passing through the encoding, decoding, and transforming functions varies correspondingly, and cannot be determined a priori. Further, since behavior can be directly, "reflexively" generated in response to situations without any information passing through a conscious representational process, no behavioral measurement can be guaranteed to tap even the minimum of the capacity of the encoding, transforming, and decoding processes.
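The dependence of the information rate on the rate of change of the represented system can be illustrated with a back-of-the-envelope Python computation (our own construction; the change streams are invented for illustration): the entropy of the stream of state changes is low when the represented system mostly holds still, and higher when it changes constantly.

```python
# Illustration of the variable information rate through the representing
# transformation: the Shannon entropy of the stream of state changes
# depends on how quickly the represented system is changing.

from math import log2

def entropy(changes):
    """Shannon entropy (bits/symbol) of a stream of state changes."""
    counts = {}
    for c in changes:
        counts[c] = counts.get(c, 0) + 1
    n = len(changes)
    return -sum(k / n * log2(k / n) for k in counts.values())

slow = [0, 0, 0, 1, 0, 0, 0, 1]  # mostly "no change": low information rate
fast = [0, 1, 2, 3, 0, 1, 2, 3]  # constant change: higher information rate

assert entropy(slow) < entropy(fast)
```

No fixed capacity figure can be read off in advance, which is the a-priori indeterminacy the text describes; the entropy is a property of the particular history of changes.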
What can be determined are the modes of processing that produce the least amount of activity in the equalize function of the ASR rule. In a certain sense, these are modes of perfect consciousness, since the equalization is needed only when the representing transformation operates incorrectly, updating the representing state in a way that does not reflect the changes in the represented system.
There are two of these modes, located at the extremes of the continuum produced by the assumption of a limited total processing capacity that can be shared between the representing transformation and the encoding process. In the first mode, the capacity of the encoding process is minimized, and all mental capacity is available for reflective thought. Since reflection in the ASR framework occurs in realtime, greater height of the reflective tower can be achieved only at the cost of reducing the amount of information passed between higher and lower reflective levels. Perfect reflective closure is achieved at the limit of an infinitely tall reflective tower, but its cost is the reduction to zero in the content of any single reflective level.
The counterpart to total reflection with minimal awareness is the maximization of awareness and minimization of reflection. In this mode, processing capacity is directed to the encoding process, and the complexity of the representing transformation is minimized. Autonomous "thinking" about the "meaning" of percepts is suppressed, and the awareness of the organism about its environment, while simplified, achieves a level of detail and completeness not possible in more complex modes.
Between these extremes lies a broad range of content-laden, meaningful awareness. Within this range, encoding complexity, processing speed, and reflective depth are traded off against each other according to the dictates of both experience and the current situation. In principle, there is a maximum predictive accuracy achievable at each point on this range, but determining the encoding required to achieve this maximum is dependent on the basic "hardware-level" processing operations of the system, and is in general unsolvable (Li and Vitanyi, 1997).
The extremes of closed, reflective consciousness, and open, nonreflective awareness are aspects of the two best-studied meditative disciplines, Yoga and Zen Buddhism. While practitioners' verbal reports are not directly verifiable, electrophysiological evidence corresponding to the ASR-based analysis has been available for some time (e.g. Kasamatsu and Hirai, 1966; Anand, Chhina and Singh, 1961). In these studies the phenomenon of "alpha-blocking" was monitored during meditation. The electroencephalographic alpha rhythm typically appears during idle, unfocused thought and is suppressed at the onset of active attention. This suppression normally habituates in the course of repeated stimulus presentations, decreasing as the stimulus becomes uninformative and is added to the repertoire of the representing transformation. However, in the Zen practitioners, the habituation did not occur, indicating that their mode of awareness had reduced the adaptive role of the equalization function of the ASR framework. In the Yoga practitioners, on the other hand, the alpha rhythm was not blocked, providing no phenomenon to habituate. In the ASR framework, this is analyzed as due to suppression of the encoding process. In the thirty years since these studies, more sophisticated techniques for relating electrophysiology to attention have been developed (e.g. Donchin and Coles, 1988); we would expect these results to be replicated with ERP measures as well as in the EEG.
The ontological transformation that is essential to the operation of access consciousness is transduction: the conversion of a property (optical reflectance, temperature, sound pressure level, surface roughness, etc.) into spatio-temporal structure. At the structural endpoint of a transductive system, functional processes can pick up the transduced structure and incorporate it into ongoing activity that may ultimately end up as behavioral reports of awareness.
Over the course of evolutionary time, selectional pressure could lead to the development of highly sensitive sensory systems. The quantum limit of sensitivity has been shown to have been reached in some species (Hecht, Shlaer and Pirenne, 1942; Fain, 1975).
We are not arguing here that quantum mechanics is invalid, but that
the ASR framework for awareness highlights an often-ignored inadequacy
in quantum theory, namely its linear superposition function, in addition
to its well-known interpretation problems and incompleteness with respect
to gravitation. Incorporation of considerations of nonlinearity and universal
continuum computability into the constraints on fundamental theories of
physics provides a new key for sorting out the many interpretations of
quantum mechanics. For example, it could be argued that the collapse of
the wavefunction provides a location in the theory in which arbitrary amounts
of nonlinearity can be introduced into the evolution of a system.
Yet interpretations such as those of Everett and Bohm have no collapse
-- in order to accommodate ASR awareness, these cannot stand without modification
in some other way. An alternative source of nonlinearity might be
the spatial curvature that is gravity. Since the essence of universal
computation is the conversion of spatial arrangement into function, the
introduction of nonlinear space ultimately leads to nonlinear function,
as well. The extraordinary weakness of the gravitational interaction
requires that its influence on linear superposition be amplified in some
way, however, in order to have significant influence within a volume as
small as the human skull.
The ASR framework's requirement for the incorporation of realtime dynamics into the analysis of awareness holds an additional place for gravity. In the realization of a representational system in neurons, the dynamics are fundamentally controlled by the transport of charged ions and neurotransmitters through the intracellular and extracellular medium, and by the diffusion of these messengers through transmembrane channels. The rate of diffusion is controlled by a diffusion coefficient that is related to the medium's temperature, to the carrier's ionic charge, to geometric factors resulting from the carrier's molecular conformation and the porosity of the intracellular and extracellular matrix, and inversely to the square root of the molecular weight of the carrier. Thus in an ultimate theory of relativistic quantum gravity, one of the routes by which awareness will enter is via Einstein's equivalence principle for inertial and gravitational mass [Note 9]. Penrose's (1989) hypothesis that quantum gravity is involved in the ultimate analysis of awareness turns out to be consistent with the ASR framework in general, although his detailed proposal that quantum gravity is the essential and only mechanism for the appearance of awareness is unnecessary. In the ASR framework, classical mass is sufficient.
The theoretical framework presented here, along with the respective frameworks of Edelman (1989; 1992), Crick (1995), Baars and Newman (Baars, 1994; Baars and Newman, 1994), Stapp (1993), and to a lesser extent Churchland (1986) and Dennett (1991; Dennett and Kinsbourne, 1992) takes science seriously. Until philosophers accept the content of science as aggressively as they dispute its structure, their metaphysics will remain adrift in a Sargasso of concepts, free of attachment to the long-range navigational structures that modern science has linked to the land and the stars.
Which is not to say that, even taking science seriously, confirming any theory of awareness as abstract as the ASR model falls much short of impossible. Although solvable in principle, the problem of identifying ASRs in human brains is equivalent to that of identifying uses of continuation-passing style (Friedman et al., 1992) or structured exception handling (Custer, 1993) in an operating computer system without reference to the source code. Even with logic analyzers, acid baths, and scanning tunneling microscopes that can detect the logic state of individual memory cells, the existence of virtual memory, address translation, data relocation, and code compression, as well as the structural convolution introduced by optimizing compilers, makes this task almost unimaginably complex. Yet unlike computer technology, which produces new designs every four months and revolutionizes itself every decade, the neural substrate for the human mind is not a moving target: the design of the human brain has been stable for hundreds of thousands of years. This stability, along with the thousands of variant family models provided by the evolutionary origin of the human species, provides hope. Just as the gene was an abstract mathematical concept a hundred years ago but is now understood as a physically grounded structure that simultaneously defines and participates in a vast network of biochemical reactions, awareness may someday be understood as a physically grounded structure that arises in and participates in an equally vast network of flowing neuropsychological activity.
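To make the comparison concrete, here is a minimal sketch of continuation-passing style (the function names are illustrative, not from any cited source): each function receives an explicit continuation argument instead of returning a value directly, so once compiled, the control structure is dispersed across closures and is very hard to recover from the running system.

```python
# Continuation-passing style: every function takes an extra argument k,
# the continuation, and hands its result to k instead of returning it.

def add_cps(a, b, k):
    return k(a + b)

def square_cps(x, k):
    return k(x * x)

def sum_then_square(a, b, k):
    # Thread the continuations: add first, then square the sum.
    return add_cps(a, b, lambda s: square_cps(s, k))

# Compute (3 + 4)^2 with the identity function as the final continuation.
result = sum_then_square(3, 4, lambda v: v)
print(result)  # -> 49
```

Nothing in the executing machine marks these closures as "a CPS program" rather than ordinary calls, which is the analogue of the difficulty of spotting ASRs in neural tissue.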
ASR theories of awareness provide for:
Since this issue is often seen as the fundamental problem of awareness, it is worth addressing directly. Chalmers (1996), for example, presents what he believes is an impossibility argument against functional conceptions of awareness. This argument comes down to the claim that functions have inputs and outputs, and that the only possible operation on functions is composition. In organisms the inputs are activity in sensory receptors and the outputs are behavior; there is no place in any possible compositional chain of functions linking sensation and behavior for the percepts of radically different type that make up conscious awareness.
In previous sections of this article we have pointed out that an additional transformation is available to functional systems beyond composition. This transformation consists of reification operations that convert functions into data and back. It appeared in early functional programming languages such as Lisp as the apply operator, and achieved full development in reflective languages such as 3lisp and Brown. With this capability, functional systems can support autonomous representations, which, when properly synchronized with the world, constitute the substrate of conscious awareness. Upon reification, differences in perceptual qualities become differences in the spatial locations of the representations of those qualities, and their qualitative distinctions are maintained by the absence of functions to transform them into other qualities. When the separation of modality-specific transform functions breaks down, or "modality-specific" representations become overlapped, synesthetic confusion can appear (Cytowic, 1989).
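The reification operation described above can be sketched in a few lines. This is a toy illustration with invented names, not the paper's formal definition: a function becomes an ordinary datum that other functions can store, inspect, and convert back into an applicable function.

```python
# Reification sketch: a function is stored as data in a registry, then
# recovered and applied by a second-level function, in the spirit of
# Lisp's apply. Names ("double", meta_apply) are illustrative only.

def double(x):
    return 2 * x

# Reify: the function is now a value held in a data structure.
registry = {"double": double}

def meta_apply(name, arg, table):
    """Treat the reified function as data, then convert it back to use."""
    fn = table[name]   # datum -> function
    return fn(arg)     # function -> application

print(meta_apply("double", 21, registry))  # -> 42
```

The registry keys play the role of spatial locations: distinct representations stay distinct simply because no entry maps one onto another.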
In human awareness, there are many Autonomous Synchronous Representations,
tightly linked within modalities and across levels, and joined into a unified
whole by synchronization constraints. They are sustained by identity transformations,
modified by learning, and linked into multimodal percepts by long-distance
cross-modal connections and via nonspecific processing regions. Their reificational
capabilities permit them to separate recalled past percepts from current
experiences, to reason about their possible futures, and to install the
results of that reasoning into behavioral functions, becoming functionally
revised persons in the process.
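The synchronization constraints invoked above can be illustrated with a generic toy model that is not part of the ASR formalism: two phase oscillators with Kuramoto-style coupling, whose phases lock together when the coupling exceeds their frequency mismatch, much as ASRs are said to be joined into a unified whole.

```python
import math

# Toy illustration (not the author's model): two coupled phase
# oscillators with natural frequencies w1, w2 and coupling strength K.
def step(theta1, theta2, w1, w2, K, dt):
    d1 = w1 + K * math.sin(theta2 - theta1)
    d2 = w2 + K * math.sin(theta1 - theta2)
    return theta1 + d1 * dt, theta2 + d2 * dt

t1, t2 = 0.0, 2.0   # initial phases (radians), far apart
for _ in range(5000):
    t1, t2 = step(t1, t2, w1=1.0, w2=1.1, K=0.5, dt=0.01)

# With K larger than half the frequency mismatch, the phase gap locks
# to a small constant (near asin(0.1), about 0.1 rad) instead of growing.
print(abs(t2 - t1))
```

Below that coupling threshold the oscillators drift apart indefinitely, which would correspond to representations that never cohere into a unified percept.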
[Note 2] Block (1978) noted that many people recognized the realtime difficulty with his "Chinese Nation"; had he chosen to take their objections seriously, he could have discovered the distinction between exteroflective consistency, which is destroyed by uncompensated changes in observer processing speed, and interoflective consistency, which can survive such changes. Lycan (1987) notes that Michael DePaul has also observed that realtime considerations weaken the force of Block's example. [back]
[Note 3] Some might suggest that each higher-level ASR element, though it may be smaller than the sum of the sizes of the lower-level elements that it abstracts, should be larger than any single one of its represented elements. [back]
[Note 4] There is an unfortunate inversion in the historical terminology for procedural reflection. In this framework, the application program is at "Level 0", and one proceeds "up" to deeper levels of interpretation, reaching the physical substrate of computation at the "highest" level. [back]
[Note 5] In social species, additional forms of attention occur, via herd or pack leaders and sentinels. In technological societies, attentional processing acquires instrumental means such as news reports and scientific equipment. [back]
[Note 6] This definition introduces the possibility of unstable, chaotic attentional processes, which never stabilize into coherent percepts from which adaptive behavioral responses might develop. This kind of chaotic perception would constitute a new form of developmental disorder, and could be related to the difficulties of social perception in autism and the "flight of ideas" symptoms that appear in schizophrenia. [back]
[Note 7] This is essentially the point made by Akins (1993). [back]
[Note 8] This means that the favorite mathematical tool of quantum physicists, the Hilbert space, is inappropriate for neurodynamics. Hilbert spaces are orthonormal linear function spaces, while the connectivity space defined by real neurons is neither orthogonal nor linear, and its "normalization" varies from region to region. It is possible that the renormalization techniques of modern quantum field theory may bring neural space into conformance with quantum space, but the additional insight to be gained by this is quite unclear. [back]
[Note 9] The ASR framework is additionally dependent on the fundamental structure of spacetime itself. This dependency appears in two ways: in the required spatial separation between the represented system and the representing system, and in the spatio-temporal scanning that occurs in the conversion of a static representation into dynamically active behavior. [back]
B. K. Anand, G. S. Chhina, and Baldev Singh (1961) Some aspects of electroencephalographic studies in yogis. Electroencephalography and Clinical Neurophysiology 13:452-456. Reprinted in Tart (1969).
Charles H. Anderson and David C. Van Essen (1987) Shifter circuits: A computational strategy for dynamic aspects of visual processing. Proceedings of the National Academy of Sciences USA 84: 6297-6301.
Bruce Alberts, Dennis Bray, Julian Lewis, Martin Raff, Keith Roberts, and James D. Watson (1983) Molecular Biology of the Cell. Garland Publishing, New York.
Judith P. Armitage (1992) Behavioral responses in bacteria. Annual Review of Physiology 54: 683-714.
Bernard J. Baars (1994) A global workspace theory of conscious experience. in Revonsuo and Kamppinen (1994), pp. 149-171.
Bernard J. Baars and James Newman (1994) A neurobiological interpretation of global workspace theory. in Revonsuo and Kamppinen (1994), pp. 211-226.
Claude Bernard (1878) Lectures on the Phenomena of Life Common to Animals and Plants. Translated by Hebbel E. Hoff, Roger Guillemin, and Lucienne Guillemin (1974). Charles C. Thomas, Springfield Illinois.
Ned Block (1978) Troubles with functionalism. in C.W. Savage (ed.) Perception and Cognition: Issues in the Foundations of Psychology. Univ. of Minnesota Press, Minneapolis.
Ned Block (1995) On a confusion about a function of consciousness. Behavioral and Brain Sciences 18: 227-287.
Lenore Blum, Mike Shub and Steve Smale (1989) On a theory of computation and complexity over the real numbers: NP-completeness, recursive functions and universal machines. Bulletin (New Series) of the American Mathematical Society 21(1): 1-46.
Gordon H. Bower and Ernest R. Hilgard (1981). Theories of learning. (5th Ed.) Englewood Cliffs, NJ: Prentice-Hall.
David J. Chalmers (1996) The Conscious Mind: In search of a fundamental theory. Oxford Univ. Press, New York.
Patricia Smith Churchland (1986) Neurophilosophy: Toward a Unified Science of the Mind-Brain. MIT Press, Cambridge MA.
Rodney M.J. Cotterill (1995) On the unity of conscious experience. Journal of Consciousness Studies 2: 290-312.
Francis Crick (1995) The Astonishing Hypothesis: The scientific search for the soul. Simon and Schuster, New York.
Helen Custer (1993) Inside Windows NT. Microsoft Press, Redmond, Washington.
Richard E. Cytowic (1989) Synesthesia: A Union of the Senses. Springer-Verlag, New York.
Daniel Dennett (1991) Consciousness Explained. Little, Brown & Co., Boston.
Daniel Dennett (1996) Comments made at the Second Tucson Conference on Consciousness, Plenary Session 7 recordings, at 37:30 of the talk by N. Block, "Is V1 conscious and in what sense?"
Daniel Dennett and Marcel Kinsbourne (1992) Time and the observer: the where and when of consciousness in the brain. Behavioral and Brain Sciences 15: 183-220.
Jim des Rivieres and Brian Cantwell Smith (1984) The implementation of procedurally reflective languages. in Conference Record of the 1984 ACM Symposium on Lisp and Functional Programming. pp. 331-347.
Emanuel Donchin and Michael G.H. Coles (1988) Is the P300 a manifestation of context updating? Behavioral and Brain Sciences 11:357-374.
Fred Dretske (1995) Naturalizing the Mind. MIT Press, New York.
Gerald M. Edelman (1989) The Remembered Present: a biological theory of consciousness. Basic Books, New York.
Gerald M. Edelman (1992) Bright Air, Brilliant Fire: On the Matter of the Mind. Basic Books, New York.
Gordon L. Fain (1975) Quantum sensitivity of rods in the toad retina. Science 187:838-841.
Richard P. Feynman (1985) QED: The Strange Theory of Light and Matter. Princeton University Press, Princeton.
Barbara L. Finlay and Richard B. Darlington (1995) Linked regularities in the development and evolution of mammalian brains. Science 268: 1578-1584.
Daniel P. Friedman and Mitchell Wand (1984) Reification: Reflection without metaphysics. in Conference Record of the 1984 ACM Symposium on Lisp and Functional Programming. pp. 348-355.
Daniel P. Friedman, Mitchell Wand, and Christopher T. Haynes (1992) Essentials of Programming Languages. MIT Press, Cambridge MA.
Michael S. Gazzaniga (1970) The Bisected Brain. Appleton-Century-Crofts, New York.
Norman Geschwind (1965) Disconnexion syndromes in animals and man. Brain 88:237-294, 585-644.
Sebastian P. Grossman (1973) Essentials of Physiological Psychology. John Wiley & Sons, New York.
Valerie G. Hardcastle (1995) Locating Consciousness. John Benjamins, Philadelphia.
Gerald L. Hazelbauer, Howard C. Berg, and Phillip Matsumura (1993) Bacterial motility and signal transduction. Cell 73:15-22.
S. Hecht, S. Shlaer, and M. H. Pirenne (1942) Energy, quanta, and vision. Journal of General Physiology 25:819-840.
Heikki Hyötyniemi (1996) Turing Machines are Recurrent Neural Networks. in STeP'96---Genes, Nets and Symbols, edited by Alander, J., Honkela, T., and Jakobsson, M., Finnish Artificial Intelligence Society, pp. 13-24.
International Standards Organization (1996) Information Technology: Generic coding of moving pictures and associated audio information. ISO/IEC JTC1/SC29 13818-1:1996.
William James (1884) What is emotion? Mind 9: 188-205.
Akira Kasamatsu and Tomio Hirai (1966) An electroencephalographic study on the Zen meditation (Zazen). Folia Psychiatrica et Neurologica Japonica 20:315-336. Reprinted in Tart (1969).
Bernard Katz (1966) Nerve, Muscle, and Synapse. McGraw-Hill, New York.
S. M. Kosslyn, W. L. Thompson, I. J. Kim, and N. M. Alpert (1995) Topographical representations of mental images in primary visual cortex. Nature 378:496-498.
D. Le Bihan, R. Turner, T. A. Zeffiro, C.A. Cuenod, P. Jezzard, and V. Bonnerot (1993) Activation of human primary visual cortex during visual recall: A magnetic resonance imaging study. Proceedings of the National Academy of Sciences USA 90:11802-11805.
Ming Li and Paul M.B. Vitanyi (1997) An Introduction to Kolmogorov Complexity and its Applications. (2nd ed.) Springer-Verlag, New York.
William G. Lycan (1987) Consciousness. MIT Press, Cambridge MA.
William G. Lycan (1996) Consciousness and Experience. MIT Press, Cambridge MA.
T. Nagel (1974) What is it like to be a bat? Philosophical Review 83: 435-450.
Allen Newell (1990) Unified Theories of Cognition. Harvard University Press, Cambridge MA.
E.A. Newsholme and C. Start (1973) Regulation in Metabolism. Wiley, New York.
Bruno A. Olshausen, Charles H. Anderson, and David C. Van Essen (1993) A neurobiological model of visual attention and invariant pattern recognition based on dynamic routing of information. Journal of Neuroscience 13(11): 4700-4719.
C.W. Omlin and C. Lee Giles (1996) Constructing deterministic finite-state automata in recurrent neural nets. Journal of the ACM 45(4): 937.
George F. Oster, Alan S. Perelson, and Aharon Katchalsky (1973) Network thermodynamics: Dynamic modeling of physical systems. Quarterly Review of Biophysics 6: 1-134.
Roger Penrose (1989) The Emperor's New Mind. Penguin Books, New York.
Jean Piaget (1952) The Origins of Intelligence in Children. International Universities Press, New York.
W.V.O. Quine (1960) Word and Object. MIT Press, Cambridge MA.
Antti Revonsuo and Matti Kamppinen. eds. (1994) Consciousness in Philosophy and Cognitive Neuroscience. Lawrence Erlbaum Associates, Hillsdale, N.J.
W.S. Robinson (1996) The hardness of the hard problem. Journal of Consciousness Studies 3(1): 14-25.
Daniel L. Schacter (1989) On the relation between memory and consciousness: Dissociable interactions and conscious experience. in H.L.Roediger III and F.I.M. Craik (eds.) Varieties of Memory and Consciousness. pp. 355-389.
Alwyn Scott (1996) On quantum theories of the mind. Journal of Consciousness Studies 3(5-6): 484-491.
John R. Searle (1980) Minds, brains and programs. Behavioral and Brain Sciences 3: 417-424.
John R. Searle (1992) The Rediscovery of the Mind. MIT Press, Cambridge MA.
Hava T. Siegelmann (1995) Computation beyond the Turing limit. Science 268: 545-548.
H.S. Sondergaard and P. Sestoft (1990) Referential transparency, definiteness and unfoldability. Acta Informatica 27(6): 505-517.
Henry P. Stapp (1993) Mind, Matter, and Quantum Mechanics. Springer-Verlag, Berlin.
Henry P. Stapp (1995) Why classical mechanics cannot naturally accommodate consciousness but quantum mechanics can. Psyche 2(5) psyche-2-05-stapp.
Steven H. Strogatz and Ian Stewart (1993) Coupled oscillators and biological synchronization. Scientific American 269(6):68-75.
Charles T. Tart, ed. (1969) Altered States of Consciousness. John Wiley and Sons, New York.
Peter Tino, Bill G. Horne and C. Lee Giles (January 1995) Finite State Machines and Recurrent Neural Networks -- Automata and Dynamical Systems Approaches. University of Maryland, College Park Technical Report CS-TR-3396
Endel Tulving (1985) How many memory systems are there? American Psychologist 40:385-398.
Michael Tye (1995) Ten Problems of Consciousness: A Representational Theory of the Phenomenal Mind. MIT Press, Cambridge MA.
Mitchell Wand and Daniel P. Friedman (1988) The mystery of the tower revealed: A non-reflective description of the reflective tower. in Pattie Maes and Daniele Nardi (eds.) Meta-Level Architectures and Reflection. Elsevier Science Publishers, Amsterdam. pp. 111-134.
Lawrence Weiskrantz (1986) Blindsight: A Case Study and its Implications. Oxford University Press, New York.
David H. Wolpert and Bruce J. MacLennan (1993) A computationally universal field computer that is purely linear. Santa Fe Institute Technical Report 93-09-056.