The Engine of Awareness: Autonomous Synchronous Representations

George McKee


Objections to functional explanations of awareness assert that although functional systems may be adequate to explain behavior, including verbal behavior consisting of assertions of awareness by an individual, they cannot provide for the existence of phenomenal awareness. In this paper, a theory of awareness is proposed that counters this assertion by incorporating two advances: (1) a formal definition of representation, expressed in a functional notation: Newell's Representation Law, and (2) the introduction of real time into the analysis of awareness. This leads to the definition of phenomenal awareness as existing whenever an object contains an autonomously updated configuration satisfying the Representation Law with respect to some aspects of its environment. The relational aspect of the Representation Law permits the development of multiple levels of awareness, which provides for the existence of illusions and hallucinations, and permits the identification of a new measure, accuracy of awareness. The relational perspective also permits the incorporation of referential concepts into the framework. Qualia can then be identified with referentially opaque elements of awareness. The functional form of the Representation Law is linked to neurophysiology and the underlying phenomena of chemistry and physics by phenomena involving activity-dependent connectivity.


  1. Introduction: Access-consciousness and Phenomenal-consciousness
  2. The Representation Law
  3. Realtime Representations
    1. The Current Limit of the Represented Past
    2. Autonomous Representations
    3. The ASR Rule
  4. Detached Representations
    1. Interoceptive Representations
      1. Nonrepresentational Functions
      2. Pain, Hunger, and Other Motivational Systems
    2. Inaccessible Access
      1. Blindsight
  5. Reflective Dynamics: Awareness of One's Own Awareness
  6. Exteroflection, Referential Opacity, and the Appearance of Qualia
  7. The Perspectivity Transform: from Being to Being Like
  8. The Neurodynamics of Representations: Transforming serial computations into parallel neurodynamics
  9. The Limit of Accuracy of Representations
  10. The Physics of Representations
    1. Quantum Theory Has Many Dualities
    2. The Four Grounds of Perceptual Representations: Quantum Ontology
    3. The Quantum Limit of Accuracy of Awareness
    4. Where in the Neuron is the Ground of Self-awareness?
    5. Is Quantum Mechanics Necessary for Awareness?
  11. Conclusions
    1. Taking Science Seriously
    2. Properties of the ASR Framework
    3. Reflecting Reality into the Mind: with Magic Mirrors or Real Ones?
  12. Notes
  13. Bibliography

1. Introduction: Access-consciousness and Phenomenal-consciousness

In a watershed paper, Ned Block (1995) clarified the distinction between that aspect of consciousness that provides for access to the contents of thought, access consciousness (A-consciousness), and a distinctly different aspect of consciousness, phenomenal consciousness (P-consciousness), which is consciousness per se, the substrate of conscious thoughts. Block defines A-consciousness as that which is "poised for the control of language and behavior", yet he does not directly approach what it may consist of. P-consciousness, in contrast, need have no connection with behavior.

This view is not without its critics. Daniel Dennett (Dennett, 1991; Dennett and Kinsbourne, 1992) has developed a "multiple drafts" theory of consciousness that purports to do away with the Cartesian theater in which the events of consciousness take place, showing how the events of consciousness cannot be localized to any particular place or time in the perceptual processes that occur in the brain. Dennett concludes that since the events of consciousness cannot be localized, the notion of a single phenomenal substrate for consciousness is incoherent. More recently he has asserted (1996) that "all that the things that people talk about under the rubric of phenomenal consciousness get very handsomely included into a proper notion of access consciousness."

The issue is whether the subject of consciousness is located in a mind consisting of nonphysical, "mental" stuff, or in a mind consisting of the basis, structure and operation of world-type stuff, i.e. the brain. Without precommitting oneself to a particular answer to this question, or even to the existence or nonexistence of P-consciousness, one can approach it by addressing in more detail what is being accessed by A-consciousness. If the subject of consciousness is in the world, then consciousness might be adequately explained as a process that links the world to behavior. Critiques of this functional definition (Chalmers, 1996; Searle, 1992) assert that awareness must be in a nonphysical mind, since awareness remains uncaptured even when all the functional and behavioral properties of humans and minds are exhaustively described. They argue that the chain of functional transformations between sensory input and behavioral output leaves no place for consciousness, and thus it must be a mental phenomenon, with an ontological status independent of the physical world.

The method of this paper is to present a precise, unambiguous functional definition of representations, and to use this definition to link the abstract idea of a function to the dynamics of the classical electrochemistry that operates the human brain, and thus show how physical systems can contain dynamical structures with all the properties of phenomenal awareness. A fundamental component of this definition is an enhanced functional form, in which the function is reified as a datum on which other functions can operate. While this first-class functional is commonly used in computer science and cognitive science, it is rarely found in physics, biology, or philosophy [Note 1].

We use the word "awareness" throughout this paper in order to avoid the association of essential mystery that often comes along with the word "consciousness". The preservation of mystery in the nature of consciousness often appears to be a goal in many discussions of the subject. The operation, utility, and origin of mysterious elements in the understanding of the universe, and the need for their preservation, constitute a large and complex topic in psychology and sociology that is beyond the scope of this paper. Since our goal here is to provide a framework for understanding which eliminates the need for mysterious elements in the nature of consciousness, we attempt to avoid triggering needless injections of mystery by not using the word except in quotational contexts.

2. The Representation Law

The concept of a representation is basic to work in cognitive science and artificial intelligence. It is representations that are the objects upon which cognitive processes operate. The concept has made its way into the philosophy of consciousness, leading to a class of theories called representationalism (Dretske, 1995; Tye, 1995; Lycan, 1987, 1996). Yet the nature of a representation is never precisely defined. A symbol in a representational system acquires its meaning by its correspondence with the thing that it symbolizes. How does that correspondence work?

Allen Newell provided the key definition in Unified Theories of Cognition (1990), in the form of The Representation Law:

The Representation Law
decode[encode(T)(encode(X))] = T(X)

That is, a representational system R for a phenomenon P consists not only of a set of internal states Y matching the external situation X, but also of a set of transformations D converting one element of Y to another and reflecting the transformations T that convert one element of X to another. Further, the internal states and transformations Y and D are linked to the external states and transformations X and T by encoding and decoding functions that are different for each R, P pair. R is a representational system for P whenever and only when the Representation Law holds for the relations between them.

The notion of function supporting the Representation Law is more powerful than that which may be familiar to many philosophers and natural scientists. In this system, a function T is not only a process that transforms one state Xi to another state Xj, written as Xj = T(Xi), but it is an object that can be operated on by other functions to produce still more functions. This kind of function appears, for example, in the Representation Law as D = encode(T). The value D might be more precisely written D = encode(T()), and is of type function - it is fully capable of acting upon other objects, Yj = D(Yi). The idea that functions can operate on other functions is a fundamental concept of computation, providing structure to the understanding of computerized phenomena such as compilers, interpreters, and loaders that convert static text into changes in machine behavior (Friedman et al., 1992). Introducing higher-order functions into the theoretical armamentarium applicable to the problems of awareness greatly expands the range of phenomena that can be encompassed effectively. As we shall see later, the absence of unitary serial processing in the brain leads to a radical difference in the way higher-order functions appear in its operation, yet the fundamental concept of spatially organized patterns in a uniform physical substrate causing corresponding changes in the sequential structure of behavior remains.
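The higher-order character of encode() can be made concrete with a minimal sketch in Python. The particular encodings below (doubling a number, advancing the world by 3) are arbitrary illustrative choices, not anything from Newell; the point is only that encode(), applied to the external transformation T, yields an internal transformation D that is itself of type function and satisfies the Representation Law.

```python
# A minimal sketch of Newell's Representation Law with a higher-order
# encode(). External states are integers; the internal code doubles
# them. Both codings are arbitrary, chosen only for illustration.

def encode(x):
    """Encode an external state, or an external transformation, into internal form."""
    if callable(x):
        # Encoding a transformation: D = encode(T) is itself a function
        # that acts on internal states.
        return lambda y: encode(x(decode(y)))
    return 2 * x  # encoding a state

def decode(y):
    """Decode an internal state back into external terms."""
    return y // 2

def T(x):
    """An external transformation: the situation advances by 3."""
    return x + 3

D = encode(T)   # D is of type function: fully capable of acting on other objects

# The Representation Law: decoding the internally transformed encoding
# of X gives the same result as transforming X directly.
X = 10
assert decode(D(encode(X))) == T(X)
```

Running the internal transformation and decoding (encode 10 to 20, apply D to get 26, decode to 13) agrees with applying T in the world (10 to 13), which is exactly the correspondence the law demands.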

This functional definition of representational system sheds light on a number of confusions about the nature of representations that lead to unfounded claims of the insufficiency of representational explanations of awareness. Representational systems are not static sets of brain states. A frozen brain is not a representational system and does not contain any representations, since the transformational functions of a frozen brain are radically different from those of a normal awake brain. A representational system is not a composition of pure functions providing a stateless series of transformations from sensory input to behavioral output, but contains an encoded state component Y that provides a place to locate a ground for any content that might be revealed by the process of introspection that interrogates Access-consciousness.

3. Realtime Representations

3.1 The Current Limit of the Represented Past

Simply saying that awareness consists of representations will not work. Many representational systems exist and have been built that are not aware in any ordinary sense of the word. The representations involved in awareness must be somehow special. This specialness can be approached by keeping the perspective of the A-consciousness - P-consciousness distinction. One of the aspects of the world that awareness has access to is its history. Our awareness of our past is as strong and unassailable as our awareness of our present state and environment. How is awareness of the past related to awareness of the present?

Awareness of the past of course takes the form of memories, and there is an enormous body of psychological and physiological work devoted to the topic of "memory and learning". Most of this body has developed within a paradigm directed at following the trace of a particular event through the various memory systems of an organism, discovering how the memory trace evolves in consolidation and degradation as it moves from one subsystem to another by processes of access and storage. In the context of awareness, however, we want to adopt a complementary, spacetime-centric perspective, in which events are fixed and the experience of the organism moves, being influenced by some of the incidents in a stream of events as they occur, while being uninfluenced by others until they are recalled from storage to pass once again through the processing structures that make memories available to Access-consciousness.

In introducing memory as a constraint on the structure of awareness, we are led to the observation that perception of the past is continuous with perception of the present. A well-formed theory of Access-consciousness will make no fundamental distinction between what kinds of experiences are made available by A-consciousness of past events and what kinds of experiences are made available by A-consciousness of present events. Thus a representational relation between events in a system and events in its environment is not a relation of awareness unless the internal transformation aspect of those internal events occurs at the same time as the external transformation aspect of the corresponding external events. A system is aware of an external phenomenon only if it contains a synchronous representation of the phenomenon.

The notion that time and memory are somehow deeply related to awareness has been lurking in the literature for some time (e.g. Tulving, 1985; Edelman, 1989; Schacter, 1989) yet it is rarely found in more abstract discussions of the subject (but see Hardcastle, 1995). This absence has aided the survival of a number of unsound refutations of the possibility that functional models have the capability to explain all concepts of awareness. For example, once realtime is introduced, it is easy to explain much of the discomfort associated with the relation between Searle's Chinese Room situation and awareness. The Chinese Room (Searle, 1980) is unaware not because of any necessary flaws in the explanatory power of functionalism, but because it is defined as an offline system - it contains no timing functionality to synchronize its internal operations with its environment, and thus is incapable of maintaining any temporally sound "awareness", or even responding with any temporally appropriate behavior to its input. The behaviorally unsound Chinese Room is a fortiori unsound with respect to its interior properties such as intentionality and awareness. [Note 2]

3.2 Autonomous Representations

Restricting the class of potentially aware systems to those containing synchronous representations still admits ones that are obviously unaware; further qualifications are necessary. Systems of mirrors and lenses do not support awareness, even when provided with projection screens for the formation of real images, although they provide for synchronous transformation between external events and the internal images, because they do not include a decoding function. Optical systems simply encode the external situation; it is up to the judgment of an observer to perform the decoding function that makes it possible to check the integrity of the encoding.

Next we can consider a digital television system. It operates in realtime, encodes the scene viewed by the camera according to the MPEG algorithm (ISO, 1996), and transmits to the digital receiver not a simple encoding of the scene, but encoded instructions to the receiver concerning how it should update its internal state to transform it from a representation of the previous state of the external scene to a representation of the scene's current state. The internal state is then decoded and presented to the viewer via a CRT or LCD display, and this displayed scene is (approximately) the same as the original scene. The digital TV system satisfies the Representation Law in realtime.
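The update-instruction character of the transmission can be sketched in a few lines of Python. The toy delta scheme below stands in for MPEG's inter-frame coding, which is vastly more sophisticated; frames here are flat lists of pixel values, and the "instructions" are simply (index, new value) pairs.

```python
# Sketch of the digital-TV analogy: the transmitter sends not the scene
# itself but instructions telling the receiver how to update its
# internal state from a representation of the previous scene to a
# representation of the current one. This toy delta scheme is only an
# illustration of the idea, not actual MPEG coding.

def encode_update(prev_frame, new_frame):
    """Encode instructions: (index, new_value) pairs for changed pixels."""
    return [(i, v) for i, (u, v) in enumerate(zip(prev_frame, new_frame)) if u != v]

def apply_update(state, instructions):
    """Receiver: transform its internal state per the instructions."""
    state = list(state)
    for i, v in instructions:
        state[i] = v
    return state

scene_t0 = [0, 0, 5, 0]
scene_t1 = [0, 7, 5, 1]

receiver_state = list(scene_t0)               # synchronized starting state
delta = encode_update(scene_t0, scene_t1)     # the transmitted instructions
receiver_state = apply_update(receiver_state, delta)

assert receiver_state == scene_t1   # decoded display matches the original scene
```

The receiver never sees the scene directly; it sees only encoded instructions for transforming its internal representation, yet the decoded result tracks the world in realtime, satisfying the Representation Law.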

But the television is clearly not itself aware of the scene. What might be missing? We can find the answer in our question: there's no "itself" in a simple realtime representational system. An essential step in the development of a self is the detachment of control over one's fate from the influence of others. The representation in the receiver has no causal autonomy, and the Representation Law as stated by Newell has no place for an autonomous representation. This can be remedied by replacing the equality term in Newell's Law by an equalizing function, which continually applies the internal transformation to the internal state, and then verifies that the updated state remains in accurate correspondence (via the encode/decode relation) with the external state. This replacement accomplishes two things: it brings dynamics explicitly into our functional description, and it provides a causal role for awareness.

3.3 The ASR Rule

We can call this revised law the ASR Rule, and write it as
The ASR Rule
{encode_t(X_{t+dt}), encode_{t+dt}, decode_{t+dt}} = equalize(decode_t[encode_t(T)(encode_t(X_t))], T(X_t))
where X is the original external situation and T is the external transformation as before; the subscripts index the time at which each state and each transduction function holds, with dt the interval between updates.
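The role of equalize() can be illustrated with a small realtime loop in Python. All the particulars here are simplifying assumptions for the sketch: states are numbers, encode doubles, the external dynamics add 1 per tick, and "equalizing" means autonomously applying the internal transformation and re-anchoring the internal state to the world whenever correspondence is lost.

```python
# A minimal sketch of the ASR Rule's equalize() step. The codings and
# dynamics are illustrative assumptions, not derived from any brain model.

def encode(x): return 2 * x
def decode(y): return y / 2

def T(x): return x + 1     # external dynamics: the world advances by 1
def D(y): return y + 2     # internal dynamics: encode(T) under this coding

def equalize(x, y):
    """Autonomously apply D to the internal state, verify via decode that
    it still corresponds to T applied to the external state, and re-encode
    from the world if the correspondence has been lost."""
    y_next = D(y)
    x_next = T(x)
    if decode(y_next) != x_next:   # drifted: re-anchor to the world
        y_next = encode(x_next)
    return x_next, y_next

x, y = 0, encode(0)
for _ in range(5):                 # run the representation in realtime
    x, y = equalize(x, y)

assert decode(y) == x              # the representation stayed synchronized
```

Unlike the television receiver, the internal state here updates itself via its own copy of the dynamics; the encode/decode link is used to verify and correct the correspondence, which is what gives the representation its causal autonomy.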

The relation between the ASR Rule and awareness can be stated as a hypothesis:

The ASR Hypothesis
A system is phenomenally aware of a phenomenon whenever and only when it contains an autonomously updated configuration that satisfies the ASR Rule with respect to that phenomenon in realtime.

4.  Detached Representations

The ASR framework was originally developed as a solution to the problem of primary exteroceptive awareness, in which a system with well-defined boundaries is aware of phenomena outside of those boundaries. But it can encompass without strain other forms of awareness by relocating the "external situation" to the inside of the system, by introducing additional representational levels into the model, and by permitting the encoding and decoding processes to malfunction or otherwise operate less than perfectly. In all of these modifications, the ASR rule still holds, but its implementation has become detached, either from the external world or within itself. In order to keep the terminology clear, in what follows we will deviate from Newell's names, and rename the "external situation" to be the "represented situation" and rename the "internal situation" to be the "representing situation".

4.1 Interoceptive Representations

4.1.1 Nonrepresentational Functions
Except in panpsychist models, the vast majority of the processes occurring in living systems are unconscious. We are not aware of the neural systems involved in the regulation of breathing, the sequencing of heart atrial and ventricular contractions, or the myriad of muscular microadjustments in the maintenance of posture, not to mention basic cellular functions such as the molecular exchanges of respiration, metabolism and hormonal secretions. These are regulated in astonishingly complex homeostatic networks (Bernard, 1878; Newsholme and Start, 1973; Oster, Perelson and Katchalsky, 1973), but do not satisfy the Representation Law, since they do not explicitly encode a representation of the transformation function for the represented system. In many cases these regulatory systems operate in what might be called a semi-representational mode, since like a simple remote thermostat, they contain an image of the represented situation, but operate on the image directly, instead of via an explicit representing transformation that is isomorphic to the represented transformation.
4.1.2 Pain, Hunger, and Other Motivational Systems
When awareness becomes involved, however, the systems can be seen to operate with a fully representational architecture. Most interoceptive systems, such as awareness of muscular load and effort, can be brought into the ASR framework simply by locating the represented system within the body, and keeping the representing system in the brain. "Appetitive systems" involved in motivation require additional structures. These structures are introduced from two different directions, both involving learning.

Motivational systems are critical to the survival of mobile organisms; without them, either motion is absent and the organism is dependent on passive processes such as diffusion, convection, and the behavior of mobile prey to bring nutrients to it, or motion is random and undirected, which expands the space searched by the organism for additional resources in only a sublinear way. Thus even bacteria are sensitive to the distribution of nutrients in their environment and modify their swimming in a way that leads them in the direction of greater nutrient concentrations. People who do not understand how this modification of behavior occurs characteristically attribute awareness and motivation to each bacterium, saying that it "wants to go" up the nutrient concentration gradient. After decades of study, however, bacterial chemotaxis is now understood at the molecular level (Hazelbauer, Berg and Matsumura, 1993; Armitage, 1992). The models that have been developed are sufficiently detailed that they can be analyzed exhaustively for adherence to the ASR Rule. Although such an analysis has not been attempted, it appears likely that there are no processes or structures in bacteria for which the ASR rule holds: bacteria are not aware of their environment.
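How gradient-climbing behavior can arise with no representation at all can be seen in a run-and-tumble sketch. Everything here is illustrative (a one-dimensional world, a single nutrient peak, made-up tumble probabilities): the cell merely compares its current reading against its last one and tumbles less often when things are improving. There is no encoded transformation of the environment anywhere in the loop, so nothing satisfies the ASR Rule, yet the behavior climbs the gradient.

```python
# Run-and-tumble chemotaxis as a purely reactive process: a one-step
# comparison of nutrient readings biases the tumble probability.
# No internal state encodes a transformation of the environment.
# All parameters are illustrative, not measured values.
import random

def nutrient(pos):
    """Concentration increases toward position 100."""
    return -abs(pos - 100)

def chemotaxis(steps=2000, seed=0):
    rng = random.Random(seed)
    pos, direction = 0, 1
    last = nutrient(pos)
    for _ in range(steps):
        pos += direction
        now = nutrient(pos)
        # tumble (reverse) more often when concentration is falling
        p_tumble = 0.1 if now > last else 0.6
        if rng.random() < p_tumble:
            direction = -direction
        last = now
    return pos

final = chemotaxis()
assert abs(final - 100) < 80   # the walk ends up near the nutrient source
```

Uphill runs last about ten steps on average while downhill runs are cut short after one or two, so the walk drifts reliably toward the peak, mimicking "wanting" without any representation of the gradient.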

In more complex systems, motivation has endured intense scrutiny from the perspective of reinforcement learning, with analyses at both the behavioral level (Bower and Hilgard, 1981) and physiological level (Grossman, 1973). The theories developed in these analyses have not used the higher-order functions and dynamics that are essential to the ASR framework, and thus have not been able to capture explicitly the involvement of consciousness in motivated behavior.

We can use William James's theory of emotion (James, 1884) to see how the ASR framework expands our theory-building capabilities for interoceptive systems. In its original form, this theory stated that emotions arise as responses to behavior: aware fear appears in us as a consequence of running away, and not vice versa. Research since then has shown that the autonomic correlates of emotion (changes in respiration, perspiration, heart rate, gastric motility, etc.) can appear without somatic activity. Their appearance in these situations was credited to the influence of learning, and classical conditioning provides an effective means for accomplishing this link without involving awareness. In the ASR framework, for emotions to be aware requires the existence of a neural representation of the behavioral and autonomic effects of the perceptual situation which evolves semi-independently of the represented situation and behavior. When the rate of evolution of the internal representation exceeds the evolution of the represented situation, a phase differential occurs. This differential can be identified with the experience of a motivational drive in the situation, the "urge to run".

The phase differential between the represented situation and the representing situation is one of the places in the ASR framework where qualia might be located. To bring this differential to access-consciousness requires a second-level ASR system. In the second-level system the first-level ASR system becomes the represented subject, and another representing system is linked to it via encoding and decoding transformations. The range of changes in the differential between represented and representing systems are captured as the representing transformation function for the represented system.

The role of second-level ASR systems is especially salient in the case of aversive motivation. For instance in the analysis of pain, the evolutionary requirements for intelligent organisms are conflicting. In order to minimize the effects of stimuli that cause damage to the organism, it is necessary to escape the situation as rapidly as possible. Yet in order to identify the correct mode of escape and learn how to avoid future damaging incidents, it is necessary to direct attention towards the site of damage. With a second-level ASR, we can suggest that the representation of the conflict between approach and avoidance constitutes an essential aspect of the conscious painfulness of damaging situations.

4.2 Inaccessible Access

Given the presence of the transduction functions encode() and decode() in the ASR framework, it is possible to investigate the effects of their absence. Since the representations are autonomously active, their transformations continue to operate even in the absence of one or more transduction functions. A missing encoding function provides a straightforward explanation for the phenomenon of phantom limbs, in which the internal representation of the limb and its encoded transformations continue to operate autonomously, while the true pathway beyond the limb stump is absent.
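The phantom-limb case can be reduced to a few lines: because the representation is autonomously active, removing the encode() pathway does not stop it. In this sketch, the internal transformation D keeps cycling a felt-position code with no represented limb behind it; the dynamics and values are purely illustrative.

```python
# Sketch of a detached representation: with the encode() pathway from
# the (absent) limb removed, the autonomous internal transformation D
# keeps updating the representing state open-loop - the phantom.
# The cycling four-state code is an illustrative assumption.

def D(y):
    """Internal dynamics run regardless of input."""
    return (y + 1) % 4            # e.g. a cycling felt-position code

def step(y, encoded_input=None):
    # With an intact limb, encoded input would re-anchor the state;
    # with the pathway absent, the representation runs autonomously.
    return encoded_input if encoded_input is not None else D(y)

y = 0
history = []
for _ in range(6):                # no input ever arrives from the limb
    y = step(y)
    history.append(y)

assert history == [1, 2, 3, 0, 1, 2]   # the representation stays active
```

The representing state continues to evolve through its full repertoire even though nothing in the world corresponds to it, which is just what a missing encoding function predicts.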

In cross-modal and higher-order awareness, in which, for instance, the subject produces verbal descriptions of perceptual phenomena or responds in a mode requiring comparisons in multiple sensory modalities, multiple ASRs linked by additional, intermodal transduction functions are required. Disruption of these functions due to brain damage produces disconnection syndromes (Geschwind, 1965; Gazzaniga, 1970). In some disconnection scenarios, the positive feedback channels needed to sustain autonomous activity are damaged, leading to a decline in autonomous activity, while in others, the controlling, negative feedback channels are the ones that are affected, leading to the runaway activity found in syndromes such as fluent aphasia.

4.2.1 Blindsight
Attention in the consciousness community has recently been attracted to the phenomenon of blindsight (Weiskrantz, 1986), in which patients with damage to the visual cortex can be shown to have the ability to perform discriminations in areas of their visual field in which they deny seeing anything. Unlike Block's (1995) analysis which takes the patients' reports at face value, saying that blindsight involves a failure of both phenomenal and access consciousness, the ASR model suggests that a certain amount of phenomenal consciousness of the visual world may well remain intact. Since functional brain imaging has shown the involvement of primary visual cortex in mental imagery (Kosslyn, 1993; Le Bihan et al., 1993), it may be that this tissue is a necessary relay for the reporting of visual experience as Block suggests. The portions of brain tissue that remain may have lost the ability to relay their knowledge through to the language system, while retaining phenomenal consciousness of certain aspects of the visual world.

5. Reflective Dynamics: Awareness of One's Own Awareness

We have already seen that motivational awareness requires the introduction of second-order ASRs. There is no fundamental reason why even higher-order ASRs cannot exist. They require additional brain tissue and physiological support infrastructure, but these became available in abundance during the exponential expansion of brain size as humans evolved (Finlay and Darlington, 1995). The idea of a pyramid of ASRs, with elements at each level maintaining an abstraction of the elements immediately below them, superficially fits in with classical concepts of how awareness might be implemented, with a final topmost element of the pyramid located in some central place in the brain, perhaps Descartes' pineal gland, perhaps Cotterill's (1995) anterior cingulate. But this model appears to assume at best a constant amount of neural tissue to implement each ASR element [Note 3]. If there were a fixed structure of abstractive layers to the mind, this model could be acceptable, but it conflicts with both the dynamic structure of thought, in which percepts continually enter and leave awareness, and with the reflective capacity of awareness, which gives awareness access not only to perceptive elements, but to the relations between them. Not only can this not be encompassed within the pyramid model without destroying its layered, hierarchical structure, but the raising of links between elements to the status of elements that must be represented creates an infinitely expanding series of shells containing representations of links, and links between representations of links, and links between representations of link-representations, ad infinitum.

Research in the semantics of programming languages has approached these problems in the context of procedural reflection. Procedural reflection was developed as a way of permitting programs in high-level, functional languages such as Lisp to modify their own execution at runtime while maintaining the simplicity and purity of their semantics. It does this in two steps. The first step gives the program access to its execution environment by the introduction of only two functions, often named reify() and reflect(). These bring the hidden portions of the execution environment, such as the call stack and the name-value bindings, into the realm of normal data structures where they can be examined or modified, and then reinstalled back into their original functional roles. The second step preserves the semantics of the language while this is happening by use of a meta-circular interpreter, in which the language is implemented by an interpreter as usual, but instead of being expressed in another language or in the semantics of physical hardware as is usually done, with the corresponding increase in semantic complexity, the interpreter is viewed as a program in the same reflective language, with the same reflective powers to modify its own interpreter.
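The first step, reify() and reflect(), can be illustrated with a toy interpreter in Python. Real reflective Lisps such as 3-Lisp expose the continuation and call stack as well; this sketch handles only the name-value bindings, and the Interp class and its methods are inventions for illustration.

```python
# A toy illustration of reify()/reflect(): the environment (name-value
# bindings), normally hidden inside the interpreter, is exposed as an
# ordinary data structure, modified, and reinstalled into its causal
# role. Class and method names are illustrative, not from any real
# reflective language implementation.

class Interp:
    def __init__(self):
        self._env = {}            # the hidden execution environment

    def run(self, name, value):
        """Stand-in for executing a binding statement."""
        self._env[name] = value

    def reify(self):
        """Bring the hidden environment into the realm of ordinary data."""
        return dict(self._env)

    def reflect(self, env):
        """Reinstall a (possibly modified) environment into its functional role."""
        self._env = dict(env)

interp = Interp()
interp.run("x", 1)

env = interp.reify()              # the environment is now plain data
env["x"] = env["x"] + 41          # the "program" modifies its own bindings
interp.reflect(env)               # ...and puts them back in force

assert interp.reify()["x"] == 42
```

The essential move is the round trip: hidden machinery becomes inspectable data, and modified data becomes operative machinery again, without the program ever leaving its own language.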

This combination of reflection and meta-circularity contains a difficulty in its structure: there is now an infinite tower of meta-circular interpreters in operation simultaneously, with the interpreter at each level implementing the program at the level below it, and being implemented by the level above (des Rivieres and Smith, 1984)[Note 4]. But this difficulty is only semantic. In the actual implementation of reflective languages, there is a top level of physical implementation where all the work really occurs, and intermediate levels are only created on demand. In fact, reification of interpreter contents one level up need not even occur by a recursive level-shift (Friedman and Wand, 1984) but can be accomplished iteratively within a single-level interpreter, without any actual level-shifting (Wand and Friedman, 1988).

The collapse of the reflective tower in certain implementations of reflective programming languages shows how the issue of the causal power of awareness can be resolved in the case of interior awareness, but does not extend to awareness of the world. For this we need to recognize that the ASR rule involves transfer of information between the system and the world via the encode() and decode() functions, in computational terms input and output, and that the concepts of reflection can be applied to this process as well. We can distinguish this mode of reflection from its original mode with two new names: exteroflection and interoflection, respectively. With exteroflection, we find an important role for the notion of referential transparency.

6. Exteroflection, Referential Opacity, and the Appearance of Qualia

The phrase "referential transparency" was introduced by Quine (1960; see Sondergaard and Sestoft (1990) for a more formal discussion) to capture certain phenomena in the semantics of natural language. It has since then been absorbed into computing research as an essential characteristic of functional programming languages. In a linguistic context, transparent reference describes the relation between the phrase "George Washington's horse" and a certain white equine, while opaque reference describes the relation between the phrase "George Washington's horse" and a certain 25-element string of characters. In a perceptual context, transparent reference describes the relation between the perceptual representation of a certain white equine and the horse itself, while opaque reference describes the relation between the perceptual representation of a certain white equine and the experiential characteristics of perceiving that horse apart from the horse itself. In representational theories of perception, opaque perceptual references turn out to be precisely qualia.

All universally powerful formal languages have mechanisms for expressing opaque references. Commonly this is provided via special syntax such as quotation marks, but sometimes in other ways, such as Lisp's QUOTE special form. In conventional computer hardware it is provided via "immediate-mode" instructions, which transport a value from the preloaded, fixed instruction stream into mutable storage. In Turing machines this capability appears in the state transition table as an operation that combines tape motion with a state change, contingent on the value of the tape mark at the current head position. A prediction from this theory of perception is then that the neural architectures of brains that support conscious awareness contain structures supporting opaque reference within their perceptual systems.
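The use/mention distinction behind these mechanisms can be shown in a few lines of Python, by analogy with Lisp's QUOTE: a transparent context sees through an expression to its value, while a quoted (opaque) context sees the expression itself. The variable name and the horse's name below are illustrative choices.

```python
# Transparent vs opaque reference, by analogy with Lisp's QUOTE.
# "washingtons_horse" is an illustrative identifier; any co-referring
# expression could replace it in a transparent context without
# changing the result, but not in the opaque (quoted) one.

washingtons_horse = "Blueskin"    # the referent

# Transparent context: only the value matters, so substituting a
# co-referring expression preserves the result.
def describe(x):
    return "a horse called " + x

# Opaque context: the expression is mentioned rather than used,
# as with a quoted symbol - it denotes the identifier itself.
quoted = "washingtons_horse"

assert describe(washingtons_horse) == describe("Blueskin")  # transparent
assert quoted != washingtons_horse                          # opaque: the names differ
assert len(quoted) == 17          # properties of the expression, not the horse
```

The opaque reference has properties (a length in characters) that the referent lacks, just as the quale of seeing the horse has properties apart from the horse itself.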

Identification of a direct mechanism for perceptual opacity at the level of neural circuitry may prove difficult. Consider the problem of identifying which transistors in a conventional computer are responsible for execution of an ADD-IMMEDIATE instruction, if one is denied access to the processor manual describing the instruction encoding and to the architecture manual describing the direction of opcodes to functional units. However, an indirect mechanism has already been described.

In research on the physiology of visual attention, Anderson and others (Anderson and Van Essen, 1987; Olshausen, Anderson and Van Essen, 1993) have proposed three classes of neural circuits, shifter circuits, scaler circuits, and control circuits, that are anatomically, physiologically and developmentally plausible, and can be combined to form a system with many of the properties of attention-guided visual perception, including position-invariance, scale-invariance, and object-centered coordinate transformations. Beyond their analyses, these circuits, with the addition of recurrence and associative memory, constitute sufficient elements to support universal computability, and also provide for a physiological substrate to the mental zooming that accompanies the focusing of visual attention.

When attention is directed at the perceptual environment, its fundamental role is to bring certain aspects of that environment into play for more complex kinds of processing than the aspects outside its focus receive. This direction of focus can be accomplished by peripheral means such as eye movements, orientation of the ear pinnae, or locomotion toward the object of attention. [Note 5] It can also be accomplished by purely neural transformations, in which the activity flows within the brain are reorganized to apply central processing to restricted portions of the perceptual field, with an effect on the information delivered to central processes nearly equivalent to physical approach, but with significantly reduced energy expenditure, as well as reduced risk of detection by movement-sensitive predators and prey. However, due to limitations in receptor resolution, an attentional zoom operation reaches a limit at which the central attentional field is filled to capacity by the information delivered by a single peripheral element, and further extension of the operation provides no additional value. At this limit, attention is fully occupied, the broader context of the phenomenon that produced the stimulation of the peripheral perceptual element is lost, and a single element of perceptual quality, a quale, has been delivered to the processes of higher-order perception and cognition. A quale is a limit fixpoint of focused perceptual attention. [Note 6]
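
The zoom-to-fixpoint idea can be made concrete with a toy sketch in Python; the one-dimensional field, the halving rule, and the summed-activity measure are all assumptions introduced purely for illustration:

```python
def zoom(field, window=1):
    """Narrow attention to the more active half of the field, repeatedly,
    until a single peripheral element fills the attentional window."""
    while len(field) > window:
        mid = len(field) // 2
        left, right = field[:mid], field[mid:]
        field = left if sum(left) >= sum(right) else right
    return field

# A hypothetical activity profile across eight receptors.
percept = [0.1, 0.2, 0.9, 0.3, 0.1, 0.0, 0.4, 0.2]
quale = zoom(percept)

# Zooming further changes nothing: the single element is a fixpoint.
assert quale == [0.9]
assert zoom(quale) == quale
```

Once the window holds a single element, the operation maps its own output to itself, which is exactly the fixpoint property claimed for a quale in the text.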

7. The Perspectivity Transform: from Being to Being Like

With the ability of procedural reflection to transform the dynamic state of a functional system into a structure that can be operated upon and reasoned about, then reinstalled into the dynamics of the system, comes the ability to transmit that structure to a different system, into which it can be installed and activated. When this occurs, differences in the physical location of the receiving system and thus its external situation will cause a violation of Newell's Representation Law, while differences in processing capabilities will cause the realtime constraint in the ASR rule to be violated. Thus awareness cannot be communicated without serious loss. Yet the loss is not total or absolute. In complexly aware systems such as humans, multiple representations of many external phenomena occur. Locations are represented in object-centered as well as egocentric space. Temporal changes can be represented at multiple resolutions, and the synchronization systems required for ASR capability allow those representations to be accelerated or decelerated to some degree. Insofar as these adjustments can be made successfully, conformance with the ASR rule is preserved and transmission of awareness will be more or less complete. [Note 7]

8. The Neurodynamics of Representations: Transforming serial computations into parallel neurodynamics

A critical issue in the relation of the ASR framework to human, biologically based awareness arises from the fact that the mathematical theory of functional computation within which the Representation Law is stated is developed in terms of a single, serial thread of function applications to discrete, digital items. Its theory is fundamentally linked to arithmetic over the integers. However, the neural activity of the brain occurs in an essentially parallel, analog mode. Its theory is fundamentally linked to continuous spaces of nonlinear differential equations over the reals. [Note 8] In order for the ASR framework to have a mathematically rigorous foundation, these two frameworks, digital computation and nonlinear dynamics, must be formally linked.

That this linkage is possible is proved every time a student in a course on the theory of computation solves a homework problem involving the hand-execution of the specifications for a universal Turing machine. Yet attempts by mathematicians to make the linkage explicit from the direction of discrete computation have proved unproductive (Blum, Shub and Smale, 1989; Wolpert and MacLennan, 1993; Siegelmann, 1995), with little effect on the broader research community following from these efforts.

However, a more active community is advancing under the banner of recurrent neural networks (Tino, Horne and Giles, 1995; Omlin and Giles, 1996; Hyötyniemi, 1996). This work analyzes the properties of neural networks as discrete-time dynamical systems. It remains unrealistically simplified to date, since even with the assumption that all neural activity consists of discrete action potentials, neurons operate asynchronously except in pathological situations such as epileptic seizures. An additional limitation of analyses so far is that they have been limited to a single level of recurrence. Multiple recurrence levels are required to obtain the complexities of human memory phenomena such as short-term memory and rehearsed recall.
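
The flavor of this line of work -- recurrent networks exactly implementing finite automata -- can be sketched with a second-order net of threshold units in the style analyzed by Omlin and Giles; the parity automaton, the weight construction, and the threshold below are a constructed example, not taken from the cited papers:

```python
def second_order_step(s, x, W, theta=0.5):
    """One synchronous update of a second-order recurrent net:
    s_i(t+1) = H( sum_jk W[i][j][k] * s[j] * x[k] - theta )."""
    H = lambda v: 1.0 if v > 0 else 0.0
    return [H(sum(W[i][j][k] * s[j] * x[k]
                  for j in range(len(s)) for k in range(len(x))) - theta)
            for i in range(len(s))]

# Hypothetical example: the parity automaton over {0, 1}.
# State 0 = "even", state 1 = "odd"; transition delta(q, a) = q XOR a.
n_states, n_symbols = 2, 2
W = [[[1.0 if (j ^ k) == i else 0.0 for k in range(n_symbols)]
      for j in range(n_states)] for i in range(n_states)]

s = [1.0, 0.0]                       # start in the "even" state (one-hot)
for bit in [1, 0, 1, 1]:             # three 1s: parity is odd
    x = [0.0, 0.0]; x[bit] = 1.0     # one-hot input symbol
    s = second_order_step(s, x, W)
assert s == [0.0, 1.0]               # the net halts in the "odd" state
```

Note that this net updates all units on a single shared clock; the text's point is precisely that real neurons do not, which is what the fully synchronized construction leaves out.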

It is possible to develop a series of conjectures about the results that may emerge in the development of this line of research. As the spectrum of recurrence intervals in an asynchronous neural net broadens from the sharp line that corresponds to full synchronization, certain modes will appear in which one set of units operating at a long interval is linked to another set of units operating at a much shorter interval (cf. Strogatz and Stewart, 1993). This will indicate the appearance of a "rehearsal" capability.
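
The simplest model of such a linkage between modes is the phase-locking of two coupled oscillators with different natural frequencies, as in the Strogatz and Stewart treatment. A minimal Euler-integrated sketch (the frequencies, coupling strength, and step size are arbitrary choices for illustration):

```python
import math

def couple(w1, w2, K, steps=20000, dt=0.001):
    """Two phase oscillators with natural frequencies w1, w2 and symmetric
    coupling K; when the gap |w2 - w1| is below 2K, their phase difference
    locks at sin(d*) = (w2 - w1) / (2K) instead of drifting."""
    p1 = p2 = 0.0
    for _ in range(steps):
        d = p2 - p1
        p1 += dt * (w1 + K * math.sin(d))
        p2 += dt * (w2 - K * math.sin(d))
    return (p2 - p1) % (2 * math.pi)

# Gap of 1 rad/s against 2K = 2: locked, with sin(d*) = 0.5, d* = pi/6.
d_locked = couple(5.0, 6.0, 1.0)
assert abs(d_locked - math.pi / 6) < 0.01
```

A slow unit locked at a fixed phase relation to a fast one is the minimal dynamical skeleton of the "rehearsal" linkage conjectured above.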

The appearance of multiple, linked recurrence modes will permit the introduction of three-dimensional phase portraits. These will make it possible to view certain system components from the perspective of a "potential" as is required for the use of catastrophe theory in analyses. With the help of catastrophe theory, it will be possible to classify the modes of appearance of stable constellations of attractor basins out of an undifferentiated embryonic net. These stable constellations will have properties that correspond to important psychological phenomena. These phenomena include the "catastrophic" reclassification that occurs in the course of insight learning and categorical perception, the resistance to reorganization of conceptual structures that controls the historical evolution of phonology, syntax and semantics in human language, and most importantly for our purposes, the explicit representing transformations that are necessary for conscious awareness.

9. The Limit of Accuracy of Representations

With the ASR framework it is possible to analyze properties of awareness that were not accessible to previous theories. One of these properties is the quantification of the amount of information passing through the awareness of a conscious system.

The representing transformation in the ASR rule can be viewed as a communication channel within the context of classical information theory. The channel capacity of the representing transformation is the amount of state-change that it induces in the representing system due to changes in the represented system. Since the rate of change of the represented system varies, the amount of information passing through the encoding, decoding, and transforming functions varies correspondingly, and cannot be determined a priori. Further, since behavior can be directly, "reflexively" generated in response to situations without any information passing through a conscious representational process, no behavioral measurement can be guaranteed to tap even the minimum of the capacity of the encoding, transforming, and decoding processes.
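
The classical channel measure invoked here can be computed directly. The two joint distributions below are hypothetical illustrations of the extremes: a noiseless representing transformation and a useless one:

```python
import math

def mutual_information(joint):
    """Mutual information I(X;Y) in bits from a joint distribution over
    (represented state X, representing state Y) -- the classical measure
    of what a representing channel actually carries."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    I = 0.0
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            if p > 0:
                I += p * math.log2(p / (px[i] * py[j]))
    return I

# A noiseless binary representing transformation carries one full bit;
# a transformation whose output is independent of its input carries none.
perfect = [[0.5, 0.0], [0.0, 0.5]]
useless = [[0.25, 0.25], [0.25, 0.25]]
assert abs(mutual_information(perfect) - 1.0) < 1e-12
assert abs(mutual_information(useless) - 0.0) < 1e-12
```

The text's a priori indeterminacy then amounts to the point that the joint distribution itself depends on how fast the represented system happens to be changing.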

What can be determined are the modes of processing that produce the least amount of activity in the equalize function of the ASR rule. In a certain sense, these are modes of perfect consciousness, since the equalization is needed only when the representing transformation operates incorrectly, updating the representation in a way that does not reflect the changes in the represented system.

There are two of these modes, located at the extremes of the continuum produced by the assumption of a limited total processing capacity that can be shared between the representing transformation and the encoding process. In the first mode, the capacity of the encoding process is minimized, and all mental capacity is available for reflective thought. Since reflection in the ASR framework occurs in realtime, greater height of the reflective tower can be achieved only at the cost of reducing the amount of information passed between higher and lower reflective levels. Perfect reflective closure is achieved at the limit of an infinitely tall reflective tower, but its cost is the reduction to zero in the content of any single reflective level.

The counterpart to total reflection with minimal awareness is the maximization of awareness and minimization of reflection. In this mode, processing capacity is directed to the encoding process, and the complexity of the representing transformation is minimized. Autonomous "thinking" about the "meaning" of percepts is suppressed, and the awareness of the organism about its environment, while simplified, achieves a level of detail and completeness not possible in more complex modes.

Between these extremes lies a broad range of content-laden, meaningful awareness. Within this range, encoding complexity, processing speed, and reflective depth are traded off against each other according to the dictates of both experience and the current situation. In principle, there is a maximum predictive accuracy achievable at each point on this range, but determining the encoding required to achieve this maximum is dependent on the basic "hardware-level" processing operations of the system, and is in general unsolvable (Li and Vitanyi, 1997).

The extremes of closed, reflective consciousness, and open, nonreflective awareness are aspects of the two most well-studied meditative disciplines, Yoga and Zen Buddhism. While practitioners' verbal reports are not directly verifiable, electrophysiological evidence corresponding to the ASR-based analysis has been available for some time (e.g. Kasamatsu and Hirai, 1966; Anand, Chhina and Singh, 1961). In these studies the phenomenon of "alpha-blocking" was monitored during meditation. The electroencephalographic alpha rhythm typically appears during idle, unfocused thought and is suppressed at the onset of active attention. This suppression normally habituates in the course of repeated stimulus presentations, decreasing as the stimulus becomes uninformative and is added to the repertoire of the representing transformation. However, in the Zen practitioners, the habituation did not occur, indicating that their mode of awareness had reduced the adaptive role of the equalization function of the ASR framework. In the Yoga practitioners, on the other hand, the alpha rhythm was not blocked, providing no phenomenon to habituate. In the ASR framework, this is analyzed as due to suppression of the encoding process. In the thirty years since these studies, more sophisticated techniques for relating electrophysiology to attention have been developed (e.g. Donchin and Coles, 1988); we would expect these results to be replicated with ERP measures as well as in the EEG.

10. The Physics of Representations

10.1 Quantum Theory Has Many Dualities

Quantum theory contains a number of dualities to confuse researchers in consciousness who may be focusing on the mind-body duality that follows from a nonphysical conception of awareness. The Heisenberg uncertainty principle defines a duality between position and momentum. The Bohr interpretation of QM rests on a duality between observer and observed systems. The wave-particle duality shown by the two-slit experiment has been called the central mystery of quantum mechanics. Finally, there is the multiplicity of quantization states whose separation is governed by Planck's constant. In understanding the limits to the accuracy and detail that may enter awareness, only the first and last need concern us.

10.2 The Four Grounds of Perceptual Representations: Quantum Ontology

The ASR framework permits the identification of the grounds of awareness. If an internal representation exists to cause a report by access consciousness, then it must have a counterpart external, represented situation, which is what the awareness is of. The represented situations can be grouped into two classes, each supporting both transparent and opaque perceptual modes. The external world and the non-CNS somatic structures form one class; within the central nervous system, the self-regulatory and reflective systems form the other class.

The ontological transformation that is essential to the operation of access consciousness is transduction: the conversion of a property (optical reflectance, temperature, sound pressure level, surface roughness, etc.) into spatio-temporal structure. At the structural endpoint of a transductive system, functional processes can pick up the transduced structure and incorporate it into ongoing activity that may ultimately end up as behavioral reports of awareness.

10.3 The Quantum Limit of Accuracy of Awareness

The ultimate limits to the accuracy of awareness lie in the fineness of the transduction event that can be obtained and transmitted by a transducer. The magnitude of this event occurs in discrete, quantized steps that are never more closely spaced than intervals of Planck's constant. The timing and location of the event are constrained by Heisenberg's uncertainty principle to be uncorrelated with the event magnitude to a degree also controlled by Planck's constant.
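
A back-of-envelope illustration of the quantum granularity of transduction, using standard physical constants (the wavelength is an arbitrary choice near the peak sensitivity of rod photoreceptors):

```python
# Energy arrives at a transducer in whole photons; E = h*c / wavelength
# is the smallest step a visual receptor can register at a given wavelength.
h = 6.62607015e-34        # Planck's constant, J*s (exact in SI since 2019)
c = 2.99792458e8          # speed of light, m/s
wavelength = 500e-9       # green light, m

photon_energy = h * c / wavelength    # roughly 4e-19 J per quantum
assert 3.9e-19 < photon_energy < 4.05e-19
```

The sensitivity results cited below amount to the claim that some receptor systems respond reliably to single steps of about this size.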

Over the course of evolutionary time, selectional pressure could lead to the development of highly sensitive sensory systems. The quantum limit of sensitivity has been shown to have been reached in some species (Hecht, Shlaer and Pirenne, 1942; Fain, 1975).

10.4 Where in the Neuron is the Ground of Self-awareness?

To correspond with the exterior awareness that originates in the transduction of environmental properties into the spatio-temporal structure of neural activity, there must be a transduction step for qualia. We can identify its location by considering what is transduced. In the opaque representations that are qualia, the property to be transduced is the activation state of neurons themselves. Activation state is a complex quantity, but it can be summarized as the composite transmembrane concentration difference of the ionic species involved in the propagation of the membrane potential. The en passant axo-dendritic synapse, which taps the activation of its presynaptic cell and transforms that potential into the temporal structure of its postsynaptic activity, while leaving the transmission of activity via direct synapses relatively unchanged, fills the bill quite well.

10.5 Is Quantum Mechanics Necessary for Awareness?

Henry Stapp (1995) and others have argued that awareness cannot be explained without the fundamental involvement of quantum mechanics. The relation of the ASR framework to the theory of computation shows that it can. While no universal mechanical computer has ever been built, the universal capability of designs such as Babbage's Analytical Engine is unquestioned. The existence of interpreters and compilers for functional languages in which the Representation Law can be implemented makes the existence of a purely mechanical system in which the ASR rule holds a clear possibility. In such a mechanical representational system, awareness is possible, yet classical physics is the governing theory and quantum effects are irrelevant. In fact, in standard theories of quantum mechanics (Feynman, 1985) the evolution functions are purely linear (Scott, 1996), and so, inverting Stapp's slogan, the field theory of quantum mechanics cannot accommodate the essential nonlinearity of computationally universal systems required by the Representation Law and the ASR rule, while classical mechanics can.

We are not arguing here that quantum mechanics is invalid, but that the ASR framework for awareness highlights an often-ignored inadequacy in quantum theory, namely its linear superposition function, in addition to its well-known interpretation problems and incompleteness with respect to gravitation. Incorporation of considerations of nonlinearity and universal continuum computability into the constraints on fundamental theories of physics provides a new key for sorting out the many interpretations of quantum mechanics. For example, it could be argued that the collapse of the wavefunction provides a location in the theory in which arbitrary amounts of nonlinearity can be introduced into the evolution of a system. Yet interpretations such as those of Everett and Bohm have no collapse -- in order to accommodate ASR awareness, these cannot stand without modification in some other way. An alternative source of nonlinearity might be the spatial curvature that is gravity. Since the essence of universal computation is the conversion of spatial arrangement into function, the introduction of nonlinear space ultimately leads to nonlinear function as well. The extraordinary weakness of the gravitational interaction requires that its influence on linear superposition be amplified in some way, however, in order to have significant influence within a volume as small as the human skull.

The ASR framework's requirement for the incorporation of realtime dynamics into the analysis of awareness holds an additional place for gravity. In the realization of a representational system in neurons, the dynamics are fundamentally controlled by the transport of charged ions and neurotransmitters through the intracellular and extracellular medium, and by the diffusion of these messengers through transmembrane channels. The rate of diffusion is controlled by a diffusion coefficient that is related to the medium's temperature, to the carrier's ionic charge, to geometric factors resulting from the carrier's molecular conformation and the porosity of the intracellular and extracellular matrix, and inversely to the square root of the molecular weight of the carrier. Thus in an ultimate theory of relativistic quantum gravity, one of the routes by which awareness will enter is via Einstein's equivalence principle for inertial and gravitational mass [Note 9]. Penrose's (1989) hypothesis that quantum gravity is involved in the ultimate analysis of awareness turns out to be consistent with the ASR framework in general, although his detailed proposal that quantum gravity is the essential and only mechanism for the appearance of awareness is unnecessary. In the ASR framework, classical mass is sufficient.
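
The square-root dependence on molecular weight is Graham's law. Strictly, that law governs gaseous effusion, so treating it as even an approximate guide to diffusion in cytoplasm is an assumption, and the molar masses below are merely illustrative:

```python
import math

def relative_rate(mw_a, mw_b):
    """Diffusion rate of carrier A relative to carrier B, with all other
    factors (charge, geometry, temperature) held equal: sqrt(Mb / Ma)."""
    return math.sqrt(mw_b / mw_a)

# Illustrative comparison: K+ at ~39.1 g/mol vs. glutamate at ~147.1 g/mol.
faster = relative_rate(39.1, 147.1)
assert faster > 1.0    # the lighter carrier diffuses faster, other things equal
```

The point for the ASR framework is only that mass enters the realtime dynamics at all, not the precise exponent.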

11. Conclusions

11.1 Taking Science Seriously

Robinson (1996) argued that the Hard Problem of awareness cannot be solved within philosophers' current conceptual framework. I agree with this conclusion. However, the recasting of the conceptual framework used in this paper is probably quite different from what Robinson had in mind, namely that the concepts of twentieth century science be taken seriously. These include:

(1) Mathematics is the proper mode for describing relations among entities in the universe, not logic, and certainly not nontechnical English. The ASR Rule constitutes one of the mathematical relations among conscious entities. Mathematics does not mean that papers must be dense with Greek letters in order to be acceptable; it means that theoretical statements must be related by formally sound means to repeated, repeatable observations.

(2) There is no privileged reference frame for statements about entities and their relations [relativity]. This means that a proper description of awareness will be intersubjectively verifiable, and will describe my awareness of a perceived phenomenon in the same terms as it does your awareness of the same phenomenon, just as it describes my visual acuity (my awareness of the bottom rows of the Snellen eye chart) in the same terms as yours. There must exist datasets that are shared and agreed upon as sound by disagreeing theorists. Theories that reject certain subsets of the standard data must be considered for rejection as not addressing the relevant problem.

(3) There are no static objects with fixed properties [quantum field theory]. The whole stream of philosophical argumentation that takes awareness to be a unitary, structureless item which may or may not be attached by property or identity relations to items in the physical world is scientifically naive. In modern science, the fundamental phenomenon is change, and no philosophical conception of awareness that does not incorporate changes in awareness as a fundamental characteristic of the phenomenon will have any significant success at attaching itself to scientific knowledge. At the most abstract level, the mathematics of nonlinear dynamic systems provides the common ground for describing change.

(4) Humans are attached by a common genetic history to all other living things on the planet [evolution]. Theorists of awareness who conclude (or worse, presuppose) the inaccessibility of theories to explain the awareness of other species of mammal (e.g. Nagel, 1974), much less reptiles, amphibia, fishes, invertebrates and protozoa, end up denying their own biological heritage.

The theoretical framework presented here, along with the respective frameworks of Edelman (1989; 1992), Crick (1995), Baars and Newman (Baars, 1994; Baars and Newman, 1994), Stapp (1993), and to a lesser extent Churchland (1986) and Dennett (1991; Dennett and Kinsbourne, 1992) takes science seriously. Until philosophers accept the content of science as aggressively as they dispute its structure, their metaphysics will remain adrift in a Sargasso of concepts, free of attachment to the long-range navigational structures that modern science has linked to the land and the stars.

Which is not to say that, even taking science seriously, confirming any theory of awareness as abstract as the ASR model will be easy; it falls only a little short of impossible. Although solvable in principle, identifying ASRs in human brains is equivalent to the problem of identifying uses of continuation-passing style (Friedman et al., 1992) or structured exception handling (Custer, 1993) in an operating computer system without reference to the source code. Even with logic analyzers, acid baths and scanning tunneling microscopes that can detect the logic state of individual memory cells, the existence of virtual memory, address translation, data relocation and code compression, as well as the structure convolution introduced by optimizing compilers, makes this task almost unimaginably complex. Yet unlike computer technology, which produces new designs every four months and revolutionizes itself every decade, the neural substrate for the human mind is not a moving target: the design of the human brain has been stable for hundreds of thousands of years. This stability, along with the thousands of variant family models provided by the evolutionary origin of the human species, provides hope that just as the gene was an abstract mathematical concept a hundred years ago but is now understood as a physically grounded structure that simultaneously defines and participates in a vast network of biochemical reactions, awareness can someday be understood as a physically grounded structure that arises in and participates in an equally vast network of flowing neuropsychological activity.

11.2 Properties of the ASR Framework

The ASR framework is a "framework" rather than a theory because it does not specify a particular implementation. Just as Newell's Representation Law has many realizations, the ASR framework provides a unified structure upon which many theories of awareness can be created, one for each class of conscious system. Thus there can be theories of human awareness, mammalian awareness, vertebrate awareness, and even theories of digital and analog computer awareness, with subtheories for sessile systems and mobile robots. Yet each theory based on ASRs will share certain properties. The list below is not exhaustive; additional properties of ASR theories remain to be discovered. It is certainly possible that new properties of awareness can be documented that cannot be captured within the ASR framework and will force its revision. To force its abandonment, two things are required: consensus on the list of facts about awareness that theories must encompass, and the demonstration of a theory that encompasses those facts more comprehensively and accurately than an ASR theory.

ASR theories of awareness provide for:

11.3 Reflecting Reality into the Mind: with Magic Mirrors or Real Ones?

Dualist conceptions of awareness leave two problems unresolved: first is the nature of the non-physical substrate of awareness. Second, but equally important, is the transduction problem: what is the mechanism that transforms neural activity in the brain into those nonphysical, conscious percepts? There must be some kind of magic mirror in the brain that performs this work. The ASR framework does not eliminate the substrate of awareness; it locates it in physical space, and the mirror it uses to accomplish the transformation of physical qualities into conscious properties is built of ordinary physical elements -- a cascade of sensorineural transduction stages. But a dualist, or in fact any advocate of a non-functional notion of awareness, must argue that this cascade of functions is metaphysically incapable of generating the mental sensations that are the foundation of awareness.

Since this issue is often seen as the fundamental problem of awareness, it is worth addressing directly. Chalmers (1996), for example, presents what he believes is an impossibility argument against functional conceptions of awareness. This argument comes down to the claim that functions have inputs and outputs, and that the only possible operation on functions is composition. In organisms the inputs are activity in sensory receptors and the outputs are behavior; there is no place in any possible compositional chain of functions linking sensation and behavior for the percepts of radically different type that make up conscious awareness.

In previous sections of this article we've pointed out that an additional transformation is available to functional systems beyond composition. This transformation consists of reification operations that convert functions into data and back. It appeared in early functional programming languages such as Lisp as apply, and achieved full development in reflective languages such as 3lisp and Brown. With this capability, functional systems can support autonomous representations, which when properly synchronized with the world, constitute the substrate of conscious awareness. Upon reification, differences in perceptual qualities become differences in the spatial locations of the representations of those qualities, and their qualitative distinctions are maintained by the absence of functions to transform them to other qualities. When separation of modality-specific transform functions breaks down or "modality-specific" representations become overlapped, synesthetic confusion can appear (Cytowic, 1989).
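
Reification can be sketched in Python rather than in the Lisp dialects named in the text (an illustration only; the example function and its rewriting step are invented): a function held as data, examined as structure, then reinstalled as process:

```python
import ast

# Function -> data: the function is mentioned, Lisp-QUOTE-style, not used.
quoted = "lambda x: x > 10"

# Data a reflective level can examine and reason about as structure.
tree = ast.parse(quoted, mode="eval")
assert isinstance(tree.body, ast.Lambda)
# ... a reflective system could rewrite the tree here before reinstalling it ...

# Data -> function: the structure is compiled and reinstalled as process.
fn = eval(compile(tree, "<reified>", "eval"))
assert fn(11) and not fn(9)
```

The two conversions, function-to-data and data-to-function, are the additional transformation beyond composition on which the reply to the impossibility argument rests.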

In human awareness, there are many Autonomous Synchronous Representations, tightly linked within modalities and across levels, and joined into a unified whole by synchronization constraints. They are sustained by identity transformations, modified by learning, and linked into multimodal percepts by long-distance cross-modal connections and via nonspecific processing regions. Their reificational capabilities permit them to separate recalled past percepts from current experiences, to reason about their possible futures and to install the results of that reasoning into behavioral functions, becoming functionally revised persons in the process.


[Note 1] One can find it in philosophy by considering logic to be a branch of philosophy, since second order predicate logic, in which predicates can operate on other predicates, contains the capability we refer to. A similar notion appears in physics as the operator, but operators apply only to functions, yielding other functions of similar type, while the kind of functions used here generalize the idea, with the capability of acting not only on primitive values, but functions on those values, functions on functions, functions on functions on functions, etc.[back]

[Note 2] Block (1978) noted that many people recognized the realtime difficulty with his "Chinese Nation"; had he chosen to take their objections seriously, he could have discovered the distinction between exteroflective consistency, which is destroyed by uncompensated changes in observer processing speed, and interoflective consistency, which can survive such changes. Lycan (1987) notes that Michael DePaul has also observed that realtime considerations weaken the force of Block's example. [back]

[Note 3] Some might suggest that each higher-level ASR element, though it may be smaller than the sum of the sizes of the lower-level elements that it abstracts, should be larger than any single one of its represented elements. [back]

[Note 4] There is an unfortunate inversion in the historical terminology for procedural reflection. In this framework, the application program is at "Level 0", and one proceeds "up" to deeper levels of interpretation, reaching the physical substrate of computation at the "highest" level. [back]

[Note 5] In social species, additional forms of attention occur, via herd or pack leaders and sentinels. In technological societies, attentional processing acquires instrumental means such as news reports and scientific equipment. [back]

[Note 6] This definition introduces the possibility of unstable, chaotic attentional processes, which never stabilize into coherent percepts from which adaptive behavioral responses might develop. This kind of chaotic perception would constitute a new form of developmental disorder, and could be related to the difficulties of social perception in autism and the "flight of ideas" symptoms that appear in schizophrenia. [back]

[Note 7] This is essentially the point made by Akins (1993). [back]

[Note 8] This means that the favorite mathematical tool of quantum physicists, the Hilbert space, is inappropriate for neurodynamics. Hilbert spaces are orthonormal linear function spaces, while the connectivity space defined by real neurons is neither orthogonal nor linear, and its "normalization" varies from region to region. It is possible that the renormalization techniques of modern quantum field theory may bring neural space into conformance with quantum space, but the additional insight to be gained by this is quite unclear. [back]

[Note 9] The ASR framework is additionally dependent on the fundamental structure of spacetime itself. This dependency appears in two ways: in the required spatial separation between the represented system and the representing system, and in the spatio-temporal scanning that occurs in the conversion of a static representation into dynamically active behavior. [back]


Kathleen A. Akins (1993) A bat without qualities? in Martin Davies and Glyn W.Humphreys (eds.) Consciousness. Blackwell, Oxford. pp. 258-273.

B. K. Anand, G. S. Chhina, and Baldev Singh (1961) Some aspects of electroencephalographic studies in yogis. Electroencephalography and Clinical Neurophysiology 13: 452-456. Reprinted in Tart (1969).

Charles H. Anderson and David C. Van Essen (1987) Shifter circuits: A computational strategy for dynamic aspects of visual processing. Proceedings of the National Academy of Sciences USA 84: 6297-6301.

Bruce Alberts, Dennis Bray, Julian Lewis, Martin Raff, Keith Roberts, and James D. Watson (1983) Molecular Biology of the Cell. Garland Publishing, New York.

Judith P. Armitage (1992) Behavioral responses in bacteria. Annual Review of Physiology 54: 683-714.

Bernard J. Baars (1994) A global workspace theory of conscious experience. in Revonsuo and Kamppinen (1994), pp. 149-171.

Bernard J. Baars and James Newman (1994) A neurobiological interpretation of global workspace theory. in Revonsuo and Kamppinen (1994), pp. 211-226.

Claude Bernard (1878) Lectures on the Phenomena of Life Common to Animals and Plants. Translated by Hebbel E. Hoff, Roger Guillemin, and Lucienne Guillemin (1974). Charles C. Thomas, Springfield Illinois.

Ned Block (1978) Troubles with functionalism. in C.W. Savage (ed.) Perception and Cognition: Issues in the Foundations of Psychology. Univ. of Minnesota Press, Minneapolis.

Ned Block (1995) On a confusion about a function of consciousness. Behavioral and Brain Sciences 18: 227-287.

Lenore Blum, Mike Shub and Steve Smale (1989) On a theory of computation and complexity over the real numbers: NP-completeness, recursive functions and universal machines. Bulletin (New Series) of the American Mathematical Society 21(1): 1-46.

Gordon H. Bower and Ernest R. Hilgard (1981). Theories of learning. (5th Ed.) Englewood Cliffs, NJ: Prentice-Hall.

David J. Chalmers (1996) The Conscious Mind: In search of a fundamental theory. Oxford Univ. Press, New York.

Patricia Smith Churchland (1986) Neurophilosophy: Toward a Unified Science of the Mind-Brain. MIT Press, Cambridge MA.

Rodney M.J. Cotterill (1995) On the unity of conscious experience. Journal of Consciousness Studies 2: 290-312.

Francis Crick (1995) The Astonishing Hypothesis: The scientific search for the soul. Simon and Schuster, New York.

Helen Custer (1993) Inside Windows NT. Microsoft Press, Redmond, Washington.

Richard E. Cytowic (1989) Synesthesia: A Union of the Senses. Springer-Verlag, New York.

Daniel Dennett (1991) Consciousness Explained. Little, Brown & Co., Boston.

Daniel Dennett (1996) comments made at the Second Tucson Conference on Consciousness, Plenary Session 7 recordings, 37:30 of talk by N.Block: Is V1 conscious and in what sense?

Daniel Dennett and Marcel Kinsbourne (1992) Time and the observer: the where and when of consciousness in the brain. Behavioral and Brain Sciences 15: 183-220.

Jim des Rivieres and Brian Cantwell Smith (1984) The implementation of procedurally reflective languages. in Conference Record of the 1984 ACM Symposium on Lisp and Functional Programming. pp. 331-347.

Emanuel Donchin and Michael G. H. Coles (1988) Is the P300 a manifestation of context updating? Behavioral and Brain Sciences 11: 357-374.

Fred Dretske (1995) Naturalizing the Mind. MIT Press, Cambridge MA.

Gerald M. Edelman (1989) The Remembered Present: a biological theory of consciousness. Basic Books, New York.

Gerald M. Edelman (1992) Bright Air, Brilliant Fire: On the Matter of the Mind. Basic Books, New York.

Gordon L. Fain (1975) Quantum sensitivity of rods in the toad retina. Science 187:838-841.

Richard P. Feynman (1985) QED: The Strange Theory of Light and Matter. Princeton University Press, Princeton.

Barbara L. Finlay and Richard B. Darlington (1995) Linked regularities in the development and evolution of mammalian brains. Science 268: 1578-1584.

Daniel P. Friedman and Mitchell Wand (1984) Reification: Reflection without metaphysics. in Conference Record of the 1984 ACM Symposium on Lisp and Functional Programming. pp. 348-355.

Daniel P. Friedman, Mitchell Wand, and Christopher T. Haynes (1992) Essentials of Programming Languages. MIT Press, Cambridge MA.

Michael S. Gazzaniga (1970) The Bisected Brain. Appleton-Century-Crofts, New York.

Norman Geschwind (1965) Disconnexion syndromes in animals and man. Brain 88:237-294, 585-644.

Sebastian P. Grossman (1973) Essentials of Physiological Psychology. John Wiley & Sons, New York.

Valerie G. Hardcastle (1995) Locating Consciousness. John Benjamins, Philadelphia.

Gerald L. Hazelbauer, Howard C. Berg, and Phillip Matsumura (1993) Bacterial motility and signal transduction. Cell 73:15-22.

S. Hecht, S. Shlaer, and M. H. Pirenne (1942) Energy, quanta, and vision. Journal of General Physiology 25: 819-840.

Heikki Hyötyniemi (1996) Turing Machines are Recurrent Neural Networks. in STeP'96---Genes, Nets and Symbols, edited by Alander, J., Honkela, T., and Jakobsson, M., Finnish Artificial Intelligence Society, pp. 13-24.

International Standards Organization (1996) Information Technology: Generic coding of moving pictures and associated audio information. ISO/IEC JTC1/SC29 13818-1:1996.

William James (1884) What is emotion? Mind 9: 188-205.

Akira Kasamatsu and Tomio Hirai (1966) An electroencephalographic study on the Zen meditation (Zazen). Folia Psychiatrica et Neurologica Japonica 20: 315-336. Reprinted in Tart (1969).

Bernard Katz (1966) Nerve, Muscle, and Synapse. McGraw-Hill, New York.

S. M. Kosslyn, W. L. Thompson, I. J. Kim, and N. M. Alpert (1995) Topographical representations of mental images in primary visual cortex. Nature 378:496-498.

D. Le Bihan, R. Turner, T. A. Zeffiro, C.A. Cuenod, P. Jezzard, and V. Bonnerot (1993) Activation of human primary visual cortex during visual recall: A magnetic resonance imaging study. Proceedings of the National Academy of Sciences USA 90:11802-11805.

Ming Li and Paul M. B. Vitányi (1997) An Introduction to Kolmogorov Complexity and its Applications. (2nd ed.) Springer-Verlag, New York.

William G. Lycan (1987) Consciousness. MIT Press, Cambridge MA.

William G. Lycan (1996) Consciousness and Experience. MIT Press, Cambridge MA.

T. Nagel (1974) What is it like to be a bat? Philosophical Review 83: 435-450.

Allen Newell (1990) Unified Theories of Cognition. Harvard University Press, Cambridge MA.

E.A. Newsholme and C. Start (1973) Regulation in Metabolism. Wiley, New York.

Bruno A. Olshausen, Charles H. Anderson, and David C. Van Essen (1993) A neurobiological model of visual attention and invariant pattern recognition based on dynamic routing of information. Journal of Neuroscience 13(11): 4700-4719.

C. W. Omlin and C. Lee Giles (1996) Constructing deterministic finite-state automata in recurrent neural networks. Journal of the ACM 43(6): 937-972.

George F. Oster, Alan S. Perelson, and Aharon Katchalsky (1973) Network thermodynamics: Dynamic modeling of physical systems. Quarterly Review of Biophysics 6: 1-134.

Roger Penrose (1989) The Emperor's New Mind. Penguin Books, New York.

Jean Piaget (1952) The Origins of Intelligence in Children. International Universities Press, New York.

W.V.O. Quine (1960) Word and Object. MIT Press, Cambridge MA.

Antti Revonsuo and Matti Kamppinen. eds. (1994) Consciousness in Philosophy and Cognitive Neuroscience. Lawrence Erlbaum Associates, Hillsdale, N.J.

W.S. Robinson (1996) The hardness of the hard problem. Journal of Consciousness Studies 3(1): 14-25.

Daniel L. Schacter (1989) On the relation between memory and consciousness: Dissociable interactions and conscious experience. in H.L.Roediger III and F.I.M. Craik (eds.) Varieties of Memory and Consciousness. pp. 355-389.

Alwyn Scott (1996) On quantum theories of the mind. Journal of Consciousness Studies 3(5-6): 484-491.

John R. Searle (1980) Minds, brains and programs. Behavioral and Brain Sciences 3: 417-424.

John R. Searle (1992) The Rediscovery of the Mind. MIT Press, Cambridge MA.

Hava T. Siegelmann (1995) Computation beyond the Turing limit. Science 268: 545-548.

H.S. Sondergaard and P. Sestoft (1990) Referential transparency, definiteness and unfoldability. Acta Informatica 27(6): 505-517.

Henry P. Stapp (1993) Mind, Matter, and Quantum Mechanics. Springer-Verlag, Berlin.

Henry P. Stapp (1995) Why classical mechanics cannot naturally accommodate consciousness but quantum mechanics can. Psyche 2(5) psyche-2-05-stapp.

Steven H. Strogatz and Ian Stewart (1993) Coupled oscillators and biological synchronization. Scientific American 269(6):68-75.

Charles T. Tart, ed. (1969) Altered States of Consciousness. John Wiley and Sons, New York.

Peter Tino, Bill G. Horne, and C. Lee Giles (1995) Finite State Machines and Recurrent Neural Networks -- Automata and Dynamical Systems Approaches. University of Maryland, College Park, Technical Report CS-TR-3396.

Endel Tulving (1985) How many memory systems are there? American Psychologist 40:385-398.

Michael Tye (1995) Ten Problems of Consciousness: A Representational Theory of the Phenomenal Mind. MIT Press, Cambridge MA.

Mitchell Wand and Daniel P. Friedman (1988) The mystery of the tower revealed: A non-reflective description of the reflective tower. in Pattie Maes and Daniele Nardi (eds.) Meta-Level Architectures and Reflection. Elsevier Science Publishers, Amsterdam. pp. 111-134.

Lawrence Weiskrantz (1986) Blindsight: A Case Study and its Implications. Oxford University Press, New York.

David H. Wolpert and Bruce J. MacLennan (1993) A computationally universal field computer that is purely linear. Santa Fe Institute Technical Report 93-09-056.

Revision History
Begun 96/6/10
First draft completed 97/6/1
Conclusions revised 97/7/12
Citation cleanup 97/8/10
Further cleanup, minor revisions on suggestions of P.Hayes, check with IE4 97/10/11