Problems in the “functional” investigations of Consciousness

Morten Overgaard
Dept. of Psychology, University of Aarhus
Asylvej 4, 8240 Risskov, Denmark
e-mail: Morten.Overgaard@psy.ku.dk
This article presents the view that the “problem of consciousness” cannot, by definition, be seen as a strictly scientific or a strictly philosophical problem. The first idea, especially, leads to important difficulties: first of all, it has in most cases implied some rather superficial reductionistic or functionalistic a priori assumptions, and, secondly, it can be shown that some of the most commonly used empirical methods in this field are inadequate, especially contrastive analysis, widely used in cognitive neuroscience. However, this criticism does not lead to the conclusion that scientific methods are inadequate as such, only that they always work on a pre-established background of theory, about which one must be explicit.
Key words: Consciousness; science; functionalism; contrastive analysis; scientific explanation; philosophical explanation; methodology
1. INTRODUCTION
According to a common
distinction that was made explicit by Crick and Koch (1990), the problems of
consciousness are to be seen as either: a) a philosophical question, best left
to philosophers to discuss, or b) a scientific problem that can be solved with
scientific methods alone, which have not yet been developed. In this article, I
will present the view that both of these possibilities lead into blind alleys,
but they are blind in different ways. The first view fails because it is very
unlikely that a philosophical approach alone can lead to answers that all will
find acceptable, and the second view is---following my line of argument---a
direct scientific mistake.
I think that few will disagree with my first claim
that philosophy on its own (or even with the help of science) is unlikely to
produce one single theoretical account on which everybody agrees. If nothing
else, the history of philosophy of mind teaches us this. Besides, philosophy is
basically dependent on observations just like science (e.g. the observation
that there are mental states), even though these are not collected in a
systematic way. Therefore, I shall focus on the second claim, that a scientific approach alone cannot solve the mind-body problem, arguing that it often seems to be on the wrong track, especially when dealing with conscious experience or “qualia”.
Following Popper, one cannot imagine scientific methods independent of hypotheses, for theory and observation have reciprocal connections, determining each other. Thus, the
scientists who clearly argue against philosophical solutions to the problems of
consciousness must necessarily make philosophical speculations themselves
(Crick, 1994; Baars, 1988). For instance, Crick argues that there is no sense
in saying that my experience of the colour blue is different from those of
others. If my neural correlate of the function of seeing blue is identical to
other people’s neural correlates, there is no reason to assume phenomenological
differences. To be exact, the statement goes like this: “If it turns out that
the neural correlate of blue is exactly the same in your brain as in mine, it
would be scientifically plausible to infer that you see blue as I do.” (Crick
and Koch, 1998, p. 104). This argument must be considered very central to the
work of Crick, since it expresses the foundation upon which most of his
mind/brain-interpretation is built, and it is also used as an argument against
making “too many philosophical speculations”. Yet, his suggestion is not a scientific idea (in the
“anti-philosophical” sense), as it guides the scientific investigations in
certain directions before any collection of data has begun. His clearly philosophical hypothesis seems to be that neural circuits and conscious experiences stand in a 1:1 relationship, and from this it follows that we should direct our scientific interest toward “brain phenomena” before cognitive, artificial or even phenomenological events. Needless to say, this is a theoretical postulate, going against e.g. theories of multiple realisation, and it is as such not supported by any scientific data. An “anti-philosophical” position resting on an unargued philosophical premise is therefore a self-contradiction. Besides, it should be clear that it does
not follow from even an empirically well-established 1:1-relationship that you
can speak about the one without the other. In order to claim identity between
two phenomena, it is necessary to operate with exact separate definitions of them both to begin with, and they would
thus presuppose each other (e.g. Praetorius, 2000). In essence, this leads to
the conclusion that any discussion of brain phenomena as relevant to
psychological phenomena rests on how you define psychological phenomena, and it
is only on this basis that the discussion of the brain phenomena even makes
sense.
Thus if we do not accept philosophical analysis as a tool on the one side of the reciprocal connection between theory and observation, other objectives in a scientific study are quickly overlooked, and we end up with a one-sided theoretical interpretation of data. In other words, overlooking the philosophy in a scientific hypothesis confuses what the data actually represents with one’s own unproven and empirically unsupported a priori views.
One of Baars’s central
arguments is that we do not need complicated philosophical discussions in order
to arrive at a scientific understanding of consciousness. We can simply treat
conscious experience as a variable (with two different values) and perform some sort of contrastive analysis, comparing conscious and unconscious cognitive processes (Baars, 1997a). On the surface this may seem more innocent
than Crick’s self-contradiction, but I believe it is not. In section 2.3, I
shall discuss the contrastive analysis[1]
as a scientific method in more detail. At a more general level, it is strange
to begin the study of something by not
defining it and by treating it just as a variable of something else, after
which this “something else” is to be thoroughly examined. This approach does
not necessarily lead to bad science and may give valuable information about those other processes, but it will hardly provide any new insights into conscious experience itself.
According to the
“reciprocal connection” understanding of science, there can be no such thing as
science without philosophy, because science is always looking for variables
that have already been theoretically identified. Thus, I find that the two
examples quickly examined above are prototypes of the fundamental scientific
misunderstandings in this area. It is reasonable, we have agreed, to explain our experience of the world by looking at what happens in the brain, as one scientific discourse among other simultaneous approaches, of course. Along with the wish to have an understanding of something, this idea is purely theoretical. Having said this, could one not imagine that
Crick and Baars have an important mission in suggesting that we set aside
philosophical speculations for a while, at least while doing science? To
provide a proper answer, we need a basic understanding of what science can
actually do.
2. THE NATURE OF SCIENTIFIC EXPLANATION
Classical
science, understood as a set of methods and thus different from theory, is
based on the gathering and quantification of information. The most conservative descriptions of science might
claim that science must live up to at least the following three criteria:
1. The
observed phenomenon must be generally accessible through a “third person
perspective”. If only one person is able to observe a particular object, it
cannot be accepted as a scientific object.
2. Scientific results must always be replicable. When the same causal conditions are present at times A and B, the same effects must be observed.
3. Science must provide data that can be analysed consistently with generally agreed-upon methods.
Most modern
standards would probably also include that:
4. Science deals with generalisable laws. It is thus not scientifically interesting why my nose bleeds at high altitudes; the law governing why any nose would, however, is.
5. Science
must work with a background of accepted theory, based on which the data should
be interpreted.
A scientific problem must then be defined as a problem that can be expressed meaningfully and solved within such a framework. Cognitive psychology shows that it is possible to use varieties of the methods of natural science when studying mental phenomena, as long as the questions asked can be stated as “how many words can you actively remember at one time?” or “when in the process of perception do you filter out irrelevant information?” Thus, in order to have a strictly scientific terminology of consciousness, it must be possible to answer the problems of consciousness sufficiently in terms of information, accessible through a third person perspective, and scientific data. Traditionally, there have been two ways of defining the problem of consciousness among the scholars who claim to take the problem seriously:
1) Consciousness is a special problem because we have
a very clumsy terminology about conscious phenomena. The correct strategy for
the study of consciousness is first of all to develop a more appropriate set of
concepts, which then may lead to the conclusion that consciousness was a
conceptual misunderstanding in the first place (Wilkes, 1984; Dennett, 1988) or
that consciousness actually can be studied based on concepts more easily
assimilated in traditional science (Churchland, 1986). Very often, this view is present in theories of a physicalistic persuasion, where conscious experience at present stands in the way of a further extension of physics as a scientific model for the study of everything.
2) Consciousness is an ontological problem, and the
phenomenon of conscious experience is different from all or most other
phenomena in the world, and should thus be studied with different methods. One
can never achieve a full explanation of the relation between consciousness and
(other) physical matters with a traditional scientific approach. One can only
get data that gives information to philosophy of mind about neural correlates,
time factors etc. (Chalmers, 1995a; Jackson, 1986; Nagel, 1974).
Both of these alternatives suggest solutions that are
primarily based on philosophy. They both define the problem in philosophical
terms and show how science may be used to clarify the problem. In the first
version, this is achieved through a direct analogy or maybe even as a
replacement, though only after the philosophical work has cleared the way. The
other version identifies science as a helpful tool, parallel to theory but
nothing more. It seems from this very general perspective that no matter which
version you prefer, a strictly
scientific approach is impossible. So let us turn to some of the problems in a
seemingly strictly scientific account.
2.1
The problem of reduction
The first of the above-mentioned views has correctly
been criticised for being reductionist. Those who---like myself---believe that
any scientific or theoretical account of consciousness must place subjectivity
(the “first person perspective”) as central have delivered many substantial
arguments against reductions. Most of these arguments will be well-known to the
reader, and I shall not repeat them in detail here. Central to them all, uniting in many ways such different authors as David Chalmers, Max Velmans, John Searle and Owen Flanagan (for discussion see Varela, 1996), is the notion that any complete reduction of consciousness will always leave something out. Most
widely discussed is Nagel’s argument that any discussion of qualia at a
non-phenomenological level of description is not taking the notion of what it
is like to be in a conscious state seriously (Nagel, 1974).
Once it is decided to place the first person account as central to a science of consciousness, the arguments against reduction seem strong enough to resist any reductionist counter-argument, and for that reason, absolute reductionism does look like an impossible project. This, however, has not overthrown the wish to have a scientific approach to consciousness. Based on the general success of cognitive science over the last fifty years, a number of cognitive or functionalistic studies of consciousness have been made. Although these are not seriously threatened by the objections against reductionism, there are serious problems in a number of these studies as well, as I shall argue in the following.
2.2.
How are conscious states functional?
The approach of many cognitive scientists who insist on a purely scientific solution to the problem of consciousness seems to be identical to or very close to that put forward by John Searle:
Suppose someone
asked, what is the evolutionary function of wings on birds? (...) But there is
no question that relative to their environments, seagulls, for example, are
immensely aided by having wings with which they can fly. Now suppose someone
objected by saying that we could imagine the birds flying just as well without
wings. What are we supposed to imagine? That the birds are born with rocket
engines? (...) Now similarly with consciousness. The way that human and animal
intelligence works is through consciousness. We can easily imagine a science
fiction world in which unconscious zombies behave exactly as we do. (...) But
that is irrelevant to the actual causal role of consciousness in the real
world. (...) ‘What is the function of consciousness?’ is like the question,
‘What is the evolutionary advantage of being alive?’ (Searle, 1998, p. 384)
But I am not satisfied with this argument, because it implies that: 1) everything that exists has a function that can be defined, and 2) every function is basically the same kind of function (hence questions about the function of wings, of consciousness and of being alive are basically to be understood as identical questions). Let us introduce new examples, inspired by Searle’s approach. The fact that the molecular structure H2O has the quality of being fluid is functional just like the seagull’s wings. But this seems like nonsense, because for whom is the fluidity functional? Being functional, in the Darwinian evolutionary terms in which Searle chooses to use the concept, means that the individual can survive and reproduce with the help of that function. Even though the bird might survive without wings, it is obvious that wings contribute to the bird’s survival. But this does not seem like the right way to discuss the fluidity of H2O. This molecule is not trying to survive or reproduce or achieve any other kind of goal through this strange property. It makes more sense to claim that being fluid is one aspect of H2O, or that it reveals something about how nature organises itself. The difference becomes even more obvious with Searle’s own example of “being alive”. How are we even to begin to understand that being alive is an evolutionary survival strategy? How does being alive contribute to being alive?
The example of birds and wings shows the classical evolutionary description of object-property relations, where an object and its properties are tied together by the way a property is of use to the object. The other examples, of being fluid and being alive, show a different kind of cause-effect relationship, where the property is present because of certain states in the object. Thus, a body can be dead or alive, and H2O can be fluid or vaporised, depending on its current state. Obviously, this is not a discussion between two philosophical traditions---functionalism versus emergentism---but a matter of possible relations between objects and properties, where more possibilities may be added. There is no need to explain all object-property relations in the same way.
I hope that I
have now convinced you that the question of functions must be considered as a
more complex matter, where functional
might have several meanings, and that functionality is not always the most
obvious way of explaining why something is there. The question is now: What is
the correct metaphor for the mind-brain problem? Birds with wings, H2O with
fluid properties, or something different from both of them? In other words, are
states of conscious experience present because the brain somehow actively uses
them as properties? Or are they present as property states because some yet
undiscovered natural law makes them present when certain object states (that
for some reason or other must be present in the brain quite regularly) also
occur?
We have as yet seen no empirical data indicating which, if any, of these possibilities
is correct. Yet, those scholars insisting on the purely scientific approach
admit to no doubts: Not only are conscious states functional states, they are
functional in the same way that we understand cognition to be functional.
Cognitive scientists generally consider mental functions as functions in the first sense mentioned above, i.e. that the brain actively uses mental
functions in order to accomplish certain tasks. So we can, from this
perspective, treat consciousness as any other cognitive function and isolate it
and study it through experiments of the classic cognitive type, and even find
correlates of consciousness as such in the brain through PET and fMRI studies
as done in cognitive neuroscience (Baars, 1997a).
2.3
Consciousness and contrastive analysis
The idea of submitting consciousness to contrastive analysis, as suggested by Baars (1983, 1997a, 1997b), is to me the essence of what is wrong with functional explorations of consciousness. In a study by Düzel et al. (1997), it is suggested that since we can identify different ways of recalling events in our past---autonoetic (remembering, accompanied by the feeling of almost reliving the experience) and noetic (knowing, characterised by a feeling of general knowledge that “this is how it is”)---the measured brain potentials of the two states directly reflect the two different associated conscious experiences.
Imagine now that
Düzel et al. were only interested in the functional aspect: What are the neural
correlates of the cognitive processing behind autonoetic and noetic memory?
Could they have used the very same experimental paradigm to answer this
question? Evidently the answer is yes; in fact, the experiment seems perfectly
designed to answer that question. In general, it is always a problem if you can
get two very different interpretations (two different results) from the same
experiment. More specifically, the problem of using the same experiment to make
statements about consciousness is what was raised by Chalmers (1995a, 1996)
that it is not the same thing to make observations of functions and of
consciousness (hence the much discussed “zombie hypothesis”). This can be
argued since it is possible from the philosopher’s armchair to separate being
conscious from the contents of the conscious experience (here, the two kinds of
memory) but impossible in the scientific laboratory. The logic of the
separation between (phenomenal) consciousness and function is even at least
somewhat underlined by neuropsychological syndromes like blindsight, where patients seem to have some functional visual
processing left after the damage, but no conscious knowledge of this
(Kentridge, Heywood and Weiskrantz, 1999). Thus, when looking at the evoked response potentials from the experiment of Düzel et al., one wonders which measures represent “being conscious” (if any) and which represent the cognitive processes. Some may argue that this distinction is false and that
those correlates must be identical. This may be true, although this also is a
purely theoretical (“unscientific”) hypothesis. But then it must seem absurd to
speak of a neural correlate of consciousness as such, and it becomes
unintelligible that cognitive functions are sometimes not accompanied by
consciousness, or at least it follows that the explanation for this is not to be found in the activity of the brain. This confusion is not only obvious in the
dozens of studies that perform contrastive analysis and claim to be studying
consciousness, but also in the theories based on such experiments. For
instance, the very same critique could be raised when Baars (1997b) claims that
the limited capacity of memory as studied in some of the oldest cognitive
experiments (Miller, 1956) reveals a limited capacity of consciousness. Again,
this is just one interpretation of Miller’s original data and of the mind-brain
relation as such, in that it could just as well be argued that the information
processing of memory (i.e. the function
of memory) is limited, and that only that information that is being processed
can be experienced consciously. In this last understanding there are no limits to consciousness. It goes without saying
that both understandings are capable of accounting for the data.
The problem in this kind of contrastive analysis in consciousness studies becomes even more obvious in the suggestions by Frith, Perry and Lumer (1999) that we can simply look at PET-images of subjects when having conscious visual perceptions and compare these to images of subjects processing visual information unconsciously, and (eureka!) there you have the neural correlate of conscious visual perception.
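The logic of such a subtraction can be made explicit in a few lines of code. The following is a toy sketch, not an actual PET analysis pipeline: the region names, activation values and threshold are all invented for the illustration.

```python
import numpy as np

# Toy "activation maps": one value per brain region, averaged over trials.
# Region names and numbers are invented for the example.
regions = ["V1", "V4", "FFA", "prefrontal"]
conscious_perception = np.array([2.1, 1.8, 1.9, 1.4])
unconscious_processing = np.array([2.0, 1.7, 1.1, 0.6])

# The contrastive step: subtract the "unconscious" map from the
# "conscious" one, and call whatever survives an (arbitrary) threshold
# the "neural correlate of conscious perception".
difference = conscious_perception - unconscious_processing
correlate = [r for r, d in zip(regions, difference) if d > 0.5]

print(correlate)
```

Note that the arithmetic itself delivers only the total difference between the two conditions: nothing in the subtraction distinguishes the contribution of “being conscious” from residual differences in functional processing, which is precisely the objection raised above.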
The problems in
this suggestion are actually more far-reaching than in the study by Düzel et
al. (1997). For not only do the above criticisms apply, but it is also assumed
that the conscious and unconscious perceptions are so much alike on a strictly “functional
description level” that they actually can be compared. In other words, the
presence of conscious experience is the only thing that is different in the two
perceptual states. If I said I was to subtract a PET image of a subject’s
visual attention from one of the same subject moving his left foot up and down
in order to find the neural substrate of consciousness, other scholars would
consider me mad, or at least pursuing the wrong career. What you get out
of a subtraction like this would definitely not just be the neural correlate of
consciousness, they would say, but
the differences in the functional procedures behind the two behaviours. So if
the suggestion by Frith, Perry and Lumer (1999) is to hold water, they must be
able to argue that the only thing that really makes (say) episodic memory
different from implicit memory is that consciousness is present in the first
example and not in the second. There should thus not be any significant
differences in information processing between the two, which seems unlikely.
So even though I intuitively believe that consciousness is in some sense a product or maybe even function of the brain, I am very sceptical when it comes to treating conscious experience as a cognitive function, not only using the methods of traditional cognitive science, but actually including consciousness in the list of phenomena that cognitive science has studied with success over the years. In fact, the very suggestion by Baars (1997a) to treat consciousness as a variable should in itself indicate that it must be different from those (other) functions to which the variable applies.
Cognitive functions have in common
that they do something – i.e. they
rest on strictly functional definitions. I can say for sure that I have the function of memory; used as a verb, I remember something. And I can try to copy some of the procedures through which my brain carries out memory and implement them in a computer, so that it could be said to “remember” something. Based on simple observation, however, one cannot say that consciousness does anything, and it would seem rather absurd to use it as a verb, and even more so as a transitive one. What could it mean to say “I quale something”? Quite the
opposite, I seem to be conscious of objects in my surroundings and of thoughts
in my head, as if I was somehow watching the functions I can perform.
I am not now arguing that the
fluidity of water makes a perfect analogy to consciousness, or that there can
be no functional descriptions of consciousness. But contrary to the case of
memory or perception, it is not logically given
that it is so. And even if we were to agree that “consciousness must have a
function” (as Searle states it) and we were to make a contrastive analysis
based on this, it would still be absurd until we knew exactly what
consciousness itself changes in those
processes in which it could be considered a variable.
So far I have tried to make two points: 1) there can be no scientific studies of consciousness without also making philosophical assumptions, and this simple fact makes it necessary for scientists to operate with explicit philosophical definitions in order to really know what they are doing; and 2) functional studies of consciousness cannot differentiate between the proposed neural substrates of consciousness and the cognitive function that the subjects are conscious of in the particular experiment.
3. DO SCIENCE AND CONSCIOUSNESS MIX?
Now, having pessimistic viewpoints on the prospects of functionally oriented studies of consciousness is often considered the same as having pessimistic viewpoints regarding the prospects of experimental studies in this area altogether. This
would in fact lead me to agree with the initial point, made by Crick, that
consciousness is either for philosophers or scientists, and I would simply end
up taking the opposite choice of what he has done. Yet, I find that the really pessimistic viewpoint is that a
science of consciousness somehow should be dependent on the success of the
functionalist/cognitive research programme. No a priori logic has shown why the
cognitive understanding of science should be so intimately linked with the
study of consciousness. Thus, it is also intriguing that the discussion of
whether or not consciousness fits into cognitive information processing as
described by science today (see Velmans, 1991) so quickly becomes polarised
between authors suggesting that this is in fact the case and those suggesting
that there is no place for consciousness in a scientific terminology. This does not seem a necessary development.
Valerie
Hardcastle expresses some of the same considerations as I have made in the
following passage: “The problem is that consciousness crosscuts functionalism.
Arguments of this sort are fairly interesting since they are accepted on both
sides of the naturalising fence.” (Hardcastle, 1993, p. 37) A point Hardcastle also makes is that people on both sides of “the fence” may have an unrealistic idea of what role science actually can have in this debate. In my view, it is
fairly obvious that under the expectation that one or two or fifty experiments
could actually solve the
psychophysical problem, it is necessary to simplify and reduce the elements of
the problem or simply to give up using experiments at all. This is pretty much
the strategy that most authors currently are choosing, which alone may point to
a growing need for defining the goal of science in this area.
I shall now present a few examples of what I believe are necessary considerations for a science of consciousness. These are to be seen more as indications that are not in conflict with what I have pointed out as scientific problems than as an actual research program. In that respect, however, I would like to point to the neurophenomenology discourse, suggested by Francisco Varela (1996), as a theoretical and methodological viewpoint, at least in intention, closely related to my own.
3.1.
Technical considerations
First of all, a lot could be done to meet the methodological criticism that I have raised. Perhaps PET, fMRI and other instruments developed primarily to localise where in the brain certain signals originate are not very helpful at every stage of consciousness research. Instead, it would be interesting to employ other techniques, such as Transcranial Magnetic Stimulation (TMS), because this tool makes it possible to disturb ongoing neural activity, and thus to compare different stages of the same cognitive process instead of different ones. The application of this technique is not immune to my criticism above, but it will at least give a different perspective on the relation between brain activity and consciousness. For instance, TMS impulses could be delivered at different time intervals, and through introspection subjects could describe how they experienced the moments before the disturbance, thus giving information about the temporal relations between brain activity, information processing and conscious experience.
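The kind of design hinted at here can be sketched as a simple trial list. This is a hypothetical illustration, not an actual TMS protocol: the interval values, the number of trials and the report format are all invented for the example.

```python
import random

random.seed(1)  # reproducible trial order for the example

# Hypothetical stimulus-to-pulse intervals (ms): across trials, the pulse
# disturbs the same ongoing process at different stages.
soas_ms = [20, 60, 100, 140, 180]
trials_per_soa = 10

# Balanced, shuffled trial list.
trials = [soa for soa in soas_ms for _ in range(trials_per_soa)]
random.shuffle(trials)

# After each trial, the subject's introspective report of the moments
# before the pulse is stored under that trial's interval, so that
# reports can later be compared across stages of processing.
reports = {soa: [] for soa in soas_ms}
for soa in trials:
    report = "(free introspective description, collected live)"
    reports[soa].append(report)
```

Grouping the introspective reports by interval in this way is what would allow the temporal relation between brain activity, information processing and experience to be examined.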
3.2.
Applied introspection
The
research program of neurophenomenology implies an active use of subjects’
phenomenological and/or introspective descriptions in actual experimental
frameworks. It is of course an obvious necessity to use such descriptions and
allow them to have influence on our experimental designs, hence also on our
understanding of brain processes. This points to the ideal relation between
first person and third person research, where the two are considered as equals
in validity, and still are able to work together through reciprocal constraints.
3.3
Exact concepts and definitions
Obviously,
the strategy above implies exact and operational definitions of conscious
experience which, quite likely, will amount to a whole set of concepts,
referring to different aspects of experience. It should be clear that there
will be problems when trying to correlate the relatively precise and
well-developed language of neuroscience with the still very vague and
undeveloped language of phenomenology. For instance, the study by Düzel et al.
(1999) is a study of whether certain mental representations are experienced or
not. However, a precise vocabulary of consciousness, as in all other sciences, should be based upon reference to real observations. Thus, mental
representations, explicit knowledge, information and the like are not
sufficient for a description of experience, since such concepts are derived
from descriptions of observed behaviour, performance tests etc., and not
descriptions of the mental phenomena themselves from the first person
perspective (Roy et al., 2000).
From
that perspective, consciousness has many observable aspects, and quite often,
scientists refer to different aspects when referring to “consciousness” as an
explanandum. For instance, an experience seems to consist of a something being experienced, a someone experiencing it, and an intentional relation between the two. In other cases, scientists refer to differences between consciousness and unconsciousness (as in “creature consciousness” or “state consciousness”) or to the “qualitative feel” of consciousness (for a discussion of
these distinctions, see van Gulick, 1995).
It
should be explicitly stated which aspects of consciousness an experiment is
trying to single out. As argued above, an experiment focusing on experienced representations might get different observations than one focusing on “the someone” who is experiencing, etc.
3.4.
Precise problem definitions
Somehow, a non-reductionistic framework makes it less clear what exactly is to be found out in an experimental approach to conscious experience, because, clearly, the results will not be considered sufficient descriptions and explanations of consciousness. Why are correlations between some or other aspect of
consciousness and brain activity, in general, interesting, and what problem
would they solve? We may hope to find that the correlations
between mental states and brain states are so strong that we can form
psychophysical laws with the same explanatory value as any natural law
(Chalmers, 1995b). But this simply points to a new question: What, exactly, are
we talking about when saying that something explains something else? For some
reason, in the case of the mind-body problem, there seems to be a special need
to know what it takes to achieve a complete explanation.
One remaining question is whether an integrated framework, with a logical combination of theory and scientific observation, would be just a third blind alley, much like the strictly philosophical and strictly scientific ones. My best answer is a speculative extension of an earlier point: if science could clarify which aspects of the so-called physical world correlate precisely with mental phenomena, there would be agreement on exactly which objects or phenomena our theory should include. This would not bridge the gap, but it would define from which locations the bridge should be built, so to speak. Following the idea of reciprocal connections, a higher degree of agreement on the observation side should lead to more agreement on the theory side.
This would of course only apply to theories that accept the existence of both sides of the gap. Those scientists who deny the existence of conscious experience will not indulge in philosophical speculation about how to approach the problem of consciousness, and those philosophers who reject any scientific approach have only their own creativity and logic with which to examine relations between phenomena in the real world (and good luck with not confusing those relations with one's own theories about them).
The expectation that an agreement about observations would actually unite scientists and philosophers in one theoretical paradigm does seem overly optimistic; yet when it comes to simplifying and systematising the messy field of consciousness, this broad approach remains more attractive than the other two.
I am grateful to
Alwyn Scott and Ruediger Vaas for correcting my English and for their
thoughtful criticism. Also thanks to Henrik Skovlund and Nini Praetorius for
important discussions based on previous versions of the article.
REFERENCES
Baars, B. (1983): Conscious contents provide the
nervous system with coherent, global information, in R. Davidson, G. Schwartz
and D. Shapiro (eds.): Consciousness and
Self-Regulation, vol. 3, Plenum Press
Baars, B. (1988): A
Cognitive Theory of Consciousness, Cambridge University Press
Baars, B. (1997a): A thoroughly empirical approach to
consciousness: Contrastive analysis, in N. Block, O. Flanagan and G. Guzeldere
(eds.): The Nature of Consciousness,
MIT Press
Baars, B. (1997b): In the theatre of consciousness, Journal of Consciousness Studies, 4, 4,
292-309
Chalmers, D.J. (1995a): Facing up to the problem of
consciousness, Journal of Consciousness
Studies, 2, 200-219
Chalmers, D.J. (1995b): Absent qualia, fading qualia,
dancing qualia, in: T. Metzinger (ed.): Conscious
Experience, Schöningh
Chalmers, D.J. (1996): The Conscious Mind, Oxford University Press
Churchland, P.S. (1986): Neurophilosophy, MIT Press
Crick, F. (1994): The
Astonishing Hypothesis, Simon & Schuster
Crick, F. and Koch, C. (1990): Towards a
Neurobiological Theory of Consciousness, Seminars
in the Neurosciences, 2, 263-275
Crick, F. and Koch, C. (1998): Consciousness and
neuroscience, Cerebral Cortex, 8,
97-107
Dennett, D.C. (1988): Quining qualia, in A. Marcel and
E. Bisiach (eds.): Consciousness in
Contemporary Science, Clarendon Press
Düzel, E., Yonelinas, A.P., Mangun, G.R., Heinze, H.J.
and Tulving, E. (1997): Event-related brain potential correlates of two states
of conscious awareness in memory, Proceedings
of the National Academy of Sciences of the USA, 94, 5973-5978
Frith, C., Perry, R. and Lumer, E. (1999): The neural
correlates of conscious experience: An experimental framework, Trends in Cognitive Sciences, 3, 3,
105-114
Hardcastle, V.G. (1993): The naturalists versus the
skeptics: The debate over a scientific understanding of consciousness, The Journal of Mind and Behavior, 14, 1,
27-50
Jackson, F. (1986): What Mary didn’t know, Journal of Philosophy, 83, 291-295
Kentridge, R.W., Heywood, C.A. and Weiskrantz, L.
(1999): Attention without awareness in blindsight, Proceedings of the Royal Society of London - Series B, 266,
1805-1811
Libet, B. (1985): Unconscious cerebral initiative and the role of conscious will in voluntary action, Behavioral & Brain Sciences, 8, 529-566
Miller, G.A. (1956): The magical number seven, plus or minus two: Some limits on our capacity for processing information, Psychological Review, 63, 81-97
Praetorius, N. (2000): Principles of Cognition, Language & Action, Kluwer Academic
Press
Roy, J.M., Petitot, J., Pachoud, B. and Varela, F.J. (2000): Beyond the gap: An introduction to naturalizing phenomenology, in J. Petitot, F.J. Varela, B. Pachoud and J.M. Roy (eds.): Naturalizing Phenomenology, Stanford University Press
Searle, J.R. (1998): How to study consciousness
scientifically, Brain Research Reviews,
26, 379-387
van Gulick, R. (1995): What would count as explaining
consciousness?, in T. Metzinger (ed.): Conscious
Experience, Schöningh
Varela, F.J. (1996): Neurophenomenology, Journal of Consciousness Studies, 3, 4,
330-349
Velmans, M. (1991): Is human information processing
conscious?, Behavioral & Brain
Sciences, 14, 651-726
Wilkes, K. (1984): Is consciousness important? British Journal for the Philosophy of Science, 35, 223-243
[1] For non-scientists: contrastive analysis is a method for analysing e.g. brain scans. Basically, one contrasts an image of an active state with an image of a “resting state”, so that the final result is an image of the activations caused by the specific task of the experiment, without the normally occurring background noise.
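The subtraction logic behind this footnote can be made concrete with a toy sketch. The numbers below are invented for illustration only; real analyses work on full three-dimensional scan volumes and apply statistical thresholding, but the core idea is the same voxel-by-voxel subtraction:

```python
# Toy illustration of contrastive analysis: a tiny 2x2 "image" for a task
# condition and one for a resting baseline (made-up values, not real data).
task = [[5.0, 3.0],
        [4.0, 6.0]]
rest = [[4.0, 3.0],
        [4.0, 2.0]]

# Subtract the resting-state image voxel by voxel; what remains is taken
# to reflect activation specific to the experimental task, with the
# shared background activity cancelled out.
contrast = [[t - r for t, r in zip(task_row, rest_row)]
            for task_row, rest_row in zip(task, rest)]

print(contrast)  # [[1.0, 0.0], [0.0, 4.0]]
```

Note how the two nonzero entries in the result are exactly the voxels where the task image exceeds the baseline, which is why the method's validity depends entirely on the assumption, questioned in this article, that the baseline differs from the task state only in the aspect of consciousness under study.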