Brain-Sign
or
The End of
Consciousness
Philip Clapson, May
2004
© Philip Clapson
2004
The right of Philip
Clapson to be identified as the author of this work has been asserted in
accordance with the Copyright, Designs and Patents Act 1988
There is no question
that something goes on in the head, which has been called consciousness. But is
it consciousness? Over the last fifty years, there has been a concerted attempt
to show how consciousness can be physical, of the brain. The diversity of views
is characteristic of a Kuhnian pre-normal science revolution: but the
revolution has not arrived. This is because the assumption that
consciousness exists is wrong. In this paper consciousness (with e.g. its
subjective/objective distinction) is characterized as a pre-scientific theory.
The biological ontology of the phenomenon is revealed, and its placement in
organismic biology explained. The phenomenon will be termed brain-sign,
as appropriate to its biological function. The nature of this function
completely reconstructs our view of ourselves, and other creatures in which it
is manifest. The detail and ramifications cannot be addressed at length in a
paper, but a research program is outlined briefly.
1. Introduction
Over the last fifty
years, urgent attention has been given to showing how consciousness can be of
the brain. No account commands universal assent; thus the situation lacks the
hallmark of a science. So cognitive science and neuroscience,
indeed psychology itself, which depend explanatorily upon consciousness,
face an unanswered question: How is consciousness to be drawn from the brain’s
fabric: cells, axons, dendrites, neurotransmitters, peptides, etc.? Indeed,
given that the workings of the brain appear organic and knowledgeless, how can
it generate the knowledge properties associated with consciousness?
This paper will
propose that the topic be recast. It will propose that there is no such thing
as consciousness, as historically understood, and no experience, as
historically understood. It will say there is a phenomenon of the brain that
has been mistaken as consciousness, and while humans (and probably more
creatures) appear to experience, this does not entail consciousness.
Why would such a
proposal be made? The answer is straightforward. To gain a science of the
phenomenon (mis-)understood as consciousness, and so identify how humans can be
of the physical world, an account is required that satisfies our understanding
both of science and scientific method. Otherwise we are left with the situation
that nature created an amazing sport: a creature who could experientially know,
a causal property entirely different from any other physical property so far
encountered.[1] The answer here is
that we are physical, and this means that consciousness does not exist. What is
required to explain the brain’s activity is a new science (now at its origins),
and more radically, a new approach to our understanding of experience as of
the brain. This will involve a change of biological paradigm (Kuhn 1970),
and an entirely new view of ourselves.
2. Clearing the
ground
Before proceeding
to the proposal (commencing at section 3.5), we must put aside our preconceptions.
This is no small task. Firstly, our cultural history and language are built on
the assumption that consciousness does exist. Secondly, our own experience is
so comprehensive and compelling, it seems absurd to suppose that it is not
doing what we think it is doing.
The ambition to
prove that consciousness exists and is scientifically tractable is widespread;
but some will reject science if it proves impossible to solve the problem (e.g.
proponents of the so-called hard problem). Remarkably, even
epiphenomenalism seems better than no consciousness at all.[2]
2.1 The theory of
consciousness
When Descartes
established the modern idea of the mind, he pointed to hypothetical states
of human being which were subsequently termed consciousness. They are
perceptions, thoughts, feelings and sensations. And he “invented” the subject
of those states. “I think, therefore I am” identifies the think with the
am, plus the existence fact of the I. It is these states that
seem not the same as the physical brain. But let us consider the logic of our
current assumption here.
1. There is an
ontological state termed consciousness. Therefore humans experience.
This is the
situation post-Descartes. But proposition 1. is not the same as:
2. Human beings
experience. Therefore there must be an ontological state called consciousness,
which entails all kinds of properties that are incommensurate with the physical
brain—i.e. those of our knowing experience.
The placement of
experience in our cultural understanding as consciousness resulted in the
legacy that, for experience, there must be consciousness. But there is no
scientific justification for this. Descartes’ theory, and its heritage, is
exactly that: a theory. What Putnam refers to as, unfortunately, “after
Berkeley and Hume...the only way to think” (1999, p23). Descartes’ theory did
not arrive sui generis. He theorized a notion already burgeoning in
philosophical history. (So Descartes’ hypothetical states effectively had the
theory built into them.) But for science, we do not have to associate
experiencing with the theory of consciousness. Indeed, we must prize the
two apart.
This prizing apart
is difficult precisely because we appear to experience, and this in two ways.
The first is that it is, now, almost impossible for us to visualize our
experiential existence in any other way than consciousness. Secondly, what
experience seems to give us is knowledge. Right there in front of us. What we
see. What we can describe, and think about. Thus we have the indelibility of
the experiencing situation.
2.2 Consciousness
as knowledge
The idea of
knowledge is crucial. Before the theory of consciousness there was the mind.
This mind related to the world in a particular way. As Richard Tarnas says: “The
belief that the universe possesses and is governed according to a comprehensive
regulating intelligence [Nous, Logos], and that this same
intelligence is reflected in the human mind [nous, logos], rendering it
capable of knowing the cosmic order, was one of the most characteristic and
recurring principles in the central tradition of Hellenic thought” (1991, p47).
Descartes (1985) endorsed this. And he considered feelings and sensations to be
confused thinking. We could know about injured feet and decide what to do about
them by a rational assessment based upon the real sensation of
pain.
For Descartes, what
we see (for example)—mountains, trees, houses, people—we see because God has
placed them (in representational form) in our minds, because he created us so.
And we can reach beyond mere seeing by our understanding (“natural light”),
which is not bound merely by what we see. This understanding, too, of mountains
and mountainness, arrives by God’s grace.
Both the Greek
notion, and Descartes’ variation, resulted from prevailing views. Since God
created the universe, he knew its contents and operation in essence. To finite
creatures like humans, knowledge of the world must be sub-divine. But knowledge
per se gave humans their distinct property of being made in God’s image. As
subjects, we exist in a luminous mental relation to the world and
its objects, including ourselves.
With Hume (1739,
1740) and Kant (1781, 1787) (Enlightenment figures), the rationale
for how mountains and mountainness could be of the human mind lost its divine
inception. Hume’s impressions and Kant’s sensible intuition simply placed
mountains there, to be worked over by other features of the mind like Hume’s
association, or Kant’s categories of the understanding by which mountainness
arrives, and thence reason by which it can be thought. Hume said that how
this could be was inexplicable by human reason (indeed, he saw it as
irrelevant), and Kant proposed transcendental features of the mental machinery
that enabled its happening.
But neither of these
accounts was satisfactory as a physical account, because they simply
ignored the issue of the mind as part of the physical universe. And
unfortunately, whilst history then gave us perception as if it involved direct
sense information (as e.g. sense data), the capacity of the mind to have these
(potentially) true portrayals of the world with which the mind can operate was
simply ignored. God could not be given the task because Hume and Kant
(mistakenly) thought they were doing science on the senses.
2.3 Intentionality
as the mark of the mental
What did happen,
with Brentano (1874), was that the notion of intentionality came into fashion
again. This gave a way of talking about aboutness. As Husserl said in 1913: “The
spatial physical thing which we see is...perceived, given ‘in person’ in the
manner peculiar to consciousness” (1982, p92).[3] For some
philosophers, this aboutness cannot be reconciled with the brain in any more
definite sense than can supposed phenomenal states (or qualia). For others, for
a time, the computer model altered this view, at least concerning language,
because computers appeared to process propositional meaning (semantics) by
syntax and rules, leaving subjective (i.e. of the I) phenomenal states
(the hard problem) adrift. But the brain, in processing terms, does not seem to
be like a computer.[4] And besides,
whilst the computer can process information, it still requires some knowing person
to devise the input and understand the output.
2.4 Consciousness
theory under scientific assessment
But what the
history of science has shown is that what our ancestors took as evident about
the universe could turn out not to be what was required for a scientific
understanding of the universe. Descartes’ theory, and its legacy, are
pre-scientific. While what we know about the world and ourselves appears to
result from our being experiencers, it is likely that we are completely
mistaken about this, as we were about an earth-centered cosmology. For while we
seem to know as a result of experience, we cannot reconcile this with a science
that requires all properties to be physical, operating under mass/energy
space-time.
Neuroscientists dealing with pure
physicality might ask: Why should a brain need to make experience as a knowing
anyway? What added benefit can there be over the neural processing of physical
information? True, one brain is not another brain. In that sense, they are
individual brains. But why should a brain manufacture a subject for experience
rendering knowledge for a person?[5] Humans could be
just robotic devices with adaptational skills as part of their bio-neural
make-up. But alas, neuroscientists themselves experience, and they have
no other theory than consciousness.
2.5 Proposed models
for consciousness as physicality
There are attempts
to resolve this impasse. Daniel Dennett (e.g. 1991, 1996), and Ryle (1949)
before him, claim that experience just is physical, with, in Dennett’s case, an
argument as to why this is so by altering our notion of consciousness’s
functional properties—function without phenomena. Paul Churchland (1995) claims
a future science will explain the identity of the neural processing with
consciousness (“epistemic access”). Antonio Damasio’s version (1999) is that
the brain effects consciousness by higher order representation. And John Searle
(1992; 2002) says that the brain has emergent causal properties which do mental
work in a (“higher” brain state) phenomenal field. These are well known. They
attempt to solve “How”, in Michael Gazzaniga’s words, “the brain enables the
mind [which] is the question of the twenty first century—no doubt about it.”
(1998, pxii). But the multiplicity of “solutions” (and of course there are
more) is an indicator that no one has come up with a version that convinces,
one with scientific credibility.[6] For they all have
well known problems. Dennett, without explanation, merges two ontological
realms, neurophysiology and consciousness. (Why is assembled (from sub-personal)
neurophysiology not still neurophysiology?) Churchland’s future science is a
dubious promissory note, not an explanation; and why (not least) is it consciousness
(functionally) that is reducible (Churchland & Churchland 1991)? Damasio
does not engage the problem of knowledge as physicality at all. And Searle
supposes a science of the brain involves unknown, one might say miraculous,
properties.
The key question
is: Are these “solutions” attempts to solve the theory of consciousness (i.e.
our cultural history), or a problem in the physical universe? Did evolution
really generate experience as knowledge?
3. The question of
model
Section 2. was
scandalously brief, but issues necessary to the argument of this paper have
been exposed. We have a duality of explanatory and ontological domains. On one
side there is the mind with its perception/thought/feeling/sense features. On
the other is the brain which is pure neurophysiology of enormous complexity,
operating by electro-chemical means. (Neuroscience has yielded results in terms
of structure and function in the brain, but it has not identified mentality.)[7]
But suppose that
what is necessary to resolve the problem is not a wrenching of one side into
the other (reduction or emergence), or an explanatorily deficient conjunction
of the two (e.g. higher order representation), but the introduction of a third
factor. This third factor would not be a phlogiston-like enabler, but a
functional property that recasts our understanding of the two sides. What could
it be? To locate that third factor, we must go back over the material already
covered. For our current duality, from a biological stance, has both missing
and mistaken content.
3.1 The primacy of
a neural account
A scientific
approach will not do what Descartes, and his descendants, did. It will not
assume, from a religio-philosophical pre-scientific heritage, that
consciousness exists. Nor will it, as Dennett says, look for “A physical
structure that could be seen to accomplish the puzzling legerdemain of the
mind” (1994, p237), which echoes Gazzaniga’s words above. The physical brain
(qua physicality) must be regarded as wholly adequate to perform its biological
role. We will call this bio-physical adequacy. There are then two
questions:
1. What is
the brain’s biological role?
2. Why does the
brain create experience? I.e. What, for the brain, is experience?
These are the
central questions. Before addressing them directly, let us continue to review
the material.
3.2 What mind-body
problem?
The mind-body problem
is founded upon the extraordinary idea that, having received all the
requisite physical information from the sensory receptors, the brain, instead
of acting upon it directly, turns it into intentional content and
feeling/sensation as subjectively available. These kinds of states are
understood to be per se causal. Yet they then have to be recast as pure
physicality again to render them causal by having physical properties.
This recasting is the mind-body problem. However, and quite obviously,
the intervening experiential state, in causal terms, appears redundant.
There are
essentially two (non-divine) justifications for the interposed experiential
state. Though linked, they do not always both occur. The first is that the
complexity of the brain’s assessment of its incoming information requires the
creation of the complexity of consciousness. Since we assume that the knowing
we have as consciousness is causal, and is complex—so complex indeed, that much
of it, in terms of an immediate task, seems redundant—it is proposed that
consciousness must be the result of what the brain does to manage its
complexity (e.g. Edelman & Tononi 2000). The second reason is that it is
assumed that we must know: for we cannot imagine (viz. indelibility) how
the brain, in terms of its bio-physical adequacy, could do what it does without
a knowing state (viz. our experience). How can we deal with mountains and
mountainness if we do not know about them (e.g. Damasio 1999)? (Reference is to
neuroscientists because philosophers, by and large, presuppose consciousness.
But these justifications are universal in all disciplines. That the origin of
knowledge is from the God concept has been long forgotten.)
To support the
knowing thesis (the second reason), certain conditions, such as blindsight, are
nominated. As Anthony Marcel (1988) has said, although a blindsighted person
might be able to guess (for example) from a pregiven list with above average
success what is being shown in their blind field (indicating unconscious
information), they would not act voluntarily upon that information, so
demonstrating, for action, the necessity of our experiencing-perception, or
awareness. Allen and Reber (1998, p322) well illustrate the role of awareness,
that alternative word for consciousness. “Consciousness is a limited channel
processor.... It is difficult to see how a connectionist model could operate if
the organism must be aware of all processes at all times.... There is ample
evidence that unconscious processes [i.e. the connectionist model] are involved
in adaptation and intelligent functioning.” But if our notion of consciousness
(or awareness) is wrong, we will have misinterpreted what happens with
blindsight. Awareness will therefore not be a causal factor, and the
justification that the brain creates consciousness for our voluntary action in
the world will be lost.
Allen and Reber, in
the above quote, are assuming, not justifying, that the brain needs to
create consciousness to be causal. For if the brain functions to cause
organismic action to fulfill biological “aims”—by what we have called
bio-physical adequacy, with neurophysiological states (i.e. Allen and Reber’s
“connectionist model”) associated with adaptational effect—we might suppose
that action causes result from neural-state integration by neural firing across
the brain as a pure physical assembly. There is a term for this, viz. binding.
The debate as to whether binding needs to generate psychological properties
(i.e. consciousness) can be seen in such papers as Hardcastle (1998) pro, and
Prinz (2001) con. Support for the psychology option may be driven by our sense
of conscious causality, with the profound significance that has had for human
self-understanding. But in terms of brain explanation, the creation of
psychology by the brain as a (complex) causal function has no biological
justification. That is, the organism, of Allen and Reber, is not “aware”
of anything. This category mistake bedevils the literature, as we shall see.
3.3
Counter-positions in mentalism, that still lack physical explanation
In recent years,
various writers, including John McDowell and Hilary Putnam (influenced by
Wilfrid Sellars’ Myth of the Given (1997)), have questioned the model that
derives from Descartes. They do not accept that perception is effected by
sensory input (as e.g. sense data, non-conceptual content, bare appearances) to
the cognitive functions of the mind. They take perception to be of the world,
in which cognitive effects (i.e. conceptualizing) are already extant in the
perceptual state. As McDowell puts it in one of many variations: “Conceptual
capacities are already operative in the deliverances of sensibility themselves”
(1994, p39).
This might appear
relevant to the position to be developed here. For we have proposed, from
bio-physical adequacy, that causal processing in the brain takes place as
neurophysiology. Thus, in McDowell’s account, when so-called perception arises,
there is no mental pre-conceptual experiential state then to be worked
over by other mental properties (what McDowell calls “spontaneity”, in
Kant-speak): perception is, in Putnam’s words, an “unmediated contact with the
environment” (1999, p44). For Putnam claims (in commentary somewhat as here)
that “early modern epistemology and metaphysics saddled us with an interface
conception of conception as well as an interface conception of perception”
(ibid. p45). Rather, he wishes us to see directly the pot of jam as a pot of
jam.[8]
However, whilst
this approach attempts to undermine the notion of the mental as our interface
to the world (the true real, but courtesy of God, with its problematic
subjective variation), it does not solve the mind-body problem. For according
to McDowell, although there is to be biologically stable sensory input (i.e.
a God + evolution conspiracy has happened, which removes problematic subjective
variation),[9] our freedom in
making judgments in perception still results from our (conceptual)
spontaneity. But how can he justify the notion of freedom as
neurophysiology (blind, thoughtless physicality)? While McDowell aims to show
that conceptualizing is constrained by sensory input, thus to be
genuinely of the world, at what stage does he suppose that sensory input exists
as a mental (i.e. intentional) pre-conceptual content, thence to be
integrated for our perception with the concepts of e.g. pot and
jam? Because his claim is that it isn’t. An adequate account, by contrast,
would specify whether we are talking wholly in mentalist terms (non-conceptual
input + conception) or neurophysiological terms (physical sensory states +
neurophysiological reactive processing), and, if the two are to be linked, exactly
when and how.
Kant fails to
explain physically the presence of the sensible intuition as sensory
input. McDowell, too, gives no such explanation.[10] So we can no more
assert grounds for freedom in judging (actively) that actually there is
a pot of jam (as a result of our conceptualizing capacities—as McDowell
proposes—and not e.g. a pot of marmalade or a can of paint) than to suppose that we
might unfreely first see (passively) an unarticulated, or unintelligible
bounded locus of space (whatever that might be).
For what we
experience at each moment in perception is perception as it happens, even if it
may alter in relation to some particular object.[11] So when McDowell
says (ibid. p43) “The faculty of spontaneity is the idea of something that
empowers us to take charge of our lives”, we ask: What us? The Cartesian
subject? Our brain? Some other notion? This us (to which Putnam also
wants to give direct access to the world) is utterly obscure, except that it
occurs in the domain of giving reasons (which arise from our conceptual
capacities).[12] But a physical account
will ask more fundamental questions. They are: What kind of thing gives
reasons? And what are reasons anyway?
Putnam’s take is no
more satisfactory. He rightly criticizes Fodor’s (consciousness-ignoring) idea
of “perception modules” in the brain as input to conceptual functions (ibid.
p36), because we would still need to understand how actual perception is
reducible to neurophysiology, just as we would need to understand how irreducible propositional attitudes are reducible to
neurophysiology. (I.e. Fodor simply mixes up ontological terminologies.)[13] But a Putnam-type
account of direct perceptual (sensory + conceptual) grasp of a pot of jam as still
(as he proposes) irreducible to neurophysiology, does not solve the problem of
how or why the brain makes the, apparently redundant, perceptual state
of a pot of jam in the first place.[14] For a genuinely
explanatory account of what is going on in the brain, this is what we must
get a grasp of. Why is there any kind of experience at all?
To comment on
Putnam’s own comment about Dennett (ibid. p157): There is no point in having a
philosophy about a topic that is itself outside the physical universe (e.g.
direct perception that is physically inexplicable). Even so, talk of the
physical universe (science) still requires a philosophy since we do not
(God-wise) know what the ultimate grounds of the physical universe are, or indeed
much of its operation—including ourselves as experiencers. Thus our philosophy
will be of a newly established region of biology (what our experience really
is), not a philosophy of an invented mind.
Perception, in the
accounts of McDowell and Putnam, is not irrelevant to what will be proposed
here. Reasons, or explanation, will feature, but will be given a cogent
physical account. Thus Putnam’s wish can be fulfilled: By “giving up th[e]
picture of perception as a set of ‘representations’ in an inner theater...we
will...escape from the endless recycling of positions that do not work in the
philosophy of mind” (ibid. p102).[15] But it will not be
by Putnam’s irreducible direct perception route.
3.4 The
physical/functional incoherence of mind
Cognitive science,
psychology, much philosophy of mind, depend upon an inexplicable assumption
that the Allen and Reber quote above (consciousness vs. connectionist model)
demonstrates. To illustrate with another quote, from William Seager:
“Descartes’ vision of the mind is the foundation of modern cognitive science.
The linchpin idea of this upstart science is that the mind is in essence a
field of representations—encompassing perception and action and everything in
between—some conscious, most unconscious, upon which a great variety of
cognitive processes operate” (1999, p4).
But how can
consciousness, i.e. states of experiential aboutness, operate with unconscious,
i.e. connectionist, states? If the informational content of the pot of
jam is as we see it, how can it cause a connectionist neural state to
operate, e.g. that we pick it up?[16] We must suppose
that, for the perceptual pot of jam, the neural brain has made of itself a kind
of state which is not the physical neural synaptic weighting of the (causally
active) connectionist model. Yet somehow that different kind of state must, via
its content, cause a purely neural activity for action to take place (otherwise
why is it there?). But we have no explanation for this.[17]
Putnam rightly
criticizes Fodor’s brain “perceptual modules”. But the whole explanatory
modus of the mind/brain depends upon an unfathomable causal connection between
experiential intentionality and causal neurophysiology—two entirely different
ontological realms (though apparently both of the brain). Without such
explanation, Seager’s cognitive science has not elevated itself to the status
of a science at all; and the very notion of mind is undermined. For saying that
we are conscious or unconscious is not the same as saying the brain
is conscious or unconscious. For the latter (whilst actually a category
mistake) would require a functional explanation as to how the two kinds
of states causally interact to effect, not only action, but thought itself, the
very source of McDowell’s/Kant’s freedom.[18]
3.5 Commencement of
the new model
To sum up the
position now reached. Rather than beginning with the theory of consciousness
and trying to reconcile that with the physical brain, we should start with the
physical brain and try to work out why what has been termed experience came
into being. Because of the intractable difficulties with pre-scientific
consciousness, we assert that a new model is required of the function of
experiencing. This should be founded upon a theory of function under evolutionary
principles.
We will now answer
the first central question. What is the brain’s biological role? The
brain, as an organ of the body, causes bodily action (including internal
organic regulation) to fulfill the biological “aims” of the organism (given
entirely as organic states), and has developed over millions of years by, in
more complex creatures, evolving through other species. In other words, there
is an evolutionary continuity from the most primitive species to man. There is
no evolutionary discontinuity, though there is evolution. Whatever is
accomplished by the brain to effect the biologically adequate operation of the
organism in the world (given ultimately by its survival and reproduction) is
accomplished by its neurophysiology, and effected by the central and peripheral
nervous systems, and other organs and processes of the body. Thus the ontology
and function of the brain’s causal neural status is now defined.
As a statement in
biology, this role-specification appears entirely uncontroversial. Solipsistically,
there is no reason for the brain to have any other operative mechanism than its
physiological features. But organisms are not solipsistic. So without
elevating the brain’s neurophysiology beyond the physical (i.e. with consciousness),
we must identify what the brain does to communicate, for the purposes of
common action, with the brains of the organism’s conspecifics, and others.
This will answer the second question: why is there experience?
4. Brain-to-brain
communication
To read the
following, one must constantly “see through” one’s familiar state, so to
identify one’s existence as a purely biological organism without
self-ascriptive knowledge powers. There is neither consciousness, nor
unconsciousness. The brain’s control of the organism is maintained by
neurophysiological programs, where this word implies largely
pre-established (i.e. “learned” through time) series of activation (cf.
Edelman’s remembered present, in e.g. Edelman & Tononi 2000) which
cause the organism to act, from the smallest gesture to the most complex and
“future-projected” plan.
Now consider this
quote from Heidegger:
“The bee is simply given over to the
sun and to the period of flight without being able to grasp either of these
as such.... The bee can only be given over to things in this way because it is
driven by the fundamental drive of foraging.... The fact that the bee is driven
in a particular direction is and remains embedded within the context of the
fundamental drive for going out and foraging” (1995, p247).
The notion of
embeddedness has come into recent philosophical discussion in cognitive science
(e.g. …).
Indeed, a
proto-model that accounts for how we can operate in the world without
consciousness is already available. It is the neural net, connectionist model.
However it may seem to us—e.g. that the pot of jam appears before us, and we
respond to the actual pot because we see its real image—we should rather
understand that the brain grasps the pot of jam as a physiological structure
in electro-chemical terms. Indeed, not as a unitary structure, but as a
dispersed set of characteristics represented purely in neural net (i.e.
brain-synaptic) weightings which have no structural equivalence to the pot of jam
as perceived. Indeed, not as a set, but as many different groupings,
overlapping (superpositional) with features of numerous other represented
entities. These other entities may bear no obvious (i.e. intentional) relation
to the object pot of jam. But in brain function terms, it is the
operative neural structure that facilitates our (hugely complicated) action.[20]
In practice, this
is an early-stage hypothesis, since the actual working of the brain is vastly
elaborate and will involve other factors than synaptic weightings.[21] But the
connectionist model demonstrates the plausibility of purely physiological
processes (processes apparently in the brain) achieving the action-causing
functions of mental powers: e.g. recognition, comparison, logical steps, etc. The
connectionist model is representational: but, as William Bechtel argues
(2001, p332), it is a minimal representational notion which “is viable” (i.e.
is neuroscientifically extant). It does not pose the problems of
(Descartes’/Seager’s) representational mentality in the brain, a point Bechtel
hints at but does not really engage.
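The dispersed, superpositional character of such representations can be made concrete with a toy linear associator (a hypothetical sketch, not drawn from the paper; the stimulus patterns and action labels are invented for illustration). Two objects are stored superposed in a single weight matrix; each still drives its own action, though nothing in the weights structurally resembles either object, and the two representations overlap in every synapse:

```python
import numpy as np

# Illustrative only: two orthogonal "stimulus" patterns stand in for
# dispersed neural feature activations. Neither bears any structural
# resemblance to the object it encodes.
jam_pot   = np.array([1, -1,  1, -1], dtype=float) / 2.0
paint_can = np.array([1,  1, -1, -1], dtype=float) / 2.0

grasp = np.array([1.0, 0.0])  # response pattern: "pick it up"
avoid = np.array([0.0, 1.0])  # response pattern: "leave it"

# Hebbian outer-product learning: BOTH associations are stored superposed
# in the one weight matrix W -- there is no separate "pot of jam" structure.
W = np.outer(grasp, jam_pot) + np.outer(avoid, paint_can)

# Retrieval: each stimulus drives its own response despite the superposition;
# the "recognition" is nothing over and above the synaptic weightings.
print(W @ jam_pot)    # recovers the grasp pattern
print(W @ paint_can)  # recovers the avoid pattern
```

Because the stimulus patterns are orthogonal, retrieval is exact here; in a realistic net the patterns overlap and retrieval is approximate, which only strengthens the point that the operative structure has no intentional correspondence to the perceived object.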
If we accept the
brain’s bio-physical adequacy along connectionist lines, what is left over?
What is the missing third factor that we proposed at the beginning of section 3?
The answer is: Physical brains must communicate with each other to effect
common action. There must be a means by which an assembly of pure
neurophysiology can engage with other assemblies of neurophysiology in the
physical world. For cooperative action (as in individual brains) will
result from pre-established programs, but across individual organisms.
Collective action is not a result of the conscious decision of two (mental)
subjects on the basis of their individual representations of the world. For the
assumed objective world of that collective action of minds with subjects results
from the definition of mind, not from how adaptational organs, viz. brains,
grasp their environment for the purpose of action.
The requirement
for brain communication is invisible in the tradition because the assumption is
that we see and understand the same things because we process the same
information as minds. Kant formalized this with his a priori space/time
concepts for perception, and the categories of the understanding (sourced by
Aristotle). Perception as appearances, while not of things in themselves, is
nonetheless objective: i.e. available to all per se for a
universally possible understanding. Essentially this simply tinkered with Greek
foundationalism, and has nothing to do with evolved brains.[22]
But causal brains
are an incommunicable scramble of causal neurophysiology. Their causal
properties are deeply obscured (though not, in principle, undiscoverable). For
solipsistic brains, the nature of that causality is irrelevant, since they
operate under adaptational development, and their success is judged
biologically according to the (action-effected) fulfillment of their biological
aims, about which they have no idea, for they have no idea about anything. But
for brains to communicate on their collective programs supra-organismically,
the situation is quite different.
Let us hold on this
position for a moment. The knowledge ontology that drove the theory of
consciousness, with its need to identify what can be truly known, arises from a
view of the universe in which there are knowable and known things because the
universe was created by a knower. Indeed, in Hegel’s Aristotle of the Metaphysics
(1961) (thence to Hegel’s (1807) Absolute Subject, and “domestication” by
Sellars and McDowell) divine thinking (Spirit) was the universe (the fall-out
is still apparent in Fodor, see below). But if the universe is entirely other,
i.e. is purposeless and requires no knowing but conforms “merely” to lawful
regularities—which applies to organisms as much as anything else (including
ourselves)—then we see that the brain does not know at all. Its
evolutionarily derived operation is founded upon its ability to fulfill its
biological aims (adapted organic states). What person A “knows”, i.e. what
occurs as their (supposed) consciousness, is not what person B “knows”. There
is no objectivity as Kant specified it. For the source of the apparent knowing
is the brain, and the brain knows nothing. Nor does the brain see anything,
feel anything, or sense anything.[23] The brain reacts
to the world by its neurophysiological states in purely electro-chemical terms
which, between us, will not be (exactly) the same.[24] Thus the proposal
for identity between brain states and causal psychological states is a
red herring, though evidently whatever seeing, feeling, sensing and knowing
actually are, are brain states—what else could they be?
But brains must
communicate for survival which, in a biological account à la Richard Dawkins
(1976), results in a perpetuation of the genes (as the unit of selection). For
that communication, brains must create communicative states. To be effective,
these states must be able to communicate: in other words, they must be
adequately interpretable by other brains.[25]
We are now going to
make a fundamental biological distinction. This is a distinction in the
literature, but not as it appears here. We distinguish between closed
(or fixed) organismic cooperative patterns of behavior, and those which
we term open patterns of behavior. (In biology, reference is made to
closed and open instincts, the latter being modifiable. We will not
address the notion of instinct.) An example is the bee dance (Von Frisch 1966).
With the bee dance, signaling is carried out from one group to another to
locate the presence of honey (as the biological aim) at a specific position in
relation to the sun. This dance may take place in the dark of the hive;
nonetheless the receiver bees can, to within a few degrees, detect the direction,
which will be refined near the target by scent (receptor molecular
impingement!). This kind of activity can be put down to a primitive consciousness, and associated with a symbol-manipulating capacity.
The structure of the dance, and of fixed communication generally, serves to trigger from one organism to another a complete cooperative task, either jointly
from one organism to another a complete cooperative task, either jointly
or, as with the bee, to effect a beneficial-to-the-group consequent altered
state of behavior in a conspecific. But even if symbolism is involved (the
dance), the point is: How does the alteration of a conspecific’s behavior come
about? (Obviously bees do not sign honey or sun or direction,
because they cannot propositionally specify them.) The answer is: By the modification
of the receiver organism’s neural states for a prestructured program of the
(fixed) pattern of behavior. In other words, even as a behavioral
signal/symbolism in the world, the neural structure and operation of the
organism is adequate for behavior alteration without the presence of
consciousness. The behavior may be relatively complex, but it is not
open-ended. It is mapped to the world-domain of the organism’s (neurally
defined) fixed behavior capacity.
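The directional content of the dance, as von Frisch decoded it, can be sketched as a line of arithmetic (the function name and the degree conventions are my assumptions for illustration): the waggle run's angle from vertical on the comb corresponds to the food's bearing relative to the sun's current azimuth, so the receiver's behavior can be redirected without anything propositional being conveyed.

```python
def food_bearing(waggle_angle_deg, sun_azimuth_deg):
    """Compass bearing of the food source on von Frisch's decoding:
    the waggle run's angle from vertical (clockwise positive) equals
    the food's direction relative to the sun's current azimuth."""
    return (sun_azimuth_deg + waggle_angle_deg) % 360


# A waggle run 40 degrees clockwise of vertical, with the sun at
# azimuth 120, signals food at compass bearing 160.
print(food_bearing(40, 120))  # → 160
```

Nothing in the receiver needs to "know" a direction; the geometry of the dance simply parameterizes a prestructured flight program.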
Now this fixed
behavioral structure is to be contrasted with open-ended behavior. And it must
be said that where, in evolutionary development, this occurs is as yet
unspecifiable. The difference between the bee-type behavior and open-ended
behavior is not, as the Heidegger quote above proposes, one in which a mental
grasping takes place. It is one in which the behavioral characteristics are
dynamically modified within the program as a result of the progress of the
program itself. This kind of modification is not predictable; yet adaptation has
taken place which allows conspecific brain communication to effect the
completion of the program (to target neural states) through an unpredictable
form of cooperation within the program itself. The idea is that open-ended
behavior demands an in-program brain communicative status to be conveyed.
Brains contain no (problematic) conscious subjects perceiving, thinking,
feeling, sensing, willing. So how is in-program brain status signification to
take place?
The illustrative
example is two lions hunting a gazelle on the grasslands. They will be called A
and B. The program of the hunt is the pursuit of the gazelle to the kill. The
gazelle may run, change direction, employ aspects of the environment to its
advantage. But these two lions can cooperatively pursue it to a result. How?
If lion A moves to
the right in line with the program as they stalk the gazelle, lion B must grasp
the significance of lion A’s move. As in the bee case, we might suppose that
the mere fact of A’s move, modifying B’s neural structure by sensory input, is
sufficient. But there is a further requirement here. It is that lion A, to
continue with the hunt program, must be able to assume that lion B has grasped
its move as appropriate within the program. It cannot yell, “Have you got it?” since
it has no language or vocal capacity so to do—and besides which, that would be
the end of the gazelle hunt. The assumption of appropriateness needs a means of
reciprocal signification which is sustained remotely between the (purely
physical) states of their brains.
There is a way this can be understood. It is that the image in B’s brain of A’s move (previously
called conscious perception) is a sign indicating to A’s brain B’s grasp
of A’s move. That is, when A moves, B’s brain grasps A’s move physically causally
by its neural-state modification; then it signs to A that it has grasped
the move by creating the image of A moving in the environment, which A’s brain
can take as the sign of B’s grasp. The image is a signifying analogy of
A’s move in the world.
Considering B’s
brain, therefore, we see it in these two parts, biologically. In and of its own
causal activity it is purely the electro-chemical modification of its physical
states. But since it is involved in collective action, in this case with a conspecific
for the purpose of obtaining body-fuel (the gazelle), it needs to signify the
progress of the program in the world until the fulfillment of it. This
signification is what the brain interprets of its own causal physical
states. This phenomenon is not operating causally for the organism, but is
purely a sign. As a sign (or signifier of the brain’s causal status) it is
unproblematically physical, i.e. can be understood in terms of its sustaining
medium, the neural base. Since it is a sign, it will be referred to henceforth
as the brain-sign. Thus the causality of the brain-sign phenomenon is
directed outwards.
In normal
discussion this sounds peculiar, because the obvious question is: If the
analogical image is a brain-sign, how can A’s brain see it since it cannot look
into B’s head? But this is a (mentalist) misconstrual of the physicality. The
program of the hunt is genetically endowed, and realized through exposure in each of the lions. They have no knowing capacity to grasp the idea of the hunt, and
then cooperate. The program can only be effected, as two (or more) lions, by
two (or more) lions. So the brain capacity for the hunt takes for granted
that other lions can participate. The hunt structure “assumes”, in the brains
of the lions, that a signifying status of the physical program is extant. Put
more scientifically: The supra-organismic structure of the hunt is, as it were,
imposed upon the single organic entity of each lion by the construction of all
their brains in genetic inheritance. Insofar as they perform the hunt, they are
collective organisms, and are so “designed”. Thus the function of the
brain-sign itself is supra-organismic, i.e. nothing to do with individual minds
or souls.
To continue with
the hunt program. Since B’s brain has grasped A’s move (in its physical
modification), it then can move B’s organism in an appropriate and reciprocal
way. A’s brain will grasp B’s move and sign to B, reciprocally, its grasp by
the analogical image in its head of B’s move.
But now we note that the image in B’s head is not only a sign to A of its
(brain’s) causal grasp of A’s move, but an explanation of B’s move,
since the causal grasp of A’s move results both in the analogical image in B’s
head and its reciprocating move. Again, A’s brain cannot see that B’s image is
this explanation, but it does not need to since this explanation is assumed in
their collective action. The complementary nature of the image, and
underlying neural causality, will be termed complementary resonance.
To enhance this point,
consider the situation in which lion A spies a small tree that it can take as
cover. There is no (mentalist) understanding in A resulting from its
conscious perception of the tree to be worked over by reason precipitating
its move. We must see this as a purely physical grasp of the
hunt-program/terrain-structure by A’s neural states from electro-magnetic
radiation input, thence steering it toward the tree (i.e. the neural grasp is
simply behavior determining). But, on the other hand, there will be in A’s head
an image of the tree on the terrain, in whatever representational capacity
lions have. What, then, is the point of the image? Its significance lies in the
fact that B’s neural states will grasp A’s move and the tree and
their relation in causal (electro-chemical) terms, and will produce an
(analogical) image of this. Specifically, under appropriate circumstances, the
fact that A moves toward the tree for cover must be an establishable part of
the hunt program. Thus the tree itself attains what we call meaning for
both of the lions in the hunt. Its meaning lies firstly in the causal status of
the neural states of both the lions, and then as a common signifier in
both brains. In A it means “the object for cover in the program” and for B it
means “the object in the program that A is using for cover”. Of course, neither
lion could propositionally specify it is a tree nor what cover is. But that is
not the point, for the analogical image is already the physical expression of
the meaning of the tree.[26] And, moreover,
since it is, in this context, a common signifier, it is the way both brains, as
just physical objects, communicate the commonality of their states for the
purpose of common action. Meaning, however, is entirely a physical state
now, both causally and communicatively. It functions purely to enable
action and has no transcendent status.
To grasp the point
here requires a complete appreciation that humans (minds) do not operate with
meanings (as words or images) that somehow exist outside themselves (in
God’s head, other organisms’ heads or the ether) because they are intrinsically
attached to external objects or qualities (a mentalist foundationalism, or
“mirror of nature” in Rorty’s (1980) phrase).[27] As long as
biologically adequate action is enabled, whatever is brain-signed has
achieved its role.
Indeed, the answer
to why there is a brain-sign is precisely so that states can be signified to
establish communicative reciprocation or commonality.[28] By contrast with
the complementary resonance of reciprocation, common neural
causal/signification states will be termed congruent resonance. If
creatures were solipsistic (i.e. the solipsistic brain already referred
to), there would be no need for meaning or brain-sign at all. So, for example,
just as A sets out toward the small tree as an action in the world, so the
small-tree sign is the output projection of its causal status. And as B regards
A and the tree, so it has them as output, linked by their analogical
relation—and will itself act accordingly. Analogical picturing, therefore, is
not some means of (conscious perceptual) causal grasping of the world, but
neural portrayal as the world. (But not, of course, the real world,
nor the physical world.)
Without going into
greater detail, it must be added that brain-sign, as e.g. so-called perception,
is not a series of still shots pieced together so that we see continuously
(cf. Damasio’s (1999) characterization as a “movie in the brain”). Both Husserl in 1913 (1982, p174 and below) and William James in 1890 (1950) observed
that our experience retains somewhat the past (Husserl called it retention)
and anticipates the future (protention). Thus a perceptual state,
whilst appearing as a sequence of instantaneous events (how else could it appear?), is
experienced as a continuum. Husserl took this to be an intentional feature of
mentality. But the biological interpretation is far more cogent. The feature
illustrates the pre-established nature of (so-called) perception as of a brain
program, fitting the structure of Edelman’s “remembered present”.[29],[30] Indeed, brain-sign
generally, as a signifying physiological state, is to be understood as
essentially discursive. The temporal, sequential nature of language is
evolutionarily preceded by other aspects of the brain-sign function, to which
it is integrally related.[31]
By the time humans
arrive, the situation is more complicated but is founded on the same
evolutionary principles. Three characteristics seem to differentiate humans
from other creatures, but some other creatures show elements of them. The first
is a sense of self. The second is language. The third, associated with both the
preceding, is the apparent reflectiveness/reflexivity of consciousness.
The detail cannot
be explored here, but the significance, in the context of the approach, can be
stated. With a sense of self there is an explicit demarcation between the world
and the experiencer. This has been identified in chimpanzees and dolphins.
These brain-sign
features are still, of course, portrayals of causal neural status, not evidence
of a more developed mentality. When a chimp appears to recognize itself in a
mirror, this is a neural accomplishment which presumably results in a
brain-sign state of the chimp that it can project to its conspecifics (and us!)
because, in principle, they can reciprocate. Similarly, when we suppose we are
considering our own thoughts by some mental means, these reflective acts are no
more than analogical projections of our causal neural states which could
be conveyed to others. We cannot literally be reflective by conscious decision.
There are no conscious decisions; and there is no enduring causal I for
reflection/reflexion, as in consciousness theory.
To see the function
of language, consider two humans discussing their view of a ship on the sea.
They do not constantly point out to each other: “There is the ship, there is
the sea, there is the sky.” Their (brain) communication takes place on the implicit
supposition of an adequately common view. But not only are they not
conscious in grasping the world and discussing it, their communication is caused
by their neural states. These two people have no idea what they are
going to say until they say it. And when they hear what the other says, no
subsequent mental processing on what they hear results. What they hear
(content, tone, sound) is already output to the other person (brain),
for their neural states have already determined what their response will be, founded
upon the informational content in the compression waves received into their
ears, and the electro-magnetic radiation to their eyes (i.e. a causal
functioning of pure physicality). Hearing what the other says is “merely” an
explanation of whatever results as the hearer’s explicit reciprocal
words (content and tone) and/or actions. I.e., we do not decide how to
respond to another’s words by what we hear. We (as experiencers)
communicate our brain’s (neural) causality by being its sign.
This addresses,
functionally, McDowell’s and Putnam’s “concepts in perception”. For humans,
what occurs as the brain-sign in terms of our seeing occurs as a construct with
the communicable status apposite to the current neurophysiology (e.g. mountains
and mountainness). There is no mentalist sense-input worked over for
conceptual addition. Thus Kant’s/McDowell’s spontaneity/freedom is resolved, as
are Putnam’s irreducible direct perceptions. (Cf. e.g. McDowell’s account in
chapter IV, 1994.)[32]
But the problematic
“logical space of reasons” is revealed as the signification of the logic of (embedded) neurophysiology,
and is the (oftentimes) adequate communicative modus that is brain-interpreted,
brain-selected, to enable collective action. Reasons, as spoken, are not a manifestation
of rational minds operating via consciousness. The world, and its
intelligibility, are not (dualistically) opened up to a subject contemplating
its (God-endowed) luminous condition. Reasons are minuscule local fragments of
the potential analogical read-out of a brain’s status. That is why we can think
different things (even opposite things) at different times about apparently the
same topic. For different action neural programs are involved in the
signification, which have no regard for knowledge. The determining
characteristic of our statements is not a coherent network of beliefs (no
beliefs exist in the brain), but a time-given, law-observing, action-determined
signification of a neural status, which itself is identified as an (often only
potential) program of action.[33] We speak, not from
the unconscious, but from a type of source that is inaccessible directly. The
causal power of our writings or utterances is not the words (by which concepts
are deemed to exist), but as brain-input as electro-magnetic radiation or
compression waves. The truth of what we say does not (for example) result from
the truth of what we see, for in that sense there is no truth. The
nearest to what we might call truth (which per se depends upon a pre-scientific
God-dependent knowledge ontology) is the adequate communicative modus
(neurophysiological resonance) for effective collective action in the
world. (A link to pragmatism is evident.) That is how, together, we get
space vehicles to Jupiter, and beyond.[34]
In the wake of any
collective project, there is (usually) a trail of information and reason. But
the causal history, for each solipsistic brain which did the work, is not that
documented brain-sign history. It is the operative neurophysiologies, to which
we have no access. The information and reasons are signifiers which both are
approximate, since they are brain-interpretations of the causal
neurophysiologies, and can function to re-engage more neurophysiologies, e.g.
historians of the project and us. We cannot replicate the project, because it
was a time-dependent, neurophysiological-dependent trajectory in the world in
mass/energy space-time terms. But we can gain some approximation to the project
by the programming of our neurophysiology by the communicative medium of
brain-sign features: written words, spoken commentary, visual images. That is
how history works. What is then causal in us (but inaccessible to us as
experiencers) is our neurophysiology, by which our brain-sign statuses
communicate to others. I.e. brains communicate themselves by us, for we
are their signs. (Little wonder there are no fixed texts nor authoritative
authors.)[35]
Now consider the
self and the reflective/reflexive capacity. Continental philosophy identified
(in Sartre’s terms, 1943) the pre-reflective cogito, or as Heidegger
designated it (from the neo-Kantians, initially Fichte), facticity (see
Kisiel, 1993). The state was consciousness (or lived experience) before the
reflective/reflexive acts take place.[36] These philosophers
did not attempt a biological interpretation of this state, nor indeed that of
the reflective/reflexive states. But a biological view can be understood which
endorses the phenomenon (though not its interpretation). We might well suppose
that, for our lions, there are no reflective/reflexive states, for there is no
neural function that projects the self of the lions or gives them language to
express it and its concepts. But we might well suppose that lions function
pre-reflectively, in that their brain images (broadly meant) do allow
sufficient brain communication for their collective acts without a designated
self.
By the time of
chimps and dolphins, then humans, expressive possibility has evolved. The self,
reflective/reflexive states, and language in which they are couched, as
brain-sign features, have evolved, exploding communicative power, and resulting
in the vastly complicated world humans, in particular, create. Our experience
as a self with world represented to it is not eliminated, but our
interpretation of this state is now entirely different (from e.g. Kant), as we
comprehend its biological function.
Indeed, it is the
reflective/reflexive states that make it so difficult for us to see what the
brain does by our being experiencers. For the reflective/reflexive states
appear to grant us a relationship to the content of our own mind, either to
interrogate it (reflectiveness) or alter it (reflexivity). They appear to allow
in the capacity to judge, which for McDowell, and the history before him, is
key to our freedom in spontaneity (as what McDowell refers to as “second
nature”).[37] But this is an
illusion, which can be understood by seeing (as we have said) that an
occurrence of a reflective/reflexive state is still physically determined. We
have no capacity to decide or judge, in a mentalist sense, since we
are bound (at the instant) to the content of our occurrence.[38]
Of course we can
analyze language. But analysis is of a sign of something else. Language is
an interpretation, in organisms, of (embedded) neurophysiology, of which it is
a sign. The apparent logic in language is evolutionarily preceded by other
brain-sign features. Fodor’s requirement for e.g. productivity and
systematicity in language is preceded by pre-reflective/reflexive image
states. (John loves Mary, for Fodor, can be replaced by Mary loves John only if the speaker understands language. What this “understands” means, beyond speaking the thinking, is unspecified.)[39]
The image of tree A
falling against tree B indicates for communicating lions an event in the world
commonly projected by their neurophysiologies as their brain-signs. Tree B
falling against tree A would indicate an inverse event. The lions have no
language to say: Tree A fell against tree B, or then: Tree B fell against tree
A. But what evolves as language is the explicit over the implicit.
This is not because of an understanding humans have and lions don’t. It
is neurophysiology communicating its status by a new powerful medium. Fodor
says: “Linguistic capacities are systematic, and that’s because sentences have
systematic structure. But cognitive capacities are systematic too, and that
must be because thoughts have constituent structure. But if thoughts have
constituent structure, then …”
“Cognition” is an embedded neural structure responsive
to the world both: without language, symbols or thought, and with other
kinds of causal properties. But aspects of the environmental domain (including
the organism’s body) in which action programs operate are represented by brains
as adequate signs. Symbols do not exist in the head. Brain-signs arise
afresh each moment. The brain may sign a version “cow” or “horse”,
depending upon its neural grasp of the environment. (And if nothing is actable,
nothing is signed, or there is inapposite signing.)[40] That does not mean
that cow or horse exist in the head to be matched by cow or horse
in the world. Signs are about causal neural status, not (knowable) world
features. Intentionality, as a causal property (Husserl’s perceptual
image in the head, or the word tree), is not biological.[41]
Is there evidence
that the neural brain is causal before the brain-sign occurs? In our common
experience we withdraw our hand from the iron before we feel it burned, and we
hunch our shoulders before hearing the thunder overhead. The experiments of
Benjamin Libet et al. have been a confirmation (1983).[42] Libet demonstrated
a delay, in voluntary(!) action, between the readiness
potential, signifying the commencement of brain activity, and the
experience. The delay is between 350 and 500 milliseconds. This finding prompts
the question: Why should there be a delay between the brain initiation of
neural activity and our experienced acts of will, when the latter supposedly
depend upon consciousness? How can we act voluntarily (i.e. on a conscious
decision) if the brain has already begun the act without us? The reason is
clear in the model here proposed. The brain-sign is the brain’s (bio-physical)
explanation of its acts, and has no causal property for the organism in which
it occurs.[43] Our will is not
positioned in consciousness as was supposed. Our actions result from
statistically expressible modalities of physical structures to which we (as
brain-signs) have no access.
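Libet's timing claim can be made concrete with the classic averaged figures for spontaneous acts (the specific millisecond values below are the commonly reported ones, used here purely as illustration of the delay the text cites):

```python
# Times in milliseconds relative to muscle (EMG) onset at t = 0.
readiness_potential_onset = -550  # recorded brain activity begins
reported_urge = -200              # clock-reported awareness of "willing"

# The brain's initiation precedes the experienced act of will:
delay = reported_urge - readiness_potential_onset
print(delay)  # → 350, the lower bound of the 350-500 ms range cited
```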
5. Brain-Sign vs.
Mentality
The detail and
ramifications of the model proposed, while crucial to the interpretation of our
selves and our being in the world, are way beyond the scope of one paper. The
aim of this section is to help grasp brain-sign functioning by contrasting it,
in a few cases, with the mentalist model, and thus (I hope) exorcising the
notion of the mental (the ghost in the machine). First a list of advantages of
the brain-sign model over consciousness/mentality, before a brief expansion
upon some of them.
i. Brain-sign makes biological sense. Brain-sign facilitates
communication between organisms, which enhances gene survival, and is
continuous with the general organismic rise in complexity. We resist the
inexplicability of mentality, with its religious and moral (con-science)
heritage, and its dependence on causal intentionality.
ii. The ontology of the brain-sign is physically feasible, and does not
require the identification of new physical causal properties by consciousness
emergentists, or the “willing suspension of disbelief” demanded by
reductionists. We may suppose the phenomenon, as a sign, has some
characteristic of physicality like the skin reflectance/patterning of the
chameleon, and occurs at neuronal level. We now have a characteristic to look
for, since we are not hijacked by mentalist subjectivity.
iii. Brain-sign explains how we (as experiencers) are an aspect
of brains, how we fit into the natural world tout à fait. We can lay aside
the red herrings of values, beliefs, sensibilities, etc. (They are communicative
biological modalities.) The account is directly susceptible to
naturalistic explanation.
iv. That we are this phenomenon can be separated from the
information this phenomenon conveys (indelibility is overcome). What we “know”
of the world and ourselves in brain-sign terms is for communication between
organisms. No knowledge property reveals the world to us. The world
remains unknowable directly. We aim at that knowledge indirectly by our
theoretical constructs (e.g. physics, chemistry and biology—including
this one) and our devices for detection and measurement.
v. Our own causality remains where it always was: in the
electro-chemical processes of our neural structure. Causality is not explained
by folk psychology, nor is there an equivalence of folk psychology in the
brain. This does not prevent our communication in folk psychological terms,
although the experienced terms (i.e. verbal expression) are not what is causally
transacted within us.
vi. We now understand ourselves as fundamentally collective
organisms. Biologically we are not experientially separated, coming
together out of need or desire (as the notion of individual minds supposes). We
are designed to operate collectively in the fundamentals of our neural
construction. So intimate is this, it is not revealed to us in brain-sign
content, i.e. our experience.
There is a
similarity between brain-sign and consciousness in the sense that, for the
brain-sign, the operative neural structure must have developed the means to
translate aspects of its causal properties into other properties—those of a
sign. But the sign is not, as in mentality, imbued with causality for the host
organism. How the brain creates the brain-sign, both as a physical process and
as specific content in relation to the causal neurophysiology, is, of course,
of crucial significance. It cannot be explored here, but the state of the
brain-sign (its ontology) will be hypothesized.
5.1 Psychological
explanation
There is a scientific
way to understand the causality of humans, and the nature of their internal
states. But it is not by psychology. Psychological explanation demands a causal
realm, viz. our experience, to be different from the physical; it proposes, for
example, free will (or voluntariness). Of course, the psychological realm is
not supposed ultimately to be different from the physical. How could it be? It
is supposed that we have no access, currently, to the equivalent (or
generating) physical. But the equivalent physical that would empower the
causality of psychology does not exist, as we have seen. Therefore the
topic—how psychology can be of the brain—is still-born.[44]
It might be countered
that, since the brain itself creates brain-sign communication for other brains
as its explanation of its causality, why cannot that found psychological
explanation? For example, if I seem to avoid the tree because I see it, why is
that not adequate as a psychological explanation for my avoiding the
tree? The answer is that what the brain explains by the seeing is for other
brains, and is not psychological. The brain, in causing the avoidance of the
tree, does not see the tree. The brain is “just” causal neurophysiology.
The tree we see as to be avoided is defined by the brain’s program of action
in the world, pre-established to the seeing. For us to see the tree,
i.e. actually to look at it, would be for the brain to perform some other
activity, in which case we would see it differently—i.e. what would be communicable to another brain about our neural status would be constituted differently.
Although the world
appears to appear to us in perception, this is not so that we can manipulate
(mentally) these representations of the world, thence the world itself—the
intentional/psychological position. Psychological explanation, and the
equivalent contents of cognitive science, are dualist interpretations of
our experience as if there were a psyche. There is no psyche. We can
only explain the brain’s action causation by understanding the neurophysiology
itself.[45] The nature of this
causation is not apparent in the brain-sign, for it would serve no useful
function in brain communication.
When we communicate in psychological
terms (e.g. “I love you”), our neurophysiology manipulates our behavior and
brain-sign content: this can appear to work successfully enough (e.g. effect a
response). But this is not because psychological terms themselves (or any kind of statement) have a causal impact through their meaning. For there is neither
a transcendent nor physical foundation for meaning. What is activated is
an adequate brain resonance for a causal activity to be sustained between e.g.
individuals. The resonance is already pre-programmed (as usage) to be
effective. But of course, many different expressions can keep a program
activated; conversely, the same expression may not always produce in person B the result sought by person A, because the resonance activated is on the wrong program. (Thus the aetiology of “irrationality”.) Lawfulness is absolute, but
its implementation is invisible since the domain of causality, neurophysiology,
is invisible. Therefore our causal being (currently) is a mystery (cf.
Churchland 1981). But this is not because it is “mostly unconscious”, contra
Seager.
Now the correct
interpretation of blindsight can be understood. Multiple processing in the
brain of visual input is well established (thirty-three locations so far, according to Van Essen & Gallant, 1994). To act in a so-called voluntary
manner, what is required is sufficient neural processing (including binding)
that corresponds to what is portrayed as the perceived image: but the function
of the image is still to indicate to other brains the explanation of the
causality of the neurophysiology, implicitly in common action, explicitly
by pointing out. Behavior in blindsight, under experimental conditions, is
varied, presumably depending upon the level and type of damage in the sufferer
(e.g. Weiskrantz 1997). But a complete image (or enough of one) indicates that enough of the necessary causal neural processing is working for “voluntary action” to occur. Thus blindsight does not support the
proposition that consciousness is necessary for voluntary action for,
neurophysiologically, it is a meaningless notion.
Why do we dream,
have illusions, have delusions, have fantasies? Why do these occur as
quasi-perceptual states? Because in these states, the brain is using precisely
the same communicative mechanism to portray its status to others.[46] The beautiful girl or man who appears in fantasy does not provoke arousal. The neurophysiological
status generates the fantasy. The reason, in the phi experiment, that a
continuous trajectory is seen, not two rapidly alternating illuminated
dots, is because the neurophysiology interprets itself to present the
trajectory. (Dennett’s (1991) Stalinist/Orwellian wrestling with this latter
topic, and his “solution” of the multiple drafts model, is an inevitable, and
unfortunate, consequence of his dualist materialist “mild realism” (1994).)[47]
If we wish to
explain ourselves by illusion, we cannot point to the girl/man-fantasy, or the
trajectory, for they do not exist. All we can say is that they signified our
causal neurophysiology. And so for perception too, though in this case we assume
a reality which seems in line with our brain’s status.[48] For we assume (in
knowledge theory) that what we see remains the thing seen, as our
knowledge, for our action. But the brain’s action-control in relation to
the thing does not depend upon what we see. What we see is constantly “made up”
to effect collective action. Optical illusions long since demonstrated that our
seeing is a construct. Yet the realist position remained unshaken
because the significance of perception being a construct was not
appreciated.
Now the biological
point of imagination becomes obvious. It exists to communicate the brain’s
causal activity (actual or potential) concerning non-present locales &
situations (erotic fantasy; space travel to Jupiter; quarks). In a way it might
be said that Kant anticipated the treatment here in his reconstruction of
Hume’s perceptions, by attributing the manufacture of consciousness to the
autonomous productive imagination.[49]
But the topic
cannot be explored further here.
5.2 Intrinsic and
derived intentionality
As e.g. Searle
(1992; 2002) has it, if George Washington is the person one is thinking about,
our mental states of his image, or events in his life, manifest intrinsic
(sometimes called original) intentionality. Searle distinguishes this
from derived intentionality, which pictures and statues have. They gain
this because humans have intrinsic intentionality in creating or experiencing
them.
But the brain
does not think about anything. For action communication, the brain
signifies its status to other brains with images (implicitly), language
(explicitly), gestures (explicitly). Biologically understood, images of the
world in perceptual states are not about the world as intentionality
proposes. The images are about neural status. Though images may refer to the
world (and damaged feet, etc.), the reference holds no causal property for the
host organism. This has been laid out here already.
But note, too, that
the derived intentionality of a picture is a misconstrual. A picture’s content
comes into being because the neurophysiology of a human brain, via sensory
input, outputs its status as the image. To be causal, i.e. for human
communication, it must be processed by a brain. But the brain does not need (à
la Searle) some as yet unknown physical property to render the intentional
content causal for the host human.
Indeed, the reason
humans (as biological organisms) create pictures in the world is precisely
to extend their implicit brain-communicative capacity to an explicit
capacity.
5.3 Adequacy of
content, the nature of memory & the chameleon
Not only is there no Husserlian in-personhood intentionality: what is conveyed between us by
the brain-sign is “merely” a biological adequacy. For each of us individually,
the tree we see is refined from the complex dispersed causal neural processing
to become this image now. The tree’s appearance is (functionally) specified
by that action orientation for other brains that may be involved in the action
(cf. 5.1).
Thus we do not see
the same tree (either between us, or in each of us from moment to moment), in
the sense of having exactly the same image. And the image brings different associations
for each of us. But the tree we see normally does the job of allowing us
implicitly to refer to the tree in our collective actions, and explicitly
discuss it.[50]
There is massive
redundancy with the brain-sign, since its content hardly could be conveyed to
an audience, for there is no audience for much of it. But the brain cannot
switch itself off because there is no audience, and it cannot know in advance
what will need to be conveyed. Communication is, in Millikan’s terms (1984),
the proper function of the brain-sign, which it performs regardless of
actual or continual success.
The impact on our
mentalist assumptions can be seen concerning memory. We suppose that memories
of actual events, those that constitute our experience, are stored away and
recalled or re-experienced at a later time.[51] Indeed, our
personal and cultural self-understanding depends upon just these kinds of
memories. They make us persons. The implicit assumption that this is the
case, in the literature concerning memory, results from a (generally)
unanalyzed notion of what consciousness is.[52]
It may be a
(culturally) sad but salutary realization that we have no such memories, for we
had no such experiences in the first place. What exists of our causal past is
entirely as causal neurophysiology, not our past experience. When
we supposedly recollect—our parents, our lovers, our first flight on an
airplane—there is no activation of those particularities. Activation is of laid
down neural structures (usually modified by subsequent activity), along with
unrelated other structures. For what we experience now in supposed
recollection is to convey our current neural status. We do not bring
back the past, but have activated the causal present. As false memory syndrome and our inability to get the past right in recall demonstrate, causality now
is dependent upon the brain’s capacity in activation. Hume’s idea, and the
subsequent mentalist history as well as prior culture-lore, that our
(experiential) impressions implant themselves on our minds, and are used
by our minds as a kind of causal/referential past, is false biology. This is an
example of the pernicious effect of the notion of mind, and demonstrates why,
for scientific explanation, it must be discarded. Together with no perceptions,
thoughts or feelings, the brain processes no past or future.
To help grasp the
biological function of the brain-sign, a comparison can be made with the
chameleon’s skin. On the rock, the chameleon’s skin (roughly) assumes the
rock’s reflectance and patterning. The physicality of the skin functions to
contain and protect the organs, and keep out the weather. The function of the
reflectance/patterning is to communicate, even though it is also physical skin.
One medium, two ontological realms. (Skin is made of cells, as is the brain
(substantially).) The chameleon does not know about the communication function,
and neither does the predator that is deceived by it. For the electro-magnetic
radiation that determines the state of the chameleon and the predator is in no
wise apparent to them (were there such a thing). The modification of the
skin-physicality has preserved chameleons, and is therefore a successful
(communicative) evolutionary device. So it is with human communication in
the world. The physicality of our brains signifies their status. Humans are
unaware (collectively expressed) in their experience that this is what
happens between them.
It will always be
impossible to see the brain-sign in brains, because it can only be grasped as
the communication between brains. But, as with looking at the chameleon’s skin
(which we can do, collectively expressed), we may still observe brains
to specify, as scientific theory, how such a function is established.
5.4 Irreducible
subjectivity and phenomenality
A recent topic is
the irreducibility of subjective-phenomenal states (qualia). This is founded
upon two errors: the first the Cartesian, namely that there is an ontological
difference between the physical and the mental; the second is that
consciousness functions for a subject and therefore possesses some kind of
uniqueness that could not be ascribed to mere and objective physicality.
These errors have been exposed here.
In the case of
phenomenality and the so-called hard problem, we are invited by fiat to
see, in e.g. Chalmers’ (1996) approach, that there is no way in which the
physical can be reconciled with feeling or sense as we experience it.[53] Experience may be
a feature of brains, but the hard problem supposes that, being subjective,
experience places us outside the brute operational function of our brains. But
the physical fact is: Our experience facilitates brain communication (we
are the brain’s communication medium) and possesses no privileged insight into
its own (non-physical) nature at all. For we (as experiencers) are neither
subjective nor objective (contra Nagel 1986). Indeed, Nagel’s (1974) What is
it like to be a bat? begins from the false supposition that the being
like is a mind state open to reflective/reflexive engagement.[54] This is a
mis-characterization since no enduring subject exists to render experience
subjective, and therefore, also, it cannot reflect or reflex. So the idea of
consciousness (functionally) as a private place where we think, feel and sense is wrong.[55] Experience welds
brains together for supra-organismic action. It may be an approximate
and insecure device. But apparently it is nature’s means of facilitating
complex cooperation in the world between isolated bits of matter.
5.5 Pain
The topic of pain
illustrates dramatically the hold Descartes’ ghost has on our cultural
suppositions. For pain (the quintessential subjective-phenomenal state) is
supposed to signify bodily damage and precipitate our actions in relation to
it. Damasio (1999) calls pain an incentive: precisely Descartes’ formulation of
the situation requiring his non-physical mind. But there cannot be two causal
worlds operating, one mass/energy space-time, and the other termed mind. To see
how firmly the idea is established, consider these passages from the student
text Neuroscience (Bear et al. 2001, p422).
“Pain teaches us to avoid harmful situations. It
elicits withdrawal reflexes from noxious stimuli. It exhorts us to rest an
injured part of the body. The most convincing arguments for this are the very
rare people who are born without the sensation of pain. They go through life in
constant danger of destroying themselves, because they do not realize the harm
they are doing. Many of them die young.”
They conclude,
“Exactly how the parallel streams of sensory data are melded into perception,
images and ideas remains the Holy Grail of neuroscience.” (ibid. p434)
It may seem
impossible that it could be otherwise concerning pain, but that confirms our
(cultural) failure to grasp our biological nature. Since our experiencing is
the brain’s interpretation of its causal status as a sign to other brains, pain
has no causal impact upon us at all. (This does not make it less painful!)
One person helps
another damaged person because their neural brain causes them to, and it causes
them to because in evolutionary terms, this mechanism has proved to be
beneficial. Each brain operates with pain as a signifier of physical status.
Person A is drawn to person B assuming the pain status in person B, and A’s
feelings in the situation are complementary states signifying to B their causal
reaction. Although we may sometimes register aspects of this in our
complementary states, even imagining the other person’s feelings, our
experience does not convey the underlying causality, because that is not what
our experience is for. But why would we have complementary feelings at all
unless the brain-sign were functionally supra-organismic?[56]
As evidence of
this, consider that pain is not an inevitable consequence of physical damage, and
that pain can be experienced without physical damage. In Patrick Wall’s book Pain:
The Science of Suffering (1999, p11), under a photograph, is the caption:
“President Ronald Reagan seconds after being shot in the chest with a 9-mm bullet on 30 March 1981 outside a...”
In the chaos and
panic from the shooting, presumably Reagan’s brain signified its causal
extraction of his body from the scene, under the pressure of his minders. Only
when that priority was dealt with could his brain then portray the
characteristics of his body. In other words, pain, like every other brain-sign
occurrence, is selected by the brain, depending upon what the brain has, and is able, to convey of its current status.
The unfortunate
people who do not feel pain do not fail to look after themselves as a
consequence (contra Bear et al.). As with blindsight, what they lack is the
effective neurophysiology that causally implements the required actions. That
they do not feel results from that fact, for pain signifies to other
brains the status of the organism, in this case “signifying” (by its absence)
the failed causal response to damage.
6. The research
program
It may be untypical
of a philosophy paper to propose a research program, albeit briefly. But the
radical nature of the content here may leave the reader wondering what should
come next. The last few pages will touch on this.
For the brain-sign
model to be fully accepted, direct experimentation is required. Such examples
as have been taken as proof that consciousness exists and is causal (e.g.
blindsight and pain), have been shown here to be susceptible to alternative and
genuinely physical explanation. Uniquely, the brain-sign account
provides an alternative to consciousness on which experimentation can
take place. We are no longer stuck with epiphenomenalism as the only
alternative to consciousness.
If we accept the
brain-sign account, we find the logic of the brain’s actions lies in the
embedded structure and operation of the brain. That will explain the
causes of human action—which is a scientific enterprise in its own right. This
enterprise will determine what may be called the invisible nexus, i.e.
the explanatory modus for human (and other organisms’) action in the world from
neural activity. This nexus is not directly given in the brain-sign,[57] since the
brain-sign serves another biological function.
If the brain-sign
model is accepted under experiment (or indeed before), then a new research
program can proceed. Some pointers are outlined, grouped under the headings:
brain, phenomenology and organismic interpretation.
6.1 Brain
The where of
consciousness is a current issue. Descartes specified the pineal gland as the
transducer of physical information to mind substance. Dennett (1991; 1996) says
there is no such Cartesian theater.
Damasio describes
consciousness as a story told by the brain about its engagement with the world.
He proposes consciousness is a representation of the: “Viscera,
vestibular system and musculoskeletal frame” (1999, p170); and (as the proto-self)
in: “Some brain-stem nuclei, the hypothalamus and basal forebrain, and
somatosensory cortices” (ibid. p182). The second order structures that enhance
and cohere the first order structures to render core consciousness are
the: “...”
Francis Crick and
Christof Koch, who say they wish to “avoid a precise definition of
consciousness because of the dangers of premature definition” (1998, p255),
nonetheless claim, of visual consciousness, that: “Activity in V1 may be
necessary for vivid and veridical visual consciousness.... [But] at each stage
in the visual hierarchy the explicit aspects of the representation...is always
recoded. We have assumed that any neurons expressing an aspect of the NCC
(neural correlates of consciousness) must project directly, without recoding,
to at least some parts of the brain that plan voluntary action—that is what we
have argued seeing is for” (ibid. p265).
The confusion about
what consciousness is ensures its obscurity under investigation. For the brain
to act upon sensory information, it will bind its (electro-chemical) content,
for relevant factors need (in principle) to be accounted for. The binding need
not be at an instant, nor could it be. But the binding does not involve
consciousness (cf. Prinz 2001).
The confusion is
caused by the erroneous notion of the unity of causal consciousness:
that conscious content should be all-of-a-piece, and relate to the knowing
subject of consciousness who (with their will) can act on it—Kant’s transcendental
unity of apperception. This is the source of Damasio’s (causal) proto-self
and core consciousness, and Crick and Koch’s neural correlates and voluntary
action. But the underpinning notion is not neuroscientific, nor biologically
relevant.
The manufacture of
the brain-sign is an additional function over causality. Therefore searching
for the brain-sign does not involve looking for neural correlates (NCCs), nor does it involve the entire neural structure that effects action from input. The
brain-sign, as a brain interpretation of its status as a sign to others, is
both selected (i.e. not exhaustive or inevitable) and conjectural
(i.e. not veridical). Thus the search for the brain-sign assumes a different rationale
from the examples here of current research—however accurate they may be
in identifying locales.
As to evolution,
the brain-sign offers a clear line of exploration, since the mimicking of the
environment is the way many species disguise themselves (as the chameleon) as
communication-for-survival externally. The mechanism exists biologically, and
may have been adapted for communication-for-common-action internally. Clearly
feeling, sensation and pain have been created, rather than mimicked, and their
opaqueness presumably results from their lesser communicative specificity. But
even the mimicry is an evolutionary/genetically endowed brain convention,
not an attempt by the brain to grasp reality. The latter is both impossible and
biologically inapposite.
6.2 Phenomenology
Historically,
phenomenology has been about what is represented to the mind. As brain-sign,
phenomenology concerns the brain’s representation for communicative status.
Although we have
referred to perception, thought, feeling and sense as features of both
consciousness and the brain-sign, that these are differentiated as
causal factors results from the mind notion. But the true picture is more
likely that any brain-sign manifestation incorporates all features, since what
is conveyed is a causal brain status across the features: tree-perception, i.e.
physical structure; tree-thoughts, e.g. causal identification and associations,
including generalization; tree-feelings, e.g. reaction to causal history,
including genetic endowment. Aspects will be emphasized in a given case, just
as vision itself has a foveated area, and attention centers on particularity.
The brain itself is
never operating on one thing, and the brain-sign will be both rich in
overall content, and complex in terms of what is signified. It continuously
changes, since the brain is continually modifying itself in its reaction to the
world. But since we do not look to the brain-sign for an understanding of
causality, our approach to phenomenology will be quite different, for now we
need to grasp: what the brain selects for presentation of its causal status; what
it conjectures as presented; how it is manifest. (I.e. these states are no
longer to be interpreted as mental.) This includes an appreciation that the
brain-sign must be sparse, by comparison with what the brain effects
causally, since what is to be conveyed needs only relate to communicative
relevance.
We now have a
genuine physical status for the phenomenon which makes scientific investigation
possible, an investigation that, in human history, has not yet begun.
Our mentalist terms—perception, thought, feeling, sense, memory, reason, self,
reflection, belief, desire, indeed the very indelible sense of our existing—must all be recast in a biological account in which, in terms of phenomenology, they convey a physical status, whilst not being the physical status.[58]
6.3 Organismic
interpretation
What all this leads
to is the need for an entirely new scientific vocabulary to specify organismic
status in the world. It will translate neurophysiological causality into action
in the world (the invisible nexus) according to biological function (aims), and
will include the communicative function of the brain-sign as indicator of
causal neural status. The creation of such a vocabulary is no small task. But
even to conceive of its necessity is to begin to escape the impossible
situation neuroscience, and particularly neuropsychology, find themselves in,
employing ancient and medieval terminology, devised before the true nature of
our physical existence could be embraced.
Such a vocabulary
will be used for scientific understanding of organisms. But its impact will
likely influence everyday discourse, altering both our self-understanding and
institutions. For it recasts, fundamentally, our interpretation of the nature
and significance of our experience, and points to a causality in us over which
we have no control from that experience.
Allen, R. &
Reber, A.S. (1998) Unconscious Intelligence. In: Bechtel & Graham ed.
(1998) 314-323
Aristotle (1961) Metaphysics,
trans. John Warrington, Everyman’s Library, Dent
Baars, B. (1996) In
the Theater of Consciousness: The Workspace of the Mind, Oxford University Press
Baars, B. (2004) A
Stew of Confusion, Journal of Consciousness Studies, vol. 11, no. 1,
29-31
Bear, M.F. &
Connors, B.W. & Paradiso, M.A. (2001) Neuroscience: Exploring the Brain,
2nd edition, Lippincott, Williams & Wilkins
Bechtel, W. &
Graham, G. eds. (1998) A Companion to Cognitive Science, Blackwell
Bechtel, W. &
Mandik, P. & Mundale, J. & Stufflebeam, R.S. eds. (2001) Philosophy
and the Neurosciences, Blackwell
Bechtel, W. (2001)
Representations: From Neural Systems to Cognitive Systems, in Bechtel et al.
(2001) 332-348
Black, Ira B.
(1991) Information in the Brain, MIT
Block, N. (2003) Do
Causal Powers Drain Away? Philosophy and Phenomenological Research, vol.
LXVII, no. 1, 133-150
Brandom, R. (1997)
Study Guide. In: Sellars (1997)
Brentano, F. (1874)
Psychology from the Empirical Standpoint, trans. 1973 A. Rancurello, D.
Terrell, L. McAllister, Routledge and Kegan Paul
Chalmers, D.
(1995-7) Facing up to the Problem of Consciousness. In: Explaining
Consciousness—The Hard Problem, ed. J. Shear, 9-30, MIT
Chalmers, D. (1996)
The Conscious Mind, Oxford University Press
Churchland, P.M.
(1981) Eliminative Materialism and the Propositional Attitudes. In: e.g.
Rosenthal ed. (1991), 601-612
Churchland, P.M.
(1995) The Engine of Reason; the Seat of the Soul, MIT
Churchland, P. M.
& Churchland, P.S. (1991) Intertheoretic Reduction: A Neuroscientist’s
Field Guide. In: Bechtel et al. ed. (2001), 419-430
Clark, A. (1997) Being
There: Putting Brain, Body and World Together Again, MIT Press
Clark, A. (2001) Mindware:
An Introduction to the Philosophy of Cognitive Science, Oxford University Press
Claxton, G. (1999)
Whodunnit? Unpicking the ‘Seems’ of Free Will. In: The Volitional Brain, 99-113,
ed. B. Libet, A. Freeman & K. Sutherland, Imprint Academic
Crick, F. &
Koch, C. (1998) Consciousness and Neuroscience. In: Bechtel et al. ed. (2001),
254-257
Cummins, R. (1989) Meaning
and Mental Representation, MIT Press
Damasio, A. (1999) The
Feeling of what Happens, William Heineman
Damasio, A. (2003) Looking
for Spinoza: Joy, Sorrow and the Feeling Brain, William Heineman
Davidson, D. (1970)
Mental Events. In: Rosenthal ed. (1991), 247-256
Dawkins, R. (1976) The
Selfish Gene, Oxford University Press
Dennett, D.C. (1991) Consciousness Explained, Little, Brown
Dennett, D.C. (1994) Dennett. In: Guttenplan ed. (1994)
Dennett, D.C. (1996) Kinds of Minds, Basic Books
Descartes, R.
(1985) The Philosophical Writings of Descartes, Vol 1 and 2, trans. J.
Cottingham & R. Stoothoff & D. Murdoch, Cambridge University Press
Dummett, M. (2003)
The Dewey Lectures, Lecture 1: The Concept of Truth, The Journal of
Philosophy, vol. C, no.1
Edelman, G.M. &
Tononi, G. (2000) Consciousness: How Matter Becomes Imagination,
Fell, J. P. (1981)
Fodor, J. (1987) Psychosemantics, MIT Press
Fodor, J. (1994)
Fodor. In: Guttenplan ed. (1994), 292-299
Gazzaniga, M.S.
(1998) The Mind’s Past, University of California Press
Guttenplan, S.
(1994) ed. A Companion to the Philosophy of Mind, Blackwell
Hardcastle, V.G.
(1998) The binding problem. In: Bechtel & Graham ed. (1998), 555-565
Hebb, D. (1949) The
Organization of Behaviour: A Neuropsychological Theory, Wiley
Hegel, G.W.F. (1807) The
Phenomenology of Mind, trans. J.B. Baillie (1910), George Allen and Unwin,
Humanities Press
Heidegger, M.
(1927) Being and Time, trans. 1962 J. Macquarrie & E. Robinson,
Blackwell
Heidegger, M.
(1990) Kant and the Problem of Metaphysics, trans. Richard Taft, (first published in German 1973 Vittorio
Klostermann), Indiana University Press
Heidegger, M.
(1995) The Fundamental Concepts of Metaphysics, trans. W. McNeill &
N. Walker, from the 1929/30 lecture course, Indiana University Press
Hume, D. (1739,
1740) A Treatise of Human Nature, 1962 Fontana
Husserl, E. (1982) Ideas
Pertaining to a Pure Phenomenology and to Phenomenological Psychology, book
one, (first published 1913), trans. F. Kersten, Kluwer Academic Publishers
James, W. (1950) Principles
of Psychology, (first published 1890 Henry Holt), Dover
Kant,
Kim, J. (2003)
Blocking Causal Drainage and Other Maintenance Chores with Mental Causation, Philosophy
and Phenomenological Research, vol. LXVII, no. 1, 151-176
Kisiel, T. (1993) The
Genesis of Heidegger’s Being and Time, University of California Press
Kuhn, T. S. (1970) The
Structure of Scientific Revolutions, 2nd edn., University of Chicago Press
Lakoff, G. (1987)
Women, Fire and Dangerous Things: What Categories Reveal About the Mind, University of Chicago Press
Libet, B. &
Gleason, C.A. & Wright, E.W. & Pearl, D.K. (1983) Time of conscious
intention to act in relation to onset of cerebral activity (readiness
potential). The unconscious initiation of a freely voluntary act, Brain,
106, 623-642
Marcel, A.J. (1988)
Phenomenal Experience and Functionalism, Consciousness and Contemporary
Science, 121-158, eds. A.J. Marcel and E. Bisiach, Clarendon Press
Martin, M.G.F.
(1994) Perceptual Content. In: Guttenplan ed. (1994), 463-471
McDowell, J. (1994)
Mind and World, Harvard University Press
Mesulam, M.-M.
(1998) From sensation to cognition. Brain, 121: 1013-1052
Millikan, R.G.
(1984) Language, Thought and other Biological Categories, MIT
Nagel, T. (1974)
What is it like to be a bat?, Philosophical Review, 83, 435-50. In e.g.
Rosenthal ed. (1991)
Nagel, T. (1986) The
View from Nowhere, Oxford University Press
Nietzsche, F.
(1967) The Will to Power, trans W. Kaufmann & R.J. Hollingdale,
Vintage Books
Panksepp, J. (1998)
Affective Neuroscience: The Foundations of Human and Animal Emotions, Oxford University Press
Penfield, W. & Rasmussen, T. (1952) The Cerebral
Cortex of Man, Macmillan
Petit, J.-L. (1999)
Constitution by Movement: Husserl in the Light of Recent Neurobiological
Findings. In: Petitot et al. ed. (1999), 220-244
Petitot, J. &
Varela, F. J. & Pachoud, B. & Roy, J.-M., ed. (1999) Naturalizing
Phenomenology, Stanford University Press
Peschl, M.F. (1999)
The development of scientific concepts and their embodiment in the
representational activities of cognitive systems, 184-214. In: The Nature of
Concepts, (1999) ed. P.Van Loocke, Routledge
Pinker, S. (1997)
How the Mind Works, W.W. Norton
Prinz, J. (2001)
Functionalism, Dualism, and the Neural Correlates of Consciousness. In: Bechtel
et al. ed. (2001)
Putnam, H. (1999) The
Threefold Cord: Mind, Body, and World, Columbia University Press
Putnam, H. (2002)
McDowell’s Mind and McDowell’s World. In:
Reading McDowell ed. N.H. Smith, Routledge (2002) 174-190
Rorty, R. (1980) Philosophy
and the Mirror of Nature,
Rosenthal, D.M.
(1991) ed. The Nature of Mind, Oxford University Press
Ryle, G. (1949) The
Concept of Mind, Hutchinson
Schilpp, P. A. ed.
(1981) The Philosophy of Jean-Paul Sartre, Open Court
Scott, C. P. (1981)
Role of Ontology in Sartre and Heidegger. In: Schilpp ed. (1981), 277-299
Seager, W. (1999) Theories
of Consciousness, Routledge
Searle, J.R. (1992)
The Rediscovery of Mind, MIT
Searle, J. R.
(2002) Consciousness and Language, Cambridge University Press
Sellars, W. (1997) Empiricism
and the Philosophy of Mind, (first published 1956 in Minnesota Studies
in the Philosophy of Science), Study Guide by R. Brandom, Harvard
Tarnas, R. (1991) The
Passion of the Western Mind, Crown
Untereker, J.
(1957) A Reader’s Guide to W. B. Yeats, Thames and Hudson
Von Frisch, K.
(1966) The Dancing Bees, 2nd edition,
Wall, P. (1999) Pain:
The Science of Suffering, Wiedenfeld and Nicholson
Wegner, D. M.
(2002) The Illusion of Conscious Will, MIT Press
Weiskrantz, L.
(1997) Consciousness Lost and Found, Oxford University Press
Wittgenstein, L.
(1968) Philosophical Investigations, trans. G.E.M. Anscombe, Basil
Blackwell
[1] William Seager ends his book with the following (1999, p251/2): “In trying to explain consciousness itself...the standpoint from which the explanation is offered and in terms of which it is understood, contains the very thing we want to explain.... Cold comfort to end with the tautology that an unpatchable hole is...unpatchable.” Thus the wrong standpoint gives a scientifically unacceptable result, as we shall see.
[2] Epiphenomenalism has crept back into fashion, but with the idea that the function of experiencing is merely to convince us of our experience. E.g. Gazzaniga: “The left brain weaves its story to convince itself and you that it is in full control” (1998, p25). Also Claxton (1999) and Wegner (2002). It is impossible to make sense of this idea, since the brain does not convince itself of anything.
[3] This definition is in the context of rejecting perception as symbolic, contra much cognitive science. This is because, for Husserl, perception is directed toward (about) the object, not the symbol.
[4] As Ira Black said over a decade ago (1991, p3), “Extensive evidence indicates that the brain is not an immutable series of circuits of invariant elements; rather it is in constant structural flux. The digital computer analogy is fatally misleading.”
[5] The neuroscientists Damasio (1999) and Panksepp (1998) have promoted theories that propose to locate subjectivity in the neuronal fabric. Damasio has a higher order theory which is non-explanatory, see next, and Panksepp is a functional emergentist. (See further section 6.)
[6] Precisely as were the attempts (epicycles, eccentrics, equants, etc.) to show how planetary motion could be reconciled with the earth at the center of the universe.
[7] Hence J Graham Beaumont’s complaint (1999, p527): “Neuropsychology is in a conceptual morass. Neuropsychologists seek to study the relation between brain and mind, but without really addressing the status of these two constructs, or what potential form the relationship between them might take.”
[8] The differences between Putnam and McDowell are explored in Putnam (2002), and McDowell’s response.
[9] Thus McDowell says: “We need an account of the biological imperatives that structure the lives of the creatures in question [e.g. bats, ref. Nagel’s bat paper (1978)], and an account of the sensory capacities that enable them to respond to their environment in ways that are appropriate in the light of those biological imperatives” (1994, p122). This removes problematic subjective variation, of the inverted spectrum kind, by defining consistency of sensory states in brains under evolutionary response to the environment. But it fails to explain why neural sensory states should become experiencable (i.e. God-endowed) sensory states. Nagel’s position will be remarked on in section 5.
[10] In his reply to Putnam (2002), McDowell claims he is not addressing physicalist theories, only the question of physical lawfulness. Unfortunately, this disavowal simply begs the question of adequacy.
[11] An example of this is optical illusions, such as the duck-rabbit figure.
[12] This obscure us exists, of course, already in Kant in the first Critique (p 67). “By means of outer sense, a property of the mind, we represent to ourselves objects as outside us.” Kant’s statement, in physical terms, is non-explanatory at every point. The us, of course, is itself a given.
[13] Fodor explicitly ignores consciousness (e.g. 1994), as do many writers, supposing the mind exists, though ontologically it depends fundamentally upon an unexplainable consciousness.
[14] Putnam, loc. cit.: “But we would not identify sense data with processes in the eye, simply because one can have color sensations even after one has lost one’s eyes. This suggests that the constraint on any account of ‘qualitative’ sense data should include the fact that we are conscious of them: but how plausible is it that one should be able to reduce (hypothetical) ‘laws’ involving the notion of consciousness without becoming involved in ‘reducing’ the propositional attitudes?... The notions of ‘reductions of theories’ and ‘theoretical identifications’ lack any real content in the context.”
[15] It is interesting that Putnam proposes getting rid of the theater, but the problematic us already referred to is, presumably, the audience for the theater, of which Putnam must also be rid.
[16] Yet Bernard Baars (1996) has proposed a Workspace Theory of Consciousness on just this proposition. The problem is that Baars does not recognize an ontological problem with consciousness, as still can be seen in his 2004 paper. He blames philosophers for inventing difficulties.
[17] This does not support Davidson’s (1970) anomalous monism, since what is at issue is not the untranslatability of physical and mental terms (Quine), but the possibility of causation per se.
[18] This supposed issue has generated the notion of neural correlates of consciousness. But see below section 6.1. In other words, the situation has made no progress in four hundred years, as Putnam says.
[19] A parallel may be found, but is not explored here, with such writers as George Lakoff (e.g. 1987) who, for language, have countered an objectivist or God’s-eye view of world/mind categories with experiential terms, thus leading to e.g. prototype theory. But these writers have not investigated, from a biological view, why the notion of consciousness has been the misleading factor.
[20] A recent overview and readings can be found in e.g. Clark 2001, chapter 4.
[21] The connectionist model aims at what Mesulam (1998) terms the channel operation of the brain, by contrast with what he terms the state operation of the brain.
[22] Kant himself identified the problem of his own position in the Antinomy of Reason (p466). “The common but fallacious presupposition of the absolute reality of appearances...manifests its injurious influence, in the confounding of reason. For if appearances are things in themselves, freedom cannot be upheld.” Exactly why appearances, as brain functions, are without freedom is addressed below.
[23] It is because the brain has no sensory receptors that Penfield (and Rasmussen 1952) could conduct his experiments on the brains of patients whose skulls were opened for surgery. These patients’ brains, stimulated with mild electric current, evoked memories reported by the patients. The brain has no sensory receptors, because the brain creates the sensations we feel (and perceptions, thoughts and feelings). This is why brains, or organisms, are not conscious. Thus Damasio’s title Looking for Spinoza: Joy, Sorrow and the Feeling Brain (2003) is an error of category, which demonstrates the hold the consciousness theory has even on a modern neurobiologist. Damasio’s work, in this regard, is devoted to the notion that the brain creates knowing states, à la Descartes. For example (1999, p316): “The drama of the human condition comes from consciousness because it concerns knowledge obtained in a bargain that none of us struck: the cost of a better existence is the loss of innocence about that very existence.” This way science does not lie.
[24] I.e., the brain’s job is not to know. This point is made by Peschl (1999).
[25] The topic of mirror neurons may be indicative, here. But though conjectures have been made about them, e.g. Jean-Luc Petit (1999), there is as yet no comprehensive theory as proposed in this paper.
[26] Putnam (1999) claims that animals have proto-concepts, which mischaracterizes the biology, as is seen further below.
[27] An empirical undermining of the knowledge ontology results from Gazzaniga’s “finding” of the left-brain interpreter (e.g. 1998). This resulted from experiments on patients whose brains were severed across the corpus callosum, thus preventing hemisphere communication, to prevent the spread of severe epilepsy. When these patients were (for example) shown distressing scenes to their right brain, their left brain invented stories to explain the distress; but of course these stories could not relate to the scenes, for no adequate information was being passed across to the verbal/rational left brain from the visual processing right brain, though some information is passed by lower regions of the brain. Whilst he does not conjecture this, Gazzaniga might appreciate the extension of his left-brain interpreter’s fabricating explanation (rationalization) modus to perceptual images as equivalently “made up”, since there is no ontological difference in the causal neurophysiology, and the visual image is also a (brain-)sign.
[28] Since the foundationalism required by belief is thus eliminated, philosophical debates about internalism and externalism are, to coin a word, meaningless.
[29] Though Edelman remains committed to a causal consciousness, Edelman and Tononi (2000).
[30] The concept of working memory (to be distinguished from short term memory) is a standard topic in brain science literature, and depends functionally upon the notion of consciousness. The analysis here requires a different account of the concept.
[31] Thus the proposal by Wilfrid Sellars (1997) to differentiate (in the words of Brandom’s commentary) the sentient and the sapient, thereby debunking the “most familiar form [of] the Myth of the Given” (p121/122) because “this is the distinction between being merely awake (which we share with nondiscursive animals—those that do not share concepts) on the one hand, and, on the other hand, being aware in the sense that involves knowledge”, is a misconstrual of the nature of the function and physicality of each. Doubtless seeings and speakings are different modalities of brain-sign, the latter occurring (perhaps) only in humans. But to suppose that reasons qua conceptual knowledge thereby escape reduction to neurophysiology by being in their own logical space is implausible (see below, this section). (Conceptual) reasons have no such transcendence. To suppose otherwise is the Myth of the Normative, or more fundamentally, the Myth of the Person.
[32] McDowell says: “To see exercises of spontaneity as natural, we do not need to integrate spontaneity-related concepts into the realm of law; we need to stress their role in capturing patterns in a way of living” (1994, p78). The question is: As what is this role? And the answer is: As signs. And what the signs signify is precisely law: the law-determined function of causal neurophysiology. That does not mean the explanatory function of signs itself behaves lawfully, of course. Therefore, though “patterns of living” becomes a feasible notion, psychology cannot claim scientific status, see section 5.
[33] Andy Clark says, from recent empirical work (2001, p88): “The internal structure of worldly events may be less like a passive data structure and more like a direct recipe for action.” But the external, which is implied by his “internal”, remains undefined in valid scientific terms.
[34] The question of truth in terms of linguistic statement is discussed by Michael Dummett (2003, p14). “What, in general, does accepting some statement as true involve? We do not merely react piecemeal to what other people say to us: we use the information we acquire, by our own observation and inferences and by what we are told, in speech and writing by others, to build an integrated picture of the world. To take a statement as contributing to this picture of the world, that is reality as independent of our will, is to take it as true. So our practice of language does impart to us an implicit grasp of the concept of truth.” This kind of analysis is dependent upon the mentalist model (Where physically is the “integrated picture of the world” or “reality”?) which presupposes the possibility of truth for a person. It is simply not sufficiently ontologically fundamental.
[35] Thus John Unterecker could say, of W.B. Yeats: “Somewhere between the great writer and the intelligent reader great poetry is born” (1957, p42). He had no explanation for this, however, since no physical theory was available.
[36] What Sartre and Heidegger identified was not, of course, the same thing: i.e. their interpretations of the state were quite different, according to their own philosophical positions. This cannot be explored here. But see e.g. papers in Schilpp (1981), particularly by Joseph Fell and Charles Scott.
[37] As can be seen in the following (1994, p125): “Being at home in the space of reasons involves not just a collection of propensities to shift one’s psychological stance in response to this or that, but the standing potential for a reflective stance at which the question arises whether one ought to find this or that persuasive.” McDowell has no explanation for how such capacities arise from neurophysiology, so the explanation remains physically unjustified. He attributes reason-capacity to language, which is no help as an explanation, see next. This quoted passage occurs in Lecture VI, where animals receive their normal demeaning treatment.
[38] Heidegger’s displacement of the Cartesian subject was of fundamental importance for the history of philosophy, and in itself. For example, in Being and Time (1927, p89): “When Dasein directs itself towards something and grasps it, it does not somehow first get outside of an inner sphere in which it has been proximally encapsulated, but its primary kind of Being is such that it is always ‘outside’ alongside entities which it encounters and which belong to a world already discovered.... The perceiving of what is known is not a process of returning with one’s booty to the ‘cabinet’ of consciousness.” He saw, by comparison with e.g. Husserl, that a genuine phenomenology could not elevate the ego or self from the experience (the obscure us). But Heidegger deliberately eschewed biology as an adequate ontological domain for an explanation of the target phenomenon, thus remaining within the tradition he set out to overturn, as had Descartes before him.
[39] Cf. Millikan’s (1984) critique of meaning rationalism.
[40] And what we call curiosity or fear might be evoked by the lack of specificity, depending on context (see below).
[41] Thus we are dealing with signs, but not for a conscious subject, and not as a processor.
[42] For example, Libet (1983, p640). “The brain evidently ‘decides’ to initiate, or, at least, prepares to initiate the act at a time before there is any reportable subjective awareness that a decision has taken place. It is concluded that cerebral activity even of a spontaneously voluntary act can and usually does begin unconsciously.”
[43] Nietzsche had anticipated this result by at least 1888. “Everything of which we become conscious is a terminal phenomenon, an end” (1967, p265). “That which becomes conscious is involved in causal relations which are entirely withheld from us” (ibid. p284). But he did not work out the proposal here. Characteristically of Western thought, his biologism stopped at the individual.
[44] On the question of mental causation, and the scientific status of psychology, consider the following debate between Ned Block and Jaegwon Kim. Block, supporting his nonreductive mentalist position, says (2003, p138): “It is hard to believe that there is no mental causation, no physiological causation, no molecular causation, no atomic causation but only bottom level physical causation.” Note that Block simply assumes (i.e. does not justify) an ontological parallel between physical things and mental things in a seamless hierarchy. Kim (2003) comments directly on this statement (p164). “A number of writers have expressed the view that if the supposed problem of mental causation is a real problem, a parallel problem arises for all other levels of causation, except causation at the most fundamental level.” Kim rejects this, and says that his aim is to show that: “Either [there is] reduction or causal impotence” (p165). The theory here explains why there is no mental causation without epiphenomenalism: there is no mentality. But signs are reducible. The exchange should be read in full to see the complete argument.
[45] When the word understand (or grasp) is used here in a scientific sense, it entails two things: 1. Program our neurophysiology where the causality of our brain resides; 2. Create its communicable status/content as brain-sign individually, and at this moment, in each of us. As specified in section 6, an entirely new vocabulary is needed, distinguishing brain-sign function from mentalist terminology.
[46] Philosophical problems with error in perception result from getting the function of perception, biologically/neuroscientifically, back to front (e.g. Martin 1994).
[47] Dennett’s materialist dualism leads him into support for memes, and attempts to rescue free will.
[48] As Gazzaniga’s left-brain interpreter potently illustrates (indeed, more than he conjectures), even perception is, as brain function, what we would call conjectural.
[49] Kant says, in a footnote (p144), “Psychologists have hitherto failed to realize that imagination is a necessary ingredient of perception itself. This is due partly to the fact that that faculty has been limited to reproduction, partly to the belief that the senses not only supply impressions but also combine them so as to generate images of objects. For that purpose something more than the mere receptivity of impressions is undoubtedly required, namely, a function for the synthesis of them.” But Kant backed away from his initial view of the autonomy of imagination from reason (1781) in the second edition (1787). He “shrank back from the abyss”, of which Heidegger (1990) was later to make much. However, imagination is not a neuroscientific construct, and so fulfills no scientific explanatory role.
[50] The profound difference between the brain-sign and psychological approaches can be seen in this one sentence of Steven Pinker concerning mental-rotation theory (1997, p280). “People definitely rotate shapes in their mind.” There are no minds, there are no shapes in them, and people do not rotate them. It is not merely a matter of a “convenient way of talking”. Pinker purports to be conveying information in a scientific way. But his explanation depends upon a mentalist foundationalism that does not exist. The science required is neuroscience, not psych(e)-ology, for evidently there is something to be explained in the physical universe. For a penetrating analysis of the problem of representation, see Cummins 1989.
[51] Termed episodic or personal memories, as contrasted with semantic or procedural which are not discussed.
[52] No reference is required because, remarkably, any text turned to (I offer good odds) will make this assumption, despite Hebb’s (1949) initially neural specification.
[53] E.g. Chalmers (1995-97, p19). “An analysis of the problem shows us that conscious experience is just not the kind of thing that a wholly reductive account could succeed in explaining.”
[54] Reflective bats?!
[55] Wittgenstein’s private language argument (1968) is clearly a precursor here.
[56] Of course, there is no inevitability that person C will go to help person D, since other forms of biological response are possible, as are the complementary feelings, e.g. what we call horror or fear.
[57] Therefore it is invisible. But it is not unconscious causality.
[58] Bear et al. (2001) devote a whole chapter (16) to motivation. “Motivation can be thought of as a driving force on behavior” (p523). The notion that the brain is involved in motivation is, frankly, absurd, yet neuroscience cannot progress on this because of the mentalist grip.