> From: "Henderson, Laura" <LAURA92@psy.soton.ac.uk>
> Date: Mon, 27 Feb 1995 14:04:33 GMT
> > > From: "Darren Snellgrove (Philos 3rd yr)" <D.P.Snellgrove@soton.ac.uk>
>
> I agree with what you have said up to the point where you suggest that
> there must be something more; it seems to contradict your argument. I
> believe you are referring to the notion of consciousness: the fact that
> there seems to be a gap between neurological descriptions of what
> happens when, for example, I look at the computer, and my actual
> experience of seeing the computer.
There SEEMS to be a gap, indeed. But there also IS a gap. That
consciousness is not captured by a brain theory is evident even without
an appeal to THE WAY things seem or appear. The fact that they seem or
appear AT ALL is what is left out.
> But why does this gap have to
> exist? Doesn't the gap exist because the neurological description is
> not attempting to describe my "seeing" of the page, but rather
> defining the processes which occur when I see the page? If so, there
> is no gap. Your experience of water is not the same as the
> definitional description provided by H2O.
Here you mix two things. A complete theory WOULD have to account for my
consciousness, so if that's missing from the neural theory, that's just
the point.
But let's not mix up water vs. H2O and my EXPERIENCE of one or the other.
The whole point (and Tom Nagel made it) is that you can reduce water to
H2O or heat to molecular motion because it is not the EXPERIENCE of
these things you are reducing, but the things behind the appearances. Our
experiences adjust, eventually, to the new descriptions (we swap one set
of appearances for another). But there is no way that this same strategy
could be applied to explaining experience (consciousness, appearance)
ITSELF, because the adjustment that would be required would be to
substitute something nonexperiential for experience, and that doesn't
work (swapping nothing in place of appearances). That doesn't just SEEM
not to work (though it does indeed seem not to work); it doesn't work.
See:
Nagel, T. (1974) What is it like to be a bat?
Philosophical Review 83: 435-451.
Nagel, T. (1986) The View from Nowhere.
New York: Oxford University Press.
And also "Other Bodies, Other Minds," by me:
ftp://cogsci.ecs.soton.ac.uk/pub/Harnad/harnad91.otherminds
In the archived version of this discussion on the Web you need only
click on that URL to get to the paper. (In email that won't work.)
> With reference to Turing machines, I fail to see why an artificial
> system of some form cannot have "experience". It does not have to be
> just a zombie. Furthermore, functionalism may serve well in explaining
> input/output relationships, but I do not see how it can provide any
> assistance with the mind/brain problem; indeed, functionalism is
> perfectly compatible with just about every other theory of the mind.
I too fail to see why an artificial system of some form cannot have
experience (why put it in quotes? if it's experience, it's experience).
But I also fail to see how we could ever know that it did, apart from
functional indistinguishability, which is the same fallible indicator we
use with one another.
See also "How and Why We Are Not Zombies":
ftp://cogsci.ecs.soton.ac.uk/pub/Harnad/harnad95.zombies
Chrs, Stevan