Re: Chalmers: Computational Foundation

From: Patel Krupali (
Date: Wed May 30 2001 - 17:18:48 BST

Within the Abstract of this paper, Chalmers states clearly that his
paper attempts to address questions put forward by computation and
cognition such as: What is the role of computation in a theory of
cognition? What is the relation between different sorts of
computational theory, such as connectionism and symbolic computation?

Chalmers tries to explain the analysis of computation and its relation
to cognition by using artificial intelligence. However he also states
the relationships is controversial. He believes very strongly that
computation is central to the foundations of modern cognitive science.

> Computation is central to the foundations of modern cognitive
> science, but its role is controversial.

This leaves me to believe that this central idea has become very
popular recently and Chalmers thinks it is the relationship will become
even stronger in years to come.

> Justifying the role of computation requires analysis of
> implementation, the nexus between abstract computations and
> concrete physical systems. I give such an analysis, based on the
> idea that a system implements a computation if the causal structure
> of the system mirrors the formal structure of the computation. This
> account can be used to justify the central commitments of
> artificial intelligence and computational cognitive science: the
> thesis of computational sufficiency, which holds that the right
> kind of computational structure suffices for the possession of a
> mind, and the thesis of computational explanation, which holds that
> computation provides a general framework for the explanation of
> cognitive processes. The theses are consequences of the facts that
> (a) computation can specify general patterns of causal
> organisation, and (b) mentality is an organisational invariant,
> rooted in such pattern.

This analysis is based on the idea that a system implements a
computation if the causal structure of the system mirrors the formal
structure of the computation. Again he emphases on the important of
artificial intelligence and cognitive science for the explanations. He
then goes on and questions computational sufficiency, for the
simulation of the mind, and all its properties or mentality. Thereby
asking is computation enough or is more needed?

> Perhaps no concept is more central to the foundations of modern
> cognitive science than that of computation. The ambitions of
> artificial intelligence rest on a computational framework, and in
> other areas of cognitive science, models of cognitive processes are
> most frequently cast in computational terms. The foundational role
> of computation can be expressed in two basic theses. First,
> underlying the belief in the possibility of artificial
> intelligence there is a thesis of computational sufficiency,
> stating that the right kind of computational structure suffices
> for the possession of a mind, and for the possession of a wide
> variety of mental properties. Second, facilitating the progress of
> cognitive science more generally there is a thesis of
> computational explanation, stating that computation provides a
> general framework for the explanation of cognitive processes and of
> behavior.

Chalmers is strong in his belief that the right kind of computation is
enough for the possession of a mind, and for the possession of the
minds properties i.e. all the mind or brains capabilities. His
argument form before is still very much the futuristic view that
cognitive science is the foundation and central to computation,
touching on artificial intelligence. All these concepts are suppose to
explain how the mind works?

> These theses are widely held within cognitive science, but they are
> quite controversial. Some have questioned the thesis of
> computational sufficiency, arguing that certain human abilities
> could never be duplicated computationally (Dreyfus 1974; Penrose
> 1989), or that even if a computation could duplicate human
> abilities, instantiating the relevant computation would not suffice
> for the possession of a mind (Searle 1980). Others have questioned
> the thesis of computational explanation, arguing that computation
> provides an inappropriate framework for the explanation of
> cognitive processes (Edelman 1989; Gibson 1979), or even that
> computational descriptions of a system are vacuous (Searle 1990,
> 1991).

Many people such as (Searle 1980) and others have questioned Chalmers
argument, and think it is impossible to duplicate human abilities, by
using computation. The human mind is very complex and I think
computation is not enough to build a duplication of it, and further
more all its functions including memory. How would it be possible to
build all the memories that one possesses form childhood? And does
anyone actually know how the mind works?

> Advocates of computational cognitive science have done their best
> to repel these negative critiques, but the positive justification
> for the foundational theses remains murky at best. Why should
> computation, rather than some other technical notion, play this
> foundational role? And why should there be the intimate link
> between computation and cognition that the theses suppose? In this
> paper, I will develop a framework that can answer these questions
> and justify the two foundational theses.

Chalmers states that in the past many people have tried to argue
against these negative comments, but many of their arguments do not
stand for justification. His purpose is to provide answers using his
standard relationship based on computation and cognitive science.

What does he mean by some other technical notion? This other notion
plays an important part in his argument, however I feel he has not
clarified what exactly it is? Especially since it is so relevant.

> In order for the foundation to be stable, the notion of computation
> itself has to be clarified. The mathematical theory of computation
> in the abstract is well-understood, but cognitive science and
> artificial intelligence ultimately deal with physical systems.

I agree with this statement to a point. He is right in saying that
computation is based on a mathematical theory, and the questions ask
about this topic have some clarification. On the hand he argues that
cognitive science and artificial intelligence do are on based on such
theories and deal with physical systems. I think he is right in saying
that because I would suggest the mind is a physical system in itself.
What I mean is that we can see it, and touch it as surgeons do when
operating on it. My argument is that if Chalmers aims to use cognitive
science and computation to explain how the mind works, then surely if
these systems are physical then some sort of physical theory is

> A bridge between these> systems and the abstract theory of
> computation is required. Specifically, we need a theory of
> implementation: the relation that holds between an abstract
> computational object (a> "computation" for short) and a physical
> system, such that we can say that in some sense the> system
> "realizes" the computation, and that the computation "describes"
> the system. We cannot> justify the foundational role of
> computation without first answering the question: What are the
> conditions under which a physical system implements a given
> computation?

My point again, some sort of physical theory needs to be discovered so
it can coincide with the mathematical theory of computation. I think if
both these are manipulated together then we have the abstract theory of
computational which Chalmers suggests we require. Then this abstract
helps the system or the mind realise the computation and describes the

> Once a theory of implementation has been provided, we can use it to
> answer the second key question: What is the relationship between
> computation and cognition? The answer to this question lies in the
> fact that the properties of a physical cognitive system that are
> relevant to its implementing certain computations, as given in the
> answer to the first question, are precisely those properties in
> virtue of which (a) the system possesses mental properties and (b)
> the system's cognitive processes can be explained.

I dont know if a theory of implementation is enough to answer the
question what is the relationship between computation and cognitive
science? I agree with the fact he says properties of a physical
cognitive are relevant to implementing certain computations, because I
suggested above the mind is a physical system in itself. However I do
not think that those properties are enough to explain a systems mental
properties and cognitive processes. Again because I think the brain is
far too complicated.

The author talks about the Theory of Implementation and
Combinatorial-State Automata. This to used to support his idea of how
computations can work together with a physical.

> Does every system implement any given computation? The added
> requirement that the mapped states must satisfy reliable
> state-transition rules is what does all the work. In this case,
> there will effectively be at least 10^{1000} constraints on
> state-transitions (one for each possible state-vector, and more if
> there are multiple possible inputs). Each constraint will specify
> one out of at least 10^{1000} possible consequents (one for each
> possible resultant state-vector, and more if there are outputs).
> The chance that an arbitrary set of states will satisfy these
> constraints is something less than one in (10^{1000})^{10^{1000}}
> (actually significantly less, because of the requirement that
> transitions be reliable).

What does the author mean by such figures like 10^{1000}?

> If even digestion is a computation, isn't this vacuous? This
> objection expresses the feeling that if every process, including
> such things as digestion and oxidation, implements some
> computation, then there seems to be nothing special about
> cognition any more, as computation is so pervasive. This objection
> rests on a misunderstanding. It is true that any given instance of
> digestion will implement some computation, as any physical system
> does, but the system's implementing this computation is in general
> irrelevant to its being an instance of digestion. To see this, we
> can note that the same computation could have been implemented by
> various other physical systems (such as my SPARC) without it's
> being an instance of digestion. Therefore the fact that the
> system implements the computation is not responsible for the
> existence of digestion in the system.

Chalmers compares the bodily functions such as digestion and oxidation
to computational behaviour. It further states that cognition not
relevant anymore as computation is so pervasive. Then he explains how
this idea is merely a misunderstanding. Chalmers argues that digestion
can that digestion can take place by any other physical system. This
would mean that any physical system can be implemented to make the
bodily function of digestion to occur. This to me seems almost absurd,
digestion requires other parts of the body such as the small and large
intestines, the mouth, tongue and teeth. Would it be physical possible
to compute digestion without computing or implementing all the
functions carried out by the above bodily parts too? I dont think so.

> What about semantics? It will be noted that nothing in my account
> of computation and implementation invokes any semantic
> considerations, such as the representational content of internal
> states. This is precisely as it should be: computations are
> specified syntactically, not semantically. Although it may very
> well be the case that any implementations of a given computation
> share some kind of semantic content, this should be a consequence
> of an account of computation and implementation, rather than built
> into the definition. If we build semantic considerations into the
> conditions for implementation, any role that computation can play
> in providing a foundation for AI and cognitive science will be
> endangered, as the notion of semantic content is so ill-understood
> that it desperately needs a foundation itself.

Chalmers suggests that semantics do not play an important role in
computations. He argues that syntax is more important. He argues that
the notion of semantic content is itself not understood well enough and
requires a foundation before we can use it for computation. I think
semantics and syntax work together and one without the other is
hopeless when implementing anything.

> I have said that the notion of computation should not be dependent
> on that of semantic content; neither do I think that the latter
> notion should be dependent on the former. Rather, both computation
> and content should be dependent on the common notion of causation.
> We have seen the first dependence in the account of computation
> above. The notion of content has also been frequently analyzed in
> terms of causation (see e.g. Dretske 1981 and Fodor 1987). This
> common pillar in the analyses of both computation and content
> allows that the two notions will not sway independently, while at
> the same time ensuring that neither is dependent on the other for
> its analysis.

The author later touches on my argument above. I think semantics and
syntax are interdependent. Chalmers doesnt state whether he thinks the
same, but states that they should not be dependent on eachother,
because it would help his argument of discarding semantics altogether.
He instead suggests computation should be dependent on causation.

According to Searles Chinese room the strong program of AI argument a
computer maybe able to manipulate symbols according to syntactical
procedures. The idea of semantics and syntax is used here too, however
I do not think the operations of computations can be mated to the human
mind. This is because the mind proceeds syntactically because it also
possess semantics. This capacity to operate semantically is for Searle,
a necessity to a human biological apparatus as it is for Chalmers.

> Is the brain a computer in this sense? Arguably. For a start, the
> brain can be "programmed" to implement various computations by the
> laborious means of conscious serial rule-following; but this is a
> fairly incidental ability. On a different level, it might be argued
> that learning provides a certain kind of programmability and
> parameter-setting, but this is a sufficiently indirect kind of
> parameter-setting that it might be argued that it does not qualify.
> In any case, the question is quite unimportant for our purposes.
> What counts is that the brain implements various complex
> computations, not that it is a computer.

This is a very important statement by Chalmers, he believes the brain
can be programmed to implement computations only to a limit. He
emphases that learning is an important part of the implementing and
that computation may not have this ability to program a brain because
it can not learn things. Learning is also regarded as a complex
computation even though he states that this point is unimportant for
his purposes. I feel that the learning process can only be done by
following a set of rules which is what Chalmer is sort of saying and
that also the brain is not a computer.

> Justification of the thesis of computational sufficiency has
> usually been tenuous. Perhaps the most common move has been an
> appeal to the Turing test, noting that every implementation of a
> given computation will have a certain kind of behavior, and
> claiming that the right kind of behavior is sufficient for
> mentality. The Turing test is a weak foundation, however, and one
> to which AI need not appeal. It may be that any behavioral
> description can be implemented by systems lacking mentality
> altogether (such as the giant lookup tables of Block 1981). Even if
> behaviour suffices for mind, the demise of logical behaviorism has
> made it very implausible that it suffices for specific mental
> properties: two mentally distinct systems can have the same
> behavioural dispositions. A computational basis for cognition will
> require a tighter link than this, then.

The right kind of behaviour is enough for a system to possess mentality
according to the above. All implementations of computations will have
some sort of behaviour and if this behaviour is enough then mentality
exists. The ability to behave in a logical fashion, such as looking up
data in tables doesnt require any sort of mental ability as long as the
behaviour is the right kind. Therefore I think what Chalmers is
stating is that if the right kind of behaviour is found then maybe the
mind can be explained using computation even without the mental

Throughout the paper Chalmers frequently remarks that behaviourism and
mentalism are twins of the same philosophical coin. I would just like
to state that behaviourists such as Skinner have argued that the mental
phenomena of the human mind such as thinking, understanding, reasoning
etc, are basically private inner events. Also that these are
inaccessible to scientific areas and should be pointed towards the
non-scientific area of studies.

Does this mean that Chalmers idea of computational being central to
cognitive science will not apply here? Also According to Skinner we can
not depend on behaviourism to simulate mentality.

> Instead, the central property of computation on which I will focus
> is one that we have already noted: the fact that a computation
> provides an abstract specification of the causal organization of a
> system. Causal organization is the nexus between computation and
> cognition. If cognitive systems have their mental properties in
> virtue of their causal organization, and if that causal
> organization can be specified computationally, then the thesis of
> computational sufficiency is established. Similarly, if it is the
> causal organization of a system that is primarily relevant in the
> explanation of behavior, then the thesis of computational
> explanation will be established. By the account above, we will
> always be able to provide a computational specification of the
> relevant causal organization, and therefore of the properties on
> which cognition rests.

Chalmers states that casual organization is what brings computation and
cognitive science together. The cognitive systems that have mental
properties have casual organizations too, then the connection that lies
is that these casual organizations can be specified computational. He
seems to think that this is enough to generate mentality. He then goes
on to talk about the abstract specification that is provided for these
casual organisations. I do not think this specification is enough to
denote his argument because he states it is abstract. I think this is
not a logical argument since the casual organization is unreliable.

> Most properties are not organizational invariants. The property of
> flying is not, for instance: we can move an airplane to the ground
> while preserving its causal topology, and it will no longer be
> flying. Digestion is not: if we gradually replace the parts
> involved in digestion with pieces of metal, while preserving causal
> patterns, after a while it will no longer be an instance of
> digestion: no food groups will be broken down, no energy will be
> extracted, and so on. The property of being tube of toothpaste is
> not an organizational invariant: if we deform the tube into a
> sphere, or replace the toothpaste by peanut butter while
> preserving causal topology, we no longer have a tube of
> toothpaste. In general, most properties depend essentially on
> certain features that are not features of causal topology. Flying
> depends on height, digestion depends on a particular physiochemical
> makeup, tubes of toothpaste depend on shape and physiochemical
> makeup, and so on. Change the features in question enough and the
> property in question will change, even though causal topology might
> be preserved throughout.

The idea of casual topology is investigated further more and used to
back his argument for the connection to computation and cognition. He
uses organisational invariance to argue that if functions that involve
digestion took place with metal being substituted for food then it can
not be digestion any more. Similarly if a tube of toothpaste is
replaced with peanut butter and is a sphere then it can not be called a
tube of toothpaste anymore. So even though the normal functions (casual
topology exists) in these cases the are still organizational
invariant. Does this mean the system does not require any mentality in
order to carry out these functions? However if the functions are still
taking place then there must be some form of mentality. Or is mentality
not so important anymore, as long as the functions are carried out? So
basically if we replace the tube of toothpaste with peanut butter would
it still be a tube of toothpaste? Similarly if we replace the bits of
the brains with metal and it still carried out the normal functions of
the brain is it still mental activity?

> The central claim of this section is that most mental properties
> are organizational invariants. It does not matter how we stretch,
> move about, or replace small parts of a cognitive system: as long
> as we preserve its causal topology, we will preserve its mental
> properties.

The author attempts to answer the question that I posed earlier.
According to wording above if the brain did carry out its normal
function without metal substitutes then it is still possess mental
behaviour. I disagree with this. I dont see how the brain can carry out
normal functions without the veins and blood circulation it possess.
Surely if the blood was substituted by water then the brain can not be
simulating sense at all. How can this be mental activity?

What if again a given physical object does for some reason not carry
out its casual topology? For e.g. we assume a colour of a rose is red,
this subjective phenomenon, caused to appear within us (by the
wavelengths of light emitted by the object)? However, we quite
naturally distinguish between the colour an object appears to have or
to be, and the colour it actually is. Red things are not red solely
based on how they look and appear to the normal sighted under normal
lighting conditions, since something which only looks red or appears to
be red may not may actually be red. Also red things can be seen to be
red by the normally sighted under normal lighted conditions, and this
is part of what is meant to be normal sighted (Hacker, 1987, p125).
This would suggest a break down in the casual topology argument?

> The central claim can be justified by dividing mental properties
> into two varieties: psychological properties - those that are
> characterized by their causal role, such as belief, learning, and
> perception - and phenomenal properties, or those that are
> characterized by way in which they are consciously experienced.
> Psychological properties are concerned with the sort of thing the
> mind does, and phenomenal properties are concerned with the way it
> feels. (Some will hold that properties such as belief should be
> assimilated to the second rather than the first class; I do not
> think that this is correct, but nothing will depend on that here.)
> Psychological properties, as has been argued by Armstrong (1968)
> and Lewis (1972) among others, are effectively defined by their
> role within an overall causal system: it is the pattern of
> interaction between different states that is definitive of a
> system's psychological properties. Systems with the same causal
> topology will share these patterns of causal interactions among
> states, and therefore, by the analysis of Lewis (1972), will share
> their psychological properties (as long as their relation to the
> environment is appropriate).

Chalmers talks about psychological properties such as belief, learning
and perception, which are characterised by their casual roles, and says
that they are consciously experienced. I think an important part of
recognising a physical object like an apple is related to the idea of
learning. You can only recognise the apple if you can see it. I think
this is a distinctive achievement. Chalmers talks about learning and
other properties like they are ordinary characteristics. Surely it is
more important than that because to be able to recognise something
requires a sensory input which is a vital point in the argument.

> If what has gone before is correct, this establishes the thesis of
> computational sufficiency, and therefore the the view that Searle
> has called "strong artificial intelligence": that there exists some
> computation such that any implementation of the computation
> possesses mentality. The fine-grained causal topology of a brain
> can be specified as a CSA. Any implementation of that CSA will
> share that causal topology, and therefore will share
> organizationally invariant mental properties that arise from the
> brain.
> A computational basis for cognition can be challenged in two ways.
> The first sort of challenge argues that computation cannot do what
> cognition does: that a computational simulation might not even
> reproduce human behavioral capacities, for instance, perhaps
> because the causal structure in human cognition goes beyond what a
> computational description can provide. The second concedes that
> computation might capture the capacities, but argues that more is
> required for true mentality.

I agree very much with what Chalmers is saying here, his first argument
states that computation can not produce human behaviour and that human
cognition goes beyond computation. He then says computation may be able
to capture some of the behaviour capabilities, but this is still not
enough for true mental simulation. However within the abstract of this
paper Chalmers believes very strongly about his relationship between
cognition and computation. The above wording sounds like he is not so
convinced anymore. If computation can not provide what computation can
then how is he going to explain how the brain works? Surely both need
to coincide at some point? Just because computation can capture some of
cognitions capabilities does not give it the foundation for a strong

> But a computational model is just a simulation! According to this
> objection, due to Searle (1980), Harnad (1989), and many others,
> we do not expect a computer model of a hurricane to be a real
> hurricane, so why should a computer model of mind be a real mind?
> But this is to miss the important point about organizational
> invariance. A computational simulation is not a mere formal
> abstraction, but has rich internal dynamics of its own. If
> appropriately designed it will share the causal topology of the
> system that is being modeled, so that the system's organizationally
> invariant properties will be not merely simulated but replicated.

Yes a computational model is only a simulation. A person can sit in a
plane know what it feels like to be flying, similarly a person can
experience a hurricane. The feelings and emotions involved in these
experiences are real and do exist, so one knows what to expect almost.
On contrary how does one know what it is like to be sitting in a brain?
Or what feelings and emotions are supposed to exist? Does anyone
actually know how the brain works? Surely you can not generate a
simulation for something without knowing what it is you are supposing
to be expecting to see or feel? The idea of causal topology can not be
carried out either if we do not know these things. Who knows what
functions occur when and how in the brain, so how do we know how to
organise its casual topology? What happened to the importance of
mentality? If the casual topology of the mind is enough to produce
replications of the simulations, then where is it all the mental
activity? The human mind does not produce a set of replicated rules
every time it has a problem to deal with. The rules the brain uses when
dealing with mental behaviour are produced by some sort of cognition.
If these rules were just replicated then there would be no mentality.

> Some mental properties, such as knowledge and even belief, depend
> on the environment being a certain way. Computational organization,
> as I have outlined it, cannot determine the environmental
> contribution, and therefore cannot fully guarantee this sort of
> mental property. But this is no problem. All we need computational
> organization to give us is the internal contribution to mental
> properties: that is, the same contribution that the brain makes
> (for instance, computational organization will determine the
> so-called "narrow content" of a belief, if this exists; see Fodor
> 1987). The full panoply of mental properties might only be
> determined by computation-plus-environment, just as it is
> determined by brain-plus environment. These considerations do not
> count against the prospects of artificial intelligence, and they
> affect the aspirations of computational cognitive science no more
> than they affect the aspirations of neuroscience.

I think there are two arguments that can be proposed there. Firstly I
think the environment plays an important part when stimulating mental
activity. What humans as well as all living life do in their everyday
lives depend on the environment we live. Creatures birds etc will react
to their environment for example when the climate changes certain birds
and creatures will emigrate and go into hibernation. This is all mental
activity - no one tells them to apart from the environment. Therefore
I do not see how the author can say that the environment does not play
an important part when simulating mental activity. The feelings we
experience when in the environment such as being burnt in the sun are
easy to model. But then every one does not get burnt in the sun, so how
would you model something that has more than one meaning. Similarly
just because someone passes an exam with flying colours does not mean
that persons colleague is happy with the same results. How would you
model the fact the two people feel differently about the same exam even
though both got the same marks? This again depends on the environment.
Secondly it is true that the mind is a feature of the physical world
i.e. the environment. Yes it can be true that computation can used to
combine physical object such as tables, chairs, apples and also objects
that seem apparently physical such as tastes, colours and smells. But
then the environment we recognise, as physical (e.g. mountains) could
be logical constructed, by using physics.

This is probably too in depth, but this logical construction can
register our senses as described using physics. Such as the mountain is
a cognitive construction made by us, which can be interpreted by
millimicrons of wavelengths of the light on our photoreceptors. The
colour of the mountain, and its texture are a deeper level of
construction from the physics phenomena. The rough edges and brown
colour of the mountain are contributions by our sensorimotor skills
such as our internal supply of concepts and memories.

So now we could probably compute the physics phenomena, but how do we
compute the sensorimotor skills?

> Artificial intelligence and computational cognitive science are
> committed to a kind of computationalism about the mind, a
> computationalism defined by the theses of computational sufficiency
> and computational explanation. In this paper I have tried to
> justify this computationalism, by spelling out the role of
> computation as a tool for describing and duplicating causal
> organization.I think that this kind of computationalism is all that
> artificial intelligence and computational cognitive science are
> committed to, and indeed is all that they need. This sort of
> computationalism provides a general framework precisely because it
> makes so few claims about the kind of computation that is central
> to the explanation and replication of cognition. No matter what the
> causal organization of cognitive processes turns out to be, there
> is good reason to believe that it can be captured within a
> computational framework

Chalmers tries to conclude on his paper by stating that he has used
computational sufficiency and computational explanation to explain how
the mind can be modelled computational. Then has used computation as a
tool for describing and duplicating causal organization. I disagree
with that he says this is all that is needed by artificial intelligence
and computational cognition. I do not think this is enough. My
argument with the replicated simulation from before can be brought back
here. By duplicating casual organization you are not creating any
mentality. I would say it is like going into an exam and rewriting the
lecturers notes word for word. How would the examiner know if the
student has actually understood the subject, even though the notes were
explained in full detail? The notes could have been memorised and just
duplicated. Similarly if casual organisation is just duplicated how
can we tell if mentality has been created? Then he says that this is
enough to provide a general framework and a few claims about the kind
of computation it is. I thought this was the central argument of this
paper. How can a few claims be enough to argue the point that
computation is the central idea behind this paper. I am not sure what
Chalmers is trying to say.

> But artificial intelligence and computational cognitive science are
> not committed to the claim that the brain is literally a Turing
> machine with a moving head and a tape, and even less to the claim
> that that tape is the environment. The claim is simply that some
> computational framework can explain and replicate human cognitive
> processes. It may turn out that the relevant computational
> description of these processes is very fine-grained, reflecting
> extremely complex causal dynamics among neurons, and it may well
> turn out that there is significant variation in causal organization
> between individuals. There is nothing here that is incompatible
> with a computational approach to cognitive science.

He is right in saying that there is nothing here that is incompatible
with a computational approach to cognitive science. As technology is
advanced we can see more amazing things such as being able to be
connected to the internet by your mobile phone. I am sure in the future
Chalmers argument about how the mind can be modelled computational will
be made to stance. However I still think the idea is absurd mainly
because the mind is too complex and we do not know how it works

I conclude on saying that I think human beings (minds) are supposed to
exist within a mechanically operating material reality. Where the
reality is based the environment we are surrounded in. This reality is
available through new advances in scientific research and
interpretation. I still do not think this scientific research is
enough to explain how the mind works because I believe the world
according to science or the environment is different or unlike the
world which is available to us via our senses.

This archive was generated by hypermail 2.1.4 : Tue Sep 24 2002 - 18:37:31 BST