Re: Pylyshyn on Cognitive Architecture

From: Egerland, Matthias (
Date: Thu Mar 09 2000 - 04:45:50 GMT

> However, as Al Newell never
> tires of pointing out, there is no principled distinction between a
> virtual and a real architecture: they are all equally real physical
> architectures. The only different between them may come down to
> something like the theoretically irrelevant fact that after the power
> has been turned off (or the system reset) some machines do revert to a
> different architecture,

> Here he is stating that virtual and real architectures are equally
> real, with no principled difference between them. I do not agree with
> this point: if different languages are used to implement, say, one
> algorithm, won't they all behave slightly differently? E.g. Prolog
> relies on backtracking while C is executed sequentially.
> This leads to a difference in how the information is processed when
> these algorithms are executed, and so may lead to different results.

If you really implement the same algorithm in two different programming
languages, the result of executing both programs with the same input
parameters will definitely be the same. Why should there be a
difference? A difference might only occur in the way each programming
language handles a specific problem (as you pointed out with
backtracking vs. sequential execution). But this is just a difference
of implementation, and we wanted to treat computation as an
implementation-independent symbol system.
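A minimal sketch of this point (the algorithm and function names are my own choice): Euclid's GCD written in a recursive, declarative style, closer to Prolog, and in a sequential, imperative style, closer to C. The control flow differs, but the input/output mapping, i.e. the computation itself, is identical.

```python
def gcd_recursive(a, b):
    # Declarative/recursive style, closer to how Prolog would state it.
    return a if b == 0 else gcd_recursive(b, a % b)

def gcd_iterative(a, b):
    # Sequential/imperative style, closer to C.
    while b != 0:
        a, b = b, a % b
    return a

# Both implementations agree on every input: same computation,
# different implementation.
for pair in [(48, 18), (1071, 462), (7, 13)]:
    assert gcd_recursive(*pair) == gcd_iterative(*pair)
```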

> There is a knowledge-based system that can take algorithms as input and
> process them; the outcome depends on the system. For example, if you
> have two identical algorithms and they were processed on two different
> machines, the likelihood that the outcome (representation) will be the
> same is slim.

So you assume that the knowledge base and the rules of such a system are
always very different, leading to dissimilar results. But why assume
that, and what would it prove?

> When we process a mental image in our minds it is done instantly; there
> are no time delays in which information is being fetched, executed and
> processed. If this were to be implemented in a computer model, the
> algorithms would have to be able to process something instantly,
> without any time delays associated with them, and at present no
> computer language can handle this. But if the design somehow
> incorporated information-processing functions that aid the actual
> information-processing architecture, then this could be feasible.

I think you mean that we can comprehend a mental image as a whole, at
once, rather than having to 'scan' it somehow and then start some
calculation to 'see' what the image looks like. But couldn't we
simulate the same thing with some kind of parallel machine? The human
mind also needs some time to remember specific details of a mental
image, and we do not have all the associations prompted by our
knowledge base at once either. So we could still consider the process
of thinking about a mental image as the result of a rule-based system.
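A toy sketch of that parallel-machine idea (the data and names are mine, purely for illustration): an 'image' is split into regions, and each region is inspected by a separate worker at the same time rather than scanned sequentially. The combined result is the same as a sequential scan, only obtained by many 'processors' at once.

```python
from concurrent.futures import ThreadPoolExecutor

# A toy 4x4 'image' of brightness values, split into four 2x2 regions.
image = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
    [13, 14, 15, 16],
]

def region_brightness(top, left):
    # Each worker inspects one region independently of the others.
    return sum(image[r][c] for r in (top, top + 1) for c in (left, left + 1))

regions = [(0, 0), (0, 2), (2, 0), (2, 2)]
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda tl: region_brightness(*tl), regions))

# Parallel inspection recovers exactly what a sequential scan would.
assert sum(results) == sum(sum(row) for row in image)
```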

> But how can the behavior of a system not be due to its internal
> construction or its inherent properties? What else could possibly
> explain the regularities it exhibits? It is certainly true that the
> properties of the box determine the totality of its behavioral
> repertoire, or its counterfactual set; i.e. its capacity. But as long as
> we have only sampled some limited subset of this repertoire (say, what
> it "typically" or "normally" does) we may not be in any position to
> infer what its intrinsically constrained capacity is, hence the observed
> regularity may tell us nothing about the internal structure or inherent
> properties of the device. It is easy to be misled by a sample of
> a system's behavior into assuming the wrong sample space or
> counterfactual set.

> The behaviour of a system does not essentially have to come from
> within itself, i.e. its internal data structure (the architecture of
> the system); there are other factors that can cause its behaviour to
> change, such as the environment the system is in.
> We ourselves act differently in different environments, which in turn
> has effects on our internal structure, i.e. our minds.

That is why psychologists have to minimize the influence of the
environment when they do experiments with human beings. In order to
build an exact but artificial copy of the mind we would first have to
understand all these influences, which is impossible in my opinion.

> Again it is an empirical question, though this time it seems much more
> likely that a knowledge-level ("tacit" knowledge, to be sure)
> explanation will be the correct one. The reason for this is that it
> seems likely that the way colors mix in one's image will depend on what
> one knows about the regularities of perceptual color mixing -- after
> all, we can make our image of a certain region be whatever color we
> want it to be!
> Here he states that the mind can infer some image/information straight
> away instead of having to process that information; it is said to come
> from knowledge. For an architecture to be able to do this it has to
> follow a set of governing rules, which will have to be defined; for
> example, when this colour and that colour are mixed they will produce
> this colour.
> For an architecture to be as good as the mind it will have to process
> information just like a real mind does, in a biological way.

Regarding the colour example, one could just consider analogue light
sensors with different filters. Putting a coloured object in front of
the sensor will yield a result instantly (OK, at the speed of light,
but the brain isn't faster anyway), and the colour recognized depends
on the calibration of the sensor.
Analogue computers are a good example of machines that do some sort of
computation without time delay.
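The 'governed rules' for colour mixing mentioned in the quoted passage could be sketched as nothing more than a lookup in a rule base. This is a hypothetical illustration of mine; the three rules are the textbook subtractive-mixing ones, chosen just for the example.

```python
# A tiny rule base: unordered pairs of primaries mapped to their mixture.
MIXING_RULES = {
    frozenset({"red", "yellow"}): "orange",
    frozenset({"yellow", "blue"}): "green",
    frozenset({"blue", "red"}): "purple",
}

def mix(colour_a, colour_b):
    # Look the pair up in the rule base; the order of the colours
    # is irrelevant, which is why a frozenset is used as the key.
    return MIXING_RULES.get(frozenset({colour_a, colour_b}), "unknown")

print(mix("yellow", "red"))   # orange
print(mix("blue", "yellow"))  # green
```

The point is only that 'knowing how colours mix' can be modelled as rule retrieval rather than as a computation over the image itself, which is exactly the knowledge-level explanation Pylyshyn favours.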

This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:36:27 GMT