From: Yusuf Larry (kly198@ecs.soton.ac.uk)
Date: Fri Mar 30 2001 - 17:14:42 BST
> > Yusuf L:
> > we are encouraged to judge the machine and the human by performance
> > capacity and not by aesthetics. Hence, a machine that can think is one
> > that can do what we do, indistinguishably from the way we do it. This
> > has nothing to do with the look and feel of the system.
>
> Harnard S:
> Yes, but in excluding appearance (it's appearance, not really
> "aesthetics") have we unwittingly excluded a lot of performance capacity
> too (T3)?
Yusuf L:
The idea was to rule out superficial criticisms of the machine. However,
if human performance capacity is based on our physiology, then by
excluding appearance we could have restricted, or even excluded, a lot of
performance capacity too.
> > Yusuf L:
> > the emphasis needs to be more on developing true systems, i.e. ones
> > that can truly analyze a question or situation and reply based on an
> > understanding of this question, rather than on a set of inputs and
> > outputs.
>
> HARNAD:
> But what more is there, besides external inputs, internal states, and
> outputs? What do you mean by "truly"?
Yusuf L:
By truly, I am referring to an understanding and appreciation of what the
machine is processing, doing, or saying, instead of just running some
algorithm. But looking ahead I can see how this would lead to the other
minds problem.
> > > TURING:
> > > It is natural that we should wish to permit every kind of
> > > engineering technique to be used in our machines. We also wish to
> > > allow the possibility that an engineer or team of engineers may
> > > construct a machine which works, but whose manner of operation
> > > cannot be satisfactorily described by its constructors because they
> > > have applied a method which is largely experimental.
> >
> > Yusuf L:
> > In building the thinking machine, we should not restrict ourselves to
> > particular techniques but rather explore every possible form of
> > engineering. And just as humans cannot explicitly describe how it is
> > we think, how another person thinks or does something intelligent, or
> > indeed whether they think or are intelligent at all, we may not be
> > able to do the same with the machine, or need to, in order to justify
> > the machine as a thinking system.
>
>HARNAD:
> Not quite. ....
> But I don't think what Turing means is the fact that we don't know how
> our own minds work (hence we cannot build such models simply by
> introspecting on how our minds do it, and "reading off" the algorithm or
> mechanism).
Yusuf L:
Agreed.
But what do you think about the above statement, i.e. we cannot build models
by just introspecting on how our minds do it, and "reading off" the
algorithm or mechanism (until we can find explicit ways of defining those
processes that are implicit but cannot be defined normally)?
> > TURING:
> > we only permit digital computers to take part in our game.
>
> HARNAD:
> (1) T2 may not be a strong enough test for intelligence. (Penpal
> capacity may not put the candidate through enough of its paces, just
> as a toy test on chess playing alone, or a mere 10-minute test would
> not.) Hence T2 may fail Turing's own criterion of Total
> Indistinguishability in performance capacity.
Yusuf L:
I think at this stage we can definitely say that T2 is not a strong enough
test for intelligence.
> HARNAD:
> Was this a big mistake on Turing's part, or just the failure to dot some
> i's and cross some t's. I'm inclined to think it was more the latter.
> But his failure to dot those i's and cross those t's may have had big
> consequences, leading a lot of lesser minds needlessly astray.
Yusuf L:
Totally agree, but if he had spent time dotting those i's and crossing
those t's, the lesser minds would not have been led astray (remember kid
sib). The problem has less to do with failing to dot the i's and cross
the t's than with not realising that they were left undotted and
uncrossed, and, by extension, not asking what the consequences of that
would be.
> > Yusuf L:
> > Prior to this statement, Turing had described digital computers as
> > machines capable of performing computation, where computation is
> > implementation-independent, semantically interpretable, syntactic,
> > rule-based symbol manipulation. Following Turing's argument of
> > indistinguishable entities, if this digital computer can mimic the
> > human computer indistinguishably, then it could be the human computer.
> > This implies that the human computer is simply a machine that performs
> > computation. I disagree. However, it is possible that a computational
> > system might be able to mimic some or many of the human computer's
> > functions (hence the "very closely"), in which case this is possible,
> > but mimicking is as different from being as simulation is from real
> > life.
>
> HARNAD:
> This is a little vague. We are no longer speaking of "mimicry" but about
> T2 or T3 (or even T4). Are you saying (with Searle) that only T5 (the
> full, exact, organic functional explanation of real biological brain
> activity) can explain or implement a mind? (The position is tenable, but
> do you really want to defend it, and argue that a lifelong T3 robot, or
> even a lifelong T4 synthetic body and mind could not really have a
> mind?)
Yusuf L:
T4 can be used to explain and implement a mind, but for comparing humans
and machines, nothing short of T5 will do. We might talk about
equivalence and indistinguishability till the end of time, but to the
common man on the street, A and B are either the same or different
(small-minded, I know, but remember your example of the two planets in
space and the one about blue swans).
> > Yusuf L:
> > a digital computer cannot be programmed to mimic the human computer...
> > because we are not discrete state machines but truly random
>
> HARNAD:
> And you think "true randomness" is the critical element for implementing
> a mind? Why?
Yusuf L:
The mind is a flexible and supple entity capable of operating in x
dimensions and x planes (where x is unspecified). Even though, through
experience, growth, and learning, we tend to restrict ourselves, or more
precisely our minds, by restricting our views of what is right or wrong,
what we choose to believe, and so on, the mind still has the capability
of true randomness. Implementing something short of this would be a
great step, but not equivalent or indistinguishable (even if the
probability of using the parts of the mind that the machine
implementation cannot duplicate might be minute). This is because the
machine mind will be restricted and might not cover some of the
parameters the human mind covers, or it might get there by taking a very
long, inefficient route.
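For what it is worth, Turing's notion of a discrete-state machine can be made concrete. The sketch below (the states, inputs, and transition table are invented purely for illustration) shows the sense in which a digital computer is deterministic: its next state is a fixed function of its current state and input, and even its "random" numbers repeat exactly when seeded identically, which is what the claim about "true randomness" denies of the mind.

```python
import random

# A minimal discrete-state machine in Turing's sense: the next state and
# output are a fixed function of (current state, input symbol). This toy
# table (hypothetical, for illustration only) toggles a light on "press".
TRANSITIONS = {
    ("off", "press"): ("on", "light on"),
    ("on", "press"): ("off", "light off"),
}

def run(start_state, inputs):
    """Replay inputs from a start state; outputs are fully determined."""
    state, outputs = start_state, []
    for symbol in inputs:
        state, out = TRANSITIONS[(state, symbol)]
        outputs.append(out)
    return outputs

# Determinism: the same inputs from the same state give the same outputs.
assert run("off", ["press"] * 4) == run("off", ["press"] * 4)

# Even "randomness" on a digital computer is pseudorandom: two generators
# seeded identically produce identical sequences.
rng_a, rng_b = random.Random(42), random.Random(42)
assert [rng_a.random() for _ in range(5)] == [rng_b.random() for _ in range(5)]
```

Whether a machine with genuinely non-deterministic internals would fare any better on the question of mind is, of course, exactly what is at issue above.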
> > > TURING:
> > > The original question, "Can machines think?" I believe to be too
> > > meaningless to deserve discussion. Nevertheless I believe that at
> > > the end of the century the use of words and general educated opinion
> > > will have altered so much that one will be able to speak of machines
> > > thinking without expecting to be contradicted.
> >
> > Yusuf L:
> > If it is such a meaningless topic, why are we discussing it? To say,
> > rather than check whether a machine can think, make it take the TT
> > (imitation test), and if it is indistinguishable it can think, is a
> > good approach, but it does not diminish the scientific interest of
> > whether machines can think. If we were to develop a digital computer
> > that passes the TT, people would still be interested in whether it can
> > or cannot think. Better yet, what if we develop a system that passes
> > the TT but can't think?
>
> HARNAD:
> The question "But is it really thinking? Does it really have a mind?"
> may be undecidable by the rest of us, but I assure you that (if the
> answer is yes), it is not only meaningful but the MOST meaningful
> question for at least one of us: The candidate himself. Hence it is and
> always was false, and almost fatuous, to say that the question is
> meaningless! (Let us learn the wise lessons, not the foolish ones, from
> Turing Indistinguishability.)
Yusuf L:
Now you're getting all paradoxical on me. What I was trying to say is
that if we have a TT passer and the developer could not say how it works
or what makes it have a mind, people would still be interested. To say
not to bother because it works will not do. The other minds problem is
one of the most discussed problems in AI, but we all know it is
meaningless to discuss it, as we never really get anywhere. My point
being: it might be foolish, but it will still happen; it's the way of
the world.
Larry
This archive was generated by hypermail 2.1.4 : Tue Sep 24 2002 - 18:37:30 BST