From: Clark Graham (ggc198@ecs.soton.ac.uk)
Date: Thu May 24 2001 - 11:11:09 BST
> Mo:
> For a start a "machine" can refer to any causal system, be it
> physical or probabilistic.
Clark:
What is a probabilistic causal system? If you mean something like a
coin-flipping machine, then although at the high level of someone or
something flipping the coin there is a "50% chance" that it will land
on heads, at the low(est) level several factors directly determine the
outcome. The coin (or anything else) does not make some sort of choice
as to which side to land on; the result is entirely determined by the
strength of the flip, any wind, and, at a very low level, the positions
of the air molecules in space. The sheer number of factors involved
means that, from a high level, even if the strength of the flip and the
wind are kept constant, the coin seems to land on heads only 50% of the
time. This is because it is impossible to keep all the variables
constant across multiple flips. If the physical did not matter and the
system were completely probabilistic, then someone could travel back in
time, keep flipping the same coin (at exactly the same position and
time), and expect different results. Just because a system seems
probabilistic on a macroscopic level while being physical on a
microscopic level does not mean we should label it "probabilistic"
simply because we cannot appreciate the complexities of its physical
reality.
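To make the point concrete, here is a minimal Python sketch (my own,
purely illustrative), in which the random seed stands in for the
complete physical state before a flip - strength, wind, the positions
of the air molecules:

    import random

    def flip_coin(seed, n):
        # The seed plays the role of the full physical state before
        # the flip: strength, wind, air molecules, and so on.
        rng = random.Random(seed)
        return ["heads" if rng.random() < 0.5 else "tails"
                for _ in range(n)]

    # "Travelling back in time": restarting from the identical state
    # gives the identical outcomes, every time.
    assert flip_coin(seed=42, n=10) == flip_coin(seed=42, n=10)

    # Across many *different* starting states the outcomes look like
    # a fair 50/50 coin, even though each run is fully determined.
    flips = [flip_coin(seed=s, n=1)[0] for s in range(10000)]
    print(flips.count("heads") / len(flips))  # roughly 0.5

The apparent randomness lives entirely in our ignorance of the
starting state, not in the machine itself.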
> Mo:
> If a pen-pal could convince you all your lifetime that they are
> human, and at your deathbed, in rolled your pen-pal. Clearly from
> all the electronics, you see it was just a machine, but without
> ever seeing it, you would have never known.
Clark:
It is possible for a T3 (or even T5)-passing system to fool someone
into thinking that it is human, because it clearly is not - it was
built by humans, not borne by them. However, the whole point of these
systems is that they do not have to fool people into believing they
have a MIND. If such a system passes the Turing Test and is therefore
indistinguishable from a human MIND, then we have no right to say that
it does not have a mind, because there is no perceivable difference
between its "mind" and someone else's "mind". Because of the Other
Minds barrier we can't be sure, but we CAN be as sure about the
T3/4/5-passing system as we are about other humans.
> Mo:
> Humans often make mistakes, they could be vague in their answering
> of questions, they may take a long time to come up with an answer
> or may not even understand the question. An algorithm could be
> implemented to include errors and give wrong answers if required.
Clark:
This seems similar to an argument I have heard: that the human senses
can be fooled, and that this affects our judgement and actions (i.e.
our mind), so it would be extremely hard, if not impossible, to build
a system that was indistinguishable from a human (mind-wise, for the
time being). But if the system we built really was indistinguishable,
then surely it would be affected by the same defects as us (we are
sometimes, indeed often, wrong; we can be fooled by ambiguous inputs
to our sensory system; and so on)? In some situations people decide to
make up an answer that might be wrong, and in others to admit that
they do not know, so a T3-passing system should be able to do the
same. I think that the only way to acquire this knowledge (in which
situations to lie and in which to admit defeat) is by learning in much
the way a child does - through trial and error. We don't (at least it
seems like we don't) have an algorithm in our minds that "randomly"
spits out a wrong answer on occasion; our mistakes are purely down to
our level of knowledge and the circumstances we find ourselves in.
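For contrast, here is a rough Python sketch of the kind of
error-injecting algorithm Mo describes (the names and the fixed error
rate are my own, purely for illustration):

    import random

    def answer(question, knowledge, error_rate=0.1):
        # Look the answer up, but "randomly" return a wrong one some
        # fixed fraction of the time, as Mo suggests.
        correct = knowledge.get(question, "I don't know")
        if random.random() < error_rate:
            return "some deliberately wrong answer"
        return correct

The wrongness here is bolted on from the outside by a constant, which
is exactly what human error is not: ours varies with what we know and
the situation we are in.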
If a system was built that had great hopes of passing T3, I do not
think it would pass for even a few minutes when confronted with a
human who had no reason to believe it wasn't another human with a
mind. (I say another human just so no premature biases or prejudices
come into play, in the same way that Turing originally ruled that the
system and the "judge" should be in separate rooms.) If the "judging"
human was told he was talking to a baby or an infant, he might be more
convinced. The test would be as if you were talking to someone who had
somehow had all their knowledge, memories, acquired skills, etc. wiped
from their mind. If you didn't know this, you'd think them pretty
strange, maybe even mindless.
Therefore, I think the only way a system can pass T3 is if it goes
through a substantial "childhood" phase. It needn't be as long as
10-15 years, because some parts of the learning process
(reinforcement, the actual learning of a fact in the first place)
could be sped up, but it still must take place.
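As a toy illustration of what such sped-up trial and error might look
like (entirely my own sketch, not a proposal for a real T3 system),
here is a learner that works out, from feedback alone, when to guess
and when to admit ignorance:

    import random

    def toy_childhood(trials=100000, lr=0.001):
        threshold = 0.5  # guess only when confidence exceeds this
        for _ in range(trials):
            confidence = random.random()   # how sure we feel
            if confidence > threshold:     # choose to answer
                correct = random.random() < confidence
                if correct:
                    threshold -= lr        # answering paid off
                else:
                    threshold += 3 * lr    # a wrong guess costs more
            else:
                threshold -= lr / 3        # "I don't know" carries a
                                           # small cost too
        return threshold

    print(toy_childhood())  # settles at a stable threshold, learned
                            # purely from reinforcement

Each "lesson" here is trivially fast, which is the sense in which the
childhood phase could be much shorter than 10-15 years; what cannot be
skipped is the phase itself.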