From: Marinho Francis-Oladipo (fom198@ecs.soton.ac.uk)
Date: Fri May 18 2001 - 13:32:26 BST
>Sparks:
>I believe that want leads to emotion. We are happy, sad, worried,
>relieved, all because of how a situation affects the outcome of
>something that we want.
Marinho:
This is probably true. Everybody is asked the question "What do you want out
of life?" at some point, and there is always an answer to that question,
which suggests that wanting is a widespread occurrence. However, is it not
possible that the act of wanting is itself an emotion rather than a
prerequisite for one?
After all, just as you have a feeling of happiness, sadness or worry, you
also have a feeling of wanting. It is possible to go to bed upset for some
reason, and wake up the next day with a very positive feeling for no
apparent reason.
>Sparks:
>If a robot never wants for anything, I don't think it will ever pass
>the T3 Turing Test. It is a common argument that robots could never
>exhibit emotion or emotive responses to situations, that they could
>never have feeling.
Marinho:
Buddhist literature describes its founder as a man whose sole purpose was to
attain "perfect wisdom and enlightenment", a realization that would bring an
end to impermanence and anguish. Suppose for a moment that this man found
exactly what he wanted: given the importance of his discovery, he would then
have wanted nothing more out of life. Everything one would not want,
suffering etc., would have been abolished. Would such a man then be
considered unworthy of passing the Turing Test, or of being called
intelligent? On the contrary, if he had indeed found this ultimate truth,
surely that would make him more rather than less intelligent.
>Sparks:
>To want new and changing things, and to
>act on those wants, is to maintain an active and varied existence,
>the alternative being a repetitive, predictable existence.
>Notice how current robot implementations follow the second pattern much
>more closely than the first.
Marinho:
I agree, but it can be argued that the repetitiveness of these robot
implementations depends on the inputs they receive. As with the Granny
objection that computers can never do anything new: they do new things when
given new input.
Francis Oladipo Marinho
This archive was generated by hypermail 2.1.4 : Tue Sep 24 2002 - 18:37:30 BST