On Thu, 4 May 2000, Edwards, Dave wrote:
> Things matter to us, principally, because they are beneficial to our
> welfare: eating, sleeping, friends, money, etc. There are things that
> matter to Cog as well, because Cog has been told to protect itself,
> such as not damaging itself.
If Cog's major motivation is not to damage itself, it would be perfectly
reasonable to expect it to refuse to move or do anything at all. I think the
other things that Edwards points out, such as the inbuilt need to eat
and sleep, are necessary to actually provide motivation. If the robot
doesn't want anything, it will only learn things when we tell it to, which
would hardly be practical (no one had to tell us to learn most of the
things we can do; we learnt because we had to). This seems to me to be a
very important point, and it may come down to complexity again.
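
To make the point concrete, here is a purely illustrative toy sketch in
Python (it has nothing to do with Cog's actual architecture; the actions,
reward values and probabilities are all invented). An agent whose only
objective is self-preservation gets its best expected payoff by doing
nothing, whereas adding an inbuilt drive such as hunger makes acting
worthwhile:

    import random

    ACTIONS = ["stay_still", "explore"]

    def reward(action, drives):
        """Toy payoff: exploring risks damage; an inbuilt drive such as
        hunger makes exploring pay off.  'drives' is a set of needs."""
        r = 0.0
        if action == "explore":
            if random.random() < 0.1:   # small chance of damage
                r -= 10.0               # damage penalty
            if "hunger" in drives:
                r += 2.0                # exploring can find food
        return r

    def average_reward(action, drives, trials=10000):
        return sum(reward(action, drives) for _ in range(trials)) / trials

    # With only self-preservation, staying still is optimal:
    print(average_reward("stay_still", drives=set()))       # ~0.0
    print(average_reward("explore",    drives=set()))       # ~-1.0

    # Add an inbuilt need (e.g. hunger) and acting becomes worthwhile:
    print(average_reward("explore",    drives={"hunger"}))  # ~+1.0

In this little model, "refuse to move" really is the rational policy until
some further want is built in, which is the point being made above.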
> Edwards:
> The obvious problem may arise from just believing Cog's own words. He
> may learn to lie, probably from us, just like a child. Unfortunately,
> we will have little choice but to believe him, as he gets more and more
> complex.
Again, the robot would have to have a motivation to lie (children lie when
they think they're in trouble - but how would a robot be punished if it
misbehaves?).