**From:** HARNAD Stevan (*harnad@coglit.ecs.soton.ac.uk*)

**Date:** Wed Jun 06 2001 - 15:48:59 BST

**Next message:** HARNAD Stevan: "Re: MacLennan: Grounding Analogue Computers" · **Previous message:** Clark Graham: "Re: Sony Turing Test" · **In reply to:** Hudson Joe: "Morgenstern: Frame Problem" · **Messages sorted by:** [ date ] [ thread ] [ subject ] [ author ] [ attachment ]

On Sat, 3 Mar 2001, Hudson Joe wrote:

*> 'The Problem with Solutions to the Frame Problem'*

*> http://citeseer.nj.nec.com/morgenstern95problem.html*

*> Hudson: our 'explanation' may only be (and often is) in terms of 'concepts' that are themselves based on inductive reasoning ('it's been that way so far so I assume it will continue to be that way, don't ask me why.'). But this is straying into the symbol grounding problem.*

No, the symbol grounding problem is not the same as the problem of inductive reasoning. There is already a problem of inductive reasoning in the world of symbols alone (consider mathematical induction!). And there is a credit/blame assignment problem for purely symbolic induction too.

The grounding of symbols in sensorimotor categories provides an extra constraint on them, over and above (or rather "under and below") the purely syntactic constraints of formal symbol manipulation rules. It is a way of making their "shape" less arbitrary.

If the symbol "dog" actually had to be shaped like a dog, then we could not compute with it at all. But if it is connected to the "shape" of sensorimotor dog-detectors/manipulators, this can provide extra constraints without losing the computational power of the arbitrary symbols either.

Yes, the connection is usually arrived at via induction, and that links the symbol grounding problem to the problem of induction. But remember that real-time history is not essential in principle. All of our sensorimotor category-detectors could have been born fully tuned, without any prior induction, in principle. (It's just that that's as unlikely to happen as chimps typing Shakespeare.)

*> Hudson: "Do we even know if a causal rule is true?"*

The problem of induction (how can you know that past patterns will continue?) is related to the problem of causation (how can you know that something is the cause of something else?).

*> Hudson: "What is the connection between causation and material implication?" There is no difference at all that I can see.*
*

"A materially implies B" is true if A and B are both true, false if A is true but B is false, and true otherwise (i.e., if A is false, it's true whether or not B is true).

That's just a formal relation; and it certainly is not the same as "A causes B".

(Find counter-examples of both; i.e., cases of causation that violate material implication, and cases of material implication that are noncausal.)
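The truth table just described can be checked mechanically; a minimal sketch in Python (the `implies` helper name is my own, added here as an aside to the exchange):

```python
# Material implication: "A implies B" is false only when A is true and B is false.
def implies(a: bool, b: bool) -> bool:
    return (not a) or b

# Print the full truth table.
for a in (True, False):
    for b in (True, False):
        print(a, "->", b, "is", implies(a, b))

# A materially true but noncausal implication: a false antecedent makes
# the implication true regardless of the consequent, with no causal link.
print(implies(False, False))  # -> True
```

This makes the noncausal direction of the counter-example exercise concrete: any implication with a false antecedent is vacuously true, whether or not the two statements have anything to do with each other.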

*> Hudson: the frame problem is a problem of reasoning because it will continue to occur until we make our solution capable of reasoning. But I think that that includes inference as well as induction.*

Reasoning and induction are both forms of inference. And inference is symbol manipulation. If the frame problem is a problem with purely symbolic models of reasoning, no form of reasoning will be sufficient to solve it on its own.

*> Hudson: I think a workable solution would be something more eclectic and algorithm-centred*
*

That's still symbolic...

*> Hudson: it would be closely tied in with the learning mechanisms and motor-sensory sub-systems*
*

So maybe not "algorithm-centred" after all?

*> Hudson: Finally, whatever method is adopted to solve the frame problem, the issues of data representation and processing will be key. If the system is given a strong learning ability then these issues will be less critical at the design stage, but then we move the difficulty to making sure the architecture of the learning mechanisms is powerful enough for the system to develop effective ways to represent and process data itself.*

It's a tough problem, but learning capacity and experience will certainly be part of the solution.

Stevan Harnad


*This archive was generated by hypermail 2.1.4: Tue Sep 24 2002 - 18:37:31 BST*