**Next message:** Harnad, Stevan: "Re: Chapter 9: Pragmatics and Communication" **Previous message:** Gill, Lynn: "Re: Chapter 9: Pragmatics and Communication" **Messages sorted by:** [ date ] [ thread ] [ subject ] [ author ]

Chapter 11: Problem-Solving

Combinatorial Explosion

The concept of a "problem" is a pretty general one. Just about every

conscious (or unconscious) behaviour can be seen as the solution

to a problem -- from the rat solving the "problem" of getting

food by pressing the lever, to Garry Kasparov finding the right

chess move to win the game.

Newell and Simon (the latter a Nobel laureate, not for his

work in Psychology but for his earlier work in Economics)

define a "problem" as an initial (first) state, plus a "goal"

state: Solving the problem amounts to decreasing the distance

to the goal state until you reach it.
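
Newell & Simon's definition can be sketched in code. The sketch below is a hypothetical toy, not their actual programme: a greedy "hill-climbing" solver that always takes whichever legal move most decreases the distance to the goal state.

```python
# Minimal sketch (hypothetical): a "problem" = initial state + goal state,
# solved by repeatedly picking the move that shrinks the distance to the goal.

def solve(initial, goal, moves, distance):
    """Greedy search: always take the move that lands closest to the goal."""
    state = initial
    path = [state]
    while state != goal:
        # Note: on harder problems this greedy step can get stuck in a
        # local minimum; the toy problem below has none.
        state = min((m(state) for m in moves), key=lambda s: distance(s, goal))
        path.append(state)
    return path

# Toy problem: reach 10 from 0, where the legal moves are "add 1" and "add 3".
moves = [lambda s: s + 1, lambda s: s + 3]
path = solve(0, 10, moves, lambda s, g: abs(g - s))
print(path)  # [0, 3, 6, 9, 10]
```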

Newell & Simon also wrote one of the first chess-playing computer

programmes. Theirs could not beat a Grand Master at chess

(it was written in the late 1950s) but, as I hope you know, the latest

chess-playing computer programme, "Deep Blue", has just beaten Garry Kasparov.

The defeat is of no interest to a cognitive psychologist: Why?

Because the programme used a method that we know people cannot use, and

therefore it casts no light whatsoever on how people play chess:

Deep Blue very quickly tries millions of moves. The human brain

does not have the speed or memory capacity to do that, and if it did,

Chess would not be an interesting game (or perhaps it would change from

being a game of intellectual skill to something closer to a footrace,

like speeded double-solitaire).

It is not only in chess that we know that the strategy of trying out

every possibility in advance cannot be the one we use, and in fact

cannot work even for a computer: The reason is that for most tasks,

trying every combination would take more time than the time that

has elapsed since the Big Bang until the present day:

A branch of mathematics and computer science called "complexity theory"

studies the size of the "search space" for the solution of problems.

There are some problems for which the number of possible outcomes is

not that big, so it makes sense to try all of them, one by one, till

the correct outcome is reached. This would be the case if I asked you

to guess someone's name and I told you it was a male, begins with a D,

and is not an uncommon name. You would be very likely to find it by

taking a name dictionary and trying all the male names beginning with D.

If I instead asked you to guess the (Standard, English) word I am

thinking of you could still do it, though not in a reasonable amount

of time, by reading off every word in the dictionary.
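
This brute-force strategy, trying each candidate in turn until one fits, is trivial to write down. The name list and the secret answer below are made-up stand-ins:

```python
# Brute-force ("generate and test") sketch: try every candidate in turn.
# The name list and the secret are made-up stand-ins.

def brute_force(candidates, is_correct):
    for guesses, candidate in enumerate(candidates, start=1):
        if is_correct(candidate):
            return candidate, guesses   # the answer, plus how many tries it took
    return None, len(candidates)

names = ["Dan", "Dave", "David", "Dennis", "Derek", "Dylan"]
answer, guesses = brute_force(names, lambda n: n == "Dennis")
print(answer, guesses)  # Dennis 4
```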

Now you might think that that method (trying every word in the

dictionary) would not be the most efficient one, and you're right, it

wouldn't be (it would be much more sensible to narrow it down by

playing "20 Questions"). But in principle, you COULD find the word

eventually before you died (or I died).
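
The "20 Questions" idea can be made precise: each well-chosen yes/no question halves the remaining candidates, so a sorted dictionary of N words needs only about log2(N) questions instead of N guesses. A sketch, with a tiny made-up word list:

```python
import math

# "20 Questions" as binary search: each yes/no question halves the
# remaining possibilities. The word list here is a made-up miniature.

def twenty_questions(words, secret):
    """Binary search over an alphabetically sorted word list."""
    lo, hi, questions = 0, len(words) - 1, 0
    while lo < hi:
        mid = (lo + hi) // 2
        questions += 1                  # "Does it come after words[mid]?"
        if secret > words[mid]:
            lo = mid + 1
        else:
            hi = mid
    return words[lo], questions

words = sorted(["acrobat", "botanist", "carry", "chess", "chunk",
                "couch", "model", "potato", "premise", "serve"])
print(twenty_questions(words, "model"))
print(math.ceil(math.log2(500_000)))   # a 500,000-word dictionary needs ~19 questions
```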

But if instead of a word, I thought of a sentence -- say, a sentence

10 words long, then what you would have to do was try every

10-word combination of the dictionary words. Even if you skipped

combinations that were not grammatical, or made no sense, you would

still be faced with a stupendously large number of possible

combinations. This situation -- when the number of possibilities

grows explosively (exponentially) with the size of the problem -- is

not only impossible for people, but it is impossible for computers as

well. (Hard problems like this are called "NP-complete" by complexity

theorists, and it is thought that, except where special short-cuts can

be found, they cannot be solved in any reasonable amount of time: For

an intuition of what's involved, imagine how long chimpanzees with

typewriters would have to type before they came up, by chance, with a

line from a play by Shakespeare!)
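
The arithmetic behind the "longer than the time since the Big Bang" claim is easy to check. The dictionary size and testing speed below are illustrative assumptions, not measured figures:

```python
# Back-of-the-envelope combinatorial explosion (all figures are assumptions).

DICTIONARY = 50_000            # assume a 50,000-word vocabulary
SENTENCE_LEN = 10              # 10-word "sentences"
PER_SECOND = 1_000_000_000     # suppose we can test a billion candidates per second

candidates = DICTIONARY ** SENTENCE_LEN          # ~9.8e46 combinations
seconds = candidates / PER_SECOND
years = seconds / (60 * 60 * 24 * 365)

AGE_OF_UNIVERSE = 1.4e10                         # roughly 14 billion years
print(f"{candidates:.1e} candidate sentences")
print(f"{years / AGE_OF_UNIVERSE:.1e} times the age of the universe")
```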

So if the strategy for solving a problem is combinatorial, trying out

all the possibilities one by one, you can be pretty sure that's not the

way the mind does it. What DOES the mind do?

Mental Models

We all know how we do arithmetic: If I ask you what 7 + 8 is you'll

probably come up with the answer from memory. A child just learning to

count might count on his fingers: "7 8 9 10 11 12 13 14 15," starting

with one of the numbers, 7, and then counting up from there, the second

number (8) of counts, to the solution. That has a bit of the

"try-all-possibilities" flavour to it, but by the time the child can do 3-digit

sums (904 + 213) the sums are no longer done by counting up from

904 213 times. The problem has been "recoded": the sums of all the

single digits have been memorised, and the whole problem has been

restructured the way we add large numbers on paper: summing the

rightmost column first, "carrying" whenever a column's sum exceeds 9, etc.

And it's a good thing that the problem is recoded that way, by

memorising (overlearning) and "rechunking" the problem (handling

columns rather than the whole number, and learning to "carry"), because

there would not be enough time in the day to keep doing sums the

child's way (at least not when you are doing problem sets in grammar

school, having to add many 5-digit numbers per day).
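
The recoded procedure can be written out as code. This is a minimal illustration of the column-and-carry routine, not a claim about the mental machinery:

```python
# Sketch of the "recoded" adding procedure: memorised single-digit sums
# plus column-by-column carrying, instead of counting up one at a time.

def column_add(a: str, b: str) -> str:
    a, b = a.zfill(len(b)), b.zfill(len(a))      # line up the columns
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):  # rightmost column first
        total = int(da) + int(db) + carry         # a memorised single-digit sum
        carry, digit = divmod(total, 10)          # "carry" anything over 9
        digits.append(str(digit))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(column_add("904", "213"))  # 1117
```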

Now suppose you had to do the sums in your head, rather than

on paper: You would have to do something similar to what you

did on paper: you'd have to remember the numbers, line up

their columns, start on the rightmost column, add, carry, etc.,

till you got the whole sum.

What you did in your head would be a "mental model" for an arithmetic

problem. Mental models are possible for other problems too, for

example, logical ones. Here are two "premises" (propositions that we

assume are true):

All the botanists are couch potatoes. (1)

None of the acrobats are couch potatoes. (2)

What can you conclude from those two premises?

The textbook describes one way that you could do the "mental logic" to

solve that problem. The mental model the text uses is little symbol

tokens (A, B, C), lined up, with various kinds of brackets, etc.,

standing for what the sentences say is true about botanists, acrobats,

and couch potatoes.

I happen to use a different model to solve this kind of logical problem

in my head:

For "All the botanists are couch potatoes" I imagine a small circle

which contains all the botanists, and a big circle which contains

all the couch potatoes, and I imagine the small circle completely

contained by the big circle.

For "None of the acrobats are couch potatoes" I imagine a third circle,

the acrobats, located beside, rather than inside the couch-potato

circle.

What conclusion can I draw? Well there are several, but few of them are

interesting. [Recall Sperber and Wilson's model in which the number of

conclusions that you can draw (weighted by how much effort is needed to

draw them) determines how relevant one sentence is to another.]

One conclusion is that no acrobat is a botanist. Another is that

no botanist is an acrobat. Another is that no one is both an

acrobat and a botanist. Another is that no non-couch-potato is a

botanist, etc. etc. but this is all getting very trivial.

You see how it works, though. My circles within circles (which are

called "Euler circles," closely related to "Venn diagrams") produce

the same solution as Johnson-Laird's tokens. I leave it to you to work

it out using his method. The conclusions are the same.
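
One way to make the circle model concrete is with sets. The universe below is a toy of my own invention (the names are arbitrary), not Johnson-Laird's notation:

```python
# The Euler-circle model made concrete with sets (a made-up toy universe).

botanists = {"Ann", "Bob"}
couch_potatoes = {"Ann", "Bob", "Cal"}      # small circle inside the big one
acrobats = {"Dee", "Eve"}                   # third circle, off to the side

# The two premises hold in this model:
assert botanists <= couch_potatoes          # (1) all botanists are couch potatoes
assert acrobats.isdisjoint(couch_potatoes)  # (2) no acrobat is a couch potato

# And the circles deliver the conclusion:
print(acrobats & botanists)                 # set(): no acrobat is a botanist
```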

When you have to solve problems like these, you can do it by using

mental models. If instead of trying to draw conclusions, you are

supposed to test whether or not a conclusion is true, you can also do

it with mental models: If you had been given the same two sentences and

asked whether or not "All the couch potatoes are either not acrobats

or not botanists" is true, your mental model would tell you that too,

once you reflected on it. (It is: since no acrobat is a couch potato,

every couch potato is "not an acrobat," so the disjunction holds.)

It has been found that easy logical problems are the ones that have few

mental models, that is, few possibilities, and hard ones have many

mental models. Often it is easier to show that a conclusion is false

than it is to show that it's true. So one important kind of mental

modeling is the construction of a counterexample: This is a model

in which the premises are true but the conclusion is false.
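
The counterexample idea can even be mechanised. In this sketch, a toy encoding of my own (not Johnson-Laird's), we enumerate every small model of the botanist/acrobat/couch-potato premises and hunt for one where the premises hold but a candidate conclusion fails:

```python
from itertools import product

# Counterexample search: enumerate every tiny "mental model" (each person
# is or is not a botanist, acrobat, couch potato) and look for a model
# where the premises are true but a candidate conclusion is false.

def find_counterexample(conclusion, n_people=3):
    # each person is a triple: (botanist?, acrobat?, couch_potato?)
    for world in product(product([False, True], repeat=3), repeat=n_people):
        premise1 = all(c for b, a, c in world if b)      # all botanists are couch potatoes
        premise2 = not any(c for b, a, c in world if a)  # no acrobat is a couch potato
        if premise1 and premise2 and not all(conclusion(b, a, c) for b, a, c in world):
            return world   # a counterexample: premises true, conclusion false
    return None

# "No acrobat is a botanist" survives the search:
print(find_counterexample(lambda b, a, c: not (a and b)))        # None

# "All couch potatoes are botanists" does not:
print(find_counterexample(lambda b, a, c: (not c) or b) is not None)  # True
```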

Note that we are very happy to find counterexamples to what our

adversaries claim is true, but we are not that anxious to find

counterexamples to what we ourselves claim is true. To a certain

extent this makes sense: We should not be too eager to abandon our

premises. But we need to have some flexibility, otherwise we just end up

clinging to old, wrong ideas.

Mental Logic

There is an alternative to solving logical problems with mental models:

You can do it by using symbols and rules. For example, the famous

"modus ponens" rule is that whenever you have two premises

of the form:

X is true (1)

and

If X is true then Y is true (2)

then you can always draw the conclusion:

Y is true (3)

This is true no matter what X and Y mean; modus ponens is a general

rule of logical deduction.

Let's call this theory the "rule-based" theory of logical deduction.

It is the rival to the mental-models theory. The truth seems

to be that we use both methods.
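
Mechanical application of modus ponens is easy to sketch. This toy "forward-chaining" loop (the facts and rules are made up) keeps applying the rule until nothing new follows:

```python
# A minimal rule-based ("mental logic") sketch: apply modus ponens
# repeatedly until no new conclusions appear. Facts and rules are made up.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in facts and consequent not in facts:
                facts.add(consequent)   # modus ponens: from X and X -> Y, conclude Y
                changed = True
    return facts

facts = {"it is raining"}
rules = [("it is raining", "the ground is wet"),
         ("the ground is wet", "the grass is slippery")]
print(sorted(forward_chain(facts, rules)))
```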

So far we have been talking about arithmetic and logical problems,

and what we have been doing with them is deduction: finding

out what conclusions follow from them, and whether or not certain

conclusions are true.

Logical deduction could be done by a computer -- whether it uses models

or symbols does not matter: We know that these things can be computed.

But what about problems whose solution we cannot derive with mental

models or logical deduction? What about new, creative ideas?

Expertise

In problem-solving, the strategies of experts seem to be different from

those of novices: The expert will usually have recoded the material

in the problem domain into bigger "chunks". This, for example, is how it

is thought that chess masters can remember moves and find strategies

without having to try every possibility, as Deep Blue does.

How does one progress from being a novice to being an expert? It

seems that these bigger chunks (as in the rechunking of digits

in digit span tests, allowing you to remember more than Miller's

limit of 7 +/- 2 items) gradually become automatic, freeing your attention to

consider bigger and bigger "chunks." The same is probably true in

sports, say, tennis, where you first have to be told and to concentrate

on every little movement in a serve, and eventually you can do that all

automatically and you can think more about top-spin strategies, and so

on.

No need to worry about ACT* or Production Systems in this chapter;

just note that they are Symbol Systems (hence susceptible to the Frame

Problem).

http://www.cogsci.soton.ac.uk/psyc-bin/newpsy?frame-problem.1

Creativity and Analogy

Nobody has a formula for creativity: Indeed, if there were a formula for

it, we would not call it creativity.

But we have some idea of the kind of process going on in the head that

sometimes leads to creativity: In mathematics, for example,

certain kinds of mathematicians say they "see" the solution to a problem

by visualising it spatially, and "noticing" that a hard problem

in one domain is analogous to an easy problem in another domain.

So all that's needed is a way to map one domain into the other, based on

this analogy, and you get the easy solution in the hard domain for free.

For more on creativity, see:

http://cogsci.soton.ac.uk/harnad/Papers/Harnad/harnad.creativity.html


This archive was generated by hypermail 2b30: Tue Feb 13 2001 - 16:23:52 GMT