**Next message:** EMMA FLETCHER: "Re: Thinking Probabilistically"
**Previous message:** Stevan Harnad: "Re: Libet: Mental Timing"
**Next in thread:** EMMA FLETCHER: "Re: Thinking Probabilistically"
**Maybe reply:** EMMA FLETCHER: "Re: Thinking Probabilistically"
**Maybe reply:** Stevan Harnad: "Re: Thinking Probabilistically"
**Messages sorted by:** [ date ] [ thread ] [ subject ] [ author ]

Here's a summary of the last seminar. Take a look at the Koehler papers,

and the Tversky & Kahneman if you're especially interested!

We all have intuitions about probability. Some of them are right and

some of them are wrong. The "image" of probability is a big urn with

white and black marbles, say, half and half. You reach in and take out,

say, ten marbles without looking, count how many are black and how many

are white, then put them all back in. (That's called "sampling with

replacement.")

Now the question: If, as with the 6-button sandwich machine we talked

about a few weeks ago, your daily lunch depended on your saying exactly

how many white marbles there were out of the ten each time, how do you

make sure you get lunch the most often, and how often would that be?

Answer: Always guess 5/10, and that way you will eat the most often,

but I made a mistake in class: It will not be, on average, every second

day. It will be less often than that. I'd need a table of the binomial

distribution to find out exactly how often, but here's why it's less

than half the time: If you were taking out only one marble every time,

and predicting black or white, then it would be just like tossing a

coin, with only two possibilities, and you would indeed eat, on

average, half the time. But with the urn, on any one sample of 10 at a

time, the possibilities are of course more than just two: it could be

0/10, 1/10, 2/10, ... 9/10, 10/10. The most frequent case will still be

5/10, but it will happen less than half the time. So 5/10 is still

your best bet. The point of the example is that, even where an outcome

is random, there are ways to bet that maximise your chances.
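
The urn example is easy to check by computing the binomial distribution for ten draws from a half/half urn (a minimal sketch in Python; the urn proportions are the ones assumed above):

```python
from math import comb

# Probability of drawing exactly k white marbles out of n,
# sampling with replacement from a half-white, half-black urn.
def p_exactly(k, n=10, p_white=0.5):
    return comb(n, k) * p_white**k * (1 - p_white)**(n - k)

dist = {k: p_exactly(k) for k in range(11)}
best = max(dist, key=dist.get)
print(best, round(dist[best], 4))  # -> 5 0.2461
```

So always guessing 5/10 is indeed the best bet, but it pays off only about 24.6% of the time (252/1024), well under every second day.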

Then I think I gave the example of the obstetrician who makes money-back

guarantees to predict the gender of your baby before it's born (and

before the era of in utero imaging): His best strategy is to predict all one

gender -- and in fact it's better to predict boys only, because slightly

more than 50% of births are boys (but they don't survive quite as often

as girls). Even with the money-back guarantee, he'd make money half the

time -- and even a bit more.

That little bit more is what gives the gambling casino the edge over

clients -- except card counters (people who have a system for keeping

track of the cards that have already been dealt: that's like sampling

from the urn WITHOUT replacement, so you can predict better than chance

what's left, based on what's already been removed). And that's why they

get barred from casinos when they are discovered.

I also asked under what conditions it was rational to buy a lottery

ticket DAILY if the prize was 100 pounds and there were 100 tickets each

time: Answer: if they cost a bit less than a pound each. Then in the

long run you'd always come out ahead. Otherwise, you'd just break even

or lose. But of course the Camelot lottery is not like that!

I also mentioned that casinos do not allow you to keep doubling your bets, for

obvious reasons: Suppose you're betting 100 pounds on a coin toss. You

lose, but you're allowed to bet 200 on the next coin toss. You lose

again (so far you lost 300) so you double and bet 400 on the next one.

You win! (Eventually you would have won, because of the odds.) So you

pocket the 100 you made and start again. Patiently, you'd eventually

keep adding the amount of your initial bet to your pocket, if you had

enough money to keep doubling (and the Casino allowed you) in between.

But they don't allow you...

Why does doubling work? Because in the long run the averages always

prevail, if the coin is fair: a single win recoups all your previous

losses plus the original stake, and a win must eventually come.
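
The doubling scheme can be simulated directly; each round ends exactly one initial stake ahead, assuming (as in the example) an unlimited bankroll and no house limit:

```python
import random

def martingale_round(initial_bet, rng):
    """Bet on a fair coin, doubling the stake after every loss;
    stop at the first win. Returns the net profit for the round."""
    stake, lost_so_far = initial_bet, 0
    while True:
        if rng.random() < 0.5:          # win
            return stake - lost_so_far  # one win recoups all losses + the stake
        lost_so_far += stake
        stake *= 2                      # double after a loss

rng = random.Random(1)
profits = [martingale_round(100, rng) for _ in range(1000)]
print(all(p == 100 for p in profits))  # -> True: every round nets the initial bet
```

After k losses you have lost 100*(2^k - 1) and your next stake is 100*2^k, so the first win always leaves you exactly 100 ahead; which is precisely why the casino steps in.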

So I then asked: Supposing there is a FAIR coin and before your eyes you

see it tossed 19 times and each time it comes out heads: You bet now,

what do you bet on? More than half the seminar said heads, though

Nik changed his mind when I reminded him the coin was fair. You would of

course lose money if you were ready to bet any more than the usual 50/50

that the next one would be heads. "It doesn't matter" is the right

answer, because there are no "winning streaks" that you can grab hold

of. Some people would be ready to bet more on tails, because they think

the coin is due for a tails after all those heads, since it has to come

out even in the end. But the coin has no memory and doesn't care about

how things come out in the end! There are lots of coins, tossed lots of

times; sometimes they'll come out 19 heads in a row by chance; sometimes

even more! It's just like occasionally pulling out more or less than

exactly 5 white marbles from a half/half urn: It happens, but you can't

bank on it; you can only bank on the average.

Then we got to the case of the disease (from Tversky, A. & Kahneman,

D. (1982) Evidential impact of base rates. In: Judgment under

uncertainty: Heuristics and biases (pp. 153-160), eds.

D. Kahneman, P. Slovic, & A. Tversky. Cambridge University Press).

You have a medical symptom; you go to the doctor. He says it's serious,

it's a bacteria, it's fatal if untreated, but if treated, it's curable.

However, the bacteria comes in two strains, a common strain (85% of the

time it's that one), called A and a rarer strain (15% of the time it's that

one), called B. You need to decide which one you want to be treated for,

because the treatment for one does not work for the other and vice

versa, and you only have enough time to treat for one. Obviously you'd

want to choose A, though with a certain amount of nervousness.

But wait, says the doctor, there IS another test I can do, to see

whether you've got the A or B kind. So you take the test, and the result

says you've got B, the rare kind. The test itself is 80% reliable:

8 times out of 10, it gets the strain right.

Question: Which one do you ask to be treated for? Most people say B,

because that was MY test, it was about ME, whereas those other

statistics, about how often it tends to be A or B in general, are not

about me.

But the fact is that those base rates about how often it tends to be A

and B ARE about you, and if you did the calculations, you would find

that even if the test said you had B, your chances are still better to

be treated for A (so probably it was a bad idea to take the test at all,

since it could only make you more nervous).

The correct calculation is based on Bayes' rule, for calculating

conditional probabilities (the probability that you will have A given

that the test says you have B). This calculation is complicated, and

most people simply ignore the base rate -- for irrational reasons, so it

is said; but see:

http://cogsci.soton.ac.uk/~bbs/Archive/bbs.koehler.html

or

ftp://cogsci.soton.ac.uk/pub/harnad/Psycoloquy/1993.volume.4/psyc.93.4.49.base-rate.1.koehler
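
With the numbers from the example (85% strain A, 15% strain B, an 80%-reliable test), Bayes' rule works out as follows (a sketch; it assumes, as the example implies, that the test is 80% accurate for both strains):

```python
# Base rates and test reliability from the example above.
p_A, p_B = 0.85, 0.15
test_accuracy = 0.80  # assumed the same for both strains

# P(test says B) = P(says B | B)P(B) + P(says B | A)P(A)
p_says_B = test_accuracy * p_B + (1 - test_accuracy) * p_A

# Bayes' rule: P(actually B | test says B)
p_B_given_says_B = test_accuracy * p_B / p_says_B
print(round(p_B_given_says_B, 3))  # -> 0.414
```

Even after a "B" result, the chance you actually have B is only about 41%, so treating for A remains the better bet; the base rate dominates the test.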

In the case of the gambler's fallacy (thinking the coin has some sort of

memory that makes things average out), the error is in the intuition

that you can defy the averages based on what you know of the history of

the coin. In the case of the Kahneman & Tversky baserate fallacy the

error is ignoring the population baserates. Now for the well-known Monty

Hall Paradox, which seems to go exactly against the CORRECT intuition

that the coin has no memory, and that it makes no difference whether you

bet heads or tails after a series of 19 heads:

You are in a quiz show. There are three curtains. One conceals a big

prize. You can choose any of the three curtains. (To stick to our

principle that none of this makes sense on a one-time basis, you should

imagine yourself doing this every day, and the prize is lunch.) If the

lunch is hidden randomly, you would eat, on average, 1/3 of the time.

Now the announcer gives you another chance (as with the test for the

disease): He knows where the prize is, so after you have made your

choice, he opens one of the other two curtains, always one where the

prize ISN'T. Now, the question is: Given a second chance, do you (1) stick

with your choice, (2) switch to the other unopened curtain, or (3) it

doesn't matter?

Most people say, reasoning as they do with the coin, that it doesn't

matter, but it does! For if you stick to your original choice, it is

clear that in the long run you are "married" to a 1/3 chance. But

opening one curtain has now changed the odds to 1/2 -- IF you choose

randomly between the remaining two. That's already better than 1/3.

But you can do even better if you switch, for the one you chose is

"married" to 1/3, the one that was revealed is out of the running, so

the third curtain actually has the remaining 2/3 chance of being right!

Sounds like magic? No, you have to remember that, doing this over and

over, you are getting INFORMATION by being told where the lunch ISN'T.

You must make use of that information or be condemned to the ignorance

that the 1/3 guess represents.
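
The 1/3-versus-2/3 claim is easy to verify by playing the game many times in simulation (a minimal sketch of the setup described above):

```python
import random

def play(switch, rng):
    """One round of the curtain game; returns True if you win the lunch."""
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    choice = rng.choice(doors)
    # The announcer opens a curtain that is neither your choice nor the prize.
    opened = rng.choice([d for d in doors if d != choice and d != prize])
    if switch:
        choice = next(d for d in doors if d != choice and d != opened)
    return choice == prize

rng = random.Random(42)
n = 100_000
stay = sum(play(False, rng) for _ in range(n)) / n
swap = sum(play(True, rng) for _ in range(n)) / n
print(stay, swap)  # sticking wins about 1/3 of the time, switching about 2/3
```

Sticking is "married" to the original 1/3; switching collects the remaining 2/3, exactly as argued above.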

Then came Donald MacKay's "Brain-Reading" Machine about free will and

determinism. I had forgotten to say that the famous mathematician

Laplace, thinking he was being completely logical about cause and

effect and determinism, and the predictive laws of physics, said that

if he knew the position and momentum (speed/mass/direction) of every

particle in the universe right now, then he could predict everything

that would ever happen. He was wrong, because the interactions are too

complex to be calculated exactly, except for two-way interactions, so

his predictions could only be statistical, like a weather report.

But never mind that. Suppose someone had a complete "scan" of everything

going on in your brain right now, and using that, could correctly

predict everything you would do no matter what happened. (Note: Because

of Laplace, he cannot predict what happens in the rest of the world, but

he CAN predict exactly what you would do for anything that might

happen.)

MacKay, a believer, said you would still have free will if there were

such a machine, because, if told its predictions, you could always

decide to do otherwise (Oh yeah? You predict I'll say yes? Well then I

say "no"). Trouble is, that if the premise is true, that the machine can

predict it all, it can also predict what you will do if told its

prediction. That's a prediction too, and it's always one step ahead of

you. If you COULD do anything that machine didn't predict, that would

simply show it couldn't predict, whereas the ASSUMPTION here is that it

can predict it all, with 100% accuracy.

No problem here: MacKay is simply wrong in thinking there would be any

room left for your free will. But then came the last puzzle, where there

is no right or wrong answer:

Again, a machine that can read your brain state so completely that it

can correctly predict anything you will do under any conditions. It has

taken its reading, and based on its results has done the following:

It has placed 100 pounds underneath a transparent bowl. Beside it,

under another bowl that you cannot see through, it has placed either

1000 pounds or nothing, based on the following rule, based in turn on

what it had (correctly) predicted you would do: If it correctly

predicted that you would be "greedy" and take what was under BOTH bowls,

it put nothing under the opaque bowl. If it predicted you would be

"temperate," and would voluntarily forego the 100 pounds that you could

see, and take only what was under the opaque bowl, then it put 1000

pounds under the opaque bowl.

But that is all history. What's done has been done. You now face the

bowls and can do what you like: What do you do, and why?

Recall that the premise is that the machine has correctly read your mind

and can predict everything you do EXACTLY (not statistically, which

would make all this much easier, since you could do it over and over):

Most people say they would go for both bowls because it would be

superstitious to give up the 100 pounds given that whatever had happened

had already happened. This is the choice that rejects backwards

causation in time as absurd.

Some people say they forego the 100 pounds because their faith in the

power of truth -- on the assumption that the premise that the machine

never errs is true -- is "stronger" than their rejection of backwards

causation. Either way, it's a bit like choosing who would win in a

supercontest between an irresistible force and an immovable object!

There is no correct answer.

But what if the game were iterated, i.e., if the machine's predictive

powers were only statistical, and you could play the game every day?

What would you do then, and why?

This is related to the iterated prisoner's dilemma, which I will discuss

next time...

Chrs, Stevan


This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:24:15 GMT