> From: jlh597@soton.ac.uk
I have deleted most of this message, Jo, because most of it was exactly
right! I am only commenting here on the controversial parts:
> Many scientists, including psychologists, believe that Science can give
> us 100% certainty.
No, only maths and logic can give 100% certainty: The theorems that
mathematicians prove are not just true but NECESSARILY true (i.e., on pain of
contradiction, which means that for them to be untrue, anything and
everything else would have to be true and false at the same time, which is
impossible). Impossibility is the flip side of necessity -- so the theorems
of maths and logic are necessarily true; true with certainty.
But apart from maths/logic, other truths are not NECESSARILY true. At
best, they just happen to be true; we cannot be 100% certain that they
are true (though it's almost 100% certain that they are). It is almost
certainly true, for example, that F=ma and E=mc^2 (Newton's and
Einstein's equations, respectively), but it is not
certain that they are true; nor are they necessarily true, even if they
really are true. Only 2+2=4 is necessarily true; it is on the strength of
a mathematical proof that we can be certain about that.
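(To make the contrast vivid: the certainty of 2+2=4 can even be checked
mechanically. In the Lean proof assistant -- my illustration, not part of the
original message -- the statement is true by sheer computation:)

```lean
-- Both sides reduce to the same numeral under the definition of
-- addition, so reflexivity (`rfl`) alone closes the proof.
example : 2 + 2 = 4 := rfl
```

No experiment or evidence is involved; denying it would be a contradiction.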
Don't worry too much about the difference between these two kinds of
truths, but try not to use "prove" or "proof" outside of mathematics. Use
the words "supported by the evidence" (or "by the data"), because
that's how it is with scientific truths. The well-confirmed ones,
supported time and again by all the evidence, have an extremely high
probability of being true, but to be certain that they are true would
require a proof that they are necessarily true, and there is no such
proof.
(Needless to say, the language of "proof" belongs even less in the law
court than in the scientific lab, yet lawyers and judges and policemen
speak of "proving" things all the time. What they mean is that the
supporting evidence is strong, not that it is proof. The legal idea of
something that is shown by the evidence to be true "beyond a reasonable
doubt" is a good one, because that's all you can ever have outside
maths anyway: evidence that so overwhelmingly supports a theory that it
is no longer reasonable to doubt it.)
(This is a philosophical issue, but if this difference between necessary
and merely probable truth has caught your imagination, consider that
everyone who buys a lottery ticket can be sure beyond reasonable doubt
that they will NOT win, yet someone always does win! So improbable things
DO happen now and again, and that is a "risk" that science faces too.)
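(The arithmetic behind the lottery point can be sketched in a few lines of
Python. The odds and ticket count here are my own illustrative figures --
roughly those of a 6-from-49 lottery -- not numbers from the message:)

```python
from math import comb

# Odds that one 6-from-49 ticket wins the jackpot.
p_win = 1 / comb(49, 6)  # about 1 in 13,983,816

# Any single buyer can be sure "beyond reasonable doubt" of losing...
print(f"P(your ticket wins) = {p_win:.10f}")

# ...yet across millions of tickets, a winner is quite likely.
tickets_sold = 20_000_000
p_someone_wins = 1 - (1 - p_win) ** tickets_sold
print(f"P(at least one winner among {tickets_sold:,} tickets) "
      f"= {p_someone_wins:.3f}")
```

Each buyer faces a vanishingly small chance, yet the chance that SOMEONE wins
is high: improbable things happen when there are enough chances for them to.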
> Scientific Method... Francis Bacon (16th Century)...
> proposed the idea of induction where one gathers the information
> required in research and then devises a theory to explain the data.
As I said in response to Sam, this is not really a method, because a
method can be used to gather data but not to come up with a theory!
Where theories come from, and how, is part of the psychologist's
explanatory burden, and we will be talking about creativity later in the
year, though very little is yet known about it. Have a look at
"Creativity: Method or Magic":
http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad.creativity.html
> However, in Psychology, inductive methods are very rarely used but this
> tends to be the case for Science in general.
Well, your source is a bit garbled here: There is no inductive "method"
other than observe, theorise, test, revise, theorise, test, etc.
> Karl Popper (1972) believes... researchers try to refute 'old'
> theories and do not attempt to prove them correct
Popper, a philosopher of science, stressed that because scientific
theories cannot be "proved" to be true with certainty, but can (at
best) only be shown to be true beyond a reasonable doubt on the basis
of their supporting evidence, the only certainty is in DISproving
scientific theories, by testing them and finding that the evidence goes
against them rather than supporting them.
So, according to Popper, what scientists are really doing is observing,
theorising, and then doing tests that would show that their theories
were WRONG. Each time a piece of data SUPPORTS your theory, that just
makes it a bit more likely that your theory is true; but when the
data CONTRADICT your theory, then that makes it certain that your
theory is false!
Popper's philosophy of science is called "falsificationism," according
to which scientists formulate theories that are TESTABLE. The theories
are then tested in an attempt to show them to be false. If the evidence
supports the theory, then the theory is retained for the time being; if
the evidence contradicts the theory, the theory must be discarded or
modified.
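(The cycle just described can be caricatured in a few lines of code -- my
schematic toy, not Popper's own formulation, using the classic swan example
rather than anything from the message:)

```python
# A "theory" is retained until an observation contradicts it,
# at which point it must be discarded or modified.
observations = ["white swan", "white swan", "black swan", "white swan"]

theory = "all swans are white"
for obs in observations:
    if theory == "all swans are white" and obs == "black swan":
        # The data CONTRADICT the theory: revise it.
        theory = "most swans are white"
    # Supporting observations merely let the theory survive for now.
print(theory)
```

Note the asymmetry: the white swans never PROVE the theory, but one black
swan is enough to overturn it.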
Ordinary philosophy of science is closer to "verificationism." Here too,
scientists formulate theories that are testable, but their objective is
to find evidence to support their theory rather than to reject it.
Popper's falsificationism is controversial. On the one hand, many
scientists have accepted it as a fair description of what they are
trying to do; on the other hand, everyone knows that theorists don't
try to show that their theory is false, they try to show that it is
true (and, usually, that a rival theory is false).
Probably most scientists agree with Popper only about the requirement
that scientific theories should be testable, so that the evidence can
go either way, either confirming or disconfirming the theory (although
scientists are not necessarily reliable authorities about what they are
really doing when they do science). The rest of "falsificationism" seems
more like a cult that has formed around Karl Popper.
If any of you has a deeper interest in these methodological questions,
think about this:
The American Psychological Association (APA) is now considering whether
to change the rules about how to analyse psychological data
statistically. You have all heard about "significance tests" by now.
These are statistical tests that calculate how likely it is that your
findings are real, rather than just a random accident. The 5% and 1%
significance levels you have been hearing about in stats refer to how
reasonable it would be to doubt your findings: if they are shown to be
significant at the 5% level, that means they could have happened by
chance only 5% of the time: the "certainty" of your theory is 95%. If
you feel it is reasonable to doubt 95%, then you can try to gather more
data to see whether you can reduce the probability that the outcome was
an accident to 1% or even to .1%, making your theory 99% or 99.9%
certain. (As you can see, you never get an outcome that is 100% certain
-- i.e., with a probability of 0 that it happened by chance.)
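(Here is what such a significance test looks like concretely, with invented
numbers: say the right hand did better on 63 of 100 trials, and the null
hypothesis is that the hands are equal, i.e. each should win about half the
time. A sketch using only the Python standard library:)

```python
from math import comb

n, k = 100, 63  # 100 trials; right hand better on 63 (made-up data)

# One-sided p-value: the probability, under the 50/50 null hypothesis,
# of getting 63 or more right-hand wins by chance alone.
p = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(f"p = {p:.4f}")  # well below the 5% level
```

The p that comes out is below .05 (indeed below .01), so the result is
"significant at the 5% level" -- but never at the 0% level.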
Now that's fine for positive results, where your data support your
theory. But what about negative results, where your data contradict
("falsify") your theory? (Your theory predicted that the right hand
would be better at a certain task, but it turned out that the left hand
was.) Does that mean your theory is NECESSARILY false, as Popper
suggested? No, for a negative result is still based on probability: It
too may have been a chance event, never (or rather extremely rarely)
likely to happen again.
This means that, contrary to what Popper has suggested, it is not true
that positive results are only probable whereas negative results are
certain. Any statistical outcome -- positive or negative -- has only a
probability of being true (and of occurring again, if the test is
repeated), and that probability is never 100%. Neither confirmation nor
disconfirmation leads to certainty; both are just matters of probability.
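(A small simulation makes the point -- my sketch, with invented numbers:
even when the right hand really is better on average, a small sample will
occasionally come out the wrong way, seeming to "falsify" a true theory.)

```python
import random

random.seed(1)  # fixed seed so the simulation is repeatable

# Suppose the right hand TRULY wins each trial with probability 0.6.
P_RIGHT_BETTER = 0.6
TRIALS_PER_EXPERIMENT = 10
EXPERIMENTS = 10_000

# Count experiments whose data point the WRONG way (left hand wins a
# majority) -- samples that would appear to refute the true theory.
reversed_outcomes = 0
for _ in range(EXPERIMENTS):
    right_wins = sum(random.random() < P_RIGHT_BETTER
                     for _ in range(TRIALS_PER_EXPERIMENT))
    if right_wins < TRIALS_PER_EXPERIMENT - right_wins:
        reversed_outcomes += 1

print(f"Experiments contradicting the true theory: "
      f"{reversed_outcomes / EXPERIMENTS:.1%}")
```

Roughly a sixth of these small experiments "falsify" a theory that is in
fact true -- so a disconfirming result, too, is only probable, not certain.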
Having considered positive and negative results, what about NULL
results: Suppose your theory predicts that there should be no
difference between the left and right hand on the task, and you do the
experiment, and there is indeed no statistically significant difference
between the hands: Does that confirm your theory? And if so, how
certain does it make your theory? With ordinary significance tests,
there is no answer, but with other kinds of stats, more can be said
about the probabilities of theories and their rivals.
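(The "other kinds of stats" here are, I take it, Bayesian methods. A minimal
sketch of the idea, with invented numbers: compare how well the null theory
("no difference", p = 0.5) and a rival ("right hand better", p = 0.6)
each predict the observed data, via their likelihood ratio:)

```python
from math import comb

n, k = 100, 52  # 100 trials, 52 right-hand wins (made-up, near-null data)

def binom_lik(p):
    """Likelihood of k successes in n trials at success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Likelihood ratio (a simple Bayes factor for two point hypotheses):
# how many times better the null explains the data than the rival does.
bf = binom_lik(0.5) / binom_lik(0.6)
print(f"Data favour the null over the rival by a factor of {bf:.1f}")
```

Unlike a bare significance test, this assigns the null theory itself a
degree of support relative to its rival -- still a probability, never 100%.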
For more about Popper, see:
http://www.eeng.dcu.ie/~tkpw/
For more on the APA task force on significance testing, see
http://www.apa.org/science/tfsi.html
and
http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.chow.html
This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:24:19 GMT