Re: The Economist: Publish and perish

From: Stevan Harnad <harnad_at_ecs.soton.ac.uk>
Date: Tue, 19 Nov 2002 02:26:15 +0000

On Mon, 18 Nov 2002, Imre Simon wrote:

> The Bogdanov Affair, by John Baez
> http://math.ucr.edu/home/baez/bogdanov.html
>
> I wonder: is this affair related to the discussion between Andrew
> and Stevan on
>
> Peer Review and Self-Selected Vetting: Supplement or Substitute?

The Bogdanovs' papers are not a hoax; they are just quackery. The journals
in which they appeared are rather low in the journal quality hierarchy;
I suspect something similar is true of the departments that granted
them their doctoral degrees.

Nothing follows from any of this. It is not evidence that there
is something wrong with peer review, nor that there is a better
alternative. It only shows (yet again) that human judgment at every
level is fallible and will occasionally go wrong, but that science is
self-corrective: if nonsense slips past peer review, the scientific
community will catch it as soon as anyone tries to build anything upon it.

In this case, I suspect that if the press had not gotten wind of it, it
would never have been caught at all, and that's not a problem either!
It's nonsense, and it leads nowhere. So no one would ever have tried to
build anything on it.

Apart from the inevitable proportion of sheer nonsense that the Gaussian
distribution here (as everywhere) produces, it is also a fact that
most of the lower-quality work appearing in obscure journals is never
looked at by anyone again, whether it is right or wrong. So it
does no harm, apart from weighing down bookshelves (soon to be replaced
by digital files that weigh much less).

Would we be better off with more rigorous refereeing for the lower-level
journals? I doubt it. There is a Gaussian distribution not only among
authors and papers and journals, but also among referees (every paper and
"peer" eventually finds its own level). There aren't enough of the best
experts to have them review all the papers, good and bad; that would
not be the best use of their expertise and time. So there is triage. The
literature grades all the way down to what is virtually a vanity press
at the bottom, which no one reads.

Get rid of the low-level journals and the low-level work that no one
reads or uses, you say? But to do that, you would have to scale back the
sheer volume of research to what it was 100 years ago, and then you'd
be losing a lot of good quality along with the bad. Because, you see,
there is no way to filter a Gaussian distribution without dealing with
both of its tails.

This is just newspaper fodder. It makes good copy (for a while) to depict
scientists as producing nonsense and being unable to discern it from sense;
it makes us feel better about the fact that we laymen can't tell the
difference either, and that it all sounds like nonsense to us. And a
good conspiracy/hoax theory, along with implications of wasted taxpayers'
research-supporting dollars, always makes good copy too.

But to end on a more upbeat note: Errors occur in both directions, and
that is one of the other uses of the lower-level journals. They are
not just waste-baskets, they are also safety nets. If a valid finding is
erroneously rejected by the journals at the level at which it should have
appeared, it can appear at a lower level. Yes, it is more likely to be
ignored, but it is not altogether suppressed either. Its author might go
on to write another paper; it might eventually be discovered by others
later. And then proper priority can be assigned too. High on the long
list of things that recommend open online access is the fact that buried
treasures are far more likely to be discovered in a digital literature,
where one need not subscribe to and browse the contents of the
mid-atlantic journal of obscure results to have them occasionally
crop up in an OAI full-text Boolean search...

Stevan Harnad
Received on Tue Nov 19 2002 - 02:26:15 GMT
