SUMMARY: (1) Peer review just means the assessment of research by qualified experts.
(2) Peer review, like all human judgment, is fallible, and susceptible to error and abuse.
(3) Funding and publishing without any assessment is not a solution:
(3a) Not everything can be funded, and even funded projects first need some expert advice in their design.
(3b) Everything does get published, eventually, but there is a hierarchy of journals with a corresponding hierarchy of peer-review quality standards; their names and track-records serve as an essential guide for users about what they can risk trying to read, use and build upon.
(4) So far, nothing as good as or better than peer review (i.e., qualified experts vetting the work of their fellow-experts) has been found, tested and demonstrated.
(5) Peer review's efficiency can be improved in the online era.
(6) Metrics are not a substitute for peer review; they are a supplement to it.
"Help Wanted: A pall of gloom lies over the vital system of peer review. But the British Academy has some bright ideas". The Guardian, Jessica Shepherd reports, Tuesday September 4, 2007
The 4 September Guardian article on peer review (on the British Academy Report, to be published tomorrow, 5 September) seems to be a good one. The only thing it lacks is some conclusions (which journalists are often reluctant to take the responsibility of drawing):
(1) Peer review just means the assessment of research by qualified experts. (In the case of research proposals, it is assessment for fundability, and in the case of research reports, it is assessment for publishability.)
(2) Yes, peer review, like all human judgment, is fallible, and susceptible to error and abuse.
(3) Funding and publishing without any assessment is not a solution:
(3a) Not everything can be funded (there aren't enough funds), and even funded projects first need some expert advice in their design.
(3b) And everything does get published, eventually, but there is a hierarchy of journals, with a corresponding hierarchy of peer-review quality standards. Their names and track-records are essential for users, guiding them on what they can take the risk of trying to read, use and build upon. (There is not enough time to read everything, and it's too risky to try to build on just anything that purports to have been found -- and even accepted papers first need some expert advice in their revision.)
(4) So far, nothing as good as or better than peer review (i.e., qualified experts vetting the work of their fellow-experts) has been found, tested and demonstrated. So peer review remains the only straw afloat, if the alternative is not to be tossing a coin for funding, and publishing everything on a par.
(5) Peer review can be improved. The weak link is always the editor (or Board of Editors), who chooses the reviewers and to whom the reviewers and authors are answerable; and the Funding Officer(s) or committee choosing the reviewers for proposals and deciding how to act on the basis of the reviews. There are many possibilities for experimenting with ways to make this meta-review component more accurate, equitable, answerable, and efficient, especially now that we are in the online era.
(6) Metrics are not a substitute for peer review; they are a supplement to it.
In the case of the UK, with its Dual Support System of (i) prospective funding of individual competitive proposals (RCUK) and (ii) retrospective top-sliced funding of entire university departments, based on their recent past research performance (RAE), metrics can help inform and guide funding officers, committees, editors, Boards and reviewers. And in the case of the RAE in particular, they can shoulder much of the former peer-review burden: the RAE, being a retrospective rather than a prospective exercise, can benefit from the prior publication peer review that the journals have already done for the submissions, rank the outcomes with metrics, and then add expert judgment only afterward, as a way of checking and fine-tuning the metric rankings. It would also be a very good idea for funders and universities to recognize peer-review performance explicitly as a metric, both for the reviewers and for the researchers being reviewed.
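The "checking and fine-tuning" step can be pictured concretely. What follows is only a minimal illustrative sketch in Python, using entirely hypothetical department names, metric scores and peer ranks (not actual RAE data or the RAE's own procedure): it ranks departments by a metric, compares that ranking with a peer-review ranking via a Spearman rank correlation, and flags the largest divergences as the natural candidates for closer expert scrutiny.

    # Illustrative sketch only: hypothetical departments, metric scores and
    # peer-review ranks; not actual RAE data or the RAE's own procedure.
    from scipy.stats import spearmanr

    metric_scores = {"DeptA": 14.2, "DeptB": 9.7, "DeptC": 11.5, "DeptD": 6.1}  # e.g. citations per paper
    peer_ranks = {"DeptA": 1, "DeptB": 3, "DeptC": 2, "DeptD": 4}               # 1 = ranked best by panel

    depts = sorted(metric_scores)

    # Convert metric scores into a ranking (1 = highest-scoring department).
    by_metric = sorted(depts, key=lambda d: metric_scores[d], reverse=True)
    metric_ranks = {d: i + 1 for i, d in enumerate(by_metric)}

    # How well do the two rankings agree overall?
    rho, _ = spearmanr([metric_ranks[d] for d in depts],
                       [peer_ranks[d] for d in depts])
    print(f"Spearman correlation (metric vs. peer ranking): {rho:.2f}")

    # Departments where the two rankings diverge most are the obvious
    # targets for the expert "checking and fine-tuning" pass.
    for d in sorted(depts, key=lambda d: abs(metric_ranks[d] - peer_ranks[d]), reverse=True):
        print(f"{d}: metric rank {metric_ranks[d]}, peer rank {peer_ranks[d]}")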
Harnad, S. (2007) Open Access Scientometrics and the UK Research Assessment Exercise. In Torres-Salinas, D. and Moed, H. F. (Eds.) Proceedings of the 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1): 27-33. Madrid, Spain.
Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan, A. (2007) Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics. CTWatch Quarterly 3(3).
Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open Research Web: A Preview of the Optimal and the Inevitable. In Jacobs, N. (Ed.) Open Access: Key Strategic, Technical and Economic Aspects. Chandos.
Harnad, S. (Ed.) (1982) Peer Commentary on Peer Review: A Case Study in Scientific Quality Control. New York: Cambridge University Press.
Harnad, S. (1985) Rational Disagreement in Peer Review. Science, Technology and Human Values 10: 55-62.
Harnad, S. (1986) Policing the Paper Chase. [Review of S. Lock, A Difficult Balance: Peer Review in Biomedical Publication.] Nature 322: 24-25.
Harnad, S. (1996) Implementing Peer Review on the Net: Scientific Quality Control in Scholarly Electronic Journals. In Peek, R. and Newby, G. (Eds.) Scholarly Publishing: The Electronic Frontier. Cambridge MA: MIT Press. Pp. 103-118.
Harnad, S. (1997) Learned Inquiry and the Net: The Role of Peer Review, Peer Commentary and Copyright. Learned Publishing 11(4): 283-292.
Harnad, S. (1998/2000/2004) The Invisible Hand of Peer Review. Nature [online] (5 Nov. 1998); Exploit Interactive 5 (2000); and in Shatz, D. (Ed.) (2004) Peer Review: A Critical Inquiry. Rowman & Littlefield. Pp. 235-242.
Peer Review Reform Hypothesis-Testing (started 1999)
A Note of Caution About "Reforming the System" (2001)
Self-Selected Vetting vs. Peer Review: Supplement or Substitute? (2002)
Stevan Harnad
American Scientist Open Access Forum