In "Open Access is the short-sighted fight" Daniel Lemire [DL] writes:
DL: "(1) If scientific culture rewarded readership and impact above all else, we would not have to force authors toward Open Access."
(a) University hiring and performance evaluation committees do reward impact. (It is no longer true that only publications are counted: their citation impact is counted and rewarded too.)
(b) Soon readership (e.g., download counts, link counts, tags, comments) too will be counted among the metrics of impact, and rewarded -- but this will only become possible once the content itself is Open Access (OA), hence fully accessible online, its impact measurable and rewardable. (See references cited at the end of this commentary under the heading METRICS.)
(c) OA mandates do not force authors toward OA -- or no more so than the universal "publish or perish" mandates force authors toward doing and publishing research: What these mandates do is close the loop between research performance and its reward system.
(d) In the case of OA, it has taken a long time for the world scholarly and scientific community to become aware of the causal connection between OA and research impact (and its rewards), but awareness is at long last beginning to grow. (Stay tuned for the announcement of more empirical findings on the OA impact advantage later today, in honor of OA week.)
DL: "You know full well that many researchers are just happy to have the paper appear in a prestigious journal. They will not make any effort to make their work widely available because they are not rewarded for it. Publishing is enough to receive tenure, grants and promotions. And the reward system is what needs to be fixed."
This is already incorrect: Publishing alone is already not enough. Citations already count. OA mandates will simply make the causal contingency between access and impact, and between impact and employment/salary/promotion/funding/prizes, more obvious and explicit to all. In other words, the reward system will be fixed (including the development and use of a rich and diverse new battery of OA impact metrics) along with the access system.
DL: "(2) I love peer review. My blog is peer reviewed. You are a peer and just reviewed my blog post."
Peer commentary is not peer review (as surely I -- who founded and edited for a quarter century a rather good peer-reviewed journal that also provided open peer commentary -- ought to be in a position to know!). Peer commentary (like post-hoc metrics) is an increasingly important supplement to peer review, but it is neither peer review nor a substitute for it. (Again, see references at the end of this commentary under the heading PEER REVIEW.)
DL: "(3) PLoS has different types of peer review where correctness is reviewed, but no prediction is made as to the perceived importance of the work. Let me quote them: 'Too often a journal's decision to publish a paper is dominated by what the Editor/s think is interesting and will gain greater readership -- both of which are subjective judgments and lead to decisions which are frustrating and delay the publication of your work. PLoS ONE will rigorously peer-review your submissions and publish all papers that are judged to be technically sound. Judgments about the importance of any particular paper are then made after publication by the readership (who are the most qualified to determine what is of interest to them).'"
You have profoundly misunderstood this, Daniel:
(i) It is most definitely a part of peer review for referees to evaluate (and, where necessary, correct) the quality, validity, rigor, originality, relevance, interest and importance of candidates for publication in the journal for which they are refereeing.
(ii) Journals differ in the level of their peer review standards (and with those standards co-vary their acceptance criteria, selectivity, acceptance rates -- and hence their quality and reliability).
(iii) PLoS Biology and PLoS Medicine were created explicitly in order to maintain the highest standards of peer review (with acceptance criteria, selectivity and acceptance rates at the level of those of Nature and Science [which, by the way, are, like all peer judgments and all human judgment, fallible, but also corrigible post-hoc, thanks to the supplementary scrutiny of peer commentary and follow-up publications]).
(iv) PLoS ONE was created to cater for a lower level in the hierarchy of journal peer review standards. (There is no point citing the lower standards of mid-range journals in that pyramid as if they were representative of peer review itself.)
(v) Some busy researchers need to know the quality level of a new piece of refereed research a priori, at point of publication -- before they invest their scarce time in reading it, or, worse, their even scarcer and more precious research time and resources in trying to build upon it -- rather than waiting for months or years of post-hoc peer scrutiny or metrics to reveal it.
(vi) Once again: commentary -- and, more rarely, peer commentary -- is a supplement to, not a substitute for, peer review.
DL: "(4) Moreover, PLoS does publish non-peer-reviewed material, see PLoS Currents: Influenza for example."
And the journal hierarchy also includes unrefereed journals at the bottom of the pyramid. Users are quite capable of weighting publications by the quality track-record of their provenance, whether between journals, or between sections of the same journal.
Caveat Emptor.
METRICS:
Brody, T., Kampa, S., Harnad, S., Carr, L. and Hitchcock, S. (2003)
Digitometric Services for Open Archives Environments. In
Proceedings of European Conference on Digital Libraries 2003, pp. 207-220, Trondheim, Norway.
Harnad, S. (2006)
Online, Continuous, Metrics-Based Research Assessment.
Technical Report, ECS, University of Southampton.
Brody, T., Carr, L., Harnad, S. and Swan, A. (2007)
Time to Convert to Metrics.
Research Fortnight pp. 17-18.
Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan, A. (2007)
Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics.
CTWatch Quarterly 3(3).
Harnad, S. (2008)
Self-Archiving, Metrics and Mandates.
Science Editor 31(2): 57-59.
Harnad, S. (2008)
Validating Research Performance Metrics Against Peer Rankings.
Ethics in Science and Environmental Politics 8(11). (Special issue: The Use And Misuse Of Bibliometric Indices In Evaluating Scholarly Performance.)
Harnad, S., Carr, L. and Gingras, Y. (2008)
Maximizing Research Progress Through Open Access Mandates and Metrics.
Liinc em Revista 4(2).
Harnad, S. (2009)
Multiple metrics required to measure research performance.
Nature (Correspondence) 457: 785 (12 February 2009).
Harnad, S. (2009)
Open Access Scientometrics and the UK Research Assessment Exercise.
Scientometrics 79(1). Also in: Torres-Salinas, D. and Moed, H. F. (Eds.) (2007)
Proceedings of the 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain.
Harnad, S., Carr, L., Swan, A., Sale, A. and Bosc, H. (2009)
Maximizing and Measuring Research Impact Through University and Research-Funder Open-Access Self-Archiving Mandates.
Wissenschaftsmanagement 15(4) 36-41
PEER REVIEW:
Harnad, S. (1978)
BBS Inaugural Editorial.
Behavioral and Brain Sciences 1(1)
Harnad, S. (ed.) (1982)
Peer commentary on peer review: A case study in scientific quality control, New York: Cambridge University Press.
Harnad, S. (1984) Commentaries, opinions and the growth of scientific knowledge.
American Psychologist 39: 1497 - 1498.
Harnad, Stevan (1985)
Rational disagreement in peer review.
Science, Technology and Human Values 10: 55-62.
Harnad, S. (1986) Policing the Paper Chase. (Review of S. Lock, A difficult balance: Peer review in biomedical publication.)
Nature 322: 24-25.
Harnad, S. (1995)
Interactive Cognition: Exploring the Potential of Electronic Quote/Commenting. In: B. Gorayska & J.L. Mey (Eds.)
Cognitive Technology: In Search of a Humane Interface. Elsevier. Pp. 397-414.
Harnad, S. (1996)
Implementing Peer Review on the Net: Scientific Quality Control in Scholarly Electronic Journals. In: Peek, R. & Newby, G. (Eds.) Scholarly Publishing: The Electronic Frontier. Cambridge MA: MIT Press. Pp 103-118.
Harnad, S. (1997)
Learned Inquiry and the Net: The Role of Peer Review, Peer Commentary and Copyright. Learned Publishing 11(4) 283-292.
Harnad, S. (1998/2000/2004)
The invisible hand of peer review.
Nature [online] (5 Nov. 1998),
Exploit Interactive 5 (2000); and in Shatz, D. (2004) (ed.)
Peer Review: A Critical Inquiry. Rowman & Littlefield. Pp. 235-242.
Harnad, S. (2003/2004)
Back to the Oral Tradition Through Skywriting at the Speed of Thought.
Interdisciplines.
Retour à la tradition orale: écrire dans le ciel à la vitesse de la pensée. Dans: Salaün, Jean-Michel & Vandendorpe, Christian (dir.)
Le défi de la publication sur le web: hyperlectures, cybertextes et méta-éditions. Presses de l'enssib.
Harnad, S. (2003)
BBS Valedictory Editorial.
Behavioral and Brain Sciences 26(1)