Friday, October 23. 2009

Don't Count Your Metric Chickens Before Your Open-Access Eggs Are Laid
In "Open Access is the short-sighted fight" Daniel Lamire [DL] writes:
DL: "(1) If scientific culture rewarded readership and impact above all else, we would not have to force authors toward Open Access."(a) University hiring and performance evaluation committees do reward impact. (It is no longer true that only publications are counted: their citation impact is counted and rewarded too.) (b) Soon readership (e.g., download counts, link counts, tags, comments) too will be counted among the metrics of impact, and rewarded -- but this will only become possible once the content itself is Open Access (OA), hence fully accessible online, its impact measurable and rewardable. (See references cited at the end of this commentary under heading METRICS.) (c) OA mandates do not force authors toward OA -- or no moreso than the universal "publish or perish" mandates force authors toward doing and publishing research: What these mandates do is close the loop between research performance and its reward system. (d) In the case of OA, it has taken a long time for the world scholarly and scientific community to become aware of the causal connection between OA and research impact (and its rewards), but awareness is at long last beginning to grow. (Stay tuned for the announcement of more empirical findings on the OA impact advantage later today, in honor of OA week.) DL: "You know full well that many researchers are just happy to have the paper appear in a prestigious journal. They will not make any effort to make their work widely available because they are not rewarded for it. Publishing is enough to receive tenure, grants and promotions. And the reward system is what needs to be fixed."This is already incorrect: Publishing is already not enough. Citations already count. OA mandates will simply make the causal contingency between access and impact, and between impact and employment/salary/promotion/funding/prizes more obvious and explicit to all. In other words, the reward system will be fixed (including the development and use of a rich and diverse new battery of OA metrics of impact) along with fixing the access system. DL: "(2) I love peer review. My blog is peer reviewed. You are a peer and just reviewed my blog post."Peer commentary is not peer review (as surely I -- who founded and edited for a quarter century a rather good peer-reviewed journal that also provided open peer commentary -- ought to be in a position to know!). Peer commentary (as well as post-hoc metrics themselves) are an increasingly important supplement to peer review, but they are themselves neither peer review nor a substitute for it. (Again, see references at the end of this commentary under the heading PEER REVIEW.) DL: "(3) PLoS has different types of peer review where correctness is reviewed, but no prediction is made as to the perceived importance of the work. Let me quote them:"You have profoundly misunderstood this, Daniel:“Too often a journal’s decision to publish a paper is dominated by what the Editor/s think is interesting and will gain greater readership — both of which are subjective judgments and lead to decisions which are frustrating and delay the publication of your work. PLoS ONE will rigorously peer-review your submissions and publish all papers that are judged to be technically sound. 
Judgments about the importance of any particular paper are then made after publication by the readership (who are the most qualified to determine what is of interest to them).”

(i) It is most definitely a part of peer review to evaluate (and where necessary correct) the quality, validity, rigor, originality, relevance, interest and importance of candidates for publication in the journal for which they are being refereed.

(ii) Journals differ in the level of their peer review standards (and with those standards co-vary their acceptance criteria, selectivity and acceptance rates -- and hence their quality and reliability).

(iii) PLoS Biology and PLoS Medicine were created explicitly in order to maintain the highest standards of peer review (with acceptance criteria, selectivity and acceptance rates at the level of those of Nature and Science [which, by the way, are, like all peer judgments and all human judgment, fallible, but also corrigible post-hoc, thanks to the supplementary scrutiny of peer commentary and follow-up publications]).

(iv) PLoS ONE was created to cater for a lower level in the hierarchy of journal peer review standards. (There is no point citing the lower standards of mid-range journals in that pyramid as if they were representative of peer review itself.)

(v) Some busy researchers need to know the quality level of a new piece of refereed research a priori, at the point of publication -- before they invest their scarce time in reading it, or, worse, their even scarcer and more precious research time and resources in trying to build upon it -- rather than waiting for months or years of post-hoc peer scrutiny or metrics to reveal it.

(vi) Once again: commentary -- and, rarer, peer commentary -- is a supplement to, not a substitute for, peer review.

DL: "(4) Moreover, PLoS does publish non-peer-reviewed material, see PLoS Currents: Influenza for example."

And the journal hierarchy also includes unrefereed journals at the bottom of the pyramid. Users are quite capable of weighting publications by the quality track-record of their provenance, whether between journals or between sections of the same journal. Caveat Emptor.

METRICS:

Brody, T., Kampa, S., Harnad, S., Carr, L. and Hitchcock, S. (2003) Digitometric Services for Open Archives Environments. In Proceedings of European Conference on Digital Libraries 2003, pp. 207-220, Trondheim, Norway.

Harnad, S. (2006) Online, Continuous, Metrics-Based Research Assessment. Technical Report, ECS, University of Southampton.

Brody, T., Carr, L., Harnad, S. and Swan, A. (2007) Time to Convert to Metrics. Research Fortnight pp. 17-18.

Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan, A. (2007) Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics. CTWatch Quarterly 3(3).

Harnad, S. (2008) Self-Archiving, Metrics and Mandates. Science Editor 31(2) 57-59.

Harnad, S. (2008) Validating Research Performance Metrics Against Peer Rankings. Ethics in Science and Environmental Politics 8(11). Special Issue: The Use And Misuse Of Bibliometric Indices In Evaluating Scholarly Performance.

Harnad, S., Carr, L. and Gingras, Y. (2008) Maximizing Research Progress Through Open Access Mandates and Metrics. Liinc em Revista 4(2).

Harnad, S. (2009) Multiple metrics required to measure research performance. Nature (Correspondence) 457: 785 (12 February 2009).
Harnad, S. (2009) Open Access Scientometrics and the UK Research Assessment Exercise. Scientometrics 79(1). Also in Proceedings of 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds. (2007).

Harnad, S., Carr, L., Swan, A., Sale, A. and Bosc, H. (2009) Maximizing and Measuring Research Impact Through University and Research-Funder Open-Access Self-Archiving Mandates. Wissenschaftsmanagement 15(4) 36-41.

PEER REVIEW:

Harnad, S. (1978) BBS Inaugural Editorial. Behavioral and Brain Sciences 1(1).

Harnad, S. (ed.) (1982) Peer commentary on peer review: A case study in scientific quality control. New York: Cambridge University Press.

Harnad, S. (1984) Commentaries, opinions and the growth of scientific knowledge. American Psychologist 39: 1497-1498.

Harnad, S. (1985) Rational disagreement in peer review. Science, Technology and Human Values 10: 55-62.

Harnad, S. (1986) Policing the Paper Chase. (Review of S. Lock, A difficult balance: Peer review in biomedical publication.) Nature 322: 24-25.

Harnad, S. (1995) Interactive Cognition: Exploring the Potential of Electronic Quote/Commenting. In: B. Gorayska & J.L. Mey (Eds.) Cognitive Technology: In Search of a Humane Interface. Elsevier. Pp. 397-414.

Harnad, S. (1996) Implementing Peer Review on the Net: Scientific Quality Control in Scholarly Electronic Journals. In: Peek, R. & Newby, G. (Eds.) Scholarly Publishing: The Electronic Frontier. Cambridge MA: MIT Press. Pp. 103-118.

Harnad, S. (1997) Learned Inquiry and the Net: The Role of Peer Review, Peer Commentary and Copyright. Learned Publishing 11(4) 283-292.

Harnad, S. (1998/2000/2004) The invisible hand of peer review. Nature [online] (5 Nov. 1998); Exploit Interactive 5 (2000); and in Shatz, B. (2004) (ed.) Peer Review: A Critical Inquiry. Rowman & Littlefield. Pp. 235-242.

Harnad, S. (2003/2004) Back to the Oral Tradition Through Skywriting at the Speed of Thought. Interdisciplines. Retour a la tradition orale: ecrire dans le ciel a la vitesse de la pensee. Dans: Salaun, Jean-Michel & Vandendorpe, Christian (dir). Le défi de la publication sur le web: hyperlectures, cybertextes et meta-editions. Presses de l'enssib.

Harnad, S. (2003) BBS Valedictory Editorial. Behavioral and Brain Sciences 26(1).

Tuesday, September 29. 2009

The Added Value of Providing Free Access to Paid-Up Content
On Sun, Sep 27, 2009 at 8:07 PM, Elizabeth E. Kirk, Dartmouth College Library, on liblicense-l, wrote:
"Stevan, it is, as you say, about content. But it's not only about the content of Dartmouth's research output, or that of our peers. It's also about the value of the content provided through publishers, and the willingness of readers and institutions to look for that value."Elizabeth, I am not quite sure what you have in mind with the "it" that it "is... about." But if it's OA (Open Access), then the issue is not the value of the content or the contribution of the publisher or the willingness of readers and institutions to "look for" that value. The value of peer-reviewed publication is already very explicitly enshrined in the fact that OA's specific target content is peer-reviewed content. What OA is equally explicitly seeking -- now that the advent of the online era has at last made it possible -- is free (online) access to that valued content, so it is no longer accessible only to those users whose institutions can afford to subscribe to the journal in which it was published, but to all would-be users, web-wide, in order to maximize research usage, impact and progress. The cost of the portion of that value that is added by publishers is being paid in full by institutional subscriptions today. Hence what is missing is not a recognition of that value, but open access to that valued content. That is why it is so urgent and important that each institution should first adopt a Green OA self-archiving mandate -- to make its own valued content openly accessible to all users web-wide and not just to those whose institutions can afford subscription access to the journals in which that content appears. This institutional self-help thereby also encourages reciprocal mandates by other institutions, to open the access to their own content as well. Having thus seen to it that all its own peer-reviewed output is made (Green) OA, an institution is of course free to spend any spare cash it may have on paying for Gold OA publication, over and above what it already spends on subscriptions. But an institution's committing pre-emptively to Gold OA funding compacts like COPE before or instead of mandating Green OA self-archiving is not only a waste of a lot of scarce money in exchange for very little OA value: it is also a failure to add OA value to all of the institution's research output at no extra cost (by mandating Green OA self-archiving). "We both agree that the peer review process is a critical step in creating the finished work of scholarship, as well as "certifying" the work."Yes indeed; but peer review is already being paid for -- in full, many times over -- for most journals today (including most of the journals users want and need most) through multi-institutional subscription fees, paid by those institutions that can afford to subscribe to any given journal. (There are about 10,000 universities and research institutions in all, worldwide, and 25,000 peer-reviewed journals, publishing about 2.5 million articles per year. No institution can afford to subscribe to more than a small fraction of those journals.) To repeat: The value of peer review is not at issue. What is at issue is access -- access to paid-up, published, peer-reviewed articles. 
"Currently, open access journals--as you rightly put it--are a very small subset of the publishing pie."And committing to fund that small subset of an institution's own contribution to the "publishing pie" today, before or instead of committing to mandate OA for the vast supra-set of that institution's total journal article output, is committing to spend a lot of extra money for little OA while failing to provide a lot of OA for no extra money at all. "Without a predictable financial stream, there are few avenues of growing an OA sector that can furnish peer review, copy editing, DOIs, and all of the other parts of publishing that have costs involved."What is missing and urgently needed today -- for research and researchers -- is not "predictable financial streams" but online access to every piece of peer-reviewed research for every researcher whose institution cannot afford subscription access to it today. The "peer review, copy editing, DOIs, and all of the other parts of publishing that have costs involved" for those articles are already being paid in full today -- by the subscription fees of those institutions that can afford to subscribe to the journals in which they are published. "Open Access" is about Access, not about "financial streams." The wide-open "avenue" that urgently needs to be taken today (for the sake of research and researchers today) is the already-constructed, and immediately traversable (green) toll-road to accessing the vast paid-up subscription stream that already exists today, not the uncertain and still-to-be-constructed (golden) road of "growing" a future "OA sector," by paying still more, over and above the tolls already being paid, for a new "stream" of Gold OA journals. Institutions first need to provide immediate access to the peer-reviewed content they already produce today (its peer review already paid in full by subscriptions from all the institutions that can afford subscriptions to the journals in which that content already appears, today). Having done that, there's no harm at all in an institution's going on to invest its spare cash in growing new Gold OA "sectors." But there's plenty of harm in doing so instead, pre-emptively, instead of providing the Green OA all institutions are already in the position to provide, cost-free, today. "Trying to grow that kind of OA sector by supporting those costs, and overcoming the misconception that OA means "not peer reviewed" (which many people said about 10-15 years ago about all electronic journals, if you remember) is a honking good reason to join the compact."Misconceptions about OA certainly abound. But the fact that OA means OA to peer-reviewed content has been stated explicitly from the very outset by the OA movement (BOAI), loud and clear for all those with ears to hear the honking. Committing to funding Gold OA for a small subset of an institution's peer-reviewed output instead of first mandating Green OA for the vast supra-set of an institution's peer-reviewed output is a rather pricey way to drive home the home-truth that OA's target content is indeed, and always has been, peer-reviewed content... "That kind of OA sector, which of course can only be built when more institutions join us, is one that may create actual competition in journal publishing over time, by which I mean competition that results in lower prices, more players, and multiple models. 
It could include, as well, any current publisher who might wish to move to producer-pays from reader-pays."

"Prices, players, models, competition, payment, sectors": What has become of access -- access today, to today's peer-reviewed research -- in all this Gold Fever and "sector-growth" fervor, which seems to have left the pressing immediate needs of research and researchers by the wayside in favor of speculative future economics?

"We care very much about the stability of and access to our research."

Then why doesn't Dartmouth mandate Green OA self-archiving, today?

"We are working on that from a number of fronts and in multiple conversations. The compact is not our answer to everything. But we certainly won't step back from an opportunity to help create a more vibrant publishing landscape."

But why is committing to provide a little extra Gold OA for a small part of Dartmouth's peer-reviewed research output, at extra cost, being acted upon today, whereas committing to provide Green OA to all the rest of Dartmouth's peer-reviewed research output at no extra cost (by mandating Green OA) is still idling in "conversation" mode? -- especially since the cost of the value-added peer review for all the rest is already being paid in full by existing institutional subscriptions?

Stevan Harnad
American Scientist Open Access Forum

Thursday, July 23. 2009

Post-Publication Metrics Versus Pre-Publication Peer Review
Mark Patterson, in "PLoS Journals – measuring impact where it matters" (2009), writes:
"[R]eaders tend to navigate directly to the articles that are relevant to them, regardless of the journal they were published in... [T]here is a strong skew in the distribution of citations within a journal – typically, around 80% of the citations accrue to 20% of the articles... [W]hy then do researchers and their paymasters remain wedded to assessing individual articles by using a metric (the impact factor) that attempts to measure the average citations to a whole journal?Merits of Metrics. Of course direct article and author citation counts are infinitely preferable to -- and more informative than -- just a journal average (the journal "impact factor"). And yes, multiple postpublication metrics will be a great help in navigating, evaluating and analyzing research influence, importance and impact. But it is a great mistake to imagine that this implies that peer review can now be done on just a generic "pass/fail" basis. Purpose of Peer Review. Not only is peer review dynamic and interactive -- improving papers before approving them for publication -- but the planet's 25,000 peer-reviewed journals differ not only in the subject matter they cover, but also, within a given subject matter, they differ (often quite substantially) in their respective quality standards and criteria. It is extremely unrealistic (and would be highly dysfunctional, if it were ever made to come true) to suppose that these 25,000 journals are (or ought to be) flattened to provide a 0/1 pass/fail decision on publishability at some generic adequacy level, common to all refereed research. Pass/Fail Versus Letter-Grades. Nor is it just a matter of switching all journals from assigning a generic pass/fail grade to assigning its own letter grade (A-, B+, etc.), despite the fact that that is effectively what the current system of multiple, independent peer-reviewed journals provides. For not only do journal peer-review standards and criteria differ, but the expertise of their respective "peers" differs too. Better journals have better and more exacting referees, exercising more rigorous peer review. (So the 25,000 peer-reviewed journals today cannot be thought of as one generic peer-review filter that accepts papers for publication in each field with grades between A+ and E; rather there are A+ journals, B- journals, etc.: each established journal has its own independent standards, to which its submissions are answerable) Track Records and Quality Standards. And users know all this, from the established track records of the journals they consult as readers and publish in as authors. Whether or not we like to put it that way, this all boils down to selectivity across a gaussian distribution of research quality in each field. There are highly selective journals, that accept only the very best papers -- and even those often only after several rounds of rigorous refereeing, revision and re-refereeing. And there are less selective journals, that impose less exacting standards -- all the way down to the fuzzy pass/fail threshold that distinguishes "refereed" journals from journals whose standards are so low that they are virtually vanity-press journals. Supplement Versus Substitute. This difference (and independence) among journals in terms of their quality standards is essential if peer-review is to serve as the quality enhancer and filter that it is intended to be. 
Of course the system is imperfect, and, for just that reason alone (amongst many others), a rich diversity of post-publication metrics is an invaluable supplement to peer review. But such metrics are certainly no substitute for pre-publication peer review, or, most importantly, for its quality triage.

Quality Distribution. So much research is published daily in most fields that, on the basis of a generic 0/1 quality threshold, researchers simply cannot decide rationally or reliably what new research is worth the time and investment to read, use and try to build upon. Researchers and their work differ in quality too, and they are entitled to know a priori, as they do now, whether or not a newly published work has made the highest quality cut, rather than merely that it has met some default standards, after which users must wait for the multiple post-publication metrics to accumulate across time in order to have a more nuanced quality assessment.

Rejection Rates. More nuanced sorting of new research is precisely what peer review is about and for, especially at the highest quality levels. Although authors (knowing the quality track-records of their journals) mostly self-select, submitting their papers to journals whose standards are roughly commensurate with their quality, the underlying correlate of a journal's refereeing quality standards is basically its relative rejection rate: what percentage of annual papers in its designated subject matter would meet its standards (if all were submitted to that journal, and the only constraint on acceptance were the quality level of the article, not how many articles the journal could manage to referee and publish per year)?

Quality Ranges. This independent standard-setting by journals effectively ranges the 25,000 titles along a rough letter-grade continuum within each field, and their "grades" are roughly known by authors and users, from the journals' track-records for quality.

Quality Differential. Making peer review generic and entrusting the rest to post-publication metrics would wipe out that differential quality information for new research, and force researchers at all levels to risk pot-luck with newly published research (until and unless enough time has elapsed to sort out the rest of the quality variance with post-publication metrics). Among other things, this would effectively slow down instead of speeding up research progress.

Turn-Around Time. Of course pre-publication peer review takes time too; but if its result is that it pre-sorts the quality of new publications in terms of known, reliable letter-grade standards (the journals' names and track-records), then it is time well spent. Offloading that dynamic pre-filtering function onto post-publication metrics, no matter how rich and plural, would greatly handicap research usability and progress, especially at its all-important highest quality levels.

More Value From Post-Publication Metrics Does Not Entail Less Value From Pre-Publication Peer Review. It would be ironic if today's eminently valid and timely call for a wide and rich variety of post-publication metrics -- in place of just the unitary journal average (the "journal impact factor") -- were coupled with an ill-considered call for collapsing the planet's wide and rich variety of peer-reviewed journals, and their respective independent, established quality levels, onto some sort of global, generic pass/fail system.

Differential Quality Tags.
There is an idea afoot that peer review is just some sort of generic pass/fail grade for "publishability," and that the rest is a matter of post-publication evaluation. I think this is incorrect, and represents a misunderstanding of the actual function that peer review is currently performing. It is not a 0/1, publishable/unpublishable threshold. There are many different quality levels, and they get more exacting and selective in the higher-quality journals (which also have higher-quality and more exacting referees and refereeing). Users need these differential quality tags when they are trying to decide whether newly published work is worth taking the time to read, and worth the effort and risk of trying to build upon (at the quality level of their own work).

User/Author/Referee Experience. I think both authors and users have a good idea of the quality levels of the journals in their fields -- not from the journals' impact factors, but from their content, and their track-records for content. As users, researchers read articles in their journals; as authors they write for those journals, and revise for their referees; and as referees they referee for them. They know that all journals are not equal, and that peer review can be done at a whole range of quality levels.

Metrics As Substitutes for User/Author/Referee Experience? Is there any substitute for this direct experience with journals (as users, authors and referees) in order to know what their peer-reviewing standards and quality levels are? There is nothing yet, and no one can say yet whether there will ever be metrics as accurate as having read, written and refereed for the journals in question. Metrics might eventually provide an approximation, though we don't yet know how close, and of course they only come after publication (well after).

Quality Lapses? Journal track records, user experiences, and peer review itself are certainly not infallible either, however; the usually-higher-quality journals may occasionally publish a lower-quality article, and vice versa. But on average, the quality of the current articles should correlate well with the quality of past articles. Whether judgements of quality from direct experience (as user/author/referee) will ever be matched or beaten by multiple metrics, I cannot say, but I am pretty sure they are not matched or beaten by the journal impact factor.

Regression on the Generic Mean? And even if multiple metrics do become as good a joint predictor of journal article quality as user experience, it does not follow that peer review can then be reduced to generic pass/fail, with the rest sorted by metrics, because (1) such metrics are journal-level, not article-level (though they can also be author-level) and, more important still, (2) if journal differences are flattened to generic peer review, entrusting the rest to metrics, then the quality of the articles themselves will fall, for rigorous peer review does not just assign articles a differential grade (via the journal's name and track-record): it improves them, through revision and re-refereeing. More generic 0/1 peer review, with less individual quality variation among journals, would just generate quality regression on the mean.

REFERENCES

Bollen, J., Van de Sompel, H., Hagberg, A. and Chute, R. (2009) A Principal Component Analysis of 39 Scientific Impact Measures. PLoS ONE 4(6): e6022. doi:10.1371/journal.pone.0006022

Brody, T., Harnad, S. and Carr, L. (2006) Earlier Web Usage Statistics as Predictors of Later Citation Impact. Journal of the American Society for Information Science and Technology (JASIST) 57(8): 1060-1072.
Garfield, E. (1955) Citation Indexes for Science: A New Dimension in Documentation through Association of Ideas. Science 122: 108-111.

Harnad, S. (1979) Creative disagreement. The Sciences 19: 18-20.

Harnad, S. (ed.) (1982) Peer commentary on peer review: A case study in scientific quality control. New York: Cambridge University Press.

Harnad, S. (1984) Commentaries, opinions and the growth of scientific knowledge. American Psychologist 39: 1497-1498.

Harnad, S. (1985) Rational disagreement in peer review. Science, Technology and Human Values 10: 55-62.

Harnad, S. (1990) Scholarly Skywriting and the Prepublication Continuum of Scientific Inquiry. Psychological Science 1: 342-343 (reprinted in Current Contents 45: 9-13, November 11 1991).

Harnad, S. (1986) Policing the Paper Chase. (Review of S. Lock, A difficult balance: Peer review in biomedical publication.) Nature 322: 24-25.

Harnad, S. (1996) Implementing Peer Review on the Net: Scientific Quality Control in Scholarly Electronic Journals. In: Peek, R. & Newby, G. (Eds.) Scholarly Publishing: The Electronic Frontier. Cambridge MA: MIT Press. Pp. 103-118.

Harnad, S. (1997) Learned Inquiry and the Net: The Role of Peer Review, Peer Commentary and Copyright. Learned Publishing 11(4) 283-292.

Harnad, S. (1998/2000/2004) The invisible hand of peer review. Nature [online] (5 Nov. 1998); Exploit Interactive 5 (2000); and in Shatz, B. (2004) (ed.) Peer Review: A Critical Inquiry. Rowman & Littlefield. Pp. 235-242.

Harnad, S. (2008) Validating Research Performance Metrics Against Peer Rankings. Ethics in Science and Environmental Politics 8(11). Special Issue: The Use And Misuse Of Bibliometric Indices In Evaluating Scholarly Performance.

Harnad, S. (2009) Open Access Scientometrics and the UK Research Assessment Exercise. Scientometrics 79(1).

Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open Research Web: A Preview of the Optimal and the Inevitable. In: Jacobs, N. (Ed.) Open Access: Key Strategic, Technical and Economic Aspects. Chandos.

Monday, September 8. 2008

OA Needs Open Evidence, Not Anonymous Innuendo
The testimony of "Ethan" regarding SJI's publishing practices could have been valuable -- if it had been provided non-anonymously. As it stands, it merely amounts to anonymous, nonspecific, unsubstantiated allegations. If those allegations against SJI are in reality true, then making them anonymously, as "Ethan" does, does more harm than good, because then they can be so easily discredited and dismissed, as being merely the anonymous, nonspecific, unsubstantiated allegations that they are. (If they are in reality false, then they are of course a depolorable smear.)
Richard Poynder is a distinguished and highly respected journalist and the de facto chronicler of the OA movement. I hope "Ethan" has contacted Richard, as he requested, giving him his real name, and the names of the SJI journal submissions that he refereed and recommended for rejection as having "zero scientific value." Richard can then fact-check (confidentially, without embarrassing the authors) whether or not any of those articles were published as such.

What “Ethan” should have done, if he was, as he said, receiving articles of low quality to referee in a “peer review” procedure of doubtful quality, was to resign as referee, request removal of his name from the list of referees — did “he”? and was his name removed? — and, if he felt strongly enough, offer to make his objective evidence available to those who may wish to investigate these publishing practices.

What is needed in order to expose shoddy publishing practices is objective, verifiable evidence and open answerability, not anonymous allegations (as in the entrails of Wikipedia, where pseudonymous bullying reigns supreme). This is not, after all, whistle-blowing on the Mafia, which requires a witness protection program. If you offer to referee for a publisher with shoddy peer-review practices, you risk nothing by providing what objective evidence you have of those practices.

I know that "publish or perish" has authors fearful of offending publishers by doing anything they think might reduce their chances of acceptance, and that referees often perform their gratis services out of the same superstitious worry; and I know that junior referees are worried about offending senior researchers if they are openly critical of their work, and that even peer colleagues and rivals are often leery of the consequences of openly dissing one another’s research. Yet none of these bits of regrettable but understandable professional paranoia explains why "Ethan" felt the need to hide under a cloak of anonymity in providing objective evidence of dubious peer-review practices by a publisher and journals that hardly have the patina or clout of some of the more prestigious established publishers and journals.

Is it SJI's public threats of litigation, through postings like the one below, that have everyone so intimidated? Surely the antidote against that sort of thing is open evidence, not anonymous innuendo. (Something better is needed by way of open evidence, however, than just contented testimonials elicited from accepted authors!)

Stevan Harnad
American Scientist Open Access Forum

Monday, June 16. 2008

Nature's Fall from Aside the Angels
Steve Inchcoombe, managing director of Nature Publishing Group, writes of Nature:
"We also support and encourage self-archiving of the author’s final version of accepted articles."But if you look in the Romeo directory of publisher self-archiving policies, you will find that whereas Nature is indeed among the 92% of journals that have endorsed the immediate self-archiving of the author's unrefereed first draft (the preprint), Nature is not among the 63% of journals that have endorsed the immediate self-archiving of the author's peer-reviewed final draft (the postprint) -- the one that is the real target of OA, and indispensable for research usage and progress. Nature used to be "green" on the immediate self-archiving of both preprints and postprints, but, electing to take half of NIH's maximal allowable access embargo as its own minimum, Nature became one of the few journals that back-slid in 2005 to impose a 6-month embargo on open access to the peer-reviewed final draft. It doesn't make much difference, because Institutional Repositories still have the almost-OA email eprint request-a-copy Button to tide over research usage needs during the embargo, but let it not be thought that Nature is still on the "side of the angels" insofar as OA is concerned... Maxine Clarke, Publishing Executive Editor, Nature, replied: "Don't forget that people can always read the article in the journal, Stevan, as soon as it is published! The vast majority of scientists are either at an institution with a site license or can access the journal free via OARE, AGORA or HINARI, so they don't even have to take out a subscription."But what about those would-be users worldwide who are "[n]either at an institution with a site license [n]or can access the journal free via OARE, AGORA or HINARI"? Is there any reason whatsoever why they should all be denied access for six months if they (or their institutions) do not "have [the funds] to take out a subscription"? Because that, Maxine, is what OA is really all about. Stevan Harnad American Scientist Open Access Forum Wednesday, October 31. 2007RePEc, Peer Review, and Harvesting/Exporting from IRs to CRs
The new RePEc blog is a welcome addition to the blogosphere. The economics community is to be congratulated for its longstanding practice of self-archiving its pre-refereeing preprints and exporting them to RePEc.
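The post below turns on OAI-compliant institutional repositories whose metadata can be harvested by, or exported to, central repositories. As a purely illustrative sketch of that harvesting route (the generic OAI-PMH layer, not RePEc's own exchange conventions), here is a minimal ListRecords request in Python; the endpoint URL is a hypothetical placeholder, not a real repository.

# Minimal sketch of harvesting Dublin Core metadata from an OAI-PMH-compliant
# institutional repository. The base URL below is a hypothetical placeholder;
# any OAI-compliant IR exposes the same verbs.
import requests
import xml.etree.ElementTree as ET

OAI_ENDPOINT = "https://ir.example.edu/oai"  # hypothetical repository base URL
NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

resp = requests.get(
    OAI_ENDPOINT,
    params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
    timeout=30,
)
resp.raise_for_status()
root = ET.fromstring(resp.content)

# Print title and identifier for each harvested record.
for record in root.iter("{http://www.openarchives.org/OAI/2.0/}record"):
    title = record.find(".//dc:title", NS)
    ident = record.find(".//dc:identifier", NS)
    print((title.text if title is not None else "<no title>"),
          "|", (ident.text if ident is not None else "<no identifier>"))

Because every OAI-compliant repository exposes the same interface, institution-level deposit and service-level harvesting (by RePEc-style central services) remain separable concerns, which is the rationale for depositing "at home" first.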
Re: the current RePEc blog posting, "New Peer Review Systems": Experiments on improving peer review are always welcome, but what the worldwide research community (in all disciplines, economics included) needs most urgently today is not peer review reform, but Open Access (OA) to its existing peer-reviewed journal literature.

It's far easier to reform access than to reform the peer-review system, and it's also already obvious exactly what needs to be done, and how, for OA -- mandate RePEc-style self-archiving, but for the refereed postprints, not just the unrefereed preprints -- whereas peer-review reforms are still in the testing stage. It's not even clear whether, once most unrefereed preprints and all refereed postprints are OA, anyone will still feel any need for radical peer review reform at all; it may simply be a matter of more efficient online implementation.

So if I were part of the RePEc community, I would be trying to persuade economists (who, happily, already have the preprint self-archiving habit) to extend their practice to postprints -- and to persuade their institutions and funders to mandate postprint self-archiving in each author's own OAI-compliant Institutional Repository (IR). From there, if and when desired, its metadata can then also be harvested by, or exported to, CRs (Central Repositories) like RePEc or PubMed Central. (One of the rationales for OAI-interoperability is harvestability.)

But the primary place to deposit one's own preprints and postprints, in all disciplines, is "at home," i.e., in one's own institutional archive, for visibility, usage, impact, record-keeping, monitoring, metrics, and assessment -- and in order to ensure a scalable universal practice that systematically covers all research space, whether funded or unfunded, in all disciplines, single or multi, and at all institutions -- universities and research institutes. (For institutions that have not yet created an IR of their own -- even though the software is free and the installation is quick, easy, and cheap -- there are reliable interim CRs such as Depot to deposit in, and harvest back from, once your institution has its own IR.)

Stevan Harnad
American Scientist Open Access Forum

Tuesday, September 4. 2007

British Academy Report on Peer Review and Metrics
The 4 Sept Guardian article on peer review (on the 5 Sept British Academy Report, to be published tomorrow) seems to be a good one. The only thing it lacks is some conclusions (which journalists are often reluctant to take the responsibility of making):

"Help Wanted: A pall of gloom lies over the vital system of peer review. But the British Academy has some bright ideas". The Guardian, Jessica Shepherd reports, Tuesday September 4, 2007

(1) Peer review just means the assessment of research by qualified experts. (In the case of research proposals, it is assessment for fundability, and in the case of research reports, it is assessment for publishability.)

(2) Yes, peer review, like all human judgment, is fallible, and susceptible to error and abuse.

(3) Funding and publishing without any assessment is not a solution: everything cannot be funded (there aren't enough funds), and even funded projects first need some expert advice in their design.

(4) So far, nothing as good as or better than peer review (i.e., qualified experts vetting the work of their fellow-experts) has been found, tested and demonstrated. So peer review remains the only straw afloat, if the alternative is not to be tossing a coin for funding, and publishing everything on a par.

(5) Peer review can be improved. The weak link is always the editor (or Board of Editors), who chooses the reviewers and to whom the reviewers and authors are answerable; and the Funding Officer(s) or committee choosing the reviewers for proposals, and deciding how to act on the basis of the reviews. There are many possibilities for experimenting with ways to make this meta-review component more accurate, equitable, answerable, and efficient, especially now that we are in the online era.

(6) Metrics are not a substitute for peer review; they are a supplement to it. In the case of the UK's Dual Support System -- (i) prospective funding of individual competitive proposals (RCUK) and (ii) retrospective top-sliced funding of entire university departments, based on their recent past research performance (RAE) -- metrics can help inform and guide funding officers, committees, editors, Boards and reviewers. And in the case of the RAE in particular, they can shoulder a lot of the former peer-review burden: the RAE, being a retrospective rather than a prospective exercise, can benefit from the prior publication peer review that the journals have already done for the submissions, rank the outcomes with metrics, and then only add expert judgment afterward, as a way of checking and fine-tuning the metric rankings.

Funders and universities explicitly recognizing peer review performance as a metric would be a very good idea, both for the reviewers and for the researchers being reviewed.

Harnad, S. (2007) Open Access Scientometrics and the UK Research Assessment Exercise. In Proceedings of 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds.

Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan, A. (2007) Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics. CTWatch Quarterly 3(3).

Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open Research Web: A Preview of the Optimal and the Inevitable. In: Jacobs, N. (Ed.) Open Access: Key Strategic, Technical and Economic Aspects. Chandos.
Harnad, S. (ed.) (1982) Peer commentary on peer review: A case study in scientific quality control. New York: Cambridge University Press.

Harnad, S. (1985) Rational disagreement in peer review. Science, Technology and Human Values 10: 55-62.

Harnad, S. (1986) Policing the Paper Chase. (Review of S. Lock, A difficult balance: Peer review in biomedical publication.) Nature 322: 24-25.

Harnad, S. (1996) Implementing Peer Review on the Net: Scientific Quality Control in Scholarly Electronic Journals. In: Peek, R. & Newby, G. (Eds.) Scholarly Publishing: The Electronic Frontier. Cambridge MA: MIT Press. Pp. 103-118.

Harnad, S. (1997) Learned Inquiry and the Net: The Role of Peer Review, Peer Commentary and Copyright. Learned Publishing 11(4) 283-292.

Harnad, S. (1998/2000/2004) The invisible hand of peer review. Nature [online] (5 Nov. 1998); Exploit Interactive 5 (2000); and in Shatz, B. (2004) (ed.) Peer Review: A Critical Inquiry. Rowman & Littlefield. Pp. 235-242.

Peer Review Reform Hypothesis-Testing (started 1999)
A Note of Caution About "Reforming the System" (2001)
Self-Selected Vetting vs. Peer Review: Supplement or Substitute? (2002)

Stevan Harnad
American Scientist Open Access Forum

Tuesday, October 17. 2006

Premature Rejection Slip

Richard Poynder, in "Open Access: death knell for peer review?", has written yet another thoughtful, stimulating essay. But I think he (and many of the scholars and scientists he cites) is quite baldly wrong on this one!

What is peer review? Nothing more nor less than qualified experts vetting the work of their fellow specialists to make sure it meets certain established standards of reliability, quality and usability -- standards that correspond to the quality level of the journal whose name and track-record certifies the outcome.

Peer review is dynamic and answerable: dynamic, because it is not just an "admit/eject" decision by a gate-keeper or an "A/B/C/D" mark assigned by a schoolmarm, but an interactive process of analysis, criticism and revision that may take several rounds of successive revisions and re-refereeing. And answerable, because the outcome must meet the requirements set out by the referees as determined by the editor, sometimes resulting in an accepted final draft that is very different from the originally submitted preprint -- and sometimes in no accepted draft at all.

Oh, and like all exercises in human judgment, even expert judgment, peer review is fallible, and sometimes makes errors of both omission and commission (but neither machinery nor anarchy can do better). It is also approximate rather than exact; and, as noted, quality-standards differ from journal to journal, but are generally known from the journal's public track record. (The only thing that does resemble an A/B/C/D marking system is the journal-quality hierarchy itself: meeting the quality-standards of the best journals is rather like receiving an A+, and the bottom rung is not much better than a vanity press.)

But here are some other home truths about peer review (from an editor of 25 years' standing who alas knows all too well whereof he speaks): Qualified referees are a scarce, over-harvested resource. It is hard to get them to agree to review, and hard to get them to do it within a reasonable amount of time. And it is not easy to find the right referees; ill-chosen referees (inexpert or biassed) can suppress a good paper or admit a bad one; they can miss detectable errors, or introduce gratuitous distortions.
Those who think spontaneous, self-appointed vetting can replace the systematic selectivity and answerability of peer review should first take on board the ineluctable fact of referee scarcity, reluctance and tardiness, even when referees are importuned by a reputable editor, with at least the prospect that their efforts, though unpaid, will be heeded. (Now ask yourself the likelihood that the right umpires will do their duty on their own, without even being sure of being heeded for their pains.)

Friends of self-policed vetting should also sample for a while the raw sludge that first makes its way to the editor's desk, and ask themselves whether they would rather everyone had to contend directly with that commodity for themselves, instead of having it filtered for them by peer review, as now. (Think of it more as a food-taster for the emperor at risk of being poisoned -- rather than as an elitist "gate-keeper" keeping the hoi polloi out of the club -- for that is closer to what a busy researcher faces in trying to decide what work to risk some of his scarce reading time on, or (worse) his even scarcer and more precious research time in trying to build upon.)

And peer-review reformers or replacers should also reflect on whether they think that those who have nothing better to do with their time than to wade through this raw, unfiltered sludge on their own recognizance -- posting their take-it-or-leave-it "reviews" publicly, for authors and users to heed as they may or may not see fit -- are the ones they would like to trust to filter their daily sludge for them, instead of answerable editors' selected, answerable experts. Or whether they would like to see the scholarly milestones, consisting of the official, certified, published, answerable versions, vanish in a sea of moving targets, consisting of successive versions of unknown quality, crisscrossed by a tangle of reviews, commentaries and opinions of equally unknown quality.

Not that all the extras cannot be had too, alongside the peer-reviewed milestones: in our online age, no gate-keeper is blocking the public posting of unrefereed preprints, self-appointed commentaries, revised drafts, and even updates and upgrades of the published milestones -- alongside the milestones themselves. What is at issue here is whether we can do without the filtered, certified milestones themselves (until we once again reinvent peer review).

The question has to be asked seriously; and if one hasn't the imagination to pose it from the standpoint of a researcher trying to make tractable use of the literature, let us pose it more luridly, from the standpoint of how to treat a family member who is seriously ill: navigate the sludge directly, to see for oneself what's on the market? Or ask one's intrepid physician to try to sort out the reliable cure from the surrounding ocean of quackery? And if you think this is not a fair question, do you really think science and scholarship are that much less important than curing sick kin?

Eppur, eppur... what tugs at me on odd days of the week is the undeniable fact that most research is not cited, nor worth citing, anyway, so why bother with peer review [or, horribile dictu, OA!] for all of that? And at the other end, the few authors of the very, very best work are virtually peerless, and can exchange their masterworks amongst themselves, as in Newton's day. So is all this peer review just to keep the drones busy? I can't say.
(I used to mumble things in reply to the effect that "we need to countenance the milk in order to be ensured of the cream rising to the top" or "we need to admit the chaff in order to sift out its quota of wheat" or "we need to screen the Gaussian noise if we want to ensure our ration of signal.") [1], [2], [3]

But I can say that none of this has anything to do with Open Access, now (except that it can be obtruded, along with so many other irrelevant things, to slow OA's progress). If self-archiving mandates were adopted universally, all of this would be mooted. The current peer-reviewed literature, such as it is, would at long last be OA -- which is the sole goal of the OA movement, and the end of the road for OA advocacy, the rest being about scholars and scientists making use of this newfound bonanza, whilst other processes (digital preservation, journal reform, copyright reform, peer review reform) proceed apace.

Harnad, S. (1986) Policing the Paper Chase. (Review of S. Lock, A difficult balance: Peer review in biomedical publication.) Nature 322: 24-25.

As it is, however, second-guessing the future course of peer review is still one of the at least 34 pieces of irrelevance distracting us from getting round to doing the optimal and inevitable at long, long last...

Harnad, S. (1990) Scholarly Skywriting and the Prepublication Continuum

Stevan Harnad
American Scientist Open Access Forum

Friday, September 1. 2006

Perelman and Peerlessness
The lion's share of science and scholarship is founded on peer review:
The findings of experts are vetted by qualified fellow-experts for correctness, importance and originality before being published; this validates the results and serves as a filter, to protect other scientists and scholars from risking their time and effort reading and trying to apply or build upon work that may not be sound. That's the lion's share of science and scholarship.

But some scientists and scholars are peerless: their work is at such a high level that only they, or a very few like them, are even equipped to test and attest to its soundness. Such is the case with the work of Grigori Perelman.

It is a mistake to try to generalize this in any way: it doesn't scale. It does not follow from the fact that a rare genius like Perelman can transmit his huge and profound contribution by simply posting it publicly on the Web -- without refereeing or publication -- that anything at all has changed about the way the overwhelming majority of scientific and scholarly research continues to need to be quality-controlled: via classical peer review.

Nor has this anything at all to do with Open Access. In paper days, Perelman could just as well have snail-mailed his proofs to the few people on the planet qualified to check them, and if, having done that, he was content to leave it at that, he could have done so. They would have been cited in articles and would have made their way into textbooks as "unpublished results by G. Perelman (2003)."

For the quotidian minor and major contributions that are researchers' daily bread and butter, formal publication is essential, for both credibility and credit. For the occasional rare monumental contribution or masterpiece, it is supererogatory. Nothing follows from this. OA continues to mean free online access to peer-reviewed research (after -- and sometimes before -- peer review), not to research free of peer review!

Stevan Harnad
American Scientist Open Access Forum

Wednesday, July 27. 2005

WEIGHING ARTICLES/AUTHORS INSTEAD OF JOURNALS IN RESEARCH ASSESSMENT
The United Kingdom ranks and rewards the research productivity of all its universities through a national Research Assessment Exercise (RAE) conducted every four years.
If, as lately proposed, the "RAE shifts focus from prestige journals" (THES, 22 July 2005) as a basis for its ranking, what will it shift focus to? An established journal's prestige and track-record are correlated with its selectivity and peer-review standards, hence its quality level, and often also its citation impact. What would it mean to ignore or de-emphasise that? The correlations will be ignored, all articles will be given equal weight -- and then what?

What gain in accuracy and fairness of research assessment is to be expected from ignoring the known predictors -- for correlation is predictive -- of research quality? Are all articles to be re-peer-reviewed by the RAE itself, bottom-up? Is that efficient, desirable, realistic? The most prestigious international journals draw upon international expertise in their peer review: is the UK to reduplicate all this effort in-house every 4 years? Why? Isn't our time better spent getting the peer-reviewing done right the first time, and then getting on with our research?

Research assessment used to be publish-or-perish bean-counting; it is now weighted by the quality level of the journal in which the bean is planted. RAE outcome is already highly correlated with counts of the citations that articles sprout, even though the RAE never actually counts citations directly. That's because a journal's prestige is correlated with its articles' citation counts. So if we're going to start ignoring journal prestige, shouldn't we begin to count article (and author) citations directly in its place?
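To make that closing question concrete, here is a purely illustrative Python sketch (with made-up numbers, not RAE or citation data) of how a journal-prestige-weighted departmental ranking could be compared against a ranking based on direct article citation counts, using the kind of rank correlation employed when validating metrics against peer rankings.

# Illustrative only: synthetic department scores, not real assessment data.
def rank(values):
    """Rank values in descending order (1 = highest); ties get averaged ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation computed on the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mean = (n + 1) / 2
    num = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    den = (sum((a - mean) ** 2 for a in rx) * sum((b - mean) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical departments: (journal-prestige-weighted score, total article citations)
departments = {
    "Dept A": (42.0, 1300),
    "Dept B": (35.5, 900),
    "Dept C": (28.0, 1100),
    "Dept D": (19.5, 400),
    "Dept E": (12.0, 150),
}

prestige_scores = [v[0] for v in departments.values()]
citation_counts = [v[1] for v in departments.values()]
print(f"Spearman correlation (prestige-weighted vs. direct citations): "
      f"{spearman(prestige_scores, citation_counts):.2f}")

A high rank correlation on real data would be exactly the kind of evidence that direct article and author citation counts could stand in for, and refine, the prestige-weighted proxy.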
The American Scientist Open Access Forum has been chronicling and often directing the course of progress in providing Open Access to Universities' Peer-Reviewed Research Articles since its inception in the US in 1998 by the American Scientist, published by the Sigma Xi Society. The Forum is largely for policy-makers at universities, research institutions and research funding agencies worldwide who are interested in institutional Open Access provision policy. (It is not a general discussion group for serials, pricing or publishing issues: it is specifically focussed on institutional Open Access policy.)
You can sign on to the Forum here.