Wednesday, December 9. 2009

Comments on Raym Crow's (2002) SPARC position paper on institutional repositories

Comments on 2002 SPARC Position Paper on Institutional Repositories by Raym Crow (Note: These comments were originally posted on Sun Aug 04 2002)

The SPARC position paper, "The Case for Institutional Repositories," by Raym Crow (2002), is timely and will serve a useful purpose in mapping out for universities exactly why it is in their best interests to self-archive their research output, and how they should go about doing so. I will only comment on a few passages, having mostly to do with the topic of "certification" (peer review), in which SPARC's message may have become a little garbled along the same lines that like-minded precursor initiatives (notably E-biomed and Scholars' Forum) have likewise been a little garbled:

E-biomed: A Proposal for Electronic Publications in the Biomedical Sciences
Scholars' Forum: A New Model For Scholarly Communication

To overview the point in question very briefly: To provide open access (i.e., free, online, full-text access) to the research output of universities and research institutions worldwide -- output that is currently accessible only by paying access-tolls to the 24,000 peer-reviewed journals in which their 2.5 million annual research papers are published -- does not call for or depend upon any changes at all in the peer review system. On the contrary, it would be a profound strategic (and factual) mistake to give the research community the incorrect impression that there is or ought to be any sort of link at all between providing open access to their own research literature by self-archiving it and any modification whatsoever in the peer review system that currently controls and certifies the quality of that research. The question of peer-review modification has absolutely nothing to do with the institutional repositories and self-archiving that the SPARC paper is advocating. The only thing that authors and institutions need to be clearly and explicitly reassured about (because it is true) is that self-archiving in institutional Eprints Archives will preserve intact that very same peer-reviewed literature (2.5 million peer-reviewed papers annually, in 24,000 peer-reviewed journals) to which it is designed to provide open access. Hence, apart from providing these reassurances, it is best to leave the certification/peer-review issue alone!

Here is where this potentially misleading and counterproductive topic is first introduced in the SPARC paper's section on "certification":

RC: "CERTIFICATION: Most of the institutional repository initiatives currently being developed rely on user (including author) communities to control the input of content. These can include academic departments, research centers and labs, administrative groups, and other sub-groups. Faculty and others determine what content merits inclusion and act as arbiters for their own research communities. Any [vetting] at the initial repository submission stage thus comes from the sponsoring community within the institution, and the rigor of qualitative review and certification will vary."

There is a deep potential ambiguity here. The SPARC paper might merely be referring here to how much, and how, institutions might decide to self-vet their own research output when it is still in the form of pre-peer-review preprints, and that would be fine:
"1.5. Distinguish unrefereed preprints from refereed postprints"

But this institutional self-vetting of whatever of its own pre-refereeing research output a university decides to make public online should on no account be described as "qualitative review and certification"! That would instead be peer review, and peer review is the province of the qualified expert referees (most of them affiliated with other institutions, not the author's institution) who are called upon formally by the editors of independent peer-reviewed journals to referee the submissions to those journals; this quality-review is not the province of the institution that is submitting the research. Self-archiving is not self-publishing, and peer review cannot be self-administered:

"1.4. Distinguish self-publishing (vanity press) from self-archiving (of published, refereed research)"

It merely invites confusion to equate whatever preliminary self-vetting an institution may elect to do on the contents of the unrefereed preprint sector of its Eprint Archives with what it is that journals do when they implement peer review. Worse, it might invite the conflation of self-archiving with self-publishing, if what the SPARC paper has in mind here is not just the unrefereed preprint sector of the institutional repository, but what would be its refereed postprint sector, consisting of those papers that are certified as having met a specific journal's established quality standards after classical peer review has taken its standard course:

"What is an Eprint Archive?" "What is an Eprint?" "What should be self-archived?" "What is the purpose of self-archiving?" "Is self-archiving publication?"

It is extremely important to clearly differentiate an institution's self-vetting of the unrefereed sector of its archive from the external quality control and certification provided by refereed journals that subsequently yields the refereed sector of its archive. Nothing is gained by conflating the two:

"Peer-review reform: Why bother with peer review?"

RC: "In some instances, the certification will be implicit and associative, deriving from the reputation of the author's host department. In others, it might involve more active review and vetting of the research by the author's departmental peers. While more formal than an associative certification, this certification would typically be less compelling than rigorous external peer review. Still, in addition to the primary level certification, this process helps ensure the relevance of the repository's content for the institution's authors and provides a peer-driven process that encourages faculty participation."

These are all reasonable possibilities for the preliminary self-selection and self-vetting of an institution's unrefereed preprints.
But implying that they amount to anything more than that -- by using the term "peer" for both this internal self-vetting and external peer review, and suggesting that there is some sort of continuum of "compellingness" between the two -- is not helpful or clarifying but instead leads to (quite understandable) confusion and resistance on the part of researchers and their institutions: For, having read the above, the potential user who previously knew the refereed journal literature -- consisting of 24,000 peer-reviewed journals, 2.5 million refereed articles per year, each clearly certified with each journal's quality-control label, and backed by its established reputation and impact -- now no longer has a clear idea what literature we might be talking about here! Are we talking about providing open access to that same refereed literature, or are we talking about substituting some home-grown home-brew in its place?

Yet there is no need at all for this confusion: As correctly noted in the SPARC paper, University Eprint Archives ("Institutional Repositories") can have a variety of contents, but prominent among them will be the university's own research output (self-archived for the sake of the visibility, usage, and impact, and their resulting individual and institutional rewards, as well described elsewhere in the SPARC paper). That institutional research output has, roughly, two embryonic stages: pre-peer-review (unrefereed) preprints and post-peer-review (refereed) postprints. Now the pre-peer-review preprint sector of the archive may well require some internal self-vetting (this is up to the institution), but the post-peer-review postprint sector certainly does not, for the "vetting" there has been done -- as it always has been -- by the external referees and editors of the journals to which those papers were submitted as preprints, and by which they were accepted for publication (possibly only after several rounds of substantive revision and re-refereeing) once the refereeing process had transformed them into the postprints. Nor is the internal self-vetting of the preprint sector any sort of substitute for the external peer review that dynamically transforms the preprints into refereed, journal-certified postprints. In the above-quoted passage, the functions of the internal preprint self-vetting and the external postprint refereeing/certification are completely conflated -- and conflated, unfortunately, under what appears like an institutional vanity-press penumbra, a taint that the self-archiving initiative certainly does not need, if it is to encourage the opening of access to its existing quality-controlled, certified research literature, such as it is, rather than to some untested substitute for it.

RC: "It should be noted that to serve the primary registration and certification functions, a repository must have some official or formal standing within the institution. Informal, grassroots projects - however well-intentioned - would not serve this function until they receive official sanction."

Universities should certainly establish whatever internal standards they see fit for pre-filtering their pre-refereeing research before making it public. But the real filtration continues to be what it always was, namely, classical peer review, implemented and certified as it always was. This needs to be made crystal clear!
RC: " OVERLAY JOURNALS: Third-party online journals that point to articles and research hosted by one or more repositories provide another mechanism for peer review certification in a disaggregated model."Unfortunately, the current user of the existing, toll-access refereed-journal literature is becoming more and more confused about just what is actually being contemplated here! Does institutional self-archiving mean that papers lose the quality-control and certification of peer-reviewed journals and have it replaced by something else? By what? And what is the evidence that we would then still have the same literature we are talking about here? Does institutional self-archiving mean giving up the established forms of quality control and certification and replacing them by untested alternatives? There also seems to be some confusion between the more neutral concept of (1) "overlay journals" (OJs) (e.g., Arthur Smith, which merely use Eprint Archives for input (the online submission/refereeing of author self-archived preprints) and output (the official certification of author self-archived postprints as having been peer-reviewed, accepted and "published" by the OJ in question), but leave the classical peer review system intact; and the vaguer and more controversial notion of (2) "deconstructed journals" (DJs) on the "disaggregated model" (e.g., John W.T. Smith), in which (as far as I can ascertain) what is being contemplated is the self-archiving of preprints and their subsequent "submission" to one or many evaluating/certifying entities (some of which may be OJs, others some other unspecified kind of certifier) who give the papers their respective "stamps of approval." "Re: Alternative publishing models - was: Scholar's Forum: A New Model... JWT Smith has made some testable empirical conjectures, which could eventually be tested in a future programme of empirical research on alternative research quality review and certification systems. But they certainly do not represent an already tested and already validated ("certified"?) alternative system, ready for implementation in place of the 2.5 million annual research articles that currently appear in the 24,000 established refereed journals! As such, untested speculations of this kind are perhaps a little out of place in the context of a position paper that is recommending concrete (and already tested) practical steps to be taken by universities in order to maximize the visibility, accessibility and impact of their research output (and perhaps eventually to relieve their library serials budgetary burden too). Author/institution self-archiving of research output -- both preprints and postprints -- is a tested and proven supplement to the classical journal peer review and publication system, but by no means a substitute for it. Self-archiving in Open Access Eprint Archives has now been going on for over a decade, and both its viability and its capacity to increase research visibility and impact have been empirically demonstrated. Substitutes for the existing journal peer review and publication system, in contrast, require serious and systematic prior testing in their own right; there is nothing anywhere near ready there for practical recommendations other than the feasibility of Overlay Journals (OJs) as a means of increasing the efficiency and speed and lowering the cost of classical peer review. 
Almost no testing of any other model has been done yet; there are no generalizable findings available, and there are many prima facie problems with some of the proposed models (including JWT Smith's "disaggregated" model [DJs]) that have not even been addressed: See the discussion (and some of the prima facie problems) of JWT Smith's model under:

"Alternative publishing models - was: Scholar's Forum: A New Model..." "Journals are Quality Certification Brand-Names" "Central vs. Distributed Archives" "The True Cost of the Essentials (Implementing Peer Review)" "Workshop on Open Archives Initiative in Europe"

In contrast, there has been a recent announcement that the Journal of Nonlinear Mathematical Physics will become openly accessible as an "overlay journal" (OJ) on the Physics Archive. This is certainly a welcome development -- but note that JNMP is a classically peer-reviewed journal, and hence the "overlay" is not a substitute for classical peer review: It merely increases the visibility, accessibility and impact of the certified, peer-reviewed postprints while at the same time providing a faster, more efficient and economical way of processing submissions and implementing [classical] peer review online. Indeed, Overlay Journals (OJs) are very much like the Open-Access Journals that are the target of Budapest Open Access Strategy 2. Deconstructed/Disaggregated Journals (DJs), in contrast, are a much vaguer, more ambiguous, and more problematic concept, nowhere near ready for recommendation in a SPARC position paper.

RC: "While some of the content for overlay journals might have been previously published in refereed journals, other research may have only existed as a pre-print or work-in-progress."

This is unfortunately beginning to conflate the notion of the "overlay" journal (OJ) with some of the more speculative hypothetical features of the "deconstructed" or "disaggregated" journal (DJ): The (informal) notion of an overlay journal is quite simple: If researchers are self-archiving their preprints and postprints in Eprint Archives anyway, there is, apart from any remaining demand for paper editions, no reason for a journal to put out its own separate edition at all: Instead, the preprint can first be deposited in the preprint sector of an Eprint Archive. The journal can be notified by the author that the deposit is intended as a formal submission. The referees can review the archived preprint. The author can revise it according to the editor's disposition letter and the referee reports. The revised draft can again be deposited and re-refereed as a revised preprint. Once a final draft is accepted, that then becomes tagged as the journal-certified (refereed) postprint. End of story. That is an "overlay" journal (OJ), with the postprint permanently "certified" by the journal-name as having met that journal's established quality standards. The peer review is classical, as always; the only thing that has changed is the medium of implementation of the peer review and the medium of publication (both changes being in the direction of greater efficiency, functionality, speed, and economy). A deconstructed/disaggregated journal (DJ) is an entirely different matter. As far as I can ascertain, what is being contemplated there is something like an approval system plus the possibility that the same paper is approved by a number of different "journals."
The underlying assumptions are questionable: (1) Peer review is neither a static red-light/green-light process nor a grading system, singular or multiple: The preprint does not receive one or a series of "tags." Peer review is a dynamic process of mediated interactions between an author and expert referees, answerable to an expert editor who selects the referees for their expertise and who determines what has to be done to meet the journal's quality standards -- a process during which the content of the preprint undergoes substantive revision, sometimes several rounds of it. The "grading" function comes only after the preprint has been transformed by peer review into the postprint, and consists of the journal's own ranking in the established (and known) hierarchy of journal quality levels (often also associated with the journal's citation impact factor). It is not at all clear whether and how having raw preprints certified as approved -- singly or many times over -- by a variety of "deconstructed journals" (DJs) can yield a navigable, sign-posted literature of the known quality and quality-standards that we have currently. (And to instead interactively transform them into postprints is simply to reinvent peer review.) (2) Even more important: Referees are a scarce resource. Referees sacrifice their precious research time to perform this peer-reviewing duty for free, normally at the specific request of the known editor of a journal of known quality, and with the knowledge that the author will be answerable to the editor. The result of this process is the navigable, quality-controlled refereed research literature we have now, with the quality-grade certified by the journal label and its established reputation. It is not at all clear (and there are many prima facie reasons to doubt) that referees would give of their time and expertise to a "disaggregated" system to provide grades and comments on raw preprints that might or might not be graded and commented upon by other (self-selected? appointed?) referees as well, and might or might not be responsive to their recommendations. Nor is it clear that a disaggregated system would continue to yield a literature that was of any use to other users either. Classical peer review already exists, and works, and it is the fruits of that classical peer review that we are talking about making openly accessible through self-archiving, nothing more (or less)! Journals (more specifically, their editorial boards and referees) are the current implementers of peer review. They have the experience, and their quality-control "labels" (the journal-names) have the established reputations (and citation impact factors) on which such "metadata" tags depend for their informational value in guiding users. There is no need either to abandon journals or to re-invent them under another name ("DJ"). A peer-reviewed journal, medium-independently, is merely a peer-review service provider and certifier. That is what they are, and that is what they will continue to be. Titles, editorial boards and their referees may migrate, to be sure. They have done so in the past, between different toll-access publishers; they could do so now too, if/when necessary, from toll-access to open-access publishers. But none of this involves any change in the peer review system; hence there should be no implication that it does. 
(JWT Smith also contemplates paying referees for their services, another significant and untested departure from classical peer review, with the potential for bias and abuse -- if only there were enough money available to make it worth referees' while, which there is not! At realistic rates, offering to pay a referee for stealing his research time to review a paper would risk adding insult to injury.) So there is every reason to encourage institutions to self-archive their research output, such as it is, before and after peer review. But there is no reason at all to link this with speculative scenarios about new publication and/or peer review systems, which could well put the very literature we are trying to make more usable and used at risk of ceasing to be useful or usable to anyone. The message to researchers and their institutions should be very clear: The self-archiving of your research output, before (preprints) and after (postprints) peer-reviewed publication will maximize its visibility, usage, and impact, with all the resulting benefits to you and your institution. Self-archiving is merely a supplement to the existing system, an extra thing that you and your institution can do, in order to enjoy these benefits. You need give up nothing, and nothing else need change. In addition, one possible consequence, if enough researchers and their institutions self-archive enough research long enough, is that your institutional libraries might begin to enjoy some savings on their serials expenditures, because of subscription cancellations. This outcome is not guaranteed, but it is a possible further benefit, and might in turn lead to further restructuring of the journal publication system under the cancellation pressure -- probably in the direction of cutting costs and downsizing to the essentials, which will probably reduce to just providing peer review alone. The true cost of that added value, per paper, will in turn be much lower than the total cost now, and it will make most sense to pay for it out of the university's annual windfall subscriptions savings as a service, per outgoing paper, rather than as a product, per incoming paper, as in toll-access days. This outcome too would be very much in line with the practice of institutional self-archiving of outgoing research that is being advocated by the SPARC position paper. The foregoing paragraph, however, only describes a hypothetical possibility, and need not and should not be counted as among the sure benefits of author/institution self-archiving -- which are, to repeat: maximized visibility, usage, and impact for institutional research output, resulting from maximized accessibility. 
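(A concrete aside, for readers who prefer to see the overlay-journal workflow sketched above as a mechanism: the following minimal sketch -- in Python, with entirely hypothetical names and a deliberately simplified status field -- is only an illustration of the argument, not a description of any existing archive or journal software. Its point is that in an OJ the archive merely supplies the input and output medium; the refereeing loop, and the journal-name tag that certifies its outcome, are exactly those of classical peer review.)

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Status(Enum):
    UNREFEREED_PREPRINT = auto()   # deposited in the archive's preprint sector
    UNDER_REVIEW = auto()          # author has told the journal the deposit is a submission
    REVISION_REQUESTED = auto()    # editor's disposition letter and referee reports received
    CERTIFIED_POSTPRINT = auto()   # accepted: tagged with the certifying journal's name

@dataclass
class Eprint:
    """A self-archived paper; the JOURNAL-NAME tag stays empty until peer review ends."""
    title: str
    author: str
    status: Status = Status.UNREFEREED_PREPRINT
    journal_name: Optional[str] = None
    revision_rounds: int = 0

def submit(paper: Eprint) -> None:
    """The author notifies the journal that the archived preprint is a formal submission."""
    paper.status = Status.UNDER_REVIEW

def referee_decision(paper: Eprint, journal_name: str, accept: bool) -> None:
    """One round of classical peer review; the journal, not the archive, certifies."""
    if accept:
        paper.status = Status.CERTIFIED_POSTPRINT
        paper.journal_name = journal_name      # the certification metadatum
    else:
        paper.status = Status.REVISION_REQUESTED
        paper.revision_rounds += 1             # author revises, re-deposits, is re-refereed

# Example: preprint -> submission -> one round of revision -> certified postprint.
p = Eprint(title="Example paper", author="A. Author")
submit(p)
referee_decision(p, "Journal of Nonlinear Mathematical Physics", accept=False)
submit(p)  # revised draft re-deposited and resubmitted
referee_decision(p, "Journal of Nonlinear Mathematical Physics", accept=True)
print(p.status, p.journal_name, p.revision_rounds)
```

Nothing in the sketch corresponds to a "disaggregated" certifier: there is one editor-mediated loop and one journal-name tag, which is precisely why the OJ changes only the medium, not the peer review.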
RC: "As a paper could appear in more than one journal and be evaluated by more than one refereeing body, these overlays would allow the aggregation and combination of research articles by multiple logical approaches - for example, on a particular theme or topic (becoming the functional equivalent of anthology volumes in the humanities and social sciences); across disciplines; or by affiliation (faculty departmental bulletins that aggregate the research of their members)."Here the speculative notion of substituting "disaggregated journals" (DJs) for classical peer review is being conflated with the completely orthogonal matter of collections and alerting: An open-access online research literature can certainly be linked and bundled and recombined in a variety of very useful ways, but this has nothing whatsoever to do with the way its quality is arrived at and certified as such. Until an alternative has been found, tested and proven to yield at least comparable sign-posted quality, the classical peer review system is the only game in town. Let us not delay the liberation of its fruits from access-barriers still longer by raising the spectre of freeing them not only from the access-tolls but also from the self-same peer review system that (until further notice) generated and certified their quality! "Rethinking "Collections" and Selection in the PostGutenberg Age" RC: "Such journals exist today-for example, the Annals of Mathematics overlay to arXiv and Perspectives in Electronic Publishing, to name just two-and they will proliferate as the volume of distributed open access content increases."The Annals of Mathematics is an "overlay" journal (OJ) of the kind I described above, using classical peer review. It is not an example of the "disaggregated" quality control system (DJ). Perspectives in Electronic Publishing, in contrast, is merely a collection of links to already published work. It does not represent any sort of alternative to classical peer review and journal publication. RC: "Besides overlay journals pointing to distributed content, high-value information portals - centered around large, sophisticated data sets specific to a particular research community - will spawn new types of digital overlay publications based on the shared data."Journals that are overlays to institutional research repositories are merely certifying that papers bearing their tag have undergone their peer-review and have met their established quality standards. This has nothing to do with alternative forms of quality control, disaggregated or otherwise. Post hoc collections (link-portals) have nothing to do with quality control either, although they will certainly be valuable for other purposes. RC: "Regardless of journal type, the basis for assessing the quality of the certification that overlay journals provide differs little from the current journal system: eminent editors, qualified reviewers, rigorous standards, and demonstrated quality."Not only does it not differ: Overlay Journals (OJs) will provide identical quality and standards -- as long as "overlay" simply means having the implementation of peer review (and the certification of its outcome) piggy-back on the institutional archives, as it should. Alternative forms of quality control (e.g., DJs), on the other hand, will first have to demonstrate that they work. And neither of these is to be confused with the post-hoc function of aggregating online content, peer-reviewed or otherwise. 
This should all be made crystal clear in the SPARC paper, partly by stating it in a clear, straightforward way, and partly by omitting the speculative options that only cloud the picture needlessly (and have nothing to do with institutional self-archiving and its rationale [open access], but simply risk confusing and discouraging would-be self-archivers and their institutions).

RC: "In addition to these analogues to the current journal certification system, a disaggregated model also enables new types of certification models. Roosendaal and Geurts have noted the implications of internal and external certification systems."

Please, let us distinguish the two by calling "internal certification" pre-certification (or "self-certification") so as not to confuse it with peer review, which is by definition external (except in that happy but rare case where an institution happens to house enough of the world's qualified experts on a given piece of research not to have to consult any outside experts). A good deal of useful pre-filtering can be done by institutions on their own research output, especially if the institution is large enough. (CERN has a very rigorous internal review system that all outgoing research must undergo before it is submitted to a journal for peer review.) But, on balance, "internal certification" rightly raises the spectre of vanity press publication. Nor is it a coincidence that when universities assess their own researchers for promotion and tenure, they tend to rely on the external certification provided by peer-reviewed journals (weighted sometimes by their impact factors) rather than just internal review. The same is true of the external assessors of university research output. So, please, let us not link the very desirable and face-valid goal of maximizing universities' research visibility and research impact through open access provided by institutional self-archiving with the much more dubious matter of institutional self-certification.

RC: "Certification may pertain at the level of internal, methodological considerations, pertinent to the research itself - the standard basis for most scholarly peer review. Alternatively, the work may be gauged or certified by criteria external to the research itself - for example, by its economic implications or practical applicability. Such internal and external certification systems would typically operate in different contexts and apply different criteria. In a disaggregated model, these multiple certification levels can co-exist."

This is all rather vague, and somewhat amateurish, and would (in my opinion) have been better left out of this otherwise clear and focussed call for institutional self-archiving of research output. And the idea of expecting referees to spend their precious time refereeing already-refereed and already-certified (i.e., already-published) papers yet again is unrealistic in the extreme, especially considering the growing number of papers, the scarcity of qualified expert referees (who are otherwise busy doing the research itself), and the existing backlogs and delays in refereeing and publication. Besides, as indicated already, refereeing is not passive tagging or grading: It is a dynamic, interactive, and answerable process in which the preprint is transformed into the accepted postprint, and certified as such. Are we to imagine each of these papers being re-written every time they are submitted to yet another DJ?
There is a lot to be said for postpublication revision and updating of the postprints ("post-postprints") in response to postpublication commentary (or to correct substantive errors that come to light later), but it only invites confusion to call that "disaggregated journal publication." The refereed, journal-certified postprint should remain the critical, canonical, scholarly and archival milestone that it is, perpetually marking the fact that that draft successfully met that journal's established quality standards. Further iterations of this refereeing/certification process make no sense (apart from being profligate with scarce resources) and should in any case be tested for feasibility and outcome before being recommended!

RC: "To support both new and existing certification mechanisms, quality certification metadata could be standardized to allow OAI-compliant harvesting of that information. This would allow a reader to determine whether there is any certification information about an article, regardless of where the article originated or where it is discovered."

Might I venture to put this much more simply (and restrict it to the refereed research literature, which is my only focus)? By far the most relevant and informative "metadatum" certifying the information in a research paper is the JOURNAL-NAME of the journal in which it was published (signalling, as it does, the journal's established reputation, quality level, and impact factor)! (Yes, the AUTHOR-NAME and the AUTHOR-INSTITUTION metadata-tags may be useful sometimes too, but those cases do not, as they say, "scale" -- otherwise self-certification would have replaced peer review long ago. COMMENT-tags would be welcome too, but caveat emptor.)

"Peer Review, Peer Commentary, and Eprint Archive Policy"

Please let us not lose sight of the fact that the main purpose of author/institution self-archiving in institutional Eprint Archives is to maximize the visibility, uptake and impact of research output by maximizing its accessibility (by providing open access). It is not intended as an experimental implementation of speculations about untested new forms of quality control! That would be to put this all-important literature needlessly at risk (and would simply discourage researchers and their institutions from self-archiving it at all). There is a huge amount of further guiding information that can be derived from the literature to help inform navigation, search and usage. A lot of it will be digitometric analysis based on usage measures such as citations, hits, and commentary. But none of these digitometrics should be mistaken for certification, which, until further notice, is a systematic form of expert human interaction and judgement called peer review.

Harnad, S. & Carr, L. (2000) Integrating, Navigating and Analyzing Eprint Archives Through Open Citation Linking (the OpCit Project). Current Science 79(5): 629-638.

RC: "Depending on the goals established by each institution, an institutional repository could contain any work product generated by the institution's students, faculty, non-faculty researchers, and staff. This material might include student electronic portfolios, classroom teaching materials, the institution's annual reports, video recordings, computer programs, data sets, photographs, and art works -- virtually any digital material that the institution wishes to preserve.
However, given SPARC's focus on scholarly communication and on changing the structure of the scholarly publishing model, we will define institutional repositories here -- whatever else they might contain -- as collecting, preserving, and disseminating scholarly content. This content may include pre-prints and other works-in-progress, peer-reviewed articles, monographs, enduring teaching materials, data sets and other ancillary research material, conference papers, electronic theses and dissertations, and gray literature."

This passage is fine, and refocusses on the items of real value in the SPARC position paper.

RC: "To control and manage the accession of this content requires appropriate policies and mechanisms, including content management and document version control systems. The repository policy framework and technical infrastructure must provide institutional managers the flexibility to control who can contribute, approve, access, and update the digital content coming from a variety of institutional communities and interest groups (including academic departments, libraries, research centers and labs, and individual authors). Several of the institutional repository infrastructure systems currently being developed have the technical capacity to embargo or sequester access to submissions until the content has been approved by a designated reviewer. The nature and extent of this review will reflect the policies and needs of each individual institution, possibly of each participating institutional community. As noted above, sometimes this review will simply validate the author's institutional affiliation and/or authorization to post materials in the repository; in other instances, the review will be more qualitative and extensive, serving as a primary certification."

This is all fine, as long as it is specified that what is at issue is institutional pre-certification or self-certification of its unrefereed research (preprints). For peer-reviewed research the only institutional authentication required is at most that the AUTHOR-NAME and JOURNAL-NAME are indeed as advertised! (The integrity of the full text could be vetted too, but I'm inclined to suggest that that would be a waste of time and resources at this point.) What is needed right now is that institutions should create and fill their own Eprint Archives with their research output, pre- and post-refereeing, immediately. The "definitive" text, until journals really all become "overlay" journals, is currently in the hands of the publishers and subscribing libraries. For the time being, let authors "self-certify" their refereed, published texts as being what they say they are; let's leave worrying about more rigorous authentication for later. For now, the goal should be to self-archive as much research output as possible, as soon as possible, with minimal fuss. The future will take care of itself.

RC: "Institutional repository policies, practices, and expectations must also accommodate the differences in publishing practices between academic disciplines. The early adopter disciplines that developed discipline-specific digital servers were those with an established pre-publication tradition. Obviously, a discipline's existing peer-to-peer communication patterns and research practices need to be considered when developing institutional repository content policies and faculty outreach programs.
Scholars in disciplines with no prepublication tradition will have to be persuaded to provide a prepublication version; they might fear plagiarism or anticipate copyright or other acceptance problems in the event they were to submit the work for formal publication. They might also fear the potential for criticism of work not yet benefiting from peer review and editing. For these non-preprint disciplines, a focus on capturing faculty post-publication contributions may prove a more practical initial strategy."

Agreed. And here are some prima facie FAQs for allaying each of these by now familiar prima facie fears: Authentication, Corruption, Certification, Evaluation, Peer Review, Copyright, Plagiarism, Priority, Tenure/Promotion, Legality, Publisher Agreement.

RC: "Including published material in the repository will also help overcome concerns, especially from scholars in non-preprint disciplines, that repository working papers might give a partial view of an author's research."

Indeed. And that is the most important message of all -- and the primary function of institutional eprint archives: to provide open access to all peer-reviewed research output!

RC: "Therefore, including published material, while raising copyright issues that need to be addressed, should lower the barrier to gaining non-preprint traditions to participate. Where authors meet traditional publisher resistance to the self-archiving rights necessary for repository posting, institutions can negotiate with those publishers to allow embargoed access to published research."

Fine.

RC: "While gaining the participation of faculty authors is essential to effecting an evolutionary change in the structure of scholarly publishing, early experience suggests better success when positioning the repository as a complement to, rather than as a replacement for, traditional print journals."

Not only "positioning" it as a complement: Clearly proclaiming that a complement, not a replacement, is exactly what it is! Not just with respect to the relatively trivial issue of on-paper vs. on-line, but also with respect to the much more fundamental one, about journal peer review (vide supra). Institutional self-archiving is certainly no substitute for external peer review. (This is stated clearly in some parts of the SPARC paper, but unfortunately contradicted, or rendered ambiguous, in other parts.)

RC: "This course partially obviates the most problematic objection to open access digital publishing: that it lacks the quality and prestige of established journals."

This is a non-sequitur and a misunderstanding: The quality and prestige come from being certified as having met the quality standards of an established peer-reviewed journal. This has nothing whatsoever to do with the medium (on-paper or on-line), nor with the access system (toll-access or open-access); and it certainly cannot be attained by self-archiving unrefereed preprints only. The papers must of course continue to be submitted to peer-reviewed journals for refereeing, revision, and subsequent certification.

RC: "This also allows repository proponents to build a case for faculty participation based on the primary benefits that repositories deliver directly to participants, rather than relying on secondary benefits and on altruistic faculty commitment to reforming a scholarly communications model that has served them well on an individual level."

I could not follow this.
The primary benefits of self-archiving are the maximization of the visibility, uptake and impact of research output by maximizing its accessibility (through open access). Researchers certainly will not, and should not, self-archive in order to support untested new "certification" conjectures, nor even to ease their institutions' serials budgets. The appeal must be straight to researchers' self-interest in promoting their own research.

RC: "Additionally, value-added services such as enhanced citation indexing and name authority control will allow a more robust qualitative analysis of faculty performance where impact on one's field is a measurement. The aggregating mechanisms that enable the overall assessment of the qualitative impact of a scholar's body of work will make it easier for academic institutions to emphasize the quality, and de-emphasize the quantity, of an author's work. This will weaken the quantity-driven rationale for the superfluous splintering of research into multiple publication submissions. The ability to gauge a faculty member's publishing performance on qualitative rather than quantitative terms should benefit both faculty and their host institutions."

All true, but strategically, it is best to stress maximization of existing performance indicators, rather than hypothetical new ones:

Harnad, S. (2001) "Research Access, Impact and Assessment are linked." Times Higher Education Supplement 1487: p. 16.

RC: "Learned society publishers are for the most part far less aggressive in exploiting their monopolies than their for-profit counterparts. Even so, most society publishing programs, even in a not-for-profit context, often contribute significantly to covering an organization's operating expenses and member services. It is not surprising, then, that proposals advocating institutional repositories and other open access dissemination of scholarly research generate anxiety, if not outright resistance, amongst society publishers. While one hopes that societies adopt the broadest perspective possible in serving the needs of their members -- including the broadest possible access to the scholarly research in the field -- it is unlikely that societies will trade their organizations' solvency for the greater good of scholarship. It is important, therefore, to review how society publishers can continue to operate in an environment of institutional repositories and other open access systems."

Once the causal connection between access and impact is clearly demonstrated to the research community, it is highly unlikely that they will knowingly choose to continue to subsidise their Learned Societies' "good works" with the lost impact of their own work, by continuing to hold it hostage to impact-blocking access-tolls: Societies will need to find better ways to support their good works.

RC: "Some suggest that institutional repositories, pre-print servers, and electronic aggregations of individual articles will undermine the importance of the journal as a packager of articles. However, institutional repositories and other open access mechanisms will only threaten the survival of scholarly journals if they defeat the brand positions of the established society journals and if individual article impact metrics replace journal impact factors in academic advancement decisions."

Most of the above is not true, and hence better left unsaid.
It is quite possible (and hence should not be denied) that author/institution self-archiving of refereed research may eventually necessitate downsizing by publishers (to become peer-review/certification service-providers):

"Hypothetical Sequel" "Downsizing"

But none of this has anything to do with journal- vs. author-impact metrics! The ISI's Web of Science has already made it possible (and very useful) for institutions and funding agencies to use either journal or author citation impact metrics for assessment, whichever is more useful and informative, and it is very likely that weighting publications only by their journal-impact will prove a much blunter instrument than weighting them by the paper's and/or author's impact:

Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35

But once the institutional Eprint Archives are up and filled, far richer and more sensitive digitometric measures of impact and usage are waiting to be devised and tested on this vast corpus. A taste is already available from citebase and its correlator. See the bibliography of ongoing research on these new digitometric performance indicators.

RC: "On the first point, journal brand reputation will, for the foreseeable future, continue to be integral to the assessment of article and author quality."

For the reader/user/navigator of the literature, certainly. But more sensitive measures are developing too, for the evaluator, funder and employer. The all-important JOURNAL-NAME tag, and the established quality level and impact to which it attests, will continue to be indispensable sign-posts, but a great deal more will be built on top of them, once the entire refereed journal literature (24K journals) is online and open-access.

RC: "Market-aware journals with prominent editorial boards and well-established publishing histories should be able to maintain their prestige, even with a proliferation of article-based aggregations. As to the second point, while new metrics will evolve that demonstrate the quantitative impact of individual articles, rigorous peer review will continue to provide value. Even after individual article impact analysis becomes widespread and accepted by academic tenure committees, stringent refereeing standards will continue to play a central role in indicating quality."

Correct, and mainly because peer review is the cornerstone of it all.

RC: "Learned societies have long-standing relationships with their members and they should be able to act as focal points for the research communities they represent. While society dues typically include a journal subscription, society members also enjoy other benefits of membership -- and, presumably, additional value -- beyond the journal subscription itself. Societies, therefore, provide community-supporting services to justify their members' dues besides the value allocated to the journal subscription. While a commercial publisher would find it difficult to charge a subscription fee for a journal freely available online, society publishers -- by repositioning the benefits of membership -- might well prove able to allow journal article availability via open access repositories without experiencing substantial membership cancellations or revenue attrition."

In other words, members of learned societies may still be willing to pay membership dues to support their societies' "good works."
But there is no need to call these dues "subscriptions"! And the cost of peer review itself can be covered very easily out of institutional subscription savings, if and when it becomes necessary.

RC: "Given the extent of government and private philanthropic foundation funding for academic research, especially in the sciences, such funding agencies have a vested interest in broadening the dissemination of scientific research. There are several mechanisms by which government and private funding agencies could help to achieve this broadened dissemination. It has been suggested that government and foundation research grants could be written to include subsidies for author page charges and other input-side fees to support open access business models. Such stipulations would help effect change in those disciplines, primarily in the sciences, where author page charges are the norm. Obviously, such subsidies would be less effective in disciplines where input-side models bear the stigma of vanity publishing; still, over time, this resistance could be overcome."

If/when open-access prevails enough to reduce publisher income, it will at the same time increase institutional savings (from cancelled subscriptions). As peer review costs much less than the whole of what journal publishers used to do, it can easily be paid for, at the author/institution end, as a service cost for outgoing research instead of as a product cost for incoming research as it is now, out of just a portion of institutions' annual windfall savings, as indicated below:

RC: "ECONOMICALLY: The burden of scholarly journal costs on academic libraries has been well documented. While the variety of institutional contexts and potential implementations make it difficult to project institutional repository development and operational costs with any precision, the evidence so far suggests that the resources required would represent but a fraction of the journal costs that libraries now incur and over which they have little control."

And that is mainly because peer review alone -- which will be journal publishers' only remaining essential service if and when all journal publication becomes all open-access publication -- costs far less than what journal subscription/license tolls used to cost. The per-paper archiving cost, distributed over the research institutions that generate the outgoing papers, is negligible, compared to what it cost for incoming papers in the toll-based system.

"The True Cost of the Essentials (Implementing Peer Review)"

RC: "Several institutions have applied the e-prints self-archiving software to implement institutional repositories. Developed at the University of Southampton, the free eprints.org self-archiving software now comes configured to run an institutional pre-prints archive. The generic version of e-prints is fully interoperable with the OAI Metadata Harvesting Protocol."

Not an institutional pre-prints archive: An institutional Eprints Archive. (Eprints = preprints + postprints)

RC: "Universities that have implemented e-prints solutions include Cal Tech, the University of Nottingham, University of Glasgow, and the Australian National University. The participants in all these programs have described their experiences, providing practical insights that should benefit others contemplating an OAI-compliant e-prints implementation."

See the CalTech review of their experience with eprints for SPARC.
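(One concrete aside on the harvesting mechanics mentioned in the passage just quoted, and in the earlier passage on certification metadata: the sketch below shows, in Python, what it looks like to fetch one record's Dublin Core metadata from an OAI-compliant Eprints Archive and read off something like the JOURNAL-NAME and AUTHOR-NAME tags. The GetRecord verb and the oai_dc metadata format are part of the OAI protocol itself; the repository URL, the record identifier, and the use of the dc:source element to carry the journal name are illustrative assumptions only, not a description of any particular archive's configuration.)

```python
# Minimal sketch: harvest one record's Dublin Core metadata over OAI-PMH.
# The base URL and identifier below are placeholders; which dc element carries
# the certifying journal's name (dc:source here) is an assumed convention.
from urllib.request import urlopen
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"

def get_record(base_url: str, identifier: str) -> dict:
    query = urlencode({"verb": "GetRecord",
                       "identifier": identifier,
                       "metadataPrefix": "oai_dc"})
    with urlopen(f"{base_url}?{query}") as response:
        root = ET.parse(response).getroot()
    fields = {}
    for element in root.iter():
        if element.tag.startswith(DC):
            fields.setdefault(element.tag[len(DC):], []).append(element.text)
    return fields

# Hypothetical usage: read the certification metadata of one self-archived postprint.
record = get_record("https://eprints.example.edu/cgi/oai2",
                    "oai:eprints.example.edu:1234")
print(record.get("source"))    # e.g., the JOURNAL-NAME of the certifying journal
print(record.get("creator"))   # the AUTHOR-NAME tag
```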
Stevan Harnad
American Scientist Open Access Forum

Wednesday, December 2. 2009

The 1994 "Subversive Proposal" at 15: A Response
On Wed, 2 Dec 2009, Stevan Harnad [SH2] wrote:
SH3: Just dull-wittedness. It should have been obvious already then that the primary target was refereed journal articles and that "esoteric" was a red herring.

On June 27, 1994, Stevan Harnad [SH1] wrote:

SH2: "What on earth do you mean by "esoteric"? Are we supposed to have different criteria for a publication depending on how big a readership it is likely to have? In that case we need a sliding scale whose value we cannot possibly know in advance for every candidate piece of writing."

SH2: "Paper publishing? Is this, then, merely about getting published articles online? That's not likely to be a very radical proposal, since (today, in 1994) it is surely a foregone conclusion that publishers will all have online editions within a few years. [So is this about] online, online-only, or free-online?"

SH3: More somnambulism: It should have been clearly stated as "free online access to refereed journal articles" (i.e., OA).

SH2: "Give-away writing might be a natural kind, but what distinguishes give-away writing from non-give-away writing? How does one recognize it in advance? And surely the distinction is not just based on probable market but on some other aspect of academic motivation. After all, textbooks are as "academic" as one can get, yet textbook authors are certainly motivated to sell their words, otherwise many would not do the work of writing them."

SH3: Addle-brainedness, yet again: Refereed research articles are written purely for research usage and impact, not for sales revenue. That's how you distinguish them. And you recognize them by the journal-names.

SH2: "Who are "peers"? And what is the reason for this obsession with reaching their "eyes and minds"? The fact that they are all in some sort of "esoteric" club surely is not the explanation."

SH3: Peers are the fellow-researchers worldwide for whose usage peer-reviewed research is conducted and published. "Eyes and minds" should have been research uptake, usage and impact (e.g., as measured by downloads and citations).

SH2: "And this "building on one another's contributions" sounds cosy enough, but what is really going on here? It's certainly not about verbal Lego Blocks!"

SH3: Research uptake, usage, applications, citations.

SH2: "Fine. These authors are saints, or monks. But why? For what?"

SH3: Their research progress, their funding and their careers are based on the uptake and usage of their research findings, not on income from the sales of their writings. (User access-barriers are also author impact-barriers.)

SH2: "The criterion sounds like it's esotericity itself, but why? Besides, that's circular: Is give-away writing esoteric because its target readership is tiny? Or is its target readership tiny because the writing's esoteric?"

SH3: Fuzzy thinking again: Esotericity, though roughly correlated, is a red herring. Give-away writing is give-away writing, and wants to be freely accessible online because access-barriers are usage- and impact-barriers. (Yes, the potential users of most refereed research are few, but that's not the point, nor the criterion: the need to maximize usage and impact is the criterion.)

SH2: "And FTP archiving sounds fine, but isn't it already obsolete? This is June 27, 1994, but Tim Berners-Lee created the Web 5 years ago!"

SH3: Ignorance, sir, pure ignorance.

SH2: "And there you go again with "electronic publication"? Is this just about moving to electronic publication? But that's surely going to happen anyway."

SH3: Fuzziness, pure fuzziness.
It is and was about free online access, not about online publication.

SH2: "And is "esoteric" publication, then, merely "vanity press" publication? If so, then it's no wonder its likely readership is so tiny..."

SH3: It's about refereed publication, hence not vanity-press. (But there was definitely muddle and ambiguity regarding unrefereed vs. refereed drafts. The focus should have been directly on refereed drafts, with unrefereed drafts being only a potential entry point in some cases.)

SH2: "But physicists (who are doing it on the Web, by the way, not via FTP) have already been doing much the same thing (sharing their pre-refereeing drafts) on paper for years now, even before the web, or FTP, email, or the online medium itself. Is that all you mean by "esoteric"? And if so, the online medium's there now: those who want to share drafts are free to share them that way. That isn't even "publication," it's just public sharing of work-in-progress."

SH3: You're right, and that's yet another gap in my original logic. Nothing is or was stopping those who might wish to make their unrefereed drafts publicly accessible online from doing so; but that is not the point, nor the problem, nor the objective. The problem is access to refereed, published research. All potential users need access to that; and all authors want their refereed research to be accessible to all its potential users (not just those whose institutions can afford to subscribe to the journal in which it happens to be published); whereas not all (or even most) authors want their unrefereed drafts to be accessible to all. (And, yes, "esoteric" is once again a red herring. It ought to have been "peer-reviewed research" all along, to short-circuit potential ambiguities and misunderstandings.)

SH2: "Why didn't you say that in the first place? "peer-reviewed" rather than "esoteric.""

SH3: Mea culpa.

SH2: "But, again, nothing stands in the way of authors sharing unrefereed drafts online with their tiny intended public prior to submitting them for peer-review and then publication, does it? What's your point?"

SH3: The point is and should have been about peer-reviewed drafts. Earlier unrefereed drafts were just one potential entry point. (Perhaps I was just too timid or unimaginative to say "post your peer-reviewed drafts" at that time.) But for me, another major motivation for posting writings was to elicit quote/commentary (as in this very commentary). And although the refereed, published draft can elicit commentary too, it is especially useful at the draft stage, when it can still help shape the final published version.

SH2: "But is [what's sought] really the "patina" of paper publishing, or the patina of peer-review, and a given publication's prior track record for peer-review quality standards?"

SH3: Just peer-review and track-record. The rest was again just ill-thought-through muddle. (There may have been some faint excuse for such muddle way back in 1994; but one can hardly invoke that today, 15 years later, when all of these muddles have since been raised, rehearsed, and resolved, many times over, in countless online discussion forums, FAQs, conferences, and published articles, chapters and books. Hence the frayed patience of weary archivangelists even if they themselves are not free of original sin, insofar as not having thought all things through sufficiently rigorously at the very outset is concerned. There is no excuse for the same old muddles 15 years on...)

SH2: "And what, exactly, is the scope of "peer-reviewed publication"?
Apart from journal articles (and refereed conference proceedings), aren't monographs, edited books and even textbooks "peer-reviewed"? And aren't some of them "non-esoteric," because revenue-seeking?"

SH3: There is genuine uncertainty about the cut-off point. All peer-reviewed journal articles are, without exception, author give-aways, hence all can and should be made freely accessible online to maximize their usage and impact. The same may be true for some monographs, edited books (and possibly even some textbooks, if the authors are magnanimous). But none of these other categories is exception-free (rather the contrary). Freeing authors' writings online against their wills cannot be the objective of the Open Access (OA) movement. Nor can providing free access to writings to which the author does not want there to be free access serve as the basis for OA mandates by institutions and funders. That is why the exception-free give-away content -- written solely for usage and impact -- is the primary target of the OA movement (and of OA mandates). By the way, another enormous oversight in the Subversive Proposal (though I can hardly imagine how it could have been anticipated at that time) was the failure to call for (what we would now call) Green OA self-archiving mandates by institutions and funders. It only became apparent after another half-decade had passed with researchers' fingers still not stirred into motion by the Subversive Proposal that mandates would be necessary...

SH2: "The (obvious) flaw with the hope of making all refereed publications free online by first making their unrefereed drafts free online is that, unlike physicists (and, before them, computer scientists, and economists), most authors in most disciplines do not wish to make their unrefereed drafts public (either because they consider it unscholarly, or because they fear professional embarrassment, or because they don't want to immortalize their errors, or because they think unrefereed results could be dangerous, e.g. to public health)."

SH3: All true, and, again, mea culpa. The road to the optimal solution -- the one that covers all refereed research, immediately upon acceptance for publication -- has been somewhat circuitous: First, the Subversive Proposal recommended self-archiving all unrefereed preprints (but that would not work for the many researchers and disciplines that do not wish to make unrefereed drafts public). A variant on that strategy was the "preprints plus corrigenda" strategy, which recommended self-archiving unrefereed preprints and later also self-archiving a file containing all corrections arising from the refereeing. Likewise inadequate, partly because, again, many authors don't want to make unrefereed drafts public, and also because it would be awkward and inconvenient for authors to have to archive -- and for users to have to consult -- separate preprint and corrigenda files. It has to be added that the P&C strategy was never really intended as an actual overt practice: it was just intended to assuage the worries of those who thought there was some sort of insurmountable obstacle in principle to self-archiving the refereed version in cases where the publisher objected. In reality, some publishers have objected even to self-archiving the unrefereed preprint [this is called the "Ingelfinger Rule"], but most have since dropped this objection. And the sensible strategy for the refereed postprint is to self-archive it and reconsider only if and when a publisher requests a take-down.
Sixty-three percent of journals already endorse immediate OA self-archiving of the refereed postprint. And in the past two decades, there have been virtually no publisher take-down requests for the many million refereed postprints that have been self-archived. It's absurd to let a one-in-a-million exception drive practice, especially when all it would entail would be a take-down! But for those authors (and for those mandates) that insist on refraining from making the refereed postprint OA for the remaining 37% of articles until their publishers endorse it (most endorse it after an embargo period), the best author practice is to deposit the refereed final draft in their own institutional repositories (IRs) anyway, immediately upon acceptance for publication, but to set access to it as "Closed Access" instead of Open Access during any embargo. That way the repository's semi-automatic "email eprint request" Button can provide almost-immediate, almost-OA to any would-be user during the embargo. At the time of the Subversive Proposal, however, neither the OAI interoperability protocol, nor OAI-compliant institutional repository software, nor the notion of self-archiving mandates yet existed. So today's Best Practice solution was not yet in sight, namely: deposit, and mandate deposit, of all refereed final drafts immediately upon acceptance; set access to the 63% of deposits that are published in Green journals to OA immediately; and, if you wish, set access to Closed Access for the remaining 37%, and rely on the Almost-OA Button during the embargo. Once such IDOA -- Immediate Deposit, Optional Access -- mandates are adopted globally by institutions and funders, the days of embargoes are numbered anyway, under the overwhelming pressure of the benefits of OA. And another thing that was not yet in sight in 1994 was the fact that the benefits of OA (likewise not yet named then!) could and would be demonstrated to authors and their institutions and funders quantitatively, in the form of the scientometric evidence of the "OA Advantage": significantly increased download and citation impact for OA articles, compared to non-OA ones. This too would eventually go on to encourage mandates, as well as to increase the use of OA content to generate rich new metrics for measuring and rewarding research impact. None of this was quite obvious yet in 1994.

SH2: "And what about all the published reprints that authors would prefer not to have shared with the world when they were just unrefereed drafts?"

SH3: Self-archive the refereed version immediately upon publication (and rely on the Button if you wish to observe the access-embargo).

SH2: "How and why did this "subversive proposal" (to the author community) turn into speculations about publishing and publishers?"

SH3: This is the plaint that plagues and shames me the most! For the needless and counterproductive speculation about the future of publication -- along with all the essential features of what would eventually be called "Gold OA publishing" -- was introduced in that proposal, with the result that premature "gold fever" contributed to distracting from and delaying the ("Green OA") self-archiving that was the essence of the Subversive Proposal.
But I do think it was unavoidable, in responding to the (now at least) 38 prima facie worries that immediately began to be raised time and time again about self-archiving (particularly worries #8, #9, #14, #17, #19, #28, #30, & #31), to sketch the obvious way in which publication cost-recovery could evolve into the Gold OA model if and when universal Green OA self-archiving should ever make it necessary. But I never imagined that the prospect of gold would become such an attraction -- mostly to those, like librarians, not in a position to provide Green OA themselves yet groaning under the burden of the serials crisis, but also to publishing reform theorists more interested in publishing economics and iniquities than in researchers' immediate access needs -- that gold fever would propagate and distract from providing and mandating Green OA, rather than reassuring and reinforcing it. (For some reason that neither Peter Suber nor I can quite fathom, people take to Gold much more readily than to Green, even to the extent of imagining that OA is synonymous with Gold OA publishing.) Well, one reaps what one sows, and I accept a large part of the blame for having already begun to sprinkle gold dust way back in 1994, and continuing to stir it for some years to come -- until I at last learned from sorry experience to stop speculating about tomorrow's hypothetical transitions and focus only on the tried, tested and sure practical means of reaching 100% OA today: universal Green OA deposit mandates by institutions and funders. I still think, however, that the proof-of-principle for Gold OA publishing by BMC and PLoS was, on balance, useful, even though premature, because it did serve to allay worries that universal Green OA self-archiving would destroy peer-reviewed publication altogether, by making subscriptions unsustainable, and hence making publication costs unrecoverable. No, it would merely induce a transition to Gold-OA publishing to recover the costs of publication. (Moreover, the costs of publishing then, after having achieved universal Green OA, would be far lower -- just the costs of peer review alone -- and paid for out of a fraction of the self-same annual institutional windfall savings on which the premise of subscription collapse underlying this set of worries is predicated.) But there I go, succumbing to gold fever again...

SH2: "In this speculation about publishing media and costs, what have "pages" to do with it? And what, exactly, does the 25% figure pay for (and what is the 75% that is no longer needed)?"

SH3: Pages have nothing to do with it. That was just a regrettable momentary lapse into the papyrocentric thinking of the Gutenberg era. The right reckoning is total publication costs per article. And once authors are all systematically depositing their refereed drafts in their institutional repositories, and users are using those OA drafts instead of the publisher's proprietary version, the global network of IRs becomes the access-provider and archive, and the only function (and expense) remaining for journals is the implementation of peer review, certified by their name and track-record. (The peers, of course, continue to referee for free, as they always did.)

SH2: "You seem to be pretty generous with other people's money.
["advance subsidies (from authors' page charges, learned society dues, university publication budgets and/or governmental publication subsidies)"] And you seem to have forgotten the money already being paid for subscriptions."

SH3: More of the perils of premature speculation. Of course no extra funds are needed if the transition to Gold OA only comes after universal Green OA has been reached, and only if and when that universal Green OA in turn makes subscriptions unsustainable. For then, by the very same token, the subscription cancellation releases the funds to pay for Gold OA -- whereas paying pre-emptively for Gold OA now, while it is unnecessary, because most of the essential journals are still subscription-based, requires extra money (and at an inflated -- because again premature -- cost). But you see how easy it is to keep getting taken up with Gold OA speculation instead of attending to Green OA practice, within reach since 1994, yet still not grasped?

SH2: "But what, exactly, is this money supposed to be paying for? (Again, there seems to be conflation of online-only publication, and its costs, with free online access-provision: surely they are not the same thing.)"

SH3: Today: nothing. After universal Green OA -- if and when that makes subscriptions collapse -- it will pay for peer-review alone.

SH2: "This still sounds quite muddled and vague: We've heard about "esoteric," give-away writings, but it has not yet been made clear what they are, and why they are give-aways."

SH3: Refereed journal articles, written only for research impact.

SH2: "We have heard about online publication, and online-only publication."

SH3: The Subversive Proposal was only meant to be about making refereed research freely accessible online.

SH2: "We have heard about (some) authors making their unrefereed drafts free online. But how (and why) do we get from that to free online refereed publication?"

SH3: Forget about the unrefereed drafts; they're just extras. The way to make refereed research free online is to deposit your refereed final draft, free for all, in your Institution's OA Repository, immediately upon acceptance for publication.

SH2: "And [how (and why) do we get] from there to paying to publish instead of paying to subscribe? (What needs to be paid for, how and why? And how do we get there from here, given that most authors do not wish to make their unrefereed drafts public?)"

SH3: Right now, nothing extra needs to be paid for. Subscriptions are paying for it all, handsomely. All that's needed is author keystrokes, to deposit all final refereed drafts, immediately upon acceptance for publication. That's all that's been needed since 1994, but now we know the keystrokes need to be mandated, to set the fingers in motion, so what's needed is institutional and funder Green OA self-archiving mandates. All of that is for sure, and will generate 100% OA with certainty. The rest is speculation: If universal Green OA makes subscriptions no longer sustainable, publishers will cut costs, downsize to the essentials -- providing peer review alone -- paid for, on the Gold OA model, out of the institutional subscription cancellation savings.

SH2: "Sounds like a rather inchoate proposal to me... (And you reputedly expect this to happen overnight? Might we have some more details about what we might expect to happen on that fabled night?)"

SH3: Inchoate it was, in 1994, though the practical means to do it overnight (fingers) were already available in 1994.
Since then, the OAI protocol and the IR software have made it a lot simpler and easier. But the keystrokes remain to be done. Thirty-eight prima facie worries have kept fingers in a state of Zeno's Paralysis, despite all being answered, fully, many, many times over. Now it is time to mandate the keystrokes. That too could be done overnight, by the stroke of a Department Head's, DVC's or VC's pen, as Wendy Hall (Southampton), Tom Cochrane (QUT), and Bernard Rentier (Liege) have since shown. Will it be another 15 years before the remaining 10,000 universities and research institutions (or at least the top 1000) wield the mighty pen to unleash the even mightier keystrokes (as 68 Institutions and Departments, and 42 Funders have already done)? Or will we keep dithering about Gold OA, publishing reform, peer review reform, re-use rights, author addenda, preservation and the rest of the 38 factors causing Zeno's Paralysis for another decade and a half?

Stevan Harnad
American Scientist Open Access Forum

Saturday, November 28. 2009
Collini on "Impact on humanities" in Times Literary Supplement

Commentary on:

One can agree whole-heartedly with Professor Collini that much of the spirit and the letter of the RAE and the REF and their acronymous successors are wrong-headed and wasteful -- while still holding that measures ("metrics") of scholarly/scientific impact are not without some potential redeeming value, even in the Humanities. After all, even expert peer judgment, if expressed rather than merely silently mentalized, is measurable. (Bradley's observation on the ineluctability of metaphysics applies just as aptly to metrics: "Show me someone who wishes to refute metaphysics and I'll show you a metaphysician with a rival system.") The key is to gather as rich, diverse and comprehensive a spectrum of candidate metrics as possible, and then test and validate them jointly, discipline by discipline, against the existing criteria that each discipline already knows and trusts (such as expert peer judgment) so as to derive initial weights for those metrics that prove to be well enough correlated with the discipline's trusted existing criteria to be useable for prediction on their own. Prediction of what? Prediction of future "success" by whatever a discipline's (or university's or funder's) criteria for success and value might be. There is room for putting a much greater weight on the kinds of writings that fellow-specialists within the discipline find useful, as Professor Collini has rightly singled out, rather than, say, on success in promoting those writings to the general public. The general public may well derive more benefit indirectly, from the impact of specialised work on specialists, than from its direct impact on themselves. And of course industrial applications are an impact metric only for some disciplines, not others. Ceterum censeo: A book-citation impact metric is long overdue, and would be an especially useful metric for the Humanities.

Harnad, S. (2001) Research access, impact and assessment. Times Higher Education Supplement 1487: p. 16. Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35. Brody, T., Carr, L., Harnad, S. and Swan, A. (2007) Time to Convert to Metrics. Research Fortnight pp. 17-18. Harnad, S. (2008) Open Access Book-Impact and "Demotic" Metrics Open Access Archivangelism October 10, 2008. Harnad, S.
(2008) Validating Research Performance Metrics Against Peer Rankings. Ethics in Science and Environmental Politics 8 (11) doi:10.3354/esep00088 Special Issue on "The Use And Misuse Of Bibliometric Indices In Evaluating Scholarly Performance" Harnad, S. (2009) Open Access Scientometrics and the UK Research Assessment Exercise. Scientometrics 79 (1) Friday, November 13. 2009Richard Poynder Interviews Lars Fischer About German Open Access Mandate Petition
Soon after OA-week, the UK's Times Higher Education (THE) published two articles on OA plus an editorial on OA Mandates and Metrics. At the same time, Professor Eberhard Hilf announced that in Germany Lars Fischer had initiated a petition to the German Bundestag to mandate Green OA, supported by the Coalition for Action, and inviting signatories from around the world.
OA's chronicler and critic, Richard Poynder, lost no time in interviewing Lars Fischer about his petition: Interview of Lars Fischer by Richard Poynder Open and Shut 13 November 2009 Also recommended: Times Higher Education (THE) Editorial: "Put all the results out in the open" By Ann Mroz THE 12 November 2009 "Researchers, government and society benefit when research is made freely available, so the sooner it is mandated, the better" "Learning to share " By Zoë Corbyn THE 12 November 2009 "Free, immediate and permanently available research results for all - that's what the open-access campaigners want. Unsurprisingly, the subscription publishers disagree. Zoe Corbyn weighs up the ramifications for journals, while Matthew Reisz asks how books will fare" Friday, October 23. 2009Don't Count Your Metric Chickens Before Your Open-Access Eggs Are Laid
In "Open Access is the short-sighted fight," Daniel Lemire [DL] writes:
DL: "(1) If scientific culture rewarded readership and impact above all else, we would not have to force authors toward Open Access."

(a) University hiring and performance evaluation committees do reward impact. (It is no longer true that only publications are counted: their citation impact is counted and rewarded too.) (b) Soon readership (e.g., download counts, link counts, tags, comments) too will be counted among the metrics of impact, and rewarded -- but this will only become possible once the content itself is Open Access (OA), hence fully accessible online, its impact measurable and rewardable. (See references cited at the end of this commentary under the heading METRICS.) (c) OA mandates do not force authors toward OA -- or no more so than the universal "publish or perish" mandates force authors toward doing and publishing research: What these mandates do is close the loop between research performance and its reward system. (d) In the case of OA, it has taken a long time for the world scholarly and scientific community to become aware of the causal connection between OA and research impact (and its rewards), but awareness is at long last beginning to grow. (Stay tuned for the announcement of more empirical findings on the OA impact advantage later today, in honor of OA week.)

DL: "You know full well that many researchers are just happy to have the paper appear in a prestigious journal. They will not make any effort to make their work widely available because they are not rewarded for it. Publishing is enough to receive tenure, grants and promotions. And the reward system is what needs to be fixed."

This is already incorrect: Publishing is already not enough. Citations already count. OA mandates will simply make the causal contingency between access and impact, and between impact and employment/salary/promotion/funding/prizes, more obvious and explicit to all. In other words, the reward system will be fixed (including the development and use of a rich and diverse new battery of OA metrics of impact) along with fixing the access system.

DL: "(2) I love peer review. My blog is peer reviewed. You are a peer and just reviewed my blog post."

Peer commentary is not peer review (as surely I -- who founded and edited for a quarter century a rather good peer-reviewed journal that also provided open peer commentary -- ought to be in a position to know!). Peer commentary (like post-hoc metrics) is an increasingly important supplement to peer review, but neither one is peer review, nor a substitute for it. (Again, see references at the end of this commentary under the heading PEER REVIEW.)

DL: "(3) PLoS has different types of peer review where correctness is reviewed, but no prediction is made as to the perceived importance of the work. Let me quote them:"

You have profoundly misunderstood this, Daniel:

“Too often a journal’s decision to publish a paper is dominated by what the Editor/s think is interesting and will gain greater readership — both of which are subjective judgments and lead to decisions which are frustrating and delay the publication of your work. PLoS ONE will rigorously peer-review your submissions and publish all papers that are judged to be technically sound.
Judgments about the importance of any particular paper are then made after publication by the readership (who are the most qualified to determine what is of interest to them).”

(i) It is most definitely a part of peer review to evaluate (and where necessary correct) the quality, validity, rigor, originality, relevance, interest and importance of candidates for publication in the journal for which they are refereeing. (ii) Journals differ in the level of their peer review standards (and with those standards co-vary their acceptance criteria, selectivity, acceptance rates -- and hence their quality and reliability). (iii) PLoS Biology and PLoS Medicine were created explicitly in order to maintain the highest standards of peer review (with acceptance criteria, selectivity and acceptance rates at the level of those of Nature and Science [which, by the way, are, like all peer judgments and all human judgment, fallible, but also corrigible post-hoc, thanks to the supplementary scrutiny of peer commentary and follow-up publications]). (iv) PLoS ONE was created to cater for a lower level in the hierarchy of journal peer review standards. (There is no point citing the lower standards of mid-range journals in that pyramid as if they were representative of peer review itself.) (v) Some busy researchers need to know the quality level of a new piece of refereed research a priori, at point of publication -- before they invest their scarce time in reading it, or, worse, their even scarcer and more precious research time and resources in trying to build upon it -- rather than waiting for months or years of post-hoc peer scrutiny or metrics to reveal it. (vi) Once again: commentary -- and, rarer, peer commentary -- is a supplement to, not a substitute for, peer review.

DL: "(4) Moreover, PLoS does publish non-peer-reviewed material, see PLoS Currents: Influenza for example."

And the journal hierarchy also includes unrefereed journals at the bottom of the pyramid. Users are quite capable of weighting publications by the quality track-record of their provenance, whether between journals, or between sections of the same journal. Caveat Emptor.

METRICS: Brody, T., Kampa, S., Harnad, S., Carr, L. and Hitchcock, S. (2003) Digitometric Services for Open Archives Environments. In Proceedings of European Conference on Digital Libraries 2003, pp. 207-220, Trondheim, Norway. Harnad, S. (2006) Online, Continuous, Metrics-Based Research Assessment. Technical Report, ECS, University of Southampton. Brody, T., Carr, L., Harnad, S. and Swan, A. (2007) Time to Convert to Metrics. Research Fortnight pp. 17-18. Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan, A. (2007) Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics. CTWatch Quarterly 3(3). Harnad, S. (2008) Self-Archiving, Metrics and Mandates. Science Editor 31(2) 57-59 Harnad, S. (2008) Validating Research Performance Metrics Against Peer Rankings. Ethics in Science and Environmental Politics 8 (11) The Use And Misuse Of Bibliometric Indices In Evaluating Scholarly Performance Harnad, S., Carr, L. and Gingras, Y. (2008) Maximizing Research Progress Through Open Access Mandates and Metrics. Liinc em Revista 4(2). Harnad, S. (2009) Multiple metrics required to measure research performance. Nature (Correspondence) 457 (785) (12 February 2009) Harnad, S. (2009) Open Access Scientometrics and the UK Research Assessment Exercise.
Scientometrics 79 (1) Also in Proceedings of 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds. (2007) Harnad, S; Carr, L; Swan, A; Sale, A & Bosc H. (2009) Maximizing and Measuring Research Impact Through University and Research-Funder Open-Access Self-Archiving Mandates. Wissenschaftsmanagement 15(4) 36-41 PEER REVIEW: Harnad, S. (1978) BBS Inaugural Editorial. Behavioral and Brains Sciences 1(1) Harnad, S. (ed.) (1982) Peer commentary on peer review: A case study in scientific quality control, New York: Cambridge University Press. Harnad, S. (1984) Commentaries, opinions and the growth of scientific knowledge. American Psychologist 39: 1497 - 1498. Harnad, Stevan (1985) Rational disagreement in peer review. Science, Technology and Human Values, 10 p.55-62. Harnad, S. (1986) Policing the Paper Chase. (Review of S. Lock, A difficult balance: Peer review in biomedical publication.) Nature 322: 24 - 5. Harnad, S. (1995) Interactive Cognition: Exploring the Potential of Electronic Quote/Commenting. In: B. Gorayska & J.L. Mey (Eds.) Cognitive Technology: In Search of a Humane Interface. Elsevier. Pp. 397-414. Harnad, S. (1996) Implementing Peer Review on the Net: Scientific Quality Control in Scholarly Electronic Journals. In: Peek, R. & Newby, G. (Eds.) Scholarly Publishing: The Electronic Frontier. Cambridge MA: MIT Press. Pp 103-118. Harnad, S. (1997) Learned Inquiry and the Net: The Role of Peer Review, Peer Commentary and Copyright. Learned Publishing 11(4) 283-292. Harnad, S. (1998/2000/2004) The invisible hand of peer review. Nature [online] (5 Nov. 1998), Exploit Interactive 5 (2000): and in Shatz, B. (2004) (ed.) Peer Review: A Critical Inquiry. Rowland & Littlefield. Pp. 235-242. Harnad, S. (2003/2004) Back to the Oral Tradition Through Skywriting at the Speed of Thought. Interdisciplines. Retour a la tradition orale: ecrire dans le ciel a la vitesse de la pensee. Dans: Salaun, Jean-Michel & Vendendorpe, Christian (dir). Le défi de la publication sur le web: hyperlectures, cybertextes et meta-editions. Presses de l'enssib. Harnad, S. (2003) BBS Valedictory Editorial. Behavioral and Brain Sciences 26(1) Friday, September 25. 2009Video: Stevan Harnad on Integrating Research and Thesis MandatesVideo of Stevan Harnad's "The Open Access Movement: Integrating Universities' ETD-Deposit and Research-Deposit Mandates, Repositories and Metrics." Presented at ETD2009 "Bridging the Knowledge Divide". Please feel free to use it to promote OA and OA mandates. Thursday, July 23. 2009Post-Publication Metrics Versus Pre-Publication Peer Review
In "PLoS Journals – measuring impact where it matters," Mark Patterson (2009) writes:

"[R]eaders tend to navigate directly to the articles that are relevant to them, regardless of the journal they were published in... [T]here is a strong skew in the distribution of citations within a journal – typically, around 80% of the citations accrue to 20% of the articles... [W]hy then do researchers and their paymasters remain wedded to assessing individual articles by using a metric (the impact factor) that attempts to measure the average citations to a whole journal?"

Merits of Metrics. Of course direct article and author citation counts are infinitely preferable to -- and more informative than -- just a journal average (the journal "impact factor"). And yes, multiple postpublication metrics will be a great help in navigating, evaluating and analyzing research influence, importance and impact. But it is a great mistake to imagine that this implies that peer review can now be done on just a generic "pass/fail" basis. Purpose of Peer Review. Not only is peer review dynamic and interactive -- improving papers before approving them for publication -- but the planet's 25,000 peer-reviewed journals differ not only in the subject matter they cover, but also, within a given subject matter, they differ (often quite substantially) in their respective quality standards and criteria. It is extremely unrealistic (and would be highly dysfunctional, if it were ever made to come true) to suppose that these 25,000 journals are (or ought to be) flattened to provide a 0/1 pass/fail decision on publishability at some generic adequacy level, common to all refereed research. Pass/Fail Versus Letter-Grades. Nor is it just a matter of switching all journals from assigning a generic pass/fail grade to assigning their own letter grades (A-, B+, etc.), despite the fact that that is effectively what the current system of multiple, independent peer-reviewed journals provides. For not only do journal peer-review standards and criteria differ, but the expertise of their respective "peers" differs too. Better journals have better and more exacting referees, exercising more rigorous peer review. (So the 25,000 peer-reviewed journals today cannot be thought of as one generic peer-review filter that accepts papers for publication in each field with grades between A+ and E; rather there are A+ journals, B- journals, etc.: each established journal has its own independent standards, to which its submissions are answerable.) Track Records and Quality Standards. And users know all this, from the established track records of the journals they consult as readers and publish in as authors. Whether or not we like to put it that way, this all boils down to selectivity across a gaussian distribution of research quality in each field. There are highly selective journals, that accept only the very best papers -- and even those often only after several rounds of rigorous refereeing, revision and re-refereeing. And there are less selective journals, that impose less exacting standards -- all the way down to the fuzzy pass/fail threshold that distinguishes "refereed" journals from journals whose standards are so low that they are virtually vanity-press journals. Supplement Versus Substitute. This difference (and independence) among journals in terms of their quality standards is essential if peer-review is to serve as the quality enhancer and filter that it is intended to be.
Of course the system is imperfect, and, for just that reason alone (amongst many others) a rich diversity of post-publication metrics are an invaluable supplement to peer review. But they are certainly no substitute for pre-publication peer review, or, most importantly, its quality triage. Quality Distribution. So much research is published daily in most fields that on the basis of a generic 0/1 quality threshold, researchers simply cannot decide rationally or reliably what new research is worth the time and investment to read, use and try to build upon. Researchers and their work differ in quality too, and they are entitled to know a priori, as they do now, whether or not a newly published work has made the highest quality cut, rather than merely that it has met some default standards, after which users must wait for the multiple post-publication metrics to accumulate across time in order to be able to have a more nuanced quality assessment. Rejection Rates. More nuanced sorting of new research is precisely what peer review is about, and for, and especially at the highest quality levels. Although authors (knowing the quality track-records of their journals) mostly self-select, submitting their papers to journals whose standards are roughly commensurate with their quality, the underlying correlate of a journal's refereeing quality standards is basically their relative rejection rate: What percentage of annual papers in their designated subject matter would meet their standards (if all were submitted to that journal, and the only constraint on acceptance were the quality level of the article, not how many articles the journal could manage to referee and publish per year)? Quality Ranges. This independent standard-setting by journals effectively ranges the 25,000 titles along a rough letter-grade continuum within each field, and their "grades" are roughly known by authors and users, from the journals' track-records for quality. Quality Differential. Making peer review generic and entrusting the rest to post-publication metrics would wipe out that differential quality information for new research, and force researchers at all levels to risk pot-luck with newly published research (until and unless enough time has elapsed to sort out the rest of the quality variance with post-publication metrics). Among other things, this would effectively slow down instead of speeding up research progress. Turn-Around Time. Of course pre-publication peer review takes time too; but if its result is that it pre-sorts the quality of new publications in terms of known, reliable letter-grade standards (the journals' names and track-records), then it's time well spent. Offloading that dynamic pre-filtering function onto post-publication metrics, no matter how rich and plural, would greatly handicap research usability and progress, and especially at its all-important highest quality levels. More Value From Post-Publication Metrics Does Not Entail Less Value From Pre-Publication Peer Review. It would be ironic if today's eminently valid and timely call for a wide and rich variety of post-publication metrics -- in place of just the unitary journal average (the "journal impact factor") -- were coupled with an ill-considered call for collapsing the planet's wide and rich variety of peer-reviewed journals and their respective independent, established quality levels onto some sort of global, generic pass/fail system. Differential Quality Tags. 
There is an idea afoot that peer review is just some sort of generic pass/fail grade for "publishability," and that the rest is a matter of post-publication evaluation. I think this is incorrect, and represents a misunderstanding of the actual function that peer review is currently performing. It is not a 0/1, publishable/unpublishable threshold. There are many different quality levels, and they get more exacting and selective in the higher quality journals (which also have higher-quality and more exacting referees and refereeing). Users need these differential quality tags when they are trying to decide whether newly published work is worth taking the time to read, and worth the effort and risk of trying to build upon (at the quality level of their own work). User/Author/Referee Experience. I think both authors and users have a good idea of the quality levels of the journals in their fields -- not from the journals' impact factors, but from their content, and their track-records for content. As users, researchers read articles in their journals; as authors they write for those journals, and revise for their referees; and as referees they referee for them. They know that all journals are not equal, and that peer review can be done at a whole range of quality levels. Metrics As Substitutes for User/Author/Referee Experience? Is there any substitute for this direct experience with journals (as users, authors and referees) in order to know what their peer-reviewing standards and quality level are? There is nothing yet, and no one can say yet whether there will ever be metrics as accurate as having read, written and refereed for the journals in question. Metrics might eventually provide an approximation, though we don't yet know how close, and of course they only come after publication (well after). Quality Lapses? Journal track records, user experiences, and peer review itself are certainly not infallible either, however; the usually-higher-quality journals may occasionally publish a lower-quality article, and vice versa. But on average, the quality of the current articles should correlate well with the quality of past articles. Whether judgements of quality from direct experience (as user/author/referee) will ever be matched or beaten by multiple metrics, I cannot say, but I am pretty sure they are not matched or beaten by the journal impact factor. Regression on the Generic Mean? And even if multiple metrics do become as good a joint predictor of journal article quality as user experience, it does not follow that peer review can then be reduced to generic pass/fail, with the rest sorted by metrics, because (1) metrics are journal-level, not article-level (though they can also be author-level) and, more important still, (2) if journal-differences are flattened to generic peer review, entrusting the rest to metrics, then the quality of articles themselves will fall, as rigorous peer review does not just assign articles a differential grade (via the journal's name and track-record), but it improves them, through revision and re-refereeing. More generic 0/1 peer review, with less individual quality variation among journals, would just generate quality regression on the mean.

REFERENCES Bollen J, Van de Sompel H, Hagberg A, Chute R (2009) A Principal Component Analysis of 39 Scientific Impact Measures. PLoS ONE 4(6): e6022. doi:10.1371/journal.pone.0006022 Brody, T., Harnad, S. and Carr, L. (2006) . Journal of the American Society for Information Science and Technology (JASIST) 57(8) pp.
1060-1072. Garfield, E., (1955) Citation Indexes for Science: A New Dimension in Documentation through Association of Ideas. Science 122: 108-111 Harnad, S. (1979) Creative disagreement. The Sciences 19: 18 - 20. Harnad, S. (ed.) (1982) Peer commentary on peer review: A case study in scientific quality control, New York: Cambridge University Press. Harnad, S. (1984) Commentaries, opinions and the growth of scientific knowledge. American Psychologist 39: 1497 - 1498. Harnad, Stevan (1985) Rational disagreement in peer review. Science, Technology and Human Values, 10 p.55-62. Harnad, S. (1990) Scholarly Skywriting and the Prepublication Continuum of Scientific Inquiry Psychological Science 1: 342 - 343 (reprinted in Current Contents 45: 9-13, November 11 1991). Harnad, S. (1986) Policing the Paper Chase. (Review of S. Lock, A difficult balance: Peer review in biomedical publication.) Nature 322: 24 - 5. Harnad, S. (1996) Implementing Peer Review on the Net: Scientific Quality Control in Scholarly Electronic Journals. In: Peek, R. & Newby, G. (Eds.) Scholarly Publishing: The Electronic Frontier. Cambridge MA: MIT Press. Pp 103-118. Harnad, S. (1997) Learned Inquiry and the Net: The Role of Peer Review, Peer Commentary and Copyright. Learned Publishing 11(4) 283-292. Harnad, S. (1998/2000/2004) The invisible hand of peer review. Nature [online] (5 Nov. 1998), Exploit Interactive 5 (2000): and in Shatz, B. (2004) (ed.) Peer Review: A Critical Inquiry. Rowland & Littlefield. Pp. 235-242. Harnad, S. (2008) Validating Research Performance Metrics Against Peer Rankings. Ethics in Science and Environmental Politics 8 (11) Special Issue: The Use And Misuse Of Bibliometric Indices In Evaluating Scholarly Performance Harnad, S. (2009) Open Access Scientometrics and the UK Research Assessment Exercise. Scientometrics 79 (1) Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open Research Web: A Preview of the Optimal and the Inevitable, in Jacobs, N., Eds. Open Access: Key Strategic, Technical and Economic Aspects. Chandos. Monday, June 1. 2009ETD2009 Keynote: Integrating University Thesis and Research Open Access MandatesETD 2009 June 10, Pittsburgh Thursday, March 19. 2009The Need to Cross-Validate and Initialize Multiple Metrics Jointly Against Peer Ratings
The Times Higher Education Supplement (THES) has reported the results of a study it commissioned from Evidence Ltd, which found that the ranking criteria used for assessing and rewarding research performance in the UK Research Assessment Exercise (RAE) changed from RAE 2001 to RAE 2008. The result is that citations, which correlated highly with the RAE 2001 rankings, correlated less highly with those of RAE 2008, so a number of universities whose citation counts had decreased were rewarded more in 2008, and a number of universities whose citation counts had increased were rewarded less.
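To make the kind of joint validation elaborated in the numbered points below concrete, here is a minimal, purely illustrative Python sketch. The departments, numbers and the three metrics are invented stand-ins (this is not the Evidence Ltd data, nor the RAE's actual procedure); it merely shows how one might check a single metric's rank correlation with peer ratings across two exercises, and then derive initial weights for a small battery of metrics by regressing the peer ratings on them.

```python
# Purely illustrative: invented departments and numbers, a tiny stand-in
# battery of metrics, and ordinary least squares for the weights.
# This is NOT the Evidence Ltd analysis nor the RAE's actual procedure.
import numpy as np
from scipy.stats import spearmanr

departments = ["A", "B", "C", "D", "E"]
citations   = np.array([120.0, 340.0,  80.0, 560.0, 210.0])
rae_2001    = np.array([4.0,   5.0,    3.5,   5.5,   4.5])  # hypothetical peer ratings
rae_2008    = np.array([4.5,   4.0,    5.0,   5.5,   3.5])  # hypothetical peer ratings

# If the panels' criteria shift between exercises, a single metric's rank
# correlation with the peer ratings shifts with them.
for label, ratings in [("RAE 2001", rae_2001), ("RAE 2008", rae_2008)]:
    rho, _ = spearmanr(citations, ratings)
    print(f"{label}: Spearman rho(citations, peer rating) = {rho:+.2f}")

# Joint validation of a (here, tiny) battery of metrics: regress the peer
# ratings on the standardised metrics to derive initial weights, which would
# be done discipline by discipline and then recalibrated over time.
downloads   = np.array([900.0, 1500.0, 400.0, 2100.0, 1300.0])
link_counts = np.array([ 30.0,   70.0,  15.0,   90.0,   55.0])
X = np.column_stack([citations, downloads, link_counts])
X = (X - X.mean(axis=0)) / X.std(axis=0)
weights, *_ = np.linalg.lstsq(X, rae_2008 - rae_2008.mean(), rcond=None)
for name, w in zip(["citations", "downloads", "link counts"], weights):
    print(f"initial weight for {name}: {w:+.2f}")
```

In a real exercise this would be done discipline by discipline, with a far richer battery of metrics, and with the resulting weights subsequently recalibrated against continuing expert judgment.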
(1) Citation counts are only one (though an important one) among many potential metrics of research performance. (2) If the RAE peer panel raters' criteria for ranking the universities varied or were inconsistent between RAE 2001 and RAE 2008 then that is a problem with peer ratings rather than with metrics (which, being objective, remain consistent). (3) Despite the variability and inconsistency, peer ratings are the only way to initialise the weights on metrics: Metrics first have to be jointly validated against expert peer evaluation by measuring their correlation with the peer rankings, discipline by discipline; then the metrics' respective weights can be updated and fine-tuned, discipline by discipline, in conjunction with expert judgment of the resulting rankings and continuing research activity. (4) If only one metric (e.g., citation) is used, there is the risk that expert ratings will simply echo it. But if a rich and diverse battery of multiple metrics is jointly validated and initialized against the RAE 2008 expert ratings, then this will create an assessment-assistant tool whose initial weights can be calibrated and used in an exploratory way to generate different rankings, to be compared by the peer panels with previous rankings as well as with new, evolving criteria of research productivity, uptake, importance, influence, excellence and impact. (5) The dawning era of Open Access (free web access) to peer-reviewed research is providing a wealth of new metrics to be included, tested and assigned initial weights in the joint battery of metrics. These include download counts, citation and download growth and decay rates, hub and authority scores, interdisciplinarity scores, co-citations, tag counts, comment counts, link counts, data-usage, and many other openly accessible and measurable properties of the growth of knowledge in our evolving "Cognitive Commons." Brody, T., Kampa, S., Harnad, S., Carr, L. and Hitchcock, S. (2003) Digitometric Services for Open Archives Environments. In Proceedings of European Conference on Digital Libraries 2003, pp. 207-220, Trondheim, Norway. Brody, T., Carr, L., Harnad, S. and Swan, A. (2007) Time to Convert to Metrics. Research Fortnight pp. 17-18. Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan, A. (2007) Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics. CTWatch Quarterly 3(3). Carr, L., Hitchcock, S., Oppenheim, C., McDonald, J. W., Champion, T. and Harnad, S. (2006) Extending journal-based research impact assessment to book-based disciplines. Technical Report, ECS, University of Southampton. Hajjem, C., Harnad, S. and Gingras, Y. (2005) Ten-Year Cross-Disciplinary Comparison of the Growth of Open Access and How it Increases Research Citation Impact. IEEE Data Engineering Bulletin 28(4) pp. 39-47. Harnad, S. (2001) Research access, impact and assessment. Times Higher Education Supplement 1487: p. 16. Harnad, S. (2007) Open Access Scientometrics and the UK Research Assessment Exercise. In Proceedings of 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds. Harnad, S. (2008) Self-Archiving, Metrics and Mandates. Science Editor 31(2) 57-59 Harnad, S. (2008) Validating Research Performance Metrics Against Peer Rankings. 
Ethics in Science and Environmental Politics 8 (11) doi:10.3354/esep00088 The Use And Misuse Of Bibliometric Indices In Evaluating Scholarly Performance Harnad, S. (2009) Multiple metrics required to measure research performance. Nature (Correspondence) 457 (785) (12 February 2009) Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35. Harnad, S., Carr, L. and Gingras, Y. (2008) Maximizing Research Progress Through Open Access Mandates and Metrics. Liinc em Revista. Sunday, March 8. 2009U. Edinburgh: Scotland's 6th Green OA Mandate, UK's 22nd, Planet's 67th
(Thanks to Peter Suber's Open Access News.)
Note that Edinburgh's is the optimal ID/OA Mandate. (Let us hope Edinburgh will also implement the automatized Request a Copy Button for Embargoed or Closed Access Deposits!) University of Edinburgh (UK* institutional-mandate)The University of Edinburgh has adopted an OA mandate. Here's an excerpt from the Open Access Publications Policy (January 27 - February 4, 2009), the proposal which the university's Electronic Senate approved on February 18, 2009: This... Publications Policy... requires researchers to deposit their research outputs in the Publications Repository, and where appropriate in the Open Access Edinburgh Research Archive in order to maximise the visibility of the University’s research.... This policy will be implemented [i.e. become mandatory] from January 2010, and in the meantime, researchers are encouraged to deposit outputs.... The Publications Repository (PR) is a closed repository for use only within the University of Edinburgh and is an internal University tool for research output management, while Edinburgh Research Archive (ERA) is a public open access repository, making content available through global searching mechanisms such as Google. This policy requires each researcher to provide the peer reviewed final accepted version of a research output to deposit. The policy encourages the deposit of an electronic copy of nonpeer reviewed research, particularly where this may be used for national assessments. Researchers (or their proxies, eg research administrators) will deposit these research outputs in the PR, and at the same time provide information about whether the research output can be made publicly available in ERA. It will then be automatically passed into ERA, where this is allowable, with no further input from the researcher or their agent.... There are several strong reasons for pursuing the requirement for the deposits of such research outputs at the moment: 1. The impact of research is maximized because there is growing evidence that research deposited in Open Access repositories is more heavily used and cited 2. The deposit of outputs in ERA will support compliance with Research Council and other funding agency requirements that research outputs are available openly. 3. This will ensure that each research output has consistent metadata and ensures longevity which, for example, a researcher’s own website does not. 4. Items which are already in Edinburgh Research Archive are well used. The average number of times each item was downloaded during 2008 was 228, with the top countries downloading Edinburgh research being: United States, United Kingdom, Australia, China, Iran and India. 5. Researchers, research groups or Schools can use the PR to provide automatically generated output for their own websites, or for their curriculum vitae. 6. Future possible metrics based research assessment will require us to ensure that Edinburgh’s research be cited as much as possible, and this means that it must be as visible as possible.... 9. This will become a competitive tool for Edinburgh’s research by enhancing its reputation and branding as a good place to carry out research.... 11. The world of scholarly communication is changing—adopting this policy in Edinburgh will help us move forward within this changing environment. Other universities require their researchers to deposit research outputs. Harvard University, Stirling University—the first in the UK to do so, and very recently the University of Glasgow, have adopted institutional requirements for such deposit. 12. 
Such a deposit requirement is in line with other UoE policies on knowledge exchange, public accountability and serving the public good.... Since this initiative requires changed patterns of work from researchers, there will be many questions, some of which are addressed in this section....

-- What happens if I don't want to make the research output public? There will always be a variety of circumstances where it is not possible to deposit, for example where a researcher does not wish to go public with their research immediately, because they wish to publish further, or where commercial reasons exist or where there are copyright issues (considered below). In these cases the research output should be deposited, but only the metadata will be exposed in the PR; the item will not be passed into ERA until permission is given.

-- What happens if the publisher does not agree? You should try to avoid assigning the copyright to the publisher or granting them an exclusive license. Rather, you should aim to grant a nonexclusive licence which leaves you with the ability to deposit the work in the University Repositories and possibly make it available in other digital forms.

-- How should I communicate this with the publisher? There will be advice and guidance on how to achieve this and template forms to show how you can amend Publisher copyright forms.

-- What about research outputs which are not journal articles? The PR and ERA can accept most research output types including books, book chapters, conference proceedings, performances, video, audio etc. In some cases – for example books not available electronically – the PR/ERA will hold only metadata, with the possibility of links to catalogues so that users can find locations....

-- What about my research data? Data supporting research outputs is also required by RCs to be made available, and this can be included where requested. IS is establishing a working group to consider research data issues....

-- I would like to publish in an author-pays Open Access journal. Does this mean that I also have to deposit? Yes, please deposit the research output in the normal manner....
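For readers who want to picture how the closed Publications Repository (PR), the public ERA archive, and the "Request a Copy" Button mentioned above fit together, here is a schematic sketch. All names and types are invented for illustration; this is not the University of Edinburgh's or EPrints' actual implementation, just the routing logic the policy describes, plus the almost-OA request mechanism for closed or embargoed deposits.

```python
# Schematic sketch with invented names: not the University of Edinburgh's
# PR/ERA system nor the EPrints "Request a Copy" code, just the routing
# logic the policy describes plus the almost-OA request mechanism.
from dataclasses import dataclass

@dataclass
class Deposit:
    title: str
    author_email: str
    final_refereed_draft: bytes
    open_access_allowed: bool   # e.g. publisher endorses immediate OA, no embargo

def route_deposit(deposit, publications_repository, public_archive):
    """Every output goes into the closed internal repository (PR);
    it is passed on to the public archive (ERA) only where allowable."""
    publications_repository.append(deposit)
    if deposit.open_access_allowed:
        public_archive.append(deposit)  # becomes openly accessible immediately
    # otherwise only the metadata is exposed until permission is given

def request_a_copy(deposit, requester_email, send_mail):
    """For closed or embargoed deposits: forward a reader's request to the
    author, who can authorise the repository to email the eprint ("almost-OA")."""
    send_mail(
        to=deposit.author_email,
        subject=f"Eprint request: {deposit.title}",
        body=(f"{requester_email} has requested a copy of your deposited draft. "
              "Approve the request and the repository will email it to them."),
    )

# Example: a deposit whose publisher does not yet endorse immediate OA
pr, era = [], []
d = Deposit("Refereed final draft", "author@example.ac.uk", b"...", False)
route_deposit(d, pr, era)   # enters PR only; ERA gets it later, if permission comes
```

The design point, as with the IDOA ("Immediate Deposit, Optional Access") practice discussed earlier on this page, is that the deposit itself is immediate and unconditional; only the access-setting varies.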