Saturday, October 14. 2006
The Special Case of Astronomy
Michael Kurtz (Harvard-Smithsonian Center for Astrophysics) has provided some (as always) very interesting and informative data on the special case of research access and self-archiving practices in astronomy. His data show that:

(1) In astronomy, where all active, publishing researchers already have online access to all relevant journal articles (a very special case!), researchers all use the versions "eprinted" (self-archived) in Arxiv first, because those are available first; but they all switch to using the journal version, instead of the self-archived one, as soon as the journal version is available.

That is interesting, but hardly surprising, in view of the very special conditions of astronomy: If I only had access to a self-archived preprint or postprint first, I'd have used that, faute de mieux. And as soon as the official journal version was accessible -- assuming it was equally accessible -- I'd have used that. But these conditions -- (i) open accessibility of the eprint before publication, (ii) in one longstanding central repository (Arxiv), for many and in some cases most papers, and (iii) open accessibility of the journal version of all papers upon publication -- are simply not representative of most other fields! In most other fields, (i') only about 15% of papers are available early as preprints or postprints, (ii') they are self-archived in distributed IRs and websites, not one central repository (Arxiv), and (iii') the journal versions of many papers are not accessible at all to many researchers after publication. That is a very different state of affairs (outside astronomy and some areas of physics).

(2) Kurtz's data showing that astronomy journals are not cancelled despite 100% OA are very interesting, but they too follow almost tautologically from (1): If virtually all researchers have access to the journal version, and virtually all of them prefer to use that rather than the eprint, it stands to reason that it is not being cancelled!
(What is cause and what is effect there is another question -- i.e., whether preference is driving subscriptions or subscriptions are driving preference.)

(3) In astronomy, as indicated by Kurtz, there is a small, closed circle of core journals, and all active researchers worldwide already have access to all of them. But in many other fields there is not a closed circle of core journals, and/or not all researchers have access. Hence access to a small set of core journals is not a precondition for being an active researcher in many fields -- which does not mean that lacking that access does not weaken the research (and that is the point!).

(4) I agree completely that there is a component of self-selection Quality Bias (QB) in the correlation between self-archiving and citations. The question is (4a) how much of the higher citation count for self-archived articles is due to QB (as opposed to Early Advantage, Competitive Advantage, Quality Advantage, Usage Advantage, and Arxiv (Central) Bias)? And (4b) does self-selection QB itself have any causal consequences (or are authors doing it purely superstitiously, since it has no causal effects at all)? The effects of course need not be felt in citations; they could be felt in downloads (usage) or in other measures of impact (co-citations, influence on research direction, funding, fame, etc.).

The most important thing to bear in mind is that it would be absurd to imagine that OA somehow guarantees a quality-blind linear increment to the usage of any article, regardless of its quality. It is virtually certain that OA will benefit the better articles more, because they are more worth using and trying to build upon, hence more handicapped by access-barriers (which do exist in fields other than astronomy). That's QA, not QB. No amount of accessibility will help unciteable papers get used and cited. And most papers are uncited, hence probably unciteable, no matter how visible and accessible you may try to make them!
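The interaction between citation skew and self-selection can be illustrated with a toy simulation (all numbers below are invented assumptions, not anyone's data): draw a heavy-tailed citation distribution and let the probability of self-archiving rise with citedness. The top decile then concentrates a large share of all citations and is mostly deposited, while the bottom decile is rarely deposited:

```python
# Illustrative sketch only: a heavy-tailed (lognormal) citation distribution
# plus an assumed deposit probability that grows with citation count. The
# deposit model is a made-up placeholder, not an empirical estimate.
import math
import random

random.seed(0)
N = 10000
cites = sorted(random.lognormvariate(0, 1.5) for _ in range(N))  # heavy tail

top = cites[-N // 10:]      # most-cited 10%
bottom = cites[:N // 10]    # least-cited 10%
total = sum(cites)
top_share = sum(top) / total

def deposited(c):
    # Assumed self-selection model: better-cited papers archive more often.
    return random.random() < min(0.95, 0.25 + 0.15 * math.log1p(c))

top_dep = sum(deposited(c) for c in top) / len(top)
bottom_dep = sum(deposited(c) for c in bottom) / len(bottom)
print(f"top decile citation share: {top_share:.0%}, deposited: {top_dep:.0%}")
print(f"bottom decile deposited:   {bottom_dep:.0%}")
```

Under any such quality-correlated deposit model, a raw comparison of archived versus non-archived papers will overstate the OA advantage; the point is qualitative, not a fit to the astronomy figures.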
(5) I think we agree that the basic challenge in assessing causality here is that we have a positive correlation (between proportion of papers self-archived and citation counts) but we need to analyze the direction of the causation. The fact that more-cited papers tend to be self-archived more, and less-cited papers less, is merely a restatement of the correlation, not a causal analysis of it: The citations, after all, come after the self-archiving, not before! The only methodologically irreproachable way to test causality would be to randomly choose a (sufficiently large, diverse, and representative) sample of N papers at the time of acceptance for publication (postprints -- no previous preprint self-archiving) and randomly impose self-archiving on N/2 of them, and not on the other N/2. That way we have random selection and not self-selection. Then we count citations for about 2-3 years, for all the papers, and compare them. No one will do that study, but an approximation to it can be done (and we are doing it) by comparing (a) citation counts for papers that are self-archived in IRs that have a self-archiving mandate with (b) citation counts for papers in IRs without mandates and with (c) papers (in the same journal and year) that are not self-archived. Not a perfect method -- there are problems with small Ns, short available time-windows, and admixtures of self-selection and imposed self-archiving even with mandates -- but an approximation nonetheless. And other metrics -- downloads, co-citations, hub/authority scores, endogamy scores, growth-rates, funding, etc. -- can be used to triangulate and disambiguate. Stay tuned.
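The randomized design just described can be sketched in a toy simulation (every number here is an invented assumption): give each paper a latent quality, let citations get a fixed multiplicative OA boost, and compare the apparent OA advantage under self-selected versus randomly imposed self-archiving:

```python
# Toy simulation of self-selection bias (QB) vs a randomized mandate.
# TRUE_OA_BOOST and the deposit model are illustrative assumptions only.
import random

random.seed(42)
N = 20000
TRUE_OA_BOOST = 1.2   # assumed multiplicative citation advantage of OA (QA)

def citations(quality, oa):
    return quality * (TRUE_OA_BOOST if oa else 1.0)

papers = [random.lognormvariate(1.0, 1.0) for _ in range(N)]

# Self-selection: probability of archiving rises with quality (Quality Bias).
self_sel = [(q, random.random() < min(0.9, 0.1 + 0.05 * q)) for q in papers]
# Random mandate: archive exactly half, regardless of quality.
randomized = [(q, i % 2 == 0) for i, q in enumerate(papers)]

def oa_ratio(sample):
    oa = [citations(q, True) for q, a in sample if a]
    non = [citations(q, False) for q, a in sample if not a]
    return (sum(oa) / len(oa)) / (sum(non) / len(non))

ratio_selected = oa_ratio(self_sel)
ratio_random = oa_ratio(randomized)
print(f"apparent OA advantage, self-selected: {ratio_selected:.2f}")
print(f"apparent OA advantage, randomized:   {ratio_random:.2f}")
```

The self-selected comparison inflates the true boost because archived papers are better to begin with; random assignment recovers something close to the true value, which is exactly why the mandate-based comparison is the better approximation.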
Now some comments. On Tue, 10 Oct 2006, Michael Kurtz wrote:

"Recently Stevan has copied me on two sets of correspondence concerning the OA citation advantage; I thought I would just briefly respond to both."

And it also shows how anomalous astronomy is, compared to other fields, where it is certainly not true that every researcher has subscriptions to the main journals...

"Figure 5 of the J Electronic Publishing paper also shows that there is no effect of cost on the OA reads (and thus by extension citation) differential. Note in the plot that there is no change in slope for the obsolescence function of the reads (either of preprinted or non-preprinted papers) at 36 months. At 36 months the 3-year moving wall allows the papers to be accessed by everyone; this shows clearly that there is no cost-effect portion of the OA differential in astronomy. This confirms the conclusion of my IPM article."

And it underscores again how unrepresentative astronomy is of research as a whole.

"Citations are probably the least sensitive measure to see the effects of OA. This is because one must be able to read the core journals in order to write a paper which will be published by them. It is really not possible for a person who has not been regularly reading journal articles on, say, nuclear physics, to suddenly be able to write one, and cite the OA articles which enabled that writing. It takes some time for a body of authors who did not previously have access to form and write acceptable papers."

In astronomy -- where the core journals are few and a closed circle, and all active researchers have access to them. But this is not true of research as a whole, across disciplines (or around the world). Researchers in most fields are no doubt handicapped by having less than full access, but that does not prevent them from doing and publishing research altogether.

"Any statistical analysis of the causal/bias distinction must take into account the actual distribution of citations among articles.
This is why I made the Monte Carlo analysis in the IPM paper. As a quick example, for papers published in the Astrophysical Journal in 2003: the most cited 10% have 39% of all citations, and are 96% in the arXiv; the lowest cited 10% have 0.7% of all citations and are 29% in the arXiv. Showing the causal hypothesis is true will be very difficult under these conditions."

(i) Since all of the published postprints in all these journals are accessible to all research-active astronomers as of their date of publication, we are of necessity speaking here mostly about an Early Access effect (preprints). Most of the other components of the Open Access Advantage (Competitive Advantage, Usage Advantage, Quality Advantage) are minimized here by the fact that everything in astronomy is OA from the date of publication onward. The remaining components are either Arxiv-specific (the Arxiv Bias -- the tradition of archiving, and hence searching, in one central repository) or self-selection [Quality Bias] influencing who does and does not self-archive early, with their prepublication preprint. Since most fields don't post pre-refereeing preprints at all, this comparison is mostly moot. For most fields, the question about the citation advantage concerns the postprint only, and as of the date of acceptance for publication, not before.

(ii) In other fields too, there is the very same correlation between citation counts and percentage self-archived, but it is based on postprints, self-archived at publication, not pre-refereeing preprints self-archived much earlier. And, most important, it is not true in these fields that the postprint is accessible to all researchers via subscription: many potential users cannot access the article at all if it is not self-archived -- and that is the main basis for the OA impact advantage.

"Perhaps the journal which is most sensitive to cancellations due to OA archiving is Nuclear Physics B; it is 100% in arXiv, and is very expensive.
I have several times seen librarians say that they would like to cancel it. One effect of OA on Nuclear Physics B is that its impact factor (as we measure it; I assume ISI gets the same thing) has gone up, just as we show in the J E Pub paper for Physical Review D. Whether Nuclear Physics B has been cancelled more than Nuclear Physics A or Physics Letters B must be well known at Elsevier."

It is an interesting question whether NPB is being cancelled, but if it is, it clearly is not because of self-archiving, nor because of astronomy's special "universal paid" OA to the published version: if NPB is being cancelled, it is for the usual reason, which is that it is not good enough to justify its share of the institution's journal budget.

Harnad, S. (2005) OA Impact Advantage = EA + (AA) + (QB) + QA + (CA) + UA

Stevan Harnad
American Scientist Open Access Forum

Saturday, September 30. 2006
"Metrics" are Plural, Not Singular: Valid Objections From UUK About RAE
Universities UK and the Russell Group are spot-on in their criticisms of the replacement of the old panel-based Research Assessment Exercise (RAE) by one single metric (prior research funding). That would not only be arbitrary and absurd, but extremely unfair and counterproductive.
That very valid specific objection, however, has next to nothing to do with the general plan to replace the RAE's current tremendously wasteful panel-based review by metrics (plural), which comprise a rich and diverse potential array of objective performance indicators rather than just one self-fulfilling prophecy (i.e., how much prior funding has been awarded). UUK are also quite right that each metric needs to be tested and validated, discipline by discipline (some already have been), and that the metric formula and the weights for each of the metrics have to be adjusted and optimised individually for each discipline. The parallel panel/metric shadow exercise planned for 2008 will help accomplish this testing, validation, and customisation. Whether -- and if so how much -- panel review will still be needed in some disciplines once the metric formula has been tested, validated and optimised is an empirical question (but my own guess is: not much).

Prior AmSci Topic Threads:

Stevan Harnad
American Scientist Open Access Forum

Monday, September 18. 2006
Submitting one's own published work for assessment is Fair Use

CrossRef and the Publishers Licensing Society have come to a "gentleman's agreement" with RAE/HEFCE to "license" the papers that are submitted to the RAE for assessment "free of charge": At the heart of this there are not one, not two, not three, but four pieces of patent nonsense so absurd as to take one's breath away.
Most of the nonsense is on RAE/HEFCE's end; one cannot blame the publishers for playing along (especially as the gentleman's agreement holds some hope of forestalling OA a bit longer, or at least the role the RAE might have played in hastening OA's arrival): 2008 UK Research Assessment Exercise (RAE)

(1) The first piece of nonsense is the RAE's pedantic and dysfunctional insistence on laying its hands directly on the "originals" -- the publisher's version of each article per author -- rather than sensibly settling for the authors' peer-reviewed final drafts (postprints).

What will moot all of this is, of course, the OA self-archiving mandates by RCUK and the UK universities themselves, which will fill the UK universities' IRs, which will in their turn -- with the help of the IRRA (Institutional Repositories and Research Assessment) -- mediate the submission of both the postprints and the metrics to the RAE. Then this ludicrous side-show about the "licensing" of the all-important "originals" to the RAE, for "peer re-review" via the mediation of CrossRef and the publishers, will at last be laid to rest, once and for all. RAE 2008 will be its last hurrah...

Prior AmSci Threads on this topic: "Future UK RAEs to be Metrics-Based"

Stevan Harnad
American Scientist Open Access Forum

Wednesday, June 21. 2006
Let 1000 RAE Metric Flowers Bloom: Avoid Matthew Effect as Self-Fulfilling Prophecy
Stevan Harnad

The conversion of the UK Research Assessment Exercise (RAE) from the present costly, wasteful exercise to time-saving and cost-efficient metrics is welcome, timely, and indeed long overdue. But the worrying thing is that the RAE planners currently seem to be focused on just one metric -- prior research funding -- instead of the full and rich spectrum of new (and old) metrics that will become available in an Open Access world, with all the research performance data digitally available online for analysis and use. Mechanically basing the future RAE rankings exclusively on prior funding would just generate a Matthew Effect (making the rich richer and the poor poorer), a self-fulfilling prophecy that is simply equivalent to increasing the amount given to those who were previously funded (and scrapping the RAE altogether, as a separate, semi-independent performance evaluator and funding source).

What the RAE should be planning to do is to look at weighted combinations of all available research performance metrics -- including the many that are correlated, but not so tightly correlated, with prior RAE rankings, such as author/article/book citation counts, article download counts, co-citations (co-cited with and co-cited by, weighted by the citation weight of the co-citer/co-citee), endogamy/exogamy metrics (citations by self or collaborators versus others, within and across disciplines), hub/authority counts (in-cites and out-cites, weighted recursively by the citing work's own in-cite and out-cite counts), download and citation growth rates, semantic-web correlates, etc. It would be both arbitrary and absurd to blunt the potential sensitivity, power, predictivity and validity of metrics a priori, by biasing them toward the prior-funding metric alone.
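The "weighted combinations" idea can be sketched numerically. In this toy example (synthetic data; the three metrics and their "true" weights are invented for illustration), a known quality score is regressed on several candidate metrics at once, and per-metric weights are recovered by ordinary least squares:

```python
# Sketch of fitting per-discipline metric weights by ordinary least squares.
# All data are synthetic; metric identities and TRUE_W are placeholders.
import random

random.seed(1)
N = 500
# Three standardised metrics (e.g. citations, downloads, funding) per unit.
data = [(random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1))
        for _ in range(N)]
TRUE_W = (0.6, 0.3, 0.1)  # assumed "true" importance of each metric
scores = [sum(w * x for w, x in zip(TRUE_W, row)) + random.gauss(0, 0.1)
          for row in data]

def ols(X, y):
    """Solve the normal equations X^T X w = X^T y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):  # forward elimination with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * k
    for i in reversed(range(k)):  # back substitution
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, k))) / A[i][i]
    return w

weights = ols(data, scores)
print("recovered weights:", [round(w, 2) for w in weights])
```

The same machinery, run discipline by discipline against validated outcome measures, is one simple way the weights could be "adjusted and optimised individually for each discipline".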
Prior funding should just be one out of a full battery of weighted metrics, adjusted to each discipline and validated against one another (and against human judgment too).

Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open Research Web: A Preview of the Optimal and the Inevitable. In: Jacobs, N. (Ed.) Open Access: Key Strategic, Technical and Economic Aspects, chapter 21. Chandos.

Stevan Harnad
American Scientist Open Access Forum

Saturday, June 17. 2006
Book-impact metric for research assessment in book-based disciplines: Self-archiving books' metadata and bibliographies

For all disciplines -- but especially for disciplines that are more book-based than journal-article-based -- it would be highly beneficial for authors to self-archive in their institutional repositories the metadata as well as the cited-reference lists (bibliographies) for the books they publish annually. That way, next-generation scientometric search engines like citebase will be able to harvest and link their reference lists (exactly as they do the reference lists of articles whose full texts have been self-archived). This will generate a book citation impact metric. Books cite and are cited by books; moreover, books cite articles and are cited by articles. It is already possible to scrape together a rudimentary book-impact index from Thomson ISI's Web of Knowledge along with data from Google Books and Google Scholar, but a worldwide Open Access database, across all disciplines, indexing all the article output as well as the book output self-archived in all the world's institutional repositories could do infinitely better than that: All that's needed is for authors' institutions and funders to mandate institutional (author) self-archiving of (1) the metadata and full-texts of all their article output along with (2) the metadata and reference lists of all their book output.
We can even do better than that: although many book authors may not wish to make their books' full texts Open Access (OA), they can still deposit them in their institutional repositories and set access as Closed Access -- accessible only to scientometric full-text harvesters and indexers (like Google Books) for full-text inversion, Boolean search, and semiometric analysis (text endogamy/exogamy, text-overlap, text similarity/proximity, semantic lineage, latent semantic analysis, etc.) -- without making the full text itself OA to individual users (i.e., potential book-buyers) if they do not wish to. This will help provide the UK's new metrics-based Research Assessment Exercise (RAE) with research performance indicators better suited to the disciplines whose research is not as journal-article- (and conference-paper-) based as that of the physical, biological and engineering sciences.

Carr, L., Hitchcock, S., Oppenheim, C., McDonald, J.W., Champion, T. & Harnad, S. (2006) Can journal-based research impact assessment be generalised to book-based disciplines? (Research Proposal)

Stevan Harnad
American Scientist Open Access Forum

Friday, June 16. 2006
Metrics-Based Assessment of Published, Peer-Reviewed Research

On Wed, 14 Jun 2006, Larry Hurtado, Department of Divinity, University of Edinburgh, wrote in the American Scientist Open Access Forum:

LH: "Stevan Harnad is totally in favour of a "metrics based" approach to judging research merit with a view toward funding decisions, and greets the news of such a shift from past/present RAE procedure with unalloyed joy."

No, metrics are definitely not meant to serve as the basis for all or most research funding decisions: research proposals, as noted, are assessed by peer review.
Metrics are intended for the other component in the UK dual funding system, in which, in addition to directly funded research, based on competitive peer review of research bids, there is also a smaller, secondary (but prestigious) top-slicing system, the Research Assessment Exercise (RAE). It is the RAE that needed to be converted to metrics from the absurd, wasteful and costly juggernaut that it used to be.

LH: "Well, hmmm. I'm not so sure (at least not yet). Perhaps there is more immediate reason for such joy in those disciplines that already rely heavily on a metrics approach to making decisions about researchers."

No discipline uses metrics systematically yet; moreover, many metrics are still to be designed and tested. However, the only thing "metrics" really means is: the objective measurement of quantifiable performance indicators. Surely all disciplines have measurable performance indicators. Surely it is not true of any discipline that the only way, or the best way, to assess all of its annual research output is by having each piece individually re-reviewed after it has already been peer-reviewed twice -- before execution, by a funding council's peer-reviewers, as a research proposal, and after execution, by a journal's referees, as a research publication.

LH: "In the sciences, and also now social sciences, there are citation-services that count publications and citations thereof in a given list of journals deemed the "canon" of publication venues for a given discipline. And in these disciplines journal articles are deemed the main (perhaps sole) mode of research publication. Ok.
Maybe it'll work for these chaps."

First, with an Open Access database, there need be no separate "canon": articles in any of the world's 24,000 peer-reviewed journals and congresses can count -- though some will (rightly) count for more than others, based on the established and known quality standards and impact of the journal in which an article appeared (this too can be given a metric weight). Alongside the weighted impact factor of the journal, there will be the citation counts for each article itself and its author, the co-citations in and out, the download counts, the hub/authority weights, the endogamy/exogamy weights, etc. All these metrics (and many more) will be derivable for all disciplines from an Open Access database (no longer restricted to ISI's Web of Knowledge). That includes, by the way, citations of books by journal articles -- and also citations of books and journal articles by books -- because although most book authors may not wish to make their books' full texts OA, they can and should certainly make their books' bibliographic metadata, including their bibliographies of cited references, OA. Those book-impact metrics can then be added to the metric harvest, citation-linked, counted, and duly weighted, along with all the other metrics. There are even Closed-Access ways of self-archiving books' digital full texts (such as Google Book Search) so they can be processed for semiometric analysis (endogamy/exogamy, content overlap, proximity, lineage, chronometric trends) by harvesters that do not make the full text openly available. All disciplines can benefit from this.

LH: "But I'd like to know how it will work in Humanities fields such as mine. Some questions, for Stevan or whomever. First, to my knowledge, there is no such citation-count service in place. So, will the govt now fund one to be set up for us? Or how will the metrics be compiled for us?
I.e., there simply is no mechanism in place for doing "metrics" for Humanities disciplines."

All the government needs to do is to mandate the self-archiving of all UK research output in each researcher's own OAI-compliant institutional (or central) repository. (The US and the rest of Europe will shortly follow suit, once the prototype policy model is at long last adopted by a major player!) The resulting worldwide interoperable database will be the source of all the metric data, and a new generation of scientometric and semiometric harvesters and analysers will quickly be spawned to operate on it, mining it to extract a rich new generation of metrics. There is absolutely nothing exceptional about the humanities (as long as book bibliographies are self-archived too, alongside journal-article full texts). Research uptake and usage is a generic indicator of research performance, and citations and downloads are generic indicators of research uptake and usage. The humanities are no different in this regard. Moreover, inasmuch as OA also enhances research uptake and usage itself, the humanities stand to benefit from OA, exactly like the other disciplines.

LH: "Second, for us, journal articles are only one, and usually not deemed the primary/preferred, mode of research publication. Books still count quite heavily. So, if we want to count citations, will some to-be-imagined citation-counting service/agency comb through all the books in my field as well as the journal articles to count how many of my publications get cited and how often? If not, then the "metrics" will be so heavily flawed as to be completely misleading and useless."

All you need to do is self-archive your books' metadata and cited-reference lists, and all your journal articles, in your OAI-compliant institutional repository. The scientometric search engines -- like citebase, citeseer, Google Scholar, and more to come -- will take care of all the rest.
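What an OAI-compliant repository actually exposes to such harvesters can be sketched with a minimal Dublin Core record. The XML below is a made-up sample of the kind of response the ListRecords verb returns; a real harvester would fetch pages of these over HTTP and follow resumption tokens:

```python
# Parse a (fabricated) OAI-PMH ListRecords response carrying Dublin Core
# metadata for a book, the way a scientometric harvester would.
import xml.etree.ElementTree as ET

SAMPLE = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
 <ListRecords>
  <record>
   <metadata>
    <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
               xmlns:dc="http://purl.org/dc/elements/1.1/">
     <dc:title>An Example Monograph</dc:title>
     <dc:creator>Doe, J.</dc:creator>
     <dc:type>Book</dc:type>
    </oai_dc:dc>
   </metadata>
  </record>
 </ListRecords>
</OAI-PMH>"""

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

root = ET.fromstring(SAMPLE)
records = []
for rec in root.findall(".//oai:record", NS):
    title = rec.findtext(".//dc:title", namespaces=NS)
    creator = rec.findtext(".//dc:creator", namespaces=NS)
    records.append((title, creator))
print(records)
```

Because the protocol and metadata format are standard, one harvester can aggregate article metadata, book metadata, and cited-reference lists from every compliant repository without field-by-field special-casing.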
If you want to do even better, scan in, OCR, and self-archive the legacy literature too: the journal articles, plus the metadata and cited-reference lists of books of yore. If you're worried about variations in reference-citing styles, don't worry: just get the digital texts in, and algorithms can start sorting them out and improving themselves.

LH: "Third, in many sciences, esp. natural and medical sciences, research simply can't be conducted without significant external funding. But in many/most Humanities disciplines truly groundbreaking and highly influential research continues to be done without much external funding."

So what is your point? That the authors of unfunded research, uncoerced by any self-archiving mandate, will not self-archive? Don't worry: they will. They may not be the first ones, but they will follow soon afterwards, as the power and potential of self-archiving to measure -- as well as to accelerate and increase -- research impact and progress become more and more manifest.

LH: "(Moreover, no govt has yet seen fit to provide funding for the Humanities constituency of researchers commensurate with that available for Sciences. So, it's a good thing we don't have to depend on such funding!)"

Funding grumbles are a worthy topic, but they have nothing whatsoever to do with OA, the benefits of self-archiving, or metrics.

LH: "My point is that the "metrics" for the Humanities will have to be quite a bit different in what is counted, at the very least."

No doubt. And the metrics used, and their weights, will be adjusted accordingly. But metrics they will be. No exceptions there. And no regression back to either human re-evaluation or delphic oracles: objective, countable performance indicators for the bulk of research output. (Of course, for special prizes and honours, individual human judgment will have to be re-invoked, in order to compare like with like, individually.)
LH: "Fourth, I'm not convinced (again, not yet; but I'm open to persuasion) that counting things = research quality and impact. Example: A number of years ago, coming from a tenure meeting at my previous University I ran into a colleague in Sociology. He opined that it was unnecessary to labour over tenure, and that he needed only two pieces of information: number of publications and number of citations. I responded, "I have two words for you: Pons and Fleischmann". Remember these guys? They were cited in Time and Newsweek and everywhere else for a season as discoverers of "cold fusion". And over the next couple of years, as some 50 or so labs tried unsuccessfully to replicate their alleged results, they must have been among the most frequently-cited guys in the business. And the net effect of all that citation was to discredit their work. So, citation = "impact". Well, maybe, but in this case "impact" = negative impact. So, are we really so sure of "metrics"?"

Not only do citations have to be weighted -- as they can and will be, recursively, by the weight of their source (Proceedings of the Royal Society vs. The Daily Sun; citations from Nobel Laureates vs. citations from uncited authors) -- but semiometric algorithms will even begin to have a go at sorting positive citations from negative ones, disinterested ones from endogamous ones, etc. Are you proposing to defer to individual expert opinion in some (many? most? all?) cases, rather than using a growing wealth and diversity of objective performance indicators? Do you really think it is harder to find individual cases of subjective opinion going wrong than of objective metrics going wrong?

LH: "Perhaps, however, Stevan can help me see the light, and join him in acclaiming the advent of metrics."

I suggest that the best way to see the light on the subject of Open Access Digitometrics is to start self-archiving and sampling the (few) existing digitometric engines, such as citebase.
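The recursive source-weighting idea (a citation counts for more when it comes from a highly cited source) can be sketched as a PageRank-style power iteration over a made-up toy citation graph; the damping factor 0.85 is just the conventional choice, and the graph is purely illustrative:

```python
# Recursive citation weighting: iterate until each paper's weight reflects
# the weights of the papers citing it. Tiny fabricated graph for illustration.
cites = {  # paper -> papers it cites
    "A": ["B", "C"],
    "B": ["C"],
    "C": [],
    "D": ["C", "B"],
}
papers = list(cites)
d = 0.85  # conventional damping factor
rank = {p: 1.0 / len(papers) for p in papers}

for _ in range(50):  # power iteration to (approximate) convergence
    new = {p: (1 - d) / len(papers) for p in papers}
    for src, targets in cites.items():
        if targets:
            share = d * rank[src] / len(targets)
            for t in targets:
                new[t] += share
        else:  # dangling paper: spread its weight uniformly
            for p in papers:
                new[p] += d * rank[src] / len(papers)
    rank = new

best = max(rank, key=rank.get)
print(best, round(rank[best], 3))
```

Here "C", cited by every other paper including the well-cited "B", ends up with the highest weight; uncited papers contribute little weight to anything they cite, which is the gist of the hub/authority counts mentioned earlier.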
You might also wish to have a look at the chapter I recommended (no need to buy the book: it's OA -- just click!): Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open Research Web: A Preview of the Optimal and the Inevitable. In: Jacobs, N. (Ed.) Open Access: Key Strategic, Technical and Economic Aspects, chapter 21. Chandos.

Stevan Harnad
American Scientist Open Access Forum

Friday, April 14. 2006
Metrics and Assessment
The following is a comment on an article that appeared in the Thursday, April 13 issue of The Independent concerning the UK Research Assessment Exercise (RAE) and metrics (followed by a response to another piece in The Independent about web metrics).
Re: Hodges, L. (2006) The RAE is dead -- long live metrics. The Independent, April 13, 2006.

Absolutely no one can justify (on the basis of anything but superstition) holding onto an expensive, time-wasting research assessment system such as the RAE, which produces rankings that are almost perfectly correlated with, hence almost exactly predictable from, inexpensive objective metrics such as prior funding, citations and research student counts. Hence the only two points worth discussing are (1) which metrics to use and (2) how to adapt the choice of metrics and their relative weights to each discipline.

The web has opened up a vast and rich universe of potential metrics that can be tested for their validity and predictive power: citations, downloads, co-citations, immediacy, growth-rate, longevity, interdisciplinarity, user tags/commentaries and much, much more. These are all measures of research uptake, usage, impact, progress and influence. They have to be tested and weighted according to the unique profile of each discipline (or even subdiscipline). The prior-funding metric alone is highly predictive on its own, but it also generates a Matthew Effect: a self-fulfilling, self-perpetuating prophecy. So multiple, weighted metrics are needed for balanced evaluation and prediction.

I would not for a moment believe, however, that any (research) discipline lacks predictive metrics of research performance altogether. Even less credible is the superstitious notion that the only way (or the best way) to evaluate research is for RAE panels to re-do, needlessly and locally, the peer review that has already been done, once, by the journals in which the research has been published. The urgent feeling that some form of human re-review is somehow crucial for fairness and accuracy has nothing to do with the RAE or metrics in particular; it is just a generic human superstition (and irrationality) about population statistics versus my own unique, singular case...
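Testing a candidate metric's validity against panel outcomes comes down, at its simplest, to computing a correlation. Here is a minimal Pearson sketch; the seven department scores below are invented for illustration and are not the actual RAE data:

```python
# Pearson correlation between hypothetical panel grades and a candidate
# metric (prior funding). All figures are fabricated placeholders.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

panel_score = [5.0, 4.5, 4.0, 3.5, 3.0, 2.5, 2.0]    # hypothetical RAE grades
prior_funding = [9.8, 8.9, 8.2, 6.9, 6.1, 4.8, 4.1]  # hypothetical funding

r = pearson(panel_score, prior_funding)
print(f"r = {r:.2f}")
```

A high r, computed per discipline against independently validated outcomes, is precisely the kind of evidence the metric-by-metric validation exercise would need to produce.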
The reasons for the University of Southampton's extremely high overall webmetric rating are four: (1) U. Southampton's university-wide research performance. This all makes for an extremely strong Southampton web presence, as reflected in such metrics as the "G factor", which places Southampton 3rd in the UK and 25th among the world's top 300 universities, and Webometrics, which places Southampton 6th in the UK, 9th in Europe, and 80th among the top 3000 universities it indexes. Of course, these are extremely crude metrics, but Southampton itself is developing more powerful and diverse metrics for all universities in preparation for the newly announced metrics-only Research Assessment Exercise.

Some references:

Harnad, S. (2001) Why I think that research access, impact and assessment are linked. Times Higher Education Supplement 1487: p. 16.

Hitchcock, S., Brody, T., Gutteridge, C., Carr, L., Hall, W., Harnad, S., Bergmark, D. and Lagoze, C. (2002) Open Citation Linking: The Way Forward. D-Lib Magazine 8(10).

Harnad, S. (2003) Why I believe that all UK research output should be online. Times Higher Education Supplement, Friday, June 6, 2003.

Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated online RAE CVs linked to university eprint archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35.

Berners-Lee, T., De Roure, D., Harnad, S. and Shadbolt, N. (2005) Journal publishing and author self-archiving: Peaceful Co-Existence and Fruitful Collaboration.

Brody, T., Harnad, S. and Carr, L. (2006) Earlier Web Usage Statistics as Predictors of Later Citation Impact. Journal of the American Society for Information Science and Technology (JASIST).

Shadbolt, N., Brody, T., Carr, L. & Harnad, S. (2006) The Open Research Web: A Preview of the Optimal and the Inevitable. In: Jacobs, N. (Ed.) Open Access: Key Strategic, Technical and Economic Aspects. Chandos.
Citebase impact ranking engine and usage/citation correlator/predictor Beans and Bean Counters Bibliography of Findings on the Open Access Impact Advantage Stevan Harnad American Scientist Open Access Forum Thursday, March 23. 2006 Online, Continuous, Metrics-Based Research Assessment As predicted, and long urged, the UK's wasteful, time-consuming Research Assessment Exercise (RAE) is to be replaced by metrics: "Research exercise to be scrapped". The RAE outcome is most closely correlated (r = 0.98) with the metric of prior RCUK research funding (Figure 4.1) (this is no doubt in part a "Matthew Effect"), but research citation impact is another metric highly correlated with the RAE outcome, even though it is not explicitly counted. Now it can be explicitly counted (along with other powerful new performance metrics) and all the rest of the ritualistic time-wasting can be abandoned, without further ceremony. This represents a great boost for institutional self-archiving in Open Access Institutional Repositories, not only because that is the obvious, optimal means of submission to the new metric RAE, but because it is also a powerful means of maximising research impact, i.e., maximising those metrics. (I hope Research Councils UK (RCUK) is listening!) Harnad, S. (2001) Why I think that research access, impact and assessment are linked. Times Higher Education Supplement 1487: p. 16. And this new metric RAE policy will help "unskew" assessment by instead placing the weight on individual author/article citation counts (and download counts, CiteRanks, authority counts, citation/download latency, citation/longevity, co-citation signature, and many, many new OA metrics waiting to be devised and validated, including full-text semantic-analysis and semantic-web-tag analyses too) rather than only, or primarily, on the blunter instrument, the journal impact factor. This is not just about one number any more!
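The r = 0.98 figure above is a Pearson correlation between prior funding and RAE outcome. As a minimal sketch of what such a correlation computation involves (the department figures below are invented for illustration, not the actual Figure 4.1 data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical departments: prior RCUK funding (in arbitrary units)
# versus RAE rating -- illustrative numbers only.
funding = [0.5, 1.2, 2.0, 3.1, 4.8, 6.0]
rae_score = [2.1, 2.8, 3.4, 4.0, 4.9, 5.4]
print(round(pearson_r(funding, rae_score), 3))
```

A correlation this close to 1 is exactly the post's point: when an expensive ranking exercise is almost fully recoverable from a cheap metric, the cheap metric can stand in for it.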
The journal tag will still have some weight, but just one weight among many, in an OA scientometric multiple regression equation, customised for each discipline. This is an occasion for rejoicing at progress, pluralism and openness, not digging up obsolescent concerns about over-reliance on the journal impact factor. The document actually says: "one or more metrics... could be used to assess research quality and allocate funding, for example research income, citations, publications, research student numbers etc." You are quite right, though, that the default metric many have in mind is research income, but be patient! Now that the door has been opened to objective metrics (instead of amateurish in-house peer-re-review), this will spawn more and more candidates for enriching the metric equation. If RAE top-slicing wants to continue to be an independent funding source in the present "dual" funding system (RCUK/RAE), it will want to have some predictive metrics that are independent of prior funding. (If RAE instead just wants to redundantly echo research funding, it need merely scale up RCUK research grants to absorb what would have been the RAE top-slice and drop the RAE and dual funding altogether!) The important thing is to scrap the useless, time-wasting RAE preparation/evaluation ritual we were all faithfully performing, when the outcome was already so predictable from other, cheaper, quantitative sources. Objective metrics are the natural, sensible way to conduct such an exercise, continuously, and once we are doing metrics, many powerful new predictive measures will emerge, over and above grant income and citations. The RAE ranking will not come from one variable, but from a multiple regression equation, with many weighted predictor metrics, in an Open Access world in which research full-texts in their own authors' Institutional Repositories are citation-linked, download-monitored and otherwise scientometrically assessed and analysed continuously.
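The "multiple regression equation" idea can be sketched concretely. In this hedged illustration (the metric columns, department values and criterion scores are all invented assumptions, not any actual RAE model), ordinary least squares fits discipline-specific weights for several candidate metrics against a validation criterion such as panel rankings; the fitted equation can then score departments from metrics alone:

```python
import numpy as np

# Hypothetical training data: rows = departments, columns = candidate metrics
# (prior funding, citations, downloads, research-student count) -- all invented.
metrics = np.array([
    [0.5,  120,  3000,  5],
    [1.2,  340,  8000, 11],
    [2.0,  510, 15000, 14],
    [3.1,  800, 21000, 22],
    [4.8, 1250, 40000, 30],
    [6.0, 1600, 52000, 38],
])
panel_score = np.array([2.1, 2.9, 3.5, 4.1, 5.0, 5.5])  # criterion to validate against

# Add an intercept column and fit per-metric weights by ordinary least squares
X = np.column_stack([np.ones(len(metrics)), metrics])
weights, *_ = np.linalg.lstsq(X, panel_score, rcond=None)

# The fitted equation now scores a new department from its metrics alone
new_dept = np.array([1, 2.5, 600, 18000, 16])  # leading 1 = intercept term
print(float(new_dept @ weights))
```

In practice the weights would be re-fitted (and re-validated) per discipline, which is the point of the passage: the ranking comes from a weighted combination of many metrics, not from any single one.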
Hitchcock, S., Brody, T., Gutteridge, C., Carr, L., Hall, W., Harnad, S., Bergmark, D. and Lagoze, C. (2002) Open Citation Linking: The Way Forward. D-Lib Magazine 8(10). Stevan Harnad Friday, October 7. 2005 Re: Critique of Research Fortnight article on RCUK policy proposal. Prior AmSci Topic Thread (started September 16, 2005): On Thu, 6 Oct 05, Sally Morris (ALPSP) wrote: SM: "Interesting that Stevan chooses to ignore key points in my message: IOP didn't say 'the opposite' at all - they said subs hadn't been affected 'yet'; as Ken Lillywhite's message makes clear, they fully expect subs to suffer as the logical consequence of the fall in downloads - and Bob Michaelson's message shows that their fear is justified?" Please do look again, Sally, as I did duly note the "yet": SH: "'Yet' can quite safely and reasonably be appended to everything I have seen and heard, and it makes not a whit of difference." And the target of "opposite" was also fully clarified: SH: "What I should have said was that the diminished article downloads do not equal, nor do they imply, diminished subscriptions, and that IOP had said exactly the opposite: That despite replication in a repository (ArXiv) IOP had [1] found no diminished subscriptions, [2] does not consider self-archiving a threat, [3] cooperates with Arxiv, and indeed [4] will soon be hosting a mirror of Arxiv." [1] - [4] are all true, and constitute the substance of what we are talking about: ("Is there any evidence that self-archiving causes cancellations?" Answer: No. "Is IOP opposing self-archiving?" Answer: No.) You, dear Sally, have instead always refocused the question from objective evidence (about actual self-archiving and actual cancellations) to subjective worries (about possible future cancellations) for which there is as yet no objective evidence at all.
And you have tried (but were, I am afraid, destined to be unsuccessful) to interpret the perfectly true, but perfectly irrelevant statement that IOP has recorded lower downloads at its website as if it were evidence of present or future cancellations. It is not. So please do keep the two propositions in focus: SH: "This statement [that IOP finds diminished downloads for self-archived articles] is perfectly true but in no way implies what ALPSP cites it to imply (i.e., that diminished downloads are evidence that self-archiving causes cancellations), for that is the exact opposite of what the Institute of Physics has said (Swan & Brown 2005)." True: IOP website downloads are reduced by self-archiving. These are the objective facts. The rest is about subjective worries, which, with Rene Descartes, I tend to regard as incorrigible (to the worrier), hence impenetrable to doubt (by the worrier), yet eminently fallible. (I cannot doubt that I have a tooth-ache when I have a tooth-ache, but I can doubt that the tooth-ache means there's anything wrong with my tooth, even though it feels like it: it might be referred pain from another organ, or even just neurasthenic pain.) As to librarian anecdotes about cancellation practices: First, you will agree that they do not amount to much until/unless they translate into measurable objective effects (which both APS and IOP have said they could not detect, across 14 years of self-archiving). Apart from that, librarian anecdotes can be freely traded. Here's one of my favorites: "Personal communication from a UK University Library Director: 'I know of no HE library where librarians make cancellation or subscription decisions. Typically they say to the department/faculty: "We have to save £X,000 from your share of the serials budget: what do you want to cut?" These are seen as academic -- not metrics-driven -- judgements, and no librarian makes those academic judgements, as they are indefensible in Senate. [S]uch decisions are almost always wholly subjective, not objective, and have nothing to do with the existence or otherwise of repositories.'" In my next posting, I will turn to the consequences of failing to exercise the Cartesian faculty of critical analysis, one-sidedly hewing to subjective worries, while ignoring objective counter-evidence. I will do a critique of the following unsigned article that has just appeared in Research Fortnight: "The Dangers of Open Access, RCUK Style" Monday, August 22. 2005 Journal Publishing and Author Self-Archiving: Peaceful Co-Existence and Fruitful Collaboration Tim Berners-Lee (UK, Southampton & US, MIT) Dave De Roure (UK, Southampton) Stevan Harnad (UK, Southampton & Canada, UQaM) Derek Law (UK, Strathclyde) Peter Murray-Rust (UK, Cambridge) Charles Oppenheim (UK, Loughborough) Nigel Shadbolt (UK, Southampton) Yorick Wilks (UK, Sheffield) Subbiah Arunachalam (India, MSRF) Helene Bosc (France, INRA, ret.) Fred Friend (UK, University College, London) Andrew Odlyzko (US, University of Minnesota) Arthur Sale (Australia, University of Tasmania) Peter Suber (US, Earlham) SUMMARY: The UK Research Funding Councils (RCUK) have proposed that all RCUK fundees should self-archive on the web, free for all, their own final drafts of journal articles reporting their RCUK-funded research, in order to maximise their usage and impact. ALPSP (a learned publishers' association) now seeks to delay and block the RCUK proposal, auguring that it will ruin journals. All objective evidence from the past decade and a half of self-archiving, however, shows that self-archiving can and does co-exist peacefully with journals while greatly enhancing both author/article and journal impact, to the benefit of both.
Journal publishers should not be trying to delay and block self-archiving policy; they should be collaborating with the research community on ways to share its vast benefits. This is a public reply, co-signed by the above, to the August 5, 2005, public letter by Sally Morris, Executive Director of ALPSP (Association of Learned and Professional Society Publishers), to Professor Ian Diamond, Chair, RCUK (Research Councils UK), concerning the RCUK proposal to mandate the web self-archiving of authors' final drafts of all journal articles resulting from RCUK-funded research, making them freely accessible to all researchers worldwide who cannot afford access to the official journal version, in order to maximise the usage and impact of the RCUK-funded research findings. It is extremely important that the arguments and objective evidence for or against the optimality of research self-archiving policy be aired and discussed openly, as they have been for several years now, all over the world, so that policy decisions are not influenced by one-sided arguments from special interests that can readily be shown to be invalid. Every single one of the points made by the ALPSP below is incorrect -- incorrect from the standpoint of both objective evidence and careful logical analysis. We accordingly provide a point-by-point rebuttal here, along with a plea for an end to publishers' efforts to block or delay self-archiving policy -- a policy that is undeniably beneficial to research and researchers, as well as to their institutions and the public that funds them. Publishers should collaborate with the research community to share the benefits of maximising research access and impact. (Please note that this is not the first time the ALPSP's points have been made, and rebutted; but whereas the rebuttals take very careful, detailed account of the points made by ALPSP, the ALPSP unfortunately just keeps repeating its points without taking any account of the detailed replies.
By way of illustration, the prior ALPSP critique of the RCUK proposal (April 19) was followed on July 1 by a point-by-point rebuttal. The reader who compares the two cannot fail to notice certain recurrent themes that ALPSP keeps ignoring in their present critique. In particular, 3 of the 5 examples that ALPSP cites below as evidence of the negative effects of self-archiving on journals turn out to have nothing at all to do with self-archiving, exactly as pointed out in the earlier rebuttal. The other 2 examples turn out to be positive evidence for the potential of sharing the benefits through cooperation and collaboration between the research and publishing community, rather than grounds for denying research and researchers those benefits through opposition.) All quotes are from the ALPSP response to RCUK's proposed position statement on access to research outputs, which was addressed to Professor Ian Diamond, Research Councils UK Secretariat, on 5th August, 2005: ALPSP: "Although the mission of our publisher members is to disseminate and maximise access to research information" The principle of maximising access to research information is indeed the very essence of the issue at hand. The reader of the following statements and counter-statements should accordingly bear this principle in mind while weighing them: Unlike the authors of books or of magazine and newspaper articles, the authors of research journal articles are not writing in order to sell their words, but in order to share their findings, so other researchers can use and build upon them, in order to advance research progress, to the benefit of the public that funded the research. This usage and application is called research impact.
Research impact is a measure of research progress and productivity: the influence that the findings have had on the further course of research and its applications; the difference it has made that a given piece of research has been conducted at all, rather than being left unfunded and undone. Research impact is the reason the public funds the research and the reason researchers conduct the research and report the results. Research that makes no impact may as well not have been conducted at all. One of the primary indicators -- but by no means the only one -- of research impact is the number of resulting pieces of research by others that make use of a finding, by citing it. Citation counts are accordingly quantitative measures of research impact. (The reader is reminded, at this early point in our critique, that it is impossible for a piece of research to be read, used, applied and cited by any researcher who cannot access it. Research access is a necessary (though not a sufficient) condition for research impact.) Owing to this central importance of impact in the research growth and progress cycle, the authors of research are rewarded not by income from the sales of their texts, like normal authors, but by 'impact income' based on how much their research findings are used, applied, cited and built upon. Impact is what helps pay the author's salary, what brings further RCUK grant income, and what brings RAE (Research Assessment Exercise) income to the author's institution. And the reason the public pays taxes for the RCUK and RAE to use to fund research in the first place is so that that research can benefit the public -- not so that it can generate sales income for publishers. There is nothing wrong with research also generating sales income for publishers. But there is definitely something wrong if publishers try to prevent researchers from maximising the impact of their research, by maximising access to it. 
For whatever limits research access limits research progress; to repeat: access is a necessary condition for impact. Hence, for researchers and their institutions, the need to 'maximise access to research information' is not just a pious promotional slogan: Whatever denies access to their research output is denying the public the research impact and progress it paid for and denying researchers and their institutions the impact income they worked for. Journals provide access to all individuals and institutions that can afford to subscribe to them, and that is fine. But what about all the other would-be users -- those researchers world-wide whose institutions happen to be unable to afford to subscribe to the journal in which a research finding happens to be published? There are 24,000 research journals and most institutions can afford access only to a small fraction of them. Across all fields tested so far (including physics, mathematics, biology, economics, business/management, sociology, education, psychology, and philosophy), articles that have been self-archived freely on the web, thereby maximising access, have been shown to have 50%-250+% greater citation impact than articles that have not been self-archived. Is it reasonable to expect researchers and their institutions and funders to continue to renounce that vast impact potential in an online age that has made this impact-loss no longer necessary? Can asking researchers to keep on losing that impact be seriously described as 'maximising access to research information'? Now let us see on what grounds researchers are being asked to renounce this impact: ALPSP: "we find ourselves unable to support RCUK's proposed position paper on the means of achieving this. We continue to stress all the points we made in our previous response, dated 19 April, and are insufficiently reassured by RCUK's reply. 
We are convinced that RCUK's proposed policy will inevitably lead to the destruction of journals." If it were indeed true that the RCUK's policy will inevitably lead to the destruction of journals, then this contingency would definitely be worthy of further time and thought. But there is in fact no objective evidence whatsoever in support of this dire prophecy. All evidence (footnote 1) from 15 years of self-archiving (in some fields having reached 100% self-archiving long ago) is exactly the opposite: self-archiving and journal publication can and do continue to co-exist peacefully, with institutions continuing to subscribe to the journals they can afford, and researchers at the institutions that can afford them continuing to use them; the only change is that the author's own self-archived final drafts (as well as earlier pre-refereeing preprints) are now accessible to all those researchers whose institutions could not afford the official journal version (as well as to any who may wish to consult the pre-refereeing preprints). In other words, the self-archived author's drafts, pre- and post-refereeing, are supplements to the official journal version, not substitutes for it. In the absence of any objective evidence at all to the effect that self-archiving reduces subscriptions, let alone destroys journals, and in the face of 15 years' worth of evidence to the contrary, ALPSP simply amplifies the rhetoric, elevating pure speculation to a putative justification for continuing to delay and oppose a policy that is already long overdue and a practice that has already been amply demonstrated to deliver something of immense benefit to research, researchers, their institutions and funders: dramatically enhanced impact. All this, ALPSP recommends, is to be put on hold because some publishers have the 'conviction' that self-archiving will destroy journals.
ALPSP: "A policy of mandated self-archiving of research articles in freely accessible repositories, when combined with the ready retrievability of those articles through search engines (such as Google Scholar) and interoperability (facilitated by standards such as OAI-PMH), will accelerate the move to a disastrous scenario." The objective evidence from 15 years of continuous self-archiving by physicists (even longer by computer scientists) has in fact tested this grim hypothesis; and this cumulative evidence affords not the slightest hint of any move to a 'disastrous scenario.' Throughout the past decade and a half, final drafts of hundreds of thousands of articles have been made freely accessible and readily retrievable by their authors (in some fields approaching 100% of the research published). And these have indeed been extensively accessed and retrieved and used and applied and cited by researchers in those disciplines, exactly as their authors intended (and far more extensively than articles for which the authors' drafts had not been made freely accessible). Yet when asked, both of the large physics learned societies (the Institute of Physics Publishing in the UK and the American Physical Society) responded very explicitly that they could identify no loss of subscriptions to their journals as a result of this critical mass of self-archived and readily retrievable physics articles (footnote 1). The ALPSP's doomsday conviction does not gain in plausibility by merely being repeated, ever louder. Google Scholar and OAI-PMH do indeed make the self-archived supplements more accessible to their would-be users, but that is the point: the purpose of self-archiving is to maximise access to research information. (Some publishers may still be in the habit of reckoning that research is well-served by access-denial, but the providers of that research -- the researchers themselves, and their funders -- can perhaps be forgiven for reckoning, and acting, otherwise.)
ALPSP: "Librarians will increasingly find that 'good enough' versions of a significant proportion of articles in journals are freely available; in a situation where they lack the funds to purchase all the content their users want [emphasis added] it is inconceivable that they would not seek to save money by cancelling subscriptions to those journals. As a result, those journals will die." First, please note the implicit premise here: Where research institutions 'lack the funds to purchase all the content their researchers want,' the users (researchers) should do without that content, and the providers (researchers) should do without the usage and impact, rather than just giving it to one another, as the RCUK proposes. And why? Because researchers giving their own research to researchers who cannot afford the journal version will make the journals die. Second, RCUK-funded researchers publish in thousands of journals all over the world -- the UK, Europe and North America. Their publications, though important, represent the output of only a small fraction of the world's research population. Neither research topics nor research journals have national boundaries. Hence it is unlikely that a 'significant proportion' of the articles in any particular journal will become freely available purely as a consequence of the RCUK policy. Third, journals have died and been born every year since the advent of journals. Their birth may be because of a new niche, and their demise may be because of the loss or saturation of an old niche, or because the new niche was an illusion. Scholarly fashions, emphases and growth regions also change. This is ordinary intellectual evolution plus market economics.
Fourth (and most important), as we have already noted, physics journals already do contain a 'significant proportion' of articles that have been self-archived in the physics repository, arXiv -- yet librarians have not cancelled subscriptions (footnote 1) despite a decade and a half's opportunity to do so, and the journals continue to survive and thrive. So whereas ALPSP may find it subjectively 'inconceivable,' the objective fact is that self-archiving is not generating cancellations, even where it is most advanced and has been going on the longest. Research libraries -- none of which can afford to subscribe to all journals, because they have only finite journals budgets -- have always tried to maximise their purchasing power, cancelling journals they think their users need less, and subscribing to journals they think their users need more. As objective indicators, some may use (1) usage statistics (paper and online) and (2) citation impact factors, but the final decision is almost always made on the basis of (3) surveys of their own users' recommendations (footnote 2). Self-archiving does not change this one bit, because self-archiving is not done on a per-journal basis but on a per-article basis. And it is done anarchically, distributed across authors, institutions and disciplines. An RCUK mandate for all RCUK-funded researchers to self-archive all their articles will have no net differential effect on any particular journal one way or the other. Nor will RCUK-mandated self-archiving exhaust the contents of any particular journal. So librarians' money-saving and budget-balancing subscription/cancellation efforts may proceed apace. Journals will continue to be born and to die, as they always did, but with no differential influence from self-archiving. 
But let us fast-forward this speculation: The RCUK self-archiving mandate itself is unlikely to result in any individual journal's author-archived supplements rising to anywhere near 100%, but if the RCUK model is followed (as is quite likely) by other nations around the world, we may indeed eventually reach 100% self-archiving for all articles in all journals. That would certainly be optimal for research, researchers, their institutions, their funders, and the tax-paying public that funds the funders. Would it be disastrous for journals? A certain amount of pressure would certainly be taken off librarians' endless struggle to balance their finite journal budgets: The yearly journal selection process would no longer be a struggle for basic survival (as all researchers would have online access to at least the author-self-archived supplements), but market competition would continue among publisher-added-values, which include (1) the paper edition and (2) the official, value-added, online edition (functionally enriched with XML mark-up, citation links, publisher's PDF, etc.). The market for those added values would continue to determine what was subscribed to and what was cancelled, pretty much as it does now, but in a stabler way, without the mounting panic and desperation that struggling with balancing researchers' basic inelastic survival needs has been carrying with it for years now (the 'serials crisis'). If, on the other hand, the day were ever to come when there was no longer a market for the paper edition, and no longer a market for some of the online added-values, then surely the market can be trusted to readjust to that new supply/demand optimum, with publishers continuing to sell whatever added values there is still a demand for. One sure added-value, for example, is peer review. 
Although journals don't actually perform the peer review (researchers do it for them, for free), they do administer it, with qualified expert editors selecting the referees, adjudicating the referee reports, and ensuring that authors revise as required. It is conceivable that one day that peer review service will be sold as a separate service to authors and their institutions, with the journal-name just a tag that certifies the outcome, instead of being bundled into a product that is sold to users and their institutions. But that is just a matter of speculation right now, when there is still a healthy demand for both the paper and online editions. Publishing will co-evolve naturally with the evolution of the online medium itself. But what cannot be allowed to happen now is for researchers' impact (and the public's investment and stake in it) to be held hostage to the status quo, under the pretext of forestalling a doomsday scenario that has no evidence to support it and all evidence to date contradicting it. ALPSP: "The consequences of the destruction of journals' viability are very serious. Not only will it become impossible to support the whole process of quality control, including (but not limited to) peer review" Notice that the doomsday scenario has simply been taken for granted here, despite the absence of any actual evidence for it, and despite all the existing evidence to the contrary. Because it is being intoned so shrilly and with such 'conviction', it is to be taken at face value, and we are simply to begin our reckoning by accepting it as an unchallenged premise: but that premise is without any objective foundation whatsoever.
As ALPSP mentions peer review, however, is this not the point to remind ourselves that among the many (unquestionable) values that the publisher does add, peer review is a rather anomalous one, being an unpaid service that researchers themselves are rendering to the publisher gratis (just as they give their articles gratis, without seeking any payment)? As noted above, peer review and the certification of its outcome could in principle be sold as a separate service to the author-institution, instead of being bundled with a product sold to the subscriber-institution; hence it is not true that it would be 'impossible to support' peer review even if journals' subscription base were to collapse entirely. But as there is no evidence of any tendency toward a collapse of the subscription base, this is all just hypothetical speculation at this point. ALPSP: "but in addition, the research community will lose all the other value and prestige which is added, for both author and reader, through inclusion in a highly rated journal with a clearly understood audience and rich online functionality." Wherever authors and readers value either the paper edition or the rich online functionality -- both provided only by the publisher -- they will continue to subscribe to the journal as long as they can afford it, either personally or through their institutional library. As noted above, this clearly continues to be the case for the physics journals that are the most advanced in testing the waters of self-archiving. Publishers who add sufficient value create a product that the market will pay for (by the definition of supply, demand and sufficient value). However, surely the interests of research and the public that funds it are not best served if those researchers (potential users) who happen to be unable to afford the particular journal in which the functionally enriched, value-added version is published are denied access to the basic research finding itself.
Even more important and pertinent to the RCUK proposal: the fundee's and funder's research should not be denied the impact potential from all those researchers who cannot afford access. Researchers have always given away all their findings (to their publishers as well as to all requesters of reprints) so that other researchers could further advance the research by using, applying and building upon their findings. Access-denial has always limited the progress, productivity and impact of science and scholarship. Now the online age has at last made it possible to put an end to this needless access-denial and resultant impact-loss; the RCUK is simply the first to propose systematically applying the natural, optimal, and inevitable remedy to all research output. Whatever publisher-added value is truly value continues to be of value when it co-exists with author self-archiving. Articles continue to appear in journals, and the enriched functionality of the official value-added online edition (as well as the paper edition) is still there to be purchased. It is just that those who could not afford them previously will no longer be deprived of access to the research findings themselves. ALPSP: "This in turn will deprive learned societies of an important income stream, without which many will be unable to support their other activities -- such as meetings, bursaries, research funding, public education and patient information -- which are of huge benefit both to their research communities and to the general public." (Notice, first, that this is all still predicated on the truth of the doomsday conviction -- 'that self-archiving will inevitably destroy journals' -- which is contradicted by all existing evidence.)
But insofar as learned societies' 'other activities' are concerned, there is a very simple, straightforward way to put the proposition at issue: Does anyone imagine -- if an either/or choice point were ever actually reached, and the trade-off and costs/benefits were made completely explicit and transparent -- that researchers would knowingly and willingly choose to continue subsidising learned societies' admirable good works -- meetings, bursaries, research funding, public education and patient information -- at the cost of their own lost research impact? The ALPSP doomsday 'conviction', however, has no basis in evidence, hence there is no either/or choice that needs to be made. All indications to date are that learned societies will continue to publish journals -- adding value and successfully selling that added value -- in peaceful co-existence with RCUK-mandated self-archiving. But entirely apart from that, ALPSP certainly has no grounds for asking researchers to renounce maximising their own research impact for the sake of financing learned societies' good works (like meetings, bursaries and public education) -- good works that could finance themselves in alternative ways, not parasitic on research progress, if circumstances were ever to demand it. The ALPSP letter began by stating that the mission of ALPSP publisher members is to 'disseminate and maximise access to research information'. Some of the journal-publishing learned societies do indeed affirm that this is their mission; yet by their restrictive publishing practices they actively contradict it, while defending the resulting inescapable contradiction by pleading a disaster scenario (very like the one ALPSP repeatedly invokes) in the name of protecting the publishing profits that support all of the society's other activities. Yet this is not the attitude of forward-thinking, member-oriented societies that understand properly what researchers in their fields need and know how to deliver it.
Here is a quote from Dr Elizabeth Marincola, Executive Director of the American Society for Cell Biology, a sizeable but not huge society (10,000 members; many US scientific and medical societies have over 100,000 members):

"I think the more dependent societies are on their publications, the farther away they are from the real needs of their members. If they were really doing good work and their members were aware of this, then they wouldn't be so fearful. When my colleagues come to me and say they couldn't possibly think of putting their publishing revenues at risk, I think 'why haven't you been diversifying your revenue sources all along and why haven't you been diversifying your products all along?' The ASCB offers a diverse range of products so that if publications were at risk financially, we wouldn't lose our membership base because there are lots of other reasons why people are members." (footnote 3)

This perfectly encapsulates why we should not be too credulous about the dire warnings emanating from learned societies to the effect that self-archiving will damage research and its dissemination. The dissemination of research findings should, as avowed, be a high-priority service for societies -- a direct end in itself, not just a trade activity to generate profit so as to subsidise other activities, at the expense of research itself.

ALPSP: "The damaging effects will not be limited to UK-published journals and UK societies; UK research authors publish their work in the most appropriate journals, irrespective of the journals' country of origin."

The thrust of the above statement is rather unclear: The RCUK-mandated self-archiving will indeed be distributed across all journals, worldwide. Hence, if it had indeed been 'damaging', that damage would likewise be distributed (and diluted) across all journals, not concentrated on any particular journal. So what is the point being made here?
But in fact there is no evidence at all that self-archiving is damaging to journals, rather than co-existing peacefully with them; and there is a great deal of evidence that it is extremely beneficial to research, researchers, their institutions and their funders.

ALPSP: "We absolutely reject unsupported assertions that self-archiving in publicly accessible repositories does not and will not damage journals. Indeed, we are accumulating a growing body of evidence that the opposite is the case [emphasis added], even at this early stage."

We shall now examine whose assertions need to be absolutely rejected as unsupported, and whether there is indeed 'a growing body of evidence that the opposite is the case'. What follow are ALPSP's five pieces of putative evidence in support of their expressed 'conviction' that self-archiving will damage journals. Please follow carefully, as the first two pieces of evidence [1]-[2] -- concerning usage and citation statistics -- will turn out to be positive evidence rather than negative evidence, and the last three pieces of evidence [3]-[5] -- concerning journals that make all of their own articles free online -- turn out to have nothing whatsoever to do with author self-archiving.

ALPSP: "For example: ..."

How does example [1] show that 'the opposite is the case'? As has already been reported above, the Institute of Physics Publishing (UK) and the American Physical Society (US) have both stated publicly that they can identify no loss of subscriptions as a result of nearly 15 years of self-archiving by physicists! (Moreover, publishers and institutional repositories can and will easily work out a collaborative system of pooled usage statistics, all credited to the publisher's official version; so that is no principled obstacle either.)
The easiest thing in the world for Institutional Repositories (IRs) to provide to publishers (along with the link from the self-archived supplement in the IR to the official journal version on the publisher's website -- something already dictated by good scholarly practice) is the IR download statistics for the self-archived version of each article. These can be pooled with the download statistics for the official journal version and all of it (rightly) credited to the article itself. Another bonus that the self-archived supplements already provide is enhanced citation impact -- of which not only the article, the author, the institution and the funder are co-beneficiaries, but also the journal and the publisher, in the form of an enhanced journal impact factor (average citation count). It has also been demonstrated recently that download impact and citation impact are correlated, downloads in the first six months after publication being predictive of citations after two years. All these statistics and benefits are there to be shared between publishers, librarians and research institutions in a cooperative, collaborative atmosphere that welcomes the benefits of self-archiving to research and works to establish a system that shares them among the interested parties. Collaboration on sharing the benefits of self-archiving is what learned societies should be setting up meetings to do -- rather than just trying to delay and oppose what is so obviously a substantial and certain benefit to research, researchers, their institutions and funders, as well as a considerable potential benefit to journals, publishers and libraries. If publishers take an adversarial stance on self-archiving, all they do is deny themselves its potential benefits (out of the groundless but self-sustaining 'conviction' that self-archiving can bring them only disaster).
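The pooling of download statistics described above is simple enough to sketch in code. The following is a minimal, hypothetical illustration (the DOIs, counts and function name are all invented for this sketch, not any publisher's or repository's actual system): per-article downloads from the IR and from the publisher's site are merged into one tally credited to the article itself.

```python
# Hypothetical sketch: pooling per-article download counts from an
# institutional repository (IR) and the publisher's site, crediting
# the combined total to the article itself (keyed here by DOI).

from collections import Counter

def pool_downloads(ir_counts, publisher_counts):
    """Merge two {doi: downloads} mappings into one combined tally."""
    total = Counter(ir_counts)
    total.update(publisher_counts)  # Counter.update adds counts
    return dict(total)

ir_counts = {"10.1000/abc123": 40, "10.1000/def456": 12}
publisher_counts = {"10.1000/abc123": 85, "10.1000/ghi789": 7}

pooled = pool_downloads(ir_counts, publisher_counts)
# The self-archived and official versions' downloads are credited jointly:
print(pooled["10.1000/abc123"])  # 125
```

The point of the sketch is only that nothing technical stands in the way: the two download streams are trivially mergeable once both parties agree to share them.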
Self-archiving's benefits to research are demonstrated and incontestable, hence will incontestably prevail. (ALPSP's efforts to delay the optimal and inevitable will not redound to learned societies' historic credit; the sooner they drop their filibustering and turn to constructive cooperation and collaboration, the better for all parties concerned.)

ALPSP: "[2] Citation statistics and the resultant impact factors are of enormous importance to authors and their institutions; they also influence librarians' renewal/cancellation decisions. Both the Institute of Physics and the London Mathematical Society are therefore troubled to note an increasing tendency for authors to cite only the repository version of an article, without mentioning the journal in which it was later published."

Librarians' decisions about which journals to renew or cancel take into account a variety of comparative measures, citation statistics being one of them (footnote 2). Self-archiving has now been analysed extensively and shown to increase journal article citations substantially in field after field; so journals carrying self-archived articles will have higher impact factors, and will hence perform better under this measure in competing for their share of libraries' serials budgets. This refutes example [2]. As to the proper citation of the official journal version: this is merely a question of proper scholarly practice, which is evolving and will of course adapt naturally to the new medium; a momentary lag in scholarly rigour is certainly no argument against the practice of self-archiving or its benefits to research and researchers. Moreover, publishers and institutional repositories can and will easily work out a collaborative system of pooled citation and reference statistics -- all credited to the official published version. So that is no principled obstacle either. This is all just a matter of adapting scholarly practices naturally to the new medium (and is likewise inevitable).
It borders on the absurd to cite something whose solution is so simple and obvious as serious grounds for preventing research impact from being maximised by universal self-archiving!

ALPSP: "[3] Evidence is also growing that free availability of content has a very rapid negative effect on subscriptions. Oxford University Press made the contents of Nucleic Acids Research freely available online six months after publication; subscription loss was much greater than in related journals where the content was free after a year. The journal became fully Open Access this year, but offered a substantial reduction in the publication charge to those whose libraries maintained a print subscription; however, the drop in subscriptions has been far more marked than was anticipated."

This is a non-sequitur, having nothing to do with self-archiving, one way or the other (as was already pointed out in the prior rebuttal of ALPSP's April critique of the RCUK proposal): This example refers to an entire journal's contents -- the official value-added versions, all being made freely accessible, all at once, by the publisher -- not to the anarchic, article-by-article self-archiving of the author's final draft by the author, which is what the RCUK is mandating. This example in fact reinforces what was noted earlier: RCUK-mandated self-archiving does not single out any individual journal (as Oxford University Press did above with one of its own) and drive its self-archived content to 100%. Self-archiving is distributed randomly across all journals.
Since journals compete (somewhat) with one another for their share of each institution's finite journal acquisitions budget, it is conceivable that if one journal gives away 100% of its official, value-added contents online and the others don't, that journal might be making itself more vulnerable to differential cancellation (though not necessarily: there are reported examples of the exact opposite effect too, with the free online version increasing not only visibility, usage and citations, but thereby also increasing subscriptions, serving as an advertisement for the journal). But this is in any case no evidence for cancellation-inducing effects of self-archiving, which involves only the authors' final drafts and is not focussed on any one journal but randomly distributed across all journals, leaving them to continue to compete for subscriptions amongst themselves, on the basis of their relative merits, exactly as they did before.

ALPSP: "[4] The BMJ Publishing Group has noted a similar effect; the journals that have been made freely available online on publication have suffered greatly increased subscription attrition, and access controls have had to be imposed to ensure the survival of these titles."

Exactly the same reply as above: The risk of making 100% of one journal's official, value-added contents free online while all other journals are not doing likewise has nothing whatsoever to do with anarchic self-archiving, by authors, of the final drafts of their own articles, distributed randomly across journals.

ALPSP: "[5] In the USA, the Institute for Operations Research and the Management Sciences found that two of its journals had, without its knowledge, been made freely available on the Web.
For one of these, an established journal, they noted a subscriptions decline which was more than twice as steep as the average for their other established journals; for the other, a new journal whose subscriptions would normally have been growing, they declined significantly. While the unauthorised free versions have now been removed, it is too early to tell whether the damage is permanent."

Exactly the same artifact as in the prior two cases. (The trouble with self-generated doomsday scenarios is that they tend to assume such a grip on the imagination that their propounders cannot distinguish objective evidence from the 'corroboration' that comes from merely begging the question or changing the subject!) In all three examples, whole journals were made freely available, all at once, in their entirety, along with all the added value and rich online functionality that a journal provides. This is not at all the same as authors self-archiving only their own final drafts (which are simply their basic research reports), and doing so on a single-article (rather than a whole-journal) basis. Yet the latter is all that the RCUK proposes to mandate. Hence examples [3]-[5] are really a misleading conflation of two altogether different matters, creating the illusion of support for what is in fact an untenable conclusion on which they actually have no bearing one way or the other.

[Moreover -- even though it has nothing at all to do with what the RCUK is mandating -- if one does elect to look at evidence from whole-journal open access, then there are many more examples of journals that have benefited from being made freely available: Molecular Biology of the Cell's subscriptions, for example, have grown steadily after free access was provided by its publisher, The American Society for Cell Biology (footnote 3). That journal also enjoys a high impact factor and healthy submissions by authors, encouraged by the increased exposure their articles receive.
The same has happened for journals published by other societies (footnote 4).]

ALPSP: "In addition, it is increasingly clear that this is exactly how researchers are already using search engines such as Scirus and Google Scholar: Greg R. Notess, Reference Librarian, Montana State University, in a recent article in Information Today (Vol 29, No 4) writes 'At this point, my main use of both [Scirus and Google Scholar] is for finding free Web versions of otherwise inaccessible published articles.'"

This is merely a repetition of ALPSP's earlier point about OAI and Google Scholar. Reply: Yes, these wonderful new resources do increase access to the self-archived supplements: but that is precisely the point -- to maximise research access, usage and impact! Other search engines that retrieve free-access articles (such as Citebase, CiteSeer and OAIster) likewise serve the research community by enabling unsubscribed researchers to find and access drafts of articles they could not otherwise use because they are accessible only by subscription. ISI's Web of Knowledge and Elsevier's Scopus, both paid services, find the authors' free versions as well as the journals' subscription-only versions, which researchers can then use whenever they or their institutions can afford subscription, license, or pay-per-view access; Elsevier's Scirus, a free service, likewise retrieves both, as does Google itself (if at least the reference metadata are made web-accessible). All these services do indeed help to maximise access, usage and impact -- but so far only for the small proportion of current research that happens to be spontaneously self-archived already (15%). The RCUK mandate will extend this benefit systematically to the remaining 85% of UK research output that is still only accessible today to those who can afford the official journal version.
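The interoperability that lets such services find self-archived articles rests on repositories exposing simple metadata records (for example via the OAI protocol mentioned above). As a hedged sketch of what a harvester does with such a record, here is how the basic reference metadata can be extracted from a Dublin Core record; the sample XML below is invented for illustration and is not taken from any real repository:

```python
# Hedged sketch: extracting reference metadata (title, creator,
# identifier) from an OAI-style Dublin Core record, the kind of
# record interoperable repositories expose to harvesters.

import xml.etree.ElementTree as ET

# Illustrative sample record (not from any real repository).
SAMPLE = """<metadata xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                      xmlns:dc="http://purl.org/dc/elements/1.1/">
  <oai_dc:dc>
    <dc:title>Example Article on Research Impact</dc:title>
    <dc:creator>Doe, Jane</dc:creator>
    <dc:identifier>http://example-ir.ac.uk/123/</dc:identifier>
  </oai_dc:dc>
</metadata>"""

DC = "{http://purl.org/dc/elements/1.1/}"  # Dublin Core namespace

def extract_dc(xml_text):
    """Return the Dublin Core fields a search service would index."""
    root = ET.fromstring(xml_text)
    return {field: [e.text for e in root.iter(DC + field)]
            for field in ("title", "creator", "identifier")}

record = extract_dc(SAMPLE)
print(record["title"][0])  # Example Article on Research Impact
```

Once these few fields are harvested and indexed, any would-be user can locate the self-archived draft, which is all the "maximised access" above requires.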
ALPSP: "'I found a number of full-text articles via Google Scholar that are PDFs downloaded from a publisher site and then posted on another site, free to all.'"

This point, on the other hand, is not about author self-archiving, but about piracy and bootlegging of the publisher's official version. RCUK is not mandating or condoning anything like that: The policy pertains only to authors' own final drafts, self-archived by them -- not to the published version poached by third-party consumers, which is called theft. (Hence this point is irrelevant.)

ALPSP: "'Both Scirus and Scholar were also useful for finding author-hosted article copies, preprints, e-prints, and other permutations of the same article.'"

Exactly as one would hope they would be, if one hopes to 'maximise access to research'.

ALPSP: "In the light of this growing evidence of serious and irreversible damage, each publisher must have the right to establish the best way of expanding access to its journal content that is compatible with continuing viability."

So far no evidence whatsoever of 'serious and irreversible damage' (or indeed of any damage) caused by author self-archiving has been presented by ALPSP. (This is unsurprising, because in reality no such evidence exists, and all existing evidence is to the contrary.) Of course publishers can and should do whatever they wish in order to expand access to their journal content and remain viable. But they certainly have no right to prevent researchers, their institutions and their funders from likewise doing whatever they can and wish in order to expand the access to, and the impact of, their own research findings -- nor to expect them to keep waiting passively to see whether their publishers will one day maximise their access and impact for them.
100% self-archiving is already known both to be doable and to enhance research impact substantially; self-archiving has also been co-existing peacefully with journals for over a decade and a half (including in those fields where 100% self-archiving has already been reached); 100% self-archiving overall is already well overdue, and years' worth of research impact have already been needlessly lost waiting for it. ALPSP has given no grounds whatsoever for continuing this delay for one moment longer. It has merely aired a doomsday scenario of its own imagination and then adduced 'evidence' in its support that is obviously irrelevant and defeasible. What is certain is that research impact cannot be held hostage to publishers' anxieties, simply on the grounds of their subjective intensity.

ALPSP: "This is not best achieved by mandating the earliest possible self-archiving, and thus forcing the adoption of untried and uncosted publishing practices."

Self-archiving in October 2005 is not 'the earliest possible self-archiving'. It is self-archiving that is already at least a decade overdue. And it has nothing to do with untried and uncosted publishing practices: Self-archiving is not a publishing practice at all; it is a researcher practice. And it has been tried and tested -- with great success and great benefits for research progress -- for over 15 years now. What is needed today is more self-archiving -- 100% -- not more delay. Or does 'earliest possible' here refer not to when the RCUK self-archiving mandate is at last implemented, but to how early the published article should be self-archived? If so, the answer from the point of view of research impact and progress is unambiguous: The final draft should be self-archived and made accessible to all potential users immediately upon acceptance for publication (pre-final preprint drafts even earlier, if the author wishes).
No research usage or progress should be held back arbitrarily for 3, 6, 12 or more months, for any reason whatsoever. It cannot be stressed enough just how crucial it is for RCUK to resist any pressure to impose or allow any sort of access-denial period, of any length, during which unpaid access to research findings would be embargoed -- findings that the RCUK has paid for, with public money, so that they can be immediately reported, used, applied and built upon, for the benefit of the public that paid for them, not so that they can be embargoed in order to assuage publishers' subjective fears about 'disaster scenarios' for which there does not exist a shred of objective evidence. Any delay that is allowed amounts to an embargo on research productivity and progress, at the expense of the interests of the tax-paying public. That is exactly what happened recently to the US National Institutes of Health's public access policy, setting US research access and impact back several years. Fortunately, there is a simple compromise that will completely immunise the RCUK mandate from any possibility of being rendered ineffectual in this way: What all RCUK-funded researchers should be required to self-archive in their own Institutional Repositories (IRs) immediately upon acceptance for publication are (1) each article's metadata (author name, date, article title, journal name, etc.) and (2) the article's full text (the author's final draft). That fulfills the RCUK requirement. The access-setting for the full text, however, can then be given two options, OA or IA: The RCUK fundee is strongly encouraged (but not required) to set access to Open Access (OA) immediately. As 90% of journals have already given article self-archiving their official green light, 90% of articles can have their access set to OA immediately. For the remaining 10%, the author can set access to IA initially, but of course each article's metadata (author, title, journal, etc.) will immediately be openly accessible webwide to all would-be users, just as the metadata of the OA 90% are.
That is enough data for would-be users to immediately email the author for an 'eprint' (the author's final draft) if they cannot afford to access the journal version. The author can keep emailing eprints to each would-be user until either the remaining 10% of journals update their policy or the author tires of doing all those needless keystrokes and sets article access to OA. In the meanwhile, however, 100% of RCUK-funded research will be immediately accessible webwide, 90% of it directly and 10% of it with author mediation, maximising its access and impact. Nature can take care of the rest at its leisure.

ALPSP: "It is clearly unrealistic to consult adequately with all those likely to be affected over the summer holiday period, and we therefore urge you to extend the consultation period and to defer, for at least 12 months, the introduction of any mandate for authors to self-archive. In the meantime, we would like to take up RCUK's expressed willingness to engage with both publishers and learned societies, beginning with a meeting in early September with representatives of ALPSP; we propose one of the following dates: 5th September, 6th September, 7th September, 8th September."

The consultation has been going on since long before 'the summer holiday period' and there has already been far more delay and far more research impact needlessly lost than anyone can possibly justify. Some members of the publishing community are quite leisurely about continuing to prolong this needless loss of research impact and progress in order to continue debating, but the research community itself is not (as indicated, for example, by the ill-fated demand for open access -- by a deadline of September 1, 2001 -- on the part of the 34,000 researchers who signed the PLoS petition).
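The two access settings described above amount to a very small piece of repository logic, which can be sketched as follows. All names here are hypothetical illustrations (this is not the actual RCUK or EPrints implementation): the metadata are always exposed webwide, while the full text is either served directly (OA) or, in the interim, answered with an author-mediated eprint request (IA).

```python
# Illustrative sketch (hypothetical names) of the two-option deposit:
# metadata always public; full text either open ("OA") or
# author-mediated ("IA") until the journal's policy is updated.

from dataclasses import dataclass, field

@dataclass
class Deposit:
    metadata: dict              # author, title, journal, etc. -- always public
    fulltext: str               # the author's final draft
    access: str = "OA"          # "OA" (open) or "IA" (author-mediated)

    def visible_metadata(self):
        # Metadata are openly accessible regardless of the access setting.
        return self.metadata

    def get_fulltext(self, requester_email=None):
        if self.access == "OA":
            return self.fulltext
        # IA: the would-be user emails the author for an eprint.
        return f"eprint request forwarded to author for {requester_email}"

d = Deposit({"author": "Doe, J.", "title": "X", "journal": "Y"},
            "final draft text", access="IA")
print(d.visible_metadata()["title"])  # metadata visible even under IA
```

The design point the sketch illustrates is the one argued in the text: setting access to IA delays nothing that matters, because the metadata (and hence discoverability and eprint requests) are immediate in both cases.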
RCUK should go ahead and implement its immediate-self-archiving mandate, with no further delay or deferral, and then meet with ALPSP and other interested parties to discuss and plan how the UK Institutional Repositories can collaborate with journals and their publishers in pooling download and citation statistics, and in other ways of sharing the benefits of maximising UK research access and impact. Any further pertinent matters and developments can be discussed as well -- but not at the cost of further delaying what is indisputably the optimal and inevitable (and long overdue) outcome for research, researchers, their institutions, and their funders -- and for the public, which funds the research on the understanding that its use and applications are meant to be maximised to benefit the public's interests, not minimised to protect other parties from imaginary threats to their interests.

(A shorter UK version of this critique -- http://openaccess.eprints.org/index.php?/archives/18-guid.html -- has been co-signed by the following UK senior researchers [in boldface] and sent as hard copy to the recipients of the ALPSP statement. The present longer analysis has also been co-signed by some prominent international supporters of the RCUK initiative.)

Tim Berners-Lee (UK, Southampton & US, MIT)
Dave De Roure (UK, Southampton)
Stevan Harnad (UK, Southampton & Canada, UQaM)
Derek Law (UK, Strathclyde)
Peter Murray-Rust (UK, Cambridge)
Charles Oppenheim (UK, Loughborough)
Nigel Shadbolt (UK, Southampton)
Yorick Wilks (UK, Sheffield)
Subbiah Arunachalam (India, MSRF)
Helene Bosc (France, INRA, ret.)
Fred Friend (UK, University College, London)
Andrew Odlyzko (US, University of Minnesota)
Arthur Sale (Australia, University of Tasmania)
Peter Suber (US, Earlham)

References

1. Swan, A. (2004) Re: Open Access vs. NIH Back Access and Nature's Back-Sliding. American Scientist Open Access Forum: 3 February 2005.

2.
Personal communication from a UK University Library Director: "I know of no HE library where librarians make cancellation or subscription decisions. Typically they say to the department/faculty 'We have to save £X,000 from your share of the serials budget: what do you want to cut?'. These are seen as academic -- not metrics-driven -- judgements, and no librarian makes those academic judgements, as they are indefensible in Senate. [S]uch decisions are almost always wholly subjective, not objective, and have nothing to do with the existence or otherwise of repositories."

3. The society lady: an interview with Elizabeth Marincola. Open Access Now: 6 October 2003.

4. Walker, T. (2002) Two societies show how to profit by providing free access. Learned Publishing 15: 279-284.

Copies of the ALPSP open letter were also sent to: