Sunday, September 26. 2010
Needed: OA -- Not Alternative Impact Metrics for OA Journals
An article by Ulrich Herb (2010) [UH] is predicated on one of the oldest misunderstandings about OA: that OA ≡ OA journals ("Gold OA") and that the obstacle to OA is that OA journals don't have a high enough impact factor:
UH:The usual solution that is proposed for this non-problem is that we should therefore give OA journals a higher weight in performance evaluation, despite their lower impact factor, in order to encourage OA. (This is nonsense, and it is not the "solution" proposed by UH. A journal's weight in performance evaluation needs to be earned -- on the basis of its content's quality and impact -- not accorded by fiat in order to encourage OA.) The "solution" proposed by UH is not to give OA journals a higher a-priori weight, but to create new impact measures that will accord them a higher weight.

UH:New impact measures are always welcome -- but they too must earn their weights based on their track records for validity and predictivity. And what is urgently needed by and for research and researchers is not more new impact measures but more OA. And the way to provide more OA is to provide OA to more articles -- which can be done in two ways: not just the one way of publishing in OA journals (Gold OA), but also by self-archiving articles published in all journals (whether OA or non-OA) in institutional repositories, to make them OA ("Green OA"). Ulrich Herb seems to have misunderstood this completely (equating OA with Gold OA only). The contradiction is evident in two successive paragraphs:

UH:The bold-face passage in the second paragraph is completely erroneous, and in direct contradiction with what is stated in the immediately preceding paragraph. For the increased citations generated by making articles in any journal (OA or non-OA) freely accessible online are included in the relevant databases used to calculate journal impact. Indeed, most of the evidence that OA increases citations comes from comparing the citation counts of articles (in the same journal and issue) that are and are not made OA by their authors. (And these within-journal comparisons are necessarily based on Green OA, not Gold OA.)

Yes, there are journals (OA and non-OA -- mostly non-OA!)
that are not (yet) indexed by some of the databases (WoS, JCR, Scopus, etc.); but that is not an OA problem. Yes, let's keep enhancing the visibility and harvestability of OA content; but that is not the OA problem: the problem is that most content is not yet OA. And yes, let's keep developing rich, new OA metrics; but you can't develop OA metrics until the content is made OA.

References

Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan, A. (2007) Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics. CTWatch Quarterly 3(3).

Harnad, S. (2008) Validating Research Performance Metrics Against Peer Rankings. Ethics in Science and Environmental Politics 8(11). doi:10.3354/esep00088 (Special issue: The Use And Misuse Of Bibliometric Indices In Evaluating Scholarly Performance)

Harnad, S. (2009) Open Access Scientometrics and the UK Research Assessment Exercise. Scientometrics 79(1).

Herb, Ulrich (2010) OpenAccess Statistics: Alternative Impact Measures for Open Access documents? An examination how to generate interoperable usage information from distributed Open Access services. In: L'information scientifique et technique dans l'univers numérique. Mesures et usages. L'association des professionnels de l'information et de la documentation, ADBS, pp. 165-178.

Saturday, July 3. 2010
Google Scholar Boolean Search on Citing Articles
In the world of journal articles, each article is both a "citing" item and a "cited" item. The list of references a given article cites provides that article's outgoing citations. And all the other articles in whose reference lists that article is cited provide that article's incoming citations.
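In code, this citing/cited duality is just a directed graph that can be inverted. A minimal sketch (the article IDs are made up) of deriving incoming citations from outgoing reference lists, and ranking by citedness, as a citation search engine does internally:

```python
from collections import defaultdict

# Outgoing citations: each article's reference list (hypothetical IDs).
references = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": [],
    "D": ["C", "B"],
}

# Invert the graph to get each article's incoming citations.
cited_by = defaultdict(list)
for article, refs in references.items():
    for ref in refs:
        cited_by[ref].append(article)

# Rank articles by how highly cited they are (incoming-citation count).
ranked = sorted(references, key=lambda a: len(cited_by[a]), reverse=True)
print(ranked)         # ['C', 'B', 'A', 'D'] -- "C" is cited by A, B and D
print(cited_by["C"])  # ['A', 'B', 'D'] -- the articles citing C
```

Restricting a word search to `cited_by["C"]` (rather than the whole collection) is exactly the July 1 feature described below.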
Formerly, with Google Scholar (first launched in November 2004), (1) you could do a Google-like Boolean (and, or, not, etc.) word search, which ranked the articles that it retrieved by how highly cited they were. Then, for any individual article in that ranked list, (2) you could go on to retrieve all the articles citing that individual cited article, again ranked by how highly cited they were. But you could not go on to do a Boolean word search within just that set of citing articles; as of July 1 you can. (Thanks to Joseph Esposito for pointing this out on liblicense.)

Of course, Google Scholar is a potential scientometric killer-app that is just waiting to design and display powers far, far greater and richer than even these. Only two things are holding it back: (a) the sparse Open Access content of the web to date (only about 20% of articles published annually) and (b) the sleepiness of Google, in not yet realizing what a potentially rich scientometric resource and tool they have in their hands (or, rather, their harvested full-text archives).

Citebase gives a foretaste of some more of the latent power of an Open Access impact and influence engine (so does citeseerx), but even that is pale in comparison with what is still to come -- if only Green OA self-archiving mandates by the world's universities, the providers of all the missing content, hurry up and get adopted, so that they can be implemented and all the target content for these impending marvels (not just 20% of it) can begin being reliably provided at long last. (Elsevier's SCOPUS and Thomson-Reuters' Web of Knowledge are of course likewise standing by, ready to upgrade their services so as to point also to the OA versions of the content they index -- if only we hurry up and make it OA!)

Harnad, S. (2001) Proposed collaboration: google + open citation linking. OAI-General. June 2001.

Wednesday, May 12. 2010
PostGutenberg Peer Review
Joseph Esposito [JE] asks, in liblicense-l:
JE: “What happens when the number of author-pays open access sites grows and these various services have to compete with one another to get the finest articles deposited in their repositories?”

Green OA mandates require deposit in each author's own institutional repository. The hypothesis of Post-Green-OA subscription cancellations (which is only a hypothesis, though I think it will eventually prove to be right) is that the Green OA version will prove to be enough for users, leaving peer review as the only remaining essential publishing service a journal will need to perform. Whether on the non-OA subscription model or on the Gold-OA author-pays model, the only way scholarly/scientific journals compete for content is through their peer-review standards: the higher-quality journals are the ones with more rigorous and selective criteria for acceptability. This is reflected in their track records for quality, including correlates of quality and impact such as citations, downloads and the many rich new metrics that the online and OA era will be generating.

JE: “What will the cost of marketing to attract the best authors be?”

It is not "marketing" but the journal's track record for quality standards and impact that attracts authors and content in peer-reviewed research publication. Marketing is for subscribers (institutional and individual); for authors and their institutions it is standards and metrics that matter. And, before someone raises the question: yes, metrics can be manipulated and abused in the short term, but cheating can also be detected, especially as deviations within a rich context of multiple metrics. Manipulating a single metric (e.g., robotically inflating download counts) is easy, but manipulating a battery of systematically intercorrelated metrics is not; and abusers can and will be named and shamed. In research and academia, this risk to track record and career is likely to counterbalance the temptation to cheat.
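The point about intercorrelated metrics can be illustrated with a toy sketch (all numbers fabricated): a download count inflated in isolation stands out against a field's otherwise stable downloads-to-citations relationship:

```python
import statistics

# Hypothetical per-article metrics: (downloads, citations). In an honest
# profile the two rise and fall together.
articles = {
    "p1": (100, 10), "p2": (200, 22), "p3": (150, 15),
    "p4": (300, 31), "p5": (5000, 12),  # downloads robotically inflated
}

# Downloads-per-citation ratio; honest articles cluster around a typical value.
ratios = {a: d / c for a, (d, c) in articles.items()}
med = statistics.median(ratios.values())

# Flag articles whose ratio deviates far from the field's median ratio.
flagged = [a for a, r in ratios.items() if r > 3 * med or r < med / 3]
print(flagged)  # ['p5']
```

A robust (median-based) baseline is used because a single extreme cheater would otherwise distort a mean-and-standard-deviation threshold enough to mask itself.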
(Let's not forget that metrics, like the content they are derived from, will be OA too...)

JE: “I am not myself aware of any financial modeling that attempts to grapple with an environment where there are not a handful of such services but 200, 400, more.”

There are already at least 25,000 such services (journals) now! There will be about the same number post-Green-OA. The only thing that would change (on the hypothesis that universal Green OA will eventually make subscriptions unsustainable) is that the 25,000 post-Green-OA journals would provide only peer review: no more print edition, online edition, distribution, archiving, or marketing (other than each journal's quality track record itself, and its metrics). Gone too would be the costs of these obsolete products and services, and their marketing.

(Probably gone too will be the big-bucks era of journal-fleet publishing. Unlike with books, it has always been the individual journal's name and track record that has mattered to authors and their institutions and funders, not their fleet-publisher's imprimatur. Software for implementing peer review online will provide the requisite economy of scale at the individual journal level: no need to co-bundle a fleet of independent journals and fields under the same operational blanket.)

JE: “As these services develop and authors seek the best one, what new investments will be necessary in such areas as information technology?”

The best peer review is provided by the best peers (for free), applying the highest quality standards. OA metrics will grow and develop (independent of publishers), but peer-review software is pretty trivial and probably already quite well developed (hence hopes of "patenting" new peer-review "systems" are probably pipe-dreams).
JE: “Will the fixed costs of managing such a service rise along with greater demands by the most significant authors?”

The journal quality hierarchy will remain pretty much as it is now, with the highest-quality (hence most selective) journals the fewest, at the top, grading down to the many average-level journals, and then the near-vanity press at the bottom (since just about everything eventually gets published somewhere, especially in the online era).

(I also think that "no-fault peer review" will evolve as a natural matter of course -- i.e., authors will pay a standard fee per round of peer review, independent of outcome: acceptance, revision/re-refereeing or rejection. So being rejected by a higher-level journal will not be a dead loss, if the author is ready to revise for a lower-level journal in response to the higher-level journal's review. Nor will rejected papers be an unfair burden, bundled into the fee of the authors of accepted papers.)

JE: “As more services proliferate, what will the cost of submitting material on an author-pays basis be?”

There will be no more new publishing services, apart from peer review (and possibly some copy-editing), and no more new journals either; 25,000 is probably enough already! And the cost per round of refereeing should not prove to be more than about $200.

JE: “Will the need to attract the best authors drive prices down?”

There will be no "need to attract the best authors"; the best journals will get them by maintaining the highest standards. Since the peers review for free, the cost per round of refereeing is small and pretty fixed.

JE: “If prices are driven down, is there any way for such a service to operate profitably as the costs of marketing and technology grow without attempting to increase in volume what is lost in margin?”

Peer-reviewed journal publishing will no longer be big business; just a modest scholarly service, covering its costs.
JE: “If such services must increase their volume, will there be inexorable pressure to lower some of the review standards in order to solicit more papers?”

There will be no pressure to increase volume (why should there be?). Scholars try to meet the highest quality standards they can meet. Journals will try to maintain the highest quality standards they can maintain.

JE: “What is the proper balance between the right fee for authors, the level of editorial scrutiny, and the overall scope of the service, as measured by the number of articles developed?”

Much ado about little, here. The one thing to remember is that there is a trade-off between quality standards and volume: the more selective a journal, the smaller the percentage of all articles in a field that will meet its quality standards. The "price" of higher selectivity is lower volume, but that is also the prize of peer-reviewed publishing: journals aspire to high quality and authors aspire to be published in journals of high quality. No-fault refereeing fees will help distribute the refereeing load (and cost) better than (as now) inflating the fees of accepted papers to cover the costs of rejected papers (rather like a shop-lifting surcharge!). Journals lower in the quality hierarchy will (as always) be more numerous, and will accept more papers, but authors are likely to continue to follow a top-down strategy (as now), trying their luck with a higher-quality journal first. There will no doubt be unrealistic submissions that can (as now) be summarily rejected without formal refereeing (or fee). The authors of papers that do merit full refereeing may elect to pay for refereeing by a higher-level journal, at the risk of rejection, but they can then use their referee reports to submit a more roadworthy version to a lower-level journal. With no-fault refereeing fees, both journals are paid for their costs, regardless of how many articles they actually accept for publication.
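The arithmetic behind no-fault fees is simple. A toy sketch using the ~$200-per-round figure above; the 25% acceptance rate and the one-round-per-submission simplification are hypothetical:

```python
# Toy comparison of the two fee models (illustrative numbers only).
cost_per_round = 200    # journal's cost per round of refereeing ($)
submissions = 100
acceptance_rate = 0.25  # assumed; varies by journal selectivity
accepted = int(submissions * acceptance_rate)

# Current model: the fees of accepted papers also cover the rejected ones.
bundled_fee = submissions * cost_per_round / accepted

# No-fault model: every submitting author pays per round, win or lose.
no_fault_fee = cost_per_round

print(bundled_fee)   # 800.0 -- each accepted paper carries three rejections
print(no_fault_fee)  # 200
```

The more selective the journal (the lower the acceptance rate), the more the bundled fee inflates, which is exactly the "shop-lifting surcharge" the no-fault model removes.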
(PostGutenberg publication means, I hasten to add, that accepted papers are certified with the name and track-record of the accepting journal, but those names just serve as the metadata for the Green OA version self-archived in the author's institutional repository.)

And let's not forget what peer-reviewed research publishing is about, and for: it is not about provisioning a publishing industry but about providing a service to research, researchers, their institutions and their funders. Gutenberg-era publication costs meant that the Gutenberg publisher evolved, through no fault of its own, into the tail that wagged the paper-trained research pooch; in the PostGutenberg era, things will at last rescale into more proper and productive anatomic proportions...

Harnad, S. (2009) The PostGutenberg Open Access Journal. In: Cope, B. & Phillips, A. (Eds.) The Future of the Academic Journal. Chandos.

Stevan Harnad
American Scientist Open Access Forum

Wednesday, March 31. 2010
Designing the Optimal Open Access Mandate
Keynote Address to be presented at UNT Open Access Symposium, University of North Texas, 18 May, 2010.
OVERVIEW: As the number of Open Access (OA) mandates adopted by universities worldwide grows, it is important to ensure that the most effective mandate model is selected for adoption, and that a very clear distinction is made between what is required and what is recommended. By far the most effective and widely applicable OA policy is to require that the author's final, revised, peer-reviewed draft be deposited in the institutional repository (IR) immediately upon acceptance for publication, without exception, but only to recommend, not require, that access to the deposit be set immediately as Open Access (at least 63% of journals already endorse immediate, unembargoed OA); access to deposits for which the author wishes to honor a publisher access embargo can be set as Closed Access. The IR's "fair use" button allows users to request and authors to authorize semi-automated emailing of individual eprints to individual requesters, on a case-by-case basis, for research uses during the embargo. The adoption of an “author’s addendum” reserving rights should be recommended but not required (opt-out/waiver permitted). It is also extremely useful and productive to make IR deposit the official mechanism for submitting publications for annual performance review. IRs can also monitor compliance with complementary OA mandates from research funding agencies and can provide valuable metrics on usage and impact. (Mandate compliance should be compulsory, but there need be no sanctions or penalties for noncompliance; the benefits of compliance will be their own reward.) On no account should a university adopt a costly policy of funding Gold OA publishing by its authors until/unless it has first adopted a cost-free policy of mandatory Green OA self-archiving.

Stevan Harnad

Harnad, S. (2008) Waking OA’s “Slumbering Giant”: The University's Mandate To Mandate Open Access. New Review of Information Networking 14(1): 51-68.

Sunday, February 21. 2010
Critique of "Impact Assessment," Chrisp & Toale, Pharmaceutical Marketing 2008
The following is a (belated) critique of:
"Impact Assessment," by Paul Chrisp (publisher, Core Medical Publishing) & Kevin Toale (Dove Medical Press). Pharmaceutical Marketing, September 2008.

"Open access has emerged in the last few years as a serious alternative to traditional commercial publishing models, taking the benefits afforded by technology one step further. In this model, authors are charged for publishing services, and readers can access, download, print and distribute papers free at the point of use."

Incorrect. Open Access (OA) means free online access, and OA publishing ("Gold OA") is just one of the two ways to provide OA (and not the fastest, cheapest or surest): the fastest, cheapest and surest way to provide OA is OA self-archiving (of articles published in conventional non-OA journals: "Green OA") in the author's Institutional Repository.

"Although its ultimate goal is the free availability of information online, open access is not the same as free access – publishing services still cost money."

Incorrect. There are two forms of OA: (1) Gratis OA (free online access) and (2) Libre OA (free online access plus certain re-use rights).

"Other characteristics of open access journals are that authors retain copyright and they must self-archive content in an independent repository."

Incorrect. This again conflates Green and Gold OA: Gold OA journals make their own articles free online; in Green OA, authors self-archive their own articles.

"researchers are depositing results in databases rather than publishing them in journal articles"

Incorrect. This conflates unrefereed preprint self-archiving with refereed, published postprint self-archiving. Green OA is the self-archiving of refereed, published postprints. The self-archiving of unrefereed preprints is an optional supplement to, not a substitute for, postprint OA.

"a manuscript may be read more times than it is cited, and research shows that online hits per article do not correlate with IF".

Incorrect.
"Research shows" that online hits (downloads) do correlate with citations (and hence with citation impact factors). See the references cited below.

"Faculty of 1000 (www.f1000medicine.com)... asks opinion leaders in clinical practice and research to select the most influential articles in 18 medical specialties. Articles are evaluated and ranked..."

Expert rankings are rankings, and metrics (such as hit or citation counts) are metrics. Metrics can and should be tested and validated against expert rankings. Validated metrics can then be used as supplements to -- or even substitutes for -- rankings. But the validation has to be done on a much broader and more systematic basis than Faculty of 1000, and on a much richer set of candidate metrics. Nor is the purpose of metrics "pharmaceutical marketing": it is to monitor, predict, navigate, analyze and reward research influence and importance.

Bollen, J., Van de Sompel, H., Hagberg, A. and Chute, R. (2009) A principal component analysis of 39 scientific impact measures. PLoS ONE 4(6): e6022.

Brody, T., Harnad, S. and Carr, L. (2006) Earlier Web Usage Statistics as Predictors of Later Citation Impact. Journal of the American Society for Information Science and Technology (JASIST) 57(8): 1060-1072.

Harnad, S. (2008) Validating Research Performance Metrics Against Peer Rankings. Ethics in Science and Environmental Politics 8(11). doi:10.3354/esep00088 (Special issue: The Use And Misuse Of Bibliometric Indices In Evaluating Scholarly Performance)

Harnad, S. (2009) Open Access Scientometrics and the UK Research Assessment Exercise. Scientometrics 79(1). Also in: Torres-Salinas, D. and Moed, H. F. (Eds.) Proceedings of the 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain (2007).

Lokker, C., McKibbon, K. A., McKinlay, R.J., Wilczynski, N. L. and Haynes, R. B.
(2008) Prediction of citation counts for clinical articles at two years using data available within three weeks of publication: retrospective cohort study. BMJ 336: 655-657.

Moed, H. F. (2005) Statistical Relationships Between Downloads and Citations at the Level of Individual Documents Within a Single Journal. Journal of the American Society for Information Science and Technology 56(10): 1088-1097.

O'Leary, D. E. (2008) The relationship between citations and number of downloads. Decision Support Systems 45(4): 972-980.

Watson, A. B. (2009) Comparing citations and downloads for individual articles. Journal of Vision 9(4): 1-4.

Saturday, January 30. 2010
Arxiv Arcana
Nat Gustafson-Sundell wrote:
NGS: "I don't expect local repositories to ever offer quality control."

Of course not. They are merely offering a locus for authors to provide free access to their preprint drafts before submitting them to journals for peer review, and to their final drafts (postprints) after they have been peer-reviewed and accepted for publication by a journal. Individual institutions cannot peer-review their own research output (that would be in-house vanity-publishing). And global repositories like arxiv or pubmedcentral or citeseerx or google scholar cannot assume the peer-review functions of the thousands and thousands of journals that are actually doing the peer review today. That would add billions to their costs (making each into one monstrous (generic?) megajournal: near impossible, practically, if it weren't also totally unnecessary -- and irrelevant to OA and its costs).

NGS: "Also, users have said again and again that they prefer discovery by subject, which will be possible for semantic docs in local repositories or better indexes (probably built through better collaborations), but not now."

Search should of course be central and subject-tagged, over a harvested central collection from the distributed local IRs, not local, IR by IR. (My point was that central deposit is no longer necessary or desirable, either for content-provision or for search. The optimal system is institutional deposit (mandated by institutions as well as funders) and then central harvesting for search.)

NGS: "I agree that it would be great if local repositories were more used, and eventually, the systems will be in place to make it possible, but every study I've seen still shows local repository use to remain disappointingly low, although some universities are doing better than others."

"Use" is ambiguous, as it can refer both to author use (for deposit) and user use (for search and retrieval). We agree that the latter makes no sense: users search at the harvester level, not the IR level.
But for the former (low author "use," i.e., low levels of deposit), the solution is already known: unmandated IRs (i.e., most of the existing c. 1500 IRs) are near-empty (of OA's target content, which is preprints and postprints of peer-reviewed journal articles), whereas mandated IRs (c. 150, i.e., 1%!) are capturing (or on the way to capturing) their full annual postprint output. So the solution is mandates. And the locus of deposit for both institutional and funder mandates should be institutional, not central, so that the two kinds of mandates converge rather than compete (requiring multiple deposit of the same paper).

For the special case of arxiv, with its long history of unmandated deposit, a university's IR could import its own remote arxiv deposits (or export its local deposits to arxiv) with software like SWORD, but eventually it is clear that institution-external deposit makes no sense: institutions are the universal providers of all peer-reviewed research, funded and unfunded, across all fields. One-stop/one-step local deposit (followed by automatic import, export, and harvesting to/from whatever central services are needed) is the only sensible, scalable and sustainable system, and also the one that is most conducive to the growth of universal OA deposit mandates from institutions, reinforced by funder mandates likewise requiring institutional deposit, rather than discouraged by gratuitously requiring institution-external deposit.

NGS: "Inter-institutional repositories by subject area (however broadly defined) simply work better, such as arXiv or even the Princeton-Stanford repository for working papers in the classics."

"Work better" for what? Deposit or search?
You are conflating the locus of search (which should, of course, be cross-institutional) with the locus of deposit, which should be institutional, in order to accelerate institutional deposit mandates and in order to prevent discouraging adoption and compliance through the prospect of having to deposit the same paper in more than one place. (Yes, automatic import/export/harvesting software is indifferent to whether it is transferring from local IRs to central CRs or from central CRs to local IRs, but the logistics and pragmatics of deposit and deposit mandates -- since the institution is always the source of the content -- make it obvious that one-time institutional deposit fits all output, systematically and tractably, whereas willy-nilly IR/CR deposit, depending on fields' prior deposit habits or funder preferences, is a recipe for many more years of the confusion, inaction, absence of mandates, and near-absence of OA content that we have now.)

NGS: "Currently, universities are paying external middlemen an outsized fee for validation and packaging services. These services can and should be brought "in-house" (at least as an ideal/goal to develop toward whenever the opportunities can be seized) except in cases where prices align with value, which occurs still with some society and commercial publications."

I completely agree that along with hosting their own peer-reviewed research output, and mandating its deposit in their own IRs, institutions can also use their IRs (along with specially developed software for this purpose) to showcase, manage, monitor, and measure their own research output. That is what OA metrics (local and global) will make possible. But not till the problem of getting the content into OA IRs is solved. And the solution is institutional and funder mandates -- for institutional (not institution-external) deposit.
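Central harvesting from institutional IRs of this kind is typically done over the OAI-PMH protocol. A minimal sketch (the repository base URL is hypothetical) of how a harvester builds its request:

```python
from urllib.parse import urlencode

# Hypothetical institutional repository's OAI-PMH endpoint.
BASE_URL = "https://eprints.example.edu/cgi/oai2"

def list_records_url(metadata_prefix="oai_dc", from_date=None):
    """Build an OAI-PMH ListRecords request, as a central harvester would."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if from_date:
        params["from"] = from_date  # incremental (selective) harvesting
    return BASE_URL + "?" + urlencode(params)

print(list_records_url(from_date="2010-01-01"))
```

Because every IR exposes the same protocol, one harvester can aggregate thousands of institutional repositories into a single cross-institutional search service; this is the "deposit locally, search centrally" division of labor argued for above.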
NGS: "To the extent that an arXiv or the inter-institutional repository for humanities research which will be showing up in 3-7 years moves toward offering these services, they are clearly preferable to old fashioned subscription models (since the financial support is for actual services) and current local repositories which do not offer everything needed in the value chain (as listed in Van de Sompel et al. 2004)."

(1) The reason 99% of IRs offer no value is that 99% of IRs are at least 85% empty. Only the 1% that are mandated are providing the full institutional OA content -- funded and unfunded, across all disciplines -- that all this depends on.

(2) The central collections, as noted, are indispensable for the services they provide, but that does not include locus of deposit and hosting: there, central deposit is counterproductive, a disservice.

(3) With local hosting of all their research output, plus central harvesting services, institutions can get all they need by way of search and metrics, partly through local statistics, partly from central ones.

NGS: "I remember when I first read an article quoting a researcher in an arXiv covered field who essentially said that journals in his field were just for vanity and advancement, since all the "action" was in arXiv (Ober et al. 2007 quoting Manuel 2001 quoting McGinty 1999) -- now think about the value of a repository that doesn't just store content and offer access."

This familiar slogan, often voiced by longstanding arxiv users -- "journals are obsolete: they're only for tenure committees; we researchers only use the arxiv" -- is as false, empirically, as it is incoherent, logically. It is just another instance of the "Simon Says" phenomenon: pay attention to what Simon actually does, not to what he says.
Although it is perfectly true that most arxiv users don't bother to consult journals any more -- using the OA version in arxiv only, and referring to the journal's canonical version-of-record only in citing -- it is equally (and far more relevantly) true that they all continue to submit all those papers to peer-reviewed journals, and to revise them according to the feedback from the referees, until they are accepted and published. That is precisely the same thing that all other researchers are doing, including the vast majority who do not self-archive their peer-reviewed postprints (or, even more rarely, their unrefereed preprints) at all.

So journals are not just for vanity and advancement; they are for peer review. And arxiv users are just as dependent on that as all other researchers. (No one has ever done the experiment of trying to base all research usage on nothing but unrefereed preprints and spontaneous user feedback.)

So the only thing that is true in what "Simon says" is that when all papers are available, OA, as peer-reviewed final drafts (and sometimes also supplemented earlier by the prerefereeing drafts), there is no longer any need for users or authors to consult the journal's proprietary version of record. (They can just cite it, sight unseen.) But what follows from that is that journals will eventually have to scale down to becoming just peer-review service-providers and certifiers (rather than continuing also to be access-providers or document producers, either on-paper or online). Nothing follows from that about the value of repositories, except that they are useless if they do not contain the target content (at least after peer review and, where possible and desired by authors, also before peer review).

Harnad, S. (1998/2000/2004) The Invisible Hand of Peer Review. Nature [online] (5 Nov. 1998); Exploit Interactive 5 (2000); and in Shatz, B. (2004) (Ed.) Peer Review: A Critical Inquiry. Rowman & Littlefield. Pp. 235-242.
NGS: "Do I think the financial backing will remain in place? It depends on the services actually offered and to what extent subject repositories could replace a patchwork system of single titles offered by a patchwork of publishers."

At the moment the issue is whether arxiv, such as it is (a central locus for institution-external deposit of institutional research content in some fields, mostly physics, plus a search and alerting service), can be sustained by voluntary subsidy/subscription -- not whether, if arxiv also somehow "took over" the function of journals (peer review), that too could be paid for by voluntary subsidy/subscription...

NGS: "Universities could save a great deal by refusing to pay the same overhead over and over again to maintain complete collections in single subject areas (not to mention paying for other people's profits)."

I can't quite follow this: you mean universities can cancel journal subscriptions? How do those universities' users then get access to those cancelled journals' contents, unless they are all being systematically made OA? Apart from those areas of physics where it has already been happening since 1991, that isn't going to happen in most other fields till OA is mandated by the universal providers of that content, the universities (reinforced by mandates from their funders). Then (but only then) can universities cancel their journal subscriptions and use (part of) their windfall savings to pay (journals!) for the peer review of their own research output, article by article (instead of buying in other universities' output, journal by journal).
NGS: "More importantly, more could be done to make articles useful and discoverable in a collaborative environment, from metadata to preservation, so that the value chain is extended and improved (my sci-fi includes semantic docs, not just cataloged texts, and improved, or multi-stage, peer review, or peer review on top of a working papers repository)."

All fine, and desirable -- but not until all the OA content is being provided, and (outside of physics), it isn't being provided -- except when mandated... So let's not build castles in Spain before we have their contents safely in hand.

NGS: "I think there's been plenty of 'chatter' to indicate that the basic assumptions in conversations between universities are changing (see recent conference agendas), so that we can expect to see more and more practical plans to collaborate on metadata, preservation, and, yes, publications."

I'll believe the "chatter" when it has been cashed into action (deposit mandates). Till then it's just distraction and time-wasting.

NGS: "My head spins to think of the amount of money to be saved on the development of more shared platforms, although, the money will only be saved if other expenditures are slowly turned off."

All this talk about money, while the target content -- which could be provided at no cost -- is still not being provided (or mandated)...

NGS: "Sandy mentioned in another post that she [he] would hope for arXiv like support for university monographs..."

Monographs (not even a clearcut case, like peer-reviewed articles, which are all, already, author give-aways, written only for usage and impact) are moot, while not even peer-reviewed articles are being deposited, or mandated...
NGS: "Open access and NFP publications which do offer the full value chain have been proven to have much lower production costs per page than FP publishers and they do not suffer any impact disadvantages -- and these are still operated on a largely stand-alone basis, without the advantages that can be gained by sharing overhead."

Cash castles in Spain again, while the free content is not yet being provided or mandated...

NGS: "Maybe local repositories really are the way to go, since then each institution has more control over its own contribution, but the collaboration and the support will still need to occur to support discovery (implying metadata, both in production and development of standards and tools) and preservation."

No, search and preservation are not the problem: content is.

NGS: "I suppose another problem with local repositories, however, is that a consensus is far less likely to unite around local repositories as a practical option at this juncture -- the case can't just be made with words, you need the numbers and arXiv has them -- and while I am interested to see strong local repositories emerge, there is greater sense in supporting what can be achieved, since we need more steps in the right direction."

"The numbers" say the following: Physicists have been depositing their preprints and postprints spontaneously (unmandated) in arxiv since 1991, but in the ensuing two decades this commendable practice has not been taken up by other disciplines. The numbers, in other words, are static, and stagnant. The only cases in which they have grown are those where deposit was mandated (by institutions and funders). And for that, it no longer makes sense (indeed it goes contrary to sense) to deposit them institution-externally, instead of mandating institutional deposit and then harvesting centrally.
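Arxiv's raw deposit count ("the numbers") is only meaningful relative to the right denominator -- a point developed in the next paragraph's "denominator fallacy." A toy calculation (all figures invented purely for illustration) shows how a central repository's much larger numerator can conceal an identical success rate:

```python
# Toy illustration of repository "success rates" (all figures invented):
# success rate = annual deposits / annual target output.

def success_rate(deposits: int, target_output: int) -> float:
    """Fraction of a repository's annual target content actually deposited."""
    return deposits / target_output

# A central repository for one field, drawing on all institutions worldwide:
central = success_rate(deposits=60_000, target_output=400_000)

# One institution's repository, covering all of that institution's own fields:
institutional = success_rate(deposits=300, target_output=2_000)

# The central numerator is 200x larger, but the success rates are identical.
print(f"central: {central:.0%}, institutional: {institutional:.0%}")
```

The comparison that matters, in other words, is each repository's deposits against its own target content, not the absolute sizes of the two collections.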
And the virtue of that is that it distributes the costs of managing deposits sustainably, by offloading them onto each institution, for its own output, instead of depending on voluntary institutional sub-sidy/scription for obsolete and unnecessary central deposit.

(See also the "denominator fallacy," which arises when you compare the size of central repositories with the size of institutional repositories: The world's 25,000 peer-reviewed journals publish about 2.5 million articles annually, across all fields. A repository's success rate is the proportion of its annual target contents that are being deposited annually. For an institution, the denominator is its own total annual peer-reviewed journal article output across all fields. For a central repository, it is the total annual article output -- in the field(s) it covers -- from all the institutions in the world. Of course the central repository's numerator is greater than any single institutional repository's numerator. But its denominator is far greater still. Arxiv has famously been doing extremely well for certain areas of physics, unmandated, for two decades. But in other areas arxiv is not doing so well, relative to the field's true denominator; and most other central repositories are likewise not doing well. In fact, it is pretty certain that -- apart from physics, with its 2-decade tradition of deposit, plus a few other fields such as economics (preprints) and computer science -- unmandated central repositories are doing exactly as badly as unmandated institutional repositories are doing, namely, about 15%.)

Stevan Harnad
American Scientist Open Access Forum

Tuesday, January 26. 2010
Harvard's Recommendations to President Obama on Public Access Policy
Professor Steven Hyman, Provost of Harvard, the first US University to mandate Open Access, has submitted such a spot-on, point for point response to President Obama’s Request for Information on Public Access Policy that if his words are heeded, the beneficiaries will not only be US research progress and the US tax-paying public, by whom US research is funded and for whose benefit it is conducted, but research progress and its public benefits planet-wide, as US policy is globally reciprocated.
Reproduced below are just a few of the highlights of Professor Hyman’s response. Every one of the highlights has a special salience, and attests to the minute attention and keen insight into the subtle details of Open Access that went into the preparation of this invaluable set of recommendations. [Hash-marks (#) indicate three extremely minor points on which the response could be ever so slightly clarified -- see end.] “The public access policy should (1) be mandatory, not voluntary, (2) use the shortest practical embargo period, no longer than six months, (3) apply to the final version of the author’s peer-reviewed manuscript, as opposed to the published version, unless the publisher consents to provide public access to the published version, (4) [# require deposit of the manuscript in a suitable open repository #] immediately upon acceptance for publication, where it would remain “dark” until the embargo period expired, and (5) avoid copyright problems by [## requiring federal grantees, when publishing articles based on federally funded research, to retain the right to give the relevant agency a non-exclusive license to distribute a public-access copy of his or her peer-reviewed manuscript ##]… Three suggestions for clarifying the minor points indicated by the hash-marks (#): [#”require deposit of the manuscript in a suitable open repository” #](add: “preferably the fundee’s own institutional repository”) [##”requiring federal grantees, when publishing articles based on federally funded research, to retain the right to give the relevant agency a non-exclusive license to distribute a public-access copy of his or her peer-reviewed manuscript” ##](add: “the rights retention and license are desirable and welcome, but not necessary if the publisher already endorses making the deposit publicly accessible immediately, or after the allowable embargo period”) [### "we will never have an adequate control group [for measuring the mandate's success]: a set of articles on 
similar topics, of similar quality, for which there is no public access" ###](add: “but closed-access articles published in the same journal and year as mandatorily open-access articles do provide an approximate matched control baseline for comparison”)

Stevan Harnad
American Scientist Open Access Forum

Thursday, January 21. 2010
On Open Access: "Gratis" and "Libre"
Matthew Cockerill [MC] (BioMedCentral) wrote:
MC: "Agreement on terminology can really only ever be pragmatic"

Agreed.

MC: "Many of us use "open access" to mean what Stevan refers to as 'libre open access', and have distinguished this from "free access" which Stevan refers to as 'Gratis open access'."

This is alas all true too. It is also true that "many of us" (not me!) use "open access" to mean "gold open access" (publishing) only. And the progress of open access is likewise much the worse off -- pragmatically -- because of this other widespread conflation (sometimes willful, mostly just ignorant) too. It is also true that what Stevan (and Peter, let's not forget) -- co-coiners of the original (nonbinding, nonlegal) BOAI definition of "open access" -- refer to as "libre open access" was coined specifically to distinguish it from "gratis open access," which means free online access (whereas libre OA means free online access plus some re-use rights, not all yet specified). But from the very outset, there has been some (understandable) motivation on the part of gold open access publishers to co-opt the term "open access" to fit their product, and only their product. See the long, sad "Free Access vs. Open Access" debate (started by BioMedCentral's first editorial, "Free Access is not Open Access," in "Open Access Now" on 28 July 2003). What is one to say, except that some of it sounds a lot like a battle over a trademark -- which you need, if you are conducting a trade... But not just a battle over trademark. Also ideology vs. pragmatics. (I don't, by the way, think Matt's motivation, in particular, is primarily commercial: I am certain that he believes, very sincerely, in (libre) OA.) My own motivation is exclusively to get all of the refereed literature freely accessible online, at long last, as soon as possible (it's already more than a decade and a half overdue), in whatever way works, is within reach, works surely, and works fast.
Hence the only thing at stake for me when it comes to the trademark "OA" is the fate of free online access itself, which will certainly come much later if -- now that the term "OA" and the "OA Movement" are launched in public consciousness -- it is now declared, for either commercial or ideological reasons, that OA mandates are no longer OA mandates but "FA" mandates, the OA impact advantage is no longer the OA advantage but the FA advantage, and those who have been fighting for OA since long before it got a name have not, in fact, been fighting for OA but "FA." Moreover, it means that precious little of the (already precious little) OA we have to date (about 15% green plus about 15% gold) is in reality OA at all: It's just "FA." I find all this doubly foolish, not only because (1) gratis OA (free online access) is a necessary condition, though not a sufficient condition, for libre OA (free online access plus some re-use rights, not all yet specified) and will (as is evident to anyone who gives it a few minutes of serious thought) almost certainly lead to libre OA soon after it becomes universal (if and when we do what we need to do to make gratis OA universal) but also because (2) over-reaching and insisting on libre OA first, and deprecating gratis OA as not really being OA at all, merely FA, is merely serving to delay the onset of libre OA too (just as insisting that only Gold OA publishing is OA is delaying the era of Gold OA publishing). So, yes, as Matt says, use of the terminology is just a matter of pragmatics, but not linguistic pragmatics: strategic pragmatics. And needlessly, counterproductively over-reaching for libre OA (or Gold OA) now, when Green gratis OA is fully within our grasp is just about as unpragmatic and short-sighted as one can possibly be, in the short (but already far too long) history of OA. And the attempt to co-opt the term exclusively is simply making the "best" the enemy of the better. 
(I can already sense that there are those who are straining to chime in that their insistence on libre OA, too, is driven neither by commercial considerations nor ideology but pragmatics: they need the re-use rights, now, and their research progress is hurting for the lack of them. Let me suggest that if you look more closely at this "pragmatic" case for libre OA it almost always turns out to be about open data, not OA (which is about journal articles). Yet those who are in a hurry for open data are apparently happy to conflate their case with OA's, even if it's at the expense of again gratuitously handicapping our reach -- for the green gratis OA to journal articles that is within our grasp -- with the independent extra burden of data re-use rights. And what is invariably forgotten in all this special-case over-reaching is the completely correctable general case that has been staring us in the face, uncorrected, lo these 15+ years, which is that every day countless would-be users are being denied access and usage for the 85% of journal articles that are accessible only to those with subscription access. That is the paramount problem that the online era has empowered us to solve, and instead we are fussing about extra perks that will surely come soon after we solve it, but not if we continue to make those extra perks a precondition for a solution -- or even for naming the problem!) MC: "I believe the reason that many, including BioMed Central, reserve the term open access for the 'libre' sense is not simply the historical precedent of BOAI and Bethesda, but also the wider related usage of the term open (as in open source, open courseware, open wetware, open government). 
In all cases, these imply the availability, reusability and redistributability of the material, not the fact that it doesn't cost anything."

And in all cases, as soon as one takes the trouble of looking closely at the apparent similarities, the profound differences reveal that this conflation of senses is specious and superficial: article texts are not program code that needs to be re-used and re-written; article texts are to be read and then the ideas and findings in them are to be re-used in new research and writings. Same for the disanalogy with open data, which of course includes "open wetware." Inasmuch as open courseware is just text, free online access for all is all that's needed. (Put the URL in the coursepack instead of the text.) Inasmuch as courseware is programs, it's the same disanalogy between text code and software code. Ditto for "open multimedia" and rip/remix/mashup: not for scholarly/scientific text -- though fine for the scholarly/scientific ideas and findings described in the text (modulo plagiarism). And "open government" is about combatting secrecy, which is moot for published scientific research (whether or not access carries a price tag). In other words, I don't know about Peter, but it's certainly true that for my own part it was not because of all of these superficial and in the end specious commonalities supposedly shared by this panoply of "open" X's that I favored the term "open access" as the descriptor for what the online era had made possible for refereed scholarly/scientific journal articles. ("On the Deep Disanalogy Between Text and Software and Between Text and Data Insofar as Free/Open Access is Concerned")

On the contrary. If I had known in 2002 what confusion and conflation it would make "OA" heir to, I would have avoided the term "open" like the plague.
(There was one commonality, though, that both Peter and I did intentionally try to capitalize on in our choice of that term: the "open" in the "open archives initiative" protocol for metadata harvesting. That harks back to an even earlier decision point, this time in an email exchange with Herb van de Sompel in 1999 about how to rename the "Universal Preprint Service" and its "Santa Fe Convention," which had been the original names for the OAI and OAI protocol. It was Herb who opted for "open" rather than "free" (which I seem to recall that I preferred), so OAI became OAI, and OA/BOAI followed soon afterward (though OAI's "archive" was soon jettisoned -- again for no good reason whatsoever, just arbitrariness and pedantry -- in favor of "repository")... Lexicalization is notoriously capricious, and unintended metaphors and other affinities can come back to haunt you...)

MC: "On which basis, one might refer to Gratis open access, as being 'non-open open access'. Which is why it seems to me a problematic form of terminology, however well-intentioned."

On the contrary, Matt. You are being so seduced by your incoming biases here that you don't realize that you are making them into self-fulfilling prophecies: Gratis OA is only "non-OA OA" to those who wish to argue that free online access is not open access! Let me close with an abstract of the keynote I will be giving at the e-Democracy Conference in Austria in May. In that talk I will also be discussing the commonalities and differences among the various "open" movements, but note only that "The problem [of Green Gratis OA] is not particularly an instance of 'eDemocracy' one way or the other...":
Stevan Harnad
American Scientist Open Access Forum

Wednesday, January 6. 2010
Universities UK on Open Access, Metrics, Mandates and the Research Excellence Framework
Universities UK recommends making all the research outputs submitted to the UK's new Research Excellence Framework (REF) Open Access (OA).
The UUK's recommendation is of course very welcome and timely. All research funded by the RCUK research councils is already covered, since all the UK councils already mandate OA. It is this policy, already adopted by the UK, that the US is now also contemplating adopting, in the form of the proposed Federal Research Public Access Act (FRPAA), as well as in the discussion in President Obama's ongoing OSTP Public Access Policy Forum. But if HEFCE were to follow the UUK's recommendation, it would help to ensure Open Access to UK research funded by the EU (for which OA is only partially mandated thus far) and other funders, as well as to unfunded research -- for which OA is mandated by a still small but growing number of universities in the UK and worldwide. (The same UUK proposal could of course be taken up by UK's universities, for once they mandate OA for all their research output, all UK research, funded and unfunded, becomes OA!) There is an arbitrary constraint on REF submissions, however, which would greatly limit the scope of an OA requirement (as well as the scope of REF itself): Only four research outputs per researcher may be submitted, for a span covering at least four years, rather than all research output in that span. This limitation arises because the REF retains the costly and time-consuming process of re-reviewing, by the REF peer panels, of all the already peer-reviewed research outputs submitted. This was precisely the process that it had earlier been proposed to replace by metrics, if they prove sufficiently correlated with -- and hence predictive of -- the peer panel rankings. Now it will only be partially supplemented by a few metrics. This is a pity, and an opportunity lost, both for OA and for testing and validating a rich and diverse new battery of metrics and initializing their respective weights, discipline by discipline.
Instead, UUK has endorsed a simplistic (and likewise untested and arbitrary) a-priori weighting ("60/20/20 for outputs, impact and environment").

Harnad, S. (2009) Open Access Scientometrics and the UK Research Assessment Exercise. Scientometrics 79(1). Also in: Torres-Salinas, D. and Moed, H. F. (Eds.) (2007) Proceedings of the 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain.

Thursday, December 10. 2009
Please Comment on Mandate Proposal by President Obama's Office of Science and Technology Policy (OSTP)
Today (Dec 10 2009) begins the comment period for President Obama's OSTP Public Forum on How Best to Make Federally Funded Research Results Available For Free. Comments will be in three phases:
Implementation (Dec. 10 to 20): Which Federal agencies are good candidates to adopt Public Access policies? What variables (field of science, proportion of research funded by public or private entities, etc.) should affect how public access is implemented at various agencies, including the maximum length of time between publication and public release?

Please do comment at the OSTP site (you'll need to register first). My own comments follow: It would be a great benefit to research progress in the US as well as worldwide if the US were to require not only NIH-funded research journal articles to be made freely accessible to all users online, but all federally funded research journal articles.

BENEFITS: The benefits of making all US publicly funded research publicly accessible online would not only be in the fact that all tax-payers (and not just those who can afford to subscribe to the journal in which it was published) will be able to read and use the research their taxes paid for, but, even more important, it will allow all researchers (and not just those whose institutions can afford to subscribe to the journal in which it was published) to read, use, apply and build upon all those research findings, again to the benefit of the public that funded them, and for the future research advances for the sake of which research is funded, conducted and published.

WHICH RESEARCH? Which federally funded research should be made publicly accessible online? Start with all research that is fully funded federally, in all scientific, technical and scholarly fields, and then work out agreements in the case of joint private funding. Most private funders will likewise want to ensure maximal usage and impact for the research they have funded. If they want it published at all, they will also want access to it to be maximized.
TIMING OF DEPOSIT: Allowable embargo time should be minimal, but, far more important, the requirement should be to deposit the final, peer-reviewed draft, immediately upon acceptance for publication, in the author's institutional repository, without exception. 63% of journals already endorse making the deposit Open Access immediately. For the remaining 37%, the deposit can be made Closed Access, with only its metadata (authors, date, title, journal, abstract) accessible publicly during the allowable embargo. That way researchers can send the author a semi-automatic email eprint request for an individual copy to be used for research purposes. This will tide over research needs during any embargo.

LOCUS OF DEPOSIT: It is extremely important to require institutional instead of central deposit (central deposit is what several funders require now; e.g., NIH requires deposit in PubMed Central, PMC). Institutional deposits can be easily and automatically harvested or imported into central collections and services like PMC (or Scirus or OAIster or Citeseer, or, for that matter, Google Scholar and Google). The NIH requirement to deposit in PubMed Central (PMC) is an extremely counterproductive handicap, needlessly slowing down the growth of public access for no good reason at all. Institutions (universities and research institutes) are the universal providers of all research output, funded and unfunded, across all fields. If funders mandate institutional deposit, they encourage and reinforce universalizing the adoption of institutional public access mandates across all their fundees' institutions (and they gain a powerful ally in monitoring and ensuring compliance with the funder mandates). But if funders instead require central deposit, they discourage and compete with universalizing the adoption and implementation of institutional public-access requirements.
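The claim that institutional deposits "can be easily and automatically harvested" rests on the OAI-PMH protocol that OAI-compliant repositories expose. As a minimal sketch: the sample response below is invented and simplified (a real harvester would fetch XML from a repository's OAI-PMH endpoint with verb=ListRecords and metadataPrefix=oai_dc, and real responses carry fuller namespaces and resumption tokens), but it shows how a central service can pull Dublin Core metadata out of such a response:

```python
# Sketch of OAI-PMH harvesting: a central service (PMC, OAIster, a search
# engine, ...) collects Dublin Core metadata exposed by an institutional
# repository. SAMPLE_RESPONSE is an invented, simplified ListRecords reply.
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

SAMPLE_RESPONSE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <dc xmlns="http://purl.org/dc/elements/1.1/">
          <title>An Example Postprint</title>
          <creator>Doe, J.</creator>
          <date>2009-12-01</date>
        </dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

def harvest(xml_text: str) -> list[dict]:
    """Extract (title, creator, date) from an OAI-PMH ListRecords response."""
    root = ET.fromstring(xml_text)
    records = []
    for rec in root.iter(OAI + "record"):
        records.append({
            "title": rec.findtext(f".//{DC}title"),
            "creator": rec.findtext(f".//{DC}creator"),
            "date": rec.findtext(f".//{DC}date"),
        })
    return records

print(harvest(SAMPLE_RESPONSE))
```

Because every OAI-compliant repository exposes the same interface, a central collection need only poll each institutional endpoint; nothing functional is lost by depositing locally and harvesting centrally.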
Nor is there any advantage whatsoever -- functional, technical or practical -- to requiring central rather than institutional deposit; it only creates needless obstacles to the universal adoption of public access and public access mandates for all research output. (It's rather like web hosts depositing their web pages directly in Google, instead of hosting them locally and just letting Google harvest them.)

WHO DEPOSITS? The current NIH public access policy allows the option of publishers doing the PMC deposits in place of NIH's fundees. This not only makes fundee compliance vaguer and compliance-monitoring more difficult, but it further locks in publisher embargoes (with less scope for authors providing individual access to researchers during the embargo) and it further discourages convergent institutional mandates (with the prospect of researchers having to do multiple deposits of the same paper, institution-internal and institution-external). The ones responsible for ensuring that the deposit is made, immediately upon acceptance for publication, are the fundee and the fundee's institution, by monitoring the deposits in their own institutional repository. Publishers should be out of the loop.

DEPOSIT WHAT? There is no need at all to be draconian about the format of the deposit. The important thing is that the full, peer-reviewed final draft should be deposited in the fundee's (OAI-compliant) institutional repository immediately upon acceptance for publication. A preference can be expressed for XML format, but any format will do for now, until the practice of immediate Open Access deposit approaches global universality (at which time it will all converge on XML as a natural matter of course anyway). It would be a needless handicap and deterrent to insist on any particular format today. (Doc or Docx will do, so will HTML or PDF or any of the open formats.)
Don't complicate or discourage compliance by gratuitously insisting on more than necessary at the outset, and trust that as the practice of public access provision and usage grows, researchers will converge quite naturally on the optimal format. And remember that in the meanwhile the official published version will continue to be generated by publishers, purchased and stored by subscribing institutions, and preserved in deposit library archives. The public-access drafts are just supplements for the time being, not substitutes, deposited so that it is not only paying subscribers who can access and use federally funded research.

MONITORING COMPLIANCE: What are the best mechanisms to ensure compliance? To require deposit in the fundee's institutional repository immediately upon acceptance for publication. Fundees' institutions are already co-responsible for compliance with funders' application and fulfillment conditions, and already only too eager to help. They should be made responsible for ensuring timely compliance with the funder's deposit requirement. It can also be made part of the grant requirement that the funder must be notified immediately upon deposit by being sent the deposit's URL, so it can be linked or imported for the funder's records and/or harvested by the funder's designated central repository (e.g. PMC).

METRICS OF SUCCESS: Institutions already have an interest in monitoring the usage and impact of their research output, and their institutional repositories already have means for generating usage metrics and statistics (e.g., IRStats). In addition there are now central means of measuring usage and impact (free services such as Citeseer, Citebase, Publish-or-Perish, Google Scholar and Google Books, as well as fee-based ones such as Scopus and Thomson Reuters Web of Science).
These and other rich new metrics will be available to measure success once the deposit requirements are adopted, growing, and supplying the content from which these rich new online metrics are extracted. Which of the new metrics proves to be the "best" remains to be tested by systematically assessing their predictive power and their correlation with peer evaluations.

COMMENT AND FEEDBACK: Once the research content is openly accessible online, many rich new tagging, commenting and feedback mechanisms will grow quite naturally on top of it (and can also be provided by central harvesters and services commissioned by the funders themselves, if they wish, or the metrics can simply be harvested from other services for the funder's subset of their content).

PRIVATE SECTOR USABILITY: Metrics will not only make it possible for deposit rates, downloads, citations, and newer metrics and their growth to be measured and monitored, but it will also be possible to sort uptake metrics into those based on public access and usage, researcher access and usage, and industrial R&D and applications access and usage. But the urgent priority is first to provide the publicly accessible research content on which all these uptake measures will be based. The measures will evolve quite naturally once the content is globally available. For more detailed guidelines on optimizing OA mandates (what to mandate depositing, where and when to mandate deposit, and how to integrate institutional and funder mandates), see 1, 2 and 3.

All federal agencies that fund scientific, technical and scholarly research should require fundees to make the resulting peer-reviewed articles freely accessible online (”Open Access”). There is no objective reason why any publicly funded research that is published in peer-reviewed journals should be accessible only to subscribers. The costs of providing free online access are minimal.
Fundees should be required to make their articles publicly accessible by depositing them in their own institutions’ OAI-compliant Open Access Repository. The deposit should be made immediately upon acceptance for publication. For the minority of publishers who do not yet endorse making the deposit Open Access immediately, an embargo of 6 months can be allowed, during which only the deposit’s metadata are openly accessible but the author can provide individual eprints for research purposes in response to individual user email requests mediated by the institutional repository software.

All empirical data indicate that the optimal embargo is no embargo. If there is any access embargo at all, the result is needlessly lost research usage and impact. There is no field of science or scholarship, fast or slow, that benefits — or fails to lose — from denying access to peer-reviewed, published results once they have been accepted for publication. The only version of a paper that needs to be made freely accessible to all users online is the author’s peer-reviewed final draft, immediately upon acceptance for publication. There are no advantages at all to later versions of the paper, only disadvantages, because fewer publishers endorse making their proprietary PDF freely accessible. Eventually the author’s final draft can be in XML format, but for now any format (doc, docx, html, pdf, etc.) will do.

Only mandatory deposit is successful and effective. All other alternatives fail to generate deposits above the spontaneous unmandated level of about 15%. See Arthur Sale’s studies. The only relevant structural characteristics of a public access policy are the ones already mentioned: mandate deposit of the fundee’s peer-reviewed final draft in the fundee’s institutional repository immediately upon acceptance for publication. (Preferable format, but not obligatory: XML.) Compliance should be monitored by the fundee’s institution, as part of the grant’s fulfillment condition.
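The timing rules above (immediate deposit; metadata always open; full text dark for at most 6 months unless the publisher endorses immediate OA; the repository's email-eprint-request facility bridging any embargo) can be sketched as simple access logic. The `Deposit` class and its method names are hypothetical illustrations, not any repository platform's actual API:

```python
from datetime import date, timedelta

MAX_EMBARGO = timedelta(days=183)  # roughly the 6-month maximum the text allows

class Deposit:
    """Hypothetical sketch of a postprint deposited on acceptance."""

    def __init__(self, accepted: date, publisher_endorses_immediate_oa: bool):
        self.accepted = accepted  # deposit date == acceptance date (immediate deposit)
        self.immediate_oa = publisher_endorses_immediate_oa

    def metadata_visible(self) -> bool:
        # Metadata (authors, date, title, journal, abstract) are always open.
        return True

    def fulltext_visible(self, today: date) -> bool:
        # Full text is open at once if the publisher endorses it;
        # otherwise only after the embargo has elapsed.
        return self.immediate_oa or today >= self.accepted + MAX_EMBARGO

    def eprint_request_available(self, today: date) -> bool:
        # During any embargo, the semi-automatic "request a copy" facility
        # lets the author email an individual eprint for research purposes.
        return not self.fulltext_visible(today)

d = Deposit(accepted=date(2010, 1, 6), publisher_endorses_immediate_oa=False)
print(d.metadata_visible())                  # metadata open from day one
print(d.fulltext_visible(date(2010, 3, 1)))  # still within the embargo window
print(d.fulltext_visible(date(2010, 8, 1)))  # embargo elapsed
```

The point of the sketch is that nothing about an embargo delays the deposit itself: only the date at which the full text flips from closed to open changes.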
Maximum permissible embargo before making the deposit Open Access: 6 months (but preferably no embargo). Deposits can be harvested to central collections and services like PubMed Central, but deposit should be institutional and not central, in order to reinforce and facilitate complementary institutional mandates to deposit unfunded research.

Stevan Harnad
American Scientist Open Access Forum
The American Scientist Open Access Forum has been chronicling and often directing the course of progress in providing Open Access to Universities' Peer-Reviewed Research Articles since its inception in the US in 1998 by the American Scientist, published by the Sigma Xi Society. The Forum is largely for policy-makers at universities, research institutions and research funding agencies worldwide who are interested in institutional Open Access Provision policy. (It is not a general discussion group for serials, pricing or publishing issues: it is specifically focussed on institutional Open Access policy.)
You can sign on to the Forum here.