Friday, August 22. 2008

Comhghairdeas, Eire! Ireland's HEA Adopts World's 52nd Green OA Self-Archiving Mandate

Ireland's Higher Education Authority (HEA) has adopted the world's 52nd Green OA self-archiving mandate (Ireland's second OA mandate, and the 27th funder mandate overall), and it chose the optimal mandate model: EURAB's. Comhghairdeas, Eire! The mandate's text is well worth reading (and emulating) in detail.

Thursday, August 21. 2008

Max Planck Society Pays for Gold OA and Still Fails to Mandate Green OA
One can only leave it to posterity to judge the wisdom of the Max Planck Society in being prepared to divert "central" funds toward funding the publication of (some) MPS research in (some) Gold OA journals (PLoS) without first mandating Green OA self-archiving for all MPS research output.
It is not as if MPS does not have an Institutional Repository (IR): it has EDOC, containing 108,933 records (although it is not clear how many of those are peer-reviewed research articles, how many of them are OA, and what percentage of MPS's current annual research output is deposited and OA). But, despite being a long-time friend of OA, MPS has no Green OA self-archiving mandate. I have been told, repeatedly, that "in Germany one cannot mandate self-archiving," but I do not believe it, not for a moment. This is pure lack of reflection and ingenuity: at the very least, Closed Access deposit in EDOC can certainly be mandated for all MPS published research output as a purely administrative requirement, for internal record-keeping and performance assessment. This is called the "Immediate Deposit, Optional Access" (IDOA) mandate. Then the "email eprint request" Button can be added to EDOC to provide almost-OA to all those deposits that the authors do not immediately make OA of their own accord (95% of journals already endorse immediate OA in some form). Then MPS can go ahead and spend any spare money it may have to fund publication instead of research.

This should not be construed as any sort of critique of PLoS, a superb Gold OA publisher producing superb journals. Nor is it a critique of paying for Gold OA, for those who have the funds. It is a critique of paying for Gold OA without first having mandated Green OA. (That is rather like an institution offering to pay for its employees' medical insurance for car accidents without first having mandated seat-belts; or, more luridly, offering to pay for the treatment of its employees' secondhand-smoke-induced illnesses without first having mandated that the workplace be smoke-free.)

Stevan Harnad
American Scientist Open Access Forum

51st Green OA Self-Archiving Mandate: European Union's 7th Framework
The European Commission has now mandated Green OA self-archiving for 20% of its 7th Framework funding. This is the 51st Green OA mandate worldwide (and the 26th funder mandate; the European Research Council (ERC), another European research funder, had earlier likewise mandated Green OA).
See ROARMAP: Institution's/Department's OA Self-Archiving Policy

"The pilot covers approximately 20% of the FP7 budget and will apply to specific areas of research under the 7th Research Framework Programme (FP7): Health, Energy, Environment, Information and Communication Technologies (Cognitive Systems, Interaction, Robotics), Research Infrastructures (e-Infrastructures), Socio-economic Sciences and Humanities, and Science in Society. New grant agreements in the areas covered by the pilot will contain a clause requiring grant recipients to deposit peer-reviewed research articles or final manuscripts resulting from their FP7 projects into their institutional (or, if unavailable, a subject-based) repository... within six or twelve months after publication, depending on the research area."

Tuesday, August 12. 2008

Use And Misuse Of Bibliometric Indices In Scholarly Performance Evaluation
Ethics In Science And Environmental Politics (ESEP)
ESEP Theme Section: The Use And Misuse Of Bibliometric Indices In Evaluating Scholarly Performance + accompanying Discussion Forum

Editors: Howard I. Browman, Konstantinos I. Stergiou

Quantifying the relative performance of individual scholars, groups of scholars, departments, institutions, provinces/states/regions and countries has become an integral part of decision-making over research policy, funding allocations, awarding of grants, faculty hirings, and claims for promotion and tenure. Bibliometric indices (based mainly upon citation counts), such as the h-index and the journal impact factor, are heavily relied upon in such assessments. There is a growing consensus, and a deep concern, that these indices — more and more often used as a replacement for the informed judgement of peers — are misunderstood and are, therefore, often misinterpreted and misused. The articles in this ESEP Theme Section present a range of perspectives on these issues. Alternative approaches, tools and metrics that will hopefully lead to a more balanced role for these instruments are presented.

Browman HI, Stergiou KI. INTRODUCTION: Factors and indices are one thing; deciding who is scholarly, why they are scholarly, and the relative value of their scholarship is something else entirely.

Saturday, August 9. 2008

Estimating Annual Growth in OA Repository Content
This is a useful beginning in the analysis of the growth of Open Access (OA), but it is mostly based on central collections of a variety of different kinds of content.

Deblauwe, Francis (2008) OA Academia in Repose: Seven Academic Open-Access Repositories Compared

A useful way to benchmark OA progress would be to focus on OA's target content -- first and foremost, peer-reviewed scientific and scholarly journal articles -- and to indicate, year by year, the proportion of the total annual output of the content-providers, rather than just absolute annual deposit totals. The OA content-providers are universities and research institutions. The denominator for all measures should be the number of articles the institution publishes in a given year, and the numerator should be the number of those articles (as full-texts) that are deposited in that institution's Institutional Repository (IR). Just counting total deposits, without specifying the year of publication, the year of deposit, and the total target output of which they are a fraction (as well as making sure they are article full-texts rather than just metadata), is only minimally informative. Absolute totals for Central Repositories (CRs), based on open-ended input from distributed institutions, are even less informative, as there is no indication of the size of the total output, hence of what fraction of it has been deposited. If an institution does not know its own annual published article output -- as is likely, since such record-keeping is one of the many functions that OA IRs are meant to perform -- an estimate can be derived from the Institute for Scientific Information's (ISI's) annual data for that institution. The estimate is then simple: determine what proportion of the full-texts of the annual ISI items for that institution are in the IR.
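The proposed benchmark is just a per-year ratio. The sketch below illustrates it; the function name and all the figures are invented for illustration, since the real numerator and denominator would come from the IR's deposit records and from ISI's annual counts.

```python
# Hypothetical sketch of the proposed benchmark: for each publication year,
# the fraction of an institution's ISI-indexed articles whose full texts
# are deposited in its Institutional Repository. All figures are invented.

def annual_oa_ratio(ir_fulltext_deposits: int, isi_indexed_articles: int) -> float:
    """Fraction of a year's indexed output deposited as full text in the IR."""
    if isi_indexed_articles <= 0:
        raise ValueError("no indexed output recorded for that year")
    return ir_fulltext_deposits / isi_indexed_articles

# year -> (full-text deposits for that publication year, ISI-indexed articles)
by_year = {2005: (310, 1240), 2006: (455, 1300), 2007: (690, 1275)}

for year in sorted(by_year):
    deposits, published = by_year[year]
    print(year, f"{annual_oa_ratio(deposits, published):.1%}")
```

The point of keying both counts to the year of publication, rather than the year of deposit, is that it makes the ratio comparable across institutions and across years, which a raw deposit total is not.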
(ISI does not index everything, but it probably indexes the most important output, and this ratio is hence an estimate of what proportion of the most important output is being made OA annually by that institution.)

This calculation could easily be done for the only university IR among the 7 analyzed above, Cambridge University's. It was probably chosen because it is the IR containing the largest total number of items (see ROAR) and one of the few IRs with a total item count big enough to be comparable with the total counts of multi-institutional collections such as Arxiv. However, it is unclear what proportion of the items in Cambridge's IR are the full-texts of journal articles -- and what percentage of Cambridge's annual journal article output this represents.

CERN is an institution, but not a multidisciplinary university: High Energy Physics only. CERN has, however, done the recommended estimate of its annual OA growth in 2006 and found its IR "Three Quarters Full and Counting": http://library.cern.ch/HEPLW/12/papers/2/ CERN, moreover, is one of the 25 institutions, universities and departments that have mandated deposit in their IR. Those are also the IRs that are growing the fastest.

(Deblauwe notes that "Resources... remain a big issue, e.g., in 2006, after the initially-funded three years, DSpace@Cambridge's growth rate slowed down due to underestimation of the expenses and difficulty of scaling up." I would suggest that what Cambridge needs is not more resources for the IR but a deposit mandate, like Southampton's, QUT's, Minho's, CERN's, Harvard's, Stanford's, and the rest of the 25 mandates to date: see ROARMAP.)

Stevan Harnad
American Scientist Open Access Forum

Tuesday, August 5. 2008

Self-Promotion Bias in Arxiv Deposit Listings

This interesting paper reports that in the physics Arxiv (astrophysics sector), where virtually all current articles in astrophysics are OA in preprint form (with no postprint OA problem in astrophysics either), several factors significantly influence citation counts:

Dietrich, JP (2008) Disentangling visibility and self-promotion bias in the arXiv: astro-ph positional citation effect. Publications of the Astronomical Society of the Pacific 120 (869): 801-804

(1) Arxiv provides a daily list of articles deposited. The articles higher on that list are more cited than the articles lower on that list.

The authors rightly point out that in a high-output field like astrophysics, visibility is an important factor in usage and citations, and authors need alerting and navigation aids based on importance, relevance and quality, rather than on random timing and author self-promotion biases. I would add that in fields -- whether high- or low-output -- that, unlike astrophysics, are not yet OA, accessibility itself probably has much the same sort of effect on citations that visibility does in an OA field like astrophysics. (Even maximized visibility cannot make articles accessible to those who cannot afford access to the full-text.)

Stevan Harnad
American Scientist Open Access Forum

Monday, August 4. 2008

Are Online and Free Online Access Broadening or Narrowing Research?
Evans, James A. (2008) Electronic Publication and the Narrowing of Science and Scholarship. Science 321(5887): 395-399. DOI:10.1126/science.1150473

Evans found that as more and more journal issues are becoming accessible online (mostly only the older back-issues for free), journals are not being cited less overall, but citations are narrowing down to fewer articles, cited more.

Excerpt: "[Based on] a database of 34 million articles, their citations (1945 to 2005), and online availability (1998 to 2005),... as more journal issues came online, the articles [cited] tended to be more recent, fewer journals and articles were cited, and more of those citations were to fewer journals and articles... [B]rowsing of print archives may have [led] scientists and scholars to [use more] past and present scholarship. Searching online... may accelerate consensus and narrow the range of findings and ideas built upon."

In one of the few fields where this can be and has been analyzed thoroughly, astrophysics, which effectively has 100% Open Access (OA) (free online access) already, Michael Kurtz too found that with free online access to everything, reference lists became (a little) shorter, not longer; i.e., people are citing (somewhat) fewer papers, not more, when everything is accessible to them free online.

The following seems a plausible explanation: before OA, researchers cited what they could afford to access, and that was not necessarily all the best work, so they could not be optimally selective for quality, importance and relevance. (Sometimes -- dare one say it? -- they may even have resorted to citing "blind," going by just the title and abstract, which they could afford, but not the full text, to which they had no subscription.) In contrast, when everything becomes accessible, researchers can be more selective and can cite only what is most relevant, important and of high quality. (It has been true all along that about 80-90% of citations go to the top 10-20% of articles.
Now that the top 10-20% (along with everything else in astrophysics) is accessible to everyone, everyone can cite it, and cull out the less relevant or important 80-90%.) This is not to say that OA does not also generate some extra citations for lesser articles too; but the OA citation advantage is bigger, the better the article -- the "quality advantage" -- (and perhaps most articles are not that good!). Since the majority of published articles are uncited (or only self-cited), there is probably a lot published that no amount of exposure and access can render worth citing!

(I think there may also exist some studies [independent of OA] on "like citing like" -- i.e., articles tending to be cited more at their own "quality" level rather than a higher one. [Simplistically, this means within their own citation bracket, rather than a higher one.] If true, this too could probably be analyzed from an OA standpoint.)

But the trouble is that apart from astrophysics and high energy physics, no other field has anywhere near 100% OA: it is closer to 15% in other fields. So aside from a (slightly negative) global correlation (between the growth of OA and the average length of the reference list), the effect of OA cannot be very deeply analyzed in most fields yet.

In addition, insofar as OA is concerned, much of the Evans effect seems to be based on "legacy OA," in which it is the older literature that is gradually being made accessible online, or freely accessible online, after a long non-online, non-free interval. Fields differ in their speed of uptake and their citation latencies. In physics, which has a rapid turnaround time, there is already a tendency to cite recent work more, and OA is making the turnaround time even faster. In longer-latency fields, the picture may differ. For the legacy-OA effect especially, it is important to sort fields by their citation turnaround times; otherwise there can be biases (e.g.
if short- or long-latency fields differ in the degree to which they do legacy-OA archiving).

If I had to choose between the explanation of the Evans effect as a recency/bandwagon effect, as Evans interprets it, or as an increased overall quality/selectivity effect, I'd choose the latter (though I don't doubt there is a bandwagon effect too). And that is even without going on to point out that Tenopir & King, Gingras and others have shown that -- with or without OA -- there is still a good deal of usage and citation of the legacy literature (though it differs from field to field). I wouldn't set much store by "skimming serendipity" (the discovery of adjacent work while skimming through print issues), since online search and retrieval has at least as much scope for serendipity. (And one would expect more likelihood of a bandwagon effect without OA, where authors may tend to cite already-cited but inaccessible references "cite unseen.")

Are online and free online access broadening or narrowing research? They are broadening it: by making all of it accessible to all researchers, by focusing it on the best rather than merely the accessible, and by accelerating it.

Stevan Harnad
American Scientist Open Access Forum

Saturday, August 2. 2008

Open Access: "Gratis" and "Libre"
Re-posted from Peter Suber's Open Access News. (This is to register 100% agreement on this definition of "Gratis" and "Libre" OA, and on the new choice of terms.)
Thursday, July 31. 2008

Davis et al's 1-year Study of Self-Selection Bias: No Self-Archiving Control, No OA Effect, No Conclusion

The following is an expanded, hyperlinked version of a BMJ critique of:

Davis, PN, Lewenstein, BV, Simon, DH, Booth, JG, & Connolly, MJL (2008) Open access publishing, article downloads, and citations: randomised controlled trial. British Medical Journal 337: a568

Overview (by SH): Davis et al.'s study was designed to test whether the "Open Access (OA) Advantage" (i.e., more citations to OA articles than to non-OA articles in the same journal and year) is an artifact of a "self-selection bias" (i.e., better authors are more likely to self-archive, or better articles are more likely to be self-archived by their authors). The control for self-selection bias was to select randomly which articles were made OA, rather than having the author choose. The result was that a year after publication the OA articles were not cited significantly more than the non-OA articles (although they were downloaded more).

The authors write: "To control for self selection we carried out a randomised controlled experiment in which articles from a journal publisher's websites were assigned to open access status or subscription access only."

The authors conclude: "No evidence was found of a citation advantage for open access articles in the first year after publication. The citation advantage from open access reported widely in the literature may be an artefact of other causes."

Commentary: To show that the OA advantage is an artefact of self-selection bias (or of any other factor), you first have to produce the OA advantage and then show that it is eliminated by eliminating self-selection bias (or any other artefact). This is not what Davis et al. did. They simply showed that they could detect no OA advantage one year after publication in their sample.
This is not surprising, since most other studies, some based on hundreds of thousands of articles, do not detect an OA advantage one year after publication either. It is too early. To draw any conclusions at all from such a 1-year study, the authors would have had to run a control condition, in which they managed to find a sufficient number of self-selected, self-archived OA articles (from the same journals, for the same year) that do show the OA advantage, whereas their randomized OA articles do not. In the absence of that control condition, the finding that no OA advantage is detected in the first year for this particular sample of 247 out of 1619 articles in 11 physiological journals is completely uninformative. The authors did find a download advantage within the first year, as other studies have found. This early download advantage for OA articles has also been found to be correlated with a citation advantage 18 months or more later. The authors try to argue that this correlation would not hold in their case, but they give no evidence (because they hurried to publish their study, originally intended to run four years, three years too early).

(1) The Davis study was originally proposed (in December 2006) as intended to cover 4 years: Davis, PN (2006) Randomized controlled study of OA publishing (see comment). It has instead been released after a year.

(2) The Open Access (OA) Advantage (i.e., significantly more citations for OA articles, always comparing OA and non-OA articles in the same journal and year) has been reported in all fields tested so far, for example: Hajjem, C., Harnad, S. and Gingras, Y. (2005) Ten-Year Cross-Disciplinary Comparison of the Growth of Open Access and How it Increases Research Citation Impact. IEEE Data Engineering Bulletin 28(4) pp.
39-47.

(3) There is always the logical possibility that the OA advantage is not a causal one, but merely an effect of self-selection: the better authors may be more likely to self-archive their articles and/or the better articles may be more likely to be self-archived; those better articles would be the ones that get more cited anyway.

(4) So it is a very good idea to try to control methodologically for this self-selection bias. The way to control for it is exactly as Davis et al. have done, which is to select articles at random for being made OA, rather than having the authors self-select.

(5) Then, if it turns out that the citation advantage for randomized OA articles is significantly smaller than the citation advantage for self-selected-OA articles, the hypothesis that the OA advantage is all or mostly just a self-selection bias is supported.

(6) But that is not at all what Davis et al. did.

(7) All Davis et al. did was to find that their randomized OA articles had significantly higher downloads than non-OA articles, but no significant difference in citations.

(8) This was based on the first year after publication, when most of the prior studies on the OA advantage likewise find no significant OA advantage, because it is simply too early: the early results are too noisy. The OA advantage shows up in later years (1-4).

(9) If Davis et al. had been more self-critical, seeking to test and perhaps falsify their own hypothesis, rather than just to confirm it, they would have done the obvious control study, which is to test whether articles that were made OA through self-selected self-archiving by their authors (in the very same year, in the very same journals) show an OA advantage in that same interval.
For if they do not, then of course the interval was too short, the results were released prematurely, and the study so far shows nothing at all: it is not until you have actually demonstrated an OA advantage that you can estimate how much of that advantage might in reality be due to a self-selection artefact!

(10) The study shows almost nothing at all, but not quite nothing, because one would expect (based on our own previous study, which showed that early downloads, at 6 months, predict enhanced citations at a year and a half or later) that Davis et al.'s increased downloads too would translate into increased citations, once given enough time. Brody, T., Harnad, S. and Carr, L. (2006) Earlier Web Usage Statistics as Predictors of Later Citation Impact. Journal of the American Society for Information Science and Technology (JASIST) 57(8) pp. 1060-1072.

(11) The findings of Michael Kurtz and collaborators are also relevant in this regard. They looked only at astrophysics, which is special, in that (a) it is a field with only about a dozen journals, to which every research-active astronomer has subscription access -- these days they also have free online access via ADS -- and (b) it is a field in which most authors self-archive their preprints very early in Arxiv -- much earlier than the date of publication. Kurtz, M. J. and Henneken, E. A. (2007) Open Access does not increase citations for research articles from The Astrophysical Journal. Preprint deposited in arXiv September 6, 2007.

(12) Kurtz & Henneken, too, found the usual self-archiving advantage in astrophysics (i.e., about twice as many citations for OA papers as for non-OA), but when they analyzed its cause, they found that most of it was the Early Advantage of access to the preprint, as much as a year before publication of the (OA) postprint.
In addition, they found a self-selection bias (for prepublication preprints -- which is all that was involved here because, as noted, in astrophysics, after publication, everything is OA): the better articles by the better authors were more likely to have been self-archived as preprints.

(13) Kurtz's results do not generalize to all fields, because it is not true of other fields that (a) they already have 100% OA for their published postprints, or that (b) many authors tend to self-archive preprints before publication.

(14) However, the fact that early preprint self-archiving (in a field that is 100% OA as of postprint publication) is sufficient to double citations is very likely to translate into a similar effect in a non-OA, non-preprint-archiving field, if one reckons on the basis of the one-year access embargo that many publishers are imposing on the postprint. (The yearlong "No-Embargo" advantage provided by postprint OA in other fields might not turn out to be big enough to double citations, as the preprint Early Advantage in astrophysics does, because any potential prepublication advantage is lost, and after publication there is at least subscription access to the postprint; but the postpublication counterpart of the Early Advantage, for postprints that are either not self-archived or embargoed, is likely to be there too.)

(15) Moreover, the preprint OA advantage is primarily Early Advantage, and only secondarily Self-Selection.

(16) The size of the postprint self-selection bias is what Davis et al. would have tested -- if they had done the proper control, and waited long enough to get an actual OA effect to compare against.
(Their regression analyses simply show that, exactly as they detected no citation advantage in their sample and interval for the randomized OA articles, they likewise detected no citation advantage for the self-selected self-archived OA articles in their sample and interval: this hardly constitutes evidence that the (undetected) OA advantage is in reality a self-selection artefact!)

(17) We had reported in an unpublished 2007 pilot study that there was no statistically significant difference between the size of the OA advantage for mandated (i.e., obligatory) and unmandated (i.e., self-selected) self-archiving: Hajjem, C & Harnad, S. (2007) The Open Access Citation Advantage: Quality Advantage Or Quality Bias? Preprint deposited in arXiv January 22, 2007.

(18) We will soon be reporting the results of a 4-year study of the OA advantage in mandated and unmandated self-archiving that confirms these earlier findings: mandated self-archiving is like Davis et al.'s randomized OA, but we find that it does not reduce the OA advantage at all -- once enough time has elapsed for there to be an OA advantage at all.

Stevan Harnad
American Scientist Open Access Forum

Tuesday, July 29. 2008

50th Green OA Self-Archiving Mandate Worldwide: France's ANR/SHS
The Humanities and Social Sciences branch of France's Agence Nationale de la Recherche (ANR) has just announced its Green OA self-archiving mandate -- France's first funder mandate (France's second mandate overall, and the world's 50th). See ROARMAP.
Note that the situation in France with central repositories is very different from the case of NIH's PMC repository: France's HAL is a national central repository where, in principle, (1) all French research output -- from every field and every institution -- can be deposited, and (again, in principle) (2) every French institution (or department or funder) can have its own interface and "look" in HAL, a "virtual" Institutional Repository (IR), saving it the necessity of creating an IR of its own if it does not feel it needs one.

The crucial underlying question -- and several OA advocates in France are raising it, notably Hélène Bosc, in a forthcoming article (meanwhile, see this) -- is whether the probability of adopting institutional OA mandates in France is increased or decreased by the HAL option: are universities more inclined to adopt a window on HAL, and to mandate central deposit of all their institutional research output, or would they be more inclined to mandate deposit in their own autonomous university IRs, which they manage and control?

Again, the SWORD protocol for automatic import and export between IRs and CRs is pertinent, because with it, it does not matter which way institutions prefer to do it.
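To make the SWORD point concrete, here is a minimal sketch of what a SWORD 1.3-style deposit looks like at the HTTP level: a POST of a packaged file to a repository collection URI, with the package format declared in an X-Packaging header. The repository URLs below are invented, and a real deposit would also require authentication and a properly assembled package; the sketch only illustrates that the very same request can target an institutional IR or a national CR interchangeably.

```python
# Minimal sketch of a SWORD 1.3-style deposit request (assumed conventions:
# HTTP POST to a collection URI, with Content-Type and X-Packaging headers).
# The repository endpoint URLs below are invented for illustration.

import urllib.request

def build_sword_deposit(collection_uri: str, package: bytes, packaging: str):
    """Prepare (but do not send) an HTTP POST depositing a package."""
    req = urllib.request.Request(collection_uri, data=package, method="POST")
    req.add_header("Content-Type", "application/zip")
    req.add_header("X-Packaging", packaging)  # declares the package profile
    return req

# The same package could go to an institutional IR or to a national CR:
package = b"(zipped article full-text + metadata would go here)"
for endpoint in ("https://ir.example-university.fr/sword/articles",
                 "https://hal.example.fr/sword/deposit"):
    req = build_sword_deposit(
        endpoint, package, "http://purl.org/net/sword-types/METSDSpaceSIP")
    # urllib.request.urlopen(req) would perform the actual deposit.
```

Because the deposit interface is the same either way, an institution could mandate deposit in its own IR and still feed HAL automatically, or vice versa, which is exactly why the IR-versus-CR choice need not block a mandate.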
The American Scientist Open Access Forum has been chronicling, and often directing, the course of progress in providing Open Access to universities' peer-reviewed research articles since its inception in the US in 1998 by American Scientist, published by the Sigma Xi Society. The Forum is largely for policy-makers at universities, research institutions and research funding agencies worldwide who are interested in institutional Open Access provision policy. (It is not a general discussion group for serials, pricing or publishing issues: it is specifically focused on institutional Open Access policy.)
You can sign on to the Forum here.