Wednesday, March 31. 2010
Keynote Address to be presented at UNT Open Access Symposium, University of North Texas, 18 May, 2010.
OVERVIEW: As the number of Open Access (OA) mandates adopted by universities worldwide grows it is important to ensure that the most effective mandate model is selected for adoption, and that a very clear distinction is made between what is required and what is recommended:
By far the most effective and widely applicable OA policy is to require that the author's final, revised, peer-reviewed draft be deposited in the institutional repository (IR) immediately upon acceptance for publication, without exception. Setting access to the deposit immediately as Open Access should be recommended, not required (at least 63% of journals already endorse immediate, unembargoed OA); access to deposits for which the author wishes to honor a publisher access embargo can be set as Closed Access.
The IR's "fair use" button allows users to request, and authors to authorize, semi-automated emailing of individual eprints to individual requesters, on a case-by-case basis, for research uses during the embargo.
The adoption of an “author’s addendum” reserving rights should be recommended but not required (opt-out/waiver permitted).
It is also extremely useful and productive to make IR deposit the official mechanism for submitting publications for annual performance review.
IRs can also monitor compliance with complementary OA mandates from research funding agencies and can provide valuable metrics on usage and impact.
(Compliance with the mandate should be formally required, but there need be no sanctions or penalties for noncompliance; the benefits of compliance will be their own reward.)
On no account should a university adopt a costly policy of funding Gold OA publishing by its authors until/unless it has first adopted a cost-free policy of mandatory Green OA self-archiving.
Stevan Harnad

Harnad, S. (2008) Waking OA’s “Slumbering Giant”: The University's Mandate To Mandate Open Access. New Review of Information Networking 14(1): 51-68
Gargouri, Y., Hajjem, C., Lariviere, V., Gingras, Y., Brody, T., Carr, L. and Harnad, S. (2010) Self-Selected or Mandated, Open Access Increases Citation Impact for Higher Quality Research.
Sale, A., Couture, M., Rodrigues, E., Carr, L. and Harnad, S. (2010) Open Access Mandates and the "Fair Dealing" Button. In: Dynamic Fair Dealing: Creating Canadian Culture Online (Rosemary J. Coombe & Darren Wershler, Eds.)
Harnad, S., Carr, L., Swan, A., Sale, A. & Bosc, H. (2009) Maximizing and Measuring Research Impact Through University and Research-Funder Open-Access Self-Archiving Mandates. Wissenschaftsmanagement 15(4): 36-41
• "Which Green OA Mandate Is Optimal?"
• "The Immediate-Deposit/Optional-Access (ID/OA) Mandate: Rationale and Model"
• "Optimizing OA Self-Archiving Mandates: What? Where? When? Why? How?"
• "How To Integrate University and Funder Open Access Mandates"
• "Upgrading Harvard's Opt-Out Copyright Retention Mandate: Add a No-Opt-Out Deposit Clause"
• "On Not Putting The Gold OA-Payment Cart Before The Green OA-Provision Horse"
Wednesday, March 17. 2010
In announcing Alma Swan's Review of Studies on the Open Access Impact Advantage, I had suggested that the growing number of studies on the OA Impact Advantage was clearly ripe for a meta-analysis. Here is an update:
David Wilson wrote:
"Interesting discussion. Phil Davis has a limited albeit common view of meta-analysis. Within medicine, meta-analysis is generally applied to a small set of highly homogeneous studies. As such, the focus is on the overall or pooled effect with only a secondary focus on variability in effects. Within the social sciences, there is a strong tradition of meta-analyzing fairly heterogeneous sets of studies. The focus is clearly not on the overall effect, which would be rather meaningless, but rather on the variability in effect and the study characteristics, both methodological and substantive, that explain that variability.
"I don't know enough about this area to ascertain the credibility of [Phil Davis's] criticism of the methodologies of the various studies involved. However, the one study that [Phil Davis] claims is methodologically superior in terms of internal validity (which it might be) is clearly deficient in statistical power. As such, it provides only a weak test. Recall, that a statistically nonsignificant finding is a weak finding -- a failure to reject the null and not acceptance of the null.
"Meta-analysis could be put to good use in this area. It won't resolve the issue of whether the studies that Davis thinks are flawed are in fact flawed. It could explore the consistency in effect across these studies and whether the effect varies by the method used. Both would add to the debate on this issue."

Lipsey, M.W. & Wilson, D.B. (2001) Practical Meta-Analysis. Sage.

David B. Wilson, Ph.D.
Associate Professor
Chair, Administration of Justice Department
George Mason University
10900 University Boulevard, MS 4F4
Manassas, VA 20110-2203
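Wilson's distinction between the pooled effect and the variability in effects can be made concrete. The sketch below (in Python, using made-up illustrative effect sizes and variances, not numbers from any actual OA study) computes the standard heterogeneity diagnostics a meta-analyst would use to "explore the consistency in effect across these studies": a fixed-effect inverse-variance pool, Cochran's Q, the I² statistic, and a DerSimonian-Laird random-effects estimate.

```python
import math

# Hypothetical (log) effect sizes and sampling variances for five studies.
# Illustrative numbers only -- NOT drawn from the OA-advantage literature.
effects = [0.35, 0.10, 0.42, 0.05, 0.28]
variances = [0.010, 0.020, 0.015, 0.008, 0.012]

# Fixed-effect (inverse-variance) pooled estimate.
w = [1.0 / v for v in variances]
pooled = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)

# Cochran's Q: weighted squared deviations from the pooled effect.
Q = sum(wi * (ei - pooled) ** 2 for wi, ei in zip(w, effects))
df = len(effects) - 1

# I^2: share of total variation attributable to between-study heterogeneity.
I2 = max(0.0, (Q - df) / Q) * 100.0

# DerSimonian-Laird between-study variance (tau^2), then a
# random-effects pooled estimate with the widened weights.
C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C)
w_re = [1.0 / (v + tau2) for v in variances]
pooled_re = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)

print(f"fixed-effect estimate  : {pooled:.3f}")
print(f"Q = {Q:.2f} on {df} df, I^2 = {I2:.1f}%")
print(f"random-effects estimate: {pooled_re:.3f}")
```

A large I² would signal exactly the situation Wilson describes for heterogeneous social-science literatures: the interesting question becomes what study characteristics (methodological or substantive) explain the variability, not the single pooled number.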
Added Mar 15 2010
See also (thanks to Peter Suber for spotting this study!):
Wagner, A. Ben (2010) Open Access Citation Advantage: An Annotated Bibliography. Issues in Science and Technology Librarianship 60, Winter 2010.
On Mar 12, 2010 Gene V Glass wrote the following:
"Far more issues about OA and meta analysis have been raised in this thread [than I am able to] comment on. But having dedicated 35 years of my efforts to meta analysis and 20 to OA, I can’t resist a couple of quick observations.
Holding up one set of methods (be they RCT or whatever) as the gold standard is inconsistent with decades of empirical work in meta analysis that shows that “perfect studies” and “less than perfect studies” seldom show important differences in results. If the question at hand concerns experimental intervention, then random assignment to groups may well be inferior as a matching technique to even an ex post facto matching of groups. Randomization is not the royal road to equivalence of groups; it’s the road to probability statements about differences.
Claims about the superiority of certain methods are empirical claims. They are not a priori dicta about what evidence can and can not be looked at."

Glass, G.V., McGaw, B. & Smith, M.L. (1981) Meta-analysis in Social Research. Beverly Hills, CA: SAGE.
Rudner, Lawrence, Gene V Glass, David L. Evartt, and Patrick J. Emery (2000). A user's guide to the meta-analysis of research studies. ERIC Clearinghouse on Assessment and Evaluation, University of Maryland, College Park.
GVG's Publications