Update Jan 1, 2010: See Gargouri, Y; C Hajjem, V Larivière, Y Gingras, L Carr, T Brody & S Harnad (2010) “Open Access, Whether Self-Selected or Mandated, Increases Citation Impact, Especially for Higher Quality Research”
Update Feb 8, 2010: See also "Open Access: Self-Selected, Mandated & Random; Answers & Questions"
Response to Comment by Ian Russell on Ann Mroz's 12 November 2009 editorial "Put all the results out in the open" in Times Higher Education:
It's especially significant that Ian Russell -- CEO of the Association of Learned and Professional Society Publishers (which, make no mistake about it, includes all the big STM commercials too) -- should be saying:
"It’s not 'lobbying from subscription publishers' that has stalled open access, it’s the realization that the simplistic arguments of the open access lobby don’t hold water in the real world... [with] open access lobbyists constantly referring to the same biased and dubious ‘evidence’ (much of it not in the peer reviewed literature)."
Please stay tuned for more peer-reviewed evidence on this, but for now note only that the study Ian Russell selectively singles out as not "biased or dubious" -- the "first randomized trial" (Davis et al 2008), which found that "Open access [OA] articles were no more likely to be cited than subscription access articles in the first year after publication" -- is the very study that argued that the OA advantage found by the host of other peer-reviewed studies (studies that keep finding OA articles more likely to be cited, with the effect usually becoming statistically significant not during but after the first year) is simply the result of a self-selection bias on the part of their authors: authors selectively make their better (hence more citeable) articles OA.
Russell selectively cites only this negative study -- the overhastily (overoptimistically?) published first-year phase of a still ongoing three-year study by Davis et al -- because its result sounds more congenial to the publishing lobby. Russell selectively ignores as "biased and dubious" the many positive (peer-reviewed) studies that do keep finding the OA advantage, as well as the critique of this negative study (as having been based on too short a time interval and too small a sample, not even long enough to replicate the widely reported effect that it was attempting to demonstrate to be merely an artifact of a self-selection bias). Russell also selectively omits to mention that even the Davis et al study found an OA advantage for downloads within the first year -- with other peer-reviewed studies having found that a download advantage in the first year translates into a citation advantage in the second year (e.g., Brody et al 2006). (If one were uncharitable, one might liken this sort of self-serving selectivity to that of the tobacco industry lobby in its time of tribulation, but here it is not public health that is at stake, merely research impact...)
But fair enough. We've now tested whether the self-selected OA impact advantage is reduced or eliminated when the OA is mandated rather than self-selected. The results will be announced as soon as they have gone through peer review. Meanwhile, place your bets...
Brody, T., Harnad, S. & Carr, L. (2006) Earlier Web Usage Statistics as Predictors of Later Citation Impact. Journal of the American Society for Information Science and Technology (JASIST) 57(8): 1060-1072.

Davis, P.M., Lewenstein, B.V., Simon, D.H., Booth, J.G. & Connolly, M.J.L. (2008) Open access publishing, article downloads, and citations: randomised controlled trial. British Medical Journal 337: a568.

Harnad, S. (2008) Davis et al's 1-year Study of Self-Selection Bias: No Self-Archiving Control, No OA Effect, No Conclusion.

Hitchcock, S. (2009) The effect of open access and downloads ('hits') on citation impact: a bibliography of studies.