Wednesday, December 17. 2014
Steven Hill of HEFCE has posted “an overview of the work HEFCE are currently commissioning which they are hoping will build a robust evidence base for research assessment” on the LSE Impact Blog 12(17) 2014, entitled “Time for REFlection: HEFCE look ahead to provide rounded evaluation of the REF”.
Let me add a suggestion, updated for REF2014, that I have made before (unheeded):
Scientometric predictors of research performance need to be validated by showing that they correlate highly with the external criterion they are trying to predict. The UK Research Excellence Framework (REF) -- together with the growing movement toward making the full-texts of research articles freely available on the web -- offers a unique opportunity to test and validate a wealth of old and new scientometric predictors through multiple regression analysis: publications, journal impact factors, citations, co-citations, citation chronometrics (age, growth, latency to peak, decay rate), hub/authority scores, h-index, prior funding, student counts, co-authorship scores, endogamy/exogamy, textual proximity, downloads/co-downloads and their chronometrics, tweets, tags, etc. can all be tested and validated jointly, discipline by discipline, against their REF panel rankings in REF2014. The weight of each predictor can be calibrated to maximize the joint correlation with the rankings. Open Access Scientometrics will provide powerful new means of navigating, evaluating, predicting and analyzing the growing Open Access database, as well as powerful incentives for making it grow faster.
Harnad, S. (2009) Open Access Scientometrics and the UK Research Assessment Exercise. Scientometrics 79 (1)
(Also in Proceedings of 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds. 2007)
See also:
The Only Substitute for Metrics is Better Metrics (2014)
and
On Metrics and Metaphysics (2008)
REF2014 gives the 2014 institutional and departmental rankings, based on the four outputs submitted per researcher.
That is then the criterion against which the many other metrics I list below can be jointly validated, through multiple regression, to initialize their weights for REF2020, as well as for other assessments. In fact, open access metrics can be — and will be — continuously assessed, as open access grows. And as the initialized weights of the metric equation (per discipline) are optimized for predictive power, the metric equation can replace the peer rankings (except for periodic cross-checks and updates) -- or at least supplement them.
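The joint validation described above can be sketched in a few lines. This is a minimal illustration with synthetic data: the metric names, department counts, and "true" weights are all placeholder assumptions, not real REF figures; the point is only to show how ordinary least squares initializes the per-discipline weights and how the joint correlation with the criterion is measured.

```python
# Sketch: jointly validating candidate metrics against REF panel rankings
# via multiple regression. All data here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_depts = 100  # departments in one (hypothetical) discipline

# Candidate predictor metrics (columns), one row per department
metrics = ["citations", "downloads", "h_index", "co_authors", "tweets"]
X = rng.normal(size=(n_depts, len(metrics)))

# Hypothetical REF panel ranking: an unknown weighted mix plus noise
true_w = np.array([0.6, 0.3, 0.4, 0.1, 0.05])
ref_rank = X @ true_w + rng.normal(scale=0.2, size=n_depts)

# Ordinary least squares recovers the initialized weights per discipline
w, *_ = np.linalg.lstsq(X, ref_rank, rcond=None)

# Joint (multiple) correlation between the weighted prediction and the criterion
r = np.corrcoef(X @ w, ref_rank)[0, 1]
print(dict(zip(metrics, w.round(2))), round(r, 2))
```

With real REF data the criterion would be the panel rankings themselves, the predictors would be the full battery listed above, and the fitted weights would differ by discipline.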
Single metrics can be abused; but not only can abuses be named and shamed when detected: it also becomes much harder to abuse metrics when they are part of a multiple, inter-correlated vector, with disciplinary profiles of their normal interactions. Someone dispatching a robot to download his papers would quickly be caught out when the usual correlation between downloads and later citations failed to appear. Add more variables and it gets even harder.
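The robot-download scenario can be illustrated with synthetic numbers. Assuming (for illustration only) that within a discipline citations normally track downloads at a roughly constant rate, a paper whose downloads are artificially inflated shows an anomalous citations-per-download ratio and stands out as a statistical outlier:

```python
# Sketch: flagging a download-inflated paper. All numbers are synthetic;
# the assumed "normal" relation is citations ~ 10% of downloads plus noise.
import numpy as np

rng = np.random.default_rng(1)
downloads = rng.poisson(200, size=50).astype(float)
citations = 0.1 * downloads + rng.normal(scale=2.0, size=50)

# Paper 0's downloads are inflated by a robot; its citations are not
downloads[0] *= 20

# The citations-per-download ratio exposes the mismatch
ratio = citations / downloads
z = (ratio - ratio.mean()) / ratio.std()
suspects = np.where(np.abs(z) > 3)[0]
print(suspects)
```

In a real multi-metric vector the same logic extends to every pairwise and joint correlation in the disciplinary profile, which is why each added variable makes gaming harder.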
With a weighted vector of multiple metrics like the sample I listed, it is of no help to a researcher to be told in advance that for REF2020 the metric equation will be the following, with the following weights for their particular discipline:
REF2020Rank =
w1(pubcount) + w2(JIF) + w3(cites) + w4(art-age) + w5(art-growth) + w6(hits) + w7(cite-peak-latency) + w8(hit-peak-latency) + w9(cite-decay) + w10(hit-decay) + w11(hub-score) + w12(authority-score) + w13(h-index) + w14(prior-funding) + w15(bookcites) + w16(student-counts) + w17(co-cites) + w18(co-hits) + w19(co-authors) + w20(endogamy) + w21(exogamy) + w22(co-text) + w23(tweets) + w24(tags) + w25(comments) + w26(acad-likes) etc.
The potential list could be much longer, and the weights can be positive or negative, and vary by discipline.
"The man who is ready to prove that metric knowledge is wholly impossible… is a brother metrician with rival metrics…”
Comment on: Mryglod, Olesya, Ralph Kenna, Yurij Holovatch and Bertrand Berche (2014) Predicting the results of the REF using departmental h-index: A look at biology, chemistry, physics, and sociology. LSE Impact Blog 12(6)
"The man who is ready to prove that metaphysical knowledge is wholly impossible… is a brother metaphysician with a rival theory” Bradley, F. H. (1893) Appearance and Reality
The topic of using metrics for research performance assessment in the UK has a rather long history, beginning with the work of Charles Oppenheim.
The solution is neither to abjure metrics nor to pick and stick to one unvalidated metric, whether it’s the journal impact factor or the h-index.
The solution is to jointly test and validate, field by field, a battery of multiple, diverse metrics (citations, downloads, links, tweets, tags, endogamy/exogamy, hubs/authorities, latency/longevity, co-citations, co-authorships, etc.) against a face-valid criterion (such as peer rankings).
See also: "On Metrics and Metaphysics" (2008)
Oppenheim, C. (1996). Do citations count? Citation indexing and the Research Assessment Exercise (RAE). Serials: The Journal for the Serials Community, 9(2), 155-161.
Oppenheim, C. (1997). The correlation between citation counts and the 1992 research assessment exercise ratings for British research in genetics, anatomy and archaeology. Journal of documentation, 53(5), 477-487.
Oppenheim, C. (1995). The correlation between citation counts and the 1992 Research Assessment Exercise Ratings for British library and information science university departments. Journal of Documentation, 51(1), 18-27.
Oppenheim, C. (2007). Using the h-index to rank influential British researchers in information science and librarianship. Journal of the American Society for Information Science and Technology, 58(2), 297-301.
Harnad, S. (2001) Research access, impact and assessment. Times Higher Education Supplement 1487: p. 16.
Harnad, S. (2003) Measuring and Maximising UK Research Impact. Times Higher Education Supplement. Friday, June 6 2003
Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35.
Hitchcock, Steve; Woukeu, Arouna; Brody, Tim; Carr, Les; Hall, Wendy and Harnad, Stevan. (2003) Evaluating Citebase, an open access Web-based citation-ranked search and impact discovery service Technical Report, ECS, University of Southampton.
Harnad, S. (2004) Enrich Impact Measures Through Open Access Analysis. British Medical Journal BMJ 2004; 329:
Harnad, S. (2006) Online, Continuous, Metrics-Based Research Assessment. Technical Report, ECS, University of Southampton.
Brody, T., Harnad, S. and Carr, L. (2006) Earlier Web Usage Statistics as Predictors of Later Citation Impact. Journal of the American Society for Information Science and Technology (JASIST) 57(8) pp. 1060-1072.
Brody, T., Carr, L., Harnad, S. and Swan, A. (2007) Time to Convert to Metrics. Research Fortnight 17-18.
Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan, A. (2007) Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics. CTWatch Quarterly 3(3).
Harnad, S. (2008) Validating Research Performance Metrics Against Peer Rankings. Ethics in Science and Environmental Politics 8(11). doi:10.3354/esep00088. (Special issue: The Use and Misuse of Bibliometric Indices in Evaluating Scholarly Performance.)
Harnad, S. (2008) Self-Archiving, Metrics and Mandates. Science Editor 31(2) 57-59
Harnad, S., Carr, L. and Gingras, Y. (2008) Maximizing Research Progress Through Open Access Mandates and Metrics. Liinc em Revista 4(2).
Harnad, S. (2009) Open Access Scientometrics and the UK Research Assessment Exercise. Scientometrics 79 (1) Also in Proceedings of 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds. (2007)
Harnad, S. (2009) Multiple metrics required to measure research performance. Nature (Correspondence) 457 (785) (12 February 2009)
Harnad, S; Carr, L; Swan, A; Sale, A & Bosc H. (2009) Maximizing and Measuring Research Impact Through University and Research-Funder Open-Access Self-Archiving Mandates. Wissenschaftsmanagement 15(4) 36-41
Friday, December 5. 2014
Comments on "What would be the implications of a ‘gold’ Open Access REF policy?" (Ben Johnson, HEFCE)
Ben Johnson: “this post ignores … the commonly heard prediction that universal green OA will somehow deliver a sustainable gold OA future all on its own…” Let me spell out that commonly heard prediction, explaining exactly how and why today's pre-green gold OA is fool’s gold -- unaffordable and unsustainable -- and exactly how and why universal green OA, on its own, will deliver a sustainable gold OA future, in the form of post-green fair gold:
0. Lost Impact: Journal subscriptions are costly, unaffordable and growing, research funds are scarce, hard to come by and shrinking, and research access, usage, impact, productivity and progress are being needlessly lost every day that we fail to provide OA.
1. Over-Pricing: Pre-green gold OA publication fees are arbitrarily and hugely over-priced. (We will see how much, and why, shortly.)
2. Double-Payment: Payment for pre-green gold is double payment: (i) subscription fees for incoming papers plus (ii) gold fees for outgoing papers. (Must-have subscription journals cannot be cancelled by an institution until those same articles are accessible to users in some other way.)
3. Double-Dipping: On top of that, paying the same "hybrid gold" journal (both subscription and optional gold) for pre-green hybrid gold also allows publisher double-dipping.
4. "Rebates": Even if the pre-green hybrid gold publisher promises all N of its subscribing institutions a full, 100% rebate on all hybrid gold income received, that only means that (N-1)/N of whatever hybrid gold any institution pays for its own outgoing hybrid gold papers becomes a subsidy to all the other N-1 subscribing institutions: The institution only gets back 1/Nth of its hybrid gold outlay. (The UK, for example, would get back a 6% subscription rebate for its hybrid gold outlay; the rest of the UK hybrid gold outlay would become a rebate to the other 94% of subscribing institutions in the countries that were not foolish enough to pay pre-emptively for pre-green gold.) Unless the full gold OA rebate goes to the same institution that paid for the gold (by deducting it from the subscription fee), it is still double-payment.
5. Repositories: Research funds are scarce, subscriptions are barely affordable, and pre-green gold payment is completely unnecessary, because green OA can be provided at no extra cost. (Institutional repositories already exist anyway, for multiple purposes, so their cost per paper is negligible, particularly compared to the grotesque cost per paper for pre-green gold.)
6. CC-BY: CC-BY is most definitely not so urgent today, compared to access itself, as to be worth the extra cost of pre-green gold today: CC-BY will come quite naturally of its own accord soon after universal green prevails, and at no extra cost. (We will see how and why shortly.)
7. Embargoes: Publisher embargoes on green are ineffectual because of the repositories’ copy-request Button (if, but only if, the paper is mandatorily deposited immediately upon acceptance for publication, exactly as HEFCE requires). The sole purpose of publishers' OA embargoes today is to try to ensure that -- come what may -- their current level of revenue per paper published, whether via subscriptions or via fool's gold, is sustained. (Please pause for a moment to think this through. It says it all.)
8. Cancelation: So post-green — i.e., once immediate-deposit green has been mandated and provided universally, by all institutions and funders, as HEFCE has done -- institutions can at last cancel their journal subscriptions, because then their users can access the content another way.
9. Obsolete Costs: The post-green unsustainability of subscriptions will force publishers to cut all publishing costs that have been made obsolete by the post-green OA era: publishers will be forced to phase out the print edition, the online edition, access-provision and archiving. These functions will now be offloaded onto the distributed global network of green OA institutional repositories. And publishers' current level of revenue per article will not be sustained.
10. Fair Gold: To cover the remaining post-green cost of peer-reviewed journal publishing — which is just the cost of managing peer review itself — post-green journals will convert to affordable, sustainable fair gold. Institutions will easily pay this service fee, per outgoing paper, out of a fraction of their windfall subscription cancelation savings on incoming papers.
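The arithmetic behind points 4 and 10 above is simple enough to sketch. All figures below are hypothetical assumptions chosen for illustration (a UK-sized 6% subscription share modelled as N = 16 equal subscribers, and assumed per-paper and per-institution costs), not actual prices:

```python
# Illustrative arithmetic for points 4 and 10; all figures are assumptions.

# Point 4: a "100% rebate" of hybrid-gold income, spread across all N
# subscribing institutions, returns only 1/N to the institution that paid;
# the other (N-1)/N subsidizes everyone else.
def rebate_share(outlay, n_institutions):
    """Return (amount back to the payer, subsidy to the other N-1)."""
    back = outlay / n_institutions
    return back, outlay - back

back, subsidy = rebate_share(outlay=1_000_000, n_institutions=16)

# Point 10: post-green, the per-paper fair-gold fee (peer review alone)
# is covered by a fraction of subscription-cancelation savings.
subscription_savings = 2_000_000   # assumed annual cancelation savings
fair_gold_fee = 300                # assumed per-paper peer-review cost
outgoing_papers = 1_500            # assumed papers published per year
net_saving = subscription_savings - fair_gold_fee * outgoing_papers
print(back, subsidy, net_saving)
```

On these assumed figures the payer recovers only a sixteenth of its hybrid-gold outlay, while the post-green fair-gold bill consumes well under a quarter of the cancelation windfall.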
In other words, post-green, subscriptions will be gone, embargoes will be gone, and all OA will be CC-BY (where desired). Ben Johnson: “Would repositories disappear in a gold OA world? No, they’re still useful for theses etc. Monitoring would continue to be necessary for any OA policy.” In the post-green fair-gold OA world there will no longer be any need to monitor OA policy: everything published will be fair-gold OA. But there will certainly be a need for the worldwide network of green OA repositories — to provide access and archiving in place of the pre-green subscription journals: for it will have been the cancelation pressure generated by universally mandated-and-provided green OA that drove the entire downsizing and transition to fair gold.
Harnad, S. (2007) The Green Road to Open Access: A Leveraged Transition. In: Anna Gacs (ed.) The Culture of Periodicals from the Perspective of the Electronic Age. L'Harmattan. 99-106.
______ (2010) No-Fault Peer Review Charges: The Price of Selectivity Need Not Be Access Denied or Delayed. D-Lib Magazine 16 (7/8).
______ (2013) The Postgutenberg Open Access Journal (revised). In: Cope, B. and Phillips, A. (eds.) The Future of the Academic Journal (2nd edition). Chandos.
______ (2014) The only way to make inflated journal subscriptions unsustainable: Mandate Green Open Access. LSE Impact of Social Sciences Blog 4/28
Houghton, J. & Swan, A. (2013) Planting the Green Seeds for a Golden Harvest: Comments and Clarifications on "Going for Gold". D-Lib Magazine 19 (1/2).
Sale, A., Couture, M., Rodrigues, E., Carr, L. and Harnad, S. (2014) Open Access Mandates and the "Fair Dealing" Button. In: Dynamic Fair Dealing: Creating Canadian Culture Online (Rosemary J. Coombe & Darren Wershler, Eds.)
Ben Johnson:
December 8, 2014 at 10:16 pm
Stevan, you make an excellent set of points. Particularly interesting to me is the notion that ‘offsetting’ schemes, even those that look quite generous on the face of it (like RSC, who offer APC vouchers equal to the full subscription rate of a given institution), are still set up to replace subscriptions with APCs at the 100% level.
While it isn’t a professional mission of mine to drive down the cost of scholarly communication at the expense of all else, clearly one of the most compelling advantages of your 100% green-leveraged transition idea is that it offers an opportunity to make the whole of scholarly publishing much cheaper than it is now. We should always be mindful that the price of OA publishing in ‘top’ journals can reach into the £000’s but the cost of posting in a repository is £33, and posting on the arXiv costs just $7 (as I read somewhere the other day). A case for the marginal benefit of spending those extra thousands of pounds on gold fees has, as far as I’m aware, not been made – even if that marginal benefit includes CC BY.
As I’ve argued before in this blog, I’m not convinced that journal publishing need do much more than referee articles and stamp them with their brand sticker. So, while the system probably won’t ever get down to being just $7 per article, there are strong public policy reasons why funders of research would not be averse to an outcome that managed to do away with a lot of the superfluities that surround that most highly affected of artefacts, the “version of record”. (I do think there is a good strategic argument for wanting CC BY, though, if perhaps there is no economic or financial case.)
Most interestingly for the neutral, the success criterion of such a green-leveraged transition is identical to the success criterion of a gold future: both require ~100% uptake, globally. So this makes for an interesting fight, doesn’t it?