Wednesday, December 17, 2014
Steven Hill of HEFCE has posted in the LSE Impact Blog 12(17) 2014 "an overview of the work HEFCE are currently commissioning which they are hoping will build a robust evidence base for research assessment," under the title "Time for REFlection: HEFCE look ahead to provide rounded evaluation of the REF."
Let me add a suggestion, updated for REF2014, that I have made before (unheeded):
Scientometric predictors of research performance need to be validated by showing that they have a high correlation with the external criterion they are trying to predict. The UK Research Excellence Framework (REF) -- together with the growing movement toward making the full texts of research articles freely available on the web -- offers a unique opportunity to test and validate a wealth of old and new scientometric predictors, through multiple regression analysis. Publications, journal impact factors, citations, co-citations, citation chronometrics (age, growth, latency to peak, decay rate), hub/authority scores, h-index, prior funding, student counts, co-authorship scores, endogamy/exogamy, textual proximity, downloads/co-downloads and their chronometrics, tweets, tags, etc. can all be tested and validated jointly, discipline by discipline, against their REF panel rankings in REF2014. The weights of each predictor can be calibrated to maximize the joint correlation with the rankings. Open Access Scientometrics will provide powerful new means of navigating, evaluating, predicting and analyzing the growing Open Access database, as well as powerful incentives for making it grow faster.
Harnad, S. (2009) Open Access Scientometrics and the UK Research Assessment Exercise. Scientometrics 79 (1)
(Also in Proceedings of 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds. 2007)
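As a minimal sketch of the kind of joint validation described above -- with the data file, the metric battery and the criterion column all hypothetical -- the weights and the joint correlation for one discipline might be estimated along these lines:

```python
# Sketch: jointly validating a battery of candidate metrics against REF panel
# rankings for a single discipline. File name and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression
from scipy.stats import spearmanr

df = pd.read_csv("ref2014_biology.csv")          # one row per submitting department
metrics = ["citations", "downloads", "h_index",  # candidate predictors
           "jif", "co_citations", "tweets"]
X, y = df[metrics], df["ref_panel_rank"]         # criterion: REF panel ranking

model = LinearRegression().fit(X, y)             # multiple regression
predicted = model.predict(X)

# joint correlation of the weighted battery with the peer rankings
rho, _ = spearmanr(predicted, y)
print(dict(zip(metrics, model.coef_)))           # initialized weights per metric
print(f"joint correlation with panel ranking: {rho:.2f}")
```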
See also:
The Only Substitute for Metrics is Better Metrics (2014)
and
On Metrics and Metaphysics (2008)
REF2014 gives the 2014 institutional and departmental rankings, based on the four outputs submitted per researcher.
That is then the criterion against which the many other metrics I list below can be jointly validated, through multiple regression, to initialize their weights for REF2020, as well as for other assessments. In fact, open access metrics can be -- and will be -- continuously assessed as open access grows. And as the initialized weights of the metric equation (per discipline) are optimized for predictive power, the metric equation can replace the peer rankings (except for periodic cross-checks and updates) -- or at least supplement them.
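To guard against the weights merely fitting the 2014 rankings rather than genuinely predicting them, the per-discipline equation could be cross-checked on held-out departments. A hedged sketch, with data and names illustrative only:

```python
# Sketch: checking the predictive power of the per-discipline weight vector
# with cross-validation. Data file and column names are illustrative only.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("ref2014_chemistry.csv")
metrics = ["citations", "downloads", "h_index", "prior_funding"]
X, y = df[metrics], df["ref_panel_rank"]

# 5-fold cross-validated R^2: how well weights fitted on some departments
# predict the panel rankings of departments held out of the fit
scores = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f} (+/- {scores.std():.2f})")
```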
Single metrics can be abused, but not only can abuses be named and shamed when detected; it also becomes harder to abuse metrics when they are part of a multiple, inter-correlated vector, with disciplinary profiles of their normal interactions: someone dispatching a robot to download his papers would quickly be caught out when the usual correlation between downloads and later citations failed to appear. Add more variables and gaming gets even harder.
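A toy illustration of that kind of consistency check (all numbers invented): fit the discipline's normal downloads-to-later-citations relation and flag cases whose downloads are far above what their citations would predict:

```python
# Sketch: flagging download counts that break the discipline's usual
# downloads-to-later-citations relation. All data are invented.
import numpy as np

rng = np.random.default_rng(0)
citations = rng.poisson(30, size=200)                     # later citations
downloads = citations * 10 + rng.normal(0, 20, size=200)  # normal profile
downloads[7] = 5000                                       # robot-inflated downloads

# fit the disciplinary downloads-vs-citations norm and inspect residuals
slope, intercept = np.polyfit(citations, downloads, 1)
residuals = downloads - (slope * citations + intercept)
z = (residuals - residuals.mean()) / residuals.std()

print("suspect cases:", np.where(z > 3)[0])               # case 7 stands out
```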
In a weighted vector of multiple metrics like the sample I listed, it is of no use to a researcher to be told in advance that for REF2020 the metric equation for their particular discipline will be the following, with the following weights:
REF2020Rank =
w1(pubcount) + w2(JIF) + w3(cites) + w4(art-age) + w5(art-growth) + w6(hits) + w7(cite-peak-latency) + w8(hit-peak-latency) + w9(cite-decay) + w10(hit-decay) + w11(hub-score) + w12(authority-score) + w13(h-index) + w14(prior-funding) + w15(book-cites) + w16(student-counts) + w17(co-cites) + w18(co-hits) + w19(co-authors) + w20(endogamy) + w21(exogamy) + w22(co-text) + w23(tweets) + w24(tags) + w25(comments) + w26(acad-likes) etc. etc.
The potential list could be much longer; the weights can be positive or negative, and they vary by discipline.
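In code terms the equation is nothing more than a weighted sum over whatever metrics survive validation; a hypothetical fragment (weights and values invented for illustration) makes the point:

```python
# Sketch: the per-discipline metric equation is just a weighted sum.
# Weights and metric values below are invented for illustration.
weights = {"pubcount": 0.12, "JIF": -0.03, "cites": 0.41,
           "hits": 0.22, "h_index": 0.18, "tweets": 0.01}
metrics = {"pubcount": 48, "JIF": 3.2, "cites": 950,
           "hits": 12000, "h_index": 21, "tweets": 140}

ref2020_rank_score = sum(w * metrics[name] for name, w in weights.items())
print(ref2020_rank_score)
```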
"The man who is ready to prove that metric knowledge is wholly impossible… is a brother metrician with rival metrics…”
Comment on: Mryglod, Olesya, Ralph Kenna, Yurij Holovatch and Bertrand Berche (2014) Predicting the results of the REF using departmental h-index: A look at biology, chemistry, physics, and sociology. LSE Impact Blog 12(6)
"The man who is ready to prove that metaphysical knowledge is wholly impossible… is a brother metaphysician with a rival theory” Bradley, F. H. (1893) Appearance and Reality
The topic of using metrics for research performance assessment in the UK has a rather long history, beginning with the work of Charles Oppenheim.
The solution is neither to abjure metrics nor to pick and stick to one unvalidated metric, whether it’s the journal impact factor or the h-index.
The solution is to jointly test and validate, field by field, a battery of multiple, diverse metrics (citations, downloads, links, tweets, tags, endogamy/exogamy, hubs/authorities, latency/longevity, co-citations, co-authorships, etc.) against a face-valid criterion (such as peer rankings).
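A hedged sketch of that comparison, field by field (file names, metric battery and columns all hypothetical), pitting the departmental h-index alone against the jointly weighted battery as predictors of the peer rankings:

```python
# Sketch: field-by-field comparison of a single metric (departmental h-index)
# against a jointly weighted battery of metrics as predictors of peer rankings.
# File names, fields and columns are hypothetical.
import pandas as pd
from scipy.stats import spearmanr
from sklearn.linear_model import LinearRegression

battery = ["citations", "downloads", "links", "tweets", "co_citations"]

for field in ["biology", "chemistry", "physics", "sociology"]:
    df = pd.read_csv(f"ref2014_{field}.csv")
    y = df["peer_rank"]

    single, _ = spearmanr(df["dept_h_index"], y)            # h-index alone
    joint = LinearRegression().fit(df[battery], y)           # weighted battery
    combined, _ = spearmanr(joint.predict(df[battery]), y)

    print(f"{field}: h-index alone {single:.2f}, joint battery {combined:.2f}")
```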
See also: "On Metrics and Metaphysics" (2008)
Oppenheim, C. (1996). Do citations count? Citation indexing and the Research Assessment Exercise (RAE). Serials: The Journal for the Serials Community, 9(2), 155-161.
Oppenheim, C. (1997). The correlation between citation counts and the 1992 research assessment exercise ratings for British research in genetics, anatomy and archaeology. Journal of Documentation, 53(5), 477-487.
Oppenheim, C. (1995). The correlation between citation counts and the 1992 Research Assessment Exercise Ratings for British library and information science university departments. Journal of Documentation, 51(1), 18-27.
Oppenheim, C. (2007). Using the h-index to rank influential British researchers in information science and librarianship. Journal of the American Society for Information Science and Technology, 58(2), 297-301.
Harnad, S. (2001) Research access, impact and assessment. Times Higher Education Supplement 1487: p. 16.
Harnad, S. (2003) Measuring and Maximising UK Research Impact. Times Higher Education Supplement. Friday, June 6 2003
Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35.
Hitchcock, Steve; Woukeu, Arouna; Brody, Tim; Carr, Les; Hall, Wendy and Harnad, Stevan. (2003) Evaluating Citebase, an open access Web-based citation-ranked search and impact discovery service Technical Report, ECS, University of Southampton.
Harnad, S. (2004) Enrich Impact Measures Through Open Access Analysis. British Medical Journal 329.
Harnad, S. (2006) Online, Continuous, Metrics-Based Research Assessment. Technical Report, ECS, University of Southampton.
Brody, T., Harnad, S. and Carr, L. (2006) Earlier Web Usage Statistics as Predictors of Later Citation Impact. Journal of the American Society for Information Science and Technology (JASIST) 57(8) pp. 1060-1072.
Brody, T., Carr, L., Harnad, S. and Swan, A. (2007) Time to Convert to Metrics. Research Fortnight 17-18.
Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan, A. (2007) Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics. CTWatch Quarterly 3(3).
Harnad, S. (2008) Validating Research Performance Metrics Against Peer Rankings. Ethics in Science and Environmental Politics 8(11) doi:10.3354/esep00088 (theme section: The Use and Misuse of Bibliometric Indices in Evaluating Scholarly Performance)
Harnad, S. (2008) Self-Archiving, Metrics and Mandates. Science Editor 31(2) 57-59
Harnad, S., Carr, L. and Gingras, Y. (2008) Maximizing Research Progress Through Open Access Mandates and Metrics. Liinc em Revista 4(2).
Harnad, S. (2009) Open Access Scientometrics and the UK Research Assessment Exercise. Scientometrics 79 (1) Also in Proceedings of 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds. (2007)
Harnad, S. (2009) Multiple metrics required to measure research performance. Nature (Correspondence) 457: 785 (12 February 2009)
Harnad, S; Carr, L; Swan, A; Sale, A & Bosc H. (2009) Maximizing and Measuring Research Impact Through University and Research-Funder Open-Access Self-Archiving Mandates. Wissenschaftsmanagement 15(4) 36-41