Comments on UUK Press Release 8 November 2007:
UUK report looks at the use of bibliometrics
"This report will help Universities UK to formulate its position on the development of the new framework for replacing the RAE after 2008."
Some of the points for consideration in the report include:
"Bibliometrics are probably the most useful of a number of variables that could feasibly be used to measure research performance."
What metrics count as "bibliometrics"? Do download counts? Hub/authority scores? Interdisciplinarity metrics? Endogamy/exogamy metrics? Chronometrics? Semiometrics?
"There is evidence that bibliometric indices do correlate with other, quasi-independent measures of research quality - such as RAE grades - across a range of fields in science and engineering."
Meaning that citation counts correlate with panel rankings in all disciplines tested so far. Correct.
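A minimal sketch of the kind of correlation at issue, on invented numbers standing in for real departmental data (the counts and grades below are hypothetical placeholders, not RAE figures):

    # Spearman rank correlation between departments' citation counts and
    # their ordinal panel grades. All numbers are invented placeholders.
    from scipy.stats import spearmanr

    citations_per_dept = [120, 340, 85, 410, 60, 230]  # hypothetical totals
    panel_grades = [4, 5, 3, 5, 2, 4]                  # hypothetical RAE-style grades

    rho, p = spearmanr(citations_per_dept, panel_grades)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")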
"There is a range of bibliometric variables as possible quality indicators. There are strong arguments against the use of (i) output volume, (ii) citation volume, (iii) journal impact and (iv) frequency of uncited papers."
The "strong" arguments are against using any of these variables alone, or without testing and validation. They are not arguments against including them in the battery of candidate metrics to be tested, validated and weighted against the panel rankings, discipline by discipline, in a multiple regression equation.
"'Citations per paper' is a widely accepted index in international evaluation. Highly-cited papers are recognised as identifying exceptional research activity."
Citations per paper is one (strong) candidate metric among many, all of which should be co-tested, via multiple regression analysis (as in the sketch above), against the parallel RAE panel rankings (and other validated or face-valid performance measures).
"Accuracy and appropriateness of citation counts are a critical factor."
Not clear what this means. ISI citation counts should be supplemented by other citation counts -- such as Scopus, Google Scholar, Citeseer and Citebase -- each of which can be a separate metric in the metric equation. Citations from and to books are especially important in some disciplines.
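A toy sketch of that point, with invented counts: each source simply contributes its own column to the researcher-by-metric matrix.

    # One citation count per database, per researcher; each source is a
    # separate candidate metric (column). All figures are invented.
    counts = {
        "Researcher A": {"ISI": 45, "Scopus": 52, "GoogleScholar": 90, "Citebase": 30},
        "Researcher B": {"ISI": 12, "Scopus": 15, "GoogleScholar": 40, "Citebase": 8},
    }
    sources = ["ISI", "Scopus", "GoogleScholar", "Citebase"]
    for researcher, row in counts.items():
        print(researcher, [row[s] for s in sources])  # one row of the metric matrix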
"There are differences in citation behaviour among STEM and non-STEM as well as different subject disciplines."
And probably among many other disciplines too. That is why each discipline's regression equation needs to be validated separately. This will yield a different constellation of metrics, as well as different beta weights on those metrics, for each discipline.
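To make that concrete, here is a sketch on invented data for two invented disciplines, using an L1-regularised regression so that each discipline can end up with a different surviving subset of metrics as well as different weights:

    # Fit a separate regularised regression per discipline; the Lasso
    # penalty zeroes out metrics that don't predict that discipline's
    # (hypothetical) panel rankings. All data below are placeholders.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(1)
    metrics = ["citations", "downloads", "hub_score", "interdisc", "book_cites"]
    true_weights = {
        "physics": np.array([0.8, 0.4, 0.0, 0.0, 0.0]),
        "history": np.array([0.0, 0.0, 0.0, 0.5, 0.9]),
    }
    for discipline, w in true_weights.items():
        X = rng.normal(size=(50, len(metrics)))
        y = X @ w + rng.normal(scale=0.3, size=50)  # stand-in rankings
        coefs = Lasso(alpha=0.1).fit(X, y).coef_
        kept = [m for m, b in zip(metrics, coefs) if abs(b) > 1e-6]
        print(discipline, "->", kept)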
"Metrics do not take into account contextual information about individuals, which may be relevant."
What does this mean? Age, years since degree, discipline, etc. are all themselves metrics, and can be added to the metric equation.
"They also do not always take into account research from across a number of disciplines."
Interdisciplinarity is a measurable metric. There are self-citations, co-author citations, small citation circles, specialty-wide citations, discipline-wide citations, and cross-disciplinary citations. These are all endogamy/exogamy metrics. They can be given different weights in fields where, say, interdisciplinarity is highly valued.
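A toy sketch of such a classification (the category names, cut-offs and data are invented for illustration, not an established taxonomy):

    # Classify each citation to a paper by its "distance" from the cited
    # authors and discipline, yielding an endogamy/exogamy profile.
    from collections import Counter

    cited_authors = {"Smith", "Jones"}
    cited_discipline = "biology"

    citations = [  # (citing authors, citing discipline) -- invented
        ({"Smith"}, "biology"),         # self-citation
        ({"Jones", "Lee"}, "biology"),  # co-author citation
        ({"Patel"}, "biology"),         # within-discipline citation
        ({"Chen"}, "physics"),          # cross-disciplinary citation
    ]

    def classify(citing_authors, citing_discipline):
        if citing_authors & cited_authors:
            return "self/co-author"
        if citing_discipline == cited_discipline:
            return "within-discipline"
        return "cross-disciplinary"

    profile = Counter(classify(a, d) for a, d in citations)
    print(profile)  # each category can carry its own weight in the equation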
"The definition of the broad subject groups and the assignment of staff and activity to them will need careful consideration."
Is this about the RAE panels? Or about how researchers are to be assigned to disciplines or other groupings?
"Bibliometric indicators will need to be linked to other metrics on research funding and on research postgraduate training."
"Linked"? All metrics need to be considered jointly in a multiple regression equation with the panel rankings (and other validated or face-valid criterion metrics).
"There are potential behavioural effects of using bibliometrics which may not be picked up for some years."
Yes, metrics will shape behaviour (just as panel rankings shaped behaviour), sometimes for the better, sometimes for the worse. Metrics can be abused -- but abuses can also be detected, named and shamed, so there are deterrents and correctives.
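One crude illustration of how such a corrective might be automated (the departments, rates and the 1.5-standard-deviation threshold are all invented):

    # Flag departments whose self-citation rate is an outlier relative
    # to the (hypothetical) distribution for their discipline.
    import statistics

    self_citation_rates = {"A": 0.08, "B": 0.11, "C": 0.09, "D": 0.41, "E": 0.10}
    mean = statistics.mean(self_citation_rates.values())
    sd = statistics.stdev(self_citation_rates.values())

    for dept, rate in self_citation_rates.items():
        if rate > mean + 1.5 * sd:  # crude rule of thumb, not a real policy
            print(f"Dept {dept}: self-citation rate {rate:.2f} flagged for review")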
"There are data limitations where researchers' outputs are not comprehensively catalogued in bibliometrics databases."
The obvious solution for this is Open Access: all UK researchers should deposit all their research output in their Institutional Repositories (IRs). Where it is not possible to set access to a deposit as Open Access, access can be set as Closed Access, but the bibliographic metadata will still be there. (The IRs will not only provide access to the texts and the metadata; they will also generate further metrics, such as download counts, chronometrics, etc.)
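As a sketch of the kind of further metric an IR could generate, here is a toy "chronometric" computed from invented monthly download counts for one deposit:

    # A simple growth-rate chronometric from an IR's download log.
    # The monthly counts below are invented placeholders.
    monthly_downloads = [5, 9, 14, 22, 31, 47]

    total = sum(monthly_downloads)
    earlier_mean = sum(monthly_downloads[:-1]) / (len(monthly_downloads) - 1)
    growth = monthly_downloads[-1] / earlier_mean  # latest month vs. earlier average
    print(f"total downloads = {total}, growth factor = {growth:.2f}")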
"The report comes ahead of the HEFCE consultation on the future of research assessment expected to be announced later this month. Universities UK will consult members once this is published."
Let's hope both UUK and HEFCE are still open-minded about ways to optimise the transition to metrics!
References
Harnad, S., Carr, L., Brody, T. and Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35.
Brody, T., Kampa, S., Harnad, S., Carr, L. and Hitchcock, S. (2003) Digitometric Services for Open Archives Environments. In Proceedings of European Conference on Digital Libraries 2003, pp. 207-220, Trondheim, Norway.
Harnad, S. (2006) Online, Continuous, Metrics-Based Research Assessment. Technical Report, ECS, University of Southampton.
Harnad, S. (2007) Open Access Scientometrics and the UK Research Assessment Exercise. In Proceedings of 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds.
Brody, T., Carr, L., Harnad, S. and Swan, A. (2007) Time to Convert to Metrics. Research Fortnight pp. 17-18.
Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan, A. (2007) Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics. CTWatch Quarterly 3(3).
See also: Prior Open Access Archivangelism Postings on RAE and metrics
Stevan Harnad
American Scientist Open Access Forum