SUMMARY: The UK Research Assessment Exercise's transition from time-consuming, cost-ineffective panel review to low-cost metrics is welcome, but there is still a top-heavy emphasis on the Prior-Funding metric. This will generate a Matthew Effect/Self-Fulfilling Prophecy (the rich get richer) and it will also collapse the UK Dual Funding System -- (1) competitive proposal-based funding plus (2) RAE performance-based, top-sliced funding -- into just a scaled-up version of (1) alone. The RAE should commission rigorous, systematic studies of metrics, testing metric equations discipline by discipline. There are not just three but many potentially powerful and predictive metrics that could be used in these equations (e.g., citations, recursively weighted citations, co-citations, hub/authority indices, latency scores, longevity scores, downloads, download/citation correlations, endogamy/exogamy scores, and many more rich and promising indicators). The objective should be to maximise the depth, breadth, flexibility, predictive power and validity of the battery of RAE metrics by choosing and weighting the right ones. More metrics are better than fewer: they provide cross-checks on one another, and this triangulation can also help catch anomalies, if any.
The UK Research Assessment Exercise's (RAE's) sensible and overdue transition from time-consuming, cost-ineffective panel review to low-cost metrics is moving forward. However, there is still a top-heavy emphasis, in the RAE's provisional metric equation, on the Prior-Funding metric: "How much research funding has the candidate department received in the past?"
"The outcome announced today is a new process that uses for all subjects a set of indicators based on research income, postgraduate numbers, and a quality indicator."
Although prior funding should be part of the equation, it should definitely not be the most heavily weighted component, a priori, in any field. Otherwise it will merely generate a Matthew Effect/Self-Fulfilling Prophecy (the rich get richer) and it will also collapse the UK Dual Funding System -- (1) competitive proposal-based funding plus (2) RAE performance-based, top-sliced funding -- into just a scaled-up version of (1) alone.
Having made the right decision -- to rely far more on low-cost metrics than on costly panels -- the RAE should now commission rigorous, systematic studies of metrics, testing metric equations discipline by discipline. There are not just three candidate metrics but many potentially powerful and predictive ones that could be used in these equations (e.g., citations, recursively weighted citations, co-citations, hub/authority indices, latency scores, longevity scores, downloads, download/citation correlations, endogamy/exogamy scores, and many more rich and promising indicators). Unlike panel review, metrics are automatic and cheap to generate, and during and after the 2008 parallel panel/metric exercise they can be tested and cross-validated against the panel rankings, field by field.
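To make this testing and cross-validation concrete, here is a minimal sketch in Python of how candidate metric weights might be derived and validated against panel rankings within a single discipline. The metric names, the synthetic data, and the simple least-squares weighting are all illustrative assumptions on my part, not the RAE's actual methodology:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 60  # hypothetical departments in one discipline

    # Illustrative candidate metrics (all names and values invented).
    metrics = {
        "citations":     rng.gamma(2.0, 50.0, n),
        "downloads":     rng.gamma(2.0, 500.0, n),
        "prior_funding": rng.gamma(2.0, 1e5, n),
    }
    X = np.column_stack(list(metrics.values()))

    # Standardise each metric so the fitted weights are comparable.
    X = (X - X.mean(axis=0)) / X.std(axis=0)

    # Stand-in for the 2008 panel scores: synthetically generated here as a
    # noisy combination of the metrics, just so the fit has something to find.
    panel_score = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]
                   + rng.normal(0.0, 0.5, n))

    # Fit weights by ordinary least squares: panel_score ~ X @ w + b.
    A = np.column_stack([X, np.ones(n)])
    coef, *_ = np.linalg.lstsq(A, panel_score, rcond=None)
    weights = coef[:-1]

    # Validate: how well does the weighted metric equation
    # reproduce the panel scores?
    predicted = A @ coef
    ss_res = np.sum((panel_score - predicted) ** 2)
    ss_tot = np.sum((panel_score - panel_score.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot

    for name, w in zip(metrics, weights):
        print(f"{name}: fitted weight = {w:+.3f}")
    print(f"R^2 against panel scores: {r_squared:.3f}")

In practice one would also hold out part of the data to guard against overfitting, and, as argued above, the weights would have to be re-estimated separately for each discipline, since no single a priori weighting is likely to fit all fields.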
In all metric fields -- biometrics, psychometrics, sociometrics -- the choice and weighting of metric predictors needs to be based on careful, systematic, prior testing and validation, rather than on a hasty a priori choice. Biased predictors are also to be avoided: the idea is to maximise the depth, breadth, flexibility, predictive power and hence validity of the metrics by choosing and weighting the right ones. More metrics are better than fewer, because they serve as cross-checks on one another; this triangulation also highlights anomalies, if any.
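As a rough illustration of how such triangulation might catch anomalies, the sketch below ranks departments on each metric separately and flags any department whose rank on one metric departs sharply from its consensus (median) rank across the others. Again, the data, the planted anomaly, and the threshold are all invented for illustration:

    import numpy as np

    rng = np.random.default_rng(1)
    n_depts, n_metrics = 30, 5

    # Correlated synthetic metrics: a shared "quality" signal plus noise.
    quality = rng.normal(0.0, 1.0, n_depts)
    scores = quality[:, None] + rng.normal(0.0, 0.3, (n_depts, n_metrics))

    # Plant one anomaly: department 7's score on metric 2 is inflated.
    scores[7, 2] += 4.0

    # Rank departments within each metric (0 = lowest score).
    ranks = scores.argsort(axis=0).argsort(axis=0).astype(float)

    # Consensus rank = median rank across the metrics.
    consensus = np.median(ranks, axis=1, keepdims=True)

    # Flag department/metric cells whose rank deviates from the
    # consensus by more than an (arbitrary) threshold.
    deviation = np.abs(ranks - consensus)
    for dept, metric in zip(*np.where(deviation > 8)):
        print(f"Department {dept}, metric {metric}: "
              f"rank {ranks[dept, metric]:.0f} "
              f"vs consensus rank {consensus[dept, 0]:.0f}")

A department flagged this way is not necessarily gaming anything; the point is only that a battery of metrics makes such discrepancies visible, whereas a single dominant metric (such as prior funding) would not.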
Let us hope that the RAE's good sense will not stop with the decision to convert to metrics, but will continue to prevail in making a sensible, informed choice among the rich spectrum of metrics available in the online age.
Excerpts from "Response to consultation on successor to research assessment exercise":
"In the Science and Innovation Investment Framework 2004-2014 (published in 2004), the Government expressed an interest in using metrics collected as part of the 2008 RAE to provide a benchmark on the value of metrics as compared to peer review, with a view to making more use of metrics in assessment and reducing the administrative burden of peer review. The 10-Year Science and Innovation Investment Framework: Next Steps published with the 2006 Budget moved these plans forward by proposing a consultation on moving to a metrics-based research assessment system after the 2008 RAE. A working Group chaired by Sir Alan Wilson (then DfES Director General of Higher Education) and Professor David Eastwood produced proposals which were issued for consultation on 13 June 2006. The Government announcement today is the outcome of that consultation."
"The RAE panels already make some use of research metrics in reaching their judgements about research quality. Research metrics are statistics that provide indicators of the success of a researcher or department. Examples of metrics include the amount of income a department attracts from funders for its research, the number of postgraduate students, or the number of times a published piece of research is cited by other researchers. Metrics that relate to publications are usually known as bibliometrics.
"The outcome announced today is a new process that uses for all subjects a set of indicators based on research income, postgraduate numbers, and a quality indicator. For subjects in science, engineering, technology and medicine (SET) the quality indicator will be a bibliometric statistic relating to research publications or citations. For other subjects, the quality indicator will continue to involve a lighter touch expert review of research outputs, with a substantial reduction in the administrative burden. Experts will also be involved in advising on the weighting of the indicators for all subjects."
Some Prior References:
Harnad, S. (2001) Why I think that research access, impact and assessment are linked. Times Higher Education Supplement 1487: p. 16.
Hitchcock, S., Brody, T., Gutteridge, C., Carr, L., Hall, W., Harnad, S., Bergmark, D. and Lagoze, C. (2002) Open Citation Linking: The Way Forward. D-Lib Magazine 8(10).
Harnad, S. (2003) Why I believe that all UK research output should be online. Times Higher Education Supplement. Friday, June 6 2003.
Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35.
Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open Research Web: A Preview of the Optimal and the Inevitable. In: Jacobs, N. (Ed.) Open Access: Key Strategic, Technical and Economic Aspects. Chandos.
"Metrics" are Plural, Not Singular: Valid Objections From UUK About RAE
Pertinent Prior AmSci Topic Threads:
UK "RAE" Evaluations (began Nov 2000)
Digitometrics (May 2001)
Scientometric OAI Search Engines (began Aug 2002)
Big Brother and Digitometrics (began May 2001)
UK Research Assessment Exercise (RAE) review (began Oct 2002)
Need for systematic scientometric analyses of open-access data (began Dec 2002)
Potential Metric Abuses (and their Potential Metric Antidotes) (began Jan 2003)
Future UK RAEs to be Metrics-Based (began Mar 2006)
Australia stirs on metrics (Jun 2006)
Let 1000 RAE Metric Flowers Bloom: Avoid Matthew Effect as Self-Fulfilling Prophecy (Jun 2006)
Australia's RQF (Nov 2006)
Stevan Harnad
American Scientist Open Access Forum
Three talks by Stevan Harnad at Indiana University on December 4-5:
(1) Maximising the Return on Resource Investment in Research at Indiana University by Mandating Self-Archiving
(2) Open Access Scientometrics
(3) Origins of Language (Over a hundred thousand years ago, language evolved as a way of providing Open Access to the categories that human beings acquired. Publishing and providing online access to peer-reviewed research findings is just a natural -- indeed optimal and inevitable -- PostGutenberg upgrade of this ancestral adaptation.)