SUMMARY: The conversion of the UK Research Assessment Exercise (RAE) from the present costly, wasteful exercise to time-saving and cost-efficient metrics is timely and welcome, but it is critically important not to bias the outcome by restricting the metrics to prior research funding alone. Otherwise the exercise will merely generate a Matthew Effect -- a self-fulfilling prophecy -- and the RAE will no longer be a semi-independent funding source but merely a multiplier on prior research funding. Open Access will provide a rich digital database from which to harvest a broad and diverse spectrum of metrics, which can then be weighted and adapted to each discipline.
Let 1000 RAE Metric Flowers Bloom:
Avoid Matthew Effect as Self-Fulfilling Prophecy
Stevan Harnad
The conversion of the UK Research Assessment Exercise (RAE) from the present costly, wasteful exercise to time-saving and cost-efficient metrics is welcome, timely, and indeed long overdue. But the worrying thing is that the RAE planners currently seem to be focused on just one metric -- prior research funding -- instead of the full and rich spectrum of new (and old) metrics that will become available in an Open Access world, with all the research performance data digitally available online for analysis and use.
Mechanically basing the future RAE rankings exclusively on prior funding would just generate a Matthew Effect (making the rich richer and the poor poorer): a self-fulfilling prophecy, equivalent to simply increasing the amount given to those who were previously funded and scrapping the RAE altogether as a separate, semi-independent performance evaluator and funding source.
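To make the multiplier point concrete, here is a minimal sketch in Python, with purely hypothetical figures (not real RAE data): if the RAE pot is divided according to a ranking based on prior funding alone, each department's RAE share simply mirrors its prior funding share.

    # Hypothetical illustration: ranking on prior funding alone makes the
    # RAE allocation a mere monotone transform of prior funding.
    prior_funding = {"DeptA": 10.0, "DeptB": 5.0, "DeptC": 1.0}  # hypothetical GBP millions
    pot = 8.0  # hypothetical RAE pot, GBP millions
    total = sum(prior_funding.values())
    # Divide the pot in proportion to each department's prior funding share.
    rae_allocation = {d: pot * f / total for d, f in prior_funding.items()}
    print(rae_allocation)
    # {'DeptA': 5.0, 'DeptB': 2.5, 'DeptC': 0.5} -- the "evaluation" adds no
    # independent information; it is just a multiplier on prior funding.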
What the RAE should be planning to do is to look at weighted combinations of all available research performance metrics -- including the many that are correlated, but not so tightly correlated, with prior RAE rankings -- such as:

- author/article/book citation counts
- article download counts
- co-citations (co-cited with and co-cited by, weighted by the citation weight of the co-citer/co-citee)
- endogamy/exogamy metrics (citations by self or collaborators versus others, within and across disciplines)
- hub/authority counts (in-cites and out-cites, weighted recursively by the citing work's own in-cite and out-cite counts; see the sketch below)
- download and citation growth rates
- semantic-web correlates
- etc.
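As one illustration of the recursive hub/authority idea in the list above, here is a minimal sketch on a hypothetical toy citation graph. It uses a Kleinberg-style (HITS) iteration, offered as an assumed stand-in for whatever recursive weighting scheme the RAE might actually adopt; in an Open Access world the graph would be harvested from the full-text corpus itself.

    # Toy citation graph: paper -> papers it cites (hypothetical data).
    cites = {
        "p1": ["p2", "p3"],
        "p2": ["p3"],
        "p3": [],
        "p4": ["p2", "p3"],
    }
    papers = list(cites)
    hub = {p: 1.0 for p in papers}   # out-cite (hub) scores
    auth = {p: 1.0 for p in papers}  # in-cite (authority) scores
    for _ in range(50):  # power iteration until the scores stabilise
        # A paper's authority is the summed hub weight of the papers citing it.
        auth = {p: sum(hub[q] for q in papers if p in cites[q]) for p in papers}
        # A paper's hub score is the summed authority of the papers it cites.
        hub = {p: sum(auth[q] for q in cites[p]) for p in papers}
        # Normalise so the recursion stays bounded.
        na = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        nh = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        auth = {p: v / na for p, v in auth.items()}
        hub = {p: v / nh for p, v in hub.items()}
    print({p: round(auth[p], 3) for p in papers})  # p3 ranks highest: cited by the strongest hubs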
It would be both arbitrary and absurd to blunt the potential sensitivity, power, predictiveness and validity of metrics a priori by biasing them toward the prior-funding metric alone. Prior funding should be just one out of a full battery of weighted metrics, adjusted to each discipline and validated against one another (and against human judgment too).
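A sketch of what such per-discipline weighting and validation might look like (all numbers hypothetical; ordinary least squares, via numpy, stands in for whatever calibration method is actually chosen): fit weights so that the battery of normalised metrics best predicts a human panel's ranking, then use the fitted scores to cross-check the panel, and vice versa.

    import numpy as np

    # Rows = departments; columns = normalised metrics, e.g.
    # (citations, downloads, hub/authority, prior funding). Hypothetical data.
    metrics = np.array([
        [0.9, 0.8, 0.7, 0.9],
        [0.4, 0.6, 0.5, 0.2],
        [0.7, 0.3, 0.8, 0.5],
        [0.2, 0.1, 0.2, 0.1],
    ])
    panel_score = np.array([0.95, 0.45, 0.65, 0.10])  # human RAE panel judgment

    # Least-squares weights: the combination of metrics that best
    # reproduces the panel's judgment for this discipline.
    weights, *_ = np.linalg.lstsq(metrics, panel_score, rcond=None)
    predicted = metrics @ weights
    print(weights)    # one weight per metric, fitted per discipline
    print(predicted)  # metric-based scores, to be validated against the panel

Different disciplines would yield different weight vectors; a metric that adds no predictive value in a given discipline would simply receive negligible weight there, rather than any single metric being privileged a priori.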
Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open Research Web: A Preview of the Optimal and the Inevitable. In: Jacobs, N. (Ed.) Open Access: Key Strategic, Technical and Economic Aspects, Chapter 21. Chandos.

Pinfield, S. (2006) UK plans research funding overhaul. The Scientist, 21 June 2006.
Stevan Harnad
American Scientist Open Access Forum