Thursday, December 14. 2006
(All quotes are from "The death of peer review" by Natasha Gilbert in Research notes, The Guardian, Tuesday December 12, 2006)
Guardian: "The chancellor has decided to do away with the age-old, and trusted, system of peer review for assessing the quality of science coming out of the UK's universities - which has been used as the basis for carving up public funding." (1) Peer review of research publications is conducted by the referees consulted by peer-reviewed journals.
(2) Peer review of competitive research grant applications is conducted by the referees consulted by the research funding councils (RCUK).
(3) The RAE (Research Assessment Exercise) is neither a research journal nor a competitive research grant funding council.
(4) The RAE is part of a dual research funding system: (i) competitive research grant applications plus (ii) top-sliced funding based on RAE ranking of each university department's research performance.
(5) The RAE panel review is not peer review, and never has been peer review: It is a time-consuming, wasteful re-review of already peer-reviewed publications.
(6) " Metrics" are statistical indicators of research performance such as publication counts, citations, downloads, links, students, funding, etc.
(7) Metrics are already highly correlated with RAE rankings.
(8) What has (at long last) been replaced by metrics is the time-consuming, wasteful RAE panel re-review of already peer-reviewed publications.
We should be celebrating the long-overdue death of RAE panel re-review, not prematurely announcing the demise of peer review itself, which is alive and well.
A more worrisome question concerns which metrics will be used.
Guardian: "From 2010-11, science, engineering, technology and medicine (SET) subjects will instead be assessed using statistical indicators, such as the number of postgraduate students in a department and the amount of money a department brings in through its research."
The fallacy here is that the RAE is supposed to be part of a dual funding system. If competitive funding is used as a heavily weighted metric, that is tantamount to collapsing the two into a single system -- competitive grant applications -- and merely increasing the amount of money given to the winners: a self-fulfilling prophecy and a whopping "Matthew Effect."
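To see why, here is a toy illustration (my own numbers, not from the Guardian piece or the RAE): if the top-slice is allocated in proportion to prior competitive income, it simply rescales the existing funding shares, freezing the rankings and widening the absolute gaps.

```python
# Toy model: RAE top-slice allocated in proportion to prior competitive
# grant income (all figures hypothetical).
def topslice_by_prior_funding(competitive, pot=100.0):
    """Split a fixed RAE pot in proportion to prior competitive income."""
    total = sum(competitive)
    return [pot * c / total for c in competitive]

competitive = [40.0, 30.0, 20.0, 10.0]     # four departments' grant income
rae = topslice_by_prior_funding(competitive)
totals = [c + r for c, r in zip(competitive, rae)]
print(totals)  # [80.0, 60.0, 40.0, 20.0]: same ranking, doubled gaps
# If next round's competitive success tracks total resources, the shares
# never change, whatever the departments' actual research quality.
```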
Yet in the OA world there is a rich variety of potential metrics, which should be tested and validated and customised to each discipline. See "Let 1000 RAE Metric Flowers Bloom: Avoid Matthew Effect as Self-Fulfilling Prophecy."
"Metrics" are Plural, Not Singular: Valid Objections From UUK About RAE Guardian: "This new system should solve the much-complained-about bureaucracy of the research assessment exercise (RAE). But some, such as the Royal Society, the UK's academy of science, are adamant that sounding the death-knell for peer review in SET subjects is a bad move." Metrics will put an end to wasting UK researchers' time re-reviewing and being re-reviewed, allowing them to devote their time instead to doing research. But a biassed and blinkered choice of metrics will sound the death-knell of the dual funding system (not peer review).
Stevan Harnad
American Scientist Open Access Forum
Saturday, December 9. 2006
Peter Suber: "If the metrics have a stronger OA connection, can you say something short (by email or on the blog) that I could quote for readers who aren't clued in, esp. readers outside the UK?" (1) In the UK (Research Assessment Exercise, RAE) and Australia (Research Quality Framework, RQF) all researchers and institutions are evaluated for "top-sliced" funding, over and above competitive research proposals.
(2) Everywhere in the world, researchers and research institutions have research performance evaluations, on which careers/salaries, research funding, economic benefits, and institutional/departmental ratings depend.
(3) There is now a natural synergy growing between OA self-archiving, Institutional Repositories (IRs), OA self-archiving mandates, and the online "metrics" toward which both the RAE/RQF and research evaluation in general are moving.
(4) Each institution's IR is the natural place from which to derive and display research performance indicators: publication counts, citation counts, download counts, and many new metrics, rich and diverse, that will be mined from the OA corpus, making research evaluation much more open, sensitive to diversity, adapted to each discipline, predictive, and equitable. (A minimal harvesting sketch follows this list.)
(5) OA Self-Archiving not only allows performance indicators (metrics) to be collected and displayed, and new metrics to be developed, but OA also enhances metrics (research impact), both competitively (OA vs. NOA) and absolutely (Quality Advantage: OA benefits the best work the most, and Early Advantage), as well as making possible the data-mining of the OA corpus for research purposes. (Research Evaluation, Research Navigation, and Research Data-Mining are all very closely related.)
(6) This powerful and promising synergy between Open Research and Open Metrics is hence also a strong incentive for institutional and funder OA mandates, which will in turn hasten 100% OA: Their connection needs to be made clear, and the message needs to be spread to researchers, their institutions, and their funders.
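As a concrete illustration of point (4): every standards-compliant IR already exposes its records through the OAI-PMH protocol, so even a crude publication-count metric can be harvested with a few lines of code. The sketch below is minimal and assumes a hypothetical EPrints-style endpoint; it is not any official RAE tool.

```python
# Harvest a crude publication-count metric (records per year) from an
# institutional repository's standard OAI-PMH interface.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET
from collections import Counter

OAI = "{http://www.openarchives.org/OAI/2.0/}"
BASE_URL = "https://eprints.example.ac.uk/cgi/oai2"  # hypothetical IR endpoint

def records_per_year(base_url):
    """Count records by datestamp year, following OAI-PMH resumption tokens."""
    counts, token = Counter(), None
    while True:
        params = {"verb": "ListIdentifiers"}
        if token:
            params["resumptionToken"] = token
        else:
            params["metadataPrefix"] = "oai_dc"
        with urllib.request.urlopen(base_url + "?" + urllib.parse.urlencode(params)) as f:
            tree = ET.parse(f)
        for header in tree.iter(OAI + "header"):
            datestamp = header.findtext(OAI + "datestamp", "")
            if datestamp:
                counts[datestamp[:4]] += 1
        tok_el = tree.find(f".//{OAI}resumptionToken")
        token = tok_el.text if tok_el is not None and tok_el.text else None
        if not token:
            return counts

print(records_per_year(BASE_URL))
```

The same loop, pointed at richer metadata, could just as easily tally downloads or citation links; the point is that the raw material for open metrics is already sitting in the IRs.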
(Needless to say, closed, internal, non-displayed metrics are also feasible, where appropriate.)
Pertinent Prior AmSci Topic Threads:
UK "RAE" Evaluations (began Nov 2000)
Big Brother and Digitometrics (May 2001)
Scientometric OAI Search Engines (began Aug 2002)
UK Research Assessment Exercise (RAE) review (Oct 2002)
Need for systematic scientometric analyses of open-access data (began Dec 2002)
Potential Metric Abuses (and their Potential Metric Antidotes) (began Jan 2003)
Future UK RAEs to be Metrics-Based (began Mar 2006)
Australia stirs on metrics (Jun 2006)
Let 1000 RAE Metric Flowers Bloom: Avoid Matthew Effect as Self-Fulfilling Prophecy (Jun 2006)
Australia's RQF (Nov 2006)
Stevan Harnad
American Scientist Open Access Forum
Thursday, December 7. 2006
SUMMARY: The UK Research Assessment Exercise's transition from time-consuming, cost-ineffective panel review to low-cost metrics is welcome, but there is still a top-heavy emphasis on the Prior-Funding metric. This will generate a Matthew-Effect/Self-Fulfilling Prophecy (the rich get richer) and it will also collapse the UK Dual Funding System -- (1) competitive proposal-based funding plus (2) RAE performance-based, top-sliced funding -- into just a scaled-up version of (1) alone. The RAE should commission rigorous, systematic studies, testing metric equations discipline by discipline. There are not just three but many potentially powerful and predictive metrics that could be used in these equations (e.g., citations, recursively weighted citations, co-citations, hub/authority indices, latency scores, longevity scores, downloads, download/citation correlations, endogamy/exogamy scores, and many more rich and promising indicators). The objective should be to maximise the depth, breadth, flexibility, predictive power and validity of the battery of RAE metrics by choosing and weighting the right ones. More metrics are better than fewer: they provide cross-checks on one another, and triangulation can also help catch anomalies, if any.
The UK Research Assessment Exercise's (RAE's) sensible and overdue transition from time-consuming, cost-ineffective panel review to low-cost metrics is moving forward. However, there is still a top-heavy emphasis, in the RAE's provisional metric equation, on the Prior-Funding metric: "How much research funding has the candidate department received in the past?"
"The outcome announced today is a new process that uses for all subjects a set of indicators based on research income, postgraduate numbers, and a quality indicator."
Although prior funding should be part of the equation, it should definitely not be the most heavily weighted component a priori, in any field. Otherwise it will merely generate a Matthew-Effect/Self-Fulfilling Prophecy (the rich get richer, etc.) and it will also collapse the UK Dual Funding System -- (1) competitive proposal-based funding plus (2) RAE performance-based, top-sliced funding -- into just a scaled-up version of (1) alone.
Having made the right decision -- to rely far more on low-cost metrics than on costly panels -- the RAE should now commission rigorous, systematic studies of metrics, testing metric equations discipline by discipline. There are not just three but many potentially powerful and predictive metrics that could be used in these equations (e.g., citations, recursively weighted citations, co-citations, hub/authority indices, latency scores, longevity scores, downloads, download/citation correlations, endogamy/exogamy scores, and many more rich and promising indicators). Unlike panel review, metrics are automatic and cheap to generate, and during and after the 2008 parallel panel/metric exercise they can be tested and cross-validated against the panel rankings, field by field.
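For concreteness, here is a minimal sketch (with simulated data standing in for real departments) of what such testing could look like in one field: fit the metric weights against panel scores on half the departments, then check the weighted equation's rank correlation with the panel on the held-out half. The numbers and the three metrics chosen are illustrative assumptions, not the RAE's actual procedure.

```python
# Cross-validate a metric equation against panel rankings in one field.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 40                                    # hypothetical departments in one field
metrics = rng.normal(size=(n, 3))         # e.g. citations, downloads, PhD counts
true_w = np.array([0.6, 0.3, 0.1])
panel = metrics @ true_w + rng.normal(scale=0.2, size=n)  # simulated panel score

# Fit weights on even-indexed departments, validate on odd-indexed ones.
train, test = np.arange(0, n, 2), np.arange(1, n, 2)
w, *_ = np.linalg.lstsq(metrics[train], panel[train], rcond=None)
rho, _ = spearmanr(metrics[test] @ w, panel[test])
print(f"fitted weights {np.round(w, 2)}, held-out rank correlation {rho:.2f}")
```

Run field by field on the 2008 parallel panel/metric data, this kind of exercise would show directly which metrics carry predictive weight in which disciplines, instead of fixing the weights a priori.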
In all metric fields -- biometrics, psychometrics, sociometrics -- the choice and weighting of metric predictors needs to be based on careful, systematic prior testing and validation, rather than on a hasty a-priori choice. Biased predictors are also to be avoided: the idea is to maximise the depth, breadth, flexibility, predictive power and hence validity of the metrics by choosing and weighting the right ones. More metrics are better than fewer, because they serve as cross-checks on one another; this triangulation also highlights anomalies, if any.
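A small sketch of what such triangulation could look like (all figures invented for illustration): standardise each metric to z-scores and flag any department whose metrics disagree sharply, since the disagreement, not any single number, is the signal to investigate.

```python
# Triangulation: flag departments whose metrics tell conflicting stories.
import statistics

def zscores(xs):
    """Standardise a list of values to mean 0, standard deviation 1."""
    mu, sd = statistics.mean(xs), statistics.stdev(xs)
    return [(x - mu) / sd for x in xs]

# Hypothetical departments: (citations, downloads, postgraduate numbers).
table = {
    "DeptA": (950, 12000, 55),
    "DeptB": (400, 5000, 22),
    "DeptC": (380, 4800, 20),
    "DeptD": (90, 15000, 8),   # downloads wildly out of line with citations
}
columns = list(zip(*table.values()))
rows = list(zip(*(zscores(list(col)) for col in columns)))
for dept, zs in zip(table, rows):
    spread = max(zs) - min(zs)  # disagreement across this department's metrics
    flag = "  <-- anomaly: cross-check this one" if spread > 2.0 else ""
    print(dept, [f"{z:+.2f}" for z in zs], flag)
```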
Let us hope that the RAE's good sense will not stop with the decision to convert to metrics, but will continue to prevail in making a sensible, informed choice among the rich spectrum of metrics available in the online age.
Excerpts from
"Response to consultation on successor to research assessment exercise"
"In the Science and Innovation Investment Framework 2004-2014 (published in 2004), the Government expressed an interest in using metrics collected as part of the 2008 RAE to provide a benchmark on the value of metrics as compared to peer review, with a view to making more use of metrics in assessment and reducing the administrative burden of peer review. The 10-Year Science and Innovation Investment Framework: Next Steps published with the 2006 Budget moved these plans forward by proposing a consultation on moving to a metrics-based research assessment system after the 2008 RAE. A working Group chaired by Sir Alan Wilson (then DfES Director General of Higher Education) and Professor David Eastwood produced proposals which were issued for consultation on 13 June 2006. The Government announcement today is the outcome of that consultation."
"The RAE panels already make some use of research metrics in reaching their judgements about research quality. Research metrics are statistics that provide indicators of the success of a researcher or department. Examples of metrics include the amount of income a department attracts from funders for its research, the number of postgraduate students, or the number of times a published piece of research is cited by other researchers. Metrics that relate to publications are usually known as bibliometrics.
"The outcome announced today is a new process that uses for all subjects a set of indicators based on research income, postgraduate numbers, and a quality indicator. For subjects in science, engineering, technology and medicine (SET) the quality indicator will be a bibliometric statistic relating to research publications or citations. For other subjects, the quality indicator will continue to involve a lighter touch expert review of research outputs, with a substantial reduction in the administrative burden. Experts will also be involved in advising on the weighting of the indicators for all subjects." Some Prior References:
Harnad, S. (2001) Why I think that research access, impact and assessment are linked. Times Higher Education Supplement 1487: p. 16.
Hitchcock, S., Brody, T., Gutteridge, C., Carr, L., Hall, W., Harnad, S., Bergmark, D. and Lagoze, C. (2002) Open Citation Linking: The Way Forward. D-Lib Magazine 8(10).
Harnad, S. (2003) Why I believe that all UK research output should be online. Times Higher Education Supplement, Friday, June 6, 2003.
Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35.
Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open Research Web: A Preview of the Optimal and the Inevitable. In: Jacobs, N. (Ed.) Open Access: Key Strategic, Technical and Economic Aspects. Chandos.
See also: "Metrics" are Plural, Not Singular: Valid Objections From UUK About RAE
Pertinent Prior AmSci Topic Threads:
UK "RAE" Evaluations (began Nov 2000)
Digitometrics (May 2001)
Scientometric OAI Search Engines (began Aug 2002)
UK Research Assessment Exercise (RAE) review (Oct 2002)
Australia stirs on metrics (June 2006)
Big Brother and Digitometrics (began May 2001)
UK Research Assessment Exercise (RAE) review (began Oct 2002)
Need for systematic scientometric analyses of open-access data (began Dec 2002)
Potential Metric Abuses (and their Potential Metric Antidotes) (began Jan 2003)
Future UK RAEs to be Metrics-Based (began Mar 2006)
Australia stirs on metrics (Jun 2006)
Let 1000 RAE Metric Flowers Bloom: Avoid Matthew Effect as Self-Fulfilling Prophecy (Jun 2006)
Australia's RQF (Nov 2006) Stevan Harnad
American Scientist Open Access Forum