Re: UK Research Assessment Exercise (RAE) review

From: Jan Velterop <jan_at_biomedcentral.com>
Date: Tue, 26 Nov 2002 18:33:08 +0000

As Einstein reputedly said, "Not everything that can be counted, counts;
and not everything that counts, can be counted."

Scientometrics and other metrics are about counting what can be counted.
No doubt the actions of citing, using, browsing, teaching, et cetera,
are real ones that can be counted and are thus 'objective'. So 'quantity'
is dealt with. What about 'quality'? Quality is relative, and based on
judgement. The (micro-)judgements that lead to citing, browsing, awarding
Nobel prizes (OK, not so micro), et cetera, are utterly subjective,
so what we count is 'votes'. Do more votes mean higher 'quality'
than fewer votes? Does it matter who does the voting?

I think it does, at least in these matters, and therefore a review process
is needed that ranks things like originality, fundamental new insights,
and yes, contributions to wider dissemination and understanding as well,
in order to base important decisions on more than just quasi-objective
measurements.

Fortunately, in biology such secondary review is beginning to take shape:
Faculty of 1000 (www.facultyof1000.com). It shows that the subjective
importance of articles is often unconnected, or only very loosely connected,
to established scientometrics. It constantly brings up 'hidden jewels',
articles in pretty obscure journals that are nonetheless highly interesting
or significant.

I am sure that automated, more inclusive counting of votes, made possible
by open and OAI-compliant online journals and repositories, will help the
visibility of those currently outside the ISI Impact Factory universe, such
as the journals from Bhutan. But it can't replace judgement.
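
As a rough sketch of what such automated counting might look like: the
snippet below harvests Dublin Core records over OAI-PMH and tallies how
often each identifier appears in other records' dc:relation fields, a
crude stand-in for 'votes'. The base URL is a placeholder and the
counting rule is an assumption made for illustration, not any journal's
or repository's actual pipeline.

    # Minimal sketch of OAI-PMH-based 'vote counting' (illustration only).
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET
    from collections import Counter

    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    DC = "{http://purl.org/dc/elements/1.1/}"

    BASE_URL = "https://repository.example.org/oai"  # hypothetical endpoint

    def harvest(base_url, metadata_prefix="oai_dc"):
        """Yield <record> elements, following resumptionTokens page by page."""
        params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
        while True:
            url = base_url + "?" + urllib.parse.urlencode(params)
            with urllib.request.urlopen(url) as response:
                tree = ET.parse(response)
            for record in tree.iter(OAI + "record"):
                yield record
            token = tree.find(".//" + OAI + "resumptionToken")
            if token is None or not (token.text or "").strip():
                break
            params = {"verb": "ListRecords",
                      "resumptionToken": token.text.strip()}

    def count_reference_votes(records):
        """Count how often identifiers occur in other records' dc:relation."""
        votes = Counter()
        for record in records:
            for relation in record.iter(DC + "relation"):
                if relation.text:
                    votes[relation.text.strip()] += 1
        return votes

    if __name__ == "__main__":
        votes = count_reference_votes(harvest(BASE_URL))
        for identifier, n in votes.most_common(10):
            print(n, identifier)

In practice the identifiers (DOIs, URLs) would need normalising before
they are counted, and downloads or link-resolver logs could feed the
same tally; those details are left out of the sketch.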

Jan Velterop

> -----Original Message-----
> From: Stevan Harnad [mailto:harnad_at_ecs.soton.ac.uk]
> Sent: 26 November 2002 15:16
> To: AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM_at_LISTSERVER.SIGMAXI.ORG
>
> For the sake of communication and moving ahead, I would like to clarify
> two points of definition (and methodology, and logic) about the terms
> "research impact" and "scientometric measures":
>
> "Research impact" means the measurable effects of research, including
> everything in the following range of measurable effects:
>
> (1) browsed
> (2) read
> (3) taught
> (4) cited
> (5) co-cited by authoritative sources
> (6) used in other research
> (7) applied in practical applications
> (8) awarded the Nobel Prize
>
> All of these (and probably more) are objectively measurable indices of
> research impact. Research impact is not, and never has been, just (4),
> i.e., not just citation counts, whether average journal citation ratios
> (the ISI "journal impact factor") or individual paper total or annual
> citation counts, or individual author total or average or annual
> citation counts (though citations are certainly important, in this
> family of impact measures).
>
> So when I speak of the multiple regression equation measuring research
> impact, I mean all of the above (at the very least).
>
> "Scientometric measures" are the above measures. Scientometric analyses
> also include time-series analyses, looking for time-based patterns in
> the individual curves and the interrelations among measures like the
> above ones -- and much more, to be discovered and designed as the
> scientometric database consisting of the full text papers, their
> reference list and their raw data become available for
> analysis online.
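
As a purely illustrative sketch of the multiple-regression idea in the
quoted message: the toy script below invents per-paper counts for
measures (1)-(8), invents a panel rating to serve as the dependent
variable, and fits ordinary-least-squares weights. Every number and
name is made up; this is not the RAE's or ISI's actual methodology.

    # Toy multiple regression over the eight impact measures (invented data).
    import numpy as np

    measures = ["browsed", "read", "taught", "cited", "co_cited",
                "used_in_research", "applied", "prizes"]

    rng = np.random.default_rng(0)

    # Invented per-paper counts for the eight measures (rows = papers).
    X = rng.poisson(lam=20, size=(50, len(measures))).astype(float)

    # Invented 'true' weights and a noisy panel rating as the criterion.
    true_w = np.array([0.1, 0.2, 0.1, 1.0, 0.8, 0.5, 0.3, 2.0])
    y = X @ true_w + rng.normal(scale=5.0, size=50)

    # Standardise the predictors so the fitted weights are comparable,
    # then add an intercept column.
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    Xz = np.column_stack([np.ones(len(Xz)), Xz])

    # Ordinary least squares: the weights estimate each measure's contribution.
    weights, *_ = np.linalg.lstsq(Xz, y, rcond=None)

    for name, w in zip(["intercept"] + measures, weights):
        print(f"{name:18s} {w:+.2f}")

Time-series versions of the same idea would simply refit such weights
(or track the individual curves) at successive points in time as the
open, OAI-harvestable database grows.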