Tuesday, December 12. 2006
These cartoons by Judith Economos don't really capture the "raincoat" metaphor. (What they are actually illustrating is "The Geeks and the Irrational.")
The raincoat metaphor (to ruin it, by explaining it) speaks for itself: Rain is obvious. The (cumulative) disadvantages of wetness are obvious. That raincoats are for shielding you from the rain is obvious. That raincoats can shield you from the rain is obvious. Yet, Zeno-like, the raincoat-advice -- "It's raining: Time to put on the ol' raincoats!" -- is not taken, for at least 34 silly, obviously defeasible reasons: "It's not raining. You can't stop the rain. Rain's good for you. God meant us to get wet. Raincoats are illegal. Raincoats don't work. Raincoats don't last. Raincoats will ruin the health-care industry. We need to block the clouds directly instead. Putting on a raincoat takes too long. Putting on a raincoat is too much work. I can't button my raincoat..." (See "Raincoat Science: 43 More Open Access Haikus")
But Judith's cartoons are just too good not to show, even though they are about the green vs. gold option rather than the don (sic) vs. don't option...
(Judith Economos also illustrated these Open Access rhymes. Feel free to use any of this to promote OA.)
Sunday, December 10. 2006
In Open Access News, Peter Suber excerpted the following from the AIP Position On Open Access & Public Access: "AIP is fearful of and against government mandates that provide rules in favor of one business model over another.
AIP is against funding agencies mandating free access to articles after they have undergone costly peer review or editing by publishers."
It is important not to confuse AIP (American Institute of Physics) with APS (American Physical Society). AIP is merely the publisher of the journals of APS, which is a Learned Society (and one of the most progressive on OA).
Evolving APS Copyright Policy (American Physical Society) (began Dec 1999)
APS copyright policy (Mar 2002)
Don't take the grumbling of AIP too seriously. The APS/AIP division of labor is optimal, because it allows us to separate the scientific/scholarly interests from the publishing interests (which are so thoroughly conflated in most other Learned Societies, notably the American Chemical Society!).
ACS meeting comments on e-prints
Not a Proud Day in the Annals of the Royal Society
The AIP is basically saying that the interests of generating and protecting AIP's current revenue streams and cost-recovery model trump the interests of research, researchers, their institutions, their funders, and the interests of the tax-paying public that funds their funders.
In contrast, the international Open Access movement, five out of eight UK Research Councils, the Wellcome Trust, a growing number of Australian and Canadian Research Councils, CERN, the proposed US Federal Research Public Access Act (FRPAA), the provosts of most of the top US universities, the European Commission, the Developing World, and a growing number of individual universities and research institutions think otherwise.
(By the way, self-archiving mandates do not provide "rules in favor of one business model over another": they are not about business models at all. They are about maximizing the access, usage and impact of publicly funded research.)
AIP is the publishing tail, yet again trying to wag the research dog. Soon we will see an end to this sort of nonsense.
Berners-Lee, T., De Roure, D., Harnad, S. and Shadbolt, N. (2005) Journal publishing and author self-archiving: Peaceful Co-Existence and Fruitful Collaboration. Technical Report, Department of Electronics and Computer Science, University of Southampton.
Stevan Harnad
American Scientist Open Access Forum
Saturday, December 9. 2006
Peter Suber: "If the metrics have a stronger OA connection, can you say something short (by email or on the blog) that I could quote for readers who aren't clued in, esp. readers outside the UK?" (1) In the UK (Research Assessment Exercise, RAE) and Australia (Research Quality Framework, RQF) all researchers and institutions are evaluated for "top-sliced" funding, over and above competitive research proposals.
(2) Everywhere in the world, researchers and research institutions have research performance evaluations, on which careers/salaries, research funding, economic benefits, and institutional/departmental ratings depend.
(3) There is now a natural synergy growing between OA self-archiving, Institutional Repositories (IRs), OA self-archiving mandates, and the online "metrics" toward which both the RAE/RQF and research evaluation in general are moving.
(4) Each institution's IR is the natural place from which to derive and display research performance indicators: publication counts, citation counts, download counts, and many new metrics, rich and diverse ones, that will be mined from the OA corpus, making research evaluation much more open, sensitive to diversity, adapted to each discipline, predictive, and equitable.
(5) OA Self-Archiving not only allows performance indicators (metrics) to be collected and displayed, and new metrics to be developed, but OA also enhances metrics (research impact), both competitively (OA vs. NOA) and absolutely (Quality Advantage: OA benefits the best work the most, and Early Advantage), as well as making possible the data-mining of the OA corpus for research purposes. (Research Evaluation, Research Navigation, and Research Data-Mining are all very closely related; see the sketch after this list.)
(6) This powerful and promising synergy between Open Research and Open Metrics is hence also a strong incentive for institutional and funder OA mandates, which will in turn hasten 100% OA: Their connection needs to be made clear, and the message needs to be spread to researchers, their institutions, and their funders.
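To make points (4) and (5) concrete, here is a minimal, purely illustrative sketch of the kind of indicators an IR could compute automatically from its own deposit records. The record format, the figures, and the OA/non-OA split are all invented for the example; a real repository would of course draw on its full metadata and usage logs.

```python
# Hypothetical sketch: deriving simple performance indicators from an
# institutional repository's deposit records. The record format and all
# figures are invented for illustration only.

from statistics import correlation, mean  # statistics.correlation needs Python 3.10+

records = [
    # (title, is_open_access, downloads, citations) -- toy data
    ("Paper A", True,  420, 31),
    ("Paper B", False,  60,  9),
    ("Paper C", True,  350, 24),
    ("Paper D", False,  80,  7),
    ("Paper E", True,  500, 40),
]

oa  = [r for r in records if r[1]]
noa = [r for r in records if not r[1]]

print("publication count:", len(records))
print("mean citations, OA vs non-OA:",
      mean(c for *_, c in oa), "vs", mean(c for *_, c in noa))

# Download/citation correlation across the whole corpus -- one of the
# "new metrics" a repository can generate at essentially no cost.
downloads = [d for _, _, d, _ in records]
citations = [c for *_, c in records]
print("download/citation correlation:",
      round(correlation(downloads, citations), 3))
```

Even this toy version shows how publication counts, OA vs. non-OA comparisons and download/citation correlations all fall out of the same deposit data automatically.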
(Needless to say, closed, internal, non-displayed metrics are also feasible, where appropriate.)
Pertinent Prior AmSci Topic Threads:
UK "RAE" Evaluations (began Nov 2000)
Big Brother and Digitometrics (May 2001)
Scientometric OAI Search Engines (began Aug 2002)
UK Research Assessment Exercise (RAE) review (Oct 2002)
Need for systematic scientometric analyses of open-access data (began Dec 2002)
Potential Metric Abuses (and their Potential Metric Antidotes) (began Jan 2003)
Future UK RAEs to be Metrics-Based (began Mar 2006)
Australia stirs on metrics (Jun 2006)
Let 1000 RAE Metric Flowers Bloom: Avoid Matthew Effect as Self-Fulfilling Prophecy (Jun 2006)
Australia's RQF (Nov 2006)
Stevan Harnad
American Scientist Open Access Forum
Friday, December 8. 2006
On the good authority of Arthur Sale (and Peter Suber), the classification of the Australian Research Council (ARC) self-archiving policy in ROARMAP has been upgraded to a mandate.
There are now 17 self-archiving mandates worldwide, 5 of them in Australia: a departmental and a university-wide one at U. Tasmania, a university-wide one at QUT, and a funder mandate at ARC, joined soon after by another funder mandate (NHMRC) and reinforced by the Research Quality Framework (RQF) (the Australian counterpart of the UK Research Assessment Exercise, RAE).
Congratulations to Australia, and especially to Tom Cochrane, Paul Callan, Colin Steele, Malcolm Gillies, and to the Archivangelist of the Antipodes, Arthur Sale.
Thursday, December 7. 2006
SUMMARY: The UK Research Assessment Exercise's transition from time-consuming, cost-ineffective panel review to low-cost metrics is welcome, but there is still a top-heavy emphasis on the Prior-Funding metric. This will generate a Matthew-Effect/Self-Fulfilling Prophecy (the rich get richer) and it will also collapse the UK Dual Funding System -- (1) competitive proposal-based funding plus (2) RAE performance-based, top-sliced funding -- into just a scaled up version of (1) alone. The RAE should commission rigorous, systematic studies, testing metric equations discipline by discipline. There are not just three but many potentially powerful and predictive metrics that could be used in these equations (e.g., citations, recursively weighted citations, co-citations, hub/authority indices, latency scores, longevity scores, downloads, download/citation correlations, endogamy/exogamy scores, and many more rich and promising indicators). The objective should be to maximise the depth, breadth, flexibility, predictive power and validity of the battery of RAE metrics by choosing and weighting the right ones. More metrics are better than fewer. They provide cross-checks on one another and triangulation can also help catch anomalies, if any.
The UK Research Assessment Exercise's (RAE's) sensible and overdue transition from time-consuming, cost-ineffective panel review to low-cost metrics is moving forward. However, there is still a top-heavy emphasis, in the RAE's provisional metric equation, on the Prior-Funding metric: "How much research funding has the candidate department received in the past?" "The outcome announced today is a new process that uses for all subjects a set of indicators based on research income, postgraduate numbers, and a quality indicator." Although prior funding should be part of the equation, it should definitely not be the most heavily weighted component a priori, in any field. Otherwise, it will merely generate a Matthew-Effect/Self-Fulfilling Prophecy (the rich get richer, etc.) and it will also collapse the UK Dual Funding System -- (1) competitive proposal-based funding plus (2) RAE performance-based, top-sliced funding -- into just a scaled up version of (1) alone.
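A toy simulation can make the self-fulfilling dynamic explicit. Assuming, purely for illustration, three departments, a fixed funding pot, and a metric that over-weights prior funding, the initial ranking locks in and amplifies itself; none of the numbers below come from the RAE.

```python
# Toy simulation (all figures invented): if the assessment metric
# over-weights prior funding, initial differences compound round after
# round -- the Matthew Effect as self-fulfilling prophecy.

funding = {"Dept X": 100.0, "Dept Y": 90.0, "Dept Z": 80.0}
POT = 270.0     # fixed pot redistributed at each assessment round
WEIGHT = 1.5    # exponent > 1 models over-weighting prior funding

for round_no in range(1, 7):
    # Each department's "score" is just its prior funding, over-weighted.
    scores = {d: f ** WEIGHT for d, f in funding.items()}
    total = sum(scores.values())
    # The pot is reallocated in proportion to those scores, so the
    # richest department's share grows every round, on no new merit.
    funding = {d: POT * s / total for d, s in scores.items()}
    print(f"round {round_no}:",
          {d: round(f, 1) for d, f in funding.items()})
```

After a handful of rounds the initially small gap between the departments has grown into a near winner-take-all distribution, which is exactly the collapse of (2) into (1) described above.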
Having made the right decision -- to rely far more on low-cost metrics than on costly panels -- the RAE should now commission rigorous, systematic studies of metrics, testing metric equations discipline by discipline. There are not just three but many potentially powerful and predictive metrics that could be used in these equations (e.g., citations, recursively weighted citations, co-citations, hub/authority indices, latency scores, longevity scores, downloads, download/citation correlations, endogamy/exogamy scores, and many more rich and promising indicators). Unlike panel review, metrics are automatic and cheap to generate, and during and after the 2008 parallel panel/metric exercise they can be tested and cross-validated against the panel rankings, field by field.
In all metric fields -- biometrics, psychometrics, sociometrics -- the choice and weight of metric predictors needs to be based on careful, systematic, prior testing and validation, rather than on a hasty a priori choice. Biased predictors are also to be avoided: The idea is to maximise the depth, breadth, flexibility, predictive power and hence validity of the metrics by choosing and weighting the right ones. More metrics are better than fewer, because they serve as cross-checks on one another; this triangulation also highlights anomalies, if any.
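As a hedged sketch of what that validation might look like in miniature: within a single discipline, fit weights for a few candidate metrics against the 2008 panel scores, then test the fitted equation on held-out departments. Every department, metric value and panel score below is invented, and a real exercise would use far more data and more careful cross-validation.

```python
# Illustrative sketch (invented data) of validating a metric equation
# against panel rankings within one discipline, via least squares.

import numpy as np

# rows = departments; columns = candidate metrics
# (citations, downloads, prior funding) -- all figures invented
X = np.array([
    [120,  900, 2.0],
    [ 80,  400, 1.1],
    [200, 1500, 3.2],
    [ 60,  350, 0.9],
    [150, 1100, 2.5],
    [ 90,  500, 1.3],
    [170, 1300, 2.8],
    [ 70,  300, 1.0],
], dtype=float)
panel = np.array([3.1, 2.2, 3.9, 1.8, 3.4, 2.4, 3.6, 1.9])  # invented panel scores

# Standardize each metric so the fitted weights are comparable.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
A = np.hstack([Xz, np.ones((len(Xz), 1))])  # add an intercept column

train, test = slice(0, 6), slice(6, None)   # hold out two departments
w, *_ = np.linalg.lstsq(A[train], panel[train], rcond=None)

print("fitted weights (citations, downloads, funding, intercept):", w.round(2))
print("held-out predictions:", (A[test] @ w).round(2), "panel:", panel[test])
```

The point of such an exercise is exactly the triangulation described above: the fitted weights are empirical rather than a priori, they can differ by discipline, and a metric whose weight or held-out predictions misbehave is flagged rather than trusted.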
Let us hope that the RAE's good sense will not stop with the decision to convert to metrics, but will continue to prevail in making a sensible, informed choice among the rich spectrum of metrics available in the online age.
Excerpts from
"Response to consultation on successor to research assessment exercise"
"In the Science and Innovation Investment Framework 2004-2014 (published in 2004), the Government expressed an interest in using metrics collected as part of the 2008 RAE to provide a benchmark on the value of metrics as compared to peer review, with a view to making more use of metrics in assessment and reducing the administrative burden of peer review. The 10-Year Science and Innovation Investment Framework: Next Steps published with the 2006 Budget moved these plans forward by proposing a consultation on moving to a metrics-based research assessment system after the 2008 RAE. A working Group chaired by Sir Alan Wilson (then DfES Director General of Higher Education) and Professor David Eastwood produced proposals which were issued for consultation on 13 June 2006. The Government announcement today is the outcome of that consultation."
"The RAE panels already make some use of research metrics in reaching their judgements about research quality. Research metrics are statistics that provide indicators of the success of a researcher or department. Examples of metrics include the amount of income a department attracts from funders for its research, the number of postgraduate students, or the number of times a published piece of research is cited by other researchers. Metrics that relate to publications are usually known as bibliometrics.
"The outcome announced today is a new process that uses for all subjects a set of indicators based on research income, postgraduate numbers, and a quality indicator. For subjects in science, engineering, technology and medicine (SET) the quality indicator will be a bibliometric statistic relating to research publications or citations. For other subjects, the quality indicator will continue to involve a lighter touch expert review of research outputs, with a substantial reduction in the administrative burden. Experts will also be involved in advising on the weighting of the indicators for all subjects." Some Prior References:
Harnad, S. (2001) Why I think that research access, impact and assessment are linked. Times Higher Education Supplement 1487: p. 16.
Hitchcock, S., Brody, T., Gutteridge, C., Carr, L., Hall, W., Harnad, S., Bergmark, D. and Lagoze, C. (2002) Open Citation Linking: The Way Forward. D-Lib Magazine 8(10).
Harnad, S. (2003) Why I believe that all UK research output should be online. Times Higher Education Supplement. Friday, June 6 2003.
Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35.
Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open Research Web: A Preview of the Optimal and the Inevitable. In: Jacobs, N. (ed.) Open Access: Key Strategic, Technical and Economic Aspects. Chandos.
"Metrics" are Plural, Not Singular: Valid Objections From UUK About RAE
Pertinent Prior AmSci Topic Threads:
UK "RAE" Evaluations (began Nov 2000)
Digitometrics (May 2001)
Scientometric OAI Search Engines (began Aug 2002)
UK Research Assessment Exercise (RAE) review (Oct 2002)
Australia stirs on metrics (June 2006)
Big Brother and Digitometrics (began May 2001)
UK Research Assessment Exercise (RAE) review (began Oct 2002)
Need for systematic scientometric analyses of open-access data (began Dec 2002)
Potential Metric Abuses (and their Potential Metric Antidotes) (began Jan 2003)
Future UK RAEs to be Metrics-Based (began Mar 2006)
Australia stirs on metrics (Jun 2006)
Let 1000 RAE Metric Flowers Bloom: Avoid Matthew Effect as Self-Fulfilling Prophecy (Jun 2006)
Australia's RQF (Nov 2006)
Stevan Harnad
American Scientist Open Access Forum
Three talks by Stevan Harnad at Indiana University on December 4-5:
(1) Maximising the Return on Resource Investment in Research at Indiana University by Mandating Self-Archiving
(2) Open Access Scientometrics
(3) Origins of Language
(Over a hundred thousand years ago, language evolved as a way of providing Open Access to the categories that human beings acquired. Publishing and providing online access to peer-reviewed research findings is just a natural -- indeed optimal and inevitable -- PostGutenberg upgrade of this ancestral adaptation.)
Wednesday, December 6. 2006
Brunel University's School of Information Systems, Computing and Mathematics has just adopted the 9th departmental/institutional self-archiving mandate. (Together with the 6 research funder mandates, that now makes 15 mandates worldwide, and the 8th for the UK.)
Brunel University School of Information Systems, Computing and Mathematics (UNITED KINGDOM mandate)
Institution's/Department's OA Eprint Archives: http://bura.brunel.ac.uk/
Institution's/Department's OA Self-Archiving Policy:
BURA will make journal articles, conference papers, doctoral theses, recordings and images freely available via the internet, allowing users to read, download and copy material for non-commercial private study or research purposes. Brunel's School of Information Systems, Computing and Mathematics is supporting the initiative to make it compulsory for researchers to deposit their journal articles and theses in BURA. "[F]or academics it will make readily available their research to the world. If it is successful, it could also lead onto the whole university adopting mandatory self-archiving."
This is an instance of Prof. Arthur Sale's recommended "Patchwork Mandate" -- departments first, then the university as a whole. Other examples are Prof. Sale's own University of Tasmania's departmental and university-wide mandates and the University of Southampton's ECS departmental mandate (soon to become a university-wide mandate).
If your own university or research institution has a self-archiving policy, please register it in ROARMAP (Registry of Open Access Repository Material Archiving Policies)
Stevan Harnad
American Scientist Open Access Forum