Tuesday, September 9. 2008

Open Access and Research Conference 2008: Brisbane 24-25 September

Open Access and Research Conference 2008

Research Evaluation, Metrics and Open Access in the Humanities: Dublin 18-20 September

-- Aimed at Arts and Humanities researchers, Deans of Research, Librarians, research group leaders and policy makers within the Coimbra-Group member universities and the Irish University sector...

Research Evaluation, Metrics and Open Access in the Humanities

-- To compare established and innovative methods and models of research evaluation and assess their appropriateness for the Arts and Humanities sector...
-- To assess the increasing impact of bibliometric approaches and Open Access policies on the Arts and Humanities sector...

Wednesday, November 28. 2007

Administrative Keystroke Mandates To Record Research Output Can Serve As Open Access Mandates Too

There is no need to keep waiting for governmental OA mandates.

Harnad, Stevan (2005) The OA Policy of Southampton University (ECS), UK: the "Keystroke" Strategy [Putting the Berlin Principle into Practice: the Southampton Keystroke Policy]. Delivered at Berlin 3 Open Access: Progress in Implementing the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities, University of Southampton (UK).

University OA mandates are natural extensions of universities' existing record-keeping, asset management, and performance-assessment policies. They complement research-funder OA mandates, and are the most efficient and productive way to monitor and credit compliance and fulfillment for both. Australia's Arthur Sale has done the most work on this. Please read what he has to say:

Arthur Sale: The evidence is quite clear that advocacy does not work by itself, and never has worked anywhere. To repeat the bleeding obvious once again: depositing in repositories is avoidable work under a voluntary regime, and like all avoidable work it will be avoided by most academics, even if perceived to be in their best interests, and even if the work is minor. The work needs to be (a) required and (b) integrated into the work pattern of researchers, so it becomes the norm. This is the purpose of mandates -- to make it clear to researchers that they are expected to do this work. My research and published papers show that mandates do work, and they take a couple of years for the message to sink in. Enforcement need only be a light touch -- reporting to heads of departments, for example. [See references below.]

At the risk of boring some, may I point to a similar case in Australia. All universities are required to produce an annual return to the Australian Government of publications in the previous year in the categories of refereed journal articles, refereed conference papers, books, and book chapters. The universities make this known to their staff (a mandate), and they all fill out forms and provide photocopies of the works. The workload is considerably more than depositing a paper in a repository. The scheme has been going for many years and is regarded as part of the academic routine. The data is used by Government to determine part of the university block grant. The result is near 100% compliance. What I am doing in Australia is pressing for this already existing mandate to be extended to the repositories.
If the researcher deposits in the repository, and the annual return is automatically derived from the repository, then (a) the researcher wins, because it takes him/her less time; (b) it takes the administrators less time, as the process is automated and only needs to be audited; and (c) the repository delivers its usual benefits, for those with eyes to see. All we need is for the research office to promulgate such a policy in each university. It is in their own interests as well as the university's.

Arthur Sale, University of Tasmania

Swan, A. and Brown, S. (2005) Open access self-archiving: An author study. JISC Technical Report, Key Perspectives Inc. http://eprints.ecs.soton.ac.uk/10999/

Sale, A. (2006) Researchers and institutional repositories. In Jacobs, N. (Ed.), Open Access: Key Strategic, Technical and Economic Aspects, chapter 9, pp. 87-100. Chandos Publishing (Oxford) Limited. http://eprints.utas.edu.au/257/

Sale, A. (2006) The Impact of Mandatory Policies on ETD Acquisition. D-Lib Magazine 12(4), April 2006. http://dx.doi.org/10.1045/april2006-sale

Sale, A. (2006) Comparison of content policies for institutional repositories in Australia. First Monday 11(4), April 2006. http://firstmonday.org/issues/issue11_4/sale/index.html

Sale, A. (2006) The acquisition of open access research articles. First Monday 11(10), October 2006. http://www.firstmonday.org/issues/issue11_10/sale/index.html

Sale, A. (2007) The Patchwork Mandate. D-Lib Magazine 13(1/2), January/February 2007. http://www.dlib.org/dlib/january07/sale/01sale.html
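To make Sale's proposal concrete: if each year's publications are already deposited in the repository with a publication year and category recorded, the government return reduces to a simple count over those deposit records. Below is a minimal sketch, assuming hypothetical file and field names (no actual repository schema or reporting system is implied):

    # Minimal sketch (not an actual system): derive the annual government
    # publication return automatically from institutional repository deposit
    # records, as Sale proposes above. The input file, its field names and
    # the category labels are hypothetical.
    import csv
    from collections import Counter

    RETURN_CATEGORIES = {
        "refereed journal article",
        "refereed conference paper",
        "book",
        "book chapter",
    }

    def annual_return(deposits_csv: str, year: int) -> Counter:
        """Count the given year's deposits in each reportable category."""
        counts = Counter()
        with open(deposits_csv, newline="") as f:
            for row in csv.DictReader(f):
                if int(row["year"]) == year and row["category"] in RETURN_CATEGORIES:
                    counts[row["category"]] += 1
        return counts

    if __name__ == "__main__":
        # e.g. the return for the previous year, compiled without any re-keying
        print(annual_return("repository_deposits.csv", 2006))

The point of the sketch is only that the return becomes a by-product of deposit: the administrators audit the output rather than re-keying it.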
Saturday, November 24. 2007

Victory for Labour, Research Metrics and Open Access Repositories in Australia

Posted by Arthur Sale in the American Scientist Open Access Forum:
Yesterday, Australia held a Federal Election. The Australian Labor Party (the previous opposition) have clearly won, with Kevin Rudd becoming the Prime Minister-elect.

Thursday, November 22. 2007

UK Research Evaluation Framework: Validate Metrics Against Panel Rankings

Once one sees the whole report, it turns out that the HEFCE/RAE Research Evaluation Framework is far better, far more flexible, and far more comprehensive than is reflected in either the press release or the Executive Summary. It appears that there is indeed the intention to use many more metrics than the three named in the Executive Summary (citations, funding, students), that the metrics will be weighted field by field, and that there is considerable open-mindedness about further metrics and about corrections and fine-tuning with time. Even for the humanities and social sciences, where "light touch" panel review will be retained for the time being, metrics too will be tried and tested.

This is all very good, and an excellent example for other nations, such as Australia (also considering national research assessment, with its Research Quality Framework), the US (not very advanced yet, but no doubt listening) and the rest of Europe (also listening, and planning measures of its own, such as EurOpenScholar).

There is still one prominent omission, however, and it is a crucial one: The UK is conducting one last parallel metrics/panel RAE in 2008. That is the last and best chance to test and validate the candidate metrics -- as rich and diverse a battery of them as possible -- against the panel rankings.

In all other fields of metrics -- biometrics, psychometrics, even weather-forecasting metrics -- before deployment the metric predictors first need to be tested and shown to be valid, which means showing that they do indeed predict what they were intended to predict. That means they must correlate with a "criterion" metric that has already been validated, or that has "face-validity" of some kind.

The RAE has been using the panel rankings for two decades now (at a great cost in wasted time and effort to the entire UK research community -- time and effort that could instead have been used to conduct the research that the RAE was evaluating: this is what the metric RAE is primarily intended to remedy). But if the panel rankings have been unquestioningly relied upon for two decades already, then they are a natural criterion against which the new battery of metrics can be validated, initializing the weight of each metric within a joint battery as a function of what percentage of the variation in the panel rankings each metric can predict.

This is called "multiple regression" analysis: N "predictors" are jointly correlated with one (or more) "criterion" (in this case the panel rankings, but other validated or face-valid criteria could also be added, if there were any). The result is a set of "beta" weights on each of the metrics, reflecting their individual predictive power in predicting the criterion (panel rankings). The weights will of course differ from discipline to discipline. Now these beta weights can be taken as an initialization of the metric battery. With time, "super-light" panel oversight can be used to fine-tune and optimize those weightings (and new metrics can always be added too), to correct errors and anomalies and make them reflect the values of each discipline.
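To make the validation step concrete, here is a minimal sketch of how such beta weights might be estimated, discipline by discipline, by regressing the parallel RAE 2008 panel rankings on a battery of candidate metrics. The input file, the column names and the particular metrics listed are hypothetical placeholders, not an actual RAE dataset:

    # Minimal sketch (hypothetical data and column names): estimate beta weights
    # for a battery of candidate metrics by regressing the parallel RAE 2008
    # panel rankings on the metrics, separately for each discipline.
    import pandas as pd
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import StandardScaler

    METRICS = ["citations_per_paper", "downloads", "prior_funding",
               "postgrad_students", "hub_authority", "interdisciplinarity"]

    df = pd.read_csv("rae2008_departments.csv")  # one row per department

    for discipline, group in df.groupby("discipline"):
        X = StandardScaler().fit_transform(group[METRICS])  # standardised metrics
        y = group["panel_ranking"].to_numpy()               # the validated criterion
        model = LinearRegression().fit(X, y)
        betas = dict(zip(METRICS, model.coef_.round(3)))    # initial per-metric weights
        print(f"{discipline}: R^2 = {model.score(X, y):.2f}, betas = {betas}")

The fitted weights would only initialise the battery; as argued above, they would then be fine-tuned over time, discipline by discipline, under light panel oversight.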
(The weights can also be systematically varied, to use the metrics to re-rank in terms of different blends of criteria that might be relevant for different decisions: RAE top-sliced funding is one sort of decision, but one might sometimes want to rank in terms of contributions to education, to industry, to internationality, or to interdisciplinarity. Metrics can be calibrated continuously and can generate different "views" depending on what is being evaluated. But, unlike the much-abused "university league table," which ranks on one metric at a time (and often a subjective, opinion-based one rather than an objective one), the RAE metrics could generate different views simply by changing the weights on some selected metrics, while retaining the other metrics as the baseline context and frame of reference.)

To accomplish all that, however, the metric battery needs to be rich and diverse, and the weight of each metric in the battery has to be initialised in a joint multiple regression on the panel rankings. It is very much to be hoped that HEFCE will commission this all-important validation exercise on the invaluable and unprecedented database it will have from the unique, one-time parallel panel/metrics RAE in 2008. That is the main point. There are also some less central points:

The report says -- a priori -- that the REF will not consider journal impact factors (average citations per journal), nor author impact (average citations per author): only average citations per paper, per department. This is a mistake. In a metric battery, these other metrics can be included, to test whether they make any independent contribution to the predictivity of the battery. The same applies to author publication counts, number of publishing years, number of co-authors -- even to impact before the evaluation period. (Possibly, included vs. non-included staff research output could be treated in a similar way, with the number and proportion of staff included also serving as metrics.)

The large battery of jointly validated and weighted metrics will make it possible to correct the potential bias from relying too heavily on prior funding, even if it is highly correlated with the panel rankings, in order to avoid a self-fulfilling prophecy which would simply collapse the Dual RAE/RCUK funding system into just a multiplier on prior RCUK funding.

Self-citations should not simply be excluded: they should be included independently in the metric battery, for validation. So should measures of the size of the citation circle (endogamy) and the degree of interdisciplinarity.

Nor should the metric battery omit the newest and some of the most important metrics of all, the online, web-based ones: downloads of papers, links, growth rates, decay rates, hub/authority scores. All of these will be provided by the UK's growing network of Institutional Repositories. These will be the record-keepers -- for both the papers and their usage metrics -- and the access-providers, thereby maximizing their usage metrics. The REF should put much, much more emphasis on ensuring that the UK network of Institutional Repositories systematically and comprehensively records the UK's research output and its metric performance indicators.

But overall, thumbs up for a promising initiative that is likely to serve as a useful model for the rest of the research world in the online era.

References
Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35.

Brody, T., Kampa, S., Harnad, S., Carr, L. and Hitchcock, S. (2003) Digitometric Services for Open Archives Environments. In Proceedings of the European Conference on Digital Libraries 2003, pp. 207-220, Trondheim, Norway.

Harnad, S. (2006) Online, Continuous, Metrics-Based Research Assessment. Technical Report, ECS, University of Southampton.

Harnad, S. (2007) Open Access Scientometrics and the UK Research Assessment Exercise. In Proceedings of the 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds.

Brody, T., Carr, L., Harnad, S. and Swan, A. (2007) Time to Convert to Metrics. Research Fortnight, pp. 17-18.

Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan, A. (2007) Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics. CTWatch Quarterly 3(3).

See also: Prior Open Access Archivangelism postings on RAE and metrics

Stevan Harnad
American Scientist Open Access Forum

Thursday, November 8. 2007

UUK report looks at the use of bibliometrics

Comments on the UUK Press Release, 8 November 2007:

What metrics count as "bibliometrics"? Do downloads? Hubs/authorities? Interdisciplinarity metrics? Endogamy/exogamy metrics? Chronometrics? Semiometrics?

"There is evidence that bibliometric indices do correlate with other, quasi-independent measures of research quality - such as RAE grades - across a range of fields in science and engineering."

Meaning that citation counts correlate with panel rankings in all disciplines tested so far. Correct.

"There is a range of bibliometric variables as possible quality indicators. There are strong arguments against the use of (i) output volume (ii) citation volume (iii) journal impact and (iv) frequency of uncited papers."

The "strong" arguments are against using any of these variables alone, or without testing and validation. They are not arguments against including them in the battery of candidate metrics to be tested, validated and weighted against the panel rankings, discipline by discipline, in a multiple regression equation.

"'Citations per paper' is a widely accepted index in international evaluation. Highly-cited papers are recognised as identifying exceptional research activity."

Citations per paper is one (strong) candidate metric among many, all of which should be co-tested, via multiple regression analysis, against the parallel RAE panel rankings (and other validated or face-valid performance measures).

"Accuracy and appropriateness of citation counts are a critical factor."

Not clear what this means. ISI citation counts should be supplemented by other citation counts, such as Scopus, Google Scholar, Citeseer and Citebase: each can be a separate metric in the metric equation. Citations from and to books are especially important in some disciplines.

"There are differences in citation behaviour among STEM and non-STEM as well as different subject disciplines."

And probably among many other disciplines too. That is why each discipline's regression equation needs to be validated separately. This will yield a different constellation of metrics, as well as different beta weights on the metrics, for different disciplines.

"Metrics do not take into account contextual information about individuals, which may be relevant."

What does this mean? Age, years since degree, discipline, etc. are all themselves metrics, and can be added to the metric equation.
"They also do not always take into account research from across a number of disciplines."

Interdisciplinarity is a measurable metric. There are self-citations, co-author citations, small citation circles, specialty-wide citations, discipline-wide citations, and cross-disciplinary citations. These are all endogamy/exogamy metrics. They can be given different weights in fields where, say, interdisciplinarity is highly valued. (A sketch of how such endogamy/exogamy measures might be computed appears at the end of this post.)

"The definition of the broad subject groups and the assignment of staff and activity to them will need careful consideration."

Is this about RAE panels? Or about how to distribute researchers by discipline or other grouping?

"Bibliometric indicators will need to be linked to other metrics on research funding and on research postgraduate training."

"Linked"? All metrics need to be considered jointly, in a multiple regression equation, with the panel rankings (and other validated or face-valid criterion metrics).

"There are potential behavioural effects of using bibliometrics which may not be picked up for some years."

Yes, metrics will shape behaviour (just as panel rankings shaped behaviour), sometimes for the better, sometimes for the worse. Metrics can be abused -- but abuses can also be detected, named and shamed, so there are deterrents and correctives.

"There are data limitations where researchers' outputs are not comprehensively catalogued in bibliometrics databases."

The obvious solution for this is Open Access: all UK researchers should deposit all their research output in their Institutional Repositories (IRs). Where it is not possible to set access to a deposit as Open Access, access can be set as Closed Access, but the bibliographic metadata will still be there. (The IRs will not only provide access to the texts and the metadata; they will also generate further metrics, such as download counts, chronometrics, etc.)

"The report comes ahead of the HEFCE consultation on the future of research assessment expected to be announced later this month. Universities UK will consult members once this is published."

Let's hope both UUK and HEFCE are still open-minded about ways to optimise the transition to metrics!

References

Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35.

Brody, T., Kampa, S., Harnad, S., Carr, L. and Hitchcock, S. (2003) Digitometric Services for Open Archives Environments. In Proceedings of the European Conference on Digital Libraries 2003, pp. 207-220, Trondheim, Norway.

Harnad, S. (2006) Online, Continuous, Metrics-Based Research Assessment. Technical Report, ECS, University of Southampton.

Harnad, S. (2007) Open Access Scientometrics and the UK Research Assessment Exercise. In Proceedings of the 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds.

Brody, T., Carr, L., Harnad, S. and Swan, A. (2007) Time to Convert to Metrics. Research Fortnight, pp. 17-18.

Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan, A. (2007) Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics. CTWatch Quarterly 3(3).

See also: Prior Open Access Archivangelism postings on RAE and metrics

Stevan Harnad
American Scientist Open Access Forum
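As promised above, here is a minimal, illustrative sketch of how endogamy/exogamy indicators such as self-/co-author citation rates and interdisciplinarity might be operationalised for a single paper. The record structure and the example data are hypothetical, and these simple proportions are only one possible way of defining such metrics:

    # Minimal, illustrative sketch (hypothetical record structure): two of the
    # endogamy/exogamy indicators mentioned above, computed for a single paper
    # from its incoming citations.
    from dataclasses import dataclass

    @dataclass
    class Work:
        authors: set[str]
        discipline: str

    def endogamy_metrics(cited: Work, citing: list[Work]) -> dict[str, float]:
        """Share of citations from the cited paper's own authors/co-authors,
        and share coming from outside its own discipline."""
        n = len(citing) or 1
        self_or_coauthor = sum(1 for w in citing if w.authors & cited.authors)
        cross_discipline = sum(1 for w in citing if w.discipline != cited.discipline)
        return {"self_or_coauthor_rate": self_or_coauthor / n,
                "interdisciplinarity": cross_discipline / n}

    # Example: a physics paper cited once by one of its own authors and once
    # from chemistry -- half endogamous, half interdisciplinary.
    paper = Work({"smith", "jones"}, "physics")
    cites = [Work({"smith", "lee"}, "physics"), Work({"chan"}, "chemistry")]
    print(endogamy_metrics(paper, cites))

Indicators of this kind would enter the validation exercise as candidate metrics like any other, to be weighted (or dropped) discipline by discipline.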
Friday, October 12. 2007

UK RAE Reform Should Be Evidence-Based

The UK Research Assessment Exercise has taken a few steps forward and a few steps back:
(1) In evaluating and rewarding the research performance of universities, department by department, future RAEs (after 2008) will no longer, as before, assess only 4 selected papers per researcher, among those researchers selected for inclusion: all papers, by all departmental researchers, will be assessed. (Step forward.)

As I have pointed out many times before, (i) prior research income, if given too much weight, becomes a self-fulfilling prophecy, and reduces the RAE to a multiplication factor on competitive research funding. The result would be that instead of the current two autonomous components in the UK's Dual Support System (RAE and RCUK), there would be only one: RCUK (and other) competitive proposal funding, multiplied by an RAE metric rank dominated by prior funding.

To counterbalance this, a rich spectrum of potential metrics needs to be tested in the 2008 RAE and validated against the panel review rankings, which will still be collected in the 2008 parallel RAE. Besides (i) research income, (ii) postgraduate student counts, and (iii) journal impact factors, there is a vast spectrum of other candidate metrics, including (iv) citation metrics for each article itself (rather than just its journal's average), (v) download metrics, (vi) citation and download growth-curve metrics, (vii) co-citation metrics, (viii) hub/authority metrics, (ix) endogamy/interdisciplinarity metrics, (x) book citation metrics, (xi) web link metrics, (xii) comment tag metrics, (xiii) course-pack metrics, and many more. All these candidate metrics should be tested and validated against the panel rankings in RAE 2008, in a multiple regression equation. The selection and weighting of each metric should be adjusted, discipline by discipline, rationally and empirically, rather than a priori, as is being proposed now.

Harnad, S. (2007) Open Access Scientometrics and the UK Research Assessment Exercise. In Proceedings of the 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds.

(I might add that RCUK's plans to include "potential economic benefits to the UK" among the criteria for competitive research funding could do with a little more rational and empirical support too, rather than being adopted a priori.)

Stevan Harnad
American Scientist Open Access Forum

Sunday, August 26. 2007

Validating Open Access Metrics for RAE 2008

The United Kingdom's Research Assessment Exercise (RAE) is doing two things right. There are also two things it is planning to do that are currently problematic, but that could easily be made right. Let's start with what the RAE is already doing right:

(+1) It is a good idea to have a national research performance evaluation to monitor and reward research productivity and progress. Other countries will be following and eventually emulating the UK's lead. (Australia is already emulating it.)

But, as with all policies that are being shaped collectively by disparate (and sometimes under-informed) policy-making bodies, two very simple and remediable flaws in the reformed RAE system have gone undetected and hence uncorrected. They can still be corrected, and there is still hope that they will be, as they are small, easily fixed flaws; but, if left unfixed, they will have negative consequences, compromising the RAE as well as the RAE reforms:

(-1) The biggest flaw concerns the metrics that will be used.
Metrics first have to be tested and validated, discipline by discipline, to ensure that they are accurate indicators of research performance. Since the UK has relied on the RAE panel evaluations for two decades, and since the last RAE (2008) before the conversion to metrics is to be a parallel panel/metrics exercise, the natural thing to do is to test as many candidate metrics as possible in this exercise, and to cross-validate them against the rankings given by the panels, separately, in each discipline. (Which metrics are valid performance indicators will differ from discipline to discipline.)

Hence the prior-funding metric (-1a) needs to be used cautiously, to avoid bias and self-fulfilling prophecy; and the citation-count metric (-1b) is a good candidate, but only one of many potential metrics that can and should be tested in the parallel RAE 2008 metric/panel exercise. (Other metrics include co-citation counts, download counts, download and citation growth and longevity counts, hub/authority scores, interdisciplinarity scores, and many other rich measures for which RAE 2008 is the ideal time to do the testing and validation, discipline by discipline, as it is virtually certain that disciplines will differ in which metrics are predictive for them, and in what the weighting of each metric should be.)

Yet it looks as if RAE 2008 and HEFCE are not currently planning to commission this all-important validation analysis, testing a rich array of candidate metrics against the panel rankings. This is a huge flaw and oversight, although it can still be easily remedied by going ahead and doing such a systematic cross-validation study after all.

(-1a) Prior research funding has already been shown to be extremely highly correlated with the RAE panel rankings in a few (mainly scientific) disciplines, but this was undoubtedly because the panels, in making their rankings, already had those metrics in hand, as part of the submission. Hence the panels themselves could explicitly (or implicitly) count them in making their judgments! Now, a correlation between metrics and panel rankings is desirable initially, because that is the way to launch and validate the candidate metrics. In the case of this particular metric, however, not only is there a potential interaction, indeed a bias, that makes the prior-funding metric and the panel ranking non-independent, and hence invalidates the test of this metric's validity; there is also a deeper reason for not putting a lot of weight on the prior-funding metric: weighted too heavily, it turns the RAE into little more than a multiplier on prior competitive funding -- a self-fulfilling prophecy that would collapse the Dual Support System into one.

For such a systematic metric/panel cross-validation study in RAE 2008, however, the array of candidate metrics has to be made as rich and diverse as possible. The RAE is not currently making any effort to collect as many potential metrics as possible in RAE 2008, and this is partly because it is overlooking the growing importance of online, Open Access metrics -- and indeed overlooking the growing importance of Open Access itself, both in research productivity and progress and in evaluating it.

Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan, A. (2007) Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics. CTWatch Quarterly 3(3).

This brings us to the second flaw in HEFCE's RAE 2008 plans:

(-2) For no logical or defensible reason at all, RAE 2008 is insisting that researchers submit the publishers' PDFs for the 2008 exercise.
Now, it does represent some progress that the RAE is accepting electronic drafts rather than requiring hard copy, as in past years. But in insisting that those electronic drafts must be the publisher's PDF, the RAE is creating two unnecessary problems.

To recapitulate: two pluses -- (+1) national research performance assessment itself, and (+2) conversion to metrics -- plus two (correctable) minuses -- (-1) failure to provide explicitly for the systematic evaluation of a rich candidate spectrum of metrics against the RAE 2008 panel rankings, and (-2) failure to require deposit of the authors' papers in their own IRs, to generate more OA metrics, more OA, and more UK research impact.

(-2a) One unnecessary problem, a minor one, is that the RAE imagines that, in order to have the publisher's PDF for evaluation, it needs to seek (or even pay for) permission from the publisher. This is complete nonsense! Researchers (i.e., the authors) submit their own published work to the RAE for evaluation. For the researchers, this is Fair Dealing (Fair Use), and no publisher permission or payment whatsoever is needed. (As it happens, I believe HEFCE has worked out a "special arrangement" whereby publishers "grant permission" and "waive payment." But the completely incorrect notion that permission or payment were even at issue, in principle, has an important negative consequence, which I will now describe.)

The good news is that there is still time to fully remedy (-1) and (-2), if only policy-makers take a moment to listen, think it through, and do the little that needs to be done to fix them.

Appendix: Are Panel Rankings Face-Valid?

It is important to allay a potential misunderstanding: it is definitely not the case that the RAE panel rankings are themselves infallible or face-valid! The panelists are potentially biased in many ways. And RAE panel review was never really "peer review," because peer review means consulting the most qualified specialists in the world for each specific paper, whereas the panels are just generic UK panels, evaluating all the UK papers in their discipline: it is the journals that already conducted the peer review. So metrics are needed not just to put an end to the waste and the cost of the existing RAE, but also to try to put the outcome on a more reliable, objective, valid and equitable basis. The idea is not to duplicate the outcome of the panels, but to improve on it.

Nevertheless -- and this is the critical point -- the metrics do have to be validated; and, as an essential first step, they have to be cross-validated against the panel rankings, discipline by discipline. For even though those panel rankings are and always were flawed, they are what the RAE has been relying upon, completely, for two decades. So the first step is to make sure that the metrics are chosen and weighted so as to get as close a fit to the panel rankings as possible, discipline by discipline. Then, and only then, can the "ladder" of the panel rankings -- which got us where we are -- be tossed away, allowing us to rely on the metrics alone, which can then be continuously calibrated and optimised in future years, with feedback from future meta-panels that monitor the rankings generated by the metrics and, if necessary, adjust and fine-tune the metric weights, or even add new, still-to-be-discovered-and-tested metrics to them.

In sum: despite their warts, the current RAE panel rankings need to be used to bootstrap the new metrics into usability.
Without that prior validation against what has been used until now, the metrics are just hanging from a skyhook, and no one can say whether or not they measure what the RAE panels have been measuring until now. Without validation, there is no continuity in the RAE, and it is not really a "conversion" to metrics, but simply an abrupt switch to another, untested assessment tool. (Citation counts have been tested elsewhere, in other fields; but as there has never been anything of the scope and scale of the UK RAE, across all disciplines in an entire country's research output, the prior patchwork testing of citation counts as research performance indicators is nowhere near providing the evidence that would be needed to make a reliable, valid choice of metrics for the UK RAE: only cross-validation within the RAE parallel metric/panel exercise itself -- jointly with a rich spectrum of other candidate metrics -- can provide that kind of evidence, and the requisite continuity, for a smooth, rational transition from panel rankings to metrics.)

Stevan Harnad
American Scientist Open Access Forum

Thursday, December 14. 2006

The Death of Peer Review? Rumors Premature...

(All quotes are from "The death of peer review" by Natasha Gilbert, Research notes, The Guardian, Tuesday December 12, 2006.)

(1) Peer review of research publications is conducted by the referees consulted by peer-reviewed journals.
(2) Peer review of competitive research grant applications is conducted by the referees consulted by research funding councils (RCUK).
(3) The RAE (Research Assessment Exercise) is neither a research journal nor a competitive research grant funding council.
(4) The RAE is part of a dual research funding system: (i) competitive research grant applications plus (ii) top-sliced funding based on the RAE ranking of each university department's research performance.
(5) The RAE panel review is not peer review, and never has been peer review: it is a time-consuming, wasteful re-review of already peer-reviewed publications.
(6) "Metrics" are statistical indicators of research performance, such as publication counts, citations, downloads, links, students, funding, etc.
(7) Metrics are already highly correlated with RAE rankings.
(8) What has (at long last) been replaced by metrics is the time-consuming, wasteful RAE panel re-review of already peer-reviewed publications.

We should be celebrating the long-overdue death of RAE panel re-review, not prematurely feting the demise of peer review itself, which is alive and well. A more worrisome question concerns which metrics will be used:

Guardian: "From 2010-11, science, engineering, technology and medicine (SET) subjects will instead be assessed using statistical indicators, such as the number of postgraduate students in a department and the amount of money a department brings in through its research."

The fallacy here is that the RAE is supposed to be part of a dual funding system. If competitive funding is used as a heavily weighted metric, that is tantamount to collapsing it all into just one system -- competitive grant applications -- and merely increasing the amount of money given to the winners: a self-fulfilling prophecy and a whopping "Matthew Effect." Yet in the OA world there is a rich variety of potential metrics, which should be tested, validated and customised to each discipline. Metrics will put an end to wasting UK researchers' time re-reviewing and being re-reviewed, allowing them to devote their time instead to doing research.
But a biased and blinkered choice of metrics will sound the death-knell of the dual funding system (not peer review).

Let 1000 RAE Metric Flowers Bloom: Avoid Matthew Effect as Self-Fulfilling Prophecy

Guardian: "This new system should solve the much-complained-about bureaucracy of the research assessment exercise (RAE). But some, such as the Royal Society, the UK's academy of science, are adamant that sounding the death-knell for peer review in SET subjects is a bad move."

Stevan Harnad
American Scientist Open Access Forum

Saturday, December 9. 2006

Open Research Metrics

Peter Suber: "If the metrics have a stronger OA connection, can you say something short (by email or on the blog) that I could quote for readers who aren't clued in, esp. readers outside the UK?"

(1) In the UK (Research Assessment Exercise, RAE) and Australia (Research Quality Framework, RQF), all researchers and institutions are evaluated for "top-sliced" funding, over and above competitive research proposals.

(2) Everywhere in the world, researchers and research institutions have research performance evaluations, on which careers/salaries, research funding, economic benefits, and institutional/departmental ratings depend.

(3) There is now a natural synergy growing between OA self-archiving, Institutional Repositories (IRs), OA self-archiving mandates, and the online "metrics" toward which both the RAE/RQF and research evaluation in general are moving.

(4) Each institution's IR is the natural place from which to derive and display research performance indicators: publication counts, citation counts, download counts, and many new metrics, rich and diverse ones, that will be mined from the OA corpus, making research evaluation much more open, sensitive to diversity, adapted to each discipline, predictive, and equitable. (A simple sketch of deriving such IR-based indicators follows at the end of this post.)

(5) OA self-archiving not only allows performance indicators (metrics) to be collected and displayed, and new metrics to be developed; OA also enhances the metrics themselves (research impact), both competitively (OA vs. non-OA) and absolutely (the Quality Advantage: OA benefits the best work the most; and the Early Advantage), as well as making possible the data-mining of the OA corpus for research purposes. (Research Evaluation, Research Navigation, and Research Data-Mining are all very closely related.)

(6) This powerful and promising synergy between Open Research and Open Metrics is hence also a strong incentive for institutional and funder OA mandates, which will in turn hasten 100% OA: their connection needs to be made clear, and the message needs to be spread to researchers, their institutions, and their funders. (Needless to say, closed, internal, non-displayed metrics are also feasible, where appropriate.)

Pertinent Prior AmSci Topic Threads:

Stevan Harnad
American Scientist Open Access Forum
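To illustrate point (4): a minimal sketch of how an institutional repository's records might be aggregated into simple departmental performance indicators (publication, citation and download counts). The file and field names are hypothetical placeholders, not any particular repository's schema:

    # Minimal sketch (hypothetical files and field names): aggregate an
    # institutional repository's records into simple departmental indicators --
    # publication counts, citation counts and download counts -- as in point (4).
    import pandas as pd

    records = pd.read_csv("ir_records.csv")      # one row per deposited item
    downloads = pd.read_csv("ir_downloads.csv")  # one row per recorded download

    per_item = (downloads.groupby("item_id").size()
                         .rename("downloads").reset_index())
    records = records.merge(per_item, on="item_id", how="left").fillna({"downloads": 0})

    indicators = records.groupby("department").agg(
        publications=("item_id", "count"),
        citations=("citation_count", "sum"),
        downloads=("downloads", "sum"),
    )
    print(indicators.sort_values("publications", ascending=False))

Richer metrics (growth curves, hub/authority scores, interdisciplinarity measures) would be derived the same way, from the same openly accessible records, and would then enter the validation exercise described in the posts above.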
The American Scientist Open Access Forum has been chronicling, and often directing, the course of progress in providing Open Access to universities' peer-reviewed research articles since its inception in the US in 1998 by the American Scientist, published by the Sigma Xi Society. The Forum is largely for policy-makers at universities, research institutions and research funding agencies worldwide who are interested in institutional Open Access provision policy. (It is not a general discussion group for serials, pricing or publishing issues: it is specifically focussed on institutional Open Access policy.)
You can sign on to the Forum here.