Wednesday, November 28. 2007

Administrative Keystroke Mandates To Record Research Output Can Serve As Open Access Mandates Too

There is no need to keep waiting for governmental OA mandates.

Harnad, Stevan (2005) The OA Policy of Southampton University (ECS), UK: the "Keystroke" Strategy [Putting the Berlin Principle into Practice: the Southampton Keystroke Policy]. Delivered at Berlin 3 Open Access: Progress in Implementing the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities, University of Southampton (UK).

University OA mandates are natural extensions of universities' existing record-keeping, asset-management, and performance-assessment policies. They complement research-funder OA mandates, and they are the most efficient and productive way to monitor and credit compliance and fulfillment for both. Australia's Arthur Sale has done the most work on this. Please read what he has to say:

Arthur Sale:

The evidence is quite clear that advocacy does not work by itself, and never has worked anywhere. To repeat the bleeding obvious once again: depositing in repositories is avoidable work under a voluntary regime, and like all avoidable work it will be avoided by most academics, even if perceived to be in their best interests, and even if the work is minor. The work needs to be (a) required and (b) integrated into the work pattern of researchers, so it becomes the norm. This is the purpose of mandates: to make it clear to researchers that they are expected to do this work. My research and published papers show that mandates do work, and that they take a couple of years for the message to sink in. Enforcement need only be a light touch -- reporting to heads of departments, for example. [See references below.]

At the risk of boring some, may I point to a similar case in Australia. All universities are required to produce an annual return to the Australian Government of publications in the previous year, in the categories of refereed journal articles, refereed conference papers, books, and book chapters. The universities make this known to their staff (a mandate), and they all fill out forms and provide photocopies of the works. The workload is considerably more than depositing a paper in a repository. The scheme has been going for many years and is regarded as part of the academic routine. The data is used by Government to determine part of the university block grant. The result is near-100% compliance.

What I am doing in Australia is pressing for this already existing mandate to be extended to the repositories. If the researcher deposits in the repository, and the annual return is automatically derived from the repository, then (a) the researcher wins, because it takes him/her less time; (b) it takes the administrators less time, as the process is automated and only needs to be audited; and (c) the repository delivers its usual benefits, for those with eyes to see. All we need is for the research office to promulgate such a policy in each university. It is in their own interests as well as the university's.

Arthur Sale
University of Tasmania
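To make Sale's proposal concrete, here is a minimal sketch of how an annual return could be derived automatically from repository deposits. The record structure, field names and category labels are illustrative assumptions, not any particular repository's schema or the official Australian reporting format.

```python
# A minimal sketch (not the actual Australian return format): deriving an
# annual publication return from hypothetical repository records, so that
# researchers deposit once and administrators only audit the output.
from collections import Counter
import csv

# Illustrative labels corresponding to the four reporting categories.
CATEGORIES = [
    "refereed_journal_article",
    "refereed_conference_paper",
    "book",
    "book_chapter",
]

def annual_return(records, year):
    """Count deposited items per reporting category for one calendar year."""
    counts = Counter(
        r["type"] for r in records
        if r["year"] == year and r["type"] in CATEGORIES
    )
    return {category: counts.get(category, 0) for category in CATEGORIES}

def write_return(records, year, path):
    """Write the counts to a CSV file that the research office can audit."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["category", "count"])
        for category, count in annual_return(records, year).items():
            writer.writerow([category, count])

# Two hypothetical deposits; the real data would come from the repository.
records = [
    {"type": "refereed_journal_article", "year": 2006, "title": "Paper A"},
    {"type": "book_chapter", "year": 2006, "title": "Chapter B"},
]
print(annual_return(records, 2006))
```

The point of the sketch is only that the return becomes a by-product of deposit: researchers perform the keystrokes once, and the research office audits rather than re-keys.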
References

Swan, A. and Brown, S. (2005) Open access self-archiving: An author study. JISC Technical Report, Key Perspectives Inc. http://eprints.ecs.soton.ac.uk/10999/

Sale, Arthur (2006) Researchers and institutional repositories. In Jacobs, Neil (Ed.) Open Access: Key Strategic, Technical and Economic Aspects, chapter 9, pp. 87-100. Chandos Publishing (Oxford) Limited. http://eprints.utas.edu.au/257/

Sale, A. (2006) The Impact of Mandatory Policies on ETD Acquisition. D-Lib Magazine 12(4), April 2006. http://dx.doi.org/10.1045/april2006-sale

Sale, A. (2006) Comparison of content policies for institutional repositories in Australia. First Monday 11(4), April 2006. http://firstmonday.org/issues/issue11_4/sale/index.html

Sale, A. (2006) The acquisition of open access research articles. First Monday 11(10), October 2006. http://www.firstmonday.org/issues/issue11_10/sale/index.html

Sale, A. (2007) The Patchwork Mandate. D-Lib Magazine 13(1/2), January/February 2007. http://www.dlib.org/dlib/january07/sale/01sale.html

Saturday, November 24. 2007

Victory for Labour, Research Metrics and Open Access Repositories in Australia
Posted by Arthur Sale in the American Scientist Open Access Forum:
Yesterday, Australia held a Federal Election. The Australian Labor Party (the previous opposition) have clearly won, with Kevin Rudd becoming the Prime Minister-elect.

Thursday, November 22. 2007

UK Research Evaluation Framework: Validate Metrics Against Panel Rankings

Once one sees the whole report, it turns out that the HEFCE/RAE Research Evaluation Framework is far better, far more flexible, and far more comprehensive than is reflected in either the press release or the Executive Summary. It appears that there is indeed the intention to use many more metrics than the three named in the Executive Summary (citations, funding, students), that the metrics will be weighted field by field, and that there is considerable open-mindedness about further metrics and about corrections and fine-tuning over time. Even for the humanities and social sciences, where "light touch" panel review will be retained for the time being, metrics too will be tried and tested.

This is all very good, and an excellent example for other nations, such as Australia (also considering national research assessment, with its Research Quality Framework), the US (not very advanced yet, but no doubt listening) and the rest of Europe (also listening, and planning measures of its own, such as EurOpenScholar).

There is still one prominent omission, however, and it is a crucial one: The UK is conducting one last parallel metrics/panel RAE in 2008. That is the last and best chance to test and validate the candidate metrics -- as rich and diverse a battery of them as possible -- against the panel rankings.

In all other fields of metrics -- biometrics, psychometrics, even weather-forecasting metrics -- the metric predictors first need to be tested and shown to be valid before deployment, which means showing that they do indeed predict what they were intended to predict. That means they must correlate with a "criterion" metric that has already been validated, or that has "face validity" of some kind.

The RAE has been using the panel rankings for two decades now (at a great cost in wasted time and effort to the entire UK research community -- time and effort that could instead have been used to conduct the research that the RAE was evaluating: this is what the metric RAE is primarily intended to remedy). But if the panel rankings have been unquestioningly relied upon for two decades already, then they are a natural criterion against which the new battery of metrics can be validated, initializing the weight of each metric within a joint battery as a function of what percentage of the variation in the panel rankings each metric can predict.

This is called "multiple regression" analysis: N "predictors" are jointly correlated with one (or more) "criterion" (in this case the panel rankings, but other validated or face-valid criteria could also be added, if there were any). The result is a set of "beta" weights on each of the metrics, reflecting their individual predictive power in predicting the criterion (the panel rankings). The weights will of course differ from discipline to discipline.

Now these beta weights can be taken as an initialization of the metric battery. With time, "super-light" panel oversight can be used to fine-tune and optimize those weightings (and new metrics can always be added too), to correct errors and anomalies and to make them reflect the values of each discipline.
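A minimal sketch of that validation step, on synthetic data: z-score a battery of candidate metrics and the panel rankings for one discipline, fit an ordinary least-squares regression, and read off the standardized beta weights. The metric names, the synthetic numbers and the plain OLS fit are illustrative assumptions, not HEFCE's procedure.

```python
# Minimal sketch, on synthetic data: regress RAE panel rankings on a battery
# of candidate metrics for one discipline and obtain standardized beta weights.
# Metric names and data are made up; this is not HEFCE's actual procedure.
import numpy as np

def standardized_betas(metrics, panel_rankings):
    """OLS on z-scored predictors and criterion, returning beta weights."""
    X = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)
    y = (panel_rankings - panel_rankings.mean()) / panel_rankings.std()
    betas, *_ = np.linalg.lstsq(X, y, rcond=None)
    return betas

rng = np.random.default_rng(0)
metric_names = ["citations_per_paper", "downloads", "prior_funding", "coauthors"]

# Rows = departments in one discipline, columns = candidate metrics.
X = rng.normal(size=(40, len(metric_names)))
# Synthetic panel rankings, loosely driven by two of the metrics plus noise.
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=40)

for name, beta in zip(metric_names, standardized_betas(X, y)):
    print(f"{name}: beta = {beta:+.2f}")
```

Repeating the fit for each discipline yields a separate weight profile per field; that per-discipline initialization is what is proposed above.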
(The weights can also be systematically varied to use the metrics to re-rank in terms of different blends of criteria that might be relevant for different decisions: RAE top-sliced funding is one sort of decision, but one might sometimes want to rank in terms of contributions to education, to industry, to internationality, or to interdisciplinarity. Metrics can be calibrated continuously and can generate different "views" depending on what is being evaluated. But, unlike the much-abused "university league table," which ranks on one metric at a time (and often a subjective, opinion-based one rather than an objective one), the RAE metrics could generate different views simply by changing the weights on some selected metrics, while retaining the other metrics as the baseline context and frame of reference; a sketch of such re-weighting appears at the end of this post, just before the references.)

To accomplish all that, however, the metric battery needs to be rich and diverse, and the weight of each metric in the battery has to be initialised in a joint multiple regression on the panel rankings. It is very much to be hoped that HEFCE will commission this all-important validation exercise on the invaluable and unprecedented database they will have from the unique, one-time parallel panel/metric RAE in 2008.

That is the main point. There are also some less central points:

The report says -- a priori -- that the REF will not consider journal impact factors (average citations per journal), nor author impact (average citations per author): only average citations per paper, per department. This is a mistake. In a metric battery, these other metrics can be included, to test whether they make any independent contribution to the predictivity of the battery. The same applies to author publication counts, number of publishing years, number of co-authors -- even to impact before the evaluation period. (Possibly included vs. non-included staff research output could be treated in a similar way, with the number and proportion of staff included also being metrics.)

The large battery of jointly validated and weighted metrics will make it possible to correct the potential bias from relying too heavily on prior funding, even if it is highly correlated with the panel rankings, in order to avoid a self-fulfilling prophecy which would simply collapse the dual RAE/RCUK funding system into just a multiplier on prior RCUK funding.

Self-citations should not simply be excluded: they should be included independently in the metric battery, for validation. So should measures of the size of the citation circle (endogamy) and degree of interdisciplinarity.

Nor should the metric battery omit the newest and some of the most important metrics of all, the online, web-based ones: downloads of papers, links, growth rates, decay rates, hub/authority scores. All of these will be provided by the UK's growing network of Institutional Repositories. These will be the record-keepers -- for both the papers and their usage metrics -- and the access-providers, thereby maximizing their usage metrics. The REF should put much, much more emphasis on ensuring that the UK network of Institutional Repositories systematically and comprehensively records its research output and its metric performance indicators.

But overall, thumbs up for a promising initiative that is likely to serve as a useful model for the rest of the research world in the online era.
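As promised above, here is a minimal sketch of generating different "views" by re-weighting one and the same metric battery. The departments, metrics, normalized scores and weight blends are invented for illustration; they are not REF values.

```python
# Minimal sketch: two different "views" obtained by re-weighting the same
# metric battery. Departments, metrics and weights are invented.
import numpy as np

metric_names = ["citations_per_paper", "downloads", "industry_income", "interdisciplinarity"]
departments = ["Dept A", "Dept B", "Dept C"]

# Rows = departments, columns = metrics, already normalized to comparable scales.
M = np.array([
    [0.9, 0.4, 0.1, 0.3],
    [0.5, 0.8, 0.6, 0.2],
    [0.3, 0.5, 0.9, 0.8],
])

def ranked_view(weights):
    """Rank departments by a weighted blend of the metric battery."""
    scores = M @ np.asarray(weights)
    order = np.argsort(-scores)  # highest weighted score first
    return [(departments[i], round(float(scores[i]), 2)) for i in order]

# A citation/usage-oriented view versus an industry-contribution view:
print(ranked_view([0.6, 0.3, 0.05, 0.05]))
print(ranked_view([0.1, 0.1, 0.7, 0.1]))
```

The two weight vectors produce different rankings from the same underlying data, which is the sense in which one battery can serve several evaluation purposes.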
References

Harnad, S., Carr, L., Brody, T. and Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35.

Brody, T., Kampa, S., Harnad, S., Carr, L. and Hitchcock, S. (2003) Digitometric Services for Open Archives Environments. In Proceedings of the European Conference on Digital Libraries 2003, pp. 207-220, Trondheim, Norway.

Harnad, S. (2006) Online, Continuous, Metrics-Based Research Assessment. Technical Report, ECS, University of Southampton.

Harnad, S. (2007) Open Access Scientometrics and the UK Research Assessment Exercise. In Torres-Salinas, D. and Moed, H. F. (Eds.) Proceedings of the 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain.

Brody, T., Carr, L., Harnad, S. and Swan, A. (2007) Time to Convert to Metrics. Research Fortnight, pp. 17-18.

Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan, A. (2007) Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics. CTWatch Quarterly 3(3).

See also: Prior Open Access Archivangelism postings on RAE and metrics

Stevan Harnad
American Scientist Open Access Forum

Thursday, November 8. 2007

UUK report looks at the use of bibliometrics

Comments on the UUK Press Release of 8 November 2007:

What metrics count as "bibliometrics"? Do downloads? Hubs/authorities? Interdisciplinarity metrics? Endogamy/exogamy metrics? Chronometrics? Semiometrics?

"There is evidence that bibliometric indices do correlate with other, quasi-independent measures of research quality - such as RAE grades - across a range of fields in science and engineering."

Meaning that citation counts correlate with panel rankings in all disciplines tested so far. Correct.

"There is a range of bibliometric variables as possible quality indicators. There are strong arguments against the use of (i) output volume (ii) citation volume (iii) journal impact and (iv) frequency of uncited papers."

The "strong" arguments are against using any of these variables alone, or without testing and validation. They are not arguments against including them in the battery of candidate metrics to be tested, validated and weighted against the panel rankings, discipline by discipline, in a multiple regression equation.

"'Citations per paper' is a widely accepted index in international evaluation. Highly-cited papers are recognised as identifying exceptional research activity."

Citations per paper is one (strong) candidate metric among many, all of which should be co-tested, via multiple regression analysis, against the parallel RAE panel rankings (and other validated or face-valid performance measures).

"Accuracy and appropriateness of citation counts are a critical factor."

Not clear what this means. ISI citation counts should be supplemented by other citation counts, such as Scopus, Google Scholar, Citeseer and Citebase: each can be a separate metric in the metric equation. Citations from and to books are especially important in some disciplines.

"There are differences in citation behaviour among STEM and non-STEM as well as different subject disciplines."

And probably among many other disciplines too. That is why each discipline's regression equation needs to be validated separately. This will yield a different constellation of metrics, as well as of beta weights on the metrics, for different disciplines.

"Metrics do not take into account contextual information about individuals, which may be relevant."

What does this mean? Age, years since degree, discipline, etc. are all themselves metrics, and can be added to the metric equation.
"They also do not always take into account research from across a number of disciplines."

Interdisciplinarity is a measurable metric. There are self-citations, co-author citations, small citation circles, specialty-wide citations, discipline-wide citations, and cross-disciplinary citations. These are all endogamy/exogamy metrics (see the sketch at the end of this post). They can be given different weights in fields where, say, interdisciplinarity is highly valued.

"The definition of the broad subject groups and the assignment of staff and activity to them will need careful consideration."

Is this about RAE panels? Or about how to distribute researchers by discipline or other grouping?

"Bibliometric indicators will need to be linked to other metrics on research funding and on research postgraduate training."

"Linked"? All metrics need to be considered jointly in a multiple regression equation with the panel rankings (and other validated or face-valid criterion metrics).

"There are potential behavioural effects of using bibliometrics which may not be picked up for some years."

Yes, metrics will shape behaviour (just as panel rankings shaped behaviour), sometimes for the better, sometimes for the worse. Metrics can be abused -- but abuses can also be detected, named and shamed, so there are deterrents and correctives.

"There are data limitations where researchers' outputs are not comprehensively catalogued in bibliometrics databases."

The obvious solution for this is Open Access: All UK researchers should deposit all their research output in their Institutional Repositories (IRs). Where it is not possible to set access to a deposit as OA, access can be set as Closed Access, but the bibliographic metadata will still be there. (The IRs will not only provide access to the texts and the metadata; they will also generate further metrics, such as download counts, chronometrics, etc.)

"The report comes ahead of the HEFCE consultation on the future of research assessment expected to be announced later this month. Universities UK will consult members once this is published."

Let's hope both UUK and HEFCE are still open-minded about ways to optimise the transition to metrics!

References

Harnad, S., Carr, L., Brody, T. and Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35.

Brody, T., Kampa, S., Harnad, S., Carr, L. and Hitchcock, S. (2003) Digitometric Services for Open Archives Environments. In Proceedings of the European Conference on Digital Libraries 2003, pp. 207-220, Trondheim, Norway.

Harnad, S. (2006) Online, Continuous, Metrics-Based Research Assessment. Technical Report, ECS, University of Southampton.

Harnad, S. (2007) Open Access Scientometrics and the UK Research Assessment Exercise. In Torres-Salinas, D. and Moed, H. F. (Eds.) Proceedings of the 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain.

Brody, T., Carr, L., Harnad, S. and Swan, A. (2007) Time to Convert to Metrics. Research Fortnight, pp. 17-18.

Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan, A. (2007) Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics. CTWatch Quarterly 3(3).

See also: Prior Open Access Archivangelism postings on RAE and metrics

Stevan Harnad
American Scientist Open Access Forum
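As noted above, here is a minimal sketch of one way endogamy/exogamy indicators could be computed, assuming hypothetical citation records carrying author lists and a discipline label; real services would draw such data from citation databases and repository metadata.

```python
# Minimal sketch: simple endogamy/exogamy indicators for one paper, computed
# from hypothetical citation records (author lists plus a discipline label).
def citation_exogamy(paper_authors, paper_discipline, citing_papers):
    """Share of citations that are self-citations and that cross disciplines."""
    total = len(citing_papers)
    if total == 0:
        return {"self_share": 0.0, "cross_disciplinary_share": 0.0}
    authors = set(paper_authors)
    self_cites = sum(1 for c in citing_papers if authors & set(c["authors"]))
    cross = sum(1 for c in citing_papers if c["discipline"] != paper_discipline)
    return {
        "self_share": self_cites / total,
        "cross_disciplinary_share": cross / total,
    }

# Hypothetical citing papers for a computer-science article by "Smith".
citing = [
    {"authors": ["Smith", "Jones"], "discipline": "computer science"},
    {"authors": ["Lee"], "discipline": "economics"},
    {"authors": ["Smith"], "discipline": "computer science"},
]
print(citation_exogamy(["Smith"], "computer science", citing))
```

Like any other candidate metric, such shares would not be used on their own; they would simply be further columns in the battery validated against the panel rankings.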