Thursday, December 7. 2006

Unbiassed Open Access Metrics for the Research Assessment Exercise

The UK Research Assessment Exercise's (RAE's) sensible and overdue transition from time-consuming, cost-ineffective panel review to low-cost metrics is moving forward. However, there is still a top-heavy emphasis, in the RAE's provisional metric equation, on the Prior-Funding metric: "How much research funding has the candidate department received in the past?"

"The outcome announced today is a new process that uses for all subjects a set of indicators based on research income, postgraduate numbers, and a quality indicator."

Although prior funding should be part of the equation, it should definitely not be the most heavily weighted component a priori, in any field. Otherwise it will merely generate a Matthew-Effect/Self-Fulfilling Prophecy (the rich get richer, etc.), and it will also collapse the UK Dual Funding System -- (1) competitive proposal-based funding plus (2) RAE performance-based, top-sliced funding -- into just a scaled-up version of (1) alone.

Having made the right decision -- to rely far more on low-cost metrics than on costly panels -- the RAE should now commission rigorous, systematic studies of metrics, testing metric equations discipline by discipline. There are not just three but many potentially powerful and predictive metrics that could be used in these equations (e.g., citations, recursively weighted citations, co-citations, hub/authority indices, latency scores, longevity scores, downloads, download/citation correlations, endogamy/exogamy scores, and many more rich and promising indicators). Unlike panel review, metrics are automatic and cheap to generate, and during and after the 2008 parallel panel/metric exercise they can be tested and cross-validated against the panel rankings, field by field.

In all metric fields -- biometrics, psychometrics, sociometrics -- the choice and weight of metric predictors needs to be based on careful, systematic, prior testing and validation, rather than on a hasty a-priori choice. Biassed predictors are also to be avoided: the idea is to maximise the depth, breadth, flexibility, predictive power and hence validity of the metrics by choosing and weighting the right ones. More metrics are better than fewer, because they serve as cross-checks on one another; this triangulation also highlights anomalies, if any.
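To make the idea of a weighted, discipline-specific metric equation concrete, here is a minimal sketch in Python. The metric names, weights and min-max normalisation are invented assumptions for illustration, not anything the RAE has proposed; in practice the weights would come from exactly the kind of validation studies urged above.

    # Illustrative sketch only: a weighted, per-discipline metric equation.
    # Metric names and weights are hypothetical, not RAE policy.
    WEIGHTS = {
        "physics": {"citations": 0.4, "downloads": 0.2, "prior_funding": 0.2, "students": 0.2},
        "history": {"citations": 0.3, "downloads": 0.3, "prior_funding": 0.1, "students": 0.3},
    }

    def normalise(values):
        # Rescale raw values to 0-1 so different metrics are commensurable.
        lo, hi = min(values), max(values)
        return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

    def department_scores(discipline, departments):
        # departments: {name: {metric: raw_value}} -> {name: weighted composite}
        weights = WEIGHTS[discipline]
        names = list(departments)
        scores = dict.fromkeys(names, 0.0)
        for metric, w in weights.items():
            scaled = normalise([departments[n][metric] for n in names])
            for n, v in zip(names, scaled):
                scores[n] += w * v
        return scores

    print(department_scores("physics", {
        "Dept A": {"citations": 500, "downloads": 9000, "prior_funding": 3.2, "students": 40},
        "Dept B": {"citations": 150, "downloads": 2000, "prior_funding": 1.1, "students": 12},
    }))

Changing the discipline changes the weights; adding a metric to the battery is one more entry in the weight table, which is what makes triangulation across many metrics cheap.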
Let us hope that the RAE's good sense will not stop with the decision to convert to metrics, but will continue to prevail in making a sensible, informed choice among the rich spectrum of metrics available in the online age.

Stevan Harnad

Some Prior References: American Scientist Open Access Forum

Thursday, October 26. 2006

Why is Southampton's G-Factor (web impact metric) so high?

U. Southampton ranks 3rd in the UK and 25th in the world in the G-Factor International University Ranking, a measure of "the importance or relevance of the university from the combined perspectives of all of the leading universities in the world... as a function of the number of links to their websites from the websites of other leading international universities", compiled by University Metrics.
Why is U. Southampton's rank so remarkably high (second only to Cambridge and Oxford in the UK, and out-ranking the likes of Yale, Columbia and Brown in the US)? That Southampton has long been practising what it preaches -- about maximising research impact through Open Access Self-Archiving -- is a likely factor. (This is largely a competitive advantage: Southampton invites other universities to come and level the playing field, by likewise self-archiving their own research output!)
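In essence the G-Factor is an inter-university link count. Here is a minimal sketch of that style of computation in Python, over an invented toy link graph; the real ranking is compiled by crawling the universities' live websites.

    # Toy link graph: which university sites link to which (invented data).
    links = {
        "cambridge.ac.uk": {"ox.ac.uk", "soton.ac.uk"},
        "ox.ac.uk": {"cambridge.ac.uk", "soton.ac.uk"},
        "soton.ac.uk": {"cambridge.ac.uk"},
    }

    def g_factor_scores(link_graph):
        # Score each site by the number of distinct peer sites linking to it.
        scores = dict.fromkeys(link_graph, 0)
        for source, targets in link_graph.items():
            for target in targets:
                if target in scores and target != source:
                    scores[target] += 1
        return sorted(scores.items(), key=lambda kv: -kv[1])

    print(g_factor_scores(links))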
Saturday, September 30. 2006

"Metrics" are Plural, Not Singular: Valid Objections From UUK About RAE

Universities UK and the Russell Group are spot-on in their criticisms of the replacement of the old panel-based Research Assessment Exercise (RAE) by one single metric (prior research funding). That would not only be arbitrary and absurd, but extremely unfair and counterproductive.
That very valid specific objection, however, has next to nothing to do with the general plan to replace the RAE's current tremendously wasteful panel-based review by metrics (plural), which include a rich and diverse potential array of objective performance indicators rather than just one self-fulfilling prophecy (i.e., how much prior funding has been awarded).

UUK are also quite right that each metric needs to be tested and validated, discipline by discipline (some already have been), and that the metric formula and the weights for each of the metrics have to be adjusted and optimised individually for each discipline. The parallel panel/metric shadow exercise planned for 2008 will help accomplish this testing, validation, and customisation. Whether -- and if so how much -- panel review will still be needed in some disciplines once the metric formula has been tested, validated and optimised is an empirical question (but my own guess is: not much).
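To make that testing and validation concrete, here is a minimal sketch, assuming scipy is available, of how each candidate metric could be rank-correlated against the 2008 shadow exercise's panel rankings, discipline by discipline. All the numbers are invented.

    # Per-discipline validation sketch: rank-correlate each candidate metric
    # with the shadow panel's rankings. Since rank 1 is best, a metric that
    # tracks the panel well shows a strongly *negative* Spearman rho here.
    from scipy.stats import spearmanr

    shadow = {
        "chemistry": {
            "panel_rank": [1, 2, 3, 4, 5],
            "citations": [420, 310, 305, 120, 80],
            "downloads": [9000, 7200, 8100, 3000, 2500],
            "prior_funding": [5.1, 4.9, 2.0, 1.8, 0.9],  # in GBP millions
        },
    }

    for discipline, data in shadow.items():
        panel = data["panel_rank"]
        for metric, values in data.items():
            if metric != "panel_rank":
                rho, p = spearmanr(panel, values)
                print(f"{discipline}: {metric} rho={rho:.2f} (p={p:.3f})")

Metrics that correlate strongly with the panel rankings in a given discipline earn a place (and a weight) in that discipline's equation; those that do not are dropped or down-weighted for that discipline.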
Prior AmSci Topic Threads:

Stevan Harnad
American Scientist Open Access Forum

Monday, September 18. 2006

Submitting one's own published work for assessment is Fair Use

CrossRef and the Publishers Licensing Society have come to a "gentleman's agreement" with RAE/HEFCE to "license" the papers that are submitted to the RAE for assessment "free of charge":

2008 UK Research Assessment Exercise (RAE)

At the heart of this there are not one, not two, not three, but four pieces of patent nonsense so absurd as to take one's breath away. Most of the nonsense is on RAE/HEFCE's end; one cannot blame the publishers for playing along (especially as the gentleman's agreement holds some hope of forestalling OA a bit longer, or at least the role the RAE might have played in hastening OA's arrival):

(1) The first piece of nonsense is the RAE's pedantic and dysfunctional insistence on laying their hands directly on the "originals," the publisher's version of each article per author, rather than sensibly settling for the author's peer-reviewed final drafts (postprints).

What will moot all of this is, of course, the OA self-archiving mandates by RCUK and the UK universities themselves, which will fill the UK universities' IRs, which will in their turn -- with the help of the IRRA (Institutional Repositories and Research Assessment) -- mediate the submission of both the postprints and the metrics to the RAE. Then this ludicrous side-show about the "licensing" of the all-important "originals" to the RAE, for "peer re-review" via the mediation of CrossRef and the publishers, will at last be laid to rest, once and for all. RAE 2008 will be its last hurrah...

Prior AmSci Threads on this topic: "Future UK RAEs to be Metrics-Based"

Stevan Harnad
American Scientist Open Access Forum

Wednesday, June 21. 2006

Let 1000 RAE Metric Flowers Bloom: Avoid Matthew Effect as Self-Fulfilling Prophecy

The conversion of the UK Research Assessment Exercise (RAE) from the present costly, wasteful exercise to time-saving and cost-efficient metrics is welcome, timely, and indeed long overdue. The worrying thing is that the RAE planners currently seem to be focused on just one metric -- prior research funding -- instead of the full and rich spectrum of new (and old) metrics that will become available in an Open Access world, with all the research performance data digitally available online for analysis and use.

Mechanically basing the future RAE rankings exclusively on prior funding would just generate a Matthew Effect (making the rich richer and the poor poorer), a self-fulfilling prophecy that is simply equivalent to increasing the amount given to those who were previously funded (and scrapping the RAE altogether, as a separate, semi-independent performance evaluator and funding source).

What the RAE should be planning to do is to look at weighted combinations of all available research performance metrics -- including the many that are correlated, but not so tightly correlated, with prior RAE rankings, such as author/article/book citation counts, article download counts, co-citations (co-cited with and co-cited by, weighted with the citation weight of the co-citer/co-citee), endogamy/exogamy metrics (citations by self or collaborators versus others, within and across disciplines), hub/authority counts (in-cites and out-cites, weighted recursively by the citation's own in-cite and out-cite counts), download and citation growth rates, semantic-web correlates, etc. It would be both arbitrary and absurd to blunt the potential sensitivity, power, predictivity and validity of metrics a priori, by biasing them toward the prior-funding metric alone. Prior funding should just be one out of a full battery of weighted metrics, adjusted to each discipline and validated against one another (and against human judgment too).
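The recursive hub/authority weighting mentioned above is essentially Kleinberg's HITS algorithm applied to the citation graph. A minimal sketch over an invented toy graph:

    # Minimal HITS-style hub/authority sketch over a citation graph: each
    # paper's authority is weighted recursively by the hub scores of its
    # citers, and vice versa. Toy data; a real exercise would harvest the
    # graph from Open Access repositories.
    def hits(cites, iterations=50):
        # cites: {paper: [papers it cites]} -> (hub, authority) score dicts
        papers = set(cites) | {t for ts in cites.values() for t in ts}
        hub = dict.fromkeys(papers, 1.0)
        auth = dict.fromkeys(papers, 1.0)
        for _ in range(iterations):
            # Authority: sum of the hub scores of the papers citing you.
            auth = {p: sum(hub[s] for s, ts in cites.items() if p in ts) for p in papers}
            # Hub: sum of the authority scores of the papers you cite.
            hub = {p: sum(auth[t] for t in cites.get(p, [])) for p in papers}
            # Normalise so the scores do not blow up.
            for d in (auth, hub):
                norm = sum(v * v for v in d.values()) ** 0.5 or 1.0
                for p in d:
                    d[p] /= norm
        return hub, auth

    hub, auth = hits({"A": ["C"], "B": ["C", "D"], "C": ["D"]})
    print(auth)  # C and D emerge as the authorities in this toy graph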
Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open Research Web: A Preview of the Optimal and the Inevitable. In: Jacobs, N. (Ed.) Open Access: Key Strategic, Technical and Economic Aspects, chapter 21. Chandos.

Stevan Harnad
American Scientist Open Access Forum

Saturday, June 17. 2006

Book-impact metric for research assessment in book-based disciplines: Self-archiving books' metadata and bibliographies

For all disciplines -- but especially for disciplines that are more book-based than journal-article-based -- it would be highly beneficial for authors to self-archive in their institutional repositories the metadata as well as the cited-reference lists (bibliographies) for the books they publish annually. That way, next-generation scientometric search engines like Citebase will be able to harvest and link their reference lists (exactly as they do the reference lists of articles whose full texts have been self-archived). This will generate a book citation impact metric. Books cite and are cited by books; moreover, books cite articles and are cited by articles.

It is already possible to scrape together a rudimentary book-impact index from Thomson-ISI's Web of Knowledge along with data from Google Books and Google Scholar, but a worldwide Open Access database, across all disciplines, indexing all the article output as well as the book output self-archived in all the world's institutional repositories, could do infinitely better than that. All that's needed is for authors' institutions and funders to mandate institutional (author) self-archiving of (1) the metadata and full-texts of all their article output, along with (2) the metadata and reference lists of all their book output.

We can do even better than that, because although many book authors may not wish to make their books' full-texts Open Access (OA), they can still deposit their books' full-texts in their institutional repositories and set access as Closed Access -- accessible only to scientometric full-text harvesters and indexers (like Google Books) for full-text inversion, boolean search, and semiometric analysis (text endogamy/exogamy, text-overlap, text similarity/proximity, semantic lineage, latent semantic analysis, etc.) -- without making the full-text itself OA to individual users (i.e., potential book-buyers) if they do not wish to.
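Here is a minimal sketch of the book citation impact metric this would enable, once books' metadata and reference lists have been harvested from institutional repositories. The record format and the exact-title matching rule are simplifying assumptions for illustration; real citation linkers must fuzzy-match references.

    # Sketch of a book citation impact metric built from self-archived
    # metadata plus reference lists. Hypothetical harvested records.
    records = [
        {"type": "book",    "title": "A History of X", "refs": []},
        {"type": "article", "title": "On X",           "refs": ["A History of X"]},
        {"type": "book",    "title": "X Reconsidered", "refs": ["A History of X", "On X"]},
    ]

    def book_citation_counts(records):
        # Count citations received by each book, from books and articles alike.
        books = {r["title"]: 0 for r in records if r["type"] == "book"}
        for r in records:
            for cited in r["refs"]:
                if cited in books:  # naive exact-title match for illustration
                    books[cited] += 1
        return books

    print(book_citation_counts(records))  # {'A History of X': 2, 'X Reconsidered': 0}

Note that only the metadata and reference lists need to be openly harvestable for this to work; the books' full texts can remain Closed Access, as described above.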
This will help provide the UK's new metrics-based Research Assessment Exercise (RAE) with research performance indicators better suited for the disciplines whose research is not as journal-article- (and conference-paper-) based as that of the physical, biological and engineering sciences.

Carr, L., Hitchcock, S., Oppenheim, C., McDonald, J.W., Champion, T. & Harnad, S. (2006) Can journal-based research impact assessment be generalised to book-based disciplines? (Research Proposal)

Stevan Harnad
American Scientist Open Access Forum

Friday, June 16. 2006

Metrics-Based Assessment of Published, Peer-Reviewed Research

On Wed, 14 Jun 2006, Larry Hurtado, Department of Divinity, University of Edinburgh, wrote in the American Scientist Open Access Forum:

LH: "Stevan Harnad is totally in favour of a 'metrics based' approach to judging research merit with a view toward funding decisions, and greets the news of such a shift from past/present RAE procedure with unalloyed joy."

No, metrics are definitely not meant to serve as the basis for all or most research funding decisions: research proposals, as noted, are assessed by peer review. Metrics are intended for the other component in the UK dual funding system, in which, in addition to directly funded research, based on competitive peer review of research bids, there is also a smaller, secondary (but prestigious) top-slicing system, the Research Assessment Exercise (RAE). It is the RAE that needed to be converted to metrics from the absurd, wasteful and costly juggernaut that it used to be.

LH: "Well, hmmm. I'm not so sure (at least not yet). Perhaps there is more immediate reason for such joy in those disciplines that already rely heavily on a metrics approach to making decisions about researchers."

No discipline uses metrics systematically yet; moreover, many metrics are still to be designed and tested. However, the only thing "metrics" really means is: the objective measurement of quantifiable performance indicators. Surely all disciplines have measurable performance indicators. Surely it is not true of any discipline that the only way, or the best way, to assess all of its annual research output is by having each piece individually re-reviewed after it has already been peer-reviewed twice -- before execution, by a funding council's peer-reviewers as a research proposal, and after execution, by a journal's referees as a research publication.

LH: "In the sciences, and also now social sciences, there are citation-services that count publications and citations thereof in a given list of journals deemed the 'canon' of publication venues for a given discipline. And in these disciplines journal articles are deemed the main (perhaps sole) mode of research publication. Ok. Maybe it'll work for these chaps."

First, with an Open Access database, there need be no separate "canon": articles in any of the world's 24,000 peer-reviewed journals and congresses can count -- though some will (rightly) count for more than others, based on the established and known quality standards and impact of the journal in which the article appeared (this too can be given a metric weight). Alongside the weighted impact factor of the journal, there will be the citation counts for each article itself, its author, the co-citations in and out, the download counts, the hub/authority weights, the endogamy/exogamy weights, etc. All these metrics (and many more) will be derivable for all disciplines from an Open Access database (no longer just restricted to ISI's Web of Knowledge).

That includes, by the way, citations of books by journal articles -- and also citations of books and journal articles by books -- because although most book authors may not wish to make their books' full-texts OA, they can and should certainly make their books' bibliographic metadata, including their bibliography of cited references, OA. Those book-impact metrics can then be added to the metric harvest, citation-linked, counted, and duly weighted, along with all the other metrics. There are even Closed-Access ways of self-archiving books' digital full-texts (such as Google Book Search) so they can be processed for semiometric analysis (endogamy/exogamy, content overlap, proximity, lineage, chronometric trends) by harvesters that do not make the full text available openly. All disciplines can benefit from this.

LH: "But I'd like to know how it will work in Humanities fields such as mine. Some questions, for Stevan or whomever. First, to my knowledge, there is no such citation-count service in place. So, will the govt now fund one to be set up for us? Or how will the metrics be compiled for us? I.e., there simply is no mechanism in place for doing 'metrics' for Humanities disciplines."

All the government needs to do is to mandate the self-archiving of all UK research output in each researcher's own OAI-compliant institutional (or central) repository. (The US and the rest of Europe will shortly follow suit, once the prototype policy model is at long last adopted by a major player!) The resulting worldwide interoperable database will be the source of all the metric data, and a new generation of scientometric and semiometric harvesters and analysers will quickly be spawned to operate on it, to mine it to extract the rich new generation of metrics. There is absolutely nothing exceptional about the humanities (as long as book bibliographies are self-archived too, alongside journal-article full-texts).
Research uptake and usage is a generic indicator of research performance, and citations and downloads are generic indicators of research uptake and usage. The humanities are no different in this regard. Moreover, inasmuch as OA also enhances research uptake and usage itself, the humanities stand to benefit from OA, exactly like the other disciplines.

LH: "Second, for us, journal articles are only one, and usually not deemed the primary/preferred, mode of research publication. Books still count quite heavily. So, if we want to count citations, will some to-be-imagined citation-counting service/agency comb through all the books in my field as well as the journal articles to count how many of my publications get cited and how often? If not, then the 'metrics' will be so heavily flawed as to be completely misleading and useless."

All you need to do is self-archive your books' metadata and cited reference lists, and all your journal articles, in your OAI-compliant institutional repository. The scientometric search engines -- like Citebase, CiteSeer, Google Scholar, and more to come -- will take care of all the rest. If you want to do even better, scan in, OCR and self-archive the legacy literature too (the journal articles, plus the metadata and cited reference lists of books of yore; if you're worried about variations in reference citing styles: don't worry! Just get the digital texts in, and algorithms can start sorting them out and improving themselves).

LH: "Third, in many sciences, esp. natural and medical sciences, research simply can't be conducted without significant external funding. But in many/most Humanities disciplines truly groundbreaking and highly influential research continues to be done without much external funding."

So what is your point? That the authors of unfunded research, uncoerced by any self-archiving mandate, will not self-archive? Don't worry. They will. They may not be the first ones, but they will follow soon afterwards, as the power and potential of self-archiving to measure, as well as to accelerate and increase, research impact and progress become more and more manifest.

LH: "(Moreover, no govt has yet seen fit to provide funding for the Humanities constituency of researchers commensurate with that available for Sciences. So, it's a good thing we don't have to depend on such funding!)"

Funding grumbles are a worthy topic, but they have nothing whatsoever to do with OA and the benefits of self-archiving, or metrics.

LH: "My point is that the 'metrics' for the Humanities will have to be quite a bit different in what is counted, at the very least."

No doubt. And the metrics used, and their weights, will be adjusted accordingly. But metrics they will be. No exceptions there. And no regression back to either human re-evaluation or Delphic oracles: objective, countable performance indicators (for the bulk of research output; of course, for special prizes and honours, individual human judgment will have to be re-invoked, in order to compare like with like, individually).

LH: "Fourth, I'm not convinced (again, not yet; but I'm open to persuasion) that counting things = research quality and impact. Example: A number of years ago, coming from a tenure meeting at my previous University I ran into a colleague in Sociology. He opined that it was unnecessary to labour over tenure, and that he needed only two pieces of information: number of publications and number of citations. I responded, 'I have two words for you: Pons and Fleischmann'. Remember these guys?
They were cited in Time and Newsweek and everywhere else for a season as discoverers of 'cold fusion'. And over the next couple of years, as some 50 or so labs tried unsuccessfully to replicate their alleged results, they must have been among the most frequently-cited guys in the business. And the net effect of all that citation was to discredit their work. So, citation = 'impact'. Well, maybe, but in this case 'impact' = negative impact. So, are we really so sure of 'metrics'?"

Not only do citations have to be weighted, as they can and will be, recursively, by the weight of their source (Proceedings of the Royal Society vs. The Daily Sun; citations from Nobel Laureates vs. citations from uncited authors), but semiometric algorithms will even begin to have a go at sorting positive citations from negative ones, disinterested ones from endogamous ones, etc. Are you proposing to defer to individual expert opinion in some (many? most? all?) cases, rather than using a growing wealth and diversity of objective performance indicators? Do you really think it is harder to find individual cases of subjective opinion going wrong than of objective metrics going wrong?
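Recursive source-weighting of this kind is what PageRank-style algorithms compute: a citation counts for more when its source is itself heavily (and weightily) cited. A minimal sketch over an invented toy citation graph; the damping factor 0.85 is the conventional choice:

    # Minimal PageRank-style sketch of recursive citation weighting: a
    # citation is worth more when its source is itself highly weighted.
    def weighted_citation_scores(cites, damping=0.85, iterations=50):
        # cites: {paper: [papers it cites]} -> {paper: recursive weight}
        papers = set(cites) | {t for ts in cites.values() for t in ts}
        n = len(papers)
        score = dict.fromkeys(papers, 1.0 / n)
        for _ in range(iterations):
            new = dict.fromkeys(papers, (1 - damping) / n)
            for source, targets in cites.items():
                if targets:  # share the source's weight among what it cites
                    share = damping * score[source] / len(targets)
                    for t in targets:
                        new[t] += share
            score = new
        return score

    # C is cited twice (by the uncited A and B); D is cited only once, but
    # by the now-weighty C, so D's single citation counts for a great deal.
    print(weighted_citation_scores({"A": ["C"], "B": ["C"], "C": ["D"]}))

This does not by itself distinguish positive from negative citations (that is the semiometric layer mentioned above), but it already demotes citation counts from low-weight sources.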
LH: "Perhaps, however, Stevan can help me see the light, and join him in acclaiming the advent of metrics."

I suggest that the best way to see the light on the subject of Open Access digitometrics is to start self-archiving and sampling the (few) existing digitometric engines, such as Citebase. You might also wish to have a look at the chapter I recommended (no need to buy the book: it's OA: just click!):

Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open Research Web: A Preview of the Optimal and the Inevitable. In: Jacobs, N. (Ed.) Open Access: Key Strategic, Technical and Economic Aspects, chapter 21. Chandos.

Stevan Harnad
American Scientist Open Access Forum

Friday, April 14. 2006

Metrics and Assessment

The following is a comment on an article that appeared in the Thursday, April 13th issue of The Independent concerning the UK Research Assessment Exercise (RAE) and metrics (followed by a response to another piece in The Independent about web metrics).
Re: Hodges, L. (2006) The RAE is dead - long live metrics. The Independent, April 13 2006.

Absolutely no one can justify (on the basis of anything but superstition) holding onto an expensive, time-wasting research assessment system such as the RAE, which produces rankings that are almost perfectly correlated with, hence almost exactly predictable from, inexpensive objective metrics such as prior funding, citations and research student counts. Hence the only two points worth discussing are (1) which metrics to use and (2) how to adapt the choice of metrics and their relative weights for each discipline.

The web has opened up a vast and rich universe of potential metrics that can be tested for their validity and predictive power: citations, downloads, co-citations, immediacy, growth rate, longevity, interdisciplinarity, user tags/commentaries and much, much more. These are all measures of research uptake, usage, impact, progress and influence. They have to be tested and weighted according to the unique profile of each discipline (or even subdiscipline). The prior-funding metric is highly predictive on its own, but it also generates a Matthew Effect: a self-fulfilling, self-perpetuating prophecy. So multiple, weighted metrics are needed for balanced evaluation and prediction.

I would not for a moment believe, however, that any (research) discipline lacks predictive metrics of research performance altogether. Even less credible is the superstitious notion that the only way (or the best way) to evaluate research is for RAE panels to re-do, needlessly, locally, the peer review that has already been done, once, by the journals in which the research has already been published. The urgent feeling that some form of human re-review is somehow crucial for fairness and accuracy has nothing to do with the RAE or metrics in particular; it is just a generic human superstition (and irrationality) about population statistics versus my own unique, singular case...
The reasons for the University of Southampton's extremely high overall webmetric rating are four:

(1) U. Southampton's university-wide research performance

This all makes for an extremely strong Southampton web presence, as reflected in such metrics as the G-Factor, which places Southampton 3rd in the UK and 25th among the world's top 300 universities, and Webometrics, which places Southampton 6th in the UK, 9th in Europe, and 80th among the top 3000 universities it indexes. Of course, these are extremely crude metrics, but Southampton itself is developing more powerful and diverse metrics for all universities in preparation for the newly announced metrics-only Research Assessment Exercise.

Some references:

Harnad, S. (2001) Why I think that research access, impact and assessment are linked. Times Higher Education Supplement 1487: p. 16.

Hitchcock, S., Brody, T., Gutteridge, C., Carr, L., Hall, W., Harnad, S., Bergmark, D. and Lagoze, C. (2002) Open Citation Linking: The Way Forward. D-Lib Magazine 8(10).

Harnad, S. (2003) Why I believe that all UK research output should be online. Times Higher Education Supplement, Friday, June 6 2003.

Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35.

Berners-Lee, T., De Roure, D., Harnad, S. and Shadbolt, N. (2005) Journal publishing and author self-archiving: Peaceful Co-Existence and Fruitful Collaboration.

Brody, T., Harnad, S. and Carr, L. (2006) Earlier Web Usage Statistics as Predictors of Later Citation Impact. Journal of the American Society for Information Science and Technology (JASIST).

Shadbolt, N., Brody, T., Carr, L. & Harnad, S. (2006) The Open Research Web: A Preview of the Optimal and the Inevitable. In: Jacobs, N. (Ed.) Open Access: Key Strategic, Technical and Economic Aspects. Chandos.

Citebase impact ranking engine and usage/citation correlator/predictor
Beans and Bean Counters
Bibliography of Findings on the Open Access Impact Advantage

Stevan Harnad
American Scientist Open Access Forum

Thursday, March 23. 2006

Online, Continuous, Metrics-Based Research Assessment

As predicted, and long urged, the UK's wasteful, time-consuming Research Assessment Exercise (RAE) is to be replaced by metrics: "Research exercise to be scrapped"

The RAE outcome is most closely correlated (r = 0.98) with the metric of prior RCUK research funding (Figure 4.1) (this is no doubt in part a "Matthew Effect"), but research citation impact is another metric highly correlated with the RAE outcome, even though it is not explicitly counted. Now it can be explicitly counted (along with other powerful new performance metrics), and all the rest of the ritualistic time-wasting can be abandoned, without further ceremony. This represents a great boost for institutional self-archiving in Open Access Institutional Repositories, not only because that is the obvious, optimal means of submission to the new metric RAE, but because it is also a powerful means of maximising research impact, i.e., maximising those metrics (I hope Research Councils UK (RCUK) is listening!).

Harnad, S. (2001) Why I think that research access, impact and assessment are linked. Times Higher Education Supplement 1487: p. 16.
And this new metric RAE policy will help "unskew" the journal impact factor, by instead placing the weight on the individual author/article citation counts (and download counts, CiteRanks, authority counts, citation/download latency, citation longevity, co-citation signature, and many, many new OA metrics waiting to be devised and validated, including full-text semantic-analysis and semantic-web-tag analyses too) rather than only, or primarily, on that blunter instrument. This is not just about one number any more! The journal tag will still have some weight, but just one weight among many, in an OA scientometric multiple regression equation, customised for each discipline. This is an occasion for rejoicing at progress, pluralism and openness, not for digging up obsolescent concerns about over-reliance on the journal impact factor.

The document actually says:

"one or more metrics... could be used to assess research quality and allocate funding, for example research income, citations, publications, research student numbers etc."

You are quite right, though, that the default metric many have in mind is research income, but be patient! Now that the door has been opened to objective metrics (instead of amateurish in-house peer-re-review), this will spawn more and more candidates for enriching the metric equation. If RAE top-slicing wants to continue to be an independent funding source in the present "dual" funding system (RCUK/RAE), it will want to have some predictive metrics that are independent of prior funding. (If the RAE instead just wants to redundantly echo research funding, it need merely scale up RCUK research grants to absorb what would have been the RAE top-slice, and drop the RAE and dual funding altogether!)

The important thing is to scrap the useless, time-wasting RAE preparation/evaluation ritual we were all faithfully performing, when the outcome was already so predictable from other, cheaper, quantitative sources. Objective metrics are the natural, sensible way to conduct such an exercise, continuously, and once we are doing metrics, many powerful new predictive measures will emerge, over and above grant income and citations. The RAE ranking will not come from one variable, but from a multiple regression equation, with many weighted predictor metrics, in an Open Access world in which research full-texts in their own authors' Institutional Repositories are citation-linked, download-monitored and otherwise scientometrically assessed and analysed continuously.
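As a toy illustration of fitting such a multiple regression equation from a parallel panel/metric exercise, assuming numpy is available (all numbers invented): regress the shadow panel's scores on the candidate metrics to estimate the per-discipline weights, then use the fitted equation to score departments cheaply.

    # Toy sketch: estimate per-discipline weights by ordinary least squares,
    # regressing shadow-panel scores on candidate metrics. Invented data.
    import numpy as np

    # Rows = departments; columns = candidate metrics
    # (citations, downloads, prior funding), all pre-normalised.
    X = np.array([
        [0.9, 0.8, 0.7],
        [0.4, 0.5, 0.6],
        [0.7, 0.6, 0.9],
        [0.2, 0.3, 0.1],
        [0.5, 0.9, 0.4],
    ])
    panel_score = np.array([5.2, 3.1, 4.8, 1.5, 3.9])  # shadow-panel ratings

    # Least-squares fit: panel_score ~ intercept + weighted metrics.
    design = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(design, panel_score, rcond=None)
    print("fitted weights per metric:", coef[1:])

    # The fitted equation can then predict scores for new departments cheaply:
    new_dept = np.array([1.0, 0.5, 0.6, 0.4])  # leading 1.0 = intercept term
    print("predicted score:", new_dept @ coef)

The fitted weights are exactly what should differ from discipline to discipline, and re-fitting continuously against fresh data is what makes the exercise continuous rather than quadrennial.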
Hitchcock, S., Brody, T., Gutteridge, C., Carr, L., Hall, W., Harnad, S., Bergmark, D. and Lagoze, C. (2002) Open Citation Linking: The Way Forward. D-Lib Magazine 8(10).

Stevan Harnad

Monday, February 20. 2006

Providing One's Own Writings for Assessment is Unfair Use?

On Sat, 18 Feb 2006, Charles Oppenheim (CO) wrote in AmSci:
CO: : "I regret to say that Stevan is incorrect in some of his comments. For previous RAEs, there WERE licensing arrangements put in place to permit p/copies of articles to be passed to RAE panels. he is probably unaware of this because no great publicity was associated with the arrangements that were set up."There is no end to what people will do, if left to their own devices, safely out of reach of critical reflection. The only substantive question, though, is: What actually makes sense? (If more publicity had attended the low-profile RAE licensing arrangements last time, perhaps some voices of reason would have been raised earlier. As it stands, it seems to me that people in the self-regulating interstices of IP-never-neverland are making ad hoc decisions about what does and does not need permission without any particular answerability to fact or reason, one way or the other!) Unless I am mistaken, the RAE consists of the following: Researchers all over the UK submit N (4? 8? 12?) copies of their four most important articles, to be counted (and sometimes confirmed, and sometimes even read and pondered) by a panel of RAE assessors. I (and many others the world over) receive, every year, several times, copies of the articles of candidates at other institutions who are being evaluated for employment, promotion, tenure, chairs, prizes, or funding. Does anyone imagine that a license has been or needs to be sought in order to send someone's own work out to be evaluated? Reductio ad absurdum: Suppose the photocopies, which the author makes for his own private use, are temporarily lent to another individual, with the request that they then be returned to their owner: Does that too call for "licensing arrangements"? Well then let the evaluation copies be considered a loan, and let that be the end of it! (If still in doubt, run the same thought experiment through with a lent book, instead of an article, or one's own photocopy of one's own book, lent.) Still not absurd enough? Well then return to what would have been the most sensible thing to do in the first place: Not to use originals or photocopies of the publisher's version at all, but simply the author's own peer-reviewed final draft ("postprint"). Still think I need a license to send my own work to someone to assess it for a salary rise? CO: "Licences are likewise needed this time around because the Universities do not (in general) own the copyright in these items, so they are "dealing" with someone else's (usually a publisher's) copyright material. Such copying by Universities cannot be considered "fair dealing" as it is not for one of the permitted purposes, and indeed is not permitted under any other exception to copyright. So I am glad that PLS is arranging a licence so that institutions can pass copies of items to RAE panels without risk of copyright infringement."The solution to this rather absurd pseudo-problem -- "How can I provide a copy of my very own writing to be evaluated by someone who I would very much prefer not to oblige to go out and buy a copy for himself in exchange for the privilege of deciding whether or not to pay me more salary or research funding?" -- is super-simple: Let it not be (nominally) the "universities" that do the submitting to the RAE; let it instead be (nominally) the authors themselves. "Here is my work: Please assess me!" 
Let the authors either "lend" their own photocopies of their own published articles to the RAE assessors (with a postage stamp and a cheery request to return them to their rightful owner once assessed), or, better still, let them submit only their own final, corrected drafts, straight out of their own word-processors. (I had already pointed out that the fatal foolishness -- probably out of pointless pedantry, if not paranoia -- was in the RAE's insisting on the publisher's offprint rather than the author's postprint in the first place.)

I am, of course, not proposing that these idiotic prophylactic measures actually be taken; I am just trying to use them as an intuition pump, to wash off the nonsensical notion that "institutions" (whether the author's university or HEFCE) are here making "unfair use" of the publisher's property: it is the authors who are doing the fair-using, of their own work, in their own interests. Anyone who insists on construing it in another way is simply giving HEFCE and the universities bad advice. (But, without publicity, bad advice risks being followed.)

CO: "In summary, I'm afraid the law does require licensing this time around, as it did for the previous RAEs."

The Law requires licensing if we put the question to the Law in the following form: "May institutions make multiple photocopies of a published work to submit them to the RAE?" The Law comes up with an altogether different answer if we instead ask: "May individuals lend/send personal copies of their own work to be evaluated?" QED (or so it ought to be, but I expect there are more hermeneutic epicycles to be spun on this yet...)

CO: "My understanding is that the RAE panels want pdfs rather than author postprints because they need the reassurance that the thing they are reading is identical to that which was published. Since the RAE is an auditing exercise in which the onus is on the integrity of what is being submitted, HEFCE no doubt feel that the pdf offers the necessary security."

That is indeed the heart of the matter, and just a little common sense and reflection will reveal -- as I have pointed out many times before -- that the onus is not on HEFCE but on the institutions, to make sure that what they are submitting is kosher. If it is discovered that someone has submitted a plagiarised or unpublished or altered work -- something that the electronic medium makes even easier to detect and expose than was possible in the paper medium (though even in the paper medium, the risk and consequences of exposure had been mighty) -- then the ones that are named, shamed, blamed and punished are of course the institutions, and ultimately the researchers, not HEFCE!

To show that this is all pure pedantry and nothing more (except possibly paranoia), ask yourself whether it is really "safe" to trust even the journal offprint. After all, peer review being the frail human exercise it is, the only ones who may (or may not) have ensured that the paper met all dietetic laws were the referees: is the onus of the integrity of the RAE exercise to be entrusted to one or two unidentified, fallible, corruptible referees? Surely the RAE should re-do the peer review, and with more robust numbers, on whatever document the author submits! If this last compunction seems to call into question the value of having the RAE assessors re-do in any measure the assessment that has already been done by the peer reviewers, then I have succeeded in making myself understood!
There is no need for most of the baroque trappings of this auditing exercise: insofar as published journal articles are concerned, it is just an auditing exercise. The RAE should not be asking for copies of the papers to read at all -- god knows how many of them actually get read anyway -- it should simply be counting: journal articles, citations, downloads, and other objective indicators. (Charles himself has published a good deal of evidence that a goodly proportion of the variance in the RAE rankings is already predictable from that scientometric audit trail.)

Instead, we find ourselves in the absurd position of twisting ourselves into knots in order to have the "legal right" to submit for re-assessment (inexpert re-assessment, and only on a spot-check basis), by an RAE panel, the publisher's proprietary page-images of a peer-reviewed article that has already been assessed (by purpose-picked, qualified peer experts -- within the vagaries of each journal's quality standards, competence, and conscientiousness, such as they are), when the resulting RAE outcome is already highly correlated with an objective audit we could have done without even needing to have the full-texts in hand! And, inasmuch as we may have felt impelled to give the full-texts a peek, we might just as well have had the author's peer-reviewed, corrected final draft (postprint), without the further pomp and circumstance, just duly certified by the already frantic and compulsive RAE preparation committee in each department of each university, eager to maximise their ranks, minimise their risks, and be compliant in every conceivable and inconceivable way.

Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35.

As I said, this will all be seen to be hilarious in hindsight: once we are all making our postprints routinely accessible online free for all in our institutional repositories, the thought that we were agitating ourselves over "licensing arrangements" for RAE assessment way back in 2006 will be seen to have just been one of those quaint paleolithic quirks, like the erstwhile conviction that everyone needed a walking stick or a top hat in order to stay upright and avoid their death of a cold or sunstroke...

CO: "Having said all that, things would have been so much simpler if, as Stevan has argued, mandated self-archived articles with copyright owned by the academic/HEI had been around years ago!"

I hate to be so contrary again but, no, it is not copyright-retention that has been and is the problem. It is finger-retention: if/when researchers make (or get made to make) their fingers do the walking, to do at last those few keystrokes required to deposit their refereed postprints (and optionally also their pre-refereeing preprints) in their own IRs -- a practice to which 93% of journals have already given their blessing, though it was not really needed, yet only about 15% of authors are actually doing it unmandated (whereas 95% would do it if mandated) -- then all of this substance- and sense-free shadow-boxing will be at an end and... (After 12 long years I no longer say "the optimal and inevitable" will be upon us: I don't doubt that we will simply graduate to some new, higher level of tom-foolery.)

On Mon, 20 Feb 2006, CO replied:

CO: "With respect, Stevan has got the legal situation wrong.
It is not the academics who are asked to provide copies of their articles to the RAE panels - it is their employing Universities."

But that is the point! The absurd case for licensing RAE submissions is based on simply asking in the wrong way (whereas asking in the right way would yield the identical benefits, but without the spurious licensing requirement):

(1) The objective is to have 4 articles from every participating researcher sent to the RAE quadrennially for auditing and assessment.

(2) If we (arbitrarily) say it is the university that is sending 4N (arbitrary) articles to the RAE, then it sounds like the university is making unfair use of 3rd-party content.

(3) If we instead (sensibly) say it is each author, sending his own 4 articles, for auditing and assessment, then it is crystal clear that it is fair (authorial) use.

Hence (3) is not only the way the whole question should be put, but also the most accurate and transparent description of what is actually going on, and what the RAE is actually about: researchers are sending their articles to the RAE to be assessed so that they can get more money! I can only repeat: if the RAE persists in putting it instead in an obtuse way, the results will be equally obtuse.

CO: "So, saying it is just like being asked by a colleague for a copy of one of your articles is incorrect. It is the employer making multiple copies of multiple articles."

Would it settle minds if a directive were circulated at all the UK universities stating that: "On no account must it be the Centre or its secretaries who photocopy the articles! Each individual author must do so, personally..."? (Charles, with all due respect, I am doing a formal and functional reductio ad absurdum here, so it won't do to just repeat the one arbitrary formal way of characterising and implementing the exercise, when another way of characterising and implementing the very same thing would have precisely the same outcome, without the absurd consequences -- i.e., without the ostensible need to license 3rd-party content!)

CO: "It's also not lending of the materials, because the RAE panels retain them, and don't return them to the HEI at the end of the RAE."

Would it settle minds if the articles were lent, with the author doing the photocopying, the department merely coordinating ("auditing"!) its own individual authors' mailings, and each author solemnly requesting, in writing, that after "assessment" his personal property items should either be mailed back or destroyed? I make no comment on the absurdity of the RAE wanting to preserve the articles in their possession till kingdom come (whereas they are all already in the public record, duly published, and all that's needed for an audit is an audit-trail -- just as an accounting firm need not store the cash, just the bank-statements!). (The bright light who thought the RAE needed a permanent store of the articles themselves after the assessment is no doubt the same one that insisted on the publisher's version instead of the author's final draft in the first place. I think we would all be better off if spared this illumination this time round...)

CO: "Claiming the academics are lending the stuff to RAE panels does not stand up to serious scrutiny - a check of the RAE documentation makes it clear this is not what is occurring."

Vide supra, re. RAE storage. Arbitrary and absurd practices cannot be justified by simply saying "But look, we're doing it."
"CO: I'm sorry to be a pedantic old bore, but what the HEIs are doing for the RAE panels is copyright infringement unless a licence has been agreed."Then let what the HEIs are doing instead be be formally "devolved" to each individual author. Then it's each individual author that's doing it. That's all that's needed (and it's merely a trivial formality; and it's been nothing but a trivial formal matter all along). CO:"Re. the pdf versus author version, the RAE panel are auditors. Just as a financial auditor would require printed invoices, bank statements, etc., the panel has to use the legally most robust version of the documentation it is validating. HEFCE will be bending over backwards to ensure everything it does is legally watertight. I'm afraid Word documents are much less likelyOn the contrary, the fact that HEIs would be the ones taking the risk if they allowed their authors to use a doctored Word document is the point! First, if it's going to be a financial-audit analogy, then let's keep the tertium comparationis straight: First, auditors audit bank-statements, not cash! (They don't need to read the writing on the money, weight the pounds sterling themselves, or stash the cash for future generations.) HEFCE itself is doling out cash on the strength of the research bank-statement audit. The bank-statements are provided by the author, via his institution. It is authors'/institutions' responsibility to ensure that their submitted bank-statement statements are valid. It is they who are liable if they are fraudulent, not HEFCE. And the sensible way for HEFCE to "validate" those bank-statements is precisely the same way any prospective lender would verify a client's bank-statements or credit rating: by consulting a central bank-asset database -- which in this case is ISI, for which the UK fortunately already has a national site-license! All that's needed for that is each article's reference metadata, not its full-text (let alone the full-text in the publisher's PDF format!). And -- ceterum censeo -- the reference metadata are all that needs to be submitted for auditing (i.e., counting). In contrast, "[re-]assessment" (i.e., substantive evaluation, as opposed to mere auditing) -- over and above the peer-review that these published articles have already undergone with their respective journals -- is another matter, and probably a superfluous one, but for the browsing and spot-checking that some of the re-assessors may actually wish to do, the authors' postprints are more than enough. And those postprints need sit merely in the author's own Institutional Repository (IR), where the re-assessors may safely consult them at their leisure, 24/7, online... Am I the only one who sees that this is all an imperial tempest in a virtual teapot? Harrumph! Your weary archivangelist, with his sparse remaining vestiges of patience alas a-frayed... Stevan Harnad 1997 ------------------------------------------------------------------------------------> 2006