Friday, October 12. 2007
UK RAE Reform Should Be Evidence-Based
The UK Research Assessment Exercise has taken a few steps forward and a few steps back:
(1) In evaluating and rewarding the research performance of universities department by department, future RAEs (after 2008) will no longer, as before, assess only 4 selected papers per researcher, among those researchers selected for inclusion: all papers, by all departmental researchers, will be assessed. (Step forward.)

As I have pointed out many times before, (i) prior research income, if given too much weight, becomes a self-fulfilling prophecy, and reduces the RAE to a multiplication factor on competitive research funding. The result would be that instead of the current two autonomous components in the UK's Dual Support System (RAE and RCUK), there would be only one: RCUK (and other) competitive proposal funding, multiplied by the RAE metric rank, dominated by prior funding.

To counterbalance this, a rich spectrum of potential metrics needs to be tested in the 2008 RAE and validated against the panel review rankings, which will still be collected in the 2008 parallel RAE. Besides (i) research income, (ii) postgraduate student counts, and (iii) journal impact factors, there is a vast spectrum of other candidate metrics, including (iv) citation metrics for each article itself (rather than just its journal's average), (v) download metrics, (vi) citation and download growth curve metrics, (vii) co-citation metrics, (viii) hub/authority metrics, (ix) endogamy/interdisciplinarity metrics, (x) book citation metrics, (xi) web link metrics, (xii) comment tag metrics, (xiii) course-pack metrics, and many more.

All these candidate metrics should be tested and validated against the panel rankings in RAE 2008, in a multiple regression equation (see the sketch following this post). The selection and weighting of each metric should be adjusted, discipline by discipline, rationally and empirically, rather than a priori, as is being proposed now.

Harnad, S. (2007) Open Access Scientometrics and the UK Research Assessment Exercise. In Proceedings of 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds.

(I might add that RCUK's plans to include "potential economic benefits to the UK" among the criteria for competitive research funding could do with a little more rational and empirical support too, rather than being adopted a priori.)

Stevan Harnad
American Scientist Open Access Forum
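A purely illustrative sketch of the kind of regression-based validation described above: the data file, column names and particular metric set below are hypothetical placeholders, not RAE data. The point is only to show what fitting and weighting a spectrum of candidate metrics against the panel rankings, discipline by discipline, might look like in practice.

```python
# Minimal sketch: fit per-discipline weights for a set of candidate metrics
# against panel rankings by ordinary least squares. Everything here
# (file name, column names, metric list) is a hypothetical placeholder.
import numpy as np
import pandas as pd

CANDIDATE_METRICS = [
    "research_income", "postgrad_count", "journal_impact_factor",
    "article_citations", "downloads", "cocitations", "hub_authority",
]

def fit_discipline_weights(df: pd.DataFrame) -> dict:
    """Return one weight vector per discipline, fitted against panel rank."""
    weights = {}
    for discipline, group in df.groupby("discipline"):
        X = group[CANDIDATE_METRICS].to_numpy(dtype=float)
        # Standardise each metric (assumes each metric varies within the discipline).
        X = (X - X.mean(axis=0)) / X.std(axis=0)
        X = np.column_stack([np.ones(len(X)), X])      # intercept term
        y = group["panel_rank"].to_numpy(dtype=float)  # criterion: panel ranking
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        weights[discipline] = beta
    return weights

# Hypothetical usage: departments.csv has one row per department, with a
# 'discipline' column, a 'panel_rank' column, and one column per metric.
# weights = fit_discipline_weights(pd.read_csv("departments.csv"))
```

The weights that emerge would differ from discipline to discipline, which is precisely why the validation has to be done separately for each.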
Thursday, September 27. 2007
Journal Title Migration and University Resource Reallocation

Sandy Thatcher, President, Association of American University Presses (AAUP), wrote:

If some players (commercial or otherwise) eventually abandon the journal publishing game because of lowered prospects for profit, their titles and editorial boards will migrate, quite naturally, to other players (like PLoS or BMC or Hindawi) who are quite happy to stay in, or enter, the Gold OA arena (but we are again getting ahead of ourselves: it is Green OA, Green OA mandates and Institutional OA Repositories whose time is coming first, not Gold OA). Journal title migration itself is not hypothetical: it is happening all the time, irrespective of OA. (So are journal death and birth.) A learned journal consists of its editorship, peer-reviewership, authorship, and reputation (including its impact metrics), not its publisher. We know (and value) journals by their individual titles and track-records, not their publishers.

ST: "[I]t is not a matter of whether the STM business could be run profitably with NIH-type restrictions in place, but instead the expectations the companies most invested in this business have about profit margins and their willingness to continue in the business at a lower level of profit when their funds might be redirected to more profitable uses elsewhere. Money tends to go where the expectations for profits are greatest."

It is certainly true that universities sometimes (often?) act irrationally, sometimes even with respect to their own best interests: not only universities, but corporations (and even people, individual and plural) betimes obtund. But reality eventually exerts a pressure (if the stakes and consequences are nontrivial) and adaptation occurs -- not necessarily for the best, in ethical and humanistic terms, but at least for the better in terms of "interests".

ST: "One would hope... that 'logic' would apply, of all places, within academic institutions. But I have been writing now for two decades providing 'evidence' of ways in which higher education does not act according to logic, or norms of rationality, that one would expect from it."

And the competition of interests in the question of what universities will do with their hypothetical windfall journal-cancellation savings (if/when Green OA mandates ever generate the -- likewise hypothetical -- unsustainable subscription cancellations) is a competition between the other things universities could do with those newfound windfall savings -- e.g., (1) buying more books for the university library, or withdrawing those savings from the university library budget altogether and spending them on something else -- versus (2) using those savings to pay for the university's newfound research publication costs (which, on the very same hypothesis, will emerge pari passu with the university's windfall cancellation savings).

It seems a safe bet that since the logical brainwork in question is just a one-step deduction (which I think university administrators, even with their atrophied neurons, should still be capable of making, if they are still capable of getting up in the morning at all), the new dance-step will be mastered: faced with the question "Do we use our newfound windfall cancellation savings from our former publication buy-in to pay for the newfound publication costs of our research publication output, or for something else, letting our research output fend for itself?" they will -- under the pressure of logic, necessity, practicality, self-interest, and a lot of emails and phone calls from their research-publishing faculty -- find their way to the dead-obvious (dare I say "optimal and inevitable"?) solution...

Stevan Harnad
American Scientist Open Access Forum

Tuesday, September 4. 2007
British Academy Report on Peer Review and Metrics
The 4 Sept Guardian article on peer review (on the 5 Sept British Academy Report, to be published tomorrow) seems to be a good one. The only thing it lacks is some conclusions (which journalists are often reluctant to take the responsibility of drawing):

"Help Wanted: A pall of gloom lies over the vital system of peer review. But the British Academy has some bright ideas." The Guardian, Jessica Shepherd reports, Tuesday September 4, 2007.

(1) Peer review just means the assessment of research by qualified experts. (In the case of research proposals, it is assessment for fundability; in the case of research reports, it is assessment for publishability.)

(2) Yes, peer review, like all human judgment, is fallible, and susceptible to error and abuse.

(3) Funding and publishing without any assessment is not a solution: (3a) everything cannot be funded (there aren't enough funds), and even funded projects first need some expert advice in their design.

(4) So far, nothing as good as or better than peer review (i.e., qualified experts vetting the work of their fellow experts) has been found, tested and demonstrated. So peer review remains the only straw afloat, if the alternative is not to be tossing a coin for funding and publishing everything on a par.

(5) Peer review can be improved. The weak link is always the editor (or Board of Editors), who chooses the reviewers and to whom the reviewers and authors are answerable; and the Funding Officer(s) or committee choosing the reviewers for proposals and deciding how to act on the basis of the reviews. There are many possibilities for experimenting with ways to make this meta-review component more accurate, equitable, answerable, and efficient, especially now that we are in the online era.

(6) Metrics are not a substitute for peer review; they are a supplement to it. In the case of the UK's Dual Support System -- (i) prospective funding of individual competitive proposals (RCUK) and (ii) retrospective top-sliced funding of entire university departments, based on their recent past research performance (RAE) -- metrics can help inform and guide funding officers, committees, editors, Boards and reviewers. And in the case of the RAE in particular, they can shoulder a lot of the former peer-review burden: the RAE, being a retrospective rather than a prospective exercise, can benefit from the prior publication peer review that the journals have already done for the submissions, rank the outcomes with metrics, and then only add expert judgment afterward, as a way of checking and fine-tuning the metric rankings (a minimal sketch of this checking step follows below).

Funders and universities explicitly recognizing peer review performance as a metric would be a very good idea, both for the reviewers and the researchers being reviewed.
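To make point (6) concrete, here is a purely illustrative sketch of that checking step: metrics produce the initial ranking, and expert judgment is reserved for the cases where a panel spot-check and the metric ranking diverge. The column names and the discrepancy threshold are hypothetical placeholders.

```python
# Minimal sketch: rank departments by a combined metric score, then flag,
# for expert scrutiny, those whose metric rank diverges sharply from the
# rank given by a panel spot-check. Column names and threshold are hypothetical.
import pandas as pd

def flag_for_expert_review(df: pd.DataFrame, threshold: int = 10) -> pd.DataFrame:
    """df: one row per department, with a 'metric_score' column for every
    department and a 'panel_score' column for the spot-checked subset
    (NaN for departments the panel did not examine)."""
    checked = df.dropna(subset=["panel_score"]).copy()
    checked["metric_rank"] = checked["metric_score"].rank(ascending=False)
    checked["panel_rank"] = checked["panel_score"].rank(ascending=False)
    checked["rank_gap"] = (checked["metric_rank"] - checked["panel_rank"]).abs()
    # Large gaps are the cases where the metric ranking most needs
    # expert checking and fine-tuning.
    return checked[checked["rank_gap"] > threshold].sort_values("rank_gap", ascending=False)
```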
Harnad, S. (2007) Open Access Scientometrics and the UK Research Assessment Exercise. In Proceedings of 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds.

Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan, A. (2007) Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics. CTWatch Quarterly 3(3).

Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open Research Web: A Preview of the Optimal and the Inevitable. In Jacobs, N. (Ed.) Open Access: Key Strategic, Technical and Economic Aspects. Chandos.

Harnad, S. (ed.) (1982) Peer commentary on peer review: A case study in scientific quality control. New York: Cambridge University Press.

Harnad, S. (1985) Rational disagreement in peer review. Science, Technology and Human Values 10: 55-62.

Harnad, S. (1986) Policing the Paper Chase. [Review of S. Lock, A difficult balance: Peer review in biomedical publication.] Nature 322: 24-25.

Harnad, S. (1996) Implementing Peer Review on the Net: Scientific Quality Control in Scholarly Electronic Journals. In Peek, R. & Newby, G. (Eds.) Scholarly Publishing: The Electronic Frontier. Cambridge MA: MIT Press. Pp. 103-118.

Harnad, S. (1997) Learned Inquiry and the Net: The Role of Peer Review, Peer Commentary and Copyright. Learned Publishing 11(4): 283-292.

Harnad, S. (1998/2000/2004) The invisible hand of peer review. Nature [online] (5 Nov. 1998); Exploit Interactive 5 (2000); and in Shatz, B. (2004) (Ed.) Peer Review: A Critical Inquiry. Rowman & Littlefield. Pp. 235-242.

Peer Review Reform Hypothesis-Testing (started 1999)
A Note of Caution About "Reforming the System" (2001)
Self-Selected Vetting vs. Peer Review: Supplement or Substitute? (2002)

Stevan Harnad
American Scientist Open Access Forum

Sunday, August 26. 2007
Validating Open Access Metrics for RAE 2008

The United Kingdom's Research Assessment Exercise (RAE) is doing two things right. There are also two things it is planning to do that are currently problematic, but that could easily be made right. Let's start with what the RAE is already doing right:

(+1) It is a good idea to have a national research performance evaluation to monitor and reward research productivity and progress. Other countries will be following and eventually emulating the UK's lead. (Australia is already emulating it.)

But, as with all policies that are being shaped collectively by disparate (and sometimes under-informed) policy-making bodies, two very simple and remediable flaws in the reformed RAE system have gone undetected and hence uncorrected. They can still be corrected, and there is still hope that they will be, as they are small, easily fixed flaws; but, if left unfixed, they will have negative consequences, compromising the RAE as well as the RAE reforms:

(-1) The biggest flaw concerns the metrics that will be used. Metrics first have to be tested and validated, discipline by discipline, to ensure that they are accurate indicators of research performance. Since the UK has relied on the RAE panel evaluations for two decades, and since the last RAE (2008) before conversion to metrics is to be a parallel panel/metrics exercise, the natural thing to do is to test as many candidate metrics as possible in this exercise, and to cross-validate them against the rankings given by the panels, separately, in each discipline. (Which metrics are valid performance indicators will differ from discipline to discipline.) Hence the prior-funding metric (-1a) needs to be used cautiously, to avoid bias and self-fulfilling prophecy; and the citation-count metric (-1b) is a good candidate, but only one of many potential metrics that can and should be tested in the parallel RAE 2008 metric/panel exercise.
(Other metrics include co-citation counts, download counts, download and citation growth and longevity counts, hub/authority scores, interdisciplinarity scores, and many other rich measures for which RAE 2008 is the ideal time to do the testing and validation, discipline by discipline, as it is virtually certain that disciplines will differ in which metrics are predictive for them, and in what the weightings of each metric should be.)

Yet it looks as if RAE 2008 and HEFCE are not currently planning to commission this all-important validation analysis, testing a rich array of candidate metrics against the panel rankings. This is a huge flaw and oversight, although it can still be easily remedied by going ahead and doing such a systematic cross-validation study after all.

(-1a) Prior research funding has already been shown to be extremely highly correlated with the RAE panel rankings in a few (mainly scientific) disciplines, but this was undoubtedly because the panels, in making their rankings, already had those metrics in hand, as part of the submission. Hence the panels themselves could explicitly (or implicitly) count them in making their judgments! Now a correlation between metrics and panel rankings is desirable initially, because that is the way to launch and validate the candidate metrics. In the case of this particular metric, however, not only is there a potential interaction, indeed a bias, that makes the prior-funding metric and the panel ranking non-independent, and hence invalidates the test of this metric's validity; there is also a deeper reason for not putting a lot of weight on the prior-funding metric: as noted above, weighting prior funding too heavily makes the RAE a self-fulfilling prophecy, little more than a multiplier on prior competitive funding.

For such a systematic metric/panel cross-validation study in RAE 2008, however, the array of candidate metrics has to be made as rich and diverse as possible. The RAE is not currently making any effort to collect as many potential metrics as possible in RAE 2008, and this is partly because it is overlooking the growing importance of online, Open Access metrics -- and indeed overlooking the growing importance of Open Access itself, both in research productivity and progress and in evaluating it.

Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan, A. (2007) Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics. CTWatch Quarterly 3(3).

This brings us to the second flaw in HEFCE's RAE 2008 plans:

(-2) For no logical or defensible reason at all, RAE 2008 is insisting that researchers submit the publishers' PDFs for the 2008 exercise. Now it does represent some progress that the RAE is accepting electronic drafts rather than requiring hard copy, as in past years. But in insisting that those electronic drafts must be the publisher's PDF, the RAE is creating two unnecessary problems.

To recapitulate: two pluses -- (+1) research performance evaluation itself, and (+2) conversion to metrics -- plus two (correctable) minuses -- (-1) failure to explicitly provide for the systematic evaluation of a rich candidate spectrum of metrics against the RAE 2008 panel rankings, and (-2) failure to require deposit of the authors' papers in their own IRs, to generate more OA metrics, more OA, and more UK research impact.

(-2a) One unnecessary problem, a minor one, is that the RAE imagines that in order to have the publisher's PDF for evaluation, it needs to seek (or even pay for) permission from the publisher. This is complete nonsense! Researchers (i.e., the authors) submit their own published work to the RAE for evaluation.
For the researchers, this is Fair Dealing (Fair Use), and no publisher permission or payment whatsoever is needed. (As it happens, I believe HEFCE has worked out a "special arrangement" whereby publishers "grant permission" and "waive payment." But the completely incorrect notion that permission or payment were even at issue, in principle, has an important negative consequence, which I will now describe.)

The good news is that there is still time to fully remedy (-1) and (-2), if only policy-makers take a moment to listen, think it through, and do the little that needs to be done to fix it.

Appendix: Are Panel Rankings Face-Valid?

It is important to allay a potential misunderstanding: it is definitely not the case that the RAE panel rankings are themselves infallible or face-valid! The panelists are potentially biased in many ways. And RAE panel review was never really "peer review," because peer review means consulting the most qualified specialists in the world for each specific paper, whereas the panels are just generic UK panels, evaluating all the UK papers in their discipline: it is the journals that have already conducted the peer review. So metrics are not just needed to put an end to the waste and the cost of the existing RAE, but also to try to put the outcome on a more reliable, objective, valid and equitable basis. The idea is not to duplicate the outcome of the panels, but to improve it.

Nevertheless -- and this is the critical point -- the metrics do have to be validated; and, as an essential first step, they have to be cross-validated against the panel rankings, discipline by discipline. For even though those panel rankings are and always were flawed, they are what the RAE has been relying upon, completely, for two decades. So the first step is to make sure that the metrics are chosen and weighted so as to get as close a fit to the panel rankings as possible, discipline by discipline. Then, and only then, can the "ladder" of the panel rankings -- which got us where we are -- be tossed away, allowing us to rely on the metrics alone -- which can then be continuously calibrated and optimised in future years, with feedback from future meta-panels that monitor the rankings generated by the metrics and, if necessary, adjust and fine-tune the metric weights, or even add new, still-to-be-discovered and tested metrics to them.

In sum: despite their warts, the current RAE panel rankings need to be used to bootstrap the new metrics into usability. Without that prior validation based on what has been used until now, the metrics are just hanging from a skyhook, and no one can say whether or not they measure what the RAE panels have been measuring until now. Without validation, there is no continuity in the RAE, and it is not really a "conversion" to metrics, but simply an abrupt switch to another, untested assessment tool. (Citation counts have been tested elsewhere, in other fields, but as there has never been anything of the scope and scale of the UK RAE, across all disciplines in an entire country's research output, the prior patchwork testing of citation counts as research performance indicators is nowhere near providing the evidence that would be needed to make a reliable, valid choice of metrics for the UK RAE: only cross-validation within the RAE parallel metric/panel exercise itself -- jointly with a rich spectrum of other candidate metrics -- can provide that kind of evidence, and the requisite continuity, for a smooth, rational transition from panel rankings to metrics.)
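A purely illustrative sketch of the kind of discipline-by-discipline cross-validation argued for above. The DataFrame layout, column names and fold count are hypothetical placeholders, and the sketch assumes enough assessed departments per discipline; the logic is simply to fit metric weights on part of the data and check, on the held-out part, how closely the metric-predicted ranking reproduces the panel ranking.

```python
# Minimal sketch: per-discipline cross-validation of candidate metrics
# against panel rankings. All names are hypothetical placeholders.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

def cross_validate_metrics(df: pd.DataFrame, metric_cols: list) -> pd.Series:
    """Mean held-out Spearman correlation with the panel ranking, per discipline.
    Assumes at least 5 assessed departments in each discipline."""
    results = {}
    for discipline, group in df.groupby("discipline"):
        X = group[metric_cols].to_numpy(dtype=float)
        y = group["panel_rank"].to_numpy(dtype=float)
        scores = []
        for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
            model = LinearRegression().fit(X[train_idx], y[train_idx])
            rho, _ = spearmanr(model.predict(X[test_idx]), y[test_idx])
            scores.append(rho)
        results[discipline] = float(np.mean(scores))
    return pd.Series(results, name="mean_spearman_vs_panel")

# Caveat from the post: a metric the panels themselves saw (e.g. prior funding)
# will correlate with the panel rank partly for that reason alone, so a high
# score here is necessary but not sufficient evidence of validity.
```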
Stevan Harnad
American Scientist Open Access Forum

Friday, July 27. 2007
"Permission Barriers" are a red herring for OA: Keystrokes are our only real barrier
Klaus Graf writes:
"1. time of free access (the embargo-question): This is the only question Stevan Harnad is interested in. If we can call the OA-FREE journals of DOAJ 'OA' we should also... call [self-archived articles that are] freely accessible articles after an embargo 'OA'."This is incorrect. OA means immediate, permanent, free, full-text access online to published journal articles, webwide. ("Immediate" means immediately upon acceptance for publication.) Hence embargoed access means embargoed access, not OA. I am interested in OA but it has become quite evident across the past 13 years that not nearly enough authors make their articles OA spontaneously, of their own accord (only about 15% do), despite its demonstrated benefits. It is also quite evident that the only real barrier to 100% OA is the keystrokes that it takes to deposit the article and its metadata into the author's Institutional Repository. It is for this reason that my own focus is currently on (1) institutional (and research funder) mandates that ensure that those keystrokes are executed as a matter of institutional/funder policy and on (2) developing the OA metrics that will quantify and reward those benefits. Administrative deposit mandates of course only ensure deposit, not OA. But the benefits of OA themselves will ensure that all those deposits will be made free as worldwide deposits approach 100%, and new deposits will not long thereafter be OA ab ovo. And during the brief life of embargos, Institutional Repositories will provide "almost OA" via their "Fair Use" Button, allowing would-be users to request -- and authors to provide -- an email version almost instantly, with a click from the requester and then a click from the author. American Scientist Open Access Forum Saturday, July 21. 2007Making Visibility Visible: OA Metrics of Productivity and PrestigeOn Fri, 20 Jul 2007, my colleague Steve Hitchcock wrote in the American Scientist Open Access Forum: Hitchcock: Yes, of course, mandates and content are the no. 1 priority. But that doesn't mean we should ignore anything else that might help facilitate more of both. We have enough content in IRs [Institutional Repositories] now for improved visibility to be an issue, and it's an issue that will become more acute as content continues to grow.We don't, unfortunately, have enough content in IRs now! And for what we do have, google provides more than enough visibility. What's needed, urgently, is increased content, not improved visibility. Yes, mandates are the no. 1 priority; but the reason they are still so slow in coming is because we keep getting distracted and diverted to priority no. 2, 3, 4... instead. What Arxiv has is content (in one field); IRs as a whole do not (in any field).Harnad: IRs do not need "to do more to be highly visible." Their problem is not their invisibility, it is their emptiness. And Steve Hitchcock ought to know this, because his own department's IR is anything but invisible -- for the simple reason that it has content. And it has content because self-archiving is mandated!Hitchcock: My point is not about one single IR, or any single IR, but about services that reveal IRs collectively. It's services that allow us to have effective IRs - OAI and interoperability and all that. And I didn't say they are invisible, but that they could and should be more visible. It's not just about search, it's about awareness and currency as well. Arxiv has that, IRs as a whole do not. The IRs' problem is not the visibility of what little they have, but how little they have. 
If we keep on diverting the attention (of what I am increasingly coming to believe is a research community suffering from Attention Deficit Disorder!) toward the non-problem of the day -- this time the "discoverability/visibility" problem -- instead of staying focused on the only real, persistent problem -- which is providing that missing OA content -- then we are simply compounding our persistent failure to reach for what is already long within our grasp. It is not sufficient to say that mandates are the no. 1 priority. We have to actually make them the no. 1 priority, until they are actually adopted. Then we can move on to our other pet peeves. Right now the ill-informedness, noise and confusion levels are still far too high to justify indulging still more distractions.

Hitchcock: I'm not arguing for central repositories, but others are. Critically, some mandates require them, e.g. Wellcome, while the RCUK mandates are more open. So the best we can say is that the most important mandates so far are ambivalent about [whether to deposit in central] subject [CRs] vs IRs. In that case some authors affected by the mandates have a choice, and this is a challenge to IRs now in which IRs can help their cause with better services.

Mandating CR deposit instead of IR deposit is simply a fundamental strategic and practical error, and can and should be dealt with as such, not as a fait accompli motivating a detour into yet another irrelevancy ("discoverability"). And there is no point touting nascent IR functionalities that purport to remedy IRs' non-existent "visibility" problem when IRs' only real problem is their non-existent content -- for which mandates, not IR visibility-enhancements, are the solution. We don't solve -- or even contribute to the grasp of -- a real problem by diverting attention to a non-problem and its solution, as if it were all or part of the solution to the real problem. (There has already been far too much of that sort of wheel-spinning in OA for 13 years now, and we need to resist another spell of still more of the same.)

Optimizing OA Self-Archiving Mandates: There is, however, something that we can do that is not only complementary to mandates, but an incentive for adopting them -- and it just might serve to redirect this useless fuss about "visibility" in a more useful direction: No, there is no problem with the visibility -- to their would-be users webwide -- of the 15% of articles that are already being deposited in IRs; but there definitely is a problem with the visibility of that visibility, and of that usage, to the authors of those articles -- and especially to the authors of the 85% of articles that have not yet been deposited (and to the institutions and funders of those authors, who have not mandated that they be deposited).

I am speaking, of course, of OA metrics -- the visible, quantitative indicators of the enhanced visibility and usage vouchsafed by OA. It is not enough for a few of these metrics to be plumbed, and then published in journal articles and postings -- as admirably indexed by Steve Hitchcock's very useful bibliography of the effect of open access and downloads ('hits') on citation impact. We have to go on to make those metrics directly visible to self-archivers and non-self-archivers alike, immediately and continuously, rather than just in the occasional published study -- and not only absolute metrics but comparative ones.

That will make the greater visibility of the self-archived contents visible, thereby providing an immediate, continuous and palpable incentive to self-archive, and to mandate self-archiving. Those are the kinds of visibility metrics that Arthur Sale at U. Tasmania, Les Carr at Southampton, and Leo Waaijers at SURF/DARE have been working on providing. And the biggest showcase and testbed for all those new metrics of productivity and prestige, and of OA's visible effects on them, will be the 2008 UK Research Assessment Exercise (although I rather hope OA will not wait that long!). Then universities and research funders (worldwide, not just in the UK) will have a palpable sense of how much visibility, usage, impact and income they are losing (and losing to their competitors) the longer they delay mandating OA self-archiving...

Some of the absolute visibility metrics are already implemented in U. Southampton's EPrints IRs:

Harnad, S. (2007) Open Access Scientometrics and the UK Research Assessment Exercise. In Proceedings of 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds.

as well as in U. Tasmania's EPrints IRs. A clever adaptation of Tim Brody's citebase, across IRs, could provide the comparative picture too (a minimal sketch of such a comparative metric follows below).
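A purely illustrative sketch of the kind of comparative metric meant here, under a hypothetical data layout (one row per deposited article, with its discipline, department, download count and citation count): each department's average downloads and citations expressed relative to its discipline-wide average.

```python
# Minimal sketch: turn per-article download and citation counts into a
# comparative display -- each department against its discipline average.
# Column names and data layout are hypothetical placeholders.
import pandas as pd

def comparative_visibility(articles: pd.DataFrame) -> pd.DataFrame:
    """articles: one row per article, with 'discipline', 'department',
    'downloads' and 'citations' columns."""
    dept = articles.groupby(["discipline", "department"])[["downloads", "citations"]].mean()
    field = articles.groupby("discipline")[["downloads", "citations"]].mean()
    merged = dept.join(field, on="discipline", rsuffix="_field")
    # Ratio > 1.0 means the department's deposited articles are more
    # downloaded/cited than the discipline-wide average.
    merged["download_ratio"] = merged["downloads"] / merged["downloads_field"]
    merged["citation_ratio"] = merged["citations"] / merged["citations_field"]
    return merged[["download_ratio", "citation_ratio"]]
```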
Stevan Harnad
American Scientist Open Access Forum

Saturday, June 9. 2007
London Open Research Conference

Open Research: 3rd London Conference on Opening Access to Research Publications

Thursday, June 7. 2007
British Classification Society post-RAE Scientometrics

British Classification Society Meeting, "Analysis Methodologies for Post-RAE Scientometrics", Friday 6 July 2007, International Building, room IN244, Royal Holloway, University of London, Egham.

The selection of appropriate and/or best data analysis methodologies depends on a number of issues: the overriding goals, of course, but also the availability of well-formatted data and ease of access to it. The meeting will focus on the early stages of the analysis pipeline. An aim of this meeting is to discuss data analysis methodologies in the context of what can be considered open, objective and universal in a metrics context of scholarly and applied research.

Les Carr and Tim Brody (Intelligence, Agents, Media group, Electronics and Computer Science, University of Southampton): "Open Access Scientometrics and the UK Research Assessment Exercise"

Harnad, S. (2007) Open Access Scientometrics and the UK Research Assessment Exercise. In Proceedings of 11th Annual Meeting of the International Society for Scientometrics and Informetrics (in press), Madrid, Spain.

Brody, T., Harnad, S. and Carr, L. (2006) Earlier Web Usage Statistics as Predictors of Later Citation Impact. Journal of the American Society for Information Science and Technology (JASIST) 57(8), pp. 1060-1072.

Carr, L., Hitchcock, S., Oppenheim, C., McDonald, J. W., Champion, T. and Harnad, S. (2006) Extending journal-based research impact assessment to book-based disciplines.

Sunday, June 3. 2007
Brazilian National Green OA Self-Archiving Mandate
Hélio Kuramoto of IBICT has helped to formulate a Proposed Law (introduced by Rodrigo Rollemberg, Member of Brazil's House of Representatives) that would require all Brazil's public institutions of higher education and research units to create OA institutional repositories and self-archive all their technical-scientific output therein.
There is also a petition in support of this Green OA Self-Archiving Mandate. I urge all in favor of OA in Brazil (and worldwide) to sign the petition here. Below is an English translation of the petition. Bravo to HK and RR for this timely and welcome step, setting an inspiring example for all. (Brazil's Auriverde -- Gold and Green -- flag is especially apposite for OA!)
"Academics strike back at spurious rankings"Academics strike back at spurious rankingsThis news item in Nature lists some of the (very valid) objections to the many unvalidated university rankings -- both subjective and objective -- that are in wide use today. These problems are all the more reason for extending Open Access (OA) and developing OA scientometrics, which will provide open, validatable and calibratable metrics for research, researchers, and institutions in each field -- a far richer, more sensitive, and more equitable spectrum of metrics than the few, weak and unvalidated measures available today. Some research groups that are doing relevant work on this are, in the UK: (1) our own OA scientometrics group (Les Carr, Tim Brody, Alma Swan, Stevan Harnad) at Southampton (and UQaM, Canada), and our collaborators Charles Oppenheim (Loughborough) and Arthur Sale (Tasmania); (2) Mike Thelwall (Wolverhampton); in the US: (3) Johan Bollen & Herbert van de Sompel at LANL; and in the Netherlands: (5) Henk Moed & Anthony van Raan (Leiden; cited in the Nature news item). Below are excerpts from the Nature article, followed by some references.
Isidro Aguillo is the Scientific Director of the Laboratory of Quantitative Studies of the Internet at the Centre for Scientific Information and Documentation of the Spanish National Research Council, and editor of Cybermetrics, the International Journal of Scientometrics, Informetrics and Bibliometrics. In a posting to the American Scientist Open Access Forum, Dr. Aguillo makes the very valid point (in response to Declan Butler's Nature news article about the use of unvalidated university rankings) that web metrics provide new and potentially useful information not available elsewhere. This is certainly true, and web metrics should certainly be among the metrics included in the multiple regression equation that should be tested and validated in order to weight each of the candidate component metrics and to develop norms and benchmarks for reliable widespread use in ranking and evaluation (see the sketch following this post). Among other potentially useful sources of candidate metrics are: University Metrics.

Bollen, J. and Van de Sompel, H. (2006) Mapping the structure of science through usage. Scientometrics 69(2).

Hardy, R., Oppenheim, C., Brody, T. and Hitchcock, S. (2005) Open Access Citation Information. ECS Technical Report.

Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35.

Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open Research Web: A Preview of the Optimal and the Inevitable. In Jacobs, N. (Ed.) Open Access: Key Strategic, Technical and Economic Aspects. Chandos.

Harnad, S. (2007) Open Access Scientometrics and the UK Research Assessment Exercise. Invited Keynote, 11th Annual Meeting of the International Society for Scientometrics and Informetrics. Madrid, Spain, 25 June 2007.

Kousha, K. and Thelwall, M. (2006) Google Scholar Citations and Google Web/URL Citations: A Multi-Discipline Exploratory Analysis. In Proceedings International Workshop on Webometrics, Informetrics and Scientometrics & Seventh COLLNET Meeting, Nancy (France).

Moed, H. F. (2005) Citation Analysis in Research Evaluation. Dordrecht (Netherlands): Springer.

van Raan, A. (2007) Bibliometric statistical properties of the 100 largest European universities: prevalent scaling rules in the science system. Journal of the American Society for Information Science and Technology (submitted).

Stevan Harnad
American Scientist Open Access Forum
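A purely illustrative footnote to the weighting-and-benchmarking point above: before web metrics, citation metrics and download metrics can be weighted together or used as benchmarks, they need to be normalised against the norms of each field. A minimal sketch with hypothetical column names (not any group's actual methodology):

```python
# Minimal sketch: field-normalise raw candidate metrics (including web link
# counts) so scores are comparable across disciplines before weighting.
# The metric list and column names are hypothetical placeholders.
import pandas as pd

METRICS = ["citations", "downloads", "inlinks"]  # "inlinks" ~ web link metric

def field_normalise(df: pd.DataFrame) -> pd.DataFrame:
    """Return per-row z-scores of each metric relative to its discipline,
    i.e. each value expressed against its field's norm (mean and spread)."""
    grouped = df.groupby("discipline")[METRICS]
    return (df[METRICS] - grouped.transform("mean")) / grouped.transform("std")
```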
The American Scientist Open Access Forum has been chronicling, and often directing, the course of progress in providing Open Access to universities' peer-reviewed research articles since its inception in the US in 1998 by American Scientist, published by the Sigma Xi Society. The Forum is largely for policy-makers at universities, research institutions and research funding agencies worldwide who are interested in institutional Open Access provision policy. (It is not a general discussion group for serials, pricing or publishing issues: it is specifically focused on institutional Open Access policy.)
You can sign on to the Forum here.