Monday, November 3. 2008
Les Carr has posted a call, Looking for Evidence of Researcher Engagement with Repositories: "a collection of success stories - anecdotes of how repositories have been able to improve the lot of researchers - for appealing to institutional repository nay-sayers and open access agnostics". The redoubtable Alma Swan has, as always, responded with data, posting Reasons researchers really rate repositories, which is admiringly reproduced in full below:
by Alma Swan
As the SPARC repositories conference approaches in Baltimore, repositories are the topic of conversation all over the place. Les Carr will be running an eve-of-meeting session where people can contribute and share evidence or anecdotes about how repositories are benefiting researchers. I've had a few whispers in my ear that people are still saying researchers don't rate repositories. Perhaps they don't, where they don't fully understand the picture, or where they've not (yet) personally seen the benefits of using one. But they certainly rate them when they do see those benefits. And that shows we must get the right messages to researchers - and, critically, in the right way.

One conduit is an articulate peer. John Willinsky's lovely tale of how he persuaded his fellow faculty members at Stanford to vote unanimously to mandate themselves to provide OA, greenly, through the repository is illustrative of the power of the peer. It needs a champion who has the arguments marshalled, who is respected in his or her peer community, and who seizes the right moment. John used a Faculty of Education 'Retreat' at Monterey to stand up and speak to his peers. He managed to persuade them of the arguments so effectively that they had time to take a walk on the beach afterwards. That can happen elsewhere, too, though not everyone will have a beach to hand, obviously. But OA advocates who wish to rise to the 'champion' challenge can identify events or mechanisms in their own institution that can be used effectively to persuade their peers of the issues. Afterwards they can go to the park or the pub: bonding is location-independent.

The testimony of peers to the effect that using a repository to provide OA has really shown a benefit is also powerful. I've long used a quotation from a US philosopher, offered in a free-response box in one of our author surveys, to make a point to researcher audiences. It goes: "Self-archiving in the PhilSci Archive has given instant world-wide visibility to my work. As a result, I was invited to submit papers to refereed international conferences/journals and got them accepted". Not much to argue with there. One big career boost, pronto.

Let's look at another such. Last month at the Open Access & Research conference in Brisbane, Paula Callan presented some data from her own QUT repository in a workshop on 'Making OA Happen' (all the ppts are up on the conference website). The data pertain to a chemist, Ray Frost, who has personally (yes, please note, all those who say that researchers cannot be asked to deposit their own articles) deposited around 300 of his papers published over the last few years. Now, this man is prolific in his publishing activity, and it is the fact that he has provided such a great baseline that means we can really trust the data here. An increase of 100% on nought is still nought, and an increase of 100% on two is only two. What we've always needed is a sizeable base to start with, so that we can legitimately say that a certain percentage increase (or whatever) has occurred. Ray Frost has provided us with one.

Look at the charts at the top of this post. What the data show is this: on the left are the papers Frost has published each year since 1992 (the data are from Web of Science). These have been downloaded 165,000 (yes) times from the QUT repository. On the right are the citations he has gathered over that time period. From 2000 to 2003, citations were approximately flat-lining at about 300 per year, on 35-40 papers per year.
When Ray started putting his articles into the QUT repository, the numbers of citations started to take off. The latest count is 1200 in one year. Even though Ray's publication rate went up a bit over this period - to 55-60 papers per year - the increase in citations is impressive. And unless Ray's work suddenly became super-important in 2004, the extra impact is a direct result of Open Access.

Now, there's another little piece of information to add to this tale: the QUT library staff routinely add DOIs to each article deposited in the repository. Would-be users who can access the published version will generally do so using those. The 165,000 downloads are from users who do not have access to Ray's articles through their own institution's subscriptions - the whole purpose of Open Access. That's an awful lot of EXTRA readership and a lot of new citations coming in on the back of it.

The final example of a reason for rating repositories comes from Ann Marie Clark, the Library Director at the Fred Hutchinson Cancer Research Center in Seattle. 'The Hutch' has a repository built on the EPrints software and is starting to capture the output of the Center as the Library develops an advocacy programme. No doubt individual researchers at the Hutch will in future enjoy the same sort of increase in impact as Ray Frost in Brisbane. Already, though, one other reason for depositing has come to the fore in Seattle. Ann Marie reports that the National Institutes of Health, the major funder for work done by scientists at The Hutch, nowadays requires that most grant applications come in electronic form only. Along with this new electronic submission system came new policies. "One in particular," Ann Marie says, "affects how our researchers think about OA and their own papers. This new rule limits them, when citing papers that support their grant proposal, to attaching no more than three published PDFs. Any papers cited beyond that limit may only offer URLs for freely-accessible versions. As a result, convincing faculty members to work with our librarians to deposit their papers into our repository has not been difficult at all. The icing on the cake for our faculty is that our repository also offers a stable and contextual home to their historically orphaned supplemental data files."

So there we have it. Or them, rather. Reasons researchers really rate repositories: vast visibility, increased impact, worry-reduced workflow.
Alma Swan
Optimal Scholarship
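[A back-of-the-envelope check, using the approximate figures quoted in the post (purely illustrative numbers, not the underlying dataset), shows that Frost's citation growth far outpaces the growth in his publication rate:]

```python
# Rough normalization of the Ray Frost figures quoted above
# (midpoints of the approximate ranges in the post; illustrative only).
papers_before, cites_before = 37.5, 300    # ~35-40 papers/yr, ~300 citations/yr (2000-2003)
papers_after,  cites_after  = 57.5, 1200   # ~55-60 papers/yr, ~1200 citations/yr (latest)

rate_before = cites_before / papers_before   # citations per paper per year, pre-archiving
rate_after  = cites_after / papers_after     # citations per paper per year, post-archiving

print(f"Before self-archiving: {rate_before:.1f} citations/paper/year")
print(f"After self-archiving:  {rate_after:.1f} citations/paper/year")
print(f"Per-paper increase: {100 * (rate_after / rate_before - 1):.0f}%")
```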
Monday, October 20. 2008
Submission fees as a potential means of covering peer review costs have been mooted since at least 1999 and much discussed across the years in the American Scientist Open Access Forum. They are indeed a promising and potentially viable mechanism for covering the costs of peer review.
However, today, when 90% of journals (and almost 100% of the top journals) are still subscription-based, publication charges of any kind are still a deterrent. There is a case to be made, however, that submission charges -- for peer review -- applied to all submissions, regardless of whether they are ultimately accepted or rejected, are a more understandable and justifiable expense than publication charges, applied only to accepted articles (and bearing the additional burden of the cost of the peer review for all the rejected articles too).
It remains true, however, that at a time when most peer-reviewed journals are still subscription-based -- and when Green OA self-archiving is available as the authors' means to make all their published articles OA -- it is an unnecessary additional constraint and burden for authors (or their institutions or funders) to have to pay in any way for OA. While subscriptions are still paying the costs of peer review, the only source for paying publication charges -- whether for submission or acceptance -- is already-scarce research funds.
It makes incomparably more sense to focus all OA efforts on Green OA self-archiving and Green OA self-archiving mandates at this time. That will generate universal (Green) OA. If and when that should in turn make subscriptions unsustainable, then the subscription cancellation savings can be used to pay for a transition to Gold OA charges to cover the costs of the peer review.
Today, in contrast, such charges (whether for submission or acceptance) are not only a gratuitous additional burden for authors, their institutions and their funders, but they are a distraction from the immediate need for universal Green OA self-archiving and Green OA self-archiving mandates from all research institutions and funders.
Stevan Harnad
American Scientist Open Access Forum
Sunday, October 19. 2008
 'the man who is ready to prove that metaphysics is wholly impossible... is a brother metaphysician with a rival theory.'
Francis Herbert Bradley (1846-1924), Appearance and Reality

A critique of metrics and of the European Reference Index for the Humanities (ERIH) by History of Science, Technology and Medicine journal editors has been posted on the Classicists list. ERIH looks like an attempt to set up a bigger, better alternative to the ISI Journal Impact Factor (JIF), tailored specifically for the Humanities. The protest from the journal editors looks as if it is partly anti-JIF, partly opposed to the ERIH approach and appointees, and partly anti-metrics.
Their vision seems rather narrow. In the Open Access era, metrics are becoming far richer, more diverse, more transparent and more answerable than just the ISI JIF: author/article citations, author/article downloads, book citations, growth/decay metrics, co-citation metrics, hub/authority metrics, endogamy/exogamy metrics, link metrics, tag metrics, comment metrics, semiometrics (text-mining) and much more. The days of the univariate JIF are already over. This is not the time to reject metrics; it is the time to test and validate, jointly, as full a battery of candidate metrics as possible, but validating the battery separately for each discipline, against peer ranking or other validated or face-valid standards (as in the UK's RAE 2008).
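As a concrete illustration of what such joint, discipline-by-discipline validation might look like, here is a minimal sketch on fabricated data (all names and numbers are assumptions for illustration, not RAE data): a battery of candidate metrics is regressed onto peer rankings separately for each discipline, yielding a multiple correlation and a weight for each metric in the battery:

```python
import numpy as np

# Hedged sketch (fabricated data): jointly validate a battery of candidate
# metrics against peer rankings, separately per discipline, as suggested
# above for RAE 2008. Disciplines, sample sizes and weights are invented.
rng = np.random.default_rng(0)
disciplines = ["history", "physics"]

for d in disciplines:
    metrics = rng.random((50, 4))                  # 4 candidate metrics, 50 units assessed
    true_weights = rng.random(4)                   # how peers "really" weight quality
    peer_rank = metrics @ true_weights + 0.1 * rng.standard_normal(50)

    X = np.column_stack([np.ones(50), metrics])    # regression with intercept
    beta, *_ = np.linalg.lstsq(X, peer_rank, rcond=None)
    r = np.corrcoef(X @ beta, peer_rank)[0, 1]     # multiple correlation R
    print(f"{d}: R = {r:.2f}, estimated metric weights = {np.round(beta[1:], 2)}")
```

In a real validation exercise the weights would differ by discipline, which is exactly why the battery must be validated separately for each field rather than once for all.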
Brody, T., Kampa, S., Harnad, S., Carr, L. and Hitchcock, S. (2003) Digitometric Services for Open Archives Environments. In Proceedings of European Conference on Digital Libraries 2003, pp. 207-220, Trondheim, Norway.
Brody, T., Carr, L., Harnad, S. and Swan, A. (2007) Time to Convert to Metrics. Research Fortnight pp. 17-18.
Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan, A. (2007) Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics. CTWatch Quarterly 3(3).
Carr, L., Hitchcock, S., Oppenheim, C., McDonald, J. W., Champion, T. and Harnad, S. (2006) Extending journal-based research impact assessment to book-based disciplines. Technical Report, ECS, University of Southampton.
Harnad, S. (2001) Research access, impact and assessment. Times Higher Education Supplement 1487: p. 16.
Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35.
Harnad, S. (2006) Online, Continuous, Metrics-Based Research Assessment. Technical Report, ECS, University of Southampton.
Harnad, S. (2007) Open Access Scientometrics and the UK Research Assessment Exercise. In Proceedings of 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds.
Harnad, S. (2008) Self-Archiving, Metrics and Mandates. Science Editor 31(2) 57-59
Harnad, S. (2008) Validating Research Performance Metrics Against Peer Rankings. Ethics in Science and Environmental Politics 8(11). doi:10.3354/esep00088. (In special issue: The Use And Misuse Of Bibliometric Indices In Evaluating Scholarly Performance)
Harnad, S., Carr, L. and Gingras, Y. (2008) Maximizing Research Progress Through Open Access Mandates and Metrics. Liinc em Revista.
Date: Sun, 19 Oct 2008 11:56:22 +0100
Sender: Classicists
From: Nick Lowe
Subject: History of Science pulls out of ERIH

[As editorial boards and subject associations in other humanities subjects contemplate their options, this announcement by journals in History of Science seems worth passing on in full. Thanks to Stephen Clark for the forward.]

Journals under Threat: A Joint Response from History of Science, Technology and Medicine Editors
We live in an age of metrics. All around us, things are being standardized, quantified, measured. Scholars concerned with the work of science and technology must regard this as a fascinating and crucial practical, cultural and intellectual phenomenon. Analysis of the roots and meaning of metrics and metrology has been a preoccupation of much of the best work in our field for the past quarter century at least. As practitioners of the interconnected disciplines that make up the field of science studies we understand how significant, contingent and uncertain can be the process of rendering nature and society in grades, classes and numbers. We now confront a situation in which our own research work is being subjected to putatively precise accountancy by arbitrary and unaccountable agencies.
Some may already be aware of the proposed European Reference Index for the Humanities (ERIH), an initiative originating with the European Science Foundation. The ERIH is an attempt to grade journals in the humanities - including "history and philosophy of science". The initiative proposes a league table of academic journals, with premier, second and third divisions. According to the European Science Foundation, ERIH "aims initially to identify, and gain more visibility for, top-quality European Humanities research published in academic journals in, potentially, all European languages". It is hoped "that ERIH will form the backbone of a fully-fledged research information system for the Humanities". What is meant, however, is that ERIH will provide funding bodies and other agencies in Europe and elsewhere with an allegedly exact measure of research quality. In short, if research is published in a premier league journal it will be recognized as first rate; if it appears somewhere in the lower divisions, it will be rated (and not funded) accordingly.
This initiative is entirely defective in conception and execution. Consider the major issues of accountability and transparency. The process of producing the graded list of journals in science studies was overseen by a committee of four (the membership is currently listed at ). This committee cannot be considered representative. It was not selected in consultation with any of the various disciplinary organizations that currently represent our field such as the European Association for the History of Medicine and Health, the Society for the Social History of Medicine, the British Society for the History of Science, the History of Science Society, the Philosophy of Science Association, the Society for the History of Technology or the Society for Social Studies of Science. Journal editors were only belatedly informed of the process and its relevant criteria or asked to provide any information regarding their publications.
No indication has been given of the means through which the list was compiled; nor how it might be maintained in the future. The ERIH depends on a fundamental misunderstanding of conduct and publication of research in our field, and in the humanities in general. Journals' quality cannot be separated from their contents and their review processes. Great research may be published anywhere and in any language. Truly ground-breaking work may be more likely to appear from marginal, dissident or unexpected sources, rather than from a well-established and entrenched mainstream. Our journals are various, heterogeneous and distinct. Some are aimed at a broad, general and international readership, others are more specialized in their content and implied audience. Their scope and readership say nothing about the quality of their intellectual content. The ERIH, on the other hand, confuses internationality with quality in a way that is particularly prejudicial to specialist and non-English language journals.
In a recent report, the British Academy, with judicious understatement, concludes that "the European Reference Index for the Humanities as presently conceived does not represent a reliable way in which metrics of peer-reviewed publications can be constructed" (Peer Review: the Challenges for the Humanities and Social Sciences, September 2007: ). Such exercises as ERIH can become self-fulfilling prophecies. If such measures as ERIH are adopted as metrics by funding and other agencies, then many in our field will conclude that they have little choice other than to limit their publications to journals in the premier division. We will sustain fewer journals and much less diversity, and we will impoverish our discipline. Along with many others in our field, this Journal has concluded that we want no part of this dangerous and misguided exercise. This joint Editorial is being published in journals across the fields of history of science and science studies as an expression of our collective dissent and our refusal to allow our field to be managed and appraised in this fashion. We have asked the compilers of the ERIH to remove our journals' titles from their lists.

Hanne Andersen (Centaurus)
Roger Ariew & Moti Feingold (Perspectives on Science)
A. K. Bag (Indian Journal of History of Science)
June Barrow-Green & Benno van Dalen (Historia mathematica)
Keith Benson (History and Philosophy of the Life Sciences)
Marco Beretta (Nuncius)
Michel Blay (Revue d'Histoire des Sciences)
Cornelius Borck (Berichte zur Wissenschaftsgeschichte)
Geof Bowker and Susan Leigh Star (Science, Technology and Human Values)
Massimo Bucciantini & Michele Camerota (Galilaeana: Journal of Galilean Studies)
Jed Buchwald and Jeremy Gray (Archive for History of Exact Sciences)
Vincenzo Cappelletti & Guido Cimino (Physis)
Roger Cline (International Journal for the History of Engineering & Technology)
Stephen Clucas & Stephen Gaukroger (Intellectual History Review)
Hal Cook & Anne Hardy (Medical History)
Leo Corry, Alexandre Métraux & Jürgen Renn (Science in Context)
D. Diecks & J. Uffink (Studies in History and Philosophy of Modern Physics)
Brian Dolan & Bill Luckin (Social History of Medicine)
Hilmar Duerbeck & Wayne Orchiston (Journal of Astronomical History & Heritage)
Moritz Epple, Mikael Hård, Hans-Jörg Rheinberger & Volker Roelcke (NTM: Zeitschrift für Geschichte der Wissenschaften, Technik und Medizin)
Steven French (Metascience)
Willem Hackmann (Bulletin of the Scientific Instrument Society)
Bosse Holmqvist (Lychnos)
Paul Farber (Journal of the History of Biology)
Mary Fissell & Randall Packard (Bulletin of the History of Medicine)
Robert Fox (Notes & Records of the Royal Society)
Jim Good (History of the Human Sciences)
Michael Hoskin (Journal for the History of Astronomy)
Ian Inkster (History of Technology)
Marina Frasca Spada (Studies in History and Philosophy of Science)
Nick Jardine (Studies in History and Philosophy of Biological and Biomedical Sciences)
Trevor Levere (Annals of Science)
Bernard Lightman (Isis)
Christoph Lüthy (Early Science and Medicine)
Michael Lynch (Social Studies of Science)
Stephen McCluskey & Clive Ruggles (Archaeoastronomy: the Journal of Astronomy in Culture)
Peter Morris (Ambix)
E. Charles Nelson (Archives of Natural History)
Ian Nicholson (Journal of the History of the Behavioural Sciences)
Iwan Rhys Morus (History of Science)
John Rigden & Roger H Stuewer (Physics in Perspective)
Simon Schaffer (British Journal for the History of Science)
Paul Unschuld (Sudhoffs Archiv)
Peter Weingart (Minerva)
Stefan Zamecki (Kwartalnik Historii Nauki i Techniki)
Viviane Quirke, RCUK Academic Fellow in twentieth-century Biomedicine, Secretary of the BSHS, Centre for Health, Medicine and Society, Oxford Brookes University
Friday, October 17. 2008
  ETH Zürich (SWITZERLAND* institutional-mandate)
Institution's/Department's OA Eprint Archives
Institution's/Department's OA Self-Archiving Policy
It is the policy of the ETH Zürich to maximise the visibility, usage and impact of its research output by maximising online access to it for all would-be users and researchers worldwide.
Therefore the ETH Zürich:
Requires staff and postgraduate students to deposit electronic copies of any research papers that have been accepted for publication in a peer-reviewed journal (postprints), as well as theses and other scientific research output (monographs, reports, proceedings, videos, etc.), in the institutional repository "ETH E-Collection" (http://e-collection.ethbib.ethz.ch/), to be made freely available as soon as possible, provided there are no legal objections. The ETH Zürich expects authors, where possible, to retain their copyright. For detailed information see the rules of the ETH E-Collection.
 Hong Kong University (CHINA* proposed-multi-institutional-mandate)
Institution's/Department's OA Eprint Archives
Proposed OA Self-Archiving Policy
HKU Research Committee Agrees to Endorse [the following policy proposal]:
As the majority of research in Hong Kong is funded by the RGC/UGC, their policies are critical. We would like to propose the following specific actions for the RGC/UGC’s consideration:
a) State clearly that all researchers funded by an RGC grant should aim to publish their results in the highest-quality journals or books, so as to maximize the influence and impact of the research outcome, and that, to achieve this when publishing research findings:
i. Researchers should look for suitable OA journals so that, where there is a choice between non-OA and OA journals that are equally influential and high-impact, the choice should be to publish the results in an OA journal.
ii. When a comparable OA journal does not exist, they should send the journal the Hong Kong author’s addendum (University of Hong Kong, 2008), which adds the right of placing some version (preprint or postprint) of the paper in their university’s institutional repository (IR). If necessary, seek funds from the RGC to pay open access charges up to an agreed limit; perhaps US$3,000, which is the fee agreed with the Wellcome Trust for most Elsevier journals (Elsevier, 2007).
iii. For books and book chapters that are published without a royalty agreement, send the publisher the Hong Kong author’s addendum to seek the right of placing some version in their university’s IR.
iv. Deposit all published papers in their IR, unless the journal refuses in writing. If the published version is refused, deposit the preprint or postprint, as allowed in number ii above.
v. Researchers must provide evidence to the RGC in their progress reports that the above steps have been undertaken.
"The Open Access Advantage"
by: John Bacon-Shone, Edwin Cheng, Anthony Ferguson, Carmel McNaught, David Palmer, Ah Chung Tsoi
Hong Kong Baptist University
The Chinese University of Hong Kong
The Hong Kong Polytechnic University
The University of Hong Kong
October 3, 2008
Tuesday, October 14. 2008
 Every day is Open Access (OA) Day.
OA means free online access to refereed research.
OA can be provided by self-archiving, in the author's institutional repository, all articles published in non-OA journals ("Green OA") and/or by publishing in OA journals ("Gold OA").
Green OA self-archiving is being mandated by 56 universities and research funders worldwide so far.
Green OA self-archiving needs to be mandated by all universities and research funders worldwide.
The result will be universal OA (and Gold OA will follow soon after).
OA maximizes research access, uptake, usage, impact, productivity, progress and benefits to humankind.
The best thing you can do for OA is to lobby for Green OA self-archiving mandates.
" That is all ye know on earth, and all ye need to know."
Every day is Open Access Day
Video 1 (intro in French, rest in English)
Video 2
Video 3
PPTs
Stevan Harnad
American Scientist Open Access Forum
Friday, October 10. 2008
Many silly, mindless things have been standing in the way of the optimal and inevitable (i.e., universal Open Access) for years now (canards about permissions, peer review, preservation, etc.), but perhaps the biggest of them is the persistent conflation of OA with OA publishing: OA means free online access to refereed journal articles ("gratis" OA means access only; "libre" OA means also various re-use rights).
OA to refereed journal articles can be provided in two ways: by publishing in an OA journal that provides OA (OA publishing, "Gold" OA) or by publishing in a non-OA journal and self-archiving the article ("Green" OA).
Hence Green OA, which is full-blooded OA, is OA, but it is not OA publishing -- just as apples are fruit, but fruit are not apples.
Hence the many OA mandates that are being adopted by universities and research funders worldwide are not Gold OA publishing mandates, they are Green OA self-archiving mandates.
It is not doing the OA cause, or progress towards universal OA one bit of good to keep portraying it as a publishing reform movement, with Gold OA publishing as its sole and true goal.
The OA movement's sole and true goal is OA itself, universal OA.
Whether or not universal OA will eventually lead to universal Gold OA publishing is a separate, speculative question.
OA means OA, and OA publishing is merely one of the forms it can take.
(I post this out of daily frustration at continuing to see OA spoken of as synonymous with OA publishing, and at even hearing Green OA self-archiving mandates misdescribed as "OA publishing mandates" [e.g., 1, 2].)
If only we could stop doing this conflation, OA would have a better chance of reaching the optimal and inevitable before the heat death of the universe...
Stevan Harnad
American Scientist Open Access Forum
SUMMARY: Unlike OA's primary target, journal articles, the deposit of the full-texts of books in Open Access Repositories cannot be mandated, only encouraged. However, the deposit of book metadata plus reference-lists can and should be mandated. That will create the metric that the book-based disciplines need most: a book citation index. ISI's Web of Science only covers citations of books by (indexed) journal articles, but book-based disciplines' biggest need is book-to-book citations. Citebase could provide that, once the book reference metadata are being deposited in the IRs too, rather than just article postprints. (Google Books and Google Scholar are already providing a first approximation to a book citation count.) Analogues of "download" metrics for books are also potentially obtainable from book vendors, beginning with Amazon Sales Rank. In the Humanities it also matters for credit and impact how much the non-academic (hence non-citing) public is reading their books ("Demotic Metrics"). IRs can not only (1) add book-metadata/reference deposit to their OA Deposit Mandates, but they can (2) harvest Amazon book-sales metrics for their book metadata deposits, to add to their IR stats. IRs can also already harvest Google Books (and Google Scholar) book-citation counts today, as a first step toward constructing a distributed, universal OA book-citation index. The Dublin humanities metrics conference was also concerned with other kinds of online works, and how to measure and credit their impact: metrics don't stop with citation counts and download counts. Among the many "Demotic Metrics" that can also be counted are link-counts, tag-counts, blog-mentions and web-mentions. This applies to books/authors, as well as to data, to courseware and to other identifiable online resources. We should hasten the progress of book metrics, and that will in turn accelerate the growth in OA's primary target content, journal articles, as well as increasing support for institutional and funder OA Deposit Mandates.
The deposit of the full-texts of book-chapters and monographs in Open Access Repositories should of course be encouraged wherever possible, but, unlike with journal articles, full-text book deposit itself cannot be mandated.
The most important additional thing that the OA movement should be singling out and emphasizing -- over and above the Immediate Deposit (ID) Mandate plus the email-eprint-request Button and the use of metrics to motivate mandates -- is the author deposit of all book metadata plus reference lists in the author's OA Institutional Repository (IR). That will create the metric that the book-based disciplines need the most.
This has been mentioned before, as a possibility and a desideratum for institutional (and funder) OA policy, but it is now crystal clear why it is so important (and so easy to implement).
By systematically ensuring the IR deposit of each book's bibliographic metadata plus its cited-works bibliography, institutions (and funders) are actually creating a book citation index.
This became apparent (again) at the Dublin humanities metrics conference, when ISI's VP Jim Pringle repeated ISI's (rather weak) response to the Humanities' need for a book citation index, pointing out that "ISI does cover citations of books -- by journal articles."
But that of course is anything but sufficient for book-based disciplines, whose concern is mainly about book-to-book citations!
Yet that is precisely what can be harvested out of IRs (by, for example, Citebase, or a Citebase-like scientometric engine) -- if only the book reference metadata, too, are deposited in the IRs, rather than only article postprints. That immediately begins making the IR network into a unique and much-needed book-citation (distributed) database. (Moreover, Google Books and Google Scholar are already providing a first approximation to this.)
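To see how little machinery a distributed book citation index actually requires, here is a minimal sketch (the data model, identifiers and three example deposits are hypothetical assumptions, not any IR's real schema): each deposit contributes a book's identifier plus the identifiers of the works it cites, and pooling the reference lists across repositories yields book-to-book citation counts:

```python
from collections import Counter

# Minimal sketch, assuming a hypothetical deposit record: each IR deposit
# supplies a book's own identifier plus the identifiers of the works it
# cites. Pooling these reference lists across repositories gives a
# distributed book-to-book citation count.
deposits = [
    {"book": "isbn:0-0000-0001", "cites": ["isbn:0-0000-0002", "isbn:0-0000-0003"]},
    {"book": "isbn:0-0000-0002", "cites": ["isbn:0-0000-0001"]},
    {"book": "isbn:0-0000-0004", "cites": ["isbn:0-0000-0001", "isbn:0-0000-0002"]},
]

book_citations = Counter()
for deposit in deposits:
    for cited in deposit["cites"]:
        book_citations[cited] += 1

for book, count in book_citations.most_common():
    print(book, count)
```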
And there's more: Obviously OA IRs will not be able to get book download counts -- analogous to article download counts -- when the only thing deposited is the book's metadata and reference list. However, in his paper at this Dublin conference, Janus Linmans -- in cleaving to his age-old bibliometric measure of library book-holdings lists as the surrogate for book citation counts in his analyses -- inadvertently gave me another obvious idea, over and above the deposit and harvesting of book reference metadata:
Library holdings are just one, weak, indirect metric of book usage (and Google Book Collections already collects some of those data). But far better analogues of "downloads" for books are potentially obtainable from book vendors, beginning with Amazon Sales Rank, but eventually including conventional book vendors too (metrics do not end with web-based data):
The researchers from the Humanities stressed in Dublin that the book-to-book (and journal-to-book and book-to-journal) citation counts would be most welcome and useful, but in the Humanities even those do not tell the whole story, because it also matters for the credit and impact of a Humanities' researcher how much the non-academic (hence non-citing) public is reading their books too. (Let us call these non-academic metrics "Demotic Metrics.")
Well, starting with a systematic Amazon book-sales count, per book deposited in the IR (and eventually extended to many book-vendors, online and conventional), the ball can be set in motion very easily. IRs can not only formally (1) add book-metadata/reference deposit to their OA Deposit Mandates, but they can (2) systematically harvest Amazon book-sales metrics for their book items to add to their IR stats for each deposit.
And there's more: IRs can also harvest Google Books (and Google Scholar) book-citation counts, already today, as a first approximation to constructing a distributed, universal OA book-citation index, even before the practice of depositing book metadata/reference has progressed far enough to provide useful data on its own: Whenever book metadata are deposited in an IR, the IR automatically does (i) an Amazon query (number of sales of this book) plus (ii) a Google-Books/Google-Scholar query (number of citations of this book).
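A minimal sketch of that per-deposit enrichment step follows (the two fetch functions are placeholders standing in for whatever vendor and book-search services an IR actually has access to; they are assumptions, not real Amazon or Google API calls):

```python
# Hypothetical sketch of the per-deposit enrichment step described above.
# The fetch functions are placeholders, not real vendor/search APIs.
def fetch_amazon_sales_rank(isbn: str) -> int:
    # Placeholder: in practice, query the book vendor's service here.
    return 0

def fetch_google_books_citations(isbn: str) -> int:
    # Placeholder: in practice, query the book-search service here.
    return 0

def on_book_metadata_deposit(record: dict) -> dict:
    """Enrich a newly deposited book-metadata record with external metrics."""
    isbn = record["isbn"]
    record["amazon_sales_rank"] = fetch_amazon_sales_rank(isbn)
    record["google_books_citations"] = fetch_google_books_citations(isbn)
    return record

# Example: enrich one (fictional) deposit as it arrives in the IR.
print(on_book_metadata_deposit({"isbn": "978-0-00-000000-0", "title": "A Book"}))
```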
These obvious and immediately feasible additions to an institutional OA mandate and to its IR software configuration and functionality would not only yield immediate, useful and desirable metrics and motivate Humanists to become even more supportive of OA and metrics, but would also help set in motion practices that (yet again) are so obviously optimal and feasible for science and scholarship as to be inevitable.
We should hasten the progress of book metrics, and that will in turn accelerate the growth in OA's primary target content: journal articles, as well as increasing support for institutional and funder OA Deposit Mandates.
One further concern at the Dublin Metrics Conference was other kinds of online works, and how to measure and credit their impact: metrics don't stop with citation counts and download counts! Among the many "Demotic Metrics" that can also be counted are link-counts, tag-counts, blog-mentions and web-mentions. This applies to books/authors, as well as to data, to courseware and to other identifiable online resources.
In "Appearance and Reality," Bradley (1897/2002) wrote (of Ayer) that 'the man who is ready to prove that metaphysics is wholly impossible ... is a brother metaphysician with a rival theory.
Well, one might say the same of those who are skeptical about metrics: There are only two ways to measure the quality, importance or impact of a piece of work: Subjectively, by asking experts for their judgment (peer review: and then you have a polling metric!) or objectively, by counting objective data of various kinds. But of course counting and then declaring those counts "metrics" for some criterion or other, by fiat, is not enough. Those candidate metrics have to be validated against that criterion, either by showing that they correlate highly with the criterion, or that they correlate highly with an already validated correlate of the criterion. One natural criterion is expert judgment itself: peer review. Objective metrics can then be validated against peer review. Book citation metrics need to be added to the rich and growing battery of candidate metrics, and so do "demotic metrics."
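To make the two validation routes concrete, here is a minimal sketch on fabricated data (all numbers are assumptions for illustration): a candidate metric is correlated, via Spearman rank correlation, both directly with the criterion (peer review scores) and with an already-validated correlate of that criterion:

```python
import numpy as np
from scipy.stats import spearmanr

# Toy sketch of the two validation routes described above, on fabricated data.
rng = np.random.default_rng(1)
peer_review = rng.random(200)                                    # the criterion
validated_metric = peer_review + 0.1 * rng.standard_normal(200)  # known correlate
candidate_metric = peer_review + 0.3 * rng.standard_normal(200)  # metric to be tested

rho_direct, _ = spearmanr(candidate_metric, peer_review)         # route (a)
rho_indirect, _ = spearmanr(candidate_metric, validated_metric)  # route (b)
print(f"candidate vs criterion:           rho = {rho_direct:.2f}")
print(f"candidate vs validated correlate: rho = {rho_indirect:.2f}")
```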
Brody, T., Kampa, S., Harnad, S., Carr, L. and Hitchcock, S. (2003) Digitometric Services for Open Archives Environments. In Proceedings of European Conference on Digital Libraries 2003, pp. 207-220, Trondheim, Norway.
Brody, T., Carr, L., Harnad, S. and Swan, A. (2007) Time to Convert to Metrics. Research Fortnight pp. 17-18.
Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan, A. (2007) Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics. CTWatch Quarterly 3(3).
Carr, L., Hitchcock, S., Oppenheim, C., McDonald, J. W., Champion, T. and Harnad, S. (2006) Extending journal-based research impact assessment to book-based disciplines. Technical Report, ECS, University of Southampton.
Harnad, S. (2001) Research access, impact and assessment. Times Higher Education Supplement 1487: p. 16.
Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35.
Harnad, S. (2006) Online, Continuous, Metrics-Based Research Assessment. Technical Report, ECS, University of Southampton.
Harnad, S. (2007) Open Access Scientometrics and the UK Research Assessment Exercise. In Proceedings of 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds.
Harnad, S. (2008) Self-Archiving, Metrics and Mandates. Science Editor 31(2) 57-59
Harnad, S. (2008) Validating Research Performance Metrics Against Peer Rankings. Ethics in Science and Environmental Politics 8(11). doi:10.3354/esep00088. (In special issue: The Use And Misuse Of Bibliometric Indices In Evaluating Scholarly Performance)
Harnad, S., Carr, L. and Gingras, Y. (2008) Maximizing Research Progress Through Open Access Mandates and Metrics. Liinc em Revista.
 Young NS, Ioannidis JPA, Al-Ubaydli O (2008) Why Current Publication Practices May Distort Science. PLoS Medicine Vol. 5, No. 10, e201 doi:10.1371/journal.pmed.0050201
ARTICLE SUMMARY: "The current system of publication in biomedical research provides a distorted view of the reality of scientific data that are generated in the laboratory and clinic. This system can be studied by applying principles from the field of economics. The 'winner's curse,' a more general statement of publication bias, suggests that the small proportion of results chosen for publication are unrepresentative of scientists' repeated samplings of the real world. The self-correcting mechanism in science is retarded by the extreme imbalance between the abundance of supply (the output of basic science laboratories and clinical investigations) and the increasingly limited venues for publication (journals with sufficiently high impact). This system would be expected intrinsically to lead to the misallocation of resources. The scarcity of available outlets is artificial, based on the costs of printing in an electronic age and a belief that selectivity is equivalent to quality. Science is subject to great uncertainty: we cannot be confident now which efforts will ultimately yield worthwhile achievements. However, the current system abdicates to a small number of intermediates an authoritative prescience to anticipate a highly unpredictable future. In considering society's expectations and our own goals as scientists, we believe that there is a moral imperative to reconsider how scientific data are judged and disseminated."

There are reasons to be skeptical about the conclusions of this PLoS Medicine article. It says that science is compromised by there being too few "high impact" journals to publish in. The truth is that just about everything gets published somewhere among the planet's 25,000 peer-reviewed journals, just not all in the top journals, which are, by definition, reserved for the top articles -- and not all articles can be top articles. The triage (peer review) is not perfect, so sometimes an article will appear lower (or higher) in the journal quality hierarchy than it ought to. But now that funders and universities are mandating Open Access, all research -- top, middle and low -- will be accessible to everyone. This will correct any access inequities and it will also help remedy quality misassignment (inasmuch as lower-quality journals may have fewer subscribers, and users may be less likely to consult lower-quality journals). But it will not change the fact that 80% of citations (and presumably usage) go to the top 20% of articles, though it may flatten this "skewness of science" (Seglen 1992) somewhat.
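That Pareto-like concentration is easy to visualize with a toy computation (a minimal sketch on fabricated lognormal citation counts; the spread parameter is an assumption tuned so the skew roughly matches the 80/20 figure quoted above, not an empirical estimate):

```python
import numpy as np

# Illustrative sketch of the "skewness of science" (Seglen 1992):
# fabricated lognormal citation counts, checking the share of all
# citations captured by the top 20% of articles.
rng = np.random.default_rng(42)
citations = np.sort(rng.lognormal(mean=1.0, sigma=1.7, size=10_000))[::-1]

top_fifth = citations[: len(citations) // 5]
print(f"Top 20% of articles receive {top_fifth.sum() / citations.sum():.0%} of citations")
```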
Seglen PO (1992) The skewness of science. Journal of the American Society for Information Science 43:628-38
Stevan Harnad
American Scientist Open Access Forum
Thursday, October 9. 2008
University of Glasgow (UK* institutional-mandate)
Institution's/Department's OA Eprint Archives
Institution's/Department's OA Self-Archiving Policy
The policy requires staff to deposit:
-- electronic copies of peer-reviewed journal articles and conference proceedings
-- bibliographic details of all research outputs
and encourages staff to provide the full text of other research outputs where appropriate.