Evaluating Citebase: Key Usability Results

Steve Hitchcock, Arouna Woukeu, Tim Brody, Les Carr, Wendy Hall and Stevan Harnad

Open Citation Project, IAM Group, Department of Electronics and Computer Science, University of Southampton, SO17 1BJ, United Kingdom
Contact for correspondence: Steve Hitchcock sh94r@ecs.soton.ac.uk

Version history of this report
Version 1.0, official report to JISC, released to selected users and evaluators December 2002
Version 2.0, edited for publication as a Technical Report, July 2003
This version 3.0, focus on usability results, edited for journal publication, draft, July 2003

Abstract

Citebase is a new citation-ranked search and impact discovery service that demonstrates the principle of citation searching of open access eprint archives, and is a featured service of the arXiv eprint archives in physics. In the first detailed user evaluation of an open access Web citation indexing service, Citebase was evaluated by nearly 200 users from different backgrounds between June and October 2002. This paper summarises the results of the study that bear on the usability of Citebase. The major result of the evaluation is that, as exemplified by Citebase, Web-based citation indexing of open access eprint archives is closer to a state of readiness for serious use than had previously been realised. It was found that, within the scope of its primary components, the search interface and the services available from its rich bibliographic records, Citebase can be used simply and reliably for the purpose intended, and that it compares favourably with other bibliographic services. It is shown that tasks can be accomplished efficiently with Citebase regardless of the background of the user. Better explanations and guidance are required for first-time users. Coverage is seen as a limiting factor: even though Citebase indexes over 200,000 papers from arXiv, non-physicists were frustrated at the lack of papers from other sciences.

1 Introduction

The Open Citation (OpCit) Project (Hitchcock et al. 2002) has been developing tools and services for reference linking and citation analysis of scholarly research papers in large open access eprint archives (Hitchcock 2003). The case for open access in scholarly communication has been outlined by Suber (2003). It has been shown that online papers are on average up to three times more likely to be cited than offline papers from the same source (Lawrence 2001).

Most of the data collected and many of the services provided by OpCit have converged within a single service, Citebase, a citation-ranked search and impact discovery service founded on the basic principles of citation indexing as elaborated by Garfield (1994). Citebase offers both a human user interface (http://citebase.eprints.org/) and an Open Archives Initiative (OAI)-based machine interface for further harvesting by other OAI services (Lynch 2001). Citebase is an important output of the project: it will continue as a service independently of the project and is now a featured service within arXiv (http://arxiv.org/). Its human user interface was therefore evaluated rigorously to ensure that it meets user needs. This paper highlights the results of the study that bear on the usability of Citebase.
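
To illustrate the machine interface, the following Python sketch issues a single OAI-PMH ListRecords request and prints identifiers and titles from the response. The base URL is hypothetical; Citebase's actual OAI endpoint, supported metadata formats and flow control are not described in this paper.

import urllib.request
import xml.etree.ElementTree as ET

BASE_URL = "http://citebase.example.org/oai"  # hypothetical OAI-PMH endpoint

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def list_records(metadata_prefix="oai_dc"):
    """Issue one OAI-PMH ListRecords request and yield (identifier, title) pairs."""
    url = BASE_URL + "?verb=ListRecords&metadataPrefix=" + metadata_prefix
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    for record in tree.iter(OAI + "record"):
        identifier = record.findtext(OAI + "header/" + OAI + "identifier")
        title = record.findtext(".//" + DC + "title")
        yield identifier, title

# Example use: print one page of harvested records
for identifier, title in list_records():
    print(identifier, "-", title)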

The OpCit evaluation of Citebase took the form of a two-part Web-based questionnaire, designed to test whether Citebase is usable and useful. A full report of the evaluation, including details of procedure, is also available (Hitchcock et al. 2003).

The test invited users to participate in a practical exercise and then to offer views on the service. Background information was also sought on how this new service might fit in with existing user practices. In this way the evaluation aimed to combine objectivity with subjectivity, overcoming some of the limitations of purely subjective tests (Gunn 2002).

The evaluation was performed over four months from June to October 2002 by members of the Open Citation Project team at Southampton University. Observed tests of local users were followed by scheduled announcements to selected developer lists managed by the project's funding agencies, the Joint Information Systems Committee (JISC) in the UK and the National Science Foundation (NSF) in the USA, and to OAI developers, open access advocates and international librarian groups. Finally, following consultation with our project partners at arXiv Cornell, arXiv users were directed to the evaluation by means of links placed in the abstract pages of all but the latest papers deposited in arXiv.

Citebase services of impact-based scientometric analysis, measurement and navigation are intended in the first instance for research-users, rather than lay-users, because the primary audience for the peer-reviewed research literature is the research community itself.

The principal finding is that Citebase fulfils the objective of providing a usable and useful citation-ranked search service. It is shown that tasks can be accomplished efficiently with Citebase regardless of the background of the user. The principle of citation searching of open access archives has thus been demonstrated and need not be restricted to current users.

Its deceptively simple search interface, however, masks a complexity that is characteristic of such services and which requires better explanations and guidance for first-time users. Coverage is seen as another limiting factor. Although Citebase indexes over 200,000 papers from arXiv, non-physicists were frustrated at the lack of papers from other sciences.

2 Background to the evaluation: about Citebase

Citebase, described by Hitchcock et al. (2002), indexes citations from published research papers stored in the larger open access, disciplinary archives - currently arXiv, CogPrints and BioMed Central - each of which offers metadata records about stored papers in a format that complies with the Open Archives Initiative (OAI). Just prior to the evaluation, Citebase had records for 230,000 papers, indexing 5.6 million references. By discipline, approximately 200,000 of these papers are classified within arXiv physics archives. Thus, overwhelmingly, the current target user group for Citebase is physicists.

It is clear that a strong motivation for authors to deposit papers in eprint archives is the likelihood of subsequent inclusion in powerful resource discovery services that, like Citebase, have the ability to measure impact. If open access to online papers increases impact, it is important to develop services that can measure this. For this reason there was a need to target this evaluation at prospective users, not just current users, so that Citebase can be designed for an expanding user base.

Citebase harvests OAI metadata records for papers in the selected full-text archives, additionally extracting the references from each paper. The association between document records and references is the basis for a classical citation database.
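
As a minimal sketch of the data structure this implies (not Citebase's actual code), the following Python fragment inverts per-paper reference lists, keyed by illustrative archive identifiers, to obtain 'cited by' links and a simple citation count per indexed paper.

from collections import defaultdict

# Illustrative records only: archive identifier -> identifiers of papers it references
records = {
    "arch/0001001": ["arch/0000001", "arch/0000002"],
    "arch/0001002": ["arch/0001001", "arch/0000001"],
    "arch/0001003": ["arch/0001001"],
}

# Invert the extracted reference lists to obtain "cited by" links
cited_by = defaultdict(set)
for paper, references in records.items():
    for ref in references:
        cited_by[ref].add(paper)

# A simple citation count per indexed paper, the basis for citation ranking
citations = {paper: len(cited_by[paper]) for paper in records}
for paper in sorted(citations, key=citations.get, reverse=True):
    print(paper, citations[paper], sorted(cited_by[paper]))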

The primary Citebase Web user interface (Figure 1) shows how the user can classify the search query terms (typical of an advanced search interface) based on metadata in the harvested record (title, author, publication, date). In separate interfaces, users can search by archive identifier or by citation. What differentiates Citebase is that it also allows users to select the criterion for ranking results, either Citebase-processed data (citation impact, author impact) or fields in the records identified by the search, e.g. date (see drop-down list in Figure 1).

It is also possible to rank results by the number of 'hits', a measure of the number of downloads and therefore a rough measure of the usage of a paper. This is a promising experimental feature for analysing both the quantitative and the temporal relationship between hit (i.e. usage) and citation data, as measures as well as predictors of impact. The method of correlating citation data with usage data has been used to investigate new bibliometric measures (Kurtz et al. 2003). Recent studies offer support for the use of reader data by digital libraries to complement more established measures of citation frequency, which reflect author preferences (Darmoni et al. 2002). At the Los Alamos National Laboratory Research Library, Bollen and Luce (2002) defined a measure of the consultation frequency of documents and journals, and found that ranking journals using this method differed strongly from a ranking based on the traditional impact factor and, in addition, corresponded strongly to the general mission and research interests of their user community. At the time of the evaluation, hits measured by Citebase were based on limited data from download frequencies at the UK arXiv mirror at Southampton only.
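
By way of illustration only, the following Python sketch computes a rank correlation between invented per-paper download ('hit') counts and citation counts; the data and the statistical methods used by Citebase and by the studies cited above are not reproduced here.

from scipy.stats import spearmanr

# Invented per-paper counts for illustration only
hits      = [1200, 340, 95, 2200, 410, 60, 15, 780]
citations = [ 310,  42, 11,  520,  35,  9,  2, 150]

# Spearman rank correlation is one reasonable choice for heavily skewed counts
rho, p_value = spearmanr(hits, citations)
print("Spearman rho = %.2f, p = %.3f" % (rho, p_value))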


Figure 1. Citebase search interface showing user-selectable criteria for ranking results (with results appended for the search terms shown)

The top result shown in Figure 1 is ranked by citation impact: Maldacena's paper, the most-cited paper on string theory in arXiv at the time (September 2002), has been cited by 1576 other papers in arXiv. (This is the method and result for Q2.3 in the evaluation exercise described below.)

The combination of data from an OAI record for a selected paper with the references from and citations to that paper is also the basis of the Citebase record for the paper. A record can be opened from a results list by clicking on the title of the paper or on 'Abstract' (see Figure 1). The record will contain bibliographic metadata and an abstract for the paper, from the OAI record. This is supplemented with four characteristic services from Citebase: the references made by the paper, the top articles citing the paper, the top articles co-cited with the paper, and a graph of the paper's citation and hit history.

Another option presented to users from a results list is to open a PDF version of the paper (see Figure 1). This option is also available from the record page for the paper. This version of the paper is enhanced with linked references to other papers identified to be within arXiv, and is produced by OpCit. Since the project began, arXiv has been producing reference-linked versions of papers. Although the methods used for linking are similar, they are not identical, and OpCit versions may differ from versions of the paper available from arXiv. One question for the project that the evaluation informs is whether reference linking of full-text papers should be continued outside arXiv. An earlier, smaller-scale evaluation, based on a previous OpCit interface (Hitchcock et al. 2000), found that arXiv papers are the most appropriate place for reference links because users overwhelmingly use arXiv for accessing full texts of papers, and references contained within papers are used to discover new works.

3 Design of the evaluation forms

Users were presented with a two-part evaluation to complete. Form 1 had four sections, designed to gather background information on evaluators and their current use of arXiv and bibliographic services, to guide them through a practical exercise using Citebase, to elicit their views on the features of Citebase, and to compare Citebase with other bibliographic services. Form 2 involved a simple measure of user satisfaction with the object being evaluated, Citebase, and was reached by submitting Form 1.

4 Results: using Citebase

4.1 Citebase evaluators

Valid submissions to Form 1 were received from 195 evaluators. Evaluators came from a broad range of backgrounds, mostly in the sciences, but about 10% were non-scientists. About a third of evaluators were physicists, although the proportion of physicists among all users might have been expected to be higher given the concentration of Citebase on physics.

Physicists in this sample tend to be daily users of arXiv. Non-physicists in mathematics and computer science, disciplines for which arXiv has smaller sections, tend to be regular or occasional users of arXiv. Beyond these disciplines most evaluators are non-users of arXiv, and thus would be unlikely to use Citebase given its present coverage.

Most arXiv users in this study access new material by browsing rather than by alerts from arXiv. Their willingness to use Web search and reference links to access arXiv papers offers some encouragement for services such as Citebase (note that at this stage of the evaluation users had not yet been introduced to Citebase).

OAI is familiar to over half the evaluators, but not to many physicists. The latter is not surprising. OAI was originally motivated by the desire to encourage researchers in other disciplines to build open access archives such as those already available to physicists through arXiv, although the structure of Open Archives, unlike arXiv, is de-centralised (Lynch 2001).

4.2 Practical exercise: building a short bibliography

This was the critical phase of the evaluation, inviting evaluators to try key features of Citebase, identified in section 2, based on a set practical exercise. The subject chosen for the exercise, string theory, is of relevance to many physicists who use arXiv, but no prior knowledge of the subject was required to complete the exercise. Questions posed in the exercise, with results, are shown in Table 1. Figure 2 plots the results graphically.
 
Table 1: Series of questions posed in the practical exercise to use Citebase, with results of responses; ( ) physicists only
Q2.1 Who is the most-cited (on average) author on string theory in arXiv?
Correct 141 (45) Incorrect 20 (8) No answer 34 (15)
Q2.2 Which paper on string theory is currently being browsed most often in arXiv?
Correct 133 (41) Incorrect 16 (8) No answer 46 (19)
Q2.3 Which is the most-cited paper on string theory in arXiv?
Correct 145 (48) Incorrect 9 (2) No answer 41 (18)
Q2.4 Which is the most highly cited paper that cites the most-cited paper above? (critical point)
Correct 122 (44) Incorrect 26 (5) No answer 47 (19)
Q2.5 Which paper is most often co-cited with the most-cited paper above?
Correct 133 (46) Incorrect 12 (3) No answer 50 (19)
Q2.6a  Download the full-text of the most-cited paper on string theory. What is the URL?
Correct 124 (42) Incorrect 13 (3) No answer 58 (23)
(Correct = OpCit linked copy 71 (15) + arXiv copy 53 (27))
Q2.6b In the downloaded paper, what is the title of the referenced paper co-authored with Strominger and Witten (ref [57])?
Correct 105 (35) Incorrect 27 (9) No answer 63 (24)
Q2.6c Did you use search to find the answer to 2.6b? 
No 118 (40); Yes 18 (3)

The first three questions in the exercise (Q2.1-2.3) involved performing the same task and simply selecting a different ranking criterion from the drop-down list in the search interface (Figure 1). Selectable ranking criteria are not a feature offered by popular Web search engines, even in advanced search pages, which the main Citebase search page otherwise resembles. The user's response to the first question is therefore important in determining the method to be used, and Q2.1 might be expected to score lowest, with familiarity increasing for Q2.2 and Q2.3. Observed tests revealed that where Q2.1 initially proved tricky, users would return to it and correct their answer. We have no way of knowing to what extent this happened in unobserved submissions, but allowance should be made for this when interpreting the results.

The next critical point occurs in Q2.4, when users were effectively asked for the first time to look below the search input form to the results listing for the most-cited paper on string theory in arXiv (result of Q2.3). To find the most highly cited paper that cites this paper, notwithstanding the apparent tautology of the question, users must recognise they have to open the Citebase record for the most-cited paper by clicking on its title or on the Abstract link. Within this record the user then has to identify the section 'Top 5 Articles Citing this Article'. To find the paper most often co-cited with the current paper (Q2.5) the user has to scroll down the page, or use the link, to find the section 'Top 5 Articles Co-cited with this Article'.

Now it gets slightly harder. The evaluator is asked to download a copy of the full-text of the current paper (Q2.6a). What the task seeks to determine is the user's preference for selecting either the arXiv version of the paper or the OpCit linked PDF version. Both are available from the Citebase record. A typical linked PDF was illustrated by Hitchcock et al. (2000).

As a check on which version users had downloaded, they were asked to find a reference (Q2.6b) contained within the full text (and which at the time of the evaluation was not available in the Citebase record). To complete the task, users had to give the title of the referenced paper, but this is not as simple as it might be because the style of physics papers is not to give titles of papers in references. To find the title, the user would need to access a record of the referenced paper. Had they downloaded the linked version or not? If so, the answer was one click away. If not, the task was more complicated. As final confirmation of which version users had chosen, and how they had responded subsequently, users were asked if they had resorted to search to find the title of the referenced paper. In fact, a search using Citebase or arXiv would not have yielded the title easily.

In this practical exercise users were asked to demonstrate completion of each task by identifying an item of information from the resulting page, variously the author, title or URL of a paper. Responses to these questions, input using Web Form 1, were automatically classified as true, false or no response. Users could cut-and-paste this information, but to ensure false responses were not triggered by mis-keying or entering an incomplete answer, a fuzzy text-matching procedure was used in the forms processor.
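
The forms processor itself is not documented here, but a fuzzy match of this kind can be sketched in Python with the standard difflib module; the similarity threshold and the handling of case and whitespace below are assumptions for illustration.

from difflib import SequenceMatcher

def classify(answer, expected, threshold=0.8):
    """Classify a free-text response as 'true', 'false' or 'no response'."""
    if not answer or not answer.strip():
        return "no response"
    ratio = SequenceMatcher(None, answer.strip().lower(),
                            expected.strip().lower()).ratio()
    return "true" if ratio >= threshold else "false"

# A slightly incomplete, differently capitalised answer still counts as correct
print(classify("The Large N Limit of Superconformal Field Theories",
               "The Large N limit of superconformal field theories and supergravity"))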

Figure 2 shows that most users were able to build a short bibliography successfully using Citebase. As this exercise involved using the principal features of Citebase, there is a good chance that users would be able to use Citebase for other investigations, especially those related to physics. The 'true' line in Figure 2a, indicating correct answers to the questions posed, shows a downward trend through the exercise, which is most marked for Q2.6, involving downloading of PDF full texts. Figure 2b, which includes results for physicists only, shows a qualitatively almost identical trend, indicating that physicists are no more able to use the system than other users.

As anticipated, Q2.4 proved to be a critical point, showing a drop in correct answers from Q2.3. The upturn for Q2.5 suggests that user confidence returns quickly when familiarity is established for a particular type of task. Similarly, the peak in the 'true' curve for Q2.3 in both Figure 2a and b shows that usability improves quickly with familiarity with the features of a particular page. At no point in either Figure 2a or b is there evidence of a collapse of confidence or of unwillingness among users to complete the exercise.


Figure 2. Progress in building a short bibliography through Q2.1-2.6b in evaluation Form 1 (T=true, correct answer, F=false, N=no response): a, All users; b, Physicists only

Although the Web forms-based approach is an indirect method of recording task completion, the results of this exercise can be read as an objective measure showing whether Citebase is a usable service. As an extra aid to judge the efficiency with which the tasks are performed, users were asked to time this section (Table 2).
 
Table 2: Time taken to complete the practical exercise
Time taken: 1-5 minutes: 13 (6); 5-10: 60 (21); 10-15: 36 (14); 15-20: 17 (5); 20-25: 9 (0); 25-30: 6 (0); over 30: 2 (0); not stated: 5 (2); total responses: 147
( ) physicists only

Physicists generally completed the exercise faster (Figure 3a). Almost 90% of users (100% of physicists) completed the exercise within 20 minutes, with approximately 50% (55% of physicists) finishing within 10 minutes. There appears to be some correlation of both subject discipline and level of arXiv usage with the time taken to complete the exercise (Figure 3), although both correlations are weak. Taken together these results show that tasks can be accomplished efficiently with Citebase regardless of the background of the user.


Figure 3. Correlations between time taken to complete the practical, bibliography-building exercise and: a, subject discipline (x axis: physics=4, maths=3, computer=2, infoScience=1, other=0), correlation = -0.15, N=140, p<0.077; b, level of arXiv usage (x axis: daily usage=4, regular usage=3, occasional usage=2, no usage=1), correlation = -0.18, N=140, p<0.033
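
For readers wishing to reproduce this kind of analysis, the following Python sketch computes a correlation coefficient and p-value from coded discipline values and completion times; the data are invented for illustration, and the use of a Pearson coefficient is an assumption, as the evaluation does not state which correlation measure was applied.

from scipy.stats import pearsonr

# Invented coded values for illustration; coding follows Figure 3a
# (physics=4, maths=3, computer=2, infoScience=1, other=0), times in minutes
discipline = [4, 4, 3, 2, 1, 0, 4, 2, 3, 0]
minutes    = [7.5, 12.5, 7.5, 17.5, 22.5, 27.5, 7.5, 12.5, 17.5, 22.5]

r, p = pearsonr(discipline, minutes)
print("r = %.2f, p = %.3f" % (r, p))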

On the basis of these results there can be confidence in the usability of most of the features of Citebase. Separate user comments drew attention to some usability issues, including the help and support documentation and the terminology used.

The incidental issue of which PDF version users prefer to download, the OpCit or the arXiv version (Q2.6a), was not conclusively answered. Among all users the OpCit linked version was chosen slightly more often, but physicists showed a greater preference for the arXiv version.

4.3 User views on Citebase

By this stage users might be excited, exhausted or exasperated by Citebase (or by the evaluation), but they are now familiar with its features, and were asked for their views on these.

Users were asked about Citebase as it is now and how it might be in future. It is reasonable to limit users to a single choice in the latter, idealised scenario, so that desired features have to be prioritised. Users are likely to be more critical of the actual service, so it seemed safe to allow a more open choice of preferred features, with more than one feature selectable (Figure 4). Links to citing and co-citing papers are features of Citebase that are valued by users, even though these features are not unique to Citebase. The decision to rank papers according to criteria such as these, and to make these ranking criteria selectable from the main Citebase search interface, is another feature that has had a positive impact on users. Citations/hit graphs appear to have been a less successful feature. There is little information in the data or comments to indicate why this might be, but it is a feature worth persevering with until more complete data can be tested.


Figure 4. Most useful features of Citebase

Users found it harder to say what would improve Citebase, judging from the number of 'no responses' (Figure 5). Wider coverage, especially in terms of more papers, is desired by all users, including physicists. The majority of comments submitted separately from this question criticised coverage.

Signs of the need for better support documentation reemerged in this section. Although the number of users calling for a better interface is not high (Figure 5), comments indicate that those calling for improvements in this area are more vociferous. Among features not offered on the questionnaire but suggested by users, the need for greater search precision stands out.


Figure 5. Improving Citebase

Citebase has to be shaped to offer users a service they cannot get elsewhere, or a better service. This part of the evaluation concluded by asking the user for a view on Citebase in comparison with familiar bibliographic services.

Users were asked what services they would use to compile a bibliography in their own work and field: ISI Web of Science (16), PubMed (10), SLAC Spires (8), Mathscinet (7) and Google (6) were the most popular choices. A further 27 unique choices were also submitted.

On the basis of this result there is a roughly equal likelihood that users who participated in this survey will use Web-based services (e.g. Web search), online library services and personal bibliography software to create bibliographies. This presents opportunities for Citebase to become established as a Web-based service that could be integrated with other services. The lack of a dominant bibliography service, including services from ISI, among this group of users emphasises the opportunity.

Users were then asked to rate Citebase in comparison with these bibliography services (users were asked to assume that Citebase covered other subjects to the degree it now covers physics). Figure 6 suggests that Citebase is beginning to exploit the opportunity presented by the lack of dominant alternative bibliography services among evaluators, but needs to do more to convince users, even physicists, that it can become their primary bibliographic service.


Figure 6. Comparing Citebase with other bibliography services

Attempts to correlate users' ratings of Citebase against other bibliographic services with other factors considered throughout the evaluation, subject discipline and level of arXiv usage, showed no correlation in either case. This means that reactions to Citebase are not polarised towards any particular user group or shaped by the immediate experience of using Citebase for the pre-set exercise, and suggests that the principle of citation searching of open access archives has been demonstrated and need not be restricted to current users.

There was little opportunity for users to compare, contrast and discuss features of Citebase that differentiate it from other services. In particular, Citebase offers access to full texts in open access eprint archives, an aspect that needs to be emphasised as coverage and usage widen. Comments reveal that some users appreciate this, although calls for Citebase to expand coverage in areas not well covered now suggest this is not always understood. It is not possible for Citebase simply to expand coverage unless there is recognition by researchers, as authors, of the need to contribute to open access archives. One interpretation is that users in such areas do not see the distinction between open access archives and services and paid-for journals and services, because they do not directly pay for those services themselves - these services appear to be free.

4.4 User satisfaction with Citebase

Form 1 prompted users to respond to specific questions and features, and gave an impression of their reaction to the evaluated service, but did not explore their personal feelings about it. A recommended way of tackling this is an approach based on the well-known Software Usability Measurement Inventory (SUMI) form of questionnaire for measuring software quality from the end user's point of view. Form 2 was a short implementation of this approach, which sought to discover users' overall impression of Citebase, their sense of command of the system, its effectiveness in helping them complete tasks, and its navigability. Experience has shown that users rush through such a form within a few minutes if it is seen immediately after the first form. It is thus a rough measure of satisfaction, but when structured in this way it can point to areas of concern that might otherwise go undetected.

Four response options, ranging from very positive to very negative, were offered for each of four statements in each section. These responses were scored 2 to -2. A neutral response was not offered, but no response scored zero.
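
A minimal Python sketch of this scoring scheme follows; the response labels, their mapping onto the 2 to -2 scale, and the treatment of negatively phrased statements are assumptions, as only the score range and the zero score for no response are specified above.

SCORES = {"strongly agree": 2, "agree": 1, "disagree": -1, "strongly disagree": -2}

def score(response):
    # No response scores zero; a negatively phrased statement would have its
    # score reversed before averaging.
    if response is None:
        return 0
    return SCORES.get(response, 0)

# Hypothetical responses from one user to the four statements in one section
responses = ["agree", "strongly agree", None, "disagree"]
section_average = sum(score(r) for r in responses) / len(responses)
print(section_average)  # 0.5, on the same scale as the section averages in Table 3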

This exercise could have been longer and explored other areas, but a longer form might have inhibited the number of responses. As the exercise involved a separate form, accessed on submitting Form 1, it was not expected that all users would progress this far. Of the 195 users who submitted the first form, 133 completed Form 2.

The summary results by question and section are shown in Table 3 and Figure 7.
 
Table 3: Satisfaction scores (Form 2)
Impression: Q1 0.92, Q2 0.79, Q3 1.39, Q4 1.05 (section average 1.04)
Command: Q5 0.41, Q6 1.17, Q7 0.83, Q8 1.02 (section average 0.86)
Effectiveness: Q9 0.65, Q10 1.07, Q11 1.42, Q12 0.99 (section average 1.03)
Navigability: Q13 0.92, Q14 0.27, Q15 0.57, Q16 0.26 (section average 0.51)

Figure 7. Average user satisfaction scores: a, by question, b, by section

The highest score was recorded for Q11 (Figure 7a), indicating that on average Citebase users were able to find the information required most of the time. Scoring almost as high, Q3 shows users found the system frustrating to use only some of the time.

The questions ranked lowest by score, Q14 and Q16, suggest that users agreed only weakly with the proposition that there were plenty of ways to find the information needed, and disagreed only weakly with the proposition that it was easy to become disoriented when using the system.

Scores by section indicate that, overall, users formed a good impression of Citebase (Figure 7b). They found it mostly to be effective for task completion (confirming the finding from the practical exercise in Form 1), and they were able to control the system most of the time. The lower score for navigability suggests this is an area that requires further consideration.

Recalling that responses were scored between 2 and -2, depending on the strength of the user's reaction, it can be seen that on average no questions or sections scored negatively; six questions scored in the top quartile, and two sections just crept into the top quartile.

Among users, scores were more diverse, with the total user score varying from 31 (maximum score possible is 32) to -25. Other high scores included 29, 28 and 27 (by five users). Only eight users scored Citebase negatively.

5 Conclusions

The major result of the evaluation is that, as exemplified by Citebase, Web-based citation indexing of open access archives is closer to a state of readiness for serious use than had previously been realised.

The exercise to evaluate Citebase had a clear scope and objectives. Within the scope of its primary components, the search interface and the services available from a Citebase record, it was found that Citebase can be used simply and reliably for resource discovery. It was shown that tasks can be accomplished efficiently with Citebase regardless of the background of the user.

The principle of citation searching of open access archives has been demonstrated and need not be restricted to current users.

More data need to be collected and the process refined before Citebase is as reliable for measuring impact as it is for resource discovery. As part of this process users should be encouraged to use Citebase to compare the evaluative rankings it yields with other forms of ranking.

Citebase is a useful service that compares favourably with other bibliographic services, although it needs to do more to integrate with some of these services if it is to become the primary choice for users.

The majority of users were able to complete a task involving all the major features of Citebase. User satisfaction appeared to be markedly lower when users were invited to assess navigability than for other features of Citebase.

Citebase needs to be strengthened in terms of the help and support documentation it offers to users.

Coverage is seen as a limiting factor. Although Citebase indexes over 200,000 papers from arXiv, non-physicists were frustrated at the lack of papers from other sciences. This frustration reflects a misunderstanding of the nature of open access services, which depend on prior self-archiving by authors. In other words, rather than Citebase it is users, many of whom will also be authors, who have it within their power to increase the scope of Citebase by making their papers available freely from OAI-compliant open access institutional archives. Institutions can do more to support and promote these archives within their communities (Crow 2002, Young 2002), and research funders can provide a strong motivation for authors to self-archive by mandating that assessable work is to be openly accessible online (Harnad et al. 2003). Citebase will index more papers and more subjects as more archives are launched.

Development of Citebase is continuing. There are wider objectives and aspirations for developing Citebase. The overarching purpose is to help increase the open-access literature. Where there are gaps in the literature - and there are large gaps in the open-access literature currently - Citebase will help to accelerate the rate at which these gaps are filled.

Acknowledgements

The Open Citation Project (http://opcit.eprints.org/) was funded between 1999 and 2002 by the Joint NSF - JISC International Digital Libraries Research Programme.

We are grateful to Paul Ginsparg, Simeon Warner and Paul Houle at arXiv Cornell for their comments and feedback on the design of the evaluation and their cooperation in helping to direct arXiv users to Citebase during the evaluation. Eberhard Hilf and Thomas Severiens at PhysNet and Jens Vigen at CERN were also a great help in alerting users to the evaluation.

Finally, we thank all our Web evaluators, who must remain anonymous, but this in no way diminishes their vital contribution.

References

Bollen, Johan and Rick Luce (2002) "Evaluation of Digital Library Impact and User Communities by Analysis of Usage Patterns". D-Lib Magazine, Vol. 8, No. 6, June
http://www.dlib.org/dlib/june02/bollen/06bollen.html

Crow, R. (2002) "The Case for Institutional Repositories: A SPARC Position Paper". Scholarly Publishing & Academic Resources Coalition, Washington, D.C., July
http://www.arl.org/sparc/IR/ir.html

Darmoni, Stefan J., et al. (2002) "Reading factor: a new bibliometric criterion for managing digital libraries". Journal of the Medical Library Association, Vol. 90, No. 3, July
http://www.pubmedcentral.gov/picrender.fcgi?action=stream&blobtype=pdf&artid=116406

Garfield, Eugene (1994) "The Concept of Citation Indexing: A Unique and Innovative Tool for Navigating the Research Literature". Current Contents, January 3rd
http://www.isinet.com/isi/hot/essays/citationindexing/1.html

Gunn, Holly (2002) "Web-based Surveys: Changing the Survey Process". First Monday, Vol. 7, No. 12, December
http://firstmonday.org/issues/issue7_12/gunn/index.html

Harnad, Stevan, Les Carr, Tim Brody and Charles Oppenheim (2003) "Mandated online RAE CVs Linked to University Eprint Archives". Ariadne, issue 35, April
http://www.ariadne.ac.uk/issue35/harnad/intro.htm

Hitchcock, Steve (2003) "Metalist of open access eprint archives: the genesis of institutional archives and independent services". Submitted to ARL Bimonthly Report, to appear at http://opcit.eprints.org/archive-metalist.html

Hitchcock, Steve, Donna Bergmark, Tim Brody, Christopher Gutteridge, Les Carr, Wendy Hall, Carl Lagoze, Stevan Harnad (2002) "Open Citation Linking: The Way Forward". D-Lib Magazine, Vol. 8, No. 10, October
http://www.dlib.org/dlib/october02/hitchcock/10hitchcock.html

Hitchcock, Steve, Les Carr, Zhuoan Jiao, Donna Bergmark, Wendy Hall, Carl Lagoze and Stevan Harnad (2000) "Developing Services for Open Eprint Archives: Globalisation, Integration and the Impact of Links". Proceedings of the Fifth ACM Conference on Digital Libraries, June (ACM: New York), pp. 143-151
http://opcit.eprints.org/dl00/dl00.html

Hitchcock, Steve, Arouna Woukeu, Tim Brody, Les Carr, Wendy Hall and Stevan Harnad (2003) "Evaluating Citebase, an open access Web-based citation-ranked search and impact discovery service". Author eprint, to be published http://opcit.eprints.org/evaluation/Citebase-evaluation/evaluation-report-journal.html

Kurtz, M. J., G. Eichorn, A. Accomazzi, C. Grant, M. Demleitner, S. S. Murray, N. Martimbeau, and B. Elwell (2003) "The NASA astrophysics data system: Sociology, bibliometrics, and impact". Author eprint, submitted to Journal of the American Society for Information Science and Technology
http://cfa-www.harvard.edu/~kurtz/jasist-submitted.ps

Lawrence, Steve (2001) "Free Online Availability Substantially Increases a Paper's Impact". Nature Web Debate on e-access, May
http://www.nature.com/nature/debates/e-access/Articles/lawrence.html

Lynch, Clifford A. (2001) "Metadata Harvesting and the Open Archives Initiative". ARL Bimonthly Report, No. 217, August
http://www.arl.org/newsltr/217/mhp.html

Suber, Peter (2003) "Removing the Barriers to Research: An Introduction to Open Access for Librarians". College & Research Libraries News, 64, February, 92-94, 113
http://www.earlham.edu/~peters/writing/acrl.htm