Chen & Konstan's (C & K) paper, "Conference Paper Selectivity and Impact," is interesting, though somewhat limited: it is based only on computer science, and its data are fuller for conference papers than for journal papers.
The finding is that papers from highly selective conferences are cited as much as (or even more than) papers from certain journals. (Journals of course also differ among themselves in quality and acceptance rates.)
To compare the citation ceilings of high- and low-selectivity conferences, C & K analyzed only the equated top slice of the low-selectivity conferences' papers, and found that the high-selectivity conferences still did better, suggesting that the difference reflects not just selectivity (i.e., filtration) but also "reputation." (The effect broke down a bit at the top of the scale: the next-to-highest-selectivity conferences did slightly better than the very highest, and plenty of post hoc speculations were on hand to account for this effect too: excess narrowness, distaste for competition, etc. at the very highest level, but not the next-highest…)
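For concreteness, here is a minimal Python sketch of that equated-top-slice comparison. Everything in it is hypothetical: the citation counts, acceptance rates, and function names are mine, invented for illustration, not C & K's data or method.

    # Toy illustration of the "equated top slice" comparison described
    # above. All numbers are hypothetical, not C & K's data.

    def top_slice(citations, fraction):
        """Return the top `fraction` of papers, ranked by citation count."""
        ranked = sorted(citations, reverse=True)
        k = max(1, round(len(ranked) * fraction))
        return ranked[:k]

    # Hypothetical per-paper citation counts for two conferences.
    high_sel = [40, 35, 30, 28, 25]            # accepts ~15% of submissions
    low_sel  = [30, 22, 18, 15, 12, 10, 8, 5]  # accepts ~40% of submissions

    # Equate the slices: compare all of the high-selectivity papers
    # against only the top 15/40 of the low-selectivity papers.
    equated = top_slice(low_sel, 15 / 40)

    mean = lambda xs: sum(xs) / len(xs)
    print(f"high-selectivity mean: {mean(high_sel):.1f}")         # 31.6
    print(f"equated low-selectivity mean: {mean(equated):.1f}")   # 23.3

If the high-selectivity mean remains higher even on the equated slices, as in this made-up example, then raw selectivity (filtration) alone cannot explain the gap, which is what licenses C & K's appeal to "reputation."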
Some of this smacks of over-interpretation of sparse data, but I'd like to point out two rather more general considerations that seem to have been overlooked or under-rated:
(1) Conference selectivity is not the same as journal selectivity: A conference accepts the top fraction of its submissions (wherever it sets the cut-off point), with no rounds of revision, resubmission and re-refereeing (or at most one cursory final round, by which point the conference is hardly in a position to change most of its decisions: the conference date looms, and the program cannot be made much sparser than planned). This is passive filtration. Journals do not work this way. They too have periodic issues they must fill, but they can carry a long-standing backlog of papers undergoing revision, papers that are not published until and unless they have succeeded in meeting the referees' and editor's requirements. The result is that accepted journal papers have been systematically improved ("dynamic filtration") through peer review (sometimes several rounds), whereas conference papers have simply been statically ranked much as they were submitted. This is peer ranking, not peer review (a toy contrast of the two processes is sketched after point 2 below).
(2) "Reputation" really just means track record: How useful have papers in this venue been in the past? Reputation clearly depends on the reliability and validity of the selective process. But reliability and validity depend on more than the volume and cut-off point of raw submission rankings (passive filtration).
I normally only comment on open-access-related matters, so let me close by pointing out a connection:
There are three kinds of selectivity: journal selectivity, author selectivity and user selectivity. Journals (and conferences) select which papers to accept for publication; authors select which papers to submit, and which publication venue to submit them to; and users select which papers to read, use and cite. Hence citations are an outcome of a complex interaction of all three factors. The relevant entity for the user, however, is the paper, not the venue. Yes, the journal's reputation will play a role in the user's decision about whether to read a paper, just as the author's reputation will; and of course so will the title and topic. But the main determinant is the paper itself. And in order to read, use and cite a paper, you have to be able to access it. Accessibility trumps all the other factors: it is not a sufficient condition for citation, but it is certainly a necessary one.
Stevan Harnad
American Scientist Open Access Forum