Impact factors* are getting lots of use, and (perhaps as a direct result) it’s fashionable to argue that this use should be abhorred. Some days it seems like the impact factor can join the P value, the lecture, the paywalled journal, and bellbottom jeans in the lineup of innovations widely claimed to be obsolete and, perhaps, to have been bad ideas in the first place. And yet, just last week I was talking with a collaborator about where to send a manuscript, and when she mentioned a journal I didn’t know, my first question was “What’s its impact factor?”
So: am I guilty of perpetuating the horror that is the impact factor, or was my question a reasonable one?
There are three anti-impact-factor positions one can take, and many people either ignore this or conflate them. First, one can argue that impact factor is a poor measure of paper quality. Second, one can argue that impact factor is a poor measure of journal quality. Third, one can argue that we needn’t measure journal quality at all. All three** arguments are in wide circulation.
(1) Is impact factor a good measure of paper quality?
Of course not, and thinking it might be is surely the most egregious misuse of the impact factor. It happens, though. Maybe you’ve sat on a hiring or tenure committee and heard somebody argue that paper X should be heavily weighted because it appeared in Nature, or Science, or some other journal with a high impact factor. This is ridiculous, of course; some legendarily crappy science appears in Science and Nature, and so does a lot of science that’s flashy but unimportant. (Retraction rates even correlate positively with impact factors.) Conversely, many influential papers appear in journals with lower impact factors. But complaining that journal impact factor is a poor measure of paper quality is like complaining that a letter-opener does a poor job of crushing garlic. People who make this complaint should grab a better tool (I’m intrigued by the newly proposed relative citation ratio) and stop making themselves look silly***.
(2) Is impact factor a good measure of journal quality?
There are many ways in which impact factor is an imperfect measure. Its time horizon is short, which favours fields that move quickly, and papers that are written for immediate splash rather than lasting value. It doesn’t distinguish positive citations from negative ones. It’s a mean rather than a median, and so heavily influenced by a few outlying papers. On top of these intrinsic problems, publishers game impact factors (for instance, by publishing more reviews, or by encouraging within-journal citation). Of course, this gaming shouldn’t surprise us, because gaming is an inevitable feature of quality-signalling systems. Book publishers game best-seller lists, TV networks game Nielsen ratings, and in behavioural ecology (and bars) both males and females game the signals used for mate choice. It’s worth working to make the signals harder to game – but in the meantime, we generally don’t abandon imperfect signalling systems, because they still carry useful information. Impact factor seems no different to me. So let’s remember that it isn’t much use decrying its imperfection unless we can suggest an alternative metric that does better (this suggestion is intriguing, although I’m skeptical).
(3) Should we measure journal quality at all?
Perhaps the bravest argument is that we should simply stop trying to deploy metrics of journal quality. According to this argument, impact factors are designed to serve the interests of publishers, while the unit that’s important to everyone else is not the journal but the paper. There is some truth to each half of this: publishers aren’t directly interested in the quality of individual papers (only in their aggregate quality), while in assessments such as tenure and granting, we should obviously evaluate quality of individual contributions, not the venues in which they appeared. And yet there are two reasons I find impact factor helpful – one for reading, and one for writing. On the reading side: there’s a lot to read out there, and knowing that certain journals have high editorial standards and tend to print papers that will be interesting and important (as judged by others’ citation of them) is a useful shortcut to prioritizing my reading time. It’s useful to scan tables of contents for the top-impact journals, while covering lower-impact outlets by topic-focused search (for instance, via Google Scholar alerts). On the writing side: when I’m considering a journal to publish in, impact factor is a help. Sometimes I want to aim high, and sometimes I know I’ve written something that ought to be available but that isn’t going to galvanize my field. Knowing a journal’s impact factor (along with its scope, of course, and compared only within fields) is a pretty good first cut in thinking about whether it’s a good home for a particular manuscript. (There’s a related point here about whether we need journals with reputations at all, but that will have to be a future post).
So I’m not ready to abandon the journal impact factor, or efforts to measure journal quality more generally. Like every tool, it can be abused. Like most tools, used as designed, it can do a useful job for us.
*In case you haven’t been paying attention: for a given year, a journal’s impact factor is the mean number of citations that year to papers published in that journal in the previous two years. This is presumed to say something about the average impact of papers in that journal, and in turn, about the journal’s “importance” or “quality.”
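To make that definition concrete, here is a toy calculation of a two-year impact factor. The journal, paper count, and citation total below are invented purely for illustration; real impact factors are computed by Clarivate from Web of Science citation data.

```python
def impact_factor(citations_this_year, papers_prev_two_years):
    """Two-year journal impact factor: citations received this year
    to items the journal published in the previous two years, divided
    by the number of items published in those two years."""
    return citations_this_year / papers_prev_two_years

# Hypothetical journal: 120 papers published across 2014-2015,
# cited 540 times in total during 2016.
jif_2016 = impact_factor(540, 120)
print(jif_2016)  # 4.5
```

Note that because this is a mean, a handful of heavily cited papers can dominate the result – which is exactly the outlier sensitivity discussed under argument (2).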
**Yes, the logical structure here suggests a fourth argument: that we need not measure paper quality at all. Including this would make for some nice symmetry – but I hope nobody actually believes that we shouldn’t assess the quality of papers!
***Impact factor might, however, usefully measure the quality of sufficiently large sets of papers. When that set is “all papers in journal X”, of course, that’s just impact factor used as designed; see arguments (2) and (3). But what if that set is “15 papers on a CV”? Well, if I have two job candidates, and one has 15 papers in the American Naturalist (JIF = 4.5) and one has 15 papers in the Canadian Entomologist (JIF = 0.8), then I may not need to read every paper to make my comparison. (Cue rage in the replies… so I should point out that this doesn’t mean there’s anything wrong with the Canadian Entomologist; I’ve published there and will do so again.)
Dr. Stephen Heard is a professor in the Department of Biology at the University of New Brunswick. This article was originally posted on his blog Scientist Sees Squirrel. This article is copyrighted by Dr. Stephen Heard and was published with permission. You can also find him on Twitter at @StephenBHeard.