Some Thoughts on Peer Review

December 16th, 2007

I’ve refereed a couple of papers recently, and had several papers refereed, with a variety of results that have had me thinking a lot about peer review lately. I personally know the chief editors of a couple of the leading journals in astronomy, and I had the chance to discuss the job with them one-on-one at a meeting last summer and get their perspectives. I’ve reviewed piles of telescope and grant proposals over the years, and will be reviewing proposals again for the Hubble Space Telescope this coming spring. I’ve also seen a lot of ignorant, correct-but-biased, and misguided statements about peer review on various internet forums I read, and at the same time a lot of faith placed in peer-reviewed journals above and beyond any other sort of report.

I’m an astronomer, and will primarily be discussing the experience in astronomy. Since astronomy is a relatively small field, things are generally a little less cut-throat, and the review process is smaller and simpler. Some larger fields have a lot of exclusive journals where perfectly good papers get rejected for not quite fitting the journal’s scope or not being cutting-edge enough, and those journals use multiple reviews per paper as a matter of course.

But first, what is peer review? At its simplest, it’s a process in which experts in a particular field review papers and grants in that field, under the assumption that the experts are the ones best qualified to determine whether a particular paper or project is professional and high-quality. From there, it varies a lot. In reviewing telescope and grant proposals, a peer-review panel of several to eight astronomers usually just ranks a set of proposals and draws a line of minimum quality; observatory directors and grant officers do the rest, taking into account funding levels, instrument schedules, and other complicating factors. For papers (and I’ll be focusing on papers here), the journal editor selects a reviewer who is an expert in the field of the submitted paper and who can generally be relied upon to provide an opinion about it, following guidelines for the particular journal. Here are examples of such guidelines.

Now, I want to relate a few personal experiences, good and bad, both as a contributor and as a reviewer (whom we usually refer to as the referee). My first submitted paper was a breeze: two minor comments to address, which we did within a day, and we resubmitted to immediate acceptance. My second paper drew a lot of small, nitpicky criticisms, a couple of which were just wrong. We fixed the simple stuff, explained why the wrong criticisms were wrong, and resubmitted to immediate acceptance. Most of my early papers were readily accepted, sometimes with a few revisions, sometimes with a lot of small ones that took a lot of work. More recently, probably because I’ve written more papers and the statistics are reaching the extremes of the distribution of reviewers, my experiences have ranged from papers essentially rejected for such stupid reasons that it’s taken me months to get back to them with a clear head and revise, to papers accepted as is within a day (probably an editor doing the refereeing himself, or handing it to a qualified colleague down the hall). Rejection is harsh, but it’s like writing: it’s part of the job and you get used to it. What is harder to get used to is being criticized by referees who stridently make incorrect assertions with all the arrogance of anonymity.

Let me take an aside here. Referees can reveal their identity (I usually do) or remain anonymous. Authors are always known to the referees. It’s not practical to make things double-blind, since authors often refer to their own previous work and their identity would too often be obvious anyway, but the asymmetry can cause problems. My old advisor once got a report on a theory-heavy paper that started with “She is not known as a theorist…”, which isn’t a valid criticism, and it betrays a bias. In speculative fiction, it would be like Stan Schmidt at Analog rejecting a story by Neil Gaiman with a letter starting “You are known for fantasy, not science fiction…” It’s critiquing the person, not the story.

One more aside. Refereeing is voluntary, something you do as community service. There’s no pay. There’s no reward, except the opportunity to think critically and to have some influence on the literature in your field. Similarly, there’s no training. Any asshole scientist out there can be asked to referee a paper. And sometimes they are.

So, a few horror stories from myself and friends. One friend showed me a referee’s report that went on for pages about how he was a terrible scientist and his paper was ridiculous (he isn’t and it wasn’t, but the report was full of ad hominems); the editor apologized profusely, but she really shouldn’t have forwarded the report to him, as it didn’t meet a professional standard. The problem I’ve been running into more often is referees deciding how you should have written the paper, when they haven’t thought about it as deeply as you have. That means things like telling you to spend weeks of work nailing down the uncertainty on a number that isn’t really important in the first place, or deciding that the scope of the paper is too large or too small and that it should be redone, or nitpicking which papers you’ve cited on some minor point without telling you which ones they think you should cite. They’re what I call “stupid smart people.” They know their field, but they don’t really know how to referee a paper or how to see something from a perspective other than their own, and they’re blind to those facts.

The stakes can be high on the author’s side. Publish or perish isn’t a myth: people tend to get jobs and tenure based on their publications. Astronomy is generous enough that few papers are rejected outright and most get through with some level of revision, but revisions can mean a lot of work and a lot of delays. When it’s busy work, it’s frustrating. When the comments are insulting, it hits right at the ego, and it can threaten a career.

And since no one teaches people how to referee, there are all kinds of biases. Some people believe certain journals are for certain kinds of papers, and only those, and two fair-minded scientists won’t necessarily agree on which. Editors are ultimately responsible; the reviews only aid them in making their decisions. I had one experience with an editor, an older guy who didn’t even use email, who likely sent a paper to one of his old fuddy-duddy friends, someone a little clueless and dogmatic on some points who showed very poor judgment. The editor wrote in his response that he had also read the paper and held the same poor opinion of it. We revised the paper only by adding a bit of data (a high-resolution spectrum a friend obtained for us, which should not have been necessary) that addressed the critical point in a way direct enough for them to understand, and the paper was accepted without additional comment. Both the report and the editor’s letter had been incompetent on the science, unfortunately.

It happens. Refereeing and “peer-reviewed journals” are far from perfect.

I had a boss who refereed a paper and recommended it for publication even though he thought the results were probably wrong. His philosophy was that the data and measurements were valid, and the interpretation should, and would, get hashed out in the literature in subsequent papers. Some old-school editors (e.g., Chandrasekhar) would have held up the paper, stubbornly refusing to publish it until the interpretation met their satisfaction.

The editors I’ve talked with all acknowledge the imperfection of the system. They also all agree that the end result of the system is that papers are better overall and more likely correct than without it. I try to keep that in mind when I get a referee’s report where the first comment is just flat out mistaken or insulting (it happens).

Like the US legal system, it isn’t very good, but it’s better than anything else out there.
