Why Do Referee Reports Take So Long?

Every person who has submitted a paper to a journal has at some point or another asked themselves this question. With so much riding on publication, the wait for referee reports and an editor’s decision can seem interminable. Six months, nine months, a year later… “Why do referee reports take so long? Why must I sit in agony for so long?”

I’ve been there; and now, as a relatively new member of a journal editorial board, I’m gaining experience on the other side of the equation, awaiting referee reports from my referees with breath just as bated as when I’m the author. And in that capacity, I’ve started to ask this question more seriously: Why is it that referee reports take so long?

Reflecting on my own practice, the actual time spent on a referee report generally breaks down into this:

  • My first read through a paper is at a pretty superficial level, focusing on proofreading: Where are the spelling and grammar errors, where are the repetitive parts, where are the things that make me go “hunh?” and require more thought? I do this pass first because typos and grammatical errors drive me nuts, and if I don’t have them marked, I cannot set them aside and focus on content.
  • My second read through focuses on the content: Are the arguments cogent, is there material that isn’t addressed but should be, do the things which made me go “hunh” on the first read still do so, or do they make more sense now that I’m paying more attention to the content?
  • This second read-through often prompts a literature search, where I either double-check my memory of something, double-check what the authors say about a particular fact, or compile a list of references that would be beneficial for the author to consult.
  • If the paper is technical, then it is after the second read-through that I sit down and work through any of the proofs that were not completely transparent.
  • Then, I sit down with my annotated copy of the paper and go through the laborious process of typing up my notes and turning them into a useful report (see What Makes a Good Referee Report).

Each of these stages individually takes, on average, less than an afternoon. Thus, there is in principle no reason why I shouldn’t be able to turn around referee reports in a week.

But of course, in practice, that doesn’t happen. There are often things higher in the priority queue which cause the tedious task of writing referee reports to get pushed down, so I don’t get around to printing off the paper right away, or a week elapses between each of the steps. Some delays are legitimate: If a referee request arrives while I’m traveling and I don’t have access to a printer, there is an enforced delay before I can start. Or perhaps I’m on vacation, or home with a sick child. But even with such delays, the only real excuse I can give for not getting the majority of referee reports turned in within a month is procrastination.

The more that I think about this, the more I think that this aspect of our publishing practices needs to change. Many editorial boards (my own included) have a policy of 60- or 90-day initial deadlines for reports, the motivation being that this gives referees time to do their jobs thoroughly and without being rushed, so that the end result is of higher quality. But given my own experience as a report writer, I have to question whether longer deadlines actually result in better reports, rather than just later ones. Working under the assumption that a referee will take roughly the same amount of time to write his report whether he is given 28 days, 60 days, or 90 days, what are the potential drawbacks of shorter refereeing periods? The most obvious is that more people will turn down the request, having legitimate reasons for not having the time in the upcoming four-week (say) block: conferences, holidays, other non-standard obligations, etc. The next worry is that the reports received will be of lower quality, or shorter, or not as detailed. These are legitimate worries (and there may be more: Please share yours in the comments!), but I think both can be addressed.

In the first case, we have a referee who would otherwise have agreed to review a paper but declines because the refereeing period is too short. This can be accommodated by the introduction (or higher uptake) of another practice: Asking the referee herself to set a reasonable deadline (which will still likely be less than 60 or 90 days, even if more than 4 weeks) for the report. The few times that I’ve been approached by a journal which defaults to a 4-week or similarly short period, and have been uncertain whether I could accommodate that deadline, I’ve replied suggesting an alternative, usually only 1-2 weeks later, and these suggestions have been met with pleasure. I’ve also found, personally, that when I set my own deadline, I’m much more likely to meet it (or finish early) than if I’m working under a deadline set by someone else — even if the latter is further in the future.

Regarding the second, I’ve already discussed above how, in my own practice, having more time to write the referee report doesn’t necessarily result in my actually spending more time writing it. I expect this is similar for many others. In the case where this actually is a problem (where, e.g., two reports are received within the four-week period and they are not sufficiently detailed or useful for the editor to make his decision), one can simply either ask a third referee or ask one or both of the referees who’ve already responded to expand on the unclear parts of their reports. Either of these options is likely to still result in a faster turnaround time, from receipt of the submission to the decision by the editor.

The length of time given for referee reports seems to me to be a matter of philosophical culture and practice, rather than of genuine need. In other fields, short turnaround times for referee reports are the norm, and the practice works well because it is a part of the academic culture. In computer science, most conferences require full papers at the time of submission, and these are rigorously refereed, sometimes with extremely short turnaround times (my husband has been on the programme committees of conferences where he’s been given 2-3 papers to report on in a 7-10 day period). Because these papers are technical in nature, and thus require more time for a thorough reading of the proofs and definitions, the amount of work that goes into producing a useful report is, in many cases, significantly more than goes into refereeing a philosophy paper. Of course, one difference between this situation and that of ordinary journal submissions is that programme committee members know roughly when they will receive their submissions and have to write the reports, and so can block out the relevant time on their calendars when they agree to be on the committee, whereas referees for journals are generally contacted out of the blue. Nevertheless, a deadline of 4 weeks instead of 10 days removes much of the risk of receiving a request when one is simply unable to handle it in the required time, and the possibility of suggesting an alternative deadline means that such cases can be dealt with easily and effectively.

At this point, I have myself pretty well convinced that there is not much reason to have extended deadlines for referee reports (and intend to take this issue up with my editor-in-chief). I would love to hear arguments in favor of the status quo in the comments!

How Much is Too Much?

[Note: This was originally published on Monday, then accidentally deleted yesterday, and now re-published today. Apologies!]

Two anonymous readers write asking about self-plagiarism:

(#1) I’ve been wondering lately about “self-plagiarism” and to what extent it is acceptable to reuse material from one’s own previous papers. It seems to me that this is actually quite common in philosophy, but I wonder how to approach editors about this.

I’m thinking of submitting a paper to an anthology, and (maybe) later another paper to a journal. The latter paper would reuse material from the anthology paper, but develop it in a slightly different direction. To what extent would this be acceptable? Should one alert the journal editor about this and ask whether it is okay?

I realise that there are copyright issues here, but my question is more about the ethics of self-plagiarism. It would be interesting to hear what editors think about this.

(#2) What’s protocol for submitting two articles that have a significant amount of overlapping content (say, a page) to separate journals? The articles, as I am imagining the situation, are otherwise significantly different.

There are three sorts of contexts where the issue of self-plagiarism generally arises. The first is when you have the same material that you want to present to two, mostly disjoint, audiences. The second is when you are presenting new material which builds on previous material you’ve already published. The third is when you are presenting new material inspired by the same background setting as something you’ve previously published (even if the new paper doesn’t explicitly build on the previous publication).

Since I work in a field where I publish to two almost disjoint audiences (history of logic and mathematical logic), I face all of these situations regularly. In almost every paper I write, before I can present the new and interesting material, I have to provide some historical, expository information. (Once you’ve written more than a few papers on obligationes, it becomes very hard to present the same background material in new and exciting ways! But since my audience is rarely historians, I can’t assume any of them already know anything about the genre.) So I’ve developed a few personal guidelines:

  • There should be as little verbatim material as possible. (You cannot be sure that you won’t end up having overlapping readers — especially if the readers of the second paper are assiduous in following up references to the first — and no one likes to read exactly the same paragraph over and over.) If you can rewrite the material, even a little bit, do so.
  • Where you can get away with simply referencing the previous paper, do so. (This is easier to do when what you’re referencing is, e.g., previously proven theorems.)
  • Be explicit. If the argument has appeared in print in a different version, say so.
  • Ask yourself what you hope to gain by re-packaging the argument and presenting it to a new audience. What can the new presentation give your new audience that they wouldn’t be able to get from the previous one? What do they gain? This can be the addition of necessary background information that was assumed in the original paper, the augmentation of the paper with new arguments, responses to criticisms raised against the previous paper, discussion of how the subject of the paper is relevant to the interests of the new audience, etc.

There is no hard-and-fast quantitative guideline for “how much overlap is too much.” Any attempt to give such a quantitative rule (“75% is too much”; “50% is too much”; “25% is too much”; “any is too much”) could only be justified by circling back to the motivation behind both of the questions above. These questions arise from the standpoint that plagiarism is bad — a foundation I think we all agree on. In my opinion, whether the work being plagiarised belongs to the person doing the plagiarising or to another does not really matter: The standards of what counts as plagiarism should be the same in both cases. (I know not everyone agrees with this, and invite dissenting arguments in the comments!) But before we can say how much self-plagiarism is too much, we first need to consider why plagiarism is frowned on, because this differs between self- and non-self-plagiarism. Plagiarising someone else’s thoughts is an attempt to appropriate for your own credit something due to someone else. Plagiarising your own thoughts is an attempt to receive credit twice for the same idea.

This gives us an important distinction for the question of “how much is too much”: Ideas. There is a qualitative difference between self-plagiarism of arguments and self-plagiarism of expository matter, with the former being significantly more problematic than the latter. In fact, for the most part, the latter is almost completely unproblematic, so long as it is restricted in amount and the other guidelines I suggest above are taken into account; if you’re writing more than one paper on the same subject material, there is no way to escape presenting the same expository information in more than one paper. But in a profession like philosophy, where publications matter so much, a problem arises when there is an appearance that an author is attempting to double-dip, that is, to get two papers for the “price” of a single idea.* So what you want to do is show — both to your reader and to the editor — that you are cognizant of this, and that you are trying to offer more than the same paper repackaged differently.

The moral of the story: Cite your sources. Be explicit about what you are re-using. Say how the derivative piece goes beyond the original. Document, document, document, and then let the editor make the final decision as to whether there is too much overlap. Then the question becomes not one of “does this paper exhibit plagiarism” but one of “does this paper provide sufficient new and interesting content to warrant publication?”


Footnote

*. This is a norm that differs from field to field. In computer science, for example, it is completely routine for large portions of papers published in conference proceedings to be lifted with minimal change into journal articles which extend the conference results substantially. But these cases are nevertheless still explicitly marked: The journal article will include in its acknowledgements or in the introduction a statement that it is an extended version of one or more conference papers, with full citations.

Preparing a Manuscript for Anonymous Review

A nuts-and-bolts post about best practices:

The submission process for most journals includes the request that the author prepare their submission for anonymous review. This step ensures that referees (and, at journals practicing triple-anonymous review, editors) can’t discover the identity of the author simply by reading the submission.

While some aspects of preparing a manuscript for blind review are universally known (e.g., delete your name from the first page), many are not. And for some aspects, there may be disagreement about best practices. So how should you prepare your manuscript, aside from the obvious step of removing your name from the paper?

Acknowledgements: Many authors include acknowledgements or thanks to those who helped them with their papers, perhaps as a first or last footnote. These should be deleted entirely and replaced with something that indicates they were removed, like “Acknowledgments removed”. In my view, this also extends to any place in the body of the paper where an individual is recognized as contributing: You might thank a person at a particular point in the text rather than in the general acknowledgements, and those passages should be edited as well. Even remarks like “Jane Smith pointed out to me in conversation . . .” should be edited, in my view, since such remarks may inadvertently lead the reader to discover your identity. (Some sub-disciplines are small, and now the referee knows that the author is not Jane Smith, for example.) Something as simple as “[Name removed] pointed out to me in conversation . . .” works well.

Document Properties: Word processing software often automatically includes your name, institution, email, or other identifying marks in the metadata of the file you submit, perhaps without your knowledge. You can find this information in the document properties. For example, open a Microsoft Word document that you created on your computer, go to File > Properties, and you may see this information. If it’s there, it’s easy to delete, and preparing your submission for anonymous review should include this step. (The way to delete these properties, and the ease with which it can be done, differs from software to software.) In spite of specific instructions to authors to check for this information, probably half of the submissions to Res Philosophica include it. (We had to set up a process where the Editorial Manager checks for this before sending the submission to the Editor.)
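
For authors who want to automate this check, here is a minimal sketch of the scrubbing step for Word files. It assumes the third-party python-docx package (installed with pip install python-docx); the filename and the choice of fields to clear are illustrative, not any journal’s official procedure.

```python
# Minimal sketch: strip identifying metadata from a .docx before submission.
# Assumes the third-party python-docx package: pip install python-docx
from docx import Document

def scrub_docx_metadata(path: str) -> None:
    doc = Document(path)
    props = doc.core_properties
    # Clear the fields most likely to identify the author.
    props.author = ""
    props.last_modified_by = ""
    props.comments = ""
    doc.save(path)

scrub_docx_metadata("submission.docx")  # illustrative filename
```

PDF submissions deserve the same treatment; most PDF tools expose an analogous document-properties dialog where author information can be cleared.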

Self-citation: Often you want to cite your own prior work; indeed, research projects typically build on one’s previous work, so self-citation is often unavoidable. But for anonymous review you have to anonymize those references. Standardly, this is done as I suggested above for acknowledgements: Instead of “As I argue in Jacobs 2011 . . .”, you would have “As I argue in [citation removed] . . .” Again, though, doing it this way, especially in a small sub-field, can often lead to the author being identified by the referee, since the referee knows, for example, that the author of the submission is none of the authors named in its bibliography. (I’ve even seen bibliographies that leave the author’s own entries in alphabetical order, but with the details deleted, so that it’s clear the author’s last name begins with, say, “M”.)

In light of this, it seems to me that there is a better way to handle self-citation: Edit the paper so as to not use first person in self-citation contexts. So instead of changing “As I argue in Jacobs 2011 . . .” to “As I argue in [citation removed] . . .”, you would change it to “As Jacobs (2011) argues . . .” This does require a bit more editing on the part of the author, but it does seem to avoid the worries that simply deleting the author’s name raises. It’s not perfect, but initially I’m inclined to think it’s the best way to handle self-citation.

What do you think? Is that the best way to handle self-citation? Are there other issues to think about when preparing a manuscript for blind review?

Conflicts of Interest

[Micro-intro: I’m a new author on this blog. I’m Carrie Jenkins, one of the editors of Thought. Hi!]

Wesley Buckwalter writes, via our Suggest a Topic page:

It would be very helpful to hear a discussion concerning editorial and referee conflicts of interest. For instance, there are many potential kinds of conflicts of interest in reviewing (the author is your PI, a history of co-authorship, a shared grant, the same department, etc.), with seemingly no consensus in philosophy about which are appropriate or inappropriate when issuing invitations to review. I frequently receive invitations in which I declare even remote/perceived conflicts, or sometimes even feel that I must decline in light of them. I realize it is difficult to recruit referees, but this seems like an issue essential to the quality of review, and something on which we should have a transparent consensus, from the perspective of both editors and reviewers.

The first issue this raises in my mind is that of anonymity. If a reviewer is in a position to declare a conflict of interest, then there is no possibility of anonymous review, and that is an issue independent of the conflict of interest. In my experience, there is little by way of disciplinary consensus regarding when non-anonymous reviewing is acceptable. There do exist areas of philosophy in which expertise is so limited that the only alternative to non-anonymous review is non-expert review. The rest, I take it, is a matter of judgment calls.

Qua editor, I would say it is best practice for reviewers to declare to editors if they know who the author of a paper is, especially (but not only) if they feel that there could be a conflict of interest. In some circumstances they might still be the best (and/or only) person available, but it really helps if editors know that there could be an issue, so that they can ask someone else wherever possible.

Further thoughts/discussion welcome!

The Value and Cost of Desk Rejections

Many journal policies include the option of desk-rejecting a submission: To return it to the author with a negative response without sending it to referees.

What is the value of doing this?

Clearly, there is significant value for the editor/editorial staff: A desk rejection is quick, straightforward, and takes a submission out of the queue for good. No searching for referees, no waiting for their reports, no hounding the referees when the reports are late.

For the author, there is the value of quick turnaround: Because submissions which have been desk-rejected don’t go out to referees, there is generally a much shorter time between submission and final decision. This allows the author to promptly turn around and resubmit, rather than languishing in no man’s land for months only to find that his waiting was for naught and he needs to begin the process again.

Thus, it would seem that a strong case can be made for desk-rejecting those submissions which are clearly unsuitable for publication in the particular venue.

However, I think that this conclusion might be a little too simplistic, because not all desk-rejections are equal. In particular, there is a large difference between “We are not going to accept your submission because it is not suited for publication in our specific venue” and “We are not going to accept your submission because we think it is not suited for publication tout court” — but this is a difference only for the author, and not really for the editor. Thus I think the question should be not “What is the value of desk rejections?” but “What is the value of desk rejections when no reason is given?” Here, the answer is not quite so straightforward. Clearly, the benefits to the editor given above all still hold. But there is a sharp drop in value for the author. What good is a quick turnaround if the author ultimately has no idea why the paper was not only not accepted, but not even sent out to referees? With absolutely no guidance, how is the author to know how, or even whether, he should revise before resubmitting? A submission which has been desk-rejected because it is unsuited to a particular venue (but for which there may be a suitable venue out there) will be handled by the author in a different way from one which has been desk-rejected because it is in principle not publishable. Without knowing which is the case, or whether the situation is somewhere in between, a desk-rejection has little to no value for an author.

Given the benefits to the editors of having a policy which includes the option of desk-rejecting, I think it’s clear that editors should retain that option: We truly don’t want to antagonize our potential referee pool by sending them papers which we already know we aren’t going to publish! However, I think that in exchange for the value the option gives, editors should consider paying the small cost of providing a few sentences of explanation. It needn’t be detailed, but the guidance it gives the author as to why the paper was desk-rejected outweighs, in my opinion, the cost to the editor of adding this information to her form-letter rejection. That cost is one editors should be willing to pay in order to obtain the value of a quick decision on an unsuitable submission.

Morally Superior File Formats

What does a publisher contribute to a journal? We academics supply the content, and we do the curating too, as referees and editors. Does the publisher supply the website, the editorial software, and the storage of published papers? Even that’s all free now, thanks to libraries like the University of Toronto’s and the University of Michigan’s, and thanks to the Public Knowledge Project’s Open Journal Systems.

So what’s a publisher good for? Converting the ragged miscellany of files submitted by authors into professional, publishable PDFs. Oh, and in the process they establish the official page-numbers later authors will refer to.

It’s a surprisingly trivial contribution, but if you want your journal taken seriously, you had better produce a uniform and minimally stylish product. So unless the editors have time to convert each file by hand (!), or they can recruit someone else to do it ($), a computer must do the job. The catch is: computers can’t do this job unless they can read each author’s chosen file format and identify all the sections, headings, citations, numbered lists, tables, diagrams, etc. For reasons I won’t go into here, popular software like Microsoft Word makes this impossible to do reliably.

This problem was actually solved by computer scientists at Stanford way back in the late ’70s and early ’80s. The solution wasn’t some fancy program that parses the messy files produced by Word and converts them into polished journal pages. It was instead to get people to write in a standard file format that already identifies headings as headings, citations as citations, lists as lists, etc. Leslie Lamport created the format that is now standard in math and many sciences: LaTeX. It generates beautiful, finished products, of even higher quality than those produced by professional typesetters, thanks to the free underlying TeX software created single-handedly by Donald Knuth. (If you’re not familiar with Knuth, he’s a fascinating character with an amusingly idiosyncratic home page.)

LaTeX is now standard in fields where mathematical formulae are common, because typing math in Word is excruciating and slow with unlovely results, while LaTeX makes math easy, fast, and beautiful. But philosophers have been slow to adopt the format. It has just enough of a learning curve to make Word preferable in the short run, if you don’t need to type any math. So few philosophers bother to learn the system that every mathematician learns as a student.
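
For readers who have never seen it (this sample is mine, not the post’s), LaTeX source is plain text that names its own structure, which is exactly what automated typesetting needs:

```latex
\documentclass{article}
\begin{document}

\section{A Sample Section} % a heading explicitly marked as a heading

Inline math is typed directly, as in $e^{i\pi} + 1 = 0$, and
displayed equations are just as simple:
\[
  \int_0^1 x^2 \, dx = \frac{1}{3}
\]

\end{document}
```

Running this through the free pdflatex program produces a typeset PDF with no publisher involved.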

This leaves us philosophers dependent on publishers in a way mathematicians are not. Editors of math journals can count on authors to use LaTeX, and they can use Knuth’s free software to typeset papers without any help from a publisher. But in philosophy, file formats pose a final, petty barrier to open access publication.

So are we doomed by our ludditism to eternal dependence on publishers? No. Simple, user-friendly options are now available. And thanks to a large and supportive community of programmers, free software for working with these simpler formats is plentiful. One such promising format is Markdown, created by John Gruber with help from the late Aaron Swartz (yes, that Aaron Swartz). This post was written in Markdown and published using the wonderful pandoc program created by philosopher John MacFarlane. To see just how simple Markdown is, you can read the source-text for this post here. For more on the scholarly use of Markdown, go here.
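
As a taste (this sample is mine, not the post’s source), Markdown marks structure with lightweight punctuation, and a single command such as pandoc post.md -o post.pdf turns it into a typeset document:

```markdown
# Why File Formats Matter

A heading, *emphasis*, lists, and links are all explicit:

1. Write in plain text.
2. Let free software do the typesetting.

See [pandoc](https://pandoc.org) for the conversion tool.
```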

There are other moral reasons to prefer a format like Markdown. The software isn’t just free, it’s also open-source, meaning the code is publicly available for anyone to copy or modify. Software that “nobody owns” is essential to a public forum like the web. In fact, most of the web is open-source. (Most web servers are open-source, and over 70% of web browsers, including Chrome, Firefox, and Safari, are open-source under the hood.)

When we write in Microsoft Word, we defect in a collective action problem. We make ourselves dependent on commercial publishers and software companies, lining their pockets at the expense of students, taxpayers, and universities. Were we to cooperate instead like the mathematicians, this outcome could be easily avoided.

Though we don’t ordinarily think it, some file formats are morally superior to others.

Should Journals Publish Their Statistics?

How many submissions does Philosophy Journal X receive each year? What portion of those are accepted, rejected, or R&Red, and how quickly are these decisions reached? How many are in history, philosophy of race, or epistemology? How many come from senior philosophers, from women philosophers, from American philosophers?

Philosophy journals don’t normally track and publish such statistics. There are many reasons they should though—and maybe some reasons they shouldn’t.

Most obviously, submitting authors want to know their chances, and their expected wait-times. Junior philosophers especially need this information to plan their job and tenure applications. Thanks to Andrew Cullison and his immensely valuable journal surveys, authors can now get a rough, comparative view of which journals offer better odds and better wait-times. But the representativeness and accuracy of the surveys are necessarily limited, since the data is supplied anonymously by submitting authors on a volunteer basis.

Tracking and publishing statistics can also help keep editors honest and journals efficient. It’s harder to ignore undesirable trends when one has to face the cold, hard numbers at the end of the month, especially if those numbers become public. Making them public also provides recourse for the community should a journal become systematically derelict. Volunteer data can be accused of bias, and every journal has disgruntled submitters, so word of mouth can always be met with the insistence that decency is the norm and the horror stories are just unfortunate exceptions. Shining a light over the whole terrain leaves fewer shadows to hide in.

A clearer picture of what goes on in our journals might also help us understand broader problems in our discipline. How do sexism, racism, area-ism, and other forms of bias and exclusion operate and propagate in philosophy? How effective is anonymous review at eliminating biases? Systematic knowledge of who submits where, and what fates they enjoy or suffer there, can help us identify problems as well as solutions. It can also help us identify other trends, such as the rise and decline of certain topics and areas, thus enriching our understanding of what we do as a community and why.

So why shouldn’t journals publish their statistics? It does take some time and effort. But online editorial systems now make it a fairly trivial task. The familiar epistemological, ethical, and metaphysical challenges of collecting demographic data may be more serious. How would a journal determine an author’s gender, sexual orientation, area of specialization, ethnicity, or race? Requiring much of this information from submitters is unethical, while making it voluntary risks biasing the sample. And there are metaphysical questions about which dimensions are meaningful and what categories they should employ (if there is such a thing as ‘race’ in a relevant sense, which categories should be used?).

These are more reasons to think carefully before trying than reasons not to try. Anonymized, voluntary questionnaires with thoughtful, inclusive categories would have minimal ethical overhead. Since they also have the potential to supply valuable information about our discipline, we should at least try to gather that information. We can always rejig or scale back if initial efforts prove unsuccessful.

Anyway, none of these complications prevents us from gathering the information that already lies unused in journal databases. It’s easy enough to count how many submissions came in, what topics they dealt with, what decisions they met, and how quickly it all happened. At the very least, then, journals should gather and publish this information at regular intervals.
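
To make the point concrete, here is a minimal sketch of the kind of counting involved, assuming a hypothetical CSV export with “decision” and “days_to_decision” columns; the column names and filename are illustrative, not any journal’s actual schema.

```python
# Minimal sketch: tally decisions and wait-times from a hypothetical
# CSV export of a journal's editorial database.
import csv
from collections import Counter
from statistics import median

def summarize(path: str) -> None:
    decisions = Counter()
    wait_times = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            decisions[row["decision"]] += 1          # e.g. accept / reject / R&R
            wait_times.append(int(row["days_to_decision"]))
    total = sum(decisions.values())
    for decision, n in decisions.most_common():
        print(f"{decision}: {n} ({100 * n / total:.0f}%)")
    print(f"Median days to decision: {median(wait_times)}")

summarize("submissions_2013.csv")  # illustrative filename
```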