
The Value and Cost of Desk Rejections

Many journal policies include the option of desk-rejecting a submission: To return it to the author with a negative response without sending it to referees.

What is the value of doing this?

Clearly, there is significant value for the editor/editorial staff: A desk rejection is quick, straightforward, and takes a submission out of the queue for good. No searching for referees, no waiting for their reports, no hounding the referees when the reports are late.

For the author, there is the value of quick turnaround: Because submissions which have been desk-rejected don’t go out to referees, there is generally a much shorter time between submission and final decision. This allows the author to promptly turn around and resubmit, rather than languishing in no man’s land for months only to find that his waiting was for naught and he must begin the process again.

Thus, it would seem that a strong case can be made for desk-rejecting those submissions which are clearly unsuitable for publication in the particular venue.

However, I think that this conclusion might be a little too simplistic, because not all desk rejections are equal. In particular, there is a large difference between “We are not going to accept your submission because it is not suited for publication in our specific venue” and “We are not going to accept your submission because we think it is not suited for publication tout court”. But this is a difference only for the author, and not really for the editor. Thus I think the question should be not “What is the value of desk rejections?” but “What is the value of desk rejections when no reason is given?”

Here, the answer is not quite so straightforward. Clearly, the reasons given above for why desk rejections are valuable to the editor all still hold. But there is a sharp drop in value for the author. What good is a quick turnaround if the author ultimately has no idea why the paper was not only not accepted, but not even sent out to referees? With absolutely no guidance, how is the author to know how, or even whether, he should revise before resubmitting? A submission which has been desk-rejected because it is unsuited to a particular venue (but for which there may be a suitable venue out there) will be handled by the author in a different way from one which has been desk-rejected because it is in principle not publishable. Without knowing which is the case, or whether the situation is somewhere in between, a desk rejection has little to no value for the author.

Given the benefits to editors of having a policy which includes the option of desk-rejecting, I think it’s clear that editors should retain that option: We truly don’t want to antagonize our potential referee pool by sending them papers which we already know we aren’t going to publish! However, I think that in exchange for the value that the option gives, editors should consider paying the small cost of providing a few sentences of explanation. It needn’t be detailed, but the value of giving the author some guidance as to why the paper was desk-rejected outweighs, in my opinion, the cost to the editor of adding this information to her form-letter rejection. That is a cost editors should be willing to pay in order to retain the value of a quick decision on an unsuitable submission.


Morally Superior File Formats

What does a publisher contribute to a journal? We academics supply the content, and we do the curating too, as referees and editors. Does the publisher supply the website, the editorial software, and the storage of published papers? Even that’s all free now, thanks to libraries like the University of Toronto’s and the University of Michigan’s, and thanks to the Public Knowledge Project’s Open Journal Systems.

So what’s a publisher good for? Converting the ragged miscellany of files submitted by authors into professional, publishable PDFs. Oh, and in the process they establish the official page-numbers later authors will refer to.

It’s a surprisingly trivial contribution, but if you want your journal taken seriously, you had better produce a uniform and minimally stylish product. So unless the editors have time to convert each file by hand (!), or they can recruit someone else to do it ($), a computer must do the job. The catch is: computers can’t do this job unless they can read each author’s chosen file format and identify all the sections, headings, citations, numbered lists, tables, diagrams, etc. For reasons I won’t go into here, popular software like Microsoft Word makes this impossible to do reliably.

This problem was actually solved by computer scientists at Stanford way back in the late ’70s and early ’80s. The solution wasn’t some fancy program that parses the messy files produced by Word and converts them into polished journal pages. It was instead to get people to write in a standard file format that already identifies headings as headings, citations as citations, lists as lists, etc. Leslie Lamport created the format that is now standard in math and many sciences: LaTeX. It generates beautiful, finished products, of even higher quality than those produced by professional typesetters, thanks to TeX, the free typesetting software created single-handedly by Donald Knuth, on which LaTeX is built. (If you’re not familiar with Knuth, he’s a fascinating character with an amusingly idiosyncratic home page.)
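To give a concrete flavor (my own minimal sketch, not from the original post; the citation key is a made-up placeholder), here is what “identifying headings as headings, citations as citations, lists as lists” looks like in LaTeX source:

    \documentclass{article}
    \begin{document}

    \section{Introduction}        % a heading, marked as a heading

    As \cite{lamport1986} argues, % a citation, marked as a citation
    the following points hold:

    \begin{itemize}               % a list, marked as a list
      \item the first point
      \item the second point
    \end{itemize}

    \end{document}

Because every element is labeled explicitly, software can typeset the document, or restyle it for a different journal, without any guesswork.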

LaTeX is now standard in fields where mathematical formulae are common, because typing math in Word is excruciating and slow with unlovely results, while LaTeX makes math easy, fast, and beautiful. But philosophers have been slow to adopt the format. It has just enough of a learning curve to make Word preferable in the short run, if you don’t need to type any math. So few philosophers bother to learn the system that every mathematician learns as a student.
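To see the difference in miniature (my example, not the post’s): a displayed formula such as Euler’s identity is a single line of LaTeX source, typed without ever leaving the keyboard:

    \[ e^{i\pi} + 1 = 0 \]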

This leaves us philosophers dependent on publishers in a way mathematicians are not. Editors of math journals can count on authors to use LaTeX, and they can use Knuth’s free software to typeset papers without any help from a publisher. But in philosophy, file formats pose a final, petty barrier to open access publication.

So are we doomed by our ludditism to eternal dependence on publishers? No. Simple, user-friendly options are now available. And thanks to a large and supportive community of programmers, free software for working with these simpler formats is plentiful. One promising such format is Markdown, created by John Gruber with help from the late Aaron Swartz (yes, that Aaron Swartz). This post was written in Markdown and published using the wonderful pandoc program created by philosopher John MacFarlane. To see just how simple Markdown is, you can read the source-text for this post here. For more on the scholarly use of Markdown, go here.
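For comparison (again my own sketch, not the post’s), the same kind of structural information in Markdown is even lighter:

    # Introduction

    A paragraph with *emphasis*, followed by a list:

    - the first point
    - the second point

And converting such a file into a finished PDF is a single pandoc command (assuming an illustrative file name post.md; PDF output requires a LaTeX installation, which pandoc uses behind the scenes):

    pandoc post.md -o post.pdf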

There are other moral reasons to prefer a format like Markdown. The software isn’t just free, it’s also open-source, meaning the code is publicly available for anyone to copy or modify. Software that “nobody owns” is essential to a public forum like the web. In fact, most of the web is open-source. (Most web servers run open-source software, and over 70% of web browsers are open-source under the hood: Firefox is fully open-source, and Chrome and Safari are built on the open-source Chromium and WebKit engines.)

When we write in Microsoft Word, we defect in a collective action problem. We make ourselves dependent on commercial publishers and software companies, lining their pockets at the expense of students, taxpayers, and universities. Were we to cooperate instead like the mathematicians, this outcome could be easily avoided.

Though we don’t ordinarily think it, some file formats are morally superior to others.

Should Journals Publish Their Statistics?

How many submissions does Philosophy Journal X receive each year? What portion of those are accepted, rejected, or R&Red, and how quickly are these decisions reached? How many are in history, philosophy of race, or epistemology? How many come from senior philosophers, from women philosophers, from American philosophers?

Philosophy journals don’t normally track and publish such statistics. There are many reasons they should though—and maybe some reasons they shouldn’t.

Most obviously, submitting authors want to know their chances, and their expected wait-times. Junior philosophers especially need this information to plan their job and tenure applications. Thanks to Andrew Cullison and his immensely valuable journal surveys, authors can now get a rough, comparative view of which journals offer better odds and better wait-times. But the representativeness and accuracy of the surveys are necessarily limited, since the data is supplied anonymously by submitting authors on a volunteer basis.

Tracking and publishing statistics can also help keep editors honest and journals efficient. It’s harder to ignore undesirable trends when one has to face the cold, hard numbers at the end of the month, especially if those numbers become public. Making them public also provides recourse for the community should a journal become systematically derelict. Volunteer data can be accused of bias, and every journal has disgruntled submitters, so word of mouth can always be met with the insistence that decency is the norm and that the horror stories are just unfortunate exceptions. Shining a light over the whole terrain leaves fewer shadows to hide in.

A clearer picture of what goes on in our journals might also help us understand broader problems in our discipline. How do sexism, racism, area-ism, and other forms of bias and exclusion operate and propagate in philosophy? How effective is anonymous review at eliminating biases? Systematic knowledge of who submits where, and what fates they enjoy or suffer there, can help us identify problems as well as solutions. It can also help us identify other trends, such as the rise and decline of certain topics and areas, thus enriching our understanding of what we do as a community and why.

So why shouldn’t journals publish their statistics? It does take some time and effort, but online editorial systems now make it a fairly trivial task. The familiar epistemological, ethical, and metaphysical challenges of collecting demographic data may be more serious. How would a journal determine an author’s gender, sexual orientation, area of specialization, ethnicity, or race? Requiring much of this information from submitters would be unethical, while making it voluntary risks biasing the sample. And there are metaphysical questions about which dimensions are meaningful and what categories they should employ (if there is such a thing as ‘race’ in a relevant sense, which categories should be used?).

These are more reasons to think carefully before trying than reasons not to try. Anonymized, voluntary questionnaires with thoughtful, inclusive categories would have minimal ethical overhead. Since they also have the potential to supply valuable information about our discipline, we should at least try to gather that information. We can always rejig or scale back if initial efforts prove unsuccessful.

Anyway, none of these complications prevents us from gathering the information that already lies unused in journal databases. It’s easy enough to count how many submissions came in, what topics they dealt with, what decisions they met, and how quickly it all happened. At the very least, then, journals should gather and publish this information at regular intervals.