How many submissions does Philosophy Journal X receive each year? What portion of those are accepted, rejected, or R&R'd, and how quickly are these decisions reached? How many are in history, philosophy of race, or epistemology? How many come from senior philosophers, from women philosophers, from American philosophers?
Philosophy journals don't normally track and publish such statistics. There are many reasons they should, though, and maybe some reasons they shouldn't.
Most obviously, submitting authors want to know their chances, and their expected wait-times. Junior philosophers especially need this information to plan their job and tenure applications. Thanks to Andrew Cullison and his immensely valuable journal surveys, authors can now get a rough, comparative view of which journals offer better odds and better wait-times. But the representativeness and accuracy of the surveys are necessarily limited, since the data is supplied anonymously by submitting authors on a volunteer basis.
Tracking and publishing statistics can also help keep editors honest and journals efficient. It's harder to ignore undesirable trends when one has to face the cold, hard numbers at the end of the month, especially if those numbers become public. Making them public also provides the community with recourse should a journal become systematically derelict. Volunteer data can be accused of bias, and every journal has disgruntled submitters, so word of mouth can always be met with the insistence that decency is the norm and the horror stories are just unfortunate exceptions. Shining a light over the whole terrain leaves fewer shadows to hide in.
A clearer picture of what goes on in our journals might also help us understand broader problems in our discipline. How do sexism, racism, area-ism, and other forms of bias and exclusion operate and propagate in philosophy? How effective is anonymous review at eliminating biases? Systematic knowledge of who submits where, and what fates they enjoy or suffer there, can help us identify problems as well as solutions. It can also help us identify other trends, in the rise and decline of certain topics and areas, for example, thus enriching our understanding of what we do as a community and why.
So why shouldn't journals publish their statistics? It does take some time and effort. But online editorial systems now make it a fairly trivial task. The familiar epistemological, ethical, and metaphysical challenges of collecting demographic data may be more serious. How would a journal determine an author's gender, sexual orientation, area of specialization, ethnicity, or race? Requiring much of this information from submitters is unethical, while making it voluntary risks biasing the sample. And there are metaphysical questions about which dimensions are meaningful and what categories they should employ (if there is such a thing as 'race' in the relevant sense, which categories should be used?).
These are more reasons to think carefully before trying than reasons not to try. Anonymized, voluntary questionnaires with thoughtful, inclusive categories would have minimal ethical overhead. Since they also have the potential to supply valuable information about our discipline, we should at least try to gather it. We can always rejig or scale back if initial efforts prove unsuccessful.
Anyway, none of these complications prevents us from gathering the information that already lies unused in journal databases. It's easy enough to count how many submissions came in, what topics they dealt with, what decisions they met, and how quickly it all happened. At the very least, then, journals should gather and publish this information at regular intervals.