Should Journals Publish Their Statistics?

How many submissions does Philosophy Journal X receive each year? What proportion of those are accepted, rejected, or R&R’d, and how quickly are these decisions reached? How many are in history, philosophy of race, or epistemology? How many come from senior philosophers, from women philosophers, from American philosophers?

Philosophy journals don’t normally track and publish such statistics. There are many reasons they should though—and maybe some reasons they shouldn’t.

Most obviously, submitting authors want to know their chances, and their expected wait-times. Junior philosophers especially need this information to plan their job and tenure applications. Thanks to Andrew Cullison and his immensely valuable journal surveys, authors can now get a rough, comparative view of which journals offer better odds and better wait-times. But the representativeness and accuracy of the surveys are necessarily limited, since the data is supplied anonymously by submitting authors on a volunteer basis.

Tracking and publishing statistics can also help keep editors honest and journals efficient. It’s harder to ignore undesirable trends when one has to face the cold, hard numbers at the end of the month, especially if those numbers become public. Making them public also provides recourse for the community should a journal become systematically derelict. Volunteer data can be accused of bias, and every journal has disgruntled submitters, so word of mouth can always be met with the insistence that decency is the norm and that the horror stories are just unfortunate exceptions. Shining a light over the whole terrain leaves fewer shadows to hide in.

A clearer picture of what goes on in our journals might also help us understand broader problems in our discipline. How do sexism, racism, area-ism, and other forms of bias and exclusion operate and propagate in philosophy? How effective is anonymous review at eliminating biases? Systematic knowledge of who submits where, and what fates they enjoy or suffer there, can help us identify problems as well as solutions. It can also help us identify other trends, in the rise and decline of certain topics and areas for example, thus enriching our understanding of what we do as a community and why.

So why shouldn’t journals publish their statistics? It does take some time and effort. But online editorial systems now make it a fairly trivial task. The epistemological, ethical, and metaphysical challenges of collecting demographic data may be more serious. How would a journal determine an author’s gender, sexual orientation, area of specialization, ethnicity, or race? Requiring much of this information from submitters is unethical, while making it voluntary risks biasing the sample. And there are metaphysical questions about which dimensions are meaningful and what categories they should employ (if there is such a thing as ‘race’ in a relevant sense, which categories should be used?).

These are more reasons to think carefully before trying than reasons not to try. Anonymized, voluntary questionnaires with thoughtful, inclusive categories would have minimal ethical overhead. Since they also have the potential to supply valuable information about our discipline, we should at least try to gather that information. We can always rejig or scale back if initial efforts prove unsuccessful.

Anyway, none of these complications prevents us from gathering the information that already lies unused in journal databases. It’s easy enough to count how many submissions came in, what topics they dealt with, what decisions they met, and how quickly it all happened. At the very least, then, journals should gather and publish this information at regular intervals.
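
As a rough illustration of how little work this involves, here is a minimal Python sketch of the kind of tally meant here. The field names and the toy records are hypothetical stand-ins for whatever an editorial system actually exports; the point is just that once the data is in hand, the counting is trivial.

```python
# Minimal sketch: tally submissions, decisions, areas, and turnaround times.
# The record layout below is hypothetical; a real editorial system's export
# (CSV, spreadsheet, database dump) would supply the same kinds of fields.

from collections import Counter
from datetime import date
from statistics import median

# Stand-in for a database export: one dict per submission.
submissions = [
    {"submitted": date(2014, 1, 10), "decided": date(2014, 3, 2),
     "decision": "reject", "area": "epistemology"},
    {"submitted": date(2014, 2, 5), "decided": date(2014, 2, 12),
     "decision": "desk reject", "area": "ethics"},
    {"submitted": date(2014, 3, 1), "decided": date(2014, 6, 20),
     "decision": "accept", "area": "metaphysics"},
]

decisions = Counter(s["decision"] for s in submissions)
areas = Counter(s["area"] for s in submissions)
wait_days = [(s["decided"] - s["submitted"]).days for s in submissions]

print("Total submissions:", len(submissions))
print("Decisions:", dict(decisions))
print("Areas:", dict(areas))
print("Acceptance rate: %.1f%%" % (100 * decisions["accept"] / len(submissions)))
print("Median days to decision:", median(wait_days))
```

Run over a full year of records and broken down by month or by area, a report like this would already answer many of the questions raised at the top of this post.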


11 thoughts on “Should Journals Publish Their Statistics?”

  1. jdjacobs

    It might be helpful to distinguish between different sets of statistics.

    I think every journal ought to publish a minimal set of statistics; call these level 1. It seems to me that these are: acceptance rate, time to initial decision.

    Then there are a bunch of other statistics that would be nice for journals to report; call these level 2. It seems to me these would include: number of submissions, percent desk reject (if desk review is used), time to desk reject, percent R&R, percent of R&Rs eventually accepted, time from initial R&R to final decision, average time for referee reports to be completed, time from acceptance to publication.
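
    As a sketch of what the bookkeeping for a couple of these level 2 figures might look like, here is a rough Python example (the record layout and the sample histories are invented for illustration, not taken from any actual editorial system):

    ```python
    # Two "level 2" figures: the share of R&R'd papers eventually accepted,
    # and the time from the initial R&R verdict to the final decision.
    # One invented entry per R&R'd paper; "final" is None while still pending.

    from datetime import date
    from statistics import mean

    rr_papers = [
        {"rr_date": date(2013, 4, 1), "final_date": date(2013, 11, 5), "final": "accept"},
        {"rr_date": date(2013, 6, 10), "final_date": date(2014, 1, 20), "final": "reject"},
        {"rr_date": date(2013, 9, 2), "final_date": None, "final": None},
    ]

    settled = [p for p in rr_papers if p["final"] is not None]
    accepted = sum(1 for p in settled if p["final"] == "accept")
    days_to_final = [(p["final_date"] - p["rr_date"]).days for p in settled]

    print("R&Rs with a final decision: %d of %d" % (len(settled), len(rr_papers)))
    print("Eventually accepted: %.0f%%" % (100 * accepted / len(settled)))
    print("Mean days from R&R to final decision: %.0f" % mean(days_to_final))
    ```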

    Then there are statistics where there may be questions concerning whether journals should collect them at all, or publish them if collected; call these level 3. It’s less clear to me what belongs in here, but here’s an initial list: topic areas of papers, demographics of authors of submissions (grad student, adjunct, tenure-track, etc.; woman, etc.; and so on), demographics of referees, whether referees are cited in the paper, correlations of various demographic data (both authors and referees) with decisions.

    Regarding level 1, I wonder what time frame ought to be reported. As an initial stab, I would think a two year average would be good.

    I’m much less confident about what to think about level 3 statistics. I don’t know of any journal that publishes them (but there very well might be one). And it seems to me that I would need to be very careful, both about asking for the data and about analyzing it (since I’m not trained to do so). I hope others chime in about level 3 statistics, both about what should go in there and about whether and how they should be collected and reported.

    1. Sara L. Uckelman

      One of the potential level-3 statistics that you list is one that I would be hesitant to publish and would not give much credence to if I saw it published, and that’s ‘area’. I think that trying to neatly categorize papers into a (possibly) pre-determined list of categories based on what area they are in would be extremely difficult to do well, especially if only one category could be chosen per paper; and if more than one, then it would be time-consuming. Of course, if the categories are extremely high-level, such as Aesthetics, Ethics, Logic, Epistemology, Philosophy of Science, Metaphysics, and History (not intended to be a complete list), then it might be easier to divide up the papers, but the result could only be informative for generalist journals; any journal with a more specific focus would end up putting all of its papers into one or two of the categories, and this wouldn’t be especially useful.

      Given how long the submission process takes, demographics regarding the status of the author would also be difficult to track accurately. Is it the status of the author when the paper was submitted? When it was accepted? When it was published? I’ve had papers where, between the institute I was at when I submitted the paper and the institute I was at when it was finally published, there was a 15-month stint at a third institute.

      1. Jonathan Weisberg (Post author)

        Thanks, Sara, I take your point about complications w.r.t. area. In the case of generalist journals, I think having a few checkboxes for rubrics like ‘Religion’, ‘Aesthetics’, or ‘Logic’ would provide lots of interesting information without significantly extending the submission process. In my experience, many submission systems already ask the author to volunteer keywords, which I’m guessing is less useful than having pre-established categories offered in checkboxes.

        But also, in the case of many specialist journals, I think these same rubrics could still be helpful. For example, papers in philosophy of religion often fall squarely in epistemology, in metaphysics, or in philosophy of physics. Similarly, papers in journals like Erkenntnis and Synthese are sometimes primarily papers in logic, or in epistemology, or in metaphysics.

        In general though, I’m envisioning checkboxes rather than radio-buttons here: a paper would normally fall under 1-3 rubrics, depending on its scope. PhilPapers uses a similar categorization system. Their system also offers a much richer tree structure that would allow specialist journals to use finer categories if they wished.

        Finally, on the concern about changes in demographic status during the review process: journals could simply make an arbitrary decision about which convention to adopt, e.g. always going with the information at the time of initial submission. Making this decision and being transparent about one’s methodology might be better than not collecting any data at all. And the decision needn’t be entirely arbitrary: if the main work in generating a paper happens before it is submitted, then the data at the time of initial submission is the most salient.

    2. Jonathan Weisberg (Post author)

      Thanks for these suggestions, Jonathan; they’re really helpful.

      There are some natural breakpoints we might use to further develop your levels proposal. For example, Level A might include the data journals already have but don’t use (for the most part). This would roughly encompass your levels 1 and 2. Then Level B might include innocuous demographic data that’s generally publicly available, but which would have to be requested from authors to be easily gathered: AOS, institutional geography, tenure-status, and maybe gender. Then, finally, Level C could include less innocuous data, especially data that raises privacy concerns because it can inadvertently out authors when published: sexual orientation and disability status, for example.

      At any rate, I’m inclined to lump levels 1 and 2 together, since journals typically have this data on hand and it’s just a matter of tallying the numbers.

  2. Stephen Read

    I’m on the Editorial Board of The Philosophical Quarterly. It collects statistics on many of these matters annually (number of submissions, acceptance rate, length of reviewing time, country of origin, gender distribution – but not length of time to publication, which is perhaps less necessary with Online Early) and distributes them to the Board meeting every May. Should I propose to the next meeting that these statistics be made available online?

    1. Jonathan Weisberg (Post author)

      Seconded: it’d be great if an eminent journal took the lead on this. Thanks for offering to make the proposal, Stephen.

      I’m encouraged to hear that some journals already track and share stats internally; it’d be interesting to hear of others that do something similar.

  3. Pingback: Making Philosophy Journal Statistics Publicly Available | Daily Nous

  4. philippablum

    Dialectica has been doing something in between level-2 and level-3 statistics for the last 14 years – they are available here:
    http://www.philosophie.ch/dialectica/
    http://www.philosophie.ch/dialectica/dialectica_statistics.pdf

    I think it would be very helpful if philosophy journals made much more information on acceptance rates and submission statistics publicly available. The only other two bits of information I know of are:
    – AJP: around 600 submissions a year, cf. http://www.tandfonline.com/doi/full/10.1080/00048402.2013.850805#.U7K6oahhvgX
    – Mind: around 350 submissions a year, cf. http://mind.oxfordjournals.org/
    Does anyone know about others?

    A quick summary of the dialectica statistics:
    – The acceptance rate over the last ten years is 8.36% (2320 submissions, of which 194 were accepted).
    – In 2013, we published 28 articles and a total of 611 pages (549 excluding commissioned book reviews). Of 298 articles submitted in 2013, 34 were accepted.
    – Our turn-around time is reasonably quick (median of 3 months) and our backlog is small (currently accepted papers are published in 4/2014).
    – Currently, about 12% of our submissions are authored by women. This has been constant over the last 14 years and is surprising, given that about a third of PhDs in philosophy are earned by women and about a quarter of jobs are held by women. The acceptance rate for submissions by women (16%) is higher than that for submissions by men (14%).
    – Between 2007 and 2013, 28% of our submissions came from people working in the US, 20% from the UK, 6% each from Germany and Canada, 5% from Italy, 4% from Spain, and 3% each from Australia, Spain and Switzerland. 12% of the submissions came from Asia (mostly Israel, China, Iran and Hong Kong) and only 1% from Africa.

    1. philippablum

      Update:
      Kimberly Brownlee (Warwick) said on the Leiter Blog (http://leiterreports.typepad.com/blog/2014/07/data-on-journal-submissions-bpa-and-apa-to-undertake-collection-of-information.html) that the British Philosophical Association Executive Committee will gather this data. From responses to blog posts, here is the information I have gathered so far:

      dialectica: 298 submissions in 2013, acceptance rate 2004-13: 8.4%
      Journal of the History of Philosophy: 264 submissions in 2013-2014, acceptance rate 2013-14: 5.3%
      Philosophers’ Imprint: 431 submissions in 2013, acceptance rate 2004-13: 7%
      Ethics: 446 submissions in 2012, acceptance rate 2012: approx. 5%
      European Journal of Political Theory: acceptance rate 11.6%
      Phil. Review: 471 submissions in 2012, acceptance rate 2013: 3.1%
      Australasian Journal of Philosophy: 650 submissions in 2013, acceptance rate 2013: 5.5%
      Mind: 350 submissions

      In my opinion, the acceptance rate is the best measure of the quality of a journal, much better than reputational surveys or citation data: the former depend heavily on who is judging what reputation journals have, and the latter are simply not adequate in philosophy. We do not cite papers (only) to express that they are good, important, or should be read (as is perhaps done in the natural sciences), but (also) because they make similar points, provide evidence that some silly claim we discuss has indeed been made in print, and for lots of other reasons, most of which do not correlate with quality at all.

      The problem with acceptance rates is that it must be made explicit how they are calculated. Most importantly, journals should say which of their papers were commissioned or are part of special issues that do not undergo the normal refereeing process (Phil Studies and J of Phil, e.g., publish lots of papers from conferences, without always saying so explicitly). Furthermore, it has to be made clear whether resubmissions count as new submissions, and whether the acceptance rate is the proportion of papers published to papers submitted within that period, or is instead based on the acceptance verdicts delivered within that period, etc. For acceptance rates covering short periods, such as a single year, there is also a lot of random fluctuation that has to be factored out. But all in all, I think that the acceptance rate over a reasonably long period of time (minimum five years) is the best measure of journal quality there is.
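
      To see how much these conventions matter, here is a small made-up Python calculation (every number below is invented) showing three ways a single year’s acceptance rate could come out for the same journal:

      ```python
      # Toy illustration of how the reporting convention changes the "acceptance
      # rate". All figures are invented; the point is only that one journal-year
      # can yield different percentages depending on the definition used.

      submitted_2013 = 300             # new submissions received in 2013
      accepted_from_2013_cohort = 24   # of those 300, eventually accepted (in any year)
      decisions_in_2013 = 320          # verdicts delivered in 2013 (incl. papers from earlier years)
      acceptances_in_2013 = 30         # acceptance verdicts delivered in 2013
      resubmissions_2013 = 40          # revised versions logged as new submissions

      cohort_rate = accepted_from_2013_cohort / submitted_2013
      decision_year_rate = acceptances_in_2013 / decisions_in_2013
      with_resubmissions = acceptances_in_2013 / (submitted_2013 + resubmissions_2013)

      print("Cohort-based rate:      %.1f%%" % (100 * cohort_rate))
      print("Decision-year rate:     %.1f%%" % (100 * decision_year_rate))
      print("Counting resubmissions: %.1f%%" % (100 * with_resubmissions))
      ```

      Even in this toy case the three figures differ by more than a percentage point, which is why the convention should be stated alongside the number.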

