The central discussion of this session focused on the different methods journals use to accept and reject manuscripts. The method depends on a journal's purpose or goal, so it varies from organization to organization.
Peter D Adams, editor at the American Physical Society (APS), and Leslie Sage, senior editor at Nature, both described how their journals' goal is to publish recent research that substantially affects their journals' fields. Their submission-evaluation process is rigorous, and they invest time only in papers that they think will probably be accepted. In-house editors make the preliminary decision to reject or accept a manuscript. Only after a manuscript has passed that initial evaluation does it proceed to peer review. Adams noted that, depending on the particular journal, the number of papers APS rejects after the initial internal evaluation is roughly half the number it rejects after external review. The internal rejections are somewhat arbitrary, but few of them are challenged.
Sage detailed how the chief editors of each science team at Nature read all the abstracts of papers submitted in their fields, reject all the obviously inappropriate submissions, and assign the remaining papers to the appropriate scientific editors. Those editors evaluate whether the subject of a manuscript is interesting, the science is sound, and the research advances the field; they also check for duplicate submissions or plagiarism. A manuscript that passes those tests undergoes peer review. Nature rejects roughly 75% of its submissions without external review, and, as at APS, few of the rejections are challenged. Both APS and Nature also reject manuscripts after peer review. Accepted papers go through both developmental editing and copyediting.
Emma Veitch, senior editor at PLOS ONE, described a different approach to manuscript evaluation. The PLOS ONE approach is to accept all papers that deserve to be published; to that end, PLOS ONE has tightly applied criteria for accepting manuscripts. If the reporting is clear, the science is sound, and the data support the conclusions, PLOS ONE will publish the manuscript. The editors do not ask how important the paper's contribution to the field is or what the relevant audience is. Their goal is to provide a database in which all research—whether the findings were positive, negative, mixed, or replications of results of previous research—is available for others to review. Accordingly, PLOS ONE rejection figures are low. After a manuscript has gone through peer review and is accepted, it is given minimal copyediting and then is published at the authors' or research funders' expense.
Adams, Sage, and Veitch all stated that each journal should have a clear vision and accept papers that fit that vision. Editors do not need to have criteria for rejection that are set in stone, but they do need to be consistent in their choices and be able to justify their decisions as upholding the journal's vision.
In addition, Sage discussed Nature's views on the selection of referees. People with whom the authors are collaborating were declared poor choices, as was anyone academically related to any of the authors. Sage cautioned that this can be tricky because not all connections among researchers are obvious. People cited in acknowledgments are sometimes acceptable as referees, but editors should generally avoid them as well. The best referees are postdoctoral scholars who completed their doctoral work 2–5 years previously and are working on the same topic, especially ones whom the authors did not cite in the manuscript.
Veitch also discussed PLOS ONE's specialized metrics. Rather than using the standard impact factor, PLOS ONE evaluates impact at the article level via custom markers and statistics. It tracks sharing of articles through social media; the actual number of published citations that an article receives; and usage, measured through downloads, views, and academic bookmarks on PLOS's own sites and on PubMed.