In keeping with CSE’s interest in editorial research, the Council of Science Editors 2010 annual meeting included CSE’s third annual poster session. Research for presentation was chosen on the basis of blinded review of abstracts. Presented below are abstracts of the selected research, which addressed a wide variety of topics in editing and publication. Forthcoming issues of Science Editor will include reports on other sessions of the annual meeting.
Conflicts of Interest
Analysis of Number and Type of Publications That Editors Publish in Their Own Journals: Case Study of Scholarly Journals in Croatia
Lana Bošnjak, Livia Puljak, Katarina Vukojević, Ana Marušić
University of Split School of Medicine, Split, Croatia
Background: Editors of scholarly academic journals are often active researchers and may have papers suitable for publication in their own journals. However, this constitutes a conflict of interest that needs to be declared and managed. To assess practices of editors publishing in their own journals, we analyzed published pieces by decision-making editors of 180 scientific journals published in Croatia.
Methods: We counted publications by decision-making editors in their own journals for the period 2005–2008. Eligible publications were those relevant for academic advancement in Croatia: original research articles, reviews, professional papers, and brief research reports. Editorials and other editorial material were excluded.
Results: The 172 editors-in-chief and 84 other editors (associate, executive, or junior) published a total of 887 articles in their own journals (median, 1 per editor; range, 0–90). Of those articles, 332 (37.4%) were relevant for academic promotion: 204 original articles (median, 0 per editor; range, 0–15), 66 professional articles (0; 0–8), and 62 review articles (0; 0–5). Almost half the editors did not publish any articles relevant for academic promotion in their own journals, and 18 (7.0%) published five or more articles in their own journals, ranging from five (eight editors) to 25 (one editor). Of the 18 editors with high self-publishing figures, 11 had fewer than five articles published in other international journals indexed in Current Contents, and six had fewer than five articles published in other journals indexed in SCOPUS in the same period. Two editors published exclusively in their own journals. None of the journals had a publicly available policy for addressing conflict of interest related to submissions from decision-making editors, and none of the articles included a statement on the decision-making process regarding the manuscript.
Conclusion: Although most of the editors in this case study did not appear to misuse their own journals for scientific publishing and academic promotion, there is a need for greater transparency of the declaration and management of editorial conflict of interest in academic and scholarly journals.
Editing and Editorial Decisions
Acceptance Rate of Pfizer-Sponsored Manuscripts
LaVerne Mooney, Daireen Garcia, and Lorna Fay
Pfizer, New York, New York
Background: Delays in the publication of scientific data have the potential to affect treatment decisions and transparency. Some factors that influence delays are rejection of manuscripts by peer-reviewed journals and subsequent manuscript resubmissions. The analysis objectives were to determine the acceptance rate (AR) of Pfizer-sponsored manuscripts on first submission to a journal and the eventual acceptance rate on resubmission and to identify ways to reduce the number of resubmissions and speed the availability of Pfizer data to health-care professionals and patients.
Methods: Pfizer-supported manuscripts (n = 347) whose authors were responsible for journal selection were analyzed, including data on 11 drugs in three therapeutic areas. For each drug, every manuscript submitted for the first time in the period 1 January 2008 to 30 June 2009 was tracked, and AR was calculated. A subset (n = 55) was analyzed for the time from initial submission to publication and for reasons for rejection.
Results: The AR for first-time submissions was 50%, and it could be up to 55% if those awaiting journal decisions are accepted. Of manuscripts rejected on first submission, 51% were accepted on second submission to a different journal; if those awaiting journal decisions are accepted, the AR could be 69%. The eventual acceptance rate was 74% and could be up to 87%.
The subset analysis showed that average time from submission to publication was 5.9 months (median, 5.5 months). Each additional submission delayed publication by about 3 months. One-third of the journals’ “decision letters” indicated that the choice of journal was inappropriate.
Conclusion: Our analysis showed that Pfizer-sponsored manuscripts may require more than one journal submission before acceptance; yet the majority of these manuscripts are published in peer-reviewed journals. Use of presubmission inquiries to assess journal interest may help to balance authors’ aspirations with journals’ expectations. Adherence to International Committee of Medical Journal Editors (ICMJE) guidelines and familiarity with journal AR and audiences are critical for success. Improved journal selection can reduce delays associated with serial submissions, speed the availability of data to health-care providers and patients, and reduce the burden on the peer-review process.
Note: Acceptance-rate data on 171 of the manuscripts were presented at the International Society for Medical Publication Professionals annual meeting in April 2010.
A Corpus-Based Evaluation of Usage Advice from the CSE and Chicago Style Manuals
Shelsea Van Ornum, Doris R Dant, and Janene Auger
Brigham Young University, Provo, Utah
Background: Until recently, describers of language phenomena were limited in their ability to study usage systematically in edited, published texts. The development of large corpora has opened new possibilities for usage research. However, few corpora contain sufficient, diverse, recent, and publicly accessible data to ensure validity for a study of usage items specifically in science–technology–medicine (STM) writing. The recent release of the Corpus of Contemporary American English in 2008 enabled us to evaluate prescriptive usage advice in two style manuals empirically.
Methods: We selected 10 usage items mentioned in both the Chicago Manual of Style (15th ed) and Scientific Style and Format: The CSE Manual for Authors, Editors, and Publishers (7th ed). We then searched the corpus to see whether actual usage, as represented in two corpus subgenres of academic writing (Sci/Tech/Agri and Med), was consistent with the advice of the style manuals or with the advice of one manual over the other. We determined patterns of usage for each item on the basis of frequency, occurrence per million words, and comparative percentage of usage.
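The normalization described in the Methods (occurrence per million words and comparative percentage of usage) can be sketched as follows; the function names and the illustrative counts are hypothetical, not the study’s data or tooling.

```python
# Hypothetical sketch of the corpus normalization described above: raw
# frequencies are scaled to occurrences per million words so that usage
# variants can be compared across subgenres of different sizes.

def per_million(raw_count, corpus_size_words):
    """Frequency normalized to occurrences per million words."""
    return raw_count / corpus_size_words * 1_000_000

def comparative_percentage(count_a, count_b):
    """Share of usage taken by variant A in an A-versus-B pair."""
    total = count_a + count_b
    return 100 * count_a / total if total else 0.0

# Illustrative numbers only: variant A occurs 120 times and variant B
# 880 times in a 10-million-word subgenre.
print(per_million(120, 10_000_000))         # 12.0 occurrences per million
print(comparative_percentage(120, 880))     # variant A takes 12.0% of the pair
```

Comparing these normalized figures across the two academic subgenres, rather than raw counts, is what makes the two corpora commensurable.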
Results: Of the 10 usage items, four (farther–further, flammable–inflammable, impact, and presently) had patterns of usage in corpus samples that aligned more closely with Chicago’s recommendations than with CSE’s recommendations. Three items (after–following, although–while, and healthy–healthful) aligned more closely with the CSE’s recommendations than with Chicago’s recommendations. Two items (different from–different than and due to–because of) followed the advice of both style manuals with some minor deviations. And one item (people–persons) aligned with neither manual.
Conclusion: In nine of the 10 items we researched, STM usage followed the recommendations of at least one style manual. However, STM usage did not consistently correspond to the advice of one manual over the other. We suspect that the same pattern would hold for a larger sample of items. Thus, STM writers and editors must be prudent when applying the usage advice found in style manuals, inasmuch as the advice of neither Chicago nor CSE alone appears to fully reflect actual STM usage in our corpus samples.
A Proposed Search Engine to Assist Editors and Reviewers in Research Peer Review
Robert G Badgett, University of Texas Health Science Center at San Antonio
E Glynn Harmon, University of Texas at Austin School of Information
Background: Research peer review of grant and manuscript submissions is time consuming and difficult. We hypothesize that an automated process can be constructed to help reviewers assess novelty and comprehensiveness of concepts in a submission by showing the most relevant publications not cited by a submission.
Methods: We used an existing, published manuscript as a case study to create a simulation of a search at the time of submission of the manuscript. After developing search algorithms for MEDLINE and ClinicalTrials.gov using existing, validated filters, we reproduced the searches for the period just before the manuscript was published.
The search algorithm steps were (1) identify MeSH terms by using a frequency table of MeSH terms for articles cited by the manuscript, (2) identify the articles and registered trials estimated to be most important, and (3) remove from this list articles cited by the submission.
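The three steps above can be sketched as follows; the data structures, field names, and scoring rule are assumptions for illustration, since the abstract does not specify an implementation.

```python
# A minimal sketch of the three-step algorithm described above, using
# hypothetical article records with "pmid" and "mesh" fields.
from collections import Counter

def rank_uncited(cited_articles, candidate_pool):
    # Step 1: build a frequency table of MeSH terms over the cited articles.
    mesh_freq = Counter(term for art in cited_articles for term in art["mesh"])

    # Step 2: estimate each candidate's importance by how strongly its MeSH
    # terms match the citation profile (a simple overlap score).
    def score(article):
        return sum(mesh_freq[t] for t in article["mesh"])

    # Step 3: remove candidates the submission already cites; rank the rest.
    cited_ids = {a["pmid"] for a in cited_articles}
    uncited = [a for a in candidate_pool if a["pmid"] not in cited_ids]
    return sorted(uncited, key=score, reverse=True)

cited = [{"pmid": 1, "mesh": ["Buprenorphine", "Methadone"]},
         {"pmid": 2, "mesh": ["Methadone", "Opioid-Related Disorders"]}]
pool = [{"pmid": 1, "mesh": ["Buprenorphine", "Methadone"]},
        {"pmid": 3, "mesh": ["Methadone"]},
        {"pmid": 4, "mesh": ["Naltrexone"]}]
print([a["pmid"] for a in rank_uncited(cited, pool)])  # [3, 4]
```

The key output is the ranked list of uncited candidates: the articles a reviewer might expect the submission to have engaged with but did not.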
Results: The manuscript used in the demonstration is a narrative review of treating opioid-dependent outpatients published in May 2008 (PMID: 18458279). The review included the statement that buprenorphine is less effective than methadone. The search strategy identified articles available to the authors but not cited by them. For example, one systematic review not cited suggested that the larger variation in dose for methadone than for buprenorphine accounts for differing reports of relative efficacy and adverse effects (PMID: 15720937; published 2005). Similarly, a trial not cited found that buprenorphine was more effective than low-dose methadone (PMID: 11058673; published 2000). The search identified 141 relevant trials registered at ClinicalTrials.gov; only 5% of the trials had identifiable publications of their results.
Conclusion: The search algorithms retrieved relevant articles, including trials in high-impact journals, that were not cited by the authors. The plethora of registered but not published trials suggests a dynamic topic. In this specific example of a well-written article published in a major journal, the value of the additional citations is debatable. In other settings, searching the additional citations might help reviewers to judge the quality and novelty of manuscripts and grant submissions. Such judgments could accelerate triaging of submissions.
Author-Suggested Versus Editor-Selected Reviewers: Comparison of Recommendations and Effect on Editor Decision
Jessica L Moore, Eric G Neilson, and Vivian Siegel
Vanderbilt University School of Medicine, Nashville, Tennessee
Background: Editors’ choices about whether to invite reviewers whom authors suggested, or whom authors requested not be invited, might be informed by research comparing the recommendations of author-suggested and author-discouraged reviewers with those of editor-selected reviewers. Results of previous research indicate that author-suggested reviewers make more favorable recommendations, but whether this is true at other journals and whether it influences editors’ decisions remain unknown.
Methods: We examined reviewer recommendations and editor decisions on all 211 manuscripts reviewed by the Journal of the American Society of Nephrology (JASN) during the 6-month period October 2008–March 2009. To compare recommendations and editor decisions, we assigned them numerical values from 1 (“reject”) to 5 (“accept as submitted”). Only seven author-discouraged reviewers made recommendations on manuscripts reviewed in the original period, so we collected data on manuscripts to which author-discouraged reviewers were assigned over a longer period, April 2007–July 2009.
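The 1-to-5 scoring scheme described in the Methods can be sketched as follows; the intermediate labels for scores 2–4 are inferred from the Results text, and the label for 4 is a hypothetical placeholder, since the abstract names only three points of the scale.

```python
# A hedged sketch of the scoring scheme: reviewer recommendations are
# mapped to 1-5, and group means are compared (the study then applied
# nonparametric tests, e.g. Mann-Whitney U, to these scores).
SCALE = {
    "reject": 1,
    "reject unless substantially revised": 2,
    "accept if significantly revised": 3,
    "accept if minorly revised": 4,   # hypothetical label; not in the abstract
    "accept as submitted": 5,
}

def mean_score(recommendations):
    """Mean of the numeric values assigned to a group's recommendations."""
    scores = [SCALE[r] for r in recommendations]
    return sum(scores) / len(scores)

# Illustrative recommendations only, not the study's data.
group = ["accept if significantly revised",
         "accept as submitted",
         "reject unless substantially revised"]
print(round(mean_score(group), 2))  # 3.33
```

Because the scale is ordinal rather than interval, the study appropriately compared groups with rank-based tests rather than t-tests on these means.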
Results: Author-suggested reviewers (mean, 2.97, roughly equivalent to “accept if significantly revised”; n = 101 reviews) gave significantly more positive recommendations than editor-selected reviewers (mean, 2.59, between “reject unless substantially revised” and “accept if significantly revised”; n = 433) (P < 0.005, Mann-Whitney U). That led us to investigate whether the reviewers that authors suggest for these manuscripts generally give more positive recommendations than editor-selected reviewers, regardless of whether they are asked to review a particular manuscript; preliminary results suggest that they do. Author-discouraged reviewers gave significantly more negative recommendations (mean, 2.46) than editor-selected reviewers considering the same manuscript (mean, 2.90; n = 39) (P = 0.049, Wilcoxon signed rank). The difference between editor decision and mean recommendation (mean, -0.89) did not differ significantly from the mean without author-suggested reviewers’ input (mean, -0.79; n = 62) (P = 0.166, Wilcoxon signed rank), whereas the same comparison with regard to author-discouraged reviewers was nearly significant (-0.67 versus -0.82; n = 39) (P = 0.055).
Conclusions: Recommendations by author-suggested and author-discouraged reviewers differed from those of editor-selected reviewers, but their reviews did not appear to affect editors’ decisions. That agrees with JASN’s editorial policy, which encourages editors to come to their own decisions rather than relying on an average or majority of reviewers’ recommendations.
CME for Peer Reviewers—How to Assess Benefits
Mary Beth Schaeffer and Christine Laine
Annals of Internal Medicine, Philadelphia, Pennsylvania
Background: Annals of Internal Medicine grants up to 3 hours of Category 1 continuing medical education (CME) credit to each reviewer whose review is returned on time and judged satisfactory by an editor. Reviewers are notified by e-mail that they are eligible, and they must claim the credit, as required by the Accreditation Council for Continuing Medical Education (ACCME). Annals is also asked to provide yearly survey results to its education department as part of the annual audit of CME activities, to show how this activity has fulfilled the requirements for accreditation; the requirements involve a long list of items, some of which are not applicable to peer review.
Methods: We sent a link to the survey to the 762 physicians who claimed at least 1 hour of CME credit in 2008 for their work in reviewing manuscripts for Annals. We received 223 responses.
Results: Few reviewers (12%) felt that the act of peer reviewing a manuscript led to a change in practice; a much larger proportion (74%) felt that the exercise increased their confidence in applying their knowledge in practice.
Conclusions: The important job of peer reviewing requires reviewers to apply their expertise in a critique for authors and editors, a task that may warrant additional research into the literature. ACCME requires that we prove that a reviewer fulfilled particular requirements. We would like to see different criteria used in the future to judge peer review.