Annual Meeting Report

How to Do Editorial Research

MODERATOR:
Mary Warner
American Pharmacists Association
Washington, DC

SPEAKERS:
Jeanette Panning
American Geophysical Union
Washington, DC

Morgan Sorenson
American Academy of Neurology
Minneapolis, Minnesota

Jeannine Botos
Oxford University Press
New York, New York

Kelly Anderson
American Society of Civil Engineers
Reston, Virginia

REPORTER:
Mary Warner
American Pharmacists Association
Washington, DC

The best editorial operations not only run well, but also know why they run well. And to know why your operations run well, you need data about your journal and its readership. At the CSE 2018 Annual Meeting, the session “How to Do Editorial Research” aimed to provide an overview of how to collect those data through editorial research, including tips on getting started and case studies from successful research projects.

The session began with an overview of how to get started on an editorial research project, including formulating the question you want to answer about areas such as impact factor trends, peer review, submissions, authorship, business models and pricing, or readership (see Figure 1 for a list of sample questions).

Figure 1. Sample editorial research questions.

Mary Warner, speaking for Jeanette Panning, summarized the methodology for conducting editorial research—surveys, metrics, and data mining. She emphasized using your in-house manuscript submission and tracking system to pull information on submissions by authorship affiliation, gender, society membership, etc.; accepted versus declined manuscripts by author and reviewer characteristics; reviewer quality; and trends over time (Figure 2).

Figure 2. Suggestions for making use of the reporting capability of your manuscript submission system.

Searching online databases such as Clarivate’s Journal Citation Reports, Google Scholar, and PubMed can also yield valuable data to help answer your question.
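
To make this concrete, the following is a minimal sketch of the kind of report such a system export might support, assuming a hypothetical CSV file (submissions.csv) with columns for submission year, author country, and final decision; the file and column names are illustrative and were not part of the session.

    # Minimal sketch: mining a manuscript-system export for submission trends.
    # Assumes a hypothetical CSV with columns: year, author_country, decision
    # ("accept" or "decline"); adjust the names to match your own system's export.
    import pandas as pd

    subs = pd.read_csv("submissions.csv")

    # Submission volume and acceptance rate by year
    by_year = subs.groupby("year").agg(
        submissions=("decision", "size"),
        acceptance_rate=("decision", lambda d: (d == "accept").mean()),
    )
    print(by_year)

    # Accepted vs. declined counts by author country
    by_country = (
        subs.groupby(["author_country", "decision"])
        .size()
        .unstack(fill_value=0)
    )
    print(by_country.head(10))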

Warner then summarized survey methodology, including tips for designing your survey to ensure valid results. Best practices include the following:

  • Keep it short—no more than 10–15 minutes to complete
  • Offer no more than 5 response choices for rating questions
  • Use succinct (simple) wording to avoid confusion
  • Include no more than 2 open-ended questions
  • Use responsive design to allow completion on mobile devices
  • If possible, offer an incentive (access to results, raffle for a gift card, etc.)

The session continued with 3 case studies: Morgan Sorenson described efforts to evaluate social media effectiveness, Jeannine Botos described a reviewer incentive program, and Kelly Anderson discussed identity verification of author-suggested reviewers.

Sorenson shared results from a study at the American Academy of Neurology to determine whether there was value in their efforts to promote papers via Twitter and Facebook and whether one type of social media was more effective than the other. They compared web access numbers for 6 papers on similar topics (some promoted, some not); surveyed authors to see if they did their own promotion; and compared web traffic from Twitter and Facebook. Results showed that less than 1% of web traffic came from social media, with Twitter having the higher click rate, and that authors were generally not doing their own social media promotion. Based on these results, they decided not to increase the time spent on social media, while possibly exploring more engaging methods on Twitter and focusing on other ways to drive traffic to the journal’s website.

Sorenson concluded by sharing a few tips for analyzing social media results, including using the free analytics provided by both Facebook and Twitter. These reports can help determine who your top followers are, what topics are getting the most attention on social media, and what times are most effective to post new content for your readership.
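
As a rough illustration of this kind of analysis, the sketch below computes the share of article-page traffic arriving from social media and compares the two platforms, assuming a hypothetical analytics export (traffic.csv) with one row per visit and a referrer_source column; the file and column names are assumptions, not part of the AAN study.

    # Rough sketch: what share of article-page traffic comes from social media,
    # and how do Twitter and Facebook compare? Assumes a hypothetical analytics
    # export with columns: article_id, referrer_source ("twitter", "facebook",
    # "search", "direct", ...). Illustrative only.
    import pandas as pd

    traffic = pd.read_csv("traffic.csv")

    social = traffic["referrer_source"].isin(["twitter", "facebook"])
    print(f"Share of traffic from social media: {social.mean():.2%}")

    # Visits by platform, overall and per article
    print(traffic.loc[social, "referrer_source"].value_counts())
    print(
        traffic.loc[social]
        .groupby(["article_id", "referrer_source"])
        .size()
        .unstack(fill_value=0)
    )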

Botos described work done by staff of the Journal of the National Cancer Institute (JNCI) to implement a reviewer incentive program, through which $50 would be donated to Cancer Care’s patient education programs for every high-quality peer review submitted within 7 days of accepting the invitation. Their hypothesis was that this program would speed up the peer-review process and motivate reviewers to accept invitations.

Reviewers were informed of the program in their invitation, acceptance, and reminder letters, and the quality of each review was assessed by the JNCI editorial staff. The numbers and percentages of good-quality peer reviews completed in 7 days or less, along with peer reviewer acceptance and turnaround times, were compared across 2 periods: the 15 months of the program and the 8 months before it began. The results indicated that the number of good-quality peer reviews completed in 7 days or less increased by 5% for initial submissions and by 16% for revisions. After 15 months, mean peer reviewer turnaround time was reduced by 0.8 days for initial submissions and by 0.4 days for revisions. The team concluded that although the program was associated with an increase in the speed of good-quality individual reviews and with small improvements in average on-time peer reviews, it did not lead to a substantially faster peer-review process. The program was ended at that point.
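
For readers who want to run a similar comparison, a before-versus-during summary along these lines could be computed as sketched below, assuming a hypothetical export (reviews.csv) with one row per completed review; the file and column names are illustrative and are not drawn from the JNCI program.

    # Sketch: compare on-time, good-quality reviews before vs. during the program.
    # Assumes a hypothetical export with columns: period ("before" or "program"),
    # submission_type ("initial" or "revision"), quality ("good", "fair", "poor"),
    # and days_to_complete. Illustrative only.
    import pandas as pd

    reviews = pd.read_csv("reviews.csv")
    reviews["on_time_good"] = (reviews["quality"] == "good") & (
        reviews["days_to_complete"] <= 7
    )

    summary = reviews.groupby(["submission_type", "period"]).agg(
        pct_on_time_good=("on_time_good", "mean"),
        mean_turnaround_days=("days_to_complete", "mean"),
    )
    print(summary)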

Anderson then shared work recently done at the American Society of Civil Engineers (ASCE) on identity verification of author-suggested reviewers. This work resulted from a case of fraudulent peer review in which an author had provided the name of a qualified researcher but with an email address that allowed the author to review his own paper. ASCE staff wanted to determine how frequently editors used author-suggested reviewers and whether they vetted those individuals.

Figure 3. Results and editor feedback to the question “Do you take steps to verify author suggested reviewer’s identity?”

Using SurveyMonkey, editors and associate editors were asked various questions regarding the use of author-suggested reviewers, including how frequently they used author-suggested reviewers, any methods used to verify reviewer identity, and whether they felt the suggested names were useful. The data showed that 86% of the respondents use author-suggested reviewers frequently or sometimes. Most indicated that suggested reviewers were used only when needed, specifically in specialized niche fields where the pool of reviewers is small. The data also showed that 56% take steps to verify a reviewer’s identity, institution, expertise, and affiliation (if any) with the author, using tools such as Google Scholar and the journal’s database of reviewer history (Figure 3). Finally, 70% of respondents indicated that it is valuable to have author-suggested reviewers, but that it is necessary to verify the reviewer’s affiliation and expertise through various sources to avoid reviewer fraud. As a result of this work, ASCE removed the option for authors to supply an email address for suggested reviewers. Instead, authors supply a reviewer’s name and institution, leaving the responsibility for finding and verifying a reviewer’s email address to the editor. Anderson concluded that as reviewer misconduct becomes a larger problem in scholarly publishing, it is important to survey editors periodically to see where policies can be tweaked to avoid ethical issues.
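
As a final illustration, a lightweight screen of the kind editors described, checking a suggested reviewer against the journal's own reviewer history and flagging free webmail addresses, might look like the sketch below; the function, file, and field names are hypothetical and do not represent ASCE's actual workflow.

    # Sketch: simple sanity checks on an author-suggested reviewer.
    # Assumes a hypothetical reviewer-history export (reviewer_history.csv)
    # with columns: name, institution, email. Illustrative only.
    import pandas as pd

    FREE_MAIL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}

    def screen_suggested_reviewer(name, institution, history):
        """Return simple flags an editor might check before sending an invitation."""
        matches = history[history["name"].str.lower() == name.lower()]
        flags = []
        if matches.empty:
            flags.append("no prior review history in the journal's database")
        else:
            on_file = matches.iloc[0]
            if on_file["institution"].lower() != institution.lower():
                flags.append("institution does not match reviewer history")
            if on_file["email"].split("@")[-1].lower() in FREE_MAIL_DOMAINS:
                flags.append("email on file is a free webmail address")
        return flags

    history = pd.read_csv("reviewer_history.csv")
    print(screen_suggested_reviewer("Jane Doe", "Example University", history))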