Annual Meeting Reports

Publishing Questions—Data-Informed Solutions

At one point during her presentation, Heather Goodell of the American Heart Association (AHA) said, “This is the panel where even when you have data, you may not arrive at an answer to your question!”

This valuable and engaging session illustrated how the industry leaders on the panel have attempted to answer questions about their operations with the data they have at hand.

Session moderator and panelist Diane Scott-Lichter of the American College of Physicians wanted to know how to determine whether certain content is of value to readers and writers. She quickly ran into a matching challenge: article-level data on type and citations captured in Thomson Reuters Web of Science did not line up neatly with usage data from the journal’s online host, Silverchair. By matching citation data with usage data, Scott-Lichter learned about differences among the article types. Identifying articles that are both highly used and highly cited, those that are only highly used or only highly cited, and those that are neither well used nor well cited allows domain experts to examine them more closely and inform editorial decisions.
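
To make the matching challenge concrete, the sketch below shows one way such a comparison might be set up. It is a minimal illustration only: the file names, column headings, and thresholds are assumptions, not the actual Web of Science or Silverchair exports or the criteria described in the session.

```python
# Illustrative sketch: match DOI-keyed citation and usage exports, then bucket
# articles by whether they are highly used, highly cited, both, or neither.
# File names, field names, and cutoffs are assumptions for illustration.
import csv

def load_counts(path, key_field, count_field):
    """Read a DOI-keyed CSV export into a {doi: count} dictionary."""
    counts = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row[key_field].strip().lower()] = int(row[count_field])
    return counts

citations = load_counts("wos_citations.csv", "DOI", "TimesCited")        # assumed Web of Science layout
usage = load_counts("silverchair_usage.csv", "doi", "full_text_views")   # assumed Silverchair layout

CITE_CUTOFF, USE_CUTOFF = 10, 500  # arbitrary thresholds for illustration

buckets = {"high use & high cite": [], "high use only": [],
           "high cite only": [], "low use & low cite": []}

for doi in citations.keys() & usage.keys():  # only articles matched in both sources
    cited, used = citations[doi] >= CITE_CUTOFF, usage[doi] >= USE_CUTOFF
    if cited and used:
        buckets["high use & high cite"].append(doi)
    elif used:
        buckets["high use only"].append(doi)
    elif cited:
        buckets["high cite only"].append(doi)
    else:
        buckets["low use & low cite"].append(doi)

# DOIs present in only one source: the discrepancy problem described above.
unmatched = citations.keys() ^ usage.keys()
```

The leftover `unmatched` set is the point of the exercise: before any editorial interpretation can happen, the two sources have to be reconciled at the article level.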

With 60% of submissions to JAMA rejected without external peer review, panelist Annette Flanagin noted that the JAMA editors wanted to evaluate the effectiveness of the process by which rejected submissions are transferred to other specialty journals in the JAMA Network, a process other families of journals commonly call “cascading peer review.” JAMA’s previous passive, manual-transfer method required authors to opt in after the rejection decision. Changing to an active, automated method, in which authors opt in at the time of submission to JAMA and the JAMA family editors agree to guarantee a review within five days, improved overall efficiency and increased author acceptance of such transfers. The number of transferred manuscripts increased 4-fold, and more of the transferred papers were accepted for publication in the second journal. The result ensured fast decisions while maintaining healthy rejection rates and not diluting the JAMA brand.

Panelist Helen Atkins of PLOS wanted a better way to predict accurately when a submitted article had a high likelihood of being rejected so that it could be triaged to appropriate editors through an expedited process. As a community-run journal with no editor-in-chief, PLOS ONE was looking to optimize the peer-review process for an average of 200 daily submissions. After analyzing two years of data on rejected and accepted articles, a number of criteria emerged. In addition to identifying specific problematic article types, such as clinical trials, meta-analyses, and genome-wide association studies, Atkins recognized that the analysis could be expanded once more standardized data, such as author affiliations (Ringgold IDs) and funding sources (FundRef), which are only newly being captured, became more established.
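
A triage rule of this kind can be illustrated with a short sketch. The flagged article types below come from the session summary, but the data structure, function names, and routing labels are assumptions for illustration, not PLOS ONE’s actual workflow.

```python
# Illustrative sketch: route submissions whose article type matched the
# problematic categories identified in the analysis to an expedited path.
EXPEDITED_TYPES = {"clinical trial", "meta-analysis", "genome-wide association study"}

def triage(submission):
    """Return a routing hint for a new submission.

    `submission` is assumed to be a dict with at least an 'article_type' key.
    'ringgold_id' and 'funder_ids' (FundRef) are optional fields that a richer
    model could use once that metadata is captured more consistently.
    """
    if submission.get("article_type", "").lower() in EXPEDITED_TYPES:
        return "expedited review by specialist editor"
    return "standard editorial assignment"

# Example: route a small sample of the ~200 submissions received per day.
queue = [
    {"article_type": "Meta-analysis", "ringgold_id": None, "funder_ids": []},
    {"article_type": "Research article", "ringgold_id": "12345",
     "funder_ids": ["10.13039/100000002"]},
]
for sub in queue:
    print(sub["article_type"], "->", triage(sub))
```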

The American Meteorological Society has been seeing a 10% to 15% decline in print runs, year after year, for all nine of the technical journals it offers both online and in print. Even with an anticipated 20% cost savings from eliminating print, panelist Ken Heideman and Society leadership remain unconvinced that doing so is the right thing to do, citing abundant anecdotal feedback suggesting that a nontrivial subset of members is willing to pay more to receive the journals in print. Heideman was emphatic in stating that subscribers will continue to receive content in whichever form they prefer; the Society has no plans to unilaterally “kill” print.
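
As a rough illustration of what a sustained 10% to 15% annual decline implies, the sketch below simply compounds those rates over five years from a hypothetical starting print run; the starting figure and horizon are invented for illustration and are not AMS numbers.

```python
# Back-of-the-envelope projection of the reported 10%-15% annual decline
# in print runs. The starting run size is hypothetical.
start_run = 1000  # hypothetical copies printed in year 0
for rate in (0.10, 0.15):
    run = start_run
    projection = []
    for year in range(1, 6):
        run *= (1 - rate)
        projection.append(round(run))
    print(f"{int(rate * 100)}% annual decline:", projection)
```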

Goodell wanted to know whether social media was “really worth it” and used a randomized trial of AHA Facebook and Twitter activity for the journal Circulation to try to find out. The editor-in-chief was the only AHA editor using social media, and the concern was that readers would forgo the actual article in favor of the social media coverage of it. The jury is still out on the results; however, Goodell learned along the way that a tweet about a study examining the use of social media at Circulation generated more social media activity than the entire marketing campaign designed for that purpose.

All in all, this session was an excellent survey of common questions across scholarly publishing. The panelists gave lively, thoughtful presentations to a large audience and took on a number of challenging audience questions as well. It was clear from the presentations that data-driven investigations can sometimes yield inconclusive results and yet, other times, yield serendipitous discoveries.