Annual Meeting Report

Short Course on Journal Metrics

REPORTER:
Carissa A. Gilman
Managing Editor, Cancer

Offered every other year since 2009, the Short Course on Journal Metrics was created by past CSE president Angela Cochran as a deep dive into the different kinds of data available to publication managers and what can be done with them. Cochran, Director of Journals for the American Society of Civil Engineers (ASCE), began the day with introductions and a discussion of the importance of data in making informed decisions and influencing the behavior of stakeholders. She pointed out that publishers and journal managers are no longer the only stakeholders interested in publication metrics; more and more, authors are seeking out metrics to evaluate the impact of their work and justify the need for funding. Because publishers already collect some of these data, journal editorial offices can now explore providing more metrics to authors as an enhanced suite of services.

Cochran first turned the podium over to Jason Roberts of Origin Editorial for one of the fundamental sessions of the day: Editorial Office Metrics. This was not a simple explanation of how to pull the standard reports on submissions, acceptance rates, and turnaround times. Roberts challenged attendees to take their reports to the next level and apply sound research practices and statistical principles to the collection, analysis, and presentation of data. He emphasized that reporting a metric is not enough; one also needs context, detail, measures of variance, and comparisons. Roberts also stressed that data presentation is often overlooked even though it can be critical to identifying and understanding patterns, and he urged attendees to choose the visualization best suited to each dataset. He presented chart types that attendees might not have used before and explained what kind of data each communicates best. Roberts’s comprehensive and challenging presentation made many of us rethink how we create and present our editorial office reports.
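
To make that point concrete, here is a minimal Python sketch, with invented numbers rather than anything presented in the session, of what adding context and variance to a single turnaround-time metric might look like. A mean alone hides both the spread and the year-over-year comparison:

```python
# A hypothetical illustration of reporting a metric with context:
# central tendency, spread, sample size, and a year-over-year
# comparison rather than a single number. All figures are invented.
import statistics

# Hypothetical days-to-first-decision, by year
turnaround = {
    2015: [28, 35, 41, 22, 55, 30, 33, 47, 26, 38],
    2016: [25, 31, 29, 40, 27, 33, 52, 24, 30, 36],
}

for year, days in sorted(turnaround.items()):
    print(
        f"{year}: mean {statistics.mean(days):.1f} days, "
        f"median {statistics.median(days)} days, "
        f"SD {statistics.stdev(days):.1f}, n = {len(days)}"
    )
```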

Cochran then spoke about using data to influence editor behavior, offering a case study of how the ASCE used editorial office metrics to show editors and staff where logjams were actually occurring in the peer-review process, as opposed to where editors anecdotally believed they were occurring. She explained how these reports have evolved in response to editor feedback, and how editor behavior has changed since the reports were introduced.

Cochran also spoke about new product development, specifically how tools like HighWire’s Impact Vizor can be used to determine where rejected papers end up (and whether they go on to garner citations and Mendeley saves) and whether new journals, spin-off titles, or other new products should be considered as a result.

Roberts and Carissa Gilman then teamed up to talk about using data in performance management for both staff and editors. Data are critical when deciding whether to reduce or increase staff, or to outsource work to editorial support companies; they are equally critical for anticipating budget implications and getting staffing changes approved by leadership. Roberts spoke about assessing editor performance in areas such as timeliness, decision ratios, and appropriate use of reviewer pools to prevent reviewer fatigue.
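
As one illustration of the reviewer-pool point (a hypothetical check, not the method Roberts presented), a short Python sketch that counts invitations per reviewer and flags anyone above a locally chosen threshold; the log format and threshold are assumptions:

```python
# Hypothetical reviewer-fatigue check: count review invitations per
# reviewer over a reporting period and flag heavily used reviewers.
from collections import Counter

# Hypothetical (reviewer, manuscript) invitation records for one quarter
invitations = [
    ("r.smith", "MS-101"), ("j.doe", "MS-101"), ("r.smith", "MS-102"),
    ("a.khan", "MS-103"), ("r.smith", "MS-104"), ("j.doe", "MS-105"),
    ("r.smith", "MS-106"),
]

FATIGUE_THRESHOLD = 3  # invitations per quarter; a local policy choice

load = Counter(reviewer for reviewer, _ in invitations)
for reviewer, count in load.most_common():
    note = " <- consider resting this reviewer" if count > FATIGUE_THRESHOLD else ""
    print(f"{reviewer}: {count} invitation(s){note}")
```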

Gilman gave a lengthy and detailed presentation on traditional citation metrics such as the Journal Impact Factor, Eigenfactor, Article Influence Score, SCImago Journal Rank, and Source-Normalized Impact per Paper, along with newer entrants to the fray such as CiteScore and the various field-normalized citation metrics. She explained the differences among them and the limitations of each, as well as how some unscrupulous editors have exploited those limitations to game the system. She also talked about altmetrics and how they complement traditional citation metrics, concluding that although social impact is different from scholarly impact, it is still meaningful and worth measuring.
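
For reference, the best known of these, the two-year Journal Impact Factor, reduces to a simple ratio: citations received in a given year to a journal’s content from the previous two years, divided by the number of citable items published in those two years. A small sketch with invented numbers:

```python
# The two-year Journal Impact Factor as commonly defined: citations in
# year Y to items published in years Y-1 and Y-2, divided by the count
# of citable items published in Y-1 and Y-2. Figures are hypothetical.
def two_year_impact_factor(citations: int, citable_items: int) -> float:
    return citations / citable_items

# Hypothetical journal: 150 citations in 2016 to its 2014-2015 papers,
# of which 100 counted as citable items.
print(f"2016 Impact Factor: {two_year_impact_factor(150, 100):.3f}")  # 1.500
```

Well-known gaming tactics target exactly these two terms: inflating the numerator through coerced or excessive journal self-citation, or shrinking the denominator by shifting content into non-citable categories.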

Gilman also spoke about online usage statistics, outlining the types of data that commercial publishers typically pull, along with the challenges posed by changing Google algorithms, evolving robots and crawlers, and the inherent difficulty of interpreting user behavior. She explained why and how to conduct a traditional competitive analysis, while Cochran detailed how to use emerging tools such as Impact Vizor to perform a detailed citation analysis of one’s own and competing journals.
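
To make one of those usage-statistics challenges concrete, here is a toy Python sketch of filtering obvious robot traffic from raw hits by user-agent string. Real COUNTER-style processing is far more involved, and the log format and marker list here are invented:

```python
# Hypothetical bot filtering: drop hits whose user-agent string
# contains a known robot/crawler marker. A toy example only.
KNOWN_BOT_MARKERS = ("bot", "crawler", "spider", "slurp")

raw_hits = [
    {"path": "/article/123", "user_agent": "Mozilla/5.0 (Windows NT 10.0)"},
    {"path": "/article/123", "user_agent": "Googlebot/2.1"},
    {"path": "/article/456", "user_agent": "AhrefsBot/7.0"},
    {"path": "/article/456", "user_agent": "Mozilla/5.0 (Macintosh)"},
]

human_hits = [
    hit for hit in raw_hits
    if not any(marker in hit["user_agent"].lower() for marker in KNOWN_BOT_MARKERS)
]
print(f"{len(human_hits)} of {len(raw_hits)} hits kept after bot filtering")
```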

Cochran then treated attendees to an adaptation of her revealing and entertaining Scholarly Kitchen piece on how many grains of salt we must take when looking at metrics. She exposed the hidden truths behind many of the databases and platforms on which we rely. These include everything from data delays and lack of transparency to failures of disambiguation, fallibility of text-mining algorithms, and headaches caused by multiple versions of papers existing across multiple platforms.

Bringing that theme closer to home, Roberts finished the day with a peek into the dark arts of metrics and a genuinely motivating call for better data in editorial offices. He exhorted attendees to select the statistical technique or measurement appropriate to each question and to apply it correctly; to describe report methodology clearly and follow it consistently going forward; to be aware of confounders that mean data may not be as homogeneous as first assumed; and to stop attaching too much meaning to too few data points when asserting a trend. Adhering to these principles will allow us to formulate better, more informed policies and protocols based on actual data rather than anecdotal observations.
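
A small Python sketch (with invented numbers) of that last warning: the uncertainty around a monthly average is wide when only a few months are in hand, so a “trend” seen in three data points may be noise. The 1.96 multiplier is the usual normal approximation for a 95% interval, itself generous for so small a sample:

```python
# Hypothetical illustration: the confidence interval around a mean
# narrows as the sample grows, so three months of submission counts
# rarely justify a claim of a trend. All numbers are invented.
import statistics

def mean_with_95ci(values):
    mean = statistics.mean(values)
    half_width = 1.96 * statistics.stdev(values) / len(values) ** 0.5
    return mean, half_width

three_months = [32, 41, 28]  # "submissions are falling!"
twelve_months = [32, 41, 28, 39, 35, 30, 44, 37, 33, 40, 29, 36]

for label, data in (("3 months", three_months), ("12 months", twelve_months)):
    mean, half_width = mean_with_95ci(data)
    print(f"{label}: mean {mean:.1f} ± {half_width:.1f} submissions/month")
```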

After this day of presentations, attendees likely left the course feeling empowered to seek out their data, choose the best way to visualize it, and use it to improve their publications.