MODERATOR:
Jonathan Schultz
American Heart Association
Editor-in-Chief, Science Editor
SPEAKERS:
Michele Avissar-Whiting
Howard Hughes Medical Institute
Chhavi Chauhan
American Society for Investigative Pathology
Dan Kulp
American Urological Association
Executive Editor & Director of Publications
David Mellor
Center for Open Science
REPORTER:
Peter J Olson
JAMA Network
“The future doesn’t just happen to us—we have to shape it.”
Jonathan Schultz opened the final session of the 2025 CSE Annual Meeting with this eloquent exhortation, and although he was directly addressing the meeting’s 250 attendees, the intent behind his injunction clearly extended to the scholarly publishing industry at large. Schultz set the tone for the session, entitled “The Future of Scientific Editing and Publishing: Science Editor Symposium,” by reflecting on the industry landscape at the start of the 21st century: Open Access (OA) had not yet been defined; peer review was still being managed via postal mail or fax; and online journals, article databases, and manuscript tracking systems were all in their infancy. Fast-forwarding to 2025, he turned to a panel of scholarly publishing experts to elicit their thoughts on where the industry stands today, what changes they envision over the next quarter-century, and what might be done to influence those changes for the greater good of the scientific enterprise.
Michele Avissar-Whiting, Director of Open Science Strategy at Howard Hughes Medical Institute (HHMI), began the conversation by offering the perspective of a scientific research funding organization. Avissar-Whiting, who oversees OA policy and preprint-related programs at HHMI, noted that while most major research funders have OA policies that emulate either the Holdren memo1 or the Nelson memo,2 they are beginning to supplement or even replace those policies with preprint mandates. And although she contended that the journal article remains the dominant currency in academia for hiring, tenure, and promotion, she also noted that funders are becoming increasingly interested in nonarticle outputs, primarily datasets, protocols, and preregistration information. Avissar-Whiting foresees funders shifting their attention toward these outputs in an effort to evaluate scientific research more holistically, ultimately turning the focus away from the journal article as the version of record. Furthermore, she said, the growing popularity of preprints is making it easier for funders and funding agencies to adopt policies that will help foster this transition.
Avissar-Whiting then passed the microphone to Chhavi Chauhan, founder and president of Samast AI and Director of Scientific Outreach at the American Society for Investigative Pathology. Chauhan explored a topic that would have been difficult to anticipate 25 years ago: the intersection of artificial intelligence (AI) with diversity, equity, inclusion, and accessibility (DEIA) in the scholarly publishing industry. She noted that the onslaught of new and evolving AI tools, combined with the US administration’s recent Executive Orders, has left the industry scrambling to develop policies that will maintain trust in scholarly content while preserving the rigor of the scientific record, particularly given that many institutions have abruptly ended their support of DEIA-related initiatives. As one example of this predicament, she said that algorithmic biases are likely to be introduced across multiple scientific disciplines owing to an inability to state whether male or female models were used. On top of that, she said, the widespread deletion of publicly available data and collectively built datasets has forced a massive alteration in how scientific research can be reported, leaving the future uncertain.
The integrity of scientific publications is also at stake. Dan Kulp, Executive Editor and Director of Publications at the American Urological Association and a former chair of the Committee on Publication Ethics, thinks that the more pervasive and persistent integrity ills (such as fabrication, falsification, and plagiarism) can be traced to a systemic reality that Avissar-Whiting had acknowledged just a few moments earlier: that the journal article has become the accepted form of currency—or what he referred to as a “token”—for advancement within the scientific enterprise. Kulp opined that the scientific research process has become monetized and manipulated such that quantity of tokens is valued over quality of content, and that this “publish or perish” mentality has taken an even stronger hold given recent geopolitical pressures. That said, he has observed efforts in the industry to look beyond the journal article and evaluate scientists on a deeper, more qualitative level, which he hopes will eventually remove token-driven incentives.
To expand on the concept of the journal article as a token, David Mellor, Senior Policy Analyst at the Center for Open Science, introduced another metaphor, calling the journal article “the tip of the iceberg of years of work that becomes disseminated.” Much of the underlying data that accumulate as a natural part of the scientific process, he noted, are ultimately lost and forgotten, and a published article too often represents a biased, sanitized subset of those data. The result is an overproliferation of findings that belie the full body of evidence, a symptom of a system that favors significant, exciting results over “boring” findings such as null results or replications. Mellor’s hope is that the scientific research community and the scholarly publishing industry will recognize the value of replications, regardless of their outcomes, and move toward a more collaborative process that will bring more rigor to the earlier stages of research. Only then, he said, can we be more confident that the transition from basic to applied research is efficient, transparent, and trustworthy.
After hearing from each panelist, Schultz delved further into the scrutiny surrounding the journal article’s role as the primary vessel for communicating research, asking the panel at large about the alternatives. Kulp stressed that he does not necessarily advocate eliminating the journal article, but that he believes it should be expanded to include as much data as possible so that those data can be easily referenced and replicated. Avissar-Whiting agreed, saying she envisions a more modular and iterative means of communication in which ancillary components are well integrated, or “hard-coded” into the article via links or transclusions, yielding an XML version with unprecedented depth. Mellor said we would be “fighting human nature” if we abandoned a narrative form of communication but echoed the sentiment that wider access to a given article’s history, namely whatever critiques may have been raised during peer review, would result in a more trustworthy narrative. Chauhan focused on the concept of collaboration, noting that shared datasets are becoming more common; in the coming years, she said, it will be important for the scholarly publishing industry to establish a unit of record for each article that includes all of the data connected with it, so as to begin building the most robust and rigorous body of knowledge possible.
The conversation about shared datasets led to a discussion about incentive structure. If you were to produce a widely used dataset, Schultz asked, should you be rewarded for that accomplishment in the same ways that you would be rewarded for publishing a journal article? Chauhan said she is already seeing evidence of this in the field of pathology, citing two AI tools, PathChat3 and PathPresenter,4 that are used for educational purposes as well as to augment pathologists’ understanding of different pathologies and improve workflow management. Kulp sees the related benefit of highlighting different researchers’ strengths; drawing from his own background in materials science, he suggested that a crystal maker could (and should) be acknowledged just as much as the investigators who go on to publish the new insights they derived from that crystal. Mellor stressed the importance of avoiding incentives that make or break a researcher’s career; the more we can diversify the portfolio of a scientific body of work, he said, the less susceptible it will be to manipulation of a single finding or a single article.
Circling back to peer review, Schultz asked the panel whether the process should be expanded to cover datasets and other methodological aspects of an article. Mellor said that although there is no single answer, he has found personal satisfaction as a reviewer for Registered Reports,5 a relatively new publishing format in which a study’s methodology is peer reviewed before the results are known: “It’s nicer to review and suggest improvements for upcoming work than it is to poke holes in something somebody’s been doing for the past 3 years.” Chauhan promoted the use of AI, noting that peer review assistant tools are already being built to help reviewers determine the validity, accuracy, and integrity of a dataset. Avissar-Whiting questioned whether there would ever be a better alternative to the status quo, though she did say that if such a change were to happen at scale and gradually become normalized, she could envision a more expansive peer review process taking hold. Kulp reiterated his advocacy for including as much data as possible, noting that if heretofore unreviewed datasets were peer reviewed, it would be one less thing for an editorial office to worry about when deciding whether to accept a submitted article.
Schultz then raised the question of quantity vs quality: Will we ever get to the point where the latter supersedes the former? “I think we should separate that question,” Kulp responded, going on to say that the focus should be on the quality of the output regardless of its amount. Avissar-Whiting agreed, saying that we should not “curb the narratives” around the data being generated; we should instead seek to publish more data while ensuring that they are of the highest quality. That said, she foresees a “rough transitional period” for this mindset; large language models will only become more sophisticated and ubiquitous, making it more challenging to define what it means to be an accomplished scientist.
In closing, Schultz posed a pointed question: Are we prepared for the ethical challenges of the future? Chauhan answered with an optimistic and succinct synthesis of the afternoon’s discussion: Every challenge is an opportunity. Asserting that the unprecedented obstacles faced by the scientific publishing industry have unified us in our approach to the scholarly record, she expressed hope that this unity will inspire us to seek the perspectives of nonindustry players who are beginning to see the value of science and are embracing their role as stakeholders in the scientific enterprise. Adding these perspectives, she said, will help the scholarly publishing community envision and expand avenues of advancement that are more ethical, responsible, and sustainable than any pathways we might have created on our own.
References and Links
1. Holdren JP. Increasing access to the results of federally funded scientific research. [accessed August 18, 2025]. https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/ostp_public_access_memo_2013.pdf
2. Nelson A. Ensuring free, immediate, and equitable access to federally funded research. [accessed August 18, 2025]. https://bidenwhitehouse.archives.gov/wp-content/uploads/2022/08/08-2022-OSTP-Public-Access-Memo.pdf
3. PathChat. https://www.modella.ai/pathchat
4. PathPresenter. https://www.pathpresenter.com/
5. Registered Reports. https://www.cos.io/initiatives/registered-reports