Annual Meeting Reports

RII (Research Integrity Investigation): To Promote and Protect Integrity of the Scientific Record

MODERATORS: 
Alexandra Kahler
KGL Editorial

Andrea Rahkola
American Academy of Neurology

SPEAKERS:
Christina Bennett
American Chemical Society

Alicea Hibbard
American Society for Microbiology

Amanda Sulicz
Institute of Electrical & Electronics Engineers (IEEE)

REPORTER:
Adria Gottesman-Davis
American Academy of Neurology


In a session that brought together ethics professionals from across the scholarly publishing industry, panelists tackled one of the most pressing issues in modern publishing: maintaining the integrity of the scientific record amid increasing threats, resource constraints, and the evolving digital landscape. The session featured speakers Christina Bennett (American Chemical Society), Alicea Hibbard (American Society for Microbiology), and Amanda Sulicz (IEEE), with a focus on real-world case studies, institutional collaboration, and the changing norms of misconduct detection.

Integrity Complaints on Public Platforms

Amanda Sulicz opened the session with a series of anonymized case studies pulled from IEEE’s extensive experience with integrity complaints arising on platforms like PubPeer. These platforms, though often helpful in surfacing valid concerns, increasingly host unsubstantiated or malicious complaints that burden editorial teams. 

First, Sulicz shared a case in which a tipster alerted IEEE to a series of PubPeer allegations of multiple publication. Upon investigation, IEEE found not only that the allegations were baseless, but also that all of the complaints targeted the same two authors. The tipster, unhappy with IEEE’s conclusion that no action was warranted, subsequently submitted several additional allegations against the same authors. Of 23 total complaints from 2024 to 2025, only one was found to have possibly violated the multiple publication policy, and as a first offense, it carried only a warning. This case raised the question of whether there is a point at which editors should cease to entertain complaints from tipsters with an established pattern of baseless accusations. 

In the second case, IEEE received more than 60 accusations of plagiarism against a particular author over a 12-year span, followed by additional allegations against other authors who had published with the original author. Through metadata in the PDF complaints, IEEE discovered that the complaints came from an individual who had been denied tenure at their university and was targeting those they held responsible for the loss of that position. This case prompted IEEE to develop policies to help protect authors’ reputations from malicious allegations, in the hope that online forums will adopt similar protections in the future.  

Despite these examples of the drawbacks of public platforms and anonymous reporting, these systems can also hold significant value for those seeking to maintain research integrity, as individual sleuths may be knowledgeable and reputable within their field. This is exemplified in Sulicz’s final case study of how an anonymous sleuth’s tip about authorship for sale helped IEEE identify and expose a paper mill in 2023.  

In summary thoughts, Sulicz emphasized the tension between open reporting and harassment, and highlighted the reputational risks to both authors and publishers when frivolous allegations circulate unchecked. 

Understanding Paper Mills 

Alicea Hibbard presented an overview of how to detect paper mills, distinguishing between indicators and characteristics that warn of unethical practices. Indicators, which are binary and easily demonstrated, include traits such as image manipulation, unusual review turnarounds, repeated use of the same personal email address, and suspicious collaboration patterns, such as editors reviewing each other’s papers. Characteristics are more qualitative and tend to require additional investigation; for example, the use of “plug-and-play” sections or figures is a popular tactic for imitating valid research in study types such as those involving Mendelian randomization.  

In addition to warning signs in the content of a paper, Hibbard discussed how suspicious collaboration patterns can help uncover paper mill activity, as in a noteworthy case involving salaried editors writing and reviewing each other’s work in a large-scale breach of peer review integrity. Another warning characteristic is how authors respond when asked to provide data they initially claimed would be available upon request: frequently used excuses include “the laptop was stolen” and “the graduate student took the data when they graduated.”

Hibbard also highlighted data types that are especially popular for paper mill tactics, such as the reuse of flow cytometry plots and copied Western blot bands, and outlined tools and strategies for detecting misconduct, such as CrossCheck and iThenticate for text overlap, Imagetwin for image duplication, and Seek & Blastn for gene sequence validation.  

When it comes to detecting generative AI—a recurring topic at the CSE 2025 Annual Meeting—she shared common warning signs such as the use of tortured phrases, as exemplified in a mini review that caught attention due to the repeated mentions of “mixed drinks” throughout the text (Figure). (The authors meant phage cocktails.)

<b>Figure.</b> Hibbard used AI to generate an image of a “phage cocktail” not long after the original case was handled, repeating the prompts several times until the image generator arrived at the first image. Shortly before this CSE meeting, she again prompted AI with “phage cocktail,” and this time the program immediately returned the second image. It is a striking example of how generative AI is prone to startling errors born of its limitations, yet is evolving so rapidly that even its flaws may advance faster than humans can keep up.

Hibbard stressed the importance of objective, standardized retraction notices and the potential of Expressions of Concern as a provisional measure when full investigations are still pending. She recommended journals take advantage of newly published Committee on Publication Ethics (COPE) guidelines and consider user-facing alerts like those employed by Taylor & Francis.

Collaboration Between Journals and Research Institutions

Christina Bennett reported on a cross-industry working group’s efforts to improve communication between publishers and research institutions during misconduct investigations. Historically, if a publisher alerted an institution to a potential integrity issue, the institution often kept details of any ongoing investigation confidential, leaving the journal waiting for a resolution—and the scientific record uncorrected—for extended periods. By bringing together research integrity officers (RIOs) from universities, institutional counsel, and journal publishers and editors, the working group was able to produce a call-to-action for both institutions and journals.

The group called on institutions to expand the “need to know” criteria during investigations to include journals, and additionally, decouple questions about the data from questions of who may be responsible for the problem so that journals can correct flawed science without having to wait until a responsible party is identified. Journals, meanwhile, were asked to establish policies that would include institutional contacts for fee-for-publish concerns, raise author awareness about such policies, and prioritize correcting the scientific record when the data review portion of the investigation is complete.

The advocacy for a change in U.S. policy was successful, and as of January 2025, institutions are permitted to treat journals as need-to-know partners during investigations.1 This enables journals to correct the literature more quickly, without waiting for the full inquiry to conclusively identify the parties responsible.

Bennett encouraged editors to contact RIOs even before initiating formal inquiries, including through hypothetical conversations. She also reiterated the importance of separating the correction of data from judgments about culpability, which can streamline editorial decisions and preserve neutrality.

Q&A Highlights

The session concluded with a lively and practical Q&A. Panelists addressed the tension between an institution’s desire to protect high-profile researchers and a journal’s duty to correct the scientific record, as well as the tension between a journal’s desire to advocate for its author base and an institution’s duty to safeguard its own reputation and integrity. One key takeaway: prioritize the data. Whether or not an individual is found guilty, and indeed whether there was any wrongdoing at all, flawed or fraudulent data must be corrected. The panelists acknowledged that submission systems are often not built for fraud prevention and called for more proactive tooling to flag suspicious patterns in reviewer behavior and author metadata.

In summary, the session provided a comprehensive, nuanced view of current challenges and emerging strategies in maintaining research integrity. Across all talks, a common theme emerged: scientific integrity is best protected through collaboration, transparency, and unwavering attention to data accuracy.

References and Links

  1. Public Health Service Policies on Research Misconduct, 42 CFR Part 93. https://www.ecfr.gov/current/title-42/chapter-I/subchapter-H/part-93