Annual Meeting Reports

Generative AI: The Promise and Peril for Scientific Publishing

SPEAKERS: 
Chirag Jay Patel
Head of Sales (Americas)
Cactus Communications

Emilie Gunn
Director, Journals
American Society of Clinical Oncology

Avi Staiman, MA
Founder & CEO
Academic Language Experts

MODERATOR:
Jonathan Schultz
Director, Journal Operations
American Heart Association

REPORTER:
Tony Alves
SVP, Product Management
HighWire Press

For the session “Generative AI: The Promise and Peril for Scientific Publishing,” moderator Jonathan Schultz introduced the panelists and the topic. Schultz noted that artificial intelligence (AI) and tools like ChatGPT have begun to change the creation and dissemination of scholarly research. Publishers have a responsibility to guard against the abuses and misuses of AI, such as plagiarism and paper mills, as well as the biases and hallucinations inherent in the current AI tools. The first speaker, Chirag Jay Patel, is Head of Sales in the Americas for Cactus Communications. The second speaker, Emilie Gunn, is Director of Journals at the American Society of Clinical Oncology. The third speaker, Avi Staiman, is Founder & CEO of Academic Language Experts. 

Jay Patel provided an overview of the AI landscape in 2023, noting that more than 1,400 companies are involved in AI, creating new technology and building on existing technology. He addressed the question “What is generative AI?” by explaining that generative AI uses deep learning models to create new content. It is not limited to written content; it is also being used to create artwork, music, computer code, and more. A positive way to look at the capabilities of AI is to see it as enhancing human creativity.

Investor interest in AI has soared, with investment growing from $27 million in 2020 to $2.6 billion in 2022. Patel showed that there is heavy investment in applying AI to the creation of social media and marketing content, content summarization, photo and video editing, and audio editing. The biggest beneficiary of this investment has been OpenAI, which had a $20 billion valuation in 2022, followed distantly by Hugging Face, Lightricks, and Jasper, with a combined valuation of approximately $5.5 billion.

Patel highlighted the benefits of AI, which can give an organization a competitive edge, especially in client satisfaction, by providing the following: automation of content creation, improved responses to technical queries, the ability to summarize complex ideas into an easy-to-understand narrative, standardization of style and format, increased productivity, and personalization of the customer experience. He also highlighted the limitations of AI: it lacks original and creative thought; its training data can be biased; it raises ethical issues, such as plagiarism; and it may create ownership and copyright concerns.

Wrapping up, Patel discussed how humans and AI can work together, introducing the centaur model, a hybrid of human and AI intelligence. It is unclear who originated the concept, but the model combines the strengths of both humans and machines for better decision making. Humans provide strategic guidance and intuition, whereas AI provides analytical and computational capabilities. Humans provide input, AI makes recommendations based on the data, and in the end humans make the final decisions. Patel advised, “Don’t be afraid of AI, use it by finding tedious things in your daily life that you can automate using AI.”
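
A toy sketch of the centaur division of labor may help make it concrete. Nothing in the code below comes from Patel’s talk; the function names and data fields are hypothetical, chosen only to illustrate the loop in which the machine ranks options analytically and a human makes the final call.

```python
# Hypothetical sketch of a centaur-style workflow: AI recommends, human decides.
# All names and data here are illustrative, not from Patel's presentation.

def ai_recommend(submissions):
    """AI side: rank options analytically from the data, best first."""
    return sorted(submissions, key=lambda s: s["predicted_impact"], reverse=True)

def human_decide(ranked):
    """Human side: apply strategic judgment to the machine's ranking."""
    for candidate in ranked:
        answer = input(f"Accept '{candidate['title']}'? [y/n] ")
        if answer.lower() == "y":
            return candidate  # the human, not the model, makes the final decision
    return None

submissions = [
    {"title": "Study A", "predicted_impact": 0.82},
    {"title": "Study B", "predicted_impact": 0.64},
]
choice = human_decide(ai_recommend(submissions))
print("Final decision:", choice["title"] if choice else "none accepted")
```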

Emilie Gunn continued the discussion by describing how her organization went about creating a policy for AI in its journal publishing program. She started by showing an image of a manual typewriter with the caption “Do you feel like this?”; followed by an image of the Microsoft “Clippy” character with the caption “Does ChatGPT feel like this?”; then a picture of the Terminator robot with the caption “Or like this?”; and finally an image of a robot hand and a human hand touching fingers, echoing Michelangelo’s “Creation of Adam” on the ceiling of the Sistine Chapel, with the caption “Maybe more like this.” Gunn used this progression of imagery to emphasize that AI is just a more modern version of something we are already familiar with. AI does not have to be intrusive or scary; it can be something we learn to work with and use.

Gunn recalled a meeting where she discussed the use of AI and large language models (LLMs) with editors, some of whom expressed serious concern and even advocated banning their use. Gunn pointed out that journal staff need to help editors understand how AI and LLMs are currently being used. It is useful to engage editors in a discussion of why they may oppose these technologies, to ask them about potential uses in their own fields, and to explore whether there are situations or uses that may be acceptable.

When developing policies around the use of AI and LLMs, Gunn advised keeping them broad and general, noting that no policy can address every use of these technologies. Think in terms of categories (e.g., uses, users, article types) and do not put a value judgment on the uses. Be clear about expectations; for example, where, when, and how should authors describe their use of AI? At the same time, there are good reasons to forbid some uses: AI tools have problems with accuracy, they create the potential for plagiarism, and machines cannot meet the requirements of authorship. Gunn also reminded the audience that the voice of the author is sometimes an important element, and this can be lost with AI. Finally, once policies around the use of AI and LLMs are set, think about how they will be announced, what actions will be taken if an author is suspected of violating them, and how AI could be used by reviewers and editors.

The third speaker, Avi Staiman, broadened the discussion by talking about the use of AI in research. He began by comparing ChatGPT to Wordle, proclaiming it “Wordle on steroids”: just as a person solving Wordle fills in blank letters, ChatGPT fills in blank words. ChatGPT is essentially autofill; it looks at a large, complex body of text and guesses the next word. Staiman asked, “Why do we have such an emotional reaction to AI and ChatGPT?” It is because ChatGPT and other LLMs are a quantum leap forward. AI has been around for a long time, but ChatGPT is different because it has the power to displace information workers and affect our knowledge economy. However, it is important to understand the capabilities of these tools: they provide us with words, not facts.
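
Staiman’s “autofill” framing can be made concrete in a few lines of code. The sketch below is not from the session; it assumes the Hugging Face transformers library and the publicly available GPT-2 model as illustrative stand-ins for ChatGPT. It shows the mechanism he described: the model scores every candidate next token, the most likely one is appended, and the process repeats.

```python
# Minimal sketch of the "autofill" idea: a language model repeatedly
# predicts the most likely next token. Uses the public GPT-2 model as an
# illustrative stand-in; ChatGPT itself is a far larger, tuned model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "Scientific publishing is"
for _ in range(5):  # extend the prompt by five tokens
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    next_id = int(logits[0, -1].argmax())  # greedy: take the single most likely token
    text += tokenizer.decode(next_id)

print(text)  # the prompt plus five "autofilled" tokens
```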

Staiman discussed how researchers are currently using AI. One important use is leveling the playing field for scholars whose first language is not English, who can use it for translation, editing, drafting abstracts, and practicing writing. Another use is as a cooperative research advisor: it can provide grant ideas, suggest experimental techniques, assist in data analysis, and point out new areas of research. However, Staiman warned that users need to be careful because not all of the information will be accurate. When he asked ChatGPT to critique his presentation, it offered both helpful suggestions and bad advice; you need to think about your tolerance for mistakes, while remembering that humans make mistakes too. A third use is as a research assistant, providing literature reviews and summaries of the literature. A fourth is as a personal peer reviewer, reviewing manuscripts and grant applications, checking that research is novel, and identifying gaps. Finally, AI is being used as a personal publicist, creating social media posts, blog posts, email newsletters, online profiles, and other forms of media engagement.
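
The first use Staiman listed, language support for scholars writing in a second language, is straightforward to script against an LLM API. Below is a minimal sketch assuming the openai Python package and an API key in the environment; the model name and prompt are illustrative choices, not recommendations from the talk.

```python
# Minimal sketch of one use Staiman listed: asking an LLM to edit prose for a
# scholar writing in a second language. Assumes the `openai` Python package and
# an OPENAI_API_KEY in the environment; model and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = "The experiment were repeated three time to ensuring reproducibility."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You are a copyeditor. Fix grammar only; keep the author's meaning and voice."},
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)
# As Staiman warned, the output still needs a human check for accuracy.
```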

Having discussed how researchers are using AI, Staiman cited a Springer Nature survey revealing that 80% of responding researchers have used ChatGPT. The conversation now needs to focus on two questions: 1) What are the responsible and productive uses of AI tools in research? and 2) How can we encourage responsible AI use among authors?