This piece is based on the talk “Publishing Ethics in the Age of Artificial Intelligence” delivered by the author at the 15th GW Ethics in Publishing Conference, October 10.
If we take a first-principles approach to ethics in publishing in the age of artificial intelligence (AI), we must begin by examining what ethics truly implies, what AI actually is, what it can and cannot do, and then apply that understanding to the realities of scholarly publishing. There is an old principle that resonates deeply here: “Evil is that which is not necessary.” Ethics, then, is not simply about compliance or the observance of formal rules. It is about doing what is necessary, no more and no less. When we act beyond what is necessary, we risk entering unethical territory.
The essence of ethics lies in the ability to think everything through for oneself, to develop one’s own insight and understanding, and to act in harmony with that insight. It requires a sensitive awareness of one’s environment and a balance between what one genuinely needs as an individual and what serves the larger collective good. Ethics, in this sense, is not abstract morality but a living intelligence applied to context.
Reframing AI: From Intelligence to Thought
When we speak of AI, it may be useful to think of it not as artificial intelligence but as artificial thought. The distinction is significant. Thought involves imagination, analysis, and communication. Intelligence, by contrast, is discernment—the ability to observe without prejudice and with full attention so that the true nature of a thing reveals itself. Discernment is knowing how much to say and where to stop. This capacity for restraint, born of awareness, is central to ethics, and it raises a question: How do we cultivate such discernment in a powerful and unrestrained tool like AI?
The relationship between thought and intelligence mirrors that between thinking and awareness. Thinking without awareness tends to become repetitive, mechanical, and incoherent. Thinking guided by awareness and attention, however, runs straight. It gains coherence, and coherence is inseparable from ethics. When our use of AI becomes coherent, aligned with necessity and guided by attention, it also becomes ethical.
Three Guiding Principles for Ethical AI Use
1. Do What Is Necessary, No More and No Less
The temptation to apply AI wherever it can be applied is strong, but it must be resisted. We should not begin with the tool and look for uses; we should begin with a clearly defined problem and then ask whether AI is the right resource to address it. AI application is justified only when it solves a genuine problem, one that would otherwise be too difficult, tedious, or time-consuming for human effort alone. Using AI merely for optimization or novelty can easily become counterproductive.
2. Amplify, Not Replace, Human Judgment
AI should support discernment, not substitute for it. It can and should reduce manual effort and lower-level cognitive work, but it must not erode the higher cognitive processes that define scholarship—reading, reflection, questioning, and meaning-making. The best use of AI is to take over mechanical and routine elements of the publishing workflow so that humans can focus on tasks requiring insight, context, and judgment. In this way, AI becomes a complement to, rather than a competitor with, human intelligence.
3. Build Guardrails for Discernment
Ethical behavior should not rely solely on individual discretion; it must be supported by systems, processes, and environments that guide right use. These guardrails can take the form of secure, publisher-endorsed AI tools, clear and transparent usage policies, audit trails, and shared industry standards. When discernment is built into the structure of our tools and systems, ethical use becomes not just an expectation, but the natural default.
When evaluating any application of AI, we might therefore ask three questions: Is it necessary? Does it enhance human judgment and experience? And are there guardrails to ensure responsible use? Unfortunately, the current trend often reverses this logic. Organizations decide they need AI and then search for ways to apply it. The result is confusion, inefficiency, and an erosion of clarity about purpose.
The Real Risk: Cognitive Laziness
The greatest risk associated with AI is not technological but cognitive. The danger lies in allowing AI to dictate what and how we think, thereby bypassing the process of individual perception, the act of knowing something for oneself. This risk is not hypothetical; it is already visible in the fragmented attention and diminished depth of thought encouraged by continuous digital interaction. What should concern us most is not the inaccuracy of AI output but the dulling of human insight that can result from overreliance on it.
AI learns from what already exists. It replicates patterns from accumulated data. But insight arises from lived experience interpreted in the present moment. The vitality of knowledge depends on this human translation of experience into understanding. We must hope and work to preserve that process, using AI only to manage mechanical aspects of work while reserving for ourselves the creative and interpretive dimensions.
The Middle Path and the Real Opportunity
There is, of course, a middle path. It involves using AI as one would any other technology: intentionally, thoughtfully, and only where appropriate. The mere existence of a capability does not create an obligation to use it. The challenge is less about AI itself than about our human tendency to equate convenience with progress. Convenience has long guided technological development, but it is not a moral compass. The temptation of “easy” is too great to be managed by self-control alone. It must be managed by awareness and design.
The real opportunity that AI presents lies in its ability to manage scale. If deployed wisely, AI can relieve humans of tedious or repetitive tasks, reduce burnout, and create the cognitive space needed for reflective editorial and peer-review work. Properly used, it can help strengthen research integrity, accelerate workflows, and improve efficiency without diminishing human judgment. Like medical imaging that supports diagnosis without replacing the physician, AI can assist without displacing.
Building Ethical Infrastructure
The responsible use of AI in publishing requires infrastructure: secure environments, transparent workflows, and shared standards that guide ethical behavior by design. Tools must not encourage users to bypass safeguards or manipulate processes. Publishers have a leading role to play in defining what constitutes necessary and appropriate AI use, offering controlled environments that promote helpful use while preventing misuse. Ethics, in this sense, becomes a matter not only of personal virtue but of structural intelligence.
Efficiency, Commerce, and Control
Some might ask whether AI remains efficient if human oversight is always required. The answer is yes, when efficiency is understood as the optimal use of human attention rather than its elimination. Human attention is a scarce resource. AI should be used to focus it where it matters most. Replacing human attention altogether creates new ethical risks and undermines trust in scholarly communication.
The commercial reality for publishers is that they must manage growing manuscript volumes while maintaining quality. This challenge can only be met through collaboration among publishers, technology providers, and industry bodies to ensure that AI adoption benefits all stakeholders while safeguarding integrity. Many organizations are already making progress toward this goal, but shared vigilance remains essential.
Beyond the technology itself, we must recognize that some ethical problems originate in the wider academic culture: the “publish or perish” mindset and the systemic pressures it creates. What we can control, however, is how we design and steward the processes we manage, ensuring that ethics is built into the infrastructure of publishing practice.
A Call to Discernment
Ultimately, ethics is discernment: the ability to see clearly what is necessary and to stop where necessity ends. AI is a capable technology that mimics thought, but it cannot replicate awareness, judgment, or conscience. The challenge before us is to discern where AI can truly help and where it must not intrude. That discernment itself is a form of intelligence.
AI expands what is possible but also tempts us toward excess. It offers efficiency but risks dulling attention. It amplifies our capabilities but cannot substitute for awareness. The real value of AI lies in reducing or eliminating meaningless cognitive and manual work, helping us manage scale, and operating at a speed that serves human needs. Used wisely, it can be profoundly beneficial, not only for individuals but for the collective progress of knowledge.

If we ensure that AI serves human judgment rather than substitutes for it, if attention, not automation, remains the ultimate safeguard, then we will not only protect ethics in publishing but elevate it. Ethical publishing is not a checklist; it is a culture of awareness, a daily practice of attention, honesty, and restraint across the ecosystem. Our shared task is to build ethical infrastructure: policies, systems, and environments that make the right actions easier and the wrong ones harder. That is what ethical infrastructure means, and that is what scholarly publishing needs in the age of AI.
Ashutosh Ghildiyal (https://orcid.org/0000-0002-6813-6209) is VP, Strategy and Growth, Integra Software Services.
Opinions expressed are those of the authors and do not necessarily reflect the opinions or policies of their employers, the Council of Science Editors, or the Editorial Board of Science Editor.