AI tools have become ever more available to the general public in the last few years. Before ChatGPT, there were standalone tools like Grammarly and the AI-adjacent Editor in Microsoft Word. Now, though, tools from many of the big technology companies have become available (freely or at low cost) to the public. Google’s Gemini, Microsoft’s Copilot, and many others offer the scholar, writer, and researcher an instant editor, cartoon maker, coder, research assistant, and even confidante.
This puts all of us — researchers, writers, and publishers alike — in a new position when it comes to questions of authorship, ethics, and reproducibility.
Are AI tools okay to use in writing a journal article or blog post?
Here’s what Medical Care says in the Instructions to Authors:
ARTIFICIAL INTELLIGENCE
The journal does not consider Artificial Intelligence (AI) authoring tools to meet the requirements for authorship as recommended by the ICMJE. Authors who use AI tools in the writing of a manuscript, production of images or graphical elements of the paper, or in the collection and analysis of data, must be transparent in disclosing in the Materials and Methods (or similar section) of the paper how the AI tool was used and which tool was used. Authors are fully responsible for the content of their manuscript, even those parts produced by an AI tool, and are thus liable for any breach of publication ethics.
A Note on Terminology
In this context, when we talk about “AI tools”, we are specifically talking about “generative AI” (genAI), which includes large language models (LLMs) such as ChatGPT and Claude. We are not talking about machine-learning tools, like random forests, or even natural language processing tools, like algorithms used to organize data from free-text fields.
This distinction is important because, while we still need to cite and document our methods, those tools are not “black boxes.” They are typically explainable and reproducible. If they aren’t, they shouldn’t be used in a published research study.
Publication Ethics
One of the reasons to publish research is to disseminate the methods used so that the work can reasonably be reproduced by others. Unfortunately for researchers,
- Many of the genAI tools are total “black boxes” for the user;
- Algorithmic changes and model updates can make results difficult to replicate; and
- Corporations have the right to stop offering a tool at any time and with no warning.
The lack of static, citable, reproducible content makes the use of some of these AI tools in research problematic. This is over and above the fact that generative AI tools sometimes “hallucinate,” fabricating facts or citing papers that don’t exist.
Best Practices and Cautions
Here are some best practices and cautions for using AI tools in research and publication in 2026:
- If you use an AI tool, you must disclose it to your co-authors, editors, and readers. Never try to pass off AI-generated content as your own.
- Manually verify all information you obtain via an AI tool. Never trust the tool to be accurate.
- If you get an idea from an AI tool, make sure it’s original and has not been previously published. Using AI-generated content could mean you are inadvertently plagiarizing someone else’s work.
- AI-generated content is not copyrightable in the U.S. as copyright law requires human authorship. Unaltered AI-generated work, including text, images, and logos, cannot be copyrighted, patented, or trademarked.
- If you use an AI tool in your research, its results must be reproducible by others. Make sure others on your team get the same results you do; if they don’t, abandon the tool.
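The team reproducibility check in the last bullet can be as simple as comparing saved outputs. Here is a minimal sketch in Python (the function names are illustrative, not from any library): each teammate runs the same prompt with the same pinned model settings, saves the raw output to a file, and the team compares fingerprints of those files rather than eyeballing long passages of text.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Return a short, stable fingerprint of a saved model output.

    Trims surrounding whitespace so that trivial copy-paste differences
    don't count as a mismatch.
    """
    return hashlib.sha256(text.strip().encode("utf-8")).hexdigest()[:12]

def responses_match(responses: list[str]) -> bool:
    """True only if every teammate's saved output is identical."""
    return len({fingerprint(r) for r in responses}) == 1
```

If `responses_match` returns False across teammates (or across repeated runs by the same person), the tool is not behaving reproducibly, and per the guidance above it should be documented as such or dropped from the study.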
In summary, it’s one thing to run a draft through a built-in editor to catch typos or coding errors. That’s a legitimate use.
It’s a whole other thing to take AI-generated work and try to pretend it was made by a human. Beyond the ethical concerns, the professional consequences if caught, including retractions, investigations, and termination, could be severe and long-lasting.

