Federal agency requirements establish the baseline
Federal funding agencies have established foundational requirements that researchers
must follow regardless of journal policies. NIH has issued the most explicit restrictions, prohibiting peer reviewers from using AI tools to analyze grant applications. Its June 2023 policy (NOT-OD-23-149) specifically bars natural language processors and large language models from the peer review process, on the grounds that uploading grant content to online AI tools violates confidentiality.
NSF takes a more encouraging approach, suggesting researchers indicate AI use in project
descriptions while prohibiting reviewers from using non-approved AI tools. The December
2023 guidance is slated to be formalized in the 2025 Proposal and Award Policies and Procedures Guide. DOD notably lacks specific research disclosure requirements, focusing instead on operational AI development guidelines through its November 2023 AI Adoption Strategy.
High-impact journals enforce strict boundaries
The most prestigious scientific journals maintain restrictive policies that effectively
set industry standards. Science (AAAS) takes the strictest stance, treating undisclosed AI-generated text as scientific misconduct. Its policy requires full disclosure of prompts in the acknowledgments and methods sections and explicitly prohibits AI tools from authorship.
Nature adopts a middle ground, prohibiting AI authorship and AI-generated images while
allowing some AI assistance for copy editing without disclosure requirements. Its January 2023 policy specifically addresses peer review confidentiality, requiring
reviewers to declare any AI use transparently.
JAMA demonstrates the most comprehensive disclosure framework with automated submission
screening and detailed reporting requirements. Its guidance, updated in March 2024, includes specific institutional review board considerations for AI use in research design, representing the most thorough integration of AI oversight into the publication process.
Disciplinary differences shape specific requirements
IEEE maintains consistent policies across its extensive portfolio of engineering and computer science journals. Its April 2024 guidelines require disclosure in the acknowledgments section while explicitly prohibiting AI authorship and reviewer use of AI tools.
IEEE's approach emphasizes transparency while recognizing legitimate AI applications
in technical fields.
Physical sciences publishers show variation: ACS requires detailed disclosure in the acknowledgments, with December 2024 updates providing specific guidance for AI-generated graphics,
while APS limits AI to light editing only and completely prohibits AI-generated or
modified images in Physical Review journals.
Life sciences publishers PLOS and Cell Press represent different philosophies. PLOS requires comprehensive disclosure in the Methods section, including detailed descriptions of how AI output was evaluated, while Cell Press restricts AI use to readability improvements, disclosed through standardized templates.
Major publishers establish cross-portfolio standards
Elsevier and Springer Nature, controlling thousands of journals across disciplines,
have implemented comprehensive policies affecting researchers globally. Springer Nature
distinguishes between AI-assisted copy editing (no disclosure required) and generative
AI work (disclosure required), providing more nuanced guidance than most publishers.
Elsevier maintains stricter requirements, mandating disclosure for most AI use while
prohibiting AI-generated images except in specific research contexts. Both publishers
explicitly prohibit AI authorship and restrict reviewer AI use due to confidentiality
concerns.
Social sciences and humanities remain cautious
Humanities and social science journals demonstrate more conservative approaches, reflecting
concerns about AI's ability to handle interpretive, contextual, and creative work.
The American Journal of Political Science requires disclosure while discouraging AI
use for substantial elements like literature reviews.
Cambridge University Press, the first major academic publisher to announce AI ethics
policies in March 2023, prohibits AI authorship while requiring clear declaration
of AI use. The Modern Language Association specifically addresses citation of AI tools
while maintaining authorship restrictions.
Key patterns and enforcement mechanisms
Several critical patterns emerge across all policies:
- Universal AI authorship prohibition: No major journal or agency allows AI to be listed as an author, citing accountability requirements that only humans can fulfill.
- Disclosure location consistency: Most require disclosure in acknowledgments sections, with research-specific AI use detailed in methods sections.
- Peer review restrictions: Nearly universal prohibition on uploading manuscripts to AI tools due to confidentiality concerns, with several agencies implementing specific enforcement mechanisms.
- Author responsibility emphasis: All policies stress that human authors remain fully
accountable for AI-generated content accuracy and integrity.
The research landscape has rapidly adapted to AI integration with remarkably consistent
core principles despite implementation variations. Researchers must navigate a complex matrix of federal requirements, journal policies,
and disciplinary standards that universally prohibit AI authorship while requiring varying levels of disclosure. The most restrictive policies from prestigious venues like Science and NIH are likely
to influence broader adoption of conservative approaches, while more permissive frameworks
may become standard for routine AI assistance in writing and analysis.
Guidance on these issues is still evolving, with policies likely to continue adapting
as AI capabilities advance and research communities gain experience with appropriate
integration boundaries. This document was last updated by Dylan Goldblatt on July
30, 2025.