Gatekeepers of Science: Evolving Publishing Policies in the Age of AI
Author(s)
Juliette C. Thompson, BSc1, Eric J. Manalastas, MSc, PhD2, Aditi Hombali, MSc1, David A. Scott, MSc1.
1Visible Analytics Ltd, Oxford, United Kingdom, 2Visible Analytics Ltd, Sheffield, United Kingdom.
OBJECTIVES: As the use of artificial intelligence (AI) permeates every aspect of life, including scientific research, it is important that traditional routes of data dissemination have policies addressing its use. Publishing houses are gatekeepers to most scientific research presented to the public, and as such their policies should reflect the changing world. The increasing number of retractions due to AI misuse indicates that a problem already exists.
METHODS: The authorship guidelines and/or policies of 12 publishing houses were reviewed to identify whether: (1) there was a policy on AI authorship, (2) external guidance was referenced, (3) there was a distinction between generative and non-generative AI, (4) disclosure of the use of AI was required, (5) human ownership was required, and (6) the use of AI was permitted for image generation.
RESULTS: As of June 2025, 11/12 publishing houses had policies on the use of AI in the development of the research they publish; one of these simply referenced the COPE and ICMJE guidelines. Where policies were in place, none permitted AI to be recognised as an author. Half referenced the COPE or ICMJE guidelines. A distinction between generative and non-generative AI was made in 4/12 policies. Nearly all required that any use of AI at any point in the research be disclosed. The use of AI to support image generation was permitted by 6/12 publishing houses, provided it was fully disclosed.
CONCLUSIONS: Publishing houses have recognised the reality that AI is being used in research and have produced policies to try to ensure transparency in what they publish. As with all research, this relies in large part on the honesty of authors, which initiatives such as www.academ-ai.info/ indicate cannot always be guaranteed. It is expected that publishing houses' policies will need to evolve alongside AI tools.
Conference/Value in Health Info
2025-11, ISPOR Europe 2025, Glasgow, Scotland
Value in Health, Volume 28, Issue S2
Code
OP9
Topic
Methodological & Statistical Research, Organizational Practices
Topic Subcategory
Best Research Practices, Ethical
Disease
No Additional Disease & Conditions/Specialized Treatment Areas