Bobier Christopher, Rodger Daniel, Hurst Daniel
Central Michigan University College of Medicine, Mount Pleasant, USA.
London South Bank University, London, UK.
BMC Med Ethics. 2025 Jul 3;26(1):79. doi: 10.1186/s12910-025-01239-9.
Rapid advancements in artificial intelligence (AI) pose novel ethical and practical challenges for scholarly publishing. Although AI-related policies are emerging in many disciplines, little is known about the extent and clarity of AI guidance in bioethics and health humanities journals.
A search of publicly available journal lists from the American Society for Bioethics and Humanities, the Health Humanities Consortium, and the Association for Medical Humanities was supplemented with Google Scholar's top 20 bioethics journals ranked by h5-index. This yielded 54 unique journals, of which 50 remained after excluding those without a functional website or recent publications. AI policies at the journal and publisher levels were assessed via website review, and editors were contacted for clarification when required. Data extraction was conducted by one author and independently verified by two additional researchers to ensure accuracy.
Of the 50 journals analyzed, only 8 (16%) had a clear AI policy, while 27 (54%) were published by a publisher with an identifiable AI policy. Publisher AI policy statements were favorable toward considering AI-assisted manuscripts. Five (10%) of the 8 journals with a clear AI policy explicitly prohibited AI-generated text in submissions. The remaining 15 (30%) journals had no publicly available AI policy; of these, ten confirmed the absence of any formal AI policy, and seven indicated that discussions to develop guidelines were ongoing.
The adoption of AI policies in bioethics and health humanities journals is currently inconsistent. Some journals explicitly ban AI-generated text, whereas others permit AI-assisted writing, with publisher policies being favorable to considering AI-assisted manuscripts. The lack of standardized AI guidelines underscores the need for further discussion to ensure the ethical and responsible integration of AI in academic publishing.