Suppr 超能文献



The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts.

Author information

Hosseini Mohammad, Resnik David B, Holmes Kristi

Affiliations

Northwestern University Feinberg School of Medicine, USA.

National Institute of Environmental Health Sciences, USA.

Publication information

Res Ethics. 2023 Oct;19(4):449-465. doi: 10.1177/17470161231180449. Epub 2023 Jun 15.

DOI:10.1177/17470161231180449
PMID:39749232
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11694804/
Abstract

In this article, we discuss ethical issues related to using and disclosing artificial intelligence (AI) tools, such as ChatGPT and other systems based on large language models (LLMs), to write or edit scholarly manuscripts. Some journals have banned the use of LLMs because of the ethical problems they raise concerning responsible authorship. We argue that this is not a reasonable response to the moral conundrums created by the use of LLMs because bans are unenforceable and would encourage undisclosed use of LLMs. Furthermore, LLMs can be useful in writing, reviewing and editing text, and promote equity in science. Others have argued that LLMs should be mentioned in the acknowledgments since they do not meet all the authorship criteria. We argue that naming LLMs as authors or mentioning them in the acknowledgments are both inappropriate forms of recognition because LLMs do not have free will and therefore cannot be held morally or legally responsible for what they do. Tools in general, and software in particular, are usually cited in-text and then listed in the references. We provide suggestions to improve APA Style for referencing ChatGPT to specifically indicate the contributor who used LLMs (because interactions are stored on personal user accounts), the version and model used (because the same version could use different language models and generate dissimilar responses, e.g., ChatGPT May 12 Version GPT3.5 or GPT4), and the time of usage (because LLMs evolve fast and generate dissimilar responses over time). We recommend that researchers who use LLMs: (1) disclose their use in the introduction or methods section to transparently describe details such as the prompts used and note which parts of the text are affected, (2) use in-text citations and references (to recognize the applications used and improve findability and indexing), and (3) record and submit their relevant interactions with LLMs as supplementary material or appendices.
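To make the recommended citation fields concrete, here is a hypothetical reference entry (a sketch, not a format prescribed by the authors or by APA Style) showing where the contributor, the version and model, and the date of use could be recorded; the contributor name and URL are illustrative placeholders:

```bibtex
% Hypothetical entry illustrating the three details the article recommends
% recording: the human contributor who ran the prompts (interactions live in
% a personal account), the exact version and underlying model, and the date
% of use (model behavior changes over time).
@misc{chatgpt2023may12,
  author       = {{OpenAI}},
  title        = {ChatGPT (May 12 version, GPT-4)},
  year         = {2023},
  howpublished = {\url{https://chat.openai.com}},
  note         = {Prompts run by J. Doe on 2023-05-20; full transcript
                  provided as supplementary material}
}
```

The `note` field also covers the article's third recommendation: the recorded interactions would accompany the manuscript as supplementary material or an appendix.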


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3798/11694804/f8d8fe669372/nihms-2000880-f0001.jpg

Similar articles

1. The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts.
Res Ethics. 2023 Oct;19(4):449-465. doi: 10.1177/17470161231180449. Epub 2023 Jun 15.
2. Is ChatGPT a "Fire of Prometheus" for Non-Native English-Speaking Researchers in Academic Writing?
Korean J Radiol. 2023 Oct;24(10):952-959. doi: 10.3348/kjr.2023.0773.
3. Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review.
Res Integr Peer Rev. 2023 May 18;8(1):4. doi: 10.1186/s41073-023-00133-5.
4. Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other Large Language Models in scholarly peer review.
Res Sq. 2023 Feb 20:rs.3.rs-2587766. doi: 10.21203/rs.3.rs-2587766/v1.
5. AUTOGEN: A Personalized Large Language Model for Academic Enhancement-Ethics and Proof of Principle.
Am J Bioeth. 2023 Oct;23(10):28-41. doi: 10.1080/15265161.2023.2233356. Epub 2023 Jul 24.
6. Evaluation of Large Language Model Performance and Reliability for Citations and References in Scholarly Writing: Cross-Disciplinary Study.
J Med Internet Res. 2024 Apr 5;26:e52935. doi: 10.2196/52935.
7. Human vs machine: identifying ChatGPT-generated abstracts in Gynecology and Urogynecology.
Am J Obstet Gynecol. 2024 Aug;231(2):276.e1-276.e10. doi: 10.1016/j.ajog.2024.04.045. Epub 2024 May 6.
8. The transformative impact of large language models on medical writing and publishing: current applications, challenges and future directions.
Korean J Physiol Pharmacol. 2024 Sep 1;28(5):393-401. doi: 10.4196/kjpp.2024.28.5.393.
9. The Role of Large Language Models in Transforming Emergency Medicine: Scoping Review.
JMIR Med Inform. 2024 May 10;12:e53787. doi: 10.2196/53787.
10. Utility of artificial intelligence-based large language models in ophthalmic care.
Ophthalmic Physiol Opt. 2024 May;44(3):641-671. doi: 10.1111/opo.13284. Epub 2024 Feb 25.

Cited by

1. Artificial intelligence policies in bioethics and health humanities: a comparative analysis of publishers and journals.
BMC Med Ethics. 2025 Jul 3;26(1):79. doi: 10.1186/s12910-025-01239-9.
2. Artificial Intelligence and Publishing Ethics: A Narrative Review and SWOT Analysis.
Cureus. 2025 May 14;17(5):e84098. doi: 10.7759/cureus.84098. eCollection 2025 May.
3. The ethics of using artificial intelligence in scientific research: new guidance needed for a new tool.
AI Ethics. 2025 Apr;5(2):1499-1521. doi: 10.1007/s43681-024-00493-8. Epub 2024 May 27.
4. The emergence of large language models as tools in literature reviews: a large language model-assisted systematic review.
J Am Med Inform Assoc. 2025 Jun 1;32(6):1071-1086. doi: 10.1093/jamia/ocaf063.
5. Disclosing artificial intelligence use in scientific research and publication: When should disclosure be mandatory, optional, or unnecessary?
Account Res. 2025 Mar 24:1-13. doi: 10.1080/08989621.2025.2481949.
6. Guidance needed for using artificial intelligence to screen journal submissions for misconduct.
Res Ethics. 2025 Jan;21(1):1-8. doi: 10.1177/17470161241254052. Epub 2024 May 11.
7. Practical Tips for Enhancing Academic Skills with Generative Artificial Intelligence Tools.
Acad Psychiatry. 2025 Feb;49(1):40-43. doi: 10.1007/s40596-024-02055-w. Epub 2024 Sep 27.
8. Editors' Statement on the Responsible Use of Generative AI Technologies in Scholarly Journal Publishing.
Hastings Cent Rep. 2023 Sep;53(5):3-6. doi: 10.1002/hast.1507. Epub 2023 Oct 1.

References

1. Chatbots, ChatGPT, and Scholarly Manuscripts: WAME Recommendations on ChatGPT and Chatbots in relation to scholarly publications.
Natl Med J India. 2023 Jan-Feb;36(1):1-4. doi: 10.25259/NMJI_365_23.
2. The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies.
Pharmaceuticals (Basel). 2023 Jun 18;16(6):891. doi: 10.3390/ph16060891.
3. Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review.
Res Integr Peer Rev. 2023 May 18;8(1):4. doi: 10.1186/s41073-023-00133-5.
4. ChatGPT: when artificial intelligence replaces the rheumatologist in medical writing.
Ann Rheum Dis. 2023 Aug;82(8):1015-1017. doi: 10.1136/ard-2023-223936. Epub 2023 Apr 11.
5. AI tools can improve equity in science.
Science. 2023 Mar 10;379(6636):991. doi: 10.1126/science.adg9714. Epub 2023 Mar 9.
6. Can an artificial intelligence chatbot be the author of a scholarly article?
J Educ Eval Health Prof. 2023;20:6. doi: 10.3352/jeehp.2023.20.6. Epub 2023 Feb 27.
7. Corrigendum to "Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse?" [Nurse Educ. Pract. 66 (2023) 103537].
Nurse Educ Pract. 2023 Feb;67:103572. doi: 10.1016/j.nepr.2023.103572. Epub 2023 Feb 6.
8. Nonhuman "Authors" and Implications for the Integrity of Scientific Publication and Medical Knowledge.
JAMA. 2023 Feb 28;329(8):637-639. doi: 10.1001/jama.2023.1344.
9. ChatGPT is fun, but not an author.
Science. 2023 Jan 27;379(6630):313. doi: 10.1126/science.adg7879. Epub 2023 Jan 26.
10. Using AI to write scholarly publications.
Account Res. 2024 Oct;31(7):715-723. doi: 10.1080/08989621.2023.2168535. Epub 2023 Jan 25.