

The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts.

Author Information

Hosseini Mohammad, Resnik David B, Holmes Kristi

Affiliations

Northwestern University Feinberg School of Medicine, USA.

National Institute of Environmental Health Sciences, USA.

Publication Information

Res Ethics. 2023 Oct;19(4):449-465. doi: 10.1177/17470161231180449. Epub 2023 Jun 15.

Abstract

In this article, we discuss ethical issues related to using and disclosing artificial intelligence (AI) tools, such as ChatGPT and other systems based on large language models (LLMs), to write or edit scholarly manuscripts. Some journals have banned the use of LLMs because of the ethical problems they raise concerning responsible authorship. We argue that this is not a reasonable response to the moral conundrums created by the use of LLMs, because bans are unenforceable and would encourage undisclosed use of LLMs. Furthermore, LLMs can be useful in writing, reviewing, and editing text, and can promote equity in science. Others have argued that LLMs should be mentioned in the acknowledgments, since they do not meet all the authorship criteria. We argue that naming LLMs as authors or mentioning them in the acknowledgments are both inappropriate forms of recognition, because LLMs do not have free will and therefore cannot be held morally or legally responsible for what they do. Tools in general, and software in particular, are usually cited in the text and then listed in the references. We provide suggestions to improve APA Style for referencing ChatGPT so that it specifically indicates the contributor who used the LLM (because interactions are stored on personal user accounts), the version and model used (because the same version could use different language models and generate dissimilar responses, e.g., ChatGPT May 12 Version with GPT-3.5 or GPT-4), and the time of use (because LLMs evolve fast and generate dissimilar responses over time). We recommend that researchers who use LLMs: (1) disclose their use in the introduction or methods section to transparently describe details such as the prompts used, and note which parts of the text are affected; (2) use in-text citations and references (to recognize the applications used and improve findability and indexing); and (3) record and submit their relevant interactions with LLMs as supplementary material or appendices.
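The citation elements recommended above (contributor, tool version, underlying model, and time of use) can be kept as a structured record alongside a manuscript. The following is a minimal sketch of such a record; the class, field names, and example values are illustrative assumptions, not part of the article or of APA Style.

```python
# Hypothetical sketch: recording the LLM-usage details the authors recommend
# disclosing (contributor, version, model, time of use).
from dataclasses import dataclass
from datetime import date

@dataclass
class LLMUsageRecord:
    contributor: str  # who ran the prompts (interactions live in personal accounts)
    tool: str         # e.g. "ChatGPT"
    version: str      # release label, e.g. "May 12 Version"
    model: str        # underlying model, e.g. "GPT-4"
    used_on: date     # LLMs evolve fast; responses change over time

    def reference(self) -> str:
        """Render the record as a single human-readable reference line."""
        return (f"{self.tool} ({self.version}, {self.model}), "
                f"used by {self.contributor} on {self.used_on.isoformat()}.")

record = LLMUsageRecord("A. Researcher", "ChatGPT", "May 12 Version",
                        "GPT-4", date(2023, 5, 20))
print(record.reference())
# -> ChatGPT (May 12 Version, GPT-4), used by A. Researcher on 2023-05-20.
```

Keeping these fields separate (rather than in one free-text string) also makes it easy to export the same record into a methods-section disclosure, a reference entry, or a supplementary log of interactions.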


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3798/11694804/f8d8fe669372/nihms-2000880-f0001.jpg
