Moffatt Barton, Hall Alicia
Department of Philosophy and Religion, Mississippi State University, Mississippi State, MS, USA.
Account Res. 2024 Aug 7:1-17. doi: 10.1080/08989621.2024.2386285.
The recent emergence of Large Language Models (LLMs) and other forms of Artificial Intelligence (AI) has led people to wonder whether they could act as authors on scientific papers. This paper argues that AI systems should not be included on the author byline. We agree with current commentators that LLMs are incapable of taking responsibility for their work and thus do not meet current authorship guidelines, and we identify additional problems concerning responsibility and authorship. Moreover, the problems run deeper: AI tools neither write in a meaningful sense nor possess persistent identities. From a broader publication ethics perspective, adopting AI authorship would have detrimental effects on an already overly competitive and stressed publishing ecosystem. Deterrence is possible, as backward-looking tools will likely be able to identify past AI usage. Finally, we question the value of using AI to produce more research simply for publication's sake.