ChatGPT isn't an author, but a contribution taxonomy is needed.

Author Information

Suchikova Y, Tsybuliak N

Affiliations

Scientific Work, Berdyansk State Pedagogical University, Zaporizhzhia, Ukraine.

Department of Applied Psychology and Speech Therapy, Berdyansk State Pedagogical University, Zaporizhzhia, Ukraine.

Publication Information

Account Res. 2024 Sep 18:1-6. doi: 10.1080/08989621.2024.2405039.

Abstract

PURPOSE

The increasing use of AI tools, particularly large language models like ChatGPT, in academic research has raised significant questions about authorship and transparency. This commentary emphasizes the need for a standardized AI contributions taxonomy to clarify AI's role in producing and publishing research outputs, ensuring ethical standards and maintaining academic integrity.

APPROACH

We propose adapting the NIST AI Use Taxonomy and incorporating categories that reflect AI's use in tasks such as hypothesis generation, data analysis, manuscript preparation, and ethical oversight.

FINDINGS

Establishing an AI contributions taxonomy for the production and publication of research output would address inconsistencies in AI disclosure, enhance transparency, and uphold accountability in research. It would help differentiate between AI-assisted and human-led tasks, providing more explicit attribution of contributions.

PRACTICAL IMPLICATIONS

The proposed taxonomy would offer researchers and journals a standardized method for disclosing AI's role in academic work, promoting responsible and transparent reporting aligned with ethical guidelines from COPE and ICMJE.

VALUE

A well-defined AI contributions taxonomy for the production and publication of research output would foster transparency and trust in using AI in research, ensuring that AI's role is appropriately acknowledged while preserving academic integrity.
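
Purely as an illustration appended to this record (not part of the article above): the APPROACH and PRACTICAL IMPLICATIONS sections describe a category-based, standardized disclosure of AI contributions. Below is a minimal Python sketch of how such a disclosure record might be encoded in machine-readable form. The class names, field names, and category values are assumptions for illustration only; they do not reproduce the authors' proposed taxonomy or the NIST AI Use Taxonomy.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List


class AIContribution(Enum):
    # Illustrative categories only, loosely echoing the tasks named in the
    # APPROACH section; NOT the taxonomy proposed by the authors or by NIST.
    HYPOTHESIS_GENERATION = "hypothesis generation"
    DATA_ANALYSIS = "data analysis"
    MANUSCRIPT_PREPARATION = "manuscript preparation"
    ETHICAL_OVERSIGHT = "ethical oversight"


@dataclass
class AIDisclosure:
    # One disclosure entry a journal might collect at submission time.
    tool_name: str                       # e.g. "ChatGPT"
    tool_version: str                    # e.g. "GPT-4, accessed 2024-05"
    contributions: List[AIContribution]  # tasks the tool assisted with
    human_verified: bool                 # whether the authors checked the output

    def statement(self) -> str:
        # Render a plain-text disclosure sentence for the manuscript.
        tasks = ", ".join(c.value for c in self.contributions)
        checked = ("verified by the authors" if self.human_verified
                   else "not independently verified")
        return (f"{self.tool_name} ({self.tool_version}) was used for {tasks}; "
                f"its output was {checked}.")


if __name__ == "__main__":
    disclosure = AIDisclosure(
        tool_name="ChatGPT",
        tool_version="GPT-4, accessed 2024-05",
        contributions=[AIContribution.DATA_ANALYSIS,
                       AIContribution.MANUSCRIPT_PREPARATION],
        human_verified=True,
    )
    print(disclosure.statement())
```

Framing each entry as structured data rather than free text is one way a journal could enforce the consistent reporting the abstract calls for, though the actual field set would need to follow whatever taxonomy is eventually standardized.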
