Department of Radiation Oncology, Trakya University School of Medicine, Edirne, Türkiye.
J Korean Med Sci. 2024 Aug 26;39(33):e249. doi: 10.3346/jkms.2024.39.e249.
The application of new technologies such as artificial intelligence (AI) to science affects how research is conducted and the methodologies it employs. While the responsible use of AI brings many innovations and benefits to science and humanity, its unethical use poses a serious threat to scientific integrity and the literature. Even in the absence of malicious use, the output of a chatbot, as an AI-based software application, carries the risk of containing biases, distortions, irrelevancies, misrepresentations, and plagiarism. The use of complex AI algorithms therefore raises concerns about bias, transparency, and accountability, requiring the development of new ethical rules to protect scientific integrity. Unfortunately, the drafting of ethical codes cannot keep pace with the development and implementation of the technology. The main purpose of this narrative review is to inform readers, authors, reviewers, and editors about new approaches to publication ethics in the era of AI. It focuses specifically on how to disclose the use of AI in a manuscript, how to avoid publishing entirely AI-generated text, and current standards for retraction.