Ahn Sangzin
Department of Pharmacology and PharmacoGenomics Research Center, Inje University College of Medicine, Busan, Korea.
Center for Personalized Precision Medicine of Tuberculosis, Inje University College of Medicine, Busan, Korea.
J Yeungnam Med Sci. 2025;42:14. doi: 10.12701/jyms.2024.00794. Epub 2024 Dec 11.
Large language models (LLMs), the most recent advancements in artificial intelligence (AI), have profoundly affected academic publishing and raised important ethical and practical concerns. This study examined the prevalence and content of AI guidelines in Korean medical journals to assess the current landscape and inform future policy implementation.
The top 100 Korean medical journals, ranked by Hirsch index (h-index), were surveyed. Author guidelines were collected and screened by a human researcher and an AI chatbot to identify AI-related content. The key components of LLM policies were extracted and compared across journals, and the journal characteristics associated with the adoption of AI guidelines were analyzed.
Only 18% of the surveyed journals had LLM guidelines, a rate much lower than previously reported for international journals. However, adoption increased over time, reaching 57.1% in the first quarter of 2024. High-impact journals were more likely to have AI guidelines. All journals with LLM guidelines required authors to declare the use of LLM tools, and 94.4% prohibited AI authorship. The key policy components included emphasizing human responsibility (72.2%), discouraging AI-generated content (44.4%), and exempting basic AI tools (38.9%).
While the adoption of LLM guidelines among Korean medical journals lags behind the global trend, implementation has clearly increased over time. The key components of these guidelines align with international standards, but greater standardization and collaboration are needed to ensure the responsible and ethical use of LLMs in medical research and writing.