Hryciw Brett N, Seely Andrew J E, Kyeremanteng Kwadwo
Division of Critical Care, Department of Medicine, University of Ottawa, Ottawa, ON, Canada.
Division of Thoracic Surgery, Department of Surgery, The Ottawa Hospital, Ottawa, ON, Canada.
Front Artif Intell. 2023 Nov 16;6:1283353. doi: 10.3389/frai.2023.1283353. eCollection 2023.
The integration of large language models (LLMs) and artificial intelligence (AI) into scientific writing, especially in medical literature, presents both unprecedented opportunities and inherent challenges. This manuscript evaluates the transformative potential of LLMs for the synthesis of information, linguistic enhancements, and global knowledge dissemination. At the same time, it raises concerns about unintentional plagiarism, the risk of misinformation, data biases, and an over-reliance on AI. To address these, we propose governing principles for AI adoption that ensure integrity, transparency, validity, and accountability. Additionally, guidelines for reporting AI involvement in manuscript development are delineated, and a classification system to specify the level of AI assistance is introduced. This approach uniquely addresses the challenges of AI in scientific writing, emphasizing transparency in authorship, qualification of AI involvement, and ethical considerations. Concerns regarding access equity, potential biases in AI-generated content, authorship dynamics, and accountability are also explored, emphasizing the human author's continued responsibility. Recommendations are made for fostering collaboration between AI developers, researchers, and journal editors and for emphasizing the importance of AI's responsible use in academic writing. Regular evaluations of AI's impact on the quality and biases of medical manuscripts are also advocated. As we navigate the expanding realm of AI in scientific discourse, it is crucial to maintain the human element of creativity, ethics, and oversight, ensuring that the integrity of scientific literature remains uncompromised.