

Ethical Application of Generative Artificial Intelligence in Medicine.

Author Information

Hasan Sayyida S, Fury Matthew S, Woo Joshua J, Kunze Kyle N, Ramkumar Prem N

Affiliations

Rush Medical College, Chicago, Illinois, U.S.A.

Baton Rouge Orthopaedic Clinic, Baton Rouge, Louisiana, U.S.A.

Publication Information

Arthroscopy. 2025 Apr;41(4):874-885. doi: 10.1016/j.arthro.2024.12.011. Epub 2024 Dec 15.

Abstract

Generative artificial intelligence (AI) may revolutionize health care, providing solutions that range from enhancing diagnostic accuracy to personalizing treatment plans. However, its rapid and largely unregulated integration into medicine raises ethical concerns related to data integrity, patient safety, and appropriate oversight. One of the primary ethical challenges lies in generative AI's potential to produce misleading or fabricated information, posing risks of misdiagnosis or inappropriate treatment recommendations, which underscore the necessity for robust physician oversight. Transparency also remains a critical concern, as the closed-source nature of many large language models prevents both patients and health care providers from understanding the reasoning behind AI-generated outputs, potentially eroding trust. The lack of regulatory approval for AI as a medical device, combined with concerns around the security of patient-derived data and AI-generated synthetic data, further complicates its safe integration into clinical workflows. Furthermore, synthetic datasets generated by AI, although valuable for augmenting research in areas with scarce data, complicate questions of data ownership, patient consent, and scientific validity. In addition, generative AI's ability to streamline administrative tasks risks depersonalizing care, further distancing providers from patients. These challenges compound the deeper issues plaguing the health care system, including the emphasis on volume and speed over value and expertise. The use of generative AI in medicine brings about mass scaling of synthetic information, thereby necessitating careful adoption to protect patient care and medical advancement. Given these considerations, generative AI applications warrant regulatory and critical scrutiny. Key starting points include establishing strict standards for data security and transparency, implementing oversight akin to institutional review boards to govern data usage, and developing interdisciplinary guidelines that involve developers, clinicians, and ethicists. By addressing these concerns, we can better align generative AI adoption with the core foundations of humanistic health care, preserving patient safety, autonomy, and trust while harnessing AI's transformative potential. LEVEL OF EVIDENCE: Level V, expert opinion.

