Bragazzi Nicola Luigi, Garbarino Sergio
Human Nutrition Unit, Department of Food and Drugs, University of Parma, Parma, Italy.
Department of Neuroscience, Rehabilitation, Ophthalmology, Genetics and Maternal/Child Sciences, University of Genoa, Genoa, Italy.
JMIR AI. 2024 Jun 7;3:e55957. doi: 10.2196/55957.
Clinical decision-making is a crucial aspect of health care, involving the balanced integration of scientific evidence, clinical judgment, ethical considerations, and patient involvement. This process is dynamic and multifaceted, relying on clinicians' knowledge, experience, and intuitive understanding to achieve optimal patient outcomes through informed, evidence-based choices. The advent of generative artificial intelligence (AI) presents a revolutionary opportunity in clinical decision-making. AI's advanced data analysis and pattern recognition capabilities can significantly enhance the diagnosis and treatment of diseases, processing vast medical data to identify patterns, tailor treatments, predict disease progression, and aid in proactive patient management. However, the incorporation of AI into clinical decision-making raises concerns regarding the reliability and accuracy of AI-generated insights. To address these concerns, 11 "verification paradigms" are proposed in this paper, with each paradigm being a unique method to verify the evidence-based nature of AI in clinical decision-making. This paper also frames the concept of "clinically explainable, fair, and responsible, clinician-, expert-, and patient-in-the-loop AI." This model focuses on ensuring AI's comprehensibility, collaborative nature, and ethical grounding, advocating for AI to serve as an augmentative tool, with its decision-making processes being transparent and understandable to clinicians and patients. The integration of AI should enhance, not replace, the clinician's judgment and should involve continuous learning and adaptation based on real-world outcomes and ethical and legal compliance. In conclusion, while generative AI holds immense promise in enhancing clinical decision-making, it is essential to ensure that it produces evidence-based, reliable, and impactful knowledge. 
Using the outlined paradigms and approaches can help the medical and patient communities harness AI's potential while maintaining high patient care standards.