Ennab Mohammad, Mcheick Hamid
Department of Computer Sciences and Mathematics, University of Québec at Chicoutimi, Chicoutimi, QC, Canada.
Front Robot AI. 2024 Nov 28;11:1444763. doi: 10.3389/frobt.2024.1444763. eCollection 2024.
Artificial Intelligence (AI) has demonstrated exceptional performance in automating critical healthcare tasks, such as diagnostic imaging analysis and predictive modeling, often surpassing human capabilities. The integration of AI in healthcare promises substantial improvements in patient outcomes, including faster diagnosis and personalized treatment plans. However, AI models frequently lack interpretability, which creates significant challenges for assessing their performance and generalizability across diverse patient populations. These opaque AI technologies raise serious patient safety concerns, as non-interpretable models can lead to improper treatment decisions when their outputs are misinterpreted by healthcare providers. Our systematic review explores various AI applications in healthcare, focusing on the critical assessment of model interpretability and accuracy. We identify and elucidate the most significant limitations of current AI systems, such as the black-box nature of deep learning models and the variability in performance across different clinical settings. By addressing these challenges, our objective is to provide healthcare providers with well-informed strategies for developing innovative and safe AI solutions. This review aims to ensure that future AI implementations in healthcare not only enhance performance but also maintain transparency and patient safety.