AP-HP, Service de Médecine Intensive-Réanimation, Hôpital de Bicêtre, DMU 4 CORREVE, Inserm UMR S_999, FHU SEPSIS, CARMAS, Université Paris-Saclay, 78 Rue du Général Leclerc, 94270, Le Kremlin-Bicêtre, France.
Service de Médecine Intensive Réanimation, Centre Hospitalier Universitaire Grenoble Alpes, Av. des Maquis du Grésivaudan, 38700, La Tronche, France.
Crit Care. 2024 Sep 12;28(1):301. doi: 10.1186/s13054-024-05005-y.
In the high-stakes realm of critical care, where daily decisions are crucial and clear communication is paramount, comprehending the rationale behind Artificial Intelligence (AI)-driven decisions appears essential. While AI has the potential to improve decision-making, its complexity can hinder comprehension of, and adherence to, its recommendations. "Explainable AI" (XAI) aims to bridge this gap, enhancing confidence among patients and doctors. It also helps to meet regulatory transparency requirements, offers actionable insights, and promotes fairness and safety. Yet defining explainability and standardising its assessment remain open challenges, and although XAI is a growing field, a trade-off between performance and explainability may still be necessary.
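As a concrete illustration of what post-hoc explainability can look like in practice, the sketch below applies SHAP feature attribution (one common XAI technique, not a method described in this paper) to a tree-based model predicting a toy "risk score". The model choice, the feature names (heart rate, mean arterial pressure, lactate, SpO2), and the synthetic data are all illustrative assumptions.

```python
# Minimal sketch of post-hoc explainability with SHAP (assumption: this is
# one illustrative XAI technique, not the paper's method). Features and data
# below are synthetic, ICU-flavoured placeholders.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feature_names = ["heart_rate", "map_mmHg", "lactate_mmol_L", "spo2_pct"]

# Synthetic vital-sign features and a toy risk score driven mainly by
# lactate (positive effect) and mean arterial pressure (negative effect).
X = rng.normal(loc=[90.0, 75.0, 2.0, 95.0],
               scale=[15.0, 10.0, 1.0, 3.0],
               size=(500, 4))
y = 0.4 * X[:, 2] - 0.02 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer decomposes a single prediction into additive per-feature
# contributions: base value + sum of contributions = model output. This is
# the kind of patient-level rationale a clinician could inspect.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]  # explain the first "patient"

for name, value in zip(feature_names, contributions):
    print(f"{name:>16}: {value:+.3f}")
```

Such additive attributions turn an opaque score into a per-patient breakdown, which speaks to the abstract's point about actionable insights, though how to validate and standardise such explanations clinically remains one of the open challenges the authors note.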