

Should AI models be explainable to clinicians?

Affiliations

AP-HP, Service de Médecine Intensive-Réanimation, Hôpital de Bicêtre, DMU 4 CORREVE, Inserm UMR S_999, FHU SEPSIS, CARMAS, Université Paris-Saclay, 78 Rue du Général Leclerc, 94270, Le Kremlin-Bicêtre, France.

Service de Médecine Intensive Réanimation, Centre Hospitalier Universitaire Grenoble Alpes, Av. des Maquis du Grésivaudan, 38700, La Tronche, France.

Publication details

Crit Care. 2024 Sep 12;28(1):301. doi: 10.1186/s13054-024-05005-y.

Abstract

In the high-stakes realm of critical care, where daily decisions are crucial and clear communication is paramount, comprehending the rationale behind Artificial Intelligence (AI)-driven decisions appears essential. While AI has the potential to improve decision-making, its complexity can hinder comprehension of, and adherence to, its recommendations. "Explainable AI" (XAI) aims to bridge this gap, enhancing confidence among patients and doctors. It also helps to meet regulatory transparency requirements, offers actionable insights, and promotes fairness and safety. Yet defining explainability and standardising its assessment remain ongoing challenges, and a trade-off between performance and explainability may still be needed, even though XAI is a growing field.
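The "actionable insights" mentioned above are typically produced by post-hoc attribution methods. The paper does not prescribe a particular one, but SHAP is a widely used example, so the sketch below is a minimal illustration under that assumption: a synthetic gradient-boosting classifier stands in for a hypothetical ICU risk model, and the feature names (lactate, mean arterial pressure, age, creatinine) and data are invented for the example.

```python
# Minimal sketch (not from the paper): per-patient SHAP attributions for a
# hypothetical ICU risk model. All names and data here are illustrative.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["lactate", "mean_arterial_pressure", "age", "creatinine"]
X = pd.DataFrame(rng.normal(size=(500, 4)), columns=features)
# Synthetic outcome loosely tied to lactate and MAP, for illustration only.
y = (X["lactate"] - 0.5 * X["mean_arterial_pressure"]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer returns per-feature contributions to the model's log-odds
# output; for one patient this is the "why" behind a single risk score.
explainer = shap.TreeExplainer(model)
patient = X.iloc[[0]]
contributions = explainer.shap_values(patient)[0]

# Rank features by the magnitude of their contribution to this prediction.
for name, value in sorted(zip(features, contributions),
                          key=lambda pair: -abs(pair[1])):
    print(f"{name:>24s}: {value:+.3f}")
```

Ranking contributions by magnitude gives the clinician a per-patient view of which inputs drove the score, which is the kind of transparency the abstract argues for; it does not, by itself, resolve the definition and evaluation challenges the authors raise.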


Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/eaeb/11391805/2b96528eb562/13054_2024_5005_Fig1_HTML.jpg
