Department of Computer Science, University of Cyprus, Nicosia, Cyprus.
CYENS Centre of Excellence, Nicosia, Cyprus.
Stud Health Technol Inform. 2024 Aug 22;316:808-812. doi: 10.3233/SHTI240534.
Explainable artificial intelligence (AI) focuses on developing models and algorithms that provide transparent and interpretable insights into their decision-making processes. By elucidating the reasoning behind AI-driven diagnoses and treatment recommendations, explainability can earn the trust of healthcare experts and assist them in difficult diagnostic tasks. Sepsis is a serious condition that occurs when the body's immune system mounts an extreme response to an infection, causing tissue and organ damage and potentially leading to death. Physicians face challenges in diagnosing and treating sepsis because of its complex pathogenesis. This work provides an overview of recent studies that propose explainable AI models for predicting sepsis onset and sepsis mortality using intensive care data. The general finding is that explainable AI can identify the most significant features guiding the model's decision-making process. Future research will investigate explainability through argumentation theory using intensive care data focused on sepsis patients.
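Feature-level explanations of the kind described above are typically obtained with post-hoc attribution methods such as SHAP. The sketch below is only an illustration of that general idea: it uses synthetic data with hypothetical vital-sign feature names, not the datasets, models, or methods of the reviewed studies.

```python
# Minimal illustrative sketch: ranking features of a sepsis-risk classifier
# by SHAP attribution. All data and feature names are synthetic placeholders.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical vital-sign / laboratory features often used in sepsis models
X = pd.DataFrame({
    "heart_rate": rng.normal(90, 15, n),
    "resp_rate": rng.normal(20, 5, n),
    "temperature": rng.normal(37.5, 1.0, n),
    "lactate": rng.gamma(2.0, 1.0, n),
    "wbc_count": rng.normal(11, 4, n),
})
# Synthetic outcome loosely driven by lactate and respiratory rate
logits = 0.8 * (X["lactate"] - 2) + 0.1 * (X["resp_rate"] - 20)
y = (logits + rng.normal(0, 1, n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# SHAP attributes each prediction to the individual input features
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global importance: mean absolute SHAP value per feature
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

In this sketch the mean absolute SHAP value serves as a global importance score, surfacing which inputs most influence the classifier's output; per-patient SHAP values can likewise explain individual predictions.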