Neves Inês, Folgado Duarte, Santos Sara, Barandas Marília, Campagner Andrea, Ronzio Luca, Cabitza Federico, Gamboa Hugo
Associação Fraunhofer Portugal Research, Rua Alfredo Allen 455/461, 4200-135, Porto, Portugal.
Associação Fraunhofer Portugal Research, Rua Alfredo Allen 455/461, 4200-135, Porto, Portugal; Laboratório de Instrumentação, Engenharia Biomédica e Física da Radiação (LIBPhys-UNL), Departamento de Física, Faculdade de Ciências e Tecnologia, FCT, Universidade Nova de Lisboa, 2829-516, Caparica, Portugal.
Comput Biol Med. 2021 Jun;133:104393. doi: 10.1016/j.compbiomed.2021.104393. Epub 2021 Apr 16.
Treatment and prevention of cardiovascular diseases often rely on Electrocardiogram (ECG) interpretation. Because it depends on the individual physician, ECG interpretation is subjective and prone to error. Machine learning models are increasingly developed to support doctors; however, their lack of interpretability remains one of the main obstacles to their widespread adoption. This paper presents an Explainable Artificial Intelligence (XAI) solution that makes heartbeat classification more explainable using several state-of-the-art model-agnostic methods. We introduce a high-level conceptual framework for explainable time series and propose an original method that adds temporal dependency between time samples by using the time series' derivative. The results were validated on the MIT-BIH arrhythmia dataset: we performed a performance analysis to evaluate whether the explanations fit the model's behaviour, and employed the 1-D Jaccard index to compare the subsequences extracted from an interpretable model with those extracted by the XAI methods. Our results show that using the raw signal together with its derivative incorporates temporal dependency between samples and supports classification explanation. A small but informative user study concludes the work, evaluating the potential of the visual explanations produced by our method for adoption in real-world clinical settings, either as diagnostic aids or as training resources.
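The core idea of the derivative-based method, pairing each raw sample with its first difference so that an explainer sees local temporal context, can be sketched as follows. This is a minimal illustration of the concept described in the abstract, not the paper's exact implementation; the function name and padding choice are assumptions.

```python
def with_derivative(signal):
    """Augment a 1-D signal with its first difference so each
    feature carries local temporal context.

    Hypothetical sketch: the paper's actual scheme may differ
    (e.g. smoothing or normalisation before differencing).
    """
    if len(signal) < 2:
        raise ValueError("need at least two samples")
    deriv = [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]
    deriv.append(deriv[-1])  # pad the last sample so lengths match
    return list(zip(signal, deriv))

# Toy heartbeat segment: each sample becomes a (value, slope) pair
beat = [0.0, 0.1, 0.5, 1.0, 0.4, 0.0]
features = with_derivative(beat)
```

Feeding `(value, slope)` pairs rather than isolated amplitudes gives a model-agnostic explainer a way to attribute importance to local shape, not just level.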
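The 1-D Jaccard index used to compare subsequences can be sketched as the overlap of the time-sample sets covered by two collections of intervals. This is an illustrative, stdlib-only helper under assumed conventions (half-open `(start, end)` index pairs); the paper's exact formulation may differ.

```python
def jaccard_1d(segs_a, segs_b):
    """1-D Jaccard index between two subsequence sets, each given as
    (start, end) sample-index pairs with end exclusive.

    Hypothetical helper: interval convention and names are assumptions,
    not taken from the paper.
    """
    set_a = {i for s, e in segs_a for i in range(s, e)}
    set_b = {i for s, e in segs_b for i in range(s, e)}
    union = set_a | set_b
    # Two empty segmentations are considered identical
    return len(set_a & set_b) / len(union) if union else 1.0

# Example: segment flagged by an interpretable model vs. an XAI method
model_segs = [(10, 30)]
xai_segs = [(20, 40)]
score = jaccard_1d(model_segs, xai_segs)  # overlap 10 / union 30
```

A score near 1 means the XAI method highlights essentially the same portion of the heartbeat as the interpretable reference model.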