Evaluating gradient-based explanation methods for neural network ECG analysis using heatmaps.

Author Information

Storås Andrea Marheim, Mæland Steffen, Isaksen Jonas L, Hicks Steven Alexander, Thambawita Vajira, Graff Claus, Hammer Hugo Lewi, Halvorsen Pål, Riegler Michael Alexander, Kanters Jørgen K

Affiliations

Department of Holistic Systems, SimulaMet, 0170 Oslo, Norway.

Department of Computer Science, Oslo Metropolitan University, 0167 Oslo, Norway.

Publication Information

J Am Med Inform Assoc. 2025 Jan 1;32(1):79-88. doi: 10.1093/jamia/ocae280.

Abstract

OBJECTIVE

Evaluate popular explanation methods that use heatmap visualizations to explain the predictions of deep neural networks for electrocardiogram (ECG) analysis, and provide recommendations for the selection of explanation methods.

MATERIALS AND METHODS

A residual deep neural network was trained on ECGs to predict intervals and amplitudes. Nine commonly used explanation methods (Saliency, Deconvolution, Guided backpropagation, Gradient SHAP, SmoothGrad, Input × gradient, DeepLIFT, Integrated gradients, GradCAM) were qualitatively evaluated by medical experts and objectively evaluated using a perturbation-based method.
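
As a concrete illustration of how such gradient-based attribution heatmaps are produced, the sketch below computes vanilla saliency (the absolute gradient of one predicted ECG measure with respect to the input signal) for a 1D regression network in PyTorch. The TinyEcgRegressor architecture, lead count, and sampling rate are hypothetical stand-ins and do not reproduce the residual network or tooling used in the study.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the residual ECG regressor described in the paper;
# architecture, lead count, and signal length are illustrative assumptions only.
class TinyEcgRegressor(nn.Module):
    def __init__(self, n_leads=8, n_outputs=1):
        super().__init__()
        self.conv = nn.Conv1d(n_leads, 16, kernel_size=7, padding=3)
        self.head = nn.Linear(16, n_outputs)

    def forward(self, x):                      # x: (batch, leads, samples)
        h = torch.relu(self.conv(x))
        return self.head(h.mean(dim=-1))       # global average pooling -> regression output


def saliency_heatmap(model, ecg, output_index=0):
    """Vanilla saliency: |d prediction / d input| for one predicted ECG measure."""
    ecg = ecg.clone().requires_grad_(True)
    model(ecg)[:, output_index].sum().backward()
    return ecg.grad.abs()                      # (batch, leads, samples) attribution map


model = TinyEcgRegressor()
ecg = torch.randn(1, 8, 5000)                  # e.g. 10 s of 8-lead ECG at 500 Hz (assumed)
heatmap = saliency_heatmap(model, ecg)         # one relevance value per lead and time sample
```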

RESULTS

No single explanation method consistently outperformed the other methods, but some methods were clearly inferior. We found considerable disagreement between the human expert evaluation and the objective evaluation by perturbation.
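
The objective perturbation evaluation referred to above can be approximated by masking the samples an explanation ranks as most relevant and measuring how far the prediction drifts from its original value; a faithful heatmap should cause large drift early. The sketch below is one such check under assumed settings (zero-filling fixed-size windows) and reuses TinyEcgRegressor, ecg, and heatmap from the previous sketch; the exact perturbation scheme used in the paper may differ.

```python
import torch

# Reuses TinyEcgRegressor, ecg, and heatmap from the previous sketch.
def perturbation_drift(model, ecg, heatmap, output_index=0, steps=10, window=50):
    """Zero out the highest-attribution time windows first and track how far the
    prediction moves from its original value; the window size and zero-filling
    perturbation are illustrative assumptions, not the paper's exact protocol."""
    model.eval()
    relevance = heatmap.sum(dim=1)                         # aggregate leads -> (batch, samples)
    order = relevance.argsort(dim=-1, descending=True)     # most relevant samples first
    perturbed = ecg.clone()
    drifts = []
    with torch.no_grad():
        baseline = model(ecg)[:, output_index]
        for step in range(steps):
            idx = order[:, step * window:(step + 1) * window]
            idx = idx.unsqueeze(1).repeat(1, ecg.size(1), 1)   # apply to every lead
            perturbed.scatter_(-1, idx, 0.0)                   # zero the selected samples
            drift = (model(perturbed)[:, output_index] - baseline).abs()
            drifts.append(drift.mean().item())
    return drifts                              # larger drift earlier => more faithful heatmap


drift_curve = perturbation_drift(model, ecg, heatmap)
```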

DISCUSSION

The best explanation method depended on the ECG measure. To ensure that future explanations of deep neural networks for medical data analyses are useful to medical experts, data scientists developing new explanation methods should collaborate tightly with domain experts. Because there is no explanation method that performs best in all use cases, several methods should be applied.

CONCLUSION

Several explanation methods should be used to determine the most suitable approach.


