

Rad4XCNN: A new agnostic method for post-hoc global explanation of CNN-derived features by means of Radiomics.

Author Information

Prinzi Francesco, Militello Carmelo, Zarcaro Calogero, Bartolotta Tommaso Vincenzo, Gaglio Salvatore, Vitabile Salvatore

Affiliations

Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, Palermo, 90127, Italy.

Institute for High-Performance Computing and Networking (ICAR-CNR), Italian National Research Council, Palermo, 90146, Italy.

Publication Information

Comput Methods Programs Biomed. 2025 Mar;260:108576. doi: 10.1016/j.cmpb.2024.108576. Epub 2025 Jan 7.

Abstract

BACKGROUND AND OBJECTIVE

In recent years, machine learning-based clinical decision support systems (CDSS) have played a key role in the analysis of several medical conditions. Despite their promising capabilities, the lack of transparency in AI models poses significant challenges, particularly in medical contexts where reliability is mandatory. Moreover, explainability often appears to be inversely proportional to accuracy. For this reason, achieving transparency without compromising predictive accuracy remains a key challenge.

METHODS

This paper presents Rad4XCNN, a novel method that combines the predictive power of CNN-derived features with the inherent interpretability of radiomic features. Rad4XCNN diverges from conventional saliency-map-based approaches by associating intelligible meaning to CNN-derived features by means of Radiomics, offering a new perspective on explanation methods beyond visualization maps.
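The abstract does not detail the association mechanism, but one plausible reading of "associating intelligible meaning to CNN-derived features by means of Radiomics" is a cohort-level correlation analysis between deep features and named radiomic features. The sketch below is an illustrative assumption, not the authors' implementation: the function name, the feature arrays, and the radiomic feature names are hypothetical placeholders.

```python
# Hypothetical sketch of a post-hoc global explanation: for each
# CNN-derived feature, find the radiomic features it tracks most
# closely across the cohort. NOT the published Rad4XCNN algorithm.
import numpy as np
from scipy.stats import spearmanr

def explain_cnn_features(cnn_feats, rad_feats, rad_names, top_k=3):
    """Map each CNN-derived feature to its top-k most correlated
    radiomic features (rank correlation over all samples).

    cnn_feats: (n_samples, n_cnn) CNN bottleneck activations.
    rad_feats: (n_samples, n_rad) radiomic feature matrix.
    rad_names: list of n_rad radiomic feature names.
    """
    explanations = {}
    for j in range(cnn_feats.shape[1]):
        # Spearman rho between CNN feature j and every radiomic feature.
        rhos = np.array([spearmanr(cnn_feats[:, j], rad_feats[:, k])[0]
                         for k in range(rad_feats.shape[1])])
        top = np.argsort(-np.abs(rhos))[:top_k]
        explanations[f"cnn_feature_{j}"] = [
            (rad_names[k], float(rhos[k])) for k in top]
    return explanations

# Toy usage with synthetic data; in practice the deep features would come
# from a trained CNN and the radiomic features from an extractor such as
# PyRadiomics applied to the same lesions.
rng = np.random.default_rng(0)
rad = rng.normal(size=(100, 4))
cnn = rad[:, :2] + 0.1 * rng.normal(size=(100, 2))
names = ["shape_Sphericity", "firstorder_Entropy",
         "glcm_Contrast", "glrlm_LongRunEmphasis"]
print(explain_cnn_features(cnn, rad, names))
```

Because the correlations are computed over the whole cohort rather than per image, the resulting explanation is global, which matches the paper's stated goal of moving beyond per-image visualization maps.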

RESULTS

Using a breast cancer classification task as a case study, we evaluated Rad4XCNN on ultrasound imaging datasets, including an online dataset and two in-house datasets for internal and external validation. Key results are: (i) CNN-derived features yield more robust accuracy than ViT-derived and radiomic features; (ii) conventional visualization-map methods for explanation present several pitfalls; (iii) Rad4XCNN does not sacrifice model accuracy for explainability; (iv) Rad4XCNN provides a global explanation, enabling physicians to extract global insights and findings.

CONCLUSIONS

Our method can mitigate some concerns related to the explainability-accuracy trade-off. This study highlights the importance of developing model explanation methods that do not affect model accuracy.

