

E-MIL: An explainable and evidential multiple instance learning framework for whole slide image classification.

Affiliations

School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China.


Publication information

Med Image Anal. 2024 Oct;97:103294. doi: 10.1016/j.media.2024.103294. Epub 2024 Aug 6.

Abstract

Multiple instance learning (MIL)-based methods have been widely adopted to process whole slide images (WSIs) in computational pathology. Because slide-level supervision is sparse, these methods usually localize tumor regions poorly, which limits interpretability; they also lack robust uncertainty estimation for their predictions, which limits reliability. To address these two limitations, we propose an explainable and evidential multiple instance learning (E-MIL) framework for whole slide image classification. E-MIL comprises three modules: a detail-aware attention distillation module (DAM), a structure-aware attention refinement module (SRM), and an uncertainty-aware instance classifier (UIC). Specifically, DAM helps the global network locate more detail-aware positive instances by using complementary sub-bags to distill fine-grained attention knowledge from the local network. A masked self-guidance loss is also introduced to bridge the gap between slide-level labels and the instance-level classification task. SRM generates a structure-aware attention map that covers the entire tumor region by effectively modeling the spatial relations between clustered instances. Finally, UIC provides accurate instance-level classification and robust predictive uncertainty estimation based on subjective logic theory, improving model reliability. Extensive experiments on three large multi-center subtyping datasets demonstrate the superiority of E-MIL at both the slide level and the instance level.
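The UIC module is described as building on subjective logic theory, in which non-negative per-class evidence parameterizes a Dirichlet distribution over class probabilities, and the uncertainty mass grows as total evidence shrinks. The abstract does not give the authors' formulation, but the standard subjective-logic mapping can be sketched as follows (the function name and the two-class example are illustrative, not from the paper): for K classes with evidence e_k, set alpha_k = e_k + 1, Dirichlet strength S = sum(alpha_k), belief b_k = e_k / S, and uncertainty u = K / S.

```python
import numpy as np

def subjective_logic_opinion(evidence):
    """Map per-class evidence to a subjective-logic opinion.

    evidence: non-negative evidence values e_k for one instance,
    e.g. produced by a non-negative activation on classifier logits.
    Returns (belief, uncertainty, expected_prob).
    """
    e = np.asarray(evidence, dtype=float)
    K = e.size
    alpha = e + 1.0        # Dirichlet parameters: alpha_k = e_k + 1
    S = alpha.sum()        # Dirichlet strength
    belief = e / S         # belief mass per class: b_k = e_k / S
    uncertainty = K / S    # uncertainty mass: u = K / S  (b_1+...+b_K+u = 1)
    prob = alpha / S       # expected class probability under the Dirichlet
    return belief, uncertainty, prob

# Strong evidence for class 0 -> low uncertainty:
b, u, p = subjective_logic_opinion([8.0, 0.0])
# No evidence at all -> maximal uncertainty u = 1:
b0, u0, p0 = subjective_logic_opinion([0.0, 0.0])
```

With evidence [8, 0], alpha = [9, 1] and S = 10, giving belief [0.8, 0.0], uncertainty 0.2, and expected probabilities [0.9, 0.1]; with zero evidence the opinion collapses to pure uncertainty (u = 1) and a uniform expected probability, which is how an evidential instance classifier can flag unreliable predictions.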

