AttriMIL: Revisiting attention-based multiple instance learning for whole-slide pathological image classification from a perspective of instance attributes.

Authors

Cai Linghan, Huang Shenjin, Zhang Ye, Lu Jinpeng, Zhang Yongbing

Affiliations

School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China.

Faculty of Computing, Harbin Institute of Technology, Harbin, 150001, China.

Publication

Med Image Anal. 2025 Jul;103:103631. doi: 10.1016/j.media.2025.103631. Epub 2025 May 14.

Abstract

Multiple instance learning (MIL) is a powerful approach for whole-slide pathological image (WSI) analysis, particularly suited for processing gigapixel-resolution images with slide-level labels. Recent attention-based MIL architectures have significantly advanced weakly supervised WSI classification, facilitating both clinical diagnosis and localization of disease-positive regions. However, these methods often face challenges in differentiating between instances, leading to tissue misidentification and a potential degradation in classification performance. To address these limitations, we propose AttriMIL, an attribute-aware multiple instance learning framework. By dissecting the computational flow of attention-based MIL models, we introduce a multi-branch attribute scoring mechanism that quantifies the pathological attributes of individual instances. Leveraging these quantified attributes, we further establish region-wise and slide-wise attribute constraints to dynamically model instance correlations both within and across slides during training. These constraints encourage the network to capture intrinsic spatial patterns and semantic similarities between image patches, thereby enhancing its ability to distinguish subtle tissue variations and sensitivity to challenging instances. To fully exploit the two constraints, we further develop a pathology adaptive learning technique to optimize pre-trained feature extractors, enabling the model to efficiently gather task-specific features. Extensive experiments on five public datasets demonstrate that AttriMIL consistently outperforms state-of-the-art methods across various dimensions, including bag classification accuracy, generalization ability, and disease-positive region localization. The implementation code is available at https://github.com/MedCAI/AttriMIL.
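For context on the computational flow the abstract dissects, the sketch below shows standard attention-based MIL pooling (in the style of Ilse et al.'s ABMIL): each patch embedding receives an attention score, the scores are softmax-normalized over the bag, and the slide-level prediction comes from the attention-weighted sum of embeddings. This is a minimal illustration of the baseline mechanism AttriMIL builds on, not the paper's multi-branch attribute scoring; all names and dimensions here are assumptions.

```python
import numpy as np

def attention_mil_pool(instance_feats, W_att, v_att, W_cls):
    """Minimal attention-based MIL pooling for one bag (slide).

    instance_feats: (N, D) patch embeddings from a feature extractor
    W_att: (D, H) attention hidden projection
    v_att: (H,)  attention scoring vector
    W_cls: (D, C) slide-level classifier weights
    """
    hidden = np.tanh(instance_feats @ W_att)      # (N, H) per-instance hidden state
    scores = hidden @ v_att                       # (N,) unnormalized attention scores
    weights = np.exp(scores - scores.max())       # numerically stable softmax
    weights /= weights.sum()                      # attention distribution over instances
    bag_feat = weights @ instance_feats           # (D,) attention-weighted bag embedding
    logits = bag_feat @ W_cls                     # (C,) slide-level class logits
    return logits, weights

# Toy usage: 100 patches with 512-d features, 2 slide-level classes.
rng = np.random.default_rng(0)
N, D, H, C = 100, 512, 128, 2
feats = rng.standard_normal((N, D))
logits, weights = attention_mil_pool(
    feats,
    rng.standard_normal((D, H)) * 0.01,
    rng.standard_normal(H),
    rng.standard_normal((D, C)),
)
```

The attention weights double as a patch-level heatmap for localizing disease-positive regions; the abstract's point is that such scores conflate instance importance with instance attributes, which AttriMIL's attribute scoring and constraints aim to disentangle.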
