Enhanced hierarchical attention mechanism for mixed MIL in automatic Gleason grading and scoring
Authors
Ren Meili, Huang Mengxing, Zhang Yu, Zhang Zhijun, Ren Meiyan
Affiliations
Hainan Provincial Key Laboratory of Big Data and Smart Service, Hainan University, Haikou, 570228, China.
Center of Network and Information Education Technology, Shanxi University of Finance and Economics, Taiyuan, 030006, China.
Publication
Sci Rep. 2025 May 8;15(1):15980. doi: 10.1038/s41598-025-00048-9.
Segmenting histological images and analyzing relevant regions are crucial for supporting pathologists in diagnosing various diseases. In prostate cancer diagnosis, Gleason grading and scoring rely on the recognition of different patterns in tissue samples. However, annotating large histological datasets is laborious and expensive, and annotations are often limited to slide-level or sparse instance-level labels. To address this, we propose an enhanced hierarchical attention mechanism within a mixed multiple instance learning (MIL) model that effectively integrates slide-level and instance-level labels. Our hierarchical attention mechanism dynamically suppresses noisy instance-level labels while adaptively amplifying discriminative features, achieving a synergistic integration of global slide-level context and local superpixel patterns. This design significantly improves label utilization efficiency, leading to state-of-the-art performance in Gleason grading. Experimental results on the SICAPv2 and TMAs datasets demonstrate the superior performance of our model, achieving AUC scores of 0.9597 and 0.8889, respectively. Our work not only advances the state-of-the-art in Gleason grading but also highlights the potential of hierarchical attention mechanisms in mixed MIL models for medical image analysis.
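As context for the abstract's attention-based MIL formulation: the paper's specific enhanced hierarchical mechanism is not detailed here, but the general attention-MIL pooling it builds on can be sketched as follows. Each instance (e.g. a superpixel patch embedding) receives a learned weight via a softmax over attention scores, and the slide-level representation is the weighted sum of instance embeddings. This is a minimal, generic sketch; the projection matrix `V`, attention vector `w`, and dimensions are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(H, V, w):
    """Generic attention-based MIL pooling.

    H : (n_instances, d) instance embeddings (e.g. superpixel patches)
    V : (k, d) attention projection (hypothetical parameters)
    w : (k,)  attention scoring vector (hypothetical parameters)

    Each instance gets a weight a_i = softmax_i(w^T tanh(V h_i));
    the bag (slide-level) embedding is the weighted sum of instances.
    """
    scores = np.tanh(H @ V.T) @ w   # (n_instances,) unnormalized scores
    a = softmax(scores)             # attention weights, sum to 1
    z = a @ H                       # (d,) bag-level embedding
    return z, a

# Toy example with random features standing in for patch embeddings.
rng = np.random.default_rng(0)
H = rng.normal(size=(6, 8))   # 6 instances, 8-dim features
V = rng.normal(size=(4, 8))
w = rng.normal(size=4)
z, a = attention_mil_pool(H, V, w)
print(z.shape, round(float(a.sum()), 6))
```

In a mixed-MIL setting such as the one described above, the bag embedding `z` would feed a slide-level classifier while the per-instance weights `a` can be supervised or regularized against the available instance-level labels; down-weighting unreliable instances is what a noise-suppressing attention mechanism amounts to at this level of abstraction.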