Huang Yuhao, Chang Ao, Dou Haoran, Tao Xing, Zhou Xinrui, Cao Yan, Huang Ruobing, Frangi Alejandro F, Bao Lingyun, Yang Xin, Ni Dong
National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China.
Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), University of Leeds, Leeds, UK; Department of Computer Science, School of Engineering, University of Manchester, Manchester, UK.
Med Image Anal. 2025 May;102:103552. doi: 10.1016/j.media.2025.103552. Epub 2025 Mar 21.
Accurate segmentation of nodules in both 2D breast ultrasound (BUS) and 3D automated breast ultrasound (ABUS) is crucial for clinical diagnosis and treatment planning. Therefore, developing an automated system for nodule segmentation can enhance user independence and expedite clinical analysis. Unlike fully-supervised learning, weakly-supervised segmentation (WSS) can streamline the laborious and intricate annotation process. However, current WSS methods face challenges in achieving precise nodule segmentation, as many of them depend on inaccurate activation maps or inefficient pseudo-mask generation algorithms. In this study, we introduce a novel multi-agent reinforcement learning-based WSS framework called Flip Learning, which relies solely on 2D/3D boxes for accurate segmentation. Specifically, multiple agents are employed to erase the target from the box to facilitate classification tag flipping, with the erased region serving as the predicted segmentation mask. The key contributions of this research are as follows: (1) Adoption of a superpixel/supervoxel-based approach to encode the standardized environment, capturing boundary priors and expediting the learning process. (2) Introduction of three meticulously designed rewards, comprising a classification score reward and two intensity distribution rewards, to steer the agents' erasing process precisely, thereby avoiding both under- and over-segmentation. (3) Implementation of a progressive curriculum learning strategy to enable agents to interact with the environment in a progressively challenging manner, thereby enhancing learning efficiency. Extensively validated on large in-house BUS and ABUS datasets, our Flip Learning method outperforms state-of-the-art WSS methods and foundation models, and achieves performance comparable to that of fully-supervised learning algorithms.
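The abstract only sketches the erasing-based pipeline at a high level. The snippet below is a deliberately simplified illustration of the core idea, not the authors' implementation: a coarse grid partition stands in for the SLIC superpixels/supervoxels, a dark-pixel-fraction heuristic (nodule_score) stands in for the trained classification network whose tag is flipped, the two intensity-distribution rewards are placeholder forms, a single-agent greedy loop replaces the multi-agent RL policy, and all weights and thresholds are arbitrary. It shows only how erasing regions inside the box until the classification tag flips can yield a pseudo segmentation mask.

```python
import numpy as np


def grid_superpixels(shape, grid=8):
    """Coarse grid labels as a stand-in for the SLIC superpixels/supervoxels
    used in the paper to encode the environment and capture boundary priors."""
    h, w = shape
    ys = np.minimum(np.arange(h) * grid // h, grid - 1)
    xs = np.minimum(np.arange(w) * grid // w, grid - 1)
    return ys[:, None] * grid + xs[None, :]


def nodule_score(patch, dark_thresh=0.35):
    """Dummy classifier: the 'nodule' probability is the fraction of dark
    pixels in the box. The real method uses a trained classification network
    whose tag the agents try to flip from 'nodule' to 'normal tissue'."""
    return float(np.mean(patch < dark_thresh))


def intensity_rewards(patch, erased):
    """Two toy intensity-distribution terms (placeholders, not the paper's):
    r_in rewards a homogeneous erased region (low intensity spread inside),
    r_sep rewards clear contrast between erased and remaining pixels."""
    inside, outside = patch[erased], patch[~erased]
    if inside.size == 0 or outside.size == 0:
        return 0.0, 0.0
    return -float(inside.std()), float(abs(inside.mean() - outside.mean()))


def flip_learning_greedy(patch, grid=8, flip_thresh=0.02,
                         w_cls=1.0, w_in=0.5, w_sep=0.5, fill_value=None):
    """Greedily erase superpixels inside the box until the classification tag
    flips; the erased region is returned as the pseudo segmentation mask.
    A greedy single-agent loop replaces the paper's multi-agent RL policy;
    all weights and thresholds are illustrative choices."""
    labels = grid_superpixels(patch.shape, grid)
    # 'Erasing' fills pixels with a background-like intensity estimate.
    fill = np.median(patch) if fill_value is None else fill_value
    erased = np.zeros(patch.shape, dtype=bool)
    current = patch.astype(float)

    while nodule_score(current) > flip_thresh:  # tag not yet flipped
        best_reward, best_sp = 0.0, None
        for sp in np.unique(labels[~erased]):
            trial_mask = erased | (labels == sp)
            trial = patch.astype(float)
            trial[trial_mask] = fill
            r_in, r_sep = intensity_rewards(patch, trial_mask)
            reward = (w_cls * (nodule_score(current) - nodule_score(trial))
                      + w_in * r_in + w_sep * r_sep)
            if reward > best_reward:
                best_reward, best_sp = reward, sp
        if best_sp is None:  # no erasure judged beneficial: stop
            break
        erased |= labels == best_sp
        current[labels == best_sp] = fill
    return erased


if __name__ == "__main__":
    # Synthetic box: bright background tissue with a dark square 'nodule'.
    rng = np.random.default_rng(0)
    box = rng.uniform(0.5, 0.8, size=(64, 64))
    box[16:48, 16:48] = rng.uniform(0.05, 0.2, size=(32, 32))
    mask = flip_learning_greedy(box)
    print(f"pseudo-mask area fraction: {mask.mean():.2f} (true nodule ~0.25)")
```

In this toy setup the greedy loop erases the dark grid cells first, since they raise the classification reward and the contrast term, and stops once the dummy tag flips; the printed pseudo-mask area roughly matches the synthetic nodule.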