Zhou Lei, Zhang Yuzhong, Zhang Jiadong, Qian Xuejun, Gong Chen, Sun Kun, Ding Zhongxiang, Wang Xing, Li Zhenhui, Liu Zaiyi, Shen Dinggang
IEEE Trans Med Imaging. 2025 Jan;44(1):244-258. doi: 10.1109/TMI.2024.3435450. Epub 2025 Jan 2.
Automated breast tumor segmentation based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has shown great promise in clinical practice, particularly for identifying the presence of breast disease. However, accurate segmentation of breast tumors is a challenging task that often necessitates complex networks. To strike an optimal trade-off between computational cost and segmentation performance, we propose a hybrid network that combines a convolutional neural network (CNN) with transformer layers. Specifically, the hybrid network consists of an encoder-decoder architecture built by stacking convolution and deconvolution layers. Effective 3D transformer layers are then applied after the encoder subnetworks to capture global dependencies among the bottleneck features. To improve the efficiency of the hybrid network, two parallel encoder subnetworks are designed, one for the decoder and one for the transformer layers. To further enhance the discriminative capability of the hybrid network, a prototype-learning-guided prediction module is proposed, in which category-specific prototypical features are computed through online clustering. All learned prototypical features are finally combined with the decoder features for tumor mask prediction. Experimental results on private and public DCE-MRI datasets demonstrate that the proposed hybrid network outperforms state-of-the-art (SOTA) methods while maintaining a balance between segmentation accuracy and computational cost. Moreover, we show that the automatically generated tumor masks can be effectively used to distinguish the HER2-positive subtype from the HER2-negative subtype, with accuracy similar to analyses based on manual tumor segmentation. The source code is available at https://github.com/ZhouL-lab/PLHN.
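The prototype-learning-guided prediction described above can be illustrated with a minimal sketch. This is an assumed simplification, not the paper's implementation: category-specific prototypes are taken as per-class means of voxel features, updated with an exponential moving average (a common stand-in for online clustering), and predictions are scored by cosine similarity between voxel features and prototypes. The function names, the momentum value, and the 2D feature shapes are all hypothetical.

```python
import numpy as np

def update_prototypes(features, labels, prototypes, momentum=0.9):
    """EMA update of class prototypes (hypothetical online-clustering stand-in).

    features:   (N, C) array of per-voxel feature vectors
    labels:     (N,) integer class ids (e.g. 0 = background, 1 = tumor)
    prototypes: (K, C) array of current class prototypes, updated in place
    """
    for k in range(prototypes.shape[0]):
        mask = labels == k
        if mask.any():
            batch_proto = features[mask].mean(axis=0)
            prototypes[k] = momentum * prototypes[k] + (1.0 - momentum) * batch_proto
    return prototypes

def prototype_logits(features, prototypes):
    """Cosine similarity between each voxel feature and each class prototype."""
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    p = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-8)
    return f @ p.T  # (N, K) similarity scores, one column per class
```

In the paper these prototype-based scores are combined with the decoder's own features before the final mask prediction; here the similarity map alone serves to show the mechanism.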