Gui Chengzhi, An Xingwei, Liu Shuang, Ming Dong
Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China.
Med Phys. 2025 Jun;52(6):4175-4187. doi: 10.1002/mp.17742. Epub 2025 Mar 16.
Accurate lesion segmentation in multimodal magnetic resonance imaging (MRI) benefits quantitative analysis and precision medicine.
Multimodal MRI fusion segmentation networks still face two main issues. First, simple feature concatenation fails to capture the complex relationships between modalities because it ignores how the relative importance of each modality's features changes dynamically. Second, non-learnable upsampling (e.g., fixed interpolation) causes feature misalignment during feature aggregation in the decoder: feature maps from different levels become spatially misaligned, which ultimately produces pixel-level classification errors in the predictions.
This paper introduces the Self-adaptive weighted fusion and Self-adaptive aligned Network (S²Net), which comprises two key modules: the Self-Adaptive Weighted Fusion Module (SWFM) and the Self-Adaptive Aligned Module (SAM). S²Net adaptively assigns fusion weights according to the importance of each modality, and it adaptively learns feature deformation fields that produce dynamic, flexible sampling grids for feature alignment. As a result, the upsampled late-stage features carry correct spatial locations and precise lesion boundaries.
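To make the two mechanisms concrete, the sketch below shows one plausible PyTorch realization: an SWFM-style block in which global pooling yields a score per modality and a softmax turns the scores into fusion weights, and a SAM-style block in which a convolution predicts an offset field that warps the upsampled deep features via grid_sample. This is a minimal illustration under stated assumptions, not the authors' implementation (which lives in the linked S2Net repository); the layer choices, the 2D rather than volumetric operations, and all tensor shapes are assumptions made for brevity.

```python
# Minimal, illustrative sketch of self-adaptive weighted fusion and
# self-adaptive alignment. All design details here are assumptions;
# see https://github.com/Cooper-Gu/S2Net for the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SWFM(nn.Module):
    """Self-adaptive weighted fusion: softmax-normalized modality weights."""

    def __init__(self, channels: int):
        super().__init__()
        # One scalar "importance" score per modality from its global context.
        self.score = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, 1, kernel_size=1),
        )

    def forward(self, feats):
        # feats: list of per-modality feature maps, each (B, C, H, W).
        scores = torch.cat([self.score(f) for f in feats], dim=1)  # (B, M, 1, 1)
        weights = torch.softmax(scores, dim=1)  # fusion weights sum to 1
        return sum(weights[:, m:m + 1] * f for m, f in enumerate(feats))


class SAM(nn.Module):
    """Self-adaptive alignment: warp upsampled deep features by a learned offset field."""

    def __init__(self, channels: int):
        super().__init__()
        # Predict a 2-channel (x, y) deformation field from both feature levels.
        self.flow = nn.Conv2d(channels * 2, 2, kernel_size=3, padding=1)

    def forward(self, low, deep):
        # low: skip features (B, C, H, W); deep: coarser decoder features (B, C, h, w).
        deep_up = F.interpolate(deep, size=low.shape[2:], mode="bilinear",
                                align_corners=True)
        offsets = self.flow(torch.cat([low, deep_up], dim=1))  # (B, 2, H, W)
        b, _, h, w = low.shape
        # Identity sampling grid in grid_sample's normalized [-1, 1] coordinates.
        ys, xs = torch.meshgrid(
            torch.linspace(-1.0, 1.0, h, device=low.device),
            torch.linspace(-1.0, 1.0, w, device=low.device),
            indexing="ij",
        )
        grid = torch.stack((xs, ys), dim=-1).expand(b, -1, -1, -1)  # (B, H, W, 2)
        # Scale pixel offsets into the normalized coordinate range.
        scale = torch.tensor([2.0 / max(w - 1, 1), 2.0 / max(h - 1, 1)],
                             device=low.device)
        warped = F.grid_sample(deep_up,
                               grid + offsets.permute(0, 2, 3, 1) * scale,
                               mode="bilinear", align_corners=True)
        return warped  # deep features re-aligned to the skip features


if __name__ == "__main__":
    feats = [torch.randn(2, 32, 64, 64) for _ in range(4)]  # e.g., four MRI modalities
    fused = SWFM(32)(feats)                                 # (2, 32, 64, 64)
    aligned = SAM(32)(fused, torch.randn(2, 32, 32, 32))    # (2, 32, 64, 64)
```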
Experiments are conducted on two MRI datasets, ISLES 2022 and BraTS 2020. On ISLES 2022, compared with the second-best network, MedNeXt, the proposed S²Net improves the Dice Similarity Coefficient (DSC) by 3.52%, Intersection over Union (IoU) by 1.67%, and sensitivity by 4.7%, and reduces the 95th-percentile Hausdorff Distance (HD95) by 0.33 mm. On BraTS 2020, again compared with MedNeXt, S²Net improves mean DSC by 1.32%, mean IoU by 2.07%, and mean sensitivity by 2.17%, and reduces mean HD95 by 0.10 mm. The code is open source at https://github.com/Cooper-Gu/S2Net.
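For reference, the reported overlap metrics follow their standard definitions on binary masks; the short NumPy sketch below computes them (HD95 additionally requires boundary distance computations, e.g., via SciPy or MedPy, and is omitted here). The function name and the assumption of non-empty masks are illustrative, not taken from the paper.

```python
import numpy as np

def overlap_metrics(pred: np.ndarray, gt: np.ndarray):
    """DSC, IoU, and sensitivity for binary masks (assumes non-empty masks)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    dsc = 2.0 * tp / (pred.sum() + gt.sum())   # Dice similarity coefficient
    iou = tp / np.logical_or(pred, gt).sum()   # intersection over union
    sensitivity = tp / gt.sum()                # true-positive rate (recall)
    return dsc, iou, sensitivity
```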
Experimental results demonstrate that S²Net outperforms MedNeXt, FFNet, and ACMINet on multimodal MRI segmentation.