Yang Mingjing, Wu Zhicheng, Zheng Hanyu, Huang Liqin, Ding Wangbin, Pan Lin, Yin Lei
College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China.
School of Medical Imaging, Fujian Medical University, Fuzhou 350122, China.
Diagnostics (Basel). 2024 Aug 12;14(16):1751. doi: 10.3390/diagnostics14161751.
Given the diversity of medical images, traditional image segmentation models face the problem of domain shift. Unsupervised domain adaptation (UDA) methods have emerged as a pivotal strategy for cross-modality analysis. These methods typically employ generative adversarial networks (GANs) for both image-level and feature-level domain adaptation through image translation and reconstruction, under the assumption that features across domains are well aligned. However, this assumption falters when the gap between modalities is large, as between MRI and CT. Such gaps hinder the effective training of segmentation networks on cross-modality images and can lead to misleading training guidance and instability. To address these challenges, this paper introduces a novel approach comprising a cross-modality feature alignment sub-network and a cross-pseudo-supervised dual-stream segmentation sub-network, which together bridge domain discrepancies more effectively and ensure a stable training environment. The feature alignment sub-network performs bidirectional alignment of features between the source and target domains, incorporating a self-attention module to help learn structurally consistent and relevant information. The segmentation sub-network leverages an enhanced cross-pseudo-supervised loss to harmonize the outputs of the two segmentation networks, assessing pseudo-distances between domains to improve pseudo-label quality and thereby the overall learning efficiency of the framework. The method's effectiveness is demonstrated by notable gains in segmentation accuracy on the target domains of abdominal and brain tasks.
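To make the dual-stream idea concrete, below is a minimal PyTorch sketch of a cross-pseudo-supervised loss in which each segmentation network is supervised by the hard pseudo-labels of its peer. The confidence-threshold weighting (`conf_threshold`) is an illustrative stand-in for the paper's pseudo-distance assessment, whose exact formulation is not given in the abstract; the function name and parameters are hypothetical.

```python
# Minimal sketch of cross-pseudo supervision between two segmentation streams,
# assuming standard (N, C, H, W) logits. The confidence-based masking is an
# assumed proxy for the paper's pseudo-distance-based pseudo-label weighting.
import torch
import torch.nn.functional as F

def cross_pseudo_supervised_loss(logits_a, logits_b, conf_threshold=0.75):
    """Each network learns from its peer's hard pseudo-labels.

    logits_a, logits_b: raw outputs of the two segmentation streams.
    conf_threshold: pixels whose peer confidence falls below this value
                    are masked out (a simple pseudo-label quality gate).
    """
    prob_a = torch.softmax(logits_a.detach(), dim=1)
    prob_b = torch.softmax(logits_b.detach(), dim=1)

    conf_a, pseudo_a = prob_a.max(dim=1)   # pseudo-labels from stream A
    conf_b, pseudo_b = prob_b.max(dim=1)   # pseudo-labels from stream B

    # Per-pixel cross-entropy: A learns from B's labels and vice versa.
    loss_a = F.cross_entropy(logits_a, pseudo_b, reduction="none")
    loss_b = F.cross_entropy(logits_b, pseudo_a, reduction="none")

    # Down-weight low-confidence pseudo-labels instead of trusting them blindly.
    weight_a = (conf_b >= conf_threshold).float()
    weight_b = (conf_a >= conf_threshold).float()

    return (loss_a * weight_a).mean() + (loss_b * weight_b).mean()
```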