Cai Zhuotong, Xin Jingmin, You Chenyu, Shi Peiwen, Dong Siyuan, Dvornek Nicha C, Zheng Nanning, Duncan James S
National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, Shaanxi, China; Department of Biomedical Engineering, Yale University, New Haven, CT, USA.
National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, Shaanxi, China.
Med Image Anal. 2025 Apr;101:103440. doi: 10.1016/j.media.2024.103440. Epub 2024 Dec 30.
Unsupervised domain adaptation (UDA) has shown impressive performance in cross-modality medical image segmentation by improving model generalizability to tackle the domain shift problem. However, most existing UDA approaches depend on high-quality image translation with diversity constraints to explicitly augment data diversity, which makes it hard to ensure semantic consistency and to capture domain-invariant representations. In this paper, we propose a novel Style Mixup Enhanced Disentanglement Learning (SMEDL) framework for UDA medical image segmentation that is free of image translation and diversity constraints, further improving domain generalization and strengthening domain-invariant learning. First, our method adopts disentangled style mixup, which implicitly generates style-mixed domains with diverse styles in the feature space through a convex combination of disentangled style factors, effectively improving model generalization. We further introduce pixel-wise consistency regularization to ensure the effectiveness of the style-mixed domains and to provide domain-consistency guidance. Second, we introduce dual-level domain-invariant learning, comprising intra-domain contrastive learning and inter-domain adversarial learning, to mine the underlying domain-invariant representation under both intra- and inter-domain variations. We conducted comprehensive experiments on two public cardiac datasets and one brain dataset. Experimental results demonstrate that our method outperforms state-of-the-art methods for UDA medical image segmentation.
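The abstract's core operation, disentangled style mixup, interpolates style factors between domains in feature space rather than translating images. Below is a minimal PyTorch sketch of that idea together with the pixel-wise consistency term. The function names (`style_mixup`, `pixel_consistency_loss`), the Beta-sampled mixing coefficient, and the MSE-on-softmax form of the consistency loss are illustrative assumptions, not the paper's exact formulation, which the abstract does not specify.

```python
import torch
import torch.nn.functional as F


def style_mixup(style_src: torch.Tensor, style_tgt: torch.Tensor,
                alpha: float = 0.2) -> torch.Tensor:
    """Convex combination of disentangled style factors (illustrative sketch).

    style_src / style_tgt: style codes produced by a disentangling encoder,
    e.g. shape (B, C) or (B, C, 1, 1). The Beta(alpha, alpha) mixing
    coefficient is an assumption; the abstract only specifies a convex
    combination.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    return lam * style_src + (1.0 - lam) * style_tgt


def pixel_consistency_loss(pred_orig: torch.Tensor,
                           pred_mixed: torch.Tensor) -> torch.Tensor:
    """Pixel-wise consistency between segmentation predictions from the
    original and style-mixed features; both are (B, K, H, W) logits.
    MSE between softmax maps is one common choice, assumed here."""
    return F.mse_loss(pred_mixed.softmax(dim=1),
                      pred_orig.softmax(dim=1).detach())


# Usage sketch with random stand-ins for encoder outputs: mix source and
# target style codes, then regularize the segmenter's prediction on the
# style-mixed features toward its prediction on the original features.
B, C, K, H, W = 2, 64, 4, 32, 32
style_s, style_t = torch.randn(B, C, 1, 1), torch.randn(B, C, 1, 1)
mixed_style = style_mixup(style_s, style_t)
logits_orig, logits_mixed = torch.randn(B, K, H, W), torch.randn(B, K, H, W)
loss_cons = pixel_consistency_loss(logits_orig, logits_mixed)
```

Because the mixing happens in the style subspace of the encoder, no translated images or explicit diversity constraints are needed; the consistency term anchors predictions on the style-mixed domain to those on the original content. The paper's dual-level losses (intra-domain contrastive, inter-domain adversarial) would sit on top of this, but their exact forms are not given in the abstract and are omitted here.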