Qian Xiaoxue, Shao Hua-Chieh, Li Yunxiang, Lu Weiguo, Zhang You
Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA.
Med Phys. 2025 Mar 18. doi: 10.1002/mp.17757.
Unsupervised domain adaptation (UDA) seeks to mitigate the performance degradation of deep neural networks when they are applied to new, unlabeled domains by leveraging knowledge from source domains. In medical image segmentation, prevailing UDA techniques often employ adversarial learning to address domain shifts for cross-modality adaptation. However, current research on adversarial learning tends to adopt increasingly complex models and loss functions, making the training process highly intricate and less stable and robust. Furthermore, most methods focus primarily on segmentation accuracy while neglecting the associated confidence levels and uncertainties.
To develop a simple yet effective UDA method based on histogram matching-enhanced adversarial learning (HMeAL-UDA), and to provide comprehensive uncertainty estimation of the model's predictions.
Aiming to bridge the domain gap while reducing model complexity, we developed a novel adversarial learning approach to align multi-modality features. The method, termed HMeAL-UDA, integrates a plug-and-play histogram matching strategy to mitigate domain-specific image style biases across modalities. We employed adversarial learning to constrain the model in the prediction space, enabling it to focus on domain-invariant features during segmentation. Moreover, we quantified the model's prediction confidence using Monte Carlo (MC) dropout to derive two voxel-level uncertainty estimates of the segmentation results, which were subsequently aggregated into a volume-level uncertainty score, providing an overall measure of the model's reliability. The proposed method was evaluated on three public datasets (Combined Healthy Abdominal Organ Segmentation [CHAOS], Beyond the Cranial Vault [BTCV], and Abdominal Multi-Organ Segmentation Challenge [AMOS]) and one in-house clinical dataset (UTSW). We used 30 MRI scans (20 from the CHAOS dataset and 10 from the in-house dataset) and 30 CT scans from the BTCV dataset for UDA-based, cross-modality liver segmentation. Additionally, 240 CT scans and 60 MRI scans from the AMOS dataset were used for cross-modality multi-organ segmentation. The training and testing sets for each modality were split at ratios of approximately 4:1 to 3:1.
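As a rough illustration of the two components named above (the plug-and-play histogram matching step and the MC-dropout uncertainty estimation), a minimal sketch in Python might look like the following. The function names, the choice of skimage's match_histograms utility, and the entropy/standard-deviation aggregation rule are assumptions for illustration only; the abstract does not specify the actual HMeAL-UDA implementation.

```python
import numpy as np
import torch
from skimage.exposure import match_histograms  # generic histogram matching utility


def histogram_match_to_source(target_img: np.ndarray, source_ref: np.ndarray) -> np.ndarray:
    """Map the intensity distribution of a target-domain volume onto a
    source-domain reference volume (plug-and-play style alignment)."""
    return match_histograms(target_img, source_ref)


@torch.no_grad()
def mc_dropout_uncertainty(model: torch.nn.Module, volume: torch.Tensor, n_samples: int = 20):
    """Run repeated stochastic forward passes with dropout active and return the
    mean softmax prediction, two voxel-level uncertainty maps (predictive entropy
    and standard deviation across passes), and one volume-level summary score."""
    model.train()  # keep dropout layers active at inference time
    probs = torch.stack([torch.softmax(model(volume), dim=1) for _ in range(n_samples)])
    mean_prob = probs.mean(dim=0)                                    # (B, C, D, H, W)
    entropy = -(mean_prob * torch.log(mean_prob + 1e-8)).sum(dim=1)  # voxel-level entropy
    std_map = probs.std(dim=0).mean(dim=1)                           # voxel-level std across passes
    volume_score = entropy.mean().item()                             # one possible volume-level aggregate
    return mean_prob, entropy, std_map, volume_score
```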
Extensive experiments on cross-modality medical image segmentation demonstrated the superiority of HMeAL-UDA over two state-of-the-art approaches. HMeAL-UDA achieved a mean (± s.d.) Dice similarity coefficient (DSC) of 91.34% ± 1.23% and an HD95 of 6.18 ± 2.93 mm for cross-modality (CT to MRI) adaptation of abdominal multi-organ segmentation, and a DSC of 87.13% ± 3.67% with an HD95 of 2.48 ± 1.56 mm for adaptation in the opposite direction (MRI to CT). These results approach, and in some cases surpass, those of supervised methods trained with ground-truth labels in the target domain. In addition, we provide a comprehensive assessment of the model's uncertainty, which can help clinicians gauge segmentation reliability and guide clinical decisions.
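For reference, the two reported metrics can be computed from binary segmentation masks roughly as sketched below, using SciPy distance transforms for the 95th-percentile Hausdorff distance; this is a generic illustration under common definitions, not the paper's evaluation code, and voxel spacing handling is an assumption.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt


def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + 1e-8)


def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile Hausdorff distance (in mm) between mask surfaces."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    pred_surf = pred ^ binary_erosion(pred)   # surface voxels of the prediction
    gt_surf = gt ^ binary_erosion(gt)         # surface voxels of the ground truth
    # Distance from every voxel to the nearest surface voxel of the other mask
    dist_to_gt = distance_transform_edt(~gt_surf, sampling=spacing)
    dist_to_pred = distance_transform_edt(~pred_surf, sampling=spacing)
    d_pred_to_gt = dist_to_gt[pred_surf]
    d_gt_to_pred = dist_to_pred[gt_surf]
    return float(np.percentile(np.concatenate([d_pred_to_gt, d_gt_to_pred]), 95))
```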
HMeAL-UDA provides a powerful segmentation tool to address cross-modality domain shifts, with the potential to generalize to other deep learning applications in medical imaging.