


Histogram matching-enhanced adversarial learning for unsupervised domain adaptation in medical image segmentation.

Authors

Qian Xiaoxue, Shao Hua-Chieh, Li Yunxiang, Lu Weiguo, Zhang You

Affiliation

Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA.

Publication

Med Phys. 2025 Mar 18. doi: 10.1002/mp.17757.

DOI: 10.1002/mp.17757
PMID: 40102198
Abstract

BACKGROUND

Unsupervised domain adaptation (UDA) seeks to mitigate the performance degradation of deep neural networks when applied to new, unlabeled domains by leveraging knowledge from source domains. In medical image segmentation, prevailing UDA techniques often employ adversarial learning to address domain shifts for cross-modality adaptation. Current research on adversarial learning tends to adopt increasingly complex models and loss functions, making the training process highly intricate and less stable and robust. Furthermore, most methods focus primarily on segmentation accuracy while neglecting the associated confidence levels and uncertainties.

PURPOSE

To develop a simple yet effective UDA method based on histogram matching-enhanced adversarial learning (HMeAL-UDA), and provide comprehensive uncertainty estimations of the model predictions.

METHODS

Aiming to bridge the domain gap while reducing model complexity, we developed a novel adversarial learning approach to align multi-modality features. The method, termed HMeAL-UDA, integrates a plug-and-play histogram matching strategy to mitigate domain-specific image style biases across modalities. We employed adversarial learning to constrain the model in the prediction space, enabling it to focus on domain-invariant features during segmentation. Moreover, we quantified the model's prediction confidence using Monte Carlo (MC) dropout to assess two voxel-level uncertainty estimates of the segmentation results, which were subsequently aggregated into a volume-level uncertainty score, providing an overall measure of the model's reliability. The proposed method was evaluated on three public datasets (Combined Healthy Abdominal Organ Segmentation [CHAOS], Beyond the Cranial Vault [BTCV], and Abdominal Multi-Organ Segmentation Challenge [AMOS]) and one in-house clinical dataset (UTSW). We used 30 MRI scans (20 from the CHAOS dataset and 10 from the in-house dataset) and 30 CT scans from the BTCV dataset for UDA-based, cross-modality liver segmentation. Additionally, 240 CT scans and 60 MRI scans from the AMOS dataset were utilized for cross-modality multi-organ segmentation. The training and testing sets for each modality were split with ratios of approximately 4:1 to 3:1.
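The abstract does not give implementation details, but the plug-and-play histogram matching it describes is classically done by CDF (quantile) matching: each source-image intensity is replaced by the reference-image intensity at the same rank. The sketch below is an illustrative NumPy implementation under that assumption, not the authors' code; the function name and rank-interpolation details are ours.

```python
import numpy as np

def match_histograms(source, reference):
    """Map the intensity distribution of `source` onto that of `reference`
    via CDF matching (quantile mapping). Works on arrays of any shape."""
    src_flat = source.ravel()
    ref_sorted = np.sort(reference.ravel())
    # Rank of each source voxel (0 = smallest value).
    src_ranks = np.argsort(np.argsort(src_flat))
    # Convert ranks to quantiles in [0, 1], then look up the reference
    # intensity at each quantile; np.interp handles size mismatches.
    src_quantiles = src_ranks / max(src_flat.size - 1, 1)
    ref_quantiles = np.linspace(0.0, 1.0, ref_sorted.size)
    matched = np.interp(src_quantiles, ref_quantiles, ref_sorted)
    return matched.reshape(source.shape)
```

In a pipeline like the one described, this transform would be applied to each source-domain image with a target-domain image as the reference, before adversarial training, so the network sees source images restyled toward the target modality's intensity statistics.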

RESULTS

Extensive experiments on cross-modality medical image segmentation demonstrated the superiority of HMeAL-UDA over two state-of-the-art approaches. HMeAL-UDA achieved a mean (± s.d.) Dice similarity coefficient (DSC) of 91.34% ± 1.23% and an HD95 of 6.18 ± 2.93 mm for cross-modality (CT to MRI) adaptation of abdominal multi-organ segmentation, and a DSC of 87.13% ± 3.67% with an HD95 of 2.48 ± 1.56 mm for segmentation adaptation in the opposite direction (MRI to CT). These results approach or even outperform those of supervised methods trained with "ground-truth" labels in the target domain. In addition, we provide a comprehensive assessment of the model's uncertainty, which can help clinicians understand segmentation reliability to guide clinical decisions.
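The DSC reported above is the standard overlap measure 2|A∩B| / (|A| + |B|) between predicted and reference binary masks. A minimal NumPy sketch (illustrative, not the paper's evaluation code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 * |pred ∩ target| / (|pred| + |target|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty.
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)
```

A DSC of 1.0 indicates perfect overlap. HD95 (the 95th-percentile Hausdorff distance) complements it by penalizing boundary deviations, which a pure overlap score can understate for large organs.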

CONCLUSION

HMeAL-UDA provides a powerful segmentation tool to address cross-modality domain shifts, with the potential to generalize to other deep learning applications in medical imaging.


Similar Articles

1. Semi-supervised abdominal multi-organ segmentation by object-redrawing.
Med Phys. 2024 Nov;51(11):8334-8347. doi: 10.1002/mp.17364. Epub 2024 Aug 21.
2. Style mixup enhanced disentanglement learning for unsupervised domain adaptation in medical image segmentation.
Med Image Anal. 2025 Apr;101:103440. doi: 10.1016/j.media.2024.103440. Epub 2024 Dec 30.
3. Deep cross-modality (MR-CT) educed distillation learning for cone beam CT lung tumor segmentation.
Med Phys. 2021 Jul;48(7):3702-3713. doi: 10.1002/mp.14902. Epub 2021 May 25.
4. ISGAN: Unsupervised Domain Adaptation With Improved Symmetric GAN for Cross-Modality Multi-Organ Segmentation.
IEEE J Biomed Health Inform. 2025 Jun;29(6):3874-3885. doi: 10.1109/JBHI.2024.3507092.
5. LE-UDA: Label-Efficient Unsupervised Domain Adaptation for Medical Image Segmentation.
IEEE Trans Med Imaging. 2023 Mar;42(3):633-646. doi: 10.1109/TMI.2022.3214766. Epub 2023 Mar 2.
6. IAS-NET: Joint intraclassly adaptive GAN and segmentation network for unsupervised cross-domain in neonatal brain MRI segmentation.
Med Phys. 2021 Nov;48(11):6962-6975. doi: 10.1002/mp.15212. Epub 2021 Sep 25.
7. A medical unsupervised domain adaptation framework based on Fourier transform image translation and multi-model ensemble self-training strategy.
Int J Comput Assist Radiol Surg. 2023 Oct;18(10):1885-1894. doi: 10.1007/s11548-023-02867-5. Epub 2023 Apr 3.
8. Multiscale unsupervised domain adaptation for automatic pancreas segmentation in CT volumes using adversarial learning.
Med Phys. 2022 Sep;49(9):5799-5818. doi: 10.1002/mp.15827. Epub 2022 Jul 27.
9. LMISA: A lightweight multi-modality image segmentation network via domain adaptation using gradient magnitude and shape constraint.
Med Image Anal. 2022 Oct;81:102536. doi: 10.1016/j.media.2022.102536. Epub 2022 Jul 13.
