Li Weilu, Zhang Yun, Zhou Hao, Yang Wenhan, Xie Zhi, He Yao
State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China.
Med Image Anal. 2025 Feb;100:103404. doi: 10.1016/j.media.2024.103404. Epub 2024 Nov 24.
Deep learning shows promise for medical image segmentation, but performance declines when models are applied across healthcare sites because of data discrepancies among them. Translating deep learning models to new clinical environments is challenging, especially when the original source data used for training are unavailable due to privacy restrictions. Source-free domain adaptation (SFDA) aims to adapt models to new unlabeled target domains without requiring access to the original source data. However, existing SFDA methods face challenges such as error propagation, misalignment of visual and structural features, and failure to preserve source knowledge. This paper introduces Continual Learning Multi-Scale domain adaptation (CLMS), an end-to-end SFDA framework that integrates multi-scale reconstruction, continual learning, and style alignment to bridge domain gaps across medical sites using only unlabeled target data or publicly available data. Compared with current state-of-the-art methods, CLMS consistently and significantly achieved the top performance across tasks, including prostate MRI segmentation (Dice improved by 10.87%), colonoscopy polyp segmentation (Dice improved by 17.73%), and plus disease classification from retinal images (AUC improved by 11.19%). Crucially, CLMS preserved source knowledge for all tasks, avoiding catastrophic forgetting. CLMS offers a promising solution for translating deep learning models to new clinical imaging domains, enabling safe and reliable deployment across diverse healthcare settings.
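For readers unfamiliar with the setting, the sketch below illustrates only the general shape of a source-free adaptation loop, not the CLMS method itself: a source-pretrained segmentation model is fine-tuned on unlabeled target images using confidence-filtered pseudo-labels from an EMA teacher (one common way to limit forgetting of source knowledge), plus a simple feature-statistics alignment term as a crude stand-in for style alignment. All function names, the backbone/head split, and the stored source reference statistics (ref_mu, ref_sigma) are assumptions for illustration.

import copy
import torch
import torch.nn.functional as F

def style_stats(feat):
    # Per-channel mean/std of a (B, C, H, W) feature map; a crude proxy for "style".
    mu = feat.mean(dim=(2, 3))
    sigma = feat.std(dim=(2, 3)) + 1e-6
    return mu, sigma

def adapt_to_target(model, target_loader, ref_mu, ref_sigma,
                    steps=1000, lr=1e-4, ema_decay=0.99, conf_thresh=0.9):
    # Adapt a source-pretrained segmentation model using only unlabeled target images.
    teacher = copy.deepcopy(model).eval()     # frozen-ish copy helps preserve source knowledge
    optim = torch.optim.Adam(model.parameters(), lr=lr)

    for _, images in zip(range(steps), target_loader):
        feats = model.backbone(images)        # assumes the model exposes a backbone/head split
        logits = model.head(feats)

        with torch.no_grad():                 # teacher pseudo-labels, confidence-filtered
            t_logits = teacher.head(teacher.backbone(images))
            conf, pseudo = torch.softmax(t_logits, dim=1).max(dim=1)
            mask = conf > conf_thresh

        ce = F.cross_entropy(logits, pseudo, reduction="none")
        seg_loss = ce[mask].mean() if mask.any() else ce.sum() * 0.0

        mu, sigma = style_stats(feats)        # pull target feature statistics toward
        style_loss = F.l1_loss(mu, ref_mu) + F.l1_loss(sigma, ref_sigma)  # stored source stats

        loss = seg_loss + 0.1 * style_loss
        optim.zero_grad()
        loss.backward()
        optim.step()

        with torch.no_grad():                 # slow EMA update keeps the teacher near the source model
            for p_t, p_s in zip(teacher.parameters(), model.parameters()):
                p_t.mul_(ema_decay).add_(p_s, alpha=1 - ema_decay)

    return model

The EMA teacher and the hard confidence threshold are one generic way to keep pseudo-label error propagation in check; the actual CLMS framework addresses these issues through multi-scale reconstruction, continual learning, and style alignment as described in the paper.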