

DC²T: Disentanglement-Guided Consolidation and Consistency Training for Semi-Supervised Cross-Site Continual Segmentation.

Authors

Zhang Jingyang, Pei Jialun, Xu Dunyuan, Jin Yueming, Heng Pheng-Ann

Publication

IEEE Trans Med Imaging. 2025 Feb;44(2):903-914. doi: 10.1109/TMI.2024.3469528. Epub 2025 Feb 4.

Abstract

Continual Learning (CL) is recognized as a storage-efficient and privacy-protecting approach for learning from sequentially arriving medical sites. However, most existing CL methods assume that each site is fully labeled, which is impractical due to budget and expertise constraints. This paper studies Semi-Supervised Continual Learning (SSCL), in which partially-labeled sites arrive over time, each delivering only limited labeled data while the majority remains unlabeled. In this setting, it is challenging to effectively utilize unlabeled data under dynamic cross-site domain gaps, leading to intractable model forgetting on such unlabeled data. To address this problem, we introduce a novel Disentanglement-guided Consolidation and Consistency Training (DC2T) framework, which is rooted in an Online Semi-Supervised representation Disentanglement (OSSD) perspective to extract content representations of partially labeled data from sites arriving over time. These content representations are then consolidated for site-invariance and calibrated for style-robustness, in order to alleviate forgetting even in the absence of ground truth. Specifically, for invariance on previous sites, we retain historical content representations when learning on a new site via a Content-inspired Parameter Consolidation (CPC) method that prevents altering the model parameters crucial for content preservation. For robustness against style variation, we develop a Style-induced Consistency Training (SCT) scheme that enforces segmentation consistency over style-related perturbations to recalibrate content encoding. We extensively evaluate our method on fundus and cardiac image segmentation, demonstrating its advantage over existing SSCL methods in alleviating forgetting on unlabeled data.
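The two regularizers named in the abstract can be sketched in minimal form. The sketch below is an illustrative assumption, not the authors' implementation: it renders CPC as an importance-weighted penalty on parameter drift (in the spirit of elastic weight consolidation) and SCT as a prediction-consistency loss between an image and a style-perturbed view of it. All function names, the toy gamma/intensity perturbation, and the loss weights are hypothetical.

```python
import numpy as np

def cpc_penalty(theta, theta_old, importance, lam=1.0):
    """Content-inspired Parameter Consolidation (sketch): penalize changes
    to parameters deemed important for preserving content representations
    learned on previous sites."""
    return lam * np.sum(importance * (theta - theta_old) ** 2)

def style_perturb(image, gamma=1.2, shift=0.05):
    """Toy style perturbation: a gamma/intensity change that alters
    appearance (style) but not anatomy (content)."""
    return np.clip(image ** gamma + shift, 0.0, 1.0)

def sct_consistency(predict, image):
    """Style-induced Consistency Training (sketch): segmentations of an
    image and of its style-perturbed view should agree, even without
    ground-truth labels."""
    p_orig = predict(image)
    p_pert = predict(style_perturb(image))
    return np.mean((p_orig - p_pert) ** 2)

# Toy usage with a trivial threshold "segmenter" (hypothetical):
rng = np.random.default_rng(0)
img = rng.random((8, 8))
predict = lambda x: (x > 0.5).astype(float)
theta, theta_old = np.ones(4), np.zeros(4)
total_reg = cpc_penalty(theta, theta_old, np.full(4, 0.5)) \
    + sct_consistency(predict, img)
```

In a real training loop both terms would be added to the supervised loss on the labeled subset, with the importance weights estimated from the content branch of the disentangled encoder.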

