Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, 02114, United States of America.
Med Image Anal. 2023 Jan;83:102641. doi: 10.1016/j.media.2022.102641. Epub 2022 Oct 1.
Unsupervised domain adaptation (UDA) is a key protocol for transferring knowledge learned from a labeled source domain to an unlabeled, heterogeneous target domain. Although UDA models are typically trained jointly on data from both domains, access to the labeled source-domain data is often restricted due to concerns over patient privacy or intellectual property. To sidestep this, we propose "off-the-shelf (OS)" UDA (OSUDA) for image segmentation, which adapts an OS segmentor trained in a source domain to a target domain without any source-domain data at adaptation time. Toward this goal, we develop a novel batch normalization (BN) statistics adaptation framework. Specifically, we gradually adapt the domain-specific low-order BN statistics, i.e., mean and variance, via an exponential momentum decay strategy, while explicitly enforcing consistency of the domain-shareable high-order BN statistics, i.e., scaling and shifting factors, through our optimization objective. We also adaptively quantify channel-wise transferability to gauge the importance of each channel, using both the low-order statistics divergence and the scaling factor. Furthermore, we incorporate unsupervised self-entropy minimization to boost performance, alongside a novel queued, memory-consistent self-training strategy that exploits reliable pseudo labels for stable and efficient unsupervised adaptation. We evaluated our OSUDA-based framework on cross-modality and cross-subtype brain tumor segmentation and on cardiac MR-to-CT segmentation tasks. Our experimental results showed that our memory-consistent OSUDA outperforms existing source-relaxed UDA methods and achieves performance comparable to UDA methods that use source data.
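The two adaptation signals described above can be illustrated with a minimal NumPy sketch. The function names, the specific decay schedule, and all hyperparameter values below are illustrative assumptions, not the paper's exact formulation: the first routine blends source-domain BN mean/variance toward target-batch statistics under an exponentially decaying momentum (early steps trust the source, later steps trust the target), and the second computes the self-entropy of softmax predictions, whose minimization encourages confident predictions on unlabeled target data.

```python
import numpy as np

def emd_adapt_bn_stats(src_mean, src_var, target_batches,
                       lambda0=0.99, decay=0.95):
    """Gradually adapt low-order BN statistics (per-channel mean and
    variance) from source values toward target-domain batch statistics.
    The momentum on the source-side statistics decays exponentially with
    the adaptation step t (schedule is an illustrative assumption)."""
    mean, var = src_mean.astype(float).copy(), src_var.astype(float).copy()
    for t, batch in enumerate(target_batches):  # batch: (N, C)
        momentum = lambda0 * (decay ** t)       # exponential momentum decay
        mean = momentum * mean + (1.0 - momentum) * batch.mean(axis=0)
        var = momentum * var + (1.0 - momentum) * batch.var(axis=0)
    return mean, var

def self_entropy(probs, eps=1e-8):
    """Mean per-sample entropy of softmax outputs probs with shape
    (..., num_classes); minimizing this drives confident predictions."""
    return float(np.mean(-np.sum(probs * np.log(probs + eps), axis=-1)))
```

For instance, starting from source statistics of zero mean and unit variance, feeding a stream of target batches centered at a different mean pulls the running statistics monotonically toward the target values, while the decaying momentum keeps the earliest updates conservative.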