Liu Xiaofeng, Yoo Chaehwa, Xing Fangxu, Kuo C-C Jay, El Fakhri Georges, Kang Je-Won, Woo Jonghye
Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA.
Dept. of Electronic and Electrical Engineering and Graduate Program in Smart Factory, Ewha Womans University, Seoul, South Korea.
Proc SPIE Int Soc Opt Eng. 2022 Feb-Mar;12032. doi: 10.1117/12.2607895. Epub 2022 Apr 4.
Unsupervised domain adaptation (UDA) has been widely used to transfer knowledge from a labeled source domain to an unlabeled target domain, countering the difficulty of labeling data in a new domain. Conventional solutions usually rely on the availability of both source and target domain data during training. However, the privacy of the large-scale, well-labeled source-domain data and of the trained model parameters can be a major concern in cross-center/domain collaborations. To address this, we propose a practical solution to UDA for segmentation that requires only a black-box segmentation model trained in the source domain, rather than the original source data or a white-box source model. Specifically, we resort to a knowledge distillation scheme with exponential mixup decay (EMD) to gradually learn target-specific representations. In addition, unsupervised entropy minimization is applied to regularize the target-domain confidence. We evaluated our framework on the BraTS 2018 database, achieving performance on par with white-box source-model adaptation approaches.
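The two ingredients named above can be sketched in a few lines. The following is a minimal, hedged illustration (not the authors' implementation): pseudo-labels from the black-box source model are mixed with the target model's own predictions under an exponentially decaying teacher weight, and an entropy term encourages confident target-domain predictions. The function names, the decay rate, and `lam0` are assumptions for illustration only.

```python
import numpy as np

def emd_weight(step, total_steps, lam0=1.0, rate=5.0):
    # Exponential mixup decay: the weight on the black-box (teacher)
    # prediction starts at lam0 and decays toward 0 as training
    # progresses, shifting trust to the target (student) model.
    # The decay rate of 5.0 is a hypothetical choice.
    return lam0 * np.exp(-rate * step / total_steps)

def mixed_pseudo_label(teacher_prob, student_prob, step, total_steps):
    # Convex combination of teacher and student class-probability maps;
    # used as the distillation target for the student at this step.
    lam = emd_weight(step, total_steps)
    return lam * teacher_prob + (1.0 - lam) * student_prob

def entropy_loss(prob, eps=1e-8):
    # Mean Shannon entropy over pixels (last axis = classes).
    # Minimizing this term sharpens target-domain predictions.
    return float(-np.mean(np.sum(prob * np.log(prob + eps), axis=-1)))
```

Early in training the mixed pseudo-label follows the frozen black-box model; later it follows the adapting student, so target-specific representations are learned gradually without ever exposing source data or source model weights.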