Liu Xiaofeng, Xing Fangxu, Stone Maureen, Zhuo Jiachen, Reese Timothy, Prince Jerry L, Fakhri Georges El, Woo Jonghye
Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, 02114, USA.
Dept. of Neural and Pain Sciences, University of Maryland School of Dentistry, Baltimore, MD, USA.
Med Image Comput Comput Assist Interv. 2021;12903:138-148. doi: 10.1007/978-3-030-87199-4_13. Epub 2021 Sep 21.
Self-training based unsupervised domain adaptation (UDA) has shown great potential to address the problem of domain shift when applying a deep learning model trained on a source domain to unlabeled target domains. However, while self-training UDA has demonstrated its effectiveness on discriminative tasks, such as classification and segmentation, via reliable pseudo-label selection based on the softmax discrete histogram, self-training UDA for generative tasks, such as image synthesis, has not been fully investigated. In this work, we propose a novel generative self-training (GST) UDA framework with continuous-value prediction and a regression objective for cross-domain image synthesis. Specifically, we propose to filter the pseudo-labels with an uncertainty mask, and to quantify the predictive confidence of generated images with practical variational Bayes learning. Fast test-time adaptation is achieved by a round-based alternating optimization scheme. We validated our framework on the tagged-to-cine magnetic resonance imaging (MRI) synthesis problem, where the source- and target-domain datasets were acquired from different scanners or centers. Extensive validations were carried out to compare our framework against popular adversarial training UDA methods. Results show that our GST, given tagged MRI of test subjects in new target domains, improved synthesis quality by a large margin compared with the adversarial training UDA methods.
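The core idea described above, filtering continuous-valued pseudo-labels with an uncertainty mask derived from stochastic forward passes, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes the variational Bayes confidence estimate is approximated by Monte Carlo sampling (e.g. dropout kept active at test time), and the threshold name `var_threshold` and the L1 regression objective are illustrative choices.

```python
import numpy as np

def uncertainty_masked_pseudo_label(mc_preds, var_threshold):
    """Build a continuous pseudo-label and a confidence mask.

    mc_preds: array of shape (T, H, W) holding T stochastic forward
    passes for one target-domain image (e.g. Monte Carlo dropout).
    Returns the per-pixel mean prediction (the pseudo-label) and a
    binary mask keeping only pixels whose predictive variance falls
    below var_threshold.
    """
    pseudo_label = mc_preds.mean(axis=0)        # continuous-value pseudo-label
    variance = mc_preds.var(axis=0)             # per-pixel predictive uncertainty
    mask = (variance < var_threshold).astype(np.float32)
    return pseudo_label, mask

def masked_l1_loss(prediction, pseudo_label, mask):
    """Regression self-training objective restricted to confident pixels."""
    eps = 1e-8
    residual = np.abs(prediction - pseudo_label) * mask
    return residual.sum() / (mask.sum() + eps)
```

In a round-based scheme, each round would regenerate `pseudo_label` and `mask` with the current network, then update the network by minimizing the masked regression loss on the target-domain images.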