
Attentive Continuous Generative Self-training for Unsupervised Domain Adaptive Medical Image Translation.

Authors

Liu Xiaofeng, Prince Jerry L, Xing Fangxu, Zhuo Jiachen, Reese Timothy, Stone Maureen, El Fakhri Georges, Woo Jonghye

Affiliations

Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA.

Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA.

Publication Information

ArXiv. 2023 May 23:arXiv:2305.14589v1.

Abstract

Self-training is an important class of unsupervised domain adaptation (UDA) approaches used to mitigate the problem of domain shift when applying knowledge learned from a labeled source domain to unlabeled and heterogeneous target domains. While self-training-based UDA has shown considerable promise on discriminative tasks, including classification and segmentation, through reliable pseudo-label filtering based on the maximum softmax probability, there is a paucity of prior work on self-training-based UDA for generative tasks, including image modality translation. To fill this gap, we develop a generative self-training (GST) framework for domain adaptive image translation with continuous value prediction and regression objectives. Specifically, we quantify both aleatoric and epistemic uncertainties within our GST using variational Bayes learning to measure the reliability of the synthesized data. We also introduce a self-attention scheme that de-emphasizes the background region to prevent it from dominating the training process. Adaptation is then carried out by an alternating optimization scheme with target domain supervision that focuses attention on the regions with reliable pseudo-labels. We evaluated our framework on two cross-scanner/center, inter-subject translation tasks: tagged-to-cine magnetic resonance (MR) image translation and T1-weighted MR-to-fractional anisotropy translation. Extensive validation with unpaired target domain data showed that our GST yielded superior synthesis performance compared with adversarial training UDA methods.
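
The uncertainty-gated self-training the abstract describes can be illustrated with a short sketch. The following PyTorch code is not the authors' implementation; it is a minimal illustration, with hypothetical names, of one common way to realize the ingredients named above: epistemic uncertainty from Monte Carlo dropout (a stochastic approximation to variational Bayes), aleatoric uncertainty from a predicted variance head, and a regression loss on pseudo-labels that is down-weighted where total uncertainty is high and, optionally, outside a foreground attention map.

```python
# Minimal sketch (not the authors' code): uncertainty-weighted pseudo-label
# regression for generative self-training. Assumes a translator network with
# a mean head and a log-variance head, and MC dropout for epistemic variance.
import torch
import torch.nn as nn

class Translator(nn.Module):
    """Toy image-to-image network; the dropout layer enables MC sampling."""
    def __init__(self, ch=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(0.2),                        # kept active for MC dropout
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )
        self.mean_head = nn.Conv2d(ch, 1, 3, padding=1)    # predicted image
        self.logvar_head = nn.Conv2d(ch, 1, 3, padding=1)  # aleatoric term

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

@torch.no_grad()
def mc_pseudo_labels(model, x, n_samples=8):
    """Stochastic forward passes yield a pseudo-label and its uncertainty."""
    model.train()                                # keep dropout on for sampling
    means, logvars = zip(*(model(x) for _ in range(n_samples)))
    means = torch.stack(means)                   # [S, B, 1, H, W]
    pseudo = means.mean(0)                       # MC-averaged pseudo-label
    epistemic = means.var(0)                     # variance across MC passes
    aleatoric = torch.stack(logvars).exp().mean(0)
    return pseudo, epistemic + aleatoric         # total predictive uncertainty

def self_training_loss(pred, pseudo, uncertainty, attn=None):
    """Regression loss, down-weighted where pseudo-labels are unreliable;
    `attn` is an optional foreground map that de-emphasizes background."""
    weight = torch.exp(-uncertainty)             # small weight at high variance
    if attn is not None:
        weight = weight * attn
    return (weight * (pred - pseudo).abs()).mean()

# Usage on an unlabeled target-domain batch:
model = Translator()
x_tgt = torch.randn(4, 1, 32, 32)
pseudo, unc = mc_pseudo_labels(model, x_tgt)
pred_mean, _ = model(x_tgt)
loss = self_training_loss(pred_mean, pseudo, unc)
loss.backward()
```

In this sketch, the continuous weight exp(-uncertainty) plays the role that maximum-softmax-probability filtering plays in discriminative self-training: it softly gates the per-pixel regression target instead of hard-thresholding class labels, which matches the continuous-value prediction setting of the paper.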


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/388d/10246114/ecc2a29e3d76/nihpp-2305.14589v1-f0001.jpg
