Liu Xiaofeng, Xing Fangxu, Prince Jerry L, Stone Maureen, El Fakhri Georges, Woo Jonghye
Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 USA.
Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218 USA.
Proc SPIE Int Soc Opt Eng. 2022 Feb-Mar;12032. doi: 10.1117/12.2610655. Epub 2022 Apr 4.
Cycle reconstruction regularized adversarial training (e.g., CycleGAN, DiscoGAN, and DualGAN) has been widely used for image style transfer with unpaired training data. Several recent works, however, have shown that local distortions are frequent and structural consistency cannot be guaranteed. To address this issue, prior works usually relied on additional segmentation or consistent feature extraction steps that are task-specific. By contrast, this work aims to learn a general add-on structural feature extractor by explicitly enforcing structural alignment between an input and its synthesized image. Specifically, we propose a novel self-training scheme on input-output image patches to disentangle the underlying anatomical structure from the imaging modality. The translator and structure encoder are updated following an alternating training protocol, and information about the imaging modality is eliminated with an asymmetric adversarial game. We train, validate, and test our network on 1,768, 416, and 1,560 unpaired, subject-independent slices of tagged and cine magnetic resonance imaging from a total of twenty healthy subjects, respectively, demonstrating superior performance over competing methods.
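The alternating training protocol and the asymmetric adversarial game can be illustrated with a minimal PyTorch-style sketch. All module architectures and names here (Translator, StructureEncoder, ModalityDiscriminator), the patch extractor, and the loss weights (lambda_struct, lambda_adv) are illustrative assumptions for single-channel slices, not the authors' implementation.

```python
# Minimal sketch: alternating updates of a translator T and a structure encoder E,
# with a modality discriminator D played against E in an asymmetric adversarial game.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Translator(nn.Module):
    """Placeholder image-to-image translator (e.g., tagged MRI -> cine MRI)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

class StructureEncoder(nn.Module):
    """Add-on structural feature extractor applied to image patches."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 16, 3, padding=1))
    def forward(self, patches):
        return self.net(patches)

class ModalityDiscriminator(nn.Module):
    """Predicts which imaging modality a structural feature map came from."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))
    def forward(self, feat):
        return self.net(feat)

def extract_patches(img, size=32, stride=32):
    # Split a single-channel image batch into non-overlapping patches (B*N, 1, size, size).
    p = img.unfold(2, size, stride).unfold(3, size, stride)
    return p.contiguous().view(-1, img.size(1), size, size)

T, E, D = Translator(), StructureEncoder(), ModalityDiscriminator()
opt_T = torch.optim.Adam(T.parameters(), lr=2e-4)
opt_E = torch.optim.Adam(E.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
lambda_struct, lambda_adv = 10.0, 1.0  # assumed loss weights

def training_step(x_tagged):
    # Step 1: update the translator with a patch-wise structural alignment loss.
    y_fake = T(x_tagged)
    f_in = E(extract_patches(x_tagged)).detach()  # structure of input patches
    f_out = E(extract_patches(y_fake))            # structure of synthesized patches
    loss_T = lambda_struct * F.l1_loss(f_out, f_in)
    opt_T.zero_grad(); loss_T.backward(); opt_T.step()

    # Step 2: update the modality discriminator to tell the two domains apart.
    with torch.no_grad():
        f_in = E(extract_patches(x_tagged))
        f_out = E(extract_patches(T(x_tagged)))
    logits = torch.cat([D(f_in), D(f_out)])
    labels = torch.cat([torch.zeros(len(f_in), 1), torch.ones(len(f_out), 1)])
    loss_D = F.binary_cross_entropy_with_logits(logits, labels)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Step 3: update the structure encoder so its features carry no modality cue
    # (the asymmetric part: E plays against a fixed D, pushing predictions to 0.5).
    f_in = E(extract_patches(x_tagged))
    f_out = E(extract_patches(T(x_tagged)).detach())
    logits = torch.cat([D(f_in), D(f_out)])
    loss_E = lambda_adv * F.binary_cross_entropy_with_logits(
        logits, torch.full_like(logits, 0.5))
    opt_E.zero_grad(); loss_E.backward(); opt_E.step()

# Toy usage with a random batch of 64x64 single-channel slices.
training_step(torch.randn(2, 1, 64, 64))
```

The three-step loop mirrors the alternating protocol described in the abstract: the translator is trained against structural features of its own input, while the encoder and discriminator compete so that the retained features describe anatomy rather than modality.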