Unpaired Deep Cross-Modality Synthesis with Fast Training.

Author information

Xiang Lei, Li Yang, Lin Weili, Wang Qian, Shen Dinggang

Affiliations

Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China.

Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA.

Publication information

Deep Learn Med Image Anal Multimodal Learn Clin Decis Support (2018). 2018 Sep;11045:155-164. doi: 10.1007/978-3-030-00889-5_18. Epub 2018 Sep 20.

Abstract

Cross-modality synthesis converts an input image of one modality into an output image of another modality, and is thus valuable for both scientific research and clinical applications. Most existing cross-modality synthesis methods require a large dataset of paired data for training, yet it is often non-trivial to acquire perfectly aligned images of different modalities for the same subject. Even tiny misalignment (e.g., due to patient/organ motion) between cross-modality paired images may adversely affect training and corrupt the synthesized images. In this paper, we present a novel method for cross-modality image synthesis that trains with unpaired data. Specifically, we adopt generative adversarial networks and conduct fast training in a cyclic way. A new structural dissimilarity loss, which captures detailed anatomies, is introduced to enhance the quality of the synthesized images. We validate our proposed algorithm on three popular image synthesis tasks: brain MR-to-CT, prostate MR-to-CT, and brain 3T-to-7T. The experimental results demonstrate that our method achieves good synthesis performance using unpaired data only.
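The abstract does not give the loss formulas, so the following NumPy sketch only illustrates the two ingredients it names: an L1 cycle-consistency term (the reconstruction of an image after a forward and backward translation should match the original) and a structural dissimilarity penalty, here written as a simplified global-statistics variant of (1 - SSIM) / 2. The function names and this particular SSIM variant are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cycle_loss(real, reconstructed):
    """L1 cycle-consistency term: G_BA(G_AB(x)) should reconstruct x."""
    return np.abs(real - reconstructed).mean()

def dssim(x, y, data_range=1.0):
    """Structural dissimilarity between two images.

    A simplified variant of (1 - SSIM) / 2 computed from global image
    statistics (no sliding window), using the standard SSIM stabilizing
    constants c1 and c2.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    ssim = ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )
    return (1.0 - ssim) / 2.0
```

In a full cyclic setup, both terms would be added to the adversarial losses of the two generators; identical images give zero for both terms, and structurally different images are penalized.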

Similar articles

1
Unpaired Deep Cross-Modality Synthesis with Fast Training.
Deep Learn Med Image Anal Multimodal Learn Clin Decis Support (2018). 2018 Sep;11045:155-164. doi: 10.1007/978-3-030-00889-5_18. Epub 2018 Sep 20.
2
CMOS-GAN: Semi-Supervised Generative Adversarial Model for Cross-Modality Face Image Synthesis.
IEEE Trans Image Process. 2023;32:144-158. doi: 10.1109/TIP.2022.3226413. Epub 2022 Dec 19.
3
Paired-unpaired Unsupervised Attention Guided GAN with transfer learning for bidirectional brain MR-CT synthesis.
Comput Biol Med. 2021 Sep;136:104763. doi: 10.1016/j.compbiomed.2021.104763. Epub 2021 Aug 18.
4
Deep CT to MR Synthesis Using Paired and Unpaired Data.
Sensors (Basel). 2019 May 22;19(10):2361. doi: 10.3390/s19102361.
5
Deep Learning based Inter-Modality Image Registration Supervised by Intra-Modality Similarity.
Mach Learn Med Imaging. 2018 Sep;11046:55-63. doi: 10.1007/978-3-030-00919-9_7. Epub 2018 Sep 15.
6
Bi-Modality Medical Image Synthesis Using Semi-Supervised Sequential Generative Adversarial Networks.
IEEE J Biomed Health Inform. 2020 Mar;24(3):855-865. doi: 10.1109/JBHI.2019.2922986. Epub 2019 Jun 14.
7
Unpaired Low-Dose CT Denoising Network Based on Cycle-Consistent Generative Adversarial Network with Prior Image Information.
Comput Math Methods Med. 2019 Dec 7;2019:8639825. doi: 10.1155/2019/8639825. eCollection 2019.
8
Ea-GANs: Edge-Aware Generative Adversarial Networks for Cross-Modality MR Image Synthesis.
IEEE Trans Med Imaging. 2019 Jul;38(7):1750-1762. doi: 10.1109/TMI.2019.2895894. Epub 2019 Jan 29.
9
Reconstruction of 7T-Like Images From 3T MRI.
IEEE Trans Med Imaging. 2016 Sep;35(9):2085-97. doi: 10.1109/TMI.2016.2549918. Epub 2016 Apr 1.
10
Multi-Scale Transformer Network With Edge-Aware Pre-Training for Cross-Modality MR Image Synthesis.
IEEE Trans Med Imaging. 2023 Nov;42(11):3395-3407. doi: 10.1109/TMI.2023.3288001. Epub 2023 Oct 27.

Cited by

1
Odontogenic cystic lesion segmentation on cone-beam CT using an auto-adapting multi-scaled UNet.
Front Oncol. 2024 Jun 12;14:1379624. doi: 10.3389/fonc.2024.1379624. eCollection 2024.
2
Within-Modality Synthesis and Novel Radiomic Evaluation of Brain MRI Scans.
Cancers (Basel). 2023 Jul 10;15(14):3565. doi: 10.3390/cancers15143565.
3
On the effect of training database size for MR-based synthetic CT generation in the head.
Comput Med Imaging Graph. 2023 Jul;107:102227. doi: 10.1016/j.compmedimag.2023.102227. Epub 2023 Apr 26.
4
Structure-aware Unsupervised Tagged-to-Cine MRI Synthesis with Self Disentanglement.
Proc SPIE Int Soc Opt Eng. 2022 Feb-Mar;12032. doi: 10.1117/12.2610655. Epub 2022 Apr 4.
5
One-Shot Generative Adversarial Learning for MRI Segmentation of Craniomaxillofacial Bony Structures.
IEEE Trans Med Imaging. 2020 Mar;39(3):787-796. doi: 10.1109/TMI.2019.2935409. Epub 2019 Aug 14.

References

1
Medical Image Synthesis with Context-Aware Generative Adversarial Networks.
Med Image Comput Comput Assist Interv. 2017 Sep;10435:417-425. doi: 10.1007/978-3-319-66179-7_48. Epub 2017 Sep 4.
2
Deep embedding convolutional neural network for synthesizing CT image from T1-Weighted MR image.
Med Image Anal. 2018 Jul;47:31-44. doi: 10.1016/j.media.2018.03.011. Epub 2018 Mar 30.
3
Deep Auto-context Convolutional Neural Networks for Standard-Dose PET Image Estimation from Low-Dose PET/MRI.
Neurocomputing (Amst). 2017 Dec 6;267:406-416. doi: 10.1016/j.neucom.2017.06.048. Epub 2017 Jun 29.
4
Multi-modal registration for correlative microscopy using image analogies.
Med Image Anal. 2014 Aug;18(6):914-26. doi: 10.1016/j.media.2013.12.005. Epub 2013 Dec 18.
