Multi-Modal Brain Tumor Data Completion Based on Reconstruction Consistency Loss.

Affiliations

Faculty of Robot Science and Engineering, Northeastern University, Shenyang, 110167, China.

Key Laboratory of Intelligent Computing in Medical Image of Ministry of Education, Northeastern University, Shenyang, 110167, China.

Publication

J Digit Imaging. 2023 Aug;36(4):1794-1807. doi: 10.1007/s10278-022-00697-6. Epub 2023 Mar 1.

Abstract

Multi-modal brain magnetic resonance imaging (MRI) data has been widely applied in vision-based brain tumor segmentation methods because the different modalities provide complementary diagnostic information. Because multi-modal image data is often corrupted by noise or artifacts during practical scanning, it is difficult to build a universal model for subsequent segmentation and diagnosis from incomplete input data, and image completion has therefore become one of the most active areas of medical image pre-processing. It not only helps clinicians observe a patient's lesion area more intuitively and comprehensively, but also reduces costs and eases the psychological burden on patients during tedious pathological examinations. Recently, many deep learning-based methods have been proposed to complete missing multi-modal image data and have shown good performance. However, current methods cannot fully capture the continuous semantic information between adjacent slices or the structural information of intra-slice features, which limits both the quality and the efficiency of completion. To solve these problems, we propose a novel generative adversarial network (GAN) framework, named random generative adversarial network (RAGAN), to complete the missing T1, T1ce, and FLAIR data from the given T2 modal data in real brain MRI. It consists of the following parts: (1) For the generator, we use T2 modal images and multi-modal classification labels from the same sample for cyclically supervised training of image generation, so as to restore images of arbitrary modalities. (2) For the discriminator, a multi-branch network is proposed in which the primary branch judges whether a generated image resembles the target modal image, while the auxiliary branch judges whether its essential visual features match those of the target modality.
We conduct qualitative and quantitative experimental validations on the BraTS2018 dataset, generating 10,686 MRI slices for each missing modality. Real brain tumor morphology images were compared with synthetic ones using PSNR and SSIM as evaluation metrics. Experiments demonstrate that the brightness, resolution, location, and morphology of brain tissue under different modalities are well reconstructed. We also use a segmentation network as a further validation experiment, mixing synthetic and real images as its input. With the classic UNet as the segmentation network, the segmentation accuracy is 77.58%. To further demonstrate the value of the proposed method, we use the stronger RES_UNet with deep supervision as the segmentation model, reaching a segmentation accuracy of 88.76%. Although our method does not significantly outperform other algorithms overall, its DICE value is 2% higher than that of TC-MGAN, the current state-of-the-art data completion algorithm.
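The abstract reports PSNR for image fidelity and DICE for segmentation overlap. As a minimal, hedged sketch of what these two metrics compute (toy NumPy arrays stand in for MRI slices; this is not the paper's evaluation code, and the masks below are hypothetical):

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio: 10 * log10(data_range^2 / MSE)."""
    mse = np.mean((np.asarray(ref, dtype=float) - np.asarray(test, dtype=float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    total = a.sum() + b.sum()
    return 1.0 if total == 0 else 2.0 * np.logical_and(a, b).sum() / total

# Toy data standing in for a real slice and its synthetic counterpart
# (random arrays, not BraTS images).
rng = np.random.default_rng(0)
real = rng.integers(0, 256, size=(64, 64)).astype(float)
fake = np.clip(real + rng.normal(0.0, 5.0, size=(64, 64)), 0, 255)
print(f"PSNR: {psnr(real, fake):.1f} dB")

gt = np.zeros((64, 64), dtype=bool)
gt[20:40, 20:40] = True    # square "tumor" mask
pred = np.zeros((64, 64), dtype=bool)
pred[22:42, 20:40] = True  # prediction shifted by two rows
print(f"Dice: {dice(gt, pred):.2f}")
```

Higher PSNR indicates the synthetic slice is closer to the real one pixel-wise, while Dice rewards spatial overlap of the segmented regions; the two-row shift above still yields a Dice of 0.90 despite no pixel-perfect match.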

Similar Articles

1
DualMMP-GAN: Dual-scale multi-modality perceptual generative adversarial network for medical image segmentation.
Comput Biol Med. 2022 May;144:105387. doi: 10.1016/j.compbiomed.2022.105387. Epub 2022 Mar 12.
2
BPGAN: Brain PET synthesis from MRI using generative adversarial network for multi-modal Alzheimer's disease diagnosis.
Comput Methods Programs Biomed. 2022 Apr;217:106676. doi: 10.1016/j.cmpb.2022.106676. Epub 2022 Feb 1.
3
Self-Supervised Multi-Modal Hybrid Fusion Network for Brain Tumor Segmentation.
IEEE J Biomed Health Inform. 2022 Nov;26(11):5310-5320. doi: 10.1109/JBHI.2021.3109301. Epub 2022 Nov 10.
4
Joint learning-based feature reconstruction and enhanced network for incomplete multi-modal brain tumor segmentation.
Comput Biol Med. 2023 Sep;163:107234. doi: 10.1016/j.compbiomed.2023.107234. Epub 2023 Jul 4.
5
Common feature learning for brain tumor MRI synthesis by context-aware generative adversarial network.
Med Image Anal. 2022 Jul;79:102472. doi: 10.1016/j.media.2022.102472. Epub 2022 May 4.
6
Multimodal MRI synthesis using unified generative adversarial networks.
Med Phys. 2020 Dec;47(12):6343-6354. doi: 10.1002/mp.14539. Epub 2020 Oct 27.
7
Multi-modal brain tumor segmentation via conditional synthesis with Fourier domain adaptation.
Comput Med Imaging Graph. 2024 Mar;112:102332. doi: 10.1016/j.compmedimag.2024.102332. Epub 2024 Jan 11.

Cited By

1
Synthetic data generation methods in healthcare: A review on open-source tools and methods.
Comput Struct Biotechnol J. 2024 Jul 9;23:2892-2910. doi: 10.1016/j.csbj.2024.07.005. eCollection 2024 Dec.

References

1
Missing MRI Pulse Sequence Synthesis Using Multi-Modal Generative Adversarial Network.
IEEE Trans Med Imaging. 2020 Apr;39(4):1170-1183. doi: 10.1109/TMI.2019.2945521. Epub 2019 Oct 4.
2
Image Synthesis in Multi-Contrast MRI With Conditional Generative Adversarial Networks.
IEEE Trans Med Imaging. 2019 Oct;38(10):2375-2388. doi: 10.1109/TMI.2019.2901750. Epub 2019 Feb 26.
3
H-DenseUNet: Hybrid Densely Connected UNet for Liver and Tumor Segmentation From CT Volumes.
IEEE Trans Med Imaging. 2018 Dec;37(12):2663-2674. doi: 10.1109/TMI.2018.2845918. Epub 2018 Jun 11.
4
The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).
IEEE Trans Med Imaging. 2015 Oct;34(10):1993-2024. doi: 10.1109/TMI.2014.2377694. Epub 2014 Dec 4.
5
Random Forest FLAIR Reconstruction from T1, T2, and PD-Weighted MRI.
Proc IEEE Int Symp Biomed Imaging. 2014 May;2014:1079-1082. doi: 10.1109/ISBI.2014.6868061.
