

FDG-PET to T1 Weighted MRI Translation with 3D Elicit Generative Adversarial Network (E-GAN).

Affiliations

Department of Mathematics and Computer Science, CNRS, Aix Marseille University, UMR 7249, Marseille, France.

Molecular Neuroimaging, Marseille Public University Hospital System, 13005 Marseille, France.

Publication

Sensors (Basel). 2022 Jun 20;22(12):4640. doi: 10.3390/s22124640.

DOI: 10.3390/s22124640
PMID: 35746422
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9227640/
Abstract

Building on the strengths of deep learning, computer-aided diagnosis (CAD) is a hot topic for researchers in medical image analysis. One of the main requirements for training a deep learning model is providing enough data to the network. However, in medical imaging, because of the difficulties of data collection and data privacy, finding an appropriate dataset (balanced, with enough samples, etc.) is quite a challenge. Although image synthesis could help overcome this issue, synthesizing 3D images is a hard task. The main objective of this paper is to generate 3D T1 weighted MRI corresponding to FDG-PET. In this study, we propose a separable convolution-based Elicit generative adversarial network (E-GAN). The proposed architecture can reconstruct 3D T1 weighted MRI from 2D high-level features and geometrical information retrieved with a Sobel filter. Experimental results on the ADNI datasets for healthy subjects show that the proposed model improves image quality compared with the state of the art. In addition, compared with state-of-the-art methods, E-GAN better preserves structural information (a 13.73% improvement in PSNR and 22.95% in SSIM over Pix2Pix GAN) and textural information (a 6.9% improvement in the homogeneity error of the Haralick features over Pix2Pix GAN).
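The abstract names two ingredients of the architecture: geometrical (edge) information extracted with a Sobel filter, and separable convolutions to make 3D synthesis tractable. The following sketch illustrates both ideas in isolation; the function names, kernel size, and channel counts are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def sobel_edges(volume: np.ndarray) -> np.ndarray:
    """Gradient magnitude from per-axis Sobel filters (geometrical information)."""
    grads = [ndimage.sobel(volume, axis=a, mode="reflect") for a in range(volume.ndim)]
    return np.sqrt(sum(g.astype(np.float64) ** 2 for g in grads))

def conv3d_params(c_in: int, c_out: int, k: int) -> int:
    """Weight count of a standard 3D convolution (bias ignored)."""
    return c_in * c_out * k ** 3

def separable_conv3d_params(c_in: int, c_out: int, k: int) -> int:
    """Depthwise (one k^3 kernel per input channel) + pointwise 1x1x1 factorization."""
    return c_in * k ** 3 + c_in * c_out

vol = np.random.default_rng(0).random((16, 16, 16))
print(sobel_edges(vol).shape)              # (16, 16, 16)
print(conv3d_params(64, 64, 3))            # 110592
print(separable_conv3d_params(64, 64, 3))  # 5824
```

For a hypothetical 3x3x3 layer with 64 input and output channels, the separable factorization uses roughly 5% of the weights of a full 3D convolution, which is the kind of saving that makes volumetric GAN generators practical.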

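The reported gains are measured with PSNR and SSIM for structural fidelity and with Haralick texture features (homogeneity) for texture. A minimal NumPy sketch of two of these metrics follows; the function names, grey-level quantization, and toy inputs are illustrative assumptions, and a single horizontal pixel offset is used for the co-occurrence matrix.

```python
import numpy as np

def psnr(reference: np.ndarray, synthetic: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - synthetic.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def glcm_homogeneity(image: np.ndarray, levels: int = 8) -> float:
    """Haralick homogeneity of the grey-level co-occurrence matrix
    for horizontally adjacent pixels of an image scaled to [0, 1]."""
    q = np.clip((image * levels).astype(int), 0, levels - 1)
    glcm = np.zeros((levels, levels), dtype=np.float64)
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1.0
    glcm /= glcm.sum()                      # normalize to a joint distribution
    i, j = np.indices((levels, levels))
    return float(np.sum(glcm / (1.0 + (i - j) ** 2)))

rng = np.random.default_rng(0)
ref = rng.random((32, 32))
noisy = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0.0, 1.0)
print(psnr(ref, noisy))        # higher is better
print(glcm_homogeneity(ref))   # in (0, 1]; exactly 1.0 for a constant image
```

A percentage improvement like the paper's "13.73% for PSNR" would then be `100 * (psnr_egan - psnr_pix2pix) / psnr_pix2pix` computed over the test volumes.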

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da53/9227640/f8b0ca907eed/sensors-22-04640-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da53/9227640/87da97cee4f9/sensors-22-04640-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da53/9227640/3f3d09f6552e/sensors-22-04640-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da53/9227640/838050a77146/sensors-22-04640-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da53/9227640/1e0cf7934be8/sensors-22-04640-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da53/9227640/9bcc18775cc1/sensors-22-04640-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da53/9227640/543c178767b6/sensors-22-04640-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da53/9227640/8fb5cd8ed612/sensors-22-04640-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da53/9227640/b673818d7921/sensors-22-04640-g009.jpg

Similar Articles

1
FDG-PET to T1 Weighted MRI Translation with 3D Elicit Generative Adversarial Network (E-GAN).
Sensors (Basel). 2022 Jun 20;22(12):4640. doi: 10.3390/s22124640.
2
Generation of Conventional 18F-FDG PET Images from 18F-Florbetaben PET Images Using Generative Adversarial Network: A Preliminary Study Using ADNI Dataset.
Medicina (Kaunas). 2023 Jul 10;59(7):1281. doi: 10.3390/medicina59071281.
3
Generation of synthetic PET/MR fusion images from MR images using a combination of generative adversarial networks and conditional denoising diffusion probabilistic models based on simultaneous 18F-FDG PET/MR image data of pyogenic spondylodiscitis.
Spine J. 2024 Aug;24(8):1467-1477. doi: 10.1016/j.spinee.2024.04.007. Epub 2024 Apr 12.
4
Generation of 18F-FDG PET standard scan images from short scans using cycle-consistent generative adversarial network.
Phys Med Biol. 2022 Oct 19;67(21). doi: 10.1088/1361-6560/ac950a.
5
Deep generative denoising networks enhance quality and accuracy of gated cardiac PET data.
Ann Nucl Med. 2024 Oct;38(10):775-788. doi: 10.1007/s12149-024-01945-1. Epub 2024 Jun 6.
6
BPGAN: Brain PET synthesis from MRI using generative adversarial network for multi-modal Alzheimer's disease diagnosis.
Comput Methods Programs Biomed. 2022 Apr;217:106676. doi: 10.1016/j.cmpb.2022.106676. Epub 2022 Feb 1.
7
Ultra-low-dose PET reconstruction using generative adversarial network with feature matching and task-specific perceptual loss.
Med Phys. 2019 Aug;46(8):3555-3564. doi: 10.1002/mp.13626. Epub 2019 Jun 17.
8
Unsupervised arterial spin labeling image superresolution via multiscale generative adversarial network.
Med Phys. 2022 Apr;49(4):2373-2385. doi: 10.1002/mp.15468. Epub 2022 Mar 7.
9
Paired conditional generative adversarial network for highly accelerated liver 4D MRI.
Phys Med Biol. 2024 Jun 17;69(12). doi: 10.1088/1361-6560/ad5489.
10
GAN for synthesizing CT from T2-weighted MRI data towards MR-guided radiation treatment.
MAGMA. 2022 Jun;35(3):449-457. doi: 10.1007/s10334-021-00974-5. Epub 2021 Nov 6.

Cited By

1
Artificial Intelligence in Alzheimer's Disease Diagnosis and Prognosis Using PET-MRI: A Narrative Review of High-Impact Literature Post-Tauvid Approval.
J Clin Med. 2025 Aug 21;14(16):5913. doi: 10.3390/jcm14165913.
2
Turning brain MRI into diagnostic PET: 15O-water PET CBF synthesis from multi-contrast MRI via attention-based encoder-decoder networks.
Med Image Anal. 2024 Apr;93:103072. doi: 10.1016/j.media.2023.103072. Epub 2023 Dec 29.
3
Generative Adversarial Networks in Medicine: Important Considerations for this Emerging Innovation in Artificial Intelligence.
Ann Biomed Eng. 2023 Oct;51(10):2130-2142. doi: 10.1007/s10439-023-03304-z. Epub 2023 Jul 24.
4
Image Translation by Ad CycleGAN for COVID-19 X-Ray Images: A New Approach for Controllable GAN.
Sensors (Basel). 2022 Dec 8;22(24):9628. doi: 10.3390/s22249628.

References

1
Three-dimensional self-attention conditional GAN with spectral normalization for multimodal neuroimaging synthesis.
Magn Reson Med. 2021 Sep;86(3):1718-1733. doi: 10.1002/mrm.28819. Epub 2021 May 7.
2
Bidirectional Mapping of Brain MRI and PET With 3D Reversible GAN for the Diagnosis of Alzheimer's Disease.
Front Neurosci. 2021 Apr 9;15:646013. doi: 10.3389/fnins.2021.646013. eCollection 2021.
3
mustGAN: multi-stream Generative Adversarial Networks for MR Image Synthesis.
Med Image Anal. 2021 May;70:101944. doi: 10.1016/j.media.2020.101944. Epub 2021 Feb 17.
4
MRI image synthesis with dual discriminator adversarial learning and difficulty-aware attention mechanism for hippocampal subfields segmentation.
Comput Med Imaging Graph. 2020 Dec;86:101800. doi: 10.1016/j.compmedimag.2020.101800. Epub 2020 Oct 18.
5
Multi-View Separable Pyramid Network for AD Prediction at MCI Stage by 18F-FDG Brain PET Imaging.
IEEE Trans Med Imaging. 2021 Jan;40(1):81-92. doi: 10.1109/TMI.2020.3022591. Epub 2020 Dec 29.
6
Hi-Net: Hybrid-Fusion Network for Multi-Modal MR Image Synthesis.
IEEE Trans Med Imaging. 2020 Sep;39(9):2772-2781. doi: 10.1109/TMI.2020.2975344. Epub 2020 Feb 20.
7
Image Synthesis in Multi-Contrast MRI With Conditional Generative Adversarial Networks.
IEEE Trans Med Imaging. 2019 Oct;38(10):2375-2388. doi: 10.1109/TMI.2019.2901750. Epub 2019 Feb 26.
8
Ea-GANs: Edge-Aware Generative Adversarial Networks for Cross-Modality MR Image Synthesis.
IEEE Trans Med Imaging. 2019 Jul;38(7):1750-1762. doi: 10.1109/TMI.2019.2895894. Epub 2019 Jan 29.
9
3D Auto-Context-Based Locality Adaptive Multi-Modality GANs for PET Synthesis.
IEEE Trans Med Imaging. 2019 Jun;38(6):1328-1339. doi: 10.1109/TMI.2018.2884053. Epub 2018 Nov 29.
10
3D conditional generative adversarial networks for high-quality PET image estimation at low dose.
Neuroimage. 2018 Jul 1;174:550-562. doi: 10.1016/j.neuroimage.2018.03.045. Epub 2018 Mar 20.