

Cross-modality image-to-image translation from MR to synthetic F-FDOPA PET/MR fusion images using conditional GAN in brain cancer.

Authors

Seo Youngbeom, Yang Heesung, Kong Eunjung, Sanker Vivek, Desai Atman, Lee Jungwon, Park So Hee, Song You Seon, Jeon Ikchan

Affiliations

Department of Neurosurgery, Korea University Ansan Hospital, Ansan, Republic of Korea (South Korea).

School of Computer Science and Engineering, Kyungpook National University, Daegu, Republic of Korea (South Korea).

Publication

Neuroradiology. 2025 Jul 19. doi: 10.1007/s00234-025-03704-z.

DOI: 10.1007/s00234-025-03704-z
PMID: 40682663
Abstract

OBJECTIVE

This study aims to evaluate the feasibility of cross-modality image-to-image translation from magnetic resonance (MR) to synthetic positron emission tomography (PET)/MR fusion images using conditional generative adversarial networks (CGAN).

METHODS

A retrospective study was conducted involving 32 simultaneous 6-[18F]-fluoro-L-3,4-dihydroxyphenylalanine (18F-FDOPA) PET/MR imaging examinations from 27 patients diagnosed with brain cancer. We applied paired axial T1-weighted contrast-enhanced MR (T1C) and PET/T1C fusion images to translate from T1C to synthetic PET/T1C fusion images using the Pix2Pix algorithm of CGAN. To assess the image similarity between real and synthetic PET/T1C fusion images, we calculated correlation coefficients for the maximum/mean tumor-to-background ratio (TBR), and quantitative analyses were performed using the peak signal-to-noise ratio (PSNR), mean squared error (MSE), structural similarity index (SSIM), and feature similarity index measure (FSIM).
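For readers unfamiliar with the similarity metrics named above, the following is a minimal NumPy sketch of MSE and PSNR for images scaled to [0, 1] (SSIM and FSIM are omitted here; in practice they come from an image-processing library such as scikit-image). The toy arrays are invented for illustration and are not the study's data.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two same-shaped images."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, data_range]."""
    err = mse(a, b)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / err)

# Toy example: a "real" fusion slice and a slightly noisy "synthetic" one.
rng = np.random.default_rng(0)
real = rng.random((64, 64))
synthetic = np.clip(real + rng.normal(0.0, 0.02, real.shape), 0.0, 1.0)

print(mse(real, synthetic))   # small error
print(psnr(real, synthetic))  # high PSNR in dB
```

Higher PSNR (the paper reports ~31 dB) and lower MSE indicate the synthetic fusion image is pixel-wise closer to the real one.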

RESULTS

A total of 2167 pairs of T1C and PET/T1C fusion images were obtained and randomly assigned to training and test datasets in a 9:1 ratio (1950 and 217 pairs); the training data were further divided into training and validation datasets in a 4:1 ratio (1560 and 390 pairs). The correlation coefficients were 0.706 (CI: 0.533-0.822) for maximum TBR (p < 0.001) and 0.901 (CI: 0.831-0.943) for mean TBR (p < 0.001). Quantitative analyses yielded a PSNR of 31.075 ± 3.976, an MSE of 0.001 ± 0.001, an SSIM of 0.868 ± 0.079, and an FSIM of 0.922 ± 0.044.
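The reported counts follow from simple integer arithmetic on the stated ratios; `split` below is a hypothetical helper for checking them, not code from the paper.

```python
def split(n, num, den):
    """Split n items in a num:den ratio, rounding the first part down."""
    first = n * num // (num + den)
    return first, n - first

train_val, test = split(2167, 9, 1)  # 9:1 -> (1950, 217)
train, val = split(1950, 4, 1)       # 4:1 -> (1560, 390)
print(train_val, test, train, val)
```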

CONCLUSION

CGAN based on simultaneous 18F-FDOPA PET/MR imaging data demonstrated the potential for cross-modality image-to-image translation from T1C to PET/T1C fusion images, though the limitations of a small dataset and the lack of external validation require further research.


Similar articles

1
Cross-modality image-to-image translation from MR to synthetic F-FDOPA PET/MR fusion images using conditional GAN in brain cancer.
Neuroradiology. 2025 Jul 19. doi: 10.1007/s00234-025-03704-z.
2
Multimodal medical image-to-image translation via variational autoencoder latent space mapping.
Med Phys. 2025 Jul;52(7):e17912. doi: 10.1002/mp.17912. Epub 2025 May 29.
3
Structural semantic-guided MR synthesis from PET images via a dual cross-attention mechanism.
Med Phys. 2025 Jul;52(7):e17957. doi: 10.1002/mp.17957.
4
Generation of synthetic PET/MR fusion images from MR images using a combination of generative adversarial networks and conditional denoising diffusion probabilistic models based on simultaneous 18F-FDG PET/MR image data of pyogenic spondylodiscitis.
Spine J. 2024 Aug;24(8):1467-1477. doi: 10.1016/j.spinee.2024.04.007. Epub 2024 Apr 12.
5
Super-resolution CBCT on a new generation flat panel imager of a C-arm gantry linear accelerator.
Med Phys. 2025 Jul;52(7):e18000. doi: 10.1002/mp.18000.
6
Leveraging Physics-Based Synthetic MR Images and Deep Transfer Learning for Artifact Reduction in Echo-Planar Imaging.
AJNR Am J Neuroradiol. 2025 Apr 2;46(4):733-741. doi: 10.3174/ajnr.A8566.
7
Use of a deep learning neural network to generate bone suppressed images for markerless lung tumor tracking.
Med Phys. 2025 Jul;52(7):e17949. doi: 10.1002/mp.17949.
8
T1-contrast enhanced MRI generation from multi-parametric MRI for glioma patients with latent tumor conditioning.
Med Phys. 2025 Apr;52(4):2064-2073. doi: 10.1002/mp.17600. Epub 2024 Dec 23.
9
Dual-way magnetic resonance image translation with transformer-based adversarial network.
Med Phys. 2025 Apr 24. doi: 10.1002/mp.17837.
10
Noise-aware system generative model (NASGM): positron emission tomography (PET) image simulation framework with observer validation studies.
Med Phys. 2025 Jul;52(7):e17962. doi: 10.1002/mp.17962.
