Seo Youngbeom, Yang Heesung, Kong Eunjung, Sanker Vivek, Desai Atman, Lee Jungwon, Park So Hee, Song You Seon, Jeon Ikchan
Department of Neurosurgery, Korea University Ansan Hospital, Ansan, Republic of Korea (South Korea).
School of Computer Science and Engineering, Kyungpook National University, Daegu, Republic of Korea (South Korea).
Neuroradiology. 2025 Jul 19. doi: 10.1007/s00234-025-03704-z.
This study aims to assess the feasibility of cross-modality image-to-image translation from magnetic resonance (MR) images to synthetic positron emission tomography (PET)/MR fusion images using a conditional generative adversarial network (CGAN).
A retrospective study was conducted involving 32 simultaneous 6-[18F]-fluoro-L-3,4-dihydroxyphenylalanine (18F-FDOPA) PET/MR imaging examinations from 27 patients diagnosed with brain cancer. Paired axial contrast-enhanced T1-weighted MR (T1C) and PET/T1C fusion images were used to translate T1C into synthetic PET/T1C fusion images with the Pix2Pix algorithm, a CGAN variant. To assess the similarity between real and synthetic PET/T1C fusion images, we calculated correlation coefficients for the maximum and mean tumor-to-background ratio (TBR), and quantitative analyses were performed using peak signal-to-noise ratio (PSNR), mean squared error (MSE), structural similarity index (SSIM), and feature similarity index measure (FSIM).
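The PSNR and MSE metrics above have standard closed forms that can be sketched in plain NumPy; the snippet below is a minimal illustration on toy arrays, not the study's actual evaluation code (SSIM and FSIM involve windowed statistics and phase-congruency features, for which library implementations such as scikit-image's `structural_similarity` are typically used).

```python
import numpy as np

def mse(real, synthetic):
    """Mean squared error between two images normalized to [0, 1]."""
    return float(np.mean((real - synthetic) ** 2))

def psnr(real, synthetic, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, data_range]."""
    err = mse(real, synthetic)
    if err == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / err)

# Toy example: a random "real" image and a slightly noisy "synthetic" one.
rng = np.random.default_rng(0)
real = rng.random((64, 64))
synthetic = np.clip(real + rng.normal(0.0, 0.01, (64, 64)), 0.0, 1.0)

print(f"MSE:  {mse(real, synthetic):.6f}")
print(f"PSNR: {psnr(real, synthetic):.1f} dB")
```

Noise with standard deviation 0.01 gives an MSE near 1e-4 and hence a PSNR near 40 dB, the same order as the ~31 dB reported for the synthetic fusion images.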
A total of 2,167 pairs of T1C and PET/T1C fusion images were obtained and randomly assigned to training and test datasets in a 9:1 ratio (1,950 and 217 pairs); the training data were further divided into training and validation datasets in a 4:1 ratio (1,560 and 390 pairs). The correlation coefficients were 0.706 (CI: 0.533-0.822) for maximum TBR (p < 0.001) and 0.901 (CI: 0.831-0.943) for mean TBR (p < 0.001). Quantitative analyses yielded a PSNR of 31.075 ± 3.976, an MSE of 0.001 ± 0.001, an SSIM of 0.868 ± 0.079, and an FSIM of 0.922 ± 0.044.
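The two-stage split described above (9:1 test hold-out, then 4:1 train/validation) reproduces the reported counts exactly from 2,167 pairs. The sketch below illustrates one plausible way to implement it; the seed and helper name are illustrative, not taken from the paper.

```python
import random

def split_pairs(pairs, test_frac=0.1, val_frac=0.2, seed=42):
    """Shuffle image pairs, hold out test_frac as a test set, then carve
    val_frac of the remaining training pool into a validation set."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)  # deterministic shuffle for a fixed seed
    n_test = round(len(pairs) * test_frac)
    test, pool = pairs[:n_test], pairs[n_test:]
    n_val = round(len(pool) * val_frac)
    val, train = pool[:n_val], pool[n_val:]
    return train, val, test

# 2,167 pairs -> 217 test; remaining 1,950 -> 390 validation + 1,560 training
train, val, test = split_pairs(range(2167))
print(len(train), len(val), len(test))  # → 1560 390 217
```

Note that this splits at the image-pair level, as the abstract describes; a patient-level split would be needed to rule out slices from the same examination appearing in both training and test sets.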
A CGAN trained on simultaneous 18F-FDOPA PET/MR imaging data demonstrated the potential for cross-modality image-to-image translation from T1C to PET/T1C fusion images, though the small dataset and lack of external validation require further research.