Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, South Korea.
Department of Nuclear Medicine, Yeungnam University Hospital, Yeungnam University College of Medicine, Daegu, South Korea.
Spine J. 2024 Aug;24(8):1467-1477. doi: 10.1016/j.spinee.2024.04.007. Epub 2024 Apr 12.
Cross-modality image generation from magnetic resonance (MR) to positron emission tomography (PET) using a generative model can be expected to yield complementary benefits, offsetting the limitations and exploiting the advantages inherent in each modality.
This study aims to generate synthetic PET/MR fusion images from MR images using a combination of generative adversarial networks (GANs) and conditional denoising diffusion probabilistic models (cDDPMs), based on simultaneous 18F-fluorodeoxyglucose (18F-FDG) PET/MR image data.
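Per the Methods, the adversarial component is a least-squares GAN (LSGAN) discriminator paired with a U-Net generator. As a minimal illustration of the LSGAN objectives only (not the study's implementation, whose architecture and weighting are not given in the abstract), the discriminator pushes scores for real images toward 1 and for synthetic images toward 0, while the generator pushes scores for its outputs toward 1:

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    """Least-squares GAN discriminator loss: real scores are pulled
    toward 1 and fake (synthetic) scores toward 0."""
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def lsgan_g_loss(d_fake):
    """Generator adversarial term: discriminator scores on synthetic
    images are pulled toward 1 (i.e., toward being judged real)."""
    return 0.5 * np.mean((d_fake - 1.0) ** 2)
```

In a Pix2Pix-style setup this adversarial term is typically combined with a pixel-wise reconstruction loss between the synthetic and real target images.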
Retrospective study with prospectively collected clinical and radiological data.
This study included 94 patients (60 men and 34 women) with thoraco-lumbar pyogenic spondylodiscitis (PSD) treated at a single tertiary institution between February 2017 and January 2020.
Quantitative and qualitative image similarities between the real and synthetic PET/T2-weighted fat-saturated MR (T2FS) fusion images were analyzed on the test data set.
We used paired spinal sagittal T2FS and PET/T2FS fusion images from simultaneous 18F-FDG PET/MR imaging examinations of patients with PSD to generate synthetic PET/T2FS fusion images from T2FS images, using a combination of Pix2Pix (U-Net generator + least-squares GAN discriminator) and cDDPM algorithms. In the analyses of image similarity between the real and synthetic PET/T2FS fusion images, we adopted mean peak signal-to-noise ratio (PSNR), mean structural similarity index measure (SSIM), mean absolute error (MAE), and mean squared error (MSE) for quantitative analysis, while discrimination accuracy by three spine surgeons was applied for qualitative analysis.
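As a sketch of how these four similarity metrics can be computed between a real and a synthetic fusion image (the study's exact preprocessing and normalization are not specified in the abstract; the SSIM below is the simplified global form, omitting the sliding window used by standard implementations):

```python
import numpy as np

def image_similarity(real, synth, data_range=1.0):
    """Compute PSNR, global SSIM, MAE, and MSE between two images.

    `real` and `synth` are float arrays scaled to [0, data_range].
    Illustrative only; library implementations (e.g. scikit-image)
    compute SSIM over local windows rather than globally.
    """
    real = real.astype(np.float64)
    synth = synth.astype(np.float64)

    mse = np.mean((real - synth) ** 2)
    mae = np.mean(np.abs(real - synth))
    psnr = 10 * np.log10(data_range ** 2 / mse) if mse > 0 else np.inf

    # Global SSIM: compare mean luminance, contrast, and structure.
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = real.mean(), synth.mean()
    var_x, var_y = real.var(), synth.var()
    cov = ((real - mu_x) * (synth - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
    return psnr, ssim, mae, mse
```

Averaging these per-pair values over the test set gives the mean PSNR, SSIM, MAE, and MSE reported in the Results.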
A total of 2,082 pairs of T2FS and PET/T2FS fusion images were obtained from 172 examinations of 94 patients and randomly assigned to training, validation, and test data sets in an 8:1:1 ratio (1,664, 209, and 209 pairs). The quantitative analysis yielded a PSNR of 30.634 ± 3.437, an SSIM of 0.910 ± 0.067, an MAE of 0.017 ± 0.008, and an MSE of 0.001 ± 0.001. The values of PSNR, MAE, and MSE significantly decreased as FDG uptake increased in the real PET/T2FS fusion images, whereas SSIM showed no significant correlation. In the qualitative analysis, the overall accuracy of discriminating real from synthetic PET/T2FS fusion images was 47.4%.
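An 8:1:1 random split like the one described can be sketched as follows (the helper name, seed, and rounding are illustrative; the study's reported counts of 1,664/209/209 pairs imply a slightly different rounding convention):

```python
import random

def split_811(pairs, seed=42):
    """Shuffle image pairs and split them 8:1:1 into
    training, validation, and test sets.

    `pairs` is any sequence, e.g. (T2FS, PET/T2FS) image pairs.
    """
    items = list(pairs)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = round(n * 0.8)
    n_val = round(n * 0.1)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```

Note that the abstract describes splitting at the image-pair level; splitting at the patient level instead would prevent slices from the same patient appearing in both training and test sets, a common consideration in medical imaging studies.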
The combination of Pix2Pix and cDDPMs demonstrated the potential of cross-modality image generation from MR to PET, with reliable quantitative and qualitative image similarity.