Xu Bangyan, Nie Ziwei, He Jian, Li Aimei, Wu Ting
School of Mathematics, Nanjing University, Nanjing 210093, People's Republic of China.
Department of Nuclear Medicine, Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing 210093, People's Republic of China.
Phys Med Biol. 2025 May 22;70(11). doi: 10.1088/1361-6560/add8dd.
Objective. Positron emission tomography with 2-deoxy-2-[fluorine-18]fluoro-D-glucose integrated with computed tomography (18F-FDG PET-CT) is a multi-modality medical imaging technique widely used for the screening and diagnosis of lesions and tumors, in which CT provides detailed anatomical structures while PET shows metabolic activity. Nevertheless, it has disadvantages such as long scanning times, high cost, and relatively high radiation doses. We propose a deep learning model for the whole-body CT-to-PET synthesis task, generating high-quality synthetic PET images that are comparable to real ones in both clinical relevance and diagnostic value. Approach. We collect 102 pairs of 3D CT and PET scans, which are sliced into 27,240 pairs of 2D CT and PET images (training: 21,855 pairs; validation: 2,810 pairs; testing: 2,575 pairs). We propose a transformer-enhanced generative adversarial network (GAN), CPGAN, for the whole-body CT-to-PET synthesis task. The CPGAN model uses residual blocks and fully connected transformer residual blocks to capture both local features and global contextual information. A customized loss function incorporating structural consistency is designed to improve the quality of the synthesized PET images. Main results. Both quantitative and qualitative evaluation results demonstrate the effectiveness of the CPGAN model. The mean and standard deviation of the NRMSE, PSNR and SSIM values on the test set are (16.90 ± 12.27) × 10⁻⁴, 28.71 ± 2.67 and 0.926 ± 0.033, respectively, outperforming seven other state-of-the-art models. Three radiologists independently and blindly assigned subjective scores to 100 randomly chosen PET images (50 real and 50 synthetic). According to the Wilcoxon signed-rank test, there is no statistically significant difference between the synthetic PET images and the real ones. Significance. Despite the inherent limitation that CT images do not directly reflect the biological information of metabolic tissues, the CPGAN model effectively synthesizes satisfactory PET images from CT scans, which has the potential to reduce reliance on actual PET-CT scans.
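The abstract describes a generator that mixes convolutional residual blocks (local features) with transformer residual blocks (global context). The sketch below is a minimal, illustrative PyTorch rendering of that idea under stated assumptions: the block layouts, channel sizes, class names (ConvResidualBlock, TransformerResidualBlock, ToyGenerator) and the single-channel CT-to-PET mapping are my own choices, not the CPGAN architecture from the paper.

```python
# Minimal sketch of "residual blocks + transformer residual blocks" for CT-to-PET
# synthesis. All design choices here are assumptions for illustration only.
import torch
import torch.nn as nn


class ConvResidualBlock(nn.Module):
    """Local-feature residual block: two 3x3 convolutions with a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.body(x)


class TransformerResidualBlock(nn.Module):
    """Global-context residual block: self-attention over flattened spatial tokens."""
    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels * 2), nn.GELU(), nn.Linear(channels * 2, channels)
        )

    def forward(self, x):
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)            # (B, H*W, C)
        attn_out, _ = self.attn(self.norm(tokens), self.norm(tokens), self.norm(tokens))
        tokens = tokens + attn_out                        # attention residual
        tokens = tokens + self.mlp(self.norm(tokens))     # feed-forward residual
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class ToyGenerator(nn.Module):
    """Illustrative encoder -> conv residual blocks -> transformer residual block -> decoder.
    Real architectures usually apply attention at a downsampled resolution to save memory."""
    def __init__(self, channels=64, n_conv_blocks=4):
        super().__init__()
        self.encode = nn.Sequential(nn.Conv2d(1, channels, 7, padding=3), nn.ReLU(inplace=True))
        self.local = nn.Sequential(*[ConvResidualBlock(channels) for _ in range(n_conv_blocks)])
        self.global_ctx = TransformerResidualBlock(channels)
        self.decode = nn.Conv2d(channels, 1, 7, padding=3)

    def forward(self, ct_slice):
        x = self.encode(ct_slice)
        x = self.local(x)
        x = self.global_ctx(x)
        return torch.tanh(self.decode(x))  # synthetic PET slice in [-1, 1]
```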
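The abstract also mentions a customized loss incorporating structural consistency. A common way to realize such a term in image-to-image GANs is to combine an adversarial loss, an L1 term, and an SSIM-based structural term; the sketch below follows that pattern as an assumption. The weights (lambda_l1, lambda_ssim), the uniform-window SSIM approximation, and the function names are illustrative and not taken from the paper.

```python
# Hedged sketch of a composite generator loss with a structural-consistency (SSIM) term.
import torch
import torch.nn.functional as F


def ssim_loss(x, y, window=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """1 - mean SSIM, using a uniform (average-pooling) window as a simplification
    of the usual Gaussian window. Inputs are expected to be scaled to [0, 1]."""
    mu_x = F.avg_pool2d(x, window, 1, window // 2)
    mu_y = F.avg_pool2d(y, window, 1, window // 2)
    var_x = F.avg_pool2d(x * x, window, 1, window // 2) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, window, 1, window // 2) - mu_y ** 2
    cov_xy = F.avg_pool2d(x * y, window, 1, window // 2) - mu_x * mu_y
    ssim_map = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
    return 1 - ssim_map.mean()


def generator_loss(d_fake_logits, fake_pet, real_pet, lambda_l1=100.0, lambda_ssim=10.0):
    """Adversarial + L1 + structural terms. Exact form and weights of CPGAN's loss
    are not given in the abstract; these values are assumptions."""
    adv = F.binary_cross_entropy_with_logits(d_fake_logits, torch.ones_like(d_fake_logits))
    l1 = F.l1_loss(fake_pet, real_pet)
    struct = ssim_loss(fake_pet, real_pet)
    return adv + lambda_l1 * l1 + lambda_ssim * struct
```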
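For the quantitative comparison, the abstract reports slice-wise NRMSE, PSNR and SSIM as mean ± standard deviation over the test set. A minimal evaluation loop along those lines is sketched below using scikit-image; the [0, 1] normalization and data-range settings are assumptions, and the paper's exact preprocessing may differ.

```python
# Hedged sketch: slice-wise NRMSE / PSNR / SSIM between real and synthetic PET slices.
import numpy as np
from skimage.metrics import (normalized_root_mse, peak_signal_noise_ratio,
                             structural_similarity)


def evaluate_slices(real_slices, fake_slices):
    """real_slices, fake_slices: iterables of matched 2D arrays scaled to [0, 1]."""
    nrmse, psnr, ssim = [], [], []
    for real, fake in zip(real_slices, fake_slices):
        nrmse.append(normalized_root_mse(real, fake))
        psnr.append(peak_signal_noise_ratio(real, fake, data_range=1.0))
        ssim.append(structural_similarity(real, fake, data_range=1.0))
    # Mean and standard deviation per metric, matching how results are reported.
    return {name: (np.mean(vals), np.std(vals))
            for name, vals in [("NRMSE", nrmse), ("PSNR", psnr), ("SSIM", ssim)]}
```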
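The reader study compares radiologists' subjective scores for real versus synthetic images with a Wilcoxon signed-rank test. The snippet below shows one way to run that test with SciPy; the pairing of scores (matched real/synthetic pairs per reader) and the significance threshold are assumptions about the study design, not details given in the abstract.

```python
# Hedged sketch: Wilcoxon signed-rank test on paired subjective scores.
import numpy as np
from scipy.stats import wilcoxon


def compare_reader_scores(scores_real, scores_synthetic, alpha=0.05):
    """scores_real, scores_synthetic: matched arrays of subjective scores (e.g. 50 each)."""
    stat, p_value = wilcoxon(np.asarray(scores_real), np.asarray(scores_synthetic))
    return {"statistic": stat,
            "p_value": p_value,
            "significant_difference": p_value < alpha}
```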