Ghanbari S, Sadremomtaz A
Department of Physics, Faculty of Science, University of Guilan, Rasht, Iran.
Biomed Phys Eng Express. 2024 Dec 23;11(1). doi: 10.1088/2057-1976/ad97c2.
Attenuation correction of PET data is commonly performed using a secondary imaging modality to produce attenuation maps. The conventional approach, which relies on CT images, requires energy conversion. The present study introduces a novel deep learning-based method that removes the need for both CT images and energy conversion. A residual Pix2Pix network was trained and tested on 4033 2D PET images of 37 healthy adult brains to generate attenuation-corrected PET images. The model, implemented in TensorFlow and Keras, was evaluated against CT-based attenuation-corrected (CT-AC) images by comparing image similarity (using PSNR and SSIM), intensity correlation, and intensity distribution via a 2D histogram of pixel intensities. Differences in standardized uptake values (SUV) were used to assess the model's performance relative to the CT-AC method. The residual Pix2Pix network showed strong agreement with CT-based attenuation correction, yielding MAE, MSE, PSNR, and MS-SSIM values of 3 × 10, 2 × 10, 38.859, and 0.99, respectively. The model's mean SUV difference was a negligible 8 × 10 (P-value = 0.10), indicating accurate PET image correction, and its output correlated strongly with CT-based methods (R = 0.99). These findings indicate that the approach surpasses the conventional method in precision and efficiency. The proposed residual Pix2Pix framework enables accurate and feasible attenuation correction of brain ¹⁸F-FDG PET without CT; however, clinical trials are required to evaluate its clinical performance. PET images reconstructed by the framework show low error relative to the accepted test reliability of PET/CT, indicating high quantitative similarity.
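The abstract quantifies agreement between the network's output and CT-AC images with MAE, MSE, and PSNR. As an illustrative sketch only (not the authors' evaluation code), these image-similarity metrics can be computed with NumPy; the function name, array shapes, and unit data range below are assumptions:

```python
import numpy as np

def image_metrics(pred, ref, data_range=1.0):
    """Compute MAE, MSE, and PSNR (dB) between a predicted AC-PET image
    and a reference CT-AC image (float arrays of identical shape)."""
    pred = np.asarray(pred, dtype=np.float64)
    ref = np.asarray(ref, dtype=np.float64)
    err = pred - ref
    mae = float(np.mean(np.abs(err)))
    mse = float(np.mean(err ** 2))
    # PSNR is infinite when the images are identical (MSE = 0)
    psnr = np.inf if mse == 0.0 else 10.0 * np.log10(data_range ** 2 / mse)
    return mae, mse, psnr

# Hypothetical toy example: a uniform 0.1 offset on a unit-range image
ref = np.zeros((4, 4))
pred = np.full((4, 4), 0.1)
mae, mse, psnr = image_metrics(pred, ref)
# mae = 0.1, mse = 0.01, psnr = 20.0 dB
```

In practice such metrics would be averaged over the test split of the 4033 slices; SSIM and MS-SSIM additionally require windowed local statistics and are typically taken from a library such as `tf.image.ssim_multiscale` rather than reimplemented.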