Department of PET Center, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, China.
Zhejiang Minfound Intelligent Healthcare Technology Co., Ltd., Hangzhou, China.
PLoS One. 2020 Sep 4;15(9):e0238455. doi: 10.1371/journal.pone.0238455. eCollection 2020.
PET is a popular medical imaging modality for various clinical applications, including diagnosis and image-guided radiation therapy. Low-dose PET (LDPET) at a minimized radiation dosage is highly desirable in the clinic, since PET imaging involves ionizing radiation and raises concerns about the risk of radiation exposure. However, a reduced dose of radioactive tracer can degrade image quality and compromise clinical diagnosis. In this paper, a supervised deep learning approach combining a generative adversarial network (GAN) with a cycle-consistency loss, a Wasserstein distance loss, and an additional supervised learning loss, named S-CycleGAN, is proposed to establish a non-linear end-to-end mapping model for recovering LDPET brain images. The proposed model and two recently published deep learning methods (RED-CNN and 3D-cGAN) were applied to 10 testing datasets at 10% and 30% dose, and to a series of simulated datasets embedded with lesions of different activities, sizes, and shapes. Besides visual comparisons, six measures (NRMSE, SSIM, PSNR, LPIPS, SUVmax, and SUVmean) were evaluated on the 10 testing datasets and 45 simulated datasets. Compared with RED-CNN and 3D-cGAN, our S-CycleGAN approach achieved comparable SSIM and PSNR, slightly higher noise but a better perceptual score and better preservation of image details, and much better SUVmean and SUVmax. Quantitative and qualitative evaluations indicate that the proposed approach is accurate, efficient, and robust compared with other state-of-the-art deep learning methods.
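The abstract describes a generator objective that combines three terms: an adversarial (Wasserstein) loss, a cycle-consistency loss, and a supervised loss against the paired full-dose image. The sketch below illustrates how such a composite loss could be assembled; the L1 form of the cycle and supervised terms, the loss weights, and the function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two images (illustrative choice of norm)."""
    return float(np.mean(np.abs(a - b)))

def s_cyclegan_generator_loss(x_low, x_full, g_low2full, g_full2low,
                              critic_full, lam_cyc=10.0, lam_sup=5.0):
    """Composite generator loss in the spirit of S-CycleGAN:
    Wasserstein adversarial term + cycle-consistency term +
    supervised term against the paired full-dose image.
    lam_cyc and lam_sup are hypothetical weights, not from the paper."""
    fake_full = g_low2full(x_low)        # low-dose -> estimated full-dose
    recon_low = g_full2low(fake_full)    # map back to the low-dose domain
    # Generator side of the Wasserstein loss: maximize the critic score
    # on generated full-dose images (so minimize its negation).
    adv = -float(np.mean(critic_full(fake_full)))
    cyc = l1(recon_low, x_low)           # cycle-consistency
    sup = l1(fake_full, x_full)          # supervised paired loss
    return adv + lam_cyc * cyc + lam_sup * sup
```

With identity generators and a zero-valued critic, the loss reduces to the weighted supervised term alone, which makes the relative contribution of each component easy to inspect during debugging.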