College of Optical Science and Engineering, Zhejiang University, 38 Zheda Road, Building 3, Hangzhou 310027, China; Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, 55 Fruit St, White 427, 125 Nashua Street, Suite 660, Boston, MA 02114, United States.
Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, 55 Fruit St, White 427, 125 Nashua Street, Suite 660, Boston, MA 02114, United States.
Neuroimage. 2021 Oct 15;240:118380. doi: 10.1016/j.neuroimage.2021.118380. Epub 2021 Jul 9.
Parametric imaging based on dynamic positron emission tomography (PET) has wide applications in neurology. Compared with indirect methods, direct reconstruction methods, which reconstruct parametric images directly from the raw PET data, achieve superior image quality through better noise modeling and the richer information extracted from the raw data. In low-dose scenarios, the advantages of direct methods are even more pronounced. However, the wide adoption of direct reconstruction is impeded by its excessive computational demand and the limited accessibility of raw data. In addition, motion modeling inside dynamic PET image reconstruction raises further computational challenges for direct methods. In this work, we focused on the 18F-FDG Patlak model and proposed a data-driven approach, built on a novel temporal non-local convolutional neural network, that estimates motion-corrected full-dose direct Patlak images from a dynamic PET reconstruction series. During network training, direct reconstruction with motion correction based on full-dose dynamic PET sinograms was performed to obtain the training labels, and the reconstructed full-dose/low-dose dynamic PET images were supplied as the network input. A temporal non-local block operating on the dynamic PET images was proposed to better recover structural information and reduce image noise. During testing, the network directly outputs high-quality Patlak parametric images from full-dose/low-dose dynamic PET images in seconds. Experiments on 15 full-dose and 15 low-dose 18F-FDG brain datasets were conducted and analyzed to validate the feasibility of the proposed framework. Results show that the proposed framework generates better image quality than the reference methods.
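For context on the "indirect" baseline the abstract contrasts against: in Patlak graphical analysis, the net influx rate Ki is the slope of the normalized tissue activity ct/cp plotted against the "Patlak time" ∫cp dt / cp, fitted over the late, linear portion of the scan. The sketch below is a generic NumPy implementation of that standard fit, not the paper's own code; the function name `patlak_fit` and the choice of trapezoidal integration and ordinary least squares are illustrative assumptions.

```python
import numpy as np

def patlak_fit(ct, cp, t, t_star_idx):
    """Estimate the Patlak slope Ki (net influx rate) and intercept.

    ct : tissue time-activity curve for one voxel/ROI, shape (T,)
    cp : plasma input function, shape (T,)
    t  : frame mid-times, shape (T,)
    t_star_idx : first frame index of the linear (steady-state) portion
    """
    # Cumulative integral of the input function (trapezoidal rule,
    # assumed zero activity before the first frame).
    int_cp = np.concatenate(
        ([0.0], np.cumsum(0.5 * np.diff(t) * (cp[1:] + cp[:-1])))
    )
    x = int_cp / cp   # "Patlak time"
    y = ct / cp       # normalized tissue activity
    # Ordinary least-squares line through the late, linear portion.
    ki, intercept = np.polyfit(x[t_star_idx:], y[t_star_idx:], 1)
    return ki, intercept
```

Applied voxel-wise to reconstructed frames, this is the indirect route: each voxel's fit is independent and ignores the Poisson statistics of the sinogram data, which is why it degrades faster than direct reconstruction at low dose.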
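The abstract does not specify the architecture of the temporal non-local block, so the following is only a minimal NumPy sketch of the general non-local (self-attention) idea it builds on: each temporal frame is augmented with a similarity-weighted mixture of all frames, so that structure consistent across time is reinforced while frame-wise noise is averaged down. The identity embeddings and the function name `temporal_nonlocal` are illustrative assumptions; the actual network would use learned embeddings (e.g. 1x1 convolutions) inside a CNN.

```python
import numpy as np

def temporal_nonlocal(x):
    """Minimal temporal non-local operation.

    x : (T, N) array — T temporal frames, each flattened to N voxels.
    Returns the input plus a similarity-weighted mixture of all frames
    (a residual non-local / self-attention step over the time axis).
    """
    # Pairwise frame similarity (identity embeddings for simplicity).
    scores = x @ x.T / np.sqrt(x.shape[1])           # (T, T)
    # Row-wise softmax -> attention weights over temporal frames.
    scores = scores - scores.max(axis=1, keepdims=True)
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    return x + attn @ x                              # residual connection
```

The residual form keeps the original frame content intact while the attention term injects complementary structural information from the rest of the dynamic series.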