State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, 310027, People's Republic of China.
Phys Med Biol. 2019 Sep 19;64(18):185016. doi: 10.1088/1361-6560/ab3103.
Dual-tracer positron emission tomography (PET) is a promising technique for measuring the distributions of two tracers in the body with a single scan, which can improve the clinical accuracy of disease diagnosis and also serve as a research tool for scientists. Most current research on dual-tracer PET reconstruction is based on mixed images pre-reconstructed by conventional algorithms, which limits further improvement in reconstruction precision. In this study, we present a hybrid loss-guided deep learning framework for dual-tracer PET imaging that works directly from sinogram data and naturally unifies two functions: reconstruction of the mixed images and separation of the individual tracers. To exploit the volumetric dual-tracer images, we adopted a three-dimensional (3D) convolutional neural network (CNN) that learns spatial and temporal features simultaneously. In addition, an auxiliary loss layer was introduced to guide the reconstruction of the two tracers. We used Monte Carlo simulations with data augmentation to generate sufficient datasets for training and testing. The results were analyzed in terms of bias and variance, both spatially (across different regions of interest) and temporally (across different frames), and the analysis verified the feasibility of the 3D CNN framework for dual-tracer reconstruction. Furthermore, we compared the reconstruction results with those of a deep belief network (DBN), another deep learning technique that separates dual-tracer images based on time-activity curves (TACs); the comparison provides insights into the superior features and performance of the 3D CNN. Finally, we tested [11C]FMZ-[11C]DTBZ images at three total-count levels ([Formula: see text], [Formula: see text], [Formula: see text]), corresponding to different noise levels.
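The hybrid loss idea — jointly penalizing the mixed-image reconstruction and the per-tracer separation — can be sketched as follows. This is a minimal illustration only: the mean-squared-error terms, the weighting factor `alpha`, and the toy volume shapes are assumptions for demonstration, not details given in the abstract.

```python
import numpy as np

def hybrid_loss(pred_mixed, pred_tracers, true_mixed, true_tracers, alpha=0.5):
    """Weighted sum of an auxiliary mixed-image reconstruction loss and a
    per-tracer separation loss (both MSE here). `alpha` is a hypothetical
    balancing weight between the two terms."""
    mse_mixed = np.mean((pred_mixed - true_mixed) ** 2)
    mse_sep = np.mean((pred_tracers - true_tracers) ** 2)
    return alpha * mse_mixed + (1 - alpha) * mse_sep

# Toy dynamic volumes: (tracer, frame, depth, height, width)
rng = np.random.default_rng(0)
true_tracers = rng.random((2, 4, 8, 8, 8))   # two tracers
true_mixed = true_tracers.sum(axis=0)        # mixed image = sum of tracers
pred_tracers = true_tracers + 0.01           # prediction with a small offset
pred_mixed = pred_tracers.sum(axis=0)
loss = hybrid_loss(pred_mixed, pred_tracers, true_mixed, true_tracers)
```

In this toy setting the mixed-image error compounds the two per-tracer offsets, so the auxiliary term contributes a larger share of the loss; in training, such a term can guide the network toward reconstructions whose sum stays consistent with the measured mixture.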
The analysis results demonstrate that, within the range of total counts we applied, our method can recover the respective tracer distributions at lower total counts with nearly the same accuracy as at higher total counts, which also indicates that the proposed 3D CNN framework is more robust to noise than the DBN.
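The spatial and temporal bias/variance analysis described above can be computed roughly as in the sketch below, assuming a known ground-truth activity volume is available (as it is in Monte Carlo simulation). The relative-bias definition, the uniform toy phantom, and the cubic ROI are hypothetical illustration choices, not the paper's exact protocol.

```python
import numpy as np

def roi_bias_variance(recon, truth, roi_mask):
    """Per-frame relative bias and voxel variance inside one ROI.
    `recon` and `truth` have shape (frames, depth, height, width);
    `roi_mask` is a boolean volume selecting the ROI voxels."""
    vals_r = recon[:, roi_mask]              # (frames, voxels_in_roi)
    vals_t = truth[:, roi_mask]
    bias = np.abs(vals_r.mean(axis=1) - vals_t.mean(axis=1)) / vals_t.mean(axis=1)
    var = vals_r.var(axis=1)                 # spatial variance per frame
    return bias, var

rng = np.random.default_rng(1)
truth = np.full((3, 8, 8, 8), 2.0)           # 3 frames of uniform activity
recon = truth + rng.normal(0.0, 0.1, truth.shape)  # noisy reconstruction
mask = np.zeros((8, 8, 8), dtype=bool)
mask[2:6, 2:6, 2:6] = True                   # cubic ROI
bias, var = roi_bias_variance(recon, truth, mask)
```

Evaluating these two quantities across ROIs (spatial) and across frames (temporal) yields the kind of bias/variance maps used to compare the 3D CNN against the DBN at different total-count levels.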