State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, Hangzhou, China.
Med Phys. 2022 Jul;49(7):4585-4598. doi: 10.1002/mp.15566. Epub 2022 May 23.
The central difficulty of dynamic dual-tracer positron emission tomography (PET) is separating complete single-tracer information from the mixed dual-tracer signal. Traditional methods cannot separate single-injection, single-scan dynamic dual-tracer PET images. In this paper, we propose a deep learning framework based on a gated recurrent unit (GRU) network and evaluate its performance in simulation experiments and on real monkey data.
The proposed single-scan dynamic dual-tracer PET image separation network consists of three modules: an encoder, a separator, and a decoder. The encoder maps the mixed time-activity curves (TACs) from a low-dimensional space to a high-dimensional space to obtain a mixed weight-vector matrix. The separator captures the temporal information of this matrix with a bi-directional GRU (bi-GRU) layer to obtain single-tracer masks, and the decoder remaps the masked high-dimensional single-tracer weight-vector matrices back to the low-dimensional space to recover the two separated single-tracer TACs.
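The encoder/bi-GRU-separator/decoder pipeline described above can be sketched per voxel as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: all layer sizes, weight initializations, the ReLU encoder, the softmax mask constraint, and every function name here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, W, U, b):
    """One GRU step. W: (3, in, hid), U: (3, hid, hid), b: (3, hid)."""
    z = sigmoid(x @ W[0] + h @ U[0] + b[0])        # update gate
    r = sigmoid(x @ W[1] + h @ U[1] + b[1])        # reset gate
    n = np.tanh(x @ W[2] + (r * h) @ U[2] + b[2])  # candidate state
    return (1.0 - z) * h + z * n

def run_gru(X, W, U, b, reverse=False):
    """Run a GRU over the time axis of X (T, D); returns (T, hid)."""
    T = X.shape[0]
    hid = U.shape[-1]
    h = np.zeros(hid)
    out = np.zeros((T, hid))
    order = range(T - 1, -1, -1) if reverse else range(T)
    for t in order:
        h = gru_cell(X[t], h, W, U, b)
        out[t] = h
    return out

def separate(mixed_tac, p):
    """Split one mixed TAC (T,) into two single-tracer TACs of shape (T, 1)."""
    X = mixed_tac[:, None]                          # (T, 1)
    H = np.maximum(X @ p["We"], 0.0)                # encoder: low- to high-dim, (T, D)
    fwd = run_gru(H, *p["gru"])                     # forward temporal pass
    bwd = run_gru(H, *p["gru"], reverse=True)       # backward temporal pass
    G = np.concatenate([fwd, bwd], axis=1)          # bi-GRU features, (T, 2*hid)
    logits = np.stack([G @ p["Wm1"], G @ p["Wm2"]])       # (2, T, D)
    masks = np.exp(logits) / np.exp(logits).sum(axis=0)   # masks sum to 1 over tracers
    return [(H * m) @ p["Wd"] for m in masks]       # decoder: back to low-dim

D, hid = 16, 8   # hypothetical layer widths
params = {
    "We":  rng.normal(size=(1, D)) * 0.5,
    "gru": (rng.normal(size=(3, D, hid)) * 0.1,
            rng.normal(size=(3, hid, hid)) * 0.1,
            np.zeros((3, hid))),
    "Wm1": rng.normal(size=(2 * hid, D)) * 0.1,
    "Wm2": rng.normal(size=(2 * hid, D)) * 0.1,
    "Wd":  rng.normal(size=(D, 1)) * 0.5,
}

T = 30
t = np.arange(T)
mixed = np.exp(-0.10 * t) + np.exp(-0.05 * t)   # toy mixed two-tracer TAC
tac1, tac2 = separate(mixed, params)            # two separated (T, 1) TACs
```

With untrained random weights the outputs are of course not meaningful separations; in the paper the weights would be learned by supervising the two decoder outputs against ground-truth single-tracer TACs.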
In simulation experiments spanning different tracers, phantoms, noise levels, arterial input functions (AIFs), and kinetic parameters drawn with Gaussian randomness, the GRU-based network achieved lower bias and mean squared error than a stacked autoencoder network and the traditional background-subtraction method. On the real monkey data, the images produced by the GRU network also showed higher mean structural similarity and peak signal-to-noise ratio.
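The scalar metrics cited above (bias, mean squared error, and peak signal-to-noise ratio) have standard definitions; a small sketch of how they might be computed on paired estimated/reference arrays follows. The function names and the toy data are illustrative, and structural similarity is omitted here since it involves local windowed statistics.

```python
import numpy as np

def bias(est, ref):
    """Mean signed error between estimate and reference."""
    return float(np.mean(est - ref))

def mse(est, ref):
    """Mean squared error."""
    return float(np.mean((est - ref) ** 2))

def psnr(est, ref, data_range=None):
    """Peak signal-to-noise ratio in dB; data_range defaults to the reference span."""
    if data_range is None:
        data_range = float(ref.max() - ref.min())
    return float(10.0 * np.log10(data_range ** 2 / np.mean((est - ref) ** 2)))

# Toy example: a reference TAC and a slightly perturbed estimate.
ref = np.array([1.0, 2.0, 3.0, 4.0])
est = np.array([1.1, 1.9, 3.2, 3.8])
print(mse(est, ref))   # 0.025
```

Lower bias and MSE and higher PSNR (and SSIM) are the directions of improvement reported for the GRU-based network.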
This study demonstrates the feasibility of a temporal-information-guided neural network for single-injection, single-scan dynamic dual-tracer PET image separation. The GRU-based network exploits the temporal information in TACs without requiring AIFs, making the separation more robust and accurate; it significantly outperforms the state-of-the-art method both qualitatively and quantitatively.