Guo Xueqi, Zhou Bo, Chen Xiongchao, Liu Chi, Dvornek Nicha C
Yale University, New Haven, CT 06511, USA.
Med Image Comput Comput Assist Interv. 2022 Sep;13434:163-172. doi: 10.1007/978-3-031-16440-8_16. Epub 2022 Sep 16.
Inter-frame patient motion introduces spatial misalignment and degrades parametric imaging in whole-body dynamic positron emission tomography (PET). Most current deep learning inter-frame motion correction methods consider only the image registration problem and ignore tracer kinetics. We propose an inter-frame Motion Correction framework with Patlak regularization (MCP-Net) that directly optimizes the Patlak fitting error to further improve model performance. MCP-Net contains three modules: a motion estimation module consisting of a multiple-frame 3-D U-Net with a convolutional long short-term memory (ConvLSTM) layer at the bottleneck; an image warping module that performs the spatial transformation; and an analytical Patlak module that estimates the Patlak fit from the motion-corrected frames and the individual input function. A Patlak loss penalization term based on the mean squared percentage fitting error is added to the loss function alongside the image similarity measure and the displacement gradient loss. Following motion correction, the parametric images were generated by standard Patlak analysis. Compared with both traditional and deep learning benchmarks, our network further corrected the residual spatial mismatch in the dynamic frames, improved the spatial alignment of the Patlak Ki/Vb images, and reduced the normalized fitting error. By utilizing tracer dynamics and enhancing network performance, MCP-Net has the potential to further improve the quantitative accuracy of dynamic PET. Our code is released at https://github.com/gxq1998/MCP-Net.
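For illustration, below is a minimal PyTorch-style sketch of what the analytical Patlak module and the mean squared percentage fitting error penalty described in the abstract could look like. The function names, tensor shapes, and the plain per-voxel least-squares formulation are assumptions made for exposition, not the authors' released implementation (see the repository linked above for that).

import torch

def patlak_fit(frames, cp, cp_int):
    # frames : (T, N) motion-corrected activity of T late frames over N voxels (assumed layout)
    # cp     : (T,)   plasma input function sampled at the frame mid-times
    # cp_int : (T,)   running time-integral of the input function at those times
    x = (cp_int / cp).unsqueeze(1)                 # (T, 1) Patlak abscissa
    y = frames / cp.unsqueeze(1)                   # (T, N) Patlak ordinate
    A = torch.cat([x, torch.ones_like(x)], dim=1)  # (T, 2) design matrix [x, 1]
    sol = torch.linalg.lstsq(A, y).solution        # (2, N): per-voxel slope Ki and intercept Vb
    ki, vb = sol[0], sol[1]
    frames_fit = (A @ sol) * cp.unsqueeze(1)       # fitted frames mapped back to activity space
    return ki, vb, frames_fit

def patlak_loss(frames, frames_fit, eps=1e-6):
    # Mean squared percentage fitting error between measured and Patlak-fitted frames.
    rel_err = (frames - frames_fit) / (frames.abs() + eps)
    return (rel_err ** 2).mean()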
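Likewise, a hedged sketch of how the three loss terms named in the abstract (image similarity, displacement gradient smoothness, and the Patlak penalty) could be combined during training, reusing patlak_loss from the sketch above. The weights lam_smooth and lam_patlak and the choice of MSE as the similarity term are illustrative assumptions, not the paper's reported settings.

def gradient_loss(disp):
    # disp: (B, 3, D, H, W) predicted displacement fields; penalize first-order spatial gradients.
    dz = (disp[:, :, 1:, :, :] - disp[:, :, :-1, :, :]) ** 2
    dy = (disp[:, :, :, 1:, :] - disp[:, :, :, :-1, :]) ** 2
    dx = (disp[:, :, :, :, 1:] - disp[:, :, :, :, :-1]) ** 2
    return dz.mean() + dy.mean() + dx.mean()

def total_loss(warped, reference, disp, frames, frames_fit,
               lam_smooth=1.0, lam_patlak=0.1):
    sim = torch.mean((warped - reference) ** 2)   # image similarity term (MSE assumed here)
    smooth = gradient_loss(disp)                  # displacement gradient (smoothness) loss
    patlak = patlak_loss(frames, frames_fit)      # Patlak regularization penalty
    return sim + lam_smooth * smooth + lam_patlak * patlak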