

Bidirectional dynamic frame prediction network for total-body [68Ga]Ga-PSMA-11 and [68Ga]Ga-FAPI-04 PET images.

Author information

Yang Qianyi, Li Wenbo, Huang Zhenxing, Chen Zixiang, Zhao Wenjie, Gao Yunlong, Yang Xinlan, Yang Yongfeng, Zheng Hairong, Liang Dong, Liu Jianjun, Chen Ruohua, Hu Zhanli

Affiliations

College of Information Science and Engineering, Northeastern University, Shenyang, 110819, China.

Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518000, China.

Publication information

EJNMMI Phys. 2024 Nov 4;11(1):92. doi: 10.1186/s40658-024-00698-0.

Abstract

PURPOSE

Total-body dynamic positron emission tomography (PET) imaging with total-body coverage and ultrahigh sensitivity has played an important role in accurate tracer kinetic analyses in physiology, biochemistry, and pharmacology. However, dynamic PET scans typically entail prolonged durations ([Formula: see text]60 minutes), potentially causing patient discomfort and resulting in artifacts in the final images. Therefore, we propose a dynamic frame prediction method for total-body PET imaging via deep learning technology to reduce the required scanning time.

METHODS

On the basis of total-body dynamic PET data acquired from 13 subjects who received [68Ga]Ga-FAPI-04 (Ga-FAPI) and 24 subjects who received [68Ga]Ga-PSMA-11 (Ga-PSMA), we propose a bidirectional dynamic frame prediction network that uses the initial and final 10 min of PET imaging data (frames 1-6 and frames 25-30, respectively) as inputs. The peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) were employed as evaluation metrics for image quality assessment. Moreover, we calculated parametric images (Ga-FAPI: [Formula: see text], Ga-PSMA: [Formula: see text]) based on the supplemented sequence data to observe the quantitative accuracy of our approach. Regions of interest (ROIs) and statistical analyses were utilized to evaluate the performance of the model.
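To make the input setup concrete, the following is a minimal sketch (not the authors' code) of how a 30-frame dynamic sequence could be split into the observed early/late segments and the middle frames to be predicted; the (30, D, H, W) array layout and the function name split_dynamic_sequence are assumptions made for illustration.

```python
import numpy as np


def split_dynamic_sequence(dynamic_pet: np.ndarray):
    """Split a 30-frame dynamic PET sequence into observed inputs and prediction targets.

    dynamic_pet: assumed array of shape (30, D, H, W), one reconstructed volume per frame.
    Frames 1-6 (first 10 min) and frames 25-30 (last 10 min) are the observed inputs;
    frames 7-24 are the middle frames a bidirectional predictor would be trained to fill in.
    Indexing below is 0-based, so frame 1 corresponds to index 0.
    """
    early = dynamic_pet[0:6]    # frames 1-6: forward (early) temporal context
    late = dynamic_pet[24:30]   # frames 25-30: backward (late) temporal context
    target = dynamic_pet[6:24]  # frames 7-24: frames to be predicted

    # Concatenate both observed segments along the frame axis so a network can
    # condition on the early and late dynamics simultaneously: shape (12, D, H, W).
    inputs = np.concatenate([early, late], axis=0)
    return inputs, target
```

Under this split, a model would receive 12 observed frames and be supervised on the 18 intermediate frames (frames 7-24) that cover the omitted portion of the one-hour acquisition.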

RESULTS

Both the visual and quantitative results illustrate the effectiveness of our approach. The generated dynamic PET images yielded PSNRs of 36.056 ± 0.709 dB for the Ga-PSMA group and 33.779 ± 0.760 dB for the Ga-FAPI group. Additionally, the SSIM reached 0.935 ± 0.006 for the Ga-FAPI group and 0.922 ± 0.009 for the Ga-PSMA group. By conducting a quantitative analysis on the parametric images, we obtained PSNRs of 36.155 ± 4.813 dB (Ga-PSMA, [Formula: see text]) and 43.150 ± 4.102 dB (Ga-FAPI, [Formula: see text]). The obtained SSIM values were 0.932 ± 0.041 (Ga-PSMA) and 0.980 ± 0.011 (Ga-FAPI). The ROI analysis conducted on our generated dynamic PET sequences also revealed that our method can accurately predict temporal voxel intensity changes, maintaining overall visual consistency with the ground truth.
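For reference, per-frame PSNR and SSIM values such as those reported above could be computed with standard scikit-image metrics, as in the minimal sketch below; the choice of data range and the helper name evaluate_predicted_frame are illustrative assumptions rather than the authors' evaluation code.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate_predicted_frame(pred: np.ndarray, gt: np.ndarray):
    """Return PSNR (in dB) and SSIM for one predicted frame against its reference.

    Both volumes must share the same shape (e.g. D x H x W). The dynamic range is
    taken from the reference frame so that both metrics are scaled consistently.
    """
    data_range = float(gt.max() - gt.min())
    psnr = peak_signal_noise_ratio(gt, pred, data_range=data_range)
    ssim = structural_similarity(gt, pred, data_range=data_range)
    return psnr, ssim
```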

CONCLUSION

In this work, we propose a bidirectional dynamic frame prediction network for total-body Ga-PSMA and Ga-FAPI PET imaging with a reduced scan duration. Visual and quantitative analyses demonstrated that our approach performs well in predicting one-hour dynamic PET images. The code is available at https://github.com/OPMZZZ/BDF-NET.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e0d/11532329/13cde5ade0a3/40658_2024_698_Fig1_HTML.jpg
