Generating synthetic brain PET images of synaptic density based on MR T1 images using deep learning.

Author Information

Zheng Xinyuan, Worhunsky Patrick, Liu Qiong, Guo Xueqi, Chen Xiongchao, Sun Heng, Zhang Jiazhen, Toyonaga Takuya, Mecca Adam P, O'Dell Ryan S, van Dyck Christopher H, Angarita Gustavo A, Cosgrove Kelly, D'Souza Deepak, Matuskey David, Esterlis Irina, Carson Richard E, Radhakrishnan Rajiv, Liu Chi

Affiliations

Department of Biomedical Engineering, Yale University, New Haven, CT, USA.

Department of Psychiatry, Yale University, New Haven, CT, USA.

Publication Information

EJNMMI Phys. 2025 Mar 31;12(1):30. doi: 10.1186/s40658-025-00744-5.

Abstract

PURPOSE

Synaptic vesicle glycoprotein 2A (SV2A) in the human brain is an important biomarker of the synaptic loss associated with several neurological disorders. However, SV2A tracers such as [11C]UCB-J are not widely available in practice due to constraints such as cost, radiation exposure, and the need for an onsite cyclotron. In this study, we therefore aim to generate synthetic [11C]UCB-J PET images based on MRI.

METHODS

We implemented a convolution-based 3D encoder-decoder to predict [11C]UCB-J SV2A PET images. A total of 160 participants who underwent both MRI and [11C]UCB-J PET imaging, including individuals with schizophrenia, cannabis use disorder, and Alzheimer's disease, were included in this study. The model was trained on pairs of T1-weighted MRI and [11C]UCB-J distribution volume ratio (DVR) images and tested through 10-fold cross-validation. Image translation accuracy was evaluated using the mean squared error, structural similarity index, percentage bias, and Pearson's correlation coefficient between the ground-truth and predicted images. Additionally, we assessed prediction accuracy in selected regions of interest (ROIs) relevant to these brain disorders.
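
The abstract does not specify the exact network configuration, so the following is a minimal sketch: a plain convolution-based 3D encoder-decoder in PyTorch that maps a T1-weighted MRI volume to a predicted [11C]UCB-J DVR volume. The layer count, channel widths, and 64-voxel patch size are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (illustrative, not the authors' exact architecture): a
# convolution-based 3D encoder-decoder mapping a T1-weighted MRI volume
# to a predicted [11C]UCB-J DVR volume. Depth, channel widths, and the
# 64^3 patch size are assumptions for demonstration only.
import torch
import torch.nn as nn

class EncoderDecoder3D(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, base=16):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm3d(c_out),
                nn.ReLU(inplace=True),
            )
        # Encoder: extract features and downsample the MRI volume
        self.enc1 = block(in_ch, base)
        self.enc2 = block(base, base * 2)
        self.pool = nn.MaxPool3d(2)
        # Decoder: upsample back to the input resolution and predict DVR
        self.up = nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False)
        self.dec1 = block(base * 2, base)
        self.head = nn.Conv3d(base, out_ch, kernel_size=1)

    def forward(self, x):
        x = self.enc1(x)      # (B, base, D, H, W)
        x = self.pool(x)      # downsample by 2
        x = self.enc2(x)      # (B, 2*base, D/2, H/2, W/2)
        x = self.up(x)        # upsample back to input resolution
        x = self.dec1(x)
        return self.head(x)   # predicted DVR volume

# Example forward pass on a placeholder T1 patch (batch, channel, D, H, W)
model = EncoderDecoder3D()
t1_patch = torch.randn(1, 1, 64, 64, 64)
pred_dvr = model(t1_patch)    # shape: (1, 1, 64, 64, 64)
```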

RESULTS

The generated SV2A PET images are visually similar to the ground truth in contrast and tracer distribution, and quantitatively show low bias (< 2%) and high similarity (> 0.9). Across all diagnostic categories and ROIs, including the hippocampus and the frontal, occipital, parietal, and temporal regions, the synthetic SV2A PET images exhibit an average bias of less than 5% relative to the ground truth. The model also demonstrates a capacity for noise reduction, producing images of higher quality than the low-dose scans.
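
For reference, below is a hedged sketch of how the reported metrics (MSE, SSIM, percentage bias, Pearson's r) and the per-ROI bias could be computed with NumPy, SciPy, and scikit-image. The brain/ROI masking, SSIM settings, and ROI label handling are assumptions rather than the authors' exact evaluation protocol.

```python
# Hedged sketch of the reported evaluation metrics computed within a brain
# or ROI mask; masking and SSIM settings are assumptions, not the authors'
# exact protocol.
import numpy as np
from scipy.stats import pearsonr
from skimage.metrics import structural_similarity

def evaluate_volume(pred, truth, mask):
    """pred, truth: 3D DVR volumes (np.ndarray); mask: boolean brain/ROI mask."""
    p, t = pred[mask], truth[mask]
    mse = float(np.mean((p - t) ** 2))
    bias_pct = 100.0 * float((p.mean() - t.mean()) / t.mean())  # regional % bias
    r = float(pearsonr(p, t)[0])                                # voxel-wise correlation
    ssim = structural_similarity(
        pred, truth, data_range=float(truth.max() - truth.min())
    )
    return {"mse": mse, "bias_%": bias_pct, "pearson_r": r, "ssim": ssim}

def roi_bias(pred, truth, label_map, roi_labels):
    """roi_labels: dict mapping ROI name -> integer label in label_map (labels assumed)."""
    out = {}
    for name, lab in roi_labels.items():
        m = label_map == lab
        out[name] = 100.0 * float((pred[m].mean() - truth[m].mean()) / truth[m].mean())
    return out
```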

CONCLUSION

We conclude that it is feasible to generate robust synthetic SV2A PET images from MRI with promising accuracy via a data-driven approach.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4a54/11958861/9b2aefa7bf48/40658_2025_744_Fig1_HTML.jpg
