Generation of PET Attenuation Map for Whole-Body Time-of-Flight 18F-FDG PET/MRI Using a Deep Neural Network Trained with Simultaneously Reconstructed Activity and Attenuation Maps.

Affiliations

Department of Biomedical Sciences, Seoul National University, Seoul, Korea.

Department of Nuclear Medicine, Seoul National University, Seoul, Korea.

Publication Information

J Nucl Med. 2019 Aug;60(8):1183-1189. doi: 10.2967/jnumed.118.219493. Epub 2019 Jan 25.

Abstract

We propose a new deep learning-based approach to provide more accurate whole-body PET/MRI attenuation correction than is possible with the Dixon-based 4-segment method. We use activity and attenuation maps estimated with the maximum-likelihood reconstruction of activity and attenuation (MLAA) algorithm as inputs to a convolutional neural network (CNN) that learns a CT-derived attenuation map. The whole-body 18F-FDG PET/CT scan data of 100 cancer patients (38 men and 62 women; age, 57.3 ± 14.1 y) were retrospectively used for training and testing the CNN. A modified U-net was trained to predict a CT-derived μ-map (μ-CT) from the MLAA-generated activity distribution (λ-MLAA) and μ-map (μ-MLAA). We used 1.3 million patches derived from the data of 60 patients to train the CNN, the data of 20 other patients as a validation set to prevent overfitting, and the data of the remaining 20 patients as a test set for the performance analysis. The attenuation maps generated using the proposed method (μ-CNN), μ-MLAA, and the 4-segment method (μ-segment) were compared with μ-CT, the ground truth. We also compared the voxelwise correlations between the activity images reconstructed using ordered-subset expectation maximization with each of the μ-maps, as well as the SUVs of primary and metastatic bone lesions obtained by drawing regions of interest on the activity images. The CNN generated less noisy attenuation maps and achieved better bone identification than MLAA. The average Dice similarity coefficient for bone regions between μ-CNN and μ-CT was 0.77, significantly higher than that between μ-MLAA and μ-CT (0.36). The CNN result also showed the best voxel-by-voxel correlation with the CT-based result and markedly reduced the differences from the activity maps obtained with CT-based attenuation correction. The proposed deep neural network produced a more reliable attenuation map for 511-keV photons than the 4-segment method currently used in whole-body PET/MRI studies.
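
The abstract reports average bone Dice similarity coefficients (0.77 for μ-CNN vs. 0.36 for μ-MLAA, each against μ-CT) but does not spell out how the bone regions or the metric were computed. The sketch below, in Python/NumPy, is only an illustration of that evaluation step under stated assumptions: bone voxels are obtained by thresholding each 511-keV μ-map (the 0.11 cm^-1 threshold is an assumed, illustrative value, not a detail taken from the paper), and the standard Dice formula 2|A ∩ B| / (|A| + |B|) is evaluated between the predicted and CT-derived bone masks.

import numpy as np

# Illustrative linear-attenuation threshold (cm^-1) for labeling a voxel as bone
# at 511 keV; soft tissue is roughly 0.096 cm^-1 and bone is noticeably higher.
# The threshold actually used in the paper is not given in the abstract.
BONE_THRESHOLD = 0.11

def bone_mask(mu_map: np.ndarray, threshold: float = BONE_THRESHOLD) -> np.ndarray:
    """Binary bone mask obtained by thresholding a 511-keV attenuation map."""
    return mu_map > threshold

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient: 2 * |A intersect B| / (|A| + |B|)."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

if __name__ == "__main__":
    # Synthetic volumes stand in for a test patient's CT-derived μ-map (μ-CT)
    # and the CNN prediction (μ-CNN); with real data these would be the
    # co-registered attenuation maps in cm^-1.
    rng = np.random.default_rng(0)
    mu_ct = rng.uniform(0.0, 0.15, size=(64, 64, 64))
    mu_cnn = mu_ct + rng.normal(0.0, 0.01, size=mu_ct.shape)
    dsc = dice_coefficient(bone_mask(mu_cnn), bone_mask(mu_ct))
    print(f"Bone Dice similarity coefficient: {dsc:.2f}")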

Similar Articles

MR-guided joint reconstruction of activity and attenuation in brain PET-MR.
Neuroimage. 2017 Nov 15;162:276-288. doi: 10.1016/j.neuroimage.2017.09.006. Epub 2017 Sep 14.

Cited By

Artificial Intelligence (AI) in Nuclear Medicine: Is a Friend Not Foe.
World J Nucl Med. 2024 Jan 22;23(1):1-2. doi: 10.1055/s-0043-1777698. eCollection 2024 Mar.
A review of PET attenuation correction methods for PET-MR.
EJNMMI Phys. 2023 Sep 11;10(1):52. doi: 10.1186/s40658-023-00569-0.

References

Machine learning in biomedical engineering.
Biomed Eng Lett. 2018 Feb 6;8(1):1-3. doi: 10.1007/s13534-018-0058-3. eCollection 2018 Feb.
