Independent attenuation correction of whole body [18F]FDG-PET using a deep learning approach with Generative Adversarial Networks.

Author information

Armanious Karim, Hepp Tobias, Küstner Thomas, Dittmann Helmut, Nikolaou Konstantin, La Fougère Christian, Yang Bin, Gatidis Sergios

Affiliations

Department of Radiology, Diagnostic and Interventional Radiology, University Hospital Tübingen, Hoppe-Seyler-Str. 3, 72076, Tübingen, Germany.

Institute of Signal Processing and System Theory, University of Stuttgart, Stuttgart, Germany.

Publication information

EJNMMI Res. 2020 May 24;10(1):53. doi: 10.1186/s13550-020-00644-y.

Abstract

BACKGROUND

Attenuation correction (AC) of PET data is usually performed using a second imaging modality to generate attenuation maps. In certain situations, however (when CT- or MR-derived attenuation maps are corrupted, or when a CT acquisition solely for the purpose of AC is to be avoided), it would be of value to obtain attenuation maps based on PET information alone. The purpose of this study was thus to develop, implement, and evaluate a deep learning-based method for whole body [18F]FDG-PET AC which is independent of other imaging modalities for acquiring the attenuation map.

METHODS

The proposed method was investigated on whole body [18F]FDG-PET data using a Generative Adversarial Network (GAN) deep learning framework. The network is trained to generate pseudo CT images (CT_GAN) from paired training data of non-attenuation-corrected PET data (PET_NAC) and corresponding CT data. The generated pseudo CTs are then used for subsequent PET AC. One hundred data sets of whole body PET_NAC and corresponding CT were used for training. Twenty-five PET/CT examinations were used as test data sets (not included in training). On these test data sets, AC of PET was performed using both the acquired CT and CT_GAN, resulting in the corresponding PET data sets PET_CT and PET_GAN. CT_GAN and PET_GAN were evaluated qualitatively by visual inspection and by visual analysis of color-coded difference maps. Quantitative analysis was performed by comparing organ and lesion SUVs between PET_GAN and PET_CT.
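Once a pseudo CT has been generated, it must be converted into a 511 keV attenuation map before reconstruction. A minimal sketch of the bilinear HU-to-attenuation conversion commonly used in CT-based AC is shown below; the function name and the bone-slope coefficient are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

# Approximate linear attenuation coefficient of water at 511 keV (cm^-1)
MU_WATER = 0.096

def hu_to_mu511(hu):
    """Bilinear conversion of CT Hounsfield units to 511 keV linear
    attenuation coefficients, as commonly used for CT-based PET AC.

    Below 0 HU, tissue is modeled as an air/water mixture; above
    0 HU, a shallower water/bone slope is used (the slope value
    here is an illustrative assumption).
    """
    hu = np.asarray(hu, dtype=float)
    mu = np.where(
        hu <= 0,
        MU_WATER * (hu + 1000.0) / 1000.0,  # air/water mixture
        MU_WATER + hu * 5.1e-5,             # water/bone mixture
    )
    return np.clip(mu, 0.0, None)           # attenuation cannot be negative

# Example: air (-1000 HU), water (0 HU), dense bone (~1000 HU)
mu = hu_to_mu511([-1000, 0, 1000])
```

The piecewise form reflects that soft tissue and bone with the same CT number attenuate 511 keV photons differently, which is why a single linear scaling is not used.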

RESULTS

Qualitative analysis revealed no major SUV deviations on PET_GAN for most anatomic regions; visually detectable deviations were mainly observed along the diaphragm and the lung border. Quantitative analysis revealed a mean percent deviation of SUVs on PET_GAN of -0.8 ± 8.6% over all organs (range [-30.7%, +27.1%]). Mean lesion SUVs showed a mean deviation of 0.9 ± 9.2% (range [-19.6%, +29.2%]).

CONCLUSION

Independent AC of whole body [18F]FDG-PET is feasible using the proposed deep learning approach, yielding satisfactory PET quantification accuracy. Further clinical validation is necessary prior to implementation in routine clinical applications.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/99af/7246235/0c7fb2b10c9e/13550_2020_644_Fig1_HTML.jpg
