Deep Learning-Based Feature Extraction from Whole-Body PET/CT Employing Maximum Intensity Projection Images: Preliminary Results of Lung Cancer Data.

Author Information

Gil Joonhyung, Choi Hongyoon, Paeng Jin Chul, Cheon Gi Jeong, Kang Keon Wook

Affiliations

Department of Nuclear Medicine, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul 03080, Republic of Korea.

Department of Nuclear Medicine, Seoul National University College of Medicine, 101 Daehak-ro, Jongno-gu, Seoul 03080, Republic of Korea.

Publication Information

Nucl Med Mol Imaging. 2023 Oct;57(5):216-222. doi: 10.1007/s13139-023-00802-9. Epub 2023 Apr 19.

Abstract

PURPOSE

Deep learning (DL) has been widely used in various medical imaging analyses. Because volumetric data are difficult to process, it is challenging to train an end-to-end DL model that takes a PET volume as input for purposes such as diagnostic classification. We suggest an approach that employs two maximum intensity projection (MIP) images generated from a whole-body FDG PET volume, so that pre-trained models based on 2-D images can be used.
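As an illustration of this projection step, the sketch below collapses a 3-D PET volume into anterior and lateral MIP images by taking the voxel-wise maximum along two orthogonal axes. This is a minimal sketch assuming a NumPy array in (z, y, x) order; the array layout, function name, and synthetic example volume are illustrative assumptions, not details reported in the paper.

```python
import numpy as np

def make_mip_pair(pet_volume):
    """Collapse a 3-D PET volume into anterior and lateral MIP images.

    Assumes the volume is ordered (z, y, x): axis 1 is the
    anterior-posterior direction and axis 2 is the left-right direction.
    """
    mip_anterior = pet_volume.max(axis=1)  # project along the AP axis -> frontal (anterior) view
    mip_lateral = pet_volume.max(axis=2)   # project along the LR axis -> sagittal (lateral) view
    return mip_anterior, mip_lateral

# Example with a synthetic volume (256 slices of 128 x 128 voxels)
volume = np.random.rand(256, 128, 128).astype(np.float32)
anterior, lateral = make_mip_pair(volume)
print(anterior.shape, lateral.shape)  # (256, 128) (256, 128)
```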

METHODS

In this retrospective, proof-of-concept study, 562 [18F]FDG PET/CT images and the clinicopathological factors of lung cancer patients were collected. MIP images of the anterior and lateral views were used as inputs, and image features were extracted with a pre-trained convolutional neural network (CNN) model, ResNet-50. The relationships among the images were depicted on a parametric 2-D map using t-distributed stochastic neighbor embedding (t-SNE), together with the clinicopathological factors.
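A minimal sketch of this pipeline is shown below, assuming torchvision's ImageNet-pretrained ResNet-50 with its classification head replaced by an identity layer and scikit-learn's t-SNE. The preprocessing (intensity normalization, 3-channel replication, resizing to 224x224), the concatenation of the two view features, the t-SNE settings, and the `dataset` variable are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np
import torch
from torchvision import models
from sklearn.manifold import TSNE

# ImageNet-pretrained ResNet-50 with the final classification layer removed,
# so the forward pass returns a 2048-dimensional feature vector per image.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()
backbone.eval()

def mip_to_tensor(mip):
    """Normalize a single-channel MIP to [0, 1] and repeat it to 3 channels."""
    mip = (mip - mip.min()) / (mip.max() - mip.min() + 1e-8)
    x = torch.from_numpy(mip).float().unsqueeze(0).repeat(3, 1, 1)
    # Resize to the 224 x 224 input size expected by ResNet-50.
    return torch.nn.functional.interpolate(
        x.unsqueeze(0), size=(224, 224), mode="bilinear", align_corners=False
    ).squeeze(0)

@torch.no_grad()
def extract_features(mip_anterior, mip_lateral):
    """Concatenate the ResNet-50 features of the two views into one vector."""
    batch = torch.stack([mip_to_tensor(mip_anterior), mip_to_tensor(mip_lateral)])
    feats = backbone(batch)          # shape (2, 2048)
    return feats.flatten().numpy()   # shape (4096,)

# 'dataset' is a hypothetical iterable of (anterior, lateral) MIP pairs, one per patient.
features = np.stack([extract_features(a, l) for a, l in dataset])
embedding = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(features)
```

Concatenating the two 2048-dimensional view features into a single 4096-dimensional vector is one simple way to fuse the views; averaging them or training a small fusion layer would be alternatives.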

RESULTS

The DL-based features extracted from the two MIP images were embedded by t-SNE. On the visualized t-SNE map, PET images were clustered by clinicopathological features. A representative difference between clusters of PET patterns according to patient posture was visually identified. The map also showed clustering according to various clinicopathological factors, including sex and tumor stage.
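The coloring of the embedding by a clinicopathological factor can be sketched as follows; the placeholder arrays stand in for the (N, 2) t-SNE output of the previous sketch and the per-patient labels, and coloring by sex is used only because sex is one of the factors mentioned above.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder data: 'embedding' would be the (N, 2) t-SNE result and
# 'sex' the per-patient labels from the actual cohort.
rng = np.random.default_rng(0)
embedding = rng.normal(size=(562, 2))
sex = rng.choice(["M", "F"], size=562)

for label in np.unique(sex):
    mask = sex == label
    plt.scatter(embedding[mask, 0], embedding[mask, 1], s=8, label=label)
plt.legend()
plt.title("t-SNE map of MIP-based PET features, colored by sex")
plt.show()
```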

CONCLUSION

A pre-trained model based on 2-D images could extract image patterns from whole-body FDG PET volumes using anterior and lateral MIP views, bypassing the direct use of 3-D PET volumes, which requires large datasets and computational resources. We suggest that this approach could be implemented as a backbone model for various whole-body PET image analysis applications.
