Gil Joonhyung, Choi Hongyoon, Paeng Jin Chul, Cheon Gi Jeong, Kang Keon Wook
Department of Nuclear Medicine, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080 Republic of Korea.
Department of Nuclear Medicine, Seoul National University College of Medicine, 101 Daehak-ro, Jongno-gu, Seoul, 03080 Republic of Korea.
Nucl Med Mol Imaging. 2023 Oct;57(5):216-222. doi: 10.1007/s13139-023-00802-9. Epub 2023 Apr 19.
Deep learning (DL) has been widely used in various medical imaging analyses. Because volumetric data are difficult to process, training a DL model end-to-end with PET volumes as input, for purposes including diagnostic classification, remains challenging. We suggest an approach that uses two maximum intensity projection (MIP) images generated from whole-body FDG PET volumes, so that pre-trained models based on 2-D images can be employed.
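As a minimal illustration of the idea (not the authors' published code), the two MIP views can be obtained by taking the voxel-wise maximum along two orthogonal axes of the PET volume; the sketch below assumes a NumPy volume in (z, y, x) order:

```python
import numpy as np

def mip_views(pet_volume: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Collapse a whole-body PET volume into anterior and lateral MIPs.

    Assumes axial slices are stacked along axis 0 (z), with axis 1 the
    anterior-posterior direction and axis 2 the left-right direction.
    """
    anterior = pet_volume.max(axis=1)  # coronal projection (anterior view)
    lateral = pet_volume.max(axis=2)   # sagittal projection (lateral view)
    return anterior, lateral
```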
As a retrospective, proof-of-concept study, 562 [18F]FDG PET/CT images and clinicopathological factors of lung cancer patients were collected. MIP images of anterior and lateral views were used as inputs, and image features were extracted by a pre-trained convolutional neural network (CNN) model, ResNet-50. The relationships among the images were depicted on a parametric 2-D map using t-distributed stochastic neighbor embedding (t-SNE), annotated with clinicopathological factors.
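A hedged sketch of such feature extraction with an ImageNet pre-trained ResNet-50 from torchvision follows; the input scaling and single-channel-to-RGB replication are assumptions, since the abstract does not specify preprocessing details:

```python
import numpy as np
import torch
from torchvision.models import resnet50, ResNet50_Weights
from torchvision.transforms.functional import normalize, resize

# ImageNet pre-trained ResNet-50 with the classifier head removed, so the
# forward pass returns the 2048-d global-average-pooled feature vector.
model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
model.fc = torch.nn.Identity()
model.eval()

@torch.no_grad()
def mip_features(mip: np.ndarray) -> np.ndarray:
    """Extract ResNet-50 features from one 2-D MIP image.

    The single-channel MIP (assumed scaled to [0, 1]) is replicated to
    three channels to match the RGB input the pre-trained model expects.
    """
    x = torch.from_numpy(mip).float().unsqueeze(0).repeat(3, 1, 1)  # (3, H, W)
    x = resize(x, [224, 224], antialias=True)
    x = normalize(x, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    return model(x.unsqueeze(0)).squeeze(0).numpy()  # (2048,)
```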
The DL-based features extracted from the two MIP images were embedded by t-SNE. In the visualized t-SNE map, PET images were clustered by clinicopathological features. Representative differences between clusters of PET patterns according to patient posture were visually identified. The map also showed clustering according to various clinicopathological factors, including sex as well as tumor staging.
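The embedding step can be reproduced in outline with scikit-learn's TSNE; the perplexity setting and the concatenation of the two per-view feature vectors into one descriptor per scan are illustrative assumptions:

```python
import numpy as np
from sklearn.manifold import TSNE

def embed_features(anterior_feats: np.ndarray,
                   lateral_feats: np.ndarray) -> np.ndarray:
    """Map per-patient image features onto a 2-D t-SNE plane.

    anterior_feats, lateral_feats: (n_patients, 2048) ResNet-50 features
    of each view, concatenated here into one descriptor per scan.
    """
    features = np.concatenate([anterior_feats, lateral_feats], axis=1)
    tsne = TSNE(n_components=2, perplexity=30, init="pca", random_state=0)
    return tsne.fit_transform(features)  # (n_patients, 2) map coordinates
```

Each point of the resulting map can then be colored by a clinicopathological factor (e.g., sex or tumor stage) to inspect the clustering described above.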
A 2-D image-based pre-trained model can extract image patterns from whole-body FDG PET volumes using anterior and lateral MIP views, bypassing the direct use of 3-D PET volumes, which requires large datasets and computational resources. We suggest that this approach could serve as a backbone model for various applications in whole-body PET image analysis.