School of Electronic and Information Engineering, Changchun University of Science and Technology, Changchun, 130022, China.
Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, 88 Keling Road, Suzhou, 215163, China.
Jpn J Radiol. 2023 Apr;41(4):417-427. doi: 10.1007/s11604-022-01363-1. Epub 2022 Nov 21.
To explore a multidomain fusion model of radiomics and deep learning features based on 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) images to distinguish pancreatic ductal adenocarcinoma (PDAC) from autoimmune pancreatitis (AIP), which could effectively improve diagnostic accuracy.
This retrospective study included 48 patients with AIP (mean age, 65 ± 12.0 years; range, 37-90 years) and 64 patients with PDAC (mean age, 66 ± 11.3 years; range, 32-88 years). Three different methods were evaluated for distinguishing PDAC from AIP based on 18F-FDG PET/CT images: the radiomics model (RAD_model), the deep learning model (DL_model), and the multidomain fusion model (MF_model). We also compared the classification results of PET/CT, PET, and CT images in these three models. In addition, we explored the attributes of the deep learning abstract features by analyzing the correlation between radiomics and deep learning features. Five-fold cross-validation was used to compute receiver operating characteristic (ROC) curves, the area under the ROC curve (AUC), accuracy (Acc), sensitivity (Sen), and specificity (Spe) to quantitatively evaluate the performance of the different classification models.
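The fusion and evaluation steps described above can be illustrated with a minimal Python/scikit-learn sketch. The concatenation-based fusion, the logistic-regression classifier, and the names `evaluate_fusion`, `radiomics_feats`, and `deep_feats` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed, not the paper's code): fuse radiomics and deep
# features by concatenation, then evaluate with stratified five-fold CV.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, accuracy_score, confusion_matrix

def evaluate_fusion(radiomics_feats, deep_feats, labels, n_splits=5, seed=0):
    """radiomics_feats, deep_feats: (n_patients, n_features) arrays; labels: 0=AIP, 1=PDAC."""
    X = np.hstack([radiomics_feats, deep_feats])    # multidomain fusion by concatenation (assumed)
    y = np.asarray(labels)
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    aucs, accs, sens, spes = [], [], [], []
    for train_idx, test_idx in cv.split(X, y):
        scaler = StandardScaler().fit(X[train_idx])
        clf = LogisticRegression(max_iter=1000)     # placeholder classifier, not the paper's
        clf.fit(scaler.transform(X[train_idx]), y[train_idx])
        prob = clf.predict_proba(scaler.transform(X[test_idx]))[:, 1]
        pred = (prob >= 0.5).astype(int)
        tn, fp, fn, tp = confusion_matrix(y[test_idx], pred, labels=[0, 1]).ravel()
        aucs.append(roc_auc_score(y[test_idx], prob))
        accs.append(accuracy_score(y[test_idx], pred))
        sens.append(tp / (tp + fn))                 # sensitivity: recall for PDAC
        spes.append(tn / (tn + fp))                 # specificity: recall for AIP
    return {name: (float(np.mean(v)), float(np.std(v)))
            for name, v in {"AUC": aucs, "Acc": accs, "Sen": sens, "Spe": spes}.items()}
```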
The experimental results showed that the multidomain fusion model had the best overall performance compared with the radiomics and deep learning models, with an AUC, accuracy, sensitivity, and specificity of 96.4% (95% CI 95.4-97.3%), 90.1% (95% CI 88.7-91.5%), 87.5% (95% CI 84.3-90.6%), and 93.0% (95% CI 90.3-95.6%), respectively. Our study also showed that the multimodal features of PET/CT were superior to either PET or CT features alone, and that first-order radiomics features provided valuable complementary information for the deep learning model.
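The complementarity between radiomics and deep learning features noted above could, for example, be probed with rank correlations. The sketch below is an assumption for illustration; the Spearman statistic and the function name `max_abs_correlation` are not taken from the paper.

```python
# Minimal sketch (assumed approach): for each radiomics feature, find its
# strongest absolute Spearman correlation with any deep learning feature.
# Radiomics features with uniformly low values are weakly captured by the
# deep features and are candidates for complementary information.
import numpy as np
from scipy.stats import spearmanr

def max_abs_correlation(radiomics_feats, deep_feats):
    """radiomics_feats: (n, p) array; deep_feats: (n, q) array.
    Returns a length-p array of max |Spearman rho| per radiomics feature."""
    n_rad = radiomics_feats.shape[1]
    n_deep = deep_feats.shape[1]
    best = np.zeros(n_rad)
    for i in range(n_rad):
        rhos = []
        for j in range(n_deep):
            rho, _ = spearmanr(radiomics_feats[:, i], deep_feats[:, j])
            rhos.append(abs(rho))
        best[i] = max(rhos)
    return best
```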
The preliminary results of this study demonstrated that the proposed multidomain fusion model fully exploited the value of radiomics and deep learning features based on 18F-FDG PET/CT images and provided competitive accuracy for the discrimination of PDAC from AIP.