

Prediction of EGFR Mutations in Lung Adenocarcinoma via CT Images: A Comparative Study of Intratumoral and Peritumoral Radiomics, Deep Learning, and Fusion Models.

Author Information

Huang Liyou, Xu Lu, Wang Xun, Zhang Guangbin, Gao Xiancong, Niu Lei, Wen Linchun

Affiliations

Department of Oncology, Affiliated Suqian Hospital of Xuzhou Medical University, Suqian 223800, PR China (L.H., L.X., L.W.).

Department of Radiology, Suzhou Hospital Affiliated to Nanjing Medical University, Suzhou 215000, PR China (X.W., G.Z.).

Publication Information

Acad Radiol. 2025 May 5. doi: 10.1016/j.acra.2025.04.029.

Abstract

RATIONALE AND OBJECTIVES

This study aims to analyze the intratumoral and peritumoral characteristics of lung adenocarcinoma on chest CT images using radiomics and deep learning methods, and to develop and validate a multimodel fusion strategy for predicting epidermal growth factor receptor (EGFR) mutation status.

MATERIALS AND METHODS

Retrospective data from 826 lung adenocarcinoma patients across two hospitals were collected. Data from center 1 were used for model training and internal validation, while data from center 2 were reserved for external validation. Tumor segmentation was performed with the nnU-Net network, and volumes of interest (VOIs) for the tumor and its peritumoral regions (2 mm, 4 mm, 6 mm, 8 mm, and 10 mm) were subsequently derived. Radiomics features were extracted from the various VOIs using PyRadiomics, and radiomics models were developed using LASSO feature selection and multiple machine learning algorithms. Using 2D, 2.5D, and 3D images derived from the different VOIs as inputs, multiple deep learning models were trained and their performances compared. The radiomics and deep learning models with the best predictive performance were selected and integrated with clinical models for model fusion. Multimodel fusion of clinical, radiomics, and deep learning features was achieved using feature-level fusion and several decision-level fusion strategies, including hard voting, soft voting, and stacking ensembles. The predictive performances of the various fusion models were systematically evaluated and compared.
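The peritumoral VOIs described above can be obtained by morphological dilation of the binary tumor mask, then subtracting the tumor to isolate the surrounding shell. The sketch below is illustrative, not the authors' implementation: it assumes isotropic voxel spacing, uses `scipy.ndimage`, and the function name `peritumoral_voi` is hypothetical.

```python
import numpy as np
from scipy import ndimage

def peritumoral_voi(mask, margin_mm, spacing_mm=1.0):
    """Dilate a binary tumor mask by ~margin_mm and split the result into
    the peritumoral shell and the combined tumor + shell VOI (e.g., VOI_Comb2)."""
    iterations = int(round(margin_mm / spacing_mm))
    dilated = ndimage.binary_dilation(mask, iterations=iterations)
    ring = dilated & ~mask   # peritumoral region only
    return ring, dilated     # (shell, tumor + shell)

# Toy 2D example: a 3x3 "tumor" inside a 9x9 slice, 2-mm margin.
mask = np.zeros((9, 9), dtype=bool)
mask[3:6, 3:6] = True
ring, comb = peritumoral_voi(mask, margin_mm=2, spacing_mm=1.0)
```

In practice the dilation would run on the full 3D mask with the CT's actual voxel spacing, and anisotropic spacing would require a distance-transform-based margin rather than simple iteration counts.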

RESULTS

Among the radiomics models, the model based on the combined intratumoral and peritumoral 2-mm region (VOI_Comb2) achieved the best performance on the internal and external validation sets (AUC = 0.843 and 0.803, respectively). Compared with the 2D and 2.5D deep learning models, the 3D deep learning model demonstrated superior predictive performance. The 3D deep learning model based on the VOI_Comb2 region achieved the highest AUC among all deep learning models on the internal and external validation sets (AUC = 0.839 and 0.814, respectively). Among the fusion models, the soft voting strategy achieved the highest AUC on the internal and external validation sets, reaching 0.925 and 0.889, respectively. On the external validation set, the AUC of the soft voting model was significantly greater than that of the hard voting model, the early fusion model, and each single-modality model.
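The decision-level fusion strategies compared above can be illustrated with a toy numerical example. The probabilities below are hypothetical (not from the study) and are chosen to show how soft voting, which averages calibrated probabilities, can disagree with hard voting, which takes a majority of thresholded labels:

```python
import numpy as np

# Hypothetical per-patient EGFR-mutation probabilities from the three
# modality-specific models (clinical, radiomics, deep learning).
p_clinical  = np.array([0.30, 0.80, 0.95])
p_radiomics = np.array([0.40, 0.90, 0.40])
p_deep      = np.array([0.20, 0.85, 0.45])

# Soft voting: average the probabilities, then threshold once.
p_soft = np.mean([p_clinical, p_radiomics, p_deep], axis=0)
soft_pred = (p_soft >= 0.5).astype(int)   # patient 3: mean 0.60 -> mutant

# Hard voting: threshold each model first, then take the majority class.
votes = np.stack([(p >= 0.5).astype(int)
                  for p in (p_clinical, p_radiomics, p_deep)])
hard_pred = (votes.sum(axis=0) >= 2).astype(int)  # patient 3: 1 of 3 votes -> wild type
```

For the third patient the two strategies disagree: soft voting lets one confident model (0.95) outweigh two lukewarm ones, whereas hard voting discards that confidence information, which is one intuition for why soft voting outperformed hard voting here.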

CONCLUSION

This study demonstrates that combining radiomic and deep learning models based on intratumoral and peritumoral regions is an effective method for capturing comprehensive imaging features in lung adenocarcinoma. The multimodal fusion approach using soft voting leverages the strengths of each modality and provides a robust framework for advanced image feature extraction to support personalized treatment.

