He Xinliu, Guan Chao, Chen Ting, Wu Houde, Su Liuchao, Zhao Mingfang, Guo Li
School of Medical Imaging, School of Medical Technology, Tianjin Medical University, Tianjin 300203, China.
Department of Oncology, Shengjing Hospital of China Medical University, Shenyang, China.
Eur J Radiol. 2025 Sep;190:112265. doi: 10.1016/j.ejrad.2025.112265. Epub 2025 Jun 26.
OBJECTIVES: This study aims to establish a dual-feature fusion model integrating radiomic features with deep learning features, using single-modality pre-treatment lung CT images to provide early warning of brain metastasis (BM) risk within 2 years in EGFR-positive lung adenocarcinoma.

MATERIALS AND METHODS: After rigorous screening of 362 EGFR-positive lung adenocarcinoma patients with pre-treatment lung CT images, 173 eligible participants were enrolled, including 93 patients with BM and 80 without BM. Radiomic features were extracted from manually segmented lung nodule regions, and selected features were used to develop radiomics models. For deep learning, ROI-level CT images were processed with several deep learning networks, including the novel Vision Mamba, applied for the first time in this context. A feature-level fusion model was developed by combining radiomic and deep learning features. Model performance was assessed using receiver operating characteristic (ROC) curves and decision curve analysis (DCA), with statistical comparison of area under the curve (AUC) values by the DeLong test.

RESULTS: Among the models evaluated, the fused Vision Mamba model demonstrated the best classification performance, achieving an AUC of 0.86 (95% CI: 0.82-0.90), with a recall of 0.88, F1-score of 0.70, and accuracy of 0.76. This fusion model outperformed both the radiomics-only and deep learning-only models, highlighting its superior predictive accuracy for early BM risk detection in EGFR-positive lung adenocarcinoma patients.

CONCLUSION: The fused Vision Mamba model, using only pre-treatment CT imaging data, significantly improves prediction of brain metastasis within two years in EGFR-positive lung adenocarcinoma patients. This novel approach, combining radiomic and deep learning features, offers promising clinical value for early detection and personalized treatment.
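The feature-level fusion step described above amounts to concatenating the radiomic and deep-learning feature vectors before classification and then scoring the classifier with ROC-based metrics. The following is a minimal sketch under assumptions, not the authors' pipeline: the feature matrices are synthetic placeholders standing in for PyRadiomics outputs from the segmented nodule ROI and for embeddings from a deep backbone such as Vision Mamba, and logistic regression is used only as a simple illustrative classifier.

```python
# Minimal sketch (not the authors' code): feature-level fusion of radiomic and
# deep-learning features for binary BM-risk classification, evaluated by ROC AUC,
# recall, F1, and accuracy. Feature matrices and the classifier are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, recall_score, f1_score, accuracy_score

rng = np.random.default_rng(0)

# Placeholder inputs: in practice these would come from a radiomics toolkit
# (handcrafted features of the segmented nodule ROI) and from the penultimate
# layer of a deep network (e.g., a Vision Mamba backbone) applied to the ROI crop.
n_patients = 173
radiomic_feats = rng.normal(size=(n_patients, 50))    # hypothetical radiomic features
deep_feats = rng.normal(size=(n_patients, 128))       # hypothetical deep features
labels = rng.integers(0, 2, size=n_patients)          # 1 = BM within 2 years

# Feature-level fusion: concatenate the two feature sets per patient.
fused = np.concatenate([radiomic_feats, deep_feats], axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(
    fused, labels, test_size=0.3, stratify=labels, random_state=0
)

# Standardize, then fit a simple linear classifier on the fused features.
scaler = StandardScaler().fit(X_tr)
clf = LogisticRegression(max_iter=1000).fit(scaler.transform(X_tr), y_tr)

prob = clf.predict_proba(scaler.transform(X_te))[:, 1]
pred = (prob >= 0.5).astype(int)

print("AUC:     ", roc_auc_score(y_te, prob))
print("Recall:  ", recall_score(y_te, pred))
print("F1:      ", f1_score(y_te, pred))
print("Accuracy:", accuracy_score(y_te, pred))
```

In the study itself, the reported comparison between fused and single-source models used the DeLong test on AUC values and decision curve analysis; neither is shown in this sketch.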