Albadr Rafid Jihad, Sur Dharmesh, Yadav Anupam, Rekha M M, Jain Bhavik, Jayabalan Karthikeyan, Kubaev Aziz, Taher Waam Mohammed, Alwan Mariem, Jawad Mahmood Jasem, Al-Nuaimi Ali M Ali, Mohammadifard Mahyar, Farhood Bagher, Akhavan-Sigari Reza
Ahl Al Bayt University, Kerbala, Iraq.
Department of Chemical Engineering, Faculty of Engineering & Technology, Marwadi University Research Center, Marwadi University, Rajkot, Gujarat, 360003, India.
Eur J Med Res. 2025 Aug 26;30(1):808. doi: 10.1186/s40001-025-03066-5.
This study aims to develop a robust, clinically applicable framework for preoperative grading of meningiomas from contrast-enhanced T1-weighted and T2-weighted MRI. The approach integrates radiomic feature extraction, attention-guided deep learning models, and reproducibility assessment to achieve high diagnostic accuracy, model interpretability, and clinical reliability.
We analyzed MRI scans from 2546 patients with histopathologically confirmed meningiomas (1560 low-grade, 986 high-grade). High-quality contrast-enhanced T1-weighted and T2-weighted images were preprocessed through harmonization, intensity normalization, resizing, and augmentation. Tumors were segmented in ITK-SNAP, and the inter-rater reliability of radiomic features was evaluated with the intraclass correlation coefficient (ICC). Radiomic features were extracted with the SERA software, while deep features were derived from pre-trained models (ResNet50 and EfficientNet-B0), with attention mechanisms sharpening focus on tumor-relevant regions. Feature fusion and dimensionality reduction were performed with PCA and LASSO. Ensemble models based on Random Forest, XGBoost, and LightGBM were implemented to optimize classification performance using both radiomic and deep features.
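The fusion-and-selection step described above can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' pipeline: the array shapes, feature counts, and PCA/LASSO hyperparameters are placeholder assumptions, with scikit-learn standing in for the tooling actually used.

```python
# Hypothetical sketch: radiomic and deep features are concatenated
# (feature-level fusion), reduced with PCA, then screened with LASSO.
# All dimensions below are illustrative, not taken from the study.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
radiomic = rng.normal(size=(n, 100))  # stand-in for SERA radiomic features
deep = rng.normal(size=(n, 256))      # stand-in for CNN embedding features
# synthetic binary grade label correlated with the first feature of each set
y = (radiomic[:, 0] + deep[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X = np.hstack([radiomic, deep])       # feature-level fusion
X = StandardScaler().fit_transform(X)
X_pca = PCA(n_components=50, random_state=0).fit_transform(X)

# LASSO with cross-validated regularization; components whose coefficients
# survive the L1 penalty are kept as the selected feature set.
lasso = LassoCV(cv=5, random_state=0).fit(X_pca, y)
selected = np.flatnonzero(lasso.coef_)
print(f"{len(selected)} of {X_pca.shape[1]} PCA components retained")
```

The design choice worth noting is the ordering: PCA first decorrelates the fused feature block, so the subsequent L1 penalty selects among roughly independent components rather than among highly collinear raw features.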
Reproducibility analysis showed that 52% of radiomic features demonstrated excellent reliability (ICC > 0.90). Deep features from EfficientNet-B0 outperformed ResNet50, achieving AUCs of 94.12% (T1) and 93.17% (T2). Hybrid models combining radiomic and deep features further improved performance, with XGBoost reaching AUCs of 95.19% (T2) and 96.87% (T1). Ensemble models incorporating both deep architectures achieved the highest classification performance, with AUCs of 96.12% (T2) and 96.80% (T1), demonstrating superior robustness and accuracy.
This work introduces a comprehensive and clinically meaningful AI framework that significantly enhances the preoperative grading of meningiomas. The model's high accuracy, interpretability, and reproducibility support its potential to inform surgical planning, reduce reliance on invasive diagnostics, and facilitate more personalized therapeutic decision-making in routine neuro-oncology practice.
Trial registration: Not applicable.