

Reproducible meningioma grading across multi-center MRI protocols via hybrid radiomic and deep learning features.

Author information

Saadh Mohamed J, Albadr Rafid Jihad, Sur Dharmesh, Yadav Anupam, Roopashree R, Sangwan Gargi, Krithiga T, Aminov Zafar, Taher Waam Mohammed, Alwan Mariem, Jawad Mahmood Jasem, Al-Nuaimi Ali M Ali, Farhood Bagher

Affiliations

Faculty of Pharmacy, Middle East University, 11831, Amman, Jordan.

Ahl al Bayt University, Kerbala, Iraq.

Publication information

Neuroradiology. 2025 Aug 18. doi: 10.1007/s00234-025-03725-8.

Abstract

OBJECTIVE

This study aimed to create a reliable method for preoperative grading of meningiomas by combining radiomic features and deep learning-based features extracted using a 3D autoencoder. The goal was to utilize the strengths of both handcrafted radiomic features and deep learning features to improve accuracy and reproducibility across different MRI protocols.

MATERIALS AND METHODS

The study included 3,523 patients with histologically confirmed meningiomas: 1,900 low-grade (Grade I) and 1,623 high-grade (Grades II and III) cases. Radiomic features were extracted from T1 contrast-enhanced and T2-weighted MRI scans using the Standardized Environment for Radiomics Analysis (SERA). Deep learning features were obtained from the bottleneck layer of a 3D autoencoder integrated with attention mechanisms. Feature selection was performed using Principal Component Analysis (PCA) and Analysis of Variance (ANOVA). Classification was performed with machine learning models including XGBoost, CatBoost, and stacking ensembles. Reproducibility was evaluated using the Intraclass Correlation Coefficient (ICC), and batch effects were harmonized with the ComBat method. Performance was assessed by accuracy, sensitivity, and area under the receiver operating characteristic curve (AUC).
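The selection-and-classification stage described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic data, feature counts, and hyperparameters are stand-ins, and scikit-learn's GradientBoostingClassifier substitutes for XGBoost/CatBoost to keep the sketch dependency-light.

```python
# Sketch of an ANOVA -> PCA -> stacking-ensemble pipeline, analogous to the
# feature-selection and classification steps in the Methods. All data and
# parameter choices here are illustrative stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for concatenated radiomic + autoencoder feature vectors.
X, y = make_classification(n_samples=400, n_features=215, n_informative=30,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = make_pipeline(
    SelectKBest(f_classif, k=64),   # univariate ANOVA F-test filter
    PCA(n_components=32),           # PCA compression of the retained features
    StackingClassifier(             # stacking ensemble of boosted-tree learners
        estimators=[("gb1", GradientBoostingClassifier(random_state=0)),
                    ("gb2", GradientBoostingClassifier(max_depth=2,
                                                       random_state=1))],
        final_estimator=LogisticRegression(max_iter=1000),
    ),
)
clf.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"hold-out AUC: {auc:.3f}")
```

In a real pipeline the base learners would be the gradient-boosting libraries named in the abstract, and the split would be stratified by acquisition center before any harmonization.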

RESULTS

For T1-contrast-enhanced images, combining radiomic and deep learning features provided the highest AUC of 95.85% and accuracy of 95.18%, outperforming models using either feature type alone. T2-weighted images showed slightly lower performance, with the best AUC of 94.12% and accuracy of 93.14%. Deep learning features performed better than radiomic features alone, demonstrating their strength in capturing complex spatial patterns. The end-to-end 3D autoencoder with T1-contrast images achieved an AUC of 92.15%, accuracy of 91.14%, and sensitivity of 92.48%, surpassing T2-weighted imaging models. Reproducibility analysis showed high reliability (ICC > 0.75) for 127 out of 215 features, ensuring consistent performance across multi-center datasets.
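The ICC > 0.75 reproducibility screen reported above can be illustrated with a short, self-contained computation of ICC(3,1) from two-way ANOVA mean squares. The repeat-measurement data here are hypothetical stand-ins, and ICC(3,1) is one of several ICC variants; the abstract does not specify which form the authors used.

```python
# Minimal ICC(3,1) sketch: a feature passes the reproducibility screen
# if its ICC across repeat measurements exceeds 0.75.
import numpy as np

def icc_3_1(data: np.ndarray) -> float:
    """ICC(3,1) for a (subjects x measurements) matrix via ANOVA mean squares."""
    n, k = data.shape
    grand = data.mean()
    ss_total = ((data - grand) ** 2).sum()
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_meas = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ms_subj = ss_subj / (n - 1)                          # between-subjects
    ms_err = (ss_total - ss_subj - ss_meas) / ((n - 1) * (k - 1))  # residual
    return (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)

rng = np.random.default_rng(0)
base = rng.normal(size=(50, 1))
# A stable feature: repeat measurement differs only by small noise.
stable = np.hstack([base, base + rng.normal(scale=0.05, size=(50, 1))])
# An unstable feature: repeat measurements are independent.
noisy = rng.normal(size=(50, 2))
print(icc_3_1(stable) > 0.75, icc_3_1(noisy) > 0.75)
```

Applying such a screen per feature and keeping only those above the 0.75 threshold is what yields a reduced, center-robust feature set like the 127 of 215 reported here.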

CONCLUSIONS

The proposed framework effectively integrates radiomic and deep learning features to provide a robust, non-invasive, and reproducible approach for meningioma grading. Future research should validate this framework in real-world clinical settings and explore adding clinical parameters to enhance its prognostic value.

