Correra Simona, Gunnarsson Arnar Evgení, Recenti Marco, Mercaldo Francesco, Nardone Vittoria, Santone Antonella, Jónsson Halldór, Gargiulo Paolo
Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, 86100 Campobasso, Italy.
Institute of Biomedical and Neural Engineering, Reykjavik University, 102 Reykjavik, Iceland.
Diagnostics (Basel). 2025 Aug 20;15(16):2098. doi: 10.3390/diagnostics15162098.
Background/Objectives: This study introduces an explainable, radiomics-based machine learning framework for the automated classification of sarcoma tumors using MRI. The approach aims to empower clinicians by reducing dependence on subjective image interpretation. Methods: A total of 186 MRI scans from 86 patients diagnosed with bone and soft tissue sarcoma were manually segmented to isolate tumor regions and corresponding healthy tissue. From these segmentations, 851 handcrafted radiomic features were extracted, including wavelet-transformed descriptors. A Random Forest classifier was trained to distinguish tumor from healthy tissue, with hyperparameter tuning performed through nested cross-validation. To ensure transparency and interpretability, model behavior was explored through feature importance analysis and Local Interpretable Model-agnostic Explanations (LIME). Results: The model achieved an F1-score of 0.742 and an accuracy of 0.724 on the test set. LIME analysis revealed that texture- and wavelet-based features were the most influential in driving the model's predictions. Conclusions: By enabling accurate and interpretable classification of sarcomas in MRI, the proposed method provides a non-invasive approach to tumor classification, supporting earlier, more personalized, and precision-driven diagnosis. This study highlights the potential of explainable AI to support more confident clinical decision-making.
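The core of the methods described above, a Random Forest classifier tuned by nested cross-validation and inspected via feature importances, can be sketched as below. This is a minimal illustration on synthetic data, not the authors' code: the parameter grid, fold counts, and feature dimensions are assumptions (the actual study used 851 radiomic features extracted from manual segmentations).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

# Stand-in for the radiomic feature matrix: rows are segmented regions
# (tumor vs. healthy tissue), columns are handcrafted features.
X, y = make_classification(n_samples=200, n_features=50,
                           n_informative=10, random_state=0)

# Inner loop: hyperparameter tuning via grid search (grid is illustrative).
inner = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [None, 10]},
    cv=3,
    scoring="f1",
)

# Outer loop: nested cross-validation for an unbiased performance estimate.
outer_f1 = cross_val_score(inner, X, y, cv=5, scoring="f1")
print(f"nested-CV F1: {outer_f1.mean():.3f} +/- {outer_f1.std():.3f}")

# Global interpretability: impurity-based feature importances
# (the study additionally used LIME for local, per-prediction explanations).
model = inner.fit(X, y).best_estimator_
top = np.argsort(model.feature_importances_)[::-1][:5]
print("top features:", top)
```

For the local explanations, the `lime` package's `lime.lime_tabular.LimeTabularExplainer` can wrap the fitted model's `predict_proba` to attribute individual predictions to specific features, which is how texture- and wavelet-based descriptors were identified as most influential.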