Costa Márcus V L, de Aguiar Erikson J, Rodrigues Lucas S, Traina Caetano, Traina Agma J M
Institute of Mathematics and Computer Science, University of São Paulo, São Carlos, São Paulo 13566-590, Brazil.
Health Inf Sci Syst. 2024 Dec 29;13(1):11. doi: 10.1007/s13755-024-00330-6. eCollection 2025 Dec.
Deep learning-based radiomics techniques have the potential to aid specialists and physicians in decision-making in COVID-19 scenarios. Specifically, a Deep Learning (DL) ensemble model is employed to classify chest X-ray images for COVID-19 diagnosis. The approach also provides feasible and reliable visual explainability of the results to support decision-making.
Our DEELE-Rad approach integrates DL and Machine Learning (ML) techniques. We use DL models to extract deep radiomics features and evaluate their performance against end-to-end classifiers. We avoid the successive steps of conventional radiomics pipelines by employing these models with transfer learning from ImageNet, using the VGG16, ResNet50V2, and DenseNet201 architectures. We extract 100 and 500 deep radiomics features from each DL model, feed these features into well-established ML classifiers, and apply automatic parameter tuning with a cross-validation strategy. In addition, we gain insight into the models' decision-making behavior by applying a visual explanation method.
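A minimal sketch of such a deep radiomics pipeline is shown below, assuming a TensorFlow/Keras backbone and a scikit-learn classifier. The dummy data arrays, the projection layer, and the SVM-with-grid-search choice are illustrative stand-ins, not the authors' exact configuration.

```python
# Sketch (not the authors' implementation): extract deep radiomics features
# from chest X-ray images with an ImageNet-pretrained DenseNet201, project them
# to a fixed dimensionality (100 or 500 in the paper), and feed them to a
# classical ML classifier tuned with grid search and cross-validation.
import numpy as np
from tensorflow.keras.applications import DenseNet201
from tensorflow.keras.applications.densenet import preprocess_input
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def build_feature_extractor(n_features=500):
    """ImageNet-pretrained backbone plus a dense layer that projects the pooled
    activations to the desired number of deep radiomics features."""
    backbone = DenseNet201(weights="imagenet", include_top=False,
                           input_shape=(224, 224, 3))
    backbone.trainable = False  # pure transfer learning, no fine-tuning here
    x = GlobalAveragePooling2D()(backbone.output)
    # Illustrative projection layer; in practice it would be trained (e.g., as
    # part of the end-to-end classifier) before its activations are reused.
    features = Dense(n_features, activation="relu")(x)
    return Model(inputs=backbone.input, outputs=features)

# Placeholder data standing in for a real chest X-ray dataset.
X_train = np.random.rand(12, 224, 224, 3).astype("float32")  # (N, 224, 224, 3)
y_train = np.array([0, 1] * 6)                                # binary labels

extractor = build_feature_extractor(n_features=500)
deep_feats = extractor.predict(preprocess_input(X_train * 255.0), verbose=0)

# Well-established ML classifier with automatic parameter tuning and
# cross-validation (SVM + grid search used here as one representative choice).
param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}
clf = GridSearchCV(SVC(probability=True), param_grid, cv=3, scoring="roc_auc")
clf.fit(deep_feats, y_train)
print("Best CV AUC:", clf.best_score_)
```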
Experimental evaluation of our proposed approach achieved 89.97% AUC when using 500 deep radiomics features with the DenseNet201 end-to-end classifier. Moreover, our ensemble DEELE-Rad method improves the results up to 96.19% AUC for 500 features. The ML-based DEELE-Rad reached the best results, with an accuracy of 98.39% and an AUC of 99.19% for the same setup. Our visual assessment offers additional support for specialists and physicians during decision-making.
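The abstract does not name the specific visual explanation method; the sketch below assumes Grad-CAM, a common choice for highlighting the X-ray regions that most influenced a CNN prediction, applied here to a DenseNet201 backbone as a stand-in for the trained classifier.

```python
# Hedged Grad-CAM sketch: heatmap of the regions driving the top prediction.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import DenseNet201
from tensorflow.keras.applications.densenet import preprocess_input

model = DenseNet201(weights="imagenet")  # stand-in for a trained COVID-19 classifier

def grad_cam(model, image, last_conv_layer_name="relu"):
    """Return a normalized heatmap with the spatial size of the last conv map."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        class_idx = int(tf.argmax(preds[0]))
        top_score = preds[:, class_idx]
    grads = tape.gradient(top_score, conv_out)           # d(score)/d(feature map)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))      # per-channel importance
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)  # weighted sum of maps
    cam = tf.nn.relu(cam)                                # keep positive evidence
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

# Dummy stand-in; a real chest X-ray would be loaded and resized to 224x224.
xray = preprocess_input(np.random.rand(224, 224, 3).astype("float32") * 255.0)
heatmap = grad_cam(model, xray)
print(heatmap.shape)  # (7, 7) for DenseNet201 at 224x224 input
```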
The results show that the DEELE-Rad approach provides robust and reliable image analysis. Our approach can benefit healthcare specialists when employed in clinical routines and the respective decision-making procedures. For reproducibility, our code is available at https://github.com/usmarcv/deele-rad.