

A Pipeline for the Implementation and Visualization of Explainable Machine Learning for Medical Imaging Using Radiomics Features.

Affiliations

Department of Biostatistics and Informatics, University of Colorado, Aurora, CO 80045, USA.

Department of Radiology, Yonsei University College of Medicine, Seoul 03722, Korea.

Publication

Sensors (Basel). 2022 Jul 12;22(14):5205. doi: 10.3390/s22145205.

Abstract

Machine learning (ML) models have been shown to predict the presence of clinical factors from medical imaging with remarkable accuracy. However, these complex models can be difficult to interpret and are often criticized as "black boxes". Prediction models that provide no insight into how their predictions are obtained are difficult to trust for making important clinical decisions, such as medical diagnoses or treatment. Explainable machine learning (XML) methods, such as Shapley values, have made it possible to explain the behavior of ML algorithms and to identify which predictors contribute most to a prediction. Incorporating XML methods into medical software tools has the potential to increase trust in ML-powered predictions and aid physicians in making medical decisions. Specifically, in the field of medical imaging analysis the most used methods for explaining deep learning-based model predictions are saliency maps that highlight important areas of an image. However, they do not provide a straightforward interpretation of which qualities of an image area are important. Here, we describe a novel pipeline for XML imaging that uses radiomics data and Shapley values as tools to explain outcome predictions from complex prediction models built with medical imaging with well-defined predictors. We present a visualization of XML imaging results in a clinician-focused dashboard that can be generalized to various settings. We demonstrate the use of this workflow for developing and explaining a prediction model using MRI data from glioma patients to predict a genetic mutation.
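The core attribution step described above — using Shapley values to quantify how much each radiomics feature contributes to a model's prediction — can be sketched with an exact, brute-force computation. Everything below is illustrative: the feature names (`tumor_volume`, `texture_entropy`, `sphericity`), the linear stand-in model, and the baseline choice are assumptions for demonstration, not values from the paper, which builds a full prediction model on MRI-derived radiomics data.

```python
from itertools import combinations
from math import factorial

# Hypothetical standardized radiomics features for one tumor
# (names are illustrative, not taken from the paper).
features = {"tumor_volume": 1.2, "texture_entropy": -0.5, "sphericity": 0.8}
baseline = {k: 0.0 for k in features}  # dataset mean in standardized units

# Stand-in prediction model: a linear risk score. In the actual pipeline
# this would be a trained classifier over many radiomics features.
weights = {"tumor_volume": 0.6, "texture_entropy": 0.3, "sphericity": -0.4}

def predict(x):
    return sum(weights[k] * x[k] for k in x)

def value(subset):
    """Model output with features outside `subset` fixed at the baseline."""
    x = {k: (features[k] if k in subset else baseline[k]) for k in features}
    return predict(x)

def shapley(feature):
    """Exact Shapley value: weighted marginal contribution of `feature`
    over all subsets of the remaining features."""
    others = [k for k in features if k != feature]
    n = len(features)
    phi = 0.0
    for r in range(len(others) + 1):
        for S in combinations(others, r):
            w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            phi += w * (value(set(S) | {feature}) - value(S))
    return phi

phi = {k: shapley(k) for k in features}
```

By the efficiency property of Shapley values, the attributions sum to the model's prediction minus the baseline prediction, which is what makes them readable in a clinician-facing dashboard: each feature's share of the predicted risk is stated explicitly rather than shown as a highlighted image region. Practical toolkits (e.g., the SHAP library) replace this exponential enumeration with model-specific approximations.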


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0840/9318445/b59c17652c88/sensors-22-05205-g001.jpg
