
Explaining machine-learning models for gamma-ray detection and identification.

Affiliations

Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, California, United States of America.

Physics Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee, United States of America.

Publication Information

PLoS One. 2023 Jun 20;18(6):e0286829. doi: 10.1371/journal.pone.0286829. eCollection 2023.

Abstract

As more complex predictive models are used for gamma-ray spectral analysis, methods are needed to probe and understand their predictions and behavior. Recent work has begun to bring the latest techniques from the field of Explainable Artificial Intelligence (XAI) into the applications of gamma-ray spectroscopy, including the introduction of gradient-based methods like saliency mapping and Gradient-weighted Class Activation Mapping (Grad-CAM), and black box methods like Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). In addition, new sources of synthetic radiological data are becoming available, and these new data sets present opportunities to train models using more data than ever before. In this work, we use a neural network model trained on synthetic NaI(Tl) urban search data to compare some of these explanation methods and identify modifications that need to be applied to adapt the methods to gamma-ray spectral data. We find that the black box methods LIME and SHAP are especially accurate in their results, and recommend SHAP since it requires little hyperparameter tuning. We also propose and demonstrate a technique for generating counterfactual explanations using orthogonal projections of LIME and SHAP explanations.
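The two techniques the abstract highlights can be illustrated on a toy problem. The sketch below is NOT the paper's network or data: it uses a hypothetical linear "spectral classifier" over a 128-channel spectrum, builds a simplified LIME-style explanation (random channel masking plus a weighted linear surrogate), and then forms a counterfactual by orthogonally projecting the spectrum away from the explanation direction, in the spirit of the projection technique the abstract proposes. All names, channel counts, and kernel choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained spectral classifier (NOT the paper's model):
# it scores a 128-channel gamma-ray spectrum against a fixed photopeak template.
n_channels = 128
template = np.exp(-0.5 * ((np.arange(n_channels) - 60) / 4.0) ** 2)

def model_score(spectra):
    """Source-likelihood score for each spectrum (one row per spectrum)."""
    return spectra @ template

# A measured spectrum: falling background continuum plus the photopeak.
background = np.exp(-np.arange(n_channels) / 50.0)
spectrum = background + 0.8 * template

# --- Simplified LIME-style local explanation ---
# Perturb the spectrum by randomly masking channels, query the model,
# and fit a weighted least-squares linear surrogate to the responses.
n_samples = 2000
masks = rng.integers(0, 2, size=(n_samples, n_channels))
scores = model_score(masks * spectrum)

# Proximity kernel: down-weight samples with many channels removed.
distance = 1.0 - masks.mean(axis=1)
weights = np.exp(-(distance ** 2) / 0.25)

# Weighted linear fit; coef[i] is the attribution for channel i.
w = np.sqrt(weights)
coef, *_ = np.linalg.lstsq(w[:, None] * masks, w * scores, rcond=None)

# The largest attributions should land on the photopeak channels (~60).
top = np.argsort(coef)[-5:]
print("top channels:", sorted(top))

# --- Counterfactual via orthogonal projection of the explanation ---
# Remove the component of the spectrum along the (normalized) explanation
# direction; the projected spectrum scores lower under the model.
e = coef / np.linalg.norm(coef)
counterfactual = spectrum - (spectrum @ e) * e
print("score drops:",
      model_score(spectrum[None])[0] > model_score(counterfactual[None])[0])
```

Because the toy model is exactly linear in the mask, the surrogate recovers the channel attributions closely; with a real neural network the fit is only local, which is where the kernel width and sample count become the hyperparameters LIME is sensitive to.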


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dde7/10281578/cf459ee44dca/pone.0286829.g001.jpg
