SEGAL time series classification - Stable explanations using a generative model and an adaptive weighting method for LIME.

Affiliations

College of Information Science and Engineering/College of Artificial Intelligence, China University of Petroleum (Beijing), Beijing, 102249, China; Computational Optimisation and Learning (COL) Lab, School of Computer Science, University of Nottingham, Nottingham, United Kingdom; The Lab for Uncertainty in Data and Decision Making (LUCID), School of Computer Science, University of Nottingham, Nottingham, United Kingdom.

The Lab for Uncertainty in Data and Decision Making (LUCID), School of Computer Science, University of Nottingham, Nottingham, United Kingdom.

Publication Information

Neural Netw. 2024 Aug;176:106345. doi: 10.1016/j.neunet.2024.106345. Epub 2024 Apr 27.

DOI: 10.1016/j.neunet.2024.106345
PMID: 38733798
Abstract

Local Interpretable Model-agnostic Explanations (LIME) is a well-known post-hoc technique for explaining black-box models. While very useful, recent research highlights challenges around the explanations generated. In particular, there is a potential lack of stability, where the explanations provided vary over repeated runs of the algorithm, casting doubt on their reliability. This paper investigates the stability of LIME when applied to multivariate time series classification. We demonstrate that the traditional methods for generating neighbours used in LIME carry a high risk of creating 'fake' neighbours, which are out-of-distribution with respect to the trained model and far away from the input to be explained. This risk is particularly pronounced for time series data because of their substantial temporal dependencies. We discuss how these out-of-distribution neighbours contribute to unstable explanations. Furthermore, LIME weights neighbours based on user-defined hyperparameters which are problem-dependent and hard to tune. We show how unsuitable hyperparameters can impact the stability of explanations. We propose a two-fold approach to address these issues. First, a generative model is employed to approximate the distribution of the training data set, from which within-distribution samples and thus meaningful neighbours can be created for LIME. Second, an adaptive weighting method is designed in which the hyperparameters are easier to tune than those of the traditional method. Experiments on real-world data sets demonstrate the effectiveness of the proposed method in providing more stable explanations using the LIME framework. In addition, in-depth discussions are provided on the reasons behind these results.
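To make the abstract's two components concrete, here is a minimal Python sketch of the approach it describes, not the authors' implementation: a trained generative model (assumed here to expose an encoder/decoder pair, e.g. a VAE) supplies within-distribution neighbours, and a median-distance bandwidth heuristic stands in for the paper's adaptive weighting. The names `encoder`, `decoder`, `black_box_predict`, and all hyperparameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

def generate_neighbours(x, encoder, decoder, n_samples=500, latent_noise=0.5):
    """Sample neighbours of x in the latent space of a trained generative
    model, so that decoded perturbations stay within the training
    distribution rather than becoming 'fake' out-of-distribution series."""
    z = encoder(x)  # latent code of the series to explain, shape (d,)
    zs = z + latent_noise * np.random.randn(n_samples, z.shape[-1])
    return np.stack([decoder(zi) for zi in zs])  # (n_samples, T, C)

def adaptive_weights(x, neighbours):
    """Kernel weights with a data-driven bandwidth (median neighbour
    distance), replacing LIME's fixed, hand-tuned kernel width; this
    heuristic is an assumption, not the paper's exact weighting rule."""
    flat = neighbours.reshape(len(neighbours), -1)
    d = np.linalg.norm(flat - x.reshape(1, -1), axis=1)
    sigma = np.median(d) + 1e-12  # adaptive bandwidth, set per explanation
    return np.exp(-(d ** 2) / (2.0 * sigma ** 2))

def explain(x, black_box_predict, encoder, decoder):
    """Fit a weighted linear surrogate (the LIME step) on the generated
    neighbours; its coefficients attribute the prediction to each time
    step and channel of x."""
    nbrs = generate_neighbours(x, encoder, decoder)
    w = adaptive_weights(x, nbrs)
    y = black_box_predict(nbrs)  # 1-D scores for the class being explained
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(nbrs.reshape(len(nbrs), -1), y, sample_weight=w)
    return surrogate.coef_.reshape(x.shape)
```

Because both the neighbourhood and the kernel bandwidth are derived from the data rather than from fixed perturbation and kernel-width hyperparameters, repeated runs draw from the same learned distribution, which is, in outline, the mechanism the abstract credits for the more stable explanations.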


Similar Articles

1. SEGAL time series classification - Stable explanations using a generative model and an adaptive weighting method for LIME.
Neural Netw. 2024 Aug;176:106345. doi: 10.1016/j.neunet.2024.106345. Epub 2024 Apr 27.
2. Explaining machine-learning models for gamma-ray detection and identification.
PLoS One. 2023 Jun 20;18(6):e0286829. doi: 10.1371/journal.pone.0286829. eCollection 2023.
3. A Machine Learning Approach with Human-AI Collaboration for Automated Classification of Patient Safety Event Reports: Algorithm Development and Validation Study.
JMIR Hum Factors. 2024 Jan 25;11:e53378. doi: 10.2196/53378.
4. Evolved explainable classifications for lymph node metastases.
Neural Netw. 2022 Apr;148:1-12. doi: 10.1016/j.neunet.2021.12.014. Epub 2021 Dec 31.
5. Ada-WHIPS: explaining AdaBoost classification with applications in the health sciences.
BMC Med Inform Decis Mak. 2020 Oct 2;20(1):250. doi: 10.1186/s12911-020-01201-2.
6. Explaining multivariate molecular diagnostic tests via Shapley values.
BMC Med Inform Decis Mak. 2021 Jul 8;21(1):211. doi: 10.1186/s12911-021-01569-9.
7. Enhanced joint hybrid deep neural network explainable artificial intelligence model for 1-hr ahead solar ultraviolet index prediction.
Comput Methods Programs Biomed. 2023 Nov;241:107737. doi: 10.1016/j.cmpb.2023.107737. Epub 2023 Aug 5.
8. Understanding How CNNs Recognize Facial Expressions: A Case Study with LIME and CEM.
Sensors (Basel). 2022 Dec 23;23(1):131. doi: 10.3390/s23010131.
9. Interpretable Machine Learning for Personalized Medical Recommendations: A LIME-Based Approach.
Diagnostics (Basel). 2023 Aug 15;13(16):2681. doi: 10.3390/diagnostics13162681.
10. A global model-agnostic rule-based XAI method based on Parameterized Event Primitives for time series classifiers.
Front Artif Intell. 2024 Sep 20;7:1381921. doi: 10.3389/frai.2024.1381921. eCollection 2024.