
Robust explainer recommendation for time series classification

Authors

Nguyen Thu Trang, Le Nguyen Thach, Ifrim Georgiana

Affiliation

School of Computer Science, University College Dublin, Dublin, Ireland.

Publication

Data Min Knowl Discov. 2024;38(6):3372-3413. doi: 10.1007/s10618-024-01045-8. Epub 2024 Jun 20.

DOI: 10.1007/s10618-024-01045-8
PMID: 39473587
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11513768/
Abstract

Time series classification is a task which deals with temporal sequences, a prevalent data type common in domains such as human activity recognition, sports analytics and general sensing. In this area, interest in explainability has been growing as explanation is key to understand the data and the model better. Recently, a great variety of techniques (e.g., LIME, SHAP, CAM) have been proposed and adapted for time series to provide explanation in the form of saliency maps, where the importance of each data point in the time series is quantified with a numerical value. However, the saliency maps can and often do disagree, so it is unclear which one to use. This paper provides a novel framework to quantitatively evaluate and compare explanation methods for time series classification. We show how to robustly evaluate the informativeness of a given explanation method (i.e., relevance for the classification task), and how to compare explanations side-by-side. We propose AMEE, a Model-Agnostic Explanation Evaluation framework, for recommending saliency-based explanations for time series classification. In this approach, data perturbation is added to the input time series guided by each explanation. Our results show that perturbing discriminative parts of the time series leads to significant changes in classification accuracy, which can be used to evaluate each explanation. To be robust to different types of perturbations and different types of classifiers, we aggregate the accuracy loss across perturbations and classifiers. This novel approach allows us to recommend the best explainer among a set of different explainers, including random and oracle explainers. We provide a quantitative and qualitative analysis for synthetic datasets, a variety of time-series datasets, as well as a real-world case study with known expert ground truth.
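The evaluation loop described in the abstract (perturb the time steps an explainer marks as important, measure how much a referee classifier's accuracy drops, and aggregate the loss) can be illustrated with a short sketch. The snippet below is a minimal illustration under assumed conventions, not the authors' AMEE implementation: it assumes univariate series in a NumPy array of shape (n_samples, n_timesteps), a saliency array of the same shape, a fitted scikit-learn-style classifier, and a simple Gaussian-noise perturbation. The function names perturb_top_k and explainer_score are hypothetical.

```python
import numpy as np
from sklearn.metrics import accuracy_score

def perturb_top_k(X, saliency, k_frac=0.1, rng=None):
    """Replace the k_frac most salient time steps of each series with Gaussian noise
    drawn from that series' own mean and standard deviation (illustrative perturbation)."""
    rng = np.random.default_rng(0) if rng is None else rng
    X_pert = X.copy()
    k = max(1, int(round(k_frac * X.shape[1])))
    for i in range(X.shape[0]):
        top_idx = np.argsort(saliency[i])[-k:]   # time steps the explainer deems most important
        X_pert[i, top_idx] = rng.normal(X[i].mean(), X[i].std() + 1e-8, size=k)
    return X_pert

def explainer_score(clf, X_test, y_test, saliency, k_fracs=(0.05, 0.1, 0.2)):
    """Mean accuracy loss of a fitted referee classifier after perturbing the regions
    an explainer highlights; a larger loss suggests the explainer points at genuinely
    discriminative parts of the series."""
    base_acc = accuracy_score(y_test, clf.predict(X_test))
    losses = []
    for k_frac in k_fracs:
        X_pert = perturb_top_k(X_test, saliency, k_frac=k_frac)
        losses.append(base_acc - accuracy_score(y_test, clf.predict(X_pert)))
    return float(np.mean(losses))
```

To compare explainers in the spirit of the abstract, one would compute such a score for each saliency map (e.g., LIME, SHAP, CAM) with several referee classifiers and perturbation types, aggregate the accuracy losses, and recommend the explainer with the largest aggregated loss; the random and oracle explainers mentioned in the abstract can serve as reference points.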


Figures 1–21 and two additional figure files from the article are available via the PMC full text linked above.

Similar Articles

1. Robust explainer recommendation for time series classification.
Data Min Knowl Discov. 2024;38(6):3372-3413. doi: 10.1007/s10618-024-01045-8. Epub 2024 Jun 20.
2. Explaining the black-box smoothly-A counterfactual approach.
Med Image Anal. 2023 Feb;84:102721. doi: 10.1016/j.media.2022.102721. Epub 2022 Dec 13.
3. Ensemble-based genetic algorithm explainer with automized image segmentation: A case study on melanoma detection dataset.
Comput Biol Med. 2023 Mar;155:106613. doi: 10.1016/j.compbiomed.2023.106613. Epub 2023 Feb 5.
4. A Meta-Learning Approach for Training Explainable Graph Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2024 Apr;35(4):4647-4655. doi: 10.1109/TNNLS.2022.3171398. Epub 2024 Apr 4.
5. Explainable machine learning models based on multimodal time-series data for the early detection of Parkinson's disease.
Comput Methods Programs Biomed. 2023 Jun;234:107495. doi: 10.1016/j.cmpb.2023.107495. Epub 2023 Mar 23.
6. A global model-agnostic rule-based XAI method based on Parameterized Event Primitives for time series classifiers.
Front Artif Intell. 2024 Sep 20;7:1381921. doi: 10.3389/frai.2024.1381921. eCollection 2024.
7. Model-agnostic explanations for survival prediction models.
Stat Med. 2024 May 20;43(11):2161-2182. doi: 10.1002/sim.10057. Epub 2024 Mar 26.
8. An Explainable AI-Enabled Framework for Interpreting Pulmonary Diseases from Chest Radiographs.
Cancers (Basel). 2023 Jan 3;15(1):314. doi: 10.3390/cancers15010314.
9. ExAID: A multimodal explanation framework for computer-aided diagnosis of skin lesions.
Comput Methods Programs Biomed. 2022 Mar;215:106620. doi: 10.1016/j.cmpb.2022.106620. Epub 2022 Jan 5.
10. Explanation strategies in humans versus current explainable artificial intelligence: Insights from image classification.
Br J Psychol. 2024 Jun 10. doi: 10.1111/bjop.12714.
