PIP: Pictorial Interpretable Prototype Learning for Time Series Classification.

Authors

Ghods Alireza, Cook Diane J

Affiliation

Washington State University, USA.

Publication

IEEE Comput Intell Mag. 2022 Feb;17(1):34-45. doi: 10.1109/mci.2021.3129957. Epub 2022 Jan 12.

Abstract

Time series classifiers are not only challenging to design, but also notoriously difficult to deploy for critical applications, because end users may not understand or trust black-box models. Despite recent efforts, the explanations generated by other interpretable time series models remain difficult for non-engineers to understand. The goal of PIP is to provide time series explanations tailored to specific end users. To address this challenge, this paper introduces PIP, a novel deep learning architecture that jointly learns classification models and meaningful visual class prototypes. PIP allows users to train the model on their choice of class illustrations, so it can create user-friendly explanations grounded in end users' own definitions. We hypothesize that a pictorial description is an effective way to communicate a learned concept to non-expert users. Based on an end-user experiment with participants from multiple backgrounds, PIP offers an improved combination of accuracy and interpretability over baseline methods for time series classification.
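To make the idea of class prototypes concrete, the following is a minimal sketch of distance-based prototype classification, the general mechanism behind prototype learning. It is not PIP's actual architecture: in PIP the prototypes are learned jointly with a deep classifier and tied to user-supplied class illustrations, whereas here the class names, embeddings, and prototype vectors are all hypothetical fixed values.

```python
import numpy as np

# Hypothetical learned prototypes: one embedding vector per class.
# In PIP these would be learned jointly with the classifier and
# associated with a user-chosen illustration of each class.
prototypes = {
    "walking": np.array([1.0, 0.0, 0.0]),
    "running": np.array([0.0, 1.0, 0.0]),
}

def classify(embedding):
    """Assign the class whose prototype is nearest in Euclidean distance.

    `embedding` stands in for the output of a trained encoder applied
    to a raw time series; distances to each prototype also serve as an
    explanation ("this series looks most like the 'walking' example").
    """
    dists = {c: np.linalg.norm(embedding - p) for c, p in prototypes.items()}
    return min(dists, key=dists.get)

print(classify(np.array([0.9, 0.1, 0.0])))  # -> walking
```

Because each prediction reduces to "closest prototype", the model's decision can be communicated to a non-expert by showing the illustration attached to the winning prototype, which is the interpretability mechanism the abstract describes.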

Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5825/9273001/183d106b8c26/nihms-1768501-f0002.jpg

Similar articles

1. PIP: Pictorial Interpretable Prototype Learning for Time Series Classification.
   IEEE Comput Intell Mag. 2022 Feb;17(1):34-45. doi: 10.1109/mci.2021.3129957. Epub 2022 Jan 12.
6. CEFEs: A CNN Explainable Framework for ECG Signals.
   Artif Intell Med. 2021 May;115:102059. doi: 10.1016/j.artmed.2021.102059. Epub 2021 Mar 26.
8. X-CHAR: A Concept-based Explainable Complex Human Activity Recognition Model.
   Proc ACM Interact Mob Wearable Ubiquitous Technol. 2023 Mar;7(1). doi: 10.1145/3580804. Epub 2023 Mar 28.
