
Towards best practice of interpreting deep learning models for EEG-based brain computer interfaces.

Author information

Cui Jian, Yuan Liqiang, Wang Zhaoxiang, Li Ruilin, Jiang Tianzi

Affiliations

Research Center for Augmented Intelligence, Research Institute of Artificial Intelligence, Zhejiang Lab, Hangzhou, China.

School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore.

Publication information

Front Comput Neurosci. 2023 Aug 17;17:1232925. doi: 10.3389/fncom.2023.1232925. eCollection 2023.

Abstract

INTRODUCTION

As deep learning has achieved state-of-the-art performance on many tasks in EEG-based BCI, many efforts have been made in recent years to understand what the models have learned. This is commonly done by generating a heatmap that indicates to what extent each pixel of the input contributes to the final classification of a trained model. Despite their wide use, it is not yet understood to what extent the obtained interpretation results can be trusted and how accurately they reflect the model's decisions.
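The heatmap idea described above can be illustrated with a minimal gradient × input sketch, one widely used attribution technique (not necessarily one of the seven evaluated in this paper). The toy linear "model" and all names here are hypothetical; a linear model is used only because its gradient is exact, keeping the example self-contained:

```python
import numpy as np

# Toy "model": a linear classifier over an EEG-like input of shape
# (channels, time). Real BCI studies use deep networks; a linear model
# keeps the gradient exact so the heatmap can be checked by hand.
rng = np.random.default_rng(0)
n_channels, n_times = 4, 8
W = rng.normal(size=(n_channels, n_times))   # model weights
x = rng.normal(size=(n_channels, n_times))   # one EEG trial

score = float((W * x).sum())                 # class logit for this trial

# For a linear model, the gradient of the score w.r.t. the input is
# simply W, so the gradient x input heatmap is W * x.
gradient = W
heatmap = gradient * x                       # relevance of each "pixel"

# Sanity check: for a linear model the relevances sum exactly to the
# logit (the "completeness" property).
assert np.isclose(heatmap.sum(), score)
print(heatmap.shape)  # (4, 8)
```

Each entry of `heatmap` plays the role of one pixel's contribution in the heatmaps discussed in the abstract; for deep networks the gradient is obtained by backpropagation instead of being read off the weights.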

METHODS

We conduct studies to quantitatively evaluate seven different deep interpretation techniques across different models and datasets for EEG-based BCI.
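One common way to evaluate an interpretation technique quantitatively is a perturbation ("deletion") test: remove input points in decreasing order of relevance and track how quickly the model score falls. The sketch below, reusing a hypothetical toy linear model, shows the idea only; the paper's actual evaluation protocol may differ:

```python
import numpy as np

# Hypothetical deletion test for scoring a heatmap's faithfulness.
rng = np.random.default_rng(1)
W = rng.normal(size=(4, 8))   # toy linear model weights
x = rng.normal(size=(4, 8))   # one EEG-like trial

def score(inp):
    """Class logit of the toy linear model."""
    return float((W * inp).sum())

heatmap = W * x                               # gradient x input relevance
order = np.argsort(heatmap, axis=None)[::-1]  # most relevant point first

scores = [score(x)]
x_del = x.copy().ravel()
for idx in order:
    x_del[idx] = 0.0                          # "delete" next point
    scores.append(score(x_del.reshape(x.shape)))

# A faithful heatmap makes the score drop steeply at the start of the
# curve; the (normalized) area under this deletion curve is the metric.
deletion_auc = float(np.mean(scores))
```

A lower `deletion_auc` indicates a more faithful heatmap, since the points the heatmap ranked highest really were the ones the model relied on.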

RESULTS

The results reveal the importance of selecting a proper interpretation technique as the initial step. In addition, we find that the quality of the interpretation results is inconsistent across individual samples, even when a method with good overall performance is used. Many factors, including model structure and dataset type, can affect the quality of the interpretation results.

DISCUSSION

Based on these observations, we propose a set of procedures that allow interpretation results to be presented in an understandable and trustworthy way. We illustrate the usefulness of our method for EEG-based BCI with examples selected from different scenarios.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/238a/10470463/c00681452492/fncom-17-1232925-g001.jpg
