
Low-power and lightweight spiking transformer for EEG-based auditory attention detection.

Author Information

Lan Yawen, Wang Yuchen, Zhang Yuping, Zhu Hong

Affiliations

School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China.

Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China.

Publication Information

Neural Netw. 2025 Mar;183:106977. doi: 10.1016/j.neunet.2024.106977. Epub 2024 Dec 4.

Abstract

EEG signal analysis can be used to study brain activity and the function and structure of neural networks, helping to understand neural mechanisms such as cognition, emotion, and behavior. EEG-based auditory attention detection uses EEG signals to determine an individual's level of attention to specific auditory stimuli: researchers record and analyze a subject's electrical brain activity to infer whether the subject is attending to a particular auditory stimulus. Deploying such models on edge devices would make them far more convenient for subjects to use. However, most existing EEG-based auditory attention detection models rely on traditional neural networks, whose high computational load makes deployment on edge devices challenging. We present a pioneering approach: a binarized spiking Transformer for EEG-based auditory attention detection, characterized by high accuracy, low power consumption, and a lightweight design that makes it highly suitable for edge deployment. For low power consumption, the network is built from spiking neurons, which emit sparse, binary spike sequences and thereby substantially reduce computational power consumption. For a lightweight footprint, we apply a post-training quantization strategy that quantizes the full-precision network weights into binary weights, greatly reducing the model size. In addition, the Transformer structure ensures that the model learns effective representations and maintains high performance. We validate the model on mainstream datasets; experimental results show that it outperforms existing state-of-the-art models while shrinking the model size by more than 21 times relative to its full-precision counterpart.
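The two mechanisms the abstract describes — spiking neurons that emit binary spikes, and post-training binarization of weights — can be illustrated with a minimal sketch. This is not the paper's implementation: the leaky integrate-and-fire (LIF) parameters, the layer size, and the XNOR-Net-style per-tensor scaling factor are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Low-power side: a minimal leaky integrate-and-fire (LIF) neuron. ---
# Hypothetical parameters; the paper's actual spiking neuron may differ.
def lif(inputs, beta=0.9, threshold=1.0):
    """Emit a binary spike train: the membrane potential leaks by `beta`,
    integrates the input, and fires (1) with a soft reset at `threshold`."""
    v, spikes = 0.0, []
    for x in inputs:
        v = beta * v + x
        if v >= threshold:
            spikes.append(1)
            v -= threshold  # soft reset
        else:
            spikes.append(0)
    return spikes

spikes = lif(rng.uniform(0.0, 0.5, size=100))
# Because spikes are binary, downstream multiply-accumulates degenerate
# into additions gated by the spike bits, which is what saves power.

# --- Lightweight side: post-training binarization of weights. ---
W = rng.normal(size=(64, 64)).astype(np.float32)  # hypothetical layer
alpha = np.abs(W).mean()            # per-tensor scale (assumed XNOR-Net style)
W_bin = np.sign(W).astype(np.int8)  # 1-bit weights in {-1, +1}
W_approx = alpha * W_bin            # approximates the full-precision layer

# 32-bit floats -> 1-bit weights plus one float scale: roughly a 32x
# storage reduction, consistent with the >21x figure in the abstract.
ratio = (W.size * 32) / (W.size * 1 + 32)
print(round(ratio, 1))
```

The sketch only demonstrates the storage argument; in a real deployment the 1-bit weights would be bit-packed and the spiking layers run on event-driven hardware to realize the power savings.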

