
A Transformer-Based Approach Combining Deep Learning Network and Spatial-Temporal Information for Raw EEG Classification.

Publication Information

IEEE Trans Neural Syst Rehabil Eng. 2022;30:2126-2136. doi: 10.1109/TNSRE.2022.3194600. Epub 2022 Aug 4.

Abstract

The attention mechanism of the Transformer has the advantage of extracting feature correlations in long-sequence data and of making the model visualizable. Because EEG signals are time-series data, the spatial and temporal dependencies between time points and across channels carry important information for accurate classification. So far, Transformer-based approaches have not been widely explored for motor-imagery EEG classification and visualization, and general models validated across individuals are especially lacking. Taking advantage of the Transformer model and the spatial-temporal characteristics of EEG signals, we designed Transformer-based models for classifying motor-imagery EEG on the PhysioNet dataset. With 3 s of EEG data, our models obtained the best classification accuracies of 83.31%, 74.44%, and 64.22% on two-, three-, and four-class motor-imagery tasks under cross-individual validation, outperforming other state-of-the-art models by 0.88%, 2.11%, and 1.06%. Including positional-embedding modules in the Transformer improved EEG classification performance. Furthermore, visualization of the attention weights provided insight into the working mechanism of the Transformer-based networks during motor-imagery tasks. The topography of the attention weights revealed a pattern of event-related desynchronization (ERD) consistent with the spectral analysis of Mu and beta rhythms over the sensorimotor areas. Together, our deep learning methods not only provide novel and powerful tools for classifying and understanding EEG data but also have broad applications for brain-computer interface (BCI) systems.
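To make the two components the abstract highlights concrete, here is a minimal numpy sketch of single-head scaled dot-product self-attention applied to an EEG segment after adding sinusoidal positional encodings. This is an illustrative toy, not the authors' model: the segment shape (480 time points × 64 channels, i.e. 3 s at the PhysioNet dataset's 160 Hz sampling rate over 64 electrodes), the random projection weights, and the single attention head are all assumptions for demonstration. The attention-weight matrix it returns is the kind of quantity the paper visualizes topographically.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Standard sinusoidal positional encoding (sin on even dims, cos on odd)."""
    pos = np.arange(seq_len)[:, None]                       # (seq_len, 1)
    i = np.arange(d_model)[None, :]                         # (1, d_model)
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angle[:, 0::2])
    pe[:, 1::2] = np.cos(angle[:, 1::2])
    return pe

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention.
    x: (seq_len, d_model). Returns (output, attention_weights)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])                 # (seq_len, seq_len)
    scores -= scores.max(axis=-1, keepdims=True)            # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)          # row-wise softmax
    return weights @ v, weights

rng = np.random.default_rng(0)
seq_len, d_model = 480, 64           # assumed: 3 s at 160 Hz, 64 EEG channels
x = rng.standard_normal((seq_len, d_model))                 # one fake EEG segment
x = x + sinusoidal_positional_encoding(seq_len, d_model)    # inject time order
w_q, w_k, w_v = (rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(3))
out, attn = self_attention(x, w_q, w_k, w_v)
print(out.shape, attn.shape)         # (480, 64) (480, 480)
```

Without the positional-encoding step, permuting the time points would leave the attention output unchanged up to the same permutation, which is one intuition for why the abstract reports that positional embeddings improve EEG classification.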

