Dong Liuyuan, Xu Chengzhi, Xie Ruizhen, Wang Xuyang, Yang Wanli, Li Yimeng
Hubei Provincial Key Laboratory of Green Intelligent Computing Power Network, School of Computer, Hubei University of Technology, Wuhan 430068, China.
School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China.
Biomimetics (Basel). 2025 Aug 21;10(8):554. doi: 10.3390/biomimetics10080554.
Steady-State Visual Evoked Potentials (SSVEPs) have emerged as an efficient interaction modality for brain-computer interfaces (BCIs), enabling bioinspired, efficient language output for individuals with aphasia. To address the underutilization of SSVEP frequency information and the redundant computation of existing transformer-based deep learning methods, this paper analyzes signals in both the time and frequency domains and proposes a stacked encoder-decoder (SED) network architecture based on an xLSTM model and a spatial attention mechanism, termed SED-xLSTM, which is the first to apply xLSTM to the SSVEP speller field. The model takes low-channel spectrograms as input and employs the filter bank technique to make full use of harmonic information. By leveraging a gating mechanism, SED-xLSTM effectively extracts and fuses high-dimensional spatial-channel semantic features from SSVEP signals. Experimental results on three public datasets demonstrate the superior performance of SED-xLSTM in terms of classification accuracy and information transfer rate; in particular, it outperforms existing methods under cross-validation across various temporal scales.
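The abstract mentions two preprocessing ideas: a filter bank that exposes SSVEP harmonic content, and a spectrogram representation fed to the network. The sketch below is an illustrative Python example of how such a pipeline is commonly implemented (in the style of FBCCA-like filter banks); it is not the authors' released code, and the sub-band edges, filter order, and STFT parameters are assumptions.

    import numpy as np
    from scipy import signal

    def filter_bank(eeg, fs=250, n_subbands=3):
        """Decompose multi-channel EEG (channels x samples) into sub-bands so
        that SSVEP harmonics are represented explicitly. Cut-off choices follow
        common filter-bank practice, not necessarily the paper's settings."""
        subbands = []
        for m in range(1, n_subbands + 1):
            low = 8.0 * m          # assumed lower edge of the m-th sub-band (Hz)
            high = 90.0            # assumed shared upper edge (Hz)
            sos = signal.cheby1(N=4, rp=0.5, Wn=[low, high],
                                btype="bandpass", fs=fs, output="sos")
            subbands.append(signal.sosfiltfilt(sos, eeg, axis=-1))
        return np.stack(subbands)  # (n_subbands, n_channels, n_samples)

    def spectrogram_input(subband, fs=250):
        """Convert one sub-band into a time-frequency magnitude map, i.e. the
        kind of 'spectrogram' input the abstract refers to; STFT window and
        overlap are assumptions."""
        f, t, Zxx = signal.stft(subband, fs=fs, nperseg=fs // 2, noverlap=fs // 4)
        return np.abs(Zxx)         # magnitude spectrogram per channel

Stacking the per-sub-band spectrograms then yields a low-channel time-frequency tensor that an encoder-decoder model such as the one described here could consume.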