Mai Ximing, Meng Jianjun, Ding Yi, Zhu Xiangyang, Guan Cuntai
IEEE Trans Neural Syst Rehabil Eng. 2025;33:1460-1472. doi: 10.1109/TNSRE.2025.3560434. Epub 2025 Apr 23.
The prolonged calibration time required by steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) poses a significant challenge to real-life applications. Cross-stimulus transfer emerges as a promising solution, wherein a model trained on a subset of classes (seen classes) can predict both seen and unseen classes. Existing approaches extracted common components from SSVEP templates of seen classes to construct templates for unseen classes; however, they are limited by the class-specific activities and noise contained in these components, leading to imprecise templates that degrade classification performance. To address this issue, this study proposed an SSVEP Response Regression Network (SRRNet), which learned the regression mapping between sine-cosine reference signals and SSVEP templates using seen class data. This network reconstructed SSVEP templates for unseen classes utilizing their corresponding sine-cosine signals. Additionally, an SSVEP template regressing and spatial filtering (SRSF) framework was introduced, where both test data and SSVEP templates were projected by task-related component analysis (TRCA) spatial filters, and correlations were computed for target prediction. Comparative evaluations on two public datasets revealed that our method significantly outperformed state-of-the-art methods, elevating the information transfer rate (ITR) from 173.33 bits/min to 203.79 bits/min. By effectively modeling the regression from sine-cosine reference signals to SSVEP templates, SRRNet can construct SSVEP templates for unseen classes without training samples from those classes. By integrating regressed SSVEP templates with spatial filtering-based methods, our method enhances cross-stimulus transfer performance in SSVEP-BCIs, thus advancing their practical applicability. The code is available at https://github.com/MaiXiming/SRRNet.
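The abstract's pipeline — sine-cosine reference signals, spatial filtering of both test data and templates, and correlation-based target prediction — can be sketched minimally as follows. This is an illustrative reconstruction, not the authors' implementation: the function names, the single-vector spatial filter, and the plain Pearson correlation are assumptions; SRRNet's learned regression and the full TRCA filter bank are omitted.

```python
import numpy as np

def sine_cosine_reference(freq, fs, n_samples, n_harmonics=5):
    """Sine-cosine reference matrix for one stimulus frequency.

    Rows are sin/cos pairs at the fundamental and its harmonics;
    shape: (2 * n_harmonics, n_samples). These references are the
    inputs from which SRRNet regresses SSVEP templates.
    """
    t = np.arange(n_samples) / fs
    rows = []
    for h in range(1, n_harmonics + 1):
        rows.append(np.sin(2 * np.pi * h * freq * t))
        rows.append(np.cos(2 * np.pi * h * freq * t))
    return np.stack(rows)

def correlation_score(a, b):
    """Pearson correlation between two 1-D filtered signals."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def predict_target(test_trial, templates, spatial_filter):
    """Project the test trial and every class template with the same
    spatial filter (e.g., one learned by TRCA), then pick the class
    whose filtered template correlates best with the filtered test data.

    test_trial:     (n_channels, n_samples)
    templates:      (n_classes, n_channels, n_samples) - regressed or
                    averaged SSVEP templates
    spatial_filter: (n_channels,) - a single filter vector, assumed
                    here for simplicity
    """
    x = spatial_filter @ test_trial  # (n_samples,)
    scores = [correlation_score(x, spatial_filter @ tpl) for tpl in templates]
    return int(np.argmax(scores))
```

For unseen classes, the templates passed to `predict_target` would come from the regression network applied to that class's `sine_cosine_reference` output rather than from recorded training trials.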