Improving Generalized Zero-Shot Learning SSVEP Classification Performance From Data-Efficient Perspective.

Publication information

IEEE Trans Neural Syst Rehabil Eng. 2023;31:4135-4145. doi: 10.1109/TNSRE.2023.3324148. Epub 2023 Oct 24.

Abstract

Generalized zero-shot learning (GZSL) has significantly reduced the training requirements of steady-state visual evoked potential (SSVEP) based brain-computer interfaces (BCIs). Traditional methods require training data for every class, whereas GZSL needs data for only a subset of classes, dividing the classes into 'seen' classes (those with training data) and 'unseen' classes (those without). However, inefficient utilization of SSVEP data limits the accuracy and information transfer rate (ITR) of existing GZSL methods. To this end, we proposed a framework that utilizes SSVEP data more effectively at three systematically combined levels: data acquisition, feature extraction, and decision-making. First, prevalent SSVEP-based BCIs overlook the inter-subject variance in visual latency and employ a fixed sampling starting time (SST). At the data-acquisition level, we introduced a dynamic sampling starting time (DSST) strategy that uses classification results on the validation set to find the optimal sampling starting time (OSST) for each subject. At the feature-extraction level, we developed a Transformer structure to capture the global information of the input data, compensating for the small receptive fields of existing networks; its global receptive field can adequately process longer input sequences. At the decision-making level, we designed a classifier-selection strategy that automatically selects the optimal classifier for the seen and unseen classes, respectively. We also proposed a training procedure that makes these solutions work in conjunction with one another. Our method was validated on three public datasets and outperformed the state-of-the-art (SOTA) methods. Crucially, it also outperformed representative methods that require training data for all classes.
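
The abstract evaluates methods by accuracy and information transfer rate (ITR). The abstract does not quote the paper's exact formula, but the ITR definition commonly used in SSVEP-BCI studies is, for reference:

```latex
% ITR in bits per minute: N target classes, classification accuracy P,
% selection time T in seconds (usually including the gaze-shifting interval).
\mathrm{ITR} = \frac{60}{T}\left[\log_2 N + P\log_2 P + (1-P)\log_2\!\frac{1-P}{N-1}\right]
```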
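
A minimal sketch of how the DSST search could be realized, assuming a trained classifier and a held-out validation set are already available. The function and variable names (find_optimal_start_time, classify, win_len) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def find_optimal_start_time(val_epochs, val_labels, classify,
                            candidate_ssts, fs=250, win_len=250):
    """Pick the optimal sampling starting time (OSST) for one subject by
    re-cutting the validation epochs at each candidate SST and keeping the
    offset that maximizes validation accuracy.

    val_epochs     : (n_trials, n_channels, n_samples) validation EEG
    val_labels     : (n_trials,) ground-truth stimulus classes
    classify       : callable(epochs) -> predicted labels (a trained model)
    candidate_ssts : candidate starting times in seconds after stimulus onset
    """
    best_sst, best_acc = candidate_ssts[0], -1.0
    for sst in candidate_ssts:
        start = int(round(sst * fs))                       # latency offset in samples
        segment = val_epochs[:, :, start:start + win_len]  # fixed-length window at this offset
        acc = np.mean(classify(segment) == val_labels)
        if acc > best_acc:
            best_sst, best_acc = sst, acc
    return best_sst, best_acc

# Example: search visual-latency offsets of 0-200 ms in 20 ms steps
# osst, acc = find_optimal_start_time(val_x, val_y, model.predict,
#                                     np.arange(0.0, 0.22, 0.02))
```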
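
The abstract contrasts the Transformer's global receptive field with the small receptive fields of earlier convolutional networks. The sketch below is only a generic PyTorch illustration of that idea, with each time step of an SSVEP epoch treated as a token; the layer sizes, token scheme, and class name are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class SSVEPTransformerSketch(nn.Module):
    """Generic sketch: self-attention over time steps gives every output
    position a global receptive field over the whole input sequence."""
    def __init__(self, n_channels=8, d_model=64, n_heads=4, n_layers=2, n_classes=40):
        super().__init__()
        self.embed = nn.Linear(n_channels, d_model)      # per-time-step channel embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                                # x: (batch, time, channels)
        h = self.encoder(self.embed(x))
        return self.head(h.mean(dim=1))                  # pool over time, then classify

# logits = SSVEPTransformerSketch()(torch.randn(2, 250, 8))  # -> shape (2, 40)
```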
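
The classifier-selection strategy is only named in the abstract, so the routing rule below is a hypothetical placeholder (a confidence threshold between a seen-class model and a training-free unseen-class scorer such as FBCCA), not the paper's actual criterion:

```python
def select_and_classify(x, seen_clf, unseen_clf, seen_classes, threshold=0.6):
    """Hypothetical rule: trust the seen-class classifier when it confidently
    predicts a seen class; otherwise fall back to the unseen-class classifier.

    seen_clf   : callable(x) -> (predicted_class, confidence in [0, 1])
    unseen_clf : callable(x) -> predicted_class
    """
    pred, conf = seen_clf(x)
    if pred in seen_classes and conf >= threshold:
        return pred              # decided by the seen-class branch
    return unseen_clf(x)         # route the trial to the unseen-class branch
```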
