
Reducing calibration efforts of SSVEP-BCIs by shallow fine-tuning-based transfer learning.

Author Information

Ding Wenlong, Liu Aiping, Chen Xingui, Xie Chengjuan, Wang Kai, Chen Xun

Affiliations

Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230027, China.

Department of Neurology, The First Affiliated Hospital of Anhui Medical University, Hefei 230022, China.

Publication Information

Cogn Neurodyn. 2025 Dec;19(1):81. doi: 10.1007/s11571-025-10264-8. Epub 2025 May 26.

Abstract

The utilization of transfer learning (TL), particularly through pre-training and fine-tuning, in steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) has substantially reduced the calibration efforts. However, commonly employed fine-tuning approaches, including end-to-end fine-tuning and last-layer fine-tuning, require data from target subjects that encompass all categories (stimuli), resulting in a time-consuming data collection process, especially in systems with numerous categories. To address this challenge, this study introduces a straightforward yet effective ShallOw Fine-Tuning (SOFT) method to substantially reduce the number of calibration categories needed for model fine-tuning, thereby further mitigating the calibration efforts for target subjects. Specifically, SOFT involves freezing the parameters of the deeper layers while updating those of the shallow layers during fine-tuning. Freezing the parameters of deeper layers preserves the model's ability to recognize semantic and high-level features across all categories, as established during pre-training. Moreover, data from different categories exhibit similar individual-specific low-level features in SSVEP-BCIs. Consequently, updating the parameters of shallow layers, which are responsible for processing low-level features, with data solely from partial categories enables the fine-tuned model to efficiently capture the individual-related features shared by all categories. The effectiveness of SOFT is validated using two public datasets. Comparative analysis with commonly used end-to-end and last-layer fine-tuning methods reveals that SOFT achieves higher classification accuracy while requiring fewer calibration categories. The proposed SOFT method further decreases the calibration efforts for target subjects by reducing the calibration category requirements, thereby improving the feasibility of SSVEP-BCIs for real-world applications.
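
The fine-tuning scheme described in the abstract (freeze the deeper layers, update only the shallow layers on target-subject data from a subset of stimuli) can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch example: the network class SSVEPNet, its layer names, and the exact shallow/deep split are illustrative assumptions, not the architecture or code used in the paper.

```python
# Minimal sketch of shallow fine-tuning (SOFT-style), assuming a generic
# pre-trained SSVEP classifier whose early block extracts low-level,
# individual-specific features and whose later layers encode category-level
# semantics. Layer names and the shallow/deep split are illustrative only.
import torch
import torch.nn as nn


class SSVEPNet(nn.Module):
    """Hypothetical SSVEP classifier: shallow block -> deep block -> head."""

    def __init__(self, n_channels: int = 9, n_classes: int = 40):
        super().__init__()
        # Shallow layers: spatial/temporal filtering of raw EEG
        # (low-level, individual-specific features).
        self.shallow = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(n_channels, 1)),
            nn.BatchNorm2d(16),
            nn.ELU(),
        )
        # Deeper layers: higher-level, category-related representations.
        self.deep = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=(1, 25), stride=(1, 5)),
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 8)),
            nn.Flatten(),
        )
        self.head = nn.Linear(32 * 8, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_channels, n_time_samples)
        return self.head(self.deep(self.shallow(x)))


def soft_trainable_params(model: SSVEPNet):
    """Freeze the deeper layers and head; keep only the shallow layers trainable."""
    for module in (model.deep, model.head):
        for p in module.parameters():
            p.requires_grad = False
    return list(model.shallow.parameters())


# Fine-tuning setup: only shallow parameters are handed to the optimizer, so
# target-subject calibration data covering only a subset of stimuli suffices.
model = SSVEPNet()  # in practice, pre-trained source-subject weights would be loaded here
optimizer = torch.optim.Adam(soft_trainable_params(model), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```

In this sketch only the parameters of model.shallow are passed to the optimizer, so gradient updates adapt the low-level, individual-specific filters to the target subject, while the frozen deeper layers retain the category-level representations learned during pre-training across all stimuli.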

