Multi-Source Decentralized Transfer for Privacy-Preserving BCIs.

Publication Information

IEEE Trans Neural Syst Rehabil Eng. 2022;30:2710-2720. doi: 10.1109/TNSRE.2022.3207494. Epub 2022 Sep 26.

Abstract

Transfer learning, which utilizes labeled source domains to facilitate learning in a target model, is effective in alleviating the high intra- and inter-subject variability in electroencephalogram (EEG) based brain-computer interfaces (BCIs). Existing transfer learning approaches usually use the source subjects' EEG data directly, leading to privacy concerns. This paper considers a decentralized privacy-preserving transfer learning scenario: there are multiple source subjects, whose data and computations are kept local, and only the parameters or predictions of their pre-trained models can be accessed for privacy protection; how, then, can effective cross-subject transfer be performed for a new subject with unlabeled EEG trials? We propose an offline unsupervised multi-source decentralized transfer (MSDT) approach, which first generates a pre-trained model for each source subject, and then performs decentralized transfer using the source model parameters (in the gray-box setting) or predictions (in the black-box setting). Experiments on two datasets from two BCI paradigms, motor imagery and affective BCI, demonstrated that MSDT outperformed several existing approaches that do not consider privacy protection at all. In other words, MSDT achieved both strong privacy protection and better classification performance.
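The black-box setting described above can be illustrated with a minimal sketch: each source subject exposes only a prediction interface for its locally pre-trained model, and the target side fuses the source predictions on its unlabeled trials. The `SourceModel` class, the soft-voting fusion rule, and all data here are hypothetical stand-ins for illustration, not the authors' exact MSDT algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

class SourceModel:
    """Toy linear classifier standing in for one source subject's
    pre-trained model. In the black-box setting, only predictions
    cross the subject boundary; raw EEG and weights stay local."""
    def __init__(self, w, b):
        self.w, self.b = w, b

    def predict_proba(self, X):
        # Logistic scores for a binary task (e.g., left- vs right-hand MI).
        z = X @ self.w + self.b
        p = 1.0 / (1.0 + np.exp(-z))
        return np.stack([1 - p, p], axis=1)

# Three hypothetical source subjects, each with its own local model.
dim = 8
sources = [SourceModel(rng.normal(size=dim), rng.normal()) for _ in range(3)]

# Unlabeled feature vectors from the new target subject.
X_target = rng.normal(size=(5, dim))

def black_box_transfer(models, X):
    """Average the source models' predicted probabilities (soft voting),
    a simple stand-in for the black-box fusion step."""
    probs = np.mean([m.predict_proba(X) for m in models], axis=0)
    return probs.argmax(axis=1), probs

pseudo_labels, probs = black_box_transfer(sources, X_target)
print(pseudo_labels)       # one pseudo-label per unlabeled target trial
print(probs.sum(axis=1))   # each row of fused probabilities sums to 1
```

Note that no source EEG data or model weights are ever transmitted; the gray-box variant would instead share model parameters while still keeping the raw EEG local.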
