IEEE Trans Neural Syst Rehabil Eng. 2019 May;27(5):814-825. doi: 10.1109/TNSRE.2019.2908955. Epub 2019 Apr 2.
Deep learning has been successfully used in numerous applications because of its outstanding performance and its ability to avoid manual feature engineering. One such application is the electroencephalogram (EEG)-based brain-computer interface (BCI), for which multiple convolutional neural network (CNN) models have been proposed for EEG classification. However, it has been found that deep learning models can be easily fooled by adversarial examples, i.e., normal examples contaminated with small, deliberately designed perturbations. This paper proposes an unsupervised fast gradient sign method (UFGSM) to attack three popular CNN classifiers in BCIs, and demonstrates its effectiveness. We also verify the transferability of adversarial examples in BCIs, which means attacks can be performed even without knowing the architecture and parameters of the target models, or the datasets on which they were trained. To the best of our knowledge, this is the first study on the vulnerability of CNN classifiers in EEG-based BCIs, and we hope it will draw more attention to the security of BCI systems.
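For context, the classic fast gradient sign method (FGSM) crafts an adversarial example as x_adv = x + epsilon * sign(grad_x L(theta, x, y)), i.e., a one-step perturbation bounded by epsilon in the L-infinity norm. The UFGSM proposed in this paper is an unsupervised variant that does not require true labels; its details are not given in the abstract. The sketch below shows only the standard supervised FGSM applied to a generic EEG CNN classifier. The toy model architecture, epsilon value, and tensor shapes are assumptions for illustration, not the paper's actual classifiers or settings.

```python
import torch
import torch.nn as nn


def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.05) -> torch.Tensor:
    """Standard (supervised) FGSM: shift each EEG trial in the direction of the
    sign of the loss gradient, bounded by epsilon in the L-infinity norm.
    x: (batch, 1, channels, time_samples); y: integer class labels.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()


if __name__ == "__main__":
    # Toy stand-in for an EEG CNN (assumed input: 64 channels x 256 samples).
    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32)),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(8, 2),
    )
    x = torch.randn(4, 1, 64, 256)   # 4 synthetic EEG trials
    y = torch.randint(0, 2, (4,))    # binary class labels
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max())   # perturbation magnitude <= epsilon
```

Because the perturbation is a single signed gradient step, it stays imperceptibly small in each EEG channel while still being able to flip the classifier's prediction, which is what makes transferability across models and datasets a practical concern.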