

Generative Perturbation Network for Universal Adversarial Attacks on Brain-Computer Interfaces.

Publication Information

IEEE J Biomed Health Inform. 2023 Nov;27(11):5622-5633. doi: 10.1109/JBHI.2023.3303494. Epub 2023 Nov 7.

Abstract

Deep neural networks (DNNs) have been successfully applied to classification in EEG-based brain-computer interface (BCI) systems. However, recent studies have found that well-designed input samples, known as adversarial examples, can easily fool well-performing deep neural network models with minor perturbations undetectable by a human. This paper proposes an efficient generative model, the generative perturbation network (GPN), which can generate universal adversarial examples with a single architecture for both non-targeted and targeted attacks. Furthermore, the proposed model can be efficiently extended to generate perturbations conditionally or simultaneously for various targets and victim models. Our experimental evaluation demonstrates that perturbations generated by the proposed model outperform previous approaches to crafting signal-agnostic perturbations. We also show that the extension of the network to signal-specific attacks significantly reduces generation time while performing comparably. The transferability of the proposed method across classification networks is superior to that of other methods, demonstrating the high generality of our perturbations.
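The core idea of a universal (signal-agnostic) perturbation is that one fixed vector, bounded in amplitude, is added to every input trial and still flips the classifier's decisions. The sketch below illustrates this with a toy linear classifier in place of a trained BCI model; the dimensionality, the L-infinity budget `EPS`, and the closed-form perturbation `-EPS * sign(w)` are all illustrative assumptions and are not the paper's GPN, which trains a generative network to produce the perturbation.

```python
import random

random.seed(0)

DIM = 16   # toy "EEG feature" dimensionality (illustrative)
EPS = 0.5  # L-infinity budget for the universal perturbation

# Hypothetical fixed linear classifier standing in for a trained BCI model:
# predicts class 1 when the score w . x is positive.
w = [random.uniform(-1.0, 1.0) for _ in range(DIM)]

def predict(x):
    score = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if score > 0 else 0

# One signal-agnostic perturbation added to ALL trials, pushing every
# input toward class 0: for a linear model, -EPS * sign(w) lowers the
# score by EPS * sum(|w_i|) regardless of the input.
delta = [-EPS if wi > 0 else EPS for wi in w]

# Synthetic "trials": random inputs the model currently labels class 1.
trials = []
while len(trials) < 50:
    x = [random.uniform(-1.0, 1.0) for _ in range(DIM)]
    if predict(x) == 1:
        trials.append(x)

fooled = sum(
    predict([xi + di for xi, di in zip(x, delta)]) == 0 for x in trials
)
fooling_rate = fooled / len(trials)
print(f"fooling rate: {fooling_rate:.2f}")
```

Because the same `delta` is reused for every trial, it only needs to be computed (or, in the paper's setting, generated) once, which is what makes universal perturbations cheap to deploy compared with per-signal attacks.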

