


Backdoor attacks against distributed swarm learning.

Authors

Chen Kongyang, Zhang Huaiyuan, Feng Xiangyu, Zhang Xiaoting, Mi Bing, Jin Zhiping

Affiliations

Institute of Artificial Intelligence and Blockchain, Guangzhou University, Guangzhou, 510006, China; Pazhou Lab, Guangzhou, 510330, China; Jiangsu Key Laboratory of Media Design and Software Technology, Jiangnan University, Wuxi, China.

School of Computer Science and Cyber Engineering, Guangzhou University, China.

Publication

ISA Trans. 2023 Oct;141:59-72. doi: 10.1016/j.isatra.2023.03.034. Epub 2023 Mar 28.

DOI: 10.1016/j.isatra.2023.03.034
PMID: 37012167
Abstract

Traditional machine learning approaches often need a central server, where raw datasets or model updates are trained or aggregated in a centralized way. However, these approaches are vulnerable to many attacks, especially attacks by a malicious server. Recently, a new distributed machine learning paradigm, called Swarm Learning (SL), has been proposed to support decentralized training with no central server. In each training round, each participant node has a chance to be selected to serve as a temporary server. Thus, participant nodes do not need to share their private datasets with a central server to achieve fair and secure model aggregation. To the best of our knowledge, there are no existing studies of the security threats in swarm learning. In this paper, we investigate how to implant backdoor attacks against swarm learning to illustrate its potential security risk. Experimental results confirm the effectiveness of our method, with high attack accuracy in different scenarios. We also study several defense methods to alleviate these backdoor attacks.
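The swarm-learning round described in the abstract (a randomly selected participant serves as the temporary aggregator, so no fixed central server is needed) can be sketched in a few lines. The sketch below is illustrative only: the function names and the scaled model-replacement update used by the malicious node are assumptions for demonstration, not the paper's actual attack construction.

```python
import numpy as np

def local_update(weights, rng):
    # Stand-in for one round of honest local training: a small random step.
    return weights + rng.normal(0.0, 0.01, size=weights.shape)

def backdoored_update(global_weights, target, n_nodes):
    # Model-replacement-style poisoning (illustrative): the attacker scales
    # its update so that, after averaging over n_nodes, the aggregate is
    # pulled toward the attacker's backdoored target model.
    return global_weights + n_nodes * (target - global_weights)

def swarm_round(models, malicious_id, target, rng):
    # Swarm learning: a randomly chosen participant acts as the temporary
    # aggregator for this round instead of a fixed central server.
    aggregator = int(rng.integers(len(models)))
    global_est = np.mean(models, axis=0)
    updates = []
    for i, w in enumerate(models):
        if i == malicious_id:
            updates.append(backdoored_update(global_est, target, len(models)))
        else:
            updates.append(local_update(w, rng))
    aggregated = np.mean(updates, axis=0)  # aggregator averages all updates
    # The aggregated model is broadcast back to every participant.
    return [aggregated.copy() for _ in models], aggregator

rng = np.random.default_rng(0)
n_nodes, dim = 5, 4
models = [np.zeros(dim) for _ in range(n_nodes)]
target = np.ones(dim)  # attacker's backdoored weights (hypothetical)
for _ in range(3):
    models, agg = swarm_round(models, malicious_id=0, target=target, rng=rng)
print(np.linalg.norm(models[0] - target))  # small: aggregate near the target
```

Even with a single malicious node, the scaled update dominates the unweighted average, which is why the paper studies defenses that detect or damp anomalous contributions rather than relying on plain averaging.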


Similar Articles

1. Backdoor attacks against distributed swarm learning.
   ISA Trans. 2023 Oct;141:59-72. doi: 10.1016/j.isatra.2023.03.034. Epub 2023 Mar 28.
2. Edge-Cloud Collaborative Defense against Backdoor Attacks in Federated Learning.
   Sensors (Basel). 2023 Jan 17;23(3):1052. doi: 10.3390/s23031052.
3. Backdoor attack and defense in federated generative adversarial network-based medical image synthesis.
   Med Image Anal. 2023 Dec;90:102965. doi: 10.1016/j.media.2023.102965. Epub 2023 Sep 22.
4. Swarm-FHE: Fully Homomorphic Encryption-based Swarm Learning for Malicious Clients.
   Int J Neural Syst. 2023 Aug;33(8):2350033. doi: 10.1142/S0129065723500338. Epub 2023 May 27.
5. Federated Learning Backdoor Attack Based on Frequency Domain Injection.
   Entropy (Basel). 2024 Feb 14;26(2):164. doi: 10.3390/e26020164.
6. How to backdoor split learning.
   Neural Netw. 2023 Nov;168:326-336. doi: 10.1016/j.neunet.2023.09.037. Epub 2023 Sep 24.
7. Exploiting Missing Value Patterns for a Backdoor Attack on Machine Learning Models of Electronic Health Records: Development and Validation Study.
   JMIR Med Inform. 2022 Aug 19;10(8):e38440. doi: 10.2196/38440.
8. A Textual Backdoor Defense Method Based on Deep Feature Classification.
   Entropy (Basel). 2023 Jan 23;25(2):220. doi: 10.3390/e25020220.
9. Poison Ink: Robust and Invisible Backdoor Attack.
   IEEE Trans Image Process. 2022;31:5691-5705. doi: 10.1109/TIP.2022.3201472. Epub 2022 Sep 2.
10. Fair detection of poisoning attacks in federated learning on non-i.i.d. data.
   Data Min Knowl Discov. 2023 Jan 4:1-26. doi: 10.1007/s10618-022-00912-6.