Chen Kongyang, Zhang Huaiyuan, Feng Xiangyu, Zhang Xiaoting, Mi Bing, Jin Zhiping
Institute of Artificial Intelligence and Blockchain, Guangzhou University, Guangzhou, 510006, China; Pazhou Lab, Guangzhou, 510330, China; Jiangsu Key Laboratory of Media Design and Software Technology, Jiangnan University, Wuxi, China.
School of Computer Science and Cyber Engineering, Guangzhou University, China.
ISA Trans. 2023 Oct;141:59-72. doi: 10.1016/j.isatra.2023.03.034. Epub 2023 Mar 28.
Traditional machine learning approaches often rely on a central server, where raw datasets are collected for training or model updates are aggregated in a centralized way. However, these approaches are vulnerable to many attacks, especially from a malicious server. Recently, a new distributed machine learning paradigm, called Swarm Learning (SL), has been proposed to support decentralized training without a central server. In each training round, one participant node is selected to serve as a temporary server. Thus, participant nodes do not need to share their private datasets to achieve fair and secure model aggregation on a central server. To the best of our knowledge, there is no existing work on the security threats in swarm learning. In this paper, we investigate how to implant backdoor attacks against swarm learning to illustrate its potential security risks. Experimental results confirm the effectiveness of our method, which achieves high attack accuracies in different scenarios. We also study several defense methods to mitigate these backdoor attacks.
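The swarm-learning round described above (local training on private data, then aggregation by a randomly elected temporary server) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the parameter vectors, the `local_update` gradient step, and simple averaging as the aggregation rule are all assumptions made for clarity.

```python
import random

def local_update(params, grad, lr=0.1):
    """One local training step on a node's private data (hypothetical gradient)."""
    return [p - lr * g for p, g in zip(params, grad)]

def aggregate(updates):
    """The temporary server averages the participants' parameters."""
    n = len(updates)
    return [sum(vals) / n for vals in zip(*updates)]

def swarm_round(node_params, node_grads, rng):
    # Each node first trains locally; raw datasets never leave the nodes.
    updated = [local_update(p, g) for p, g in zip(node_params, node_grads)]
    # One participant is elected as the temporary aggregation server
    # for this round only (no permanent central server).
    server = rng.randrange(len(updated))
    global_params = aggregate(updated)
    # Every node receives the aggregated model for the next round.
    return [list(global_params) for _ in updated], server

rng = random.Random(0)
params = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
grads = [[0.0, 0.0]] * 3  # zero gradients: aggregation effect only
new_params, server = swarm_round(params, grads, rng)
```

With zero local gradients, every node ends the round holding the plain average of the three initial parameter vectors, which makes the aggregation step easy to verify; a backdoor attacker in this setting would instead submit poisoned parameters in `updated`, biasing the aggregate.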