Department of Computer Science, Nagoya Institute of Technology, Nagoya 466-8555, Japan.
Faculty of Informatics, Zagazig University, Zagazig 44519, Egypt.
Sensors (Basel). 2020 Dec 15;20(24):7182. doi: 10.3390/s20247182.
With the advent of smart devices, smartphones, and smart everything, the Internet of Things (IoT) has emerged with a profound impact on industry and human life. The IoT consists of millions of clients that exchange massive amounts of critical data, which poses high privacy risks when the data are processed by a centralized cloud server. To address this privacy concern, a new machine learning paradigm has emerged, namely Federated Learning (FL). Specifically, FL allows each client to train a learning model locally and performs global model aggregation at the centralized cloud server, thereby avoiding direct data leakage from the clients. However, despite this efficient distributed training technique, an individual's private information can still be compromised. To this end, in this paper we investigate the privacy and security threats that can harm the whole execution process of FL. Additionally, we provide practical solutions to overcome those attacks and protect the individual's privacy. We also present experimental results that highlight the discussed issues and possible solutions. We expect that this work will open exciting perspectives for future research in FL.
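To make the training process described in the abstract concrete (each client updates a model locally, then the server aggregates the local models), the following minimal Python sketch implements plain federated averaging on synthetic data. The model, data split, and hyperparameters are illustrative assumptions only; this is not the implementation evaluated in the paper.

# Minimal federated averaging sketch: clients train locally, the server averages
# their models weighted by local dataset size. Illustrative example only.
import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=5):
    """One client's local update of a linear model via gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

def federated_averaging(client_datasets, dim, rounds=10):
    """Server loop: broadcast global model, collect local models, average them."""
    global_w = np.zeros(dim)
    for _ in range(rounds):
        local_models, sizes = [], []
        for X, y in client_datasets:          # raw data never leaves the client
            local_models.append(local_train(global_w, X, y))
            sizes.append(len(y))
        global_w = np.average(np.stack(local_models), axis=0,
                              weights=np.array(sizes, dtype=float))
    return global_w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    # Synthetic data partitioned across three clients (hypothetical setup).
    clients = []
    for _ in range(3):
        X = rng.normal(size=(50, 2))
        y = X @ true_w + 0.01 * rng.normal(size=50)
        clients.append((X, y))
    print(federated_averaging(clients, dim=2))

Note that only model parameters are exchanged in this sketch; as the abstract points out, these shared updates can still leak private information, which is the threat surface the paper examines.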