Federated Learning Attacks Revisited: A Critical Discussion of Gaps, Assumptions, and Evaluation Setups.

Authors

Wainakh Aidmar, Zimmer Ephraim, Subedi Sandeep, Keim Jens, Grube Tim, Karuppayah Shankar, Sanchez Guinea Alejandro, Mühlhäuser Max

Affiliations

Telecooperation Lab, Technical University of Darmstadt, 64289 Darmstadt, Germany.

National Advanced IPv6 Centre (NAv6), University of Science Malaysia, Penang 11800, Malaysia.

Publication

Sensors (Basel). 2022 Dec 20;23(1):31. doi: 10.3390/s23010031.

DOI: 10.3390/s23010031
PMID: 36616629
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9824092/
Abstract

Deep learning pervades heavy data-driven disciplines in research and development. The Internet of Things and sensor systems, which enable smart environments and services, are settings where deep learning can provide invaluable utility. However, the data in these systems are very often directly or indirectly related to people, which raises privacy concerns. Federated learning (FL) mitigates some of these concerns and empowers deep learning in sensor-driven environments by enabling multiple entities to collaboratively train a machine learning model without sharing their data. Nevertheless, a number of works in the literature propose attacks that can manipulate the model and disclose information about the training data in FL. As a result, there has been a growing belief that FL is highly vulnerable to severe attacks. Although these attacks do indeed highlight security and privacy risks in FL, some of them may not be as effective in production deployment because they are feasible only given special-sometimes impractical-assumptions. In this paper, we investigate this issue by conducting a quantitative analysis of the attacks against FL and their evaluation settings in 48 papers. This analysis is the first of its kind to reveal several research gaps with regard to the types and architectures of target models. Additionally, the quantitative analysis allows us to highlight unrealistic assumptions in some attacks related to the hyper-parameters of the model and data distribution. Furthermore, we identify fallacies in the evaluation of attacks which raise questions about the generalizability of the conclusions. As a remedy, we propose a set of recommendations to promote adequate evaluations.
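The abstract's core premise is that FL lets multiple entities collaboratively train a model without sharing their data: each client trains locally and only model parameters travel to a central server for averaging. A minimal sketch of that loop, using an illustrative one-dimensional linear model y = w * x (all names and values here are hypothetical, not taken from the paper):

```python
# Minimal federated-averaging (FedAvg-style) sketch: clients train on
# private data and share only updated parameters, never raw samples.
import random

def local_update(w, data, lr=0.01):
    # One local epoch of SGD on a client's private (x, y) samples,
    # minimizing the squared error of the model y = w * x.
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def federated_round(global_w, client_datasets):
    # Each client refines the global weight locally; the server
    # averages the returned weights into the next global model.
    updates = [local_update(global_w, d) for d in client_datasets]
    return sum(updates) / len(updates)

random.seed(0)
# Three clients, each holding private noisy samples of y = 2x.
clients = [[(x, 2 * x + random.uniform(-0.1, 0.1)) for x in range(1, 6)]
           for _ in range(3)]

w = 0.0
for _ in range(20):
    w = federated_round(w, clients)
print(w)  # should approach the true slope 2
```

The attacks surveyed in the paper target exactly the values exchanged in this loop: a malicious participant can poison its returned update, and an adversarial server can try to invert shared gradients or weights to reconstruct private training data.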

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb00/9824092/fce6cfc281eb/sensors-23-00031-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb00/9824092/7284a0aa0182/sensors-23-00031-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb00/9824092/68f5a280de4b/sensors-23-00031-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb00/9824092/f1e92a6f2dc3/sensors-23-00031-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb00/9824092/17b7193dcef1/sensors-23-00031-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb00/9824092/1e36248ea67a/sensors-23-00031-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb00/9824092/4128381b5401/sensors-23-00031-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb00/9824092/a2b73c080bcc/sensors-23-00031-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb00/9824092/7eecc2ad8c31/sensors-23-00031-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb00/9824092/95af3bf9b554/sensors-23-00031-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb00/9824092/c886a4799c6f/sensors-23-00031-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb00/9824092/ddf539ecd369/sensors-23-00031-g012.jpg

Similar Articles

1
Federated Learning Attacks Revisited: A Critical Discussion of Gaps, Assumptions, and Evaluation Setups.
Sensors (Basel). 2022 Dec 20;23(1):31. doi: 10.3390/s23010031.
2
A Critical Evaluation of Privacy and Security Threats in Federated Learning.
Sensors (Basel). 2020 Dec 15;20(24):7182. doi: 10.3390/s20247182.
3
Privacy Enhancing and Scalable Federated Learning to Accelerate AI Implementation in Cross-Silo and IoMT Environments.
IEEE J Biomed Health Inform. 2023 Feb;27(2):744-755. doi: 10.1109/JBHI.2022.3185418. Epub 2023 Feb 3.
4
Dynamic Asynchronous Anti Poisoning Federated Deep Learning with Blockchain-Based Reputation-Aware Solutions.
Sensors (Basel). 2022 Jan 17;22(2):684. doi: 10.3390/s22020684.
5
Securing federated learning with blockchain: a systematic literature review.
Artif Intell Rev. 2023;56(5):3951-3985. doi: 10.1007/s10462-022-10271-9. Epub 2022 Sep 16.
6
Analysis of Privacy Preservation Enhancements in Federated Learning Frameworks
7
A Conditional Privacy-Preserving Identity-Authentication Scheme for Federated Learning in the Internet of Vehicles.
Entropy (Basel). 2024 Jul 10;26(7):590. doi: 10.3390/e26070590.
8
Do Gradient Inversion Attacks Make Federated Learning Unsafe?
IEEE Trans Med Imaging. 2023 Jul;42(7):2044-2056. doi: 10.1109/TMI.2023.3239391. Epub 2023 Jun 30.
9
Federated Machine Learning, Privacy-Enhancing Technologies, and Data Protection Laws in Medical Research: Scoping Review.
J Med Internet Res. 2023 Mar 30;25:e41588. doi: 10.2196/41588.
10
Exploring the Relationship Between Privacy and Utility in Mobile Health: Algorithm Development and Validation via Simulations of Federated Learning, Differential Privacy, and External Attacks.
J Med Internet Res. 2023 Apr 20;25:e43664. doi: 10.2196/43664.

References Cited in This Article

1
High performance logistic regression for privacy-preserving genome analysis.
BMC Med Genomics. 2021 Jan 20;14(1):23. doi: 10.1186/s12920-020-00869-9.
2
Federated Learning: A Survey on Enabling Technologies, Protocols, and Applications.
IEEE Access. 2020;8:140699-140725. doi: 10.1109/access.2020.3013541. Epub 2020 Jul 31.
3
LSTM: A Search Space Odyssey.
IEEE Trans Neural Netw Learn Syst. 2017 Oct;28(10):2222-2232. doi: 10.1109/TNNLS.2016.2582924. Epub 2016 Jul 8.