

MemberShield: A framework for federated learning with membership privacy.

Authors

Faisal Ahmed, David Sánchez, Zouhair Haddi, Josep Domingo-Ferrer

Affiliations

NVISION Systems and Technologies SL, Gran Via Carles III, 124, ent. 1a, 08034, Barcelona, Catalonia, Spain; Universitat Rovira i Virgili, Dept. of Computer Engineering and Mathematics, CYBERCAT-Center for Cybersecurity Research of Catalonia, Av. Països Catalans 26, 43007 Tarragona, Catalonia, Spain.

Universitat Rovira i Virgili, Dept. of Computer Engineering and Mathematics, CYBERCAT-Center for Cybersecurity Research of Catalonia, Av. Països Catalans 26, 43007 Tarragona, Catalonia, Spain.

Publication Information

Neural Netw. 2025 Jan;181:106768. doi: 10.1016/j.neunet.2024.106768. Epub 2024 Oct 1.

DOI: 10.1016/j.neunet.2024.106768
PMID: 39383677
Abstract

Federated Learning (FL) allows multiple data owners to build high-quality deep learning models collaboratively, by sharing only model updates and keeping data on their premises. Even though FL offers privacy-by-design, it is vulnerable to membership inference attacks (MIA), where an adversary tries to determine whether a sample was included in the training data. Existing defenses against MIA cannot offer meaningful privacy protection without significantly hampering the model's utility and causing a non-negligible training overhead. In this paper we analyze the underlying causes of the differences in the model behavior for member and non-member samples, which arise from model overfitting and facilitate MIAs. Accordingly, we propose MemberShield, a generalization-based defense method for MIAs that consists of: (i) one-time preprocessing of each client's training data labels that transforms one-hot encoded labels to soft labels and eventually exploits them in local training, and (ii) early stopping the training when the local model's validation accuracy does not improve on that of the global model for a number of epochs. Extensive empirical evaluations on three widely used datasets and four model architectures demonstrate that MemberShield outperforms state-of-the-art defense methods by delivering substantially better practical privacy protection against all forms of MIAs, while better preserving the target model utility. On top of that, our proposal significantly reduces training time and is straightforward to implement, by just tuning a single hyperparameter.

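The two mechanisms the abstract describes, converting one-hot labels to soft labels before local training and early-stopping local training when validation accuracy fails to improve on the global model's, can be sketched as below. This is a minimal illustration, not the paper's implementation: the smoothing parameter `alpha`, the `patience` window, and the exact stopping criterion are assumptions (the paper states only that a single hyperparameter is tuned).

```python
import numpy as np

def soften_labels(one_hot, alpha=0.1):
    """Turn one-hot labels into soft labels via uniform label smoothing.

    `alpha` is a hypothetical smoothing hyperparameter: each label keeps
    (1 - alpha) of its mass on the true class and spreads alpha uniformly
    over all classes, which reduces the model's overconfidence on members.
    """
    num_classes = one_hot.shape[-1]
    return one_hot * (1.0 - alpha) + alpha / num_classes

def should_stop(local_val_acc_history, global_val_acc, patience=3):
    """Early-stop local training once the local model's validation accuracy
    has failed to exceed the global model's for `patience` consecutive epochs.
    """
    recent = local_val_acc_history[-patience:]
    return len(recent) == patience and all(a <= global_val_acc for a in recent)

# One-hot labels for classes 0 and 2 out of 4 classes, then softened.
labels = np.eye(4)[[0, 2]]
soft = soften_labels(labels, alpha=0.1)
```

Each softened row still sums to 1 (e.g. with `alpha=0.1` and 4 classes, the true class gets 0.925 and the others 0.025 each), so the labels remain valid probability distributions for the usual cross-entropy loss.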

Similar Articles

1. MemberShield: A framework for federated learning with membership privacy.
Neural Netw. 2025 Jan;181:106768. doi: 10.1016/j.neunet.2024.106768. Epub 2024 Oct 1.
2. Exploring the Relationship Between Privacy and Utility in Mobile Health: Algorithm Development and Validation via Simulations of Federated Learning, Differential Privacy, and External Attacks.
J Med Internet Res. 2023 Apr 20;25:e43664. doi: 10.2196/43664.
3. Federated Motor Imagery Classification for Privacy-Preserving Brain-Computer Interfaces.
IEEE Trans Neural Syst Rehabil Eng. 2024;32:3442-3451. doi: 10.1109/TNSRE.2024.3457504. Epub 2024 Sep 18.
4. Robust and Privacy-Preserving Decentralized Deep Federated Learning Training: Focusing on Digital Healthcare Applications.
IEEE/ACM Trans Comput Biol Bioinform. 2024 Jul-Aug;21(4):890-901. doi: 10.1109/TCBB.2023.3243932. Epub 2024 Aug 8.
5. Rethinking the impact of noisy labels in graph classification: A utility and privacy perspective.
Neural Netw. 2025 Feb;182:106919. doi: 10.1016/j.neunet.2024.106919. Epub 2024 Nov 20.
6. Subgraph-level federated graph neural network for privacy-preserving recommendation with meta-learning.
Neural Netw. 2024 Nov;179:106574. doi: 10.1016/j.neunet.2024.106574. Epub 2024 Jul 25.
7. LFighter: Defending against the label-flipping attack in federated learning.
Neural Netw. 2024 Feb;170:111-126. doi: 10.1016/j.neunet.2023.11.019. Epub 2023 Nov 11.
8. Deep Neural Network Quantization Framework for Effective Defense against Membership Inference Attacks.
Sensors (Basel). 2023 Sep 7;23(18):7722. doi: 10.3390/s23187722.
9. Differential Privacy Protection Against Membership Inference Attack on Machine Learning for Genomic Data.
Pac Symp Biocomput. 2021;26:26-37.
10. The 'Sandwich' meta-framework for architecture agnostic deep privacy-preserving transfer learning for non-invasive brainwave decoding.
J Neural Eng. 2025 Jan 23;22(1). doi: 10.1088/1741-2552/ad9957.