

MemberShield: A framework for federated learning with membership privacy.

Authors

Ahmed Faisal, Sánchez David, Haddi Zouhair, Domingo-Ferrer Josep

Affiliations

NVISION Systems and Technologies SL, Gran Via Carles III, 124, ent. 1a, 08034, Barcelona, Catalonia, Spain; Universitat Rovira i Virgili, Dept. of Computer Engineering and Mathematics, CYBERCAT-Center for Cybersecurity Research of Catalonia, Av. Països Catalans 26, 43007 Tarragona, Catalonia, Spain.

Universitat Rovira i Virgili, Dept. of Computer Engineering and Mathematics, CYBERCAT-Center for Cybersecurity Research of Catalonia, Av. Països Catalans 26, 43007 Tarragona, Catalonia, Spain.

Publication

Neural Netw. 2025 Jan;181:106768. doi: 10.1016/j.neunet.2024.106768. Epub 2024 Oct 1.

Abstract

Federated Learning (FL) allows multiple data owners to build high-quality deep learning models collaboratively, by sharing only model updates and keeping data on their premises. Even though FL offers privacy-by-design, it is vulnerable to membership inference attacks (MIA), where an adversary tries to determine whether a sample was included in the training data. Existing defenses against MIA cannot offer meaningful privacy protection without significantly hampering the model's utility and causing a non-negligible training overhead. In this paper we analyze the underlying causes of the differences in the model behavior for member and non-member samples, which arise from model overfitting and facilitate MIAs. Accordingly, we propose MemberShield, a generalization-based defense method for MIAs that consists of: (i) one-time preprocessing of each client's training data labels that transforms one-hot encoded labels to soft labels and eventually exploits them in local training, and (ii) early stopping the training when the local model's validation accuracy does not improve on that of the global model for a number of epochs. Extensive empirical evaluations on three widely used datasets and four model architectures demonstrate that MemberShield outperforms state-of-the-art defense methods by delivering substantially better practical privacy protection against all forms of MIAs, while better preserving the target model utility. On top of that, our proposal significantly reduces training time and is straightforward to implement, by just tuning a single hyperparameter.
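The two mechanisms the abstract describes, soft-label preprocessing and early stopping of local training against the global model's validation accuracy, can be illustrated with a short sketch. The snippet below is a minimal illustration under stated assumptions, not the paper's actual implementation: it assumes the soft labels are produced by standard label smoothing governed by a single smoothing hyperparameter (consistent with the abstract's claim that only one hyperparameter needs tuning), and it treats "a number of epochs" as a patience threshold. The names to_soft_labels, should_stop, smoothing, and patience are all illustrative.

```python
import numpy as np

def to_soft_labels(one_hot: np.ndarray, smoothing: float = 0.1) -> np.ndarray:
    """One-time preprocessing of a client's training labels.

    Converts one-hot encoded labels (shape: n_samples x n_classes) into
    soft labels. Assumes standard label smoothing: the true class keeps
    (1 - smoothing) of the probability mass and the remainder is spread
    uniformly over all classes. `smoothing` stands in for the single
    hyperparameter mentioned in the abstract; the paper's exact
    construction may differ.
    """
    n_classes = one_hot.shape[1]
    return one_hot * (1.0 - smoothing) + smoothing / n_classes

def should_stop(local_val_accs: list[float],
                global_val_acc: float,
                patience: int = 3) -> bool:
    """Early-stopping rule sketched from the abstract.

    Stop local training once the local model's validation accuracy has
    failed to improve on the global model's validation accuracy for
    `patience` consecutive epochs (`patience` is an assumed name and
    default, not taken from the paper).
    """
    recent = local_val_accs[-patience:]
    return len(recent) == patience and all(a <= global_val_acc for a in recent)

if __name__ == "__main__":
    y = np.eye(10)[[3, 7]]             # two one-hot labels over 10 classes
    print(to_soft_labels(y, 0.1))      # soft labels used in local training
    print(should_stop([0.81, 0.80, 0.80], global_val_acc=0.82))  # True
```

The design intuition follows the abstract's analysis: softening the labels narrows the confidence gap between member and non-member predictions that MIAs exploit, while the early-stopping rule caps the local epochs that would otherwise drive the overfitting facilitating such attacks.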

