

Deep Neural Network Quantization Framework for Effective Defense against Membership Inference Attacks.

Author Information

Famili Azadeh, Lao Yingjie

Affiliation

The Holcombe Department of Electrical and Computer Engineering, Clemson University, Clemson, SC 29634, USA.

Publication Information

Sensors (Basel). 2023 Sep 7;23(18):7722. doi: 10.3390/s23187722.

Abstract

Deploying machine learning on edge devices faces challenges such as computational cost and privacy. A membership inference attack (MIA) is an attack in which the adversary aims to infer whether a data sample belongs to the training set; in other words, user data privacy can be compromised by mounting an MIA against a well-trained model. It is therefore vital to have defense mechanisms in place that protect the training data, especially in privacy-sensitive applications such as healthcare. This paper examines the implications of quantization for privacy leakage and proposes a novel quantization method that enhances the resistance of a neural network to MIA. Recent studies have shown that model quantization can confer resistance to membership inference attacks, but existing quantization approaches primarily prioritize performance and energy efficiency. Unlike conventional quantization methods, whose main objectives are compression or increased speed, the proposed quantization framework has defense against MIA as its primary objective. We evaluate the effectiveness of our method on popular benchmark datasets and model architectures. All popular evaluation metrics, including precision, recall, and F1-score, show improvement when compared to the full-bitwidth model. For example, for ResNet on CIFAR-10, our experimental results show that our algorithm reduces the attack accuracy of the MIA by 14%, the true positive rate by 37%, and the F1-score of members by 39% compared to the full-bitwidth network. Here, a reduction in true positive rate means that the attacker cannot identify members of the training dataset, which is the main goal of the MIA.
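For readers unfamiliar with the quantities quoted above, the sketch below illustrates (i) generic uniform weight quantization to a chosen bitwidth and (ii) how a simple confidence-threshold membership inference attack is scored with attack accuracy, true positive rate, and member F1-score. This is an illustrative sketch only, not the quantization framework or the attack used in the paper; the confidence distributions, the 0.8 threshold, and the 4-bit setting are hypothetical assumptions.

```python
import numpy as np
from sklearn.metrics import accuracy_score, recall_score, f1_score

def quantize_uniform(w: np.ndarray, bits: int) -> np.ndarray:
    """Textbook symmetric uniform quantization of a weight tensor to `bits` bits.
    This is generic quantization, not the paper's MIA-aware scheme."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    return np.round(w / scale).clip(-qmax, qmax) * scale

# Example: quantize a random weight matrix to 4 bits (hypothetical bitwidth).
w = np.random.default_rng(0).normal(size=(64, 64))
w_q = quantize_uniform(w, bits=4)

# --- Scoring a simple confidence-threshold MIA (illustrative only) ---
rng = np.random.default_rng(1)
# Hypothetical top-class confidences: training-set members tend to receive
# higher confidence than non-members, which this simple attack exploits.
member_conf = rng.beta(8, 2, size=1000)
nonmember_conf = rng.beta(5, 3, size=1000)

confidence = np.concatenate([member_conf, nonmember_conf])
is_member = np.concatenate([np.ones(1000), np.zeros(1000)])  # ground truth

# Attack rule: predict "member" when confidence exceeds a fixed threshold.
pred_member = (confidence > 0.8).astype(int)

print("attack accuracy   :", accuracy_score(is_member, pred_member))
print("true positive rate:", recall_score(is_member, pred_member))  # members correctly flagged
print("member F1-score   :", f1_score(is_member, pred_member))
```

A defense such as the quantization framework described in the abstract aims to push these attack-side metrics down relative to the full-bitwidth model; a lower true positive rate means fewer training-set members are exposed.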


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f2cd/10538103/36e547873b63/sensors-23-07722-g001.jpg
