Deep Neural Network Quantization Framework for Effective Defense against Membership Inference Attacks.

Authors

Famili Azadeh, Lao Yingjie

Affiliations

The Holcombe Department of Electrical and Computer Engineering, Clemson University, Clemson, SC 29634, USA.

Publication Information

Sensors (Basel). 2023 Sep 7;23(18):7722. doi: 10.3390/s23187722.

DOI: 10.3390/s23187722
PMID: 37765778
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10538103/
Abstract

Machine learning deployment on edge devices has faced challenges such as computational costs and privacy issues. Membership inference attack (MIA) refers to the attack where the adversary aims to infer whether a data sample belongs to the training set. In other words, user data privacy might be compromised by MIA from a well-trained model. Therefore, it is vital to have defense mechanisms in place to protect training data, especially in privacy-sensitive applications such as healthcare. This paper exploits the implications of quantization on privacy leakage and proposes a novel quantization method that enhances the resistance of a neural network against MIA. Recent studies have shown that model quantization leads to resistance against membership inference attacks. Existing quantization approaches primarily prioritize performance and energy efficiency; we propose a quantization framework with the main objective of boosting the resistance against membership inference attacks. Unlike conventional quantization methods whose primary objectives are compression or increased speed, our proposed quantization aims to provide defense against MIA. We evaluate the effectiveness of our methods on various popular benchmark datasets and model architectures. All popular evaluation metrics, including precision, recall, and F1-score, show improvement when compared to the full bitwidth model. For example, for ResNet on Cifar10, our experimental results show that our algorithm can reduce the attack accuracy of MIA by 14%, the true positive rate by 37%, and F1-score of members by 39% compared to the full bitwidth network. Here, reduction in true positive rate means the attacker will not be able to identify the training dataset members, which is the main goal of the MIA.
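To make the evaluation pipeline described above concrete, the sketch below is a minimal, hypothetical illustration and not the authors' framework: it uniformly quantizes the weights of a toy classifier (a stand-in for ResNet on CIFAR-10) and probes both the full-precision and quantized models with a simple confidence-threshold membership inference attack. PyTorch is assumed, and the helper names (fake_quantize_weights, confidence_attack), the toy model, and the synthetic data are all introduced here purely for illustration.

# A minimal, hypothetical sketch (NOT the paper's method): low-bitwidth weight
# quantization followed by a simple confidence-threshold membership inference
# attack on the full-precision and quantized models.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)


def fake_quantize_weights(net: nn.Module, bits: int = 4) -> nn.Module:
    """Symmetric per-tensor uniform quantization of all parameters to `bits`,
    dequantized back to float so the model can be run as usual."""
    qnet = copy.deepcopy(net)
    qmax = 2 ** (bits - 1) - 1
    with torch.no_grad():
        for p in qnet.parameters():
            scale = p.abs().max() / qmax
            if scale > 0:
                p.copy_(torch.round(p / scale).clamp(-qmax, qmax) * scale)
    return qnet


def confidence_attack(net: nn.Module, members, non_members, threshold=0.9):
    """Flag a sample as a training-set member when the model's top softmax
    confidence exceeds `threshold`; return balanced attack accuracy and TPR."""
    net.eval()
    with torch.no_grad():
        conf_m = torch.softmax(net(members), dim=1).max(dim=1).values
        conf_n = torch.softmax(net(non_members), dim=1).max(dim=1).values
    tpr = (conf_m > threshold).float().mean()   # members correctly flagged
    fpr = (conf_n > threshold).float().mean()   # non-members wrongly flagged
    accuracy = 0.5 * (tpr + (1.0 - fpr))
    return accuracy.item(), tpr.item()


# Toy classifier and synthetic "member" / "non-member" samples (stand-ins only).
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
members = torch.randn(256, 32)
non_members = torch.randn(256, 32)

for name, net in [("full precision", model),
                  ("4-bit quantized", fake_quantize_weights(model, bits=4))]:
    acc, tpr = confidence_attack(net, members, non_members)
    print(f"{name}: attack accuracy={acc:.3f}, true positive rate={tpr:.3f}")

A lower attack accuracy and true positive rate on the quantized model would correspond to the kind of improvement the abstract reports, although this toy setup does not reproduce the paper's numbers.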


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f2cd/10538103/36e547873b63/sensors-23-07722-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f2cd/10538103/f27900dced53/sensors-23-07722-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f2cd/10538103/6bba385e6e4f/sensors-23-07722-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f2cd/10538103/e4545f9aa56d/sensors-23-07722-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f2cd/10538103/925edf594519/sensors-23-07722-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f2cd/10538103/1b1031ae0117/sensors-23-07722-g006.jpg

Similar Articles

1
Deep Neural Network Quantization Framework for Effective Defense against Membership Inference Attacks.
Sensors (Basel). 2023 Sep 7;23(18):7722. doi: 10.3390/s23187722.
2
Differential Privacy Protection Against Membership Inference Attack on Machine Learning for Genomic Data.
Pac Symp Biocomput. 2021;26:26-37.
3
Defense against membership inference attack in graph neural networks through graph perturbation.
Int J Inf Secur. 2023;22(2):497-509. doi: 10.1007/s10207-022-00646-y. Epub 2022 Dec 16.
4
Membership inference attack on differentially private block coordinate descent.
PeerJ Comput Sci. 2023 Oct 5;9:e1616. doi: 10.7717/peerj-cs.1616. eCollection 2023.
5
mDARTS: Searching ML-Based ECG Classifiers Against Membership Inference Attacks.
IEEE J Biomed Health Inform. 2025 Jan;29(1):177-187. doi: 10.1109/JBHI.2024.3481505. Epub 2025 Jan 7.
6
MiDA: Membership inference attacks against domain adaptation.
ISA Trans. 2023 Oct;141:103-112. doi: 10.1016/j.isatra.2023.01.021. Epub 2023 Jan 20.
7
MemberShield: A framework for federated learning with membership privacy.
Neural Netw. 2025 Jan;181:106768. doi: 10.1016/j.neunet.2024.106768. Epub 2024 Oct 1.
8
Mitigating Membership Inference in Deep Survival Analyses with Differential Privacy.
Proc (IEEE Int Conf Healthc Inform). 2023 Jun;2023:81-90. doi: 10.1109/ichi57859.2023.00022. Epub 2023 Dec 11.
9
MBFQuant: A Multiplier-Bitwidth-Fixed, Mixed-Precision Quantization Method for Mobile CNN-Based Applications.
IEEE Trans Image Process. 2023;32:2438-2453. doi: 10.1109/TIP.2023.3268562. Epub 2023 May 1.
10
Tunable Privacy Risk Evaluation of Generative Adversarial Networks.
Stud Health Technol Inform. 2024 Aug 22;316:1233-1237. doi: 10.3233/SHTI240634.

Cited By

1
Diagnostic Accuracy of Deep Learning Models in Predicting Glioma Molecular Markers: A Systematic Review and Meta-Analysis.
Diagnostics (Basel). 2025 Mar 21;15(7):797. doi: 10.3390/diagnostics15070797.

References

1
The language of proteins: NLP, machine learning & protein sequences.
Comput Struct Biotechnol J. 2021 Mar 25;19:1750-1758. doi: 10.1016/j.csbj.2021.03.022. eCollection 2021.
2
An End-to-End Deep Neural Network for Autonomous Driving Designed for Embedded Automotive Platforms.
Sensors (Basel). 2019 May 3;19(9):2064. doi: 10.3390/s19092064.
3
Machine Learning in Medical Imaging.
J Am Coll Radiol. 2018 Mar;15(3 Pt B):512-520. doi: 10.1016/j.jacr.2017.12.028. Epub 2018 Feb 2.