Suppr 超能文献




Membership inference attack on differentially private block coordinate descent.

Authors

Riaz Shazia, Ali Saqib, Wang Guojun, Latif Muhammad Ahsan, Iqbal Muhammad Zafar

Affiliations

School of Computing, Macquarie University, Sydney, Australia.

Department of Computer Science, University of Agriculture, Faisalabad, Punjab, Pakistan.

Publication

PeerJ Comput Sci. 2023 Oct 5;9:e1616. doi: 10.7717/peerj-cs.1616. eCollection 2023.

DOI: 10.7717/peerj-cs.1616
PMID: 37869463
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10588713/
Abstract

The extraordinary success of deep learning is made possible by the availability of crowd-sourced, large-scale training datasets. These datasets often contain personal and confidential information and therefore carry great potential for misuse, raising privacy concerns. Consequently, privacy-preserving deep learning has become a primary research interest. One prominent approach to preventing the leakage of sensitive information about the training data is to apply differential privacy during training, which aims to preserve the privacy of deep learning models. Although such models are claimed to safeguard against privacy attacks targeting sensitive information, little work in the literature practically evaluates this capability by running a sophisticated attack model against them. Recently, DP-BCD was proposed as an alternative to the state-of-the-art DP-SGD for preserving the privacy of deep learning models, offering a low privacy cost, fast convergence, and highly accurate predictions. To check its practical capability, in this article we analytically evaluate the impact of a sophisticated privacy attack, the membership inference attack, against it in both black-box and white-box settings. More precisely, we inspect how much information about a differentially private deep model's training data can be inferred. We evaluate our experiments on benchmark datasets using AUC, attacker advantage, precision, recall, and F1-score as performance metrics. The experimental results show that DP-BCD keeps its promise to preserve privacy against strong adversaries while providing acceptable model utility compared to state-of-the-art techniques.
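The metrics named in the abstract (AUC and attacker advantage, i.e., the maximum gap between true-positive and false-positive rates) can be illustrated with a simple threshold-based membership inference attack that scores examples by their loss. This is a hedged sketch, not the paper's attack model: the loss-based scoring rule, the toy data, and all function names are assumptions introduced here for illustration.

```python
import numpy as np

def membership_scores(member_losses, nonmember_losses):
    """Score each example by negative loss: lower loss hints 'member'."""
    scores = np.concatenate([-member_losses, -nonmember_losses])
    labels = np.concatenate([np.ones(len(member_losses)),
                             np.zeros(len(nonmember_losses))])
    return scores, labels

def auc_and_advantage(scores, labels):
    """ROC AUC and attacker advantage max(TPR - FPR), computed by
    sweeping a decision threshold over the attack scores."""
    order = np.argsort(-scores)                 # sort scores descending
    labels = labels[order]
    P = labels.sum()
    N = len(labels) - P
    tpr = np.concatenate([[0.0], np.cumsum(labels) / P])
    fpr = np.concatenate([[0.0], np.cumsum(1 - labels) / N])
    # Trapezoidal area under the ROC curve.
    auc = float(np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1])) / 2)
    advantage = float(np.max(tpr - fpr))
    return auc, advantage

# Toy data: training members tend to get lower loss than non-members.
rng = np.random.default_rng(0)
member_losses = rng.normal(0.5, 0.2, 1000)
nonmember_losses = rng.normal(1.0, 0.3, 1000)
scores, labels = membership_scores(member_losses, nonmember_losses)
auc, adv = auc_and_advantage(scores, labels)
```

An effective defense would push the two loss distributions together, driving the AUC toward 0.5 and the advantage toward 0.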

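For intuition on the defense side, the following is a minimal sketch of a differentially private block-wise update, assuming the usual clip-then-add-Gaussian-noise recipe. It is not the paper's DP-BCD algorithm; the block layout, gradients, and hyperparameters are all hypothetical.

```python
import numpy as np

def privatize_update(update, clip_norm, noise_multiplier, rng):
    """Clip the update to L2 norm <= clip_norm, then add Gaussian
    noise scaled to that bound (the Gaussian mechanism)."""
    scale = min(1.0, clip_norm / np.linalg.norm(update))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return update * scale + noise

# One illustrative sweep: update each parameter block in turn while
# the others stay fixed, privatizing every block update.
rng = np.random.default_rng(42)
blocks = [np.zeros(4), np.zeros(3)]                  # hypothetical blocks
grads = [np.array([3.0, 0.0, 0.0, 0.0]),             # hypothetical gradients
         np.array([0.0, 2.0, 0.0])]
lr, clip_norm, sigma = 0.1, 1.0, 0.5
for i, (w, g) in enumerate(zip(blocks, grads)):
    blocks[i] = w - lr * privatize_update(g, clip_norm, sigma, rng)
```

Clipping bounds each example's influence on a block update, so the added noise can be calibrated to a fixed sensitivity; this is the standard intuition behind gradient-perturbation DP training.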

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2488/10588713/37c70c884778/peerj-cs-09-1616-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2488/10588713/60b173111e67/peerj-cs-09-1616-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2488/10588713/74f8212572ab/peerj-cs-09-1616-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2488/10588713/289ff8f3bbce/peerj-cs-09-1616-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2488/10588713/ca73e1203836/peerj-cs-09-1616-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2488/10588713/026d2dec3bd7/peerj-cs-09-1616-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2488/10588713/67e46c41a84f/peerj-cs-09-1616-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2488/10588713/e439c3dbf85f/peerj-cs-09-1616-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2488/10588713/25e5b55684d9/peerj-cs-09-1616-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2488/10588713/38168b5f041d/peerj-cs-09-1616-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2488/10588713/99bc4ec196c5/peerj-cs-09-1616-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2488/10588713/108edd2b39a0/peerj-cs-09-1616-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2488/10588713/5474594412d3/peerj-cs-09-1616-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2488/10588713/928846963377/peerj-cs-09-1616-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2488/10588713/1c7b4e736cf5/peerj-cs-09-1616-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2488/10588713/ca16cf3621c9/peerj-cs-09-1616-g016.jpg

Similar Articles

1
Membership inference attack on differentially private block coordinate descent.
PeerJ Comput Sci. 2023 Oct 5;9:e1616. doi: 10.7717/peerj-cs.1616. eCollection 2023.
2
Exploring the Relationship Between Privacy and Utility in Mobile Health: Algorithm Development and Validation via Simulations of Federated Learning, Differential Privacy, and External Attacks.
J Med Internet Res. 2023 Apr 20;25:e43664. doi: 10.2196/43664.
3
Inference attacks against differentially private query results from genomic datasets including dependent tuples.
Bioinformatics. 2020 Jul 1;36(Suppl_1):i136-i145. doi: 10.1093/bioinformatics/btaa475.
4
Preserving differential privacy in deep neural networks with relevance-based adaptive noise imposition.
Neural Netw. 2020 May;125:131-141. doi: 10.1016/j.neunet.2020.02.001. Epub 2020 Feb 11.
5
Deep Neural Network Quantization Framework for Effective Defense against Membership Inference Attacks.
Sensors (Basel). 2023 Sep 7;23(18):7722. doi: 10.3390/s23187722.
6
Defense against membership inference attack in graph neural networks through graph perturbation.
Int J Inf Secur. 2023;22(2):497-509. doi: 10.1007/s10207-022-00646-y. Epub 2022 Dec 16.
7
A(DP)²SGD: Asynchronous Decentralized Parallel Stochastic Gradient Descent With Differential Privacy.
IEEE Trans Pattern Anal Mach Intell. 2022 Nov;44(11):8036-8047. doi: 10.1109/TPAMI.2021.3107796. Epub 2022 Oct 4.
8
Mitigating Membership Inference in Deep Survival Analyses with Differential Privacy.
Proc (IEEE Int Conf Healthc Inform). 2023 Jun;2023:81-90. doi: 10.1109/ichi57859.2023.00022. Epub 2023 Dec 11.
9
Differentially Private Graph Neural Networks for Whole-Graph Classification.
IEEE Trans Pattern Anal Mach Intell. 2023 Jun;45(6):7308-7318. doi: 10.1109/TPAMI.2022.3228315. Epub 2023 May 5.
10
Quantum machine learning with differential privacy.
Sci Rep. 2023 Feb 11;13(1):2453. doi: 10.1038/s41598-022-24082-z.

References Cited in This Article

1
A novel approach for Arabic business email classification based on deep learning machines.
PeerJ Comput Sci. 2023 Jan 25;9:e1221. doi: 10.7717/peerj-cs.1221. eCollection 2023.
2
Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing.
Proc USENIX Secur Symp. 2014 Aug;2014:17-32.
3
Deep learning.
Nature. 2015 May 28;521(7553):436-44. doi: 10.1038/nature14539.
4
Resolving individuals contributing trace amounts of DNA to highly complex mixtures using high-density SNP genotyping microarrays.
PLoS Genet. 2008 Aug 29;4(8):e1000167. doi: 10.1371/journal.pgen.1000167.