

Survey: Leakage and Privacy at Inference Time.

Publication

IEEE Trans Pattern Anal Mach Intell. 2023 Jul;45(7):9090-9108. doi: 10.1109/TPAMI.2022.3229593. Epub 2023 Jun 5.

DOI:10.1109/TPAMI.2022.3229593
PMID:37015684
Abstract

Leakage of data from publicly available Machine Learning (ML) models is an area of growing significance since commercial and government applications of ML can draw on multiple sources of data, potentially including users' and clients' sensitive data. We provide a comprehensive survey of contemporary advances on several fronts, covering involuntary data leakage which is natural to ML models, potential malicious leakage which is caused by privacy attacks, and currently available defence mechanisms. We focus on inference-time leakage, as the most likely scenario for publicly available models. We first discuss what leakage is in the context of different data, tasks, and model architectures. We then propose a taxonomy across involuntary and malicious leakage, followed by description of currently available defences, assessment metrics, and applications. We conclude with outstanding challenges and open questions, outlining some promising directions for future research.
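One of the inference-time leakage channels the survey covers is the membership inference attack, in which an adversary observing only a model's output confidences decides whether a given record was in the training set. A minimal sketch of the classic confidence-thresholding variant is below; the synthetic dataset, random-forest target model, and threshold value are illustrative assumptions, not the survey's own experimental setup.

```python
# Confidence-thresholding membership inference: an overfit model is
# systematically more confident on its training points, so comparing
# confidences against a threshold leaks membership.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
members = (X[:1000], y[:1000])        # records the target model trains on
non_members = (X[1000:], y[1000:])    # records it never sees

# Target model trained only on the "member" half.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(*members)

# The attacker observes only predicted confidences at inference time.
conf_in = model.predict_proba(members[0]).max(axis=1)
conf_out = model.predict_proba(non_members[0]).max(axis=1)

# Guess "member" whenever confidence exceeds a threshold.
threshold = 0.8
tpr = (conf_in > threshold).mean()   # members correctly flagged
fpr = (conf_out > threshold).mean()  # non-members wrongly flagged
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}")
```

A TPR well above the FPR indicates the model leaks membership; defences surveyed in the paper (e.g. differential privacy, regularization) aim to close exactly this confidence gap.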


Similar Articles

1. Survey: Leakage and Privacy at Inference Time.
   IEEE Trans Pattern Anal Mach Intell. 2023 Jul;45(7):9090-9108. doi: 10.1109/TPAMI.2022.3229593. Epub 2023 Jun 5.
2. Federated Learning in Edge Computing: A Systematic Survey.
   Sensors (Basel). 2022 Jan 7;22(2):450. doi: 10.3390/s22020450.
3. mDARTS: Searching ML-Based ECG Classifiers Against Membership Inference Attacks.
   IEEE J Biomed Health Inform. 2025 Jan;29(1):177-187. doi: 10.1109/JBHI.2024.3481505. Epub 2025 Jan 7.
4. Privacy-preserving Speech-based Depression Diagnosis via Federated Learning.
   Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:1371-1374. doi: 10.1109/EMBC48229.2022.9871861.
5. Understanding Deep Gradient Leakage via Inversion Influence Functions.
   Adv Neural Inf Process Syst. 2023 Dec;36:3921-3944.
6. Is Homomorphic Encryption-Based Deep Learning Secure Enough?
   Sensors (Basel). 2021 Nov 24;21(23):7806. doi: 10.3390/s21237806.
7. A Game-Theoretic Framework to Preserve Location Information Privacy in Location-based Service Applications.
   Sensors (Basel). 2019 Apr 1;19(7):1581. doi: 10.3390/s19071581.
8. Robot location privacy protection based on Q-learning particle swarm optimization algorithm in mobile crowdsensing.
   Front Neurorobot. 2022 Sep 30;16:981390. doi: 10.3389/fnbot.2022.981390. eCollection 2022.
9. Homomorphic Encryption-Based Federated Privacy Preservation for Deep Active Learning.
   Entropy (Basel). 2022 Oct 27;24(11):1545. doi: 10.3390/e24111545.
10. Federated Learning Attacks Revisited: A Critical Discussion of Gaps, Assumptions, and Evaluation Setups.
   Sensors (Basel). 2022 Dec 20;23(1):31. doi: 10.3390/s23010031.

Cited By

1. Adherence of Studies on Large Language Models for Medical Applications Published in Leading Medical Journals According to the MI-CLEAR-LLM Checklist.
   Korean J Radiol. 2025 Apr;26(4):304-312. doi: 10.3348/kjr.2024.1161. Epub 2025 Jan 23.
2. Impact of Large Language Models on Medical Education and Teaching Adaptations.
   JMIR Med Inform. 2024 Jul 25;12:e55933. doi: 10.2196/55933.
3. Ethical Considerations and Fundamental Principles of Large Language Models in Medical Education: Viewpoint.
   J Med Internet Res. 2024 Aug 1;26:e60083. doi: 10.2196/60083.
4. Machine learning models in trusted research environments - understanding operational risks.
   Int J Popul Data Sci. 2023 Dec 14;8(1):2165. doi: 10.23889/ijpds.v8i1.2165. eCollection 2023.
5. Neural networks memorise personal information from one sample.
   Sci Rep. 2023 Dec 4;13(1):21366. doi: 10.1038/s41598-023-48034-3.
6. Disclosure control of machine learning models from trusted research environments (TRE): New challenges and opportunities.
   Heliyon. 2023 Apr 3;9(4):e15143. doi: 10.1016/j.heliyon.2023.e15143. eCollection 2023 Apr.
7. Privacy-Aware Early Detection of COVID-19 Through Adversarial Training.
   IEEE J Biomed Health Inform. 2023 Mar;27(3):1249-1258. doi: 10.1109/JBHI.2022.3230663. Epub 2023 Mar 7.