SensitiveNets: Learning Agnostic Representations with Application to Face Images.

Publication

IEEE Trans Pattern Anal Mach Intell. 2021 Jun;43(6):2158-2164. doi: 10.1109/TPAMI.2020.3015420. Epub 2021 May 11.

DOI: 10.1109/TPAMI.2020.3015420
PMID: 32776875
Abstract

This work proposes a novel privacy-preserving neural network feature representation to suppress the sensitive information of a learned space while maintaining the utility of the data. The new international regulation for personal data protection forces data controllers to guarantee privacy and avoid discriminative hazards while managing sensitive data of users. In our approach, privacy and discrimination are related to each other. Instead of existing approaches aimed directly at fairness improvement, the proposed feature representation enforces the privacy of selected attributes. This way fairness is not the objective, but the result of a privacy-preserving learning method. This approach guarantees that sensitive information cannot be exploited by any agent who process the output of the model, ensuring both privacy and equality of opportunity. Our method is based on an adversarial regularizer that introduces a sensitive information removal function in the learning objective. The method is evaluated on three different primary tasks (identity, attractiveness, and smiling) and three publicly available benchmarks. In addition, we present a new face annotation dataset with balanced distribution between genders and ethnic origins. The experiments demonstrate that it is possible to improve the privacy and equality of opportunity while retaining competitive performance independently of the task.
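The abstract's "adversarial regularizer that introduces a sensitive information removal function in the learning objective" can be caricatured with a gradient-reversal-style min-max sketch: the shared encoder descends the task loss while ascending the loss of a probe that tries to recover the sensitive attribute. This is a minimal toy illustration of the general idea, not the paper's actual objective or architecture; all data, dimensions, and variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 8, 4
x = rng.normal(size=(n, d))
t = (x[:, 0] > 0).astype(float)   # primary-task label (hypothetical)
s = (x[:, 1] > 0).astype(float)   # sensitive attribute (hypothetical)

W = rng.normal(size=(d, k)) / np.sqrt(d)  # shared linear encoder
u = np.zeros(k)                           # task head
v = np.zeros(k)                           # adversary head (sensitive probe)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

lam, lr = 1.0, 0.5                # lam weights the removal penalty
for _ in range(500):
    z = x @ W                     # learned representation
    g_t = (sigmoid(z @ u) - t) / n   # binary cross-entropy gradient, task
    g_s = (sigmoid(z @ v) - s) / n   # binary cross-entropy gradient, probe
    dW_t = x.T @ (g_t[:, None] * u[None, :])
    dW_s = x.T @ (g_s[:, None] * v[None, :])
    u -= lr * (z.T @ g_t)         # task head: descend task loss
    v -= lr * (z.T @ g_s)         # probe: descend its own loss
    W -= lr * (dW_t - lam * dW_s) # encoder: descend task, ASCEND probe loss

z = x @ W
task_acc = float(np.mean((sigmoid(z @ u) > 0.5) == (t > 0.5)))
adv_acc = float(np.mean((sigmoid(z @ v) > 0.5) == (s > 0.5)))
```

In this formulation the representation `z` is pushed to stay useful for the task while giving the sensitive-attribute probe as little signal as possible, which mirrors the abstract's claim that fairness emerges as a by-product of privacy-preserving feature learning rather than being optimized directly.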

Similar Articles

1. SensitiveNets: Learning Agnostic Representations with Application to Face Images.
   IEEE Trans Pattern Anal Mach Intell. 2021 Jun;43(6):2158-2164. doi: 10.1109/TPAMI.2020.3015420. Epub 2021 May 11.
2. A Two-Stage Differential Privacy Scheme for Federated Learning Based on Edge Intelligence.
   IEEE J Biomed Health Inform. 2024 Jun;28(6):3349-3360. doi: 10.1109/JBHI.2023.3306425. Epub 2024 Jun 6.
3. Model-Protected Multi-Task Learning.
   IEEE Trans Pattern Anal Mach Intell. 2022 Feb;44(2):1002-1019. doi: 10.1109/TPAMI.2020.3015859. Epub 2022 Jan 7.
4. Preserving fairness and diagnostic accuracy in private large-scale AI models for medical imaging.
   Commun Med (Lond). 2024 Mar 14;4(1):46. doi: 10.1038/s43856-024-00462-6.
5. Preserving differential privacy in deep neural networks with relevance-based adaptive noise imposition.
   Neural Netw. 2020 May;125:131-141. doi: 10.1016/j.neunet.2020.02.001. Epub 2020 Feb 11.
6. Disentangled Representation Learning for Multiple Attributes Preserving Face Deidentification.
   IEEE Trans Neural Netw Learn Syst. 2022 Jan;33(1):244-256. doi: 10.1109/TNNLS.2020.3027617. Epub 2022 Jan 5.
7. Privacy-Preserving Deep Action Recognition: An Adversarial Learning Framework and A New Dataset.
   IEEE Trans Pattern Anal Mach Intell. 2022 Apr;44(4):2126-2139. doi: 10.1109/TPAMI.2020.3026709. Epub 2022 Mar 4.
8. Privacy Preserving Defense For Black Box Classifiers Against On-Line Adversarial Attacks.
   IEEE Trans Pattern Anal Mach Intell. 2022 Dec;44(12):9503-9520. doi: 10.1109/TPAMI.2021.3125931. Epub 2022 Nov 7.
9. Privacy preserving Generative Adversarial Networks to model Electronic Health Records.
   Neural Netw. 2022 Sep;153:339-348. doi: 10.1016/j.neunet.2022.06.022. Epub 2022 Jun 25.
10. Toward Learning Trustworthily from Data Combining Privacy, Fairness, and Explainability: An Application to Face Recognition.
   Entropy (Basel). 2021 Aug 14;23(8):1047. doi: 10.3390/e23081047.

Cited By

1. Generation of Face Privacy-Protected Images Based on the Diffusion Model.
   Entropy (Basel). 2024 May 31;26(6):479. doi: 10.3390/e26060479.
2. Privacy-Preserving Face Recognition Method Based on Randomization and Local Feature Learning.
   J Imaging. 2024 Feb 28;10(3):59. doi: 10.3390/jimaging10030059.
3. Efficient adversarial debiasing with concept activation vector - Medical image case-studies.
   J Biomed Inform. 2024 Jan;149:104548. doi: 10.1016/j.jbi.2023.104548. Epub 2023 Dec 1.
4. Exploring gender biases in ML and AI academic research through systematic literature review.
   Front Artif Intell. 2022 Oct 11;5:976838. doi: 10.3389/frai.2022.976838. eCollection 2022.
5. SELM: Siamese extreme learning machine with application to face biometrics.
   Neural Comput Appl. 2022;34(14):12143-12157. doi: 10.1007/s00521-022-07100-z. Epub 2022 Mar 15.