
Privacy-Preserving Image Template Sharing Using Contrastive Learning.

Authors

Rezaeifar Shideh, Voloshynovskiy Slava, Asgari Jirhandeh Meisam, Kinakh Vitaliy

Affiliation

Department of Computer Science, University of Geneva, 1227 Carouge, Switzerland.

Publication

Entropy (Basel). 2022 May 3;24(5):643. doi: 10.3390/e24050643.

Abstract

With the recent development of Machine Learning as a Service (MLaaS), various privacy concerns have been raised. Having access to the user's data, an adversary can design attacks with different objectives, namely reconstruction or attribute inference attacks. In this paper, we propose two different training frameworks for an image classification task that preserve user data privacy against the two aforementioned attacks. In both frameworks, an encoder is trained with a contrastive loss, providing a superior utility-privacy trade-off. In the reconstruction attack scenario, a supervised contrastive loss is employed to provide maximal discrimination for the targeted classification task. The encoded features are further perturbed by an obfuscator module to remove all redundant information. Moreover, the obfuscator module is jointly trained with a classifier to minimize the correlation between the private feature representation and the original data while retaining the model's utility for classification. For the attribute inference attack, we aim to provide a representation of the data that is independent of the sensitive attribute. Therefore, the encoder is trained with supervised and private contrastive losses. Furthermore, an obfuscator module is trained in an adversarial manner to preserve the privacy of sensitive attributes while maintaining the classification performance on the target attribute. The reported results on the CelebA dataset validate the effectiveness of the proposed frameworks.
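The supervised contrastive objective that both frameworks build on pulls embeddings of same-class samples together while pushing other samples apart. The following is a minimal NumPy sketch of the standard SupCon loss, not the authors' implementation; the function name, temperature value, and toy embeddings are illustrative assumptions.

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Supervised contrastive (SupCon) loss over a batch of embeddings.

    features: (N, D) array of embeddings (L2-normalized internally).
    labels:   (N,) integer class labels; same-label pairs are positives.
    """
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / temperature            # temperature-scaled cosine similarities
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    # Log-softmax over all other samples in the batch (self-similarity excluded).
    sim_no_self = np.where(self_mask, -np.inf, sim)
    log_prob = sim_no_self - np.log(np.exp(sim_no_self).sum(axis=1, keepdims=True))
    # Positives: same label, excluding the anchor itself.
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    # Negated mean log-probability of the positives, averaged over anchors.
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor.mean()

# Toy usage: embeddings clustered by class yield a lower loss than mismatched labels.
feats = np.array([[1.0, 0.0], [1.0, 0.1], [0.0, 1.0], [0.1, 1.0]])
loss_aligned = supervised_contrastive_loss(feats, np.array([0, 0, 1, 1]))
loss_shuffled = supervised_contrastive_loss(feats, np.array([0, 1, 0, 1]))
```

In the paper's pipeline this loss trains the encoder for discriminability; the obfuscator then perturbs the resulting features, which this sketch does not model.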


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1c12/9141880/4f7afe13ead4/entropy-24-00643-g001.jpg
