Kim Hyeong-Geon, Shin Jinmyeong, Choi Yoon-Ho
School of Computer Science and Engineering, Pusan National University, Busan 46241, Republic of Korea.
Sensors (Basel). 2024 May 16;24(10):3166. doi: 10.3390/s24103166.
Differential privacy has emerged as a practical technique for privacy-preserving deep learning. However, recent studies on privacy attacks have demonstrated vulnerabilities in existing differential privacy implementations for deep models. While encryption-based methods offer robust security, their computational overheads are often prohibitive. To address these challenges, we propose a novel differential privacy-based image generation method. Our approach employs two distinct noise types: one makes the image unrecognizable to humans, preserving privacy during transmission, while the other retains the features essential for machine learning analysis. This allows the deep learning service to provide accurate results without compromising data privacy. We demonstrate the feasibility of our method on the CIFAR-100 dataset, which offers realistic complexity for evaluation.
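The two-noise idea in the abstract can be illustrated with a minimal sketch. The snippet below is not the authors' actual method; it simply applies the standard Gaussian mechanism at two different privacy budgets (`epsilon`), assuming images scaled to [0, 1] and a hypothetical sensitivity of 1: a small `epsilon` yields heavy noise that obscures the image for humans, while a larger `epsilon` yields mild noise that preserves more structure for a downstream model.

```python
import numpy as np

def gaussian_mechanism(image, epsilon, delta, sensitivity=1.0, rng=None):
    """Perturb a [0, 1]-scaled image with Gaussian noise whose scale is
    calibrated to (epsilon, delta)-differential privacy.

    Illustrative only: sensitivity=1.0 is an assumption, not a value
    taken from the paper."""
    rng = np.random.default_rng(rng)
    # Classic Gaussian-mechanism calibration: sigma grows as epsilon shrinks,
    # so a smaller privacy budget means stronger noise.
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

# A 32x32x3 CIFAR-100-shaped stand-in image.
img = np.random.default_rng(0).random((32, 32, 3))

# Strong noise (small epsilon): image becomes unrecognizable to humans.
obscured = gaussian_mechanism(img, epsilon=0.5, delta=1e-5, rng=1)

# Mild noise (large epsilon): more structure survives for model inference.
mild = gaussian_mechanism(img, epsilon=8.0, delta=1e-5, rng=1)
```

Both outputs stay in the valid pixel range after clipping; only the perturbation magnitude differs, which is the trade-off the paper's two noise types are designed around.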