School of Computer Science and Engineering, Pusan National University, Busan 609-735, Korea.
Sensors (Basel). 2021 Nov 24;21(23):7806. doi: 10.3390/s21237806.
As the amount of data collected and analyzed by machine learning systems increases, personally identifiable data is also being collected in large quantities. In particular, as deep learning technology, which requires large amounts of training data, spreads across various service fields, the risk of exposing users' sensitive information grows, making user privacy a more pressing problem than ever. As a solution to this data privacy problem, homomorphic encryption, an encryption technology that supports arithmetic operations directly on encrypted data, has in recent years been applied to various fields including finance and health care. But does applying homomorphic encryption to the data actually allow users to use deep learning services while preserving their data privacy? In this paper, we are the first to propose three attack methods that infringe users' data privacy by exploiting possible security vulnerabilities in the process of using homomorphic encryption-based deep learning services. To specify and verify the feasibility of exploiting these vulnerabilities, we propose three attacks: (1) an adversarial attack exploiting the communication link between the client and the trusted party; (2) a reconstruction attack using paired input and output data; and (3) a membership inference attack by a malicious insider. In addition, we describe real-world exploit scenarios for financial and medical services. Our experimental evaluation shows that the adversarial-example and reconstruction attacks pose a practical threat to homomorphic encryption-based deep learning models: the adversarial attack decreased average classification accuracy from 0.927 to 0.043, and the reconstruction attack achieved an average reclassification accuracy of 0.888.
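The adversarial attack summarized above perturbs inputs so that a classifier's predictions flip. As a minimal illustration of the general idea, here is an FGSM-style perturbation on a toy logistic-regression classifier; the weights, input, and step size are hypothetical and chosen only for demonstration, not taken from the paper, which attacks homomorphic encryption-based deep learning models.

```python
# Sketch of a single FGSM-style adversarial perturbation on a toy
# logistic classifier (all values illustrative, not from the paper).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Hard class label (0 or 1) of a logistic classifier."""
    return int(sigmoid(w @ x + b) >= 0.5)

def fgsm_perturb(w, b, x, y_true, eps):
    """One FGSM step: move x in the direction of increasing loss.
    For logistic loss, dL/dx = (sigmoid(w.x + b) - y_true) * w."""
    grad_x = (sigmoid(w @ x + b) - y_true) * w
    return x + eps * np.sign(grad_x)

# Toy classifier and input: correctly classified before the attack.
w = np.array([1.0, -1.0]); b = 0.0
x = np.array([0.2, 0.0]); y_true = 1
clean_label = predict(w, b, x)            # class 1 (correct)

# A small perturbation flips the predicted class.
x_adv = fgsm_perturb(w, b, x, y_true, eps=0.3)
adv_label = predict(w, b, x_adv)          # class 0 (misclassified)
```

The same mechanism, applied at scale, is what drives the accuracy drop from 0.927 to 0.043 reported in the abstract: each perturbation is small, but it is aimed along the loss gradient, the most damaging direction for the model.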