
Adversarial Patch Attacks on Deep-Learning-Based Face Recognition Systems Using Generative Adversarial Networks.

Affiliations

College of Artificial Intelligence, National Yang Ming Chiao Tung University, Tainan 71150, Taiwan.

Computer Science and Information Engineering Department, National Chung Cheng University, Chiayi 62102, Taiwan.

Publication

Sensors (Basel). 2023 Jan 11;23(2):853. doi: 10.3390/s23020853.

Abstract

Deep learning technology has developed rapidly in recent years and has been successfully applied in many fields, including face recognition. Face recognition is used in many scenarios nowadays, including security control systems, access control management, health and safety management, employee attendance monitoring, automatic border control, and face scan payment. However, deep learning models are vulnerable to adversarial attacks conducted by perturbing probe images to generate adversarial examples, or using adversarial patches to generate well-designed perturbations in specific regions of the image. Most previous studies on adversarial attacks assume that the attacker hacks into the system and knows the architecture and parameters behind the deep learning model. In other words, the attacked model is a white box. However, this scenario is unrepresentative of most real-world adversarial attacks. Consequently, the present study assumes the face recognition system to be a black box, over which the attacker has no control. A Generative Adversarial Network method is proposed for generating adversarial patches to carry out dodging and impersonation attacks on the targeted face recognition system. The experimental results show that the proposed method yields a higher attack success rate than previous works.
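The black-box attack setting described in the abstract can be illustrated with a minimal sketch. This is not the paper's method: here a simple random search stands in for the proposed GAN patch generator, and `black_box_similarity` is a hypothetical stand-in for the targeted face recognition system, which the attacker can only query for a score (no access to architecture, parameters, or gradients). A dodging attack then means searching for a patch that drives the similarity between the patched probe image and the enrolled gallery image of the same identity as low as possible.

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Overlay an adversarial patch onto a face image at (top, left)."""
    patched = image.copy()
    h, w = patch.shape[:2]
    patched[top:top + h, left:left + w] = patch
    return patched

def black_box_similarity(probe, gallery):
    """Hypothetical stand-in for the black-box face recognition system:
    cosine similarity between flattened images. A real system would return
    an embedding distance, but its internals are unknown to the attacker."""
    a = probe.astype(np.float64).ravel()
    b = gallery.astype(np.float64).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Dodging-attack loop sketch: random search proposes candidate patches
# (the paper instead trains a GAN to generate them); only the model's
# score is used, consistent with the black-box assumption.
rng = np.random.default_rng(0)
probe = rng.random((112, 112, 3))   # probe face image, values in [0, 1]
gallery = probe.copy()              # enrolled image of the same identity
best_patch = rng.random((30, 30, 3))
best_score = black_box_similarity(apply_patch(probe, best_patch, 40, 40), gallery)

for _ in range(50):
    candidate = np.clip(best_patch + 0.1 * rng.standard_normal(best_patch.shape), 0, 1)
    score = black_box_similarity(apply_patch(probe, candidate, 40, 40), gallery)
    if score < best_score:          # dodging: minimize similarity to own identity
        best_patch, best_score = candidate, score
```

An impersonation attack is the mirror image of this loop: the candidate patch is scored against a *different* target identity's gallery image, and the search maximizes rather than minimizes the similarity.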


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83ed/9863200/5c451d6b6820/sensors-23-00853-g001.jpg
