

Decoupling visual and identity features for adversarial palm-vein image attack.

Affiliations

The School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, China.

The Institute of Textiles and Clothing, The Hong Kong Polytechnic University, Hong Kong, China.

Publication Information

Neural Netw. 2024 Dec;180:106693. doi: 10.1016/j.neunet.2024.106693. Epub 2024 Sep 19.

Abstract

Palm-vein biometrics have been widely used for recognition owing to their resistance to theft and forgery. However, with the emergence of adversarial attacks, most existing palm-vein recognition methods are vulnerable to adversarial image attacks, and to the best of our knowledge, no study has specifically focused on palm-vein image attacks. In this paper, we propose an adversarial palm-vein image attack network that generates adversarial palm-vein images highly similar to the original samples but with altered palm identities. Unlike most existing generator-oriented methods that learn image features directly via cascaded convolutional layers, our network first maps palm-vein images into multi-scale, high-dimensional shallow representations, and then employs attention-based dual-path feature learning modules to extensively exploit diverse palm-vein-specific features. We then design visual-consistency and identity-aware loss functions that decouple the visual and identity features used to reconstruct the adversarial palm-vein images. In this way, the visual characteristics of the palm-vein images are largely preserved while the identity information is removed, yielding highly aggressive adversarial palm-vein samples. Extensive white-box and black-box attack experiments on three widely used databases clearly demonstrate the effectiveness of the proposed network.
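To make the loss design in the abstract concrete, the sketch below illustrates one plausible way such a decoupled objective could be written: a visual-consistency term keeps the generated adversarial image close to the original in appearance, while an identity-aware term pushes its recognition embedding away from the original identity. This is a minimal PyTorch-style sketch under our own assumptions; the function names, margin, weighting factors, and loss forms are illustrative and are not taken from the paper.

```python
# Hypothetical sketch of a decoupled visual/identity attack objective.
# `generator` and `recognizer` stand in for any palm-vein image generator
# and identity-embedding network; both are assumptions for illustration.
import torch
import torch.nn.functional as F


def visual_consistency_loss(adv_img: torch.Tensor, orig_img: torch.Tensor) -> torch.Tensor:
    """Penalize visible deviation from the original palm-vein image (L1 used here as a stand-in)."""
    return F.l1_loss(adv_img, orig_img)


def identity_aware_loss(adv_emb: torch.Tensor, orig_emb: torch.Tensor, margin: float = 0.5) -> torch.Tensor:
    """Push the adversarial embedding away from the original identity embedding
    until their cosine similarity falls below an assumed margin."""
    cos_sim = F.cosine_similarity(adv_emb, orig_emb, dim=-1)
    return torch.clamp(cos_sim - margin, min=0.0).mean()


def total_attack_loss(generator, recognizer, orig_img: torch.Tensor,
                      lambda_vis: float = 10.0, lambda_id: float = 1.0) -> torch.Tensor:
    """Combined objective: preserve appearance, remove identity."""
    adv_img = generator(orig_img)          # adversarial palm-vein image
    adv_emb = recognizer(adv_img)          # identity embedding of the adversarial image
    with torch.no_grad():
        orig_emb = recognizer(orig_img)    # fixed embedding of the original identity
    return (lambda_vis * visual_consistency_loss(adv_img, orig_img)
            + lambda_id * identity_aware_loss(adv_emb, orig_emb))
```

Minimizing this combined objective over the generator's parameters would trade off the two terms via the (assumed) weights `lambda_vis` and `lambda_id`, which mirrors, at a high level, the preserve-appearance / remove-identity behavior the abstract describes.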

