Adversarial Examples Generation for Deep Product Quantization Networks on Image Retrieval

Author Information

Chen Bin, Feng Yan, Dai Tao, Bai Jiawang, Jiang Yong, Xia Shu-Tao, Wang Xuan

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2023 Feb;45(2):1388-1404. doi: 10.1109/TPAMI.2022.3165024. Epub 2023 Jan 6.

Abstract

Deep product quantization networks (DPQNs) have been successfully applied to image retrieval tasks, thanks to their powerful feature extraction ability and their efficiency in encoding high-dimensional visual features. Recent studies show that deep neural networks (DNNs) are vulnerable to inputs with small, maliciously designed perturbations (a.k.a. adversarial examples) in classification. However, little effort has been devoted to investigating how adversarial examples affect DPQNs, which poses a potential safety hazard when DPQNs are deployed in a commercial search engine. To this end, we propose an adversarial example generation framework that produces adversarial query images for DPQN-based retrieval systems. Unlike adversarial generation for the classic image classification task, which relies heavily on ground-truth labels, we instead perturb the probability distribution of centroid assignments for a clean query, which induces effective non-targeted attacks on DPQNs in both white-box and black-box settings. Moreover, we extend the non-targeted attack to a targeted attack via a novel sample space averaging scheme ([Formula: see text]AS), for which we also obtain a theoretical guarantee. Extensive experiments show that our methods can create adversarial examples that successfully mislead the target DPQNs, and that both methods significantly degrade retrieval performance under a wide variety of experimental settings. The source code is available at https://github.com/Kira0096/PQAG.

