IEEE Trans Pattern Anal Mach Intell. 2021 Jun;43(6):2119-2126. doi: 10.1109/TPAMI.2020.3031625. Epub 2021 May 11.
Person re-identification (re-ID) has attracted much attention recently due to its importance in video surveillance. In general, the distance metrics used to match two person images are expected to be robust to various appearance changes. However, we observe that existing distance metrics are extremely vulnerable to adversarial examples, generated simply by adding human-imperceptible perturbations to person images. This dramatically raises the security risk of deploying commercial re-ID systems in video surveillance. Although adversarial examples have been studied extensively for classification, they have rarely been examined in metric-based tasks such as person re-identification. The most likely reason is the natural gap between the training and testing of re-ID networks: without an effective metric, the predictions of a re-ID network cannot be used directly at test time. In this work, we bridge this gap by proposing Adversarial Metric Attack, a methodology parallel to adversarial classification attacks. Comprehensive experiments clearly reveal the adversarial effects in re-ID systems. We also present an early attempt at training a metric-preserving network, thereby defending the metric against adversarial attacks. Finally, by benchmarking various adversarial settings, we hope that our work can facilitate the development of adversarial attack and defense in metric-based applications.
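To make the core idea concrete, the following is a minimal, hypothetical sketch of a metric attack in the FGSM style: instead of perturbing an image to flip a classifier's label, the probe image is perturbed so that the distance between its embedding and a matching gallery embedding grows, under an L-infinity budget. The linear embedding `embed`, the function names, and the parameter `eps` are illustrative stand-ins, not the paper's actual model or algorithm.

```python
import numpy as np

def embed(x, W):
    """Toy linear embedding (a stand-in for a re-ID feature extractor)."""
    return W @ x

def metric_attack_step(x_probe, x_gallery, W, eps=0.05):
    """One FGSM-style step of an adversarial metric attack.

    Perturbs the probe image so that the squared Euclidean distance
    d = ||W x_p - W x_g||^2 between probe and gallery embeddings
    increases, while staying within an L-infinity budget eps and the
    valid pixel range [0, 1]. The analytic gradient of d with respect
    to x_p is 2 W^T (W x_p - W x_g).
    """
    diff = embed(x_probe, W) - embed(x_gallery, W)
    grad = 2.0 * W.T @ diff                      # gradient of squared distance
    x_adv = x_probe + eps * np.sign(grad)        # signed gradient ascent step
    return np.clip(x_adv, 0.0, 1.0)              # keep pixels in valid range
```

In a real re-ID network the gradient would come from backpropagation rather than a closed form, but the principle is the same: the attack objective is a distance between embeddings, not a classification loss, which is exactly the gap between classification attacks and metric attacks that the abstract describes.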