
A Survey and Evaluation of Adversarial Attacks in Object Detection.

Author Information

Nguyen Khoi Nguyen Tiet, Zhang Wenyu, Lu Kangkang, Wu Yu-Huan, Zheng Xingjian, Li Tan Hui, Zhen Liangli

Publication Information

IEEE Trans Neural Netw Learn Syst. 2025 Sep;36(9):15706-15722. doi: 10.1109/TNNLS.2025.3561225.

Abstract

Deep learning models achieve remarkable accuracy in computer vision tasks yet remain vulnerable to adversarial examples-carefully crafted perturbations to input images that can deceive these models into making confident but incorrect predictions. This vulnerability poses significant risks in high-stakes applications such as autonomous vehicles, security surveillance, and safety-critical inspection systems. While the existing literature extensively covers adversarial attacks in image classification, comprehensive analyses of such attacks on object detection systems remain limited. This article presents a novel taxonomic framework for categorizing adversarial attacks specific to object detection architectures, synthesizes existing robustness metrics, and provides a comprehensive empirical evaluation of state-of-the-art attack methodologies on popular object detection models, including both traditional detectors and modern detectors with vision-language pretraining. Through rigorous analysis of open-source attack implementations and their effectiveness across diverse detection architectures, we derive key insights into attack characteristics. Furthermore, we delineate critical research gaps and emerging challenges to guide future investigations in securing object detection systems against adversarial threats. Our findings establish a foundation for developing more robust detection models while highlighting the urgent need for standardized evaluation protocols in this rapidly evolving domain.
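The "carefully crafted perturbations" mentioned above can be illustrated with a toy sketch in the spirit of the fast gradient sign method (FGSM), one of the classic attacks the survey literature builds on. Everything below is hypothetical and deliberately minimal: a linear scoring "model" with hand-picked weights stands in for a detector, and the perturbation budget `eps` is arbitrary. Real attacks on object detectors perturb full images against detection losses, not a single dot product.

```python
def predict(w, x):
    """Toy linear score: positive -> class +1, negative -> class -1."""
    return sum(wi * xi for wi, xi in zip(w, x))

def sign(v):
    """Sign function used to take a fixed-size step per input dimension."""
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm_perturb(w, x, y, eps):
    """Shift x by eps in the direction that increases the loss.

    For score s = w . x and loss = -y * s (label y in {-1, +1}),
    the gradient of the loss w.r.t. x is -y * w, so the adversarial
    step is x + eps * sign(-y * w): a tiny, bounded change per pixel.
    """
    return [xi + eps * sign(-y * wi) for wi, xi in zip(w, x)]

w = [1.0, -2.0, 0.5]        # hypothetical model weights
x = [0.3, -0.1, 0.2]        # clean input, correctly scored positive
y = 1.0                     # true label

clean_score = predict(w, x)                 # 0.6: confident, correct
x_adv = fgsm_perturb(w, x, y, eps=0.3)
adv_score = predict(w, x_adv)               # score flips negative
```

Despite each coordinate of `x_adv` differing from `x` by at most `eps`, the score changes sign, which is exactly the "confident but incorrect prediction" failure mode the abstract describes.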

