

Increasing Neural-Based Pedestrian Detectors' Robustness to Adversarial Patch Attacks Using Anomaly Localization.

Authors

Ilina Olga, Tereshonok Maxim, Ziyadinov Vadim

Affiliation

Science and Research Department, Moscow Technical University of Communications and Informatics, 111024 Moscow, Russia.

Publication

J Imaging. 2025 Jan 17;11(1):26. doi: 10.3390/jimaging11010026.

DOI: 10.3390/jimaging11010026
PMID: 39852339
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11765776/
Abstract

Object detection in images is a fundamental component of many safety-critical systems, such as autonomous driving, video surveillance, and robotics. Adversarial patch attacks, being easily implemented in the real world, effectively counteract object detection by state-of-the-art neural-based detectors, which poses a serious threat across many application domains. Existing defense methods against patch attacks are insufficiently effective, which underlines the need for new reliable solutions. In this manuscript, we propose a method that increases the robustness of neural network systems to adversarial input images. The proposed method consists of a Deep Convolutional Neural Network to reconstruct a benign image from the adversarial one; a Calculating Maximum Error block to highlight the mismatches between the input and reconstructed images; a Localizing Anomalous Fragments block to extract anomalous regions from histograms of image fragments using the Isolation Forest algorithm; and a Clustering and Processing block to group and evaluate the extracted anomalous regions. The proposed method, based on anomaly localization, demonstrates high resistance to adversarial patch attacks while maintaining high object detection quality. The experimental results show that the proposed method is effective in defending against adversarial patch attacks: using the YOLOv3 algorithm with the proposed defensive method for pedestrian detection on the INRIAPerson dataset under adversarial attack, the mAP50 metric reaches 80.97%, compared to 46.79% without a defensive method. These results demonstrate that the proposed method is promising for improving the security of object detection systems.
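The core idea described in the abstract — compare the input image with a reconstruction, describe fragments of the error map by histograms, and flag outlier fragments with Isolation Forest — can be sketched as follows. This is an illustrative simplification, not the authors' implementation: the reconstruction network is assumed to exist and is stood in for by a precomputed reconstruction, and the tile size, histogram bins, and contamination rate are placeholder choices.

```python
import numpy as np
from sklearn.ensemble import IsolationForest


def fragment_histograms(error_map, frag=16, bins=8):
    """Split a per-pixel error map into frag x frag tiles and
    describe each tile by a histogram of its error values."""
    h, w = error_map.shape
    feats, coords = [], []
    for y in range(0, h - frag + 1, frag):
        for x in range(0, w - frag + 1, frag):
            tile = error_map[y:y + frag, x:x + frag]
            hist, _ = np.histogram(tile, bins=bins, range=(0.0, 1.0), density=True)
            feats.append(hist)
            coords.append((y, x))
    return np.array(feats), coords


def localize_anomalies(image, reconstruction, frag=16, contamination=0.1):
    """Maximum per-pixel error between input and reconstruction, then
    Isolation Forest over tile histograms to flag anomalous tiles."""
    error_map = np.abs(image - reconstruction).max(axis=-1)  # max over channels
    feats, coords = fragment_histograms(error_map, frag=frag)
    forest = IsolationForest(contamination=contamination, random_state=0)
    labels = forest.fit_predict(feats)  # sklearn convention: -1 = outlier
    return [c for c, lab in zip(coords, labels) if lab == -1]


# Toy example: the reconstruction matches the image everywhere except a
# bright square standing in for an adversarial patch the reconstructor removed.
rng = np.random.default_rng(0)
img = rng.uniform(0.4, 0.6, size=(128, 128, 3))
recon = img.copy()
img[32:64, 32:64] = 1.0  # patch-like region
anomalous = localize_anomalies(img, recon)
```

In the toy run, only the tiles covering the synthetic patch produce error histograms that differ from the (identical) clean tiles, so Isolation Forest isolates exactly those tiles. The paper's pipeline additionally clusters and evaluates the flagged regions before masking them, which is omitted here.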


Figures (jimaging-11-00026, g001–g009):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3a4a/11765776/03d159068b56/jimaging-11-00026-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3a4a/11765776/9116c3cde973/jimaging-11-00026-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3a4a/11765776/c50778f5e34f/jimaging-11-00026-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3a4a/11765776/79ef6796c21f/jimaging-11-00026-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3a4a/11765776/ca5bafc825cd/jimaging-11-00026-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3a4a/11765776/15e0fd58c2b9/jimaging-11-00026-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3a4a/11765776/eae491b4b546/jimaging-11-00026-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3a4a/11765776/ca7273b1b1c0/jimaging-11-00026-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3a4a/11765776/81f7aa731dfe/jimaging-11-00026-g009.jpg

Similar Articles

1
Increasing Neural-Based Pedestrian Detectors' Robustness to Adversarial Patch Attacks Using Anomaly Localization.
J Imaging. 2025 Jan 17;11(1):26. doi: 10.3390/jimaging11010026.
2
Defending Person Detection Against Adversarial Patch Attack by Using Universal Defensive Frame.
IEEE Trans Image Process. 2022;31:6976-6990. doi: 10.1109/TIP.2022.3217375. Epub 2022 Nov 14.
3
Improving Adversarial Robustness Against Universal Patch Attacks Through Feature Norm Suppressing.
IEEE Trans Neural Netw Learn Syst. 2025 Jan;36(1):1410-1424. doi: 10.1109/TNNLS.2023.3326871. Epub 2025 Jan 7.
4
Auto encoder-based defense mechanism against popular adversarial attacks in deep learning.
PLoS One. 2024 Oct 21;19(10):e0307363. doi: 10.1371/journal.pone.0307363. eCollection 2024.
5
ROSA: Robust Salient Object Detection Against Adversarial Attacks.
IEEE Trans Cybern. 2020 Nov;50(11):4835-4847. doi: 10.1109/TCYB.2019.2914099. Epub 2019 May 17.
6
A Survey and Evaluation of Adversarial Attacks in Object Detection.
IEEE Trans Neural Netw Learn Syst. 2025 Sep;36(9):15706-15722. doi: 10.1109/TNNLS.2025.3561225.
7
Advertising or adversarial? AdvSign: Artistic advertising sign camouflage for target physical attacking to object detector.
Neural Netw. 2025 Jun;186:107271. doi: 10.1016/j.neunet.2025.107271. Epub 2025 Feb 19.
8
Defending the Defender: Adversarial Learning Based Defending Strategy for Learning Based Security Methods in Cyber-Physical Systems (CPS).
Sensors (Basel). 2023 Jun 9;23(12):5459. doi: 10.3390/s23125459.
9
A Local Adversarial Attack with a Maximum Aggregated Region Sparseness Strategy for 3D Objects.
J Imaging. 2025 Jan 13;11(1):25. doi: 10.3390/jimaging11010025.
10
Salient object detection dataset with adversarial attacks for genetic programming and neural networks.
Data Brief. 2024 Nov 4;57:111043. doi: 10.1016/j.dib.2024.111043. eCollection 2024 Dec.
