


A Local Adversarial Attack with a Maximum Aggregated Region Sparseness Strategy for 3D Objects.

Authors

Zhao Ling, Lv Xun, Zhu Lili, Luo Binyan, Cao Hang, Cui Jiahao, Li Haifeng, Peng Jian

Affiliations

School of Geosciences and Info-Physics, Central South University, Changsha 410083, China.

Hunan Provincial Institute of Land and Resources Planning, Hunan Key Laboratory of Land Resources Evaluation and Utilization, Changsha 410083, China.

Publication

J Imaging. 2025 Jan 13;11(1):25. doi: 10.3390/jimaging11010025.

DOI: 10.3390/jimaging11010025
PMID: 39852338
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11766271/
Abstract

The increasing reliance on deep neural network-based object detection models in various applications has raised significant security concerns due to their vulnerability to adversarial attacks. In physical 3D environments, existing adversarial attacks that target object detection (3D-AE) face significant challenges. These attacks often require large and dispersed modifications to objects, making them easily noticeable and reducing their effectiveness in real-world scenarios. To maximize the attack effectiveness, large and dispersed attack camouflages are often employed, which makes the camouflages overly conspicuous and reduces their visual stealth. The core issue is how to use minimal and concentrated camouflage to maximize the attack effect. Addressing this, our research focuses on developing more subtle and efficient attack methods that can better evade detection in practical settings. Based on these principles, this paper proposes a local 3D attack method driven by a Maximum Aggregated Region Sparseness (MARS) strategy. In simpler terms, our approach strategically concentrates the attack modifications to specific areas to enhance effectiveness while maintaining stealth. To maximize the aggregation of attack-camouflaged regions, an aggregation regularization term is designed to constrain the mask aggregation matrix based on the face-adjacency relationships. To minimize the attack camouflage regions, a sparseness regularization is designed to make the mask weights tend toward a U-shaped distribution and limit extreme values. Additionally, neural rendering is used to obtain gradient-propagating multi-angle augmented data and suppress the model's detection to locate universal critical decision regions from multiple angles. These technical strategies ensure that the adversarial modifications remain effective across different viewpoints and conditions. We test the attack effectiveness of different region selection strategies. On the CARLA dataset, the average attack efficiency of attacking the YOLOv3 and v5 series networks reaches 1.724, which represents an improvement of 0.986 (134%) compared to baseline methods. These results demonstrate a significant enhancement in attack performance, highlighting the potential risks to real-world object detection systems. The experimental results demonstrate that our attack method achieves both stealth and aggressiveness from different viewpoints. Furthermore, we explore the transferability of the decision regions. The results indicate that our method can be effectively combined with different texture optimization methods, with the average precision decreasing by 0.488 and 0.662 across different networks, which indicates a strong attack effectiveness.
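The abstract names two regularizers: an aggregation term that uses face-adjacency relationships to keep the camouflaged region compact, and a sparseness term that pushes mask weights toward a U-shaped (near-0/near-1) distribution. The paper's exact formulations are not given here; the sketch below is a hypothetical minimal illustration in NumPy, assuming a binary face-adjacency matrix and a per-face mask weight vector `w` in [0, 1]. The function names and the specific penalty forms are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def sparseness_loss(w):
    # Hypothetical stand-in for the paper's sparseness regularization:
    # w * (1 - w) is largest at w = 0.5 and zero at the extremes, so
    # minimizing it drives each mask weight toward 0 or 1, yielding a
    # U-shaped weight distribution with few "in-between" faces.
    return float(np.sum(w * (1.0 - w)))

def aggregation_loss(w, adjacency):
    # Hypothetical stand-in for the aggregation regularization: penalize
    # masked faces whose neighbors are unmasked, so the selected
    # (camouflaged) faces form one compact region. `adjacency` is a
    # symmetric 0/1 face-adjacency matrix of the 3D mesh.
    neighbor_mass = adjacency @ w             # masked mass around each face
    degree = adjacency.sum(axis=1)            # number of neighbors per face
    isolation = w * (degree - neighbor_mass)  # masked faces next to unmasked ones
    return float(np.sum(isolation))
```

For a 4-face chain mesh (faces 0-1-2-3), a compact mask such as [1, 1, 0, 0] incurs a lower aggregation loss than a dispersed mask such as [1, 0, 1, 0], matching the abstract's goal of concentrating modifications in one region.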


Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f98c/11766271/537cc2eb7d5b/jimaging-11-00025-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f98c/11766271/32b0b580298f/jimaging-11-00025-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f98c/11766271/96de19a2013d/jimaging-11-00025-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f98c/11766271/494eab45f643/jimaging-11-00025-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f98c/11766271/7efdc95544a7/jimaging-11-00025-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f98c/11766271/6b7a289cd32f/jimaging-11-00025-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f98c/11766271/14461265ac6d/jimaging-11-00025-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f98c/11766271/29528756611b/jimaging-11-00025-g008.jpg

Similar Articles

1. A Local Adversarial Attack with a Maximum Aggregated Region Sparseness Strategy for 3D Objects.
J Imaging. 2025 Jan 13;11(1):25. doi: 10.3390/jimaging11010025.
2. Advertising or adversarial? AdvSign: Artistic advertising sign camouflage for target physical attacking to object detector.
Neural Netw. 2025 Jun;186:107271. doi: 10.1016/j.neunet.2025.107271. Epub 2025 Feb 19.
3. Physically Realizable Adversarial Creating Attack against Vision-based BEV Space 3D Object Detection.
IEEE Trans Image Process. 2025 Jan 10;PP. doi: 10.1109/TIP.2025.3526056.
4. Stealthy Vehicle Adversarial Camouflage Texture Generation Based on Neural Style Transfer.
Entropy (Basel). 2024 Oct 24;26(11):903. doi: 10.3390/e26110903.
5. Differential evolution based dual adversarial camouflage: Fooling human eyes and object detectors.
Neural Netw. 2023 Jun;163:256-271. doi: 10.1016/j.neunet.2023.03.041. Epub 2023 Mar 31.
6. GLH: From Global to Local Gradient Attacks with High-Frequency Momentum Guidance for Object Detection.
Entropy (Basel). 2023 Mar 6;25(3):461. doi: 10.3390/e25030461.
7. Increasing Neural-Based Pedestrian Detectors' Robustness to Adversarial Patch Attacks Using Anomaly Localization.
J Imaging. 2025 Jan 17;11(1):26. doi: 10.3390/jimaging11010026.
8. Curriculum-Guided Adversarial Learning for Enhanced Robustness in 3D Object Detection.
Sensors (Basel). 2025 Mar 9;25(6):1697. doi: 10.3390/s25061697.
9. Extended Spatially Localized Perturbation GAN (eSLP-GAN) for Robust Adversarial Camouflage Patches.
Sensors (Basel). 2021 Aug 6;21(16):5323. doi: 10.3390/s21165323.
10. FDAA: A feature distribution-aware transferable adversarial attack method.
Neural Netw. 2024 Oct;178:106467. doi: 10.1016/j.neunet.2024.106467. Epub 2024 Jun 14.

References Cited in This Article

1. Physical Adversarial Attack Meets Computer Vision: A Decade Survey.
IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):9797-9817. doi: 10.1109/TPAMI.2024.3430860. Epub 2024 Nov 7.
2. Lifelong Learning With Cycle Memory Networks.
IEEE Trans Neural Netw Learn Syst. 2024 Nov;35(11):16439-16452. doi: 10.1109/TNNLS.2023.3294495. Epub 2024 Oct 29.
3. Augmentation-Free Graph Contrastive Learning of Invariant-Discriminative Representations.
IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):11157-11167. doi: 10.1109/TNNLS.2023.3248871. Epub 2024 Aug 5.
4. Simultaneously Optimizing Perturbations and Positions for Black-Box Adversarial Patch Attacks.
IEEE Trans Pattern Anal Mach Intell. 2023 Jul;45(7):9041-9054. doi: 10.1109/TPAMI.2022.3231886. Epub 2023 Jun 5.
5. Deepfakes Generation and Detection: A Short Survey.
J Imaging. 2023 Jan 13;9(1):18. doi: 10.3390/jimaging9010018.
6. Adversarial Sticker: A Stealthy Attack Method in the Physical World.
IEEE Trans Pattern Anal Mach Intell. 2023 Mar;45(3):2711-2725. doi: 10.1109/TPAMI.2022.3176760. Epub 2023 Feb 3.
7. Geometry-Aware Generation of Adversarial Point Clouds.
IEEE Trans Pattern Anal Mach Intell. 2022 Jun;44(6):2984-2999. doi: 10.1109/TPAMI.2020.3044712. Epub 2022 May 5.