

Stealthy Vehicle Adversarial Camouflage Texture Generation Based on Neural Style Transfer

Authors

Cai Wei, Di Xingyu, Wang Xin, Gao Weijie, Jia Haoran

Affiliation

The Third Faculty of Xi'an Research Institute of High Technology, Xi'an 710064, China.

Publication

Entropy (Basel). 2024 Oct 24;26(11):903. doi: 10.3390/e26110903.

DOI: 10.3390/e26110903
PMID: 39593848
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11592712/
Abstract

Adversarial attacks that mislead deep neural networks (DNNs) into making incorrect predictions can also be implemented in the physical world. However, most existing adversarial camouflage textures that attack object detection models consider only the effectiveness of the attack and ignore its stealthiness, so the generated textures appear abrupt to human observers. To address this issue, we propose a style transfer module added to an adversarial texture generation framework. By computing the style loss between the texture and a specified style image, the module guides the generated adversarial texture toward good stealthiness, so that it is not easily detected by DNNs or human observers in specific scenes. Experiments show that, in both the digital and physical worlds, the full-coverage vehicle adversarial camouflage texture we create has good stealthiness and can effectively fool advanced DNN object detectors while evading human observers in specific scenes.
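The mechanism the abstract describes, a style loss that pulls the adversarial texture toward a reference style image while an attack loss degrades the detector, follows standard neural style transfer. A minimal NumPy sketch of the usual Gram-matrix style loss is below; the feature-map shapes, `style_weight`, and `total_loss` combination are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (C, H, W) feature map: channel-wise correlations
    that summarize texture/style statistics, normalized by size."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(texture_feats, style_feats):
    """Mean squared difference between Gram matrices, summed over layers."""
    return sum(float(np.mean((gram_matrix(t) - gram_matrix(s)) ** 2))
               for t, s in zip(texture_feats, style_feats))

def total_loss(attack_loss, texture_feats, style_feats, style_weight=1e3):
    """Hypothetical combined objective: detector attack loss plus a
    weighted style term that keeps the texture close to the style image."""
    return attack_loss + style_weight * style_loss(texture_feats, style_feats)
```

In a full pipeline the feature maps would come from a fixed CNN (e.g. VGG layers) applied to the rendered textured vehicle and to the style image, and the texture would be optimized by gradient descent on `total_loss`.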


Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c488/11592712/fdc14b2b55f8/entropy-26-00903-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c488/11592712/31a39c62a813/entropy-26-00903-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c488/11592712/d20231dd2cbb/entropy-26-00903-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c488/11592712/a6fe3a891eeb/entropy-26-00903-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c488/11592712/83c60e95ea59/entropy-26-00903-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c488/11592712/9eea1af10307/entropy-26-00903-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c488/11592712/2fccf397baf9/entropy-26-00903-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c488/11592712/f7731478fe83/entropy-26-00903-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c488/11592712/8913edcf3bbb/entropy-26-00903-g009.jpg

Similar Articles

1. Stealthy Vehicle Adversarial Camouflage Texture Generation Based on Neural Style Transfer.
   Entropy (Basel). 2024 Oct 24;26(11):903. doi: 10.3390/e26110903.
2. Differential evolution based dual adversarial camouflage: Fooling human eyes and object detectors.
   Neural Netw. 2023 Jun;163:256-271. doi: 10.1016/j.neunet.2023.03.041. Epub 2023 Mar 31.
3. Advertising or adversarial? AdvSign: Artistic advertising sign camouflage for target physical attacking to object detector.
   Neural Netw. 2025 Jun;186:107271. doi: 10.1016/j.neunet.2025.107271. Epub 2025 Feb 19.
4. Universal Adversarial Patch Attack for Automatic Checkout Using Perceptual and Attentional Bias.
   IEEE Trans Image Process. 2022;31:598-611. doi: 10.1109/TIP.2021.3127849. Epub 2021 Dec 22.
5. A Local Adversarial Attack with a Maximum Aggregated Region Sparseness Strategy for 3D Objects.
   J Imaging. 2025 Jan 13;11(1):25. doi: 10.3390/jimaging11010025.
6. Extended Spatially Localized Perturbation GAN (eSLP-GAN) for Robust Adversarial Camouflage Patches.
   Sensors (Basel). 2021 Aug 6;21(16):5323. doi: 10.3390/s21165323.
7. Frequency-Tuned Universal Adversarial Attacks on Texture Recognition.
   IEEE Trans Image Process. 2022;31:5856-5868. doi: 10.1109/TIP.2022.3202366. Epub 2022 Sep 8.
8. Adversarial Sticker: A Stealthy Attack Method in the Physical World.
   IEEE Trans Pattern Anal Mach Intell. 2023 Mar;45(3):2711-2725. doi: 10.1109/TPAMI.2022.3176760. Epub 2023 Feb 3.
9. Adversarial Infrared Curves: An attack on infrared pedestrian detectors in the physical world.
   Neural Netw. 2024 Oct;178:106459. doi: 10.1016/j.neunet.2024.106459. Epub 2024 Jun 12.
10. Universal adversarial attacks on deep neural networks for medical image classification.
   BMC Med Imaging. 2021 Jan 7;21(1):9. doi: 10.1186/s12880-020-00530-y.