
Backdoor Attack on Deep Neural Networks Triggered by Fault Injection Attack on Image Sensor Interface.

Affiliations

Graduate School of Science and Engineering, Ritsumeikan University, 1-1-1 Noji-higashi, Kusatsu 525-8577, Shiga, Japan.

Department of Science and Engineering, Ritsumeikan University, 1-1-1 Noji-higashi, Kusatsu 525-8577, Shiga, Japan.

Publication Information

Sensors (Basel). 2023 May 14;23(10):4742. doi: 10.3390/s23104742.

DOI: 10.3390/s23104742
PMID: 37430657
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10220730/
Abstract

A backdoor attack is a type of attack that induces deep neural network (DNN) misclassification. An adversary who aims to trigger the backdoor inputs an image containing a specific pattern (the adversarial mark) into the DNN model (the backdoor model). Conventionally, the adversarial mark is created on a physical object and enters the image when a photo is captured. With this method, the success of the backdoor attack is unstable because the mark's size and position change with the shooting environment. We have previously proposed creating the adversarial mark by means of a fault injection attack on the mobile industry processor interface (MIPI), the image sensor interface. Here, we propose an image-tampering model that simulates the adversarial mark pattern produced by the actual fault injection. The backdoor model was then trained on poison-data images created with this simulation model. We conducted a backdoor attack experiment using a backdoor model trained on a dataset containing 5% poison data. The clean-data accuracy in normal operation was 91%; nevertheless, the attack success rate with fault injection was 83%.
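The training-time side of this attack is standard data poisoning, sketched below under stated assumptions. This is not the authors' code: the green-stripe pattern in `add_adversarial_mark` is a hypothetical stand-in for the row corruption a real MIPI fault injection would produce, and the 5% poison rate follows the experiment described in the abstract.

```python
# Minimal data-poisoning sketch of the backdoor setup described above.
# Assumptions (not from the paper): images are uint8 arrays of shape
# (N, H, W, 3), and the adversarial mark is approximated by a fixed
# stripe of saturated rows; a real MIPI fault injection would instead
# corrupt pixel rows on the sensor interface.
import numpy as np

def add_adversarial_mark(image: np.ndarray, rows: slice = slice(8, 12)) -> np.ndarray:
    """Stamp a hypothetical stripe mark onto one (H, W, 3) uint8 image."""
    marked = image.copy()
    marked[rows, :, :] = (0, 255, 0)  # illustrative pattern, not the paper's
    return marked

def poison_dataset(images: np.ndarray, labels: np.ndarray, target_label: int,
                   poison_rate: float = 0.05, seed: int = 0):
    """Mark a `poison_rate` fraction of the training images and relabel
    them to `target_label` (5% poison data, as in the experiment)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(len(images) * poison_rate), replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = add_adversarial_mark(images[i])
        labels[i] = target_label
    return images, labels

def attack_success_rate(predict, test_images: np.ndarray, target_label: int) -> float:
    """Fraction of marked test images classified as the target class.
    `predict` is any callable mapping an image batch to predicted labels."""
    marked = np.stack([add_adversarial_mark(x) for x in test_images])
    return float(np.mean(predict(marked) == target_label))
```

At evaluation time, clean-data accuracy is measured on unmarked test images, while the attack success rate is the fraction of marked test images assigned to the attacker's target class (91% and 83%, respectively, in the paper's experiment).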

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4723/10220730/04a5bd444409/sensors-23-04742-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4723/10220730/787ff2dac9af/sensors-23-04742-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4723/10220730/3ef0a28aee77/sensors-23-04742-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4723/10220730/7e4ae464da6a/sensors-23-04742-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4723/10220730/c2867ad38784/sensors-23-04742-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4723/10220730/4fe3b7d99a5b/sensors-23-04742-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4723/10220730/67a5e5b27289/sensors-23-04742-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4723/10220730/c8be66655fb1/sensors-23-04742-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4723/10220730/3dbde1f9841b/sensors-23-04742-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4723/10220730/21a41ae07ba2/sensors-23-04742-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4723/10220730/465d369ceaf4/sensors-23-04742-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4723/10220730/086480c064f8/sensors-23-04742-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4723/10220730/4ac60991760b/sensors-23-04742-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4723/10220730/0b9ebf942db6/sensors-23-04742-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4723/10220730/2fd4507eb349/sensors-23-04742-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4723/10220730/771cb19aa6a4/sensors-23-04742-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4723/10220730/22da19219dd6/sensors-23-04742-g017.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4723/10220730/bc4201a78cbd/sensors-23-04742-g018.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4723/10220730/9fbb2b817267/sensors-23-04742-g019.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4723/10220730/89ec45e4a4af/sensors-23-04742-g020.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4723/10220730/9a960595306e/sensors-23-04742-g021.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4723/10220730/d5a20ddb52bc/sensors-23-04742-g022.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4723/10220730/3ca55063b3b8/sensors-23-04742-g023.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4723/10220730/123bb39230e9/sensors-23-04742-g024.jpg

Similar Articles

1. Backdoor Attack on Deep Neural Networks Triggered by Fault Injection Attack on Image Sensor Interface.
Sensors (Basel). 2023 May 14;23(10):4742. doi: 10.3390/s23104742.
2. Poison Ink: Robust and Invisible Backdoor Attack.
IEEE Trans Image Process. 2022;31:5691-5705. doi: 10.1109/TIP.2022.3201472. Epub 2022 Sep 2.
3. Detection of Backdoors in Trained Classifiers Without Access to the Training Set.
IEEE Trans Neural Netw Learn Syst. 2022 Mar;33(3):1177-1191. doi: 10.1109/TNNLS.2020.3041202. Epub 2022 Feb 28.
4. Backdoor attack and defense in federated generative adversarial network-based medical image synthesis.
Med Image Anal. 2023 Dec;90:102965. doi: 10.1016/j.media.2023.102965. Epub 2023 Sep 22.
5. Towards Unified Robustness Against Both Backdoor and Adversarial Attacks.
IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):7589-7605. doi: 10.1109/TPAMI.2024.3392760. Epub 2024 Nov 6.
6. Backdoor Attack against Face Sketch Synthesis.
Entropy (Basel). 2023 Jun 25;25(7):974. doi: 10.3390/e25070974.
7. Exploiting Missing Value Patterns for a Backdoor Attack on Machine Learning Models of Electronic Health Records: Development and Validation Study.
JMIR Med Inform. 2022 Aug 19;10(8):e38440. doi: 10.2196/38440.
8. Federated Learning Backdoor Attack Based on Frequency Domain Injection.
Entropy (Basel). 2024 Feb 14;26(2):164. doi: 10.3390/e26020164.
9. Unambiguous and High-Fidelity Backdoor Watermarking for Deep Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):11204-11217. doi: 10.1109/TNNLS.2023.3250210. Epub 2024 Aug 5.
10. Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork.
Adv Neural Inf Process Syst. 2022 Dec;35:36026-36039.
