Data Poisoning Attack against Neural Network-Based On-Device Learning Anomaly Detector by Physical Attacks on Sensors.

Authors

Ino Takahito, Yoshida Kota, Matsutani Hiroki, Fujino Takeshi

Affiliations

College of Science and Engineering, Ritsumeikan University, Kusatsu 525-8577, Japan.

Faculty of Science and Technology, Keio University, Yokohama 223-8522, Japan.

Publication

Sensors (Basel). 2024 Oct 3;24(19):6416. doi: 10.3390/s24196416.

DOI: 10.3390/s24196416
PMID: 39409456
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11479347/
Abstract

In this paper, we introduce a security approach for on-device learning Edge AIs designed to detect abnormal conditions in factory machines. Since Edge AIs are easily accessible by an attacker physically, there are security risks due to physical attacks. In particular, there is a concern that the attacker may tamper with the training data of the on-device learning Edge AIs to degrade the task accuracy. Few risk assessments have been reported. It is important to understand these security risks before considering countermeasures. In this paper, we demonstrate a data poisoning attack against an on-device learning Edge AI. Our attack target is an on-device learning anomaly detection system. The system adopts MEMS accelerometers to measure the vibration of factory machines and detect anomalies. The anomaly detector also adopts a concept drift detection algorithm and multiple models to accommodate multiple normal patterns. For the attack, we used a method in which measurements are tampered with by exposing the MEMS accelerometer to acoustic waves of a specific frequency. The acceleration data falsified by this method were trained on an anomaly detector, and the result was that the abnormal state could not be detected.
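To make the attack flow described in the abstract concrete, here is a minimal NumPy sketch. It is not the authors' implementation: the paper's detector is an on-device neural network with concept drift detection and multiple model instances, whereas this sketch substitutes a simple mean-spectrum distance detector, and the sampling rate, machine fault frequency, injected tone frequency, and threshold rule are all illustrative assumptions. It contrasts a detector trained on genuine vibration (which flags a simulated fault) with one trained while the acoustic injection is running (which misses the same fault).

# Minimal NumPy sketch of the poisoning scenario; illustrative only.
# The paper's on-device neural network detector is replaced here by a
# simple mean-spectrum distance detector, and all signal parameters
# (sampling rate, fault frequency, attack tone frequency, threshold rule)
# are assumptions made for this example.
import numpy as np

rng = np.random.default_rng(0)
FS = 25_600        # sampling rate in Hz (assumed)
WIN = 512          # samples per analysis window (20 ms)
ATTACK_HZ = 5_000  # acoustic tone aimed near the MEMS resonance (illustrative)

def vibration(seconds, faulty=False):
    # Toy machine vibration: 50 Hz fundamental plus noise; a fault adds a 300 Hz line.
    t = np.arange(int(seconds * FS)) / FS
    x = np.sin(2 * np.pi * 50 * t) + 0.05 * rng.standard_normal(t.size)
    if faulty:
        x += 0.8 * np.sin(2 * np.pi * 300 * t)
    return x

def attacked_sensor(x):
    # Simplification: the resonant response to the injected acoustic tone swamps
    # the true vibration, so the sensor output is modeled as the attacker's tone.
    t = np.arange(x.size) / FS
    return np.sin(2 * np.pi * ATTACK_HZ * t) + 0.05 * rng.standard_normal(x.size)

def spectra(x):
    # Phase-invariant features: magnitude spectrum of each non-overlapping window.
    n = x.size // WIN
    return np.abs(np.fft.rfft(x[: n * WIN].reshape(n, WIN), axis=1))

class MeanSpectrumDetector:
    # Stand-in anomaly detector: squared distance from the mean training spectrum,
    # with a crude threshold derived from the training scores.
    def fit(self, feats):
        self.mu = feats.mean(axis=0)
        self.threshold = 3.0 * self._score(feats).max()
        return self

    def _score(self, feats):
        return np.mean((feats - self.mu) ** 2, axis=1)

    def detect(self, feats):
        return self._score(feats) > self.threshold

# Baseline: trained on genuine normal vibration; the fault is flagged.
clean = MeanSpectrumDetector().fit(spectra(vibration(5.0)))
print("clean model, faulty machine:",
      clean.detect(spectra(vibration(1.0, faulty=True))).mean())

# Poisoned: the acoustic injection runs while the device trains, so the falsified
# data become the learned "normal"; the same fault then goes undetected while the
# injection continues.
poisoned = MeanSpectrumDetector().fit(spectra(attacked_sensor(vibration(5.0))))
print("poisoned model, faulty machine:",
      poisoned.detect(spectra(attacked_sensor(vibration(1.0, faulty=True)))).mean())

With these assumptions the clean-trained detector flags essentially all faulty windows (detection rate near 1.0), while the poisoned detector flags almost none, mirroring the abstract's conclusion that the abnormal state could not be detected after training on falsified acceleration data.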

[Figures: sensors-24-06416-g001 through sensors-24-06416-g016, available via the PMC full-text link above.]

Similar Articles

1. Data Poisoning Attack against Neural Network-Based On-Device Learning Anomaly Detector by Physical Attacks on Sensors.
Sensors (Basel). 2024 Oct 3;24(19):6416. doi: 10.3390/s24196416.
2. When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time.
Neural Comput. 2019 Aug;31(8):1624-1670. doi: 10.1162/neco_a_01209. Epub 2019 Jul 1.
3. Adversarial concept drift detection under poisoning attacks for robust data stream mining.
Mach Learn. 2022 Jun 2:1-36. doi: 10.1007/s10994-022-06177-w.
4. DL-Based Physical Tamper Attack Detection in OFDM Systems with Multiple Receiver Antennas: A Performance-Complexity Trade-Off.
Sensors (Basel). 2022 Aug 30;22(17):6547. doi: 10.3390/s22176547.
5. Detection of False Data Injection Attacks in Smart Grids Based on Expectation Maximization.
Sensors (Basel). 2023 Feb 3;23(3):1683. doi: 10.3390/s23031683.
6. IDAC: Federated Learning-Based Intrusion Detection Using Autonomously Extracted Anomalies in IoT.
Sensors (Basel). 2024 May 18;24(10):3218. doi: 10.3390/s24103218.
7. Defending the Defender: Adversarial Learning Based Defending Strategy for Learning Based Security Methods in Cyber-Physical Systems (CPS).
Sensors (Basel). 2023 Jun 9;23(12):5459. doi: 10.3390/s23125459.
8. Poisoning Attacks against Communication and Computing Task Classification and Detection Techniques.
Sensors (Basel). 2024 Jan 5;24(2):338. doi: 10.3390/s24020338.
9. Exploiting Missing Value Patterns for a Backdoor Attack on Machine Learning Models of Electronic Health Records: Development and Validation Study.
JMIR Med Inform. 2022 Aug 19;10(8):e38440. doi: 10.2196/38440.
10. Online data poisoning attack against edge AI paradigm for IoT-enabled smart city.
Math Biosci Eng. 2023 Sep 15;20(10):17726-17746. doi: 10.3934/mbe.2023788.

References Cited in This Article

1. A fast and accurate online sequential learning algorithm for feedforward networks.
IEEE Trans Neural Netw. 2006 Nov;17(6):1411-23. doi: 10.1109/TNN.2006.880583.
2. Reducing the dimensionality of data with neural networks.
Science. 2006 Jul 28;313(5786):504-7. doi: 10.1126/science.1127647.