Ino Takahito, Yoshida Kota, Matsutani Hiroki, Fujino Takeshi
College of Science and Engineering, Ritsumeikan University, Kusatsu 525-8577, Japan.
Faculty of Science and Technology, Keio University, Yokohama 223-8522, Japan.
Sensors (Basel). 2024 Oct 3;24(19):6416. doi: 10.3390/s24196416.
In this paper, we introduce a security approach for on-device learning Edge AIs designed to detect abnormal conditions in factory machines. Since Edge AI devices are physically accessible to attackers, they are exposed to security risks from physical attacks. In particular, there is a concern that an attacker may tamper with the training data of an on-device learning Edge AI to degrade its task accuracy, yet few risk assessments of such attacks have been reported. It is important to understand these security risks before considering countermeasures. In this paper, we demonstrate a data poisoning attack against an on-device learning Edge AI. Our attack target is an on-device learning anomaly detection system that uses MEMS accelerometers to measure the vibration of factory machines and detect anomalies. The anomaly detector also employs a concept drift detection algorithm and multiple models to accommodate multiple normal patterns. For the attack, we tampered with the measurements by exposing the MEMS accelerometer to acoustic waves of a specific frequency. When the anomaly detector was trained on acceleration data falsified in this way, it could no longer detect the abnormal state.
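The attack chain described above (a detector learns "normal" vibration from MEMS accelerometer data, while acoustic injection at the sensor adds a spurious tone to the training set) can be illustrated with a minimal Python sketch. This is not the authors' implementation: the mean-spectral-profile detector, the helper names (MeanProfileDetector, machine), the 120 Hz injected tone, the sample rate, amplitudes, and threshold are all assumptions chosen for a toy example, and the paper's concept drift detection and multiple-model mechanisms are omitted for brevity.

```python
import numpy as np

# Minimal illustrative sketch, NOT the authors' implementation: a toy detector
# that learns a mean FFT-magnitude profile of "normal" vibration windows and
# flags windows whose spectral distance from that profile exceeds a threshold.
class MeanProfileDetector:
    def __init__(self, threshold):
        self.profile = None          # learned "normal" spectral profile
        self.threshold = threshold

    def fit(self, windows):
        # windows: (n_windows, window_len) raw accelerometer samples
        self.profile = np.abs(np.fft.rfft(windows, axis=1)).mean(axis=0)

    def score(self, window):
        return np.linalg.norm(np.abs(np.fft.rfft(window)) - self.profile)

    def is_anomalous(self, window):
        return self.score(window) > self.threshold


fs, n = 1000, 256                    # assumed sample rate and window length
t = np.arange(n) / fs
rng = np.random.default_rng(0)

def machine(freq, amp=1.0):
    """One noisy vibration window dominated by a single machine frequency."""
    return amp * np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(n)

# Acoustic injection: sound near the MEMS accelerometer's resonant frequency
# appears as a spurious sinusoid in the measured acceleration (assumed 120 Hz).
injected = 0.8 * np.sin(2 * np.pi * 120 * t)

normal = np.stack([machine(50) for _ in range(100)])   # true normal data
poisoned = normal + injected                           # tampered training set

clean_det = MeanProfileDetector(threshold=30.0)
clean_det.fit(normal)
poisoned_det = MeanProfileDetector(threshold=30.0)
poisoned_det.fit(poisoned)

# A faulty-machine window whose signature happens to match the injected tone:
fault = machine(50) + 0.8 * np.sin(2 * np.pi * 120 * t)
print(clean_det.is_anomalous(fault))     # True  -- clean model flags the fault
print(poisoned_det.is_anomalous(fault))  # False -- poisoned model accepts it
```

The point of the sketch is the mechanism, not the model: because the detector's notion of "normal" is learned from whatever the sensor reports, an attacker who can shift the sensor output during training shifts the decision boundary, so a later fault that resembles the injected signal is accepted as normal.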