Adversarial Attacks on Intrusion Detection Systems in In-Vehicle Networks of Connected and Autonomous Vehicles.

Authors

Aloraini Fatimah, Javed Amir, Rana Omer

Affiliations

School of Computer Science and Informatics, Cardiff University, Cardiff CF10 3AT, UK.

College of Sciences and Humanities, Shaqra University, Shaqra 11911, Saudi Arabia.

Publication Information

Sensors (Basel). 2024 Jun 14;24(12):3848. doi: 10.3390/s24123848.

Abstract

Rapid advancements in connected and autonomous vehicles (CAVs) are fueled by breakthroughs in machine learning, yet they encounter significant risks from adversarial attacks. This study explores the vulnerabilities of machine learning-based intrusion detection systems (IDSs) within in-vehicle networks (IVNs) to adversarial attacks, shifting focus from the common research on manipulating CAV perception models. Considering the relatively simple nature of IVN data, we assess the susceptibility of IVN-based IDSs to manipulation, a crucial examination, as adversarial attacks typically exploit complexity. We propose an adversarial attack method using a substitute IDS trained with data from the onboard diagnostic port. In conducting these attacks under black-box conditions while adhering to realistic IVN traffic constraints, our method seeks to deceive the IDS into misclassifying both normal-to-malicious and malicious-to-normal cases. Evaluations on two IDS models, a baseline IDS and a state-of-the-art model (MTH-IDS), demonstrated substantial vulnerability, decreasing the F1 scores from 95% to 38% and from 97% to 79%, respectively. Notably, inducing false alarms proved particularly effective as an adversarial strategy, undermining user trust in the defense mechanism. Despite the simplicity of IVN-based IDSs, our findings reveal critical vulnerabilities that could threaten vehicle safety and necessitate careful consideration in the development of IVN-based IDSs and in formulating responses to the IDSs' alarms.

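The substitute-model (transfer) attack the abstract describes can be sketched in very rough form. This is a toy illustration under stated assumptions, not the paper's implementation: synthetic two-cluster data stands in for CAN traffic features collected at the OBD-II port, plain-NumPy logistic regression stands in for both the black-box target IDS and the attacker's substitute, and an FGSM-style perturbation crafted on the substitute is transferred to the target. The real method additionally respects IVN protocol constraints on valid frames, which this sketch ignores.

```python
# Hypothetical sketch of a substitute-model transfer attack on an ML-based IDS.
# All data, dimensions, and epsilon values are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Synthetic 4-D features: "normal" traffic around 0, "malicious" around 3.
    normal = rng.normal(0.0, 1.0, (n, 4))
    attack = rng.normal(3.0, 1.0, (n, 4))
    return np.vstack([normal, attack]), np.array([0] * n + [1] * n)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=500):
    # Plain-NumPy logistic regression as a stand-in for an ML-based IDS.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        g = sigmoid(X @ w + b) - y          # gradient of the logistic loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def predict(w, b, X):
    return (sigmoid(X @ w + b) >= 0.5).astype(int)

def fgsm(w, b, X, y, eps):
    # FGSM on the substitute: step inputs along the sign of dLoss/dX,
    # pushing normal samples toward "malicious" and vice versa.
    grad = (sigmoid(X @ w + b) - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad)

# Black-box target IDS and attacker's substitute, trained on separate samples
# (the attacker never sees the target's parameters, only similar traffic).
X_t, y_t = make_data(400)
target_w, target_b = train_logreg(X_t, y_t)
X_s, y_s = make_data(400)
sub_w, sub_b = train_logreg(X_s, y_s)

# Craft perturbations on the substitute; evaluate transfer to the target.
X_test, y_test = make_data(200)
clean_acc = (predict(target_w, target_b, X_test) == y_test).mean()
X_adv = fgsm(sub_w, sub_b, X_test, y_test, eps=1.5)
adv_acc = (predict(target_w, target_b, X_adv) == y_test).mean()
print(f"target accuracy: clean {clean_acc:.2f}, adversarial {adv_acc:.2f}")
```

On this synthetic data the target's accuracy drops sharply on the transferred adversarial inputs, mirroring the F1 degradation the paper reports; the normal-to-malicious flips correspond to the false-alarm strategy the authors found especially effective.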

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2f4c/11207422/0b11209e28b7/sensors-24-03848-g001.jpg
