On Evaluating Black-Box Explainable AI Methods for Enhancing Anomaly Detection in Autonomous Driving Systems.

Author Information

Sazid Nazat, Osvaldo Arreche, Mustafa Abdallah

Affiliations

Electrical and Computer Engineering Department, Purdue School of Engineering and Technology, Indiana University-Purdue University Indianapolis, Indianapolis, IN 46202, USA.

Computer and Information Technology Department, Purdue School of Engineering and Technology, Indiana University-Purdue University Indianapolis, Indianapolis, IN 46202, USA.

Publication Information

Sensors (Basel). 2024 May 29;24(11):3515. doi: 10.3390/s24113515.

Abstract

Recent advances in autonomous driving bring an associated cybersecurity risk: the networks connecting autonomous vehicles (AVs) can be compromised, which motivates the use of AI models to detect anomalies on these networks. In this context, explainable AI (XAI) is crucial for explaining the behavior of such anomaly-detection models. This work introduces a comprehensive framework for assessing black-box XAI techniques for anomaly detection in AVs, supporting the examination of both global and local XAI methods that elucidate the decisions of AI models classifying anomalous AV behavior. The framework evaluates two well-known black-box XAI techniques, SHAP and LIME, against six metrics: descriptive accuracy, sparsity, stability, efficiency, robustness, and completeness. Each technique is first applied to identify the features most important for anomaly classification; extensive experiments then assess SHAP and LIME across the six metrics on two prevalent autonomous driving datasets, VeReMi and Sensor. This study advances the deployment of black-box XAI methods for real-world anomaly detection in autonomous driving systems and offers insights into the strengths and limitations of current black-box XAI methods in this critical domain.
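The core pipeline the abstract describes, applying SHAP and LIME to a black-box anomaly classifier and extracting the top-ranked features, can be sketched as follows. This is a minimal illustration, not the paper's implementation: synthetic data and a random forest stand in for the VeReMi/Sensor datasets and the trained AV models, and the feature names are hypothetical.

```python
# Minimal sketch: global (SHAP) and local (LIME) explanations of a black-box
# anomaly classifier. Synthetic data and a random forest stand in for the
# paper's VeReMi/Sensor datasets and trained AV models (assumptions).
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in for an AV dataset: feature vectors labeled normal (0) / anomalous (1).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
feature_names = [f"f{i}" for i in range(X.shape[1])]  # hypothetical names
model = RandomForestClassifier(random_state=0).fit(X, y)

def predict_anomaly(data):
    """Black-box scoring function: probability of the anomalous class."""
    return model.predict_proba(data)[:, 1]

# Global explanation: KernelSHAP queries only the prediction function, so the
# model stays a black box. A small background set keeps estimation tractable.
background = shap.sample(X, 50, random_state=0)
explainer = shap.KernelExplainer(predict_anomaly, background)
shap_vals = explainer.shap_values(X[:20], nsamples=100)  # (20, n_features)
global_rank = np.argsort(np.abs(shap_vals).mean(axis=0))[::-1]
print("SHAP top-5 features:", [feature_names[i] for i in global_rank[:5]])

# Local explanation: LIME fits a sparse linear surrogate around one instance.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                      class_names=["normal", "anomalous"],
                                      mode="classification")
local_exp = lime_explainer.explain_instance(X[0], model.predict_proba,
                                            num_features=5)
print("LIME top-5 features:", local_exp.as_list())
```

Both explainers here only query the model's prediction function, which is what makes them applicable to black-box detectors: SHAP's ranking aggregates attributions over many instances (global), while LIME explains a single prediction (local).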

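Of the six metrics, descriptive accuracy is the most mechanical to illustrate. The sketch below assumes the common reading in the XAI-evaluation literature, masking the k top-ranked features and re-scoring the model, and may differ from the paper's exact formulation; `global_rank`, `model`, `X`, and `y` refer to the previous sketch.

```python
# Hypothetical sketch of the descriptive-accuracy metric: null out the k
# features an explainer ranks highest and re-score the model. A steep accuracy
# drop suggests the explanation identified features the model truly relies on.
# This follows the usual definition in prior XAI-evaluation work (assumption).
import numpy as np

def descriptive_accuracy(model, X, y, ranked_features, k):
    """Accuracy after masking the top-k ranked features with column means."""
    X_masked = X.copy()
    means = X.mean(axis=0)
    for idx in ranked_features[:k]:
        X_masked[:, idx] = means[idx]  # remove the feature's signal
    return float((model.predict(X_masked) == y).mean())

# Sweeping k traces a descriptive-accuracy curve for one explanation, e.g.:
# for k in range(X.shape[1] + 1):
#     print(k, descriptive_accuracy(model, X, y, global_rank, k))
```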
