

On Evaluating Black-Box Explainable AI Methods for Enhancing Anomaly Detection in Autonomous Driving Systems.

Authors

Nazat Sazid, Arreche Osvaldo, Abdallah Mustafa

Affiliations

Electrical and Computer Engineering Department, Purdue School of Engineering and Technology, Indiana University-Purdue University Indianapolis, Indianapolis, IN 46202, USA.

Computer and Information Technology Department, Purdue School of Engineering and Technology, Indiana University-Purdue University Indianapolis, Indianapolis, IN 46202, USA.

Publication

Sensors (Basel). 2024 May 29;24(11):3515. doi: 10.3390/s24113515.

DOI:10.3390/s24113515
PMID:38894306
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11175219/
Abstract

The recent advancements in autonomous driving come with the associated cybersecurity issue of compromising networks of autonomous vehicles (AVs), motivating the use of AI models for detecting anomalies on these networks. In this context, the usage of explainable AI (XAI) for explaining the behavior of these anomaly detection AI models is crucial. This work introduces a comprehensive framework to assess black-box XAI techniques for anomaly detection within AVs, facilitating the examination of both global and local XAI methods to elucidate the decisions made by XAI techniques that explain the behavior of AI models classifying anomalous AV behavior. By considering six evaluation metrics (descriptive accuracy, sparsity, stability, efficiency, robustness, and completeness), the framework evaluates two well-known black-box XAI techniques, SHAP and LIME, involving applying XAI techniques to identify primary features crucial for anomaly classification, followed by extensive experiments assessing SHAP and LIME across the six metrics using two prevalent autonomous driving datasets, VeReMi and Sensor. This study advances the deployment of black-box XAI methods for real-world anomaly detection in autonomous driving systems, contributing valuable insights into the strengths and limitations of current black-box XAI methods within this critical domain.

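The core step the abstract describes — applying a black-box explainer to rank the features that drive an anomaly classification — can be illustrated with a minimal LIME-style sketch: perturb one instance, query the black-box scorer on the perturbed copies, and fit a proximity-weighted linear surrogate whose coefficients act as local feature importances. The scorer, feature names, and coefficients below are invented for illustration only; the paper's experiments use the actual SHAP and LIME libraries on models trained on the VeReMi and Sensor datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

def anomaly_score(X):
    """Hypothetical black-box anomaly scorer over 4 telemetry features
    (speed, pos_x, pos_y, heading) -- a stand-in for the paper's AV models."""
    z = 2.0 * X[:, 0] + 1.5 * X[:, 1] - 0.2 * X[:, 3]
    return 1.0 / (1.0 + np.exp(-z))

def lime_style_explain(x, n_samples=2000, sigma=0.5):
    """Fit a locally weighted linear surrogate around instance x;
    its coefficients serve as local feature importances."""
    # 1. Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=sigma, size=(n_samples, x.size))
    y = anomaly_score(Z)
    # 2. Weight perturbed samples by proximity to x (RBF kernel).
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2.0 * sigma ** 2))
    # 3. Weighted least squares with an intercept column.
    A = np.hstack([Z, np.ones((n_samples, 1))])
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(sw * A, sw[:, 0] * y, rcond=None)
    return coef[:-1]  # drop the intercept

x = np.array([0.2, 0.1, 0.5, 0.4])          # one "instance" to explain
importances = lime_style_explain(x)
ranking = np.argsort(-np.abs(importances))   # most influential features first
print(ranking)
```

The magnitudes of the surrogate coefficients give a local feature ranking; the paper's evaluation metrics (e.g., sparsity and stability) are then computed over such rankings produced by SHAP and LIME.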

Similar Articles

1
On Evaluating Black-Box Explainable AI Methods for Enhancing Anomaly Detection in Autonomous Driving Systems.
Sensors (Basel). 2024 May 29;24(11):3515. doi: 10.3390/s24113515.
2
Toward explainable AI (XAI) for mental health detection based on language behavior.
Front Psychiatry. 2023 Dec 7;14:1219479. doi: 10.3389/fpsyt.2023.1219479. eCollection 2023.
3
Explainability and white box in drug discovery.
Chem Biol Drug Des. 2023 Jul;102(1):217-233. doi: 10.1111/cbdd.14262. Epub 2023 Apr 27.
4
SHAP and LIME: An Evaluation of Discriminative Power in Credit Risk.
Front Artif Intell. 2021 Sep 17;4:752558. doi: 10.3389/frai.2021.752558. eCollection 2021.
5
Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data.
Front Artif Intell. 2023 Dec 4;6:1272506. doi: 10.3389/frai.2023.1272506. eCollection 2023.
6
Explainable artificial intelligence in breast cancer detection and risk prediction: A systematic scoping review.
Cancer Innov. 2024 Jul 3;3(5):e136. doi: 10.1002/cai2.136. eCollection 2024 Oct.
7
Utilization of model-agnostic explainable artificial intelligence frameworks in oncology: a narrative review.
Transl Cancer Res. 2022 Oct;11(10):3853-3868. doi: 10.21037/tcr-22-1626.
8
Explainable AI-driven model for gastrointestinal cancer classification.
Front Med (Lausanne). 2024 Apr 15;11:1349373. doi: 10.3389/fmed.2024.1349373. eCollection 2024.
9
To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods.
PeerJ Comput Sci. 2021 Apr 16;7:e479. doi: 10.7717/peerj-cs.479. eCollection 2021.
10
Development of an ensemble CNN model with explainable AI for the classification of gastrointestinal cancer.
PLoS One. 2024 Jun 25;19(6):e0305628. doi: 10.1371/journal.pone.0305628. eCollection 2024.

Cited By

1
Ensemble Learning Framework for Anomaly Detection in Autonomous Driving Systems.
Sensors (Basel). 2025 Aug 17;25(16):5105. doi: 10.3390/s25165105.
2
Leveraging explainable artificial intelligence for early detection and mitigation of cyber threat in large-scale network environments.
Sci Rep. 2025 Jul 9;15(1):24662. doi: 10.1038/s41598-025-08597-9.
3
Explainable artificial intelligence with temporal convolutional networks for adverse weather condition detection in driverless vehicles.
Sci Rep. 2025 Jun 3;15(1):19475. doi: 10.1038/s41598-025-05136-4.

References

1
From Local Explanations to Global Understanding with Explainable AI for Trees.
Nat Mach Intell. 2020 Jan;2(1):56-67. doi: 10.1038/s42256-019-0138-9. Epub 2020 Jan 17.
2
Recurrent Neural Networks for Multivariate Time Series with Missing Values.
Sci Rep. 2018 Apr 17;8(1):6085. doi: 10.1038/s41598-018-24271-9.
4
Machine Learning-Based Prediction of Well Logs Guided by Rock Physics and Its Interpretation.
Sensors (Basel). 2025 Jan 30;25(3):836. doi: 10.3390/s25030836.
5
A hybrid approach for intrusion detection in vehicular networks using feature selection and dimensionality reduction with optimized deep learning.
PLoS One. 2025 Feb 6;20(2):e0312752. doi: 10.1371/journal.pone.0312752. eCollection 2025.
6
LPDi GAN: A License Plate De-Identification Method to Preserve Strong Data Utility.
Sensors (Basel). 2024 Jul 30;24(15):4922. doi: 10.3390/s24154922.