

When, Where and How Does it Fail? A Spatial-Temporal Visual Analytics Approach for Interpretable Object Detection in Autonomous Driving.

Author Information

Wang Junhong, Li Yun, Zhou Zhaoyu, Wang Chengshun, Hou Yijie, Zhang Li, Xue Xiangyang, Kamp Michael, Zhang Xiaolong Luke, Chen Siming

Publication Information

IEEE Trans Vis Comput Graph. 2023 Dec;29(12):5033-5049. doi: 10.1109/TVCG.2022.3201101. Epub 2023 Nov 13.

Abstract

Autonomous driving systems, arguably the most representative application of artificial intelligence, usually rely on computer vision techniques to perceive the external environment. Object detection underpins scene understanding in such systems. However, existing object detection algorithms often behave as a black box, so when a model fails, no information is available on When, Where and How the failure happened. In this paper, we propose a visual analytics approach to help model developers interpret model failures. The system includes micro- and macro-interpreting modules to address the interpretability problem of object detection in autonomous driving. The micro-interpreting module extracts and visualizes the features of a convolutional neural network (CNN) with density maps, while the macro-interpreting module provides spatial-temporal information about the autonomous vehicle and its environment. With situation awareness of spatial, temporal, and neural network information, our system facilitates the understanding of object detection results and helps model developers better understand, tune, and develop their models. We use real-world autonomous driving data to perform case studies, involving domain experts in computer vision and autonomous driving to evaluate our system. The results of our interviews with them show the effectiveness of our approach.
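The paper itself does not publish implementation details in this abstract, but the core idea behind the micro-interpreting module, aggregating CNN activations into a 2-D density map over image locations, can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' method: the function name `feature_density_map`, the channel-mean aggregation, and the min-max normalization are all assumptions chosen for simplicity.

```python
import numpy as np

def feature_density_map(feature_maps):
    """Collapse a stack of CNN feature maps (C, H, W) into one 2-D density map.

    Illustrative assumption: per-location "density" is the mean absolute
    activation across channels, min-max normalized to [0, 1] so it can be
    rendered as a heatmap overlay.
    """
    act = np.abs(feature_maps).mean(axis=0)   # (H, W) mean activation magnitude
    rng = act.max() - act.min()
    if rng == 0:                              # constant map: nothing to highlight
        return np.zeros_like(act)
    return (act - act.min()) / rng            # normalize for visualization

# Toy example: 8 channels of 4x4 activations from a hypothetical CNN layer.
gen = np.random.default_rng(0)
fmap = gen.normal(size=(8, 4, 4))
density = feature_density_map(fmap)           # values in [0, 1], shape (4, 4)
```

In practice, such feature maps would be captured from a real detector (e.g. via framework hooks on intermediate layers) and the resulting density maps upsampled and overlaid on the input frame, which is the kind of view the micro-interpreting module described above provides.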

