Alvey Brendan, Anderson Derek, Keller James, Buck Andrew
Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA.
Sensors (Basel). 2023 Aug 3;23(15):6879. doi: 10.3390/s23156879.
Deep learning has become increasingly common in aerial imagery analysis. As its use continues to grow, it is crucial that we understand and can explain its behavior. One eXplainable AI (XAI) approach is to generate linguistic summarizations of data and/or models. However, the number of summaries can grow significantly with the number of data attributes, posing a challenge. Herein, we propose a hierarchical approach for generating and evaluating linguistic statements about black-box deep learning models. Our approach scores and ranks statements according to user-specified criteria. We outline a systematic process for evaluating an object detector on a low-altitude aerial drone. A deep learning model trained on real imagery was evaluated on a photorealistic simulated dataset with known ground truth across different contexts. We demonstrate the effectiveness and versatility of our approach by showing tailored linguistic summaries for different user types. Ultimately, this process is an efficient, human-centric way of identifying successes, shortcomings, and biases in data and deep learning models.