Thekke Kanapram Divya, Marcenaro Lucio, Martin Gomez David, Regazzoni Carlo
Department of Electrical, Electronics and Telecommunication Engineering and Naval Architecture, University of Genova, 16145 Genova, Italy.
Centre for Intelligent Sensing, School of Electronic Engineering and Computer Science (EECS), Queen Mary University of London, London E1 4NS, UK.
Sensors (Basel). 2022 Mar 15;22(6):2260. doi: 10.3390/s22062260.
In recent years, it has become essential to ensure that the outcomes of signal processing methods based on machine learning (ML) data-driven models can provide interpretable predictions. The interpretability of ML models can be defined as the capability to understand the reasons that contributed to generating a given outcome in a complex autonomous or semi-autonomous system. The necessity of interpretability is often related to the evaluation of performance in complex systems and to the acceptance of agents' automatization processes where critical high-risk decisions have to be taken. This paper concentrates on one of the core functionalities of such systems, i.e., abnormality detection, and on choosing a model representation modality based on a data-driven ML technique such that the outcomes become interpretable. Interpretability in this work is achieved through graph matching of a semantic-level vocabulary generated from the data and its relationships. The proposed approach assumes that the chosen data-driven models should support emergent self-awareness (SA) of the agents at multiple abstraction levels. The capability of incrementally updating learned representation models based on the progressive experiences of the agent is shown to be strictly related to interpretability capability. As a case study, abnormality detection is analyzed as a primary feature of the collective awareness (CA) of a network of vehicles performing cooperative behaviors. Each vehicle is considered an example of an Internet of Things (IoT) node, therefore providing results that can be generalized to an IoT framework where agents have different sensors, actuators, and tasks to be accomplished. The capability of a model to allow evaluation of abnormalities at different levels of abstraction in the learned models is addressed as a key aspect of interpretability.
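The abstract describes abnormality detection as graph matching between a learned semantic-level vocabulary and currently observed behavior, but gives no implementation details. As a rough, non-authoritative illustration of that idea only, the minimal Python sketch below (using networkx; the vocabulary labels, graphs, and threshold are all invented for the example and are not taken from the paper) compares a learned transition graph against an observed one via graph edit distance and flags a mismatch as a possible abnormality.

```python
import networkx as nx

# Hypothetical semantic-level vocabulary graphs: nodes are discrete "words"
# (e.g., clustered motion states of a vehicle), edges are observed transitions.
learned = nx.DiGraph()
learned.add_edges_from([
    ("stop", "accelerate"), ("accelerate", "cruise"),
    ("cruise", "brake"), ("brake", "stop"),
])

observed = nx.DiGraph()
observed.add_edges_from([
    ("stop", "accelerate"), ("accelerate", "swerve"),
    ("swerve", "brake"), ("brake", "stop"),
])

# Graph edit distance as a simple, illustrative mismatch score between the
# learned representation model and the observed behavior graph.
distance = nx.graph_edit_distance(learned, observed)

ABNORMALITY_THRESHOLD = 2.0  # assumed threshold; would be tuned per application

if distance is not None and distance > ABNORMALITY_THRESHOLD:
    print(f"Abnormality detected (graph edit distance = {distance})")
else:
    print(f"Behavior consistent with learned model (distance = {distance})")
```

In this toy setting, a sufficiently large edit distance between the learned and observed vocabulary graphs would be interpreted as an abnormality, and the specific node/edge mismatches indicate which semantic elements deviate; the paper's actual method operates on vocabularies learned at multiple abstraction levels and updated incrementally with the agent's experience.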