Graph-Powered Interpretable Machine Learning Models for Abnormality Detection in Ego-Things Network.

Authors

Thekke Kanapram Divya, Marcenaro Lucio, Martin Gomez David, Regazzoni Carlo

Affiliations

Department of Electrical, Electronics and Telecommunication Engineering and Naval Architecture, University of Genova, 16145 Genova, Italy.

Centre for Intelligent Sensing, School of Electronic Engineering and Computer Science (EECS), Queen Mary University of London, London E1 4NS, UK.

Publication

Sensors (Basel). 2022 Mar 15;22(6):2260. doi: 10.3390/s22062260.

DOI: 10.3390/s22062260
PMID: 35336431
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8953755/
Abstract

In recent years, it has become essential to ensure that the outcomes of signal processing methods based on machine learning (ML) data-driven models can provide interpretable predictions. The interpretability of ML models can be defined as the capability to understand the reasons that contributed to generating a given outcome in a complex autonomous or semi-autonomous system. The necessity of interpretability is often related to the evaluation of performance in complex systems and to the acceptance of agents' automatization processes where critical high-risk decisions have to be taken. This paper concentrates on one of the core functionalities of such systems, i.e., abnormality detection, and on choosing a model representation modality based on a data-driven machine learning (ML) technique such that the outcomes become interpretable. Interpretability in this work is achieved through graph matching of semantic-level vocabularies generated from the data and their relationships. The proposed approach assumes that the chosen data-driven models should support emergent self-awareness (SA) of the agents at multiple abstraction levels. The capability of incrementally updating learned representation models based on the agent's progressive experiences is shown to be strictly related to interpretability. As a case study, abnormality detection is analyzed as a primary feature of the collective awareness (CA) of a network of vehicles performing cooperative behaviors. Each vehicle is considered an example of an Internet of Things (IoT) node, therefore providing results that can be generalized to an IoT framework where agents have different sensors, actuators, and tasks to be accomplished. The capability of a model to allow evaluation of abnormalities at different levels of abstraction in the learned models is addressed as a key aspect of interpretability.
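To make the graph-matching idea in the abstract concrete, here is a minimal sketch of abnormality scoring via graph comparison. It is illustrative only, not the authors' implementation: the networkx library, graph edit distance as the matching score, the vehicle-state vocabulary labels, and the threshold value are all assumptions introduced for this example.

import networkx as nx

def build_semantic_graph(transitions):
    """Directed graph over discrete vocabulary labels (e.g., clustered
    sensor states); edges record observed transitions between labels."""
    g = nx.DiGraph()
    for src, dst in transitions:
        g.add_node(src, label=src)
        g.add_node(dst, label=dst)
        g.add_edge(src, dst)
    return g

# Learned "normal" model from past experience (hypothetical labels).
normal = build_semantic_graph([
    ("cruise", "brake"), ("brake", "stop"), ("stop", "cruise"),
])

# Graph observed during a test run; it contains an unseen state.
observed = build_semantic_graph([
    ("cruise", "brake"), ("brake", "swerve"), ("swerve", "stop"),
])

# Graph edit distance as a stand-in matching score: the cost of editing
# one graph into the other. Nodes match only if their labels agree.
distance = nx.graph_edit_distance(
    normal, observed,
    node_match=lambda a, b: a["label"] == b["label"],
)

THRESHOLD = 2.0  # hypothetical; would be tuned on normal-behavior data
print(f"edit distance = {distance}; abnormal = {distance > THRESHOLD}")

In the paper's framework the vocabulary would come from unsupervised learning over multi-sensor data, and matching is performed at multiple abstraction levels; a per-level score of this kind is what lets a flagged abnormality be explained in terms of the specific states and transitions that deviated.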

[Figures 1-25 (sensors-22-02260-g001 through g025) are available at the PMC full-text page: https://pmc.ncbi.nlm.nih.gov/articles/PMC8953755/]

Similar Articles

1. Graph-Powered Interpretable Machine Learning Models for Abnormality Detection in Ego-Things Network.
   Sensors (Basel). 2022 Mar 15;22(6):2260. doi: 10.3390/s22062260.
2. Enhancing interpretability of automatically extracted machine learning features: application to a RBM-Random Forest system on brain lesion segmentation.
   Med Image Anal. 2018 Feb;44:228-244. doi: 10.1016/j.media.2017.12.009. Epub 2017 Dec 20.
3. On the interpretability of machine learning-based model for predicting hypertension.
   BMC Med Inform Decis Mak. 2019 Jul 29;19(1):146. doi: 10.1186/s12911-019-0874-0.
4. A critical moment in machine learning in medicine: on reproducible and interpretable learning.
   Acta Neurochir (Wien). 2024 Jan 16;166(1):14. doi: 10.1007/s00701-024-05892-8.
5. An Aggregated Mutual Information Based Feature Selection with Machine Learning Methods for Enhancing IoT Botnet Attack Detection.
   Sensors (Basel). 2021 Dec 28;22(1):185. doi: 10.3390/s22010185.
6. Interpretable neural networks: principles and applications.
   Front Artif Intell. 2023 Oct 13;6:974295. doi: 10.3389/frai.2023.974295. eCollection 2023.
7. The future of Cochrane Neonatal.
   Early Hum Dev. 2020 Nov;150:105191. doi: 10.1016/j.earlhumdev.2020.105191. Epub 2020 Sep 12.
8. Unsupervised Event Graph Representation and Similarity Learning on Biomedical Literature.
   Sensors (Basel). 2021 Dec 21;22(1):3. doi: 10.3390/s22010003.
9. Biologically Inspired Model for Visual Cognition Achieving Unsupervised Episodic and Semantic Feature Learning.
   IEEE Trans Cybern. 2016 Oct;46(10):2335-2347. doi: 10.1109/TCYB.2015.2476706. Epub 2015 Sep 18.
10. Dynamic Heterogeneous User Generated Contents-Driven Relation Assessment via Graph Representation Learning.
    Sensors (Basel). 2022 Feb 11;22(4):1402. doi: 10.3390/s22041402.

References Cited in This Article

1. Definitions, methods, and applications in interpretable machine learning.
   Proc Natl Acad Sci U S A. 2019 Oct 29;116(44):22071-22080. doi: 10.1073/pnas.1900654116. Epub 2019 Oct 16.
2. Machine Learning in Medicine.
   Circulation. 2015 Nov 17;132(20):1920-30. doi: 10.1161/CIRCULATIONAHA.115.001593.
3. A Free Energy Principle for Biological Systems.
   Entropy (Basel). 2012 Nov;14(11):2100-2121. doi: 10.3390/e14112100.
4. Learning graph matching.
   IEEE Trans Pattern Anal Mach Intell. 2009 Jun;31(6):1048-58. doi: 10.1109/TPAMI.2009.28.
5. 'Neural-gas' network for vector quantization and its application to time-series prediction.
   IEEE Trans Neural Netw. 1993;4(4):558-69. doi: 10.1109/72.238311.