Suppr 超能文献



MARGIN: Uncovering Deep Neural Networks Using Graph Signal Analysis.

Authors

Rushil Anirudh, Jayaraman J. Thiagarajan, Rahul Sridhar, Peer-Timo Bremer

Affiliations

Center for Applied Scientific Computing (CASC), Lawrence Livermore National Laboratory, Livermore, CA, United States.

Walmart Labs, CA, United States.

Publication

Front Big Data. 2021 May 4;4:589417. doi: 10.3389/fdata.2021.589417. eCollection 2021.

DOI: 10.3389/fdata.2021.589417
PMID: 34337397
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8320743/
Abstract

Interpretability has emerged as a crucial aspect of building trust in machine learning systems, aimed at providing insights into the workings of complex neural networks that are otherwise opaque to a user. There is a plethora of existing solutions addressing various aspects of interpretability, ranging from identifying prototypical samples in a dataset to explaining image predictions or explaining misclassifications. While all of these diverse techniques address seemingly different aspects of interpretability, we hypothesize that a large family of interpretability tasks are variants of the same central problem: identifying change in a model's prediction. This paper introduces MARGIN, a simple yet general approach to address a large set of interpretability tasks. MARGIN exploits ideas rooted in graph signal analysis to determine influential nodes in a graph, which are defined as those nodes that maximally describe a function defined on the graph. By carefully defining task-specific graphs and functions, we demonstrate that MARGIN outperforms existing approaches in a number of disparate interpretability challenges.
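The core idea described in the abstract — scoring each sample (node) by how strongly it explains a function defined over a neighborhood graph — can be sketched as a graph high-pass filtering step. The following is a minimal illustration, assuming a k-nearest-neighbor graph with Gaussian edge weights and the unnormalized graph Laplacian; the function names `knn_graph` and `margin_scores` are hypothetical and do not reproduce the authors' implementation:

```python
import numpy as np

def knn_graph(X, k=5):
    """Build a symmetric kNN adjacency matrix with Gaussian edge weights."""
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    n = len(X)
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d[i])[1:k + 1]  # k nearest neighbors, skipping self
        W[i, idx] = np.exp(-d[i, idx])
    return np.maximum(W, W.T)  # symmetrize

def margin_scores(W, f):
    """Score each node by the magnitude of the Laplacian (high-pass) response
    of the graph signal f; large values mark nodes where f changes sharply."""
    L = np.diag(W.sum(1)) - W  # unnormalized graph Laplacian L = D - W
    return np.abs(L @ f)
```

Nodes where the signal `f` disagrees with its graph neighbors (for example, samples near a decision boundary when `f` encodes predicted labels or confidences) receive the highest scores; in the paper, different choices of task-specific graph and function recover the different interpretability tasks.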


Figures 1–8 (from the PMC full text):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e193/8320743/4472bb9f1f2b/fdata-04-589417-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e193/8320743/4e0b6aaeda4e/fdata-04-589417-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e193/8320743/05ae7be059ee/fdata-04-589417-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e193/8320743/b244fb7afbd2/fdata-04-589417-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e193/8320743/0bb26efb4ddf/fdata-04-589417-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e193/8320743/a9340e01ead7/fdata-04-589417-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e193/8320743/fba309a846a0/fdata-04-589417-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e193/8320743/7025e0376c12/fdata-04-589417-g008.jpg

Similar Articles

1. MARGIN: Uncovering Deep Neural Networks Using Graph Signal Analysis.
   Front Big Data. 2021 May 4;4:589417. doi: 10.3389/fdata.2021.589417. eCollection 2021.
2. Explaining decisions of graph convolutional neural networks: patient-specific molecular subnetworks responsible for metastasis prediction in breast cancer.
   Genome Med. 2021 Mar 11;13(1):42. doi: 10.1186/s13073-021-00845-7.
3. Interpretable Molecular Property Predictions Using Marginalized Graph Kernels.
   J Chem Inf Model. 2023 Aug 14;63(15):4633-4640. doi: 10.1021/acs.jcim.3c00396. Epub 2023 Jul 28.
4. GNNExplainer: Generating Explanations for Graph Neural Networks.
   Adv Neural Inf Process Syst. 2019 Dec;32:9240-9251.
5. A Mobile App That Addresses Interpretability Challenges in Machine Learning-Based Diabetes Predictions: Survey-Based User Study.
   JMIR Form Res. 2023 Nov 13;7:e50328. doi: 10.2196/50328.
6. Derivative-free optimization adversarial attacks for graph convolutional networks.
   PeerJ Comput Sci. 2021 Aug 24;7:e693. doi: 10.7717/peerj-cs.693. eCollection 2021.
7. Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations.
   IEEE Trans Vis Comput Graph. 2020 Jan;26(1):1096-1106. doi: 10.1109/TVCG.2019.2934659. Epub 2019 Aug 20.
8. Co-Embedding of Nodes and Edges With Graph Neural Networks.
   IEEE Trans Pattern Anal Mach Intell. 2023 Jun;45(6):7075-7086. doi: 10.1109/TPAMI.2020.3029762. Epub 2023 May 5.
9. Unsupervised Event Graph Representation and Similarity Learning on Biomedical Literature.
   Sensors (Basel). 2021 Dec 21;22(1):3. doi: 10.3390/s22010003.
10. Survey on graph embeddings and their applications to machine learning problems on graphs.
   PeerJ Comput Sci. 2021 Feb 4;7:e357. doi: 10.7717/peerj-cs.357. eCollection 2021.

References Cited in This Article

1. SLIC superpixels compared to state-of-the-art superpixel methods.
   IEEE Trans Pattern Anal Mach Intell. 2012 Nov;34(11):2274-82. doi: 10.1109/TPAMI.2012.120.