

A Meta-Learning Approach for Training Explainable Graph Neural Networks.

Authors

Spinelli Indro, Scardapane Simone, Uncini Aurelio

Publication

IEEE Trans Neural Netw Learn Syst. 2024 Apr;35(4):4647-4655. doi: 10.1109/TNNLS.2022.3171398. Epub 2024 Apr 4.

DOI: 10.1109/TNNLS.2022.3171398
PMID: 35544494
Abstract

In this article, we investigate the degree of explainability of graph neural networks (GNNs). The existing explainers work by finding global/local subgraphs to explain a prediction, but they are applied after a GNN has already been trained. Here, we propose a meta-explainer for improving the level of explainability of a GNN directly at training time, by steering the optimization procedure toward minima that allow post hoc explainers to achieve better results, without sacrificing the overall accuracy of GNN. Our framework (called MATE, MetA-Train to Explain) jointly trains a model to solve the original task, e.g., node classification, and to provide easily processable outputs for downstream algorithms that explain the model's decisions in a human-friendly way. In particular, we meta-train the model's parameters to quickly minimize the error of an instance-level GNNExplainer trained on-the-fly on randomly sampled nodes. The final internal representation relies on a set of features that can be "better" understood by an explanation algorithm, e.g., another instance of GNNExplainer. Our model-agnostic approach can improve the explanations produced for different GNN architectures and use any instance-based explainer to drive this process. Experiments on synthetic and real-world datasets for node and graph classification show that we can produce models that are consistently easier to explain by different algorithms. Furthermore, this increase in explainability comes at no cost to the accuracy of the model.
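The bilevel idea in the abstract — meta-train the model's parameters so that an instance-level explainer fitted on the fly quickly reaches a good explanation — can be sketched in miniature. The following is a toy, first-order sketch only: a linear model stands in for the GNN, a soft feature mask stands in for GNNExplainer, the meta-gradient is a first-order approximation (the mask is held fixed when differentiating the explainer loss with respect to the model), and every function name and hyperparameter is an illustrative assumption, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: only the first two of six features drive the target.
X = rng.normal(size=(64, 6))
true_w = np.array([2.0, -1.5, 0.0, 0.0, 0.0, 0.0])
y = X @ true_w

w = rng.normal(scale=0.1, size=6)          # "model" parameters
lr_model, lr_mask, lam = 0.05, 0.1, 0.05   # illustrative hyperparameters

def explainer_inner_loop(w, x, steps=5):
    """Fit a soft feature mask m (the stand-in explainer) so that the
    masked prediction w @ (m * x) matches the full prediction w @ x,
    with an L1 penalty pushing m toward a sparse explanation."""
    m = np.full_like(x, 0.5)
    for _ in range(steps):
        diff = w @ (m * x) - w @ x
        grad_m = 2.0 * diff * (w * x) + lam * np.sign(m)
        m = np.clip(m - lr_mask * grad_m, 0.0, 1.0)
    exp_loss = (w @ (m * x) - w @ x) ** 2 + lam * np.abs(m).sum()
    return m, exp_loss

for _ in range(200):
    # Task gradient: mean squared error over the whole batch.
    grad_task = 2.0 * X.T @ (X @ w - y) / len(X)

    # Meta term: run the inner explainer loop on one randomly sampled
    # instance, then take the explainer-loss gradient w.r.t. the model
    # weights with the mask held fixed (first-order approximation).
    x = X[rng.integers(len(X))]
    m, _ = explainer_inner_loop(w, x)
    diff = w @ (m * x) - w @ x
    grad_meta = 2.0 * diff * (m * x - x)

    # Joint update: solve the task while steering toward parameters the
    # explainer can fit easily.
    w -= lr_model * (grad_task + 0.1 * grad_meta)

task_loss = float(np.mean((X @ w - y) ** 2))
m0, exp_loss = explainer_inner_loop(w, X[0])
print(f"task loss {task_loss:.4f}, explainer loss {float(exp_loss):.4f}")
```

The paper's actual method differentiates through the explainer's inner updates (MAML-style) over a real GNN and GNNExplainer; the first-order shortcut above only conveys the structure of the joint objective.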


Similar Articles

1
A Meta-Learning Approach for Training Explainable Graph Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2024 Apr;35(4):4647-4655. doi: 10.1109/TNNLS.2022.3171398. Epub 2024 Apr 4.
2
GNNExplainer: Generating Explanations for Graph Neural Networks.
Adv Neural Inf Process Syst. 2019 Dec;32:9240-9251.
3
Global explanation supervision for Graph Neural Networks.
Front Big Data. 2024 Jul 1;7:1410424. doi: 10.3389/fdata.2024.1410424. eCollection 2024.
4
Towards Inductive and Efficient Explanations for Graph Neural Networks.
IEEE Trans Pattern Anal Mach Intell. 2024 Aug;46(8):5245-5259. doi: 10.1109/TPAMI.2024.3362584. Epub 2024 Jul 2.
5
CI-GNN: A Granger causality-inspired graph neural network for interpretable brain network-based psychiatric diagnosis.
Neural Netw. 2024 Apr;172:106147. doi: 10.1016/j.neunet.2024.106147. Epub 2024 Jan 26.
6
Augmented Graph Neural Network with hierarchical global-based residual connections.
Neural Netw. 2022 Jun;150:149-166. doi: 10.1016/j.neunet.2022.03.008. Epub 2022 Mar 10.
7
PAGE: Prototype-Based Model-Level Explanations for Graph Neural Networks.
IEEE Trans Pattern Anal Mach Intell. 2024 Oct;46(10):6559-6576. doi: 10.1109/TPAMI.2024.3379251. Epub 2024 Sep 5.
8
PSA-GNN: An augmented GNN framework with priori subgraph knowledge.
Neural Netw. 2024 May;173:106155. doi: 10.1016/j.neunet.2024.106155. Epub 2024 Feb 4.
9
Evaluating explainability for graph neural networks.
Sci Data. 2023 Mar 18;10(1):144. doi: 10.1038/s41597-023-01974-x.
10
GNN-SubNet: disease subnetwork detection with explainable graph neural networks.
Bioinformatics. 2022 Sep 16;38(Suppl_2):ii120-ii126. doi: 10.1093/bioinformatics/btac478.

Cited By

1
Machine Learning-Enabled Drug-Induced Toxicity Prediction.
Adv Sci (Weinh). 2025 Apr;12(16):e2413405. doi: 10.1002/advs.202413405. Epub 2025 Feb 3.