

A Meta-Learning Approach for Training Explainable Graph Neural Networks.

Authors

Spinelli Indro, Scardapane Simone, Uncini Aurelio

Publication

IEEE Trans Neural Netw Learn Syst. 2024 Apr;35(4):4647-4655. doi: 10.1109/TNNLS.2022.3171398. Epub 2024 Apr 4.

Abstract

In this article, we investigate the degree of explainability of graph neural networks (GNNs). Existing explainers work by finding global/local subgraphs to explain a prediction, but they are applied after a GNN has already been trained. Here, we propose a meta-explainer for improving the level of explainability of a GNN directly at training time, by steering the optimization procedure toward minima that allow post hoc explainers to achieve better results, without sacrificing the overall accuracy of the GNN. Our framework (called MATE, MetA-Train to Explain) jointly trains a model to solve the original task, e.g., node classification, and to provide easily processable outputs for downstream algorithms that explain the model's decisions in a human-friendly way. In particular, we meta-train the model's parameters to quickly minimize the error of an instance-level GNNExplainer trained on the fly on randomly sampled nodes. The final internal representation relies on a set of features that can be "better" understood by an explanation algorithm, e.g., another instance of GNNExplainer. Our model-agnostic approach can improve the explanations produced for different GNN architectures and can use any instance-based explainer to drive this process. Experiments on synthetic and real-world datasets for node and graph classification show that we can produce models that are consistently easier to explain by different algorithms. Furthermore, this increase in explainability comes at no cost to the accuracy of the model.
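The bilevel idea described above — an inner loop that fits an instance-level explainer on randomly sampled nodes, and an outer step that updates the model on the task loss plus the explainer's loss — can be sketched in miniature. The following is a first-order toy, not the authors' implementation: a linear classifier stands in for the GNN, a learnable feature mask stands in for GNNExplainer, and all hyperparameters (learning rates, sparsity weight, loop lengths) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: the label depends only on the first two of five features,
# so a good explainer mask should highlight those two.
X = rng.normal(size=(64, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(5)          # stand-in "GNN": a linear model on node features
alpha, beta = 0.5, 0.1   # outer (model) and inner (explainer) step sizes

for step in range(300):
    # ---- task gradient: standard logistic-regression training signal ----
    p = sigmoid(X @ w)
    g_task = X.T @ (p - y) / len(y)

    # ---- inner loop: fit a feature-mask explainer on the fly ----
    m_logits = np.zeros(5)                              # mask starts at 0.5
    idx = rng.choice(len(y), size=16, replace=False)    # randomly sampled nodes
    for _ in range(10):
        m = sigmoid(m_logits)
        pm = sigmoid((X[idx] * m) @ w)
        # gradient of masked-prediction loss w.r.t. mask logits,
        # plus a small sparsity penalty pushing the mask toward zero
        g_m = (X[idx] * w).T @ (pm - y[idx]) / len(idx) * m * (1 - m)
        g_m += 0.05 * m * (1 - m)
        m_logits -= beta * g_m

    # ---- outer step: task loss + (first-order) explainer loss ----
    m = sigmoid(m_logits)
    pm = sigmoid((X[idx] * m) @ w)
    g_expl = (X[idx] * m).T @ (pm - y[idx]) / len(idx)
    w -= alpha * (g_task + 0.5 * g_expl)

mask = sigmoid(m_logits)   # should weight the two informative features higher
```

The outer update here is first-order (it ignores how the inner loop's result depends on `w`), which is a common simplification of meta-gradient training; the paper's actual procedure differentiates through the explainer's adaptation and operates on subgraph structure rather than a flat feature mask.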

