IEEE Trans Pattern Anal Mach Intell. 2022 Nov;44(11):7581-7596. doi: 10.1109/TPAMI.2021.3115452. Epub 2022 Oct 4.
Graph Neural Networks (GNNs) are a popular approach for predicting graph-structured data. Because GNNs tightly entangle the input graph into the neural network structure, common explainable AI approaches are not applicable. To a large extent, GNNs have so far remained black boxes for the user. In this paper, we show that GNNs can in fact be naturally explained using higher-order expansions, i.e., by identifying groups of edges that jointly contribute to the prediction. Practically, we find that such explanations can be extracted using a nested attribution scheme, where existing techniques such as layer-wise relevance propagation (LRP) can be applied at each step. The output is a collection of walks into the input graph that are relevant for the prediction. Our novel explanation method, which we denote by GNN-LRP, is applicable to a broad range of graph neural networks and lets us extract practically relevant insights on sentiment analysis of text data, structure-property relationships in quantum chemistry, and image classification.
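To make the walk-based (higher-order) decomposition concrete, the sketch below shows a deliberately simplified case: a toy *linear* two-layer GNN in NumPy, not the authors' full GNN-LRP with LRP-gamma rules. In this linear setting the prediction decomposes exactly into contributions of walks (j -> k -> l) through the graph, which illustrates the kind of output GNN-LRP produces; all names (Lam, W1, W2, etc.) are illustrative assumptions, not code from the paper.

import numpy as np

rng = np.random.default_rng(0)

n_nodes, d_in, d_hid = 4, 3, 5
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
Lam = A + np.eye(n_nodes)                 # adjacency with self-loops (message-passing operator)
X = rng.normal(size=(n_nodes, d_in))      # node features
W1 = rng.normal(size=(d_in, d_hid))       # layer-1 weights
W2 = rng.normal(size=(d_hid, 1))          # layer-2 weights (scalar readout per node)

# Forward pass of the linear GNN: H1 = Lam X W1, H2 = Lam H1 W2, f = sum over nodes.
H1 = Lam @ X @ W1
H2 = Lam @ H1 @ W2
f = H2.sum()

# Walk relevances: R[j, k, l] is the contribution of the walk j -> k -> l,
# i.e. node j's features routed through node k at layer 1 and node l at layer 2.
R = np.zeros((n_nodes, n_nodes, n_nodes))
for j in range(n_nodes):
    for k in range(n_nodes):
        for l in range(n_nodes):
            R[j, k, l] = Lam[l, k] * Lam[k, j] * (X[j] @ W1 @ W2).item()

# Conservation check: walk relevances sum back to the prediction exactly.
assert np.isclose(R.sum(), f)
print("prediction:", f, " sum of walk relevances:", R.sum())

For nonlinear GNNs the decomposition is no longer exact in this simple form; the paper's nested attribution scheme instead applies an LRP rule at each message-passing step, restricting the propagated relevance to one node of the walk per layer.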