Liu Li, Wan Pengyu, Zhang Feiyan, Zhang Youmin, Liu Qun, Wang Guoyin
Chongqing Key Laboratory of Computational Intelligence, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China; School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China; Key Laboratory of Cyberspace Big Data Intelligent Security, Ministry of Education, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China.
Chongqing Key Laboratory of Computational Intelligence, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China; Key Laboratory of Cyberspace Big Data Intelligent Security, Ministry of Education, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China.
Neural Netw. 2025 Nov;191:107815. doi: 10.1016/j.neunet.2025.107815. Epub 2025 Jul 5.
Model-level graph neural network (GNN) explainers identify general graph patterns that contribute significantly to a GNN's prediction of a target class. However, current studies often lack proper constraints during explanation generation, resulting in unreliable and atypical graph patterns that limit explanation quality. To address this issue, we propose MOSE, a simple yet effective model-level explainer that learns MOdel-level explanations via a Subgraph order Embedding space. MOSE employs a graph encoder to learn an embedding space in which the subgraph relationships among graphs are preserved as an order. A score function with a greedy sampling strategy is then introduced to efficiently generate graph-pattern candidates under the constraint of the subgraph order embedding, ensuring that the candidates are reliable and typical patterns of the real data. The explanations are further selected from the candidates according to the probabilities predicted by the GNN being explained. Additionally, by constructing induced graphs, we extend MOSE to the node classification task, which has rarely been studied before, enhancing MOSE's generality. Extensive experiments conducted on two synthetic datasets and six real-world datasets demonstrate the effectiveness of MOSE across various metrics, including predictive accuracy, model utility, and model efficiency.
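The central constraint described above is an embedding space in which subgraph relationships are preserved as an elementwise order. Below is a minimal sketch, not the authors' implementation, of a standard order-embedding objective that such a graph encoder could be trained with: if graph g is a subgraph of graph h, the encoder output z_g should lie below z_h in every coordinate. The margin value and all tensor names are illustrative assumptions.

```python
# Sketch of a generic order-embedding training objective (assumed form,
# not MOSE's exact loss): subgraph pairs are pushed into elementwise
# order, non-subgraph pairs are pushed apart by a margin.
import torch


def order_penalty(z_sub: torch.Tensor, z_sup: torch.Tensor) -> torch.Tensor:
    # Zero exactly when z_sub <= z_sup in every coordinate, i.e. when the
    # pair is consistently ordered as "subgraph below supergraph".
    return torch.clamp(z_sub - z_sup, min=0.0).pow(2).sum(dim=-1)


def order_embedding_loss(
    z_sub: torch.Tensor,    # encoder outputs for true subgraphs
    z_sup: torch.Tensor,    # encoder outputs for their supergraphs
    z_neg_a: torch.Tensor,  # encoder outputs for non-subgraph pairs
    z_neg_b: torch.Tensor,
    margin: float = 1.0,
) -> torch.Tensor:
    # Positive pairs: drive the order penalty to zero.
    pos = order_penalty(z_sub, z_sup)
    # Negative pairs: keep the penalty above a margin so unrelated
    # graphs are not spuriously ordered.
    neg = torch.clamp(margin - order_penalty(z_neg_a, z_neg_b), min=0.0)
    return (pos + neg).mean()
```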
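The greedy sampling step can then be read as a constrained search: grow a candidate pattern one step at a time, discard growth steps that leave the order-embedding region of real data, and keep the step the explained GNN scores highest for the target class. The sketch below is one plausible reading under those assumptions; it reuses the `order_penalty` helper from the previous sketch, and `expand`, `gnn_prob`, `anchor_z`, and `tau` are all hypothetical names.

```python
# Sketch of greedy pattern search under an order-embedding constraint
# (an assumed reading of the abstract, not the authors' algorithm).
from typing import Callable, Iterable, TypeVar

import torch

Graph = TypeVar("Graph")  # stand-in for whatever graph type the encoder accepts


def greedy_explain(
    seed: Graph,
    encoder: Callable[[Graph], torch.Tensor],
    gnn_prob: Callable[[Graph], torch.Tensor],   # P(target class | graph) from the explained GNN
    expand: Callable[[Graph], Iterable[Graph]],  # hypothetical one-step growth operator
    anchor_z: torch.Tensor,  # embedding of a real graph of the target class
    steps: int = 10,
    tau: float = 0.1,        # tolerance on the order-embedding constraint
) -> Graph:
    pattern = seed
    for _ in range(steps):
        # Feasible candidates stay (approximately) below the anchor in
        # the order embedding, i.e. remain subgraph-like patterns that
        # occur in real data.
        feasible = [
            g for g in expand(pattern)
            if order_penalty(encoder(g), anchor_z).item() <= tau
        ]
        if not feasible:
            break
        # Greedily keep the candidate with the highest predicted
        # target-class probability.
        pattern = max(feasible, key=lambda g: float(gnn_prob(g)))
    return pattern
```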