

On the probability of necessity and sufficiency of explaining Graph Neural Networks: A lower bound optimization approach.

Authors

Cai Ruichu, Zhu Yuxuan, Chen Xuexin, Fang Yuan, Wu Min, Qiao Jie, Hao Zhifeng

Affiliations

School of Computer Science, Guangdong University of Technology, Guangzhou 510006, China; Peng Cheng Laboratory, Shenzhen 518066, China.

School of Computer Science, Guangdong University of Technology, Guangzhou 510006, China.

Publication

Neural Netw. 2025 Apr;184:107065. doi: 10.1016/j.neunet.2024.107065. Epub 2024 Dec 24.

Abstract

The explainability of Graph Neural Networks (GNNs) is critical to many GNN applications, yet it remains a significant challenge. A convincing explanation should be both necessary and sufficient. However, existing GNN explanation approaches focus on only one of the two aspects, or on a heuristic trade-off between them. Theoretically, the Probability of Necessity and Sufficiency (PNS) has the potential to identify the most necessary and sufficient explanation, since it mathematically quantifies both properties of an explanation. Nevertheless, PNS is hard to obtain because of its non-monotonicity, and counterfactual estimation is challenging, which limits its wide use. To address the non-identifiability of PNS, we resort to a lower bound of PNS that can be optimized via counterfactual estimation, and propose Necessary and Sufficient Explanation for GNN (NSEG), a framework that optimizes this lower bound. Specifically, we depict the GNN as a structural causal model (SCM) and estimate counterfactual probabilities via interventions under the SCM. Additionally, we leverage continuous masks with a sampling strategy to optimize the lower bound and enhance scalability. Empirical results demonstrate that NSEG outperforms state-of-the-art methods, consistently generating the most necessary and sufficient explanations. The implementation of NSEG is available at https://github.com/EthanChu7/NSEG.
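The continuous-mask optimization described in the abstract can be illustrated with a minimal numerical sketch. This is NOT the paper's NSEG implementation: a hypothetical linear scorer over eight candidate edges stands in for a trained GNN, and the objective P(y | keep mask) − P(y | remove mask) is a simplified surrogate for the paper's counterfactual PNS lower bound. All names and values below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only -- NOT the paper's NSEG implementation.
# A linear scorer over 8 candidate edges stands in for a trained GNN;
# edges 0 and 1 dominate the prediction by construction.
edge_contrib = np.array([2.0, 2.0, 0.02, -0.03, 0.01, -0.02, 0.005, 0.015])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def model(kept):
    # Probability of the target class given a soft edge mask `kept`.
    return sigmoid(np.dot(edge_contrib, kept))

def pns_lower_bound(mask):
    # Sufficiency-style term: prediction with ONLY the explanation kept.
    # Necessity-style term: prediction with the explanation REMOVED.
    # Their difference is a simplified surrogate for the PNS lower bound.
    return model(mask) - model(1.0 - mask)

# Optimize a continuous mask (parameterized by logits) with gradient
# ascent; gradients are estimated by central finite differences for clarity.
logits = np.zeros(8)
lr, eps = 0.5, 1e-4
for _ in range(300):
    grad = np.zeros(8)
    for i in range(8):
        bump = np.zeros(8)
        bump[i] = eps
        grad[i] = (pns_lower_bound(sigmoid(logits + bump))
                   - pns_lower_bound(sigmoid(logits - bump))) / (2 * eps)
    logits += lr * grad

mask = sigmoid(logits)
explanation = set(np.where(mask > 0.5)[0])  # hard explanation from soft mask
```

Under this toy setup, the optimized mask keeps the high-contribution edges (0 and 1) and drops the edges that push against the target class, mirroring how maximizing a PNS-style objective selects a subgraph that is both sufficient (keeping it preserves the prediction) and necessary (removing it changes the prediction).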

