Dai L N, Xu D H, Gao Y F
Zhejiang Financial College, Xueyuan Street 118, Qiantang District, 310018 Hangzhou, Zhejiang Province, China.
Sun Yat-sen University, 132 Waihuan East Road, Panyu District, 510006 Guangzhou, Guangdong Province, China.
J Biomed Inform. 2025 May;165:104813. doi: 10.1016/j.jbi.2025.104813. Epub 2025 Mar 21.
Current explainability strategies for Graph Neural Networks (GNNs) often focus on individual nodes or edges, neglecting the significance of key subgraphs in decision-making processes. This limitation can result in dispersed and less reliable explanatory outcomes, particularly for complex tasks. This paper proposes a key subgraph retrieval method based on Euclidean distance, leveraging node representations obtained through training on the BA3 and Mutagenicity datasets to interpret GNN decisions. The proposed method achieves accuracies of 99.25% and 82.40% on the respective datasets. Performance comparison experiments with other mainstream explainability strategies, along with visualization analyses, demonstrate the effectiveness and robustness of this approach.
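The abstract's core idea, retrieving candidate key-subgraph nodes by Euclidean distance between trained node representations, can be illustrated with a minimal sketch. The function name, the anchor-node framing, and the toy embeddings below are hypothetical illustrations, not the paper's actual procedure or data.

```python
import numpy as np

def retrieve_key_subgraph(node_embeddings, anchor_idx, k=3):
    """Hypothetical sketch: rank all nodes by Euclidean distance to an
    anchor node's embedding and keep the k nearest as a candidate key
    subgraph. The paper's exact retrieval procedure may differ."""
    anchor = node_embeddings[anchor_idx]
    # Euclidean distance from every node embedding to the anchor
    dists = np.linalg.norm(node_embeddings - anchor, axis=1)
    order = np.argsort(dists)  # the anchor itself sorts first (distance 0)
    return order[:k], dists[order[:k]]

# Toy example: six nodes with 3-dimensional embeddings; the first three
# cluster near the origin, so they form the retrieved "subgraph".
emb = np.array([[0.0, 0.0, 0.0],
                [0.1, 0.0, 0.0],
                [0.0, 0.2, 0.0],
                [1.0, 1.0, 1.0],
                [0.9, 1.1, 1.0],
                [5.0, 5.0, 5.0]])
nodes, d = retrieve_key_subgraph(emb, anchor_idx=0, k=3)
```

In a GNN explainability setting, the embeddings would come from the trained model's hidden layers, and the retrieved node set would be intersected with the input graph's edges to yield a connected explanatory subgraph.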