

CI-GNN: A Granger causality-inspired graph neural network for interpretable brain network-based psychiatric diagnosis.

Affiliations

National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China.

Department of Computer Science, Vrije Universiteit Amsterdam, Amsterdam, Netherlands; Machine Learning Group, UiT - Arctic University of Norway, Tromsø, Norway.

Publication

Neural Netw. 2024 Apr;172:106147. doi: 10.1016/j.neunet.2024.106147. Epub 2024 Jan 26.

Abstract

There is a recent trend to leverage the power of graph neural networks (GNNs) for brain-network-based psychiatric diagnosis, which, in turn, also motivates an urgent need for psychiatrists to fully understand the decision behavior of the GNNs used. However, most existing GNN explainers are either post-hoc, in which another interpretive model needs to be created to explain a well-trained GNN, or do not consider the causal relationship between the extracted explanation and the decision, such that the explanation itself contains spurious correlations and suffers from weak faithfulness. In this work, we propose a Granger causality-inspired graph neural network (CI-GNN), a built-in interpretable model that is able to identify the most influential subgraph (i.e., functional connectivity within brain regions) that is causally related to the decision (e.g., major depressive disorder patients versus healthy controls), without training an auxiliary interpretive network. CI-GNN learns disentangled subgraph-level representations α and β that encode, respectively, the causal and non-causal aspects of the original graph under a graph variational autoencoder framework, regularized by a conditional mutual information (CMI) constraint. We theoretically justify the validity of the CMI regularization in capturing the causal relationship. We also empirically evaluate the performance of CI-GNN against three baseline GNNs and four state-of-the-art GNN explainers on synthetic data and three large-scale brain disease datasets. We observe that CI-GNN achieves the best performance across a wide range of metrics and provides more reliable and concise explanations that are supported by clinical evidence. The source code and implementation details of CI-GNN are freely available at the GitHub repository (https://github.com/ZKZ-Brain/CI-GNN/).
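The abstract does not specify how the CMI constraint is estimated during training (CI-GNN operates on continuous latent representations, for which the paper would need its own estimator). Purely to illustrate the quantity being regularized, below is a minimal plug-in estimator of the conditional mutual information I(X; Y | Z) for discrete samples; the function name and the discrete setting are illustrative assumptions, not part of CI-GNN.

```python
from collections import Counter
from math import log

def conditional_mutual_information(xs, ys, zs):
    """Plug-in estimate of I(X; Y | Z) in nats from paired discrete samples.

    Uses the identity
        I(X; Y | Z) = sum_{x,y,z} p(x,y,z) * log[ p(z) p(x,y,z) / (p(x,z) p(y,z)) ],
    with all probabilities replaced by empirical frequencies.
    """
    n = len(xs)
    pxyz = Counter(zip(xs, ys, zs))  # joint counts over (x, y, z)
    pxz = Counter(zip(xs, zs))       # marginal counts over (x, z)
    pyz = Counter(zip(ys, zs))       # marginal counts over (y, z)
    pz = Counter(zs)                 # marginal counts over z
    cmi = 0.0
    for (x, y, z), c in pxyz.items():
        # Counts of n cancel inside the log, so raw counts can be used directly.
        cmi += (c / n) * log((pz[z] * c) / (pxz[(x, z)] * pyz[(y, z)]))
    return cmi
```

For example, when Y is a copy of X and Z is constant, the estimate equals the entropy of X (log 2 nats for a balanced binary X), whereas it is zero when X and Y are conditionally independent given Z. In CI-GNN's setting, driving such a quantity toward zero for the non-causal representation would penalize any label information leaking past the causal representation.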

