

Unifying topological structure and self-attention mechanism for node classification in directed networks.

Authors

Peng Yue, Xia Jiwen, Liu Dafeng, Liu Miao, Xiao Long, Shi Benyun

Affiliations

College of Computer and Information Engineering, Nanjing Tech University, Nanjing, 211800, China.

College of Artificial Intelligence, Nanjing Tech University, Nanjing, 211800, China.

Publication

Sci Rep. 2025 Jan 4;15(1):805. doi: 10.1038/s41598-024-84816-z.

DOI:10.1038/s41598-024-84816-z
PMID:39755762
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11700118/
Abstract

Graph data is essential for modeling complex relationships among entities. Graph Neural Networks (GNNs) have demonstrated effectiveness in processing low-order undirected graph data; however, in complex directed graphs, relationships between nodes extend beyond first-order connections and encompass higher-order relationships. Additionally, the asymmetry introduced by edge directionality further complicates node interactions, presenting greater challenges for extracting node information. In this paper, we propose TWC-GNN, a novel graph neural network, as a solution to this problem. TWC-GNN uses node degrees to define higher-order topological structures, assess node importance, and capture mutual interactions between central nodes and their adjacent counterparts. This approach improves the understanding of complex relationships within the network. Furthermore, by integrating self-attention mechanisms, TWC-GNN effectively gathers higher-order node information in addition to first-order node information. Experimental results demonstrate that integrating topological structures and higher-order node information is crucial for the learning process of graph neural networks, particularly on directed graphs, and leads to improved classification accuracy.
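As a rough illustration of the two ingredients the abstract combines (degree-based topological features and attention that respects edge direction), here is a minimal NumPy sketch of one attention layer over a small directed graph. This is not the paper's TWC-GNN architecture: the function names, shapes, the use of in/out degrees as the only node features, and the choice of attending over out-neighbors plus a self-loop are all illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def directed_attention_layer(A, X, W, a):
    """One attention layer on a directed graph (illustrative sketch).
    A: (n, n) adjacency with A[i, j] = 1 for edge i -> j.
    X: (n, d) node features; W: (d, h) projection; a: (2*h,) attention vector."""
    n = A.shape[0]
    H = X @ W                                   # project node features
    out = np.zeros_like(H)
    for i in range(n):
        nbrs = np.flatnonzero(A[i])             # out-neighbors of i (direction matters)
        nbrs = np.append(nbrs, i)               # include a self-loop
        scores = np.array([a @ np.concatenate([H[i], H[j]]) for j in nbrs])
        alpha = softmax(np.tanh(scores))        # attention coefficients over neighbors
        out[i] = alpha @ H[nbrs]                # attention-weighted aggregation
    return out

# Toy directed graph: 0 -> 1, 0 -> 2, 1 -> 2
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)
in_deg, out_deg = A.sum(axis=0), A.sum(axis=1)
X = np.stack([in_deg, out_deg], axis=1)         # degree-based topological features
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))
a = rng.normal(size=(8,))
H = directed_attention_layer(A, X, W, a)
print(H.shape)                                  # one hidden vector per node: (3, 4)
```

Because the adjacency matrix is asymmetric, node 2 (no outgoing edges) aggregates only its self-loop, while node 0 attends over both of its out-neighbors; stacking such layers is one common way higher-order (multi-hop) information accumulates.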


Figures (PMC11700118):
Fig 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c773/11700118/2a8cfc39a3b9/41598_2024_84816_Fig1_HTML.jpg
Fig 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c773/11700118/43004c2be6ba/41598_2024_84816_Fig2_HTML.jpg
Fig 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c773/11700118/92aaa360c1ac/41598_2024_84816_Fig3_HTML.jpg
Fig 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c773/11700118/8634db36161a/41598_2024_84816_Fig4_HTML.jpg
Fig 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c773/11700118/07c0f3178393/41598_2024_84816_Fig5_HTML.jpg
Fig 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c773/11700118/3377fb4ae148/41598_2024_84816_Fig6_HTML.jpg
Fig a: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c773/11700118/c59bf0723eb1/41598_2024_84816_Figa_HTML.jpg

Similar articles

1. Unifying topological structure and self-attention mechanism for node classification in directed networks.
Sci Rep. 2025 Jan 4;15(1):805. doi: 10.1038/s41598-024-84816-z.
2. Co-embedding of edges and nodes with deep graph convolutional neural networks.
Sci Rep. 2023 Oct 8;13(1):16966. doi: 10.1038/s41598-023-44224-1.
3. An Integrated Fuzzy Neural Network and Topological Data Analysis for Molecular Graph Representation Learning and Property Forecasting.
Mol Inform. 2025 Mar;44(3):e202400335. doi: 10.1002/minf.202400335.
4. muxGNN: Multiplex Graph Neural Network for Heterogeneous Graphs.
IEEE Trans Pattern Anal Mach Intell. 2023 Sep;45(9):11067-11078. doi: 10.1109/TPAMI.2023.3263079. Epub 2023 Aug 7.
5. CCP-GNN: Competitive Covariance Pooling for Improving Graph Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2025 Apr;36(4):6395-6406. doi: 10.1109/TNNLS.2024.3390249. Epub 2025 Apr 4.
6. Augmented Graph Neural Network with hierarchical global-based residual connections.
Neural Netw. 2022 Jun;150:149-166. doi: 10.1016/j.neunet.2022.03.008. Epub 2022 Mar 10.
7. Graph Aggregating-Repelling Network: Do Not Trust All Neighbors in Heterophilic Graphs.
Neural Netw. 2024 Oct;178:106484. doi: 10.1016/j.neunet.2024.106484. Epub 2024 Jun 21.
8. Harnessing collective structure knowledge in data augmentation for graph neural networks.
Neural Netw. 2024 Dec;180:106651. doi: 10.1016/j.neunet.2024.106651. Epub 2024 Aug 23.
9. Graph-Graph Similarity Network.
IEEE Trans Neural Netw Learn Syst. 2024 Jul;35(7):9136-9146. doi: 10.1109/TNNLS.2022.3218936. Epub 2024 Jul 10.
10. SP-GNN: Learning structure and position information from graphs.
Neural Netw. 2023 Apr;161:505-514. doi: 10.1016/j.neunet.2023.01.051. Epub 2023 Feb 4.

Cited by

1. Knowledge Distillation for Molecular Property Prediction: A Scalability Analysis.
Adv Sci (Weinh). 2025 Jun;12(22):e2503271. doi: 10.1002/advs.202503271. Epub 2025 Apr 9.

References

1. One-Stage Shifted Laplacian Refining for Multiple Kernel Clustering.
IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):11501-11513. doi: 10.1109/TNNLS.2023.3262590. Epub 2024 Aug 5.
2. Data-Driven Tabulation for Chemistry Integration Using Recurrent Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2023 Sep;34(9):5392-5402. doi: 10.1109/TNNLS.2022.3175301. Epub 2023 Sep 1.
3. Learning Representations by Graphical Mutual Information Estimation and Maximization.
IEEE Trans Pattern Anal Mach Intell. 2023 Jan;45(1):722-737. doi: 10.1109/TPAMI.2022.3147886. Epub 2022 Dec 5.
4. A Comprehensive Survey on Graph Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2021 Jan;32(1):4-24. doi: 10.1109/TNNLS.2020.2978386. Epub 2021 Jan 4.
5. Cliques and cavities in the human connectome.
J Comput Neurosci. 2018 Feb;44(1):115-145. doi: 10.1007/s10827-017-0672-6. Epub 2017 Nov 16.
6. node2vec: Scalable Feature Learning for Networks.
KDD. 2016 Aug;2016:855-864. doi: 10.1145/2939672.2939754.
7. Higher-order organization of complex networks.
Science. 2016 Jul 8;353(6295):163-6. doi: 10.1126/science.aad9029.
8. Deep learning.
Nature. 2015 May 28;521(7553):436-44. doi: 10.1038/nature14539.
9. Finding community structure in very large networks.
Phys Rev E Stat Nonlin Soft Matter Phys. 2004 Dec;70(6 Pt 2):066111. doi: 10.1103/PhysRevE.70.066111. Epub 2004 Dec 6.
10. Modeling interactome: scale-free or geometric?
Bioinformatics. 2004 Dec 12;20(18):3508-15. doi: 10.1093/bioinformatics/bth436. Epub 2004 Jul 29.