Enhanced Scalable Graph Neural Network via Knowledge Distillation.

Author information

Mai Chengyuan, Chang Yaomin, Chen Chuan, Zheng Zibin

Publication information

IEEE Trans Neural Netw Learn Syst. 2025 Jan;36(1):1258-1271. doi: 10.1109/TNNLS.2023.3333846. Epub 2025 Jan 7.

DOI: 10.1109/TNNLS.2023.3333846
PMID: 37999962
Abstract

Graph neural networks (GNNs) have achieved state-of-the-art performance in various graph representation learning scenarios. However, when applied to real-world graph data, GNNs encounter scalability issues. Existing GNNs often carry a high computational load in both the training and inference stages, making them unable to meet the performance needs of large-scale scenarios with many nodes. Although several studies on scalable GNNs have been conducted, they either deliver only limited scalability improvements or achieve scalability at the expense of effectiveness. Inspired by the success of knowledge distillation (KD) in preserving performance while balancing scalability in computer vision and natural language processing, we propose an enhanced scalable GNN via KD (KD-SGNN) to improve the scalability and effectiveness of GNNs. On the one hand, KD-SGNN adopts the idea of decoupled GNNs: it separates feature transformation from feature propagation and leverages preprocessing techniques to improve scalability. On the other hand, KD-SGNN proposes two KD mechanisms, soft-target (ST) distillation and shallow imitation (SI) distillation, to improve expressiveness. The scalability and effectiveness of KD-SGNN are evaluated on multiple real-world datasets, and the effectiveness of the proposed KD mechanisms is further verified through comprehensive analyses.
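The abstract's two ingredients lend themselves to a short sketch. Below is a minimal PyTorch-style illustration, not the authors' implementation: the decoupled design precomputes feature propagation once, so training reduces to an MLP over propagated features, and the teacher's knowledge is transferred through a soft-target loss plus an imitation loss on hidden representations. The function names, the temperature T, and the exact form of the SI loss are assumptions for illustration.

import torch
import torch.nn.functional as F

def precompute_propagation(adj_norm, x, num_hops):
    # Decoupled feature propagation: apply the normalized adjacency
    # num_hops times as one-off preprocessing (no learned weights), so
    # the training loop never has to touch the graph structure again.
    for _ in range(num_hops):
        x = adj_norm @ x  # works for dense or sparse adj_norm
    return x

def soft_target_loss(student_logits, teacher_logits, T=2.0):
    # Soft-target (ST) distillation: match the student's softened class
    # distribution to the teacher's; the T*T factor keeps gradient
    # magnitudes comparable across temperatures (Hinton-style KD).
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def shallow_imitation_loss(student_hidden, teacher_hidden):
    # Shallow imitation (SI) distillation, sketched here as matching a
    # shallow student representation to the teacher's; the paper's exact
    # formulation may differ.
    return F.mse_loss(student_hidden, teacher_hidden)

# A hypothetical training objective: supervised cross-entropy plus the two
# distillation terms, weighted by coefficients alpha and beta:
#   loss = F.cross_entropy(s_logits[mask], y[mask])
#          + alpha * soft_target_loss(s_logits, t_logits)
#          + beta * shallow_imitation_loss(s_hidden, t_hidden)

Because propagation is precomputed, each training step costs no more than an MLP forward/backward pass, which is what gives decoupled GNNs their scalability on graphs with many nodes.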


Similar articles

1. Enhanced Scalable Graph Neural Network via Knowledge Distillation.
IEEE Trans Neural Netw Learn Syst. 2025 Jan;36(1):1258-1271. doi: 10.1109/TNNLS.2023.3333846. Epub 2025 Jan 7.

2. Shared Growth of Graph Neural Networks via Prompted Free-Direction Knowledge Distillation.
IEEE Trans Pattern Anal Mach Intell. 2025 Jun;47(6):4377-4394. doi: 10.1109/TPAMI.2025.3543211. Epub 2025 May 7.

3. Beyond low-pass filtering on large-scale graphs via Adaptive Filtering Graph Neural Networks.
Neural Netw. 2024 Jan;169:1-10. doi: 10.1016/j.neunet.2023.09.042. Epub 2023 Oct 11.

4. On Representation Knowledge Distillation for Graph Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2024 Apr;35(4):4656-4667. doi: 10.1109/TNNLS.2022.3223018. Epub 2024 Apr 4.

5. Decoupled graph knowledge distillation: A general logits-based method for learning MLPs on graphs.
Neural Netw. 2024 Nov;179:106567. doi: 10.1016/j.neunet.2024.106567. Epub 2024 Jul 23.

6. Position-Sensing Graph Neural Networks: Proactively Learning Nodes Relative Positions.
IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):5787-5794. doi: 10.1109/TNNLS.2024.3374464. Epub 2025 Feb 28.

7. Automatic Design of Deep Graph Neural Networks With Decoupled Mode.
IEEE Trans Neural Netw Learn Syst. 2025 May;36(5):7918-7930. doi: 10.1109/TNNLS.2024.3438609. Epub 2025 May 2.

8. DropAGG: Robust Graph Neural Networks via Drop Aggregation.
Neural Netw. 2023 Jun;163:65-74. doi: 10.1016/j.neunet.2023.03.022. Epub 2023 Mar 29.

9. Generalizing Graph Neural Networks on Out-of-Distribution Graphs.
IEEE Trans Pattern Anal Mach Intell. 2024 Jan;46(1):322-337. doi: 10.1109/TPAMI.2023.3321097. Epub 2023 Dec 5.

10. Fine-Grained Learning Behavior-Oriented Knowledge Distillation for Graph Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2025 May;36(5):9422-9436. doi: 10.1109/TNNLS.2024.3420895. Epub 2025 May 2.