
Fine-Grained Learning Behavior-Oriented Knowledge Distillation for Graph Neural Networks

Authors

Liu Kang, Huang Zhenhua, Wang Chang-Dong, Gao Beibei, Chen Yunwen

Publication

IEEE Trans Neural Netw Learn Syst. 2025 May;36(5):9422-9436. doi: 10.1109/TNNLS.2024.3420895. Epub 2025 May 2.

Abstract

Knowledge distillation (KD), as an effective compression technology, is used to reduce the resource consumption of graph neural networks (GNNs) and facilitate their deployment on resource-constrained devices. Numerous studies exist on GNN distillation; however, the impacts of knowledge complexity and of differences in learning behavior between teachers and students on distillation efficiency remain underexplored. We propose a fine-grained learning behavior (FLB)-oriented KD method comprising two main components: feature knowledge decoupling (FKD) and teacher learning behavior guidance (TLBG). Specifically, FKD decouples the intermediate-layer features of the student network into two types: teacher-related features (TRFs) and downstream features (DFs), enhancing knowledge comprehension and learning efficiency by guiding the student to focus on both feature types simultaneously. TLBG maps the teacher model's learning behaviors to provide reliable guidance for correcting deviations in student learning. Extensive experiments across eight datasets and 12 baseline frameworks demonstrate that FLB significantly enhances the performance and robustness of student GNNs within the original framework.
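To make the FKD idea concrete, below is a minimal sketch of how a student's intermediate-layer features could be decoupled into teacher-related features (TRFs) and downstream features (DFs), with one loss term aligning TRFs to the teacher and another supervising DFs on the task. The module name `FKDHead`, the projection layers, the MSE alignment loss, and the weight `alpha` are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of feature knowledge decoupling (FKD); names and losses are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FKDHead(nn.Module):
    def __init__(self, student_dim: int, teacher_dim: int, num_classes: int):
        super().__init__()
        # Projection producing teacher-related features (TRFs), aligned with the teacher.
        self.to_trf = nn.Linear(student_dim, teacher_dim)
        # Projection producing downstream features (DFs), used by the task head.
        self.to_df = nn.Linear(student_dim, student_dim)
        self.classifier = nn.Linear(student_dim, num_classes)

    def forward(self, student_hidden, teacher_hidden, labels, alpha: float = 0.5):
        trf = self.to_trf(student_hidden)   # features guided toward the teacher
        df = self.to_df(student_hidden)     # features kept for the downstream task
        logits = self.classifier(df)

        # Align TRFs with (detached) teacher features; supervise DFs with the task labels.
        distill_loss = F.mse_loss(trf, teacher_hidden.detach())
        task_loss = F.cross_entropy(logits, labels)
        return task_loss + alpha * distill_loss, logits
```

In this sketch the two projections let gradient signals from the teacher-alignment loss and the task loss flow through separate feature views of the same student representation, which is one plausible way to read the abstract's claim that the student is guided to attend to both feature types at once.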

