

Fine-Grained Learning Behavior-Oriented Knowledge Distillation for Graph Neural Networks.

Author Information

Liu Kang, Huang Zhenhua, Wang Chang-Dong, Gao Beibei, Chen Yunwen

Publication Information

IEEE Trans Neural Netw Learn Syst. 2025 May;36(5):9422-9436. doi: 10.1109/TNNLS.2024.3420895. Epub 2025 May 2.

DOI: 10.1109/TNNLS.2024.3420895
PMID: 39012738
Abstract

Knowledge distillation (KD), as an effective compression technology, is used to reduce the resource consumption of graph neural networks (GNNs) and facilitate their deployment on resource-constrained devices. Numerous studies exist on GNN distillation; however, the impacts of knowledge complexity and differences in learning behavior between teachers and students on distillation efficiency remain underexplored. We propose a KD method for fine-grained learning behavior (FLB), comprising two main components: feature knowledge decoupling (FKD) and teacher learning behavior guidance (TLBG). Specifically, FKD decouples the intermediate-layer features of the student network into two types: teacher-related features (TRFs) and downstream features (DFs), enhancing knowledge comprehension and learning efficiency by guiding the student to simultaneously focus on these features. TLBG maps the teacher model's learning behaviors to provide reliable guidance for correcting deviations in student learning. Extensive experiments across eight datasets and 12 baseline frameworks demonstrate that FLB significantly enhances the performance and robustness of student GNNs within the original framework.

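The abstract only describes FKD and TLBG at a high level. As a rough illustration of the decoupling idea, the PyTorch sketch below splits a student's intermediate features into a teacher-aligned branch (TRFs) and a task branch (DFs), and weights the feature-alignment loss by whether the teacher classifies each node correctly as a crude stand-in for "learning behavior" guidance. Every name here (FeatureDecoupledStudent, flb_style_loss, the confidence weighting) is hypothetical: this is a minimal sketch of one plausible reading under stated assumptions, not the authors' implementation.

```python
# Illustrative sketch only: module/variable names and the confidence weighting
# are assumptions for exposition, not the paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureDecoupledStudent(nn.Module):
    """Student whose intermediate features are split into teacher-related
    features (TRFs) and downstream features (DFs), mirroring the FKD idea."""

    def __init__(self, in_dim, hid_dim, teacher_dim, num_classes):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hid_dim)         # stand-in for a GNN layer
        self.trf_head = nn.Linear(hid_dim, teacher_dim)   # aligned with teacher features
        self.df_head = nn.Linear(hid_dim, hid_dim)        # feeds the downstream task
        self.classifier = nn.Linear(hid_dim, num_classes)

    def forward(self, adj_norm, x):
        h = F.relu(adj_norm @ self.encoder(x))   # one round of message passing
        trf = self.trf_head(h)                   # teacher-related features
        df = F.relu(self.df_head(h))             # downstream features
        logits = self.classifier(adj_norm @ df)
        return trf, logits


def flb_style_loss(trf, logits, teacher_feat, teacher_logits, labels, alpha=0.5):
    """Task loss plus feature alignment; nodes the teacher classifies correctly
    get a higher distillation weight (a crude proxy for teacher behavior)."""
    task = F.cross_entropy(logits, labels)
    teacher_correct = (teacher_logits.argmax(dim=-1) == labels).float()
    weight = 0.5 + 0.5 * teacher_correct                    # keep weights in [0.5, 1]
    align = (weight * (trf - teacher_feat).pow(2).mean(dim=-1)).mean()
    return task + alpha * align


if __name__ == "__main__":
    n, in_dim, hid, t_dim, c = 8, 16, 32, 64, 3
    adj = torch.eye(n)                                      # toy normalized adjacency
    x = torch.randn(n, in_dim)
    labels = torch.randint(0, c, (n,))
    teacher_feat, teacher_logits = torch.randn(n, t_dim), torch.randn(n, c)

    student = FeatureDecoupledStudent(in_dim, hid, t_dim, c)
    trf, logits = student(adj, x)
    loss = flb_style_loss(trf, logits, teacher_feat, teacher_logits, labels)
    loss.backward()
    print(float(loss))
```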

Similar Articles

1. Fine-Grained Learning Behavior-Oriented Knowledge Distillation for Graph Neural Networks. IEEE Trans Neural Netw Learn Syst. 2025 May;36(5):9422-9436. doi: 10.1109/TNNLS.2024.3420895. Epub 2025 May 2.
2. Multiview attention networks for fine-grained watershed categorization via knowledge distillation. PLoS One. 2025 Jan 16;20(1):e0313115. doi: 10.1371/journal.pone.0313115. eCollection 2025.
3. On Representation Knowledge Distillation for Graph Neural Networks. IEEE Trans Neural Netw Learn Syst. 2024 Apr;35(4):4656-4667. doi: 10.1109/TNNLS.2022.3223018. Epub 2024 Apr 4.
4. Decoupled graph knowledge distillation: A general logits-based method for learning MLPs on graphs. Neural Netw. 2024 Nov;179:106567. doi: 10.1016/j.neunet.2024.106567. Epub 2024 Jul 23.
5. Shared Growth of Graph Neural Networks via Prompted Free-Direction Knowledge Distillation. IEEE Trans Pattern Anal Mach Intell. 2025 Jun;47(6):4377-4394. doi: 10.1109/TPAMI.2025.3543211. Epub 2025 May 7.
6. Frameless Graph Knowledge Distillation. IEEE Trans Neural Netw Learn Syst. 2025 May;36(5):8125-8139. doi: 10.1109/TNNLS.2024.3442379. Epub 2025 May 2.
7. EPANet-KD: Efficient progressive attention network for fine-grained provincial village classification via knowledge distillation. PLoS One. 2024 Feb 15;19(2):e0298452. doi: 10.1371/journal.pone.0298452. eCollection 2024.
8. FCKDNet: A Feature Condensation Knowledge Distillation Network for Semantic Segmentation. Entropy (Basel). 2023 Jan 7;25(1):125. doi: 10.3390/e25010125.
9. DCCD: Reducing Neural Network Redundancy via Distillation. IEEE Trans Neural Netw Learn Syst. 2024 Jul;35(7):10006-10017. doi: 10.1109/TNNLS.2023.3238337. Epub 2024 Jul 8.
10. Leveraging different learning styles for improved knowledge distillation in biomedical imaging. Comput Biol Med. 2024 Jan;168:107764. doi: 10.1016/j.compbiomed.2023.107764. Epub 2023 Nov 30.