

Few-Shot Graph Anomaly Detection via Dual-Level Knowledge Distillation.

Authors

Li Xuan, Cheng Dejie, Zhang Luheng, Zhang Chengfang, Feng Ziliang

Affiliation

National Key Laboratory of Fundamental Science on Synthetic Vision, Sichuan University, Chengdu 610065, China.

Publication

Entropy (Basel). 2025 Jan 1;27(1):28. doi: 10.3390/e27010028.

DOI: 10.3390/e27010028
PMID: 39851648
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11765473/
Abstract

Graph anomaly detection is crucial in many high-impact applications across diverse fields. In anomaly detection tasks, collecting plenty of annotated data tends to be costly and laborious. As a result, few-shot learning has been explored to address the issue by requiring only a few labeled samples to achieve good performance. However, conventional few-shot models may not fully exploit the information within auxiliary sets, leading to suboptimal performance. To tackle these limitations, we propose a dual-level knowledge distillation-based approach for graph anomaly detection, DualKD, which leverages two distinct distillation losses to improve generalization capabilities. In our approach, we initially train a teacher model to generate prediction distributions as soft labels, capturing the entropy of uncertainty in the data. These soft labels are then employed to construct the corresponding loss for training a student model, which can capture more detailed node features. In addition, we introduce two representation distillation losses-short and long representation distillation-to effectively transfer knowledge from the auxiliary set to the target set. Comprehensive experiments conducted on four datasets verify that DualKD remarkably outperforms the advanced baselines, highlighting its effectiveness in enhancing identification performance.
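The abstract describes training a student against the teacher's prediction distributions used as temperature-softened "soft labels". The paper's exact loss formulations are not given here, so the sketch below shows only the generic soft-label distillation objective that this setup builds on (KL divergence to the teacher's softened distribution, mixed with cross-entropy on the few labeled samples); the function names and the `T`/`alpha` hyperparameters are illustrative assumptions, not taken from the paper.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T yields softer distributions."""
    scaled = [z / T for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      T=2.0, alpha=0.5):
    """Mean per-sample loss mixing KL(teacher || student) on softened
    distributions with cross-entropy on the few labeled samples."""
    total = 0.0
    for s_row, t_row, y in zip(student_logits, teacher_logits, hard_labels):
        p_t = softmax(t_row, T)                # teacher soft labels
        p_s = softmax(s_row, T)                # softened student predictions
        kl = sum(pt * (math.log(pt + 1e-12) - math.log(ps + 1e-12))
                 for pt, ps in zip(p_t, p_s))
        ce = -math.log(softmax(s_row)[y] + 1e-12)  # hard-label cross-entropy
        # T**2 rescales the soft-label term, as in standard distillation
        total += alpha * (T ** 2) * kl + (1 - alpha) * ce
    return total / len(hard_labels)
```

When the student's logits match the teacher's, the KL term vanishes and only the hard-label cross-entropy on the labeled samples remains, which is why soft labels can regularize a student trained on very few annotations.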


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/732b/11765473/b5b0a65462ac/entropy-27-00028-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/732b/11765473/7e8f0a65952d/entropy-27-00028-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/732b/11765473/d9d0061f0fe2/entropy-27-00028-g002.jpg

Similar Articles

1. Few-Shot Graph Anomaly Detection via Dual-Level Knowledge Distillation. Entropy (Basel). 2025 Jan 1;27(1):28. doi: 10.3390/e27010028.
2. Relation Knowledge Distillation by Auxiliary Learning for Object Detection. IEEE Trans Image Process. 2024;33:4796-4810. doi: 10.1109/TIP.2024.3445740. Epub 2024 Aug 30.
3. Dual Distillation Discriminator Networks for Domain Adaptive Few-Shot Learning. Neural Netw. 2023 Aug;165:625-633. doi: 10.1016/j.neunet.2023.06.009. Epub 2023 Jun 15.
4. Unsupervised Anomaly Detection with Distillated Teacher-Student Network Ensemble. Entropy (Basel). 2021 Feb 6;23(2):201. doi: 10.3390/e23020201.
5. Fault anomaly detection method of aero-engine rolling bearing based on distillation learning. ISA Trans. 2024 Feb;145:387-398. doi: 10.1016/j.isatra.2023.11.034. Epub 2023 Nov 25.
6. Hierarchical Knowledge Propagation and Distillation for Few-Shot Learning. Neural Netw. 2023 Oct;167:615-625. doi: 10.1016/j.neunet.2023.08.040. Epub 2023 Sep 9.
7. Cosine similarity knowledge distillation for surface anomaly detection. Sci Rep. 2024 Apr 8;14(1):8150. doi: 10.1038/s41598-024-58409-9.
8. Few-Shot Face Stylization via GAN Prior Distillation. IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):4492-4503. doi: 10.1109/TNNLS.2024.3377609. Epub 2025 Feb 28.
9. Decoupled graph knowledge distillation: A general logits-based method for learning MLPs on graphs. Neural Netw. 2024 Nov;179:106567. doi: 10.1016/j.neunet.2024.106567. Epub 2024 Jul 23.
10. On Representation Knowledge Distillation for Graph Neural Networks. IEEE Trans Neural Netw Learn Syst. 2024 Apr;35(4):4656-4667. doi: 10.1109/TNNLS.2022.3223018. Epub 2024 Apr 4.
