

IBPL: Information Bottleneck-based Prompt Learning for graph out-of-distribution detection.

Authors

Cao Yanan, Shi Fengzhao, Yu Qing, Lin Xixun, Zhou Chuan, Zou Lixin, Zhang Peng, Li Zhao, Yin Dawei

Affiliations

Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China.

School of Cyber Science and Engineering, Wuhan University, China.

Publication Information

Neural Netw. 2025 Aug;188:107381. doi: 10.1016/j.neunet.2025.107381. Epub 2025 Mar 25.

Abstract

When training and test graph samples follow different data distributions, graph out-of-distribution (OOD) detection becomes an indispensable component in building reliable and safe graph learning systems. Motivated by the significant progress in prompt learning, graph prompt-based methods, which enable a well-trained graph neural network to detect OOD graphs without modifying any model parameters, have become a standard approach with promising computational efficiency and model effectiveness. However, these methods ignore the influence of overlapping features that exist in both in-distribution (ID) and OOD graphs, which weakens the difference between the two and leads to sub-optimal detection results. In this paper, we present Information Bottleneck-based Prompt Learning (IBPL) to overcome this challenging problem. Specifically, IBPL includes a new graph prompt that jointly performs the mask operation on node features and the graph structure. Building upon this, we develop an information bottleneck (IB)-based objective to optimize the proposed graph prompt. Since the overlapping features are inaccessible, IBPL introduces a noise data augmentation that generates a series of perturbed graphs to fully cover the overlapping features. By minimizing the mutual information between the prompt graph and the perturbed graphs, our objective can effectively eliminate the overlapping features. To avoid the negative impact of the perturbed graphs, IBPL simultaneously maximizes the mutual information between the prompt graph and the category label to better extract the ID features. We conduct experiments on multiple real-world datasets in both supervised and unsupervised scenarios. The empirical results and extensive model analyses demonstrate the superior performance of IBPL over several competitive baselines.
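The abstract describes the two mutual-information terms only in words. As a rough illustration for the supervised setting, assuming we write the prompt graph as G_p (produced by the learnable masks \phi over node features and edges), the category label as Y, the noise-perturbed graphs as \tilde{G}, and a trade-off weight \beta, an IB-based objective of the kind described could take the form

\max_{\phi} \; I(G_p; Y) \;-\; \beta \, I(G_p; \tilde{G})

i.e., maximize the mutual information with the label while minimizing the mutual information with the perturbed graphs. The symbols and the exact formulation here are assumptions for exposition, not the paper's own equation.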

