Learning dynamic graph representations through timespan view contrasts.

Affiliations

School of Computer Science and Technology, Xi'an Jiaotong University, PR China.

School of Distance Education, Xi'an Jiaotong University, PR China.

Publication information

Neural Netw. 2024 Aug;176:106384. doi: 10.1016/j.neunet.2024.106384. Epub 2024 May 9.

Abstract

The rich information underlying graphs has inspired further investigation of unsupervised graph representation. Existing studies mainly depend on node features and topological properties within static graphs to create self-supervised signals, neglecting the temporal components carried by real-world graph data, such as edge timestamps. To overcome this limitation, this paper explores how to model temporal evolution on dynamic graphs elegantly. Specifically, we introduce a new inductive bias, temporal translation invariance, which captures the tendency of the same node to keep similar labels across different timespans. Based on this assumption, we develop CLDG, a dynamic graph representation framework that encourages each node to maintain locally consistent temporal translation invariance through contrastive learning across different timespans. Beyond the standard CLDG, which considers only explicit topological links, our further proposed CLDG++ additionally employs graph diffusion to uncover global contextual correlations between nodes, and designs a multi-scale contrastive learning objective composed of local-local, local-global, and global-global contrasts to enhance representation capability. Interestingly, by measuring the consistency between different timespans to form anomaly indicators, CLDG and CLDG++ integrate seamlessly with the task of spotting anomalies on dynamic graphs, which has broad applications in many high-impact domains, such as finance, cybersecurity, and healthcare. Experiments demonstrate that both CLDG and CLDG++ exhibit desirable performance in downstream tasks including node classification and dynamic graph anomaly detection. Moreover, CLDG significantly reduces time and space complexity by implicitly exploiting temporal cues instead of complicated sequence models. The code and data are available at https://github.com/yimingxu24/CLDG.
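The core idea described above — partitioning a dynamic graph into timespan views and contrasting the same node's embeddings across views — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the equal-width view split, the InfoNCE-style loss, and the cosine-consistency anomaly score are all simplifying assumptions standing in for the paper's actual design.

```python
import numpy as np

def split_timespan_views(timestamps, num_views):
    """Partition timestamped edges into equal-width timespan views;
    returns one boolean mask per view over the edge list."""
    t_min, t_max = timestamps.min(), timestamps.max()
    bounds = np.linspace(t_min, t_max, num_views + 1)
    masks = []
    for i in range(num_views):
        m = (timestamps >= bounds[i]) & (timestamps < bounds[i + 1])
        if i == num_views - 1:          # keep edges at the final timestamp
            m |= timestamps == t_max
        masks.append(m)
    return masks

def timespan_contrastive_loss(z1, z2, tau=0.5):
    """InfoNCE-style loss over two views' node embeddings: the same
    node across views is the positive pair, other nodes are negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / tau                     # cross-view similarities
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))          # diagonal = positive pairs

def anomaly_scores(z1, z2):
    """Per-node anomaly indicator: low cross-timespan consistency
    (cosine similarity) suggests an anomalous node."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    return 1.0 - np.sum(z1 * z2, axis=1)
```

Under temporal translation invariance, minimizing the loss pulls each node's representations from different timespans together; the anomaly score then flags nodes whose views disagree, which is how consistency across timespans can double as an anomaly indicator.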
