Wu Wei, Zhai Xuemeng
Changzhou College of Information Technology, Changzhou 213164, China.
School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China.
Entropy (Basel). 2023 Nov 30;25(12):1611. doi: 10.3390/e25121611.
Dynamic network representation learning has recently attracted increasing attention because real-world networks evolve over time, that is, nodes and edges join or leave the network over time. Unlike static networks, representation learning for dynamic networks must capture not only the structural information of each network snapshot, but also the temporal dynamics of structural evolution across the snapshot sequence. Existing work on dynamic network representation has two main problems: (1) A significant number of methods target dynamic networks in which nodes can only be added over time, never removed, which reduces the applicability of such methods to real-world networks. (2) At present, most network-embedding methods, especially dynamic network representation learning approaches, use a Euclidean embedding space. However, the network itself is geometrically non-Euclidean, and the resulting geometric inconsistency between the embedding space and the network's underlying space can degrade model performance. To solve these two problems, we propose a geometry-based dynamic network learning framework, namely DyLFG. Our framework targets dynamic networks in which nodes and edges can join or leave the network over time. To extract the structural information of network snapshots, we designed a new hyperbolic geometry processing layer that differs from those in the previous literature. To handle the temporal dynamics of the network snapshot sequence, we propose a Ricci-curvature-based gated recurrent unit (GRU) module, namely the RGRU. In the proposed framework, we use a temporal attention layer and the RGRU to evolve the neural network weight matrix, capturing the temporal dynamics in the network snapshot sequence.
The experimental results showed that our model outperformed the baseline approaches on benchmark datasets.
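To illustrate the geometric motivation behind hyperbolic embedding layers such as the one described above, the sketch below computes the standard distance in the Poincaré ball model of hyperbolic space. This is a minimal, generic illustration of the distance metric commonly used in hyperbolic network embedding, not the paper's DyLFG layer; the function name and example points are assumptions for demonstration.

```python
import math

def poincare_distance(u, v):
    """Hyperbolic distance between two points inside the unit Poincare ball:

        d(u, v) = arcosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))

    Points must satisfy ||u|| < 1 and ||v|| < 1.
    """
    sq_norm = lambda x: sum(c * c for c in x)
    diff = sq_norm([a - b for a, b in zip(u, v)])
    denom = (1.0 - sq_norm(u)) * (1.0 - sq_norm(v))
    return math.acosh(1.0 + 2.0 * diff / denom)

# As points approach the boundary of the ball, distances grow rapidly,
# which is why hyperbolic space can embed tree-like (hierarchical)
# network structure with much lower distortion than Euclidean space.
origin = (0.0, 0.0)
near_boundary = (0.9, 0.0)
print(poincare_distance(origin, near_boundary))  # much larger than the Euclidean 0.9
```

The key property shown here is that the same Euclidean displacement costs far more hyperbolic distance near the boundary, giving the space an exponentially growing "volume" that matches the exponential growth of neighborhoods in scale-free, tree-like networks.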