Molaei Soheila, Niknam Ghazaleh, Ghosheh Ghadeer O, Chauhan Vinod Kumar, Zare Hadi, Zhu Tingting, Pan Shirui, Clifton David A
Department of Engineering Science, University of Oxford, United Kingdom.
Department of Data Science and Technology, University of Tehran, Iran.
Knowl Based Syst. 2024 Sep 5;299. doi: 10.1016/j.knosys.2024.112110.
This research introduces Variational Graph Attention Dynamics (VarGATDyn) to address the complexities of dynamic graph representation learning, where existing models tailored to static graphs prove inadequate. VarGATDyn combines attention mechanisms with a Markovian assumption to overcome the difficulty of maintaining temporal consistency and the large dataset requirements typical of RNN-based frameworks. It draws on the strengths of the Variational Graph Auto-Encoder (VGAE) framework, Graph Attention Networks (GAT), and Gaussian Mixture Models (GMM) to capture the temporal and structural intricacies of dynamic graphs. By using a GMM prior, the model handles multimodal patterns and corrects misalignments between the prior and the estimated posterior distribution. A multiple-learning methodology further strengthens the model's adaptability, yielding a comprehensive and effective learning process. Empirical tests show that VarGATDyn outperforms competing methods in dynamic link prediction across various datasets, highlighting its ability to capture multimodal distributions and temporal dynamics.
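The abstract describes a VGAE whose encoder uses graph attention and whose latent prior is a Gaussian mixture. As a rough illustration only (not the paper's implementation), the NumPy sketch below wires those pieces together for a single graph snapshot: a GAT-style attention layer, mean/log-variance heads, a reparameterised sample, an inner-product edge decoder, and a Monte Carlo estimate of the KL divergence to a GMM prior. All weights, the two-component prior, and the graph itself are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gat_layer(H, A, W, a):
    """Single-head GAT-style aggregation: attention scores on edges, softmax over neighbours."""
    Z = H @ W
    n = Z.shape[0]
    scores = np.full((n, n), -np.inf)          # -inf masks non-edges in the softmax
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                scores[i, j] = np.tanh(a @ np.concatenate([Z[i], Z[j]]))
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha = e / e.sum(axis=1, keepdims=True)   # rows sum to 1 thanks to self-loops
    return alpha @ Z

def gmm_log_prior(z, means, sigma, weights):
    """Log density of an isotropic Gaussian mixture, evaluated per row of z."""
    d = z.shape[1]
    comps = np.stack([
        np.log(w) - 0.5 * d * np.log(2 * np.pi * sigma**2)
        - 0.5 * ((z - m) ** 2).sum(axis=1) / sigma**2
        for m, w in zip(means, weights)
    ])
    return np.logaddexp.reduce(comps, axis=0)

# Toy snapshot: 6 nodes, 4 input features, 2 latent dims (all hypothetical).
n, f, d = 6, 4, 2
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)                       # self-loops keep attention well-defined
X = rng.standard_normal((n, f))

# Shared attention layer, then separate mean / log-variance heads.
W = rng.standard_normal((f, d)) * 0.5
a = rng.standard_normal(2 * d) * 0.5
H = gat_layer(X, A, W, a)
W_mu, W_lv = rng.standard_normal((d, d)), rng.standard_normal((d, d)) * 0.1
mu, logvar = H @ W_mu, H @ W_lv

# Reparameterised sample and inner-product edge decoder.
z = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)
A_hat = 1.0 / (1.0 + np.exp(-(z @ z.T)))       # edge probabilities in (0, 1)

# Single-sample Monte Carlo estimate of KL(q(z|X,A) || GMM prior).
means = [np.zeros(d), np.ones(d)]              # hypothetical 2-component prior
log_q = -0.5 * (d * np.log(2 * np.pi) + logvar.sum(axis=1)
                + (((z - mu) ** 2) / np.exp(logvar)).sum(axis=1))
log_p = gmm_log_prior(z, means, sigma=1.0, weights=[0.5, 0.5])
kl_estimate = float(np.mean(log_q - log_p))
```

A mixture prior lets different latent clusters coexist, which is the role the abstract attributes to the GMM: a single standard Gaussian prior cannot match a multimodal posterior, whereas the mixture's `logaddexp` density can place mass near each mode.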