Guo Xianwei, Yu Zhiyong, Huang Fangwan, Chen Xing, Yang Dingqi, Wang Jiangtao
College of Computer and Data Science, Fuzhou University, WuLong Jiang North Avenue, University Town, Fuzhou, 350108, China; Engineering Research Center of Big Data Intelligence, Ministry of Education, Fuzhou University, WuLong Jiang North Avenue, University Town, Fuzhou, 350108, China.
Department of Computer and Information Science, University of Macau, Avenida da Universidade, Taipa, Macao Special Administrative Region of China.
Neural Netw. 2025 Jan;181:106805. doi: 10.1016/j.neunet.2024.106805. Epub 2024 Oct 18.
Spatiotemporal Graph (STG) forecasting is an essential task within the realm of spatiotemporal data mining and urban computing. Over the past few years, Spatiotemporal Graph Neural Networks (STGNNs) have gained significant attention as promising solutions for STG forecasting. However, existing methods often overlook two issues: the dynamic spatial dependencies of urban networks and the heterogeneity of urban spatiotemporal data. In this paper, we propose a novel framework for STG learning called the Dynamic Meta-Graph Convolutional Recurrent Network (DMetaGCRN), which effectively tackles both challenges. Specifically, we first build a meta-graph generator that dynamically generates graph structures by integrating various dynamic features, including input sensor signals and their historical trends, periodic information (timestamp embeddings), and meta-node embeddings. Within the generator, a memory network guides the learning of the meta-node embeddings. The meta-graph generation process enables the model to simulate the dynamic spatial dependencies of urban networks and capture data heterogeneity. Then, we design a Dynamic Meta-Graph Convolutional Recurrent Unit (DMetaGCRU) to simultaneously model spatial and temporal dependencies. Finally, we assemble the proposed DMetaGCRN as an encoder-decoder architecture built upon the DMetaGCRU and meta-graph generator components. Extensive experiments on four real-world urban spatiotemporal datasets validate that the proposed DMetaGCRN framework outperforms state-of-the-art approaches.
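To make the abstract's "graph convolutional recurrent unit" pattern concrete, the following is a minimal NumPy sketch of a generic graph-convolutional GRU cell: a GRU whose dense matrix products are replaced by one-hop graph convolutions over a (here, fixed) adjacency matrix. All function names, parameter shapes, and the single-hop propagation rule are illustrative assumptions, not the paper's DMetaGCRU implementation, which additionally consumes a dynamically generated meta-graph at every step.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def graph_conv(A, X, W):
    # One-hop graph convolution: propagate node features over A, then project.
    return A @ X @ W

def gcru_step(A, x, h, params):
    """One step of a graph-convolutional GRU cell (illustrative sketch).

    A: (N, N) row-normalized adjacency (in DMetaGCRN this would be the
       dynamically generated meta-graph for the current step)
    x: (N, F_in) node inputs;  h: (N, F_h) hidden state
    """
    xh = np.concatenate([x, h], axis=1)
    r = sigmoid(graph_conv(A, xh, params["Wr"]) + params["br"])   # reset gate
    u = sigmoid(graph_conv(A, xh, params["Wu"]) + params["bu"])   # update gate
    xrh = np.concatenate([x, r * h], axis=1)
    c = np.tanh(graph_conv(A, xrh, params["Wc"]) + params["bc"])  # candidate state
    return u * h + (1.0 - u) * c                                  # gated update

# Toy example: 4 sensor nodes, 2 input features, 3 hidden units.
rng = np.random.default_rng(0)
N, F_in, F_h = 4, 2, 3
A = rng.random((N, N))
A /= A.sum(axis=1, keepdims=True)            # row-normalize the adjacency
params = {k: rng.standard_normal(s) * 0.1 for k, s in {
    "Wr": (F_in + F_h, F_h), "Wu": (F_in + F_h, F_h), "Wc": (F_in + F_h, F_h),
}.items()}
params.update({"br": np.zeros(F_h), "bu": np.zeros(F_h), "bc": np.zeros(F_h)})

h = np.zeros((N, F_h))
for t in range(5):                           # unroll over a short input sequence
    x = rng.standard_normal((N, F_in))
    h = gcru_step(A, x, h, params)
print(h.shape)
```

Stacking such a cell in an encoder-decoder, and regenerating `A` per step from sensor signals, timestamp embeddings, and meta-node embeddings, corresponds to the architecture the abstract outlines.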