He Hengyuan, Long Zhengtao, Zhang Yingchao, Jiang Xiaofei
College of Big Data and Information Engineering, Guizhou University, Guiyang, Guizhou, China.
PLoS One. 2025 Sep 2;20(9):e0331139. doi: 10.1371/journal.pone.0331139. eCollection 2025.
Traffic prediction is a core technology in intelligent transportation systems with broad application prospects. However, traffic flow data exhibits complex characteristics across both temporal and spatial dimensions, posing challenges for accurate prediction. In this paper, we propose a spatiotemporal Transformer network based on multi-level causal attention (MLCAFormer). We design a multi-level temporal causal attention mechanism that captures complex long- and short-term dependencies from local to global through a hierarchical architecture while strictly adhering to temporal causality. We also present a node-identity-aware spatial attention mechanism, which enhances the model's ability to distinguish nodes and learn spatial correlations by assigning a unique identity embedding to each node. Moreover, our model integrates several input features, including raw traffic flow data, cyclical patterns, and collaborative spatiotemporal embeddings. Comprehensive tests on four real-world traffic datasets (METR-LA, PEMS-BAY, PEMS04, and PEMS08) show that the proposed MLCAFormer outperforms current benchmark models.
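The two mechanisms named in the abstract can be illustrated in miniature. The sketch below is not the authors' implementation; it assumes standard scaled dot-product attention with a strict upper-triangular causal mask (position t attends only to positions ≤ t) and models the node-identity idea as a learned per-node embedding added to node features. All function and variable names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_attention(x):
    """Scaled dot-product self-attention with a strict causal mask:
    time step t may only attend to steps <= t, enforcing temporal causality."""
    T, d = x.shape
    scores = x @ x.T / np.sqrt(d)            # (T, T) pairwise similarities
    future = np.triu(np.ones((T, T)), k=1)   # 1 strictly above the diagonal
    scores = np.where(future == 1, -np.inf, scores)  # mask out the future
    return softmax(scores, axis=-1) @ x

def add_node_identity(x, node_ids, id_table):
    """Add a unique identity embedding per sensor node so spatial
    attention can distinguish otherwise similar nodes."""
    return x + id_table[node_ids]

rng = np.random.default_rng(0)
T, N, d = 12, 4, 8
x = rng.standard_normal((T, d))             # one node's time series features
out = causal_attention(x)

# Causality check: perturbing the last time step must not change
# the outputs at any earlier time step.
x2 = x.copy()
x2[-1] += 1.0
out2 = causal_attention(x2)
assert np.allclose(out[:-1], out2[:-1])

# Node-identity embedding: N nodes, each tagged with its own vector.
id_table = rng.standard_normal((N, d))
nodes = rng.standard_normal((N, d))
nodes_with_id = add_node_identity(nodes, np.arange(N), id_table)
```

The causality assertion is the key property the paper's multi-level temporal attention claims to preserve at every hierarchy level; the identity table plays the role of the per-node embedding that the spatial attention consumes.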