An improved transformer based traffic flow prediction model.

Author Information

Liu Shipeng, Wang Xingjian

Affiliation

College of Computer and Control Engineering, Northeast Forestry University, HeXing Road, Harbin, China.

Publication Information

Sci Rep. 2025 Mar 10;15(1):8284. doi: 10.1038/s41598-025-92425-7.

Abstract

Traffic flow prediction is a key challenge in intelligent transportation: the ability to accurately forecast future traffic flow directly affects the efficiency of urban transportation systems. However, existing deep-learning-based prediction models suffer from the following issues. First, CNN- and RNN-based models are limited by their architectures and are ill-suited to modeling long-term sequences. Second, most Transformer-based methods embed only the traffic flow data itself, neglecting the implicit information behind it: behavioral trends, community and surrounding traffic patterns, urban weather, semantic information, and temporal periodicity. Third, methods using the original multi-head self-attention mechanism compute attention scores point by point along the temporal dimension without using contextual information, which makes the attention computation less accurate. Fourth, existing methods struggle to capture long-range and short-range spatial dependencies simultaneously. To address these four issues, we propose IEEAFormer (Implicit-information Embedding and Enhanced Spatial-Temporal Multi-Head Attention Transformer). First, it adopts a Transformer architecture with an embedding layer that captures the implicit information in the input. Second, it replaces traditional multi-head self-attention with time-environment-aware self-attention in the temporal dimension, enabling each node to perceive its contextual environment. Third, it uses two distinct graph mask matrices in the spatial dimension and a novel parallel spatial self-attention architecture to capture long-range and short-range dependencies simultaneously. Results on four real-world traffic datasets show that IEEAFormer outperforms most existing models in prediction performance.
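To make the abstract's two attention mechanisms more concrete, below is a minimal PyTorch sketch of how they could plausibly be realized. It is a reconstruction from the abstract alone, not the authors' implementation: the class names, the use of 1-D convolutions to build context-aware queries and keys, the context window width, and the neighbour/non-neighbour mask construction are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContextAwareTemporalAttention(nn.Module):
    """Temporal self-attention whose queries and keys are computed from a
    local window (via 1-D convolution over time) rather than single points,
    so each time step scores attention against its surrounding context."""

    def __init__(self, d_model: int, n_heads: int, context: int = 3):
        super().__init__()
        assert d_model % n_heads == 0 and context % 2 == 1
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        # Convolutions over the time axis inject neighbourhood context.
        self.q_conv = nn.Conv1d(d_model, d_model, context, padding=context // 2)
        self.k_conv = nn.Conv1d(d_model, d_model, context, padding=context // 2)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, d_model)
        B, T, D = x.shape
        q = self.q_conv(x.transpose(1, 2)).transpose(1, 2)  # context-aware queries
        k = self.k_conv(x.transpose(1, 2)).transpose(1, 2)  # context-aware keys
        v = self.v_proj(x)

        def heads(t: torch.Tensor) -> torch.Tensor:
            # (B, T, D) -> (B, n_heads, T, d_head)
            return t.view(B, T, self.n_heads, self.d_head).transpose(1, 2)

        q, k, v = heads(q), heads(k), heads(v)
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5
        out = F.softmax(scores, dim=-1) @ v
        return self.out(out.transpose(1, 2).reshape(B, T, D))


class ParallelMaskedSpatialAttention(nn.Module):
    """Two spatial self-attention branches run in parallel over the nodes:
    one masked to graph neighbours (short-range), one masked to distant
    node pairs (long-range); the outputs are fused with a linear layer."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.short = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.long = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.fuse = nn.Linear(2 * d_model, d_model)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (batch, nodes, d_model); adj: (nodes, nodes) binary adjacency.
        eye = torch.eye(adj.size(0), dtype=torch.bool, device=adj.device)
        neighbour = adj.bool() | eye
        # In attn_mask, True marks pairs that may NOT attend to each other.
        short_out, _ = self.short(x, x, x, attn_mask=~neighbour)
        long_out, _ = self.long(x, x, x, attn_mask=adj.bool() & ~eye)
        return self.fuse(torch.cat([short_out, long_out], dim=-1))
```

Restricting one branch to graph neighbours and the other to distant node pairs (with self-attention always permitted, so no mask row is fully blocked) lets the fused output weigh short-range and long-range spatial dependencies per node, in the parallel style the abstract describes.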

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/385c/11893897/4e4183417670/41598_2025_92425_Fig1_HTML.jpg
