
Interpretable local flow attention for multi-step traffic flow prediction.

Authors

Huang Xu, Zhang Bowen, Feng Shanshan, Ye Yunming, Li Xutao

Affiliations

School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, China.

College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China.

Publication

Neural Netw. 2023 Apr;161:25-38. doi: 10.1016/j.neunet.2023.01.023. Epub 2023 Jan 28.

Abstract

Traffic flow prediction (TFP) has attracted increasing attention with the development of smart cities. In the past few years, neural network-based methods have shown impressive performance for TFP. However, most previous studies fail to explicitly and effectively model the relationship between inflows and outflows. Consequently, these methods are usually uninterpretable and inaccurate. In this paper, we propose an interpretable local flow attention (LFA) mechanism for TFP, which yields three advantages. (1) LFA is flow-aware. Unlike existing works, which blend inflows and outflows in the channel dimension, we explicitly exploit the correlations between flows with a novel attention mechanism. (2) LFA is interpretable. It is formulated from the truisms of traffic flow, and the learned attention weights can well explain the flow correlations. (3) LFA is efficient. Instead of using global spatial attention as in previous studies, LFA operates in a local mode: the attention query is performed only on locally related regions. This not only reduces computational cost but also avoids false attention. Based on LFA, we further develop a novel spatiotemporal cell, named LFA-ConvLSTM (LFA-based convolutional long short-term memory), to capture the complex dynamics in traffic data. Specifically, LFA-ConvLSTM consists of three parts: (1) a ConvLSTM module that learns flow-specific features; (2) an LFA module that models the correlations between flows; and (3) a feature aggregation module that fuses the two into a comprehensive feature. Extensive experiments on two real-world datasets show that our method achieves better prediction performance, improving the RMSE metric by 3.2%-4.6% and the MAPE metric by 6.2%-6.7%. LFA-ConvLSTM is also almost 32% faster than global self-attention ConvLSTM in terms of prediction time. Furthermore, we present visual results to analyze the learned flow correlations.
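To make the local-attention idea concrete, the following is a minimal NumPy sketch, not the paper's implementation: for each grid cell, a scalar inflow query attends over the outflow values in its k x k neighborhood, and the softmax weights are the interpretable flow correlations. The function name, single-channel shapes, and dot-product scoring are all illustrative assumptions.

```python
import numpy as np

def local_flow_attention(inflow, outflow, k=3):
    """Illustrative local flow attention over (H, W) single-channel maps.

    Each cell's inflow value queries outflow values in its k x k
    neighborhood (zero-padded at borders); softmax over the neighborhood
    yields interpretable attention weights. This is a sketch of the
    local-query idea only, not the published LFA-ConvLSTM cell.
    """
    H, W = inflow.shape
    pad = k // 2
    padded = np.pad(outflow, pad)                 # zero-pad borders
    attended = np.zeros_like(inflow)
    weights = np.zeros((H, W, k, k))
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + k, j:j + k]      # local outflow region
            scores = inflow[i, j] * patch          # query-key products
            w = np.exp(scores - scores.max())
            w /= w.sum()                           # softmax over neighborhood
            weights[i, j] = w
            attended[i, j] = (w * patch).sum()     # weighted outflow context
    return attended, weights
```

Because the query touches only a k x k window instead of the full H x W grid, the cost per cell is O(k^2) rather than O(HW), which mirrors the efficiency argument made in the abstract.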
