

ST-D3DDARN: Urban traffic flow prediction based on spatio-temporal decoupled 3D DenseNet with attention ResNet.

Affiliation

College of Information Technology and Engineering, Tianjin University of Technology and Education, Tianjin, China.

Publication

PLoS One. 2024 Jun 12;19(6):e0305424. doi: 10.1371/journal.pone.0305424. eCollection 2024.

PMID: 38865366
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11168702/
Abstract

Urban traffic flow prediction plays a crucial role in intelligent transportation systems (ITS), which can enhance traffic efficiency and ensure public safety. However, predicting urban traffic flow faces numerous challenges, such as intricate temporal dependencies, spatial correlations, and the influence of external factors. Existing research methods cannot fully capture the complex spatio-temporal dependence of traffic flow. Inspired by video analysis in computer vision, we represent traffic flow as traffic frames and propose an end-to-end urban traffic flow prediction model named Spatio-temporal Decoupled 3D DenseNet with Attention ResNet (ST-D3DDARN). Specifically, this model extracts multi-source traffic flow features through closeness, period, trend, and external factor branches. Subsequently, it dynamically establishes global spatio-temporal correlations by integrating spatial self-attention and coordinate attention in a residual network, accurately predicting the inflow and outflow of traffic throughout the city. In order to evaluate the effectiveness of the ST-D3DDARN model, experiments are carried out on two publicly available real-world datasets. The results indicate that ST-D3DDARN outperforms existing models in terms of single-step prediction, multi-step prediction, and efficiency.
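The abstract's branch decomposition treats city-wide traffic as a sequence of "traffic frames" and feeds the model recent frames (closeness), same-time-of-day frames from past days (period), and same-time frames from past weeks (trend). A minimal NumPy sketch of how such branch inputs can be gathered from a frame sequence; the grid size, branch lengths, and hourly sampling are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

# Hypothetical setup: 4 weeks of hourly frames on a 32x32 grid,
# with 2 channels per frame (inflow, outflow).
T, H, W = 24 * 7 * 4, 32, 32
frames = np.random.rand(T, 2, H, W)

def build_branches(t, lc=3, lp=3, lt=3, day=24, week=24 * 7):
    """Gather input frames for the three temporal branches at target step t."""
    closeness = frames[[t - i for i in range(1, lc + 1)]]         # most recent steps
    period    = frames[[t - i * day for i in range(1, lp + 1)]]   # same hour, past days
    trend     = frames[[t - i * week for i in range(1, lt + 1)]]  # same hour, past weeks
    return closeness, period, trend

c, p, tr = build_branches(t=24 * 7 * 3)    # predict a frame in week 4
print(c.shape, p.shape, tr.shape)          # (3, 2, 32, 32) each
```

Each branch would then pass through its own feature extractor (here, the paper's decoupled 3D DenseNet) before fusion with the external-factor branch.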

Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/adf5/11168702/ed83ed3a662f/pone.0305424.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/adf5/11168702/19fb0f5f6369/pone.0305424.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/adf5/11168702/ad05608f8bd6/pone.0305424.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/adf5/11168702/4c7b55979039/pone.0305424.g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/adf5/11168702/b972c2c9a0d0/pone.0305424.g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/adf5/11168702/35a0e075c3a7/pone.0305424.g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/adf5/11168702/75c346af612c/pone.0305424.g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/adf5/11168702/8a95056bd776/pone.0305424.g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/adf5/11168702/aa4aebf68166/pone.0305424.g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/adf5/11168702/b373a3aa5995/pone.0305424.g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/adf5/11168702/7ab847eefcc1/pone.0305424.g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/adf5/11168702/a12615d56f7c/pone.0305424.g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/adf5/11168702/5e25ed857104/pone.0305424.g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/adf5/11168702/3278c7d12481/pone.0305424.g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/adf5/11168702/7ec8e535e8f3/pone.0305424.g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/adf5/11168702/8490ee72eaf7/pone.0305424.g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/adf5/11168702/9988e49055ce/pone.0305424.g017.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/adf5/11168702/cb1404f412a3/pone.0305424.g018.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/adf5/11168702/fc9f57701985/pone.0305424.g019.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/adf5/11168702/2dec922ee219/pone.0305424.g020.jpg

Similar articles

1. ST-D3DDARN: Urban traffic flow prediction based on spatio-temporal decoupled 3D DenseNet with attention ResNet.
   PLoS One. 2024 Jun 12;19(6):e0305424. doi: 10.1371/journal.pone.0305424. eCollection 2024.
2. Spatio-temporal causal graph attention network for traffic flow prediction in intelligent transportation systems.
   PeerJ Comput Sci. 2023 Jul 28;9:e1484. doi: 10.7717/peerj-cs.1484. eCollection 2023.
3. AST3DRNet: Attention-Based Spatio-Temporal 3D Residual Neural Networks for Traffic Congestion Prediction.
   Sensors (Basel). 2024 Feb 16;24(4):1261. doi: 10.3390/s24041261.
4. Attention based spatio-temporal graph convolutional network with focal loss for crash risk evaluation on urban road traffic network based on multi-source risks.
   Accid Anal Prev. 2023 Nov;192:107262. doi: 10.1016/j.aap.2023.107262. Epub 2023 Aug 18.
5. GT-LSTM: A spatio-temporal ensemble network for traffic flow prediction.
   Neural Netw. 2024 Mar;171:251-262. doi: 10.1016/j.neunet.2023.12.016. Epub 2023 Dec 10.
6. Parking Lot Traffic Prediction Based on Fusion of Multifaceted Spatio-Temporal Features.
   Sensors (Basel). 2024 Jul 31;24(15):4971. doi: 10.3390/s24154971.
7. Cross-Attention Fusion Based Spatial-Temporal Multi-Graph Convolutional Network for Traffic Flow Prediction.
   Sensors (Basel). 2021 Dec 18;21(24):8468. doi: 10.3390/s21248468.
8. Spatiotemporal information enhanced multi-feature short-term traffic flow prediction.
   PLoS One. 2024 Jul 15;19(7):e0306892. doi: 10.1371/journal.pone.0306892. eCollection 2024.
9. Spatial-Temporal Attention Mechanism and Graph Convolutional Networks for Destination Prediction.
   Front Neurorobot. 2022 Jul 6;16:925210. doi: 10.3389/fnbot.2022.925210. eCollection 2022.
10. Local spatial and temporal relation discovery model based on attention mechanism for traffic forecasting.
    Neural Netw. 2024 Aug;176:106365. doi: 10.1016/j.neunet.2024.106365. Epub 2024 May 6.

Cited by

1. Traffic flow prediction based on spatiotemporal encoder-decoder model.
   PLoS One. 2025 May 30;20(5):e0321858. doi: 10.1371/journal.pone.0321858. eCollection 2025.

References

1. MVSTT: A Multiview Spatial-Temporal Transformer Network for Traffic-Flow Forecasting.
   IEEE Trans Cybern. 2024 Mar;54(3):1582-1595. doi: 10.1109/TCYB.2022.3223918. Epub 2024 Feb 9.
2. FASTNN: A Deep Learning Approach for Traffic Flow Prediction Considering Spatiotemporal Features.
   Sensors (Basel). 2022 Sep 13;22(18):6921. doi: 10.3390/s22186921.
3. Squeeze-and-Excitation Networks.
   IEEE Trans Pattern Anal Mach Intell. 2020 Aug;42(8):2011-2023. doi: 10.1109/TPAMI.2019.2913372. Epub 2019 Apr 29.