Zheng Guorong, Liu Yuke, Fu Yazhou, Zhao Yingjie, Zhang Zundong
Beijing Key Lab of Urban Intelligent Traffic Control Technology, North China University of Technology, Beijing 100144, China.
Sensors (Basel). 2023 Sep 19;23(18):7975. doi: 10.3390/s23187975.
As urban areas continue to expand, traffic congestion has emerged as a significant challenge for urban governance and economic development. Frequent regional congestion has become a primary factor hindering urban economic growth and social activity, making improved regional traffic management a necessity. Designing regional traffic optimization and control methods based on the characteristics of regional congestion has therefore become a crucial and complex problem in traffic management and control research. This paper builds on the macroscopic fundamental diagram (MFD) and addresses the perimeter control problem without relying on deterministic traffic information. To this end, we introduce the Q-learning (QL) algorithm from reinforcement learning and the Deep Deterministic Policy Gradient (DDPG) algorithm from deep reinforcement learning, and propose the MFD-QL and MFD-DDPG perimeter control models. Numerical analysis and simulation experiments verify the effectiveness of the MFD-QL and MFD-DDPG algorithms. The results show that both algorithms converge rapidly to a stable state and achieve superior control performance in optimizing regional perimeter control.
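To illustrate the kind of MFD-based Q-learning perimeter control the abstract describes, the sketch below trains a tabular Q-learning agent to meter inflow into a region whose outflow follows a parabolic MFD. This is a minimal toy model, not the paper's MFD-QL implementation: the MFD parameters, state/action discretization, and reward (regional outflow, i.e., trip completion rate) are all illustrative assumptions.

```python
import numpy as np

# Hypothetical parabolic MFD: regional outflow peaks at the critical
# accumulation N_CRIT and drops to zero at the jam accumulation N_MAX.
# All parameter values are illustrative, not taken from the paper.
N_MAX = 1000.0   # jam accumulation (veh)
N_CRIT = 500.0   # critical accumulation (veh) maximizing outflow
G_MAX = 10.0     # peak regional outflow (veh/s)

def mfd_outflow(n):
    """Outflow G(n): zero at n=0 and n=N_MAX, peak G_MAX at N_CRIT."""
    return max(0.0, G_MAX * (1.0 - ((n - N_CRIT) / N_CRIT) ** 2))

# Discretize the state (accumulation bins) and the control action
# (metered inflow permitted through the perimeter, veh/s).
N_BINS = 20
ACTIONS = np.linspace(0.0, 12.0, 5)

def state_of(n):
    return min(N_BINS - 1, int(n / N_MAX * N_BINS))

rng = np.random.default_rng(0)
Q = np.zeros((N_BINS, len(ACTIONS)))
alpha, gamma, eps, dt = 0.1, 0.95, 0.1, 10.0  # learn rate, discount, exploration, step (s)

for episode in range(200):
    n = 700.0  # start each episode in the congested regime (n > N_CRIT)
    for step in range(200):
        s = state_of(n)
        # epsilon-greedy action selection over metering rates
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
        outflow = mfd_outflow(n)
        # conservation of vehicles in the region over one control step
        n = float(np.clip(n + (ACTIONS[a] - outflow) * dt, 0.0, N_MAX))
        # reward the trip completion rate, so the agent learns to hold
        # the accumulation near N_CRIT where outflow is maximal
        Q[s, a] += alpha * (outflow + gamma * Q[state_of(n)].max() - Q[s, a])

# After training, the greedy policy should throttle inflow when the
# region is congested (n > N_CRIT) and admit more when it is not.
policy_congested = ACTIONS[int(np.argmax(Q[state_of(800.0)]))]
policy_freeflow = ACTIONS[int(np.argmax(Q[state_of(300.0)]))]
```

The design choice to use the region's outflow as the per-step reward mirrors the usual objective in MFD-based perimeter control: maximizing cumulative outflow maximizes trip completions, which implicitly keeps the accumulation near the critical point of the MFD. The paper's MFD-DDPG model replaces the Q-table with actor and critic networks so the metering rate can be continuous rather than drawn from a discrete set.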