Li Dianzhao, Okhrin Ostap
Chair of Econometrics and Statistics, esp. in the Transport Sector, Technische Universität Dresden, Dresden, Germany.
Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI) Dresden/Leipzig, Dresden, Germany.
Commun Eng. 2024 Oct 17;3(1):147. doi: 10.1038/s44172-024-00292-3.
Autonomous driving presents unique challenges, particularly in transferring agents trained in simulation to real-world environments due to the discrepancies between the two. To address this issue, here we propose a robust Deep Reinforcement Learning (DRL) framework that incorporates platform-dependent perception modules to extract task-relevant information, enabling the training of a lane-following and overtaking agent in simulation. This framework facilitates the efficient transfer of the DRL agent to new simulated environments and the real world with minimal adjustments. We assess the performance of the agent across various driving scenarios in both simulation and the real world, comparing it to human drivers and a proportional-integral-derivative (PID) baseline in simulation. Additionally, we contrast it with other DRL baselines to clarify the rationale behind choosing this framework. Our proposed approach helps bridge the gaps between different platforms and the Simulation to Reality (Sim2Real) gap, allowing the trained agent to perform consistently in both simulation and real-world scenarios, effectively driving the vehicle.
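The core transfer idea described above can be illustrated with a minimal conceptual sketch: a platform-dependent perception module maps raw observations to a shared task-relevant state, and the trained policy consumes only that state, so swapping perception modules moves the agent between platforms. All class and field names below (`TaskState`, `SimPerception`, `RealPerception`, `policy`, and the chosen state features) are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import Protocol, Tuple

# Hypothetical task-relevant state for lane following and overtaking:
# lateral offset from lane center, heading error, distance to lead vehicle.
@dataclass
class TaskState:
    lateral_offset: float
    heading_error: float
    lead_distance: float

class Perception(Protocol):
    """Platform-dependent module: raw observation -> task-relevant state."""
    def extract(self, raw_obs: Tuple[float, float, float]) -> TaskState: ...

class SimPerception:
    """In simulation, the state can be read directly from the simulator."""
    def extract(self, raw_obs: Tuple[float, float, float]) -> TaskState:
        return TaskState(*raw_obs)

class RealPerception:
    """On the real vehicle, the same quantities would be estimated from
    sensors; here a stand-in that clamps an implausible negative distance."""
    def extract(self, raw_obs: Tuple[float, float, float]) -> TaskState:
        offset, heading, dist = raw_obs
        return TaskState(offset, heading, max(dist, 0.0))

def policy(state: TaskState) -> float:
    """Stand-in for the trained DRL policy: steering from task state only.
    Because it never sees raw observations, it is platform-agnostic."""
    return -0.5 * state.lateral_offset - 0.2 * state.heading_error

# The same policy runs on both platforms; only the perception module is swapped.
obs = (0.1, 0.05, 12.0)
sim_action = policy(SimPerception().extract(obs))
real_action = policy(RealPerception().extract(obs))
```

The design choice this sketch highlights is the decoupling itself: because the policy's input space is the abstract task state rather than raw sensor data, the simulation-trained weights need no retraining when the perception front end changes, which is what narrows the Sim2Real gap.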