

Robust Drivable Road Region Detection for Fixed-Route Autonomous Vehicles Using Map-Fusion Images.

Affiliations

School of Mechanical and Electronic Engineering, Wuhan University of Technology, Wuhan 430070, China.

California PATH, University of California, Berkeley, Richmond, CA 94804-2468, USA.

Publication Information

Sensors (Basel). 2018 Nov 27;18(12):4158. doi: 10.3390/s18124158.

Abstract

Environment perception is one of the major challenges in autonomous driving systems. In particular, effective and robust drivable road region detection remains an open problem for autonomous vehicles on multi-lane roads, at intersections, and in unstructured road environments. In this paper, a computer vision and neural network-based drivable road region detection approach is proposed for fixed-route autonomous vehicles (e.g., shuttles, buses, and other vehicles operating on fixed routes), using a vehicle-mounted camera, a route map, and real-time vehicle location. The key idea of the proposed approach is to fuse each image with its corresponding local route map to obtain a map-fusion image (MFI), in which the image and the route map complement each other. The image information can be exploited in road regions with rich features, while the local route map provides critical heuristics that enable robust drivable road region detection in areas without clear lane markings or borders. A neural network model built upon Convolutional Neural Networks (CNNs), namely FCN-VGG16, is used to extract the drivable road region from the fused MFI. The proposed approach is validated on real-world driving scenario videos captured by an industrial camera mounted on a testing vehicle. Experiments demonstrate that the proposed approach outperforms a conventional approach using non-fused images in terms of detection accuracy and robustness, and that it remains robust under adverse illumination conditions and pavement appearance, as well as under projection and map-fusion errors.
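The abstract does not specify how the image and the projected route map are combined into the MFI, so the following is only a minimal sketch of one plausible fusion scheme: the route map, already projected into the camera frame as a binary mask, is alpha-blended into the RGB image so the route region is visually highlighted for the downstream segmentation network. The function name, the green overlay color, and the blending weight are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def make_map_fusion_image(image, route_mask, alpha=0.5):
    """Fuse a camera frame with a projected local route map (sketch).

    image:      H x W x 3 uint8 camera frame
    route_mask: H x W mask, nonzero where the projected route lies
    alpha:      blending weight for the overlay (assumed scheme)

    Returns an H x W x 3 uint8 map-fusion image (MFI) in which the
    route region is highlighted by an alpha blend; the paper's actual
    fusion scheme is not given in the abstract.
    """
    mfi = image.astype(np.float32)
    overlay = np.zeros_like(mfi)
    overlay[..., 1] = 255.0  # paint the route in green (arbitrary choice)
    mask = route_mask.astype(bool)[..., None]  # broadcast over channels
    mfi = np.where(mask, (1.0 - alpha) * mfi + alpha * overlay, mfi)
    return mfi.astype(np.uint8)
```

Pixels outside the route mask pass through unchanged, so regions with rich visual features keep their original appearance while the route prior dominates only where it was projected.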

