A panoramic driving perception fusion algorithm based on multi-task learning.

Affiliations

Guangxi Applied Mathematics Center, College of Electronic Information, Guangxi Minzu University, Nanning, China.

Guangxi Postdoctoral Innovation Practice Base, Wuzhou University, Wuzhou, China.

Publication Information

PLoS One. 2024 Jun 4;19(6):e0304691. doi: 10.1371/journal.pone.0304691. eCollection 2024.

Abstract

With the rapid development of intelligent connected vehicles, there is an increasing demand for hardware facilities and onboard systems of driver assistance systems. Currently, most vehicles are constrained by the hardware resources of onboard systems, which mainly process single-task and single-sensor data. This poses a significant challenge in achieving complex panoramic driving perception technology. While the panoramic driving perception algorithm YOLOP has achieved outstanding performance in multi-task processing, it suffers from poor adaptability of feature map pooling operations and loss of details during downsampling. To address these issues, this paper proposes a panoramic driving perception fusion algorithm based on multi-task learning. The model training involves the introduction of different loss functions and a series of processing steps for lidar point cloud data. Subsequently, the perception information from lidar and vision sensors is fused to achieve synchronized processing of multi-task and multi-sensor data, thereby effectively improving the performance and reliability of the panoramic driving perception system. To evaluate the performance of the proposed algorithm in multi-task processing, the BDD100K dataset is used. The results demonstrate that, compared to the YOLOP model, the multi-task learning network performs better in lane detection, drivable area detection, and vehicle detection tasks. Specifically, the lane detection accuracy improves by 11.6%, the mean Intersection over Union (mIoU) for drivable area detection increases by 2.1%, and the mean Average Precision at 50% IoU (mAP50) for vehicle detection improves by 3.7%.
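The reported gains are given in standard segmentation and detection metrics: mIoU averages the per-class Intersection over Union of predicted and ground-truth masks, and mAP50 is mean Average Precision at a 50% box-IoU threshold. As a rough illustration of the mIoU definition only (not the authors' evaluation code, and the toy masks below are invented), per-class IoU for segmentation masks can be computed as:

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union for a pair of binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union > 0 else 1.0

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    """mIoU: average of per-class IoU over all classes."""
    return float(np.mean([iou(pred == c, gt == c) for c in range(num_classes)]))

# Toy drivable-area masks (1 = drivable, 0 = background).
pred = np.array([[1, 1, 0], [1, 0, 0]])
gt   = np.array([[1, 1, 0], [0, 0, 0]])
print(iou(pred == 1, gt == 1))   # 2 overlapping pixels / 3 in the union
print(mean_iou(pred, gt, 2))     # averaged over background and drivable classes
```

A "2.1% mIoU increase" in the abstract is an absolute change in this averaged quantity over the BDD100K drivable-area labels.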

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b6fe/11149871/712ffc947685/pone.0304691.g001.jpg
