

A panoramic driving perception fusion algorithm based on multi-task learning.

Affiliations

Guangxi Applied Mathematics Center, College of Electronic Information, Guangxi Minzu University, Nanning, China.

Guangxi Postdoctoral Innovation Practice Base, Wuzhou University, Wuzhou, China.

Publication information

PLoS One. 2024 Jun 4;19(6):e0304691. doi: 10.1371/journal.pone.0304691. eCollection 2024.

DOI: 10.1371/journal.pone.0304691
PMID: 38833435
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11149871/
Abstract

With the rapid development of intelligent connected vehicles, there is an increasing demand for hardware facilities and onboard systems of driver assistance systems. Currently, most vehicles are constrained by the hardware resources of onboard systems, which mainly process single-task and single-sensor data. This poses a significant challenge in achieving complex panoramic driving perception technology. While the panoramic driving perception algorithm YOLOP has achieved outstanding performance in multi-task processing, it suffers from poor adaptability of feature map pooling operations and loss of details during downsampling. To address these issues, this paper proposes a panoramic driving perception fusion algorithm based on multi-task learning. The model training involves the introduction of different loss functions and a series of processing steps for lidar point cloud data. Subsequently, the perception information from lidar and vision sensors is fused to achieve synchronized processing of multi-task and multi-sensor data, thereby effectively improving the performance and reliability of the panoramic driving perception system. To evaluate the performance of the proposed algorithm in multi-task processing, the BDD100K dataset is used. The results demonstrate that, compared to the YOLOP model, the multi-task learning network performs better in lane detection, drivable area detection, and vehicle detection tasks. Specifically, the lane detection accuracy improves by 11.6%, the mean Intersection over Union (mIoU) for drivable area detection increases by 2.1%, and the mean Average Precision at 50% IoU (mAP50) for vehicle detection improves by 3.7%.
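The two ideas the abstract leans on — combining several task losses into one training objective, and scoring segmentation with mean Intersection over Union (mIoU) — can be illustrated with a minimal sketch. The weights, function names, and toy labels below are illustrative assumptions, not values taken from the paper:

```python
def multi_task_loss(det_loss, da_seg_loss, ll_seg_loss,
                    alpha=1.0, beta=0.5, gamma=0.5):
    """Weighted sum of detection, drivable-area, and lane-line losses.

    The weights alpha/beta/gamma are hypothetical; in practice they are
    tuned (or learned) to balance the three tasks during training.
    """
    return alpha * det_loss + beta * da_seg_loss + gamma * ll_seg_loss


def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union for flat lists of integer class labels.

    For each class, IoU = |pred ∩ target| / |pred ∪ target|; mIoU averages
    over the classes that actually appear in either map.
    """
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)


# Toy example: 4 pixels, 2 classes (0 = background, 1 = drivable area).
print(multi_task_loss(1.0, 2.0, 2.0))          # → 3.0
print(mean_iou([0, 1, 1, 1], [0, 1, 0, 1], 2))  # → 0.5833... (7/12)
```

The reported mAP50 for vehicle detection follows the same intersection-over-union idea, but applied to predicted bounding boxes: a detection counts as correct when its box overlaps a ground-truth box with IoU ≥ 0.5, and precision is averaged over recall levels.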


[Article figures pone.0304691.g001–g021 are available via the PMC full-text link above.]

Similar articles

1. A panoramic driving perception fusion algorithm based on multi-task learning. PLoS One. 2024 Jun 4;19(6):e0304691. doi: 10.1371/journal.pone.0304691. eCollection 2024.
2. Multi-Task Environmental Perception Methods for Autonomous Driving. Sensors (Basel). 2024 Aug 28;24(17):5552. doi: 10.3390/s24175552.
3. Optimal Configuration of Multi-Task Learning for Autonomous Driving. Sensors (Basel). 2023 Dec 9;23(24):9729. doi: 10.3390/s23249729.
4. A Multi-Task Network Based on Dual-Neck Structure for Autonomous Driving Perception. Sensors (Basel). 2024 Feb 28;24(5):1547. doi: 10.3390/s24051547.
5. Research on Road Scene Understanding of Autonomous Vehicles Based on Multi-Task Learning. Sensors (Basel). 2023 Jul 7;23(13):6238. doi: 10.3390/s23136238.
6. Interactive Attention Learning on Detection of Lane and Lane Marking on the Road by Monocular Camera Image. Sensors (Basel). 2023 Jul 20;23(14):6545. doi: 10.3390/s23146545.
7. Real-Time 3D Object Detection and SLAM Fusion in a Low-Cost LiDAR Test Vehicle Setup. Sensors (Basel). 2021 Dec 15;21(24):8381. doi: 10.3390/s21248381.
8. Optimized Design of EdgeBoard Intelligent Vehicle Based on PP-YOLOE. Sensors (Basel). 2024 May 16;24(10):3180. doi: 10.3390/s24103180.
9. Evaluating Autonomous Urban Perception and Planning in a 1/10th Scale MiniCity. Sensors (Basel). 2022 Sep 8;22(18):6793. doi: 10.3390/s22186793.
10. A Multi-Task Road Feature Extraction Network with Grouped Convolution and Attention Mechanisms. Sensors (Basel). 2023 Sep 30;23(19):8182. doi: 10.3390/s23198182.

References cited in this article

1. Sugarcane stem node identification algorithm based on improved YOLOv5. PLoS One. 2023 Dec 11;18(12):e0295565. doi: 10.1371/journal.pone.0295565. eCollection 2023.
2. A Multi-Task Road Feature Extraction Network with Grouped Convolution and Attention Mechanisms. Sensors (Basel). 2023 Sep 30;23(19):8182. doi: 10.3390/s23198182.
3. Research on Road Scene Understanding of Autonomous Vehicles Based on Multi-Task Learning. Sensors (Basel). 2023 Jul 7;23(13):6238. doi: 10.3390/s23136238.
4. An object detection algorithm combining self-attention and YOLOv4 in traffic scene. PLoS One. 2023 May 18;18(5):e0285654. doi: 10.1371/journal.pone.0285654. eCollection 2023.
5. A visual defect detection for optics lens based on the YOLOv5-C3CA-SPPF network model. Opt Express. 2023 Jan 16;31(2):2628-2643. doi: 10.1364/OE.480816.
6. An improved Deeplabv3+ semantic segmentation algorithm with multiple loss constraints. PLoS One. 2022 Jan 19;17(1):e0261582. doi: 10.1371/journal.pone.0261582. eCollection 2022.
7. Low-complexity adaptive radius outlier removal filter based on PCA for lidar point cloud denoising. Appl Opt. 2021 Jul 10;60(20):E1-E7. doi: 10.1364/AO.416341.
8. Deep learning with convolutional neural networks for EEG decoding and visualization. Hum Brain Mapp. 2017 Nov;38(11):5391-5420. doi: 10.1002/hbm.23730. Epub 2017 Aug 7.
9. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans Pattern Anal Mach Intell. 2017 Jun;39(6):1137-1149. doi: 10.1109/TPAMI.2016.2577031. Epub 2016 Jun 6.