Suppr 超能文献



Multi-sensor fusion and segmentation for autonomous vehicle multi-object tracking using deep Q networks.

Authors

Vinoth K, Sasikumar P

Affiliation

School of Electronics Engineering, Vellore Institute of Technology, Vellore, India.

Publication

Sci Rep. 2024 Dec 28;14(1):31130. doi: 10.1038/s41598-024-82356-0.

DOI: 10.1038/s41598-024-82356-0
PMID: 39732930
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11682159/
Abstract

Autonomous vehicles, often known as self-driving cars, have emerged as a disruptive technology with the promise of safer, more efficient, and convenient transportation. Existing works provide achievable results but lack effective solutions: accumulation on roads can obscure lane markings and traffic signs, making it difficult for a self-driving car to navigate safely, and heavy rain, snow, fog, or dust storms can severely limit the sensors' ability to detect obstacles, pedestrians, and other vehicles, posing potential safety risks. We therefore present multi-sensor fusion and segmentation for multi-object tracking using deep Q networks (DQN) in self-driving cars. The proposed scheme incorporates processing pipelines for camera and LiDAR data and develops an autonomous object-detection solution that operates on the sensor images. An Improved Adaptive Extended Kalman Filter (IAEKF) is used for noise reduction. Contrast enhancement is performed with a Normalised Gamma Transformation based CLAHE (NGT-CLAHE), and adaptive thresholding for preprocessing is implemented with an Improved Adaptive Weighted Mean Filter (IAWMF). Orientation-based multi-segmentation employs various segmentation techniques and degrees. DenseNet-based multi-image fusion yields higher efficiency and memory utilisation with fast processing times. The Energy Valley Optimizer (EVO) approach selects grid-map-based paths and lanes; this strategy solves complicated tasks in a simple manner, which leads to flexibility, resilience, and scalability. In addition, the YOLOv7 model is used for detection and categorization. The proposed work is evaluated using metrics such as velocity, accuracy rate, success rate, success ratio, mean squared error, loss rate, and accumulated reward.
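
The tracker described in the abstract is trained with a deep Q network. As a rough illustration of the Q-learning update underlying any DQN (this is not the authors' implementation; batch shapes, reward values, and the discount factor below are placeholder assumptions), the Bellman regression target can be sketched in plain Python:

```python
import numpy as np

def bellman_targets(q_next, rewards, dones, gamma=0.99):
    """DQN regression targets: r + gamma * max_a' Q(s', a'),
    with the bootstrap term zeroed at terminal transitions."""
    return rewards + gamma * (1.0 - dones) * q_next.max(axis=1)

# Toy batch: 3 transitions, 4 actions (values are illustrative only).
q_next = np.array([[0.1, 0.5, 0.2, 0.0],
                   [1.0, 0.0, 0.3, 0.2],
                   [0.4, 0.4, 0.4, 0.4]])
rewards = np.array([1.0, 0.0, -1.0])
dones = np.array([0.0, 0.0, 1.0])  # last transition ends the episode

targets = bellman_targets(q_next, rewards, dones)
```

In a full DQN these targets would be regressed against the online network's Q-values for the taken actions, typically with a separate target network providing `q_next`.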

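
The preprocessing stage combines a normalised gamma transformation with CLAHE (NGT-CLAHE). A minimal sketch of the gamma step alone, assuming a plain power-law curve on an 8-bit image (the paper's normalisation and its coupling with CLAHE are more involved, and the gamma value here is an arbitrary assumption):

```python
import numpy as np

def gamma_transform(img, gamma=0.5):
    """Normalise an 8-bit image to [0, 1], apply a power-law (gamma)
    curve, and rescale back to [0, 255]."""
    norm = img.astype(np.float64) / 255.0
    return np.clip(255.0 * norm ** gamma, 0, 255).astype(np.uint8)

# With gamma < 1 dark pixels are brightened more than bright ones.
patch = np.array([[0, 64], [128, 255]], dtype=np.uint8)
out = gamma_transform(patch)
```

CLAHE would then be applied to the gamma-corrected result to equalise local contrast tile by tile.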

Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f15/11682159/5348e7c9d5e4/41598_2024_82356_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f15/11682159/160947a4097d/41598_2024_82356_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f15/11682159/1679142b2113/41598_2024_82356_Figb_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f15/11682159/c2120b55e1c1/41598_2024_82356_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f15/11682159/93ea09762f1c/41598_2024_82356_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f15/11682159/66843a9378a6/41598_2024_82356_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f15/11682159/510bba05e707/41598_2024_82356_Fig11_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f15/11682159/f74a45b60918/41598_2024_82356_Fig12_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f15/11682159/db44e4e6bfb9/41598_2024_82356_Fig13_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f15/11682159/caab2c133b5c/41598_2024_82356_Fig14_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f15/11682159/5a0d9b5eb46b/41598_2024_82356_Fig15_HTML.jpg

Similar articles

1. Multi-sensor fusion and segmentation for autonomous vehicle multi-object tracking using deep Q networks.
   Sci Rep. 2024 Dec 28;14(1):31130. doi: 10.1038/s41598-024-82356-0.
2. Brain tumor segmentation and detection in MRI using convolutional neural networks and VGG16.
   Cancer Biomark. 2025 Mar;42(3):18758592241311184. doi: 10.1177/18758592241311184. Epub 2025 Apr 4.
3. Kalman Filter-Based Fusion of LiDAR and Camera Data in Bird's Eye View for Multi-Object Tracking in Autonomous Vehicles.
   Sensors (Basel). 2024 Dec 3;24(23):7718. doi: 10.3390/s24237718.
4. Sensor Fusion in Autonomous Vehicle with Traffic Surveillance Camera System: Detection, Localization, and AI Networking.
   Sensors (Basel). 2023 Mar 22;23(6):3335. doi: 10.3390/s23063335.
5. Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles.
   Sensors (Basel). 2019 Oct 9;19(20):4357. doi: 10.3390/s19204357.
6. Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review.
   Sensors (Basel). 2021 Mar 18;21(6):2140. doi: 10.3390/s21062140.
7. Stabilization and Validation of 3D Object Position Using Multimodal Sensor Fusion and Semantic Segmentation.
   Sensors (Basel). 2020 Feb 18;20(4):1110. doi: 10.3390/s20041110.
8. End-to-End Multimodal Sensor Dataset Collection Framework for Autonomous Vehicles.
   Sensors (Basel). 2023 Jul 29;23(15):6783. doi: 10.3390/s23156783.
9. Enhancing LiDAR Mapping with YOLO-Based Potential Dynamic Object Removal in Autonomous Driving.
   Sensors (Basel). 2024 Nov 27;24(23):7578. doi: 10.3390/s24237578.
10. Design of a Robust System Architecture for Tracking Vehicle on Highway Based on Monocular Camera.
   Sensors (Basel). 2022 Apr 27;22(9):3359. doi: 10.3390/s22093359.

Cited by

1. RoboMNIST: A Multimodal Dataset for Multi-Robot Activity Recognition Using WiFi Sensing, Video, and Audio.
   Sci Data. 2025 Feb 22;12(1):326. doi: 10.1038/s41597-025-04636-2.

References

1. Fault Diagnosis of the Autonomous Driving Perception System Based on Information Fusion.
   Sensors (Basel). 2023 May 26;23(11):5110. doi: 10.3390/s23115110.
2. CF-YOLOX: An Autonomous Driving Detection Model for Multi-Scale Object Detection.
   Sensors (Basel). 2023 Apr 7;23(8):3794. doi: 10.3390/s23083794.
3. Sensor Fusion in Autonomous Vehicle with Traffic Surveillance Camera System: Detection, Localization, and AI Networking.
   Sensors (Basel). 2023 Mar 22;23(6):3335. doi: 10.3390/s23063335.
4. LiDAR-as-Camera for End-to-End Driving.
   Sensors (Basel). 2023 Mar 6;23(5):2845. doi: 10.3390/s23052845.
5. Vehicle Detection on Occupancy Grid Maps: Comparison of Five Detectors Regarding Real-Time Performance.
   Sensors (Basel). 2023 Feb 2;23(3):1613. doi: 10.3390/s23031613.
6. SDC-Net: End-to-End Multitask Self-Driving Car Camera Cocoon IoT-Based System.
   Sensors (Basel). 2022 Nov 24;22(23):9108. doi: 10.3390/s22239108.
7. Analysis of Thermal Imaging Performance under Extreme Foggy Conditions: Applications to Autonomous Driving.
   J Imaging. 2022 Nov 9;8(11):306. doi: 10.3390/jimaging8110306.
8. Application of Laser Systems for Detection and Ranging in the Modern Road Transportation and Maritime Sector.
   Sensors (Basel). 2022 Aug 9;22(16):5946. doi: 10.3390/s22165946.
9. Enhanced Perception for Autonomous Driving Using Semantic and Geometric Data Fusion.
   Sensors (Basel). 2022 Jul 5;22(13):5061. doi: 10.3390/s22135061.
10. Improving Semantic Segmentation of Urban Scenes for Self-Driving Cars with Synthetic Images.
   Sensors (Basel). 2022 Mar 14;22(6):2252. doi: 10.3390/s22062252.