
Robust Fusion of LiDAR and Wide-Angle Camera Data for Autonomous Mobile Robots.

Affiliations

Institute for Digital Technologies, Loughborough University, London E15 2GZ, UK.

Publication Information

Sensors (Basel). 2018 Aug 20;18(8):2730. doi: 10.3390/s18082730.

DOI: 10.3390/s18082730
PMID: 30127253
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6112019/
Abstract

Autonomous robots that assist humans in day-to-day living tasks are becoming increasingly popular. Autonomous mobile robots operate by sensing and perceiving their surrounding environment to make accurate driving decisions. A combination of several different sensors, such as LiDAR, radar, ultrasound sensors and cameras, is utilized to sense the surrounding environment of autonomous vehicles. These heterogeneous sensors simultaneously capture various physical attributes of the environment. Such multimodality and redundancy of sensing need to be positively utilized for reliable and consistent perception of the environment through sensor data fusion. However, these multimodal sensor data streams differ from each other in many ways, such as temporal and spatial resolution, data format, and geometric alignment. For the subsequent perception algorithms to utilize the diversity offered by multimodal sensing, the data streams need to be spatially, geometrically and temporally aligned with each other. In this paper, we address the problem of fusing the outputs of a Light Detection and Ranging (LiDAR) scanner and a wide-angle monocular image sensor for free space detection. The outputs of the LiDAR scanner and the image sensor are of different spatial resolutions and need to be aligned with each other. A geometrical model is used to spatially align the two sensor outputs, followed by a Gaussian Process (GP) regression-based resolution matching algorithm to interpolate the missing data with quantifiable uncertainty. The results indicate that the proposed sensor data fusion framework significantly aids the subsequent perception steps, as illustrated by the performance improvement of an uncertainty-aware free space detection algorithm.
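The GP-regression resolution-matching step described in the abstract can be illustrated with a small sketch. This is a toy 1-D example under assumed parameters (synthetic depth values, an RBF kernel with a made-up length scale and noise level), not the authors' implementation: sparse LiDAR depth returns at a few image columns are interpolated to every column, and the posterior standard deviation provides the quantifiable uncertainty that a downstream free-space detector can exploit.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=10.0, variance=1.0):
    """Squared-exponential kernel between 1-D input arrays a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_interpolate(x_obs, y_obs, x_query, noise=1e-2):
    """Standard GP regression: posterior mean and std at x_query."""
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf_kernel(x_query, x_obs)
    Kss = rbf_kernel(x_query, x_query)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - np.sum(v**2, axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))

# Sparse "LiDAR" depth returns, already projected onto a few image columns.
x_obs = np.array([0.0, 20.0, 40.0, 60.0, 80.0])
y_obs = 5.0 + 0.02 * x_obs          # synthetic depths in metres

# Interpolate to every image column; sigma quantifies per-column confidence.
x_query = np.arange(0.0, 100.0)
depth, sigma = gp_interpolate(x_obs, y_obs, x_query)
```

Near measured columns the posterior std collapses toward the sensor noise level, while far from any LiDAR return it grows toward the prior variance, so weakly supported regions can be discounted during free-space detection.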


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a96a/6112019/6386c4cb2f5b/sensors-18-02730-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a96a/6112019/18fe60d83f61/sensors-18-02730-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a96a/6112019/69ae19f2d158/sensors-18-02730-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a96a/6112019/84590acc362b/sensors-18-02730-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a96a/6112019/f2bc026268f7/sensors-18-02730-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a96a/6112019/6b59a028f7d2/sensors-18-02730-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a96a/6112019/bee8058b5b73/sensors-18-02730-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a96a/6112019/7bb9edd84953/sensors-18-02730-g008a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a96a/6112019/569a6c8825f6/sensors-18-02730-g009a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a96a/6112019/d382eb510646/sensors-18-02730-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a96a/6112019/46aa7ae21c2b/sensors-18-02730-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a96a/6112019/87a0a530da70/sensors-18-02730-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a96a/6112019/50afad0480c9/sensors-18-02730-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a96a/6112019/a5c0ff0a5cc1/sensors-18-02730-g014.jpg

Similar Articles

1. Robust Fusion of LiDAR and Wide-Angle Camera Data for Autonomous Mobile Robots.
   Sensors (Basel). 2018 Aug 20;18(8):2730. doi: 10.3390/s18082730.
2. End-to-End Multimodal Sensor Dataset Collection Framework for Autonomous Vehicles.
   Sensors (Basel). 2023 Jul 29;23(15):6783. doi: 10.3390/s23156783.
3. High Definition 3D Map Creation Using GNSS/IMU/LiDAR Sensor Integration to Support Autonomous Vehicle Navigation.
   Sensors (Basel). 2020 Feb 7;20(3):899. doi: 10.3390/s20030899.
4. Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review.
   Sensors (Basel). 2021 Mar 18;21(6):2140. doi: 10.3390/s21062140.
5. Multitarget-Tracking Method Based on the Fusion of Millimeter-Wave Radar and LiDAR Sensor Information for Autonomous Vehicles.
   Sensors (Basel). 2023 Aug 3;23(15):6920. doi: 10.3390/s23156920.
6. Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles.
   Sensors (Basel). 2019 Oct 9;19(20):4357. doi: 10.3390/s19204357.
7. Survey of Datafusion Techniques for Laser and Vision Based Sensor Integration for Autonomous Navigation.
   Sensors (Basel). 2020 Apr 12;20(8):2180. doi: 10.3390/s20082180.
8. Accuracy-Power Controllable LiDAR Sensor System with 3D Object Recognition for Autonomous Vehicle.
   Sensors (Basel). 2020 Oct 7;20(19):5706. doi: 10.3390/s20195706.
9. Estimation of the Closest In-Path Vehicle by Low-Channel LiDAR and Camera Sensor Fusion for Autonomous Vehicles.
   Sensors (Basel). 2021 Apr 30;21(9):3124. doi: 10.3390/s21093124.
10. Towards Camera-LIDAR Fusion-Based Terrain Modelling for Planetary Surfaces: Review and Analysis.
    Sensors (Basel). 2016 Nov 20;16(11):1952. doi: 10.3390/s16111952.

Cited By

1. A Survey of the Multi-Sensor Fusion Object Detection Task in Autonomous Driving.
   Sensors (Basel). 2025 Apr 29;25(9):2794. doi: 10.3390/s25092794.
2. BAFusion: Bidirectional Attention Fusion for 3D Object Detection Based on LiDAR and Camera.
   Sensors (Basel). 2024 Jul 20;24(14):4718. doi: 10.3390/s24144718.
3. Probability-Based LIDAR-Camera Calibration Considering Target Positions and Parameter Evaluation Using a Data Fusion Map.
   Sensors (Basel). 2024 Jun 19;24(12):3981. doi: 10.3390/s24123981.

References

1. Robust smoothing of gridded data in one and higher dimensions with missing values.
   Comput Stat Data Anal. 2010 Apr 1;54(4):1167-1178. doi: 10.1016/j.csda.2009.09.020.
2. Calibration between color camera and 3D LIDAR instruments with a polygonal planar board.
   Sensors (Basel). 2014 Mar 17;14(3):5333-53. doi: 10.3390/s140305333.
3. 3D LIDAR-camera extrinsic calibration using an arbitrary trihedron.
   Sensors (Basel). 2013 Feb 1;13(2):1902-18. doi: 10.3390/s130201902.
4. Research on WSN reliable ranging and positioning algorithm for forest environment.
   Sci Rep. 2024 Mar 5;14(1):5417. doi: 10.1038/s41598-024-56180-5.
5. Characterization of the iPhone LiDAR-Based Sensing System for Vibration Measurement and Modal Analysis.
   Sensors (Basel). 2023 Sep 12;23(18):7832. doi: 10.3390/s23187832.
6. SLAM and 3D Semantic Reconstruction Based on the Fusion of Lidar and Monocular Vision.
   Sensors (Basel). 2023 Jan 29;23(3):1502. doi: 10.3390/s23031502.
7. An Entropy Analysis-Based Window Size Optimization Scheme for Merging LiDAR Data Frames.
   Sensors (Basel). 2022 Nov 29;22(23):9293. doi: 10.3390/s22239293.
8. Automatic Extrinsic Calibration of 3D LIDAR and Multi-Cameras Based on Graph Optimization.
   Sensors (Basel). 2022 Mar 13;22(6):2221. doi: 10.3390/s22062221.
9. Free Space Detection Algorithm Using Object Tracking for Autonomous Vehicles.
   Sensors (Basel). 2021 Dec 31;22(1):315. doi: 10.3390/s22010315.
10. Convolution-Based Encoding of Depth Images for Transfer Learning in RGB-D Scene Classification.
    Sensors (Basel). 2021 Nov 28;21(23):7950. doi: 10.3390/s21237950.