

Free Space Detection Using Camera-LiDAR Fusion in a Bird's Eye View Plane.

Affiliations

Department of Smart Car Engineering, Chungbuk National University, Cheongju 28644, Korea.

Department of Control and Robot Engineering, Chungbuk National University, Cheongju 28644, Korea.

Publication Information

Sensors (Basel). 2021 Nov 17;21(22):7623. doi: 10.3390/s21227623.

DOI: 10.3390/s21227623
PMID: 34833698
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8619025/
Abstract

Although numerous road segmentation studies have utilized vision data, obtaining robust classification is still challenging due to vision sensor noise and target object deformation. Long-distance images are still problematic because of blur and low resolution, and these features make distinguishing roads from objects difficult. This study utilizes light detection and ranging (LiDAR), which generates information that camera images lack, such as distance, height, and intensity, as a reliable supplement to address this problem. In contrast to conventional approaches, additional domain transformation to a bird's eye view space is executed to obtain long-range data with resolutions comparable to those of short-range data. This study proposes a convolutional neural network architecture that processes data transformed to a bird's eye view plane. The network's pathways are split into two parts to resolve calibration errors in the transformed image and point cloud. The network, which has modules that operate sequentially at various scaled dilated convolution rates, is designed to quickly and accurately handle a wide range of data. Comprehensive empirical studies using the Karlsruhe Institute of Technology and Toyota Technological Institute's (KITTI's) road detection benchmarks demonstrate that this study's approach takes advantage of camera and LiDAR information, achieving robust road detection with short runtimes. Our result ranks 22nd in the KITTI's leaderboard and shows real-time performance.
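The abstract describes transforming camera and LiDAR data into a bird's eye view (BEV) plane so that long-range regions retain usable resolution. As a hedged illustration only (not the paper's actual pipeline), the sketch below rasterizes a LiDAR point cloud into a BEV grid with max-height, mean-intensity, and density channels; the channel choice, spatial ranges, and 0.1 m resolution are assumptions for demonstration.

```python
import numpy as np

def lidar_to_bev(points, x_range=(0.0, 40.0), y_range=(-10.0, 10.0), resolution=0.1):
    """Project LiDAR points (N, 4: x, y, z, intensity) onto a BEV grid.

    Returns an (H, W, 3) array holding max height, mean intensity, and a
    normalized point density per cell -- one plausible BEV encoding; the
    paper's exact input channels may differ.
    """
    H = int(round((x_range[1] - x_range[0]) / resolution))
    W = int(round((y_range[1] - y_range[0]) / resolution))
    bev = np.zeros((H, W, 3), dtype=np.float32)
    counts = np.zeros((H, W), dtype=np.int32)

    # Keep only points that fall inside the grid bounds.
    mask = (
        (points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1])
        & (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1])
    )
    pts = points[mask]

    # Map metric coordinates to grid indices (forward -> rows, lateral -> cols).
    rows = ((pts[:, 0] - x_range[0]) / resolution).astype(int)
    cols = ((pts[:, 1] - y_range[0]) / resolution).astype(int)

    for r, c, p in zip(rows, cols, pts):
        bev[r, c, 0] = max(bev[r, c, 0], p[2])  # max height in cell
        bev[r, c, 1] += p[3]                    # accumulate intensity
        counts[r, c] += 1

    occupied = counts > 0
    bev[occupied, 1] /= counts[occupied]        # mean intensity per cell
    bev[..., 2] = np.minimum(counts, 10) / 10.0 # clipped, normalized density
    return bev
```

A BEV tensor like this can be fed to a fully convolutional network alongside the camera image warped into the same plane; keeping the two modalities in separate network pathways, as the paper does, tolerates small calibration errors between them.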


Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d69a/8619025/e9b35e4ac0cc/sensors-21-07623-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d69a/8619025/104b1a974719/sensors-21-07623-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d69a/8619025/2ddae6d70147/sensors-21-07623-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d69a/8619025/c30dc3b12ae6/sensors-21-07623-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d69a/8619025/f2dbc1c41451/sensors-21-07623-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d69a/8619025/45008cd4afba/sensors-21-07623-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d69a/8619025/14c5d2927efa/sensors-21-07623-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d69a/8619025/e2c4c72fc2fa/sensors-21-07623-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d69a/8619025/73106c9cd382/sensors-21-07623-g009.jpg

Similar Articles

1. Free Space Detection Using Camera-LiDAR Fusion in a Bird's Eye View Plane.
Sensors (Basel). 2021 Nov 17;21(22):7623. doi: 10.3390/s21227623.
2. BiFNet: Bidirectional Fusion Network for Road Segmentation.
IEEE Trans Cybern. 2022 Sep;52(9):8617-8628. doi: 10.1109/TCYB.2021.3105488. Epub 2022 Aug 18.
3. Fast vehicle detection based on colored point cloud with bird's eye view representation.
Sci Rep. 2023 May 8;13(1):7447. doi: 10.1038/s41598-023-34479-z.
4. Estimation of the Closest In-Path Vehicle by Low-Channel LiDAR and Camera Sensor Fusion for Autonomous Vehicles.
Sensors (Basel). 2021 Apr 30;21(9):3124. doi: 10.3390/s21093124.
5. Towards Interpretable Camera and LiDAR Data Fusion for Autonomous Ground Vehicles Localisation.
Sensors (Basel). 2022 Oct 20;22(20):8021. doi: 10.3390/s22208021.
6. Accurate 3D to 2D Object Distance Estimation from the Mapped Point Cloud Data.
Sensors (Basel). 2023 Feb 13;23(4):2103. doi: 10.3390/s23042103.
7. Dataset of bird's eye chilies farm for stereo image semantic segmentation.
Data Brief. 2023 Oct 23;51:109714. doi: 10.1016/j.dib.2023.109714. eCollection 2023 Dec.
8. Accuracy and Speed Improvement of Event Camera Motion Estimation Using a Bird's-Eye View Transformation.
Sensors (Basel). 2022 Jan 20;22(3):773. doi: 10.3390/s22030773.
9. Monocular BEV Perception of Road Scenes via Front-to-Top View Projection.
IEEE Trans Pattern Anal Mach Intell. 2024 Sep;46(9):6109-6125. doi: 10.1109/TPAMI.2024.3377812. Epub 2024 Aug 6.
10. HeightFormer: Explicit Height Modeling Without Extra Data for Camera-Only 3D Object Detection in Bird's Eye View.
IEEE Trans Image Process. 2025;34:689-700. doi: 10.1109/TIP.2024.3427701. Epub 2025 Jan 28.

Cited By

1. LiDAR-camera fusion for road detection using a recurrent conditional random field model.
Sci Rep. 2022 Jul 5;12(1):11320. doi: 10.1038/s41598-022-14438-w.
2. Real-Time LIDAR-Based Urban Road and Sidewalk Detection for Autonomous Vehicles.
Sensors (Basel). 2021 Dec 28;22(1):194. doi: 10.3390/s22010194.

References

1. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.
IEEE Trans Pattern Anal Mach Intell. 2018 Apr;40(4):834-848. doi: 10.1109/TPAMI.2017.2699184. Epub 2017 Apr 27.