

Real-time depth completion based on LiDAR-stereo for autonomous driving

Authors

Wei Ming, Zhu Ming, Zhang Yaoyuan, Wang Jiarong, Sun Jiaqi

Affiliations

Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China.

University of Chinese Academy of Sciences, Beijing, China.

Publication

Front Neurorobot. 2023 Apr 18;17:1124676. doi: 10.3389/fnbot.2023.1124676. eCollection 2023.

DOI: 10.3389/fnbot.2023.1124676
PMID: 37144086
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10151502/
Abstract

The integration of multiple sensors is a crucial and emerging trend in the development of autonomous driving technology. The depth image obtained by stereo matching of a binocular camera is easily influenced by environment and distance. The LiDAR point cloud has strong penetrability but is much sparser than binocular images. LiDAR-stereo fusion can combine the strengths of the two sensors and maximize the acquisition of reliable three-dimensional information to improve the safety of autonomous driving. Cross-sensor fusion is a key issue in the development of autonomous driving technology. This study proposed a real-time LiDAR-stereo depth completion network without 3D convolution that fuses point clouds and binocular images using injection guidance. A kernel-connected spatial propagation network is then used to refine the depth, yielding dense 3D output that is more accurate for autonomous driving. Experimental results on the KITTI dataset showed that our method achieves real-time performance. Further, we demonstrated our solution's ability to cope with sensor defects and challenging environmental conditions using the p-KITTI dataset.
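The spatial-propagation refinement mentioned in the abstract can be sketched in a minimal form. This is an illustration only, not the authors' implementation: it assumes a fixed 8-neighbour kernel and takes the affinity weights as given (in the paper they would be predicted by the network, and the kernel connectivity is learned), but it shows the core idea of iteratively blending each pixel's depth with its neighbours while re-injecting the reliable sparse LiDAR measurements.

```python
import numpy as np

def spatial_propagation_step(depth, affinity, sparse_depth, sparse_mask):
    """One refinement iteration: each pixel's depth becomes an
    affinity-weighted blend of itself and its 8 neighbours, then the
    known sparse LiDAR depths are re-injected so measurements stay fixed.

    depth:        (H, W) current dense depth estimate
    affinity:     (8, H, W) non-negative neighbour weights, summing to < 1
                  per pixel (the remainder keeps the current estimate)
    sparse_depth: (H, W) LiDAR depths, valid only where sparse_mask is True
    sparse_mask:  (H, W) boolean mask of valid LiDAR points
    """
    H, W = depth.shape
    padded = np.pad(depth, 1, mode="edge")
    # Gather the 8-neighbourhood of every pixel as a (8, H, W) stack.
    neighbours = np.stack([
        padded[dy:dy + H, dx:dx + W]
        for dy in range(3) for dx in range(3)
        if not (dy == 1 and dx == 1)
    ])
    # The residual weight preserves part of the current estimate.
    self_weight = 1.0 - affinity.sum(axis=0)
    refined = self_weight * depth + (affinity * neighbours).sum(axis=0)
    # Re-inject the reliable sparse LiDAR depths.
    refined = np.where(sparse_mask, sparse_depth, refined)
    return refined
```

In practice such a step is applied for a small fixed number of iterations; because each output pixel is a convex combination of input depths (plus the re-injected LiDAR points), the refinement cannot drift outside the range of the initial estimate.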


Figures 1-9 (PMC full text):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7038/10151502/0c9e665ab32e/fnbot-17-1124676-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7038/10151502/096c5d81c5c8/fnbot-17-1124676-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7038/10151502/941aed8508b8/fnbot-17-1124676-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7038/10151502/33a958ea3dde/fnbot-17-1124676-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7038/10151502/a58ce66cfd44/fnbot-17-1124676-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7038/10151502/c48344bfa190/fnbot-17-1124676-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7038/10151502/998fafd85623/fnbot-17-1124676-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7038/10151502/ece4700c8a42/fnbot-17-1124676-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7038/10151502/f26ae0511b26/fnbot-17-1124676-g0009.jpg

Similar Articles

1. Real-time depth completion based on LiDAR-stereo for autonomous driving.
Front Neurorobot. 2023 Apr 18;17:1124676. doi: 10.3389/fnbot.2023.1124676. eCollection 2023.
2. Real time object detection using LiDAR and camera fusion for autonomous driving.
Sci Rep. 2023 May 17;13(1):8056. doi: 10.1038/s41598-023-35170-z.
3. Efficient Stereo Depth Estimation for Pseudo-LiDAR: A Self-Supervised Approach Based on Multi-Input ResNet Encoder.
Sensors (Basel). 2023 Feb 2;23(3):1650. doi: 10.3390/s23031650.
4. PLIN: A Network for Pseudo-LiDAR Point Cloud Interpolation.
Sensors (Basel). 2020 Mar 12;20(6):1573. doi: 10.3390/s20061573.
5. 3D Object Detection with SLS-Fusion Network in Foggy Weather Conditions.
Sensors (Basel). 2021 Oct 9;21(20):6711. doi: 10.3390/s21206711.
6. Guided Depth Completion with Instance Segmentation Fusion in Autonomous Driving Applications.
Sensors (Basel). 2022 Dec 7;22(24):9578. doi: 10.3390/s22249578.
7. 3D Object Detection for Self-Driving Cars Using Video and LiDAR: An Ablation Study.
Sensors (Basel). 2023 Mar 17;23(6):3223. doi: 10.3390/s23063223.
8. Learning Guided Convolutional Network for Depth Completion.
IEEE Trans Image Process. 2021;30:1116-1129. doi: 10.1109/TIP.2020.3040528. Epub 2020 Dec 15.
9. An end-to-end stereo matching algorithm based on improved convolutional neural network.
Math Biosci Eng. 2020 Nov 6;17(6):7787-7803. doi: 10.3934/mbe.2020396.
10. A Survey on Deep-Learning-Based LiDAR 3D Object Detection for Autonomous Driving.
Sensors (Basel). 2022 Dec 7;22(24):9577. doi: 10.3390/s22249577.

References Cited in This Article

1. A Kalman-Filter-Incorporated Latent Factor Analysis Model for Temporally Dynamic Sparse Data.
IEEE Trans Cybern. 2023 Sep;53(9):5788-5801. doi: 10.1109/TCYB.2022.3185117. Epub 2023 Aug 17.
2. Adaptive Context-Aware Multi-Modal Network for Depth Completion.
IEEE Trans Image Process. 2021;30:5264-5276. doi: 10.1109/TIP.2021.3079821. Epub 2021 May 31.
3. An α-β-Divergence-Generalized Recommender for Highly Accurate Predictions of Missing User Preferences.
IEEE Trans Cybern. 2022 Aug;52(8):8006-8018. doi: 10.1109/TCYB.2020.3026425. Epub 2022 Jul 19.
4. HMS-Net: Hierarchical Multi-scale Sparsity-invariant Network for Sparse Depth Completion.
IEEE Trans Image Process. 2019 Dec 31. doi: 10.1109/TIP.2019.2960589.
5. Learning Depth with Convolutional Spatial Propagation Network.
IEEE Trans Pattern Anal Mach Intell. 2020 Oct;42(10):2361-2379. doi: 10.1109/TPAMI.2019.2947374. Epub 2019 Oct 15.
6. Confidence Propagation through CNNs for Guided Sparse Depth Regression.
IEEE Trans Pattern Anal Mach Intell. 2020 Oct;42(10):2423-2436. doi: 10.1109/TPAMI.2019.2929170. Epub 2019 Jul 17.