

Guided Depth Completion with Instance Segmentation Fusion in Autonomous Driving Applications.

Affiliations

Electrical and Computer Engineering Department, Western Michigan University, Kalamazoo, MI 49008, USA.

Civil and Construction Engineering Department, Western Michigan University, Kalamazoo, MI 49008, USA.

Publication Information

Sensors (Basel). 2022 Dec 7;22(24):9578. doi: 10.3390/s22249578.

DOI: 10.3390/s22249578
PMID: 36559946
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9781309/
Abstract

Pixel-level depth information is crucial to many applications, such as autonomous driving, robotics navigation, 3D scene reconstruction, and augmented reality. However, depth information, which is usually acquired by sensors such as LiDAR, is sparse. Depth completion is a process that predicts missing pixels' depth information from a set of sparse depth measurements. Most of the ongoing research applies deep neural networks on the entire sparse depth map and camera scene without utilizing any information about the available objects, which results in more complex and resource-demanding networks. In this work, we propose to use image instance segmentation to detect objects of interest with pixel-level locations, along with sparse depth data, to support depth completion. The framework utilizes a two-branch encoder-decoder deep neural network. It fuses information about scene available objects, such as objects' type and pixel-level location, LiDAR, and RGB camera, to predict dense accurate depth maps. Experimental results on the KITTI dataset showed faster training and improved prediction accuracy. The proposed method reaches a convergence state faster and surpasses the baseline model in all evaluation metrics.
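The paper's network itself is a two-branch encoder-decoder and is not reproduced here, but the core intuition — pixels on the same detected object tend to share similar depth, so instance masks can guide where sparse LiDAR returns are propagated — can be illustrated with a deliberately simple, non-learned baseline. The function below is a hypothetical sketch, not the authors' method: it fills missing depth within each instance using the median of that instance's observed LiDAR points.

```python
import numpy as np

def complete_depth_per_instance(sparse_depth, instance_mask, background_id=0):
    """Non-learned baseline mimicking instance-guided depth completion.

    sparse_depth  : (H, W) float array, 0.0 where no LiDAR return exists
    instance_mask : (H, W) int array, one id per detected object
    Missing pixels inside each instance are filled with the median of the
    instance's observed depths; background pixels are left untouched.
    """
    dense = sparse_depth.copy()
    for inst_id in np.unique(instance_mask):
        if inst_id == background_id:
            continue
        region = instance_mask == inst_id
        observed = sparse_depth[region]
        observed = observed[observed > 0]
        if observed.size == 0:
            continue  # no LiDAR hits on this object; nothing to propagate
        fill = np.median(observed)
        dense[region & (sparse_depth == 0)] = fill
    return dense

# Toy scene: a 4x4 image with one object (id 1) hit by two LiDAR points.
sparse = np.zeros((4, 4))
mask = np.zeros((4, 4), dtype=int)
mask[1:3, 1:3] = 1
sparse[1, 1] = 10.0
sparse[2, 2] = 12.0
dense = complete_depth_per_instance(sparse, mask)
print(dense[1, 2], dense[2, 1])  # object pixels filled with median 11.0
```

The learned framework replaces the median heuristic with a network branch that consumes the mask, RGB, and sparse depth jointly, but the role of the mask — restricting depth propagation to object boundaries — is the same.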


Figures (PMC9781309, g001–g015)

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/54e3/9781309/5449e5a016dc/sensors-22-09578-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/54e3/9781309/277c3d8a1c81/sensors-22-09578-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/54e3/9781309/ebcd3aea68c4/sensors-22-09578-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/54e3/9781309/6811e9857106/sensors-22-09578-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/54e3/9781309/d42bafe00fc7/sensors-22-09578-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/54e3/9781309/2d05f6b45dcb/sensors-22-09578-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/54e3/9781309/a89b14859aec/sensors-22-09578-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/54e3/9781309/83d43d08f699/sensors-22-09578-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/54e3/9781309/3ee896b25422/sensors-22-09578-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/54e3/9781309/9aaa1dfb9a07/sensors-22-09578-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/54e3/9781309/8f9be59cb5c4/sensors-22-09578-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/54e3/9781309/52c785474716/sensors-22-09578-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/54e3/9781309/732ba5c7d84b/sensors-22-09578-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/54e3/9781309/baf6cb493cea/sensors-22-09578-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/54e3/9781309/06eeefe0dfeb/sensors-22-09578-g015.jpg

Similar Articles

1. Guided Depth Completion with Instance Segmentation Fusion in Autonomous Driving Applications.
Sensors (Basel). 2022 Dec 7;22(24):9578. doi: 10.3390/s22249578.
2. Brain tumor segmentation and detection in MRI using convolutional neural networks and VGG16.
Cancer Biomark. 2025 Mar;42(3):18758592241311184. doi: 10.1177/18758592241311184. Epub 2025 Apr 4.
3. HMS-Net: Hierarchical Multi-scale Sparsity-invariant Network for Sparse Depth Completion.
IEEE Trans Image Process. 2019 Dec 31. doi: 10.1109/TIP.2019.2960589.
4. Multi-Task Foreground-Aware Network with Depth Completion for Enhanced RGB-D Fusion Object Detection Based on Transformer.
Sensors (Basel). 2024 Apr 8;24(7):2374. doi: 10.3390/s24072374.
5. Real-time depth completion based on LiDAR-stereo for autonomous driving.
Front Neurorobot. 2023 Apr 18;17:1124676. doi: 10.3389/fnbot.2023.1124676. eCollection 2023.
6. PLIN: A Network for Pseudo-LiDAR Point Cloud Interpolation.
Sensors (Basel). 2020 Mar 12;20(6):1573. doi: 10.3390/s20061573.
7. Learning Guided Convolutional Network for Depth Completion.
IEEE Trans Image Process. 2021;30:1116-1129. doi: 10.1109/TIP.2020.3040528. Epub 2020 Dec 15.
8. Recent Advances in Conventional and Deep Learning-Based Depth Completion: A Survey.
IEEE Trans Neural Netw Learn Syst. 2024 Mar;35(3):3395-3415. doi: 10.1109/TNNLS.2022.3201534. Epub 2024 Feb 29.
9. Real time object detection using LiDAR and camera fusion for autonomous driving.
Sci Rep. 2023 May 17;13(1):8056. doi: 10.1038/s41598-023-35170-z.
10. Distance Transform Pooling Neural Network for LiDAR Depth Completion.
IEEE Trans Neural Netw Learn Syst. 2023 Sep;34(9):5580-5589. doi: 10.1109/TNNLS.2021.3129801. Epub 2023 Sep 1.
