

Expanding Sparse Radar Depth Based on Joint Bilateral Filter for Radar-Guided Monocular Depth Estimation

Authors

Lo Chen-Chou, Vandewalle Patrick

Affiliation

Processing Speech and Images (PSI), Department of Electrical Engineering (ESAT), KU Leuven, 3001 Leuven, Belgium.

Publication

Sensors (Basel). 2024 Mar 14;24(6):1864. doi: 10.3390/s24061864.

DOI: 10.3390/s24061864
PMID: 38544126
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10975743/
Abstract

Radar data can provide additional depth information for monocular depth estimation. It provides a cost-effective solution and is robust in various weather conditions, particularly when compared with lidar. Given the sparse and limited vertical field of view of radar signals, existing methods employ either a vertical extension of radar points or the training of a preprocessing neural network to extend sparse radar points under lidar supervision. In this work, we present a novel radar expansion technique inspired by the joint bilateral filter, tailored for radar-guided monocular depth estimation. Our approach is motivated by the synergy of spatial and range kernels within the joint bilateral filter. Unlike traditional methods that assign a weighted average of nearby pixels to the current pixel, we expand sparse radar points by calculating a confidence score based on the values of spatial and range kernels. Additionally, we propose the use of a range-aware window size for radar expansion instead of a fixed window size in the image plane. Our proposed method effectively increases the number of radar points from an average of 39 points in a raw radar frame to an average of 100 K points. Notably, the expanded radar exhibits fewer intrinsic errors when compared with raw radar and previous methodologies. To validate our approach, we assess our proposed depth estimation model on the nuScenes dataset. Comparative evaluations with existing radar-guided depth estimation models demonstrate its state-of-the-art performance.
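The expansion scheme the abstract describes — a per-pixel confidence score from the product of a spatial kernel and a range kernel, with a range-aware window whose size shrinks for distant radar returns — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, parameter values, and the use of a grayscale guide image are assumptions.

```python
import numpy as np

def expand_radar_depth(guide, radar_depth, sigma_s=5.0, sigma_r=0.1,
                       base_window=30, conf_thresh=0.3):
    """Joint-bilateral-filter-style expansion of sparse radar depth.

    guide:       (H, W) grayscale guide image, values in [0, 1]
    radar_depth: (H, W) sparse depth map, 0 where there is no radar return
    Returns an (H, W) expanded depth map.
    """
    H, W = radar_depth.shape
    expanded = np.zeros_like(radar_depth)
    confidence = np.zeros_like(radar_depth)

    ys, xs = np.nonzero(radar_depth)
    for y0, x0 in zip(ys, xs):
        d = radar_depth[y0, x0]
        # Range-aware window: a nearer object covers more image pixels,
        # so the half-window shrinks as depth grows.
        half = max(1, int(base_window / d))
        y_lo, y_hi = max(0, y0 - half), min(H, y0 + half + 1)
        x_lo, x_hi = max(0, x0 - half), min(W, x0 + half + 1)

        yy, xx = np.mgrid[y_lo:y_hi, x_lo:x_hi]
        # Spatial kernel: penalizes image-plane distance to the radar point.
        spatial = np.exp(-((yy - y0) ** 2 + (xx - x0) ** 2) / (2 * sigma_s ** 2))
        # Range kernel: penalizes guide-intensity difference to the radar pixel.
        rng = np.exp(-((guide[y_lo:y_hi, x_lo:x_hi] - guide[y0, x0]) ** 2)
                     / (2 * sigma_r ** 2))
        conf = spatial * rng

        # Assign the radar depth where confidence clears the threshold and
        # beats any previously assigned, less confident hypothesis.
        patch_conf = confidence[y_lo:y_hi, x_lo:x_hi]
        take = (conf > conf_thresh) & (conf > patch_conf)
        expanded[y_lo:y_hi, x_lo:x_hi][take] = d
        confidence[y_lo:y_hi, x_lo:x_hi][take] = conf[take]
    return expanded
```

Unlike a conventional joint bilateral filter, which would blend neighboring depths into a weighted average, each output pixel here copies a single radar depth and the kernel product serves only as a confidence gate, mirroring the distinction the abstract draws.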


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/315c/10975743/d405bf15170f/sensors-24-01864-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/315c/10975743/e489af141bdb/sensors-24-01864-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/315c/10975743/b3e799ff4ca9/sensors-24-01864-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/315c/10975743/2550d4149251/sensors-24-01864-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/315c/10975743/9f9d2fe94a93/sensors-24-01864-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/315c/10975743/2f697a7dd3bf/sensors-24-01864-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/315c/10975743/1d05c4ebac1e/sensors-24-01864-g007.jpg

Similar Articles

1. Expanding Sparse Radar Depth Based on Joint Bilateral Filter for Radar-Guided Monocular Depth Estimation.
Sensors (Basel). 2024 Mar 14;24(6):1864. doi: 10.3390/s24061864.
2. Radar-Camera Fusion Network for Depth Estimation in Structured Driving Scenes.
Sensors (Basel). 2023 Aug 31;23(17):7560. doi: 10.3390/s23177560.
3. Monocular Depth Estimation from a Fisheye Camera Based on Knowledge Distillation.
Sensors (Basel). 2023 Dec 16;23(24):9866. doi: 10.3390/s23249866.
4. Multitarget-Tracking Method Based on the Fusion of Millimeter-Wave Radar and LiDAR Sensor Information for Autonomous Vehicles.
Sensors (Basel). 2023 Aug 3;23(15):6920. doi: 10.3390/s23156920.
5. MonoDCN: Monocular 3D object detection based on dynamic convolution.
PLoS One. 2022 Oct 4;17(10):e0275438. doi: 10.1371/journal.pone.0275438. eCollection 2022.
6. Unsupervised Estimation of Monocular Depth and VO in Dynamic Environments via Hybrid Masks.
IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):2023-2033. doi: 10.1109/TNNLS.2021.3100895. Epub 2022 May 2.
7. Depth-Guided Optimization of Neural Radiance Fields for Indoor Multi-View Stereo.
IEEE Trans Pattern Anal Mach Intell. 2023 Sep;45(9):10835-10849. doi: 10.1109/TPAMI.2023.3263464. Epub 2023 Aug 7.
8. Learning Steering Kernels for Guided Depth Completion.
IEEE Trans Image Process. 2021;30:2850-2861. doi: 10.1109/TIP.2021.3055629. Epub 2021 Feb 12.
9. Exploring Chromatic Aberration and Defocus Blur for Relative Depth Estimation From Monocular Hyperspectral Image.
IEEE Trans Image Process. 2021;30:4357-4370. doi: 10.1109/TIP.2021.3071682. Epub 2021 Apr 21.
10. Geometric Occlusion Analysis in Depth Estimation Using Integral Guided Filter for Light-Field Image.
IEEE Trans Image Process. 2017 Dec;26(12):5758-5771. doi: 10.1109/TIP.2017.2745100. Epub 2017 Aug 25.
