

An Adaptive Fusion Algorithm for Depth Completion.

Affiliations

Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China.

University of Chinese Academy of Sciences, Beijing 100049, China.

Publication Info

Sensors (Basel). 2022 Jun 18;22(12):4603. doi: 10.3390/s22124603.

DOI: 10.3390/s22124603
PMID: 35746385
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9227403/
Abstract

Dense depth perception is critical for many applications. However, LiDAR sensors can only provide sparse depth measurements. Therefore, completing the sparse LiDAR data becomes an important task. Due to the rich textural information of RGB images, researchers commonly use synchronized RGB images to guide this depth completion. However, most existing depth completion methods simply fuse LiDAR information with RGB image information through feature concatenation or element-wise addition. In view of this, this paper proposes a method to adaptively fuse the information from these two sensors by generating different convolutional kernels according to the content and positions of the feature vectors. Specifically, we divided the features into different blocks and utilized an attention network to generate a different kernel weight for each block. These kernels were then applied to fuse the multi-modal features. Using the KITTI depth completion dataset, our method outperformed the state-of-the-art FCFR-Net method by 0.01 for the inverse mean absolute error (iMAE) metric. Furthermore, our method achieved a good balance of runtime and accuracy, which would make our method more suitable for some real-time applications.
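The fusion scheme the abstract describes — dividing features into blocks and letting an attention network produce a separate kernel weight per block — can be sketched in miniature as follows. This is an illustrative sketch only: the block partitioning, the `score_fn` stand-in for the attention network, and all shapes are assumptions for demonstration, not the paper's actual architecture.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def adaptive_fuse(lidar_blocks, rgb_blocks, score_fn):
    """Fuse per-block LiDAR and RGB feature vectors.

    score_fn stands in for the attention network: given the
    concatenated block features, it returns one score per modality.
    The softmax-normalized scores act as content-dependent fusion
    weights, so each block gets its own weighting instead of a fixed
    concatenation or element-wise addition.
    """
    fused = []
    for lb, rb in zip(lidar_blocks, rgb_blocks):
        w_lidar, w_rgb = softmax(score_fn(lb + rb))
        fused.append([w_lidar * l + w_rgb * r for l, r in zip(lb, rb)])
    return fused

# Toy score function (hypothetical): score each modality by its
# summed activation over its half of the concatenated features.
def sum_score(concat):
    half = len(concat) // 2
    return [sum(concat[:half]), sum(concat[half:])]

fused = adaptive_fuse([[1.0, 2.0], [5.0, 5.0]],
                      [[3.0, 4.0], [0.0, 1.0]],
                      sum_score)
```

Because the weights are a softmax, each fused value is a convex combination of the two modalities, and blocks where one sensor carries stronger features lean toward that sensor — the content- and position-dependent behavior the abstract contrasts with plain concatenation or addition.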


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/76f7/9227403/a246584a853d/sensors-22-04603-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/76f7/9227403/b0082e186ae8/sensors-22-04603-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/76f7/9227403/a09dd4ad8b14/sensors-22-04603-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/76f7/9227403/a566b19e94f7/sensors-22-04603-g004.jpg

Similar Articles

1. An Adaptive Fusion Algorithm for Depth Completion.
Sensors (Basel). 2022 Jun 18;22(12):4603. doi: 10.3390/s22124603.
2. Learning Guided Convolutional Network for Depth Completion.
IEEE Trans Image Process. 2021;30:1116-1129. doi: 10.1109/TIP.2020.3040528. Epub 2020 Dec 15.
3. HMS-Net: Hierarchical Multi-scale Sparsity-invariant Network for Sparse Depth Completion.
IEEE Trans Image Process. 2019 Dec 31. doi: 10.1109/TIP.2019.2960589.
4. Multi-Task Foreground-Aware Network with Depth Completion for Enhanced RGB-D Fusion Object Detection Based on Transformer.
Sensors (Basel). 2024 Apr 8;24(7):2374. doi: 10.3390/s24072374.
5. SGSNet: A Lightweight Depth Completion Network Based on Secondary Guidance and Spatial Fusion.
Sensors (Basel). 2022 Aug 25;22(17):6414. doi: 10.3390/s22176414.
6. A Comprehensive Survey of Depth Completion Approaches.
Sensors (Basel). 2022 Sep 14;22(18):6969. doi: 10.3390/s22186969.
7. Learning Steering Kernels for Guided Depth Completion.
IEEE Trans Image Process. 2021;30:2850-2861. doi: 10.1109/TIP.2021.3055629. Epub 2021 Feb 12.
8. Real time object detection using LiDAR and camera fusion for autonomous driving.
Sci Rep. 2023 May 17;13(1):8056. doi: 10.1038/s41598-023-35170-z.
9. Guided Depth Completion with Instance Segmentation Fusion in Autonomous Driving Applications.
Sensors (Basel). 2022 Dec 7;22(24):9578. doi: 10.3390/s22249578.
10. Real-time depth completion based on LiDAR-stereo for autonomous driving.
Front Neurorobot. 2023 Apr 18;17:1124676. doi: 10.3389/fnbot.2023.1124676. eCollection 2023.

Cited By

1. Decomposed Multilateral Filtering for Accelerating Filtering with Multiple Guidance Images.
Sensors (Basel). 2024 Jan 19;24(2):633. doi: 10.3390/s24020633.
2. LiDAR Intensity Completion: Fully Exploiting the Message from LiDAR Sensors.
Sensors (Basel). 2022 Oct 4;22(19):7533. doi: 10.3390/s22197533.
3. Lightweight Depth Completion Network with Local Similarity-Preserving Knowledge Distillation.
Sensors (Basel). 2022 Sep 28;22(19):7388. doi: 10.3390/s22197388.

References

1. A novel perceptual two layer image fusion using deep learning for imbalanced COVID-19 dataset.
PeerJ Comput Sci. 2021 Feb 10;7:e364. doi: 10.7717/peerj-cs.364. eCollection 2021.
2. Learning Guided Convolutional Network for Depth Completion.
IEEE Trans Image Process. 2021;30:1116-1129. doi: 10.1109/TIP.2020.3040528. Epub 2020 Dec 15.
3. HMS-Net: Hierarchical Multi-scale Sparsity-invariant Network for Sparse Depth Completion.
IEEE Trans Image Process. 2019 Dec 31. doi: 10.1109/TIP.2019.2960589.
4. Learning Depth with Convolutional Spatial Propagation Network.
IEEE Trans Pattern Anal Mach Intell. 2020 Oct;42(10):2361-2379. doi: 10.1109/TPAMI.2019.2947374. Epub 2019 Oct 15.
5. Confidence Propagation through CNNs for Guided Sparse Depth Regression.
IEEE Trans Pattern Anal Mach Intell. 2020 Oct;42(10):2423-2436. doi: 10.1109/TPAMI.2019.2929170. Epub 2019 Jul 17.
6. Depth reconstruction from sparse samples: representation, algorithm, and sampling.
IEEE Trans Image Process. 2015 Jun;24(6):1983-96. doi: 10.1109/TIP.2015.2409551. Epub 2015 Mar 6.