Confidence Propagation through CNNs for Guided Sparse Depth Regression

Authors

Eldesokey Abdelrahman, Felsberg Michael, Khan Fahad Shahbaz

Publication

IEEE Trans Pattern Anal Mach Intell. 2020 Oct;42(10):2423-2436. doi: 10.1109/TPAMI.2019.2929170. Epub 2019 Jul 17.

DOI: 10.1109/TPAMI.2019.2929170
PMID: 31331882
Abstract

Generally, convolutional neural networks (CNNs) process data on a regular grid, e.g., data generated by ordinary cameras. Designing CNNs for sparse and irregularly spaced input data is still an open research problem with numerous applications in autonomous driving, robotics, and surveillance. In this paper, we propose an algebraically-constrained normalized convolution layer for CNNs with highly sparse input that has a smaller number of network parameters compared to related work. We propose novel strategies for determining the confidence from the convolution operation and propagating it to consecutive layers. We also propose an objective function that simultaneously minimizes the data error while maximizing the output confidence. To integrate structural information, we also investigate fusion strategies to combine depth and RGB information in our normalized convolution network framework. In addition, we introduce the use of output confidence as auxiliary information to improve the results. The capabilities of our normalized convolution network framework are demonstrated for the problem of scene depth completion. Comprehensive experiments are performed on the KITTI-Depth and the NYU-Depth-v2 datasets. The results clearly demonstrate that the proposed approach achieves superior performance while requiring only about 1-5 percent of the number of parameters compared to the state-of-the-art methods.

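The abstract describes the core operation only at a high level. As a rough illustration, a normalized convolution layer with confidence propagation, together with the kind of joint objective the abstract mentions, might be sketched in PyTorch as follows. This is a minimal sketch inferred from the abstract, not the authors' released implementation; the module name `NormalizedConv2d`, the softplus weight constraint, the helper `joint_loss`, and the trade-off weight `lam` are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalizedConv2d(nn.Module):
    """Sketch of a normalized convolution layer with confidence propagation.

    Inputs: sparse data x and a confidence map c in [0, 1] (1 = observed,
    0 = missing). The layer computes
        out   = conv(x * c, w) / (conv(c, w) + eps)   # confidence-weighted average
        c_out = conv(c, w) / sum(w)                    # propagated confidence
    with non-negative filter weights (enforced here via softplus), so the
    propagated confidence stays in [0, 1].
    """

    def __init__(self, in_ch, out_ch, kernel_size=3, eps=1e-8):
        super().__init__()
        self.eps = eps
        self.padding = kernel_size // 2
        # Unconstrained parameters; softplus maps them to w >= 0 in forward().
        self.weight = nn.Parameter(
            0.1 * torch.randn(out_ch, in_ch, kernel_size, kernel_size))
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x, c):
        w = F.softplus(self.weight)  # non-negative applicability function
        num = F.conv2d(x * c, w, padding=self.padding)
        den = F.conv2d(c, w, padding=self.padding)
        out = num / (den + self.eps) + self.bias.view(1, -1, 1, 1)
        # Normalize by total filter mass so propagated confidence is in [0, 1].
        c_out = den / (w.sum(dim=(1, 2, 3)).view(1, -1, 1, 1) + self.eps)
        return out, c_out

def joint_loss(pred, conf, target, valid, lam=0.1):
    """Sketch of the joint objective: minimize data error on pixels that have
    ground truth while maximizing mean output confidence (lam is an
    illustrative trade-off weight, not a value from the paper)."""
    data_err = (valid * (pred - target).abs()).sum() / valid.sum().clamp(min=1)
    return data_err - lam * conf.mean()

# Usage on a ~5%-dense depth map (hypothetical shapes):
layer = NormalizedConv2d(1, 16)
mask = (torch.rand(2, 1, 64, 64) > 0.95).float()
depth = torch.rand(2, 1, 64, 64) * mask
out, c_out = layer(depth, mask)  # out: (2, 16, 64, 64), c_out likewise
```

The design point the abstract highlights is that confidence is computed inside the convolution and handed to the next layer, rather than being fixed to the input sparsity mask throughout the network.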

Similar Articles

1. Confidence Propagation through CNNs for Guided Sparse Depth Regression.
IEEE Trans Pattern Anal Mach Intell. 2020 Oct;42(10):2423-2436. doi: 10.1109/TPAMI.2019.2929170. Epub 2019 Jul 17.
2. Learning Depth with Convolutional Spatial Propagation Network.
IEEE Trans Pattern Anal Mach Intell. 2020 Oct;42(10):2361-2379. doi: 10.1109/TPAMI.2019.2947374. Epub 2019 Oct 15.
3. Learning Guided Convolutional Network for Depth Completion.
IEEE Trans Image Process. 2021;30:1116-1129. doi: 10.1109/TIP.2020.3040528. Epub 2020 Dec 15.
4. HMS-Net: Hierarchical Multi-scale Sparsity-invariant Network for Sparse Depth Completion.
IEEE Trans Image Process. 2019 Dec 31. doi: 10.1109/TIP.2019.2960589.
5. Guided Depth Completion with Instance Segmentation Fusion in Autonomous Driving Applications.
Sensors (Basel). 2022 Dec 7;22(24):9578. doi: 10.3390/s22249578.
6. Adaptive Context-Aware Multi-Modal Network for Depth Completion.
IEEE Trans Image Process. 2021;30:5264-5276. doi: 10.1109/TIP.2021.3079821. Epub 2021 May 31.
7. CNNs-Based RGB-D Saliency Detection via Cross-View Transfer and Multiview Fusion.
IEEE Trans Cybern. 2018 Nov;48(11):3171-3183. doi: 10.1109/TCYB.2017.2761775. Epub 2017 Oct 31.
8. A Dual Neural Architecture Combined SqueezeNet with OctConv for LiDAR Data Classification.
Sensors (Basel). 2019 Nov 12;19(22):4927. doi: 10.3390/s19224927.
9. Non-local affinity adaptive acceleration propagation network for generating dense depth maps from LiDAR.
Opt Express. 2023 Jun 19;31(13):22012-22029. doi: 10.1364/OE.492187.
10. Monocular Depth Estimation: Lightweight Convolutional and Matrix Capsule Feature-Fusion Network.
Sensors (Basel). 2022 Aug 23;22(17):6344. doi: 10.3390/s22176344.

Cited By

1. GeometryFormer: Semi-Convolutional Transformer Integrated with Geometric Perception for Depth Completion in Autonomous Driving Scenes.
Sensors (Basel). 2024 Dec 18;24(24):8066. doi: 10.3390/s24248066.
2. A 256 × 256 LiDAR Imaging System Based on a 200 mW SPAD-Based SoC with Microlens Array and Lightweight RGB-Guided Depth Completion Neural Network.
Sensors (Basel). 2023 Aug 3;23(15):6927. doi: 10.3390/s23156927.
3. Real-time depth completion based on LiDAR-stereo for autonomous driving.
Front Neurorobot. 2023 Apr 18;17:1124676. doi: 10.3389/fnbot.2023.1124676. eCollection 2023.
4. Unsupervised Depth Completion Guided by Visual Inertial System and Confidence.
Sensors (Basel). 2023 Mar 24;23(7):3430. doi: 10.3390/s23073430.
5. SPNet: Structure preserving network for depth completion.
PLoS One. 2023 Jan 24;18(1):e0280886. doi: 10.1371/journal.pone.0280886. eCollection 2023.
6. INV-Flow2PoseNet: Light-Resistant Rigid Object Pose from Optical Flow of RGB-D Images Using Images, Normals and Vertices.
Sensors (Basel). 2022 Nov 14;22(22):8798. doi: 10.3390/s22228798.
7. A Comprehensive Survey of Depth Completion Approaches.
Sensors (Basel). 2022 Sep 14;22(18):6969. doi: 10.3390/s22186969.
8. An Adaptive Fusion Algorithm for Depth Completion.
Sensors (Basel). 2022 Jun 18;22(12):4603. doi: 10.3390/s22124603.