


SGSNet: A Lightweight Depth Completion Network Based on Secondary Guidance and Spatial Fusion.

Affiliations

The School of Automation, Central South University, Changsha 410083, China.

Beijing Institute of Automation Equipment, Beijing 100074, China.

Publication

Sensors (Basel). 2022 Aug 25;22(17):6414. doi: 10.3390/s22176414.

DOI: 10.3390/s22176414
PMID: 36080872
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9459817/
Abstract

The depth completion task aims to generate a dense depth map from a sparse depth map and the corresponding RGB image. As a data preprocessing task, obtaining denser depth maps without affecting the real-time performance of downstream tasks is the challenge. In this paper, we propose a lightweight depth completion network based on secondary guidance and spatial fusion named SGSNet. We design the image feature extraction module to better extract features from different scales between and within layers in parallel and to generate guidance features. Then, SGSNet uses the secondary guidance to complete the depth completion. The first guidance uses the lightweight guidance module to quickly guide LiDAR feature extraction with the texture features of RGB images. The second guidance uses the depth information completion module for sparse depth map feature completion and inputs it into the DA-CSPN++ module to complete the dense depth map re-guidance. By using a lightweight bootstrap module, the overall network runs ten times faster than the baseline. The overall network is relatively lightweight, up to thirty frames, which is sufficient to meet the speed needs of large SLAM and three-dimensional reconstruction for sensor data extraction. At the time of submission, the accuracy of the algorithm in SGSNet ranked first in the KITTI ranking of lightweight depth completion methods. It was 37.5% faster than the top published algorithms in the rank and was second in the full ranking.
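The two ideas the abstract describes — RGB appearance guiding how sparse LiDAR depth is densified, and a CSPN-style spatial-propagation refinement that repeatedly re-anchors the known measurements — can be illustrated with a toy sketch. This is not the authors' code: DA-CSPN++ learns its propagation affinities from features, whereas this sketch hand-sets them from grayscale similarity on a tiny grid, purely to show the data flow.

```python
# Toy sketch of RGB-guided depth completion with CSPN-style propagation.
# sparse: 2D list of depths, 0 = missing; rgb: 2D list of grayscale values.
# Each iteration averages neighbors with affinities from RGB similarity
# ("guidance") and re-anchors the valid sparse measurements.

def complete_depth(sparse, rgb, iters=50):
    h, w = len(sparse), len(sparse[0])
    # Initialise unknown cells with the mean of the known depths.
    known = [(r, c) for r in range(h) for c in range(w) if sparse[r][c] > 0]
    mean = sum(sparse[r][c] for r, c in known) / len(known)
    dense = [[sparse[r][c] if sparse[r][c] > 0 else mean for c in range(w)]
             for r in range(h)]
    for _ in range(iters):
        nxt = [row[:] for row in dense]
        for r in range(h):
            for c in range(w):
                if sparse[r][c] > 0:          # re-anchor valid LiDAR points
                    nxt[r][c] = sparse[r][c]
                    continue
                acc, wsum = 0.0, 0.0
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w:
                        # Affinity from RGB similarity: visually similar
                        # neighbors propagate depth more strongly.
                        a = 1.0 / (1.0 + abs(rgb[r][c] - rgb[rr][cc]))
                        acc += a * dense[rr][cc]
                        wsum += a
                nxt[r][c] = acc / wsum
        dense = nxt
    return dense
```

Because each update is a convex combination of neighbor depths and the sparse anchors are re-imposed every iteration, the completed map stays within the range of the measured depths; the learned affinities in the actual network play the role of the hand-set `a` here.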


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e0f8/9459817/c54c8d5eb653/sensors-22-06414-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e0f8/9459817/a0a189d5ef12/sensors-22-06414-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e0f8/9459817/05591b2ee6bd/sensors-22-06414-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e0f8/9459817/f8964e574006/sensors-22-06414-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e0f8/9459817/10cedda67954/sensors-22-06414-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e0f8/9459817/681d70e9cad7/sensors-22-06414-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e0f8/9459817/d171f754be96/sensors-22-06414-g006.jpg

Similar articles

1. SGSNet: A Lightweight Depth Completion Network Based on Secondary Guidance and Spatial Fusion.
Sensors (Basel). 2022 Aug 25;22(17):6414. doi: 10.3390/s22176414.
2. HMS-Net: Hierarchical Multi-scale Sparsity-invariant Network for Sparse Depth Completion.
IEEE Trans Image Process. 2019 Dec 31. doi: 10.1109/TIP.2019.2960589.
3. Multi-Task Foreground-Aware Network with Depth Completion for Enhanced RGB-D Fusion Object Detection Based on Transformer.
Sensors (Basel). 2024 Apr 8;24(7):2374. doi: 10.3390/s24072374.
4. An Adaptive Fusion Algorithm for Depth Completion.
Sensors (Basel). 2022 Jun 18;22(12):4603. doi: 10.3390/s22124603.
5. A Comprehensive Survey of Depth Completion Approaches.
Sensors (Basel). 2022 Sep 14;22(18):6969. doi: 10.3390/s22186969.
6. A Transformer-Based Image-Guided Depth-Completion Model with Dual-Attention Fusion Module.
Sensors (Basel). 2024 Sep 27;24(19):6270. doi: 10.3390/s24196270.
7. Distance Transform Pooling Neural Network for LiDAR Depth Completion.
IEEE Trans Neural Netw Learn Syst. 2023 Sep;34(9):5580-5589. doi: 10.1109/TNNLS.2021.3129801. Epub 2023 Sep 1.
8. Learning Guided Convolutional Network for Depth Completion.
IEEE Trans Image Process. 2021;30:1116-1129. doi: 10.1109/TIP.2020.3040528. Epub 2020 Dec 15.
9. BDIS-SLAM: a lightweight CPU-based dense stereo SLAM for surgery.
Int J Comput Assist Radiol Surg. 2024 May;19(5):811-820. doi: 10.1007/s11548-023-03055-1. Epub 2024 Jan 19.
10. Sparse-to-dense coarse-to-fine depth estimation for colonoscopy.
Comput Biol Med. 2023 Jun;160:106983. doi: 10.1016/j.compbiomed.2023.106983. Epub 2023 May 6.

Cited by

1. LiDAR Intensity Completion: Fully Exploiting the Message from LiDAR Sensors.
Sensors (Basel). 2022 Oct 4;22(19):7533. doi: 10.3390/s22197533.

References

1. Adaptive Context-Aware Multi-Modal Network for Depth Completion.
IEEE Trans Image Process. 2021;30:5264-5276. doi: 10.1109/TIP.2021.3079821. Epub 2021 May 31.
2. Learning Guided Convolutional Network for Depth Completion.
IEEE Trans Image Process. 2021;30:1116-1129. doi: 10.1109/TIP.2020.3040528. Epub 2020 Dec 15.
3. HMS-Net: Hierarchical Multi-scale Sparsity-invariant Network for Sparse Depth Completion.
IEEE Trans Image Process. 2019 Dec 31. doi: 10.1109/TIP.2019.2960589.
4. Depth perception in virtual reality: distance estimations in peri- and extrapersonal space.
Cyberpsychol Behav. 2008 Feb;11(1):9-15. doi: 10.1089/cpb.2007.9935.