


A Residual Network and FPGA Based Real-Time Depth Map Enhancement System.

Authors

Li Zhenni, Sun Haoyi, Gao Yuliang, Wang Jiao

Affiliations

College of Information Science and Engineering, Northeastern University, Shenyang 110819, China.

College of Artificial Intelligence, Nankai University, Tianjin 300071, China.

Publication

Entropy (Basel). 2021 Apr 28;23(5):546. doi: 10.3390/e23050546.

DOI: 10.3390/e23050546
PMID: 33924967
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8145842/
Abstract

Depth maps obtained from sensors are often unsatisfactory because of their low resolution and noise interference. In this paper, we propose a real-time depth map enhancement system based on a residual network that uses dual channels to process depth maps and intensity maps respectively and eliminates the preprocessing stage; the proposed algorithm achieves real-time processing at more than 30 fps. Furthermore, an FPGA design and implementation for depth sensing is also introduced. In this FPGA design, the intensity image and depth image are captured by a dual-camera synchronous acquisition system and serve as the input to the neural network. Experiments on various depth map restoration tasks show that our algorithm outperforms the existing LRMC, DE-CNN, and DDTF algorithms on standard datasets and achieves better depth map super-resolution. System tests confirmed that the data throughput of the acquisition system's USB 3.0 interface is stable at 226 Mbps and supports both cameras working at full speed, i.e., 54 fps @ (1280 × 960 + 328 × 248 × 3).
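The dual-channel residual idea summarized in the abstract can be sketched roughly as follows. This is an illustrative NumPy toy, not the authors' actual network: the layer count, kernel weights, and fusion scheme here are hypothetical. Two branches extract features from the depth and intensity maps, the fused features are combined, and the result is added back to the input depth map as a residual correction (the skip connection that gives residual networks their name).

```python
import numpy as np

def conv3x3(x, w):
    """Naive 'same'-padded 3x3 convolution over a single-channel 2-D array."""
    h, wd = x.shape
    p = np.pad(x, 1)
    out = np.zeros((h, wd))
    for i in range(h):
        for j in range(wd):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * w)
    return out

def residual_enhance(depth, intensity, w_d, w_i, w_f):
    """Toy dual-branch residual block: depth branch + intensity branch,
    fused and added back onto the input depth map."""
    f_d = np.maximum(conv3x3(depth, w_d), 0)      # depth branch + ReLU
    f_i = np.maximum(conv3x3(intensity, w_i), 0)  # intensity branch + ReLU
    fused = conv3x3(f_d + f_i, w_f)               # fusion convolution
    return depth + fused                          # residual skip connection

rng = np.random.default_rng(0)
depth = rng.random((16, 16))
intensity = rng.random((16, 16))
out = residual_enhance(depth, intensity,
                       rng.standard_normal((3, 3)) * 0.1,
                       rng.standard_normal((3, 3)) * 0.1,
                       rng.standard_normal((3, 3)) * 0.1)
print(out.shape)  # (16, 16): enhanced map keeps the input resolution
```

Because the network only predicts a residual correction, the enhanced map preserves the input's overall structure even with small (or zero) learned weights, which is one reason residual formulations train stably for restoration tasks.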

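As a rough sanity check on the reported full-speed acquisition mode, the raw per-frame payload can be computed. The pixel bit depth is not stated in the abstract, so the 8-bits-per-pixel figure below is an assumption; the actual on-bus rate depends on pixel packing and USB protocol overhead, so this is only an order-of-magnitude estimate, not a reconstruction of the reported 226 Mbps measurement.

```python
# One 1280x960 frame plus one 328x248x3 frame per capture,
# assuming 8 bits per pixel (assumption; not stated in the abstract).
gray = 1280 * 960           # monochrome/depth frame, pixels
color = 328 * 248 * 3       # three-channel frame, pixels
frame_bytes = gray + color  # raw bytes per synchronized capture
rate_mbps = frame_bytes * 8 * 54 / 1e6  # at 54 fps

print(frame_bytes, round(rate_mbps))  # 1472832 bytes/frame, ~636 Mbit/s raw
```

Under this 8-bit assumption the raw rate comes out well above the measured 226 Mbps sustained USB throughput, which suggests the real pipeline uses smaller effective bit depths or packing than assumed here.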

[Figures 1–20: available via the full text at PMC (PMC8145842).]

Similar Articles

1. A Residual Network and FPGA Based Real-Time Depth Map Enhancement System.
   Entropy (Basel). 2021 Apr 28;23(5):546. doi: 10.3390/e23050546.
2. Real-Time Underwater Image Recognition with FPGA Embedded System for Convolutional Neural Network.
   Sensors (Basel). 2019 Jan 16;19(2):350. doi: 10.3390/s19020350.
3. A Quantized CNN-Based Microfluidic Lensless-Sensing Mobile Blood-Acquisition and Analysis System.
   Sensors (Basel). 2019 Nov 21;19(23):5103. doi: 10.3390/s19235103.
4. Color-Guided Depth Map Super-Resolution Using a Dual-Branch Multi-Scale Residual Network with Channel Interaction.
   Sensors (Basel). 2020 Mar 11;20(6):1560. doi: 10.3390/s20061560.
5. Multi-Scale FPGA-Based Infrared Image Enhancement by Using RGF and CLAHE.
   Sensors (Basel). 2023 Sep 27;23(19):8101. doi: 10.3390/s23198101.
6. Depth from a Motion Algorithm and a Hardware Architecture for Smart Cameras.
   Sensors (Basel). 2018 Dec 23;19(1):53. doi: 10.3390/s19010053.
7. FPGA Implementation of Complex-Valued Neural Network for Polar-Represented Image Classification.
   Sensors (Basel). 2024 Jan 30;24(3):897. doi: 10.3390/s24030897.
8. Hierarchical Features Driven Residual Learning for Depth Map Super-Resolution.
   IEEE Trans Image Process. 2018 Dec 17. doi: 10.1109/TIP.2018.2887029.
9. The design and implementation of postprocessing for depth map on real-time extraction system.
   ScientificWorldJournal. 2014;2014:363287. doi: 10.1155/2014/363287. Epub 2014 Jun 4.
10. FPGA-Based Hybrid-Type Implementation of Quantized Neural Networks for Remote Sensing Applications.
   Sensors (Basel). 2019 Feb 22;19(4):924. doi: 10.3390/s19040924.

References Cited in This Article

1. fpgaConvNet: Mapping Regular and Irregular Convolutional Neural Networks on FPGAs.
   IEEE Trans Neural Netw Learn Syst. 2019 Feb;30(2):326-342. doi: 10.1109/TNNLS.2018.2844093. Epub 2018 Jul 2.
2. Learning Depth from Single Images with Deep Neural Network Embedding Focal Length.
   IEEE Trans Image Process. 2018 May 17. doi: 10.1109/TIP.2018.2832296.
3. A Deep Cascade of Convolutional Neural Networks for Dynamic MR Image Reconstruction.
   IEEE Trans Med Imaging. 2018 Feb;37(2):491-503. doi: 10.1109/TMI.2017.2760978. Epub 2017 Oct 13.
4. Exploiting Depth From Single Monocular Images for Object Detection and Semantic Segmentation.
   IEEE Trans Image Process. 2017 Feb;26(2):836-846. doi: 10.1109/TIP.2016.2621673. Epub 2016 Oct 26.
5. Joint-Feature Guided Depth Map Super-Resolution With Face Priors.
   IEEE Trans Cybern. 2018 Jan;48(1):399-411. doi: 10.1109/TCYB.2016.2638856. Epub 2016 Dec 22.
6. Depth Map Restoration From Undersampled Data.
   IEEE Trans Image Process. 2017 Jan;26(1):119-134. doi: 10.1109/TIP.2016.2621410. Epub 2016 Oct 25.
7. Vision processing for realtime 3-D data acquisition based on coded structured light.
   IEEE Trans Image Process. 2008 Feb;17(2):167-76. doi: 10.1109/TIP.2007.914755.