Li Zhenni, Sun Haoyi, Gao Yuliang, Wang Jiao
College of Information Science and Engineering, Northeastern University, Shenyang 110819, China.
College of Artificial Intelligence, Nankai University, Tianjin 300071, China.
Entropy (Basel). 2021 Apr 28;23(5):546. doi: 10.3390/e23050546.
Depth maps obtained through sensors are often unsatisfactory because of their low resolution and noise interference. In this paper, we propose a real-time depth map enhancement system based on a residual network that uses dual channels to process depth maps and intensity maps respectively and eliminates the preprocessing stage; the proposed algorithm achieves real-time processing at more than 30 fps. Furthermore, an FPGA design and implementation for depth sensing is also introduced. In this FPGA design, the intensity image and depth image are captured by a dual-camera synchronous acquisition system and serve as the input to the neural network. Experiments on various depth map restoration tasks show that our algorithm outperforms the existing LRMC, DE-CNN, and DDTF algorithms on standard datasets and yields better depth map super-resolution. System tests of the FPGA confirm that the data throughput of the acquisition system's USB 3.0 interface is stable at 226 Mbps and supports both cameras working at full speed, i.e., 54 fps @ (1280 × 960 + 328 × 248 × 3).
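To make the dual-channel residual idea concrete, the following is a minimal hedged sketch in PyTorch: one branch ingests the raw depth map, a second branch ingests the intensity (guidance) image, the features are fused, refined by residual blocks, and the network predicts a residual correction added back to the input depth. All layer widths, block counts, and names (e.g., DualChannelDepthNet) are illustrative assumptions, not the authors' published architecture.

```python
# Hypothetical sketch of a dual-channel residual depth-enhancement network.
# Layer sizes and block counts are assumptions for illustration only.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)          # residual connection

class DualChannelDepthNet(nn.Module):
    def __init__(self, ch=32, n_blocks=4):
        super().__init__()
        self.depth_branch = nn.Conv2d(1, ch, 3, padding=1)      # depth channel
        self.intensity_branch = nn.Conv2d(1, ch, 3, padding=1)  # intensity (guidance) channel
        self.fuse = nn.Conv2d(2 * ch, ch, 1)                    # merge the two feature streams
        self.blocks = nn.Sequential(*[ResBlock(ch) for _ in range(n_blocks)])
        self.out = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, depth, intensity):
        f = torch.cat([self.depth_branch(depth),
                       self.intensity_branch(intensity)], dim=1)
        f = self.blocks(self.fuse(f))
        return depth + self.out(f)       # predict a residual correction to the raw depth

# Example: enhance a 248x328 depth map guided by an intensity image of the same size.
net = DualChannelDepthNet()
d = torch.rand(1, 1, 248, 328)
i = torch.rand(1, 1, 248, 328)
enhanced = net(d, i)                     # shape (1, 1, 248, 328)
```

Predicting a residual rather than the full depth map keeps the mapping close to identity, which is the usual motivation for residual formulations in restoration networks.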