Ren Wenqi, Liu Sifei, Ma Lin, Xu Qianqian, Xu Xiangyu, Cao Xiaochun, Du Junping, Yang Ming-Hsuan
IEEE Trans Image Process. 2019 Sep;28(9):4364-4375. doi: 10.1109/TIP.2019.2910412. Epub 2019 Apr 16.
Camera sensors often fail to capture clear images or videos in poorly lit environments. In this paper, we propose a trainable hybrid network to enhance the visibility of such degraded images. The proposed network consists of two distinct streams that simultaneously learn the global content and the salient structures of the clear image in a unified network. More specifically, the content stream estimates the global content of the low-light input through an encoder-decoder network. However, the encoder in the content stream tends to lose some structure details. To remedy this, we propose a novel spatially variant recurrent neural network (RNN) as an edge stream to model edge details, with the guidance of another auto-encoder. The experimental results show that the proposed network performs favorably against state-of-the-art low-light image enhancement algorithms.
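The two-stream idea above can be illustrated with a minimal NumPy sketch. This is not the paper's trained model: `content_stream` is a crude downsample/upsample stand-in for the encoder-decoder, `edge_stream` is a toy one-direction spatially variant recurrence, and the fusion with a `gain` knob is an assumption added purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def content_stream(x, factor=4):
    """Crude stand-in for an encoder-decoder content stream:
    downsample then nearest-neighbour upsample, which (like the paper's
    encoder) loses fine structure details."""
    h, w = x.shape
    small = x[::factor, ::factor]
    up = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
    return up[:h, :w]

def edge_stream(x, weights):
    """Toy spatially variant RNN: a single left-to-right sweep whose
    recurrence weight differs per pixel (supplied here as `weights`;
    in the paper these would be predicted by a guiding auto-encoder)."""
    h = np.zeros_like(x)
    h[:, 0] = x[:, 0]
    for j in range(1, x.shape[1]):
        w = weights[:, j]
        h[:, j] = (1.0 - w) * x[:, j] + w * h[:, j - 1]
    return h

def hybrid_enhance(x, weights, gain=2.5):
    """Fuse global content with the high-frequency residual the smoother
    recurrence removes; `gain` is an illustrative brightening factor."""
    content = content_stream(x)
    detail = x - edge_stream(x, weights)  # structure lost by smoothing
    return np.clip(gain * (content + detail), 0.0, 1.0)

# Usage on a synthetic dim image with random per-pixel recurrence weights.
low = rng.uniform(0.0, 0.3, size=(32, 32))
w = rng.uniform(0.0, 0.9, size=(32, 32))
out = hybrid_enhance(low, w)
print(out.shape)
```

The sketch only conveys the division of labour: one branch recovers coarse global brightness, the other re-injects pixel-level structure, and the per-pixel recurrence weights make the edge propagation spatially variant.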