Liu Fangjin, Hua Zhen, Li Jinjiang, Fan Linwei
College of Electronic and Communications Engineering, Shandong Technology and Business University, Yantai, China.
Institute of Network Technology, ICT, Yantai, China.
Front Neurorobot. 2022 Mar 10;16:836551. doi: 10.3389/fnbot.2022.836551. eCollection 2022.
In low-light environments, image acquisition devices cannot capture sufficient light, resulting in images with low brightness and contrast, which poses a great obstacle to other computer vision tasks. To enable those tasks to be performed smoothly, it is essential to advance research on low-light image enhancement algorithms. In this article, a multi-scale feature fusion image enhancement network based on a recursive structure is proposed. The network uses a dual attention module, the Convolutional Block Attention Module (CBAM), which combines two attention mechanisms: channel attention and spatial attention. To extract and fuse multi-scale features, we extend the U-Net model with the Inception model to form the Multi-scale Inception U-Net module, or MIU module for short. The learning of the whole network is divided into T recursive stages, and the input of each stage is the original low-light image together with the intermediate estimate output by the previous recursion. In the t-th recursion, CBAM is first used to extract channel and spatial feature information so that the network focuses more on the low-light regions of the image. Next, the MIU module fuses features from three different scales to obtain an intermediate enhanced image. Finally, the intermediate enhanced image is concatenated with the original input image and fed into the (t + 1)-th recursion. The intermediate enhancement result provides higher-order feature information, and the original input image provides lower-order feature information. After several recursive cycles, the network outputs the enhanced image. We conduct experiments on several public datasets and analyze the results both subjectively and objectively. The experimental results show that, although the network structure in this article is simple, our method recovers details, increases image brightness, and reduces image degradation better than other methods.
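To make the recursive data flow concrete, the following is a minimal PyTorch sketch of the loop described above: at each stage the original low-light image is concatenated with the previous stage's estimate, passed through CBAM-style channel and spatial attention, and then through a multi-scale fusion block standing in for the MIU module. The layer widths, kernel sizes, and the three-branch `MIUBlock` are illustrative assumptions, not the authors' exact architecture; only the overall stage structure follows the abstract.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """CBAM channel attention: avg- and max-pooled descriptors through a shared MLP."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return x * torch.sigmoid(avg + mx)


class SpatialAttention(nn.Module):
    """CBAM spatial attention: pool over channels, then a 7x7 convolution."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        pooled = torch.cat(
            [torch.mean(x, dim=1, keepdim=True), torch.amax(x, dim=1, keepdim=True)],
            dim=1,
        )
        return x * torch.sigmoid(self.conv(pooled))


class MIUBlock(nn.Module):
    """Simplified stand-in for the MIU module: three parallel branches with
    different receptive fields (Inception-style), fused by a 1x1 convolution."""
    def __init__(self, in_ch, out_ch=3):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, 16, k, padding=k // 2) for k in (1, 3, 5)]
        )
        self.fuse = nn.Conv2d(16 * 3, out_ch, 1)

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return torch.sigmoid(self.fuse(feats))


class RecursiveEnhancer(nn.Module):
    """T recursive stages; stage t sees the original image and stage t-1's estimate."""
    def __init__(self, stages=3):
        super().__init__()
        self.stages = stages
        self.ca = ChannelAttention(channels=6)  # 3 (original) + 3 (previous estimate)
        self.sa = SpatialAttention()
        self.miu = MIUBlock(in_ch=6, out_ch=3)

    def forward(self, low_light):
        estimate = low_light                              # first stage starts from the input
        for _ in range(self.stages):
            x = torch.cat([low_light, estimate], dim=1)   # lower- + higher-order information
            x = self.sa(self.ca(x))                       # CBAM: channel then spatial attention
            estimate = self.miu(x)                        # multi-scale fusion -> intermediate result
        return estimate


if __name__ == "__main__":
    model = RecursiveEnhancer(stages=3)
    dummy = torch.rand(1, 3, 128, 128)   # random stand-in for a low-light RGB image
    print(model(dummy).shape)            # torch.Size([1, 3, 128, 128])
```

In this sketch the number of stages T is a constructor argument, and the same attention and fusion weights are reused at every recursion, which keeps the parameter count independent of T; whether the authors share weights across stages is not stated in the abstract.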