Ma Long, Liu Risheng, Zhang Jiaao, Fan Xin, Luo Zhongxuan
IEEE Trans Neural Netw Learn Syst. 2022 Oct;33(10):5666-5680. doi: 10.1109/TNNLS.2021.3071245. Epub 2022 Oct 5.
Enhancing the quality of low-light (LOL) images plays an important role in many image processing and multimedia applications. In recent years, a variety of deep learning techniques have been developed to address this challenging task. A typical framework estimates the illumination and reflectance simultaneously, but such methods disregard the scene-level contextual information encapsulated in feature spaces, causing many unfavorable outcomes, e.g., loss of detail, desaturated colors, and artifacts. To address these issues, we develop a new context-sensitive decomposition network (CSDNet) architecture that exploits scene-level contextual dependencies across spatial scales. More concretely, we build a two-stream estimation mechanism consisting of a reflectance estimation network and an illumination estimation network. We design a novel context-sensitive decomposition connection that bridges the two streams by incorporating the physical decomposition principle. A spatially varying illumination guidance is further constructed to endow the illumination component with an edge-aware smoothness property. According to the training pattern, we construct CSDNet (paired supervision) and a context-sensitive decomposition generative adversarial network, CSDGAN (unpaired supervision), to fully evaluate the designed architecture. We test our method on seven benchmarks, including MIT-Adobe FiveK, LOL, ExDark, and naturalness preserved enhancement (NPE), and conduct extensive analytical and evaluative experiments. Thanks to the designed context-sensitive decomposition connection, our method produces excellent enhanced results (with rich details, vivid colors, and little noise), demonstrating its superiority over existing state-of-the-art approaches. Finally, considering the practical need for high efficiency, we develop a lightweight CSDNet (named LiteCSDNet) by reducing the number of channels. By further sharing one encoder between the two estimation branches, we obtain an even more lightweight version (SLiteCSDNet for short). SLiteCSDNet contains only 0.0301M parameters yet achieves almost the same performance as CSDNet. Code is available at https://github.com/KarelZhang/CSDNet-CSDGAN.
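To make the two-stream decomposition idea concrete, below is a minimal PyTorch sketch of a network that estimates reflectance R and illumination L from a low-light input and couples them through the physical decomposition principle (input ≈ R × L). This is an illustrative assumption, not the authors' actual CSDNet: the class name `TwoStreamDecomposition`, the `width` parameter, the block design, and the loss term are all hypothetical; the real architecture (including the context-sensitive decomposition connection and illumination guidance) is in the repository linked above.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: a generic building block (assumed, not CSDNet's).
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TwoStreamDecomposition(nn.Module):
    """Hypothetical two-stream estimation: one branch predicts reflectance R
    (3 channels), the other a spatially varying illumination L (1 channel).
    The physical principle couples them: input I ~ R * L."""
    def __init__(self, width=16):
        super().__init__()
        self.reflectance = nn.Sequential(
            conv_block(3, width),
            nn.Conv2d(width, 3, 3, padding=1), nn.Sigmoid(),
        )
        self.illumination = nn.Sequential(
            conv_block(3, width),
            nn.Conv2d(width, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, low):
        R = self.reflectance(low)   # scene reflectance in [0, 1]
        L = self.illumination(low)  # illumination map in [0, 1], broadcast over RGB
        recon = R * L               # decomposition-based reconstruction of the input
        return R, L, recon

# Illustrative training signal: a reconstruction fidelity term; the paper's
# full objective would also include an edge-aware smoothness term on L.
low = torch.rand(1, 3, 256, 256)             # dummy low-light image
R, L, recon = TwoStreamDecomposition()(low)
loss = torch.mean(torch.abs(recon - low))    # ||R * L - I||_1
```

In this sketch, the LiteCSDNet variant would correspond to shrinking `width`, and SLiteCSDNet to letting the two branches share a single encoder trunk before their separate output heads, which is how the abstract describes the parameter reduction.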