Jiang Yifan, Gong Xinyu, Liu Ding, Cheng Yu, Fang Chen, Shen Xiaohui, Yang Jianchao, Zhou Pan, Wang Zhangyang
IEEE Trans Image Process. 2021;30:2340-2349. doi: 10.1109/TIP.2021.3051462. Epub 2021 Jan 27.
Deep learning-based methods have achieved remarkable success in image restoration and enhancement, but are they still competitive when paired training data are lacking? As one such example, this paper explores the low-light image enhancement problem, where in practice it is extremely challenging to simultaneously take a low-light and a normal-light photo of the same visual scene. We propose a highly effective unsupervised generative adversarial network, dubbed EnlightenGAN, that can be trained without low/normal-light image pairs, yet generalizes very well to various real-world test images. Instead of supervising the learning with ground-truth data, we propose to regularize the unpaired training using information extracted from the input itself, and benchmark a series of innovations for the low-light image enhancement problem, including a global-local discriminator structure, a self-regularized perceptual loss fusion, and an attention mechanism. Through extensive experiments, our proposed approach outperforms recent methods on a variety of metrics, in terms of both visual quality and a subjective user study. Thanks to the great flexibility afforded by unpaired training, EnlightenGAN is demonstrated to be easily adaptable to enhancing real-world images from various domains. Our code and pre-trained models are available at: https://github.com/VITA-Group/EnlightenGAN.
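The core self-regularization idea above — deriving guidance from the input image alone rather than from paired ground truth — can be illustrated with a minimal sketch of an input-derived attention map. Using the per-pixel maximum over RGB channels as the illumination estimate is an assumption for illustration, not a detail stated in this abstract:

```python
import numpy as np

def self_attention_map(rgb):
    """Sketch of an input-derived (self-regularized) attention map.

    Assumption: illumination is estimated as the per-pixel max over the
    RGB channels of the input (in [0, 1]); inverting it gives larger
    attention weights to darker regions, which need more enhancement.
    `rgb` is an H x W x 3 float array with values in [0, 1].
    """
    illumination = rgb.max(axis=2)   # per-pixel illumination estimate
    return 1.0 - illumination        # dark pixels -> high attention

# Example: a mostly dark image with one brighter pixel.
img = np.zeros((2, 2, 3))
img[0, 0] = [0.2, 0.5, 0.1]
att = self_attention_map(img)       # att[0, 0] = 0.5; black pixels -> 1.0
```

No reference image is needed at any point, which is what makes this regularizer usable with unpaired training.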