Wu Yuhui, Wang Guoqing, Liu Shaochong, Yang Yang, Li Wei, Tang Xiongxin, Gu Shuhang, Li Chongyi, Shen Heng Tao
IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):9921-9939. doi: 10.1109/TPAMI.2024.3432308. Epub 2024 Nov 6.
Low-light image enhancement (LLIE) investigates how to improve the brightness of images captured in illumination-insufficient environments. Most existing methods enhance low-light images in a global, uniform manner, without taking the semantic information of different regions into account. Consequently, a network may easily deviate from the original color of local regions. To address this issue, we propose a semantic-aware knowledge-guided framework (SKF) that helps a low-light enhancement model learn the rich and diverse priors encapsulated in a semantic segmentation model. We incorporate semantic knowledge in three key aspects: a semantic-aware embedding module that adaptively integrates semantic priors in the feature representation space, a semantic-guided color histogram loss that preserves the color consistency of individual instances, and a semantic-guided adversarial loss that produces more natural textures under the guidance of semantic priors. Our SKF is appealing in that it serves as a general framework for the LLIE task. We further present a refined framework, SKF++, with two new techniques: (a) an extra convolutional branch that extracts local information for intra-class illumination and color recovery, and (b) an equalization-based histogram transformation for contrast enhancement and high-dynamic-range adjustment. Extensive experiments on various benchmarks for LLIE and other image processing tasks show that models equipped with SKF/SKF++ significantly outperform their baselines and that SKF/SKF++ generalizes well to different models and scenes. In addition, the potential benefits of our method for face detection and semantic segmentation in low-light conditions are discussed.
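To make the semantic-guided color histogram loss concrete, the PyTorch sketch below compares per-class color histograms between the enhanced output and a reference image using the segmentation labels as region masks. This is a minimal, hypothetical rendering: the function name, class count, bin count, and the use of hard (non-differentiable) `torch.histc` binning are assumptions made for clarity, not the paper's actual implementation, which would need a differentiable histogram approximation to train end to end.

```python
import torch

def semantic_color_histogram_loss(enhanced, reference, seg_labels,
                                  num_classes=19, bins=32):
    """Hypothetical sketch of a semantic-guided color histogram loss.

    enhanced, reference: (B, 3, H, W) tensors with values in [0, 1]
    seg_labels: (B, H, W) integer class map from a segmentation model
    For every semantic class present in the image, the per-channel color
    histogram of the enhanced region is compared against that of the
    reference region with an L1 distance, encouraging color consistency
    within each instance/region rather than globally.
    """
    loss = enhanced.new_zeros(())
    count = 0
    for b in range(enhanced.shape[0]):
        for cls in range(num_classes):
            region = seg_labels[b] == cls          # (H, W) boolean mask
            if not region.any():
                continue
            for ch in range(3):                    # R, G, B channels
                # Hard binning for illustration only (not differentiable).
                h_enh = torch.histc(enhanced[b, ch][region], bins=bins, min=0.0, max=1.0)
                h_ref = torch.histc(reference[b, ch][region], bins=bins, min=0.0, max=1.0)
                # Normalize counts to probability distributions before comparing.
                h_enh = h_enh / (h_enh.sum() + 1e-8)
                h_ref = h_ref / (h_ref.sum() + 1e-8)
                loss = loss + (h_enh - h_ref).abs().sum()
                count += 1
    return loss / max(count, 1)
```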
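The equalization-based histogram transformation in SKF++ can be pictured as a classical histogram-equalization mapping. The NumPy sketch below shows the standard CDF-based remapping such a transformation presumably builds on; the function name and the choice to operate on a float image in [0, 1] are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def equalize_histogram(img, bins=256):
    """Sketch of an equalization-based histogram transformation.

    img: float array in [0, 1] (applied per channel or on a luminance map).
    Each intensity is mapped through the normalized cumulative histogram
    (CDF), flattening the output histogram toward uniform, which stretches
    contrast and expands the usable dynamic range.
    """
    flat = img.ravel()
    hist, bin_edges = np.histogram(flat, bins=bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                                # normalize CDF to [0, 1]
    # Map each pixel to the CDF value of its histogram bin.
    equalized = np.interp(flat, bin_edges[:-1], cdf)
    return equalized.reshape(img.shape)
```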