Khan Rizwan, Mehmood Atif, Zheng Zhonglong
Opt Express. 2022 Oct 10;30(21):37736-37752. doi: 10.1364/OE.472557.
Low-light image enhancement with adaptive brightness, color, and contrast preservation in degraded visual conditions (e.g., extremely dark backgrounds, low light, back-light, mist, etc.) is becoming more challenging for machine cognition applications than anticipated. A realistic image enhancement framework should preserve brightness and contrast in robust scenarios. Extant direct enhancement methods amplify objectionable structure and texture artifacts, whereas network-based enhancement approaches rely on paired or large-scale training datasets, raising fundamental concerns about their real-world applicability. This paper presents a new framework to get deep into darkness in degraded visual conditions, following the fundamentals of Retinex-based image decomposition. We separate the reflectance and illumination components and perform independent weighted enhancement operations on each component to preserve visual details with a balance of brightness and contrast. A comprehensive weighting strategy is proposed to constrain the image decomposition while suppressing the irregularities of high-frequency reflectance and illumination to improve contrast. At the same time, we propose guiding the illumination component with a high-frequency component for structure and texture preservation in degraded visual conditions. Unlike existing approaches, the proposed method works regardless of the training data type (i.e., low-light, normal-light, or normal/low-light pairs). A deep-into-darkness network (D2D-Net) is proposed to maintain the visual balance of smoothness without compromising image quality. We conduct extensive experiments to demonstrate the superiority of the proposed enhancement and test the performance of our method on object detection tasks in extremely dark scenarios. Experimental results demonstrate that our method maintains the balance of visual smoothness, making it more viable for future interactive visual applications.
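The Retinex model underlying the decomposition described above treats an observed image as the pixel-wise product of a reflectance component and an illumination component, I = R · L; enhancement then adjusts each component separately before recombining. The sketch below illustrates only this general principle, not the paper's D2D-Net or its weighting strategy: illumination is estimated here with a plain box blur of the max-channel map and brightened by gamma correction, both of which are common baseline heuristics and assumptions on my part.

```python
import numpy as np

def box_blur(channel, k=15):
    # Separable box blur as a crude stand-in for an edge-aware
    # illumination smoother (k must be odd to preserve shape).
    pad = k // 2
    padded = np.pad(channel, pad, mode="edge")
    kernel = np.ones(k) / k
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def retinex_enhance(img, gamma=0.45, eps=1e-6):
    # img: float RGB array in [0, 1], shape (H, W, 3).
    # Illumination estimate: smoothed max-channel map (a common heuristic).
    illum = np.clip(box_blur(img.max(axis=2)), eps, 1.0)
    # Reflectance follows from the Retinex model I = R * L.
    refl = np.clip(img / illum[..., None], 0.0, 1.0)
    # Brighten only the illumination (gamma < 1 lifts dark regions),
    # then recombine with the detail-carrying reflectance.
    return np.clip(refl * (illum ** gamma)[..., None], 0.0, 1.0)
```

Processing the two components independently, as in the sketch, is what lets a Retinex-style pipeline brighten dark regions without amplifying reflectance (texture) noise in the same step.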