Lamba Mohit, Rachavarapu Kranthi Kumar, Mitra Kaushik
IEEE Trans Image Process. 2021;30:1501-1513. doi: 10.1109/TIP.2020.3045617. Epub 2020 Dec 31.
Light Field (LF) imaging offers unique advantages such as post-capture refocusing and depth estimation, but low-light conditions severely limit these capabilities. Restoring low-light LFs requires harnessing the geometric cues present across the different LF views, which single-frame low-light enhancement techniques cannot exploit. We propose L3Fnet, a deep neural network for Low-Light Light Field (L3F) restoration that not only visually enhances each LF view but also preserves the epipolar geometry across views. L3Fnet adopts a two-stage architecture: Stage-I jointly processes all the LF views to encode the LF geometry, and Stage-II uses this encoding to reconstruct each LF view. To facilitate learning-based techniques for low-light LF imaging, we collected a comprehensive LF dataset of various scenes. For each scene, we captured four LFs: one with near-optimal exposure and ISO settings, and three others at progressively darker levels ranging from low to extreme low-light settings. The effectiveness of the proposed L3Fnet is supported by both visual and numerical comparisons on this dataset. To further analyze the performance of low-light restoration methods, we also propose the L3F-wild dataset, which contains LFs captured late at night at almost zero lux; no ground truth is available for this dataset. To perform well on L3F-wild, a method must adapt to the light level of the captured scene. To this end, we use a pre-processing block that makes L3Fnet robust to varying degrees of low light. Lastly, we show that L3Fnet, although engineered for LF data, can also be used for low-light enhancement of single-frame images. We do so by converting the single-frame DSLR image into a form suitable for L3Fnet, which we call a pseudo-LF. Our code and dataset are available for download at https://mohitlamba94.github.io/L3Fnet/.
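
The sketch below illustrates, in PyTorch, the two-stage design the abstract describes: a Stage-I encoder that sees all LF views at once to capture the shared geometry, and a Stage-II decoder that restores each view conditioned on that encoding, with a gain parameter standing in for the light-level pre-processing block. All layer widths, class names (GlobalEncoder, ViewDecoder, L3FnetSketch), the 8x8 view grid, and the gain-based pre-processing are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class GlobalEncoder(nn.Module):
    """Stage-I: jointly encodes all LF views into a shared geometry feature map."""
    def __init__(self, num_views=64, feat=64):
        super().__init__()
        # Views are stacked along the channel axis so convolutions can mix
        # information across views and capture the epipolar structure.
        self.net = nn.Sequential(
            nn.Conv2d(3 * num_views, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, views):               # views: (B, V, 3, H, W)
        b, v, c, h, w = views.shape
        return self.net(views.reshape(b, v * c, h, w))   # (B, feat, H, W)

class ViewDecoder(nn.Module):
    """Stage-II: restores a single view, conditioned on the Stage-I encoding."""
    def __init__(self, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 3, 3, padding=1),
        )

    def forward(self, view, geom):          # view: (B, 3, H, W), geom: (B, feat, H, W)
        return self.net(torch.cat([view, geom], dim=1))

class L3FnetSketch(nn.Module):
    def __init__(self, num_views=64):
        super().__init__()
        self.stage1 = GlobalEncoder(num_views)
        self.stage2 = ViewDecoder()

    def forward(self, views, gain=1.0):
        # Hypothetical pre-processing block: amplify the dark input toward the
        # scene's light level before restoration, as the abstract suggests.
        views = (views * gain).clamp(0, 1)
        geom = self.stage1(views)           # Stage-I: shared LF geometry
        out = [self.stage2(views[:, i], geom) for i in range(views.shape[1])]
        return torch.stack(out, dim=1)      # restored LF: (B, V, 3, H, W)

As a usage example, model = L3FnetSketch(); restored = model(torch.rand(1, 64, 3, 64, 64), gain=8.0) restores a random 8x8-view low-light LF amplified eightfold.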
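The abstract does not specify how a single DSLR frame is turned into a pseudo-LF, so the sketch below shows one plausible construction under stated assumptions: slicing slightly shifted crops out of the single frame to mimic a grid of views with small disparities, so the frame fits L3Fnet's multi-view input. The function name to_pseudo_lf, the grid size, and the shift are all hypothetical.

import torch

def to_pseudo_lf(image, grid=8, shift=1):
    """image: (3, H, W) -> pseudo-LF of shape (grid*grid, 3, h, w)."""
    _, h, w = image.shape
    # Each "view" is the same frame cropped at a slightly shifted offset,
    # imitating the small baseline between adjacent LF views.
    ch, cw = h - (grid - 1) * shift, w - (grid - 1) * shift
    views = [
        image[:, u * shift : u * shift + ch, v * shift : v * shift + cw]
        for u in range(grid)
        for v in range(grid)
    ]
    return torch.stack(views)   # stack of crops acting as LF views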