Suppr 超能文献

Harnessing Multi-View Perspective of Light Fields for Low-Light Imaging.

Authors

Lamba Mohit, Rachavarapu Kranthi Kumar, Mitra Kaushik

Publication

IEEE Trans Image Process. 2021;30:1501-1513. doi: 10.1109/TIP.2020.3045617. Epub 2020 Dec 31.

DOI: 10.1109/TIP.2020.3045617
PMID: 33360991
Abstract

Light Field (LF) offers unique advantages such as post-capture refocusing and depth estimation, but low-light conditions severely limit these capabilities. To restore low-light LFs we should harness the geometric cues present in different LF views, which is not possible using single-frame low-light enhancement techniques. We propose a deep neural network L3Fnet for Low-Light Light Field (L3F) restoration, which not only performs visual enhancement of each LF view but also preserves the epipolar geometry across views. We achieve this by adopting a two-stage architecture for L3Fnet. Stage-I looks at all the LF views to encode the LF geometry. This encoded information is then used in Stage-II to reconstruct each LF view. To facilitate learning-based techniques for low-light LF imaging, we collected a comprehensive LF dataset of various scenes. For each scene, we captured four LFs, one with near-optimal exposure and ISO settings and the others at different levels of low-light conditions varying from low to extreme low-light settings. The effectiveness of the proposed L3Fnet is supported by both visual and numerical comparisons on this dataset. To further analyze the performance of low-light restoration methods, we also propose the L3F-wild dataset that contains LF captured late at night with almost zero lux values. No ground truth is available in this dataset. To perform well on the L3F-wild dataset, any method must adapt to the light level of the captured scene. To do this we use a pre-processing block that makes L3Fnet robust to various degrees of low-light conditions. Lastly, we show that L3Fnet can also be used for low-light enhancement of single-frame images, despite it being engineered for LF data. We do so by converting the single-frame DSLR image into a form suitable to L3Fnet, which we call as pseudo-LF. Our code and dataset is available for download at https://mohitlamba94.github.io/L3Fnet/.
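The abstract mentions converting a single-frame DSLR image into a form suitable for L3Fnet, which the authors call a pseudo-LF. The exact conversion scheme is not specified here; a minimal illustrative sketch of one plausible approach — strided sub-aperture sampling, similar to how a lenslet light field is decoded into views — might look like the following (the function name and grid parameter are assumptions, not the paper's API):

```python
import numpy as np

def to_pseudo_lf(img, grid=3):
    """Rearrange a single image (H, W[, C]) into a grid x grid pseudo light
    field of sub-aperture views by strided sampling: view (u, v) takes every
    grid-th pixel starting at row offset u and column offset v.
    Illustrative assumption only, not the paper's exact scheme."""
    H, W = img.shape[0], img.shape[1]
    # crop so both spatial dims divide evenly by the view grid
    img = img[:H - H % grid, :W - W % grid]
    views = np.stack([
        np.stack([img[u::grid, v::grid] for v in range(grid)])
        for u in range(grid)
    ])
    return views  # shape: (grid, grid, H//grid, W//grid[, C])
```

Each resulting view is a slightly shifted low-resolution copy of the scene, which gives a view-stack of the shape an LF network expects, even though no true parallax is present.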


Similar Articles

1. Harnessing Multi-View Perspective of Light Fields for Low-Light Imaging.
   IEEE Trans Image Process. 2021;30:1501-1513. doi: 10.1109/TIP.2020.3045617. Epub 2020 Dec 31.
2. Deep Coarse-to-Fine Dense Light Field Reconstruction With Flexible Sampling and Geometry-Aware Fusion.
   IEEE Trans Pattern Anal Mach Intell. 2022 Apr;44(4):1819-1836. doi: 10.1109/TPAMI.2020.3026039. Epub 2022 Mar 4.
3. End-to-End Light Field Spatial Super-Resolution Network Using Multiple Epipolar Geometry.
   IEEE Trans Image Process. 2021;30:5956-5968. doi: 10.1109/TIP.2021.3079805. Epub 2021 Jun 30.
4. Geometry-aware view reconstruction network for light field image compression.
   Sci Rep. 2022 Dec 23;12(1):22254. doi: 10.1038/s41598-022-26887-4.
5. Disentangling Light Fields for Super-Resolution and Disparity Estimation.
   IEEE Trans Pattern Anal Mach Intell. 2023 Jan;45(1):425-443. doi: 10.1109/TPAMI.2022.3152488. Epub 2022 Dec 5.
6. Light Field Reconstruction Using Residual Networks on Raw Images.
   Sensors (Basel). 2022 Mar 2;22(5):1956. doi: 10.3390/s22051956.
7. LRT: An Efficient Low-Light Restoration Transformer for Dark Light Field Images.
   IEEE Trans Image Process. 2023;32:4314-4326. doi: 10.1109/TIP.2023.3297412. Epub 2023 Aug 1.
8. Multi-Attention Learning and Exposure Guidance Toward Ghost-Free High Dynamic Range Light Field Imaging.
   IEEE Trans Vis Comput Graph. 2025 Sep;31(9):5304-5320. doi: 10.1109/TVCG.2024.3446789.
9. Enhancing Low-Light Light Field Images With a Deep Compensation Unfolding Network.
   IEEE Trans Image Process. 2024;33:4131-4144. doi: 10.1109/TIP.2024.3420797. Epub 2024 Jul 9.
10. Light Field Image Super-Resolution Using Deformable Convolution.
   IEEE Trans Image Process. 2021;30:1057-1071. doi: 10.1109/TIP.2020.3042059. Epub 2020 Dec 11.