
Low-Light Image and Video Enhancement for More Robust Computer Vision Tasks: A Review.

Authors

Tatana Mpilo M, Tsoeu Mohohlo S, Maswanganyi Rito C

Affiliations

Department of Electronic and Computer Engineering, Durban University of Technology, Durban 4001, South Africa.

Steve Biko Campus, Durban University of Technology, Durban 4001, South Africa.

Publication

J Imaging. 2025 Apr 21;11(4):125. doi: 10.3390/jimaging11040125.

DOI:10.3390/jimaging11040125
PMID:40278041
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12027663/
Abstract

Computer vision aims to enable machines to understand the visual world. Computer vision encompasses numerous tasks, namely action recognition, object detection and image classification. Much research has been focused on solving these tasks, but one that remains relatively uncharted is light enhancement (LE). Low-light enhancement (LLE) is crucial as computer vision tasks fail in the absence of sufficient lighting, having to rely on the addition of peripherals such as sensors. This review paper will shed light on this (focusing on video enhancement) subfield of computer vision, along with the other aforementioned computer vision tasks. The review analyzes both traditional and deep learning-based enhancers and provides a comparative analysis on recent models in the field. The review also analyzes how popular computer vision tasks are improved and made more robust when coupled with light enhancement algorithms. Results show that deep learners outperform traditional enhancers, with supervised learners obtaining the best results followed by zero-shot learners, while computer vision tasks are improved with light enhancement coupling. The review concludes by highlighting major findings such as that although supervised learners obtain the best results, due to a lack of real-world data and robustness to new data, a shift to zero-shot learners is required.
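The abstract contrasts traditional enhancers with deep learning-based ones, including zero-shot methods. As an illustrative sketch (not code from the paper), the snippet below compares a classic traditional enhancer, power-law (gamma) correction, with the quadratic enhancement curve popularized by Zero-DCE (cited in the reference list). The fixed scalar `alpha` here is a simplifying assumption; the learned method predicts a per-pixel curve-parameter map for each iteration.

```python
import numpy as np

def gamma_correct(image: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Traditional power-law brightening: gamma < 1 lifts dark regions most."""
    x = image.astype(np.float64) / 255.0
    return (np.power(x, gamma) * 255.0).round().astype(np.uint8)

def zero_dce_curve(image: np.ndarray, alpha: float = 0.8,
                   iterations: int = 4) -> np.ndarray:
    """Zero-reference quadratic curve, LE(x) = x + alpha * x * (1 - x),
    applied iteratively. A fixed scalar alpha stands in for the per-pixel
    maps a trained Zero-DCE network would predict."""
    x = image.astype(np.float64) / 255.0
    for _ in range(iterations):
        x = x + alpha * x * (1.0 - x)  # brightens dark pixels, bounded by 1
    return (np.clip(x, 0.0, 1.0) * 255.0).round().astype(np.uint8)

# A synthetic "low-light" frame: uniformly dark pixel values.
dark = np.full((4, 4), 25, dtype=np.uint8)
print(gamma_correct(dark)[0, 0])   # 80: gamma 0.5 lifts 25 -> ~80
print(zero_dce_curve(dark)[0, 0] > 25)  # True: curve iteration brightens
```

Both mappings monotonically brighten dark intensities while keeping the output in range; the curve form is differentiable, which is what lets zero-shot methods train it without paired ground truth.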


[Figures g001–g027 (jimaging-11-00125): image files available via the PMC full text linked above.]

Similar Articles

1. Low-Light Image and Video Enhancement for More Robust Computer Vision Tasks: A Review.
J Imaging. 2025 Apr 21;11(4):125. doi: 10.3390/jimaging11040125.
2. Multi-label zero-shot human action recognition via joint latent ranking embedding.
Neural Netw. 2020 Feb;122:1-23. doi: 10.1016/j.neunet.2019.09.029. Epub 2019 Oct 21.
3. Self-Supervised and Zero-Shot Learning in Multi-Modal Raman Light Sheet Microscopy.
Sensors (Basel). 2024 Dec 20;24(24):8143. doi: 10.3390/s24248143.
4. A survey on generative adversarial networks for imbalance problems in computer vision tasks.
J Big Data. 2021;8(1):27. doi: 10.1186/s40537-021-00414-0. Epub 2021 Jan 29.
5. Deep Learning for Computer Vision: A Brief Review.
Comput Intell Neurosci. 2018 Feb 1;2018:7068349. doi: 10.1155/2018/7068349. eCollection 2018.
6. Unsupervised Illumination Adaptation for Low-Light Vision.
IEEE Trans Pattern Anal Mach Intell. 2024 Sep;46(9):5951-5966. doi: 10.1109/TPAMI.2024.3382108. Epub 2024 Aug 6.
7. ChampKit: A framework for rapid evaluation of deep neural networks for patch-based histopathology classification.
Comput Methods Programs Biomed. 2023 Sep;239:107631. doi: 10.1016/j.cmpb.2023.107631. Epub 2023 May 30.
8. Deep learning-based small object detection: A survey.
Math Biosci Eng. 2023 Feb 2;20(4):6551-6590. doi: 10.3934/mbe.2023282.
9. A hybrid zero-reference and dehazing network for joint low-light underground image enhancement.
Sci Rep. 2025 Mar 24;15(1):10135. doi: 10.1038/s41598-025-95366-3.
10. CUI-Net: a correcting uneven illumination net for low-light image enhancement.
Sci Rep. 2023 Aug 9;13(1):12894. doi: 10.1038/s41598-023-39524-5.

Cited By

1. DFCNet: Dual-Stage Frequency-Domain Calibration Network for Low-Light Image Enhancement.
J Imaging. 2025 Jul 28;11(8):253. doi: 10.3390/jimaging11080253.

References

1. VRT: A Video Restoration Transformer.
IEEE Trans Image Process. 2024;33:2171-2182. doi: 10.1109/TIP.2024.3372454. Epub 2024 Mar 22.
2. DTCM: Joint Optimization of Dark Enhancement and Action Recognition in Videos.
IEEE Trans Image Process. 2023;32:3507-3520. doi: 10.1109/TIP.2023.3286254. Epub 2023 Jun 23.
3. Human Action Recognition From Various Data Modalities: A Review.
IEEE Trans Pattern Anal Mach Intell. 2023 Mar;45(3):3200-3225. doi: 10.1109/TPAMI.2022.3183112. Epub 2023 Feb 3.
4. Deep Video Prior for Video Consistency and Propagation.
IEEE Trans Pattern Anal Mach Intell. 2023 Jan;45(1):356-371. doi: 10.1109/TPAMI.2022.3142071. Epub 2022 Dec 5.
5. Low-Light Image and Video Enhancement Using Deep Learning: A Survey.
IEEE Trans Pattern Anal Mach Intell. 2022 Dec;44(12):9396-9416. doi: 10.1109/TPAMI.2021.3126387. Epub 2022 Nov 7.
6. Learning to Enhance Low-Light Image via Zero-Reference Deep Curve Estimation.
IEEE Trans Pattern Anal Mach Intell. 2022 Aug;44(8):4225-4238. doi: 10.1109/TPAMI.2021.3063604. Epub 2022 Jul 1.
7. EnlightenGAN: Deep Light Enhancement Without Paired Supervision.
IEEE Trans Image Process. 2021;30:2340-2349. doi: 10.1109/TIP.2021.3051462. Epub 2021 Jan 27.
8. A dataset for automatic violence detection in videos.
Data Brief. 2020 Nov 26;33:106587. doi: 10.1016/j.dib.2020.106587. eCollection 2020 Dec.
9. Advancing Image Understanding in Poor Visibility Environments: A Collective Benchmark Study.
IEEE Trans Image Process. 2020 Mar 27. doi: 10.1109/TIP.2020.2981922.
10. Structure-Revealing Low-Light Image Enhancement Via Robust Retinex Model.
IEEE Trans Image Process. 2018 Jun;27(6):2828-2841. doi: 10.1109/TIP.2018.2810539.