


Night Vision Anti-Halation Method Based on Infrared and Visible Video Fusion.

Affiliations

School of Electronic and Information Engineering, Xi'an Technological University, Xi'an 710021, China.

Publication

Sensors (Basel). 2022 Oct 2;22(19):7494. doi: 10.3390/s22197494.

DOI: 10.3390/s22197494
PMID: 36236591
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9573233/
Abstract

To address the discontinuity caused by directly applying the infrared and visible image fusion anti-halation method to video, an efficient night vision anti-halation method based on video fusion is proposed. The designed frame selection based on inter-frame difference determines the optimal cosine angle threshold by analyzing the relation of the cosine angle threshold to nonlinear correlation information entropy and the frame-removal rate. The proposed time-mark-based adaptive motion compensation constructs the same number of interpolated frames as removed redundant frames, using the retained frame numbers as time stamps. Taking the motion vectors of two adjacent retained frames as the benchmark, adaptive weights are constructed from the inter-frame differences between the interpolated frame and the last retained frame, and the motion vector of the interpolated frame is then estimated. The experimental results show that the proposed frame selection strategy removes the maximum number of frames that can safely be dropped while keeping video content continuous at different vehicle speeds in various halation scenes. The frame count and playing duration of the fused video are consistent with those of the original video, and the content of each interpolated frame is highly synchronized with that of the corresponding original frame. The average FPS of video fusion in this work is about six times that of frame-by-frame fusion, which effectively improves the anti-halation processing efficiency of video fusion.
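The two stages described in the abstract — dropping redundant frames whose cosine angle to the last retained frame falls below a threshold, then rebuilding the full-length sequence with weights derived from the retained-frame time stamps — can be sketched roughly as follows. This is an illustrative reconstruction from the abstract alone, not the authors' implementation: the function names, the flattening of frames to vectors, and the linear temporal blend are all assumptions (the paper builds its adaptive weights from inter-frame differences and benchmark motion vectors, which this sketch simplifies to temporal position).

```python
import numpy as np

def cosine_angle(f1, f2):
    """Angle (radians) between two frames flattened to vectors."""
    v1, v2 = f1.ravel().astype(float), f2.ravel().astype(float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def select_frames(frames, angle_threshold):
    """Retain a frame only if it differs enough from the last retained one.
    Returns the retained frames and their original indices (the 'time stamps')."""
    retained, stamps = [frames[0]], [0]
    for i, f in enumerate(frames[1:], start=1):
        if cosine_angle(retained[-1], f) >= angle_threshold:
            retained.append(f)
            stamps.append(i)
    return retained, stamps

def interpolate(retained, stamps, total):
    """Rebuild a sequence of the original length: fill each gap between two
    retained frames with blended frames, weighted by temporal position."""
    out = [None] * total
    for k in range(len(retained) - 1):
        a, b = stamps[k], stamps[k + 1]
        for t in range(a, b):
            w = (t - a) / (b - a)  # weight grows with distance from stamp a
            out[t] = (1 - w) * retained[k] + w * retained[k + 1]
    out[stamps[-1]] = retained[-1]
    for t in range(stamps[-1] + 1, total):  # trailing frames after last stamp
        out[t] = retained[-1]
    return out
```

With a low threshold most frames survive selection and fusion must process nearly all of them; raising the threshold removes more redundant frames (fewer fusions, higher FPS) at the cost of interpolation fidelity — the trade-off the paper resolves by picking the threshold from the nonlinear correlation information entropy curve.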


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b38/9573233/71f884b2dc85/sensors-22-07494-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b38/9573233/49e24fc89cb4/sensors-22-07494-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b38/9573233/c015eb3abbbf/sensors-22-07494-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b38/9573233/2aa56771ec6d/sensors-22-07494-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b38/9573233/cf3b9dd3c496/sensors-22-07494-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b38/9573233/fbd8aba66871/sensors-22-07494-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b38/9573233/e6747b425661/sensors-22-07494-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b38/9573233/deddda4be237/sensors-22-07494-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b38/9573233/7579012a513f/sensors-22-07494-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b38/9573233/ff5b1b32be5c/sensors-22-07494-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b38/9573233/aae5e8281f0a/sensors-22-07494-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b38/9573233/bdc1b6ce92e3/sensors-22-07494-g012.jpg

Similar Articles

1. Night Vision Anti-Halation Method Based on Infrared and Visible Video Fusion.
Sensors (Basel). 2022 Oct 2;22(19):7494. doi: 10.3390/s22197494.
2. Frame rate up conversion based on variational image fusion.
IEEE Trans Image Process. 2014 Jan;23(1):399-412. doi: 10.1109/TIP.2013.2288139.
3. A multistage motion vector processing method for motion-compensated frame interpolation.
IEEE Trans Image Process. 2008 May;17(5):694-708. doi: 10.1109/TIP.2008.919360.
4. Novel true-motion estimation algorithm and its application to motion-compensated temporal frame interpolation.
IEEE Trans Image Process. 2013 Aug;22(8):2931-45. doi: 10.1109/TIP.2012.2222893. Epub 2012 Oct 4.
5. A motion-aligned auto-regressive model for frame rate up conversion.
IEEE Trans Image Process. 2010 May;19(5):1248-58. doi: 10.1109/TIP.2009.2039055. Epub 2009 Dec 22.
6. Optimal temporal interpolation filter for motion-compensated frame rate up conversion.
IEEE Trans Image Process. 2006 Apr;15(4):978-91. doi: 10.1109/tip.2005.863947.
7. Rate distortion optimization for H.264 interframe coding: a general framework and algorithms.
IEEE Trans Image Process. 2007 Jul;16(7):1774-84. doi: 10.1109/tip.2007.896685.
8. Full-frame video stabilization with motion inpainting.
IEEE Trans Pattern Anal Mach Intell. 2006 Jul;28(7):1150-63. doi: 10.1109/TPAMI.2006.141.
9. 3-D model-based frame interpolation for distributed video coding of static scenes.
IEEE Trans Image Process. 2007 May;16(5):1246-57. doi: 10.1109/tip.2007.894272.
10. Optical flow estimation using temporally oversampled video.
IEEE Trans Image Process. 2005 Aug;14(8):1074-87. doi: 10.1109/tip.2005.851688.

References Cited in This Article

1. Driver glare exposure with different vehicle frontlighting systems.
J Safety Res. 2021 Feb;76:228-237. doi: 10.1016/j.jsr.2020.12.018. Epub 2021 Jan 8.
2. Optical Flow Based Co-located Reference Frame for Video Compression.
IEEE Trans Image Process. 2020 Aug 12;PP. doi: 10.1109/TIP.2020.3014723.
3. Nighttime driving: visual, lighting and visibility challenges.
Ophthalmic Physiol Opt. 2020 Mar;40(2):187-201. doi: 10.1111/opo.12659. Epub 2019 Dec 25.
4. MEMC-Net: Motion Estimation and Motion Compensation Driven Neural Network for Video Interpolation and Enhancement.
IEEE Trans Pattern Anal Mach Intell. 2021 Mar;43(3):933-948. doi: 10.1109/TPAMI.2019.2941941. Epub 2021 Feb 4.
5. Spatial Control of Multiphoton Electron Excitations in InAs Nanowires by Varying Crystal Phase and Light Polarization.
Nano Lett. 2018 Feb 14;18(2):907-915. doi: 10.1021/acs.nanolett.7b04267. Epub 2018 Jan 11.
6. Motion estimation methods for overlapped block motion compensation.
IEEE Trans Image Process. 2000;9(9):1509-21. doi: 10.1109/83.862628.