
Illumination-Guided Video Composition via Gradient Consistency Optimization

Authors

Wang Jingye, Sheng Bin, Li Ping, Jin Yuxi, Feng David Dagan

Publication

IEEE Trans Image Process. 2019 May 20. doi: 10.1109/TIP.2019.2916769.

Abstract

Video composition aims to clone a patch from a source video into a target scene to create a seamless, harmonious blended frame sequence. Previous work in video composition usually suffers from artifacts around the blending region and from spatio-temporal inconsistency when illumination intensity varies between the input source and target videos. We propose an illumination-guided video composition method built on a unified spatial and temporal optimization framework. Our method produces globally consistent composition results while maintaining temporal coherence. We first compute a spatio-temporal blending boundary iteratively. For each frame, the gradient fields of the target and source frames are mixed adaptively based on the gradients and on inter-frame color differences. Temporal consistency is further enforced by optimizing luminance gradients across all composited frames. Moreover, we extend mean-value cloning by smoothing discrepancies between the source and target frames, and then exponentially suppress color-distribution overflow to reduce falsely blended pixels. Extensive experiments demonstrate the effectiveness and high-quality results of our illumination-guided composition.
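The per-frame step above mixes the source and target gradient fields adaptively. As a point of reference, the classic mixed-gradient rule from gradient-domain compositing keeps, at each pixel, whichever gradient has the larger magnitude; the sketch below illustrates that baseline rule only, not the authors' adaptive weighting by inter-frame color difference (the function name and structure are illustrative assumptions).

```python
import numpy as np

def mixed_gradient_field(source, target):
    """Per-pixel mixed gradient for a single grayscale frame pair.

    Keeps whichever of the source or target gradient has the larger
    magnitude -- the standard mixed-gradient rule used in
    gradient-domain (Poisson-style) compositing.
    Returns (gx, gy) arrays with the same shape as the inputs.
    """
    # np.gradient returns derivatives along (rows, cols) = (y, x)
    gy_s, gx_s = np.gradient(source.astype(float))
    gy_t, gx_t = np.gradient(target.astype(float))

    # Compare gradient magnitudes pixel-wise
    mag_s = np.hypot(gx_s, gy_s)
    mag_t = np.hypot(gx_t, gy_t)
    use_src = mag_s >= mag_t

    gx = np.where(use_src, gx_s, gx_t)
    gy = np.where(use_src, gy_s, gy_t)
    return gx, gy
```

The mixed field would then serve as the guidance field of a Poisson-type solve over the blending region; the paper's method additionally modulates this choice by inter-frame color differences to keep the result temporally coherent.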

