Lei Chenyang, Xing Yazhou, Ouyang Hao, Chen Qifeng
IEEE Trans Pattern Anal Mach Intell. 2023 Jan;45(1):356-371. doi: 10.1109/TPAMI.2022.3142071. Epub 2022 Dec 5.
Applying an image processing algorithm independently to each video frame often leads to temporal inconsistency in the resulting video. To address this issue, we present a novel and general approach for blind video temporal consistency. Our method is trained directly on only a pair of original and processed videos, rather than on a large dataset. Unlike most previous methods that enforce temporal consistency with optical flow, we show that temporal consistency can be achieved by training a convolutional network on a video with Deep Video Prior (DVP). Moreover, a carefully designed iteratively reweighted training strategy is proposed to address the challenging multimodal inconsistency problem. We demonstrate the effectiveness of our approach on 7 computer vision tasks on videos. Extensive quantitative and perceptual experiments show that our approach outperforms state-of-the-art methods on blind video temporal consistency. We further extend DVP to video propagation and demonstrate its effectiveness in propagating three different types of information (color, artistic style, and object segmentation). A progressive propagation strategy with pseudo labels is also proposed to enhance DVP's performance on video propagation. Our source codes are publicly available at https://github.com/ChenyangLEI/deep-video-prior.
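The core idea can be illustrated with a toy experiment: fit a model per video that maps original frames to processed frames; a shared, consistent transform is fitted quickly, while zero-mean frame-wise flicker averages out. The sketch below is a minimal, hypothetical stand-in for this setup, assuming a per-channel linear gain in place of the paper's convolutional network and synthetic flicker in place of a real processing algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a video: T frames of H x W x 3 in [0, 1].
T, H, W = 4, 8, 8
original = rng.random((T, H, W, 3)).astype(np.float32)

# Simulated "processed" video: a fixed color transform shared by all
# frames, plus frame-wise flicker (the temporal inconsistency).
true_gain = np.array([1.2, 0.8, 1.0], dtype=np.float32)
flicker = rng.normal(0.0, 0.1, size=(T, 1, 1, 3)).astype(np.float32)
processed = original * true_gain + flicker

# "Network": a per-channel gain g, trained on this single video pair.
# (An illustrative assumption; the paper trains a CNN per video.)
g = np.ones(3, dtype=np.float32)
lr = 0.5
for step in range(200):
    pred = original * g
    err = pred - processed
    # Subgradient of the L1 loss with respect to g, averaged over
    # all frames and pixels; step size decays over iterations.
    grad = np.mean(np.sign(err) * original, axis=(0, 1, 2))
    g -= lr * grad / (step + 1)

# g approaches the shared transform; the zero-mean flicker is not
# reproduced, so applying g yields a temporally consistent result.
print(np.round(g, 2))
```

The fitted gain lands near `true_gain` rather than chasing the per-frame flicker, which is the intuition behind DVP: the shared, consistent mapping dominates what the model learns from a single video pair.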