IEEE Trans Vis Comput Graph. 2017 Sep;23(9):2096-2107. doi: 10.1109/TVCG.2016.2608828. Epub 2016 Sep 13.
Using synthetic videos to present a 3D scene is a common requirement for architects, designers, engineers, and Cultural Heritage professionals; however, it is usually time consuming and, to obtain high-quality results, requires the support of a film-maker or computer-animation expert. We introduce an alternative approach that takes the 3D scene of interest and an example video as input, and automatically produces a video of the input scene that resembles the given video example. In other words, our algorithm allows the user to "replicate" an existing video on a different 3D scene. We build on the intuition that a video sequence of a static environment is strongly characterized by its optical flow, or, in other words, that two videos are similar if their optical flows are similar. We therefore recast the problem as producing a video of the input scene whose optical flow is similar to the optical flow of the input video. Our intuition is supported by a user study specifically designed to verify this statement. We have successfully tested our approach on several scenes and input videos, some of which are reported in the accompanying material of this paper.
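The abstract does not specify how optical-flow similarity is measured; a common choice for comparing dense flow fields is the average endpoint error (EPE). The sketch below, a minimal illustration and not the authors' actual metric, compares two per-pixel flow fields of shape (H, W, 2) this way:

```python
import numpy as np

def flow_similarity(flow_a, flow_b):
    """Mean endpoint error (EPE) between two dense optical-flow fields,
    each of shape (H, W, 2) holding per-pixel (dx, dy) motion vectors.
    Lower values mean more similar motion. Illustrative metric only;
    the paper does not state which similarity measure it uses."""
    assert flow_a.shape == flow_b.shape
    epe = np.linalg.norm(flow_a - flow_b, axis=-1)  # per-pixel vector distance
    return float(epe.mean())

# Two toy flow fields: uniform rightward motion vs. the same motion
# with a slight added downward drift.
h, w = 4, 4
flow1 = np.tile([1.0, 0.0], (h, w, 1))
flow2 = np.tile([1.0, 0.5], (h, w, 1))
print(flow_similarity(flow1, flow1))  # 0.0 (identical motion)
print(flow_similarity(flow1, flow2))  # 0.5 (constant 0.5 px offset everywhere)
```

In the paper's setting, the flow fields themselves would come from the example video and from candidate camera paths rendered in the input 3D scene, and the camera path would be optimized to drive such a dissimilarity measure down.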