Fang Hui, Hart John C
Google Inc, Mountain View, CA 94043, USA.
IEEE Trans Vis Comput Graph. 2006 Nov-Dec;12(6):1580-9. doi: 10.1109/TVCG.2006.102.
We propose a video editing system that allows a user to apply a time-coherent texture to a surface depicted in the raw video from a single uncalibrated camera, including the surface texture mapping of a texture image and the surface texture synthesis from a texture swatch. Our system avoids the construction of a 3D shape model and instead uses the recovered normal field to deform the texture so that it plausibly adheres to the undulations of the depicted surface. The texture mapping method uses the nonlinear least-squares optimization of a spring model to control the behavior of the texture image as it is deformed to match the evolving normal field through the video. The texture synthesis method uses a coarse optical flow to advect clusters of pixels corresponding to patches of similarly oriented surface points. These clusters are organized into a minimum advection tree to account for the dynamic visibility of clusters. We take a rather crude approach to normal recovery and optical flow estimation, yet the results are robust and plausible for nearly diffuse surfaces such as faces and T-shirts.