Wang Huamin, Sun Mingxuan, Yang Ruigang
Georgia Institute of Technology, Atlanta, GA 30332-0760, USA.
IEEE Trans Vis Comput Graph. 2007 Jul-Aug;13(4):697-710. doi: 10.1109/TVCG.2007.1019.
In this paper, we propose a novel framework called space-time light field rendering, which allows continuous exploration of a dynamic scene in both space and time. Compared to existing light field capture/rendering systems, it offers the ability to use unsynchronized video inputs and the added freedom of controlling the visualization in the temporal domain, such as smooth slow motion and temporal integration. In order to synthesize novel views from any viewpoint at any time instant, we develop a two-stage rendering algorithm. We first interpolate in the temporal domain to generate globally synchronized images, using a robust spatial-temporal image registration algorithm followed by edge-preserving image morphing. We then interpolate these software-synchronized images in the spatial domain to synthesize the final view. In addition, we introduce a very accurate and robust algorithm to estimate subframe temporal offsets among input video sequences. Experimental results from unsynchronized videos with or without time stamps show that our approach is capable of maintaining photorealistic quality for a variety of real scenes.
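To make the two-stage structure concrete, the following is a minimal sketch of a space-time interpolation pipeline, assuming frames are given as numpy arrays with per-camera timestamps already corrected by the estimated subframe offsets. The linear cross-dissolve and inverse-distance view weights used here are simplified stand-ins, not the paper's spatial-temporal registration, edge-preserving morphing, or light field resampling; all function names and data layouts are hypothetical.

```python
# Hypothetical sketch of the two-stage rendering: temporal synchronization
# per camera, then spatial blending across cameras for the novel view.
import numpy as np

def temporal_interpolate(frames, timestamps, t):
    """Stage 1: synthesize a frame at global time t for one camera.

    frames     : list of HxWx3 float arrays
    timestamps : capture times, already shifted by the camera's subframe offset
    A plain cross-dissolve replaces the registration + morphing step here.
    """
    ts = np.asarray(timestamps)
    i = np.clip(np.searchsorted(ts, t), 1, len(ts) - 1)
    t0, t1 = ts[i - 1], ts[i]
    w = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
    return (1.0 - w) * frames[i - 1] + w * frames[i]

def spatial_interpolate(synced_images, camera_positions, view_position):
    """Stage 2: blend the software-synchronized images into the novel view.

    Inverse-distance weights over camera positions stand in for the actual
    light field interpolation over the camera array.
    """
    d = np.linalg.norm(np.asarray(camera_positions) - view_position, axis=1)
    w = 1.0 / np.maximum(d, 1e-6)
    w /= w.sum()
    return sum(wi * img for wi, img in zip(w, synced_images))

def render_space_time(cameras, view_position, t):
    """cameras: list of dicts with 'frames', 'timestamps', and 'position'."""
    synced = [temporal_interpolate(c["frames"], c["timestamps"], t) for c in cameras]
    positions = [c["position"] for c in cameras]
    return spatial_interpolate(synced, positions, view_position)
```

The sketch only conveys the control flow: each camera is first brought to a common global time in software, and only then are the synchronized images combined spatially, which is what allows unsynchronized inputs and continuous control over both viewpoint and time.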