Xu Jia, Mukherjee Lopamudra, Li Yin, Warner Jamieson, Rehg James M, Singh Vikas
University of Wisconsin-Madison.
University of Wisconsin-Whitewater.
Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2015 Jun;2015:2235-2244. doi: 10.1109/CVPR.2015.7298836.
With the proliferation of wearable cameras, the number of videos of users documenting their personal lives with such devices is rapidly increasing. Since such videos may span hours, there is an important need for mechanisms that represent the information content in a compact form (i.e., shorter videos which are more easily browsable/sharable). Motivated by these applications, this paper focuses on the problem of egocentric video summarization. Such videos are usually continuous, with significant camera shake and other quality issues. For these reasons, there is growing consensus that directly applying standard video summarization tools to such data yields unsatisfactory performance. In this paper, we demonstrate that using gaze tracking information (such as fixations and saccades) significantly helps the summarization task. It allows meaningful comparison of different image frames and enables deriving personalized summaries (gaze provides a sense of the camera wearer's intent). We formulate a summarization model that captures common-sense properties of a good summary, and show that it can be solved as a submodular function maximization with partition matroid constraints, opening the door to a rich body of work from combinatorial optimization. We evaluate our approach on a new gaze-enabled egocentric video dataset (over 15 hours), which will be a valuable standalone resource.
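To make the abstract's optimization framing concrete, here is a minimal illustrative sketch (not the authors' code) of greedy maximization of a monotone submodular objective under a partition matroid constraint, which is the general problem class the paper invokes. The facility-location objective, the random frame features, the temporal segmentation, and the per-segment budget below are all illustrative assumptions; the paper's actual objective is built from gaze and frame information not reproduced here.

```python
# Sketch: greedy selection of summary frames maximizing a monotone submodular
# coverage score subject to a partition matroid (at most k frames per segment).
import numpy as np

def facility_location(selected, similarity):
    """Submodular coverage score: how well `selected` frames represent all frames."""
    if not selected:
        return 0.0
    return float(similarity[:, list(selected)].max(axis=1).sum())

def greedy_partition_matroid(similarity, segments, per_segment_budget):
    """At each step, add the feasible frame with the largest marginal gain,
    respecting the per-segment budget (the partition matroid constraint)."""
    n = similarity.shape[0]
    selected, counts = set(), {s: 0 for s in set(segments)}
    current = 0.0
    while True:
        best_gain, best_f = 0.0, None
        for f in range(n):
            if f in selected or counts[segments[f]] >= per_segment_budget:
                continue  # infeasible under the matroid constraint
            gain = facility_location(selected | {f}, similarity) - current
            if gain > best_gain:
                best_gain, best_f = gain, f
        if best_f is None:
            break
        selected.add(best_f)
        counts[segments[best_f]] += 1
        current += best_gain
    return sorted(selected)

# Toy usage: 12 frames with random features, 3 temporal segments, 1 pick each.
rng = np.random.default_rng(0)
feats = rng.random((12, 8))
sim = feats @ feats.T                      # frame-to-frame similarity
segs = [i // 4 for i in range(12)]         # segments 0, 1, 2 of 4 frames each
print(greedy_partition_matroid(sim, segs, per_segment_budget=1))
```

The greedy strategy shown here is the standard baseline for this problem class and carries the usual approximation guarantee for monotone submodular maximization over a matroid; it is included only to illustrate the structure of the constraint, not as the method of the paper.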