IEEE Trans Pattern Anal Mach Intell. 2018 Jun;40(6):1510-1517. doi: 10.1109/TPAMI.2017.2712608. Epub 2017 Jun 6.
Typical human actions last several seconds and exhibit characteristic spatio-temporal structure. Recent methods attempt to capture this structure and learn action representations with convolutional neural networks. Such representations, however, are typically learned at the level of a few video frames, failing to model actions at their full temporal extent. In this work we learn video representations using neural networks with long-term temporal convolutions (LTC). We demonstrate that LTC-CNN models with increased temporal extents improve the accuracy of action recognition. We also study the impact of different low-level representations, such as raw pixel values and optical flow vector fields, and demonstrate the importance of high-quality optical flow estimation for learning accurate action models. We report state-of-the-art results on two challenging benchmarks for human action recognition: UCF101 (92.7%) and HMDB51 (67.2%).
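The core idea of long-term temporal convolutions is to extend the convolution kernel along the time axis so that a single filter response aggregates evidence from many frames rather than just a few. A minimal sketch of this idea (not the authors' implementation) is a valid-mode convolution of per-frame scores along the time axis; the clip values and kernel sizes below are hypothetical and chosen only to contrast a short temporal extent with a long one:

```python
# Illustrative sketch of a temporal convolution over a video clip.
# Each frame is reduced to a single (hypothetical) motion score; a
# longer kernel pools information over a longer temporal extent,
# which is the key property of LTC described in the abstract.

def temporal_conv(frames, kernel):
    """Valid-mode 1-D convolution of per-frame scores along time."""
    t = len(kernel)
    return [
        sum(frames[i + j] * kernel[j] for j in range(t))
        for i in range(len(frames) - t + 1)
    ]

# A 100-frame clip of per-frame motion scores (synthetic values).
clip = [float(i % 10) for i in range(100)]

# Short temporal extent (5 frames) vs. long temporal extent (60 frames),
# each with a uniform averaging kernel:
short_out = temporal_conv(clip, [1.0 / 5] * 5)
long_out = temporal_conv(clip, [1.0 / 60] * 60)

print(len(short_out))   # 96 outputs: 100 - 5 + 1
print(len(long_out))    # 41 outputs: 100 - 60 + 1
print(long_out[0])      # 4.5: the long kernel averages six full cycles
```

In the actual LTC networks the same principle applies to 3-D convolutions over space and time, with learned kernels and stacked layers; this sketch only isolates the effect of enlarging the temporal receptive field.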