Faculty of Computers and Artificial Intelligence, Cairo University, Egypt; Member of Scientific Research Group in Egypt (SRGE), Egypt.
Neural Netw. 2020 Aug;128:331-344. doi: 10.1016/j.neunet.2020.05.017. Epub 2020 May 19.
Detecting the locations of multiple actions in videos and classifying them in real time is a challenging task termed the "action localization and prediction" problem. Convolutional neural networks (ConvNets) have achieved great success at action localization and prediction in still images. A major advance occurred when the AlexNet architecture was introduced in the ImageNet competition. ConvNets have since achieved state-of-the-art performance across a wide variety of machine vision tasks, including object detection, image segmentation, image classification, facial recognition, human pose estimation, and tracking. However, few works address action localization and prediction in videos. Current action localization research focuses primarily on classifying temporally trimmed videos in which only one action occurs per frame. Moreover, nearly all current approaches work only offline and are too slow to be useful in real-world environments. In this work, we propose a fast and accurate deep-learning approach for real-time action localization and prediction. The proposed approach uses convolutional neural networks to localize multiple actions and predict their classes in real time. It starts with appearance and motion detection networks (based on the "you only look once" (YOLO) architecture) that localize and classify actions from RGB frames and optical-flow frames in a two-stream model. We then propose a fusion step that increases the localization accuracy of the approach. Moreover, we generate an action tube based on frame-level detections. The frame-by-frame processing enables early action detection and prediction with top performance in terms of detection speed and precision.
The experimental results demonstrate the superiority of our proposed approach in terms of both processing time and accuracy compared with recent offline and online action localization and prediction approaches on the challenging UCF-101-24 and J-HMDB-21 benchmarks.
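To make the two-stream fusion idea concrete, the following is a minimal sketch of one plausible frame-level fusion scheme: an appearance (RGB) detection's confidence is boosted when a motion (optical-flow) detection of the same class overlaps it. The function names, the IoU threshold, and the averaging rule are illustrative assumptions, not the paper's exact method.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def fuse_detections(rgb_dets, flow_dets, iou_thr=0.5):
    """Illustrative fusion: each detection is (box, class, score).
    Keep the appearance box; if a same-class motion box overlaps it,
    raise the score toward the mean of the two stream scores."""
    fused = []
    for box_a, cls_a, score_a in rgb_dets:
        best_motion = 0.0
        for box_m, cls_m, score_m in flow_dets:
            if cls_m == cls_a and iou(box_a, box_m) >= iou_thr:
                best_motion = max(best_motion, score_m)
        new_score = max(score_a, (score_a + best_motion) / 2)
        fused.append((box_a, cls_a, new_score))
    return fused
```

Running such a fusion per frame, and then linking high-scoring same-class boxes across consecutive frames, yields the action tubes the abstract refers to; because each frame is processed independently, predictions are available before the video ends, which is what enables early action prediction.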