Department of Software Engineering, Fatima Jinnah Women University, Rawalpindi 46000, Pakistan.
Software Engineering Department, University of Engineering and Technology, Taxila 47050, Pakistan.
Sensors (Basel). 2019 Jun 21;19(12):2790. doi: 10.3390/s19122790.
Human action recognition (HAR) has emerged as a core research domain for video understanding and analysis, thus attracting many researchers. Although significant results have been achieved in simple scenarios, HAR is still a challenging task due to issues associated with view independence, occlusion, and inter-class variation observed in realistic scenarios. In previous research efforts, the classical bag of visual words approach along with its variations has been widely used. In this paper, we propose a Dynamic Spatio-Temporal Bag of Expressions (D-STBoE) model for human action recognition without compromising the strengths of the classical bag of visual words approach. Expressions are formed based on the density of the spatio-temporal cube built around each visual word. To handle inter-class variation, we use class-specific visual word representations for visual expression generation. In contrast to the Bag of Expressions (BoE) model, the formation of visual expressions is based on the density of spatio-temporal cubes built around each visual word, since constructing neighborhoods with a fixed number of neighbors can include non-relevant information, making a visual expression less discriminative under occlusion and changing viewpoints. The proposed approach thus makes the model more robust to the occlusion and viewpoint-change challenges present in realistic scenarios. Furthermore, we train a multi-class Support Vector Machine (SVM) to classify bags of expressions into action classes. Comprehensive experiments on four publicly available datasets (KTH, UCF Sports, UCF11, and UCF50) show that the proposed model outperforms existing state-of-the-art human action recognition methods in terms of accuracy, achieving 99.21%, 98.60%, 96.94%, and 94.10%, respectively.
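The pipeline the abstract describes has three stages: class-specific codebooks built by clustering interest-point descriptors, expressions formed from the words that fall inside a spatio-temporal cube around each interest point (a density criterion, rather than a fixed number of nearest neighbors), and a multi-class SVM over the resulting histograms. Below is a minimal sketch of that flow, assuming precomputed descriptors with (x, y, t) locations; the cube radius, codebook size, helper names, and toy data are all hypothetical illustrations, not the paper's implementation.

```python
# Minimal BoE-style sketch in the spirit of D-STBoE (assumptions, not the
# paper's code): descriptors are precomputed per video, each with an
# (x, y, t) location; cube radius and codebook size are arbitrary.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

K = 20  # words per class codebook (hypothetical value)

def build_class_codebooks(descs_by_class, k=K, seed=0):
    """One k-means codebook per action class (class-specific visual words)."""
    return {c: KMeans(n_clusters=k, n_init=10, random_state=seed).fit(d)
            for c, d in descs_by_class.items()}

def encode_video(descs, locs, codebook,
                 radius=np.array([20.0, 20.0, 10.0]), k=K):
    """Histogram of word-pair 'expressions': two interest points whose
    (x, y, t) locations fall inside the same spatio-temporal cube
    contribute one count to the pair of their word labels."""
    words = codebook.predict(descs)
    hist = np.zeros(k * k)
    for i in range(len(locs)):
        # Density criterion: neighbors inside the cube, not a fixed k-NN.
        inside = np.all(np.abs(locs - locs[i]) <= radius, axis=1)
        for j in np.flatnonzero(inside):
            if j != i:
                hist[words[i] * k + words[j]] += 1
    n = hist.sum()
    return hist / n if n else hist

def featurize(videos, codebooks):
    """Concatenate per-class encodings into one feature vector per video."""
    return np.array([np.concatenate(
        [encode_video(descs, locs, cb) for cb in codebooks.values()])
        for descs, locs in videos])

# Toy usage on random data (a stand-in for real interest-point descriptors).
rng = np.random.default_rng(0)
descs_by_class = {c: rng.normal(size=(200, 32)) for c in ("walk", "run")}
codebooks = build_class_codebooks(descs_by_class)
train = [(rng.normal(size=(30, 32)), rng.uniform(0, 100, size=(30, 3)))
         for _ in range(8)]
labels = [0, 1] * 4
clf = SVC(kernel="linear").fit(featurize(train, codebooks), labels)
print(clf.predict(featurize(train[:2], codebooks)))
```

Note how the cube-based neighbor test differs from a fixed-size neighborhood: isolated interest points contribute few pair counts, so sparse, possibly irrelevant regions carry little weight, which is the density intuition the abstract contrasts with fixed-neighbor construction.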