School of Arts, Media and Engineering, 699 S. Mill Ave., Suite 395, PO Box 878709, Tempe, AZ 85281, USA.
IEEE Trans Pattern Anal Mach Intell. 2011 Jun;33(6):1175-88. doi: 10.1109/TPAMI.2010.199.
This paper presents a robust framework for online full-body gesture spotting from visual hull data. Using view-invariant pose features as observations, hidden Markov models (HMMs) are trained to spot gestures in continuous movement data streams. The two major contributions of this paper are 1) view-invariant pose feature extraction from visual hulls, and 2) a systematic approach that automatically detects and models specific nongesture movement patterns and uses their HMMs to reject outliers during gesture spotting. Experimental results demonstrate the view invariance of the proposed pose features, both for training poses and for new poses unseen in training, as well as the efficacy of specific nongesture models for outlier rejection. The framework has been extensively tested on the IXMAS gesture data set, where its spotting results surpass those reported for existing state-of-the-art gesture spotting methods on the same data.
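The core spotting decision described above can be illustrated with a toy example: score an observation sequence under a gesture HMM and under a "non-gesture" (filler) HMM, and reject the segment as an outlier when the non-gesture model wins. This is only a minimal sketch of the general idea, not the paper's actual method; the discrete symbols, two-state models, and all parameter values below are hypothetical stand-ins for the paper's view-invariant pose features and learned models.

```python
import numpy as np

def log_forward(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM.
    pi: initial state probabilities, A: state transition matrix,
    B: emission matrix (states x symbols)."""
    alpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        # Sum over previous states in log space, then emit the next symbol.
        alpha = np.logaddexp.reduce(alpha[:, None] + np.log(A), axis=0) + np.log(B[:, o])
    return np.logaddexp.reduce(alpha)

# Hypothetical toy models: a "gesture" HMM that strongly prefers
# alternating symbols 0/1, and a flat non-gesture model standing in
# for arbitrary, unstructured movement.
pi = np.array([0.5, 0.5])
A_gesture = np.array([[0.1, 0.9], [0.9, 0.1]])  # strong state alternation
B_gesture = np.array([[0.9, 0.1], [0.1, 0.9]])
A_null = np.full((2, 2), 0.5)                   # uninformative filler model
B_null = np.full((2, 2), 0.5)

seq = [0, 1, 0, 1, 0, 1]  # alternating pattern resembling the gesture
ll_gesture = log_forward(seq, pi, A_gesture, B_gesture)
ll_null = log_forward(seq, pi, A_null, B_null)

# Outlier rejection: accept only if the gesture model out-scores the filler.
label = "gesture" if ll_gesture > ll_null else "rejected as non-gesture"
print(label)  # → gesture
```

In the paper's setting the filler models are learned from detected nongesture patterns rather than fixed to a flat distribution, but the accept/reject comparison follows the same likelihood-ratio logic.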