

Online gesture spotting from visual hull data.

Affiliation

School of Arts, Media and Engineering, 699 S. Mill Ave., Suite 395, PO Box 878709, Tempe, AZ 85281, USA.

Publication

IEEE Trans Pattern Anal Mach Intell. 2011 Jun;33(6):1175-88. doi: 10.1109/TPAMI.2010.199.

Abstract

This paper presents a robust framework for online full-body gesture spotting from visual hull data. Using view-invariant pose features as observations, hidden Markov models (HMMs) are trained for gesture spotting from continuous movement data streams. Two major contributions of this paper are 1) view-invariant pose feature extraction from visual hulls, and 2) a systematic approach to automatically detecting and modeling specific nongesture movement patterns and using their HMMs for outlier rejection in gesture spotting. The experimental results have shown the view-invariance property of the proposed pose features for both training poses and new poses unseen in training, as well as the efficacy of using specific nongesture models for outlier rejection. Using the IXMAS gesture data set, the proposed framework has been extensively tested and the gesture spotting results are superior to those reported on the same data set obtained using existing state-of-the-art gesture spotting methods.
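The abstract's second contribution is a spotting decision: a gesture is accepted only when its HMM likelihood beats that of the learned non-gesture (outlier) models. The following is a minimal sketch of that idea, not the paper's implementation: it uses toy single-state discrete HMMs with made-up parameters, whereas the paper trains HMMs on continuous view-invariant pose features. All model names and numbers below are illustrative.

```python
import numpy as np

def log_forward(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (pi: initial probs, A: transitions, B: emissions), via the standard
    forward algorithm computed in log space."""
    alpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        alpha = np.logaddexp.reduce(alpha[:, None] + np.log(A), axis=0) + np.log(B[:, o])
    return np.logaddexp.reduce(alpha)

def spot(obs, gesture_hmms, nongesture_hmms):
    """Return the best-scoring gesture label only if it outscores every
    non-gesture model; otherwise reject the segment as an outlier."""
    g_scores = {name: log_forward(obs, *m) for name, m in gesture_hmms.items()}
    n_best = max(log_forward(obs, *m) for m in nongesture_hmms.values())
    best = max(g_scores, key=g_scores.get)
    return best if g_scores[best] > n_best else None

# Toy single-state HMMs (pi, A, B) over a binary observation alphabet;
# "wave"/"point"/"filler" are hypothetical labels for illustration.
gestures = {
    "wave":  (np.array([1.0]), np.array([[1.0]]), np.array([[0.9, 0.1]])),
    "point": (np.array([1.0]), np.array([[1.0]]), np.array([[0.1, 0.9]])),
}
nongestures = {
    "filler": (np.array([1.0]), np.array([[1.0]]), np.array([[0.5, 0.5]])),
}

label = spot([0, 0, 0, 0], gestures, nongestures)  # "wave"-like sequence
```

A sequence that matches no gesture model well (e.g. `[0, 1, 0, 1]`) scores highest under the uniform non-gesture model and is rejected, which is the role the paper's specific non-gesture HMMs play in continuous streams.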

