Liang Yu-Ming, Shih Sheng-Wen, Shih Arthur Chun-Chieh, Liao Hong-Yuan Mark, Lin Cheng-Chung
Department of Computer Science, National Chiao Tung University, Hsinchu 300, Taiwan.
IEEE Trans Syst Man Cybern B Cybern. 2009 Feb;39(1):268-80. doi: 10.1109/TSMCB.2008.2005643. Epub 2008 Dec 9.
Visual analysis of human behavior has generated considerable interest in the field of computer vision because of its wide spectrum of potential applications. Human behavior can be segmented into atomic actions, each of which indicates a basic and complete movement. Learning and recognizing atomic human actions are essential to human behavior analysis. In this paper, we propose a framework for handling this task using variable-length Markov models (VLMMs). The framework comprises two modules: a posture labeling module and a VLMM atomic action learning and recognition module. First, a posture template selection algorithm, based on a modified shape context matching technique, is developed. The selected posture templates form a codebook that is used to convert input posture sequences into discrete symbol sequences for subsequent processing. Then, the VLMM technique is applied to learn the training symbol sequences of atomic actions. Finally, the constructed VLMMs are transformed into hidden Markov models (HMMs) for recognizing input atomic actions. This approach combines the advantages of the excellent learning function of a VLMM and the fault-tolerant recognition ability of an HMM. Experiments on realistic data demonstrate the efficacy of the proposed system.
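The abstract does not spell out the VLMM learning step; the core idea is to keep a longer context only when it changes the predictive distribution over the next symbol, in the style of a prediction suffix tree. A minimal sketch under stated assumptions — the function names (`train_vlmm`, `predict`), the maximum context order, and the KL-divergence pruning threshold are all illustrative choices, not the paper's actual algorithm:

```python
from collections import defaultdict
import math

def train_vlmm(sequences, max_order=3, kl_threshold=0.05):
    """Learn a variable-length Markov model from discrete symbol sequences."""
    # Count next-symbol frequencies for every context up to max_order.
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for i in range(len(seq)):
            for order in range(0, max_order + 1):
                if i - order < 0:
                    break
                ctx = tuple(seq[i - order:i])
                counts[ctx][seq[i]] += 1

    def dist(ctx):
        total = sum(counts[ctx].values())
        return {s: c / total for s, c in counts[ctx].items()}

    # Keep a longer context only if its predictive distribution differs
    # from that of its shorter suffix (KL-divergence test). Any symbol
    # seen after ctx was also seen after its suffix, so q[s] is defined.
    model = {(): dist(())}
    for ctx in sorted(counts, key=len):
        if not ctx:
            continue
        parent = ctx[1:]
        p, q = dist(ctx), dist(parent)
        kl = sum(pv * math.log(pv / q[s]) for s, pv in p.items())
        if kl > kl_threshold and parent in model:
            model[ctx] = p
    return model

def predict(model, history):
    """Predict the next symbol using the longest stored context."""
    for k in range(len(history), -1, -1):
        ctx = tuple(history[len(history) - k:])
        if ctx in model:
            return model[ctx]
```

For example, training on the alternating posture-symbol sequence "ababab" stores the order-1 contexts ('a',) and ('b',) because they predict deterministically, while longer contexts add no information and are pruned; `predict(model, "a")` then returns a distribution concentrated on 'b'.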