Yoon Sungbaek, Park Hyunjin, Yi Juneho
School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon 16419, Korea.
School of Information and Communication Engineering, North University of China, Taiyuan 03000, China.
Sensors (Basel). 2016 Jun 25;16(7):981. doi: 10.3390/s16070981.
This research features object recognition that exploits the context of object-action interaction to enhance recognition performance. Because objects have specific usages, and the human actions corresponding to these usages can be associated with them, human actions provide effective cues for object recognition. When objects from different categories have similar appearances, the human action associated with each object can be very effective in resolving the resulting recognition ambiguities. We propose an efficient method that integrates human-object interaction into the object recognition process. We represent human actions by concatenating poselet vectors computed from key frames, and we learn the probabilities of objects and actions using random forest and multi-class AdaBoost algorithms. Our experimental results show that this poselet representation of human actions is quite effective for integrating human action information into object recognition.
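To make the described pipeline concrete, below is a minimal sketch in Python. Only the ingredients named in the abstract are taken from the paper: concatenating per-key-frame poselet vectors into an action descriptor, a random forest for object class probabilities, and multi-class AdaBoost for action probabilities. Everything else (the feature dimensions, the toy data, the object-action co-occurrence prior, and the product-rule fusion of the two probability estimates) is an illustrative assumption, not the authors' actual formulation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier

# --- Illustrative inputs (shapes and names are assumptions) ------------------
# poselet_activations: one poselet activation vector per key frame.
def action_descriptor(poselet_activations):
    """Concatenate per-key-frame poselet vectors into a single action descriptor."""
    return np.concatenate(poselet_activations)

rng = np.random.default_rng(0)
n_train, n_poselets, n_key_frames = 200, 150, 3
X_action = rng.random((n_train, n_poselets * n_key_frames))  # action descriptors
y_action = rng.integers(0, 4, n_train)                       # action labels (4 actions)
X_object = rng.random((n_train, 128))                        # appearance features
y_object = rng.integers(0, 5, n_train)                       # object labels (5 objects)

# Appearance-based object classifier: random forest -> P(object | appearance).
object_clf = RandomForestClassifier(n_estimators=200, random_state=0)
object_clf.fit(X_object, y_object)

# Action classifier: AdaBoost (scikit-learn handles the multi-class case).
action_clf = AdaBoostClassifier(n_estimators=100, random_state=0)
action_clf.fit(X_action, y_action)

# Hypothetical object-action co-occurrence prior P(object | action),
# e.g. estimated from training annotations (rows: actions, cols: objects).
prior = np.full((4, 5), 1.0 / 5)

def recognize(appearance_feat, poselet_activations):
    """Fuse appearance and action evidence (simple product rule, an assumption)."""
    p_obj_appearance = object_clf.predict_proba([appearance_feat])[0]
    p_action = action_clf.predict_proba([action_descriptor(poselet_activations)])[0]
    p_obj_action = p_action @ prior            # marginalize action over the prior
    scores = p_obj_appearance * p_obj_action   # combine the two probability sources
    return int(np.argmax(scores)), scores / scores.sum()

obj_id, posterior = recognize(
    X_object[0], [rng.random(n_poselets) for _ in range(n_key_frames)]
)
print(obj_id, posterior)
```

In this sketch, the action evidence only re-weights the appearance-based object probabilities, which mirrors the abstract's claim that action context mainly helps disambiguate objects with similar appearances; the exact fusion used in the paper is not specified in the abstract.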