Anderson Keith, McOwan Peter W
Department of Computer Science, Queen Mary, University of London, London E1 4NS, UK.
IEEE Trans Syst Man Cybern B Cybern. 2006 Feb;36(1):96-105. doi: 10.1109/tsmcb.2005.854502.
A fully automated, multistage system for the real-time recognition of facial expressions is presented. The system uses facial motion to characterize monochrome frontal views of facial expressions and operates effectively in cluttered and dynamic scenes, recognizing the six emotions universally associated with distinct facial expressions, namely happiness, sadness, disgust, surprise, fear, and anger. Faces are located using a spatial ratio template tracker algorithm. Optical flow of the face is subsequently determined using a real-time implementation of a robust gradient model. The expression recognition system then averages facial velocity information over identified regions of the face and cancels out rigid head motion by taking ratios of this averaged motion. The resulting motion signatures are classified by support vector machines as either nonexpressive or as one of the six basic emotions. The completed system is demonstrated in two simple affective computing applications that respond in real time to the facial expressions of the user, offering the potential to improve the interaction between computer users and technology.
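The abstract names a real-time robust gradient model for computing optical flow but gives no details. As a minimal stand-in for the idea of gradient-based flow estimation, the sketch below uses a basic Lucas-Kanade least-squares solve of the brightness-constancy constraint over a patch; the frame sizes and test pattern are illustrative, not from the paper.

```python
import numpy as np

def lucas_kanade(prev, curr):
    """Estimate a single (u, v) flow vector for an image patch by a
    least-squares solve of the brightness-constancy constraint
    Ix*u + Iy*v + It = 0 over all pixels in the patch."""
    Ix = np.gradient(prev, axis=1)   # horizontal spatial gradient
    Iy = np.gradient(prev, axis=0)   # vertical spatial gradient
    It = curr - prev                 # temporal gradient
    A = np.array([[(Ix * Ix).sum(), (Ix * Iy).sum()],
                  [(Ix * Iy).sum(), (Iy * Iy).sum()]])
    b = -np.array([(Ix * It).sum(), (Iy * It).sum()])
    return np.linalg.solve(A, b)     # (u, v) in pixels per frame

# Smooth, periodic test pattern shifted one pixel to the right:
# the estimate should recover roughly u = 1, v = 0.
ys, xs = np.mgrid[0:48, 0:60]
frame1 = np.sin(2 * np.pi * xs / 30) + np.cos(2 * np.pi * ys / 24)
frame2 = np.roll(frame1, 1, axis=1)  # pure horizontal motion

u, v = lucas_kanade(frame1, frame2)
```

A dense per-pixel field, as the paper's pipeline needs, would repeat this solve over local windows rather than one global patch.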
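The region-averaging, ratio-based cancellation, and SVM classification steps can be sketched as follows. The region layout, flow-field size, and training data here are all hypothetical stand-ins (the abstract does not specify them), and a two-class SVM replaces the paper's seven-way nonexpressive-plus-six-emotions classifier for brevity.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical face regions as (row, col) slices of a 60x60 flow field;
# the abstract does not give the paper's actual region layout.
REGIONS = [
    (slice(0, 20), slice(5, 55)),    # brow
    (slice(20, 40), slice(5, 55)),   # eyes / cheeks
    (slice(40, 60), slice(10, 50)),  # mouth
]

def motion_signature(flow):
    """Average the flow field over each region, then divide by the
    whole-face average: a uniform rigid head translation enters every
    region equally, so it largely cancels in these ratios."""
    means = np.array([flow[r].mean(axis=(0, 1)) for r in REGIONS])
    whole = means.mean(axis=0)
    return (means / (whole + 1e-8)).ravel()

# Synthetic stand-in data: every frame carries a random rigid head
# motion; the "expressive" class adds extra mouth motion on top of it.
rng = np.random.default_rng(0)

def make_flow(expressive):
    rigid = rng.normal(loc=2.0, scale=0.3, size=2)  # shared head motion
    flow = np.tile(rigid, (60, 60, 1))
    if expressive:
        flow[40:60, 10:50, 1] += 3.0                # mouth moves
    return flow

X = np.array([motion_signature(make_flow(i % 2 == 0)) for i in range(40)])
y = np.array([i % 2 for i in range(40)])
clf = SVC(kernel="rbf").fit(X, y)
```

Because the rigid component divides out, the signatures of the two classes separate cleanly even though every sample has a different head motion, which is the property the abstract's ratio step is designed to provide.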