Sorbonne University, Université de technologie de Compiègne, CNRS, UMR 7338 Biomechanics and Bioengineering, Centre de recherche Royallieu, CS 60 319 Compiègne, France.
Department of maxillo-facial surgery, CHU AMIENS-PICARDIE, Amiens, France; CHIMERE Team, University of Picardie Jules Verne, 80000 Amiens, France.
Comput Methods Programs Biomed. 2020 Jul;191:105410. doi: 10.1016/j.cmpb.2020.105410. Epub 2020 Feb 19.
Head and facial mimic animations play important roles in various fields such as human-machine interaction, internet communication, multimedia applications, and facial mimic analysis. Numerous studies have attempted to simulate these animations. However, few have met all the requirements of full rigid head and non-rigid facial mimic animation in a subject-specific manner at real-time frame rates. Consequently, the present study aimed to develop a real-time computer vision system that simultaneously tracks rigid head and non-rigid facial mimic movements.
Our system was developed using a system-of-systems approach. A data acquisition sub-system was implemented using a contactless Kinect sensor. A subject-specific model generation sub-system was designed to create the geometrical model from the Kinect sensor without texture information. A subject-specific texture generation sub-system was designed to enhance the realism of the generated model with texture information. A head animation sub-system with graphical user interfaces was also developed. Model accuracy and system performance were analyzed.
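The abstract does not detail how rigid head pose is separated from non-rigid facial mimic motion. As a minimal illustrative sketch (not the authors' implementation), one common approach is to estimate the rigid transform between neutral and current 3D facial landmarks with the Kabsch algorithm, so that the residual displacement reflects only the non-rigid mimic component; the function names and landmark inputs below are assumptions for illustration:

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate the rigid transform (R, t) mapping src landmarks onto dst
    using the Kabsch algorithm (SVD of the cross-covariance matrix)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    # Guard against an improper rotation (reflection).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def split_motion(neutral, current):
    """Separate rigid head pose from non-rigid facial mimic displacement.

    Returns the estimated head rotation R, translation t, and the residual
    per-landmark displacement after removing the rigid component.
    """
    R, t = rigid_align(neutral, current)
    rigidly_moved = neutral @ R.T + t
    return R, t, current - rigidly_moved
```

With a purely rigid head movement the residual is near zero; during a facial mimic, the residual concentrates on the deforming landmarks, which is the signal a mimic-tracking sub-system would analyze.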
Comparison with an MRI-based model shows a very good level of accuracy for the model generated by our system (distance deviation of ~1 mm in the neutral position and errors in the range of 2-3 mm for different facial mimic positions). Moreover, the system speed can be optimized to reach a high frame rate (up to 60 fps) during different head and facial mimic animations.
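The abstract reports accuracy as a distance deviation between the generated model and an MRI-based reference. A common way to compute such a metric, sketched here as an assumption rather than the authors' exact protocol, is the mean nearest-neighbour distance from each generated vertex to the reference point cloud:

```python
import numpy as np

def mean_surface_deviation(generated, reference):
    """Mean nearest-neighbour distance from each vertex of the generated
    model to the reference (e.g. MRI-based) point cloud, in the same
    units as the inputs (here assumed mm). Brute-force for clarity."""
    # Pairwise distances: (n_generated, n_reference)
    diffs = generated[:, None, :] - reference[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    return dists.min(axis=1).mean()
```

For large meshes a k-d tree (e.g. `scipy.spatial.cKDTree`) would replace the brute-force pairwise computation, but the reported ~1 mm neutral-position deviation corresponds to exactly this kind of per-vertex average.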
This study presents a novel computer vision system for simultaneously tracking subject-specific rigid head and non-rigid facial mimic movements in real time. As a perspective, serious game technology will be integrated into this system toward a full computer-aided decision support system for facial rehabilitation.