Wang Zhibo, Ling Jingwang, Feng Chengzeng, Lu Ming, Xu Feng
IEEE Trans Vis Comput Graph. 2022 Jun;28(6):2364-2375. doi: 10.1109/TVCG.2020.3033838. Epub 2022 May 2.
Blendshape representations are widely used in facial animation. To build the blendshapes of a character, consistent semantics must be maintained across all of them. This is difficult for real characters, however, because a face shape with the same semantics varies significantly across identities. Previous studies have handled this issue by asking users to perform a set of predefined expressions with specified semantics. We observe that facial emotions can be used to define semantics. Herein, we propose a real-time technique that directly updates blendshapes without predefined expressions; it preserves semantics based on the emotion information extracted from an arbitrary facial motion sequence. In addition, we have designed corresponding algorithms to efficiently update blendshapes with large- and middle-scale face shapes and fine-scale facial details, such as wrinkles, in a real-time face tracking system. The experimental results indicate that using a commodity RGBD sensor, we can achieve real-time online blendshape updates with well-preserved semantics and user-specific facial features and details.
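For context, the blendshape representation the abstract refers to expresses a face mesh as the neutral shape plus a weighted sum of per-expression offsets, F(w) = B0 + Σ_i w_i (B_i − B0). The following is a minimal sketch of that standard delta-blendshape model with toy data; the shapes and weights are illustrative assumptions, not the paper's actual blendshapes or update algorithm.

```python
import numpy as np

def blend(neutral, blendshapes, weights):
    """Combine a neutral mesh (V, 3) with K expression shapes (K, V, 3)
    using the delta-blendshape model: F(w) = B0 + sum_i w_i * (B_i - B0)."""
    deltas = blendshapes - neutral[None, :, :]      # (K, V, 3) offsets from neutral
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy example: 2 vertices, 2 expression shapes (hypothetical data).
neutral = np.zeros((2, 3))
shapes = np.array([
    [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]],  # expression 1 moves vertex 0 along x
    [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]],  # expression 2 moves vertex 1 along y
])
face = blend(neutral, shapes, np.array([0.5, 1.0]))
```

Because the model is linear in the weights, consistent semantics across identities means that the same weight vector should produce the same expression on every character's blendshape set, which is the property the paper's online update aims to preserve.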