Zhu ChangAn, Joslin Chris
School of Information Technology, Carleton University, Ottawa, Ontario, Canada.
Comput Animat Virtual Worlds. 2024 Nov-Dec;35(6):e70001. doi: 10.1002/cav.70001. Epub 2024 Nov 19.
3D facial motion retargeting has the advantage of capturing and recreating the nuances of human facial motions, speeding up the time-consuming 3D facial animation process. However, existing retargeting pipelines are limited in how well they reflect a facial motion's semantic information (i.e., its meaning and intensity), especially when applied to nonhuman characters. Retargeting quality also relies heavily on the target face rig, which requires time-consuming preparation such as 3D scanning of human faces and modeling of blendshapes. In this paper, we propose a facial motion retargeting pipeline aimed at providing fast and semantically accurate retargeting results for diverse characters. The new framework comprises a target face parameterization module based on facial anatomy and a compatible source motion interpretation module. From quantitative and qualitative evaluations, we found that the proposed retargeting pipeline can naturally recreate the expressions performed by a motion capture subject with equivalent meanings and intensities, and that this semantic accuracy extends to the faces of nonhuman characters without labor-intensive preparation.