
Using Facial Animation to Increase the Enfacement Illusion and Avatar Self-Identification.

Publication information

IEEE Trans Vis Comput Graph. 2020 May;26(5):2023-2029. doi: 10.1109/TVCG.2020.2973075. Epub 2020 Feb 13.

Abstract

Through avatar embodiment in Virtual Reality (VR) we can achieve the illusion that an avatar is substituting for our body: the avatar moves as we move and we see it from a first-person perspective. However, self-identification, the process of identifying a representation as being oneself, poses new challenges because a key determinant is that we see, and have agency over, our own face. Providing control over the face is hard with current HMD technologies because face tracking is either cumbersome or error-prone. However, limited animation is easily achieved based on speech. We investigate the level of avatar enfacement, that is, believing that a picture of a face is one's own face, with three levels of facial animation: (i) one in which the facial expressions of the avatar are static, (ii) one in which we implement lip-sync motion, and (iii) one in which the avatar presents lip-sync plus additional facial animations, including blinks, designed by a professional animator. We measure self-identification using a face-morphing tool that morphs from the face of the participant to the face of a gender-matched avatar. We find that self-identification with avatars can be increased through pre-baked animations even when these are not photorealistic and do not resemble the participant.
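The morphing measure described above can be illustrated with a minimal sketch: a linear cross-dissolve between two pre-aligned face images, with a slider parameter running from the participant's photo to the avatar's face. This is only an illustration of the idea, not the paper's actual tool (real morphing tools also warp facial geometry); the function and variable names here are hypothetical.

```python
import numpy as np

def morph(face_a: np.ndarray, face_b: np.ndarray, alpha: float) -> np.ndarray:
    """Cross-dissolve between two aligned face images.

    alpha = 0.0 returns face_a (e.g. the participant's photo);
    alpha = 1.0 returns face_b (e.g. the avatar's face).
    """
    if face_a.shape != face_b.shape:
        raise ValueError("faces must be pre-aligned to the same shape")
    return (1.0 - alpha) * face_a + alpha * face_b

# Sweep the morph slider, as a participant might when judging the point
# at which the blended face stops looking like their own. The arrays
# below are tiny stand-ins for aligned grayscale images.
participant = np.zeros((4, 4))   # stand-in for the participant's photo
avatar = np.ones((4, 4))         # stand-in for the avatar render
morph_sequence = [morph(participant, avatar, a) for a in np.linspace(0, 1, 5)]
```

In a study setting, the participant would adjust `alpha` until the image no longer reads as their own face; the threshold value serves as the self-identification score.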
