
Action Unit Detection by Learning the Deformation Coefficients of a 3D Morphable Model.

Affiliation

Media Integration and Communication Center, University of Florence, 50134 Firenze, Italy.

Publication

Sensors (Basel). 2021 Jan 15;21(2):589. doi: 10.3390/s21020589.

Abstract

Facial Action Units (AUs) correspond to the deformation/contraction of individual facial muscles or their combinations. As such, each AU affects just a small portion of the face, and the resulting deformations are in many cases asymmetric. Generating and analyzing AUs in 3D is particularly relevant for the potential applications it can enable. In this paper, we propose a solution for 3D AU detection and synthesis built on a newly defined 3D Morphable Model (3DMM) of the face. Differently from most 3DMMs in the literature, which mainly model global variations of the face and show limitations in adapting to local and asymmetric deformations, the proposed solution is specifically devised to cope with such difficult morphings. During a training phase, we learn the deformation coefficients that enable the 3DMM to deform to 3D target scans showing the neutral and expressive face of the same individual, thus decoupling expression deformations from identity deformations. These deformation coefficients are then used, on the one hand, to train an AU classifier; on the other hand, they can be applied to a 3D neutral scan to generate AU deformations in a subject-independent manner. The proposed approach for AU detection is validated on the Bosphorus dataset, reporting competitive results with respect to the state of the art, even in a challenging cross-dataset setting. We further show that the learned coefficients are general enough to synthesize realistic 3D face instances with AU activations.
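The coefficient-learning step described in the abstract can be sketched, under the simplifying assumption of a linear deformation basis (a standard 3DMM formulation; the paper's actual model and fitting procedure may differ), as a least-squares problem: given a neutral scan and an expressive scan of the same subject in dense correspondence, solve for the coefficients that best explain the expression-induced displacement. All names below are illustrative.

```python
import numpy as np

def fit_deformation_coefficients(neutral, target, components):
    """Least-squares fit of deformation coefficients alpha so that
    neutral + components @ alpha approximates the target scan.

    neutral, target: flattened vertex coordinates, shape (3N,)
    components: deformation basis of the 3DMM, shape (3N, K)
    Because neutral and target belong to the same subject, the fitted
    alpha captures expression (AU) deformation decoupled from identity.
    """
    delta = target - neutral  # expression-induced displacement
    alpha, *_ = np.linalg.lstsq(components, delta, rcond=None)
    return alpha

# Toy demonstration with synthetic data (not real face scans).
rng = np.random.default_rng(0)
n_verts, k = 300, 5
neutral = rng.normal(size=3 * n_verts)
components = rng.normal(size=(3 * n_verts, k))
true_alpha = np.array([0.5, -1.0, 0.0, 2.0, 0.3])
target = neutral + components @ true_alpha

alpha = fit_deformation_coefficients(neutral, target, components)
```

The recovered `alpha` would then serve a double role, as in the paper: as a feature vector fed to an AU classifier, or applied to a different subject's neutral scan (`other_neutral + components @ alpha`) to synthesize the same AU deformation in a subject-independent manner.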

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/642f/7830313/b77c496192da/sensors-21-00589-g001.jpg
