

Action Unit Detection by Learning the Deformation Coefficients of a 3D Morphable Model.

Affiliation

Media Integration and Communication Center, University of Florence, 50134 Firenze, Italy.

Publication

Sensors (Basel). 2021 Jan 15;21(2):589. doi: 10.3390/s21020589.

DOI: 10.3390/s21020589
PMID: 33467595
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7830313/
Abstract

Facial Action Units (AUs) correspond to the deformation/contraction of individual facial muscles or their combinations. As such, each AU affects just a small portion of the face, with deformations that are asymmetric in many cases. Generating and analyzing AUs in 3D is particularly relevant for the potential applications it can enable. In this paper, we propose a solution for 3D AU detection and synthesis by developing on a newly defined 3D Morphable Model (3DMM) of the face. Differently from most of the 3DMMs existing in the literature, which mainly model global variations of the face and show limitations in adapting to local and asymmetric deformations, the proposed solution is specifically devised to cope with such difficult morphings. During a training phase, the deformation coefficients are learned that enable the 3DMM to deform to 3D target scans showing neutral and facial expression of the same individual, thus decoupling expression from identity deformations. Then, such deformation coefficients are used, on the one hand, to train an AU classifier, on the other, they can be applied to a 3D neutral scan to generate AU deformations in a subject-independent manner. The proposed approach for AU detection is validated on the Bosphorus dataset, reporting competitive results with respect to the state-of-the-art, even in a challenging cross-dataset setting. We further show the learned coefficients are general enough to synthesize realistic 3D face instances with AUs activation.
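The core fitting step described above — learning coefficients that deform the 3DMM from a subject's neutral scan toward an expressive scan of the same subject, so the coefficients capture expression while identity stays in the neutral shape — can be illustrated with a minimal least-squares sketch. This is not the paper's implementation; the component matrix, dimensions, and synthetic data below are purely illustrative:

```python
import numpy as np

# Toy dimensions: N vertices, k deformation components (illustrative only).
rng = np.random.default_rng(0)
N, k = 500, 10

# C: deformation components of an assumed 3DMM, shape (3N, k).
C = rng.standard_normal((3 * N, k))

# Neutral and expressive scans of the same subject, in dense correspondence,
# flattened to 3N-vectors. Here the expressive scan is built synthetically
# so the ground-truth coefficients are known.
neutral = rng.standard_normal(3 * N)
true_w = rng.standard_normal(k)
expressive = neutral + C @ true_w  # expression = neutral + model deformation

# Learn the deformation coefficients by least squares:
#   w* = argmin_w || neutral + C w - expressive ||^2
w, *_ = np.linalg.lstsq(C, expressive - neutral, rcond=None)

# w now encodes the expression independently of identity: vectors like w can
# feed an AU classifier, or be applied to a different subject's neutral scan
# to synthesize the same deformation in a subject-independent way.
```

The key property this sketch mirrors is the decoupling: subtracting the neutral scan before fitting removes identity, leaving the coefficients to explain only the expression-induced deformation.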


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/642f/7830313/b77c496192da/sensors-21-00589-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/642f/7830313/a5c6391eaa63/sensors-21-00589-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/642f/7830313/62e39048ee5e/sensors-21-00589-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/642f/7830313/b5beed18173d/sensors-21-00589-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/642f/7830313/b1c1299b1e56/sensors-21-00589-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/642f/7830313/854b29f70ad8/sensors-21-00589-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/642f/7830313/1478041bf95b/sensors-21-00589-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/642f/7830313/728eb63e5f9c/sensors-21-00589-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/642f/7830313/3cb57768c632/sensors-21-00589-g009.jpg

Similar articles

1. Action Unit Detection by Learning the Deformation Coefficients of a 3D Morphable Model.
Sensors (Basel). 2021 Jan 15;21(2):589. doi: 10.3390/s21020589.
2. On Learning 3D Face Morphable Model from In-the-Wild Images.
IEEE Trans Pattern Anal Mach Intell. 2021 Jan;43(1):157-171. doi: 10.1109/TPAMI.2019.2927975. Epub 2020 Dec 4.
3. Reconstructing 3D Face of Infants in Social Interactions Using Morphable Models of Non-Infants.
Eurographics Workshop 3D Object Retr. 2022 Sep;2022. doi: 10.2312/3dor.20221178.
4. A Sparse and Locally Coherent Morphable Face Model for Dense Semantic Correspondence Across Heterogeneous 3D Faces.
IEEE Trans Pattern Anal Mach Intell. 2022 Oct;44(10):6667-6682. doi: 10.1109/TPAMI.2021.3090942. Epub 2022 Sep 14.
5. Facial Action Unit Representation Based on Self-Supervised Learning With Ensembled Priori Constraints.
IEEE Trans Image Process. 2024;33:5045-5059. doi: 10.1109/TIP.2024.3446250. Epub 2024 Sep 17.
6. Inequality-Constrained and Robust 3D Face Model Fitting.
Comput Vis ECCV. 2020;12354:433-449.
7. Large Scale 3D Morphable Models.
Int J Comput Vis. 2018;126(2):233-254. doi: 10.1007/s11263-017-1009-7. Epub 2017 Apr 8.
8. Facial Action Unit Detection using 3D Face Landmarks for Pain Detection.
Annu Int Conf IEEE Eng Med Biol Soc. 2023 Jul;2023:1-5. doi: 10.1109/EMBC40787.2023.10340059.
9. Comparison of trueness and repeatability of facial prosthesis design using a 3D morphable model approach, traditional computer-aided design methods, and conventional manual sculpting techniques.
J Prosthet Dent. 2025 Feb;133(2):598-607. doi: 10.1016/j.prosdent.2024.03.006. Epub 2024 Apr 14.
10. Learning Pain from Action Unit Combinations: A Weakly Supervised Approach via Multiple Instance Learning.
IEEE Trans Affect Comput. 2022 Jan-Mar;13(1):135-146. doi: 10.1109/taffc.2019.2949314. Epub 2019 Oct 30.

Cited by

1. Facial Expression Recognition with Geometric Scattering on 3D Point Clouds.
Sensors (Basel). 2022 Oct 29;22(21):8293. doi: 10.3390/s22218293.

References

1. A Sparse and Locally Coherent Morphable Face Model for Dense Semantic Correspondence Across Heterogeneous 3D Faces.
IEEE Trans Pattern Anal Mach Intell. 2022 Oct;44(10):6667-6682. doi: 10.1109/TPAMI.2021.3090942. Epub 2022 Sep 14.
2. Gaussian Process Morphable Models.
IEEE Trans Pattern Anal Mach Intell. 2018 Aug;40(8):1860-1873. doi: 10.1109/TPAMI.2017.2739743. Epub 2017 Aug 14.
3. Dense 3D Face Correspondence.
IEEE Trans Pattern Anal Mach Intell. 2018 Jul;40(7):1584-1598. doi: 10.1109/TPAMI.2017.2725279. Epub 2017 Jul 11.
4. FaceWarehouse: a 3D facial expression database for visual computing.
IEEE Trans Vis Comput Graph. 2014 Mar;20(3):413-25. doi: 10.1109/TVCG.2013.249.