Suppr 超能文献




Facial Muscle Activity Recognition with Reconfigurable Differential Stethoscope-Microphones.

Affiliations

German Research Center for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany.

Department of Computer Science, University of Kaiserslautern, 67663 Kaiserslautern, Germany.

Publication Information

Sensors (Basel). 2020 Aug 30;20(17):4904. doi: 10.3390/s20174904.

DOI: 10.3390/s20174904
PMID: 32872633
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7506891/
Abstract

Many human activities and states are related to the facial muscles' actions: from the expression of emotions, stress, and non-verbal communication, through health-related actions such as coughing and sneezing, to nutrition and drinking. In this work, we describe in detail the design and evaluation of a wearable system for facial muscle activity monitoring based on a reconfigurable differential array of stethoscope-microphones. In our system, six stethoscopes are placed at locations that could easily be integrated into the frame of smart glasses. The paper describes the detailed hardware design and the selection and adaptation of appropriate signal processing and machine learning methods. For the evaluation, we asked eight participants to imitate a set of facial actions, such as expressions of happiness, anger, surprise, sadness, upset, and disgust, and gestures, like kissing, winking, sticking the tongue out, and taking a pill. An evaluation of a complete data set of 2640 events has been performed with a 66% training and 33% testing split. Although we encountered high variability in the volunteers' expressions, our approach shows a recall of 55%, precision of 56%, and f1-score of 54% for the user-independent scenario (9% chance level). On a user-dependent basis, our worst result has an f1-score of 60% and our best an f1-score of 89%. We obtained a recall ≥60% for expressions like happiness, anger, kissing, sticking the tongue out, and neutral (Null-class).
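As a sanity check on the figures quoted in the abstract, the sketch below recomputes the split sizes, the chance level, and the harmonic-mean relationship between precision, recall, and f1. The class count of 11 (6 expressions + 4 gestures + Null) is an assumption inferred from the actions listed in the abstract, not a number stated explicitly by the authors.

```python
# Sanity-check the evaluation numbers quoted in the abstract.
# Assumption: an 11-class problem (happiness, anger, surprise, sadness,
# upset, disgust, kissing, winking, tongue-out, taking a pill, Null).

n_events = 2640
train_frac = 2 / 3                        # "66% training"
n_train = round(n_events * train_frac)    # 1760 events
n_test = n_events - n_train               # 880 events

n_classes = 11
chance_level = 1 / n_classes              # ~9.1%, matching the "9% chance level"

# f1 is the harmonic mean of precision and recall:
precision, recall = 0.56, 0.55
f1 = 2 * precision * recall / (precision + recall)  # ~0.555

print(n_train, n_test, round(chance_level, 3), round(f1, 3))
```

The recomputed f1 (~0.555) is close to the reported 54%; the small gap is expected, since the paper's f1 is presumably averaged per class rather than derived from the aggregate precision and recall.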


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0954/7506891/18e8c5eb596f/sensors-20-04904-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0954/7506891/c4302d11674b/sensors-20-04904-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0954/7506891/06304daed262/sensors-20-04904-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0954/7506891/1e174ac1681d/sensors-20-04904-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0954/7506891/0daa3281a7d9/sensors-20-04904-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0954/7506891/90a92d37fc24/sensors-20-04904-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0954/7506891/c58e0a312cef/sensors-20-04904-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0954/7506891/fa50f4f4b844/sensors-20-04904-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0954/7506891/929d61cfe497/sensors-20-04904-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0954/7506891/1ee2ae818627/sensors-20-04904-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0954/7506891/ceea9f4fb26f/sensors-20-04904-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0954/7506891/84e36215b8c2/sensors-20-04904-g012a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0954/7506891/b089262d25a1/sensors-20-04904-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0954/7506891/35b74d7825fc/sensors-20-04904-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0954/7506891/7359069efecc/sensors-20-04904-g015.jpg

Similar Articles

1. Facial Emotion Recognition and Expression in Parkinson's Disease: An Emotional Mirror Mechanism? PLoS One. 2017 Jan 9;12(1):e0169110. doi: 10.1371/journal.pone.0169110. eCollection 2017.
2. Enhanced facial expression recognition using 3D point sets and geometric deep learning. Med Biol Eng Comput. 2021 Jun;59(6):1235-1244. doi: 10.1007/s11517-021-02383-1. Epub 2021 May 24.
3. Perceptual learning and recognition confusion reveal the underlying relationships among the six basic emotions. Cogn Emot. 2019 Jun;33(4):754-767. doi: 10.1080/02699931.2018.1491831. Epub 2018 Jun 30.
4. Portable Facial Expression System Based on EMG Sensors and Machine Learning Models. Sensors (Basel). 2024 May 23;24(11):3350. doi: 10.3390/s24113350.
5. Towards smart glasses for facial expression recognition using OMG and machine learning. Sci Rep. 2023 Sep 25;13(1):16043. doi: 10.1038/s41598-023-43135-5.
6. Relative preservation of the recognition of positive facial expression "happiness" in Alzheimer disease. Int Psychogeriatr. 2013 Jan;25(1):105-10. doi: 10.1017/S1041610212001482. Epub 2012 Aug 24.
7. Cerebral processing of facial emotions in bipolar I and II disorders: An event-related potential study. J Affect Disord. 2018 Aug 15;236:37-44. doi: 10.1016/j.jad.2018.04.098. Epub 2018 Apr 22.
8. Facial mimicry and emotional contagion to dynamic emotional facial expressions and their influence on decoding accuracy. Int J Psychophysiol. 2001 Mar;40(2):129-41. doi: 10.1016/s0167-8760(00)00161-6.
9. Selective Impairment of Basic Emotion Recognition in People with Autism: Discrimination Thresholds for Recognition of Facial Expressions of Varying Intensities. J Autism Dev Disord. 2018 Jun;48(6):1886-1894. doi: 10.1007/s10803-017-3428-2.
