
Multimodal driver emotion recognition using motor activity and facial expressions.

Author information

Espino-Salinas Carlos H, Luna-García Huizilopoztli, Celaya-Padilla José M, Barría-Huidobro Cristian, Gamboa Rosales Nadia Karina, Rondon David, Villalba-Condori Klinge Orlando

Affiliations

Laboratorio de Tecnologías Interactivas y Experiencia de Usuario, Universidad Autónoma de Zacatecas, Unidad Académica de Ingeniería Eléctrica, Zacatecas, Mexico.

Centro de Investigación en Ciberseguridad, Universidad Mayor de Chile, Providencia, Chile.

Publication information

Front Artif Intell. 2024 Nov 27;7:1467051. doi: 10.3389/frai.2024.1467051. eCollection 2024.

Abstract

Driving performance can be significantly impaired when a person experiences intense emotions behind the wheel. Research shows that emotions such as anger, sadness, agitation, and joy increase the risk of traffic accidents. This study introduces a methodology for recognizing four specific emotions using an intelligent model that processes and analyzes motor-activity and driver-behavior signals, generated by interactions with basic driving elements, together with facial-geometry images captured during emotion induction. The research applies machine learning to identify the motor-activity signals most relevant to emotion recognition. In addition, a pre-trained Convolutional Neural Network (CNN) extracts, from the images, probability vectors over the four emotions under investigation. These data sources are integrated through a one-dimensional network for emotion classification. The main contribution of this research is a multimodal intelligent model that combines motor-activity signals and facial-geometry images to accurately recognize four specific emotions (anger, sadness, agitation, and joy) in drivers, achieving 96.0% accuracy in a simulated environment. The study confirmed a significant relationship between drivers' motor activity, behavior, facial geometry, and the induced emotions.


Graphical abstract: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/519b/11631879/cf40ee892805/frai-07-1467051-g0001.jpg
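
The abstract describes a late-fusion architecture: a pre-trained CNN converts each facial-geometry image into a probability vector over the four emotions, which is then concatenated with the selected motor-activity signals and classified by a one-dimensional network. As a minimal sketch only, the PyTorch code below illustrates that idea; the `MultimodalEmotionNet` class, the ResNet-18 backbone, and the choice of 16 motor-activity features are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of the late-fusion idea (hypothetical; not the authors' code).
import torch
import torch.nn as nn
from torchvision import models

EMOTIONS = ["anger", "sadness", "agitation", "joy"]

class MultimodalEmotionNet(nn.Module):
    """Fuses CNN emotion probabilities with motor-activity features."""

    def __init__(self, n_motor_features: int = 16):  # 16 is an assumed feature count
        super().__init__()
        # Pre-trained CNN backbone with a 4-emotion head (stand-in for the paper's model).
        self.cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.cnn.fc = nn.Linear(self.cnn.fc.in_features, len(EMOTIONS))
        # One-dimensional fusion network over [CNN probabilities ++ motor features].
        self.fusion = nn.Sequential(
            nn.Linear(len(EMOTIONS) + n_motor_features, 64),
            nn.ReLU(),
            nn.Linear(64, len(EMOTIONS)),
        )

    def forward(self, face_img: torch.Tensor, motor: torch.Tensor) -> torch.Tensor:
        probs = torch.softmax(self.cnn(face_img), dim=1)  # probability vector per image
        fused = torch.cat([probs, motor], dim=1)          # concatenated 1-D feature vector
        return self.fusion(fused)                         # logits over the four emotions

# Shape check only; real inputs would be face crops and selected driving signals.
model = MultimodalEmotionNet(n_motor_features=16)
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 16))
print(logits.shape)  # torch.Size([2, 4])
```

Whether the backbone is frozen, how the motor-activity signals are selected, and the exact fusion layers are choices the full paper would specify; this sketch only mirrors the data flow the abstract outlines.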
