
Drivers' Comprehensive Emotion Recognition Based on HAM

Affiliations

School of Mechanical and Electrical Engineering, Chengdu University of Technology, Chengdu 610059, China.

China Unicom Digital Technology Co., Ltd. Hubei Branch, Wuhan 430015, China.

Publication

Sensors (Basel). 2023 Oct 7;23(19):8293. doi: 10.3390/s23198293.

DOI: 10.3390/s23198293
PMID: 37837124
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10574905/
Abstract

Negative emotions may lead drivers into dangerous driving behaviors, which in turn cause serious traffic accidents. However, most current studies of driver emotion use a single modality, such as EEG, eye tracking, or driving data. In complex situations, a single modality may fail to capture a driver's complete emotional characteristics and offers poor robustness. In recent years, some studies have applied multimodal approaches to monitor single emotions such as driver fatigue or anger, yet in real driving environments, negative emotions such as sadness, anger, fear, and fatigue all have a significant impact on driving safety. Few studies, however, have used multimodal data to accurately predict a driver's comprehensive emotional state. Building on the multimodal idea, this paper therefore aims to improve comprehensive driver emotion recognition. By combining three modalities, a driver's voice, facial image, and video sequence, a six-class driver-emotion classification is performed: sadness, anger, fear, fatigue, happiness, and emotional neutrality. To accurately identify drivers' negative emotions and thereby improve driving safety, this paper proposes a multimodal fusion framework based on CNN + Bi-LSTM + HAM. The framework fuses feature vectors of driver audio, facial expressions, and video sequences for comprehensive driver emotion recognition. Experiments demonstrate the effectiveness of the proposed multimodal data for driver emotion recognition, with recognition accuracy reaching 85.52%. The method's validity is further verified through comparative experiments and evaluation metrics such as accuracy and F1 score.
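The pipeline the abstract describes (modality-specific encoders, attention-based fusion, a six-way classifier head) can be sketched at a high level. Everything below is illustrative: the 128-dim feature size, the scoring rule inside `attention_fuse`, and the random untrained classifier head are assumptions for demonstration, not the paper's actual CNN/Bi-LSTM/HAM architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Stand-ins for the paper's branch encoders: CNN features for audio and
# face, Bi-LSTM features for the video sequence. The 128-dim size is an
# assumption for illustration, not the paper's actual dimensionality.
audio_feat = rng.standard_normal(128)   # from driver speech
face_feat  = rng.standard_normal(128)   # from a facial image
video_feat = rng.standard_normal(128)   # from a video clip

def attention_fuse(feats):
    """Toy stand-in for the hybrid attention module (HAM): score each
    modality vector, softmax the scores into attention weights, and
    return the attention-weighted sum of the modality features."""
    stacked = np.stack(feats)        # (3, 128)
    scores = stacked.mean(axis=1)    # one scalar per modality (assumed scoring rule)
    weights = softmax(scores)        # attention weights over the 3 modalities
    return weights @ stacked         # (128,) fused feature vector

EMOTIONS = ["sadness", "anger", "fear", "fatigue", "happiness", "neutral"]
W = rng.standard_normal((128, len(EMOTIONS)))  # untrained linear head (illustrative)

fused = attention_fuse([audio_feat, face_feat, video_feat])
probs = softmax(fused @ W)           # six-class probability distribution
print({e: round(float(p), 3) for e, p in zip(EMOTIONS, probs)})
```

In the actual paper the attention weights would be learned jointly with the encoders and the head trained on labeled driver data; this sketch only shows the shape of the fusion step, where per-modality vectors are reduced to a single fused vector before classification.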

[Figures 1-20 (sensors-23-08293-g001 through g020): images available via the PMC full text above.]

Similar Articles

1. Drivers' Comprehensive Emotion Recognition Based on HAM.
   Sensors (Basel). 2023 Oct 7;23(19):8293. doi: 10.3390/s23198293.
2. Research on driver's anger recognition method based on multimodal data fusion.
   Traffic Inj Prev. 2024;25(3):354-363. doi: 10.1080/15389588.2023.2297658. Epub 2024 Feb 12.
3. Driver's Visual Attention Characteristics and Their Emotional Influencing Mechanism under Different Cognitive Tasks.
   Int J Environ Res Public Health. 2022 Apr 21;19(9):5059. doi: 10.3390/ijerph19095059.
4. DRER: Deep Learning-Based Driver's Real Emotion Recognizer.
   Sensors (Basel). 2021 Mar 19;21(6):2166. doi: 10.3390/s21062166.
5. How does a driver feel behind the wheel? An exploratory study of drivers' emotions and the effect of their sociodemographic background.
   Accid Anal Prev. 2023 Apr;183:106974. doi: 10.1016/j.aap.2023.106974. Epub 2023 Jan 31.
6. A Hybrid Model for Driver Emotion Detection Using Feature Fusion Approach.
   Int J Environ Res Public Health. 2022 Mar 6;19(5):3085. doi: 10.3390/ijerph19053085.
7. Emotional states of drivers and the impact on speed, acceleration and traffic violations - a simulator study.
   Accid Anal Prev. 2014 Sep;70:282-92. doi: 10.1016/j.aap.2014.04.010. Epub 2014 May 15.
8. The restless mind while driving: drivers' thoughts behind the wheel.
   Accid Anal Prev. 2015 Mar;76:159-65. doi: 10.1016/j.aap.2015.01.005. Epub 2015 Feb 16.
9. Multimodal Data Collection System for Driver Emotion Recognition Based on Self-Reporting in Real-World Driving.
   Sensors (Basel). 2022 Jun 10;22(12):4402. doi: 10.3390/s22124402.
10. The correlation between drivers' road familiarity and glance behavior using real vehicle experimental data and mathematical models.
   Traffic Inj Prev. 2024;25(5):705-713. doi: 10.1080/15389588.2024.2324915. Epub 2024 May 6.

Cited By

1. A Comprehensive Review of Unobtrusive Biosensing in Intelligent Vehicles: Sensors, Algorithms, and Integration Challenges.
   Bioengineering (Basel). 2025 Jun 18;12(6):669. doi: 10.3390/bioengineering12060669.
2. Multimodal driver emotion recognition using motor activity and facial expressions.
   Front Artif Intell. 2024 Nov 27;7:1467051. doi: 10.3389/frai.2024.1467051. eCollection 2024.

References

1. EEG-Based Emotion Recognition Using Spatial-Temporal Graph Convolutional LSTM With Attention Mechanism.
   IEEE J Biomed Health Inform. 2022 Nov;26(11):5406-5417. doi: 10.1109/JBHI.2022.3198688. Epub 2022 Nov 10.
2. Deep Learning-Based Approach for Emotion Recognition Using Electroencephalography (EEG) Signals Using Bi-Directional Long Short-Term Memory (Bi-LSTM).
   Sensors (Basel). 2022 Apr 13;22(8):2976. doi: 10.3390/s22082976.
3. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English.
   PLoS One. 2018 May 16;13(5):e0196391. doi: 10.1371/journal.pone.0196391. eCollection 2018.
4. DREAMER: A Database for Emotion Recognition Through EEG and ECG Signals From Wireless Low-cost Off-the-Shelf Devices.
   IEEE J Biomed Health Inform. 2018 Jan;22(1):98-107. doi: 10.1109/JBHI.2017.2688239. Epub 2017 Mar 27.
5. The multidimensional driving style inventory--scale construct and validation.
   Accid Anal Prev. 2004 May;36(3):323-32. doi: 10.1016/S0001-4575(03)00010-1.
6. Constants across cultures in the face and emotion.
   J Pers Soc Psychol. 1971 Feb;17(2):124-9. doi: 10.1037/h0030377.