

ADABase: A Multimodal Dataset for Cognitive Load Estimation

Affiliations

Department Digital Health Systems, Fraunhofer IIS, Fraunhofer Institute for Integrated Circuits IIS, 91058 Erlangen, Germany.

Machine Learning and Data Analytics Lab (MaD Lab), Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-University Erlangen Nuremberg, 91052 Erlangen, Germany.

Publication

Sensors (Basel). 2022 Dec 28;23(1):340. doi: 10.3390/s23010340.

DOI: 10.3390/s23010340
PMID: 36616939
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9823940/
Abstract

Driver monitoring systems play an important role in lower- to mid-level autonomous vehicles. Our work focuses on the detection of cognitive load as a component of driver-state estimation to improve traffic safety. By inducing single and dual-task workloads of increasing intensity on 51 subjects, while continuously measuring signals from multiple modalities, such as ECG, EDA, EMG, PPG, respiration rate, skin temperature and eye-tracker data, as well as action units extracted from facial videos, metrics like reaction time, and feedback from questionnaires, we create ADABase (Autonomous Driving Cognitive Load Assessment Database). As a reference method to induce cognitive load, we use the well-established n-back test, in addition to our novel simulator-based k-drive test, motivated by real-world semi-autonomous vehicles. We extract expert features from all measurements and find significant changes in multiple modalities. Finally, we train and evaluate machine learning algorithms using single and multimodal inputs to distinguish cognitive load levels. We carefully evaluate model behavior and study feature importance. In summary, we introduce a novel cognitive load test, create a cognitive load database, validate changes using statistical tests, introduce novel classification and regression tasks for machine learning, and train and evaluate machine learning models.
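The n-back reference test mentioned in the abstract follows a simple rule: the subject must respond whenever the current stimulus matches the one presented n steps earlier. A minimal sketch of that target logic, with hypothetical function names (not the authors' implementation or protocol):

```python
# Hypothetical sketch of n-back target detection and scoring.
# Not the paper's code; illustrates only the test's core rule.
from typing import Sequence


def nback_targets(stimuli: Sequence[str], n: int) -> list[bool]:
    """Mark positions where the stimulus matches the one n steps back."""
    return [i >= n and stimuli[i] == stimuli[i - n] for i in range(len(stimuli))]


def score_responses(stimuli: Sequence[str], responses: Sequence[bool], n: int) -> float:
    """Fraction of positions where the subject's yes/no response
    agrees with the ground-truth target marking (simple accuracy)."""
    targets = nback_targets(stimuli, n)
    correct = sum(r == t for r, t in zip(responses, targets))
    return correct / len(stimuli)
```

Raising n increases working-memory demand, which is why graded n-back levels are a standard way to induce increasing cognitive load.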

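The pipeline the abstract describes, expert features per modality followed by single- or multimodal classifiers of load level, can be illustrated with two textbook HRV features (SDNN, RMSSD) computed from ECG RR intervals and a toy nearest-centroid classifier. All names, values, and the classifier choice here are illustrative assumptions, not the paper's actual feature set or models:

```python
# Illustrative sketch only: two standard HRV "expert features"
# (SDNN, RMSSD) over RR intervals in milliseconds, plus a toy
# nearest-centroid classifier over per-window feature vectors.
# Textbook definitions, not the authors' pipeline.
import math
from statistics import mean


def sdnn(rr_ms):
    """Population standard deviation of RR intervals (SDNN)."""
    m = mean(rr_ms)
    return math.sqrt(mean((x - m) ** 2 for x in rr_ms))


def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences (RMSSD)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(mean(d * d for d in diffs))


def nearest_centroid(train, query):
    """train maps load level -> list of feature vectors; returns the
    level whose per-dimension mean (centroid) is closest to query."""
    centroids = {
        level: [mean(col) for col in zip(*vecs)]
        for level, vecs in train.items()
    }
    return min(centroids, key=lambda lv: math.dist(centroids[lv], query))
```

Lower HRV typically accompanies higher cognitive load, so features like these carry discriminative signal; the paper's actual models and the full multimodal feature set are described in the article itself.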

Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd98/9823940/091801fa109c/sensors-23-00340-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd98/9823940/5dc4af6e6e79/sensors-23-00340-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd98/9823940/edd43f589068/sensors-23-00340-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd98/9823940/183417221908/sensors-23-00340-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd98/9823940/4b7d79d2d945/sensors-23-00340-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd98/9823940/9f7d7478f4d6/sensors-23-00340-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd98/9823940/4d8651ba0bd0/sensors-23-00340-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd98/9823940/abf450ae3348/sensors-23-00340-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd98/9823940/c5c3b785c017/sensors-23-00340-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd98/9823940/6eb4b93fe40a/sensors-23-00340-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd98/9823940/c801d680e7d3/sensors-23-00340-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd98/9823940/d5eec6be98cb/sensors-23-00340-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd98/9823940/c4b4b90d17d4/sensors-23-00340-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd98/9823940/561d85e30cb0/sensors-23-00340-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd98/9823940/a07b5e38c3b5/sensors-23-00340-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd98/9823940/a757574dcb95/sensors-23-00340-g0A1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd98/9823940/fc84544b402c/sensors-23-00340-g0A2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd98/9823940/1134c3bf3c2b/sensors-23-00340-g0A3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd98/9823940/e36cd617c9c1/sensors-23-00340-g0A4.jpg

Similar articles

1. ADABase: A Multimodal Dataset for Cognitive Load Estimation. Sensors (Basel). 2022 Dec 28;23(1):340. doi: 10.3390/s23010340.
2. Measurement and identification of mental workload during simulated computer tasks with multimodal methods and machine learning. Ergonomics. 2020 Jul;63(7):896-908. doi: 10.1080/00140139.2020.1759699. Epub 2020 May 7.
3. Machine-Learning Based Monitoring of Cognitive Workload in Rescue Missions With Drones. IEEE J Biomed Health Inform. 2022 Sep;26(9):4751-4762. doi: 10.1109/JBHI.2022.3186625. Epub 2022 Sep 9.
4. Real-Time Cognitive Workload Monitoring Based on Machine Learning Using Physiological Signals in Rescue Missions. Annu Int Conf IEEE Eng Med Biol Soc. 2019 Jul;2019:3779-3785. doi: 10.1109/EMBC.2019.8857501.
5. Feature Selection Model based on EEG Signals for Assessing the Cognitive Workload in Drivers. Sensors (Basel). 2020 Oct 17;20(20):5881. doi: 10.3390/s20205881.
6. Vision-Based Driver's Cognitive Load Classification Considering Eye Movement Using Machine Learning and Deep Learning. Sensors (Basel). 2021 Nov 30;21(23):8019. doi: 10.3390/s21238019.
7. Effect of cognitive load on drivers' state and task performance during automated driving: Introducing a novel method for determining stabilisation time following take-over of control. Accid Anal Prev. 2021 Mar;151:105967. doi: 10.1016/j.aap.2020.105967. Epub 2021 Jan 11.
8. Characterisation of Cognitive Load Using Machine Learning Classifiers of Electroencephalogram Data. Sensors (Basel). 2023 Oct 17;23(20):8528. doi: 10.3390/s23208528.
9. Assessment of Drivers' Mental Workload by Multimodal Measures during Auditory-Based Dual-Task Driving Scenarios. Sensors (Basel). 2024 Feb 5;24(3):1041. doi: 10.3390/s24031041.
10. The Effect of Cognitive Load on Auditory Susceptibility During Automated Driving. Hum Factors. 2022 Nov;64(7):1195-1209. doi: 10.1177/0018720821998850. Epub 2021 Mar 11.

Cited by

1. A deep learning framework for virtual continuous glucose monitoring and glucose prediction based on life-log data. Sci Rep. 2025 May 10;15(1):16290. doi: 10.1038/s41598-025-01367-7.
2. Eye-Based Recognition of User Traits and States-A Systematic State-of-the-Art Review. J Eye Mov Res. 2025 Apr 1;18(2):8. doi: 10.3390/jemr18020008. eCollection 2025 Apr.
3. A Dataset on Takeover during Distracted L2 Automated Driving. Sci Data. 2025 Mar 31;12(1):539. doi: 10.1038/s41597-025-04781-8.
4. A cross-attention swin transformer network for EEG-based subject-independent cognitive load assessment. Cogn Neurodyn. 2024 Dec;18(6):3805-3819. doi: 10.1007/s11571-024-10160-7. Epub 2024 Aug 20.
5. Accuracy of a continuous glucose monitoring system applied before, during, and after an intense leg-squat session with low- and high-carbohydrate availability in young adults without diabetes. Eur J Appl Physiol. 2024 Dec;124(12):3557-3569. doi: 10.1007/s00421-024-05557-5. Epub 2024 Jul 22.
6. Machine learning-based cognitive load prediction model for AR-HUD to improve OSH of professional drivers. Front Public Health. 2023 Aug 3;11:1195961. doi: 10.3389/fpubh.2023.1195961. eCollection 2023.
7. Food Choices after Cognitive Load: An Affective Computing Approach. Sensors (Basel). 2023 Jul 21;23(14):6597. doi: 10.3390/s23146597.

References

1. Py-Feat: Python Facial Expression Analysis Toolbox. Affect Sci. 2023 Aug 8;4(4):781-796. doi: 10.1007/s42761-023-00191-4. eCollection 2023 Dec.
2. Optimizing the usage of pupillary based indicators for cognitive workload. J Eye Mov Res. 2021 Jun 11;14(2). doi: 10.16910/jemr.14.2.4. eCollection 2021.
3. Heart Rate Variability in Psychology: A Review of HRV Indices and an Analysis Tutorial. Sensors (Basel). 2021 Jun 9;21(12):3998. doi: 10.3390/s21123998.
4. Fixation duration and the learning process: an eye tracking study with subtitled videos. J Eye Mov Res. 2020 Aug 16;13(6). doi: 10.16910/jemr.13.6.1.
5. Review of Eye Tracking Metrics Involved in Emotional and Cognitive Processes. IEEE Rev Biomed Eng. 2023;16:260-277. doi: 10.1109/RBME.2021.3066072. Epub 2023 Jan 5.
6. WAUC: A Multi-Modal Database for Mental Workload Assessment Under Physical Activity. Front Neurosci. 2020 Dec 1;14:549524. doi: 10.3389/fnins.2020.549524. eCollection 2020.
7. Heart Rate Variability (HRV) and Pulse Rate Variability (PRV) for the Assessment of Autonomic Responses. Front Physiol. 2020 Jul 23;11:779. doi: 10.3389/fphys.2020.00779. eCollection 2020.
8. Overloaded and at Work: Investigating the Effect of Cognitive Workload on Assembly Task Performance. Hum Factors. 2021 Aug;63(5):813-820. doi: 10.1177/0018720820929928. Epub 2020 Jun 12.
9. The uulmMAC Database-A Multimodal Affective Corpus for Affective Computing in Human-Computer Interaction. Sensors (Basel). 2020 Apr 17;20(8):2308. doi: 10.3390/s20082308.
10. NASA RTLX as a Novel Assessment for Determining Cognitive Load and User Acceptance of Expert and User-Based Evaluation Methods Exemplified Through a mHealth Diabetes Self-Management Application Evaluation. Stud Health Technol Inform. 2019;261:185-190.