
Automated Assessment of Driver Distraction Using Multimodal Wearable Data and Squeeze-Excitation Networks.

Affiliation

Department of Biomedical Engineering, IIT Hyderabad, Telangana, India.

Publication information

Stud Health Technol Inform. 2024 Aug 22;316:951-952. doi: 10.3233/SHTI240568.

Abstract

Assessment of driver distraction, which is crucial for road safety, can benefit from multimodal physiological signals. However, fusing heterogeneous data is highly challenging. In this study, we address this challenge by exploring a one-dimensional convolutional neural network (CNN) with squeeze-and-excitation blocks (SEcNN) on multimodal data. Electrocardiogram (256 Hz) and respiration (128 Hz) signals are recorded with textile electrodes from subjects (N=10) driving in three scenarios: normal, texting, and calling. The multimodal data are preprocessed and fed to the SEcNN to identify driver distraction. Experiments are performed using leave-one-subject-out cross-validation. The proposed approach is able to discriminate driver distraction, with the SEcNN yielding an average accuracy of 57.03% and an average F1 score of 54.90% for shorter segments. Thus, the proposed approach using wearable shirts could be useful for non-intrusive monitoring in real-world driving scenarios.
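The abstract describes the model only at a high level (a 1D CNN with squeeze-and-excitation applied to fused ECG and respiration segments). The sketch below illustrates the general squeeze-and-excitation mechanism on 1D convolutional features; the layer sizes, segment length, and resampling choice are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class SEBlock1d(nn.Module):
    """Squeeze-and-excitation block for 1D convolutional feature maps."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool1d(1)   # global average over the time axis
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        w = self.squeeze(x).squeeze(-1)          # (batch, channels)
        w = self.excite(w).unsqueeze(-1)         # (batch, channels, 1)
        return x * w                             # channel-wise recalibration

class SEcNNSketch(nn.Module):
    """Toy 1D CNN + SE classifier over fused ECG/respiration segments (assumed layout)."""
    def __init__(self, in_channels: int = 2, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32), nn.ReLU(inplace=True), nn.MaxPool1d(4),
            SEBlock1d(32),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64), nn.ReLU(inplace=True), nn.MaxPool1d(4),
            SEBlock1d(64),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, n_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Hypothetical usage: 5-second segments with ECG resampled to the respiration
# rate (128 Hz) and stacked as two input channels -> (batch, 2, 640).
x = torch.randn(8, 2, 640)
logits = SEcNNSketch()(x)   # (8, 3): normal, texting, calling
```

Leave-one-subject-out cross-validation, as used in the paper, would train one such model per held-out subject and average the per-subject accuracy and F1 scores.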
