Zheng Wei-Long, Lu Bao-Liang
Center for Brain-like Computing and Machine Intelligence, Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China.
J Neural Eng. 2017 Apr;14(2):026017. doi: 10.1088/1741-2552/aa5a98. Epub 2017 Jan 19.
Covert aspects of ongoing user mental states provide key contextual information for user-aware human-computer interaction. In this paper, we focus on the problem of estimating the vigilance of users using EEG and EOG signals.
The PERCLOS index, used as the vigilance annotation, is obtained from eye-tracking glasses. To improve the feasibility and wearability of vigilance estimation devices for real-world applications, we adopt a novel electrode placement for forehead EOG and extract various eye movement features, which contain the principal information of traditional EOG. We explore the effects of EEG from different brain areas and combine EEG and forehead EOG to leverage their complementary characteristics for vigilance estimation. Considering that the vigilance of users is a dynamically changing process, because the intrinsic mental states of users evolve over time, we introduce continuous conditional neural field and continuous conditional random field models to capture this dynamic temporal dependency.
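PERCLOS is conventionally defined as the percentage of time the eyes are (nearly) closed over a window. A minimal sketch of that computation, assuming per-frame eye-openness values in [0, 1] from the eye-tracking glasses (the function name and threshold here are illustrative, not the paper's exact pipeline):

```python
import numpy as np

def perclos(eye_openness, closed_threshold=0.2):
    """Fraction of frames in which the eyes count as closed.

    eye_openness: per-frame openness values in [0, 1], e.g. from
    eye-tracking glasses. Frames below `closed_threshold` are
    treated as closed. Threshold is an illustrative assumption.
    """
    eye_openness = np.asarray(eye_openness, dtype=float)
    return float(np.mean(eye_openness < closed_threshold))

# Example: 2 of 8 frames fall below the threshold -> PERCLOS = 0.25
samples = [0.9, 0.8, 0.1, 0.05, 0.7, 0.9, 0.85, 0.8]
print(perclos(samples))  # 0.25
```

Higher PERCLOS values indicate drowsier states, which is why the index serves as a continuous vigilance label.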
We propose a multimodal approach to estimating vigilance by combining EEG and forehead EOG and incorporating the temporal dependency of vigilance into model training. The experimental results demonstrate that modality fusion improves performance over either single modality, that EOG and EEG contain complementary information for vigilance estimation, and that the temporal dependency-based models further enhance estimation performance. From the experimental results, we observe that theta and alpha frequency activities increase, while gamma frequency activities decrease, in drowsy states compared with awake states.
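The band-activity observation above rests on comparing EEG spectral power in the theta (roughly 4-8 Hz), alpha (8-13 Hz), and gamma (30-50 Hz) ranges. A minimal periodogram-based sketch of such a band-power feature, on a synthetic signal (this is not the paper's exact feature-extraction pipeline; the sampling rate and band edges are illustrative assumptions):

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean spectral power of `signal` within a frequency band (Hz).

    A simple periodogram estimate; `fs` is the sampling rate and
    `band` is an (f_low, f_high) tuple.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return float(psd[mask].mean())

fs = 200  # Hz, illustrative sampling rate
t = np.arange(0, 2, 1.0 / fs)
# Synthetic drowsy-like trace: strong 6 Hz (theta), weak 40 Hz (gamma)
x = 2.0 * np.sin(2 * np.pi * 6 * t) + 0.2 * np.sin(2 * np.pi * 40 * t)
theta = band_power(x, fs, (4, 8))
gamma = band_power(x, fs, (30, 50))
print(theta > gamma)  # True for this synthetic signal
```

In practice such band powers, computed per channel and per time window, form the EEG feature vectors fed to the regression models.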
The forehead setup allows for the simultaneous collection of EEG and EOG and achieves comparable performance using only four shared electrodes, relative to the temporal and posterior sites.