Center for Intelligent & Interactive Robotics, Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, Seoul 02792, Korea.
Department of Otolaryngology-Head and Neck Surgery, College of Medicine, Hanyang University, Seoul 04763, Korea.
Sensors (Basel). 2021 Jan 13;21(2):531. doi: 10.3390/s21020531.
Auditory attention detection (AAD) is the tracking of the sound source to which a listener is attending, based on neural signals. Despite expectations for the applicability of AAD in real life, most AAD research has been conducted on pre-recorded electroencephalograms (EEGs), which is far from an online implementation. In the present study, we propose an online AAD model and implement it on streaming EEG. The proposed model was devised by introducing a sliding window into the linear decoder model and was simulated on two datasets obtained from separate experiments to evaluate its feasibility. After the simulation, the online model was constructed and evaluated on the streaming EEG of an individual acquired during a dichotic listening experiment. The model was able to detect the transient direction of a participant's attention on the order of one second during the experiment and achieved an average detection accuracy of up to 70%. We expect that the proposed online model could be applied to the development of adaptive hearing aids or neurofeedback training for auditory attention and speech perception.
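The abstract does not include code; the following Python sketch is only a rough illustration of the sliding-window linear-decoder idea it describes. A pre-trained backward (stimulus-reconstruction) decoder is applied to successive EEG windows, and each window is labeled by whichever speech envelope (left or right ear) correlates best with the reconstruction. All names (`detect_attention`, `decoder`, `win_len`, `step`) are hypothetical, and a practical decoder would typically also span multiple time lags rather than being purely spatial.

```python
import numpy as np

def detect_attention(eeg, env_left, env_right, decoder, win_len, step):
    """Sliding-window stimulus-reconstruction AAD (illustrative sketch).

    eeg       : (n_samples, n_channels) streaming EEG buffer
    env_left  : (n_samples,) speech envelope of the left-ear stream
    env_right : (n_samples,) speech envelope of the right-ear stream
    decoder   : (n_channels,) pre-trained linear backward decoder weights
    win_len   : window length in samples (e.g., ~1 s of data)
    step      : hop size in samples between consecutive windows
    """
    decisions = []
    for start in range(0, eeg.shape[0] - win_len + 1, step):
        win = slice(start, start + win_len)
        # Reconstruct an estimate of the attended envelope from this EEG window.
        recon = eeg[win] @ decoder
        # Correlate the reconstruction with each candidate speech envelope.
        r_left = np.corrcoef(recon, env_left[win])[0, 1]
        r_right = np.corrcoef(recon, env_right[win])[0, 1]
        # The better-correlated envelope is taken as the attended direction.
        decisions.append('left' if r_left > r_right else 'right')
    return decisions
```

In an online setting, the same per-window decision rule would be applied to the most recent buffer of streaming EEG as new samples arrive, rather than iterating over a stored recording as in this offline-style loop.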