Dasenbrock Steffen, Blum Sarah, Maanen Paul, Debener Stefan, Hohmann Volker, Kayser Hendrik
Auditory Signal Processing and Hearing Devices, Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany.
Cluster of Excellence "Hearing4all", University of Oldenburg, Oldenburg, Germany.
Front Neurosci. 2022 Sep 1;16:904003. doi: 10.3389/fnins.2022.904003. eCollection 2022.
Recent advancements in neuroscientific research and miniaturized ear-electroencephalography (EEG) technologies have led to the idea of employing brain signals as additional input to hearing aid algorithms. The information acquired through EEG could potentially be used to control the audio signal processing of the hearing aid or to monitor communication-related physiological factors. In previous work, we implemented a research platform to develop methods that utilize EEG in combination with a hearing device. The setup combines currently available mobile EEG hardware and the so-called Portable Hearing Laboratory (PHL), which can run a complete hearing aid. Audio and EEG data are synchronized using the Lab Streaming Layer (LSL) framework. In this study, we evaluated the setup in three scenarios, focusing particularly on the alignment of audio and EEG data. In Scenario I, we measured the latency between software event markers and actual audio playback of the PHL. In Scenario II, we measured the latency between an analog input signal and the sampled data stream of the EEG system. In Scenario III, we measured the latency in the whole setup as it would be used in a real EEG experiment. The results of Scenario I showed a jitter (standard deviation of trial latencies) of below 0.1 ms. The jitter in Scenarios II and III was around 3 ms in both cases. The results suggest that the increased jitter compared to Scenario I can be attributed to the EEG system. Overall, the findings show that the measurement setup can present acoustic stimuli with accurate timing while generating LSL data streams over multiple hours of playback. Further, the setup can capture the audio and EEG LSL streams with sufficient temporal accuracy to extract event-related potentials from EEG signals. We conclude that our setup is suitable for studying closed-loop EEG and audio applications for future hearing aids.
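As an illustration of the jitter metric reported in the abstract (the standard deviation of per-trial latencies between event markers and measured stimulus onsets), here is a minimal Python sketch. The function name and the numerical values are made up for illustration and are not taken from the study.

```python
# Hypothetical sketch of the jitter metric described in the abstract:
# latency per trial = measured onset time minus event-marker time,
# jitter = standard deviation of those per-trial latencies.
from statistics import mean, stdev

def latency_stats_ms(marker_times_ms, onset_times_ms):
    """Return (mean latency, jitter) in milliseconds for paired trials."""
    latencies = [onset - marker
                 for marker, onset in zip(marker_times_ms, onset_times_ms)]
    return mean(latencies), stdev(latencies)

# Illustrative made-up values: a constant ~5 ms delay with small spread.
markers = [0.0, 1000.0, 2000.0, 3000.0]
onsets = [5.1, 1004.9, 2005.2, 3004.8]
avg_ms, jitter_ms = latency_stats_ms(markers, onsets)
# avg_ms ≈ 5.0 ms (constant offset), jitter_ms ≈ 0.18 ms (spread)
```

A constant mean latency can be compensated by a fixed offset during analysis; the jitter, by contrast, limits the temporal precision of event-related potential extraction, which is why the study reports it per scenario.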