

Fast EEG-Based Decoding of the Directional Focus of Auditory Attention Using Common Spatial Patterns.

Publication Information

IEEE Trans Biomed Eng. 2021 May;68(5):1557-1568. doi: 10.1109/TBME.2020.3033446. Epub 2021 Apr 21.

Abstract

OBJECTIVE

Noise reduction algorithms in current hearing devices lack information about the sound source a user attends to when multiple sources are present. To resolve this issue, they can be complemented with auditory attention decoding (AAD) algorithms, which decode the attention using electroencephalography (EEG) sensors. State-of-the-art AAD algorithms employ a stimulus reconstruction approach, in which the envelope of the attended source is reconstructed from the EEG and correlated with the envelopes of the individual sources. This approach, however, performs poorly on short signal segments, while longer segments yield impractically long detection delays when the user switches attention.
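The stimulus reconstruction approach described above can be sketched as follows: a linear backward decoder maps multichannel EEG to an envelope estimate, which is then correlated with each candidate source envelope, and the best-matching source is declared attended. This is a minimal illustrative sketch with simulated toy data; all variable names and the least-squares stand-in for a pre-trained decoder are assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: T samples of a C-channel EEG segment and two candidate
# speech envelopes (illustrative only, not from the paper).
T, C = 1000, 8
env_attended = rng.standard_normal(T)
env_unattended = rng.standard_normal(T)
# Simulated EEG whose channels weakly track the attended envelope.
eeg = 0.5 * np.outer(env_attended, rng.standard_normal(C)) \
    + rng.standard_normal((T, C))

# A real system would use a backward decoder pre-trained on labeled
# data; here a least-squares fit on this segment stands in for it.
decoder, *_ = np.linalg.lstsq(eeg, env_attended, rcond=None)
env_hat = eeg @ decoder  # reconstructed envelope

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# Decision rule: attend to whichever source envelope correlates
# best with the reconstruction.
scores = [corr(env_hat, env_attended), corr(env_hat, env_unattended)]
decoded = int(np.argmax(scores))  # index of the decoded source
```

On short segments these correlation estimates become noisy, which is the weakness the abstract points out.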

METHODS

We propose decoding the directional focus of attention using filterbank common spatial pattern filters (FB-CSP) as an alternative AAD paradigm, which does not require access to the clean source envelopes.
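CSP finds spatial filters that maximize the variance of the projected EEG for one class (e.g., attention to the left) while minimizing it for the other, via a generalized eigenvalue problem on the class covariance matrices; log-variances of the filtered signals then serve as classification features. The sketch below shows single-band CSP on toy data (the filterbank variant repeats this per frequency band); the data generation and function names are illustrative assumptions, not the paper's code.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)

def csp_filters(trials_a, trials_b, n_filters=2):
    """Solve the generalized eigenvalue problem
    cov_a @ w = lam * (cov_a + cov_b) @ w and keep the filters at
    both ends of the eigenvalue spectrum (most discriminative)."""
    def mean_cov(trials):  # trials: list of (channels, samples) arrays
        return np.mean([t @ t.T / t.shape[1] for t in trials], axis=0)
    cov_a, cov_b = mean_cov(trials_a), mean_cov(trials_b)
    vals, vecs = eigh(cov_a, cov_a + cov_b)  # ascending eigenvalues
    idx = np.r_[np.arange(n_filters), np.arange(len(vals) - n_filters, len(vals))]
    return vecs[:, idx]  # (channels, 2 * n_filters)

def log_var_features(trial, W):
    """Project one trial through the CSP filters and take the
    log-variance of each component as a feature."""
    z = W.T @ trial
    return np.log(np.var(z, axis=1))

# Toy trials: each "attention direction" boosts a different channel.
C, T = 6, 500
gain_a = np.r_[3.0, np.ones(C - 1)][:, None]
gain_b = np.r_[1.0, 3.0, np.ones(C - 2)][:, None]
trials_a = [gain_a * rng.standard_normal((C, T)) for _ in range(20)]
trials_b = [gain_b * rng.standard_normal((C, T)) for _ in range(20)]

W = csp_filters(trials_a, trials_b)
feat_a = log_var_features(trials_a[0], W)  # fed to a simple classifier
feat_b = log_var_features(trials_b[0], W)
```

Because the features depend only on the EEG, not on the speech signals, this paradigm sidesteps the need for clean source envelopes.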

RESULTS

The proposed FB-CSP approach outperforms both the stimulus reconstruction approach on short signal segments and a convolutional neural network approach on the same task. We achieve a high accuracy (80% for [Formula: see text] windows and 70% for quasi-instantaneous decisions), which is sufficient to reach minimal expected switch durations below [Formula: see text]. We also demonstrate that the decoder can adapt to unlabeled data from an unseen subject and works with only a subset of EEG channels located around the ear, emulating a wearable EEG setup.

CONCLUSION

The proposed FB-CSP method provides fast and accurate decoding of the directional focus of auditory attention.

SIGNIFICANCE

The high accuracy on very short data segments is a major step forward towards practical neuro-steered hearing devices.

