Decoding selective auditory attention with EEG using a transformer model.

Affiliations

Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072, China.

Medizinische Physik, Carl von Ossietzky Universität Oldenburg and Cluster of Excellence "Hearing4all", Küpkersweg 74, 26129, Oldenburg, Germany.

Publication Information

Methods. 2022 Aug;204:410-417. doi: 10.1016/j.ymeth.2022.04.009. Epub 2022 Apr 18.

DOI: 10.1016/j.ymeth.2022.04.009
PMID: 35447360
Abstract

The human auditory system extracts valid information in noisy environments while ignoring other distractions, relying primarily on auditory attention. Studies have shown that the cerebral cortex responds differently to different sound-source locations and that auditory attention is time-varying. In this work, we propose a data-driven encoder-decoder architecture for auditory attention detection (AAD), denoted AAD-transformer. The model contains temporal self-attention and channel attention modules and reconstructs the speech envelope by dynamically assigning weights to the electroencephalogram (EEG) through these two attention mechanisms. In addition, the model is fully data-driven and requires no additional preprocessing steps. The proposed model was validated on a binaural listening dataset in which the speech stimuli were Mandarin, and it was compared with other models. The results showed that the decoding accuracy of the AAD-transformer with a 0.15-second decoding window was 76.35%, far higher than that of a linear model using a temporal response function with a 3-second decoding window (an improvement of 16.27%). This work provides a novel auditory attention detection method whose data-driven character makes it convenient for neural-steered hearing devices, especially for speakers of tonal languages.
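The two mechanisms the abstract names, temporal self-attention (weighting time steps against each other) and channel attention (weighting EEG channels), can be sketched in minimal NumPy form. All dimensions, weight initializations, and the final linear envelope readout below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_attention(eeg):
    # Weight each EEG channel by a score derived from its time-averaged activity.
    w = softmax(np.abs(eeg).mean(axis=0))        # (C,) channel weights
    return eeg * w                               # (T, C) reweighted channels

def temporal_self_attention(eeg, d_k=16):
    # Scaled dot-product attention across time steps of the EEG sequence.
    T, C = eeg.shape
    Wq, Wk, Wv = (rng.standard_normal((C, d_k)) / np.sqrt(C) for _ in range(3))
    Q, K, V = eeg @ Wq, eeg @ Wk, eeg @ Wv
    A = softmax(Q @ K.T / np.sqrt(d_k), axis=-1)  # (T, T) temporal weights
    return A @ V                                  # (T, d_k) attended features

T, C = 64, 32                                     # e.g. 64 samples, 32 EEG channels
eeg = rng.standard_normal((T, C))
feat = temporal_self_attention(channel_attention(eeg))
w_out = rng.standard_normal(feat.shape[1]) / np.sqrt(feat.shape[1])
envelope = feat @ w_out                           # reconstructed speech envelope, (T,)
```

In the actual AAD pipeline, the reconstructed envelope would be correlated with the envelopes of the competing speech streams, and the stream with the higher correlation is taken as the attended one.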


Similar Articles

1. Decoding selective auditory attention with EEG using a transformer model.
Methods. 2022 Aug;204:410-417. doi: 10.1016/j.ymeth.2022.04.009. Epub 2022 Apr 18.
2. Auditory attention decoding from EEG-based Mandarin speech envelope reconstruction.
Hear Res. 2022 Sep 1;422:108552. doi: 10.1016/j.heares.2022.108552. Epub 2022 Jun 11.
3. Congruent audiovisual speech enhances auditory attention decoding with EEG.
J Neural Eng. 2019 Nov 6;16(6):066033. doi: 10.1088/1741-2552/ab4340.
4. EEG-based auditory attention decoding using speech-level-based segmented computational models.
J Neural Eng. 2021 May 25;18(4). doi: 10.1088/1741-2552/abfeba.
5. 'Are you even listening?' - EEG-based decoding of absolute auditory attention to natural speech.
J Neural Eng. 2024 Jun 20;21(3). doi: 10.1088/1741-2552/ad5403.
6. Where is the cocktail party? Decoding locations of attended and unattended moving sound sources using EEG.
Neuroimage. 2020 Jan 15;205:116283. doi: 10.1016/j.neuroimage.2019.116283. Epub 2019 Oct 17.
7. Robust decoding of the speech envelope from EEG recordings through deep neural networks.
J Neural Eng. 2022 Jul 6;19(4). doi: 10.1088/1741-2552/ac7976.
8. Impact of Different Acoustic Components on EEG-Based Auditory Attention Decoding in Noisy and Reverberant Conditions.
IEEE Trans Neural Syst Rehabil Eng. 2019 Apr;27(4):652-663. doi: 10.1109/TNSRE.2019.2903404. Epub 2019 Mar 7.
9. Cortical Auditory Attention Decoding During Music and Speech Listening.
IEEE Trans Neural Syst Rehabil Eng. 2023;31:2903-2911. doi: 10.1109/TNSRE.2023.3291239. Epub 2023 Jul 12.
10. ADT Network: A Novel Nonlinear Method for Decoding Speech Envelopes From EEG Signals.
Trends Hear. 2024 Jan-Dec;28:23312165241282872. doi: 10.1177/23312165241282872.

Cited By

1. Improving auditory attention decoding by classifying intracranial responses to glimpsed and masked acoustic events.
Imaging Neurosci (Camb). 2024;2. doi: 10.1162/imag_a_00148. Epub 2024 Apr 26.
2. Feasibility of decoding covert speech in ECoG with a Transformer trained on overt speech.
Sci Rep. 2024 May 20;14(1):11491. doi: 10.1038/s41598-024-62230-9.
3. A multivariate comparison of electroencephalogram and functional magnetic resonance imaging to electrocorticogram using visual object representations in humans.
Front Neurosci. 2022 Oct 18;16:983602. doi: 10.3389/fnins.2022.983602. eCollection 2022.
4. Machine learning for health and clinical applications.
Methods. 2022 Oct;206:56-57. doi: 10.1016/j.ymeth.2022.08.004. Epub 2022 Aug 11.