

Selective Auditory Attention Detection Using Combined Transformer and Convolutional Graph Neural Networks

Authors

Geravanchizadeh Masoud, Shaygan Asl Amir, Danishvar Sebelan

Affiliations

Faculty of Electrical & Computer Engineering, University of Tabriz, Tabriz 51666-15813, Iran.

College of Engineering, Design and Physical Sciences, Brunel University London, London UB8 3PH, UK.

Publication

Bioengineering (Basel). 2024 Nov 30;11(12):1216. doi: 10.3390/bioengineering11121216.

DOI:10.3390/bioengineering11121216
PMID:39768034
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11673410/
Abstract

Attention is one of many human cognitive functions that are essential in everyday life. Given our limited processing capacity, attention helps us focus only on what matters. Focusing attention on one speaker in an environment with many speakers is a critical ability of the human auditory system. This paper proposes a new end-to-end method based on a combined transformer and graph convolutional neural network (TraGCNN) that can effectively detect auditory attention from electroencephalograms (EEGs). This approach eliminates the need for manual feature extraction, which is often time-consuming and subjective. Here, the EEG signals are first converted to graphs. We then extract attention information from these graphs using spatial and temporal approaches. Finally, our models are trained on these data. Our model can detect auditory attention in both the spatial and temporal domains. The EEG input is first processed by transformer layers to obtain a sequential representation of the EEG based on attention onsets. Then, a family of graph convolutional layers is used to find the most active electrodes using the spatial positions of the electrodes. Finally, the corresponding EEG features of the active electrodes are fed into graph attention layers to detect auditory attention. The Fuglsang 2020 dataset is used in the experiments to train and test the proposed and baseline systems. Compared with state-of-the-art attention classification methods from the literature, the new TraGCNN approach yields the highest accuracy (80.12%). Additionally, the proposed model outperforms our previous graph-based model for different lengths of EEG segments. The new TraGCNN approach is advantageous because attention detection is achieved from the EEG signals of subjects without requiring speech stimuli, as conventional auditory attention detection methods do. Furthermore, examining the proposed model for different lengths of EEG segments shows that it is faster than our previous graph-based detection method in terms of computational complexity. The findings of this study have important implications for the understanding and assessment of auditory attention, which is crucial for many applications, such as brain-computer interface (BCI) systems, speech separation, and neuro-steered hearing aid development.
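The pipeline the abstract describes (transformer-style self-attention over time, graph convolution over electrode positions, then an attention-based readout for the binary attended-speaker decision) can be sketched in miniature. The following is an illustrative NumPy sketch with toy shapes, random weights, and a ring adjacency standing in for real scalp geometry; it is not the authors' TraGCNN implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_t, d = 8, 64, 16                     # electrodes, time samples, hidden dim (toy sizes)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

X = rng.standard_normal((n_ch, n_t))         # one raw EEG segment: channels x time

# 1) Transformer-style self-attention over time steps yields a
#    sequential representation of the segment.
T = X.T                                      # tokens = time steps, features = channels
Wq, Wk, Wv = (rng.standard_normal((n_ch, d)) for _ in range(3))
Q, K, V = T @ Wq, T @ Wk, T @ Wv
seq = softmax(Q @ K.T / np.sqrt(d)) @ V      # (n_t, d)

# 2) Graph convolution over electrodes: nodes = electrodes, edges = spatial
#    neighbours (a ring adjacency here, as a stand-in for scalp geometry).
A = np.eye(n_ch)
for i in range(n_ch):
    j = (i + 1) % n_ch
    A[i, j] = A[j, i] = 1.0
d_inv = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = d_inv @ A @ d_inv                    # symmetrically normalized adjacency

node_feat = X @ seq                          # (n_ch, d): per-electrode features
H = np.maximum(A_hat @ node_feat @ rng.standard_normal((d, d)), 0.0)  # GCN layer + ReLU

# 3) Attention-weighted readout over electrodes, then a 2-way head
#    (attended speaker: left vs. right).
alpha = softmax(H @ rng.standard_normal(d), axis=0)   # node attention weights
readout = alpha @ H                                   # (d,) graph summary
probs = softmax(readout @ rng.standard_normal((d, 2)))
```

In the actual model, each stage is learned end-to-end from labeled EEG segments; the sketch only shows how the temporal, spatial, and readout stages compose.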


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/27a4/11673410/d0148cd6a0ae/bioengineering-11-01216-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/27a4/11673410/2c84b4139b57/bioengineering-11-01216-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/27a4/11673410/8da5efd78496/bioengineering-11-01216-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/27a4/11673410/033eb8ddc419/bioengineering-11-01216-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/27a4/11673410/c458f06dbb36/bioengineering-11-01216-g005.jpg

Similar Articles

1
Selective Auditory Attention Detection Using Combined Transformer and Convolutional Graph Neural Networks.
Bioengineering (Basel). 2024 Nov 30;11(12):1216. doi: 10.3390/bioengineering11121216.
2
DGSD: Dynamical graph self-distillation for EEG-based auditory spatial attention detection.
Neural Netw. 2024 Nov;179:106580. doi: 10.1016/j.neunet.2024.106580. Epub 2024 Jul 26.
3
Brain connectivity and time-frequency fusion-based auditory spatial attention detection.
Neuroscience. 2024 Nov 12;560:397-405. doi: 10.1016/j.neuroscience.2024.09.017. Epub 2024 Sep 10.
4
Multimodal depression detection based on an attention graph convolution and transformer.
Math Biosci Eng. 2025 Feb 27;22(3):652-676. doi: 10.3934/mbe.2025024.
5
Brain Topology Modeling With EEG-Graphs for Auditory Spatial Attention Detection.
IEEE Trans Biomed Eng. 2024 Jan;71(1):171-182. doi: 10.1109/TBME.2023.3294242. Epub 2023 Dec 22.
6
Selective auditory attention detection based on effective connectivity by single-trial EEG.
J Neural Eng. 2020 Apr 17;17(2):026021. doi: 10.1088/1741-2552/ab7c8d.
7
Emotion recognition using spatial-temporal EEG features through convolutional graph attention network.
J Neural Eng. 2023 Feb 14;20(1). doi: 10.1088/1741-2552/acb79e.
8
Attention-guided graph structure learning network for EEG-enabled auditory attention detection.
J Neural Eng. 2024 May 30;21(3). doi: 10.1088/1741-2552/ad4f1a.
9
EEG-based emotion recognition using graph convolutional neural network with dual attention mechanism.
Front Comput Neurosci. 2024 Jul 19;18:1416494. doi: 10.3389/fncom.2024.1416494. eCollection 2024.
10
Auditory attention tracking states in a cocktail party environment can be decoded by deep convolutional neural networks.
J Neural Eng. 2020 Jun 12;17(3):036013. doi: 10.1088/1741-2552/ab92b2.

Cited By

1
Two-Dimensional Latent Space Manifold of Brain Connectomes Across the Spectrum of Clinical Cognitive Decline.
Bioengineering (Basel). 2025 Jul 29;12(8):819. doi: 10.3390/bioengineering12080819.

References

1
Low-Latency Auditory Spatial Attention Detection Based on Spectro-Spatial Features from EEG.
Annu Int Conf IEEE Eng Med Biol Soc. 2021 Nov;2021:5812-5815. doi: 10.1109/EMBC46164.2021.9630902.
2
Extracting the Auditory Attention in a Dual-Speaker Scenario From EEG Using a Joint CNN-LSTM Model.
Front Physiol. 2021 Aug 2;12:700655. doi: 10.3389/fphys.2021.700655. eCollection 2021.
3
Graph Neural Networks and Their Current Applications in Bioinformatics.
Front Genet. 2021 Jul 29;12:690049. doi: 10.3389/fgene.2021.690049. eCollection 2021.
4
Dynamic selective auditory attention detection using RNN and reinforcement learning.
Sci Rep. 2021 Jul 29;11(1):15497. doi: 10.1038/s41598-021-94876-0.
5
Ear-EEG-based binaural speech enhancement (ee-BSE) using auditory attention detection and audiometric characteristics of hearing-impaired subjects.
J Neural Eng. 2021 Aug 20;18(4). doi: 10.1088/1741-2552/ac16b4.
6
Attention in Psychology, Neuroscience, and Machine Learning.
Front Comput Neurosci. 2020 Apr 16;14:29. doi: 10.3389/fncom.2020.00029. eCollection 2020.
7
Selective auditory attention detection based on effective connectivity by single-trial EEG.
J Neural Eng. 2020 Apr 17;17(2):026021. doi: 10.1088/1741-2552/ab7c8d.
8
Effects of Sensorineural Hearing Loss on Cortical Synchronization to Competing Speech during Selective Attention.
J Neurosci. 2020 Mar 18;40(12):2562-2572. doi: 10.1523/JNEUROSCI.1936-19.2020. Epub 2020 Feb 24.
9
Comparison of Two-Talker Attention Decoding from EEG with Nonlinear Neural Networks and Linear Methods.
Sci Rep. 2019 Aug 8;9(1):11538. doi: 10.1038/s41598-019-47795-0.
10
Speaker-independent auditory attention decoding without access to clean speech sources.
Sci Adv. 2019 May 15;5(5):eaav6134. doi: 10.1126/sciadv.aav6134. eCollection 2019 May.