

EEG Responses to auditory figure-ground perception.

Affiliations

Biosciences Institute, Newcastle University, United Kingdom.

Biosciences Institute, Newcastle University, United Kingdom; Institute of Neuroscience and Psychology, University of Glasgow, United Kingdom.

Publication Information

Hear Res. 2022 Sep 1;422:108524. doi: 10.1016/j.heares.2022.108524. Epub 2022 May 16.

DOI: 10.1016/j.heares.2022.108524
PMID: 35691269
Abstract

Speech-in-noise difficulty is commonly reported among hearing-impaired individuals. Recent work has established generic behavioural measures of sound segregation and grouping that are related to speech-in-noise processing but do not require language. In this study, we assessed potential clinical electroencephalographic (EEG) measures of central auditory grouping (stochastic figure-ground test) and speech-in-noise perception (speech-in-babble test) with and without relevant tasks. Auditory targets were presented within background noise (16 talker-babble or randomly generated pure-tones) in 50% of the trials and composed either a figure (pure-tone frequency chords repeating over time) or speech (English names), while the rest of the trials only had background noise. EEG was recorded while participants were presented with the target stimuli (figure or speech) under different attentional states (relevant task or visual-distractor task). EEG time-domain analysis demonstrated enhanced negative responses during detection of both types of auditory targets within the time window 150-350 ms but only figure detection produced significantly enhanced responses under the distracted condition. Further single-channel analysis showed that simple vertex-to-mastoid acquisition defines a very similar response to more complex arrays based on multiple channels. Evoked-potentials to the generic figure-ground task therefore represent a potential clinical measure of grouping relevant to real-world listening that can be assessed irrespective of language knowledge and expertise even without a relevant task.


Similar Articles

1
EEG Responses to auditory figure-ground perception.
Hear Res. 2022 Sep 1;422:108524. doi: 10.1016/j.heares.2022.108524. Epub 2022 May 16.
2
Effects of directional sound processing and listener's motivation on EEG responses to continuous noisy speech: Do normal-hearing and aided hearing-impaired listeners differ?
Hear Res. 2019 Jun;377:260-270. doi: 10.1016/j.heares.2019.04.005. Epub 2019 Apr 11.
3
Comparing approaches for predicting behavioural speech-in-noise performance using cortical responses to unattended stimuli.
Hear Res. 2025 Mar;457:109197. doi: 10.1016/j.heares.2025.109197. Epub 2025 Jan 15.
4
Neural entrainment to pitch changes of auditory targets in noise.
Neuroimage. 2025 Jul 1;314:121270. doi: 10.1016/j.neuroimage.2025.121270. Epub 2025 May 13.
5
'Normal' hearing thresholds and fundamental auditory grouping processes predict difficulties with speech-in-noise perception.
Sci Rep. 2019 Nov 14;9(1):16771. doi: 10.1038/s41598-019-53353-5.
6
Effects of Signal Type and Noise Background on Auditory Evoked Potential N1, P2, and P3 Measurements in Blast-Exposed Veterans.
Ear Hear. 2021 Jan/Feb;42(1):106-121. doi: 10.1097/AUD.0000000000000906.
7
Event-related potentials for better speech perception in noise by cochlear implant users.
Hear Res. 2014 Oct;316:110-21. doi: 10.1016/j.heares.2014.08.001. Epub 2014 Aug 23.
8
Processing of Visual Speech Cues in Speech-in-Noise Comprehension Depends on Working Memory Capacity and Enhances Neural Speech Tracking in Older Adults With Hearing Impairment.
Trends Hear. 2024 Jan-Dec;28:23312165241287622. doi: 10.1177/23312165241287622.
9
Pure-tone auditory stream segregation and speech perception in noise in cochlear implant recipients.
J Acoust Soc Am. 2006 Jul;120(1):360-74. doi: 10.1121/1.2204450.
10
Predicting speech-in-noise ability with static and dynamic auditory figure-ground analysis using structural equation modelling.
Proc Biol Sci. 2025 Mar;292(2042):20242503. doi: 10.1098/rspb.2024.2503. Epub 2025 Mar 5.

Cited By

1
Cortical oscillations predict auditory grouping in listeners with and without hearing loss.
medRxiv. 2025 Sep 4:2025.09.02.25334927. doi: 10.1101/2025.09.02.25334927.
2
Temporal-coherence induces binding of responses to sound sequences in ferret auditory cortex.
iScience. 2025 Feb 12;28(3):111991. doi: 10.1016/j.isci.2025.111991. eCollection 2025 Mar 21.
3
Temporal-Coherence Induces Binding of Responses to Sound Sequences in Ferret Auditory Cortex.
bioRxiv. 2024 May 29:2024.05.21.595170. doi: 10.1101/2024.05.21.595170.
4
Predicting speech-in-noise ability in normal and impaired hearing based on auditory cognitive measures.
Front Neurosci. 2023 Feb 7;17:1077344. doi: 10.3389/fnins.2023.1077344. eCollection 2023.