Department of Psychology, Clinical Psychology and Neuropsychology, University of Konstanz, Germany.
Neuropsychologia. 2010 Apr;48(5):1417-25. doi: 10.1016/j.neuropsychologia.2010.01.009. Epub 2010 Jan 15.
To explore the neural processes underlying concurrent sound segregation, auditory evoked fields (AEFs) were measured using magnetoencephalography (MEG). To induce the segregation of two auditory objects, we manipulated harmonicity and onset synchrony. Participants were presented with complex sounds with (i) all harmonics in tune, (ii) the third harmonic mistuned by 8% of its original value, or (iii) the onset of the third harmonic delayed by 160 ms relative to the other harmonics. In one recording session, participants listened to the sounds and performed an auditory localisation task; in another session, they ignored the sounds and performed a visual localisation task. Active and passive listening conditions were chosen to evaluate the contribution of attention to sound segregation. Both cues - inharmonicity and onset asynchrony - elicited sound segregation: participants were more likely to report correctly on which side they heard the third harmonic when it was mistuned or delayed than when it was in tune with the other harmonics. AEF activity associated with concurrent sound segregation was identified over both temporal lobes. We found an early deflection at approximately 75 ms after sound onset (P75m), probably reflecting automatic registration of the mistuned harmonic. Subsequent deflections, the object-related negativity (ORNm) and a later displacement (P230m), appear to be more general markers of concurrent sound segregation, as both were elicited by mistuning and by delaying the third harmonic. The results indicate that the ORNm reflects relatively automatic, bottom-up sound segregation processes, whereas the P230m is more sensitive to attention, especially when inharmonicity serves as the cue for concurrent sound segregation.