Snyder Joel S, Elhilali Mounya
Department of Psychology, University of Nevada, Las Vegas, Las Vegas, Nevada.
Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, Maryland.
Ann N Y Acad Sci. 2017 May;1396(1):39-55. doi: 10.1111/nyas.13317. Epub 2017 Feb 15.
Studies of auditory scene analysis have traditionally relied on paradigms using artificial sounds, and conventional behavioral techniques, to elucidate how we perceptually segregate auditory objects or streams from each other. In the past few decades, however, there has been growing interest in uncovering the neural underpinnings of auditory segregation using human and animal neuroscience techniques, as well as computational modeling. This largely reflects the growth in the fields of cognitive neuroscience and computational neuroscience and has led to new theories of how the auditory system segregates sounds in complex arrays. The current review focuses on neural and computational studies of auditory scene perception published in the last few years. Following the progress that has been made in these studies, we describe (1) theoretical advances in our understanding of the most well-studied aspects of auditory scene perception, namely segregation of sequential patterns of sounds and concurrently presented sounds; (2) the diversification of topics and paradigms that have been investigated; and (3) how new neuroscience techniques (including invasive neurophysiology in awake humans, genotyping, and brain stimulation) have been used in this field.