New Jersey Institute of Technology, Newark, United States.
Rutgers University, Newark, United States.
Elife. 2019 Oct 21;8:e47027. doi: 10.7554/eLife.47027.
Human sound localization is an important computation performed by the brain. Models of sound localization commonly assume that sound lateralization from interaural time differences is level invariant. Here we observe that two prevalent theories of sound localization make opposing predictions. The labelled-line model encodes location through tuned representations of spatial location and predicts that perceived direction is level invariant. In contrast, the hemispheric-difference model encodes location through spike rate and predicts that perceived direction becomes medially biased at low sound levels. Behavioral experiments find that softer sounds are perceived closer to midline than louder sounds, favoring rate-coding models of human sound localization. Analogously, visual depth perception, which is based on interocular disparity, depends on the contrast of the target. The similar results in hearing and vision suggest that the brain may use a canonical computation of location: encoding perceived location through population spike rate relative to baseline.
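To make the opposing predictions concrete, the following is a minimal sketch (my own illustration, not the authors' code or model fits). It assumes Gaussian ITD-tuned channels for the labelled-line model, sigmoidal left/right hemispheric populations for the hemispheric-difference model, a multiplicative rate gain standing in for sound level, and a readout constant `k` calibrated at the loud reference level; all tuning widths and parameter values are hypothetical.

```python
import numpy as np

ITD_MS = 0.3                            # stimulus ITD in ms (right-leading)
LEVELS = {"loud": 1.0, "soft": 0.3}     # rate gain as a stand-in for sound level

# Labelled-line model: an array of ITD-tuned channels, decoded by the
# preferred ITD of the most active channel. The argmax is unchanged when
# all rates are scaled by the same gain, so the decoded direction is
# level invariant.
centers = np.linspace(-0.7, 0.7, 29)    # preferred ITDs of the channels (ms)

def labelled_line(itd, gain):
    rates = gain * np.exp(-0.5 * ((itd - centers) / 0.1) ** 2)
    return centers[np.argmax(rates)]

# Hemispheric-difference model: two broadly tuned populations whose rates
# rise sigmoidally toward the contralateral side. Location is read out from
# the rate difference with a fixed constant k (calibrated at gain = 1), so
# the decoded direction scales with gain and shrinks toward midline at low
# levels.
def hemispheric(itd, gain, k=0.33):
    r_right = gain / (1.0 + np.exp(-itd / 0.1))
    r_left = gain / (1.0 + np.exp(itd / 0.1))
    return k * (r_right - r_left)

for name, g in LEVELS.items():
    print(f"{name:>4}: labelled-line {labelled_line(ITD_MS, g):+.2f} ms | "
          f"hemispheric {hemispheric(ITD_MS, g):+.2f} ms")
# loud: both models decode about +0.30 ms; soft: the labelled-line estimate
# stays at +0.30 ms while the hemispheric estimate falls to about +0.09 ms,
# i.e. a medial bias for softer sounds.
```

The fixed readout constant `k` encodes the assumption that the rate-difference decoder is calibrated at a single reference level; it is that fixed calibration that turns a gain-scaled rate difference into the medial bias the hemispheric-difference model predicts.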