LIMSI-CNRS, BP133, Université Paris Sud, Orsay 91403, France.
J Acoust Soc Am. 2012 Apr;131(4):2948-57. doi: 10.1121/1.3687448.
The paper reports on the ability of people to rapidly adapt in localizing virtual sound sources, in both azimuth and elevation, when listening to sounds synthesized using non-individualized head-related transfer functions (HRTFs). Participants were placed within an audio-kinesthetic Virtual Auditory Environment (VAE) platform that allows the physical position of a virtual sound source to be associated with an alternate set of acoustic spectral cues through the use of a tracked physical ball manipulated by the subject. This setup offers a natural perception-action coupling that is not limited to the visual field of view. The experiment consisted of three sessions: an initial localization test to evaluate participants' performance, an adaptation session, and a subsequent localization test. A reference control group using individually measured HRTFs was included. Results show a significant improvement in localization performance. Relative to the control group, participants using non-individualized HRTFs reduced localization errors in elevation by 10° after three 12-min adaptation sessions. No significant improvement was found for azimuthal errors or for single-session adaptation.
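For readers unfamiliar with HRTF-based spatialization, the core rendering step the abstract refers to can be sketched as convolving a mono signal with a left/right pair of head-related impulse responses (HRIRs). This is a minimal illustration, not the authors' VAE platform; the HRIR values below are placeholder coefficients standing in for measured filters, which in practice vary per direction and per listener.

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Spatialize a mono signal by convolving it with a pair of
    head-related impulse responses (one per ear)."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Placeholder HRIRs: a crude interaural time/level difference in which
# the right ear receives the signal one sample later and attenuated.
# Real HRIRs are measured filters, individual to each listener,
# which is why non-individualized sets degrade elevation cues.
hrir_l = np.array([1.0, 0.0])
hrir_r = np.array([0.0, 0.6])

signal = np.array([1.0, -1.0, 0.5])
out = render_binaural(signal, hrir_l, hrir_r)
print(out.shape)  # (2, 4): stereo output, length len(signal) + len(hrir) - 1
```

Using a non-individualized HRIR set in such a renderer presents the listener with spectral cues shaped by someone else's ears, which is the mismatch the adaptation sessions in the study aim to overcome.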