McLachlan Glen, Lladó Pedro, Peremans Herbert
Active Perception Lab, Department of Engineering Management, University of Antwerp, Belgium.
Acoustics Lab, Department of Information and Communication Engineering, Aalto University, Espoo, Finland.
J Neurophysiol. 2024 Dec 1;132(6):1857-1866. doi: 10.1152/jn.00298.2024. Epub 2024 Oct 30.
Recent interest in dynamic sound localization models has created a need to better understand the head movements made by humans. Previous studies have shown that static head positions and small oscillations of the head obey Donders' law: for each facing direction there is one unique three-dimensional orientation. It is unclear whether this same constraint applies to audiovisual localization, where head movement is unrestricted and subjects may rotate their heads depending on the available auditory information. In an auditory-guided visual search task, human subjects were instructed to localize an audiovisual target within a field of visual distractors in the frontal hemisphere. During this task, head and torso movements were monitored with a motion capture system. Head rotations were found to follow Donders' law during search tasks. Individual differences were present in the amount of roll that subjects deployed, though there was no statistically significant improvement in model performance when including these individual differences in a gimbal model. The roll component of head rotation could therefore be predicted with a truncated Fick gimbal, which consists of a pitch axis nested within a yaw axis. This led to a reduction from three to two degrees of freedom when modeling head movement during localization tasks.

Understanding how humans utilize head movements during sound localization is crucial for the advancement of auditory perception models and improvement of practical applications like hearing aids and virtual reality systems. By analyzing head motion data from an auditory-guided visual search task, we concluded that findings from earlier studies on head movement can be generalized to audiovisual localization and, from this, proposed a simple model for head rotation that reduced the number of degrees of freedom.
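As a minimal sketch (not code from the paper), the truncated Fick gimbal described above can be written as a two-parameter rotation: an intrinsic yaw about the fixed vertical axis followed by a pitch about the yawed interaural axis, with roll fixed at zero. The snippet below assumes a right-handed head frame with +X naso-occipital (forward), +Y interaural, and +Z vertical; the function name truncated_fick and the angle sign conventions are illustrative assumptions, and SciPy's Rotation class stands in for whatever kinematics library one prefers.

import numpy as np
from scipy.spatial.transform import Rotation as R

def truncated_fick(yaw_deg, pitch_deg):
    # Fick order: intrinsic yaw about the vertical (Z) axis, then pitch
    # about the yawed interaural (Y) axis. Omitting the third (roll)
    # rotation is the three-to-two degree-of-freedom reduction
    # described in the abstract.
    return R.from_euler("ZY", [yaw_deg, pitch_deg], degrees=True)

# Donders' law under this model: each facing direction (the image of
# the naso-occipital +X axis) maps to exactly one 3-D orientation.
orientation = truncated_fick(30.0, -15.0)
facing = orientation.apply([1.0, 0.0, 0.0])
print("facing direction:", np.round(facing, 3))

# Expressed as a rotation vector, the orientation can still carry a
# nonzero torsional (X) component even though no roll was commanded;
# this "false torsion" is a coordinate effect, not a third DOF.
rotvec_deg = np.degrees(orientation.as_rotvec())
print("rotation vector [x, y, z] (deg):", np.round(rotvec_deg, 2))

Under this sketch, predicting the roll component of a measured head pose amounts to fitting only the yaw and pitch angles and reading the torsion off the composed rotation, which is the sense in which the model reduces head movement to two degrees of freedom.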