Kopiske Karl, Heinrich Elisa-Maria, Jahn Georg, Bendixen Alexandra, Einhäuser Wolfgang
Cognitive Systems Lab, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany.
Physics of Cognition Group, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany.
J Neurophysiol. 2023 Oct 1;130(4):1028-1040. doi: 10.1152/jn.00011.2023. Epub 2023 Sep 13.
When humans walk, it is important for them to have some measure of the distance they have traveled. Typically, many cues from different modalities are available, as humans perceive both the environment around them (for example, through vision and haptics) and their own walking. Here, we investigate the contribution of visual cues and nonvisual self-motion cues to distance reproduction when walking on a treadmill through a virtual environment, by separately manipulating the speed of the treadmill belt and of the virtual environment. Using mobile eye tracking, we also investigate how our participants sampled the visual information through gaze. We show that, as predicted, both modalities affected how participants (n = 28) reproduced a distance. Participants weighed nonvisual self-motion cues more strongly than visual cues, in line with the cues' respective reliabilities, but with some interindividual variability. Those who looked more toward the parts of the visual scene that contained cues to speed and distance also tended to weigh visual information more strongly, although this correlation was nonsignificant, and participants generally directed their gaze toward visually informative areas of the scene less than expected. As measured by motion capture, participants adjusted their gait patterns to the treadmill speed but not to the walked distance. In sum, we show in a naturalistic virtual environment how humans use different sensory modalities when reproducing distances and how the use of these cues differs between participants and depends on information sampling.

Combining virtual reality with treadmill walking, we measured the relative importance of visual cues and nonvisual self-motion cues for distance reproduction. Participants used both cues but put more weight on self-motion; the weight on visual cues tended to correlate with looking at visually informative areas. Participants overshot distances, especially when self-motion was slow; they adjusted their steps to self-motion cues but not to visual cues. Our work thus quantifies the multimodal contributions to distance reproduction.
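The observation that cue weights correspond to the cues' respective reliabilities matches the standard maximum-likelihood account of cue combination, in which each cue is weighted by its inverse variance (its reliability). A minimal sketch of that weighting scheme follows; the noise levels and distance estimates used here are hypothetical illustrations, not values from the study:

```python
def reliability_weights(sigmas):
    """Inverse-variance (reliability) weights, as in standard
    maximum-likelihood cue-combination models.

    sigmas: per-cue noise standard deviations.
    Returns weights that sum to 1; less noisy cues get more weight.
    """
    inv_vars = [1.0 / s ** 2 for s in sigmas]
    total = sum(inv_vars)
    return [iv / total for iv in inv_vars]


# Hypothetical example: self-motion cue less noisy than the visual cue,
# so it dominates the combined distance estimate.
w_self, w_vis = reliability_weights([0.10, 0.20])
combined = w_self * 4.8 + w_vis * 5.4  # fuse two distance estimates (m)
print(w_self, w_vis, combined)  # 0.8 0.2 4.92
```

Under this scheme, halving a cue's noise quadruples its relative weight, which is one way an observer's weighting can track cue reliability without any explicit knowledge of the noise sources.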