Department of Physics, Cornell University, 142 Sciences Drive, Ithaca, NY 14853, USA.
J R Soc Interface. 2019 May 31;16(154):20180903. doi: 10.1098/rsif.2018.0903.
Swing in a crew boat, a good jazz riff, a fluid conversation: these tasks require extracting sensory information about how others flow in order to mimic and respond. To determine what factors influence coordination, we build an environment to manipulate incoming sensory information by combining virtual reality and motion capture. We study how people mirror the motion of a human avatar's arm as we occlude the avatar. We efficiently map the transition from successful mirroring to failure using Gaussian process regression. Then, we determine the change in behaviour when we introduce audio cues with a frequency proportional to the speed of the avatar's hand or train individuals with a practice session. Remarkably, audio cues extend the range of successful mirroring to regimes where visual information is sparse. Such cues could facilitate joint coordination when navigating visually occluded environments, improve reaction speed in human-computer interfaces or measure altered physiological states and disease.
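The mapping of the transition from successful mirroring to failure rests on Gaussian process regression. As a rough illustration of the idea (a sketch, not the authors' code), one can fit a GP to a scalar mirroring error measured at a handful of occlusion levels and read off where the posterior mean crosses a failure criterion; the posterior uncertainty also indicates where the next measurement would be most informative, which is what makes the mapping efficient. All variable names and numerical values below (occlusion_levels, mirroring_error, ERROR_CRITERION) are hypothetical.

```python
# Minimal sketch: Gaussian process regression over occlusion level,
# used to locate where mirroring performance breaks down.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical measurements: fraction of time the avatar is occluded
# versus mean tracking error of the participant's mirrored hand.
occlusion_levels = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 0.95]).reshape(-1, 1)
mirroring_error = np.array([0.05, 0.06, 0.09, 0.21, 0.48, 0.71])

# RBF kernel for the smooth trend plus a white-noise term for trial noise.
kernel = 1.0 * RBF(length_scale=0.2) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(
    occlusion_levels, mirroring_error)

# Predict on a dense grid and find where the posterior mean first
# crosses a (hypothetical) failure criterion; the predictive std could
# guide which occlusion level to probe next (active sampling).
grid = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
mean, std = gp.predict(grid, return_std=True)
ERROR_CRITERION = 0.3
crossing = grid[np.argmax(mean > ERROR_CRITERION)].item()
print(f"estimated mirroring breakdown near occlusion = {crossing:.2f}")
```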
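The audio cue described in the abstract has a frequency proportional to the speed of the avatar's hand. One way such a cue could be synthesized (a minimal sketch under assumed constants, not the study's implementation) is to map speed linearly onto pitch and accumulate phase so the tone stays click-free as the frequency varies; BASE_FREQ, HZ_PER_MPS, and speed_to_tone below are hypothetical names and values.

```python
# Minimal sketch: sonify hand speed as a continuously varying pitch.
import numpy as np

SAMPLE_RATE = 44100   # audio samples per second
BASE_FREQ = 220.0     # Hz at zero speed (hypothetical choice)
HZ_PER_MPS = 400.0    # pitch gain per m/s of hand speed (hypothetical)

def speed_to_tone(hand_speed, duration=2.0):
    """Synthesize a tone whose frequency tracks hand speed over time.

    hand_speed: callable mapping time in seconds to speed in m/s.
    Returns a float array of audio samples in [-1, 1].
    """
    t = np.arange(int(duration * SAMPLE_RATE)) / SAMPLE_RATE
    freq = BASE_FREQ + HZ_PER_MPS * np.vectorize(hand_speed)(t)
    # Integrate frequency to phase so pitch changes do not produce clicks.
    phase = 2.0 * np.pi * np.cumsum(freq) / SAMPLE_RATE
    return np.sin(phase)

# Example: a hand oscillating at 0.5 Hz yields a speed (and hence pitch)
# that rises and falls twice per movement cycle.
samples = speed_to_tone(lambda t: abs(np.sin(2 * np.pi * 0.5 * t)))
```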