Biomechanical Engineering Lab, Department of Mechanical Engineering and Research Center for Biomedical Engineering, Universitat Politècnica de Catalunya, Barcelona, 08028, Spain.
Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, 08950, Spain.
J Neuroeng Rehabil. 2024 Nov 1;21(1):195. doi: 10.1186/s12984-024-01482-y.
Virtual Reality (VR) has proven to be an effective tool for motor (re)learning. Furthermore, with the current commercialization of low-cost head-mounted displays (HMDs), immersive virtual reality (IVR) has become a viable rehabilitation tool. Nonetheless, how immersive virtual environments should be designed to enhance motor learning, especially of complex motor tasks, remains an open question. An example of such a complex task is triggering steps while wearing a lower-limb exoskeleton, as it requires learning several sub-tasks, e.g., shifting the weight from one leg to the other, keeping the trunk upright, and initiating steps. This study aims to identify the elements of VR needed to promote motor learning of complex virtual gait tasks.
In this study, we developed an HMD-IVR-based system for training people with sensorimotor disorders to control wearable lower-limb exoskeletons. The system simulates a virtual walking task in which an avatar performs the sub-tasks needed to trigger steps with an exoskeleton. We ran an experiment with forty healthy participants to investigate the effects of first-person (1PP) vs. third-person perspective (3PP) and the provision (or not) of concurrent visual feedback of participants' movements on walking performance (namely number of steps, trunk inclination, and stride length), as well as on embodiment, usability, cybersickness, and perceived workload.
We found that all participants learned to execute the virtual walking task. However, no clear interaction of perspective and visual feedback improved the learning of all sub-tasks concurrently. Instead, the key seems to lie in selecting the appropriate perspective and visual feedback for each sub-task. Notably, participants embodied the avatar across all training modalities with low cybersickness levels. Still, participants' cognitive load remained high, leading to marginally acceptable usability scores.
Our findings suggest that, to maximize learning, users should train sub-tasks sequentially using the most suitable combination of perspective and visual feedback for each sub-task. This research offers valuable insights for future developments in IVR to support individuals with sensorimotor disorders in learning to walk with wearable exoskeletons.