Osei Tutu Dennis, Habibiabad Sepideh, Van den Noortgate Wim, Saldien Jelle, Bombeke Klaas
imec-mict-UGent, Department of Communication Sciences, Ghent University, Miriam Makebaplein 1, 9000 Ghent, Belgium.
imec-itec-KULeuven, Department of Psychology and Educational Sciences, KU Leuven, Etienne Sabbelaan 51, 8500 Kortrijk, Belgium.
Sensors (Basel). 2025 Sep 4;25(17):5498. doi: 10.3390/s25175498.
Soft skills such as communication and collaboration are vital in both professional and educational settings, yet difficult to train and assess objectively. Traditional role-playing scenarios rely heavily on subjective trainer evaluations, either in real time, where subtle behaviors are missed, or through time-intensive post hoc analysis. Virtual reality (VR) offers a scalable alternative by immersing trainees in controlled, interactive scenarios while simultaneously capturing fine-grained behavioral signals. This study investigates how task design in VR shapes non-verbal and paraverbal behaviors during dyadic collaboration. We compared two puzzle tasks: Task 1, which provided shared visual access and dynamic gesturing, and Task 2, which required verbal coordination through separation and turn-taking. From multimodal tracking data, we extracted features including gaze behaviors (eye contact, joint attention), hand gestures, facial expressions, and speech activity, and compared them across tasks. A clustering analysis explored whether tasks could be differentiated by their behavioral profiles. Results showed that Task 2, the more constrained condition, led participants to focus more visually on their own workspaces, suggesting that interaction difficulty can reduce partner-directed attention. Gestures were more frequent in shared-visual tasks, while speech became longer and more structured when turn-taking was enforced. Joint attention increased when participants relied on verbal descriptions rather than on a visible shared reference. These findings highlight how VR can elicit distinct soft skill behaviors through scenario design, enabling data-driven analysis of collaboration. This work contributes to scalable assessment frameworks with applications in training, adaptive agents, and human-AI collaboration.