Suppr 超能文献


When Action Speaks Louder than Words: Exploring Non-Verbal and Paraverbal Features in Dyadic Collaborative VR.

Authors

Osei Tutu Dennis, Habibiabad Sepideh, Van den Noortgate Wim, Saldien Jelle, Bombeke Klaas

Affiliations

imec-mict-UGent, Department of Communication Sciences, Ghent University, Miriam Makebaplein 1, 9000 Ghent, Belgium.

imec-itec-KULeuven, Department of Psychology and Educational Sciences, KU Leuven, Etienne Sabbelaan 51, 8500 Kortrijk, Belgium.

Publication Information

Sensors (Basel). 2025 Sep 4;25(17):5498. doi: 10.3390/s25175498.

DOI:10.3390/s25175498
PMID:40942927
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12430992/
Abstract

Soft skills such as communication and collaboration are vital in both professional and educational settings, yet difficult to train and assess objectively. Traditional role-playing scenarios rely heavily on subjective trainer evaluations: either in real time, where subtle behaviors are missed, or through time-intensive post hoc analysis. Virtual reality (VR) offers a scalable alternative by immersing trainees in controlled, interactive scenarios while simultaneously capturing fine-grained behavioral signals. This study investigates how task design in VR shapes non-verbal and paraverbal behaviors during dyadic collaboration. We compared two puzzle tasks: Task 1, which provided shared visual access and dynamic gesturing, and Task 2, which required verbal coordination through separation and turn-taking. From multimodal tracking data, we extracted features including gaze behaviors (eye contact, joint attention), hand gestures, facial expressions, and speech activity, and compared them across tasks. A clustering analysis explored whether or not tasks could be differentiated by their behavioral profiles. Results showed that Task 2, the more constrained condition, led participants to focus more visually on their own workspaces, suggesting that interaction difficulty can reduce partner-directed attention. Gestures were more frequent in shared-visual tasks, while speech became longer and more structured when turn-taking was enforced. Joint attention increased when participants relied on verbal descriptions rather than on a visible shared reference. These findings highlight how VR can elicit distinct soft skill behaviors through scenario design, enabling data-driven analysis of collaboration. This work contributes to scalable assessment frameworks with applications in training, adaptive agents, and human-AI collaboration.
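The clustering step described in the abstract — grouping sessions by their behavioral profiles to see whether the two tasks separate — can be sketched as follows. This is a minimal illustration on synthetic data: the feature names, value ranges, and the plain k-means routine are assumptions for the sketch, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-session feature vectors (illustrative values only):
# [eye_contact_ratio, joint_attention_ratio, gestures_per_min, mean_utterance_s]
# Task 1: shared visual access (more gestures, shorter utterances);
# Task 2: enforced turn-taking (fewer gestures, longer utterances).
task1 = rng.normal([0.30, 0.20, 8.0, 2.5], [0.05, 0.05, 1.0, 0.4], (20, 4))
task2 = rng.normal([0.15, 0.35, 3.0, 5.0], [0.05, 0.05, 1.0, 0.4], (20, 4))
X = np.vstack([task1, task2])

# Z-score each feature so distance is not dominated by the largest-scale column.
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# Plain k-means with k=2, initialized from one session of each condition.
centers = Z[[0, 20]].copy()
for _ in range(50):
    d = np.linalg.norm(Z[:, None] - centers[None], axis=2)  # (40, 2) distances
    labels = d.argmin(axis=1)
    centers = np.array([Z[labels == k].mean(axis=0) for k in (0, 1)])

# Agreement between recovered clusters and the true task condition,
# taking the better of the two possible label assignments.
true = np.array([0] * 20 + [1] * 20)
agreement = max((labels == true).mean(), (labels != true).mean())
print(f"cluster/task agreement: {agreement:.2f}")
```

With profiles this well separated, the clusters recover the task conditions almost perfectly; on real multimodal tracking data the separation would depend on feature choice and normalization.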


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6823/12430992/8b7f077cf920/sensors-25-05498-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6823/12430992/d51d40376174/sensors-25-05498-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6823/12430992/6f088f3366db/sensors-25-05498-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6823/12430992/cff83759d4e8/sensors-25-05498-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6823/12430992/b6d92efc1eb9/sensors-25-05498-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6823/12430992/956c7f49539e/sensors-25-05498-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6823/12430992/5dbc78051152/sensors-25-05498-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6823/12430992/f7aa12989243/sensors-25-05498-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6823/12430992/7814fe7d7980/sensors-25-05498-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6823/12430992/429e7948868f/sensors-25-05498-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6823/12430992/aa2a4d5d6926/sensors-25-05498-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6823/12430992/9bca299e9ac2/sensors-25-05498-g012.jpg

Similar Articles

1
When Action Speaks Louder than Words: Exploring Non-Verbal and Paraverbal Features in Dyadic Collaborative VR.
Sensors (Basel). 2025 Sep 4;25(17):5498. doi: 10.3390/s25175498.
2
Prescription of Controlled Substances: Benefits and Risks
3
Sexual Harassment and Prevention Training
4
Using Pupillometry in Virtual Reality as a Tool for Speech-in-Noise Research.
Ear Hear. 2025 Jul 2. doi: 10.1097/AUD.0000000000001692.
5
Object Manipulation in Physically Constrained Workplaces: Remote Collaboration with Extended Reality.
IISE Trans Occup Ergon Hum Factors. 2025 Apr 2:1-14. doi: 10.1080/24725838.2025.2484731.
6
Policy shaping based on the learned preferences of others accounts for risky decision-making under social observation.
Elife. 2025 Sep 12;13:RP102228. doi: 10.7554/eLife.102228.
7
The agreement of phonetic transcriptions between paediatric speech and language therapists transcribing a disordered speech sample.
Int J Lang Commun Disord. 2024 Sep-Oct;59(5):1981-1995. doi: 10.1111/1460-6984.13043. Epub 2024 Jun 8.
8
Short-Term Memory Impairment
9
Post-pandemic planning for maternity care for local, regional, and national maternity systems across the four nations: a mixed-methods study.
Health Soc Care Deliv Res. 2025 Sep;13(35):1-25. doi: 10.3310/HHTE6611.
10
Integrating GPT-Based AI into Virtual Patients to Facilitate Communication Training Among Medical First Responders: Usability Study of Mixed Reality Simulation.
JMIR Form Res. 2024 Dec 11;8:e58623. doi: 10.2196/58623.
