


Data Collection Framework for Context-Aware Virtual Reality Application Development in Unity: Case of Avatar Embodiment.

Affiliations

Department of Digital Media, Ajou University, Suwon 16499, Korea.

Smart Mobility Lab, B2B Advanced Technology Center, LG Electronics, Seoul 07796, Korea.

Publication Info

Sensors (Basel). 2022 Jun 19;22(12):4623. doi: 10.3390/s22124623.

DOI: 10.3390/s22124623
PMID: 35746405
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9228658/
Abstract

Virtual Reality (VR) has been adopted as a leading technology for the metaverse, yet most previous VR systems provide one-size-fits-all experiences to users. Context-awareness in VR enables personalized experiences in the metaverse, such as improved embodiment and deeper integration of the real world and virtual worlds. Personalization requires context data from diverse sources. We proposed a reusable and extensible context data collection framework, ManySense VR, which unifies data collection from diverse sources for VR applications. ManySense VR was implemented in Unity based on extensible context data managers collecting data from data sources such as an eye tracker, electroencephalogram, pulse, respiration, galvanic skin response, facial tracker, and Open Weather Map. We used ManySense VR to build a context-aware embodiment VR scene where the user's avatar is synchronized with their bodily actions. The performance evaluation of ManySense VR showed good performance in processor usage, frame rate, and memory footprint. Additionally, we conducted a qualitative formative evaluation by interviewing five developers (two males and three females; mean age: 22) after they used and extended ManySense VR. The participants expressed advantages (e.g., ease-of-use, learnability, familiarity, quickness, and extensibility), disadvantages (e.g., inconvenient/error-prone data query method and lack of diversity in callback methods), future application ideas, and improvement suggestions that indicate potential and can guide future development. In conclusion, ManySense VR is an efficient tool for researchers and developers to easily integrate context data into their Unity-based VR applications for the metaverse.
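The abstract describes extensible context data managers that unify pull-style queries and push-style callbacks over heterogeneous sensors (eye tracker, EEG, pulse, weather API, etc.). As a language-neutral illustration of that pattern only (the actual framework is implemented in C# inside Unity, and every class, method, and source name below is hypothetical), the core idea might be sketched as:

```python
from typing import Any, Callable, Dict, List


class ContextDataManager:
    """One manager per data source (e.g. an eye tracker or pulse sensor)."""

    def __init__(self, source_name: str) -> None:
        self.source_name = source_name
        self._latest: Dict[str, Any] = {}          # most recent sample per key
        self._callbacks: List[Callable[[str, Any], None]] = []

    def push(self, key: str, value: Any) -> None:
        """Called by the device driver when a new sample arrives."""
        self._latest[key] = value
        for callback in self._callbacks:           # push-style delivery
            callback(key, value)

    def query(self, key: str) -> Any:
        """Pull-style access: return the most recent sample, if any."""
        return self._latest.get(key)

    def subscribe(self, callback: Callable[[str, Any], None]) -> None:
        """Register a callback invoked on every new sample."""
        self._callbacks.append(callback)


class ContextHub:
    """Unifies access to all registered managers under one interface."""

    def __init__(self) -> None:
        self._managers: Dict[str, ContextDataManager] = {}

    def register(self, manager: ContextDataManager) -> None:
        self._managers[manager.source_name] = manager

    def query(self, source: str, key: str) -> Any:
        return self._managers[source].query(key)


# Usage: an application registers a source, subscribes, and queries.
hub = ContextHub()
pulse = ContextDataManager("pulse")
hub.register(pulse)

events: List[Any] = []
pulse.subscribe(lambda key, value: events.append((key, value)))
pulse.push("bpm", 72)
print(hub.query("pulse", "bpm"))  # most recent pulse sample
```

The split between `query` (pull) and `subscribe` (push) mirrors the two access styles the interviewed developers commented on; the paper's noted criticism of an "inconvenient/error-prone data query method" suggests the string-keyed lookup shown here is the fragile part of such designs.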


Figures (g001–g015):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f739/9228658/ab9d23c23510/sensors-22-04623-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f739/9228658/a7917cbf8db6/sensors-22-04623-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f739/9228658/577ba2e68331/sensors-22-04623-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f739/9228658/cdb508d79d57/sensors-22-04623-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f739/9228658/3916a18789e3/sensors-22-04623-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f739/9228658/b4f3dce0241e/sensors-22-04623-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f739/9228658/bddf3c8f92fc/sensors-22-04623-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f739/9228658/24253effe678/sensors-22-04623-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f739/9228658/1d2be62d148b/sensors-22-04623-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f739/9228658/1314c0656512/sensors-22-04623-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f739/9228658/7ddcccd2aeb3/sensors-22-04623-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f739/9228658/af715873c274/sensors-22-04623-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f739/9228658/eeb06457bf94/sensors-22-04623-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f739/9228658/64d33b188ab3/sensors-22-04623-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f739/9228658/521d9ab46fc3/sensors-22-04623-g015.jpg

Similar Articles

1
Data Collection Framework for Context-Aware Virtual Reality Application Development in Unity: Case of Avatar Embodiment.
Sensors (Basel). 2022 Jun 19;22(12):4623. doi: 10.3390/s22124623.
2
Facial Motion Capture System Based on Facial Electromyogram and Electrooculogram for Immersive Social Virtual Reality Applications.
Sensors (Basel). 2023 Mar 29;23(7):3580. doi: 10.3390/s23073580.
3
Stepping into the Right Shoes: The Effects of User-Matched Avatar Ethnicity and Gender on Sense of Embodiment in Virtual Reality.
IEEE Trans Vis Comput Graph. 2024 May;30(5):2434-2443. doi: 10.1109/TVCG.2024.3372067. Epub 2024 Apr 19.
4
TouchMark: Partial Tactile Feedback Design for Upper Limb Rehabilitation in Virtual Reality.
IEEE Trans Vis Comput Graph. 2024 Nov;30(11):7430-7440. doi: 10.1109/TVCG.2024.3456173. Epub 2024 Oct 10.
5
Exploring the Relationship Between Attribute Discrepancy and Avatar Embodiment in Immersive Social Virtual Reality.
Cyberpsychol Behav Soc Netw. 2023 Oct 18. doi: 10.1089/cyber.2023.0210.
6
Customizing the human-avatar mapping based on EEG error related potentials.
J Neural Eng. 2024 Mar 27;21(2). doi: 10.1088/1741-2552/ad2c02.
7
Being an Avatar "for Real": A Survey on Virtual Embodiment in Augmented Reality.
IEEE Trans Vis Comput Graph. 2022 Dec;28(12):5071-5090. doi: 10.1109/TVCG.2021.3099290. Epub 2022 Oct 26.
8
Avatar error in your favor: Embodied avatars can fix users' mistakes without them noticing.
PLoS One. 2023 Jan 20;18(1):e0266212. doi: 10.1371/journal.pone.0266212. eCollection 2023.
9
The surgical metaverse.
Cir Esp (Engl Ed). 2024 Jul;102 Suppl 1:S61-S65. doi: 10.1016/j.cireng.2023.11.009. Epub 2023 Nov 18.
10
NotifiVR: Exploring Interruptions and Notifications in Virtual Reality.
IEEE Trans Vis Comput Graph. 2018 Apr;24(4):1447-1456. doi: 10.1109/TVCG.2018.2793698.

Cited By

1
Semi-Supervised Clustering-Based DANA Algorithm for Data Gathering and Disease Detection in Healthcare Wireless Sensor Networks (WSN).
Sensors (Basel). 2023 Dec 19;24(1):18. doi: 10.3390/s24010018.

References

1
Usability Testing of Virtual Reality Applications-The Pilot Study.
Sensors (Basel). 2022 Feb 10;22(4):1342. doi: 10.3390/s22041342.
2
Embodiment in Virtual Reality Intensifies Emotional Responses to Virtual Stimuli.
Front Psychol. 2021 Sep 6;12:674179. doi: 10.3389/fpsyg.2021.674179. eCollection 2021.
3
Virtual Reality Therapy in Mental Health.
Annu Rev Clin Psychol. 2021 May 7;17:495-519. doi: 10.1146/annurev-clinpsy-081219-115923. Epub 2021 Feb 19.
4
EEG-Based Emotion Classification for Alzheimer's Disease Patients Using Conventional Machine Learning and Recurrent Neural Network Models.
Sensors (Basel). 2020 Dec 16;20(24):7212. doi: 10.3390/s20247212.
5
Embodiment Empowers Empathy in Virtual Reality.
Cyberpsychol Behav Soc Netw. 2020 Nov;23(11):725-726. doi: 10.1089/cyber.2020.29199.editorial. Epub 2020 Oct 27.
6
Controlling the Sense of Embodiment for Virtual Avatar Applications: Methods and Empirical Study.
JMIR Serious Games. 2020 Sep 22;8(3):e21879. doi: 10.2196/21879.
7
Construction of the Virtual Embodiment Questionnaire (VEQ).
IEEE Trans Vis Comput Graph. 2020 Dec;26(12):3546-3556. doi: 10.1109/TVCG.2020.3023603. Epub 2020 Nov 10.
8
EEG-Based BCI Emotion Recognition: A Survey.
Sensors (Basel). 2020 Sep 7;20(18):5083. doi: 10.3390/s20185083.
9
Presence and Cybersickness in Virtual Reality Are Negatively Related: A Review.
Front Psychol. 2019 Feb 4;10:158. doi: 10.3389/fpsyg.2019.00158. eCollection 2019.
10
Affective computing in virtual reality: emotion recognition from brain and heartbeat dynamics using wearable sensors.
Sci Rep. 2018 Sep 12;8(1):13657. doi: 10.1038/s41598-018-32063-4.