
Stereosonic vision: Exploring visual-to-auditory sensory substitution mappings in an immersive virtual reality navigation paradigm.

Affiliations

Department of Engineering Science, University of Oxford, Oxford, United Kingdom.

Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom.

Publication information

PLoS One. 2018 Jul 5;13(7):e0199389. doi: 10.1371/journal.pone.0199389. eCollection 2018.

Abstract

Sighted people predominantly use vision to navigate spaces, and sight loss has negative consequences for independent navigation and mobility. The recent proliferation of devices that can extract 3D spatial information from visual scenes opens up the possibility of using such mobility-relevant information to assist blind and visually impaired people by presenting it through modalities other than vision. In this work, we present two new methods for encoding visual scenes using spatial audio: simulated echolocation and distance-dependent hum volume modulation. We implemented both methods in a virtual reality (VR) environment and tested them using a 3D motion-tracking device. This allowed participants to physically walk through virtual mobility scenarios, generating data on real locomotion behaviour. Blindfolded sighted participants completed two tasks: maze navigation and obstacle avoidance. Results were measured against a visual baseline in which participants performed the same two tasks without blindfolds. Task completion time, speed and number of collisions were used as indicators of successful navigation, with additional metrics exploring detailed dynamics of performance. In both tasks, participants were able to navigate using only audio information after minimal instruction. While participants were 65% slower using audio compared to the visual baseline, they reduced their audio navigation time by an average of 21% over just 6 trials. Hum volume modulation proved over 20% faster than simulated echolocation in both mobility scenarios, and participants also showed the greatest improvement with this sonification method. Nevertheless, we speculate that simulated echolocation remains worth exploring, as it provides more spatial detail and could therefore be more useful in more complex environments. The fact that participants were intuitively able to navigate space successfully with two new visual-to-audio mappings for conveying spatial information motivates the further exploration of these and other mappings, with the goal of assisting blind and visually impaired individuals with independent mobility.
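The abstract names distance-dependent hum volume modulation but this page does not give its parameters. As a rough illustration only, here is a minimal Python sketch of one plausible distance-to-gain mapping (the sample rate, hum frequency, sensing range and the linear gain curve are all assumptions, not the authors' published method):

    # Hypothetical sketch of distance-dependent hum volume modulation:
    # the hum gets louder as the sensed obstacle gets closer.
    import numpy as np

    SAMPLE_RATE = 44_100          # audio sample rate in Hz (assumption)
    HUM_FREQ = 220.0              # hum tone frequency in Hz (assumption)
    MIN_DIST, MAX_DIST = 0.3, 5.0 # sensing range in metres (assumption)

    def distance_to_gain(distance_m: float) -> float:
        """Map obstacle distance to hum amplitude: 1.0 when nearest, 0.0 at range limit."""
        d = float(np.clip(distance_m, MIN_DIST, MAX_DIST))
        return (MAX_DIST - d) / (MAX_DIST - MIN_DIST)

    def hum_block(distance_m: float, duration_s: float = 0.05) -> np.ndarray:
        """Generate one short block of the hum, scaled by the distance gain."""
        t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
        return distance_to_gain(distance_m) * np.sin(2 * np.pi * HUM_FREQ * t)

    if __name__ == "__main__":
        for d in (0.5, 1.0, 2.5, 5.0):
            print(f"distance {d:.1f} m -> gain {distance_to_gain(d):.2f}")

In a real system this gain would be updated per frame from the depth sensor and rendered as spatial audio; the linear curve above is just the simplest choice for exposition.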


Fig 1. https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1dea/6033394/6f7094ff7790/pone.0199389.g001.jpg
