Department of Computer Science, Dartmouth College, Hanover, New Hampshire, United States of America.
Department of Psychological and Brain Sciences, Dartmouth College, Hanover, New Hampshire, United States of America.
PLoS One. 2019 Jan 16;14(1):e0210630. doi: 10.1371/journal.pone.0210630. eCollection 2019.
People typically rely heavily on visual information when finding their way to unfamiliar locations. For individuals with reduced vision, a variety of navigational tools are available to assist with this task. However, for wayfinding in unfamiliar indoor environments, the applicability of existing tools is limited. One potential approach is to enhance visual information about the location and content of existing signage in the environment. With this aim, we developed a prototype software application, which runs on a consumer head-mounted augmented reality (AR) device, to assist visually impaired users with sign-reading. The sign-reading assistant identifies real-world text (e.g., signs and room numbers) on command, highlights the text location, converts it to high-contrast AR lettering, and optionally reads the content aloud via text-to-speech. We assessed the usability of this application in a behavioral experiment. Participants with simulated visual impairment were asked to locate a particular office within a hallway, either with or without AR assistance (referred to as the AR group and control group, respectively). Subjective assessments indicated that participants in the AR group found the application helpful for this task, and an analysis of walking paths indicated that these participants took more direct routes than the control group. However, participants in the AR group also walked more slowly and took more time to complete the task than the control group. The results point to several specific future goals for usability and system performance in AR-based assistive tools.
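To make the described pipeline concrete, the sketch below approximates the stages named in the abstract (detect text on command, highlight its location, re-render it as high-contrast lettering, and optionally speak it aloud). This is an illustrative desktop approximation only, not the authors' headset implementation; the libraries used here (OpenCV, Tesseract OCR via pytesseract, and pyttsx3 for text-to-speech) are assumed substitutes for whatever the prototype actually uses.

```python
# Illustrative sketch: mimics the sign-reading assistant's stages on a single
# camera frame. The AR headset pipeline in the paper is not publicly specified;
# these libraries are stand-ins chosen for the example.
import cv2
import pytesseract
import pyttsx3


def read_signs(frame, speak=False, min_conf=60.0):
    """Detect text in a frame, overlay high-contrast labels, optionally speak it."""
    data = pytesseract.image_to_data(frame, output_type=pytesseract.Output.DICT)
    recovered = []
    for i, word in enumerate(data["text"]):
        word = word.strip()
        if not word or float(data["conf"][i]) < min_conf:
            continue  # skip empty or low-confidence detections
        x, y, w, h = (data[k][i] for k in ("left", "top", "width", "height"))
        # Highlight the detected text location.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 255), 2)
        # Re-render the text as large, high-contrast lettering above the sign.
        cv2.rectangle(frame, (x, max(0, y - 30)), (x + w, y), (0, 0, 0), -1)
        cv2.putText(frame, word, (x, max(12, y - 8)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (255, 255, 255), 2)
        recovered.append(word)
    if speak and recovered:
        # Optionally read the recovered text aloud via text-to-speech.
        engine = pyttsx3.init()
        engine.say(" ".join(recovered))
        engine.runAndWait()
    return frame
```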