Fusco Giovanni, Cheraghi Seyed Ali, Neat Leo, Coughlan James M
The Smith-Kettlewell Eye Research Institute, San Francisco, CA.
Department of Computer Science and Engineering, University of California, Santa Cruz, CA.
Comput Help People Spec Needs. 2020 Sep;12376:485-494. doi: 10.1007/978-3-030-58796-3_56. Epub 2020 Sep 4.
Indoor navigation is a major challenge for people with visual impairments, who often lack access to visual cues such as informational signs, landmarks and structural features that people with normal vision rely on for wayfinding. Building on our recent work on a computer vision-based localization approach that runs in real time on a smartphone, we describe an accessible wayfinding iOS app we have created that provides turn-by-turn directions to a desired destination. The localization approach combines dead reckoning obtained using visual-inertial odometry (VIO) with information about the user's location in the environment from informational sign detections and map constraints. We explain how we estimate the user's distance from Exit signs appearing in the image, describe new improvements in the sign detection and range estimation algorithms, and outline our algorithm for determining appropriate turn-by-turn directions.
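The abstract mentions estimating the user's distance from detected Exit signs in the camera image. The paper's actual range estimation algorithm is not given here, but under a standard pinhole camera model, a sign of known physical size yields a distance estimate from its apparent size in pixels; the focal length and sign height below are illustrative values, not figures from the paper:

```python
# Hypothetical sketch of sign-based range estimation with a pinhole
# camera model; the authors' actual algorithm may differ. Assumes the
# camera focal length (in pixels) and the Exit sign's physical height
# are known, e.g. from device calibration and building standards.

def estimate_range(focal_px: float, sign_height_m: float, sign_height_px: float) -> float:
    """Distance to the sign in meters from its apparent height in the image."""
    if sign_height_px <= 0:
        raise ValueError("detected sign height must be positive")
    # Pinhole relation: h_px = focal_px * H / Z  =>  Z = focal_px * H / h_px
    return focal_px * sign_height_m / sign_height_px

# Example: a 0.20 m tall sign that appears 50 px tall through a lens
# with a 1500 px focal length lies about 6 m away.
print(estimate_range(1500.0, 0.20, 50.0))
```

An absolute range fix like this, combined with the sign's known position on the map, is the kind of observation that can correct the drift that accumulates in VIO dead reckoning.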