Ghaderi Viviane S, Mulas Marcello, Pereira Vinicius Felisberto Santos, Everding Lukas, Weikersdorfer David, Conradt Jorg
Annu Int Conf IEEE Eng Med Biol Soc. 2015;2015:3371-4. doi: 10.1109/EMBC.2015.7319115.
We present a prototype of a wearable mobility device that assists blind users with navigation and object avoidance via auditory vision substitution. The system uses two dynamic vision sensors and event-based information processing techniques to extract depth information. The 3D visual input is then processed with three different strategies and converted into a 3D output sound using an individualized head-related transfer function. The performance of the device under each processing strategy was evaluated in initial tests with ten subjects. The outcomes demonstrate promising performance after training times of only a few minutes, owing to the minimal encoding of the vision sensors' outputs, which are translated into simple sound patterns that users can easily interpret. The envisioned system will support efficient real-time algorithms on a hands-free, lightweight device with exceptional battery lifetime.
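The core idea of the sonification step (obstacle direction and depth mapped to a spatialized sound) can be sketched minimally. The paper renders sound through an individualized head-related transfer function; the function below is a hypothetical stand-in that uses a constant-power interaural level difference instead, with depth encoded as loudness and pitch. All names and the specific encoding are illustrative assumptions, not the authors' implementation.

```python
import math

def obstacle_to_stereo_cue(azimuth_deg, depth_m, max_depth_m=5.0):
    """Map an obstacle's direction and distance to simple stereo cues.

    Hypothetical sketch: a constant-power pan stands in for the
    individualized HRTF used in the paper. Returns
    (left_gain, right_gain, pitch_hz); nearer obstacles sound louder
    and higher-pitched, azimuth sets the left/right balance.
    """
    # Clamp inputs to a plausible working range (assumed limits).
    azimuth = max(-90.0, min(90.0, azimuth_deg))
    depth = max(0.1, min(max_depth_m, depth_m))

    # Constant-power pan: azimuth -90 deg (left) .. +90 deg (right).
    pan = (azimuth + 90.0) / 180.0             # 0 .. 1
    left_gain = math.cos(pan * math.pi / 2)
    right_gain = math.sin(pan * math.pi / 2)

    # Nearer obstacles -> louder and higher pitch (assumed encoding,
    # in the spirit of the "simple sound patterns" the paper describes).
    loudness = 1.0 - depth / max_depth_m       # 1 when near, 0 at max range
    pitch_hz = 220.0 + 660.0 * loudness        # 220 Hz .. 880 Hz sweep

    return left_gain * loudness, right_gain * loudness, pitch_hz
```

For example, an obstacle straight ahead (`azimuth_deg=0`) yields equal left and right gains, while an obstacle to the right raises the right-channel gain; moving the obstacle closer raises both loudness and pitch.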