Lee Kyungjun, Hong Jonggi, Pimento Simone, Jarjue Ebrima, Kacorri Hernisa
University of Maryland, College Park, USA.
ASSETS. 2019 Oct;2019:83-95. doi: 10.1145/3308561.3353799.
For people with visual impairments, photography is essential in identifying objects through remote sighted help and image recognition apps. This is especially the case for teachable object recognizers, where recognition models are trained on a user's photos. Here, we propose real-time feedback for communicating the location of an object of interest in the camera frame. Our audio-haptic feedback is powered by a deep learning model that estimates the object's center location based on its proximity to the user's hand. To evaluate our approach, we conducted a user study in the lab, where participants with visual impairments (N = 9) used our feedback to train and test their object recognizer in vanilla and cluttered environments. We found that very few photos did not include the object (2% in the vanilla and 8% in the cluttered environment), and the recognition performance was promising even for participants with no prior camera experience. Participants tended to trust the feedback even though they knew it could be wrong. Our cluster analysis indicates that better feedback is associated with photos that include the entire object. Our results provide insights into factors that can degrade feedback and recognition performance in teachable interfaces.
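As an illustration of the kind of feedback loop the abstract describes, the sketch below (Python) selects the detected object closest to the user's hand and converts its offset from the frame center into audio-haptic cues. The helper inputs (hand_center, object_centers) and the specific pitch, pan, and vibration mapping are assumptions for illustration, not the authors' implementation.

# Illustrative sketch only: the inputs and the feedback mapping below are
# assumptions, not the system described in the paper.
import math

def select_target(hand_center, object_centers):
    """Pick the detected object whose center is closest to the user's hand."""
    return min(object_centers, key=lambda c: math.dist(hand_center, c))

def feedback_for_frame(frame_size, hand_center, object_centers):
    """Map the target object's offset from the frame center to audio-haptic cues."""
    if not object_centers:
        return None  # no candidate objects detected in this frame
    width, height = frame_size
    target = select_target(hand_center, object_centers)
    # Normalized offset in [-1, 1]; 0 means the object is centered in the frame.
    dx = (target[0] - width / 2) / (width / 2)
    dy = (target[1] - height / 2) / (height / 2)
    distance = min(1.0, math.hypot(dx, dy))
    return {
        # Assumed mapping: pan audio left/right with horizontal offset,
        "audio_pan": dx,
        # raise pitch as the object drifts from the vertical center,
        "audio_pitch_hz": 440 + 220 * abs(dy),
        # and vibrate more strongly the further the object is off-center.
        "haptic_intensity": distance,
    }

# Example: a 640x480 frame, hand near the lower right, two candidate objects.
print(feedback_for_frame((640, 480), (500, 400), [(320, 240), (520, 380)]))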