
Revisiting Blind Photography in the Context of Teachable Object Recognizers.

Authors

Lee Kyungjun, Hong Jonggi, Pimento Simone, Jarjue Ebrima, Kacorri Hernisa

Affiliations

University of Maryland College Park, USA.

Publication

ASSETS. 2019 Oct;2019:83-95. doi: 10.1145/3308561.3353799.

Abstract

For people with visual impairments, photography is essential in identifying objects through remote sighted help and image recognition apps. This is especially the case for teachable object recognizers, where recognition models are trained on users' photos. Here, we propose real-time feedback for communicating the location of an object of interest in the camera frame. Our audio-haptic feedback is powered by a deep learning model that estimates the object center location based on its proximity to the user's hand. To evaluate our approach, we conducted a user study in the lab, where participants with visual impairments (N = 9) used our feedback to train and test their object recognizer in vanilla and cluttered environments. We found that very few photos did not include the object (2% in the vanilla and 8% in the cluttered environment), and the recognition performance was promising even for participants with no prior camera experience. Participants tended to trust the feedback even though they knew it could be wrong. Our cluster analysis indicates that better feedback is associated with photos that include the entire object. Our results provide insights into factors that can degrade feedback and recognition performance in teachable interfaces.
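The core idea of the feedback loop described above is mapping an estimated object center to a coarse audio-haptic cue that guides framing. A minimal sketch of that mapping (not the authors' implementation; the function name, the `margin` parameter, and the textual cues are illustrative assumptions, and the object-center estimate would come from the hand-conditioned deep learning model):

```python
def center_feedback(cx, cy, width, height, margin=0.15):
    """Map a predicted object center (in pixels) to a coarse
    directional cue, as a stand-in for audio-haptic feedback.

    (cx, cy): estimated object-center location in the camera frame.
    margin: fraction of the frame around the center treated as "good enough".
    """
    dx = cx / width - 0.5   # normalized horizontal offset from frame center
    dy = cy / height - 0.5  # normalized vertical offset from frame center
    cues = []
    if dx < -margin:
        cues.append("move left")    # object is left of center
    elif dx > margin:
        cues.append("move right")
    if dy < -margin:
        cues.append("tilt up")      # object is above center
    elif dy > margin:
        cues.append("tilt down")
    return " and ".join(cues) if cues else "hold steady"
```

For example, `center_feedback(320, 240, 640, 480)` reports a centered object, while an off-center estimate yields a directional cue such as "move left". In a real pipeline this would run per frame, with the cue rendered as speech, tones, or vibration rather than text.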

Similar Articles

Hands Holding Clues for Object Recognition in Teachable Machines.
Proc SIGCHI Conf Hum Factor Comput Syst. 2019 May;2019. doi: 10.1145/3290605.3300566.

Hand-Priming in Object Localization for Assistive Egocentric Vision.
IEEE Winter Conf Appl Comput Vis. 2020 Mar;2020:3411-3421. doi: 10.1109/wacv45572.2020.9093353. Epub 2020 May 14.

An Audio-Based 3D Spatial Guidance AR System for Blind Users.
Comput Help People Spec Needs. 2020 Sep;12376:475-484. doi: 10.1007/978-3-030-58796-3_55. Epub 2020 Sep 4.

Leveraging Hand-Object Interactions in Assistive Egocentric Vision.
IEEE Trans Pattern Anal Mach Intell. 2023 Jun;45(6):6820-6831. doi: 10.1109/TPAMI.2021.3123303. Epub 2023 May 5.

References Cited in This Article

Hands Holding Clues for Object Recognition in Teachable Machines.
Proc SIGCHI Conf Hum Factor Comput Syst. 2019 May;2019. doi: 10.1145/3290605.3300566.

Fully Convolutional Networks for Semantic Segmentation.
IEEE Trans Pattern Anal Mach Intell. 2017 Apr;39(4):640-651. doi: 10.1109/TPAMI.2016.2572683. Epub 2016 May 24.

Visual object understanding.
Nat Rev Neurosci. 2004 Apr;5(4):291-303. doi: 10.1038/nrn1364.
