Manzoni Matteo, Mascetti Sergio, Ahmetovic Dragan, Crabb Ryan, Coughlan James M
Department of Computer Science, Università degli Studi di Milano, 20133 Milan, Italy.
The Smith-Kettlewell Eye Research Institute, San Francisco, CA 94115, USA.
IEEE Access. 2025;13:84038-84056. doi: 10.1109/access.2025.3566286. Epub 2025 May 1.
For individuals who are blind or have low vision, tactile maps provide essential spatial information but are limited in the amount of data they can convey. Digitally augmented tactile maps extend these capabilities with audio feedback, combining the tactile information provided by the map with an audio description of the touched elements. In this context, we explore an embodied interaction paradigm that augments tactile maps with conversational interaction based on Large Language Models, enabling users to obtain answers to arbitrary questions about the map. We analyze the types of questions users are interested in asking, engineer the Large Language Model's prompt to provide reliable answers, and evaluate the resulting system in a study with 10 participants, examining how users interact with the system, its usability, and the overall user experience.