Department of Engineering Mathematics, University of Bristol, Bristol, United Kingdom.
SoftLab, Bristol Robotics Laboratory, Bristol, United Kingdom.
PLoS One. 2024 Mar 26;19(3):e0299213. doi: 10.1371/journal.pone.0299213. eCollection 2024.
Multimodal perception is the predominant means by which individuals experience and interact with the world. However, sensory dysfunction or loss can significantly impede this process. In such cases, cross-modality research offers valuable insight into how these sensory deficits can be compensated for through sensory substitution. Although sight and hearing are both used to estimate the distance to an object (e.g., by visual size and sound volume), and distance perception is an important element of navigation and guidance, it has not been widely studied in cross-modal research. We investigate the relationship between audio and vibrotactile frequencies (in the ranges 47-2,764 Hz and 10-99 Hz, respectively) and distances uniformly distributed in the range 1-12 m. In our experiments, participants mapped a distance (represented by an image of a model at that distance) to a frequency by adjusting a virtual tuning knob. The results revealed that the majority (more than 76%) of participants demonstrated a strong negative monotonic relationship between frequency and distance in both the vibrotactile domain (well described by a natural log function) and the auditory domain (well described by an exponential function). However, a subgroup of participants showed the opposite, a positive linear relationship between frequency and distance. This strong cross-modal sensory correlation could contribute to the development of assistive robotic technologies and devices that augment human perception. This work provides a foundation for future assisted human-robot interaction (HRI) applications in which a mapping between distance and frequency is needed, for example for people with vision or hearing loss, drivers experiencing loss of focus or delayed responses, doctors performing teleoperated surgery, and users in augmented reality (AR) or virtual reality (VR) environments.
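To make the reported relationship concrete, the following is a minimal illustrative sketch of decreasing log- and exponential-form distance-to-frequency mappings. The coefficients are not the authors' fitted values; they are assumptions chosen only so that each curve spans the stated frequency range (10-99 Hz vibrotactile, 47-2,764 Hz audio) across the 1-12 m distance range.

```python
import math

# Hypothetical coefficients: anchor each mapping at the endpoints
# d = 1 m (maximum frequency) and d = 12 m (minimum frequency).
D_MIN, D_MAX = 1.0, 12.0


def vibro_freq(d: float) -> float:
    """Vibrotactile channel, 10-99 Hz, natural-log form (decreasing)."""
    b = (99.0 - 10.0) / math.log(D_MAX / D_MIN)
    return 99.0 - b * math.log(d / D_MIN)


def audio_freq(d: float) -> float:
    """Auditory channel, 47-2,764 Hz, exponential form (decreasing)."""
    k = math.log(2764.0 / 47.0) / (D_MAX - D_MIN)
    return 2764.0 * math.exp(-k * (d - D_MIN))


print(round(vibro_freq(1.0)), round(vibro_freq(12.0)))  # 99 10
print(round(audio_freq(1.0)), round(audio_freq(12.0)))  # 2764 47
```

Both functions are monotonically decreasing, matching the negative frequency-distance relationship reported for the majority of participants; the actual fitted parameters would come from the experimental data.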