Department of Electronic Systems, Norwegian University of Science and Technology, NO-7491 Trondheim, Norway.
Consciousness Lab, Institute of Psychology, Jagiellonian University, 30-060 Kraków, Poland.
Sensors (Basel). 2021 Nov 5;21(21):7351. doi: 10.3390/s21217351.
The successful development of a system realizing color sonification would enable auditory representation of the visual environment. The primary beneficiaries of such a system would be people who cannot directly access visual information: the visually impaired community. Despite the plethora of sensory substitution devices, developing systems that provide intuitive color sonification remains a challenge. This paper presents the design considerations, development, and usability audit of a sensory substitution device that converts spatial color information into soundscapes. The implemented wearable system uses a dedicated color space and continuously generates natural, spatialized sounds based on the information acquired from a camera. We developed two head-mounted prototype devices and two graphical user interface (GUI) versions. The first GUI is dedicated to researchers, and the second has been designed to be easily accessible for visually impaired persons. Finally, we ran fundamental usability tests to evaluate the new spatial color sonification algorithm and to compare the two prototypes. Furthermore, we propose recommendations for the development of the next iteration of the system.
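To make the general idea of spatial color sonification concrete, the sketch below maps camera-frame colors to a stereo soundscape. It is not the authors' algorithm or their dedicated color space; the specific mappings (hue to frequency, brightness to loudness, horizontal position to left/right panning) and the frequency range are assumptions chosen purely for illustration.

```python
# Minimal sketch of spatial color sonification (illustrative assumptions only):
# hue -> pitch, brightness -> amplitude, image column -> stereo panning.
import numpy as np
import colorsys

SAMPLE_RATE = 22050
DURATION = 0.5  # seconds of audio generated per analysed frame

def sonify_frame(frame_rgb: np.ndarray) -> np.ndarray:
    """Turn an HxWx3 RGB frame (values in [0, 1]) into a stereo soundscape.

    Each image column contributes one sine tone: the column's mean hue sets
    the tone's frequency, its mean brightness sets the amplitude, and its
    horizontal position sets the left/right panning.
    """
    h, w, _ = frame_rgb.shape
    t = np.linspace(0.0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)
    stereo = np.zeros((t.size, 2))

    for col in range(w):
        r, g, b = frame_rgb[:, col].mean(axis=0)
        hue, _, value = colorsys.rgb_to_hsv(r, g, b)
        freq = 220.0 + hue * 660.0          # assumed pitch range: 220-880 Hz
        tone = value * np.sin(2 * np.pi * freq * t)
        pan = col / max(w - 1, 1)           # 0 = far left, 1 = far right
        stereo[:, 0] += (1.0 - pan) * tone  # left channel
        stereo[:, 1] += pan * tone          # right channel

    peak = np.abs(stereo).max()
    return stereo / peak if peak > 0 else stereo

if __name__ == "__main__":
    # A synthetic frame stands in for camera input: red on the left, blue on the right.
    frame = np.zeros((4, 8, 3))
    frame[:, :4, 0] = 1.0   # red half
    frame[:, 4:, 2] = 1.0   # blue half
    audio = sonify_frame(frame)
    print(audio.shape)       # (11025, 2) stereo samples ready for playback
```

In a wearable system of the kind described above, such a routine would run continuously on successive camera frames, whereas the paper's actual implementation uses natural, spatialized sounds rather than the simple sine tones assumed here.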