Shvadron Shira, Snir Adi, Maimon Amber, Yizhar Or, Harel Sapir, Poradosu Keinan, Amedi Amir
Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel.
The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel.
Front Hum Neurosci. 2023 Mar 2;17:1058617. doi: 10.3389/fnhum.2023.1058617. eCollection 2023.
Current advancements in both technology and science allow us to manipulate our sensory modalities in new and unexpected ways. In the present study, we explore the potential of expanding what we perceive through our natural senses by utilizing a visual-to-auditory sensory substitution device (SSD), the EyeMusic, an algorithm that converts images to sound. The EyeMusic was initially developed to allow blind individuals to create a spatial representation of information arriving from a video feed at a slow sampling rate. In this study, we aimed to use the EyeMusic to convey information from the areas outside the visual field of sighted individuals. In this initial proof-of-concept study, we tested the ability of sighted subjects to combine visual information with surrounding auditory sonification representing visual information. Participants were tasked with recognizing and correctly localizing stimuli, with sound representing the areas outside the standard human visual field. Specifically, participants were asked to report the identity of shapes as well as their spatial location (front/right/back/left), so that successful performance required combining visual (90° frontal) and auditory (the remaining 270°) input; content in both vision and audition was presented in a sweeping clockwise motion around the participant. We found that participants performed well above chance after a brief 1-h online training session and a single on-site training session averaging 20 min. In some cases, they could even draw a 2D representation of the perceived shape. Participants could also generalize, recognizing new shapes they were not explicitly trained on. Our findings provide an initial proof of concept that sensory augmentation devices and techniques can be used in combination with natural sensory information to expand the natural fields of sensory perception.
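To make the sweep-sonification principle concrete, the sketch below is a minimal Python illustration of column-by-column image-to-sound conversion, assuming a simplified mapping in which pixel height sets pitch on a pentatonic scale and columns are played left to right over time. The scale choice, the column duration, and the sonify helper are illustrative assumptions, not the published EyeMusic implementation (which additionally encodes color using different instrument timbres).

# A minimal sketch of column-sweep sonification, assuming a simplified
# mapping: row -> pitch on a pentatonic scale, column -> time.
import numpy as np
import wave

SAMPLE_RATE = 44100
COLUMN_DURATION = 0.25          # seconds per image column (slow sweep)
PENTATONIC = [261.63, 293.66, 329.63, 392.00, 440.00]  # C-D-E-G-A (Hz)

def sonify(image: np.ndarray) -> np.ndarray:
    """Sweep a binary image left to right; each bright pixel in a column
    becomes a sine tone whose pitch rises with pixel height."""
    rows, cols = image.shape
    n = int(SAMPLE_RATE * COLUMN_DURATION)
    t = np.arange(n) / SAMPLE_RATE
    out = []
    for c in range(cols):
        chunk = np.zeros(n)
        for r in range(rows):
            if image[r, c]:
                # bottom row -> lowest note; wrap over octaves if needed
                step = rows - 1 - r
                freq = PENTATONIC[step % 5] * 2 ** (step // 5)
                chunk += np.sin(2 * np.pi * freq * t)
        peak = np.max(np.abs(chunk))
        out.append(chunk / peak if peak > 0 else chunk)
    return np.concatenate(out)

# Example stimulus: a 5x5 "L" shape, akin to simple shape-recognition stimuli.
img = np.zeros((5, 5), dtype=bool)
img[:, 0] = True    # vertical stroke
img[-1, :] = True   # horizontal stroke

audio = sonify(img)
with wave.open("shape.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    f.writeframes((audio * 32767).astype(np.int16).tobytes())

Under this hypothetical mapping, the vertical stroke of the "L" sounds as a chord in the first time slice, followed by a single low note for each remaining column of the horizontal stroke; recognizing such auditory shape signatures, swept around the head over the 270° outside the visual field, is the kind of skill the participants trained on.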