Buchs Galit, Maidenbaum Shachar, Levy-Tzedek Shelly, Amedi Amir
Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel.
The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem Hadassah Ein-Kerem, Jerusalem, Israel.
Restor Neurol Neurosci. 2016;34(1):97-105. doi: 10.3233/RNN-150592.
To perceive our visual surroundings, we constantly move our eyes, focus on particular details, and then integrate them into a combined whole. Current visual rehabilitation methods, both invasive, like bionic eyes, and non-invasive, like Sensory Substitution Devices (SSDs), down-sample visual stimuli into low-resolution images. Zooming in to sub-parts of the scene could potentially improve detail perception. Can congenitally blind individuals integrate a 'visual' scene when offered this information via a different sensory modality, such as audition? Can they integrate visual information, perceived in parts, into larger percepts despite never having had any visual experience?
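The trade-off described above can be made concrete: at a fixed output resolution, averaging the whole scene down blurs away fine detail, whereas cropping a sub-region first and then sampling it at the same output resolution devotes every output pixel to a smaller patch of the scene. The sketch below is purely illustrative (it is not the EyeMusic implementation); the function names and the block-averaging scheme are our own assumptions.

```python
import numpy as np


def downsample(img, out_size):
    """Block-average a 2-D image down to out_size x out_size.

    Illustrative stand-in for the down-sampling step of a
    low-resolution rehabilitation display; not the EyeMusic code.
    """
    h, w = img.shape
    ys = np.linspace(0, h, out_size + 1).astype(int)
    xs = np.linspace(0, w, out_size + 1).astype(int)
    out = np.empty((out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            out[i, j] = img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
    return out


def zoom_then_downsample(img, top, left, size, out_size):
    """Crop a sub-window first, then downsample.

    The output has the same resolution as downsample(img, out_size),
    but each output pixel now covers a smaller patch of the scene,
    so fine detail inside the window survives.
    """
    crop = img[top:top + size, left:left + size]
    return downsample(crop, out_size)
```

Running both on a scene containing a fine checkerboard patch shows the effect: the whole-scene version averages the patch to a uniform grey, while the zoomed version preserves its contrast at the same output size.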
We explored these questions using a zooming-in functionality embedded in the EyeMusic visual-to-auditory SSD. Eight blind participants were tasked with identifying cartoon faces by integrating their individual components recognized via the EyeMusic's zooming mechanism.
After specialized training of just 6-10 hours, blind participants successfully and actively integrated facial features into cartoon identities in 79 ± 18% of trials, a highly significant result (chance level 10%; rank-sum P < 1.55E-04).
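A comparison of this kind can be sketched as a Wilcoxon rank-sum test of per-participant success rates against the 10% chance level. The implementation below (normal approximation with midranks for ties, no tie correction of the variance) and the example scores are our own illustrative assumptions, not the study's data or analysis code.

```python
import math


def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum (Mann-Whitney) p-value via the
    normal approximation, with midranks for tied values.

    Illustrative sketch only; a tie correction of the variance is
    omitted for brevity.
    """
    pooled = sorted(x + y)
    # 1-based ranks; tied values share their average (mid) rank
    rank = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2.0
        i = j
    n1, n2 = len(x), len(y)
    w = sum(rank[v] for v in x)              # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0            # mean of w under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value


# Hypothetical per-participant percent-correct scores (n = 8),
# compared against the 10% chance level:
scores = [90, 70, 85, 95, 60, 88, 75, 80]
chance = [10] * 8
p = rank_sum_p(scores, chance)
```

With every score well above chance, the test yields a p-value far below 0.05, mirroring the direction of the reported result.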
These findings show that even users who lack any previous visual experience whatsoever can indeed integrate visual information presented at increased resolution. This has potentially important practical implications for visual rehabilitation, for both invasive and non-invasive methods.