Perceiving depth beyond sight: Evaluating intrinsic and learned cues via a proof of concept sensory substitution method in the visually impaired and sighted.

Affiliations

Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel.

Computational Psychiatry and Neurotechnology Lab, Ben Gurion University, Be'er Sheva, Israel.

Publication Info

PLoS One. 2024 Sep 25;19(9):e0310033. doi: 10.1371/journal.pone.0310033. eCollection 2024.

Abstract

This study explores spatial perception of depth by employing a novel proof-of-concept sensory substitution algorithm. The algorithm taps into existing cognitive scaffolds, such as language and cross-modal correspondences, by naming objects in the scene while representing their elevation and depth through manipulation of the auditory properties of each axis. While the representation of verticality used a previously tested correspondence with pitch, the representation of depth employed an ecologically inspired manipulation based on the loss of gain and the filtering of higher-frequency sounds over distance. The study, involving 40 participants, seven of whom were blind (5) or visually impaired (2), investigates how intrinsic the ecologically inspired mapping of auditory cues for depth is by comparing it to an interchanged condition in which the mappings of the two axes are swapped. All participants successfully learned to use the algorithm after a very brief period of training, with the blind and visually impaired participants learning to use it as successfully as their sighted counterparts. A significant difference was found at baseline between the two conditions, indicating the intuitiveness of the original ecologically inspired mapping. Despite this, participants achieved similar success rates in both conditions following the training. The findings indicate that both intrinsic and learned cues come into play in depth perception. Moreover, they suggest that, through perceptual learning, novel sensory mappings can be trained in adulthood. Regarding the blind and visually impaired, the results also support the convergence view, which holds that with training their spatial abilities can converge with those of the sighted. Finally, we discuss how the algorithm can open new avenues for accessibility technologies, virtual reality, and other practical applications.
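As a rough illustration of the mapping the abstract describes, the sketch below converts an object's vertical position and distance into three auditory parameters: pitch for elevation, and gain plus a low-pass cutoff for depth. All numeric values (the 220–880 Hz pitch range, the inverse-distance gain curve, and the 8 kHz reference cutoff with its decay constant) are hypothetical choices for demonstration; the paper does not specify its exact parameters.

```python
import math

def depth_cue(elevation: float, depth_m: float) -> dict:
    """Map a labeled object's position to auditory parameters.

    elevation: normalized vertical position in [0, 1] (0 = bottom).
    depth_m:   distance from the listener in meters.
    """
    # Verticality -> pitch: higher objects get a higher pitch
    # (hypothetical two-octave range, 220-880 Hz, log-spaced).
    pitch_hz = 220.0 * 2.0 ** (2.0 * elevation)

    # Depth -> gain: inverse-distance attenuation, mimicking the
    # loss of gain over distance (clamped so gain never exceeds 1).
    gain = 1.0 / max(depth_m, 1.0)

    # Depth -> low-pass cutoff: air absorbs high frequencies more
    # strongly, so farther objects sound duller (hypothetical 8 kHz
    # reference cutoff decaying exponentially with distance).
    cutoff_hz = 8000.0 * math.exp(-depth_m / 5.0)

    return {"pitch_hz": pitch_hz, "gain": gain, "cutoff_hz": cutoff_hz}
```

In the interchanged condition tested in the study, the two axes would simply be swapped, i.e. pitch would encode depth while gain and filtering would encode elevation.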


Fig 1. https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a0ea/11423994/8ea63b21032b/pone.0310033.g001.jpg
