The modulation of crossmodal integration by unimodal perceptual grouping: a visuotactile apparent motion study.

Author information

Lyons Georgina, Sanabria Daniel, Vatakis Argiro, Spence Charles

Affiliation

Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford OX1 3UD, UK.

Publication information

Exp Brain Res. 2006 Oct;174(3):510-6. doi: 10.1007/s00221-006-0485-8. Epub 2006 May 23.

Abstract

We adapted the crossmodal dynamic capture task to investigate the modulation of visuotactile crossmodal integration by unimodal visual perceptual grouping. The influence of finger posture on this interaction was also explored. Participants were required to judge the direction of a tactile apparent motion stream (moving either to the left or to the right) presented to their crossed or uncrossed index fingers. The participants were instructed to ignore a distracting visual apparent motion stream, comprised of either 2 or 6 lights presented concurrently with the tactile stimuli. More crossmodal dynamic capture of the direction of the tactile apparent motion stream by the visual apparent motion stream was observed in the 2-lights condition than in the 6-lights condition. This interaction was not modulated by finger posture. These results suggest that visual intramodal perceptual grouping constrains the crossmodal binding of visual and tactile apparent motion information, irrespective of finger posture.

