School of Psychology, The University of Queensland, St Lucia, Queensland, 4102, Australia.
Experimental Psychology, University of Nottingham, Nottingham, UK.
Sci Rep. 2019 Mar 26;9(1):5155. doi: 10.1038/s41598-018-37888-7.
Information from different sensory modalities can interact, shaping what we think we have seen, heard, or otherwise perceived. Such interactions can enhance the precision of perceptual decisions relative to those based on information from a single sensory modality. Several computational processes could account for such improvements. Slight improvements could arise if decisions are based on multiple independent sensory estimates, as opposed to just one. Still greater improvements could arise if initially independent estimates are summed to form a single integrated code. This hypothetical process has often been described as optimal when it results in bimodal performance consistent with a summation of unimodal estimates weighted in proportion to the precision of each initially independent sensory code. Here we examine cross-modal cue combination for audio-visual temporal rate and spatial location cues. While our results are suggestive of a cross-modal encoding advantage, the degree of facilitation falls short of that predicted by a precision-weighted summation process. These data accord with other published observations, and suggest that precision-weighted combination is not a general property of human cross-modal perception.
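The precision-weighted summation the abstract describes is the standard maximum-likelihood cue-combination model: each unimodal estimate is weighted by its inverse variance, and the predicted bimodal variance is never larger than the smaller unimodal variance. A minimal sketch of that prediction (the function name and the example threshold values are illustrative, not taken from the paper):

```python
import math

def precision_weighted_combination(est_a, est_v, sigma_a, sigma_v):
    """Combine two independent unimodal estimates (e.g. auditory and
    visual) by inverse-variance weighting.

    est_a, est_v  : unimodal point estimates of the stimulus property
    sigma_a/v     : unimodal standard deviations (discrimination thresholds)
    Returns the combined estimate and its predicted standard deviation.
    """
    # Weights proportional to precision (1 / variance), normalised to sum to 1.
    w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_v**2)
    w_v = 1.0 - w_a
    combined = w_a * est_a + w_v * est_v
    # Predicted bimodal variance: product over sum of the unimodal variances,
    # which is always <= the smaller of the two unimodal variances.
    sigma_av = math.sqrt((sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2))
    return combined, sigma_av

# Hypothetical example: vision twice as precise as audition.
est, sigma_av = precision_weighted_combination(10.0, 12.0, sigma_a=2.0, sigma_v=1.0)
# The combined estimate is pulled toward the more precise (visual) cue,
# and sigma_av falls below the better unimodal sigma of 1.0.
```

The paper's claim is that measured bimodal thresholds, while better than unimodal ones, do not reach this `sigma_av` bound.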