
Suboptimal human multisensory cue combination.

Affiliations

School of Psychology, The University of Queensland, St Lucia, Queensland, 4102, Australia.

Experimental Psychology, University of Nottingham, Nottingham, UK.

Publication

Sci Rep. 2019 Mar 26;9(1):5155. doi: 10.1038/s41598-018-37888-7.

DOI: 10.1038/s41598-018-37888-7
PMID: 30914673
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6435731/
Abstract

Information from different sensory modalities can interact, shaping what we think we have seen, heard, or otherwise perceived. Such interactions can enhance the precision of perceptual decisions, relative to those based on information from a single sensory modality. Several computational processes could account for such improvements. Slight improvements could arise if decisions are based on multiple independent sensory estimates, as opposed to just one. Still greater improvements could arise if initially independent estimates are summed to form a single integrated code. This hypothetical process has often been described as optimal when it results in bimodal performance consistent with a summation of unimodal estimates weighted in proportion to the precision of each initially independent sensory code. Here we examine cross-modal cue combination for audio-visual temporal rate and spatial location cues. While suggestive of a cross-modal encoding advantage, the degree of facilitation falls short of that predicted by a precision weighted summation process. These data accord with other published observations, and suggest that precision weighted combination is not a general property of human cross-modal perception.
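The "precision weighted summation" benchmark the abstract refers to is the standard maximum-likelihood cue-combination rule: each unimodal estimate is weighted by its precision (inverse variance), and the combined estimate is predicted to be more precise than either cue alone. A minimal sketch of that prediction, assuming Gaussian, independent unimodal noise (function and variable names are illustrative, not from the paper):

```python
import math

def precision_weighted_combination(sigma_v, sigma_a):
    """Predict maximum-likelihood combination of two independent cues.

    sigma_v, sigma_a: standard deviations of the unimodal (e.g. visual
    and auditory) estimates. Returns (w_v, w_a, sigma_bimodal): the cue
    weights and the predicted SD of the combined (bimodal) estimate.
    """
    # Precision (reliability) is the inverse variance of each cue.
    p_v = 1.0 / sigma_v ** 2
    p_a = 1.0 / sigma_a ** 2
    # Each cue is weighted in proportion to its relative precision.
    w_v = p_v / (p_v + p_a)
    w_a = p_a / (p_v + p_a)
    # The combined variance is the inverse of the summed precisions,
    # so it is always smaller than either unimodal variance.
    sigma_bimodal = math.sqrt(1.0 / (p_v + p_a))
    return w_v, w_a, sigma_bimodal

# Example: a visual cue twice as precise as an auditory cue.
w_v, w_a, sigma_b = precision_weighted_combination(sigma_v=1.0, sigma_a=2.0)
```

The paper's claim is that observed bimodal thresholds, while better than unimodal ones, fall short of the `sigma_bimodal` this rule predicts.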


Figures 1-5:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/61c4/6435731/920dd433007b/41598_2018_37888_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/61c4/6435731/86b1c13e4093/41598_2018_37888_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/61c4/6435731/8ddf2d0827f6/41598_2018_37888_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/61c4/6435731/b2e6aaf58b75/41598_2018_37888_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/61c4/6435731/c6fa29e5e4c9/41598_2018_37888_Fig5_HTML.jpg

Similar Articles

1. Suboptimal human multisensory cue combination.
   Sci Rep. 2019 Mar 26;9(1):5155. doi: 10.1038/s41598-018-37888-7.
2. Audio-visual speech cue combination.
   PLoS One. 2010 Apr 16;5(4):e10217. doi: 10.1371/journal.pone.0010217.
3. Cue integration in categorical tasks: insights from audio-visual speech perception.
   PLoS One. 2011;6(5):e19812. doi: 10.1371/journal.pone.0019812. Epub 2011 May 26.
4. Shared Physiological Correlates of Multisensory and Expectation-Based Facilitation.
   eNeuro. 2020 Mar 9;7(2). doi: 10.1523/ENEURO.0435-19.2019. Print 2020 Mar/Apr.
5. Weighted integration suggests that visual and tactile signals provide independent estimates about duration.
   J Exp Psychol Hum Percept Perform. 2017 May;43(5):868-880. doi: 10.1037/xhp0000368. Epub 2017 Feb 23.
6. The co-occurrence of multisensory facilitation and cross-modal conflict in the human brain.
   J Neurophysiol. 2011 Dec;106(6):2896-909. doi: 10.1152/jn.00303.2011. Epub 2011 Aug 31.
7. Multisensory cues improve sensorimotor synchronisation.
   Eur J Neurosci. 2010 May;31(10):1828-35. doi: 10.1111/j.1460-9568.2010.07205.x.
8. Predicting the sensory consequences of one's own action: First evidence for multisensory facilitation.
   Atten Percept Psychophys. 2016 Nov;78(8):2515-2526. doi: 10.3758/s13414-016-1189-1.
9. Cross-modal auditory priors drive the perception of bistable visual stimuli with reliable differences between individuals.
   Sci Rep. 2021 Aug 20;11(1):16943. doi: 10.1038/s41598-021-96198-7.
10. Sensory cue combination in children under 10 years of age.
    Cognition. 2019 Dec;193:104014. doi: 10.1016/j.cognition.2019.104014. Epub 2019 Jul 11.

Cited By

1. The myth of the Bayesian brain.
   Eur J Appl Physiol. 2025 Jun 26. doi: 10.1007/s00421-025-05855-6.
2. Automatic multisensory integration follows subjective confidence rather than objective performance.
   Commun Psychol. 2025 Mar 11;3(1):38. doi: 10.1038/s44271-025-00221-w.
3. No evidence for a difference in Bayesian reasoning for egocentric versus allocentric spatial cognition.

References

1. Weighted integration suggests that visual and tactile signals provide independent estimates about duration.
   J Exp Psychol Hum Percept Perform. 2017 May;43(5):868-880. doi: 10.1037/xhp0000368. Epub 2017 Feb 23.
2. Can computational goals inform theories of vision?
   Top Cogn Sci. 2015 Apr;7(2):274-86. doi: 10.1111/tops.12136. Epub 2015 Mar 13.
3. Crossmodal attention switching: auditory dominance in temporal discrimination tasks.
   PLoS One. 2024 Oct 10;19(10):e0312018. doi: 10.1371/journal.pone.0312018. eCollection 2024.
4. Optimality of multisensory integration while compensating for uncertain visual target information with artificial vibrotactile cues during reach planning.
   J Neuroeng Rehabil. 2024 Sep 9;21(1):155. doi: 10.1186/s12984-024-01448-0.
5. Stimulus-dependent differences in cortical versus subcortical contributions to visual detection in mice.
   Curr Biol. 2024 May 6;34(9):1940-1952.e5. doi: 10.1016/j.cub.2024.03.061. Epub 2024 Apr 18.
6. Correctly establishing evidence for cue combination via gains in sensory precision: Why the choice of comparator matters.
   Behav Res Methods. 2024 Apr;56(4):2842-2858. doi: 10.3758/s13428-023-02227-w. Epub 2023 Sep 20.
7. Internal biases are linked to disrupted cue combination in children and adults.
   J Vis. 2022 Nov 1;22(12):14. doi: 10.1167/jov.22.12.14.
8. Learning multisensory cue integration: A computational model of crossmodal synaptic plasticity enables reliability-based cue weighting by capturing stimulus statistics.
   Front Neural Circuits. 2022 Aug 8;16:921453. doi: 10.3389/fncir.2022.921453. eCollection 2022.
9. Experimentally disambiguating models of sensory cue integration.
   J Vis. 2022 Jan 4;22(1):5. doi: 10.1167/jov.22.1.5.
10. Combining cues to judge distance and direction in an immersive virtual reality environment.
    J Vis. 2021 Apr 1;21(4):10. doi: 10.1167/jov.21.4.10.
11. Acta Psychol (Amst). 2014 Nov;153:139-46. doi: 10.1016/j.actpsy.2014.10.003. Epub 2014 Nov 1.
12. Neural correlates of reliability-based cue weighting during multisensory integration.
    Nat Neurosci. 2011 Nov 20;15(1):146-54. doi: 10.1038/nn.2983.
13. A normalization model of multisensory integration.
    Nat Neurosci. 2011 Jun;14(6):775-82. doi: 10.1038/nn.2815. Epub 2011 May 8.
14. Audio-visual speech cue combination.
    PLoS One. 2010 Apr 16;5(4):e10217. doi: 10.1371/journal.pone.0010217.
15. Bimodal sensory discrimination is finer than dual single modality discrimination.
    J Vis. 2007 Aug 31;7(11):14.1-11. doi: 10.1167/7.11.14.
16. Neural correlates of multisensory integration of ecologically valid audiovisual events.
    J Cogn Neurosci. 2007 Dec;19(12):1964-73. doi: 10.1162/jocn.2007.19.12.1964.
17. Optimal integration of shape information from vision and touch.
    Exp Brain Res. 2007 Jun;179(4):595-606. doi: 10.1007/s00221-006-0814-y. Epub 2007 Jan 16.
18. Resolving multisensory conflict: a strategy for balancing the costs and benefits of audio-visual integration.
    Proc Biol Sci. 2006 Sep 7;273(1598):2159-68. doi: 10.1098/rspb.2006.3578.