
Divided multimodal attention sensory trace and context coding strategies in spatially congruent auditory and visual presentation.

Author Information

Kristjánsson Tómas, Thorvaldsson Tómas Páll, Kristjánsson Arni

Publication Information

Multisens Res. 2014;27(2):91-110. doi: 10.1163/22134808-00002448.

Abstract

Previous research involving both unimodal and multimodal studies suggests that single-response change detection is a capacity-free process while a discriminatory up or down identification is capacity-limited. The trace/context model assumes that this reflects different memory strategies rather than inherent differences between identification and detection. To perform such tasks, one of two strategies is used, a sensory trace or a context coding strategy, and if one is blocked, people will automatically use the other. A drawback to most preceding studies is that stimuli are presented at separate locations, creating the possibility of a spatial confound, which invites alternative interpretations of the results. We describe a series of experiments, investigating divided multimodal attention, without the spatial confound. The results challenge the trace/context model. Our critical experiment involved a gap before a change in volume and brightness, which according to the trace/context model blocks the sensory trace strategy, simultaneously with a roaming pedestal, which should block the context coding strategy. The results clearly show that people can use strategies other than sensory trace and context coding in the tasks and conditions of these experiments, necessitating changes to the trace/context model.

