Leclercq Virginie, Le Dantec Christophe C, Seitz Aaron R
Department of Psychology, University of California - Riverside, 900 University Avenue, Riverside, CA 92521, USA; INSHEA, Grhapes (EA 7287), Suresnes, France.
Department of Psychology, University of California - Riverside, 900 University Avenue, Riverside, CA 92521, USA.
Vision Res. 2014 Jun;99:5-11. doi: 10.1016/j.visres.2013.09.006. Epub 2013 Sep 23.
The mechanisms guiding our learning and memory processes are of key interest to human cognition. While much research shows that attention and reinforcement processes help guide the encoding process, there is still much to learn regarding how our brains choose what to remember. Recent research on task-irrelevant perceptual learning (TIPL) has found that information presented coincident with important events is better encoded even if participants are not aware of its presence (see Seitz & Watanabe, 2009). However, a limitation of existing studies of TIPL is that they provide little information regarding the depth of encoding supported by pairing a stimulus with a behaviorally relevant event. The objective of this research was to understand the depth of encoding of information that is learned through TIPL. To do so, we adopted a variant of the "remember/know" paradigm, recently reported by Ingram, Mickes, and Wixted (2012), in which multiple confidence levels are reported for both familiar (know) and remember judgments (Experiment 1), and in which episodic information is tested (Experiment 2). TIPL was found in both experiments, with higher recognition performance for target-paired than for distractor-paired images. Furthermore, TIPL benefited both "familiar" and "remember" reports. The results of Experiment 2 indicate that the most confident "remember" response was associated with episodic information, in that participants were able to access the location of image presentation for these items. Together, these results indicate that TIPL produces a deep enhancement in the encoding of target-paired information.