Wakeland-Hart Cheyenne D, Cao Steven A, deBettencourt Megan T, Bainbridge Wilma A, Rosenberg Monica D
Department of Psychology, University of Chicago, Chicago, IL, USA; Department of Psychology, Columbia University, New York, NY, USA.
Cognition. 2022 Oct;227:105201. doi: 10.1016/j.cognition.2022.105201. Epub 2022 Jul 19.
We remember only a fraction of what we see, including images that are highly memorable and those that we encounter during highly attentive states. However, most models of human memory disregard both an image's memorability and an individual's fluctuating attentional states. Here, we build the first model of memory synthesizing these two disparate factors to predict subsequent image recognition. We combine memorability scores of 1100 images (Experiment 1, n = 706) and attentional state indexed by response time on a continuous performance task (Experiments 2 and 3, n = 57 total). Image memorability and sustained attentional state explained significant variance in image memory, and a joint model of memory including both factors outperformed models including either factor alone. Furthermore, models including both factors successfully predicted memory in an out-of-sample group. Thus, building models based on individual- and image-specific factors allows for directed forecasting of our memories.

SIGNIFICANCE STATEMENT: Although memory is a fundamental cognitive process, memory failures often cannot be predicted until it is too late. However, in this study, we show that much of memory is surprisingly predetermined, by factors shared across the population and factors highly specific to each individual. Specifically, we build a new multidimensional model that predicts memory based only on the images a person sees and when they see them. This research synthesizes findings from disparate domains, including computer vision, attention, and memory, into a predictive model. These findings have far-reaching implications for domains such as education, business, and marketing, where predicting (and even manipulating) what information people will remember is a top priority.
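The joint model described in the abstract combines a population-level predictor (an image's memorability score) with an individual-level predictor (the viewer's attentional state at encoding, indexed by response time on a continuous performance task). A minimal sketch of this idea is a logistic model in which both factors contribute additively on the log-odds scale; the function name, coefficients, and data below are illustrative assumptions, not the authors' fitted model.

```python
import math

def predict_recognition(memorability, attention_z,
                        b0=-1.0, b_mem=3.0, b_att=0.5):
    """Illustrative joint logistic model of subsequent image recognition.

    memorability: population-level memorability score for the image (0-1).
    attention_z:  z-scored index of the viewer's attentional state at
                  encoding (higher = more attentive); in the study this
                  was derived from response times on a continuous
                  performance task.
    Coefficients are made-up values chosen for the sketch, not estimates
    from the paper.
    """
    logit = b0 + b_mem * memorability + b_att * attention_z
    return 1.0 / (1.0 + math.exp(-logit))

# A highly memorable image encountered during an attentive state...
p_high = predict_recognition(memorability=0.9, attention_z=1.0)
# ...versus a forgettable image encountered during an inattentive state.
p_low = predict_recognition(memorability=0.2, attention_z=-1.0)

print(p_high, p_low)
```

The point of the additive structure is that dropping either term recovers a single-factor model, which is the comparison the abstract reports: the two-factor model outperformed models including either factor alone.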