Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States.
Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
eLife. 2023 Feb 27;12:e82580. doi: 10.7554/eLife.82580.
Understanding object representations requires a broad, comprehensive sampling of the objects in our visual world, with dense measurements of brain activity and behavior. Here, we present THINGS-data, a multimodal collection of large-scale neuroimaging and behavioral datasets in humans, comprising densely sampled functional MRI and magnetoencephalographic recordings, as well as 4.70 million similarity judgments in response to thousands of photographic images for up to 1,854 object concepts. THINGS-data is unique in its breadth of richly annotated objects, allowing researchers to test countless hypotheses at scale while assessing the reproducibility of previous findings. Beyond the unique insights promised by each individual dataset, the multimodality of THINGS-data allows the datasets to be combined, offering a far broader view of object processing than was previously possible. Our analyses demonstrate the high quality of the datasets and provide five examples of hypothesis-driven and data-driven applications. THINGS-data constitutes the core public release of the THINGS initiative (https://things-initiative.org), bridging the gap between disciplines and advancing cognitive neuroscience.