A capuchin monkey (Cebus apella) uses video to find food.

Author Information

Potì Patrizia, Saporiti Martina

Affiliations

Unit of Cognitive Primatology and Primate Centre, Institute for Cognitive Sciences and Technologies, CNR, Rome, Italy.

Publication Information

Folia Primatol (Basel). 2010;81(1):16-30. doi: 10.1159/000277636. Epub 2010 Mar 31.

Abstract

We examined the ability of capuchin monkeys to use video without immediate visual-kinaesthetic feedback as a source of information to guide their action in the 3-dimensional world. In experiment 1, 2 capuchins learned to retrieve food under 1 of 2 different objects in 1 cage after watching the experimenter hiding food under 1 of 2 replica objects while in another cage. Information space and retrieval space were thus separate. The performance criterion was 71% first correct choices in blocks of 24 trials. However, when the subjects watched prerecorded videos of the hiding events, they chose randomly. In experiment 2, we gave the capuchins further trials with video and we enhanced the object shapes by line drawings. One capuchin eventually learned to use the video clips to locate food and he generalized this learning to 2 new objects.

