
SAYCam: A Large, Longitudinal Audiovisual Dataset Recorded From the Infant's Perspective.

Authors

Sullivan Jessica, Mei Michelle, Perfors Andrew, Wojcik Erica, Frank Michael C

Affiliations

Skidmore College.

University of Melbourne.

Publication Information

Open Mind (Camb). 2021 May 26;5:20-29. doi: 10.1162/opmi_a_00039. eCollection 2021.

Abstract

We introduce a new resource: the SAYCam corpus. Infants aged 6-32 months wore a head-mounted camera for approximately 2 hr per week, over the course of approximately two-and-a-half years. The result is a large, naturalistic, longitudinal dataset of infant- and child-perspective videos. Over 200,000 words of naturalistic speech have already been transcribed. In addition, the dataset is searchable using a number of criteria (e.g., age of participant, location, setting, objects present). The resulting dataset will be of broad use to psychologists, linguists, and computer scientists.
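The abstract notes that the corpus is searchable by criteria such as participant age, location, setting, and objects present. As a minimal sketch of what such metadata filtering could look like, the example below queries a small list of illustrative records; the field names and values are hypothetical and do not reflect the actual SAYCam schema or tooling.

```python
# Hypothetical metadata records mimicking the search criteria the
# abstract lists (age, location, setting, objects present).
# Field names and values are illustrative only.
records = [
    {"age_months": 8,  "location": "home", "setting": "mealtime", "objects": ["spoon", "bowl"]},
    {"age_months": 14, "location": "park", "setting": "play",     "objects": ["ball"]},
    {"age_months": 24, "location": "home", "setting": "play",     "objects": ["ball", "blocks"]},
]

def search(records, min_age=None, max_age=None, location=None, obj=None):
    """Return records matching all supplied criteria (None = no constraint)."""
    hits = []
    for r in records:
        if min_age is not None and r["age_months"] < min_age:
            continue
        if max_age is not None and r["age_months"] > max_age:
            continue
        if location is not None and r["location"] != location:
            continue
        if obj is not None and obj not in r["objects"]:
            continue
        hits.append(r)
    return hits

# Example query: videos recorded at home in which a ball is present.
home_ball = search(records, location="home", obj="ball")
```

Combining independent filters with early `continue` keeps each criterion optional, so any subset of the search fields can be specified, matching the "number of criteria" phrasing in the abstract.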


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f830/8412186/826437f2d0b5/opmi-05-20-g001.jpg
