Testing a computational model of causative overgeneralizations: Child judgment and production data from English, Hebrew, Hindi, Japanese and K'iche'.

Author information

Ambridge Ben, Doherty Laura, Maitreyee Ramya, Tatsumi Tomoko, Zicherman Shira, Mateo Pedro Pedro, Kawakami Ayuno, Bidgood Amy, Pye Clifton, Narasimhan Bhuvana, Arnon Inbal, Bekman Dani, Efrati Amir, Fabiola Can Pixabaj Sindy, Marroquín Pelíz Mario, Julajuj Mendoza Margarita, Samanta Soumitra, Campbell Seth, McCauley Stewart, Berman Ruth, Misra Sharma Dipti, Bhaya Nair Rukmini, Fukumura Kumiko

Affiliations

University of Liverpool, Liverpool, UK.

ESRC International Centre for Language and Communicative Development (LuCiD), Liverpool, UK.

Publication information

Open Res Eur. 2022 Jan 12;1:1. doi: 10.12688/openreseurope.13008.2. eCollection 2021.

Abstract

How do language learners avoid producing verb argument structure overgeneralization errors (e.g., using a verb in a causative construction that it does not permit; cf. the corresponding grammatical causative paraphrase), while retaining the ability to apply such generalizations productively when appropriate? This question has long been seen as both particularly central to acquisition research and particularly challenging. Focussing on causative overgeneralization errors of this type, a previous study reported a computational model that learns, on the basis of corpus data and human-derived verb-semantic-feature ratings, to predict adults' by-verb preferences for less- versus more-transparent causative forms (e.g., an ungrammatical lexical-causative use of a verb vs a grammatical periphrastic causative) across English, Hebrew, Hindi, Japanese and K'iche' Mayan. Here, we tested the ability of this model (and an expanded version with multiple hidden layers) to explain binary grammaticality judgment data from children aged 4;0-5;0, and elicited-production data from children aged 4;0-5;0 and 5;6-6;6 (n = 48 per language). In general, the model successfully simulated both children's judgment and production data, with correlations of r = 0.5-0.6 and r = 0.75-0.85, respectively, and also generalized to unseen verbs. Importantly, learners of all five languages showed some evidence of making, in both judgments and production, the types of overgeneralization errors previously observed in naturalistic studies of English. Together with previous findings, the present study demonstrates that a simple learning model can explain (a) adults' continuous judgment data, (b) children's binary judgment data and (c) children's production data (with no training on these datasets), and therefore constitutes a plausible mechanistic account of the acquisition of verbs' argument structure restrictions.
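To make the modelling set-up concrete, the following is a minimal, purely illustrative sketch (Python, synthetic data), not the authors' implementation: a small feedforward network maps per-verb semantic-feature ratings onto a continuous preference for one causative form over the other, and is then scored by by-verb Pearson correlations, including on verbs held out entirely to mimic generalization to unseen verbs. The number of verbs, features and hidden units, the training procedure, and all data here are assumptions for illustration only.

```python
# Illustrative toy stand-in for the kind of model described in the abstract;
# NOT the authors' implementation. All quantities below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 60 verbs x 10 semantic-feature ratings, plus a continuous
# by-verb causative-form preference (synthetic here; in the study these come
# from corpus data and human ratings).
n_verbs, n_features = 60, 10
X = rng.normal(size=(n_verbs, n_features))
true_w = rng.normal(size=n_features)
y = np.tanh(X @ true_w) + 0.3 * rng.normal(size=n_verbs)  # noisy preferences

# Hold out whole verbs to mimic generalization to unseen verbs.
train_idx, test_idx = np.arange(45), np.arange(45, 60)

# One hidden layer (the expanded model in the paper reportedly uses several).
n_hidden = 16
W1 = rng.normal(scale=0.1, size=(n_features, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=n_hidden)
b2 = 0.0
lr = 0.05

for epoch in range(2000):
    h = np.tanh(X[train_idx] @ W1 + b1)   # hidden activations
    pred = h @ W2 + b2                    # predicted preference per verb
    err = pred - y[train_idx]
    # Backpropagation for a squared-error loss.
    grad_W2 = h.T @ err / len(train_idx)
    grad_b2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)
    grad_W1 = X[train_idx].T @ dh / len(train_idx)
    grad_b1 = dh.mean(axis=0)
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2

def predict(X_):
    return np.tanh(X_ @ W1 + b1) @ W2 + b2

def pearson(a, b):
    return float(np.corrcoef(a, b)[0, 1])

print("by-verb r, trained verbs:", round(pearson(predict(X[train_idx]), y[train_idx]), 2))
print("by-verb r, unseen verbs :", round(pearson(predict(X[test_idx]), y[test_idx]), 2))
```

In this toy set-up, scoring the model on verbs that were excluded from training is what "generalized to unseen verbs" corresponds to; the reported correlations in the abstract are instead computed against children's judgment and production data.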
