
Information density and dependency length as complementary cognitive models.

Author Information

Collins Michael Xavier

Affiliation Information

Norfolk, VA, USA

Publication Information

J Psycholinguist Res. 2014 Oct;43(5):651-81. doi: 10.1007/s10936-013-9273-3.

Abstract

Certain English constructions permit two syntactic alternations.

(1) a. I looked up the number.
    b. I looked the number up.
(2) a. He is often at the office.
    b. He often is at the office.

This study investigates the relationship between syntactic alternations and processing difficulty. What cognitive mechanisms are responsible for our attraction to some alternations and our aversion to others? This article reviews three psycholinguistic models of the relationship between syntactic alternations and processing: Maximum Per Word Surprisal (building on the ideas of Hale, in Proceedings of the 2nd Meeting of the North American Chapter of the Association for Computational Linguistics, Association for Computational Linguistics, Pittsburgh, PA, pp 159-166, 2001), Uniform Information Density (UID) (Levy and Jaeger in Adv Neural Inf Process Syst 19:849-856, 2007; inter alia), and Dependency Length Minimization (DLM) (Gildea and Temperley in Cognit Sci 34:286-310, 2010). Each theory makes predictions about which alternations native speakers should favor. Subjects were recruited through Amazon Mechanical Turk and asked to judge which of two competing syntactic alternations sounded more natural. Logistic regression analysis of the resulting data suggests that both UID and DLM are powerful predictors of human preferences. We conclude that alternations that approach uniform information density and minimize dependency length are easier to process than those that do not.
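To make the two winning predictors concrete, the Python sketch below shows one way the metrics could be operationalized: total dependency length (the DLM measure) and the variance of per-word surprisal (a rough proxy for departure from uniform information density). This is a minimal illustration, not the study's implementation; the head-dependent pairs and per-word probabilities are invented for the particle-verb alternation in (1) and do not come from the paper's data or models.

```python
# Illustrative sketch of the DLM and UID-style metrics (toy values, not the study's data).
from math import log2
from statistics import pvariance


def total_dependency_length(dependencies):
    """Sum of linear head-dependent distances (the DLM measure).

    `dependencies` is a list of (head_index, dependent_index) pairs over
    word positions; smaller totals are predicted to be easier to process.
    """
    return sum(abs(head - dep) for head, dep in dependencies)


def surprisal_variance(word_probs):
    """Variance of per-word surprisal (-log2 p): a rough proxy for how far
    a sentence departs from uniform information density."""
    return pvariance([-log2(p) for p in word_probs])


# Alternation (1): word positions
#   0 I, 1 looked, 2 up, 3 the, 4 number    (1a: "I looked up the number")
#   0 I, 1 looked, 2 the, 3 number, 4 up    (1b: "I looked the number up")
# The dependency pairs below are illustrative, not a gold-standard parse.
deps_1a = [(1, 0), (1, 2), (4, 3), (1, 4)]  # looked->I, looked->up, number->the, looked->number
deps_1b = [(1, 0), (1, 4), (3, 2), (1, 3)]  # looked->I, looked->up, number->the, looked->number

print("DLM, 1a:", total_dependency_length(deps_1a))  # 6: shorter total, favored under DLM
print("DLM, 1b:", total_dependency_length(deps_1b))  # 7

# Hypothetical per-word probabilities from some language model (assumed values).
probs_1a = [0.05, 0.02, 0.10, 0.40, 0.08]
probs_1b = [0.05, 0.02, 0.30, 0.09, 0.04]

print("UID (surprisal variance), 1a:", round(surprisal_variance(probs_1a), 3))
print("UID (surprisal variance), 1b:", round(surprisal_variance(probs_1b), 3))
```

In a study like this, each alternation pair would yield a difference score on each metric, and those differences would serve as predictors in a logistic regression on the binary naturalness judgments.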
