The development of gesture and speech as an integrated system.

Author information

Goldin-Meadow S

Affiliation

University of Chicago, USA.

Publication information

New Dir Child Dev. 1998 Spring;(79):29-42. doi: 10.1002/cd.23219987903.

Abstract

Children, even at the one-word stage of language development, spontaneously produce gestures along with their speech, just as adults do. Although there appears to be a brief period prior to the onset of two-word speech during which gesture and speech do not form a well-integrated system, the ability to coordinate gesture and speech to convey a single message--and to "read" others' gestures with their speech to comprehend a message--develops early and is maintained throughout life. Gesture-speech combinations deliver a coherent message to the listener despite the fact that they consist of two different modalities of expression. According to McNeill (1992; Chapter One), this coherence is possible because gesture and speech share a common cognitive representation; that is, before the communication unfolds, gesture and speech are part of a single idea. As expression proceeds, the message is parsed, with most information channeled into speech but some information channeled into gesture. Speech conveys information in a segmented, combinatorial format, whereas gesture conveys information in a global, mimetic format (see Goldin-Meadow, McNeill, and Singleton, 1996). Thus gesture and speech need not, and in fact often do not, convey the same information within a single utterance. Because gesture and speech form a unified system, mismatches between them can be a source of insight into the cognitive state of the speaker. And, indeed, it turns out that in both the young, one-word speaker and the older child (and possibly adults as well; Perry and Elder, 1996), a difference--or mismatch--between the information conveyed in gesture and the information conveyed in speech can signal readiness for cognitive growth. Whether the actual production of gesture-speech mismatches contributes to cognitive growth is an open question. That is, does the act of expressing two different pieces of information across modalities but within a single communicative act improve a child's ability to transpose that knowledge to a new level and thus express those pieces of information within a single modality? More work is needed to investigate whether the act of producing gesture-speech mismatches itself facilitates transition. Even if it turns out that the production of gesture-speech mismatches has little role to play in facilitating cognitive change, mismatch remains a reliable marker of the speaker's potential for cognitive growth. As such, an understanding of the relationship between gesture and speech may prove useful in clinical settings. For example, there is some evidence that children with delayed onset of two-word speech fall naturally into two groups: children who eventually achieve two-word speech, albeit later than the norm (that is, late bloomers), and children who continue to have serious difficulties with spoken language and may never be able to combine words into a single string (Feldman, Holland, Kemp, and Janosky, 1992; Thal, Tobias, and Morrison, 1991). Observation of combinations in which gesture and speech convey different information may prove a useful clinical tool for distinguishing, at a relatively young age, children who will be late bloomers from those who will have great difficulty mastering spoken language without intervention (see Stare, 1996, for preliminary evidence that the relationship between gesture and speech in children with unilateral brain damage correlates with early versus late onset of two-word combinations).
In sum, for both speakers and listeners, gesture and speech are two aspects of a single process, with each modality contributing its own unique level of representation. Gesture conveys information in the global, imagistic form for which it is well suited, and speech conveys information in the segmented, combinatorial fashion that characterizes linguistic structures. The total representation of any message is therefore a synthesis of the analog gestural mode and the discrete speech mode. (ABSTRACT TRUNCATED)
