Philipp Wicke, Marianna Bolognesi
School of Computer Science, University College Dublin, Belfield, Dublin D4, Ireland.
Faculty of Languages, Literatures and Modern Cultures, University of Bologna, Via Cartoleria 5, Bologna, 40126, Italy.
Cogn Process. 2020 Nov;21(4):615-635. doi: 10.1007/s10339-020-00971-x. Epub 2020 May 7.
An increasingly large body of converging evidence supports the idea that the semantic system is distributed across brain areas and that the information encoded therein is multimodal. Within this framework, feature norms are typically used to operationalize the various parts of meaning that contribute to defining the distributed nature of conceptual representations. However, such features are typically collected as verbal strings, elicited from participants in experimental settings. If the semantic system is not only distributed (across features) but also multimodal, a cognitively sound theory of semantic representations should take into account the different modalities in which feature-based representations are generated, because not all the relevant semantic information can be easily verbalized into classic feature norms, and different types of concepts (e.g., abstract vs. concrete concepts) may consist of different configurations of non-verbal features. In this paper we acknowledge the multimodal nature of conceptual representations and propose a novel way of collecting non-verbal semantic features. In a crowdsourcing task, we asked participants to use emoji to provide semantic representations for a sample of 300 English nouns referring to abstract and concrete concepts; these emoji-based representations thus capture (machine-readable) visual features. In a formal content analysis with multiple annotators, we then classified the cognitive strategies used by the participants to represent conceptual content through emoji. The main results of our analyses show that abstract (vs. concrete) concepts are characterized by representations that: (1) consist of a larger number of emoji; (2) include more face emoji (expressing emotions); (3) are less stable and less shared among users; (4) use representation strategies based on figurative operations (e.g., metaphors) and strategies that exploit linguistic information (e.g., rebus); (5) correlate less well with the semantic representations emerging from classic features listed through verbal strings.
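The fifth finding invites a quantitative reading: emoji-based and verbal feature norms can be treated as two similarity spaces over the same set of concepts and compared at the second order. Below is a minimal, self-contained sketch of one such comparison (cosine similarity within each modality, then a correlation between the two similarity profiles); the concepts, features, and counts are invented for illustration, and the paper's actual data and metrics may differ.

```python
from collections import Counter
from itertools import combinations
from math import sqrt

# Toy per-concept feature counts. Emoji features are individual emoji used by
# participants; verbal features are property strings from a classic
# feature-norming task. All values below are hypothetical.
emoji = {
    "banana":  Counter(["🍌", "🍌", "🐒", "🍴"]),
    "apple":   Counter(["🍎", "🍏", "🍴"]),
    "freedom": Counter(["🕊️", "✊", "😊", "🗽"]),
}
verbal = {
    "banana":  Counter(["is_yellow", "is_fruit", "has_peel"]),
    "apple":   Counter(["is_red", "is_fruit", "is_round"]),
    "freedom": Counter(["is_abstract", "involves_choice", "is_valued"]),
}

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse feature-count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def pearson(x: list[float], y: list[float]) -> float:
    """Pearson correlation between two equal-length similarity profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

# Within-modality similarity for every concept pair, then the cross-modality
# agreement between the two similarity profiles.
pairs = list(combinations(emoji, 2))
sim_emoji = [cosine(emoji[a], emoji[b]) for a, b in pairs]
sim_verbal = [cosine(verbal[a], verbal[b]) for a, b in pairs]
print("emoji vs. verbal similarity profiles, r =",
      round(pearson(sim_emoji, sim_verbal), 2))
```

On this toy data the two modalities agree (fruit concepts cluster together in both spaces); on real norms, a lower correlation for abstract concepts would mirror finding (5).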