Department of Physics, IFIBA-University of Buenos Aires, Buenos Aires, Argentina.
Department of Psychology, New York University, New York, United States of America.
PLoS One. 2018 Mar 21;13(3):e0193466. doi: 10.1371/journal.pone.0193466. eCollection 2018.
Sound-symbolic word classes are found in different cultures and languages worldwide. These words are continuously produced to code complex information about events. Here we explore the capacity of creative language to convey complex multisensory information in a controlled experiment, in which participants improvised onomatopoeias for noisy moving objects presented in auditory, visual, and audiovisual formats. We found that consonants communicate movement types (slide, hit, or ring) mainly through the manner of articulation in the vocal tract. Vowels communicate shapes in visual stimuli (spiky or rounded) and sound frequencies in auditory stimuli through the configuration of the lips and tongue. A machine learning model was trained to classify movement types and used to validate the generalization of our results across formats. We then applied the classifier to a list of cross-linguistic onomatopoeias: simple actions were correctly classified, while different aspects were selected to build onomatopoeias of complex actions. These results show how the different aspects of complex sensory information are coded and how they interact in the creation of novel onomatopoeias.
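To make the classification step concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of how phonetic features of an improvised onomatopoeia could be mapped to movement types. The feature encoding, example words, and the choice of logistic regression are illustrative assumptions only; the abstract does not specify the model used.

```python
# Illustrative sketch: predict movement type (slide, hit, ring) from coarse
# phonetic features of an onomatopoeia. All features and examples are hypothetical.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: each onomatopoeia is described by the manner
# of articulation of its dominant consonant and its dominant vowel, paired with
# the movement type of the stimulus it was produced for.
train_features = [
    {"manner": "fricative", "vowel": "i"},   # e.g. a "ssss"-like word -> slide
    {"manner": "plosive",   "vowel": "a"},   # e.g. a "tak"-like word  -> hit
    {"manner": "nasal",     "vowel": "i"},   # e.g. a "ning"-like word -> ring
    {"manner": "fricative", "vowel": "u"},
    {"manner": "plosive",   "vowel": "o"},
    {"manner": "nasal",     "vowel": "e"},
]
train_labels = ["slide", "hit", "ring", "slide", "hit", "ring"]

# One-hot encode the categorical phonetic features and fit a simple
# multinomial classifier.
model = make_pipeline(DictVectorizer(sparse=False),
                      LogisticRegression(max_iter=1000))
model.fit(train_features, train_labels)

# Classify a new (hypothetical) onomatopoeia by its phonetic features.
print(model.predict([{"manner": "plosive", "vowel": "a"}]))  # -> ['hit']
```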