Centre for Language Evolution, University of Edinburgh, Edinburgh, UK.
Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands.
Behav Res Methods. 2019 Aug;51(4):1651-1675. doi: 10.3758/s13428-019-01203-7.
We report associations between vowel sounds, graphemes, and colors collected online from over 1,000 Dutch speakers. We provide open materials, including a Python implementation of the structure measure and code for a single-page web application for running simple cross-modal tasks, along with the full dataset of color-vowel associations from 1,164 participants, including over 200 synesthetes identified using consistency measures. Our analysis reveals salient patterns in the cross-modal associations and introduces a novel measure of isomorphism in cross-modal mappings. We found that, while the acoustic features of vowels significantly predict certain mappings (replicating prior work), both vowel phoneme category and grapheme category are even better predictors of color choice. Phoneme category is the best predictor of color choice overall, pointing to the importance of phonological representations in addition to acoustic cues. Generally, high/front vowels are lighter, more green, and more yellow than low/back vowels. Synesthetes respond more strongly on some dimensions, choosing lighter and more yellow colors for high and mid front vowels than nonsynesthetes do. We also present a novel measure of cross-modal mappings adapted from ecology, which uses a simulated distribution of mappings to measure the extent to which participants' actual mappings are structured isomorphically across modalities. Synesthetes' mappings tend to be more structured than nonsynesthetes', and more consistent color choices across trials correlate with higher structure scores. Nevertheless, the large majority (~70%) of participants produce structured mappings, indicating that the capacity to make isomorphically structured mappings across distinct modalities is widely shared, even if the exact nature of the mappings varies across individuals. Overall, this novel structure measure suggests a distribution of structured cross-modal association in the population, with synesthetes at one extreme and participants with unstructured associations at the other.
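To make the simulation-based structure measure concrete, the sketch below shows one way such a measure can be computed: a Mantel-style permutation test, a family of methods with roots in ecology, that compares the correlation between pairwise vowel distances and pairwise color distances against a null distribution of shuffled vowel-color pairings. This is a minimal illustration, not the authors' released implementation; the function name `structure_score`, the feature spaces (e.g., formant values for vowels, CIE Lab for colors), the Euclidean distances, and the Pearson correlation are all illustrative assumptions.

```python
# A hypothetical sketch of a permutation-based structure measure,
# assuming vowels and colors are each represented as points in a
# metric space. Not the paper's released code.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr


def structure_score(vowel_features, colors, n_sim=1000, seed=0):
    """Return the observed vowel-color distance correlation and a
    one-sided p-value against a null of randomly shuffled pairings.

    vowel_features : (n, k) array, e.g. formant values per vowel
    colors         : (n, 3) array, e.g. CIE Lab coordinates chosen
                     for the corresponding vowels
    """
    vowel_features = np.asarray(vowel_features, dtype=float)
    colors = np.asarray(colors, dtype=float)
    rng = np.random.default_rng(seed)

    d_vowel = pdist(vowel_features)           # pairwise distances in vowel space
    d_color = pdist(colors)                   # pairwise distances in color space
    observed, _ = pearsonr(d_vowel, d_color)  # observed isomorphism

    # Null distribution: break the vowel-color pairing by permuting
    # which color goes with which vowel, then recompute the correlation.
    null = np.empty(n_sim)
    for i in range(n_sim):
        perm = rng.permutation(len(colors))
        null[i], _ = pearsonr(d_vowel, pdist(colors[perm]))

    # Proportion of simulated mappings at least as structured as observed.
    p = (np.sum(null >= observed) + 1) / (n_sim + 1)
    return observed, p
```

Under this kind of scheme, a participant whose observed correlation lands in the upper tail of the simulated distribution would count as having a structured mapping, which is one plausible reading of the reported finding that ~70% of participants produce structured mappings.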