Brennan Jonathan R, Dyer Chris, Kuncoro Adhiguna, Hale John T
University of Michigan, USA.
DeepMind, London, UK.
Neuropsychologia. 2020 Sep;146:107479. doi: 10.1016/j.neuropsychologia.2020.107479. Epub 2020 May 16.
Brain activity in numerous perisylvian brain regions is modulated by the expectedness of linguistic stimuli. We leverage recent advances in computational parsing models to test what representations guide the processes reflected by this activity. Recurrent Neural Network Grammars (RNNGs) are generative models of (tree, string) pairs that use neural networks to drive derivational choices. Parsing with them yields a variety of incremental complexity metrics that we evaluate against a publicly available fMRI dataset recorded while participants simply listen to an audiobook story. Surprisal, which captures a word's unexpectedness, correlates with a wide range of temporal and frontal regions when it is calculated from word-sequence information using a top-performing LSTM neural network language model. The explicit encoding of hierarchy afforded by the RNNG additionally captures activity in left posterior temporal areas. A separate metric tracking the number of derivational steps taken between words correlates with activity in the left temporal lobe and inferior frontal gyrus. This pattern of results narrows down the kinds of linguistic representations at play during predictive processing across the brain's language network.
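The surprisal metric named in the abstract is the negative log probability of a word given its preceding context. A minimal sketch of that computation is below, using a hypothetical hand-set bigram table in place of the LSTM language model the study actually used; all probability values here are invented for illustration.

```python
import math

# Hypothetical conditional probabilities P(word | previous word).
# In the study these probabilities come from a trained LSTM language
# model; the values below are made up purely to illustrate the formula.
cond_prob = {
    ("<s>", "the"): 0.5,
    ("the", "dog"): 0.1,
    ("dog", "barked"): 0.2,
}

def surprisal(prev: str, word: str) -> float:
    """Surprisal in bits: -log2 P(word | context)."""
    return -math.log2(cond_prob[(prev, word)])

# Walk a toy sentence and print each word's surprisal.
prev = "<s>"
for w in ["the", "dog", "barked"]:
    print(f"{w}: {surprisal(prev, w):.2f} bits")
    prev = w
```

Less probable continuations yield higher surprisal, which is the word-level regressor correlated against the fMRI signal; the RNNG-based metrics differ only in that the conditional probabilities (and derivational step counts) come from a model that builds syntactic trees rather than a flat word sequence.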