Department of Computer Science, Tufts University, Medford, Massachusetts, United States of America.
Department of Psychology, Tufts University, Medford, Massachusetts, United States of America.
PLoS One. 2022 Jan 7;17(1):e0261811. doi: 10.1371/journal.pone.0261811. eCollection 2022.
Understanding the spread of false or dangerous beliefs (often called misinformation or disinformation) through a population has never seemed so urgent. Network science researchers have often taken a page from epidemiologists and modeled the spread of false beliefs as similar to how a disease spreads through a social network. However, absent from those disease-inspired models is an internal model of an individual's set of current beliefs, even though cognitive science has increasingly documented how the interaction between mental models and incoming messages is crucially important for the adoption or rejection of those messages. Some computational social science modelers analyze agent-based models in which individuals do have simulated cognition, but those models often lack the strengths of network science, namely empirically driven network structures. We introduce a cognitive cascade model that combines a network science belief cascade approach with an internal cognitive model of the individual agents, as in opinion diffusion models, yielding a public opinion diffusion (POD) model in which media institutions are agents that begin opinion cascades. We show that the model, even with a very simplistic belief function capturing cognitive effects cited in disinformation research (dissonance and exposure), adds expressive power over existing cascade models. We analyze the cognitive cascade model with our simple cognitive function across various graph topologies and institutional messaging patterns. We argue from our results that the population-level aggregate outcomes of the model qualitatively match what has been reported in COVID-related public opinion polls, and that the model dynamics lend insight into how to address the spread of problematic beliefs.
The overall model sets up a framework with which social science misinformation researchers and computational opinion diffusion modelers can join forces to understand, and hopefully learn how to best counter, the spread of disinformation and "alternative facts."