Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720.
Department of Economics, Stanford University, Stanford, CA 94305.
Proc Natl Acad Sci U S A. 2021 May 11;118(19). doi: 10.1073/pnas.2010144118.
We present two models of belief formation grounded in machine learning theory and show how they illuminate observed human phenomena, in particular how polarized beliefs can arise even when people are exposed to nearly identical sources of information. In the first model, people form beliefs that are deterministic functions best fitting their past data (training sets). Because such beliefs cannot express probabilities, people can come to hold opposing views even when their data are drawn from distributions that disagree only slightly. In the second model, people pay a cost that increases with the complexity of the function representing their beliefs. Here, even with large training sets drawn from exactly the same distribution, agents can disagree substantially because they simplify the world along different dimensions. We discuss what these models of belief formation suggest for improving people's accuracy and agreement.
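The first mechanism can be sketched concretely. In this minimal illustration (our own, not the authors' formal model), each agent's belief is the single deterministic label that best fits its training set, a majority vote with no probabilistic hedging. Two agents whose data come from distributions that disagree by only a few percentage points then land on polar-opposite beliefs:

```python
import random

def best_fit_belief(data):
    # Deterministic belief: the one label that best fits the training set
    # (majority vote); no probabilistic hedging is allowed.
    return int(sum(data) > len(data) / 2)

def sample(p, n, rng):
    # n binary observations, each 1 with probability p.
    return [int(rng.random() < p) for _ in range(n)]

rng = random.Random(0)
eps = 0.02  # the two data-generating distributions disagree by only 2*eps

agent_a = best_fit_belief(sample(0.5 + eps, 100_000, rng))
agent_b = best_fit_belief(sample(0.5 - eps, 100_000, rng))

# Despite nearly identical data sources, the agents' deterministic
# beliefs are opposites: agent_a believes 1, agent_b believes 0.
print(agent_a, agent_b)
```

With large samples the majority vote almost surely tracks whether each agent's distribution sits above or below one half, so a tiny gap in the data translates into total disagreement in the beliefs.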
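The second mechanism can be illustrated similarly. In this toy sketch (again our own illustration, under the assumption that complexity costs push each agent to a one-feature model), the world depends symmetrically on two features, both maximally simple one-feature beliefs fit the data equally well, and agents who break the tie differently end up disagreeing on every case where the features differ:

```python
import itertools

# World: the outcome depends symmetrically on two binary features.
def outcome(x1, x2):
    return x1 | x2

data = [((x1, x2), outcome(x1, x2))
        for x1, x2 in itertools.product([0, 1], repeat=2)]

def one_feature_model(dim):
    # A maximally simple (low-complexity) belief: use one feature only.
    return lambda x: x[dim]

def accuracy(model):
    return sum(model(x) == y for x, y in data) / len(data)

agent_a = one_feature_model(0)  # simplifies the world along dimension 0
agent_b = one_feature_model(1)  # simplifies the world along dimension 1

# Both simplifications fit the same data equally well...
assert accuracy(agent_a) == accuracy(agent_b)

# ...yet the agents disagree on every case where the features differ.
disagreements = [x for x, _ in data if agent_a(x) != agent_b(x)]
print(disagreements)
```

The point of the sketch is that the disagreement is not driven by different data: both agents see the same four cases, and the complexity penalty alone, by forcing a choice of which dimension to keep, produces divergent beliefs.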