Northwestern University, Kellogg School of Management, Evanston, IL, USA.
Trends Cogn Sci. 2023 Oct;27(10):947-960. doi: 10.1016/j.tics.2023.06.008. Epub 2023 Aug 3.
Human social learning is increasingly occurring on online social platforms, such as Twitter, Facebook, and TikTok. On these platforms, algorithms exploit existing social-learning biases (i.e., towards prestigious, ingroup, moral, and emotional information, or 'PRIME' information) to sustain users' attention and maximize engagement. Here, we synthesize emerging insights into 'algorithm-mediated social learning' and propose a framework that examines its consequences in terms of functional misalignment. We suggest that, when social-learning biases are exploited by algorithms, PRIME information becomes amplified via human-algorithm interactions in the digital social environment in ways that cause social misperceptions and conflict, and spread misinformation. We discuss solutions for reducing functional misalignment, including algorithms promoting bounded diversification and increasing transparency of algorithmic amplification.