Empirical Studies of Conflict Project, Princeton University, Princeton, NJ, USA.
AI for Good Research Lab, Microsoft, Redmond, WA, USA.
Sci Adv. 2024 Nov;10(44):eadn3750. doi: 10.1126/sciadv.adn3750. Epub 2024 Oct 30.
Do search engine algorithms systematically expose users to content from unreliable sites? There is widespread concern that they do, but little systematic evidence that search engine algorithms, rather than user-expressed preferences, are driving current exposure to and engagement with unreliable information sources. Using two datasets totaling roughly 14 billion search engine result pages (SERPs) from Bing, the second most popular search engine in the U.S., we show that search exposes users to few unreliable information sources. The vast majority of engagement with unreliable information sources from search occurs when users are explicitly searching for information from those sites, despite those searches being an extremely small share of the overall search volume. Our findings highlight the importance of accounting for user preference when examining engagement with unreliable sources from web search.