Cornell University Tech Policy Institute, Menlo Park, CA, United States of America.
Stanford Center for International Security and Cooperation, Stanford, CA, United States of America.
PLoS One. 2023 Jul 18;18(7):e0288109. doi: 10.1371/journal.pone.0288109. eCollection 2023.
Advances in Artificial Intelligence (AI) are poised to transform society, national defense, and the economy by increasing efficiency, precision, and safety. Yet widespread adoption within society depends on public trust and willingness to use AI-enabled technologies. In this study, we propose the possibility of an AI "trust paradox," in which individuals' willingness to use AI-enabled technologies exceeds their level of trust in these capabilities. We conduct a two-part study to explore the trust paradox. First, we conduct a conjoint analysis, varying different attributes of AI-enabled technologies across domains (armed drones, general surgery, police surveillance, self-driving cars, and social media content moderation) to evaluate whether and under what conditions a trust paradox may exist. Second, we use causal mediation analysis in the context of a second survey experiment to help explain why individuals use AI-enabled technologies that they do not trust. We find strong support for the trust paradox, particularly in the area of AI-enabled police surveillance, where support for its use is not only higher than in other domains but also significantly exceeds trust. We unpack these findings to show that several underlying beliefs help account for public support, including fear of missing out, optimism that future versions of the technology will be more trustworthy, a belief that the benefits of AI-enabled technologies outweigh the risks, and a calculation that AI-enabled technologies yield efficiency gains. Our findings have important implications for the integration of AI-enabled technologies in multiple settings.
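To make the first study's method concrete, the following is a minimal sketch of how average marginal component effects (AMCEs) are typically estimated from a conjoint design with fully randomized attributes: a linear regression of the choice outcome on attribute dummies, with standard errors clustered by respondent to account for repeated tasks. The attribute names, levels, and simulated data below are illustrative assumptions, not the authors' instrument or results.

```python
# Hypothetical AMCE estimation sketch for a conjoint survey experiment.
# Attributes, levels, and the simulated outcome are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_respondents, tasks = 500, 5
n = n_respondents * tasks

# Each row is one randomized AI-technology profile a respondent evaluated,
# with a binary "willing to use" outcome.
df = pd.DataFrame({
    "respondent": np.repeat(np.arange(n_respondents), tasks),
    "domain": rng.choice(
        ["armed_drones", "surgery", "police_surveillance",
         "self_driving", "content_moderation"], n),
    "accuracy": rng.choice(["low", "high"], n),
    "oversight": rng.choice(["none", "human_in_loop"], n),
})
# Toy data-generating process: willingness loosely favors accuracy/oversight.
score = (0.4 * (df["accuracy"] == "high")
         + 0.3 * (df["oversight"] == "human_in_loop") - 0.2)
df["use"] = rng.binomial(1, 1 / (1 + np.exp(-score)))

# With full randomization, OLS coefficients on attribute dummies are AMCEs;
# cluster-robust SEs handle within-respondent correlation across tasks.
model = smf.ols(
    "use ~ C(domain, Treatment('content_moderation'))"
    " + C(accuracy) + C(oversight)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["respondent"]})
print(model.summary())
```

Each coefficient is read as the change in the probability of being willing to use the technology when an attribute moves from its baseline level to the listed level, averaged over the other randomized attributes.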