In search of a Goldilocks zone for credible AI.

Affiliations

School of Psychology, University of Aberdeen, Aberdeen, AB24 2UB, UK.

School of Natural and Computing Sciences, University of Aberdeen, Aberdeen, AB24 2UB, UK.

Publication Information

Sci Rep. 2021 Jul 1;11(1):13687. doi: 10.1038/s41598-021-93109-8.

Abstract

If artificial intelligence (AI) is to help solve individual, societal and global problems, humans should neither underestimate nor overestimate its trustworthiness. Situated in-between these two extremes is an ideal 'Goldilocks' zone of credibility. But what will keep trust in this zone? We hypothesise that this role ultimately falls to the social cognition mechanisms which adaptively regulate conformity between humans. This novel hypothesis predicts that human-like functional biases in conformity should occur during interactions with AI. We examined multiple tests of this prediction using a collaborative remembering paradigm, where participants viewed household scenes for 30 s vs. 2 min, then saw 2-alternative forced-choice decisions about scene content originating either from AI- or human-sources. We manipulated the credibility of different sources (Experiment 1) and, from a single source, the estimated-likelihood (Experiment 2) and objective accuracy (Experiment 3) of specific decisions. As predicted, each manipulation produced functional biases for AI-sources mirroring those found for human-sources. Participants conformed more to higher credibility sources, and higher-likelihood or more objectively accurate decisions, becoming increasingly sensitive to source accuracy when their own capability was reduced. These findings support the hypothesised role of social cognition in regulating AI's influence, raising important implications and new directions for research on human-AI interaction.
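To make the experimental logic concrete, below is a minimal Python sketch of the 2-alternative forced-choice (2AFC) conformity setup the abstract describes: participants with weaker own memory (short 30 s encoding) should conform more to a source, and conformity should scale with source credibility. The `participant_accuracy` and `conformity_probability` functions and all numeric values are hypothetical illustrations, not the authors' model or analysis code.

```python
import random

# Hypothetical simulation of a 2AFC collaborative remembering trial.
# Parameter values and the conformity rule are illustrative assumptions only.

ENCODING_CONDITIONS = {"short": 30, "long": 120}  # seconds of scene viewing

def participant_accuracy(encoding_s: int) -> float:
    """Assumed own-memory accuracy: longer encoding -> more reliable memory."""
    return 0.55 if encoding_s == 30 else 0.80

def conformity_probability(own_acc: float, source_credibility: float) -> float:
    """Toy regulation rule: weigh the source's answer against one's own
    capability by combining the two accuracies as odds (a Bayesian-style
    reading of the 'Goldilocks' trust-calibration idea)."""
    num = source_credibility * (1.0 - own_acc)
    den = num + own_acc * (1.0 - source_credibility)
    return num / den

def run_trials(n: int, encoding_s: int, source_credibility: float,
               seed: int = 0) -> float:
    """Simulate n trials and return the proportion on which the simulated
    participant conforms to the source's decision."""
    rng = random.Random(seed)
    own_acc = participant_accuracy(encoding_s)
    p = conformity_probability(own_acc, source_credibility)
    conform = sum(rng.random() < p for _ in range(n))
    return conform / n

if __name__ == "__main__":
    for label, secs in ENCODING_CONDITIONS.items():
        for src, cred in [("low-credibility AI", 0.6),
                          ("high-credibility AI", 0.9)]:
            rate = run_trials(10_000, secs, cred)
            print(f"{label} encoding ({secs}s), {src}: conformity ~ {rate:.2f}")
```

Under these assumed parameters the simulation reproduces the qualitative pattern reported in the abstract: conformity is higher for the more credible source, and the gap between high- and low-credibility sources widens when the participant's own capability is reduced by short encoding.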

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/49e2/8249604/19482660359f/41598_2021_93109_Fig1_HTML.jpg
