Laux Johann, Wachter Sandra, Mittelstadt Brent
Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford OX1 3JS, UK.
Regul Gov. 2024 Jan;18(1):3-32. doi: 10.1111/rego.12512. Epub 2023 Feb 6.
In its AI Act, the European Union chose to understand the trustworthiness of AI in terms of the acceptability of its risks. Based on a narrative systematic literature review on institutional trust and AI in the public sector, this article argues that the EU adopted a simplistic conceptualization of trust and is overselling its regulatory ambition. The paper begins by reconstructing the conflation of "trustworthiness" with "acceptability" in the AI Act. It continues by developing a prescriptive set of variables for reviewing trust research in the context of AI. The paper then uses those variables for a narrative review of prior research on trust and trustworthiness in AI in the public sector. Finally, it relates the findings of the review to the EU's AI policy. The EU's prospects of successfully engineering citizens' trust are uncertain. There remains a threat of misalignment between levels of actual trust and the trustworthiness of applied AI.