Shannon Vallor, Tillmann Vierkant
School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, Scotland.
Edinburgh Futures Institute, University of Edinburgh, Edinburgh, Scotland.
Minds Mach (Dordr). 2024;34(3):20. doi: 10.1007/s11023-024-09674-0. Epub 2024 Jun 5.
The responsibility gap, commonly described as a core challenge for the effective governance of, and trust in, AI and autonomous systems (AI/AS), is traditionally associated with a failure of the epistemic and/or the control condition of moral responsibility: the ability to know what we are doing and to exercise competent control over this doing. Yet these two conditions are a red herring when it comes to understanding the responsibility challenges presented by AI/AS, since evidence from the cognitive sciences shows that individual humans face very similar responsibility challenges with regard to these two conditions. While the problems of epistemic opacity and attenuated behaviour control are not unique to AI/AS technologies (though they can be exacerbated by them), we show that we can learn important lessons for AI/AS development and governance from how philosophers have recently revised the traditional concept of moral responsibility in response to these challenges to responsible human agency from the cognitive sciences. The resulting instrumentalist views of responsibility, which emphasize the forward-looking and flexible role of agency cultivation, hold considerable promise for integrating AI/AS into a healthy moral ecology. We note that there is nevertheless a gap in AI/AS responsibility that has yet to be extensively studied and addressed, one grounded in a relational asymmetry of vulnerability between human agents and sociotechnical systems like AI/AS. In the conclusion of this paper we note that attention to this vulnerability gap must inform and enable future attempts to construct trustworthy AI/AS systems and to preserve the conditions for responsible human agency.