Find the Gap: AI, Responsible Agency and Vulnerability.

Author Information

Shannon Vallor, Tillmann Vierkant

Affiliations

School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, Scotland.

Edinburgh Futures Institute, University of Edinburgh, Edinburgh, Scotland.

Publication Information

Minds Mach (Dordr). 2024;34(3):20. doi: 10.1007/s11023-024-09674-0. Epub 2024 Jun 5.

Abstract

The responsibility gap, commonly described as a core challenge for the effective governance of, and trust in, AI and autonomous systems (AI/AS), is traditionally associated with a failure of the epistemic and/or the control condition of moral responsibility: the ability to know what we are doing and exercise competent control over this doing. Yet these two conditions are a red herring when it comes to understanding the responsibility challenges presented by AI/AS, since evidence from the cognitive sciences shows that individual humans face very similar responsibility challenges with regard to these two conditions. While the problems of epistemic opacity and attenuated behaviour control are not unique to AI/AS technologies (though they can be exacerbated by them), we show that we can learn important lessons for AI/AS development and governance from how philosophers have recently revised the traditional concept of moral responsibility in response to these challenges to responsible human agency from the cognitive sciences. The resulting instrumentalist views of responsibility, which emphasize the forward-looking and flexible role of agency cultivation, hold considerable promise for integrating AI/AS into a healthy moral ecology. We note that there is nevertheless a gap in AI/AS responsibility that has yet to be extensively studied and addressed, one grounded in a relational asymmetry of vulnerability between human agents and sociotechnical systems like AI/AS. In the conclusion of this paper we note that attention to this vulnerability gap must inform and enable future attempts to construct trustworthy AI/AS systems and preserve the conditions for responsible human agency.

Similar Articles

1
Find the Gap: AI, Responsible Agency and Vulnerability.
Minds Mach (Dordr). 2024;34(3):20. doi: 10.1007/s11023-024-09674-0. Epub 2024 Jun 5.
2
Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability.
Sci Eng Ethics. 2020 Aug;26(4):2051-2068. doi: 10.1007/s11948-019-00146-8. Epub 2019 Oct 24.
3
Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible.
Camb Q Healthc Ethics. 2021 Jul;30(3):435-447. doi: 10.1017/S0963180120000985.
4
When Doctors and AI Interact: on Human Responsibility for Artificial Risks.
Philos Technol. 2022;35(1):11. doi: 10.1007/s13347-022-00506-6. Epub 2022 Feb 19.
5
Who is responsible? US Public perceptions of AI governance through the lenses of trust and ethics.
Public Underst Sci. 2024 Jul;33(5):654-672. doi: 10.1177/09636625231224592. Epub 2024 Feb 7.
6
Artificial Agents in Natural Moral Communities: A Brief Clarification.
Camb Q Healthc Ethics. 2021 Jul;30(3):455-458. doi: 10.1017/S0963180120001000.
7
First-person representations and responsible agency in AI.
Synthese. 2021;199(3-4):7061-7079. doi: 10.1007/s11229-021-03105-8. Epub 2021 Mar 19.
8
Diffused responsibility: attributions of responsibility in the use of AI-driven clinical decision support systems.
AI Ethics. 2022;2(4):747-761. doi: 10.1007/s43681-022-00135-x. Epub 2022 Jan 24.
9
Instrumental Robots.
Sci Eng Ethics. 2020 Dec;26(6):3121-3141. doi: 10.1007/s11948-020-00259-5. Epub 2020 Aug 19.
10
Ethics and governance of trustworthy medical artificial intelligence.
BMC Med Inform Decis Mak. 2023 Jan 13;23(1):7. doi: 10.1186/s12911-023-02103-9.
