

Just preparation for war and AI-enabled weapons.

Authors

Mitt Regan, Jovana Davidovic

Affiliations

Georgetown Law, Georgetown College, Georgetown University, Washington, DC, United States.

Philosophy Department, The University of Iowa, Iowa City, IA, United States.

Publication Information

Front Big Data. 2023 May 12;6:1020107. doi: 10.3389/fdata.2023.1020107. eCollection 2023.

Abstract

This paper maintains that the just war tradition provides a useful framework for analyzing ethical issues related to the development of weapons that incorporate artificial intelligence (AI), or "AI-enabled weapons." While development of any weapon carries the risk of violations of jus ad bellum and jus in bello, AI-enabled weapons can pose distinctive risks of these violations. The article argues that developing AI-enabled weapons in accordance with jus ante bellum principles of just preparation for war can help minimize the risk of these violations. These principles impose two obligations. The first is that before deploying an AI-enabled weapon a state must rigorously test its safety and reliability, and conduct review of its ability to comply with international law. Second, a state must develop AI-enabled weapons in ways that minimize the likelihood that a security dilemma will arise, in which other states feel threatened by this development and hasten to deploy such weapons without sufficient testing and review. Ethical development of weapons that incorporate AI therefore requires that a state focus not only on its own activity, but on how that activity is perceived by other states.


Similar Articles

1. Just preparation for war and AI-enabled weapons.
   Front Big Data. 2023 May 12;6:1020107. doi: 10.3389/fdata.2023.1020107. eCollection 2023.
2. Applying the rules of just war theory to engineers in the arms industry.
   Sci Eng Ethics. 2006 Oct;12(4):685-700. doi: 10.1007/s11948-006-0064-1.
3. Nuclear weapons and medicine: some ethical dilemmas.
   J Med Ethics. 1983 Dec;9(4):200-6. doi: 10.1136/jme.9.4.200.
4. Introducing jus ante bellum as a cosmopolitan approach to humanitarian intervention.
   Eur J Int Relat. 2016 Dec;22(4):897-919. doi: 10.1177/1354066115607370. Epub 2015 Oct 14.
5. Ethics and governance of trustworthy medical artificial intelligence.
   BMC Med Inform Decis Mak. 2023 Jan 13;23(1):7. doi: 10.1186/s12911-023-02103-9.
6. On the purpose of meaningful human control of AI.
   Front Big Data. 2023 Jan 9;5:1017677. doi: 10.3389/fdata.2022.1017677. eCollection 2022.
7. Adopting AI: how familiarity breeds both trust and contempt.
   AI Soc. 2023 May 12:1-15. doi: 10.1007/s00146-023-01666-5.
8. Transparent human - (non-) transparent technology? The Janus-faced call for transparency in AI-based health care technologies.
   Front Genet. 2022 Aug 22;13:902960. doi: 10.3389/fgene.2022.902960. eCollection 2022.

Cited By

1. Editorial: Ethical challenges in AI-enhanced military operations.
   Front Big Data. 2023 Jun 19;6:1229252. doi: 10.3389/fdata.2023.1229252. eCollection 2023.

References

1. Validating and Verifying AI Systems.
   Patterns (N Y). 2020 Jun 12;1(3):100037. doi: 10.1016/j.patter.2020.100037.
