Mitt Regan, Jovana Davidovic
Georgetown Law, Georgetown College, Georgetown University, Washington, DC, United States.
Philosophy Department, The University of Iowa, Iowa City, IA, United States.
Front Big Data. 2023 May 12;6:1020107. doi: 10.3389/fdata.2023.1020107. eCollection 2023.
This paper maintains that the just war tradition provides a useful framework for analyzing ethical issues related to the development of weapons that incorporate artificial intelligence (AI), or "AI-enabled weapons." While the development of any weapon carries the risk of violations of jus ad bellum and jus in bello, AI-enabled weapons can pose distinctive risks of these violations. The article argues that developing AI-enabled weapons in accordance with jus ante bellum principles of just preparation for war can help minimize the risk of these violations. These principles impose two obligations. The first is that before deploying an AI-enabled weapon, a state must rigorously test its safety and reliability and conduct a review of its ability to comply with international law. The second is that a state must develop AI-enabled weapons in ways that minimize the likelihood of a security dilemma, in which other states feel threatened by this development and hasten to deploy such weapons without sufficient testing and review. Ethical development of weapons that incorporate AI therefore requires that a state focus not only on its own activity, but also on how that activity is perceived by other states.