Moreno Jonathan, Gross Michael L, Becker Jack, Hereth Blake, Shortland Neil D, Evans Nicholas G
Department of Bioethics, School of Medicine, University of Pennsylvania, Philadelphia, PA, United States.
School of Political Science, University of Haifa, Haifa, Israel.
Front Big Data. 2022 Sep 9;5:978734. doi: 10.3389/fdata.2022.978734. eCollection 2022.
The military applications of AI raise myriad ethical challenges. Critical among them is how AI integrates with human decision making to enhance cognitive performance on the battlefield. AI applications range from augmented reality devices that assist learning and improve training to implantable Brain-Computer Interfaces (BCI) intended to create bionic "super soldiers." As these technologies mature, AI-wired warfighters face potential affronts to cognitive liberty, psychological and physiological health risks, and obstacles to integrating into military and civil society during their service and upon discharge. Before coming online and operational, however, AI-assisted technologies and neural interfaces require extensive research and human experimentation. Each endeavor raises additional ethical concerns that have been historically ignored, leaving military and medical scientists without a cogent ethics protocol for sustainable research. In this way, this paper is a "prequel" to the current debate over enhancement, which largely considers neuro-technologies once they are already out the door and operational. To lay the ethics foundation for AI-assisted warfighter enhancement research, we present a historical overview of its technological development followed by a presentation of salient ethics research issues (ICRC, 2006). We begin with a historical survey of AI neuro-enhancement research, highlighting the ethics lacunae of its development. We demonstrate the unique ethical problems posed by the convergence of several technologies in the military research setting. We then address these deficiencies by emphasizing how AI-assisted warfighter enhancement research must pay particular attention to military necessity and to the medical and military cost-benefit tradeoffs of emerging technologies, while attending to the unique status of warfighters as experimental subjects.
Finally, our focus is the enhancement of friendly or compatriot warfighters and not, as others have focused, enhancements intended to pacify enemy warfighters.