Rapid Trust Calibration through Interpretable and Uncertainty-Aware AI
Richard Tomsett, Alun Preece, Dave Braines, Federico Cerutti, Supriyo Chakraborty, Mani Srivastava, Gavin Pearson, Lance Kaplan
Emerging Technology, IBM Research Europe, Hursley Park Road, Hursley SO21 2JN, UK.
Crime and Security Research Institute, Cardiff University, Friary House, Greyfriars Road, Cardiff CF10 3AE, UK.
Patterns (N Y). 2020 Jul 10;1(4):100049. doi: 10.1016/j.patter.2020.100049.
Artificial intelligence (AI) systems hold great promise as decision-support tools, but we must be able to identify and understand their inevitable mistakes if they are to fulfill this potential. This is particularly true in domains where the decisions are high-stakes, such as law, medicine, and the military. In this Perspective, we describe the particular challenges for AI decision support posed in military coalition operations. These include having to deal with limited, low-quality data, which inevitably compromises AI performance. We suggest that these problems can be mitigated by taking steps that allow rapid trust calibration so that decision makers understand the AI system's limitations and likely failures and can calibrate their trust in its outputs appropriately. We propose that AI services can achieve this by being both interpretable and uncertainty-aware. Creating such AI systems poses various technical and human factors challenges. We review these challenges and recommend directions for future research.
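To make the abstract's notion of an "uncertainty-aware" AI service concrete, the sketch below shows one common pattern: a classifier that reports a confidence value alongside each prediction and abstains (defers to the human decision maker) when its predictive entropy is too high. This is an illustrative assumption, not the method described in the paper; the entropy-based uncertainty measure, the 0.5-nat threshold, and all function names are introduced here purely for exposition.

import numpy as np

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    z = logits - logits.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(probs):
    """Shannon entropy (in nats) of the predictive distribution; higher means more uncertain."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1)

def decide_or_defer(logits, entropy_threshold=0.5):
    """Return (label, confidence) when uncertainty is acceptable,
    or (None, confidence) to signal that the human should decide.
    The 0.5-nat threshold is an arbitrary illustrative choice."""
    probs = softmax(np.asarray(logits, dtype=float))
    label = int(probs.argmax())
    confidence = float(probs.max())
    if predictive_entropy(probs) > entropy_threshold:
        return None, confidence  # abstain: too uncertain for automated action
    return label, confidence

# A confident prediction is acted on; an ambiguous one is deferred to the human.
print(decide_or_defer([4.0, 0.1, 0.2]))  # (0, ~0.96)
print(decide_or_defer([1.0, 0.9, 1.1]))  # (None, ~0.37)

Surfacing an explicit per-prediction uncertainty signal in this way gives the decision maker something concrete to calibrate their trust against, rather than a single aggregate accuracy figure.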