Cox, Louis Anthony
Department of Business Analytics, University of Colorado School of Business, and MoirAI, 503 N. Franklin Street, Denver, CO 80218, USA.
Entropy (Basel). 2021 May 13;23(5):601. doi: 10.3390/e23050601.
For an AI agent to make trustworthy decision recommendations under uncertainty on behalf of human principals, it should be able to explain how its recommended decisions make preferred outcomes more likely and what risks they entail. Such rationales use causal models to link potential courses of action to resulting outcome probabilities. They reflect an understanding of possible actions, preferred outcomes, the effects of actions on outcome probabilities, and acceptable risks and trade-offs: the standard ingredients of normative theories of decision-making under uncertainty, such as expected utility theory. Competent AI advisory systems should also notice changes that might affect a user's plans and goals. In response, they should apply both learned patterns for quick response (analogous to fast, intuitive "System 1" decision-making in human psychology) and slower causal inference and simulation, decision optimization, and planning algorithms (analogous to deliberative "System 2" decision-making in human psychology) to decide how best to respond to changing conditions. Concepts of conditional independence, conditional probability tables (CPTs) or models, causality, heuristic search for optimal plans, uncertainty reduction, and value of information (VoI) provide a rich, principled framework for recognizing and responding to relevant changes and features of decision problems via both learned and calculated responses. This paper reviews how these and related concepts can be used to identify probabilistic causal dependencies among variables, detect changes that matter for achieving goals, represent them efficiently to support responses on multiple time scales, and evaluate and update causal models and plans in light of new data. The resulting causally explainable decisions make efficient use of available information to achieve goals in uncertain environments.
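The expected-utility and value-of-information concepts named above can be made concrete with a toy example. The following sketch is not from the paper: it assumes a hypothetical two-action decision ("act" vs. "wait"), an illustrative CPT giving P(success | action, state), and made-up utilities, and it computes the best expected utility now versus after learning the state, the difference being the (perfect-information) VoI.

```python
# Minimal sketch (illustrative, not from the paper): expected-utility choice
# and value of perfect information for a toy two-action decision.
# All states, actions, probabilities, and utilities below are assumptions.

# Prior probability distribution over the unknown state of the world
prior = {"good": 0.6, "bad": 0.4}

# CPT: P(outcome = success | action, state)
p_success = {
    ("act", "good"): 0.9, ("act", "bad"): 0.2,
    ("wait", "good"): 0.5, ("wait", "bad"): 0.5,
}

utility = {"success": 100.0, "failure": 0.0}

def expected_utility(action, belief):
    """Expected utility of an action under a belief (distribution over states)."""
    eu = 0.0
    for state, p in belief.items():
        ps = p_success[(action, state)]
        eu += p * (ps * utility["success"] + (1.0 - ps) * utility["failure"])
    return eu

def best_eu(belief):
    """Utility of the best action under the given belief (System 2 deliberation)."""
    return max(expected_utility(a, belief) for a in ("act", "wait"))

# Decide now, using only the prior
eu_now = best_eu(prior)

# Decide after observing the true state (perfect information),
# averaging over which state would be observed
eu_informed = sum(p * best_eu({state: 1.0}) for state, p in prior.items())

# Value of (perfect) information: how much observing the state is worth
voi = eu_informed - eu_now
print(eu_now, eu_informed, voi)  # 62.0 74.0 12.0
```

Here VoI is positive because the informed decision-maker would switch to "wait" in the bad state; an AI advisor could compare this 12-utility gain against the cost of gathering the information before recommending whether to observe first or act immediately.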