The Vector Institute for Artificial Intelligence, Toronto, ON, Canada.
The School of Engineering, The University of Guelph, Guelph, ON, Canada.
Psychon Bull Rev. 2021 Apr;28(2):454-475. doi: 10.3758/s13423-020-01825-5. Epub 2020 Nov 6.
Artificial intelligence powered by deep neural networks has reached a level of complexity where it can be difficult or impossible to express how a model makes its decisions. This black-box problem is especially concerning when the model makes decisions with consequences for human well-being. In response, an emerging field called explainable artificial intelligence (XAI) aims to increase the interpretability, fairness, and transparency of machine learning. In this paper, we describe how cognitive psychologists can make contributions to XAI. The human mind is also a black box, and cognitive psychologists have over 150 years of experience modeling it through experimentation. We ought to translate the methods and rigor of cognitive psychology to the study of artificial black boxes in the service of explainability. We provide a review of XAI for psychologists, arguing that current methods possess a blind spot that the experimental cognitive tradition can fill. We also provide a framework for research in XAI, highlight exemplary cases of experimentation within XAI inspired by psychological science, and provide a tutorial on experimenting with machines. We end by noting the advantages of an experimental approach and invite other psychologists to conduct research in this exciting new field.