Dey Sanjoy, Chakraborty Prithwish, Kwon Bum Chul, Dhurandhar Amit, Ghalwash Mohamed, Suarez Saiz Fernando J, Ng Kenney, Sow Daby, Varshney Kush R, Meyer Pablo
Center for Computational Health, IBM Thomas J. Watson Research Center, Yorktown Heights, NY 10598, USA.
IBM Research AI, IBM Thomas J. Watson Research Center, Yorktown Heights, NY 10598, USA.
Patterns (N Y). 2022 May 13;3(5):100493. doi: 10.1016/j.patter.2022.100493.
Rapid advances in artificial intelligence (AI) and the growing availability of biological, medical, and healthcare data have enabled the development of a wide variety of models. Significant success has been achieved across fields such as genomics, protein folding, disease diagnosis, imaging, and clinical tasks. Although widely used, deep AI models have drawn criticism from the research community for their inherent opacity and have seen little adoption in clinical practice. Concurrently, a significant body of research, reviewed here, has focused on making such methods more interpretable, but inherent critiques of explainability in AI (XAI), its requirements, and concerns about fairness and robustness have hampered real-world adoption. We discuss how user-driven XAI can be made more useful for different healthcare stakeholders through the definition of three key personas (data scientists, clinical researchers, and clinicians) and present an overview of how different XAI approaches can address their needs. For illustration, we also walk through several research and clinical examples that take advantage of open-source XAI tools, including those that enhance explanations through visualization. This perspective thus aims to serve as a guide for developing explainability solutions in healthcare, empowering both subject matter experts, by providing them with a survey of available tools, and explainability developers, by providing examples of how such methods can shape the adoption of solutions in practice.
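As a concrete illustration of the kind of post hoc, feature-attribution explanation with visualization that the abstract alludes to, the sketch below uses the open-source SHAP library on a toy clinical-style regression task. SHAP, the diabetes-progression dataset, and the random-forest model are illustrative assumptions, not the specific tools or examples from the paper.

```python
# Minimal sketch: feature-attribution explanations with visualization,
# assuming SHAP and scikit-learn as stand-ins for the open-source XAI
# tools surveyed in the paper.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# A disease-progression regression task stands in for a clinical prediction model.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Global view (data scientist / clinical researcher persona):
# which features drive predictions across the cohort.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)

# Local view (clinician persona): why the model scored one patient as it did.
shap.force_plot(explainer.expected_value, shap_values[0], X_test.iloc[0],
                matplotlib=True)
```

The same pattern (fit a model, compute attributions, then render global and per-patient visual explanations) carries over to other attribution-based XAI toolkits; only the explainer and plotting calls change.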