Malinverno Luca, Barros Vesna, Ghisoni Francesco, Visonà Giovanni, Kern Roman, Nickel Philip J, Ventura Barbara Elvira, Šimić Ilija, Stryeck Sarah, Manni Francesca, Ferri Cesar, Jean-Quartier Claire, Genga Laura, Schweikert Gabriele, Lovrić Mario, Rosen-Zvi Michal
Porini SRL, Via Cavour 2, 22074 Lomazzo, Italy.
AI for Accelerated Healthcare & Life Sciences Discovery, IBM R&D Laboratories, University of Haifa Campus, Mount Carmel, Haifa 3498825, Israel.
Patterns (N Y). 2023 Sep 8;4(9):100830. doi: 10.1016/j.patter.2023.100830.
The black-box nature of most artificial intelligence (AI) models encourages the development of explainability methods to engender trust in the AI decision-making process. Such methods can be broadly categorized into two main types: post hoc explanations and inherently interpretable algorithms. We aimed to analyze the possible associations between COVID-19 and the push of explainable AI (XAI) to the forefront of biomedical research. We automatically extracted biomedical XAI studies related to concepts of causality or explainability from the PubMed database and manually labeled 1,603 papers with respect to XAI categories. To compare the trends pre- and post-COVID-19, we fitted a change point detection model and evaluated significant changes in publication rates. We show that the advent of COVID-19 at the beginning of 2020 could be the driving factor behind an increased focus on XAI, playing a crucial role in accelerating an already evolving trend. Finally, we discuss the future societal use and impact of XAI technologies and potential directions for those who seek to foster clinical trust with interpretable machine learning models.
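The change point analysis mentioned above can be illustrated with a toy sketch. This is not the authors' pipeline: the function names, the least-squares cost, and the synthetic monthly counts are all illustrative assumptions, showing only the core idea of locating the split that best separates two publication-rate regimes.

```python
def sq_cost(xs):
    # Sum of squared deviations from the segment mean
    # (the within-segment cost used by least-squares change point methods).
    if not xs:
        return 0.0
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

def single_changepoint(series):
    # Scan every possible split point and return the index that
    # minimizes the combined cost of the two resulting segments.
    best_t, best_cost = None, float("inf")
    for t in range(1, len(series)):
        c = sq_cost(series[:t]) + sq_cost(series[t:])
        if c < best_cost:
            best_t, best_cost = t, c
    return best_t

# Hypothetical monthly publication counts with a rate jump at index 6.
counts = [4, 5, 3, 6, 4, 5, 14, 16, 15, 17, 18, 16]
print(single_changepoint(counts))  # -> 6
```

Real analyses of publication-rate series typically use established implementations (e.g., the PELT algorithm) that handle multiple change points and penalize over-segmentation; the single-split scan above is only the simplest case.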