

Believing in black boxes: machine learning for healthcare does not need explainability to be evidence-based.

Author Information

Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada.

Department of Anesthesiology & Pain Medicine, University of Toronto, Toronto, Ontario, Canada; Department of Philosophy, University of Toronto, Toronto, Ontario, Canada.

Publication Information

J Clin Epidemiol. 2022 Feb;142:252-257. doi: 10.1016/j.jclinepi.2021.11.001. Epub 2021 Nov 5.

Abstract

OBJECTIVE

To examine the role of explainability in machine learning for healthcare (MLHC), and its necessity and significance with respect to effective and ethical MLHC application.

STUDY DESIGN AND SETTING

This commentary engages with the growing and dynamic corpus of literature on the use of MLHC and artificial intelligence (AI) in medicine, which provides the context for a focused narrative review of arguments presented in favour of and in opposition to explainability in MLHC.

RESULTS

We find that concerns regarding explainability are not limited to MLHC, but rather extend to numerous well-validated treatment interventions as well as to human clinical judgment itself. We examine the role of evidence-based medicine in evaluating inexplicable treatments and technologies, and highlight the analogy between the concept of explainability in MLHC and the related concept of mechanistic reasoning in evidence-based medicine.

CONCLUSION

Ultimately, we conclude that the value of explainability in MLHC is not intrinsic, but is instead instrumental to achieving greater imperatives such as performance and trust. We caution against the uncompromising pursuit of explainability, and advocate instead for the development of robust empirical methods to successfully evaluate increasingly inexplicable algorithmic systems.

