
Believing in black boxes: machine learning for healthcare does not need explainability to be evidence-based.

Affiliations

Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada.

Department of Anesthesiology & Pain Medicine, University of Toronto, Toronto, Ontario, Canada; Department of Philosophy, University of Toronto, Toronto, Ontario, Canada.

Publication Information

J Clin Epidemiol. 2022 Feb;142:252-257. doi: 10.1016/j.jclinepi.2021.11.001. Epub 2021 Nov 5.

Abstract

OBJECTIVE

To examine the role of explainability in machine learning for healthcare (MLHC) and its necessity and significance with respect to effective and ethical MLHC application.

STUDY DESIGN AND SETTING

This commentary engages with the growing and dynamic corpus of literature on the use of MLHC and artificial intelligence (AI) in medicine, which provides the context for a focused narrative review of arguments presented in favour of and in opposition to explainability in MLHC.

RESULTS

We find that concerns regarding explainability are not limited to MLHC, but rather extend to numerous well-validated treatment interventions as well as to human clinical judgment itself. We examine the role of evidence-based medicine in evaluating inexplicable treatments and technologies, and highlight the analogy between the concept of explainability in MLHC and the related concept of mechanistic reasoning in evidence-based medicine.

CONCLUSION

Ultimately, we conclude that the value of explainability in MLHC is not intrinsic, but is instead instrumental to achieving greater imperatives such as performance and trust. We caution against the uncompromising pursuit of explainability, and advocate instead for the development of robust empirical methods to successfully evaluate increasingly inexplicable algorithmic systems.

