
Trust me if you can: a survey on reliability and interpretability of machine learning approaches for drug sensitivity prediction in cancer.

Author information

Center for Bioinformatics, Chair for Bioinformatics, Saarland Informatics Campus (E2.1), Saarland University, D-66123 Saarbrücken, Saarland, Germany.

Publication information

Brief Bioinform. 2024 Jul 25;25(5). doi: 10.1093/bib/bbae379.

Abstract

With the ever-increasing number of artificial intelligence (AI) systems, mitigating the risks associated with their use has become one of the most urgent scientific and societal issues. To this end, the European Union passed the EU AI Act, proposing solution strategies that can be summarized under the umbrella term trustworthiness. In anti-cancer drug sensitivity prediction, machine learning (ML) methods are developed for application in medical decision support systems, which require an extraordinary level of trustworthiness. This review offers an overview of the ML landscape of methods for anti-cancer drug sensitivity prediction, including a brief introduction to the four major ML realms (supervised, unsupervised, semi-supervised, and reinforcement learning). In particular, we address the question of to what extent trustworthiness-related properties, more specifically interpretability and reliability, have been incorporated into anti-cancer drug sensitivity prediction methods over the previous decade. In total, we analyzed 36 papers presenting approaches for anti-cancer drug sensitivity prediction. Our results indicate that the need for reliability has hardly been addressed so far. Interpretability, on the other hand, has often been considered during model development. However, the concept is used rather intuitively and lacks clear definitions. Thus, we propose an easily extensible taxonomy for interpretability, unifying all prevalent connotations explicitly or implicitly used within the field.


Graphical abstract: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4811/11299037/28e0d42dc801/bbae379f1.jpg
