Dragutin Petkovic, Lester Kobzik, Christopher Re
Computer Science Department, San Francisco State University (SFSU), 1600 Holloway Ave, San Francisco, CA 94132, USA.
Pac Symp Biocomput. 2018;23:623-627.
The goals of this workshop are to discuss challenges in the explainability of current Machine Learning and Deep Analytics (MLDA) used in biocomputing and to start the discussion on ways to improve it. We define explainability in MLDA as easy-to-use information explaining why and how the MLDA approach made its decisions. We believe that much greater effort is needed to address the issue of MLDA explainability because of: 1) the ever-increasing use of and dependence on MLDA in biocomputing, including the need for increased adoption by non-MLDA experts; 2) the diversity, complexity, and scale of biocomputing data and MLDA algorithms; and 3) the emerging importance of MLDA-based decisions in patient care and daily research, as well as in the development of new, costly medical procedures and drugs. This workshop aims to: a) analyze and challenge the current level of explainability of MLDA methods and practices in biocomputing; b) explore the benefits of improvements in this area; and c) provide useful and practical guidance to the biocomputing community on how to address these challenges and develop improvements. The workshop format is designed to encourage a lively discussion with panelists, first to motivate and understand the problem and then to define the next steps and solutions needed to improve MLDA explainability.