Budhkar Aishwarya, Song Qianqian, Su Jing, Zhang Xuhong
Department of Computer Science, Luddy School of Informatics, Computing, and Engineering, Indiana University Bloomington, 700 N Woodlawn Ave, Bloomington, IN 47408, USA.
Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida, 1889 Museum Rd, Suite 7000, Gainesville, FL 32611, USA.
Comput Struct Biotechnol J. 2025 Jan 10;27:346-359. doi: 10.1016/j.csbj.2024.12.027. eCollection 2025.
The widespread adoption of Artificial Intelligence (AI) and machine learning (ML) tools across various domains has showcased their remarkable capabilities and performance. However, black-box AI models raise concerns about decision transparency and user confidence. Explainable AI (XAI) and explainability techniques have therefore emerged rapidly in recent years. This paper reviews existing work on explainability techniques in bioinformatics, with a particular focus on omics and imaging data. We analyze the growing demand for XAI in bioinformatics, identify current XAI approaches, and highlight their limitations. Our survey emphasizes the specific needs of both bioinformatics applications and their users when developing XAI methods. Our analysis reveals a significant demand for XAI in bioinformatics, driven by the need for transparency and user confidence in decision-making processes. We conclude the survey with practical guidelines for system developers.