IBISC, Univ Evry, Université Paris-Saclay, 23 boulevard de France, 91034, Evry, France.
BMC Bioinformatics. 2020 Nov 4;21(1):501. doi: 10.1186/s12859-020-03836-4.
The use of predictive gene signatures to assist clinical decision-making is becoming increasingly important. Deep learning has great potential for predicting phenotype from gene expression profiles. However, neural networks are often viewed as black boxes that provide accurate predictions without any explanation. The demand for interpretable models is growing, especially in the medical field.
We focus on explaining the predictions of a deep neural network model built from gene expression data. The neurons and genes that most strongly influence the predictions are identified and linked to biological knowledge. Our experiments on cancer prediction show that: (1) the deep learning approach outperforms classical machine learning methods on large training sets; (2) our approach produces interpretations more coherent with biology than state-of-the-art approaches; (3) we can provide biologists and physicians with a comprehensive explanation of the predictions.
We propose an original approach for the biological interpretation of deep learning models for phenotype prediction from gene expression data. Since the model can find relationships between the phenotype and gene expression, we may assume that there is a link between the identified genes and the phenotype. The interpretation can therefore lead to new biological hypotheses to be investigated by biologists.
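To illustrate the general idea of ranking genes by their influence on a network's prediction, the sketch below computes a simple gradient-based saliency score for each input gene of a trained classifier. This is only a minimal illustration, not the authors' actual interpretation method; the architecture, number of genes, and the use of plain input gradients are all assumptions made for the example.

```python
# Minimal sketch (assumed setup, not the paper's method): rank input genes by a
# gradient-based saliency score for a phenotype classifier trained on expression data.
import torch
import torch.nn as nn

n_genes = 1000          # hypothetical number of input genes
model = nn.Sequential(  # hypothetical fully connected classifier
    nn.Linear(n_genes, 128), nn.ReLU(),
    nn.Linear(128, 2),  # two phenotype classes, e.g. tumor vs. normal
)

x = torch.rand(1, n_genes, requires_grad=True)   # one expression profile (toy data)
logits = model(x)
predicted_class = logits.argmax(dim=1).item()

# Gradient of the predicted-class score with respect to each input gene:
# genes with large absolute gradients influence this prediction the most.
logits[0, predicted_class].backward()
saliency = x.grad.abs().squeeze(0)

top_genes = torch.topk(saliency, k=10).indices
print("Indices of the 10 most influential genes:", top_genes.tolist())
```

In practice, the top-ranked genes would then be mapped to biological knowledge (pathways, ontologies) to assess whether the explanation is biologically coherent, as described above.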