Max Kelsen, Brisbane, QLD, 4006, Australia.
QIMR Berghofer Medical Research Institute, Brisbane, QLD, 4006, Australia.
Sci Rep. 2021 Jan 29;11(1):2641. doi: 10.1038/s41598-021-81773-9.
For complex machine learning (ML) algorithms to gain widespread acceptance in decision making, we must be able to identify the features driving the predictions. Explainability models make ML algorithms transparent; however, their reliability on high-dimensional data is unclear. To test the reliability of the explainability model SHapley Additive exPlanations (SHAP), we developed a convolutional neural network to predict tissue classification from Genotype-Tissue Expression (GTEx) RNA-seq data representing 16,651 samples from 47 tissues. Our classifier achieved an average F1 score of 96.1% on held-out GTEx samples. Using SHAP values, we identified the 2423 most discriminatory genes, of which 98.6% were also identified by differential expression analysis across all tissues. The SHAP genes reflected expected biological processes involved in tissue differentiation and function. Moreover, SHAP genes clustered tissue types more accurately than all genes, genes detected by differential expression analysis, or random genes. We demonstrate the utility and reliability of SHAP for explaining a deep learning model and highlight the strengths of applying ML to transcriptome data.
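The attribution principle behind SHAP can be illustrated with a minimal, self-contained sketch (not from the paper): exact Shapley values computed by brute force for a toy linear "expression score" over three hypothetical gene features, where absent features are imputed with a baseline (e.g. mean expression). SHAP itself uses efficient approximations of this same quantity for deep networks.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at sample x, brute force over
    all feature coalitions; absent features take their baseline value."""
    n = len(x)
    phi = [0.0] * n
    features = list(range(n))
    for i in features:
        others = [j for j in features if j != i]
        for r in range(n):
            for S in combinations(others, r):
                def val(subset):
                    # evaluate f with features outside `subset` imputed by baseline
                    z = [x[j] if j in subset else baseline[j] for j in features]
                    return f(z)
                # Shapley kernel weight: |S|! (n-|S|-1)! / n!
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                phi[i] += weight * (val(set(S) | {i}) - val(set(S)))
    return phi

# Toy linear model over 3 hypothetical gene-expression features
# (weights and values are illustrative, not taken from the paper).
w = [2.0, -1.0, 0.5]
f = lambda z: sum(wi * zi for wi, zi in zip(w, z))

x = [3.0, 1.0, 4.0]          # sample to explain
baseline = [1.0, 1.0, 2.0]   # e.g. mean expression per gene
phi = shapley_values(f, x, baseline)
# For a linear model, phi_i = w_i * (x_i - baseline_i),
# and the values sum to f(x) - f(baseline) (efficiency property).
```

Ranking genes by the magnitude of such attributions, aggregated over samples, is what yields a "most discriminatory genes" list like the 2423 reported above.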