Esser-Skala Wolfgang, Fortelny Nikolaus
Computational Systems Biology Group, Department of Biosciences and Medical Biology, University of Salzburg, Hellbrunner Straße 34, 5020, Salzburg, Austria.
NPJ Syst Biol Appl. 2023 Oct 10;9(1):50. doi: 10.1038/s41540-023-00310-8.
Deep neural networks display impressive performance but offer limited interpretability. Biology-inspired deep learning, in which the architecture of the computational graph is derived from biological knowledge, enables a unique form of interpretability: real-world concepts are encoded in hidden nodes, which can be ranked by importance and thereby interpreted. For such models trained on single-cell transcriptomes, we previously demonstrated that node-level interpretations lack robustness upon repeated training and are influenced by biases in biological knowledge. Similar studies are lacking for related models. Here, we test and extend our methodology for reliable interpretability in P-NET, a biology-inspired model trained on patient mutation data. We observe variable interpretations and susceptibility to knowledge biases, and we identify the network properties that drive interpretation biases. We further present an approach to control the robustness and biases of interpretations, which yields more specific interpretations. In summary, our study reveals the broad importance of methods that ensure robust and bias-aware interpretability in biology-inspired deep learning.
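To make the core idea concrete, the sketch below shows one common way to build a biology-constrained layer: a linear layer whose weight matrix is masked by a binary gene-to-pathway membership matrix, so that each hidden node corresponds to a named pathway and can be scored for importance. This is a minimal PyTorch illustration under stated assumptions; the toy mask, the gradient-times-input importance score, and the tiny network are hypothetical and are not the authors' exact P-NET implementation (P-NET stacks several Reactome-derived pathway layers and uses its own attribution method).

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """Linear layer whose connectivity is restricted by a binary mask
    (e.g., gene-to-pathway membership), so each hidden node maps to a
    known biological concept."""
    def __init__(self, mask: torch.Tensor):
        super().__init__()
        out_features, in_features = mask.shape
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.register_buffer("mask", mask.float())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Zero out weights for gene-pathway pairs with no known relation.
        return nn.functional.linear(x, self.weight * self.mask, self.bias)

# Hypothetical membership mask: 3 pathways x 5 genes.
mask = torch.tensor([
    [1, 1, 0, 0, 0],   # pathway A contains genes 0, 1
    [0, 0, 1, 1, 0],   # pathway B contains genes 2, 3
    [0, 1, 0, 1, 1],   # pathway C contains genes 1, 3, 4
])

model = nn.Sequential(MaskedLinear(mask), nn.ReLU(), nn.Linear(3, 1))

# Toy node-importance score: mean |gradient x activation| per pathway node.
# This is one of several attribution choices, used here only for illustration.
x = torch.randn(8, 5)
hidden = model[0](x)
hidden.retain_grad()
model[2](torch.relu(hidden)).sum().backward()
importance = (hidden.grad * hidden).abs().mean(dim=0)
print(importance)  # one interpretable score per pathway node
```

Because the mask fixes which inputs feed each hidden node, the ranking of these pathway-node scores is exactly the kind of interpretation whose robustness across repeated trainings, and whose dependence on mask properties such as node connectivity, the paper investigates.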