

Reliable interpretability of biology-inspired deep neural networks.

Authors

Esser-Skala Wolfgang, Fortelny Nikolaus

Affiliations

Computational Systems Biology Group, Department of Biosciences and Medical Biology, University of Salzburg, Hellbrunner Straße 34, 5020, Salzburg, Austria.

Publication

NPJ Syst Biol Appl. 2023 Oct 10;9(1):50. doi: 10.1038/s41540-023-00310-8.

DOI: 10.1038/s41540-023-00310-8
PMID: 37816807
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10564878/
Abstract

Deep neural networks display impressive performance but suffer from limited interpretability. Biology-inspired deep learning, where the architecture of the computational graph is based on biological knowledge, enables unique interpretability where real-world concepts are encoded in hidden nodes, which can be ranked by importance and thereby interpreted. In such models trained on single-cell transcriptomes, we previously demonstrated that node-level interpretations lack robustness upon repeated training and are influenced by biases in biological knowledge. Similar studies are missing for related models. Here, we test and extend our methodology for reliable interpretability in P-NET, a biology-inspired model trained on patient mutation data. We observe variability of interpretations and susceptibility to knowledge biases, and identify the network properties that drive interpretation biases. We further present an approach to control the robustness and biases of interpretations, which leads to more specific interpretations. In summary, our study reveals the broad importance of methods to ensure robust and bias-aware interpretability in biology-inspired deep learning.
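The abstract's central check — whether node-importance rankings survive repeated training — can be sketched numerically. The snippet below is an illustrative sketch, not the authors' code: it simulates importance scores from several hypothetical training runs and scores their agreement as the mean pairwise Spearman rank correlation; the paper's actual analysis operates on trained biology-inspired models such as P-NET.

```python
import numpy as np

def rank(x):
    # rank of each value (0 = smallest); ties broken arbitrarily, fine for a sketch
    order = np.argsort(x)
    r = np.empty_like(order)
    r[order] = np.arange(len(x))
    return r

def spearman(a, b):
    # Spearman correlation = Pearson correlation of the ranks
    ra, rb = rank(a).astype(float), rank(b).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

def robustness(importance_runs):
    """Mean pairwise Spearman correlation of node-importance vectors
    from repeated trainings; 1.0 means fully stable rankings."""
    runs = [np.asarray(r, dtype=float) for r in importance_runs]
    cors = [spearman(runs[i], runs[j])
            for i in range(len(runs)) for j in range(i + 1, len(runs))]
    return float(np.mean(cors))

# Simulated example: a hypothetical "true" importance per hidden node,
# plus seed-dependent noise for each of ten retrainings.
rng = np.random.default_rng(0)
base = rng.random(200)
runs = [base + rng.normal(0.0, 0.05, 200) for _ in range(10)]
print(robustness(runs))  # near 1 when rankings are stable across runs
```

A low score under this measure would flag the kind of interpretation variability the study reports; the choice of rank correlation (rather than raw correlation) reflects that the paper interprets nodes by their importance ranking.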


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf09/10564878/0e8c1221ae46/41540_2023_310_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf09/10564878/2bfb7aff0c51/41540_2023_310_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf09/10564878/5b867ff8d9a9/41540_2023_310_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf09/10564878/0ef9ca8bb90b/41540_2023_310_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf09/10564878/af8dc92a3f04/41540_2023_310_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf09/10564878/642785dffc02/41540_2023_310_Fig6_HTML.jpg

Similar Articles

1. Reliable interpretability of biology-inspired deep neural networks.
NPJ Syst Biol Appl. 2023 Oct 10;9(1):50. doi: 10.1038/s41540-023-00310-8.
2. On inductive biases for the robust and interpretable prediction of drug concentrations using deep compartment models.
J Pharmacokinet Pharmacodyn. 2024 Aug;51(4):355-366. doi: 10.1007/s10928-024-09906-x. Epub 2024 Mar 26.
3. Development and Validation of a Robust and Interpretable Early Triaging Support System for Patients Hospitalized With COVID-19: Predictive Algorithm Modeling and Interpretation Study.
J Med Internet Res. 2024 Jan 11;26:e52134. doi: 10.2196/52134.
4. MNC-Net: Multi-task graph structure learning based on node clustering for early Parkinson's disease diagnosis.
Comput Biol Med. 2023 Jan;152:106308. doi: 10.1016/j.compbiomed.2022.106308. Epub 2022 Nov 24.
5. Correcting gradient-based interpretations of deep neural networks for genomics.
Genome Biol. 2023 May 9;24(1):109. doi: 10.1186/s13059-023-02956-3.
6. Predicting molecular properties based on the interpretable graph neural network with multistep focus mechanism.
Brief Bioinform. 2023 Jan 19;24(1). doi: 10.1093/bib/bbac534.
7. EvoAug: improving generalization and interpretability of genomic deep neural networks with evolution-inspired data augmentations.
Genome Biol. 2023 May 5;24(1):105. doi: 10.1186/s13059-023-02941-w.
8. Brain-computer interfaces inspired spiking neural network model for depression stage identification.
J Neurosci Methods. 2024 Sep;409:110203. doi: 10.1016/j.jneumeth.2024.110203. Epub 2024 Jun 15.
9. MABAL: a Novel Deep-Learning Architecture for Machine-Assisted Bone Age Labeling.
J Digit Imaging. 2018 Aug;31(4):513-519. doi: 10.1007/s10278-018-0053-3.
10. Knowledge-primed neural networks enable biologically interpretable deep learning on single-cell sequencing data.
Genome Biol. 2020 Aug 3;21(1):190. doi: 10.1186/s13059-020-02100-5.

Cited By

1. From network biology to immunity: potential longitudinal biomarkers for targeting the network topology of the HIV reservoir.
J Transl Med. 2025 Aug 13;23(1):906. doi: 10.1186/s12967-025-06919-z.
2. Visible neural networks for multi-omics integration: a critical review.
Front Artif Intell. 2025 Jul 17;8:1595291. doi: 10.3389/frai.2025.1595291. eCollection 2025.
3. A Unified Flexible Large Polysomnography Model for Sleep Staging and Mental Disorder Diagnosis.
medRxiv. 2025 May 8:2024.12.11.24318815. doi: 10.1101/2024.12.11.24318815.
4. A spatial hierarchical network learning framework for drug repositioning allowing interpretation from macro to micro scale.
Commun Biol. 2024 Oct 30;7(1):1413. doi: 10.1038/s42003-024-07107-3.
5. Designing interpretable deep learning applications for functional genomics: a quantitative analysis.
Brief Bioinform. 2024 Jul 25;25(5). doi: 10.1093/bib/bbae449.
6. Phenotype prediction using biologically interpretable neural networks on multi-cohort multi-omics data.
NPJ Syst Biol Appl. 2024 Aug 2;10(1):81. doi: 10.1038/s41540-024-00405-w.
7. Molecular causality in the advent of foundation models.
Mol Syst Biol. 2024 Aug;20(8):848-858. doi: 10.1038/s44320-024-00041-w. Epub 2024 Jun 18.
8. Inference of drug off-target effects on cellular signaling using interactome-based deep learning.
iScience. 2024 Mar 14;27(4):109509. doi: 10.1016/j.isci.2024.109509. eCollection 2024 Apr 19.

References

1. Phenotype prediction using biologically interpretable neural networks on multi-cohort multi-omics data.
NPJ Syst Biol Appl. 2024 Aug 2;10(1):81. doi: 10.1038/s41540-024-00405-w.
2. Complex heatmap visualization.
Imeta. 2022 Aug 1;1(3):e43. doi: 10.1002/imt2.43. eCollection 2022 Sep.
3. Structure-primed embedding on the transcription factor manifold enables transparent model architectures for gene regulatory network and latent activity inference.
Genome Biol. 2024 Jan 18;25(1):24. doi: 10.1186/s13059-023-03134-1.
4. Incorporating metabolic activity, taxonomy and community structure to improve microbiome-based predictive models for host phenotype prediction.
Gut Microbes. 2024 Jan-Dec;16(1):2302076. doi: 10.1080/19490976.2024.2302076. Epub 2024 Jan 12.
5. PiDeeL: metabolic pathway-informed deep learning model for survival analysis and pathological classification of gliomas.
Bioinformatics. 2023 Nov 1;39(11). doi: 10.1093/bioinformatics/btad684.
6. MOViDA: multiomics visible drug activity prediction with a biologically informed neural network model.
Bioinformatics. 2023 Jul 1;39(7). doi: 10.1093/bioinformatics/btad432.
7. Biologically informed variational autoencoders allow predictive modeling of genetic and drug-induced perturbations.
Bioinformatics. 2023 Jun 1;39(6). doi: 10.1093/bioinformatics/btad387.
8. A systematic review of biologically-informed deep learning models for cancer: fundamental trends for encoding and interpreting oncology data.
BMC Bioinformatics. 2023 May 15;24(1):198. doi: 10.1186/s12859-023-05262-8.
9. SigPrimedNet: A Signaling-Informed Neural Network for scRNA-seq Annotation of Known and Unknown Cell Types.
Biology (Basel). 2023 Apr 10;12(4):579. doi: 10.3390/biology12040579.
10. PAUSE: principled feature attribution for unsupervised gene expression analysis.
Genome Biol. 2023 Apr 19;24(1):81. doi: 10.1186/s13059-023-02901-4.