

Extracting and visualizing hidden activations and computational graphs of PyTorch models with TorchLens.

Affiliation

Zuckerman Mind Brain Behavior Institute, Columbia University, 3227 Broadway, New York, NY, 10027, USA.

Publication Info

Sci Rep. 2023 Sep 1;13(1):14375. doi: 10.1038/s41598-023-40807-0.

DOI: 10.1038/s41598-023-40807-0
PMID: 37658079
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10474256/
Abstract

Deep neural network models (DNNs) are essential to modern AI and provide powerful models of information processing in biological neural networks. Researchers in both neuroscience and engineering are pursuing a better understanding of the internal representations and operations that undergird the successes and failures of DNNs. Neuroscientists additionally evaluate DNNs as models of brain computation by comparing their internal representations to those found in brains. It is therefore essential to have a method to easily and exhaustively extract and characterize the results of the internal operations of any DNN. Many models are implemented in PyTorch, the leading framework for building DNN models. Here we introduce TorchLens, a new open-source Python package for extracting and characterizing hidden-layer activations in PyTorch models. Uniquely among existing approaches to this problem, TorchLens has the following features: (1) it exhaustively extracts the results of all intermediate operations, not just those associated with PyTorch module objects, yielding a full record of every step in the model's computational graph; (2) it provides an intuitive visualization of the model's complete computational graph along with metadata about each computational step in a model's forward pass for further analysis; (3) it contains a built-in validation procedure to algorithmically verify the accuracy of all saved hidden-layer activations; and (4) the approach it uses can be automatically applied to any PyTorch model with no modifications, including models with conditional (if-then) logic in their forward pass, recurrent models, branching models where layer outputs are fed into multiple subsequent layers in parallel, and models with internally generated tensors (e.g., injections of noise). Furthermore, using TorchLens requires minimal additional code, making it easy to incorporate into existing pipelines for model development and analysis, and useful as a pedagogical aid when teaching deep learning concepts. We hope this contribution will help researchers in AI and neuroscience understand the internal representations of DNNs.
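The abstract's key claim is that TorchLens records every intermediate operation of a forward pass, not just the outputs of module objects, even when a layer's output branches into multiple downstream layers. The toy tracer below is a plain-Python sketch of that idea only; it does not use TorchLens's actual API (consult the package's documentation for that), and all names here (`TraceRecorder`, `op`, the toy `forward`) are invented for illustration.

```python
# Minimal illustration of the idea behind TorchLens: record the result of
# every intermediate operation in a forward pass, not just module outputs.
# Plain-Python stand-in; TorchLens itself instruments PyTorch tensors.

class TraceRecorder:
    """Logs each named operation's inputs and output during a forward pass."""
    def __init__(self):
        self.log = []  # ordered record: one entry per computational step

    def op(self, name, fn, *args):
        out = fn(*args)
        self.log.append({"op": name, "inputs": args, "output": out})
        return out

def forward(x, rec):
    # A toy model with a branch: h1 feeds two downstream "layers" in parallel,
    # mirroring the branching models the abstract mentions.
    h1 = rec.op("scale", lambda a: 2 * a, x)
    h2 = rec.op("shift", lambda a: a + 3, h1)
    branch = rec.op("square", lambda a: a * a, h1)
    return rec.op("add_branches", lambda a, b: a + b, h2, branch)

rec = TraceRecorder()
y = forward(5, rec)  # h1=10, h2=13, branch=100, y=113

# Every intermediate value is now inspectable, analogous to TorchLens's
# saved hidden-layer activations and per-step metadata:
for step in rec.log:
    print(step["op"], "->", step["output"])
```

The point of the sketch is the ordered, exhaustive log: each computational step is captured once, in execution order, with its inputs and output, which is the record a visualization or validation pass can then be built on.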


Figures

Fig 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1983/10474256/e253b5cc2635/41598_2023_40807_Fig1_HTML.jpg
Fig 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1983/10474256/c0cf7d3b9163/41598_2023_40807_Fig2_HTML.jpg
Fig 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1983/10474256/83a05dff137e/41598_2023_40807_Fig3_HTML.jpg
Fig 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1983/10474256/ec5e53d58871/41598_2023_40807_Fig4_HTML.jpg

Similar Articles

1. Extracting and visualizing hidden activations and computational graphs of PyTorch models with TorchLens.
Sci Rep. 2023 Sep 1;13(1):14375. doi: 10.1038/s41598-023-40807-0.
2. TorchLens: A Python package for extracting and visualizing hidden activations of PyTorch models.
bioRxiv. 2023 Mar 18:2023.03.16.532916. doi: 10.1101/2023.03.16.532916.
3. DNNBrain: A Unifying Toolbox for Mapping Deep Neural Networks and Brains.
Front Comput Neurosci. 2020 Nov 30;14:580632. doi: 10.3389/fncom.2020.580632. eCollection 2020.
4. THINGSvision: A Python Toolbox for Streamlining the Extraction of Activations From Deep Neural Networks.
Front Neuroinform. 2021 Sep 22;15:679838. doi: 10.3389/fninf.2021.679838. eCollection 2021.
5. Deep Convolutional Neural Networks Outperform Feature-Based But Not Categorical Models in Explaining Object Similarity Judgments.
Front Psychol. 2017 Oct 9;8:1726. doi: 10.3389/fpsyg.2017.01726. eCollection 2017.
6. Task-specific feature extraction and classification of fMRI volumes using a deep neural network initialized with a deep belief network: Evaluation using sensorimotor tasks.
Neuroimage. 2017 Jan 15;145(Pt B):314-328. doi: 10.1016/j.neuroimage.2016.04.003. Epub 2016 Apr 11.
7. Analyzing biological and artificial neural networks: challenges with opportunities for synergy?
Curr Opin Neurobiol. 2019 Apr;55:55-64. doi: 10.1016/j.conb.2019.01.007. Epub 2019 Feb 19.
8. PyTorch-FEA: Autograd-enabled finite element analysis methods with applications for biomechanical analysis of human aorta.
Comput Methods Programs Biomed. 2023 Aug;238:107616. doi: 10.1016/j.cmpb.2023.107616. Epub 2023 May 18.
9. MotorNet, a Python toolbox for controlling differentiable biomechanical effectors with artificial neural networks.
Elife. 2024 Jul 30;12:RP88591. doi: 10.7554/eLife.88591.
10. Training deep neural density estimators to identify mechanistic models of neural dynamics.
Elife. 2020 Sep 17;9:e56261. doi: 10.7554/eLife.56261.

Cited By

1. Utilizing protein structure graph embeddings to predict the pathogenicity of missense variants.
NAR Genom Bioinform. 2025 Jul 24;7(3):lqaf097. doi: 10.1093/nargab/lqaf097. eCollection 2025 Sep.
2. Brain-like border ownership signals support prediction of natural videos.
iScience. 2025 Mar 11;28(4):112199. doi: 10.1016/j.isci.2025.112199. eCollection 2025 Apr 18.
3. Computational biology and artificial intelligence in mRNA vaccine design for cancer immunotherapy.
Front Cell Infect Microbiol. 2025 Jan 20;14:1501010. doi: 10.3389/fcimb.2024.1501010. eCollection 2024.
4. Maintenance and transformation of representational formats during working memory prioritization.
Nat Commun. 2024 Sep 19;15(1):8234. doi: 10.1038/s41467-024-52541-w.
5. Brain-like border ownership signals support prediction of natural videos.
bioRxiv. 2024 Aug 12:2024.08.11.607040. doi: 10.1101/2024.08.11.607040.
6. DeepFocus: fast focus and astigmatism correction for electron microscopy.
Nat Commun. 2024 Jan 31;15(1):948. doi: 10.1038/s41467-024-45042-3.

References

1. Statistical inference on representational geometries.
Elife. 2023 Aug 23;12:e82566. doi: 10.7554/eLife.82566.
2. Decoding and synthesizing tonal language speech from brain activity.
Sci Adv. 2023 Jun 9;9(23):eadh0478. doi: 10.1126/sciadv.adh0478.
3. Feature-space selection with banded ridge regression.
Neuroimage. 2022 Dec 1;264:119728. doi: 10.1016/j.neuroimage.2022.119728. Epub 2022 Nov 8.
4. Deep language algorithms predict semantic comprehension from brain activity.
Sci Rep. 2022 Sep 29;12(1):16327. doi: 10.1038/s41598-022-20460-9.
5. THINGSvision: A Python Toolbox for Streamlining the Extraction of Activations From Deep Neural Networks.
Front Neuroinform. 2021 Sep 22;15:679838. doi: 10.3389/fninf.2021.679838. eCollection 2021.
6. Diverse Deep Neural Networks All Predict Human Inferior Temporal Cortex Well, After Training and Fitting.
J Cogn Neurosci. 2021 Sep 1;33(10):2044-2064. doi: 10.1162/jocn_a_01755.
7. Limits to visual representational correspondence between convolutional neural networks and the human brain.
Nat Commun. 2021 Apr 6;12(1):2065. doi: 10.1038/s41467-021-22244-7.
8. Unsupervised neural network models of the ventral visual stream.
Proc Natl Acad Sci U S A. 2021 Jan 19;118(3). doi: 10.1073/pnas.2014196118.
9. Using deep reinforcement learning to reveal how the brain encodes abstract state-space representations in high-dimensional environments.
Neuron. 2021 Feb 17;109(4):724-738.e7. doi: 10.1016/j.neuron.2020.11.021. Epub 2020 Dec 15.
10. Recurrent neural networks can explain flexible trading of speed and accuracy in biological vision.
PLoS Comput Biol. 2020 Oct 2;16(10):e1008215. doi: 10.1371/journal.pcbi.1008215. eCollection 2020 Oct.