




On the Versatile Uses of Partial Distance Correlation in Deep Learning.

Authors

Xingjian Zhen, Zihang Meng, Rudrasis Chakraborty, Vikas Singh

Affiliations

University of Wisconsin-Madison.

Butlr.

Publication

Comput Vis ECCV. 2022 Oct;13686:327-346. doi: 10.1007/978-3-031-19809-0_19. Epub 2022 Nov 1.

DOI: 10.1007/978-3-031-19809-0_19
PMID: 37255993
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10228573/
Abstract

Comparing the functional behavior of neural network models, whether it is a single network over time or two (or more) networks during or post-training, is an essential step in understanding what they are learning (and what they are not), and for identifying strategies for regularization or efficiency improvements. Despite recent progress, e.g., comparing vision transformers to CNNs, systematic comparison of function, especially across different networks, remains difficult and is often carried out layer by layer. Approaches such as canonical correlation analysis (CCA) are applicable in principle, but have been sparingly used so far. In this paper, we revisit a (less widely known) measure from statistics, called distance correlation (and its partial variant), designed to evaluate correlation between feature spaces of different dimensions. We describe the steps necessary to carry out its deployment for large-scale models. This opens the door to a surprising array of applications, ranging from conditioning one deep model w.r.t. another and learning disentangled representations, to optimizing diverse models that would directly be more robust to adversarial attacks. Our experiments suggest a versatile regularizer (or constraint) with many advantages, which avoids some of the common difficulties one faces in such analyses.
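The measure the abstract revisits can be made concrete with a short sketch. Below is a minimal NumPy implementation of sample distance correlation via the Székely–Rizzo double-centering recipe, plus a simplified partial variant that applies a Pearson-style projection to the pairwise distance correlations. This is an illustrative approximation, not the paper's implementation: the formulation used there relies on unbiased U-centered statistics, and all function names here are my own.

```python
import numpy as np

def dist_cor(X, Y):
    """Sample distance correlation between feature matrices X (n x p) and
    Y (n x q): pairwise Euclidean distance matrices are double-centered,
    then the normalized inner product of the centered matrices is returned.
    Works for feature spaces of different dimensions (p need not equal q)."""
    def centered(M):
        # Pairwise distances, then subtract row means, column means, add grand mean.
        d = np.linalg.norm(M[:, None, :] - M[None, :, :], axis=-1)
        return d - d.mean(0, keepdims=True) - d.mean(1, keepdims=True) + d.mean()

    A, B = centered(X), centered(Y)
    dcov2 = (A * B).mean()                              # squared distance covariance
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()     # squared distance variances
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

def partial_dist_cor(X, Y, Z):
    """Partial distance correlation of X and Y given Z, via a Pearson-style
    projection of the pairwise distance correlations (a simplification:
    the unbiased version uses U-centered statistics instead)."""
    rxy, rxz, ryz = dist_cor(X, Y), dist_cor(X, Z), dist_cor(Y, Z)
    denom = np.sqrt((1.0 - rxz**2) * (1.0 - ryz**2))
    return (rxy - rxz * ryz) / denom if denom > 1e-12 else 0.0
```

Used as a regularizer along the lines the abstract describes, one would evaluate `dist_cor` (or its partial variant) on minibatch feature matrices drawn from two networks and either penalize it, to push the models toward diverse, decorrelated representations, or maximize it, to condition one model's features on another's, back-propagating through the distance computations.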
