

Nonlinear independent component analysis for principled disentanglement in unsupervised deep learning

Authors

Hyvärinen Aapo, Khemakhem Ilyes, Morioka Hiroshi

Affiliations

Department of Computer Science, University of Helsinki, Helsinki, Finland.

Gatsby Computational Neuroscience Unit, University College London, London, UK.

Publication

Patterns (N Y). 2023 Oct 13;4(10):100844. doi: 10.1016/j.patter.2023.100844.

DOI: 10.1016/j.patter.2023.100844
PMID: 37876900
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10591132/
Abstract

A central problem in unsupervised deep learning is how to find useful representations of high-dimensional data, sometimes called "disentanglement." Most approaches are heuristic and lack a proper theoretical foundation. In linear representation learning, independent component analysis (ICA) has been successful in many application areas, and it is principled, i.e., based on a well-defined probabilistic model. However, extension of ICA to the nonlinear case has been problematic because of the lack of identifiability, i.e., uniqueness of the representation. Recently, nonlinear extensions that utilize temporal structure or some auxiliary information have been proposed. Such models are in fact identifiable, and consequently, an increasing number of algorithms have been developed. In particular, some self-supervised algorithms can be shown to estimate nonlinear ICA, even though they were initially proposed from heuristic perspectives. This paper reviews the state of the art of nonlinear ICA theory and algorithms.
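The identifiability the abstract attributes to linear ICA can be made concrete with a small sketch (an illustration of the general idea, not code from the paper): in two dimensions, whitening reduces linear ICA to finding a rotation, and maximizing non-Gaussianity (here, absolute excess kurtosis) recovers the independent sources up to permutation and sign.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Two independent non-Gaussian sources (uniform => sub-Gaussian).
S = rng.uniform(-1, 1, size=(2, n))

# Unknown linear mixing x = A s.
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S

# Step 1: whiten the mixtures (zero mean, identity covariance).
Xc = X - X.mean(axis=1, keepdims=True)
cov = Xc @ Xc.T / n
d, E = np.linalg.eigh(cov)
Z = E @ np.diag(d ** -0.5) @ E.T @ Xc

# Step 2: after whitening, only an orthogonal transform remains.
# Search the rotation angle that maximizes non-Gaussianity,
# measured by absolute excess kurtosis of each component.
def excess_kurtosis(y):
    return np.mean(y ** 4) - 3.0

best_theta, best_score = 0.0, -np.inf
for theta in np.linspace(0.0, np.pi / 2, 500):
    c, s = np.cos(theta), np.sin(theta)
    Y = np.array([[c, -s], [s, c]]) @ Z
    score = abs(excess_kurtosis(Y[0])) + abs(excess_kurtosis(Y[1]))
    if score > best_score:
        best_theta, best_score = theta, score

c, s = np.cos(best_theta), np.sin(best_theta)
Y = np.array([[c, -s], [s, c]]) @ Z

# Each recovered component should correlate strongly with exactly one
# true source -- the linear-ICA identifiability result (up to
# permutation and sign). This is what fails for unconstrained
# nonlinear mixing without temporal or auxiliary information.
corr = np.corrcoef(np.vstack([S, Y]))[:2, 2:]
print(np.round(np.abs(corr), 2))
```

The exhaustive angle search is only feasible because whitening collapses the 2-D problem to a single rotation parameter; practical algorithms such as FastICA optimize a non-Gaussianity contrast by fixed-point iteration instead.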

Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4ba6/10591132/6f040523578e/gr1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4ba6/10591132/65bc71d51856/gr2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4ba6/10591132/f86af79e2bd6/gr3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4ba6/10591132/927097fdc572/gr4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4ba6/10591132/bbfbf0bdedbf/gr5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4ba6/10591132/d1ca537df221/gr6.jpg

Similar articles

1. Nonlinear independent component analysis for principled disentanglement in unsupervised deep learning. Patterns (N Y). 2023 Oct 13;4(10):100844. doi: 10.1016/j.patter.2023.100844.
2. Unsupervised representation learning of spontaneous MEG data with nonlinear ICA. Neuroimage. 2023 Jul 1;274:120142. doi: 10.1016/j.neuroimage.2023.120142. Epub 2023 Apr 28.
3. Unsupervised Learning of Disentangled Representation via Auto-Encoding: A Survey. Sensors (Basel). 2023 Feb 20;23(4):2362. doi: 10.3390/s23042362.
4. Nonlinear and noisy extension of independent component analysis: theory and its application to a pitch sensation model. Neural Comput. 2005 Jan;17(1):115-44. doi: 10.1162/0899766052530866.
5. Local linear independent component analysis based on clustering. Int J Neural Syst. 2000 Dec;10(6):439-51. doi: 10.1142/S0129065700000429.
6. Unsupervised learning of disentangled representations in deep restricted kernel machines with orthogonality constraints. Neural Netw. 2021 Oct;142:661-679. doi: 10.1016/j.neunet.2021.07.023. Epub 2021 Jul 26.
7. Unsupervised and self-supervised deep learning approaches for biomedical text mining. Brief Bioinform. 2021 Mar 22;22(2):1592-1603. doi: 10.1093/bib/bbab016.
8. Small Data Challenges in Big Data Era: A Survey of Recent Progress on Unsupervised and Semi-Supervised Methods. IEEE Trans Pattern Anal Mach Intell. 2022 Apr;44(4):2168-2187. doi: 10.1109/TPAMI.2020.3031898. Epub 2022 Mar 4.
9. Advances in blind source separation (BSS) and independent component analysis (ICA) for nonlinear mixtures. Int J Neural Syst. 2004 Oct;14(5):267-92. doi: 10.1142/S012906570400208X.
10. An Unsupervised Method for Artefact Removal in EEG Signals. Sensors (Basel). 2019 May 18;19(10):2302. doi: 10.3390/s19102302.

Cited by

1. Discovering governing equations of biological systems through representation learning and sparse model discovery. NAR Genom Bioinform. 2025 Apr 26;7(2):lqaf048. doi: 10.1093/nargab/lqaf048. eCollection 2025 Jun.

References

1. Learnable latent embeddings for joint behavioural and neural analysis. Nature. 2023 May;617(7960):360-368. doi: 10.1038/s41586-023-06031-6. Epub 2023 May 3.
2. Unsupervised representation learning of spontaneous MEG data with nonlinear ICA. Neuroimage. 2023 Jul 1;274:120142. doi: 10.1016/j.neuroimage.2023.120142. Epub 2023 Apr 28.
3. Uncovering the structure of clinical EEG signals with self-supervised learning. J Neural Eng. 2021 Mar 31;18(4). doi: 10.1088/1741-2552/abca18.
4. Normalizing Flows: An Introduction and Review of Current Methods. IEEE Trans Pattern Anal Mach Intell. 2021 Nov;43(11):3964-3979. doi: 10.1109/TPAMI.2020.2992934. Epub 2021 Oct 1.
5. Learning Invariance from Transformation Sequences. Neural Comput. 1991 Summer;3(2):194-200. doi: 10.1162/neco.1991.3.2.194.
6. Decoding attentional states for neurofeedback: Mindfulness vs. wandering thoughts. Neuroimage. 2019 Jan 15;185:565-574. doi: 10.1016/j.neuroimage.2018.10.014. Epub 2018 Oct 11.
7. Information Dropout: Learning Optimal Representations Through Noisy Computation. IEEE Trans Pattern Anal Mach Intell. 2018 Dec;40(12):2897-2905. doi: 10.1109/TPAMI.2017.2784440. Epub 2018 Jan 10.
8. Developing an index of dose of exposure to early childhood obesity community interventions. Prev Med. 2018 Jun;111:135-141. doi: 10.1016/j.ypmed.2018.02.036. Epub 2018 Mar 6.
9. Representation learning: a review and new perspectives. IEEE Trans Pattern Anal Mach Intell. 2013 Aug;35(8):1798-828. doi: 10.1109/TPAMI.2013.50.
10. Investigating the electrophysiological basis of resting state networks using magnetoencephalography. Proc Natl Acad Sci U S A. 2011 Oct 4;108(40):16783-8. doi: 10.1073/pnas.1112685108. Epub 2011 Sep 19.