Discovering Higher-Order Interactions Through Neural Information Decomposition.

Authors

Kyle Reing, Greg Ver Steeg, Aram Galstyan

Affiliations

Information Sciences Institute, University of Southern California, Los Angeles, CA 90292, USA.

Publication

Entropy (Basel). 2021 Jan 7;23(1):79. doi: 10.3390/e23010079.

Abstract

If regularity in data takes the form of higher-order functions among groups of variables, models which are biased towards lower-order functions may easily mistake the data for noise. To distinguish whether this is the case, one must be able to quantify the contribution of different orders of dependence to the total information. Recent work in information theory attempts to do this through measures of multivariate mutual information (MMI) and information decomposition (ID). Despite substantial theoretical progress, practical issues related to tractability and learnability of higher-order functions are still largely unaddressed. In this work, we introduce a new approach to information decomposition, termed Neural Information Decomposition (NID), which is theoretically grounded and can be efficiently estimated in practice using neural networks. We show on synthetic data that NID can learn to distinguish higher-order functions from noise, while many unsupervised probability models cannot. Additionally, we demonstrate the usefulness of this framework as a tool for exploring biological and artificial neural networks.
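For context (these standard definitions are not taken from the paper, whose exact decomposition may differ): multivariate mutual information quantifies dependence beyond pairwise interactions. Total correlation measures all redundancy among n variables, and interaction information isolates what three variables share beyond pairwise dependence:

TC(X_1, \dots, X_n) = \sum_{i=1}^{n} H(X_i) - H(X_1, \dots, X_n)

I(X; Y; Z) = I(X; Y) - I(X; Y \mid Z)

Sign conventions for interaction information vary; under this one, a positive value indicates redundancy and a negative value indicates synergy. For example, if Z = X XOR Y with X and Y independent uniform bits, then I(X; Y) = 0 while I(X; Y | Z) = 1 bit, so I(X; Y; Z) = -1: exactly the kind of purely higher-order dependence that a pairwise model would mistake for noise.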

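The abstract does not spell out how NID's neural estimation works. As a purely illustrative sketch of the general technique it alludes to, estimating information quantities with neural networks, here is a minimal MINE-style (Donsker-Varadhan) lower-bound estimator of pairwise mutual information in PyTorch. All class and function names are invented for this example; this is not the authors' NID procedure.

import math
import torch
import torch.nn as nn

class StatisticsNetwork(nn.Module):
    # Scores (x, y) pairs; trained so that samples from the joint
    # distribution score higher than samples from the product of marginals.
    def __init__(self, x_dim, y_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + y_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=-1)).squeeze(-1)

def dv_lower_bound(T, x, y):
    # Donsker-Varadhan bound: E_joint[T] - log E_marginals[exp(T)] <= I(X; Y).
    joint_term = T(x, y).mean()
    y_shuffled = y[torch.randperm(y.size(0))]  # shuffling breaks the x-y pairing,
    # approximating samples from the product of marginals
    marginal_term = torch.logsumexp(T(x, y_shuffled), dim=0) - math.log(y.size(0))
    return joint_term - marginal_term

# Toy usage: Y depends on X, so the estimated bound should be clearly positive.
x = torch.randn(512, 1)
y = x + 0.5 * torch.randn(512, 1)
T = StatisticsNetwork(1, 1)
optimizer = torch.optim.Adam(T.parameters(), lr=1e-3)
for _ in range(500):
    optimizer.zero_grad()
    loss = -dv_lower_bound(T, x, y)  # ascend the lower bound
    loss.backward()
    optimizer.step()
print(f"estimated lower bound on I(X; Y): {dv_lower_bound(T, x, y).item():.3f} nats")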

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/aa1a/7827712/0e2b7ea838c2/entropy-23-00079-g001.jpg
