Partial information decomposition as a unified approach to the specification of neural goal functions.

Author information

Wibral Michael, Priesemann Viola, Kay Jim W, Lizier Joseph T, Phillips William A

Affiliations

MEG Unit, Brain Imaging Center, Goethe University, Heinrich Hoffmann Straße 10, 60528 Frankfurt am Main, Germany.

Department of Non-linear Dynamics, Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany.

Publication information

Brain Cogn. 2017 Mar;112:25-38. doi: 10.1016/j.bandc.2015.09.004. Epub 2015 Oct 21.

Abstract

In many neural systems anatomical motifs are present repeatedly, but despite their structural similarity they can serve very different tasks. A prime example of such a motif is the canonical microcircuit of six-layered neocortex, which is repeated across cortical areas and is involved in a number of different tasks (e.g. sensory, cognitive, or motor tasks). This observation has spawned interest in finding a common underlying principle, a 'goal function', of the information processing implemented in this structure. By definition, such a goal function, if universal, cannot be cast in processing-domain-specific language (e.g. 'edge filtering', 'working memory'). Thus, to formulate such a principle, we have to use a domain-independent framework. Information theory offers such a framework. However, while the classical framework of information theory focuses on the relation between one input and one output (Shannon's mutual information), we argue that neural information processing crucially depends on the combination of multiple inputs to create the output of a processor. To account for this, we use a recent extension of Shannon information theory, called partial information decomposition (PID). PID makes it possible to quantify the information that several inputs provide individually (unique information), redundantly (shared information), or only jointly (synergistic information) about the output. First, we review the framework of PID. Then we apply it to reevaluate and analyze several earlier proposals of information-theoretic neural goal functions (predictive coding, infomax and coherent infomax, efficient coding). We find that PID allows these goal functions to be compared within a common framework, and also provides a versatile approach to designing new goal functions from first principles. Building on this, we design and analyze a novel goal function, called 'coding with synergy', which combines external input and prior knowledge in a synergistic manner. We suggest that this novel goal function may be highly useful in neural information processing.
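The distinction the abstract draws between unique, shared, and synergistic information can be illustrated with the classic XOR example: for two independent fair binary inputs and output Y = X1 XOR X2, neither input alone carries any information about the output, yet together they determine it fully, so under any PID measure the single bit of output information is purely synergistic. A minimal sketch (not from the paper; the joint-distribution setup and function names are ours) that verifies the mutual-information terms:

```python
from itertools import product
from math import log2

# Joint distribution over (x1, x2, y) for two independent fair binary
# inputs and y = x1 XOR x2: each of the four consistent triples has p = 1/4.
p = {(x1, x2, x1 ^ x2): 0.25 for x1, x2 in product((0, 1), repeat=2)}

def mi(joint, a_idx, b_idx):
    """Mutual information I(A; B) in bits, where a_idx and b_idx are
    tuples of coordinate positions selecting A and B from each outcome."""
    pa, pb, pab = {}, {}, {}
    for outcome, prob in joint.items():
        a = tuple(outcome[i] for i in a_idx)
        b = tuple(outcome[i] for i in b_idx)
        pa[a] = pa.get(a, 0.0) + prob
        pb[b] = pb.get(b, 0.0) + prob
        pab[(a, b)] = pab.get((a, b), 0.0) + prob
    return sum(prob * log2(prob / (pa[a] * pb[b]))
               for (a, b), prob in pab.items())

i_y_x1 = mi(p, (0,), (2,))      # I(X1; Y) = 0 bits: X1 alone says nothing
i_y_x2 = mi(p, (1,), (2,))      # I(X2; Y) = 0 bits: X2 alone says nothing
i_y_x1x2 = mi(p, (0, 1), (2,))  # I(X1, X2; Y) = 1 bit: jointly, Y is determined
```

Because both individual mutual informations vanish, every PID measure must assign zero unique and zero shared information here, leaving the full joint bit as synergy; this is the kind of term the 'coding with synergy' goal function is designed to emphasize.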
