Kay Jim W, Ince Robin A A
Department of Statistics, University of Glasgow, Glasgow G12 8QQ, UK.
Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QQ, UK.
Entropy (Basel). 2018 Mar 30;20(4):240. doi: 10.3390/e20040240.
The Partial Information Decomposition, introduced by Williams P. L. et al. (2010), provides a theoretical framework for characterizing and quantifying the structure of multivariate information sharing. A new method (I_dep) was recently proposed by James R. G. et al. (2017) for computing a two-predictor partial information decomposition over discrete spaces. A lattice of maximum entropy probability models is constructed based on marginal dependency constraints, and the unique information that a particular predictor has about the target is defined as the minimum increase in joint predictor-target mutual information when that predictor-target marginal dependency is constrained. Here, we apply the I_dep approach to Gaussian systems, for which the marginally constrained maximum entropy models are Gaussian graphical models. Closed-form solutions for the I_dep PID are derived for both univariate and multivariate Gaussian systems. Numerical and graphical illustrations are provided, together with practical and theoretical comparisons of the I_dep PID with the minimum mutual information partial information decomposition (I_mmi) discussed by Barrett A. B. (2015). The results obtained using I_dep appear more intuitive than those given by other methods, such as I_mmi, in which the redundant and unique information components are constrained to depend only on the predictor-target marginal distributions. In particular, it is proved that the I_mmi method generally produces larger estimates of redundancy and synergy than the I_dep method. In the discussion of the practical examples, the PIDs are complemented by tests of deviance for the comparison of Gaussian graphical models.
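As an illustration of the minimum mutual information decomposition (I_mmi) discussed in the abstract, the following sketch computes Barrett's (2015) PID for a trivariate Gaussian system directly from its covariance matrix, using the standard closed form for Gaussian mutual information via log-determinants. The function names and the example covariance values are our own choices for illustration, not taken from the paper; the I_dep closed forms derived there are not reproduced here.

```python
import numpy as np

def gaussian_mi(cov, ix, iy):
    """Mutual information I(X; Y) in bits for jointly Gaussian variables,
    computed from the joint covariance matrix as
    0.5 * log( |Sigma_X| |Sigma_Y| / |Sigma_XY| )."""
    def logdet(idx):
        # slogdet returns (sign, log|det|); covariance blocks are PD, so sign = +1
        return np.linalg.slogdet(cov[np.ix_(idx, idx)])[1]
    return 0.5 * (logdet(ix) + logdet(iy) - logdet(ix + iy)) / np.log(2)

def mmi_pid(cov):
    """Minimum mutual information PID (Barrett, 2015) for predictors
    X1 (index 0), X2 (index 1) and target Y (index 2).
    Redundancy is the smaller of the two predictor-target MIs, so the
    weaker predictor always has zero unique information."""
    i1 = gaussian_mi(cov, [0], [2])       # I(X1; Y)
    i2 = gaussian_mi(cov, [1], [2])       # I(X2; Y)
    i12 = gaussian_mi(cov, [0, 1], [2])   # I(X1, X2; Y)
    red = min(i1, i2)                     # redundant information
    unq1 = i1 - red                       # unique to X1
    unq2 = i2 - red                       # unique to X2
    syn = i12 - i1 - i2 + red             # synergy (from the PID sum rule)
    return red, unq1, unq2, syn

# Hypothetical example: correlations (X1,X2)=0.2, (X1,Y)=0.5, (X2,Y)=0.3
cov = np.array([[1.0, 0.2, 0.5],
                [0.2, 1.0, 0.3],
                [0.5, 0.3, 1.0]])
red, unq1, unq2, syn = mmi_pid(cov)
```

The four components satisfy the PID consistency equation red + unq1 + unq2 + syn = I(X1, X2; Y), and under I_mmi one of the two unique terms is always exactly zero, which is one of the behaviours the paper contrasts with I_dep.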