Thomas Schürmann
Jülich Supercomputing Centre, Jülich Research Centre, 52425 Jülich, Germany.
Neural Comput. 2015 Oct;27(10):2097-106. doi: 10.1162/NECO_a_00775. Epub 2015 Aug 27.
We compare an entropy estimator H(z) recently discussed by Zhang (2012) with two estimators, H(1) and H(2), introduced by Grassberger (2003) and Schürmann (2004). We prove the identity H(z) ≡ H(1), which has not been taken into account by Zhang (2012). Then we prove that the systematic error (bias) of H(1) is less than or equal to the bias of the ordinary likelihood (or plug-in) estimator of entropy. Finally, by numerical simulation, we verify that for the most interesting regime of small sample estimation and large event spaces, the estimator H(2) has a significantly smaller statistical error than H(z).
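To make the abstract's central point concrete, here is a minimal sketch of the ordinary plug-in (maximum-likelihood) entropy estimator and a numerical illustration of its downward bias in the small-sample, large-event-space regime the abstract highlights. The alphabet size `K`, sample size `N`, and the uniform test distribution are illustrative assumptions, not taken from the paper; the estimators H(1), H(2), and H(z) themselves are not implemented here.

```python
import math
import random
from collections import Counter

def plugin_entropy(counts):
    """Plug-in (maximum-likelihood) entropy estimate in nats,
    computed from a list of event counts."""
    n = sum(counts)
    return -sum(c / n * math.log(c / n) for c in counts if c > 0)

# Hypothetical setup: N << K, the regime where the bias is most severe.
random.seed(0)
K = 1000                                  # event-space (alphabet) size
N = 100                                   # small sample size
sample = [random.randrange(K) for _ in range(N)]
h_hat = plugin_entropy(Counter(sample).values())
h_true = math.log(K)                      # entropy of the uniform distribution
print(f"plug-in estimate: {h_hat:.3f} nats, true entropy: {h_true:.3f} nats")
```

With N = 100 draws from a uniform distribution over K = 1000 events, nearly every observed count is 1, so the plug-in estimate is capped near ln N and falls far below the true entropy ln K. Bias-corrected estimators such as Grassberger's H(1) and Schürmann's H(2) are designed to reduce exactly this systematic underestimation.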