Jiantao Jiao, Kartik Venkat, Yanjun Han, Tsachy Weissman
Department of Electrical Engineering, Stanford University, Stanford, CA 94305 USA.
Department of Electronic Engineering, Tsinghua University, Beijing 100084, China.
IEEE Trans Inf Theory. 2015 May;61(5):2835-2885. doi: 10.1109/TIT.2015.2412945. Epub 2015 Mar 13.
We propose a general methodology for the construction and analysis of essentially minimax estimators for a wide class of functionals of finite dimensional parameters, and elaborate on the case of discrete distributions, where the support size $S$ is unknown and may be comparable with or even much larger than the number of observations $n$. We treat the respective regions where the functional is nonsmooth and smooth separately. In the nonsmooth regime, we apply an unbiased estimator for the best polynomial approximation of the functional whereas, in the smooth regime, we apply a bias-corrected version of the maximum likelihood estimator (MLE). We illustrate the merit of this approach by thoroughly analyzing the performance of the resulting schemes for estimating two important information measures: 1) the entropy $H(P) = \sum_{i=1}^{S} -p_i \ln p_i$ and 2) $F_\alpha(P) = \sum_{i=1}^{S} p_i^\alpha$, $\alpha > 0$. We obtain the minimax rates for estimating these functionals. In particular, we demonstrate that our estimator achieves the optimal sample complexity $n \asymp S/\ln S$ for entropy estimation. We also demonstrate that the sample complexity for estimating $F_\alpha(P)$, $0 < \alpha < 1$, is $n \asymp S^{1/\alpha}/\ln S$, which can be achieved by our estimator but not the MLE. For $1 < \alpha < 3/2$, we show that the minimax rate for estimating $F_\alpha(P)$ is $(n \ln n)^{-2(\alpha - 1)}$ for infinite support size, while the maximum rate for the MLE is $n^{-2(\alpha - 1)}$. For all the above cases, the behavior of the minimax rate-optimal estimators with $n$ samples is essentially that of the MLE (plug-in rule) with $n \ln n$ samples, which we term "effective sample size enlargement." We highlight the practical advantages of our schemes for the estimation of entropy and mutual information. We compare our performance with various existing approaches, and demonstrate that our approach reduces running time and boosts accuracy. Moreover, we show that the minimax rate-optimal mutual information estimator yielded by our framework leads to significant performance boosts over the Chow–Liu algorithm in learning graphical models. The wide use of information measure estimation suggests that the insights and estimators obtained in this paper could be broadly applicable.
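To make the two-regime construction concrete, the following is a minimal Python sketch of a hybrid entropy estimator in the spirit of the abstract, not the authors' exact construction: symbols with large empirical frequency get a bias-corrected plug-in term (here a Miller–Madow-style correction, which is our assumption, not necessarily the paper's correction), while symbols with small counts are handled by unbiasedly estimating the monomials $p^k$ via falling factorials, the building block behind unbiased estimation of a best polynomial approximation. The threshold `c1 * ln(n) / n` and the default polynomial coefficients are illustrative placeholders, not the tuned values from the paper.

```python
import math
from collections import Counter

def falling_factorial_moment(x: int, n: int, k: int) -> float:
    """Unbiased estimate of p^k from a count x ~ Binomial(n, p):
    E[x(x-1)...(x-k+1) / (n(n-1)...(n-k+1))] = p^k exactly."""
    num, den = 1.0, 1.0
    for j in range(k):
        num *= (x - j)
        den *= (n - j)
    return num / den

def hybrid_entropy(samples, poly_coeffs=None, c1=1.0):
    """Entropy estimate (nats) mixing the two regimes.

    poly_coeffs maps degree k to a coefficient a_k of a polynomial
    standing in for -p ln p on the nonsmooth region; the default is a
    crude placeholder, NOT the minimax-optimal approximation (the paper
    uses a best polynomial approximation of degree of order ln n)."""
    n = len(samples)
    counts = Counter(samples)
    threshold = c1 * math.log(n) / n  # placeholder regime boundary
    if poly_coeffs is None:
        poly_coeffs = {1: 1.0, 2: -1.0}  # crude stand-in for -p ln p
    h = 0.0
    for x in counts.values():
        p_hat = x / n
        if p_hat > threshold:
            # Smooth regime: plug-in term with a first-order
            # (Miller-Madow-style) bias correction.
            h += -p_hat * math.log(p_hat) + 1.0 / (2 * n)
        else:
            # Nonsmooth regime: unbiased estimate of sum_k a_k p^k
            # via falling factorials of the count.
            h += sum(a * falling_factorial_moment(x, n, k)
                     for k, a in poly_coeffs.items())
    return h

# Example: support size (1000) larger than the sample size (200),
# the regime the paper targets.
import random
data = [random.randrange(1000) for _ in range(200)]
print(hybrid_entropy(data))
```

The falling-factorial step is what removes estimation bias in the nonsmooth regime: since the estimator of each monomial $p^k$ is exactly unbiased under binomial sampling, the only bias left there is the polynomial approximation error, which the paper controls by choosing the best polynomial approximation of the functional.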