David Benkeser, Maya Petersen, Mark J. van der Laan
Department of Biostatistics and Bioinformatics, Emory University.
Graduate Group in Biostatistics, University of California, Berkeley.
J Am Stat Assoc. 2020;115(532):1917-1932. doi: 10.1080/01621459.2019.1668794. Epub 2019 Oct 21.
When predicting an outcome is the scientific goal, one must decide on a metric by which to evaluate the quality of predictions. We consider the problem of measuring the performance of a prediction algorithm with the same data that were used to train the algorithm. Typical approaches involve bootstrapping or cross-validation. However, we demonstrate that bootstrap-based approaches often fail and standard cross-validation estimators may perform poorly. We provide a general study of cross-validation-based estimators that highlights the source of this poor performance, and propose an alternative framework for estimation using techniques from the efficiency theory literature. We provide a theorem establishing the weak convergence of our estimators. The general theorem is applied in detail to two specific examples and we discuss possible extensions to other parameters of interest. For the two explicit examples that we consider, our estimators demonstrate remarkable finite-sample improvements over standard approaches.
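The abstract does not reproduce the theorem or the estimators. For orientation, weak-convergence results of the kind referenced here, built on efficiency theory, typically take the following generic shape (this is the usual form of such results, not the paper's specific theorem):

\[
\sqrt{n}\,\bigl(\hat{\psi}_n - \psi_0\bigr) \rightsquigarrow N\bigl(0, \sigma_0^2\bigr), \qquad \sigma_0^2 = \operatorname{Var}\bigl(D^*(O)\bigr),
\]

where \(D^*\) denotes the efficient influence function of the target performance metric \(\psi_0\), which yields Wald-type confidence intervals \(\hat{\psi}_n \pm z_{1-\alpha/2}\,\hat{\sigma}_n/\sqrt{n}\).

As a point of reference for the standard approach whose finite-sample behavior the paper critiques, below is a minimal sketch of a K-fold cross-validated estimate of a nonlinear prediction metric, taking AUC as the metric and logistic regression as the learner; both are illustrative choices, not necessarily the paper's two examples.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import KFold

# Simulated binary-outcome data; small n is the regime where the paper
# reports the largest gains over this standard estimator.
X, y = make_classification(n_samples=100, n_features=10, random_state=0)

# Standard K-fold cross-validated AUC: train on K-1 folds, evaluate the
# fitted algorithm on the held-out fold, then average across folds.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
fold_aucs = []
for train_idx, test_idx in kf.split(X):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    preds = model.predict_proba(X[test_idx])[:, 1]
    fold_aucs.append(roc_auc_score(y[test_idx], preds))

print(f"Standard cross-validated AUC estimate: {np.mean(fold_aucs):.3f}")
```

This fold-averaged plug-in estimator is the baseline studied in the paper; the proposed alternative framework instead draws on efficiency theory to improve its small-sample performance.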