Mozgunov P, Jaki T, Gasparini M
Department of Mathematics and Statistics, Lancaster University, Lancaster, UK.
Dipartimento di Scienze Matematiche, Politecnico di Torino, Turin, Italy.
J Appl Stat. 2019 Mar 14;46(13):2314-2337. doi: 10.1080/02664763.2019.1586848. eCollection 2019.
Squared error loss remains the most commonly used loss function for constructing a Bayes estimator of the parameter of interest. However, it can lead to suboptimal solutions when a parameter is defined on a restricted space. It can also be an inappropriate choice when extreme overestimation and/or underestimation has severe consequences and a more conservative estimator is preferred. We advocate a class of loss functions for parameters defined on restricted spaces which penalize boundary decisions infinitely, just as the squared error loss does on the real line. We also recall several properties of loss functions, such as symmetry, convexity and invariance. We propose generalizations of the squared error loss function for parameters defined on the positive real line and on an interval. We provide explicit solutions for the corresponding Bayes estimators and discuss multivariate extensions. Four well-known Bayesian estimation problems are used to demonstrate the inferential benefits the novel Bayes estimators can provide in the context of restricted estimation.
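As a point of reference, the sketch below states the standard Bayes estimator under squared error loss and one illustrative loss for a parameter on the unit interval that penalizes boundary decisions infinitely; the interval loss is an assumed example of the class described above, not necessarily the exact form proposed in the paper.

% Under squared error loss, the Bayes estimator is the posterior mean (standard result).
\[
  L(\theta,\hat\theta) = (\theta-\hat\theta)^2
  \quad\Longrightarrow\quad
  \hat\theta_{\mathrm{Bayes}} = \mathbb{E}\left[\theta \mid \mathbf{x}\right].
\]
% Illustrative (assumed) loss for p in (0,1): the denominator drives the loss to
% infinity as the decision \hat{p} approaches either boundary, for any fixed p in (0,1).
\[
  L(p,\hat p) = \frac{(p-\hat p)^2}{\hat p\,(1-\hat p)},
  \qquad
  L(p,\hat p) \to \infty \ \text{as}\ \hat p \to 0 \ \text{or}\ \hat p \to 1.
\]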