Hua Liang, Runze Li
J Am Stat Assoc. 2009;104(485):234-248. doi: 10.1198/jasa.2009.0127.
This article focuses on variable selection for partially linear models when the covariates are measured with additive errors. We propose two classes of variable selection procedures, penalized least squares and penalized quantile regression, based on the nonconvex penalization principle. The first procedure corrects the bias in the loss function caused by measurement error via the so-called correction-for-attenuation approach, whereas the second corrects the bias by using orthogonal regression. The sampling properties of both procedures are investigated: the rate of convergence and the asymptotic normality of the resulting estimates are established. We further demonstrate that, with proper choices of the penalty functions and the regularization parameter, the resulting estimates perform asymptotically as well as an oracle procedure (Fan and Li 2001). The choice of smoothing parameters is also discussed. Finite-sample performance of the proposed variable selection procedures is assessed by Monte Carlo simulation studies, and the procedures are further illustrated with an application.
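To make the correction-for-attenuation idea concrete, the following is a minimal toy sketch, not the paper's estimator: it drops the nonparametric component (purely linear model), assumes the measurement-error covariance `Sigma_uu` is known, and replaces the full penalized optimization with a one-step SCAD thresholding of the corrected least-squares estimate. All variable names, the simulated design, and the tuning values (`lam`, `a`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 4
beta = np.array([1.5, 0.0, -2.0, 0.0])          # sparse true coefficients
X = rng.normal(size=(n, p))                     # true covariates (unobserved)
sigma_u = 0.5
Sigma_uu = sigma_u**2 * np.eye(p)               # known measurement-error covariance
W = X + rng.normal(scale=sigma_u, size=(n, p))  # observed, error-prone covariates
y = X @ beta + rng.normal(scale=0.3, size=n)

# Naive least squares treats W as if it were X; the error in W
# attenuates the estimates (bias toward zero).
beta_naive = np.linalg.solve(W.T @ W, W.T @ y)

# Correction for attenuation: subtract n * Sigma_uu from the Gram matrix,
# since E[W'W] = E[X'X] + n * Sigma_uu.
beta_corr = np.linalg.solve(W.T @ W - n * Sigma_uu, W.T @ y)

def scad_threshold(z, lam, a=3.7):
    """SCAD thresholding rule (Fan and Li 2001) applied componentwise;
    a one-step surrogate for the nonconvex penalized fit."""
    az = np.abs(z)
    soft = np.sign(z) * np.maximum(az - lam, 0.0)           # |z| <= 2*lam
    mid = ((a - 1) * z - np.sign(z) * a * lam) / (a - 2)    # 2*lam < |z| <= a*lam
    return np.where(az <= 2 * lam, soft, np.where(az <= a * lam, mid, z))

# Small corrected coefficients are shrunk exactly to zero (variable selection),
# while large ones are left nearly unpenalized.
beta_scad = scad_threshold(beta_corr, lam=0.2)
```

Because X here has identity covariance, the naive slopes shrink by roughly the factor 1/(1 + sigma_u^2) = 0.8, while the corrected estimate is consistent; the SCAD step then zeroes the noise-level coefficients, mimicking the oracle behavior discussed in the abstract.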