Jan de Leeuw, Kenneth Lange
Department of Statistics, University of California, Los Angeles, CA 90095.
Comput Stat Data Anal. 2009 May 15;53(7):2471-2484. doi: 10.1016/j.csda.2009.01.002.
Majorization methods solve minimization problems by replacing a complicated problem with a sequence of simpler problems. Solving the sequence of simple optimization problems guarantees convergence to a solution of the complicated original problem. Convergence is guaranteed by requiring that each approximating function majorize the original function at the current solution. The leading examples of majorization are the EM algorithm and the SMACOF algorithm used in multidimensional scaling. The simplest possible majorizing subproblems are quadratic, because a quadratic is easy to minimize. In this paper quadratic majorizations for real-valued functions of a real variable are analyzed, and the concept of sharp majorization is introduced and studied. Applications to logit, probit, and robust loss functions are discussed.
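The quadratic-majorization idea described above can be illustrated with a minimal sketch. The objective below is a hypothetical logit-type loss chosen for illustration (it is not an example taken from the paper): its second derivative is bounded by 1/4, the uniform curvature bound for the logistic function, so a quadratic with that curvature touching the objective at the current iterate majorizes it everywhere, and each subproblem has a closed-form minimizer.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def quadratic_majorizer(f, fprime, y, bound):
    """Return g(. | y): a quadratic that touches f at y and lies above
    f everywhere, valid whenever f'' <= bound on the whole line."""
    fy, gy = f(y), fprime(y)
    return lambda x: fy + gy * (x - y) + 0.5 * bound * (x - y) ** 2

# Hypothetical objective: f(x) = log(1 + e^x) - 0.3 x.
# Its second derivative sigmoid(x)(1 - sigmoid(x)) is at most 1/4.
f = lambda x: math.log1p(math.exp(x)) - 0.3 * x
fprime = lambda x: sigmoid(x) - 0.3
BOUND = 0.25  # uniform curvature bound for the logit function

# Majorize-minimize iteration: minimizing the quadratic surrogate
# g(. | y) gives the closed-form update x_new = y - f'(y) / bound.
x = 0.0
for _ in range(60):
    x = x - fprime(x) / BOUND

# The fixed point solves sigmoid(x) = 0.3, i.e. x = log(0.3 / 0.7).
```

Because the surrogate lies above the objective and agrees with it at the current iterate, each update can only decrease f, which is the monotone-descent guarantee the abstract refers to.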