Gu Bin, Wei Xiyuan, Zhang Hualin, Chang Yi, Huang Heng
School of Artificial Intelligence, Jilin University, Changchun, 130012 Jilin, China.
Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, U.A.E.
Neural Comput. 2024 Apr 23;36(5):897-935. doi: 10.1162/neco_a_01636.
Zeroth-order (ZO) optimization is a key technique for machine learning problems where gradient calculation is expensive or impossible. Several variance-reduced ZO proximal algorithms have been proposed to speed up ZO optimization for nonsmooth problems, and all of them opted for the coordinated ZO estimator over the random ZO estimator when approximating the true gradient, since the former is more accurate. While the random ZO estimator introduces a larger error and makes convergence analysis more challenging than the coordinated ZO estimator, it requires only O(1) computation, which is significantly less than the O(d) computation of the coordinated ZO estimator, with d being the dimension of the problem space. To take advantage of the computational efficiency of the random ZO estimator, we first propose a ZO objective decrease (ZOOD) property that can incorporate two different types of errors into the upper bound of the convergence rate. Next, we propose two generic reduction frameworks for ZO optimization that can automatically derive the convergence results for convex and nonconvex problems, respectively, as long as the convergence rate of the inner solver satisfies the ZOOD property. By applying the two reduction frameworks to our proposed ZOR-ProxSVRG and ZOR-ProxSAGA, two variance-reduced ZO proximal algorithms with fully random ZO estimators, we improve the state-of-the-art function query complexities from O(min(dn^{1/2}ε^{-2}, dε^{-3})) to Õ(n + dε^{-2}) under d > n^{1/2} for nonconvex problems, and from O(dε^{-2}) to Õ(n log(1/ε) + d/ε) for convex problems. Finally, we conduct experiments to verify the superiority of our proposed methods.
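The query-cost gap the abstract describes can be illustrated with the two standard finite-difference estimators. Below is a minimal sketch (not the paper's ZOR-ProxSVRG/ZOR-ProxSAGA algorithms): the two-point random ZO estimator needs only 2 = O(1) function queries per call, while the coordinated ZO estimator needs 2d queries. Function names and the smoothing radius `mu` are illustrative choices, not from the paper.

```python
import numpy as np

def random_zo_grad(f, x, mu=1e-4, rng=None):
    """Two-point random ZO estimator: 2 function queries (O(1)).

    Unbiased for quadratics in expectation over the random direction u,
    but with higher variance than the coordinated estimator.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)          # random unit direction on the sphere
    d = x.size
    return d * (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

def coordinate_zo_grad(f, x, mu=1e-4):
    """Coordinated ZO estimator: one central difference per coordinate,
    2d function queries (O(d)), but a much more accurate approximation."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = 1.0
        g[i] = (f(x + mu * e) - f(x - mu * e)) / (2 * mu)
    return g
```

On a smooth test function, a single coordinated estimate already matches the true gradient closely, whereas the random estimate only matches it on average over many draws — exactly the accuracy-versus-cost trade-off that motivates the ZOOD property and the reduction frameworks.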