Zhou Wei, Zhang Hai-Tao, Wang Jun
IEEE Trans Neural Netw Learn Syst. 2022 Jul;33(7):3065-3078. doi: 10.1109/TNNLS.2020.3049056. Epub 2022 Jul 6.
Sparse Bayesian learning (SBL) is a popular machine learning approach with a superior generalization capability due to the sparsity of its adopted model. However, it entails a matrix inversion at each iteration, hindering its practical application to large-scale data sets. To overcome this bottleneck, we propose an efficient SBL algorithm with O(n) computational complexity per iteration based on a Gaussian-scale mixture prior model. By specifying two different hyperpriors, the proposed efficient SBL algorithm can meet two different requirements, namely high efficiency and high sparsity. A surrogate function is introduced herein to approximate the posterior density of model parameters and thereby avoid matrix inversions. Using a data-dependent term, a joint cost function with separate penalty terms is reformulated in the joint space of model parameters and hyperparameters. The resulting nonconvex optimization problem is solved by a block coordinate descent method within a majorization-minimization framework. Finally, extensive experiments on benchmark sparse signal recovery and sparse image reconstruction problems substantiate the effectiveness and superiority of the proposed approach in terms of computational time and estimation error.
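To make the bottleneck concrete, the sketch below implements a classical (RVM-style) SBL iteration, in which the posterior covariance requires inverting an n-by-n matrix every iteration, the O(n^3) step the proposed algorithm avoids. This is a generic textbook illustration under an assumed fixed noise precision `beta`, not the paper's O(n) method; the function name and update schedule are illustrative.

```python
import numpy as np

def sbl_classical(Phi, y, n_iter=50, beta=100.0):
    """Classical sparse Bayesian learning iteration (textbook sketch).

    Phi : (m, n) design/dictionary matrix
    y   : (m,) observations
    beta: assumed known noise precision (illustrative choice)

    Each iteration forms the posterior covariance
        Sigma = (beta * Phi^T Phi + diag(alpha))^{-1},
    an O(n^3) inversion -- the per-iteration bottleneck that the
    abstract's proposed algorithm replaces with O(n) updates.
    """
    m, n = Phi.shape
    alpha = np.ones(n)  # hyperparameters: precisions of the weight prior
    PtP = Phi.T @ Phi
    Pty = Phi.T @ y
    for _ in range(n_iter):
        # O(n^3) matrix inversion at every iteration
        Sigma = np.linalg.inv(beta * PtP + np.diag(alpha))
        mu = beta * Sigma @ Pty               # posterior mean of weights
        gamma = 1.0 - alpha * np.diag(Sigma)  # "well-determinedness" factors
        alpha = gamma / (mu**2 + 1e-12)       # hyperparameter re-estimation
    return mu, alpha
```

Irrelevant weights acquire large `alpha` values and are driven toward zero, which is the mechanism behind SBL's sparsity; the cost of the repeated inversion is what motivates the surrogate-function approach described above.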