
Co-Learning Bayesian Optimization

Author Information

Guo Zhendong, Ong Yew-Soon, He Tiantian, Liu Haitao

Publication Information

IEEE Trans Cybern. 2022 Sep;52(9):9820-9833. doi: 10.1109/TCYB.2022.3168551. Epub 2022 Aug 18.

Abstract

Bayesian optimization (BO) is well known to be sample efficient for solving black-box problems. However, BO algorithms may get stuck in suboptimal solutions even with plenty of samples. Intrinsically, this suboptimality can be attributed to the poor surrogate accuracy of the trained Gaussian process (GP), particularly in the regions where the optimal solutions are located. Hence, we propose to build multiple GP models, instead of a single GP surrogate, that complement each other, thereby resolving the suboptimality of BO. Nevertheless, according to the bias-variance tradeoff equation, individual prediction errors can increase as model diversity increases, which may lead to even worse overall surrogate accuracy. On the other hand, based on the theory of Rademacher complexity, it has been proven that exploiting the agreement of models on unlabeled information can reduce the complexity of the hypothesis space, thereby achieving the required surrogate accuracy with fewer samples. The value of such model agreement has been extensively demonstrated in co-training-style algorithms, which boost model accuracy with a small portion of labeled samples. Inspired by the above, we propose a novel BO algorithm, termed co-learning BO (CLBO), which exploits both model diversity and agreement on unlabeled information to improve overall surrogate accuracy with limited samples, thereby achieving more efficient global optimization. Tests on five numerical toy problems and three engineering benchmarks demonstrate the effectiveness of the proposed CLBO.
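
The abstract appeals to "the bias-variance tradeoff equation" without stating it. One standard way to make the diversity tradeoff precise is the ambiguity decomposition of Krogh and Vedelsby for a convex ensemble of regressors; it is quoted here as background only, and it is an assumption that this is the tradeoff the authors have in mind:

$$
\big(\bar{f}(x) - y\big)^2
= \underbrace{\sum_i w_i \big(f_i(x) - y\big)^2}_{\text{avg.\ individual error}}
- \underbrace{\sum_i w_i \big(f_i(x) - \bar{f}(x)\big)^2}_{\text{ambiguity (diversity)}},
\qquad
\bar{f}(x) = \sum_i w_i f_i(x),\;\; w_i \ge 0,\;\; \sum_i w_i = 1.
$$

Increasing diversity lowers the ensemble error only if the individual errors (the first sum) do not grow faster, which is exactly the tension the abstract describes.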
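
As a rough illustration of the idea, here is a minimal sketch of a multi-surrogate BO loop in Python: several GPs with different kernels are fit to the same data, and their disagreement on unlabeled candidate points is folded into an expected-improvement acquisition. The toy objective, kernel choices, and agreement heuristic are all illustrative assumptions; this is not the CLBO algorithm from the paper.

```python
# Sketch of a multi-surrogate BO loop in the spirit of the abstract.
# The disagreement heuristic below is an illustrative stand-in for the
# paper's use of model agreement on unlabeled points, NOT CLBO itself.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic


def black_box(x):
    """Toy 1-D objective to minimize (stand-in for the real black box)."""
    return np.sin(3.0 * x) + 0.5 * x ** 2


def expected_improvement(mu, sigma, best):
    """Standard EI acquisition for minimization."""
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)


rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(5, 1))   # initial design
y = black_box(X).ravel()

# Diverse surrogates: same data, different kernel families.
kernels = [RBF(), Matern(nu=1.5), RationalQuadratic()]

for it in range(20):
    models = [
        GaussianProcessRegressor(kernel=k, normalize_y=True).fit(X, y)
        for k in kernels
    ]
    cand = rng.uniform(-2.0, 2.0, size=(512, 1))  # unlabeled candidate pool

    mus, stds = [], []
    for m in models:
        mu_i, std_i = m.predict(cand, return_std=True)
        mus.append(mu_i)
        stds.append(std_i)
    mus, stds = np.array(mus), np.array(stds)

    # Ensemble mean plus cross-model disagreement as extra uncertainty:
    # candidates where the surrogates conflict look more worth exploring.
    mu = mus.mean(axis=0)
    sigma = stds.mean(axis=0) + mus.std(axis=0)

    ei = expected_improvement(mu, sigma, best=y.min())
    x_next = cand[np.argmax(ei)].reshape(1, -1)

    X = np.vstack([X, x_next])
    y = np.append(y, black_box(x_next).ravel())

print(f"best observed value: {y.min():.4f}")
```

In this toy setup the disagreement term enlarges the effective predictive uncertainty where the surrogates conflict, nudging the search toward regions a single-model EI would under-explore; how CLBO actually combines diversity and agreement is specified in the paper itself.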
