Takeno Shion, Fukuoka Hitoshi, Tsukada Yuhki, Koyama Toshiyuki, Shiga Motoki, Takeuchi Ichiro, Karasuyama Masayuki
Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya, Aichi, 466-8555, Japan.
RIKEN Center for Advanced Intelligence Project, Chuo-ku, Tokyo, 103-0027, Japan.
Neural Comput. 2022 Sep 12;34(10):2145-2203. doi: 10.1162/neco_a_01530.
Bayesian optimization (BO) is a popular method for expensive black-box optimization problems; however, querying the objective function at every iteration can be a bottleneck that hinders efficient search. In this regard, multifidelity Bayesian optimization (MFBO) aims to accelerate BO by incorporating lower-fidelity observations that are available at a lower sampling cost. In our previous work, we proposed an information-theoretic approach to MFBO, referred to as multifidelity max-value entropy search (MF-MES), which inherits the practical effectiveness and computational simplicity of the well-known max-value entropy search (MES) for single-fidelity BO. However, the applicability of MF-MES is still limited to the case in which a single observation is obtained sequentially. In this letter, we generalize MF-MES so that information gain can be evaluated even when multiple observations are obtained simultaneously. This generalization enables MF-MES to address two practical problem settings: synchronous parallelization and trace-aware querying. We show that the acquisition functions for these extensions inherit the simplicity of MF-MES without introducing additional assumptions. We also provide computational techniques for entropy evaluation and posterior sampling in the acquisition functions, which are shared by all variants of MF-MES. The effectiveness of MF-MES is demonstrated using benchmark functions and real-world applications such as materials science data and hyperparameter tuning of machine-learning algorithms.
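To make the underlying idea concrete, the following is a minimal illustrative sketch of the single-fidelity MES acquisition that MF-MES builds on, not the authors' implementation: a Gaussian process posterior is computed on a candidate grid, plausible maximum values y* are drawn from posterior function samples, and the acquisition averages the entropy reduction of a Gaussian truncated at each sampled y*. The kernel, toy data, and all hyperparameters here are assumptions for the sketch.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel (illustrative choice)
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

# Toy 1-D observations of a black-box objective (hypothetical data)
X = np.array([0.1, 0.4, 0.8])
y = np.sin(6 * X)

# GP posterior mean and covariance on a candidate grid
Xs = np.linspace(0.0, 1.0, 200)
K = rbf(X, X) + 1e-6 * np.eye(len(X))
Ks = rbf(X, Xs)
Kinv = np.linalg.inv(K)
mu = Ks.T @ Kinv @ y
cov = rbf(Xs, Xs) - Ks.T @ Kinv @ Ks
sigma = np.sqrt(np.clip(np.diag(cov), 1e-12, None))

# Sample plausible maxima y* by drawing posterior functions on the grid
L = np.linalg.cholesky(cov + 1e-6 * np.eye(len(Xs)))
draws = mu[:, None] + L @ rng.standard_normal((len(Xs), 10))
y_star = draws.max(axis=0)  # one sampled max value per posterior draw

# MES acquisition: average entropy reduction of p(y | x) after
# conditioning on y <= y* (truncated-Gaussian entropy formula)
gamma = (y_star[None, :] - mu[:, None]) / sigma[:, None]
acq = np.mean(
    gamma * norm.pdf(gamma) / (2 * norm.cdf(gamma)) - norm.logcdf(gamma),
    axis=1,
)
x_next = Xs[np.argmax(acq)]  # candidate with the largest information gain
```

In this form, each acquisition evaluation scores a single candidate point; the generalization in the letter extends the information-gain computation to sets of simultaneous observations across fidelities.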