Paper Title
Bayesian Optimization with Informative Covariance
Paper Authors
Paper Abstract
Bayesian optimization is a methodology for global optimization of unknown and expensive objectives. It combines a surrogate Bayesian regression model with an acquisition function to decide where to evaluate the objective. Typical regression models are given by Gaussian processes with stationary covariance functions. However, these functions are unable to express prior input-dependent information, including possible locations of the optimum. The ubiquity of stationary models has led to the common practice of exploiting prior information via informative mean functions. In this paper, we highlight that these models can perform poorly, especially in high dimensions. We propose novel informative covariance functions for optimization, leveraging nonstationarity to encode preferences for certain regions of the search space and adaptively promote local exploration during optimization. We demonstrate that the proposed functions can increase the sample efficiency of Bayesian optimization in high dimensions, even under weak prior information.
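To make the mechanism concrete, the following is a minimal sketch, not the authors' construction, of one way a covariance function can encode a preference for a region of the search space: a stationary RBF kernel is modulated by an input-dependent amplitude that grows near an anticipated optimum x0, and the resulting nonstationary Gaussian process drives a toy expected-improvement step over random candidates. The kernel form, the parameters x0, strength, and width, and the candidate-set acquisition are illustrative assumptions, not details taken from the paper.

import numpy as np
from scipy.stats import norm


def rbf(X1, X2, lengthscale=1.0):
    """Stationary squared-exponential kernel exp(-||x - x'||^2 / (2 * lengthscale^2))."""
    d2 = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-0.5 * np.maximum(d2, 0.0) / lengthscale**2)


def amplitude(X, x0, strength=2.0, width=1.0):
    """Input-dependent amplitude: prior signal scale is inflated near the guess x0 (assumed form)."""
    d2 = np.sum((X - x0)**2, axis=1)
    return 1.0 + strength * np.exp(-0.5 * d2 / width**2)


def informative_cov(X1, X2, x0, lengthscale=1.0, strength=2.0, width=1.0):
    """Nonstationary covariance a(x) * a(x') * k_stat(x, x'); a diagonal scaling of a PSD kernel, so still PSD."""
    a1 = amplitude(X1, x0, strength, width)
    a2 = amplitude(X2, x0, strength, width)
    return a1[:, None] * a2[None, :] * rbf(X1, X2, lengthscale)


# Toy Bayesian-optimization step: zero-mean GP posterior under the informative covariance,
# followed by expected improvement over random candidates (minimization).
rng = np.random.default_rng(0)
dim = 5
x0 = np.zeros((1, dim))                    # assumed prior guess for the optimum's location
X = rng.uniform(-3, 3, size=(10, dim))     # inputs evaluated so far
y = np.sum(X**2, axis=1)                   # toy objective values

K = informative_cov(X, X, x0) + 1e-6 * np.eye(len(X))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))   # K^{-1} y

Xcand = rng.uniform(-3, 3, size=(256, dim))
Ks = informative_cov(Xcand, X, x0)
mu = Ks @ alpha                                        # posterior mean at candidates
v = np.linalg.solve(L, Ks.T)
var = np.clip(amplitude(Xcand, x0)**2 - np.sum(v**2, axis=0), 1e-12, None)  # posterior variance

best = y.min()
z = (best - mu) / np.sqrt(var)
ei = (best - mu) * norm.cdf(z) + np.sqrt(var) * norm.pdf(z)  # expected improvement
print("next query:", Xcand[np.argmax(ei)])

Because the modulation multiplies a valid kernel by a(x) a(x'), positive semi-definiteness is preserved while the prior variance, and hence the incentive to explore, is larger near the preferred region, which is one simple way nonstationarity can express input-dependent prior information.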