Paper Title

$π$BO: Augmenting Acquisition Functions with User Beliefs for Bayesian Optimization

Authors

Carl Hvarfner, Danny Stoll, Artur Souza, Marius Lindauer, Frank Hutter, Luigi Nardi

Abstract

Bayesian optimization (BO) has become an established framework and popular tool for hyperparameter optimization (HPO) of machine learning (ML) algorithms. While known for its sample efficiency, vanilla BO cannot utilize readily available prior beliefs the practitioner has on the potential location of the optimum. Thus, BO disregards a valuable source of information, reducing its appeal to ML practitioners. To address this issue, we propose $π$BO, an acquisition function generalization which incorporates prior beliefs about the location of the optimum in the form of a probability distribution, provided by the user. In contrast to previous approaches, $π$BO is conceptually simple and can easily be integrated with existing libraries and many acquisition functions. We provide regret bounds when $π$BO is applied to the common Expected Improvement acquisition function and prove convergence at regular rates independently of the prior. Further, our experiments show that $π$BO outperforms competing approaches across a wide suite of benchmarks and prior characteristics. We also demonstrate that $π$BO improves on the state-of-the-art performance for a popular deep learning task, with a 12.5$\times$ time-to-accuracy speedup over prominent BO approaches.
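
The abstract describes $π$BO as a simple generalization of standard acquisition functions: the user's belief about the optimum's location enters as a probability density $π(x)$ that multiplies the acquisition value, with the prior's influence decaying as evaluations accumulate. Below is a minimal illustrative sketch of that idea for Expected Improvement; the decay form $α(x) \cdot π(x)^{β/n}$, the function names, and the default $β$ are assumptions made for illustration based on the abstract, not the authors' reference implementation.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Standard EI for minimization, given the GP posterior mean/std at x."""
    sigma = np.maximum(sigma, 1e-12)  # guard against zero predictive std
    z = (f_best - mu) / sigma
    return sigma * (z * norm.cdf(z) + norm.pdf(z))

def pi_bo_acquisition(x, mu, sigma, f_best, prior_pdf, n, beta=10.0):
    """Prior-weighted EI: alpha(x) * pi(x)**(beta / n) (illustrative form).

    prior_pdf encodes the user's belief over the optimum's location.
    As the iteration count n grows, beta / n -> 0, so the weight
    pi(x)**(beta / n) -> 1 and plain EI is recovered, regardless of
    how good or bad the prior was."""
    n = max(n, 1)
    return expected_improvement(mu, sigma, f_best) * prior_pdf(x) ** (beta / n)

# Example: a Gaussian belief that the optimum lies near x = 0.3.
prior = norm(loc=0.3, scale=0.1)
x = np.linspace(0.0, 1.0, 5)
mu, sigma = np.zeros_like(x), np.ones_like(x)   # toy GP posterior
print(pi_bo_acquisition(x, mu, sigma, f_best=0.0, prior_pdf=prior.pdf, n=3))
```

Because the prior enters only as a multiplicative reweighting of the acquisition value, this construction can wrap any acquisition function in an existing BO library without modifying the surrogate model, which is the integration simplicity the abstract emphasizes.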
