Paper Title
Black-Box Optimization Revisited: Improving Algorithm Selection Wizards through Massive Benchmarking
Paper Authors
Paper Abstract
Existing studies in black-box optimization for machine learning suffer from low generalizability, caused by a typically selective choice of problem instances used for training and testing different optimization algorithms. Among other issues, this practice promotes overfitting and poor-performing user guidelines. To address this shortcoming, we propose in this work a benchmark suite, OptimSuite, which covers a broad range of black-box optimization problems, ranging from academic benchmarks to real-world applications, from discrete over numerical to mixed-integer problems, from small to very large-scale problems, from noisy over dynamic to static problems, etc. We demonstrate the advantages of such a broad collection by deriving from it Automated Black Box Optimizer (ABBO), a general-purpose algorithm selection wizard. Using three different types of algorithm selection techniques, ABBO achieves competitive performance on all benchmark suites. It significantly outperforms previous state of the art on some of them, including YABBOB and LSGO. ABBO relies on many high-quality base components. Its excellent performance is obtained without any task-specific parametrization. The OptimSuite benchmark collection, the ABBO wizard and its base solvers have all been merged into the open-source Nevergrad platform, where they are available for reproducible research.
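The abstract describes ABBO as a wizard that routes each problem to a suitable base solver. A minimal sketch of one such technique, rule-based selection from problem features, is shown below. All rules, thresholds, and the mapping to solver names here are illustrative assumptions, not ABBO's actual selection logic.

```python
# Hypothetical sketch of feature-based algorithm selection, in the spirit
# of an algorithm selection wizard. The rules and thresholds below are
# invented for illustration; they are NOT ABBO's actual decision logic.
from dataclasses import dataclass

@dataclass
class Problem:
    dimension: int   # number of decision variables
    discrete: bool   # discrete search space?
    noisy: bool      # noisy objective evaluations?
    budget: int      # total number of function evaluations allowed

def select_solver(p: Problem) -> str:
    """Map problem features to the name of a base solver (illustrative)."""
    if p.discrete:
        return "DiscreteOnePlusOne"   # simple evolutionary search for discrete domains
    if p.noisy:
        return "TBPSA"                # population-based, averages out evaluation noise
    if p.dimension > 1000:
        return "DiagonalCMA"          # cheaper covariance model for very large scale
    if p.budget < 30 * p.dimension:
        return "Cobyla"               # local model-based search when evaluations are scarce
    return "CMA"                      # default evolution strategy otherwise

# Example: a small, noise-free continuous problem with a generous budget
print(select_solver(Problem(dimension=10, discrete=False, noisy=False, budget=10000)))  # CMA
```

A full wizard would combine several such mechanisms (the abstract mentions three types of selection techniques), e.g. hand-crafted rules, learned selectors, and bet-and-run portfolios, rather than a single rule table.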