Paper Title

Local Search is a Remarkably Strong Baseline for Neural Architecture Search

Authors

Den Ottelander, T., Dushatskiy, A., Virgolin, M., Bosman, P. A. N.

Abstract

Neural Architecture Search (NAS), i.e., the automation of neural network design, has gained much popularity in recent years with increasingly complex search algorithms being proposed. Yet, solid comparisons with simple baselines are often missing. At the same time, recent retrospective studies have found many new algorithms to be no better than random search (RS). In this work we consider, for the first time, a simple Local Search (LS) algorithm for NAS. We particularly consider a multi-objective NAS formulation, with network accuracy and network complexity as two objectives, as understanding the trade-off between these two objectives is arguably the most interesting aspect of NAS. The proposed LS algorithm is compared with RS and two evolutionary algorithms (EAs), as these are often heralded as being ideal for multi-objective optimization. To promote reproducibility, we create and release two benchmark datasets, named MacroNAS-C10 and MacroNAS-C100, containing 200K saved network evaluations for two established image classification tasks, CIFAR-10 and CIFAR-100. Our benchmarks are designed to be complementary to existing benchmarks, especially in that they are better suited for multi-objective search. We additionally consider a version of the problem with a much larger architecture space. While we find and show that the considered algorithms explore the search space in fundamentally different ways, we also find that LS substantially outperforms RS and even performs nearly as well as state-of-the-art EAs. We believe that this provides strong evidence that LS is truly a competitive baseline for NAS against which new NAS algorithms should be benchmarked.
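To make the baseline concrete: local search over a discrete architecture encoding can be as simple as a first-improvement scan, where each categorical variable is tried with every alternative value and a change is kept whenever it improves the score. The sketch below is only illustrative and is not the paper's exact (multi-objective) variant; the encoding, the function names, and the toy objective are all assumptions made for this example.

```python
import random

def local_search(evaluate, num_vars, num_options, seed=0):
    """First-improvement local search over a categorical encoding.

    An architecture is a list of `num_vars` choices, each in
    range(num_options). `evaluate` returns a score to maximize
    (e.g., validation accuracy in a NAS setting). This is a generic
    sketch, not the algorithm from the paper.
    """
    rng = random.Random(seed)
    # Start from a uniformly random architecture.
    arch = [rng.randrange(num_options) for _ in range(num_vars)]
    best = evaluate(arch)
    improved = True
    while improved:
        improved = False
        # Visit variables in random order; try every alternative value.
        for i in rng.sample(range(num_vars), num_vars):
            for v in range(num_options):
                if v == arch[i]:
                    continue
                cand = arch[:i] + [v] + arch[i + 1:]
                score = evaluate(cand)
                if score > best:  # keep the first improving move
                    arch, best, improved = cand, score, True
    return arch, best

# Hypothetical toy objective: count how many variables are set to 1.
toy = lambda a: sum(1 for x in a if x == 1)
arch, score = local_search(toy, num_vars=8, num_options=3, seed=42)
```

On this separable toy objective the search reaches the optimum (all variables set to 1); on a real NAS benchmark `evaluate` would instead look up a stored network evaluation, and the multi-objective setting studied in the paper requires comparing candidates by dominance rather than a single score.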
