Paper Title
Single-level Optimization For Differential Architecture Search
Paper Authors
Paper Abstract
In this paper, we point out that differential architecture search (DARTS) biases the gradients of the architecture parameters: in its bi-level optimization framework, network weights and architecture parameters are updated alternately on different datasets. This bias causes the architecture parameters of non-learnable operations to surpass those of learnable operations. Moreover, using softmax as the activation function for the architecture parameters, together with an inappropriate learning rate, exacerbates the bias. As a result, non-learnable operations are frequently observed to dominate the search phase. To reduce the bias, we propose replacing bi-level optimization with single-level optimization, and replacing softmax with a non-competitive activation function such as sigmoid. With these changes, we can steadily search for high-performance architectures. Experiments on NAS-Bench-201 validate our hypothesis, and the method stably finds nearly optimal architectures. On the DARTS search space, we find a state-of-the-art architecture with 77.0% top-1 accuracy on ImageNet-1K (the training setting follows PDARTS, without any additional modules), and steadily search architectures with up to 76.5% top-1 accuracy (without selecting the best from the searched architectures), which is comparable with the best currently reported results.
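The abstract's two proposed changes, a non-competitive sigmoid activation over the architecture parameters and single-level instead of bi-level optimization, can be illustrated with a short sketch. This is a minimal example assuming PyTorch, not the authors' released code; the names MixedOp, candidate_ops, and single_level_step are illustrative assumptions.

```python
# A minimal sketch of the abstract's two changes; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One DARTS edge: a weighted sum of candidate operations."""
    def __init__(self, candidate_ops, use_sigmoid=True):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)
        # One architecture parameter (alpha) per candidate operation.
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(candidate_ops)))
        self.use_sigmoid = use_sigmoid

    def forward(self, x):
        if self.use_sigmoid:
            # Non-competitive activation: each weight lies in (0, 1)
            # independently, so one operation gaining weight does not
            # force the others' weights down.
            weights = torch.sigmoid(self.alpha)
        else:
            # Softmax (original DARTS) makes operations compete for a
            # fixed probability mass, which the paper argues amplifies the
            # bias toward non-learnable operations such as skip-connect.
            weights = F.softmax(self.alpha, dim=-1)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

def single_level_step(model, batch, optimizer):
    """Single-level optimization: network weights and architecture
    parameters are updated jointly on the same training batch, replacing
    DARTS's bi-level scheme that alternates between two datasets."""
    x, y = batch
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()

# Usage sketch: one edge mixing a 3x3 convolution with a skip connection.
edge = MixedOp([nn.Conv2d(16, 16, 3, padding=1), nn.Identity()])
out = edge(torch.randn(2, 16, 8, 8))
```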