Paper Title

Evolving Search Space for Neural Architecture Search

Authors

Yuanzheng Ci, Chen Lin, Ming Sun, Boyu Chen, Hongwen Zhang, Wanli Ouyang

Abstract

The automation of neural architecture design has been a coveted alternative to human experts. Recent works use small search spaces, which are easier to optimize but place a limited upper bound on the quality of the optimal solution. These methods require extra human design to propose a more suitable space for the specific task and algorithm capacity. To further enhance the degree of automation in neural architecture search, we present a Neural Search-space Evolution (NSE) scheme that iteratively amplifies the results of the previous search by maintaining an optimized subset of the search space. This design minimizes the need for a well-designed search space. We further extend the flexibility of obtainable architectures by introducing a learnable multi-branch setting. With the proposed method, a consistent performance gain is achieved during a progressive search over successive search spaces. We achieve 77.3% top-1 retrain accuracy on ImageNet with 333M FLOPs, a state-of-the-art result among previous auto-generated architectures that do not involve knowledge distillation or weight pruning. When a latency constraint is adopted, our method also outperforms the previous best-performing mobile models, achieving 77.9% top-1 retrain accuracy.
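The evolution loop described above can be sketched compactly: maintain a subset of a larger operation pool, search and score it, retain the well-performing operations, and refill the subset with fresh candidates so the search space keeps evolving. The Python below is a minimal, hypothetical illustration of that idea only; the function names, the random scoring stub, and all parameters are assumptions for exposition, not the authors' actual implementation.

```python
import random

def search_and_score(subset):
    """Stub for one search round: assign a fitness to each operation.
    In the real method this would come from supernet training/evaluation;
    here it is random, purely for illustration."""
    return [(op, random.random()) for op in subset]

def evolve_search_space(pool, subset_size=8, rounds=5, keep_ratio=0.5):
    """Iteratively evolve a search-space subset (illustrative sketch)."""
    subset = random.sample(pool, subset_size)
    for _ in range(rounds):
        scored = search_and_score(subset)
        scored.sort(key=lambda pair: pair[1], reverse=True)
        # Keep the optimized subset of well-performing operations...
        kept = [op for op, _ in scored[: int(keep_ratio * subset_size)]]
        # ...and refill with unseen candidates from the larger pool,
        # so each round searches an evolved space.
        fresh = [op for op in pool if op not in kept]
        subset = kept + random.sample(fresh, subset_size - len(kept))
    return subset

if __name__ == "__main__":
    ops = [f"op_{i}" for i in range(32)]  # hypothetical operation pool
    print(evolve_search_space(ops))
```

Each round inherits the strongest operations from the previous round's subset, which is what lets performance improve consistently across successive search spaces without a hand-designed space up front.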
