Paper Title
Rapid Structural Pruning of Neural Networks with Set-based Task-Adaptive Meta-Pruning
Paper Authors
Paper Abstract
As deep neural networks grow in size and are increasingly deployed to more resource-limited devices, there has been a recent surge of interest in network pruning methods, which aim to remove less important weights or activations of a given network. A common limitation of most existing pruning techniques is that they require pre-training the network at least once before pruning, and thus we can benefit from the reduction in memory and computation only at inference time. However, reducing the training cost of neural networks with rapid structural pruning may be beneficial either to minimize the monetary cost of cloud computing or to enable on-device learning on a resource-limited device. Recently introduced random-weight pruning approaches can eliminate the need for pretraining, but they often obtain suboptimal performance compared to conventional pruning techniques, and they do not allow for faster training since they perform unstructured pruning. To overcome these limitations, we propose Set-based Task-Adaptive Meta Pruning (STAMP), which task-adaptively prunes a network pretrained on a large reference dataset by generating a pruning mask on it as a function of the target dataset. To ensure maximum performance improvement on the target task, we meta-learn the mask generator over different subsets of the reference dataset, such that it can generalize well to any unseen dataset within a few gradient steps of training. We validate STAMP against recent advanced pruning methods on benchmark datasets, where it not only obtains significantly improved compression rates over the baselines at similar accuracy, but also achieves orders of magnitude faster training speed.
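To make the set-based mechanism in the abstract concrete, below is a minimal PyTorch sketch of the general idea: a generator pools a small set of target examples into a single embedding and predicts per-channel (structural) masks for a pretrained backbone, and it is meta-trained over random subsets of a reference dataset. The `MaskedMLP` backbone, `SetMaskGenerator` architecture, straight-through binarization, and the sparsity penalty are illustrative assumptions, not the paper's exact components.

```python
# Minimal, illustrative sketch (not the authors' code) of set-based
# task-adaptive mask generation with meta-training over reference subsets.
# All module names, sizes, and loss weights are assumptions for clarity.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedMLP(nn.Module):
    """Toy stand-in for a pretrained backbone whose hidden units can be
    switched off by externally supplied per-layer channel masks."""

    def __init__(self, in_dim=784, hidden=(256, 256), num_classes=10):
        super().__init__()
        dims = (in_dim,) + tuple(hidden)
        self.hidden_layers = nn.ModuleList(
            [nn.Linear(dims[i], dims[i + 1]) for i in range(len(hidden))]
        )
        self.classifier = nn.Linear(hidden[-1], num_classes)
        self.hidden_sizes = list(hidden)

    def forward(self, x, channel_masks):
        h = x.flatten(1)
        for layer, mask in zip(self.hidden_layers, channel_masks):
            h = F.relu(layer(h)) * mask  # structured pruning: zero whole units
        return self.classifier(h)


class SetMaskGenerator(nn.Module):
    """Encodes a set of target examples, mean-pools the encodings
    (permutation invariant), and predicts per-layer keep-probabilities."""

    def __init__(self, in_dim, channels_per_layer, embed_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim), nn.ReLU(),
        )
        self.heads = nn.ModuleList(
            [nn.Linear(embed_dim, c) for c in channels_per_layer]
        )

    def forward(self, x_set):
        z = self.encoder(x_set.flatten(1)).mean(dim=0)  # one set embedding
        soft = [torch.sigmoid(head(z)) for head in self.heads]
        # straight-through: hard 0/1 mask on the forward pass, soft gradient
        hard = [(s > 0.5).float() + s - s.detach() for s in soft]
        return hard, soft


def meta_step(gen, backbone, support, query, opt, target_keep=0.5):
    """One meta-training episode: sample a subset of the reference data,
    generate masks from its support set, and update the generator so the
    masked backbone predicts the query set well at roughly target_keep
    channel density."""
    (sx, _), (qx, qy) = support, query
    hard, soft = gen(sx)
    loss = F.cross_entropy(backbone(qx, hard), qy)
    loss = loss + 0.1 * sum((s.mean() - target_keep).abs() for s in soft)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


if __name__ == "__main__":
    backbone = MaskedMLP()  # in practice: a network pretrained on reference data
    gen = SetMaskGenerator(784, backbone.hidden_sizes)
    opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    # fake episode standing in for a sampled subset of the reference dataset
    support = (torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,)))
    query = (torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,)))
    print(meta_step(gen, backbone, support, query, opt))
```

Mean pooling keeps the generator invariant to the ordering of the target examples, and the straight-through trick is one common way to train discrete masks with ordinary gradients; the paper's actual generator architecture and objective may differ from this sketch.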