Paper Title
A Tunable Robust Pruning Framework Through Dynamic Network Rewiring of DNNs
Paper Authors
Paper Abstract
This paper presents a dynamic network rewiring (DNR) method to generate pruned deep neural network (DNN) models that are robust against adversarial attacks yet maintain high accuracy on clean images. In particular, the presented DNR method is based on a unified constrained optimization formulation using a hybrid loss function that merges ultra-high model compression with robust adversarial training. This training strategy dynamically adjusts inter-layer connectivity based on per-layer normalized momentum computed from the hybrid loss function. In contrast to existing robust pruning frameworks that require multiple training iterations, the proposed learning strategy achieves an overall target pruning ratio with only a single training iteration and can be tuned to support both irregular and structured channel pruning. To evaluate the merits of DNR, experiments were performed with two widely accepted models, namely VGG16 and ResNet-18, on CIFAR-10 and CIFAR-100, as well as with VGG16 on Tiny-ImageNet. Compared to the baseline uncompressed models, DNR provides over 20x compression on all the datasets with no significant drop in either clean or adversarial classification accuracy. Moreover, our experiments show that DNR consistently finds compressed models with better clean and adversarial image classification performance than what is achievable through state-of-the-art alternatives.
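To make the two core ideas summarized in the abstract concrete, below is a minimal PyTorch-style sketch of (1) a hybrid loss that mixes clean and adversarial classification losses, and (2) a per-layer normalized momentum score that could guide how connectivity is regrown after pruning. All names (fgsm_example, hybrid_loss, lambda_adv, layer_momentum_scores) and the single-step FGSM attack are illustrative assumptions, not the authors' implementation; the paper's actual training procedure may differ in the attack used, the loss weighting, and the rewiring rule.

```python
# Illustrative sketch only; not the DNR reference implementation.
import torch
import torch.nn.functional as F


def fgsm_example(model, x, y, eps=8 / 255):
    """Single-step adversarial example (FGSM) used inside the hybrid loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + eps * grad.sign()).clamp(0.0, 1.0).detach()


def hybrid_loss(model, x, y, lambda_adv=0.5):
    """Mix clean and adversarial cross-entropy; lambda_adv is a guessed weight."""
    x_adv = fgsm_example(model, x, y)
    clean = F.cross_entropy(model(x), y)
    adv = F.cross_entropy(model(x_adv), y)
    return (1.0 - lambda_adv) * clean + lambda_adv * adv


def layer_momentum_scores(optimizer, model):
    """Per-layer momentum magnitude, normalized across layers, as a proxy for
    how strongly each layer's connectivity should be regrown after pruning."""
    scores = {}
    for name, p in model.named_parameters():
        if p.dim() < 2:  # skip biases and normalization parameters
            continue
        buf = optimizer.state.get(p, {}).get("momentum_buffer")
        scores[name] = buf.abs().sum().item() if buf is not None else 0.0
    total = sum(scores.values()) or 1.0
    return {name: s / total for name, s in scores.items()}
```

In a DNR-style loop, such normalized scores would determine how many pruned connections each layer is allowed to regrow at each update, so that a single training run can converge to the overall target pruning ratio rather than requiring repeated prune-retrain cycles.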