Paper Title


HYDRA: Pruning Adversarially Robust Neural Networks

Paper Authors

Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana

Abstract


In safety-critical but computationally resource-constrained applications, deep learning faces two key challenges: lack of robustness against adversarial attacks and large neural network size (often millions of parameters). While the research community has extensively explored the use of robust training and network pruning independently to address one of these challenges, only a few recent works have studied them jointly. However, these works inherit a heuristic pruning strategy that was developed for benign training, which performs poorly when integrated with robust training techniques, including adversarial training and verifiable robust training. To overcome this challenge, we propose to make pruning techniques aware of the robust training objective and let this objective guide the search for which connections to prune. We realize this insight by formulating the pruning objective as an empirical risk minimization problem, which is solved efficiently using SGD. We demonstrate that our approach, titled HYDRA, achieves compressed networks with state-of-the-art benign and robust accuracy, simultaneously. We demonstrate its success across the CIFAR-10, SVHN, and ImageNet datasets with four robust training techniques: iterative adversarial training, randomized smoothing, MixTrain, and CROWN-IBP. We also demonstrate the existence of highly robust sub-networks within non-robust networks. Our code and compressed networks are publicly available at \url{https://github.com/inspire-group/compactness-robustness}.
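The abstract describes formulating the pruning objective as an empirical risk minimization problem over which connections to keep, solved with SGD. The sketch below illustrates that general idea on a toy linear model: pretrained weights are frozen, a learnable importance score per connection is optimized by gradient descent, and the mask keeps the top-k scored connections (a straight-through-style approximation). This is a minimal NumPy illustration under assumed toy settings, not the authors' HYDRA implementation; every name, dimension, and hyperparameter here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pretrained" linear layer (weights are kept fixed during pruning).
d = 10
w = rng.normal(size=d)
X = rng.normal(size=(200, d))
y = X @ w                         # targets produced by the unpruned model

k = 5                             # number of connections to keep
scores = np.abs(w).copy()         # importance scores, init from magnitudes
lr = 0.05

for step in range(300):
    # Binary mask: keep the k connections with the largest scores.
    mask = np.zeros(d)
    mask[np.argsort(scores)[-k:]] = 1.0

    # Empirical risk of the masked network (mean squared error here).
    err = X @ (w * mask) - y

    # Gradient w.r.t. scores: gradient w.r.t. the masked weight, chained
    # through the top-k selection as if it were identity (straight-through).
    grad_scores = (X * err[:, None]).mean(axis=0) * w
    scores -= lr * grad_scores

pruned_w = w * mask
loss = float(np.mean((X @ pruned_w - y) ** 2))
print("kept:", int(mask.sum()), "final loss:", loss)
```

The design point this mirrors is that the keep/prune decision is driven by the training loss itself rather than by a fixed heuristic such as weight magnitude alone; in the robust-training setting described in the abstract, the loss inside the loop would be the robust training objective instead of plain squared error.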
