Paper Title

An Integrated Approach to Produce Robust Models with High Efficiency

Authors

Zhijian Li, Bao Wang, Jack Xin

Abstract

Deep Neural Networks (DNNs) need to be both efficient and robust for practical use. Quantization and structure simplification are promising ways to adapt DNNs to mobile devices, and adversarial training is the most popular method to make DNNs robust. In this work, we try to obtain both features by applying a convergent relaxation quantization algorithm, Binary-Relax (BR), to a robust adversarially trained model, ResNets Ensemble via Feynman-Kac Formalism (EnResNet). We also discover that high-precision quantization, such as ternary (TNN) and 4-bit, produces sparse DNNs. However, this sparsity is unstructured under adversarial training. To address the problems that adversarial training jeopardizes DNNs' accuracy on clean images and the structure of the sparsity, we design a trade-off loss function that helps DNNs preserve their natural accuracy and improve channel sparsity. With our trade-off loss function, we achieve both goals with no reduction of resistance under weak attacks and only a minor reduction of resistance under strong attacks. Together with the quantized EnResNet trained with the trade-off loss function, we provide robust models with high efficiency.
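The abstract does not spell out the form of the trade-off loss. As a rough, illustrative sketch only (not the authors' exact formulation), a loss that balances clean-image accuracy against adversarial robustness while encouraging channel-wise sparsity could look like the following; the weights `lambda_adv` and `lambda_sparse`, the helper `pgd_attack`, and the group-Lasso penalty over convolutional output channels are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def tradeoff_loss(model, x_clean, y, pgd_attack, lambda_adv=1.0, lambda_sparse=1e-4):
    """Hypothetical trade-off loss: clean-accuracy term + adversarial term
    + group-Lasso penalty promoting channel-structured sparsity.
    (Illustrative sketch; not the paper's exact loss.)"""
    # Natural (clean) cross-entropy preserves accuracy on unperturbed images.
    loss_nat = F.cross_entropy(model(x_clean), y)

    # Adversarial term: cross-entropy on adversarially perturbed images.
    x_adv = pgd_attack(model, x_clean, y)  # assumed attack helper, e.g. PGD
    loss_adv = F.cross_entropy(model(x_adv), y)

    # Group Lasso: each convolutional output channel forms one group, so the
    # penalty zeroes out whole channels rather than scattered individual weights.
    loss_group = 0.0
    for m in model.modules():
        if isinstance(m, torch.nn.Conv2d):
            w = m.weight.view(m.weight.size(0), -1)
            loss_group = loss_group + w.norm(dim=1).sum()

    return loss_nat + lambda_adv * loss_adv + lambda_sparse * loss_group
```

In a standard adversarial-training loop, a loss of this kind would replace the plain adversarial cross-entropy, with `lambda_adv` controlling the robustness versus natural-accuracy trade-off and `lambda_sparse` controlling how aggressively channels are pruned toward zero.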
