Paper Title
Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks
Paper Authors
Paper Abstract
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs). Quantization is a technique for making neural networks more efficient by running them using low-bit integer arithmetic and is therefore commonly adopted in industry. Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization, and certification of the quantized representation is necessary to guarantee robustness. In this work, we present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs. Inspired by advances in robust learning of non-quantized networks, our training algorithm computes the gradient of an abstract representation of the actual network. Unlike existing approaches, our method can handle the discrete semantics of QNNs. Based on QA-IBP, we also develop a complete verification procedure for verifying the adversarial robustness of QNNs, which is guaranteed to terminate and produce a correct answer. Compared to existing approaches, the key advantage of our verification procedure is that it runs entirely on GPU or other accelerator devices. We demonstrate experimentally that our approach significantly outperforms existing methods and establish the new state-of-the-art for training and certifying the robustness of QNNs.
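To make the core idea concrete, below is a minimal NumPy sketch of interval bound propagation through a single quantized affine layer. It is illustrative only: the function name `ibp_quantized_affine`, the uniform-quantization scheme (per-tensor scale, unsigned `n_bits` range), and the outward-rounding step are assumptions for this example, not the paper's exact QA-IBP formulation.

```python
# Minimal sketch: IBP through an affine layer followed by a sound
# over-approximation of uniform quantization (assumed scheme, not the
# paper's exact QA-IBP semantics).
import numpy as np

def ibp_quantized_affine(lower, upper, W, b, scale, n_bits=8):
    """Propagate an input box [lower, upper] through y = W x + b,
    then over-approximate the quantizer applied to the output."""
    center = (lower + upper) / 2.0
    radius = (upper - lower) / 2.0
    out_center = W @ center + b
    out_radius = np.abs(W) @ radius          # standard IBP for affine maps
    out_lower = out_center - out_radius
    out_upper = out_center + out_radius

    # Quantization step: round the bounds outward onto the integer grid
    # and clamp to the representable range, so the resulting box remains
    # a sound over-approximation of the quantized layer's outputs.
    qmin, qmax = 0, 2 ** n_bits - 1
    q_lower = np.clip(np.floor(out_lower / scale), qmin, qmax) * scale
    q_upper = np.clip(np.ceil(out_upper / scale), qmin, qmax) * scale
    return q_lower, q_upper
```

In a training loop in the spirit of the abstract, such propagated bounds would be fed into a robust loss and differentiated, so that the gradient is taken through the abstract (interval) representation of the network rather than through individual concrete inputs.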