Paper Title
SoK: Certified Robustness for Deep Neural Networks
Paper Authors
Paper Abstract
Great advances in deep neural networks (DNNs) have led to state-of-the-art performance on a wide range of tasks. However, recent studies have shown that DNNs are vulnerable to adversarial attacks, which has raised great concern when deploying these models to safety-critical applications such as autonomous driving. Different defense approaches have been proposed against adversarial attacks, including: a) empirical defenses, which can usually be adaptively attacked again without providing robustness certification; and b) certifiably robust approaches, which consist of robustness verification, providing a lower bound of robust accuracy against any attacks under certain conditions, and corresponding robust training approaches. In this paper, we systematize certifiably robust approaches along with their related practical and theoretical implications and findings. We also provide the first comprehensive benchmark of existing robustness verification and training approaches on different datasets. In particular, we 1) provide a taxonomy for the robustness verification and training approaches, and summarize the methodologies of representative algorithms; 2) reveal the characteristics, strengths, limitations, and fundamental connections among these approaches; 3) discuss current research progress, theoretical barriers, main challenges, and future directions for certifiably robust approaches for DNNs; and 4) provide an open-source unified platform to evaluate 20+ representative certifiably robust approaches.