Paper Title

Visually Adversarial Attacks and Defenses in the Physical World: A Survey

Authors

Xingxing Wei, Bangzheng Pu, Jiefan Lu, Baoyuan Wu

Abstract

Although Deep Neural Networks (DNNs) have been widely applied in various real-world scenarios, they are vulnerable to adversarial examples. Current adversarial attacks in computer vision can be divided into digital attacks and physical attacks according to their attack forms. Compared with digital attacks, which generate perturbations in digital pixels, physical attacks are more practical in the real world. Owing to the serious security problems caused by physically adversarial examples, many works have been proposed in recent years to evaluate the physically adversarial robustness of DNNs. In this paper, we present a survey of current physically adversarial attacks and physically adversarial defenses in computer vision. To establish a taxonomy, we organize the current physical attacks by attack task, attack form, and attack method, so that readers can gain a systematic understanding of this topic from different aspects. For the physical defenses, we establish the taxonomy from pre-processing, in-processing, and post-processing of DNN models to achieve full coverage of adversarial defenses. Based on this survey, we finally discuss the challenges of this research field and offer an outlook on future directions.
