Paper Title

Towards Adversarial Robustness of Deep Vision Algorithms

Paper Authors

Yan, Hanshu

Paper Abstract

Deep learning methods have achieved great success in solving computer vision tasks, and they have been widely utilized in artificially intelligent systems for image processing, analysis, and understanding. However, deep neural networks have been shown to be vulnerable to adversarial perturbations in input data. The security issues of deep neural networks have thus come to the fore. It is imperative to study the adversarial robustness of deep vision algorithms comprehensively. This talk focuses on the adversarial robustness of image classification models and image denoisers. We will discuss the robustness of deep vision algorithms from three perspectives: 1) robustness evaluation (we propose the ObsAtk to evaluate the robustness of denoisers), 2) robustness improvement (HAT, TisODE, and CIFS are developed to robustify vision models), and 3) the connection between adversarial robustness and generalization capability to new domains (we find that adversarially robust denoisers can deal with unseen types of real-world noise).
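As a concrete illustration of the adversarial perturbations the abstract refers to, below is a minimal sketch of the standard FGSM attack (Goodfellow et al., 2015) against an image classifier. This is a generic textbook example, not the ObsAtk, HAT, TisODE, or CIFS methods presented in the talk; the `model`, `x`, `y`, and `eps` names are placeholders for any differentiable classifier, an input batch, its labels, and a perturbation budget.

```python
# Minimal FGSM sketch: perturb an image within an L-infinity budget `eps`
# so as to increase the classifier's loss. Generic illustration only.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=8 / 255):
    """Return an adversarial example x_adv with ||x_adv - x||_inf <= eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Take one signed-gradient step in the direction that increases the loss.
    x_adv = x + eps * x.grad.sign()
    # Keep pixel values in the valid [0, 1] range.
    return x_adv.clamp(0.0, 1.0).detach()
```

Even a single such step often suffices to flip the prediction of an undefended network, which is why the talk argues that robustness must be evaluated and improved systematically.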
