Paper Title

Efficiently Finding Adversarial Examples with DNN Preprocessing

Authors

Avriti Chauhan, Mohammad Afzal, Hrishikesh Karmarkar, Yizhak Elboher, Kumar Madhukar, Guy Katz

Abstract

Deep Neural Networks (DNNs) are everywhere, frequently performing fairly complex tasks that were once unimaginable for machines to carry out. In doing so, they make many decisions which, depending on the application, may be disastrous if they go wrong. This necessitates a formal argument that the underlying neural networks satisfy certain desirable properties. Robustness is one such key property for DNNs, particularly when they are deployed in safety- or business-critical applications. Informally speaking, a DNN is not robust if very small changes to its input may affect the output in a considerable way (e.g., changing the classification for that input). The task of finding an adversarial example is to demonstrate this lack of robustness, whenever applicable. While this is doable with the help of constrained optimization techniques, scalability becomes a challenge for large networks. This paper proposes using information gathered by preprocessing the DNN to heavily simplify the optimization problem. Our experiments substantiate that this is effective and performs significantly better than the state of the art.
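To make the task described in the abstract concrete, the following is a minimal sketch of an adversarial-example search on a toy two-class linear "network": starting from a correctly classified input, we nudge it along the sign of the gradient of the score gap until the predicted label flips. This is only an illustration of the problem setting (the network, inputs, and step sizes are made up), not the preprocessing-based method proposed in the paper.

```python
# Toy two-class linear "network": score_i = W[i] . x + b[i], label = argmax_i.
# Everything here is a hypothetical stand-in chosen for illustration.
W = [[1.0, -1.0], [-1.0, 1.0]]
b = [0.1, -0.1]

def classify(x):
    scores = [sum(w * xi for w, xi in zip(row, x)) + bi
              for row, bi in zip(W, b)]
    return scores.index(max(scores))

def sign(v):
    return (v > 0) - (v < 0)

def find_adversarial(x, eps=0.01, max_steps=1000):
    """Search for a nearby input with a different label by stepping along
    the sign of the gradient of (rival score - original score)."""
    orig = classify(x)
    rival = 1 - orig
    # For a linear model this gradient is constant: W[rival] - W[orig].
    grad = [wr - wo for wr, wo in zip(W[rival], W[orig])]
    x_adv = list(x)
    for _ in range(max_steps):
        x_adv = [xi + eps * sign(g) for xi, g in zip(x_adv, grad)]
        if classify(x_adv) != orig:
            return x_adv  # label flipped: adversarial example found
    return None  # no flip within the step budget

x0 = [0.5, 0.2]
adv = find_adversarial(x0)
```

For a real DNN the score gap is non-linear in the input, which is why the search becomes a constrained optimization problem and why, as the abstract notes, scalability suffers on large networks.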
