Paper Title

Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation Graph Distillation

Authors

Jun Xia, Ting Wang, Jiepin Ding, Xian Wei, Mingsong Chen

Abstract

Due to the prosperity of Artificial Intelligence (AI) techniques, more and more backdoors are designed by adversaries to attack Deep Neural Networks (DNNs). Although the state-of-the-art method Neural Attention Distillation (NAD) can effectively erase backdoor triggers from DNNs, it still suffers from a non-negligible Attack Success Rate (ASR) together with lowered classification ACCuracy (ACC), since NAD focuses on backdoor defense using attention features (i.e., attention maps) of the same order. In this paper, we introduce a novel backdoor defense framework named Attention Relation Graph Distillation (ARGD), which fully explores the correlation among attention features of different orders using our proposed Attention Relation Graphs (ARGs). Based on the alignment of ARGs between the teacher and student models during knowledge distillation, ARGD can eradicate more backdoor triggers than NAD. Comprehensive experimental results show that, against the six latest backdoor attacks, ARGD outperforms NAD by up to a 94.85% reduction in ASR, while ACC can be improved by up to 3.23%.
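
The abstract describes the mechanism only at a high level. The following is a minimal, illustrative PyTorch sketch of what aligning attention relation graphs between a teacher and a student during distillation could look like; the function names (attention_map, relation_graph, arg_distillation_loss), the cosine-similarity edges, the pooling to a common resolution, and the loss weighting are all assumptions made for this sketch, not the paper's actual formulation.

# Hedged sketch of ARG-style distillation (illustrative only, not the authors' exact method).
import torch
import torch.nn.functional as F

def attention_map(feature, p=2):
    # Collapse a feature map (B, C, H, W) into a spatial attention map (B, H*W)
    # by summing the p-th power of absolute activations over channels (as in NAD).
    att = feature.abs().pow(p).sum(dim=1)   # (B, H, W)
    att = att.flatten(1)                     # (B, H*W)
    return F.normalize(att, dim=1)           # L2-normalize per sample

def relation_graph(attention_maps):
    # Build a fully connected relation graph over layer-wise attention maps:
    # node i is the attention map of layer i; edge (i, j) is the batch-averaged
    # cosine similarity between the (resized) attention maps of layers i and j.
    # Resizing all maps to a common length is an assumption of this sketch.
    target_len = min(a.shape[1] for a in attention_maps)
    resized = [F.adaptive_avg_pool1d(a.unsqueeze(1), target_len).squeeze(1)
               for a in attention_maps]
    n = len(resized)
    edges = torch.zeros(n, n, device=resized[0].device)
    for i in range(n):
        for j in range(n):
            edges[i, j] = F.cosine_similarity(resized[i], resized[j], dim=1).mean()
    return edges

def arg_distillation_loss(student_feats, teacher_feats, beta=1.0):
    # Align node statistics (per-layer attention maps) and edge statistics
    # (cross-layer relations) between the student and the teacher.
    s_atts = [attention_map(f) for f in student_feats]
    t_atts = [attention_map(f) for f in teacher_feats]
    node_loss = sum(F.mse_loss(s, t) for s, t in zip(s_atts, t_atts))
    edge_loss = F.mse_loss(relation_graph(s_atts), relation_graph(t_atts))
    return node_loss + beta * edge_loss

if __name__ == "__main__":
    # Toy check with random intermediate features from three layers.
    student = [torch.randn(4, 16, 32, 32), torch.randn(4, 32, 16, 16), torch.randn(4, 64, 8, 8)]
    teacher = [torch.randn(4, 16, 32, 32), torch.randn(4, 32, 16, 16), torch.randn(4, 64, 8, 8)]
    print(arg_distillation_loss(student, teacher))

In a defense setting, such a loss term would typically be added to the student's fine-tuning objective on clean data, with the teacher being a model fine-tuned on the same clean subset; the exact loss composition and weighting used by ARGD are given in the paper itself.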
