Paper Title

Look Closer to Your Enemy: Learning to Attack via Teacher-Student Mimicking

Paper Authors

Mingjie Wang, Jianxiong Guo, Sirui Li, Dingwen Xiao, Zhiqing Tang

Paper Abstract

Deep neural networks have significantly advanced person re-identification (ReID) applications in the realm of the industrial internet, yet they remain vulnerable. Thus, it is crucial to study the robustness of ReID systems, as there are risks of adversaries using these vulnerabilities to compromise industrial surveillance systems. Current adversarial methods focus on generating attack samples using misclassification feedback from victim models (VMs), neglecting the VM's cognitive processes. We seek to address this by producing authentic ReID attack instances through VM cognition decryption. This approach boasts advantages such as better transferability to open-set ReID tests, easier VM misdirection, and enhanced creation of realistic and undetectable attack images. However, the task of deciphering the cognitive mechanism of the VM is widely considered to be a formidable challenge. In this paper, we propose a novel inconspicuous and controllable ReID attack baseline, LCYE (Look Closer to Your Enemy), to generate adversarial query images. Specifically, LCYE first distills the VM's knowledge via a teacher-student memory-mimicking proxy task. This knowledge prior serves as an unambiguous cryptographic token, encapsulating elements deemed indispensable and plausible by the VM, with the intent of facilitating precise adversarial misdirection. Further, benefiting from the multiple opposing task framework of LCYE, we investigate the interpretability and generalization of ReID models from the viewpoint of the adversarial attack, including cross-domain adaptation, cross-model consensus, and the online learning process. Extensive experiments on four ReID benchmarks show that our method outperforms other state-of-the-art attackers by a large margin in white-box, black-box, and targeted attacks. The source code can be found at https://github.com/MingjieWang0606/LCYE-attack_reid.
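The abstract's central mechanism is a proxy distillation step: a small student memory is trained to reproduce the frozen victim model's embeddings, so that what the VM "considers plausible" becomes an explicit prior that the attacker can exploit. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, not the authors' released implementation (see the linked repository for that). The names `StudentMemory`, `mimic_step`, `n_prototypes`, and `victim_model`, as well as the assumption that the victim model maps images to (B, 2048) feature vectors, are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class StudentMemory(nn.Module):
    """Learnable prototype memory trained to mimic the victim model's feature space."""

    def __init__(self, n_prototypes: int = 256, feat_dim: int = 2048):
        super().__init__()
        # Prototype slots intended to capture what the (frozen) victim model deems plausible.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, feat_dim))

    def forward(self, victim_feats: torch.Tensor) -> torch.Tensor:
        # Soft-address the memory with victim features, then reconstruct them from prototypes.
        attn = F.softmax(victim_feats @ self.prototypes.t(), dim=-1)  # (B, K)
        return attn @ self.prototypes                                  # (B, D)


def mimic_step(victim_model: nn.Module,
               memory: StudentMemory,
               images: torch.Tensor,
               optimizer: torch.optim.Optimizer) -> float:
    """One teacher-student step: the memory learns to reproduce the frozen VM's embeddings."""
    with torch.no_grad():
        teacher_feats = victim_model(images)         # frozen victim model acts as the teacher
    student_feats = memory(teacher_feats)            # student memory reconstruction
    loss = F.mse_loss(student_feats, teacher_feats)  # mimicking (distillation) objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Once such a memory is trained, an adversarial generator could be optimized to push a query's embedding away from (or, for targeted attacks, toward) the memorized prototypes rather than relying only on the VM's misclassification feedback, which is roughly the role the distilled knowledge prior plays in the abstract's description.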
