Paper Title
TransCAB: Transferable Clean-Annotation Backdoor to Object Detection with Natural Trigger in Real-World
Paper Authors
Paper Abstract
Object detection is the foundation of various critical computer-vision tasks such as segmentation, object tracking, and event detection. Training an object detector to satisfactory accuracy requires a large amount of data. However, due to the intensive labor involved in annotating large datasets, such data curation is often outsourced to a third party or relies on volunteers. This work reveals severe vulnerabilities in such data curation pipelines. We propose MACAB, which crafts clean-annotated images to stealthily implant a backdoor into the object detectors trained on them, even when the data curator can manually audit the images. We observe that both the misclassification and the cloaking backdoor effects are robustly achieved in the wild when the backdoor is activated by inconspicuous, natural physical triggers. Backdooring non-classification object detection with clean annotations is more challenging than backdooring image classification with clean labels, owing to the complexity of having multiple objects, both victim and non-victim, within each frame. The efficacy of MACAB is ensured by constructively (i) abusing the image-scaling function used by the deep learning framework, (ii) incorporating the proposed adversarial clean image replica technique, and (iii) applying poison-data selection criteria under a constrained attack budget. Extensive experiments demonstrate that MACAB achieves an attack success rate above 90% in various real-world scenes, covering both the cloaking and the misclassification backdoor effects, even when restricted to a small attack budget. The poisoned samples cannot be effectively identified by state-of-the-art detection techniques. A comprehensive video demo is available at https://youtu.be/MA7L_LpXkp4, based on a poison rate of 0.14% for the YOLOv4 cloaking backdoor and the Faster R-CNN misclassification backdoor.
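
Point (i) of the abstract refers to abuse of the framework's image-scaling function: under nearest-neighbour downscaling, only a sparse grid of source pixels survives into the resized image, so a payload written onto exactly those pixels is nearly invisible at full resolution yet dominates the image the model actually trains on. The sketch below is a minimal illustration of that mechanism only; the function name embed_payload, the 4160/416 sizes, the random cover image, and the use of OpenCV's INTER_NEAREST (which samples source index floor(dst_idx * src_size / dst_size)) are illustrative assumptions, not the paper's implementation.

# Minimal sketch of image-scaling abuse (illustrative, not MACAB's code).
import numpy as np
import cv2

def embed_payload(cover: np.ndarray, payload: np.ndarray, out_size: int = 416) -> np.ndarray:
    """Write `payload` (out_size x out_size x 3) onto the source pixels of `cover`
    that nearest-neighbour resizing to (out_size, out_size) will sample."""
    h, w = cover.shape[:2]
    poisoned = cover.copy()
    # Source indices kept by OpenCV's INTER_NEAREST: floor(dst_idx * src / dst).
    ys = np.arange(out_size) * h // out_size
    xs = np.arange(out_size) * w // out_size
    poisoned[np.ix_(ys, xs)] = payload
    return poisoned

# A large, benign-looking cover image conceals a small trigger image that
# re-emerges only after the training pipeline downscales it.
cover = np.random.randint(0, 256, (4160, 4160, 3), dtype=np.uint8)
payload = np.zeros((416, 416, 3), dtype=np.uint8)  # stand-in for trigger content
poisoned = embed_payload(cover, payload)
scaled = cv2.resize(poisoned, (416, 416), interpolation=cv2.INTER_NEAREST)
assert np.array_equal(scaled, payload)  # the payload survives scaling intact

In the attack described by the abstract, the overwritten pixels would additionally be optimized to remain visually inconspicuous to a human auditor (the adversarial clean image replica of point (ii)); the hard overwrite here is only to make the scaling behaviour explicit.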