Paper Title


Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations

Paper Authors

Alex Wong, Mukund Mundhra, Stefano Soatto

Paper Abstract


We study the effect of adversarial perturbations of images on the disparity estimates of deep learning models trained for stereo. We show that imperceptible additive perturbations can significantly alter the disparity map, and correspondingly the perceived geometry of the scene. These perturbations not only affect the specific model they are crafted for, but transfer to models with different architectures, trained with different loss functions. We show that, when used for adversarial data augmentation, our perturbations result in trained models that are more robust, without sacrificing the overall accuracy of the model. This is unlike what has been observed in image classification, where adding the perturbed images to the training set makes the model less vulnerable to adversarial perturbations, but to the detriment of overall accuracy. We test our method using the most recent stereo networks and evaluate their performance on public benchmark datasets.
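The core mechanism described above (an imperceptible additive perturbation that degrades a disparity estimate) can be sketched with a gradient-sign step, in the spirit of FGSM-style attacks. This is a minimal illustrative sketch, not the paper's actual method: the `toy_disparity_model`, its weights, and the squared-error loss are hypothetical stand-ins for a trained stereo network and its loss.

```python
import numpy as np

def toy_disparity_model(left, right, w):
    # Hypothetical stand-in for a stereo network: maps a left/right
    # image pair to a scalar "disparity" prediction.
    return np.sum(w * (left - right))

def loss_and_grad_wrt_left(left, right, w, target):
    # Squared error against a ground-truth disparity, with the analytic
    # gradient of the loss with respect to the left image.
    pred = toy_disparity_model(left, right, w)
    err = pred - target
    loss = err ** 2
    grad_left = 2.0 * err * w  # d(loss)/d(left), elementwise
    return loss, grad_left

def fgsm_perturb(image, grad, eps):
    # Gradient-sign step: an additive perturbation of magnitude eps per
    # pixel, small enough to be visually imperceptible for small eps.
    return image + eps * np.sign(grad)

rng = np.random.default_rng(0)
left = rng.random((4, 4))
right = rng.random((4, 4))
w = rng.random((4, 4))   # fixed (already trained) toy weights
target = 0.0

loss_clean, grad = loss_and_grad_wrt_left(left, right, w, target)
left_adv = fgsm_perturb(left, grad, eps=0.02)
loss_adv, _ = loss_and_grad_wrt_left(left_adv, right, w, target)
print(loss_adv > loss_clean)  # the tiny perturbation increases the loss
```

For adversarial data augmentation as described in the abstract, pairs like `(left_adv, right)` would be added back into the training set alongside the clean pairs.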
