Paper Title

Towards Certified Robustness of Distance Metric Learning

Authors

Xiaochen Yang, Yiwen Guo, Mingzhi Dong, Jing-Hao Xue

Abstract

Metric learning aims to learn a distance metric such that semantically similar instances are pulled together while dissimilar instances are pushed away. Many existing methods consider maximizing or at least constraining a distance margin in the feature space that separates similar and dissimilar pairs of instances to guarantee their generalization ability. In this paper, we advocate imposing an adversarial margin in the input space so as to improve the generalization and robustness of metric learning algorithms. We first show that, the adversarial margin, defined as the distance between training instances and their closest adversarial examples in the input space, takes account of both the distance margin in the feature space and the correlation between the metric and triplet constraints. Next, to enhance robustness to instance perturbation, we propose to enlarge the adversarial margin through minimizing a derived novel loss function termed the perturbation loss. The proposed loss can be viewed as a data-dependent regularizer and easily plugged into any existing metric learning methods. Finally, we show that the enlarged margin is beneficial to the generalization ability by using the theoretical technique of algorithmic robustness. Experimental results on 16 datasets demonstrate the superiority of the proposed method over existing state-of-the-art methods in both discrimination accuracy and robustness against possible noise.
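To make the abstract's idea concrete, below is a minimal, hypothetical sketch (not the authors' actual formulation) of triplet-based metric learning with a Mahalanobis distance, where a crude estimate of the input-space adversarial margin — here approximated by a random-direction search rather than the paper's derivation — feeds a perturbation-style penalty that rewards larger margins. All function names, the `eps_grid` search, and the `lam / margin` surrogate are illustrative assumptions.

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y)."""
    d = x - y
    return float(d @ M @ d)

def triplet_loss(anchor, pos, neg, M, margin=1.0):
    """Standard triplet hinge loss in the learned feature space."""
    return max(0.0, mahalanobis_sq(anchor, pos, M)
                    - mahalanobis_sq(anchor, neg, M) + margin)

def adversarial_margin(anchor, pos, neg, M, eps_grid):
    """Crude estimate of the input-space adversarial margin: the smallest
    perturbation radius (searched over random directions) at which the
    triplet constraint flips. Illustrative stand-in for the paper's
    closed-form analysis."""
    rng = np.random.default_rng(0)
    for eps in eps_grid:
        for _ in range(50):
            delta = rng.normal(size=anchor.shape)
            delta = eps * delta / np.linalg.norm(delta)
            x_adv = anchor + delta
            if (mahalanobis_sq(x_adv, pos, M)
                    >= mahalanobis_sq(x_adv, neg, M)):
                return float(eps)  # constraint violated at this radius
    return float(eps_grid[-1])     # no violation found within the grid

def regularized_objective(anchor, pos, neg, M, lam=0.1):
    """Triplet loss plus a perturbation-style penalty that shrinks as the
    adversarial margin grows (a hypothetical data-dependent regularizer)."""
    margin = adversarial_margin(anchor, pos, neg, M,
                                eps_grid=np.linspace(0.05, 1.0, 20))
    return triplet_loss(anchor, pos, neg, M) + lam / (margin + 1e-8)
```

The design mirrors the abstract's claim: the regularizer is data-dependent (it depends on each triplet's distance to its nearest constraint violation in input space) and can be added on top of any existing triplet-style metric learning loss without modifying it.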
