Paper Title

Advancing Deep Metric Learning Through Multiple Batch Norms And Multi-Targeted Adversarial Examples

Paper Authors

Inderjeet Singh, Kazuya Kakizaki, Toshinori Araki

Paper Abstract

Deep Metric Learning (DML) is a prominent field in machine learning, with extensive practical applications, that concentrates on learning visual similarity. It is known that inputs such as Adversarial Examples (AXs), which follow a distribution different from that of clean data, result in false predictions from DML systems. This paper proposes MDProp, a framework that simultaneously improves the performance of DML models on clean data and on inputs following multiple distributions. MDProp utilizes multi-distribution data through an AX generation process while leveraging disentangled learning through multiple batch normalization layers during the training of a DML model. MDProp is the first framework to generate feature-space multi-targeted AXs to perform targeted regularization of the training model's denser embedding space regions, resulting in improved embedding space density that contributes to better generalization in the trained models. Through comprehensive experimental analysis, we show that MDProp achieves up to 2.95% higher clean-data Recall@1 scores and up to 2.12 times greater robustness against different input distributions than conventional methods.
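To make the multiple-batch-norm idea concrete, below is a minimal sketch, assuming a PyTorch-style setup, of how separate normalization statistics can be kept per input distribution while the convolutional weights stay shared. The `MultiBNBlock` name, the layer sizes, and the clean/adversarial index convention are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch: route clean and adversarial inputs through separate
# BatchNorm layers so their differing statistics stay disentangled.
import torch
import torch.nn as nn

class MultiBNBlock(nn.Module):
    """Conv block keeping one BatchNorm per input distribution (hypothetical)."""
    def __init__(self, in_ch: int, out_ch: int, num_dists: int = 2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        # One BN per distribution: index 0 = clean, 1 = adversarial, etc.
        self.bns = nn.ModuleList([nn.BatchNorm2d(out_ch) for _ in range(num_dists)])
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor, dist_idx: int = 0) -> torch.Tensor:
        # Conv weights are shared; only the normalization statistics
        # differ per distribution.
        return self.act(self.bns[dist_idx](self.conv(x)))

if __name__ == "__main__":
    block = MultiBNBlock(3, 16)
    clean = torch.randn(8, 3, 32, 32)
    adv = clean + 0.03 * torch.sign(torch.randn_like(clean))  # stand-in AXs
    out_clean = block(clean, dist_idx=0)  # normalized with clean statistics
    out_adv = block(adv, dist_idx=1)      # normalized with adversarial statistics
    print(out_clean.shape, out_adv.shape)
```

At inference on clean data, only the clean-statistics branch would be used, so the extra BN layers add no test-time cost beyond the unused parameters.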
