Paper Title

Boosting Contrastive Self-Supervised Learning with False Negative Cancellation

Paper Authors

Tri Huynh, Simon Kornblith, Matthew R. Walter, Michael Maire, Maryam Khademi

Paper Abstract

Self-supervised representation learning has made significant leaps fueled by progress in contrastive learning, which seeks to learn transformations that embed positive input pairs nearby, while pushing negative pairs far apart. While positive pairs can be generated reliably (e.g., as different views of the same image), it is difficult to accurately establish negative pairs, defined as samples from different images regardless of their semantic content or visual features. A fundamental problem in contrastive learning is mitigating the effects of false negatives. Contrasting false negatives induces two critical issues in representation learning: discarding semantic information and slow convergence. In this paper, we propose novel approaches to identify false negatives, as well as two strategies to mitigate their effect, i.e. false negative elimination and attraction, while systematically performing rigorous evaluations to study this problem in detail. Our method exhibits consistent improvements over existing contrastive learning-based methods. Without labels, we identify false negatives with 40% accuracy among 1000 semantic classes on ImageNet, and achieve 5.8% absolute improvement in top-1 accuracy over the previous state-of-the-art when finetuning with 1% labels. Our code is available at https://github.com/google-research/fnc.
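The abstract describes contrasting positive against negative pairs and mitigating false negatives via two strategies, elimination and attraction. Below is a minimal NumPy sketch of the elimination idea: candidate negatives whose similarity to the anchor's support views exceeds a threshold are treated as likely false negatives and dropped from the InfoNCE denominator. The function names, the threshold value, and the max over support-view similarities are illustrative assumptions, not the released implementation at https://github.com/google-research/fnc; the attraction variant would instead treat flagged samples as additional positives rather than discarding them.

```python
import numpy as np

def cosine_sim(a, b):
    """Pairwise cosine similarity between rows of a and rows of b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def loss_with_fn_elimination(anchor, positive, negatives, support,
                             temperature=0.1, threshold=0.7):
    """InfoNCE-style loss for one anchor, dropping suspected false negatives.

    A negative is flagged as a false negative when its best similarity to any
    of the anchor's support views exceeds `threshold` (an illustrative value,
    not the paper's chosen hyperparameter).
    """
    pos_sim = cosine_sim(anchor[None, :], positive[None, :])[0, 0]
    neg_sims = cosine_sim(anchor[None, :], negatives)[0]        # (N,)
    # Score each candidate negative against the anchor's support views.
    support_sims = cosine_sim(negatives, support).max(axis=1)   # (N,)
    keep = support_sims < threshold      # elimination: drop likely false negatives
    logits = np.concatenate(([pos_sim], neg_sims[keep])) / temperature
    return -(logits[0] - np.log(np.exp(logits).sum()))

# Toy usage with random embeddings: one anchor, 16 negatives, 2 support views.
rng = np.random.default_rng(0)
dim = 128
anchor    = rng.normal(size=dim)
positive  = anchor + 0.05 * rng.normal(size=dim)
negatives = rng.normal(size=(16, dim))
support   = rng.normal(size=(2, dim))  # extra augmented views of the anchor image
print(loss_with_fn_elimination(anchor, positive, negatives, support))
```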
