Paper Title
Deep Self-Supervised Representation Learning for Free-Hand Sketch
Paper Authors
Abstract
In this paper, we tackle, for the first time, the problem of self-supervised representation learning for free-hand sketches. This addresses a common problem faced by the sketch community: annotated supervisory data are difficult to obtain. The problem is challenging in that sketches are highly abstract and subject to diverse drawing styles, making existing solutions tailored for photos unsuitable. The key to the success of our self-supervised learning paradigm lies in our sketch-specific designs: (i) we propose a set of pretext tasks specifically designed for sketches that mimic different drawing styles, and (ii) we further exploit a textual convolution network (TCN) in a dual-branch architecture for sketch feature learning, as a means to accommodate the sequential stroke nature of sketches. We demonstrate the superiority of our sketch-specific designs through two sketch-related applications (retrieval and recognition) on a million-scale sketch dataset, and show that the proposed approach outperforms state-of-the-art unsupervised representation learning methods and significantly narrows the performance gap with supervised representation learning.
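To make the dual-branch idea concrete, the following is a minimal NumPy toy sketch (not the paper's actual architecture): one branch applies a 1-D convolution over the stroke-point sequence (the TCN-style, sequential view), the other applies a 2-D convolution over a rasterized sketch image (the CNN-style, spatial view), and the two pooled features are concatenated. All function names, shapes, and kernels here are illustrative assumptions.

```python
import numpy as np

def tcn_branch(strokes, kernel):
    # strokes: (T, C) sequence of stroke points, e.g. (x, y, pen-state);
    # kernel: (K, C) 1-D convolution filter slid over time.
    T, _ = strokes.shape
    K = kernel.shape[0]
    responses = np.array([np.sum(strokes[t:t + K] * kernel)
                          for t in range(T - K + 1)])
    return responses.max()  # global max pooling -> one scalar feature

def cnn_branch(image, kernel):
    # image: (H, W) rasterized sketch; kernel: (k, k) 2-D filter.
    H, W = image.shape
    k = kernel.shape[0]
    responses = np.array([[np.sum(image[i:i + k, j:j + k] * kernel)
                           for j in range(W - k + 1)]
                          for i in range(H - k + 1)])
    return responses.max()  # global max pooling -> one scalar feature

def dual_branch_feature(strokes, image, seq_kernel, img_kernel):
    # Fuse the sequential and spatial views by concatenation.
    return np.array([tcn_branch(strokes, seq_kernel),
                     cnn_branch(image, img_kernel)])
```

A real model would stack many such filters with nonlinearities and learn the kernels; this toy version only illustrates why a stroke sequence and a raster image call for different convolution shapes before fusion.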