Paper Title
Relation-Guided Representation Learning
Paper Authors
Paper Abstract
Deep auto-encoders (DAEs) have achieved great success in learning data representations thanks to the powerful representational capacity of neural networks. However, most DAEs focus only on the dominant structures that suffice to reconstruct the data from a latent space and neglect rich latent structural information. In this work, we propose a new representation learning method that explicitly models and leverages sample relations, which in turn are used as supervision to guide the representation learning. Unlike previous work, our framework preserves the relations between samples well. Since predicting pairwise relations is itself a fundamental problem, our model learns them adaptively from the data. This provides great flexibility for encoding the real data manifold. The important roles of relation and representation learning are evaluated on the clustering task. Extensive experiments on benchmark data sets demonstrate the superiority of our approach. By seeking to embed samples into a subspace, we further show that our method can address the large-scale and out-of-sample problems.
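To make the idea concrete, the following is a minimal sketch, not the authors' implementation: an auto-encoder whose latent codes are additionally supervised by pairwise relations, so that reconstruction and relation preservation are optimized jointly. The layer sizes, the cosine-similarity relation head, and the input-space relation target are all illustrative assumptions; the paper's actual relation-learning module may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F


class RelationGuidedAE(nn.Module):
    """Plain auto-encoder; relation supervision is applied to its latent codes."""

    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        x_hat = self.decoder(z)
        return z, x_hat


def relation_matrix(z):
    # Pairwise relations predicted from representations (cosine similarity here,
    # chosen only for illustration).
    z_norm = F.normalize(z, dim=1)
    return z_norm @ z_norm.t()


def loss_fn(x, x_hat, z, target_relations, alpha=1.0):
    # Reconstruction term plus a relation-preservation term weighted by alpha.
    recon = F.mse_loss(x_hat, x)
    rel = F.mse_loss(relation_matrix(z), target_relations)
    return recon + alpha * rel


# Toy usage: relations computed on the inputs stand in for the relations the
# model would adaptively learn from data.
x = torch.randn(16, 784)
model = RelationGuidedAE()
z, x_hat = model(x)
with torch.no_grad():
    target = relation_matrix(x)
loss = loss_fn(x, x_hat, z, target)
loss.backward()

In this toy setup the relation target is fixed; the method described above instead learns the pairwise relations from data, which is what gives the flexibility to encode the real data manifold.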