Title
Embedding Graph Auto-Encoder for Graph Clustering
Authors
Abstract
Graph clustering, which aims to partition the nodes of a graph into groups in an unsupervised manner, has been an attractive topic in recent years. To improve representative ability, several graph auto-encoder (GAE) models based on semi-supervised graph convolutional networks (GCNs) have been developed, and they achieve good results compared with traditional clustering methods. However, all existing methods either fail to utilize the orthogonal property of the representations generated by the GAE, or separate the clustering from the learning of the neural network. We first prove that relaxed k-means obtains an optimal partition in the inner-product space. Driven by this theoretical analysis of relaxed k-means, we design a specific GAE-based model for graph clustering that is consistent with the theory, namely the Embedding Graph Auto-Encoder (EGAE). Meanwhile, the learned representations are well explainable, so they can also be used for other tasks. To further induce the neural network to produce deep features that suit the specific clustering model, the relaxed k-means and the GAE are learned simultaneously. The relaxed k-means can thus be regarded as a decoder that learns representations that can be linearly constructed from a set of centroid vectors. Accordingly, EGAE consists of one encoder and dual decoders. Extensive experiments demonstrate the superiority of EGAE and corroborate the corresponding theoretical analyses.
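As a rough illustration of the relaxed k-means referred to in the abstract (a sketch of the common spectral relaxation, not the authors' exact formulation; the function name and interface below are hypothetical), the binary cluster-indicator matrix is relaxed to a column-orthonormal matrix, after which the objective has a closed-form SVD solution:

```python
import numpy as np

def relaxed_kmeans(Z, k):
    """Spectral relaxation of k-means on an embedding matrix Z (n x d).

    The binary cluster-indicator matrix is relaxed to a column-orthonormal
    matrix G (G^T G = I), so that min_{G, C} ||Z - G C||_F^2 is solved in
    closed form by the top-k left singular vectors of Z.
    """
    U, _, _ = np.linalg.svd(Z, full_matrices=False)
    G = U[:, :k]        # relaxed (orthonormal) cluster indicator
    C = G.T @ Z         # centroid matrix minimizing the objective
    return G, C
```

Discrete cluster assignments can then be recovered, for example, by running ordinary k-means on the rows of G.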