Paper Title
Single-Layer Graph Convolutional Networks For Recommendation
Paper Authors
Paper Abstract
Graph Convolutional Networks (GCNs) and their variants have received significant attention and achieved state-of-the-art performance on various recommendation tasks. However, many existing GCN models tend to perform recursive aggregation among all related nodes, which incurs a severe computational burden. Moreover, they favor multi-layer architectures in conjunction with complicated modeling techniques. Though effective, the excessive number of model parameters largely hinders their application in real-world recommender systems. To this end, in this paper, we propose a single-layer GCN model that achieves superior performance along with remarkably lower complexity compared with existing models. Our main contributions are threefold. First, we propose a principled similarity metric named distribution-aware similarity (DA similarity), which can guide the neighbor sampling process and explicitly evaluate the quality of the input graph. We also prove, through both theoretical analysis and empirical simulations, that DA similarity is positively correlated with the final performance. Second, we propose a simplified GCN architecture that employs a single GCN layer to aggregate information from the neighbors filtered by DA similarity and then generates the node representations. Moreover, the aggregation step is a parameter-free operation, so it can be done in a pre-processing manner to further reduce the training and inference costs. Third, we conduct extensive experiments on four datasets. The results verify that the proposed model considerably outperforms existing GCN models in terms of recommendation performance, and yields up to a few orders of magnitude of speedup in training.
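The abstract's core idea — filter neighbors by a similarity metric, precompute a parameter-free aggregation offline, then apply a single learnable GCN layer — can be sketched as follows. The paper's actual DA-similarity definition is not given in this excerpt, so `cosine_sim` below is a hypothetical stand-in for it, and the threshold and network sizes are illustrative assumptions.

```python
import numpy as np

def cosine_sim(a, b):
    # Hypothetical stand-in for the paper's DA similarity.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def filter_neighbors(features, adj, threshold=0.0):
    # Keep only edges whose endpoint similarity exceeds the
    # threshold (the neighbor-filtering step guided by similarity).
    mask = np.zeros_like(adj)
    for i in range(adj.shape[0]):
        for j in np.nonzero(adj[i])[0]:
            if cosine_sim(features[i], features[j]) > threshold:
                mask[i, j] = 1.0
    return mask

def precompute_aggregation(features, adj):
    # Parameter-free mean aggregation over the filtered neighbors;
    # no gradients are needed, so this runs once as pre-processing.
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0
    return adj @ features / deg

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))                  # node features
A = (rng.random((5, 5)) > 0.5).astype(float)  # toy adjacency
np.fill_diagonal(A, 0.0)

A_f = filter_neighbors(X, A, threshold=0.0)
H = precompute_aggregation(X, A_f)           # done offline
W = rng.normal(size=(4, 3))                  # the only learnable weights
Z = np.tanh(H @ W)                           # node representations
print(Z.shape)                               # (5, 3)
```

Because `H` is fixed, training reduces to fitting the single weight matrix `W` on precomputed inputs, which is where the claimed speedup over recursive multi-layer aggregation would come from.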