Paper Title

Adversarial Directed Graph Embedding

Authors

Shijie Zhu, Jianxin Li, Hao Peng, Senzhang Wang, Lifang He

Abstract

Node representation learning for directed graphs is critically important to facilitate many graph mining tasks. To capture the directed edges between nodes, existing methods mostly learn two embedding vectors for each node, source vector and target vector. However, these methods learn the source and target vectors separately. For the node with very low indegree or outdegree, the corresponding target vector or source vector cannot be effectively learned. In this paper, we propose a novel Directed Graph embedding framework based on Generative Adversarial Network, called DGGAN. The main idea is to use adversarial mechanisms to deploy a discriminator and two generators that jointly learn each node's source and target vectors. For a given node, the two generators are trained to generate its fake target and source neighbor nodes from the same underlying distribution, and the discriminator aims to distinguish whether a neighbor node is real or fake. The two generators are formulated into a unified framework and could mutually reinforce each other to learn more robust source and target vectors. Extensive experiments show that DGGAN consistently and significantly outperforms existing state-of-the-art methods across multiple graph mining tasks on directed graphs.
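Below is a minimal, self-contained sketch in PyTorch of the adversarial setup the abstract describes: two generators draw noise from the same underlying distribution and produce fake target and source neighbors for a node, while a discriminator, which holds the source and target vectors being learned, scores real versus generated (node, neighbor) pairs in each direction. The layer sizes, loss, class names, and toy graph here are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of a DGGAN-style training step; architecture details are assumed.
import torch
import torch.nn as nn

NUM_NODES, EMB_DIM, NOISE_DIM = 100, 16, 16

class Generator(nn.Module):
    """Maps a node id plus shared noise to a fake neighbor embedding."""
    def __init__(self):
        super().__init__()
        self.node_emb = nn.Embedding(NUM_NODES, EMB_DIM)
        self.net = nn.Sequential(
            nn.Linear(EMB_DIM + NOISE_DIM, 64), nn.ReLU(),
            nn.Linear(64, EMB_DIM))

    def forward(self, nodes, noise):
        return self.net(torch.cat([self.node_emb(nodes), noise], dim=-1))

class Discriminator(nn.Module):
    """Holds each node's source/target vectors and scores neighbor pairs."""
    def __init__(self):
        super().__init__()
        self.source = nn.Embedding(NUM_NODES, EMB_DIM)  # node as edge source
        self.target = nn.Embedding(NUM_NODES, EMB_DIM)  # node as edge target

    def score_out(self, u, target_vecs):   # direction u -> neighbor
        return (self.source(u) * target_vecs).sum(-1)

    def score_in(self, u, source_vecs):    # direction neighbor -> u
        return (self.target(u) * source_vecs).sum(-1)

# Two generators (fake target neighbors / fake source neighbors) share the
# same noise distribution; one discriminator judges both directions.
gen_target, gen_source, disc = Generator(), Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(
    list(gen_target.parameters()) + list(gen_source.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)

# Toy directed edges (u -> v); a real dataset would replace these.
edges = torch.randint(0, NUM_NODES, (256, 2))
u, v = edges[:, 0], edges[:, 1]
noise = torch.randn(len(u), NOISE_DIM)

# Discriminator step: real neighbors -> label 1, generated neighbors -> label 0.
real = disc.score_out(u, disc.target(v))
fake = disc.score_out(u, gen_target(u, noise).detach())
d_loss = bce(real, torch.ones_like(real)) + bce(fake, torch.zeros_like(fake))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: both generators try to fool the discriminator, so the
# source and target vectors are trained jointly rather than separately.
g_loss = bce(disc.score_out(u, gen_target(u, noise)), torch.ones(len(u))) + \
         bce(disc.score_in(v, gen_source(v, noise)), torch.ones(len(v)))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```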
