Title

NVDiff: Graph Generation through the Diffusion of Node Vectors

Authors

Xiaohui Chen, Yukun Li, Aonan Zhang, Li-Ping Liu

Abstract

Learning to generate graphs is challenging as a graph is a set of pairwise connected, unordered nodes encoding complex combinatorial structures. Recently, several works have proposed graph generative models based on normalizing flows or score-based diffusion models. However, these models need to generate nodes and edges in parallel from the same process, whose dimensionality is unnecessarily high. We propose NVDiff, which takes the VGAE structure and uses a score-based generative model (SGM) as a flexible prior to sample node vectors. By modeling only node vectors in the latent space, NVDiff significantly reduces the dimension of the diffusion process and thus improves sampling speed. Built on the NVDiff framework, we introduce an attention-based score network capable of capturing both local and global contexts of graphs. Experiments indicate that NVDiff significantly reduces computation and can model much larger graphs than competing methods. At the same time, it achieves superior or competitive performance on various datasets compared to previous methods.
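The pipeline the abstract describes — sampling node vectors with a reverse diffusion process in a low-dimensional latent space, then decoding edges with a VGAE-style inner-product decoder — can be sketched as follows. This is a toy illustration under stated assumptions, not the paper's implementation: the placeholder score function, step count, and noise scale are invented here, standing in for NVDiff's learned attention-based score network.

```python
import numpy as np

def sample_node_vectors(n_nodes, dim, n_steps=50, score_fn=None, rng=None):
    # Toy reverse-diffusion loop over latent node vectors.
    # In NVDiff the score would come from a trained attention-based
    # score network; here a placeholder score for a standard Gaussian
    # prior (score of N(0, I) is -z) is used as an assumption.
    rng = rng or np.random.default_rng(0)
    z = rng.standard_normal((n_nodes, dim))
    score = score_fn or (lambda z, t: -z)
    dt = 1.0 / n_steps
    for step in range(n_steps):
        t = 1.0 - step * dt
        # Euler step of a reverse SDE: drift toward high density plus noise.
        z = z + score(z, t) * dt + np.sqrt(dt) * 0.1 * rng.standard_normal(z.shape)
    return z

def decode_edges(z, threshold=0.5):
    # VGAE-style inner-product decoder: P(edge i-j) = sigmoid(z_i . z_j).
    logits = z @ z.T
    probs = 1.0 / (1.0 + np.exp(-logits))
    np.fill_diagonal(probs, 0.0)  # no self-loops
    return (probs > threshold).astype(int)

z = sample_node_vectors(n_nodes=8, dim=4)
adj = decode_edges(z)
```

The key point the abstract makes is visible in the shapes: the diffusion state is `n_nodes × dim` rather than the `n_nodes × n_nodes` adjacency (plus node features) that joint node-and-edge diffusion models must carry, which is the source of the claimed reduction in computation.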
