Paper Title


Robust Causal Graph Representation Learning against Confounding Effects

Paper Authors

Hang Gao, Jiangmeng Li, Wenwen Qiang, Lingyu Si, Bing Xu, Changwen Zheng, Fuchun Sun

Abstract


The prevailing graph neural network models have achieved significant progress in graph representation learning. However, in this paper, we uncover an ever-overlooked phenomenon: the pre-trained graph representation learning model tested with full graphs underperforms the model tested with well-pruned graphs. This observation reveals that there exist confounders in graphs, which may interfere with the model learning semantic information, and current graph representation learning methods have not eliminated their influence. To tackle this issue, we propose Robust Causal Graph Representation Learning (RCGRL) to learn robust graph representations against confounding effects. RCGRL introduces an active approach to generate instrumental variables under unconditional moment restrictions, which empowers the graph representation learning model to eliminate confounders, thereby capturing discriminative information that is causally related to downstream predictions. We offer theorems and proofs to guarantee the theoretical effectiveness of the proposed approach. Empirically, we conduct extensive experiments on a synthetic dataset and multiple benchmark datasets. The results demonstrate that compared with state-of-the-art methods, RCGRL achieves better prediction performance and generalization ability.
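The abstract's key mechanism is using instrumental variables to remove confounding. The sketch below is not the paper's RCGRL procedure; it is a minimal classical two-stage least squares (2SLS) toy in plain NumPy, with hypothetical variables `u` (unobserved confounder), `z` (instrument), `x` (treatment), and `y` (outcome), showing why a valid instrument recovers the causal effect that naive regression misses.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
beta = 2.0  # true causal effect of x on y

u = rng.normal(size=n)            # unobserved confounder
z = rng.normal(size=n)            # instrument: affects x, but not y directly
x = z + u + rng.normal(size=n)    # treatment, confounded by u
y = beta * x + u + rng.normal(size=n)

# Naive OLS slope is biased: u drives both x and y.
ols = (x @ y) / (x @ x)

# 2SLS: project x onto the instrument z, then regress y on that projection.
# The projection keeps only the variation in x that is independent of u.
x_hat = ((z @ x) / (z @ z)) * z
iv = (x_hat @ y) / (x_hat @ x_hat)
```

Here `iv` lands near the true `beta = 2.0`, while `ols` is pulled upward by the confounder. RCGRL's contribution, per the abstract, is to *generate* such instruments for graph data under unconditional moment restrictions rather than assume one is observed.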
