Paper Title
Information Obfuscation of Graph Neural Networks
Paper Authors
Abstract
While the advent of Graph Neural Networks (GNNs) has greatly improved node and graph representation learning in many applications, the neighborhood aggregation scheme exposes additional vulnerabilities to adversaries seeking to extract node-level information about sensitive attributes. In this paper, we study the problem of protecting sensitive attributes by information obfuscation when learning with graph-structured data. We propose a framework that locally filters out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance. Our method creates a strong defense against inference attacks, while only suffering a small loss in task performance. Theoretically, we analyze the effectiveness of our framework against a worst-case adversary, and characterize an inherent trade-off between maximizing predictive accuracy and minimizing information leakage. Experiments across multiple datasets from recommender systems, knowledge graphs, and quantum chemistry demonstrate that the proposed approach provides a robust defense across various graph structures and tasks, while producing competitive GNN encoders for downstream tasks.
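The min-max structure described in the abstract, in which an encoder is trained to retain task-relevant information while an adversary simultaneously tries to recover the sensitive attribute from the embeddings, can be illustrated with a minimal NumPy sketch. Everything here (linear encoder, logistic adversary and task heads, toy synthetic data) is an illustrative assumption, not the paper's actual architecture: the paper's method uses GNN neighborhood aggregation and measures leakage via total variation and Wasserstein distance, both omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 nodes, 8 features. The sensitive bit s leaks through
# feature 0; the task label y depends only on features 1-3.
n, d_in, d_z = 200, 8, 4
s = rng.integers(0, 2, size=n).astype(float)   # sensitive attribute
X = rng.normal(size=(n, d_in))
X[:, 0] += 2.0 * s                             # leak s into feature 0
y = (X[:, 1] + X[:, 2] - X[:, 3] > 0).astype(float)  # task label

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-np.clip(t, -30, 30)))

def bce(p, t):
    eps = 1e-9
    return -np.mean(t * np.log(p + eps) + (1.0 - t) * np.log(1.0 - p + eps))

# Linear encoder W, linear task head u, linear adversary head a.
W = rng.normal(scale=0.1, size=(d_in, d_z))
u = np.zeros(d_z)
a = np.zeros(d_z)
lr, lam = 0.05, 1.0   # lam trades task accuracy against information leakage

for _ in range(300):
    Z = X @ W                                  # node embeddings
    # Adversary step: fit a to predict s from the current embeddings.
    p_s = sigmoid(Z @ a)
    a -= lr * Z.T @ (p_s - s) / n
    # Task-head step: fit u to predict y from the embeddings.
    p_y = sigmoid(Z @ u)
    u -= lr * Z.T @ (p_y - y) / n
    # Encoder step: descend the task loss, ASCEND the adversary loss,
    # i.e. the min-max objective  min_W,u max_a  L_task - lam * L_adv.
    p_s = sigmoid(Z @ a)
    grad_task = X.T @ np.outer(p_y - y, u) / n
    grad_adv = X.T @ np.outer(p_s - s, a) / n
    W -= lr * (grad_task - lam * grad_adv)

Z = X @ W
task_loss = bce(sigmoid(Z @ u), y)
adv_loss = bce(sigmoid(Z @ a), s)
task_acc = float(np.mean((sigmoid(Z @ u) > 0.5) == (y > 0.5)))
adv_acc = float(np.mean((sigmoid(Z @ a) > 0.5) == (s > 0.5)))
```

The encoder update subtracts `lam * grad_adv`, pushing the embeddings away from directions the adversary exploits while still descending the task loss; `lam` is the knob behind the accuracy-versus-leakage trade-off the abstract analyzes.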