Paper Title

Tired of Over-smoothing? Stress Graph Drawing Is All You Need!

Authors

Xue Li, Yuanzhi Cheng

Abstract

In designing and applying graph neural networks, we often fall into optimization pitfalls, the most deceptive of which is the belief that we can only build a deep model by solving over-smoothing. The fundamental reason is that we do not understand how graph neural networks work. Stress graph drawing offers a unique viewpoint on message iteration in graphs: for example, the root of the over-smoothing problem lies in the inability of graph models to maintain an ideal distance between nodes. We further elucidate the trigger conditions of over-smoothing and propose Stress Graph Neural Networks. By introducing attractive and repulsive message passing from stress iteration, we show how to build a deep model without preventing over-smoothing, how to use repulsive information, and how to optimize the current message-passing scheme to approximate full stress message propagation. Through different tasks on 23 datasets, we verify the effectiveness of our attractive and repulsive models and the derived relationship between stress iteration and graph neural networks. We believe stress graph drawing will be a popular resource for understanding and designing graph neural networks.
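To make the "attractive and repulsive" intuition concrete, below is a minimal sketch of one classical stress-drawing update, not the paper's actual model: each node receives an attractive pull from neighbors that are farther than their ideal distance and a repulsive push from nodes that are too close. The function name `stress_step`, the learning rate, and the weighting `w_ij = 1/d_ij^2` are illustrative assumptions taken from standard stress majorization, not from the paper.

```python
# Hedged sketch of one gradient step on the stress energy
#   sum_{i<j} w_ij * (||x_i - x_j|| - d_ij)^2,  with w_ij = 1 / d_ij^2,
# where d_ij is an ideal pairwise distance (e.g., shortest-path length).
# This illustrates generic stress iteration, not the paper's Stress GNN.
import numpy as np

def stress_step(X, D, lr=0.1, eps=1e-9):
    """Take one gradient-descent step on the stress energy.

    X: (n, k) array of node positions/embeddings.
    D: (n, n) array of ideal pairwise distances (zero diagonal).
    """
    n = X.shape[0]
    G = np.zeros_like(X)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = X[i] - X[j]
            dist = np.linalg.norm(diff) + eps
            w = 1.0 / (D[i, j] ** 2)
            # dist > d_ij  -> attractive pull toward j;
            # dist < d_ij  -> repulsive push away from j.
            G[i] += 2.0 * w * (dist - D[i, j]) * diff / dist
    return X - lr * G
```

The sign of `(dist - d_ij)` is what separates the two message types: connected, distant pairs contribute attraction, while pairs squeezed below their ideal distance contribute repulsion, which is exactly the mechanism the abstract credits with keeping nodes from collapsing together (over-smoothing).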
