Paper Title
Characterizing the Efficiency of Graph Neural Network Frameworks with a Magnifying Glass
Paper Authors
Paper Abstract
Graph neural networks (GNNs) have received great attention due to their success in various graph-related learning tasks. Several GNN frameworks have since been developed for fast and easy implementation of GNN models. Despite their popularity, they are not well documented, and their implementations and system performance have not been well understood. In particular, unlike traditional GNNs that are trained on the entire graph in a full-batch manner, recent GNNs have been developed with different graph sampling techniques for mini-batch training on large graphs. While they improve scalability, their training times still depend on the implementations in the frameworks, as sampling and its associated operations can introduce non-negligible overhead and computational cost. In addition, it is unknown to what extent these frameworks are 'eco-friendly' from a green computing perspective. In this paper, we provide an in-depth study of two mainstream GNN frameworks along with three state-of-the-art GNNs to analyze their performance in terms of runtime and power/energy consumption. We conduct extensive benchmark experiments at several different levels and present detailed analysis results and observations, which could be helpful for further improvement and optimization.
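The graph sampling the abstract refers to can be illustrated with a minimal, framework-free sketch of one-hop neighbor sampling (the core idea behind GraphSAGE-style mini-batch training); the adjacency list, function name, and fan-out value below are hypothetical illustration choices, not from the paper:

```python
import random

def sample_neighbors(adj, seed_nodes, fanout, rng):
    """For each seed node, keep at most `fanout` randomly chosen
    neighbors, bounding the cost of one mini-batch message-passing
    step regardless of node degree."""
    sampled = {}
    for v in seed_nodes:
        nbrs = adj.get(v, [])
        if len(nbrs) > fanout:
            nbrs = rng.sample(nbrs, fanout)
        sampled[v] = sorted(nbrs)
    return sampled

# Toy graph as an adjacency list (hypothetical example data).
adj = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2], 4: [0]}
rng = random.Random(0)
batch = sample_neighbors(adj, seed_nodes=[0, 2], fanout=2, rng=rng)
print(batch)  # each seed node now has at most 2 sampled neighbors
```

This sampling step is exactly the kind of per-batch CPU work whose implementation cost the paper argues can dominate training time in practice.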