Paper Title

SmartSAGE: Training Large-scale Graph Neural Networks using In-Storage Processing Architectures

Paper Authors

Yunjae Lee, Jinha Chung, Minsoo Rhu

Paper Abstract

Graph neural networks (GNNs) can extract features by learning both the representation of each object (i.e., graph nodes) and the relationships across different objects (i.e., the edges that connect nodes), achieving state-of-the-art performance in various graph-based tasks. Despite their strengths, utilizing these algorithms in a production environment faces several challenges, as the number of graph nodes and edges reaches the scale of several billions to hundreds of billions, requiring substantial storage space for training. Unfortunately, state-of-the-art ML frameworks employ an in-memory processing model, which significantly hampers the productivity of ML practitioners because it mandates that the overall working set fit within DRAM capacity. In this work, we first conduct a detailed characterization of a state-of-the-art, large-scale GNN training algorithm, GraphSAGE. Based on the characterization, we then explore the feasibility of utilizing capacity-optimized NVM SSDs to store memory-hungry GNN data, which enables large-scale GNN training beyond the limits of main memory size. Given the large performance gap between DRAM and SSDs, however, blindly utilizing SSDs as a direct substitute for DRAM leads to significant performance loss. We therefore develop SmartSAGE, our software/hardware co-design based on an in-storage processing (ISP) architecture. Our work demonstrates that an ISP-based large-scale GNN training system can achieve both high-capacity storage and high performance, opening up opportunities for ML practitioners to train large GNN datasets without being hampered by the physical limitations of main memory size.
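
To make the memory pressure described in the abstract concrete, below is a minimal, hypothetical Python/NumPy sketch of GraphSAGE-style neighbor sampling and feature gathering; it is not the paper's implementation. The toy graph sizes, the `fanout` value, the `features.bin` file, and the `sample_and_gather` helper are illustrative assumptions. The sketch shows why each mini-batch issues many fine-grained, random reads into a node-feature matrix: cheap when the matrix is resident in DRAM (the in-memory processing model), but dominated by storage latency when the same data is served from an SSD, which is the gap SmartSAGE's in-storage processing targets.

```python
import numpy as np

# Minimal sketch (not the authors' implementation): GraphSAGE-style
# neighbor sampling plus feature gathering. Each mini-batch touches a
# sparse, random subset of a potentially huge node-feature matrix.

rng = np.random.default_rng(0)

num_nodes, feat_dim = 10_000, 128   # toy sizes; production graphs reach billions of nodes
fanout = 10                         # neighbors sampled per seed node (hypothetical value)

# Toy adjacency in CSR-like form: 5 neighbor ids per node.
indptr = np.arange(0, num_nodes * 5 + 1, 5)
indices = rng.integers(0, num_nodes, size=num_nodes * 5)

# Option A: features resident in DRAM (the in-memory processing model).
features_dram = rng.standard_normal((num_nodes, feat_dim), dtype=np.float32)

# Option B: features kept on storage and memory-mapped, so DRAM only holds
# the pages a mini-batch actually touches (a stand-in for SSD-backed data).
features_dram.tofile("features.bin")
features_ssd = np.memmap("features.bin", dtype=np.float32,
                         mode="r", shape=(num_nodes, feat_dim))

def sample_and_gather(seed_nodes, feats):
    """Sample up to `fanout` neighbors per seed and gather their features."""
    gathered = []
    for v in seed_nodes:
        nbrs = indices[indptr[v]:indptr[v + 1]]
        picked = rng.choice(nbrs, size=min(fanout, nbrs.size), replace=False)
        gathered.append(feats[picked])  # fine-grained, random reads
    return np.concatenate(gathered, axis=0)

batch = rng.integers(0, num_nodes, size=1024)   # one mini-batch of seed nodes
h_in_memory = sample_and_gather(batch, features_dram)
h_from_disk = sample_and_gather(batch, features_ssd)
assert np.allclose(h_in_memory, h_from_disk)    # same values, very different access cost
```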
