MQ-GNN: A Multi-Queue Pipelined Architecture for Scalable and Efficient GNN Training


Graph Neural Networks (GNNs) are powerful tools for learning on graph-structured data, but their scalability is hindered by inefficient mini-batch generation, data-transfer bottlenecks, and costly inter-GPU synchronization. Existing training frameworks fail to overlap these stages, leading to suboptimal resource utilization. This paper proposes MQ-GNN, a multi-queue pipelined framework that maximizes training efficiency by interleaving GNN training stages and optimizing resource utilization. MQ-GNN introduces the Ready-to-Update Asynchronous Consistent Model (RaCoM), which enables asynchronous gradient sharing and model updates while ensuring global consistency through adaptive periodic synchronization. Additionally, it employs global neighbor sampling with caching to reduce data-transfer overhead and an adaptive queue-sizing strategy to balance computation and memory efficiency.
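
To make the pipelining idea concrete, here is a minimal, illustrative Python sketch, not the paper's implementation: sampling, host-to-device transfer, and training run as separate workers connected by bounded queues so the stages overlap, and the trainer reconciles its local model with a global copy at a fixed period, loosely mirroring RaCoM's periodic synchronization. The queue capacity, synchronization period, and all function names below are assumptions made for illustration; MQ-GNN additionally sizes its queues adaptively and caches globally sampled neighbors.

```python
import queue
import threading
import time

# Illustrative sketch only: three pipeline stages (mini-batch sampling,
# host-to-device transfer, training) run as separate workers that
# communicate through bounded queues, so the stages overlap instead of
# running back-to-back. Queue capacity and the synchronization period
# are hypothetical values, not taken from the paper.
QUEUE_SIZE = 4        # adaptive in MQ-GNN; fixed here for simplicity
SYNC_PERIOD = 8       # reconcile with the "global" model every N steps
NUM_STEPS = 32

batch_queue = queue.Queue(maxsize=QUEUE_SIZE)    # sampled mini-batches
device_queue = queue.Queue(maxsize=QUEUE_SIZE)   # batches staged on the device

def sampler():
    """Stage 1: produce mini-batches (stand-in for neighbor sampling)."""
    for step in range(NUM_STEPS):
        batch = {"step": step, "nodes": list(range(step, step + 4))}
        batch_queue.put(batch)
    batch_queue.put(None)  # sentinel: no more batches

def transfer():
    """Stage 2: move batches to the device (stand-in for H2D copies)."""
    while True:
        batch = batch_queue.get()
        if batch is None:
            device_queue.put(None)
            break
        time.sleep(0.001)          # pretend copy latency
        device_queue.put(batch)

def trainer():
    """Stage 3: consume staged batches, update a local model, and
    periodically synchronize it with a shared/global copy."""
    local_model, global_model = 0.0, 0.0
    step = 0
    while True:
        batch = device_queue.get()
        if batch is None:
            break
        local_model += 0.1 * len(batch["nodes"])   # stand-in for a gradient step
        step += 1
        if step % SYNC_PERIOD == 0:
            global_model = local_model             # periodic consistency point
    print(f"trained {step} steps, final model value {local_model:.1f}")

threads = [threading.Thread(target=f) for f in (sampler, transfer, trainer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because each queue is bounded, a fast stage blocks when it gets too far ahead, which is the same back-pressure mechanism a pipelined trainer relies on to balance memory use against overlap.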

Experiments on four large-scale datasets and ten baseline models demonstrate that MQ-GNN achieves up to 4.6× faster training time and 30% higher GPU utilization while maintaining competitive accuracy. These results establish MQ-GNN as a scalable and efficient solution for multi-GPU GNN training. The code is available at MQ-GNN.
