FlowWalker: A Memory-efficient and High-performance GPU-based Dynamic Graph Random Walk Framework
arXiv (2024)

Abstract
Dynamic graph random walk (DGRW) emerges as a practical tool for capturing
structural relations within a graph. Effectively executing DGRW on GPU presents
certain challenges. First, existing sampling methods demand a pre-processing
buffer, causing substantial space complexity. Moreover, the power-law
distribution of graph vertex degrees introduces workload imbalance,
making DGRW difficult to parallelize. In this paper, we propose
FlowWalker, a GPU-based dynamic graph random walk framework. FlowWalker
implements an efficient parallel sampling method to fully exploit the GPU
parallelism and reduce space complexity. Moreover, it employs a sampler-centric
paradigm alongside a dynamic scheduling strategy to handle the huge amounts of
walking queries. FlowWalker stands as a memory-efficient framework that
requires no auxiliary data structures in GPU global memory. We examine the
performance of FlowWalker extensively on ten datasets, and experiment results
show that FlowWalker achieves up to 752.2x, 72.1x, and 16.4x speedup compared
with existing CPU, GPU, and FPGA random walk frameworks, respectively. A case
study shows that FlowWalker diminishes random walk time from 35% in the
pipeline of ByteDance friend recommendation GNN training.
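The abstract's claim of "no auxiliary data structures" hinges on a single-pass sampling scheme: each walk step picks a weighted neighbor without building a prefix-sum or alias table first. As a rough illustration of that idea (not FlowWalker's actual GPU kernel), here is a minimal CPU-side sketch using weighted reservoir sampling, which selects a neighbor in one pass with O(1) extra memory; the function names and graph layout are illustrative assumptions.

```python
import random

def sample_next(neighbors, weights, rng=random):
    """Pick one neighbor with probability proportional to its weight
    via single-pass weighted reservoir sampling (key = u^(1/w)).
    Uses O(1) extra memory: no prefix sums or alias tables."""
    best_key, best_v = -1.0, None
    for v, w in zip(neighbors, weights):
        if w <= 0:
            continue
        key = rng.random() ** (1.0 / w)  # larger weight => larger key on average
        if key > best_key:
            best_key, best_v = key, v
    return best_v

def random_walk(adj, wts, start, length, rng=random):
    """One walk of up to `length` steps; adj[v] and wts[v] hold v's
    neighbor list and the corresponding edge weights."""
    walk = [start]
    for _ in range(length):
        nxt = sample_next(adj[walk[-1]], wts[walk[-1]], rng)
        if nxt is None:  # dead end: no outgoing edges with positive weight
            break
        walk.append(nxt)
    return walk
```

On a GPU, many such walkers run concurrently; the workload-imbalance problem the abstract mentions arises because high-degree vertices make some steps far more expensive than others, which is what a sampler-centric, dynamically scheduled design aims to smooth out.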