Accelerating Recommendation System Training by Leveraging Popular Choices.

International Conference on Very Large Data Bases (2021)

Abstract
Recommender models are commonly used to suggest relevant items to users in e-commerce and online-advertising applications. These models use massive embedding tables to store numerical representations of items' and users' categorical variables (memory intensive) and employ neural networks (compute intensive) to generate final recommendations. Training these large-scale recommendation models demands ever-increasing data and compute resources. The highly parallel neural-network portion of these models can benefit from GPU acceleration; however, large embedding tables often cannot fit in the limited-capacity GPU device memory. This paper therefore dives deep into the semantics of the training data and obtains insights about the feature access, transfer, and usage patterns of these models. We observe that, due to the popularity of certain inputs, accesses to the embeddings are highly skewed, with a few embedding entries being accessed up to 10000x more often than others. This paper leverages this asymmetrical access pattern to offer a framework, called FAE, and proposes a hot-embedding-aware data layout for training recommender models. This layout utilizes the scarce GPU memory to store the most highly accessed embeddings, thus reducing data transfers from CPU to GPU. At the same time, FAE engages the GPU to accelerate the execution of these hot embedding entries. Experiments on production-scale recommendation models with real datasets show that FAE reduces overall training time by 2.3x and 1.52x in comparison to XDL CPU-only and XDL CPU-GPU execution, respectively, while maintaining baseline accuracy.
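The hot/cold embedding split the abstract describes can be illustrated with a short sketch. The PyTorch code below is a minimal illustration under stated assumptions, not FAE's actual implementation: the names SplitEmbedding, select_hot_rows, and hot_fraction, as well as the exact routing policy, are hypothetical and chosen for exposition only.

```python
import numpy as np
import torch

def select_hot_rows(access_counts, hot_fraction=0.01):
    """Return indices of the most frequently accessed embedding rows.

    access_counts[i] is how often row i appears in the training inputs;
    with the skew the paper reports, a tiny hot_fraction covers most lookups.
    """
    num_hot = max(1, int(len(access_counts) * hot_fraction))
    order = np.argsort(access_counts)[::-1]  # most popular rows first
    return np.sort(order[:num_hot])

class SplitEmbedding(torch.nn.Module):
    """Embedding table whose hot rows are pinned in GPU memory.

    Cold rows stay in larger host memory; only their lookup results cross
    the CPU-GPU link, so popular inputs incur no host-to-device transfer.
    """

    def __init__(self, num_rows, dim, hot_ids, device="cuda"):
        super().__init__()
        self.device = device
        hot_mask = torch.zeros(num_rows, dtype=torch.bool)
        hot_mask[torch.as_tensor(hot_ids, dtype=torch.long)] = True
        # remap[i] = position of global row i inside its sub-table.
        remap = torch.empty(num_rows, dtype=torch.long)
        remap[hot_mask] = torch.arange(int(hot_mask.sum()))
        remap[~hot_mask] = torch.arange(int((~hot_mask).sum()))
        self.hot_mask, self.remap = hot_mask, remap
        self.hot = torch.nn.Embedding(int(hot_mask.sum()), dim).to(device)
        self.cold = torch.nn.Embedding(int((~hot_mask).sum()), dim)  # CPU

    def forward(self, ids):
        ids = ids.cpu()
        is_hot = self.hot_mask[ids]
        out = torch.empty(len(ids), self.hot.embedding_dim, device=self.device)
        sel = is_hot.to(self.device)
        if is_hot.any():
            # Hot path: served entirely from GPU memory, no host copy.
            out[sel] = self.hot(self.remap[ids[is_hot]].to(self.device))
        if (~is_hot).any():
            # Cold path: look up on CPU, ship only the result vectors.
            out[~sel] = self.cold(self.remap[ids[~is_hot]]).to(self.device)
        return out
```

Under the access skew described above, a hot table holding roughly 1% of the rows can serve the bulk of lookups from GPU memory while the table as a whole still exceeds device capacity, which is the effect the paper's layout exploits.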
Keywords
recommendation system training, popular