A Fast Parallel Stochastic Gradient Method for Matrix Factorization in Shared Memory Systems

ACM TIST (2015)

Cited by 103 | Viewed 155
Abstract
Matrix factorization is known to be an effective method for recommender systems that are given only the ratings from users to items. Currently, the stochastic gradient (SG) method is one of the most popular algorithms for matrix factorization. However, as a sequential approach, SG is difficult to parallelize for handling web-scale problems. In this article, we develop a fast parallel SG method, FPSG, for shared memory systems. By dramatically reducing the cache-miss rate and carefully addressing the load balance of threads, FPSG is more efficient than state-of-the-art parallel algorithms for matrix factorization.
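The SG update at the heart of such factorization methods can be sketched as follows. This is a minimal, sequential illustration only; the function name `sgd_mf` and its parameters are hypothetical, and the paper's FPSG additionally partitions the rating matrix into blocks so that threads can update disjoint blocks in parallel with good cache behavior.

```python
import numpy as np

def sgd_mf(ratings, n_users, n_items, k=8, lr=0.01, reg=0.05, epochs=20, seed=0):
    """Factorize a sparse list of (user, item, rating) triples into latent
    factor matrices P (n_users x k) and Q (n_items x k) by stochastic
    gradient descent on the regularized squared error.

    Sequential sketch for illustration -- not the parallel FPSG algorithm.
    """
    rng = np.random.default_rng(seed)
    P = rng.normal(scale=0.1, size=(n_users, k))
    Q = rng.normal(scale=0.1, size=(n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            e = r - P[u] @ Q[i]              # prediction error for this rating
            pu = P[u].copy()                 # keep old P[u] for the Q update
            P[u] += lr * (e * Q[i] - reg * P[u])
            Q[i] += lr * (e * pu - reg * Q[i])
    return P, Q
```

Because each update touches only one row of P and one row of Q, updates on ratings from disjoint user and item ranges are independent; this is the property FPSG exploits when it schedules blocks of the rating matrix across threads.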
Keywords
shared memory algorithm,recommender system,matrix factorization,parallel computing,mathematical software,stochastic gradient descent