
QuicK-means: Acceleration of K-means by learning a fast transform

Machine Learning (2019)

Abstract
K-means -- and the celebrated Lloyd algorithm -- is more than the clustering method it was originally designed to be. It has indeed proven pivotal to help increase the speed of many machine learning and data analysis techniques such as indexing, nearest-neighbor search and prediction, data compression, and Radial Basis Function networks; its beneficial use has been shown to carry over to the acceleration of kernel machines (when using the Nyström method). Here, we propose a fast extension of K-means, dubbed QuicK-means, that rests on the idea of expressing the matrix of the $K$ centroids as a product of sparse matrices, a feat made possible by recent results devoted to finding approximations of matrices as products of sparse factors. Such a decomposition squashes the complexity of the matrix-vector product between the factorized $K \times D$ centroid matrix $\mathbf{U}$ and any vector from $\mathcal{O}(KD)$ to $\mathcal{O}(A \log A + B)$, with $A = \min(K, D)$ and $B = \max(K, D)$, where $D$ is the dimension of the training data. This drastic computational saving has a direct impact on the assignment of a point to a cluster, meaning that it is tangible not only at prediction time but also at training time, provided the factorization procedure is performed during Lloyd's algorithm. We show that resorting to a factorization step at each iteration does not impair the convergence of the optimization scheme and that, depending on the context, it may entail a reduction of the training time. Finally, we provide discussions and numerical simulations that show the versatility of our computationally-efficient QuicK-means algorithm.
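The cost argument in the abstract is easiest to see in code. Below is a minimal Python (NumPy/SciPy) sketch of the factorized matrix-vector product: the sparse factors are random placeholders standing in for the ones QuicK-means learns, so only the cost structure, not the learning procedure, is illustrated, and the sizes and density are assumptions chosen for the example.

import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
K = D = 256                 # centroid matrix is K x D; square case for simplicity
Q = int(np.log2(K))         # number of sparse factors, O(log K) of them

# Hypothetical random sparse factors with O(K) nonzeros each; QuicK-means
# learns such factors, here random ones only illustrate the cost structure.
factors = [
    sp.random(K, K, density=2.0 / K, format="csr", random_state=rng)
    for _ in range(Q)
]

# Reconstruct the dense centroid matrix U = S_1 @ S_2 @ ... @ S_Q once.
U = factors[0]
for S in factors[1:]:
    U = U @ S
U_dense = U.toarray()

x = rng.standard_normal(D)

# Dense product: O(K * D) multiply-adds.
y_dense = U_dense @ x

# Factorized product: Q sparse mat-vecs, roughly O(A log A + B) operations.
y_fast = x
for S in reversed(factors):
    y_fast = S @ y_fast

print(np.allclose(y_dense, y_fast))   # True up to floating-point error

Because cluster assignment in Lloyd's algorithm is dominated by exactly this kind of centroid-matrix-times-vector computation, replacing the dense product with the chained sparse one is where the claimed speed-up at both training and prediction time comes from.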
Keywords
K-means, Non-negative Matrix Factorization, Density-based Clustering, Clustering Algorithms, Semi-supervised Clustering