Much Faster Algorithms for Matrix Scaling

2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), 2017

Citations: 123 | Views: 138
Abstract
We develop several efficient algorithms for the classical Matrix Scaling problem, which is used in many diverse areas, from preconditioning linear systems to approximation of the permanent. Given an n×n input matrix A, the problem asks for diagonal (scaling) matrices X and Y (if they exist) such that XAY ε-approximates a doubly stochastic matrix, or more generally a matrix with prescribed row and column sums. We address the general scaling problem as well as some important special cases. In particular, if A has m nonzero entries, and if there exist X and Y with polynomially large entries such that XAY is doubly stochastic, then we can solve the problem in total complexity Õ(m + n^{4/3}). This greatly improves on the best known previous results, which were either Õ(n^4) or O(mn^{1/2}/ε). Our algorithms are based on tailor-made first- and second-order techniques, combined with other recent advances in continuous optimization, which may be of independent interest for solving similar problems.
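As a point of reference for the problem statement only, the sketch below shows the classical Sinkhorn-Knopp iteration, which alternately normalizes row and column sums until XAY is ε-close to doubly stochastic. This is a baseline first-order method, not the paper's Õ(m + n^{4/3}) algorithm; the function name sinkhorn_scale, the tolerance, and the iteration cap are illustrative choices.

```python
import numpy as np

def sinkhorn_scale(A, eps=1e-6, max_iter=10_000):
    """Classical Sinkhorn-Knopp baseline (not the paper's algorithm):
    alternately rescale rows and columns of a nonnegative matrix A
    until X A Y has row and column sums within eps of 1."""
    n = A.shape[0]
    x = np.ones(n)  # diagonal entries of X
    y = np.ones(n)  # diagonal entries of Y
    for _ in range(max_iter):
        B = x[:, None] * A * y[None, :]   # current scaling X A Y
        row_err = np.abs(B.sum(axis=1) - 1).max()
        col_err = np.abs(B.sum(axis=0) - 1).max()
        if max(row_err, col_err) <= eps:
            break
        x = 1.0 / (A @ y)     # make row sums of X A Y equal 1
        y = 1.0 / (A.T @ x)   # then make column sums equal 1
    return x, y

# Usage: scale a random positive matrix toward doubly stochastic.
A = np.random.rand(5, 5) + 0.1
x, y = sinkhorn_scale(A)
B = np.diag(x) @ A @ np.diag(y)
print(B.sum(axis=0), B.sum(axis=1))  # all sums close to 1
```

Each pass costs O(m) for a matrix with m nonzeros, but the number of passes needed can be large; the paper's contribution is precisely to obtain much faster convergence via tailored first- and second-order methods.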
Keywords
matrix scaling,iterative algorithms,first-order method,second-order method,doubly stochastic