Iterative Refinement for ℓp-norm Regression

SODA 2019

Abstract
We give improved algorithms for the ℓp-regression problem, min_x ‖x‖_p such that Ax = b, for all p ∈ (1, 2) ∪ (2, ∞). Our algorithms obtain a high-accuracy solution in Õ_p(m^{1/3}) iterations, where each iteration requires solving an m × m linear system, with m being the dimension of the ambient space.

Incorporating a procedure for maintaining an approximate inverse of the linear systems that we need to solve at each iteration, we give algorithms for solving ℓp-regression to 1/poly(n) accuracy that run in time Õ_p(m^{max{ω, 7/3}}), where ω is the matrix multiplication constant. For the current best value of ω ≈ 2.37, this means that we can solve ℓp-regression essentially as fast as ℓ2-regression, for all constant p bounded away from 1.

Our algorithms can be combined with nearly-linear-time solvers for linear systems in graph Laplacians to give minimum ℓp-norm flow / voltage solutions to 1/poly(n) accuracy on an undirected graph with m edges in time Õ_p(m^{4/3}).

For sparse graphs and for matrices with similar dimensions, our iteration counts and running times improve upon the p-norm regression algorithm of [Bubeck-Cohen-Lee-Li, STOC'18], as well as upon general-purpose convex optimization algorithms.

At the core of our algorithms is an iterative refinement scheme for ℓp-norms, using the quadratically-smoothed ℓp-norms introduced in the work of Bubeck et al. Formally, given an initial solution, we construct a problem that seeks to minimize a quadratically-smoothed ℓp-norm over a subspace, such that a crude solution to this problem allows us to improve the initial solution by a constant factor, leading to algorithms with fast convergence.
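The general idea of improving an iterate by repeatedly solving an easier quadratic problem over the constraint subspace can be illustrated with classical iteratively reweighted least squares (IRLS). This is a minimal stand-in sketch, not the authors' algorithm: the paper's refinement scheme uses the quadratically-smoothed ℓp-norm of Bubeck et al., whereas the function name, the weight floor `eps`, and the fixed iteration count below are illustrative assumptions.

```python
import numpy as np

def irls_lp_min_norm(A, b, p=1.5, iters=60, eps=1e-8):
    """Minimize ||x||_p subject to Ax = b via IRLS (illustrative sketch).

    Each step fixes weights w_i = |x_i|^(p-2) at the current iterate and
    solves the weighted least-norm problem
        min sum_i w_i * x_i^2  subject to  Ax = b,
    whose closed form is x = W^{-1} A^T (A W^{-1} A^T)^{-1} b.
    For 1 < p < 2 this quadratic majorizes |x|^p, so the ℓp objective is
    non-increasing; `eps` floors |x_i| to keep the weights finite.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # l2 minimum-norm start
    for _ in range(iters):
        winv = np.maximum(np.abs(x), eps) ** (2.0 - p)  # diagonal of W^{-1}
        M = A @ (winv[:, None] * A.T)                   # A W^{-1} A^T
        x = winv * (A.T @ np.linalg.solve(M, b))        # stays feasible: Ax = b
    return x
```

Each pass costs one small dense linear solve, echoing the solve-a-linear-system-per-iteration structure described in the abstract; the paper's contribution is obtaining a high-accuracy solution from only crude solves of the smoothed subproblems, which plain IRLS does not guarantee.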
Keywords
regression, refinement