Non-Convex Rank/Sparsity Regularization and Local Minima

2017 IEEE International Conference on Computer Vision (ICCV), 2017

Abstract
This paper considers the problem of recovering either a low rank matrix or a sparse vector from observations of linear combinations of the vector or matrix elements. Recent methods replace the non-convex regularization with ℓ_1 or nuclear norm relaxations. It is well known that this approach can be guaranteed to recover a near optimal solution if a so-called restricted isometry property (RIP) holds. On the other hand, it is also known to perform soft thresholding, which results in a shrinking bias that can degrade the solution. In this paper we study an alternative non-convex regularization term. This formulation does not penalize elements that are larger than a certain threshold, making it much less prone to small solutions. Our main theoretical results show that if a RIP holds then the stationary points are often well separated, in the sense that their differences must be of high cardinality/rank. Thus, with a suitable initial solution the approach is unlikely to fall into a bad local minimum. Our numerical tests show that the approach is likely to converge to a better solution than the standard ℓ_1/nuclear-norm relaxation even when starting from trivial initializations. In many cases our results can also be used to verify global optimality of our method.
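The shrinking bias mentioned in the abstract can be seen directly from the proximal operators involved. A minimal sketch, not the paper's actual algorithm: soft thresholding (the prox of the ℓ_1 penalty) shrinks every surviving entry toward zero, while a hard-thresholding-style operator, used here only to illustrate a regularizer that does not penalize entries above a threshold, leaves large entries unchanged.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of lam * ||x||_1: every surviving entry
    # is shrunk toward zero by lam (the "shrinking bias").
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_threshold(x, lam):
    # Illustrative non-convex alternative: entries above the threshold
    # are kept as-is, so large entries incur no bias.
    return np.where(np.abs(x) > lam, x, 0.0)

x = np.array([0.2, 1.5, -3.0])
print(soft_threshold(x, 0.5))  # [ 0.   1.  -2.5] -- large entries shrunk by 0.5
print(hard_threshold(x, 0.5))  # [ 0.   1.5 -3. ] -- large entries untouched
```

Both operators zero out the small entry, but only soft thresholding biases the large entries, which is the degradation the non-convex formulation is designed to avoid.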
Keywords
restricted isometry property, sparsity regularization, non-convex rank, standard nuclear-norm relaxation, bad local minimum, suitable initial solution, main theoretical results, alternative non-convex regularization term, RIP, nuclear norm relaxations, linear combinations, sparse vector, low rank matrix