On the Minimax Optimality of the EM Algorithm for Learning Two-Component Mixed Linear Regression

24th International Conference on Artificial Intelligence and Statistics (AISTATS), 2021

Abstract
We study the convergence rates of the EM algorithm for learning two-component mixed linear regression under all regimes of signal-to-noise ratio (SNR). We resolve a long-standing question that many recent results have attempted to tackle: we completely characterize the convergence behavior of EM, and show that the EM algorithm achieves minimax optimal sample complexity under all SNR regimes. In particular, when the SNR is sufficiently large, the EM updates converge to the true parameter θ* at the standard parametric convergence rate O((d/n)^{1/2}) after O(log(n/d)) iterations. In the regime where the SNR is above O((d/n)^{1/4}) and below some constant, the EM iterates converge to a O(SNR^{-1} (d/n)^{1/2}) neighborhood of the true parameter when the number of iterations is of the order O(SNR^{-2} log(n/d)). In the low SNR regime, where the SNR is below O((d/n)^{1/4}), we show that EM converges to a O((d/n)^{1/4}) neighborhood of the true parameters after O((n/d)^{1/2}) iterations. Notably, these results are achieved under mild conditions of either random initialization or an efficiently computable local initialization. By providing tight convergence guarantees of the EM algorithm in middle-to-low SNR regimes, we fill the remaining gap in the literature and, significantly, reveal that in low SNR, EM changes rate, matching the n^{-1/4} rate of the MLE, a behavior that previous work had been unable to show.
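For concreteness, the EM iteration analyzed in the abstract can be sketched for the symmetric two-component model y_i = s_i ⟨θ*, x_i⟩ + noise with s_i uniform on {−1, +1}: the E-step computes posterior label weights via a tanh, and the M-step is a weighted least-squares solve. The code below is a minimal illustrative sketch under these assumptions (Gaussian design, known noise variance `sigma2`, random initialization); the function name `em_mlr` and all parameter choices are ours, not the paper's.

```python
import numpy as np

def em_mlr(X, y, theta0, sigma2, n_iters=100):
    """EM for symmetric two-component mixed linear regression.

    Model assumed: y_i = s_i * <theta*, x_i> + eps_i, s_i ~ Unif{-1, +1}.
    E-step: posterior weight w_i = tanh(y_i * <theta, x_i> / sigma2).
    M-step: least squares with reweighted responses w_i * y_i.
    """
    theta = theta0.copy()
    gram = X.T @ X  # design Gram matrix; fixed across iterations
    for _ in range(n_iters):
        w = np.tanh(y * (X @ theta) / sigma2)          # E-step
        theta = np.linalg.solve(gram, X.T @ (w * y))   # M-step
    return theta

# Synthetic high-SNR example with a random initialization.
rng = np.random.default_rng(0)
n, d, noise_std = 2000, 5, 0.1
theta_star = np.ones(d)
X = rng.standard_normal((n, d))
s = rng.choice([-1.0, 1.0], size=n)
y = s * (X @ theta_star) + noise_std * rng.standard_normal(n)

theta_hat = em_mlr(X, y, theta0=rng.standard_normal(d),
                   sigma2=noise_std**2)
# The labels s_i are unobserved, so theta* is identifiable only up to
# a global sign; measure error over both signs.
err = min(np.linalg.norm(theta_hat - theta_star),
          np.linalg.norm(theta_hat + theta_star))
```

In this high-SNR setting the error should be on the order of the parametric rate (d/n)^{1/2} described in the abstract; in the low-SNR regime the same iteration would instead settle in the wider O((d/n)^{1/4}) neighborhood.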
Keywords
EM algorithm, mixed linear regression, minimax optimality, learning, two-component