Stagewise Training With Exponentially Growing Training Sets.

IEEE Transactions on Neural Networks and Learning Systems (2024)

Abstract
In the world of big data, training large-scale machine learning problems has gained considerable attention. Numerous innovative optimization strategies have been presented in recent years to accelerate the large-scale training process. However, the possibility of further accelerating the training process of various optimization algorithms remains an unresolved subject. To begin addressing this difficult problem, we exploit the established finding that, when training data are independent and identically distributed, the learning problem on a smaller dataset does not differ significantly from the original one. Building on this, we propose a stagewise training technique that grows the size of the training set exponentially while solving nonsmooth subproblems. We demonstrate that our stagewise training via exponentially growing the size of the training sets (STEGS) is compatible with a large number of proximal gradient descent and gradient hard thresholding (GHT) techniques. Interestingly, we show that STEGS can greatly reduce overall complexity while maintaining statistical accuracy, or even surpassing the intrinsic error introduced by GHT approaches. In addition, we analyze the effect of the training data growth rate on the overall complexity. Experimental results of applying the l2,1- and l0-norms to a variety of large-scale real-world datasets not only corroborate our theories but also demonstrate the benefits of our STEGS framework.
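The abstract only outlines the scheme at a high level. Below is a minimal Python sketch of the general idea, assuming a regularized least-squares subproblem solved by proximal gradient descent on an exponentially growing, warm-started subset of the data. The function names, the l1 soft-thresholding proximal operator (a stand-in for the paper's nonsmooth regularizers), the growth factor, and the step sizes are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def soft_threshold(w, tau):
    # Proximal operator of tau * ||w||_1 (illustrative stand-in for the nonsmooth term).
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def prox_gradient(X, y, w, lam, lr, n_iters):
    # Proximal gradient descent on 0.5/n * ||Xw - y||^2 + lam * ||w||_1.
    n = X.shape[0]
    for _ in range(n_iters):
        grad = X.T @ (X @ w - y) / n
        w = soft_threshold(w - lr * grad, lr * lam)
    return w

def stagewise_training(X, y, n0=128, growth=2.0, lam=0.1, lr=0.1,
                       iters_per_stage=100, seed=0):
    # Solve the regularized subproblem on an exponentially growing subset of the
    # data, warm-starting each stage from the previous stage's solution.
    rng = np.random.default_rng(seed)
    perm = rng.permutation(X.shape[0])  # i.i.d. data: a random subset is representative
    w = np.zeros(X.shape[1])
    m = n0
    while True:
        idx = perm[:min(m, X.shape[0])]
        w = prox_gradient(X[idx], y[idx], w, lam, lr, iters_per_stage)
        if m >= X.shape[0]:
            break
        m = int(m * growth)  # exponential growth of the training-set size
    return w
```

In this sketch, the early stages run on small, cheap subsets, and the warm start carried into later stages means the expensive subproblems need fewer iterations, which is, informally, where the overall-complexity savings claimed in the abstract come from.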
Keywords
Gradient hard thresholding (GHT) algorithms, proximal algorithms, stagewise training strategy, stochastic optimization