Proximal Boosting: Aggregating Weak Learners to Minimize Non-Differentiable Losses

Neurocomputing (2023)

Abstract
Gradient boosting is a prediction method that iteratively combines weak learners to produce a complex and accurate model. From an optimization point of view, the learning procedure of gradient boosting mimics a gradient descent on a functional variable. This paper proposes to build upon the proximal point algorithm when the empirical risk to minimize is not differentiable, in order to introduce a novel boosting approach called proximal boosting. It comes with a companion algorithm inspired by Grubb and Bagnell (2011), called residual proximal boosting, which aims at better controlling the approximation error. Theoretical convergence is proved for these two procedures under different hypotheses on the empirical risk, and the advantages of leveraging proximal methods for boosting are illustrated by numerical experiments on simulated and real-world data. In particular, we exhibit a favorable comparison over gradient boosting regarding convergence rate and prediction accuracy.
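To picture the idea, the sketch below replaces the usual pseudo-residual (negative gradient) target of gradient boosting with a proximal-point step. It is only an illustration under assumptions not stated in the abstract: the absolute (L1) loss as the non-differentiable risk, depth-3 regression trees as weak learners, and fixed proximal parameter `lam` and shrinkage `nu`; it is not the authors' reference implementation.

```python
# Illustrative proximal-boosting-style loop for the absolute loss.
# Assumptions (not from the paper): L1 loss, sklearn regression trees,
# constant-median initialisation, fixed lam and nu.
import numpy as np
from sklearn.tree import DecisionTreeRegressor


def prox_step_targets(y, f, lam):
    """Proximal-point targets for the absolute loss.

    The proximal operator of lam * |y - .| evaluated at f moves each
    prediction toward its label by at most lam, so the fitted direction
    is a clipped residual rather than the raw subgradient sign(y - f).
    """
    residual = y - f
    return np.sign(residual) * np.minimum(np.abs(residual), lam)


def proximal_boost(X, y, n_rounds=100, lam=1.0, nu=0.1, max_depth=3):
    """Aggregate weak learners by fitting each to proximal-step targets."""
    f = np.full(y.shape, np.median(y))          # constant initialisation
    learners = []
    for _ in range(n_rounds):
        targets = prox_step_targets(y, f, lam)  # proximal direction
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, targets)                    # weak learner approximates it
        f += nu * tree.predict(X)               # shrunken functional update
        learners.append(tree)
    return learners, f
```

In plain gradient boosting with the L1 loss, the targets would be sign(y - f), which discards the magnitude of the error; the proximal targets above retain that magnitude up to the cap `lam`, which is one intuition for the improved behavior reported in the paper.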
Keywords
Boosting, Proximal point method, Convex optimization, Functional optimization