Are Ensembles Getting Better all the Time?
arXiv (2023)
Abstract
Ensemble methods combine the predictions of several base models. We study
whether or not including more models always improves their average performance.
This question depends on the kind of ensemble considered, as well as the
predictive metric chosen. We focus on situations where all members of the
ensemble are a priori expected to perform equally well, which is the case for several
popular methods such as random forests or deep ensembles. In this setting, we
show that ensembles are getting better all the time if, and only if, the
considered loss function is convex. More precisely, in that case, the average
loss of the ensemble is a decreasing function of the number of models. When the
loss function is nonconvex, we show a series of results that can be summarised
as: ensembles of good models keep getting better, and ensembles of bad models
keep getting worse. To this end, we prove a new result on the monotonicity of
tail probabilities that may be of independent interest. We illustrate our
results on a medical prediction problem (diagnosing melanomas using neural
nets) and a "wisdom of crowds" experiment (guessing the ratings of upcoming
movies).
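
As a quick illustration of the dichotomy in the abstract (not one of the paper's experiments), the sketch below emulates an exchangeable ensemble with i.i.d. Gaussian base predictors and tracks, as the ensemble size n grows, a convex loss (squared error, where Jensen's inequality guarantees the average loss cannot increase) against a nonconvex 0-1 loss for both "good" and "bad" base models. The means, noise level, and decision threshold are illustrative assumptions.

```python
# Minimal simulation sketch (illustrative assumptions, not the paper's setup):
# exchangeable base models are emulated by i.i.d. noisy scores for a positive
# example, and the size-n ensemble averages the first n of them.
import numpy as np

rng = np.random.default_rng(0)
n_trials, max_n = 100_000, 50

# Assumed "good" models: mean score 1.0, above the 0.5 decision threshold.
# Assumed "bad" models: mean score 0.3, below the threshold.
good = 1.0 + rng.normal(0.0, 1.0, size=(n_trials, max_n))
bad = 0.3 + rng.normal(0.0, 1.0, size=(n_trials, max_n))

print(" n | squared (good) | 0-1 (good) | 0-1 (bad)")
for n in (1, 2, 5, 10, 50):
    g = good[:, :n].mean(axis=1)  # ensemble of n good models
    b = bad[:, :n].mean(axis=1)   # ensemble of n bad models
    sq_g = ((g - 1.0) ** 2).mean()  # convex loss: decreases with n
    zo_g = (g <= 0.5).mean()        # 0-1 error, good models: shrinks with n
    zo_b = (b <= 0.5).mean()        # 0-1 error, bad models: grows with n
    print(f"{n:2d} |     {sq_g:.4f}     |   {zo_g:.4f}   |  {zo_b:.4f}")
```

Under these assumptions the squared loss of the good ensemble shrinks like 1/n, while the 0-1 error concentrates toward 0 for good models and toward 1 for bad ones, matching the abstract's summary that ensembles of good models keep getting better and ensembles of bad models keep getting worse.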