Generalization Bounds for Uniformly Stable Algorithms
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018)
Abstract
Uniform stability of a learning algorithm is a classical notion of algorithmic stability introduced to derive high-probability bounds on the generalization error (Bousquet and Elisseeff, 2002). Specifically, for a loss function with range bounded in [0,1], the generalization error of a γ-uniformly stable learning algorithm on n samples is known to be within O((γ + 1/n)√(n log(1/δ))) of the empirical error with probability at least 1 − δ. Unfortunately, this bound does not give meaningful generalization guarantees in many common settings where γ ≥ 1/√n. At the same time, the bound is known to be tight only when γ = O(1/n). We substantially improve generalization bounds for uniformly stable algorithms without making any additional assumptions. First, we show that the generalization error in this setting is O(√((γ + 1/n) log(1/δ))) with probability at least 1 − δ. In addition, we prove a tight bound of O(γ² + 1/n) on the second moment of the estimation error; the best previously known bound on the second moment is O(γ + 1/n). Our proofs are based on new analysis techniques, and our results imply substantially stronger generalization guarantees for several well-studied algorithms.
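To make the improvement concrete, the following minimal Python sketch plugs illustrative values into the two high-probability bounds from the abstract. The specific choices of n, γ, and δ are assumptions made here for illustration, not values taken from the paper; they sit exactly in the regime γ = 1/√n where the classical bound becomes vacuous.

# Compare the classical Bousquet-Elisseeff bound with the improved bound,
# using hypothetical parameter values (an illustrative sketch only).
import math

n = 10**6                   # number of samples (assumed for illustration)
gamma = 1 / math.sqrt(n)    # stability parameter in the regime gamma >= 1/sqrt(n)
delta = 0.01                # failure probability (assumed for illustration)

# Classical bound: O((gamma + 1/n) * sqrt(n * log(1/delta)))
old_bound = (gamma + 1 / n) * math.sqrt(n * math.log(1 / delta))

# Improved bound from this paper: O(sqrt((gamma + 1/n) * log(1/delta)))
new_bound = math.sqrt((gamma + 1 / n) * math.log(1 / delta))

print(f"classical bound: {old_bound:.3f}")  # ~2.15: exceeds 1, vacuous for a [0,1]-bounded loss
print(f"improved bound:  {new_bound:.3f}")  # ~0.068: a meaningful guarantee

At γ = 1/√n the classical bound evaluates to roughly √(log(1/δ)) > 1 regardless of n, so it says nothing about a loss bounded in [0,1], whereas the improved bound still shrinks with n.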
Keywords
learning algorithm, loss function, generalization error, algorithmic stability