Stochastic Gradient Succeeds for Bandits
ICML 2023
Abstract
We show that the stochastic gradient bandit algorithm converges to a
globally optimal policy at an O(1/t) rate, even with a constant
step size. Remarkably, global convergence of the stochastic gradient bandit
algorithm has not been previously established, even though it is an old
algorithm known to be applicable to bandits. The new result is achieved by
establishing two novel technical findings: first, the noise of the stochastic
updates in the gradient bandit algorithm satisfies a strong “growth
condition” property, where the variance diminishes whenever progress becomes
small, implying that additional noise control via diminishing step sizes is
unnecessary; second, a form of “weak exploration” is automatically achieved
through the stochastic gradient updates, since they prevent the action
probabilities from decaying faster than O(1/t), thus ensuring that every
action is sampled infinitely often with probability 1. These two findings can
be used to show that the stochastic gradient update is already “sufficient”
for bandits in the sense that exploration versus exploitation is automatically
balanced in a manner that ensures almost sure convergence to a global optimum.
These novel theoretical findings are further verified by experimental results.
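The algorithm analyzed above is the classic gradient bandit method: a softmax policy over arm logits, updated with the REINFORCE stochastic gradient at a constant step size. A minimal sketch follows; the arm means, step size, and noise model are illustrative assumptions, not values from the paper.

```python
import numpy as np

def softmax(theta):
    z = theta - theta.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def gradient_bandit(means, eta=0.5, steps=20000, seed=0):
    """Stochastic gradient bandit with a CONSTANT step size eta.

    `means` are hypothetical arm reward means (an assumption for this
    sketch). Each round: sample an arm from the softmax policy, observe
    a noisy reward, and take one REINFORCE gradient step on the logits.
    """
    rng = np.random.default_rng(seed)
    K = len(means)
    theta = np.zeros(K)  # arm logits
    for _ in range(steps):
        pi = softmax(theta)
        a = rng.choice(K, p=pi)
        r = means[a] + rng.normal(0.0, 0.1)  # noisy reward sample
        # REINFORCE gradient: d/dtheta log pi(a) = onehot(a) - pi
        grad = -pi * r
        grad[a] += r
        theta += eta * grad  # constant step size, no decay schedule
    return softmax(theta)

pi = gradient_bandit([0.2, 0.5, 0.9])
print(pi)  # probability mass should concentrate on the best arm
```

Note the step size is never decayed, matching the paper's claim that the update's noise satisfies a growth condition making diminishing step sizes unnecessary; the softmax sampling itself keeps every arm's probability from vanishing too fast, which is the "weak exploration" effect.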