Why does the two-timescale Q-learning converge to different mean field solutions? A unified convergence analysis
arXiv (2024)
Abstract
We revisit the unified two-timescale Q-learning algorithm as initially
introduced by Angiuli et al. This algorithm
demonstrates efficacy in solving mean field game (MFG) and mean field control
(MFC) problems, simply by tuning the ratio of two learning rates for mean field
distribution and the Q-functions respectively. In this paper, we provide a
comprehensive theoretical explanation of the algorithm's bifurcated numerical
outcomes under fixed learning rates. We achieve this by establishing a diagram
that correlates continuous-time mean field problems to their discrete-time
Q-function counterparts, forming the basis of the algorithm. Our key
contribution lies in the construction of a Lyapunov function integrating both
mean field distribution and Q-function iterates. This Lyapunov function
facilitates a unified convergence of the algorithm across the entire spectrum
of learning rates, thus providing a cohesive framework for analysis.
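The abstract describes an algorithm that updates the mean field distribution and the Q-function with two separate learning rates, where the ratio of the rates selects whether the iterates converge to the MFG or the MFC solution. The following is a minimal, hypothetical Python sketch of that update structure only; the environment dynamics, reward, and all parameter names (`rho_mu`, `rho_q`, the toy transition rule) are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def two_timescale_q_learning(n_states=5, n_actions=3, n_steps=2000,
                             rho_mu=0.01, rho_q=0.1, seed=0):
    """Toy sketch: the mean field distribution mu and the Q-table are
    updated with separate learning rates; the ratio rho_mu / rho_q is
    the knob that, per the paper, selects the MFG vs. MFC regime."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    mu = np.full(n_states, 1.0 / n_states)  # mean field distribution
    gamma = 0.9
    s = int(rng.integers(n_states))
    for _ in range(n_steps):
        # epsilon-greedy action selection
        if rng.random() < 0.1:
            a = int(rng.integers(n_actions))
        else:
            a = int(Q[s].argmax())
        # hypothetical dynamics and a mean-field-dependent placeholder reward
        s_next = (s + a) % n_states
        r = -abs(s_next - n_states // 2) + mu[s_next]
        # Q-function update with its own learning rate rho_q
        Q[s, a] += rho_q * (r + gamma * Q[s_next].max() - Q[s, a])
        # mean field update toward the empirical state visit, rate rho_mu
        e = np.zeros(n_states)
        e[s_next] = 1.0
        mu += rho_mu * (e - mu)
        s = s_next
    return Q, mu
```

With `rho_mu << rho_q` the distribution evolves on the slow timescale (the MFG-style regime in the paper's terminology); reversing the ratio makes the distribution track the policy quickly, the MFC-style regime. The convex-combination update keeps `mu` a valid probability distribution throughout.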