Mean-field Analysis on Two-layer Neural Networks from a Kernel Perspective
arXiv (2024)
Abstract
In this paper, we study the feature learning ability of two-layer neural
networks in the mean-field regime through the lens of kernel methods. To focus
on the dynamics of the kernel induced by the first layer, we utilize a
two-timescale limit, in which the second layer moves much faster than the first
layer. In this limit, the learning problem reduces to a minimization
problem over the intrinsic kernel. We then show the global convergence of the
mean-field Langevin dynamics and derive time and particle discretization errors.
We also demonstrate that two-layer neural networks can learn a union of
multiple reproducing kernel Hilbert spaces more efficiently than any kernel
method, and that neural networks acquire a data-dependent kernel which aligns
with the target function. In addition, we develop a label noise procedure,
which converges to the global optimum, and show that the degrees of freedom
appear as an implicit regularizer.
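The two-timescale limit described above can be illustrated with a small numerical sketch: because the second layer moves much faster than the first, at each slow step one may solve the fast (second-layer) problem exactly, which is a ridge regression in the features induced by the first layer, and then take a noisy (Langevin) gradient step on the first-layer particles. The code below is a minimal toy sketch under assumed settings; the data, the `tanh` activation, the regularization constant `lam`, and the temperature `temp` are all illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (hypothetical; not from the paper).
n, d, m = 200, 5, 100              # samples, input dim, number of neurons (particles)
X = rng.standard_normal((n, d))
y = np.tanh(X @ rng.standard_normal(d))

W = rng.standard_normal((m, d))    # first-layer particles: the slow variable
a = np.zeros(m)                    # second-layer weights: the fast variable

lam, eta_w, temp = 1e-3, 1e-2, 1e-4


def features(W):
    # Mean-field scaling: each neuron contributes with weight 1/m.
    return np.tanh(X @ W.T) / m


for step in range(500):
    Phi = features(W)
    # Fast timescale, taken to its limit: solve the second-layer ridge
    # problem exactly (kernel ridge regression in the intrinsic kernel).
    a = np.linalg.solve(Phi.T @ Phi + n * lam * np.eye(m), Phi.T @ y)
    # Slow timescale: one noisy gradient (Langevin) step on the particles.
    resid = Phi @ a - y                                   # shape (n,)
    sech2 = 1.0 - np.tanh(X @ W.T) ** 2                   # activation derivative, (n, m)
    grad_W = ((sech2 * resid[:, None]) * a[None, :]).T @ X / (n * m)
    W += -eta_w * grad_W + np.sqrt(2 * eta_w * temp) * rng.standard_normal(W.shape)
```

The Gaussian noise term makes the particle update a discretization of mean-field Langevin dynamics; with `temp = 0` it would reduce to plain gradient descent on the first layer.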