Neural reproducing kernel Banach spaces and representer theorems for deep networks
arXiv (2024)
Abstract
Studying the function spaces defined by neural networks helps to understand
the corresponding learning models and their inductive bias. While in some
limits neural networks correspond to function spaces that are reproducing
kernel Hilbert spaces, these regimes do not capture the properties of the
networks used in practice. In contrast, in this paper we show that deep neural
networks define suitable reproducing kernel Banach spaces.
These spaces are equipped with norms that enforce a form of sparsity,
enabling them to adapt to potential latent structures within the input data and
their representations. In particular, leveraging the theory of reproducing
kernel Banach spaces, combined with variational results, we derive representer
theorems that justify the finite architectures commonly employed in
applications. Our study extends analogous results for shallow networks and can
be seen as a step towards considering more practically plausible neural
architectures.
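As a hedged illustration of the kind of statement meant by a representer theorem here (a sketch only; the paper's precise norms, hypotheses, and deep-network version differ), consider training data $(x_i, y_i)_{i=1}^n$ and a sparsity-inducing Banach space $\mathcal{B}$ of shallow networks. A representer theorem asserts that the regularized variational problem

  \min_{f \in \mathcal{B}} \; \sum_{i=1}^{n} L\big(f(x_i), y_i\big) + \lambda \, \|f\|_{\mathcal{B}}

admits a minimizer that is itself a finite-width network,

  f^{*}(x) = \sum_{j=1}^{m} c_j \, \sigma\big(\langle w_j, x \rangle + b_j\big), \qquad m \le n,

so the infinite-dimensional optimization is solved by a finite architecture. Results of this type are known for shallow networks; the paper extends them to the deep setting.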