Feature learning in neural networks and kernel machines that recursively learn features

arXiv (2023)

Abstract
Neural networks have achieved impressive results on many technological and scientific tasks. Yet, their empirical successes have outpaced our fundamental understanding of their structure and function. Identifying mechanisms driving the successes of neural networks can provide principled approaches for improving neural network performance and developing simple and effective alternatives. In this work, we isolate a key mechanism driving feature learning in fully connected neural networks by connecting neural feature learning to a statistical estimator known as the average gradient outer product. We subsequently leverage this mechanism to design Recursive Feature Machines (RFMs), which are kernel machines that learn features. We show that RFMs (1) accurately capture features learned by deep fully connected neural networks, and (2) outperform a broad spectrum of models, including neural networks, on tabular data. Furthermore, we show how RFMs shed light on recently observed deep learning phenomena including grokking, lottery tickets, simplicity biases, and spurious features. We provide a Python implementation to make our method easily accessible (https://github.com/aradha/recursive_feature_machines).
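To make the recursive loop in the abstract concrete, below is a minimal NumPy sketch of one way such a feature-learning kernel machine could look: kernel ridge regression with a Mahalanobis Laplace kernel, alternated with replacing the feature matrix M by the average gradient outer product (AGOP) of the fitted predictor. The kernel choice, bandwidth L, ridge parameter reg, and iteration count T here are illustrative assumptions, not the authors' settings; the reference implementation is in the linked repository.

import numpy as np

def mahalanobis_dists(X, Z, M):
    """Pairwise Mahalanobis distances ||x - z||_M = sqrt((x-z)^T M (x-z))."""
    XM = X @ M
    sq = (XM * X).sum(1)[:, None] - 2.0 * XM @ Z.T + ((Z @ M) * Z).sum(1)[None, :]
    return np.sqrt(np.clip(sq, 0.0, None))

def laplace_kernel(X, Z, M, L):
    """Mahalanobis Laplace kernel k(x, z) = exp(-||x - z||_M / L)."""
    return np.exp(-mahalanobis_dists(X, Z, M) / L)

def rfm(X, y, T=5, L=10.0, reg=1e-3):
    """Sketch of an RFM-style training loop (hypothetical hyperparameters).

    Alternates (1) kernel ridge regression with the current feature
    matrix M and (2) updating M to the average gradient outer product
    of the fitted predictor over the training points.
    """
    n, d = X.shape
    y = y.reshape(n, -1)                                  # (n, c) targets
    M = np.eye(d)                                         # start with identity features
    for _ in range(T):
        K = laplace_kernel(X, X, M, L)
        alpha = np.linalg.solve(K + reg * np.eye(n), y)   # ridge-regression weights
        # Analytic gradient of f(x) = sum_j alpha_j k(x, x_j):
        #   grad f(x_i) = -(1/L) M @ sum_j alpha_j k(x_i, x_j) (x_i - x_j) / ||x_i - x_j||_M
        D = mahalanobis_dists(X, X, M)
        W = K / (L * np.maximum(D, 1e-12))
        np.fill_diagonal(W, 0.0)                          # skip the non-differentiable x = z term
        diffs = X[:, None, :] - X[None, :, :]             # (n, n, d); O(n^2 d) memory
        G = -np.einsum('ij,jc,ijd->icd', W, alpha, diffs) @ M  # per-point Jacobians (n, c, d)
        M = np.einsum('icd,ice->de', G, G) / n            # AGOP update
    return alpha, M

On a toy regression where the target depends on only a few coordinates, one would expect the learned M to concentrate its mass on those coordinates, which is the sense in which the kernel machine "learns features".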