Understanding MLP-Mixer as a Wide and Sparse MLP
arXiv (2023)
Abstract
The multi-layer perceptron (MLP) is a fundamental component of deep learning, and
recent MLP-based architectures, especially the MLP-Mixer, have achieved
significant empirical success. Nevertheless, our understanding of why and how
the MLP-Mixer outperforms conventional MLPs remains largely unexplored. In this
work, we reveal that sparseness is a key mechanism underlying the MLP-Mixers.
First, the Mixers have an effective expression as a wider MLP with
Kronecker-product weights, clarifying that the Mixers efficiently embody
several sparseness properties explored in deep learning. In the case of linear
layers, the effective expression elucidates an implicit sparse regularization
caused by the model architecture and a hidden relation to Monarch matrices,
which are known as another form of sparse parameterization. Next, for
general cases, we empirically demonstrate quantitative similarities between the
Mixer and unstructured sparse-weight MLPs. Following a guiding principle
proposed by Golubeva, Neyshabur and Gur-Ari (2021), which fixes the number of
connections while increasing the width and sparsity, the Mixers achieve
improved performance.
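
As an illustration of the effective expression mentioned above, the following is a minimal NumPy sketch (not taken from the paper; the sizes and names S, C, W_token, and W_channel are illustrative assumptions) checking that a linear token-mixing step followed by a linear channel-mixing step on a token-by-channel input equals a single wider linear layer whose weight is the Kronecker product of the two small mixing matrices:

import numpy as np

rng = np.random.default_rng(0)
S, C = 4, 3                      # toy numbers of tokens and channels (assumed sizes)
X = rng.standard_normal((S, C))  # input as a token-by-channel matrix

W_token = rng.standard_normal((S, S))    # mixes information across tokens
W_channel = rng.standard_normal((C, C))  # mixes information across channels

# Mixer-style linear layer: token mixing, then channel mixing.
Y = (W_token @ X) @ W_channel.T

# Equivalent wide MLP layer: flatten X column-major (so the vec identity
# vec(A X B^T) = (B kron A) vec(X) applies) and multiply by one large weight.
W_wide = np.kron(W_channel, W_token)        # shape (S*C, S*C)
y_flat = W_wide @ X.flatten(order="F")

assert np.allclose(Y.flatten(order="F"), y_flat)

# The wide weight acts on an S*C-dimensional layer but has only S*S + C*C
# free parameters, a heavily constrained (sparse-like) parameterization.
print("effective width:", S * C, "free parameters:", S * S + C * C)

In this toy setting the Kronecker structure makes the equivalent wide layer span S*C units while carrying only S^2 + C^2 independent parameters, which is one concrete way to read the implicit sparseness the abstract refers to.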