Understanding Sparse Neural Networks from their Topology via Multipartite Graph Representations
arXiv (2023)
Abstract
Pruning-at-Initialization (PaI) algorithms provide Sparse Neural Networks
(SNNs) which are computationally more efficient than their dense counterparts,
and try to avoid performance degradation. While much emphasis has been directed
towards how to prune, we still do not know what topological
metrics of the SNNs characterize good performance. From prior work, we
have layer-wise topological metrics by which SNN performance can be predicted:
the Ramanujan-based metrics. To exploit these metrics, proper ways to represent
network layers via Graph Encodings (GEs) are needed, with Bipartite Graph
Encodings (BGEs) being the current de facto standard.
Nevertheless, existing BGEs neglect the impact of the inputs, and do not
characterize the SNN in an end-to-end manner. Additionally, through a
thorough study of the Ramanujan-based metrics, we find that, when paired
with BGEs, they predict performance no better than layer-wise density alone.
To close both gaps, we design a comprehensive topological analysis
for SNNs with both linear and convolutional layers, via (i) a new input-aware
Multipartite Graph Encoding (MGE) for SNNs and (ii) the design of new
end-to-end topological metrics over the MGE. With these novelties, we show the
following: (a) The proposed MGE allows the extraction of topological metrics
that predict the accuracy drop much better than metrics computed from current
input-agnostic BGEs; (b) Which metrics are important at different sparsity
levels and for different architectures; (c) A mixture of our topological
metrics can rank PaI algorithms more effectively than Ramanujan-based metrics.
The codebase is publicly available at https://github.com/eliacunegatti/mge-snn.
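To make the notion of a Bipartite Graph Encoding concrete, here is a minimal sketch (not taken from the paper's codebase) of how a pruned linear layer's binary weight mask can be read as a bipartite edge list between input and output neurons, alongside the layer-wise density baseline the abstract mentions. The function names and node labels are illustrative assumptions, not the authors' API.

```python
import numpy as np

def bipartite_edges(mask):
    """Hypothetical BGE sketch: encode a pruned linear layer's binary
    weight mask (out_features x in_features) as a bipartite edge list
    between input nodes i<j> and output nodes o<k>.
    """
    rows, cols = np.nonzero(mask)  # surviving (output, input) connections
    return [(f"i{c}", f"o{r}") for r, c in zip(rows, cols)]

def layer_density(mask):
    """Fraction of surviving weights: the simple layer-wise baseline
    that Ramanujan-based metrics are compared against."""
    return mask.sum() / mask.size

# A toy 2x3 mask: 3 of 6 weights survive pruning.
mask = np.array([[1, 0, 1],
                 [0, 0, 1]])

print(bipartite_edges(mask))  # [('i0', 'o0'), ('i2', 'o0'), ('i2', 'o1')]
print(layer_density(mask))    # 0.5
```

A layer-wise encoding like this is input-agnostic, which is exactly the limitation the paper's input-aware Multipartite Graph Encoding is designed to address by representing all layers jointly.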