LNPT: Label-free Network Pruning and Training
arXiv (2024)
Abstract
Pruning before training enables the deployment of neural networks on smart
devices. By retaining weights conducive to generalization, pruned networks can
be accommodated on resource-constrained smart devices. It is commonly held that
the distance on weight norms between the initialized and the fully-trained
networks correlates with generalization performance. However, as we have
uncovered, this metric is inconsistent with generalization during training,
which poses an obstacle to determining pruned structures on smart devices
in advance. In this paper, we introduce the concept of the learning
gap, emphasizing its accurate correlation with generalization. Experiments show
that the learning gap, in the form of feature maps from the penultimate layer
of networks, aligns with variations in generalization performance. We propose a
novel learning framework, LNPT, which enables mature networks on the cloud to
provide online guidance for network pruning and learning on smart devices with
unlabeled data. Our results demonstrate the superiority of this approach over
supervised training.
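
To make the idea of label-free, feature-level guidance concrete, below is a minimal sketch (not the paper's exact method): a teacher network, standing in for the mature cloud model, provides penultimate-layer feature maps that a pruned on-device student matches on unlabeled inputs. The network definitions, dimensions, and the plain MSE feature loss are illustrative assumptions.

```python
# Hypothetical sketch: label-free guidance via penultimate-layer feature maps.
# Architectures, dimensions, and the MSE feature loss are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Toy network that exposes its penultimate-layer features."""
    def __init__(self, feat_dim=64, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        feats = self.backbone(x)           # penultimate-layer feature map
        return feats, self.head(feats)

teacher = SmallNet().eval()                # stands in for the mature cloud network
student = SmallNet()                       # stands in for the pruned on-device network
optimizer = torch.optim.SGD(student.parameters(), lr=0.01)

unlabeled_batch = torch.randn(8, 3, 32, 32)  # unlabeled data available on the device

with torch.no_grad():
    t_feats, _ = teacher(unlabeled_batch)
s_feats, _ = student(unlabeled_batch)

# Feature-level guidance loss: no labels are required, only the teacher's features.
loss = F.mse_loss(s_feats, t_feats)
loss.backward()
optimizer.step()
```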