Knowledge Translation: A New Pathway for Model Compression
CoRR (2024)
Abstract
Deep learning has witnessed significant advancements in recent years at the
cost of increasing training, inference, and model storage overhead. Existing
model compression methods strive to reduce the number of model parameters
while maintaining high accuracy, but they inevitably require either
re-training the compressed model or imposing architectural constraints. To
overcome these limitations, this paper presents a novel framework, termed
Knowledge Translation (KT), wherein a “translation” model
is trained to receive the parameters of a larger model and generate compressed
parameters. The concept of KT draws inspiration from language translation,
in which neural networks translate between different languages while
preserving meaning. Accordingly, we explore the potential of neural networks
to convert between models of different sizes while preserving their
functionality. We propose a comprehensive framework for KT, introduce data
augmentation strategies to enhance model performance under limited training
data, and demonstrate the feasibility of KT on the MNIST dataset. Code is
available at .
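To make the core idea concrete, the sketch below illustrates Knowledge Translation in PyTorch. It is not the authors' implementation: the large and small architectures, the MLP translator, and the functional-consistency training loss are all hypothetical stand-ins. It only shows how a "translation" network could map the flattened parameters of a larger model to parameters for a smaller one while encouraging the two models to agree on the same inputs.

```python
# Minimal, hypothetical sketch of Knowledge Translation (not the paper's code):
# an MLP "translator" maps the flattened parameters of a large model to
# parameters for a small model, trained so both models produce similar outputs.
import torch
import torch.nn as nn
from torch.func import functional_call

torch.manual_seed(0)

# Hypothetical source (large) and target (small) architectures.
large = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 10))
small = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 10))

def flat_params(model):
    """Concatenate a model's parameters into one flat vector."""
    return torch.cat([p.detach().flatten() for p in model.parameters()])

n_large = flat_params(large).numel()
n_small = sum(p.numel() for p in small.parameters())

# The "translation" model: large parameter vector -> small parameter vector.
translator = nn.Sequential(
    nn.Linear(n_large, 256), nn.ReLU(), nn.Linear(256, n_small)
)

def unflatten(vec, reference):
    """Split a flat vector into a parameter dict shaped like `reference`."""
    out, i = {}, 0
    for name, p in reference.named_parameters():
        out[name] = vec[i : i + p.numel()].view_as(p)
        i += p.numel()
    return out

opt = torch.optim.Adam(translator.parameters(), lr=1e-3)
for step in range(200):
    x = torch.randn(32, 16)                     # stand-in input batch
    target = large(x).detach()                  # behaviour to preserve
    small_vec = translator(flat_params(large))  # translated parameters
    # Run the small model with the generated parameters (no re-training of
    # the small model itself; gradients flow back into the translator).
    pred = functional_call(small, unflatten(small_vec, small), (x,))
    loss = nn.functional.mse_loss(pred, target)  # assumed consistency loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the paper's setting, the translator would presumably be trained over many (large, small) model pairs rather than a single fixed pair; the output-matching loss above is only one plausible way to express "preserving functionality."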