A unified pruning framework for vision transformers

arXiv (2023)

Citations: 14
Abstract
Conclusion. In this study, we proposed a novel method, UP-ViTs, to prune ViTs in a unified manner. Our framework can prune all components of a ViT and its variants, maintains the model's structure, and generalizes well to downstream tasks. UP-ViTs achieve state-of-the-art results when pruning various ViT backbones. Moreover, we studied the transferability of the compressed models and found that our UP-ViTs also outperform the original ViTs. We further extended our method to NLP tasks and obtained more efficient transformer models. Please refer to the appendix for more details.
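To make the idea of structure-preserving pruning concrete, the sketch below shows one common form of structured pruning applied to a transformer MLP block: entire hidden units are removed by an importance score, so the token embedding dimension (and hence the block's interface to the rest of the network) is unchanged. This is a minimal illustration of the general technique, not the paper's actual UP-ViTs algorithm; the function names and the L2-norm importance criterion are assumptions for the example.

```python
import numpy as np

def prune_mlp_block(w1, b1, w2, keep_ratio=0.5):
    """Structured pruning of a transformer MLP block (hypothetical sketch).

    w1: (d, h) first linear layer, b1: (h,) its bias,
    w2: (h, d) second linear layer.
    Removes whole hidden units, ranked by an L2-norm importance score,
    so the input/output embedding dimension d is preserved.
    """
    h = w1.shape[1]
    k = max(1, int(h * keep_ratio))
    # Importance of each hidden unit: norm of its incoming and outgoing weights.
    scores = np.linalg.norm(w1, axis=0) + np.linalg.norm(w2, axis=1)
    keep = np.sort(np.argsort(scores)[-k:])  # indices of the k most important units
    return w1[:, keep], b1[keep], w2[keep, :]

def mlp_forward(x, w1, b1, w2):
    # ReLU stands in for GELU to keep the sketch short.
    return np.maximum(x @ w1 + b1, 0.0) @ w2

rng = np.random.default_rng(0)
d, h = 8, 32
w1 = rng.normal(size=(d, h))
b1 = rng.normal(size=h)
w2 = rng.normal(size=(h, d))

pw1, pb1, pw2 = prune_mlp_block(w1, b1, w2, keep_ratio=0.25)
x = rng.normal(size=(4, d))
y = mlp_forward(x, pw1, pb1, pw2)
print(pw1.shape, y.shape)  # (8, 8) (4, 8)
```

Because pruning acts on whole hidden units rather than individual weights, the pruned block is a smaller dense layer pair of the same type, which is what allows a compressed backbone to be fine-tuned or transferred to downstream tasks without architectural changes.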
Keywords
unified pruning framework, transformers, vision