Block change learning for knowledge distillation

Information Sciences (2020)

Abstract
Deep neural networks perform well but require high-performance hardware for use in real-world environments. Knowledge distillation is a simple method for improving the performance of a small network by using the knowledge of a large, complex network. The small and large networks are referred to as the student and teacher models, respectively. Previous knowledge distillation approaches perform well with relatively small teacher networks (20–30 layers) but poorly with large teacher networks (50 layers). Here, we propose an approach called block change learning that performs local and global knowledge distillation by changing blocks composed of layers. The method focuses on transferring knowledge without losing information from a large teacher model, as it considers intra-relationships between layers through local knowledge distillation and inter-relationships between corresponding blocks through global knowledge distillation. The results demonstrate that this approach is superior to state-of-the-art methods on feature extraction datasets (Market1501 and DukeMTMC-reID) and object classification datasets (CIFAR-100 and Caltech256). Furthermore, we show that the proposed approach outperforms a fine-tuning approach using pretrained models.
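For background, the sketch below illustrates a standard knowledge-distillation objective (Hinton-style softened-logit matching) together with a hypothetical block-wise feature-matching term; the function names, hyperparameters, and loss forms are illustrative assumptions, not the paper's exact block change learning formulation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.9):
    # Classic KD: match the student's softened predictions to the teacher's
    # softened predictions, plus a standard cross-entropy term on the labels.
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    log_probs = F.log_softmax(student_logits / temperature, dim=1)
    kd = F.kl_div(log_probs, soft_targets, reduction="batchmean") * (temperature ** 2)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

def block_feature_loss(student_feats, teacher_feats):
    # Hypothetical block-level term: align pooled, normalized features of
    # corresponding student/teacher blocks. This is only a stand-in for the
    # paper's local/global block losses, whose exact form is not given here.
    loss = 0.0
    for s, t in zip(student_feats, teacher_feats):
        s = F.adaptive_avg_pool2d(s, 1).flatten(1)
        t = F.adaptive_avg_pool2d(t, 1).flatten(1)
        loss = loss + F.mse_loss(F.normalize(s, dim=1), F.normalize(t, dim=1))
    return loss
```

In a training loop, the total loss would typically be a weighted sum of `distillation_loss` on the final logits and `block_feature_loss` on intermediate block outputs collected from both networks.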
Keywords
Knowledge distillation, Model compression, Convolutional neural network