Improve Knowledge Distillation via Label Revision and Data Selection
arXiv (2024)
Abstract
Knowledge distillation (KD) has become a widely used technique in the field
of model compression, which aims to transfer knowledge from a large teacher
model to a lightweight student model for efficient network development. In
addition to the supervision of ground truth, the vanilla KD method regards the
predictions of the teacher as soft labels to supervise the training of the
student model. Based on vanilla KD, various approaches have been developed to
further improve the performance of the student model. However, few of these
previous methods have considered the reliability of the supervision from
teacher models. Supervision from erroneous predictions may mislead the training
of the student model. This paper therefore proposes to tackle this problem from
two aspects: Label Revision to rectify the incorrect supervision and Data
Selection to select appropriate samples for distillation to reduce the impact
of erroneous supervision. In the former, we propose to rectify the teacher's
inaccurate predictions using the ground truth. In the latter, we introduce a
data selection technique to choose suitable training samples to be supervised
by the teacher, thereby reducing the impact of incorrect predictions to some
extent. Experimental results demonstrate the effectiveness of our proposed
method, and show that it can be combined with other distillation approaches,
improving their performance.
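Below is a rough sketch of how the two components described in the abstract could be realized in a standard PyTorch knowledge-distillation setup. It is an illustrative assumption, not the paper's exact formulation: the function names, the confidence-based selection rule, the blending weight alpha, and the temperature are all placeholders chosen for the example.

import torch
import torch.nn.functional as F

def revised_soft_labels(teacher_logits, targets, alpha=0.5, temperature=4.0):
    # Label Revision (sketch): where the teacher's top-1 prediction disagrees
    # with the ground truth, blend the teacher's softened distribution with
    # the one-hot label so the incorrect supervision is rectified.
    num_classes = teacher_logits.size(1)
    soft = F.softmax(teacher_logits / temperature, dim=1)
    one_hot = F.one_hot(targets, num_classes).float()
    wrong = (teacher_logits.argmax(dim=1) != targets).float().unsqueeze(1)
    revised = alpha * one_hot + (1.0 - alpha) * soft
    return wrong * revised + (1.0 - wrong) * soft

def kd_loss(student_logits, teacher_logits, targets,
            temperature=4.0, kd_weight=0.5, conf_threshold=0.6):
    # Supervision from the ground truth is always applied.
    ce = F.cross_entropy(student_logits, targets)

    # Distillation term against the revised soft labels.
    soft_targets = revised_soft_labels(teacher_logits, targets,
                                       temperature=temperature)
    log_student = F.log_softmax(student_logits / temperature, dim=1)
    kd_per_sample = F.kl_div(log_student, soft_targets,
                             reduction="none").sum(dim=1)

    # Data Selection (stand-in criterion): keep the distillation term only
    # for samples on which the teacher is sufficiently confident, so that
    # unreliable predictions contribute less to the student's training.
    confidence = F.softmax(teacher_logits, dim=1).max(dim=1).values
    keep = (confidence >= conf_threshold).float()
    kd = (kd_per_sample * keep).sum() / keep.sum().clamp(min=1.0)

    return (1.0 - kd_weight) * ce + kd_weight * (temperature ** 2) * kd

# Example usage with random tensors (batch of 8, 100 classes):
student_logits = torch.randn(8, 100)
teacher_logits = torch.randn(8, 100)
targets = torch.randint(0, 100, (8,))
loss = kd_loss(student_logits, teacher_logits, targets)

The sketch follows the usual vanilla-KD structure (cross-entropy plus a temperature-scaled KL term); the paper's actual revision and selection strategies may differ.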