Rethinking Intermediate Layers design in Knowledge Distillation for Kidney and Liver Tumor Segmentation
CoRR (2023)
Abstract
Knowledge distillation (KD) has demonstrated remarkable success across various
domains, but its application to medical imaging tasks, such as kidney and liver
tumor segmentation, has encountered challenges. Many existing KD methods are
not specifically tailored for these tasks. Moreover, prevalent KD methods often
lack careful consideration of what knowledge to distill and from where in the
teacher to transfer it to the student. This oversight may lead to issues like the
accumulation of training bias within shallower student layers, potentially
compromising the effectiveness of KD. To address these challenges, we propose
Hierarchical Layer-selective Feedback Distillation (HLFD). HLFD strategically
distills knowledge from a combination of middle layers to earlier layers and
transfers final layer knowledge to intermediate layers at both the feature and
pixel levels. This design allows the model to learn higher-quality
representations from earlier layers, resulting in a robust and compact student
model. Extensive quantitative evaluations reveal that HLFD outperforms existing
methods by a significant margin. For example, in the kidney segmentation task,
HLFD surpasses the student model (without KD) by over 10 percentage points, significantly
improving its focus on tumor-specific features. From a qualitative standpoint,
the student model trained using HLFD excels at suppressing irrelevant
information and can focus sharply on tumor-specific details, which opens a new
pathway for more efficient and accurate diagnostic tools.
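The abstract's core idea, deeper layers supervising shallower ones at both the feature and pixel levels, can be sketched as a distillation loss. The sketch below is a hypothetical illustration, not the paper's actual HLFD objective: it combines a feature-level MSE term with a pixel-level KL term between channel-wise soft distributions, and assumes the two feature maps already share the same shape (in practice a projection layer would align them).

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def feedback_distill_loss(shallow_feats, deep_feats):
    """Hypothetical layer-selective feedback loss (illustrative only).

    Deeper-layer features supervise shallower-layer features:
    - feature level: mean squared error between the raw maps;
    - pixel level: KL divergence between per-pixel channel
      distributions (softmax over the channel axis, axis 0).

    Both inputs are arrays of shape (channels, height, width).
    """
    # Feature-level term: match the raw activation maps.
    feat_loss = np.mean((shallow_feats - deep_feats) ** 2)

    # Pixel-level term: match per-pixel channel distributions.
    p_s = softmax(shallow_feats, axis=0)
    p_t = softmax(deep_feats, axis=0)
    pix_loss = np.mean(np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=0))

    return feat_loss + pix_loss
```

When the shallow features already match the deep ones, both terms vanish; otherwise each term is non-negative, so the loss only rewards the student's earlier layers for reproducing the deeper representation.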