Dual Expert Distillation Network for Generalized Zero-Shot Learning
arXiv (2024)
Abstract
Zero-shot learning has consistently yielded remarkable progress via modeling
nuanced one-to-one visual-attribute correlation. Existing studies resort to
refining a uniform mapping function to align and correlate the sample regions
and subattributes, ignoring two crucial issues: 1) the inherent asymmetry of
attributes; and 2) the unutilized channel information. This paper addresses
these issues by introducing a simple yet effective approach, dubbed Dual Expert
Distillation Network (DEDN), where two experts are dedicated to coarse- and
fine-grained visual-attribute modeling, respectively. Concretely, one coarse
expert, namely cExp, has a complete perceptual scope to coordinate
visual-attribute similarity metrics across dimensions, while the other, a fine
expert, namely fExp, consists of multiple specialized subnetworks, each
corresponding to an exclusive set of attributes. The two experts cooperatively
distill knowledge from each other to reach a mutual agreement during training. Meanwhile,
we further equip DEDN with a newly designed backbone network, i.e., Dual
Attention Network (DAN), which incorporates both region and channel attention
information to fully exploit and leverage visual semantic knowledge.
Experiments on various benchmark datasets establish a new state of the art.
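The abstract does not spell out the distillation objective, but the described mutual agreement between the two experts can be sketched with a common choice: a symmetric KL divergence between the temperature-softened outputs of cExp and fExp. The function names, the symmetric-KL form, and the temperature value below are illustrative assumptions, not the paper's actual loss.

```python
import math

def softmax(scores, temperature=1.0):
    """Turn raw similarity scores into a probability distribution,
    softened by a temperature (higher T = flatter distribution)."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions over the same support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def mutual_distillation_loss(coarse_scores, fine_scores, temperature=2.0):
    """Symmetric distillation term: each expert is pulled toward the
    other's softened prediction, so they converge to a mutual agreement.
    (Illustrative form; the paper's exact objective may differ.)"""
    p = softmax(coarse_scores, temperature)  # cExp's softened output
    q = softmax(fine_scores, temperature)    # fExp's softened output
    return kl_divergence(p, q) + kl_divergence(q, p)

# Toy visual-attribute similarity scores from the two experts.
coarse = [2.0, 0.5, -1.0]
fine = [1.5, 0.8, -0.5]
loss = mutual_distillation_loss(coarse, fine)
```

The loss is zero exactly when the two experts already agree, and grows as their softened predictions diverge, which matches the cooperative-distillation behavior the abstract describes.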