Towards Better Understanding Meta-learning Methods through Multi-task Representation Learning Theory

arXiv (Cornell University), 2020

Abstract
In this paper, we consider the framework of multi-task representation (MTR) learning, where the goal is to use source tasks to learn a representation that reduces the sample complexity of solving a target task. We start by reviewing recent advances in MTR theory and show that they can provide novel insights for popular meta-learning algorithms when analyzed within this framework. In particular, we highlight a fundamental difference between gradient-based and metric-based algorithms and put forward a theoretical analysis to explain it. Finally, we use the derived insights to improve the generalization capacity of meta-learning methods via a new spectral-based regularization term and confirm its effectiveness through experimental studies on classic few-shot classification and continual learning benchmarks. To the best of our knowledge, this is the first contribution that puts the most recent learning bounds of MTR theory into practice for training popular meta-learning methods.
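The abstract does not specify the form of the spectral-based regularization term. As an illustrative assumption only: MTR learning bounds typically favor well-conditioned representations, so one natural instantiation penalizes the spread of singular values of a batch's feature matrix. The sketch below is hypothetical, not the authors' method; `spectral_regularizer`, `encoder`, `lam`, and `episodic_loss` are all assumed names.

```python
import torch

def spectral_regularizer(features: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Penalize ill-conditioned representations via the ratio of the largest
    to the smallest singular value of a feature matrix (n_samples x dim).

    A ratio near 1 means the representation spreads information evenly
    across directions, a property MTR-style bounds associate with lower
    target-task sample complexity. (Hypothetical sketch, not the paper's
    exact regularizer.)
    """
    # torch.linalg.svdvals returns singular values in descending order.
    s = torch.linalg.svdvals(features)
    return s[0] / (s[-1] + eps)

# Assumed usage inside a meta-training step:
#   feats = encoder(support_images)          # (n_samples, dim) embeddings
#   loss = episodic_loss + lam * spectral_regularizer(feats)
#   loss.backward()
```

Penalizing the condition number rather than the raw spectral norm keeps the regularizer scale-invariant, so it constrains the shape of the representation without shrinking its magnitude; this is one plausible design choice, under the stated assumptions.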
Keywords
better understanding, meta-learning, multi-task