Unleashing the Power of Multi-Task Learning: A Comprehensive Survey Spanning Traditional, Deep, and Pretrained Foundation Model Eras
arxiv(2024)
Abstract
Multi-task learning (MTL) is a learning paradigm that effectively leverages
both task-specific and shared information to address multiple related tasks
simultaneously. In contrast to single-task learning (STL), MTL offers a suite
of benefits that enhance both the training process and inference efficiency.
MTL's key advantages encompass
streamlined model architecture, performance enhancement, and cross-domain
generalizability. Over the past twenty years, MTL has become widely recognized
as a flexible and effective approach in various fields, including computer
vision (CV), natural language processing (NLP), recommendation systems,
disease prognosis and diagnosis, and robotics. This
survey provides a comprehensive overview of the evolution of MTL, encompassing
the technical aspects of cutting-edge methods from traditional approaches to
deep learning and the latest trend of pretrained foundation models. Our survey
methodically categorizes MTL techniques into five key areas: regularization,
relationship learning, feature propagation, optimization, and pre-training.
This categorization not only chronologically outlines the development of MTL
but also dives into various specialized strategies within each category.
Furthermore, the survey reveals how MTL evolves from handling a fixed set
of tasks to embracing a more flexible approach free from task or modality
constraints. It explores the concepts of task-promptable and task-agnostic
training, along with the capacity for zero-shot learning (ZSL), which
unleashes the untapped
potential of this historically coveted learning paradigm. Overall, we hope this
survey provides the research community with a comprehensive overview of the
advancements in MTL from its inception in 1997 to the present in 2023. We
address present challenges and look ahead to future possibilities, shedding
light on the opportunities and potential avenues for MTL research in a broad
manner. This project is publicly available at
https://github.com/junfish/Awesome-Multitask-Learning.
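The core idea the abstract describes, sharing information across related tasks while keeping task-specific parts separate, is often realized as hard parameter sharing: one shared representation feeding several task-specific heads. The following toy sketch illustrates that setup only; the synthetic data, dimensions, and plain-NumPy training loop are illustrative assumptions, not a method from the survey.

```python
import numpy as np

# Illustrative hard parameter sharing: two regression tasks share one
# linear representation W; each task has its own head (v1, v2).
# All sizes and data below are synthetic.
rng = np.random.default_rng(0)

n, d, h = 200, 8, 4                  # samples, input dim, shared hidden dim
X = rng.normal(size=(n, d))
W_true = rng.normal(size=(d, h))
Z = X @ W_true                       # shared latent structure in the targets
y1 = Z @ rng.normal(size=h)          # task-1 targets
y2 = Z @ rng.normal(size=h)          # task-2 targets

W = rng.normal(size=(d, h)) * 0.1    # shared layer (learned)
v1 = np.zeros(h)                     # task-specific heads (learned)
v2 = np.zeros(h)

def losses():
    H = X @ W
    return ((H @ v1 - y1) ** 2).mean(), ((H @ v2 - y2) ** 2).mean()

init_l1, init_l2 = losses()
lr = 0.01
for _ in range(500):
    H = X @ W
    e1 = H @ v1 - y1                 # per-task residuals
    e2 = H @ v2 - y2
    # gradients of the summed mean-squared losses
    g_v1 = 2 * H.T @ e1 / n
    g_v2 = 2 * H.T @ e2 / n
    g_W = 2 * (X.T @ np.outer(e1, v1) + X.T @ np.outer(e2, v2)) / n
    v1 -= lr * g_v1                  # each head sees only its own task
    v2 -= lr * g_v2
    W -= lr * g_W                    # shared weights get both tasks' gradients

l1, l2 = losses()
```

The key design point is visible in the last update: the shared weights `W` accumulate gradients from both tasks, which is the mechanism by which related tasks regularize one another.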