Taming Pre-trained LLMs for Generalised Time Series Forecasting via Cross-modal Knowledge Distillation
arXiv (2024)
Abstract
Multivariate time series forecasting has recently gained great success with
the rapid growth of deep learning models. However, existing approaches usually
train models from scratch using limited temporal data, preventing their
generalization. Recently, with the surge of the Large Language Models (LLMs),
several works have attempted to introduce LLMs into time series forecasting.
Despite promising results, these methods directly take time series as the input
to LLMs, ignoring the inherent modality gap between temporal and text data. In
this work, we propose a novel Large Language Model and time series alignment
framework, dubbed LLaTA, to fully unleash the potential of LLMs for the time
series forecasting task. Based on cross-modal knowledge distillation, the
proposed method exploits both input-agnostic static knowledge and
input-dependent dynamic knowledge in pre-trained LLMs. In this way, it empowers
the forecasting model with favorable performance as well as strong
generalization abilities. Extensive experiments demonstrate that the proposed
method establishes a new state of the art for both long- and short-term forecasting.
Code is available at .
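The core idea of cross-modal feature distillation can be illustrated with a minimal sketch. All names here (`distill_loss`, the feature shapes) are illustrative assumptions, not the paper's actual implementation: a temporal "student" branch is trained so that its hidden features match those produced by a frozen pre-trained LLM "teacher" branch on the aligned input.

```python
import numpy as np

def distill_loss(student_feat, teacher_feat):
    """Feature-matching distillation loss: mean-squared error between the
    temporal student's features and the frozen LLM teacher's features."""
    return float(np.mean((student_feat - teacher_feat) ** 2))

# Toy batch of 4 samples with 16-dimensional hidden features.
rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 16))                   # frozen LLM branch output
student = teacher + 0.1 * rng.normal(size=(4, 16))   # temporal branch output

loss = distill_loss(student, teacher)  # small but nonzero for a near-match
```

In practice the gradient of such a loss flows only into the student branch (the teacher is frozen), which is what transfers the LLM's knowledge without fine-tuning the full language model.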