Unified Language-Vision Pretraining in LLM with Dynamic Discrete Visual Tokenization
arXiv, 2023
Abstract
The remarkable advances of large language models (LLMs) have inspired
researchers to transfer their extraordinary reasoning capability to data
spanning both vision and language. However, the prevailing approaches
primarily regard the visual input as a prompt and focus exclusively on
optimizing the text generation process conditioned on visual content with a
frozen LLM. Such inequitable treatment of vision and language heavily
constrains the model's potential. In this paper, we break through this
limitation by representing both vision and language in a unified form.
Specifically, we introduce a well-designed visual tokenizer that translates a
non-linguistic image into a sequence of discrete tokens, like a foreign
language that the LLM can read. The resulting visual tokens carry high-level
semantics comparable to words and support a dynamic sequence length that
varies with the image content. Equipped with this tokenizer, the presented
foundation model, called LaVIT, can handle images and text uniformly under
the same generative learning paradigm. This unification empowers LaVIT to
serve as an impressive generalist interface that understands and generates
multi-modal content simultaneously. Extensive experiments further show that
it outperforms existing models by a large margin on a wide range of
vision-language tasks. Our code and models are available at
https://github.com/jy0205/LaVIT.
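To make the idea concrete, below is a minimal PyTorch sketch of how a discrete, dynamic-length visual tokenizer could feed an LLM's token stream. All names and design choices here (the scoring-based `TokenSelector`, the codebook size, the id-offset scheme) are illustrative assumptions for exposition, not the actual LaVIT implementation.

```python
# Illustrative sketch only: hypothetical module names and hyperparameters,
# not the actual LaVIT architecture.
import torch
import torch.nn as nn


class TokenSelector(nn.Module):
    """Keeps only patches scored as informative, so the number of visual
    tokens varies per image (the 'dynamic sequence length' property)."""

    def __init__(self, feat_dim=768):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, patch_feats, threshold=0.5):
        # patch_feats: (num_patches, feat_dim)
        keep = torch.sigmoid(self.score(patch_feats)).squeeze(-1) > threshold
        return patch_feats[keep]


class VisualTokenizer(nn.Module):
    """Quantizes the selected patch features into discrete token ids
    via nearest-neighbor lookup in a learned codebook."""

    def __init__(self, feat_dim=768, codebook_size=16384):
        super().__init__()
        self.selector = TokenSelector(feat_dim)
        self.codebook = nn.Embedding(codebook_size, feat_dim)

    def forward(self, patch_feats):
        selected = self.selector(patch_feats)                 # (P', feat_dim)
        dists = torch.cdist(selected, self.codebook.weight)   # (P', K)
        return dists.argmin(dim=-1)                           # (P',) discrete ids


def build_unified_sequence(visual_ids, text_ids, text_vocab_size):
    """Offsets visual ids past the text vocabulary so both modalities share
    one token space, then concatenates them into a single sequence that a
    standard next-token-prediction objective can train on."""
    return torch.cat([visual_ids + text_vocab_size, text_ids], dim=0)


if __name__ == "__main__":
    tokenizer = VisualTokenizer()
    patch_feats = torch.randn(49, 768)         # e.g., ViT patch features of one image
    text_ids = torch.tensor([101, 2054, 102])  # placeholder text token ids
    seq = build_unified_sequence(tokenizer(patch_feats), text_ids,
                                 text_vocab_size=32000)
    print(seq.shape)  # length varies with how many patches were kept
```

Because both modalities end up as ids in a single shared vocabulary, pretraining reduces to ordinary autoregressive modeling over the mixed sequence, which is the unification the abstract describes.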
Keywords
Vision-Language Learning, Large Language Model, Pretraining