HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models
arXiv (2024)
Abstract
Recent advancements indicate that scaling up Multimodal Large Language Models
(MLLMs) effectively enhances performance on downstream multimodal tasks. The
prevailing MLLM paradigm, e.g., LLaVA, transforms visual features into
text-like tokens using a static vision-language mapper, thereby enabling a
static LLM to comprehend visual information through visual instruction
tuning. Although promising, this static tuning strategy [Static tuning
refers to a trained model with fixed parameters.], which shares the same
parameters across tasks, may constrain performance on different downstream
multimodal tasks. In light of this, we introduce HyperLLaVA, which
adaptively tunes the projector and LLM parameters in conjunction with a
dynamic visual expert and a dynamic language expert, respectively. These
experts are derived from HyperNetworks, which generate adaptive parameter
shifts from visual and language guidance, enabling dynamic projector and
LLM modeling across the two-stage training.
Our experiments demonstrate that our solution significantly surpasses LLaVA
on existing MLLM benchmarks, including MME, MMBench, SEED-Bench, and
LLaVA-Bench. [Our project is available at
https://github.com/DCDmllm/HyperLLaVA.]
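
To make the dynamic-expert idea concrete, below is a minimal PyTorch sketch of a hypernetwork that maps a pooled visual guidance vector to a low-rank parameter shift applied on top of a static linear projector. The module names (`VisualExpert`, `DynamicProjector`), the low-rank parameterization, the hidden size, and the mean-pooled guidance are illustrative assumptions, not the authors' implementation; see the repository above for the official code.

```python
# Sketch of a hypernetwork-generated parameter shift for the vision-language
# projector. Dimensions and module names are assumptions for illustration.
import torch
import torch.nn as nn


class VisualExpert(nn.Module):
    """Hypernetwork mapping a visual guidance vector to a low-rank weight
    shift (delta_W = B @ A) for a static linear projector."""

    def __init__(self, guidance_dim: int, in_dim: int, out_dim: int, rank: int = 8):
        super().__init__()
        self.in_dim, self.out_dim, self.rank = in_dim, out_dim, rank
        hidden = 256  # assumed hypernetwork hidden size
        self.hyper = nn.Sequential(
            nn.Linear(guidance_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, rank * (in_dim + out_dim)),
        )

    def forward(self, guidance: torch.Tensor) -> torch.Tensor:
        # guidance: (batch, guidance_dim) -> per-sample shift (batch, out_dim, in_dim)
        params = self.hyper(guidance)
        a, b = params.split([self.rank * self.in_dim, self.rank * self.out_dim], dim=-1)
        A = a.view(-1, self.rank, self.in_dim)
        B = b.view(-1, self.out_dim, self.rank)
        return torch.bmm(B, A)


class DynamicProjector(nn.Module):
    """Static linear projector whose weights are shifted per sample by the expert."""

    def __init__(self, vis_dim: int, llm_dim: int, guidance_dim: int):
        super().__init__()
        self.static = nn.Linear(vis_dim, llm_dim)
        self.expert = VisualExpert(guidance_dim, vis_dim, llm_dim)

    def forward(self, vis_tokens: torch.Tensor) -> torch.Tensor:
        # vis_tokens: (batch, num_tokens, vis_dim)
        guidance = vis_tokens.mean(dim=1)        # pooled visual guidance (assumption)
        delta_w = self.expert(guidance)          # (batch, llm_dim, vis_dim)
        base = self.static(vis_tokens)           # static mapping shared by all inputs
        shift = torch.einsum("bov,btv->bto", delta_w, vis_tokens)
        return base + shift                      # input-conditioned projection


if __name__ == "__main__":
    proj = DynamicProjector(vis_dim=1024, llm_dim=4096, guidance_dim=1024)
    tokens = torch.randn(2, 576, 1024)           # e.g., ViT patch tokens
    print(proj(tokens).shape)                    # torch.Size([2, 576, 4096])
```

The same pattern would apply to the language expert, with language-derived guidance shifting selected LLM parameters instead of the projector.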